Data Integration
-
Problem adding McAfee ePo server via Syslog
We configured our McAfee ePO (5.10) server to send its logs to a syslog server and configured it in the LP accordingly. Yet, when using the “Test Syslog” Feature in McAfee ePO, the test failed. Nonetheless, we are receiving logs from the server, but they only contain gibberish.
LP raw logs
As far as I can tell, this is not a problem with normalization, since a tcpdump also shows that the log payload is not human-readable.
tcpdump
I already tried changing the charset in the log collection policy from utf_8 to iso8859_15 and ascii, to no avail.
I found following McAfee ( KB87927 ) document, which says:
ePO syslog forwarding only supports the TCP protocol, and requires Transport Layer Security (TLS) . Specifically, it supports receivers following RFC 5424 and RFC 5425 , which is known as syslog-ng . You do not need to import the certificate used by the syslog receiver into ePO. As long as the certificate is valid, ePO accepts it. Self-signed certificates are supported and are commonly used for this purpose.
So my current guess is that the test connection failed because ePO expects the LP to encrypt the traffic, which it does not do. Yet ePO still started sending logs to the LP, presumably encrypted (but which certificate does it use?), hence the gibberish.
Hence my question: did anyone manage to successfully retrieve usable logs from a McAfee ePO server using syslog, or does anyone have a suggestion as to what is wrong with my configuration?
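To test the TLS theory, one quick check is to point ePO's syslog target at a throwaway listener and look at the first bytes it sends: a TLS handshake record always starts with content type 0x16. A minimal Python sketch (the port is a placeholder; run it on a host/port ePO can reach):

```python
import socket

def looks_like_tls(first_bytes: bytes) -> bool:
    # A TLS record begins with content type 0x16 (handshake)
    # followed by the major protocol version byte 0x03.
    return len(first_bytes) >= 2 and first_bytes[0] == 0x16 and first_bytes[1] == 0x03

def probe_sender(port: int = 6514) -> None:
    # Accept one connection and classify the first bytes the sender transmits.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", port))
        srv.listen(1)
        conn, addr = srv.accept()
        with conn:
            data = conn.recv(16)
            if looks_like_tls(data):
                print(f"{addr[0]} is attempting a TLS handshake")
            else:
                print(f"{addr[0]} sent plaintext: {data!r}")
```

If the first byte is 0x16, ePO is indeed negotiating TLS, and a plain syslog collector will store the handshake bytes as gibberish; you would then need an RFC 5425 (TLS-capable) receiver in front of the LP.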
-
PaloAlto PanOS 10 - Logs not normalized
Hello,
Since we replaced the PaloAlto firewall devices a couple of days ago for one of our customers (the old ones were running PanOS 9.1.7, the new ones are on 10.1.4), none of the logs coming from the firewalls are normalized anymore (there are thousands of logs in the repo, but the search query norm_id="*" returns no results).
We are using the same policies (collection, normalization etc.) as before, and the firewall admin says that they just migrated the configuration from the old to the new devices and cannot see any changes in the log configuration settings.
I already restarted all normalizer services, even rebooted the LP and completely recreated the device configuration.
We are using the latest (5.2.0) PaloAlto Application plugin on LogPoint 6.12.2, and its details clearly state that PanOS 10 is supported ( Palo Alto Network Firewall – ServiceDesk # LogPoint ). Looking at the raw logs, I cannot see any difference in the log format between PanOS 9 and 10. However, I also tried adding the “PaloAltoCEFCompiledNormalizer” to the normalization policy (it “only” included the PaloAltoNetworkFirewallCompiledNormalizer), but nothing helped.
Does anyone have any thoughts on what might be the issue, or what else I can check before I open a support ticket? Is there any way to debug the normalization process on the LogPoint CLI?
Regards
Andre
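One way to compare the PanOS 9 and 10 output objectively is to split the raw log bodies as CSV and diff the field counts and positions between old and new devices. A small sketch; the assumption that the log type sits in the fourth field reflects the usual PAN-OS layout, and the sample line is hypothetical, so verify against your own raw logs:

```python
import csv
import io

def panos_fields(raw_body: str) -> list:
    # PAN-OS log bodies are comma-separated; quoted fields may contain
    # commas, so use a CSV reader instead of a plain split.
    return next(csv.reader(io.StringIO(raw_body)))

def panos_type(raw_body: str) -> str:
    # The fourth field usually carries the log type (TRAFFIC, THREAT, SYSTEM, ...).
    fields = panos_fields(raw_body)
    return fields[3] if len(fields) > 3 else "UNKNOWN"
```

Running both a 9.1 and a 10.1 sample through panos_fields and comparing lengths and positions will show whether PAN-OS 10 added or moved columns in a way the normalizer signatures no longer match.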
-
Windows SCCM send logs to LogPoint
Hi!
I’m curious how to collect logs from SCCM: logs related to endpoint protection, virus alarms, quarantined threats etc.
I found out that nxlog provides a configuration file for this ( Microsoft System Center Configuration Manager :: NXLog Documentation ). However, the configuration file is missing some parts, for example an <Output out_syslog> block to point out the syslog destination.
Does anyone have any experience with this? Thankful for replies.
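For the missing output block, a minimal sketch of a syslog output in nxlog, assuming the LogPoint collector listens on TCP 514 and that the NXLog SCCM configuration defines an input (here called in_sccm as a placeholder) that the route can reference:

```
<Extension syslog>
    Module  xm_syslog
</Extension>

<Output out_syslog>
    # Host is a placeholder for the LogPoint collector address
    Module  om_tcp
    Host    192.0.2.10
    Port    514
    Exec    to_syslog_bsd();
</Output>

<Route sccm_to_logpoint>
    Path    in_sccm => out_syslog
</Route>
```

The input name in the Path directive has to match whatever the NXLog-provided SCCM configuration actually calls its input block.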
-
Re-parse an event for normalization (JSON-event)
Hi !
Just an interesting question. I know that other SIEM vendors have problems with this; maybe LogPoint has a good function for it. I received a JSON event that was not normalized because no normalization package was enabled. I enabled the package after I received the event.
So to my question: is it possible to parse this event afterwards so that it gets normalized? Or do I have to wait for another event from the same log source to see if that one gets normalized?
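As far as I know there is no built-in way to re-normalize an already-stored event; one workaround is to re-send the saved raw event through the collector so it passes normalization again (at the cost of a duplicate event with a new col_ts). A sketch, assuming a UDP syslog collector at a placeholder address:

```python
import socket

def reinject(raw_log: str, collector: str = "192.0.2.10", port: int = 514) -> None:
    # Wrap the saved raw event in a minimal syslog priority frame and
    # resend it so it passes through collection and normalization again.
    msg = f"<14>{raw_log}".encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(msg, (collector, port))
```

Note that the re-sent copy gets the current collection timestamp, so the original and the duplicate will not share col_ts.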
Release of Universal Normalizer v5.0.0.
Dear All,
We are excited to announce the public release of Universal Normalizer v5.0.0.
Why is this great news?
Universal Normalizer enables you to create custom Compiled Normalizers for a range of supported log formats yourself, in just a few simple steps and with no waiting time.
The supported log formats currently include:
-
JSON
-
CSV
-
XML
-
CEF
-
LEEF
-
Key-value pair
To read more about Universal Normalizer v5.0.0, please follow the links below:
Download:
https://servicedesk.logpoint.com/hc/en-us/articles/8874831748253
Documentation:
https://docs.logpoint.com/docs/universalnormalizer/en/latest/#universal-normalizer
-
Logpoint Incidents Integrate with ServiceNow
Hi Team,
Is it possible to integrate the Logpoint Incident page with ServiceNow?
We want to check whether Logpoint has that capability.
Thanks
Satya
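Not a built-in Logpoint feature that I can confirm, but ServiceNow's standard Table API can receive incident records over REST, so a small custom forwarder is one option. A sketch; the instance name, credentials and field mapping are assumptions:

```python
import base64
import json
import urllib.request

def build_incident_request(instance: str, short_description: str, description: str):
    # Assemble the pieces of a ServiceNow Table API call that creates an
    # incident record. The instance name and field choices are placeholders.
    url = f"https://{instance}.service-now.com/api/now/table/incident"
    headers = {"Content-Type": "application/json", "Accept": "application/json"}
    body = json.dumps({"short_description": short_description,
                       "description": description})
    return url, headers, body

def send_incident(instance, user, password, short_description, description):
    # POST the record using basic authentication.
    url, headers, body = build_incident_request(instance, short_description, description)
    req = urllib.request.Request(url, data=body.encode(), headers=headers, method="POST")
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Whether Logpoint can trigger such a call natively from the Incident page is the part worth confirming with support.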
-
Tagging devices - criticality
Hi Team,
Can we tag device criticality in Logpoint?
We are looking to create notifications for critical and high-severity devices.
-
Windows Server DNS Query Log
Hi all,
I have configured my Windows Server 2022 DNS server to log DNS queries. We need those logs for security and possible forensic reasons.
The configuration is done in Windows Event Manager as described in DNS Logging and Diagnostics | Microsoft Learn . We are using LPAgent to collect other logs from this server.
The result is an .etl file, which cannot be read with the im_msvistalog module of LPAgent.
I have read that there is an NXLOG EE plugin im_etw out there which should be able to handle this file type, but we do not have the NXLog Enterprise Subscription.
Is there any other option to collect the dns query logs from the server and import them into LogPoint?
Is there any best practice for handling Windows DNS server query logs (without using NXLog EE)?
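One possible workaround (an assumption on my part, not something officially documented): instead of the ETW-based analytical log, enable the classic file-based DNS debug logging (DNS Manager, server properties, Debug Logging tab), which writes queries to a plain-text file that the agent can tail with im_file:

```
<Input dns_debug>
    # File path is whatever you chose in the Debug Logging dialog (placeholder)
    Module   im_file
    File     'C:\dns\dns-debug.log'
    SavePos  TRUE
</Input>
```

The debug log format is different from the ETW events, so a matching normalizer would still be needed on the LogPoint side.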
Kind regards
Uwe
-
Multi line parser for Java applications.
Hi.
We are trying to push multi-line logs to Logpoint, for example a stack trace.
They are created by Java applications like JBoss, Tomcat and a few more, where we have some debug information in the logs, such as the content of XML messages processed by the system etc.
When such logs are displayed in Logpoint, we need to preserve the line breaks along with indentation to make them readable by a human.
Can you please show a complete recipe on how to achieve that?
I saw this topic
https://community.logpoint.com/normalization-parsing-43/multi-line-parser-147
and understood that there are some pre-compiled normalizers which can be used. Can you please explain how they work and how exactly we need to:
1. send logs to Logpoint
2. process logs in Logpoint
in order to be able to present properly formatted (line breaks and indentation) logs to the users who will search for them?
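On the collection side, nxlog can glue stack-trace continuation lines to their parent event with the xm_multiline extension before anything reaches Logpoint. A sketch; the header regex is an assumption about logs whose first line starts with an ISO-style date, so adjust it to your appenders:

```
<Extension multiline_java>
    # A new event starts with a date; everything else is a continuation line
    Module        xm_multiline
    HeaderLine    /^\d{4}-\d{2}-\d{2}/
</Extension>

<Input app_logs>
    # Placeholder path to the JBoss/Tomcat log file
    Module        im_file
    File          '/var/log/app/server.log'
    InputType     multiline_java
</Input>
```

How line breaks inside a single stored event are rendered in the search UI is a separate, LogPoint-side question I cannot answer from here.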
Thanks
-
Debian Normalizers
I want to add some Linux logs to Logpoint, and I have seen that there are so many different normalizers that testing them all would take forever…
Does somebody have a best-practice set of normalizers for a default Debian rsyslog configuration?
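Not a normalizer recommendation as such, but keeping the forwarding format standard narrows the choice a lot: with a default Debian rsyslog, a single forwarding rule like the one below (hostname is a placeholder) sends everything via TCP in the built-in RFC 5424-style template, which the generic syslog normalizers are designed for:

```
# /etc/rsyslog.d/50-logpoint.conf
# @@ = TCP, single @ would be UDP
*.*  @@logpoint.example.com:514;RSYSLOG_SyslogProtocol23Format
```

With a consistent RFC 5424 header, the remaining normalizer testing is mostly about the message bodies of the individual daemons rather than the transport format.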
-
how to define a static field on a data source
Hi,
I need to define a static field on a data source, like ‘datacenter=Paris’. What is the best way to achieve that?
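If the source is collected via nxlog, one workaround (an assumption, not a LogPoint feature) is to stamp the key-value pair into the raw event at collection time, so that a key-value normalizer picks it up like any other field:

```
<Input in_app>
    # Placeholder path; prepend the static pair to every raw event
    Module  im_file
    File    '/var/log/app.log'
    Exec    $raw_event = "datacenter=Paris " + $raw_event;
</Input>
```

The cleaner alternative is probably an enrichment source keyed on the device address, which keeps the raw logs untouched.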
Thanks
-
Change timestamp on incoming logs
Hi!
I’ve several logs that come in UTC time format. My timezone is UTC+2, which messes things up when I perform queries, hunt, analyze events and generate reports.
My log exporters often send syslog in UTC time format, which is RFC-compliant behavior.
Is it possible to apply some sort of normalization package to these incoming logs to fix this?
Can I try some queries that change the log_ts and col_ts fields to the UTC+2 timezone instead of the default UTC?
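A general note on the mechanics: log_ts and col_ts are epoch values stored in UTC, and the usual practice is to keep storage in UTC and apply the offset only at display and report time rather than rewriting the stored fields. The arithmetic itself is just a fixed shift, as this sketch shows:

```python
from datetime import datetime, timedelta, timezone

def to_display_time(epoch_ts: int, offset_hours: int = 2) -> str:
    # Convert an epoch timestamp (stored in UTC) into the local
    # UTC+2 representation used for display.
    local_tz = timezone(timedelta(hours=offset_hours))
    return datetime.fromtimestamp(epoch_ts, tz=local_tz).isoformat()
```

Rewriting log_ts itself to UTC+2 would make the stored data ambiguous across daylight-saving changes, which is why shifting at presentation time is generally safer.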
Thanks
-
Windows Logs
We would like to send log files from a directory such as C:\Logs to our logpoint server.
What needs to be entered in nxlog.conf?
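A minimal sketch of an nxlog.conf for shipping flat files from a directory via syslog; the host address and the file pattern are placeholders to adjust:

```
<Extension syslog>
    Module   xm_syslog
</Extension>

<Input in_clogs>
    # Tail every matching file and remember the read position across restarts
    Module   im_file
    File     'C:\Logs\*.log'
    SavePos  TRUE
</Input>

<Output out_logpoint>
    # Placeholder for the LogPoint collector address
    Module   om_tcp
    Host     192.0.2.10
    Port     514
    Exec     to_syslog_bsd();
</Output>

<Route files_to_logpoint>
    Path     in_clogs => out_logpoint
</Route>
```

On the LogPoint side, a matching syslog collector and a normalization policy for the file contents are still needed.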
Has anyone tried to get logs from Kubernetes? Maybe there is a best practice.
I see that there are no vendor apps for Kubernetes, so normalization will probably have to be written. But how do you get the logs to Logpoint in the first place? Is there a native way to do this?
I found that audit logging is not turned on by default, and when it is, the logs are only kept for 1 hour.
Anyone with some good advice on the matter?
Regards Kai
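There is no native Kubernetes-to-Logpoint path that I know of; a common pattern is to run Fluent Bit as a DaemonSet tailing /var/log/containers and forward through its syslog output. A sketch of the output section; the host, matching tag and message key are assumptions to adapt:

```
[OUTPUT]
    Name                 syslog
    Match                kube.*
    Host                 192.0.2.10
    Port                 514
    Mode                 udp
    Syslog_Format        rfc5424
    Syslog_Message_Key   log
```

API-server audit logs are a separate stream: they have to be enabled with an audit policy on the API server and can then be written to a file or a webhook and picked up the same way.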
-
Cisco Firepower logs not normalized
Hi!
Been struggling with the normalization of Cisco Firepower logs, where I expect better normalization and better enrichment. The syslog is configured from the Firepower Management Center.
Everything should be correct in LogPoint, where we’ve put in all the normalization policies for the log source.
Compiled Normalizers:
- CiscoFirepowerNormalizer
- CiscoPIXASACompiledNormalizer
Normalization Packages:
- LP_Cisco Firepower
- LP_Cisco Firepower Management Center
- LP_Cisco Firepower Management Center v6_2
- LP_Cisco PIX/ASA Generic
- LP_Cisco PIXASA
Is there any problem with the syslog format? We had the same issue with a CheckPoint FW, but that was solved when we changed the format to CEF. The problem is that Cisco Firepower only supports the syslog format.
Is there someone who has any tips on how to move forward with this?
-
CSV Enrichment Source v5.2.0 is now publicly available
Dear All,
We are happy to share that we have released CSV Enrichment Source v5.2.0 publicly.
The CSV Enrichment Source application enables you to use a CSV file as an enrichment source in LogPoint. The application fetches data feeds from a CSV file and enriches search results with the data.
For further information, please visit the link below:
https://servicedesk.logpoint.com/hc/en-us/articles/115003786109
For detailed information about the implementation in Logpoint products, please refer to the articles below:
- Logpoint: https://docs.logpoint.com/docs/csvenrichmentsource/en/latest/
- Director API: https://docs.logpoint.com/docs/csvenrichmentsource-for-director-console-api/en/latest/
- Director Console: https://docs.logpoint.com/docs/csvenrichmentsource-for-director-console-ui/en/latest/
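For illustration, an enrichment CSV might look like the fragment below; the column names and values are hypothetical, and the key column is whatever field the enrichment policy joins on:

```csv
device_ip,datacenter,criticality
192.0.2.10,Paris,high
192.0.2.11,Oslo,medium
```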
-
Integrating the logs from kaspersky
We need to send the Kaspersky logs into Logpoint. We have configured Kaspersky to send events to the Logpoint machine through syslog port 514 with protocol UDP, but it does not send the logs. Need help.
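Before changing anything on the Kaspersky side, it is worth verifying that the datagrams reach the Logpoint machine at all. A throwaway UDP listener like the sketch below can confirm that; note that only one process can bind a port, so stop the collector first or point a test export at a spare port (binding 514 itself also needs root):

```python
import socket

def wait_for_datagram(port: int = 514, timeout: float = 30.0) -> bytes:
    # Bind the UDP port and wait for a single datagram, to check whether
    # the Kaspersky traffic reaches this host at all.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.bind(("0.0.0.0", port))
        data, addr = s.recvfrom(8192)
        print(f"received {len(data)} bytes from {addr[0]}")
        return data
```

If nothing arrives, the problem is on the Kaspersky export or network/firewall side; if datagrams do arrive, the issue is in the Logpoint collection policy.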
-
EventHubs: Azure AD Identity Protection Timestamp Format
We recently noticed that some Azure EventHubs Applications (e.g. the Azure AD Identity Protection -> https://docs.microsoft.com/en-us/azure/active-directory/identity-protection/overview-identity-protection ) are setting the "time" field not in the ISO 8601 Datetime format, but in the "general date long time" format (see https://docs.microsoft.com/en-us/dotnet/standard/base-types/standard-date-and-time-format-strings#GeneralDateLongTime ).
Thus the month and day fields seem to be mixed up in these cases; e.g. events that were actually collected on the 6th of April (according to col_ts ) are sorted into the repos on the 4th of June (because of the wrong log_ts ).
Also, alert rules on these events then trigger months later, when the wrongly sorted events slip into the current window of the search time range. The following screenshot shows how the timestamp format of the Azure AD Identity Protection differs from the usual ISO 8601 format.
Do you know if it is somehow possible to change this log timestamp format somewhere in the Azure AD settings?
Or has the compiled normalizer of the EventHub events to be adjusted?
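The mix-up can be reproduced directly: the same "general date long time" string parses to two different dates depending on which day/month convention the parser assumes. A small demonstration with a hypothetical sample string:

```python
from datetime import datetime

# Hypothetical "general date long time" value as delivered by the event hub
raw = "4/6/2022 1:23:45 PM"

# US convention: month first -> April 6th
as_us = datetime.strptime(raw, "%m/%d/%Y %I:%M:%S %p")
# European convention: day first -> June 4th
as_eu = datetime.strptime(raw, "%d/%m/%Y %I:%M:%S %p")

print(as_us.isoformat())  # 2022-04-06T13:23:45
print(as_eu.isoformat())  # 2022-06-04T13:23:45
```

Since the string itself is ambiguous for days 1 to 12, a normalizer can only pick one convention, which is why forcing the source to ISO 8601 (if the Azure side allows it) would be the robust fix.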
-
Export large amount of raw logs
Hi @all
I need to export a large amount of raw logs - about 450 GB.
Is it possible for me to export this amount in one go via the Export Raw Logs functionality or do I need to export the raw logs sequentially?
Thx a lot!
-
Normalizer Timestamp
I have a nice logfile (flat file, imported once a day via the Ubuntu LogPoint Agent) containing a timestamp like:
| 20210905 | 231304 |
Any suggestions on how I need to modify my normalizer to understand this time format?
Edit:
I did some sed magic and changed the format directly in the logfile.
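For reference, the two fields map cleanly onto a strptime pattern, so the sed pre-processing could equally be done by a small conversion script before import. A sketch of the conversion:

```python
from datetime import datetime

def parse_ts(date_field: str, time_field: str) -> datetime:
    # "20210905" / "231304" -> 2021-09-05 23:13:04
    return datetime.strptime(f"{date_field} {time_field}", "%Y%m%d %H%M%S")
```

Whether the normalizer's own timestamp handling can consume this compact format directly is the part I cannot confirm; rewriting to an ISO-style timestamp before import, as the sed approach does, sidesteps the question.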
-
Normalizing Windows logs using nxlog agent
When using the nxlog agent for Windows (instead of the LogPoint Agent for Windows) how can I get the logs properly normalised?
-
Tableau Plugin
I've done a little searching, but I haven't had any luck finding a Tableau plugin.
Does one exist, or does anyone have an integration bringing logs from Tableau into Logpoint?
-
Universal REST API Fetcher Release
Hi All,
We are excited to share the release of the new Universal REST API Fetcher.
The Universal REST API fetcher provides a generic interface to fetch logs from cloud sources via REST APIs. The cloud sources can have multiple endpoints, and every configured source consumes one device license.
For more details, please see the links below:
Help Center: https://servicedesk.logpoint.com/hc/en-us/articles/6047943636253-Universal-REST-API-Fetcher
Documentation: https://docs.logpoint.com/docs/universal-rest-api/en/latest/
-
Documentation- Integration
Hi Team,
We are looking for integration documentation with LogPoint for the products below:
1. Okta
2. Salesforce
Do we have a built-in use-case package for these?
-
Notification - Device not sending logs >24 hours
Hi Team,
How do we set up a “Device not sending logs” alert in Logpoint, and how do we configure that alert to send email notifications, similar to the SFTP setup?
Thanks
Satya
-
Howto: Increase maximum size of syslog message
In the default configuration, the syslog_collector process only accepts messages (log lines) with a maximum of 10000 bytes (characters). This results in truncated messages, which will then not be normalized correctly. Especially PowerShell script blocks may contain important information but generate very long log messages.
Unfortunately this is a fixed value in the syslog_collector binary.
At least the C code is available on the system, so you can adjust the value and compile the binary again. For this you need sudo/root access.
sudo -i  # become root
cd /opt/immune/installed/col/apps/collector_c/syslog_collector/
cp syslog_collector.h syslog_collector.h.bak  # create a backup of the file
nano syslog_collector.h
Change the value here in this line:
Then compile the syslog collector using:
/opt/immune/bin/envdo make clean
/opt/immune/bin/envdo make
sv restart /opt/immune/etc/service/syslog_collector/  # restart the service
It would be a great feature to be able to set this value within the web UI.
-
Multi line parser?
Hi,
How do you create a “multi line” normaliser, e.g. Java logs (stack traces) or JSON objects?
-
Working with UDP on LPAgent
Hello all!
The docs portal https://docs.logpoint.com/docs/logpoint-agent/en/latest/Installing%20the%20Application.html mentions that we use TCP for communication between LPAgent and the Logpoint server. What can we do in case of UDP instead?
-
Process to request a new log normalization support
Hi Team, my customer has a new network appliance that is not yet supported by LogPoint. What is the process to request its support?
-
I want to add a field management_address based on its respective device_address.
Hi,
I have a case where the analyst uses the management IP; however, there is a NAT address on the client side, and the device address in the logs contains the client IPs. So I am looking to add a field management_address that we define based on the device_address,
i.e. when an event has device_address=192.168.1.1, add the field management_address=10.10.10.10.
I’ve looked into a few ways to do this. With an enrichment source, I didn’t see a good way to go about it. Adding a custom normalizer would be possible, but I would have to add a signature for every IP ( <:ALL>192.168.1.1<:ALL> ) and then add the key-value management_address=10.10.10.10.
A label package would also be doable, and easier than norm signatures, but that would put the new IP in a label rather than in the normalized event. I am wondering if anyone has come up with any other solutions or ideas.
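For what it's worth, the lookup itself is trivial once the mapping table exists; a CSV enrichment source keyed on device_address seems like the closest built-in fit for adding the value as a real field. A sketch of the intended logic, with hypothetical addresses:

```python
# Hypothetical mapping table: device_address -> management_address
NAT_MAP = {
    "192.168.1.1": "10.10.10.10",
    "192.168.1.2": "10.10.10.11",
}

def enrich(event: dict) -> dict:
    # Add management_address when the device_address is in the table;
    # return the event unchanged otherwise.
    mgmt = NAT_MAP.get(event.get("device_address", ""))
    if mgmt:
        event = {**event, "management_address": mgmt}
    return event
```

Maintaining the pairs in one CSV (one row per device_address) would avoid the per-IP normalizer signatures, which get unmanageable quickly.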