Data Integration
-
Search the current hash from Kaspersky in Logpoint
We need to search for a hash in Logpoint across our current endpoints and servers. How can we do that? Is there a package we need to install for a particular application, or anything else?
We also have Kaspersky EDR and have imported its package into Logpoint. We are only getting the Windows logs, not the hashes from Kaspersky. We need to configure Logpoint to receive the hashes from Kaspersky.
-
Change timestamp on incoming logs
Hi!
I’ve got several logs that come in UTC time format. My timezone is UTC+2, which messes things up when I perform queries, hunt, analyze events, and take out reports.
My log exporters often send syslog in UTC, which is RFC-compliant behavior.
Is it possible to apply any sort of normalization package to these incoming logs to fix this?
Can I try some queries that change the log_ts and col_ts fields to the UTC+2 timezone instead of the default UTC?
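Logpoint query specifics aside, the conversion being asked about is a fixed offset shift. As a plain-Python sketch (not Logpoint query syntax), this is what turning a UTC timestamp into its UTC+2 representation looks like:

```python
from datetime import datetime, timezone, timedelta

def to_local(ts_utc: str, offset_hours: int = 2) -> str:
    """Shift an ISO 8601 UTC timestamp string into a fixed-offset zone."""
    dt = datetime.fromisoformat(ts_utc.replace("Z", "+00:00"))
    return dt.astimezone(timezone(timedelta(hours=offset_hours))).isoformat()

print(to_local("2024-01-11T11:11:11Z"))  # 2024-01-11T13:11:11+02:00
```

Note that this only changes the presentation; the instant in time is the same, which is why SIEMs generally prefer to store UTC and convert at display time.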
Thanks -
Normalization for Milliseconds
<123>2024-01-11T11:11:11.123Z hostname
I want to extract two fields from the raw data above using normalization. After normalization I should have the following fields:
milliseconds=123
log_ts=2024-01-11T11:11:11Z
Can I do this in normalization (or a signature)? If yes, can you write me the normalization/signature rule?
This problem can be solved like this:
| <<:int>><log_ts:datetime_m> <host:all>
but I don't want that. I want to separate my fields at the beginning using normalization.
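Leaving the Logpoint signature syntax aside, the intended field split can be illustrated with a plain-Python regex (the group names are only illustrative, not Logpoint parser names):

```python
import re

RAW = "<123>2024-01-11T11:11:11.123Z hostname"

# Peel the fractional seconds off the timestamp and keep the rest intact.
PATTERN = (r"<(?P<pri>\d+)>"
           r"(?P<ts>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})"
           r"\.(?P<ms>\d+)Z (?P<host>\S+)")

m = re.match(PATTERN, RAW)
fields = {
    "milliseconds": int(m.group("ms")),
    "log_ts": m.group("ts") + "Z",
    "host": m.group("host"),
}
print(fields)
```

Whatever rule is used in the end, the key point is the same: capture the fractional part in its own group instead of letting the datetime parser swallow it.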
-
Windows Server DNS Query Log
Hi all,
I have configured my Windows Server 2022 DNS server to log DNS queries. We need those logs for security and possible forensic reasons.
The configuration was done in the Windows Event Viewer, as described in DNS Logging and Diagnostics | Microsoft Learn. We are using LPAgent to collect other logs from this server.
The result is an .etl file, which cannot be read with the im_msvistalog plugin of LPAgent.
I have read that there is an NXLog EE plugin, im_etw, which should be able to handle this file type, but we do not have an NXLog Enterprise subscription.
Is there any other option to collect the DNS query logs from the server and import them into LogPoint?
Is there any best practice for handling Windows DNS server query logs (without using NXLog EE)?
Kind regards
Uwe -
Windows Logs
We would like to send log files from a directory such as C:\Logs to our Logpoint server.
What needs to be entered in nxlog.conf?
-
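For the directory-collection question above, a minimal nxlog.conf sketch could look like the following. This is a sketch, not a verified production config: the hostname, port, and file mask are placeholders, and it assumes plain syslog over TCP to the Logpoint collector.

```
# Sketch only: adjust File mask, Host, and Port to your environment.
<Extension syslog>
    Module      xm_syslog
</Extension>

<Input file_logs>
    Module      im_file
    File        "C:\\Logs\\*.log"
    Recursive   TRUE
</Input>

<Output logpoint>
    Module      om_tcp
    Host        logpoint.example.local
    Port        514
    Exec        to_syslog_bsd();
</Output>

<Route file_to_logpoint>
    Path        file_logs => logpoint
</Route>
```

On the Logpoint side, the sending host still needs to be registered as a device with a matching collection policy before the logs show up in search.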
Nxlog configuration, dropping logs from local host on IIS
Hi,
I can’t get this to work; maybe some of you have tried this before.
The drop line below is fine syntax-wise, but my goal is to get rid of the 127.0.0.1 logs, and when I remove the “!” it fails.
Any help is appreciated.
Regards, Kai
<Input IIS1>
  Module im_file
  File "C:\\inetpub\\logs\\LogFiles\\W3SVC1\\u_ex*"
  ReadFromLast FALSE
  Recursive TRUE
  PollInterval 1
  Exec $FileName = file_name();
  # Two fixes: the condition must be "==" (drop events that DO match
  # 127.0.0.1), and the check has to run after parse_csv(), because the
  # client address only exists once the W3C line has been parsed. The
  # field name ($c-ip here) must match the Fields list of your w3c
  # xm_csv extension; im_file does not set $MessageSourceAddress.
  Exec if $raw_event =~ /^#/ drop();\
       else\
       {\
           w3c->parse_csv();\
           if $c-ip == "127.0.0.1" drop();\
           $EventTime = $EventTime - (2 * 3600);\
           $SourceName = "IIS";\
       }
</Input>
-
Re-parse an event for normalization (JSON-event)
Hi !
Just an interesting question. I know that other SIEM vendors have problems with this; maybe Logpoint has a good function for it. I received a JSON event that wasn’t normalized, because no normalization package was enabled. I enabled the package after I received the event.
So, my question: is it possible to parse this event afterwards so that it gets normalized? Or do I have to wait for another event from the same log source to see if that one gets normalized?
-
Has anyone tried to get logs from Kubernetes? Maybe there is a best practice.
I see that there are no vendor apps for Kubernetes, so normalization will probably have to be written, but how do you get the logs to Logpoint? Is there a native way to do this?
I found that audit logging is not turned on by default, and when it is, the logs only reside for one hour.
Anyone with some good advice on the matter?
Regards, Kai
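On the Kubernetes side, audit logging is driven by an audit policy file plus kube-apiserver flags. A minimal sketch, assuming the standard `audit.k8s.io/v1` policy API (paths and retention values are placeholders):

```yaml
# audit-policy.yaml: log request metadata for everything. A real policy
# would usually add rules to skip noisy read-only endpoints.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: Metadata
```

The policy is then wired in via kube-apiserver flags such as `--audit-policy-file=/etc/kubernetes/audit-policy.yaml`, `--audit-log-path=/var/log/kubernetes/audit.log`, and `--audit-log-maxage=30`. The resulting JSON-lines audit file can then be shipped to Logpoint with an ordinary file collector or agent, which also works around the short in-cluster retention mentioned above.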
-
Cisco Firepower logs not normalized
Hi!
Been struggling with the normalization of Cisco Firepower logs, where I would expect better normalization and enrichment. The syslog output is configured from the Firepower Management Center.
Everything should be correct in LogPoint, where we’ve put in all the normalization policies for the log source.
Compiled normalizers:
- Cisco FirepowerNormalizer
- CiscoPIXASACompiledNormalizer
Normalization packages:
- LP_Cisco Firepower
- LP_Cisco Firepower Management Center
- LP_Cisco Firepower Management Center v6_2
- LP_Cisco PIX/ASA Generic
- LP_Cisco PIXASA
Is there any problem with the syslog format? We had the same issue with a Check Point FW, but that was solved when we changed the format to CEF. The problem here is that Cisco Firepower only supports the syslog format.
Does anyone have any tips on how to move forward with this?
-
Windows SCCM send logs to LogPoint
Hi!
I’m curious about how to collect logs from SCCM: logs related to endpoint protection, virus alarms, quarantined threats, etc.
I found that NXLog provides a configuration file for this, but some sections are missing from it, for example an <Output out_syslog> block to point at the syslog destination: Microsoft System Center Configuration Manager :: NXLog Documentation
Does anyone have any experience with this? Thankful for replies.
-
Release of Universal Normalizer v5.0.0.
Dear All,
We are excited to announce the public release of Universal Normalizer v5.0.0.
Why is this great news?
Universal Normalizer enables you to create custom Compiled Normalizers for a range of supported log formats by yourself, with just a few simple steps and no waiting time.
The supported log formats currently include:
- JSON
- CSV
- XML
- CEF
- LEEF
- Key-value pairs
To read more about Universal Normalizer v5.0.0, please follow the links below:
Download:
https://servicedesk.logpoint.com/hc/en-us/articles/8874831748253
Documentation:
https://docs.logpoint.com/docs/universalnormalizer/en/latest/#universal-normalizer
-
Logpoint Agent Collector v5.2.3 available now
Hi All,
The Logpoint Agent Collector v5.2.3 has been released publicly. For more information, please visit the links below.
Release notes: https://servicedesk.logpoint.com/hc/en-us/articles/360020035977
Documentation: https://docs.logpoint.com/docs/logpoint-agent/en/latest/
-
Universal REST API Fetcher Release
Hi All,
We are excited to share the release of the new Universal REST API Fetcher.
The Universal REST API fetcher provides a generic interface to fetch logs from cloud sources via REST APIs. The cloud sources can have multiple endpoints, and every configured source consumes one device license.
For more details, please see the links below:
Help Center: https://servicedesk.logpoint.com/hc/en-us/articles/6047943636253-Universal-REST-API-Fetcher
Documentation: https://docs.logpoint.com/docs/universal-rest-api/en/latest/
-
CSV Enrichment Source v5.2.0 is now publicly available
Dear All,
We are happy to share that we have released CSV Enrichment Source v5.2.0 publicly.
The CSV Enrichment Source application enables you to use a CSV file as an enrichment source in LogPoint. The application fetches data feeds from a CSV file and enriches search results with the data.
For further information, please visit the link below:
https://servicedesk.logpoint.com/hc/en-us/articles/115003786109
For detailed information about the implementation in Logpoint products, please refer to the articles below:
- Logpoint: https://docs.logpoint.com/docs/csvenrichmentsource/en/latest/
- Director API: https://docs.logpoint.com/docs/csvenrichmentsource-for-director-console-api/en/latest/
- Director Console: https://docs.logpoint.com/docs/csvenrichmentsource-for-director-console-ui/en/latest/
-
Detecting devices that are not sending logs
Receiving logs is one of the core features of a SIEM solution, but in some cases logs are not received as required. In our newest KB article, we dive into how to monitor log sources using Logpoint alerts to detect when no logs are received in Logpoint within a certain time range.
To read the full article, please see the link below: https://servicedesk.logpoint.com/hc/en-us/articles/5734141307933-Detecting-devices-that-are-not-sending-logs-
-
Understanding file_keeper Working and Configuration
The `file_keeper` service, primarily used for storing raw logs and then forwarding them to be indexed by the `indexsearcher`, is often run in its default configuration. However, in some real-life situations this might not be sufficient for the type and volume of logs being ingested into LogPoint, so tuning is required. In our newest KB article, we guide you through exactly how to do it.
For more details, please read the full article on the link below:
-
Integrating logs from Kaspersky
We need to send the Kaspersky logs to Logpoint. We have configured Kaspersky to send events to the Logpoint machine via syslog on port 514 over UDP, but no logs arrive. We need help.
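Since UDP is fire-and-forget, a quick way to separate network problems from Logpoint configuration problems is to send a hand-crafted test message to the collector and check whether it appears in search. A Python sketch (the target IP is a placeholder for your Logpoint collector):

```python
import socket

def send_test_syslog(host: str, port: int = 514) -> None:
    """Fire a single RFC 3164-style test message at a collector over UDP."""
    msg = b"<134>Jan 11 11:11:11 kaspersky-test: connectivity check"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(msg, (host, port))

# send_test_syslog("192.0.2.10")  # placeholder: your Logpoint collector IP
```

If the test message shows up but the Kaspersky events do not, the issue is on the Kaspersky export side; if nothing shows up, check firewalls and whether the device and collection policy are configured in Logpoint for that source IP.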
-
Release of CloudTrail v5.1.0
Dear all,
We're happy to share the public release of CloudTrail v5.1.0.
Please see the details on the link below:
https://servicedesk.logpoint.com/hc/en-us/articles/360000219549
-
New applications available with Logpoint v7.1.0
Hi All,
We're happy to share that we have released the following applications on the Help Center:
- Experimental Median Quartile Quantile Plugin v5.0.0
- Vulnerability Management v6.1.1
- Lookup Process Plugin v5.1.0
- Logpoint Agent Collector v5.2.2
- Universal REST API Fetcher v1.0.0
All applications have been bundled in Logpoint v7.1.0 and are available out of the box.
-
Common Issues in Normalized logs
Hi All,
Our latest KB article discusses common issues where logs seem normalized (both norm_id and sig_id are present), but some problems prevent them from being used in analytics.
To read the full article, please follow the link below:
https://servicedesk.logpoint.com/hc/en-us/articles/5830850414493-Common-Issues-in-Normalized-logs
-
Resolving timestamp related issues in Normalization
Hi All,
Sometimes we face an issue like an alert not being triggered or a dashboard widget not being populated. There can be many possible reasons; among them is a huge time gap between log_ts and col_ts. In this article, we discuss some of the possible causes and share tips and tricks to solve this.
Please see the link to the article below :)
-
New in KB: Addressing delayed logs and their uncertainty in Logpoint
Hi All,
We are delighted to share our latest KB article addressing the difference between two fields in logs, log_ts (event creation time) and col_ts (event ingestion time in Logpoint), and how they can alter the expected behavior of Logpoint services. You can access the article via the link below:
-
FortiMail Logs not Correctly Separated into Multiple Lines
We recently added a FortiMail appliance as a log source to one of our logpoints and now see an issue during collection and normalization.
It seems that FortiMail is sending the log messages without separating the single messages with a newline or NULL-termination or something else. Thus the syslog_collector is reading from the socket until the maximum buffer length is exceeded.
So we get a maximum-length raw log message (10k characters, which then breaks in the middle of a log message) containing up to 30 or 40 single log messages written one after the other. The normalizer then normalizes only the first message and discards the rest.
Here is a shortened example of what this looks like:
550 <6>date=2022-06-20 time=07:24:11.992 device_id=[...]553 <6>date=2022-06-20 time=07:24:11.992 device_id=[...]479 <6>date=2022-06-20 time=07:24:11.992 device_id=[...]324 <6>date=2022-06-20 time=07:24:12.279 device_id=[...]
Is there a way to resolve this issue?
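The numeric prefixes in the sample (550, 553, 479, 324) look like RFC 6587 octet-counting framing, where each syslog message is preceded by its length in octets and a space, with no newline in between. If the collector expects newline-delimited syslog, the messages run together exactly as described. As an illustration of the framing (not a Logpoint fix), splitting such a buffer is straightforward:

```python
def split_octet_framed(buf: str) -> list[str]:
    """Split a stream using RFC 6587 octet-counting framing: each message
    is prefixed by its length (characters here, octets on the wire) and a
    space, with no delimiter between frames."""
    msgs, i = [], 0
    while i < len(buf):
        space = buf.index(" ", i)       # end of the length prefix
        length = int(buf[i:space])
        start = space + 1
        msgs.append(buf[start:start + length])
        i = start + length              # next frame begins immediately
    return msgs

# Build a two-message sample so the length prefixes are guaranteed correct.
m1 = "<6>date=2022-06-20 time=07:24:11 device_id=A"
m2 = "<6>date=2022-06-20 time=07:24:12 device_id=B"
buf = f"{len(m1)} {m1}{len(m2)} {m2}"
print(split_octet_framed(buf))
```

If this reading is right, the fix is likely on the FortiMail side (switching the syslog framing or transport) or on the collector side (using a receiver that understands octet counting), rather than in normalization.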
-
EventHubs: Azure AD Identity Protection Timestamp Format
We recently noticed that some Azure EventHubs Applications (e.g. the Azure AD Identity Protection -> https://docs.microsoft.com/en-us/azure/active-directory/identity-protection/overview-identity-protection ) are setting the "time" field not in the ISO 8601 Datetime format, but in the "general date long time" format (see https://docs.microsoft.com/en-us/dotnet/standard/base-types/standard-date-and-time-format-strings#GeneralDateLongTime ).
Thus the month and day fields seem to be mixed up in these cases; for example, events that were actually collected on the 6th of April (according to col_ts) are sorted into the repos on the 4th of June (because of the wrong log_ts).
Alert rules on these events then trigger months later, when the wrongly sorted events slip into the current window of the search time range. The following screenshot shows how the timestamp format of Azure AD Identity Protection differs from the usual ISO 8601 format.
Do you know if it is somehow possible to change this log timestamp format somewhere in the Azure AD settings?
Or does the compiled normalizer for the EventHub events have to be adjusted?
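The ambiguity is easy to reproduce: the same "general date long time" string yields different dates depending on which day/month convention the parser assumes. A Python sketch with a hypothetical sample value (the real events carry a similar string in their "time" field):

```python
from datetime import datetime

ts = "04/06/2022 7:24:11 AM"  # hypothetical "general date long time" sample

# The identical string under the two conventions:
as_month_first = datetime.strptime(ts, "%m/%d/%Y %I:%M:%S %p")  # 6 April
as_day_first = datetime.strptime(ts, "%d/%m/%Y %I:%M:%S %p")    # 4 June

print(as_month_first.date(), as_day_first.date())
```

This is exactly the April/June swap described above, which is why an unambiguous format like ISO 8601 (or a normalizer that knows the source locale) is needed.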
-
Export large amount of raw logs
Hi @all
I need to export a large amount of raw logs - about 450 GB.
Is it possible for me to export this amount in one go via the Export Raw Logs functionality or do I need to export the raw logs sequentially?
Thx a lot!
-
PaloAlto PanOS 10 - Logs not normalized
Hello,
Since we replaced the Palo Alto firewall devices a couple of days ago (the old ones were running PanOS 9.1.7, the new ones are on 10.1.4) for one of our customers, none of the logs coming from the firewalls are normalized anymore (there are thousands of logs in the repo, but the search query norm_id="*" shows no results).
We are using the same policies (collection, normalization, etc.) as before, and the firewall admin says they just migrated the configuration from the old to the new devices and cannot see any changes in the log configuration settings.
I have already restarted all normalizer services, even rebooted the LP and completely recreated the device configuration.
We are using the latest (5.2.0) Palo Alto application plugin on LogPoint 6.12.2, and its details clearly state that PanOS 10 is supported (Palo Alto Network Firewall – ServiceDesk # LogPoint). Looking at the raw logs, I cannot see any difference in the log format between PanOS 9 and 10. I also tried adding the PaloAltoCEFCompiledNormalizer to the normalization policy (which previously only included the PaloAltoNetworkFirewallCompiledNormalizer), but nothing helped.
Does anyone have any thoughts on what the issue might be, or what else I can check before I open a support ticket? Is there any way to debug the normalization process on the LogPoint CLI?
Regards
Andre
-
Documentation- Integration
Hi Team,
We are looking for integration documentation with LogPoint for the following products:
1. Okta
2. Salesforce
Do we have a built-in use case package?
-
Logpoint Incidents Integrate with ServiceNow
Hi Team,
Is it possible to integrate the Logpoint Incident page with ServiceNow?
We want to check whether Logpoint has that capability.
Thanks
Satya
-
Notification - Device not sending logs >24 hours
Hi Team,
How do we set up a “Device not sending logs” alert in Logpoint, and how do we configure that alert to send notifications by email, similar to the SFTP setup?
Thanks
Satya
-
Tagging devices - criticality
Hi Team,
Can we tag device criticality in Logpoint?
We are looking to create notifications for critical and high-severity devices.