Data Integration
-
Integrate a use case in Logpoint for all tenants
Good morning, could anyone tell me if I can integrate a use case with its respective query in CSV or JSON so that I don't have to create them one by one in each of the client tenants? If so, could you tell me how to do it?
Thank you very much for everything.
-
Tableau Plugin
I've done a little searching, but I haven't had any luck finding a Tableau plugin.
Does one exist, or is an integration with Tableau logs coming to Logpoint?
-
Integrating the logs from kaspersky
We need to send the Kaspersky logs to Logpoint. We have configured Kaspersky to send events to the Logpoint machine via syslog on port 514 over UDP, but the logs do not arrive. We need help.
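One way to narrow this down is to check whether the UDP datagrams reach the Logpoint host at all (a firewall or routing issue is a common cause). A minimal sketch in Python, assuming you can run a script on the collector host; in practice you would bind port 514 (root required) and wait for Kaspersky's packets, while the probe option below only does a local loopback sanity check:

```python
import socket

def udp_receive_check(port, probe=None, timeout=3.0):
    """Bind a UDP port and wait for one datagram.

    On the collector host, call udp_receive_check(514) (needs root) while
    Kaspersky is sending; a timeout means the packets never arrive, which
    points at the network/firewall rather than the Logpoint configuration.
    Passing a probe payload makes the function send itself a datagram,
    which is handy as a local sanity check of the socket setup.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.bind(("0.0.0.0", port))
        if probe is not None:
            # Send the probe to whatever port we actually got (0 = ephemeral).
            sock.sendto(probe, ("127.0.0.1", sock.getsockname()[1]))
        data, _addr = sock.recvfrom(65535)
        return data
```

A `socket.timeout` from `udp_receive_check(514)` while the source is sending is a strong hint that the problem is upstream of Logpoint.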
-
Search current Kaspersky hashes in Logpoint
We need to search in Logpoint for the hashes of our current endpoints and servers. How can we do that? Is there a package we need to install for a particular application, or anything else?
Also, we have Kaspersky EDR and have imported its package into Logpoint. We are only getting the Windows logs, not the hashes from Kaspersky. We need to configure Logpoint to receive the hashes from Kaspersky.
-
I want to add a field management_address for its respective device_address
Hi,
I have a case where the analyst uses the management IP, but there is a NAT address on the client side. The device address in the logs contains the client IPs. So I am looking to add a field management_address that we define based on the device_address,
i.e. when an event has device_address=192.168.1.1, add the field management_address=10.10.10.10.
I've looked into a few ways to do this. With an enrichment source I didn't see a good way to go about it. Adding a custom normalizer would be possible, but I would have to add a signature for every IP, <:ALL>192.168.1.1<:ALL>, and then add the key-value pair management_address=10.10.10.10.
A label package would also be doable and easier than norm signatures, but that would put the new IP in a label rather than within the normalized event. Wondering if anyone has come up with other solutions or ideas.
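For comparison, here is what the device_address → management_address lookup amounts to, sketched in Python against a hypothetical CSV table (the column names and addresses are only examples); an enrichment source backed by a two-column CSV would do essentially the same thing at normalization time:

```python
import csv
import io

# Hypothetical mapping table, one row per device, as an enrichment CSV might look.
MAPPING_CSV = """device_address,management_address
192.168.1.1,10.10.10.10
192.168.1.2,10.10.10.11
"""

def load_table(text):
    """Build a device_address -> management_address dict from CSV text."""
    return {row["device_address"]: row["management_address"]
            for row in csv.DictReader(io.StringIO(text))}

def enrich(event, table):
    """Add management_address to an event whose device_address is known."""
    mgmt = table.get(event.get("device_address"))
    if mgmt is not None:
        event["management_address"] = mgmt
    return event
```

For example, `enrich({"device_address": "192.168.1.1"}, load_table(MAPPING_CSV))` returns the event with `management_address` set to `10.10.10.10`; unknown addresses pass through unchanged.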
-
Process to request a new log normalization support
Hi Team, my customer has a new network appliance that is not yet supported by LogPoint. What is the process for requesting support for it?
-
Normalizing Windows logs using nxlog agent
When using the nxlog agent for Windows (instead of the LogPoint Agent for Windows) how can I get the logs properly normalised?
-
Time Searches
I have come across a query that ends up outputting the month as text. I would have expected it to come out as a number; is that possible?
-
Multi line parser?
Hi,
How do you create a “multi line” normaliser, e.g. Java logs (stack traces) or JSON objects?
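There is no single answer, but the usual trick is a pre-processing step that folds continuation lines into the preceding event before the (single-line) normalizer sees them. A sketch in Python, assuming Java-style stack traces where continuation lines start with whitespace or "Caused by:":

```python
import re

# Continuation lines in a Java stack trace start with whitespace
# ("    at com.Foo...") or with "Caused by:".
CONTINUATION = re.compile(r"^(\s|Caused by:)")

def fold_multiline(lines):
    """Merge continuation lines into the previous event."""
    events = []
    for line in lines:
        line = line.rstrip("\n")
        if events and CONTINUATION.match(line):
            events[-1] += "\n" + line      # keep the whole trace in one event
        else:
            events.append(line)
    return events
```

For example, `fold_multiline(["ERROR boom", "    at com.Foo.bar(Foo.java:42)", "INFO next"])` yields two events. For JSON objects you would track brace depth instead of leading whitespace.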
-
Working with UDP on LPAgent
Hello all!
The docs portal https://docs.logpoint.com/docs/logpoint-agent/en/latest/Installing%20the%20Application.html mentions that we use TCP for communication between LPAgent and the Logpoint server. What can we do in case of UDP instead?
-
how to define a static field on a data source
Hi,
I need to define a static field on a data source, like ‘datacenter=Paris’. What is the best way to achieve that?
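Besides Logpoint-side options (an enrichment source keyed on the device, or a label), the field can also be stamped onto the message before it reaches Logpoint. A sketch of the idea in Python, assuming a key=value style log format (the field name `datacenter` is just the example from above):

```python
def add_static_fields(raw, **fields):
    """Append static key=value pairs to a raw log line so that a
    key-value normalizer picks them up as ordinary fields."""
    extra = " ".join(f"{key}={value}" for key, value in fields.items())
    return f"{raw} {extra}" if extra else raw
```

For example, `add_static_fields("action=login user=bob", datacenter="Paris")` returns `"action=login user=bob datacenter=Paris"`. The same effect can often be had in the shipper itself (e.g. an rsyslog template), which avoids an extra processing step.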
Thanks
-
Automatic Normalization
It would be great if there were some means to automatically select the appropriate normalizers. This would reduce the implementation overhead and also help us pick the best available normalizers. We could leave a process running at the start of the implementation to analyze the logs and find the normalizers they require, and allow it some time to process.
What are the limitations/drawbacks of doing so?
-
Howto: Increase maximum size of syslog message
In the default configuration, the syslog_collector process only accepts messages (log lines) with a maximum of 10000 bytes (characters). This results in truncated messages, which then will not be normalized correctly. PowerShell script blocks in particular may contain important information but generate very long log messages.
Unfortunately this is a fixed value in the syslog_collector binary.
However, the C code is available on the system, so you can adjust the value and compile the binary again. For this you need sudo/root access.
sudo -i # become root
cd /opt/immune/installed/col/apps/collector_c/syslog_collector/
cp syslog_collector.h syslog_collector.h.bak # create a backup of the file
nano syslog_collector.h
Change the value in this line:
Compile the syslog collector using:
/opt/immune/bin/envdo make clean
/opt/immune/bin/envdo make
sv restart /opt/immune/etc/service/syslog_collector/ # restart the service
It would be a great feature to be able to set this value within the web UI.
-
Debian Normalizers
I want to add some Linux logs to Logpoint, and I have seen that there are so many different normalizers that testing them all would take forever...
Does somebody have a best-practice set of normalizers for a default Debian rsyslog configuration?
-
Normalizer Timestamp
I have a nice Logfile (FlatFile with once a day import via Ubuntu LogPoint Agent) containing a timestamp like:
| 20210905 | 231304 |
Any suggestions on how I need to modify my normalizer to understand this timestamp?
Edit:
I did some sed magic and changed the format directly in the logfile.
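An alternative to rewriting the file with sed is to combine the two pipe-delimited fields into a real timestamp during pre-processing. A sketch, with the field layout assumed from the sample above (YYYYMMDD date followed by HHMMSS time):

```python
from datetime import datetime

def parse_pipe_timestamp(line):
    """Parse lines like '| 20210905 | 231304 |' into a datetime."""
    date_part, time_part = [p.strip()
                            for p in line.strip().strip("|").split("|")[:2]]
    return datetime.strptime(date_part + time_part, "%Y%m%d%H%M%S")
```

`parse_pipe_timestamp("| 20210905 | 231304 |")` gives 2021-09-05 23:13:04, which can then be rewritten into whatever format the normalizer already understands.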
-
Notification - Device not sending logs >24 hours
Hi Team,
How do we set up a “device not sending logs” alert in Logpoint, and how do we configure that alert to send notifications by email (like the SFTP setup)?
Thanks
Satya
-
Documentation- Integration
Hi Team,
We are checking for integration documentation with LogPoint for the below products:
1. Okta
2. Salesforce
Do we have a built-in use case package?
-
Logpoint Incidents Integrate with ServiceNow
Hi Team,
Is it possible to integrate the Logpoint Incident page with ServiceNow?
We want to check whether Logpoint has that capability.
Thanks
Satya
-
Tagging devices - criticality
Hi Team,
Can we tag device criticality in Logpoint?
We are looking to create notifications for critical and high-severity devices.
-
Export large amount of raw logs
Hi @all
I need to export a large amount of raw logs - about 450 GB.
Is it possible for me to export this amount in one go via the Export Raw Logs functionality or do I need to export the raw logs sequentially?
Thx a lot!
-
EventHubs: Azure AD Identity Protection Timestamp Format
We recently noticed that some Azure EventHubs Applications (e.g. the Azure AD Identity Protection -> https://docs.microsoft.com/en-us/azure/active-directory/identity-protection/overview-identity-protection ) are setting the "time" field not in the ISO 8601 Datetime format, but in the "general date long time" format (see https://docs.microsoft.com/en-us/dotnet/standard/base-types/standard-date-and-time-format-strings#GeneralDateLongTime ).
Thus the month and day fields seem to be mixed up in these cases; e.g., events that were actually collected on the 6th of April (according to col_ts) are sorted into the repos on the 4th of June (because of the wrong log_ts).
Also, alert rules on these events then trigger months later, when the wrongly sorted events slip into the current window of the search time range. The following screenshot shows how the timestamp format of Azure AD Identity Protection differs from the usual ISO 8601 format.
Do you know if it is somehow possible to change this log timestamp format somewhere in the Azure AD settings?
Or does the compiled normalizer for the EventHub events have to be adjusted?
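To illustrate the ambiguity: the .NET "general date long time" format writes month/day in US order, so a consumer that assumes day/month order swaps the two whenever both values are 12 or less. A small Python demonstration (the sample value is made up):

```python
from datetime import datetime

raw = "4/6/2022 7:24:11 AM"  # .NET general date long time: month/day/year

as_intended = datetime.strptime(raw, "%m/%d/%Y %I:%M:%S %p")  # April 6th
as_misread  = datetime.strptime(raw, "%d/%m/%Y %I:%M:%S %p")  # June 4th
```

An ISO 8601 value such as "2022-04-06T07:24:11Z" is unambiguous, which is why the mismatch only bites for these EventHub applications.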
-
FortiMail Logs not Correctly Separated into Multiple Lines
We recently added a FortiMail appliance as a log source to one of our logpoints and now see an issue during collection and normalization.
It seems that FortiMail is sending the log messages without separating the single messages with a newline or NULL-termination or something else. Thus the syslog_collector is reading from the socket until the maximum buffer length is exceeded.
So we get a maximum-length raw log message (10k characters, which then cuts off in the middle of a log message) containing up to 30 or 40 single log messages written one after the other. The normalizer then normalizes only the first message and discards the rest.
Here is a shortened example of what this looks like (the single messages are written back to back in one raw message):
550 <6>date=2022-06-20 time=07:24:11.992 device_id=[...]
553 <6>date=2022-06-20 time=07:24:11.992 device_id=[...]
479 <6>date=2022-06-20 time=07:24:11.992 device_id=[...]
324 <6>date=2022-06-20 time=07:24:12.279 device_id=[...]
Is there a way to resolve this issue?
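The numeric prefixes in the sample look like RFC 6587 octet-counting framing, where each frame is "<length> <message>". Under that assumption, an intermediate relay could split the stream into single messages before they reach the collector; a Python sketch:

```python
def split_octet_counted(buf):
    """Split a byte buffer of RFC 6587 octet-counted syslog frames:
    each frame is the message length in ASCII digits, a space, then
    exactly that many bytes of message (frames abut with no separator)."""
    messages = []
    pos = 0
    while pos < len(buf):
        space = buf.index(b" ", pos)        # end of the length prefix
        length = int(buf[pos:space])
        start = space + 1
        messages.append(buf[start:start + length])
        pos = start + length
    return messages
```

Run against the sample buffer above, each `<6>date=...` message would come out as its own element, ready to be forwarded individually (e.g. over UDP or newline-framed TCP).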
-
PaloAlto PanOS 10 - Logs not normalized
Hello,
Since we replaced the PaloAlto firewall devices a couple of days ago (the old ones were running PanOS 9.1.7, the new ones are on 10.1.4) for one of our customers, none of the logs coming from the firewalls are normalized anymore (there are thousands of logs in the repo, but the search query “norm_id="*"” shows no results).
We are using the same policies (collection, normalization etc.) as before, and the firewall admin says that they just migrated the configuration from the old to the new devices and cannot see any changes in the log configuration settings.
I already restarted all normalizer services, even rebooted the LP and completely recreated the device configuration.
We are using the latest (5.2.0) PaloAlto Application plugin on LogPoint 6.12.2, and its details clearly state that PanOS 10 is supported (Palo Alto Network Firewall – ServiceDesk # LogPoint). Taking a look at the raw logs, I cannot see any difference between the log formats of PanOS 9 and 10. I also tried adding the “PaloAltoCEFCompiledNormalizer” to the normalization policy (it “only” included the PaloAltoNetworkFirewallCompiledNormalizer), but nothing helped.
Does anyone have any thoughts on what the issue might be, or what else I can check before I open a support ticket? Is there any way to debug the normalization process on the LogPoint CLI?
Regards
Andre
-
New in KB: Addressing delayed logs and their uncertainty in Logpoint
Hi All,
We are delighted to share our latest KB article addressing the difference between two fields in logs, log_ts (event creation time) and col_ts (event ingestion time into Logpoint), and how they can alter the expected behavior of the Logpoint service. You can access the article via the link below:
-
Resolving timestamp related issues in Normalization
Hi All,
Sometimes we face an issue like an alert not being triggered or a dashboard widget not being populated. There could be many possible reasons. Among them, one is a huge time gap between log_ts and col_ts . In this article, we will be discussing some of the possible causes and sharing tips and tricks to solve this.
Please see the link to the article below :)
-
Common Issues in Normalized logs
Hi All,
Our latest KB article discusses common issues where logs seem normalized ( both norm_id and sig_id are present), but some problems prevent them from being used in analytics.
To read the full article, please follow the link below:
https://servicedesk.logpoint.com/hc/en-us/articles/5830850414493-Common-Issues-in-Normalized-logs
-
New applications available with Logpoint v.7.1.0.
Hi All,
We're happy to share that we have released the following applications on the Help Center:
- Experimental Median Quartile Quantile Plugin v5.0.0
- Vulnerability Management v6.1.1:
- Lookup Process plugin v5.1.0:
- Logpoint Agent Collector V5.2.2
- Universal REST API Fetcher v1.0.0
All applications have been bundled in Logpoint v7.1.0 and are available out of the box.
-
Release of CloudTrail v5.1.0
Dear all,
We're happy to share the public release of CloudTrail v5.1.0.
Please see the details on the link below:
https://servicedesk.logpoint.com/hc/en-us/articles/360000219549
-
Understanding file_keeper Working and Configuration
The `file_keeper` service, primarily used for storing raw logs and then forwarding them to be indexed by the `indexsearcher`, is often run in its default configuration. However, in some real-life situations this might not be sufficient for the type and volume of logs being ingested into LogPoint, so tuning is required. In our newest KB article, we guide you through exactly how to do it.
For more details, please read the full article on the link below:
-
Detecting devices that are not sending logs
Receiving logs is one of the core features of a SIEM solution, but in some cases logs are not received as required. In our newest KB article, we dive into how to monitor log sources using Logpoint alerts to detect when no logs have been received on Logpoint within a certain time range.
To read the full article, please see the link below: https://servicedesk.logpoint.com/hc/en-us/articles/5734141307933-Detecting-devices-that-are-not-sending-logs-