Welcome to Logpoint Community

Connect, share insights, ask questions, and discuss all things about Logpoint products with fellow users.

  • Search for IP Range

    Hi Everyone,

    Was wondering if it's possible to search for an IP range among logs collected.
    For example, I might want to search for anything between 10.0.0.1 and 10.0.0.50, which would make investigations easier than searching for individual IPs.

    Thanks in advance.

    Andy Clare
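    For reference, the range test itself can be sketched with Python's stdlib ipaddress module, assuming parsed events with a source_address field (the field name is illustrative, not a fixed Logpoint schema):

```python
import ipaddress

def in_range(ip: str, start: str, end: str) -> bool:
    """Check whether ip falls between start and end (inclusive)."""
    addr = ipaddress.ip_address(ip)
    return ipaddress.ip_address(start) <= addr <= ipaddress.ip_address(end)

# Filter a batch of parsed log events on the source_address field.
events = [{"source_address": "10.0.0.7"}, {"source_address": "10.0.1.99"}]
hits = [e for e in events if in_range(e["source_address"], "10.0.0.1", "10.0.0.50")]
print(hits)  # only the 10.0.0.7 event
```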
  • Request for information on the API Access URL

    Hi,

    Could you please tell me how to find the access URL for the Logpoint API?

    Thank you in advance.

    Regards,

    Siawash

    Micropole
  • New training platform and events calendar

    Logpoint is happy to announce an update to Certified Logpoint training. Please see all upcoming sessions on logpoint.com under Events, and have a look at our new training platform, Logpoint Academy 🤗

    On Logpoint Academy you will find some free content, and if you have purchased training and received a redemption code from your sales representative, this is the place to enter it to unlock your next training course!

    Nanna Dalbjørn Skov
  • distinct_count and followed by Issue while fetching Azure AD sign in logs

    Office365 logs are sending duplicate events, so the generic use case doesn't really work:

    [10 label=User label=Login label=Fail having same user] as s1 followed by [label=User label=Login label=Successful] as s2 on s1.user = s2.user


    [col_type=office365 label=User label=Login label=Fail | chart distinct_count(id) as CNT by user | filter CNT>2] as s1 followed by [col_type=office365 label=User label=Login label=Successful] as s2 on s1.user = s2.user | chart count() by s2.log_ts,s2.user

    Here, "id" represents the request ID in Azure AD, which is unique, and that's what I want.

    1. Even if there isn't any output from s1, I still get results from the total query.
    2. Also, I don't get exactly the followed-by event; I get all success events in the timeframe.
    Amit Chaugule
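    The deduplicate-then-correlate logic the query aims for can be modeled step by step outside Logpoint; a sketch, assuming events carry the id and user fields from the post:

```python
def correlate(fails, successes, min_fails=2):
    """Deduplicate failed logins by request id, count per user,
    and keep successes only for users over the failure threshold."""
    seen, per_user = set(), {}
    for ev in fails:
        if ev["id"] in seen:  # duplicate Office365 event, skip
            continue
        seen.add(ev["id"])
        per_user[ev["user"]] = per_user.get(ev["user"], 0) + 1
    flagged = {u for u, n in per_user.items() if n > min_fails}
    return [s for s in successes if s["user"] in flagged]

fails = [{"id": "a", "user": "bob"}, {"id": "a", "user": "bob"},
         {"id": "b", "user": "bob"}, {"id": "c", "user": "bob"}]
successes = [{"user": "bob"}, {"user": "alice"}]
print(correlate(fails, successes))  # [{'user': 'bob'}]
```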
  • Unable to see some Pre-configured Playbook Guides

    Hi,

    I cannot see the following Pre-configured Playbook Guides from https://docs.logpoint.com/soar?p=Logpoint&page=Pre-configured%20Playbook%20Guides .

    1. Brute Force Detected - Multiple Unique Sources Playbook
      https://docs.logpoint.com/docs/brute-force-detected-multiple-unique-sources-playbook/en/latest/
    2. Credential Dumping - Registry Save Playbook
      https://docs.logpoint.com/docs/credential-dumping-registry-save-playbook/en/latest/
    3. Default Brute Force Attempt Multiple Sources Playbook
      https://docs.logpoint.com/docs/default-brute-force-attempt-multiple-sources-playbook/en/latest/
    4. Password Spray Playbook
      https://docs.logpoint.com/docs/password-spray-playbook/en/latest/
    5. PsExec Tool Execution Detected Playbook
      https://docs.logpoint.com/docs/psexec-tool-execution-detected-playbook/en/latest/

    All links direct to the page showing

    Permission Denied
    You don't have the proper permissions to view this page. Please contact the owner of this project to request permission.

    How can I see these pages?

    Best,

    Kaz

    Kaz Yoshinaga
  • Search query for vulnerability score divided into three different levels.

    Hi,

    We are using the Rapid7 fetcher, and with the score field I want to make a widget where we divide these scores into three different levels, namely:

    • Low with a score between 0 and 600.
    • Medium with a score between 600 and 800.
    • High with a score between 800 and 1000.

    My issue is that I can't find the right search query to see this data the way I explained. I hope you may know the right query I need to get these divided levels to work.

    Thanks in advance.

    Martin Vorstenbosch
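    The banding logic itself is a simple threshold mapping; a sketch, with the boundary handling as an assumption (a score of exactly 600 counted as Medium, 800 as High):

```python
def severity(score: float) -> str:
    """Map a Rapid7 risk score (0-1000) to one of three levels.
    Boundary choice (600 -> Medium, 800 -> High) is an assumption."""
    if score < 600:
        return "Low"
    if score < 800:
        return "Medium"
    return "High"

scores = [125, 600, 799.5, 950]
print([severity(s) for s in scores])  # ['Low', 'Medium', 'Medium', 'High']
```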
  • File moved and File deleted

    Hello Logpoint Community!

    I’ve recently begun trying to create a search template to look up a user and see recently moved or recently deleted files. I’m assuming this needs to be two separate templates.

    Anyhow, when browsing the file audit logs, I was baffled by the number of logs generated by moving one file. In our environment, moving one file generates upwards of 14 log entries, all with actions like “requested” and “access”; some log entries have the path of the file, some don’t.


    Anyways, before I commit what I assume would be a long time trying to create this from scratch, I was wondering if some of you would be able to share a template to look up files moved by a user.

    In my mind it would look something like this.

    | Timestamp | Username | File_path_old | File_path_new |

    Any help at all would be appreciated!

    Thanks in advance

    Andreas Emil Weinreich Holm Andersen
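    A move in a file audit log often appears as a delete at the old path plus a create at the new path, by the same user, close together in time; pairing those yields exactly the table sketched above. A rough sketch with an illustrative event shape and time window (not the actual audit schema):

```python
def pair_moves(events, window=5):
    """Pair 'delete' and 'create' file-audit events by user within a
    time window (seconds) into (ts, user, old_path, new_path) rows."""
    rows = []
    deletes = [e for e in events if e["action"] == "delete"]
    creates = [e for e in events if e["action"] == "create"]
    for d in deletes:
        for c in creates:
            if c["user"] == d["user"] and 0 <= c["ts"] - d["ts"] <= window:
                rows.append((c["ts"], c["user"], d["path"], c["path"]))
                break
    return rows

events = [
    {"ts": 100, "user": "amy", "action": "delete", "path": r"\old\a.txt"},
    {"ts": 101, "user": "amy", "action": "create", "path": r"\new\a.txt"},
]
print(pair_moves(events))  # one (ts, user, old_path, new_path) row
```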
  • Correlate target_id and target_user

    Is there an easy way to correlate the target_id and target_user for Windows events that do not have a target_user value?
    For example: when a user is added to a group, only the target_id is displayed in the event, but we also want to show the target_user in the event or dashboard.

    Demis van Putten
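    The join itself can be modeled as enrichment against a SID-to-user table (e.g. exported from AD); a minimal sketch, where the lookup table and field names are illustrative:

```python
def enrich(events, sid_to_user):
    """Fill in target_user on events that only carry target_id,
    using a SID -> username lookup table."""
    for ev in events:
        if "target_user" not in ev and ev.get("target_id") in sid_to_user:
            ev["target_user"] = sid_to_user[ev["target_id"]]
    return events

lookup = {"S-1-5-21-111-222-333-1001": "j.doe"}  # made-up SID
events = [{"event_id": 4728, "target_id": "S-1-5-21-111-222-333-1001"}]
print(enrich(events, lookup))  # target_user filled in from the lookup
```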
  • LogPoint pySigma Backend

    Hi

    Just to draw your attention to this new tool.

    Though still in a preliminary stage, it is certainly worth trying.

    GitHub - logpoint/pySigma-backend-logpoint: Logpoint backend for pySigma that enables seamless conversion of Sigma rules into Logpoint queries.

    Hans-Henrik Mørkholt
  • Playbook not executed when alert is triggered

    Hello,

    I am trying to set up the launch of a plugin when an alert is triggered.


    My alert appears to be working correctly; I receive an email every time it is executed.

    According to the documentation, I've set up the trigger on my Logpoint search node like this:

    SELECT * FROM LogPoint WHERE alertrule_id = 'xxxxxxxxxxxxxxxxxxx' OR name = 'Detection of a Threat 2'

    I have also tried SELECT * FROM LogPoint WHERE alertrule_id LIKE '%xxxxxxxxxxxxxx%'

    Unfortunately, when an alert is triggered, the playbook is not executed.

    Do you have any idea what might be causing this issue? Am I missing something?

    Regards,
    Julien

    Julien Garnier
  • Running LogPoint as Docker Container

    I just managed to run LogPoint as a docker image/container.

    It is relatively simple and could help improve testing setups where you want to start from a fresh Logpoint for each test, make the desired configurations, run the test, and discard the changes at the end.

    Our use case was developing a unit testing framework for alert rules.

    • Spin up the docker container
    • Configure repo, routing policy, normalization policy, processing policy, device and syslog collector
    • Configure the alert rule to test (test object)
    • Send some pre-defined logs via syslog to the docker-logpoint
    • Wait pre-defined time to see if the behaviour of the alert rule is as expected (triggers or doesn’t trigger)
    • Stop the docker container, discarding all changes (configuration, log storage, etc.)
    • Repeat with the next test scenario

    Here is what I did to run Logpoint in a container. I did this on a Linux machine (Debian 12) with docker.io installed:

    • Download latest OVA (here logpoint_7.4.0.ova)
    • Extract the OVA (which is actually a tarball)
      • tar xf logpoint_7.4.0.ova
    • Convert the VMDK disk image to a raw disk image with qemu-img
      • qemu-img convert -O raw LogPoint-7.4.0.vmdk LogPoint-7.4.0.raw
    • Figure out the start position of the LVM partition in the disk image
      • parted -s LogPoint-7.4.0.raw unit b print
      • Look for the start offset of the 4th partition and copy it without the “B” at the end
    • Create a mountpoint where you mount the LVM partitions to
      • mkdir /mnt/rootfs
    • Create a loop device starting at the 4th-partition position we got from parted
      • losetup -o <START POSITION> -f LogPoint-7.4.0.raw
    • Mount the LVM LVs to our mountpoint
      • mount /dev/LogPoint-vg/root /mnt/rootfs/
      • mount /dev/LogPoint-vg/application /mnt/rootfs/opt/
      • mount /dev/LogPoint-vg/app_store /mnt/rootfs/opt/makalu/app_store/
      • mount /dev/LogPoint-vg/storage /mnt/rootfs/opt/makalu/storage/
    • Compress the whole filesystem into a gzip-compressed tarball for docker import
      • tar -czf image.tar.gz -C /mnt/rootfs/ .
    • Import the tarball as docker image
      • docker import image.tar.gz logpoint:7.4.0
    • Get the new logpoint docker image ID
      • docker images
    • Spin up a container and run an interactive shell inside the container
      • docker run --security-opt seccomp=unconfined --privileged --ulimit core=0 --ulimit data=-1 --ulimit fsize=-1 --ulimit sigpending=62793 --ulimit memlock=65536 --ulimit rss=-1 --ulimit nofile=50000 --ulimit msgqueue=819200 --ulimit rtprio=0 --ulimit nproc=-1 -p 8443:443 -p 8514:514 -p 822:22 -i -t <IMAGE ID> /bin/bash
    • Switch to the new, less memory-consuming Shenandoah Java GC
      • sudo -u li-admin /opt/immune/bin/li-admin/shenandoah_manager.sh enable
    • Start the logpoint processes
      • /opt/logpoint/embedded/bin/runsvdir-start

    I hope this helps some of you!

    Markus Nebel
  • ChatGPT integration

    I'm trying to integrate ChatGPT with Logpoint, and the ChatGPT plugin files that I found on the Logpoint website are JSON files. When I tried uploading them, it says that Logpoint plugins don't support JSON files. Has anyone been in a similar situation and/or knows what to do?

    Thank you

    BabuRajesh
  • Adding IP into LIST

    Hi folks,

    How can I add IP into LIST in playbook?

    I have created a playbook where I am extracting a malicious IP and want to add it into a LIST (SOAR). It would be great if someone has an idea which node I should use, or any other way.

    Nirmal Unagar
  • search current hash of kaspersky into logpoint

    We need to search hashes in Logpoint from our current endpoints and servers. How can we do that? Is there any package we need to install for a particular application, or anything else?

    Also, we have Kaspersky EDR and have imported its package into Logpoint. We are getting the Windows logs only, not the hashes from Kaspersky. We need to configure Logpoint to receive the hashes from Kaspersky.

    Syed Faisal Qadri
  • search current hash into logpoint without adding or connecting with threat feeds

    We need to search hashes in Logpoint from our current endpoints and servers. How can we do that? Is there any package we need to install for a particular application, or anything else?

    Syed Faisal Qadri
  • How to extract user from regex in search?

    Hello folks,

    I am attempting to extract data for users who have browsed torrents. I've applied a regex query to match users, but it's returning all users instead of only those matching the regex.

    Here is the query: "user"=* application=bittorrent | process regex("^[a-zA-Z]\.[a-zA-Z]+\d*\.\d{2}$",user )

    If anyone knows what I am missing, I'd love to hear it.

    Thanks

    Nirmal Unagar
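    One likely cause (an assumption, not confirmed): an extraction step like process regex adds fields for matching events rather than dropping non-matching ones, so an explicit filter on the match result is still needed. The pattern itself behaves as intended when used as a filter, which can be checked outside Logpoint:

```python
import re

# Pattern from the post: initial, dot, surname, optional digits, dot, two digits.
pattern = re.compile(r"^[a-zA-Z]\.[a-zA-Z]+\d*\.\d{2}$")

users = ["j.smith.01", "admin", "a.brown2.99", "service_account"]
matching = [u for u in users if pattern.match(u)]
print(matching)  # ['j.smith.01', 'a.brown2.99']
```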
  • 【Japanese language support】We can provide/support a Japanese language menu

    Dear Team,

    Since we need a Japanese Web UI, we want to provide one; please incorporate it into your source code.

    The Japanese market really needs support in our own language.

    Kindly understand our situation and cooperate with us.

    Could you do that?

    regards,

    Yoshihiro

    Yoshihiro Ishikawa
  • Has anyone tried getting log data from Topdesk, in particular audit data?

    This is a cloud service, and we are looking for a solution to get its data to our on-prem LP servers.

    Regards Kai

    Kai Gustafson
  • Change timestamp on incoming logs

    Hi!

    I have several log sources that come in UTC time format. My timezone is UTC+2, which messes things up when I perform queries, hunting, event analysis, and reporting.

    My log exporters often send syslog in UTC time format, which is RFC-compliant behavior.

    Is it possible to apply some sort of normalization package to these incoming logs to fix this?
    Can I try some queries that change the log_ts & col_ts fields to the UTC+2 timezone instead of the default UTC?

    Thanks

    Aleksander Stanojevic
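    The durable fix is usually to set the correct timezone on the device or collector configuration; the conversion itself is just an offset shift, sketched here with a fixed +02:00 offset (an assumption; a real zone also has DST):

```python
from datetime import datetime, timedelta, timezone

def to_utc_plus_2(ts_utc: str) -> str:
    """Re-render a UTC timestamp string in a fixed UTC+2 offset."""
    dt = datetime.fromisoformat(ts_utc).replace(tzinfo=timezone.utc)
    return dt.astimezone(timezone(timedelta(hours=2))).isoformat()

print(to_utc_plus_2("2024-06-01T10:00:00"))  # 2024-06-01T12:00:00+02:00
```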
  • FileShare missing logs

    Hi,

    I need your help with an issue I have with the FileShare logs. When a user downloads a file from FileShare, I don't get any log showing the downloaded file; I only receive the “synchronisation” logs.

    Could you please tell me what I can do to receive logs from FileShare showing when a user downloads or copies a file?

    Thank you in advance for your help :)

    Regards,

    Siawash

    Micropole
  • Why this error message ?

    Hi,

    I am trying to create a playbook in order to block an account. However, I get an error message whose reason I can't understand.

    Could you please tell me why I have this error message ?

    “Bad/missing Action inputs! Details: Bad rest-action url: /users/...”

    Thank you in advance.

    Regards,

    Siawash

    Micropole
  • Data privacy on dashboards

    I have activated the “Data Privacy” module and configured the fields, concerned groups, etc.

    I have a dashboard whose query contains one of the “encrypted” fields defined in the “Data Privacy” module. Since activation, the dashboard does not display information anymore.

    Is there a way to allow (via request/grant or something else) the dashboard to display the information?

    Thanks,
    Alexandru

    Alexandru ILIOIU
  • SOAR Playbook Creation From LogPoint Alerts

    Hi,

    I need your help regarding SOAR. I have just started with SOAR and I want to create a playbook from Logpoint alerts in order to block an IP address or perform an action. Could you please help me? How can I do that?

    Thank you in advance.

    Regards,

    Siawash

    Micropole
  • Performance Question: Has the order of search parameters influence on search performance?

    Hi everyone!

    Does anyone know if it makes any difference how you order the search parameters in a search query?

    example:

    event_id=1234 event_channel="Security"

    vs.

    event_channel="Security" event_id=1234

    Markus Nebel
  • How to configure AbuseIPDB instance in LogPoint SOAR

    Hi everyone,

    I need to check some source addresses in my playbooks, and I want to use AbuseIPDB for this action, but I can't finish configuring the AbuseIPDB instance and I get some errors. Could you help me with the correct configuration for this instance in the Logpoint SOAR playbook integrations? I have my api_key, but I don't know the correct base_url and threshold parameters for reviewing IP addresses.

    I hope you can help.

    Regards

    Gabriel

    Gabriel Layana
  • how to configure AbuseIPDB instance

    Hi everyone,

    I need your help. I need to check a public IP address in my playbook to obtain some information about it. I want to use AbuseIPDB, but I'm getting an error:

    Bad/missing Action inputs! Details: Bad rest-action url: base url/check

    I think the error is in my AbuseIPDB instance configuration. What are the base_url and threshold values that I have to configure? I have my API key ready.

    Thanks for your help.

    Regards,

    Gabriel

    Gabriel Layana
  • When you comment on a member's reply, it can take weeks before an admin approves your response

    Why are your admins so slow to approve answers? This is very bad for people asking about a topic; they lose interest when it seems like no answers or suggestions are given.

    I have now waited for weeks, several times, to see my suggestions get through. This should be instant, or you should remove the approval process!

    Best regards Kai

    Kai Gustafson
  • Normalisation for Milliseconds

    <123>2024-01-11T11:11:11.123Z hostname
    I want to extract two fields from the raw data above using normalization, so to speak. After normalization I should have the following fields:
    milliseconds=123
    log_ts=2024-01-11T11:11:11Z

    Can I do this in normalization (or signature)? If yes, can you write me the normalization/signature rule?

    This problem can be solved like this:

    | <<:int>><log_ts:datetime_m> <host:all>

    but I don't want that. I want to separate my fields at the beginning using normalization.


    Gökhan Gök
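    The actual rule has to be written in Logpoint's normalization/signature syntax, but the intended split can be stated precisely as a regex with capture groups; a sketch against the sample line from the post:

```python
import re

RAW = "<123>2024-01-11T11:11:11.123Z hostname"

# Groups: priority, timestamp up to seconds, milliseconds, host.
m = re.match(r"<(\d+)>(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})\.(\d{3})Z (\S+)", RAW)
fields = {
    "log_ts": m.group(2) + "Z",
    "milliseconds": m.group(3),
    "host": m.group(4),
}
print(fields)  # {'log_ts': '2024-01-11T11:11:11Z', 'milliseconds': '123', 'host': 'hostname'}
```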
  • several if-else

    Hi everyone, I have an issue:

    I am writing a query that contains more than one if-else. However, because there are too many if-else branches, it does not return any results and gets stuck at “searching”. I wonder if there is a limit on else-if? With a small number of else-if branches the query works, but with too many it unfortunately doesn't. I need help with this! Since the values corresponding to each condition are different, I have to use the else-if structure. I am open to different solutions, by the way.

    The query I wrote is as follows (here I just wanted to draw attention to the number of else-if branches).

    I also get the following error: “No Response from server”

    alert=*
    | process eval("
    foo=if(alert=='xyz1') {return 1.2}
    else-if(alert=='xyz2') {return 1.0}
    else-if(alert=='xyz3') {return 1.21}
    else-if(alert=='xyz4') {return 1.2}
    else-if(alert=='xyz5') {return 1.29}
    else-if(alert=='xyz6') {return 1.25}
    else-if(alert=='xyz7') {return 1.29}
    else-if(alert=='xyz8') {return 1.200}
    else-if(alert=='xyz9') {return 1.24}
    else-if(alert=='xyz10') {return 1.25}
    else-if(alert=='xyz11') {return 2.2}
    else-if(alert=='xyz12') {return 0.2}
    else-if(alert=='xyz13') {return 13.2}
    else-if(alert=='xyz14') {return 1.2}
    else-if(alert=='xyz15') {return 5.2}
    else-if(alert=='xyz16') {return 9.2}
    else-if(alert=='xyz16') {return 55.2}
    else-if(alert=='xyz17') {return 9.2}
    else-if(alert=='xyz18') {return 6.2}
    else-if(alert=='xyz19') {return 10.2}
    else-if(alert=='xyz20') {return 18.2}
    else-if(alert=='xyz21') {return 19.2}
    else-if(alert=='xyz22') {return 9.2}
    else-if(alert=='xyz23') {return 71.2}
    else-if(alert=='xyz24') {return 19.2}
    else-if(alert=='xyz25') {return 16.2}
    else-if(alert=='xyz26') {return 9.2}
    else-if(alert=='xyz27') {return 41.2}
    else-if(alert=='xyz28') {return 18.2}
    else-if(alert=='xyz29') {return 19.2}
    else-if(alert=='xyz30') {return 121.2}
    else-if(alert=='xyz31') {return 1.221}
    else-if(alert=='xyz32') {return 11.2}
    else-if(alert=='xyz33') {return 156.2}
    else-if(alert=='xyz34') {return 15.2}
    else-if(alert=='xyz35') {return 12.2}
    else-if(alert=='xyz36') {return 1.2}
    else-if(alert=='xyz37') {return 1.2}
    else-if(alert=='xyz38') {return 15.2}
    else-if(alert=='xyz39') {return 1.2}
    else-if(alert=='xyz40') {return 15.2}
    else-if(alert=='xyz41') {return 16.2}
    else-if(alert=='xyz42') {return 19.2}
    " )

    | timechart count(alert) as cnt by alert, foo every 1 day
    | timechart sum(foo*cnt) as t1, sum(cnt) as num every 1 day
    | timechart sum(t1/num) as risk every 1 day

    Gökhan Gök
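    Whatever the else-if limit turns out to be, the chain above is a pure key-to-value mapping, which is usually better expressed as a lookup table (for instance via an enrichment source, if available) than as branching. A sketch of the mapping, with the poster's weights abbreviated:

```python
# Weight table replaces the long else-if chain; one entry per alert name.
WEIGHTS = {"xyz1": 1.2, "xyz2": 1.0, "xyz3": 1.21, "xyz11": 2.2}  # extend as needed

def foo(alert: str, default: float = 0.0) -> float:
    """Constant-time lookup; no branching limit to hit."""
    return WEIGHTS.get(alert, default)

print(foo("xyz11"))    # 2.2
print(foo("unknown"))  # 0.0
```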
  • Windows Server DNS Query Log

    Hi all,

    I have configured my Windows Server 2022 DNS server to log DNS queries. We need those logs for security and possible forensic reasons.

    The configuration is done in Windows Event Viewer as described in DNS Logging and Diagnostics | Microsoft Learn. We are using LPAgent to collect other logs from this server.

    The result is an ETL file, which cannot be read with the im_msvistalog plugin of LPAgent.

    I have read that there is an NXLOG EE plugin im_etw out there which should be able to handle this file type, but we do not have the NXLog Enterprise Subscription.

    Is there any other option to collect the dns query logs from the server and import them into LogPoint?

    Is there any best practice for handling Windows DNS server query logs (without using NXLog EE)?

    Kind regards
    Uwe

    Uwe Poliak
