Welcome to Logpoint Community

Connect, share insights, ask questions, and discuss all things about Logpoint products with fellow users.

  • Unable to receive the logs from O365?

    Using Logpoint to fetch logs from Microsoft Office 365, but unable to receive the email logs (e.g. mail delivery) except for the mail delivery failure logs.

    Able to fetch the logs like:

    -Mail delivery failure

    Not able to receive the logs like:

    -Mail delivered

    Any suggestions? Any solutions?

    CSO Integrations
  • Report

    When I try to create a report it says “Invalid number of fields used in process command, expected 7 field(s) but found 8”. Why?

    Michael Schmidt Jensen
  • Has anyone been able to integrate TheHive for managing cases ?

    I am trying to create a use case where I can connect TheHive with LP. Let’s discuss if anyone has been able to do this or is planning to.

    Aaditya Khati
  • Detecting devices not sending logs

    Hi

    I know the following query can be used to create an alarm for devices that haven’t been sending logs:
    | chart count() by device_ip | search 'count()' = 0

    However, in cases where I have multiple IPs for a single device this won’t work, as the IPs not sending data will come up in the search result even though the device is sending data from another IP. Another case where I have issues is when pulling data with, for instance, the O365 fetcher, which will have a device_ip of localhost, which is also the case for the internal LogPoint logs. One way could be to create alerts on the individual repos with some specific characteristics, but I would like to avoid creating multiple alerts for the same thing to reduce search load.

    Any ideas how to build a single alert for detecting devices not sending logs with multiple IPs, and fetchers not fetching any data?
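
    A partial workaround worth testing (an unverified sketch, mirroring the query quoted above): group by device_name instead of device_ip, so that all IPs registered to one device entry aggregate into a single row. This only helps if all of a device’s IPs belong to the same device configuration, and it does not by itself solve the fetcher/localhost case.

    | chart count() by device_name | search 'count()' = 0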

    Nicolai Thorndahl
  • Delete files in storage folder with bash command

    Hi friends,

    I have the problem that the storage folder is now over 90% full.

    Now I wanted to empty the folder using bash commands directly in the disk notification and applied the following to "Command:":

    find /opt/makalu/storage/ -type f -mtime +30 -delete

    Unfortunately without success; the folder grows and grows. Do you have a tip or a solution for this?
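
    As a first diagnostic step it may help to run the same find expression without -delete, to see whether it matches anything at all, and to check where the space actually goes. A minimal sketch using the same paths as above; note that deleting files directly under the storage path can leave indexes pointing at missing data, so adjusting repo retention is usually the safer route:

    # Dry run: list a few of the files the expression would delete
    find /opt/makalu/storage/ -type f -mtime +30 -print | head -20
    # Show which subdirectories actually hold the data
    du -xh --max-depth=1 /opt/makalu/storage/ | sort -h | tail -10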

    Thank you in advance and kind regards

    René Szeli
  • Problem adding McAfee ePO server via Syslog

    We configured our McAfee ePO (5.10) server to send its logs to a syslog server and configured it in the LP accordingly. Yet, when using the “Test Syslog” feature in McAfee ePO, the test failed. Nonetheless, we are receiving logs from the server, but they only contain gibberish.

    [Screenshot: LP raw logs]

    This is, as far as I can tell, not a problem with normalization, as a tcpdump also shows the log payload not being human readable.

    [Screenshot: tcpdump output]

    I already tried to change the charset in the log collection policy from utf_8 to iso8859_15 and ascii, to no avail.

    I found the following McAfee document (KB87927), which says:

    ePO syslog forwarding only supports the TCP protocol, and requires Transport Layer Security (TLS). Specifically, it supports receivers following RFC 5424 and RFC 5425, which is known as syslog-ng. You do not need to import the certificate used by the syslog receiver into ePO. As long as the certificate is valid, ePO accepts it. Self-signed certificates are supported and are commonly used for this purpose.

    So my current guess is that the test connection failed because ePO expects the LP to encrypt the traffic, which it does not do. Yet it still started to send the LP encrypted logs (but which certificate does it use?), hence the gibberish.
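
    If that guess is right, the captured payload should begin with a TLS handshake rather than a readable syslog header. A rough check (a sketch; <epo_ip> and <syslog_port> are placeholders for your collection setup):

    # Dump the first few packets from the ePO server in hex
    tcpdump -i any -c 5 -X host <epo_ip> and port <syslog_port>

    A TLS ClientHello begins with the bytes 16 03, while a plain RFC 5424 message would start with a printable priority value such as <134>.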

    Hence my question: did anyone manage to successfully retrieve usable logs from a McAfee ePO server using syslog, or does anyone have a suggestion as to what is wrong with my configuration?

    Andre Kurtz
  • How to correlate more than 2 lines of logs?

    We have a Cisco IronPort which is analyzing emails.

    Each email analysis process generates multiple lines of logs that can be related to each other by a unique id (the normalized field “message_id”).
    However, I am now lacking ideas for how to correlate more than two log lines, e.g. with a join.

    My goal is to first search for logs where the DKIM verification failed. After that I would like to see ALL log lines that contain the same message_id as the first "DKIM" log line. The number of log lines can vary.

    Here are some of my approaches, which unfortunately do not give the desired result:

    [message="*DKIMVeri*"] as s1 join [device_ip=10.0.0.25] as s2 on s1.message_id = s2.message_id

    This only returns two log lines, not all lines matching s1.message_id = s2.message_id. A “right join” doesn’t work either, even though the documentation indicates it should.

    [4 message="*DKIMVeri*" having same message_id]

    “having same” needs the exact number of logs to be specified, while this information is unknown. Furthermore, in the returned result only the normalized fields behind the "having same" clause are usable, not those of the found events. Also, the “message” filter here breaks the whole concept.

    Do you have any ideas how to solve the issue?

    Markus Nebel
  • Discussion: How do you implement Clients in DHCP networks?

    In our internal research team we observed that it is extremely important to have the logs of the client systems collected in your SIEM, especially of the Windows systems, in the best case with Sysmon together with a sophisticated Sysmon configuration.

    The majority of large-scale “attacks” doesn’t utilize any strange “cyber hacking voodoo”, but uses simple “human naivety” as the initial code execution trigger, like a “mouse click” to “enable content” of a Microsoft Office document with VBA macros that was delivered via email by the attackers. The following malware download, its execution, reconnaissance and lateral movement steps can be easily detected with a good Sysmon configuration, and this in “real time”, before any harm is done or your IDS may throw alerts.

    The main issue is that clients are typically flexible/mobile systems which connect to your enterprise network via different network IP ranges (several LANs, Wifi, VPN, WAN etc.).

    As the current Logpoint design requires either static IPs or whole network ranges, this completely blows up the license model, as you may have a /21 network (for example) with only 100 active devices in it.

    I added a feature request a while back, requesting a re-design of the LPagent or, to be more specific, of the Logpoint configuration module on top of the nxlog used as the LPagent.

    At the moment the LPagent is inconvenient, as it runs a web server on the log source/device/client to accept connections from the Logpoint datanode, which then pushes the nxlog configuration to the log source. This requires Windows firewall rules for incoming connections etc.

    Also, this is only possible with static-IP devices, because the LP datanode acts as the HTTP client and thus needs to know the unique device IP to connect to its web server and push the nxlog config. So this rules out usage on the flexible client systems.

    My idea was to replace the web server by a web client in the first place, so that the LPagent is connecting to the LP datanode (or multiple for load balancing or network separation), instead of the other way around. This reduces the complexity of the LPagent enormously and resolves the firewalling and the static IP issue.

    On the LP datanode side, an agent authentication token should be generated (either one for all devices, or one per device group) and an API endpoint has to be implemented which accepts connections from LPagents from different configurable IP networks.
    The LPagent shall receive the agent authentication token during its installation (either in the installer GUI or as a CLI parameter, so it could be done via group policy or central software control solutions). This token could then be used for an initial agent configuration and identification (e.g. by exchanging a TLS client certificate, an agent/client UUID etc.).

    This would solve the license and IP issue on the LP datanode side, as the LP datanode could then see the total number of individual active devices according to the agent identification (e.g. an agent UUID) and claim the correct amount of licenses. So the LPagent would become IP independent when using the agent authentication token.

    Even WAN log collection could then be possible (via a specially secured connection, of course) if you place a collector in your DMZ.

    So my question to the community is: Are you collecting logs from clients, and if so, how are you doing it?

    My only idea at the moment is to use nxlog 5 with a manual configuration and add multiple collector IPs (this is possible since nxlog 5) for the different possible networks (LAN, VPN, Wifi...). But this would explode the license number if you have a large network.

    Markus Nebel
  • Excel Exports contain HTML encoding - How to avoid?

    When exporting to Excel, the field ‘msg’ contains the same HTML encoding as the GUI.

    This is how some example data are shown in the GUI:

    This is how the Excel spreadsheet looks for the same data:

    This is the source code for the table in the GUI, which shows that it contains the same HTML <span> tags:

    Is it possible to get the Excel export without HTML encoding in the msg field?

    Perhaps it could be an option to enable/disable html <tags> in exports in the preferences menu?

    Mads Pedersen
  • PaloAlto PanOS 10 - Logs not normalized

    Hello,

    Since we replaced the PaloAlto firewall devices a couple of days ago (the old ones were running PanOS 9.1.7, the new ones are on 10.1.4) for one of our customers, none of the logs coming from the firewalls are normalized anymore (there are 1000s of logs in the repo, but the search query norm_id="*" shows no result).

    We are using the same policies (collection, normalization etc.) as before, and the firewall admin says that they just migrated the configuration from the old to the new devices and cannot see any changes regarding the log configuration settings.

    I already restarted all normalizer services, even rebooted the LP and completely recreated the device configuration.

    We are using the latest (5.2.0) PaloAlto Application plugin on LogPoint 6.12.2, and its details clearly state that PanOS 10 is supported (Palo Alto Network Firewall – ServiceDesk # LogPoint). Taking a look at the raw logs, I cannot see any difference in the log format between PanOS 9 and 10. However, I also tried adding the “PaloAltoCEFCompiledNormalizer” to the normalization policy (it “only” included the PaloAltoNetworkFirewallCompiledNormalizer), but nothing helped.

    Does anyone have any thoughts on what the issue might be, or what else I can check before I open a support ticket? Is there any way to debug the normalization process on the LogPoint CLI?

    Regards

    Andre

    Andre Kurtz
  • [HOW TO] MPS per repo and per log source

    Hello all,

    I would like to visualize:

    ▶️ MPS sent by each log source

    ▶️ MPS per repo_name

    I have managed to create a timechart of MPS per repo_name:

    repo_name=* | timechart count() by repo_name

    Note: This is not really MPS per repo, but log volume per repo.

    But I cannot find how to generate the equivalent for each log source.
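
    For reference, a per-device equivalent of the repo query above might look like the sketch below (device_name is assumed to be the field identifying the log source in your taxonomy; the same caveat applies that this is volume per interval rather than true MPS):

    | timechart count() by device_name

    If the interval matters, timechart also accepts an every clause, e.g. timechart count() every 1 minute by device_name.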

    Thanks for your help!

    Louis MILCENT
  • device export script

    Hi

    Today I have a Python script for exporting devices into a CSV file with the following fields:

    device_name,device_ips,device_groups,log_collection_policies,distributed_collector,confidentiality,integrity,availability,timezone

    Does a script exist that also extracts the additional fields:

    uses_proxy, proxy_ip, hostname

    This would make moving devices from LogPoint 5 to LogPoint 6 considerably easier.

    Regards

    Hans

    Hans-Henrik Mørkholt
  • Unable to hunt down the user/process that fails to authenticate on DC

    I monitor for failed authentications on DCs.

    labels: Authentication | Fail | Kerberos | User

    My top failed authentications are from one client/one account that I can’t hunt down. I have looked at all processes and their “credentials”, and installed Sysmon on the client. But I can’t find the process or user. Any ideas how I could hunt this down?
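
    Not an answer, but a sketch of a query that can help narrow this down by charting the failures over fields that normally survive normalization (the field names logon_type and source_address are assumptions and may differ in your taxonomy; <account> is a placeholder):

    label=Authentication label=Fail label=User user=<account> | chart count() by source_address, event_id, logon_type

    On the DC side, the failure code in the Kerberos events (e.g. 4771) usually tells you whether it is a bad password or an unknown account, which can hint at a stale stored credential rather than an interactive user.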

    Network Team
  • Windows SCCM send logs to LogPoint

    Hi!

    I’m curious how to collect logs from SCCM: logs related to endpoint protection, virus alarms, quarantined threats etc.


    I found out that nxlog provides a configuration file for this, but some parts are missing from it, for example an <Output out_syslog> block to point at the syslog destination.

    Microsoft System Center Configuration Manager :: NXLog Documentation
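
    For reference, a minimal sketch of the pieces the linked sample leaves out: an xm_syslog extension, an om_tcp output pointing at the LogPoint collector, and a route wiring it to the input. The input name in_sccm, the IP and the port below are placeholders for your environment, not values from the guide:

    <Extension syslog>
        Module  xm_syslog
    </Extension>

    <Output out_syslog>
        Module  om_tcp
        # LogPoint collector IP and syslog port (placeholders)
        Host    192.0.2.10
        Port    514
        Exec    to_syslog_bsd();
    </Output>

    <Route sccm_to_logpoint>
        Path    in_sccm => out_syslog
    </Route>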

    Does anyone have any experience with this?

    Thankful for replies.

    Aleksander Stanojevic
  • Logpoint collector behind NAT

    Hi Community,

    We have a distributed collector in a remote location and have established a site-to-site VPN between the locations. The scenario is that the IP address of the collector is behind NAT and mapped to a different IP than the actual host IP.

    For example, the system IP of the collector is 172.29.20.80, and the IP of the collector as seen by the remote Logpoint is 172.22.2.2.

    We have made the necessary configuration and ensured the collector is visible in the Logpoint. However, the IP recorded by Logpoint is the actual system IP (not the IP Logpoint should recognize it as), and the status is stuck in the Inactive stage.

    Is this due to the difference in host IP and NAT address?

    CSO Integrations
  • Preferred Way to Fetch Logs from Azure?

    Hi,

    What is LogPoint's recommended way to get logs from Azure AD / Entra ID and any other Azure applications into LogPoint?

    We have noticed that the Azure EventHubs sometimes provide their logs several days late via the message queue. We were able to verify this independently of LogPoint using a fetcher developed in Python.

    How does this work with the “Azure Log Analytics Workspace” module? Can the logs be expected in the SIEM in a timely manner?

    A delay of several hours or days is not workable with the current alert rule concept of searching already indexed time ranges, short of running the alert rules over unrealistically large time ranges.

    Markus Nebel
  • Filter only part of a field

    Hello

    We want to see a statistic of outgoing mails: which domain sends how many.

    Filtering by source user is easy; however, what I need is the domain, not the precise e-mail address.
    Is there a way to filter by only what comes after the @, so I can make a chart with only that information?
    Or is there a way to get the more precise domain?
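
    One approach that may be worth testing is runtime extraction with norm on, capturing everything after the @ into a new field and charting on that. This is an unverified sketch; the capture syntax and its behaviour on the source_user field are assumptions:

    source_user=* | norm on source_user @<domain:.*> | chart count() by domain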

    cheers, and happy holidays

    Mike

    Mike Furrer
  • Re-parse an event for normalization (JSON-event)

    Hi !

    Just an interesting question. I know that other SIEM vendors have problems with this. Maybe LogPoint has a good function for it.

    So I received a JSON event that didn’t normalize because no normalization package was enabled. I enabled the package after I received the event.

    So, to my question: is it possible to parse this event afterwards so that it gets normalized? Or do I have to wait for another event from the same log source to see if that one gets normalized?

    Aleksander Stanojevic
  • What's the use case for 'Add Global Parameters' action in SOAR 1.0.4?

    Hi folks,

    I was wondering if anybody could tell me what the use case is for the new ‘Add Global Parameters’ action added in SOAR 1.0.4? As far as I can see, any output parameter from an action is already accessible from any other?

    From my quick tests it doesn’t look like they pass down to Sub-playbooks either, so are they just meant as a quicker way to access the values within a playbook?

    I couldn’t find any documentation on this, so I was hoping someone else might know the answer.

    CSO Integrations
  • Rename a case within a Playbook?

    Hi folks,

    Another cases and playbooks question - is there a way to update the name of an existing case item from within a Playbook?

    By default, we are generating cases with just the incident ID for identification, but we’d ideally like to be able to update the name of the case once some additional playbooks have run.

    We already have a way to get the case ID etc, it’s just the renaming part we’re stuck on.

    Is this possible?

    CSO Integrations
  • After update to 7.0 TimeChart does not work anymore

    Be aware: if you are going to upgrade to 7.0, there is a bug in the TimeChart function and it will not work.

    Answer from support:

    Hi Kai​,

    We are extremely sorry for the inconvenience caused by it; a fix has been applied in the upcoming patch for 7.0.1.

    So if you need it, maybe you should wait until 7.0.1 is out.

    Regards Kai

    Permanently deleted user
  • Which firewall ports should be opened for the Logpoint server?

    Hi,

    On my firewall I opened port 443 to the destination customer.logpoint.com (172.67.190.81 and 104.21.76.59).

    Now I see on the firewall that the server tries to open connections to the IP addresses 104.16.37.47 and 104.16.38.47 on port 443. Are these connections also needed?

    Best regards,

    Hans Vedder

    Hans Vedder
  • Why is this query wrong?

    Hi,

    when I start a query

    | chart min(log_ts) as min_ts  by min_ts, source_address, destination_address

    I receive the error message:

    could not convert string to float: '/'.

    But why?

    An example for log_ts: 2021/10/11 11:04:54

    I use

    | chart count() as "Count", min(log_ts) as min_ts, max(log_ts) as max_ts

    in a macro, and I am sure that in earlier versions of Logpoint I didn’t receive this error message.

    Currently I use Logpoint version 6.12.0.

    Best regards,

    Hans Vedder

    Hans Vedder
  • ThinkIn feedback wanted

    A big thank you from all of us for joining LogPoint’s ThinkIn 2021!

    As we’re always striving to improve and make the next edition of ThinkIn even better, we would very much appreciate your feedback on ThinkIn 2021.

    Please take a few minutes to share your impressions here in the comments section or Take the ThinkIn 2021 survey

    If you want to revisit ThinkIn 2021, you can find live recordings of main tracks for the two days here:
    ThinkIn 2021 – Day 1
    ThinkIn 2021 – Day 2

    Stay tuned for recordings of individual keynotes, presentations, and breakout sessions.

    CSO Integrations
  • Cisco Stealthwatch send syslog to LogPoint

    Hi!

    I’m wondering if it’s possible to configure Stealthwatch to communicate with LogPoint. I want Stealthwatch to forward events, and even better if it can also forward flows to the SIEM.

    Is this possible?

    All I can find regarding this is the integration with LogPoint’s SOAR to configure different types of actions.
    Adding the Vendors — Cisco Secure Network Analytics (Stealthwatch) SOAR Integration latest documentation (logpoint.com)

    Aleksander Stanojevic
  • Threat Intelligence - What are your experiences / do you have recommendations?

    Hello,

    I just wanted to “pick the brains” of my fellow LP community members regarding TI. Is anyone here actively using the Threat Intelligence feature of LogPoint, and/or does anyone have recommendations and experiences on the matter? Personally I think it could be a very valuable part of a LogPoint environment to increase the detection capabilities, but I have not been able to set it up in a way that would really be beneficial.

    This is mainly due to the fact that I haven’t been able to find a decent (free) TI feed, and to my mind the value of TI stands and falls with the quality of the feed data.


    Most of my customers have their firewalls, spam and web filter devices, and mostly even their centralized AV solution sending their logs to LP. Setting up monitoring of DNS requests wouldn’t be a problem either, so I think we have enough visibility into the network traffic. Having a decent TI feed would allow us to compare these logs against known IoCs (IPs, hostnames, email addresses) and take a look at endpoints that have visited known malware URLs (spreading malware, acting as C2 servers etc.) or have received emails from known bad hosts in the past. You could then take a closer look at whether these endpoints have been compromised.

    However, I have tried several freely available TI feeds, and none of them had the quality to be actually useful. Most had a lot of false positives, as the feeds are not updated regularly or contain very outdated information. Additionally, these feeds also had a lot of false negatives (IPs and URLs which had been blocked by Google for days were not included yet). None of my customers has the manpower to sieve through hundreds of incidents a day just to find out the IoC is actually from a malware campaign from 2020.

    What are your experiences with TI feeds, paid or unpaid? I have to admit that, due to the rather poor experiences with free feeds, I did not look into any paid feeds (though I am trying to find the time to take Recorded Future for a test ride :-) I think they still have a demo offer).

    Does anyone of you have a recommendation for a feed? Are paid feeds worth their money, and how much do they roughly cost?

    Regards

    Andre

    Andre Kurtz
  • New in KB: How to handle a repo failed issue

    We occasionally encounter cases where we cannot perform a search because a particular repo has failed. In that situation, the search UI does not allow any searches if that repo is selected.

    What does "repo failed" mean?

    The "readiness" status of service responsible for searching in a particular repo (indexsearcher) is kept by central searching service (merger) in "alive" field for each repo. If the "alive" status of a repo is false in the config file of merger service, failed repo issue is seen when making search in that repo. This could happen if index searcher service for that repo is not running as expected or if the config file of the merger service is not updated according to the status of indexsearcher service.

    Mitigation:

    Whenever we get repo failed for a particular repo, it is always wise to check the logs of the indexsearcher service for that repo.

    tail -f -n 50 /opt/immune/var/log/service/indexsearcher_<repo_name>/current
    # Replace <repo_name> with the actual repo name, e.g. for a repo named "Windows":
    tail -f -n 50 /opt/immune/var/log/service/indexsearcher_Windows/current

    The above command will output the last 50 log lines from the indexsearcher service of that repo.

    You can also check whether indexsearcher is replying to the alive probe by running tcpdump on the query.source.socket port.

    grep "query.source.socket" /opt/immune/etc/config/indexsearcher_<repo_name>/config.jsontcpdump -i any port <query.source.socket port> -Aq

    If indexsearcher is alive, you should see {"isalive":true} and {"alive":true} messages

    If there are no errors in the tail command and "alive":true messages are seen in the tcpdump output, but the failed repo error is still seen when searching, try checking the alive status in the merger config.

    grep -B1 alive /opt/immune/etc/config/merger/config.json

    Potential Scenarios

    1. Indexsearcher service is recently restarted
      If an indexsearcher service for a repo has just been restarted, then for large repos it takes a few minutes to scan the metadata of the stored indexes before searches can be served. During that period, the repo failed error is observed.
    2. LogPoint machine is recently rebooted
      If a LogPoint machine has recently been rebooted, the indexsearcher services take time to initialize. During those few minutes, the repo failed error can be seen for some repos.
    3. Issue in the indexsearcher service
      If there is some error in the indexsearcher service, then the repo failed issue does not resolve on its own.
      In such scenarios, please review the logs of the indexsearcher service as mentioned above. It is recommended to create a support ticket for further investigation and resolution of the problem. It is helpful to include the service log of the indexsearcher service of that repo in the ticket. The log file is located at
      /opt/immune/var/log/service/indexsearcher_<repo_name>/current
    CSO Integrations
  • New in KB: How to handle disk full issues

    Occasionally, we encounter issues where one of the mount points in LogPoint is full. Disk full conditions can have various causes, and it is of utmost importance to immediately free some disk space so that LogPoint can function normally.

    Under disk full situations for /opt and the primary repo path /opt/immune/storage, the log collection of LogPoint will be affected.

    Detection

    To detect disk full situations, we can use the df command.

    df -h

    In the output of this command, we can look either at the percent usage of each mount point or at the available storage space. These indicators will help us detect disk full scenarios.
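
    A slightly more targeted variant, if you only want to see the mount points above a threshold (a sketch using plain GNU df and awk; the 90% threshold is arbitrary):

    # Show only filesystems that are more than 90% full (plus the header row)
    df -h | awk 'NR==1 || $5+0 > 90'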

    Mitigation

    Now, once we have found out that the problem in LogPoint is caused by a lack of storage space, we can dive deeper.

    /opt path has 100% storage used

    The /opt mount point generally stores the necessary config files, service log files, mongoDB data and executables of LogPoint. For the normal functioning of LogPoint it is critical to have some storage space available.
    Since this mount point does not actively store much data in a normal scenario, it is unlikely to be 100% used. But when such a case is encountered, we have to investigate using the du command and find out which directory or file is the cause of the disk getting full. The command that helps here is as follows:

    du -sch <file_1> <file_2> <directory_1> <directory_2>
    # To check all files and folders in the current working directory:
    du -sch *

    It is important to run this command manually across the directories inside /opt to find the culprit. Note: /opt/immune/storage is usually mounted on a different pool or LVM.
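
    To avoid walking the tree by hand, a sorted one-level summary can often point to the culprit directly (a sketch; -x keeps du on the /opt filesystem, so the separately mounted storage path is not counted):

    # Largest items directly under /opt, biggest last
    du -xh --max-depth=1 /opt 2>/dev/null | sort -h | tail -15

    Repeating the same command inside the largest directory narrows it down further.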

    Frequently encountered cases

    1. Storage occupied by old upgrade patch files.
      The old upgrade patch files are stored in the /opt/immune/var/patches/installed/ directory. These patch files range from a few MBs to hundreds of MBs, and they can be the reason for /opt being full. These older patch files can be deleted if we are sure that the old upgrades were successfully installed in LogPoint.
    2. Storage occupied by mongoDB data directory
      The mongoDB data directory is /opt/immune/db/. Sometimes the size of the db can be huge when the LogPoint has too much configuration data.
      In that case, please contact LogPoint support.
    3. Storage occupied by service log files
      The service log files are stored in the /opt/immune/var/log/ directory. In some cases, when a service is in debug mode or due to some errors, log files can swell to an unexpected size. In such cases we have no option but to delete those files. We have to locate such anomalous files and delete them; the same du command can be used to check file sizes.
      Since the content of those files is already indexed into LogPoint's log ingestion pipeline, it is fine to delete the service logs. But only do so if you are sure; otherwise contact LogPoint support to do it.
    4. Storage occupied by nxlog dump files
      We have observed this issue with a few customers when nxlog dumps some files in the directory
      /opt/nxlog/var/spool/nxlog/ .
      These files can cause the /opt mount point to fill up, so cleaning up the dump files or just moving them to other, larger mounts should help. This issue has been addressed in recent versions of LPAgent, so please update it to the latest one to avoid the issue.

    /opt/immune/storage has 100% storage used

    Usually the /opt/immune/storage mount point has more storage space than /opt, because it has to store the logs and indices files as the primary retention path.

    If this mount point gets 100% used, then log collection is halted and related services stop functioning. It is important to fix such issues. To drill down into which directory might be using a lot of space, the same du command does the trick.

    The probable cases when /opt/immune/storage is full are as follows:

    1. Storage occupied by logs and indices
      In most cases, when /opt/immune/storage is full, it is because of the logs and indices. The logs and indices directories grow in size because of the data stored by LogPoint.

      In a normal scenario we would expect the disk size to be estimated properly so that the logs stored will not exceed the provisioned space. Sometimes, however, there might be an abrupt increase in the event rate for some repos. In such scenarios we can either decrease the retention for the repos with the largest amount of data, or we need to allocate more disk to accommodate the increased log volume.
    2. Storage occupied by buffer directories
      There are some buffer directories which can sometimes fill up due to issues in the LogPoint, and that can cause storage full scenarios. These buffer directories are as follows:
      • /opt/immune/storage/highavailability/ - Issue in the highavailability (HA) functionality.
      • /opt/immune/storage/OldLogsKeeper/ - Too many old logs are coming into the LogPoint machine.
      • /opt/immune/storage/FileKeepeer - If there is an issue in the indexsearcher service, then logs are buffered in this directory.

        If any of the above directories are occupying too much space, then please call support for assistance.

    In any of the above situations, if you are not sure, it is important to call support for help. The paths mentioned here are for default installations; with custom changes to the data mount point and so on, the paths might differ.

    Note: The paths /opt/makalu and /opt/immune are actually the same, because in Logpoint /opt/immune is a symlink to /opt/makalu.

    CSO Integrations
  • ThinkIn content available now!

    Once again, a big thank you from all of us for joining LogPoint’s ThinkIn 2021!

    We have collected all of the great keynotes, presentations, and breakout sessions for you to revisit: Thinkin 2021 recordings

    If you haven’t already provided your feedback on Thinkin 2021, we would very much appreciate a few minutes of your time: Take the ThinkIn 2021 survey

    See you for ThinkIn 2022…

    CSO Integrations
  • Installation of Logpoint in AIX machine

    Can we install Logpoint on an AIX 7.2 machine?

    Kimil Timilsina
