Welcome to Logpoint Community

Connect, share insights, ask questions, and discuss all things about Logpoint products with fellow users.

  • Cisco IronPort Email Security Appliance integration with UEBA. Why did it not work?

    Hi,

    I'm struggling with the integration of the Cisco IronPort Email Security Appliance as a UEBA source.

    The LogPoint documentation - Data Sources For UEBA — UEBA Guide latest documentation (logpoint.com) - indicates that the ESA is supported.

    The corresponding UEBA matching query is: device_category=Email* sAMAccountName=* receiver=* datasize=* | fields log_ts, sender, receiver, userPrincipalName, sAMAccountName, datasize, subject, status, file, file_count

    The ESA never sends a combination of receiver and datasize. The ESA only logs a combination of sender and datasize. The ESA's sender and receiver logs are linked only via the MID (“message_identifier”).
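    An ad-hoc join on the MID can link the two lines, along these lines (the field name message_id is an assumption based on the IronPort normalization), but UEBA seems to expect both values in a single event:

    [sender=* datasize=*] as s1 join [receiver=*] as s2 on s1.message_id = s2.message_id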

    Has anyone seen or done this integration of Cisco's ESA with UEBA? Is it running correctly for you?

    Thanks.

    BR

    Johann

    Johann Sampl
  • Will LP 6 still receive updates?

    Hello,

    I have some minor questions regarding LP “policies” on security vulnerabilities:

    Will LogPoint 6 still receive patches to fix security vulnerabilities? E.g. LP 7.0.1 fixes the polkit vulnerability. As polkit was discovered AFTER the latest patch for LP 6 (6.12.02), there is a good chance LP 6 is affected by it too, but there is no patch available, and I didn't find any information stating that LP 6.12.02 is NOT affected by this vulnerability.

    I am currently not keen to upgrade my LP installations from 6.12.2 to LP 7, but there have been some vulnerabilities for Linux recently (Log4Shell, polkit, Dirty Pipe, now zlib) with a good chance of LP being affected by them. If LP 6 will not receive patches anymore, I would have to update (fast).

    Generally speaking, is there any documentation on how long the different LP versions are supported?

    Also, is there a website, newsletter, etc. to get a quick overview or (even better) automatic notification when new LP patches / software updates are released?

    Right now I log into the service desk, browse to the product site, and check manually, which is rather time consuming.

    Andre

    Andre Kurtz
  • Unable to receive the logs from O365?

    Using Logpoint to fetch logs from Microsoft Office 365, but unable to receive the email logs (e.g. mail delivered) apart from the mail delivery failure logs.

    Able to fetch the logs like:

    -Mail delivery failure

    Not able to receive the logs like:

    -Mail delivered

    Any suggestion? Any solution?

    CSO Integrations
  • Old Community closing down today.

    Dear All,

    We would like to inform you that the old LogPoint Community at https://servicedesk.logpoint.com/hc/en-us/community/posts is closing down today (25.03.2022), and all community activity will be directed to this community.

    CSO Integrations
  • Export large amount of raw logs

    Hi @all

    I need to export a large amount of raw logs - about 450 GB.

    Is it possible for me to export this amount in one go via the Export Raw Logs functionality or do I need to export the raw logs sequentially?

    Thx a lot!

    Karolina Rozanka
  • Masterclass - Denmark recording

    Thank you to everyone who attended our latest Masterclass for the Nordics. Don't forget to register for our next Masterclass on 26 April; you can read more here: https://go.logpoint.com/Nordic_Masterclass_2022. If you didn't get the chance to watch it live, you can find the recording and the presentation here.

    CSO Integrations
  • How to create health alerts in Logpoint for monitoring

    Hi Team,

    Could you please help us create health alerts in Logpoint, for example when CPU usage is above 95% or memory usage is above 80%.
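    Something like the following is what we are imagining, assuming the health metrics are normalized with fields named cpu_usage and memory_usage (both field names are guesses; the actual names depend on the normalization):

    cpu_usage > 95 OR memory_usage > 80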

    Thanks & Regards

    Satya

    Satya Pathivada
  • Using Timestamp in Search Template Variable?

    Is it somehow possible to use a timestamp in a search template variable?

    For example, I want to check that log_ts lies between two timestamps.

    Therefore I added “fields” in the template config and used them in the base queries. But it always complains either about the quotes (“) or about the slashes inside the timestamp string (e.g. "2022/03/10 08:30:22").

    See the example below:

    If I now use the base queries in a widget, it throws the said errors.

    Detailed configuration to reproduce as follows:

    Fields:

    Field | Display Text | Value
    start_session_timestamp | Session Start Timestamp | 2022/03/10 08:30:22
    end_session_timestamp | Session End Timestamp | 2022/03/10 09:30:22

    Basequeries:

    step2_between_timestamps
    log_ts >= "{{ start_session_timestamp }}" log_ts <= "{{ end_session_timestamp }}"

    Widget:

    Name: Test
    Query: {{step2_between_timestamps}}
    Timerange: 1 Day

    Is this a bug or am I doing it wrong?

    Markus Nebel
  • Query returning no logs when field is [ true ]?

    Hey folks,

    First time posting here - I’ve got a bit of a strange issue when querying for a specific type of log.

    We have our Azure AD logging to Logpoint and I wanted to search for any account updates where the previous value of the ‘AccountEnabled’ field was ‘true’.

    "ModifiedProperties": [
    {
    "Name": "AccountEnabled",
    "NewValue": "[\r\n false\r\n]",
    "OldValue": "[\r\n true\r\n]"
    },
    ]

    As you can see from my screenshot below, there is this field in the normalized log which outputs the previous value, but when querying under the same timeframe after clicking on that specific field, the query shows 0 logs.

    Is this something I'm doing wrong, is it a bug in how the search query interprets the value, or is it a normalisation issue?

    Field that I’m interested in searching for (with the matching value)
    Clicking the field and searching with this query shows 0 logs. I’ve also tried re-formatting the query, but no dice.

    We are using the built-in Azure AD normalizer, with most of the default fields. Any ideas how I might resolve this/work around this would be great.
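    A possible workaround, given that the stored value contains brackets and \r\n line breaks, might be the match() approach described elsewhere in this community; a sketch, assuming the normalized field is named old_value (substitute the actual field name from the normalizer):

    old_value=* | process eval("was_enabled=match(old_value,'.*true.*')") | filter was_enabled=true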

    CSO Integrations
  • Searching for special characters in fields like the wildcard star "*"

    When searching for special characters in field values in Logpoint, just pasting them into a regular
    key = value
    expression can often result in searches not working as intended from the user's perspective, as the Logpoint search language will interpret the character differently from the intention.
    For instance, searching for fields with a star “*” character results in getting all results that have a value in that specific key, as Logpoint uses the star “*” as a wildcard character, which basically means “anything”.

    key = * will result in all logs with a field called key

    Instead of using the key-value pairs to search, we can use the built-in command match to find any occurrences of the value that we are looking for. In this example we will search for the star “*”, frequently referred to as the wildcard.

    We have some logs that have a field called policy, in which we would like to find all occurrences of the star “*” character. To do this, we first ensure that the policy field exists in the logs that we search by adding the following to our search:

    policy = *

    Next we want to use the command called match, which can be used with the process command eval. If we read the docs portal (Plugins → Evaluation Process Plugin → Conditional and Comparison Functions), we can see that the match command takes a field and a regex and outputs true or false to a field:


    | process eval("identifier=match(X, regex)")

    In the above example:

    • identifier is the field to which the boolean value true or false is returned. This can be any field name that is not currently in use, e.g. identifier or has_star
    • match is the command
    • X is the field in which we want to find the match
    • regex is where we copy our regex, surrounded by single quotes ''

    So with this in mind, we just need to create our regex, which can be done with your favourite regex checker: copy a potential value and write the regex. In this case we wrote the following regex, which basically says: match any character, including whitespace, until matching a star character *, then match any character, including whitespace, after it. This regex matches the full field value if there is one or more stars “*” in it.
    .*\*.*

    Now we just need to add it to our search and do a quick chart to structure our results a bit. The search will look like the following:

    "policy"="*"

    | process eval("has_star=match(policy,'.*\*.*')")

    | chart count() by policy, has_star

    The search results can be seen below:


    From here, a simple filter command can be used to filter on results with the star in the policy field by adding:
    | filter has_star=true

    The same approach can also be used to match things that are not special characters, e.g. finding logs with a field value that starts with A-E or 0-5, as in the sketch below.
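    For instance, assuming match() must match the full field value (as in the star example above), a regex along these lines could flag such values:

    policy=* | process eval("starts_a_to_e=match(policy,'[A-Ea-e0-5].*')") | chart count() by policy, starts_a_to_e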

    Nicolai Thorndahl
  • Customer open hours sessions - limited spots available! :)

    We still have a few spots available for our exclusive customer open hour sessions with LogPoint experts from our engineering, customer success, support, and global services teams.

    You might have questions like:

    • How do I activate SOAR on top of my SIEM V7.0?
    • How do I create a Trigger?
    • Do I need to pay to activate my SOAR?
    • Or something completely different. We are here to help you.

    The Open Hour sessions are:

    Upgrading to LogPoint 7 is free. Visit the LogPoint Help Center to download LogPoint 7.
    We look forward to answering your questions and supporting your experiences with LogPoint SIEM+SOAR.

    CSO Integrations
  • We are excited to announce our newest global service: Playbook Design Service.

    Our converged SIEM+SOAR performs automated investigation and response to cybersecurity incidents using playbooks. Playbook Design Service is an additional service that assists organizations with refining and optimizing their manual incident response processes into documented workflows and automated playbooks tailored to their organization. Our service encompasses the complete playbook lifecycle, from understanding your specific needs to the creation, development, and testing of the playbook. Having our Global Services experts by your side enables you to utilize your SIEM to its fullest extent, reducing your workload and increasing your ROI on security controls.

    For more information, download our Playbook Design Service brochure: https://go.logpoint.com/playbook-design-service?_ga=2.39629923.1196326192.1645625385-1446914226.1645171249&_gac=1.261194623.1642752963.CjwKCAiA0KmPBhBqEiwAJqKK412rigizVIxknwM7T0qJ3YeUrzEpvCi5Q4a5OEID4NJS455Nz2QDixoCaZUQAvD_BwE

    CSO Integrations
  • High Availability Repos usage

    While deploying LogPoint with High Availability repos, I ran a few test scenarios on how HA behaves that I thought would be relevant to share.

    Repositories can be configured as high availability repositories, which means that the data is replicated to another instance. As a result, logs remain searchable in a couple of scenarios:

    In the first scenario, if the repo fails on the primary data node (LP DN1), searches can fall back to HA Repo 1 on the secondary data node (LP DN2). This could, for instance, happen because the disk was faulty or removed, or because the permissions on the path were set incorrectly. This scenario can be seen in the picture below, where Repo 1, which is configured with HA on the primary data node (LP DN1), is unavailable but still searchable, as the secondary data node (LP DN2) still has the data in the HA Repo 1 repo. In this scenario, Repo 2 and Repo 3 are still searchable.

    First HA scenario where one HA repo fails

    In the second scenario, if the primary data node (LP DN1) is shut down or unavailable, the data can be searched from the secondary data node (LP DN2). However, this only works if the primary data node (LP DN1) is configured as a remote LogPoint on the secondary data node (LP DN2), so that when selecting repos in the search bar on the secondary data node (LP DN2), the repos from the primary data node (LP DN1) can be seen and selected (this also applies before the primary data node goes down or becomes unavailable). The premerger will then know that it can search the HA repos stored on the secondary data node (LP DN2) even though the repos on the primary data node (LP DN1) cannot be reached. In this case, Repo 1 can be searched via HA Repo 1, and Repo 3 can also be searched; Repo 2 is not searchable.

    Second HA scenario where the full primary server is down or unavailable

    Nicolai Thorndahl
  • Exclusive customer open hour sessions with LogPoint experts - Register now!

    Are you using LogPoint 7 but have questions about SIEM or SOAR?

    For a short period we are offering exclusive customer open hour sessions with LogPoint experts from our engineering, customer success, support, and global services teams.

    You might have questions like:

    • How do I activate SOAR on top of my SIEM V7.0?
    • How do I create a Trigger?
    • Do I need to pay to activate my SOAR?
    • Or something completely different. We are here to help you.

    The Open Hour sessions are:

    Upgrading to LogPoint 7 is free. Visit the LogPoint Help Center to download LogPoint 7.
    We look forward to answering your questions and supporting your experiences with LogPoint SIEM+SOAR.

    CSO Integrations
  • Using Alert to populate a Dynamic List without the Alert firing

    I wanted to share on the community how you can use an Alert rule to populate a dynamic list.

    1. Create the Dynamic list you want to populate.
    2. The age limit on the Dynamic list is how long the data from the Alert will stay in the dynamic list before the values are removed.
    3. Create the Alert that can populate the dynamic list.
    4. Search Interval: defines how often the search runs on the LogPoint. Every search interval, it will update the dynamic list if it finds new values or prolong existing values in the dynamic list.
    5. You can set the condition on the Alert to something like Trigger condition: Greater than "99999" so that it never fires to the incidents view.
    6. However, the Alert still needs to find results in the | process toList() part of the search query to populate the Dynamic List; see the sketch below.

    This is a way to use an alert to automate the process of populating a dynamic list without the alert firing and cluttering the incidents view.
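    A minimal sketch of such an alert query, assuming a dynamic list named SUSPICIOUS_IPS and a normalized field source_address (both names are placeholders; check the exact toList() syntax in the documentation for your version):

    label=Threat source_address=* | process toList(SUSPICIOUS_IPS, source_address)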

    /Gustav

    Gustav Elkjær Rødsgaard
  • After update to 7.0 TimeChart does not work anymore

    Be aware if you are going to upgrade to 7.0: there is a bug in the TimeChart function, and it will not work.

    Answer from support:

    Hi Kai​,

    We are extremely sorry for the inconvenience caused by it; a fix has been applied in the upcoming patch for 7.0.1.

    So if you need it, maybe you should wait until 7.0.1 is out.

    Regards Kai

    Kai Gustafson
  • How to correlate more than 2 lines of logs?

    We have a Cisco IronPort which is analyzing emails.

    Each email analysis process generates multiple lines of logs that can be related to each other by a unique id (the normalized field “message_id”).
    However, I am now lacking ideas on how I can correlate more than two log lines, e.g. with a join.

    My goal is to first search for logs where the DKIM verification failed. After that, I would like to see ALL log lines that contain the same message_id as the first "DKIM" log line. The number of log lines can vary.

    Here are some of my approaches, which unfortunately do not give the desired result:

    [message="*DKIMVeri*"] as s1 join [device_ip=10.0.0.25] as s2 on s1.message_id = s2.message_id

    This only returns two log lines, not all lines matching s1.message_id = s2.message_id. A “right join” doesn't work either, even though the documentation indicates it should.

    [4 message="*DKIMVeri*" having same message_id]

    “having same” needs to specify the exact number of logs, while this information is unknown. Furthermore, a result is returned in which only the normalized fields behind the “having same” clause are further usable, not those of the found events. Also, the filter “message” here breaks the whole concept.

    Do you have any ideas how to solve the issue?

    Markus Nebel
  • How to restrict incident content to one event?

    Hi folks,

    When one sets up an alert, all the rows matching the search are sent into one alert. I have a use case where this is counterproductive, because we need to be able to track SLAs and impacted customers. Basically, we're concentrating all EDR alerts from many platforms in one repo and want to trigger one incident per event.

    I fear the limit parameter will hide other events, and playing with both limit and time range does not seem deterministic.

    Does anyone know how I could achieve one incident per row returned by the alert search?

    Thanks

    Didier CAMBEFORT
  • RegEx in fulltext search?

    I want to search for absolute Windows path names that do NOT start with the A, B, or C drive letters.

    I tried queries like this:

    path in ['[D-Z]:\\*']

    but this doesn't work. Any ideas?
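    Based on the match() technique from the “Searching for special characters” post in this community, something like the following might work (assuming a doubled backslash escapes a literal backslash inside the regex; adjust if your version differs):

    path=* | process eval("non_abc_drive=match(path,'[D-Zd-z]:\\.*')") | filter non_abc_drive=true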

    Markus Nebel
  • Microsoft Dynamics NAV application

    Hey,

    We have a requirement for analysing Microsoft Dynamics 365 logs. My understanding is that Dynamics NAV is now Dynamics 365 Business Central.

    In this case, the log source will be Microsoft Dynamics 365 Sales Enterprise. Does anyone know or have any experience using this LogPoint application, and will it parse the logs properly?

    Many thanks,

    Brandon Akal
  • nested joins in queries

    Hello,

    I am trying to automate getting some statistics I have to report to the executive relating to the number of resolved and unresolved alerts we are dealing with as a security team.
    I'm starting with the alerts out of Office 365, as I thought that might be easier. The end goal is to aggregate the alerts from a number of different sources and provide either a regular report or a dashboard for the executive. However, I am starting small.

    Getting the number of resolved alerts is fairly straightforward:

    norm_id="Office365" label="Alert" status="Resolved" host="SecurityComplianceCenter" alert_name=* | chart count()

    however, getting the unresolved alerts is not quite as easy. My initial test involved negating the status field in the query above. Whilst this gave a result, it was not very accurate.

    The problem appears to be that the status can be in one of three states, “Active”, “Investigating” or “Resolved”, so my first attempt was counting multiple log entries for the same alert.

    After some more experimentation, I have come up with:

    [norm_id="Office365" label="Alert" -status="Resolved" host="SecurityComplianceCenter" alert_name=*] as search1  
    left join
    [norm_id="Office365" label="Alert" status="Resolved" host="SecurityComplianceCenter" alert_name=* ] as search2
    on search1.alert_id = search2.alert_id | count()

    This sort of works, and the result is more accurate, but still very different from what Office 365 shows.

    I think I need to ensure that an entry with an Active or an Investigating state only counts once per alert_id before it is checked for a corresponding resolved entry for that alert_id.

    I am not sure how to achieve this, whether it would need a nested join, or whether nested joins are even possible.
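    One partial idea, assuming the distinct_count() aggregation is available in this LogPoint version, would be to collapse the duplicate Active/Investigating entries by counting distinct alert_ids, although this still includes alerts that were later resolved (which the join above tries to remove):

    norm_id="Office365" label="Alert" -status="Resolved" host="SecurityComplianceCenter" alert_name=* | chart distinct_count(alert_id) as open_alert_ids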

    Any hints, or better ways of achieving this would be greatly appreciated.

    Thanks

    Jon

    Jon Bagshaw
  • McAfee ePO - Some logs are not normalized

    Some logs coming from the McAfee ePO server are not being normalized. At first glance it seems that McAfee introduced a new log type regarding PrintNightmare, which LP does not recognize. I asked the customer, and he indeed uses McAfee to prevent users from installing new print drivers.

    We are using LP 6.12.02 and the McAfee application 5.0.1. The normalization policies include:

    • McAfeeEPOXMLCompiledNormalizer
    • LP_McAfee EPO XML
    • LP_McAfee EPO Antivirus
    • LP_McAfee EPO Antivirus DB
    • LP_McAfee EPO Antivirus DB Generic

    I just added McAfeeVirusScanNormalizer. Maybe this will do the trick.

    Example log (I replaced some information with REMOVED BY ME):

    <29>1 2022-01-24T06:52:07.0Z ASBSRV-EPO EPOEvents - EventFwd [agentInfo@3401 tenantId="1" bpsId="1" tenantGUID="{00000000-0000-0000-0000-000000000000}" tenantNodePath="1\2"] ???<?xml version="1.0" encoding="UTF-8"?><EPOevent><MachineInfo><MachineName>REMOVED BY ME</MachineName><AgentGUID>{a231b576-9e3a-11e9-2dbc-901b0e8e1ab2}</AgentGUID><IPAddress>REMOVED BY ME</IPAddress><OSName>Windows 10 Workstation</OSName><UserName>SYSTEM</UserName><TimeZoneBias>-60</TimeZoneBias><RawMACAddress>901b0e8e1ab2</RawMACAddress></MachineInfo><SoftwareInfo ProductName="McAfee Endpoint Security" ProductVersion="10.7.0.2522" ProductFamily="TVD"><CommonFields><Analyzer>ENDP_AM_1070</Analyzer><AnalyzerName>McAfee Endpoint Security</AnalyzerName><AnalyzerVersion>10.7.0.2522</AnalyzerVersion><AnalyzerHostName>GPC2015</AnalyzerHostName><AnalyzerDetectionMethod>Exploit Prevention</AnalyzerDetectionMethod></CommonFields><Event><EventID>18060</EventID><Severity>3</Severity><GMTTime>2022-01-24T06:48:33</GMTTime><CommonFields><ThreatCategory>hip.file</ThreatCategory><ThreatEventID>18060</ThreatEventID><ThreatName>PrintNightmare</ThreatName><ThreatType>IDS_THREAT_TYPE_VALUE_BOP</ThreatType><DetectedUTC>2022-01-24T06:48:33</DetectedUTC><ThreatActionTaken>blocked</ThreatActionTaken><ThreatHandled>True</ThreatHandled><SourceUserName>NT-AUTORITÄT\SYSTEM</SourceUserName><SourceProcessName>spoolsv.exe</SourceProcessName><TargetHostName>REMOVED BY ME</TargetHostName><TargetUserName>SYSTEM</TargetUserName><TargetFileName>C:\Windows\system32\spool\DRIVERS\x64\3\New\KOAK6J_G.DLL</TargetFileName><ThreatSeverity>2</ThreatSeverity></CommonFields><CustomFields target="EPExtendedEventMT"><BladeName>IDS_BLADE_NAME_SPB</BladeName><AnalyzerContentVersion>10.7.0.2522</AnalyzerContentVersion><AnalyzerRuleID>20000</AnalyzerRuleID><AnalyzerRuleName>PrintNightmare</AnalyzerRuleName><SourceProcessHash>b0d40c889924315e75409145f1baf034</SourceProcessHash><SourceProcessSigned>True</SourceProcessSigned><SourceProcessSigner>C=US, S=WASHINGTON, L=REDMOND, O=MICROSOFT CORPORATION, CN=MICROSOFT WINDOWS</SourceProcessSigner><SourceProcessTrusted>True</SourceProcessTrusted><SourceFilePath>C:\Windows\System32</SourceFilePath><SourceFileSize>765952</SourceFileSize><SourceModifyTime>2020-07-08  08:54:39</SourceModifyTime><SourceAccessTime>2021-03-05  10:58:36</SourceAccessTime><SourceCreateTime>2021-03-05  10:58:36</SourceCreateTime><SourceDescription>C:\Windows\System32\spoolsv.exe</SourceDescription><SourceProcessID>2852</SourceProcessID><TargetName>KOAK6J_G.DLL</TargetName><TargetPath>C:\Windows\system32\spool\DRIVERS\x64\3\New</TargetPath><TargetDriveType>IDS_EXP_DT_FIXED</TargetDriveType><TargetSigned>False</TargetSigned><TargetTrusted>False</TargetTrusted><AttackVectorType>4</AttackVectorType><DurationBeforeDetection>28068597</DurationBeforeDetection><NaturalLangDescription>IDS_NATURAL_LANG_DESC_DETECTION_APSP_2|TargetPath=C:\Windows\system32\spool\DRIVERS\x64\3\New|TargetName=KOAK6J_G.DLL|AnalyzerRuleName=PrintNightmare|SourceFilePath=C:\Windows\System32|SourceProcessName=spoolsv.exe|SourceUserName=NT-AUTORITÄT\SYSTEM</NaturalLangDescription><AccessRequested>IDS_AAC_REQ_CREATE</AccessRequested></CustomFields></Event></SoftwareInfo></EPOevent>

    Andre Kurtz
  • LP Alert rule correct? (LP_Windows Failed Login Attempt using an Expired Account)

    Hello,

    I am currently taking a look at the alert rules shipped with LogPoint, trying to figure out which of these are applicable to our environment, and sometimes I find something that I think is not correct (keep in mind, I am neither an expert regarding LogPoint nor InfoSec). I do not know whether LogPoint has a bug tracker where I can post/ask for clarification.

    E.g.

    Alert rule  - LP_Windows Failed Login Attempt using an Expired Account (LP 6.12.2)

    “This alert is triggered whenever user attempts to login using expired account.”

    The search query is

    norm_id=WinServer* label=User label=Login label=Fail sub_status_code="0xC0000071" -target_user=*$ -user=*$ -user IN EXCLUDED_USERS | rename user as target_user, domain as target_domain, reason as failure_reason

    As far as I understand the Windows documentation (4625(F) An account failed to log on. (Windows 10) - Windows security | Microsoft Docs), the sub-status 0xC0000071 means the login was attempted with an expired password, not with an expired account, which would be 0xC0000193.

    So shouldn't the search query use the sub-status 0xC0000193, or am I missing something? (I do not see the big impact of a login attempt with an expired password, while I would like to be alerted when an expired account tries to log in.)
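    If that reading of the Microsoft documentation is correct, the fix would presumably be to swap the sub-status code and keep the rest of the shipped rule unchanged:

    norm_id=WinServer* label=User label=Login label=Fail sub_status_code="0xC0000193" -target_user=*$ -user=*$ -user IN EXCLUDED_USERS | rename user as target_user, domain as target_domain, reason as failure_reason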

    Another question:

    I would like to know what “label=User label=Login label=Fail” (or any other shipped label) actually decodes to. However, I cannot find the search package for the Windows labels to take a look at how these labels are “decoded”.

    Andre Kurtz
  • PaloAlto PanOS 10 - Logs not normalized

    Hello,

    Since we replaced the PaloAlto firewall devices a couple of days ago (the old ones were running PanOS 9.1.7, the new ones are on 10.1.4) for one of our customers, none of the logs coming from the firewalls are normalized anymore (there are thousands of logs in the repo, but the search query norm_id="*" shows no results).

    We are using the same policies (collection, normalization, etc.) as before, and the firewall admin says that they just migrated the configuration from the old to the new devices and cannot see any changes regarding the log configuration settings.

    I have already restarted all normalizer services, even rebooted the LP, and completely recreated the device configuration.

    We are using the latest (5.2.0) PaloAlto Application plugin on LogPoint 6.12.2, and its details clearly state that PanOS 10 is supported (Palo Alto Network Firewall – ServiceDesk # LogPoint). Taking a look at the raw logs, I cannot see any difference between the log formats of PanOS 9 and 10. However, I also tried adding the “PaloAltoCEFCompiledNormalizer” to the normalization policy (it “only” included the PaloAltoNetworkFirewallCompiledNormalizer), but nothing helped.

    Does anyone have any thoughts on what might be the issue, or what else I can check before I open a support ticket? Is there any way to debug the normalization process on the LogPoint CLI?
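    One check that might help narrow it down before opening a ticket, assuming device_ip and col_type are still populated for un-normalized events (which may not hold for every collector type), is charting the un-normalized volume per source:

    -norm_id=* | chart count() by device_ip, col_type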

    Regards

    Andre

    Andre Kurtz
  • Spots available for free of charge Troubleshooting training today :)

    Hi All,

    We have some limited spots available for today's Troubleshooting Advanced Training.

    It is FREE OF CHARGE for a limited time. Don't miss this opportunity to learn about the inner workings of LogPoint.

    The course starts at 2 PM CET today and ends at 4 PM.

    Sign up here: https://logpoint.zoom.us/webinar/register/WN_tuBoQmAfTd6MinWokEyCUw?_ga=2.81719319.1236759532.1643011193-1106456273.1632296641

    CSO Integrations
  • Fetching Logs from Oracle DB on Red Hat

    Hello everyone,

    I want to collect Oracle DB logs on Red Hat with the SCP fetcher. Does anyone know the exact locations of the logs?

    NB: I have already tried with the redo*.log files, but they are not readable.

    Regards

    obachane
  • Expect prolonged response time from Support due to faults in submarine internet cables in the Indian Ocean.

    Dear All,

    We have been informed that there is a problem with the undersea cables between India and Europe, resulting in connectivity issues. While work is underway to restore internet services fully and as early as possible, we ask for your patience as we do our utmost to ensure that your tickets are all resolved as fast as possible.

    In the meantime, we encourage you to use this community for instant help on non-critical issues.

    CSO Integrations
  • LogPoint 7.0 is available now!

    We are excited to announce that today we have released LogPoint 7.0.

    With LogPoint 7, SOAR is a native part of the SIEM, which means getting one out-of-the-box tool for the entire detection, investigation and response process.

    To learn more about LogPoint 7.0, access product documentation here: https://docs.logpoint.com/docs/install-and-upgrade-guide/en/latest/ or read our official press release here: https://www.logpoint.com/en/product-releases/streamline-security-operations-with-logpoint-7/

    Should you have any specific 7.0 questions, post them in the Community and we will do our best to address them asap :)

    CSO Integrations
  • Does the Open Door use the Syslog Collector TLS Certificate?

    We have a customer who used the new feature for uploading SSL/TLS certificates for the Syslog Collector via the web interface.

    Does this have any effect on the certificates used by OpenVPN?

    Because currently, after configuring the Distributed LogPoint, we see in the OpenVPN client log ( /opt/immune/var/log/service/remote_con_client_xx.xx.xx.xx/current ) that the certificate cannot be verified:

    2022-01-04_11:12:48.10967 Tue Jan  4 11:12:48 2022 VERIFY ERROR: depth=0, error=unable to get local issuer certificate: XXX

    Markus Nebel
  • [HOW TO] MPS per repo and per log source

    Hello all,

    I would like to visualize:

    ▶️ MPS sent by each log source

    ▶️ MPS per repo_name

    I have managed to create a timechart of MPS per repo_name:

    repo_name=* | timechart count() by repo_name

    Note: this is not really MPS per repo, but log volume per repo.

    But I cannot find how to generate the equivalent for each log source.
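    Would something along these lines be the right direction, assuming each log source can be identified by device_ip (device_name or col_type might fit better, depending on the setup)?

    device_ip=* | timechart count() by device_ip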

    Thanks for your help!

    Louis MILCENT
