Exabeam Data Lake Collector Guide

Debug an Exabeam Data Lake Log Collector Agent

To minimize manual debugging on agent collectors, you can run a debug tool that gathers the information Exabeam's Customer Success team needs.

The script produces a compressed file containing the collector's running status and the installation folder. Send this file to Exabeam Customer Success.

The command for running the tool on Windows:

cd tools
.\cmdbeat.exe diagnosis -installationPath "C:\Program Files" -outputPath C:\Exabeam

The command for running the tool on Linux:

Exabeam_Collector_Manager/tools/cmdbeat diagnosis -installationPath [installation_dir] -outputPath [output_dir]

Help Menu

Usage of diagnosis:
-installationPath string
    installation path for exabeam collectors
-outputPath string
    path for output the diagnosis result

In addition, if it is not apparent that the agent collector is the root cause, review the following:

  • Confirm that the Database collector server is running

    systemctl status exabeam-lms-dblog

  • Check Database collector logs for error events

    journalctl -eu exabeam-lms-dblog

    • If the following event appears, then Logstash has stopped processing:

      [WARN ][] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
      Too many open files

      Logstash has a default limit of 4096 open files. You can manually change this upper limit to resume ingesting.

      1. Confirm the current limit.

        curl -XGET 'localhost:9600/_node/stats/process?pretty'

        The maximum number of open files is listed in the max_file_descriptors field of the process block.

          "process" : {
            "open_file_descriptors" : 1314,
            "peak_open_file_descriptors" : 1327,
            "max_file_descriptors" : 4096,
            "mem" : {
              "total_virtual_in_bytes" : 11518406656
      2. Increase the limit by adding lines into /etc/security/limits.conf.

        sudo vim /etc/security/limits.conf

        Add the following lines:

        root  hard  nofile 65530
        root  soft  nofile 65530
      3. Add the following line to /etc/systemd/system/logstash.service.

        sudo vim /etc/systemd/system/logstash.service

        Add the following line to the [Service] section:

        LimitNOFILE=65536
      4. Reload and restart services.

        sudo systemctl daemon-reload
        sudo systemctl restart logstash
      5. Verify that the limit has loaded.

        curl -XGET 'localhost:9600/_node/stats/process?pretty'

        The new limit should appear in the max_file_descriptors field of the process block.

          "process" : {
            "open_file_descriptors" : 2235,
            "peak_open_file_descriptors" : 2235,
            "max_file_descriptors" : 65536,
            "mem" : {
              "total_virtual_in_bytes" : 21708582912
  • Check for error events in Kafka

    docker exec -it kafka-host1 bash

    cd /opt/kafka/bin

    ./kafka-console-consumer.sh --zookeeper zookeeper-host1:2181 --topic lms.kafka.topic
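The file-descriptor check from the Logstash steps above can also be scripted. The sketch below (the function name check_fd_limit is hypothetical, not part of any Exabeam tooling) parses the node stats output and warns when usage approaches the configured maximum:

```shell
#!/usr/bin/env bash
# Sketch: warn when Logstash nears its open-file limit.
# Pass in the output of:
#   curl -s 'localhost:9600/_node/stats/process?pretty'
check_fd_limit() {
  local stats="$1" open max
  # Extract the current and maximum descriptor counts from the JSON text.
  open=$(printf '%s' "$stats" | grep -o '"open_file_descriptors" : [0-9]*' | grep -o '[0-9]*$')
  max=$(printf '%s' "$stats" | grep -o '"max_file_descriptors" : [0-9]*' | grep -o '[0-9]*$')
  # Warn when more than 90% of the available descriptors are in use.
  if [ "$open" -gt $((max * 9 / 10)) ]; then
    echo "WARNING: $open of $max file descriptors in use"
  else
    echo "OK: $open of $max file descriptors in use"
  fi
}

# Example with the sample stats values from this guide:
check_fd_limit '"open_file_descriptors" : 1314, "max_file_descriptors" : 4096'
```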

How to Set Filebeat Debug Level

You can set the logging level for your Filebeat collector by editing the logging section of the filebeat.yml file. Here is an example of the logging section of a filebeat.yml file:

logging.level: info
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0644

The available levels are debug, info, warning, or error. The default log level is info. They are defined as the following:

debug -- Debug messages, including a detailed printout of all events flushed. Also logs informational messages, warnings, errors, and critical errors. When the log level is debug, you can specify a list of selectors to display debug messages for specific components. If no selectors are specified, the * selector is used to display debug messages for all components.

info -- Informational messages, including the number of events that are published. Also logs any warnings, errors, or critical errors.

warning -- Warnings, errors, and critical errors.

error -- Errors and critical errors.
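For example, to capture debug output from only the publishing component, the logging section might look like the sketch below (the "publish" selector is illustrative; use "*" to enable debug messages for all components):

```yaml
# Sketch: debug logging restricted to the publisher component.
logging.level: debug
logging.selectors: ["publish"]
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0644
```

Remember to restore the level to info after troubleshooting, since debug output is verbose and grows log files quickly.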