- Exabeam Site Collector
- Network Ports
- Install the Exabeam Site Collector
- Filtering Incoming Syslog Events in Exabeam Site Collector
- Filtering Outbound Logs in Exabeam Site Collector
- How to Direct Kafka Input to Exabeam Site Collector
- Supported Exabeam Site Collector Changes
- Troubleshoot the Exabeam Site Collector
- Capture Site Collector Diagnostics Using Exabeam Support Package
- Scenario 1: No logs are transmitted nor received
- Scenario 2: Kafka Google Cloud Storage (GCS) collectors have not appeared on Data Lake UI
- Scenario 3: If logs are not uploaded to GCS where logs are not on Data Lake
- Scenario 4: Unable to accept incoming syslog, active directory context, Splunk logs, or Incident Responder integrations
- Scenario 5: Unable to pull LDAP from SaaS
- Scenario 6: Cannot send after transport endpoint shutdown
- Scenario 8: Too many arguments in command /tools/config.parser.sh
- Other scenarios
- How to Migrate to New Exabeam SaaS Site Collector
- How to Uninstall Exabeam Site Collector
- Exabeam Site Collector Services
How to Direct Kafka Input to Exabeam Site Collector
Kafka sources may reside in various locations inside and outside your deployed environment, protected behind a firewall. You can leverage Kafka for a variety of use cases, including using it as an interim log storage or distribution point. A Kafka agent is an efficient means to send logs to a site collector (SC), which ingests and forwards them to SaaS or on-premises Data Lake.
Before implementing an external Kafka source, confirm your environment meets the following:
No proxy services are in use between the SC and Kafka source
Your Kafka source has a network connection for data traffic to the Site Collector (SC)
A SC has been installed at the SC host (For more information on SC installation, see Install the Exabeam Site Collector)
Data is shipped in JSON or plain text format only
Data is shipped without Kafka headers
Messages are less than 1 MB in size
Kafka 1.1 and later is in use
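The network-connectivity prerequisite can be sanity-checked from the SC host with a small script. This is only a sketch: the hostname and port below are placeholders for your environment, not values from the Exabeam installer.

```shell
# Minimal TCP reachability check from the SC host to a Kafka broker.
# KAFKA_HOST and KAFKA_PORT are placeholders -- substitute your broker address.
check_port() {
  # Returns 0 if a TCP connection to $1:$2 succeeds within 3 seconds.
  timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

KAFKA_HOST="kafka.example.com"
KAFKA_PORT=9092
if check_port "$KAFKA_HOST" "$KAFKA_PORT"; then
  echo "Kafka broker ${KAFKA_HOST}:${KAFKA_PORT} is reachable"
else
  echo "Cannot reach ${KAFKA_HOST}:${KAFKA_PORT} -- check firewall rules"
fi
```

This checks only that the port accepts TCP connections; it does not validate the Kafka protocol or TLS handshake.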
Supported deployment types:
Kafka message ingestion without authentication
TLS-configured Kafka using certificates, but without client login-password authentication
Important
Not all TLS configurations are supported. Verify your configuration with an Exabeam technical representative.
Note
Compression depends on the external Kafka configuration. Messages can be sent with:
GZip
Snappy
No compression
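Compression is controlled on the producer side of the external Kafka deployment via the standard Kafka producer setting `compression.type`. A minimal producer properties fragment might look like this (the choice of `gzip` is illustrative):

```properties
# Producer-side compression for the external Kafka source.
# Valid values include gzip, snappy, and none.
compression.type=gzip
```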
Ensure you have the latest and matching Exabeam Site Collector installer version for your deployment. For more information, see Install the Exabeam Site Collector.
Unpack the Exabeam Site Collector installer package at your Kafka host. See all available installation options with the help command:
sudo <Exabeam_Site_Collector_installer>/bin/external-kafka.sh --help
If the connection between the Kafka source and the SC uses SSL, generate authentication certificates at the Kafka host before you start the Kafka installation, then copy the generated PEM files to the SC host.
Run the gen-kafka-certs.sh script at the external Kafka host (the script is found in the Exabeam Site Collector installer directory at the SC).

Warning

Generating new key and trust stores will affect existing authentication configurations. Reconfigure existing SSL connections before running this script.

sudo ./bin/gen-kafka-certs.sh
A successfully executed script produces the following message:
Certificates generation process finished
Kafka CA certificate: .../kafka-ca.pem
Kafka client certificate: .../kafka-cert.pem
Kafka client key: .../kafka-key.pem
Kafka keystore file: .../kafka.keystore.jks
Kafka truststore file: .../kafka.truststore.jks
Five files are generated. Copy the generated PEM files to the SC host. (JKS files remain in place.)
kafka-ca.pem          # root certificate
kafka-cert.pem        # client certificate
kafka-key.pem         # client key
kafka.keystore.jks    # Kafka keystore
kafka.truststore.jks  # Kafka truststore
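Before copying the PEM files, you can sanity-check them with openssl. To keep the sketch runnable anywhere, it first creates a throwaway self-signed pair purely for illustration; on a real host, run the two inspection commands against the files gen-kafka-certs.sh produced.

```shell
# Throwaway self-signed pair for illustration only -- substitute the real
# kafka-cert.pem / kafka-key.pem paths on your Kafka host.
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo-key.pem \
  -out demo-cert.pem -days 1 -subj "/CN=demo-kafka" 2>/dev/null

# Inspect the certificate subject and validity window.
openssl x509 -in demo-cert.pem -noout -subject -enddate

# Confirm the private key is intact.
openssl rsa -in demo-key.pem -noout -check
```

A certificate whose subject or expiry looks wrong here will also fail once the SC tries to use it, so this is a cheap early check.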
The keystore/truststore password used to generate the files is found in gen-kafka-certs.sh. Replace the default password before running this script:

cat gen-kafka-certs.sh | grep password=
You should have the following:
A copy of the Exabeam Site Collector installation package unpacked at the Kafka host
The names of the Kafka topic(s) to subscribe to
For SSL connections, copy authentication certificate files to the SC host (see How to Generate Authentication Certificates for SSL Connection)
kafka-ca.pem (root certificate)
kafka-cert.pem (client certificate)
kafka-key.pem (client key)
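Copying the PEM files to the SC host is typically done with scp. The SC hostname and destination directory below are placeholders, and the scp line is left commented so the sketch is safe to dry-run.

```shell
# Sketch: copy the client-side PEM files from the Kafka host to the SC host.
# SC_HOST and DEST_DIR are placeholders -- adjust for your environment.
SC_HOST="sc-host.example.com"
DEST_DIR="/opt/exabeam/certs"
PEM_FILES=(kafka-ca.pem kafka-cert.pem kafka-key.pem)

for f in "${PEM_FILES[@]}"; do
  echo "would copy $f -> ${SC_HOST}:${DEST_DIR}/"
  # scp "$f" "${SC_HOST}:${DEST_DIR}/"   # uncomment to perform the copy
done
```

Note that only the three PEM files are copied; the JKS keystore and truststore stay on the Kafka host.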
Run the installation steps that best apply to your deployment environment and data flow:
Use this installation method if your environment does not need or support encrypted connections.
Configure a plaintext listener at the external Kafka host. Edit the following parameters in kafka/config/server.properties:

listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://<kafka_hostname|kafka_ip>:9092
Restart the Zookeeper and Kafka services at the Kafka host to apply the configuration. Confirm the configuration is correct by checking the logs of both services at the Kafka host.
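In addition to the service logs, you can confirm the listener is actually bound after the restart. This is a sketch using the standard ss utility; port 9092 matches the plaintext listener configured above.

```shell
# Check whether anything is listening on the given TCP port on this host.
listening() {
  ss -tln 2>/dev/null | grep -q ":$1 "
}

if listening 9092; then
  echo "Kafka listener is up on port 9092"
else
  echo "Nothing is listening on port 9092 -- check the Kafka service logs"
fi
```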
Run the following command at the SC host in the Exabeam Site Collector installer directory:
sudo ./bin/external-kafka.sh --install --name=<name> --kafka-hosts=<addr:port,addr:port> --kafka-topics=<topic1,topic2>

# Where the parameters are:
# --name=<name>                        The unique name of Kafkabeat for External Kafka (it can only contain upper and lowercase letters, and numbers)
# --kafka-hosts=<addr:port,addr:port>  Comma-separated list of External Kafka brokers
# --kafka-topics=<topic1,topic2>       Comma-separated list of External Kafka topics
This installation method requires the kafka-ca.pem authentication file generated at the external Kafka host.
Configure the port at the external Kafka host. Edit the following parameters in kafka/config/server.properties for SSL with server verification on the client side:

listeners=PLAINTEXT://0.0.0.0:9092,SSL://0.0.0.0:9093
advertised.listeners=PLAINTEXT://<kafka_hostname|kafka_ip>:9092,SSL://<kafka_hostname|kafka_ip>:9093
Configure the SSL options in kafka/config/server.properties at the external Kafka host:

security.protocol=SSL
ssl.client.auth=<none>
ssl.keystore.location=<full_path_to_kafka.keystore.jks>
ssl.keystore.password=<keystore_password> # password used to generate file
ssl.truststore.location=<full_path_to_kafka.truststore.jks>
ssl.truststore.password=<truststore_password> # password used to generate file
Here is an example configuration with Kafka host paths, using server-based verification:
security.protocol=SSL
ssl.client.auth=none
ssl.keystore.location=/home/exabeam/certs/kafka.keystore.jks
ssl.keystore.password=test1234
ssl.truststore.location=/home/exabeam/certs/kafka.truststore.jks
ssl.truststore.password=test1234
Restart the Zookeeper and Kafka services at the Kafka host to apply the configuration. Confirm the configuration is correct by checking the logs of both services at the Kafka host.
Run the following command at the SC host in the Exabeam Site Collector installer directory:
sudo ./bin/external-kafka.sh --install --name=<connection_name> --kafka-hosts=<kafka_hostname|kafka_ip>:9093 --kafka-topics=<kafka_topic> --certificate-authority=/<full_path>/kafka-ca.pem
Here is an example of an installation:
sudo ./bin/external-kafka.sh --install --name=test1 --kafka-hosts=your.host.name:9093 --kafka-topics=your.topic --certificate-authority=/path/to/kafka-ca.pem
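Before running the installer, you can check from the SC host that the broker's SSL listener presents a certificate that verifies against the copied CA file. The host, port, and CA path below are placeholders, and the openssl line is left commented so the sketch has no side effects.

```shell
# Sketch: verify the broker's SSL listener against the copied CA certificate.
# KAFKA_HOST, SSL_PORT, and CA_FILE are placeholders for your setup.
KAFKA_HOST="your.host.name"
SSL_PORT=9093
CA_FILE="/path/to/kafka-ca.pem"

# openssl s_client -connect "${KAFKA_HOST}:${SSL_PORT}" -CAfile "$CA_FILE" \
#   </dev/null 2>/dev/null | grep "Verify return code"
echo "would verify ${KAFKA_HOST}:${SSL_PORT} against ${CA_FILE}"
```

A "Verify return code: 0 (ok)" result indicates the CA file matches the broker's certificate chain.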
This installation method requires the kafka-ca.pem, kafka-key.pem, and kafka-cert.pem authentication files generated at the external Kafka host.
Configure the port at the external Kafka host. Edit the following parameters in kafka/config/server.properties:

listeners=PLAINTEXT://0.0.0.0:9092,SSL://0.0.0.0:9093
advertised.listeners=PLAINTEXT://<kafka_hostname|kafka_ip>:9092,SSL://<kafka_hostname|kafka_ip>:9093
Configure the SSL options in kafka/config/server.properties at the external Kafka host:

security.protocol=SSL
ssl.client.auth=<required>
ssl.keystore.location=<full_path_to_kafka.keystore.jks>
ssl.keystore.password=<keystore_password> # password used to generate file
ssl.truststore.location=<full_path_to_kafka.truststore.jks>
ssl.truststore.password=<truststore_password> # password used to generate file
Here is an example configuration with Kafka host paths, using client certificate verification (mutual authentication):
security.protocol=SSL
ssl.client.auth=required
ssl.keystore.location=/home/exabeam/certs/kafka.keystore.jks
ssl.keystore.password=test1234
ssl.truststore.location=/home/exabeam/certs/kafka.truststore.jks
ssl.truststore.password=test1234
Restart the Zookeeper and Kafka services at the Kafka host to apply the configuration. Confirm the configuration is correct by checking the logs of both services at the Kafka host.
Run the following command at the SC host in the Exabeam Site Collector installer directory:
sudo ./bin/external-kafka.sh --install --name=<connection_name> --kafka-hosts=<kafka_hostname|kafka_ip>:9093 --kafka-topics=<kafka_topic> --certificate=/<full_path>/kafka-cert.pem --certificate-authority=/<full_path>/kafka-ca.pem --key=/<full_path>/kafka-key.pem
Here is an example of an installation:
sudo ./bin/external-kafka.sh --install --name=test1 --kafka-hosts=your.host.name:9093 --kafka-topics=your.topic --certificate-authority=/path/to/kafka-ca.pem --certificate=/path/to/kafka-cert.pem --key=/path/to/kafka-key.pem
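The mutual-TLS handshake the installer will rely on can also be tested up front with openssl s_client, this time presenting the client certificate and key. All paths and the host are placeholders, and the command is left commented so the sketch is side-effect free.

```shell
# Sketch: test the mutual-TLS handshake using the three copied PEM files.
# KAFKA_HOST and all paths are placeholders for your environment.
KAFKA_HOST="your.host.name"
SSL_PORT=9093

# openssl s_client -connect "${KAFKA_HOST}:${SSL_PORT}" \
#   -CAfile /path/to/kafka-ca.pem \
#   -cert   /path/to/kafka-cert.pem \
#   -key    /path/to/kafka-key.pem \
#   </dev/null 2>/dev/null | grep "Verify return code"
echo "would test mutual TLS to ${KAFKA_HOST}:${SSL_PORT}"
```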
Run the script with the -list flag. For example:
sudo <Exabeam_Site_Collector_installer>/bin/external-kafka.sh -list
Use external-kafka.sh --uninstall to remove the Kafka service on the host:

sudo ./bin/external-kafka.sh --uninstall --name=<kafka_broker_name>
Here is an example of an uninstall:
sudo ./bin/external-kafka.sh --uninstall --name=test1
A successful uninstall produces messages like:
Parsing current options
- Action: uninstall
- Name: test1
Uninstalling...
- Uninstalling External Kafka test1...
- Uninstalling External Kafka manager for test1 ...
- Deregister manager config: /opt/exabeam/beats/test1/manager
- Deregister manager agent: abbdd3e5b92440899a44315c0bf9d56a
- Uninstalling External Kafka worker for test1 ...
[Removing the Kafkabeat for External Kafka test1 is done!]
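After uninstalling, you can confirm the per-connection collector service is gone. The sketch below uses the service naming pattern shown in the troubleshooting section (exabeam-kafka-<connection_alias>-collector), with the example connection name test1.

```shell
# Confirm the collector unit for the removed connection no longer exists.
# "test1" matches the example connection name used above.
NAME="test1"
if systemctl status "exabeam-kafka-${NAME}-collector" >/dev/null 2>&1; then
  echo "service still present -- uninstall may have failed"
else
  echo "service exabeam-kafka-${NAME}-collector is gone"
fi
```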
Reset the listener port configuration at the external Kafka host.
If no data has been sent or received, verify that the collector is running at the external Kafka host.

sudo systemctl status exabeam-kafka-<connection_alias>-collector
If messages stop without apparent reason, inspect the logs at the Kafka host.
sudo cat /opt/exabeam/beats/<hostname>/worker/logs/kafkabeat
If the Kafka log is larger than 1 MB, enable log truncation by editing the processors parameter in /opt/exabeam/beats/<hostname>/worker/kafkabeat.yml. Set max_bytes to 1048576 bytes (1 MB). Alternatively (but not at the same time), you can limit the event log size by the number of characters by editing max_characters.

processors:
  - truncate_fields:
      fields:
        - message
      max_bytes: 1048576
      max_characters: 1000
Verify that the parameters used during installation are applicable to your environment. Review the full list of options using <Exabeam_Site_Collector_installer>/bin/external-kafka.sh --help:

-help                                Print this help section
-uninstall                           Uninstall the Kafkabeat for External Kafka
-list                                List all the installed Kafkabeats for External Kafka
-name=<name>                         The unique name of Kafkabeat for External Kafka (it can only contain lowercase letters, and numbers)
-kafka-hosts=<addr:port,addr:port>   Comma-separated list of External Kafka brokers
-kafka-topics=<topic1,topic2>        Comma-separated list of External Kafka topics
-certificate-authority=<path>        The path to the certificate authority file (*.pem) that is used in Kafka SSL configuration to verify the SSL connection with Kafka server
-certificate=<path>                  The path to the certificate file (*.pem) that is used in Kafka SSL configuration for client certificate authorization (must be used with -key flag)
-key=<path>                          The path to the key file (*.pem) that is used in Kafka SSL configuration for client certificate authorization