- Exabeam Site Collector
- Exabeam Site Collector Network Ports
- Exabeam Site Collector Specifications
- Install Exabeam Site Collector
- Upgrade Exabeam Site Collector
- Advanced Exabeam Site Collector Customizations
- Supported Exabeam Site Collector Changes
- Configure Transport Layer Security (TLS) Syslog Ingestion
- Direct Kafka Input to Exabeam Site Collector
- Add a Secondary Syslog Destination
- Remove a Syslog Destination
- Filter Incoming Syslog Events in Exabeam Site Collector
- Filtering Outbound Logs in Exabeam Site Collector
- Metadata Collected by Site Collector and Supported Agents
- Add OpenVPN After Exabeam Site Collector Installation
- Supported Exabeam Site Collector Changes
- Troubleshoot for Exabeam Site Collector
- Scenario 1: Collector or its status does not appear in the console and no logs reach destination
- Scenario 2: Collector is healthy but no logs are transmitted or received
- Scenario 3: Exabeam Advanced Analytics unable to pull LDAP data
- Scenario 4: Kafka Google Cloud Storage (GCS) collectors have not appeared on Data Lake
- Scenario 5: Logs are not uploaded to GCS and do not appear on Data Lake
- Scenario 6: Unable to accept incoming syslog, active directory context, Splunk logs, or Incident Responder integrations
- Scenario 7: Cannot send after transport endpoint shutdown
- Scenario 8: Too many arguments in command /tools/config.parser.sh
- Other scenarios
- Capture Site Collector Diagnostics Using Exabeam Support Package
- Install and Upgrade Exabeam Site Collector for On-premises and Legacy Deployments
- Prerequisites
- Install Site Collector for Exabeam Data Lake On-premises Deployments
- Installing Site Collector for Exabeam Advanced Analytics On-premises Deployments
- Upgrade Site Collector for Exabeam Data Lake On-premises Deployments
- Upgrade Site Collector for Exabeam Advanced Analytics On-premises Deployments
- Uninstall Exabeam Site Collector
- Migrate to the New-Scale Site Collectors Service
- A. Glossary of Terms
Advanced Exabeam Site Collector Customizations
Exabeam Site Collector's standard features and configurations are usually enough to get your log ingestion and forwarding working. However, your deployment may require customizations.
Here are some customizations you can implement in your site collector:
Supported Exabeam Site Collector Changes
For a list of all options, use site-collector-installer.sh --help.
Below are all supported changes you can make. For other changes, contact Exabeam Customer Success through the Exabeam Community for further guidance.
Operating system updates
Site collector server IP changes
Once the IP of the server has been changed, edit the listener and SSL parameters in /opt/kafka/config/server.properties with the new IP:
advertised.listeners=EXTERNAL_PLAINTEXT://<new_ip>:9092, EXTERNAL_SSL://<new_ip>:9093, INTERNAL_SSL://localhost:9094
Restart Kafka.
sudo systemctl restart kafka
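As a quick sanity check after the restart, you can confirm that Kafka is listening on the expected ports; this is a minimal sketch and assumes the ss utility is available on the host:
# Confirm Kafka bound the expected listener ports after the IP change
sudo ss -tlnp | grep -E ':9092|:9093|:9094'
# Confirm the service came back up cleanly
sudo systemctl status kafka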
Log retention change (default is 24 hours)
Edit log.retention.hours in /opt/kafka/config/server.properties.
log.retention.hours=24
Restart the kafka service.
sudo systemctl restart kafka
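For example, to keep three days of logs instead of the 24-hour default, you would set the value to 72 before restarting the service (72 is an illustrative value only; choose a retention period that fits your disk capacity):
log.retention.hours=72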
RAM re-allocation to logstash
Edit the -Xm parameters in /opt/logstash/config/jvm.options, like the ones shown here:
-Xms<ram>g
-Xmx<ram>g
Restart the logstash service.
sudo systemctl restart logstash
RAM re-allocation to kafka
Edit the -Xm parameters in the export variable in /opt/kafka/bin/kafka-server-start.sh.
export KAFKA_HEAP_OPTS="-Xmx<ram>G -Xms<ram>G"
Restart the kafka service.
sudo systemctl restart kafka
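To confirm the new heap size is in effect after the restart, you can inspect the running Kafka process for its -Xms/-Xmx flags; this is a minimal sketch using standard Linux tools:
# Show the heap flags of the running Kafka JVM
ps -ef | grep -i kafka | grep -o -- '-Xm[sx][0-9]*[GgMm]'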
Configure Transport Layer Security (TLS) Syslog Ingestion
For Hardware and Virtual Deployments Only
Exabeam Site Collector supports TLS Syslog ingestion. Using TLS certificates, you can implement a whitelist in your deployment.
Open port 515 at your firewall for log traffic.
Replace existing default authentication certificates with TLS certificates. Default certificates are located at:
/opt/logstash/inbound-certs/exa-ca.pem
/opt/logstash/inbound-certs/syslog-inbound-key.pem
/opt/logstash/inbound-certs/syslog-inbound-cert.pem
Restart logstash services to apply the certificates.
sudo systemctl restart logstash
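If you want to confirm that the listener presents the new certificate before pointing log sources at it, you can test the TLS handshake from a forwarder host. This is a hedged sketch; the host name and CA file path are placeholders for the CA that signed your replacement certificates:
# Test the TLS syslog listener on port 515 (replace the host and CA path with your values)
openssl s_client -connect <site_collector_host>:515 -CAfile /path/to/your-ca.pem </dev/null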
Direct Kafka Input to Exabeam Site Collector
Kafka sources may reside in various locations inside and outside of your deployed environment, protected behind a firewall. You can leverage Kafka for a variety of use cases, including as an interim log storage and distribution point. A Kafka agent is an efficient means to send logs to a site collector, which ingests and forwards them to SaaS or on-premises Data Lake deployments.
Before implementing an external Kafka source, confirm your environment meets the following criteria:
No proxy services are in use between the site collector and Kafka source
Your Kafka source has a network connection for data traffic to the site collector
A site collector has been installed at the site collector host (For more information on site collector installation, see Install Exabeam Site Collector)
Data is shipped in JSON or plain text format only
Data is shipped without Kafka headers
Messages are less than 1 MB in size
Kafka 1.1 or later is in use
Supported deployment types:
Kafka message ingestion without authentication
TLS-configured Kafka using certificates, but without client login-password authentication
Important
Not all TLS configurations are supported. Verify your configuration with an Exabeam technical representative.
Note
Compression is used depending on the external Kafka configuration. Messages can be set with:
GZip
Snappy
No compression
Ensure you have the latest and matching Exabeam Site Collector installer version for your deployment. For more information, see Install Exabeam Site Collector.
Unpack the Exabeam Site Collector installer package at your Kafka host. See all available installation options with the help command:
sudo <Exabeam_Site_Collector_installer>/bin/external-kafka.sh -help
If the connection between the Kafka source and site collector is to use SSL, generate authentication certificates before you start the Kafka installation. Authentication certificates need to be generated at the Kafka host. Copy the store files to the site collector host.
Run the gen-kafka-certs.sh script at the external Kafka host (the script is found in the Exabeam Site Collector installer directory at the site collector).
Warning
Generating new key and trust stores will affect existing authentication configurations. Therefore, reconfigure existing SSL connections before running this script.
sudo ./bin/gen-kafka-certs.sh
A successfully executed script produces the following message:
Certificates generation process finished
Kafka CA certificate: .../kafka-ca.pem
Kafka client certificate: .../kafka-cert.pem
Kafka client key: .../kafka-key.pem
Kafka keystore file: .../kafka.keystore.jks
Kafka truststore file: .../kafka.truststore.jks
The keystore/truststore password used to generate the files is found in gen-kafka-certs.sh. Replace the default password before running this script.
cat gen-kafka-certs.sh | grep password=
Five files are generated. Copy the generated PEM files to the site collector host. (JKS files remain in place.)
kafka-ca.pem          # root certificate
kafka-cert.pem        # client cert
kafka-key.pem         # client key
kafka.keystore.jks    # kafka keystore
kafka.truststore.jks  # kafka truststore
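The PEM files can be copied to the site collector host with any secure copy method; this is a minimal sketch, and the user name and destination directory are placeholders:
# Copy the client certificate material from the Kafka host to the site collector host
scp kafka-ca.pem kafka-cert.pem kafka-key.pem <user>@<site_collector_host>:/home/<user>/certs/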
You should have the following:
An unpacked copy of Exabeam Site Collector installation package at the Kafka host
Names of Kafka topic(s) to subscribe to
For SSL connections, copy authentication certificate files to the site collector host (see Generate Authentication Certificates for SSL Connection)
kafka-ca.pem (root certificate)
kafka-cert.pem (client certificate)
kafka-key.pem (client key)
Run the installation steps that best apply to your deployment environment and data flow:
Use this installation method if your environment does not need or support encrypted connections.
Configure a plaintext listener at the external Kafka host. Edit the following parameters in kafka/config/server.properties:
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://<kafka_hostname|kafka_ip>:9092
Restart the Zookeeper and Kafka services at the Kafka host to apply the configuration. Verify that the configuration is correct by checking the logs of both services at the Kafka host.
Run the following command at the site collector host in the Exabeam Site Collector installer directory:
sudo ./bin/external-kafka.sh -name=<name> -kafka-hosts=<addr:port,addr:port> -kafka-topics=<topic1,topic2>
# Where the parameters are:
# -name=<name>                        The unique name of Kafkabeat for External Kafka (it can only contain uppercase and lowercase letters, and numbers)
# -kafka-hosts=<addr:port,addr:port>  Comma-separated list of External Kafka brokers
# -kafka-topics=<topic1,topic2>       Comma-separated list of External Kafka topics
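As an illustration, a plaintext installation with hypothetical broker and topic values might look like the following (the name, address, and topics are placeholders only):
sudo ./bin/external-kafka.sh -name=plainkafka1 -kafka-hosts=10.10.1.5:9092 -kafka-topics=weblogs,applogs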
This installation method requires the kafka-ca.pem authentication file generated at the external Kafka host.
Configure the port at the external Kafka host. Edit the following parameters in kafka/config/server.properties for SSL with server verification on the client side.
listeners=PLAINTEXT://0.0.0.0:9092,SSL://0.0.0.0:9093
advertised.listeners=PLAINTEXT://<kafka_hostname|kafka_ip>:9092,SSL://<kafka_hostname|kafka_ip>:9093
Configure the SSL options in kafka/config/server.properties at the external Kafka host.
security.protocol=SSL
ssl.client.auth=<none>
ssl.keystore.location=<full_path_to_kafka.keystore.jks>
ssl.keystore.password=<keystore_password> # password used to generate file
ssl.truststore.location=<full_path_to_kafka.truststore.jks>
ssl.truststore.password=<truststore_password> # password used to generate file
Here is an example configuration with Kafka host paths, using server-based verification:
security.protocol=SSL
ssl.client.auth=none
ssl.keystore.location=/home/exabeam/certs/kafka.keystore.jks
ssl.keystore.password=test1234
ssl.truststore.location=/home/exabeam/certs/kafka.truststore.jks
ssl.truststore.password=test1234
Restart the Zookeeper and Kafka services at the Kafka host to apply the configuration. Verify that the configuration is correct by checking the logs of both services at the Kafka host.
Run the following command at the site collector host in the Exabeam Site Collector installer directory:
sudo ./bin/external-kafka.sh --install -name=<connection_name> -kafka-hosts=<kafka_hostname|kafka_ip>:9093 -kafka-topics=<kafka_topic> --certificate-authority=/<full_path>/kafka-ca.pem
Here is an example of an installation:
sudo ./bin/external-kafka.sh --install -name=test1 -kafka-hosts=your.host.name:9093 -kafka-topics=your.topic -certificate-authority=/path/to/kafka-ca.pem
This installation method requires the kafka-ca.pem, kafka-key.pem, and kafka-cert.pem authentication files generated at the external Kafka host.
Configure the port at the external Kafka host. Edit the following parameters in kafka/config/server.properties.
listeners=PLAINTEXT://0.0.0.0:9092,SSL://0.0.0.0:9093
advertised.listeners=PLAINTEXT://<kafka_hostname|kafka_ip>:9092,SSL://<kafka_hostname|kafka_ip>:9093
Configure the SSL options in kafka/config/server.properties at the external Kafka host.
security.protocol=SSL
ssl.client.auth=<required>
ssl.keystore.location=<full_path_to_kafka.keystore.jks>
ssl.keystore.password=<keystore_password> # password used to generate file
ssl.truststore.location=<full_path_to_kafka.truststore.jks>
ssl.truststore.password=<truststore_password> # password used to generate file
Here is an example configuration with Kafka host paths, using client certificate verification:
security.protocol=SSL
ssl.client.auth=required
ssl.keystore.location=/home/exabeam/certs/kafka.keystore.jks
ssl.keystore.password=test1234
ssl.truststore.location=/home/exabeam/certs/kafka.truststore.jks
ssl.truststore.password=test1234
Restart the Zookeeper and Kafka services at the Kafka host to apply the configuration. Verify that the configuration is correct by checking the logs of both services at the Kafka host.
Run the following command at the site collector host in the Exabeam Site Collector installer directory:
sudo ./bin/external-kafka.sh --install -name=<connection_name> -kafka-hosts=<kafka_hostname|kafka_ip>:9093 -kafka-topics=<kafka_topic> -certificate=/<full_path>/kafka-cert.pem -certificate-authority=/<full_path>/kafka-ca.pem -key=/<full_path>/kafka-key.pem
Here is an example of an installation:
sudo ./bin/external-kafka.sh --install -name=test1 -kafka-hosts=your.host.name:9093 -kafka-topics=your.topic -certificate-authority=/path/to/kafka-ca.pem -certificate=/path/to/kafka-cert.pem -key=/path/to/kafka-key.pem
Run the script with the -list flag. For example:
sudo <Exabeam_Site_Collector_installer>/bin/external-kafka.sh -list
Use external-kafka.sh -uninstall to remove the Kafka service on the host.
sudo ./bin/external-kafka.sh -uninstall -name=<kafka_broker_name>
Here is an example of an uninstall instruction:
sudo ./bin/external-kafka.sh -uninstall -name=test1
A successful uninstall will produce messages like:
Parsing current options
 - Action: uninstall
 - Name: test1
Uninstalling...
 - Uninstalling External Kafka test1...
 - Uninstalling External Kafka manager for test1 ...
 - Deregister manager config: /opt/exabeam/beats/test1/manager
 - Deregister manager agent: abbdd3e5b92440899a44315c0bf9d56a
 - Uninstalling External Kafka worker for test1 ...
[Removing the Kafkabeat for External Kafka test1 is done!]
Reset the listener port configuration at the external Kafka host.
If no data has been sent or received, verify that the external Kafka collector service is running at the site collector host.
sudo systemctl status exabeam-kafka-<connection_alias>-collector
If messages stop abruptly, inspect the logs at the Kafka host for error messages.
sudo cat /opt/exabeam/beats/<hostname>/worker/logs/kafkabeat
If the Kafka log is larger than 1 MB, enable log truncation by editing the processors parameter in /opt/exabeam/beats/<hostname>/worker/kafkabeat.yml. Set max_bytes to 1000000 bytes. Alternatively (but not at the same time), you can limit the event log size by the number of characters by editing max_characters.
processors:
  - truncate_fields:
      fields:
        - message
      max_bytes: 1048576
      max_characters: 1000
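A restart of the corresponding worker service is typically required for the edited kafkabeat.yml to take effect; the service name below follows the pattern used elsewhere in this guide, and the connection alias is a placeholder:
sudo systemctl restart exabeam-kafka-<connection_alias>-collector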
Verify that the parameters used during installation are applicable to your environment. Review the full list of options using <Exabeam_Site_Collector_installer>/bin/external-kafka.sh -help:
-help                                 Print this help section
-uninstall                            Uninstall the Kafkabeat for External Kafka
-list                                 List all the installed Kafkabeats for External Kafka
-name=<name>                          The unique name of Kafkabeat for External Kafka (it can only contain lowercase letters, and numbers)
-kafka-hosts=<addr:port,addr:port>    Comma-separated list of External Kafka brokers
-kafka-topics=<topic1,topic2>         Comma-separated list of External Kafka topics
-certificate-authority=<path>         The path to the certificate authority file (*.pem) that is used in Kafka SSL configuration to verify the SSL connection with the Kafka server
-certificate=<path>                   The path to the certificate file (*.pem) that is used in Kafka SSL configuration for client certificate authorization (must be used with the -key flag)
-key=<path>                           The path to the key file (*.pem) that is used in Kafka SSL configuration for client certificate authorization
Add a Secondary Syslog Destination
To leverage event data from Site Collector for additional IT operations, you can add a secondary Syslog destination in deployments that include Data Lake. The secondary destination can be located on premises or in a virtual environment. Examples of how a secondary destination can be used include the following:
To meet legal requirements for storing data on premises
To gain additional insights from the data in non-security applications
To ease cloud adoption and migration and avoid business interruptions
To support disaster recovery operations
Before a secondary Syslog destination can be added, the site collector needs to be deployed. For information on installing the site collector, see Install the Exabeam Site Collector in the Exabeam Site Collector Guide.
To add a Syslog destination, run the following command:
sudo ./site-collector-installer.sh -v --feature=uba --reinstall --destination=[syslog_destination_name] --aa-listener=[IP or host name and port. Example:10.70.1.159:514]
Confirm that you want to reinstall the Site Collector.
- Seems like you already installed site-collector (2.1.0), do you want to reinstall? (y/n): y
The following output is returned.
[Install Kafka AA for aa1]
 - Installing AA target group aa1 ...
 - Prepare Kafka AA folder aa1
 - Target group name : aa1
 - Target group topic: lms.kafka.topic
 - Target host:port : 10.70.1.159:514
 - Target worker dir : /opt/exabeam/beats/aa1/worker
 - Target manager dir: /opt/exabeam/beats/aa1/manager
 - Target EPS limit : 0
 - Setting uba-worker files for aa1 ...
 - Setting uba-worker config: /opt/exabeam/beats/aa1/worker/kafkabeat.yml
 - Create uba-worker service: exabeam-kafka-aa1-collector
 - Setting uba-manager files for aa1 ...
 - Setting uba-manager config: /opt/exabeam/beats/aa1/manager/exabeat.yml
 - Create uba-manager service: exabeam-kafka-aa1-log-manager
[Finished install Kafka AA for aa1]
To verify that the destination was added, run the following command:
sudo /opt/exabeam/tools/sc-services-check.sh
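You can also send a test message through the site collector's syslog ingest port and confirm that it arrives at the secondary destination. This sketch reuses the logger invocation style shown in the filtering section; the message text is arbitrary:
# Send a test event to the local syslog ingest port (TCP 514)
logger -n localhost -T -P 514 test message for secondary destination aa1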
For Data Lake customers, destinations can also be verified by logging in to Data Lake and navigating to Settings > Collector Management > Collectors.
Remove a Syslog Destination
Run the following command with the name of the destination that you want to remove.
sudo ./site-collector-installer.sh -v --feature=gcs --uninstall --destination=[syslog_destination_name]
Confirm that you want to uninstall the destination.
- Seems like you have installed site-collector (2.1.0), do you want to uninstall? (y/n): y
The output is similar to the following:
[Site collector uninstall is confirmed]
[Uninstalling Kafka AA for aa1]
 - Uninstalling AA target group aa1...
 - Uninstalling uba manager for aa1 ...
 - Deregister manager config: /opt/exabeam/beats/aa1/manager
 - Deregister uba manager agent: 498acc14f5164adbbffd5be37d5b8598
 - Uninstalling gcs worker for aa1 ...
 - Remove Kafka AA folder: /opt/exabeam/beats/aa1
[Finished uninstalling Kafka AA for aa1]
Filter Incoming Syslog Events in Exabeam Site Collector
For known threats in high-volume log scenarios, you can apply a filter on inbound syslog events to reduce the amount of data sent to your Exabeam SaaS deployment. You will edit the input configuration file with the filter string. Filtering captures syslog events with known threat patterns. It does not capture events using "not match" parameters (for example, filtering will not capture "X != event").
SSH to the host of your site collector and create a backup of the configuration file in your home folder.
cp /opt/logstash/conf.d/syslog2kafka.conf ~/01-syslog-input.conf.orig
Append the filter code to /opt/logstash/conf.d/syslog2kafka.conf. In this example, the filter uses "drop-string-" and "drop-string-2" as the filter strings.
filter {
  if "drop-string-" in [message] or "drop-string-2" in [message] {
    drop { }
  }
}
Restart the logstash service.
sudo systemctl restart logstash
# Verify service status
sudo systemctl status logstash
Use sudo journalctl -e -u logstash to display the syslog status log. A successful restart will resemble the log records below, ending with a Successfully started record.
Mar 25 20:23:57 dl docker[21481]: [2019-03-25T20:23:57,726][INFO ][logstash.inputs.tcp ] Starting tcp input listener {:address=>"0.0.0.0:514"}
Mar 25 20:23:57 dl docker[21481]: [2019-03-25T20:23:57,731][INFO ][logstash.inputs.tcp ] Starting tcp input listener {:address=>"0.0.0.0:515"}
Mar 25 20:23:58 dl docker[21481]: [2019-03-25T20:23:58,229][INFO ][logstash.pipeline ] Pipeline main started
Mar 25 20:23:58 dl docker[21481]: [2019-03-25T20:23:58,233][INFO ][logstash.inputs.udp ] Starting UDP listener {:address=>"0.0.0.0:514"}
Mar 25 20:23:58 dl docker[21481]: [2019-03-25T20:23:58,253][INFO ][logstash.inputs.udp ] UDP listener started {:address=>"0.0.0.0:514", :receive_buffer_bytes=>"106496", :queue_size=>"2000"}
Mar 25 20:23:58 dl docker[21481]: [2019-03-25T20:23:58,268][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
Verify the filter is working by sending the filter string to the site collector's ingest port. In this example, the filter uses drop-string- and drop-string-2 as the filter strings.
logger -n localhost -T -P 514 test message from other system 1
logger -n localhost -T -P 514 test message from other system 2
logger -n localhost -T -P 514 test message drop-string-1 1
logger -n localhost -T -P 514 test message drop-string-2 1
logger -n localhost -T -P 514 test message drop-string-1 2
logger -n localhost -T -P 514 test message drop-string-2 2
logger -n localhost -T -P 514 test message from other system 3
Filtering Outbound Logs in Exabeam Site Collector
Exabeam Site Collector supports log filtering before uploading the logs to configured destinations. The site collector can drop entire events if filtering conditions are matched. Site collectors use Kafkabeat for outbound message processing. Configurations are made in /opt/exabeam/beats/<TARGET>/worker/kafkabeat.yml, where <TARGET> is the folder that contains the site collector destination setup.
The value of <TARGET> is one of the following options, based on your deployment:
For SaaS: gcs1
For Data Lake on-premises: lms1
For Advanced Analytics on-premises: uba1
For custom destinations (with a unique destinationID for each target): <destinationID>
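For example, in a SaaS deployment the file to edit would be the gcs1 worker configuration:
/opt/exabeam/beats/gcs1/worker/kafkabeat.yml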
The configuration resembles the following configuration block:
kafkabeat:
  inputs:
    - type: kafka
  ...
logging:
  ...
output:
  ...
path:
  ...
queue:
  ...
processors:
  - drop_event:
      when:
        - <condition>
After updating configurations, restart Kafkabeat with the following command:
sudo systemctl restart exabeam-kafka-<TARGET>-collector
Kafkabeat does not parse the event. Rather, it sends the event as-is to the destination. Filtering is based solely on the message field of an event. The following table lists the five conditions on the message field that are supported:
Condition | Description | Condition Example
---|---|---
contains | Checks if a value is part of a field. The field can be a string or an array of strings. The condition accepts only a string value. | Check if an error is part of the event message: contains: message: "Specific error"
regexp | Checks the field against a regular expression. The condition accepts only strings. | Check if the event message matches a source IP pattern: regexp: message: "src_ip=192\\.168\\.\\d{1,3}\\.\\d{1,3}"
or | An operator that receives a list of criteria to match a single condition. | Determines a match on either condition: or: - contains: message: "[DEBUG]" - contains: message: "[TRACE]"
and | An operator that receives a list of criteria that must all match. | Determines a match where both conditions hold: and: - equals: http.response.code: 200 - equals: status: OK
not | An operator that receives the condition to negate. | Drops the event if the message does not contain an error: not: contains: message: "[ERROR]"
Here is an example:
We need to filter out logs from a particular Filebeat that sends logs from a given IP address. The filter will match logs that contain the text [OBSOLETE] in their content, or that come from the Filebeat with ID 573d5253-4e4e-4fff-92a5-8f2f227b3af1 and IP address src_ip=195.164.*.*.
A sample log resembles:
[OBSOLETE] - Mar 24 15:00:34 2020 rt=1585062034 device=110.90.230.153 name=Stephanie Kim
The filter is written to apply all three possible matching criteria:
processors:
  - drop_event:
      when:
        or:
          - and:
              - contains:
                  message: "\"id\":\"573d5253-4e4e-4fff-92a5-8f2f227b3af1\""
              - regexp:
                  message: "src_ip=195\\.163\\.\\d{1,3}\\.\\d{1,3}"
          - contains:
              message: "[OBSOLETE]"
Metadata Collected by Site Collector and Supported Agents
Site Collector
Site Collector collects the following metadata fields:
timestamp
timezone
time_off
hostname
Event logs processed through Site Collector include an exa_rsc prefix, as shown in the following message example:
"exa_rsc": {
  "timestamp": "2021-06-03T05:51:10.492Z",
  "timezone": "UTC",
  "time_off": 0,
  "hostname": "cct-ec-1-nodes5n"
}
Site Collector with Syslog input includes three additional fields:
port
forwarder
exa-message-size
"port": 33320, "forwarder": "localhost", "exa-message-size": 22,
Filebeat and GZipBeat
Filebeat and GZipBeat agents collect the following metadata fields:
timestamp
timezone
time_off
type
hostname
path
"@timestamp": "2021-06-03T06:54:47.122Z", "timezone": "UTC", "time_off": 0, "type": "filebeat", "hostname": "cct-ec-1-nodesa5", "path": "/home/exabeam/filelogs/test.log"
Add OpenVPN After Exabeam Site Collector Installation
If you have an existing and running Exabeam Site Collector on your host and you need to add OpenVPN, you can rerun the installation script with an explicit instruction to apply only the OpenVPN feature. This avoids reinstalling the entire site collector package.
You will use the same installation script for new installations, upgrades, and feature add-ons. (Go to the instructions for your deployment in Install Exabeam Site Collector or Upgrade Exabeam Site Collector to see package download steps.)
In the directory where the installation file has been placed, use the following to add OpenVPN to the existing and running site collector hosts:
./site-collector-installer.sh -v --dl-saas --feature=openvpn
If a reinstallation of OpenVPN is needed, use:
./site-collector-installer.sh -v --dl-saas --feature=openvpn --reinstall
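After the script completes, you can check whether the VPN tunnel interface is up; this is a hedged sketch, and the interface name tun0 is an assumption that may differ in your environment:
# List the tunnel interface, if present (interface name is an assumption)
ip addr show tun0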
Supported Exabeam Site Collector Changes
For a list of all options, use site-collector-installer.sh --help.
Below are all supported changes that you can make. For other changes, please contact Exabeam Customer Success for further guidance.
Operating system updates
Site Collector server IP changes
Once the IP of the server has been changed, edit the below line in the file:
/opt/kafka/config/server.properties
advertised.listeners=EXTERNAL_PLAINTEXT://MYIP:9092, EXTERNAL_SSL://MYIP:9093, INTERNAL_SSL://localhost:9094
Then, restart Kafka:
sudo systemctl restart kafka
Log retention change (default is 24 hours)
Edit the below line in the file:
/opt/kafka/config/server.properties
log.retention.hours=24
Then, restart Kafka:
sudo systemctl restart kafka
RAM allocation to logstash
Edit the below lines appropriately in this file:
/opt/logstash/config/jvm.options
-Xms16g
-Xmx16g
Then, restart logstash:
sudo systemctl restart logstash
RAM allocation to Kafka
Edit the numbers before the 'G' in this file:
/opt/kafka/bin/kafka-server-start.sh
export KAFKA_HEAP_OPTS="-Xmx5G -Xms5G"
Then, restart Kafka:
sudo systemctl restart kafka