Cloud-delivered Data Lake: Data Lake Administration Guide

Administrator Operations

Exabeam Licenses

Exabeam products require a license in order to function. These licenses determine which Exabeam products and features you can use; they do not limit the amount of external data you can ingest and process.

There are multiple types of Exabeam product licenses available. Exabeam bundles these licenses together and issues you one key to activate all purchased products. For more information on the different licenses, see Types of Exabeam Product Licenses.

License Lifecycle

When you first install Exabeam, the installed instance uses a 30-day grace period license. This license allows you to try all Exabeam features for 30 days.

Grace Period

Exabeam provides a 30-day grace period for expired licenses before products stop processing data. During the grace period, you will not experience any change in product functionality. There is no limit to the amount of data you can ingest and process.

When the license or grace period is 14 days away from expiring, you will receive a warning alert on the home page and an email.

You can request a new license by contacting your Exabeam account representative or by opening a support ticket.

Expiration Period

When your grace period has ended, you will start to experience limited product functionality. Contact your Exabeam representative to obtain a valid license and restore all product features.

For Data Lake license expirations, health alerts and health checks will continue to work. Exabeam Threat Intelligence Services (TIS) and Telemetry will stop working.

You will receive a critical alert on the home page and an email.

License Alerts

License alerts are sent via an alert on the home page and in email when the license or grace period is 14 days away from expiring and when the grace period expires.

The home page alert is permanent until resolved. You must purchase a product license or renew your existing license to continue using Exabeam.

To check the status and details of your license, go to Settings > Admin Operations > Licenses.


Types of Exabeam Product Licenses

Exabeam licenses specify which products you have access to and for how long. We bundle your product licenses together into one license file. All products that fall under your Exabeam platform share the same expiration dates.

Data Lake product licenses:

  • Data Lake – The Data Lake license provides you with unlimited collection, ingestion, and secure data storage without volume-based pricing. The data ingested by Data Lake can be used by Advanced Analytics for analysis and Incident Responder during incident investigations.

  • Exabeam Threat Intelligence Services (TIS) – TIS provides real-time actionable intelligence into potential threats to your environment by uncovering indicators of compromise (IOC). It comes fully integrated with the purchase of a Data Lake license. TIS also allows access to telemetry.

After you have purchased or renewed your product licenses, proceed to Download a License.

Download an On-premises or Cloud Exabeam License

You can download your unique customer license file from the Exabeam Community.

To download your Exabeam license file:

  1. Log into the Exabeam Community with your credentials.

  2. Click on your username.

  3. Click on My Account.

  4. Click the text file under the License File section to start the download.

After you have downloaded your Exabeam license, proceed to Apply a License.

Configure Custom UI Port in Data Lake

The WebCommon base URL, along with its port number, is hard-coded in the Data Lake Application config file.

Customers with an on-premises solution can configure a custom UI port. Customers with a cloud-delivered solution must contact the Exabeam Customer Success Team to have this configured for their environment.

To configure a custom UI port in Data Lake:

  1. Set the web_common_external_port variable.

    Ensure the variable is set in /opt/exabeam_installer/group_vars/all.yml: web_common_external_port: <UI_port_number>

    Note

    If this variable is not set, access to the custom UI port may be lost after upgrading.

  2. Navigate to the DLA config folder:

    cd /opt/exabeam/config/lms/server/default/
  3. Open the application_default.conf file in an editor:

    vim application_default.conf
  4. Set the webcommonBaseUrl port value to your custom UI port number.
  5. Save and exit.

  6. Restart Data Lake:

    systemctl restart exabeam-lms-server
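The file edit in step 1 can be sketched as an idempotent shell snippet. This is illustrative only: it runs against a scratch copy so the behavior is visible; in production the file is /opt/exabeam_installer/group_vars/all.yml, and 8443 is an example port value.

```shell
# Illustrative sketch of the step-1 edit, run against a scratch copy of all.yml.
ALL_YML=$(mktemp)
printf 'some_existing_setting: value\n' > "$ALL_YML"   # stand-in for current contents
PORT=8443
if grep -q '^web_common_external_port:' "$ALL_YML"; then
  # Variable already present: update it in place.
  sed -i "s/^web_common_external_port:.*/web_common_external_port: $PORT/" "$ALL_YML"
else
  # Variable missing: append it so the custom port survives upgrading.
  echo "web_common_external_port: $PORT" >> "$ALL_YML"
fi
grep '^web_common_external_port:' "$ALL_YML"   # → web_common_external_port: 8443
```

Running the snippet twice yields the same file contents, which is why either branch is safe to re-run after an upgrade.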

Adding Nodes to a Cluster

Hardware and Virtual deployments only

The steps below walk through the process of adding nodes to an existing cluster or upgrading from a standalone to multi-node deployment. The prompts ask a series of questions regarding how you want your node cluster configured.

Before you begin, ensure you have:

  • Your Exabeam credentials

  • IP addresses of your Master and Worker nodes

  • Credentials for inter-node communication (Exabeam can create these during fresh installation if they do not already exist).

Caution

Before adding nodes to your cluster, please ensure the current storage capacity for these items is below the following thresholds:

  • For Data Lake:

    • 85% on Elasticsearch hot node

    • 85% on Elasticsearch warm nodes

    • 70% on Kafka service

Note

Exabeam does not support removing nodes from a cluster.

Warning

Do not increase the number of nodes in a cluster by more than 50% in any given batch of node additions. For example, to grow a cluster from 20 nodes to 100 nodes, start with a batch of 10 nodes (50% of 20) and then incrementally add batches no larger than 50% of the current node count.

Review cluster performance before adding more nodes. Ensure that the cluster status is healthy, and nodes have completed rebalancing.
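The 50% rule can be expressed as a small helper function. This is our own sketch for planning batch sizes, not an Exabeam utility:

```shell
# Sketch: largest node batch allowed by the 50% rule for a cluster of a given size.
max_batch() {
  echo $(( $1 / 2 ))
}

max_batch 20   # a 20-node cluster may grow by at most 10 nodes per batch
max_batch 30   # after the first batch, a 30-node cluster allows up to 15 more
```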

Add Nodes

  1. Run the following:

    /opt/exabeam_installer/init/exabeam-multinode-deployment.sh
  2. Menu options will appear. Select Add new nodes to the cluster. The following example adds 2 nodes to an existing cluster:

    How many nodes do you wish to add? 2
    
  3. Enter the IP address of each new node. The following example assigns IP addresses to two new Worker Nodes (node 11 and node 12) joining a cluster that already has a Master Node:

    Note

    Any given cluster cannot have more than one master node. Please enter lms_slave as the role.

    What is the IP address of node 11 (localhost/127.0.0.1 not allowed)? 10.10.2.88
    
    What are the roles of node 11?  ['lms_master', 'lms_slave']: lms_slave
    
    What is the IP address of node 12 (localhost/127.0.0.1 not allowed)? 10.10.2.89
    
    What are the roles of node 12?  ['lms_master', 'lms_slave']: lms_slave

    This step repeats until all nodes have IP addresses assigned.

  4. NTP is important for keeping the clocks in sync. If a local NTP server exists, please input that information. If no local NTP server exists, but the servers do have internet access, use the default pool.ntp.org. Only choose none if there is no local NTP server and no internet access.

    What's the NTP server to synchronize time with? Type 'none' if you don't have an NTP server and don't want to sync time with the default NTP server group from ntp.org. [pool.ntp.org] pool.ntp.org
  5. If you have internal DNS servers, add them here. If not, enter n.

    Would you like to add any DNS servers? [y/n] n
  6. Override the Docker and Calico default subnets if they conflict with any networks in your domain. If not, answer no to both prompts.

    Note

    If you change any of the docker networks, the product will automatically be uninstalled prior to being redeployed.

    Would you like to override the default docker BIP (172.17.0.1/16)? [y/n] n
    Enter the new docker_bip IP/CIDR (minimum size /25, recommended size /16): 172.18.0.1/16
    Would you like to override the calico_network_subnet IP/CIDR (10.50.48.0/20)? [y/n] n

The cluster is now configured.
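Between batches, cluster health can be checked against Elasticsearch directly. The endpoint below is the standard Elasticsearch default (port 9200); verify the host and port against your deployment. The parsing step is demonstrated on a sample response so the extraction logic is visible:

```shell
# Standard Elasticsearch health endpoint (default port 9200; adjust for your deployment):
#   curl -s http://localhost:9200/_cluster/health
# Extracting the status field from a sample response:
resp='{"cluster_name":"exabeam","status":"green","number_of_nodes":12}'
status=$(printf '%s' "$resp" | sed -n 's/.*"status":"\([a-z]*\)".*/\1/p')
echo "$status"   # → green
```

A status of green (all shards allocated) indicates rebalancing is complete; yellow or red means you should wait before adding the next batch.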

Replicating Logs Across Exabeam Data Lake Clusters

You can configure Data Lake so that your Data Lake logs are replicated to a backup system.

Before proceeding, ensure you have two independent Data Lake clusters (the primary and a backup). These clusters must have the same number of nodes, run the same version of Data Lake, and have enough storage capacity to hold the same amount of data based on retention settings.

Note

All steps below refer to the backup cluster.

  1. Establish a CLI session with the backup cluster of your deployment.

  2. Set the primary IP address (where 10.10.2.88 is the primary Data Lake IP) as the environment variable in the terminal.

    export PrimaryDataLakeMaster=10.10.2.88
  3. Copy the Keystore and Truststore files from the primary system to the backup system.

    scp exabeam@${PrimaryDataLakeMaster}:/opt/exabeam/config/common/kafka/ssl/kafka-host1.keystore.jks ~/primary.kafka.keystore.jks 
    
    scp exabeam@${PrimaryDataLakeMaster}:/opt/exabeam/config/common/kafka/ssl/kafka-host1.truststore.jks ~/primary.kafka.truststore.jks 
    
    . /opt/exabeam/bin/shell-environment.bash
    
  4. Add the data replication location:

    /opt/exabeam/bin/lms/add-dr-location.py --dl-master $PrimaryDataLakeMaster  \
    --kafka-keystore-path ~/primary.kafka.keystore.jks  \
    --kafka-keystore-password exabeam \
    --kafka-truststore-path ~/primary.kafka.truststore.jks \
    --kafka-truststore-password exabeam
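Before running add-dr-location.py, it is worth confirming that both JKS files copied in step 3 are present and non-empty. The helper below is our own sketch, not an Exabeam utility; the filenames in the usage comment come from the steps above:

```shell
# Sketch: check that a copied keystore/truststore file exists and is non-empty.
require_file() {
  if [ -s "$1" ]; then
    echo "ok: $1"
  else
    echo "missing or empty: $1"
  fi
}

# On the backup cluster, after the scp commands in step 3:
#   require_file ~/primary.kafka.keystore.jks
#   require_file ~/primary.kafka.truststore.jks
```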

Ingesting Logs into Exabeam Data Lake

Ingestion via syslog is automatically enabled by default. However, you must configure your syslog source host to send logs to the proper Data Lake destination IP/port.

Important

If you are running a multi-node Data Lake cluster with any Syslog sources, Exabeam strongly recommends having a load balancer with two site collectors behind it to mitigate any potential data loss.

Additionally, you must use the cert (<alias>-exa-ca.pem) provided in the customer artifacts package, which is provided during onboarding, if using Transport Layer Security (TLS).

If you are sending logs via syslog:

  • use port 515/TCP for TLS syslog

  • use port 514/TCP or 514/UDP for syslog without TLS.
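As a sanity check from a source host, you can hand-build an RFC 3164-style syslog line and send it over TCP. The PRI value 134 (facility local0, severity informational) and the bash /dev/tcp redirection are illustrative assumptions; substitute your own collector IP:

```shell
# Build an RFC 3164-style syslog line: <PRI>TIMESTAMP HOSTNAME TAG: message.
# PRI 134 = facility local0 (16) * 8 + severity informational (6).
MSG="<134>$(date '+%b %d %H:%M:%S') myhost myapp: test event"
echo "$MSG"
# To send it without TLS (bash /dev/tcp redirection; substitute your collector IP):
#   echo "$MSG" > /dev/tcp/<collector-ip>/514
```

If the event does not appear in search shortly afterward, check connectivity to the collector and that the source is using the correct port for its TLS setting.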

Exabeam Data Lake Retention Settings

This feature removes old data based on a customizable retention policy so that storage on the cluster can be reclaimed for newer data. Events in indices that exceed the retention period are deleted automatically, and when an index becomes empty, the index is deleted as well.

Warning

Once the data is deleted, it is lost permanently. To prevent data loss, implement archiving. See Remote Archiving NAS and AWS S3 from Data Lake.

For example, suppose an index was created for events ingested on 01/01/2020 and events in the system are retained for 90 days. With this retention policy, all events and the index are deleted on 03/31/2020 at midnight GMT.
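The deletion date in this example can be checked with GNU date (coreutils, as shipped on the Data Lake host OS, is assumed):

```shell
# 90 days after the index creation date in the example above (2020 is a leap year):
date -u -d '2020-01-01 + 90 days' +%Y-%m-%d   # → 2020-03-31
```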

Note

The default setting is 90 days retention. The retention settings are configured during the initial set up after customers subscribe to Data Lake.

If you need to retain data for longer than 90 days, Exabeam Cloud Archive is available to extend this retention period.

For auditing purposes, the system keeps a trail of deletions.

To view your retention policy, navigate to Settings > Index Management > Advanced Settings.


Set Up LDAP Import

During this stage of the setup, Exabeam connects to your LDAP servers and queries them for user and computer information, then stores the attributes in its own database. Going forward, Exabeam polls your LDAP servers once every 24 hours and updates the local copy to reflect the latest state changes.

  1. Navigate to Settings > Import LDAP > LDAP Server.

  2. At the Import LDAP UI, you can add servers.

    Or, hover over the right side of an existing LDAP server record and click edit (pencil icon) or delete (trash can icon).

Restore Frozen Storage

Frozen Storage is a service that allows customers to keep logs for up to 3 years with no default access or search capabilities.

If you subscribe to Frozen Storage and need to query data held in frozen storage, you may request that up to 30 days of logs be restored at once, with up to 4 restore requests per year.

When making a request for logs to be restored, you must include the date range you are interested in restoring. It will take 10 business days to fulfill the request.

These logs will be restored to a dedicated, temporary Data Lake cluster. Restoration will be done for the full data set, with no filters applied.