Exabeam Advanced Analytics Administration Guide

Configure Advanced Analytics

Everything administrators need to know about setting up and operating Advanced Analytics.

Set Up Admin Operations


Access Exabeam Advanced Analytics

Navigate, log in, and authenticate into your Advanced Analytics environment.

If you have a hardware or virtual deployment of Advanced Analytics, enter the IP address of the server and port number 8484:

https://[IP address]:8484 or https://[IP address]:8484/uba

If you have the SaaS deployment of Advanced Analytics, navigate to https://[company].aa.exabeam.com.

Use your organization credentials to log into your Advanced Analytics product.

These login credentials were established when Advanced Analytics was installed. You can authenticate into Advanced Analytics using LDAP, SAML, CAC, or SSO through Okta. To configure and enable these authentication types, contact your Technical Account Manager.

If you work for a federal agency, you can authenticate into Advanced Analytics using Common Access Card (CAC). United States government personnel use the CAC to access physical spaces, computer networks, and systems. You have readers on your workstations that read your Personal Identity Verification (PIV) and authenticate you into various network resources.

You can authenticate into Advanced Analytics using CAC combined with another authentication mechanism, like Kerberos or local authentication. To configure and enable other authentication mechanisms, contact your Technical Account Manager.

Set Up Log Management

Large enterprise environments generally include many server, network, and security technologies that can provide useful activity logs to trace who is doing what and where. Log ingestion can be coupled with your Data Lake data repository, which can forward syslog to Advanced Analytics. (See Data Lake Administration Guide > Syslog Forwarding to Advanced Analytics.)

Use the Log Ingestion Settings page to configure the following log sources:

Note

The Syslog destination is your site collector IP/FQDN, and only TLS connections are accepted on port TCP/515. A quick way to verify the TLS listener is shown after the list of log sources below.

  • Data Lake

  • Splunk

  • ServiceNow

  • HP ArcSight

  • IBM QRadar

  • McAfee Nitro

  • RSA Security Analytics

  • Sumo Logic

  • Google Cloud Pub/Sub
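The note above gives the syslog destination and port for site collector forwarding. As a quick sanity check, you can probe the collector's TLS listener from any host that is permitted to reach it. This is a minimal sketch, assuming the openssl command-line tool is available; replace collector.example.com with your site collector IP/FQDN.

# Probe the site collector's TLS listener on TCP/515 (placeholder hostname).
openssl s_client -connect collector.example.com:515 < /dev/null
# A successful handshake prints the certificate chain; "connection refused" or a timeout
# points to a network path or firewall problem rather than an Advanced Analytics setting.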

View Insights About Syslog Ingested Logs

Advanced Analytics can test the data pipeline for logs coming in via Syslog.

Note

This option is only available if the Enable Syslog Ingestion button is toggled on.

Click the Syslog Stats button to view the number of logs fetched, the number of events parsed, and the number of events created. A warning will also appear that lists any event types that were not created within the Syslog feed that was analyzed.

In this step you can also select Options to limit the time range and number of log events tested.

Set Up Training & Scoring

To build a baseline, Advanced Analytics extensively profiles people, asset usage, and sessions. In a typical deployment, Advanced Analytics begins by examining 60-90 days of an organization's logs. After the initial baseline analysis is done, Advanced Analytics begins assigning scores to each session based on the number and type of anomalies in the session.

Set Up Log Feeds

Advanced Analytics can be configured to fetch log data from a SIEM. Administrators can configure log feeds that can be queried during ingestion. Exabeam provides out-of-the-box queries for various log sources, or you can edit them and apply your own.

Once the log feed is set up, you can perform a test query that fetches a small sample of logs from the log management system. You can also parse the sample logs to make sure that Advanced Analytics is able to normalize the logs. If the system is unable to parse the logs, reach out to Customer Success and the Exabeam team will create a parser for those logs.

Draft/Published Modes for Log Feeds

There are two different modes when it comes to adding log feeds under Settings > Log Feeds. When you create a new log feed and complete the workflow, you are asked whether you would like to publish the feed. Publishing the feed lets the Analytics Processing Engine know that the respective feed is ready for consumption.

If you choose not to publish the feed, it is left in draft mode and is not picked up by the processing engine. You can publish a feed that is in draft mode at any later time.

This allows you to add multiple feeds and test queries without worrying about the feed being picked up by the processing engine or having the processing engine encounter errors when a draft feed is deleted.

Once a feed is in published mode it will be picked up by the processing engine at the top of the hour.

Set Up Incident Notification

Advanced Analytics can send notifications with the details of notable sessions to your log repository for reporting and investigations via Syslog or email.

To configure syslog or email notifications, navigate to Settings > Incident Notification panel. In the Notifications UI, click the “+” icon and select the notification type.

Configure the notification fields with the following requirements:

  • Server – cloudrelay1.connect.exabeam.com

  • Port – 587

  • SSL – true

  • Sender email address – (any email address using <instanceID>@notify.exabeam.com)

After you have selected the notification method (Syslog or Email Notification), specify the configuration fields and select the type of data to forward to your external log repository or ticketing system(s).
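If email notifications fail to send, a quick reachability check against the relay settings listed above can rule out network issues. This is a sketch only; it assumes the openssl command-line tool is installed and that your environment allows outbound connections on TCP/587.

# Confirm the Exabeam cloud relay answers on TCP/587 and offers STARTTLS.
openssl s_client -connect cloudrelay1.connect.exabeam.com:587 -starttls smtp < /dev/null
# A certificate chain and SMTP "250" responses indicate the relay is reachable; a timeout
# usually means outbound port 587 is blocked by a firewall or proxy.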

Note

The Email Notification configuration also affects the email actions within Incident Responder playbooks. If this is not configured to a valid SMTP server, you cannot send email notifications from within the playbook. Once this has been configured, the Incident Responder service automatically populates as 'IRNotificationSMTPService' for send email actions, including:

  • Notify User By Email Phishing

  • Phishing Summary Report

  • Send Email

  • Send Template Email

  • Send Indicators via Email

Note

The syslog notifications include useful event fields that have values associated with them; the set of fields is unique to each event type. If a field does not have a value, it is not included in the syslog output.

They also include reason rule templates. If the UI template changes, then the output changes too.

For information on the Syslog notifications key-value pair definitions, see the Appendix.

Advanced Analytics Transaction Log and Configuration Backup and Restore

Hardware and Virtual Deployments Only

Rebuilding a failed worker node host (for example, after a failed disk on an on-premises appliance) or shifting a worker node host to new resources (such as in AWS) takes significant planning. One of the more complex steps, and the most prone to error, is migrating the configurations. Exabeam provides a backup mechanism for layered data format (LDF) transaction log and configuration files to minimize the risk of error. To use the configuration backup and restore feature, you must have:

  • Amazon Web Services S3 storage or an active Advanced Analytics worker node

  • Cluster with two or more nodes

  • Read and write permission for the credentials you will configure to access the base path at the storage destination

  • A scheduled task in Advanced Analytics to run backup to the storage destination

Note

To rebuild after a cluster failure, it is recommended that cloud-based backups be used. To rebuild nodes after disk failures, back up files to a worker node or a cloud-based destination.

If you want to save the generated backup files to your first worker node, no further configuration of an external storage destination is needed. A worker node destination addresses possible disk failure at the master node appliance, but it is not recommended as the sole method for disaster recovery.

If you are storing your configurations at an AWS S3 location, you will need to define the target location before scheduling a backup.

  1. Go to Settings > Additional Settings > Admin Operations > External Storage.

  2. Click Add to register an AWS backup destination.

  3. Fill all fields and then click TEST CONNECTION to verify connection credentials.

  4. Once a working connection is confirmed as Successful, click SAVE.

Once you have a verified destination to store your files, configure and schedule a recurring backup.

  1. Go to Settings > Additional Settings > Backup & Restore > Backups.

  2. Click CREATE BACKUP to generate a new schedule record. If you are changing the destination, click the edit icon on the displayed record.

  3. Fill all fields and then click SAVE to apply the configuration.

    Warning

    Time is given in UTC.

A successful backup places a backup.exa file either at the base path of the AWS destination or in /opt/exabeam/data/backup at the worker node. If the scheduled backup fails to write files to the destination, confirm that there is enough space at the destination to hold the files and that the exabeam-web-common service is running. (If exabeam-web-common is not running, review its application.log for hints about the possible cause.)
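A few quick checks along these lines can help narrow down a failed scheduled backup. This is a minimal sketch, assuming a worker node destination and that exabeam-web-common runs as a Docker container as elsewhere in this guide; adjust the path check if you back up to AWS S3.

# Check free space at the worker node backup destination.
df -h /opt/exabeam/data/backup

# Confirm the exabeam-web-common container is running.
sudo docker ps --filter name=exabeam-web-common

# Review recent service output for errors (its application.log contains further detail).
sudo docker logs --tail 100 exabeam-web-common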

In order to restore a node host using files stored off-node, you must have:

  • administrator privileges to run tasks at the host

  • SSH access to the host

  • free space at the restoration partition at the master node host that is greater than 10 times the size of the backup.exa backup file

  1. Copy the backup file, backup.exa, from the backup location to the restoration partition. This should be a temporary work directory (<restore_path>) at the master node.

  2. Run the following to unpack the EXA file and repopulate files.

    sudo /opt/exabeam/bin/tools/exa-restore <restore_path>/backup.exa

    exa-restore will stop all services, restore files, and then start all services. Monitor the console output for error messages. See Troubleshooting a Restoration if exa-restore is unable to run to completion.

  3. Remove backup.exa and the temporary work directory when restoration is completed.
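Putting the steps together, a restore run might look like the following sketch. The SSH user, backup source path, and temporary work directory are placeholders rather than fixed values; only the exa-restore path comes from this guide.

# 1. Stage the backup file in a temporary work directory on the master node.
ssh admin@master-node "mkdir -p /home/admin/restore_tmp"
scp backup.exa admin@master-node:/home/admin/restore_tmp/

# 2. Confirm the restoration partition has more than 10x the size of backup.exa free.
ssh admin@master-node "ls -lh /home/admin/restore_tmp/backup.exa && df -h /home/admin"

# 3. Unpack the EXA file and repopulate files (stops, restores, then restarts all services).
ssh admin@master-node "sudo /opt/exabeam/bin/tools/exa-restore /home/admin/restore_tmp/backup.exa"

# 4. Remove the backup file and temporary work directory once restoration completes.
ssh admin@master-node "rm -rf /home/admin/restore_tmp"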

If restoration does not succeed, try the following solutions. If the scenarios listed below do not match your situation, contact Exabeam Customer Success.

Not Enough Disk Space

Select a different partition to restore the configuration files to and try the restore again. Otherwise, review the files stored in the target destination and offload files to create more space.

Restore Script Cannot Stop All Services

Use the following to manually stop all services:

source /opt/exabeam/bin/shell-environment.bash && everything-stop
Restore Script Cannot Start All Services

Use the following to manually start all services:

source /opt/exabeam/bin/shell-environment.bash && everything-start
Restore Script Could Not Restore a Particular File

Use tar to manually restore the file:

# Determine the task ID and base directory (<base_dir>) for the file restoration that failed.
# Go to the <base_dir>/<task_id> directory and apply the following command:
sudo tar -xzpvf backup.tar backup.tgz -C <base_dir>

# Manually start all services.
source /opt/exabeam/bin/shell-environment.bash && everything-start

Exabeam Licenses

All Exabeam products require a license in order to function. These licenses determine which Exabeam products and features you can use. You are not limited in the amount of external data you can ingest and process.

There are multiple types of Exabeam product licenses available, which you can add to your Exabeam instance. For example, separate licenses are required to operate Incident Responder and Case Manager with your Advanced Analytics platform. Exabeam bundles these licenses together and issues you one key to activate all purchased products. For more information on the different product licenses, please see Types of Exabeam Product Licenses.

License Lifecycle

When you first install Exabeam, the installed instance uses a 30-day grace period license. This license allows you to try out all of the features in Exabeam for 30 days.

Grace Period

Exabeam provides a 30-day grace period for expired licenses before products stop processing data. During the grace period, you will not experience any change in product functionality. There is no limit to the amount of data you can ingest and process.

When the license or grace period is 14 days away from expiring, you will receive a warning alert on the home page and an email.

You can request a new license by contacting your Exabeam account representative or by opening a support ticket.

Expiration Period

When your grace period has ended, you will start to experience limited product functionality.

For Advanced Analytics, the Log Ingestion Engine will continue to ingest data, but the Analytics Engine will stop processing. Threat Hunter and telemetry will also stop working.

You will receive a critical alert on the home page and an email.

License Alerts

License alerts are sent via an alert on the home page and in email when the license or grace period is 14 days away from expiring and when the grace period expires.

Note

The email alert is sent to the address linked in the notifications setting page at Settings > Additional Settings > Notifications > Setup Notifications.

The home page alert is permanent until resolved. You must purchase a product license or renew your existing license to continue using Exabeam.


You can also check the status and details of your license any time by visiting Settings > ADMIN OPERATIONS > Licenses or System Health > Health Alerts.

License Versions

Currently, Exabeam has three versions of its product licenses (V1, V2, and V3). License versions are not backward compatible. If you are upgrading from Advanced Analytics I41 or earlier, you must apply the V3 license version. The table below summarizes how the different license versions are designed to work:

V1

  • Products supported: Advanced Analytics, Threat Hunter

  • Product version: Advanced Analytics I38 and below

  • Uses unique customer ID: No

  • Federal License Mode: No

  • Available to customers through the Exabeam Community: No

  • License enforced in Advanced Analytics: Yes

  • License enforced in Data Lake: NA

  • Applied through the UI: No, the license must be placed in a path in Tequila

V2

  • Products supported: Advanced Analytics, Threat Hunter, Entity Analytics

  • Product version: Advanced Analytics I41

  • Uses unique customer ID: No

  • Federal License Mode: No

  • Available to customers through the Exabeam Community: No

  • License enforced in Advanced Analytics: Yes

  • License enforced in Data Lake: NA

  • Applied through the UI: No, the license must be placed in a path in Tequila

V3

  • Products supported: Advanced Analytics, Threat Hunter, Entity Analytics, Incident Responder, Case Manager, Data Lake, Threat Intelligence Service (ExaCloud authentication)

  • Product version: Advanced Analytics I46 and above; Data Lake I24 and above

  • Uses unique customer ID: Yes

  • Federal License Mode: Yes

  • Available to customers through the Exabeam Community: Yes

  • License enforced in Advanced Analytics: Yes

  • License enforced in Data Lake: No

  • Applied through the UI: Yes

Table 1. License Version Details


Note

Licenses for Advanced Analytics I46 and later must be installed via the GUI on the license management page.

Types of Exabeam Product Licenses

Exabeam licenses specify which products you have access to and for how long. We bundle your product licenses together into one license file. Therefore, all products that fall under your Exabeam platform share the same expiration dates.

Advanced Analytics product licenses:

  • User Analytics – This is the core product of Advanced Analytics. Exabeam's user behavioral analytics security solution provides modern threat detection using behavioral modeling and machine learning.

  • Threat Hunter – Threat Hunter is a point-and-click advanced search function that allows searches across a variety of dimensions, such as Activity Types, User Names, and Reasons. It comes fully integrated with User Analytics.

  • Exabeam Threat Intelligence Services (TIS) – TIS provides real-time actionable intelligence into potential threats to your environment by uncovering indicators of compromise (IOC). It comes fully integrated with the purchase of an Advanced Analytics V3 license. TIS also allows access to telemetry.

  • Entity Analytics (EA) – Entity Analytics offers analytics capabilities for internet-connected devices and entities beyond users such as hosts and IP addresses within an environment.

    Entity Analytics is available as an add-on option. If you are adding Entity Analytics to your existing Advanced Analytics platform, you will be sent a new license key. Note that you may require additional nodes to process asset oriented log sources.

  • Incident Responder – Also known as Orchestration Automation Response. Incident Responder adds automation to your SOC to make your cyber security incident response team more productive.

    Incident Responder is available as an add-on option. If you are adding Incident Responder to your existing Advanced Analytics platform, you will be sent a new license key. Note that you may require additional nodes to support automated incident responses.

  • Case Manager – Case Manager can fully integrate into Advanced Analytics enabling you to optimize analyst workflow by managing the life cycle of your incidents.

    Case Manager is available as an add-on option. If you are adding Case Manager to your existing Advanced Analytics platform, you will be sent a new license key. Note that you may require additional nodes to support this module extension.

After you have purchased or renewed your product licenses, proceed to Download a License.

Download an On-premises or Cloud Exabeam License

You can download your unique customer license file from the Exabeam Community.

To download your Exabeam license file:

  1. Log into the Exabeam Community with your credentials.

  2. Click on your username.

  3. Click on My Account.

  4. Click on the text file under the License File section to start the download.


After you have downloaded your Exabeam license, proceed to Apply a License.

Exabeam Cluster Authentication Token

Hardware and Virtual Deployments Only

The cluster authentication token is used to verify identities between clusters that have been deployed in phases as well as HTTP-based log collectors. Each peer cluster in a query pool must have its own token. You can set expiration dates during token creation or manually revoke tokens at any time.

To generate a token:

  1. Navigate to Settings > Admin Operations > Cluster Authentication Token.

  2. At the Cluster Authentication Token menu:

    1. To configure a new token, click the add (+) icon.

    2. Or, to edit an existing configuration, click the edit icon.

  3. In the Setup Token menu, fill in the Token Name, Expiry Date, and select the Permission Level(s).


    Note

    Token names may contain letters, numbers, and spaces only.

  4. Click ADD TOKEN or SAVE to apply the configuration.

Use the generated token to allow your API(s) to authenticate by token. Ensure that your API uses ExaAuthToken in its requests. For curl clients, the request structure resembles:

curl -H "ExaAuthToken:<generated_token>" https://<external_host>:<api_port>/<api_request_path>

Set Up Authentication and Access Control

What Are Accounts & Groups?

Peer Groups

Peer groups can be a team, department, division, geographic location, etc. and are defined by the organization. Exabeam uses this information to compare a user's behavior to that of their peers. For example, when a user logs into an application for the first time Exabeam can evaluate if it is normal for a member of their peer group to access that application. When Dynamic Peer Grouping is enabled, Exabeam will use machine learning to choose the best possible peer groups for a user for different activities based on the behaviors they exhibit.

Executives

Exabeam watches executive movements very closely because they are privileged and have access to sensitive and confidential information, making their credentials highly desirable for account takeover. Identifying executives allows the system to model executive assets, thereby prioritizing anomalous behaviors associated with them. For example, we will place a higher score for an anomaly triggered by a non-executive user accessing an executive workstation.

Service Accounts

A service account is a user account that belongs to an application rather than an end user and runs a particular piece of software. During the setup process, we work with an organization to identify patterns in service account labels and use this information to classify accounts as service accounts based on their behavior. Exabeam also adds or removes points from sessions based on service account activity. For example, if a service account logs into an application interactively, we will add points to the session because service accounts should not typically log in to applications.

What Are Assets & Networks?

Workstations & Servers

Assets are computer devices such as servers, workstations, and printers. During the setup process, we will ask you to review and confirm asset labels. It is important for Exabeam to understand the asset types within the organization - are they Domain Controllers, Exchange Servers, Database Servers or workstations? This adds further context to what Exabeam sees within the logs. For example, if a user performs interactive logons to an Exchange Server on a daily basis, the user is likely an Exchange Administrator. Exabeam automatically pulls in assets from the LDAP server and categorizes them as servers or workstations based on the OS property or the Organizational Units they belong to. In this step, we ask you to review whether the assets tagged by Exabeam are accurate. In addition to configuration of assets during setup, Exabeam also runs an ongoing classifier that classifies assets as workstations or servers based on their behavior.

Network Zones

Network zones are internal network locations defined by the organization rather than a physical place. Zones can be cities, business units, buildings, or even specific rooms. For example, "Atlanta" can refer to a network zone within an organization rather than the city itself (all according to an organization's preference). Administrators can upload information regarding network zones for their internal assets via CSV or add them manually one at a time.

Asset Groups

Asset Groups are a collection of assets that perform the same function in the organization and need to be treated as a single entity from an anomaly detection perspective. An example of an asset group would be a collection of Exchange Servers. Grouping them this way is useful to our modeling process because it allows us to treat an asset group as a single entity, reducing the number of false positives that are generated when users connect to multiple servers within that group. As a concrete example, if a user regularly connects to email exchange server #1, then Exabeam builds a baseline that says this is their normal behavior. But exchange servers are often load-balanced, and if the user then connects to email exchange server #2, we can say that this is still normal behavior for them because the exchange servers are one Asset Group. Other examples of asset groups are SharePoint farms, or Virtual Desktop Infrastructure (VDI).

Common Access Card (CAC) Authentication and Limitations

Exabeam supports Common Access Card (CAC) authentication. The CAC is the principal card used to enable access to physical spaces, computer networks, and systems. Analysts have CAC readers on their workstations that read their Personal Identity Verification (PIV) and authenticate them to various network resources.

Exabeam allows CAC authentication in combination with other authentication mechanisms (Kerberos, Local authentication, etc.).

Please note the following restrictions:

  • Configure CAC users that are authorized to access Exabeam from the Exabeam User Management page.

  • During user provisioning, CAC analysts must be assigned roles. The roles associated with a CAC user are used for authorization when they log in.

    Figure 1. Add User menu


Configure a CAC User
  1. Generate a certificate and add it to the cluster by running the shell script below. Fill in the fields pertinent to your organization.

    #!/bin/bash
    # Main variables
    Country="[country]"
    CommonName="[cac_username_hostname]"
    State="[state]"
    Locality="[locality]"
    Organization="[organization]"
    OrganizationalUnit="[organizational_unit]"
    EmailAddress="[email_address]" 
    
    # C =  Country Name (2 letter code)
    # ST = State or Province Name (full name)
    # L =  Locality Name (eg, city)
    # O =  Organization Name (eg, company)
    # OU = Organizational Unit Name (eg, section)
    # CN = Common Name (eg, your name or your server's hostname)
    # emailAddress = Email Address
    SubjString="/C=$Country/CN=$CommonName/emailAddress=$EmailAddress/ST=$State/L=$Locality/O=$Organization/OU=$OrganizationalUnit"
    
    # Run the following commands on Exabeam server to create Client Certificate
    openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -aes-128-cbc -out ca.key -pass pass:test
    openssl req -new -x509 -days 365 -sha256 -key ca.key -out ca.pem -subj "$SubjString" -passin pass:test 
    
    # Create client cert that will be signed by CA
    cCountry="[country]"
    cCommonName="[cac_username]"
    cState="[state]"
    cLocality="[locality]"
    cOrganization="[organization]"
    cOrganizationalUnit="[organization_unit]"
    cEmailAddress="[email]" 
    
    cSubjString="/C=$cCountry/CN=$cCommonName/emailAddress=$cEmailAddress/ST=$cState/L=$cLocality/O=$cOrganization/OU=$cOrganizationalUnit"
    openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out client.key
    openssl req -new -key client.key -sha256 -out client.csr -subj "$cSubjString"
    openssl x509 -req -days 365 -in client.csr -CA ca.pem -CAkey ca.key -set_serial 0x`openssl rand -hex 16` -sha256 -out client.pem -passin pass:test
    openssl pkcs12 -export -in client.pem -inkey client.key -name "Sub-domain certificate for some name" -out client.p12 -passout pass:test
  2. Upload the generated ca.pem file to the CAC user home directory at the master node.

  3. Execute the following commands at the master node:

    source /opt/exabeam/bin/shell-environment.bash
    docker cp ca.pem exabeam-web-common:/
    docker exec exabeam-web-common keytool -import -trustcacerts -alias cacbundle -file ca.pem -keystore /opt/exabeam/web-common/config/custom/truststore.jks -storepass changeit -noprompt
  4. To associate the credentials to a login, create a CAC user by navigating to Settings > User Management > Users > Add User and select CAC in User type.

Configuration of Client Certificates

The sslClientAuth flag, located in /opt/exabeam/config/common/web/custom/application.conf, must be set to true. Example below.

tequila {
  service {
    interface = "0.0.0.0"
    #hostname = "<hostname>"
    port = 8484
    https = true
    sslKeystore = "$EXABEAM_HOME/config/custom/keystore.jks"
    sslKeypass = "password"
 
    # The following property enables Two-Way Client SSL Authentication
    sslClientAuth = true
  }
}

To install client certificates for CAC, add the client certificate bundle to the trust store on the master host. Example below (replace the name of the file with the bundle you are installing):

# For Exabeam Data Lake
sudo docker exec exabeam-web-common-host1 /bin/bash -c "cd /opt/exabeam/config/custom; keytool -import -trustcacerts -alias cacbundle -file ca.pem -keystore truststore.jks -storepass changeit -noprompt"

# For Exabeam Advanced Analytics
sudo docker exec exabeam-web-common /bin/bash -c "cd /opt/exabeam/config/custom; keytool -import -trustcacerts -alias cacbundle -file ca.pem -keystore truststore.jks -storepass changeit -noprompt"

To verify the contents of the trust store on the master host, run the following:

# For Exabeam Data Lake 
sudo docker exec exabeam-web-common-host1 /bin/bash -c "keytool -list -v -keystore /opt/exabeam/config/custom/truststore.jks -storepass changeit"

# For Exabeam Advanced Analytics
 sudo docker exec exabeam-web-common /bin/bash -c "keytool -list -v -keystore /opt/exabeam/config/custom/truststore.jks -storepass changeit"

After configuration changes, restart web-common.

source /opt/exabeam/bin/shell-environment.bash; web-common-restart
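After the restart, one way to confirm that two-way client SSL is in effect is to present the client certificate generated earlier with curl. This is a sketch under the assumption that client.pem and client.key are still in the working directory; the hostname is a placeholder, and -k skips server certificate verification for the purposes of the test.

# Present the generated client certificate; without it, the TLS handshake should now be rejected.
curl --cert client.pem --key client.key -k https://exabeam-master.example.com:8484/uba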

Role-based Access Control

Customers can control the responsibilities and activities of their SOC team members with Role-based Access Control (RBAC). Local, LDAP, or SAML-authenticated users are assigned roles within Exabeam.

Each user can be assigned one or more roles, and the responsibilities of those roles are determined by the permissions each role allows. If a user is assigned more than one role, that user receives the permissions of all assigned roles.

Note

If a user is assigned multiple roles with conflicting permissions, Exabeam enforces the role with more permission. For example, if a role with lighter permissions and a role with full permissions are both assigned to a user, the user has full permissions.

To access the Roles page, navigate to Settings > User Management > Roles.

Out-of-the-Box Access Roles

Exabeam provides pre-configured access roles that restrict a user's tasks, actions, and views. A user may have more than one role. When a task, action, or view has more than one role associated with a user, the role with the greater access is applied.

Administrator: This role is intended for administrative access to Exabeam. Users assigned to this role can perform administrative operations on Exabeam, such as configuring the appliance to fetch logs from the SIEM, connecting to Active Directory to pull in contextual information, and restarting the analytics engine. The default admin credential belongs to this role. This is a predefined role provided by Exabeam and cannot be deleted.

Auditor: Users assigned to this role have only view privileges within the Exabeam UI. They can view all activities within the Exabeam UI, but cannot make any changes such as add comments or approve sessions. This is a predefined role provided by Exabeam.

Default permissions include:

  • View Comments – View comments.

  • View Activities – View all notable users, assets, sessions, and related risk reasons in the organization.

  • View Global Insights – View the organizational models built by Exabeam. The histograms that show the normal behavior for all entities in the organization can be viewed.

  • View Executive Info – View the risk reasons and the timeline of the executive users in the organization. You will be able to see the activities performed by executive users along with the associated anomalies.

  • View Incidents – View incidents.

  • View Infographics – View all the infographics built by Exabeam. You will be able to see the overall trends for the organization.

  • View Insights – View the normal behaviors for specific entities within the organization. The histograms for specific users and assets can be viewed.

  • Search Incidents – Search keywords in Incident Responder via the search bar.

  • Basic Search – Perform basic search on the Exabeam homepage. Basic search allows you to search for a specific user, asset, session, or a security alert.

  • View Search Library – View the Search Library provided by Exabeam and the corresponding search results associated with the filters.

  • Threat Hunting – Perform threat hunting on Exabeam. Threat hunting allows you to query the platform across a variety of dimensions, such as finding all users whose sessions contain data exfiltration activities or malware on their asset.

Tier 1 Analyst: Users assigned to this role are junior security analysts or incident desk responders who support the day-to-day enterprise security operation and monitoring. This type of role is not authorized to make any changes to the Exabeam system except for making user, session, and lockout comments. Users in this role cannot approve sessions or lockout activities. This is a predefined role provided by Exabeam.

Default permissions include:

  • View Executive Info – View the risk reasons and the timeline of the executive users in the organization. You will be able to see the activities performed by executive users along with the associated anomalies.

  • View Activities – View all notable users, assets, sessions, and related risk reasons in the organization.

  • View Infographics – View all the infographics built by Exabeam. You will be able to see the overall trends for the organization.

  • View Global Insights – View the organizational models built by Exabeam. The histograms that show the normal behavior for all entities in the organization can be viewed.

  • View Insights – View the normal behaviors for specific entities within the organization. The histograms for specific users and assets can be viewed.

  • Add AA Comments – Add comments for the various entities (users, assets, and sessions) within Exabeam.

  • Sending Incidents to Incident Responder – Send incidents to Incident Responder.

  • Basic Search – Perform basic search on the Exabeam homepage. Basic search allows you to search for a specific user, asset, session, or a security alert.

Tier 3 Analyst: Users assigned to this role perform more complex investigations and remediation plans. They can review user sessions and account lockouts, add comments, approve activities, and perform threat hunting. This is a predefined role provided by Exabeam and cannot be deleted.

Default permissions include:

  • View Activities – View all notable users, assets, sessions, and related risk reasons in the organization.

  • View Executive Info – View the risk reasons and the timeline of the executive users in the organization. You will be able to see the activities performed by executive users along with the associated anomalies.

  • View Global Insights – View the organizational models built by Exabeam. The histograms that show the normal behavior for all entities in the organization can be viewed.

  • View Infographics – View all the infographics built by Exabeam. You will be able to see the overall trends for the organization.

  • View Rules – View configured rules that determine how security events are handled.

  • View Insights – View the normal behaviors for specific entities within the organization. The histograms for specific users and assets can be viewed.

  • Approve Lockouts – Accept account lockout activities for users. Accepting lockouts indicates to Exabeam that the specific set of behaviors for that lockout activity sequence are whitelisted and are deemed normal for that user.

  • Accept Sessions – Accept sessions for users. Accepting sessions indicates to Exabeam that the specific set of behaviors for that session are whitelisted and are deemed normal for that user.

  • Add AA Comments – Add comments for the various entities (users, assets, and sessions) within Exabeam.

  • Manage Rules – Create/Edit/Reload rules that determine how security events are handled.

  • Manage Watchlist – Add or remove users from the Watchlist. Users that have been added to the Watchlist are always listed on the Exabeam homepage, allowing them to be scrutinized closely.

  • Sending Incidents to Incident Responder – Send incidents to Incident Responder.

  • Manage Search Library – Create saved searches as well as edit them.

  • Basic Search – Perform basic search on the Exabeam homepage. Basic search allows you to search for a specific user, asset, session, or a security alert.

  • View Search Library – View the Search Library provided by Exabeam and the corresponding search results associated with the filters.

  • Threat Hunting – Perform threat hunting on Exabeam. Threat hunting allows you to query the platform across a variety of dimensions, such as finding all users whose sessions contain data exfiltration activities or malware on their asset.

Data Privacy Officer: This role is needed only when the data masking feature is turned on within Exabeam. Users assigned to this role are the only users that can view personally identifiable information (PII) in an unmasked form. They can review user sessions and account lockouts, add comments, approve activities, and perform threat hunting. This is a predefined role provided by Exabeam.

See the section in this document titled Mask Data Within the Advanced Analytics UI for more information on this feature.

Default permissions include:

  • View Activities – View all notable users, assets, sessions, and related risk reasons in the organization.

  • View Executive Info – View the risk reasons and the timeline of the executive users in the organization. You will be able to see the activities performed by executive users along with the associated anomalies.

  • View Global Insights – View the organizational models built by Exabeam. The histograms that show the normal behavior for all entities in the organization can be viewed.

  • View Infographics – View all the infographics built by Exabeam. You will be able to see the overall trends for the organization.

  • View Insights – View the normal behaviors for specific entities within the organization. The histograms for specific users and assets can be viewed.

  • Basic Search – Perform basic search on the Exabeam homepage. Basic search allows you to search for a specific user, asset, session, or a security alert.

  • View Search Library – View the Search Library provided by Exabeam and the corresponding search results associated with the filters.

  • Threat Hunting – Perform threat hunting on Exabeam. Threat hunting allows you to query the platform across a variety of dimensions, such as finding all users whose sessions contain data exfiltration activities or malware on their asset.

Mask Data Within the Advanced Analytics UI

Note

To enable/disable and configure data masking, please contact your Exabeam technical representative.

Note

Data masking is not supported in Case Management or Incident Responder modules.

Data masking within the UI ensures that personal data cannot be read, copied, modified, or removed without authorization during processing or use. With data masking enabled, the only users able to see a user's personal information are those assigned the "View Clear Text Data" permission. The default role "Data Privacy Officer" is assigned this permission out of the box. Data masking is a configurable setting and is turned off by default.

To enable data masking in the UI, the dataMaskingEnabled field needs to be set to true. This is located in /opt/exabeam/config/tequila/custom/application.conf.

PII {
    # Globally enable/disable data masking on all the PII configured fields. Default value is false.
    dataMaskingEnabled = true
}

You're able to fully customize which PII data is masked or shown in your deployment. The following fields are available when configuring PII data masking:

  • Default – This is the standard list of PII values controlled by Exabeam. If data masking is enabled, all of these fields are encrypted.

  • Custom – Encrypt additional fields beyond the default list by adding them to this custom list. The default is empty.

  • Excluded – Do not encrypt these fields. Add fields that are in the default list to this excluded list to expose their values in your deployment. The default is empty.

For example, if you want to mask all default fields other than "task name" and also want to mask the "address" field, then you would configure the lists as shown below:

PII {
    # Globally enable/disable data masking on all the PII configured fields. Default value is false.
    dataMaskingEnabled = true
    dataMaskingSuffix = ":M"
    encryptedFields = {
        #encrypt fields
        event {
            default = [
                #EventFieldName
                "user",
                "account",
                ...
                "task_name"
            ]
            custom=["address"]
            excluded=["task_name"]
        }
        ...
    }
}
Mask Data for Notifications

You can configure Advanced Analytics to mask specific fields when sending notable sessions and/or anomalous rules via email, Splunk, and QRadar. This prevents exposure of sensitive data when viewing alerts sent to external destinations.

Note

Advanced Analytics activity log data is not masked or obfuscated when sent via Syslog. It is your responsibility to upload the data to a dedicated index which is available only to users with appropriate privileges.

Before proceeding through the steps below, ensure your deployment has:

  • Enabled data masking (instructions below)

  • Configured a destination for Notable Sessions notifications sent from Advanced Analytics via Incident Notifications

By default, all fields in a notification are unmasked. To enable data masking for notifications, the Enabled field needs to be set to true. This is located in the application.conf file in the path /opt/exabeam/config/tequila/custom.

NotificationRouter {
    ...
    Masking {
        Enabled = true
        Types = [...]
        NotableSessionFields = [...]
        AnomaliesRulesFields = [...]
    }
}

Use the Types field to add the notification destinations (Syslog, Email, QRadar, and/or Splunk). Then, use the NotableSessionFields and AnomaliesRulesFields to mask specific fields included in a notification.

For example, if you want to mask the user, source host and IP, and destination host and IP for notifications sent via syslog and Splunk, then you would configure the lists as shown below:

NotificationRouter {
    ...
    Masking {
        Enabled  = true
        Types = [Syslog, Splunk]
        NotableSessionFields = ["user", "src_host", "src_ip", "dest_host", "dest_ip"]

    }
}

Set Up User Management

Users are the analysts that have access to the Exabeam UI to review and investigate activity. These analysts also have the ability to accept sessions. Exabeam supports local authentication or authentication against an LDAP server.

Roles

Exabeam supports role-based access control. Under Default Roles are the roles that Exabeam has created; these cannot be deleted or modified. Selecting a role displays the permissions associated with that role.

Users can also create custom roles by selecting Create a New Role. In this dialogue box you will be asked to name the role and select the permissions associated with it.

Add an Exabeam Role

Exabeam's default roles include Administrator, Auditor, and Tier (1 and 3) Analyst. If you do not want to use these default roles or edit their permissions, create ones that best suit your organization.

To add a new role:

  1. Navigate to Settings > Exabeam User Management > Roles.

  2. Click Create Role.

  3. Fill the Create a new role fields and click SAVE. The search box allows you to search for specific permissions.

    Your newly created role should appear in the Roles UI under Custom Roles and can be assigned to any analyst.

  4. To start assigning users to the role, select the role and click Next, which will direct you to the Users UI to edit user settings. Edit the configuration for the users you wish to add the role to and click Next to apply the changes.

Supported Permissions

Administration

  • All Admin Ops: Perform all Exabeam administrative operations such as configuring the appliance, connecting to the log repository and Active Directory, setting up log feeds, managing users and roles that access the Exabeam UI, and performing system health checks.

  • Manage Users and Context Sources: Manage users and roles in the Exabeam Security Intelligence Platform, as well as the context sources used to enhance the ingested logs (e.g., assets, peer groups, service accounts, executives).

  • Manage context tables: Manage users, assets or other objects within Context Tables.

Comments

  • Add Advanced Analytics Comments: Add comments for the various entities (users, assets and sessions) within Exabeam.

  • Add Incident Responder Comments

Create

  • Create incidents

  • Upload Custom Services: Upload custom actions or services.

Delete

  • Delete incidents

Manage

  • Manage Bi-directional Communication: Configure inbound and outbound settings for Bi-Directional Communications.

  • Manage Data Ingest: Configure log sources and feeds and email-based ingest.

  • Manage Playbooks: Create, update, or delete playbooks.

  • Manage Services: Configure, edit, or delete services (3rd party integrations).

  • Manage Triggers: Create, update, or delete playbook triggers.

  • Run Playbooks: Run a playbook manually from the workbench.

  • Manage Checklist Definitions: Configure checklist definitions.

  • Manage ingest rules: Add, edit, or delete rules for how incidents are assigned, restricted, and prioritized on ingest.

  • Manage Queues: Create, edit, delete, and assign membership to queues

  • Manage Templates: Create, edit, or delete playbook templates.

  • Run Actions: Launch individual actions from the user interface.

View

  • Manage Incident Configs: Manage Incident Responder configs.

  • View API

  • View Executive Info: View the risk reasons and the timeline of the executive users in the organization. You will be able to see the activities performed by executive users along with the associated anomalies.

  • View health

  • View Raw Logs: View the raw logs that are used to build the events on the AA timeline.

  • View Infographics: View all the infographics built by Exabeam. You will be able to see the overall trends for the organization.

  • View Metrics: View the Incident Responder Metrics page.

  • View Activities: View all notable users, assets, sessions and related risk reasons in the organization.

  • View comments

  • View Global Insights: View the organizational models built by Exabeam. The histograms that show the normal behavior for all entities in the organization can be viewed.

  • View incidents

  • View Insights: View the normal behaviors for specific entities within the organization. The histograms for specific users and assets can be viewed.

  • View Rules: View configured rules that determine how security events are handled

Edit & Approve

  • Approve Lockouts: Accept account lockout activities for users. Accepting lockouts indicates to Exabeam that the specific set of behaviors for that lockout activity sequence are whitelisted and are deemed normal for that user.

  • Bulk Edit: Users can edit multiple incidents at the same time.

  • Edit incidents: Edit an incident's fields, edit entities & artifacts.

  • Manage Watchlist: Add or remove users from the Watchlist. Users that have been added to the Watchlist are always listed on the Exabeam homepage, allowing them to be scrutinized closely.

  • Accept Sessions: Accept sessions for users. Accepting sessions indicates to Exabeam that the specific set of behaviors for that session are whitelisted and are deemed normal for that user.

  • Delete entities and artifacts: Users can delete entities and artifacts.

  • Manage Rules: Create/Edit/Reload rules that determine how security events are handled

  • Sending incidents to Incident Responder

Search

  • Manage Search Library: Create saved searches as well as edit them.

  • Basic Search: Perform basic search on the Exabeam homepage. Basic search allows you to search for a specific user, asset, session, or a security alert.

  • Threat Hunting: Perform threat hunting on Exabeam. Query the platform across a variety of dimensions, such as finding all users whose sessions contain data exfiltration activities or malware on their asset.

  • Manage Threat Hunting Public searches: Create, update, delete saved public searches

  • Search Incidents: Can search keywords in Incident Responder via the search bar.

  • View Search Library: View the Search Library provided by Exabeam and the corresponding search results associated with the filters.

Data Privacy

  • View Unmasked Data (PII): Show all personally identifiable information (PII) in a clear text form. When data masking is enabled within Exabeam, this permission should be enabled only for select users that need to see PII in a clear text form.

Manage Users

Understand the difference between Roles and Users. Configure the analysts that have access to the Exabeam User Interface, add the analyst's information, assign them roles, and set up user permissions and access based on your organization's needs.

Users

Users are the analysts that have access to the Exabeam UI to review and investigate activity. These analysts have specific roles, permissions, and can be assigned Exabeam objects within the platform. They also have the ability to accept sessions. Exabeam supports local authentication or authentication against an LDAP server.

Add an Exabeam User
  1. Navigate to Settings > Exabeam User Management > Users.

  2. Click Add User.

  3. Fill the new user fields and select role(s), and then click SAVE.

Your newly created user should appear in the Users UI.

Set Up LDAP Server

If you are adding an LDAP server for the first time, the Add LDAP Server page displays when you reach the Import LDAP page. If you have already added an LDAP server, click Add LDAP Server to add more.

The add/edit LDAP Server page displays the fields necessary to query and pull context information from your LDAP server(s), including:

  • Server Type – Select either Microsoft Active Directory (default) or NetIQ eDirectory.

  • Primary IP Address or Hostname – Enter the LDAP IP address or hostname for the primary server of the given server type.

Note

For context retrieval in Microsoft Active Directory environments, we recommend pointing to a Global Catalog server. To list Global Catalog servers, enter the following command in a Windows command prompt window: nslookup -querytype=srv _gc._tcp.acme.local.

Replace acme.local with your company's domain name.

  • I have a secondary server – If the primary LDAP server is unavailable, Exabeam falls back to the secondary LDAP server if configured. Click this checkbox to add a secondary LDAP server and display a Secondary IP Address or Hostname field.

  • TCP Port – Enter the TCP port of the LDAP server. Optionally, select Enable SSL (LDAPS) and/or Global Catalog to auto-populate the TCP port information accordingly.

  • Bind DN – Enter the bind domain name, or leave blank for anonymous bind.

  • Bind Password – Enter the bind password, if applicable.

  • Base DN – Enter the base domain name. For example, DC=acme, DC=local, etc.

For Microsoft Active Directory:

  • LDAP attributes for Account Name – This field is auto-populated with the value sAMAccountName. Please modify the value if your AD deployment uses a different value.

For NetIQ eDirectory:

  • LDAP Attributes – The list of all attributes to be queried by the Exabeam Domain Service (EDS) component is required. When testing the connection to the eDirectory server, EDS collects a list of the available attributes from the server and displays that list as a drop-down menu. Select the name of the attribute from that list or provide a name of your own. Only the names of the LDAP attributes you want EDS to poll are required (i.e., not necessarily the full list). Additionally, EDS does not support other types of attributes, so you cannot add new attributes to the list below.

Click Validate Connection to test the LDAP settings.

If you selected Global Catalog, this button displays as Connect & Get Domains.
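Before entering these values in the UI, you can sanity-check the bind credentials and Base DN from any host with LDAP access to the directory. The following sketch uses the standard ldapsearch client; the hostname, Bind DN, Base DN, and account name are placeholders for your own environment, and port 3269 assumes a Global Catalog over SSL.

# Bind to a Global Catalog over LDAPS and look up a single account (placeholders throughout).
ldapsearch -H ldaps://dc01.acme.local:3269 \
  -D "CN=svc-exabeam,OU=Service Accounts,DC=acme,DC=local" -W \
  -b "DC=acme,DC=local" "(sAMAccountName=jdoe)" sAMAccountName mail department
# A successful bind returns the requested attributes; "Invalid credentials" or a similar
# error usually means the Bind DN, password, or Base DN needs correcting before use here.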

Set Up LDAP Authentication

In addition to local authentication Exabeam can authenticate users via an external LDAP server.

When you arrive at this page, 'Enable LDAP Authentication' is selected by default and the LDAP attribute name is also populated. To change the LDAP attribute, enter the new account name and click Save. To add an LDAP group, select Add LDAP Group and enter the DN of the group you would like to add. Test Settings tells you how many analysts Exabeam found in the group. From here you can select which role(s) to assign. It is important to note that these roles are assigned to the group and not to the individual analysts; if an analyst changes groups, their role automatically changes to the role(s) associated with their new group.

Single Sign-on and Multi-factor Authentication Using SAML

Exabeam users may have a single sign on vendor in their environment, such as Okta, Ping, Duo, Google, or Microsoft Active Directory Federation Services. Exabeam integrates with them, allowing administrators and users to sign on to Exabeam using their existing credentials.

With SAML authentication enabled, there is no need for users to enter credentials or remember and renew a password with Exabeam.

Configure SAML

Warning

If your instance of Exabeam is running in a private network, you must ensure webcommon.service.externalAddress points to the correct external IP address and is the same as <exabeam_master_host>, which was specified in the configuration for the IdP. The property points to the EXABEAM_IP environment variable, which is assigned during Exabeam deployment.

When Exabeam is deployed on AWS, there should not be any issues. When Exabeam is deployed on Google Cloud Platform, you may need to set the property in /opt/exabeam/config/common/web/default/application_default.conf.

Single sign-on: If your organization uses Okta, Ping Identity, Duo, or Google as an identity provider (IdP), you can configure single sign-on directly within the UI. Once configured, your users are automatically authenticated into the UI and will not be asked to create and/or enter specific login credentials.

Notice

For specific details and requirements to use Duo, please refer to our Community article, Configuring Duo for LDAP.

Multi-factor authentication: Similarly, Advanced Analytics automatically supports your multi-factor authentication (MFA, including two-factor authentication and/or two-step verification) through Okta, Ping Identity, Google, and Duo.

The SAML Status box shows the current condition of how your users are permitted to log in to the UI. Click Edit to configure how your users are permitted to log in, including:

  • Disabled – SAML was configured, but it is not currently enabled. Consequently, users from your organization can only log in with their Exabeam credentials; they will not be automatically authorized based on their SAML credentials.

  • Allowed – Users can log in with their SAML or Exabeam credentials. If they have Exabeam credentials, they will also be able to use them to log in.

  • Mandatory – Users can log in with their SAML credentials, but they cannot log in with their Exabeam credentials.

Configure an Identity Provider

Please contact your Technical Account Manager.

  1. Click the menu icon in the navigation bar, then navigate to Settings > Admin Operations > Additional Settings.

  2. Under User Management, select Configure SAML.

  3. Click Add Identity Provider.

  4. You can configure multiple identity providers for your organization, but enable only one at a time. By default, the IdP is enabled when you save. To disable it, toggle the IdP Disabled button.

  5. Under SAML Identity Provider, select a provider from the list of all supported providers.

  6. Decide how you want to configure the SSO:

    • If you have an XML metadata file from your IdP, select Upload the XML metadata file provided by your IdP. Click CHOOSE FILE, then upload the XML file.

    • If you don't have an XML metadata file, select Configure SSO manually. Click CHOOSE FILE, upload the IdP certificate, then enter the single sign-on URL. If applicable, enter a single log-out URL or a URL to redirect to after logging out.

  7. To map the identity provider attributes to Exabeam attributes (Email Address, Username, First Name, Last Name, and Group), configure the query attributes.

  8. Click SAVE. Your identity provider appears in the Identity Providers table.


You can also continue customizing the configuration by mapping your SAML groups to Exabeam user roles.

Map SAML Groups to Exabeam User Roles

Once you have configured a SAML identity provider, the Group Mappings option appears below the Identity Providers table.

To map your existing SAML groups to Exabeam user roles:

  1. Click Add Group.

  2. Select your configured Identity Provider.

    New Group Mapping to map users with the group with Identity Provider, Group Name, Exabeam User Roles.
  3. Enter a SAML Group Name.

  4. Use the checkboxes to select default and custom roles.

  5. Click Save.

Set Up Context Management

Logs tell Exabeam what the users and entities are doing while context tells us who the users and entities are. These are data sources that typically come from identity services such as Active Directory. They enrich the logs to help with the anomaly detection process or are used directly by the risk engine layer for fact-based rules. Regardless of where these external feeds are used, they all go through the anomaly detection layer as part of an event. Examples of context information potentially used by the anomaly detection layer are the location for a given IP address, ISP name for an IP address, and department for a user.

Analysts are able to view and edit Exabeam's out-of-the-box context tables as well as create their own custom tables. They can select a specific table, such as Executive Users, Service Accounts, etc. and see the details of the table and all of the objects within the table. Edits can be performed on objects individually or through CSV uploads.

Out-of-the-Box Context Tables

Context Table

Source

Available Actions

email_user

LDAP

This table is automatically populated when administrators integrate their LDAP system with Exabeam.

Administrators cannot add, edit, or delete the entries in this context table.

fullname_user

LDAP

This table is automatically populated when administrators integrate their LDAP system with Exabeam.

Administrators cannot add, edit, or delete the entries in this context table.

user_account

LDAP

This table is automatically populated when administrators integrate their LDAP system with Exabeam and add regular expression through the Advanced Analytics tab.

Administrators can add entries manually via CSV or AD filters. Where Administrators have manually added users, they can also edit or delete entries.

user_department

LDAP

This table is automatically populated when administrators integrate their LDAP system with Exabeam and add regular expression through the Advanced Analytics tab.

Administrators can add entries manually via CSV or Active Directory filters. Where Administrators have manually added users, they can also edit or delete entries.

user_division

LDAP

This table is automatically populated when administrators integrate their LDAP system with Exabeam and add regular expression through the Advanced Analytics tab.

Administrators can add entries manually via CSV or Active Directory filters. Where Administrators have manually added users, they can also edit or delete entries.

user_manager

LDAP

This table is automatically populated when administrators integrate their LDAP system with Exabeam and add regular expression through the Advanced Analytics tab.

Administrators can add entries manually via CSV or Active Directory filters. Where Administrators have manually added users, they can also edit or delete entries.

user_department_number

LDAP

This table is automatically populated when administrators integrate their LDAP system with Exabeam and add regular expression through the Advanced Analytics tab.

Administrators can add entries manually via CSV or Active Directory filters. Where Administrators have manually added users, they can also edit or delete entries.

user_country

LDAP

This table is automatically populated when administrators integrate their LDAP system with Exabeam and add regular expression through the Advanced Analytics tab.

Administrators can add entries manually via CSV or Active Directory filters. Where Administrators have manually added users, they can also edit or delete entries.

user_location

LDAP

This table is automatically populated when administrators integrate their LDAP system with Exabeam and add regular expression through the Advanced Analytics tab.

Administrators can add entries manually via CSV or Active Directory filters. Where Administrators have manually added users, they can also edit or delete entries.

user_title

LDAP

This table is automatically populated when administrators integrate their LDAP system with Exabeam and add regular expression through the Advanced Analytics tab.

Administrators can add entries manually via CSV or Active Directory filters. Where Administrators have manually added users, they can also edit or delete entries.

user_fullname

LDAP

This table is automatically populated when administrators integrate their LDAP system with Exabeam.

Administrators cannot add, edit, or delete the entries in this context table.

user_phone_cell

LDAP

This table is automatically populated when administrators integrate their LDAP system with Exabeam and add regular expression through the Advanced Analytics tab.

Administrators can add entries manually via CSV or Active Directory filters. Where Administrators have manually added users, they can also edit or delete entries.

user_phone_office

LDAP

This table is automatically populated when administrators integrate their LDAP system with Exabeam and add regular expression through the Advanced Analytics tab.

Administrators can add entries manually via CSV or Active Directory filters. Where Administrators have manually added users, they can also edit or delete entries.

user_is_privileged

Administrators

Administrators can add entries manually, via CSV, or Active Directory. Entries can also be edited or deleted.

Threat Intelligence Service Context Tables

The table below describes each threat intelligence feed available as a context table in Advanced Analytics:

Context Table

Description

is_ip_threat

IP addresses identified as a threat.

is_ip_ransomeware_ip

IP addresses associated with ransomware traffic.

is_tor_ip

Known Tor IP addresses.

reputation_domains

Domains associated with malware traffic.

web_phishing

Domains associated with phishing attacks.

For more information on Exabeam threat intelligence service, please see the section Threat Intelligence Service Overview.

Custom Context Tables

Exabeam provides several filters and lookups to get your security deployment running immediately. However, there may be assets and users within your organization that need particular attention and cannot be fully addressed out of the box. Custom context tables allow you the flexibility to create watchlists or reference lists for assets, threat intelligence indicators, and users/groups that do not fit in the typical deployment categories. Custom context tables let you put parts of your organization under extra monitoring or special scrutiny, such as financial servers, privileged insiders, and high-level departed employees.

Within Advanced Analytics, you can create watchlists using context tables. When creating the table, the Label attribute allows you to attach tags to records that match entries in your context table. This provides quick access to query your results and/or focus your tracking using a global characteristic.

You can also build rules based on entries in your context tables. Set up alerts, actions, or playbooks to trigger when conditions match records, such as access to devices in a special asset group.

Context Data
Prepare Context Data

You can upload data as CSV files with either key and value columns or a key-only column. All context tables include a Label to tag matching records into groups during parsing and filtering.

Key-value CSV – Two-field data file with a header row. This lookup lists correlations between the two fields, such as:

Key Fieldname, Value Fieldname

AC1Group, Accounts Receivable

AC2Group, Accounts Payable

Key-only CSV – Single-field data file with no header row. During data filtering, items are simply checked for presence on this list. For example, a watchlist context table, SpecialGroup, consists of user groups of special interest:

“Accounts Receivable”

“Accounts Payable”

“Accounting Database Admin”

You can create a correlation rule that sends an alert when the monitoring data contains a user whose group name matches any entry in the SpecialGroup table.

Label – The named tag associated with a record. This allows you to filter groups of records during parsing or filtering. You can also use labels to assemble watchlists based on groupings rather than by individual asset or user record.

Note

You can opt not to use labels by selecting No Label during table creation. Otherwise, labels are associated with a table and its records. For key-value context tables, the Label is drawn from the value field of the matching context table entry. For key-only context tables, the Label is the table attribute you enter in the Manual Assignment field during table creation and is used to tag all matching records.

New Context Table with Label Assignment selected as No Label
Create Custom Lookups

You must first create a table object to add contextual data to. Based on the needs of your organization, create the table with a key-only or key-value field structure and decide whether labels will be used. Then use the method that matches your data source to add content to the table.

Create a Context Table

To introduce context data into your environment, create a table object to contain your data and reference it in queries and lookups.

  1. Navigate to Settings > Accounts & Groups > Context Tables.

  2. At the top right of the UI, click the blue + to open the New Context Table dialog box.

    A plus sign icon to add Context table.
  3. Fill in the details of the type of context table that this will be.

    New Context Table with name, object type, key-value type, label assignment form.

    Fill in table attribute fields:

    Name – A unique name identifying the table in queries and in the context of your organization.

    Object Type – The type gives the table additional tagging (with information on the potential data source, such as LDAP for users or user groups).

    • Users – This object type is associated with users and user group context tables. LDAP data sources can be used to fill its content.

    • Assets – These are itemizable objects of value to your organization. These can be devices, files, or workstations/servers.

    • Miscellaneous – These are reference objects of interest, such as tags for groups of objects within a department or network zones.

    Type – Select the field structure in the table as Key Value or Key Only. See Prepare Context Data for more information. If you are creating a correlation context table, use Key Only.

    Label Assignment – Click the text source for creating the label or use no label. See Prepare Context Data for more information.

  4. Click Save to advance to the table details UI for the newly created context table.

Your table is ready to store data. The following sections describe ways to add data to your table. Each method is dependent on the data source and intended use of the table.

Import Data into a Context Table Using CSV

This is the most flexible method to create unconventional context tables as the CSV file can contain any category or type of data that you wish to monitor.

  1. Select your desired context table.

  2. Select the Upload Table icon.

    Context table with upward arrow to upload the context table.
  3. Click Upload CSV. From your file system, select the CSV file you wish to import, then select Next.

    An Upload CSV File to upload table and add entries to context table.

    Note

    Key and value (2 fields) tables require a header first row. Do not include a header for keys-only CSV files (1 field). Table names may be alpha-numeric with no blank spaces. (Underscore is acceptable.)

  4. Inspect the contents that will be added to your table. When you are done, select Apply Changes.


Once context has been integrated, it is displayed in the table. You can use the lookup tables in rules as required.


For assistance in creating custom context tables, contact Exabeam Customer Success by opening a case at Exabeam Community.

Import Data into a Context Table Using an LDAP Connection

This section details the steps required to create context tables to customize your lookups. In this example, we are creating a lookup table with two fields: the userAccountControl field and the User ID field. This allows the event enricher to map one to the other. For example, let's say you have a log that does not include the username, but instead includes the userAccountControl field. This lookup would map the two together. A similar use case would be badge logs: you could create a lookup table that maps the badge ID to the actual username, assuming the badge ID is contained in LDAP.

  1. Navigate to Settings > Accounts & Groups > Context Tables.

  2. Click the ‘+’ icon to add a new table.

  3. In this example, we use these settings:

    Name – useraccountcontrol_user

    Object Type – Users

    Type – Key Value

    Label Assignment – Automatic Assignment from value

    An example of creating New Context Table with name, object type, key-value type, label assignment form.
  4. Click Save.

    Click No Label if you do not want to add a label to matching records during parsing or filtering.

    The context table now appears in the Context Management tables list.

  5. Select the name of the context table you created in Step 4 to configure it with values.


    After clicking on useraccountcontrol_user you will be presented with the setup page for the useraccountcontrol_user context table.

  6. Click + Add Connection to connect the context table to an LDAP domain server.

    User account control in context table to add connections.
  7. Select the LDAP Server(s), Key, and Value to populate the context table. Optionally, filter the attribute source with conditions by clicking ADD CONDITION.

    New Connection in Context table management to add new LDAP connection.
  8. Click TEST CONNECTION to view and validate the test results, and then click SAVE.

    Test Connection result of an LDAP connection with key-value pair.

    Once context has been integrated, it is displayed in the table. You can use the lookup table in rules as required.

    User control in Context management to add Context table using LDAP connection.

    For assistance in creating custom context tables, contact Exabeam Customer Success by opening a case at Exabeam Community.

Audit Actions Using Logs

Advanced Analytics logs specific activities related to administrators and users of the product, including activities within the UI as well as configuration and server changes. This is especially useful for reviewing activities of departed employees as well as for audits (for example, GDPR).

Advanced Analytics logs the following events:

  • Log in and log out

  • Failed log in

  • User addition, update, and removal

  • Role addition, update, and deletion

  • Permission addition and deletion

  • Threat Hunter Search

  • API activation

  • Component restart

  • Log source addition, update, and deletion

  • Log feed addition, update, and deletion

  • Syslog enable and disable

  • Full and partial acceptance of a session

  • Full and partial acceptance of a lockout

  • Full and partial acceptance of an asset sequence

  • Starring of a session

  • Starring of an asset sequence

  • Watchlist addition, update, and deletion

These audit logs are stored in MongoDB. You can find them at exabeam_audit_db inside the audit_events collection. The collection stores the entire auditing history. You cannot purge audit logs or set retention limits.
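
For example, assuming MongoDB is reachable locally without authentication, you can inspect a few entries directly from the mongo shell (a read-only sketch; adjust the limit as needed):

mongo exabeam_audit_db --eval 'db.audit_events.find().limit(5).forEach(printjson)'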

Send Advanced Analytics Activity Log Data via Syslog

Access activity data via Syslog. Audit logs of administrative and analyst actions can be forwarded to an existing SIEM or Data Lake via Syslog. Exabeam sends the Advanced Analytics activity data every five minutes.

Note

Advanced Analytics activity log data is not masked or obfuscated when sent via Syslog. It is your responsibility to upload the data to a dedicated index which is available only to users with appropriate privileges.

To access activity data via Syslog:

  1. Navigate to Settings > Log Management > Incident Notification.

  2. Edit an existing Syslog destination, or create a new Syslog destination.

  3. Configure any applicable Syslog settings.

  4. After completing the applicable fields, click TEST CONNECTION.

    1. If the test fails, validate the configured fields and re-test connectivity until successful.

    2. If the test succeeds, continue to the next step.

  5. Click the AA/CM/OAR Audit checkbox.

  6. Click Add Notification.

Starting the Analytics Engine

Once the setup is complete, the administrator can start the Exabeam Analytics Engine. The engine will start fetching the logs from the SIEM, parsing, and then analyzing them. On the Settings page, go to Admin Operations then Exabeam Engine to access controls.

Actions can be restarted from a specific point in time – Exabeam will re-fetch and reprocess all the logs going forward from that time. Note that the date and time are given in UTC, starting at 00:00:00 for the selected date.

When you select Ingest Log Feeds (with logs selected) or Restart Processing, a settings menu is presented.

Restart the engine – Select this option if this is the first time the engine is run.

Restart from the initial training period – Restart engine using data initially collected.

Restart from a date – Reprocess based on specific date (UTC).

Additional Configurations

Configure Static Mappings of Hosts to/from IP Addresses

Hardware and Virtual Deployments Only

Note

To configure this feature, please contact your Technical Account Manager.

You can configure static mappings from hosts to IP addresses, and vice versa. This is especially useful for mapping domain controllers (DCs). Since DCs do not often change IPs, you can tie the DC hostname to a specific IP address. Additionally, if there is user activity that isn't tied to a hostname but is tied to an IP address, then you can map the user to their specific, static IP address. This helps maintain and enrich information in events that may be lost or unknown since the system cannot tie events to dynamic IP addresses.

Map IP addresses to hosts

Add them to the file: /opt/exabeam/data/context/dynamic_objects/static_ip_host_mapping.csv

CSV Format: [ip], [host]

Map hosts to IP addresses

Add them to the file: /opt/exabeam/data/context/dynamic_objects/static_host_ip_mapping.csv

CSV Format: [host], [ip]
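
For example, a single entry in each file might look like the following (the IP address and hostname are hypothetical placeholders, not values from your environment):

static_ip_host_mapping.csv:
10.1.2.3, dc01.example.com

static_host_ip_mapping.csv:
dc01.example.com, 10.1.2.3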

Associate Machine Oriented Log Events to User Sessions

Hardware and Virtual Deployments Only

Proxy and other generic sequence events (such as web, database, file activity, and endpoint) as well as some security and DLP alerts may generate logs that contain only machine names or IP addresses without user names. In Advanced Analytics, you can automatically associate these events with users by IP/host-to-user mapping.

Note

This feature is currently only available for sequence events in multi-node deployments.

User-Host/IP Association

Exabeam will create an IP/host-to-user association based on specific configurable events. (See example below.) The logic to associate users and hosts is flexible and is configurable by using the UserPresentOnHostIf parameter. For example, you can choose to associate a user and host in Kerberos logon events only if the IP is in a specific network zone.

The configuration also allows you to associate the user with any field based on event type. For example, you can associate the user in a Kerberos logon event with dest_host (destination host) and dest_ip (destination IP), and the user in a remote-access event with src_host (source host) and src_ip (source IP). The user of a remote logon event can be associated with both src_host and dest_host because the event indicates they are present on both.

User-Host Example

The example configuration below shows an association between user and IP event. Edits are made to /opt/exabeam/config/custom/custom_exabeam_config.conf:

UserPresentOnHostIf {
 kerberos-logon = {
  Condition = "not (EndsWith(user, '$') OR InList(user, 'system', 'local service', 'network service','anonymous logon'))"
  UserPresentOn = ["dest_host", "dest_ip"]
 }
 remote-logon = {
  Condition = "not (EndsWith(user, '$') OR InList(user, 'system', 'local service', 'network service','anonymous logon'))"
  UserPresentOn = ["dest_host", "src_host", "dest_ip", "src_ip"]
 }
 remote-access = {
  Condition = "InList(ticket_options, '0x40800000', '0x60810010') && not (EndsWith(user, '$') OR InList(user, 'system', 'local service', 'network service', 
'anonymous logon'))"
  UserPresentOn = ["src_host", "src_ip"]
 }
}

After editing the configuration file, restart services to apply changes:

exabeam-analytics-stop
exabeam-analytics-start
User-Event Association

Based on the host/IP-to-user association described above, Exabeam can associate an event with a host/IP to a user. This is done via the HostToUserMerger parameter. This configuration enables you to determine which events will utilize the created associations as well as which fields should be used to make it.

A user will be resolved from the host/IP only if one user is associated with this host/IP. If more than one user is associated, no user will be resolved.

User-event example

The example configuration below defines which events should be considered for resolving the user. The events web-activity-allowed and web-activity-denied are event types that will be associated with the user.

HostToUserMerger {
 Enabled = true
 EventTypes = [
  {
   EventType = "web-activity-allowed"
   MergeFields = ["src_host", “src_ip”]
  },
  {
   EventType = "web-activity-denied"
   MergeFields = ["src_host"]
  }
 ]
} 

After editing the configuration file, restart services to apply changes:

exabeam-analytics-stop
exabeam-analytics-start
Alert-User Association

The host/IP-to-user association will also be used to resolve the user in security and DLP alerts that do not have one. If exactly one user is associated with the host/IP when the alert triggers, that user is assigned to the alert. If more than one user is associated with the host, no user is assigned to the alert.

Display a Custom Login Message

You can create and display a custom login message for your users. The message is displayed to all users before they can proceed to log in.

To display a custom login message:

  1. On a web browser, log in to your Exabeam web console using an account with administrator privileges.

  2. Navigate to Settings > Admin Operations > Additional Settings.

    The Admin Operations section of the settings with the Additional Settings link highlighted in a red circle.
  3. Under Admin Operations, click Login Message.

    The Admin Operations settings panel with the Login Message link highlighted with a red circle.
  4. Navigate to Settings > Admin Operations > Login Message.

    Login Message in Admin Operations to set the custom login message.
  5. Click EDIT.

    Admin Operations settings, under the Login Message tab, with the Edit button highlighted with a red circle.
  6. Enter a login message in Message Content.

    Note

    The message content has no character limit and must be UTF-8 encoded. It supports empty lines between text. However, it does not support special print types, links, or images.

    Admin Operation settings, under the Login Message tab, with the Message Content header highlighted with a red circle.

    A common type of message is a warning message. The following example is a sample message:

    Usage Warning

    This computer system is for authorized use only. Users have no explicit or implicit expectation of privacy.

    Any or all uses of this system and all files on this system may be intercepted, monitored, recorded, copied, audited, inspected, and disclosed to an authorized site. By using this system, the user consents to such interception, monitoring, recording, copying, auditing, inspection, and disclosure at the discretion of the authorized site.

    Unauthorized or improper use of this system may result in administrative disciplinary action and civil and criminal penalties. By continuing to use this system you indicate your awareness of and consent to these terms and conditions of use. LOG OFF IMMEDIATELY if you do not agree to the conditions stated in this warning.

    Note

    This sample warning message is intended to be used only as an example. Do not use this message in your deployment.

  7. Click SAVE.

    Admin Operations settings, under the Login Message tab, with the Save button highlighted with a red circle.
  8. Click the Display Login Message toggle to enable the message.

    Note

    You can hide your message at any time without deleting it by disabling the message content.

    Display Login Message tab switched off.

Your custom login message is now shared with all users before they proceed to the login screen.


Configure Threat Hunter Maximum Search Result Limit

You can configure the maximum search result limit when using Threat Hunter’s search capabilities. By default, the result limit is set to 10,000 sessions.

Note

To configure this feature, please contact your Technical Account Manager.

The default result limit is located in the application_default.conf file at /opt/exabeam/config/tequila/default/application_default.conf.

All changes should be made to

/opt/exabeam/config/tequila/custom/application.conf.

To configure the default result limit, enter an acceptable value in place of 10000 at tequila.data.criteria:

finalQueryResultLimit = 10000

There is no restriction on the limit value; however, for very large intermediate results you should set the limit to at least 30,000 sessions.
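
As a sketch, assuming the key nests under tequila.data.criteria exactly as in the default file, the override in /opt/exabeam/config/tequila/custom/application.conf can be a single line:

tequila.data.criteria.finalQueryResultLimit = 30000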

Change Date and Time Formats

Hardware and Virtual Deployments Only

Change the way dates and times are displayed in Advanced Analytics, Case Manager, and Incident Responder.

Note

To configure this feature, please contact your Technical Account Manager.

Dates and times may appear slightly different between Advanced Analytics, Case Manager, and Incident Responder.

  1. Navigate to /opt/exabeam/config/tequila/custom/, then open the application.conf file.

  2. Enter a supported format value:

    • To configure how dates are formatted, enter a supported value after tequila.data.criteria.dateFormat = , in quotation marks:

      tequila.data.criteria.dateFormat = "[value]"
    • To configure how times are formatted, enter a supported value after tequila.data.criteria.timeFormat = , in quotation marks:

      tequila.data.criteria.timeFormat = "[value]"
  3. Save the application.conf file.

  4. Restart Advanced Analytics Restful Web Services:

    web-stop;
    web-start
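
For example, a minimal /opt/exabeam/config/tequila/custom/application.conf that switches to ISO dates and 12-hour times (values documented in the next section) might contain:

tequila.data.criteria.dateFormat = "ISO"
tequila.data.criteria.timeFormat = "12hr"
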
Supported Date and Time Formats

View all the ways you can format dates and times displayed in Advanced Analytics, Case Manager, and Incident Responder.

Date Formats

By default, dates are displayed in the "default" format, DD Month Year; for example, 27 September 2012.

Depending on the format, some areas of the product, like watchlists and user or asset profiles, may display a shortened or year-less version.

Value

Format

Example

Shortened Example

Year-less Example

"default"

DD Month YYYY

27 September 2012

27 Sep 2012

27 Sep

"default-short"

DD Mo YYYY

27 Sep 2012

n/a

27 Sep

"default-num"

DD-MM-YYYY

27-09-2012

n/a

27-09

"default-num-short"

DD-MM-YY

27-09-12

n/a

27-09

"us"

Month DD YYYY

September 27 2012

Sep 27 2012

Sep 27

"us-short"

Mo DD YYYY

Sep 27 2012

n/a

Sep 27

"us-num"

MM-DD-YYYY

09-27-2012

n/a

09-27

"us-num-short"

MM-DD-YY

09-27-12

n/a

09-27

"ISO"

YYYY-MM-DD (ISO 8601)

2012-09-27

n/a

09-27

"ISO-slash"

YYYY/MM/DD

2012/09/27

n/a

09/27

Time Formats

By default, times are displayed in 24hr format.

Value

Format

Notes

"24hr"

13:45

This is the default value in the configuration file.

For chart labels, the time appears as 13 instead of 1pm. Minutes aren't displayed.

"12hr"

1:45pm

Leading zeros aren't displayed. For example, the time appears as 1:45pm instead of 01:45pm.

Some areas of the product use a and p to indicate am and pm.

Set Up Machine Learning Algorithms (Beta)

Machine Learning (ML) algorithms require a different infrastructure than regular deployments. This infrastructure is necessary to run data science algorithms. ML infrastructure will install two new docker-powered services: Hadoop YARN and Advanced Analytics API.

Note

These machine learning algorithms are currently available as beta features.

Installation is only supported on EX4000 powered single- or multi-node deployments running Advanced Analytics i35 or later due to the high system resources needed for these jobs. ML infrastructure is a requirement for algorithms that drive the Personal Email Detection, Daily Activity Change Detection, and Windows Privileged Command Monitoring features.

Install and Deploy Machine Learning

Installation is done through the unified installer by specifying the ml product after Advanced Analytics has already been deployed. The build version needs to be identical to the version used for Advanced Analytics.

When asked for the docker tag of the image to be used for ML, make sure to use the same tag which was used for Advanced Analytics.

  1. Optionally, run this process in screen: screen -LS [yourname]_[todaysdate]

  2. Run the following script: /opt/exabeam_installer/init/exabeam-multinode-deployment.sh

  3. Select your inputs based on the following prompts:

    Add Product(s)
    Which product(s) do you wish to add? ['ml', 'lms', 'ir']: ml
    What is the docker tag for new ml images? <AA version_build>
    Would you like to override the default docker_gwbridge IP/CIDR? n
    Do you want to setup disaster recovery? n
  4. Stop the Log Ingestion Engine and the Analytics Engine at the shell, make configuration changes, and then restart services.

    1. exa-lime-stop; exa-martini-stop

    2. Edit EventStore parameters in /opt/exabeam/config/custom/custom_exabeam_config.conf:

      EventStore.Enabled = true
      EventStore.UseHDFS = true
    3. Navigate to /opt/exabeam/config/custom/custom_exabeam_config.conf and make sure that Event Store is disabled:

      EventStore.Enabled = false
    4. Restart the DS server, and then start the Log Ingestion Engine and the Analytics Engine:

      ds-server-stop; ds-server-start
      exa-lime-start; exa-martini-start
  5. Check the state of the DS server by inspecting the log

    /opt/exabeam/data/logs/ds-server.log.

  6. Check the DS server logs to ensure the algorithms have been enabled.

    grep enabled /opt/exabeam/data/logs/ds-server.log

    You should be able to see a list of all algorithms, along with their statuses and configurations.

    Note

    Navigate to /opt/exabeam/ds-server/config/custom/algorithms.conf and set Enabled = true on the DS algorithms you want to implement. If the custom algorithms.conf does not contain the DS algorithm you want to implement, copy over the corresponding algorithm block from /opt/exabeam/ds-server/config/default/algorithms_default.conf.

  7. Navigate to /opt/exabeam/config/custom/custom_exabeam_config.conf and enable EventStore:

    EventStore {
        UseHDFS = true
        Enabled = true
    }
    
  8. Restart the Log Ingestion Engine and the Analytics Engine to apply any updates:

    exa-lime-stop
    exa-lime-start
    exa-martini-stop
    exa-martini-start
  9. Continue to each module to complete your configurations.

Configure Machine Learning

All Machine Learning algorithms use EventStore and expect the data to be stored on HDFS, which must be manually activated by adding these lines to /opt/exabeam/config/custom/custom_exabeam_config.conf:

EventStore.Enabled = true
EventStore.UseHDFS = true

All other algorithm-specific configurations should be done in /opt/exabeam/ds-server/config/custom/algorithms.conf.

The defaults for each algorithm can be found in /opt/exabeam/ds-server/config/default/algorithms_default.conf. As with other configuration changes, only the options which are changed should be overwritten in the custom algorithms.conf file.
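
For example, to override only a single option, the custom algorithms.conf can contain just the changed key. A minimal sketch, using the dotted-path form shown later in the Daily Activity Change section (adjust the algorithm name as needed):

Algorithms.daily-activity-change.Enabled = true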

LogFetcher.LogDir in /opt/exabeam/config/custom/custom_exabeam_config.conf is the path for Martini to find events. DS algorithms use this path as well. Therefore, make sure that you have SDK.EventStoreHDFSPathTemplates in /opt/exabeam/ds-server/config/default/script.conf, which corresponds to LogFetcher.LogDir.

For example:

/opt/exabeam/ds-server/config/default/script.conf
EventStoreHDFSPathTemplates = [
"hdfs://hadoop-master:9000/opt/exabeam/data/input/(YYYY-MM-dd)/(HH).*.evt.gz"
]
/opt/exabeam/config/custom/custom_exabeam_config.conf
LogFetcher {
UseHDFS = true
LogDir = "/opt/exabeam/data/input"
# these are the default values; you don't have to override them in this config
HDFSHost = "hadoop-master"
HDFSPort = 9000
}

Note

You can free up space by removing data in hdfs://hadoop-master:9000/opt/exabeam/data/output, which is not required for DS deployments.

Upgrade Machine Learning Deployment

ML deployments have to be updated together with the underlying Advanced Analytics version. If Machine Learning is installed, the upgrade tool will ask both for a tag for Advanced Analytics and a tag for ML. Make sure to use the same tag for Advanced Analytics and ML. The format for the tag is <version>_<build #>.

Upgrading ML Custom Configurations

In i50.6 we have changed the source of processing events. Now, EventStore is no longer needed and should not be enabled to run DS algorithms. Instead, all ML algorithms read events from LogDir. Therefore, if you are upgrading from a version pre-i50.6, make sure EventStore.Type has been removed from these files:

  • ds-server/config/default/algorithms_default.conf

  • ds-server/config/custom/algorithms.conf

  • ds-server/config/default/script.conf

If you have custom settings, you must also make sure that you edit them correctly in order to preserve them. Custom configurations are not automatically updated.

See details on the required edits below:

In script.conf

Make sure that you remove EventStore.Type, and change EventStoreHDFSPathTemplates accordingly. Instead of an output generated by Martini, you should connect it to the Lime output.

Previous version of script.conf:

{
EventStoreHDFSPathTemplates = [
"hdfs://hadoop-master:9000/opt/exabeam/data/output/(YYYY-MM-dd)/(HH).[type]-events-{m,s?}.evt.gz",
"hdfs://hadoop-master:9000/opt/exabeam/data/output/(YYYY-MM-dd)/(HH).[type]-events-{m,s?}.[category].evt.gz"
]
EventStore {
#Event type. Can be Raw, Container or Any
Type = "Container"
#Event category. All available categories are in event_categories.conf
Categories = ["all"]
}
}

New version of script.conf:

{
EventStoreHDFSPathTemplates = [
"hdfs://hadoop-master:9000/opt/exabeam/data/input/(YYYY-MM-dd)/(HH).*.[category].evt.gz"
]
EventStore {
#Event category. All available categories are in event_categories.conf
Categories = ["all"]
}
}

To check that everything runs correctly, check LogFetcher.LogDir in /opt/exabeam/config/custom/custom_exabeam_config.conf for the path to the events folder:

LogFetcher {
UseHDFS = true
# this path to the events folder in HDFS should be the same as in
# script.conf EventStoreHDFSPathTemplates
LogDir = "/opt/exabeam/data/input"
# these are the default values; you don't have to override them in this config
HDFSHost = "hadoop-master"
HDFSPort = 9000
}

This path should be the same as in script.conf EventStoreHDFSPathTemplates:

EventStoreHDFSPathTemplates = [
"hdfs://hadoop-master:9000/opt/exabeam/data/input/(YYYY-MM-dd)/(HH).*.[category].evt.gz"
]
LogDir = "/opt/exabeam/data/input"

In algorithms.conf

If you customized EventStore.Type for the personal-email-identification, daily-activity-change, or wincli-command-centric algorithm, then you must ensure that you remove the EventStore.Type parameter from the configuration:

Previous version of algorithms.conf:

personal-email-identification {
...
EventStore {
Type = "Container"
Categories = ["alerts"]
}
...
}

New version of algorithms.conf:

personal-email-identification {
...
EventStore {
Categories = ["alerts"]
}
...
}

To check that everything runs correctly, check the log files after launching exabeam-analytics:

  • AA-API: /opt/exabeam/data/logs/aa-api.log

  • DS server: /opt/exabeam/data/logs/ds-server.log (Spark log files for all algorithms are located in the folder /opt/exabeam/data/logs/ds-server.)

  • Exabeam: /opt/exabeam/data/logs/exabeam.log

You can also check processed events:

tail -f -n 300 /opt/exabeam/data/logs/exabeam.log | grep Processed

You should not see "0 events" for Processed events. If "0 events" persists, the paths to the event files are configured improperly. If you run into this issue, check LogFetcher.LogDir in /opt/exabeam/config/custom/custom_exabeam_config.conf. The HDFS folder should match what is specified in LogFetcher.LogDir, and the folder should contain date folders with files such as 00.*.evt.gz - 23.*.evt.gz.

Checking ML Status

You can check the status of DS algorithms in the mongo data_science_db. There is a separate collection with the states for each algorithm. You can also check the progress in the Martini logs:

tail -f /opt/exabeam/data/logs/exabeam.log
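
As a sketch, assuming MongoDB is reachable locally without authentication, you can list the per-algorithm state collections from the mongo shell:

mongo data_science_db --eval 'printjson(db.getCollectionNames())'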

Deactivate ML

To deactivate all ML components, shut down the respective services:

ds-server-stop
aa-api-stop
hadoop-yarn-stop
Detect Daily Activity Change

Daily activity change detection identifies significant changes in a user's overall behavior across both sessions (e.g., Windows, VPN) and sequence events (e.g., web activity, endpoint activity).

In addition to examining individual activities, Advanced Analytics also looks at anomalies in the overall patterns of the daily activities of a user. For example, taken individually it might not be anomalous for a user to access a server remotely that has been accessed before or download files from Salesforce, but a combination of activities could be anomalous based on the user's daily activity behavior.

The daily activity change will generate an event. If today's behavior is significantly different from past behavior, then that event will also generate a triggered rule (DAILY-ACTIVITY-CHANGE) for the event. The risk score from daily activity change is transferred to the user's session just like any other web or endpoint sequence.

Daily activity change detection is available as a beta capability and by default the feature is turned off.

Configuration Prerequisites

Ensure that you have the Machine Learning infrastructure (beta) installed. If you do not, follow the instructions in the section Machine Learning Algorithms (Beta). Then return to these configuration instructions.

Configuration
  1. Machine Learning Algorithms (Beta) must be deployed in order for the feature to work. Installation is done through the unified installer by specifying the ml product. The build version needs to be identical to the version used for Advanced Analytics.

  2. To enable Daily Activity Change add the following line to /opt/exabeam/ds-server/config/custom/algorithms.conf:

    Algorithms.daily-activity-change.Enabled = true
  3. To enable Daily Activity Change, in /opt/exabeam/ds-server/config/custom/algorithms.conf set:

    daily-activity-change {
     ...
    Enabled = true
    }
  4. EventStore must be disabled:

    Make sure that in /opt/exabeam/config/custom/custom_exabeam_config.conf: EventStore.Enabled = false

  5. EventStore must also be active for the feature to work:

    Add the following lines to /opt/exabeam/config/custom/custom_exabeam_config.conf:

    EventStore.Enabled = true
    EventStore.UseHDFS = true
Daily Activity Change Parameters

You can customize configuration parameters for the algorithm under daily-activity-change within algorithms.conf (/opt/exabeam/ds-server/config/custom/algorithms.conf). Refer to algorithms_default.conf (/opt/exabeam/ds-server/config/default/algorithms_default.conf) for default settings.

  • VarianceThreshold = 0.95 – variance threshold used by PCA

  • ResidueThreshold = 1 – above this threshold is considered anomalous

  • MinTrainingPeriod = 30 – a minimum period of historic data required to detect daily activity change

  • TrainingPeriod = 90 – data from eventTime - trainingPeriod to eventTime will be taken to train the algorithm

  • RetentionPeriod = 180 – keep historic data for this period

  • RuleId = "DAILY-ACTIVITY-CHANGE" – in mongo triggered_rule_db triggered_rule_collection all triggered rules by this algorithm will be saved with rule_id = “DAILY-ACTIVITY-CHANGE”

  • RuleEventType = "daily-activity" – in mongo triggered_rule_db.triggered_rule_collection all triggered rules by this algorithm will be saved with rule_event_type = “daily-activity”

  • DistinctCountIntervalMs = 600000 – timestamps in events from EventStore will be rounded down to a multiple of 600000 ms (10 minutes).

    • For example: 1406877142000 = Friday, August 1, 2014 7:12:22 AM

    • Becomes: 1406877000000 = Friday, August 1, 2014 7:10:00 AM

Verify Intermediate Results

This algorithm saves results in the Mongo database. You can check database ds_dac_db. It should have two collections: event_weight and user_activity. They should not be empty while processing.

You can also check triggered_rule_db in collection triggered_rule_collection. There should be some events with rule_id = DAILY-ACTIVITY-CHANGE if there are suspicious users.
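
As a quick sketch, assuming MongoDB is reachable locally without authentication, you can count those triggered rules from the mongo shell (the rule_id value comes from the parameter description above):

mongo triggered_rule_db --eval 'db.triggered_rule_collection.find({rule_id: "DAILY-ACTIVITY-CHANGE"}).count()'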

Enable Daily Activity Change

Ensure that you have the Machine Learning infrastructure installed. If you do not, follow the instructions in the section Set Up Machine Learning Algorithms (Beta). Then return to these configuration instructions.

  1. Machine Learning Infrastructure must be deployed in order for the feature to work. Installation is done through the unified installer by specifying the ml product. The build version needs to be identical to the version used for Advanced Analytics.

  2. Add the following line to /opt/exabeam/ds-server/config/custom/algorithms.conf:

    Algorithms.daily-activity-change.Enabled = true
  3. EventStore must also be active for the feature to work.

    Add the following lines to /opt/exabeam/config/custom/custom_exabeam_config.conf:

    EventStore.Enabled = true
    EventStore.UseHDFS = true
Monitor Windows Privileged Commands

Note

To configure this feature, please contact your Technical Account Manager.

Advanced Analytics now identifies anomalous behaviors around Windows privileged commands performed via command line by privileged users. Attackers move through a network using native Windows commands in order to collect information, perform reconnaissance, spread malware, etc. The pattern of Windows command usage by attackers is statistically and behaviorally different from that of legitimate users, and therefore it is possible to detect anomalous behaviors involving command execution. Exabeam runs an offline machine learning algorithm to detect anomalous Windows command execution and assigns risk scores to the users performing them.

Associated Rules:

ID

Name

Description

EPA-F-CLI

Suspicious Windows process executed

A native Windows command has been executed which is suspicious for this type of user. For example, a non-technical user is executing complicated PowerShell commands. Check with the user if they are aware of this and who/what is behind it.

Configuration Prerequisites

Ensure that you have the Machine Learning infrastructure (beta) installed. If you do not, follow the instructions in the section Machine Learning Algorithms. Then return to these configuration instructions.

Configuration

Configuration changes should be made in /opt/exabeam/config/custom/custom_exabeam_config.conf. To enable CLI detection, set Enabled = true.

Field Descriptions:

  • Enabled – Set to true to enable detection; set to false to disable.

  • CmdFlagsRegex – Regular expressions used for flag extraction.

  • CacheSize – Number of process IDs to be stored.

  • CacheExpirationTime – The number of days after which CacheSize is reset.

  • Commands – List of the CLI commands that the algorithm will monitor.

  1. Machine Learning Algorithms (Beta) must be deployed in order for the feature to work. Installation is done through the unified installer by specifying the ml product. The build version needs to be identical to the version used for Advanced Analytics.

  2. To enable Windows Command Line Algorithm, in /opt/exabeam/ds-server/config/custom/algorithms.conf set:

    wincli-command-centric {
     ...
    Enabled = true
    }

    Commands and CmdFlagsRegex should be the same as in the custom_exabeam_config.conf.

  3. EventStore must be disabled:

    Make sure that in /opt/exabeam/config/custom/custom_exabeam_config.conf: EventStore.Enabled = false

Windows Privileged Command Monitoring Parameters

You can customize configuration parameters for the algorithm under wincli-command-centric within algorithms.conf (/opt/exabeam/ds-server/config/custom/algorithms.conf). Refer to algorithms_default.conf (/opt/exabeam/ds-server/config/default/algorithms_default.conf) for default settings.

  • TrainingPeriod = 40 – data from eventTime - trainingPeriod to eventTime will be taken to train the algorithm

  • BinThresholds – bins with size above the threshold are ignored. By default:

    BinThresholds {
    flag = 100
    directory = 100
    parent = 100
    }
  • Commands = ["at.exe", "bcdedit.exe", "cscript.exe", "csvde.exe"...] – list of the CLI commands that the algorithm will monitor

  • CmdFlagsRegex = "\\s(--|-|/)[a-zA-Z0-9-]+" – regular expressions used to extract flags from the command

  • HistoricStatsCollection = "command_centric_historic_stats" – collection in ds_wincli_db which will retain statistics for Martini rule behaviour

Verify Intermediate Results

To verify the intermediate results, you can look for data in the ds_wincli_db collections command_centric_historic_stats and command_centric_daily_stats.

Support Information

This feature is supported in single and multi-node environments on the EX4000 but not on the EX2000 single-node environment.

Detect Phishing

Note

To configure this feature, please contact your Technical Account Manager.

Advanced Analytics now detects users who visit suspected phishing websites. Phishing often starts with a domain name string that has the look and feel of a legitimate domain, but is not. Phishers target the Internet's most recognizable domain names (google.com, yahoo.com, etc.) and make slight changes to these domain names in order to fool unassuming eyes. Phishing detection uses lexical analysis to identify whether a domain is a variant of popular domain names. In addition, it checks URLs against a whitelist of popular legitimate domains and a blacklist of identified suspicious domains. It also uses substring searches to identify domains that contain the domain name of a popular site as a substring within the suspect domain. For example, www.gmail.com-hack.net contains the recognizable "gmail.com" within the suspect domain.

Associated Rules:

ID

Name

Description

WA-Phishing

Web activity to a phishing domain

Web activity to a suspected phishing domain has been detected. The domain is suspected as Phishing based on Exabeam data science algorithms.

Configuration

Configuration should be made in /opt/exabeam/config/custom/custom_exabeam_config.conf.

To enable Phishing Detection, set PhishingDetector.Enabled = true.

Support Information

Supported in single and multi-node environments with EX2000 and EX4000.

Restart the Analytics Engine

Administrators typically need to restart the Analytics Engine when configuration changes are made to the system, such as adding new log feeds to be analyzed by Exabeam or changing risk scores for an existing rule.

Exabeam will store time-based records in the database for recovery and granular reprocessing. The histograms and the processing state of the Exabeam Analytics Engine are time-stamped by week and stored in the database. This allows the Exabeam Analytics Engine to be able to go back to any week in the past and continue processing.

To illustrate, let's say that the Exabeam Analytics Engine started processing logs from January 1, 2016, and is currently processing today, April 15, 2016. The administrator would like to ingest new Cloud application log feeds into Exabeam and start reprocessing from a time in the past, say March 30, 2016. The administrator would stop the Exabeam Analytics Engine and then restart processing from March 30, 2016. The system will go back to find the weekly boundary where the state of the nodes and the models are consistent - which might mean a few days before March 30, 2016 - and start processing all the configured log feeds from that point in time.

Navigate to Settings > Admin Operations > Exabeam Engine.

Upon clicking Restart Processing, the Processing Feeds page appears. You can choose to:

  • Restart the engine from where it left off.

  • Restart and reprocess all the configured log feeds from the initial training period.

  • Restart from a specific date. The Analytics Engine will choose the nearest snapshot available for the date chosen and reprocess from this date.

Note

Reprocessing can take a considerable amount of time depending on the volume of data that needs to be reprocessed.

Caution

Upon clicking Process, a success page loads. If you are reconfiguring a secondary appliance, DO NOT click Start Exabeam Engine on the success page. Rather, please contact your administrator.

Note

If a Log Ingestion Engine restart is required when you attempt to restart the Analytics Engine, you will be prompted with a dialog box to also restart the Log Ingestion Engine. Advanced Analytics will intelligently handle the coordination between the two Engines. The Log Ingestion Engine will restart from the same time period as the Analytics Engine. You can choose to cancel the restart if you would like the Log Ingestion Engine to finish its current process, but this will also cancel the Analytics Engine restart procedure.

If you have made configuration changes, the system will check for any inadvertent errors in the configuration files before performing the restart. If the custom configuration validation identifies errors in the config files, it will list the errors and not perform the restart. Otherwise, it will restart the analytics engine as usual.

Custom Configuration Validation

Hardware and Virtual Deployments Only

Any edits you make to your Exabeam custom configuration files are validated before you are able to restart the analytics engine to apply them to your system. This helps prevent Advanced Analytics system failures due to inadvertent errors introduced to the config files.

The system validates Human-Optimized Config Object Notation (HOCON) syntax, for example, a missing quote or incorrect capitalization ("SCOREMANAGER" instead of "ScoreManager"). The validation also checks for dependencies such as extended rules in custom config files that are missing dependencies within default config files. Some additional supported validation examples are:

  • Value validity and ranges

  • Operators

  • Brackets

  • Date formats

  • Rule expressions

  • Model dependencies

If found, errors are listed by file name during the analytics engine restart attempt.

From here you can fix the configuration errors, click Cancel to close the modal, and retry the restart.

Only the config files related to Advanced Analytics are validated:

  • custom_exabeam_config.conf (includes default config)

  • cluster.conf

  • custom_lime_config.conf

  • event_builder.conf

  • models.conf

  • parsers.conf (includes both default and custom)

  • rule_labels.json

  • rules.conf

  • custom_event_categories.conf

In addition to helping you troubleshoot your custom config edits, Advanced Analytics also saves the last known working config files. Every time the system successfully restarts, a backup is made and stored for you.

The backups are collected and zipped in /opt/exabeam/config/backup under custom_configuration_backups_martini. All zipped files are named custom_config_backup_<date>_<time>, with the time given in UTC server time. The last ten backups are stored, and the oldest copy is deleted to make room for a new backup.
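
For example, to see which backups are currently stored (a simple sketch; the file names depend on when your backups ran):

ls -lt /opt/exabeam/config/backup/custom_configuration_backups_martini/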

You may choose to Roll Back to the latest backup if you run into configuration errors that you are unable to fix. If you do so the latest backup is restored and the analytics engine is not restarted.

Advanced Analytics Transaction Log and Configuration Backup and Restore

Hardware and Virtual Deployments Only

Rebuilding a failed worker node host (for example, from a failed disk on an on-premises appliance) or shifting a worker node host to new resources (such as in AWS) takes significant planning. One of the more complex and error-prone steps is migrating the configurations. Exabeam provides a backup mechanism for layered data format (LDF) transaction log and configuration files to minimize the risk of error. To use the configuration backup and restore feature, you must have:

  • Amazon Web Services S3 storage or an active Advanced Analytics worker node

  • Cluster with two or more nodes

  • Read and write permission for the credentials you will configure to access the base path at the storage destination

  • A scheduled task in Advanced Analytics to run backup to the storage destination

Note

To rebuild after a cluster failure, it is recommended that cloud-based backups be used. To rebuild nodes from disk failures, back up files to a worker node or a cloud-based destination.

If you want to save the generated backup files to your first worker node, no further configuration of an external storage destination is needed. A worker node destination addresses possible disk failure at the master node appliance, but it is not recommended as the sole method for disaster recovery.

If you are storing your configurations at an AWS S3 location, you will need to define the target location before scheduling a backup.

  1. Go to Settings > Additional Settings > Admin Operations > External Storage.

  2. Click Add to register an AWS backup destination.

  3. Fill in all fields and then click TEST CONNECTION to verify the connection credentials.

  4. Once a working connection is confirmed as Successful, click SAVE.

Once you have a verified destination to store your files, configure and schedule a recurring backup.

  1. Go to Settings > Additional Settings > Backup & Restore > Backups.

  2. Click CREATE BACKUP to generate a new schedule record. If you are changing the destination, click the edit icon on the displayed record.

  3. Fill in all fields and then click SAVE to apply the configuration.

    Warning

    Time is given in UTC.

A successful backup will place a backup.exa file at either the base path of the AWS destination or /opt/exabeam/data/backup at the worker node. If the scheduled backup fails to write files to the destination, confirm there is enough space at the destination to hold the files and that the exabeam-web-common service is running. (If exabeam-web-common is not running, review its application.log for hints as to the possible cause.)

In order to restore a node host using files stored off-node, you must have:

  • administrator privileges to run tasks at the host

  • SSH access to the host

  • free space at the restoration partition of the master node host that is greater than 10 times the size of the backup.exa backup file

  1. Copy the backup file, backup.exa, from the backup location to the restoration partition. This should be a temporary work directory (<restore_path>) at the master node.

  2. Run the following to unpack the EXA file and repopulate files.

    sudo /opt/exabeam/bin/tools/exa-restore <restore_path>/backup.exa

    exa-restore will stop all services, restore files, and then start all services. Monitor the console output for error messages. See Troubleshooting a Restoration if exa-restore is unable to run to completion.

  3. Remove backup.exa and the temporary work directory when restoration is completed.

If restoration does not succeed, try the solutions below. If the scenarios listed do not match your situation, contact Exabeam Customer Success by opening a case at Exabeam Community.

Not Enough Disk Space

Select a different partition to restore the configuration files to and try the restore again. Otherwise, review the files stored in the target destination and offload files to create more space.

Restore Script Cannot Stop All Services

Use the following to manually stop all services:

source /opt/exabeam/bin/shell-environment.bash && everything-stop
Restore Script Cannot Start All Services

Use the following to manually start all services:

source /opt/exabeam/bin/shell-environment.bash && everything-start
Restore Script Could Not Restore a Particular File

Use tar to manually restore the file:

# Determine the task ID and base directory (<base_dir>) for the file restoration that failed.
# Go to the <base_dir>/<task_id> directory and apply the following command:
sudo tar -xzpvf backup.tar backup.tgz -C <base_dir>

# Manually start all services.
source /opt/exabeam/bin/shell-environment.bash && everything-start

Reprocess Jobs

Access the Reprocessing Jobs tab to view the status of jobs (for example, completed, in-progress, pending, and canceled), view specific changes and other details regarding a job, and cancel a pending or in-progress job.

The Reprocessing Jobs table in Exabeam Engine under Admin Operations lists each job's Status, Creator, Created, Started, Ended, and Duration, and lets you refresh the status.

If you wish to cancel a reprocessing job for any reason, select the job in the Reprocessing Jobs table and then click Cancel Job.

Configure Notifications About Reprocessing Job Status Changes

You can configure email and Syslog notifications for certain reprocessing job status changes, including start, end, and failure.

To configure notifications for reprocessing job status changes:

  1. Navigate to Settings > Log Management > Incident Notification.

  2. Select an existing notification or create a new notification. You can choose either Syslog or email.

  3. Select the reprocessing jobs notifications according to your business needs (Job status changes and/or Job failures).

  4. Save your changes.

Re-Assign to a New IP (Appliance Only)

Hardware Deployments Only

Note

These instructions apply to Exabeam appliances only. For instructions on re-assigning IPs in virtual deployments, please contact Exabeam Customer Success by opening a case at Exabeam Community.

  1. Set up a named session to connect to the host. This will allow the process to continue in the event you lose connection to the host.

    screen -LS [session_name]
  2. Enter the cluster configuration menu.

    source /opt/exabeam_installer/init/exabeam-multinode-deployment.sh
  3. From the list of options, choose Change network settings.

  4. Choose Change IP of cluster hosts.

  5. Choose Change IP(s) of the cluster - Part I (Before changing IP).

  6. You will go through a clean up of any previous Exabeam installations.

    Do you want to continue with uninstalling the product? [y/n] y
  7. Acknowledge the Exabeam requisites.

    **********************************************************************
    Part I completed. Nuke successful. Product has been uninstalled.
    ***Important***
    Before running Part II, please perform these next steps below (Not optional!):
    - Step 1 (Manual): Update the IPs (using nmtui or tool of choice)
    - Step 2 (Manual): Restart network (e.g., systemctl restart network)
    **********************************************************************
    Please enter 'y' if you have read and understood the next steps: [y/n] y
  8. Open nmtui to change the IP address on each host in the cluster whose IP address will be changed.

    sudo nmtui
  9. Go to Edit Connection and then select the network interface.

  10. The example below shows the menu for the network hardware device eno1. Go to ETHERNET > IPv4 CONFIGURATION.


    Warning

    Please apply the correct subnet CIDR block when entering [ip]/[subnet]. Otherwise, network routing will fail or behave unpredictably.

  11. Set the configuration to MANUAL, and then modify the IP address in Addresses.

  12. Click OK to save changes and exit the menu.

  13. Restart the network services.

    sudo systemctl restart network
  14. Enter the cluster configuration menu again.

    /opt/exabeam_installer/init/exabeam-multinode-deployment.sh
  15. Choose Change network settings.

  16. Choose Change IP of cluster hosts.

  17. Choose Change IP(s) of the cluster - Part II (Before changing IP)

  18. Acknowledge the Exabeam requisites.

    **********************************************************************
    Please make sure you have completed all the items listed below:
    - Complete Part I successfully (nuke/uninstall product)
    - (Manual) Update the IPs (using nmtui or tool of choice)
    - (Manual) Restart network (e.g., systemctl restart network)
    **********************************************************************
    Do you want to continue with Part II? [y/n] y
    
  19. Provide the new IP of the host.

    What is the new IP address of [hostname]? (Previous address was 10.70.0.14)[new_host_ip]
  20. Update your DNS and NTP server information, if they have changed. Otherwise, answer n.

    Do you want to update your DNS server(s)? [y/n] n
    Do you want to update your NTP server? [y/n] n
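Optionally, you can confirm that the host picked up the new address after Part II completes; a minimal check, assuming the eno1 interface from the example above:

# Show the addresses currently assigned to the interface that was edited in nmtui.
ip addr show eno1

# Confirm the host is reachable at its new address from another host in the cluster.
ping -c 3 [new_host_ip]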

Add Worker Nodes to an Advanced Analytics Cluster

Hardware and Virtual Deployments Only

Add nodes to an existing cluster to move from a standalone to a multi-node deployment.

You are prompted to answer questions about how your cluster should be configured. After you answer these questions, the deployment takes from 20 minutes to 2 hours to finish, depending on how many nodes you deploy.

Each time you add nodes to the cluster, don't add more than 50 percent of the number of nodes you started with. For example, if you start with 20 nodes and you want to create a cluster with 100 nodes, add 10 nodes at a time until you reach 100 nodes.

The cluster must have at least two worker nodes. You can't create a cluster that has just one worker node.

Once you add a node to a cluster, you can't remove it.

Have the following available and provisioned:

  • Exabeam credentials

  • IP addresses of your master and worker nodes

  • Credentials for inter-node communication (Exabeam can create these if they do not already exist)

To add a worker node:

  1. Start a new screen session:

    screen -LS new_screen
    
  2. Run the command below to start the deployment:

    /opt/exabeam_installer/init/exabeam-multinode-deployment.sh
  3. Menu options appear. Select Add new nodes to the cluster.

  4. Indicate how the nodes should be configured. For example, to set up a multi-node environment with a master node and two worker nodes:

    How many nodes do you wish to add? 2
    What is the IP address of node 1 (localhost/127.0.0.1 not allowed)? 10.10.2.88
    What are the roles of node 1? ['uba_master', 'uba_slave']: uba_master
    What is the IP address of node 2 (localhost/127.0.0.1 not allowed)? [enter IP address]
    What are the roles of node 2? ['uba_master', 'uba_slave']: uba_slave
    What is the IP address of node 3 (localhost/127.0.0.1 not allowed)? [enter IP address]
    What are the roles of node 3? ['uba_master', 'uba_slave']: uba_slave
  5. Network Time Protocol (NTP) keeps your computer's clocks in sync. Indicate how that should be configured:

    • If you have a local NTP server, input that information.

    • If you don't have a local NTP server but your server has Internet access, input the default pool.ntp.org.

    • If you don't have an NTP server and don't want to sync with the default NTP server, input none.

    What's the NTP server to synchronize time with? Type 'none' if you don't have an NTP server and don't want to sync time with the default NTP server group from ntp.org. [pool.ntp.org] pool.ntp.org
  6. Indicate whether to configure internal DNS servers and how:

    • If you would like to configure internal DNS servers, input y.

    • If you don't want to configure internal DNS servers, input n.

    Would you like to add any DNS servers? [y/n] n
  7. If there are any conflicting networks in the user's domain, override the docker_bip/CIDR value. If you change any of the docker networks, the product automatically uninstalls before you deploy it.

    • To override the value, input y.

    • If you don't want to override the value, input n.

    Would you like to override the default docker BIP (172.17.0.1/16)? [y/n] n
    Enter the new docker_bip IP/CIDR (minimum size /25, recommended size /16): 172.18.0.1/16
    Would you like to override the calico_network_subnet IP/CIDR (10.50.48.0/20)? [y/n] n
  8. (Optional) Before you continue to add more nodes, review how the cluster is performing, and ensure the cluster health is green and that nodes have finished re-balancing.

Hadoop Distributed File System (HDFS) Namenode Storage Redundancy

There is a safeguard in place for the HDFS NameNode (master node) storage to prevent data loss in the case of data corruption. Redundancy is automatically set up for you when you install or upgrade Advanced Analytics with at least three nodes.

Note

Deployments may take longer if redundancy is enabled.

These nodes can include the common LIME and master node in the EX2003 appliance (excluding single-node deployments), or the standalone/dedicated LIME and Master Node in the EX4003. The Incident Responder node does not factor into the node count.

Redundancy requires two NameNodes that are both operating at all times. The second NameNode is always on the next available Advanced Analytics host, which in most cases is the first worker node. It constantly replicates the primary NameNode.

With this feature enabled, if the master NameNode fails, the system can still move forward without data loss. In such cases, you can use this redundancy to repair the state of Hadoop (for example, by installing a new SSD after an SSD failure) and successfully restart it.

Note

Disaster recovery deployments mirror the NameNode duplicated environment.

User Engagement Analytics Policy

Exabeam uses user engagement analytics to provide in-app walkthroughs and anonymously analyze user behavior, such as page views and clicks in the UI. This data informs user research and improves the overall user experience of the Exabeam Security Management Platform (SMP). Our user engagement analytics sends usage data from the web browser of the user to a cloud-based service called Pendo.

Our user engagement analytics receives three types of data from the user's web browser:

  • Metadata – User and account information that is explicitly provided when a user logs in to the Exabeam SMP, such as:

    • User ID or user email

    • Account name

    • IP address

    • Browser name and version

  • Page Load Data – Information on pages as users navigate to various parts of the Exabeam SMP, such as root paths of URLs and page titles.

  • UI Interactions Data – Information on how users interact with the Exabeam SMP, such as:

    • Clicking the Search button

    • Clicking inside a text box

    • Tabbing into a text box

Opt Out of User Engagement Analytics

Note

For customers with a Federal license, we disable user engagement analytics by default.

To prevent Exabeam SMP from sending your data to our user analytics:

  1. Access the config file at

    /opt/exabeam/config/common/web/custom/application.conf
  2. Add the following code snippet to the file:

    webcommon {
        app.tracker {
          appTrackerEnabled = false
          apiKey = ""
        }
    }
  3. Run the following command to restart Web Common and apply the changes:

    . /opt/exabeam/bin/shell-environment.bash web-common-restart
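To confirm the change was picked up, you can check that the setting is present in the custom configuration file; a minimal check using the path from step 1:

grep -A 3 "app.tracker" /opt/exabeam/config/common/web/custom/application.conf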

Set Up Rules Administration

Administrators are able to create and modify rules from within the Advanced Analytics UI in order to meet their needs. From the Rules Administration page, Administrators can:

  • View all the rules configured in the system, arranged categorically.

  • Change the risk scores for a rule.

  • Edit an existing Exabeam Rule. This will overwrite Exabeam's original rule of the same name, with the option to 'Revert to Default', which discards any changes made.

  • Create a new fact-based rule.

  • Clone any existing Exabeam Rule. After cloning, an administrator can edit the cloned rule and save. Cloning preserves the original rule.

  • Disable any rule.

  • Reload the rules. New rules or changes made to existing rules will not take effect until the rules are reloaded.

    Note

    Modified or newly added rules that need to be reloaded are highlighted with an orange triangle.

All Rules: A comprehensive list of all rules existing in the Advanced Analytics system.

Exabeam Rules: These are all of the out-of-the-box Exabeam rules.

Custom Rules: All rules that have been created, cloned, or edited and saved by an administrator.

Disabled Rules: All the rules that have been disabled and are not being triggered.

Expanding a category will list all the rules in that category and offer more details about each rule.

  • The icon to the left of the rule indicates that the rule has been edited by an administrator.

  • The Rule Name and Description are displayed.

  • The Trigger Frequency is a measure of how often the rule has been flagged in User Sessions.

  • The Risk Level indicates the number of points that are allocated to a User Session when the rule is triggered.

  • Selecting the vertical ellipsis to the right offers four options:

    • Disable - Disables the rule. The rule will not trigger. This option will read 'Enable' for a rule that has already been disabled.

    • Advanced Editor - Launches the JSON style Advanced Editor.

    • Clone - This option makes a copy of the rule. You can save the copy under a new name and edit the new rule in the Advanced Editor.

    • Revert to Default - This option only appears for rules that have been edited by an administrator. Selecting this option clears all changes that have been made to the rule and restores it to the default settings.

What Is an Exabeam Rule?

So what exactly is a rule anyway? There are two types of Exabeam rules:

  • Model-based

  • Fact-based

Model-based rules rely on a model to determine if the rule should be applied to an event in a session, while fact based rules do not.

For example, a FireEye malware alert is fact-based and does not require a model in order to be triggered. On the other hand, a rule such as an abnormal volume of data moved to USB is a model-based rule.

Model-based rules rely on the information modeled in a histogram to determine anomalous activities. A rule is triggered if an event is concluded to be anomalous, and points are allocated towards the user session in which the event occurred. Each individual rule determines the criticality of the event and allocates the relevant number of points to the session associated with that event.

Taken together, the sum of scores from the applied rules is the score for the session. An example of a high-scoring event is the first login to a critical system by a specific user – which allocates a score of 40 to a user’s session. Confidence in the model must be above a certain percentage for the information to be used by a rule. This percentage is set in each rule, though most use 80%. When there is enough reliable information for the confidence to be 80% or higher, this is called convergence. If convergence is not reached, the rule cannot be triggered for the event.

How Exabeam Models Work

Since anomaly-based rules depend on models, it is helpful to have a basic understanding of how Exabeam's models work.

Our anomaly detection relies on statistical profiling of network entity behavior. Our statistical profiling is not only about user-level data. In fact, Exabeam profiles all network entities, including hosts and machines, and this extends to applications or processes, as data permits. The statistical profiling is histogram frequency based. To perform the histogram-based profiling, which requires discrete input, we incorporate a variety of methods to transform and to condition the data. Probability distributions are modeled using histograms, which are graphical representations of data. There are three different model types – categorical, numerical clustered, and numerical time-of-week.

Categorical is the most common. It models a string with significance, such as a number, host name, or username, where values fall into specific categories that cannot be quantified. When you model which host a user logs into, it is a categorical model.

Numerical Clustered involves numbers that have meaning – it builds clusters around a user’s common activities so you can easily see when the user deviates from this norm. For example, you can model how many hosts a user normally accesses in a session.

Numerical Time-of-Week models when users log into their machines in a 24-hour period. It models time as a cycle so that the beginning and end of the period are close together, rather than far apart. For example, if a user logs into a machine Sunday at 11:00 pm, it is modeled as being close to Monday at 12:00 am.

Model Aging

Over time, models built in your deployment naturally become outdated; for example, an employee who moves to a different department or accepts a promotion may no longer adhere to the same routines, access points, or other historical regularities.

We automatically clean up and rebuild all models on a regular basis (default is every 16 weeks) to ensure your models are as accurate and up-to-date as possible. This process also enhances system performance by cleaning out unused or underutilized models.

Rule Naming Convention

Exabeam has an internal Rule ID naming convention that is outlined below. This system is used for Exabeam-created rules and models only. When a rule is created or cloned by a customer, the system automatically creates a Rule ID for the new rule that consists of customer-created followed by a random hash. For example, a new rule could be called customer-created-4Ef3DDYQsQ.

The Exabeam convention for model and rule names is: ET-SF-A/F-Z

ET: The event types that the model or rule addresses. For example,

  • RA = remote-access

  • NKL = NTLM/Kerberos-logon

  • RL = remote-logon

SF: Scope and Feature of the model. For example,

  • HU = Scope=Host, Feature=User

  • OZ = Scope=Organization, Feature=Zone

A/F: For rules only

  • A = Abnormal

  • F = First

Z : Additional Information (Optional). For example,

  • DC: Domain Controller models/rules

  • CS: Critical Systems
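Putting these pieces together, a hypothetical ID such as RA-HU-A-CS would denote a remote-access rule whose model is scoped to the host with the user as the feature, that fires on abnormal (rather than first-time) activity, and that applies to critical systems. This decomposition is illustrative only, not a specific rule shipped with the product.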

Reprocess Rules

When adding new rules or managing existing ones on the Exabeam Rules page, you can choose to reload individual rules or all rules. You can apply rule changes from the current point in time forward, or reprocess historic data. When applying and reprocessing rule changes against historic data, the reprocessing runs in parallel with active, live processing; it does not impede or stop any real-time analysis.

You can view the status of pending, in-progress, completed, canceled, and failed jobs at any time by navigating to Settings > Admin Operations > Exabeam Engine > Reprocessing Jobs. For more information on reprocessing, please see the section Reprocess Jobs.

Create Fact Based Rules Using the Simple Editor

The Simple Editor allows an administrator to create fact based rules. A fact based rule is one that does not rely on a model in order to trigger. See Exabeam Rules and the How Exabeam Models Work sections in this document for more information on the difference between fact based and model based rules.

Access the Simple Editor by navigating to Settings > Admin Operations > Exabeam Rules and clicking the + button.

After an administrator creates a rule, the rule is validated during the save operation.

Rule Category – Defines which category the rule falls under. For example, VPN Activity, Critical System Activity, etc.

Name – When the rule is triggered, this is the name displayed in the Advanced Analytics UI during the user session. We recommend using descriptive rule names to indicate the nature of the risky behavior; for example, Data Exfiltration by a Flight Risk User.

Description – The rule description provides additional details to help analysts investigate. Defining the why of the rule as well as the what helps analysts interpret the results of User Sessions more easily.

Events – Select the event types that the rule will depend on. For example, if your rule is evaluating user logins then the log types should reflect all the different login events that you want analyzed.

Risk Level – The risk level reflects the risk score that will be added to a User Session if the rule is triggered.

Rule Expression – This is the boolean expression that the rule engine uses to determine if a particular rule will trigger. This means that your rule will only trigger if all of the conditions described in the Rule Expression are met.

Rule Dependency – This is the only field in the Simple Editor that is optional. The Rule Dependency defines another rule (identified by the Rule ID) that your rule is dependent upon. When Rule A depends on Rule E, then Rule A will only trigger if Rule E is evaluated to true.

Example of Using the Simple Editor

We want to create a new fact-based rule: add 15 extra risk points to the sessions of users HR considers to be a flight risk. We have already added a context file titled ‘Flight Risk’ that contains the IDs of those users. We want our rule to trigger every time a User with the user label ‘Flight Risk’ starts a session.

We want the name and description of our rule to reflect its purpose.

Name – Flight Risks

Description – Users that HR considers to be flight risks.

We then want to select all of the events that this rule will analyze and how many points are added to a user session if this rule triggers.

Event Types – remote-access, remote-logon, local-logon, kerberos-logon, ntlm-logon, account-switch, app-logon, app-activity, privileged-object-access

Risk Level – Critical (15)

We then want to build a Rule Expression - this is the boolean expression that the rule engine uses to determine if a particular rule will trigger. This means that your rule will only trigger if all of the conditions described in the Rule Expression are met.

Our Field is User and the property of the user field that we want is User Label. In this case, a rule has to be triggered when the value of the user label is Flight Risk. Note that the label in the context lookup table for the Flight Risk users should match this value.

Click Done with Expression and we've created a rule expression for a rule that will trigger whenever the term Flight Risk is found in the user_label field.

Returning to the Create a Rule page, we want our rule to trigger once per session, and it does not depend on another rule triggering.

Click Save & Reload All. This saves the rule and reloads the rule file; rules must be reloaded into the Exabeam Engine for the changes to take effect.

Edit Rules Using the Advanced Editor

The Advanced Editor is a JSON style editor and is what an administrator would use if they wanted to edit one of Exabeam's existing rules, or edit a cloned rule. All of Exabeam's out-of-the-box rules can be edited only via the Advanced Editor.

Note

Be careful here; these settings are for very advanced users only. Changes you make here can have a significant impact on the Exabeam Analytics Engine. The Advanced Editor allows administrators and advanced analysts to make changes to Exabeam rules in a JSON style configuration format. It should be used by administrators who have the expertise to create or tweak a machine learning rule and who understand the syntax language for expressing a rule. In case of questions, reach out to Exabeam Customer Success for guidance.

This editor shows the entire rule as it exists in the configuration file. The Rule ID is the only field that cannot be changed. See the Rule Naming Convention section in this document for more information about Exabeam's naming convention. When an administrator makes any changes to a rule, the rule is validated during the save operation. If the rule has incorrect syntax, the administrator is prompted with the error and the details of the error. Once a rule is edited and saved using the Advanced Editor, the rule cannot be viewed via the Simple Editor.

Fields in the Advanced Editor
Glossary
ClassifyIf

This expression is similar to the TrainIf field in the model template. It evaluates to true if classification needs to be performed for a given event; in other words, it determines whether, and how many times, this rule should trigger.

DependencyExpression

This field defines a Boolean expression of other rule IDs. When rule A depends on expression E, A will only trigger if its parameters satisfy the RuleExpression and E evaluates to true after the rule IDs are substituted with their rule evaluation results.

Disabled

This field will read either True or False. Set to True to deactivate the rule and all associated modelling.

FactFeatureName

The name of the feature used for fact-based rules. For model-based rules, the FactFeatureName is defined in the associated model.

Model

The name of the model that this rule references. If this rule is fact-based, then the model name is FACT.

PercentileThreshold

This value indicates which observations are considered anomalous based on the histogram. For example, a value of 0.1 indicates a percentile threshold of 10%. This goes back to the histogram and means that for the purposes of this rule we only consider events that appear below the 10th percentile to be abnormal. Note that many rules distinguish between the first time an event occurs and when that event has happened before, but is still abnormal. These two occurrences often appear as two separate rules because we want to assign two different scores to them.

ReasonTemplate

This appears in the UI and is to facilitate cross-examination by users. The items between braces represent type and value for an element to be displayed. The type helps define what happens when the user clicks on the hyperlink in the UI.

Rule ID

Unique identifier for this rule; for example, NKL-UH-F. Exabeam has a naming convention for both models and rules that is outlined in the Rule Naming Convention section. When editing or cloning an existing rule you cannot change the Rule ID.

RuleDescription

This is used in the UI to describe the reason why a particular rule triggered.

RuleEventTypes

This collection defines what events we are considering in this rule. It can be the same events that the model considers, but does not have to be. Sometimes you may want to model on one parameter but trigger on another.

RuleExpression

This is the boolean expression that the rule engine uses to determine if a particular rule will trigger. Your rule will only trigger if all of the conditions described in the Rule Expression are met. You can use probability or number of observations (num_observations) to determine how many times this event has been seen before; when either is set to zero, it is a way to see when something happens that has not happened before. The confidence_factor refers to a concept called convergence. In order for the rule to use the information gathered by the model, we must be confident in that information. Confidence in the model must be above a certain percentage for the information to be used by a rule; this percentage is set in each rule, though most use 80%. When there is enough reliable information for the confidence to be 80% or higher, this is called convergence. If convergence is not reached, the rule cannot be triggered for the event.

RuleName

Descriptive name for the rule. Used for documentation purposes.

RuleType

This appears in the UI and is to facilitate cross-examination by users. The items between braces represent type and value for an element to be displayed. The type helps define what happens when the user clicks on the hyperlink in the UI.

Score

This is the score that will be assigned to a session when this rule triggers. Higher scores mean a higher level of risk from the security perspective.
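To see how these fields fit together, the following is a hypothetical sketch of a fact-based rule as it might appear in the Advanced Editor. It is illustrative only: the field names are those defined in this glossary, but the values and expressions shown are placeholders, not a rule shipped by Exabeam.

# Illustrative sketch only; not a shipped Exabeam rule.
customer-created-4Ef3DDYQsQ {
    RuleName = "Flight Risks"
    RuleDescription = "Users that HR considers to be flight risks."
    RuleType = "session"
    RuleEventTypes = ["remote-logon", "local-logon", "app-logon"]
    Model = "FACT"
    FactFeatureName = "user_label"
    Score = "15"
    ClassifyIf = "<expression controlling when classification is performed>"
    RuleExpression = "<boolean expression over event fields, e.g. the user label contains 'Flight Risk'>"
    DependencyExpression = "NA"
    PercentileThreshold = "0.1"
    Disabled = "FALSE"
    ReasonTemplate = "Session started by a user that HR considers a flight risk"
}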

Exabeam Threat Intelligence Service

The Exabeam Threat Intelligence Service delivers a constant stream of up-to-date threat indicators to Advanced Analytics deployments.

The categories of indicators affected are the following:

• IP addresses associated with Ransomware or Malware attacks

• IP addresses associated with the TOR network

• Domain names associated with Ransomware, Phishing, or Malware attacks

Indicators are downloaded by the on-premises products from the Threat Intelligence Service on a daily basis.

Exabeam Threat Intelligence Service Architecture
Figure 2. Exabeam Threat Intelligence Service Architecture


Advanced Analytics and Data Lake connect to Threat Intelligence Service through a cloud connector service that provides authentication and establishes a secure connection to Threat Intelligence Service. The cloud connector service then collects updated threat indicators from Threat Intelligence Service daily.

These indicators are then made available within Advanced Analytics to provide enhanced risk scoring based on curated threat intelligence.

This product does not require a separate license. It is bundled with Advanced Analytics deployments. Additional installation or configuration is not required.

If you would like to learn more, contact your technical account manager or watch product videos via the Exabeam Community.

Threat Intelligence Service Prerequisites

Before configuring Threat Intelligence Service, ensure your deployment meets the following prerequisites:

  • At least 5 Mbps Internet connection

  • Access to https://api.cloud.exabeam.com over HTTPS port 443

Note

Ensure dynamic access is enabled as the IP address may change. Also, for this reason, firewall rules for static IP and port addresses are not supported.

  • DNS resolution for Internet hostnames (this will only be used to resolve to https://api.cloud.exabeam.com)
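One quick way to verify the connectivity prerequisite from the master node, assuming curl is available, is to confirm that an HTTPS connection can be established to the endpoint:

# Verify outbound HTTPS (port 443) reachability and name resolution for the Threat Intelligence Service endpoint.
curl -sv https://api.cloud.exabeam.com --connect-timeout 10 -o /dev/null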

Connect to Threat Intelligence Service through a Proxy

The communication between Threat Intelligence Service and Advanced Analytics occurs over a secure HTTPS connection.

If connections from your organization do not make use of a web proxy server, you may skip this section. Threat Intelligence Service is available automatically and does not require additional configuration.

If connections from your organization are required to go through a web proxy server to access the Internet, you will need to provide the configuration as shown below.

Note

Configuration is required for each of your Advanced Analytics and Data Lake deployments.

Warning

If your proxy performs SSL Interception, it will replace the SSL certificate from the Exabeam Threat Intel Service (ETIS) with an unknown certificate during the SSL negotiation, which will cause the connection to ETIS to fail. If possible, please disable SSL Interception for the IP address of your Exabeam products. If this cannot be disabled, please contact Exabeam Customer Success for further assistance.


  1. Establish a CLI session with the master node of your Exabeam deployment.

  2. Open the custom file

    /opt/exabeam/config/common/cloud-connection-service/custom/application.conf
  3. Add the following section to the custom file and configure the parameters proxyHost, proxyPort, proxyUsername, and proxyPassword.

    Note

    Be sure to choose the appropriate settings based on whether the proxy utilizes http or https. Additionally, always use quoted strings for proxyHost, proxyProtocol, proxyUsername, and proxyPassword.

    Configure the HTTP or HTTPS variant as appropriate for your proxy; an illustrative sketch of the settings follows these steps.
  4. Stop and then restart the cloud connector service in your product:

    1. source /opt/exabeam/bin/shell-environment.bash

    2. cloud-connection-service-stop

    3. cloud-connection-service-start
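Because the original configuration examples for step 3 are screenshots, the following is a minimal sketch of what the proxy settings might look like for an HTTP proxy. The parameter names are those listed in step 3; the surrounding block name and exact nesting are assumptions, so confirm them for your product version with Exabeam Customer Success. For an HTTPS proxy, set proxyProtocol to "https" and adjust the port accordingly.

# Illustrative sketch only; the block name and nesting may differ in your version.
proxy {
    proxyProtocol = "http"
    proxyHost = "proxy.example.com"
    proxyPort = 8080
    proxyUsername = "proxy_user"
    proxyPassword = "proxy_password"
}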

Note

The username and password values are hashed in Data Lake i24 and later. After the Cloud Connection Service (CCS) is restarted (step 4), the username and password are hashed using a 128-bit AES key, and the hashed values are stored in the local secrets store. In the config file, the username and password values are replaced by the hashed values.

If you subsequently want to change the values, replace the hashed values with new plain-text values and restart the CCS service.

As soon as the deployment can successfully connect to Threat Intelligence Service, threat intelligence feed data is pulled and saved as context tables and is also viewable on the Threat Intelligence Feeds settings page.

View and Manage Threat Intelligence Feeds

To view threat intelligence feeds, navigate to Settings > Cloud Config > Threat Intelligence Feeds.


The page displays details related to all threat intelligence feeds provided by the cloud-based Exabeam Threat Intelligence service, such as:

  • The type of the feed (for example, domain list, IP list, etc.)

  • The name of the feed (as given by the cloud-based service)

  • A short description of the feed

  • The status indicating the availability of the feeds in the cloud-based service

  • The time of last update from the cloud service

  • Associated context tables

Note

Data provided by the threat intelligence feeds can be accessed in context tables (see the section on Viewing Threat Intelligence Service Context Tables). The initial service feeds are associated with existing context tables by default. This means that as soon as your deployment is connected to the cloud-based Threat Intelligence Service, it will start collecting threat intelligence data.

If you want to collect data from some of the feeds in other context tables, see the next section on Assigning a Threat Intelligence Feed to a New Context Table.



Assign Threat Intelligence Feed to a New Context Table

Some of the feeds are pre-assigned to context tables. Click the arrow of a threat intelligence feed to expand and view additional details, including:

  • ID

  • Source URL

  • Indicator in Context Tables

  • Retrieved from Source

  • Feed Indicator Sample

  • Context Table(s)


Click the edit icon of a single threat intelligence feed to assign or unassign it to one or more context tables.

Note

You cannot unassign default context table mappings.

Click the view icon to view existing indicators in the context table.


Select multiple threat intelligence feeds to bulk assign or unassign them from context tables.


Create a New Context Table from a Threat Intelligence Feed

To create a new context table from one or more threat intelligence feeds:

  1. Navigate to Settings > Cloud Config > Threat Intelligence Feeds.

  2. Edit a threat intelligence feed.

    1. To create a new context table from a single feed, click the edit icon of the single feed.

    2. To create a new context table from multiple feeds, select the feeds and then click Assign.

  3. Click + Add Context Table.

  4. Configure the context table fields, including title, object type (users, assets, miscellaneous), and type (key-value or key only).

    aa-settings-cloudconfigtis-edit-addcontextable.png
  5. Click ADD.

View Threat Intelligence Service Context Tables

To view the current context tables provided by Threat Intelligence Service:

  1. Log in to your instance of the UI.

  2. Navigate to Settings > Accounts & Groups > Context Tables (in Data Lake, navigate to Settings > Context Management > Context Tables).

  3. Select one of the context tables listed. For example, web_phishing.

  4. On the context table details page, view the keys and values currently associated with the context table.


Check ExaCloud Connector Service Health Status

To view the current status of the ExaCloud connector service:

  1. Log in to your instance of the UI.

  2. Click the top-right menu icon and select System Health.

  3. Select the Health Checks tab.

  4. Click Run Checks.

  5. Expand the Service Availability section, and then review the ExaCloud connection service availability icon.


The service availability icon shows the current health of the Cloud connector service that is deployed on your Exabeam product.

  • Green – The cloud connector service is healthy and running on your on-prem deployment.

Note

The green icon does not specifically indicate the cloud connector is connecting to the cloud and pulling Threat Intelligence Service data. It only indicates the cloud connector service is up and running.

  • Red – The cloud connector service has failed. Please contact Exabeam Customer Success by opening a case via Community.Exabeam.com.

Exabeam Cloud Telemetry Service

Exabeam telemetry service provides valuable quality and health metrics to Exabeam. System events, metrics, and environment health data are collected and sent to Exabeam Cloud, enabling insight into system issues such as processing downtime (for example, processing delays and storage issues) and UI/application availability.

Learn about the different types of telemetry data, the specific data that may be collected, and how to disable this feature.

Note

If you do not wish to send any data to the Exabeam Cloud, please follow the opt-out instructions listed in the How to Disable Exabeam Cloud Telemetry Service section.

Prerequisites

For Exabeam to successfully collect telemetry data, please ensure the following prerequisites are met:

  • Advanced Analytics I48.4 or later with a valid license

  • Data Lake I32 or later with a valid license

  • Access to *.cloud.exabeam.com over HTTPS port 443.

Types of Telemetry Data in Exabeam Cloud Telemetry Service

At a high level, telemetry data falls into one of three categories:

  • Metrics (for example, CPU, events-per-second, and processing delay)

  • Events (for example, machine restart, user login, and configuration changes)

  • Environment (for example, versions, products, nodes, and configuration)

IP addresses and hostnames are masked before being sent to Exabeam Cloud. For example, {"host": "*.*.0.24"}.

Metrics

The example below shows the metrics data sent from the master node to the telemetry service in Exabeam Cloud:

Note

The example below is only a partial example and does not show the full payload.

{ "metrics": [ {"points":[[1558614965, 0.29]], "name": "tm.plt.service_cpu.exabeam-web-common-host1"}, {"points": [[1558614965, 0.3457]], "name": "tm.plt.service_memory.exabeam-web-common-host1"}, {"points": [[1558614965, 0.77]], "name": "tm.plt.service_cpu.mongodb-shard-host1"}, {"points": [[1558614965, 0.04947]], "name": "tm.plt.service_memory.mongodb-shard-host1"} ] }
Events

The example below shows the events data sent from the master node to the telemetry service in Exabeam Cloud:

Note

The example below is only a partial example and does not show the full payload.

{ "events": [ "dateHappened": 1558614965, "title": "Device /dev/shm S.M.A.R.T health check: FAIL", "text": "S.M.A.R.T non-compatible device" ] }
Environment

The example below shows the environment data sent from the master node to the telemetry service in Exabeam Cloud:

Note

The example below is only a partial example and does not show the full payload.

{"environment": { "versions": { "uba": { "build": "4", "branch": "I46.2"}, "common": { "build": "7", "branch": "PLT-i12.5"}, "exa_security": { "build": "33", "branch": "c180815.1"} }, "hosts": { "host3": { "host": "*.*.0.24","roles": ["oar","cm"]}, "host2": {"host": "*.*.0.72","roles": ["uba_slave"]}, "host1": {"host": "*.*.0.70","roles": ["uba_master"]} }, "licenseInfo": { "customer": "EXA-1234567", "gracePeriod": 60, "expiryDate": "10-11-2021", "version": "3", "products": ["User Analytics","Entity Analytics"], "uploadedAt": 1557740839325 } }

Data Collected by Exabeam Cloud Telemetry Service

Exabeam telemetry service provides valuable quality and health metrics to Exabeam. System events, metrics, and environment health data are collected and sent to Exabeam Cloud, enabling insight into system issues such as processing downtime (for example, processing delays and storage issues) and UI/application availability. The following lists the possible metrics, events, and environment telemetry data.

Note

You can also view a full list of product metrics and events sent to the Exabeam cloud (including when the requests were made and the full payload) by accessing the audit log file located at /opt/exabeam/data/logs/common/cloud-connection-service/telemetry.log.
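For example, to watch telemetry requests as they are sent, you can follow that audit log on the master node:

tail -f /opt/exabeam/data/logs/common/cloud-connection-service/telemetry.log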

Environment

  • Inventory – Nodes, masked IPs, and roles of each node. (Once a day)

  • Product Version – Versions of each product in your deployment. (Once a day)

  • License information – License information for each product in your deployment. (Once a day)

Metrics for Advanced Analytics

  • tm.aa.processing_delay_sec – An Advanced Analytics processing delay (if applicable) in seconds. (5 min)

  • tm.plt.service_status.<service-name> – Per-service status. (5 min)

  • tm.plt.ssh_logins – Number of SSH logins. (5 min)

  • tm.plt.service_memory.<service-name> – Per-service memory. (5 min)

  • tm.plt.service_cpu.<service-name> – Per-service CPU. (5 min)

  • tm.plt.load_avg_1m, tm.plt.load_avg_5m, tm.plt.load_avg_10m – Load average (CPU) per 1-minute, 5-minute, and 10-minute period. (5 min)

  • tm.aa.compressed_logs_bytes – Log volume of the last hour. (1 hour)

  • tm.aa.compressed_events_bytes – Events volume of the last hour. (1 hour)

  • tm.aa.notable_users – Notable users. (5 min)

  • tm.plt.disk_usage.mongo, tm.plt.disk_usage.data, tm.plt.disk_usage.root – Disk usage per partition. (5 min)

  • tm.plt.total_users – Total users. (1 hour)

  • tm.plt.total_assets – Total assets. (1 hour)

Metrics for Data Lake

  • tm.plt.service_status.<service-name> – Per-service status. (5 min)

  • tm.plt.ssh_logins – Number of SSH logins. (5 min)

  • tm.plt.service_memory.<service-name> – Per-service memory. (5 min)

  • tm.plt.service_cpu.<service-name> – Per-service CPU. (5 min)

  • tm.plt.load_avg_1m, tm.plt.load_avg_5m, tm.plt.load_avg_10m – Load average (CPU) per 1-minute, 5-minute, and 10-minute period. (5 min)

  • tm.plt.disk_usage.mongo, tm.plt.disk_usage.data, tm.plt.disk_usage.root, tm.plt.disk_usage.es_hot, tm.plt.disk_usage.kafka – Disk usage per partition. (5 min)

  • tm.plt.total_users – Total users. (1 hour)

  • tm.plt.total_assets – Total assets. (1 hour)

  • tm.dl.es.cluster_status, tm.dl.es.number_of_nodes, tm.dl.es.number_of_data_nodes, tm.dl.es.active_shards, tm.dl.es.active_primary_shards – Elasticsearch cluster status. (5 min)

  • tm.dl.kafka.total_lag – A Kafka delay if detected. (5 min)

  • tm.dl.kafka.connectors_lag – A Kafka connector lag if detected. (5 min)

  • tm.dl.avg_doc_size_bytes – Average document size. (15 min)

  • tm.dl.avg_msg_size_bytes – Average message size. (5 min)

  • tm.dl.index_delay – Index delay if detected. (5 min)

  • tm.dl.connectors_send_rate_bytes – Total connector ingestion rate in bytes. (5 min)

  • tm.dl.ingestion_queue – Kafka topic delay if detected. (5 min)

  • tm.dl.indexing_rate – Average indexing rate. (5 min)

  • tm.dl.shards_today – Elasticsearch shards today. (5 min)

  • tm.dl.shards_total – Elasticsearch shards total. (5 min)

How to Disable Exabeam Cloud Telemetry Service

Hardware and Virtual Deployments Only

Cloud Telemetry Service is enabled by default following the installation of the relevant product versions. Exabeam highly recommends staying connected to the Telemetry Service in order to benefit from future enhancements built using this data.

If you do not wish to send any data to the Exabeam Cloud, the steps required vary depending on your deployment scenario:

  • Product Upgrade or Patch Installation

  • Product Installation

  • Any time after Product Upgrade

Disabling Telemetry Before Product Upgrade or Patch Installation

To disable the hosting of telemetry data in the Exabeam Cloud before upgrading your Exabeam product(s):

  1. Access the Cloud Connection Service (CCS) configuration files at:

    /opt/exabeam/config/common/cloud-connection-service/custom/application.conf
  2. Add a new line:

    cloud.plugin.Telemetry.enabled = false
  3. Perform the upgrade steps described in the Upgrade an On-Premises or Cloud Exabeam Product section.

Disabling Telemetry During a Product Installation

To disable the hosting of telemetry data in the Exabeam Cloud while installing your Exabeam product(s):

  1. Perform the installation steps described in the product installation section, but do not upload the product license. You will upload the product license later in this process.

  2. Access the Cloud Connection Service (CCS) configuration files at:

    /opt/exabeam/config/common/cloud-connection-service/custom/application.conf
  3. Add a new line:

    cloud.plugin.Telemetry.enabled = false
  4. Restart CCS by running the following command:

    . /opt/exabeam/bin/shell-environment.bash
    cloud-connection-service-stop && cloud-connection-service-start
  5. Upload the product license by following the steps provided in the Download an On-premises or Cloud Exabeam License and ??? sections.

Disabling Telemetry After Product Upgrade

To disable the hosting of telemetry data in the Exabeam Cloud after upgrading your Exabeam product(s):

  1. Access the Cloud Connection Service (CCS) configuration files at:

    /opt/exabeam/config/common/cloud-connection-service/custom/application.conf
  2. Add a new line:

    cloud.plugin.Telemetry.enabled = false
  3. Restart CCS by running the following command:

    . /opt/exabeam/bin/shell-environment.bash
    cloud-connection-service-stop && cloud-connection-service-start
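After the restart, you can confirm the flag is present in the custom configuration file; a minimal check using the path from step 1:

grep "cloud.plugin.Telemetry.enabled" /opt/exabeam/config/common/cloud-connection-service/custom/application.conf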

Disaster Recovery (Advanced Analytics, Case Manager, and Incident Responder)

Hardware and Virtual Deployments Only

In a disaster recovery scenario, Advanced Analytics content is replicated continuously from the active site to the passive site, including:

  • Logs/Events – The active cluster fetches logs from the SIEM and/or receives the logs via Syslog. Once the logs are parsed, the events are replicated to the passive cluster.

  • Configuration – Configuration changes, such as new log feeds, parsers, LDAP servers, roles and Exabeam users, and models and rules, are replicated from the active to the standby cluster. This includes files as well as the relevant database collections (for example, EDS configuration, users, and roles are stored in the database).

  • Context – Contextual data such as users, assets, service accounts, peer groups, etc.

  • User Generated Data – Comments, approved sessions, Watchlists, starred sessions, saved searches, and whitelists stored in the Mongo database.

Note

You can also configure your Advanced Analytics deployment to replicate only specific file types across clusters.

If you have Case Manager or a combined Case Manager and Incident Responder license, the disaster recovery system replicates:

  • Incidents and incident details (entities, artifacts, comments, etc.)

  • Custom incident filters and searches.

  • Roles and permissions.

  • Playbooks and actions (including history and saved results of previous actions)

  • Configurations (for example, alert sources, alert feeds, notification settings, incident message and email settings), phases and tasks, integrated services (for example, parsers and alert rules).

Deploy Disaster Recovery

Hardware and Virtual Deployments Only

Warning

You can only perform this configuration with the assistance of an Exabeam Customer Success Engineer.

The two-cluster scenario employs an Active-Passive Disaster Recovery architecture with asynchronous replication.

With this approach, you maintain an active and secondary set of Advanced Analytics (and additional Case Manager and Incident Responder) clusters in separate locations. In cases of a failure at the active site, you can fail over to the passive site.

At a high level, when Disaster Recovery is set up between two Advanced Analytics clusters, the active cluster is responsible for fetching the logs from SIEM or receiving the logs via Syslog. Once the logs have been parsed into events, the events are replicated from the active cluster to the passive cluster every five minutes.

Optionally, the raw logs can be replicated from the active to the passive cluster. (This allows reprocessing of logs, if needed. However, replication will place significant bandwidth demands on the connection between nodes.) If the active cluster goes down, the passive cluster becomes the active cluster until the downed site is recovered.

Prerequisites
  • Open port TCP 10022 (bi-directional)

  • IP addresses of both the primary and secondary clusters

  • SSH key to access the primary cluster

  • At least 200 megabits per second connection between primary and secondary clusters

  • The active and passive clusters must have the exact same number of nodes in the same formation. For example, if the second and third nodes on the primary cluster are worker nodes, the second and third nodes on the passive cluster must also be worker nodes. If the fifth node on the primary cluster is a Case Manager node, the fifth node on the passive cluster must also be a Case Manager node.

Deployment

This process requires you to set up disaster recovery first on the active cluster (primary site) and then on the passive cluster (secondary site).

Note

If you have already set up disaster recovery for Advanced Analytics and are adding disaster recovery for Incident Responder, please see Adding Case Manager & Incident Responder Disaster Recovery to Existing Advanced Analytics Disaster Recovery.

Active Cluster Setup

On the active site, run the following:

screen -LS dr_setup
/opt/exabeam_installer/init/exabeam-multinode-deployment.sh

Select option: "Configure disaster recovery".

Select the option: "This cluster is source cluster (usually the primary)"

Please select the type of cluster:
1) This cluster is source cluster (usually the primary)
2) This cluster is destination cluster (usually the dr node)
3) This cluster is for file replication (configuration change needed)

Please wait for the cluster setup to successfully complete before proceeding to the next section.

Passive Cluster Setup

Copy the SSH key that allows access to the active cluster onto the passive cluster master. (Skip this step if you have a pre-existing key that allows you to SSH from the passive to the active cluster.)

On the passive site (standby master), run the following:

screen -LS dr_setup
/opt/exabeam_installer/init/exabeam-multinode-deployment.sh

Select option: "Configure disaster recovery".

Select the option: "This cluster is destination cluster (usually the dr node)"

Please select the type of cluster:
1) This cluster is source cluster (usually the primary)
2) This cluster is destination cluster (usually the dr node)
3) This cluster is for file replication (configuration change needed)

Input the IP address of the active cluster.

What is the IP of the source cluster?

Select the option "SSH key".

The source cluster's SSH key will replace the one for this cluster. How do you want to pull the source cluster SSH key?
1) password
2) SSH key

Input the private key file path.

What is the path to the private key file?

The passive cluster will connect to the active cluster with the private key provided. If there is no SSH key at the passive cluster, select Option 1 and follow the prompts. You will be asked for user credentials (with sufficient privileges) to either access the active cluster master to retrieve the SSH key or generate a new key.

Failover to Passive Cluster

Hardware and Virtual Deployments Only

This section includes instructions on how to failover to the passive site when the previously active site goes down. It also covers how to failback when you are ready to bring the restored site back online.

Exabeam's recommended policy for failback is to demote the failed cluster as the new passive cluster going forward. For example, Cluster A is the active cluster and Cluster B is the passive. Cluster A fails and Cluster B becomes active. When Cluster A is ready to come back online, it rejoins as a passive cluster until data synchronization is complete, and it can then be promoted to an active cluster again.

Make Passive Cluster (Secondary Site) Become Active

Log on to the passive cluster and ensure docker is running:

sos; docker ps

Stop the replicator:

replicator-socks-stop; replicator-stop

After stopping the replicator, run the deployment script:

screen -LS dr_failover
/opt/exabeam_installer/init/exabeam-multinode-deployment.sh

Select "Promote Disaster Recovery Cluster to be Primary." This step promotes the passive cluster to be the active.

After the passive cluster is promoted to active, stop docker:

sos; everything-stop

Start docker again:

docker-start

Run the following command to start all services:

everything-start

If using a Syslog server, please switch the Syslog server to push logs to the new active cluster environment (secondary site).

If you have deployed Helpdesk Communications, restart two-way email service in the UI.

Start Log Ingestion and Analytics Engine from the Exabeam Engine page.

Make Failed Active Cluster (Primary Site) Become Passive After Recovery

Warning

Do not immediately promote the restored cluster back to active status after recovery. It must be demoted in order to synchronize data lost during its outage.

  1. Log on to the existing active and ensure docker is running.

    sos; docker ps
  2. Run the deployment script:

    screen -LS dr_failover
    /opt/exabeam_installer/init/exabeam-multinode-deployment.sh
  3. Select option "Configure Disaster Recovery."

  4. Select the option: "This cluster is destination cluster (usually the dr node)"

    Please select the type of cluster:
    1) This cluster is source cluster (usually the primary)
    2) This cluster is destination cluster (usually the dr node)
    3) This cluster is for file replication (configuration change needed)
  5. Input the IP address of the source cluster.

    What is the IP of the source cluster?
  6. Select the option "SSH key".

    The source cluster's SSH key will replace the one for this cluster. How do you want to pull the source cluster SSH key?
    1) password
    2) SSH key
  7. Input the private key file path.

    What is the path to the private key file?
  8. Run the following command to stop all services:

    sos; everything-stop
  9. After the recovered cluster is demoted, start docker again:

    docker-start
  10. Run the following command to start all services:

    everything-start
Failback to Passive Site (Original Primary) Cluster
Demote Active Cluster (Secondary Site) Back to Passive After Synchronization
  1. Log on to the current active cluster and ensure docker is running:

    sos; docker ps
  2. Stop the replicator:

    replicator-socks-stop; replicator-stop
  3. After stopping the replicator, run the deployment script:

    screen -LS dr_failback
    /opt/exabeam_installer/init/exabeam-multinode-deployment.sh
  4. Select option "Configure Disaster Recovery."

  5. Set the cluster as Disaster Recovery (Non Primary) to demote the active (former passive) cluster back to passive.

    After the cluster is demoted to passive, stop all services:

    sos; everything-stop
  6. After everything is done, start docker again:

    docker-start
  7. Run the following command to start all services:

    everything-start
Promote Restored Cluster (Original Primary) to Active
  1. Log on to the restored cluster master and ensure docker is running:

    sos; docker ps
  2. Run the deployment script:

    screen -LS dr_failback
    /opt/exabeam_installer/init/exabeam-multinode-deployment.sh
  3. Select "Promote Disaster Recovery Cluster to be Primary." This step promotes the recovered cluster to back to active status.

  4. Run the following command to stop all services:

    sos; everything-stop
  5. After the restored cluster is promoted, start docker again:

    docker-start
  6. Run the following command to start all services:

    sos; everything-start
  7. If you have deployed Incident Responder, restart incident feed log ingestion in the UI.

  8. Navigate to Settings > Case Manager > Incident Ingestion > Incident Feeds.

  9. Click Restart Log Ingestion Engine.

  10. If you have deployed Helpdesk Communications, restart two-way email service in the UI.

  11. Navigate to Settings > Case Manager > Incident Ingestion > 2-Way Email.

  12. Click the pencil/edit icon associated with the applicable email configuration.

  13. Click Restart.

  14. If using a Syslog server, please switch the Syslog server to push logs to the active cluster (primary site). Start Log Ingestion and Analytics Engine from the Exabeam Engine page.

Replicate Specific Files Across Clusters

Hardware and Virtual Deployments Only

Warning

You can only perform this configuration with the assistance of an Exabeam Customer Success Engineer.

File replication across clusters leverages Advanced Analytics and Incident Responder disaster recovery functionality, which replicates entire cluster configurations, context, user-generated data, logs/events, and HDFS files (for Incident Responder).

Note

Advanced Analytics HDFS files are copied from oldest to newest. Incident Responder HDFS files are copied from newest to oldest.

In certain customer scenarios, clusters are situated in remote areas with considerable bandwidth constraints. In these rare scenarios, you can configure Advanced Analytics and/or Incident Responder to replicate and fetch only specific files. For example, you can configure your deployment to replicate only compressed event files across clusters.

File Replication
Primary Site – Source Cluster Setup

On the primary site, run the following:

screen -LS dr_setup
/opt/exabeam_installer/init/exabeam-multinode-deployment.sh

Select option: "Configure disaster recovery".

Select the option: "This cluster is source cluster (usually the primary)"

Please select the type of cluster:
1) This cluster is source cluster (usually the primary)
2) This cluster is destination cluster (usually the dr node)
3) This cluster is for file replication (configuration change needed)

Wait for the deployment to successfully finish.

Secondary Site – Destination Cluster Setup
  1. On the secondary site, run the following:

    screen -LS dr_setup
    /opt/exabeam_installer/init/exabeam-multinode-deployment.sh
  2. Select option: "Configure disaster recovery".

  3. Select the option: "This cluster is for file replication (configuration change needed)"

    Please select the type of cluster:
    1) This cluster is source cluster (usually the primary)
    2) This cluster is destination cluster (usually the dr node)
    3) This cluster is for file replication (configuration change needed)
  4. Input the IP address of the source cluster.

    What is the IP of the source cluster?
  5. Select the option "SSH key".

    The source cluster's SSH key will replace the one for this cluster. How do you want to pull the source cluster SSH key?
    1) password
    2) SSH key
  6. Input the private key path.

    What is the path to the private key file?
  7. Wait for the deployment to successfully finish.

  8. Replication of the primary cluster automatically begins, but all replication items are disabled. You must manually enable the replication items.

  9. On the secondary site, access the custom configuration file /opt/exabeam/config/custom/custom_replicator_disable.conf, and then enable replication items.

  10. For example, if you wish to fetch only compressed event files, set the Enabled field for the [".evt.gz"] file type to true (shown below). A quick check of the enabled items appears after this procedure.

    {
        EndPointType = HDFS
        Include {
            Dir = "/opt/exabeam/data/input"
            FilePattern = [".evt.gz"]
        }
        Enabled = true
    }
  11. Start the replicator.

    sos; replicator-socks-start; replicator-start
  12. Once the replicator is started, log on to the standby cluster GUI. Navigate to Context Setup and click the Generate Context button. This gathers context from the active cluster to synchronize the standby cluster.
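
To quickly review which replication items are enabled after editing the file, you can list the relevant lines with a standard grep. This is a minimal sketch; the exact layout of the file may differ in your deployment.

# Show file patterns and their Enabled flags in the replication config
grep -n -E "FilePattern|Enabled" /opt/exabeam/config/custom/custom_replicator_disable.conf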

Add Case Manager and Incident Responder to Advanced Analytics Disaster Recovery

Hardware and Virtual Deployments Only

If you are upgrading from Advanced Analytics SMP 2019.1 (i48) or lower and have configured disaster recovery for Advanced Analytics, add Case Manager and Incident Responder to the existing Advanced Analytics disaster recovery.

Warning

Configure this only with an Exabeam Customer Success Engineer.

1. Stop the Replicator
  1. Ensure that the Advanced Analytics replication is current.

  2. To ensure that the passive site matches the active site, compare the files in HDFS, the local file system, and MongoDB.

  3. Source the shell environment:

    . /opt/exabeam/bin/shell-environment.bash
  4. On the active cluster, stop the replicator:

    sos; replicator-socks-stop; replicator-stop
2. Upgrade the Passive and Active Advanced Analytics Clusters

Note

Both the primary and secondary clusters must be on the same release version at all times.

Warning

If you have an existing custom UI port, please set the web_common_external_port variable in /opt/exabeam_installer/group_vars/all.yml. Otherwise, you may lose access to the custom UI port after the clusters are upgraded.

web_common_external_port: <UI_port_number>
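
To confirm whether the variable is already set before you upgrade, you can check the file with a standard grep (a minimal sketch):

# Look for an existing web_common_external_port entry
grep -n "web_common_external_port" /opt/exabeam_installer/group_vars/all.yml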

  1. (Optional) Disable Exabeam Cloud Telemetry Service.

  2. If you use the SkyFormation cloud connector service, stop the service.

    1. For SkyFormation v.2.1.18 and higher, run:

      sudo systemctl stop sk4compose
    2. For SkyFormation v.2.1.17 and lower, run:

      sudo systemctl stop sk4tomcat
      sudo systemctl stop sk4postgres

      Note

      After you've finished upgrading the clusters, the SkyFormation service automatically starts. To upgrade to the latest version of SkyFormation, please refer to the Update SkyFormation app on an Exabeam Appliance guide at support.skyformation.com.

  3. From Exabeam Community, download the Exabeam_[product]_[build_version].sxb file of the version you're upgrading to. Place it anywhere on the master node, except /opt/exabeam_installer, using Secure File Transfer Protocol (SFTP). (An example transfer appears after this procedure.)

  4. Change the permission of the file:

    chmod +x Exabeam_[product]_[build_version].sxb
  5. Start a new terminal session using your exabeam credentials (do not run as root).

  6. To avoid accidentally terminating your session, initiate a screen session.

    screen -LS [yourname]_[todaysdate]
  7. Execute the command (where [build_version] contains the iteration and build numbers):

    ./Exabeam_[product]_[build_version].sxb upgrade

    The system auto-detects your existing version. If it can't, you are prompted to enter the existing version you are upgrading from.

  8. When the upgrade finishes, decide whether to start the Analytics Engine and Log Ingestion Message Extraction engine:

    Upgrade completed. Do you want to start exabeam-analytics now? [y/n] y
    Upgrade completed. Do you want to start lime now? [y/n] y
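
The following is an example of the file transfer referenced in step 3. The user name, host, and destination directory are placeholders; substitute your own values.

# From your workstation, copy the installer to the master node over SFTP
# (placeholder user, host, and destination directory):
sftp exabeam@<master_node_ip>
put Exabeam_[product]_[build_version].sxb /home/exabeam/
exit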
3. Add Case Manager to Advanced Analytics
  1. SSH to the primary Advanced Analytics machine.

  2. Start a new screen session:

    screen -LS new_screen
    /opt/exabeam_installer/init/exabeam-multinode-deployment.sh
  3. When asked to make a selection, choose Add product to the cluster.

  4. From these actions, choose option 4.

    1) Upgrade from existing version
    2) Deploy cluster
    3) Run precheck
    4) Add product to the cluster
    5) Add new nodes to the cluster
    6) Nuke existing services
    7) Nuke existing services and deploy
    8) Balance hadoop (run if adding nodes failed the first time)
    9) Roll back to previously backed up version
    10) Generate inventory file on disk
    11) Configure disaster recovery
    12) Promote Disaster Recovery Cluster to be Primary
    13) Install pre-approved CentOS package updates
    14) Change network settings
    15) Generate certificate signing requests
    16) Exit
    Choices: ['1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13', '14', '15', '16']: default (1): 4
  5. Indicate how the node should be configured:

    Which product(s) do you wish to add? ['ml', 'dl', 'cm']: cm
    How many nodes do you wish to add? (minimum: 0): 1
    What is the IP address of node 1 (localhost/127.0.0.1 not allowed)? 10.10.2.40
    What are the roles of node 1? ['cm', 'uba_slave']: cm
  6. To configure Elasticsearch, Kafka, DNS servers, and disaster recovery, it's best that you use these values:

    How many elasticsearch instances per host? [2] 1
    What's the replication factor for elasticsearch? 0 means no replication. [0]
    How much memory in GB for each elasticsearch instance? [16] 16
    How much memory in GB for each kafka instance? [5]
    Would you like to add any DNS servers? [y/n] n
    Do you want to setup disaster recovery? [y/n] n
  7. Once the installation script successfully completes, restart the Analytics Engine.

4. Configure Disaster Recovery on the Advanced Analytics and Case Manager Passive Clusters
  1. On the secondary site, run:

    screen -LS dr_setup
    /opt/exabeam_installer/init/exabeam-multinode-deployment.sh
  2. Select option: Configure disaster recovery.

  3. Select the third option: This cluster is for file replication (configuration change needed)

    Please select the type of cluster:
    1) This cluster is source cluster (usually the primary)
    2) This cluster is destination cluster (usually the dr node)
    3) This cluster is for file replication (configuration change needed)
  4. Enter the IP address of the source cluster.

    What is the IP of the source cluster?
  5. Select option: SSH key.

    The source cluster's SSH key will replace the one for this cluster. How do you want to pull the source cluster SSH key?
    1) password
    2) SSH key
    
  6. Enter the private key path.

    What is the path to the private key file?

    The deployment may take some time to finish.

  7. The primary cluster begins to replicate automatically, but all replication items are disabled. You must manually enable the replication items.

    On the secondary site, access the custom configuration file /opt/exabeam/config/custom/custom_replicator_disable.conf, then enable replication items.

    For example, if you wish to fetch only compressed event files, set the Enabled field for the [".evt.gz"] file type to true:

    {
        EndPointType = HDFS
        Include {
            Dir = "/opt/exabeam/data/input"
            FilePattern = [".evt.gz"]
        }
        Enabled = true
    }
  8. Start the replicator:

    sos; replicator-start
  9. Log on to the standby cluster GUI.

  10. To gather context from the active cluster to synchronize the standby cluster, navigate to LDAP Import > Generate Context, then click Generate Context.

5. Start the Replicator

On the active cluster, start the replicator:

replicator-socks-start; replicator-start

Configure Settings to Search for Data Lake Logs in Advanced Analytics

Hardware and Virtual Deployments Only

Before you can search for a log from a Smart Timelines™ event, you must configure Advanced Analytics settings. See Search for a Data Lake Log from an Advanced Analytics Smart Timelines™ Event.

First, add Data Lake as a log source. Then, to point Advanced Analytics to the correct Data Lake URL, edit the custom application configuration file.

1. Add Data Lake as a Log Source

  1. In the navigation bar, click the Menu icon (three white lines on a green background), select Settings, then navigate to Log Management > Log Ingestion Settings.

  2. Click ADD.

  3. Under Source Type, select Exabeam Data Lake, then fill in the fields:

    • IP address or hostname – Enter the IP address or hostname of the Data Lake server.

    • (Optional) Description – Describe the source type.

    • TCP Port – Enter the TCP port of the Data Lake server.

    • Username – Enter your Exabeam username.

    • Password – Enter your Exabeam password.

  4. Click SAVE.

2. Edit the Custom Application Configuration File

  1. In /opt/exabeam/config/common/web/custom/application.conf, add the following line to the end of the file:

    webcommon.auth.exabeam.exabeamWebSearchUrl = "https://dl.ip.address:8484/data/app/dataui#/discover?"

    Do not insert the line between existing stanzas; append it to the end of the file. (An example of appending the setting appears after this procedure.)

  2. To apply the change, restart web-common:

    sos; web-common-stop; sleep 2; web-common-start

    You are logged out of Advanced Analytics.
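
If you prefer to append the setting from the command line instead of editing the file in an editor, the following sketch uses tee to add the line to the end of the file. Replace dl.ip.address with your Data Lake host; depending on file permissions, sudo may not be required.

# Append the Data Lake search URL to the end of application.conf (never insert it between stanzas)
echo 'webcommon.auth.exabeam.exabeamWebSearchUrl = "https://dl.ip.address:8484/data/app/dataui#/discover?"' \
  | sudo tee -a /opt/exabeam/config/common/web/custom/application.conf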

Enable Settings to Detect Email Sent to Personal Accounts

Hardware and Virtual Deployments Only

To start monitoring when someone sends company email to a personal email account, enable the capability in the algorithms.conf custom configuration file.

Don't change the other parameters in the custom configuration file; they affect the machine learning algorithm behind this capability. If you have questions about these parameters, contact Exabeam Customer Success.

  1. Source the shell environment:

    . /opt/exabeam/bin/shell-environment.bash
  2. Navigate to /opt/exabeam/ds-server/config/custom/algorithms.conf.

  3. In the algorithms.conf file under personal-email-identification, change the value of Enabled to true:

    personal-email-identification {
        Enabled = true
  4. Add your company domain as a string to CompanyDomain:

    personal-email-identification {
        Enabled = true
        Parameters = {
            CompanyDomain = "company.com"

    To add multiple company domains, insert them in a list:

    personal-email-identification {
        Enabled = true
        Parameters = {
            CompanyDomain = ["company.com", "company1.com", "comapny2.com"]
        }
  5. Save the algorithms.conf file.

  6. Restart the DS server:

    ds-server-stop
    ds-server-start
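
After saving the file, you can confirm the change by printing the relevant block with a standard grep. This is a minimal check; the surrounding parameters in your file may differ.

# Show the personal-email-identification block and a few following lines
grep -n -A 6 "personal-email-identification" /opt/exabeam/ds-server/config/custom/algorithms.conf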

Exabeam Hardening

The Exabeam Security Management Platform (SMP) enables security features by default that provide stricter controls and data protection. Two examples are Cross-Site Request Forgery (CSRF) protection and Cross-Origin Resource Sharing (CORS) enforcement. A default set of filters is defined and enabled in Exabeam configurations, which improves the default security of the environment for all Exabeam services.

For Exabeam SaaS deployments that use Exabeam Advanced Analytics as your Exabeam Cloud Connector identity provider (IdP), Exabeam will update Cloud Connector to v.2.5.86 or later.

No manual configuration is needed for deployments with the following versions or later, as these protections are enabled by default:

  • Exabeam Advanced Analytics i53.6

  • Exabeam Data Lake i34.6

Important

This security enhancement is enabled by default in:

  • Data Lake i34.6 and i35

  • Advanced Analytics i53.6 and i54.5

It is not enabled by default in:

  • Data Lake i33 or earlier

  • Advanced Analytics i52 or earlier

Please follow the hardening guidelines below. At the earliest opportunity, upgrade to a currently supported version of Advanced Analytics and Data Lake.

How to Enable Cross-Site Request Forgery Protection

Cross-Site Request Forgery (CSRF) attacks are web-based attacks in which an attacker tricks a user who holds trusted credentials into committing unintended malicious actions. CSRF attacks change the state of their targets rather than steal data; examples include changing account email addresses and passwords.

CSRF protection is available for Exabeam Advanced Analytics and Data Lake but was previously inactive by default. On older versions of Advanced Analytics and Data Lake, you can either harden manually or upgrade to a hardened supported version (Advanced Analytics i53.6 or later and Data Lake i34.6 or later), where the security configuration is enabled by default.

For information about enabled versions, see Exabeam Hardening.

These protections may affect API calls to the Exabeam SMP; review custom scripts and APIs used by your organization and follow the instructions in step 1.c to update them.

To enable CSRF protection, apply the following:

  1. For all deployments, the /opt/exabeam/config/common/web/custom/application.conf file at each master host needs to be configured to enable CSRF protection at service startup.

    1. Edit the following parameters in the CONF file:

      csrf.enabled=true
      csrf.cookie.secure=true
      csrf.cookie.name="CSRF_TOKEN"
    2. Restart web-common to enable CSRF protection.

      . /opt/exabeam/bin/shell-environment.bash
      web-common-restart

      Note

      Log ingestion will not be interrupted during the restart. web-common can take up to 1 minute to resume services.

    3. API calls to Exabeam that use POST requests with content types application/x-www-form-urlencoded, multipart/form-data, and text/plain are affected by the CSRF configuration. Ensure that API clients send a Csrf-Token header set to the value nocheck (see the example request after this procedure).

      Continue with the next step.

  2. For Advanced Analytics deployments using Case Manager or Incident Responder, edit /opt/exabeam/code/soar-python-action-engine/soar/integrations/exabeamaa/connector.py.

    1. Find the entry self._session = SoarSession(base_url=apiurl, timeout=timeout, verify=False) and replace with:

      self._session = SoarSession(base_url=apiurl, timeout=timeout, verify=False, headers={'Csrf-Token': 'nocheck'})
    2. Restart services.

      sudo systemctl restart exabeam-soar-python-action-engine-web-server
      sudo systemctl restart exabeam-soar-python-action-engine
  3. If SAML is configured, explicitly add the IdP’s domain to the CORS origins and then apply the new configuration. Follow the steps in How to Enable Cross-Origin Resource Sharing Protection.
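
The following is an example of the header change referenced in step 1.c. The endpoint and payload are placeholders rather than a documented Exabeam API; the point is only to show where the Csrf-Token header belongs in an affected POST request.

# Example POST with the Csrf-Token header set to nocheck (placeholder endpoint and body)
curl -k -X POST "https://<exabeam_ip_or_hostname>:8484/<your_api_endpoint>" \
  -H "Csrf-Token: nocheck" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "param=value"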

How to Enable Cross-Origin Resource Sharing Protection

Cross-Origin Resource Sharing (CORS) is a browser standard that allows the resources or functionality of a web application to be accessed by web pages from a different origin. An origin is defined by the scheme (protocol), host (domain), and port of the URL used to access a resource. CORS is a policy that allows a server to indicate any origins other than its own from which a browser should permit loading resources.

CORS protection is available for Exabeam Advanced Analytics and Data Lake, and is enabled by default in Data Lake i34.6 and later and Advanced Analytics i53.6 and later. On older versions of Advanced Analytics and Data Lake, you can either harden manually or upgrade to a hardened supported version (Advanced Analytics i53.6 or later and Data Lake i34.6 or later), where the security configuration is enabled by default.

For information about enabled versions, see Exabeam Hardening.

To manually enable CORS protection when it is not enabled by default, apply the following:

  1. For all deployments, the /opt/exabeam/config/common/web/custom/application.conf file at each master host needs to be configured to enable CORS protection at service startup. Edit the webcommon.service.origins parameter in the CONF file to match your Exabeam service domain:

    webcommon.service.origins = ["https://*.exabeam.<your_organization>.com:<listener_port>", <...additional_origins...>]

    Here's an example with 2 service origins:

    webcommon.service.origins = ["https://*.exabeam.org-name.com", "https://*.exabeam.org-name.com:8484"]
  2. Restart web-common to enable CORS protection.

    . /opt/exabeam/bin/shell-environment.bash
    web-common-restart

    Note

    Log ingestion will not be interrupted during the restart. web-common can take up to 1 minute to resume services.

How to Verify Origin and CORS Enforcement with cURL

The verification method presented here uses cURL to test CORS protection once it has been implemented.

You can verify that your environment is enforcing CORS policy with the following (using www.example.com as an origin):

curl -H "Origin: http://www.example.com" --verbose <exabeam_ip_or_hostname>

The response should be 403 Forbidden with the error message Invalid Origin - http://www.example.com.

To verify that requests from a permitted origin still succeed, modify the origin:

curl -H "Origin: <exabeam_ip_or_hostname>" --verbose <exabeam_ip_or_hostname>

The response should be 200 OK with the Exabeam home page's HTML.
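
If you want to script this check, the following sketch captures only the HTTP status code and flags an unexpected result. Adjust the scheme and port to match how you access the Exabeam UI; the -k flag skips certificate validation and is shown only for testing against self-signed certificates.

# Expect 403 when a forged origin is rejected by CORS/origin enforcement
status=$(curl -k -s -o /dev/null -w '%{http_code}' \
  -H "Origin: http://www.example.com" "https://<exabeam_ip_or_hostname>:8484")
if [ "$status" = "403" ]; then
  echo "Origin enforcement OK (forged origin rejected)"
else
  echo "Unexpected status for forged origin: $status"
fi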