- Advanced Analytics
- Understand the Basics of Advanced Analytics
- Deploy Exabeam Products
- Considerations for Installing and Deploying Exabeam Products
- Things You Need to Know About Deploying Advanced Analytics
- Pre-Check Scripts for an On-Premises or Cloud Deployment
- Install Exabeam Software
- Upgrade an Exabeam Product
- Add Ingestion (LIME) Nodes to an Existing Advanced Analytics Cluster
- Apply Pre-approved CentOS Updates
- Configure Advanced Analytics
- Set Up Admin Operations
- Access Exabeam Advanced Analytics
- A. Supported Browsers
- Set Up Log Management
- Set Up Training & Scoring
- Set Up Log Feeds
- Draft/Published Modes for Log Feeds
- Advanced Analytics Transaction Log and Configuration Backup and Restore
- Configure Advanced Analytics System Activity Notifications
- Exabeam Licenses
- Exabeam Cluster Authentication Token
- Set Up Authentication and Access Control
- What Are Accounts & Groups?
- What Are Assets & Networks?
- Common Access Card (CAC) Authentication
- Role-Based Access Control
- Out-of-the-Box Roles
- Set Up User Management
- Manage Users
- Set Up LDAP Server
- Set Up LDAP Authentication
- Third-Party Identity Provider Configuration
- Azure AD Context Enrichment
- Set Up Context Management
- Custom Context Tables
- How Audit Logging Works
- Starting the Analytics Engine
- Additional Configurations
- Configure Static Mappings of Hosts to/from IP Addresses
- Associate Machine Oriented Log Events to User Sessions
- Display a Custom Login Message
- Configure Threat Hunter Maximum Search Result Limit
- Change Date and Time Formats
- Set Up Machine Learning Algorithms (Beta)
- Detect Phishing
- Restart the Analytics Engine
- Restart Log Ingestion and Messaging Engine (LIME)
- Custom Configuration Validation
- Advanced Analytics Transaction Log and Configuration Backup and Restore
- Reprocess Jobs
- Re-Assign to a New IP (Appliance Only)
- Hadoop Distributed File System (HDFS) Namenode Storage Redundancy
- User Engagement Analytics Policy
- Configure Settings to Search for Data Lake Logs in Advanced Analytics
- Enable Settings to Detect Email Sent to Personal Accounts
- Configure Smart Timeline™ to Display More Accurate Times for When Rules Triggered
- Configure Rules
- Exabeam Threat Intelligence Service
- Threat Intelligence Service Prerequisites
- Connect to Threat Intelligence Service through a Proxy
- View Threat Intelligence Feeds
- Threat Intelligence Context Tables
- View Threat Intelligence Context Tables
- Assign a Threat Intelligence Feed to a New Context Table
- Create a New Context Table from a Threat Intelligence Feed
- Check ExaCloud Connector Service Health Status
- Disaster Recovery
- Manage Security Content in Advanced Analytics
- Exabeam Hardening
- Set Up Admin Operations
- Health Status Page
- Troubleshoot Advanced Analytics Data Ingestion Issues
- Generate a Support File
- View Version Information
- Syslog Notifications Key-Value Pair Definitions
Deploy Exabeam Products
Hardware and Virtual Deployments Only
Understand infrastructure requirements and successfully run a fresh installation or upgrade.
Considerations for Installing and Deploying Exabeam Products
Hardware and Virtual Deployments Only
Before you install and deploy an Exabeam product, ensure you have set up your physical, virtual machine, or Cloud Exabeam appliance. For more information on setting up your environment, please refer to our appliance and virtual machine setup guides.
The installation prompts ask a series of questions regarding how you want your node cluster and distributed file system configured.
Have the following prepared before starting:
The exabeam user account credentials with installation privileges.

Warning: DO NOT ATTEMPT TO RUN THIS INSTALLATION AS ROOT.
SSH key for authenticating sessions between hosts. (Authentication using SSH password method is not preferred. SSH password method is not supported for AWS and GCP deployments.)
If you are using an external Certificate Authority (CA), please consult an Exabeam technical representative before installation.
IP addresses and hostnames of new node servers.
Preferred NTP and DNS hostnames and addresses.
Docker BIP and Calico subnet (must not be an existing or in-use IP space), if not using default settings.
For virtual or cloud installations, obtain access to instance images or configurations for your platform. Contact your Exabeam representative for more information.
If you are setting up a disaster recovery scheme, please consult Disaster Recovery Deployment.
For Amazon Web Services (AWS) and Google Cloud Platform (GCP) deployments, you must meet the following requirements before installing:
AWS deployments: All nodes MUST have src/dest (source/destination) checks turned off.
GCP deployments: The firewall rules must allow IP protocol 4 (IP-in-IP, or IPIP) traffic within the cluster. When setting up your TCP/UDP ports, ensure the Other protocols box is checked, type ipip in the input box, and then save the setting. Nodes must allow traffic to and from the security group to itself.
A terminal/screen session (SSH access).
Run deployment scripts only on the master node host. The deployment process will automatically install to worker hosts from the master node/host.
Repeat the deployment process at standby nodes. Secondary sites and standby nodes should have the same resources and capacities as the primary site and its nodes.
If you have questions about the prerequisites or installation approaches, please create a support ticket at Exabeam Community to connect with a technical representative who can assist you.
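Before launching the installer, the SSH key and name-resolution prerequisites above can be spot-checked with a short script. This is an illustrative sketch, not part of the Exabeam installer; the key path and hostnames are placeholders you would supply.

```python
import os
import socket
import stat

def preflight(ssh_key_path, hostnames):
    """Spot-check a few installer prerequisites; returns a list of problems."""
    problems = []
    # The SSH key must exist and be readable only by its owner.
    if not os.path.isfile(ssh_key_path):
        problems.append(f"missing SSH key: {ssh_key_path}")
    else:
        mode = stat.S_IMODE(os.stat(ssh_key_path).st_mode)
        if mode & 0o077:
            problems.append(f"SSH key {ssh_key_path} is group/world accessible")
    # Node, NTP, and DNS hostnames should all resolve.
    for host in hostnames:
        try:
            socket.gethostbyname(host)
        except socket.gaierror:
            problems.append(f"cannot resolve: {host}")
    return problems
```

Run it on the master node with the key path you plan to give the installer and the hostnames of all nodes plus your NTP and DNS servers; an empty list means these particular checks passed.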
Supported Exabeam Deployment Configurations
Hardware and Virtual Deployments Only
The tables below show the supported deployment configurations for Exabeam products and modules. When running the installation scripts, the various packages are referred to by Exabeam module names.
Advanced Analytics Deployment Configurations
uba = Advanced Analytics
uba_master = Advanced Analytics master host
uba_slave = Advanced Analytics worker host
uba_lime = Advanced Analytics dedicated LIME host
cm = Case Manager and Incident Responder bundle
ml = Machine Learning
If more than 1 TB of logs is ingested per day, it is recommended that you deploy a standalone LIME node. For more information, see Add Ingestion (LIME) Nodes to an Existing Advanced Analytics Cluster.
Node Host | uba_master | uba_slave | cm | ml | uba_lime |
---|---|---|---|---|---|
Master Node | ✓ | ✓ | |||
Case Manager/Incident Responder Node 1 | ✓ | ✓ | |||
Worker Node 2 | ✓ | ✓ | |||
Dedicated LIME Node 3 | ✓ | ✓ |
Important
Single-node clusters can have the uba_master configuration only and cannot be combined with Case Manager/Incident Responder. A two-node cluster must have a uba_master and a Case Manager/Incident Responder (cm) node, not a uba_slave. Worker nodes may be deployed in clusters with more than two nodes.
Things You Need to Know About Deploying Advanced Analytics
Hardware and Virtual Deployments Only
Review considerations for installing and upgrading Advanced Analytics, including network ports, SIEM configurations, setting up your .conf file, the default Syslog template, LDAP server integration, and network zones.
Network Ports
The table below shows all the ports that Exabeam either connects to or receives connections from. Ensure these ports are configured appropriately for data and communications traversal.
Service | Hosts | Port | TCP | UDP |
---|---|---|---|---|
SSH | All Cluster Hosts | 22 | ✓ | |
BGP | All Cluster Hosts | 179 | ✓ | |
Exabeam Web UI (HTTPS) | All Cluster Hosts | 8484 | ✓ | |
Docker | All Cluster Hosts | 2376 | ✓ | |
Docker | All Cluster Hosts | 2377 | ✓ | |
Docker | All Cluster Hosts | 4789 | ✓ | |
Docker | All Cluster Hosts | 7946 | ✓ | ✓ |
Docker Registry | Master Host | 5000 | ✓ | |
Kafka Connector | All Cluster Hosts | 8083 | ✓ | |
Kafka | All Cluster Hosts | 9092 | ✓ | |
Kafka | All Cluster Hosts | 9093 | ✓ | |
Kafka | All Cluster Hosts | 9094 | ✓ | |
MongoDB | All Cluster Hosts | 27017 | ✓ | |
MongoDB | All Cluster Hosts | 27018 | ✓ | |
MongoDB | All Cluster Hosts | 27019 | ✓ | |
Hadoop | All Cluster Hosts | 9000 | ✓ | |
Hadoop | All Cluster Hosts | 50010 | ✓ | |
Hadoop | All Cluster Hosts | 50020 | ✓ | |
etcd | First 1 or 3 nodes up to highest odd number | 2379 | ✓ | |
etcd | First 1 or 3 nodes up to highest odd number | 2380 | ✓ | |
Ping | All Cluster Hosts | ICMP | ||
Elastalert | All Cluster Hosts | 3030 | ✓ | |
Disaster Recovery Socks Proxy | Master and Failover Hosts | 10022 | ✓ | |
NTP | Master Host | 123 | ✓ | |
DNS | All Cluster Hosts | 53 | ✓ | |
SMTP | Master and Failover Hosts | 25 | ✓ | |
SMTPS | Master and Failover Hosts | 587 | ✓ | |
Syslog Forwarder | Target Host | 514 | ✓ | ✓ |
Syslog Forwarder | All Cluster Hosts | 515 | ✓ | |
Disaster Recovery MongoDb | Master and Failover Hosts | 5123 | ✓ | |
Exabeam Coordination Service (Zookeeper) | All Cluster Hosts | 2181 | ✓ | |
Exabeam Coordination Service (Zookeeper) | All Cluster Hosts | 2888 | ✓ | |
Exabeam Coordination Service (Zookeeper) | All Cluster Hosts | 3888 | ✓ | |
Exabeam Data Lake UI | Master Host | 5601 | ✓ |
Exabeam SOAR Metrics UI | Case Manager Host | 5850 | ✓ | |
Exabeam SOAR Server | Case Manager Host | 7999 | ✓ | |
Exabeam SOAR Server | Case Manager Host | 8097 | ✓ | |
Exabeam SOAR Server | Case Manager Host | 9998 | ✓ | |
Exabeam SOAR Server | Case Manager Host | 9999 | ✓ | |
Exabeam Advanced Analytics Engine | All Advanced Analytics Martini Hosts | 8090 | ✓ | |
Exabeam Advanced Analytics API | Master/Main Advanced Analytics Node | 8482 | ✓ | |
Exabeam Advanced Analytics UI | Master Host | 8483 | ✓ | |
Exabeam Health Agent | All Cluster Hosts | 8659 | ✓ | |
Exabeam SOAR-LEMON | Case Manager Host | 8880 | ✓ |
Exabeam SOAR-LEMON | Case Manager Host | 8888 | |
Exabeam SOAR-LEMON | Case Manager Host | 8889 | ✓ |
Exabeam SOAR Syslog | Case Manager Host | 9875 | ✓ | ✓ |
Exabeam SOAR Action Controller | OAR Host | 9978 | ✓ | |
Exabeam Advanced Analytics Engine JMX | All Advanced Analytics Martini Hosts | 9003 | ✓ | |
Exabeam Advanced Analytics LIME JMX | All LIME Hosts | 9006 | ✓ | |
Exabeam Replicator | Master Host | 9099 | ✓ | |
Elasticsearch | All Cluster CM Hosts | 9200 | ✓ | |
Elasticsearch | All Cluster CM Hosts | 9300 | ✓ | |
Datadog and Threat Intelligence Service | Master and Failover Hosts | 443 | ✓ |
Ensure ports for third-party products allow traffic from Exabeam Hosts.
Service | Port | TCP | UDP | Advanced Analytics | Incident Responder | Data Lake |
---|---|---|---|---|---|---|
LDAP (Non-secure Connection) | 389 | ✓ | ✓ | ✓ | ✓ | |
LDAP (Secure Connection) | 636 | ✓ | ✓ | ✓ | ✓ | |
QRadar | 443 | ✓ | ✓ | |||
ArcSight ESM | 3306 | ✓ | ✓ | |||
Ganglia | 8081 | ✓ | ✓ | ✓ | ✓ | |
Splunk | 8089 | ✓ | ✓ | |||
ArcSight Logger | 9000 | ✓ | ✓ | |||
RSA | 50105 | ✓ | ✓ |
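To verify connectivity against the port tables above, a simple TCP probe can confirm that a host accepts connections on a given port (it cannot validate the UDP or ICMP entries). This is a minimal sketch; REQUIRED_TCP lists only a few illustrative entries from the table.

```python
import socket

def tcp_port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Illustrative subset of the port table above; extend as needed.
REQUIRED_TCP = {"SSH": 22, "Exabeam Web UI (HTTPS)": 8484, "Docker Registry": 5000}

def check_host(host, required=REQUIRED_TCP):
    """Map each named service to whether its TCP port is reachable on host."""
    return {name: tcp_port_open(host, port) for name, port in required.items()}
```

A successful probe only proves a listener answered; it does not confirm which service is bound to the port.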
Configure Your SIEM
Depending on which SIEM your organization uses, take these basic configuration steps to connect seamlessly with Exabeam. You will need the IP Address and TCP Port of your SIEM.
Site Collector
Exabeam SaaS Cloud uses site collectors to enable you to upload log data from your data centers or VPCs to Exabeam. Site collectors in the Exabeam SaaS Cloud were designed to support most data centers with a single Site Collector.
You can configure more than one site collector for your deployment, for example, to collect logs from multiple data centers or VPCs and upload them to Exabeam.
Splunk
Exabeam fetches logs by querying the Splunk Search Head on TCP port 8089. It is possible to distribute search across multiple search heads by manually specifying the different search head IPs. The logs are fetched by leveraging the Splunk APIs.
You can configure your Splunk Cloud connection to fetch logs from Splunk Cloud by setting and configuring a proxy.
To do so, specify the parameters ProxyHost and ProxyPort in /opt/exabeam/config/custom/custom_lime_config.conf.
Note
The ProxyHost and ProxyPort parameters are optional. When provided, the connection goes through the proxy. If not provided, the connection goes directly to Splunk.
Sample configuration for the proxy:

```
Hosts = {
  Splunk1 = {
    Hostname = "10.10.2.123"
    Password = "password"
    Port = 8089
    Username = "admin"
    ProxyHost = "192.158.8.12"
    ProxyPort = 3123
  }
}
```
IBM QRadar
Exabeam makes API calls to the QRadar SIEM on TCP port 443 to fetch the logs.
Syslog
Exabeam supports direct syslog ingestion. SIEM platforms not listed above can send data to Exabeam in the form of syslogs. Syslog messages may be sent directly by security devices or forwarded by a SIEM. Enabling Syslog collection is recommended in environments where fetching from the SIEM is too slow or where the log sources of interest to the customer aren't all in the SIEM. Syslog ingestion can be enabled for certain log feeds while other log feeds are fetched directly from the SIEM.
Syslog Server Configuration
Located at /opt/exabeam/config/rsyslog/exabeam_rsyslog.conf, this file specifies which protocols to use (TCP, UDP), which ports to open, the layout of the syslog messages Exabeam receives, and filtering options.
Syslog ingestion can be turned on and off through the Settings page.
Guidelines for Integrating Your LDAP Server
The Global Catalog server is a domain controller that enables searching for Active Directory objects without requiring replication of the entire contents of Active Directory to every domain controller. It is important to point Exabeam to a Global Catalog server (port 3268) so that the system can gather comprehensive user information.
Note
Before integrating an LDAP server, please ensure you have installed and properly configured the Exabeam Site Collector.
You will need the IP Address or Hostname, TCP Port (defaults to 389), Base Distinguished Name, Bind Distinguished Name and password to connect to the domain controller (Active Directory). The Base Distinguished Name is the starting point for directory server searches, and the Bind Distinguished Name is the Active Directory user account that has privileges to search for users.
We need the complete Distinguished Name, for example: CN=exabeam,OU=Service Accounts,OU=Administration,dc=NA,dc=Acme,dc=net
If there are multiple domains in your environment, you can use the IP address or hostname of the domain controller that serves as the Global Catalog server. Alternatively, you can configure Exabeam to connect to multiple domain controllers to pull in information from different domains.
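When assembling the Bind Distinguished Name, a quick parse can catch malformed entries before the LDAP connection fails. This is a naive illustrative sketch, not an Exabeam function: it assumes values contain no escaped commas or equals signs, which holds for typical service-account DNs like the example above.

```python
def parse_dn(dn):
    """Split a distinguished name into (attribute, value) pairs.

    Naive parser: assumes values contain no escaped commas or equals
    signs, which holds for typical service-account DNs.
    """
    pairs = []
    for rdn in dn.split(","):
        attr, sep, value = rdn.strip().partition("=")
        if not sep or not attr or not value:
            raise ValueError(f"malformed RDN: {rdn!r}")
        pairs.append((attr, value))
    return pairs

# The example Bind DN from this guide:
bind_dn = "CN=exabeam,OU=Service Accounts,OU=Administration,dc=NA,dc=Acme,dc=net"
# Reassemble the domain from the DC components.
domain = ".".join(value for attr, value in parse_dn(bind_dn) if attr.upper() == "DC")
```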
Network Zones
You will need the CIDR Range and names of your network zones. Please limit these to networks greater than or equal to /24, as smaller zones may create an excess of unnecessary information.
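The /24-or-larger guideline can be enforced with Python's standard ipaddress module before zones are entered; validate_zone is an illustrative helper, not part of the product.

```python
import ipaddress

def validate_zone(cidr, max_prefix=24):
    """Accept a network zone only if it is /24 or larger.

    Larger networks have a smaller prefix length, so anything with a
    prefix number above max_prefix is rejected, per the guidance above.
    """
    net = ipaddress.ip_network(cidr, strict=True)
    if net.prefixlen > max_prefix:
        raise ValueError(f"{cidr} is smaller than /{max_prefix}; use a broader zone")
    return net
```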
Pre-Check Scripts for an On-Premises or Cloud Deployment
The table below lists the pre-checks run when you deploy your Exabeam product:
Note
If a critical pre-check fails, the deployment stops and cannot proceed. If a warning message is displayed on the CLI, a non-critical pre-check failed and the deployment can proceed. You MUST fix unsuccessful pre-checks, those tagged as FAIL, before electing to proceed with your deployment.
Name | Description | Critical | First Advanced Analytics Version | First Data LakeVersion |
---|---|---|---|---|
CheckSubnet | Checks if all hosts in the cluster are in the same subnet. NoteThis pre-check is excluded on Google Cloud Platform deployments. | Yes | i46 | i20 |
CheckInstalledPackages | Checks if the following required packages are installed (RPM):
| Yes | i46 | i22 |
CheckSSHDConfig | Checks if the SSHD configuration file (
| No | i46 | i22 |
CheckInterfaceNames | Checks if the interface names are properly detected. | Yes | i46 | i22 |
CheckPermissions | Checks for existence and permission of the system key directory. | Yes | i46 | i22 |
CheckRpmPackAge | Checks if the cluster has packages older than 90 days. | No | i46 | i22 |
CheckPartitions | Checks if drives are mounted and partitioned correctly. | No | i46 | i24 |
CheckStorage (non critical) | Checks that the total disk space used in the following directories is less than or equal to 70%. This is a non-critical pre-check.
| No | i46 | i24 |
CheckStorage (critical) | Checks that the total disk space used in the following directories is less than or equal to 85%. This is a critical pre-check.
| Yes | i46 | i24 |
Run the Installation Pre-Check Script for an On-Premises or Cloud Deployment
When deploying your Exabeam product, a series of automated pre-checks test your platform to ensure servers meet Exabeam's requirements in terms of available resources. The pre-check script collects and verifies the following information:
sshd config
Sufficient disk space
OS and kernel version
Networking interfaces
Memory
Number of CPUs
Time zone (UTC is currently the only supported time zone)
Note
It is strongly recommended that deployment does not proceed if pre-checks do not pass.
Preconditions: The Linux user must be able to run sudo without a password.
Download the exa_pre_check.py script from Exabeam Community.
Caution
Make sure you download the script that corresponds to your current version of Advanced Analytics. If you are running a multi-node system, depending on the version you may need to run the script on all hosts (the master node and all worker nodes) or on just the master node.
Start a terminal session on a node. You must run the pre-check on all nodes in your deployment.
Run exa_pre_check.py and check the output.
A successful pre-check will conclude with All checks passed.

```
| => cd /opt
| => python exa_pre_check.py
INFO exa_pre_check.py 2018-08-07 21:42:39,921 verify_precheck_results 111:Pre-check SSHDPrecheck passed at host: localhost . OK
INFO exa_pre_check.py 2018-08-07 21:42:39,921 verify_precheck_results 111:Pre-check OSVersionPrecheck passed at host: localhost . OK
INFO exa_pre_check.py 2018-08-07 21:42:39,921 verify_precheck_results 111:Pre-check FreeRootSpacePrecheck passed at host: localhost . OK
INFO exa_pre_check.py 2018-08-07 21:42:39,921 verify_precheck_results 111:Pre-check FreeExabeamDataSpacePrecheck passed at host: localhost . OK
INFO exa_pre_check.py 2018-08-07 21:42:39,921 verify_precheck_results 111:Pre-check FreeMongoSpacePrecheck passed at host: localhost . OK
INFO exa_pre_check.py 2018-08-07 21:42:39,921 verify_precheck_results 121:All checks passed.
```
An unsuccessful pre-check will conclude with messages like the following; it is advised that you do not upgrade until all checks have passed.

```
WARNING exa_pre_check.py 2018-08-09 22:06:48,353 verify_precheck_results 103:Precheck FreeMongoSpacePrecheck failed at host: 10.10.2.81 . Please make sure you have enough disk spaces at /opt/exabeam/data/mongo .
ERROR exa_pre_check.py 2018-08-09 22:06:48,353 verify_precheck_results 105: There are problems with your environment, but deployment may still continue. It is recommended that you correct the above problems if possible.
```
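If you wrap exa_pre_check.py in automation, the pass/fail lines shown above can be triaged programmatically. The regular expression below is inferred from the sample output only (note the script prints both "Pre-check" and "Precheck"), so treat it as an assumption to verify against your version's output.

```python
import re

# Pattern inferred from the sample output above; the script prints both
# "Pre-check" and "Precheck", so both spellings are accepted.
LINE = re.compile(
    r"(INFO|WARNING|ERROR)\s+exa_pre_check\.py"
    r".*?Pre-?check\s+(\w+)\s+(passed|failed)\s+at host:\s*(\S+)"
)

def failed_prechecks(output):
    """Return (precheck_name, host) pairs for every failed pre-check."""
    failures = []
    for line in output.splitlines():
        match = LINE.search(line)
        if match and match.group(3) == "failed":
            failures.append((match.group(2), match.group(4)))
    return failures
```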
Install Exabeam Software
Hardware and Virtual Deployments Only
The instructions below are for new installations using the fresh_install steps. Installations should run only on Exabeam supported or approved hardware and platforms. For upgrades, see Upgrading Exabeam Software.
Warning
Do not install unauthorized third-party software onto Exabeam appliances. The performance and function of Exabeam hardware may be impacted.
To install Exabeam software:
Note
These instructions will walk you through installing Advanced Analytics and Case Manager (with Incident Responder). If you are installing only Advanced Analytics, please take note of and disregard Case Manager-Incident Responder-related prompts where applicable.
Download the Exabeam_[product]_[build_version].sxb file that you want to install from Exabeam Community. Transfer the downloaded SXB file to /home/exabeam or anywhere on the master node except /opt/exabeam_installer.

Note
For AWS, disable source/destination checks on all cluster hosts. This is necessary for the network technology in Data Lake.
Start a new terminal session using the exabeam credentials (do not run as ROOT).

Initiate a screen session. This is mandatory and will prevent accidental termination of your session.

```
screen -LS [yourname]_[todaysdate]
```
Change the permission of the file:

```
chmod +x Exabeam_[product]_[build_version].sxb
```

Execute the following command:

```
./Exabeam_[product]_[build_version].sxb fresh_install
```
Note
If your installation is disrupted and needs to be resumed, execute the following, and then select the "Deploy Cluster" menu option:

```
/opt/exabeam_installer/init/exabeam-multinode-deployment.sh
```

If the network connection to the Exabeam host is dropped at any point during the installation, type the following to reattach the screen session:

```
screen -r [yourname]_[todaysdate]
```

The following are prompts based on the product you are installing.
Indicate how your nodes should be configured. There are many possible deployment combinations.
For example, to configure a multi-node environment with Advanced Analytics installed on the master node (node 1) and Case Manager installed on the worker node (node 2):
```
Which product(s) do you wish to add? ['uba', 'ml', 'dl', 'cm'] uba cm
How many nodes do you wish to add? (minimum: 2) 2
What is the IP address of node 1 (localhost/127.0.0.1 not allowed)? [node1_address]
What are the roles of node 1? ['uba_master', 'uba_slave']: uba_master
What is the IP address of node 2 (localhost/127.0.0.1 not allowed)? [node2_address]
What are the roles of node 2? ['cm', 'uba_slave']: cm
```
To configure an environment with multiple ingestion nodes, with Advanced Analytics installed on the master node (node 1), three ingestion nodes (node 2, 3, and 4), and a worker node (node 5):
```
Which product(s) do you wish to add? ['uba', 'ml', 'dl', 'cm'] uba
How many nodes do you wish to add? (minimum: 2) 5
What is the IP address of node 1 (localhost/127.0.0.1 not allowed)? [node1_address]
What are the roles of node 1? ['uba_master', 'uba_slave', 'uba_lime']: uba_master
What is the IP address of node 2 (localhost/127.0.0.1 not allowed)? [node2_address]
What are the roles of node 2? ['uba_slave', 'uba_lime']: uba_lime
What is the IP address of node 3 (localhost/127.0.0.1 not allowed)? [node3_address]
What are the roles of node 3? ['uba_slave', 'uba_lime']: uba_lime
What is the IP address of node 4 (localhost/127.0.0.1 not allowed)? [node4_address]
What are the roles of node 4? ['uba_slave', 'uba_lime']: uba_lime
What is the IP address of node 5 (localhost/127.0.0.1 not allowed)? [node5_address]
What are the roles of node 5? ['uba_slave', 'uba_lime']: uba_slave
```
This IP assignment step repeats until all nodes are assigned addresses.
To configure a single-node environment, follow the same process but input the IP address of just one node.
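The cluster-size constraints noted earlier (single-node clusters run uba_master only; two-node clusters pair uba_master with cm rather than uba_slave) can be encoded as a quick plan check before you answer the prompts. validate_roles is an illustrative helper and covers only the constraints stated in this guide.

```python
def validate_roles(node_roles):
    """Check a per-node role plan against the constraints in this guide.

    node_roles is a list with one list of role names per node, e.g.
    [["uba_master"], ["cm"]]. Encoded rules: exactly one uba_master;
    a single-node cluster is uba_master only; a two-node cluster pairs
    uba_master with cm, not uba_slave.
    """
    all_roles = [role for roles in node_roles for role in roles]
    if all_roles.count("uba_master") != 1:
        return False
    if len(node_roles) == 1:
        return set(all_roles) == {"uba_master"}
    if len(node_roles) == 2:
        return "cm" in all_roles and "uba_slave" not in all_roles
    return True
```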
Valid credentials (SSH keys) are needed for inter-node communications. The example below uses an internal path for the SSH private key, which you must replace with your own; the path to the SSH private key must be an absolute path.
Note
If you have set up the instance in AWS or GCP, you must use the same private key shared across all the instances.
Follow these instructions if you already have an SSH private key. This is the preferred method. Contact your Exabeam representative if you need assistance.

```
The nodes within the Exabeam cluster communicate with each other regarding the processing status of the jobs, health status of the services etc. Valid credentials (ssh keys) are needed for secure inter-node communications.
Do you have a ssh private key that can be used for internode communications? (If you don't have one, answer 'n' and we will create one for you. If you are running Exabeam on Amazon Web Services, you need to use the SSH key that the instance was launched with.) [y/n] y
What's the path to the ssh private key? [/opt/exabeam_installer/.ssh/key.pem] /home/exabeam/.ssh/key.pem
What's the user name used to deploy the public ssh key? This user must exist and have sudo power. [exabeam] exabeam
Does Exabeam need password or SSH key to log in to all hosts? (This credential is needed only to put the SSH key on the machines. All communications moving forward will use the SSH key.)
1) password
2) SSH key
['1','2']: default (none): 2
What's the path to the ssh private key? [/opt/exabeam_installer/.ssh/key.pem] /opt/exabeam_installer/.ssh/key.pem
```
Follow these instructions if you need to generate an SSH private key. This method is not supported for AWS and GCP deployments.

```
The nodes within the Exabeam cluster communicate with each other regarding the processing status of the jobs, health status of the services etc. Valid credentials (ssh keys) are needed for secure inter-node communications.
Do you have a ssh private key that can be used for internode communications? (If you don't have one, answer 'n' and we will create one for you. If you are running Exabeam on Amazon Web Services, you need to use the SSH key that the instance was launched with.) [y/n] n
We will generate a new ssh key for the deployment at /opt/exabeam_installer/.ssh/key.pem
What's the user name used to deploy the public ssh key? This user must exist and have sudo power. [exabeam] exabeam
Does Exabeam need password or SSH key to log in to all hosts? (This credential is needed only to put the SSH key on the machines. All communications moving forward will use the SSH key.)
1) password
2) SSH key
['1','2']: default (None): 1
You will be prompted several times for password.
Password: [password]
```
The installation will automatically partition your drives. However, if auto-detection fails, you will be prompted to manually configure your partitions:

```
Unable to autodetect drive types for host. Check if drive configuration/override is needed.
```

You will be given a suggested storage layout, which you can accept or override. If you choose to accept the auto-suggested drive mapping, type y and then proceed to the next step. If you choose to map the drives yourself, type n and follow the prompts to configure your drives to match the parameters in the table below.

Appliance | Exabeam Equivalent | /dev/xvdb | /dev/xvdc | /dev/xvdd | Remainder Drives |
---|---|---|---|---|---|
EX-2000 (base) | Advanced Analytics worker node | LVM (1) | Dedicated Mount (2) | Dedicated Mount (2) | [n/a] |
EX-2000 PLUS | Advanced Analytics and Incident Responder worker node | LVM (1) | LVM (1) | LVM (1) | Dedicated Mount (2) |
EX-4000 | Advanced Analytics master node | LVM (1) | LVM (1) | LVM (1) | Dedicated Mount (2) |
To manually configure your drives, apply the parameters for the role and node you have assigned your host:
EX[appliance_type] mapping applied. { [suggested drive mappings] ... [suggested drive mappings] } Please review the above, would you like to apply this drive mapping automatically to the host? (Use lsblk or fdisk to verify on a separate screen) [y/n]
n
To map an EX2000 (base):
Please specify the drive purpose. We typically put SSDs on the LVM for services requiring fast I/O (data, mongo, es_hot), and HDDs for dedicated services like hadoop, elasticsearch, kafka. Ideally your host should have a mix of SSDs (fast) and HDDs (slow), so you should set your drive purpose accordingly to the Exabeam appliance specs. Important: If your host has all SSDs mounted, please mark the drive purpose for dedicated mounts, and the rest for the LVM. The size of the drive should be a good indicator as to which purpose it should be assigned to (larger sizes go to the dedicated mounts). Important: you should not provision all your disks to the LVM, or the dedicated mounts, there should be a mix. {'device': '/dev/xvdb', 'driver': 'xvd', 'model': 'Xen Virtual Block Device', 'size': '1031GB', 'table': 'unknown'} 1) Provision device /dev/xvdb to LVM (for data, mongo, or es_hot) 2) Provision device /dev/xvdb to dedicated mounts (for hadoop, kafka, or elasticsearch) ['1', '2']: default (None):
1
{'device': '/dev/xvdc', 'driver': 'xvd', 'model': 'Xen Virtual Block Device', 'size': '1031GB', 'table': 'unknown'} 1) Provision device /dev/xvdc to LVM (for data, mongo, or es_hot) 2) Provision device /dev/xvdc to dedicated mounts (for hadoop, kafka, or elasticsearch) ['1', '2']: default (None):1
{'device': '/dev/xvdd', 'driver': 'xvd', 'model': 'Xen Virtual Block Device', 'size': '2147GB', 'table': 'unknown'} 1) Provision device /dev/xvdd to LVM (for data, mongo, or es_hot) 2) Provision device /dev/xvdd to dedicated mounts (for hadoop, kafka, or elasticsearch) ['1', '2']: default (None):1
To map an EX2000 PLUS:
Please specify the drive purpose. We typically put SSDs on the LVM for services requiring fast I/O (data, mongo, es_hot), and HDDs for dedicated services like hadoop, elasticsearch, kafka. Ideally your host should have a mix of SSDs (fast) and HDDs (slow), so you should set your drive purpose accordingly to the Exabeam appliance specs. Important: If your host has all SSDs mounted, please mark the drive purpose for dedicated mounts, and the rest for the LVM. The size of the drive should be a good indicator as to which purpose it should be assigned to (larger sizes go to the dedicated mounts). Important: you should not provision all your disks to the LVM, or the dedicated mounts, there should be a mix. {'device': '/dev/xvdb', 'driver': 'xvd', 'model': 'Xen Virtual Block Device', 'size': '1031GB', 'table': 'unknown'} 1) Provision device /dev/xvdb to LVM (for data, mongo, or es_hot) 2) Provision device /dev/xvdb to dedicated mounts (for hadoop, kafka, or elasticsearch) ['1', '2']: default (None):
1
{'device': '/dev/xvdc', 'driver': 'xvd', 'model': 'Xen Virtual Block Device', 'size': '1031GB', 'table': 'unknown'} 1) Provision device /dev/xvdc to LVM (for data, mongo, or es_hot) 2) Provision device /dev/xvdc to dedicated mounts (for hadoop, kafka, or elasticsearch) ['1', '2']: default (None):1
{'device': '/dev/xvdd', 'driver': 'xvd', 'model': 'Xen Virtual Block Device', 'size': '2147GB', 'table': 'unknown'} 1) Provision device /dev/xvdd to LVM (for data, mongo, or es_hot) 2) Provision device /dev/xvdd to dedicated mounts (for hadoop, kafka, or elasticsearch) ['1', '2']: default (None):1
{'device': '/dev/xvde', 'driver': 'xvd', 'model': 'Xen Virtual Block Device', 'size': '2147GB', 'table': 'unknown'} 1) Provision device /dev/xvde to LVM (for data, mongo, or es_hot) 2) Provision device /dev/xvde to dedicated mounts (for hadoop, kafka, or elasticsearch) ['1', '2']: default (None):2
Select Option 2 for the remainder drives.
To map an EX4000:
Please specify the drive purpose. We typically put SSDs on the LVM for services requiring fast I/O (data, mongo, es_hot), and HDDs for dedicated services like hadoop, elasticsearch, kafka. Ideally your host should have a mix of SSDs (fast) and HDDs (slow), so you should set your drive purpose accordingly to the Exabeam appliance specs. Important: If your host has all SSDs mounted, please mark the drive purpose for dedicated mounts, and the rest for the LVM. The size of the drive should be a good indicator as to which purpose it should be assigned to (larger sizes go to the dedicated mounts). Important: you should not provision all your disks to the LVM, or the dedicated mounts, there should be a mix. {'device': '/dev/xvdb', 'driver': 'xvd', 'model': 'Xen Virtual Block Device', 'size': '1031GB', 'table': 'unknown'} 1) Provision device /dev/xvdb to LVM (for data, mongo, or es_hot) 2) Provision device /dev/xvdb to dedicated mounts (for hadoop, kafka, or elasticsearch) ['1', '2']: default (None):
1
{'device': '/dev/xvdc', 'driver': 'xvd', 'model': 'Xen Virtual Block Device', 'size': '1031GB', 'table': 'unknown'} 1) Provision device /dev/xvdc to LVM (for data, mongo, or es_hot) 2) Provision device /dev/xvdc to dedicated mounts (for hadoop, kafka, or elasticsearch) ['1', '2']: default (None):1
{'device': '/dev/xvdd', 'driver': 'xvd', 'model': 'Xen Virtual Block Device', 'size': '2147GB', 'table': 'unknown'} 1) Provision device /dev/xvdd to LVM (for data, mongo, or es_hot) 2) Provision device /dev/xvdd to dedicated mounts (for hadoop, kafka, or elasticsearch) ['1', '2']: default (None):1
{'device': '/dev/xvde', 'driver': 'xvd', 'model': 'Xen Virtual Block Device', 'size': '2147GB', 'table': 'unknown'} 1) Provision device /dev/xvde to LVM (for data, mongo, or es_hot) 2) Provision device /dev/xvde to dedicated mounts (for hadoop, kafka, or elasticsearch) ['1', '2']: default (None):2
Select option 2 for the remaining drives.
The following values are recommended.
For Advanced Analytics when Case Manager is being deployed:
How many elasticsearch instances per host? [2]
1
What's the replication factor for elasticsearch? 0 means no replication. [0]0
How much memory in GB for each elasticsearch for each instance? [16]16
How much memory in GB for each kafka instance? [5]5
For Data Lake:
Note
If you are choosing an instance type where the memory is greater than 120 GB, we require 4 warm nodes. Otherwise, you will receive a warning message during the deployment process.
How many elasticsearch instances per host? [4]
4
How much memory in GB for each elasticsearch master node? [5]5
How much memory in GB for each elasticsearch hot node? [16]16
How much memory in GB for each elasticsearch warm node? [22]22
How much memory in GB for each kafka instance? [5]5
The following values are recommended for AWS and GCP deployments.
How many elasticsearch instances per host? [4]
4
How much memory in GB for each elasticsearch master node? [5]5
How much memory in GB for each elasticsearch hot node? [16]16
How much memory in GB for each elasticsearch warm node? [22]11
How much memory in GB for each kafka instance? [5]5
NTP is important for keeping the clocks in sync. If you have a local NTP server, input that information. If you do not have a local NTP server but have internet access, use the default pool.ntp.org. Only choose none if there is no local NTP server and no internet access.
What's the NTP server to synchronize time with? Type 'none' if you don't have an NTP server and don't want to sync time with the default NTP server group from ntp.org. [pool.ntp.org]
pool.ntp.org
The installation will automatically detect and assign a default route for your cluster.
Let us determine the right network interface name for the deployment. Discovered network interface name: eno1. This will be used as the default nic in the cluster.
If you would like to add internal DNS servers, select y and add them here. If not, select n. Name resolution here impacts only Docker containers.
Would you like to add any DNS servers? [y/n]
n
If there are any conflicting networks in the user's domain, override the Docker BIP and Calico subnets. Answer y if you want to override (an example is given below) and n if you do not.
Note
The docker_bip must have an IP actually in the subnet (i.e., the value cannot end in .0).
Would you like to override the docker_bip IP/CIDR (172.17.0.1/16)? [y/n]
y
Enter the new docker_bip IP/CIDR (minimum size /25, recommended size /16): [docker_bip_ip/CIDR]
Would you like to override the calico_network_subnet IP/CIDR (10.50.48.0/20)? [y/n]
n
Caution
IP addresses are given in the form [ip]/[CIDR]. Please apply the correct subnet CIDR block. Otherwise, network routing may fail or produce an unforeseen impact.
For Advanced Analytics, if you are setting up disaster recovery, configure it here. Please refer to Deploy Disaster Recovery.
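Before answering the override prompts, the docker_bip constraints above (the address must actually sit in the subnet, so it cannot end in .0, and the size must be between the recommended /16 and the minimum /25) can be sanity-checked with a small helper. This function is illustrative only and is not part of the installer; the exact prefix range the installer accepts may differ.

```shell
# Hypothetical helper (not part of the installer): validate a docker_bip
# value of the form ip/CIDR before typing it at the prompt.
validate_docker_bip() {
  local ip="${1%/*}" prefix="${1#*/}"
  # The address must actually be in the subnet, i.e. must not end in .0
  if [ "${ip##*.}" = "0" ]; then
    echo "rejected: $1 (host address must not end in .0)"; return 1
  fi
  # Sketch of the size rule: /16 recommended, /25 the smallest allowed
  if [ "$prefix" -lt 16 ] || [ "$prefix" -gt 25 ]; then
    echo "rejected: $1 (prefix should be between /16 and /25)"; return 1
  fi
  echo "ok: $1"
}

validate_docker_bip 172.18.0.1/16
validate_docker_bip 172.17.0.0/16 || true
```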
If the RPM (YUM) packages delivered with your installation are more than three months old, you will be prompted to update your packages. You can also choose the Install pre-approved CentOS package updates option from the main menu. Only update using RPM (YUM) packages provided by Exabeam inside your SXB package.
Note
You have the option to perform a rolling update or to update all hosts at once. A rolling update prevents log ingestion downtime; it still requires TCP and a load balancer in front of any Syslog source. Only update all hosts at once if you are doing a fresh install.
This update process exits the original fresh_install script. Once you have run the YUM updates and your hosts have been rebooted, you can return to and complete the deployment process by logging in to your master host and running:
/opt/exabeam_installer/init/exabeam-multinode-deployment.sh
Then select the Deploy Cluster menu option.
Your product is now deployed.
If you want to disable the Exabeam Cloud Telemetry Service, see How to Disable Exabeam Cloud Telemetry Service.
If you purchased Cloud Connectors to ingest logs, see the Cloud Connectors Administration Guide to get started.
Once you have deployed your purchased products, go to your host UI to configure features and services:
https://[master host IP]:8484
Log in as the admin user with the default password changeme to make configurations. Change the default password as soon as possible.
Upgrade an Exabeam Product
Hardware and Virtual Deployments Only
Important
Non-standard customizations to product service and configuration files are overwritten during upgrades. However, these customizations are detected during upgrades, and backups of the corresponding files are automatically created in the following folder: /opt/exabeam/data/version_files/backups. After the upgrade is complete, you can refer to these files to restore the configurations to your upgraded software.
You must meet the following requirements before upgrading to this release:
AWS deployments: All nodes MUST have src/dest (source/destination) checks turned off.
GCP deployments: The network must be open to IP protocol 4 (IP in IP) traffic within the cluster, and the security group must allow traffic to and from itself.
Warning
Do not install unauthorized third-party software onto Exabeam appliances. The performance and function of Exabeam hardware may be impacted.
If you have questions about the prerequisites, please create a support ticket at Exabeam Community to connect with a technical representative who can assist you.
Note
The current disaster recovery setup requires that both the primary and secondary clusters are on the same release version at all times. For more information, see Disaster Recovery.
Warning
If you have an existing custom UI port, set the web_common_external_port variable. Otherwise, access at the custom UI port may be lost after upgrading. Ensure the variable is set in /opt/exabeam_installer/group_vars/all.yml:
web_common_external_port: <UI_port_number>
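As a quick pre-upgrade check, you can confirm the variable is actually present in that file. The function below is a convenience sketch, not an Exabeam tool; the path is the one given above.

```shell
# Hypothetical pre-upgrade check (not an Exabeam tool): warn if the
# custom UI port variable is missing from the installer vars file.
check_ui_port() {
  local f="${1:-/opt/exabeam_installer/group_vars/all.yml}"
  if grep -q '^web_common_external_port:' "$f" 2>/dev/null; then
    echo "custom UI port is set"
  else
    echo "web_common_external_port not set"
  fi
}
check_ui_port
```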
Download the Exabeam_[product]_[build_version].sxb file that you want to upgrade to from Exabeam Community. Place it on the master node in a temporary directory. Do not place the SXB file in the /opt/exabeam_installer directory.
Run the command below to start a new screen session:
screen -LS [yourname]_[todaysdate]
Change the permission of the SXB file.
chmod +x Exabeam_[product]_[build_version].sxb
Start a new terminal session using your exabeam credentials (do not run as ROOT).
Initiate a screen session. This is mandatory and will prevent accidental termination of your session.
screen -LS [yourname]_[todaysdate]
Execute the following command:
./Exabeam_[product]_[build_version].sxb upgrade
The system will auto-detect your existing version. If it cannot, then you will be asked to enter the existing version that you are upgrading from.
If your previous software had non-standard customizations, the following output displays to indicate that a backup of the corresponding configuration and/or service files has been created in the given location. You can refer to these backup files to restore the customizations to your upgraded software.
When the upgrade finishes, the script will then ask the following questions.
Upgrade completed. Do you want to start exabeam-analytics now? [y/n] y
Upgrade completed. Do you want to start lime now? [y/n] y
Upgrading Advanced Analytics and Case Manager
SSH to the primary Advanced Analytics machine.
Run the command below to start a new screen session:
screen -LS new_screen
Download the new Exabeam_[product]_[build_version].sxb from the Exabeam Community. Place it anywhere on the master node except /opt/exabeam_installer.
In the same directory where you saved the .sxb file in the previous step, run the command below to upgrade Advanced Analytics and Case Manager:
./Exabeam_SOAR_SOAR-iyy_zzz.UBA_iyy_zzz.PLATFORM_PLT-iyy_zzz.EXA_SECURITY_cyyyyyy_zz.sxb upgrade
Use these instructions if you are upgrading a bundled Advanced Analytics and Case Manager deployment.
Add Ingestion (LIME) Nodes to an Existing Advanced Analytics Cluster
Hardware and Virtual Deployments Only
When you add ingestion nodes to an existing cluster, you boost your ingesting power so you can ingest and parse more logs.
If you have a Log Ingestion Message Extraction (LIME) engine on your master node or you already have dedicated ingestion nodes, which you may have added when you first installed Advanced Analytics, you can add a dedicated ingestion node.
You are prompted to answer questions about how your node should be configured. After answering these questions, please wait twenty minutes to two hours for the process to finish, depending on how many nodes you deployed.
If you have a LIME engine on your master node and you want to add multiple dedicated ingestion nodes, you must first add just one dedicated ingestion node. The LIME engine is disabled on the master node and moves to the dedicated ingestion node. After you add this first ingestion node, you can continue to add more as needed.
You can add ingestion nodes only if you ingest logs from Syslog. If you ingest logs from a SIEM, you can use only one LIME engine, on either a master or dedicated node. If you ingest logs from both a SIEM and Syslog, you ingest your SIEM logs using one LIME engine, on either a master or dedicated node, then distribute your Syslog traffic across your other ingestion nodes.
Each ingestion node should handle no more than 11k events per second (EPS). This upper limit depends on the log mixture and type, how many custom parsers you have, and how complex the parsers are. As a best practice, use a load balancer to evenly distribute the traffic across the nodes.
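As one common way to distribute Syslog traffic, a TCP load balancer such as HAProxy can round-robin connections across the ingestion nodes. The fragment below is a hedged sketch, not an Exabeam-supplied configuration: the node IPs, port 514, and file path are placeholders to adapt to your environment.

```shell
# Hypothetical sketch: write a minimal HAProxy TCP config that spreads
# Syslog traffic across two ingestion nodes. IPs and ports are examples.
cat > /tmp/haproxy-syslog.cfg <<'EOF'
frontend syslog_in
    mode tcp
    bind *:514
    default_backend ingestion_nodes

backend ingestion_nodes
    mode tcp
    balance roundrobin
    server lime1 10.10.2.88:514 check
    server lime2 10.10.2.89:514 check
EOF
echo "wrote /tmp/haproxy-syslog.cfg"
```

Note that TCP (rather than UDP) Syslog behind a load balancer is also what the rolling-update option expects.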
Once you add a node to a cluster, you can't remove it.
Have the following available and provisioned:
Exabeam credentials
IP addresses of your ingestion nodes
Credentials for inter-node communication (Exabeam can create these if they do not already exist)
A load balancer
Ensure that you ingest logs from Syslog and your load balancer is configured to send no more than 11k EPS to each node
To add an ingestion node:
Start a new screen session:
screen -LS new_screen
Run the command below to start the deployment:
/opt/exabeam_installer/init/exabeam-multinode-deployment.sh
Menu options appear. Select Add new nodes to the cluster.
Indicate how the nodes should be configured:
How many nodes do you wish to add?
1
What is the IP address of node 1 (localhost/127.0.0.1 not allowed)? 10.10.2.88
What are the roles of node 1? ['uba_slave', 'uba_lime']: uba_lime
Network Time Protocol (NTP) keeps your computer's clocks in sync. Indicate how this should be configured:
If you have a local NTP server, input that information.
If you don't have a local NTP server but your server has internet access, input the default pool.ntp.org.
If you don't have an NTP server and don't want to sync time with the default NTP server group from ntp.org, input none.
What's the NTP server to synchronize time with? Type 'none' if you don't have an NTP server and don't want to sync time with the default NTP server group from ntp.org. [pool.ntp.org]
pool.ntp.org
Indicate whether to configure internal DNS servers:
To configure internal DNS servers, input y.
If you don't want to configure internal DNS servers, input n.
Would you like to add any DNS servers? [y/n]
n
If there are any conflicting networks in the user's domain, override the docker_bip/CIDR value. If you change any of the docker networks, the product automatically uninstalls before you deploy it.
To override the value, input y.
If you don't want to override the value, input n.
Would you like to override the default docker BIP (172.17.0.1/16)? [y/n]
n
Enter the new docker_bip IP/CIDR (minimum size /25, recommended size /16): 172.18.0.1/16
Would you like to override the calico_network_subnet IP/CIDR (10.50.48.0/20)? [y/n]
n
To move your Rsyslog filters to the new node, note your ingestion node's host number at /opt/exabeam_installer/inventory, then run the command below, replacing [host] with the host number:
scp /etc/rsyslog.d/exabeam_rsyslog.conf exabeam@[host]:/etc/rsyslog.d/
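If you add several ingestion nodes over time, the scp step above can be generated per host instead of typed by hand. This loop is a sketch; the host names are placeholders to replace with the entries from /opt/exabeam_installer/inventory, and it only writes the commands to a file for review.

```shell
# Hypothetical sketch: generate one scp command per additional ingestion
# node. "host2" and "host3" are placeholders; take the real host names
# from /opt/exabeam_installer/inventory.
for host in host2 host3; do
  echo "scp /etc/rsyslog.d/exabeam_rsyslog.conf exabeam@${host}:/etc/rsyslog.d/"
done > /tmp/copy_rsyslog.sh
cat /tmp/copy_rsyslog.sh
```

After reviewing /tmp/copy_rsyslog.sh, you could run it with bash /tmp/copy_rsyslog.sh.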
Apply Pre-approved CentOS Updates
Hardware and Virtual Deployments Only
CentOS patches may be released periodically. These patches are included as part of the upgrade package. Applying them typically requires a system reboot.
To apply these CentOS updates, run the following command:
/opt/exabeam_installer/init/exabeam-multinode-deployment.sh
Select the Install pre-approved CentOS package updates option.