Configure Case Manager


Add Case Manager and Incident Responder to Advanced Analytics Disaster Recovery

Hardware and Virtual Deployments Only

If you are upgrading from Advanced Analytics SMP 2019.1 (i48) or lower and have configured disaster recovery for Advanced Analytics, add Case Manager and Incident Responder to the existing Advanced Analytics disaster recovery.

Warning

Configure this only with the assistance of an Exabeam Customer Success Engineer.

  1. Ensure that the Advanced Analytics replication is current.

  2. To ensure that the passive site matches the active site, compare the files in HDFS, the local file system, and MongoDB (a comparison sketch follows this list).

  3. Source the shell environment:

    . /opt/exabeam/bin/shell-environment.bash
  4. On the active cluster, stop the replicator:

    sos; replicator-socks-stop; replicator-stop
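
A minimal sketch of the comparison in step 2, assuming hypothetical paths and a hypothetical MongoDB database name; run the same checks on the active and passive sites and compare the output:

    # HDFS: summarize directory, file, and byte counts under a hypothetical data path
    hdfs dfs -count -q /opt/exabeam/data

    # Local file system: capture a recursive listing of a hypothetical directory
    ls -lR /opt/exabeam/data > /tmp/local_listing.txt

    # MongoDB: print per-collection document counts for a hypothetical database
    mongo --quiet --eval 'var d = db.getSiblingDB("exabeam_db");
        d.getCollectionNames().forEach(function(c) { print(c, d.getCollection(c).count()); })'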

Note

Both the primary and secondary clusters must be on the same release version at all times.

Warning

If you have an existing custom UI port, set the web_common_external_port variable in /opt/exabeam_installer/group_vars/all.yml. Otherwise, you may lose access to the custom UI port after the clusters are upgraded.

web_common_external_port: <UI_port_number>
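
For example, with a hypothetical custom UI port of 9090, the entry in /opt/exabeam_installer/group_vars/all.yml would be:

    web_common_external_port: 9090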

  1. (Optional) Disable the Exabeam Cloud Telemetry Service. For instructions, see Disable Telemetry Service.

  2. If you use the SkyFormation cloud connector service, stop the service.

    1. For SkyFormation v.2.1.18 and higher, run:

      sudo systemctl stop sk4compose
    2. For SkyFormation v.2.1.17 and lower, run:

      sudo systemctl stop sk4tomcat
      sudo systemctl stop sk4postgres

      Note

      After you've finished upgrading the clusters, the SkyFormation service automatically starts. To upgrade to the latest version of SkyFormation, please refer to the Update SkyFormation app on an Exabeam Appliance guide at support.skyformation.com.

  3. From Exabeam Community, download the Exabeam_[product]_[build_version].sxb file of the version you're upgrading to. Place it anywhere on the master node, except /opt/exabeam_installer, using Secure File Transfer Protocol (SFTP). (A consolidated shell sketch of steps 3 through 7 follows this list.)

  4. Change the permission of the file:

    chmod +x Exabeam_[product]_[build_version].sxb
  5. Start a new terminal session using your exabeam credentials (do not run as root).

  6. To avoid accidentally terminating your session, initiate a screen session.

    screen -LS [yourname]_[todaysdate]
  7. Execute the command, where [build_version] contains the iteration and build numbers of the release:

    ./Exabeam_[product]_[build_version].sxb upgrade 

    The system auto-detects your existing version. If it can't, you are prompted to enter the existing version you are upgrading from.

  8. When the upgrade finishes, decide whether to start the Analytics Engine and Log Ingestion Message Extraction engine:

    Upgrade completed. Do you want to start exabeam-analytics now? [y/n] y
    Upgrade completed. Do you want to start lime now? [y/n] y
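
A consolidated sketch of steps 3 through 7, assuming a hypothetical master node hostname, a hypothetical destination directory (/home/exabeam), and the placeholder .sxb file name; substitute your actual values:

    # Step 3: copy the installer to the master node over SFTP
    sftp exabeam@master-node.example.com
    sftp> put Exabeam_[product]_[build_version].sxb /home/exabeam/
    sftp> exit

    # Steps 4-5: on the master node, as the exabeam user (not root), make the file executable
    chmod +x Exabeam_[product]_[build_version].sxb

    # Steps 6-7: run the upgrade inside a screen session so a dropped SSH
    # connection does not terminate it
    screen -LS exabeam_upgrade
    ./Exabeam_[product]_[build_version].sxb upgrade

    # If your connection drops, list sessions and reattach
    screen -ls
    screen -r exabeam_upgrade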
  1. SSH to the primary Advanced Analytics machine.

  2. Start a new screen session:

    screen -LS new_screen
    /opt/exabeam_installer/init/exabeam-multinode-deployment.sh
  3. When asked to make a selection, choose Add product to the cluster.

  4. From these actions, choose option 4.

    1) Upgrade from existing version
    2) Deploy cluster
    3) Run precheck
    4) Add product to the cluster
    5) Add new nodes to the cluster
    6) Nuke existing services
    7) Nuke existing services and deploy
    8) Balance hadoop (run if adding nodes failed the first time)
    9) Roll back to previously backed up version
    10) Generate inventory file on disk
    11) Configure disaster recovery
    12) Promote Disaster Recovery Cluster to be Primary
    13) Install pre-approved CentOS package updates
    14) Change network settings
    15) Generate certificate signing requests
    16) Exit
    Choices: ['1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13', '14', '15', '16']: default (1): 4
  5. Indicate how the node should be configured:

    Which product(s) do you wish to add? ['ml', 'dl', 'cm']: cm
    How many nodes do you wish to add? (minimum: 0): 1
    What is the IP address of node 1 (localhost/127.0.0.1 not allowed)? 10.10.2.40
    What are the roles of node 1? ['cm', 'uba_slave']: cm
  6. To configure Elasticsearch, Kafka, DNS servers, and disaster recovery, use the following recommended values:

    How many elasticsearch instances per host? [2] 1
    What's the replication factor for elasticsearch? 0 means no replication. [0]
    How much memory in GB for each elasticsearch instance? [16] 16
    How much memory in GB for each kafka instance? [5]
    Would you like to add any DNS servers? [y/n] n
    Do you want to setup disaster recovery? [y/n] n
  7. Once the installation script successfully completes, restart the Analytics Engine.

  1. On the secondary site, run:

    screen -LS dr_setup
    /opt/exabeam_installer/init/exabeam-multinode-deployment.sh
  2. Select option: Configure disaster recovery.

  3. Select the third option: This cluster is for file replication (configuration change needed)

    Please select the type of cluster:
    1) This cluster is source cluster (usually the primary)
    2) This cluster is destination cluster (usually the dr node)
    3) This cluster is for file replication (configuration change needed)
  4. Enter the IP address of the source cluster.

    What is the IP of the source cluster?
  5. Select option: SSH key.

    The source cluster's SSH key will replace the one for this cluster. How do you want to pull the source cluster SSH key?
    1) password
    2) SSH key
    
  6. Enter the private key path.

    What is the path to the private key file?

    The deployment may take some time to finish.

  7. Replication from the primary cluster starts automatically, but all replication items are disabled by default. You must manually enable the items you want to replicate.

    On the secondary site, access the custom configuration file /opt/exabeam/config/custom/custom_replicator_disable.conf, then enable replication items.

    For example, to fetch only compressed event files, set the Enabled field for the [".evt.gz"] file type to true (an annotated sketch of this file follows this list):

    {
        EndPointType = HDFS
        Include {
            Dir = "/opt/exabeam/data/input"
            FilePattern = [".evt.gz"]
        }
        Enabled = true
    }
  8. Start the replicator:

    sos; replicator-start
  9. Log on to the standby cluster GUI.

  10. To gather context from the active cluster to synchronize the standby cluster, navigate to LDAP Import > Generate Context, then click Generate Context.
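
An annotated version of the replication-item example from step 7, assuming the file uses HOCON syntax (where # starts a comment); the comments are explanatory only and the values are the ones shown above:

    {
        # Endpoint type for this replication item (HDFS in the example above)
        EndPointType = HDFS
        Include {
            # Directory whose files this item covers
            Dir = "/opt/exabeam/data/input"
            # File name patterns this item matches
            FilePattern = [".evt.gz"]
        }
        # Items in custom_replicator_disable.conf are disabled by default;
        # set Enabled to true for each item you want to replicate
        Enabled = true
    }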

On the active cluster, start the replicator:

replicator-socks-start; replicator-start