
ESX OVA Setup Guide


Request OVA Image Files

To request access to the OVA image files, visit the Virtual Images section in the Exabeam Community.

Note

Please review all specifications for your platform and ensure you have sufficient resources to deploy Exabeam images. Additionally, please ensure you have valid Exabeam licenses for the product(s) you will implement.

Exabeam Platform Specification Table for Virtual Platforms

These are the minimum operating specifications needed to run your Exabeam product. We do not support hybrid deployments (cross-environment deployments). All nodes must be in the same subnet.

Be aware that the vCPU count is not the same as the number of physical CPUs or cores. The number of vCPUs typically equals the number of hardware threads in the processor.
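As a rough illustration, the vCPU budget of a host can be estimated from its socket, core, and thread counts. This is a sketch with hypothetical figures, not an Exabeam sizing tool:

```python
def host_vcpus(sockets: int, cores_per_socket: int, threads_per_core: int) -> int:
    """Estimate available vCPUs: one vCPU per hardware thread."""
    return sockets * cores_per_socket * threads_per_core

# Hypothetical dual-socket host, 10 cores per socket, with
# Hyper-Threading enabled (2 threads per core):
print(host_vcpus(2, 10, 2))  # 40
```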

The tables below detail the CPU and memory allocation required for Exabeam products to operate optimally, with the following provisioned:

  • The host is not shared with any other product or resource.

  • For OVA deployments, additional RAM and vCPUs must be allocated to the hypervisor (i.e., not allocated to any other VM).

  • All virtual appliances must be approved by Exabeam prior to deployment.

  • Storage requirements:

    • Use local drives, or a NAS or SAN with a 10 Gbit/s link (iSCSI or Fibre Channel) and block storage access; the underlying drives must also meet the SSD/HDD I/O performance requirements.

      Important

      Configuration and maintenance of NAS/SAN is the responsibility of the customer. Exabeam cannot debug issues related to misconfiguration or performance with NAS/SAN.

      Important

      NFS is not supported.

    • Do not use VMFS with extents spread across multiple drives and managed by ESX.

    • To avoid performance issues, SSD drive files MUST be placed on an SSD-backed datastore.

      Note

      To ensure adequate virtual drive capacity, a 10% additional storage buffer is required on the physical drives. For example, when an OVA requires 215 GB of virtual space, the appliance requires a physical drive with 236.5 GB of space.

    • Disk storage must be configured with thick provisioning (eager zeroed recommended). Snapshots are not supported because they switch the environment from thick to thin provisioning.

  • Network requirements:

    • The network card must support paravirtualization (VMXNET 3 adapters).
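The 10% storage buffer described in the note above amounts to a simple calculation. A minimal sketch (the function name is illustrative):

```python
def physical_drive_gb(virtual_gb: float, buffer: float = 0.10) -> float:
    """Physical capacity needed to back a virtual drive, including the 10% buffer."""
    return virtual_gb * (1 + buffer)

# The 215 GB OVA system drive needs roughly a 236.5 GB physical drive.
print(round(physical_drive_gb(215), 1))  # 236.5
```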

Advanced Analytics Node Type

Specifications

Physical Master Node including Hypervisor

vCPU: 44 (Intel Xeon Gold 6230 or newer processor)

Memory: 266 GB

  • 1 x 32 GB (SSD) for hypervisor. Hypervisor requires an independent drive for reliability and isolation.

  • 2 x 240 GB (SSD). Data redundancy is required via SAN/NAS or with Hardware RAID 1 for local disks. Due to RAID 1 configuration, two drives are required.

  • 2 x 3.84 TB (SSD). Data redundancy is required via SAN/NAS or with Hardware RAID 1 for local disks.

  • 6 x 4 TB (HDD). Data redundancy is required via SAN/NAS or with Hardware RAID 10 for local disks. Due to RAID 10 configuration, only half of the physical capacity is available and an even quantity of drives is required.

Virtual Hypervisor

4 vCPU

Memory: 10 GB

1 x 32 GB (SSD)

OVA Virtual Master Node

vCPU: 40 (Intel Xeon Gold 6230 or newer processor)

Memory: 256 GB

  • 1 x 215 GB (SSD)

  • 3 x 863 GB (SSD)

  • 6 x 1.8 TB (HDD)

Physical Worker Node including Hypervisor

vCPU: 24 (Intel Xeon Silver 4210 or newer processor)

Memory: 138 GB

  • 1 x 32 GB (SSD) for hypervisor. Hypervisor requires an independent drive for reliability and isolation.

  • 2 x 240 GB (SSD). Data redundancy is required via SAN/NAS or with Hardware RAID 1 for local disks. Due to RAID 1 configuration, two drives are required.

  • 2 x 3.84 TB (SSD). Data redundancy is required via SAN/NAS or with Hardware RAID 1 for local disks.

  • 6 x 4 TB (HDD). Data redundancy is required via SAN/NAS or with Hardware RAID 10 for local disks. Due to RAID 10 configuration, only half of the physical capacity is available and an even quantity of drives is required.

Virtual Hypervisor

4 vCPU

Memory: 10 GB

1 x 32 GB (SSD)

OVA Virtual Worker Node

vCPU: 20 (Intel Xeon Silver 4210 or newer processor)

Memory: 128 GB

  • 1 x 215 GB (SSD)

  • 3 x 863 GB (SSD)

  • 6 x 1.8 TB (HDD)

Table 1. Advanced Analytics Node Specifications
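The RAID overheads noted in the table can be sketched as a small capacity calculation. This is a simplified model of mirrored RAID (it ignores filesystem and controller overhead):

```python
def usable_tb(drives: int, drive_tb: float, raid_level: str) -> float:
    """Usable capacity for the mirrored RAID levels referenced above."""
    if raid_level not in ("RAID 1", "RAID 10"):
        raise ValueError(f"unsupported RAID level: {raid_level}")
    if drives % 2:
        raise ValueError("mirrored RAID requires an even number of drives")
    return drives * drive_tb / 2  # half the raw capacity holds the mirror copies

# 6 x 4 TB drives in RAID 10 leave 12 TB of usable capacity.
print(usable_tb(6, 4.0, "RAID 10"))  # 12.0
```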


Node Type in Exabeam Cluster

Specifications

Physical Master or Worker Node including Hypervisor

vCPU: 20 (Intel Xeon E5-2620 v4 or newer processor)

Memory: 202 GB

  • 1 x 32 GB (SSD) for hypervisor. Hypervisor requires an independent drive for reliability and isolation.

  • 2 x 240 GB (SSD). Data redundancy is required via SAN/NAS or with Hardware RAID 1 for local disks.

  • 2 x 3.84 TB (SSD). Data redundancy is required via SAN/NAS or with Hardware RAID 1 for local disks.

  • 10 x 8 TB (HDD). Data redundancy is required via SAN/NAS or with Hardware RAID 10 for local disks. Due to RAID 10 configuration, only half of the physical capacity is available and an even quantity of drives is required.

Virtual Hypervisor

4 vCPU

Memory: 10 GB

1 x 32 GB (SSD)

OVA Virtual Master or Worker Node

vCPU: 16 (Intel Xeon E5-2620 v4 or newer processor)

Memory: 192 GB

  • 1 x 215 GB (SSD)

  • 2 x 1.73 TB (SSD)

  • 9 x 3.60 TB (HDD)

Table 2. Data Lake Node Specifications


For clusters with 21 or more nodes, an additional three management nodes are required for cluster management operations, health monitoring, and other critical functions.
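The sizing rule above can be sketched as a small helper (a hypothetical function, shown only to make the threshold explicit):

```python
def total_nodes(data_nodes: int) -> int:
    """Total nodes to provision: clusters of 21+ nodes need 3 extra management nodes."""
    return data_nodes + (3 if data_nodes >= 21 else 0)

print(total_nodes(20))  # 20
print(total_nodes(21))  # 24
```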