Provides information about different storage types required for Genesys Multicloud CX services.
Choosing storage involves many factors, such as the number of agents, call volumes, call recording and archiving, data security, accessibility, and so on. It also involves technical factors such as Input/Output Operations Per Second (IOPS) or throughput, storage type, latency, and so on.
In Genesys Multicloud CX private edition, you create storage for specific services, for example, Genesys Customer Experience Insights (GCXI) and Voice. Services that require storage elements, such as file and disk storage for processing their data, use the Kubernetes Persistent Volume (PV) subsystem. The storage subsystem and Kubernetes StorageClass requirements for different services on different Kubernetes platforms are given in the following tables:
You can create or select the storage subsystem for your service on a specific Kubernetes platform based on the information presented in the corresponding table. For the exact sizing of each storage subsystem or PV, refer to the related service-level documentation.
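As background, services consume these storage subsystems through PersistentVolumeClaims (PVCs) bound to a StorageClass. The following is a minimal sketch of an RWO disk claim; the claim name, StorageClass name, and size are illustrative placeholders, not Genesys-mandated values — consult the service-level documentation for actual requirements.

```yaml
# Illustrative only: names and sizes are placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-rwo-claim            # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce                  # RWO: attachable to a single pod (disk storage)
  storageClassName: ssd-storage      # hypothetical SSD-backed StorageClass
  resources:
    requests:
      storage: 100Gi
```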
File and disk storage for OpenShift
The following table provides the storage information for OpenShift:
|Storage Type||Description||Associated Services|
|Disk||Uses dynamically provisioned disks (which reside in a single AZ) to create an RWO volume that can be attached to a single pod. Genesys recommends SSD storage for production deployments.|||
|Files-local||Creates an RWX volume that can be attached to many pods across all the node pool AZs, similar to NFS. 'Local' means Locally Redundant Storage, which replicates copies of data within a single AZ in a single region; hence, there is a risk of your Persistent Volume Claims (PVCs) becoming unavailable if that AZ or region fails completely. Requires a guaranteed 1 IOPS per GB stored, with a minimum of 100 IOPS, while allowing bursting and higher throughput than standard HDD storage. Genesys recommends SSD storage for production deployments. Note: The minimum required volume size is 100 GB.||BDS|
|Files-redundant||Creates an RWX volume that can be attached to many pods across all the node pool AZs, similar to NFS. 'Redundant' means Zone-Redundant Storage, which replicates copies of data across multiple AZs in a region. No IOPS guarantee is needed.|||
|Object storage||Creates object storage, which is optimized for storing massive amounts of unstructured data across AZs and regions. Similar to Amazon S3, Azure Blob Storage, or Google Cloud Storage.||*GIM data feed/GSP|
*Genesys Multicloud CX services that use the disk space mainly for logging purposes. You can optionally configure them not to use the disk space. Refer to the related service-level guides for more information.
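The Files-local row above notes NFS-like RWX semantics and a 100 GB minimum volume size. A hedged sketch of such a claim follows; the claim and StorageClass names are assumptions — use whichever RWX-capable StorageClass your OpenShift cluster provides.

```yaml
# Illustrative only: substitute your cluster's RWX-capable StorageClass.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-rwx-claim            # hypothetical claim name
spec:
  accessModes:
    - ReadWriteMany                  # RWX: shareable across pods in all node pool AZs
  storageClassName: file-storage     # hypothetical NFS-like StorageClass
  resources:
    requests:
      storage: 100Gi                 # Files-local minimum volume size per the table above
```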
File and disk storage for GKE
The following table provides the storage information for GKE:
|GKE Storage Class Name#||Storage Type||Notes||Associated Services|
|ephemeral (emptyDir)||Persistent disk||Node disk accessed through local ephemeral emptyDir volumes, provided there is no access to hostPath.|||
|(default)*||Persistent Disk (pd)||Default Zonal (single AZ) RWO StorageClasses provided by GKE with typical block storage performance.|||
|standard-rwx**||Filestore - Basic HDD||Local redundant storage for RWX volumes shared between pod instances; replicated data in a single AZ.||BDS|
|redundant-rwx**||Filestore - Enterprise||Regional redundant storage for RWX volumes shared between pod instances; replicated data to two zones in a region (Regional PD).|||
- #These GKE storage class names are created by default. You can modify the storage class names to suit your organizational needs.
- *RWO type storage is tested with the default CSI driver.
- **RWX type storage is tested with the Filestore CSI driver. This driver is not enabled by default and must be enabled in the GKE cluster. Note that this configuration is available only in GKE 1.21.x and later releases. For more information on enabling the Filestore CSI driver, see the GKE documentation.
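For clusters where the Filestore CSI driver is not yet enabled, it can be turned on with `gcloud container clusters update CLUSTER_NAME --update-addons=GcpFilestoreCsiDriver=ENABLED` (verify against the GKE documentation for your cluster version). Once enabled, an RWX claim against the `standard-rwx` class from the table might look like the sketch below; the claim name is hypothetical, and the size reflects the Filestore Basic tier's 1 TiB minimum instance capacity.

```yaml
# Illustrative only: claim name and size are assumptions.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-filestore-claim      # hypothetical claim name
spec:
  accessModes:
    - ReadWriteMany                  # RWX via the Filestore CSI driver
  storageClassName: standard-rwx     # Filestore Basic HDD class from the table
  resources:
    requests:
      storage: 1Ti                   # Filestore Basic instances have a 1 TiB minimum
```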