Provides information about different storage types required for Genesys Multicloud CX services.
Choosing storage involves many factors, such as the number of agents, call volumes, call recording and archiving, data security, and accessibility. It also involves technical factors such as Input/Output Operations Per Second (IOPS), throughput, storage type, and latency.
In Genesys Multicloud CX private edition, you create storage for specific services, for example, Genesys Customer Experience Insights (GCXI) and Voice. Services that require storage elements, such as file and disk storage for processing their data, use the Kubernetes Persistent Volume (PV) subsystem. The storage subsystem and Kubernetes StorageClass requirements for the services on each Kubernetes platform are given in the following tables:
- File and disk storage for Azure Kubernetes Service (AKS)
- File and disk storage for Google Kubernetes Engine (GKE)
- File and disk storage for OpenShift
You can create or select the storage subsystem for your service on a specific Kubernetes platform based on the information presented in the corresponding table. For the exact sizing of each storage subsystem or PVs, refer to the related service-level documentation.
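As an illustration of how a service consumes one of these storage classes, the following is a minimal Persistent Volume Claim (PVC) sketch. The claim name, storage class name, and size below are placeholders, not values from any specific Genesys service; actual sizing comes from the service-level documentation.

```yaml
# Hypothetical PVC sketch: a service claims shared RWX file storage
# from a named StorageClass. All names and the size are placeholders;
# consult the service-level guide for actual values.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-service-data
spec:
  accessModes:
    - ReadWriteMany          # RWX: shareable across multiple pods
  storageClassName: files-standard
  resources:
    requests:
      storage: 10Gi          # placeholder size
```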
File and disk storage for AKS
The following table provides the storage information for AKS:
|AKS Storage Class Name#||Storage Type||Notes||Associated Services|
|disk-hdd (ephemeral)||Standard_HDD||Node disk mounted via HostPath.||
| ||Azure Disk - Standard / Azure Disk - Premium||Use single AZ disks to create an RWO volume that can be attached to a single pod.||
|files-standard||Azure Files - Standard Fileshare LRS||Local redundant storage (LRS) for RWX volumes that can be shared between multiple pod instances; replicated data in a single AZ. Lower throughput than premium and no IOPS guaranteed.||
|files-standard-redundant||Azure Files - Premium Fileshare ZRS||Zonal redundant storage (ZRS) for RWX volumes shared across multiple pods; replicated data across multiple AZs in a region. No IOPS guaranteed; similar to NFS.||
|blob storage||Azure Blob Storage||Azure Blob Storage is optimized for storing massive amounts of unstructured data across AZs and regions.||
- #The AKS storage class names are created by default. You can modify the storage class names based on your organizational needs.
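If you recreate or rename these classes, a ZRS file-share class like files-standard-redundant above might be defined as in the following sketch. It uses the Azure Files CSI driver; the `skuName` value and other settings are assumptions to verify against your AKS environment.

```yaml
# Hypothetical StorageClass sketch for the files-standard-redundant
# entry above, backed by the Azure Files CSI driver with a
# zone-redundant premium SKU. Parameters are illustrative.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: files-standard-redundant
provisioner: file.csi.azure.com
parameters:
  skuName: Premium_ZRS       # zone-redundant premium file share
allowVolumeExpansion: true
reclaimPolicy: Delete
```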
File and disk storage for GKE
The following table provides the storage information for GKE:
|GKE Storage Class Name#||Storage Type||Notes||Associated Services|
|ephemeral (emptyDir)||Persistent disk||Node disk accessed through local ephemeral emptyDir volumes, provided there is no access to hostPath.||
|standard-rwo* / premium-rwo*||Persistent Disk (pd)||Default Zonal (single AZ) RWO StorageClasses provided by GKE with typical block storage performance.||
|standard-rwx**||Filestore - Basic HDD||Local redundant storage for RWX volumes shared between pod instances; replicated data in a single AZ.||BDS|
|redundant-rwx**||Filestore - Enterprise||Regional redundant storage for RWX volumes shared between pod instances; replicated data to two zones in a region (Regional PD).||
|blob storage||Cloud Storage buckets||Google Cloud Storage buckets are optimized for storing massive amounts of unstructured data across AZs and regions.||
- #The GKE storage class names are created by default. You can modify the storage class names based on your organizational needs.
- *RWO type storage is tested with the default CSI driver.
- **RWX type storage is tested with the Filestore CSI driver. This storage driver is not enabled by default and must be enabled in the GKE clusters. However, this configuration is available only in GKE 1.21.x releases. For more information on enabling the Filestore CSI driver, see the GKE documentation.
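If you define an RWX class yourself rather than using the preinstalled one, a Filestore-backed StorageClass might look like the following sketch. The provisioner is the Filestore CSI driver named above; the `tier` and `network` values are assumptions to adapt to your project.

```yaml
# Hypothetical StorageClass sketch for the standard-rwx entry above,
# backed by the Filestore CSI driver (which must be enabled on the
# cluster first). tier and network values are illustrative.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard-rwx
provisioner: filestore.csi.storage.gke.io
parameters:
  tier: standard             # Basic HDD tier
  network: default           # VPC network hosting the Filestore instance
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```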
File and disk storage for OpenShift
The following table provides the storage information for OpenShift:
|Storage Type||Description||Associated Services|
|Disk||Uses dynamically provisioned disks (which reside in a single AZ) to create an RWO volume that can be attached to a single pod. Genesys recommends SSD storage for production deployments.||
|Files-local||Creates an RWX volume that can be attached to many pods across all the node pool AZs; similar to NFS. The local type means 'Locally Redundant Storage', which replicates copies of data within a single AZ in a single region; hence, there is a risk of your Persistent Volume Claims (PVCs) becoming unavailable if that AZ fails completely. Requires a guaranteed 1 IOPS per GB stored, with a minimum of 100 IOPS, while allowing bursting and higher throughput than standard HDD storage. Genesys recommends SSD storage for production deployments. Note: The minimum volume size needed is 100GB.||BDS|
|Files-redundant||Creates an RWX volume that can be attached to many pods across all the node pool AZs. The redundant type means 'Zonal Redundant Storage', which replicates copies of data across multiple AZs in a region. No IOPS guarantee is needed; similar to NFS.||
|Object storage||Creates object storage optimized for storing massive amounts of unstructured data across AZs and regions; similar to Amazon S3, Azure Blob Storage, or Google Cloud Storage.||
- *Some Genesys Multicloud CX services use disk storage mainly for logging purposes. You can optionally configure them not to use disk storage. Refer to the related service-level guides for more information.
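For the Files-local type described above, a claim must respect the 100GB minimum volume size. The following is a minimal PVC sketch; the claim name is a placeholder, and the storage class name depends on what your OpenShift cluster administrator has defined.

```yaml
# Hypothetical PVC sketch for Files-local storage on OpenShift.
# The claim and storage class names are placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-shared-files
spec:
  accessModes:
    - ReadWriteMany          # attachable to many pods across AZs
  storageClassName: files-local   # placeholder class name
  resources:
    requests:
      storage: 100Gi         # Files-local minimum volume size
```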