Storage requirements
Provides information about different storage types required for Genesys Multicloud CX services.
Deciding on storage involves many factors, such as the number of agents, call volumes, call recording and archiving, data security, accessibility, and so on. It also involves technical factors such as input/output operations per second (IOPS) or throughput, storage type, latency, and so on.
In Genesys Multicloud CX private edition, you create storage for specific services, for example, Genesys Customer Experience Insights (GCXI) and Voice. Services that require storage elements, such as file and disk storage for processing their data, use the Kubernetes Persistent Volume (PV) subsystem. The storage subsystem and Kubernetes StorageClass requirements for the various services on each Kubernetes platform are given in the following tables:
- File and disk storage for Azure Kubernetes Service (AKS)
- File and disk storage for Google Kubernetes Engine (GKE)
You can create or select the storage subsystem for your service on a specific Kubernetes platform based on the information presented in the corresponding table. For the exact sizing of each storage subsystem or PV, refer to the related service-level documentation.
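As an illustration of how a service consumes the PV subsystem, the following is a minimal sketch of a PersistentVolumeClaim that requests shared file storage through one of the StorageClasses listed in the tables below. The claim name, namespace, and size are placeholder values, not part of any Genesys chart; take the actual storage class and sizing from the tables and the service-level documentation.

```yaml
# Minimal sketch of a PersistentVolumeClaim that consumes storage through the
# Kubernetes PV subsystem. Name, namespace, size, and storageClassName are
# illustrative placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-shared-data        # hypothetical claim name
  namespace: example-service       # hypothetical namespace
spec:
  accessModes:
    - ReadWriteMany                # RWX: shareable across multiple pod instances
  storageClassName: files-standard # one of the classes from the AKS table below
  resources:
    requests:
      storage: 10Gi                # exact sizing comes from the service-level docs
```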
File and disk storage for AKS
The following table provides the storage information for AKS:
AKS Storage Class Name# | Storage Type | Notes | Associated Services |
---|---|---|---|
disk-hdd (ephemeral) | Standard_HDD | Node disk mounted via HostPath. | |
disk-standard<br>disk-premium | Azure Disk - Standard<br>Azure Disk - Premium | Use single-AZ disks to create an RWO volume that can be attached to a single pod. | |
files-standard | Azure Files - Standard Fileshare LRS | Locally redundant storage (LRS) for RWX volumes that can be shared between multiple pod instances; replicated data in a single AZ. Lower throughput than premium and no IOPS guaranteed. | BDS |
files-standard-redundant | Azure Files - Premium Fileshare ZRS | Zone-redundant storage (ZRS) for RWX volumes shared across multiple pods; replicated data across multiple AZs in a region. No IOPS guaranteed - similar to NFS. | |
blob storage | Azure Blob Storage | Create Azure Blob Storage, which is optimized for storing massive amounts of unstructured data across AZs and regions. | |
- #The AKS storage class names are created by default. You can modify the storage class names based on your organizational needs.
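If these classes do not already exist in your AKS cluster, they can be defined with StorageClass manifests along the lines of the sketch below. It assumes the Azure Disk and Azure Files CSI drivers that AKS provides (disk.csi.azure.com and file.csi.azure.com); the skuName values, reclaim policies, and binding modes are illustrative and should be adjusted to your redundancy and retention requirements.

```yaml
# Sketch of StorageClass definitions matching two rows of the AKS table,
# assuming the AKS-provided CSI drivers; adjust parameters to your needs.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: disk-premium
provisioner: disk.csi.azure.com          # Azure Disk CSI driver
parameters:
  skuName: Premium_LRS                   # single-AZ premium managed disk
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer  # provision the disk in the pod's zone
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: files-standard
provisioner: file.csi.azure.com          # Azure Files CSI driver
parameters:
  skuName: Standard_LRS                  # locally redundant standard file share
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: Immediate
```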
File and disk storage for GKE
The following table provides the storage information for GKE:
GKE Storage Class Name# | Storage Type | Notes | Associated Services |
---|---|---|---|
ephemeral (emptyDir) | Persistent disk | Node disk accessed through local ephemeral emptyDir volumes, provided there is no access to hostPath. | |
standard-rwo*<br>premium-rwo* | pd-balanced (SSD)<br>pd-ssd (SSD) | Persistent Disk (pd) - default zonal (single-AZ) RWO StorageClasses provided by GKE with typical block storage performance. | |
standard-rwx** | Filestore - Basic HDD | Locally redundant storage for RWX volumes shared between pod instances; replicated data in a single AZ. | BDS |
redundant-rwx** | Filestore - Enterprise | Regionally redundant storage for RWX volumes shared between pod instances; replicated data to two zones in a region (Regional PD). | |
blob storage | Cloud Storage buckets | Create Google Cloud Storage buckets, which are optimized for storing massive amounts of unstructured data across AZs and regions. | |
- #The GKE storage class names are created by default. You can modify the storage class names based on your organizational needs.
- *RWO type storage is tested with the default CSI driver.
- **RWX type storage is tested with the Filestore CSI driver. This storage driver is not enabled by default and must be enabled in the GKE cluster. However, this configuration is available only in GKE 1.21.x releases. For more information on enabling the Filestore CSI driver, see the GKE documentation.
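Because GKE does not create the RWX classes out of the box, they are typically defined against the Filestore CSI driver once it is enabled on the cluster. The following is a sketch for the standard-rwx class, assuming the filestore.csi.storage.gke.io provisioner; the network parameter is a placeholder for the VPC your cluster runs in, and the reclaim and binding settings are illustrative.

```yaml
# Sketch of a StorageClass for the standard-rwx row of the GKE table, assuming
# the Filestore CSI driver (filestore.csi.storage.gke.io) is enabled on the cluster.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard-rwx
provisioner: filestore.csi.storage.gke.io
parameters:
  tier: standard          # Filestore Basic HDD
  network: default        # placeholder; use the VPC your cluster runs in
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```

On an existing cluster, the driver can usually be enabled with gcloud container clusters update CLUSTER_NAME --update-addons=GcpFilestoreCsiDriver=ENABLED; refer to the GKE documentation for the authoritative steps and version requirements.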