Revision as of 14:30, September 30, 2022
Provides information about different storage types required for Genesys Multicloud CX services.
Deciding on storage involves many factors, such as the number of agents, call volumes, call recording and archiving, data security, and accessibility. It also involves technical factors such as input/output operations per second (IOPS) or throughput, storage type, and latency.
In Genesys Multicloud CX private edition, you create storage for specific services, for example, Genesys Customer Experience Insights (GCXI) and Voice. Services that require storage elements, such as file and disk storage for processing their data, use the Kubernetes Persistent Volume (PV) subsystem. The storage subsystem and Kubernetes StorageClass requirements for each service on each Kubernetes platform are given in the following tables:
- File and disk storage for Azure Kubernetes Service (AKS)
- File and disk storage for Google Kubernetes Engine (GKE)
- File and disk storage for OpenShift
You can create or select the storage subsystem for your service on a specific Kubernetes platform based on the information presented in the corresponding table. For the exact sizing of each storage subsystem or PV, refer to the related service-level documentation.

Note: By default, the Kubernetes platform creates default file and disk storage classes. However, Genesys recommends that you do not use them, but instead create custom file and disk storage for your service.

Note: You can determine the storage requirements for your contact center yourself by exploring the storage requirements of each service, by using the Sizing Calculator, or by leveraging the Genesys Professional Services team's support.
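Regardless of platform, a service pod typically consumes one of these storage subsystems by declaring a PersistentVolumeClaim against a StorageClass. The following is a minimal sketch; the claim and class names (`example-service-data`, `custom-disk-premium`) and the 50 GiB size are illustrative placeholders, not values mandated by Genesys:

```
# Hypothetical PVC: requests a 50 GiB RWO volume from a custom
# storage class. Names and sizes are examples only; consult the
# service-level documentation for actual sizing.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-service-data
spec:
  accessModes:
    - ReadWriteOnce          # RWO: attachable to a single pod
  storageClassName: custom-disk-premium
  resources:
    requests:
      storage: 50Gi
```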
File and disk storage for AKS
The following table provides the storage information for AKS:
| AKS Storage Class Name# | Storage Type | Notes | Associated Services |
|---|---|---|---|
| disk-hdd (ephemeral) | Standard_HDD | Node disk mounted via HostPath. | GCXI, Gplus WFM, GVP-MCP, GVP-RM, Interaction Server, Pulse, Tenant, Voice Services, WebRTC |
| disk-standard, disk-premium | Azure Disk - Standard, Azure Disk - Premium | Use single-AZ disks to create an RWO volume that can be attached to a single pod. | CX Contact, Designer, GVP, GWS, UCSX |
| files-standard | Azure Files - Standard Fileshare LRS | Locally redundant storage (LRS) for RWX volumes that can be shared between multiple pod instances; data is replicated within a single AZ. Lower throughput than premium and no guaranteed IOPS. | BDS |
| files-standard-redundant | Azure Files - Premium Fileshare ZRS | Zone-redundant storage (ZRS) for RWX volumes shared across multiple pods; data is replicated across multiple AZs in a region. No guaranteed IOPS; similar to NFS. | CX Contact, Designer, GCXI, Gplus WFM, GVP, GWS, Pulse, Tenant, UCSX, WebRTC |
| blob storage | Azure Blob Storage | Azure Blob Storage, optimized for storing massive amounts of unstructured data across AZs and regions. | Digital channels (image, files, upload), GIM data feed/GSP, Recordings (GVP), Telemetry, Voicemail |
- #The AKS storage class names are created by default. You can modify the storage class names based on your organizational needs.
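As an example of replacing the default classes with a custom one, the following sketch defines an AKS StorageClass comparable to disk-premium. It assumes the Azure Disk CSI driver that AKS enables by default; the class name `custom-disk-premium` is a placeholder:

```
# Sketch of a custom AKS StorageClass for premium SSD managed disks.
# Assumes the default Azure Disk CSI driver (disk.csi.azure.com).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: custom-disk-premium
provisioner: disk.csi.azure.com
parameters:
  skuName: Premium_LRS       # premium SSD managed disks
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer  # provisions the disk in the pod's AZ
```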
File and disk storage for GKE
The following table provides the storage information for GKE:
| GKE Storage Class Name# | Storage Type | Notes | Associated Services |
|---|---|---|---|
| ephemeral (emptyDir) | Persistent disk | Node disk accessed through local ephemeral emptyDir volumes, provided there is no access to hostPath. | |
| standard-rwo*, premium-rwo* | pd-balanced (SSD), pd-ssd (SSD) | Persistent Disk (pd): default zonal (single-AZ) RWO StorageClasses provided by GKE, with typical block storage performance. | |
| standard-rwx** | Filestore - Basic HDD | Locally redundant storage for RWX volumes shared between pod instances; data is replicated within a single AZ. | BDS |
| redundant-rwx** | Filestore - Enterprise | Regionally redundant storage for RWX volumes shared between pod instances; data is replicated to two zones in a region (Regional PD). | |
| blob storage | Cloud Storage buckets | Google Cloud Storage, optimized for storing massive amounts of unstructured data across AZs and regions. | Digital channels (image, files, upload), GIM data feed/GSP, Recordings (GVP), Telemetry, Voicemail |
- #The GKE storage class names are created by default. You can modify the storage class names based on your organizational needs.
- *RWO type storage is tested with the default CSI driver.
- **RWX type storage is tested with the Filestore CSI driver. This storage driver is not enabled by default and must be enabled in the GKE clusters. However, this configuration is available only in GKE 1.21.x releases. For more information on enabling the Filestore CSI driver, see the GKE documentation.
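Once the Filestore CSI driver is enabled, an RWX class such as standard-rwx can be expressed roughly as follows. This is a sketch only; the class name `custom-standard-rwx` is a placeholder, and the `network` parameter must match your VPC:

```
# Sketch of a GKE Filestore-backed RWX StorageClass (Basic HDD tier).
# Assumes the Filestore CSI driver is enabled on the cluster.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: custom-standard-rwx
provisioner: filestore.csi.storage.gke.io
parameters:
  tier: standard             # Basic HDD Filestore tier
  network: default           # VPC network name; adjust to your environment
volumeBindingMode: Immediate
allowVolumeExpansion: true
```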
File and disk storage for OpenShift
The following table provides the storage information for OpenShift:
| Storage Type | Description | Associated Services |
|---|---|---|
| Disk | Uses dynamically provisioned disks (which reside in a single AZ) to create an RWO volume that can be attached to a single pod. Genesys recommends SSD storage for production deployments. | GCXI*, GVP, Gplus-WFM, Interaction Server*, Pulse*, Tenant*, Voice Services*, WebRTC* |
| Files-local | Creates an RWX volume that can be attached to many pods across all the node pool AZs; similar to NFS. The local type means locally redundant storage, which replicates copies of data within a single AZ in a single region, so there is a risk of your Persistent Volume Claims (PVCs) becoming unavailable if a single region completely fails. It needs a guaranteed 1 IOPS per GB stored, with a minimum of 100 IOPS, while allowing bursting and higher throughput than standard HDD storage. Genesys recommends SSD storage for production deployments. Note: the minimum volume size needed is 100 GB. | BDS |
| Files-redundant | Creates an RWX volume that can be attached to many pods across all the node pool AZs. The redundant type means zone-redundant storage, which replicates copies of data across multiple AZs in a region. No IOPS guarantee is needed; similar to NFS. | CX Contact, Designer, GCXI, Gplus-WFM, GVP, GWS, Intelligent Workload Distribution, Pulse*, Tenant*, UCSX, WebRTC* |
| Object storage | Object storage optimized for storing massive amounts of unstructured data across AZs and regions; similar to Amazon S3, Azure Blob Storage, or Google Cloud Storage. | Digital Channels (image, files, upload), GIM data feed/GSP, Recordings (GVP), Telemetry, Voicemail |
- *Genesys Multicloud CX services marked with an asterisk use the disk space mainly for logging purposes. You can optionally configure them not to use the disk space. Refer to the related service-level guides for more information.
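The Files-local row above notes a 100 GB minimum volume size. A claim against such a class might look like the following sketch; the claim and class names (`bds-shared-data`, `files-local`) are placeholders for whatever your OpenShift cluster actually defines:

```
# Hypothetical RWX PVC honoring the 100 GB Files-local minimum.
# Names are illustrative; use the storage classes defined in your cluster.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: bds-shared-data
spec:
  accessModes:
    - ReadWriteMany          # RWX: shareable across pods in all AZs
  storageClassName: files-local
  resources:
    requests:
      storage: 100Gi         # Files-local minimum volume size
```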