View table: PEPrerequisites


Table structure:

  • productshort - String
  • Role - String
  • DisplayName - String
  • TocName - String
  • Dimension - String
  • Context - Wikitext
  • Product - String
  • Manual - String
  • UseCase - String
  • PEPageType - String
  • LimitationsText - Wikitext
  • HelmText - Wikitext
  • ThirdPartyText - Wikitext
  • StorageText - Wikitext
  • NetworkText - Wikitext
  • BrowserText - Wikitext
  • DependenciesText - Wikitext
  • GDPRText - Wikitext
  • IncludedServiceId - String

This table has 54 rows altogether.

Page productshort Role DisplayName TocName Dimension Context Product Manual UseCase PEPageType LimitationsText HelmText ThirdPartyText StorageText NetworkText BrowserText DependenciesText GDPRText IncludedServiceId
AUTH/Current/AuthPEGuide/Planning AUTH Before you begin Find out what to do before deploying Genesys Authentication. Genesys Authentication AuthPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 Genesys Authentication in Genesys Multicloud CX private edition is made up of three containers, one for each of its components:
  • gws-core-auth - Authentication API service
  • gws-ui-auth - Authentication UI service
  • gws-core-environment - Environment API service

The service also includes a Helm chart, which you must deploy to install all three containers for Genesys Authentication:

  • gauth

See Helm charts and containers for Authentication, Login, and SSO for the Helm chart version you must download for your release.

To download the Helm chart, navigate to the gauth folder in the JFrog repository. See Downloading your Genesys Multicloud CX containers for details.

Install the prerequisite dependencies listed in the Third-party services table before you deploy Genesys Authentication. Genesys Authentication uses PostgreSQL to store key/value pairs for the Authentication API and Environment API services. It uses Redis to cache data for the Authentication API service.

Ingress

Genesys Authentication supports both internal and external ingress with two ingress objects that are configured with the ingress and internal_ingress settings in the values.yaml file. See Configure Genesys Authentication for details about overriding Helm chart values.

  • ingress - External ingress for UIs and external API clients. External ingress can be public.
  • internal_ingress - Internal ingress for internal API clients. Internal ingress contains an extended list of API endpoints that are not available for external ingress. Internal ingress should not be public.

These ingress objects support Transport Layer Security (TLS) version 1.2. TLS is enabled by default and you can configure it by overriding the ingress.tls and internal_ingress.tls settings in values.yaml.

For example:
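A minimal sketch of what the TLS override in values.yaml might look like; the secret names and host names below are placeholder assumptions, not values from the product documentation:

```yaml
ingress:
  tls:
    - secretName: gauth-ingress-tls      # placeholder: secret holding the certificate and private key
      hosts:
        - gauth.example.com              # placeholder: must match ingress.frontend
internal_ingress:
  tls:
    - secretName: gauth-internal-tls     # placeholder
      hosts:
        - gauth-int.example.com          # placeholder: must match internal_ingress.frontend
```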

In the example above:

  • secretName is the certificate and private key to use for TLS. The secret is a prerequisite and must be created before you deploy Genesys Authentication, unless you have a certificate ClusterIssuer installed and configured in the Kubernetes cluster; in that case, the ClusterIssuer creates the secret.
  • hosts is a list of the fully qualified domain names that should use the certificate. The list must be the same as the value configured for ingress.frontend and internal_ingress.frontend.

Cookies

Genesys Authentication components use cookies to identify HTTP/HTTPS user sessions.

The Authentication UI supports the web browsers listed in the Browsers table.

Genesys Authentication must be deployed before other Genesys Multicloud CX private edition services. To complete provisioning the service, you must first deploy Web Services and Applications and the Tenant Service. For a look at the high-level deployment order, see Order of services deployment.
BDS/Current/BDSPEGuide/Planning BDS Before you begin Find out what to do before deploying Billing Data Service (BDS). Billing Data Service BDSPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167
  • In a private edition environment, Billing Data Service (BDS) currently supports deployments only on the specific Kubernetes platforms cited in About BDS. Deploying BDS in environments that are not officially tested and certified by Genesys is not supported. If your organization requires a special business case to deploy BDS in an uncertified or unsupported environment (for example, mixed-mode), contact your Genesys Account Representative or Customer Care.
For general information about downloading containers, see: Downloading your Genesys Multicloud CX containers.

To learn what Helm chart version you must download for your release, see Helm charts and containers for Billing Data Service

The BDS Helm chart defines a CronJob:

BDS Helm chart package [ bds-cronjob-<version>.tgz ]: Scripts in this container run the Extraction, Transformation, and Load (ETL) process according to the main configuration specified in a ConfigMap.

BDS queries data primarily from Genesys Info Mart, GVP Reporting Server, and the Framework Configuration Management Environment, and can transmit the data to the SFTP server.

Before deploying BDS, configure the following third-party components.

To deploy a BDS container on a node (or a worker computer), you need the following resources:
  • dual-core vCPU
  • 8 GB RAM
  • 40 GB PVC*
  • Database - BDS does not require its own database; it extracts data from the GIM database, which is PostgreSQL. BDS therefore supports any PostgreSQL version that GIM currently supports.

BDS creates the persistent volume claim (PVC) file storage to save the extracted and transformed files with current ETL time stamps.

  • The file share must be persistent and must be mounted inside the pod at the /mnt/fileshare folder. Note: The mount name cannot be changed once created.
  • The information and files necessary to start BDS between launches are stored on the PVC.
  • Data in the file share must be backed up with a 90-day retention period and appropriate data protection.
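The storage requirements above can be sketched as a Kubernetes PersistentVolumeClaim manifest; the claim name, access mode, and storage class are assumptions, not documented values:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: bds-fileshare          # placeholder name; the pod mounts this claim at /mnt/fileshare
spec:
  accessModes:
    - ReadWriteOnce            # assumed access mode
  resources:
    requests:
      storage: 40Gi            # matches the 40 GB PVC requirement above
```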
BDS interoperates with the following Genesys software applications to receive billing data:
  • Genesys Voice Platform Reporting Server (GVP RS) 9.0 or later
  • Genesys Info Mart 8.5.015.19 or later

Depending on the metrics that you need, you must ensure that the corresponding data sources are available in your environment for BDS to generate meaningful reports.

To build requests for main data sources, BDS obtains additional information from the following sources:

  • GVP Config Server
  • Genesys GWS
Content coming soon

For more information, see the "suite-level" Link to come documentation.

ContentAdmin/Internal/Small/Test5 ContentAdmin Test Prereq page for PE Intro statement summarizing this page. Internal Content Administration Small bf21dc7c-597d-4bbe-8df2-a2a64bd3f167
  • This release includes important security upgrades made to third-party software.
  • Important security improvements.
  • As of December 23, 2021, No results supports deployments on Google Kubernetes Engine (GKE) in Genesys Multicloud CX private edition, as part of the Early Adopter Program.
  • 20220330174606
Intro to third-party section Unstructured chunk Unstructured chunk Intro text for browser section. Unstructured chunk Intro text for GDPR section.
DES/Current/DESPEGuide/Planning DES Before you begin Find out what to do before deploying Designer. Designer DESPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 Designer currently supports multi-tenancy provided by the tenant Configuration Server. That is, each tenant should have a dedicated Configuration Server, and Designer can be shared across the multiple tenants.

Before you begin:

  1. Install Kubernetes. Refer to the Kubernetes documentation site for installation instructions. You can also refer to the Genesys Docker Deployment Guide for information on Kubernetes and High Availability.
  2. Install Helm according to the instructions outlined in the Helm documentation site.

After you complete the above mandatory procedures, return to this document to complete deployment of Designer and DAS as a service in a K8s cluster.

Download the Designer related Docker containers and Helm charts from the JFrog repository.

See Helm charts and containers for Designer for the Helm chart and container versions you must download for your release.

For more information on JFrog, refer to the Downloading your Genesys Multicloud CX containers topic in the Setting up Genesys Multicloud CX private edition document.

The following section lists the third-party prerequisites for Designer.
  • Kubernetes 1.19.x - 1.21.x
  • Helm 3.0
  • Docker
    • Used to store Designer and DAS Docker images in the local Docker registry.
  • Ingress Controller
    • If Designer and DAS are accessed from outside a K8s cluster, Genesys recommends deploying/configuring an ingress controller (for example, NGINX) if one is not already available. The blue-green deployment strategy also relies on the ingress rules.
    • The Designer UI requires session stickiness. Configure session stickiness in the annotations parameter in the values.yaml file during Designer installation.
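As an illustration of the session-stickiness requirement, the annotations parameter could carry the standard NGINX ingress controller affinity annotations; the values.yaml structure and cookie name below are assumptions:

```yaml
designer:
  ingress:
    annotations:
      nginx.ingress.kubernetes.io/affinity: "cookie"
      nginx.ingress.kubernetes.io/session-cookie-name: "designer-session"  # cookie name is an assumption
      nginx.ingress.kubernetes.io/session-cookie-expires: "3600"
```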

For information about setting up your Genesys Multicloud CX private edition platform, including Kubernetes, Helm, and other prerequisites, see Software requirements.

The following storage requirements are mandatory prerequisites:
  • Persistent Volumes (PVs)
    • Create persistent volumes for workspace storage (5 GB minimum) and logs (5 GB minimum)
    • Set the access mode for these volumes to ReadWriteMany.
    • The Designer manifest package includes a sample YAML file to create Persistent Volumes required for Designer and DAS.
    • Persistent volumes must be shared across multiple K8s nodes. Genesys recommends using NFS to create Persistent Volumes.
  • Shared file System - NFS
    • For production, deploy the NFS server as highly available (HA) to avoid single points of failure. It is also recommended that the NFS storage be deployed as a Disaster Recovery (DR) topology to achieve continuous availability if one region fails.
    • By default, Designer and DAS containers run as the Genesys user (uid:gid 500:500). For this reason, the shared volume must have permissions that allow write access to uid:gid 500:500. The optimal method is to change ownership of the NFS server host path to the Genesys user: chown -R genesys:genesys.
    • The Designer package includes a sample YAML file to create an NFS server. Use this only for demo/lab purposes.
    • Azure Files storage - If you opt for cloud storage, Azure Files storage is an option to consider, with the following requirements:
      • Zone-redundant storage (ZRS) for RWX volumes, with data replicated across zones and shared across multiple pods
      • Provisioned capacity: 1 TiB
      • Baseline IO/s: 1424
      • Burst IO/s: 4000
      • Egress rate: 121.4 MiBytes/s
      • Ingress rate: 81.0 MiBytes/s
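A minimal sketch of an NFS-backed Persistent Volume that meets the workspace requirements above; the volume name, server address, and export path are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: designer-workspace-pv    # placeholder name
spec:
  capacity:
    storage: 5Gi                 # 5 GB minimum for workspace storage, per the requirements above
  accessModes:
    - ReadWriteMany              # required access mode
  nfs:
    server: nfs.example.com      # placeholder NFS server
    path: /exports/designer      # placeholder export path
```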


  • If Designer and DAS are accessed from outside a K8s cluster, Genesys recommends deploying/configuring an ingress controller (for example, NGINX) if one is not already available. The blue-green deployment strategy also relies on the ingress rules.
  • The Designer UI requires session stickiness. Configure session stickiness in the annotations parameter in the values.yaml file during Designer installation.
Unless otherwise noted, Designer supports the latest versions of the following browsers:
  • Mozilla Firefox
  • Google Chrome (see Important, below)
  • Microsoft Edge
  • Apple Safari

Internet Explorer (all versions) is not supported.

Important
For Google Chrome, Designer supports the n-1 version of the browser, i.e. the version prior to the latest release.

Minimum display resolution

The minimum display resolution supported by Designer is 1920 x 1080.

Third-party cookies

Some features in Designer require the use of third-party cookies. Browsers must allow third-party cookies to be stored for Designer to work properly.

The following Genesys dependencies are mandatory prerequisites:
  • Genesys Web Services (GWS) 9.x
    • Configure GWS to work with a compatible version of Configuration Server.
  • Other Genesys Components
    • Authentication Service
    • Voice Microservices

For the order in which the Genesys services must be deployed, refer to the Order of services deployment topic in the Setting up Genesys Multicloud CX private edition document.

Designer supports the European Union's General Data Protection Regulation (GDPR) requirements and provides customers the ability to export or delete sensitive data using ElasticSearch APIs and other third-party tools.

For the purposes of GDPR compliance, Genesys is a data processor on behalf of customers who use Designer. Customers are the data controllers of the personal data that they collect from their end customers, that is, the data subjects. Designer Analytics can potentially store data collected from end users in ElasticSearch. This data can be queried by certain fields that are relevant to GDPR. Once identified, the data can be exported or deleted using ElasticSearch APIs and other third-party tools that customers find suitable for their needs.

In particular, the following SDR fields may contain PII or sensitive data that customers can choose to delete or export as required:

  • ANI - This SDR field contains the customer's phone number used to make voice calls handled by Designer applications.
  • variables.Contact - This SDR field is an object and can have multiple properties, such as name, email address, and other contact details.
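A hypothetical variables.Contact object might look like the following; the property names and values are illustrative assumptions, not a documented schema:

```json
{
  "variables": {
    "Contact": {
      "name": "Jane Doe",
      "emailAddress": "jane.doe@example.com",
      "phoneNumber": "+15551234567"
    }
  }
}
```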

  • Application variables defined in the main application flow are also stored in the SDR under the variables object. These variables depend on application logic and may capture sensitive information, intentionally or unintentionally. Genesys recommends marking such variables secure (see Securing Variables in Designer Help for more details). But if they are captured in analytics, they can also be used to identify candidate SDRs for deletion or retrieval. The same applies to userdata key-value pairs attached to interaction data, which are captured in the calldata object in the SDR.
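As a sketch of the deletion workflow described above, a delete-by-query request against the analytics index might look like this; the index name and field name are assumptions:

```json
POST /sdr-analytics/_delete_by_query
{
  "query": {
    "term": { "ani": "+15551234567" }
  }
}
```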
Important
It is the customer's responsibility to remove any PII or sensitive data within 21 days or less, if required by General Data Protection Regulation (GDPR) standards.
For general information about Genesys support for GDPR compliance, see General Data Protection Regulation.
Draft:AUTH/Current/AuthPEGuide/Planning Draft:AUTH Before you begin Find out what to do before deploying Genesys Authentication. Genesys Authentication AuthPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 Genesys Authentication in Genesys Multicloud CX private edition is made up of three containers, one for each of its components:
  • gws-core-auth - Authentication API service
  • gws-ui-auth - Authentication UI service
  • gws-core-environment - Environment API service

The service also includes a Helm chart, which you must deploy to install all three containers for Genesys Authentication:

  • gauth

See Helm charts and containers for Authentication, Login, and SSO for the Helm chart version you must download for your release.

To download the Helm chart, navigate to the gauth folder in the JFrog repository. See Downloading your Genesys Multicloud CX containers for details.

Install the prerequisite dependencies listed in the Third-party services table before you deploy Genesys Authentication. Genesys Authentication uses PostgreSQL to store key/value pairs for the Authentication API and Environment API services. It uses Redis to cache data for the Authentication API service.

Ingress

Genesys Authentication supports both internal and external ingress with two ingress objects that are configured with the ingress and internal_ingress settings in the values.yaml file. See Configure Genesys Authentication for details about overriding Helm chart values.

  • ingress - External ingress for UIs and external API clients. External ingress can be public.
  • internal_ingress - Internal ingress for internal API clients. Internal ingress contains an extended list of API endpoints that are not available for external ingress. Internal ingress should not be public.

These ingress objects support Transport Layer Security (TLS) version 1.2. TLS is enabled by default and you can configure it by overriding the ingress.tls and internal_ingress.tls settings in values.yaml.

For example:
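A minimal sketch of what the TLS override in values.yaml might look like; the secret names and host names below are placeholder assumptions, not values from the product documentation:

```yaml
ingress:
  tls:
    - secretName: gauth-ingress-tls      # placeholder: secret holding the certificate and private key
      hosts:
        - gauth.example.com              # placeholder: must match ingress.frontend
internal_ingress:
  tls:
    - secretName: gauth-internal-tls     # placeholder
      hosts:
        - gauth-int.example.com          # placeholder: must match internal_ingress.frontend
```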

In the example above:

  • secretName is the certificate and private key to use for TLS. The secret is a prerequisite and must be created before you deploy Genesys Authentication, unless you have a certificate ClusterIssuer installed and configured in the Kubernetes cluster; in that case, the ClusterIssuer creates the secret.
  • hosts is a list of the fully qualified domain names that should use the certificate. The list must be the same as the value configured for ingress.frontend and internal_ingress.frontend.

Cookies

Genesys Authentication components use cookies to identify HTTP/HTTPS user sessions.

The Authentication UI supports the web browsers listed in the Browsers table.

Genesys Authentication must be deployed before other Genesys Multicloud CX private edition services. To complete provisioning the service, you must first deploy Web Services and Applications and the Tenant Service. For a look at the high-level deployment order, see Order of services deployment.
Draft:BDS/Current/BDSPEGuide/Planning Draft:BDS Before you begin Find out what to do before deploying Billing Data Service (BDS). Billing Data Service BDSPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167
  • In a private edition environment, Billing Data Service (BDS) currently supports deployments only on the specific Kubernetes platforms cited in About BDS. Deploying BDS in environments that are not officially tested and certified by Genesys is not supported. If your organization requires a special business case to deploy BDS in an uncertified or unsupported environment (for example, mixed-mode), contact your Genesys Account Representative or Customer Care.
For general information about downloading containers, see: Downloading your Genesys Multicloud CX containers.

To learn what Helm chart version you must download for your release, see Helm charts and containers for Billing Data Service

The BDS Helm chart defines a CronJob:

BDS Helm chart package [ bds-cronjob-<version>.tgz ]: Scripts in this container run the Extraction, Transformation, and Load (ETL) process according to the main configuration specified in a ConfigMap.

BDS queries data primarily from Genesys Info Mart, GVP Reporting Server, and the Framework Configuration Management Environment, and can transmit the data to the SFTP server.

Before deploying BDS, configure the following third-party components.

To deploy a BDS container on a node (or a worker computer), you need the following resources:
  • dual-core vCPU
  • 8 GB RAM
  • 40 GB PVC*
  • Database - BDS does not require its own database; it extracts data from the GIM database, which is PostgreSQL. BDS therefore supports any PostgreSQL version that GIM currently supports.

BDS creates the persistent volume claim (PVC) file storage to save the extracted and transformed files with current ETL time stamps.

  • The file share must be persistent and must be mounted inside the pod at the /mnt/fileshare folder. Note: The mount name cannot be changed once created.
  • The information and files necessary to start BDS between launches are stored on the PVC.
  • Data in the file share must be backed up with a 90-day retention period and appropriate data protection.
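The storage requirements above can be sketched as a Kubernetes PersistentVolumeClaim manifest; the claim name, access mode, and storage class are assumptions, not documented values:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: bds-fileshare          # placeholder name; the pod mounts this claim at /mnt/fileshare
spec:
  accessModes:
    - ReadWriteOnce            # assumed access mode
  resources:
    requests:
      storage: 40Gi            # matches the 40 GB PVC requirement above
```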
BDS interoperates with the following Genesys software applications to receive billing data:
  • Genesys Voice Platform Reporting Server (GVP RS) 9.0 or later
  • Genesys Info Mart 8.5.015.19 or later

Depending on the metrics that you need, you must ensure that the corresponding data sources are available in your environment for BDS to generate meaningful reports.

To build requests for main data sources, BDS obtains additional information from the following sources:

  • GVP Config Server
  • Genesys GWS
Content coming soon

For more information, see the "suite-level" Link to come documentation.

Draft:ContentAdmin/Boilerplate/PEGuide/Planning Draft:ContentAdmin Before you begin Find out what to do before deploying <service_name>. Internal Content Administration PEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167
List any limitations or assumptions related to the deployment.
List the containers and the <service_names> they include. Provide any specific information about the container and its Helm charts. Link to the "suite-level" doc for common information about how to download the Helm charts in Jfrog Edge: Downloading your Genesys Multicloud CX containers

See Helm charts and containers for <service_name> for the Helm chart version you must download for your release.

For information about how to download the Helm charts, see Downloading your Genesys Multicloud CX containers.

List any third-party services that are required (both common across Genesys Multicloud CX private edition and specific to <service_name>).
Describe storage requirements, including:
  • Size
  • Type (HDD, SSD, NVMe)
  • IOPS
  • Latency sensitive (local vs netapp disk)
  • Specific requirements for third-party services, including HA setup, connectivity, expected sizing and scaling models
  • List which data/PVC storage are critical and need to be backed up for redundancy and data protection.
Describe network requirements, including:
  • Required properties for ingress, such as:
    • Cookies usage
    • Header requirements (client IP and redirect, passthrough)
    • Session stickiness
    • Allowlisting (optional)
    • TLS (optional)
  • Cross-region bandwidth
  • External connections from the Kubernetes cluster to other systems. This includes connecting to Genesys Cloud CX for hybrid services (such as AI, WEM) as well as "mixed" environments where some components are still deployed as VMs. Note that mixed environments are mainly for transition periods when customers migrate from a classic premise environment to Genesys Multicloud CX private edition.
  • WAF Rules (specific only for services handling internet traffic)
  • Pod Security Policy
  • TLS/SSL Certificate configurations
List supported browsers/versions for the UI, if applicable.
Describe any dependencies <service_name> has on other Genesys services. Include a link to the "suite-level" documentation for the order in which services must be deployed. For example, the Auth and GWS services must be deployed and running before deploying the WWE service. Order of services deployment
Provide information about GDPR support. Include a link to the "suite-level" documentation. Link to come
Draft:ContentAdmin/Internal/Small/Test5 Draft:ContentAdmin Test Prereq page for PE Intro statement summarizing this page. Internal Content Administration Small bf21dc7c-597d-4bbe-8df2-a2a64bd3f167
  • This release includes important security upgrades made to third-party software.
  • Important security improvements.
  • As of December 23, 2021, No results supports deployments on Google Kubernetes Engine (GKE) in Genesys Multicloud CX private edition, as part of the Early Adopter Program.
  • 20220330174606
Intro to third-party section Unstructured chunk Unstructured chunk Intro text for browser section. Unstructured chunk Intro text for GDPR section.
Draft:DES/Current/DESPEGuide/Planning Draft:DES Before you begin Find out what to do before deploying Designer. Designer DESPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 Designer currently supports multi-tenancy provided by the tenant Configuration Server. That is, each tenant should have a dedicated Configuration Server, and Designer can be shared across the multiple tenants.

Before you begin:

  1. Install Kubernetes. Refer to the Kubernetes documentation site for installation instructions. You can also refer to the Genesys Docker Deployment Guide for information on Kubernetes and High Availability.
  2. Install Helm according to the instructions outlined in the Helm documentation site.

After you complete the above mandatory procedures, return to this document to complete deployment of Designer and DAS as a service in a K8s cluster.

Download the Designer related Docker containers and Helm charts from the JFrog repository.

See Helm charts and containers for Designer for the Helm chart and container versions you must download for your release.

For more information on JFrog, refer to the Downloading your Genesys Multicloud CX containers topic in the Setting up Genesys Multicloud CX private edition document.

The following section lists the third-party prerequisites for Designer.
  • Kubernetes 1.19.x - 1.21.x
  • Helm 3.0
  • Docker
    • Used to store Designer and DAS Docker images in the local Docker registry.
  • Ingress Controller
    • If Designer and DAS are accessed from outside a K8s cluster, Genesys recommends deploying/configuring an ingress controller (for example, NGINX) if one is not already available. The blue-green deployment strategy also relies on the ingress rules.
    • The Designer UI requires session stickiness. Configure session stickiness in the annotations parameter in the values.yaml file during Designer installation.
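As an illustration of the session-stickiness requirement, the annotations parameter could carry the standard NGINX ingress controller affinity annotations; the values.yaml structure and cookie name below are assumptions:

```yaml
designer:
  ingress:
    annotations:
      nginx.ingress.kubernetes.io/affinity: "cookie"
      nginx.ingress.kubernetes.io/session-cookie-name: "designer-session"  # cookie name is an assumption
      nginx.ingress.kubernetes.io/session-cookie-expires: "3600"
```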

For information about setting up your Genesys Multicloud CX private edition platform, including Kubernetes, Helm, and other prerequisites, see Software requirements.

The following storage requirements are mandatory prerequisites:
  • Persistent Volumes (PVs)
    • Create persistent volumes for workspace storage (5 GB minimum) and logs (5 GB minimum)
    • Set the access mode for these volumes to ReadWriteMany.
    • The Designer manifest package includes a sample YAML file to create Persistent Volumes required for Designer and DAS.
    • Persistent volumes must be shared across multiple K8s nodes. Genesys recommends using NFS to create Persistent Volumes.
  • Shared file System - NFS
    • For production, deploy the NFS server as highly available (HA) to avoid single points of failure. It is also recommended that the NFS storage be deployed as a Disaster Recovery (DR) topology to achieve continuous availability if one region fails.
    • By default, Designer and DAS containers run as the Genesys user (uid:gid 500:500). For this reason, the shared volume must have permissions that allow write access to uid:gid 500:500. The optimal method is to change ownership of the NFS server host path to the Genesys user: chown -R genesys:genesys.
    • The Designer package includes a sample YAML file to create an NFS server. Use this only for demo/lab purposes.
    • Azure Files storage - If you opt for cloud storage, Azure Files storage is an option to consider, with the following requirements:
      • Zone-redundant storage (ZRS) for RWX volumes, with data replicated across zones and shared across multiple pods
      • Provisioned capacity: 1 TiB
      • Baseline IO/s: 1424
      • Burst IO/s: 4000
      • Egress rate: 121.4 MiBytes/s
      • Ingress rate: 81.0 MiBytes/s
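A minimal sketch of an NFS-backed Persistent Volume that meets the workspace requirements above; the volume name, server address, and export path are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: designer-workspace-pv    # placeholder name
spec:
  capacity:
    storage: 5Gi                 # 5 GB minimum for workspace storage, per the requirements above
  accessModes:
    - ReadWriteMany              # required access mode
  nfs:
    server: nfs.example.com      # placeholder NFS server
    path: /exports/designer      # placeholder export path
```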


  • If Designer and DAS are accessed from outside a K8s cluster, Genesys recommends deploying/configuring an ingress controller (for example, NGINX) if one is not already available. The blue-green deployment strategy also relies on the ingress rules.
  • The Designer UI requires session stickiness. Configure session stickiness in the annotations parameter in the values.yaml file during Designer installation.
Unless otherwise noted, Designer supports the latest versions of the following browsers:
  • Mozilla Firefox
  • Google Chrome (see Important, below)
  • Microsoft Edge
  • Apple Safari

Internet Explorer (all versions) is not supported.

Important
For Google Chrome, Designer supports the n-1 version of the browser, i.e. the version prior to the latest release.

Minimum display resolution

The minimum display resolution supported by Designer is 1920 x 1080.

Third-party cookies

Some features in Designer require the use of third-party cookies. Browsers must allow third-party cookies to be stored for Designer to work properly.

The following Genesys dependencies are mandatory prerequisites:
  • Genesys Web Services (GWS) 9.x
    • Configure GWS to work with a compatible version of Configuration Server.
  • Other Genesys Components
    • Authentication Service
    • Voice Microservices

For the order in which the Genesys services must be deployed, refer to the Order of services deployment topic in the Setting up Genesys Multicloud CX private edition document.

Designer supports the European Union's General Data Protection Regulation (GDPR) requirements and provides customers the ability to export or delete sensitive data using ElasticSearch APIs and other third-party tools.

For the purposes of GDPR compliance, Genesys is a data processor on behalf of customers who use Designer. Customers are the data controllers of the personal data that they collect from their end customers, that is, the data subjects. Designer Analytics can potentially store data collected from end users in ElasticSearch. This data can be queried by certain fields that are relevant to GDPR. Once identified, the data can be exported or deleted using ElasticSearch APIs and other third-party tools that customers find suitable for their needs.

In particular, the following SDR fields may contain PII or sensitive data that customers can choose to delete or export as required:

  • ANI - This SDR field contains the customer's phone number used to make voice calls handled by Designer applications.
  • variables.Contact - This SDR field is an object and can have multiple properties, such as name, email address, and other contact details.
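A hypothetical variables.Contact object might look like the following; the property names and values are illustrative assumptions, not a documented schema:

```json
{
  "variables": {
    "Contact": {
      "name": "Jane Doe",
      "emailAddress": "jane.doe@example.com",
      "phoneNumber": "+15551234567"
    }
  }
}
```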

  • Application variables defined in the main application flow are also stored in the SDR under the variables object. These variables depend on application logic and may capture sensitive information, intentionally or unintentionally. Genesys recommends marking such variables secure (see Securing Variables in Designer Help for more details). But if they are captured in analytics, they can also be used to identify candidate SDRs for deletion or retrieval. The same applies to userdata key-value pairs attached to interaction data, which are captured in the calldata object in the SDR.
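As a sketch of the deletion workflow described above, a delete-by-query request against the analytics index might look like this; the index name and field name are assumptions:

```json
POST /sdr-analytics/_delete_by_query
{
  "query": {
    "term": { "ani": "+15551234567" }
  }
}
```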
Important
It is the customer's responsibility to remove any PII or sensitive data within 21 days or less, if required by General Data Protection Regulation (GDPR) standards.
For general information about Genesys support for GDPR compliance, see General Data Protection Regulation.
Draft:GAWFM/Current/GAWFMPEGuide/Planning Draft:GAWFM Before you begin Find out what to do before deploying Gplus Adapter for WFM. Gplus Adapter for WFM GAWFMPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 Persistent storage access to recovery logs. For information about how to download the Helm charts, see Downloading your Genesys Multicloud CX containers.

Containers:

The GPlus Adapter for WFM has the following containers:

  • GPlus Adapter provisioning container
  • GPlus Adapter service container for Aspect
  • GPlus Adapter service container for Nice
  • GPlus Adapter service container for Teleopti
  • GPlus Adapter service container for Verint

Helm charts:

  • GPlus Adapter service deployment – gpluswfm-100.0.xx.tgz
List any third-party services that are required (both common across Genesys Multicloud CX private edition and specific to <service_name>).

Resource recommendations

  • Memory consumption is proportional to the size of the customer’s configuration server database.
  • Standard operating resources are substantially lower; however, spikes can occur due to environment issues, such as server disconnects, which can cause significant memory and CPU load.
Storage estimates
Agents | Memory | CPUs
< 5k concurrent | 4 GB | 1 CPU
5k - 10k concurrent | 8 GB | 1 CPU
10k+ | 12 GB | 2 CPUs


These estimates were compiled under the following scenarios:
Media type | TServers only | TServers and Interaction Servers
Voice calls | 60 interactions / second | 20 interactions / second
Emails | - | 20 emails / second
  • These volumes are per historical stream. For example, 5 interactions per second with three reporting streams should be considered as 15 interactions per second. Note: RTA streams do not contribute substantially to resource requirements.
  • These tests were conducted with 8 GB RAM, 1 CPU at a speed of 2.7GHz, 100k concurrent agents, and clean routing (mostly an issue for multimedia interactions on Interaction Server).

Storage sizing calculator

The adapter requires storage of recovery logs for 1 week. The recovery logging for the adapter stores the full data for every event it receives from Configuration Server, Interaction Server, and TServer. Customers need to provide enough storage to handle 7 days of TServer and Interaction Server event data. In addition, the adapter downloads a fresh copy of all CME data daily from Configuration Server.  A rough approximation for how much storage is required can be expressed with the following formula:

Storage = 7 days × (EventSize × #Events + CMESize) × Compression × Padding

Storage: The amount of storage required by the adapter.

EventSize: The average size of TServer/Interaction Server events, in bytes. This depends on the business rules of the contact center and how much user data is attached to events. A typical value is in the range of 1-5 KB.

#Events: The average number of TServer/Interaction Server events processed in a typical day.  This can vary widely depending on the complexity and size of the contact center as well as call volume.

CMESize: The aggregate size of all CME data being monitored by the adapter.  The amount of memory taken up by the Configuration Server application can be used as an approximation.

Compression: A multiplier representing the amount of compression of the gzip log files, typically 0.05-0.10.

Padding: A multiplier to provide extra space in case of periods of high activity or unexpected restarts. A value of 2 is recommended.

For example:

To calculate the amount of storage required by a contact center with an average EventSize of 3 KB, 5 million events per day, and a CMESize of 1 GB:

 Storage = 7 days × (3 KB × 5,000,000 + 1 GB) × 0.10 × 2 ≈ 22.4 GB per week

Other storage requirements

The speed and throughput requirements for recovery log storage depend on the amount of logging calculated in the previous section. If the adapter writes 22.4 GB per week, it writes about 3.2 GB per day; if most of the logging takes place during the 16 busiest hours of the day, the adapter needs to write at a rate of:

Rate = 3.2 GB ÷ 16 hr ≈ 56 KB/s

This kind of throughput can be handled by most disk storage providers.
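To make the sizing concrete, the storage formula above can be evaluated in a few lines of code. This is an illustrative sketch: the function name and its defaults (0.10 compression, 2× padding, a 16-hour busy window, decimal units) are our assumptions, not part of the adapter.

```python
# Sketch of the recovery-log storage formula from the text:
#   Storage = 7 days × (EventSize × #Events + CMESize) × Compression × Padding

def weekly_storage_gb(event_kb, events_per_day, cme_gb,
                      compression=0.10, padding=2, days=7):
    """Return (weekly storage in GB, sustained write rate in KB/s).

    Assumes decimal units (1 GB = 1e6 KB) and that most logging happens
    in the 16 busiest hours of the day (illustrative assumptions).
    """
    daily_gb = (event_kb * events_per_day / 1e6 + cme_gb) * compression * padding
    rate_kb_s = daily_gb * 1e6 / (16 * 3600)  # spread over the 16 busy hours
    return days * daily_gb, rate_kb_s

# Worked example: 3 KB events, 5 million events/day, 1 GB of CME data.
weekly, rate = weekly_storage_gb(3, 5_000_000, 1)
```

With the example inputs this yields roughly 22.4 GB per week and a sustained write rate of about 56 KB/s, matching the worked example.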

CNI for Direct Pod Routing

The GPlus Adapter does not have any special requirements for CNI networking.

Ingress

The GPlus Adapter does not expose any http/https endpoints for ingress.

Subnet sizing

The GPlus Adapter does not support deployments with multiple replicas and so requires only one IP address per instance.

External Connections

The GPlus WFM Adapter requires connections to Genesys Configuration Server, TServer and Interaction Server in order to monitor state information for interactions, agents, and other configuration objects.  The adapter may also have optional connections to WFM FTP servers for transferring reports.  For RTA streaming, the adapter may either act as a server for the various WFM RTA clients to connect to (applies to vendors Teleopti, Nice-IEX, Verint), or as a client in the case of Aspect.  Prometheus metrics are exposed on the /metrics http endpoint set up on the adapter, although this endpoint is not exposed to the internet.

External connections
Client | Network Type | Client Network | Server | Server Network | Protocol | Notes
Gplus WFM Adapter | Any | Any | TServer | Any | TCP | 0 or more, plus backup servers
Gplus WFM Adapter | Any | Any | Interaction Server | Any | TCP | 0 or more, plus backup servers
Gplus WFM Adapter | Any | Any | Config Server | Any | TCP | 1, plus any backup servers
Gplus WFM Adapter | Any | Any | WFM FTP Server | Any | TCP | Configurable protocol, can be either or both
WFM RTA Server | Any | Any | Gplus WFM Adapter | Any | TCP | 0 or more. Applies to NICE, Teleopti, and Verint adapters.
Gplus WFM Adapter | Any | Any | WFM RTA Server | Any | TCP | 0 or more. Applies only to Aspect adapter.
Prometheus Logging | Any | Any | Gplus WFM Adapter | Any | HTTP or HTTPS |
N/A
The following Genesys services must be deployed and running before you deploy the Gplus Adapter for WFM. For the high-level deployment order, see Order of services deployment.
  • SIP server
  • T-Server
  • Config Server
Draft:GVP/Current/GVPPEGuide/Planning Draft:GVP Before you begin Find out what to do before deploying Genesys Voice Platform. Genesys Voice Platform GVPPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167
  • Third-party integrations like ASR/TTS or recording use cases are not supported in this initial release.
  • Resource Manager does not use gateway LRG configurations. Instead, it uses the contact center ID coming from SIP Server as gvp-tenant-id in the INVITE message to identify the tenant and pick the IVR Profiles.
  • Only single MCP LRG is supported per GVP deployment.
  • Only the specific component configuration options documented in Helm values.yaml overrides can be modified. Other configuration options can't be changed.
  • DID/DID groups are managed as part of Designer applications (Applications).
  • SIP TLS / SRTP are currently not supported.
You must download the GVP-related Docker containers and Helm charts from the JFrog repository. For Docker container and Helm chart versions, refer to Helm charts and containers for Genesys Voice Platform.

For more information on JFrog, refer to the Downloading your Genesys Multicloud CX containers topic in the Setting up Genesys Multicloud CX private edition document.

Media Control Platform

Storage requirement for production (min)

Persistent Volume | Size | Type | IOPS | Functionality | Container | Critical | Backup needed
recordings-volume | 100Gi | RWO | high | Storing recordings, dual AZ | gvp-mcp, rup | Y | Y
rup-volume | 40Gi | RWO | high | Storing recordings temporarily, dual AZ | rup | Y | Y
log-pvc | 50Gi | RWO | medium | Storing log files | gvp-mcp | Y | Y

Storage requirements for Sandbox

Persistent Volume | Size | Type | IOPS | Functionality | Container | Critical | Backup needed
recordings-volume | 50Gi | RWO | high | Storing recordings, dual AZ | gvp-mcp, rup | Y | Y
rup-volume | 20Gi | RWO | high | Storing recordings temporarily, dual AZ | rup | Y | Y
log-pvc | 25Gi | RWO | medium | Storing log files | gvp-mcp | Y | Y

Resource Manager

Storage requirement for production (min)

Persistent Volume | Min Size | Type | IOPS | Functionality | Container | Critical | Backup needed
billingpvc | 20Gi | RWO | high | Billing | gvp-rm | Y | Y
log-pvc | 50Gi | RWO | medium | Storing log files | gvp-rm | Y | Y

Storage requirements for Sandbox

Persistent Volume | Min Size | Type | IOPS | Functionality | Container | Critical | Backup needed
billingpvc | 20Gi | RWO | high | Billing | gvp-rm | Y | Y
log-pvc | 10Gi | RWO | medium | Storing log files | gvp-rm | Y | Y

Service Discovery

Not applicable

Reporting Server

Storage requirement for production (min)

Persistent Volume | Min Size | Type | IOPS | Functionality | Container | Critical | Backup needed
billing-pvc | 20Gi | RWO | high | Stores ActiveMQ data and config information | gvp-rs | Y | Y

Storage requirement for Sandbox

Persistent Volume | Min Size | Type | IOPS | Functionality | Container | Critical | Backup needed
billing-pvc | 10Gi | RWO | high | Stores ActiveMQ data and config information | gvp-rs | Y | Y

GVP Configuration Server

Not applicable

Media Control Platform

Ingress

Not applicable

HA/DR

MCP is deployed with autoscaling in all regions. For more details, see the section Auto-scaling.

Calls are routed to active MCPs by GVP Resource Manager (RM); if an MCP instance terminates, calls are routed to a different MCP instance.

Cross-region bandwidth

MCPs are not expected to make cross-region requests in normal operation.

External connections

Not applicable

Pod Security Policy

All containers run as the genesys user (500), a non-root user.

SMTP Settings

Not applicable

TLS/SSL Certificates configurations

Not applicable

Resource Manager

Ingress

Not applicable

HA/DR

Resource Manager is deployed as an active-active pair.

Cross-region bandwidth

Resource Manager is deployed per region. There is no cross-region deployment.

External connections

Not applicable

Pod Security Policy

All containers run as the genesys user (500), a non-root user.

SMTP Settings

Not applicable

TLS/SSL Certificates configurations

Not applicable

Service Discovery

Ingress

Not applicable

HA/DR

Service Discovery is a singleton service that is restarted if it shuts down unexpectedly or becomes unavailable.

Cross-region bandwidth

Service Discovery is not expected to make cross-region requests in normal operation.

External connections

Not applicable

Pod Security Policy

All containers run as the genesys user (500), a non-root user.

SMTP Settings

Not applicable

TLS/SSL Certificates configurations

Not applicable

Reporting Server

Ingress

Not applicable

HA/DR

Reporting Server is deployed as a single pod service.

Cross-region bandwidth

Reporting Server is deployed per region. There is no cross-region deployment.

External connections

Not applicable

Pod Security Policy

All containers run as the genesys user (500), a non-root user.

SMTP Setting

Not applicable

TLS/SSL Certificates configurations

Not applicable

GVP Configuration Server

Ingress

Not applicable

HA/DR

GVP Configuration Server is deployed as a singleton. If the GVP Configuration Server crashes, a new pod is created. GVP services continue to handle calls while GVP Configuration Server is unavailable; only new configuration changes, such as new MCP pods, are not available.

Cross-region bandwidth

GVP Configuration Server is not expected to make cross-region requests in normal operation.

External connections

External service | Functionality
PostgreSQL | Database

Pod Security Policy

All containers run as the genesys user (500), a non-root user.

SMTP Settings

Not applicable

TLS/SSL Certificates configurations

Not applicable

N/A

Media Control Platform

Service | Functionality
Consul | The Consul service must be deployed before deploying MCP for proper service registration in GVP Configuration Server and RM.

Resource Manager

Service | Functionality
GVP Configuration Server | GVP Configuration Server must be deployed before deploying RM for it to work properly.

Service Discovery

Service | Functionality
Consul | The Consul service must be deployed before deploying Service Discovery for proper service registration in GVP Configuration Server and Resource Manager.

Reporting Server

Service | Functionality
GVP Configuration Server | GVP Configuration Server must be deployed before deploying RS for it to work properly.

GVP Configuration Server

N/A

This section describes product-specific aspects of Genesys Voice Platform support for the European Union's General Data Protection Regulation (GDPR) in premise deployments. For general information about Genesys support for GDPR compliance, see General Data Protection Regulation.

Warning

Disclaimer: The information contained here is not considered final. This document will be updated with additional technical information.

Data Retention Policies

GVP has configurable retention policies that allow data to expire. GVP allows aggregating data for items like peak and call volume reporting; the aggregated data is anonymous. Detailed call detail records include DNIS and ANI data. Voice Application Reporter (VAR) data could potentially contain personal data and would have to be deleted when requested. Log data files can contain sensitive information (possibly masked), so the data must be rotated/expired frequently to meet the needs of GDPR.

Configuration Settings

Media Server

Media Server is capable of storing data and sending alarms which can potentially contain sensitive information, but by default, the data will typically be automatically cleansed (by the log rollover process) within 40 days.

The location of these files can be configured in the GVP Media Control Platform Configuration [default paths are shown below]:

  • vxmli:recordutterance-path = $InstallationRoot$/utterance/
  • vxmli:recording-basepath = $InstallationRoot$/record/
  • Netann:record-basepath = $InstallationRoot$/record
  • msml:cpd-record-basepath = $InstallationRoot$/record/
  • msml:record-basepath = $InstallationRoot$
  • msml:record-irrecoverablerecordpostdir = $InstallationRoot$/cache/record/failed
  • mpc:recordcachedir = $InstallationRoot$/cache/record
  • calllog:directory = $InstallationRoot$/callrec/

Log files and temporary files can be saved.

The location of these files can be configured in the GVP Media Control Platform Configuration [default paths are shown below]:

  • vxmli:logdir = $InstallationRoot$/logs/
  • vxmli:tmpdir = $InstallationRoot$/tmp/
  • vxmli:directories-save_tempfiles = $InstallationRoot$/tmp/

Note: Changing the default values of any of the above MCP options is not supported in the initial private edition release.

Also, additional sinks are available where alarms and potentially sensitive information can be captured. See Table 6 and Appendix H of the Genesys Voice Platform User’s Guide for more information. The metrics can be configured in the GVP Media Control Platform configuration:

  • ems.log_sinks = MFSINK|DATAC|TRAPSINK
  • ems:metricsconfig-DATAC = *
  • ems:dc-default-metricsfilter = 0-16,18,25,35,36,41,52-55,74,128,136-141
  • ems.metricsconfig.MFSINK = 0-16,18-41,43,52-56,72-74,76-81,127-129,130,132-141,146-152

GVP Resource Manager

Resource Manager is capable of storing data and sending alarms that potentially contain sensitive information, but by default, the data is typically cleansed automatically (by the log rollover process) within 40 days.

Customers are advised to understand the GVP logging (for all components) and understand the sinks (destinations) for information which the platform can potentially capture. See Table 6 and Appendix H of the Genesys Voice Platform User’s Guide for more information.

GVP Reporting Server

The Reporting Server is capable of storing/sending alarms and potentially sensitive information, but by default, these components process but do not store consumer PII. Customers are advised to understand the GVP logging (for all components) and understand the sinks (destinations) for information which the platform can potentially capture. See Table 6 and Appendix H of the Genesys Voice Platform User’s Guide for more information.

By default, Reporting Server is designed to collect statistics and other user information. Retention period of this information is configurable, with most data stored for less than 40 days. Customers should work with their application designers to understand what information is captured as part of the application, and, whether or not the data could be considered sensitive.

Customers can change these settings as needed by using Helm chart overrides in values.yaml.

Data Retention Specific Settings

  • rs.db.retention.operations.daily.default: "40"
  • rs.db.retention.operations.monthly.default: "40"
  • rs.db.retention.operations.weekly.default: "40"
  • rs.db.retention.var.daily.default: "40"
  • rs.db.retention.var.monthly.default: "40"
  • rs.db.retention.var.weekly.default: "40"
  • rs.db.retention.cdr.default: "40"

Identifying Sensitive Information for Processing

The following example demonstrates how to find this information in the Reporting Server database, for a case where Session_ID is considered sensitive:

  • select * from dbo.CUSTOM_VARS where session_ID = '018401A9-100052D6';
  • select * from dbo.VAR_CDRS where session_ID = '018401A9-100052D6';
  • select * from dbo.EVENT_LOGS where session_ID = '018401A9-100052D6';
  • select * from dbo.MCP_CDR where session_ID = '018401A9-100052D6';
  • select * from dbo.MCP_CDR_EXT where session_ID = '018401A9-100052D6';

An example of a SQL query that might be used to understand whether specific information is sensitive:

Draft:GWS/Current/GWSPEGuide/Planning Draft:GWS Before you begin Find out what to do before deploying Genesys Web Services and Applications. Genesys Web Services and Applications GWSPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 Genesys Web Services and Applications (GWS) in Genesys Multicloud CX private edition is made up of multiple containers and Helm charts. The pages in this "Configure and deploy" chapter walk you through how to deploy the following Helm charts:
  • GWS services (gws-services) - all the GWS components.
  • GWS ingress (gws-ingress) - provides internal and external access to GWS services. Internal ingress is used for cross-component communication inside the GWS deployment. It also can be used by other clients located inside the same Kubernetes cluster. External ingress provides access to GWS services to clients located outside the Kubernetes cluster. If you are deploying Genesys Web Services and Applications in a single namespace with other private edition services, then you do not need to deploy GWS ingress.

GWS also includes a Helm chart for Nginx (wwe-nginx) for Workspace Web Edition - see the Workspace Web Edition Private Edition Guide for details about how to deploy this chart.

See Helm charts and containers for Genesys Web Services and Applications for the Helm chart versions you must download for your release.

For information about downloading Helm charts from JFrog Edge, see Downloading your Genesys Multicloud CX containers.

Install the prerequisite dependencies listed in the Third-party services table before you deploy Genesys Web Services and Applications. See Software requirements for a full list of prerequisites and third-party services required by all Genesys Multicloud CX private edition services. GWS uses PostgreSQL to store tenant information, Redis to cache session data, and Elasticsearch to store monitored statistics for fast access. If you set up any of these services as dedicated services for GWS, they have the following minimal requirements:

PostgreSQL

  • CPU: 2
  • RAM: 8 GB
  • HDD: 50 GB

Redis

  • 2 nodes:
    • CPU: 2
    • RAM: 8 GB
    • HDD: 20 GB

Elasticsearch

  • 3 "master" nodes:
    • CPU: 2
    • RAM: 8 GB
    • HDD: 20 GB
  • 4 "data" nodes
    • CPU: 4
    • RAM: 16 GB
    • HDD: 20 GB
GWS ingress objects support Transport Layer Security (TLS) version 1.2 for a secure connection between Kubernetes cluster ingress and GWS ingress. TLS is disabled by default, but you can configure it for internal and external ingress by overriding the entryPoints.internal.ingress.tls and entryPoints.external.ingress.tls sections of the GWS ingress Helm chart.

For example:

entryPoints:
  external:
    ingress:
      tls:
        - secretName: gws-secret-ext
          hosts:
            - gws.genesys.com

In the example above:

  • secretName is the name of the Kubernetes secret that contains the certificate. The secret is a prerequisite and must be created before you deploy GWS ingress.
  • hosts is a list of the fully qualified domain names that should use the certificate. The list must be the same as the value configured for the entryPoints.external.ingress.hosts parameter.

Cookies

GWS components use cookies for the following purposes:

  • identify HTTP/HTTPS user sessions
  • identify CometD user sessions
  • support session stickiness

You can use any of the following browsers for UIs:

  • Chrome 75+
  • Firefox 68+
  • Firefox ESR 60.9
  • Microsoft Edge
Genesys Web Services and Applications must be deployed after Genesys Authentication.

For a look at the high-level deployment order, see Order of services deployment in the Setting up Genesys Multicloud CX Private Edition guide.

Draft:IXN/Current/IXNPEGuide/Planning Draft:IXN Before you begin Find out what to do before deploying Interaction Server. Interaction Server IXNPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 The current version of IXN Server:
  • supports single-region model of deployment only
  • does not support scaling or HA
  • requires dedicated PostgreSQL deployment per customer
Available IXN containers can be found under the following names in the registry:
  • ixn/ixn_vq_node
  • ixn/ixn_node
  • ixn/interaction_server

Available Helm charts can be found under the name ixn-<version>

For information about downloading Genesys containers and Helm charts from JFrog Edge, see the suite-level documentation: Downloading your Genesys Multicloud CX containers.

The following are the minimum versions supported by IXN Server:
  • Kubernetes 1.17+
  • Helm 3.0
If file logging is configured for IXN Server, a storage volume must be mounted to the IXN Server container. The storage must be able to sustain writes of up to 100 MB/min, with peaks of 10 MB/s for up to 2 minutes. The required storage size depends on the logging configuration.

For storage characteristics of the IXN Server database, refer to the PostgreSQL documentation.

Contact your account representative if you need assistance with sizing calculations.

Not applicable Not applicable Tenant service. For more information, refer to the Tenant Service Private Edition Guide.


Draft:PEC-AD/Current/WWEPEGuide/Planning Draft:PEC-AD Before you begin Find out what to do before deploying Workspace Web Edition. Agent Desktop WWEPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 There are no limitations or assumptions related to the deployment. The Workspace Web Edition Helm charts are included in the Genesys Web Services (GWS) Helm charts. You can access them when you download the GWS Helm charts from JFrog using your credentials.

See Helm charts and containers for Genesys Web Services and Applications for the Helm chart version you must download for your release.

For information about downloading Genesys Helm charts from JFrog Edge, refer to this article: Downloading your Genesys Multicloud CX containers.

There are no specific storage requirements for Workspace Web Edition. Network requirements include:
  • Required properties for ingress:
    • Cookies usage: None
    • Header requirements - client IP & redirect,  passthrough: None
    • Session stickiness: None
    • Allowlisting - optional: None
    • TLS for ingress - optional (you can enable or disable TLS on the connection): Through annotation, like any UI or API in the solution
  • Cross-region bandwidth: N/A
  • External connections from the Kubernetes cluster to other systems: N/A
  • WAF Rules (specific only for services handling internet traffic): N/A
  • Pod Security Policy: N/A
  • High-Availability/Disaster Recovery: Refer to High availability and disaster recovery
  • TLS/SSL Certificate configurations: No specific requirements
You can use any of the supported browsers to run Workspace Agent Desktop on the client side.

Mandatory Dependencies

The following services must be deployed and running before deploying the WWE service. For more information, refer to Order of services deployment.

  • Genesys Authentication Service:
    • A redirect must be configured in Auth/Environment to allow an agent to log in from the WWE URL. The redirect should be configured in the Auth onboarding script, according to the DNS assigned to the WWE service.
  • GWS services:
    • The CORS rules for WWE URLs must be configured in GWS. This should be configured in the GWS onboarding script, according to the DNS assigned to the WWE service.
    • The GWS API URL should be specified at the WWE deployment time as part of the Helm values.
  • TLM service:
    • The CORS rules for the domain where WWE is declared must be configured in Telemetry Service (TLM).

Optional Dependencies

Depending on the deployed architecture, the following services might need to be deployed and running before you deploy the WWE service:

  • WebRTC Service: To allow WebRTC in the browser
  • Telemetry Service: To allow browser observability (metrics and logs)

Miscellaneous desktop-side optional dependencies

The following software must or might be deployed on agent workstations to allow agents to leverage the WWE service:

  • Mandatory: A browser referenced in the supported browser list.
  • Optional: Genesys Softphone: a SIP or WebRTC softphone to handle the voice channel of agents.
Workspace Web Edition does not have specific GDPR support.
Draft:PEC-CAB/Current/CABPEGuide/Planning Draft:PEC-CAB Before you begin Find out what to do before deploying Genesys Callback. Callback CABPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 Genesys Engagement Service (GES) is the only service that runs in the GES Docker container. The Helm charts included with the GES release provision GES and any Kubernetes infrastructure necessary for GES to run, such as load balancing, autoscaling, ingress control, and monitoring integration.

For information about how to download the Helm charts, see Downloading your Genesys Multicloud CX containers.

See Helm charts and containers for Callback for the Helm chart version you must download for your release.

For information about setting up your Genesys Multicloud CX private edition platform, see Software requirements. The primary contributor to the size of a callback record is the amount of user data that is attached to a callback. Since this is an open-ended field, and the composition will differ from customer to customer, it is difficult to state the precise storage requirements of GES for a given deployment. To assist you, the following table lists the results of testing done in an internal Genesys development environment and shows the impact that user data has when it comes to the storage requirements for both Redis and Postgres.
Test | Redis size | Postgres size
10,000 Scheduled Callbacks with no user data | 26.51 MB | 41.1 MB
10,000 Scheduled Callbacks with 10 KB of user data | 64.44 MB | 252.91 MB
10,000 Scheduled Callbacks with 100 KB of user data | 110.58 MB | 595.79 MB

Note: This is 100 KB of randomized string data in a single field in the user data.

Hardware requirements

Genesys strongly recommends the following hardware requirements to run GES with a single tenant. The requirements are based on running GES in a multi-tenanted environment and scaled down accordingly. Use these guidelines, coupled with the callback storage information listed above, to gauge the precise requirements needed to ensure that GES runs smoothly in your deployment.

GES

(Based on t3.medium)

  • vCPUs: 1
  • Memory: 2 GiB
  • Network burst: 5 Gbps

Redis

(Based on cache.r5.large) Redis is essential to GES service availability. Deploy two of the Redis caches in a cluster; the second cache acts as a replica of the first. For more information, see Architecture.

GES requires a dedicated, non-clustered Redis instance. Callback data is stored in Redis memory.

  • vCPUs: 1
  • Memory: 8 GiB
  • Network burst: 10 Gbps

PostgreSQL

(Based on db.t3.medium)

  • vCPUs: 2
  • Memory: 4 GiB
  • Network burst: 5 Gbps
  • Storage: 100 GiB

Sizing calculator

The information in this section is provided to help you determine what hardware you need to run GES and third-party components. The information and formulas are based on an analysis of database disk storage and Redis memory usage requirements for callback data. The numbers provided here include only storage and memory usage for callbacks. Additional storage and memory is required for configuration data and basic operations.

Requirements per callback

Each callback record (excluding user data) requires approximately 6.5 to 7.0 kB of database disk storage, plus additional disk storage for the user data. Each kB of user data consumes approximately 3.0 kB of disk storage.

Each callback record (excluding user data) requires approximately 4.5 to 5.5 kB of Redis memory, plus an additional 1.25 kB for each kB of user data.

Use the following formulas to estimate disk storage and Redis memory requirements:

  • Estimate database disk storage requirements for callback data:
    <number of callbacks per day> × (7 kB + (3 kB × <kB of user data per callback>)) × 14 days
  • Estimate Redis memory requirements for callback data:
    <number of callbacks per day> × (5.5 kB + (1.25 kB × <kB of user data per callback>)) × 14 days

For example, if a tenant has an average of 100,000 callbacks per day with 1 kB of user data in each callback:

  • The database storage requirement is approximately 14 GB.
  • The Redis memory requirement is approximately 9.5 GB.

NOTE: Each callback record is stored for 14 days. If you average about 10k scheduled callbacks every day, and the scheduled callbacks are all booked as far out as possible (that is, 14 days in the future), the number of callbacks to use in storage and memory calculations is 28 days × 10k callbacks per day = 280k callbacks.
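The two estimates above can be combined into a small helper. This is an illustrative sketch (the function name and structure are our assumptions, not part of GES); it uses the per-callback constants and 14-day retention from the text.

```python
# Sketch of the callback sizing formulas from the text:
#   database disk: 7 kB per callback + 3 kB per kB of user data
#   Redis memory:  5.5 kB per callback + 1.25 kB per kB of user data

def callback_sizing_gb(callbacks_per_day, user_data_kb, days=14):
    """Return (database disk GB, Redis memory GB) for callback data."""
    disk_kb = callbacks_per_day * (7 + 3 * user_data_kb) * days
    redis_kb = callbacks_per_day * (5.5 + 1.25 * user_data_kb) * days
    return disk_kb / 1e6, redis_kb / 1e6

# Worked example: 100,000 callbacks/day with 1 kB of user data each.
disk_gb, redis_gb = callback_sizing_gb(100_000, 1)
```

With the example inputs this gives approximately 14 GB of database storage and approximately 9.5 GB of Redis memory, matching the figures above.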

Redis operations

The Redis operations primarily update the connectivity status to other services such as Tenant Service (specifically ORS and URS) and Genesys Web Services and Applications (GWS).

When GES is idle (zero callbacks in the past, no active callback sessions, no scheduled callbacks), GES generates about 50 Redis operations per second per GES node per tenant.

Each Immediate callback generates approximately 110 Redis operations from its creation to the end of the ORS session.

For Scheduled callbacks, assuming each callback generates 110 Redis operations when the ORS session is active (based on Immediate callback numbers), there is 1 additional Redis operation for each minute that a callback is scheduled.

For example, if a callback is scheduled for 1 hour from the time it was created, the number of Redis operations is approximately 60 + 110 = 170.

For a callback scheduled for 1 day from the time it was created, it generates approximately 60 × 24 + 110 = 1550 Redis operations, using the following formula for the number of Redis operations per callback:
<number of callbacks> × (110 + <number of minutes until scheduled time>)
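As a quick check of the per-callback numbers above, the formula can be expressed in code (the function name is illustrative, not part of GES):

```python
def redis_ops_per_callback(minutes_until_scheduled=0):
    """Approximate Redis operations for one callback: ~110 for the active
    ORS session, plus 1 per minute the callback waits in the schedule."""
    return 110 + minutes_until_scheduled

redis_ops_per_callback(60)       # scheduled 1 hour out -> 170
redis_ops_per_callback(60 * 24)  # scheduled 1 day out -> 1550
```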

Because the longevity of a callback ORS session depends on the estimated wait time (EWT), the total number of Redis operations performed by GES per minute varies, based on both the number of callbacks in the system and the EWT of the callbacks.

Use the following formula to estimate the number of Redis operations performed per minute:
Total number of Redis operations per minute = (50 base GES Redis operations per second × 60 seconds) + <number of upcoming scheduled callbacks in the system> + ((<total number of active callbacks> / <EWT>) × 110)

Where:

  • Total number of active callbacks = <number of active immediate callbacks> + <number of active scheduled callbacks>, and
  • Number of active scheduled callbacks = (<number of scheduled callbacks per time slot> / <time slot duration>) × <EWT>

For example, let's say we have the following scenario:

  • Scheduled callbacks:
    • Time slot duration = 15 minutes
    • Maximum capacity per time slot = 100
    • Business hours = 24x7
    • Assume that all time slots are fully booked for the next 14 days
  • Number of active immediate callbacks = 1,000
  • Estimated wait time = 90 minutes

Using the preceding formulas, estimate the Redis operations per minute:

  • Total number of scheduled callbacks = (100 × (60 / 15)) × 24 × 14 = 134,400
  • Number of active scheduled callbacks = (100 / 15) × 90 = 600
  • Number of upcoming scheduled callbacks = <total number of scheduled callbacks> - <number of active scheduled callbacks> = (134,400 - 600) = 133,800
  • Total number of active callbacks = 1,000 + 600 = 1,600
  • Total number of Redis operations per minute = (50 × 60) + 133,800 + ((1,600 / 90) × 110) = 138,756
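The worked example above can be reproduced with a short Python sketch (the function and variable names are assumptions for illustration, not Genesys APIs):

```python
def redis_ops_per_minute(upcoming_scheduled, active_callbacks, ewt_minutes,
                         base_ops_per_second=50, session_ops=110):
    # Base connectivity-status traffic, plus 1 op/min per upcoming scheduled
    # callback, plus session traffic spread over the estimated wait time.
    base = base_ops_per_second * 60
    return base + upcoming_scheduled + (active_callbacks / ewt_minutes) * session_ops

# Scenario from the text: 15-minute slots, 100 per slot, 24x7, fully booked
# for 14 days, 1,000 active immediate callbacks, EWT of 90 minutes.
slots_per_day = (60 // 15) * 24                # 96 time slots per day
total_scheduled = 100 * slots_per_day * 14     # 134,400 scheduled callbacks
active_scheduled = round(100 / 15 * 90)        # 600 active scheduled callbacks
upcoming = total_scheduled - active_scheduled  # 133,800 upcoming
active_total = 1_000 + active_scheduled        # 1,600 active callbacks
print(round(redis_ops_per_minute(upcoming, active_total, 90)))  # 138756
```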

Redis keys

Each callback creates three additional Redis keys. Given the preceding calculations for Redis memory requirements for each callback, the formula for the average key size is:
(5.5 kB + (1.25 kB × <kB_of_user_data_per_callback>)) / 3
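As a sketch, the average key size formula translates directly to Python; `average_redis_key_size_kb` is a hypothetical name, and the 5.5 kB / 1.25 kB constants come from the preceding memory calculations:

```python
def average_redis_key_size_kb(user_data_kb, base_kb=5.5,
                              per_kb_overhead=1.25, keys_per_callback=3):
    # Per-callback Redis memory spread evenly across its three keys.
    return (base_kb + per_kb_overhead * user_data_kb) / keys_per_callback

print(average_redis_key_size_kb(1.0))  # (5.5 + 1.25) / 3 = 2.25 kB per key
```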

Incoming connections to the GES deployment are handled either through the UI or through the external API. For information about how to use the external API, see the Genesys Multicloud CX Developer Center.

Connection topology

The diagram below shows the incoming and outgoing connections amongst GES and other Genesys and third-party software such as Redis, PostgreSQL, and Prometheus. In the diagram, Prometheus is shown as being part of the broader Kubernetes deployment, although this is not a requirement. What's important is that Prometheus is able to reach the internal load balancer for GES.

The other important thing to note is that, depending on the use case, GES might communicate with Firebase and CAPTCHA over the open internet. This is not part of the default callback offering, but if you use Push Notifications with your callback service, then GES must be able to connect to Firebase over TLS. The use of Push Notifications or CAPTCHA is optional and not necessary for the basic callback scenarios.

GES requires a dedicated, non-clustered Redis instance.

[Diagram: GES connection topology in private edition]

Web application firewall rules

Information in the following sections is based on NGINX configuration used by GES in an Azure cloud environment.

Cookies and session requirements

When interacting with the UI, GES and GWS ensure that the user's browser has the appropriate session cookies. By default, UI sessions time out after 20 minutes of inactivity.

The external Engagement API does not require session management or the use of cookies, but it is important that the GES API key be provided in the request headers in the X-API-Key field.

For ingress to GES, allow requests to only the following paths to be forwarded to GES:

- /ges/
- /engagement/v3/callbacks/create
- /engagement/v3/callbacks/cancel
- /engagement/v3/callbacks/retrieve
- /engagement/v3/callbacks/availability/
- /engagement/v3/callbacks/queue-status/
- /engagement/v3/callbacks/open-for/
- /engagement/v3/estimated-wait-time
- /engagement/v3/call-in/requests/create
- /engagement/v3/statistics/operations/get-statistic-ex

In addition to allowing connections to only these paths, ensure that the ccid or ContactCenterID headers on any incoming requests are empty. This enhances security of the GES deployment; it prevents the use of external APIs by an actor who has only the CCID of the contact center.
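The path allowlist and empty-header rule above can be sketched as a Python predicate. This is a minimal illustration under assumed names (`should_forward` and the path sets are hypothetical); actual enforcement belongs in your ingress controller or WAF configuration:

```python
# Exact API paths (no trailing slash in the allowlist).
EXACT_PATHS = {
    "/engagement/v3/callbacks/create",
    "/engagement/v3/callbacks/cancel",
    "/engagement/v3/callbacks/retrieve",
    "/engagement/v3/estimated-wait-time",
    "/engagement/v3/call-in/requests/create",
    "/engagement/v3/statistics/operations/get-statistic-ex",
}
# Paths that end in "/" are treated as prefixes.
PREFIX_PATHS = (
    "/ges/",
    "/engagement/v3/callbacks/availability/",
    "/engagement/v3/callbacks/queue-status/",
    "/engagement/v3/callbacks/open-for/",
)

def should_forward(path, headers):
    # Reject any request that carries a ccid/ContactCenterID header value.
    if headers.get("ccid") or headers.get("ContactCenterID"):
        return False
    return path in EXACT_PATHS or any(path.startswith(p) for p in PREFIX_PATHS)

print(should_forward("/ges/status", {}))                      # True
print(should_forward("/engagement/v3/callbacks/create", {}))  # True
print(should_forward("/admin", {}))                           # False
print(should_forward("/ges/status", {"ccid": "123"}))         # False
```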

TLS/SSL certificate configuration

There are no special TLS certificate requirements for the GES/Genesys Callback web-based UI.

Subnet requirements

There are no special requirements for sizing or creating an IP subnet for GES above and beyond the demands of the broader Kubernetes cluster.

The Genesys Callback user interface is supported in the following browsers.

GES has dependencies on several other Genesys services. You must deploy the services on which GES depends and verify that each is working as expected before you provision and configure GES. If you follow this advice, then any issues that arise during the provisioning of GES are most likely caused by how GES is provisioned, rather than by a downstream service.

GES/Callback requires your environment to contain supported releases of the following Genesys services, which must be deployed before you deploy Callback:

  • GWS
  • Voice Microservices
  • Designer

For detailed information about the correct order of services deployment, see Order of services deployment.

Callback records are stored for 14 days. The 14-day TTL setting starts at the Desired Callback Time. The Callback TTL (seconds) setting in the CALLBACK_SETTINGS data table has no effect on callback record storage duration; 14 days is a fixed value for all callback records.
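The TTL rule can be stated as a one-line Python sketch; `record_expiry` is a hypothetical helper shown only to make the "countdown starts at the Desired Callback Time" behavior concrete:

```python
from datetime import datetime, timedelta

CALLBACK_TTL = timedelta(days=14)  # fixed for all callback records

def record_expiry(desired_callback_time):
    # The 14-day countdown starts at the Desired Callback Time,
    # not at the record's creation time.
    return desired_callback_time + CALLBACK_TTL

print(record_expiry(datetime(2024, 3, 1, 12, 0)))  # 2024-03-15 12:00:00
```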

For more information, see Link to come.

Draft:PEC-DC/Current/DCPEGuide/Planning Draft:PEC-DC Before you begin Find out what to do before deploying Digital Channels. Digital Channels DCPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 Digital Channels for private edition has the following limitations:
  • Supports only a single-region model of deployment.
  • Social media requires additional components that are not included in Digital Channels.
Digital Channels in Genesys Multicloud CX private edition includes the following containers:
  • nexus
  • hubpp
  • tenant_deployment

The service also includes a Helm chart, which you must deploy to install all the containers for Digital Channels:

  • nexus

See Helm charts and containers for Digital Channels for the Helm chart version you must download for your release.

To download the Helm chart, navigate to the nexus folder in the JFrog repository. For information about how to download the Helm charts, see Downloading your Genesys Multicloud CX containers.


Install the prerequisite dependencies listed in the Third-party services table before you deploy Digital Channels. Digital Channels uses PostgreSQL and Redis to store all data.


For general network requirements, review the information on the suite-level Network settings page. Digital Channels has dependencies on the following Genesys services:
  • Genesys Authentication
  • Web Services and Applications
  • Tenant Microservice
  • Universal Contact Service
  • Designer

For detailed information about the correct order of services deployment, see Order of services deployment.

Draft:PEC-Email/Current/EmailPEGuide/Planning Draft:PEC-Email Before you begin Find out what to do before deploying Email. Email EmailPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 The current version of Email supports a single-region deployment model only. Email in Genesys Multicloud CX private edition includes the following containers:
  • iwd-email

The service also includes a Helm chart, which you must deploy to install the required containers for Email:

  • iwdem

See Helm Chart and Containers for Email for the Helm chart version you must download for your release.

To download the Helm chart, navigate to the iwdem folder in the JFrog repository. For information about how to download the Helm charts, see Downloading your Genesys Multicloud CX containers.

All data is stored in IWD, UCS-X, and Digital Channels, which are external to the Email service. External connections: IMAP, SMTP, Gmail, GRAPH. Not applicable. The following Genesys services are required:
  • Genesys authentication service (GAuth)
  • Universal Contact Service (UCS)
  • Interaction Server
  • Digital Channels (Nexus)
  • Intelligent Workload Distribution (IWD)

For the order in which the Genesys services must be deployed, refer to the Order of services deployment topic in the Setting up Genesys Multicloud CX private edition document.

Content coming soon
Draft:PEC-GPA/Current/GPAPEGuide/Planning Draft:PEC-GPA Before you begin Find out what to do before deploying Gplus Adapter for Salesforce. Gplus Adapter for Salesforce GPAPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 There are no limitations or assumptions related to the deployment. You can access the Gplus Adapter for Salesforce Helm charts by downloading them from JFrog using your credentials.

See [[Draft:ReleaseNotes/Current/GenesysEngage-cloud/GPAHelm]] for the Helm chart version you must download for your release.

For information about downloading Genesys Helm charts from JFrog Edge, refer to this article: Downloading your Genesys Multicloud CX containers.

Salesforce Lightning or Salesforce Classic. There are no specific storage requirements for Gplus Adapter for Salesforce.


Network requirements include:
  • Required properties for ingress:
    • Cookies usage: None
    • Header requirements - client IP & redirect, passthrough: None
    • Session stickiness: None
    • Allowlisting - optional: None
    • TLS for ingress - optional (you can enable or disable TLS on the connection): Through annotations, like any UI or API in the solution
  • Cross-region bandwidth: N/A
  • External connections from the Kubernetes cluster to other systems: N/A
  • WAF Rules (specific only for services handling internet traffic): N/A
  • Pod Security Policy: N/A
  • High-Availability/Disaster Recovery: Refer to High availability and disaster recovery
  • TLS/SSL Certificate configurations: No specific requirements
You can use any of the supported browsers to run Gplus Adapter for Salesforce and Workspace Agent Desktop on the client side.

Mandatory Dependencies

The following services must be deployed and running before deploying the GPA service. For more information, refer to Order of services deployment.

  • Genesys Authentication Service:
    • A redirect must be configured in Auth/Environment to allow an agent to log in from the WWE URL. The redirect should be configured in the Auth onboarding script, according to the DNS assigned to the WWE service.
  • GWS services:
    • The CORS rules for WWE URLs must be configured in GWS. This should be configured in the GWS onboarding script, according to the DNS assigned to the WWE service.
    • The GWS API URL should be specified at the WWE deployment time as part of the Helm values.
  • TLM service:
    • The CORS rules for the domain where WWE is declared must be configured in Telemetry Service (TLM). For example: ?'"`UNIQ--nowiki-00000005-QINU`"'?

Optional Dependencies

Depending on the deployed architecture, the following services must be deployed and running before deploying the WWE service:

  • WebRTC Service: To allow WebRTC in the browser
  • Telemetry Service: To allow browser observability (metrics and logs)

Miscellaneous desktop-side optional dependencies

The following software must or might be deployed on agent workstations to allow agents to leverage the WWE service:

  • Mandatory: A browser referenced in the supported browser list.
  • Optional: Genesys Softphone: a SIP or WebRTC softphone to handle the voice channel of agents.
Gplus Adapter for Salesforce does not have specific GDPR support.
Draft:PEC-IWD/Current/IWDDMPEGuide/Planning Draft:PEC-IWD Before you begin Find out what to do before deploying IWD Data Mart. Intelligent Workload Distribution IWDDMPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 The current version of IWD Data Mart:
  • works as a short-lived job started on a schedule
  • does not support scaling or HA
  • requires dedicated PostgreSQL deployment per customer

IWD Data Mart is a short-lived job, so Prometheus cannot pull metrics from it. Therefore, it requires a standalone Pushgateway service for monitoring.

IWD Data Mart in Genesys Multicloud CX private edition includes the following containers:
  • iwd_dm_cloud

The service also includes a Helm chart, which you must deploy to install the required containers for IWD Data Mart:

  • iwddm-cronjob

See Helm Charts and Containers for IWD and IWD Data Mart for the Helm chart version you must download for your release.

To download the Helm chart, navigate to the iwddm-cronjob folder in the JFrog repository. For information about how to download the Helm charts, see Downloading your Genesys Multicloud CX containers.

All data is stored in PostgreSQL, which is external to IWD Data Mart. Not applicable. Not applicable. The following Genesys service is required: Intelligent Workload Distribution (IWD) with a provisioned tenant.

For the order in which the Genesys services must be deployed, refer to the Order of services deployment topic in the Setting up Genesys Multicloud CX private edition document.

Content coming soon
Draft:PEC-IWD/Current/IWDPEGuide/Planning Draft:PEC-IWD Before you begin Find out what to do before deploying IWD. Intelligent Workload Distribution IWDPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 The current version of IWD:
  • supports a single-region deployment model only
  • requires dedicated PostgreSQL deployment per customer


IWD in Genesys Multicloud CX private edition includes the following containers:
  • iwd

The service also includes a Helm chart, which you must deploy to install the required containers for IWD:

  • iwd

See Helm Charts and Containers for IWD and IWD Data Mart for the Helm chart version you must download for your release.

To download the Helm chart, navigate to the iwd folder in the JFrog repository. For information about how to download the Helm charts, see Downloading your Genesys Multicloud CX containers.

All data is stored in PostgreSQL, Elasticsearch, and Digital Channels, which are external to IWD.

Sizing of Elasticsearch depends on the load. Allow on average 15 KB per work item, 50 KB per email. This can be adjusted depending on the size of items processed.
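The sizing guidance above lends itself to a quick back-of-the-envelope estimate; `es_storage_estimate_gb` is a hypothetical helper using the stated 15 KB / 50 KB averages, which you should adjust for your actual item sizes:

```python
def es_storage_estimate_gb(work_items, emails,
                           kb_per_work_item=15, kb_per_email=50):
    # Average document footprints from the sizing guidance above.
    total_kb = work_items * kb_per_work_item + emails * kb_per_email
    return total_kb / (1024 * 1024)  # KB -> GB

# For example, 1M work items and 200k emails:
print(round(es_storage_estimate_gb(1_000_000, 200_000), 1))  # ~23.8 GB
```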

External Connections: IWD allows customers to configure webhooks. If configured, this establishes an HTTP or HTTPS connection to the configured host and port. Not applicable. The following Genesys services are required:
  • Genesys authentication service (GAuth)
  • Universal Contact Service (UCS)
  • Interaction Server
  • Digital Channels (Nexus)

For the order in which the Genesys services must be deployed, refer to the Order of services deployment topic in the Setting up Genesys Multicloud CX private edition document.

Content coming soon
Draft:PEC-OU/Current/CXCPEGuide/Planning Draft:PEC-OU Before you begin Find out what to do before deploying CX Contact. Outbound (CX Contact) CXCPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 There are no limitations. Before you begin deploying the CX Contact service, ensure that the following prerequisites are met and, if needed, the optional tasks are completed:

Prerequisites

  • A Kubernetes or OpenShift cluster is ready for deployment of CX Contact.
  • The Kubectl and Helm command line tools are on your computer.
  • You have connectivity to the target cluster, the proper kubectl context to work with the cluster, and a user with administrative permissions to deploy CX Contact to the defined namespace.

Optional tasks

  • SFTP Server—Install an SFTP Server with basic authentication for optional input and output data. SFTP Server is used when automation capabilities are required.
  • CDP NG access credentials—As of CX Contact 9.0.025, Compliance Data Provider Next Generation (CDP NG) is used as a CDP by default. Before attempting to connect to CDP NG, obtain the necessary access credentials (ID and Secret) from Genesys Customer Care.
  • Bitnami repository—If you choose to deploy dedicated Redis and Elasticsearch for CX Contact, add the Bitnami repository to install Redis and Elasticsearch using the following command:
    helm repo add bitnami ?'"`UNIQ--nowiki-00000011-QINU`"'?

After you've completed the mandatory tasks, check the Third-party prerequisites.

For information about how to download the Helm charts, see Downloading your Genesys Multicloud CX containers.

See Helm charts and containers for CX Contact for the Helm chart version you must download for your release.

CX Contact is the only service that runs in the CX Contact Docker container. The Helm charts included with the CX Contact release provision CX Contact and any Kubernetes infrastructure necessary for CX Contact to run.

Set up Elasticsearch and Redis as standalone services, or install them in a single OpenShift cluster. You can also install them as shared services, deployed in an "infra" namespace in OpenShift.

For information about setting up your Genesys Multicloud CX private edition platform, see Software requirements.

CX Contact requires shared persistent storage and an associated storage class created by the cluster administrator. The Helm chart creates the ReadWriteMany (RWX) Persistent Volume Claim (PVC) that is used to store and share data with multiple CX Contact components.

The minimal recommended PVC size is 100 GB.

This topic describes network requirements and recommendations for CX Contact in private edition deployments:

Single namespace

Deploy CX Contact in a single namespace to prevent ingress/egress traffic from going through additional hops, due to firewalls, load balancers, or other network layers that introduce network latencies and overhead. Do not hardcode the namespace; if necessary, you can override it in the Helm values file or with the standard --namespace argument of the helm install command.

External connections

For information about external connections from the Kubernetes cluster to other systems, see Architecture. External connections also include:

  • Compliance Data Provider (AWS)
  • SFTP Servers

Ingress

The CX Contact UI requires Session Stickiness. Use ingress-nginx as the ingress controller (see github.com).

Important
The CX Contact Helm chart contains default annotations for session stickiness only for ingress-nginx. If you are using a different ingress controller, refer to its documentation for session stickiness configuration.

Ingress SSL

If you are using Chrome 80 or later, the SameSite cookie must have the Secure flag (see Chromium Blog). Therefore, Genesys recommends that you configure a valid SSL certificate on ingress.

Logging

Log rotation is required so that logs do not consume all of the available storage on the node.

Kubernetes is currently not responsible for rotating logs. Log rotation can be handled by the docker json-file log driver by setting the max-file and max-size options.

For effective troubleshooting, the engineering team should be able to retrieve stdout logs of the pods (using the kubectl logs command), so log retention should not be overly aggressive (see JSON file logging driver). For example: ?'"`UNIQ--source-00000012-QINU`"'? For on-site debugging purposes, CX Contact logs can be collected and stored in Elasticsearch (for example, with an EFK stack; see medium.com).

Monitoring

CX Contact provides metrics that can be consumed by Prometheus and Grafana. Genesys recommends installing the Prometheus Operator (see github.com) in the cluster. The CX Contact Helm chart supports the creation of CustomResourceDefinitions that the Prometheus Operator can consume.

For more information about monitoring, see Observability in Outbound (CX Contact).

CX Contact components operate with Genesys core services (v8.5 or v8.1) in the back end. All voice-processing components (Voice Microservice and shared services, such as GVP), and the GWS and Genesys Authentication services (mentioned below), must be deployed and running before you deploy the CX Contact service. See Order of services deployment.

The following Genesys services and components are required:

  • GWS
  • Genesys Authentication Service
  • Tenant Service
  • Voice Microservice
  • Multi-tenant Configuration Server

Nexus is optional.

 

CX Contact does not support GDPR.
Draft:PEC-REP/Current/GCXIPEGuide/Planning Draft:PEC-REP Before you begin deploying GCXI Find out what to do before deploying Genesys Customer Experience Insights (GCXI). Reporting GCXIPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 GCXI can provide meaningful reports only if Genesys Info Mart and Reporting and Analytics Aggregates (RAA) are deployed and available. Deploy GCXI only after Genesys Info Mart and RAA.

Mixed-mode deployment

RAA and Genesys Info Mart can be deployed in private edition or on premises. If you are deploying GCXI private edition with your existing on-premises Genesys Info Mart and RAA software (a mixed-mode deployment), simply point to your on-premises RDBMS when configuring GCXI. In such a scenario, you do not need to also deploy RAA private edition or Genesys Info Mart private edition. For information about on-premises deployments of RAA and Genesys Info Mart, see docs.genesys.com.

For more information about how to download the Helm charts from JFrog Edge, see the suite-level documentation: Downloading your Genesys Multicloud CX containers.

To learn what Helm chart version you must download for your release, see Helm charts and containers for Genesys Customer Experience Insights.

GCXI Containers

The GCXI Helm chart uses the following containers:
    • gcxi - the main GCXI container; runs as a StatefulSet. This container is roughly 12 GB; ensure that you have enough space to allocate it.
    • gcxi-control - a supplementary container used for the initial installation of GCXI and for cleanup.

GCXI Helm chart: Download the latest yaml files from the repository, or examine the attached files: Sample GCXI yaml files

For more information about setting up your Genesys Multicloud CX private edition platform, including Kubernetes, Helm, and other prerequisites, see Software requirements.


GCXI installation requires a set of local Persistent Volumes (PVs). Kubernetes local volumes are directories on the host with specific properties: https://kubernetes.io/docs/concepts/storage/volumes/#local

Example usage: https://zhimin-wen.medium.com/local-volume-provision-242affd5efe2

Kubernetes provides a powerful volume plugin system, which enables Kubernetes workloads to use a wide variety of block and file storage to persist data.

You can use the GCXI Helm chart to easily set up your own PVs, or you can configure PV Dynamic Provisioning in your cluster so that Kubernetes automatically creates PVs.

Volumes Design

GCXI installation uses the following PVCs. For each volume, you can change the default mount point on the host using Helm values; the local provisioner requires that the specified directory pre-exists on your host. The required node labels apply to the default Local PV setup.

gcxi-backup

  • Mount path (inside container): /genesys/gcxi_shared/backup
  • Description: Backup files. Used by control container / jobs.
  • Access type: RWX
  • Approximate size: Depends on backup frequency. 5 GB+
  • Default mount point on host: /genesys/gcxi/backup (override using Values.gcxi.local.pv.backup.path)
  • Must be shared across nodes: Only in multiple concurrent installs scenarios.
  • Required node label: gcxi/local-pv-gcxi-backup = "true"

gcxi-log

  • Mount path (inside container): /mnt/log
  • Description: MSTR logs. Used by main container. The GCXI Helm chart allows log volumes of legacy hostPath type; this scenario is the default and is used in examples in this document.
  • Access type: RWX
  • Approximate size: Depends on rotation scheme. 5 GB+
  • Default mount point on host: /mnt/log/gcxi with subPathExpr: $(POD_NAME) (override using Values.gcxi.local.pv.log.path)
  • Must be shared across nodes: Not necessarily.
  • Required node label: gcxi/local-pv-gcxi-log = "true". If you are using hostPath volumes for logs, you don't need the node label.

gcxi-postgres

  • Mount path (inside container): /var/lib/postgresql/data (if using Postgres in container), or disk space in the Postgres RDBMS
  • Description: Meta DB volume. Used by Postgres container, if deployed.
  • Access type: RWO
  • Approximate size: Depends on usage. 10 GB+
  • Default mount point on host: /genesys/gcxi/shared (override using Values.gcxi.local.pv.postgres.path)
  • Must be shared across nodes: Yes, unless you tie the Postgres container to some particular node.
  • Required node label: gcxi/local-pv-postgres-data = "true"

gcxi-share

  • Mount path (inside container): /genesys/gcxi_share
  • Description: MSTR shared caches and cubes. Used by main container.
  • Access type: RWX
  • Approximate size: Depends on usage. 5 GB+
  • Default mount point on host: /genesys/gcxi/data with subPathExpr: $(POD_NAME) (override using Values.gcxi.local.pv.share.path)
  • Must be shared across nodes: Yes.
  • Required node label: gcxi/local-pv-gcxi-share = "true"

Preparing the environment

To prepare your environment, complete the following steps:

  1. For GKE deployments, run the following command to log in to the gcloud cluster:
    ?'"`UNIQ--source-00000025-QINU`"'?
  2. To log in to the cluster, run the following command:
    ?'"`UNIQ--source-00000027-QINU`"'?
  3. To check the cluster version on OpenShift deployments, run the following command:
    ?'"`UNIQ--source-00000029-QINU`"'?
  4. To create a new project, run the following command:
    OpenShift: ?'"`UNIQ--source-0000002B-QINU`"'?
    GKE:
    1. Edit the create-gcxi-namespace.json, adding the following values:
      ?'"`UNIQ--source-0000002D-QINU`"'?
    2. To apply the changes, run the following command:
      ?'"`UNIQ--source-0000002F-QINU`"'?
  5. For GKE, to confirm namespace creation, run the following command:
    ?'"`UNIQ--source-00000031-QINU`"'?
  6. Create a secret for docker-registry to pull images from the Genesys JFrog repository:
    ?'"`UNIQ--source-00000033-QINU`"'?
  7. Create the file values-test.yaml, and populate it with appropriate override values. For a simple deployment using PostgreSQL inside the container, you must include PersistentVolumes named gcxi-log-pv, gcxi-backup-pv, gcxi-share-pv, and gcxi-postgres-pv. You must override GCXI_GIM_DB with the name of your Genesys Info Mart data source.

Ingress

Ingress annotations are supported in the values.yaml file (see line 317). Genesys recommends session stickiness, to improve user experience. ?'"`UNIQ--source-00000035-QINU`"'?

Allowlisting is required for GCXI.

WAF Rules

WAF rules are defined in the variables.tf file (see line 245).

SMTP

The GCXI container and Helm chart support the environment variable EMAIL_SERVER.

TLS

The GCXI container does not serve TLS natively. Ensure that your environment is configured to use proxy with HTTPS offload.

MicroStrategy Web is the user interface most often used for accessing, managing, and running the Genesys CX Insights reports. MicroStrategy Web certifies the latest versions, at the time of release, for the following web browsers:
  • Apple Safari
  • Google Chrome (Windows and iOS)
  • Microsoft Edge
  • Microsoft Internet Explorer (Versions 9 and 10 are supported, but not certified)
  • Mozilla Firefox

To view updated information about supported browsers, see the MicroStrategy ReadMe.

GCXI requires the following services:
  • Reporting and Analytics Aggregates (RAA) is required to aggregate Genesys Info Mart data.
  • Genesys Info Mart and / or Intelligent Workload Distribution (IWD) Data Mart. GCXI can run without these services, but cannot produce meaningful output without them.
  • GWS Auth/Environment service
  • Genesys Platform Authentication through Config Server (GAuth). Alternatively, GCXI includes a native internal login, which you can use to authorize users instead of GAuth. This document assumes you are using GAuth (the recommended solution), which gives ConfigServer users access to GCXI.
  • GWS client id/client secret
GCXI can store Personally Identifiable Information (PII) in logs, history files, and reports (in scenarios where customers include PII data in reports). Genesys recommends that you do not capture PII in reports. If you do capture PII, it is your responsibility to remove any such report data within 21 days, if required by General Data Protection Regulation (GDPR) standards.

For more information and relevant procedures, see: Genesys CX Insights Support for GDPR and the suite-level Link to come documentation.

Draft:PEC-REP/Current/GCXIPEGuide/PlanningRAA Draft:PEC-REP Before you begin deploying RAA Find out what to do before deploying Reporting and Analytics Aggregates (RAA). Reporting GCXIPEGuide The RAA container works with the Genesys Info Mart database; deploy RAA only after you have deployed Genesys Info Mart.

The Genesys Info Mart database schema must correspond to a compatible Genesys Info Mart version. Execute the following command to discover the required Genesys Info Mart release: ?'"`UNIQ--source-00000020-QINU`"'? The RAA container runs RAA on Java 11 and is supplied with the following JDBC drivers:

  • MSSQL 9.2.1 JDBC Driver
  • Postgres 42.2.11 JDBC Driver
  • Oracle Database 21c (21.1) JDBC Driver

Genesys recommends that you verify that the provided driver is compatible with your database. If it is not, you can override the JDBC driver by copying an updated driver file to the folder lib\jdbc_driver_<RDBMS> within the mounted config volume, or by creating an identically named link in that folder that points to a driver file stored on another volume (where <RDBMS> is the RDBMS used in your environment). This is possible because RAA is launched from a config folder that is mounted in the container.

Mixed-mode deployment

RAA is deployed to support GCXI. You can deploy RAA (and Genesys Info Mart) in private edition or on premises. If you are deploying GCXI private edition with your existing on-premises Genesys Info Mart and RAA software (a mixed-mode deployment), simply point to your on-premises RDBMS when configuring GCXI. In such a scenario, you do not need to also deploy RAA private edition or Genesys Info Mart private edition. Information about on-premises deployments of RAA and Genesys Info Mart is available on docs.genesys.com.

To learn what Helm chart version you must download for your release, see Helm charts and containers for Genesys Customer Experience Insights.

You can download the gcxi helm charts from the following repository:?'"`UNIQ--source-00000022-QINU`"'? For more information about downloading containers, see: Downloading your Genesys Multicloud CX containers.

For information about setting up your Genesys Multicloud CX private edition platform, including Kubernetes, Helm, and other prerequisites, see Software requirements.
This section describes the storage requirements for various volumes.

GIM secret volume

In scenarios where raa.env.GCXI_GIM_DB__JSON is not specified, RAA mounts this volume to provide GIM connections details.

  1. Declare GIM database connection details as a Kubernetes secret in gimsecret.yaml:
    ?'"`UNIQ--source-00000024-QINU`"'?
  2. Reference gimsecret.yaml in values.yaml:
    ?'"`UNIQ--source-00000026-QINU`"'?

Alternatively, you can mount the CSI secret using secretProviderClass, in values.yaml:?'"`UNIQ--source-00000028-QINU`"'?

Config volume

RAA mounts a config volume inside the container as the folder /genesys/raa_config. The folder is treated as a work directory; RAA reads the following files from it during startup:

  • conf.xml, which contains application-level config settings.
  • custom *.ss files.
  • JDBC driver, from the folder lib/jdbc_driver_<RDBMS>.

RAA does not normally create any files in /genesys/raa_config at runtime, so the volume does not require a fast storage class. By default, the size limit is set to 50 MB. You can specify the storage class and size limit in values.yaml:?'"`UNIQ--source-0000002A-QINU`"'?

   ...

The RAA Helm chart creates a Persistent Volume Claim (PVC). You can define a Persistent Volume (PV) separately using the gcxi-raa chart, and bind such a volume to the PVC by specifying the volume name in the raa.volumes.config.pvc.volumeName value in values.yaml:?'"`UNIQ--source-0000002C-QINU`"'?

Health volume

RAA uses the Health volume to store:

  • Health files.
  • Prometheus file containing metrics for the most recent 2-3 scrape intervals.
  • Results of the most recent testRun init container execution.

By default, the volume is limited to 50 MB. RAA periodically interacts with the volume at runtime, so Genesys does not recommend a slow storage class for this volume. You can specify the storage class and size limit in values.yaml:?'"`UNIQ--source-0000002E-QINU`"'?The RAA Helm chart creates a PVC. You can define a PV separately using the gcxi-raa chart, and bind such a volume to the PVC by specifying the volume name in the raa.volumes.health.pvc.volumeName value in values.yaml:?'"`UNIQ--source-00000030-QINU`"'?

RAA interacts only with the Genesys Info Mart database.

RAA can expose Prometheus metrics by way of Netcat.

The aggregation pod has its own IP address and can run with one or two running containers. For Helm test, an additional IP address is required; each test pod runs one container.

Genesys recommends that RAA be located in the same region as the Genesys Info Mart database.

Secrets

RAA secret information is defined in the values.yaml file (line 89).

For information about configuring arbitrary UID, see Configure security.

Not applicable. RAA interacts only with the Genesys Info Mart database. Not applicable.
Draft:PEC-REP/Current/GIMPEGuide/PlanningGCA Draft:PEC-REP Before you begin GCA deployment Find out what to do before deploying GIM Config Adapter (GCA). Reporting GIMPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 Instructions are provided for a single-tenant deployment.


GIM Config Adapter (GCA) and GCA monitoring are the only services that run in the GCA Docker container. The Helm charts included with the GCA release provision GCA and any Kubernetes infrastructure necessary for GCA to run.

See Helm charts and containers for Genesys Info Mart for the Helm chart versions you must download for your release.

For information about how to download the Helm charts, see Downloading your Genesys Multicloud CX containers.

For information about setting up your Genesys Multicloud CX private edition platform, see Software requirements.

The following table lists the third-party prerequisites for GCA.

GCA uses object storage to store the GCA snapshot during processing. Like GSP, GCA supports using S3-compatible storage provided by OpenShift and Google Cloud Platform (GCP), and Genesys expects you to use the same storage account for GSP and GCA. If you want to use separate storage for GCA, follow the Create object storage instructions for GSP to create similar S3-compatible storage for GCA. No special network requirements.
  • Voice Tenant Service, which enables GCA to access the Configuration Server database. You must deploy the Voice Tenant Service before you deploy GCA.
    • Ensure that an appropriate user account is available for GCA to use to access the Configuration Database. The GCA user account requires at least read permissions.
    • You must also have your Tenant ID information available.
  • There are no strict dependencies between the Genesys Info Mart services, but the logic of your particular pipeline might require Genesys Info Mart services to be deployed in a particular order. Depending on the order of deployment, there might be temporary data inconsistencies until all the Genesys Info Mart services are operational. For example, GSP looks for the GCA snapshot when it starts; if GCA has not yet been deployed, GSP will encounter unknown configuration objects and resources until the snapshot becomes available.

For detailed information about the correct order of services deployment, see Order of services deployment.

Not applicable. GCA does not store information beyond an ephemeral snapshot. f05492f5-52ed-490a-b0d5-c318a4a7272b
Draft:PEC-REP/Current/GIMPEGuide/PlanningGIM Draft:PEC-REP Before you begin GIM deployment Find out what to do before deploying GIM. Reporting GIMPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 Instructions are provided for a single-tenant deployment. GIM and GIM monitoring are the only services that run in the GIM Docker container. The Helm charts included with the GIM release provision GIM and any Kubernetes infrastructure necessary for GIM to run.

See Helm charts and containers for Genesys Info Mart for the Helm chart version you must download for your release.

For information about how to download the Helm charts, see Downloading your Genesys Multicloud CX containers.

For information about setting up your Genesys Multicloud CX private edition platform, see Software requirements.

The following table lists the third-party prerequisites for GIM.

GIM uses PostgreSQL for the Info Mart database and, optionally, uses object storage to store exported Info Mart data.

PostgreSQL — the Info Mart database

The Info Mart database stores data about agent and interaction activity, Outbound Contact campaigns, and other services usage in your contact center. A subset of tables and views created, maintained, and populated by Reporting and Analytics Aggregates (RAA) provides the aggregated data on which Genesys CX Insights (GCXI) reports are based.

A sizing calculator for Genesys Multicloud CX private edition is under development. In the meantime, the interactive tool available for on-premises deployments might help you estimate the size of your Info Mart database; see the Genesys Info Mart 8.5 Database Size Estimator.

Genesys recommends a minimum of 3 IOPS per GB.

For information about creating the Info Mart database, see Create the Info Mart database.

Create the Info Mart database

Use any database management tool to create the Info Mart ETL database and user.

  1. Create the database.
  2. Create a user for the Genesys Info Mart services to use, and grant full permissions to the user for that database.
    This user’s account is used by Genesys Info Mart jobs to access the Info Mart database schema.
    The Info Mart schema name is public.
    Important
    Make a note of the database and user details, which you use to populate database-related Helm chart override values for GIM and GCA.
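The steps above can be sketched in PostgreSQL. The database, user, and password names are placeholders for illustration only; substitute your own:

```sql
-- Placeholder names for illustration only.
CREATE DATABASE gim;
CREATE USER gim_etl WITH PASSWORD 'changeit';
GRANT ALL PRIVILEGES ON DATABASE gim TO gim_etl;
-- Genesys Info Mart jobs use the default "public" schema in this database.
```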

Object storage — Data Export packages

The GIM Data Export feature enables you to export data from your Info Mart database. Unless you elect to store your exported data in a local directory, your Info Mart data is exported to an object store. GIM supports export to Azure Blob Storage or S3-compatible storage provided by OpenShift and Google Cloud Platform (GCP).

If you want to use S3-compatible storage, follow the Create object storage instructions for GSP to create the S3-compatible storage for GIM.

Important
GSP and GCA use object storage to store data during processing. For safety and security reasons, Genesys strongly recommends that you use a dedicated object storage account for the GIM persistent storage, and do not share the storage account created for GSP and GCA.

As another alternative, you can configure GIM to store your exported data in a local directory. In this case, you do not need to create the object storage.

No special network requirements.
  • You must have your Tenant ID information available.
  • There are no strict dependencies between the Genesys Info Mart services, but the logic of your particular pipeline might require Genesys Info Mart services to be deployed in a particular order. Depending on the order of deployment, there might be temporary data inconsistencies until all the Genesys Info Mart services are operational. For example, GCA might try to access the Info Mart database to synchronize configuration data; if GIM has not yet been deployed, the Info Mart database will be empty.

For detailed information about the correct order of services deployment, see Order of services deployment.

GIM provides full support for you to comply with Right of Access ("export") or Right of Erasure ("forget") requests from consumers and employees with respect to personally identifiable information (PII) in the Info Mart database.

For more information about how Genesys Info Mart implements support for GDPR requests, see Genesys Info Mart Support for GDPR. For details about the Info Mart database tables and columns that potentially contain PII, see the description of the CTL_GDPR_HISTORY table in the Genesys Info Mart on-premises documentation.

e65e00cb-c1c8-4fb8-9614-80ac07c3a4e3
Draft:PEC-REP/Current/GIMPEGuide/PlanningGSP Draft:PEC-REP Before you begin GSP deployment Find out what to do before deploying GIM Stream Processor (GSP). Reporting GIMPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 GIM Stream Processor (GSP) is the only service that runs in the GSP Docker container. The Helm charts included with the GSP release provision GSP and any Kubernetes infrastructure necessary for GSP to run.

See Helm charts and containers for Genesys Info Mart for the Helm chart version you must download for your release.

For information about how to download the Helm charts, see Downloading your Genesys Multicloud CX containers.


For information about setting up your Genesys Multicloud CX private edition platform, see Software requirements.

The following table lists the third-party prerequisites for GSP.

Like GCA, GSP uses S3-compatible storage to store data during processing. GSP stores data such as GSP checkpoints, savepoints, and high availability data. By default, GSP is configured to use Azure Blob Storage, but you can also use S3-compatible storage provided by other cloud platforms. Genesys expects you to use the same storage account for GSP and GCA.

Follow the instructions provided by the storage service provider of your choice to create the S3-compatible storage. For example:

  • For OpenShift, see the OpenShift Data Foundation (formerly OpenShift Container Storage) documentation about creating an Object Bucket Claim (OBC).
  • For GKE, see the Google Cloud Storage documentation about creating bucket storage.

Genesys Info Mart has no special requirements for the storage buckets you create.

To enable the Genesys Info Mart service to access your S3-compatible storage object, you need to know details such as the endpoint information, access key, and secret, in order to populate Helm chart override values for the service (see Configure S3-compatible storage).

Important
Note and securely store the bucket details, particularly the access key and secret, when you create the storage bucket. Depending on the cloud storage service you choose, you may not be able to recover this information subsequently.


No special network requirements. Network bandwidth must be sufficient to handle the volume of data to be transferred into and out of Kafka. There are no strict dependencies between the Genesys Info Mart services, but the logic of your particular pipeline might require Genesys Info Mart services to be deployed in a particular order. Depending on the order of deployment, there might be temporary data inconsistencies until all the Genesys Info Mart services are operational. For example, GSP looks for the GCA snapshot when it starts; if GCA has not yet been deployed, GSP will encounter unknown configuration objects and resources until the snapshot becomes available.

There are other private edition services you must deploy before Genesys Info Mart. For detailed information about the recommended order of services deployment, see Order of services deployment.

Not applicable, provided your Kafka retention policies have not been set to more than 30 days. GSP does not store information beyond ephemeral data used during processing.

Kafka configuration

Unless Kafka has been configured to auto-create topics, ensure that the Kafka topics GSP requires have been created in the Kafka configuration. The following table shows the topic names GSP expects to use. An entry in the Customizable GSP parameter column indicates that GSP supports using a customized topic name. If you use customized topic names, you must override the applicable values in the values.yaml file (see Override Helm chart values).

The topics represent various data domains. If a topic does not exist, GSP will never receive data for that domain. If the topic exists but the customizable parameter value is empty in the GSP configuration, data from that domain will be discarded.
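If you override topic names, you set the corresponding parameters (listed in the table below) in values.yaml. A minimal sketch, assuming the parameters sit under a gsp.kafka.topics block; the exact placement in your chart's values.yaml may differ:

```yaml
# Hypothetical placement; check the GSP values.yaml for the actual structure.
gsp:
  kafka:
    topics:
      digitalItx: my-digital-itx          # input: digital interactions
      digitalAgentStates: my-digital-as   # input: digital agent states
      interactions: my-gsp-ixn            # output: interactions
```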


Topic name | Customizable GSP parameter | Description

GSP consumes the following topics:

voice-callthread | (none) | Name of the input topic with voice interactions
voice-agentstate | (none) | Name of the input topic with voice agent states
voice-outbound | (none) | Name of the input topic with outbound (CX Contact) activity associated with either voice or digital interactions
digital-itx | digitalItx | Name of the input topic with digital interactions
digital-agentstate | digitalAgentStates | Name of the input topic with digital agent states
gca-cfg | cfg | Name of the input topic with configuration data

GSP produces the following topics:

gsp-ixn | interactions | Name of the output topic for interactions
gsp-sm | agentStates | Name of the output topic for agent states
gsp-outbound | outbound | Name of the output topic for outbound (CX Contact) activity
gsp-custom | custom | Name of the output topic for custom reporting
gsp-cfg | cfg | Name of the output topic for configuration reporting
c39fe496-c79e-4846-b451-1bc8bedb126b
Draft:PEC-REP/Current/PulsePEGuide/Planning Draft:PEC-REP Before you begin Find out what to do before deploying Genesys Pulse. Reporting PulsePEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 There are no known limitations.


For more information about how to download the Helm charts in JFrog Edge, see the suite-level documentation: Downloading your Genesys Multicloud CX containers.

To learn what Helm chart version you must download for your release, see Helm charts and containers for Genesys Pulse.

Genesys Pulse Containers

Container Description Docker Path
collector Genesys Pulse Collector <docker>/pulse/collector:<image-version>
cs_proxy Configuration Server Proxy <docker>/pulse/cs_proxy:<image-version>
init Init container, used for DB initialization <docker>/pulse/init:<image-version>
lds Load Distribution Server (LDS) <docker>/pulse/lds:<image-version>
monitor_dcu_push_agent Provides monitoring data from Stat Server and Genesys Pulse Collector <docker>/pulse/monitor_dcu_push_agent:<image-version>
monitor_lds_push_agent Provides monitoring data from LDS <docker>/pulse/monitor_lds_push_agent:<image-version>
pulse Genesys Pulse Backend <docker>/pulse/pulse:<image-version>
ss Stat Server <docker>/pulse/ss:<image-version>
userpermissions User Permissions service <docker>/pulse/userpermissions:<image-version>

Genesys Pulse Helm Charts

Helm Chart Containers Shared Helm Path
Init init yes <helm>/init-<chart-version>.tgz
Pulse pulse yes <helm>/pulse-<chart-version>.tgz
LDS cs_proxy, lds, monitor_lds_push_agent <helm>/lds-<chart-version>.tgz
DCU cs_proxy, ss, collector, monitor_dcu_push_agent <helm>/dcu-<chart-version>.tgz
Permissions cs_proxy, userpermissions <helm>/permissions-<chart-version>.tgz
Init Tenant init <helm>/init-tenant-<chart-version>.tgz
Monitor - yes <helm>/monitor-<chart-version>.tgz
OpenShift or GKE CLI must be installed.

For more information about setting up your Genesys Multicloud CX private edition platform, see Software requirements.

Logs Volume

Persistent Volume Size Type IOPS POD Containers Critical Backup needed
pulse-dcu-logs 10Gi RW high DCU csproxy, collector, statserver Y Y
pulse-lds-logs 10Gi RW high lds csproxy, lds Y Y
pulse-permissions-logs 10Gi RW high permissions csproxy, permissions Y Y
pulse-logs 10Gi RW high pulse pulse Y Y

The logs volume stores log files:

  • To use a persistent volume, set log.volumeType to pvc.
  • To use local storage, set log.volumeType to hostpath.
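For example, in values.yaml (the log.volumeType value is from the text; the surrounding structure may differ in your chart):

```yaml
log:
  volumeType: pvc    # persistent volume; use "hostpath" for local storage instead
```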

Genesys Pulse Collector Health Volume

Local Volume POD Containers
collector-health dcu collector, monitor-sidecar

The Genesys Pulse Collector health volume provides non-persistent storage for Genesys Pulse Collector health state files used for monitoring.

Stat Server Backup Volume

Persistent Volume Size Type IOPS POD Containers Critical Backup needed
statserver-backup 1Gi RWO medium dcu statserver N N

The Stat Server backup volume provides disk space for Stat Server's state backup, storing the server state between restarts of the container.

No special requirements. Ensure that the following services are deployed and running before you deploy Genesys Pulse:


  • Genesys Authentication:
  • Genesys Web Services and Applications
  • Agent Setup
  • Tenant Service:
    • The Tenant UUID (v4) is provisioned, example: "9350e2fc-a1dd-4c65-8d40-1f75a2e080dd"
    • The Tenant service is available as host:
      • GKE: "tenant-<tenant-uuid>.voice" port: 8888
      • OpenShift: "tenant-<tenant-uuid>.voice.svc.cluster.local." port: 8888
  • Voice Microservice:
    • The Voice service is available as host:
      • GKE: "tenant-<tenant-uuid>.voice" port: 8000
      • OpenShift: "tenant-<tenant-uuid>.voice.svc.cluster.local." port: 8000
Important
All services listed must be accessible from within the cluster where Genesys Pulse will be deployed.

For more information, see Order of services deployment.

Genesys Pulse supports the General Data Protection Regulation (GDPR). See Genesys Pulse Support for GDPR for details.
Draft:PrivateEdition/Current/TenantPEGuide/Planning Draft:PrivateEdition Before you begin Find out what to do before deploying the Tenant Service. Genesys Multicloud CX Private Edition TenantPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 For information about how to download the Helm charts, see Downloading your Genesys Multicloud CX containers.

See Helm charts and containers for Voice Microservices for the Helm chart version you must download for your release.

Containers

The Tenant Service has the following containers:

  • Core tenant service container
  • Database initialization and upgrade container
  • Role and privileges initialization and upgrade container
  • Solution-specific: pulse provisioning container

Helm charts

  • Tenant deployment
  • Tenant infrastructure
For information about setting up your Genesys Multicloud CX private edition platform, see Software Requirements.

The following table lists the third-party prerequisites for the Tenant Service.

Content coming soon
Content coming soon
For detailed information about the correct order of services deployment, see Order of services deployment.

The following prerequisites are required before deploying the Tenant Service:

  • Voice Platform and all its external dependencies must be deployed before proceeding with the Tenant Service deployment.
  • PostgreSQL 10 database management system must be deployed, and a database must be allocated as either a primary or a replica. For more information about a sample deployment of a standalone DBMS, see Third-party prerequisites.

In addition, if you expect to use Agent Setup or Workspace Web Edition after the tenant is deployed, Genesys recommends that you deploy GWS Authentication Service before proceeding with the Tenant Service deployment.

Specific dependencies

The Tenant Service is dependent on the following platform endpoints:

  • GWS environment API
  • Interaction service core
  • Interaction service vq

The Tenant Service is dependent on the following service component endpoints:

  • Voice Front End Service
  • Voice Redis (RQ) Service
  • Voice Config Service
Not applicable.
Draft:STRMS/Current/STRMSPEGuide/Planning Draft:STRMS Before you begin Find out what to do before deploying Event Stream. Event Stream STRMSPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 See Helm charts and containers for Event Stream for the Helm chart version you must download for your release.

For information about how to download the Helm charts, see Downloading your Genesys Multicloud CX containers.

Describe network requirements, including:
  • Required properties for ingress, such as:
    • Cookies usage
    • Header requirements (client IP and redirect, passthrough)
    • Session stickiness
    • Allowlisting (optional)
    • TLS (optional)
  • Cross-region bandwidth
  • External connections from the Kubernetes cluster to other systems. This includes connecting to Genesys Cloud CX for hybrid services (such as AI, WEM) as well as "mixed" environments where some components are still deployed as VMs. Note that mixed environments are mainly for transition periods when customers migrate from a classic premise environment to Genesys Multicloud CX private edition.
  • WAF Rules (specific only for services handling internet traffic)
  • Pod Security Policy
  • TLS/SSL Certificate configurations
Event Stream has dependencies on several other Genesys services. Genesys recommends that you provision and configure Event Stream after these services have been set up, so that if any issues arise during provisioning, you can be reasonably confident the fault lies in how Event Stream is provisioned rather than in a downstream service.

For a look at the high-level deployment order, see Order of services deployment.

Draft:TLM/Current/TLMPEGuide/Planning Draft:TLM Before you begin Find out what to do before deploying Telemetry Service. Telemetry Service TLMPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 NA Telemetry Service is composed of:
  • 1 Docker Container: tlm/telemetry-service:version
  • 1 Helm Chart: telemetry-service_version.tgz

For additional information about overriding Helm chart values, see Overriding Helm Chart values in the Genesys Multicloud CX Private Edition Guide.

For information about downloading Helm charts from JFrog Edge, see Downloading your Genesys Multicloud CX containers in the Setting up Genesys Multicloud CX Private Edition guide.

NA
NA For any kind of Telemetry deployment, the following service must be deployed and running before deploying the Telemetry service:

For a look at the high-level deployment order, see Order of services deployment.

17df197d-45b4-4d49-b269-f44d5bdfe5a1
Draft:UCS/Current/UCSPEGuide/Planning Draft:UCS Before you begin Find out what to do before deploying Universal Contact Service (UCS). Universal Contact Service UCSPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 Currently, UCS:
  • supports a single-region deployment model only.
  • does not support SSL communication with Elasticsearch.
  • requires a dedicated PostgreSQL deployment per customer.
Download the UCS related Docker containers and Helm charts from the JFrog repository.

See Helm charts and containers for Universal Contact Service for the Helm chart and container versions you must download for your release.

For more information on JFrog, refer to the Downloading your Genesys Multicloud CX containers topic in the Setting up Genesys Multicloud CX private edition document.

  • Kubernetes 1.17+
  • Helm 3.0
All data is stored in PostgreSQL, Elasticsearch, and the Nexus Upload Service, which are external to UCS. UCS requires the following Genesys components:
  • Genesys Authentication Service
  • GWS Environment Service
As part of the GDPR compliance procedure, the customer would send a request to Care providing information about the end user. Care would then open a ticket for the Engineering team to follow up on the request.

The engineering team would process the request:

GDPR request: Export Data

  • Request to UCS to get contact by ID: identify contact (if there is email address or phone number), or getContact (if there is a direct contact ID).
  • Request to UCS-X to get list of interactions for contact found.
  • Perform CSV export and attach resulting file to the ticket.

GDPR request: Forget me

  • Request to UCS-X to get contact by ID: identify contact (if there is email address or phone number), or getContact (if there is a direct contact ID).
  • Request to UCS-X to get list of interactions for contact found.
  • Delete all found interactions.
  • Re-check that all interactions for contact were removed.
  • Delete contact.
  • Re-check that contact was removed.
  • Update the ticket.
3fdcc389-c8a5-4c38-8ee7-d6ab8d1e5dd8
Draft:VM/Current/VMPEGuide/Planning Draft:VM Before you begin Find out what to do before deploying Voice Microservices. Voice Microservices VMPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 For information about how to download the Helm charts, see Downloading your Genesys Multicloud CX containers.

The following services are included with Voice Microservices:

  • Voice Agent State Service
  • Voice Config Service
  • Voice Dial Plan Service
  • Voice Front End Service
  • Voice Orchestration Service
  • Voice Registrar Service
  • Voice Call State Service
  • Voice RQ Service
  • Voice SIP Cluster Service
  • Voice SIP Proxy Service
  • Voice Voicemail Service
  • Voice Tenant Service

See Helm charts and containers for Voice Microservices for the Helm chart version you must download for your release.

For information about the Voicemail Service, see Before you begin in the Configure and deploy Voicemail section of this guide.

For information about the Tenant service, also included with Voice Microservices, see the Tenant Service Private Edition Guide.

For information about setting up your Genesys Multicloud CX private edition platform, see Software Requirements.

The following table lists the third-party prerequisites for Voice Microservices.

Content coming soon
Content coming soon
For detailed information about the correct order of services deployment, see Order of services deployment.

Multi-Tenant Inbound Voice: Voicemail Service

Customer data that is likely to identify an individual, or a combination of other held data to identify an individual is considered as Personally Identifiable Information (PII). Customer name, phone number, email address, bank details, and IP address are some examples of PII.

According to EU GDPR:

  • When a customer requests to access personal data that is available with the contact center, the PII associated with the client is exported from the database in client-understandable format. You use the Export Me request to do this.
  • When a customer requests to delete personal data, the PII associated with that client is deleted from the database within 30 days. However, the Voicemail service is designed in a way that the Customer PII data is deleted in one day using the Forget Me request.

Both Export Me and Forget Me requests depend only on Caller ID/ANI input from the customer. The following PII data is deleted or exported during the Forget Me or Export Me request process, respectively:

  • Voicemail Message
  • Caller ID/ANI

The GDPR feature is supported only when StorageInterface is configured as BlobStorage, and the Voicemail service is configured with an Azure storage account data store.

Adding caller_id tag during voicemail deposit

The index tag caller_id is included in voicemail messages and metadata blob files during voicemail deposit. Using the index tags, you can easily filter files for Forget Me or Export Me processing instead of searching every mailbox.

GDPR multi-region support

In the Voicemail service, all voicemail metadata files are stored in the master region, and voicemail messages are deposited and stored in their respective regions. Therefore, you must connect all the regions of a tenant to perform Forget Me, Undo Forget Me, or Export Me processes for GDPR inputs.

To provide multi-region support for GDPR, follow these steps while performing GDPR operation:

  1. Get the list of regions of a tenant.
  2. Ensure that the storage accounts in all regions are up. If any storage account is down, you cannot perform the GDPR operation.
  3. GDPR operates on the master region files first.
  4. Then, GDPR operates on all the non-master region files.

APIs

The Voicemail service provides APIs for the GDPR Export Me and Forget Me requests, authenticated with GWS. The APIs support any valid client ID. The API can process more than one user's data in a single API request. The API follows the same standard input format used in the current PEC (legacy component).

Forget Me and Export Me API Input.json Forget Me Undo Input.json

Voicemail service stores only the caller ANI. Therefore, the voicemail processes the records only with the "phone" parameter from the given input and does not include any other parameters.
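A hypothetical input sketch with only the "phone" parameter populated. The exact field names follow the standard input format defined by the existing PEC component, so verify the shape against that specification:

```json
{
  "data": [
    { "phone": "+15551234567" }
  ]
}
```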

Forget Me: The API for Forget Me deletes the PII data related to the consumer after one day, based on the API request. The files are deleted through the following operations:

  • Message and metadata files are reuploaded with forgetme=true and case_id=[case_id_value] index tag during the Forget Me API call.
  • Files are deleted using Azure lifecycle management rules. A rule named forgetme is created in Azure lifecycle management. The forgetme rule deletes a file if it meets the following conditions:
    • The file is not modified in a day
    • The file has forgetme=true index tag

The forgetme rule is executed automatically by Azure lifecycle management once a day. Therefore, there are limitations in deleting files, which are captured in the Limitations section.
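The forgetme rule described above can be sketched as an Azure Storage lifecycle management policy. This is an illustrative policy document matching the stated conditions (the rule and tag names come from the text); verify it against your storage account configuration:

```json
{
  "rules": [
    {
      "enabled": true,
      "name": "forgetme",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "baseBlob": {
            "delete": { "daysAfterModificationGreaterThan": 1 }
          }
        },
        "filters": {
          "blobTypes": [ "blockBlob" ],
          "blobIndexMatch": [
            { "name": "forgetme", "op": "==", "value": "true" }
          ]
        }
      }
    }
  ]
}
```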

  • Undo Forget Me: The API to undo the Forget Me request with the same case id.
    If the admin/user has wrongly requested/entered the caller ANI, then the voicemail service provides an option to undo the Forget Me request using another API call with the same case ID, to avoid data loss.
  • Export Me: The API for Export Me returns the list of message IDs with message media URL to download the media.
    • The media URL is also authenticated and authorized with the GWS token.
  • The Voicemail Service is exposed via the Kubernetes service and can be accessed by URL in any region. (The FQDN remains the same in all the regions where the Voicemail service is deployed.)
  • Append the API URL with the above-mentioned base URL for accessing the APIs.
  • The Voicemail service authenticates and authorizes each request with GWS. The Voicemail service requires the OAuth token in the header for the following API calls:
    • Authorization: Basic <token> (or) Bearer <token>
    • Contact center ID is taken from the authorization token
  • Here is the API definition:
    • messageId: Unique message ID of the message.
  • The API sample response is given based on the sample input mentioned above.
Draft:WebRTC/Current/WebRTCPEGuide/Planning Draft:WebRTC Before you begin Find out what to do before deploying WebRTC. WebRTC WebRTCPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 All prerequisites described under Third-party prerequisites, Genesys dependencies, and Secrets have been met. Download the Helm charts from the webrtc folder in the JFrog repository. See Helm charts and containers for WebRTC for the Helm chart version you must download for your release.

For information about how to download the Helm charts in JFrog Edge, see the suite-level documentation: Downloading your Genesys Multicloud CX containers.

WebRTC contains the following containers:

Artifact Type Functionality JFrog Containers and Helm charts
webrtc webrtc gateway container Handles agents’ sessions, signalling, and media traffic. It also performs media transcoding. https://<jfrog artifactory>/<docker location>/webrtc/webrtc/
coturn coturn container Utilizes TURN functionality https://<jfrog artifactory>/<docker location>/webrtc/coturn/
webrtc-service Helm chart https://<jfrog artifactory>/<helm location>/ webrtc-service-<version_number>.tgz
For information about setting up your Genesys Multicloud CX private edition platform, see Software requirements.

The following are the third-party prerequisites for WebRTC:

WebRTC does not require persistent storage for any purposes except Gateway and CoTurn logs. The following table describes the storage requirements:
Persistent Volume Size Type IOPS Functionality Container Critical Backup needed
webrtc-gateway-log-volume 50Gi RW medium storing gateway log files webrtc Y Y
webrtc-coturn-log-volume 50Gi RW medium storing coturn log files coturn N Y

A Persistent Volume and Persistent Volume Claim will be created if they are configured. The size setting for them is optional and should be adjusted according to the log rates described below:

Gateway:

  • Idle: 0.5 MB/hour per agent
  • Active call: approximately 0.2 MB per call per agent

Example: For 24 full hours of work, where each agent's call rate is constant at around 7 to 10 calls per hour, 1000 agents require approximately 500 GB, with approximately 20 GB consumed per hour.

CoTurn:

For 1000 connected agents, the log rate is approximately 3.6 GB/hour. It scales linearly with the number of agents and remains constant whether or not calls are in progress.
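The per-agent rates above can be combined into a rough sizing sketch, shown below in Python. The function names are ours, not part of the product, and this naive combination yields less than the approximately 500 GB worked example for the Gateway, so treat the larger figure as the safer provisioning target.

```python
# Rough log-storage sizing from the stated per-agent rates (illustrative sketch).

def gateway_log_mb(agents: int, hours: float, calls_per_agent_hour: float) -> float:
    """Gateway logs: 0.5 MB/hour per agent when idle, plus ~0.2 MB per call."""
    idle = 0.5 * agents * hours
    calls = 0.2 * agents * hours * calls_per_agent_hour
    return idle + calls

def coturn_log_gb(agents: int, hours: float) -> float:
    """CoTurn logs: ~3.6 GB/hour per 1000 connected agents, call-independent."""
    return 3.6 * (agents / 1000) * hours

# 1000 agents, 24 hours, 10 calls per agent per hour:
print(gateway_log_mb(1000, 24, 10) / 1024)  # gateway logs in GB (~58.6)
print(coturn_log_gb(1000, 24))              # CoTurn logs in GB (~86.4)
```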

Ingress

WebRTC has the following ingress requirements:

  • Session stickiness based on a cookie is mandatory. The stickiness cookie must contain the following attributes:
    • SameSite=None
    • Secure
    • Path=/
  • No specific header requirements
  • Whitelisting (optional)
  • TLS is mandatory
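The required stickiness-cookie attributes can be illustrated with Python's standard http.cookies module (Python 3.8+ for SameSite support). The cookie name and value below are placeholders; the actual name depends on your ingress controller.

```python
# Illustration of the required stickiness-cookie attributes.
# The cookie name/value are placeholders; your ingress controller defines them.
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["STICKYSESSION"] = "gateway-0"         # placeholder name and value
cookie["STICKYSESSION"]["samesite"] = "None"  # required: SameSite=None
cookie["STICKYSESSION"]["secure"] = True      # required: Secure
cookie["STICKYSESSION"]["path"] = "/"         # required: Path=/

header = cookie.output(header="Set-Cookie:")
print(header)
```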

Secrets

WebRTC supports three types of secrets: CSI driver, Kubernetes secrets, and environment variables.

Important
GWS Secret for WebRTC should contain the following grants:

?'"`UNIQ--source-0000001B-QINU`"'?

For GWS secrets, the CSI driver or Kubernetes secret must contain the gwsClient and gwsSecret key-value pairs.

The GWS secret for WebRTC must be created in the WebRTC namespace using the following specification as an example: ?'"`UNIQ--source-0000001D-QINU`"'?

ConfigMaps

Not Applicable

WAF Rules

The following Web Application Firewall (WAF) rules should be disabled for WebRTC:

WAF Rule Group Rule IDs
REQUEST-920-PROTOCOL-ENFORCEMENT 920300, 920440
REQUEST-913-SCANNER-DETECTION 913100, 913101
REQUEST-921-PROTOCOL-ATTACK 921150
REQUEST-942-APPLICATION-ATTACK-SQLI 942430


Pod Security Policy

Not applicable

Auto-scaling

WebRTC and CoTurn auto-scaling is performed by the KEDA operator. The auto-scaling feature requires Prometheus metrics. For more information about KEDA, see https://keda.sh/docs/2.0/concepts/.

Use the following option in the values.yaml file to enable the deployment of auto-scaling objects:

?'"`UNIQ--source-0000001F-QINU`"'?

You can configure the polling interval and the maximum number of replicas separately for Gateway pods and CoTurn pods using the following options:

?'"`UNIQ--source-00000021-QINU`"'?

  • Gateway Pod Scaling
    • Sign-ins

?'"`UNIQ--source-00000023-QINU`"'?

  • CPU-based scaling

WebRTC auto-scaling is also performed based on CPU and memory usage. The following shows how CPU and memory limits should be configured for Gateway pods in the values.yaml file:

?'"`UNIQ--source-00000025-QINU`"'?

  • CoTurn Pod scaling

Auto-scaling of CoTurn is performed based on CPU and memory usage only. The following shows how CPU and memory limits should be configured for CoTurn pods in the values.yaml file:

?'"`UNIQ--source-00000027-QINU`"'?

SMTP settings

Not applicable

WebRTC depends on several other Genesys services, so it is recommended that you provision and configure WebRTC after these services have been set up.
Service Functionality
GWS Used to read environment and tenant configuration
GAuth Used to authenticate the WebRTC service and agents
GVP Used for voice calls - conferences, recording, and so on
Voice microservice Used to handle voice calls
Tenant microservice Used to store tenant configuration

For detailed information about the correct order of services deployment, see Order of services deployment.

Not applicable d703e174-b039-43c9-8859-e25b3a7feb22
GVP/Current/GVPPEGuide/Planning GVP Before you begin Find out what to do before deploying Genesys Voice Platform. Genesys Voice Platform GVPPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167
  • Third-party integrations like ASR/TTS or recording use cases are not supported in this initial release.
  • Resource Manager does not use gateway LRG configurations. Instead, it uses the contact center ID coming from SIP Server as gvp-tenant-id in the INVITE message to identify the tenant and pick the IVR Profiles.
  • Only single MCP LRG is supported per GVP deployment.
  • Only the specific component configuration options documented in Helm values.yaml overrides can be modified. Other configuration options can't be changed.
  • DID/DID groups are managed as part of Designer applications (Applications)
  • SIP TLS / SRTP are currently not supported.
Download the GVP-related Docker containers and Helm charts from the JFrog repository. For Docker container and Helm chart versions, refer to Helm charts and containers for Genesys Voice Platform.

For more information on JFrog, refer to the Downloading your Genesys Multicloud CX containers topic in the Setting up Genesys Multicloud CX private edition document.

Media Control Platform

Storage requirement for production (min)

Persistent Volume Size Type IOPS Functionality Container Critical Backup needed
recordings-volume 100Gi RWO high Storing recordings, dual AZ, gvp-mcp, rup Y Y
rup-volume 40Gi RWO high Storing recordings temporarily, dual AZ, rup Y Y
log-pvc 50Gi RWO medium storing log files gvp-mcp Y Y

Storage requirements for Sandbox

Persistent Volume Size Type IOPS Functionality Container Critical Backup needed
recordings-volume 50Gi RWO high Storing recordings, dual AZ, gvp-mcp, rup Y Y
rup-volume 20Gi RWO high Storing recordings temporarily, dual AZ, rup Y Y
log-pvc 25Gi RWO medium storing log files gvp-mcp Y Y

Resource Manager

Storage requirement for production (min)

Persistent Volume Min Size Type IOPS Functionality Container Critical Backup needed
billingpvc 20Gi RWO high billing gvp-rm Y Y
log-pvc 50Gi RWO medium storing log files gvp-rm Y Y

Storage requirements for Sandbox

Persistent Volume Min Size Type IOPS Functionality Container Critical Backup needed
billingpvc 20Gi RWO high billing gvp-rm Y Y
log-pvc 10Gi RWO medium storing log files gvp-rm Y Y

Service Discovery

Not applicable

Reporting Server

Storage requirement for production (min)

Persistent Volume Min Size Type IOPS Functionality Container Critical Backup needed
billing-pvc 20Gi RWO High Stores ActiveMQ data and config information gvp-rs Y Y

Storage requirement for Sandbox

Persistent Volume Min Size Type IOPS Functionality Container Critical Backup needed
billing-pvc 10Gi RWO High Stores ActiveMQ data and config information gvp-rs Y Y

GVP Configuration Server

Not applicable

Media Control Platform

Ingress

Not applicable

HA/DR

MCP is deployed with autoscaling in all regions. For more details, see the section Auto-scaling.

Calls are routed to active MCPs by GVP Resource Manager (RM); if an MCP instance terminates, calls are routed to a different MCP instance.

Cross-region bandwidth

MCPs are not expected to make cross-region requests in normal operation.

External connections

Not applicable

Pod Security Policy

All containers run as the genesys user (500), a non-root user. ?'"`UNIQ--source-00000037-QINU`"'?

SMTP Settings

Not applicable

TLS/SSL Certificates configurations

Not applicable

Resource Manager

Ingress

Not applicable

HA/DR

Resource Manager is deployed as an active-active pair.

Cross-region bandwidth

Resource Manager is deployed per region. There is no cross region deployment.

External connections

Not applicable

Pod Security Policy

All containers run as the genesys user (500), a non-root user. ?'"`UNIQ--source-00000039-QINU`"'?

SMTP Settings

Not applicable

TLS/SSL Certificates configurations

Not applicable

Service Discovery

Ingress

Not applicable

HA/DR

Service Discovery is a singleton service that is restarted if it shuts down unexpectedly or becomes unavailable.

Cross-region bandwidth

Service Discovery is not expected to make cross-region requests in normal operation.

External connections

Not applicable

Pod Security Policy

All containers run as the genesys user (500), a non-root user. ?'"`UNIQ--source-0000003B-QINU`"'?

SMTP Settings

Not applicable

TLS/SSL Certificates configurations

Not applicable

Reporting Server

Ingress

Not applicable

HA/DR

Reporting Server is deployed as a single pod service.

Cross-region bandwidth

Reporting Server is deployed per region. There is no cross region deployment.

External connections

Not applicable

Pod Security Policy

All containers run as the genesys user (500), a non-root user. ?'"`UNIQ--source-0000003D-QINU`"'?

SMTP Setting

Not applicable

TLS/SSL Certificates configurations

Not applicable

GVP Configuration Server

Ingress

Not applicable

HA/DR

GVP Configuration Server is deployed as a singleton. If GVP Configuration Server crashes, a new pod is created. The GVP services continue to service calls while GVP Configuration Server is unavailable; only new configuration changes, such as new MCP pods, are not available.

Cross-region bandwidth

GVP Configuration Server is not expected to make cross-region requests in normal operation.

External connections

External service Functionality
PostgreSQL database

Pod Security Policy

All containers run as the genesys user (500), a non-root user. ?'"`UNIQ--source-0000003F-QINU`"'?

SMTP Settings

Not applicable

TLS/SSL Certificates configurations

Not applicable

N/A

Media Control Platform

Service Functionality
Consul Consul service must be deployed before deploying MCP for proper service registration in GVP Configuration Server and RM.

Resource Manager

Service Functionality
GVP Configuration Server GVP Configuration Server must be deployed before RM for proper operation.

Service Discovery

Service Functionality
Consul Consul service must be deployed before deploying Service Discovery for proper service registration in GVP Configuration Server and Resource Manager.

Reporting Server

Service Functionality
GVP Configuration Server GVP Configuration Server must be deployed before RS for proper operation.

GVP Configuration Server

N/A

This section describes product-specific aspects of Genesys Voice Platform support for the European Union's General Data Protection Regulation (GDPR) in premise deployments. For general information about Genesys support for GDPR compliance, see General Data Protection Regulation.

Warning

Disclaimer: The information contained here is not considered final. This document will be updated with additional technical information.

Data Retention Policies

GVP has configurable retention policies that allow expiration of data. GVP allows aggregating data for items like peak and call volume reporting; the aggregated data is anonymous. Detailed call detail records include DNIS and ANI data. The Voice Application Reporter (VAR) data could potentially contain personal data and would have to be deleted when requested. The log data files can contain sensitive information (possibly masked), but the data must be rotated/expired frequently to meet the needs of GDPR.

Configuration Settings

Media Server

Media Server is capable of storing data and sending alarms which can potentially contain sensitive information, but by default, the data will typically be automatically cleansed (by the log rollover process) within 40 days.

The location of these files can be configured in the GVP Media Control Platform Configuration [default paths are shown below]:

  • vxmli:recordutterance-path = $InstallationRoot$/utterance/
  • vxmli:recording-basepath = $InstallationRoot$/record/
  • Netann:record-basepath = $InstallationRoot$/record
  • msml:cpd-record-basepath = $InstallationRoot$/record/
  • msml:record-basepath = $InstallationRoot$
  • msml:record-irrecoverablerecordpostdir = $InstallationRoot$/cache/record/failed
  • mpc:recordcachedir = $InstallationRoot$/cache/record
  • calllog:directory = $InstallationRoot$/callrec/

Log files and temporary files can be saved.

The location of these files can be configured in the GVP Media Control Platform Configuration [default paths are shown below]:

  • vxmli:logdir = $InstallationRoot$/logs/
  • vxmli:tmpdir = $InstallationRoot$/tmp/
  • vxmli:directories-save_tempfiles = $InstallationRoot$/tmp/

Note: Changing the default values of the above MCP options is not supported in the initial Private Edition release.

Also, additional sinks are available where alarms and potentially sensitive information can be captured. See Table 6 and Appendix H of the Genesys Voice Platform User’s Guide for more information. The metrics can be configured in the GVP Media Control Platform configuration:

  • ems.log_sinks = MFSINK|DATAC|TRAPSINK
  • ems:metricsconfig-DATAC = *
  • ems:dc-default-metricsfilter = 0-16,18,25,35,36,41,52-55,74,128,136-141
  • ems.metricsconfig.MFSINK = 0-16,18-41,43,52-56,72-74,76-81,127-129,130,132-141,146-152

GVP Resource Manager

Resource Manager is capable of storing data and sending alarms and potentially sensitive information, but by default, the data will typically be automatically cleansed (by the log rollover process) within 40 days.

Customers are advised to understand the GVP logging (for all components) and understand the sinks (destinations) for information which the platform can potentially capture. See Table 6 and Appendix H of the Genesys Voice Platform User’s Guide for more information.

GVP Reporting Server

The Reporting Server is capable of storing/sending alarms and potentially sensitive information, but by default, these components process but do not store consumer PII. Customers are advised to understand the GVP logging (for all components) and understand the sinks (destinations) for information which the platform can potentially capture. See Table 6 and Appendix H of the Genesys Voice Platform User’s Guide for more information.

By default, Reporting Server is designed to collect statistics and other user information. Retention period of this information is configurable, with most data stored for less than 40 days. Customers should work with their application designers to understand what information is captured as part of the application, and, whether or not the data could be considered sensitive.

You can change these settings as needed by using Helm chart overrides in values.yaml.

Data Retention Specific Settings

  • rs.db.retention.operations.daily.default: "40"
  • rs.db.retention.operations.monthly.default: "40"
  • rs.db.retention.operations.weekly.default: "40"
  • rs.db.retention.var.daily.default: "40"
  • rs.db.retention.var.monthly.default: "40"
  • rs.db.retention.var.weekly.default: "40"
  • rs.db.retention.cdr.default: "40"

Identifying Sensitive Information for Processing

The following example demonstrates how to find this information in the Reporting Server database – for the example where ‘Session_ID’ is considered sensitive:

  • select * from dbo.CUSTOM_VARS where session_ID = '018401A9-100052D6';
  • select * from dbo.VAR_CDRS where session_ID = '018401A9-100052D6';
  • select * from dbo.EVENT_LOGS where session_ID = '018401A9-100052D6';
  • select * from dbo.MCP_CDR where session_ID = '018401A9-100052D6';
  • select * from dbo.MCP_CDR_EXT where session_ID = '018401A9-100052D6';

An example of a SQL query which might be used to understand if specific information is sensitive: ?'"`UNIQ--source-00000041-QINU`"'?

GWS/Current/GWSPEGuide/Planning GWS Before you begin Find out what to do before deploying Web Services and Applications. Genesys Web Services and Applications GWSPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 Web Services and Applications (GWS) in Genesys Multicloud CX private edition is made up of multiple containers and Helm charts. The pages in this "Configure and deploy" chapter walk you through how to deploy the following Helm charts:
  • GWS services (gws-services) - all the GWS components.
  • GWS ingress (gws-ingress) - provides internal and external access to GWS services. Internal ingress is used for cross-component communication inside the GWS deployment. It also can be used by other clients located inside the same Kubernetes cluster. External ingress provides access to GWS services to clients located outside the Kubernetes cluster. If you are deploying Web Services and Applications in a single namespace with other private edition services, then you do not need to deploy GWS ingress.

GWS also includes a Helm chart for Nginx (wwe-nginx) for Workspace Web Edition - see the Workspace Web Edition Private Edition Guide for details about how to deploy this chart.

See Helm charts and containers for Genesys Web Services and Applications for the Helm chart versions you must download for your release.

For information about downloading Helm charts from JFrog Edge, see Downloading your Genesys Multicloud CX containers.

Install the prerequisite dependencies listed in the Third-party services table before you deploy Web Services and Applications. See Software requirements for a full list of prerequisites and third-party services required by all Genesys Multicloud CX private edition services. GWS uses PostgreSQL to store tenant information, Redis to cache session data, and Elasticsearch to store monitored statistics for fast access. If you set up any of these services as dedicated services for GWS, they have the following minimal requirements:

PostgreSQL

  • CPU: 2
  • RAM: 8 GB
  • HDD: 50 GB

Redis

  • 2 nodes:
    • CPU: 2
    • RAM: 8 GB
    • HDD: 20 GB

Elasticsearch

  • 3 "master" nodes:
    • CPU: 2
    • RAM: 8 GB
    • HDD: 20 GB
  • 4 "data" nodes:
    • CPU: 4
    • RAM: 16 GB
    • HDD: 20 GB
GWS ingress objects support Transport Layer Security (TLS) version 1.2 for a secure connection between Kubernetes cluster ingress and GWS ingress. TLS is disabled by default, but you can configure it for internal and external ingress by overriding the entryPoints.internal.ingress.tls and entryPoints.external.ingress.tls sections of the GWS ingress Helm chart.

For example:

entryPoints:
  external:
    ingress:
      tls:
        - secretName: gws-secret-ext
          hosts:
            - gws.genesys.com

In the example above:

  • secretName is the name of the Kubernetes secret that contains the certificate. The secret is a prerequisite and must be created before you deploy GWS Ingress.
  • hosts is a list of the fully qualified domain names that should use the certificate. The list must be the same as the value configured for the entryPoints.external.ingress.hosts parameter.

Cookies

GWS components use cookies for the following purposes:

  • identify HTTP/HTTPS user sessions
  • identify CometD user sessions
  • support session stickiness

You can use any of the following browsers for UIs:

  • Chrome 75+
  • Firefox 68+
  • Firefox ESR 60.9
  • Microsoft Edge
Genesys Web Services and Applications must be deployed after Genesys Authentication.

For a look at the high-level deployment order, see Order of services deployment in the Setting up Genesys Multicloud CX Private Edition guide.

?'"`UNIQ--nowiki-00000008-QINU`"'?
IXN/Current/IXNPEGuide/Planning IXN Before you begin Find out what to do before deploying Interaction Server. Interaction Server IXNPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 The current version of IXN Server:
  • supports single-region model of deployment only
  • does not support scaling or HA
  • requires dedicated PostgreSQL deployment per customer
Available IXN containers can be found under the following names in the registry:
  • ixn/ixn_vq_node
  • ixn/ixn_node
  • ixn/interaction_server

Available Helm charts can be found under the name ixn-<version>.

For information about downloading Genesys containers and Helm charts from JFrog Edge, see the suite-level documentation: Downloading your Genesys Multicloud CX containers.

The following are the minimum versions supported by IXN Server:
  • Kubernetes 1.17+
  • Helm 3.0
  • Postgres Database v10+
  • Redis 6.0+
  • Kafka 0.11+
If logging to files is configured for IXN Server, a storage volume must be mounted to the IXN Server container. The storage must be capable of writing up to 100 MB/min, and up to 10 MB/s for 2 minutes at peak. The storage size depends on the logging configuration.

For the storage characteristics of the IXN Server database, refer to the PostgreSQL documentation.

Contact your account representative if you need assistance with sizing calculations.

Not applicable Not applicable Tenant service. For more information, refer to the Tenant Service Private Edition Guide.


PEC-AD/Current/WWEPEGuide/Planning PEC-AD Before you begin Find out what to do before deploying Workspace Web Edition. Agent Desktop WWEPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 There are no limitations or assumptions related to the deployment. The Workspace Web Edition Helm charts are included in the Genesys Web Services (GWS) Helm charts. You can access them when you download the GWS Helm charts from JFrog using your credentials.

See Helm charts and containers for Genesys Web Services and Applications for the Helm chart version you must download for your release.

For information about downloading Genesys Helm charts from JFrog Edge, refer to this article: Downloading your Genesys Multicloud CX containers.

There are no specific storage requirements for Workspace Web Edition. Network requirements include:
  • Required properties for ingress:
    • Cookies usage: None
    • Header requirements (client IP & redirect, passthrough): None
    • Session stickiness: None
    • Allowlisting - optional: None
    • TLS for ingress (optional; you can enable or disable TLS on the connection): Through annotation, like any other UI or API in the solution
  • Cross-region bandwidth: N/A
  • External connections from the Kubernetes cluster to other systems: N/A
  • WAF Rules (specific only for services handling internet traffic): N/A
  • Pod Security Policy: N/A
  • High-Availability/Disaster Recovery: Refer to High availability and disaster recovery
  • TLS/SSL Certificate configurations: No specific requirements
You can use any of the supported browsers to run Workspace Agent Desktop on the client side.

Mandatory Dependencies

The following services must be deployed and running before deploying the WWE service. For more information, refer to Order of services deployment.

  • Genesys Authentication Service:
    • A redirect must be configured in Auth/Environment to allow an agent to log in from the WWE URL. The redirect should be configured in the Auth onboarding script, according to the DNS assigned to the WWE service.
  • GWS services:
    • The CORS rules for WWE URLs must be configured in GWS. This should be configured in the GWS onboarding script, according to the DNS assigned to the WWE service.
    • The GWS API URL should be specified at the WWE deployment time as part of the Helm values.
  • TLM service:
    • The CORS rules for the domain where WWE is declared must be configured in Telemetry Service (TLM). For example: ?'"`UNIQ--nowiki-000001B2-QINU`"'?

Optional Dependencies

Depending on the deployed architecture, the following services must be deployed and running before deploying the WWE service:

  • WebRTC Service: To allow WebRTC in the browser
  • Telemetry Service: To allow browser observability (metrics and logs)

Miscellaneous desktop-side optional dependencies

The following software must or might be deployed on agent workstations to allow agents to leverage the WWE service:

  • Mandatory: A browser referenced in the supported browser list.
  • Optional: Genesys Softphone: a SIP or WebRTC softphone to handle the voice channel of agents.
Workspace Web Edition does not have specific GDPR support.
PEC-CAB/Current/CABPEGuide/Planning PEC-CAB Before you begin Find out what to do before deploying Genesys Callback. Callback CABPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 Genesys Engagement Service (GES) is the only service that runs in the GES Docker container. The Helm charts included with the GES release provision GES and any Kubernetes infrastructure necessary for GES to run, such as load balancing, autoscaling, ingress control, and monitoring integration.

For information about how to download the Helm charts, see Downloading your Genesys Multicloud CX containers.

See Helm charts and containers for Callback for the Helm chart version you must download for your release.

For information about setting up your Genesys Multicloud CX private edition platform, see Software requirements. The primary contributor to the size of a callback record is the amount of user data that is attached to a callback. Since this is an open-ended field, and the composition will differ from customer to customer, it is difficult to state the precise storage requirements of GES for a given deployment. To assist you, the following table lists the results of testing done in an internal Genesys development environment and shows the impact that user data has when it comes to the storage requirements for both Redis and Postgres.
Test Redis size Postgres size
10,000 scheduled callbacks with no user data 26.51 MB 41.1 MB
10,000 scheduled callbacks with 10 KB of user data 64.44 MB 252.91 MB
10,000 scheduled callbacks with 100 KB of user data 110.58 MB 595.79 MB

Note: This is 100 KB of randomized string data in a single field in the user data.

Hardware requirements

Genesys strongly recommends the following hardware requirements to run GES with a single tenant. The requirements are based on running GES in a multi-tenanted environment and scaled down accordingly. Use these guidelines, coupled with the callback storage information listed above, to gauge the precise requirements needed to ensure that GES runs smoothly in your deployment.

GES

(Based on t3.medium)

  • vCPUs: 1
  • Memory: 2 GiB
  • Network burst: 5 Gbps

Redis

(Based on cache.r5.large) Redis is essential to GES service availability. Deploy two Redis caches, where the second cache acts as a replica of the first. For more information, see Architecture.

GES requires a dedicated, non-clustered Redis instance. Callback data is stored in Redis memory.

  • vCPUs: 1
  • Memory: 8 GiB
  • Network burst: 10 Gbps

PostgreSQL

(Based on db.t3.medium)

  • vCPUs: 2
  • Memory: 4 GiB
  • Network burst: 5 Gbps
  • Storage: 100 GiB

Sizing calculator

The information in this section is provided to help you determine what hardware you need to run GES and third-party components. The information and formulas are based on an analysis of database disk storage and Redis memory usage requirements for callback data. The numbers provided here include only storage and memory usage for callbacks. Additional storage and memory is required for configuration data and basic operations.

Requirements per callback

Each callback record (excluding user data) requires approximately 6.5 to 7.0 kB of database disk storage, plus additional disk storage for the user data. Each kB of user data consumes approximately 3.0 kB of disk storage.

Each callback record (excluding user data) requires approximately 4.5 to 5.5 kB of Redis memory, plus an additional 1.25 kB for each kB of user data.

Use the following formulas to estimate disk storage and Redis memory requirements:

  • Estimate database disk storage requirements for callback data:
    <number of callbacks per day> × (7 kB + (3 kB × <kB of user data per callback>)) × 14 days
  • Estimate Redis memory requirements for callback data:
    <number of callbacks per day> × (5.5 kB + (1.25 kB × <kB of user data per callback>)) × 14 days

For example, if a tenant has an average of 100,000 callbacks per day with 1kB user data in each callback:

  • The database storage requirement is approximately 14 GB.
  • The Redis memory requirement is approximately 9.5 GB.

NOTE: Each callback record is stored for 14 days. If you average about 10k scheduled callbacks every day, and the scheduled callbacks are all booked as far out as possible (that is, 14 days in the future), the number of callbacks to use in storage and memory calculations is 28 days × 10k callbacks per day = 280k callbacks.
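The storage and memory formulas above can be checked with a short script. The function names are ours, introduced only for illustration; the per-callback constants come from the text.

```python
# Sketch of the callback sizing formulas above (kB per callback, 14-day retention).

RETENTION_DAYS = 14

def db_storage_gb(callbacks_per_day: int, user_data_kb: float) -> float:
    """Database disk: ~7 kB per callback plus 3 kB per kB of user data."""
    kb = callbacks_per_day * (7 + 3 * user_data_kb) * RETENTION_DAYS
    return kb / 1_000_000  # 1 GB = 1,000,000 kB

def redis_memory_gb(callbacks_per_day: int, user_data_kb: float) -> float:
    """Redis memory: ~5.5 kB per callback plus 1.25 kB per kB of user data."""
    kb = callbacks_per_day * (5.5 + 1.25 * user_data_kb) * RETENTION_DAYS
    return kb / 1_000_000

# 100,000 callbacks/day with 1 kB of user data each:
print(db_storage_gb(100_000, 1))    # 14.0 GB
print(redis_memory_gb(100_000, 1))  # 9.45 GB
```

This reproduces the worked example of approximately 14 GB of database storage and 9.5 GB of Redis memory.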

Redis operations

The Redis operations primarily update the connectivity status to other services such as Tenant Service (specifically ORS and URS) and Genesys Web Services and Applications (GWS).

When GES is idle (zero callbacks in the past, no active callback sessions, no scheduled callbacks), GES generates about 50 Redis operations per second per GES node per tenant.

Each Immediate callback generates approximately 110 Redis operations from its creation to the end of the ORS session.

For Scheduled callbacks, assuming each callback generates 110 Redis operations when the ORS session is active (based on Immediate callback numbers), there is 1 additional Redis operation for each minute that a callback is scheduled.

For example, if a callback is scheduled for 1 hour from the time it was created, the number of Redis operations is approximately 60 + 110 = 170.

For a callback scheduled for 1 day from the time it was created, it generates approximately 60 × 24 + 110 = 1550 Redis operations, using the following formula for the number of Redis operations per callback:
<number of callbacks> × (110 + <number of minutes until scheduled time>)

Because the longevity of a callback ORS session depends on the estimated wait time (EWT), the total number of Redis operations performed by GES per minute varies, based on both the number of callbacks in the system and the EWT of the callbacks.

Use the following formula to estimate the number of Redis operations performed per minute:
Total number of Redis operations per minute = (50 base GES Redis operations per second × 60 seconds) + <number of upcoming scheduled callbacks in the system> + ((<total number of active callbacks> / <EWT>) × 110)

Where:

  • Total number of active callbacks = <number of active immediate callbacks> + <number of active scheduled callbacks>, and
  • Number of active scheduled callbacks = (<number of scheduled callbacks per time slot> / <time slot duration>) × <EWT>

For example, let's say we have the following scenario:

  • Scheduled callbacks:
    • Time slot duration = 15 minutes
    • Maximum capacity per time slot = 100
    • Business hours = 24x7
    • Assume that all time slots are fully booked for the next 14 days
  • Number of active immediate callbacks = 1,000
  • Estimated wait time = 90 minutes

Using the preceding formulas, estimate the Redis operations per minute:

  • Total number of scheduled callbacks = (100 × (60 / 15)) × 24 × 14 = 134,400
  • Number of active scheduled callbacks = (100 / 15) × 90 = 600
  • Number of upcoming scheduled callbacks = <total number of scheduled callbacks> - <number of active scheduled callbacks> = (134,400 - 600) = 133,800
  • Total number of active callbacks = 1,000 + 600 = 1,600
  • Total number of Redis operations per minute = (50 × 60) + 133,800 + ((1,600 / 90) × 110) = 138,756
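The Redis-operation estimates above can be expressed as a short script. The function names are ours, for illustration only; the constants and the worked example come from the text.

```python
# Sketch of the Redis-operation estimates above.

BASE_OPS_PER_SEC = 50       # idle GES, per node per tenant
OPS_PER_ACTIVE_CALLBACK = 110

def ops_per_scheduled_callback(minutes_until_scheduled: int) -> int:
    """~110 ops while the ORS session is active, plus 1 per scheduled minute."""
    return OPS_PER_ACTIVE_CALLBACK + minutes_until_scheduled

def redis_ops_per_minute(upcoming_scheduled: int, active_callbacks: int,
                         ewt_minutes: float) -> float:
    return (BASE_OPS_PER_SEC * 60 + upcoming_scheduled
            + (active_callbacks / ewt_minutes) * OPS_PER_ACTIVE_CALLBACK)

print(ops_per_scheduled_callback(60))       # 170 (scheduled 1 hour out)
print(ops_per_scheduled_callback(60 * 24))  # 1550 (scheduled 1 day out)
# Worked example: 133,800 upcoming scheduled, 1,600 active, EWT of 90 minutes:
print(round(redis_ops_per_minute(133_800, 1_600, 90)))  # 138756
```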

Redis keys

Each callback creates three additional Redis keys. Given the preceding calculations for Redis memory requirements for each callback, the formula for the average key size is:
(5.5 kB + (1.25 kB × <kB_of_user_data_per_callback>)) / 3
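A small sketch of the average-key-size formula above (the function name is ours, for illustration only):

```python
# Average Redis key size: per-callback memory spread over three keys.

def avg_redis_key_kb(user_data_kb: float) -> float:
    return (5.5 + 1.25 * user_data_kb) / 3

print(avg_redis_key_kb(0))  # ~1.83 kB with no user data
print(avg_redis_key_kb(1))  # 2.25 kB with 1 kB of user data
```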

Incoming connections to the GES deployment are handled either through the UI or through the external API. For information about how to use the external API, see the Genesys Multicloud CX Developer Center.

Connection topology

The diagram below shows the incoming and outgoing connections among GES and other Genesys and third-party software, such as Redis, PostgreSQL, and Prometheus. In the diagram, Prometheus is shown as part of the broader Kubernetes deployment, although this is not a requirement; what matters is that Prometheus can reach the internal load balancer for GES.

Also note that, depending on the use case, GES might communicate with Firebase and CAPTCHA over the open internet. Neither is part of the default callback offering: the use of Push Notifications or CAPTCHA is optional and unnecessary for basic callback scenarios. If you do use Push Notifications with your callback service, however, GES must be able to connect to Firebase over TLS.

GES requires a dedicated, non-clustered Redis instance.

[Figure: GES connection topology in a private edition deployment]

Web application firewall rules

Information in the following sections is based on NGINX configuration used by GES in an Azure cloud environment.

Cookies and session requirements

When interacting with the UI, GES and GWS ensure that the user's browser has the appropriate session cookies. By default, UI sessions time out after 20 minutes of inactivity.

The external Engagement API does not require session management or the use of cookies, but it is important that the GES API key be provided in the request headers in the X-API-Key field.

For ingress to GES, allow requests to only the following paths to be forwarded to GES:

- /ges/
- /engagement/v3/callbacks/create
- /engagement/v3/callbacks/cancel
- /engagement/v3/callbacks/retrieve
- /engagement/v3/callbacks/availability/
- /engagement/v3/callbacks/queue-status/
- /engagement/v3/callbacks/open-for/
- /engagement/v3/estimated-wait-time
- /engagement/v3/call-in/requests/create
- /engagement/v3/statistics/operations/get-statistic-ex

In addition to allowing connections to only these paths, ensure that the ccid and ContactCenterID headers on any incoming requests are empty. This enhances the security of the GES deployment by preventing the use of the external APIs by an actor who has only the CCID of the contact center.
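As a sketch of how such an allowlist might be expressed with an NGINX-based ingress (resource name, service name, and port are assumptions; the annotation shown applies only to the ingress-nginx controller):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ges-ingress
  annotations:
    # Clear any client-supplied contact-center identifiers before the
    # request is forwarded to GES.
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header ccid "";
      proxy_set_header ContactCenterID "";
spec:
  rules:
    - http:
        paths:
          - path: /ges/
            pathType: Prefix
            backend:
              service: {name: ges, port: {number: 80}}
          - path: /engagement/v3/callbacks/create
            pathType: Exact
            backend:
              service: {name: ges, port: {number: 80}}
          # Repeat for each of the remaining allowed /engagement/v3 paths.
```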

TLS/SSL certificate configuration

There are no special TLS certificate requirements for the GES/Genesys Callback web-based UI.

Subnet requirements

There are no special requirements for sizing or creating an IP subnet for GES above and beyond the demands of the broader Kubernetes cluster.

The Genesys Callback user interface is supported in the following browsers.

GES has dependencies on several other Genesys services. You must deploy the services on which GES depends, and verify that each is working as expected, before you provision and configure GES. Verifying dependencies first means that, if issues arise while provisioning GES, you can be reasonably confident the fault lies in how GES is provisioned rather than in a downstream service.

GES/Callback requires your environment to contain supported releases of the following Genesys services, which must be deployed before you deploy Callback:

  • GWS
  • Voice Microservices
  • Designer

For detailed information about the correct order of services deployment, see Order of services deployment.

Callback records are stored for 14 days. The 14-day TTL setting starts at the Desired Callback Time. The Callback TTL (seconds) setting in the CALLBACK_SETTINGS data table has no effect on callback record storage duration; 14 days is a fixed value for all callback records.

For more information, see Link to come.

PEC-DC/Current/DCPEGuide/Planning PEC-DC Before you begin Find out what to do before deploying Digital Channels. Digital Channels DCPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 Digital Channels for private edition has the following limitations:
  • Supports only a single-region model of deployment.
  • Social media requires additional components that are not included in Digital Channels.
Digital Channels in Genesys Multicloud CX private edition includes the following containers:
  • nexus
  • hubpp
  • tenant_deployment

The service also includes a Helm chart, which you must deploy to install all the containers for Digital Channels:

  • nexus

See Helm charts and containers for Digital Channels for the Helm chart version you must download for your release.

To download the Helm chart, navigate to the nexus folder in the JFrog repository. For information about how to download the Helm charts, see Downloading your Genesys Multicloud CX containers.


Install the prerequisite dependencies listed in the Third-party services table before you deploy Digital Channels. Digital Channels uses PostgreSQL and Redis to store all data.


For general network requirements, review the information on the suite-level Network settings page. Digital Channels has dependencies on the following Genesys services:
  • Genesys Authentication
  • Web Services and Applications
  • Tenant Microservice
  • Universal Contact Service
  • Designer

For detailed information about the correct order of services deployment, see Order of services deployment.

PEC-Email/Current/EmailPEGuide/Planning PEC-Email Before you begin Find out what to do before deploying Email. Email EmailPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 The current version of Email supports only a single-region deployment model. Email in Genesys Multicloud CX private edition includes the following containers:
  • iwd-email

The service also includes a Helm chart, which you must deploy to install the required containers for Email:

  • iwdem

See Helm Chart and Containers for Email for the Helm chart version you must download for your release.

To download the Helm chart, navigate to the iwdem folder in the JFrog repository. For information about how to download the Helm charts, see Downloading your Genesys Multicloud CX containers.

All data is stored in IWD, UCS-X, and Digital Channels, which are external to the Email service. External Connections: IMAP, SMTP, Gmail, GRAPH Not applicable The following Genesys services are required:
  • Genesys authentication service (GAuth)
  • Universal Contact Service (UCS)
  • Interaction Server
  • Digital Channels (Nexus)
  • Intelligent Workload Distribution (IWD)

For the order in which the Genesys services must be deployed, refer to the Order of services deployment topic in the Setting up Genesys Multicloud CX private edition document.

Content coming soon
PEC-IWD/Current/IWDDMPEGuide/Planning PEC-IWD Before you begin Find out what to do before deploying IWD Data Mart. Intelligent Workload Distribution IWDDMPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 The current version of IWD Data Mart:
  • runs as a short-lived job started on a schedule
  • does not support scaling or high availability (HA)
  • requires a dedicated PostgreSQL deployment per customer

Because IWD Data Mart runs as a short-lived job, Prometheus cannot pull its metrics directly; a standalone Pushgateway service is therefore required for monitoring.

IWD Data Mart in Genesys Multicloud CX private edition includes the following containers:
  • iwd_dm_cloud

The service also includes a Helm chart, which you must deploy to install the required containers for IWD Data Mart:

  • iwddm-cronjob

See Helm Charts and Containers for IWD and IWD Data Mart for the Helm chart version you must download for your release.

To download the Helm chart, navigate to the iwddm-cronjob folder in the JFrog repository. For information about how to download the Helm charts, see Downloading your Genesys Multicloud CX containers.

All data is stored in PostgreSQL, which is external to the IWD Data Mart. Not applicable Not applicable Intelligent Workload Distribution (IWD) with a provisioned tenant.

For the order in which the Genesys services must be deployed, refer to the Order of services deployment topic in the Setting up Genesys Multicloud CX private edition document.

Content coming soon
PEC-IWD/Current/IWDPEGuide/Planning PEC-IWD Before you begin Find out what to do before deploying IWD. Intelligent Workload Distribution IWDPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 The current version of IWD:
  • supports only a single-region deployment model
  • requires a dedicated PostgreSQL deployment per customer


IWD in Genesys Multicloud CX private edition includes the following containers:
  • iwd

The service also includes a Helm chart, which you must deploy to install the required containers for IWD:

  • iwd

See Helm Charts and Containers for IWD and IWD Data Mart for the Helm chart version you must download for your release.

To download the Helm chart, navigate to the iwd folder in the JFrog repository. For information about how to download the Helm charts, see Downloading your Genesys Multicloud CX containers.

All data is stored in PostgreSQL, Elasticsearch, and Digital Channels, which are external to IWD.

Sizing of Elasticsearch depends on the load. Allow, on average, 15 KB per work item and 50 KB per email. You can adjust this depending on the size of the items processed.

External Connections: IWD allows customers to configure webhooks. If configured, IWD establishes an HTTP or HTTPS connection to the configured host and port. Not applicable The following Genesys services are required:
  • Genesys authentication service (GAuth)
  • Universal Contact Service (UCS)
  • Interaction Server
  • Digital Channels (Nexus)

For the order in which the Genesys services must be deployed, refer to the Order of services deployment topic in the Setting up Genesys Multicloud CX private edition document.

Content coming soon
PEC-OU/Current/CXCPEGuide/Planning PEC-OU Before you begin Find out what to do before deploying CX Contact. Outbound (CX Contact) CXCPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 There are no limitations. Before you deploy the CX Contact service, ensure that the following prerequisites are met and that any optional tasks you need are completed:

Prerequisites

  • A Kubernetes or OpenShift cluster is ready for deployment of CX Contact.
  • The Kubectl and Helm command line tools are on your computer.
  • You have connectivity to the target cluster, the proper kubectl context to work with the cluster, and a user with administrative permissions to deploy CX Contact to the defined namespace.

Optional tasks

  • SFTP server—Install an SFTP server with basic authentication for optional input and output data. An SFTP server is needed when automation capabilities are required.
  • CDP NG access credentials—As of CX Contact 9.0.025, Compliance Data Provider Next Generation (CDP NG) is used as a CDP by default. Before attempting to connect to CDP NG, obtain the necessary access credentials (ID and Secret) from Genesys Customer Care.
  • Bitnami repository—If you choose to deploy dedicated Redis and Elasticsearch for CX Contact, add the Bitnami repository so that you can install Redis and Elasticsearch, using the following command (the URL shown is the public Bitnami charts repository):
    helm repo add bitnami https://charts.bitnami.com/bitnami

After you've completed the mandatory tasks, check the Third-party prerequisites.

For information about how to download the Helm charts, see Downloading your Genesys Multicloud CX containers.

See Helm charts and containers for CX Contact for the Helm chart version you must download for your release.

CX Contact is the only service that runs in the CX Contact Docker container. The Helm charts included with the CX Contact release provision CX Contact and any Kubernetes infrastructure necessary for CX Contact to run.

Set up Elasticsearch and Redis as standalone services, or install them in a single OpenShift cluster. You can also deploy them as shared services in an "infra" namespace in OpenShift.

For information about setting up your Genesys Multicloud CX private edition platform, see Software requirements.

CX Contact requires shared persistent storage and an associated storage class created by the cluster administrator. The Helm chart creates the ReadWriteMany (RWX) Persistent Volume Claim (PVC) that is used to store and share data with multiple CX Contact components.

The minimal recommended PVC size is 100GB.
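For reference, the claim the chart creates is equivalent to something like the following (the claim name and storage class are assumptions; the actual values come from the chart):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cxcontact-shared          # illustrative name
spec:
  accessModes:
    - ReadWriteMany               # RWX, shared by CX Contact components
  storageClassName: <your-rwx-storage-class>
  resources:
    requests:
      storage: 100Gi              # minimal recommended size
```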

This topic describes network requirements and recommendations for CX Contact in private edition deployments:

Single namespace

Deploy CX Contact in a single namespace to prevent ingress/egress traffic from taking additional hops through firewalls, load balancers, or other network layers that introduce latency and overhead. Do not hardcode the namespace; if necessary, you can override it with Helm values or with the standard --namespace argument of the helm install command.

External connections

For information about external connections from the Kubernetes cluster to other systems, see Architecture. External connections also include:

  • Compliance Data Provider (AWS)
  • SFTP Servers

Ingress

The CX Contact UI requires Session Stickiness. Use ingress-nginx as the ingress controller (see github.com).

Important
The CX Contact Helm chart contains default annotations for session stickiness only for ingress-nginx. If you are using a different ingress controller, refer to its documentation for session stickiness configuration.
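With ingress-nginx, session stickiness is typically expressed through annotations like the following (the cookie name and max-age shown are illustrative; the chart's defaults may differ):

```yaml
nginx.ingress.kubernetes.io/affinity: "cookie"
nginx.ingress.kubernetes.io/session-cookie-name: "cxc-session"
nginx.ingress.kubernetes.io/session-cookie-max-age: "3600"
```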

Ingress SSL

If you are using Chrome 80 or later, the SameSite cookie must have the Secure flag (see Chromium Blog). Therefore, Genesys recommends that you configure a valid SSL certificate on ingress.

Logging

Log rotation is required so that logs do not consume all of the available storage on the node.

Kubernetes is currently not responsible for rotating logs. Log rotation can be handled by the Docker json-file log driver by setting the max-file and max-size options.

For effective troubleshooting, the engineering team needs the stdout logs of the pods (retrieved with the kubectl logs command), so log retention should not be overly aggressive (see JSON file logging driver). For example: ?'"`UNIQ--source-00000012-QINU`"'? For on-site debugging purposes, CX Contact logs can be collected and stored in Elasticsearch (for example, with an EFK stack; see medium.com).
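A typical /etc/docker/daemon.json that enables rotation through the json-file driver looks like this (the size and file-count values are illustrative):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "50m",
    "max-file": "3"
  }
}
```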

Monitoring

CX Contact provides metrics that can be consumed by Prometheus and Grafana. Genesys recommends installing the Prometheus Operator (see github.com) in the cluster. The CX Contact Helm chart supports the creation of CustomResourceDefinitions that can be consumed by the Prometheus Operator.
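The Prometheus Operator discovers scrape targets through resources such as a ServiceMonitor; a sketch follows (the label selector, port name, and interval are assumptions that must be aligned with the labels the chart actually applies):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: cxcontact
spec:
  selector:
    matchLabels:
      app: cxcontact
  endpoints:
    - port: metrics
      interval: 30s
```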

For more information about monitoring, see Observability in Outbound (CX Contact).

CX Contact components operate with Genesys core services (v8.5 or v8.1) in the back end. All voice-processing components (the Voice Microservice and shared services, such as GVP), as well as the GWS and Genesys Authentication services listed below, must be deployed and running before you deploy the CX Contact service. See Order of services deployment.

The following Genesys services and components are required:

  • GWS
  • Genesys Authentication Service
  • Tenant Service
  • Voice Microservice
  • Multi-tenant Configuration Server

Nexus is optional.

 

CX Contact does not support GDPR.
PEC-REP/Current/GCXIPEGuide/Planning PEC-REP Before you begin deploying GCXI Find out what to do before deploying Genesys Customer Experience Insights (GCXI). Reporting GCXIPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 GCXI can provide meaningful reports only if Genesys Info Mart and Reporting and Analytics Aggregates (RAA) are deployed and available. Deploy GCXI only after Genesys Info Mart and RAA. For more information about how to download the Helm charts in JFrog Edge, see the suite-level documentation: Downloading your Genesys Multicloud CX containers.

To learn what Helm chart version you must download for your release, see Helm charts and containers for Genesys Customer Experience Insights

GCXI Containers

The GCXI Helm chart uses the following containers:
  • gcxi - the main GCXI container; runs as a StatefulSet. This container is roughly 12 GB; ensure that you have enough space to allocate it.
  • gcxi-control - a supplementary container used for the initial installation of GCXI and for cleanup.

GCXI Helm Chart

Download the latest yaml files from the repository, or examine the attached files: Sample GCXI yaml files

For more information about setting up your Genesys Multicloud CX private edition platform, including Kubernetes, Helm, and other prerequisites, see Software requirements.


GCXI installation requires a set of local Persistent Volumes (PVs). Kubernetes local volumes are directories on the host with specific properties: https://kubernetes.io/docs/concepts/storage/volumes/#local

Example usage: https://zhimin-wen.medium.com/local-volume-provision-242affd5efe2

Kubernetes provides a powerful volume plugin system, which enables Kubernetes workloads to use a wide variety of block and file storage to persist data.

You can use the GCXI Helm chart to easily set up your own PVs, or you can configure PV Dynamic Provisioning in your cluster so that Kubernetes automatically creates PVs.

Volumes Design

GCXI installation uses the following PVCs. For each volume, you can change the default mount point on the host using Helm values; the local provisioner requires that the specified directory pre-exist on your host. The required node labels apply to the default Local PV setup.

gcxi-backup
  • Mount path (inside container): /genesys/gcxi_shared/backup
  • Description: Backup files. Used by the control container and jobs.
  • Access type: RWX
  • Approximate size: Depends on backup frequency; 5 GB+
  • Default mount point on host: /genesys/gcxi/backup (override using Values.gcxi.local.pv.backup.path)
  • Must be shared across nodes: Only in multiple concurrent install scenarios
  • Required node label: gcxi/local-pv-gcxi-backup = "true"

gcxi-log
  • Mount path (inside container): /mnt/log
  • Description: MSTR logs. Used by the main container. The GCXI Helm chart allows log volumes of the legacy hostPath type; this scenario is the default and is used in examples in this document.
  • Access type: RWX
  • Approximate size: Depends on the rotation scheme; 5 GB+
  • Default mount point on host: /mnt/log/gcxi with subPathExpr: $(POD_NAME) (override using Values.gcxi.local.pv.log.path)
  • Must be shared across nodes: Not necessarily
  • Required node label: gcxi/local-pv-gcxi-log = "true" (if you are using hostPath volumes for logs, you don't need the node label)

gcxi-postgres
  • Mount path (inside container): /var/lib/postgresql/data (if using Postgres in a container), or disk space in the Postgres RDBMS
  • Description: Meta DB volume. Used by the Postgres container, if deployed.
  • Access type: RWO
  • Approximate size: Depends on usage; 10 GB+
  • Default mount point on host: /genesys/gcxi/shared (override using Values.gcxi.local.pv.postgres.path)
  • Must be shared across nodes: Yes, unless you tie the Postgres container to some particular node
  • Required node label: gcxi/local-pv-postgres-data = "true"

gcxi-share
  • Mount path (inside container): /genesys/gcxi_share
  • Description: MSTR shared caches and cubes. Used by the main container.
  • Access type: RWX
  • Approximate size: Depends on usage; 5 GB+
  • Default mount point on host: /genesys/gcxi/data with subPathExpr: $(POD_NAME) (override using Values.gcxi.local.pv.share.path)
  • Must be shared across nodes: Yes
  • Required node label: gcxi/local-pv-gcxi-share = "true"

Preparing the environment

To prepare your environment, complete the following steps:

  1. For GKE deployments, run the following command to log in to the gcloud cluster:
    ?'"`UNIQ--source-00000025-QINU`"'?
  2. To log in to the cluster, run the following command:
    ?'"`UNIQ--source-00000027-QINU`"'?
  3. To check the cluster version on OpenShift deployments, run the following command:
    ?'"`UNIQ--source-00000029-QINU`"'?
  4. To create a new project, run the following command:
    OpenShift: ?'"`UNIQ--source-0000002B-QINU`"'?
    GKE:
    1. Edit the create-gcxi-namespace.json, adding the following values:
      ?'"`UNIQ--source-0000002D-QINU`"'?
    2. To apply the changes, run the following command:
      ?'"`UNIQ--source-0000002F-QINU`"'?
  5. For GKE, to confirm namespace creation, run the following command:
    ?'"`UNIQ--source-00000031-QINU`"'?
  6. Create a secret for docker-registry to pull images from the Genesys JFrog repository:
    ?'"`UNIQ--source-00000033-QINU`"'?
  7. Create the file values-test.yaml, and populate it with appropriate override values. For a simple deployment using PostgreSQL inside the container, you must include PersistentVolumes named gcxi-log-pv, gcxi-backup-pv, gcxi-share-pv, and gcxi-postgres-pv. You must override GCXI_GIM_DB with the name of your Genesys Info Mart data source.
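A minimal values-test.yaml for the simple deployment described in step 7 might look like the following (the key structure is an assumption; consult the GCXI chart's values.yaml for the authoritative layout):

```yaml
# Sketch only: key names below are assumptions.
gcxi:
  env:
    GCXI_GIM_DB: gim_prod   # name of your Genesys Info Mart data source
# The deployment must also define PersistentVolumes named gcxi-log-pv,
# gcxi-backup-pv, gcxi-share-pv, and gcxi-postgres-pv (see Storage).
```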

Ingress

Ingress annotations are supported in the values.yaml file (see line 317). Genesys recommends session stickiness to improve user experience. ?'"`UNIQ--source-00000035-QINU`"'?

Allowlisting is required for GCXI.

WAF Rules

WAF rules are defined in the variables.tf file (see line 245).

SMTP

The GCXI container and Helm chart support the environment variable EMAIL_SERVER.

TLS

The GCXI container does not serve TLS natively. Ensure that your environment is configured to use a proxy with HTTPS offload.

MicroStrategy Web is the user interface most often used for accessing, managing, and running the Genesys CX Insights reports. MicroStrategy Web certifies the latest versions (at the time of release) of the following web browsers:
  • Apple Safari
  • Google Chrome (Windows and iOS)
  • Microsoft Edge
  • Microsoft Internet Explorer (Versions 9 and 10 are supported, but not certified)
  • Mozilla Firefox

To view updated information about supported browsers, see the MicroStrategy ReadMe.

GCXI requires the following services:
  • Reporting and Analytics Aggregates (RAA), which is required to aggregate Genesys Info Mart data.
  • Genesys Info Mart and/or Intelligent Workload Distribution (IWD) Data Mart. GCXI can run without these services, but cannot produce meaningful output without them.
  • GWS Auth/Environment service
  • Genesys Platform Authentication through Config Server (GAuth). Alternatively, GCXI includes a native internal login that you can use to authorize users instead of GAuth. This document assumes you are using GAuth (the recommended solution), which gives Config Server users access to GCXI.
  • GWS client ID/client secret
GCXI can store Personally Identifiable Information (PII) in logs, history files, and reports (in scenarios where customers include PII data in reports). Genesys recommends that you do not capture PII in reports. If you do capture PII, it is your responsibility to remove any such report data within 21 days, if required by General Data Protection Regulation (GDPR) standards.

For more information and relevant procedures, see: Genesys CX Insights Support for GDPR and the suite-level Link to come documentation.

PEC-REP/Current/GCXIPEGuide/PlanningRAA PEC-REP Before you begin deploying RAA Find out what to do before deploying Reporting and Analytics Aggregates (RAA). Reporting GCXIPEGuide The RAA container works with the Genesys Info Mart database; deploy RAA only after you have deployed Genesys Info Mart.

The Genesys Info Mart database schema must correspond to a compatible Genesys Info Mart version. Execute the following command to discover the required Genesys Info Mart release: ?'"`UNIQ--source-00000020-QINU`"'? The RAA container runs RAA on Java 11 and is supplied with the following JDBC drivers:

  • MSSQL 9.2.1 JDBC Driver
  • Postgres 42.2.11 JDBC Driver
  • Oracle Database 21c (21.1) JDBC Driver

Genesys recommends that you verify that the provided driver is compatible with your database. If it is not, override the JDBC driver either by copying an updated driver file to the lib/jdbc_driver_<RDBMS> folder within the mounted config volume, or by creating an identically named link in that folder that points to a driver file stored on another volume (where <RDBMS> is the RDBMS used in your environment). This is possible because RAA is launched from a config folder that is mounted in the container.

To learn what Helm chart version you must download for your release, see Helm charts and containers for Genesys Customer Experience Insights.

You can download the gcxi helm charts from the following repository:?'"`UNIQ--source-00000022-QINU`"'? For more information about downloading containers, see: Downloading your Genesys Multicloud CX containers.

For information about setting up your Genesys Multicloud CX private edition platform, including Kubernetes, Helm, and other prerequisites, see Software requirements.
This section describes the storage requirements for various volumes.

GIM secret volume

In scenarios where raa.env.GCXI_GIM_DB__JSON is not specified, RAA mounts this volume to provide GIM connection details.

  1. Declare GIM database connection details as a Kubernetes secret in gimsecret.yaml:
    ?'"`UNIQ--source-00000024-QINU`"'?
  2. Reference gimsecret.yaml in values.yaml:
    ?'"`UNIQ--source-00000026-QINU`"'?

Alternatively, you can mount the CSI secret using secretProviderClass, in values.yaml:?'"`UNIQ--source-00000028-QINU`"'?
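As an illustration of step 1, a gimsecret.yaml might look like the following (all field names and connection keys are assumptions; use the keys your deployment's values.yaml expects):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: gim-secret           # illustrative name
type: Opaque
stringData:
  # Hypothetical connection details for the Genesys Info Mart database.
  host: gim-db.example.com
  port: "5432"
  dbname: gim
  username: gim_reader
  password: <password>
```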

Config volume

RAA mounts a config volume inside the container as the folder /genesys/raa_config. The folder is treated as a work directory; RAA reads the following files from it during startup:

  • conf.xml, which contains application-level config settings.
  • custom *.ss files.
  • JDBC driver, from the folder lib/jdbc_driver_<RDBMS>.

RAA does not normally create any files in /genesys/raa_config at runtime, so the volume does not require a fast storage class. By default, the size limit is set to 50 MB. You can specify the storage class and size limit in values.yaml:?'"`UNIQ--source-0000002A-QINU`"'?


The RAA Helm chart creates a Persistent Volume Claim (PVC). You can define a Persistent Volume (PV) separately using the gcxi-raa chart, and bind such a volume to the PVC by specifying the volume name in the raa.volumes.config.pvc.volumeName value in values.yaml:?'"`UNIQ--source-0000002C-QINU`"'?

Health volume

RAA uses the Health volume to store:

  • Health files.
  • Prometheus file containing metrics for the most recent 2-3 scrape intervals.
  • Results of the most recent testRun init container execution.

By default, the volume is limited to 50 MB. RAA periodically interacts with the volume at runtime, so Genesys does not recommend a slow storage class for this volume. You can specify the storage class and size limit in values.yaml:?'"`UNIQ--source-0000002E-QINU`"'?The RAA Helm chart creates a PVC. You can define a PV separately using the gcxi-raa chart, and bind such a volume to the PVC by specifying the volume name in the raa.volumes.health.pvc.volumeName value in values.yaml:?'"`UNIQ--source-00000030-QINU`"'?

RAA interacts only with the Genesys Info Mart database.

RAA can expose Prometheus metrics by way of Netcat.

The aggregation pod has its own IP address and can run with one or two running containers. For Helm test, an additional IP address is required; each test pod runs one container.

Genesys recommends that RAA be located in the same region as the Genesys Info Mart database.

Secrets

RAA secret information is defined in the values.yaml file (line 89).

For information about configuring arbitrary UID, see Configure security.

Not applicable. RAA interacts with the Genesys Info Mart database only. Not applicable.
PEC-REP/Current/GIMPEGuide/PlanningGCA PEC-REP Before you begin GCA deployment Find out what to do before deploying GIM Config Adapter (GCA). Reporting GIMPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 Instructions are provided for a single-tenant deployment.


GIM Config Adapter (GCA) and GCA monitoring are the only services that run in the GCA Docker container. The Helm charts included with the GCA release provision GCA and any Kubernetes infrastructure necessary for GCA to run.

See Helm charts and containers for Genesys Info Mart for the Helm chart versions you must download for your release.

For information about how to download the Helm charts, see Downloading your Genesys Multicloud CX containers.

For information about setting up your Genesys Multicloud CX private edition platform, see Software requirements.

The following table lists the third-party prerequisites for GCA.

GCA uses object storage to store the GCA snapshot during processing. Like GSP, GCA supports using S3-compatible storage provided by OpenShift and Google Cloud Platform (GCP), and Genesys expects you to use the same storage account for GSP and GCA. If you want to use separate storage for GCA, follow the Create object storage instructions for GSP to create similar S3-compatible storage for GCA. No special network requirements.
  • Voice Tenant Service, which enables GCA to access the Configuration Server database. You must deploy the Voice Tenant Service before you deploy GCA.
    • Ensure that an appropriate user account is available for GCA to use to access the Configuration Database. The GCA user account requires at least read permissions.
    • You must also have your Tenant ID information available.
  • There are no strict dependencies between the Genesys Info Mart services, but the logic of your particular pipeline might require Genesys Info Mart services to be deployed in a particular order. Depending on the order of deployment, there might be temporary data inconsistencies until all the Genesys Info Mart services are operational. For example, GSP looks for the GCA snapshot when it starts; if GCA has not yet been deployed, GSP will encounter unknown configuration objects and resources until the snapshot becomes available.

For detailed information about the correct order of services deployment, see Order of services deployment.

Not applicable. GCA does not store information beyond an ephemeral snapshot. f05492f5-52ed-490a-b0d5-c318a4a7272b
PEC-REP/Current/GIMPEGuide/PlanningGIM PEC-REP Before you begin GIM deployment Find out what to do before deploying GIM. Reporting GIMPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 Instructions are provided for a single-tenant deployment. GIM and GIM monitoring are the only services that run in the GIM Docker container. The Helm charts included with the GIM release provision GIM and any Kubernetes infrastructure necessary for GIM to run.

See Helm charts and containers for Genesys Info Mart for the Helm chart version you must download for your release.

For information about how to download the Helm charts, see Downloading your Genesys Multicloud CX containers.

For information about setting up your Genesys Multicloud CX private edition platform, see Software requirements.

The following table lists the third-party prerequisites for GIM.

GIM uses PostgreSQL for the Info Mart database and, optionally, uses object storage to store exported Info Mart data.

PostgreSQL — the Info Mart database

The Info Mart database stores data about agent and interaction activity, Outbound Contact campaigns, and other services usage in your contact center. A subset of tables and views created, maintained, and populated by Reporting and Analytics Aggregates (RAA) provides the aggregated data on which Genesys CX Insights (GCXI) reports are based.

A sizing calculator for Genesys Multicloud CX private edition is under development. In the meantime, the interactive tool available for on-premises deployments might help you estimate the size of your Info Mart database; see the Genesys Info Mart 8.5 Database Size Estimator.

Genesys recommends a minimum of 3 IOPS per GB.

For information about creating the Info Mart database, see Create the Info Mart database.

Create the Info Mart database

Use any database management tool to create the Info Mart ETL database and user.

  1. Create the database.
  2. Create a user for the Genesys Info Mart services to use, and grant full permissions to the user for that database.
    This user’s account is used by Genesys Info Mart jobs to access the Info Mart database schema.
    The Info Mart schema name is public.
    Important
    Make a note of the database and user details, which you use to populate database-related Helm chart override values for GIM and GCA.
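The two steps above can be sketched in PostgreSQL as follows (the database, user, and password names are placeholders):

```sql
-- 1. Create the Info Mart database.
CREATE DATABASE gim;

-- 2. Create the user for the Genesys Info Mart services, and grant
--    full permissions on the database. Tables live in the default
--    schema, which is named "public".
CREATE USER gim_etl WITH PASSWORD '<password>';
GRANT ALL PRIVILEGES ON DATABASE gim TO gim_etl;
```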

Object storage — Data Export packages

The GIM Data Export feature enables you to export data from your Info Mart database. Unless you elect to store your exported data in a local directory, your Info Mart data is exported to an object store. GIM supports export to Azure Blob Storage or S3-compatible storage provided by OpenShift and Google Cloud Platform (GCP).

If you want to use S3-compatible storage, follow the Create object storage instructions for GSP to create the S3-compatible storage for GIM.

Important
GSP and GCA use object storage to store data during processing. For safety and security reasons, Genesys strongly recommends that you use a dedicated object storage account for the GIM persistent storage, and do not share the storage account created for GSP and GCA.

Alternatively, you can configure GIM to store your exported data in a local directory. In that case, you do not need to create object storage.

No special network requirements.
  • You must have your Tenant ID information available.
  • There are no strict dependencies between the Genesys Info Mart services, but the logic of your particular pipeline might require Genesys Info Mart services to be deployed in a particular order. Depending on the order of deployment, there might be temporary data inconsistencies until all the Genesys Info Mart services are operational. For example, GCA might try to access the Info Mart database to synchronize configuration data; if GIM has not yet been deployed, the Info Mart database will be empty.

For detailed information about the correct order of services deployment, see Order of services deployment.

GIM does not yet support GDPR compliance. For details about the Info Mart database tables and columns that potentially contain personally identifiable information (PII), see the description of the CTL_GDPR_HISTORY table in the Genesys Info Mart on-premises documentation. e65e00cb-c1c8-4fb8-9614-80ac07c3a4e3
PEC-REP/Current/GIMPEGuide/PlanningGSP PEC-REP Before you begin GSP deployment Find out what to do before deploying GIM Stream Processor (GSP). Reporting GIMPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 GIM Stream Processor (GSP) is the only service that runs in the GSP Docker container. The Helm charts included with the GSP release provision GSP and any Kubernetes infrastructure necessary for GSP to run.

See Helm charts and containers for Genesys Info Mart for the Helm chart version you must download for your release.

For information about how to download the Helm charts, see Downloading your Genesys Multicloud CX containers.

For information about setting up your Genesys Multicloud CX private edition platform, see Software requirements.

The following table lists the third-party prerequisites for GSP.

Like GCA, GSP uses S3-compatible storage to store data during processing. GSP stores data such as GSP checkpoints, savepoints, and high availability data. By default, GSP is configured to use Azure Blob Storage, but you can also use S3-compatible storage provided by other cloud platforms. Genesys expects you to use the same storage account for GSP and GCA.

To create S3-compatible storage, do one of the following:

OpenShift: Create an Object Bucket Claim

Create an S3 Object Bucket Claim (OBC) if none exists.

  1. Create a gsp-obc.yaml file that defines the ObjectBucketClaim.
  2. Execute the command to create the OBC.
    The following Kubernetes resources are created automatically:
    • An ObjectBucket (OB), which contains the bucket endpoint information, a reference to the OBC, and a reference to the storage class.
    • A ConfigMap in the same namespace as the OBC, which contains the endpoint to which applications connect in order to consume the object interface.
    • A Secret in the same namespace as the OBC, which contains the key-pairs needed to access the bucket.
    Note the following:
    • The names of the Secret and the ConfigMap are the same as the OBC name.
    • The bucket name is created with a randomized suffix.
  3. Get S3 data.
    You need to know details of your S3 object in order to populate Helm chart override values for the service.
    1. Inspect the ConfigMap associated with the OBC (named gim in this example).
      The result shows data such as BUCKET_HOST, BUCKET_NAME, BUCKET_PORT, and so on.
    2. From the secret associated with the OBC (also named gim), retrieve the values of the keys you require for access:
      • The value of the access key.
      • The value of the secret key.
    Use the S3 data to populate storage-related Helm chart override values for the service (for GSP, see Configure S3-compatible storage).
    Tip
    You can also obtain the S3 data from the OpenShift console: Go to the Object bucket claims section under the Storage menu, and click on the required OBC resource. The data will be at the bottom of the page.
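As a sketch of the steps above, an OBC manifest might look like the following. The claim name (gim), namespace, and storage class are assumptions for illustration; adjust them to your cluster.

```yaml
# gsp-obc.yaml - illustrative ObjectBucketClaim; names are placeholders
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: gim
  namespace: gim
spec:
  generateBucketName: gsp-bucket   # the bucket name gets a randomized suffix
  storageClassName: openshift-storage.noobaa.io
```

Create the claim with `oc create -f gsp-obc.yaml`. You can then read the generated ConfigMap and Secret (both named gim in this sketch) with `oc describe configmap gim` and `oc get secret gim -o yaml` to obtain the bucket details and access keys.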

GKE: Create bucket storage


A screenshot of the GCP Cloud Console showing the Cloud Storage menu

In the Google Cloud Platform (GCP) Cloud Console, select Cloud Storage and then choose Browser from the drop-down menu.

A screenshot of the GCP Cloud Console showing the Cloud Storage Browser screen

On the Cloud Storage > Browser screen, click CREATE BUCKET.

A screenshot of the GCP Cloud Console showing the Cloud Storage Create Bucket screen

On the Create a bucket screen, specify the bucket details:

  • Name your bucket — Enter a unique name for the bucket.
  • Choose where to store your data — Select the geo-redundancy type (multi-region, dual-region, or region) and location where the data will be stored.
    Important
    You cannot change the location after the bucket is created.
  • Choose a default storage class for your data — Select the Standard storage class.
  • Choose how to control access to objects — Check the box to enforce public access prevention on this bucket, to protect the bucket from being accidentally exposed to the public. When you enforce public access prevention, no one can make data in applicable buckets public through IAM policies or ACLs.
  • Choose how to protect object data — GSP does not require any protection tools.

Click CREATE BUCKET to create the bucket.

The Bucket details screen displays. Verify and, if necessary, edit the details.


A screenshot of the GCP Cloud Console showing the Cloud Storage Settings screen

It is important to create a storage access key.

  1. Go to Settings > Interoperability.
  2. If the service account you want to use does not already exist, click CREATE A KEY FOR SERVICE ACCOUNT. Otherwise, click CREATE A KEY FOR ANOTHER SERVICE ACCOUNT, select the service account in the dialog box that displays, and click CREATE KEY.
    Note: The service account is the GCP service account associated with the Google Cloud project. The GCP service account enables applications to authenticate and access Google Cloud resources and services. It is not related to the Kubernetes service account created by the Helm chart. Depending on how you want to organize your Google Cloud resources, you can have multiple GCP projects and service accounts.

A screenshot of the GCP Cloud Console showing the Cloud Storage New Service Account HMAC screen

The Access Key and Secret are generated and displayed in a dialog box. Copy and securely save these details, which you use to populate storage-related Helm chart override values for the applicable service (see Configure S3-compatible storage).

Important
You cannot recover the Secret once the dialog box is closed.
No special network requirements. Network bandwidth must be sufficient to handle the volume of data to be transferred into and out of Kafka. There are no strict dependencies between the Genesys Info Mart services, but the logic of your particular pipeline might require Genesys Info Mart services to be deployed in a particular order. Depending on the order of deployment, there might be temporary data inconsistencies until all the Genesys Info Mart services are operational. For example, GSP looks for the GCA snapshot when it starts; if GCA has not yet been deployed, GSP will encounter unknown configuration objects and resources until the snapshot becomes available.

There are other private edition services you must deploy before Genesys Info Mart. For detailed information about the recommended order of services deployment, see Order of services deployment.

Not applicable, provided your Kafka retention policies have not been set to more than 30 days. GSP does not store information beyond ephemeral data used during processing.

Kafka configuration

Unless Kafka has been configured to auto-create topics, ensure that the Kafka topics GSP requires have been created in the Kafka configuration. The following table shows the topic names GSP expects to use. An entry in the Customizable GSP parameter column indicates that GSP supports using a customized topic name. If you use customized topic names, you must override the applicable values in the values.yaml file (see Override Helm chart values).

The topics represent various data domains. If a topic does not exist, GSP will never receive data for that domain. If the topic exists but the customizable parameter value is empty in the GSP configuration, data from that domain will be discarded.

Topic name | Customizable GSP parameter | Description
GSP consumes the following topics:
voice-callthread | | Name of the input topic with voice interactions
voice-agentstate | | Name of the input topic with voice agent states
voice-outbound | | Name of the input topic with outbound (CX Contact) activity
digital-itx | digitalItx | Name of the input topic with digital interactions
digital-agentstate | digitalAgentStates | Name of the input topic with digital agent states
gca-cfg | cfg | Name of the input topic with configuration data
GSP produces the following topics:
gsp-ixn | interactions | Name of the output topic for interactions
gsp-sm | agentStates | Name of the output topic for agent states
gsp-outbound | outbound | Name of the output topic for outbound (CX Contact) activity
gsp-custom | custom | Name of the output topic for custom reporting
gsp-cfg | cfg | Name of the output topic for configuration reporting
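If you rename topics, the customizable parameters are overridden in the values.yaml file. The exact key paths depend on your GSP Helm chart version, so the structure below is an illustrative assumption, not the authoritative schema.

```yaml
# Illustrative sketch only - verify the key paths against your chart's values.yaml
gsp:
  kafka:
    topics:
      digitalItx: my-digital-itx          # overrides digital-itx
      digitalAgentStates: my-digital-agentstate
      interactions: my-gsp-ixn            # overrides gsp-ixn
      agentStates: my-gsp-sm              # overrides gsp-sm
```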
c39fe496-c79e-4846-b451-1bc8bedb126b
PEC-REP/Current/PulsePEGuide/Planning PEC-REP Before you begin Find out what to do before deploying Genesys Pulse. Reporting PulsePEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 There are no known limitations.


For more information about how to download the Helm charts in JFrog Edge, see the suite-level documentation: Downloading your Genesys Multicloud CX containers.

To learn what Helm chart version you must download for your release, see Helm charts and containers for Genesys Pulse.

Genesys Pulse Containers

Container Description Docker Path
collector Genesys Pulse Collector <docker>/pulse/collector:<image-version>
cs_proxy Configuration Server Proxy <docker>/pulse/cs_proxy:<image-version>
init Init container, used for DB initialization <docker>/pulse/init:<image-version>
lds Load Distribution Server (LDS) <docker>/pulse/lds:<image-version>
monitor_dcu_push_agent Provides monitoring data from Stat Server and Genesys Pulse Collector <docker>/pulse/monitor_dcu_push_agent:<image-version>
monitor_lds_push_agent Provides monitoring data from LDS <docker>/pulse/monitor_lds_push_agent:<image-version>
pulse Genesys Pulse Backend <docker>/pulse/pulse:<image-version>
ss Stat Server <docker>/pulse/ss:<image-version>
userpermissions User Permissions service <docker>/pulse/userpermissions:<image-version>

Genesys Pulse Helm Charts

Helm Chart | Containers | Shared | Helm Path
Init | init | yes | <helm>/init-<chart-version>.tgz
Pulse | pulse | yes | <helm>/pulse-<chart-version>.tgz
LDS | cs_proxy, lds, monitor_lds_push_agent | | <helm>/lds-<chart-version>.tgz
DCU | cs_proxy, ss, collector, monitor_dcu_push_agent | | <helm>/dcu-<chart-version>.tgz
Permissions | cs_proxy, userpermissions | | <helm>/permissions-<chart-version>.tgz
Init Tenant | init | | <helm>/init-tenant-<chart-version>.tgz
Monitor | - | yes | <helm>/monitor-<chart-version>.tgz
OpenShift or GKE CLI must be installed.

For more information about setting up your Genesys Multicloud CX private edition platform, see Software requirements.

Logs Volume

Persistent Volume | Size | Type | IOPS | POD | Containers | Critical | Backup needed
pulse-dcu-logs | 10Gi | RW | high | dcu | csproxy, collector, statserver | Y | Y
pulse-lds-logs | 10Gi | RW | high | lds | csproxy, lds | Y | Y
pulse-permissions-logs | 10Gi | RW | high | permissions | csproxy, permissions | Y | Y
pulse-logs | 10Gi | RW | high | pulse | pulse | Y | Y

The logs volume stores log files:

  • To use a persistent volume, set log.volumeType to pvc.
  • To use local storage, set log.volumeType to hostpath.
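The log volume type selection above maps to a values.yaml override. The key path is as documented; the surrounding layout is a minimal sketch.

```yaml
log:
  volumeType: pvc   # or hostpath for local storage
```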

Genesys Pulse Collector Health Volume

Local Volume | POD | Containers
collector-health | dcu | collector, monitor-sidecar

The Genesys Pulse Collector health volume provides non-persistent storage for Genesys Pulse Collector health state files used for monitoring.

Stat Server Backup Volume

Persistent Volume | Size | Type | IOPS | POD | Containers | Critical | Backup needed
statserver-backup | 1Gi | RWO | medium | dcu | statserver | N | N

The Stat Server backup volume provides disk space for Stat Server's state backup, storing the server state between restarts of the container.

No special requirements. Ensure that the following services are deployed and running before you deploy Genesys Pulse:


  • Genesys Authentication
  • Genesys Web Services and Applications
  • Agent Setup
  • Tenant Service:
    • The Tenant UUID (v4) is provisioned, example: "9350e2fc-a1dd-4c65-8d40-1f75a2e080dd"
    • The Tenant service is made available as host:
      • GKE: "tenant-<tenant-uuid>.voice" port: 8888
      • OpenShift: "tenant-<tenant-uuid>.voice.svc.cluster.local." port: 8888
  • Voice Microservice:
    • The Voice service is made available as host:
      • GKE: "tenant-<tenant-uuid>.voice" port: 8000
      • OpenShift: "tenant-<tenant-uuid>.voice.svc.cluster.local." port: 8000
Important
All services listed above must be accessible from within the cluster where Genesys Pulse will be deployed.

For more information, see Order of services deployment.

Genesys Pulse supports the General Data Protection Regulation (GDPR). See Genesys Pulse Support for GDPR for details.
PrivateEdition/Current/TenantPEGuide/Planning PrivateEdition Before you begin Find out what to do before deploying the Tenant Service. Genesys Multicloud CX Private Edition TenantPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 For information about how to download the Helm charts, see Downloading your Genesys Multicloud CX containers.

Containers

The Tenant Service has the following containers:

  • Core tenant service container
  • Database initialization and upgrade container
  • Role and privileges initialization and upgrade container
  • Solution specific: pulse provisioning container

Helm charts

  • Tenant deployment
  • Tenant infrastructure
For information about setting up your Genesys Multicloud CX private edition platform, see Software Requirements.

The following table lists the third-party prerequisites for the Tenant Service.

Content coming soon
Content coming soon
For detailed information about the correct order of services deployment, see Order of services deployment.

The following prerequisites are required before deploying the Tenant Service:

  • Voice Platform and all its external dependencies must be deployed before proceeding with the Tenant Service deployment.
  • A PostgreSQL 10 database management system must be deployed, and a database must be allocated as either a primary or a replica. For more information about a sample deployment of a standalone DBMS, see Third-party prerequisites.

In addition, if you expect to use Agent Setup or Workspace Web Edition after the tenant is deployed, Genesys recommends that you deploy GWS Authentication Service before proceeding with the Tenant Service deployment.

Specific dependencies

The Tenant Service is dependent on the following platform endpoints:

  • GWS environment API
  • Interaction service core
  • Interaction service vq

The Tenant Service is dependent on the following service component endpoints:

  • Voice Front End Service
  • Voice Redis (RQ) Service
  • Voice Config Service
Content coming soon
TLM/Current/TLMPEGuide/Planning TLM Before you begin Find out what to do before deploying Telemetry Service. Telemetry Service TLMPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 NA Telemetry Service is composed of:
  • 1 Docker Container: tlm/telemetry-service:version
  • 1 Helm Chart: telemetry-service_version.tgz

For additional information about overriding Helm chart values, see Overriding Helm Chart values in the Genesys Multicloud CX Private Edition Guide.

For information about downloading Helm charts from JFrog Edge, see Downloading your Genesys Multicloud CX containers in the Setting up Genesys Multicloud CX Private Edition guide.

NA
NA For any kind of Telemetry deployment, the following service must be deployed and running before deploying the Telemetry service:

For a look at the high-level deployment order, see Order of services deployment.

17df197d-45b4-4d49-b269-f44d5bdfe5a1
UCS/Current/UCSPEGuide/Planning UCS Before you begin Find out what to do before deploying Universal Contact Service (UCS). Universal Contact Service UCSPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 Currently, UCS:
  • supports a single-region model of deployment only
  • does not support SSL communication with ElasticSearch.
  • requires dedicated PostgreSQL deployment per customer.
Download the UCS related Docker containers and Helm charts from the JFrog repository.

See Helm charts and containers for Universal Contact Service for the Helm chart and container versions you must download for your release.

For more information on JFrog, refer to the Downloading your Genesys Multicloud CX containers topic in the Setting up Genesys Multicloud CX private edition document.

  • Kubernetes 1.17+
  • Helm 3.0
All data is stored in PostgreSQL, Elasticsearch, and the Nexus Upload Service, which are external to UCS. UCS requires the following Genesys components:
  • Genesys Authentication Service
  • GWS Environment Service
As part of the GDPR compliance procedure, the customer sends a request to Care providing information about the end user. Care then opens a ticket for the Engineering team to follow up on the request.

The Engineering team processes the request as follows:

GDPR request: Export Data

  • Request to UCS to get contact by ID: identify contact (if there is email address or phone number), or getContact (if there is a direct contact ID).
  • Request to UCS-X to get list of interactions for contact found.
  • Perform CSV export and attach resulting file to the ticket.

GDPR request: Forget me

  • Request to UCS-X to get contact by ID: identify contact (if there is email address or phone number), or getContact (if there is a direct contact ID).
  • Request to UCS-X to get list of interactions for contact found.
  • Delete all found interactions.
  • Re-check that all interactions for contact were removed.
  • Delete contact.
  • Re-check that contact was removed.
  • Update the ticket.
3fdcc389-c8a5-4c38-8ee7-d6ab8d1e5dd8
VM/Current/VMPEGuide/Planning VM Before you begin Find out what to do before deploying Voice Microservices. Voice Microservices VMPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 For information about how to download the Helm charts, see Downloading your Genesys Multicloud CX containers.

The following services are included with Voice Microservices:

  • Voice Agent State Service
  • Voice Config Service
  • Voice Dial Plan Service
  • Voice Front End Service
  • Voice Orchestration Service
  • Voice Registrar Service
  • Voice Call State Service
  • Voice RQ Service
  • Voice SIP Cluster Service
  • Voice SIP Proxy Service
  • Voice Voicemail Service
  • Voice Tenant Service

See Helm charts and containers for Voice Microservices for the Helm chart version you must download for your release.

For information about the Voicemail Service, see Before you begin in the Configure and deploy Voicemail section of this guide.

For information about the Tenant service, also included with Voice Microservices, see the Tenant Service Private Edition Guide.

For information about setting up your Genesys Multicloud CX private edition platform, see Software Requirements.

The following table lists the third-party prerequisites for Voice Microservices.

Content coming soon
Content coming soon
For detailed information about the correct order of services deployment, see Order of services deployment.

Multi-Tenant Inbound Voice: Voicemail Service

Customer data that is likely to identify an individual, or a combination of other held data to identify an individual is considered as Personally Identifiable Information (PII). Customer name, phone number, email address, bank details, and IP address are some examples of PII.

According to EU GDPR:

  • When a customer requests to access personal data that is available with the contact center, the PII associated with the client is exported from the database in client-understandable format. You use the Export Me request to do this.
  • When a customer requests to delete personal data, the PII associated with that client is deleted from the database within 30 days. However, the Voicemail service is designed in a way that the Customer PII data is deleted in one day using the Forget Me request.

Both Export Me and Forget Me requests depend only on Caller ID/ANI input from the customer. The following PII data is deleted or exported during the Forget Me or Export Me request process, respectively:

  • Voicemail Message
  • Caller ID/ANI

The GDPR feature is supported only when StorageInterface is configured as BlobStorage and the Voicemail service is configured with an Azure storage account data store.

Adding caller_id tag during voicemail deposit

The index tag caller_id is added to voicemail message and metadata blob files during voicemail deposit. Using index tags, you can filter the relevant files for Forget Me or Export Me requests instead of searching every mailbox.

GDPR multi-region support

In the Voicemail service, all voicemail metadata files are stored in the master region, and voicemail messages are deposited and stored in their respective regions. Therefore, all regions of a tenant must be connected to perform Forget Me, Undo Forget Me, or Export Me processes for GDPR inputs.

To provide multi-region support for GDPR, follow these steps while performing GDPR operation:

  1. Get the list of regions of a tenant.
  2. Ensure the storage accounts in all regions are up. If any storage account is down, you cannot perform the GDPR operation.
  3. GDPR operates on the master region files first.
  4. GDPR then operates on the files in all non-master regions.

APIs

The Voicemail service provides APIs for the GDPR Export Me and Forget Me requests, authenticated with GWS. The APIs support any valid client ID and can process more than one user's data in a single request. The API follows the same standard input format used in the current PEC (legacy component).

The Forget Me and Export Me API and the Forget Me Undo API each take an Input.json request body.

The Voicemail service stores only the caller ANI. Therefore, it processes records using only the "phone" parameter from the given input and ignores any other parameters.
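As a purely hypothetical illustration of a phone-only request body, a minimal Input.json might look like the following. The exact field names follow the legacy PEC input format and are not documented here, so treat every name below as an assumption.

```json
[
  { "phone": "15551234567" }
]
```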

Forget Me: The Forget Me API deletes the PII data related to the consumer one day after the API request. The files are deleted through the following operations:

  • Message and metadata files are reuploaded with forgetme=true and case_id=[case_id_value] index tag during the Forget Me API call.
  • Files are deleted using Azure lifecycle management rules. A rule named forgetme is created in Azure lifecycle management and deletes a file if it meets the following conditions:
    • The file has not been modified in a day
    • The file has the forgetme=true index tag

The forgetme rule is executed automatically by Azure lifecycle management once a day, so file deletion is not immediate; the resulting limitations are captured in the Limitations section.
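A rule of this shape can be expressed in an Azure Storage lifecycle management policy. The following is a sketch based on the documented policy schema, not the exact rule Genesys creates:

```json
{
  "rules": [
    {
      "enabled": true,
      "name": "forgetme",
      "type": "Lifecycle",
      "definition": {
        "filters": {
          "blobTypes": [ "blockBlob" ],
          "blobIndexMatch": [
            { "name": "forgetme", "op": "==", "value": "true" }
          ]
        },
        "actions": {
          "baseBlob": {
            "delete": { "daysAfterModificationGreaterThan": 1 }
          }
        }
      }
    }
  ]
}
```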

  • Undo Forget Me: The API to undo the Forget Me request with the same case id.
    If the admin/user has wrongly requested/entered the caller ANI, then the voicemail service provides an option to undo the Forget Me request using another API call with the same case ID, to avoid data loss.
  • Export Me: The API for Export Me returns the list of message IDs with message media URL to download the media.
    • The media URL is also authenticated and authorized with the GWS token.
  • The Voicemail Service is exposed via the Kubernetes service and can be accessed by URL in any region (the FQDN remains the same in all regions where the Voicemail service is deployed).
  • Append the API URL with the above-mentioned base URL for accessing the APIs.
  • The Voicemail service authenticates and authorizes each request with GWS. The Voicemail service requires the OAuth token in the header for the following API calls:
    • Authorization: Basic <token> (or) Bearer <token>
    • Contact center ID is taken from the authorization token
  • Here is the API definition:
    • messageId: Unique message ID of the message.
  • The API sample response is given based on the sample input mentioned above.
WebRTC/Current/WebRTCPEGuide/Planning WebRTC Before you begin Find out what to do before deploying WebRTC. WebRTC WebRTCPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 All prerequisites described under Third-party prerequisites, Genesys dependencies, and Secrets have been met. Download the Helm charts from the webrtc folder in the JFrog repository. See Helm charts and containers for WebRTC for the Helm chart version you must download for your release.

For information about how to download the Helm charts in JFrog Edge, see the suite-level documentation: Downloading your Genesys Multicloud CX containers.

WebRTC contains the following containers:

Artifact | Type | Functionality | JFrog path
webrtc | webrtc gateway container | Handles agents' sessions, signalling, and media traffic. It also performs media transcoding. | https://<jfrog artifactory>/<docker location>/webrtc/webrtc/
coturn | coturn container | Provides TURN functionality | https://<jfrog artifactory>/<docker location>/webrtc/coturn/
webrtc-service | Helm chart | | https://<jfrog artifactory>/<helm location>/webrtc-service-<version_number>.tgz
For information about setting up your Genesys Multicloud CX private edition platform, see Software requirements.

The following are the third-party prerequisites for WebRTC:

WebRTC does not require persistent storage for any purposes except Gateway and CoTurn logs. The following table describes the storage requirements:
Persistent Volume | Size | Type | IOPS | Functionality | Container | Critical | Backup needed
webrtc-gateway-log-volume | 50Gi | RW | medium | storing gateway log files | webrtc | Y | Y
webrtc-coturn-log-volume | 50Gi | RW | medium | storing coturn log files | coturn | N | Y

A Persistent Volume and Persistent Volume Claim are created if they are configured. Their size should be adjusted according to the log rates described below:

Gateway:

idle: 0.5 MB/hour per agent

active call: around 0.2 MB per call per agent

Example: for 24 hours of continuous operation, with each agent handling a constant rate of around 7 to 10 calls per hour, 1000 agents require approximately 500 GB, with around 20 GB consumed per hour.

CoTurn:

For 1000 connected agents, the log rate is approximately 3.6 GB/hour. It scales linearly with the number of agents and stays constant whether or not calls are in progress.

Ingress

WebRTC has the following ingress requirements:

  • Cookie-based persistent session stickiness is mandatory. The stickiness cookie should contain the following attributes:
    • SameSite=None
    • Secure
    • Path=/
  • No specific header requirements
  • Whitelisting (optional)
  • TLS is mandatory
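With the NGINX Ingress Controller (one possible controller, assumed here for illustration), the cookie requirements above map to annotations such as the following; the cookie name is illustrative:

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "webrtc-session"
    nginx.ingress.kubernetes.io/session-cookie-path: "/"
    nginx.ingress.kubernetes.io/session-cookie-samesite: "None"
```

The Secure attribute and TLS termination must also be configured according to your controller's documentation.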

Secrets

WebRTC supports three types of secrets: CSI driver, Kubernetes secrets, and environment variables.

Important
GWS Secret for WebRTC should contain the required grants.

For GWS secrets, the CSI or Kubernetes secret should contain the gwsClient and gwsSecret key-value pairs.

The GWS secret for WebRTC must be created in the WebRTC namespace.
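A minimal sketch of such a Kubernetes secret, assuming the secret name webrtc-gws-secret and placeholder credential values:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: webrtc-gws-secret   # name is illustrative
  namespace: webrtc
type: Opaque
stringData:
  gwsClient: <your-gws-client-id>
  gwsSecret: <your-gws-client-secret>
```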

ConfigMaps

Not Applicable

WAF Rules

The following Web Application Firewall (WAF) rules should be disabled for WebRTC:

WAF Rule | Rule IDs
REQUEST-920-PROTOCOL-ENFORCEMENT | 920300, 920440
REQUEST-913-SCANNER-DETECTION | 913100, 913101
REQUEST-921-PROTOCOL-ATTACK | 921150
REQUEST-942-APPLICATION-ATTACK-SQLI | 942430


Pod Security Policy

Not applicable

Auto-scaling

WebRTC and CoTurn auto-scaling is performed by the KEDA operator. The auto-scaling feature requires Prometheus metrics. To learn more about KEDA, see https://keda.sh/docs/2.0/concepts/.

The deployment of auto-scaling objects is enabled through an option in the values YAML file.

You can configure the polling interval and the maximum number of replicas separately for Gateway pods and CoTurn pods.

  • Gateway Pod scaling
    • Gateway pods are scaled based on agent sign-ins.

  • CPU based scaling

WebRTC auto-scaling is also performed based on CPU and memory usage. CPU and memory limits for Gateway pods are configured in the values YAML file.
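CPU and memory limits in the values YAML typically follow the standard Kubernetes resources shape. The key path gateway.resources and the figures below are assumptions for illustration, not documented values:

```yaml
gateway:
  resources:
    requests:
      cpu: "1"
      memory: 2Gi
    limits:
      cpu: "2"
      memory: 4Gi
```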

  • CoTurn Pod scaling

Auto-scaling of CoTurn is performed based on CPU and memory usage only, with CPU and memory limits for CoTurn pods configured in the values YAML file in the same way.

SMTP settings

Not applicable

WebRTC depends on several other Genesys services, so Genesys recommends provisioning and configuring WebRTC after these services have been set up.
Service | Functionality
GWS | Used for environment and tenant configuration reading
GAuth | Used for WebRTC service and agent authentication
GVP | Used for voice calls - conferences, recording, and so on
Voice microservice | Used to handle voice calls
Tenant microservice | Used to store tenant configuration

For detailed information about the correct order of services deployment, see Order of services deployment.

Not applicable d703e174-b039-43c9-8859-e25b3a7feb22