View table: PEPrerequisites


Table structure:

  • productshort - String
  • Role - String
  • DisplayName - String
  • TocName - String
  • Dimension - String
  • Context - Wikitext
  • Product - String
  • Manual - String
  • UseCase - String
  • PEPageType - String
  • LimitationsText - Wikitext
  • HelmText - Wikitext
  • ThirdPartyText - Wikitext
  • StorageText - Wikitext
  • NetworkText - Wikitext
  • BrowserText - Wikitext
  • DependenciesText - Wikitext
  • GDPRText - Wikitext
  • IncludedServiceId - String

This table has 55 rows altogether.

Page productshort Role DisplayName TocName Dimension Context Product Manual UseCase PEPageType LimitationsText HelmText ThirdPartyText StorageText NetworkText BrowserText DependenciesText GDPRText IncludedServiceId
AUTH/Current/AuthPEGuide/Planning AUTH Before you begin Find out what to do before deploying Genesys Authentication. Genesys Authentication AuthPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 Genesys Authentication in Genesys Multicloud CX private edition is made up of three containers, one for each of its components:
  • gws-core-auth - Authentication API service
  • gws-ui-auth - Authentication UI service
  • gws-core-environment - Environment API service

The service also includes a Helm chart, which you must deploy to install all three containers for Genesys Authentication:

  • gauth

See Helm charts and containers for Authentication, Login, and SSO for the Helm chart version you must download for your release.

To download the Helm chart, navigate to the gauth folder in the JFrog repository. See Downloading your Genesys Multicloud CX containers for details.

Install the prerequisite dependencies listed in the Third-party services table before you deploy Genesys Authentication. Genesys Authentication uses PostgreSQL to store key-value pairs for the Authentication API and Environment API services. It uses Redis to cache data for the Authentication API service.

Ingress

Genesys Authentication supports both internal and external ingress with two ingress objects that are configured with the ingress and internal_ingress settings in the values.yaml file. See Configure Genesys Authentication for details about overriding Helm chart values.

  • ingress - External ingress for UIs and external API clients. External ingress can be public.
  • internal_ingress - Internal ingress for internal API clients. Internal ingress contains an extended list of API endpoints that are not available for external ingress. Internal ingress should not be public.

These ingress objects support Transport Layer Security (TLS) version 1.2. TLS is enabled by default and you can configure it by overriding the ingress.tls and internal_ingress.tls settings in values.yaml.

For example:
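A minimal illustrative sketch of this override in values.yaml (the secret names and hostnames are placeholders; use the values created for your deployment):

ingress:
  tls:
    - secretName: gauth-secret-ext
      hosts:
        - gauth.example.com
internal_ingress:
  tls:
    - secretName: gauth-secret-int
      hosts:
        - gauth-int.example.com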

In the example above:

  • secretName is the certificate and private key to use for TLS. The secret is a prerequisite and must be created before you deploy Genesys Authentication, unless you have a certificate ClusterIssuer installed and configured in the Kubernetes cluster, in which case the secret is created by the ClusterIssuer.
  • hosts is a list of the fully qualified domain names that should use the certificate. The list must be the same as the value configured for ingress.frontend and internal_ingress.frontend.

Cookies

Genesys Authentication components use cookies to identify HTTP/HTTPS user sessions.

The Authentication UI supports the web browsers listed in the Browsers table. Genesys Authentication must be deployed before other Genesys Multicloud CX private edition services. To complete provisioning the service, you must first deploy Web Services and Applications and the Tenant Service. For a look at the high-level deployment order, see Order of services deployment.
ContentAdmin/Internal/Small/Test5 ContentAdmin Test Prereq page for PE Intro statement summarizing this page. Internal Content Administration Small bf21dc7c-597d-4bbe-8df2-a2a64bd3f167
  • This release includes important security upgrades made to third-party software.
  • Important security improvements.
  • As of December 23, 2021, No results supports deployments on Google Kubernetes Engine (GKE) in Genesys Multicloud CX private edition, as part of the Early Adopter Program.
  • 20220330174606
Intro to third-party section Unstructured chunk Unstructured chunk Intro text for browser section. Unstructured chunk Intro text for GDPR section.
DES/Current/DESPEGuide/Planning DES Before you begin Find out what to do before deploying Designer. Designer DESPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 Designer currently supports multi-tenancy provided by the tenant Configuration Server. That is, each tenant should have a dedicated Configuration Server, and Designer can be shared across multiple tenants.

Before you begin:

  1. Install Kubernetes. Refer to the Kubernetes documentation site for installation instructions. You can also refer to the Genesys Docker Deployment Guide for information on Kubernetes and High Availability.
  2. Install Helm according to the instructions outlined in the Helm documentation site.

After you complete the above mandatory procedures, return to this document to complete deployment of Designer and DAS as a service in a K8s cluster.

Important
Designer applications cannot be used to handle default routed calls or voice interactions. IRD applications should be used for such scenarios until Designer adds support for handling default routed calls or voice interactions.
Download the Designer related Docker containers and Helm charts from the JFrog repository.

See Helm charts and containers for Designer for the Helm chart and container versions you must download for your release.

For more information on JFrog, refer to the Downloading your Genesys Multicloud CX containers topic in the Setting up Genesys Multicloud CX private edition document.

The following section lists the third-party prerequisites for Designer.
  • Kubernetes 1.19.x - 1.21.x
  • Helm 3.0
  • Docker
    • To store Designer and DAS Docker images in the local Docker registry.
  • Ingress Controller
    • If Designer and DAS are accessed from outside the K8s cluster, Genesys recommends deploying or configuring an ingress controller (for example, NGINX), if one is not already available. The blue-green deployment strategy also relies on ingress rules.
    • The Designer UI requires session stickiness. Configure session stickiness in the annotations parameter in the values.yaml file during Designer installation; see the sketch after this list.
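A hedged sketch of cookie-based session affinity annotations for an NGINX ingress controller (the placement under ingress.annotations and the cookie name are assumptions; the nginx.ingress.kubernetes.io annotations themselves are standard NGINX Ingress Controller settings):

ingress:
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "designer-session"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "3600"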

For information about setting up your Genesys Multicloud CX private edition platform, including Kubernetes, Helm, and other prerequisites, see Software requirements.

The following storage requirements are mandatory prerequisites:
  • Persistent Volumes (PVs)
    • Create persistent volumes for workspace storage (5 GB minimum) and logs (5 GB minimum).
    • Set the access mode for these volumes to ReadWriteMany.
    • The Designer manifest package includes a sample YAML file to create the Persistent Volumes required for Designer and DAS; a minimal sketch follows this list.
    • Persistent volumes must be shared across multiple K8s nodes. Genesys recommends using NFS to create Persistent Volumes.
  • Shared file System - NFS
    • For production, deploy the NFS server as highly available (HA) to avoid single points of failure. Genesys also recommends deploying the NFS storage in a Disaster Recovery (DR) topology to achieve continuous availability if one region fails.
    • By default, Designer and DAS containers run as the genesys user (uid:gid 500:500). For this reason, the shared volume must have permissions that allow write access to uid:gid 500:500. The optimal method is to change the ownership of the NFS server host path to the genesys user, for example: chown -R genesys:genesys <export_path>.
    • The Designer package includes a sample YAML file to create an NFS server. Use this only for demo/lab purposes.
    • Azure Files storage - If you opt for cloud storage, Azure Files storage is an option to consider, with the following requirements:
      • Zone-redundant storage (ZRS) for RWX volumes, with data replicated across zones and shared across multiple pods.
      • Provisioned capacity: 1 TiB
      • Baseline IO/s: 1424
      • Burst IO/s: 4000
      • Egress rate: 121.4 MiB/s
      • Ingress rate: 81.0 MiB/s
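A minimal sketch of an NFS-backed PersistentVolume with the properties described above (the PV name, NFS server address, and export path are placeholders):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: designer-workspace-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs.example.com
    path: /exports/designer/workspace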


  • If Designer and DAS are accessed from outside the K8s cluster, Genesys recommends deploying or configuring an ingress controller (for example, NGINX), if one is not already available. The blue-green deployment strategy also relies on ingress rules.
  • The Designer UI requires session stickiness. Configure session stickiness in the annotations parameter in the values.yaml file during Designer installation.
Unless otherwise noted, Designer supports the latest versions of the following browsers:
  • Mozilla Firefox
  • Google Chrome (see Important, below)
  • Microsoft Edge
  • Apple Safari

Internet Explorer (all versions) is not supported.

Important
For Google Chrome, Designer supports the n-1 version of the browser, that is, the version prior to the latest release.

Minimum display resolution

The minimum display resolution supported by Designer is 1920 x 1080.

Third-party cookies

Some features in Designer require the use of third-party cookies. Browsers must allow third-party cookies to be stored for Designer to work properly.

The following Genesys dependencies are mandatory prerequisites:
  • Genesys Web Services (GWS) 9.x
    • Configure GWS to work with a compatible version of Configuration Server.
  • Other Genesys Components
    • Authentication Service
    • Voice Microservices

For the order in which the Genesys services must be deployed, refer to the Order of services deployment topic in the Setting up Genesys Multicloud CX private edition document.

Designer supports the European Union's General Data Protection Regulation (GDPR) requirements and provides customers the ability to export or delete sensitive data using ElasticSearch APIs and other third-party tools.

For the purposes of GDPR compliance, Genesys is a data processor on behalf of customers who use Designer. Customers are the data controllers of the personal data that they collect from their end customers, that is, the data subjects. Designer Analytics can potentially store data collected from end users in ElasticSearch. This data can be queried by certain fields that are relevant to GDPR. Once identified, the data can be exported or deleted using ElasticSearch APIs and other third-party tools that customers find suitable for their needs.

In particular, the following SDR fields may contain PII or sensitive data that customers can choose to delete or export as required:

  • ANI - This SDR field contains the customer's phone number used to make voice calls handled by Designer applications.
  • variables.Contact - This SDR field is an object and can have multiple properties, such as name, email address, and other contact details. For example:

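A minimal illustrative sketch of a variables.Contact object as it might appear in an SDR (the field names and values are hypothetical):

{
  "variables": {
    "Contact": {
      "name": "Jane Doe",
      "email": "jane.doe@example.com",
      "phone": "+15550100"
    }
  }
}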

  • Application variables defined in the main application flow are also stored in the SDR under the variables object. These variables depend on application logic and may capture sensitive information, intentionally or unintentionally. It is recommended to mark such variables as secure (see Securing Variables in Designer Help for more details). If they are captured in analytics, they can also be used to identify candidate SDRs for deletion or retrieval. The same applies to userdata key-value pairs attached to interaction data, which are captured in the calldata object in the SDR.
Important
It is the customer's responsibility to remove any PII or sensitive data within 21 days, if required by General Data Protection Regulation (GDPR) standards.
For general information about Genesys support for GDPR compliance, see General Data Protection Regulation.
Draft:AUTH/Current/AuthPEGuide/Planning Draft:AUTH Before you begin Find out what to do before deploying Genesys Authentication. Genesys Authentication AuthPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 Genesys Authentication in Genesys Multicloud CX private edition is made up of three containers, one for each of its components:
  • gws-core-auth - Authentication API service
  • gws-ui-auth - Authentication UI service
  • gws-core-environment - Environment API service

The service also includes a Helm chart, which you must deploy to install all three containers for Genesys Authentication:

  • gauth

See Helm charts and containers for Authentication, Login, and SSO for the Helm chart version you must download for your release.

To download the Helm chart, navigate to the gauth folder in the JFrog repository. See Downloading your Genesys Multicloud CX containers for details.

Install the prerequisite dependencies listed in the Third-party services table before you deploy Genesys Authentication. Genesys Authentication uses PostgreSQL to store key-value pairs for the Authentication API and Environment API services. It uses Redis to cache data for the Authentication API service.

Create a PostgreSQL database and user

Before deploying Genesys Authentication, you must create a PostgreSQL database and a user with superuser permissions. This example creates the "gauth_master" user and "gauth_db" database, but you can use any names that make sense for your organization. Note: Make note of the user name, password, and database name; you need them to configure the postgres settings in the values.yaml file.
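A minimal psql sketch of this step, assuming standard PostgreSQL syntax (replace the password placeholder with your own value):

CREATE USER gauth_master WITH PASSWORD '<password>' SUPERUSER;
CREATE DATABASE gauth_db OWNER gauth_master;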

Ingress

Genesys Authentication supports both internal and external ingress with two ingress objects that are configured with the ingress and internal_ingress settings in the values.yaml file. See Configure Genesys Authentication for details about overriding Helm chart values.

  • ingress - External ingress for UIs and external API clients. External ingress can be public.
  • internal_ingress - Internal ingress for internal API clients. Internal ingress contains an extended list of API endpoints that are not available for external ingress. Internal ingress should not be public.

These ingress objects support Transport Layer Security (TLS) version 1.2. TLS is enabled by default and you can configure it by overriding the ingress.tls and internal_ingress.tls settings in values.yaml.

For example:
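A minimal illustrative sketch of this override in values.yaml (the secret names and hostnames are placeholders; use the values created for your deployment):

ingress:
  tls:
    - secretName: gauth-secret-ext
      hosts:
        - gauth.example.com
internal_ingress:
  tls:
    - secretName: gauth-secret-int
      hosts:
        - gauth-int.example.com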

In the example above:

  • secretName is the certificate and private key to use for TLS. The secret is a prerequisite and must be created before you deploy Genesys Authentication, unless you have a certificate ClusterIssuer installed and configured in the Kubernetes cluster, in which case the secret is created by the ClusterIssuer.
  • hosts is a list of the fully qualified domain names that should use the certificate. The list must be the same as the value configured for ingress.frontend and internal_ingress.frontend.

Cookies

Genesys Authentication components use cookies to identify HTTP/HTTPS user sessions.

The Authentication UI supports the web browsers listed in the Browsers table. Genesys Authentication must be deployed before other Genesys Multicloud CX private edition services. To complete provisioning the service, you must first deploy Web Services and Applications and the Tenant Service. For a look at the high-level deployment order, see Order of services deployment.
Draft:ContentAdmin/Boilerplate/PEGuide/Planning Draft:ContentAdmin Before you begin Find out what to do before deploying <service_name>. Internal Content Administration PEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167
List any limitations or assumptions related to the deployment.
List the containers and the <service_names> they include. Provide any specific information about the container and its Helm charts. Link to the "suite-level" doc for common information about how to download the Helm charts in JFrog Edge: Downloading your Genesys Multicloud CX containers

See Helm charts and containers for <service_name> for the Helm chart version you must download for your release.

For information about how to download the Helm charts, see Downloading your Genesys Multicloud CX containers.

List any third-party services that are required (both common across Genesys Multicloud CX private edition and specific to <service_name>).
Describe storage requirements, including:
  • Size
  • Type (HDD, SDD, NVMe)
  • IOPS
  • Latency sensitive (local vs netapp disk)
  • Specific requirements for third-party services, including HA setup, connectivity, expected sizing and scaling models
  • List which data/PVC storage are critical and need to be backed up for redundancy and data protection.
Describe network requirements, including:
  • Required properties for ingress, such as:
    • Cookies usage
    • Header requirements (client IP and redirect, passthrough)
    • Session stickiness
    • Allowlisting (optional)
    • TLS (optional)
  • Cross-region bandwidth
  • External connections from the Kubernetes cluster to other systems. This includes connecting to Genesys Cloud CX for hybrid services (such as AI, WEM) as well as "mixed" environments where some components are still deployed as VMs. Note that mixed environments are mainly for transition periods when customers migrate from a classic premise environment to Genesys Multicloud CX private edition.
  • WAF Rules (specific only for services handling internet traffic)
  • Pod Security Policy
  • TLS/SSL Certificate configurations
List supported browsers/versions for the UI, if applicable.
Describe any dependencies <service_name> has on other Genesys services. Include a link to the "suite-level" documentation for the order in which services must be deployed. For example, the Auth and GWS services must be deployed and running before deploying the WWE service. Order of services deployment
Provide information about GDPR support. Include a link to the "suite-level" documentation. Link to come
Draft:ContentAdmin/Internal/Small/Test5 Draft:ContentAdmin Test Prereq page for PE Intro statement summarizing this page. Internal Content Administration Small bf21dc7c-597d-4bbe-8df2-a2a64bd3f167
  • This release includes important security upgrades made to third-party software.
  • Important security improvements.
  • As of December 23, 2021, No results supports deployments on Google Kubernetes Engine (GKE) in Genesys Multicloud CX private edition, as part of the Early Adopter Program.
  • 20220330174606
Intro to third-party section Unstructured chunk Unstructured chunk Intro text for browser section. Unstructured chunk Intro text for GDPR section.
Draft:DES/Current/DESPEGuide/Planning Draft:DES Before you begin Find out what to do before deploying Designer. Designer DESPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 Designer currently supports multi-tenancy provided by the tenant Configuration Server. That is, each tenant should have a dedicated Configuration Server, and Designer can be shared across multiple tenants.

Before you begin:

  1. Install Kubernetes. Refer to the Kubernetes documentation site for installation instructions. You can also refer to the Genesys Docker Deployment Guide for information on Kubernetes and High Availability.
  2. Install Helm according to the instructions outlined in the Helm documentation site.

After you complete the above mandatory procedures, return to this document to complete deployment of Designer and DAS as a service in a K8s cluster.

Important
Designer applications cannot be used to handle default routed calls or voice interactions. IRD applications should be used for such scenarios until Designer adds support for handling default routed calls or voice interactions.
Download the Designer related Docker containers and Helm charts from the JFrog repository.

See Helm charts and containers for Designer for the Helm chart and container versions you must download for your release.

For more information on JFrog, refer to the Downloading your Genesys Multicloud CX containers topic in the Setting up Genesys Multicloud CX private edition document.

The following section lists the third-party prerequisites for Designer.
  • Kubernetes 1.19.x - 1.21.x
  • Helm 3.0
  • Docker
    • To store Designer and DAS Docker images in the local Docker registry.
  • Ingress Controller
    • If Designer and DAS are accessed from outside the K8s cluster, Genesys recommends deploying or configuring an ingress controller (for example, NGINX), if one is not already available. The blue-green deployment strategy also relies on ingress rules.
    • The Designer UI requires session stickiness. Configure session stickiness in the annotations parameter in the values.yaml file during Designer installation; see the sketch after this list.
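A hedged sketch of cookie-based session affinity annotations for an NGINX ingress controller (the placement under ingress.annotations and the cookie name are assumptions; the nginx.ingress.kubernetes.io annotations themselves are standard NGINX Ingress Controller settings):

ingress:
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "designer-session"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "3600"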

For information about setting up your Genesys Multicloud CX private edition platform, including Kubernetes, Helm, and other prerequisites, see Software requirements.

The following storage requirements are mandatory prerequisites:
  • Persistent Volumes (PVs)
    • Create persistent volumes for workspace storage (5 GB minimum) and logs (5 GB minimum).
    • Set the access mode for these volumes to ReadWriteMany.
    • The Designer manifest package includes a sample YAML file to create the Persistent Volumes required for Designer and DAS; a minimal sketch follows this list.
    • Persistent volumes must be shared across multiple K8s nodes. Genesys recommends using NFS to create Persistent Volumes.
  • Shared file System - NFS
    • For production, deploy the NFS server as highly available (HA) to avoid single points of failure. Genesys also recommends deploying the NFS storage in a Disaster Recovery (DR) topology to achieve continuous availability if one region fails.
    • By default, Designer and DAS containers run as the genesys user (uid:gid 500:500). For this reason, the shared volume must have permissions that allow write access to uid:gid 500:500. The optimal method is to change the ownership of the NFS server host path to the genesys user, for example: chown -R genesys:genesys <export_path>.
    • The Designer package includes a sample YAML file to create an NFS server. Use this only for demo/lab purposes.
    • Azure Files storage - If you opt for cloud storage, Azure Files storage is an option to consider, with the following requirements:
      • Zone-redundant storage (ZRS) for RWX volumes, with data replicated across zones and shared across multiple pods.
      • Provisioned capacity: 1 TiB
      • Baseline IO/s: 1424
      • Burst IO/s: 4000
      • Egress rate: 121.4 MiB/s
      • Ingress rate: 81.0 MiB/s
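A minimal sketch of an NFS-backed PersistentVolume with the properties described above (the PV name, NFS server address, and export path are placeholders):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: designer-workspace-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs.example.com
    path: /exports/designer/workspace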


  • If Designer and DAS are accessed from outside the K8s cluster, Genesys recommends deploying or configuring an ingress controller (for example, NGINX), if one is not already available. The blue-green deployment strategy also relies on ingress rules.
  • The Designer UI requires session stickiness. Configure session stickiness in the annotations parameter in the values.yaml file during Designer installation.
Unless otherwise noted, Designer supports the latest versions of the following browsers:
  • Mozilla Firefox
  • Google Chrome (see Important, below)
  • Microsoft Edge
  • Apple Safari

Internet Explorer (all versions) is not supported.

Important
For Google Chrome, Designer supports the n-1 version of the browser, that is, the version prior to the latest release.

Minimum display resolution

The minimum display resolution supported by Designer is 1920 x 1080.

Third-party cookies

Some features in Designer require the use of third-party cookies. Browsers must allow third-party cookies to be stored for Designer to work properly.

The following Genesys dependencies are mandatory prerequisites:
  • Genesys Web Services (GWS) 9.x
    • Configure GWS to work with a compatible version of Configuration Server.
  • Other Genesys Components
    • Authentication Service
    • Voice Microservices

For the order in which the Genesys services must be deployed, refer to the Order of services deployment topic in the Setting up Genesys Multicloud CX private edition document.

Designer supports the European Union's General Data Protection Regulation (GDPR) requirements and provides customers the ability to export or delete sensitive data using ElasticSearch APIs and other third-party tools.

For the purposes of GDPR compliance, Genesys is a data processor on behalf of customers who use Designer. Customers are the data controllers of the personal data that they collect from their end customers, that is, the data subjects. Designer Analytics can potentially store data collected from end users in ElasticSearch. This data can be queried by certain fields that are relevant to GDPR. Once identified, the data can be exported or deleted using ElasticSearch APIs and other third-party tools that customers find suitable for their needs.

In particular, the following SDR fields may contain PII or sensitive data that customers can choose to delete or export as required:

  • ANI - This SDR field contains the customer's phone number used to make voice calls handled by Designer applications.
  • variables.Contact - This SDR field is an object and can have multiple properties, such as name, email address, and other contact details. For example:

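A minimal illustrative sketch of a variables.Contact object as it might appear in an SDR (the field names and values are hypothetical):

{
  "variables": {
    "Contact": {
      "name": "Jane Doe",
      "email": "jane.doe@example.com",
      "phone": "+15550100"
    }
  }
}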

  • Application variables defined in the main application flow are also stored in the SDR under the variables object. These variables depend on application logic and may capture sensitive information, intentionally or unintentionally. It is recommended to mark such variables as secure (see Securing Variables in Designer Help for more details). If they are captured in analytics, they can also be used to identify candidate SDRs for deletion or retrieval. The same applies to userdata key-value pairs attached to interaction data, which are captured in the calldata object in the SDR.
Important
It is the customer's responsibility to remove any PII or sensitive data within 21 days, if required by General Data Protection Regulation (GDPR) standards.
For general information about Genesys support for GDPR compliance, see General Data Protection Regulation.
Draft:GAWFM/Current/GAWFMPEGuide/Planning Draft:GAWFM Before you begin Find out what to do before deploying Gplus Adapter for WFM. Gplus Adapter for WFM GAWFMPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 Persistent storage access is required for recovery logs. For information about how to download the Helm charts, see Downloading your Genesys Multicloud CX containers.

Containers:

The Gplus Adapter for WFM has the following containers:

  • Gplus Adapter provisioning container
  • Gplus Adapter service container for Aspect
  • Gplus Adapter service container for Nice
  • Gplus Adapter service container for Teleopti
  • Gplus Adapter service container for Verint

Helm charts:

  • Gplus Adapter service deployment – gpluswfm-100.0.xx.tgz
List any third-party services that are required (both common across Genesys Multicloud CX private edition and specific to <service_name>).

Resource recommendations

  • Memory consumption is proportional to the size of the customer’s configuration server database.
  • Standard operating resource usage is substantially lower; however, spikes can occur due to environment issues, such as server disconnects, which can cause significant memory and CPU load.
Storage estimates

Agents | Memory | CPUs
< 5k concurrent | 4 GB | 1 CPU
5k - 10k concurrent | 8 GB | 1 CPU
10k+ | 12 GB | 2 CPUs


These estimates were compiled under the following scenarios:
Media type | TServers only | TServers and Interaction Servers
Voice calls | 60 interactions / second | 20 interactions / second
Emails | - | 20 emails / second
  • These volumes are per historical stream. For example, 5 interactions per second with three reporting streams should be considered as 15 interactions per second. Note: RTA streams do not contribute substantially to resource requirements.
  • These tests were conducted with 8 GB RAM, 1 CPU at a speed of 2.7GHz, 100k concurrent agents, and clean routing (mostly an issue for multimedia interactions on Interaction Server).

Storage sizing calculator

The adapter requires storage of recovery logs for 1 week. The recovery logging for the adapter stores the full data for every event it receives from Configuration Server, Interaction Server, and TServer. Customers need to provide enough storage to handle 7 days of TServer and Interaction Server event data. In addition, the adapter downloads a fresh copy of all CME data daily from Configuration Server. A rough approximation for how much storage is required can be expressed with the following formula:

Storage = 7 days × (EventSize × #Events + CMESize) × Compression × Padding

Storage: The amount of storage required by the adapter.

EventSize: The average size of TServer/Interaction Server events, in bytes. This depends on the business rules of the contact center and how much user data is attached to events. A typical value is in the range of 1-5 KB.

#Events: The average number of TServer/Interaction Server events processed in a typical day. This can vary widely depending on the complexity and size of the contact center as well as call volume.

CMESize: The aggregate size of all CME data being monitored by the adapter. The amount of memory taken up by the Configuration Server application can be used as an approximation.

Compression: A multiplier representing the amount of compression of the gzip log files, typically 0.05-0.10.

Padding: A multiplier to provide extra space in case of periods of high activity or unexpected restarts. A value of 2 is recommended.

For example:

To calculate the amount of storage required by a contact center with an average EventSize of 3 KB, 5 million events per day, and a CMESize of 1 GB:

 Storage = 7 × (3 KB × 5,000,000 + 1 GB) × 0.10 × 2 ≈ 21 GB per week

Other storage requirements

The speed and throughput requirements for recovery log storage depend on the amount of logging calculated in the previous section. If the adapter writes 21 GB per week, it writes 3 GB per day. If most of the logging takes place during the 16 busiest hours of the day, the adapter needs to write at a rate of:

Rate = 3 GB ÷ 16 hr ≈ 52 KB/s

This kind of throughput can be handled by most disk storage providers.

CNI for Direct Pod Routing

The GPlus Adapter does not have any special requirements for CNI networking.

Ingress

The GPlus Adapter does not expose any http/https endpoints for ingress.

Subnet sizing

The GPlus Adapter does not support deployments with multiple replicas and so requires only one IP address per instance.

External Connections

The Gplus WFM Adapter requires connections to Genesys Configuration Server, TServer, and Interaction Server in order to monitor state information for interactions, agents, and other configuration objects. The adapter may also have optional connections to WFM FTP servers for transferring reports. For RTA streaming, the adapter may either act as a server for the various WFM RTA clients to connect to (applies to the Teleopti, NICE-IEX, and Verint vendors), or as a client in the case of Aspect. Prometheus metrics are exposed on the /metrics HTTP endpoint on the adapter, although this endpoint is not exposed to the internet.

External connections
Client | Network type | Client network | Server | Server network | Protocol | Notes
Gplus WFM Adapter | Any | Any | TServer | Any | TCP | 0 or more, plus backup servers
Gplus WFM Adapter | Any | Any | Interaction Server | Any | TCP | 0 or more, plus backup servers
Gplus WFM Adapter | Any | Any | Config Server | Any | TCP | 1, plus any backup servers
Gplus WFM Adapter | Any | Any | WFM FTP Server | Any | TCP | Configurable protocol, can be either or both
WFM RTA Server | Any | Any | Gplus WFM Adapter | Any | TCP | 0 or more. Applies to NICE, Teleopti, and Verint adapters.
Gplus WFM Adapter | Any | Any | WFM RTA Server | Any | TCP | 0 or more. Applies only to Aspect adapter.
Prometheus Logging | Any | Any | Gplus WFM Adapter | Any | HTTP or HTTPS | -
N/A
Describe any dependencies <service_name> has on other Genesys services. Include a link to the "suite-level" documentation for the order in which services must be deployed. For example, the Auth and GWS services must be deployed and running before deploying the WWE service. Order of services deployment
  • SIP server
  • T-Server
  • Config Server
Provide information about GDPR support. Include a link to the "suite-level" documentation. Link to come
Draft:GVP/Current/GVPPEGuide/Planning Draft:GVP Before you begin Find out what to do before deploying Genesys Voice Platform. Genesys Voice Platform GVPPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167
  • Resource Manager does not use gateway LRG configurations. Instead, it uses the contact center ID coming from SIP Server as gvp-tenant-id in the INVITE message to identify the tenant and pick the IVR Profiles.
  • Only a single MCP LRG is supported per GVP deployment.
  • Only the specific component configuration options documented in Helm values.yaml overrides can be modified. Other configuration options can't be changed.
  • DID/DID groups are managed as part of Designer applications.
  • SIP TLS / SRTP are currently not supported.
You must download the GVP-related Docker containers and Helm charts from the JFrog repository. For Docker container and Helm chart versions, refer to Helm charts and containers for Genesys Voice Platform.

For more information on JFrog, refer to the Downloading your Genesys Multicloud CX containers topic in the Setting up Genesys Multicloud CX private edition document.

Media Control Platform

Storage requirement for production (min)

Persistent Volume | Size | Type | IOPS | Functionality | Container | Critical | Backup needed
recordings-volume | 100Gi | RWO | high | Storing recordings, dual AZ | gvp-mcp, rup | Y | Y
rup-volume | 40Gi | RWO | high | Storing recordings temporarily, dual AZ | rup | Y | Y
log-pvc | 50Gi | RWO | medium | storing log files | gvp-mcp | Y | Y

Storage requirements for Sandbox

Persistent Volume | Size | Type | IOPS | Functionality | Container | Critical | Backup needed
recordings-volume | 50Gi | RWO | high | Storing recordings, dual AZ | gvp-mcp, rup | Y | Y
rup-volume | 20Gi | RWO | high | Storing recordings temporarily, dual AZ | rup | Y | Y
log-pvc | 25Gi | RWO | medium | storing log files | gvp-mcp | Y | Y

Resource Manager

Storage requirement for production (min)

Persistent Volume | Min Size | Type | IOPS | Functionality | Container | Critical | Backup needed
billingpvc | 20Gi | RWO | high | billing | gvp-rm | Y | Y
log-pvc | 50Gi | RWO | medium | storing log files | gvp-rm | Y | Y

Storage requirements for Sandbox

Persistent Volume | Min Size | Type | IOPS | Functionality | Container | Critical | Backup needed
billingpvc | 20Gi | RWO | high | billing | gvp-rm | Y | Y
log-pvc | 10Gi | RWO | medium | storing log files | gvp-rm | Y | Y

Service Discovery

Not applicable

Reporting Server

Storage requirement for production (min)

Persistent Volume | Min Size | Type | IOPS | Functionality | Container | Critical | Backup needed
billing-pvc | 20Gi | RWO | High | Stores ActiveMQ data and config information | gvp-rs | Y | Y

Storage requirement for Sandbox

Persistent Volume | Min Size | Type | IOPS | Functionality | Container | Critical | Backup needed
billing-pvc | 10Gi | RWO | High | Stores ActiveMQ data and config information | gvp-rs | Y | Y

GVP Configuration Server

Not applicable

Media Control Platform

Ingress

Not applicable

HA/DR

MCP is deployed with autoscaling in all regions. For more details, see the section Auto-scaling.

Calls are routed to active MCPs by GVP Resource Manager (RM); if an MCP instance terminates, the calls are routed to a different MCP instance.

Cross-region bandwidth

MCPs are not expected to make cross-region requests during normal operation.

External connections

Not applicable

Pod Security Policy

All containers run as the genesys user (500), a non-root user.
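A minimal sketch of a pod securityContext consistent with this policy (illustrative only; the actual values are set by the GVP Helm charts):

securityContext:
  runAsUser: 500
  runAsGroup: 500
  runAsNonRoot: true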

SMTP Settings

Not applicable

TLS/SSL Certificates configurations

Not applicable

Resource Manager

Ingress

Not applicable

HA/DR

Resource Manager is deployed as an active-active pair.

Cross-region bandwidth

Resource Manager is deployed per region. There is no cross-region deployment.

External connections

Not applicable

Pod Security Policy

All containers run as the genesys user (500), a non-root user (see the securityContext sketch under Media Control Platform).

SMTP Settings

Not applicable

TLS/SSL Certificates configurations

Not applicable

Service Discovery

Ingress

Not applicable

HA/DR

Service Discovery is a singleton service that is restarted if it shuts down unexpectedly or becomes unavailable.

Cross-region bandwidth

Service Discovery is not expected to make cross-region requests during normal operation.

External connections

Not applicable

Pod Security Policy

All containers run as the genesys user (500), a non-root user (see the securityContext sketch under Media Control Platform).

SMTP Settings

Not applicable

TLS/SSL Certificates configurations

Not applicable

Reporting Server

Ingress

Not applicable

HA/DR

Reporting Server is deployed as a single pod service.

Cross-region bandwidth

Reporting Server is deployed per region. There is no cross-region deployment.

External connections

Not applicable

Pod Security Policy

All containers run as the genesys user (500), a non-root user (see the securityContext sketch under Media Control Platform).

SMTP Setting

Not applicable

TLS/SSL Certificates configurations

Not applicable

GVP Configuration Server

Ingress

Not applicable

HA/DR

GVP Configuration Server is deployed as a singleton. If the GVP Configuration Server crashes, a new pod is created. The GVP services continue to handle calls while the GVP Configuration Server is unavailable; only new configuration changes, such as new MCP pods, are not available.

Cross-region bandwidth

GVP Configuration Server is not expected to make cross-region requests during normal operation.

External connections

External service | Functionality
PostgreSQL database | -

Pod Security Policy

All containers run as the genesys user (500), a non-root user (see the securityContext sketch under Media Control Platform).

SMTP Settings

Not applicable

TLS/SSL Certificates configurations

Not applicable

N/A

Media Control Platform

Service | Functionality
Consul | Consul service must be deployed before deploying MCP for proper service registration in GVP Configuration Server and RM.

Resource Manager

Service | Functionality
GVP Configuration Server | GVP Configuration Server must be deployed before deploying RM to function properly.

Service Discovery

Service | Functionality
Consul | Consul service must be deployed before deploying Service Discovery for proper service registration in GVP Configuration Server and Resource Manager.

Reporting Server

Service | Functionality
GVP Configuration Server | GVP Configuration Server must be deployed before deploying RS to function properly.

GVP Configuration Server

N/A

This section describes product-specific aspects of Genesys Voice Platform support for the European Union's General Data Protection Regulation (GDPR) in premise deployments. For general information about Genesys support for GDPR compliance, see General Data Protection Regulation.

Warning

Disclaimer: The information contained here is not considered final. This document will be updated with additional technical information.

Data Retention Policies

GVP has configurable retention policies that allow data to expire. GVP allows aggregating data for items like peak and call volume reporting; the aggregated data is anonymous. Detailed call detail records (CDRs) include DNIS and ANI data. The Voice Application Reporter (VAR) data could potentially contain personal data and would have to be deleted when requested. The log data files can contain sensitive information (possibly masked), but the data must be rotated/expired frequently to meet the needs of GDPR.

Configuration Settings

Media Server

Media Server is capable of storing data and sending alarms that can potentially contain sensitive information, but by default, the data is typically automatically cleansed (by the log rollover process) within 40 days.

The location of these files can be configured in the GVP Media Control Platform configuration (default paths are shown below):

  • vxmli:recordutterance-path = $InstallationRoot$/utterance/
  • vxmli:recording-basepath = $InstallationRoot$/record/
  • Netann:record-basepath = $InstallationRoot$/record
  • msml:cpd-record-basepath = $InstallationRoot$/record/
  • msml:record-basepath = $InstallationRoot$
  • msml:record-irrecoverablerecordpostdir = $InstallationRoot$/cache/record/failed
  • mpc:recordcachedir = $InstallationRoot$/cache/record
  • calllog:directory = $InstallationRoot$/callrec/

Log files and temporary files can be saved. The location of these files can be configured in the GVP Media Control Platform configuration (default paths are shown below):

  • vxmli:logdir = $InstallationRoot$/logs/
  • vxmli:tmpdir = $InstallationRoot$/tmp/
  • vxmli:directories-save_tempfiles = $InstallationRoot$/tmp/

Note: Changing the default values of any of the above MCP options is not supported in the initial private edition release.

Also, additional sinks are available where alarms and potentially sensitive information can be captured. See Table 6 and Appendix H of the Genesys Voice Platform User’s Guide for more information. The metrics can be configured in the GVP Media Control Platform configuration:

  • ems.log_sinks = MFSINK|DATAC|TRAPSINK
  • ems:metricsconfig-DATAC = *
  • ems:dc-default-metricsfilter = 0-16,18,25,35,36,41,52-55,74,128,136-141
  • ems.metricsconfig.MFSINK = 0-16,18-41,43,52-56,72-74,76-81,127-129,130,132-141,146-152

GVP Resource Manager

Resource Manager is capable of storing data and sending alarms that potentially contain sensitive information, but by default, the data is typically automatically cleansed (by the log rollover process) within 40 days.

Customers are advised to understand the GVP logging (for all components) and the sinks (destinations) for information that the platform can potentially capture. See Table 6 and Appendix H of the Genesys Voice Platform User's Guide for more information.

GVP Reporting Server

The Reporting Server is capable of storing and sending alarms and potentially sensitive information, but by default, these components process but do not store consumer PII. Customers are advised to understand the GVP logging (for all components) and the sinks (destinations) for information that the platform can potentially capture. See Table 6 and Appendix H of the Genesys Voice Platform User's Guide for more information.

By default, Reporting Server is designed to collect statistics and other user information. The retention period for this information is configurable, with most data stored for less than 40 days. Customers should work with their application designers to understand what information is captured as part of the application and whether the data could be considered sensitive.

Customers can change these settings as needed by using Helm chart overrides in values.yaml.

Data Retention Specific Settings

  • rs.db.retention.operations.daily.default: "40"
  • rs.db.retention.operations.monthly.default: "40"
  • rs.db.retention.operations.weekly.default: "40"
  • rs.db.retention.var.daily.default: "40"
  • rs.db.retention.var.monthly.default: "40"
  • rs.db.retention.var.weekly.default: "40"
  • rs.db.retention.cdr.default: "40"

Identifying Sensitive Information for Processing

The following example demonstrates how to find this information in the Reporting Server database, for the case where 'Session_ID' is considered sensitive:

  • select * from dbo.CUSTOM_VARS where session_ID = '018401A9-100052D6';
  • select * from dbo.VAR_CDRS where session_ID = '018401A9-100052D6';
  • select * from dbo.EVENT_LOGS where session_ID = '018401A9-100052D6';
  • select * from dbo.MCP_CDR where session_ID = '018401A9-100052D6';
  • select * from dbo.MCP_CDR_EXT where session_ID = '018401A9-100052D6';

An example of a SQL query that might be used to understand whether specific information is sensitive:
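A hedged sketch of such a query, assuming the dbo.CUSTOM_VARS table used in the examples above (the column and variable names here are hypothetical):

SELECT session_ID, name, value
FROM dbo.CUSTOM_VARS
WHERE name IN ('customer_name', 'customer_email');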

Draft:GWS/Current/GWSPEGuide/Planning Draft:GWS Before you begin Find out what to do before deploying Genesys Web Services and Applications. Genesys Web Services and Applications GWSPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 Genesys Web Services and Applications (GWS) in Genesys Multicloud CX private edition is made up of multiple containers and Helm charts. The pages in this "Configure and deploy" chapter walk you through how to deploy the following Helm charts:
  • GWS services (gws-services) - all the GWS components.
  • GWS ingress (gws-ingress) - provides internal and external access to GWS services. Internal ingress is used for cross-component communication inside the GWS deployment. It also can be used by other clients located inside the same Kubernetes cluster. External ingress provides access to GWS services to clients located outside the Kubernetes cluster. If you are deploying Genesys Web Services and Applications in a single namespace with other private edition services, then you do not need to deploy GWS ingress.

GWS also includes a Helm chart for Nginx (wwe-nginx) for Workspace Web Edition - see the Workspace Web Edition Private Edition Guide for details about how to deploy this chart.

See Helm charts and containers for Genesys Web Services and Applications for the Helm chart versions you must download for your release.

For information about downloading Helm charts from JFrog Edge, see Downloading your Genesys Multicloud CX containers.

Install the prerequisite dependencies listed in the Third-party services table before you deploy Genesys Web Services and Applications. See Software requirements for a full list of prerequisites and third-party services required by all Genesys Multicloud CX private edition services. GWS uses PostgreSQL to store tenant information, Redis to cache session data, and Elasticsearch to store monitored statistics for fast access. If you set up any of these services as dedicated services for GWS, they have the following minimal requirements:

PostgreSQL

  • CPU: 2
  • RAM: 8 GB
  • HDD: 50 GB

Redis

  • 2 nodes:
    • CPU: 2
    • RAM: 8 GB
    • HDD: 20 GB

Elasticsearch

  • 3 "master" nodes:
    • CPU: 2
    • RAM: 8 GB
    • HDD: 20 GB
  • 4 "data" nodes
    • CPU: 4
    • RAM: 16 GB
    • HDD: 20 GB
GWS ingress objects support Transport Layer Security (TLS) version 1.2 for a secure connection between Kubernetes cluster ingress and GWS ingress. TLS is disabled by default, but you can configure it for internal and external ingress by overriding the entryPoints.internal.ingress.tls and entryPoints.external.ingress.tls sections of the GWS ingress Helm chart.

For example:

entryPoints:
  external:
    ingress:
      tls:
        - secretName: gws-secret-ext
          hosts:
            - gws.genesys.com

In the example above:

  • secretName is the name of the Kubernetes secret that contains the certificate. The secret is a prerequisite and must be created before you deploy GWS ingress.
  • hosts is a list of the fully qualified domain names that should use the certificate. The list must be the same as the value configured for the entryPoints.external.ingress.hosts parameter.

Cookies

GWS components use cookies for the following purposes:

  • identify HTTP/HTTPS user sessions
  • identify CometD user sessions
  • support session stickiness

You can use any of the following browsers for UIs:

  • Chrome 75+
  • Firefox 68+
  • Firefox ESR 60.9
  • Microsoft Edge
Genesys Web Services and Applications must be deployed after Genesys Authentication.

For a look at the high-level deployment order, see Order of services deployment in the Setting up Genesys Multicloud CX Private Edition guide.

Draft:IXN/Current/IXNPEGuide/Planning Draft:IXN Before you begin Find out what to do before deploying Interaction Server. Interaction Server IXNPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 The current version of IXN Server:
  • supports a single-region deployment model only
  • does not support scaling or HA
  • requires a dedicated PostgreSQL deployment per customer
Available IXN containers can be found under the following names in the registry:
  • ixn/ixn_vq_node
  • ixn/ixn_node
  • ixn/interaction_server

Available Helm charts can be found under the name ixn-<version>.

For information about downloading Genesys containers and Helm charts from JFrog Edge, see the suite-level documentation: Downloading your Genesys Multicloud CX containers.

The following are the minimum versions supported by IXN Server:
  • Kubernetes 1.17+
  • Helm 3.0
If file logging is configured for IXN Server, a storage volume must be mounted to the IXN Server container. The storage must be able to sustain writes of up to 100 MB/min, with peaks of 10 MB/s for up to 2 minutes. The storage size depends on the logging configuration.

For storage characteristics of the IXN Server database, refer to the PostgreSQL documentation.

Contact your account representative if you need assistance with sizing calculations.

Not applicable

Not applicable

Tenant service. For more information, refer to the Tenant Service Private Edition Guide.


Provide information about GDPR support. Include a link to the "suite-level" documentation. Link to come
Draft:PEC-AD/Current/WWEPEGuide/Planning Draft:PEC-AD Before you begin Find out what to do before deploying Workspace Web Edition. Agent Desktop WWEPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 There are no limitations or assumptions related to the deployment. The Workspace Web Edition Helm charts are included in the Genesys Web Services (GWS) Helm charts. You can access them when you download the GWS Helm charts from JFrog using your credentials.

See Helm charts and containers for Genesys Web Services and Applications for the Helm chart version you must download for your release.

For information about downloading Genesys Helm charts from JFrog Edge, refer to this article: Downloading your Genesys Multicloud CX containers.

There are no specific storage requirements for Workspace Web Edition. Network requirements include:
  • Required properties for ingress:
    • Cookies usage: None
    • Header requirements - client IP & redirect, passthrough: None
    • Session stickiness: None
    • Allowlisting - optional: None
    • TLS for ingress - optional (you can enable or disable TLS on the connection): Through annotations, like any UI or API in the solution
  • Cross-region bandwidth: N/A
  • External connections from the Kubernetes cluster to other systems: N/A
  • WAF Rules (specific only for services handling internet traffic): N/A
  • Pod Security Policy: N/A
  • High-Availability/Disaster Recovery: Refer to High availability and disaster recovery
  • TLS/SSL Certificate configurations: No specific requirements
You can use any of the supported browsers to run Agent Workspace on the client side.

Mandatory Dependencies

The following services must be deployed and running before deploying the WWE service. For more information, refer to Order of services deployment.

  • Genesys Authentication Service:
    • A redirect must be configured in Auth/Environment to allow an agent to log in from the WWE URL. The redirect should be configured in the Auth onboarding script, according to the DNS assigned to the WWE service.
  • GWS services:
    • The CORS rules for WWE URLs must be configured in GWS. This should be configured in the GWS onboarding script, according to the DNS assigned to the WWE service.
    • The GWS API URL should be specified at the WWE deployment time as part of the Helm values.
  • TLM service:
    • The CORS rules for the domain where WWE is declared must be configured in Telemetry Service (TLM).

Optional Dependencies

Depending on the deployed architecture, the following services must be deployed and running before deploying the WWE service:

  • WebRTC Service: To allow WebRTC in the browser
  • Telemetry Service: To allow browser observability (metrics and logs)

Miscellaneous desktop-side optional dependencies

The following software must or might be deployed on agent workstations to allow agents to leverage the WWE service:

  • Mandatory: A browser referenced in the supported browser list.
  • Optional: Genesys Softphone: a SIP or WebRTC softphone to handle the voice channel of agents.
Workspace Web Edition does not have specific GDPR support.
Draft:PEC-CAB/Current/CABPEGuide/Planning Draft:PEC-CAB Before you begin Find out what to do before deploying Genesys Callback. Callback CABPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 Genesys Engagement Service (GES) is the only service that runs in the GES Docker container. The Helm charts included with the GES release provision GES and any Kubernetes infrastructure necessary for GES to run, such as load balancing, autoscaling, ingress control, and monitoring integration.

For information about how to download the Helm charts, see Downloading your Genesys Multicloud CX containers.

See Helm charts and containers for Callback for the Helm chart version you must download for your release.

For information about setting up your Genesys Multicloud CX private edition platform, see Software requirements. The primary contributor to the size of a callback record is the amount of user data that is attached to a callback. Since this is an open-ended field, and the composition will differ from customer to customer, it is difficult to state the precise storage requirements of GES for a given deployment. To assist you, the following table lists the results of testing done in an internal Genesys development environment and shows the impact that user data has when it comes to the storage requirements for both Redis and Postgres.
Test | Redis size | Postgres size
10,000 Scheduled Callbacks with no user data | 26.51 MB | 41.1 MB
10,000 Scheduled Callbacks with 10 KB of user data | 64.44 MB | 252.91 MB
10,000 Scheduled Callbacks with 100 KB of user data | 110.58 MB | 595.79 MB

Note: This is 100 KB of randomized string data in a single field in the user data.

Hardware requirements

Genesys strongly recommends the following hardware requirements to run GES with a single tenant. The requirements are based on running GES in a multi-tenanted environment and scaled down accordingly. Use these guidelines, coupled with the callback storage information listed above, to gauge the precise requirements needed to ensure that GES runs smoothly in your deployment.

GES

(Based on t3.medium)

  • vCPUs: 1
  • Memory: 2 GiB
  • Network burst: 5 Gbps

Redis

(Based on cache.r5.large) Redis is essential to GES service availability. Deploy two Redis caches in a cluster; the second cache acts as a replica of the first. For more information, see Architecture.

Callback data is stored in Redis memory.

  • vCPUs: 1
  • Memory: 8 GiB
  • Network burst: 10 Gbps

PostgreSQL

(Based on db.t3.medium)

  • vCPUs: 2
  • Memory: 4 GiB
  • Network burst: 5 Gbps
  • Storage: 100 GiB

Sizing calculator

The information in this section is provided to help you determine what hardware you need to run GES and third-party components. The information and formulas are based on an analysis of database disk storage and Redis memory usage requirements for callback data. The numbers provided here include only storage and memory usage for callbacks. Additional storage and memory is required for configuration data and basic operations.

Requirements per callback

Each callback record (excluding user data) requires approximately 6.5 to 7.0 kB of database disk storage, plus additional disk storage for the user data. Each kB of user data consumes approximately 3.0 kB of disk storage.

Each callback record (excluding user data) requires approximately 4.5 to 5.5 kB of Redis memory, plus an additional 1.25 kB for each kB of user data.

Use the following formulas to estimate disk storage and Redis memory requirements:

  • Estimate database disk storage requirements for callback data:
    <number of callbacks per day> × (7 kB + (3 kB × <kB of user data per callback>)) × 14 days
  • Estimate Redis memory requirements for callback data:
    <number of callbacks per day> × (5.5 kB + (1.25 kB × <kB of user data per callback>)) × 14 days

For example, if a tenant has an average of 100,000 callbacks per day with 1 kB of user data in each callback:

  • The database storage requirement is approximately 14 GB.
  • The Redis memory requirement is approximately 9.5 GB.

NOTE: Each callback record is stored for 14 days after its desired callback time. Because scheduled callbacks can also be booked up to 14 days in the future, at steady state the system can hold records spanning a 28-day window. If you average about 10k scheduled callbacks every day, and the scheduled callbacks are all booked as far out as possible (that is, 14 days in the future), the number of callbacks to use in storage and memory calculations is 28 days × 10k callbacks per day = 280k callbacks.

Redis operations

The Redis operations primarily update the connectivity status to other services such as Tenant Service (specifically ORS and URS) and Genesys Web Services and Applications (GWS).

When GES is idle (zero callbacks in the past, no active callback sessions, no scheduled callbacks), GES generates about 50 Redis operations per second per GES node per tenant.

Each Immediate callback generates approximately 110 Redis operations from its creation to the end of the ORS session.

For Scheduled callbacks, assume that each callback generates 110 Redis operations while its ORS session is active (based on the Immediate callback numbers), plus 1 additional Redis operation for each minute between the time the callback is created and its scheduled time.

For example, if a callback is scheduled for 1 hour from the time it was created, the number of Redis operations is approximately 60 + 110 = 170.

A callback scheduled for 1 day from the time it was created generates approximately 60 × 24 + 110 = 1550 Redis operations. In general, use the following formula for the number of Redis operations per callback:
<number of callbacks> × (110 + <number of minutes until scheduled time>)

Because the longevity of a callback ORS session depends on the estimated wait time (EWT), the total number of Redis operations performed by GES per minute varies, based on both the number of callbacks in the system and the EWT of the callbacks.

Use the following formula to estimate the number of Redis operations performed per minute:
Total number of Redis operations per minute = (50 base GES Redis operations per second × 60 seconds) + <number of upcoming scheduled callbacks in the system> + ((<total number of active callbacks> / <EWT>) × 110)

Where:

  • Total number of active callbacks = <number of active immediate callbacks> + <number of active scheduled callbacks>, and
  • Number of active scheduled callbacks = (<number of scheduled callbacks per time slot> / <time slot duration>) × <EWT>

For example, let's say we have the following scenario:

  • Scheduled callbacks:
    • Time slot duration = 15 minutes
    • Maximum capacity per time slot = 100
    • Business hours = 24x7
    • Assume that all time slots are fully booked for the next 14 days
  • Number of active immediate callbacks = 1,000
  • Estimated wait time = 90 minutes

Using the preceding formulas, estimate the Redis operations per minute:

  • Total number of scheduled callbacks = (100 × (60 / 15)) × 24 × 14 = 134,400
  • Number of active scheduled callbacks = (100 / 15) × 90 = 600
  • Number of upcoming scheduled callbacks = <total number of scheduled callbacks> - <number of active scheduled callbacks> = (134,400 - 600) = 133,800
  • Total number of active callbacks = 1,000 + 600 = 1,600
  • Total number of Redis operations per minute = (50 × 60) + 133,800 + ((1,600 / 90) × 110) = 138,756

Redis keys

Each callback creates three additional Redis keys. Given the preceding calculations for Redis memory requirements for each callback, the formula for the average key size is:
(5.5 kB + (1.25 kB × <kB_of_user_data_per_callback>)) / 3

Incoming connections to the GES deployment are handled either through the UI or through the external API. For information about how to use the external API, see the Genesys Multicloud CX Developer Center.

Connection topology

The diagram below shows the incoming and outgoing connections among GES and other Genesys and third-party software, such as Redis, PostgreSQL, and Prometheus. In the diagram, Prometheus is shown as part of the broader Kubernetes deployment, although this is not a requirement. What is important is that Prometheus can reach the internal load balancer for GES.

Also note that, depending on the use case, GES might communicate with Firebase and CAPTCHA over the open internet. Push Notifications and CAPTCHA are optional and not necessary for basic callback scenarios, but if you use Push Notifications with your callback service, GES must be able to connect to Firebase over TLS.

[Diagram: GES connection topology in private edition]

Web application firewall rules

Information in the following sections is based on the NGINX configuration that GES uses in an Azure cloud environment.

Cookies and session requirements

When interacting with the UI, GES and GWS ensure that the user's browser has the appropriate session cookies. By default, UI sessions time out after 20 minutes of inactivity.

The external Engagement API does not require session management or the use of cookies, but the GES API key must be provided in the X-API-Key request header.

For ingress to GES, allow requests to only the following paths to be forwarded to GES:

- /ges/
- /engagement/v3/callbacks/create
- /engagement/v3/callbacks/cancel
- /engagement/v3/callbacks/retrieve
- /engagement/v3/callbacks/availability/
- /engagement/v3/callbacks/queue-status/
- /engagement/v3/callbacks/open-for/
- /engagement/v3/estimated-wait-time
- /engagement/v3/call-in/requests/create
- /engagement/v3/statistics/operations/get-statistic-ex

In addition to allowing connections to only these paths, ensure that the ccid or ContactCenterID headers on any incoming requests are empty. This enhances security of the GES deployment; it prevents the use of external APIs by an actor who has only the CCID of the contact center.
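As an illustrative sketch only (not the shipped GES configuration), a Kubernetes Ingress that forwards only the allowed paths, and clears client-supplied contact-center headers via ingress-nginx, might look like the following. The host and service names are placeholders, and your Helm chart may manage these objects for you:

  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: ges-allowlist
    annotations:
      # Assumes the ingress-nginx controller; clears any client-supplied ccid/ContactCenterID headers.
      nginx.ingress.kubernetes.io/configuration-snippet: |
        proxy_set_header ccid "";
        proxy_set_header ContactCenterID "";
  spec:
    rules:
      - host: ges.example.com        # placeholder host
        http:
          paths:
            - path: /ges/
              pathType: Prefix
              backend:
                service:
                  name: ges          # placeholder service name
                  port:
                    number: 80
            - path: /engagement/v3/callbacks/create
              pathType: Exact
              backend:
                service:
                  name: ges
                  port:
                    number: 80
            # ...repeat for the remaining /engagement/v3/ paths listed above.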

TLS/SSL certificate configuration

There are no special TLS certificate requirements for the GES/Genesys Callback web-based UI.

Subnet requirements

There are no special requirements for sizing or creating an IP subnet for GES above and beyond the demands of the broader Kubernetes cluster.

The Genesys Callback user interface is supported in the following browsers. GES has dependencies on several other Genesys services. You must deploy the services on which GES depends and verify that each is working as expected before you provision and configure GES. If you follow this advice and any issues arise during the provisioning of GES, you can be reasonably confident that the fault lies in how GES is provisioned rather than in a downstream service.

GES/Callback requires your environment to contain supported releases of the following Genesys services, which must be deployed before you deploy Callback:

  • Genesys Web Services and Applications (GWS)
  • Genesys Authentication
  • Voice Microservices (includes Tenant Service)
  • Designer
  • Agent Setup

For detailed information about the correct order of services deployment, see Order of services deployment.

Callback records are stored for 14 days. The 14-day TTL setting starts at the Desired Callback Time. The Callback TTL (seconds) setting in the CALLBACK_SETTINGS data table has no effect on callback record storage duration; 14 days is a fixed value for all callback records.
Draft:PEC-DC/Current/DCPEGuide/Planning Draft:PEC-DC Before you begin Find out what to do before deploying Digital Channels. Digital Channels DCPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 Digital Channels for private edition has the following limitations:
  • Supports only a single-region model of deployment.
  • Social media requires additional components that are not included in Digital Channels.
Digital Channels in Genesys Multicloud CX private edition includes the following containers:
  • nexus
  • hubpp
  • tenant_deployment

The service also includes a Helm chart, which you must deploy to install all the containers for Digital Channels:

  • nexus

See Helm charts and containers for Digital Channels for the Helm chart version you must download for your release.

To download the Helm chart, navigate to the nexus folder in the JFrog repository. For information about how to download the Helm charts, see Downloading your Genesys Multicloud CX containers.


Install the prerequisite dependencies listed in the Third-party services table before you deploy Digital Channels. Digital Channels uses PostgreSQL and Redis to store all data.


For general network requirements, review the information on the suite-level Network settings page. Digital Channels has dependencies on the following Genesys services:
  • Genesys Authentication
  • Web Services and Applications
  • Tenant Microservice
  • Universal Contact Service
  • Designer

For detailed information about the correct order of services deployment, see Order of services deployment.

Draft:PEC-DC/Current/DCPEGuide/PlanningAIConnector Draft:PEC-DC Before you begin Find out what to do before deploying AI Connector. Digital Channels DCPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 AI Connector for private edition has the following limitation:
  • Supports only a single-region model of deployment.
AI Connector in Genesys Multicloud CX private edition includes the following container:
  • athena

The service also includes a Helm chart, which you must deploy to install all the containers for AI Connector:

  • athena

See Helm charts and containers for Digital Channels for the Helm chart version you must download for your release.

To download the Helm chart, navigate to the athena folder in the JFrog repository. For information about how to download the Helm chart, see Downloading your Genesys Multicloud CX containers.


Install the prerequisite dependencies listed in the Third-party services table before you deploy AI Connector. AI Connector uses PostgreSQL and Redis to store all data.


For general network requirements, review the information on the suite-level Network settings page. AI Connector has dependencies on the following Genesys service:
  • Digital Channels

For detailed information about the correct order of services deployment, see Order of services deployment.

d4d81735-166a-401c-abfb-0957cbbaef56
Draft:PEC-Email/Current/EmailPEGuide/Planning Draft:PEC-Email Before you begin Find out what to do before deploying Email. Email EmailPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 The current version of Email supports a single-region deployment model only. Email in Genesys Multicloud CX private edition includes the following container:
  • iwd-email

The service also includes a Helm chart, which you must deploy to install the required containers for Email:

  • iwdem

See Helm Chart and Containers for Email for the Helm chart version you must download for your release.

To download the Helm chart, navigate to the iwdem folder in the JFrog repository. For information about how to download the Helm charts, see Downloading your Genesys Multicloud CX containers.

All data is stored in IWD, UCS-X, and Digital Channels, which are external to the Email service. External Connections: IMAP, SMTP, Gmail, GRAPH Not applicable The following Genesys services are required:
  • Genesys Authentication Service (GAuth)
  • Universal Contact Service (UCS)
  • Interaction Server
  • Digital Channels (Nexus)
  • Intelligent Workload Distribution (IWD)

For the order in which the Genesys services must be deployed, refer to the Order of services deployment topic in the Setting up Genesys Multicloud CX private edition document.

Content coming soon
Draft:PEC-GPA/Current/GPAPEGuide/Planning Draft:PEC-GPA Before you begin Find out what to do before deploying Gplus Adapter for Salesforce. Gplus Adapter for Salesforce GPAPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 There are no limitations or assumptions related to the deployment. You can download the Gplus Adapter for Salesforce Helm charts from JFrog using your credentials.

See Helm charts and containers for Gplus Adapter for Salesforce for the Helm chart version you must download for your release.

For information about downloading Genesys Helm charts from JFrog Edge, refer to this article: Downloading your Genesys Multicloud CX containers.

Salesforce Lightning or Salesforce Classic. There are no specific storage requirements for Gplus Adapter for Salesforce.


Network requirements include:
  • Required properties for ingress:
    • Cookies usage: None
    • Header requirements - client IP & redirect, passthrough: None
    • Session stickiness: None
    • Allowlisting - optional: None
    • TLS for ingress - optional (you can enable or disable TLS on the connection): Through annotations, like any other UI or API in the solution
  • Cross-region bandwidth: N/A
  • External connections from the Kubernetes cluster to other systems: N/A
  • WAF Rules (specific only for services handling internet traffic): N/A
  • Pod Security Policy: N/A
  • High-Availability/Disaster Recovery: Refer to High availability and disaster recovery
  • TLS/SSL Certificate configurations: No specific requirements
You can use any of the supported browsers to run Gplus Adapter for Salesforce and Agent Workspace on the client side.

Mandatory Dependencies

The following services must be deployed and running before deploying the GPA service. For more information, refer to Order of services deployment.

  • Genesys Authentication Service:
    • A redirect must be configured in Auth/Environment to allow an agent to log in from the WWE URL. The redirect should be configured in the Auth onboarding script, according to the DNS assigned to the WWE service.
  • GWS services:
    • The CORS rules for WWE URLs must be configured in GWS. This should be configured in the GWS onboarding script, according to the DNS assigned to the WWE service.
    • The GWS API URL should be specified at the WWE deployment time as part of the Helm values.
  • TLM service:
    • The CORS rules for the domain where WWE is declared must be configured in Telemetry Service (TLM), for example, an origin such as https://wwe.<your-domain>.

Optional Dependencies

Depending on the deployed architecture, the following services must be deployed and running before deploying the WWE service:

  • WebRTC Service: To allow WebRTC in the browser
  • Telemetry Service: To allow browser observability (metrics and logs)

Miscellaneous desktop-side optional dependencies

The following software must or might be deployed on agent workstations to allow agents to leverage the WWE service:

  • Mandatory: A browser referenced in the supported browser list.
  • Optional: Genesys Softphone: a SIP or WebRTC softphone to handle the voice channel of agents.
Gplus Adapter for Salesforce does not have specific GDPR support.
Draft:PEC-IWD/Current/IWDDMPEGuide/Planning Draft:PEC-IWD Before you begin Find out what to do before deploying IWD Data Mart. Intelligent Workload Distribution IWDDMPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 The current version of IWD Data Mart:
  • works as a short-lived job started on a schedule
  • does not support scaling or high availability (HA)
  • requires a dedicated PostgreSQL deployment per customer

Because IWD Data Mart is a short-lived job, Prometheus cannot pull metrics from it; a standalone Pushgateway service is therefore required for monitoring.

IWD Data Mart in Genesys Multicloud CX private edition includes the following container:
  • iwd_dm_cloud

The service also includes a Helm chart, which you must deploy to install the required containers for IWD Data Mart:

  • iwddm-cronjob

See Helm Charts and Containers for IWD and IWD Data Mart for the Helm chart version you must download for your release.

To download the Helm chart, navigate to the iwddm-cronjob folder in the JFrog repository. For information about how to download the Helm charts, see Downloading your Genesys Multicloud CX containers.

All data is stored in PostgreSQL, which is external to the IWD Data Mart. Not applicable Not applicable Intelligent Workload Distribution (IWD) with a provisioned tenant.

For the order in which the Genesys services must be deployed, refer to the Order of services deployment topic in the Setting up Genesys Multicloud CX private edition document.

Content coming soon
Draft:PEC-IWD/Current/IWDPEGuide/Planning Draft:PEC-IWD Before you begin Find out what to do before deploying IWD. Intelligent Workload Distribution IWDPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 The current version of IWD:
  • supports a single-region deployment model only
  • requires a dedicated PostgreSQL deployment per customer


IWD in Genesys Multicloud CX private edition includes the following container:
  • iwd

The service also includes a Helm chart, which you must deploy to install the required containers for IWD:

  • iwd

See Helm Charts and Containers for IWD and IWD Data Mart for the Helm chart version you must download for your release.

To download the Helm chart, navigate to the iwd folder in the JFrog repository. For information about how to download the Helm charts, see Downloading your Genesys Multicloud CX containers.

All data is stored in PostgreSQL, Elasticsearch, and Digital Channels, which are external to IWD.

Sizing of Elasticsearch depends on the load. Allow on average 15 KB per work item and 50 KB per email; adjust these figures depending on the size of the items processed.

External Connections: IWD allows customers to configure webhooks. If configured, IWD establishes an HTTP or HTTPS connection to the configured host and port. Not applicable The following Genesys services are required:
  • Genesys authentication service (GAuth)
  • Universal Contact Service (UCS)
  • Interaction Server
  • Digital Channels (Nexus)

For the order in which the Genesys services must be deployed, refer to the Order of services deployment topic in the Setting up Genesys Multicloud CX private edition document.

Content coming soon
Draft:PEC-OU/Current/CXCPEGuide/Planning Draft:PEC-OU Before you begin Find out what to do before deploying CX Contact. Outbound (CX Contact) CXCPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 There are no limitations. Before you begin deploying the CX Contact service, ensure that the following prerequisites, and any optional tasks you need, are completed:

Prerequisites

  • A Kubernetes cluster is ready for deployment of CX Contact.
  • The Kubectl and Helm command line tools are on your computer.
  • You have connectivity to the target cluster, the proper kubectl context for working with the cluster, and a user with administrative permissions to deploy CX Contact to the defined namespace.

Optional tasks

  • SFTP Server—Install an SFTP Server with basic authentication for optional input and output data. SFTP Server is used when automation capabilities are required.
  • CDP NG access credentials—As of CX Contact 9.0.025, Compliance Data Provider Next Generation (CDP NG) is used as a CDP by default. Before attempting to connect to CDP NG, obtain the necessary access credentials (ID and Secret) from Genesys Customer Care.
  • Bitnami repository—If you choose to deploy dedicated Redis and Elasticsearch for CX Contact, add the Bitnami repository to install Redis and Elasticsearch using the following command:
    helm repo add bitnami https://charts.bitnami.com/bitnami

After you've completed the mandatory tasks, check the Third-party prerequisites.

For information about how to download the Helm charts, see Downloading your Genesys Multicloud CX containers.

See Helm charts and containers for CX Contact for the Helm chart version you must download for your release.

CX Contact is the only service that runs in the CX Contact Docker container. The Helm charts included with the CX Contact release provision CX Contact and any Kubernetes infrastructure necessary for CX Contact to run.

Set up Elasticsearch and Redis as standalone services, or install them in a single Kubernetes cluster.

For information about setting up your Genesys Multicloud CX private edition platform, see Software requirements.

CX Contact requires shared persistent storage and an associated storage class created by the cluster administrator. The Helm chart creates the ReadWriteMany (RWX) Persistent Volume Claim (PVC) that is used to store and share data with multiple CX Contact components.

The minimal recommended PVC size is 100 GB.

This topic describes network requirements and recommendations for CX Contact in private edition deployments:

Single namespace

Deploy CX Contact in a single namespace to prevent ingress/egress traffic from going through additional hops, such as firewalls, load balancers, or other network layers that introduce latency and overhead. Do not hardcode the namespace; if necessary, you can override it through Helm values or the standard --namespace argument of the helm install command.

External connections

For information about external connections from the Kubernetes cluster to other systems, see Architecture. External connections also include:

  • Compliance Data Provider (AWS)
  • SFTP Servers

Ingress

The CX Contact UI requires session stickiness. Use ingress-nginx as the ingress controller (see the ingress-nginx project on github.com).

Important
The CX Contact Helm chart contains default annotations for session stickiness only for ingress-nginx. If you are using a different ingress controller, refer to its documentation for session stickiness configuration.
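For reference, cookie-based session stickiness with ingress-nginx is typically expressed through annotations such as the following (a sketch only; the cookie name is a placeholder, and the defaults shipped in the CX Contact Helm chart may differ):

  nginx.ingress.kubernetes.io/affinity: "cookie"
  nginx.ingress.kubernetes.io/affinity-mode: "persistent"
  nginx.ingress.kubernetes.io/session-cookie-name: "cxc-session"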

Ingress SSL

If you are using Chrome 80 or later, the SameSite cookie must have the Secure flag (see Chromium Blog). Therefore, Genesys recommends that you configure a valid SSL certificate on ingress.

Logging

Log rotation is required so that logs do not consume all of the available storage on the node.

Kubernetes is currently not responsible for rotating logs. Log rotation can be handled by the docker json-file log driver by setting the max-file and max-size options.
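For example, a Docker daemon.json along these lines caps each container log at 50 MB and keeps 5 rotated files (the values are illustrative; tune them to your retention needs):

  {
    "log-driver": "json-file",
    "log-opts": {
      "max-size": "50m",
      "max-file": "5"
    }
  }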

For effective troubleshooting, the engineering team needs the stdout logs of the pods (retrieved with the kubectl logs command), so log retention should not be overly aggressive (see the JSON file logging driver documentation). For on-site debugging purposes, CX Contact logs can also be collected and stored in Elasticsearch (for example, using an EFK stack; see medium.com).

Monitoring

CX Contact provides metrics that can be consumed by Prometheus and Grafana. Genesys recommends installing the Prometheus Operator (see github.com) in the cluster. The CX Contact Helm chart supports the creation of CustomResourceDefinitions that can be consumed by the Prometheus Operator.

For more information about monitoring, see Observability in Outbound (CX Contact).

CX Contact components operate with Genesys core services (v8.5 or v8.1) in the back end. All voice-processing components (Voice Microservice and shared services, such as GVP), and the GWS and Genesys Authentication services (mentioned below), must be deployed and running before you deploy the CX Contact service. See Order of services deployment.

The following Genesys services and components are required:

  • GWS
  • Genesys Authentication Service
  • Tenant Service
  • Voice Microservice
  • Multi-tenant Configuration Server

Nexus is optional.

CX Contact does not support GDPR.
Draft:PEC-REP/Current/GCXIPEGuide/Planning Draft:PEC-REP Before you begin deploying GCXI Find out what to do before deploying Genesys Customer Experience Insights (GCXI). Reporting GCXIPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 GCXI can provide meaningful reports only if Genesys Info Mart and Reporting and Analytics Aggregates (RAA) are deployed and available. Deploy GCXI only after Genesys Info Mart and RAA. For more information about how to download the Helm charts from JFrog Edge, see the suite-level documentation: Downloading your Genesys Multicloud CX containers.

To learn what Helm chart version you must download for your release, see Helm charts and containers for Genesys Customer Experience Insights.

GCXI Containers

The GCXI Helm chart uses the following containers:
  • gcxi - the main GCXI container; runs as a StatefulSet. This container is roughly 12 GB; ensure that you have enough space to allocate it.
  • gcxi-control - a supplementary container, used for the initial installation of GCXI and for clean-up.

GCXI Helm Chart

Download the latest yaml files from the repository, or examine the attached files: Sample GCXI yaml files.

For more information about setting up your Genesys Multicloud CX private edition platform, including Kubernetes, Helm, and other prerequisites, see Software requirements.


GCXI installation requires a set of local Persistent Volumes (PVs). Kubernetes local volumes are directories on the host with specific properties: https://kubernetes.io/docs/concepts/storage/volumes/#local

Example usage: https://zhimin-wen.medium.com/local-volume-provision-242affd5efe2

Kubernetes provides a powerful volume plugin system, which enables Kubernetes workloads to use a wide variety of block and file storage to persist data.

You can use the GCXI Helm chart to set up your own PVs, or you can configure PV Dynamic Provisioning in your cluster so that Kubernetes automatically creates PVs.

Volumes Design

GCXI installation uses the following PVCs. The local provisioner requires that the specified directory pre-exists on your host; you can change the default mount points using Helm values. The node labels listed apply to the default Local PV setup.

gcxi-backup
  • Mount path (inside container): /genesys/gcxi_shared/backup
  • Description: Backup files. Used by the control container and jobs.
  • Access type: RWX
  • Approximate size: Depends on backup frequency. 5 GB+
  • Default mount point on host: /genesys/gcxi/backup (override using Values.gcxi.local.pv.backup.path)
  • Must be shared across nodes: Only in multiple concurrent installs scenarios.
  • Required node label: gcxi/local-pv-gcxi-backup = "true"

gcxi-log
  • Mount path (inside container): /mnt/log
  • Description: MSTR logs. Used by the main container. The GCXI Helm chart allows log volumes of legacy hostPath type; this scenario is the default and is used in examples in this document.
  • Access type: RWX
  • Approximate size: Depends on rotation scheme. 5 GB+
  • Default mount point on host: /mnt/log/gcxi, with subPathExpr: $(POD_NAME) (override using Values.gcxi.local.pv.log.path)
  • Must be shared across nodes: Not necessarily.
  • Required node label: gcxi/local-pv-gcxi-log = "true" (if you are using hostPath volumes for logs, you don't need the node label)

gcxi-postgres
  • Mount path (inside container): /var/lib/postgresql/data (if using Postgres in container), or disk space in the Postgres RDBMS
  • Description: Meta DB volume. Used by the Postgres container, if deployed.
  • Access type: RWO
  • Approximate size: Depends on usage. 10 GB+
  • Default mount point on host: /genesys/gcxi/shared (override using Values.gcxi.local.pv.postgres.path)
  • Must be shared across nodes: Yes, unless you tie the Postgres container to some particular node.
  • Required node label: gcxi/local-pv-postgres-data = "true"

gcxi-share
  • Mount path (inside container): /genesys/gcxi_share
  • Description: MSTR shared caches and cubes. Used by the main container.
  • Access type: RWX
  • Approximate size: Depends on usage. 5 GB+
  • Default mount point on host: /genesys/gcxi/data, with subPathExpr: $(POD_NAME) (override using Values.gcxi.local.pv.share.path)
  • Must be shared across nodes: Yes.
  • Required node label: gcxi/local-pv-gcxi-share = "true"

Preparing the environment

To prepare your environment, complete the following steps:

  1. Log in to the cluster:
    • For AKS: log in using the Azure CLI.
    • For GKE: log in using the gcloud CLI.
    • For OpenShift: log in using the oc CLI. To check the cluster version on OpenShift deployments, also use the oc CLI.
  2. Create a new project:
    GKE or AKS:
    1. Edit create-gcxi-namespace.json, adding the appropriate values. (A minimal sketch of this file follows these steps.)
    2. Apply the changes by running kubectl apply against the file.
    OpenShift: create a new project using the oc CLI.
  3. For GKE or AKS, confirm namespace creation by listing namespaces with kubectl.
  4. Create a secret for docker-registry to pull images from the Genesys JFrog repository.
  5. Create the file values-test.yaml, and populate it with appropriate override values. For a simple deployment using PostgreSQL inside the container, you must include PersistentVolumes named gcxi-log-pv, gcxi-backup-pv, gcxi-share-pv, and gcxi-postgres-pv. You must override GCXI_GIM_DB with the name of your Genesys Info Mart data source.
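The following is a minimal sketch of what create-gcxi-namespace.json might contain; the namespace name is a placeholder, and your file may carry additional labels or annotations:

  {
    "apiVersion": "v1",
    "kind": "Namespace",
    "metadata": {
      "name": "gcxi"
    }
  }

You can then apply it with kubectl apply -f create-gcxi-namespace.json.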

Ingress

Ingress annotations are supported in the values.yaml file (see line 317). Genesys recommends session stickiness to improve user experience.
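As an illustration, with an ingress-nginx controller, session stickiness is typically requested through annotations such as the following (a sketch only; the exact key under which the GCXI values.yaml accepts ingress annotations, and the cookie name, may differ):

  ingress:
    annotations:
      nginx.ingress.kubernetes.io/affinity: "cookie"
      nginx.ingress.kubernetes.io/session-cookie-name: "gcxi-session"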

Allowlisting is required for GCXI.

WAF Rules

WAF rules are defined in the variables.tf file (see line 245).

SMTP

The GCXI container and Helm chart support the environment variable EMAIL_SERVER.

TLS

The GCXI container does not serve TLS natively. Ensure that your environment is configured to use a proxy with HTTPS offload.

MicroStrategy Web is the user interface most often used for accessing, managing, and running the Genesys CX Insights reports. MicroStrategy Web certifies the latest versions, at the time of release, for the following web browsers:
  • Apple Safari
  • Google Chrome (Windows and iOS)
  • Microsoft Edge
  • Microsoft Internet Explorer (Versions 9 and 10 are supported, but not certified)
  • Mozilla Firefox

To view updated information about supported browsers, see the MicroStrategy ReadMe.

GCXI requires the following services:
  • Reporting and Analytics Aggregates (RAA) is required to aggregate Genesys Info Mart data.
  • Genesys Info Mart and/or Intelligent Workload Distribution (IWD) Data Mart. GCXI can run without these services, but cannot produce meaningful output without them.
  • GWS Auth/Environment service
  • Genesys Platform Authentication through Config Server (GAuth). Alternatively, GCXI includes a native internal login, which you can use to authorize users instead of GAuth. This document assumes you are using GAuth (the recommended solution), which gives ConfigServer users access to GCXI.
  • GWS client id/client secret
GCXI can store Personally Identifiable Information (PII) in logs, history files, and reports (in scenarios where customers include PII data in reports). Genesys recommends that you do not capture PII in reports. If you do capture PII, it is your responsibility to remove any such report data within 21 days, if required by General Data Protection Regulation (GDPR) standards.

For more information and relevant procedures, see: Genesys CX Insights Support for GDPR and the suite-level Link to come documentation.

Draft:PEC-REP/Current/GCXIPEGuide/PlanningRAA Draft:PEC-REP Before you begin deploying RAA Find out what to do before deploying Reporting and Analytics Aggregates (RAA). Reporting GCXIPEGuide The RAA container works with the Genesys Info Mart database; deploy RAA only after you have deployed Genesys Info Mart.

The Genesys Info Mart database schema must correspond to a compatible Genesys Info Mart version. Execute the command provided with RAA to discover the required Genesys Info Mart release. The RAA container runs RAA on Java 11 and is supplied with the following JDBC drivers:

  • MSSQL 9.2.1 JDBC Driver
  • Postgres 42.2.11 JDBC Driver
  • Oracle Database 21c (21.1) JDBC Driver

Genesys recommends that you verify that the provided driver is compatible with your database. If it is not, you can override the JDBC driver by copying an updated driver file to the folder lib/jdbc_driver_<RDBMS> within the mounted config volume, or by creating an identically named link within the folder lib/jdbc_driver_<RDBMS> that points to a driver file stored on another volume (where <RDBMS> is the RDBMS used in your environment). This is possible because RAA is launched from a config folder that is mounted in the container.

To learn what Helm chart version you must download for your release, see Helm charts and containers for Genesys Customer Experience Insights.

You can download the gcxi Helm charts from the JFrog repository. For more information about downloading containers, see: Downloading your Genesys Multicloud CX containers.

For information about setting up your Genesys Multicloud CX private edition platform, including Kubernetes, Helm, and other prerequisites, see Software requirements.
This section describes the storage requirements for various volumes.

GIM secret volume

In scenarios where raa.env.GCXI_GIM_DB__JSON is not specified, RAA mounts this volume to provide GIM connection details.

  1. Declare the GIM database connection details as a Kubernetes secret in gimsecret.yaml. (A sketch of such a secret follows these steps.)
  2. Reference gimsecret.yaml in values.yaml.

Alternatively, you can mount the CSI secret using secretProviderClass, in values.yaml.

Config volume

RAA mounts a config volume inside the container as the folder /genesys/raa_config. The folder is treated as a work directory; RAA reads the following files from it during startup:

  • conf.xml, which contains application-level config settings.
  • custom *.ss files.
  • JDBC driver, from the folder lib/jdbc_driver_<RDBMS>.

RAA does not normally create any files in /genesys/raa_config at runtime, so the volume does not require a fast storage class. By default, the size limit is set to 50 MB. You can specify the storage class and size limit in values.yaml.
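For example, in values.yaml (a sketch; the storageClassName and size key names under raa.volumes.config.pvc are assumptions based on the volumeName value documented below):

  raa:
    volumes:
      config:
        pvc:
          storageClassName: standard   # assumed key; any non-fast class is sufficient
          size: 50Mi                   # assumed key; default limit is 50 MB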


The RAA Helm chart creates a Persistent Volume Claim (PVC). You can define a Persistent Volume (PV) separately using the gcxi-raa chart, and bind such a volume to the PVC by specifying the volume name in the raa.volumes.config.pvc.volumeName value in values.yaml.
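For example, to bind the PVC to a pre-created PV named raa-config-pv (the PV name is a placeholder):

  raa:
    volumes:
      config:
        pvc:
          volumeName: raa-config-pv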

Health volume

RAA uses the Health volume to store:

  • Health files.
  • Prometheus file containing metrics for the most recent 2-3 scrape intervals.
  • Results of the most recent testRun init container execution.

By default, the volume is limited to 50 MB. RAA periodically interacts with the volume at runtime, so Genesys does not recommend a slow storage class for this volume. You can specify the storage class and size limit in values.yaml. The RAA Helm chart creates a PVC; you can define a PV separately using the gcxi-raa chart, and bind it to the PVC by specifying the volume name in the raa.volumes.health.pvc.volumeName value in values.yaml.

RAA interacts only with the Genesys Info Mart database.

RAA can expose Prometheus metrics by way of Netcat.

The aggregation pod has its own IP address and can run with one or two running containers. For Helm test, an additional IP address is required; each test pod runs one container.

Genesys recommends that RAA be located in the same region as the Genesys Info Mart database.

Secrets

RAA secret information is defined in the values.yaml file (line 89).

For information about configuring arbitrary UID, see Configure security.

Not applicable. RAA interacts with the Genesys Info Mart database only. Not applicable.
Draft:PEC-REP/Current/GIMPEGuide/PlanningGCA Draft:PEC-REP Before you begin GCA deployment Find out what to do before deploying GIM Config Adapter (GCA). Reporting GIMPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 Instructions are provided for a single-tenant deployment. To configure and deploy the GIM Config Adapter (GCA), you must obtain the Helm charts included with the GCA release. These Helm charts provision GCA plus any Kubernetes infrastructure needed for GCA to run.

GCA and GCA monitoring are the only services that run in the GCA container.

For information on downloading from the image repository, see Downloading your Genesys Multicloud CX containers. To find the correct Helm chart version for your release, see Helm charts and containers for Genesys Info Mart.

For information about setting up your Genesys Multicloud CX private edition platform, see Software requirements.

The following table lists the third-party prerequisites for GCA.

GCA uses S3-compatible storage to store the GCA snapshot during processing. You must provision this S3-compatible storage in your environment. By default, GCA is configured to use Azure Blob storage, but you can also use S3-compatible storage provided by other cloud platforms.

Genesys expects you to use the same storage account for GSP and GCA. If you want to provision separate storage for GCA, follow the Create S3-compatible storage instructions for GSP to create similar S3-compatible storage for GCA.

No special network requirements.
  • Voice Tenant Service, which enables GCA to access the Configuration Server database. You must deploy the Voice Tenant Service before you deploy GCA.
    • Ensure that an appropriate user account is available for GCA to use to access the Configuration Database. The GCA user account requires at least read permissions.
    • You must also have your Tenant ID information available.
  • There are no strict dependencies between the Genesys Info Mart services, but the logic of your particular pipeline might require Genesys Info Mart services to be deployed in a particular order. Depending on the order of deployment, there might be temporary data inconsistencies until all the Genesys Info Mart services are operational. For example, GSP looks for the GCA snapshot when it starts; if GCA has not yet been deployed, GSP will encounter unknown configuration objects and resources until the snapshot becomes available.

For detailed information about the correct order of services deployment, see Order of services deployment.

Not applicable. GCA does not store information beyond an ephemeral snapshot. f05492f5-52ed-490a-b0d5-c318a4a7272b
Draft:PEC-REP/Current/GIMPEGuide/PlanningGIM Draft:PEC-REP Before you begin GIM deployment Find out what to do before deploying GIM. Reporting GIMPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 Instructions are provided for a single-tenant deployment. To configure and deploy Genesys Info Mart, you must obtain the Helm charts included with the Info Mart release. These Helm charts provision Info Mart plus any Kubernetes infrastructure Info Mart requires to run.

Genesys Info Mart and GIM monitoring are the only services that run in the Info Mart container.

For the correct Helm chart version for your release, see Helm charts and containers for Genesys Info Mart. For information on downloading from the image repository, see Downloading your Genesys Multicloud CX containers.

For information about setting up your Genesys Multicloud CX private edition platform, see Software requirements.

The following table lists the third-party prerequisites for GIM.

GIM uses PostgreSQL for the Info Mart database and, optionally, uses object storage to store exported Info Mart data.

PostgreSQL — the Info Mart database

The Info Mart database stores data about agent and interaction activity, Outbound Contact campaigns, and usage information about other services in your contact center. A subset of tables and views created, maintained, and populated by Reporting and Analytics Aggregates (RAA) provides the aggregated data on which Genesys CX Insights (GCXI) reports are based.

A sizing calculator for Genesys Multicloud CX private edition is under development. In the meantime, the interactive tool available for on-premises deployments might help you estimate the size of your Info Mart database; see the Genesys Info Mart 8.5 Database Size Estimator.

Genesys recommends a minimum of 3 IOPS per GB.

For information about creating the Info Mart database, see Create the Info Mart database, below.


Create the Info Mart database

Use any database management tool to create the Info Mart ETL database and user.

  1. Create the database.
  2. Create a user for all of the Genesys Info Mart services to use. Grant that user full permissions for the database.
    This user’s account is used by Info Mart jobs to access the Info Mart database schema.
    The name of the Info Mart schema is public.
    Important
    Make a note of the database and user details. You need this information when you configure GIM and GCA Helm chart override values.

Object storage — Data Export packages

The GIM Data Export feature enables you to export data from the GIM database so it is available for other purposes. Unless you elect to store your exported data in a local directory, GIM data is exported to an object store. GIM supports export to either Azure Blob Storage or the S3-compatible storage provided by Google Cloud Platform (GCP).

If you want to use S3-compatible storage, follow the Before you begin GSP deployment instructions for GSP to create the S3-compatible storage for GIM.

Important
GSP and GCA use object storage to store data during processing. For safety and security reasons, Genesys strongly recommends that you use a dedicated object storage account for the GIM persistent storage, and do not share the storage account created for GSP and GCA. GSP and GCA can share an account, and this is the expected deployment.

If you are not using object storage, you can configure GIM to store exported data in a local directory. In this case, you do not need to create the object storage.

No special network requirements.
  • You must have your Tenant ID information available.
  • There are no strict dependencies among the Genesys Info Mart services, but the logic of your particular pipeline might require them to be deployed in a particular order. Depending on the order of deployment, there might be temporary data inconsistencies until all the Genesys Info Mart services are operational. For example, GCA might try to access the Info Mart database to synchronize configuration data but if GIM has not yet been deployed, the Info Mart database will be empty.

For detailed information about the correct order of services deployment, see Order of services deployment.

GIM provides full support for you to comply with Right of Access ("export") or Right of Erasure ("forget") requests from consumers and employees with respect to personally identifiable information (PII) in the Info Mart database.

Genesys Info Mart is designed to comply with General Data Protection Regulation (GDPR) policies. Support for GDPR includes the following:

  • The way that Genesys Info Mart processes customer files complies with Right of Access ("export") and Right of Erasure ("forget") requirements.
  • Genesys Info Mart supports configuring data retention policies.
  • GDPR processing is fully audited.

For more information about how Genesys Info Mart implements support for GDPR requests, see Genesys Info Mart Support for GDPR.

For details about the Info Mart database tables and columns that potentially contain PII, see the description of the CTL_GDPR_HISTORY table in the Genesys Info Mart on-premises documentation.

e65e00cb-c1c8-4fb8-9614-80ac07c3a4e3
Draft:PEC-REP/Current/GIMPEGuide/PlanningGSP Draft:PEC-REP Before you begin GSP deployment Find out what to do before deploying GIM Stream Processor (GSP). Reporting GIMPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 To configure and deploy the GIM Stream Processor (GSP), you must obtain the Helm charts included with the GSP release. These Helm charts provision GSP plus any Kubernetes infrastructure GSP requires to run.

GSP is the only service that runs in the GSP container.

For information about how to download the Helm charts, see Downloading your Genesys Multicloud CX containers. To find the correct Helm chart version for your release, see Helm charts and containers for Genesys Info Mart.

For information about setting up your Genesys Multicloud CX private edition platform, see Software requirements.

The following table lists the third-party prerequisites for GSP.

GSP maintains internal “state,” such as GSP checkpoints, savepoints, and high availability data, which must be persisted (stored). When it starts, GSP reads its state from storage, which allows it to continue processing data without re-reading Kafka topics from the start. GSP periodically updates its state as it processes incoming data.

GSP uses S3-compatible storage to store persisted data. You must provision this S3-compatible storage in your environment.

By default, GSP is configured to use Azure Blob Storage, but you can also use S3-compatible storage provided by other cloud platforms. Genesys expects you to use the same storage account for GSP and GCA.

Create S3-compatible storage

Genesys Info Mart has no special requirements for the storage buckets you create. Follow the instructions provided by the storage service provider of your choice to create the S3-compatible storage.

  • For GKE, see the Google Cloud Storage documentation about creating bucket storage.
  • For AKS, see Azure Storage documentation about creating blob storage.

To enable the S3-compatible storage object, you populate Helm chart override values for the service. To do this, you need to know details such as the endpoint information, access key, and secret.
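Purely as an illustration (the actual override keys are defined by the GSP Helm chart), the override values typically capture details such as these:

  gsp:
    storage:                              # assumed structure
      endpoint: https://<storage-endpoint>
      bucket: <bucket-name>
      accessKey: <access-key>
      secretKey: <secret>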

Important
Note and securely store the bucket details, particularly the access key and secret, when you create the storage bucket. Depending on the cloud storage service you choose, you may not be able to recover this information subsequently.
No special network requirements. Network bandwidth must be sufficient to handle the volume of data to be transferred into and out of Kafka. There are no strict dependencies between the Genesys Info Mart services, but the logic of your particular pipeline might require Genesys Info Mart services to be deployed in a particular order. Depending on the order of deployment, there might be temporary data inconsistencies until all the Genesys Info Mart services are operational. For example, GSP looks for the GCA snapshot when it starts; if GCA has not yet been deployed, GSP will encounter unknown configuration objects and resources until the snapshot becomes available.

There are other private edition services you must deploy before Genesys Info Mart. For detailed information about the recommended order of services deployment, see Order of services deployment.

Not applicable, provided your Kafka retention policies have not been set to more than 30 days. GSP does not store information beyond the ephemeral data used during processing.

GSP Kafka topics

For GSP, topics in Kafka represent various data domains and GSP expects certain topics to be defined.

  • If a topic does not exist, GSP will never receive data for that domain.
  • If a customized topic has been created but not defined in GSP configuration, data from that domain will be discarded.

Unless Kafka has been configured to auto-create topics, you must manually ensure that all of the Kafka topics GSP requires are created in Kafka configuration.

The following table shows the topic names GSP expects to be available. In this table, an entry in the Customizable GSP parameter column indicates support for customizing that topic name.

Topic name Customizable GSP parameter Description
GSP consumes data from the following topics:
designer-sdr designer Name of the input topic with Session Detail Record (SDR) data
digital-agentstate digitalAgentStates Name of the input topic with digital agent states
digital-itx digitalItx Name of the input topic with digital interactions
gca-cfg cfg Name of the input topic with configuration data
voice-agentstate Name of the input topic with voice agent states
voice-callthread Name of the input topic with voice interactions
voice-outbound Name of the input topic with outbound (CX Contact) activity associated with either voice or digital interactions
GSP produces data into the following topics:
gsp-cfg cfg Name of the output topic for configuration reporting
gsp-custom custom Name of the output topic for custom reporting
gsp-ixn interactions Name of the output topic for interactions
gsp-mn mediaNeutral Name of the output topic for media-neutral agent states
gsp-outbound outbound Name of the output topic for outbound (CX Contact) activity
gsp-sm agentStates Name of the output topic for agent states
c39fe496-c79e-4846-b451-1bc8bedb126b
Draft:PEC-REP/Current/PulsePEGuide/Planning Draft:PEC-REP Before you begin Find out what to do before deploying Genesys Pulse. Reporting PulsePEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 There are no known limitations.


For more information about how to download the Helm charts in Jfrog Edge, see the suite-level documentation: Downloading your Genesys Multicloud CX containers

To learn what Helm chart version you must download for your release, see Helm charts and containers for Genesys Pulse

Genesys Pulse Containers

Container Description Docker Path
collector Genesys Pulse Collector <docker>/pulse/collector:<image-version>
cs_proxy Configuration Server Proxy <docker>/pulse/cs_proxy:<image-version>
init Init container, used for DB initialization <docker>/pulse/init:<image-version>
lds Load Distribution Server (LDS) <docker>/pulse/lds:<image-version>
monitor_dcu_push_agent Provides monitoring data from Stat Server and Genesys Pulse Collector <docker>/pulse/monitor_dcu_push_agent:<image-version>
monitor_lds_push_agent Provides monitoring data from LDS <docker>/pulse/monitor_lds_push_agent:<image-version>
pulse Genesys Pulse Backend <docker>/pulse/pulse:<image-version>
ss Stat Server <docker>/pulse/ss:<image-version>
userpermissions User Permissions service <docker>/pulse/userpermissions:<image-version>

Genesys Pulse Helm Charts

Helm Chart Containers Shared Helm Path
Init init yes <helm>/init-<chart-version>.tgz
Pulse pulse yes <helm>/pulse-<chart-version>.tgz
LDS cs_proxy, lds, monitor_lds_push_agent <helm>/lds-<chart-version>.tgz
DCU cs_proxy, ss, collector, monitor_dcu_push_agent <helm>/dcu-<chart-version>.tgz
Permissions cs_proxy, userpermissions <helm>/permissions-<chart-version>.tgz
Init Tenant init <helm>/init-tenant-<chart-version>.tgz
Monitor - yes <helm>/monitor-<chart-version>.tgz
The appropriate CLI must be installed.

For more information about setting up your Genesys Multicloud CX private edition platform, see Software requirements.

Logs Volume

Persistent Volume Size Type IOPS POD Containers Critical Backup needed
pulse-dcu-logs 10Gi RW high DCU csproxy, collector, statserver Y Y
pulse-lds-logs 10Gi RW high lds csproxy, lds Y Y
pulse-permissions-logs 10Gi RW high permissions csproxy, permissions Y Y
pulse-logs 10Gi RW high pulse pulse Y Y

The logs volume stores log files:

  • To use a persistent volume, set log.volumeType to pvc.
  • To use local storage, set log.volumeType to hostpath.
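For example, in the Helm override values (a minimal sketch; the surrounding structure depends on the chart):

  log:
    volumeType: pvc    # or hostpath, to use local storage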

Genesys Pulse Collector Health Volume

Local Volume POD Containers
collector-health dcu collector, monitor-sidecar

The Genesys Pulse Collector health volume provides non-persistent storage for Genesys Pulse Collector health state files used for monitoring.

Stat Server Backup Volume

Persistent Volume Size Type IOPS POD Containers Critical Backup needed
statserver-backup 1Gi RWO medium dcu statserver N N

The Stat Server backup volume provides disk space for Stat Server's state backup, storing the server state between restarts of the container.

No special requirements. Ensure that the following services are deployed and running before you deploy Genesys Pulse:


  • Genesys Authentication
  • Genesys Web Services and Applications
  • Agent Setup
  • Tenant Service:
    • The Tenant UUID (v4) is provisioned, for example: "9350e2fc-a1dd-4c65-8d40-1f75a2e080dd"
    • The Tenant Service is available as a host. For example, in GKE: "tenant-<tenant-uuid>.voice", port 8888
  • Voice Microservice:
    • The Voice service is available as a host. For example, in GKE: "tenant-<tenant-uuid>.voice", port 8000
Important
All services listed must be accessible from within the cluster where Genesys Pulse will be deployed.

For more information, see Order of services deployment.

Genesys Pulse supports the General Data Protection Regulation (GDPR). See Genesys Pulse Support for GDPR for details.
Draft:PrivateEdition/Current/TenantPEGuide/Planning Draft:PrivateEdition Before you begin Find out what to do before deploying the Tenant Service. Genesys Multicloud CX Private Edition TenantPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 For information about how to download the Helm charts, see Downloading your Genesys Multicloud CX containers.

See Helm charts and containers for Voice Microservices for the Helm chart version you must download for your release.

Containers

The Tenant Service has the following containers:

  • Core tenant service container
  • Database initialization and upgrade container
  • Role and privileges initialization and upgrade container
  • Solution-specific: Pulse provisioning container

Helm charts

  • Tenant deployment
  • Tenant infrastructure
For information about setting up your Genesys Multicloud CX private edition platform, see Software Requirements.

The following table lists the third-party prerequisites for the Tenant Service.

For information about storage requirements for Voice Microservices, including the Tenant Service, see Storage requirements in the Voice Microservices Private Edition Guide. For general network requirements, review the information on the suite-level Network settings page. For detailed information about the correct order of services deployment, see Order of services deployment.

The following prerequisites are required before deploying the Tenant Service:

  • Voice Platform and all its external dependencies must be deployed before proceeding with the Tenant Service deployment.
  • The PostgreSQL 10 database management system must be deployed, and a database must be allocated, either as a primary or as a replica. For more information about a sample deployment of a standalone DBMS, see Third-party prerequisites.

In addition, if you expect to use Agent Setup or Workspace Web Edition after the tenant is deployed, Genesys recommends that you deploy GWS Authentication Service before proceeding with the Tenant Service deployment.

Specific dependencies

The Tenant Service is dependent on the following platform endpoints:

  • GWS environment API
  • Interaction service core
  • Interaction service vq

The Tenant Service is dependent on the following service component endpoints:

  • Voice Front End Service
  • Voice Redis (RQ) Service
  • Voice Config Service
Not applicable. 5a34ac72-3fae-4368-afd8-5b899e1c52ba
Draft:STRMS/Current/STRMSPEGuide/Planning Draft:STRMS Before you begin Find out what to do before deploying Event Stream. Event Stream STRMSPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 See Helm charts and containers for Event Stream for the Helm chart version you must download for your release.

For information about how to download the Helm charts, see Downloading your Genesys Multicloud CX containers.

Event Stream has dependencies on several other Genesys services. Genesys recommends that you provision and configure Event Stream after these services have been set up; then, if any issues arise during the provisioning of Event Stream, you can be reasonably confident that the fault lies in how Event Stream is provisioned rather than in a downstream service.

For a look at the high-level deployment order, see Order of services deployment.

Draft:TLM/Current/TLMPEGuide/Planning Draft:TLM Before you begin Find out what to do before deploying Telemetry Service. Telemetry Service TLMPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 NA Telemetry Service is composed of:
  • 1 Docker Container: tlm/telemetry-service:version
  • 1 Helm Chart: telemetry-service_version.tgz

For additional information about overriding Helm chart values, see Overriding Helm Chart values in the Genesys Multicloud CX Private Edition Guide.

For information about downloading Helm charts from JFrog Edge, see Downloading your Genesys Multicloud CX containers in the Setting up Genesys Multicloud CX Private Edition guide.

NA
NA For any kind of Telemetry deployment, the services that the Telemetry Service depends on must be deployed and running before you deploy it.

For a look at the high-level deployment order, see Order of services deployment.

17df197d-45b4-4d49-b269-f44d5bdfe5a1
Draft:UCS/Current/UCSPEGuide/Planning Draft:UCS Before you begin Find out what to do before deploying Universal Contact Service (UCS). Universal Contact Service UCSPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 Currently, UCS:
  • supports a single-region model of deployment only
  • does not support SSL communication with Elasticsearch
  • requires a dedicated PostgreSQL deployment per customer
Download the UCS related Docker containers and Helm charts from the JFrog repository.

See Helm charts and containers for Universal Contact Service for the Helm chart and container versions you must download for your release.

For more information on JFrog, refer to the Downloading your Genesys Multicloud CX containers topic in the Setting up Genesys Multicloud CX private edition document.

  • Kubernetes 1.17+
  • Helm 3.0
All data is stored in PostgreSQL, Elasticsearch, and the Nexus Upload Service, which are external to UCS. UCS requires the following Genesys components:
  • Genesys Authentication Service
  • GWS Environment Service
As part of the GDPR compliance procedure, the customer sends a request to Care, providing information about the end user. Care then opens a ticket for the Engineering team to follow up on the request.

The Engineering team processes the request as follows:

GDPR request: Export Data

  • Request to UCS to get contact by ID: identify contact (if there is an email address or phone number), or getContact (if there is a direct contact ID).
  • Request to UCS-X to get the list of interactions for the contact found.
  • Perform CSV export and attach the resulting file to the ticket.

GDPR request: Forget me

  • Request to UCS-X to get contact by ID: identify contact (if there is an email address or phone number), or getContact (if there is a direct contact ID).
  • Request to UCS-X to get the list of interactions for the contact found.
  • Delete all found interactions.
  • Re-check that all interactions for the contact were removed.
  • Delete the contact.
  • Re-check that the contact was removed.
  • Update the ticket.
Draft:VM/Current/VMPEGuide/Planning Draft:VM Before you begin Find out what to do before deploying Voice Microservices. Voice Microservices VMPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 For information about how to download the Helm charts, see Downloading your Genesys Multicloud CX containers.

The following services are included with Voice Microservices:

  • Voice Agent State Service
  • Voice Config Service
  • Voice Dial Plan Service
  • Voice Front End Service
  • Voice Orchestration Service
  • Voice Registrar Service
  • Voice Call State Service
  • Voice RQ Service
  • Voice SIP Cluster Service
  • Voice SIP Proxy Service
  • Voice Voicemail Service
  • Voice Tenant Service

See Helm charts and containers for Voice Microservices for the Helm chart version you must download for your release.

For information about the Voicemail Service, see Before you begin in the Configure and deploy Voicemail section of this guide.

For information about the Tenant service, also included with Voice Microservices, see the Tenant Service Private Edition Guide.

For information about setting up your Genesys Multicloud CX private edition platform, see Software Requirements.

The following table lists the third-party prerequisites for Voice Microservices.

Voice Tenant Service
Persistent Volume Size Type IOPS Functionality Container Critical Backup needed
log-pvc 50Gi RWO medium storing log files tenant Y Y

SIP Cluster Service

Persistent Volume Size Type IOPS Functionality Container Critical Backup needed
log-pvc 50Gi RWO medium storing log files voice-sip Y Y

VoiceMail Service

Persistent Volume Type IOPS Functionality Container Critical Backup needed
Azure blob storage v2 RWM medium storing voicemailbox settings and voicemail messages tenant Y Y
AWS S3 Bucket RWM medium storing voicemailbox settings and voicemail messages tenant Y Y
File System RWM medium storing voicemailbox settings and voicemail messages tenant Y Y

For more information, see Storage requirements in the Configure and deploy Voicemail section of this guide.

For general network requirements, review the information on the suite-level Network settings page.


The following network characteristics apply to the Voicemail Service and the Tenant Service:

Cross-region bandwidth
  • Voicemail Service: Connects to the Voicemail service in other regions to push MWI notifications.
  • Tenant Service: Must connect to the Tenant Service in other regions; requires bandwidth for the Redis cross-region connection.

External connections
  • Voicemail Service: Redis, Storage Account.
  • Tenant Service: Redis and Kafka support secured (TLS) connections; Postgres supports secured (TLS, simple) connections between the Tenant Service and the Postgres server.

Pod Security Policy
  • Both services: All containers run as the Genesys user (500), a non-root user.

SMTP Settings
  • Voicemail Service: SMTP enabled.
  • Tenant Service: Not applicable.

TLS/SSL Certificates configurations
  • Not applicable for either service.

Ingress
  • Not applicable for either service.

Subnet sizing
  • Voicemail Service: Network bandwidth must be sufficient to handle the volume of data transferred into and out of Kafka and Redis.
  • Tenant Service: Size the subnet to accommodate N+1 Tenant pods.

CNI for Direct Pod Routing
  • Not applicable for either service.
For detailed information about the correct order of services deployment, see Order of services deployment.

Multi-Tenant Inbound Voice: Voicemail Service

Customer data that is likely to identify an individual, either on its own or in combination with other held data, is considered Personally Identifiable Information (PII). Customer name, phone number, email address, bank details, and IP address are some examples of PII.

According to EU GDPR:

  • When a customer requests to access personal data that is available with the contact center, the PII associated with the client is exported from the database in client-understandable format. You use the Export Me request to do this.
  • When a customer requests to delete personal data, the PII associated with that client is deleted from the database within 30 days. However, the Voicemail service is designed in a way that the Customer PII data is deleted in one day using the Forget Me request.

Both Export Me and Forget Me requests depend only on Caller ID/ANI input from the customer. The following PII data is deleted or exported during the Forget Me or Export Me request process, respectively:

  • Voicemail Message
  • Caller ID/ANI

The GDPR feature is supported only when StorageInterface is configured as BlobStorage and the Voicemail service is configured with an Azure storage account data store.

Adding caller_id tag during voicemail deposit

The index tag caller_id is included in voicemail messages and metadata blob files during voicemail deposit. Using the index tags, you can easily filter messages for Forget Me or Export Me requests instead of searching every mailbox.

GDPR multi-region support

In the Voicemail service, all voicemail metadata files are stored in the master region, and voicemail messages are deposited and stored in their respective regions. Therefore, all regions of a tenant must be connected to perform Forget Me, Undo Forget Me, or Export Me processes for GDPR inputs.

To provide multi-region support for GDPR, follow these steps while performing a GDPR operation:

  1. Get the list of regions of a tenant.
  2. Ensure the storage accounts in all regions are up. If any storage account is down, you cannot perform the GDPR operation.
  3. GDPR operates on the master region files first.
  4. Then, GDPR operates on the files in all non-master regions.
Draft:WebRTC/Current/WebRTCPEGuide/Planning Draft:WebRTC Before you begin Find out what to do before deploying WebRTC. WebRTC WebRTCPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 All the prerequisites described under Third-party prerequisites, Genesys dependencies, and Secrets have been met. Download the Helm charts from the webrtc folder in the JFrog repository. See Helm charts and containers for WebRTC for the Helm chart version you must download for your release.

For more information about how to download Helm charts from JFrog Edge, see the suite-level documentation: Downloading your Genesys Multicloud CX containers

WebRTC contains the following containers:

Artifact Type Functionality JFrog Containers and Helm charts
webrtc webrtc gateway container Handles agents’ sessions, signaling, and media traffic. It also performs media transcoding. https://<jfrog artifactory>/<docker location>/webrtc/webrtc/
coturn coturn container Provides TURN functionality https://<jfrog artifactory>/<docker location>/webrtc/coturn/
webrtc-service Helm chart https://<jfrog artifactory>/<helm location>/webrtc-service-<version_number>.tgz
For more information about setting up your Genesys Multicloud CX private edition platform, see Software requirements.

The following are the third-party prerequisites for WebRTC:

WebRTC does not require persistent storage for any purposes except Gateway and CoTurn logs. The following table describes the storage requirements:
Persistent Volume Size Type IOPS Functionality Container Critical Backup needed
webrtc-gateway-log-volume 50Gi RW medium storing gateway log files webrtc Y Y
webrtc-coturn-log-volume 50Gi RW medium storing coturn log files coturn N Y

If a Persistent Volume and Persistent Volume Claim are configured, they are created during deployment. The log volume size should be adjusted according to the following log rates:

Gateway:

idle: 0.5 MB/hour per agent

active call: around 0.2 MB per call per agent

Example: For 24 hours of work, where each agent receives 7-10 calls per hour at a constant call rate, 1000 agents require around 500 GB, with around 20 GB consumed per hour.

CoTurn:

For 1000 connected agents, the log rate is approximately 3.6 GB/hour; it scales linearly with the number of agents and stays constant whether or not calls are in progress.

Ingress

WebRTC has the following ingress requirements (a sample ingress sketch follows this list):

  • Persistent session stickiness based on a cookie is mandatory. The stickiness cookie must contain the following attributes:
    • SameSite=None
    • Secure
    • Path=/
  • No specific header requirements
  • Allowlisting (optional)
  • TLS is mandatory
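
A minimal Ingress sketch for these requirements, assuming the ingress-nginx controller (the object name, cookie name, host, service, and secret are placeholders; verify annotation support in your controller version):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webrtc-ingress                                                # placeholder name
  annotations:
    # Cookie-based session stickiness (ingress-nginx annotations)
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "webrtc-session" # placeholder cookie name
    nginx.ingress.kubernetes.io/session-cookie-path: "/"
    nginx.ingress.kubernetes.io/session-cookie-samesite: "None"
    # The Secure attribute also requires TLS termination on the ingress
spec:
  tls:
    - hosts:
        - webrtc.example.com                                          # placeholder host
      secretName: webrtc-tls                                          # placeholder TLS secret
  rules:
    - host: webrtc.example.com                                        # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: webrtc-gateway                                  # placeholder service
                port:
                  number: 8443                                        # placeholder port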

Secrets

WebRTC supports three types of secrets: CSI driver, Kubernetes secrets, and environment variables.

Important
The GWS secret for WebRTC must contain the grants required by the WebRTC service.

For GWS secrets, the CSI or Kubernetes secret must contain gwsClient and gwsSecret key-values.

The GWS secret for WebRTC must be created in the WebRTC namespace. A minimal sketch follows.
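
A minimal sketch of such a secret, assuming a plain Kubernetes Secret carrying the gwsClient and gwsSecret key-values described above (the secret name, namespace, and values are placeholders):

apiVersion: v1
kind: Secret
metadata:
  name: webrtc-gws-secret            # placeholder name
  namespace: webrtc                  # your WebRTC namespace
type: Opaque
stringData:
  gwsClient: <gws-client-id>         # placeholder value
  gwsSecret: <gws-client-secret>     # placeholder value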

ConfigMaps

Not Applicable

WAF Rules

Disable the following Web Application Firewall (WAF) rules for WebRTC:

WAF rule group Rule numbers
REQUEST-920-PROTOCOL-ENFORCEMENT 920300, 920440
REQUEST-913-SCANNER-DETECTION 913100, 913101
REQUEST-921-PROTOCOL-ATTACK 921150
REQUEST-942-APPLICATION-ATTACK-SQLI 942430


Pod Security Policy

Not applicable

Auto-scaling

The KEDA operator enables auto-scaling of the WebRTC Gateway and CoTurn pods; the auto-scaling feature requires Prometheus metrics. To learn more about KEDA, visit https://keda.sh/docs/2.0/concepts/.
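
Because the exact value names are defined by the WebRTC Helm chart, the following is only a minimal KEDA ScaledObject sketch showing the kind of auto-scaling object discussed in this section; the deployment name, Prometheus address, metric name, query, and thresholds are hypothetical:

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: webrtc-gateway-scaler          # hypothetical name
spec:
  scaleTargetRef:
    name: webrtc-gateway               # hypothetical Deployment name
  pollingInterval: 30                  # seconds between metric checks
  minReplicaCount: 2
  maxReplicaCount: 10
  triggers:
    # Scale on a Prometheus metric, such as agent sign-ins
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.monitoring:9090   # adjust to your Prometheus
        metricName: webrtc_signins                         # hypothetical metric name
        query: sum(webrtc_gateway_signins_total)           # hypothetical query
        threshold: "100"
    # Scale on CPU utilization
    - type: cpu
      metadata:
        type: Utilization
        value: "60"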

Deployment of the auto-scaling objects is enabled through an option in the YAML values file.

You can configure the polling interval and maximum number of replicas separately for Gateway pods and CoTurn pods through the corresponding options in the values file.

  • Gateway Pod Scaling
    • Sign-ins: Gateway pods scale based on the number of agent sign-ins.

  • CPU based scaling

WebRTC auto-scaling is also performed based on CPU and memory usage. Configure the CPU and memory limits for Gateway pods in the YAML values file; see the resource-limits sketch after this list.

  • CoTurn Pod scaling

Auto-scaling of CoTurn is performed based on CPU and memory usage only. Configure the CPU and memory limits for CoTurn pods in the YAML values file; see the resource-limits sketch below.
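
As a rough sketch, CPU and memory limits for either pod type follow the standard Kubernetes resources form; the values shown, and the values-file key under which this block nests, are illustrative only:

resources:
  requests:
    cpu: "1"         # illustrative request
    memory: 2Gi
  limits:
    cpu: "2"         # illustrative limit
    memory: 4Gi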

SMTP settings

Not applicable

WebRTC has dependencies on several other Genesys services, so Genesys recommends that you provision and configure WebRTC after these services have been set up.
Service Functionality
GWS Used to read environment and tenant configuration
GAuth Used for authentication of WebRTC service and Agents
GVP Used for voice calls - conferences, recording, and so on
Voice microservice Used to handle voice calls
Tenant microservice Used to store tenant configuration

For more information about the correct order of services deployment, see Order of services deployment.

Not applicable d703e174-b039-43c9-8859-e25b3a7feb22
GVP/Current/GVPPEGuide/Planning GVP Before you begin Find out what to do before deploying Genesys Voice Platform. Genesys Voice Platform GVPPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167
  • Resource Manager does not use gateway LRG configurations. Instead, it uses the contact center ID coming from SIP Server as gvp-tenant-id in the INVITE message to identify the tenant and pick the IVR Profiles.
  • Only single MCP LRG is supported per GVP deployment.
  • Only the specific component configuration options documented in Helm values.yaml overrides can be modified. Other configuration options can't be changed.
  • DID/DID groups are managed as part of Designer applications (Applications).
  • SIP TLS/SRTP are currently not supported.
Download the GVP-related Docker containers and Helm charts from the JFrog repository. For Docker container and Helm chart versions, refer to Helm charts and containers for Genesys Voice Platform.

For more information on JFrog, refer to the Downloading your Genesys Multicloud CX containers topic in the Setting up Genesys Multicloud CX private edition document.

Media Control Platform

Storage requirement for production (min)

Persistent Volume Size Type IOPS Functionality Container Critical Backup needed
recordings-volume 100Gi RWO high Storing recordings, dual AZ, gvp-mcp, rup Y Y
rup-volume 40Gi RWO high Storing recordings temporarily, dual AZ, rup Y Y
log-pvc 50Gi RWO medium storing log files gvp-mcp Y Y

Storage requirements for Sandbox

Persistent Volume Size Type IOPS Functionality Container Critical Backup needed
recordings-volume 50Gi RWO high Storing recordings, dual AZ, gvp-mcp, rup Y Y
rup-volume 20Gi RWO high Storing recordings temporarily, dual AZ, rup Y Y
log-pvc 25Gi RWO medium storing log files gvp-mcp Y Y

Resource Manager

Storage requirement for production (min)

Persistent Volume Min Size Type IOPS Functionality Container Critical Backup needed
billingpvc 20Gi RWO high billing gvp-rm Y Y
log-pvc 50Gi RWO medium storing log files gvp-rm Y Y

Storage requirements for Sandbox

Persistent Volume Min Size Type IOPS Functionality Container Critical Backup needed
billingpvc 20Gi RWO high billing gvp-rm Y Y
log-pvc 10Gi RWO medium storing log files gvp-rm Y Y

Service Discovery

Not applicable

Reporting Server

Storage requirement for production (min)

Persistent Volume Min Size Type IOPS Functionality Container Critical Backup needed
billing-pvc 20Gi RWO High Stores ActiveMQ data and config information gvp-rs Y Y

Storage requirement for Sandbox

Persistent Volume Min Size Type IOPS Functionality Container Critical Backup needed
billing-pvc 10Gi RWO High Stores ActiveMQ data and config information gvp-rs Y Y

GVP Configuration Server

Not applicable

Media Control Platform

Ingress

Not applicable

HA/DR

MCP is deployed with autoscaling in all regions. For more details, see the section Auto-scaling.

Calls are routed to active MCPs by GVP Resource Manager (RM); if an MCP instance terminates, the calls are routed to a different MCP instance.

Cross-region bandwidth

MCPs are not expected to make cross-region requests in normal operation.

External connections

Not applicable

Pod Security Policy

All containers run as the genesys user (500), a non-root user.
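
A minimal pod securityContext sketch consistent with this requirement (the exact placement within the chart values is deployment-specific):

securityContext:
  runAsNonRoot: true
  runAsUser: 500     # genesys user
  runAsGroup: 500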

SMTP Settings

Not applicable

TLS/SSL Certificates configurations

Not applicable

Resource Manager

Ingress

Not applicable

HA/DR

Resource Manager is deployed as an active-active pair.

Cross-region bandwidth

Resource Manager is deployed per region. There is no cross-region deployment.

External connections

Not applicable

Pod Security Policy

All containers run as the genesys user (500), a non-root user.

SMTP Settings

Not applicable

TLS/SSL Certificates configurations

Not applicable

Service Discovery

Ingress

Not applicable

HA/DR

Service Discovery is a singleton service; it is restarted if it shuts down unexpectedly or becomes unavailable.

Cross-region bandwidth

Service Discovery is not expected to make cross-region requests in normal operation.

External connections

Not applicable

Pod Security Policy

All containers run as the genesys user (500), a non-root user.

SMTP Settings

Not applicable

TLS/SSL Certificates configurations

Not applicable

Reporting Server

Ingress

Not applicable

HA/DR

Reporting Server is deployed as a single pod service.

Cross-region bandwidth

Reporting Server is deployed per region. There is no cross-region deployment.

External connections

Not applicable

Pod Security Policy

All containers run as the genesys user (500), a non-root user.

SMTP Setting

Not applicable

TLS/SSL Certificates configurations

Not applicable

GVP Configuration Server

Ingress

Not applicable

HA/DR

GVP Configuration Server is deployed as a singleton. If the GVP Configuration Server crashes, a new pod is created. The GVP services continue to service calls even while the GVP Configuration Server is unavailable; only new configuration changes, such as new MCP pods, are not available during that time.

Cross-region bandwidth

GVP Configuration Server is not expected to make cross-region requests in normal operation.

External connections

External service Functionality
PostgreSQL database

Pod Security Policy

All containers run as the genesys user (500), a non-root user.

SMTP Settings

Not applicable

TLS/SSL Certificates configurations

Not applicable

N/A

Media Control Platform

Service Functionality
Consul Consul service must be deployed before deploying MCP for proper service registration in GVP Configuration Server and RM.

Resource Manager

Service Functionality
GVP Configuration Server GVP Configuration Server must be deployed before deploying RM for proper working.

Service Discovery

Service Functionality
Consul Consul service must be deployed before deploying Service Discovery for proper service registration in GVP Configuration Server and Resource Manager.

Reporting Server

Service Functionality
GVP Configuration Server GVP Configuration Server must be deployed before deploying RS for proper working.

GVP Configuration Server

N/A

This section describes product-specific aspects of Genesys Voice Platform support for the European Union's General Data Protection Regulation (GDPR) in on-premises deployments. For general information about Genesys support for GDPR compliance, see General Data Protection Regulation.

Warning

Disclaimer: The information contained here is not considered final. This document will be updated with additional technical information.

Data Retention Policies

GVP has configurable retention policies that allow data to expire. GVP allows aggregating data for items like peak and call volume reporting; the aggregated data is anonymous. Detailed call detail records include DNIS and ANI data. The Voice Application Reporter (VAR) data could potentially contain personal data and would have to be deleted when requested. The log data files may contain sensitive information (possibly masked), so the data must be rotated/expired frequently to meet the needs of GDPR.

Configuration Settings

Media Server

Media Server is capable of storing data and sending alarms which can potentially contain sensitive information, but by default, the data will typically be automatically cleansed (by the log rollover process) within 40 days.

The location of these files can be configured in the GVP Media Control Platform Configuration [default paths are shown below]:

  • vxmli:recordutterance-path = $InstallationRoot$/utterance/
  • vxmli:recording-basepath = $InstallationRoot$/record/
  • Netann:record-basepath = $InstallationRoot$/record
  • msml:cpd-record-basepath = $InstallationRoot$/record/
  • msml:record-basepath = $InstallationRoot$
  • msml:record-irrecoverablerecordpostdir = $InstallationRoot$/cache/record/failed
  • mpc:recordcachedir = $InstallationRoot$/cache/record
  • calllog:directory = $InstallationRoot$/callrec/

Log files and temporary files can be saved.

The location of these files can be configured in the GVP Media Control Platform Configuration [default paths are shown below]:

  • vxmli:logdir = $InstallationRoot$/logs/
  • vxmli:tmpdir = $InstallationRoot$/tmp/
  • vxmli:directories-save_tempfiles = $InstallationRoot$/tmp/

Note: Changing the default values of any of the above MCP options is not supported in the initial Private Edition release.

Also, additional sinks are available where alarms and potentially sensitive information can be captured. See Table 6 and Appendix H of the Genesys Voice Platform User’s Guide for more information. The metrics can be configured in the GVP Media Control Platform configuration:

  • ems.log_sinks = MFSINK | DATAC | TRAPSINK
  • ems:metricsconfig-DATAC = *
  • ems:dc-default-metricsfilter = 0-16,18,25,35,36,41,52-55,74,128,136-141
  • ems.metricsconfig.MFSINK = 0-16,18-41,43,52-56,72-74,76-81,127-129,130,132-141,146-152

GVP Resource Manager

Resource Manager is capable of storing data and sending alarms and potentially sensitive information, but by default, the data will typically be automatically cleansed (by the log rollover process) within 40 days.

Customers are advised to understand the GVP logging (for all components) and understand the sinks (destinations) for information which the platform can potentially capture. See Table 6 and Appendix H of the Genesys Voice Platform User’s Guide for more information.

GVP Reporting Server

The Reporting Server is capable of storing/sending alarms and potentially sensitive information, but by default, these components process but do not store consumer PII. Customers are advised to understand the GVP logging (for all components) and understand the sinks (destinations) for information which the platform can potentially capture. See Table 6 and Appendix H of the Genesys Voice Platform User’s Guide for more information.

By default, Reporting Server is designed to collect statistics and other user information. Retention period of this information is configurable, with most data stored for less than 40 days. Customers should work with their application designers to understand what information is captured as part of the application, and, whether or not the data could be considered sensitive.

Customers can change these settings as needed by using a Helm chart override in values.yaml; a sketch follows the list below.

Data Retention Specific Settings

  • rs.db.retention.operations.daily.default: "40"
  • rs.db.retention.operations.monthly.default: "40"
  • rs.db.retention.operations.weekly.default: "40"
  • rs.db.retention.var.daily.default: "40"
  • rs.db.retention.var.monthly.default: "40"
  • rs.db.retention.var.weekly.default: "40"
  • rs.db.retention.cdr.default: "40"
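
A minimal override sketch, assuming the flat property names above can be placed directly in the override file (the exact nesting required by the GVP Reporting Server chart may differ):

# values-override.yaml (hypothetical placement of the retention properties)
rs.db.retention.cdr.default: "30"
rs.db.retention.var.daily.default: "30"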

Identifying Sensitive Information for Processing

The following example demonstrates how to find this information in the Reporting Server database – for the example where ‘Session_ID’ is considered sensitive:

  • select * from dbo.CUSTOM_VARS where session_ID = '018401A9-100052D6';
  • select * from dbo.VAR_CDRS where session_ID = '018401A9-100052D6';
  • select * from dbo.EVENT_LOGS where session_ID = '018401A9-100052D6';
  • select * from dbo.MCP_CDR where session_ID = '018401A9-100052D6';
  • select * from dbo.MCP_CDR_EXT where session_ID = '018401A9-100052D6';

An example of a SQL query that might be used to determine whether specific information is sensitive follows.
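
For instance, a sketch that assumes hypothetical column names (VAR_NAME, VAR_VALUE) in dbo.CUSTOM_VARS; adapt the columns and the searched value to your actual schema:

-- Search custom variables for a known PII value (the value shown is illustrative)
SELECT session_ID, VAR_NAME, VAR_VALUE
FROM dbo.CUSTOM_VARS
WHERE VAR_VALUE LIKE '%jane.doe@example.com%';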

GWS/Current/GWSPEGuide/Planning GWS Before you begin Find out what to do before deploying Genesys Web Services and Applications. Genesys Web Services and Applications GWSPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 Genesys Web Services and Applications (GWS) in Genesys Multicloud CX private edition is made up of multiple containers and Helm charts. The pages in this "Configure and deploy" chapter walk you through how to deploy the following Helm charts:
  • GWS services (gws-services) - all the GWS components.
  • GWS ingress (gws-ingress) - provides internal and external access to GWS services. Internal ingress is used for cross-component communication inside the GWS deployment. It also can be used by other clients located inside the same Kubernetes cluster. External ingress provides access to GWS services to clients located outside the Kubernetes cluster. If you are deploying Genesys Web Services and Applications in a single namespace with other private edition services, then you do not need to deploy GWS ingress.

GWS also includes a Helm chart for Nginx (wwe-nginx) for Workspace Web Edition - see the Workspace Web Edition Private Edition Guide for details about how to deploy this chart.

See Helm charts and containers for Genesys Web Services and Applications for the Helm chart versions you must download for your release.

For information about downloading Helm charts from JFrog Edge, see Downloading your Genesys Multicloud CX containers.

Install the prerequisite dependencies listed in the Third-party services table before you deploy Genesys Web Services and Applications. See Software requirements for a full list of prerequisites and third-party services required by all Genesys Multicloud CX private edition services. GWS uses PostgreSQL to store tenant information, Redis to cache session data, and Elasticsearch to store monitored statistics for fast access. If you set up any of these services as dedicated services for GWS, they have the following minimal requirements:

PostgreSQL

  • CPU: 2
  • RAM: 8 GB
  • HDD: 50 GB

Redis

  • 2 nodes:
    • CPU: 2
    • RAM: 8 GB
    • HDD: 20 GB

Elasticsearch

  • 3 "master" nodes:
    • CPU: 2
    • RAM: 8 GB
    • HDD: 20 GB
  • 4 "data" nodes:
    • CPU: 4
    • RAM: 16 GB
    • HDD: 20 GB
GWS ingress objects support Transport Layer Security (TLS) version 1.2 for a secure connection between Kubernetes cluster ingress and GWS ingress. TLS is disabled by default, but you can configure it for internal and external ingress by overriding the entryPoints.internal.ingress.tls and entryPoints.external.ingress.tls sections of the GWS ingress Helm chart.

For example:

entryPoints:
  external:
    ingress:
      tls:
        - secretName: gws-secret-ext
          hosts:
            - gws.genesys.com

In the example above:

  • secretName is the name of the Kubernetes secret that contains the certificate. The secret is a prerequisite and must be created before you deploy GWS ingress.
  • hosts is a list of the fully qualified domain names that should use the certificate. The list must be the same as the value configured for the entryPoints.external.ingress.hosts parameter.

Cookies

GWS components use cookies for the following purposes:

  • identify HTTP/HTTPS user sessions
  • identify CometD user sessions
  • support session stickiness
You can use any of the following browsers for UIs:

  • Chrome 75+
  • Firefox 68+
  • Firefox ESR 60.9
  • Microsoft Edge
Genesys Web Services and Applications must be deployed after Genesys Authentication.

For a look at the high-level deployment order, see Order of services deployment in the Setting up Genesys Multicloud CX Private Edition guide.

IXN/Current/IXNPEGuide/Planning IXN Before you begin Find out what to do before deploying Interaction Server. Interaction Server IXNPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 The current version of IXN Server:
  • supports a single-region model of deployment only
  • does not support scaling or HA
  • requires dedicated PostgreSQL deployment per customer
Available IXN containers can be found under the following names in the registry:
  • ixn/ixn_vq_node
  • ixn/ixn_node
  • ixn/interaction_server

Available Helm charts can be found under the name ixn-<version>.

For information about downloading Genesys containers and Helm charts from JFrog Edge, see the suite-level documentation: Downloading your Genesys Multicloud CX containers.

The following are the minimum versions supported by IXN Server:
  • Kubernetes 1.17+
  • Helm 3.0
If logging to files is configured for IXN Server, a storage volume must be mounted to the IXN Server container. The storage must be capable of writing up to 100 MB/min, and up to 10 MB/s for 2 minutes at peak. The storage size depends on the logging configuration.

For storage characteristics of the IXN Server database, refer to the PostgreSQL documentation.

Contact your account representative if you need assistance with sizing calculations.

Not applicable Not applicable Tenant service. For more information, refer to the Tenant Service Private Edition Guide.


PEC-AD/Current/WWEPEGuide/Planning PEC-AD Before you begin Find out what to do before deploying Workspace Web Edition. Agent Desktop WWEPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 There are no limitations or assumptions related to the deployment. The Workspace Web Edition Helm charts are included in the Genesys Web Services (GWS) Helm charts. You can access them when you download the GWS Helm charts from JFrog using your credentials.

See Helm charts and containers for Genesys Web Services and Applications for the Helm chart version you must download for your release.

For information about downloading Genesys Helm charts from JFrog Edge, refer to this article: Downloading your Genesys Multicloud CX containers.

There are no specific storage requirements for Workspace Web Edition. Network requirements include:
  • Required properties for ingress:
    • Cookies usage: None
    • Header requirements - client IP & redirect, passthrough: None
    • Session stickiness: None
    • Allowlisting - optional: None
    • TLS for ingress - optional (you can enable or disable TLS on the connection): Through annotation, like any UI or API in the solution
  • Cross-region bandwidth: N/A
  • External connections from the Kubernetes cluster to other systems: N/A
  • WAF Rules (specific only for services handling internet traffic): N/A
  • Pod Security Policy: N/A
  • High-Availability/Disaster Recovery: Refer to High availability and disaster recovery
  • TLS/SSL Certificate configurations: No specific requirements
You can use any of the supported browsers to run Agent Workspace on the client side.

Mandatory Dependencies

The following services must be deployed and running before deploying the WWE service. For more information, refer to Order of services deployment.

  • Genesys Authentication Service:
    • A redirect must be configured in Auth/Environment to allow an agent to log in from the WWE URL. The redirect should be configured in the Auth onboarding script, according to the DNS assigned to the WWE service.
  • GWS services:
    • The CORS rules for WWE URLs must be configured in GWS. This should be configured in the GWS onboarding script, according to the DNS assigned to the WWE service.
    • The GWS API URL should be specified at the WWE deployment time as part of the Helm values.
  • TLM service:
    • The CORS rules for the domain where WWE is declared must be configured in Telemetry Service (TLM); for example, allowing an origin such as https://wwe.<your-domain> (placeholder domain).

Optional Dependencies

Depending on the deployed architecture, the following services must be deployed and running before deploying the WWE service:

  • WebRTC Service: To allow WebRTC in the browser
  • Telemetry Service: To allow browser observability (metrics and logs)

Miscellaneous desktop-side optional dependencies

The following software must or might be deployed on agent workstations to allow agents to leverage the WWE service:

  • Mandatory: A browser referenced in the supported browser list.
  • Optional: Genesys Softphone: a SIP or WebRTC softphone to handle the voice channel of agents.
Workspace Web Edition does not have specific GDPR support.
PEC-CAB/Current/CABPEGuide/Planning PEC-CAB Before you begin Find out what to do before deploying Genesys Callback. Callback CABPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 Genesys Engagement Service (GES) is the only service that runs in the GES Docker container. The Helm charts included with the GES release provision GES and any Kubernetes infrastructure necessary for GES to run, such as load balancing, autoscaling, ingress control, and monitoring integration.

For information about how to download the Helm charts, see Downloading your Genesys Multicloud CX containers.

See Helm charts and containers for Callback for the Helm chart version you must download for your release.

For information about setting up your Genesys Multicloud CX private edition platform, see Software requirements. The primary contributor to the size of a callback record is the amount of user data that is attached to a callback. Since this is an open-ended field, and the composition will differ from customer to customer, it is difficult to state the precise storage requirements of GES for a given deployment. To assist you, the following table lists the results of testing done in an internal Genesys development environment and shows the impact that user data has when it comes to the storage requirements for both Redis and Postgres.
Test Redis size Postgres size
10,000 Scheduled Callbacks with no user data 26.51 MB 41.1 MB
10,000 Scheduled Callbacks with 10 KB of user data 64.44 MB 252.91 MB
10,000 Scheduled Callbacks with 100 KB of user data 110.58 MB 595.79 MB

Note: This is 100 KB of randomized string in a single field in the user data.

Hardware requirements

Genesys strongly recommends the following hardware requirements to run GES with a single tenant. The requirements are based on running GES in a multi-tenanted environment and scaled down accordingly. Use these guidelines, coupled with the callback storage information listed above, to gauge the precise requirements needed to ensure that GES runs smoothly in your deployment.

GES

(Based on t3.medium)

  • vCPUs: 1
  • Memory: 2 GiB
  • Network burst: 5 Gbps

Redis

(Based on cache.r5.large) Redis is essential to GES service availability. Deploy two Redis caches in a cluster; the second cache acts as a replica of the first. For more information, see Architecture.

Callback data is stored in Redis memory.

  • vCPUs: 1
  • Memory: 8 GiB
  • Network burst: 10 Gbps

PostgreSQL

(Based on db.t3.medium)

  • vCPUs: 2
  • Memory: 4 GiB
  • Network burst: 5 Gbps
  • Storage: 100 GiB

Sizing calculator

The information in this section is provided to help you determine what hardware you need to run GES and third-party components. The information and formulas are based on an analysis of database disk storage and Redis memory usage requirements for callback data. The numbers provided here include only storage and memory usage for callbacks. Additional storage and memory is required for configuration data and basic operations.

Requirements per callback

Each callback record (excluding user data) requires approximately 6.5 to 7.0 kB of database disk storage, plus additional disk storage for the user data. Each kB of user data consumes approximately 3.0 kB of disk storage.

Each callback record (excluding user data) requires approximately 4.5 to 5.5 kB of Redis memory, plus an additional 1.25 kB for each kB of user data.

Use the following formulas to estimate disk storage and Redis memory requirements:

  • Estimate database disk storage requirements for callback data:
    <number of callbacks per day> × (7 kB + (3 kB × <kB of user data per callback>)) × 14 days
  • Estimate Redis memory requirements for callback data:
    <number of callbacks per day> × (5.5 kB + (1.25 kB × <kB of user data per callback>)) × 14 days

For example, if a tenant has an average of 100,000 callbacks per day with 1kB user data in each callback:

  • The database storage requirement is approximately 14 GB.
  • The Redis memory requirement is approximately 9.5 GB.

NOTE: Each callback record is stored for 14 days. If you average about 10k scheduled callbacks every day, and the scheduled callbacks are all booked as far out as possible (that is, 14 days in the future), the number of callbacks to use in storage and memory calculations is 28 days × 10k callbacks per day = 280k callbacks.

Redis operations

The Redis operations primarily update the connectivity status to other services such as Tenant Service (specifically ORS and URS) and Genesys Web Services and Applications (GWS).

When GES is idle (zero callbacks in the past, no active callback sessions, no scheduled callbacks), GES generates about 50 Redis operations per second per GES node per tenant.

Each Immediate callback generates approximately 110 Redis operations from its creation to the end of the ORS session.

For Scheduled callbacks, assuming each callback generates 110 Redis operations when the ORS session is active (based on Immediate callback numbers), there is 1 additional Redis operation for each minute that a callback is scheduled.

For example, if a callback is scheduled for 1 hour from the time it was created, the number of Redis operations is approximately 60 + 110 = 170.

For a callback scheduled for 1 day from the time it was created, it generates approximately 60 × 24 + 110 = 1550 Redis operations, using the following formula for the number of Redis operations per callback:
<number of callbacks> × (110 + <number of minutes until scheduled time>)

Because the longevity of a callback ORS session depends on the estimated wait time (EWT), the total number of Redis operations performed by GES per minute varies, based on both the number of callbacks in the system and the EWT of the callbacks.

Use the following formula to estimate the number of Redis operations performed per minute:
Total number of Redis operations per minute = (50 base GES Redis operations per second × 60 seconds) + <number of upcoming scheduled callbacks in the system> + ((<total number of active callbacks> / <EWT>) × 110)

Where:

  • Total number of active callbacks = <number of active immediate callbacks> + <number of active scheduled callbacks>, and
  • Number of active scheduled callbacks = (<number of scheduled callbacks per time slot> / <time slot duration>) × <EWT>

For example, let's say we have the following scenario:

  • Scheduled callbacks:
    • Time slot duration = 15 minutes
    • Maximum capacity per time slot = 100
    • Business hours = 24x7
    • Assume that all time slots are fully booked for the next 14 days
  • Number of active immediate callbacks = 1,000
  • Estimated wait time = 90 minutes

Using the preceding formulas, estimate the Redis operations per minute:

  • Total number of scheduled callbacks = (100 × (60 / 15)) × 24 × 14 = 134,400
  • Number of active scheduled callbacks = (100 / 15) × 90 = 600
  • Number of upcoming scheduled callbacks = <total number of scheduled callbacks> - <number of active scheduled callbacks> = (134,400 - 600) = 133,800
  • Total number of active callbacks = 1,000 + 600 = 1,600
  • Total number of Redis operations per minute = (50 × 60) + 133,800 + ((1,600 / 90) × 110) = 138,756

Redis keys

Each callback creates three additional Redis keys. Given the preceding calculations for Redis memory requirements for each callback, the formula for the average key size is:
(5.5 kB + (1.25 kB × <kB_of_user_data_per_callback>)) / 3

Incoming connections to the GES deployment are handled either through the UI or through the external API. For information about how to use the external API, see the Genesys Multicloud CX Developer Center.

Connection topology

The diagram below shows the incoming and outgoing connections among GES and other Genesys and third-party software such as Redis, PostgreSQL, and Prometheus. In the diagram, Prometheus is shown as part of the broader Kubernetes deployment, although this is not a requirement. What's important is that Prometheus can reach the internal load balancer for GES.

The other important thing to note is that, depending on the use case, GES might communicate with Firebase and CAPTCHA over the open internet. This is not part of the default callback offering, but if you use Push Notifications with your callback service, then GES must be able to connect to Firebase over TLS. The use of Push Notifications or CAPTCHA is optional and not necessary for the basic callback scenarios.

[Diagram: GES connection topology in private edition]

Web application firewall rules

Information in the following sections is based on NGINX configuration used by GES in an Azure cloud environment.

Cookies and session requirements

When interacting with the UI, GES and GWS ensure that the user's browser has the appropriate session cookies. By default, UI sessions time out after 20 minutes of inactivity.

The external Engagement API does not require session management or the use of cookies, but it is important that the GES API key be provided in the request headers in the X-API-Key field.

For ingress to GES, allow requests to only the following paths to be forwarded to GES:

- /ges/
- /engagement/v3/callbacks/create
- /engagement/v3/callbacks/cancel
- /engagement/v3/callbacks/retrieve
- /engagement/v3/callbacks/availability/
- /engagement/v3/callbacks/queue-status/
- /engagement/v3/callbacks/open-for/
- /engagement/v3/estimated-wait-time
- /engagement/v3/call-in/requests/create
- /engagement/v3/statistics/operations/get-statistic-ex

In addition to allowing connections to only these paths, ensure that the ccid or ContactCenterID headers on any incoming requests are empty. This enhances security of the GES deployment; it prevents the use of external APIs by an actor who has only the CCID of the contact center.

TLS/SSL certificate configuration

There are no special TLS certificate requirements for the GES/Genesys Callback web-based UI.

Subnet requirements

There are no special requirements for sizing or creating an IP subnet for GES above and beyond the demands of the broader Kubernetes cluster.

The Genesys Callback user interface is supported in the following browsers. GES has dependencies on several other Genesys services. You must deploy the services on which GES depends and verify that each is working as expected before you provision and configure GES. If you follow this advice, then – if any issues arise during the provisioning of GES – you can be reasonably assured that the fault lies in how GES is provisioned, rather than in a downstream program.

GES/Callback requires your environment to contain supported releases of the following Genesys services, which must be deployed before you deploy Callback:

  • Genesys Web Services and Applications (GWS)
  • Genesys Authentication
  • Voice Microservices (includes Tenant Service)
  • Designer
  • Agent Setup

For detailed information about the correct order of services deployment, see Order of services deployment.

Callback records are stored for 14 days. The 14-day TTL setting starts at the Desired Callback Time. The Callback TTL (seconds) setting in the CALLBACK_SETTINGS data table has no effect on callback record storage duration; 14 days is a fixed value for all callback records.
PEC-DC/Current/DCPEGuide/Planning PEC-DC Before you begin Find out what to do before deploying Digital Channels. Digital Channels DCPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 Digital Channels for private edition has the following limitations:
  • Supports only a single-region model of deployment.
  • Social media requires additional components that are not included in Digital Channels.
Digital Channels in Genesys Multicloud CX private edition includes the following containers:
  • nexus
  • hubpp
  • tenant_deployment

The service also includes a Helm chart, which you must deploy to install all the containers for Digital Channels:

  • nexus

See Helm charts and containers for Digital Channels for the Helm chart version you must download for your release.

To download the Helm chart, navigate to the nexus folder in the JFrog repository. For information about how to download the Helm charts, see Downloading your Genesys Multicloud CX containers.


Install the prerequisite dependencies listed in the Third-party services table before you deploy Digital Channels. Digital Channels uses PostgreSQL and Redis to store all data.


For general network requirements, review the information on the suite-level Network settings page. Digital Channels has dependencies on the following Genesys services:
  • Genesys Authentication
  • Web Services and Applications
  • Tenant Microservice
  • Universal Contact Service
  • Designer

For detailed information about the correct order of services deployment, see Order of services deployment.

PEC-DC/Current/DCPEGuide/PlanningAIConnector PEC-DC Before you begin Find out what to do before deploying AI Connector. Digital Channels DCPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 AI Connector for private edition has the following limitation:
  • Supports only a single-region model of deployment.
AI Connector in Genesys Multicloud CX private edition includes the following container:
  • athena

The service also includes a Helm chart, which you must deploy to install all the containers for AI Connector:

  • athena

See Helm charts and containers for Digital Channels for the Helm chart version you must download for your release.

To download the Helm chart, navigate to the athena folder in the JFrog repository. For information about how to download the Helm chart, see Downloading your Genesys Multicloud CX containers.


Install the prerequisite dependencies listed in the Third-party services table before you deploy AI Connector. AI Connector uses PostgreSQL and Redis to store all data.


For general network requirements, review the information on the suite-level Network settings page. AI Connector has dependencies on the following Genesys service:
  • Digital Channels

For detailed information about the correct order of services deployment, see Order of services deployment.

d4d81735-166a-401c-abfb-0957cbbaef56
PEC-Email/Current/EmailPEGuide/Planning PEC-Email Before you begin Find out what to do before deploying Email. Email EmailPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 The current version of Email supports a single-region model of deployment only. Email in Genesys Multicloud CX private edition includes the following containers:
  • iwd-email

The service also includes a Helm chart, which you must deploy to install the required containers for Email:

  • iwdem

See Helm Chart and Containers for Email for the Helm chart version you must download for your release.

To download the Helm chart, navigate to the iwdem folder in the JFrog repository. For information about how to download the Helm charts, see Downloading your Genesys Multicloud CX containers.

All data is stored in IWD, UCS-X, and Digital Channels, which are external to the Email service. External connections: IMAP, SMTP, Gmail, GRAPH. Not applicable The following Genesys services are required:
  • Genesys authentication service (GAuth)
  • Universal Contact Service (UCS)
  • Interaction Server
  • Digital Channels (Nexus)
  • Intelligent Workload Distribution (IWD)

For the order in which the Genesys services must be deployed, refer to the Order of services deployment topic in the Setting up Genesys Multicloud CX private edition document.

Content coming soon
PEC-IWD/Current/IWDDMPEGuide/Planning PEC-IWD Before you begin Find out what to do before deploying IWD Data Mart. Intelligent Workload Distribution IWDDMPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 The current version of IWD Data Mart:
  • works as a short-lived job started on a schedule
  • does not support scaling or HA
  • requires dedicated PostgreSQL deployment per customer

IWD Data Mart is a short-lived job, so Prometheus metrics cannot be pulled; therefore, it requires a standalone Pushgateway service for monitoring.

IWD Data Mart in Genesys Multicloud CX private edition includes the following containers:
  • iwd_dm_cloud

The service also includes a Helm chart, which you must deploy to install the required containers for IWD Data Mart:

  • iwddm-cronjob

See Helm Charts and Containers for IWD and IWD Data Mart for the Helm chart version you must download for your release.

To download the Helm chart, navigate to the iwddm-cronjob folder in the JFrog repository. For information about how to download the Helm charts, see Downloading your Genesys Multicloud CX containers.

All data is stored in PostgreSQL, which is external to the IWD Data Mart. Not applicable Not applicable Intelligent Workload Distribution (IWD) with a provisioned tenant.

For the order in which the Genesys services must be deployed, refer to the Order of services deployment topic in the Setting up Genesys Multicloud CX private edition document.

Content coming soon
PEC-IWD/Current/IWDPEGuide/Planning PEC-IWD Before you begin Find out what to do before deploying IWD. Intelligent Workload Distribution IWDPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 The current version of IWD:
  • supports a single-region model of deployment only
  • requires dedicated PostgreSQL deployment per customer


IWD in Genesys Multicloud CX private edition includes the following containers:
  • iwd

The service also includes a Helm chart, which you must deploy to install the required containers for IWD:

  • iwd

See Helm Charts and Containers for IWD and IWD Data Mart for the Helm chart version you must download for your release.

To download the Helm chart, navigate to the iwd folder in the JFrog repository. For information about how to download the Helm charts, see Downloading your Genesys Multicloud CX containers.

All data is stored in the PostgreSQL, Elasticsearch, and Digital Channels which are external to IWD.

Sizing of Elasticsearch depends on the load. Allow on average 15 KB per work item, 50 KB per email. This can be adjusted depending on the size of items processed.

External connections: IWD allows customers to configure webhooks. If configured, this establishes an HTTP or HTTPS connection to the configured host and port. Not applicable The following Genesys services are required:
  • Genesys authentication service (GAuth)
  • Universal Contact Service (UCS)
  • Interaction Server
  • Digital Channels (Nexus)

For the order in which the Genesys services must be deployed, refer to the Order of services deployment topic in the Setting up Genesys Multicloud CX private edition document.

Content coming soon
PEC-OU/Current/CXCPEGuide/Planning PEC-OU Before you begin Find out what to do before deploying CX Contact. Outbound (CX Contact) CXCPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 There are no limitations. Before you begin deploying the CX Contact service, it is assumed that the following prerequisites and optional task, if needed, are completed:

Prerequisites

  • A Kubernetes cluster is ready for deployment of CX Contact.
  • The Kubectl and Helm command line tools are on your computer.
  • You have connectivity to the target cluster, the proper kubectl context to work with the cluster, and administrative permissions to deploy CX Contact to the defined namespace.

Optional tasks

  • SFTP Server—Install an SFTP Server with basic authentication for optional input and output data. SFTP Server is used when automation capabilities are required.
  • CDP NG access credentials—As of CX Contact 9.0.025, Compliance Data Provider Next Generation (CDP NG) is used as a CDP by default. Before attempting to connect to CDP NG, obtain the necessary access credentials (ID and Secret) from Genesys Customer Care.
  • Bitnami repository—If you choose to deploy dedicated Redis and Elasticsearch for CX Contact, add the Bitnami repository to install Redis and Elasticsearch using the following command:
    helm repo add bitnami https://charts.bitnami.com/bitnami

After you've completed the mandatory tasks, check the Third-party prerequisites.

For information about how to download the Helm charts, see Downloading your Genesys Multicloud CX containers.

See Helm charts and containers for CX Contact for the Helm chart version you must download for your release.

CX Contact is the only service that runs in the CX Contact Docker container. The Helm charts included with the CX Contact release provision CX Contact and any Kubernetes infrastructure necessary for CX Contact to run.

Set up the Elasticsearch and Redis services as standalone services, or install them in a single Kubernetes cluster.

For information about setting up your Genesys Multicloud CX private edition platform, see Software requirements.

CX Contact requires shared persistent storage and an associated storage class created by the cluster administrator. The Helm chart creates the ReadWriteMany (RWX) Persistent Volume Claim (PVC) that is used to store and share data with multiple CX Contact components.

The minimum recommended PVC size is 100 GB.
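
For reference, the claim the chart creates is conceptually equivalent to the following sketch (the name and storage class are placeholders; the Helm chart manages the actual PVC):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cxcontact-shared                         # placeholder; the chart creates its own PVC
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: <shared-rwx-storage-class>   # created by the cluster administrator
  resources:
    requests:
      storage: 100Gi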

This topic describes network requirements and recommendations for CX Contact in private edition deployments:

Single namespace

Deploy CX Contact in a single namespace to prevent ingress/egress traffic from going through additional hops due to firewalls, load balancers, or other network layers that introduce network latency and overhead. Do not hardcode the namespace; you can override it by using the Helm values (provided through the helm install command's standard --namespace= argument), if necessary.
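
For example, a minimal sketch of supplying the namespace at install time (the release and chart names are placeholders):

helm upgrade --install cxcontact <repo>/<cxcontact-chart> --namespace cxcontact --create-namespace -f values.yaml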

External connections

For information about external connections from the Kubernetes cluster to other systems, see Architecture. External connections also include:

  • Compliance Data Provider (AWS)
  • SFTP Servers

Ingress

The CX Contact UI requires Session Stickiness. Use ingress-nginx as the ingress controller (see github.com).

Important
The CX Contact Helm chart contains default annotations for session stickiness only for ingress-nginx. If you are using a different ingress controller, refer to its documentation for session stickiness configuration.

Ingress SSL

If you are using Chrome 80 or later, the SameSite cookie must have the Secure flag (see Chromium Blog). Therefore, Genesys recommends that you configure a valid SSL certificate on ingress.

Logging

Log rotation is required so that logs do not consume all of the available storage on the node.

Kubernetes is currently not responsible for rotating logs. Log rotation can be handled by the docker json-file log driver by setting the max-file and max-size options.

For effective troubleshooting, the engineering team should provide stdout logs of the pods (using the kubectl logs command). As a result, log retention is not very aggressive (see JSON file logging driver). For on-site debugging purposes, CX Contact logs can be collected and stored in Elasticsearch (for example, an EFK stack; see medium.com). A sample log-rotation configuration follows.
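
A minimal sketch of json-file log rotation in the Docker daemon configuration (/etc/docker/daemon.json); the size and file-count values are illustrative:

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "5"
  }
}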

Monitoring

CX Contact provides metrics that can be consumed by Prometheus and Grafana. It is recommended to have the Prometheus Operator (see github.com) installed in the cluster. The CX Contact Helm chart supports the creation of CustomResourceDefinitions that can be consumed by the Prometheus Operator.

For more information about monitoring, see Observability in Outbound (CX Contact).

CX Contact components operate with Genesys core services (v8.5 or v8.1) in the back end. All voice-processing components (Voice Microservice and shared services, such as GVP), and the GWS and Genesys Authentication services (mentioned below), must be deployed and running before deploying the CX Contact service. See Order of services deployment.

The following Genesys services and components are required:

  • GWS
  • Genesys Authentication Service
  • Tenant Service
  • Voice Microservice
  • Multi-tenant Configuration Server

Nexus is optional.

 

CX Contact does not support GDPR.
PEC-REP/Current/GCXIPEGuide/Planning PEC-REP Before you begin deploying GCXI Find out what to do before deploying Genesys Customer Experience Insights (GCXI). Reporting GCXIPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 GCXI can provide meaningful reports only if Genesys Info Mart and Reporting and Analytics Aggregates (RAA) are deployed and available. Deploy GCXI only after Genesys Info Mart and RAA. For more information about how to download the Helm charts from JFrog Edge, see the suite-level documentation: Downloading your Genesys Multicloud CX containers.

To learn what Helm chart version you must download for your release, see Helm charts and containers for Genesys Customer Experience Insights

GCXI Containers

The GCXI Helm chart uses the following containers:
  • gcxi - main GCXI container, runs as a StatefulSet. This container is roughly 12 GB; ensure that you have enough space to allocate it.
  • gcxi-control - supplementary container, used for initial installation of GCXI, and for clean-up.

GCXI Helm Chart

Download the latest YAML files from the repository, or examine the attached files: Sample GCXI yaml files

For more information about setting up your Genesys Multicloud CX private edition platform, including Kubernetes, Helm, and other prerequisites, see Software requirements.


GCXI installation requires a set of local Persistent Volumes (PVs). Kubernetes local volumes are directories on the host with specific properties: https://kubernetes.io/docs/concepts/storage/volumes/#local

Example usage: https://zhimin-wen.medium.com/local-volume-provision-242affd5efe2

Kubernetes provides a powerful volume plugin system, which enables Kubernetes workloads to use a wide variety of block and file storage to persist data.

You can use the GCXI Helm chart to set up your own PVs, or you can configure PV Dynamic Provisioning in your cluster so that Kubernetes automatically creates PVs.

Volumes Design

GCXI installation uses the following PVCs. For each volume, you can change the default mount point on the host using values; the local provisioner requires that the specified directory pre-exists on your host. The required node labels apply to the default local PV setup.

gcxi-backup
  • Mount path (inside container): /genesys/gcxi_shared/backup
  • Description: Backup files. Used by control container / jobs.
  • Access type: RWX
  • Approximate size: Depends on backup frequency. 5 GB+
  • Default mount point on host: /genesys/gcxi/backup (override using Values.gcxi.local.pv.backup.path)
  • Must be shared across nodes? Only in multiple concurrent installs scenarios.
  • Required node label: gcxi/local-pv-gcxi-backup = "true"

gcxi-log
  • Mount path (inside container): /mnt/log
  • Description: MSTR logs. Used by main container. The GCXI Helm chart allows log volumes of legacy hostPath type; this scenario is the default and is used in examples in this document.
  • Access type: RWX
  • Approximate size: Depends on rotation scheme. 5 GB+
  • Default mount point on host: /mnt/log/gcxi with subPathExpr: $(POD_NAME) (override using Values.gcxi.local.pv.log.path)
  • Must be shared across nodes? Not necessarily.
  • Required node label: gcxi/local-pv-gcxi-log = "true" (if you are using hostPath volumes for logs, you do not need the node label)

gcxi-postgres
  • Mount path (inside container): /var/lib/postgresql/data (if using Postgres in container), or disk space in the Postgres RDBMS
  • Description: Meta DB volume. Used by the Postgres container, if deployed.
  • Access type: RWO
  • Approximate size: Depends on usage. 10 GB+
  • Default mount point on host: /genesys/gcxi/shared (override using Values.gcxi.local.pv.postgres.path)
  • Must be shared across nodes? Yes, unless you tie the Postgres container to some particular node.
  • Required node label: gcxi/local-pv-postgres-data = "true"

gcxi-share
  • Mount path (inside container): /genesys/gcxi_share
  • Description: MSTR shared caches and cubes. Used by main container.
  • Access type: RWX
  • Approximate size: Depends on usage. 5 GB+
  • Default mount point on host: /genesys/gcxi/data with subPathExpr: $(POD_NAME) (override using Values.gcxi.local.pv.share.path)
  • Must be shared across nodes? Yes.
  • Required node label: gcxi/local-pv-gcxi-share = "true"
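If you set up your own PVs rather than using dynamic provisioning, a local PV for the gcxi-postgres volume might look like the following sketch; the capacity and reclaim policy are assumptions, while the node label and host path come from the table above:

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: gcxi-postgres-pv
  spec:
    capacity:
      storage: 10Gi                     # illustrative size
    accessModes:
      - ReadWriteOnce
    persistentVolumeReclaimPolicy: Retain
    local:
      path: /genesys/gcxi/shared        # default mount point on host; must pre-exist
    nodeAffinity:                       # local PVs require node affinity
      required:
        nodeSelectorTerms:
          - matchExpressions:
              - key: gcxi/local-pv-postgres-data
                operator: In
                values: ["true"]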

Preparing the environment

To prepare your environment, complete the following steps:

  1. To log in to the cluster, run the following command:
    • For AKS:
      ?'"`UNIQ--source-0000002D-QINU`"'?
    • For GKE:
      ?'"`UNIQ--source-0000002F-QINU`"'?
    • For OpenShift:
      ?'"`UNIQ--source-00000031-QINU`"'?
      • To check the cluster version on OpenShift deployments, run the following command:
        ?'"`UNIQ--source-00000033-QINU`"'?
  2. To create a new project, run the following command:
    GKE or AKS:
    1. Edit the create-gcxi-namespace.json, adding the following values:
      ?'"`UNIQ--source-00000035-QINU`"'?
    2. To apply the changes, run the following command:
      ?'"`UNIQ--source-00000037-QINU`"'?
    OpenShift: ?'"`UNIQ--source-00000039-QINU`"'?
  3. For GKE or AKS, to confirm namespace creation, run the following command:
    ?'"`UNIQ--source-0000003B-QINU`"'?
  4. Create a secret for docker-registry to pull images from the Genesys JFrog repository:
    ?'"`UNIQ--source-0000003D-QINU`"'?
  5. Create the file values-test.yaml, and populate it with appropriate override values. For a simple deployment using PostgreSQL inside the container, you must include PersistentVolumes named gcxi-log-pv, gcxi-backup-pv, gcxi-share-pv, and gcxi-postgres-pv. You must override GCXI_GIM_DB with the name of your Genesys Info Mart data source.
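The exact platform-specific commands are shown in the elided examples above; purely as an illustrative sketch (the cluster, namespace, and secret names below are placeholders and assumptions):

  # 1. Log in to the cluster (pick the command for your platform).
  az aks get-credentials --resource-group <rg> --name <aks-cluster>          # AKS
  gcloud container clusters get-credentials <gke-cluster> --region <region>  # GKE
  oc login --token=<token> --server=<api-url>                                # OpenShift

  # 2-3. Create and confirm the namespace (GKE or AKS).
  kubectl apply -f create-gcxi-namespace.json
  kubectl describe namespace gcxi          # assumed namespace name

  # 4. Create the pull secret for the Genesys JFrog registry.
  kubectl create secret docker-registry mycred \
      --docker-server=<jfrog-registry> \
      --docker-username=<user> \
      --docker-password=<password> \
      --namespace=gcxi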

Ingress

Ingress annotations are supported in the values.yaml file (see line 317). Genesys recommends session stickiness to improve user experience. ?'"`UNIQ--source-0000003F-QINU`"'?
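For example, with ingress-nginx, session stickiness is typically enabled with cookie-affinity annotations like the following sketch (the cookie name and timeout are illustrative):

  nginx.ingress.kubernetes.io/affinity: "cookie"
  nginx.ingress.kubernetes.io/session-cookie-name: "gcxi-session"   # illustrative name
  nginx.ingress.kubernetes.io/session-cookie-path: "/"
  nginx.ingress.kubernetes.io/session-cookie-max-age: "3600"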

Allowlisting is required for GCXI.

WAF Rules

WAF rules are defined in the variables.tf file (see line 245).

SMTP

The GCXI container and Helm chart support the environment variable EMAIL_SERVER.

TLS

The GCXI container does not serve TLS natively. Ensure that your environment is configured to use a proxy with HTTPS offload.

MicroStrategy Web is the user interface most often used for accessing, managing, and running the Genesys CX Insights reports. MicroStrategy Web certifies the latest versions, at the time of release, for the following web browsers:
  • Apple Safari
  • Google Chrome (Windows and iOS)
  • Microsoft Edge
  • Microsoft Internet Explorer (Versions 9 and 10 are supported, but not certified)
  • Mozilla Firefox

To view updated information about supported browsers, see the MicroStrategy ReadMe.

GCXI requires the following services:
  • Reporting and Analytics Aggregates (RAA) is required to aggregate Genesys Info Mart data.
  • Genesys Info Mart and/or Intelligent Workload Distribution (IWD) Data Mart. GCXI can run without these services, but cannot produce meaningful output without them.
  • GWS Auth/Environment service
  • Genesys Platform Authentication through Config Server (GAuth). Alternatively, GCXI includes a native internal login, which you can use to authorize users instead of GAuth. This document assumes you are using GAuth (the recommended solution), which gives Config Server users access to GCXI.
  • GWS client ID/client secret
GCXI can store Personally Identifiable Information (PII) in logs, history files, and in reports (in scenarios where customers include PII data in reports). Genesys recommends that you do not capture PII in reports. If you do capture PII, it is your responsibility to remove any such report data within 21 days or less, if required by General Data Protection Regulation (GDPR) standards.

For more information and relevant procedures, see Genesys CX Insights Support for GDPR and the suite-level documentation (link to come).

PEC-REP/Current/GCXIPEGuide/PlanningRAA PEC-REP Before you begin deploying RAA Find out what to do before deploying Reporting and Analytics Aggregates (RAA). Reporting GCXIPEGuide The RAA container works with the Genesys Info Mart database; deploy RAA only after you have deployed Genesys Info Mart.

The Genesys Info Mart database schema must correspond to a compatible Genesys Info Mart version. Execute the following command to discover the required Genesys Info Mart release: ?'"`UNIQ--source-00000020-QINU`"'? The RAA container runs RAA on Java 11 and is supplied with the following JDBC drivers:

  • MSSQL 9.2.1 JDBC Driver
  • Postgres 42.2.11 JDBC Driver
  • Oracle Database 21c (21.1) JDBC Driver

Genesys recommends that you verify whether the provided driver is compatible with your database. If it is not, you can override the JDBC driver by copying an updated driver file to the folder lib/jdbc_driver_<RDBMS> within the mounted config volume, or by creating an identically named link within that folder that points to a driver file stored on another volume (where <RDBMS> is the RDBMS used in your environment). This is possible because RAA is launched from a config folder that is mounted into the container.

To learn what Helm chart version you must download for your release, see Helm charts and containers for Genesys Customer Experience Insights.

You can download the gcxi Helm charts from the following repository:?'"`UNIQ--source-00000022-QINU`"'? For more information about downloading containers, see: Downloading your Genesys Multicloud CX containers.

For information about setting up your Genesys Multicloud CX private edition platform, including Kubernetes, Helm, and other prerequisites, see Software requirements.
This section describes the storage requirements for various volumes.

GIM secret volume

In scenarios where raa.env.GCXI_GIM_DB__JSON is not specified, RAA mounts this volume to provide GIM connection details.

  1. Declare GIM database connection details as a Kubernetes secret in gimsecret.yaml:
    ?'"`UNIQ--source-00000024-QINU`"'?
  2. Reference gimsecret.yaml in values.yaml:
    ?'"`UNIQ--source-00000026-QINU`"'?

Alternatively, you can mount the CSI secret using secretProviderClass, in values.yaml:?'"`UNIQ--source-00000028-QINU`"'?
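A minimal sketch of such a secret; the key names here are assumptions for illustration, and the actual keys RAA expects are in the elided example above:

  apiVersion: v1
  kind: Secret
  metadata:
    name: gim-secret            # illustrative name
  type: Opaque
  stringData:                   # key names below are assumptions
    host: gim-db.example.com
    port: "5432"
    dbname: gim
    username: gim_reader
    password: <password>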

Config volume

RAA mounts a config volume inside the container, as the folder /genesys/raa_config. The folder is treated as a work directory; RAA reads the following files from it during startup:

  • conf.xml, which contains application-level config settings.
  • custom *.ss files.
  • JDBC driver, from the folder lib/jdbc_driver_<RDBMS>.

RAA does not normally create any files in /genesys/raa_config at runtime, so the volume does not require a fast storage class. By default, the size limit is set to 50 MB. You can specify the storage class and size limit in values.yaml:?'"`UNIQ--source-0000002A-QINU`"'?

   ...

The RAA Helm chart creates a Persistent Volume Claim (PVC). You can define a Persistent Volume (PV) separately using the gcxi-raa chart, and bind such a volume to the PVC by specifying the volume name in the raa.volumes.config.pvc.volumeName value in values.yaml:?'"`UNIQ--source-0000002C-QINU`"'?
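Purely as a sketch, such overrides might look like the following in values.yaml; only raa.volumes.config.pvc.volumeName is named above, and the storageClassName and size keys are assumptions:

  raa:
    volumes:
      config:
        pvc:
          storageClassName: standard   # assumed key and value
          size: 50Mi                   # assumed key; default limit is 50 MB
          volumeName: raa-config-pv    # binds the PVC to a pre-created PV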

Health volume

RAA uses the Health volume to store:

  • Health files.
  • Prometheus file containing metrics for the most recent 2-3 scrape intervals.
  • Results of the most recent testRun init container execution.

By default, the volume is limited to 50 MB. RAA periodically interacts with the volume at runtime, so Genesys does not recommend a slow storage class for this volume. You can specify the storage class and size limit in values.yaml:?'"`UNIQ--source-0000002E-QINU`"'?The RAA Helm chart creates a PVC. You can define a PV separately using the gcxi-raa chart, and bind such a volume to the PVC by specifying the volume name in the raa.volumes.health.pvc.volumeName value in values.yaml:?'"`UNIQ--source-00000030-QINU`"'?

RAA interacts only with the Genesys Info Mart database.

RAA can expose Prometheus metrics by way of Netcat.

The aggregation pod has its own IP address and can run with one or two containers. For Helm tests, an additional IP address is required; each test pod runs one container.

Genesys recommends that RAA be located in the same region as the Genesys Info Mart database.

Secrets

RAA secret information is defined in the values.yaml file (line 89).

For information about configuring arbitrary UID, see Configure security.

Not applicable. RAA interacts with Genesys Info Mart database only. Not applicable.
PEC-REP/Current/GIMPEGuide/PlanningGCA PEC-REP Before you begin GCA deployment Find out what to do before deploying GIM Config Adapter (GCA). Reporting GIMPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 Instructions are provided for a single-tenant deployment.


GIM Config Adapter (GCA) and GCA monitoring are the only services that run in the GCA Docker container. The Helm charts included with the GCA release provision GCA and any Kubernetes infrastructure necessary for GCA to run.

See Helm charts and containers for Genesys Info Mart for the Helm chart versions you must download for your release.

For information about how to download the Helm charts, see Downloading your Genesys Multicloud CX containers.

For information about setting up your Genesys Multicloud CX private edition platform, see Software requirements.

The following table lists the third-party prerequisites for GCA.

GCA uses object storage to store the GCA snapshot during processing. Like GSP, GCA supports using S3-compatible storage provided by OpenShift and Google Cloud Platform (GCP), and Genesys expects you to use the same storage account for GSP and GCA. If you want to use separate storage for GCA, follow the Configure S3-compatible storage instructions for GSP to create similar S3-compatible storage for GCA. No special network requirements.
  • Voice Tenant Service, which enables GCA to access the Configuration Server database. You must deploy the Voice Tenant Service before you deploy GCA.
    • Ensure that an appropriate user account is available for GCA to use to access the Configuration Database. The GCA user account requires at least read permissions.
    • You must also have your Tenant ID information available.
  • There are no strict dependencies between the Genesys Info Mart services, but the logic of your particular pipeline might require Genesys Info Mart services to be deployed in a particular order. Depending on the order of deployment, there might be temporary data inconsistencies until all the Genesys Info Mart services are operational. For example, GSP looks for the GCA snapshot when it starts; if GCA has not yet been deployed, GSP will encounter unknown configuration objects and resources until the snapshot becomes available.

For detailed information about the correct order of services deployment, see Order of services deployment.

Not applicable. GCA does not store information beyond an ephemeral snapshot. f05492f5-52ed-490a-b0d5-c318a4a7272b
PEC-REP/Current/GIMPEGuide/PlanningGIM PEC-REP Before you begin GIM deployment Find out what to do before deploying GIM. Reporting GIMPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 Instructions are provided for a single-tenant deployment. To configure and deploy Genesys Info Mart, you must obtain the Helm charts included with the Info Mart release. These Helm charts provision Info Mart plus any Kubernetes infrastructure Info Mart requires to run.

Genesys Info Mart and GIM monitoring are the only services that run in the Info Mart container.

For the correct Helm chart version for your release, see Helm charts and containers for Genesys Info Mart. For information on downloading from the image repository, see Downloading your Genesys Multicloud CX containers.

For information about setting up your Genesys Multicloud CX private edition platform, see Software requirements.

The following table lists the third-party prerequisites for GIM.

GIM uses PostgreSQL for the Info Mart database and, optionally, uses object storage to store exported Info Mart data.

PostgreSQL — the Info Mart database

The Info Mart database stores data about agent and interaction activity, Outbound Contact campaigns, and usage information about other services in your contact center. A subset of tables and views created, maintained, and populated by Reporting and Analytics Aggregates (RAA) provides the aggregated data on which Genesys CX Insights (GCXI) reports are based.

A sizing calculator for Genesys Multicloud CX private edition is under development. In the meantime, the interactive tool available for on-premises deployments might help you estimate the size of your Info Mart database; see the Genesys Info Mart 8.5 Database Size Estimator.

Genesys recommends a minimum of 3 IOPS per GB.

For information about creating the Info Mart database, see Before you begin GIM deployment.


Create the Info Mart database

Use any database management tool to create the Info Mart ETL database and user, as in the sketch after these steps.

  1. Create the database.
  2. Create a user for all of the Genesys Info Mart services to use. Grant that user full permissions for the database.
    This user’s account is used by Info Mart jobs to access the Info Mart database schema.
    The name of the Info Mart schema is public.
    Important
    Make a note of the database and user details. You need this information when you configure GIM and GCA Helm chart override values.
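A minimal sketch using PostgreSQL and psql; the database, user, and password names are placeholders:

  psql -h <db-host> -U postgres <<'SQL'
  CREATE DATABASE gim;
  CREATE USER gim_user WITH PASSWORD '<password>';
  GRANT ALL PRIVILEGES ON DATABASE gim TO gim_user;
  SQL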

Object storage — Data Export packages

The GIM Data Export feature enables you to export data from the GIM database so it is available for other purposes. Unless you elect to store your exported data in a local directory, GIM data is exported to an object store. GIM supports export to either Azure Blob Storage or the S3-compatible storage provided by Google Cloud Platform (GCP).

If you want to use S3-compatible storage, follow the Before you begin GSP deployment instructions for GSP to create the S3-compatible storage for GIM.

Important
GSP and GCA use object storage to store data during processing. For safety and security reasons, Genesys strongly recommends that you use a dedicated object storage account for the GIM persistent storage, and do not share the storage account created for GSP and GCA. GSP and GCA can share an account, and this is the expected deployment.

If you are not using object storage, you can configure GIM to store exported data in a local directory. In this case, you do not need to create the object storage.

No special network requirements.
  • You must have your Tenant ID information available.
  • There are no strict dependencies among the Genesys Info Mart services, but the logic of your particular pipeline might require them to be deployed in a particular order. Depending on the order of deployment, there might be temporary data inconsistencies until all the Genesys Info Mart services are operational. For example, GCA might try to access the Info Mart database to synchronize configuration data but if GIM has not yet been deployed, the Info Mart database will be empty.

For detailed information about the correct order of services deployment, see Order of services deployment.

GIM provides full support for you to comply with Right of Access ("export") or Right of Erasure ("forget") requests from consumers and employees with respect to personally identifiable information (PII) in the Info Mart database.

Genesys Info Mart is designed to comply with General Data Protection Regulation (GDPR) policies. Support for GDPR includes the following:

  • The way that Genesys Info Mart processes customer files complies with Right of Access ("export") and Right of Erasure ("forget") requirements.
  • Genesys Info Mart supports configuring data retention policies.
  • GDPR processing is fully audited.

For more information about how Genesys Info Mart implements support for GDPR requests, see the GDPR page in this guide (PEC-REP/Current/GIMPEGuide/GDPR).

For details about the Info Mart database tables and columns that potentially contain PII, see the description of the CTL_GDPR_HISTORY table in the Genesys Info Mart on-premises documentation.

e65e00cb-c1c8-4fb8-9614-80ac07c3a4e3
PEC-REP/Current/GIMPEGuide/PlanningGSP PEC-REP Before you begin GSP deployment Find out what to do before deploying GIM Stream Processor (GSP). Reporting GIMPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 To configure and deploy the GIM Stream Processor (GSP), you must obtain the Helm charts included with the GSP release. These Helm charts provision GSP plus any Kubernetes infrastructure GSP requires to run.

GSP is the only service that runs in the GSP container.

For information about how to download the Helm charts, see Downloading your Genesys Multicloud CX containers. To find the correct Helm chart version for your release, see Helm charts and containers for Genesys Info Mart.

For information about setting up your Genesys Multicloud CX private edition platform, see Software requirements.

The following table lists the third-party prerequisites for GSP.

GSP maintains internal “state,” such as GSP checkpoints, savepoints, and high availability data, which must be persisted (stored). When it starts, GSP reads its state from storage, which allows it to continue processing data without reading data from Kafka topics from the start. GSP periodically updates its state as it processes incoming data.

GSP uses S3-compatible storage to store persisted data. You must provision this S3-compatible storage in your environment.

By default, GSP is configured to use Azure Blob Storage, but you can also use S3-compatible storage provided by other cloud platforms. Genesys expects you to use the same storage account for GSP and GCA.

Create S3-compatible storage

Genesys Info Mart has no special requirements for the storage buckets you create. Follow the instructions provided by the storage service provider of your choice to create the S3-compatible storage.

  • For GKE, see the Google Cloud Storage documentation about creating bucket storage.
  • For AKS, see Azure Storage documentation about creating blob storage.

To enable the S3-compatible storage object, you populate Helm chart override values for the service. To do this, you need to know details such as the endpoint information, access key, and secret.

Important
Note and securely store the bucket details, particularly the access key and secret, when you create the storage bucket. Depending on the cloud storage service you choose, you may not be able to recover this information subsequently.
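Purely as an illustration of the kind of details you collect when populating the overrides (the actual override keys are elided above, and the key names in this sketch are assumptions):

  # Assumed value names, for illustration only.
  storage:
    endpoint: https://<account>.blob.core.windows.net
    accessKey: <access-key>
    secretKey: <secret>
    bucket: gsp-state            # illustrative bucket name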
No special network requirements. Network bandwidth must be sufficient to handle the volume of data to be transferred into and out of Kafka. There are no strict dependencies between the Genesys Info Mart services, but the logic of your particular pipeline might require Genesys Info Mart services to be deployed in a particular order. Depending on the order of deployment, there might be temporary data inconsistencies until all the Genesys Info Mart services are operational. For example, GSP looks for the GCA snapshot when it starts; if GCA has not yet been deployed, GSP will encounter unknown configuration objects and resources until the snapshot becomes available.

There are other private edition services you must deploy before Genesys Info Mart. For detailed information about the recommended order of services deployment, see Order of services deployment.

Not applicable, provided your Kafka retention policies have not been set to more than 30 days. GSP does not store information beyond the ephemeral data used during processing.

GSP Kafka topics

For GSP, topics in Kafka represent various data domains and GSP expects certain topics to be defined.

  • If a topic does not exist, GSP will never receive data for that domain.
  • If a customized topic has been created but not defined in GSP configuration, data from that domain will be discarded.

Unless Kafka has been configured to auto-create topics, you must manually ensure that all of the Kafka topics GSP requires are created in Kafka configuration.

The following table shows the topic names GSP expects to be available. In this table, an entry in the Customizable GSP parameter column indicates support for customizing that topic name.

Topic name Customizable GSP parameter Description
GSP consumes data from the following topics:
designer-sdr designer Name of the input topic with Session Detail Record (SDR) data
digital-agentstate digitalAgentStates Name of the input topic with digital agent states
digital-itx digitalItx Name of the input topic with digital interactions
gca-cfg cfg Name of the input topic with configuration data
voice-agentstate Name of the input topic with voice agent states
voice-callthread Name of the input topic with voice interactions
voice-outbound Name of the input topic with outbound (CX Contact) activity associated with either voice or digital interactions
GSP produces data into the following topics:
gsp-cfg cfg Name of the output topic for configuration reporting
gsp-custom custom Name of the output topic for custom reporting
gsp-ixn interactions Name of the output topic for interactions
gsp-mn mediaNeutral Name of the output topic for media-neutral agent states
gsp-outbound outbound Name of the output topic for outbound (CX Contact) activity
gsp-sm agentStates Name of the output topic for agent states
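If Kafka is not configured to auto-create topics, you can create the required topics manually. A sketch with the standard Kafka CLI follows; the partition and replication counts are illustrative and should match your cluster's sizing:

  kafka-topics.sh --bootstrap-server <kafka-host>:9092 --create \
      --topic voice-agentstate --partitions 4 --replication-factor 3
  # Repeat for the remaining input and output topics listed above,
  # for example: designer-sdr, gca-cfg, gsp-ixn, gsp-sm, and so on.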
c39fe496-c79e-4846-b451-1bc8bedb126b
PEC-REP/Current/PulsePEGuide/Planning PEC-REP Before you begin Find out what to do before deploying Genesys Pulse. Reporting PulsePEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 There are no known limitations.


For more information about how to download the Helm charts in JFrog Edge, see the suite-level documentation: Downloading your Genesys Multicloud CX containers.

To learn what Helm chart version you must download for your release, see Helm charts and containers for Genesys Pulse.

Genesys Pulse Containers

Container Description Docker Path
collector Genesys Pulse Collector <docker>/pulse/collector:<image-version>
cs_proxy Configuration Server Proxy <docker>/pulse/cs_proxy:<image-version>
init Init container, used for DB initialization <docker>/pulse/init:<image-version>
lds Load Distribution Server (LDS) <docker>/pulse/lds:<image-version>
monitor_dcu_push_agent Provides monitoring data from Stat Server and Genesys Pulse Collector <docker>/pulse/monitor_dcu_push_agent:<image-version>
monitor_lds_push_agent Provides monitoring data from LDS <docker>/pulse/monitor_lds_push_agent:<image-version>
pulse Genesys Pulse Backend <docker>/pulse/pulse:<image-version>
ss Stat Server <docker>/pulse/ss:<image-version>
userpermissions User Permissions service <docker>/pulse/userpermissions:<image-version>

Genesys Pulse Helm Charts

Helm Chart Containers Shared Helm Path
Init init yes <helm>/init-<chart-version>.tgz
Pulse pulse yes <helm>/pulse-<chart-version>.tgz
LDS cs_proxy, lds, monitor_lds_push_agent <helm>/lds-<chart-version>.tgz
DCU cs_proxy, ss, collector, monitor_dcu_push_agent <helm>/dcu-<chart-version>.tgz
Permissions cs_proxy, userpermissions <helm>/permissions-<chart-version>.tgz
Init Tenant init <helm>/init-tenant-<chart-version>.tgz
Monitor - yes <helm>/monitor-<chart-version>.tgz
The appropriate CLI must be installed.

For more information about setting up your Genesys Multicloud CX private edition platform, see Software requirements.

Logs Volume

Persistent Volume Size Type IOPS POD Containers Critical Backup needed
pulse-dcu-logs 10Gi RW high DCU csproxy, collector, statserver Y Y
pulse-lds-logs 10Gi RW high lds csproxy, lds Y Y
pulse-permissions-logs 10Gi RW high permissions csproxy, permissions Y Y
pulse-logs 10Gi RW high pulse pulse Y Y

The logs volume stores log files:

  • To use a persistent volume, set log.volumeType to pvc (see the sketch after this list).
  • To use local storage, set log.volumeType to hostpath.
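A minimal sketch of the corresponding override; the log.volumeType setting is named above, while the surrounding structure of the values file is an assumption:

  log:
    volumeType: pvc        # use "hostpath" for local storage instead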

Genesys Pulse Collector Health Volume

Local Volume POD Containers
collector-health dcu collector, monitor-sidecar

The Genesys Pulse Collector health volume provides non-persistent storage for storing Genesys Pulse Collector health state files used for monitoring.

Stat Server Backup Volume

Persistent Volume Size Type IOPS POD Containers Critical Backup needed
statserver-backup 1Gi RWO medium dcu statserver N N

The Stat Server backup volume provides disk space for Stat Server's state backup; it stores the server state between restarts of the container.

No special requirements. Ensure that the following services are deployed and running before you deploy Genesys Pulse:


  • Genesys Authentication:
  • Genesys Web Services and Applications
  • Agent Setup
  • Tenant Service:
    • The Tenant UUID (v4) is provisioned, for example: "9350e2fc-a1dd-4c65-8d40-1f75a2e080dd"
    • The Tenant service is available as host. For example, in GKE, it is: "tenant-<tenant-uuid>.voice" port: 8888
  • Voice Microservice:
    • The Voice service is available as host. For example, in GKE, it is: "tenant-<tenant-uuid>.voice" port: 8000
Important
All services listed must be accessible from within the cluster where Genesys Pulse will be deployed.

For more information, see Order of services deployment.

Genesys Pulse supports the General Data Protection Regulation (GDPR). See Genesys Pulse Support for GDPR for details.
PrivateEdition/Current/TenantPEGuide/Planning PrivateEdition Before you begin Find out what to do before deploying the Tenant Service. Genesys Multicloud CX Private Edition TenantPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 For information about how to download the Helm charts, see Downloading your Genesys Multicloud CX containers.

See Helm charts and containers for Voice Microservices for the Helm chart version you must download for your release.

Containers

The Tenant Service has the following containers:

  • Core tenant service container
  • Database initialization and upgrade container
  • Role and privileges initialization and upgrade container
  • Solution-specific: Pulse provisioning container

Helm charts

  • Tenant deployment
  • Tenant infrastructure
For information about setting up your Genesys Multicloud CX private edition platform, see Software Requirements.

The following table lists the third-party prerequisites for the Tenant Service.

For information about storage requirements for Voice Microservices, including the Tenant Service, see Storage requirements in the Voice Microservices Private Edition Guide. For general network requirements, review the information on the suite-level Network settings page. For detailed information about the correct order of services deployment, see Order of services deployment.

The following prerequisites are required before deploying the Tenant Service:

  • Voice Platform and all its external dependencies must be deployed before proceeding with the Tenant Service deployment.
  • The PostgreSQL 10 database management system must be deployed, and a database must be allocated as either a primary or a replica. For more information about the sample deployment of a standalone DBMS, see Third-party prerequisites.

In addition, if you expect to use Agent Setup or Workspace Web Edition after the tenant is deployed, Genesys recommends that you deploy GWS Authentication Service before proceeding with the Tenant Service deployment.

Specific dependencies

The Tenant Service is dependent on the following platform endpoints:

  • GWS environment API
  • Interaction service core
  • Interaction service vq

The Tenant Service is dependent on the following service component endpoints:

  • Voice Front End Service
  • Voice Redis (RQ) Service
  • Voice Config Service
Not applicable. 5a34ac72-3fae-4368-afd8-5b899e1c52ba
STRMS/Current/STRMSPEGuide/Planning STRMS Before you begin Find out what to do before deploying Event Stream. STRMSPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 See Helm charts and containers for Event Stream for the Helm chart version you must download for your release. For information about how to download the Helm charts, see Downloading your Genesys Multicloud CX containers.
Event Stream has dependencies on several other Genesys services. Genesys recommends that you provision and configure Event Stream after these services have been set up, so that if any issues arise during the provisioning of Event Stream, you can be reasonably sure that the fault lies in how Event Stream is provisioned rather than in a downstream service. For a look at the high-level deployment order, see Order of services deployment.
TLM/Current/TLMPEGuide/Planning TLM Before you begin Find out what to do before deploying Telemetry Service. Telemetry Service TLMPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 NA Telemetry Service is composed of:
  • 1 Docker Container: tlm/telemetry-service:version
  • 1 Helm Chart: telemetry-service_version.tgz

For additional information about overriding Helm chart values, see Overriding Helm Chart values in the Genesys Multicloud CX Private Edition Guide.

For information about downloading Helm charts from JFrog Edge, see Downloading your Genesys Multicloud CX containers in the Setting up Genesys Multicloud CX Private Edition guide.

NA
NA For any kind of Telemetry deployment, the following service must be deployed and running before deploying the Telemetry service:

For a look at the high-level deployment order, see Order of services deployment.

17df197d-45b4-4d49-b269-f44d5bdfe5a1
UCS/Current/UCSPEGuide/Planning UCS Before you begin Find out what to do before deploying Universal Contact Service (UCS). Universal Contact Service UCSPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 Currently, UCS:
  • supports a single-region model of deployment only
  • does not support SSL communication with Elasticsearch
  • requires a dedicated PostgreSQL deployment per customer
Download the UCS related Docker containers and Helm charts from the JFrog repository.

See Helm charts and containers for Universal Contact Service for the Helm chart and container versions you must download for your release.

For more information on JFrog, refer to the Downloading your Genesys Multicloud CX containers topic in the Setting up Genesys Multicloud CX private edition document.

  • Kubernetes 1.17+
  • Helm 3.0
All data is stored in PostgreSQL, Elasticsearch, and the Nexus Upload Service, which are external to UCS. UCS requires the following Genesys components:
  • Genesys Authentication Service
  • GWS Environment Service
As part of the GDPR compliance procedure, the customer sends a request to Care providing information about the end user. Care then opens a ticket for the Engineering team to follow up on the request.

The Engineering team processes the request as follows:

GDPR request: Export Data

  • Request to UCS to get the contact by ID: identify the contact (if there is an email address or phone number), or getContact (if there is a direct contact ID).
  • Request to UCS-X to get the list of interactions for the contact found.
  • Perform a CSV export and attach the resulting file to the ticket.

GDPR request: Forget me

  • Request to UCS-X to get the contact by ID: identify the contact (if there is an email address or phone number), or getContact (if there is a direct contact ID).
  • Request to UCS-X to get the list of interactions for the contact found.
  • Delete all found interactions.
  • Re-check that all interactions for the contact were removed.
  • Delete the contact.
  • Re-check that the contact was removed.
  • Update the ticket.
VM/Current/VMPEGuide/Planning VM Before you begin Find out what to do before deploying Voice Microservices. Voice Microservices VMPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 For information about how to download the Helm charts, see Downloading your Genesys Multicloud CX containers.

The following services are included with Voice Microservices:

  • Voice Agent State Service
  • Voice Config Service
  • Voice Dial Plan Service
  • Voice Front End Service
  • Voice Orchestration Service
  • Voice Registrar Service
  • Voice Call State Service
  • Voice RQ Service
  • Voice SIP Cluster Service
  • Voice SIP Proxy Service
  • Voice Voicemail Service
  • Voice Tenant Service

See Helm charts and containers for Voice Microservices for the Helm chart version you must download for your release.

For information about the Voicemail Service, see Before you begin in the Configure and deploy Voicemail section of this guide.

For information about the Tenant service, also included with Voice Microservices, see the Tenant Service Private Edition Guide.

For information about setting up your Genesys Multicloud CX private edition platform, see Software Requirements.

The following table lists the third-party prerequisites for Voice Microservices.

Voice Tenant Service
Persistent Volume Size Type IOPS Functionality Container Critical Backup needed
log-pvc 50Gi RWO medium storing log files tenant Y Y

SIP Cluster Service

Persistent Volume Size Type IOPS Functionality Container Critical Backup needed
log-pvc 50Gi RWO medium storing log files voice-sip Y Y

Voicemail Service

Persistent Volume Type IOPS Functionality Container Critical Backup needed
Azure blob storage v2 RWM medium storing voicemailbox settings and voicemail messages tenant Y Y
AWS S3 Bucket RWM medium storing voicemailbox settings and voicemail messages tenant Y Y
File System RWM medium storing voicemailbox settings and voicemail messages tenant Y Y

For more information, see Storage requirements in the Configure and deploy Voicemail section of this guide.

For general network requirements, review the information on the suite-level Network settings page.


The following network considerations apply to the Voice Voicemail Service and the Voice Tenant Service:

Cross-region bandwidth
  • Voicemail Service: Connects to the Voicemail service in other regions to push MWI notifications.
  • Tenant Service: Needs to connect to the Tenant Service in other regions. Bandwidth for the Redis cross-region connection.

External connections
  • Voicemail Service: Redis, Storage Account.
  • Tenant Service: Redis and Kafka support secured (TLS) connections. Postgres supports secured (TLS, simple) connections between the Tenant Service and the Postgres server.

Pod Security Policy
  • Both services: All containers run as the Genesys user (500), a non-root user.

SMTP Settings
  • Voicemail Service: SMTP enabled.
  • Tenant Service: Not applicable.

TLS/SSL Certificates configurations
  • Not applicable to either service.

Ingress
  • Not applicable to either service.

Subnet sizing
  • Voicemail Service: Network bandwidth must be sufficient to handle the volume of data to be transferred into and out of Kafka and Redis.
  • Tenant Service: Subnet sizing to accommodate N+1 Tenant pods.

CNI for Direct Pod Routing
  • Not applicable to either service.
For detailed information about the correct order of services deployment, see Order of services deployment.

Multi-Tenant Inbound Voice: Voicemail Service

Customer data that can identify an individual, either on its own or in combination with other held data, is considered Personally Identifiable Information (PII). Customer name, phone number, email address, bank details, and IP address are some examples of PII.

According to EU GDPR:

  • When a customer requests to access personal data that is available with the contact center, the PII associated with the client is exported from the database in client-understandable format. You use the Export Me request to do this.
  • When a customer requests to delete personal data, the PII associated with that client is deleted from the database within 30 days. However, the Voicemail service is designed in a way that the Customer PII data is deleted in one day using the Forget Me request.

Both Export Me and Forget Me requests depend only on Caller ID/ANI input from the customer. The following PII data is deleted or exported during the Forget Me or Export Me request process, respectively:

  • Voicemail Message
  • Caller ID/ANI

The GDPR feature is supported only when StorageInterface is configured as BlobStorage, and the Voicemail service is configured with an Azure storage account data store.

Adding caller_id tag during voicemail deposit

The index tag caller_id is included in voicemail messages and metadata blob files during voicemail deposit. Using the index tag, you can easily filter messages for Forget Me or Export Me requests instead of searching every mailbox.

GDPR multi-region support

In the Voicemail service, all voicemail metadata files are stored in the master region, and voicemail messages are deposited and stored in their respective regions. Therefore, all the regions of a tenant must be connected to perform Forget Me, Undo Forget Me, or Export Me processes for GDPR inputs.

To provide multi-region support for GDPR, follow these steps while performing GDPR operation:

  1. Get the list of regions of the tenant.
  2. Ensure that the storage accounts in all regions are up. If any storage account is down, you cannot perform the GDPR operation.
  3. Perform the GDPR operation on the master region files first.
  4. Then, perform the GDPR operation on the files in all non-master regions.
WebRTC/Current/WebRTCPEGuide/Planning WebRTC Before you begin Find out what to do before deploying WebRTC. WebRTC WebRTCPEGuide bf21dc7c-597d-4bbe-8df2-a2a64bd3f167 All prerequisites described under Third-party prerequisites, Genesys dependencies, and Secrets have been met. Download the Helm charts from the webrtc folder in the JFrog repository. See Helm charts and containers for WebRTC for the Helm chart version you must download for your release.

For information about how to download the Helm charts in Jfrog Edge, see the suite-level documentation: Downloading your Genesys Multicloud CX containers

WebRTC contains the following containers:

Artifact Type Functionality JFrog Containers and Helm charts
webrtc webrtc gateway container Handles agents’ sessions, signalling, and media traffic. It also performs media transcoding. https://<jfrog artifactory>/<docker location>/webrtc/webrtc/
coturn coturn container Utilizes TURN functionality https://<jfrog artifactory>/<docker location>/webrtc/coturn/
webrtc-service Helm chart https://<jfrog artifactory>/<helm location>/ webrtc-service-<version_number>.tgz
For information about setting up your Genesys Multicloud CX private edition platform, see Software requirements.

The following are the third-party prerequisites for WebRTC:

WebRTC does not require persistent storage for any purposes except Gateway and CoTurn logs. The following table describes the storage requirements:
Persistent Volume Size Type IOPS Functionality Container Critical Backup needed
webrtc-gateway-log-volume 50Gi RW medium storing gateway log files webrtc Y Y
webrtc-coturn-log-volume 50Gi RW medium storing coturn log files coturn N Y

A Persistent Volume and Persistent Volume Claim will be created if they are configured. Their size is optional and should be adjusted according to the log rates described below:

Gateway:

  • Idle: 0.5 MB/hour per agent
  • Active call: around 0.2 MB per call per agent

Example: For 24 full hours of work, where each agent's call rate is constant at around 7 to 10 calls per hour, 1000 agents require approximately 500 GB, with around 20 GB consumed per hour.

CoTurn:

For 1000 connected agents, the log rate is approximately 3.6 GB/hour; it scales linearly with the number of agents and stays constant whether or not calls are in progress.

Ingress

WebRTC has the following ingress requirements:

  • Persistent session stickiness based on a cookie is mandatory. The stickiness cookie should contain the following attributes (see the sketch after this list):
    • SameSite=None
    • Secure
    • Path=/
  • No specific headers requirements
  • Whitelisting (optional)
  • TLS is mandatory
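For example, with ingress-nginx, cookie-based stickiness with these attributes might be configured with annotations like the following sketch; the cookie name is illustrative, and you should verify the exact annotation set against your controller version:

  nginx.ingress.kubernetes.io/affinity: "cookie"
  nginx.ingress.kubernetes.io/session-cookie-name: "webrtc-session"   # illustrative name
  nginx.ingress.kubernetes.io/session-cookie-path: "/"
  nginx.ingress.kubernetes.io/session-cookie-samesite: "None"         # pairs with a Secure cookie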

Secrets

WebRTC supports three types of secrets: CSI driver, Kubernetes secrets, and environment variables.

Important
GWS Secret for WebRTC should contain the following grants:

?'"`UNIQ--source-00000024-QINU`"'?

For GWS secrets, CSI or Kubernetes secret should contain gwsClient and gwsSecret key-values.

GWS secret for WebRTC must be created in the WebRTC namespace using the following specification as an example:?'"`UNIQ--source-00000026-QINU`"'?

ConfigMaps

Not Applicable

WAF Rules

The following Web Application Firewall (WAF) rules should be disabled for WebRTC:

WAF Rule Group Rule IDs
REQUEST-920-PROTOCOL-ENFORCEMENT 920300, 920440
REQUEST-913-SCANNER-DETECTION 913100, 913101
REQUEST-921-PROTOCOL-ATTACK 921150
REQUEST-942-APPLICATION-ATTACK-SQLI 942430


Pod Security Policy

Not applicable

Auto-scaling

WebRTC and CoTurn auto-scaling is performed by the KEDA operator. The auto-scaling feature requires Prometheus metrics. To learn more about KEDA, visit https://keda.sh/docs/2.0/concepts/.

Use the following option in YAML values file to enable the deployment of auto-scaling objects:

?'"`UNIQ--source-00000028-QINU`"'?

You can configure the Polling interval and maximum number of replicas separately for Gateway pods and CoTurn pods using the following options:

?'"`UNIQ--source-0000002A-QINU`"'?

  • Gateway Pod Scaling
    • Sign-ins

?'"`UNIQ--source-0000002C-QINU`"'?

  • CPU based scaling

WebRTC auto-scaling is also performed based on CPU and memory usage. The following YAML shows how CPU and memory limits should be configured for Gateway pods in the YAML values file:

?'"`UNIQ--source-0000002E-QINU`"'?

  • CoTurn Pod scaling

Auto-scaling of CoTurn is performed based on CPU and memory usage only. The following YAML shows how CPU and memory limits should be configured for CoTurn pods in the YAML values file:

?'"`UNIQ--source-00000030-QINU`"'?

SMTP settings

Not applicable

WebRTC has dependencies on several other Genesys services; Genesys recommends that you provision and configure WebRTC after these services have been set up.
Service Functionality
GWS Used to read environment and tenant configuration
GAuth Used for WebRTC service and Agents authentication
GVP Used for voice calls - conferences, recording, and so on
Voice microservice Used to handle voice calls
Tenant microservice Used to store tenant configuration

For detailed information about the correct order of services deployment, see Order of services deployment.

Not applicable d703e174-b039-43c9-8859-e25b3a7feb22