- 1 Platform and network
- 2 Supported services
- 3 Software requirements
- 4 Kubernetes clusters
- 5 Security
- 6 High Availability
- 7 Data stores
- 8 File and disk storage
- 9 Email
- 10 Content delivery networks (CDNs)
- 11 Monitoring
- 12 Integrations
Understand the architecture and components of Genesys Engage cloud private edition; the supported third-party back-end services; and how they all work together in both single- and multi-region deployments.
As mentioned in the About page, Genesys Engage cloud private edition gives you the flexibility to deploy your contact center on a public cloud or a private one—and even on bare metal servers that reside within your corporate data center.
Platform and network
The basic architecture for private edition involves three levels:
- A unit consists of all of the Genesys Engage and third-party services and resources required to create a single instance of Genesys Engage cloud private edition. This instance is hosted within a single region or data center.
- A unit group brings together a network of units to create a global platform for tenants that covers all geographical regions.
- A unit pair consists of two units that are part of a unit group and that are both located within a specific geographical region.
- Region—A set of isolated and physically separated Availability Zones deployed within a latency-defined perimeter and connected through a dedicated low-latency network within a specific geographical area. Note: Regions as defined here are a feature of the cloud deployment architecture and are not supported in the private data center deployment architecture, which does not use Availability Zones.
- Data center—A building, a dedicated space within a building, or a group of buildings used to house computer systems and associated components, such as telecommunications and storage systems.
- Availability Zone (AZ)—A discrete location within a region that is designed to operate independently from the other Availability Zones in that region. Because of this separation, any given Availability Zone is unlikely to be affected by failures in other Availability Zones. Note: Availability Zones are a feature of the cloud deployment architecture and are not supported in the private data center deployment architecture.
- Tenant—A business entity that has common goals and procedures, and occupies part or all of a contact center. Tenants that share a contact center could be different businesses, or different divisions within the same business.
- Multi-tenancy—The partitioning capacity for a platform to host and manage tenants. Each tenant is configured individually and separately.
The following sections provide a more in-depth description of the characteristics of the three levels of the private edition architecture.
A unit can either be dedicated to a specific tenant or used for multiple tenants. The unit and its services and resources can be distributed across Availability Zones if the environment has them.
A unit is composed of the following:
- Network access services (load balancers, firewalls, SBC, and so on)
- A Kubernetes cluster with all of the private edition service pods
- Third-party services (Postgres, Redis, Consul, Kafka, and so on)
There are two main types of units:
- A primary unit centralizes certain services used by all regions for a specific tenant, such as Designer application creation, historical reporting, or UI. There is only one primary unit in a unit group. In the current architecture, digital channels are only supported by the primary unit.
- A secondary unit only supports voice-related services at this time.
Unit pairs provide the following capabilities:
- Redundancy within a geographical region. This geo-redundancy is built into the private edition services.
- Tenants can be distributed across the two units to help reduce the blast radius in case of a major failure.
A unit pair can consist of a primary unit and a secondary unit, or of two secondary units. Note: Unit pairs are only supported by voice-related services at this time.
Unit groups interconnect their constituent units by means of a network peering solution, and all inter-region traffic uses either your network connectivity or the network connectivity of your cloud provider. Each group contains a single primary unit, which hosts all of the private edition services; the secondary units host only a subset of private edition services. A unit group must contain at least one unit pair. If you add a new geographical region, you must add a unit pair to the unit group in that geographical region.
Genesys Engage cloud private edition allows you to set up a highly available and resilient infrastructure whether you are using a cloud deployment or hosting it in a private data center, as shown in the following diagrams.
Multiple regions and data centers
The platform supports deployment across multiple regions and data centers. This capability provides extra availability for the voice-related services, along with a global view of the following:
- Call routing and processing—The ability to distribute call processing across regions, and to centrally create and distribute Designer applications across regions.
- Agent availability—The ability to have a call processed by agents from any region.
- Data sovereignty—The ability to contain the data (recordings, and so on) and processing of the call within the region in which the call originated.
- Reporting (real-time and historical)—The ability to provide a global view across all regions.
- Tenant provisioning—The ability to centrally provision the contact center across multiple regions.
- Callback—The ability to use a central service to provide in-queue callback across regions.
Subnets are your responsibility: you must create a subnet for the Kubernetes cluster to accommodate the Genesys Engage services.
For information about network access, see Networking Overview.
Genesys Engage cloud private edition supports the services listed on the Genesys Engage services list.
Genesys Engage cloud private edition requires the software and versions listed on the software requirements page. Note that you are responsible for installing and deploying the appropriate third-party software in a way that best suits your requirements and the requirements of the Genesys Engage services.
All Genesys Engage services must run in Kubernetes. Required third-party services can be managed either outside Kubernetes or within Kubernetes. Kubernetes is responsible for managing the running services, including tasks such as monitoring and restarting them.
Private edition does not currently support multiple instances of the platform in a single Kubernetes cluster. In other words, if you want to set up separate environments for testing, staging, production, and so on, you must deploy the private edition instances for the various environments in separate clusters.
Genesys currently recommends that you use node pools to deploy Kubernetes for the Genesys Engage services that are hosted within each unit.
- Node pools—Genesys recommends that you use the following node pools. Our Helm charts include overrides for the nodeSelector attribute. Use these overrides to assign a service to the appropriate node pool.
- General node pool—This node pool is where most of the Genesys Engage services are deployed. This type of pool uses general-purpose compute instances with Premium SSD, which provide a 4:1 ratio of memory GB to vCPU.
- Real-time node pool—This node pool is for stateful voice services that require a drain time of 60 minutes or longer to maintain active voice sessions. It uses general-purpose nodes with Premium SSD.
- Third-party service node pool (optional)—This node pool is only needed if you are going to deploy data stores and other third-party services in Kubernetes, such as Redis, Kafka, Postgres or Elasticsearch. These services generally need locally optimized storage and will use the storage-optimized nodes with directly attached NVMe and Premium SSD, which provide an 8:1 ratio of memory GB to vCPU.
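As a sketch, assigning a service's pods to one of these node pools is done through a nodeSelector override in that service's Helm values. The label key and pool names below are illustrative assumptions; the actual override attribute path for each service is documented in that service's guide.

```yaml
# Illustrative Helm values override -- the exact attribute path and
# node labels vary by service and by cluster; check the service's
# Helm chart documentation before use.
nodeSelector:
  genesys/nodepool: general      # most Genesys Engage services

# A stateful voice service would instead target the real-time pool:
# nodeSelector:
#   genesys/nodepool: realtime
```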
- For information about private edition's general networking requirements and constraints, see Networking Overview.
- For information about networking settings for Kubernetes clusters, see Network Settings.
For more information, see Service Priorities.
Most services scale their Kubernetes pods by using the Horizontal Pod Autoscaler. However, this tool can only use CPU or memory metrics from the Kubernetes Metrics Server in the HorizontalPodAutoscaler object. Private edition also works with the Kubernetes cluster autoscaler. Note that each service provides its own autoscaling rule, and that the autoscaling rule for a specific service is stored in the Helm charts for that service.
Genesys uses the third-party KEDA open-source autoscaler for Genesys Engage services that require custom metrics from Prometheus. Use the included Helm override attributes to adjust the defaults for each service.
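For orientation, a KEDA ScaledObject that scales a deployment on a custom Prometheus metric follows the pattern below. The service name, metric query, and threshold are hypothetical; the per-service defaults ship in each service's Helm chart and are tuned through its override attributes.

```yaml
# Sketch of a KEDA Prometheus trigger; all names and values here
# are illustrative, not the defaults shipped by any Genesys service.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: example-service-scaler
spec:
  scaleTargetRef:
    name: example-service          # Deployment to scale (hypothetical)
  minReplicaCount: 2
  maxReplicaCount: 10
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.monitoring:9090
        query: sum(active_sessions{app="example-service"})  # hypothetical metric
        threshold: "100"
```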
You must perform your own scaling operations on the Kubernetes control plane. The operational requirements of this scaling depend on the size of your contact center. For large installations, you might need to deploy multiple clusters and distribute the Genesys Engage services across them.
Private edition uses ConfigMaps to pass variables and data to the deployed services. This allows each service to be separate from its configuration data, which is a factor in making each service immutable. Genesys provides Helm override attributes that you use to set the configuration values for each service. For more information, see the appropriate service guide.
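The Helm charts render the override values you supply into ConfigMaps that the pods consume, along the lines of the following minimal sketch. The ConfigMap name and keys are illustrative; each service guide lists the actual configuration attributes.

```yaml
# Illustrative ConfigMap as rendered from Helm override values;
# names and keys are assumptions, not any service's real schema.
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-service-config
data:
  LOG_LEVEL: "info"
  DB_HOST: "postgres.example.internal"
```

Because the configuration lives outside the container image, the same immutable image can be promoted across environments with only the ConfigMap values changing.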
You can use OpenShift operators to deploy most third-party services into OpenShift. Note that Genesys does not provide operators to deploy Genesys Engage services.
Genesys Engage cloud private edition has been developed using industry-standard tools and best practices to identify and eliminate security vulnerabilities.
You are responsible for setting up security in the cluster.
For more information, see High Availability and Disaster Recovery.
Each service must have its own data store cluster or instances, which must not be shared in production environments unless they are under the same service group.
- All data stores must enable and deploy their high availability (HA) functionality
- All data stores must be distributed across Availability Zones, if they are available
- All data stores must support TLS connections and authentication, as appropriate
Here are the data stores used by each service:
| Service | Type of Data | Shared across tenants | Cross-region replication |
|---|---|---|---|
| Designer | Application Analytics data | Yes | No |
| IWD | Interaction and Queue Analytics data | Yes | No |
| TLM | Searchable telemetry data | Yes | No |
| UCS | Searchable contact and interaction history data | Yes | No |
| GWS WS | Searchable Statistics data | Yes | No |
| Service | Type of Data | Shared across tenants | Cross-region access |
|---|---|---|---|
| Tenant | Stream of tenant data | Yes | Yes |
| CXC | Runtime campaign and calling list status | Yes | No |
| GES | Runtime callback status and data | Yes | No |
| Nexus | Runtime messaging session data | Yes | No |
| IWD | Historical reporting data | Yes | No |
| VMS (these services all have separate keys: registrar, ORS, ORS stream, Callthread, Agent, Config, SIP, RQ) | Runtime interaction, agent, registration, config and routing request streams, and SCXML session data | Yes | Yes (not all) |
| GAuth | Authentication session data | Yes | No |
| GWS | Cached statistics, interaction, and agent data | Yes | No |
Postgres and MS SQL
| Service | Type of Data | Shared across tenants | Cross-region replication |
|---|---|---|---|
| GCXI | Metadata for reports | Yes | No |
| GVP RS - MS SQL | GVP reporting data | Yes | No |
| GVP CFG | Config data | Yes | No |
| IXN | Digital interaction data | No | No |
| Pulse Permissions | Config data | No | No |
| Tenant | Config and campaign data | No | Yes |
| GIM | Historical reporting data | No | No |
| IWD | IWD config data | No | No |
| UCS | Contact, transcriptions, emails, and interaction history | No | No |
File and disk storage
For more information, see Storage Requirements.
Certain private edition services send emails as part of their operation. These services use standard mail agents on the operating system over SMTP via ports 25 and 587.
To use email with a service, you must set up the appropriate SMTP relay to relay messages from that service to your email system or email service. Note: The relay must be reachable from the Kubernetes clusters.
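As one possible sketch, if the mail agent in use is Postfix, relaying through an upstream SMTP server on port 587 involves a few parameters in main.cf. The relay hostname and credential map path below are illustrative assumptions; adapt them to your own email system.

```
# /etc/postfix/main.cf -- minimal relay sketch (hostname is illustrative)
relayhost = [smtp-relay.example.com]:587
smtp_use_tls = yes
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
```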
Content delivery networks (CDNs)
The WWE service that runs within private edition delivers static content. You can host this content from a CDN or from nginx running in the Kubernetes cluster.
Private edition provides appropriate interfaces for you to use your own monitoring tools. For the purposes of this software, monitoring encompasses both metrics and logging.
Private edition provides a set of Prometheus-based metrics and defines an endpoint which the Prometheus platform can scrape. However, it does not provide a Grafana dashboard or Alert rule definitions.
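Because the services expose Prometheus-format endpoints, a standard Kubernetes service-discovery scrape job is enough to collect the metrics. The job name and annotation convention below are assumptions for illustration; each service guide documents its actual metrics port and path.

```yaml
# Illustrative Prometheus scrape job using pod discovery; job name
# and the prometheus.io/scrape annotation convention are assumptions.
scrape_configs:
  - job_name: genesys-engage-pods
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
```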
Private edition uses OpenShift monitoring to verify all metrics provided by the pods.
Private edition provides the vast majority of its log data via stdout and stderr. In some exceptional cases, data is logged to disk.
Private edition uses OpenShift logging to verify all logs provided by the pods.
Private edition supports integrations with a wide variety of systems to provide an enriched customer experience, including in the following areas:
- Bot platforms, such as Google Dialogflow and AWS Lex
- WFM platforms, such as Verint and Nice
- Email systems
- Identity providers
- Reporting platforms, including business intelligence tools
- Messaging and social platforms
- CRM and BPM systems
- Biometrics systems