Monitoring overview and approach

From Genesys Documentation
This topic is part of the manual Operations for version Current of Genesys Multicloud CX Private Edition.

Learn about the types of metrics and the monitoring approach for the Genesys Multicloud CX services used in your private edition deployment.

Metrics, alerts, and monitoring approach for services

Services expose Prometheus-based metrics at an endpoint that the Prometheus platform can scrape for alerting and monitoring, and provide the interfaces needed to integrate your own monitoring and logging tools. The default operators do not scrape user workloads or user-defined applications such as Genesys services; you must enable Prometheus user workload monitoring. Once enabled, Prometheus scrapes all metrics from the endpoints that services expose.
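As an illustration, a scrape endpoint is simply an HTTP path (conventionally /metrics) that returns metrics in the Prometheus text exposition format. The following minimal sketch uses only the Python standard library; the metric name hypothetical_requests_total and port 8080 are placeholders, not part of any Genesys service:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

REQUESTS_TOTAL = 0  # counter a real service would increment elsewhere

def render_metrics() -> str:
    """Render metrics in the Prometheus text exposition format."""
    return (
        "# HELP hypothetical_requests_total Total requests served.\n"
        "# TYPE hypothetical_requests_total counter\n"
        f"hypothetical_requests_total {REQUESTS_TOTAL}\n"
    )

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/metrics":
            body = render_metrics().encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; version=0.0.4")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

# To serve the endpoint:
# HTTPServer(("", 8080), MetricsHandler).serve_forever()
```

A ServiceMonitor or PodMonitor would then point Prometheus at this port and path.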

Some services optionally use Pushgateway to push metrics from jobs that cannot be scraped.
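For example, a batch job can push a metric to the Pushgateway with a single HTTP request; the host pushgateway:9091 and the job name nightly-backup below are illustrative placeholders:

```
# Push one metric for a batch job to the Pushgateway.
# pushgateway:9091 and nightly-backup are placeholder names.
echo "hypothetical_job_last_success_seconds $(date +%s)" \
  | curl --data-binary @- http://pushgateway:9091/metrics/job/nightly-backup
```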


In general, the monitoring approach in a private edition deployment is Prometheus-based. Through Prometheus support, the metrics that are generated by Genesys services are made available for visualization (using tools like Grafana). For more details, see the respective sections based on your cloud platform.
If you are not using Prometheus, or an APM tool that supports Prometheus CRDs and PodMonitor or ServiceMonitor objects, then you must build your own solution until Genesys adds Prometheus annotation support.
There are two types of metrics: system and service.
  • System metrics contain data about cluster performance and status, such as CPU usage, memory usage, network I/O pressure, and disk usage. When Prometheus is deployed, system metrics are collected automatically by default. They provide monitoring of cluster components and ship with a set of alerts that immediately notify the cluster administrator about any problems that occur.
  • Service metrics contain data about Genesys services. For most services, you must enable user workload monitoring and then create ServiceMonitor or PodMonitor objects per your requirements. However, services that do not use CRDs or annotations run a Pushgateway CronJob to collect metrics and push them to the Prometheus Pushgateway.
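As a sketch, a ServiceMonitor for a service that exposes metrics on a named port might look like the following; all names and labels are placeholders, not values shipped with any Genesys service:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-genesys-service     # placeholder name
  namespace: my-app-namespace  # placeholder application namespace
spec:
  selector:
    matchLabels:
      app: my-genesys-service  # must match the labels on the Service
  endpoints:
    - port: metrics            # named port that serves /metrics
      interval: 30s
```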

OpenShift monitoring

A core feature of the Prometheus operator is to monitor the Kubernetes API server for changes to specific objects and ensure the current Prometheus deployments match these objects. The operator acts on the following custom resource definitions (CRDs):

  • Prometheus: Defines a desired Prometheus deployment.
  • Alertmanager: Defines a desired Alertmanager deployment.
  • ThanosRuler: Defines a desired Thanos Ruler deployment.
  • ServiceMonitor: Declaratively specifies how groups of Kubernetes services should be monitored. The operator automatically generates Prometheus scrape configuration based on the current state of the objects in the API server.
  • PodMonitor: Declaratively specifies how groups of pods should be monitored. The operator automatically generates Prometheus scrape configuration based on the current state of the objects in the API server.
  • Probe: Declaratively specifies how groups of ingresses or static targets should be monitored. The operator automatically generates Prometheus scrape configuration based on the definition.
  • PrometheusRule: Defines a desired set of Prometheus alerting and/or recording rules. The operator generates a rule file, which can be used by Prometheus instances.
  • AlertmanagerConfig: Declaratively specifies subsections of the Alertmanager configuration, allowing routing of alerts to custom receivers and setting of inhibit rules.

The following diagram illustrates the monitoring approach for private edition. When you deploy a Red Hat OpenShift cluster, the OpenShift monitoring operators are installed by default as part of the cluster, in read-only mode.

(Diagram: MonitoringServices.png)

When you enable this feature, the openshift-user-workload-monitoring namespace is created.
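On OpenShift, user workload monitoring is typically enabled through the cluster-monitoring-config ConfigMap in the openshift-monitoring namespace, for example:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true
```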

In the application namespace, you can modify alerts as required. The ServiceMonitor or PodMonitor specifies the port on which the metrics are exposed, and the Prometheus operator configures Prometheus to collect them.
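For example, alerts can be defined in a PrometheusRule object in the application namespace; the names, job label, and threshold below are illustrative placeholders:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: my-genesys-service-alerts  # placeholder name
  namespace: my-app-namespace      # placeholder application namespace
spec:
  groups:
    - name: service.alerts
      rules:
        - alert: ServiceDown
          expr: up{job="my-genesys-service"} == 0  # placeholder job label
          for: 5m
          labels:
            severity: critical
          annotations:
            summary: "The service has been unreachable for 5 minutes."
```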

Each service can define more than one ServiceMonitor or PodMonitor that exposes its metrics.

GKE monitoring

GKE monitoring enables you to identify issues related to the performance of your services and to gain visibility into containers, nodes, and pods within your GKE environment. There are two approaches to monitoring in GKE: the Google Cloud operations suite and a Prometheus-based approach. For more details, refer to the following sections:

Google Cloud operations suite

GKE clusters are natively integrated with Cloud Monitoring, which is enabled by default when you create a cluster. Cloud Monitoring collects metrics, events, and metadata from Google Cloud. Refer to the following for more details:

Prometheus-based approach

Prometheus is a monitoring toolkit that is often used with Kubernetes. Prometheus covers a full stack of Kubernetes cluster components, deployed microservices, alerts, and dashboards. If you configure Cloud Operations for GKE and include Prometheus support, the metrics that services generate in the Prometheus exposition format can be exported from the cluster and made visible as external metrics in Cloud Monitoring. To learn more about the Prometheus toolkit, refer to the following:

Click here to learn about deploying Prometheus.

Enabling monitoring for your services

To set up monitoring for the cluster and your private edition services in cloud platforms, find instructions here:
