<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://all.docs.genesys.com/index.php?action=history&amp;feed=atom&amp;title=VM%2FCurrent%2FVMPEGuide%2FVoiceConfigServiceMetrics</id>
	<title>VM/Current/VMPEGuide/VoiceConfigServiceMetrics - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://all.docs.genesys.com/index.php?action=history&amp;feed=atom&amp;title=VM%2FCurrent%2FVMPEGuide%2FVoiceConfigServiceMetrics"/>
	<link rel="alternate" type="text/html" href="https://all.docs.genesys.com/index.php?title=VM/Current/VMPEGuide/VoiceConfigServiceMetrics&amp;action=history"/>
	<updated>2026-04-14T23:39:19Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.31.1</generator>
	<entry>
		<id>https://all.docs.genesys.com/index.php?title=VM/Current/VMPEGuide/VoiceConfigServiceMetrics&amp;diff=116231&amp;oldid=prev</id>
		<title>Corinneh: Published</title>
		<link rel="alternate" type="text/html" href="https://all.docs.genesys.com/index.php?title=VM/Current/VMPEGuide/VoiceConfigServiceMetrics&amp;diff=116231&amp;oldid=prev"/>
		<updated>2022-02-23T20:56:45Z</updated>

		<summary type="html">&lt;p&gt;Published&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;{{ArticlePEServiceMetrics&lt;br /&gt;
|IncludedServiceId=a0519f2a-7110-4037-b752-3393df349c62&lt;br /&gt;
|CRD=Supports both CRD and annotations&lt;br /&gt;
|Port=9100&lt;br /&gt;
|Endpoint=http://&amp;lt;pod-ipaddress&amp;gt;:9100/metrics&lt;br /&gt;
|MetricsUpdateInterval=30 seconds&lt;br /&gt;
|MetricsDefined=Yes&lt;br /&gt;
|MetricsIntro=You can query Prometheus directly to see all the metrics that the Voice Config Service exposes. The following metrics are likely to be particularly useful. Genesys does not commit to maintaining other currently available Config Service metrics that are not documented on this page.&lt;br /&gt;
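As a non-authoritative illustration of consuming the metrics endpoint described above, the following Python sketch parses Prometheus text-exposition output and reads the config_redis_state gauge; the sample payload and its label values are invented for the example.

```python
# Minimal sketch: parse Prometheus text-exposition lines and pick out
# the config_redis_state gauge. The sample payload below is invented.
def parse_metrics(text):
    """Return {metric_name: [(labels_str, value), ...]} for non-comment lines."""
    out = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # The value is the token after the last space.
        name_part, _, value = line.rpartition(" ")
        if "{" in name_part:
            name, _, rest = name_part.partition("{")
            labels = rest.rstrip("}")
        else:
            name, labels = name_part, ""
        out.setdefault(name, []).append((labels, float(value)))
    return out

sample = '''\
# HELP config_redis_state Current Redis connection state
# TYPE config_redis_state gauge
config_redis_state{location="us-west",redis_cluster_name="main"} 2
config_device_response{location="us-west",tenant="t1",request_type="get",status="ok"} 2
'''

metrics = parse_metrics(sample)
state = metrics["config_redis_state"][0][1]
print("redis ready" if state == 2 else "redis not ready")
```

A value of 2 corresponds to the ready state described for config_redis_state in the metric table below.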
|PEMetric={{PEMetric&lt;br /&gt;
|Metric=config_device_response&lt;br /&gt;
|Type=counter&lt;br /&gt;
|Unit=N/A&lt;br /&gt;
|Label=location, tenant, request_type, status&lt;br /&gt;
|MetricDescription=Number of device responses for each request.&lt;br /&gt;
|SampleValue=2&lt;br /&gt;
|UsedFor=Traffic&lt;br /&gt;
}}{{PEMetric&lt;br /&gt;
|Metric=config_tenant_response&lt;br /&gt;
|Type=counter&lt;br /&gt;
|Unit=N/A&lt;br /&gt;
|Label=location, request_type, status&lt;br /&gt;
|MetricDescription=Number of tenant responses for each request.&lt;br /&gt;
|SampleValue=2&lt;br /&gt;
|UsedFor=Traffic&lt;br /&gt;
}}{{PEMetric&lt;br /&gt;
|Metric=config_node_get_response&lt;br /&gt;
|Type=counter&lt;br /&gt;
|Unit=N/A&lt;br /&gt;
|MetricDescription=Number of Get responses for each request.&lt;br /&gt;
|UsedFor=Traffic&lt;br /&gt;
}}{{PEMetric&lt;br /&gt;
|Metric=config_node_agent_response&lt;br /&gt;
|Type=counter&lt;br /&gt;
|Unit=N/A&lt;br /&gt;
|MetricDescription=Number of agent responses for each request.&lt;br /&gt;
|UsedFor=Traffic&lt;br /&gt;
}}{{PEMetric&lt;br /&gt;
|Metric=config_redis_state&lt;br /&gt;
|Type=gauge&lt;br /&gt;
|Unit=N/A&lt;br /&gt;
|Label=location, redis_cluster_name&lt;br /&gt;
|MetricDescription=Current Redis connection state:&lt;br /&gt;
&lt;br /&gt;
-1 – error&amp;lt;br /&amp;gt;&lt;br /&gt;
0 – disconnected&amp;lt;br /&amp;gt;&lt;br /&gt;
1 – connected&amp;lt;br /&amp;gt;&lt;br /&gt;
2 – ready&lt;br /&gt;
|SampleValue=2&lt;br /&gt;
|UsedFor=Errors&lt;br /&gt;
}}{{PEMetric&lt;br /&gt;
|Metric=service_version_info&lt;br /&gt;
|Type=gauge&lt;br /&gt;
|Unit=N/A&lt;br /&gt;
|Label=version&lt;br /&gt;
|MetricDescription=Displays the version of the Voice Config Service that is currently running. In the case of this metric, the labels provide the important information; the metric value is always 1 and carries no information of its own.&lt;br /&gt;
|SampleValue=service_version_info{version=&amp;quot;100.0.1000006&amp;quot;} 1&lt;br /&gt;
}}{{PEMetric&lt;br /&gt;
|Metric=config_health_level&lt;br /&gt;
|Type=gauge&lt;br /&gt;
|Unit=N/A&lt;br /&gt;
|MetricDescription=Health level of the config node:&lt;br /&gt;
&lt;br /&gt;
-1 – error&amp;lt;br /&amp;gt;&lt;br /&gt;
0 – fail&amp;lt;br /&amp;gt;&lt;br /&gt;
1 – degraded&amp;lt;br /&amp;gt;&lt;br /&gt;
2 – pass&lt;br /&gt;
|SampleValue=2&lt;br /&gt;
|UsedFor=Errors&lt;br /&gt;
}}{{PEMetric&lt;br /&gt;
|Metric=config_healthcheck_generic_exception&lt;br /&gt;
|Type=gauge&lt;br /&gt;
|Unit=N/A&lt;br /&gt;
|MetricDescription=Generic error during health check.&lt;br /&gt;
|SampleValue=0&lt;br /&gt;
}}&lt;br /&gt;
|AlertsDefined=Yes&lt;br /&gt;
|PEAlert={{PEAlert&lt;br /&gt;
|Alert=Redis disconnected for 5 minutes&lt;br /&gt;
|Severity=Warning&lt;br /&gt;
|AlertDescription=Actions:&lt;br /&gt;
&lt;br /&gt;
*If the alarm is triggered for multiple services, check Redis for issues and restart it if necessary.&lt;br /&gt;
*If the alarm is triggered only for the pod &amp;lt;nowiki&amp;gt;{{ $labels.pod }}&amp;lt;/nowiki&amp;gt;, check to see if there is an issue with the pod.&lt;br /&gt;
|BasedOn=redis_state&lt;br /&gt;
|Threshold=Redis is not available for pod &amp;lt;nowiki&amp;gt;{{ $labels.pod }}&amp;lt;/nowiki&amp;gt; for 5 minutes.&lt;br /&gt;
}}{{PEAlert&lt;br /&gt;
|Alert=Redis disconnected for 10 minutes&lt;br /&gt;
|Severity=Critical&lt;br /&gt;
|AlertDescription=Actions:&lt;br /&gt;
&lt;br /&gt;
*If the alarm is triggered for multiple services, check Redis for issues and restart it if necessary.&lt;br /&gt;
*If the alarm is triggered only for the pod &amp;lt;nowiki&amp;gt;{{ $labels.pod }}&amp;lt;/nowiki&amp;gt;, check to see if there is an issue with the pod.&lt;br /&gt;
|BasedOn=redis_state&lt;br /&gt;
|Threshold=Redis is not available for the pod &amp;lt;nowiki&amp;gt;{{ $labels.pod }}&amp;lt;/nowiki&amp;gt; for 10 minutes.&lt;br /&gt;
}}{{PEAlert&lt;br /&gt;
|Alert=Pod Failed&lt;br /&gt;
|Severity=Warning&lt;br /&gt;
|AlertDescription=Actions:&lt;br /&gt;
&lt;br /&gt;
*One of the containers in the pod has entered a failed state. Check the Kibana logs for the reason.&lt;br /&gt;
|BasedOn=kube_pod_status_phase&lt;br /&gt;
|Threshold=Pod failed &amp;lt;nowiki&amp;gt;{{ $labels.pod }}&amp;lt;/nowiki&amp;gt;.&lt;br /&gt;
}}{{PEAlert&lt;br /&gt;
|Alert=Pod Unknown state&lt;br /&gt;
|Severity=Warning&lt;br /&gt;
|AlertDescription=Actions:&lt;br /&gt;
&lt;br /&gt;
*If the alarm is triggered for multiple services, make sure there are no issues with the Kubernetes cluster.&lt;br /&gt;
*If the alarm is triggered only for the pod &amp;lt;nowiki&amp;gt;{{ $labels.pod }}&amp;lt;/nowiki&amp;gt;, check to see whether the image is correct and if the container is starting up.&lt;br /&gt;
|BasedOn=kube_pod_status_phase&lt;br /&gt;
|Threshold=Pod &amp;lt;nowiki&amp;gt;{{ $labels.pod }}&amp;lt;/nowiki&amp;gt; is in Unknown state for 5 minutes.&lt;br /&gt;
}}{{PEAlert&lt;br /&gt;
|Alert=Pod Pending state&lt;br /&gt;
|Severity=Warning&lt;br /&gt;
|AlertDescription=Actions:&lt;br /&gt;
&lt;br /&gt;
*If the alarm is triggered for multiple services, make sure the Kubernetes nodes where the pod is running are alive in the cluster.&lt;br /&gt;
*If the alarm is triggered only for the pod &amp;lt;nowiki&amp;gt;{{ $labels.pod }}&amp;lt;/nowiki&amp;gt;, check the health of the pod.&lt;br /&gt;
|BasedOn=kube_pod_status_phase&lt;br /&gt;
|Threshold=Pod &amp;lt;nowiki&amp;gt;{{ $labels.pod }}&amp;lt;/nowiki&amp;gt; is in Pending state for 5 minutes.&lt;br /&gt;
}}{{PEAlert&lt;br /&gt;
|Alert=Pod Not ready for 10 minutes&lt;br /&gt;
|Severity=Critical&lt;br /&gt;
|AlertDescription=Actions:&lt;br /&gt;
&lt;br /&gt;
*If this alarm is triggered, check whether CPU resources are available for the pods.&lt;br /&gt;
*Check whether the pod's port is open and serving requests.&lt;br /&gt;
|BasedOn=kube_pod_status_ready&lt;br /&gt;
|Threshold=Pod &amp;lt;nowiki&amp;gt;{{ $labels.pod }}&amp;lt;/nowiki&amp;gt; is in NotReady state for 10 minutes.&lt;br /&gt;
}}{{PEAlert&lt;br /&gt;
|Alert=Container restarted repeatedly&lt;br /&gt;
|Severity=Critical&lt;br /&gt;
|AlertDescription=Actions:&lt;br /&gt;
&lt;br /&gt;
*One of the containers in the pod has entered a failed state. Check the Kibana logs for the reason.&lt;br /&gt;
|BasedOn=kube_pod_container_status_restarts_total&lt;br /&gt;
|Threshold=Container &amp;lt;nowiki&amp;gt;{{ $labels.container }}&amp;lt;/nowiki&amp;gt; was restarted 5 or more times within 15 minutes.&lt;br /&gt;
}}{{PEAlert&lt;br /&gt;
|Alert=Pod memory greater than 65%&lt;br /&gt;
|Severity=Warning&lt;br /&gt;
|AlertDescription=High memory usage for pod &amp;lt;nowiki&amp;gt;{{ $labels.pod }}&amp;lt;/nowiki&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Actions:&lt;br /&gt;
&lt;br /&gt;
*Check whether the horizontal pod autoscaler has triggered and whether the maximum number of pods has been reached.&lt;br /&gt;
*Check Grafana for abnormal load.&lt;br /&gt;
*Collect the service logs; raise an investigation ticket.&lt;br /&gt;
|BasedOn=container_memory_working_set_bytes, kube_pod_container_resource_requests_memory_bytes&lt;br /&gt;
|Threshold=Container &amp;lt;nowiki&amp;gt;{{ $labels.container }}&amp;lt;/nowiki&amp;gt; memory usage exceeded 65% for 5 minutes.&lt;br /&gt;
}}{{PEAlert&lt;br /&gt;
|Alert=Pod memory greater than 80%&lt;br /&gt;
|Severity=Critical&lt;br /&gt;
|AlertDescription=Critical memory usage for pod &amp;lt;nowiki&amp;gt;{{ $labels.pod }}&amp;lt;/nowiki&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Actions:&lt;br /&gt;
&lt;br /&gt;
*Check whether the horizontal pod autoscaler has triggered and whether the maximum number of pods has been reached.&lt;br /&gt;
*Check Grafana for abnormal load.&lt;br /&gt;
*Restart the service.&lt;br /&gt;
*Collect the service logs; raise an investigation ticket.&lt;br /&gt;
|BasedOn=container_memory_working_set_bytes, kube_pod_container_resource_requests_memory_bytes&lt;br /&gt;
|Threshold=Container &amp;lt;nowiki&amp;gt;{{ $labels.container }}&amp;lt;/nowiki&amp;gt; memory usage exceeded 80% for 5 minutes.&lt;br /&gt;
}}{{PEAlert&lt;br /&gt;
|Alert=Pod CPU greater than 65%&lt;br /&gt;
|Severity=Warning&lt;br /&gt;
|AlertDescription=High CPU load for pod &amp;lt;nowiki&amp;gt;{{ $labels.pod }}&amp;lt;/nowiki&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Actions:&lt;br /&gt;
&lt;br /&gt;
*Check whether the horizontal pod autoscaler has triggered and whether the maximum number of pods has been reached.&lt;br /&gt;
*Check Grafana for abnormal load.&lt;br /&gt;
*Collect the service logs; raise an investigation ticket.&lt;br /&gt;
|BasedOn=container_cpu_usage_seconds_total, container_spec_cpu_period&lt;br /&gt;
|Threshold=Container &amp;lt;nowiki&amp;gt;{{ $labels.container }}&amp;lt;/nowiki&amp;gt; CPU usage exceeded 65% for 5 minutes.&lt;br /&gt;
}}{{PEAlert&lt;br /&gt;
|Alert=Pod CPU greater than 80%&lt;br /&gt;
|Severity=Critical&lt;br /&gt;
|AlertDescription=Critical CPU load for pod &amp;lt;nowiki&amp;gt;{{ $labels.pod }}&amp;lt;/nowiki&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Actions:&lt;br /&gt;
&lt;br /&gt;
*Check whether the horizontal pod autoscaler has triggered and whether the maximum number of pods has been reached.&lt;br /&gt;
*Check Grafana for abnormal load.&lt;br /&gt;
*Restart the service.&lt;br /&gt;
*Collect the service logs; raise an investigation ticket.&lt;br /&gt;
|BasedOn=container_cpu_usage_seconds_total, container_spec_cpu_period&lt;br /&gt;
|Threshold=Container &amp;lt;nowiki&amp;gt;{{ $labels.container }}&amp;lt;/nowiki&amp;gt; CPU usage exceeded 80% for 5 minutes.&lt;br /&gt;
}}&lt;br /&gt;
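As a rough, non-authoritative sketch, the "Redis disconnected for 5 minutes" alert above might be expressed as a Prometheus alerting rule along these lines; the rule name and expression are assumptions inferred from the config_redis_state metric table, not the rule that ships with the service.

```yaml
# Hypothetical sketch only: rule name and expr are assumptions,
# not the shipped alerting rule.
groups:
  - name: voice-config-service
    rules:
      - alert: ConfigRedisDisconnected5m
        # config_redis_state is 0 when disconnected (see metric table)
        expr: config_redis_state == 0
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: Redis has not been available for this pod for 5 minutes.
```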
}}&lt;/div&gt;</summary>
		<author><name>Corinneh</name></author>
		
	</entry>
</feed>