<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://all.docs.genesys.com/index.php?action=history&amp;feed=atom&amp;title=VM%2FCurrent%2FVMPEGuide%2FVoiceRegistrarServiceMetrics</id>
	<title>VM/Current/VMPEGuide/VoiceRegistrarServiceMetrics - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://all.docs.genesys.com/index.php?action=history&amp;feed=atom&amp;title=VM%2FCurrent%2FVMPEGuide%2FVoiceRegistrarServiceMetrics"/>
	<link rel="alternate" type="text/html" href="https://all.docs.genesys.com/index.php?title=VM/Current/VMPEGuide/VoiceRegistrarServiceMetrics&amp;action=history"/>
	<updated>2026-04-17T11:13:43Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.31.1</generator>
	<entry>
		<id>https://all.docs.genesys.com/index.php?title=VM/Current/VMPEGuide/VoiceRegistrarServiceMetrics&amp;diff=116224&amp;oldid=prev</id>
		<title>Corinneh: Published</title>
		<link rel="alternate" type="text/html" href="https://all.docs.genesys.com/index.php?title=VM/Current/VMPEGuide/VoiceRegistrarServiceMetrics&amp;diff=116224&amp;oldid=prev"/>
		<updated>2022-02-23T20:56:22Z</updated>

		<summary type="html">&lt;p&gt;Published&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;{{ArticlePEServiceMetrics&lt;br /&gt;
|IncludedServiceId=34f32476-4f7c-446d-9224-3a4175f25e24&lt;br /&gt;
|CRD=Supports both CRD and annotations&lt;br /&gt;
|Port=11500&lt;br /&gt;
|Endpoint=http://&amp;lt;pod-ipaddress&amp;gt;:11500/metrics&lt;br /&gt;
|MetricsUpdateInterval=30 seconds&lt;br /&gt;
|MetricsDefined=Yes&lt;br /&gt;
|MetricsIntro=Voice Registrar Service exposes Genesys-defined, Registrar Service–specific metrics as well as some standard Kafka metrics. You can query Prometheus directly to see all the metrics that the Registrar Service exposes. The following metrics are likely to be particularly useful. Genesys does not commit to maintaining other currently available Voice Registrar Service metrics that are not documented on this page.&lt;br /&gt;
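&lt;br /&gt;
For example, the following PromQL query lists every time series the service exposes under its own prefix (a minimal sketch; run it against your Prometheus instance):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# List all currently exposed registrar_* series.&lt;br /&gt;
{__name__=~"registrar_.+"}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;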
|PEMetric={{PEMetric&lt;br /&gt;
|Metric=registrar_register_count&lt;br /&gt;
|Type=counter&lt;br /&gt;
|Unit=N/A&lt;br /&gt;
|Label=location, tenant&lt;br /&gt;
|MetricDescription=Number of registrations.&lt;br /&gt;
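&lt;br /&gt;
A hedged PromQL sketch for turning this counter into a registration rate (standard counter usage; the 5-minute window is an arbitrary choice):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Registrations per second, per location and tenant, over 5 minutes.&lt;br /&gt;
sum by (location, tenant) (rate(registrar_register_count[5m]))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;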
|UsedFor=Traffic&lt;br /&gt;
}}{{PEMetric&lt;br /&gt;
|Metric=registrar_health_level&lt;br /&gt;
|Type=gauge&lt;br /&gt;
|Unit=N/A&lt;br /&gt;
|MetricDescription=Health level of the registrar node:&lt;br /&gt;
&lt;br /&gt;
-1 – fail&amp;lt;br /&amp;gt;&lt;br /&gt;
0 – starting&amp;lt;br /&amp;gt;&lt;br /&gt;
1 – degraded&amp;lt;br /&amp;gt;&lt;br /&gt;
2 – pass&lt;br /&gt;
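&lt;br /&gt;
A minimal PromQL sketch for spotting nodes that are not fully healthy, based on the value encoding above:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Any registrar node reporting less than "pass" (2).&lt;br /&gt;
registrar_health_level &amp;lt; 2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;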
|UsedFor=Errors&lt;br /&gt;
}}{{PEMetric&lt;br /&gt;
|Metric=registrar_request_latency&lt;br /&gt;
|Type=histogram&lt;br /&gt;
|Unit=milliseconds&lt;br /&gt;
|Label=le, location, tenant&lt;br /&gt;
|MetricDescription=Time taken to process the request (ms).&lt;br /&gt;
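&lt;br /&gt;
Because this is a histogram, quantiles can be estimated from its _bucket series via the le label. A hedged sketch (standard histogram_quantile usage; the 0.95 quantile and 5-minute window are arbitrary choices):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Approximate p95 request latency in ms, per tenant, over 5 minutes.&lt;br /&gt;
histogram_quantile(0.95,&lt;br /&gt;
  sum by (le, tenant) (rate(registrar_request_latency_bucket[5m])))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;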
|UsedFor=Latency&lt;br /&gt;
}}{{PEMetric&lt;br /&gt;
|Metric=registrar_active_sip_registrations&lt;br /&gt;
|Type=gauge&lt;br /&gt;
|Unit=N/A&lt;br /&gt;
|Label=tenant&lt;br /&gt;
|MetricDescription=Number of active SIP registrations.&lt;br /&gt;
|UsedFor=Traffic&lt;br /&gt;
}}{{PEMetric&lt;br /&gt;
|Metric=kafka_consumer_latency&lt;br /&gt;
|Type=histogram&lt;br /&gt;
|Label=tenant, topic&lt;br /&gt;
|MetricDescription=Consumer latency is the time difference between when a message is produced and when it is consumed; that is, the time the consumer received the message minus the time the producer sent it.&lt;br /&gt;
|UsedFor=Latency&lt;br /&gt;
}}{{PEMetric&lt;br /&gt;
|Metric=kafka_consumer_state&lt;br /&gt;
|Type=gauge&lt;br /&gt;
|MetricDescription=Current Kafka consumer connection state:&lt;br /&gt;
&lt;br /&gt;
0 – disconnected&amp;lt;br /&amp;gt;&lt;br /&gt;
1 – connected&lt;br /&gt;
}}&lt;br /&gt;
|AlertsDefined=Yes&lt;br /&gt;
|PEAlert={{PEAlert&lt;br /&gt;
|Alert=Kafka events latency is too high&lt;br /&gt;
|Severity=Warning&lt;br /&gt;
|AlertDescription=Actions:&lt;br /&gt;
&lt;br /&gt;
*If the alarm is triggered for multiple topics, make sure there are no issues with Kafka (CPU, memory, or network overload).&lt;br /&gt;
*If the alarm is triggered only for topic &amp;lt;nowiki&amp;gt;{{ $labels.topic }}&amp;lt;/nowiki&amp;gt;, check if there is an issue with the service related to the topic (CPU, memory, or network overload).&lt;br /&gt;
|BasedOn=kafka_consumer_latency_bucket&lt;br /&gt;
|Threshold=Latency for more than 5% of messages is more than 0.5 seconds for topic &amp;lt;nowiki&amp;gt;{{ $labels.topic }}&amp;lt;/nowiki&amp;gt;.&lt;br /&gt;
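&lt;br /&gt;
A minimal PromQL sketch of this condition (the 500 bucket boundary assumes millisecond buckets):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Fraction of messages slower than 0.5 s exceeds 5%, per topic.&lt;br /&gt;
1 - (sum by (topic) (rate(kafka_consumer_latency_bucket{le="500"}[5m]))&lt;br /&gt;
     / sum by (topic) (rate(kafka_consumer_latency_count[5m])))&lt;br /&gt;
  &amp;gt; 0.05&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;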
}}{{PEAlert&lt;br /&gt;
|Alert=Too many Kafka consumer failed health checks&lt;br /&gt;
|Severity=Warning&lt;br /&gt;
|AlertDescription=Actions:&lt;br /&gt;
&lt;br /&gt;
*If the alarm is triggered for multiple services, make sure there are no issues with Kafka, and then restart Kafka.&lt;br /&gt;
*If the alarm is triggered only for &amp;lt;nowiki&amp;gt;{{ $labels.container }}&amp;lt;/nowiki&amp;gt;, check if there is an issue with the service.&lt;br /&gt;
|BasedOn=kafka_consumer_error_total&lt;br /&gt;
|Threshold=The health check failed more than 10 times within 5 minutes for the Kafka consumer for topic &amp;lt;nowiki&amp;gt;{{ $labels.topic }}&amp;lt;/nowiki&amp;gt;.&lt;br /&gt;
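&lt;br /&gt;
A hedged sketch of the underlying expression. This alert presumably selects health-check failures via a label on kafka_consumer_error_total; the label name and value used here are hypothetical placeholders:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# More than 10 errors of the relevant type within 5 minutes;&lt;br /&gt;
# error_type="health_check" is a hypothetical label.&lt;br /&gt;
sum by (topic) (increase(kafka_consumer_error_total{error_type="health_check"}[5m])) &amp;gt; 10&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;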
}}{{PEAlert&lt;br /&gt;
|Alert=Too many Kafka consumer request timeouts&lt;br /&gt;
|Severity=Warning&lt;br /&gt;
|AlertDescription=Actions:&lt;br /&gt;
&lt;br /&gt;
*If the alarm is triggered for multiple services, make sure there are no issues with Kafka, and then restart Kafka.&lt;br /&gt;
*If the alarm is triggered only for &amp;lt;nowiki&amp;gt;{{ $labels.container }}&amp;lt;/nowiki&amp;gt;, check if there is an issue with the service.&lt;br /&gt;
|BasedOn=kafka_consumer_error_total&lt;br /&gt;
|Threshold=There were more than 10 request timeouts within 5 minutes for the Kafka consumer for topic &amp;lt;nowiki&amp;gt;{{ $labels.topic }}&amp;lt;/nowiki&amp;gt;.&lt;br /&gt;
}}{{PEAlert&lt;br /&gt;
|Alert=Too many Kafka consumer crashes&lt;br /&gt;
|Severity=Critical&lt;br /&gt;
|AlertDescription=Actions:&lt;br /&gt;
&lt;br /&gt;
*If the alarm is triggered for multiple services, make sure there are no issues with Kafka, and then restart Kafka.&lt;br /&gt;
*If the alarm is triggered only for &amp;lt;nowiki&amp;gt;{{ $labels.container }}&amp;lt;/nowiki&amp;gt;, check if there is an issue with the service.&lt;br /&gt;
|BasedOn=kafka_consumer_error_total&lt;br /&gt;
|Threshold=There were more than 3 Kafka consumer crashes within 5 minutes for service &amp;lt;nowiki&amp;gt;{{ $labels.container }}&amp;lt;/nowiki&amp;gt;.&lt;br /&gt;
}}{{PEAlert&lt;br /&gt;
|Alert=Kafka not available&lt;br /&gt;
|Severity=Critical&lt;br /&gt;
|AlertDescription=Kafka is not available for pod &amp;lt;nowiki&amp;gt;{{ $labels.pod }}&amp;lt;/nowiki&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Actions:&lt;br /&gt;
&lt;br /&gt;
*If the alarm is triggered for multiple services, make sure there are no issues with Kafka, and then restart Kafka.&lt;br /&gt;
*If the alarm is triggered only for pod &amp;lt;nowiki&amp;gt;{{ $labels.pod }}&amp;lt;/nowiki&amp;gt;, check if there is an issue with the pod.&lt;br /&gt;
|BasedOn=kafka_producer_state, kafka_consumer_state&lt;br /&gt;
|Threshold=Kafka is not available for pod &amp;lt;nowiki&amp;gt;{{ $labels.pod }}&amp;lt;/nowiki&amp;gt; for 5 consecutive minutes.&lt;br /&gt;
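&lt;br /&gt;
A hedged sketch of the underlying expression (assumes kafka_producer_state uses the same 0 = disconnected, 1 = connected encoding as kafka_consumer_state; whether one or both must be down is an assumption):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Connection reported down; pair with "for: 5m" in the alerting rule.&lt;br /&gt;
kafka_consumer_state == 0 or kafka_producer_state == 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;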
}}{{PEAlert&lt;br /&gt;
|Alert=Redis disconnected for 5 minutes&lt;br /&gt;
|Severity=Warning&lt;br /&gt;
|AlertDescription=Actions:&lt;br /&gt;
&lt;br /&gt;
*If the alarm is triggered for multiple services, make sure there are no issues with Redis, and then restart Redis.&lt;br /&gt;
*If the alarm is triggered only for pod &amp;lt;nowiki&amp;gt;{{ $labels.pod }}&amp;lt;/nowiki&amp;gt;, check if there is an issue with the pod.&lt;br /&gt;
|BasedOn=redis_state&lt;br /&gt;
|Threshold=Redis is not available for pod &amp;lt;nowiki&amp;gt;{{ $labels.pod }}&amp;lt;/nowiki&amp;gt; for 5 minutes.&lt;br /&gt;
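&lt;br /&gt;
A minimal sketch of the shared expression (redis_state is assumed to use 0 = disconnected, 1 = connected; the critical variant below differs only in the duration):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Pair with "for: 5m" here, or "for: 10m" for the critical alert.&lt;br /&gt;
redis_state == 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;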
}}{{PEAlert&lt;br /&gt;
|Alert=Redis disconnected for 10 minutes&lt;br /&gt;
|Severity=Critical&lt;br /&gt;
|AlertDescription=Actions:&lt;br /&gt;
&lt;br /&gt;
*If the alarm is triggered for multiple services, make sure there are no issues with Redis, and then restart Redis.&lt;br /&gt;
*If the alarm is triggered only for pod &amp;lt;nowiki&amp;gt;{{ $labels.pod }}&amp;lt;/nowiki&amp;gt;, check if there is an issue with the pod.&lt;br /&gt;
|BasedOn=redis_state&lt;br /&gt;
|Threshold=Redis is not available for pod &amp;lt;nowiki&amp;gt;{{ $labels.pod }}&amp;lt;/nowiki&amp;gt; for 10 minutes.&lt;br /&gt;
}}{{PEAlert&lt;br /&gt;
|Alert=Pod Failed&lt;br /&gt;
|Severity=Warning&lt;br /&gt;
|AlertDescription=Pod &amp;lt;nowiki&amp;gt;{{ $labels.pod }}&amp;lt;/nowiki&amp;gt; failed.&lt;br /&gt;
&lt;br /&gt;
Actions:&lt;br /&gt;
&lt;br /&gt;
*One of the containers in the pod has entered a failed state. Check the Kibana logs for the reason.&lt;br /&gt;
|BasedOn=kube_pod_status_phase&lt;br /&gt;
|Threshold=Pod &amp;lt;nowiki&amp;gt;{{ $labels.pod }}&amp;lt;/nowiki&amp;gt; is in Failed state.&lt;br /&gt;
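&lt;br /&gt;
A minimal PromQL sketch (kube-state-metrics exposes one kube_pod_status_phase series per pod and phase, valued 1 for the current phase; the Unknown and Pending alerts below use the same shape with a different phase and a 5-minute duration):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Pod currently in the Failed phase.&lt;br /&gt;
kube_pod_status_phase{phase="Failed"} == 1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;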
}}{{PEAlert&lt;br /&gt;
|Alert=Pod Unknown state&lt;br /&gt;
|Severity=Warning&lt;br /&gt;
|AlertDescription=Pod &amp;lt;nowiki&amp;gt;{{ $labels.pod }}&amp;lt;/nowiki&amp;gt; is in Unknown state.&lt;br /&gt;
&lt;br /&gt;
Actions:&lt;br /&gt;
&lt;br /&gt;
*If the alarm is triggered for multiple services, make sure there are no issues with the Kubernetes cluster.&lt;br /&gt;
*If the alarm is triggered only for pod &amp;lt;nowiki&amp;gt;{{ $labels.pod }}&amp;lt;/nowiki&amp;gt;, check whether the image is correct and if the container is starting up.&lt;br /&gt;
|BasedOn=kube_pod_status_phase&lt;br /&gt;
|Threshold=Pod &amp;lt;nowiki&amp;gt;{{ $labels.pod }}&amp;lt;/nowiki&amp;gt; is in Unknown state for 5 minutes.&lt;br /&gt;
}}{{PEAlert&lt;br /&gt;
|Alert=Pod Pending state&lt;br /&gt;
|Severity=Warning&lt;br /&gt;
|AlertDescription=Pod &amp;lt;nowiki&amp;gt;{{ $labels.pod }}&amp;lt;/nowiki&amp;gt; is in Pending state.&lt;br /&gt;
&lt;br /&gt;
Actions:&lt;br /&gt;
&lt;br /&gt;
*If the alarm is triggered for multiple services, make sure the Kubernetes nodes where the pod is running are alive in the cluster.&lt;br /&gt;
*If the alarm is triggered only for pod &amp;lt;nowiki&amp;gt;{{ $labels.pod }}&amp;lt;/nowiki&amp;gt;, check the health of the pod.&lt;br /&gt;
|BasedOn=kube_pod_status_phase&lt;br /&gt;
|Threshold=Pod &amp;lt;nowiki&amp;gt;{{ $labels.pod }}&amp;lt;/nowiki&amp;gt; is in Pending state for 5 minutes.&lt;br /&gt;
}}{{PEAlert&lt;br /&gt;
|Alert=Pod Not ready for 10 minutes&lt;br /&gt;
|Severity=Critical&lt;br /&gt;
|AlertDescription=Actions:&lt;br /&gt;
&lt;br /&gt;
*If this alarm is triggered, check whether enough CPU is available for the pods.&lt;br /&gt;
*Check whether the pod's port is open and serving requests.&lt;br /&gt;
|BasedOn=kube_pod_status_ready&lt;br /&gt;
|Threshold=Pod &amp;lt;nowiki&amp;gt;{{ $labels.pod }}&amp;lt;/nowiki&amp;gt; is in the NotReady state for 10 minutes.&lt;br /&gt;
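&lt;br /&gt;
A minimal PromQL sketch using the kube-state-metrics readiness series:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Ready condition not "true"; pair with "for: 10m" in the alerting rule.&lt;br /&gt;
kube_pod_status_ready{condition="true"} == 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;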
}}{{PEAlert&lt;br /&gt;
|Alert=Container restarted repeatedly&lt;br /&gt;
|Severity=Critical&lt;br /&gt;
|AlertDescription=Actions:&lt;br /&gt;
&lt;br /&gt;
*One of the containers in the pod has entered a failed state. Check the Kibana logs for the reason.&lt;br /&gt;
|BasedOn=kube_pod_container_status_restarts_total&lt;br /&gt;
|Threshold=Container &amp;lt;nowiki&amp;gt;{{ $labels.container }}&amp;lt;/nowiki&amp;gt; was restarted 5 or more times within 15 minutes.&lt;br /&gt;
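&lt;br /&gt;
A minimal PromQL sketch of this condition:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# 5 or more container restarts within the last 15 minutes.&lt;br /&gt;
increase(kube_pod_container_status_restarts_total[15m]) &amp;gt;= 5&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;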
}}{{PEAlert&lt;br /&gt;
|Alert=Pod CPU greater than 65%&lt;br /&gt;
|Severity=Warning&lt;br /&gt;
|AlertDescription=High CPU load for pod &amp;lt;nowiki&amp;gt;{{ $labels.pod }}&amp;lt;/nowiki&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Actions:&lt;br /&gt;
&lt;br /&gt;
*Check whether the horizontal pod autoscaler has triggered and if the maximum number of pods has been reached.&lt;br /&gt;
*Check Grafana for abnormal load.&lt;br /&gt;
*Collect the service logs; raise an investigation ticket.&lt;br /&gt;
|BasedOn=container_cpu_usage_seconds_total, kube_pod_container_resource_limits&lt;br /&gt;
|Threshold=Container &amp;lt;nowiki&amp;gt;{{ $labels.container }}&amp;lt;/nowiki&amp;gt; CPU usage exceeded 65% for 5 minutes.&lt;br /&gt;
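&lt;br /&gt;
A hedged sketch of the underlying expression (the resource="cpu" selector follows kube-state-metrics v2 conventions and is an assumption; the 80% critical variant differs only in the threshold):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# CPU usage as a fraction of the container CPU limit, over 5 minutes.&lt;br /&gt;
sum by (pod, container) (rate(container_cpu_usage_seconds_total[5m]))&lt;br /&gt;
  / sum by (pod, container) (kube_pod_container_resource_limits{resource="cpu"})&lt;br /&gt;
  &amp;gt; 0.65&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;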
}}{{PEAlert&lt;br /&gt;
|Alert=Pod memory greater than 65%&lt;br /&gt;
|Severity=Warning&lt;br /&gt;
|AlertDescription=High memory usage for pod &amp;lt;nowiki&amp;gt;{{ $labels.pod }}&amp;lt;/nowiki&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Actions:&lt;br /&gt;
&lt;br /&gt;
*Check whether the horizontal pod autoscaler has triggered and if the maximum number of pods has been reached.&lt;br /&gt;
*Check Grafana for abnormal load.&lt;br /&gt;
*Collect the service logs; raise an investigation ticket.&lt;br /&gt;
|BasedOn=container_memory_working_set_bytes, kube_pod_container_resource_limits&lt;br /&gt;
|Threshold=Container &amp;lt;nowiki&amp;gt;{{ $labels.container }}&amp;lt;/nowiki&amp;gt; memory usage exceeded 65% for 5 minutes.&lt;br /&gt;
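&lt;br /&gt;
A hedged sketch of the underlying expression (the resource="memory" selector is an assumption, as above; the 80% critical variant differs only in the threshold):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Working-set memory as a fraction of the container memory limit.&lt;br /&gt;
sum by (pod, container) (container_memory_working_set_bytes)&lt;br /&gt;
  / sum by (pod, container) (kube_pod_container_resource_limits{resource="memory"})&lt;br /&gt;
  &amp;gt; 0.65&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;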
}}{{PEAlert&lt;br /&gt;
|Alert=Pod memory greater than 80%&lt;br /&gt;
|Severity=Critical&lt;br /&gt;
|AlertDescription=Critical memory usage for pod &amp;lt;nowiki&amp;gt;{{ $labels.pod }}&amp;lt;/nowiki&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Actions:&lt;br /&gt;
&lt;br /&gt;
*Check whether the horizontal pod autoscaler has triggered and if the maximum number of pods has been reached.&lt;br /&gt;
*Check Grafana for abnormal load.&lt;br /&gt;
*Restart the service.&lt;br /&gt;
*Collect the service logs; raise an investigation ticket.&lt;br /&gt;
|BasedOn=container_memory_working_set_bytes, kube_pod_container_resource_limits&lt;br /&gt;
|Threshold=Container &amp;lt;nowiki&amp;gt;{{ $labels.container }}&amp;lt;/nowiki&amp;gt; memory usage exceeded 80% for 5 minutes.&lt;br /&gt;
}}{{PEAlert&lt;br /&gt;
|Alert=Pod CPU greater than 80%&lt;br /&gt;
|Severity=Critical&lt;br /&gt;
|AlertDescription=Critical CPU load for pod &amp;lt;nowiki&amp;gt;{{ $labels.pod }}&amp;lt;/nowiki&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Actions:&lt;br /&gt;
&lt;br /&gt;
*Check whether the horizontal pod autoscaler has triggered and if the maximum number of pods has been reached.&lt;br /&gt;
*Check Grafana for abnormal load.&lt;br /&gt;
|BasedOn=container_cpu_usage_seconds_total, kube_pod_container_resource_limits&lt;br /&gt;
|Threshold=Container &amp;lt;nowiki&amp;gt;{{ $labels.container }}&amp;lt;/nowiki&amp;gt; CPU usage exceeded 80% for 5 minutes.&lt;br /&gt;
}}&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Corinneh</name></author>
		
	</entry>
</feed>