Cargo query
Page | Alert | Severity | AlertDescription | BasedOn | Threshold |
---|---|---|---|---|---|
Draft:GVP/Current/GVPPEGuide/GVP MCP Metrics | NGI_LOG_FETCH_RESOURCE_TIMEOUT | MEDIUM | Number of VXMLi fetch timeouts exceeded the limit | gvp_mcp_log_parser_eror_total {LogID="40026",endpoint="mcplog"...} | 1min |
Draft:GVP/Current/GVPPEGuide/GVP MCP Metrics | NGI_LOG_PARSE_ERROR | WARNING | Number of VXMLi parse errors exceeded the limit | gvp_mcp_log_parser_eror_total {LogID="40028",endpoint="mcplog"...} | 1min |
Draft:GVP/Current/GVPPEGuide/Reporting Server Metrics | ContainerCPUreached80percent | HIGH | The trigger will flag an alarm when the RS container CPU utilization goes beyond 80% for 15 mins | container_cpu_usage_seconds_total, container_spec_cpu_quota, container_spec_cpu_period | 15mins |
Draft:GVP/Current/GVPPEGuide/Reporting Server Metrics | ContainerMemoryUsage80percent | HIGH | The trigger will flag an alarm when the RS container Memory utilization goes beyond 80% for 15 mins | container_memory_usage_bytes, kube_pod_container_resource_limits_memory_bytes | 15mins |
Draft:GVP/Current/GVPPEGuide/Reporting Server Metrics | ContainerRestartedRepeatedly | CRITICAL | The trigger will flag an alarm when the RS or RS SNMP container gets restarted 5 or more times within 15 mins | kube_pod_container_status_restarts_total | 15mins |
Draft:GVP/Current/GVPPEGuide/Reporting Server Metrics | InitContainerFailingRepeatedly | CRITICAL | The trigger will flag an alarm when the RS init container fails 5 or more times within 15 mins | kube_pod_init_container_status_restarts_total | 15mins |
Draft:GVP/Current/GVPPEGuide/Reporting Server Metrics | PodStatusNotReady | CRITICAL | The trigger will flag an alarm when the RS pod status is Not Ready for 30 mins; this is controlled through the override-value.yaml file. | kube_pod_status_ready | 30mins |
Draft:GVP/Current/GVPPEGuide/Reporting Server Metrics | PVC50PercentFilled | HIGH | This trigger will flag an alarm when the RS PVC size is 50% filled | kubelet_volume_stats_used_bytes, kubelet_volume_stats_capacity_bytes | 15mins |
Draft:GVP/Current/GVPPEGuide/Reporting Server Metrics | PVC80PercentFilled | CRITICAL | This trigger will flag an alarm when the RS PVC size is 80% filled | kubelet_volume_stats_used_bytes, kubelet_volume_stats_capacity_bytes | 5mins |
Draft:GVP/Current/GVPPEGuide/Reporting Server Metrics | RSQueueSizeCritical | HIGH | The trigger will flag an alarm when RS JMS message queue size goes beyond 15000 (3GB approx. backlog) for 15 mins | rsQueueSize | 15mins |
Draft:GVP/Current/GVPPEGuide/Resource Manager Metrics | ContainerCPUreached80percentForRM0 | HIGH | The trigger will flag an alarm when the RM container CPU utilization goes beyond 80% for 15 mins | container_cpu_usage_seconds_total, container_spec_cpu_quota, container_spec_cpu_period | 15mins |
Draft:GVP/Current/GVPPEGuide/Resource Manager Metrics | ContainerCPUreached80percentForRM1 | HIGH | The trigger will flag an alarm when the RM container CPU utilization goes beyond 80% for 15 mins | container_cpu_usage_seconds_total, container_spec_cpu_quota, container_spec_cpu_period | 15mins |
Draft:GVP/Current/GVPPEGuide/Resource Manager Metrics | ContainerMemoryUsage80percentForRM0 | HIGH | The trigger will flag an alarm when the RM container Memory utilization goes beyond 80% for 15 mins | container_memory_rss, kube_pod_container_resource_limits_memory_bytes | 15mins |
Draft:GVP/Current/GVPPEGuide/Resource Manager Metrics | ContainerMemoryUsage80percentForRM1 | HIGH | The trigger will flag an alarm when the RM container Memory utilization goes beyond 80% for 15 mins | container_memory_rss, kube_pod_container_resource_limits_memory_bytes | 15mins |
Draft:GVP/Current/GVPPEGuide/Resource Manager Metrics | ContainerRestartedRepeatedly | CRITICAL | The trigger will flag an alarm when the RM or RM SNMP container gets restarted 5 or more times within 15 mins | kube_pod_container_status_restarts_total | 15 mins |
Draft:GVP/Current/GVPPEGuide/Resource Manager Metrics | InitContainerFailingRepeatedly | CRITICAL | The trigger will flag an alarm when the RM init container fails 5 or more times within 15 mins. | kube_pod_init_container_status_restarts_total | 15 mins |
Draft:GVP/Current/GVPPEGuide/Resource Manager Metrics | MCPPortsExceeded | HIGH | All the MCP ports in the MCP LRG have been exhausted | gvp_rm_log_parser_eror_total | 1min |
Draft:GVP/Current/GVPPEGuide/Resource Manager Metrics | PodStatusNotReady | CRITICAL | The trigger will flag an alarm when the RM pod status is Not Ready for 30 mins; this is controlled by the override-value.yaml file. | kube_pod_status_ready | 30mins |
Draft:GVP/Current/GVPPEGuide/Resource Manager Metrics | RM Service Down | CRITICAL | RM pods are not in the Ready state and the RM service is not available | kube_pod_container_status_running | 0 |
Draft:GVP/Current/GVPPEGuide/Resource Manager Metrics | RMConfigServerConnectionLost | HIGH | RM lost connection to GVP Configuration Server for 5mins. | gvp_rm_log_parser_warn_total | 5 mins |
Draft:GVP/Current/GVPPEGuide/Resource Manager Metrics | RMInterNodeConnectivityBroken | HIGH | Inter-node connectivity between RM nodes is lost for 5mins. | gvp_rm_log_parser_warn_total | 5 mins |
Draft:GVP/Current/GVPPEGuide/Resource Manager Metrics | RMMatchingIVRTenantNotFound | MEDIUM | Matching IVR profile tenant could not be found for 2 mins | gvp_rm_log_parser_eror_total | 2mins |
Draft:GVP/Current/GVPPEGuide/Resource Manager Metrics | RMResourceAllocationFailed | MEDIUM | RM resource allocation failed for 1 min | gvp_rm_log_parser_eror_total | 1min |
Draft:GVP/Current/GVPPEGuide/Resource Manager Metrics | RMServiceDegradedTo50Percentage | HIGH | One of the RM containers is not in the running state for 5 mins | kube_pod_container_status_running | 5mins |
Draft:GVP/Current/GVPPEGuide/Resource Manager Metrics | RMSocketInterNodeError | HIGH | RM Inter node Socket Error for 5mins. | gvp_rm_log_parser_eror_total | 5mins |
Draft:GVP/Current/GVPPEGuide/Resource Manager Metrics | RMTotal4XXErrorForINVITE | MEDIUM | The RM MIB counter stats are collected every 60 seconds; if the total4xxInviteSent counter increases by 10 or more from its previous value within 60 seconds, the trigger flags an alarm. | rmTotal4xxInviteSent | 1min |
Draft:GVP/Current/GVPPEGuide/Resource Manager Metrics | RMTotal5XXErrorForINVITE | HIGH | The RM MIB counter stats are collected every 30 seconds; if the total5xxInviteSent counter increases by 5 or more from its previous value within 5 minutes, the trigger flags an alarm. | rmTotal5xxInviteSent | 5 mins |
Draft:GWS/Current/GWSPEGuide/GWSMetrics | CPUThrottling | Critical | Containers are being throttled more than 1 time per second. | container_cpu_cfs_throttled_periods_total | 1 |
Draft:GWS/Current/GWSPEGuide/GWSMetrics | gws_high_500_responces_java | Critical | Too many 500 responses. | gws_responses_total | 10 |
Draft:GWS/Current/GWSPEGuide/GWSMetrics | gws_high_5xx_responces_count | Critical | Too many 5xx responses. | gws_responses_total | 60 |
Draft:GWS/Current/GWSPEGuide/GWSMetrics | gws_high_cpu_usage | Warning | High container CPU usage. | container_cpu_usage_seconds_total | 300% |
Draft:GWS/Current/GWSPEGuide/GWSMetrics | gws_high_jvm_gc_pause_seconds_count | Critical | JVM garbage collection occurs too often. | jvm_gc_pause_seconds_count | 10 |
Draft:GWS/Current/GWSPEGuide/GWSMetrics | gws_jvm_threads_deadlocked | Critical | Deadlocked JVM threads exist. | jvm_threads_deadlocked | 0 |
Draft:GWS/Current/GWSPEGuide/GWSMetrics | netstat_Tcp_RetransSegs | Warning | High number of TCP RetransSegs (retransmitted segments). | node_netstat_Tcp_RetransSegs | 2000 |
Draft:GWS/Current/GWSPEGuide/GWSMetrics | total_count_of_errors_during_context_initialization | Warning | Total count of errors during context initialization. | gws_context_error_total | 1200 |
Draft:GWS/Current/GWSPEGuide/GWSMetrics | total_count_of_errors_in_PSDK_connections | Warning | Total count of errors in PSDK connections. | psdk_conn_error_total | 3 |
Draft:GWS/Current/GWSPEGuide/WorkspaceMetrics | DesiredPodsDontMatchSpec | Critical | The Workspace Service deployment doesn't have the desired number of replicas. | kube_deployment_status_replicas_available, kube_deployment_spec_replicas | Fired when the number of available replicas does not equal the configured number. |
Draft:GWS/Current/GWSPEGuide/WorkspaceMetrics | gws_app_workspace_incoming_requests | Critical | High rate of incoming requests from Workspace Web Edition. | gws_app_workspace_incoming_requests | 10 |
Draft:GWS/Current/GWSPEGuide/WorkspaceMetrics | gws_high_500_responces_workspace | Critical | The Workspace Service has too many 500 responses. | gws_app_workspace_requests | 10 |
Draft:GWS/Current/GWSPEGuide/WorkspaceMetrics | gws_high_cpu_usage | Warning | High container CPU usage. | container_cpu_usage_seconds_total | 300% |
Draft:GWS/Current/GWSPEGuide/WorkspaceMetrics | gws_high_nodejs_eventloop_lag_seconds | Critical | The Node.js event loop is too slow. | nodejs_eventloop_lag_seconds | 0.2 |
Draft:PEC-CAB/Current/CABPEGuide/CallbackMetrics | GES-NODE-JS-DELAY-WARNING | Warning | Triggers if the base Node.js event loop lag becomes excessive. This indicates significant resource and performance issues with the deployment. | application_ccecp_nodejs_eventloop_lag_seconds | Triggered when the event loop lag is greater than 5 milliseconds for a period exceeding 5 minutes. |
Draft:PEC-CAB/Current/CABPEGuide/CallbackMetrics | GES_CB_ENQUEUE_LIMIT_REACHED | Info | GES is throttling callbacks to a given phone number. | CB_ENQUEUE_LIMIT_REACHED | Triggered when GES has begun throttling callbacks to a given number within the past 2 minutes. |
Draft:PEC-CAB/Current/CABPEGuide/CallbackMetrics | GES_CB_SUBMIT_FAILED | Info | GES has failed to submit a callback to ORS. | CB_SUBMIT_FAILED | Triggered when GES has failed to submit a callback to ORS in the past 2 minutes for any reason. |
Draft:PEC-CAB/Current/CABPEGuide/CallbackMetrics | GES_CB_TTL_LIMIT_REACHED | Info | GES is throttling callbacks for a specific tenant. | CB_TTL_LIMIT_REACHED | Triggered when GES has started throttling callbacks within the past 2 minutes. |
Draft:PEC-CAB/Current/CABPEGuide/CallbackMetrics | GES_CPU_USAGE | Info | GES has high CPU usage for 1 minute. | ges_process_cpu_seconds_total | Triggered when the average CPU usage (measured by ges_process_cpu_seconds_total) is greater than 90% for 1 minute. |
Draft:PEC-CAB/Current/CABPEGuide/CallbackMetrics | GES_DNS_FAILURE | Warning | A GES pod has encountered difficulty resolving DNS requests. | DNS_FAILURE | Triggered when GES encounters any DNS failures within the last 30 minutes. |
Draft:PEC-CAB/Current/CABPEGuide/CallbackMetrics | GES_GWS_AUTH_DOWN | Warning | Connection to the Genesys Authentication Service is down. | GWS_AUTH_STATUS | Triggered when the connection to the Genesys Authentication Service is down for 5 minutes. |
Draft:PEC-CAB/Current/CABPEGuide/CallbackMetrics | GES_GWS_CONFIG_DOWN | Warning | Connection to the GWS Configuration Service is down. | GWS_CONFIG_STATUS | Triggered when the connection to the GWS Configuration Service is down. |
Draft:PEC-CAB/Current/CABPEGuide/CallbackMetrics | GES_GWS_ENVIRONMENT_DOWN | Warning | Connection to the GWS Environment Service is down. | GWS_ENV_STATUS | Triggered when the connection to the GWS Environment Service is down. |
Draft:PEC-CAB/Current/CABPEGuide/CallbackMetrics | GES_GWS_INCORRECT_CLIENT_CREDENTIALS | Warning | The GWS client credentials provided to GES are incorrect. | GWS_INCORRECT_CLIENT_CREDENTIALS | Triggered when GWS has had any issue with the GES client credentials in the last 5 minutes. |
Draft:PEC-CAB/Current/CABPEGuide/CallbackMetrics | GES_GWS_SERVER_ERROR | Warning | GES has encountered server or connection errors with GWS. | GWS_SERVER_ERROR | Triggered when there has been a GWS server error in the past 5 minutes. |
Draft:PEC-CAB/Current/CABPEGuide/CallbackMetrics | GES_HEALTH | Critical | One or more downstream components (Postgres, Config Server, GWS, ORS) are down. '''Note:''' Because GES goes into a crash loop when Redis is down, this does not fire when Redis is down. | GES_HEALTH | Triggered when any component is down for any length of time. |
Draft:PEC-CAB/Current/CABPEGuide/CallbackMetrics | GES_HTTP_400_POD | Info | An individual GES pod is returning excessive HTTP 400 results. | ges_http_failed_requests_total, http_400_tolerance | Triggered when two or more HTTP 400 results are returned from a pod within a 5-minute period. |
Draft:PEC-CAB/Current/CABPEGuide/CallbackMetrics | GES_HTTP_401_POD | Info | An individual GES pod is returning excessive HTTP 401 results. | ges_http_failed_requests_total, http_401_tolerance | Triggered when two or more HTTP 401 results are returned from a pod within a 5-minute period. |
Draft:PEC-CAB/Current/CABPEGuide/CallbackMetrics | GES_HTTP_404_POD | Info | An individual GES pod is returning excessive HTTP 404 results. | ges_http_failed_requests_total, http_404_tolerance | Triggered when two or more HTTP 404 results are returned from a pod within a 5-minute period. |
Draft:PEC-CAB/Current/CABPEGuide/CallbackMetrics | GES_HTTP_500_POD | Info | An individual GES pod is returning excessive HTTP 500 results. | ges_http_failed_requests_total, http_500_tolerance | Triggered when two or more HTTP 500 results are returned from a pod within a 5-minute period. |
Draft:PEC-CAB/Current/CABPEGuide/CallbackMetrics | GES_INVALID_CONTENT_LENGTH | Info | Fires if GES encounters any incoming requests that exceed the maximum content length: 10 MB on the internal port and 500 KB on the external, public-facing port. | INVALID_CONTENT_LENGTH, invalid_content_length_tolerance | Triggered when one instance of a message with an invalid length is received. Silenced after 2 minutes. |
Draft:PEC-CAB/Current/CABPEGuide/CallbackMetrics | GES_LOGGING_FAILURE | Warning | GES has failed to write a message to the log. | LOGGING_FAILURE | Triggered when there are any failures writing to the logs. Silenced after 1 minute. |
Draft:PEC-CAB/Current/CABPEGuide/CallbackMetrics | GES_MEMORY_USAGE | Info | GES has high memory usage for a period of 90 seconds. | ges_nodejs_heap_space_size_used_bytes, ges_nodejs_heap_space_size_available_bytes | Triggered when memory usage (measured as a ratio of Used Heap Space vs Available Heap Space) is above 80% for a 90-second interval. |
Draft:PEC-CAB/Current/CABPEGuide/CallbackMetrics | GES_NEXUS_ACCESS_FAILURE | Warning | GES has been having difficulties contacting Nexus. This alert is only relevant for customers who leverage the Push Notification feature in Genesys Callback. | NEXUS_ACCESS_FAILURE | Triggered when GES has failed to connect or communicate with Nexus more than 30 times over the last hour. |
Draft:PEC-CAB/Current/CABPEGuide/CallbackMetrics | GES_NOT_READY_CRITICAL | Critical | GES pods are not in the Ready state. Indicative of issues with the Redis connection or other problems with the Helm deployment. | kube_pod_container_status_ready | Triggered when more than 50% of GES pods have not been in a Ready state for 5 minutes. |
Draft:PEC-CAB/Current/CABPEGuide/CallbackMetrics | GES_NOT_READY_WARNING | Warning | GES pods are not in the Ready state. Indicative of issues with the Redis connection or other problems with the Helm deployment. | kube_pod_container_status_ready | Triggered when 25% (or more) of GES pods have not been in a Ready state for 10 minutes. |
Draft:PEC-CAB/Current/CABPEGuide/CallbackMetrics | GES_ORS_REDIS_DOWN | Critical | Connection to ORS_REDIS is down. | ORS_REDIS_STATUS | Triggered when the ORS_REDIS connection is down for 5 consecutive minutes. |
Draft:PEC-CAB/Current/CABPEGuide/CallbackMetrics | GES_PODS_RESTART | Critical | GES pods have been excessively crashing and restarting. | kube_pod_container_status_restarts_total | Triggered when there have been more than five pod restarts in the past 15 minutes. |
Draft:PEC-CAB/Current/CABPEGuide/CallbackMetrics | GES_RBAC_CREATE_VQ_PROXY_ERROR | Info | Fires if there are issues with GES managing VQ Proxy Objects. | RBAC_CREATE_VQ_PROXY_ERROR, rbac_create_vq_proxy_error_tolerance | Triggered when there are at least 1000 instances of issues managing VQ Proxy objects within a 10-minute period. |
Draft:PEC-CAB/Current/CABPEGuide/CallbackMetrics | GES_SLOW_HTTP_RESPONSE_TIME | Warning | Fired if the average response time for incoming requests begins to lag. | ges_http_request_duration_seconds_sum, ges_http_request_duration_seconds_count | Triggered when the average response time for incoming requests is above 1.5 seconds for a sustained period of 15 minutes. |
Draft:PEC-CAB/Current/CABPEGuide/CallbackMetrics | GES_UNCAUGHT_EXCEPTION | Warning | There has been an uncaught exception within GES. | UNCAUGHT_EXCEPTION | Triggered when GES encounters any uncaught exceptions. Silenced after 1 minute. |
Draft:PEC-CAB/Current/CABPEGuide/CallbackMetrics | GES_UP | Critical | Fires when fewer than two GES pods have been up for the last 15 minutes. | | Triggered when fewer than two GES pods are up for 15 consecutive minutes. |
Draft:PEC-DC/Current/DCPEGuide/DCMetrics | Memory usage is above 3000 Mb | Critical | Triggered when the memory usage on this pod is above 3000 Mb for 15 minutes. | nexus_process_resident_memory_bytes | For 15 minutes |
Draft:PEC-DC/Current/DCPEGuide/DCMetrics | Nexus error rate | Critical | Triggered when the error rate on this pod is greater than 20% for 15 minutes. | nexus_errors_total, nexus_request_total | For 15 minutes |
Draft:PEC-IWD/Current/IWDPEGuide/IWD metrics and alerts | Database connections above 75 | HIGH | Triggered when the pod database connections number is above 75. | | Default number of connections: 75 |
Draft:PEC-IWD/Current/IWDPEGuide/IWD metrics and alerts | IWD DB errors | CRITICAL | Triggered when IWD experiences more than 2 errors within 1 minute during database operations. | | Default number of errors: 2 |
Draft:PEC-IWD/Current/IWDPEGuide/IWD metrics and alerts | IWD error rate | CRITICAL | Triggered when the number of errors in IWD exceeds the threshold within a 15-min period. | | Default number of errors: 2 |
Draft:PEC-IWD/Current/IWDPEGuide/IWD metrics and alerts | Memory usage is above 3000 Mb | CRITICAL | Triggered when the pod memory usage is above 3000 MB. | | Default memory usage: 3000 MB |
Draft:PEC-OU/Current/CXCPEGuide/APIAMetrics | CXC-API-LatencyHigh | HIGH | Triggered when the latency for API responses is beyond the defined threshold. | | 2500ms for 5m |
Draft:PEC-OU/Current/CXCPEGuide/APIAMetrics | CXC-API-Redis-Connection-Failed | HIGH | Triggered when the connection to Redis fails for more than 1 minute. | | 1m |
Draft:PEC-OU/Current/CXCPEGuide/APIAMetrics | CXC-CPUUsage | HIGH | Triggered when the CPU utilization of a pod is beyond the threshold. | | 300% for 5m |
Draft:PEC-OU/Current/CXCPEGuide/APIAMetrics | CXC-EXT-Ingress-Error-Rate | HIGH | Triggered when the Ingress error rate is above the specified threshold. | | 20% for 5m |
Draft:PEC-OU/Current/CXCPEGuide/APIAMetrics | CXC-MemoryUsage | HIGH | Triggered when the memory utilization of a pod is beyond the threshold. | | 70% for 5m |
Draft:PEC-OU/Current/CXCPEGuide/APIAMetrics | CXC-MemoryUsagePD | HIGH | Triggered when the memory usage of a pod is above the critical threshold. | | 90% for 5m |
Draft:PEC-OU/Current/CXCPEGuide/APIAMetrics | CXC-PodNotReadyCount | HIGH | Triggered when the number of pods ready for a CX Contact deployment is less than or equal to the threshold. | | 1 for 5m |
Draft:PEC-OU/Current/CXCPEGuide/APIAMetrics | CXC-PodRestartsCount | HIGH | Triggered when the restart count for a pod is beyond the threshold. | | 1 for 5m |
Draft:PEC-OU/Current/CXCPEGuide/APIAMetrics | CXC-PodRestartsCountPD | HIGH | Triggered when the restart count is beyond the critical threshold. | | 5 for 5m |
Draft:PEC-OU/Current/CXCPEGuide/APIAMetrics | CXC-PodsNotReadyPD | HIGH | Triggered when there are no pods ready for CX Contact deployment. | | 0 for 1m |
Draft:PEC-OU/Current/CXCPEGuide/APIAMetrics | cxc_api_too_many_errors_from_auth | HIGH | Triggered when there are too many error responses from the auth service for more than the specified time threshold. | | 1m |
Draft:PEC-OU/Current/CXCPEGuide/CPGMMetrics | CXC-CM-Redis-Connection-Failed | HIGH | Triggered when the connection to Redis fails for more than 1 minute. | | 1m |
Draft:PEC-OU/Current/CXCPEGuide/CPGMMetrics | CXC-CPUUsage | HIGH | Triggered when the CPU utilization of a pod is beyond the threshold. | | 300% for 5m |
Draft:PEC-OU/Current/CXCPEGuide/CPGMMetrics | CXC-MemoryUsage | HIGH | Triggered when the memory utilization of a pod is beyond the threshold. | | 70% for 5m |
Draft:PEC-OU/Current/CXCPEGuide/CPGMMetrics | CXC-MemoryUsagePD | HIGH | Triggered when the memory usage of a pod is above the critical threshold. | | 90% for 5m |
Draft:PEC-OU/Current/CXCPEGuide/CPGMMetrics | CXC-PodNotReadyCount | HIGH | Triggered when the number of pods ready for a CX Contact deployment is less than or equal to the threshold. | | 1 for 5m |
Draft:PEC-OU/Current/CXCPEGuide/CPGMMetrics | CXC-PodRestartsCount | HIGH | Triggered when the restart count for a pod is beyond the threshold. | | 1 for 5m |
Draft:PEC-OU/Current/CXCPEGuide/CPGMMetrics | CXC-PodRestartsCountPD | HIGH | Triggered when the restart count is beyond the critical threshold. | | 5 for 5m |
Draft:PEC-OU/Current/CXCPEGuide/CPGMMetrics | CXC-PodsNotReadyPD | HIGH | Triggered when there are no pods ready for CX Contact deployment. | | 0 for 1m |
Draft:PEC-OU/Current/CXCPEGuide/CPLMMetrics | CXC-CoM-Redis-no-active-connections | HIGH | Triggered when CX Contact compliance has no active Redis connection for 2 minutes. | | 2m |
Draft:PEC-OU/Current/CXCPEGuide/CPLMMetrics | CXC-Compliance-LatencyHigh | HIGH | Triggered when the latency for API responses is beyond the defined threshold. | | 5000ms for 5m |
Draft:PEC-OU/Current/CXCPEGuide/CPLMMetrics | CXC-CPUUsage | HIGH | Triggered when the CPU utilization of a pod is beyond the threshold. | | 300% for 5m |
Draft:PEC-OU/Current/CXCPEGuide/CPLMMetrics | CXC-MemoryUsage | HIGH | Triggered when the memory utilization of a pod is beyond the threshold. | | 70% for 5m |
Draft:PEC-OU/Current/CXCPEGuide/CPLMMetrics | CXC-MemoryUsagePD | HIGH | Triggered when the memory usage of a pod is above the critical threshold. | | 90% for 5m |
Draft:PEC-OU/Current/CXCPEGuide/CPLMMetrics | CXC-PodNotReadyCount | HIGH | Triggered when the number of pods ready for a CX Contact deployment is less than or equal to the threshold. | | 1 for 5m |
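Many rows above pair raw metrics (the BasedOn column) with a threshold and a hold duration. As an illustration of how such an entry maps onto a Prometheus alerting rule, the following sketch encodes the ContainerRestartedRepeatedly and PVC80PercentFilled triggers from the Reporting Server table. The group name, label selectors, and exact expressions are assumptions for illustration; the rules actually shipped with the product Helm charts may differ.

```yaml
# Illustrative sketch only; not the rules shipped with the product Helm charts.
groups:
  - name: rs-alerts-sketch          # assumed group name
    rules:
      # "Restarted 5 or more times within 15 mins", from the restart counter.
      - alert: ContainerRestartedRepeatedly
        expr: increase(kube_pod_container_status_restarts_total[15m]) >= 5
        labels:
          severity: critical
      # "PVC size is 80% filled", expressed as a used/capacity ratio and
      # held for the 5-min threshold before firing.
      - alert: PVC80PercentFilled
        expr: |
          kubelet_volume_stats_used_bytes
            / kubelet_volume_stats_capacity_bytes > 0.80
        for: 5m
        labels:
          severity: critical
```

Both expressions compare or divide metrics that carry identical label sets, so no vector-matching clauses are needed; in practice a namespace or pod selector would be added to scope each rule to the RS workload.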