View table: PEAlert


Table structure:

  • Alert - Wikitext
  • Severity - String
  • AlertDescription - String
  • BasedOn - Wikitext
  • Threshold - String

This table has 851 rows altogether.
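
Each row maps a named alert to the Prometheus metric(s) it is based on (BasedOn) and the condition that fires it (Threshold). As an illustration only, and not the rule actually shipped with the product, an alert like GAUTH-Blue-Memory-Usage (memory above 70% of the limit for 5 minutes) could be expressed as a Prometheus alerting rule along the following lines; the group name and the namespace label value are hypothetical placeholders:

  groups:
    - name: gauth-alerts-example            # hypothetical group name
      rules:
        - alert: GAUTH-Blue-Memory-Usage
          # Per-pod ratio of memory in use to the configured memory limit.
          expr: |
            sum by (pod) (container_memory_usage_bytes{namespace="gauth"})
              /
            sum by (pod) (container_spec_memory_limit_bytes{namespace="gauth"})
              > 0.70
          for: 5m                           # "in the last 5 minutes"
          labels:
            severity: high
          annotations:
            summary: "Genesys Authentication pod memory usage is above 70% of its limit."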

Page Alert Severity AlertDescription BasedOn Threshold
AUTH/Current/AuthPEGuide/AUTHMetrics auth_auth_login_errors Critical Genesys Authentication has received more than 20 login errors for the contact center ID in the last 60 seconds. auth_system_login_errors_total More than 20 in 60 seconds
AUTH/Current/AuthPEGuide/AUTHMetrics auth_high_500_responces_count Critical Genesys Authentication has received more than 10 HTTP 500 responses. gws_responses_total More than 10
AUTH/Current/AuthPEGuide/AUTHMetrics auth_high_5xx_responces_count Critical Genesys Authentication has received more than 10 HTTP 5xx responses. gws_responses_total More than 10
AUTH/Current/AuthPEGuide/AUTHMetrics auth_high_jvm_gc_pause_seconds_count Critical JVM garbage collection occurs more than 10 times in the last 30 seconds. jvm_gc_pause_seconds_count More than 10 in 30 seconds
AUTH/Current/AuthPEGuide/AUTHMetrics auth_jvm_threads_deadlocked Critical Deadlocked JVM threads exist. jvm_threads_deadlocked 0
AUTH/Current/AuthPEGuide/AUTHMetrics auth_saml_response_errors High Genesys Authentication received more than 20 SAML errors for the contact center ID in the last 60 seconds. auth_saml_response_errors More than 20 in 60 seconds
AUTH/Current/AuthPEGuide/AUTHMetrics auth_saml_timing_errors High Genesys Authentication received more than 20 SAML timing errors for the contact center ID in the last 60 seconds. auth_saml_timing_errors More than 20 in 60 seconds
AUTH/Current/AuthPEGuide/AUTHMetrics auth_total_count_of_errors_during_context_initialization High Genesys Authentication received more than 10 errors in the last 30 seconds during context initialization. A spike might indicate a network or configuration problem. Check the logs for details. auth_context_error_total More than 10 in 30 seconds
AUTH/Current/AuthPEGuide/AUTHMetrics auth_total_count_of_errors_in_PSDK_connections High Genesys Authentication received more than 3 errors in PSDK connections in the last 30 seconds. A spike might indicate a problem with the backend or a network issue. Check the logs for details. psdk_conn_error_total More than 3 in 30 seconds
AUTH/Current/AuthPEGuide/AUTHMetrics GAUTH-Blue-CPU-Usage High A Genesys Authentication pod has CPU usage above 300% during the last 5 minutes. container_cpu_usage_seconds_total More than 300% in 5 minutes
AUTH/Current/AuthPEGuide/AUTHMetrics GAUTH-Blue-Memory-Usage High A Genesys Authentication pod has memory usage above 70% in the last 5 minutes. container_memory_usage_bytes, container_spec_memory_limit_bytes More than 70% in 5 minutes
AUTH/Current/AuthPEGuide/AUTHMetrics GAUTH-Blue-Memory-Usage-CRITICAL Critical A Genesys Authentication pod has memory usage above 90% in the last 5 minutes. container_memory_usage_bytes More than 90% in 5 minutes
AUTH/Current/AuthPEGuide/AUTHMetrics GAUTH-Blue-Pod-NotReady-Count High Genesys Authentication has only 1 pod ready in the last 5 minutes. kube_deployment_spec_replicas, kube_deployment_status_replicas_available 1 in 5 minutes
AUTH/Current/AuthPEGuide/AUTHMetrics GAUTH-Blue-Pod-Restarts-Count High A Genesys Authentication pod has restarted 1 or more times during the last 5 minutes. kube_pod_container_status_restarts_total 1 or more in 5 minutes
AUTH/Current/AuthPEGuide/AUTHMetrics GAUTH-Blue-Pod-Restarts-Count-CRITICAL Critical A Genesys Authentication pod has restarted more than 5 times in the last 5 minutes. kube_pod_container_status_restarts_total More than 5 in 5 minutes
AUTH/Current/AuthPEGuide/AUTHMetrics GAUTH-Blue-Pods-NotReady-CRITICAL Critical Genesys Authentication has 0 pods ready in the last 5 minutes. kube_deployment_status_replicas_available, kube_deployment_spec_replicas 0 in 5 minutes
DES/Current/DESPEGuide/DAS Metrics AbsentAlert (Alarm: Deployment availability) CRITICAL Triggered when DAS pod metrics are unavailable. 1; default interval: 60s
DES/Current/DESPEGuide/DAS Metrics containerReadyAlert (Alarm: Pod Ready Count) CRITICAL Triggered when a pod's ready count is less than the threshold (1). 1; default interval: 60s
DES/Current/DESPEGuide/DAS Metrics containerRestartAlert (Alarm: Pod Restarts Count) CRITICAL Triggered when a pod's restart count exceeds the threshold. 5; default interval: 180s
DES/Current/DESPEGuide/DAS Metrics CPUUtilization (Alarm: Pod CPU Usage) CRITICAL Triggered when a pod's CPU utilization exceeds the threshold. 75%; default interval: 180s
DES/Current/DESPEGuide/DAS Metrics Health (Alarm: Health Status) CRITICAL Triggered when DAS health status is 0. 0; default interval: 60s
DES/Current/DESPEGuide/DAS Metrics HTTP4XXCount (Alarm: Application 4XX Error) HIGH Triggered when DAS exceeds the 4xx error count threshold specified here. 100; default interval: 180s
DES/Current/DESPEGuide/DAS Metrics HTTP5XXCount (Alarm: Application 5XX Error) HIGH Triggered when DAS exceeds the allowed 5xx error count threshold specified here. 10; default interval: 180s
DES/Current/DESPEGuide/DAS Metrics HTTPLatency (Alarm: DAS HTTP Latency Alert) HIGH Triggered when the average time taken by an HTTP request is greater than the threshold (in seconds) specified here. 10s; default interval: 180s
DES/Current/DESPEGuide/DAS Metrics MemoryUtilization (Alarm: Pod Memory Usage) CRITICAL Triggered when a pod's memory utilization exceeds the threshold. 75%; default interval: 180s
DES/Current/DESPEGuide/DAS Metrics PHPHealth (Alarm: PHP Health Status) CRITICAL Triggered when Designer/DAS experiences a PHP health check failure. 0; default interval: 60s
DES/Current/DESPEGuide/DAS Metrics PhpLatency (Alarm: DAS PHP Latency Alert) HIGH Triggered when the average time taken by a PHP request is greater than the threshold (in seconds) specified here. 10s; default interval: 180s
DES/Current/DESPEGuide/DAS Metrics ProxyHealth (Alarm: Proxy Health Status) CRITICAL Triggered when Designer/DAS experiences a proxy health check failure. 0; default interval: 60s
DES/Current/DESPEGuide/DAS Metrics WorkspaceHealth (Alarm: Workspace Health Status) CRITICAL Triggered when DAS is not able to communicate with the workspace. 0; default interval: 60s
DES/Current/DESPEGuide/DAS Metrics WorkspaceUtilization (Alarm: Azure Fileshare PVC Usage) HIGH Triggered when file share usage is greater than the threshold. 80%; default interval: 180s
DES/Current/DESPEGuide/DES Metrics AbsentAlert (Alarm: Deployment availability) CRITICAL Triggered when Designer pod metrics are unavailable. 1; default interval: 60s
DES/Current/DESPEGuide/DES Metrics containerReadyAlert (Alarm: Pod Ready Count) CRITICAL Triggered when a pod's ready count is less than the threshold (1). 1; default interval: 60s
DES/Current/DESPEGuide/DES Metrics containerRestartAlert (Alarm: Pod Restarts Count) CRITICAL Triggered when a pod's restart count exceeds the threshold. 5; default interval: 180s
DES/Current/DESPEGuide/DES Metrics CPUUtilization (Alarm: Pod CPU Usage) CRITICAL Triggered when a pod's CPU utilization exceeds the threshold. 75%; default interval: 180s
DES/Current/DESPEGuide/DES Metrics ESHealth (Alarm: Elasticsearch Health Status) CRITICAL Triggered when Designer/DAS is not able to reach the Elasticsearch server. 0; default interval: 60s
DES/Current/DESPEGuide/DES Metrics GWSHealth (Alarm: GWS Health Status) CRITICAL Triggered when Designer/DAS is not able to reach the GWS server. 0; default interval: 60s
DES/Current/DESPEGuide/DES Metrics Health (Alarm: Health Status) CRITICAL Triggered when Designer health status is 0. 0; default interval: 60s
DES/Current/DESPEGuide/DES Metrics MemoryUtilization (Alarm: Pod Memory Usage) CRITICAL Triggered when a pod's memory utilization exceeds the threshold. 75%; default interval: 180s
DES/Current/DESPEGuide/DES Metrics WorkspaceHealth (Alarm: Workspace Health Status) CRITICAL Triggered when Designer is not able to communicate with the workspace. 0; default interval: 60s
DES/Current/DESPEGuide/DES Metrics WorkspaceUtilization (Alarm: Azure Fileshare PVC Usage) HIGH Triggered when file share usage is greater than the threshold. 80%; default interval: 180s
Draft:AUTH/Current/AuthPEGuide/AUTHMetrics auth_auth_login_errors Critical Genesys Authentication has received more than 20 login errors for the contact center ID in the last 60 seconds. auth_system_login_errors_total More than 20 in 60 seconds
Draft:AUTH/Current/AuthPEGuide/AUTHMetrics auth_high_500_responces_count Critical Genesys Authentication has received more than 10 HTTP 500 responses. gws_responses_total More than 10
Draft:AUTH/Current/AuthPEGuide/AUTHMetrics auth_high_5xx_responces_count Critical Genesys Authentication has received more than 10 HTTP 5xx responses. gws_responses_total More than 10
Draft:AUTH/Current/AuthPEGuide/AUTHMetrics auth_high_jvm_gc_pause_seconds_count Critical JVM garbage collection occurs more than 10 times in the last 30 seconds. jvm_gc_pause_seconds_count More than 10 in 30 seconds
Draft:AUTH/Current/AuthPEGuide/AUTHMetrics auth_jvm_threads_deadlocked Critical Deadlocked JVM threads exist. jvm_threads_deadlocked 0
Draft:AUTH/Current/AuthPEGuide/AUTHMetrics auth_saml_response_errors High Genesys Authentication received more than 20 SAML errors for the contact center ID in the last 60 seconds. auth_saml_response_errors More than 20 in 60 seconds
Draft:AUTH/Current/AuthPEGuide/AUTHMetrics auth_saml_timing_errors High Genesys Authentication received more than 20 SAML timing errors for the contact center ID in the last 60 seconds. auth_saml_timing_errors More than 20 in 60 seconds
Draft:AUTH/Current/AuthPEGuide/AUTHMetrics auth_total_count_of_errors_during_context_initialization High Genesys Authentication received more than 10 errors in the last 30 seconds during context initialization. A spike might indicate a network or configuration problem. Check the logs for details. auth_context_error_total More than 10 in 30 seconds
Draft:AUTH/Current/AuthPEGuide/AUTHMetrics auth_total_count_of_errors_in_PSDK_connections High Genesys Authentication received more than 3 errors in PSDK connections in the last 30 seconds. A spike might indicate a problem with the backend or a network issue. Check the logs for details. psdk_conn_error_total More than 3 in 30 seconds
Draft:AUTH/Current/AuthPEGuide/AUTHMetrics GAUTH-Blue-CPU-Usage High A Genesys Authentication pod has CPU usage above 300% during the last 5 minutes. container_cpu_usage_seconds_total More than 300% in 5 minutes
Draft:AUTH/Current/AuthPEGuide/AUTHMetrics GAUTH-Blue-Memory-Usage High A Genesys Authentication pod has memory usage above 70% in the last 5 minutes. container_memory_usage_bytes, container_spec_memory_limit_bytes More than 70% in 5 minutes
Draft:AUTH/Current/AuthPEGuide/AUTHMetrics GAUTH-Blue-Memory-Usage-CRITICAL Critical A Genesys Authentication pod has memory usage above 90% in the last 5 minutes. container_memory_usage_bytes More than 90% in 5 minutes
Draft:AUTH/Current/AuthPEGuide/AUTHMetrics GAUTH-Blue-Pod-NotReady-Count High Genesys Authentication has only 1 pod ready in the last 5 minutes. kube_deployment_spec_replicas, kube_deployment_status_replicas_available 1 in 5 minutes
Draft:AUTH/Current/AuthPEGuide/AUTHMetrics GAUTH-Blue-Pod-Restarts-Count High A Genesys Authentication pod has restarted 1 or more times during the last 5 minutes. kube_pod_container_status_restarts_total 1 or more in 5 minutes
Draft:AUTH/Current/AuthPEGuide/AUTHMetrics GAUTH-Blue-Pod-Restarts-Count-CRITICAL Critical A Genesys Authentication pod has restarted more than 5 times in the last 5 minutes. kube_pod_container_status_restarts_total More than 5 in 5 minutes
Draft:AUTH/Current/AuthPEGuide/AUTHMetrics GAUTH-Blue-Pods-NotReady-CRITICAL Critical Genesys Authentication has 0 pods ready in the last 5 minutes. kube_deployment_status_replicas_available, kube_deployment_spec_replicas 0 in 5 minutes
Draft:DES/Current/DESPEGuide/DAS Metrics AbsentAlert (Alarm: Deployment availability) CRITICAL Triggered when DAS pod metrics are unavailable. 1; default interval: 60s
Draft:DES/Current/DESPEGuide/DAS Metrics containerReadyAlert (Alarm: Pod Ready Count) CRITICAL Triggered when a pod's ready count is less than the threshold (1). 1; default interval: 60s
Draft:DES/Current/DESPEGuide/DAS Metrics containerRestartAlert (Alarm: Pod Restarts Count) CRITICAL Triggered when a pod's restart count exceeds the threshold. 5; default interval: 180s
Draft:DES/Current/DESPEGuide/DAS Metrics CPUUtilization (Alarm: Pod CPU Usage) CRITICAL Triggered when a pod's CPU utilization exceeds the threshold. 75%; default interval: 180s
Draft:DES/Current/DESPEGuide/DAS Metrics Health (Alarm: Health Status) CRITICAL Triggered when DAS health status is 0. 0; default interval: 60s
Draft:DES/Current/DESPEGuide/DAS Metrics HTTP4XXCount (Alarm: Application 4XX Error) HIGH Triggered when DAS exceeds the 4xx error count threshold specified here. 100; default interval: 180s
Draft:DES/Current/DESPEGuide/DAS Metrics HTTP5XXCount (Alarm: Application 5XX Error) HIGH Triggered when DAS exceeds the allowed 5xx error count threshold specified here. 10; default interval: 180s
Draft:DES/Current/DESPEGuide/DAS Metrics HTTPLatency (Alarm: DAS HTTP Latency Alert) HIGH Triggered when the average time taken by an HTTP request is greater than the threshold (in seconds) specified here. 10s; default interval: 180s
Draft:DES/Current/DESPEGuide/DAS Metrics MemoryUtilization (Alarm: Pod Memory Usage) CRITICAL Triggered when a pod's memory utilization exceeds the threshold. 75%; default interval: 180s
Draft:DES/Current/DESPEGuide/DAS Metrics PHPHealth (Alarm: PHP Health Status) CRITICAL Triggered when Designer/DAS experiences a PHP health check failure. 0; default interval: 60s
Draft:DES/Current/DESPEGuide/DAS Metrics PhpLatency (Alarm: DAS PHP Latency Alert) HIGH Triggered when the average time taken by a PHP request is greater than the threshold (in seconds) specified here. 10s; default interval: 180s
Draft:DES/Current/DESPEGuide/DAS Metrics ProxyHealth (Alarm: Proxy Health Status) CRITICAL Triggered when Designer/DAS experiences a proxy health check failure. 0; default interval: 60s
Draft:DES/Current/DESPEGuide/DAS Metrics WorkspaceHealth (Alarm: Workspace Health Status) CRITICAL Triggered when DAS is not able to communicate with the workspace. 0; default interval: 60s
Draft:DES/Current/DESPEGuide/DAS Metrics WorkspaceUtilization (Alarm: Azure Fileshare PVC Usage) HIGH Triggered when file share usage is greater than the threshold. 80%; default interval: 180s
Draft:DES/Current/DESPEGuide/DES Metrics AbsentAlert (Alarm: Deployment availability) CRITICAL Triggered when Designer pod metrics are unavailable. 1; default interval: 60s
Draft:DES/Current/DESPEGuide/DES Metrics containerReadyAlert (Alarm: Pod Ready Count) CRITICAL Triggered when a pod's ready count is less than the threshold (1). 1; default interval: 60s
Draft:DES/Current/DESPEGuide/DES Metrics containerRestartAlert (Alarm: Pod Restarts Count) CRITICAL Triggered when a pod's restart count exceeds the threshold. 5; default interval: 180s
Draft:DES/Current/DESPEGuide/DES Metrics CPUUtilization (Alarm: Pod CPU Usage) CRITICAL Triggered when a pod's CPU utilization exceeds the threshold. 75%; default interval: 180s
Draft:DES/Current/DESPEGuide/DES Metrics ESHealth (Alarm: Elasticsearch Health Status) CRITICAL Triggered when Designer/DAS is not able to reach the Elasticsearch server. 0; default interval: 60s
Draft:DES/Current/DESPEGuide/DES Metrics GWSHealth (Alarm: GWS Health Status) CRITICAL Triggered when Designer/DAS is not able to reach the GWS server. 0; default interval: 60s
Draft:DES/Current/DESPEGuide/DES Metrics Health (Alarm: Health Status) CRITICAL Triggered when Designer health status is 0. 0; default interval: 60s
Draft:DES/Current/DESPEGuide/DES Metrics MemoryUtilization (Alarm: Pod Memory Usage) CRITICAL Triggered when a pod's memory utilization exceeds the threshold. 75%; default interval: 180s
Draft:DES/Current/DESPEGuide/DES Metrics WorkspaceHealth (Alarm: Workspace Health Status) CRITICAL Triggered when Designer is not able to communicate with the workspace. 0; default interval: 60s
Draft:DES/Current/DESPEGuide/DES Metrics WorkspaceUtilization (Alarm: Azure Fileshare PVC Usage) HIGH Triggered when file share usage is greater than the threshold. 80%; default interval: 180s
Draft:GVP/Current/GVPPEGuide/GVP Configuration Server Metrics ContainerCPUreached70percentForConfigserver HIGH Triggered when the Configserver container's CPU utilization stays above 70% for 15 minutes. container_cpu_usage_seconds_total, container_spec_cpu_quota, container_spec_cpu_period 15 mins
Draft:GVP/Current/GVPPEGuide/GVP Configuration Server Metrics ContainerMemoryUseOver1GBForConfigserver HIGH Triggered when the Configserver container's working memory exceeds 1 GB for 15 minutes. container_memory_working_set_bytes 15 mins
Draft:GVP/Current/GVPPEGuide/GVP Configuration Server Metrics ContainerMemoryUseOver90PercentForConfigserver HIGH Triggered when the Configserver container's working memory exceeds 90% of its limit for 15 minutes. container_memory_working_set_bytes, kube_pod_container_resource_limits_memory_bytes 15 mins
Draft:GVP/Current/GVPPEGuide/GVP Configuration Server Metrics ContainerNotRunningForConfigserver HIGH Triggered when the Configserver container has not been running for 15 minutes. kube_pod_container_status_running 15 mins
Draft:GVP/Current/GVPPEGuide/GVP Configuration Server Metrics ContainerNotRunningForServiceHandler MEDIUM Triggered when the service-handler container has not been running for 15 minutes. kube_pod_container_status_running 15 mins
Draft:GVP/Current/GVPPEGuide/GVP Configuration Server Metrics ContainerRestartsOver4ForConfigserver HIGH Triggered when the Configserver container has restarted more than 4 times in 15 minutes. kube_pod_container_status_restarts_total 15 mins
Draft:GVP/Current/GVPPEGuide/GVP Configuration Server Metrics ContainerRestartsOver4ForServiceHandler MEDIUM Triggered when the service-handler container has restarted more than 4 times in 15 minutes. kube_pod_container_status_running 15 mins
Draft:GVP/Current/GVPPEGuide/GVP MCP Metrics ContainerCPUreached70percentForMCP HIGH Triggered when the MCP container's CPU utilization stays above 70% for 5 minutes. container_cpu_usage_seconds_total, container_spec_cpu_quota, container_spec_cpu_period 15 mins
Draft:GVP/Current/GVPPEGuide/GVP MCP Metrics ContainerMemoryUseOver7GBForMCP HIGH Triggered when the MCP container's working memory exceeds 7 GB for 5 minutes. container_memory_working_set_bytes 15 mins
Draft:GVP/Current/GVPPEGuide/GVP MCP Metrics ContainerMemoryUseOver90PercentForMCP HIGH Triggered when the MCP container's working memory exceeds 90% of its limit for 5 minutes. container_memory_working_set_bytes, kube_pod_container_resource_limits_memory_bytes 15 mins
Draft:GVP/Current/GVPPEGuide/GVP MCP Metrics ContainerRestartsOver2ForMCP HIGH Triggered when the MCP container has restarted more than 2 times in 15 minutes. kube_pod_container_status_restarts_total 15 mins
Draft:GVP/Current/GVPPEGuide/GVP MCP Metrics MCP_MEDIA_ERROR_CRITICAL CRITICAL Triggered when the number of LMSIP media errors exceeds the critical limit. gvp_mcp_log_parser_eror_total {LogID="33008",endpoint="mcplog"...} 30 mins
Draft:GVP/Current/GVPPEGuide/GVP MCP Metrics MCP_SDP_PARSE_ERROR WARNING Triggered when the number of SDP parse errors exceeds the limit. gvp_mcp_log_parser_eror_total {LogID="33006",endpoint="mcplog"...} N/A
Draft:GVP/Current/GVPPEGuide/GVP MCP Metrics MCP_WEBSOCKET_CLIENT_OPEN_ERROR HIGH Triggered when there are errors opening a session with a WebSocket client. gvp_mcp_log_parser_eror_total {LogID="40026",endpoint="mcplog"...} N/A
Draft:GVP/Current/GVPPEGuide/GVP MCP Metrics MCP_WEBSOCKET_CLIENT_PROTOCOL_ERROR HIGH Triggered when there are protocol errors with a WebSocket client. gvp_mcp_log_parser_eror_total {LogID="40026",endpoint="mcplog"...} N/A
Draft:GVP/Current/GVPPEGuide/GVP MCP Metrics MCP_WEBSOCKET_TOKEN_CONFIG_ERROR HIGH Triggered when there are errors retrieving Auth token information for a WebSocket client. gvp_mcp_log_parser_eror_total {LogID="40026",endpoint="mcplog"...} N/A
Draft:GVP/Current/GVPPEGuide/GVP MCP Metrics MCP_WEBSOCKET_TOKEN_CREATE_ERROR HIGH Triggered when there are errors creating a JWT token for a WebSocket client. gvp_mcp_log_parser_eror_total {LogID="40026",endpoint="mcplog"...} N/A
Draft:GVP/Current/GVPPEGuide/GVP MCP Metrics MCP_WEBSOCKET_TOKEN_FETCH_ERROR HIGH Triggered when there are errors fetching the Auth token for a WebSocket client. gvp_mcp_log_parser_eror_total {LogID="40026",endpoint="mcplog"...} N/A
Draft:GVP/Current/GVPPEGuide/GVP MCP Metrics NGI_LOG_FETCH_RESOURCE_ERROR MEDIUM Triggered when the number of VXMLi fetch errors exceeds the limit. gvp_mcp_log_parser_eror_total {LogID="40026",endpoint="mcplog"...} 1 min
Draft:GVP/Current/GVPPEGuide/GVP MCP Metrics NGI_LOG_FETCH_RESOURCE_ERROR_4XX WARNING Triggered when the number of VXMLi 4xx fetch errors exceeds the limit. gvp_mcp_log_parser_eror_total {LogID="40032",endpoint="mcplog"...} 1 min
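
For the GVP rows, the BasedOn column lists several metrics that are combined into a single expression. As a hedged sketch only, appended to a rule group like the one above (the real rule may differ, and the container label value is an assumption), ContainerCPUreached70percentForConfigserver could compare the container's CPU usage rate against the limit derived from its quota and period:

        - alert: ContainerCPUreached70percentForConfigserver
          # CPU seconds consumed per second, divided by the CPU limit
          # (quota / period), compared against 70%.
          expr: |
            sum by (pod) (rate(container_cpu_usage_seconds_total{container="configserver"}[5m]))
              /
            sum by (pod) (container_spec_cpu_quota{container="configserver"}
                          / container_spec_cpu_period{container="configserver"})
              > 0.70
          for: 15m                          # matches the 15-minute threshold column
          labels:
            severity: high

For the Designer/DAS rows, the "default interval" noted in the Threshold column would correspond to the rule's evaluation interval (or its for: duration) under the same caveat: these sketches illustrate how the columns relate, not the rules actually deployed.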

