Shared Provisioning
Learn how to configure Genesys Pulse.
Prerequisites
Complete the Before you begin instructions.
Information you will need:
- Versions:
- <image-version> = 100.0.000.0005
- <chart-version> = 100.0.000+0005
- K8S namespace: pulse
- Project name: pulse
- Postgres credentials:
- <db-host>
- <db-port>
- <db-name>
- <db-user>
- <db-user-password>
- <db-ssl-mode>
- Docker credentials (see the example following this list):
- <docker-registry>
- <docker-registry-secret-name>
- Redis credentials:
- <redis-host>
- <redis-port>
- <redis-password>
- <redis-enable-ssl>
- Tenant service variables:
- <tenant-uuid>
- <tenant-sid>
- <tenant-name>
- <tenant-dcu>
- GAuth/GWS service variables:
- <gauth-url-external>
- <gauth-url-internal>
- <gauth-client-id>
- <gauth-client-secret>
- <gws-url-external>
- <gws-url-internal>
- Storage class:
- <pv-storage-class-rw-many>
- Pulse:
- <pulse-host>
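If the pulse project and the Docker registry pull secret do not yet exist in your cluster, a minimal sketch for creating them is shown below. The <docker-registry-user> and <docker-registry-password> placeholders are illustrative and are not part of the list above; substitute the credentials for your registry.
oc new-project pulse
oc create secret docker-registry <docker-registry-secret-name> --docker-server=<docker-registry> --docker-username=<docker-registry-user> --docker-password=<docker-registry-password> --namespace=pulse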
Single namespace
If you plan to deploy into a single namespace (OpenShift software-defined networking [SDN] with multi-tenant mode, where namespaces are network isolated), ensure that your inputs meet the following requirements; illustrative examples appear after the list:
- Backend services deployed into the single namespace must include the string pulse:
<db-host> <db-name> <redis-host>
- The hostname used for Ingress must be unique, and must include the string pulse:
<pulse-host>
- Internal service-to-service traffic must use the service endpoints, rather than the Ingress:
<gauth-url-internal> <gws-url-internal>
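For illustration only, a hypothetical set of values that satisfies these rules might look like the following; your actual hostnames and service endpoints will differ:
<db-host> = pulse-postgres.pulse.svc.cluster.local
<db-name> = pulse
<redis-host> = pulse-redis.pulse.svc.cluster.local
<pulse-host> = pulse.example.com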
Deployment
init Helm chart
This chart is used to initialize the shared PostgreSQL database.
Get init Helm chart
helm repo update
helm search repo pulsehelmrepo/init
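These commands assume the Helm repository that hosts the Pulse charts is already added to your Helm client under the name pulsehelmrepo. If it is not, add it first; the <pulse-helm-repo-url> placeholder below is illustrative and must be replaced with your repository URL:
helm repo add pulsehelmrepo <pulse-helm-repo-url>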
Prepare override-init file
Create a file with the following content, entering appropriate values where indicated, and save the file as values-override-init.yaml:
# Default values for init.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
image:
  tag: "<image-version>"
  pullPolicy: IfNotPresent
  registry: "<docker-registry>"
imagePullSecrets: [name: "<docker-registry-secret-name>"]
# tenant identification, or empty for shared deployment
tenants:
  - id: "<tenant-uuid>"
    name: "<tenant-name>"
    key: "<tenant-sid>"
    dcu: "<tenant-dcu>"
# common configuration.
config:
  # set "true" to create config maps
  createConfigMap: true
  # set "true" to create secrets
  createSecret: true
  # Postgres config - fill when createConfigMap: true
  # Postgres config map name
  postgresConfig: "pulse-postgres-configmap"
  # Postgres hostname
  postgresHost: "<postgres-hostname>"
  # Postgres port
  postgresPort: "<postgres-port>"
  # Postgres SSL mode
  postgresEnableSSL: "<postgres-ssl-mode>"
  # Postgres secret config - fill when createSecret: true
  # Postgres User
  postgresUser: "<postgres-user>"
  # Postgres Password
  postgresPassword: "<postgres-password>"
  # Secret name for postgres
  postgresSecret: "pulse-postgres-secret"
  # Secret key for postgres user
  postgresSecretUser: "META_DB_ADMIN"
  # Secret key for postgres password
  postgresSecretPassword: "META_DB_ADMINPWD"
  # Redis config - fill when createConfigMap: true
  # Redis config map name
  redisConfig: "pulse-redis-configmap"
  # Redis host
  redisHost: "<redis-hostname>"
  # Redis port
  redisPort: "<redis-port>"
  # Redis SSL enabled
  redisEnableSSL: "false"
  # Redis secret config - fill when createSecret: true
  # Password for Redis
  redisKey: "<redis-key>"
  # Secret name for Redis
  redisSecret: "pulse-redis-secret"
  # Secret key for Redis password
  redisSecretKey: "REDIS01_KEY"
  # GWS secret config - fill when createSecret: true
  # Client ID
  gwsClientId: "<gws-client-id>"
  # Client Secret
  gwsClientSecret: "<gws-client-secret>"
  # Secret name
  gwsSecret: "pulse-gws-secret"
  # Secret key for Client ID
  gwsSecretClientId: "clientId"
  # Secret key for Client Secret
  gwsSecretClientSecret: "clientSecret"
  # fill database name
  dbName: "<db-name>"
  # set "true" when need @host added for username
  dbUserWithHost: true
  # set "true" for CSI secrets
  mountSecrets: false
## Service account settings
serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""
## Add annotations to all pods
##
podAnnotations: {}
## Specifies the security context for all Pods in the service
##
podSecurityContext:
  runAsNonRoot: true
  runAsUser: 500
  runAsGroup: 500
  fsGroup: 0
## Resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
  limits:
    memory: 256Mi
    cpu: 200m
  requests:
    memory: 128Mi
    cpu: 100m
## Priority Class
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
##
priorityClassName: ""
## Node labels for assignment.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}
## Tolerations for assignment.
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
Install init Helm chart
Run:
helm upgrade --install pulse-init pulsehelmrepo/init --wait --wait-for-jobs --version=<chart-version> --namespace=pulse -f values-override-init.yaml
The command finishes with exit code 0 if the installation is successful.
Validate init Helm chart
Execute the following command to validate the Helm chart initialization. The pulse-init job should have a status of Completed:
oc get pods -n=pulse -l "app.kubernetes.io/name=init,app.kubernetes.io/instance=pulse-init"
NAME READY STATUS RESTARTS AGE
pulse-init-job-5669c 0/1 Completed 0 79m
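Optionally, confirm that the ConfigMaps and Secrets named in values-override-init.yaml were created, and inspect the job logs if the status is not Completed. The pod name below is taken from the example output above; substitute your own:
oc get configmap pulse-postgres-configmap pulse-redis-configmap -n=pulse
oc get secret pulse-postgres-secret pulse-redis-secret pulse-gws-secret -n=pulse
oc logs -n=pulse pulse-init-job-5669c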
pulse Helm chart
This chart is used to install the shared part of the Pulse deployment.
Get pulse Helm chart
helm repo update
helm search repo pulsehelmrepo/pulse
Prepare override-pulse file
Create a file with the following content, entering appropriate values where indicated, and save the file as values-override-pulse.yaml:
# Default values for pulse.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
image:
  tag: "<image-version>"
  pullPolicy: IfNotPresent
  registry: "<docker-registry>"
imagePullSecrets: [name: "<docker-registry-secret-name>"]
replicaCount: 2
# common configuration.
config:
  dbName: "<db-name>"
  # set "true" when need @host added for username
  dbUserWithHost: true
  # set "true" for CSI secrets
  mountSecrets: false
  # Postgres config map name
  postgresConfig: "pulse-postgres-configmap"
  # Postgres secret name
  postgresSecret: "pulse-postgres-secret"
  # Postgres secret key for user
  postgresSecretUser: "META_DB_ADMIN"
  # Postgres secret key for password
  postgresSecretPassword: "META_DB_ADMINPWD"
  # Redis config map name
  redisConfig: "pulse-redis-configmap"
  # Redis secret name
  redisSecret: "pulse-redis-secret"
  # Redis secret key for access key
  redisSecretKey: "REDIS01_KEY"
  # GAuth secret name
  gwsSecret: "pulse-gws-secret"
  # GAuth secret key for client_id
  gwsSecretClientId: "clientId"
  # GAuth secret key for client_secret
  gwsSecretClientSecret: "clientSecret"
# monitoring settings
monitoring:
  # enable the Prometheus metrics endpoint
  enabled: false
  # port is <options.managementPort>
  # HTTP path is <options.managementContext><options.prometheusEndpoint>
  # additional annotations required for monitoring PODs
  # you can reference values of other variables as {{.Values.variable.full.name}}
  podAnnotations: {}
    # prometheus.io/scrape: "true"
    # prometheus.io/port: "{{.Values.options.managementPort}}"
    # prometheus.io/path: "{{.Values.options.managementContext}}{{.Values.options.prometheusEndpoint}}"
  serviceMonitor:
    # enables ServiceMonitor creation
    enabled: false
    # interval at which metrics should be scraped
    scrapeInterval: 30s
    # timeout after which the scrape is ended
    scrapeTimeout:
    # namespace of the ServiceMonitor, defaults to the namespace of the service
    namespace:
    additionalLabels: {}
# common log configuration
log:
  # target directory where log will be stored, leave empty for default
  logDir: ""
  # path where volume will be mounted
  volumeMountPath: /data/log
  # log volume type: none | hostpath | pvc
  volumeType: pvc
  # log volume hostpath, used with volumeType "hostpath"
  volumeHostPath: /mnt/log
  # log PVC parameters, used with volumeType "pvc"
  pvc:
    name: pulse-logs
    accessModes:
      - ReadWriteMany
    capacity: 10Gi
    class: <pv-storage-class-rw-many>
# application options
options:
  authUrl: "https://<gauth-url-external>"
  authUrlInt: "http://<gauth-url-internal>"
  gwsUrl: "https://<gws-url-external>"
  gwsUrlInt: "http://<gws-url-internal>"
## Service account settings
serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""
## Add annotations to all pods
##
podAnnotations: {}
## Add labels to all pods
##
podLabels: {}
## Specifies the security context for all Pods in the service
##
podSecurityContext:
  runAsNonRoot: true
  runAsUser: 500
  runAsGroup: 500
  fsGroup: 0
## Ingress configuration
ingress:
  enabled: true
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
    ## recommended to increase proxy-body-size size
    # nginx.ingress.kubernetes.io/proxy-body-size: 5m
  hosts:
    - host: "<pulse-host>"
      paths: [/]
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local
gateway:
  enabled: false
## Resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
  limits:
    memory: 4Gi
    cpu: 1
  requests:
    memory: 650Mi
    cpu: 100m
## HPA Settings
## Not supported in this release!
hpa:
  enabled: false
## Priority Class
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
##
priorityClassName: ""
## Node labels for assignment.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}
## Tolerations for assignment.
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
## Pod Disruption Budget Settings
podDisruptionBudget:
  enabled: false
## Affinity for assignment.
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}
# control network policies
networkPolicies:
  enabled: false
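Because the log PVC requests ReadWriteMany access, the storage class supplied as <pv-storage-class-rw-many> must support that access mode. As a quick pre-installation check, for example:
oc get storageclass <pv-storage-class-rw-many>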
Install pulse Helm chart
Run:
helm upgrade --install pulse pulsehelmrepo/pulse --wait --version=<chart-version> --namespace=pulse -f values-override-pulse.yaml
The command finishes with exit code 0 if the installation is successful.
Validate pulse Helm chart
Execute the following command to list all running Pulse pods:
oc get pods -n=pulse -l "app.kubernetes.io/name=pulse,app.kubernetes.io/instance=pulse"
NAME READY STATUS RESTARTS AGE
pulse-648b9d6666-f5d84 1/1 Running 0 22m
pulse-648b9d6666-kqhs6 1/1 Running 0 68m
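If Ingress is enabled, as in the values file above, you can also verify that the Ingress for <pulse-host> was created, for example:
oc get ingress -n=pulse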
Validation
Use the following procedures to validate the deployment.
Check logs for error
- Execute the following command to check the log files:
oc logs -n=pulse <pulse-pod-id>
- Where: <pulse-pod-id> is the pod identifier.
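To narrow the log output to error messages only, you can filter it, for example:
oc logs -n=pulse <pulse-pod-id> | grep -i error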
Health validation
- Download the health validation metrics from the following endpoint:
- GET /actuator/metrics/pulse.health.all
- Open two console windows, and execute the following commands:
- Console 1:
oc get pods --namespace pulse -l "app.kubernetes.io/name=pulse,app.kubernetes.io/instance=pulse" -o jsonpath="{.items[0].metadata.name}"
oc --namespace pulse port-forward <pod-name> 8090:8090
- Console 2:
curl -X GET http://127.0.0.1:8090/actuator/metrics/pulse.health.all -H 'Content-Type: application/json'
- If Pulse is running correctly and can connect to Redis and PostgreSQL, the following is returned:
- the HTTP response code is 200
- the JSON response has a measurement value of 1.0, for example:
{ "name": "pulse.health.all", "description": "Provides overall application status", "baseUnit": "Boolean", "measurements": [ { "statistic": "VALUE", "value": 1 } ], "availableTags": [ { "tag": "deployment.code", "values": [ "pulse" ] }, { "tag": "application.name", "values": [ "pulse" ] } ] }
- Console 1: Press Ctrl+C to stop the port forwarding when you are finished.
Troubleshooting
Use the following procedures to troubleshoot problems with the deployment.
Check init Helm manifests
Execute the following command to output init Helm manifest files into the helm-template directory:
helm template --version=<chart-version> --namespace=pulse --debug --output-dir helm-template init pulsehelmrepo/init -f values-override-init.yaml
Where: <chart-version> is the Helm chart version.
Check Pulse Helm manifests
Execute the following command to output Pulse Helm manifest files into the helm-template directory:
helm template --version=<chart-version> --namespace=pulse --debug --output-dir helm-template pulse pulsehelmrepo/pulse -f values-override-pulse.yaml
Where: <chart-version> is the Helm chart version.
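helm template writes the rendered files into a subdirectory named after the chart, so you can review individual manifests from there, for example:
ls helm-template/pulse/templates/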
Override Helm chart values
For more information about overriding Helm chart values, see the suite-level documentation: Overriding Helm chart values.
Configure security
Arbitrary UIDs
If your OpenShift deployment uses arbitrary UIDs, you must override the securityContext settings so that no specific user or group IDs are defined. By default, the user and group IDs are set to 500:500:500. Update the podSecurityContext section in the values override file for each chart as follows:
podSecurityContext:
  runAsNonRoot: true
  runAsUser: null
  runAsGroup: null
  fsGroup: null
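After updating the podSecurityContext values in the override files, re-apply them by re-running the corresponding helm upgrade commands shown earlier, for example, for the pulse chart:
helm upgrade --install pulse pulsehelmrepo/pulse --wait --version=<chart-version> --namespace=pulse -f values-override-pulse.yaml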