Shared Provisioning

This topic is part of the manual Genesys Pulse Private Edition Guide for version Current of Reporting.

Learn how to configure Genesys Pulse.

Prerequisites

Complete the Before you begin instructions before continuing.

Information you will need:

  • Versions:
    • <image-version> = 9.0.100.10
    • <chart-version> = 9.0.100+10
  • K8S namespace <namespace> (e.g. 'pulse')
  • Project Name <project-name> (e.g. 'pulse')
  • Postgres credentials
    • <db-host>
    • <db-port>
    • <db-port-internal>
    • <db-name>
    • <db-user>
    • <db-user-password>
    • <db-superuser>
    • <db-superuser-password>
    • <db-ssl-mode>
  • Docker credentials
    • <docker-email>
    • <docker-password>
    • <docker-user>
  • OpenShift credentials
    • <openshift-url>
    • <openshift-port>
    • <openshift-token>
  • Redis credentials
    • <redis-host>
    • <redis-port>
    • <redis-password>
    • <redis-enable-ssl>
  • Tenant service variables
    • <tenant-uuid>
    • <tenant-sid>
    • <tenant-name>
  • GAuth/GWS service variables
    • <gauth-url-external>
    • <gauth-url-internal>
    • <gauth-client-id>
    • <gauth-client-secret>
    • <gws-url-external>
    • <gws-url-internal>
Fill in the appropriate placeholders and save the values in .shared_init_variables:
export CHART_VERSION='<chart-version>'
export DB_HOST='<db-host>'
export DB_NAME_SHARED='<db-name>'
export DB_NAME_SUPERUSER='<db-superuser>'
export DB_PASSWORD_SHARED='<db-user-password>'
export DB_PASSWORD_SUPERUSER='<db-superuser-password>'
export DB_PORT='<db-port>'
export DB_SSL_MODE='<db-ssl-mode>'
export DB_USER_SHARED='<db-user>'
export DOCKER_EMAIL='<docker-email>'
export DOCKER_PASSWORD='<docker-password>'
export DOCKER_REGISTRY='<docker-registry>'
export DOCKER_REGISTRY_SECRET_NAME='<docker-registry-secret-name>'
export DOCKER_TAG='<image-version>'
export DOCKER_USER='<docker-user>'
export GAUTH_CLIENT_ID='<gauth-client-id>'
export GAUTH_CLIENT_SECRET='<gauth-client-secret>'
export GAUTH_URL='<gauth-url-external>'
export GAUTH_URL_INTERNAL='<gauth-url-internal>'
export GWS_URL='<gws-url-external>'
export GWS_URL_INTERNAL='<gws-url-internal>'
export NAMESPACE='<namespace>'
export OS_ADD_SCC_TO_USER='<scc-user>'
export OS_PORT='<openshift-port>'
export OS_TOKEN='<openshift-token>'
export OS_URL='<openshift-url>'
export PROJECT_NAME='<project-name>'
export PULSE_ENDPOINT='<pulse-endpoint>'
export PULSE_HEALTH_PORT=8090
export PV_STORAGE_CLASS_RW_MANY='<rw-many-storage-class>'
export REDIS_ENABLE_SSL='<redis-enable-ssl>'
export REDIS_HOST='<redis-host>'
export REDIS_PASSWORD='<redis-password>'
export REDIS_PORT='<redis-port>'
export TENANT_UUID='<tenant-uuid>'
export TENANT_DCU='2'
export TENANT_NAME='<tenant-name>'
export TENANT_SID='<tenant-sid>'
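Because .shared_init_variables contains database, registry, and Redis credentials, you may want to restrict who can read it; one option:

# restrict the credentials file to the current user
chmod 600 .shared_init_variables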

Create project in OpenShift

Log in using the token

source .shared_init_variables
 
oc login --token="${OS_TOKEN}" \
           --server="${OS_URL}:${OS_PORT}" \
           --insecure-skip-tls-verify=true

Create project

source .shared_init_variables
 
oc new-project "${PROJECT_NAME}" \
   --description="${PROJECT_NAME}" \
   --display-name="${PROJECT_NAME}"

Add SCC to user

source .shared_init_variables
 
oc adm policy add-scc-to-user "${OS_ADD_SCC_TO_USER}" -z default -n "${NAMESPACE}"

Enable namespace

source .shared_init_variables
 
oc annotate namespace --overwrite "${NAMESPACE}" 'scheduler.alpha.kubernetes.io/defaultTolerations'='[{"operator": "Equal", "effect": "NoSchedule", "key": "team", "value": "pat"}]'
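To confirm that the annotation was applied, you can print the namespace annotations; a quick check:

source .shared_init_variables
 
oc get namespace "${NAMESPACE}" -o jsonpath='{.metadata.annotations}'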

Switch to project

source .shared_init_variables
 
oc project "${PROJECT_NAME}"

Create a secret for authenticating to the Docker registry

source .shared_init_variables
 
oc create secret docker-registry "${DOCKER_REGISTRY_SECRET_NAME}" \
     --docker-server="${DOCKER_REGISTRY}" \
     --docker-username="${DOCKER_USER}" \
     --docker-password="${DOCKER_PASSWORD}" \
     --docker-email="${DOCKER_EMAIL}"
 
oc secrets link default "${DOCKER_REGISTRY_SECRET_NAME}" --for=pull
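To verify that the secret exists and is linked to the default service account for image pulls, a quick check (the Image pull secrets field in the describe output should list the secret):

source .shared_init_variables
 
oc get secret "${DOCKER_REGISTRY_SECRET_NAME}" -n "${NAMESPACE}"
oc describe serviceaccount default -n "${NAMESPACE}"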

PostgreSQL

Create shared db user

As the PostgreSQL superuser, save the following as create_shared_db_role.sql:
CREATE ROLE "${DB_USER_SHARED}@${DB_HOST}" WITH NOSUPERUSER LOGIN ENCRYPTED PASSWORD '${DB_PASSWORD_SHARED}';

Run:

source .shared_init_variables
 
envsubst < ./create_shared_db_role.sql | \
PGPASSWORD=${DB_PASSWORD_SUPERUSER} psql -h "${DB_HOST}" -p "${DB_PORT}" -U "${DB_NAME_SUPERUSER}" -f -
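Optionally, confirm that the role was created; a minimal check using the same superuser connection as above:

source .shared_init_variables
 
# the role name includes the @host suffix created above
PGPASSWORD=${DB_PASSWORD_SUPERUSER} psql -h "${DB_HOST}" -p "${DB_PORT}" -U "${DB_NAME_SUPERUSER}" \
     -c "SELECT rolname FROM pg_roles WHERE rolname = '${DB_USER_SHARED}@${DB_HOST}';"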

Create shared db and grant privileges

Save the following as create_shared_db.sql:

CREATE DATABASE ${DB_NAME_SHARED};
GRANT ALL PRIVILEGES ON DATABASE ${DB_NAME_SHARED} TO "${DB_USER_SHARED}@${DB_HOST}";

Run:

source .shared_init_variables
 
envsubst < ./create_shared_db.sql | \
PGPASSWORD=${DB_PASSWORD_SUPERUSER} psql -h "${DB_HOST}" -p "${DB_PORT}" -U "${DB_NAME_SUPERUSER}" -f -
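Optionally, confirm that the database exists and that the shared user was granted access; a minimal check:

source .shared_init_variables
 
# \l prints the database and its access privileges
PGPASSWORD=${DB_PASSWORD_SUPERUSER} psql -h "${DB_HOST}" -p "${DB_PORT}" -U "${DB_NAME_SUPERUSER}" \
     -c "\l ${DB_NAME_SHARED}"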

Deployment

Preparations

Create pulse-postgres-configmap

Save as pulse-postgres-configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: pulse-postgres-configmap
  namespace: ${NAMESPACE}
data:
  META_DB_HOST: '${DB_HOST}'
  META_DB_PORT: '${DB_PORT}'
  META_DB_SSL_MODE: '${DB_SSL_MODE}'
Run:
source .shared_init_variables
 
envsubst < ./pulse-postgres-configmap.yaml | \
oc apply --namespace=${NAMESPACE} \
        -f -

Validate pulse-postgres-configmap

The following command should return the created ConfigMap:
source .shared_init_variables
 
oc get configmap pulse-postgres-configmap -n=${NAMESPACE}
Example:
NAME                       DATA   AGE
pulse-postgres-configmap   3      5h5m
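If you also want to confirm that envsubst filled in the expected values, you can print the ConfigMap data:

source .shared_init_variables
 
oc get configmap pulse-postgres-configmap -n "${NAMESPACE}" -o jsonpath='{.data}'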

Create pulse-redis-configmap

Save as pulse-redis-configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: pulse-redis-configmap
  namespace: ${NAMESPACE}
data:
  REDIS_HOST: '${REDIS_HOST}'
  REDIS_PORT: '${REDIS_PORT}'
  REDIS_ENABLE_SSL: '${REDIS_ENABLE_SSL}'
Run:
source .shared_init_variables
 
envsubst < ./pulse-redis-configmap.yaml | \
oc apply --namespace=${NAMESPACE} \
        -f -

Validate pulse-redis-configmap

The following command should return the created ConfigMap:
source .shared_init_variables
 
oc get configmap pulse-redis-configmap -n=${NAMESPACE}
Example:
NAME                    DATA   AGE
pulse-redis-configmap   3      4h38m

Create pulse-gws-secret

Save as pulse-gws-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: pulse-gws-secret
  namespace: ${NAMESPACE}
type: Opaque
stringData:
  clientId: ${GAUTH_CLIENT_ID}
  clientSecret: ${GAUTH_CLIENT_SECRET}
Run:
source .shared_init_variables
 
envsubst < ./pulse-gws-secret.yaml | \
oc apply  --namespace=${NAMESPACE} \
        -f -

Validate pulse-gws-secret

The following command should return the created secret:
source .shared_init_variables
 
oc get secret pulse-gws-secret -n=${NAMESPACE}
Example:
NAME               TYPE     DATA   AGE
pulse-gws-secret   Opaque   2      5h4m

Create pulse-postgres-secret

Save as pulse-postgres-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: pulse-postgres-secret
  namespace: ${NAMESPACE}
type: Opaque
stringData:
  META_DB_ADMIN: '${DB_USER_SHARED}'
  META_DB_ADMINPWD: '${DB_PASSWORD_SHARED}'
Run:
source .shared_init_variables
 
envsubst < ./pulse-postgres-secret.yaml | \
oc apply  --namespace=${NAMESPACE} \
        -f -

Validate pulse-postgres-secret

The following command should return the created secret:
source .shared_init_variables
 
oc get secret pulse-postgres-secret -n=${NAMESPACE}
Example:
NAME                    TYPE     DATA   AGE
pulse-postgres-secret   Opaque   2      5h30m

Create pulse-redis-secret

Save as pulse-redis-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: pulse-redis-secret
  namespace: ${NAMESPACE}
type: Opaque
stringData:
  REDIS01_KEY: ${REDIS_PASSWORD}
Run:
source .shared_init_variables
 
envsubst < ./pulse-redis-secret.yaml | \
oc apply  --namespace=${NAMESPACE} \
        -f -

Validate pulse-redis-secret

The following command should return the created secret:
source .shared_init_variables
 
oc get secret pulse-redis-secret -n=${NAMESPACE}
Example:
NAME                 TYPE     DATA   AGE
pulse-redis-secret   Opaque   1      5h5m

Install init helm chart

Use this chart for the shared Postgres database.

Get init helm chart
helm repo update
helm search repo pe-jfrog-stage/init
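If the pe-jfrog-stage repository is not yet configured in your Helm client, the search returns nothing; add the repository first. A sketch, where <helm-repo-url>, <helm-repo-user>, and <helm-repo-password> are hypothetical placeholders for your JFrog repository details:

# register the chart repository, then refresh the local index
helm repo add pe-jfrog-stage <helm-repo-url> \
     --username <helm-repo-user> \
     --password <helm-repo-password>
helm repo update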

Prepare override file

Save as values-override-init.yaml:
# Default values for init.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
 
image:
  name: init
  tag: "${DOCKER_TAG}"
  pullPolicy: IfNotPresent
  repository: "${DOCKER_REGISTRY}/pulse/"
 
imagePullSecrets: [name: ${DOCKER_REGISTRY_SECRET_NAME}]
 
# tenant identification, or empty for shared deployment
tenants:
  - id: "${TENANT_UUID}"
    name: "${TENANT_NAME}"
    key: "${TENANT_SID}"
    dcu: "${TENANT_DCU}"
 
# common configuration.
config:
  dbName: "${DB_NAME_SHARED}"
  # set "true" when need @host added for username
  dbUserWithHost: true
  # set "true" for CSI secrets
  mountSecrets: false
  # Postgres config map name
  postgresConfig: "pulse-postgres-configmap"
  # Postgres secret name
  postgresSecret: "pulse-postgres-secret"
  # Postgres secret key for user
  postgresSecretUser: "META_DB_ADMIN"
  # Postgres secret key for password
  postgresSecretPassword: "META_DB_ADMINPWD"
 
## Service account settings
serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""
 
## Add annotations to all pods
##
podAnnotations: {}
 
## Containers should run as genesys user and cannot use elevated permissions
## !!! THESE OPTIONS SHOULD NOT BE CHANGED UNLESS INSTRUCTED BY GENESYS !!!
# securityContext: {}
#    runAsUser: 500
#    runAsGroup: 500
 
## Resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
  limits:
    memory: 256Mi
    cpu: 200m
  requests:
    memory: 128Mi
    cpu: 100m
 
## Priority Class
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
##
priorityClassName: ""
 
## Node labels for assignment.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}
 
## Tolerations for assignment.
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []

Install init helm chart

Run:
source .shared_init_variables
 
envsubst < ./values-override-init.yaml | \
helm upgrade --install pulse-init pe-jfrog-stage/init \
      --wait --wait-for-jobs \
      --version="${CHART_VERSION}" \
      --namespace="${NAMESPACE}" \
      -f -

The command finishes with exit code 0 if the installation is successful.

Validate init helm chart

The following command should report the pulse-init job as Completed:
source .shared_init_variables
 
oc get pods -n="${NAMESPACE}" -l "app.kubernetes.io/name=init,app.kubernetes.io/instance=pulse-init"
Example:
NAME                   READY   STATUS      RESTARTS   AGE
pulse-init-job-bvstl   0/1     Completed   0          3h30m
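If the job does not reach Completed, its logs usually show the cause; a sketch, assuming the job is named pulse-init-job as suggested by the pod name above:

source .shared_init_variables
 
# "pulse-init-job" is assumed from the pod name shown in the example output
oc logs job/pulse-init-job -n "${NAMESPACE}"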

Install pulse helm chart

Use this chart for the shared part of the deployment.

Get pulse helm chart
helm repo update
helm search repo pe-jfrog-stage/pulse

Prepare override file

Save as values-override-pulse.yaml:
# Default values for pulse.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
 
image:
  name: pulse
  tag: "${DOCKER_TAG}"
  pullPolicy: IfNotPresent
  repository: "${DOCKER_REGISTRY}/pulse/"
 
imagePullSecrets: [name: ${DOCKER_REGISTRY_SECRET_NAME}]
 
replicaCount: 2
 
# common configuration.
config:
  dbName: "${DB_NAME_SHARED}"
  # set "true" when need @host added for username
  dbUserWithHost: true
  # set "true" for CSI secrets
  mountSecrets: false
  # Postgres config map name
  postgresConfig: "pulse-postgres-configmap"
  # Postgres secret name
  postgresSecret: "pulse-postgres-secret"
  # Postgres secret key for user
  postgresSecretUser: "META_DB_ADMIN"
  # Postgres secret key for password
  postgresSecretPassword: "META_DB_ADMINPWD"
  # Redis config map name
  redisConfig: "pulse-redis-configmap"
  # Redis secret name
  redisSecret: "pulse-redis-secret"
  # Redis secret key for access key
  redisSecretKey: "REDIS01_KEY"
  # GAuth secret name
  gwsSecret: "pulse-gws-secret"
  # GAuth secret key for client_id
  gwsSecretClientId: "clientId"
  # GAuth secret key for client_secret
  gwsSecretClientSecret: "clientSecret"
 
# monitoring settings
monitoring:
  # enable the Prometheus metrics endpoint
  enabled: false
  # port is <options.managementPort>
  # HTTP path is <options.managementContext><options.prometheusEndpoint>
  # additional annotations required for monitoring PODs
  # you can reference values of other variables as {{.Values.variable.full.name}}
  podAnnotations: {}
    # prometheus.io/scrape: "true"
    # prometheus.io/port: "{{.Values.options.managementPort}}"
    # prometheus.io/path: "{{.Values.options.managementContext}}{{.Values.options.prometheusEndpoint}}"
  serviceMonitor:
    # enables ServiceMonitor creation
    enabled: false
    # interval at which metrics should be scraped
    scrapeInterval: 30s
    # timeout after which the scrape is ended
    scrapeTimeout:
    # namespace of the ServiceMonitor, defaults to the namespace of the service
    namespace:
    additionalLabels: {}
 
# common log configuration
log:
  # target directory where log will be stored, leave empty for default
  logDir: ""
  # path where volume will be mounted
  volumeMountPath: /data/log
  # log volume type: none | hostpath | pvc
  volumeType: pvc
  # log volume hostpath, used with volumeType "hostpath"
  volumeHostPath: /mnt/log
  # log PVC parameters, used with volumeType "pvc"
  pvc:
    name: pulse-logs
    accessModes:
      - ReadWriteMany
    capacity: 10Gi
    class: ${PV_STORAGE_CLASS_RW_MANY}
 
# application options
options:
  authUrl: "https://${GAUTH_URL}"
  authUrlInt: "http://${GAUTH_URL_INTERNAL}"
  gwsUrl: "https://${GWS_URL}"
  gwsUrlInt: "http://${GWS_URL_INTERNAL}"
 
## Service account settings
serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""
 
## Add annotations to all pods
##
podAnnotations: {}
 
## Add labels to all pods
##
podLabels: {}
 
## Containers should run as genesys user and cannot use elevated permissions
## !!! THESE OPTIONS SHOULD NOT BE CHANGED UNLESS INSTRUCTED BY GENESYS !!!
# securityContext: {}
#    runAsUser: 500
#    runAsGroup: 500
 
## Ingress configuration
ingress:
  enabled: true
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
    ## recommended to increase proxy-body-size size
    # nginx.ingress.kubernetes.io/proxy-body-size: 5m
  hosts:
    - host: "${PULSE_ENDPOINT}"
      paths: [/]
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local
 
gateway:
  enabled: false
 
## Resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
  limits:
    memory: 4Gi
    cpu: 1
  requests:
    memory: 650Mi
    cpu: 100m
 
## HPA Settings
## Not supported in this release!
hpa:
  enabled: false
 
## Priority Class
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
##
priorityClassName: ""
 
## Node labels for assignment.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}
 
## Tolerations for assignment.
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
 
## Pod Disruption Budget Settings
podDisruptionBudget:
  enabled: false
 
## Affinity for assignment.
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}
 
# control network policies
networkPolicies:
  enabled: false

Install pulse helm chart

Run:
source .shared_init_variables
 
envsubst < ./values-override-pulse.yaml | \
helm upgrade --install pulse pe-jfrog-stage/pulse \
      --wait \
      --version="${CHART_VERSION}" \
      --namespace="${NAMESPACE}" \
      -f -

The command finishes with exit code 0 if the installation is successful.

Validate pulse helm chart

The following command should report the running pods:
source .shared_init_variables
 
oc get pods -n="${NAMESPACE}" -l "app.kubernetes.io/name=pulse,app.kubernetes.io/instance=pulse"
Example:
NAME                     READY   STATUS    RESTARTS   AGE
pulse-5dbd484467-bh82r   1/1     Running   0          4h26m
pulse-5dbd484467-wz7xt   1/1     Running   0          4h26m
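You can also confirm that the chart created the ingress for ${PULSE_ENDPOINT}:

source .shared_init_variables
 
oc get ingress -n "${NAMESPACE}"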

Validation

Check logs for errors

oc get pods
oc logs <pulse-pod-id>
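To scan all Pulse pods for errors in one pass, a sketch that reuses the label selector from the validation steps above:

source .shared_init_variables
 
# iterate over the Pulse pods and grep their logs for common error markers
for pod in $(oc get pods -n "${NAMESPACE}" -l "app.kubernetes.io/name=pulse,app.kubernetes.io/instance=pulse" -o name); do
  echo "--- ${pod}"
  oc logs -n "${NAMESPACE}" "${pod}" | grep -iE 'error|exception' || true
done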

Health validation

GET /actuator/metrics/pulse.health.all

Run the following in two separate consoles, starting with Console 1.

Console 1:

source .shared_init_variables

export POD_NAME=$(oc get pods --namespace=${NAMESPACE} -l "app.kubernetes.io/name=pulse,app.kubernetes.io/instance=pulse" -o jsonpath="{.items[0].metadata.name}")
oc --namespace=${NAMESPACE} port-forward $POD_NAME ${PULSE_HEALTH_PORT}:${PULSE_HEALTH_PORT}

Console 2:

source .shared_init_variables

curl -X GET http://127.0.0.1:${PULSE_HEALTH_PORT}/actuator/metrics/pulse.health.all \
     -H 'Content-Type: application/json'  | jq '.'

Genesys Pulse is successfully running and can connect to Postgres and Redis when:

  • The HTTP response code is 200
  • The JSON response reports a measurement value of 1 (measurements[0].value = 1), as in this example:
    {
      "name": "pulse.health.all",
      "description": "Provides overall application status",
      "baseUnit": "Boolean",
      "measurements": [
        {
          "statistic": "VALUE",
          "value": 1
        }
      ],
      "availableTags": [
        {
          "tag": "deployment.code",
          "values": [
            "pulse"
          ]
        },
        {
          "tag": "application.name",
          "values": [
            "pulse"
          ]
        }
      ]
    }
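To turn this check into a single pass/fail command (useful in smoke-test scripts), a sketch that assumes the port-forward from Console 1 is still active:

source .shared_init_variables
 
# jq -e exits with 0 only when the expression evaluates to true
curl -s http://127.0.0.1:${PULSE_HEALTH_PORT}/actuator/metrics/pulse.health.all | \
     jq -e '.measurements[0].value == 1' && echo "Pulse health check passed"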

Troubleshooting

Check init helm manifests

Run the following to render the manifests into the helm-template directory:

source .shared_init_variables
 
envsubst < ./values-override-init.yaml | \
helm template \
   --version="${CHART_VERSION}" \
   --namespace="${NAMESPACE}" \
   --debug \
   --output-dir helm-template \
   init pe-jfrog-stage/init \
   -f -

Check pulse helm manifests

Run the following to render the manifests into the helm-template directory:

source .shared_init_variables
 
envsubst < ./values-override-pulse.yaml | \
helm template \
     --version="${CHART_VERSION}" \
     --namespace="${NAMESPACE}" \
     --debug \
     --output-dir helm-template \
     pulse pe-jfrog-stage/pulse \
     -f -
