PEC-REP/Current/PulsePEGuide/Configure

From Genesys Documentation
 
|sectionHeading=Prerequisites
|alignment=Vertical
|structuredtext=Before you begin the steps on this page, complete the instructions on {{Link-SomewhereInThisVersion|manual=PulsePEGuide|topic=Planning}}.
  
Information you require for shared provisioning:
  
 
*Versions:
**<image-version> = 100.0.000.0015
**<chart-versions> = 100.0.000+0015
*K8S namespace pulse
*Project Name pulse
 
*Postgres credentials
**<db-host>
**<db-port>
**<db-name>
**<db-user>
**<db-user-password>
**<db-ssl-mode>
*Docker credentials
**<docker-registry>
**<docker-registry-secret-name>
*Redis credentials
**<redis-host>
 
**<tenant-sid>
**<tenant-name>
**<tenant-dcu>
*GAuth/GWS service variables
**<gauth-url-external>
 
**<gws-url-external>
**<gws-url-internal>
*Storage class:
**<pv-storage-class-rw-many>
*Pulse:
**<pulse-host>
**<pulse-manage-agents-host> (optional)
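If you find it convenient, you can record these inputs in a small environment file and source it from later shell sessions. This is an optional helper, not part of the documented procedure, and every value below is an example placeholder.
<source lang="bash"># Optional helper: keep deployment inputs in one place.
# All values are examples; replace them with your own.
cat > pulse_values.env <<'EOF'
IMAGE_VERSION=100.0.000.0015
CHART_VERSION=100.0.000+0015
NAMESPACE=pulse
PULSE_HOST=pulse.example.com
EOF
. ./pulse_values.env
echo "deploying version ${IMAGE_VERSION} to namespace ${NAMESPACE}"</source>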
  
{{AnchorDiv|SingleNamespace}}

===Single namespace===
Single namespace deployments use software-defined networking (SDN) in multitenant mode, where namespaces are network-isolated. If you plan to deploy Pulse into a single namespace, ensure that your environment meets the following requirements for inputs:

*Back-end services deployed into the single namespace must include the string ''pulse'':
*:<source lang="bash"><db-host>
<db-name>
<redis-host></source>
*The hostname used for Ingress must be unique, and must include the string ''pulse'':
*:<source lang="bash"><pulse-host></source>
*:<source lang="bash"><pulse-manage-agents-host></source>
*Internal service-to-service traffic must use the service endpoints, rather than the Ingress Controller:
*:<source lang="bash"><gauth-url-internal>
<gws-url-internal></source>
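The naming rules above are easy to check in a shell before you fill in the override files. The following function is an illustrative sketch only, not part of the product tooling, and the hostnames are made-up examples:
<source lang="bash"># Sketch: verify that a single-namespace input includes the string 'pulse'.
check_contains_pulse() {
  case "$1" in
    *pulse*) echo "ok: $1" ;;
    *) echo "error: '$1' must include the string 'pulse'" >&2; return 1 ;;
  esac
}
check_contains_pulse "pulse-db.internal.example.com"
check_contains_pulse "pulse.example.com"</source>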
 
 
|Status=No
}}{{Section
|sectionHeading=Override Helm chart values
|alignment=Vertical
|structuredtext=For more information about overriding Helm chart values, see the suite-level documentation: {{SuiteLevelLink|helmoverride}}.
{{{!}} class="wikitable"
{{!}}+
!Parameter
!Description
!Default
!Valid values
{{!}}-
{{!}}service.port
{{!}}Port on which the Pulse service is exposed.
{{!}}8888
{{!}}A valid port.
{{!}}-
{{!}}}
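For example, to override the port from the table, you can append the setting to the values file that you pass to helm during installation. The nesting shown here (a <tt>service</tt> map with a <tt>port</tt> key) is inferred from the parameter name above; verify it against the chart's default <tt>values.yaml</tt> before relying on it:
<source lang="bash"># Hypothetical override: 'service.port' corresponds to this YAML nesting.
cat >> values-override-pulse.yaml <<'EOF'
service:
  port: 8888
EOF</source>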
|Status=No
}}{{Section
|sectionHeading=Deployment
|alignment=Vertical
|structuredtext====init Helm chart===
Use this chart to initialize the shared PostgreSQL database.
  
====Get init Helm chart====
Run the following commands to get the chart:
<source lang="bash">helm repo update
helm search repo pulsehelmrepo/init</source>

====Prepare override-init file (GKE)====
Create a file with the following content, entering appropriate values where indicated, and save the file as '''values-override-init.yaml''':
<source lang="bash"># Default values for init.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

image:
  tag: "<image-version>"
  pullPolicy: IfNotPresent
  registry: "<docker-registry>"
  imagePullSecrets: [name: "<docker-registry-secret-name>"]

# tenant identification, or empty for shared deployment
tenants:
  - id:   "<tenant-uuid>"
    name: "<tenant-name>"
    key:  "<tenant-sid>"
    dcu:  "<tenant-dcu>"
    # optional, available starting with 100.0.000.0015
    # enable_agent_management: true

# common configuration.
config:
  # set "true" to create config maps
  createConfigMap: true
  # set "true" to create secrets
  createSecret: true

  # Postgres config - fill when createConfigMap: true
  # Postgres config map name
  postgresConfig: "pulse-postgres-configmap"
  # Postgres hostname
  postgresHost: "<postgres-hostname>"
  # Postgres port
  postgresPort: "<postgres-port>"
  # Postgres SSL mode
  postgresEnableSSL: "<postgres-ssl-mode>"

  # Postgres secret config - fill when createSecret: true
  # Postgres User
  postgresUser: "<postgres-user>"
  # Postgres Password
  postgresPassword: "<postgres-password>"
  # Secret name for postgres
  postgresSecret: "pulse-postgres-secret"
  # Secret key for postgres user
  postgresSecretUser: "META_DB_ADMIN"
  # Secret key for postgres password
  postgresSecretPassword: "META_DB_ADMINPWD"

  # Redis config - fill when createConfigMap: true
  # Redis config map name
  redisConfig: "pulse-redis-configmap"
  # Redis host
  redisHost: "<redis-hostname>"
  # Redis port
  redisPort: "<redis-port>"
  # Redis SSL enabled
  redisEnableSSL: "false"

  # Redis secret config - fill when createSecret: true
  # Password for Redis
  redisKey: "<redis-key>"
  # Secret name for Redis
  redisSecret: "pulse-redis-secret"
  # Secret key for Redis password
  redisSecretKey: "REDIS01_KEY"

  # GWS secret config - fill when createSecret: true
  # Client ID
  gwsClientId: "<gws-client-id>"
  # Client Secret
  gwsClientSecret: "<gws-client-secret>"
  # Secret name
  gwsSecret: "pulse-gws-secret"
  # Secret key for Client ID
  gwsSecretClientId: "clientId"
  # Secret key for Client Secret
  gwsSecretClientSecret: "clientSecret"

  # fill database name
  dbName: "<db-name>"
  # set "true" when need @host added for username
  dbUserWithHost: true
  # set "true" for CSI secrets
  mountSecrets: false

## Service account settings
serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

## Add annotations to all pods
##
podAnnotations: {}

## Specifies the security context for all Pods in the service
##
podSecurityContext:
  fsGroup: null
  runAsUser: null
  runAsGroup: 0
  runAsNonRoot: true

## Resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
  limits:
    memory: 256Mi
    cpu: 200m
  requests:
    memory: 128Mi
    cpu: 100m

## Priority Class
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
##
priorityClassName: ""

## Node labels for assignment.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}

## Tolerations for assignment.
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []</source>
====Prepare override-init file (AKS)====
Create a file with the following content, entering appropriate values where indicated, and save the file as '''values-override-init.yaml''':
<source lang="bash"># Default values for init.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

image:
  tag: "<image-version>"
  pullPolicy: IfNotPresent
  registry: "<docker-registry>"
  imagePullSecrets: [name: "<docker-registry-secret-name>"]

# tenant identification, or empty for shared deployment
tenants:
  - id:   "<tenant-uuid>"
    name: "<tenant-name>"
    key:  "<tenant-sid>"
    dcu:  "<tenant-dcu>"

# common configuration.
config:
  # set "true" to create config maps
  createConfigMap: true
  # set "true" to create secrets
  createSecret: true

  # Postgres config - fill when createConfigMap: true
  # Postgres config map name
  postgresConfig: "pulse-postgres-configmap"
  # Postgres hostname
  postgresHost: "<postgres-hostname>"
  # Postgres port
  postgresPort: "<postgres-port>"
  # Postgres SSL mode
  postgresEnableSSL: "<postgres-ssl-mode>"

  # Postgres secret config - fill when createSecret: true
  # Postgres User
  postgresUser: "<postgres-user>"
  # Postgres Password
  postgresPassword: "<postgres-password>"
  # Secret name for postgres
  postgresSecret: "pulse-postgres-secret"
  # Secret key for postgres user
  postgresSecretUser: "META_DB_ADMIN"
  # Secret key for postgres password
  postgresSecretPassword: "META_DB_ADMINPWD"

  # Redis config - fill when createConfigMap: true
  # Redis config map name
  redisConfig: "pulse-redis-configmap"
  # Redis host
  redisHost: "<redis-hostname>"
  # Redis port
  redisPort: "<redis-port>"
  # Redis SSL enabled
  redisEnableSSL: "false"

  # Redis secret config - fill when createSecret: true
  # Password for Redis
  redisKey: "<redis-key>"
  # Secret name for Redis
  redisSecret: "pulse-redis-secret"
  # Secret key for Redis password
  redisSecretKey: "REDIS01_KEY"

  # GWS secret config - fill when createSecret: true
  # Client ID
  gwsClientId: "<gws-client-id>"
  # Client Secret
  gwsClientSecret: "<gws-client-secret>"
  # Secret name
  gwsSecret: "pulse-gws-secret"
  # Secret key for Client ID
  gwsSecretClientId: "clientId"
  # Secret key for Client Secret
  gwsSecretClientSecret: "clientSecret"

  # fill database name
  dbName: "<db-name>"
  # set "true" when need @host added for username
  dbUserWithHost: true
  # set "true" for CSI secrets
  mountSecrets: false

## Service account settings
serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

## Add annotations to all pods
##
podAnnotations: {}

## Specifies the security context for all Pods in the service
##
podSecurityContext: {}

## Resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
  limits:
    memory: 256Mi
    cpu: 200m
  requests:
    memory: 128Mi
    cpu: 100m

## Priority Class
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
##
priorityClassName: ""

## Node labels for assignment.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}

## Tolerations for assignment.
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []</source>
====Install init Helm chart====
Run:
<source lang="bash">helm upgrade --install pulse-init pulsehelmrepo/init --wait --wait-for-jobs --version=<chart-version> --namespace=pulse -f values-override-init.yaml</source>
If the installation is successful, the command finishes with exit code 0.

====Validate init Helm chart====
To validate Helm chart initialization, run the following command:
<source lang="bash">kubectl get pods -n=pulse -l "app.kubernetes.io/name=init,app.kubernetes.io/instance=pulse-init"
NAME                  READY  STATUS     RESTARTS  AGE
pulse-init-job-5669c  0/1    Completed  0         79m</source>
If the initialization was successful, the pulse-init job has a Status of Completed.
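If you validate from a script rather than by eye, you can test the captured output for the Completed status. This is an illustrative sketch; <tt>PODS</tt> stands in for the output of the kubectl command above:
<source lang="bash"># Sketch: in a real script, capture the output first, for example:
# PODS=$(kubectl get pods -n=pulse -l "app.kubernetes.io/instance=pulse-init" --no-headers)
PODS='pulse-init-job-5669c  0/1    Completed  0          79m'
case "$PODS" in
  *Completed*) echo "init job completed" ;;
  *) echo "init job not completed" >&2; exit 1 ;;
esac</source>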
  
===Install pulse Helm chart===
Use this chart to install the shared part of Pulse.

====Get pulse Helm chart====
<source lang="bash">helm repo update
helm search repo pulsehelmrepo/pulse</source>

====Prepare override-pulse file (GKE)====
Create a file with the following content, entering appropriate values where indicated, and save the file as '''values-override-pulse.yaml''':
<source lang="bash"># Default values for pulse.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

image:
  tag: "<image-version>"
  pullPolicy: IfNotPresent
  registry: "<docker-registry>"
  imagePullSecrets: [name: "<docker-registry-secret-name>"]

replicaCount: 2

# common configuration.
config:
  dbName: "<db-name>"
  # set "true" when need @host added for username
  dbUserWithHost: true
  # set "true" for CSI secrets
  mountSecrets: false
  # Postgres config map name
  postgresConfig: "pulse-postgres-configmap"
  # Postgres secret name
  postgresSecret: "pulse-postgres-secret"
  # Postgres secret key for user
  postgresSecretUser: "META_DB_ADMIN"
  # Postgres secret key for password
  postgresSecretPassword: "META_DB_ADMINPWD"
  # Redis config map name
  redisConfig: "pulse-redis-configmap"
  # Redis secret name
  redisSecret: "pulse-redis-secret"
  # Redis secret key for access key
  redisSecretKey: "REDIS01_KEY"
  # GAuth secret name
  gwsSecret: "pulse-gws-secret"
  # GAuth secret key for client_id
  gwsSecretClientId: "clientId"
  # GAuth secret key for client_secret
  gwsSecretClientSecret: "clientSecret"

# monitoring settings
monitoring:
  # enable the Prometheus metrics endpoint
  enabled: false
  # port is <options.managementPort>
  # HTTP path is <options.managementContext><options.prometheusEndpoint>
  # additional annotations required for monitoring PODs
  # you can reference values of other variables as {{.Values.variable.full.name}}
  podAnnotations: {}
    # prometheus.io/scrape: "true"
    # prometheus.io/port: "{{.Values.options.managementPort}}"
    # prometheus.io/path: "{{.Values.options.managementContext}}{{.Values.options.prometheusEndpoint}}"
  serviceMonitor:
    # enables ServiceMonitor creation
    enabled: false
    # interval at which metrics should be scraped
    scrapeInterval: 30s
    # timeout after which the scrape is ended
    scrapeTimeout:
    # namespace of the ServiceMonitor, defaults to the namespace of the service
    namespace:
    additionalLabels: {}

# common log configuration
log:
  # target directory where log will be stored, leave empty for default
  logDir: ""
  # path where volume will be mounted
  volumeMountPath: /data/log
  # log volume type: none | hostpath | pvc
  volumeType: pvc
  # log volume hostpath, used with volumeType "hostpath"
  volumeHostPath: /mnt/log
  # log PVC parameters, used with volumeType "pvc"
  pvc:
    name: pulse-logs
    accessModes:
      - ReadWriteMany
    capacity: 10Gi
    class: <pv-storage-class-rw-many>

# application options
options:
  authUrl: "https://<gauth-url-external>"
  authUrlInt: "http://<gauth-url-internal>"
  gwsUrl: "https://<gws-url-external>"
  gwsUrlInt: "http://<gws-url-internal>"
  # optional, required for Manage Agents since 100.0.000.0015
  # advancedServiceUrl: "https://<pulse-manage-agents-host>"

## Service account settings
serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

## Add annotations to all pods
##
podAnnotations: {}

## Add labels to all pods
##
podLabels: {}

## Specifies the security context for all Pods in the service
##
podSecurityContext:
  fsGroup: null
  runAsUser: null
  runAsGroup: 0
  runAsNonRoot: true

## Ingress configuration
ingress:
  enabled: true
  annotations:
    # nginx.ingress.kubernetes.io/proxy-body-size: 5m
  hosts:
    - host: "<pulse-host>"
      paths: [/]
  tls: []

gateway:
  enabled: false

## Resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
  limits:
    memory: 4Gi
    cpu: 1
  requests:
    memory: 650Mi
    cpu: 100m

## HPA Settings
## Not supported in this release!
hpa:
  enabled: false

## Priority Class
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
##
priorityClassName: ""

## Node labels for assignment.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}

## Tolerations for assignment.
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []

## Pod Disruption Budget Settings
podDisruptionBudget:
  enabled: false

## Affinity for assignment.
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}

# control network policies
networkPolicies:
  enabled: false</source>
====Prepare override-pulse file (AKS)====
Create a file with the following content, entering appropriate values where indicated, and save the file as '''values-override-pulse.yaml''':
<source lang="bash"># Default values for pulse.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

image:
  tag: "<image-version>"
  pullPolicy: IfNotPresent
  registry: "<docker-registry>"
  imagePullSecrets: [name: "<docker-registry-secret-name>"]

replicaCount: 2

# common configuration.
config:
  dbName: "<db-name>"
  # set "true" when need @host added for username
  dbUserWithHost: true
  # set "true" for CSI secrets
  mountSecrets: false
  # Postgres config map name
  postgresConfig: "pulse-postgres-configmap"
  # Postgres secret name
  postgresSecret: "pulse-postgres-secret"
  # Postgres secret key for user
  postgresSecretUser: "META_DB_ADMIN"
  # Postgres secret key for password
  postgresSecretPassword: "META_DB_ADMINPWD"
  # Redis config map name
  redisConfig: "pulse-redis-configmap"
  # Redis secret name
  redisSecret: "pulse-redis-secret"
  # Redis secret key for access key
  redisSecretKey: "REDIS01_KEY"
  # GAuth secret name
  gwsSecret: "pulse-gws-secret"
  # GAuth secret key for client_id
  gwsSecretClientId: "clientId"
  # GAuth secret key for client_secret
  gwsSecretClientSecret: "clientSecret"

# monitoring settings
monitoring:
  # enable the Prometheus metrics endpoint
  enabled: false
  # port is <options.managementPort>
  # HTTP path is <options.managementContext><options.prometheusEndpoint>
  # additional annotations required for monitoring PODs
  # you can reference values of other variables as {{.Values.variable.full.name}}
  podAnnotations: {}
    # prometheus.io/scrape: "true"
    # prometheus.io/port: "{{.Values.options.managementPort}}"
    # prometheus.io/path: "{{.Values.options.managementContext}}{{.Values.options.prometheusEndpoint}}"
  serviceMonitor:
    # enables ServiceMonitor creation
    enabled: false
    # interval at which metrics should be scraped
    scrapeInterval: 30s
    # timeout after which the scrape is ended
    scrapeTimeout:
    # namespace of the ServiceMonitor, defaults to the namespace of the service
    namespace:
    additionalLabels: {}

# common log configuration
log:
  # target directory where log will be stored, leave empty for default
  logDir: ""
  # path where volume will be mounted
  volumeMountPath: /data/log
  # log volume type: none | hostpath | pvc
  volumeType: pvc
  # log volume hostpath, used with volumeType "hostpath"
  volumeHostPath: /mnt/log
  # log PVC parameters, used with volumeType "pvc"
  pvc:
    name: pulse-logs
    accessModes:
      - ReadWriteMany
    capacity: 10Gi
    class: <pv-storage-class-rw-many>

# application options
options:
  authUrl: "https://<gauth-url-external>"
  authUrlInt: "http://<gauth-url-internal>"
  gwsUrl: "https://<gws-url-external>"
  gwsUrlInt: "http://<gws-url-internal>"

## Service account settings
serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

## Add annotations to all pods
##
podAnnotations: {}

## Add labels to all pods
##
podLabels: {}

## Specifies the security context for all Pods in the service
##
podSecurityContext: {}

## Ingress configuration
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
  hosts:
  - host: <pulse-host>
    paths:
    - /
  tls:
  - hosts:
    - <pulse-host>
    secretName: pulse-cert-tls

gateway:
  enabled: false

## Resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
  limits:
    memory: 4Gi
    cpu: 1
  requests:
    memory: 650Mi
    cpu: 100m

## HPA Settings
## Not supported in this release!
hpa:
  enabled: false

## Priority Class
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
##
priorityClassName: ""

## Node labels for assignment.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}

## Tolerations for assignment.
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []

## Pod Disruption Budget Settings
podDisruptionBudget:
  enabled: false

## Affinity for assignment.
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}

# control network policies
networkPolicies:
  enabled: false</source>
====Install pulse Helm chart====
Run:
<source lang="bash">helm upgrade --install pulse pulsehelmrepo/pulse --wait --version=<chart-version> --namespace=pulse -f values-override-pulse.yaml</source>
If the installation is successful, the command finishes with exit code 0.
  
====Validate pulse Helm chart====
To list all running Pulse pods, run the following command:
<source lang="bash">kubectl get pods -n=pulse -l "app.kubernetes.io/name=pulse,app.kubernetes.io/instance=pulse"
NAME                    READY  STATUS   RESTARTS  AGE
pulse-648b9d6666-f5d84  1/1    Running  0         22m
pulse-648b9d6666-kqhs6  1/1    Running  0         68m</source>
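Because the chart sets <tt>replicaCount: 2</tt>, you should see two pods in the Running state. If you script this check, you can count them from the captured output; this sketch uses a hard-coded <tt>PODS</tt> value standing in for the kubectl output above:
<source lang="bash"># Sketch: in a real script, capture the output first, for example:
# PODS=$(kubectl get pods -n=pulse -l "app.kubernetes.io/name=pulse,app.kubernetes.io/instance=pulse" --no-headers)
PODS='pulse-648b9d6666-f5d84  1/1    Running  0  22m
pulse-648b9d6666-kqhs6  1/1    Running  0  68m'
RUNNING=$(printf '%s\n' "$PODS" | grep -c 'Running')
echo "running pods: $RUNNING"</source>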
  
==Install pulse-manage-agents helm chart==
Use this chart to install the optional shared service for Manage Agents functionality. This feature is available starting with release 100.0.000.0015.

===Get pulse-manage-agents helm chart===
Run the following commands to get the chart:
<source lang="bash">helm repo update
helm search repo pulsehelmrepo/pulse-manage-agents</source>

===Prepare override file===
Create a file with the following content, entering appropriate values where indicated, and save the file as '''values-override-pulse-manage-agents.yaml''':
<source lang="bash"># Default values for pulse-manage-agents.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
image:
  tag: "<image-version>"
  pullPolicy: IfNotPresent
  registry: "<docker-registry>"
  imagePullSecrets: [name: "<docker-registry-secret-name>"]

replicaCount: 2

# monitoring settings
monitoring:
  # enable the Prometheus metrics endpoint
  enabled: false
  # port number of the Prometheus metrics endpoint
  port: 9103
  # additional annotations required for monitoring PODs
  # you can reference values of other variables as {{.Values.variable.full.name}}
  podAnnotations: {}
  # additional annotations required for monitoring Service
  # you can reference values of other variables as {{.Values.variable.full.name}}
  serviceAnnotations: {}
    # prometheus.io/scrape: "true"
    # prometheus.io/port: "{{.Values.monitoring.port}}"
    # prometheus.io/path: "/metrics"
  serviceMonitor:
    # enables ServiceMonitor creation
    enabled: false
    # interval at which metrics should be scraped
    scrapeInterval: 30s
    # timeout after which the scrape is ended
    scrapeTimeout:
    # namespace of the ServiceMonitor, defaults to the namespace of the service
    namespace:
    additionalLabels: {}
  dashboards:
    # enables Grafana dashboards
    enabled: true
    # namespace of the ConfigMap with Grafana dashboards,
    # defaults to the namespace of the POD
    namespace:
    additionalLabels: {}
  alerts:
    # enables alert rules
    enabled: false
    # namespace of the alert rules, defaults to the namespace of the POD
    namespace:
    additionalLabels: {}

# common log configuration
log:
  # target directory where log will be stored, leave empty for default
  logDir: ""
  # path where volume will be mounted
  volumeMountPath: /data/log
  # log volume type: none | hostpath | pvc
  volumeType: pvc
  # log volume hostpath, used with volumeType "hostpath"
  volumeHostPath: /mnt/log
  # log PVC parameters, used with volumeType "pvc"
  pvc:
    name: pulse-manage-agents-logs
    accessModes:
      - ReadWriteMany
    capacity: 10Gi
    class: <pv-storage-class-rw-many>

# application options
env:
  CLOUD: MULTICLOUD
  amGwsUrlInt: http://<gws-url-internal>
  amGwsPort: 80
  amGauthUrlInt: https://<gauth-url-internal>

# CORS functionality checks that Origin header in OPTIONS command is one of these.
whitelistedOrigins:
  - https://<pulse-host>

## Service account settings
serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

## Add annotations to all pods
##
podAnnotations: {}

## Add labels to all pods
##
podLabels: {}

## Specifies the security context for all Pods in the service
##
podSecurityContext:
  runAsNonRoot: true
  runAsUser: 500
  runAsGroup: 500
  fsGroup: 0

## Ingress configuration
ingress:
  enabled: true
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
    ## recommended to increase proxy-body-size size
    # nginx.ingress.kubernetes.io/proxy-body-size: 5m
  hosts:
    - host: "<pulse-manage-agents-host>"
      paths: [/]
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

gateway:
  enabled: false

## Resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
  limits:
    memory: 256Mi
    cpu: 200m
  requests:
    memory: 155Mi
    cpu: 10m

## HPA Settings
## Not supported in this release!
hpa:
  enabled: false

## Priority Class
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
##
priorityClassName: ""

## Node labels for assignment.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}

## Tolerations for assignment.
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []

## Pod Disruption Budget Settings
podDisruptionBudget:
  enabled: false

## Affinity for assignment.
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}

# control network policies
networkPolicies:
  enabled: false</source>
  
===Install pulse-manage-agents helm chart===
Run the following command to install pulse-manage-agents:
<source lang="bash">helm upgrade --install pulse-manage-agents pulsehelmrepo/pulse-manage-agents --wait --version=<chart-version> --namespace=pulse -f values-override-pulse-manage-agents.yaml</source>
If the installation is successful, the command finishes with exit code 0.

===Validate pulse-manage-agents helm chart===
To validate Helm chart initialization, run the following command:
<source lang="bash">oc get pods -n=pulse -l "app.kubernetes.io/name=pulse-manage-agents,app.kubernetes.io/instance=pulse-manage-agents"</source>
The output should show that all pulse-manage-agents pods are Running:
<source lang="text">NAME                                  READY  STATUS   RESTARTS  AGE
pulse-manage-agents-b999c7758-hscfz   1/1    Running  0         5d18h
pulse-manage-agents-b999c7758-s67zt   1/1    Running  0         5d18h</source>
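If you script your validation, the Running check above can be automated. A minimal sketch follows; the pod names are the sample values from this guide, and in a real cluster you would pipe in the live <tt>oc get pods ... --no-headers</tt> output:

```shell
# Sketch: fail unless every pod line reports STATUS "Running" (column 3).
# Real input: oc get pods -n=pulse \
#   -l "app.kubernetes.io/name=pulse-manage-agents,app.kubernetes.io/instance=pulse-manage-agents" \
#   --no-headers
all_running() {
  awk '$3 != "Running" { bad = 1 } END { exit bad }'
}

# Sample listing copied from the guide:
sample='pulse-manage-agents-b999c7758-hscfz   1/1    Running  0         5d18h
pulse-manage-agents-b999c7758-s67zt   1/1    Running  0         5d18h'

printf '%s\n' "$sample" | all_running && echo "all pods Running"
```

The function exits nonzero as soon as any pod reports a status other than Running, so it can gate later steps in a deployment script.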
 
 
|Status=No
}}{{Section
|sectionHeading=Validation
|alignment=Vertical
|structuredtext=Use the following procedures to validate the deployment.
===Check logs for error===
#To check the log files, run the following command:
#:<source lang="bash">kubectl get pods
kubectl logs <pulse-pod-id></source>Where: <tt>&lt;pulse-pod-id&gt;</tt> is the pod identifier.
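To scan a retrieved log for error-level entries, you can use a small grep filter. A minimal sketch follows; the log lines are illustrative, not real Pulse output:

```shell
# Sketch: count error-level entries in a captured pod log.
# Real input: kubectl logs <pulse-pod-id>
sample_log='2023-03-29 10:00:01 INFO  Pulse started
2023-03-29 10:00:02 ERROR connection refused
2023-03-29 10:00:03 INFO  retrying'
errors=$(printf '%s\n' "$sample_log" | grep -ci 'error')
echo "error lines: $errors"
```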
  
===Health validation===
#To download the health validation metrics, run the following command:
#:'''GET /actuator/metrics/pulse.health.all'''
#Open two Command Prompt windows, and run the following commands:
##'''Console 1''':
##:<source lang="bash">kubectl get pods --namespace pulse -l "app.kubernetes.io/name=pulse,app.kubernetes.io/instance=pulse" -o jsonpath="{.items[0].metadata.name}"
kubectl --namespace pulse port-forward <pod-name> 8090:8090</source>
##'''Console 2''':
##:<source lang="bash">curl -X GET http://127.0.0.1:8090/actuator/metrics/pulse.health.all -H 'Content-Type: application/json'</source>
#:If Pulse is running correctly and can connect to Redis and PostgreSQL, the following results appear:
#*http response is <tt>200</tt>
#*json response has <tt>measurements.statistic.value</tt> of <tt>1.0</tt>, for example:
#*:<source lang="bash">{
 
 
   "name": "pulse.health.all",
   "description": "Provides overall application status",
 
|sectionHeading=Troubleshooting
|alignment=Vertical
|structuredtext=If you encounter problems during deployment, examine the init Helm and Pulse Helm manifests.
===Check init Helm manifests===
To output init Helm manifest files into the '''helm-template''' directory, run the following command:
<source lang="bash">helm template --version=<chart-version> --namespace=pulse --debug --output-dir helm-template init pulsehelmrepo/init -f values-override-init.yaml</source>
Where: <tt>&lt;chart-version&gt;</tt> is the Helm chart version.
===Check Pulse Helm manifests===
To output Pulse Helm manifest files into the '''helm-template''' directory, run the following command:
<source lang="bash">helm template --version=<chart-version> --namespace=pulse --debug --output-dir helm-template pulse pulsehelmrepo/pulse -f values-override-pulse.yaml</source>
Where: <tt>&lt;chart-version&gt;</tt> is the Helm chart version.

|Status=No
}}{{Section
|sectionHeading=Configure security
|alignment=Vertical
|structuredtext====Arbitrary UIDs===
If your OpenShift deployment uses arbitrary UIDs, you must override the securityContext settings. By default, the user and group IDs are set to 500:500:500. If your deployment uses arbitrary UIDs, update the '''podSecurityContext''' section in the YAML file for each chart as discussed in {{Link-AnywhereElse|product=PrivateEdition|version=Current|manual=PEGuide|topic=ConfigSecurity}}.
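For example, on a cluster with arbitrary UIDs you can keep the relaxed securityContext in a small extra values file and pass it with an additional <tt>-f</tt> flag. This is a sketch: the values mirror the podSecurityContext shown in the GKE override files in this guide, and the file name is arbitrary:

```shell
# Sketch: write a values overlay that removes the fixed UID/GID settings,
# mirroring the podSecurityContext used in the GKE override files above.
cat > security-override.yaml <<'EOF'
podSecurityContext:
  fsGroup: null
  runAsUser: null
  runAsGroup: 0
  runAsNonRoot: true
EOF

# Pass it after the main override file, for example:
#   helm upgrade --install pulse pulsehelmrepo/pulse ... \
#     -f values-override-pulse.yaml -f security-override.yaml
grep -c 'null' security-override.yaml
```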
 
|Status=Yes
}}
|PEPageType=9c3ae89b-4f75-495b-85f8-d8c4afcb3f97
}}

Latest revision as of 16:01, March 29, 2023

This topic is part of the manual Genesys Pulse Private Edition Guide for version Current of Reporting.

Learn how to configure Genesys Pulse.

Prerequisites

Before you begin the steps on this page, complete the instructions on the Before you begin page.

Information you require for shared provisioning:

  • Versions:
    • <image-version> = 100.0.000.0015
    • <chart-versions>= 100.0.000+0015
  • K8S namespace pulse
  • Project Name pulse
  • Postgres credentials
    • <db-host>
    • <db-port>
    • <db-name>
    • <db-user>
    • <db-user-password>
    • <db-ssl-mode>
  • Docker credentials
    • <docker-registry>
    • <docker-registry-secret-name>
  • Redis credentials
    • <redis-host>
    • <redis-port>
    • <redis-password>
    • <redis-enable-ssl>
  • Tenant service variables
    • <tenant-uuid>
    • <tenant-sid>
    • <tenant-name>
    • <tenant-dcu>
  • GAuth/GWS service variables
    • <gauth-url-external>
    • <gauth-url-internal>
    • <gauth-client-id>
    • <gauth-client-secret>
    • <gws-url-external>
    • <gws-url-internal>
  • Storage class:
    • <pv-storage-class-rw-many>
  • Pulse:
    • <pulse-host>
    • <pulse-manage-agents-host> (optional)

Single namespace

Single namespace deployments have a software-defined networking (SDN) with multitenant mode, where namespaces are network isolated. If you plan to deploy Pulse into the single namespace, ensure that your environment meets the following requirements for inputs:

  • Back-end services deployed into the single namespace must include the string pulse:
    <db-host>
    <db-name>
    <redis-host>
  • The hostname used for Ingress must be unique, and must include the string pulse:
    <pulse-host>
    <pulse-manage-agents-host>
  • Internal service-to-service traffic must use the service endpoints, rather than the Ingress Controller:
    <gauth-url-internal>
    <gws-url-internal>
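These naming constraints can be sanity-checked from the shell. A minimal sketch follows; the names are hypothetical placeholders, not values from your environment:

```shell
# Sketch: verify that single-namespace resource names contain the string "pulse".
check_name() {
  case "$1" in
    *pulse*) echo "$1: OK" ;;
    *)       echo "$1: missing 'pulse'" ;;
  esac
}

# Hypothetical values; substitute your real <db-host>, <db-name>, <redis-host>,
# <pulse-host>, and <pulse-manage-agents-host>.
check_name pulse-db.internal.example
check_name pulse_meta
check_name pulse-redis.internal.example
```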

Override Helm chart values

For more information about overriding Helm chart values, see the suite-level documentation: Overriding Helm chart values.

Parameter Description Default Valid values
service.port Designer service to be exposed. 8888 A valid port.

Deployment

init Helm chart

Use this chart to initialize the shared PostgreSQL database.

Get init Helm chart

Run the following commands to get the chart:

helm repo update
helm search repo pulsehelmrepo/init

Prepare override-init file (GKE)

Create a file with the following content, entering appropriate values where indicated, and save the file as values-override-init.yaml:

# Default values for init.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
 
image:
  tag: "<image-version>"
  pullPolicy: IfNotPresent
  registry: "<docker-registry>"
  imagePullSecrets: [name: "<docker-registry-secret-name>"]
 
# tenant identification, or empty for shared deployment
tenants:
  - id:   "<tenant-uuid>"
    name: "<tenant-name>"
    key:  "<tenant-sid>"
    dcu:  "<tenant-dcu>"
    # optional, available starting with 100.0.000.0015    
    # enable_agent_management: true
 
# common configuration.
config:
  # set "true" to create config maps
  createConfigMap: true
  # set "true" to create secrets
  createSecret: true
 
  # Postgres config - fill when createConfigMap: true
  # Postgres config map name
  postgresConfig: "pulse-postgres-configmap"
  # Postgres hostname
  postgresHost: "<postgres-hostname>"
  # Postgres port
  postgresPort: "<postgres-port>"
  # Postgres SSL mode
  postgresEnableSSL: "<postgres-ssl-mode>"
 
  # Postgres secret config - fill when createSecret: true
  # Postgres User
  postgresUser: "<postgres-user>"
  # Postgres Password
  postgresPassword: "<postgres-password>"
  # Secret name for postgres
  postgresSecret: "pulse-postgres-secret"
  # Secret key for postgres user
  postgresSecretUser: "META_DB_ADMIN"
  # Secret key for postgres  password
  postgresSecretPassword: "META_DB_ADMINPWD"
   
  # Redis config - fill when createConfigMap: true
  # Redis config map name
  redisConfig: "pulse-redis-configmap"
  # Redis host
  redisHost: "<redis-hostname>"
  # Redis port
  redisPort: "<redis-port>"
  # Redis SSL enabled
  redisEnableSSL: "false"
 
  # Redis secret config - fill when createSecret: true
  # Password for Redis
  redisKey: "<redis-key>"
  # Secret name for Redis
  redisSecret: "pulse-redis-secret"
  # Secret key for Redis password
  redisSecretKey: "REDIS01_KEY"
   
  # GWS secret config - fill when createSecret: true
  # Client ID
  gwsClientId: "<gws-client-id>"
  # Client Secret
  gwsClientSecret: "<gws-client-secret>"
  # Secret name
  gwsSecret: "pulse-gws-secret"
  # Secret key for Client ID
  gwsSecretClientId: "clientId"
  # Secret key for Client Secret
  gwsSecretClientSecret: "clientSecret"
 
  # fill database name
  dbName: "<db-name>"
  # set "true" when need @host added for username
  dbUserWithHost: true
  # set "true" for CSI secrets
  mountSecrets: false
 
## Service account settings
serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""
 
## Add annotations to all pods
##
podAnnotations: {}
 
## Specifies the security context for all Pods in the service
##
podSecurityContext:
   fsGroup: null
   runAsUser: null
   runAsGroup: 0
   runAsNonRoot: true
 
## Resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
  limits:
    memory: 256Mi
    cpu: 200m
  requests:
    memory: 128Mi
    cpu: 100m
 
## Priority Class
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
##
priorityClassName: ""
 
## Node labels for assignment.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}
 
## Tolerations for assignment.
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []

Prepare override-init file (AKS)

Create a file with the following content, entering appropriate values where indicated, and save the file as values-override-init.yaml:

# Default values for init.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
 
image:
  tag: "<image-version>"
  pullPolicy: IfNotPresent
  registry: "<docker-registry>"
  imagePullSecrets: [name: "<docker-registry-secret-name>"]
 
# tenant identification, or empty for shared deployment
tenants:
  - id:   "<tenant-uuid>"
    name: "<tenant-name>"
    key:  "<tenant-sid>"
    dcu:  "<tenant-dcu>"
 
# common configuration.
config:
  # set "true" to create config maps
  createConfigMap: true
  # set "true" to create secrets
  createSecret: true
 
  # Postgres config - fill when createConfigMap: true
  # Postgres config map name
  postgresConfig: "pulse-postgres-configmap"
  # Postgres hostname
  postgresHost: "<postgres-hostname>"
  # Postgres port
  postgresPort: "<postgres-port>"
  # Postgres SSL mode
  postgresEnableSSL: "<postgres-ssl-mode>"
 
  # Postgres secret config - fill when createSecret: true
  # Postgres User
  postgresUser: "<postgres-user>"
  # Postgres Password
  postgresPassword: "<postgres-password>"
  # Secret name for postgres
  postgresSecret: "pulse-postgres-secret"
  # Secret key for postgres user
  postgresSecretUser: "META_DB_ADMIN"
  # Secret key for postgres  password
  postgresSecretPassword: "META_DB_ADMINPWD"
   
  # Redis config - fill when createConfigMap: true
  # Redis config map name
  redisConfig: "pulse-redis-configmap"
  # Redis host
  redisHost: "<redis-hostname>"
  # Redis port
  redisPort: "<redis-port>"
  # Redis SSL enabled
  redisEnableSSL: "false"
 
  # Redis secret config - fill when createSecret: true
  # Password for Redis
  redisKey: "<redis-key>"
  # Secret name for Redis
  redisSecret: "pulse-redis-secret"
  # Secret key for Redis password
  redisSecretKey: "REDIS01_KEY"
   
  # GWS secret config - fill when createSecret: true
  # Client ID
  gwsClientId: "<gws-client-id>"
  # Client Secret
  gwsClientSecret: "<gws-client-secret>"
  # Secret name
  gwsSecret: "pulse-gws-secret"
  # Secret key for Client ID
  gwsSecretClientId: "clientId"
  # Secret key for Client Secret
  gwsSecretClientSecret: "clientSecret"
 
  # fill database name
  dbName: "<db-name>"
  # set "true" when need @host added for username
  dbUserWithHost: true
  # set "true" for CSI secrets
  mountSecrets: false
 
## Service account settings
serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""
 
## Add annotations to all pods
##
podAnnotations: {}
 
## Specifies the security context for all Pods in the service
##
podSecurityContext: {}
 
## Resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
  limits:
    memory: 256Mi
    cpu: 200m
  requests:
    memory: 128Mi
    cpu: 100m
 
## Priority Class
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
##
priorityClassName: ""
 
## Node labels for assignment.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}
 
## Tolerations for assignment.
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []

Install init Helm chart

Run:

helm upgrade --install pulse-init pulsehelmrepo/init --wait --wait-for-jobs --version=<chart-version> --namespace=pulse -f values-override-init.yaml

If the installation is successful, the command finishes with exit code 0.

Validate init Helm chart

To validate Helm chart initialization, run the following command:

kubectl get pods -n=pulse -l "app.kubernetes.io/name=init,app.kubernetes.io/instance=pulse-init"
NAME                   READY   STATUS      RESTARTS   AGE
pulse-init-job-5669c   0/1     Completed   0          79m
If the initialization was successful, the Pulse-init job has a Status of Completed.
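This check can also be scripted. A minimal sketch follows; the sample line is the listing above, and in a real cluster you would pipe in the live kubectl get pods ... --no-headers output:

```shell
# Sketch: succeed only if the init job pod reached STATUS "Completed" (column 3).
# Real input: kubectl get pods -n=pulse \
#   -l "app.kubernetes.io/name=init,app.kubernetes.io/instance=pulse-init" --no-headers
job_completed() {
  awk '$3 == "Completed" { ok = 1 } END { exit !ok }'
}

# Sample listing copied from the guide:
printf 'pulse-init-job-5669c   0/1     Completed   0          79m\n' \
  | job_completed && echo "init job completed"
```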

Install pulse Helm chart

Use this chart to install the shared part.

Get pulse Helm chart

helm repo update
helm search repo pulsehelmrepo/pulse

Prepare override-pulse file (GKE)

Create a file with the following content, entering appropriate values where indicated, and save the file as values-override-pulse.yaml:

# Default values for pulse.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
 
image:
  tag: "<image-version>"
  pullPolicy: IfNotPresent
  registry: "<docker-registry>"
  imagePullSecrets: [name: "<docker-registry-secret-name>"]
 
replicaCount: 2
 
# common configuration.
config:
  dbName: "<db-name>"
  # set "true" when need @host added for username
  dbUserWithHost: true
  # set "true" for CSI secrets
  mountSecrets: false
  # Postgres config map name
  postgresConfig: "pulse-postgres-configmap"
  # Postgres secret name
  postgresSecret: "pulse-postgres-secret"
  # Postgres secret key for user
  postgresSecretUser: "META_DB_ADMIN"
  # Postgres secret key for password
  postgresSecretPassword: "META_DB_ADMINPWD"
  # Redis config map name
  redisConfig: "pulse-redis-configmap"
  # Redis secret name
  redisSecret: "pulse-redis-secret"
  # Redis secret key for access key
  redisSecretKey: "REDIS01_KEY"
  # GAuth secret name
  gwsSecret: "pulse-gws-secret"
  # GAuth secret key for client_id
  gwsSecretClientId: "clientId"
  # GAuth secret key for client_secret
  gwsSecretClientSecret: "clientSecret"
 
# monitoring settings
monitoring:
  # enable the Prometheus metrics endpoint
  enabled: false
  # port is <options.managementPort>
  # HTTP path is <options.managementContext><options.prometheusEndpoint>
  # additional annotations required for monitoring PODs
  # you can reference values of other variables as {{.Values.variable.full.name}}
  podAnnotations: {}
    # prometheus.io/scrape: "true"
    # prometheus.io/port: "{{.Values.options.managementPort}}"
    # prometheus.io/path: "{{.Values.options.managementContext}}{{.Values.options.prometheusEndpoint}}"
  serviceMonitor:
    # enables ServiceMonitor creation
    enabled: false
    # interval at which metrics should be scraped
    scrapeInterval: 30s
    # timeout after which the scrape is ended
    scrapeTimeout:
    # namespace of the ServiceMonitor, defaults to the namespace of the service
    namespace:
    additionalLabels: {}
 
# common log configuration
log:
  # target directory where log will be stored, leave empty for default
  logDir: ""
  # path where volume will be mounted
  volumeMountPath: /data/log
  # log volume type: none | hostpath | pvc
  volumeType: pvc
  # log volume hostpath, used with volumeType "hostpath"
  volumeHostPath: /mnt/log
  # log PVC parameters, used with volumeType "pvc"
  pvc:
    name: pulse-logs
    accessModes:
      - ReadWriteMany
    capacity: 10Gi
    class: <pv-storage-class-rw-many>
 
# application options
options:
  authUrl: "https://<gauth-url-external>"
  authUrlInt: "http://<gauth-url-internal>"
  gwsUrl: "https://<gws-url-external>"
  gwsUrlInt: "http://<gws-url-internal>"
  # optional, required for Manage Agents since 100.0.000.0015
  # advancedServiceUrl: "https://<pulse-manage-agents-host>"

## Service account settings
serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""
 
## Add annotations to all pods
##
podAnnotations: {}
 
## Add labels to all pods
##
podLabels: {}
 
## Specifies the security context for all Pods in the service
##
podSecurityContext:
   fsGroup: null
   runAsUser: null
   runAsGroup: 0
   runAsNonRoot: true
 
## Ingress configuration
ingress:
  enabled: true
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
    ## recommended to increase proxy-body-size size
    # nginx.ingress.kubernetes.io/proxy-body-size: 5m
  hosts:
    - host: "<pulse-host>"
      paths: [/]
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local
 
gateway:
  enabled: false
 
## Resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
  limits:
    memory: 4Gi
    cpu: 1
  requests:
    memory: 650Mi
    cpu: 100m
 
## HPA Settings
## Not supported in this release!
hpa:
  enabled: false
 
## Priority Class
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
##
priorityClassName: ""
 
## Node labels for assignment.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}
 
## Tolerations for assignment.
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
 
## Pod Disruption Budget Settings
podDisruptionBudget:
  enabled: false
 
## Affinity for assignment.
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}
 
# control network policies
networkPolicies:
  enabled: false

Prepare override-pulse file (AKS)

Create a file with the following content, entering appropriate values where indicated, and save the file as values-override-pulse.yaml:

# Default values for pulse.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
 
image:
  tag: "<image-version>"
  pullPolicy: IfNotPresent
  registry: "<docker-registry>"
  imagePullSecrets: [name: "<docker-registry-secret-name>"]
 
replicaCount: 2
 
# common configuration.
config:
  dbName: "<db-name>"
  # set "true" when need @host added for username
  dbUserWithHost: true
  # set "true" for CSI secrets
  mountSecrets: false
  # Postgres config map name
  postgresConfig: "pulse-postgres-configmap"
  # Postgres secret name
  postgresSecret: "pulse-postgres-secret"
  # Postgres secret key for user
  postgresSecretUser: "META_DB_ADMIN"
  # Postgres secret key for password
  postgresSecretPassword: "META_DB_ADMINPWD"
  # Redis config map name
  redisConfig: "pulse-redis-configmap"
  # Redis secret name
  redisSecret: "pulse-redis-secret"
  # Redis secret key for access key
  redisSecretKey: "REDIS01_KEY"
  # GAuth secret name
  gwsSecret: "pulse-gws-secret"
  # GAuth secret key for client_id
  gwsSecretClientId: "clientId"
  # GAuth secret key for client_secret
  gwsSecretClientSecret: "clientSecret"
 
# monitoring settings
monitoring:
  # enable the Prometheus metrics endpoint
  enabled: false
  # port is <options.managementPort>
  # HTTP path is <options.managementContext><options.prometheusEndpoint>
  # additional annotations required for monitoring PODs
  # you can reference values of other variables as {{.Values.variable.full.name}}
  podAnnotations: {}
    # prometheus.io/scrape: "true"
    # prometheus.io/port: "{{.Values.options.managementPort}}"
    # prometheus.io/path: "{{.Values.options.managementContext}}{{.Values.options.prometheusEndpoint}}"
  serviceMonitor:
    # enables ServiceMonitor creation
    enabled: false
    # interval at which metrics should be scraped
    scrapeInterval: 30s
    # timeout after which the scrape is ended
    scrapeTimeout:
    # namespace of the ServiceMonitor, defaults to the namespace of the service
    namespace:
    additionalLabels: {}
 
# common log configuration
log:
  # target directory where log will be stored, leave empty for default
  logDir: ""
  # path where volume will be mounted
  volumeMountPath: /data/log
  # log volume type: none | hostpath | pvc
  volumeType: pvc
  # log volume hostpath, used with volumeType "hostpath"
  volumeHostPath: /mnt/log
  # log PVC parameters, used with volumeType "pvc"
  pvc:
    name: pulse-logs
    accessModes:
      - ReadWriteMany
    capacity: 10Gi
    class: <pv-storage-class-rw-many>
 
# application options
options:
  authUrl: "https://<gauth-url-external>"
  authUrlInt: "http://<gauth-url-internal>"
  gwsUrl: "https://<gws-url-external>"
  gwsUrlInt: "http://<gws-url-internal>"
 
## Service account settings
serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""
 
## Add annotations to all pods
##
podAnnotations: {}
 
## Add labels to all pods
##
podLabels: {}
 
## Specifies the security context for all Pods in the service
##
podSecurityContext: {}
 
## Ingress configuration
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
  hosts:
  - host: <pulse-host>
    paths:
    - /
  tls:
  - hosts:
    - <pulse-host>
    secretName: pulse-cert-tls
 
gateway:
  enabled: false
 
## Resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
  limits:
    memory: 4Gi
    cpu: 1
  requests:
    memory: 650Mi
    cpu: 100m
 
## HPA Settings
## Not supported in this release!
hpa:
  enabled: false
 
## Priority Class
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
##
priorityClassName: ""
 
## Node labels for assignment.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}
 
## Tolerations for assignment.
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
 
## Pod Disruption Budget Settings
podDisruptionBudget:
  enabled: false
 
## Affinity for assignment.
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}
 
# control network policies
networkPolicies:
  enabled: false

Install pulse Helm chart

Run:

helm upgrade --install pulse pulsehelmrepo/pulse --wait --version=<chart-version> --namespace=pulse -f values-override-pulse.yaml

If installation is successful, the command finishes with exit code 0.

Validate pulse Helm chart

To list all running Pulse pods, run the following command:

kubectl get pods -n=pulse -l "app.kubernetes.io/name=pulse,app.kubernetes.io/instance=pulse"
NAME                     READY   STATUS    RESTARTS   AGE
pulse-648b9d6666-f5d84   1/1     Running   0          22m
pulse-648b9d6666-kqhs6   1/1     Running   0          68m
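If you script this check, the STATUS column can be verified automatically. The following sketch (the `all_pods_running` helper is hypothetical, not part of the Pulse tooling) filters `kubectl get pods` output and succeeds only when every pod is Running:

```shell
#!/bin/sh
# Hypothetical helper: reads `kubectl get pods` output on stdin and
# exits non-zero if any pod is not in the Running state.
all_pods_running() {
  awk 'NR > 1 && $3 != "Running" { bad = 1 } END { exit bad }'
}

# Sample output, as shown above; in a live cluster, pipe the kubectl
# command directly into the helper instead.
sample='NAME                     READY   STATUS    RESTARTS   AGE
pulse-648b9d6666-f5d84   1/1     Running   0          22m
pulse-648b9d6666-kqhs6   1/1     Running   0          68m'

if printf '%s\n' "$sample" | all_pods_running; then
  status=ok
else
  status=failed
fi
echo "pulse pod check: $status"
```

In a live cluster, you would pipe the `kubectl get pods -n=pulse -l "app.kubernetes.io/name=pulse,app.kubernetes.io/instance=pulse"` output into `all_pods_running` instead of the sample text.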

Install pulse-manage-agents Helm chart

Use this chart to install the optional shared service for Manage Agents functionality. This feature is available starting with release 100.0.000.0015.

Get pulse-manage-agents Helm chart

Run the following commands to get the chart:

helm repo update
helm search repo pulsehelmrepo/pulse-manage-agents

Prepare override file

Create a file with the following content, entering appropriate values where indicated, and save the file as values-override-pulse-manage-agents.yaml:

# Default values for pulse-manage-agents.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
image:
  tag: "<image-version>"
  pullPolicy: IfNotPresent
  registry: "<docker-registry>"
  imagePullSecrets: [name: "<docker-registry-secret-name>"]
 
replicaCount: 2
 
# monitoring settings
monitoring:
  # enable the Prometheus metrics endpoint
  enabled: false
  # port number of the Prometheus metrics endpoint
  port: 9103
  # additional annotations required for monitoring PODs
  # you can reference values of other variables as {{.Values.variable.full.name}}
  podAnnotations: {}
  # additional annotations required for monitoring Service
  # you can reference values of other variables as {{.Values.variable.full.name}}
  serviceAnnotations: {}
    # prometheus.io/scrape: "true"
    # prometheus.io/port: "{{.Values.monitoring.port}}"
    # prometheus.io/path: "/metrics"
  serviceMonitor:
    # enables ServiceMonitor creation
    enabled: false
    # interval at which metrics should be scraped
    scrapeInterval: 30s
    # timeout after which the scrape is ended
    scrapeTimeout:
    # namespace of the ServiceMonitor, defaults to the namespace of the service
    namespace:
    additionalLabels: {}
  dashboards:
    # enables Grafana dashboards
    enabled: true
    # namespace of the ConfigMap with Grafana dashboards,
    # defaults to the namespace of the POD
    namespace:
    additionalLabels: {}
  alerts:
    # enables alert rules
    enabled: false
    # namespace of the alert rules, defaults to the namespace of the POD
    namespace:
    additionalLabels: {}
 
# common log configuration
log:
  # target directory where log will be stored, leave empty for default
  logDir: ""
  # path where volume will be mounted
  volumeMountPath: /data/log
  # log volume type: none | hostpath | pvc
  volumeType: pvc
  # log volume hostpath, used with volumeType "hostpath"
  volumeHostPath: /mnt/log
  # log PVC parameters, used with volumeType "pvc"
  pvc:
    name: pulse-manage-agents-logs
    accessModes:
      - ReadWriteMany
    capacity: 10Gi
    class: <pv-storage-class-rw-many>
 
# application options
env:
  CLOUD: MULTICLOUD
  amGwsUrlInt: http://<gws-url-internal>
  amGwsPort: 80
  amGauthUrlInt: https://<gauth-url-internal>
 
# CORS checks that the Origin header in preflight OPTIONS requests matches one of these origins.
whitelistedOrigins:
  - https://<pulse-host>
 
## Service account settings
serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""
 
## Add annotations to all pods
##
podAnnotations: {}
 
## Add labels to all pods
##
podLabels: {}
 
## Specifies the security context for all Pods in the service
##
podSecurityContext:
  runAsNonRoot: true
  runAsUser: 500
  runAsGroup: 500
  fsGroup: 0
 
## Ingress configuration
ingress:
  enabled: true
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
    ## recommended to increase proxy-body-size size
    # nginx.ingress.kubernetes.io/proxy-body-size: 5m
  hosts:
    - host: "<pulse-manage-agents-host>"
      paths: [/]
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local
 
gateway:
  enabled: false
 
## Resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
  limits:
    memory: 256Mi
    cpu: 200m
  requests:
    memory: 155Mi
    cpu: 10m
 
## HPA Settings
## Not supported in this release!
hpa:
  enabled: false
 
## Priority Class
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
##
priorityClassName: ""
 
## Node labels for assignment.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}
 
## Tolerations for assignment.
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
 
## Pod Disruption Budget Settings
podDisruptionBudget:
  enabled: false
 
## Affinity for assignment.
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}
 
# control network policies
networkPolicies:
  enabled: false

Install pulse-manage-agents Helm chart

Run the following command to install pulse-manage-agents:

helm upgrade --install pulse-manage-agents pulsehelmrepo/pulse-manage-agents --wait --version=<chart-version> --namespace=pulse -f values-override-pulse-manage-agents.yaml

If the installation is successful, the command finishes with exit code 0.

Validate pulse-manage-agents Helm chart

To validate Helm chart initialization, run the following command:

kubectl get pods -n=pulse -l "app.kubernetes.io/name=pulse-manage-agents,app.kubernetes.io/instance=pulse-manage-agents"

The output should show that all pulse-manage-agents pods are Running:

NAME                                  READY   STATUS    RESTARTS   AGE
pulse-manage-agents-b999c7758-hscfz   1/1     Running   0          5d18h
pulse-manage-agents-b999c7758-s67zt   1/1     Running   0          5d18h

Validation

Use the following procedures to validate the deployment.

Check logs for errors

  1. To check the log files, run the following commands:
    kubectl get pods
    kubectl logs <pulse-pod-id>
    Where: <pulse-pod-id> is the pod identifier.
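As a sketch of automating this check (the log lines below are made up for illustration; Pulse's real log format may differ), you can count ERROR or FATAL entries in a captured log:

```shell
#!/bin/sh
# Illustrative only: the log content below is fabricated. In practice,
# capture the real log first, for example:
#   kubectl logs <pulse-pod-id> > pulse.log
log='2024-05-01 10:00:00,123 INFO  Pulse started
2024-05-01 10:00:05,456 ERROR Cannot connect to Redis'

# Count lines that report errors.
errors=$(printf '%s\n' "$log" | grep -cE 'ERROR|FATAL')
echo "error lines found: $errors"
```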

Health validation

  1. Health validation metrics are available from the following HTTP endpoint:
    GET /actuator/metrics/pulse.health.all
  2. Open two terminal windows, and run the following commands:
    1. Console 1:
      kubectl get pods --namespace pulse -l "app.kubernetes.io/name=pulse,app.kubernetes.io/instance=pulse" -o jsonpath="{.items[0].metadata.name}"
      kubectl --namespace pulse port-forward <pod-name> 8090:8090
    2. Console 2:
      curl -X GET http://127.0.0.1:8090/actuator/metrics/pulse.health.all -H 'Content-Type: application/json'
    If Pulse is running correctly and can connect to Redis and PostgreSQL, the following results appear:
    • The HTTP response code is 200
    • The JSON response reports a measurements value of 1.0, for example:
      {
        "name": "pulse.health.all",
        "description": "Provides overall application status",
        "baseUnit": "Boolean",
        "measurements": [
          {
            "statistic": "VALUE",
            "value": 1
          }
        ],
        "availableTags": [
          {
            "tag": "deployment.code",
            "values": [
              "pulse"
            ]
          },
          {
            "tag": "application.name",
            "values": [
              "pulse"
            ]
          }
        ]
      }
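If you want to script this check without extra JSON tooling, the value can be pulled out with sed. This is a rough sketch that assumes the response contains a single "value" field, as in the example above:

```shell
#!/bin/sh
# In practice, capture the live response, for example:
#   response=$(curl -s http://127.0.0.1:8090/actuator/metrics/pulse.health.all)
# Here a trimmed sample of the response shown above is used instead.
response='{"name":"pulse.health.all","measurements":[{"statistic":"VALUE","value":1}]}'

# Extract the numeric "value" field from the JSON.
health=$(printf '%s' "$response" | sed -n 's/.*"value"[[:space:]]*:[[:space:]]*\([0-9.]*\).*/\1/p')
echo "pulse.health.all value: $health"
```

A value of 1 means Pulse reports itself healthy; anything else warrants checking the logs and the Redis and PostgreSQL connections.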

Troubleshooting

If you encounter problems during deployment, examine the init Helm and Pulse Helm manifests.

Check init Helm manifests

To output init Helm manifest files into the helm-template directory, run the following command:

helm template --version=<chart-version> --namespace=pulse --debug --output-dir helm-template init pulsehelmrepo/init -f values-override-init.yaml

Where: <chart-version> is the Helm chart version.

Check Pulse Helm manifests

To output Pulse Helm manifest files into the helm-template directory, run the following command:

helm template --version=<chart-version> --namespace=pulse --debug --output-dir helm-template pulse pulsehelmrepo/pulse -f values-override-pulse.yaml

Where: <chart-version> is the Helm chart version.

Configure security

Arbitrary UIDs

If your OpenShift deployment uses arbitrary UIDs, you must override the securityContext settings. By default, the user and group IDs are set to 500:500:500. Update the podSecurityContext section in the values override file for each chart as described in OpenShift security settings.
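The exact override depends on your cluster's security policy. As a rough sketch (these values are an assumption, not the documented defaults; verify them against the OpenShift security settings topic), an arbitrary-UID override typically removes the fixed user ID so OpenShift can assign one from the project's range:

```yaml
# Hypothetical podSecurityContext override for arbitrary-UID clusters.
# Confirm the exact values in the OpenShift security settings topic.
podSecurityContext:
  runAsNonRoot: true
  runAsUser: null   # let OpenShift assign a UID from the namespace range
  runAsGroup: 0
  fsGroup: null
```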


Comments or questions about this documentation? Contact us for support!