PEC-REP/Current/PulsePEGuide/Provision
 
|sectionHeading=Prerequisites
|alignment=Vertical
|structuredtext=Before performing the steps described on this page, complete the {{Link-SomewhereInThisVersion|manual=PulsePEGuide|topic=Planning}} instructions, and ensure that you have the following information:
*Versions:
**<image-version> = 100.0.000.0015
**<chart-versions> = 100.0.000+0015
*K8S namespace: pulse
*Project Name: pulse
*Postgres credentials:
**<db-host>
**<db-port>
 
**<db-user>
**<db-user-password>
**<db-superuser>
**<db-superuser-password>
**<db-ssl-mode>
*Docker credentials:
**<docker-registry>
**<docker-registry-secret-name>
*Redis credentials:
**<redis-host>
**<redis-port>
**<redis-password>
**<redis-enable-ssl>
*Tenant service variables:
**<tenant-uuid>
**<tenant-sid>
**<tenant-name>
**<tenant-dcu>
*GAuth/GWS service variables:
**<gauth-url-external>
**<gauth-url-internal>
**<gauth-client-id>
**<gauth-client-secret>
**<gws-url-external>
**<gws-url-internal>
*Storage class:
**<pv-storage-class-rw-many>
**<pv-storage-class-rw-once>
*Pulse:
**<pulse-host>
{{AnchorDiv|SingleNamespace}}
===Single namespace===
Single-namespace deployments use software-defined networking (SDN) in multitenant mode, where namespaces are network-isolated. If you plan to deploy Pulse into a single namespace, ensure that your environment meets the following requirements:

*Back-end services deployed into the single namespace must include the string ''pulse'':
**<db-host>
**<db-name>
**<redis-host>
*The hostname used for Ingress must be unique, and must include the string ''pulse'':
**<pulse-host>
*Internal service-to-service traffic must use the service endpoints, rather than the Ingress Controller:
**<gauth-url-internal>
**<gws-url-internal>
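The ''pulse''-substring rule above can be sanity-checked from a shell before deployment. This is only a sketch; the values below are hypothetical examples, so substitute your actual <db-host>, <db-name>, and <redis-host> values:

```shell
# Hypothetical host/name values; replace with the values collected in the prerequisites.
for value in "pulse-db.example.internal" "pulsedb" "pulse-redis.example.internal"; do
  case "$value" in
    *pulse*) echo "$value: contains 'pulse'" ;;
    *)       echo "$value: does NOT contain 'pulse'" ;;
  esac
done
```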

|Status=No
 
}}{{Section
|sectionHeading=Tenant provisioning
|alignment=Vertical
|structuredtext====Install init-tenant helm chart===
'''Get the <tt>init-tenant</tt> helm chart:'''
<source lang="bash">helm repo update
helm search repo <pulsehelmrepo>/init-tenant</source>

'''Prepare the override file:'''

*Update the <tt>values-override-init-tenant.yaml</tt> file (AKS):
*:<source lang="bash"># Default values for init-tenant.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

# * Images
# Replace for your values: registry and secret
image:
  tag: "<image-version>"
  pullPolicy: IfNotPresent
  registry: "<docker-registry>"
  imagePullSecrets: [name: "<docker-registry-secret-name>"]

configurator:
  enabled: true
  # set service domain used to access voice service
  # example for GKE VPC case: voice.svc.gke1-uswest1.gcpe002.gencpe.com
  voiceDomain: "voice.svc.<domain>"
  # set service domain used to access ixn service
  # example for GKE VPC case: ixn.svc.gke1-uswest1.gcpe002.gencpe.com
  ixnDomain: "ixn.svc.<domain>"
  # set service domain used to access pulse service
  # example for GKE VPC case: pulse.svc.gke1-uswest1.gcpe002.gencpe.com
  pulseDomain: "pulse.svc.<domain>"
  # set configuration server user, used when creating secrets
  cfgUser: "default"
  # set configuration server password, used when creating secrets
  cfgPassword: "password"
  # set configuration server host
  cfgHost: "tenant-9350e2fc-a1dd-4c65-8d40-1f75a2e080dd.voice.svc.<domain>"

log:
  # target directory where log will be stored, leave empty for default
  logDir: ""
  # path where volume will be mounted
  volumeMountPath: /data/log
  # log volume type: none | hostpath | pvc
  volumeType: none
  # log volume hostpath, used with volumeType "hostpath"
  volumeHostPath: /mnt/log
  # log PVC parameters, used with volumeType "pvc"
  pvc:
    name: pulse-init-tenant-logs
    accessModes:
      - ReadWriteMany
    capacity: 10Gi
    class: <pv-storage-class-rw-many>

# * Tenant info
# Replace for your values
tenant:
  # Tenant UUID
  id: <tenant-uuid>
  # Tenant SID (like 0001)
  sid: <tenant-sid>

# common configuration.
config:
  dbName: "<db-name>"
  # set "true" when need @host added for username
  dbUserWithHost: true
  # set "true" for CSI secrets
  mountSecrets: false
  # Postgres config map name
  postgresConfig: "pulse-postgres-configmap"
  # Postgres secret name
  postgresSecret: "pulse-postgres-secret"
  # Postgres secret key for user
  postgresSecretUser: "META_DB_ADMIN"
  # Postgres secret key for password
  postgresSecretPassword: "META_DB_ADMINPWD"

## Service account settings
serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

## Add annotations to all pods
##
podAnnotations: {}

## Specifies the security context for all Pods in the service
##
podSecurityContext: {}

## Resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
  limits:
    memory: 256Mi
    cpu: 200m
  requests:
    memory: 128Mi
    cpu: 100m

## Priority Class
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
##
priorityClassName: ""

## Node labels for assignment.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}

## Tolerations for assignment.
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []

# * Templates
templates:
  - Agent_Group_Status.gpb
  - Agent_KPIs.gpb
  - Agent_Login.gpb
  - Alert_Widget.gpb
  - Callback_Activity.gpb
  - Campaign_Activity.gpb
  - Campaign_Callback_Status.gpb
  - Campaign_Group_Activity.gpb
  - Campaign_Group_Status.gpb
  - Chat_Agent_Activity.gpb
  - Chat_Queue_Activity.gpb
  - Chat_Service_Level_Performance.gpb
  - Chat_Waiting_Statistics.gpb
  - Email_Agent_Activity.gpb
  - Email_Queue_Activity.gpb
  - Facebook_Media_Activity.gpb
  - IFRAME.gpb
  - IWD_Agent_Activity.gpb
  - IWD_Queue_Activity.gpb
  - Queue_KPIs.gpb
  - Queue_Overflow_Reason.gpb
  - Static_Text.gpb
  - Twitter_Media_Activity.gpb
  - eServices_Agent_Activity.gpb
  - eServices_Queue_KPIs.gpb</source>

*Update the <tt>values-override-init-tenant.yaml</tt> file (GKE):
*:{{NoteFormat|Enable configurator only for configurations in GKE with VPC-scoped DNS.}}

*:<source lang="bash"># Default values for init-tenant.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

# * Images
# Replace for your values: registry and secret
image:
  tag: "<image-version>"
  pullPolicy: IfNotPresent
  registry: "<docker-registry>"
  imagePullSecrets: [name: "<docker-registry-secret-name>"]

configurator:
  enabled: true
  # set service domain used to access voice service
  # example for GKE VPC case: voice.svc.gke1-uswest1.gcpe002.gencpe.com
  voiceDomain: "voice.svc.<domain>"
  # set service domain used to access ixn service
  # example for GKE VPC case: ixn.svc.gke1-uswest1.gcpe002.gencpe.com
  ixnDomain: "ixn.svc.<domain>"
  # set service domain used to access pulse service
  # example for GKE VPC case: pulse.svc.gke1-uswest1.gcpe002.gencpe.com
  pulseDomain: "pulse.svc.<domain>"
  # set configuration server user, used when creating secrets
  cfgUser: "default"
  # set configuration server password, used when creating secrets
  cfgPassword: "password"
  # set configuration server host
  cfgHost: "tenant-<tenant-uuid>.voice.svc.<domain>"

log:
  # target directory where log will be stored, leave empty for default
  logDir: ""
  # path where volume will be mounted
  volumeMountPath: /data/log
  # log volume type: none | hostpath | pvc
  volumeType: none
  # log volume hostpath, used with volumeType "hostpath"
  volumeHostPath: /mnt/log
  # log PVC parameters, used with volumeType "pvc"
  pvc:
    name: pulse-init-tenant-logs
    accessModes:
      - ReadWriteMany
    capacity: 10Gi
    class: nfs-client

# * Tenant info
# Replace for your values
tenant:
  # Tenant UUID
  id: <tenant-uuid>
  # Tenant SID (like 0001)
  sid: <tenant-sid>

# common configuration.
config:
  dbName: "<db-name>"
  # set "true" when need @host added for username
  dbUserWithHost: true
  # set "true" for CSI secrets
  mountSecrets: false
  # Postgres config map name
  postgresConfig: "pulse-postgres-configmap"
  # Postgres secret name
  postgresSecret: "pulse-postgres-secret"
  # Postgres secret key for user
  postgresSecretUser: "META_DB_ADMIN"
  # Postgres secret key for password
  postgresSecretPassword: "META_DB_ADMINPWD"

## Service account settings
serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

## Add annotations to all pods
##
podAnnotations: {}

## Specifies the security context for all Pods in the service
##
podSecurityContext:
  fsGroup: null
  runAsUser: null
  runAsGroup: 0
  runAsNonRoot: true

## Resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
  limits:
    memory: 256Mi
    cpu: 200m
  requests:
    memory: 128Mi
    cpu: 100m

## Priority Class
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
##
priorityClassName: ""

## Node labels for assignment.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}

## Tolerations for assignment.
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []

# * Templates
templates:
  - Agent_Group_Status.gpb
  - Agent_KPIs.gpb
  - Agent_Login.gpb
  - Alert_Widget.gpb
  - Callback_Activity.gpb
  - Campaign_Activity.gpb
  - Campaign_Callback_Status.gpb
  - Campaign_Group_Activity.gpb
  - Campaign_Group_Status.gpb
  - Chat_Agent_Activity.gpb
  - Chat_Queue_Activity.gpb
  - Chat_Service_Level_Performance.gpb
  - Chat_Waiting_Statistics.gpb
  - Email_Agent_Activity.gpb
  - Email_Queue_Activity.gpb
  - Facebook_Media_Activity.gpb
  - IFRAME.gpb
  - IWD_Agent_Activity.gpb
  - IWD_Queue_Activity.gpb
  - Queue_KPIs.gpb
  - Queue_Overflow_Reason.gpb
  - Static_Text.gpb
  - Twitter_Media_Activity.gpb
  - eServices_Agent_Activity.gpb
  - eServices_Queue_KPIs.gpb
</source>
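The override file still contains angle-bracket placeholders at this point. As a sketch of one way to fill them in bulk before installing (the file name and placeholder names come from this page; the substituted values and the <tt>/tmp</tt> path are invented examples), <tt>sed</tt> can rewrite them in place:

```shell
# Sketch only: create a tiny stand-in file so the example is self-contained;
# in practice run sed against your real values-override-init-tenant.yaml.
cat > /tmp/values-override-init-tenant.yaml <<'EOF'
image:
  tag: "<image-version>"
  registry: "<docker-registry>"
EOF
# Substitute hypothetical example values for the placeholders.
sed -i \
  -e 's/<image-version>/100.0.000.0015/' \
  -e 's|<docker-registry>|registry.example.com|' \
  /tmp/values-override-init-tenant.yaml
grep 'tag:' /tmp/values-override-init-tenant.yaml
# →   tag: "100.0.000.0015"
```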

'''Install the <tt>init-tenant</tt> helm chart:'''<br />
To install the <tt>init-tenant</tt> helm chart, run the following command:
<source lang="bash">
helm upgrade --install "pulse-init-tenant-<tenant-sid>" pulsehelmrepo/init-tenant --wait --wait-for-jobs --version="<chart-version>" --namespace=pulse -f values-override-init-tenant.yaml
</source>
If installation is successful, the exit code <tt>0</tt> appears.

'''Validate the <tt>init-tenant</tt> helm chart:'''<br />
To validate the <tt>init-tenant</tt> helm chart, run the following command:
<source lang="bash">
kubectl get pods -n="pulse" -l "app.kubernetes.io/name=init-tenant,app.kubernetes.io/instance=pulse-init-tenant-<tenant-sid>"
</source>
If the deployment was successful, the <tt>pulse-init-tenant</tt> job is listed as <tt>Completed</tt>.
For example:
<source lang="bash">
NAME                                    READY  STATUS      RESTARTS  AGE
pulse-init-tenant-100-job-qszgl          0/1    Completed  0          2d20h
</source>
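Assuming the <tt>kubectl get pods</tt> output has the shape shown above, the job status can also be checked in a script. This is a sketch; the sample line is copied from the example output, and in practice you would pipe the real <tt>kubectl</tt> output instead:

```shell
# Sample line taken from the expected output above; replace with:
#   kubectl get pods -n="pulse" -l "..." --no-headers
sample='pulse-init-tenant-100-job-qszgl          0/1    Completed  0          2d20h'
# STATUS is the third whitespace-separated column.
status=$(echo "$sample" | awk '{print $3}')
if [ "$status" = "Completed" ]; then
  echo "init-tenant job completed"
fi
# → init-tenant job completed
```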

===Install dcu helm chart===

'''Get the <tt>dcu</tt> helm chart:'''
<source lang="bash">
helm repo update
helm search repo <pulsehelmrepo>/dcu
</source>

'''Prepare the override file:'''

*Update the <tt>values-override-dcu.yaml</tt> file (AKS):
*:<source lang="bash"># Default values for dcu.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: "<tenant-dcu>"

# * Tenant info
# tenant identification, or empty for shared deployment
tenant:
  # Tenant UUID
  id: "<tenant-uuid>"
  # Tenant SID (like 0001)
  sid: "<tenant-sid>"

# * Common log configuration
log:
  # target directory where log will be stored, leave empty for default
  logDir: ""
  # path where volume will be mounted
  volumeMountPath: /data/log
  # log volume type: none | hostpath | pvc
  volumeType: pvc
  # log volume hostpath, used with volumeType "hostpath"
  volumeHostPath: /mnt/log
  # log PVC parameters, used with volumeType "pvc"
  pvc:
    name: pulse-dcu-logs
    accessModes:
      - ReadWriteMany
    capacity: 10Gi
    class: <pv-storage-class-rw-many>

# * Config info
# Set your values.
config:
  dbName: "<db-name>"
  # set "true" when need @host added for username
  dbUserWithHost: true
  mountSecrets: false
  postgresConfig: "pulse-postgres-configmap"
  # Postgres secret name
  postgresSecret: "pulse-postgres-secret"
  # Postgres secret key for user
  postgresSecretUser: "META_DB_ADMIN"
  # Postgres secret key for password
  postgresSecretPassword: "META_DB_ADMINPWD"
  redisConfig: "pulse-redis-configmap"
  # Redis secret name
  redisSecret: "pulse-redis-secret"
  # Redis secret key for access key
  redisSecretKey: "REDIS01_KEY"

# * Image
# container image common settings
image:
  tag: "<image-version>"
  pullPolicy: IfNotPresent
  registry: "<docker-registry>"
  imagePullSecrets: [name: "<docker-registry-secret-name>"]

## Service account settings
serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

## Add annotations to all pods
##
podAnnotations: {}

## Specifies the security context for all Pods in the service
##
podSecurityContext: {}

## Add labels to all pods
##
podLabels: {}

## HPA Settings
## Not supported in this release!
hpa:
  enabled: false

## Priority Class
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
##
priorityClassName: ""

## Node labels for assignment.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}

## Tolerations for assignment.
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []

## Pod Disruption Budget Settings
podDisruptionBudget:
  enabled: false

## Affinity for assignment.
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}

# * Monitoring settings
monitoring:
  # enable the Prometheus metrics endpoint
  enabled: false
  # enable golden signals metrics (not supported for PE)
  goldenSignals:
    enabled: false
  # port number of the Prometheus metrics endpoint
  port: 9091
  # HTTP path to scrape for metrics
  path: /metrics
  # additional annotations required for monitoring PODs
  # you can reference values of other variables as {{.Values.variable.full.name}}
  podAnnotations: {}
    # prometheus.io/scrape: "true"
    # prometheus.io/port: "{{.Values.monitoring.port}}"
    # prometheus.io/path: "/metrics"
  podMonitor:
    # enables PodMonitor creation for the POD
    enabled: true
    # interval at which metrics should be scraped
    scrapeInterval: 30s
    # timeout after which the scrape is ended
    scrapeTimeout:
    # namespace of the PodMonitor, defaults to the namespace of the POD
    namespace:
    additionalLabels: {}
  alerts:
    # enables alert rules
    enabled: true
    # alert condition duration
    duration: 5m
    # namespace of the alert rules, defaults to the namespace of the POD
    namespace:
    additionalLabels: {}

##########################################################################

# * Configuration for the Collector container
collector:
  # resource limits for container
  resources:
    # minimum resource requirements to start container
    requests:
      # minimal amount of memory required to start a container
      memory: "300Mi"
      # minimal CPU to reserve
      cpu: "200m"
    # resource limits for containers
    limits:
      # maximum amount of memory a container can use before being evicted
      # by the OOM Killer
      memory: "4Gi"
      # maximum amount of CPU resources that can be used and should be tuned to reflect
      # what the application can effectively use before needing to be horizontally scaled out
      cpu: "8000m"
  # securityContext: {}

# * Configuration for the StatServer container
statserver:
  # resource limits for container
  resources:
    # minimum resource requirements to start container
    requests:
      # minimal amount of memory required to start a container
      memory: "300Mi"
      # minimal CPU to reserve
      cpu: "100m"
    # resource limits for containers
    limits:
      # maximum amount of memory a container can use before being evicted
      # by the OOM Killer
      memory: "4Gi"
      # maximum amount of CPU resources that can be used and should be tuned to reflect
      # what the application can effectively use before needing to be horizontally scaled out
      cpu: "4000m"
  # securityContext: {}

# * Configuration for the monitor sidecar container
monitorSidecar:
  # resource limits for container
  resources:
    # disabled: true
    # minimum resource requirements to start container
    requests:
      # minimal amount of memory required to start a container
      memory: "30Mi"
      # minimal CPU to reserve
      cpu: "2m"
    # resource limits for containers
    limits:
      # maximum amount of memory a container can use before being evicted
      # by the OOM Killer
      memory: "70Mi"
      # maximum amount of CPU resources that can be used and should be tuned to reflect
      # what the application can effectively use before needing to be horizontally scaled out
      cpu: "10m"
  # securityContext: {}

##########################################################################

# * Configuration for the Configuration Server Proxy container
csproxy:
  # define domain for the configuration host
  params:
    cfgHost: "tenant-<tenant-uuid>.voice.<domain>"
  # resource limits for container
  resources:
    # minimum resource requirements to start container
    requests:
      # minimal amount of memory required to start a container
      memory: "200Mi"
      # minimal CPU to reserve
      cpu: "50m"
    # resource limits for containers
    limits:
      # maximum amount of memory a container can use before being evicted
      # by the OOM Killer
      memory: "2Gi"
      # maximum amount of CPU resources that can be used and should be tuned to reflect
      # what the application can effectively use before needing to be horizontally scaled out
      cpu: "1000m"
  # securityContext: {}

# volumeClaims contains persistent volume claims for services
# All available storage classes can be found here:
# https://github.com/genesysengage/tfm-azure-core-aks/blob/master/k8s-module/storage.tf
volumeClaims:
  # statserverBackup is storage for statserver backup data
  statserverBackup:
    name: statserver-backup
    accessModes:
      - ReadWriteOnce
    # capacity is storage capacity
    capacity: "1Gi"
    # class is storage class. Must be set explicitly.
    class: <pv-storage-class-rw-once>
</source>
  
*Update the <tt>values-override-dcu.yaml</tt> file (GKE):
*:<source lang="bash"># Default values for dcu.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: "<tenant-dcu>"

# * Tenant info
# tenant identification, or empty for shared deployment
tenant:
  # Tenant UUID
  id: "<tenant-uuid>"
  # Tenant SID (like 0001)
  sid: "<tenant-sid>"

# * Common log configuration
log:
  # target directory where log will be stored, leave empty for default
  logDir: ""
  # path where volume will be mounted
  volumeMountPath: /data/log
  # log volume type: none | hostpath | pvc
  volumeType: pvc
  # log volume hostpath, used with volumeType "hostpath"
  volumeHostPath: /mnt/log
  # log PVC parameters, used with volumeType "pvc"
  pvc:
    name: pulse-dcu-logs
    accessModes:
      - ReadWriteMany
    capacity: 10Gi
    class: <pv-storage-class-rw-many>

# * Config info
# Set your values.
config:
  dbName: "<db-name>"
  # set "true" when need @host added for username
  dbUserWithHost: true
  mountSecrets: false
  postgresConfig: "pulse-postgres-configmap"
  # Postgres secret name
  postgresSecret: "pulse-postgres-secret"
  # Postgres secret key for user
  postgresSecretUser: "META_DB_ADMIN"
  # Postgres secret key for password
  postgresSecretPassword: "META_DB_ADMINPWD"
  redisConfig: "pulse-redis-configmap"
  # Redis secret name
  redisSecret: "pulse-redis-secret"
  # Redis secret key for access key
  redisSecretKey: "REDIS01_KEY"

# * Image
# container image common settings
image:
  tag: "<image-version>"
  pullPolicy: IfNotPresent
  registry: "<docker-registry>"
  imagePullSecrets: [name: "<docker-registry-secret-name>"]

## Service account settings
serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

## Add annotations to all pods
##
podAnnotations: {}

## Specifies the security context for all Pods in the service
##
podSecurityContext:
  runAsNonRoot: true
  runAsUser: 500
  runAsGroup: 500
  fsGroup: 0

## Add labels to all pods
##
podLabels: {}

## HPA Settings
## Not supported in this release!
hpa:
  enabled: false

## Priority Class
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
##
priorityClassName: ""

## Node labels for assignment.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}

## Tolerations for assignment.
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []

## Pod Disruption Budget Settings
podDisruptionBudget:
  enabled: false

## Affinity for assignment.
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}

# * Monitoring settings
monitoring:
  # enable the Prometheus metrics endpoint
  enabled: false
  # enable golden signals metrics (not supported for PE)
  goldenSignals:
    enabled: false
  # port number of the Prometheus metrics endpoint
  port: 9091
  # HTTP path to scrape for metrics
  path: /metrics
  # additional annotations required for monitoring PODs
  # you can reference values of other variables as {{.Values.variable.full.name}}
  podAnnotations: {}
    # prometheus.io/scrape: "true"
    # prometheus.io/port: "{{.Values.monitoring.port}}"
    # prometheus.io/path: "/metrics"
  podMonitor:
    # enables PodMonitor creation for the POD
    enabled: true
    # interval at which metrics should be scraped
    scrapeInterval: 30s
    # timeout after which the scrape is ended
    scrapeTimeout:
    # namespace of the PodMonitor, defaults to the namespace of the POD
    namespace:
    additionalLabels: {}
  alerts:
    # enables alert rules
    enabled: true
    # alert condition duration
    duration: 5m
    # namespace of the alert rules, defaults to the namespace of the POD
    namespace:
    additionalLabels: {}

##########################################################################

# * Configuration for the Collector container
collector:
  # resource limits for container
  resources:
    # minimum resource requirements to start container
    requests:
      # minimal amount of memory required to start a container
      memory: "300Mi"
      # minimal CPU to reserve
      cpu: "200m"
    # resource limits for containers
    limits:
      # maximum amount of memory a container can use before being evicted
      # by the OOM Killer
      memory: "4Gi"
      # maximum amount of CPU resources that can be used and should be tuned to reflect
      # what the application can effectively use before needing to be horizontally scaled out
      cpu: "8000m"
  # securityContext:
  #  runAsUser: 500
  #  runAsGroup: 500

# * Configuration for the StatServer container
statserver:
  # resource limits for container
  resources:
    # minimum resource requirements to start container
    requests:
      # minimal amount of memory required to start a container
      memory: "300Mi"
      # minimal CPU to reserve
      cpu: "100m"
    # resource limits for containers
    limits:
      # maximum amount of memory a container can use before being evicted
      # by the OOM Killer
      memory: "4Gi"
      # maximum amount of CPU resources that can be used and should be tuned to reflect
      # what the application can effectively use before needing to be horizontally scaled out
      cpu: "4000m"
  # securityContext:
  #  runAsUser: 500
  #  runAsGroup: 500

# * Configuration for the monitor sidecar container
monitorSidecar:
  # resource limits for container
  resources:
    # disabled: true
    # minimum resource requirements to start container
    requests:
      # minimal amount of memory required to start a container
      memory: "30Mi"
      # minimal CPU to reserve
      cpu: "2m"
    # resource limits for containers
    limits:
      # maximum amount of memory a container can use before being evicted
      # by the OOM Killer
      memory: "70Mi"
      # maximum amount of CPU resources that can be used and should be tuned to reflect
      # what the application can effectively use before needing to be horizontally scaled out
      cpu: "10m"
  # securityContext:
  #  runAsUser: 500
  #  runAsGroup: 500

##########################################################################

# * Configuration for the Configuration Server Proxy container
csproxy:
  # define domain for the configuration host
  params:
    cfgHost: "tenant-<tenant-uuid>.voice.<domain>"
  # resource limits for container
  resources:
    # minimum resource requirements to start container
    requests:
      # minimal amount of memory required to start a container
      memory: "200Mi"
      # minimal CPU to reserve
      cpu: "50m"
    # resource limits for containers
    limits:
      # maximum amount of memory a container can use before being evicted
      # by the OOM Killer
      memory: "2Gi"
      # maximum amount of CPU resources that can be used and should be tuned to reflect
      # what the application can effectively use before needing to be horizontally scaled out
      cpu: "1000m"
  # securityContext:
  #  runAsUser: 500
  #  runAsGroup: 500

# volumeClaims contains persistent volume claims for services
# All available storage classes can be found here:
# https://github.com/genesysengage/tfm-azure-core-aks/blob/master/k8s-module/storage.tf
volumeClaims:
  # statserverBackup is storage for statserver backup data
  statserverBackup:
    name: statserver-backup
    accessModes:
      - ReadWriteOnce
    # capacity is storage capacity
    capacity: "1Gi"
    # class is storage class. Must be set explicitly.
    class: <pv-storage-class-rw-once>
</source>
  
'''Install the <tt>dcu</tt> helm chart'''<br />To install the <tt>dcu</tt> helm chart, run the following command:
<source lang="bash">helm upgrade --install "pulse-dcu-<tenant-sid>" pulsehelmrepo/dcu --wait --reuse-values --version=<chart-version> --namespace=pulse -f values-override-dcu.yaml
</source>

'''Validate the <tt>dcu</tt> helm chart'''<br />To validate the <tt>dcu</tt> helm chart, run the following command:
<source lang="bash">kubectl get pods -n=pulse -l "app.kubernetes.io/name=dcu,app.kubernetes.io/instance=pulse-dcu-<tenant-sid>"
</source>
Check the output to ensure that all <tt>pulse-dcu</tt> pods are running, for example:
<source lang="bash">
NAME             READY  STATUS   RESTARTS  AGE
pulse-dcu-100-0  3/3    Running  0         5m23s
pulse-dcu-100-1  3/3    Running  0         4m47s
</source>
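The Running check can be scripted rather than eyeballed. A minimal sketch, run here against the sample listing above instead of a live cluster (in practice you would pipe the real <tt>kubectl get pods</tt> output into the same filter):

```shell
# Count pods whose STATUS column is not "Running". The inlined listing below is the
# example output from this page, not live cluster data.
pods='NAME             READY  STATUS   RESTARTS  AGE
pulse-dcu-100-0  3/3    Running  0         5m23s
pulse-dcu-100-1  3/3    Running  0         4m47s'
# Skip the header row, keep rows where field 3 is not "Running", count them.
not_running=$(printf '%s\n' "$pods" | awk 'NR>1 && $3!="Running"' | wc -l)
echo "not running: $not_running"
```

A result of 0 means every pod in the listing is in the Running state.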
===Install lds helm chart===

'''Get the <tt>lds</tt> helm chart:'''
<source lang="bash">helm repo update
helm search repo <pulsehelmrepo>/lds</source>

'''Prepare the override file:'''

*Update values in the <tt>values-override-lds.yaml</tt> file (AKS):
*:<source lang="bash"># Default values for lds.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 2

# * Tenant info
# tenant identification, or empty for shared deployment
tenant:
  # Tenant UUID
  id: "<tenant-uuid>"
  # Tenant SID (like 0001)
  sid: "<tenant-sid>"

# * Common log configuration
log:
  # target directory where log will be stored, leave empty for default
  logDir: ""
  # path where volume will be mounted
  volumeMountPath: /data/log
  # log volume type: none | hostpath | pvc
  volumeType: pvc
  # log volume hostpath, used with volumeType "hostpath"
  volumeHostPath: /mnt/log
  # log PVC parameters, used with volumeType "pvc"
  pvc:
    name: pulse-lds-logs
    accessModes:
      - ReadWriteMany
    capacity: 10Gi
    class: <pv-storage-class-rw-many>

# * Container image common settings
image:
  tag: "<image-version>"
  pullPolicy: IfNotPresent
  registry: "<docker-registry>"
  imagePullSecrets: [name: "<docker-registry-secret-name>"]

## Service account settings
serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

## Add annotations to all pods
##
podAnnotations: {}

## Specifies the security context for all Pods in the service
##
podSecurityContext: {}

## Add labels to all pods
##
podLabels: {}

## HPA Settings
## Not supported in this release!
hpa:
  enabled: false

## Priority Class
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
##
priorityClassName: ""

## Node labels for assignment.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}

## Tolerations for assignment.
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []

## Pod Disruption Budget Settings
podDisruptionBudget:
  enabled: false

## Affinity for assignment.
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}

# * Monitoring settings
monitoring:
  # enable the Prometheus metrics endpoint
  enabled: false
  # enable golden signals metrics (not supported for PE)
  goldenSignals:
    enabled: false
  # port number of the Prometheus metrics endpoint
  port: 9091
  # HTTP path to scrape for metrics
  path: /metrics
  # additional annotations required for monitoring PODs
  # you can reference values of other variables as {{.Values.variable.full.name}}
  podAnnotations: {}
    # prometheus.io/scrape: "true"
    # prometheus.io/port: "{{.Values.monitoring.port}}"
    # prometheus.io/path: "/metrics"
  podMonitor:
    # enables PodMonitor creation for the POD
    enabled: true
    # interval at which metrics should be scraped
    scrapeInterval: 30s
    # timeout after which the scrape is ended
    scrapeTimeout:
    # namespace of the PodMonitor, defaults to the namespace of the POD
    namespace:
    additionalLabels: {}
  alerts:
    # enables alert rules
    enabled: true
    # alert condition duration
    duration: 5m
    # namespace of the alert rules, defaults to the namespace of the POD
    namespace:
    additionalLabels: {}

# * Configuration for the LDS container
lds:
  # resource limits for container
  resources:
    # minimum resource requirements to start container
    requests:
      # minimal amount of memory required to start a container
      memory: "50Mi"
      # minimal CPU to reserve
      cpu: "50m"
    # resource limits for containers
    limits:
      # maximum amount of memory a container can use before being evicted
      # by the OOM Killer
      memory: "4Gi"
      # maximum amount of CPU resources that can be used and should be tuned to reflect
      # what the application can effectively use before needing to be horizontally scaled out
      cpu: "4000m"
  # securityContext: {}

# * Configuration for the monitor sidecar container
monitorSidecar:
  # resource limits for container
  resources:
    # minimum resource requirements to start container
    requests:
      # minimal amount of memory required to start a container
      memory: "30Mi"
      # minimal CPU to reserve
      cpu: "2m"
    # resource limits for containers
    limits:
      # maximum amount of memory a container can use before being evicted
      # by the OOM Killer
      memory: "70Mi"
      # maximum amount of CPU resources that can be used and should be tuned to reflect
      # what the application can effectively use before needing to be horizontally scaled out
      cpu: "10m"
  # securityContext: {}

# * Configuration for the Configuration Server Proxy container
csproxy:
  # define domain for the configuration host
  params:
    cfgHost: "tenant-<tenant-uuid>.voice.<domain>"
  resources:
    # minimum resource requirements to start container
    requests:
      # minimal amount of memory required to start a container
      memory: "200Mi"
      # minimal CPU to reserve
      cpu: "50m"
    # resource limits for containers
    limits:
      # maximum amount of memory a container can use before being evicted
      # by the OOM Killer
      memory: "2Gi"
      # maximum amount of CPU resources that can be used and should be tuned to reflect
      # what the application can effectively use before needing to be horizontally scaled out
      cpu: "1000m"
  # securityContext: {}
</source>
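The <tt>cfgHost</tt> value above is assembled from the tenant UUID and the cluster domain. A minimal sketch of the composition, using the placeholder UUID and domain that appear in examples elsewhere on this page (substitute your own values):

```shell
# Compose the csproxy cfgHost: "tenant-" + tenant UUID + ".voice." + domain.
# Both values below are the illustrative placeholders from this page, not real ones.
tenant_uuid="9350e2fc-a1dd-4c65-8d40-1f75a2e080dd"
domain="gke1-uswest1.gcpe002.gencpe.com"
cfg_host="tenant-${tenant_uuid}.voice.${domain}"
echo "$cfg_host"
```

Note that the init-tenant example later on this page uses a <tt>.voice.svc.&lt;domain&gt;</tt> form; follow whichever pattern matches your service domain layout.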
  
*Update values in the <tt>values-override-lds.yaml</tt> file (GKE):
*:<source lang="bash"># Default values for lds.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 2

# * Tenant info
# tenant identification, or empty for shared deployment
tenant:
  # Tenant UUID
  id: "<tenant-uuid>"
  # Tenant SID (like 0001)
  sid: "<tenant-sid>"

# * Common log configuration
log:
  # target directory where log will be stored, leave empty for default
  logDir: ""
  # path where volume will be mounted
  volumeMountPath: /data/log
  # log volume type: none | hostpath | pvc
  volumeType: pvc
  # log volume hostpath, used with volumeType "hostpath"
  volumeHostPath: /mnt/log
  # log PVC parameters, used with volumeType "pvc"
  pvc:
    name: pulse-lds-logs
    accessModes:
      - ReadWriteMany
    capacity: 10Gi
    class: <pv-storage-class-rw-many>

# * Container image common settings
image:
  tag: "<image-version>"
  pullPolicy: IfNotPresent
  registry: "<docker-registry>"
  imagePullSecrets: [name: "<docker-registry-secret-name>"]

## Service account settings
serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

## Add annotations to all pods
##
podAnnotations: {}

## Specifies the security context for all Pods in the service
##
podSecurityContext:
  runAsNonRoot: true
  runAsUser: 500
  runAsGroup: 500
  fsGroup: 0

## Add labels to all pods
##
podLabels: {}

## HPA Settings
## Not supported in this release!
hpa:
  enabled: false

## Priority Class
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
##
priorityClassName: ""

## Node labels for assignment.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}

## Tolerations for assignment.
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []

## Pod Disruption Budget Settings
podDisruptionBudget:
  enabled: false

## Affinity for assignment.
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}

# * Monitoring settings
monitoring:
  # enable the Prometheus metrics endpoint
  enabled: false
  # enable golden signals metrics (not supported for PE)
  goldenSignals:
    enabled: false
  # port number of the Prometheus metrics endpoint
  port: 9091
  # HTTP path to scrape for metrics
  path: /metrics
  # additional annotations required for monitoring PODs
  # you can reference values of other variables as {{.Values.variable.full.name}}
  podAnnotations: {}
    # prometheus.io/scrape: "true"
    # prometheus.io/port: "{{.Values.monitoring.port}}"
    # prometheus.io/path: "/metrics"
  podMonitor:
    # enables PodMonitor creation for the POD
    enabled: true
    # interval at which metrics should be scraped
    scrapeInterval: 30s
    # timeout after which the scrape is ended
    scrapeTimeout:
    # namespace of the PodMonitor, defaults to the namespace of the POD
    namespace:
    additionalLabels: {}
  alerts:
    # enables alert rules
    enabled: true
    # alert condition duration
    duration: 5m
    # namespace of the alert rules, defaults to the namespace of the POD
    namespace:
    additionalLabels: {}

# * Configuration for the LDS container
lds:
  # resource limits for container
  resources:
    # minimum resource requirements to start container
    requests:
      # minimal amount of memory required to start a container
      memory: "50Mi"
      # minimal CPU to reserve
      cpu: "50m"
    # resource limits for containers
    limits:
      # maximum amount of memory a container can use before being evicted
      # by the OOM Killer
      memory: "4Gi"
      # maximum amount of CPU resources that can be used and should be tuned to reflect
      # what the application can effectively use before needing to be horizontally scaled out
      cpu: "4000m"
  # securityContext:
  #  runAsUser: 500
  #  runAsGroup: 500

# * Configuration for the monitor sidecar container
monitorSidecar:
  # resource limits for container
  resources:
    # minimum resource requirements to start container
    requests:
      # minimal amount of memory required to start a container
      memory: "30Mi"
      # minimal CPU to reserve
      cpu: "2m"
    # resource limits for containers
    limits:
      # maximum amount of memory a container can use before being evicted
      # by the OOM Killer
      memory: "70Mi"
      # maximum amount of CPU resources that can be used and should be tuned to reflect
      # what the application can effectively use before needing to be horizontally scaled out
      cpu: "10m"
  # securityContext:
  #  runAsUser: 500
  #  runAsGroup: 500

# * Configuration for the Configuration Server Proxy container
csproxy:
  # define domain for the configuration host
  params:
    cfgHost: "tenant-<tenant-uuid>.voice.<domain>"
  resources:
    # minimum resource requirements to start container
    requests:
      # minimal amount of memory required to start a container
      memory: "200Mi"
      # minimal CPU to reserve
      cpu: "50m"
    # resource limits for containers
    limits:
      # maximum amount of memory a container can use before being evicted
      # by the OOM Killer
      memory: "2Gi"
      # maximum amount of CPU resources that can be used and should be tuned to reflect
      # what the application can effectively use before needing to be horizontally scaled out
      cpu: "1000m"
  # securityContext:
  #  runAsUser: 500
  #  runAsGroup: 500</source>
  
'''Update values in the <tt>values-override-lds-vq.yaml</tt> file:'''
<source lang="bash"># Default values for lds.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

lds:
  params:
    cfgApp: "pulse-lds-vq-$((K8S_POD_INDEX % 2))"

log:
  pvc:
    name: pulse-lds-vq-logs
</source>
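The <tt>cfgApp</tt> value above relies on shell arithmetic over the pod ordinal, so even- and odd-numbered pods alternate between two LDS application definitions. A sketch of how the name resolves, assuming the expression is expanded by a shell inside the container as the <tt>$(( ))</tt> syntax suggests:

```shell
# Hypothetical helper for illustration: maps a pod ordinal to its cfgApp name
# using the same modulo-2 arithmetic as the override value above.
name_for_pod() { echo "pulse-lds-vq-$(( $1 % 2 ))"; }

name_for_pod 0   # pulse-lds-vq-0
name_for_pod 1   # pulse-lds-vq-1
name_for_pod 2   # pulse-lds-vq-0 (wraps back to the first application)
```

With two replicas this spreads the pods evenly across the two <tt>pulse-lds-vq-*</tt> application definitions.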
  
'''Install the <tt>lds</tt> helm chart'''<br />To install the <tt>lds</tt> helm chart, run the following commands:
<source lang="bash">
helm upgrade --install "pulse-lds-<tenant-sid>"    pulsehelmrepo/lds --wait --version=<chart-version> --namespace=pulse -f values-override-lds.yaml
helm upgrade --install "pulse-lds-vq-<tenant-sid>" pulsehelmrepo/lds --wait --version=<chart-version> --namespace=pulse -f values-override-lds.yaml -f values-override-lds-vq.yaml
</source>
If the installation is successful, the command returns exit code <tt>0</tt>.

'''Validate the <tt>lds</tt> helm chart'''<br />To validate the <tt>lds</tt> helm chart, run the following command:
<source lang="bash">kubectl get pods -n=pulse -l "app.kubernetes.io/name=lds,app.kubernetes.io/instance=pulse-lds-<tenant-sid>"
</source>
Verify that the command reports all <tt>pulse-lds</tt> pods as Running, for example:
<source lang="bash">
NAME             READY  STATUS   RESTARTS  AGE
pulse-lds-100-0  3/3    Running  0         2d20h
pulse-lds-100-1  3/3    Running  0         2d20h
</source>
 +
 
 +
===Install permissions helm chart===
 +
'''Get the <tt>permissions</tt> helm chart'''
 +
<source lang="bash">helm repo update
 +
helm search repo <pulsehelmrepo>/permissions</source>
 +
 
 +
'''Prepare the override file:'''
 +
 
 +
*Update values in the <tt>values-override-permissions.yaml</tt> file (AKS):
 +
*:<source lang="bash"># Default values for permissions.
 +
# This is a YAML-formatted file.
 +
# Declare variables to be passed into your templates.
 
   
 
   
envsubst < ./values-override-permissions.yaml | \
+
# * Image configuration
helm template \
+
image:
    --version="${CHART_VERSION}" \
+
  tag: "<image-version>"
    --namespace="${NAMESPACE}" \
+
  pullPolicy: IfNotPresent
    --debug \
+
  registry: "<docker-registry>"
    --output-dir helm-template \
+
  imagePullSecrets: [name: "<docker-registry-secret-name>"]
    "pulse-permissions" pe-jfrog-stage/permissions \
+
    -f - </source>
+
# * Tenant info
 +
# tenant identification, or empty for shared deployment
 +
tenant:
 +
  # Tenant UUID
 +
  id: "<tenant-uuid>"
 +
  # Tenant SID (like 0001)
 +
  sid: "<tenant-sid>"
 +
 +
# common configuration.
 +
config:
 +
  dbName: "<db-name>"
 +
  # set "true" when need @host added for username
 +
  dbUserWithHost: true
 +
  # set "true" for CSI secrets
 +
  mountSecrets: false
 +
  # Postgres config map name
 +
  postgresConfig: "pulse-postgres-configmap"
 +
  # Postgres secret name
 +
  postgresSecret: "pulse-postgres-secret"
 +
  # Postgres secret key for user
 +
  postgresSecretUser: "META_DB_ADMIN"
 +
  # Postgres secret key for password
 +
  postgresSecretPassword: "META_DB_ADMINPWD"
 +
  # Redis config map name
 +
  redisConfig: "pulse-redis-configmap"
 +
  # Redis secret name
 +
  redisSecret: "pulse-redis-secret"
 +
  # Redis secret key for access key
 +
  redisSecretKey: "REDIS01_KEY"
 +
 +
 +
# * Configuration for the Configuration Server Proxy container
 +
csproxy:
 +
  # define domain for the configuration host
 +
  params:
 +
    cfgHost: "tenant-<tenant-uuid>.voice.<domain>"
 +
  # resource limits for container
 +
  resources:
 +
    # minimum resource requirements to start container
 +
    requests:
 +
      # minimal amount of memory required to start a container
 +
      memory: "200Mi"
 +
      # minimal CPU to reserve
 +
      cpu: "50m"
 +
    # resource limits for containers
 +
    limits:
 +
      # maximum amount of memory a container can use before being evicted
 +
      # by the OOM Killer
 +
      memory: "2Gi"
 +
      # maximum amount of CPU resources that can be used and should be tuned to reflect
 +
      # what the application can effectively use before needing to be horizontally scaled out
 +
      cpu: "1000m"
 +
  # securityContext: {}
 +
 +
# * Common log configuration
 +
log:
 +
  # target directory where log will be stored, leave empty for default
 +
  logDir: ""
 +
  # path where volume will be mounted
 +
  volumeMountPath: /data/log
 +
  # log volume type: none | hostpath | pvc
 +
  volumeType: pvc
 +
  # log volume hostpath, used with volumeType "hostpath"
 +
  volumeHostPath: /mnt/log
 +
  # log PVC parameters, used with volumeType "pvc"
 +
  pvc:
 +
    name: pulse-permissions-logs
 +
    accessModes:
 +
      - ReadWriteMany
 +
    capacity: 10Gi
 +
    class: <pv-storage-class-rw-many>
 +
 +
## Specifies the security context for all Pods in the service
 +
##
 +
podSecurityContext: {}
 +
 +
## Resource requests and limits
 +
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
 +
##
 +
resources:
 +
  limits:
 +
    memory: "1Gi"
 +
    cpu: "500m"
 +
  requests:
 +
    memory: "400Mi"
 +
    cpu: "50m"
 +
 +
## HPA Settings
 +
## Not supported in this release!
 +
hpa:
 +
  enabled: false
 +
 +
## Priority Class
 +
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
 +
##
 +
priorityClassName: ""
 +
 +
## Node labels for assignment.
 +
## ref: https://kubernetes.io/docs/user-guide/node-selection/
 +
##
 +
nodeSelector: {}
 +
 +
## Tolerations for assignment.
 +
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
 +
##
 +
tolerations: []
 +
 +
## Pod Disruption Budget Settings
 +
podDisruptionBudget:
 +
  enabled: false
 +
 +
## Affinity for assignment.
 +
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
 +
##
 +
affinity: {}
 +
</source>
*Update values in the <tt>values-override-permissions.yaml</tt> file (GKE):
*:<source lang="bash"># Default values for permissions.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

# * Image configuration
image:
  tag: "<image-version>"
  pullPolicy: IfNotPresent
  registry: "<docker-registry>"
  imagePullSecrets: [name: "<docker-registry-secret-name>"]

# * Tenant info
# tenant identification, or empty for shared deployment
tenant:
  # Tenant UUID
  id: "<tenant-uuid>"
  # Tenant SID (like 0001)
  sid: "<tenant-sid>"

# common configuration.
config:
  dbName: "<db-name>"
  # set "true" when need @host added for username
  dbUserWithHost: true
  # set "true" for CSI secrets
  mountSecrets: false
  # Postgres config map name
  postgresConfig: "pulse-postgres-configmap"
  # Postgres secret name
  postgresSecret: "pulse-postgres-secret"
  # Postgres secret key for user
  postgresSecretUser: "META_DB_ADMIN"
  # Postgres secret key for password
  postgresSecretPassword: "META_DB_ADMINPWD"
  # Redis config map name
  redisConfig: "pulse-redis-configmap"
  # Redis secret name
  redisSecret: "pulse-redis-secret"
  # Redis secret key for access key
  redisSecretKey: "REDIS01_KEY"

# * Configuration for the Configuration Server Proxy container
csproxy:
  # define domain for the configuration host
  params:
    cfgHost: "tenant-<tenant-uuid>.voice.<domain>"
  # resource limits for container
  resources:
    # minimum resource requirements to start container
    requests:
      # minimal amount of memory required to start a container
      memory: "200Mi"
      # minimal CPU to reserve
      cpu: "50m"
    # resource limits for containers
    limits:
      # maximum amount of memory a container can use before being evicted
      # by the OOM Killer
      memory: "2Gi"
      # maximum amount of CPU resources that can be used and should be tuned to reflect
      # what the application can effectively use before needing to be horizontally scaled out
      cpu: "1000m"
  # securityContext:
  #  runAsUser: 500
  #  runAsGroup: 500

# * Common log configuration
log:
  # target directory where log will be stored, leave empty for default
  logDir: ""
  # path where volume will be mounted
  volumeMountPath: /data/log
  # log volume type: none | hostpath | pvc
  volumeType: pvc
  # log volume hostpath, used with volumeType "hostpath"
  volumeHostPath: /mnt/log
  # log PVC parameters, used with volumeType "pvc"
  pvc:
    name: pulse-permissions-logs
    accessModes:
      - ReadWriteMany
    capacity: 10Gi
    class: <pv-storage-class-rw-many>

## Specifies the security context for all Pods in the service
##
podSecurityContext:
  fsGroup: null
  runAsUser: null
  runAsGroup: 0
  runAsNonRoot: true

## Resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
  limits:
    memory: "1Gi"
    cpu: "500m"
  requests:
    memory: "400Mi"
    cpu: "50m"

## HPA Settings
## Not supported in this release!
hpa:
  enabled: false

## Priority Class
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
##
priorityClassName: ""

## Node labels for assignment.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}

## Tolerations for assignment.
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []

## Pod Disruption Budget Settings
podDisruptionBudget:
  enabled: false

## Affinity for assignment.
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}</source>
'''Install the <tt>permissions</tt> helm chart:'''<br />To install the <tt>permissions</tt> helm chart, run the following command:
<source lang="bash">helm upgrade --install "pulse-permissions-<tenant-sid>" pulsehelmrepo/permissions --wait --version="<chart-version>" --namespace=pulse -f values-override-permissions.yaml
</source>
If the installation is successful, the command returns exit code <tt>0</tt>.

'''Validate the <tt>permissions</tt> helm chart:'''<br />To validate the <tt>permissions</tt> helm chart, run the following command:
<source lang="bash">kubectl get pods -n=pulse -l "app.kubernetes.io/name=permissions,app.kubernetes.io/instance=pulse-permissions-<tenant-sid>"
</source>
Verify that the command reports all <tt>pulse-permissions</tt> pods as Running, for example:
<source lang="bash">
NAME                                   READY  STATUS   RESTARTS  AGE
pulse-permissions-100-c5ff8bb7d-jl7d7  2/2    Running  2         2d20h
</source>
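Every validation command on this page uses the same label-selector pattern: <tt>app.kubernetes.io/name</tt> set to the chart name, plus <tt>app.kubernetes.io/instance</tt> set to the release name. A small helper (hypothetical, for illustration only) that builds the selector from a chart name and tenant SID:

```shell
# Build the kubectl -l selector for a given chart name ($1) and tenant SID ($2),
# matching the naming convention used by the install commands on this page.
selector() { echo "app.kubernetes.io/name=$1,app.kubernetes.io/instance=pulse-$1-$2"; }

selector permissions 100
# app.kubernetes.io/name=permissions,app.kubernetes.io/instance=pulse-permissions-100
```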
 
|Status=No
}}{{Section
|sectionHeading=Troubleshooting
|alignment=Vertical
|structuredtext='''Check init-tenant helm chart manifests:'''<br />To output the manifest into the '''helm-template''' directory, run the following command:
<source lang="bash">helm template --version=<chart-version> --namespace=pulse --debug --output-dir helm-template pulse-init-tenant-<tenant-sid> pulsehelmrepo/init-tenant -f values-override-init-tenant.yaml
</source>

'''Check dcu helm chart manifests:'''<br />To output the dcu Helm chart manifest into the '''helm-template''' directory, run the following command:
<source lang="bash">helm template --version=<chart-version> --namespace=pulse --debug --output-dir helm-template pulse-dcu-<tenant-sid> pulsehelmrepo/dcu -f values-override-dcu.yaml
</source>

'''Check lds helm chart manifests:'''<br />To output the lds Helm chart manifest into the '''helm-template''' directory, run the following command:
<source lang="bash">helm template --version=<chart-version> --namespace=pulse --debug --output-dir helm-template pulse-lds-<tenant-sid> pulsehelmrepo/lds -f values-override-lds.yaml
</source>

'''Check permissions Helm chart manifests:'''<br />To output the Helm chart manifest into the '''helm-template''' directory, run the following command:
<source lang="bash">helm template --version=<chart-version> --namespace=pulse --debug --output-dir helm-template pulse-permissions pulsehelmrepo/permissions -f values-override-permissions.yaml
</source>
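With <tt>--output-dir</tt>, <tt>helm template</tt> writes each rendered manifest under <tt>helm-template/&lt;chart-name&gt;/templates/</tt>. A sketch that mocks this layout (no Helm or cluster needed) to show how you might count the rendered files before reviewing them:

```shell
# Mock the helm-template output layout in a temp dir; the chart name and file
# below are illustrative placeholders, not real rendered output.
tmp=$(mktemp -d)
mkdir -p "$tmp/helm-template/permissions/templates"
printf 'kind: Deployment\n' > "$tmp/helm-template/permissions/templates/deployment.yaml"
# Count every rendered manifest under the output directory.
rendered=$(find "$tmp/helm-template" -name '*.yaml' | wc -l)
echo "rendered manifests: $rendered"
rm -rf "$tmp"
```

Against a real chart, run the same <tt>find</tt> over the actual <tt>helm-template</tt> directory to confirm the templates rendered before installing.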
|Status=No
}}
|PEPageType=55cef4ff-9306-4313-8fd8-377282a38478
}}

Latest revision as of 16:01, March 29, 2023

This topic is part of the manual Genesys Pulse Private Edition Guide for version Current of Reporting.

Prerequisites

Before performing the steps described on this page, complete the Before you begin instructions, and ensure that you have the following information:

  • Versions:
    • <image-version> = 100.0.000.0015
    • <chart-versions>= 100.0.000+0015
  • K8S namespace pulse
  • Project Name pulse
  • Postgres credentials:
    • <db-host>
    • <db-port>
    • <db-name>
    • <db-user>
    • <db-user-password>
    • <db-ssl-mode>
  • Docker credentials:
    • <docker-registry>
    • <docker-registry-secret-name>
  • Redis credentials:
    • <redis-host>
    • <redis-port>
    • <redis-password>
    • <redis-enable-ssl>
  • Tenant service variables:
    • <tenant-uuid>
    • <tenant-sid>
    • <tenant-name>
    • <tenant-dcu>
  • GAuth/GWS service variables:
    • <gauth-url-external>
    • <gauth-url-internal>
    • <gauth-client-id>
    • <gauth-client-secret>
    • <gws-url-external>
    • <gws-url-internal>
  • Storage class:
    • <pv-storage-class-rw-many>
    • <pv-storage-class-rw-once>
  • Pulse:
    • <pulse-host>

Single namespace

Single namespace deployments use software-defined networking (SDN) in multitenant mode, where namespaces are network-isolated. If you plan to deploy Pulse into a single namespace, ensure that your environment meets the following requirements for inputs:

  • Back-end services deployed into the single namespace must include the string pulse:
    • <db-host>
    • <db-name>
    • <redis-host>
  • The hostname used for Ingress must be unique, and must include the string pulse:
    • <pulse-host>
  • Internal service-to-service traffic must use the service endpoints, rather than the Ingress Controller:
    • <gauth-url-internal>
    • <gws-url-internal>

Tenant provisioning

Install init tenant chart

Get the init-tenant helm chart:

helm repo update
helm search repo <pulsehelmrepo>/init-tenant

Prepare the override file:

  • Update the values-override-init-tenant.yaml file (AKS):
    # Default values for init-tenant.
    # This is a YAML-formatted file.
    # Declare variables to be passed into your templates.
     
    # * Images
    # Replace for your values: registry and secret
    image:
      tag: "<image-version>"
      pullPolicy: IfNotPresent
      registry: "<docker-registry>"
      imagePullSecrets: [name: "<docker-registry-secret-name>"]
     
    configurator:
      enabled: true
      # set service domain used to access voice service
      # example for GKE VPC case: voice.svc.gke1-uswest1.gcpe002.gencpe.com
      voiceDomain: "voice.svc.<domain>"
      # set service domain used to access ixn service
      # example for GKE VPC case: ixn.svc.gke1-uswest1.gcpe002.gencpe.com
      ixnDomain: "ixn.svc.<domain>"
      # set service domain used to access pulse service
      # example for GKE VPC case: pulse.svc.gke1-uswest1.gcpe002.gencpe.com
      pulseDomain: "pulse.svc.<domain>"
      # set configuration server user, used when creating secrets
      cfgUser: "default"
      # set configuration server password, used when creating secrets
      cfgPassword: "password"
      # set configuration server host
      cfgHost: "tenant-9350e2fc-a1dd-4c65-8d40-1f75a2e080dd.voice.svc.<domain>"
     
    log:
      # target directory where log will be stored, leave empty for default
      logDir: ""
      # path where volume will be mounted
      volumeMountPath: /data/log
      # log volume type: none | hostpath | pvc
      volumeType: none
      # log volume hostpath, used with volumeType "hostpath"
      volumeHostPath: /mnt/log
      # log PVC parameters, used with volumeType "pvc"
      pvc:
        name: pulse-init-tenant-logs
        accessModes:
          - ReadWriteMany
        capacity: 10Gi
        class: <pv-storage-class-rw-many>
     
    # * Tenant info
    # Replace for your values
    tenant:
      # Tenant UUID
      id: <tenant-uuid>
      # Tenant SID (like 0001)
      sid: <tenant-sid>
     
    # common configuration.
    config:
      dbName: "<db-name>"
      # set "true" when need @host added for username
      dbUserWithHost: true
      # set "true" for CSI secrets
      mountSecrets: false
      # Postgres config map name
      postgresConfig: "pulse-postgres-configmap"
      # Postgres secret name
      postgresSecret: "pulse-postgres-secret"
      # Postgres secret key for user
      postgresSecretUser: "META_DB_ADMIN"
      # Postgres secret key for password
      postgresSecretPassword: "META_DB_ADMINPWD"
     
    ## Service account settings
    serviceAccount:
      # Specifies whether a service account should be created
      create: false
      # Annotations to add to the service account
      annotations: {}
      # The name of the service account to use.
      # If not set and create is true, a name is generated using the fullname template
      name: ""
     
    ## Add annotations to all pods
    ##
    podAnnotations: {}
     
    ## Specifies the security context for all Pods in the service
    ##
    podSecurityContext: {}
     
    ## Resource requests and limits
    ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
    ##
    resources:
      limits:
        memory: 256Mi
        cpu: 200m
      requests:
        memory: 128Mi
        cpu: 100m
     
    ## Priority Class
    ## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
    ##
    priorityClassName: ""
     
    ## Node labels for assignment.
    ## ref: https://kubernetes.io/docs/user-guide/node-selection/
    ##
    nodeSelector: {}
     
    ## Tolerations for assignment.
    ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
    ##
    tolerations: []
     
    # * Templates
    templates:
      - Agent_Group_Status.gpb
      - Agent_KPIs.gpb
      - Agent_Login.gpb
      - Alert_Widget.gpb
      - Callback_Activity.gpb
      - Campaign_Activity.gpb
      - Campaign_Callback_Status.gpb
      - Campaign_Group_Activity.gpb
      - Campaign_Group_Status.gpb
      - Chat_Agent_Activity.gpb
      - Chat_Queue_Activity.gpb
      - Chat_Service_Level_Performance.gpb
      - Chat_Waiting_Statistics.gpb
      - Email_Agent_Activity.gpb
      - Email_Queue_Activity.gpb
      - Facebook_Media_Activity.gpb
      - IFRAME.gpb
      - IWD_Agent_Activity.gpb
      - IWD_Queue_Activity.gpb
      - Queue_KPIs.gpb
      - Queue_Overflow_Reason.gpb
      - Static_Text.gpb
      - Twitter_Media_Activity.gpb
      - eServices_Agent_Activity.gpb
      - eServices_Queue_KPIs.gpb
  • Update the values-override-init-tenant.yaml file (GKE):
    Important
    Enable the configurator only for GKE configurations with VPC-scoped DNS.
    # Default values for init-tenant.
    # This is a YAML-formatted file.
    # Declare variables to be passed into your templates.
     
    # * Images
    # Replace for your values: registry and secret
    image:
      tag: "<image-version>"
      pullPolicy: IfNotPresent
      registry: "<docker-registry>"
      imagePullSecrets: [name: "<docker-registry-secret-name>"]
     
    configurator:
      enabled: true
      # set service domain used to access voice service
      # example for GKE VPC case: voice.svc.gke1-uswest1.gcpe002.gencpe.com
      voiceDomain: "voice.svc.<domain>"
      # set service domain used to access ixn service
      # example for GKE VPC case: ixn.svc.gke1-uswest1.gcpe002.gencpe.com
      ixnDomain: "ixn.svc.<domain>"
      # set service domain used to access pulse service
      # example for GKE VPC case: pulse.svc.gke1-uswest1.gcpe002.gencpe.com
      pulseDomain: "pulse.svc.<domain>"
      # set configuration server user, used when creating secrets
      cfgUser: "default"
      # set configuration server password, used when creating secrets
      cfgPassword: "password"
      # configuration server host
      cfgHost: "tenant-<tenant-uuid>.voice.svc.<domain>"
     
    log:
      # target directory where log will be stored, leave empty for default
      logDir: ""
      # path where volume will be mounted
      volumeMountPath: /data/log
      # log volume type: none | hostpath | pvc
      volumeType: none
      # log volume hostpath, used with volumeType "hostpath"
      volumeHostPath: /mnt/log
      # log PVC parameters, used with volumeType "pvc"
      pvc:
        name: pulse-init-tenant-logs
        accessModes:
          - ReadWriteMany
        capacity: 10Gi
        class: nfs-client
     
    # * Tenant info
    # Replace for your values
    tenant:
      # Tenant UUID
      id: <tenant-uuid>
      # Tenant SID (like 0001)
      sid: <tenant-sid>
     
    # common configuration.
    config:
      dbName: "<db-name>"
      # set "true" when need @host added for username
      dbUserWithHost: true
      # set "true" for CSI secrets
      mountSecrets: false
      # Postgres config map name
      postgresConfig: "pulse-postgres-configmap"
      # Postgres secret name
      postgresSecret: "pulse-postgres-secret"
      # Postgres secret key for user
      postgresSecretUser: "META_DB_ADMIN"
      # Postgres secret key for password
      postgresSecretPassword: "META_DB_ADMINPWD"
     
    ## Service account settings
    serviceAccount:
      # Specifies whether a service account should be created
      create: false
      # Annotations to add to the service account
      annotations: {}
      # The name of the service account to use.
      # If not set and create is true, a name is generated using the fullname template
      name: ""
     
    ## Add annotations to all pods
    ##
    podAnnotations: {}
     
    ## Specifies the security context for all Pods in the service
    ##
    podSecurityContext:
      fsGroup: null
      runAsUser: null
      runAsGroup: 0
      runAsNonRoot: true
     
    ## Resource requests and limits
    ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
    ##
    resources:
      limits:
        memory: 256Mi
        cpu: 200m
      requests:
        memory: 128Mi
        cpu: 100m
     
    ## Priority Class
    ## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
    ##
    priorityClassName: ""
     
    ## Node labels for assignment.
    ## ref: https://kubernetes.io/docs/user-guide/node-selection/
    ##
    nodeSelector: {}
     
    ## Tolerations for assignment.
    ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
    ##
    tolerations: []
     
    # * Templates
    templates:
      - Agent_Group_Status.gpb
      - Agent_KPIs.gpb
      - Agent_Login.gpb
      - Alert_Widget.gpb
      - Callback_Activity.gpb
      - Campaign_Activity.gpb
      - Campaign_Callback_Status.gpb
      - Campaign_Group_Activity.gpb
      - Campaign_Group_Status.gpb
      - Chat_Agent_Activity.gpb
      - Chat_Queue_Activity.gpb
      - Chat_Service_Level_Performance.gpb
      - Chat_Waiting_Statistics.gpb
      - Email_Agent_Activity.gpb
      - Email_Queue_Activity.gpb
      - Facebook_Media_Activity.gpb
      - IFRAME.gpb
      - IWD_Agent_Activity.gpb
      - IWD_Queue_Activity.gpb
      - Queue_KPIs.gpb
      - Queue_Overflow_Reason.gpb
      - Static_Text.gpb
      - Twitter_Media_Activity.gpb
      - eServices_Agent_Activity.gpb
      - eServices_Queue_KPIs.gpb
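With the override file prepared, you can optionally render the chart locally to catch YAML or placeholder errors before installing. This is a hedged sketch: `helm template` renders manifests client-side only, and it assumes the same pulsehelmrepo alias and placeholder values used in the install command.

```shell
# Render the init-tenant chart with the override file; mistakes in
# values-override-init-tenant.yaml are reported here, before any install.
helm template "pulse-init-tenant-<tenant-sid>" pulsehelmrepo/init-tenant \
    --version="<chart-version>" \
    --namespace=pulse \
    -f values-override-init-tenant.yaml
```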

Install the init-tenant helm chart
To install the init-tenant helm chart, run the following command:

helm upgrade --install "pulse-init-tenant-<tenant-sid>" pulsehelmrepo/init-tenant --wait --wait-for-jobs --version="<chart-version>"--namespace=pulse -f values-override-init-tenant.yaml

If the installation succeeds, the command exits with code 0.

Validate the init-tenant helm chart
To validate the init-tenant helm chart, run the following command:

kubectl get pods -n="pulse" -l "app.kubernetes.io/name=init-tenant,app.kubernetes.io/instance=pulse-init-tenant-<tenant-sid>"

If the deployment was successful, the pulse-init-tenant job is listed as Completed. For example:

NAME                                     READY   STATUS      RESTARTS   AGE
pulse-init-tenant-100-job-qszgl          0/1     Completed   0          2d20h
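If the job does not reach Completed, inspecting its pod logs usually reveals the cause (for example, unreachable Postgres or incorrect credentials). One possible check, reusing the same label selector as the validation command above:

```shell
# Print logs from the init-tenant job pod(s) for this tenant.
kubectl logs -n pulse \
  -l "app.kubernetes.io/name=init-tenant,app.kubernetes.io/instance=pulse-init-tenant-<tenant-sid>"
```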

Install dcu helm chart

Get the dcu helm chart:

helm repo update
helm search repo <pulsehelmrepo>/dcu

Prepare the override file:

  • Update the values-override-dcu.yaml file (AKS):
    # Default values for dcu.
    # This is a YAML-formatted file.
    # Declare variables to be passed into your templates.
     
    replicaCount: "<tenant-dcu>"
     
    # * Tenant info
    # tenant identification, or empty for shared deployment
    tenant:
      # Tenant UUID
      id: "<tenant-uuid>"
      # Tenant SID (like 0001)
      sid: "<tenant-sid>"
     
    # * Common log configuration
    log:
      # target directory where log will be stored, leave empty for default
      logDir: ""
      # path where volume will be mounted
      volumeMountPath: /data/log
      # log volume type: none | hostpath | pvc
      volumeType: pvc
      # log volume hostpath, used with volumeType "hostpath"
      volumeHostPath: /mnt/log
      # log PVC parameters, used with volumeType "pvc"
      pvc:
        name: pulse-dcu-logs
        accessModes:
          - ReadWriteMany
        capacity: 10Gi
        class: <pv-storage-class-rw-many>
     
    # * Config info
    # Set your values.
    config:
      dbName: "<db-name>"
      # set "true" when need @host added for username
      dbUserWithHost: true
      mountSecrets: false
      postgresConfig: "pulse-postgres-configmap"
      # Postgres secret name
      postgresSecret: "pulse-postgres-secret"
      # Postgres secret key for user
      postgresSecretUser: "META_DB_ADMIN"
      # Postgres secret key for password
      postgresSecretPassword: "META_DB_ADMINPWD"
      redisConfig: "pulse-redis-configmap"
      # Redis secret name
      redisSecret: "pulse-redis-secret"
      # Redis secret key for access key
      redisSecretKey: "REDIS01_KEY"
     
    # * Image
    # container image common settings
    image:
      tag: "<image-version>"
      pullPolicy: IfNotPresent
      registry: "<docker-registry>"
      imagePullSecrets: [name: "<docker-registry-secret-name>"]
     
    ## Service account settings
    serviceAccount:
      # Specifies whether a service account should be created
      create: false
      # Annotations to add to the service account
      annotations: {}
      # The name of the service account to use.
      # If not set and create is true, a name is generated using the fullname template
      name: ""
     
    ## Add annotations to all pods
    ##
    podAnnotations: {}
     
    ## Specifies the security context for all Pods in the service
    ##
    podSecurityContext: {}
     
    ## Add labels to all pods
    ##
    podLabels: {}
     
    ## HPA Settings
    ## Not supported in this release!
    hpa:
      enabled: false
     
    ## Priority Class
    ## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
    ##
    priorityClassName: ""
     
    ## Node labels for assignment.
    ## ref: https://kubernetes.io/docs/user-guide/node-selection/
    ##
    nodeSelector: {}
     
    ## Tolerations for assignment.
    ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
    ##
    tolerations: []
     
    ## Pod Disruption Budget Settings
    podDisruptionBudget:
      enabled: false
     
    ## Affinity for assignment.
    ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
    ##
    affinity: {}
     
    # * Monitoring settings
    monitoring:
      # enable the Prometheus metrics endpoint
      enabled: false
      # enable golden signals metrics (not supported for PE)
      goldenSignals:
        enabled: false
      # port number of the Prometheus metrics endpoint
      port: 9091
      # HTTP path to scrape for metrics
      path: /metrics
      # additional annotations required for monitoring PODs
      # you can reference values of other variables as {{.Values.variable.full.name}}
      podAnnotations: {}
        # prometheus.io/scrape: "true"
        # prometheus.io/port: "{{.Values.monitoring.port}}"
        # prometheus.io/path: "/metrics"
      podMonitor:
        # enables PodMonitor creation for the POD
        enabled: true
        # interval at which metrics should be scraped
        scrapeInterval: 30s
        # timeout after which the scrape is ended
        scrapeTimeout:
        # namespace of the PodMonitor, defaults to the namespace of the POD
        namespace:
        additionalLabels: {}
      alerts:
        # enables alert rules
        enabled: true
        # alert condition duration
        duration: 5m
        # namespace of the alert rules, defaults to the namespace of the POD
        namespace:
        additionalLabels: {}
       
     
    ##########################################################################
     
    # * Configuration for the Collector container
    collector:
      # resource limits for container
      resources:
        # minimum resource requirements to start container
        requests:
          # minimal amount of memory required to start a container
          memory: "300Mi"
          # minimal CPU to reserve
          cpu: "200m"
        # resource limits for containers
        limits:
          # maximum amount of memory a container can use before being evicted
          # by the OOM Killer
          memory: "4Gi"
          # maximum amount of CPU resources that can be used and should be tuned to reflect
          # what the application can effectively use before needing to be horizontally scaled out
          cpu: "8000m"
      # securityContext: {}
     
    # * Configuration for the StatServer container
    statserver:
      # resource limits for container
      resources:
        # minimum resource requirements to start container
        requests:
          # minimal amount of memory required to start a container
          memory: "300Mi"
          # minimal CPU to reserve
          cpu: "100m"
        # resource limits for containers
        limits:
          # maximum amount of memory a container can use before being evicted
          # by the OOM Killer
          memory: "4Gi"
          # maximum amount of CPU resources that can be used and should be tuned to reflect
          # what the application can effectively use before needing to be horizontally scaled out
          cpu: "4000m"
      # securityContext: {}
     
    # * Configuration for the monitor sidecar container
    monitorSidecar:
      # resource limits for container
      resources:
        # disabled: true
        # minimum resource requirements to start container
        requests:
          # minimal amount of memory required to start a container
          memory: "30Mi"
          # minimal CPU to reserve
          cpu: "2m"
        # resource limits for containers
        limits:
          # maximum amount of memory a container can use before being evicted
          # by the OOM Killer
          memory: "70Mi"
          # maximum amount of CPU resources that can be used and should be tuned to reflect
          # what the application can effectively use before needing to be horizontally scaled out
          cpu: "10m"
      # securityContext: {}
     
    ##########################################################################
     
    # * Configuration for the Configuration Server Proxy container
    csproxy:
      # define domain for the configuration host
      params:
        cfgHost: "tenant-<tenant-uuid>.voice.<domain>"
      # resource limits for container
      resources:
        # minimum resource requirements to start container
        requests:
          # minimal amount of memory required to start a container
          memory: "200Mi"
          # minimal CPU to reserve
          cpu: "50m"
        # resource limits for containers
        limits:
          # maximum amount of memory a container can use before being evicted
          # by the OOM Killer
          memory: "2Gi"
          # maximum amount of CPU resources that can be used and should be tuned to reflect
          # what the application can effectively use before needing to be horizontally scaled out
          cpu: "1000m"
      # securityContext: {}
     
    # volumeClaims contains persistent volume claims for services
    # All available storage classes can be found here:
    # https://github.com/genesysengage/tfm-azure-core-aks/blob/master/k8s-module/storage.tf
    volumeClaims:
      # statserverBackup is storage for statserver backup data
      statserverBackup:
        name: statserver-backup
        accessModes:
          - ReadWriteOnce
        # capacity is storage capacity
        capacity: "1Gi"
        # class is storage class. Must be set explicitly.
        class: <pv-storage-class-rw-once>
  • Update the values-override-dcu.yaml file (GKE):
    # Default values for dcu.
    # This is a YAML-formatted file.
    # Declare variables to be passed into your templates.
     
    replicaCount: "<tenant-dcu>"
     
    # * Tenant info
    # tenant identification, or empty for shared deployment
    tenant:
      # Tenant UUID
      id: "<tenant-uuid>"
      # Tenant SID (like 0001)
      sid: "<tenant-sid>"
     
    # * Common log configuration
    log:
      # target directory where log will be stored, leave empty for default
      logDir: ""
      # path where volume will be mounted
      volumeMountPath: /data/log
      # log volume type: none | hostpath | pvc
      volumeType: pvc
      # log volume hostpath, used with volumeType "hostpath"
      volumeHostPath: /mnt/log
      # log PVC parameters, used with volumeType "pvc"
      pvc:
        name: pulse-dcu-logs
        accessModes:
          - ReadWriteMany
        capacity: 10Gi
        class: <pv-storage-class-rw-many>
     
    # * Config info
    # Set your values.
    config:
      dbName: "<db-name>"
      # set "true" when need @host added for username
      dbUserWithHost: true
      mountSecrets: false
      postgresConfig: "pulse-postgres-configmap"
      # Postgres secret name
      postgresSecret: "pulse-postgres-secret"
      # Postgres secret key for user
      postgresSecretUser: "META_DB_ADMIN"
      # Postgres secret key for password
      postgresSecretPassword: "META_DB_ADMINPWD"
      redisConfig: "pulse-redis-configmap"
      # Redis secret name
      redisSecret: "pulse-redis-secret"
      # Redis secret key for access key
      redisSecretKey: "REDIS01_KEY"
     
    # * Image
    # container image common settings
    image:
      tag: "<image-version>"
      pullPolicy: IfNotPresent
      registry: "<docker-registry>"
      imagePullSecrets: [name: "<docker-registry-secret-name>"]
     
    ## Service account settings
    serviceAccount:
      # Specifies whether a service account should be created
      create: false
      # Annotations to add to the service account
      annotations: {}
      # The name of the service account to use.
      # If not set and create is true, a name is generated using the fullname template
      name: ""
     
    ## Add annotations to all pods
    ##
    podAnnotations: {}
     
    ## Specifies the security context for all Pods in the service
    ##
    podSecurityContext:
      runAsNonRoot: true
      runAsUser: 500
      runAsGroup: 500
      fsGroup: 0
     
    ## Add labels to all pods
    ##
    podLabels: {}
     
    ## HPA Settings
    ## Not supported in this release!
    hpa:
      enabled: false
     
    ## Priority Class
    ## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
    ##
    priorityClassName: ""
     
    ## Node labels for assignment.
    ## ref: https://kubernetes.io/docs/user-guide/node-selection/
    ##
    nodeSelector: {}
     
    ## Tolerations for assignment.
    ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
    ##
    tolerations: []
     
    ## Pod Disruption Budget Settings
    podDisruptionBudget:
      enabled: false
     
    ## Affinity for assignment.
    ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
    ##
    affinity: {}
     
    # * Monitoring settings
    monitoring:
      # enable the Prometheus metrics endpoint
      enabled: false
      # enable golden signals metrics (not supported for PE)
      goldenSignals:
        enabled: false
      # port number of the Prometheus metrics endpoint
      port: 9091
      # HTTP path to scrape for metrics
      path: /metrics
      # additional annotations required for monitoring PODs
      # you can reference values of other variables as {{.Values.variable.full.name}}
      podAnnotations: {}
        # prometheus.io/scrape: "true"
        # prometheus.io/port: "{{.Values.monitoring.port}}"
        # prometheus.io/path: "/metrics"
      podMonitor:
        # enables PodMonitor creation for the POD
        enabled: true
        # interval at which metrics should be scraped
        scrapeInterval: 30s
        # timeout after which the scrape is ended
        scrapeTimeout:
        # namespace of the PodMonitor, defaults to the namespace of the POD
        namespace:
        additionalLabels: {}
      alerts:
        # enables alert rules
        enabled: true
        # alert condition duration
        duration: 5m
        # namespace of the alert rules, defaults to the namespace of the POD
        namespace:
        additionalLabels: {}
       
     
    ##########################################################################
     
    # * Configuration for the Collector container
    collector:
      # resource limits for container
      resources:
        # minimum resource requirements to start container
        requests:
          # minimal amount of memory required to start a container
          memory: "300Mi"
          # minimal CPU to reserve
          cpu: "200m"
        # resource limits for containers
        limits:
          # maximum amount of memory a container can use before being evicted
          # by the OOM Killer
          memory: "4Gi"
          # maximum amount of CPU resources that can be used and should be tuned to reflect
          # what the application can effectively use before needing to be horizontally scaled out
          cpu: "8000m"
      # securityContext:
      #   runAsUser: 500
      #   runAsGroup: 500
     
    # * Configuration for the StatServer container
    statserver:
      # resource limits for container
      resources:
        # minimum resource requirements to start container
        requests:
          # minimal amount of memory required to start a container
          memory: "300Mi"
          # minimal CPU to reserve
          cpu: "100m"
        # resource limits for containers
        limits:
          # maximum amount of memory a container can use before being evicted
          # by the OOM Killer
          memory: "4Gi"
          # maximum amount of CPU resources that can be used and should be tuned to reflect
          # what the application can effectively use before needing to be horizontally scaled out
          cpu: "4000m"
      # securityContext:
      #   runAsUser: 500
      #   runAsGroup: 500
     
    # * Configuration for the monitor sidecar container
    monitorSidecar:
      # resource limits for container
      resources:
        # disabled: true
        # minimum resource requirements to start container
        requests:
          # minimal amount of memory required to start a container
          memory: "30Mi"
          # minimal CPU to reserve
          cpu: "2m"
        # resource limits for containers
        limits:
          # maximum amount of memory a container can use before being evicted
          # by the OOM Killer
          memory: "70Mi"
          # maximum amount of CPU resources that can be used and should be tuned to reflect
          # what the application can effectively use before needing to be horizontally scaled out
          cpu: "10m"
      # securityContext:
      #   runAsUser: 500
      #   runAsGroup: 500
     
    ##########################################################################
     
    # * Configuration for the Configuration Server Proxy container
    csproxy:
      # define domain for the configuration host
      params:
        cfgHost: "tenant-<tenant-uuid>.voice.<domain>"
      # resource limits for container
      resources:
        # minimum resource requirements to start container
        requests:
          # minimal amount of memory required to start a container
          memory: "200Mi"
          # minimal CPU to reserve
          cpu: "50m"
        # resource limits for containers
        limits:
          # maximum amount of memory a container can use before being evicted
          # by the OOM Killer
          memory: "2Gi"
          # maximum amount of CPU resources that can be used and should be tuned to reflect
          # what the application can effectively use before needing to be horizontally scaled out
          cpu: "1000m"
      # securityContext:
      #   runAsUser: 500
      #   runAsGroup: 500
     
    # volumeClaims contains persistent volume claims for services
    # All available storage classes can be found here:
    # https://github.com/genesysengage/tfm-azure-core-aks/blob/master/k8s-module/storage.tf
    volumeClaims:
      # statserverBackup is storage for statserver backup data
      statserverBackup:
        name: statserver-backup
        accessModes:
          - ReadWriteOnce
        # capacity is storage capacity
        capacity: "1Gi"
        # class is storage class. Must be set explicitly.
        class: <pv-storage-class-rw-once>
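The override file references the pulse-postgres-configmap, pulse-postgres-secret, pulse-redis-configmap, and pulse-redis-secret objects, which are expected to already exist in the pulse namespace (they are typically created during earlier provisioning steps). A quick sanity check before installing:

```shell
# Confirm the config maps and secrets referenced by values-override-dcu.yaml
# exist; a NotFound error here means the dcu pods would fail to start.
kubectl get configmap -n pulse pulse-postgres-configmap pulse-redis-configmap
kubectl get secret -n pulse pulse-postgres-secret pulse-redis-secret
```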

Install the dcu helm chart
To install the dcu helm chart, run the following command:

helm upgrade --install "pulse-dcu-<tenant-sid>"  pulsehelmrepo/dcu --wait --reuse-values --version=<chart-version> --namespace=pulse -f values-override-dcu.yaml

Validate the dcu helm chart
To validate the dcu helm chart, run the following command:

kubectl get pods -n=pulse -l "app.kubernetes.io/name=dcu,app.kubernetes.io/instance=pulse-dcu-<tenant-sid>"

Check the output to ensure that all pulse-dcu pods are running. For example:

NAME              READY   STATUS    RESTARTS   AGE
pulse-dcu-100-0   3/3     Running   0          5m23s
pulse-dcu-100-1   3/3     Running   0          4m47s
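Instead of polling manually, you can block until every dcu pod for the tenant reports Ready. This is a sketch using the same label selector as the validation command; adjust the timeout to suit your environment.

```shell
# Wait up to five minutes for all dcu pods of this tenant to become Ready.
kubectl wait pods -n pulse \
  -l "app.kubernetes.io/name=dcu,app.kubernetes.io/instance=pulse-dcu-<tenant-sid>" \
  --for=condition=Ready --timeout=300s
```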

Install lds helm chart

Get the lds helm chart:

helm repo update
helm search repo <pulsehelmrepo>/lds

Prepare the override file:

  • Update values in the values-override-lds.yaml file (AKS):
    # Default values for lds.
    # This is a YAML-formatted file.
    # Declare variables to be passed into your templates.
     
    replicaCount: 2
     
    # * Tenant info
    # tenant identification, or empty for shared deployment
    tenant:
      # Tenant UUID
      id: "<tenant-uuid>"
      # Tenant SID (like 0001)
      sid: "<tenant-sid>"
     
    # * Common log configuration
    log:
      # target directory where log will be stored, leave empty for default
      logDir: ""
      # path where volume will be mounted
      volumeMountPath: /data/log
      # log volume type: none | hostpath | pvc
      volumeType: pvc
      # log volume hostpath, used with volumeType "hostpath"
      volumeHostPath: /mnt/log
      # log PVC parameters, used with volumeType "pvc"
      pvc:
        name: pulse-lds-logs
        accessModes:
          - ReadWriteMany
        capacity: 10Gi
        class: <pv-storage-class-rw-many>
     
    # * Container image common settings
    image:
      tag: "<image-version>"
      pullPolicy: IfNotPresent
      registry: "<docker-registry>"
      imagePullSecrets: [name: "<docker-registry-secret-name>"]
     
    ## Service account settings
    serviceAccount:
      # Specifies whether a service account should be created
      create: false
      # Annotations to add to the service account
      annotations: {}
      # The name of the service account to use.
      # If not set and create is true, a name is generated using the fullname template
      name: ""
     
    ## Add annotations to all pods
    ##
    podAnnotations: {}
     
    ## Specifies the security context for all Pods in the service
    ##
    podSecurityContext: {}
     
    ## Add labels to all pods
    ##
    podLabels: {}
     
    ## HPA Settings
    ## Not supported in this release!
    hpa:
      enabled: false
     
    ## Priority Class
    ## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
    ##
    priorityClassName: ""
     
    ## Node labels for assignment.
    ## ref: https://kubernetes.io/docs/user-guide/node-selection/
    ##
    nodeSelector: {}
     
    ## Tolerations for assignment.
    ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
    ##
    tolerations: []
     
    ## Pod Disruption Budget Settings
    podDisruptionBudget:
      enabled: false
     
    ## Affinity for assignment.
    ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
    ##
    affinity: {}
     
    # * Monitoring settings
    monitoring:
      # enable the Prometheus metrics endpoint
      enabled: false
      # enable golden signals metrics (not supported for PE)
      goldenSignals:
        enabled: false
      # port number of the Prometheus metrics endpoint
      port: 9091
      # HTTP path to scrape for metrics
      path: /metrics
      # additional annotations required for monitoring PODs
      # you can reference values of other variables as {{.Values.variable.full.name}}
      podAnnotations: {}
        # prometheus.io/scrape: "true"
        # prometheus.io/port: "{{.Values.monitoring.port}}"
        # prometheus.io/path: "/metrics"
      podMonitor:
        # enables PodMonitor creation for the POD
        enabled: true
        # interval at which metrics should be scraped
        scrapeInterval: 30s
        # timeout after which the scrape is ended
        scrapeTimeout:
        # namespace of the PodMonitor, defaults to the namespace of the POD
        namespace:
        additionalLabels: {}
      alerts:
        # enables alert rules
        enabled: true
        # alert condition duration
        duration: 5m
        # namespace of the alert rules, defaults to the namespace of the POD
        namespace:
        additionalLabels: {}
     
    # * Configuration for the LDS container
    lds:
      # resource limits for container
      resources:
        # minimum resource requirements to start container
        requests:
          # minimal amount of memory required to start a container
          memory: "50Mi"
          # minimal CPU to reserve
          cpu: "50m"
        # resource limits for containers
        limits:
          # maximum amount of memory a container can use before being evicted
          # by the OOM Killer
          memory: "4Gi"
          # maximum amount of CPU resources that can be used and should be tuned to reflect
          # what the application can effectively use before needing to be horizontally scaled out
          cpu: "4000m"
      # securityContext: {}
     
    # * Configuration for the monitor sidecar container
    monitorSidecar:
      # resource limits for container
      resources:
        # minimum resource requirements to start container
        requests:
          # minimal amount of memory required to start a container
          memory: "30Mi"
          # minimal CPU to reserve
          cpu: "2m"
        # resource limits for containers
        limits:
          # maximum amount of memory a container can use before being evicted
          # by the OOM Killer
          memory: "70Mi"
          # maximum amount of CPU resources that can be used and should be tuned to reflect
          # what the application can effectively use before needing to be horizontally scaled out
          cpu: "10m"
      # securityContext: {}
     
    # *  Configuration for the Configuration Server Proxy container
    csproxy:
      # define domain for the configuration host
      params:
        cfgHost: "tenant-<tenant-uuid>.voice.<domain>"
      resources:
        # minimum resource requirements to start container
        requests:
          # minimal amount of memory required to start a container
          memory: "200Mi"
          # minimal CPU to reserve
          cpu: "50m"
        # resource limits for containers
        limits:
          # maximum amount of memory a container can use before being evicted
          # by the OOM Killer
          memory: "2Gi"
          # maximum amount of CPU resources that can be used and should be tuned to reflect
          # what the application can effectively use before needing to be horizontally scaled out
          cpu: "1000m"
      # securityContext: {}
  • Update values in the values-override-lds.yaml file (GKE):
    # Default values for lds.
    # This is a YAML-formatted file.
    # Declare variables to be passed into your templates.
     
    replicaCount: 2
     
    # * Tenant info
    # tenant identification, or empty for shared deployment
    tenant:
      # Tenant UUID
      id: "<tenant-uuid>"
      # Tenant SID (like 0001)
      sid: "<tenant-sid>"
     
    # * Common log configuration
    log:
      # target directory where log will be stored, leave empty for default
      logDir: ""
      # path where volume will be mounted
      volumeMountPath: /data/log
      # log volume type: none | hostpath | pvc
      volumeType: pvc
      # log volume hostpath, used with volumeType "hostpath"
      volumeHostPath: /mnt/log
      # log PVC parameters, used with volumeType "pvc"
      pvc:
        name: pulse-lds-logs
        accessModes:
          - ReadWriteMany
        capacity: 10Gi
        class: <pv-storage-class-rw-many>
     
    # * Container image common settings
    image:
      tag: "<image-version>"
      pullPolicy: IfNotPresent
      registry: "<docker-registry>"
      imagePullSecrets: [name: "<docker-registry-secret-name>"]
     
    ## Service account settings
    serviceAccount:
      # Specifies whether a service account should be created
      create: false
      # Annotations to add to the service account
      annotations: {}
      # The name of the service account to use.
      # If not set and create is true, a name is generated using the fullname template
      name: ""
     
    ## Add annotations to all pods
    ##
    podAnnotations: {}
     
    ## Specifies the security context for all Pods in the service
    ##
    podSecurityContext:
      runAsNonRoot: true
      runAsUser: 500
      runAsGroup: 500
      fsGroup: 0
     
    ## Add labels to all pods
    ##
    podLabels: {}
     
    ## HPA Settings
    ## Not supported in this release!
    hpa:
      enabled: false
     
    ## Priority Class
    ## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
    ##
    priorityClassName: ""
     
    ## Node labels for assignment.
    ## ref: https://kubernetes.io/docs/user-guide/node-selection/
    ##
    nodeSelector: {}
     
    ## Tolerations for assignment.
    ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
    ##
    tolerations: []
     
    ## Pod Disruption Budget Settings
    podDisruptionBudget:
      enabled: false
     
    ## Affinity for assignment.
    ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
    ##
    affinity: {}
     
    # * Monitoring settings
    monitoring:
      # enable the Prometheus metrics endpoint
      enabled: false
      # enable golden signals metrics (not supported for PE)
      goldenSignals:
        enabled: false
      # port number of the Prometheus metrics endpoint
      port: 9091
      # HTTP path to scrape for metrics
      path: /metrics
      # additional annotations required for monitoring PODs
      # you can reference values of other variables as {{.Values.variable.full.name}}
      podAnnotations: {}
        # prometheus.io/scrape: "true"
        # prometheus.io/port: "{{.Values.monitoring.port}}"
        # prometheus.io/path: "/metrics"
      podMonitor:
        # enables PodMonitor creation for the POD
        enabled: true
        # interval at which metrics should be scraped
        scrapeInterval: 30s
        # timeout after which the scrape is ended
        scrapeTimeout:
        # namespace of the PodMonitor, defaults to the namespace of the POD
        namespace:
        additionalLabels: {}
      alerts:
        # enables alert rules
        enabled: true
        # alert condition duration
        duration: 5m
        # namespace of the alert rules, defaults to the namespace of the POD
        namespace:
        additionalLabels: {}
     
    # * Configuration for the LDS container
    lds:
      # resource limits for container
      resources:
        # minimum resource requirements to start container
        requests:
          # minimal amount of memory required to start a container
          memory: "50Mi"
          # minimal CPU to reserve
          cpu: "50m"
        # resource limits for containers
        limits:
          # maximum amount of memory a container can use before being evicted
          # by the OOM Killer
          memory: "4Gi"
          # maximum amount of CPU resources that can be used and should be tuned to reflect
          # what the application can effectively use before needing to be horizontally scaled out
          cpu: "4000m"
      # securityContext:
      #   runAsUser: 500
      #   runAsGroup: 500
     
    # * Configuration for the monitor sidecar container
    monitorSidecar:
      # resource limits for container
      resources:
        # minimum resource requirements to start container
        requests:
          # minimal amount of memory required to start a container
          memory: "30Mi"
          # minimal CPU to reserve
          cpu: "2m"
        # resource limits for containers
        limits:
          # maximum amount of memory a container can use before being evicted
          # by the OOM Killer
          memory: "70Mi"
          # maximum amount of CPU resources that can be used and should be tuned to reflect
          # what the application can effectively use before needing to be horizontally scaled out
          cpu: "10m"
      # securityContext:
      #   runAsUser: 500
      #   runAsGroup: 500
     
    # *  Configuration for the Configuration Server Proxy container
    csproxy:
      # define domain for the configuration host
      params:
        cfgHost: "tenant-<tenant-uuid>.voice.<domain>"
      resources:
        # minimum resource requirements to start container
        requests:
          # minimal amount of memory required to start a container
          memory: "200Mi"
          # minimal CPU to reserve
          cpu: "50m"
        # resource limits for containers
        limits:
          # maximum amount of memory a container can use before being evicted
          # by the OOM Killer
          memory: "2Gi"
          # maximum amount of CPU resources that can be used and should be tuned to reflect
          # what the application can effectively use before needing to be horizontally scaled out
          cpu: "1000m"
      # securityContext:
      #   runAsUser: 500
      #   runAsGroup: 500

Update values in the values-override-lds-vq.yaml file:

# Default values for lds.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
 
lds:
  params:
    cfgApp: "pulse-lds-vq-$((K8S_POD_INDEX % 2))"
 
log:
  pvc:
    name: pulse-lds-vq-logs
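The `cfgApp` value above uses shell arithmetic on `K8S_POD_INDEX`, so even-numbered and odd-numbered pod replicas alternate between two configuration application objects. A minimal sketch of how the name resolves per pod (assuming, as in this guide, that the container environment provides `K8S_POD_INDEX` from the pod ordinal):

```shell
# Sketch: how pulse-lds-vq-$((K8S_POD_INDEX % 2)) resolves for each pod ordinal.
# K8S_POD_INDEX is assumed to be set per pod from its StatefulSet index.
for K8S_POD_INDEX in 0 1 2 3; do
  echo "pod ${K8S_POD_INDEX} -> pulse-lds-vq-$((K8S_POD_INDEX % 2))"
done
# prints:
#   pod 0 -> pulse-lds-vq-0
#   pod 1 -> pulse-lds-vq-1
#   pod 2 -> pulse-lds-vq-0
#   pod 3 -> pulse-lds-vq-1
```

With two replicas, this spreads the pods across the two application objects `pulse-lds-vq-0` and `pulse-lds-vq-1`.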

Install the lds helm chart:
To install the lds helm chart, run the following command:

helm upgrade --install "pulse-lds-<tenant-sid>"    pulsehelmrepo/lds --wait --version=<chart-version> --namespace=pulse -f values-override-lds.yaml
helm upgrade --install "pulse-lds-vq-<tenant-sid>" pulsehelmrepo/lds --wait --version=<chart-version> --namespace=pulse -f values-override-lds.yaml -f values-override-lds-vq.yaml

If the installation is successful, the command returns exit code 0.
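In a script, you can branch on the helm exit status directly instead of reading it off the terminal. A minimal sketch, using the release name, version placeholder, and values file from the command above:

```shell
# Sketch: act on the helm exit status explicitly. Release name, chart
# version, and values file are the placeholders used in this guide.
if helm upgrade --install "pulse-lds-<tenant-sid>" pulsehelmrepo/lds \
     --wait --version="<chart-version>" --namespace=pulse \
     -f values-override-lds.yaml; then
  echo "pulse-lds release deployed"
else
  echo "pulse-lds deployment failed" >&2
fi
```

Because the command uses `--wait`, a zero exit status means the release's pods reached a ready state, not merely that the manifests were applied.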

Validate the lds helm chart:
To validate the lds helm chart, run the following command:

kubectl get pods -n=pulse -l "app.kubernetes.io/name=lds,app.kubernetes.io/instance=pulse-lds-<tenant-sid>"

Verify that the command reports all pulse-lds pods as Running, for example:

NAME              READY   STATUS    RESTARTS   AGE
pulse-lds-100-0   3/3     Running   0          2d20h
pulse-lds-100-1   3/3     Running   0          2d20h
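Instead of polling `kubectl get pods`, you can block until the pods report Ready. A sketch using the same label selector as the validation command above; the 300-second timeout is an assumed value, so tune it for your cluster:

```shell
# Sketch: wait until every lds pod for this tenant is Ready, or fail after
# the timeout. Selector matches the validation command above.
kubectl wait --for=condition=Ready pod \
  -l "app.kubernetes.io/name=lds,app.kubernetes.io/instance=pulse-lds-<tenant-sid>" \
  -n pulse --timeout=300s
```

The command exits non-zero if any matching pod is still not Ready when the timeout expires, which makes it convenient in CI pipelines.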

Install permissions helm chart

Get the permissions helm chart

helm repo update
helm search repo pulsehelmrepo/permissions

Prepare the override file:

  • Update values in the values-override-permissions.yaml file (AKS):
    # Default values for permissions.
    # This is a YAML-formatted file.
    # Declare variables to be passed into your templates.
     
    # * Image configuration
    image:
      tag: "<image-version>"
      pullPolicy: IfNotPresent
      registry: "<docker-registry>"
      imagePullSecrets: [name: "<docker-registry-secret-name>"]
     
    # * Tenant info
    # tenant identification, or empty for shared deployment
    tenant:
      # Tenant UUID
      id: "<tenant-uuid>"
      # Tenant SID (like 0001)
      sid: "<tenant-sid>"
     
    # common configuration.
    config:
      dbName: "<db-name>"
      # set "true" when need @host added for username
      dbUserWithHost: true
      # set "true" for CSI secrets
      mountSecrets: false
      # Postgres config map name
      postgresConfig: "pulse-postgres-configmap"
      # Postgres secret name
      postgresSecret: "pulse-postgres-secret"
      # Postgres secret key for user
      postgresSecretUser: "META_DB_ADMIN"
      # Postgres secret key for password
      postgresSecretPassword: "META_DB_ADMINPWD"
      # Redis config map name
      redisConfig: "pulse-redis-configmap"
      # Redis secret name
      redisSecret: "pulse-redis-secret"
      # Redis secret key for access key
      redisSecretKey: "REDIS01_KEY"
     
     
    # * Configuration for the Configuration Server Proxy container
    csproxy:
      # define domain for the configuration host
      params:
        cfgHost: "tenant-<tenant-uuid>.voice.<domain>"
      # resource limits for container
      resources:
        # minimum resource requirements to start container
        requests:
          # minimal amount of memory required to start a container
          memory: "200Mi"
          # minimal CPU to reserve
          cpu: "50m"
        # resource limits for containers
        limits:
          # maximum amount of memory a container can use before being evicted
          # by the OOM Killer
          memory: "2Gi"
          # maximum amount of CPU resources that can be used and should be tuned to reflect
          # what the application can effectively use before needing to be horizontally scaled out
          cpu: "1000m"
      # securityContext: {}
     
    # * Common log configuration
    log:
      # target directory where log will be stored, leave empty for default
      logDir: ""
      # path where volume will be mounted
      volumeMountPath: /data/log
      # log volume type: none | hostpath | pvc
      volumeType: pvc
      # log volume hostpath, used with volumeType "hostpath"
      volumeHostPath: /mnt/log
      # log PVC parameters, used with volumeType "pvc"
      pvc:
        name: pulse-permissions-logs
        accessModes:
          - ReadWriteMany
        capacity: 10Gi
        class: <pv-storage-class-rw-many>
     
    ## Specifies the security context for all Pods in the service
    ##
    podSecurityContext: {}
     
    ## Resource requests and limits
    ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
    ##
    resources:
      limits:
        memory: "1Gi"
        cpu: "500m"
      requests:
        memory: "400Mi"
        cpu: "50m"
     
    ## HPA Settings
    ## Not supported in this release!
    hpa:
      enabled: false
     
    ## Priority Class
    ## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
    ##
    priorityClassName: ""
     
    ## Node labels for assignment.
    ## ref: https://kubernetes.io/docs/user-guide/node-selection/
    ##
    nodeSelector: {}
     
    ## Tolerations for assignment.
    ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
    ##
    tolerations: []
     
    ## Pod Disruption Budget Settings
    podDisruptionBudget:
      enabled: false
     
    ## Affinity for assignment.
    ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
    ##
    affinity: {}
  • Update values in the values-override-permissions.yaml file (GKE):
    # Default values for permissions.
    # This is a YAML-formatted file.
    # Declare variables to be passed into your templates.
     
    # * Image configuration
    image:
      tag: "<image-version>"
      pullPolicy: IfNotPresent
      registry: "<docker-registry>"
      imagePullSecrets: [name: "<docker-registry-secret-name>"]
     
    # * Tenant info
    # tenant identification, or empty for shared deployment
    tenant:
      # Tenant UUID
      id: "<tenant-uuid>"
      # Tenant SID (like 0001)
      sid: "<tenant-sid>"
     
    # common configuration.
    config:
      dbName: "<db-name>"
      # set "true" when need @host added for username
      dbUserWithHost: true
      # set "true" for CSI secrets
      mountSecrets: false
      # Postgres config map name
      postgresConfig: "pulse-postgres-configmap"
      # Postgres secret name
      postgresSecret: "pulse-postgres-secret"
      # Postgres secret key for user
      postgresSecretUser: "META_DB_ADMIN"
      # Postgres secret key for password
      postgresSecretPassword: "META_DB_ADMINPWD"
      # Redis config map name
      redisConfig: "pulse-redis-configmap"
      # Redis secret name
      redisSecret: "pulse-redis-secret"
      # Redis secret key for access key
      redisSecretKey: "REDIS01_KEY"
     
     
    # * Configuration for the Configuration Server Proxy container
    csproxy:
      # define domain for the configuration host
      params:
        cfgHost: "tenant-<tenant-uuid>.voice.<domain>"
      # resource limits for container
      resources:
        # minimum resource requirements to start container
        requests:
          # minimal amount of memory required to start a container
          memory: "200Mi"
          # minimal CPU to reserve
          cpu: "50m"
        # resource limits for containers
        limits:
          # maximum amount of memory a container can use before being evicted
          # by the OOM Killer
          memory: "2Gi"
          # maximum amount of CPU resources that can be used and should be tuned to reflect
          # what the application can effectively use before needing to be horizontally scaled out
          cpu: "1000m"
      # securityContext:
      #   runAsUser: 500
      #   runAsGroup: 500
     
    # * Common log configuration
    log:
      # target directory where log will be stored, leave empty for default
      logDir: ""
      # path where volume will be mounted
      volumeMountPath: /data/log
      # log volume type: none | hostpath | pvc
      volumeType: pvc
      # log volume hostpath, used with volumeType "hostpath"
      volumeHostPath: /mnt/log
      # log PVC parameters, used with volumeType "pvc"
      pvc:
        name: pulse-permissions-logs
        accessModes:
          - ReadWriteMany
        capacity: 10Gi
        class: <pv-storage-class-rw-many>
     
    ## Specifies the security context for all Pods in the service
    ##
    podSecurityContext:
      fsGroup: null
      runAsUser: null
      runAsGroup: 0
      runAsNonRoot: true
     
    ## Resource requests and limits
    ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
    ##
    resources:
      limits:
        memory: "1Gi"
        cpu: "500m"
      requests:
        memory: "400Mi"
        cpu: "50m"
     
    ## HPA Settings
    ## Not supported in this release!
    hpa:
      enabled: false
     
    ## Priority Class
    ## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
    ##
    priorityClassName: ""
     
    ## Node labels for assignment.
    ## ref: https://kubernetes.io/docs/user-guide/node-selection/
    ##
    nodeSelector: {}
     
    ## Tolerations for assignment.
    ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
    ##
    tolerations: []
     
    ## Pod Disruption Budget Settings
    podDisruptionBudget:
      enabled: false
     
    ## Affinity for assignment.
    ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
    ##
    affinity: {}

Install the permissions helm chart:
To install the permissions helm chart, run the following command:

helm upgrade --install "pulse-permissions-<tenant-sid>" pulsehelmrepo/permissions --wait --version="<chart-version>" --namespace=pulse -f values-override-permissions.yaml

If the installation is successful, the command returns exit code 0.
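Beyond the exit code, you can confirm the release state with standard helm commands. A sketch, using the release name from the install command above:

```shell
# Sketch: inspect the deployed release rather than relying on the exit
# code alone. The release name is the placeholder used in this guide.
helm status "pulse-permissions-<tenant-sid>" --namespace=pulse
helm list --namespace=pulse --filter '^pulse-permissions'
```

A healthy release shows `STATUS: deployed` in the `helm status` output.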

Validate the permissions helm chart:
To validate the permissions helm chart, run the following command:

kubectl get pods -n=pulse -l "app.kubernetes.io/name=permissions,app.kubernetes.io/instance=pulse-permissions-<tenant-sid>"

Verify that the command reports all pulse-permissions pods as Running, for example:

NAME                                    READY   STATUS    RESTARTS   AGE
pulse-permissions-100-c5ff8bb7d-jl7d7   2/2     Running   2          2d20h
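If a pod is not Running (for example, it is stuck in Pending or CrashLoopBackOff), the pod events and recent container logs usually identify the cause. A sketch, using the pod name from the example output above; substitute the name reported by your cluster:

```shell
# Sketch: inspect events and recent logs for a problem pod. The pod name
# below is the example from this guide -- adjust to your deployment.
kubectl describe pod pulse-permissions-100-c5ff8bb7d-jl7d7 -n pulse
kubectl logs pulse-permissions-100-c5ff8bb7d-jl7d7 -n pulse \
  --all-containers=true --tail=100
```

The Events section at the end of the `describe` output reports scheduling, image-pull, and probe failures.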

Troubleshooting

Check init-tenant helm chart manifests:
To output the init-tenant Helm chart manifest into the helm-template directory, run the following command:

helm template --version=<chart-version> --namespace=pulse --debug --output-dir helm-template pulse-init-tenant-<tenant-sid> pulsehelmrepo/init-tenant -f values-override-init-tenant.yaml

Check dcu helm chart manifests:
To output the dcu Helm chart manifest into the helm-template directory, run the following command:

helm template --version=<chart-version> --namespace=pulse --debug --output-dir helm-template pulse-dcu-<tenant-sid> pulsehelmrepo/dcu -f values-override-dcu.yaml

Check lds helm chart manifests:
To output the lds chart manifest into the helm-template directory, run the following command:

helm template --version=<chart-version> --namespace=pulse --debug --output-dir helm-template pulse-lds-<tenant-sid> pulsehelmrepo/lds -f values-override-lds.yaml

Check permissions Helm chart manifests:
To output the permissions Helm chart manifest into the helm-template directory, run the following command:

helm template --version=<chart-version> --namespace=pulse --debug --output-dir helm-template pulse-permissions-<tenant-sid> pulsehelmrepo/permissions -f values-override-permissions.yaml
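After rendering, a quick scan of the output can catch placeholders from this guide that were never replaced in your override files. A sketch, assuming the `helm-template` directory was produced by the commands above:

```shell
# Sketch: flag any guide placeholders (e.g. <tenant-uuid>) that survived
# into the rendered manifests under helm-template/.
if grep -rn -e '<tenant-uuid>' -e '<docker-registry>' -e '<image-version>' \
     helm-template/; then
  echo "unreplaced placeholders found above" >&2
else
  echo "no unreplaced placeholders"
fi
```

Extend the pattern list with any other placeholders you use, such as `<domain>` or `<pv-storage-class-rw-many>`.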