Difference between revisions of "PEC-REP/Current/PulsePEGuide/Provision"

From Genesys Documentation
 
{{AnchorDiv|SingleNamespace}}

===Single namespace===
Single namespace deployments use software-defined networking (SDN) in multitenant mode, where namespaces are network isolated. If you plan to deploy Pulse into the single namespace, ensure that your environment meets the following requirements for inputs:

*Back-end services deployed into the single namespace must include the string ''pulse'':
   # set service domain used to access voice service
   # example for GKE VPC case: voice.svc.gke1-uswest1.gcpe002.gencpe.com
   voiceDomain: "voice.svc.<domain>"
   # set service domain used to access ixn service
   # example for GKE VPC case: ixn.svc.gke1-uswest1.gcpe002.gencpe.com
   ixnDomain: "ixn.svc.<domain>"
   # set service domain used to access pulse service
   # example for GKE VPC case: pulse.svc.gke1-uswest1.gcpe002.gencpe.com
   pulseDomain: "pulse.svc.<domain>"
   # set configuration server password, used when creating secrets
*Update the <tt>values-override-init-tenant.yaml</tt> file (GKE):
*:{{NoteFormat|Enable configurator only for configurations in GKE with VPC scoped DNS.}}
  
   # set service domain used to access voice service
   # example for GKE VPC case: voice.svc.gke1-uswest1.gcpe002.gencpe.com
   voiceDomain: "voice.svc.<domain>"
   # set service domain used to access ixn service
   # example for GKE VPC case: ixn.svc.gke1-uswest1.gcpe002.gencpe.com
   ixnDomain: "ixn.svc.<domain>"
   # set service domain used to access pulse service
   # example for GKE VPC case: pulse.svc.gke1-uswest1.gcpe002.gencpe.com
   pulseDomain: "pulse.svc.<domain>"
   # set configuration server password, used when creating secrets
   - eServices_Queue_KPIs.gpb
</source>
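The three service-domain inputs in the file above follow a single pattern. As an illustration (not part of the values file), the GKE VPC example values from the comments can be derived from the cluster base domain:

```shell
# Illustrative only: derive the voice/ixn/pulse service domain inputs
# from a base cluster domain. The base domain below is the GKE VPC
# example used in the comments of values-override-init-tenant.yaml.
DOMAIN="gke1-uswest1.gcpe002.gencpe.com"
for svc in voice ixn pulse; do
  printf '%sDomain: "%s.svc.%s"\n' "$svc" "$svc" "$DOMAIN"
done
# prints:
# voiceDomain: "voice.svc.gke1-uswest1.gcpe002.gencpe.com"
# ixnDomain: "ixn.svc.gke1-uswest1.gcpe002.gencpe.com"
# pulseDomain: "pulse.svc.gke1-uswest1.gcpe002.gencpe.com"
```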
'''Install the <tt>init-tenant</tt> helm chart''': <br />
To install the <tt>init-tenant</tt> helm chart, run the following command:
<source lang="bash">
helm upgrade --install "pulse-init-tenant-<tenant-sid>" pulsehelmrepo/init-tenant --wait --wait-for-jobs --version="<chart-version>" --namespace=pulse -f values-override-init-tenant.yaml
</source>
If the installation is successful, the command exits with code <tt>0</tt>.

'''Validate the <tt>init-tenant</tt> helm chart''':<br />
To validate the <tt>init-tenant</tt> helm chart, run the following command:
<source lang="bash">
kubectl get pods -n="pulse" -l "app.kubernetes.io/name=init-tenant,app.kubernetes.io/instance=pulse-init-tenant-<tenant-sid>"
</source>
If the deployment was successful, the <tt>pulse-init-tenant</tt> job is listed as <tt>Completed</tt>. For example:
<source lang="bash">
NAME                                     READY  STATUS     RESTARTS  AGE
pulse-init-tenant-100-job-qszgl          0/1    Completed  0         2d20h
</source>
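Instead of polling <tt>kubectl get pods</tt> by hand, the wait can be scripted. This is a sketch that assumes kubectl access to the cluster; the job name pattern and the tenant SID (100) are inferred from the example pod name above and are illustrative, so substitute your own values:

```shell
# Sketch: block until the init-tenant job finishes.
# Assumes kubectl access; job name and tenant SID (100) are illustrative,
# inferred from the example pod name "pulse-init-tenant-100-job-qszgl".
TENANT_SID="100"
JOB="pulse-init-tenant-${TENANT_SID}-job"
if command -v kubectl >/dev/null 2>&1; then
  kubectl wait --for=condition=complete "job/${JOB}" -n pulse --timeout=300s
else
  echo "kubectl not found; would run: kubectl wait --for=condition=complete job/${JOB} -n pulse --timeout=300s"
fi
```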

===Install dcu helm chart===

'''Get the <tt>dcu</tt> helm chart:'''
<source lang="bash">
helm repo update
helm search repo <pulsehelmrepo>/dcu
</source>

'''Prepare the override file:'''

*Update the <tt>values-override-dcu.yaml</tt> file (AKS):
*:<source lang="bash"># Default values for dcu.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: "<tenant-dcu>"

# * Tenant info
# tenant identification, or empty for shared deployment
tenant:
  # Tenant UUID
  id: "<tenant-uuid>"
  # Tenant SID (like 0001)
  sid: "<tenant-sid>"

# * Common log configuration
log:
  # target directory where log will be stored, leave empty for default
  logDir: ""
  # path where volume will be mounted
  volumeMountPath: /data/log
  # log volume type: none | hostpath | pvc
  volumeType: pvc
  # log volume hostpath, used with volumeType "hostpath"
  volumeHostPath: /mnt/log
  # log PVC parameters, used with volumeType "pvc"
  pvc:
    name: pulse-dcu-logs
    accessModes:
      - ReadWriteMany
    capacity: 10Gi
    class: <pv-storage-class-rw-many>

# * Config info
# Set your values.
config:
  dbName: "<db-name>"
  # set "true" when need @host added for username
  dbUserWithHost: true
  mountSecrets: false
  postgresConfig: "pulse-postgres-configmap"
  # Postgres secret name
  postgresSecret: "pulse-postgres-secret"
  # Postgres secret key for user
  postgresSecretUser: "META_DB_ADMIN"
  # Postgres secret key for password
  postgresSecretPassword: "META_DB_ADMINPWD"
  redisConfig: "pulse-redis-configmap"
  # Redis secret name
  redisSecret: "pulse-redis-secret"
  # Redis secret key for access key
  redisSecretKey: "REDIS01_KEY"
# * Image
# container image common settings
image:
  tag: "<image-version>"
  pullPolicy: IfNotPresent
  registry: "<docker-registry>"
  imagePullSecrets: [name: "<docker-registry-secret-name>"]

## Service account settings
serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

## Add annotations to all pods
##
podAnnotations: {}

## Specifies the security context for all Pods in the service
##
podSecurityContext: {}

## Add labels to all pods
##
podLabels: {}

## HPA Settings
## Not supported in this release!
hpa:
  enabled: false

## Priority Class
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
##
priorityClassName: ""

## Node labels for assignment.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}

## Tolerations for assignment.
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
## Pod Disruption Budget Settings
podDisruptionBudget:
  enabled: false

## Affinity for assignment.
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}

# * Monitoring settings
monitoring:
  # enable the Prometheus metrics endpoint
  enabled: false
  # enable golden signals metrics (not supported for PE)
  goldenSignals:
    enabled: false
  # port number of the Prometheus metrics endpoint
  port: 9091
  # HTTP path to scrape for metrics
  path: /metrics
  # additional annotations required for monitoring PODs
  # you can reference values of other variables as {{.Values.variable.full.name}}
  podAnnotations: {}
    # prometheus.io/scrape: "true"
    # prometheus.io/port: "{{.Values.monitoring.port}}"
    # prometheus.io/path: "/metrics"
  podMonitor:
    # enables PodMonitor creation for the POD
    enabled: true
    # interval at which metrics should be scraped
    scrapeInterval: 30s
    # timeout after which the scrape is ended
    scrapeTimeout:
    # namespace of the PodMonitor, defaults to the namespace of the POD
    namespace:
    additionalLabels: {}
  alerts:
    # enables alert rules
    enabled: true
    # alert condition duration
    duration: 5m
    # namespace of the alert rules, defaults to the namespace of the POD
    namespace:
    additionalLabels: {}
##########################################################################

# * Configuration for the Collector container
collector:
  # resource limits for container
  resources:
    # minimum resource requirements to start container
    requests:
      # minimal amount of memory required to start a container
      memory: "300Mi"
      # minimal CPU to reserve
      cpu: "200m"
    # resource limits for containers
    limits:
      # maximum amount of memory a container can use before being evicted
      # by the OOM Killer
      memory: "4Gi"
      # maximum amount of CPU resources that can be used and should be tuned to reflect
      # what the application can effectively use before needing to be horizontally scaled out
      cpu: "8000m"
  # securityContext: {}

# * Configuration for the StatServer container
statserver:
  # resource limits for container
  resources:
    # minimum resource requirements to start container
    requests:
      # minimal amount of memory required to start a container
      memory: "300Mi"
      # minimal CPU to reserve
      cpu: "100m"
    # resource limits for containers
    limits:
      # maximum amount of memory a container can use before being evicted
      # by the OOM Killer
      memory: "4Gi"
      # maximum amount of CPU resources that can be used and should be tuned to reflect
      # what the application can effectively use before needing to be horizontally scaled out
      cpu: "4000m"
  # securityContext: {}

# * Configuration for the monitor sidecar container
monitorSidecar:
  # resource limits for container
  resources:
    # disabled: true
    # minimum resource requirements to start container
    requests:
      # minimal amount of memory required to start a container
      memory: "30Mi"
      # minimal CPU to reserve
      cpu: "2m"
    # resource limits for containers
    limits:
      # maximum amount of memory a container can use before being evicted
      # by the OOM Killer
      memory: "70Mi"
      # maximum amount of CPU resources that can be used and should be tuned to reflect
      # what the application can effectively use before needing to be horizontally scaled out
      cpu: "10m"
  # securityContext: {}

##########################################################################

# * Configuration for the Configuration Server Proxy container
csproxy:
  # define domain for the configuration host
  params:
    cfgHost: "tenant-<tenant-uuid>.voice.<domain>"
  # resource limits for container
  resources:
    # minimum resource requirements to start container
    requests:
      # minimal amount of memory required to start a container
      memory: "200Mi"
      # minimal CPU to reserve
      cpu: "50m"
    # resource limits for containers
    limits:
      # maximum amount of memory a container can use before being evicted
      # by the OOM Killer
      memory: "2Gi"
      # maximum amount of CPU resources that can be used and should be tuned to reflect
      # what the application can effectively use before needing to be horizontally scaled out
      cpu: "1000m"
  # securityContext: {}

# volumeClaims contains persistent volume claims for services
# All available storage classes can be found here:
# https://github.com/genesysengage/tfm-azure-core-aks/blob/master/k8s-module/storage.tf
volumeClaims:
  # statserverBackup is storage for statserver backup data
  statserverBackup:
    name: statserver-backup
    accessModes:
      - ReadWriteOnce
    # capacity is storage capacity
    capacity: "1Gi"
    # class is storage class. Must be set explicitly.
    class: <pv-storage-class-rw-once>
</source>

*Update the <tt>values-override-dcu.yaml</tt> file (GKE):
*:<source lang="bash"># Default values for dcu.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: "<tenant-dcu>"

# * Tenant info
# tenant identification, or empty for shared deployment
tenant:
  # Tenant UUID
  id: "<tenant-uuid>"
  # Tenant SID (like 0001)
  sid: "<tenant-sid>"

# * Common log configuration
log:
  # target directory where log will be stored, leave empty for default
  logDir: ""
  # path where volume will be mounted
  volumeMountPath: /data/log
  # log volume type: none | hostpath | pvc
  volumeType: pvc
  # log volume hostpath, used with volumeType "hostpath"
  volumeHostPath: /mnt/log
  # log PVC parameters, used with volumeType "pvc"
  pvc:
    name: pulse-dcu-logs
    accessModes:
      - ReadWriteMany
    capacity: 10Gi
    class: <pv-storage-class-rw-many>

# * Config info
# Set your values.
config:
  dbName: "<db-name>"
  # set "true" when need @host added for username
  dbUserWithHost: true
  mountSecrets: false
  postgresConfig: "pulse-postgres-configmap"
  # Postgres secret name
  postgresSecret: "pulse-postgres-secret"
  # Postgres secret key for user
  postgresSecretUser: "META_DB_ADMIN"
  # Postgres secret key for password
  postgresSecretPassword: "META_DB_ADMINPWD"
  redisConfig: "pulse-redis-configmap"
  # Redis secret name
  redisSecret: "pulse-redis-secret"
  # Redis secret key for access key
  redisSecretKey: "REDIS01_KEY"

# * Image
# container image common settings
image:
  tag: "<image-version>"
  pullPolicy: IfNotPresent
  registry: "<docker-registry>"
  imagePullSecrets: [name: "<docker-registry-secret-name>"]

## Service account settings
serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

## Add annotations to all pods
##
podAnnotations: {}

## Specifies the security context for all Pods in the service
##
podSecurityContext:
  runAsNonRoot: true
  runAsUser: 500
  runAsGroup: 500
  fsGroup: 0
##########################################################################
+
## Add labels to all pods
 +
##
 +
podLabels: {}
 
   
 
   
# * Configuration for the Collector container
+
## HPA Settings
collector:
+
## Not supported in this release!
  # resource limits for container
+
hpa:
  resources:
+
  enabled: false
    # minimum resource requirements to start container
+
    requests:
+
## Priority Class
      # minimal amount of memory required to start a container
+
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
      memory: "300Mi"
+
##
      # minimal CPU to reserve
+
priorityClassName: ""
      cpu: "200m"
 
    # resource limits for containers
 
    limits:
 
      # maximum amount of memory a container can use before being evicted
 
      # by the OOM Killer
 
      memory: "4Gi"
 
      # maximum amount of CPU resources that can be used and should be tuned to reflect
 
      # what the application can effectively use before needing to be horizontally scaled out
 
      cpu: "8000m"
 
  # securityContext: {}
 
 
   
 
   
# * Configuration for the StatServer container
+
## Node labels for assignment.
statserver:
+
## ref: https://kubernetes.io/docs/user-guide/node-selection/
  # resource limits for container
+
##
  resources:
+
nodeSelector: {}
    # minimum resource requirements to start container
+
    requests:
+
## Tolerations for assignment.
      # minimal amount of memory required to start a container
+
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
      memory: "300Mi"
+
##
      # minimal CPU to reserve
+
tolerations: []
      cpu: "100m"
 
    # resource limits for containers
 
    limits:
 
      # maximum amount of memory a container can use before being evicted
 
      # by the OOM Killer
 
      memory: "4Gi"
 
      # maximum amount of CPU resources that can be used and should be tuned to reflect
 
      # what the application can effectively use before needing to be horizontally scaled out
 
      cpu: "4000m"
 
  # securityContext: {}
 
 
   
 
   
# * Configuration for the monitor sidecar container
+
## Pod Disruption Budget Settings
monitorSidecar:
+
podDisruptionBudget:
  # resource limits for container
+
   enabled: false
  resources:
 
    # disabled: true
 
    # minimum resource requirements to start container
 
    requests:
 
      # minimal amount of memory required to start a container
 
      memory: "30Mi"
 
      # minimal CPU to reserve
 
      cpu: "2m"
 
    # resource limits for containers
 
    limits:
 
      # maximum amount of memory a container can use before being evicted
 
      # by the OOM Killer
 
      memory: "70Mi"
 
      # maximum amount of CPU resources that can be used and should be tuned to reflect
 
      # what the application can effectively use before needing to be horizontally scaled out
 
      cpu: "10m"
 
   # securityContext: {}
 
 
   
 
   
##########################################################################
+
## Affinity for assignment.
 +
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
 +
##
 +
affinity: {}
 
   
 
   
# * Configuration for the Configuration Server Proxy container
+
# * Monitoring settings
csproxy:
+
monitoring:
   # define domain for the configuration host
+
  # enable the Prometheus metrics endpoint
   params:
+
  enabled: false
     cfgHost: "tenant-<tenant-uuid>.voice.<domain>"
+
   # enable golden signals metrics (not supported for PE)
   # resource limits for container
+
   goldenSignals:
   resources:
+
     enabled: false
    # minimum resource requirements to start container
+
   # port number of the Prometheus metrics endpoint
    requests:
+
   port: 9091
      # minimal amount of memory required to start a container
+
  # HTTP path to scrape for metrics
      memory: "200Mi"
+
  path: /metrics
      # minimal CPU to reserve
+
  # additional annotations required for monitoring PODs
      cpu: "50m"
+
  # you can reference values of other variables as {{.Values.variable.full.name}}
     # resource limits for containers
+
  podAnnotations: {}
     limits:
+
    # prometheus.io/scrape: "true"
      # maximum amount of memory a container can use before being evicted
+
    # prometheus.io/port: "{{.Values.monitoring.port}}"
      # by the OOM Killer
+
    # prometheus.io/path: "/metrics"
      memory: "2Gi"
+
  podMonitor:
      # maximum amount of CPU resources that can be used and should be tuned to reflect
+
     # enables PodMonitor creation for the POD
      # what the application can effectively use before needing to be horizontally scaled out
+
     enabled: true
      cpu: "1000m"
+
    # interval at which metrics should be scraped
  # securityContext: {}
+
    scrapeInterval: 30s
 +
    # timeout after which the scrape is ended
 +
    scrapeTimeout:
 +
    # namespace of the PodMonitor, defaults to the namespace of the POD
 +
    namespace:
 +
    additionalLabels: {}
 +
  alerts:
 +
    # enables alert rules
 +
    enabled: true
 +
    # alert condition duration
 +
    duration: 5m
 +
    # namespace of the alert rules, defaults to the namespace of the POD
 +
    namespace:
 +
    additionalLabels: {}
 +
 
 +
 +
##########################################################################
 
   
 
   
# volumeClaims contains persistent volume claims for services
+
# * Configuration for the Collector container
# All available storage classes can be found here:
+
collector:
# https://github.com/genesysengage/tfm-azure-core-aks/blob/master/k8s-module/storage.tf
+
   # resource limits for container
volumeClaims:
+
   resources:
   # statserverBackup is storage for statserver backup data
+
     # minimum resource requirements to start container
   statserverBackup:
+
     requests:
     name: statserver-backup
+
       # minimal amount of memory required to start a container
     accessModes:
+
      memory: "300Mi"
       - ReadWriteOnce
+
      # minimal CPU to reserve
    # capacity is storage capacity
+
      cpu: "200m"
    capacity: "1Gi"
+
     # resource limits for containers
     # class is storage class. Must be set explicitly.
+
     limits:
     class: <pv-storage-class-rw-once>
+
      # maximum amount of memory a container can use before being evicted
</source>
+
      # by the OOM Killer
 
+
      memory: "4Gi"
*Update the <tt>values-override-dcu.yaml</tt> file (GKE):
+
      # maximum amount of CPU resources that can be used and should be tuned to reflect
*:<source lang="bash"># Default values for dcu.
+
      # what the application can effectively use before needing to be horizontally scaled out
# This is a YAML-formatted file.
+
      cpu: "8000m"
# Declare variables to be passed into your templates.
+
  # securityContext:
+
  #  runAsUser: 500
replicaCount: "<tenant-dcu>"
+
  #  runAsGroup: 500
 
   
 
   
# * Configuration for the StatServer container
statserver:
  # resource limits for container
  resources:
    # minimum resource requirements to start container
    requests:
      # minimal amount of memory required to start a container
      memory: "300Mi"
      # minimal CPU to reserve
      cpu: "100m"
    # resource limits for containers
    limits:
      # maximum amount of memory a container can use before being evicted
      # by the OOM Killer
      memory: "4Gi"
      # maximum amount of CPU resources that can be used and should be tuned to reflect
      # what the application can effectively use before needing to be horizontally scaled out
      cpu: "4000m"
  # securityContext:
  #  runAsUser: 500
  #  runAsGroup: 500

# * Configuration for the monitor sidecar container
monitorSidecar:
  # resource limits for container
  resources:
    # disabled: true
    # minimum resource requirements to start container
    requests:
      # minimal amount of memory required to start a container
      memory: "30Mi"
      # minimal CPU to reserve
      cpu: "2m"
    # resource limits for containers
    limits:
      # maximum amount of memory a container can use before being evicted
      # by the OOM Killer
      memory: "70Mi"
      # maximum amount of CPU resources that can be used and should be tuned to reflect
      # what the application can effectively use before needing to be horizontally scaled out
      cpu: "10m"
  # securityContext:
  #  runAsUser: 500
  #  runAsGroup: 500

##########################################################################
# * Configuration for the Configuration Server Proxy container
csproxy:
  # define domain for the configuration host
  params:
    cfgHost: "tenant-<tenant-uuid>.voice.<domain>"
  # resource limits for container
  resources:
    # minimum resource requirements to start container
    requests:
      # minimal amount of memory required to start a container
      memory: "200Mi"
      # minimal CPU to reserve
      cpu: "50m"
    # resource limits for containers
    limits:
      # maximum amount of memory a container can use before being evicted
      # by the OOM Killer
      memory: "2Gi"
      # maximum amount of CPU resources that can be used and should be tuned to reflect
      # what the application can effectively use before needing to be horizontally scaled out
      cpu: "1000m"
  # securityContext:
  #  runAsUser: 500
  #  runAsGroup: 500

# volumeClaims contains persistent volume claims for services
# All available storage classes can be found here:
# https://github.com/genesysengage/tfm-azure-core-aks/blob/master/k8s-module/storage.tf
volumeClaims:
  # statserverBackup is storage for statserver backup data
  statserverBackup:
    name: statserver-backup
    accessModes:
      - ReadWriteOnce
    # capacity is storage capacity
    capacity: "1Gi"
    # class is storage class. Must be set explicitly.
    class: <pv-storage-class-rw-once>
</source>

'''Install the <tt>dcu</tt> helm chart'''<br />To install the <tt>dcu</tt> helm chart, run the following command:
<source lang="bash">helm upgrade --install "pulse-dcu-<tenant-sid>" pulsehelmrepo/dcu --wait --reuse-values --version=<chart-version> --namespace=pulse -f values-override-dcu.yaml
</source>

'''Validate the <tt>dcu</tt> helm chart'''<br />To validate the <tt>dcu</tt> helm chart, run the following command:
<source lang="bash">kubectl get pods -n=pulse -l "app.kubernetes.io/name=dcu,app.kubernetes.io/instance=pulse-dcu-<tenant-sid>"
</source>
Check the output to ensure that all <tt>pulse-dcu</tt> pods are running, for example:
<source lang="bash">
NAME             READY  STATUS   RESTARTS  AGE
pulse-dcu-100-0  3/3    Running  0         5m23s
pulse-dcu-100-1  3/3    Running  0         4m47s
</source>
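The readiness check can also be scripted. Below is a minimal sketch (not part of the Genesys tooling) that inspects a <tt>kubectl get pods</tt> listing like the one above and fails unless every pod is <tt>Running</tt> with a full READY count; the inline sample listing stands in for live command output:

```shell
#!/usr/bin/env bash
# Verify that every pod row shows STATUS "Running" and a full READY count
# such as "3/3". The sample listing below stands in for the output of:
#   kubectl get pods -n=pulse -l "app.kubernetes.io/name=dcu,..."
listing='NAME             READY  STATUS   RESTARTS  AGE
pulse-dcu-100-0  3/3    Running  0         5m23s
pulse-dcu-100-1  3/3    Running  0         4m47s'

if echo "$listing" | awk '
    NR > 1 {                # skip the NAME/READY/STATUS header row
      split($2, r, "/")     # READY column, e.g. "3/3" -> r[1], r[2]
      if ($3 != "Running" || r[1] != r[2]) bad = 1
    }
    END { exit bad }'
then
  echo "all pods Running"
else
  echo "some pods are not ready" >&2
  exit 1
fi
```

For a live check, pipe the real <tt>kubectl get pods</tt> output into the same <tt>awk</tt> filter; the non-zero exit code makes it usable as a gate in deployment scripts.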

===Install lds helm chart===

'''Get the <tt>lds</tt> helm chart:'''
<source lang="bash">helm repo update
helm search repo <pulsehelmrepo>/lds</source>

'''Prepare the override file:'''

*Update values in the <tt>values-override-lds.yaml</tt> file (AKS):
*:<source lang="bash"># Default values for lds.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 2

# * Tenant info
# tenant identification, or empty for shared deployment
tenant:
  # Tenant UUID
  id: "<tenant-uuid>"
  # Tenant SID (like 0001)
  sid: "<tenant-sid>"

# * Common log configuration
log:
  # target directory where log will be stored, leave empty for default
  logDir: ""
  # path where volume will be mounted
  volumeMountPath: /data/log
  # log volume type: none | hostpath | pvc
  volumeType: pvc
  # log volume hostpath, used with volumeType "hostpath"
  volumeHostPath: /mnt/log
  # log PVC parameters, used with volumeType "pvc"
  pvc:
    name: pulse-lds-logs
    accessModes:
      - ReadWriteMany
    capacity: 10Gi
    class: <pv-storage-class-rw-many>

# * Container image common settings
image:
  tag: "<image-version>"
  pullPolicy: IfNotPresent
  registry: "<docker-registry>"
  imagePullSecrets: [name: "<docker-registry-secret-name>"]

## Service account settings
serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

## Add annotations to all pods
##
podAnnotations: {}

## Specifies the security context for all Pods in the service
##
podSecurityContext: {}

## Add labels to all pods
##
podLabels: {}

## HPA Settings
## Not supported in this release!
hpa:
  enabled: false

## Priority Class
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
##
priorityClassName: ""

## Node labels for assignment.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}

## Tolerations for assignment.
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []

## Pod Disruption Budget Settings
podDisruptionBudget:
  enabled: false

## Affinity for assignment.
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}

# * Monitoring settings
monitoring:
  # enable the Prometheus metrics endpoint
  enabled: false
  # enable golden signals metrics (not supported for PE)
  goldenSignals:
    enabled: false
  # port number of the Prometheus metrics endpoint
  port: 9091
  # HTTP path to scrape for metrics
  path: /metrics
  # additional annotations required for monitoring PODs
  # you can reference values of other variables as {{.Values.variable.full.name}}
  podAnnotations: {}
    # prometheus.io/scrape: "true"
    # prometheus.io/port: "{{.Values.monitoring.port}}"
    # prometheus.io/path: "/metrics"
  podMonitor:
    # enables PodMonitor creation for the POD
    enabled: true
    # interval at which metrics should be scraped
    scrapeInterval: 30s
    # timeout after which the scrape is ended
    scrapeTimeout:
    # namespace of the PodMonitor, defaults to the namespace of the POD
    namespace:
    additionalLabels: {}
  alerts:
    # enables alert rules
    enabled: true
    # alert condition duration
    duration: 5m
    # namespace of the alert rules, defaults to the namespace of the POD
    namespace:
    additionalLabels: {}
 
   
 
   
# * Configuration for the LDS container
lds:
  # resource limits for container
  resources:
    # minimum resource requirements to start container
    requests:
      # minimal amount of memory required to start a container
      memory: "50Mi"
      # minimal CPU to reserve
      cpu: "50m"
    # resource limits for containers
    limits:
      # maximum amount of memory a container can use before being evicted
      # by the OOM Killer
      memory: "4Gi"
      # maximum amount of CPU resources that can be used and should be tuned to reflect
      # what the application can effectively use before needing to be horizontally scaled out
      cpu: "4000m"
  # securityContext: {}

# * Configuration for the monitor sidecar container
monitorSidecar:
  # resource limits for container
  resources:
    # minimum resource requirements to start container
    requests:
      # minimal amount of memory required to start a container
      memory: "30Mi"
      # minimal CPU to reserve
      cpu: "2m"
    # resource limits for containers
    limits:
      # maximum amount of memory a container can use before being evicted
      # by the OOM Killer
      memory: "70Mi"
      # maximum amount of CPU resources that can be used and should be tuned to reflect
      # what the application can effectively use before needing to be horizontally scaled out
      cpu: "10m"
  # securityContext: {}

# * Configuration for the Configuration Server Proxy container
csproxy:
  # define domain for the configuration host
  params:
    cfgHost: "tenant-<tenant-uuid>.voice.<domain>"
  resources:
    # minimum resource requirements to start container
    requests:
      # minimal amount of memory required to start a container
      memory: "200Mi"
      # minimal CPU to reserve
      cpu: "50m"
    # resource limits for containers
    limits:
      # maximum amount of memory a container can use before being evicted
      # by the OOM Killer
      memory: "2Gi"
      # maximum amount of CPU resources that can be used and should be tuned to reflect
      # what the application can effectively use before needing to be horizontally scaled out
      cpu: "1000m"
  # securityContext: {}
</source>

*Update values in the <tt>values-override-lds.yaml</tt> file (GKE):
*:<source lang="bash"># Default values for lds.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 2

# * Tenant info
# tenant identification, or empty for shared deployment
tenant:
  # Tenant UUID
  id: "<tenant-uuid>"
  # Tenant SID (like 0001)
  sid: "<tenant-sid>"

# * Common log configuration
log:
  # target directory where log will be stored, leave empty for default
  logDir: ""
  # path where volume will be mounted
  volumeMountPath: /data/log
  # log volume type: none | hostpath | pvc
  volumeType: pvc
  # log volume hostpath, used with volumeType "hostpath"
  volumeHostPath: /mnt/log
  # log PVC parameters, used with volumeType "pvc"
  pvc:
    name: pulse-lds-logs
    accessModes:
      - ReadWriteMany
    capacity: 10Gi
    class: <pv-storage-class-rw-many>

# * Container image common settings
image:
  tag: "<image-version>"
  pullPolicy: IfNotPresent
  registry: "<docker-registry>"
  imagePullSecrets: [name: "<docker-registry-secret-name>"]

## Service account settings
serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

## Add annotations to all pods
##
podAnnotations: {}

## Specifies the security context for all Pods in the service
##
podSecurityContext:
  runAsNonRoot: true
  runAsUser: 500
  runAsGroup: 500
  fsGroup: 0

## Add labels to all pods
##
podLabels: {}
Line 1,154: Line 1,225:
 
  enabled: false
  # enable golden signals metrics (not supported for PE)
  # not needed since 100.0.000.0015
  goldenSignals:
    enabled: false
Line 1,164: Line 1,234:
 
  # you can reference values of other variables as {{.Values.variable.full.name}}
  podAnnotations: {}
    # prometheus.io/scrape: "true"
    # prometheus.io/port: "{{.Values.monitoring.port}}"
  # additional annotations required for monitoring Service
  # you can reference values of other variables as {{.Values.variable.full.name}}
  # available since 100.0.000.0015
  serviceAnnotations: {}
Line 1,190: Line 1,256:
 
    additionalLabels: {}

# * Configuration for the LDS container
lds:
  # resource limits for container
  resources:
Line 1,199: Line 1,263:
 
    requests:
      # minimal amount of memory required to start a container
      memory: "50Mi"
      # minimal CPU to reserve
      cpu: "50m"
    # resource limits for containers
    limits:
Line 1,209: Line 1,273:
 
      # maximum amount of CPU resources that can be used and should be tuned to reflect
      # what the application can effectively use before needing to be horizontally scaled out
      cpu: "4000m"
  # securityContext:
  #  runAsUser: 500
  #  runAsGroup: 500

# * Configuration for the monitor sidecar container
monitorSidecar:
  # resource limits for container
  resources:
Line 1,221: Line 1,285:
 
    requests:
      # minimal amount of memory required to start a container
      memory: "30Mi"
      # minimal CPU to reserve
      cpu: "2m"
    # resource limits for containers
    limits:
      # maximum amount of memory a container can use before being evicted
      # by the OOM Killer
      memory: "70Mi"
      # maximum amount of CPU resources that can be used and should be tuned to reflect
      # what the application can effectively use before needing to be horizontally scaled out
      cpu: "10m"
  # securityContext:
  #  runAsUser: 500
  #  runAsGroup: 500

# * Configuration for the Configuration Server Proxy container
csproxy:
  # define domain for the configuration host
  params:
    cfgHost: "tenant-<tenant-uuid>.voice.<domain>"
  resources:
    # minimum resource requirements to start container
Line 1,281: Line 1,322:
 
  # securityContext:
  #  runAsUser: 500
  #  runAsGroup: 500</source>

'''Update values in the <tt>values-override-lds-vq.yaml</tt> file:'''
<source lang="bash"># Default values for lds.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

lds:
  params:
    cfgApp: "pulse-lds-vq-$((K8S_POD_INDEX % 2))"

log:
  pvc:
    name: pulse-lds-vq-logs
</source>
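The <tt>cfgApp</tt> value above is shell arithmetic over the pod's ordinal index, so even-numbered pods resolve to <tt>pulse-lds-vq-0</tt> and odd-numbered pods to <tt>pulse-lds-vq-1</tt>. A quick illustration in plain bash (here <tt>K8S_POD_INDEX</tt> is set by hand; in the deployment it is provided to the container):

```shell
# Illustration only: show how $((K8S_POD_INDEX % 2)) distributes pods
# across the two pulse-lds-vq applications.
for K8S_POD_INDEX in 0 1 2 3; do
  echo "pod $K8S_POD_INDEX -> pulse-lds-vq-$((K8S_POD_INDEX % 2))"
done
# pod 0 -> pulse-lds-vq-0
# pod 1 -> pulse-lds-vq-1
# pod 2 -> pulse-lds-vq-0
# pod 3 -> pulse-lds-vq-1
```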
  
'''Install the <tt>lds</tt> helm chart''':<br />To install the <tt>lds</tt> helm chart, run the following commands:
<source lang="bash">
helm upgrade --install "pulse-lds-<tenant-sid>"    pulsehelmrepo/lds --wait --version=<chart-version> --namespace=pulse -f values-override-lds.yaml
helm upgrade --install "pulse-lds-vq-<tenant-sid>" pulsehelmrepo/lds --wait --version=<chart-version> --namespace=pulse -f values-override-lds.yaml -f values-override-lds-vq.yaml
</source>
If the installation is successful, the exit code <tt>0</tt> appears.

'''Validate the <tt>lds</tt> helm chart''':<br />To validate the <tt>lds</tt> helm chart, run the following command:
<source lang="bash">kubectl get pods -n=pulse -l "app.kubernetes.io/name=lds,app.kubernetes.io/instance=pulse-lds-<tenant-sid>"
</source>
Check the output to ensure that all <tt>pulse-lds</tt> pods are running, for example:
<source lang="bash">
NAME             READY  STATUS   RESTARTS  AGE
pulse-lds-100-0  3/3    Running  0         2d20h
pulse-lds-100-1  3/3    Running  0         2d20h
</source>

===Install permissions helm chart===

'''Get the <tt>permissions</tt> helm chart'''
<source lang="bash">helm repo update
helm search repo <pulsehelmrepo>/permissions</source>

'''Prepare the override file:'''

*Update values in the <tt>values-override-permissions.yaml</tt> file (AKS):
*:<source lang="bash"># Default values for permissions.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

# * Image configuration
image:
  tag: "<image-version>"
  pullPolicy: IfNotPresent
  registry: "<docker-registry>"
  imagePullSecrets: [name: "<docker-registry-secret-name>"]

# * Tenant info
# tenant identification, or empty for shared deployment
Line 1,334: Line 1,381:
 
  sid: "<tenant-sid>"

# common configuration.
config:
  dbName: "<db-name>"
  # set "true" when need @host added for username
  dbUserWithHost: true
  # set "true" for CSI secrets
  mountSecrets: false
  # Postgres config map name
  postgresConfig: "pulse-postgres-configmap"
  # Postgres secret name
  postgresSecret: "pulse-postgres-secret"
  # Postgres secret key for user
  postgresSecretUser: "META_DB_ADMIN"
  # Postgres secret key for password
  postgresSecretPassword: "META_DB_ADMINPWD"
  # Redis config map name
  redisConfig: "pulse-redis-configmap"
  # Redis secret name
  redisSecret: "pulse-redis-secret"
  # Redis secret key for access key
  redisSecretKey: "REDIS01_KEY"

# * Configuration for the Configuration Server Proxy container
csproxy:
  # define domain for the configuration host
  params:
    cfgHost: "tenant-<tenant-uuid>.voice.<domain>"
  # resource limits for container
  resources:
    # minimum resource requirements to start container
    requests:
      # minimal amount of memory required to start a container
      memory: "200Mi"
      # minimal CPU to reserve
      cpu: "50m"
    # resource limits for containers
    limits:
      # maximum amount of memory a container can use before being evicted
      # by the OOM Killer
      memory: "2Gi"
      # maximum amount of CPU resources that can be used and should be tuned to reflect
      # what the application can effectively use before needing to be horizontally scaled out
      cpu: "1000m"
  # securityContext: {}

# * Common log configuration
log:
  # target directory where log will be stored, leave empty for default
  logDir: ""
  # path where volume will be mounted
  volumeMountPath: /data/log
  # log volume type: none | hostpath | pvc
  volumeType: pvc
  # log volume hostpath, used with volumeType "hostpath"
  volumeHostPath: /mnt/log
  # log PVC parameters, used with volumeType "pvc"
  pvc:
    name: pulse-permissions-logs
    accessModes:
      - ReadWriteMany
    capacity: 10Gi
    class: <pv-storage-class-rw-many>

## Specifies the security context for all Pods in the service
##
podSecurityContext: {}

## Resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
  limits:
    memory: "1Gi"
    cpu: "500m"
  requests:
    memory: "400Mi"
    cpu: "50m"

## HPA Settings
Line 1,409: Line 1,488:
 
##
 
##
 
affinity: {}
 
affinity: {}
 +
</source>
 +
 +
*Update values in the <tt>values-override-permissions.yaml</tt> file (GKE):
 +
*:<source lang="bash"># Default values for permissions.
 +
# This is a YAML-formatted file.
 +
# Declare variables to be passed into your templates.
 
   
 
   
# * Monitoring settings
+
# * Image configuration
monitoring:
+
image:
   # enable the Prometheus metrics endpoint
+
   tag: "<image-version>"
   enabled: false
+
   pullPolicy: IfNotPresent
   # enable golden signals metrics (not supported for PE)
+
   registry: "<docker-registry>"
   goldenSignals:
+
   imagePullSecrets: [name: "<docker-registry-secret-name>"]
    enabled: false
+
  # port number of the Prometheus metrics endpoint
+
# * Tenant info
  port: 9091
+
# tenant identification, or empty for shared deployment
   # HTTP path to scrape for metrics
+
tenant:
   path: /metrics
+
   # Tenant UUID
   # additional annotations required for monitoring PODs
+
   id: "<tenant-uuid>"
   # you can reference values of other variables as {{.Values.variable.full.name}}
+
   # Tenant SID (like 0001)
  podAnnotations: {}
+
   sid: "<tenant-sid>"
    # prometheus.io/scrape: "true"
+
    # prometheus.io/port: "{{.Values.monitoring.port}}"
+
# common configuration.
    # prometheus.io/path: "/metrics"
+
config:
   podMonitor:
+
  dbName: "<db-name>"
    # enables PodMonitor creation for the POD
+
  # set "true" when need @host added for username
    enabled: true
+
   dbUserWithHost: true
    # interval at which metrics should be scraped
+
  # set "true" for CSI secrets
    scrapeInterval: 30s
+
  mountSecrets: false
    # timeout after which the scrape is ended
+
  # Postgres config map name
    scrapeTimeout:
+
  postgresConfig: "pulse-postgres-configmap"
    # namespace of the PodMonitor, defaults to the namespace of the POD
+
  # Postgres secret name
    namespace:
+
  postgresSecret: "pulse-postgres-secret"
    additionalLabels: {}
+
  # Postgres secret key for user
   alerts:
+
  postgresSecretUser: "META_DB_ADMIN"
    # enables alert rules
+
  # Postgres secret key for password
    enabled: true
+
   postgresSecretPassword: "META_DB_ADMINPWD"
    # alert condition duration
+
  # Redis config map name
    duration: 5m
+
  redisConfig: "pulse-redis-configmap"
    # namespace of the alert rules, defaults to the namespace of the POD
+
  # Redis secret name
    namespace:
+
  redisSecret: "pulse-redis-secret"
    additionalLabels: {}
+
  # Redis secret key for access key
 +
  redisSecretKey: "REDIS01_KEY"
 
   
 
   
# * Configuration for the LDS container
 
lds:
 
  # resource limits for container
 
  resources:
 
    # minimum resource requirements to start container
 
    requests:
 
      # minimal amount of memory required to start a container
 
      memory: "50Mi"
 
      # minimal CPU to reserve
 
      cpu: "50m"
 
    # resource limits for containers
 
    limits:
 
      # maximum amount of memory a container can use before being evicted
 
      # by the OOM Killer
 
      memory: "4Gi"
 
      # maximum amount of CPU resources that can be used and should be tuned to reflect
 
      # what the application can effectively use before needing to be horizontally scaled out
 
      cpu: "4000m"
 
  # securityContext: {}
 
 
   
 
   
# * Configuration for the monitor sidecar container
+
# * Configuration for the Configuration Server Proxy container
monitorSidecar:
 
  # resource limits for container
 
  resources:
 
    # minimum resource requirements to start container
 
    requests:
 
      # minimal amount of memory required to start a container
 
      memory: "30Mi"
 
      # minimal CPU to reserve
 
      cpu: "2m"
 
    # resource limits for containers
 
    limits:
 
      # maximum amount of memory a container can use before being evicted
 
      # by the OOM Killer
 
      memory: "70Mi"
 
      # maximum amount of CPU resources that can be used and should be tuned to reflect
 
      # what the application can effectively use before needing to be horizontally scaled out
 
      cpu: "10m"
 
  # securityContext: {}
 
 
# *  Configuration for the Configuration Server Proxy container
 
 
csproxy:
 
csproxy:
 
   # define domain for the configuration host
 
   # define domain for the configuration host
 
   params:
 
   params:
 
     cfgHost: "tenant-<tenant-uuid>.voice.<domain>"
 
     cfgHost: "tenant-<tenant-uuid>.voice.<domain>"
  resources:
 
    # minimum resource requirements to start container
 
    requests:
 
      # minimal amount of memory required to start a container
 
      memory: "200Mi"
 
      # minimal CPU to reserve
 
      cpu: "50m"
 
    # resource limits for containers
 
    limits:
 
      # maximum amount of memory a container can use before being evicted
 
      # by the OOM Killer
 
      memory: "2Gi"
 
      # maximum amount of CPU resources that can be used and should be tuned to reflect
 
      # what the application can effectively use before needing to be horizontally scaled out
 
      cpu: "1000m"
 
  # securityContext: {}
 
</source>
 
 
*Update values in the <tt>values-override-lds.yaml</tt> file (GKE):
 
*:<source lang="bash"># Default values for lds.
 
# This is a YAML-formatted file.
 
# Declare variables to be passed into your templates.
 
 
replicaCount: 2
 
 
# * Tenant info
 
# tenant identification, or empty for shared deployment
 
tenant:
 
  # Tenant UUID
 
  id: "<tenant-uuid>"
 
  # Tenant SID (like 0001)
 
  sid: "<tenant-sid>"
 
 
# * Common log configuration
 
log:
 
  # target directory where log will be stored, leave empty for default
 
  logDir: ""
 
  # path where volume will be mounted
 
  volumeMountPath: /data/log
 
  # log volume type: none | hostpath | pvc
 
  volumeType: pvc
 
  # log volume hostpath, used with volumeType "hostpath"
 
  volumeHostPath: /mnt/log
 
  # log PVC parameters, used with volumeType "pvc"
 
  pvc:
 
    name: pulse-lds-logs
 
    accessModes:
 
      - ReadWriteMany
 
    capacity: 10Gi
 
    class: <pv-storage-class-rw-many>
 
 
# * Container image common settings
 
image:
 
  tag: "<image-version>"
 
  pullPolicy: IfNotPresent
 
  registry: "<docker-registry>"
 
  imagePullSecrets: [name: "<docker-registry-secret-name>"]
 
 
## Service account settings
 
serviceAccount:
 
  # Specifies whether a service account should be created
 
  create: false
 
  # Annotations to add to the service account
 
  annotations: {}
 
  # The name of the service account to use.
 
  # If not set and create is true, a name is generated using the fullname template
 
  name: ""
 
 
## Add annotations to all pods
 
##
 
podAnnotations: {}
 
 
## Specifies the security context for all Pods in the service
 
##
 
podSecurityContext:
 
  runAsNonRoot: true
 
  runAsUser: 500
 
  runAsGroup: 500
 
  fsGroup: 0
 
 
## Add labels to all pods
 
##
 
podLabels: {}
 
 
## HPA Settings
 
## Not supported in this release!
 
hpa:
 
  enabled: false
 
 
## Priority Class
 
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
 
##
 
priorityClassName: ""
 
 
## Node labels for assignment.
 
## ref: https://kubernetes.io/docs/user-guide/node-selection/
 
##
 
nodeSelector: {}
 
 
## Tolerations for assignment.
 
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
 
##
 
tolerations: []
 
 
## Pod Disruption Budget Settings
 
podDisruptionBudget:
 
  enabled: false
 
 
## Affinity for assignment.
 
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
 
##
 
affinity: {}
 
 
# * Monitoring settings
 
monitoring:
 
  # enable the Prometheus metrics endpoint
 
  enabled: false
 
  # enable golden signals metrics (not supported for PE)
 
  goldenSignals:
 
    enabled: false
 
  # port number of the Prometheus metrics endpoint
 
  port: 9091
 
  # HTTP path to scrape for metrics
 
  path: /metrics
 
  # additional annotations required for monitoring PODs
 
  # you can reference values of other variables as {{.Values.variable.full.name}}
 
  podAnnotations: {}
 
    # prometheus.io/scrape: "true"
 
    # prometheus.io/port: "{{.Values.monitoring.port}}"
 
    # prometheus.io/path: "/metrics"
 
  podMonitor:
 
    # enables PodMonitor creation for the POD
 
    enabled: true
 
    # interval at which metrics should be scraped
 
    scrapeInterval: 30s
 
    # timeout after which the scrape is ended
 
    scrapeTimeout:
 
    # namespace of the PodMonitor, defaults to the namespace of the POD
 
    namespace:
 
    additionalLabels: {}
 
  alerts:
 
    # enables alert rules
 
    enabled: true
 
    # alert condition duration
 
    duration: 5m
 
    # namespace of the alert rules, defaults to the namespace of the POD
 
    namespace:
 
    additionalLabels: {}
 
 
# * Configuration for the LDS container
 
lds:
 
  # resource limits for container
 
  resources:
 
    # minimum resource requirements to start container
 
    requests:
 
      # minimal amount of memory required to start a container
 
      memory: "50Mi"
 
      # minimal CPU to reserve
 
      cpu: "50m"
 
    # resource limits for containers
 
    limits:
 
      # maximum amount of memory a container can use before being evicted
 
      # by the OOM Killer
 
      memory: "4Gi"
 
      # maximum amount of CPU resources that can be used and should be tuned to reflect
 
      # what the application can effectively use before needing to be horizontally scaled out
 
      cpu: "4000m"
 
  # securityContext:
 
  #  runAsUser: 500
 
  #  runAsGroup: 500
 
 
# * Configuration for the monitor sidecar container
 
monitorSidecar:
 
  # resource limits for container
 
  resources:
 
    # minimum resource requirements to start container
 
    requests:
 
      # minimal amount of memory required to start a container
 
      memory: "30Mi"
 
      # minimal CPU to reserve
 
      cpu: "2m"
 
    # resource limits for containers
 
    limits:
 
      # maximum amount of memory a container can use before being evicted
 
      # by the OOM Killer
 
      memory: "70Mi"
 
      # maximum amount of CPU resources that can be used and should be tuned to reflect
 
      # what the application can effectively use before needing to be horizontally scaled out
 
      cpu: "10m"
 
  # securityContext:
 
  #  runAsUser: 500
 
  #  runAsGroup: 500
 
 
# *  Configuration for the Configuration Server Proxy container
 
csproxy:
 
  # define domain for the configuration host
 
  params:
 
    cfgHost: "tenant-<tenant-uuid>.voice.<domain>"
 
  resources:
 
    # minimum resource requirements to start container
 
    requests:
 
      # minimal amount of memory required to start a container
 
      memory: "200Mi"
 
      # minimal CPU to reserve
 
      cpu: "50m"
 
    # resource limits for containers
 
    limits:
 
      # maximum amount of memory a container can use before being evicted
 
      # by the OOM Killer
 
      memory: "2Gi"
 
      # maximum amount of CPU resources that can be used and should be tuned to reflect
 
      # what the application can effectively use before needing to be horizontally scaled out
 
      cpu: "1000m"
 
  # securityContext:
 
  #  runAsUser: 500
 
  #  runAsGroup: 500</source>
 
*Update values in the <tt>values-override-lds.yaml</tt> file (OpenShift):
 
*:<source lang="bash">
 
# Default values for lds.
 
# This is a YAML-formatted file.
 
# Declare variables to be passed into your templates.
 
 
replicaCount: 2
 
 
# * Tenant info
 
# tenant identification, or empty for shared deployment
 
tenant:
 
  # Tenant UUID
 
  id: "<tenant-uuid>"
 
  # Tenant SID (like 0001)
 
  sid: "<tenant-sid>"
 
 
# * Common log configuration
 
log:
 
  # target directory where log will be stored, leave empty for default
 
  logDir: ""
 
  # path where volume will be mounted
 
  volumeMountPath: /data/log
 
  # log volume type: none | hostpath | pvc
 
  volumeType: pvc
 
  # log volume hostpath, used with volumeType "hostpath"
 
  volumeHostPath: /mnt/log
 
  # log PVC parameters, used with volumeType "pvc"
 
  pvc:
 
    name: pulse-lds-logs
 
    accessModes:
 
      - ReadWriteMany
 
    capacity: 10Gi
 
    class: <pv-storage-class-rw-many>
 
 
# * Container image common settings
 
image:
 
  tag: "<image-version>"
 
  pullPolicy: IfNotPresent
 
  registry: "<docker-registry>"
 
  imagePullSecrets: [name: "<docker-registry-secret-name>"]
 
 
## Service account settings
 
serviceAccount:
 
  # Specifies whether a service account should be created
 
  create: false
 
  # Annotations to add to the service account
 
  annotations: {}
 
  # The name of the service account to use.
 
  # If not set and create is true, a name is generated using the fullname template
 
  name: ""
 
 
## Add annotations to all pods
 
##
 
podAnnotations: {}
 
 
## Specifies the security context for all Pods in the service
 
##
 
podSecurityContext:
 
  runAsNonRoot: true
 
  runAsUser: 500
 
  runAsGroup: 500
 
  fsGroup: 0
 
 
## Add labels to all pods
 
##
 
podLabels: {}
 
 
## HPA Settings
 
## Not supported in this release!
 
hpa:
 
  enabled: false
 
 
## Priority Class
 
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
 
##
 
priorityClassName: ""
 
 
## Node labels for assignment.
 
## ref: https://kubernetes.io/docs/user-guide/node-selection/
 
##
 
nodeSelector: {}
 
 
## Tolerations for assignment.
 
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
 
##
 
tolerations: []
 
 
## Pod Disruption Budget Settings
 
podDisruptionBudget:
 
  enabled: false
 
 
## Affinity for assignment.
 
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
 
##
 
affinity: {}
 
 
# * Monitoring settings
 
monitoring:
 
  # enable the Prometheus metrics endpoint
 
  enabled: false
 
  # enable golden signals metrics (not supported for PE)
 
  # not needed since 100.0.000.0015
 
  goldenSignals:
 
    enabled: false
 
  # port number of the Prometheus metrics endpoint
 
  port: 9091
 
  # HTTP path to scrape for metrics
 
  path: /metrics
 
  # additional annotations required for monitoring PODs
 
  # you can reference values of other variables as {{.Values.variable.full.name}}
 
  podAnnotations: {}
 
  # additional annotations required for monitoring Service
 
  # you can reference values of other variables as {{.Values.variable.full.name}}
 
  # available since 100.0.000.0015
 
  serviceAnnotations: {}     
 
    # prometheus.io/scrape: "true"
 
    # prometheus.io/port: "{{.Values.monitoring.port}}"
 
    # prometheus.io/path: "/metrics"
 
  podMonitor:
 
    # enables PodMonitor creation for the POD
 
    enabled: true
 
    # interval at which metrics should be scraped
 
    scrapeInterval: 30s
 
    # timeout after which the scrape is ended
 
    scrapeTimeout:
 
    # namespace of the PodMonitor, defaults to the namespace of the POD
 
    namespace:
 
    additionalLabels: {}
 
  alerts:
 
    # enables alert rules
 
    enabled: true
 
    # alert condition duration
 
    duration: 5m
 
    # namespace of the alert rules, defaults to the namespace of the POD
 
    namespace:
 
    additionalLabels: {}
 
 
# * Configuration for the LDS container
 
lds:
 
  # resource limits for container
 
  resources:
 
    # minimum resource requirements to start container
 
    requests:
 
      # minimal amount of memory required to start a container
 
      memory: "50Mi"
 
      # minimal CPU to reserve
 
      cpu: "50m"
 
    # resource limits for containers
 
    limits:
 
      # maximum amount of memory a container can use before being evicted
 
      # by the OOM Killer
 
      memory: "4Gi"
 
      # maximum amount of CPU resources that can be used and should be tuned to reflect
 
      # what the application can effectively use before needing to be horizontally scaled out
 
      cpu: "4000m"
 
  # securityContext:
 
  #  runAsUser: 500
 
  #  runAsGroup: 500
 
 
# * Configuration for the monitor sidecar container
 
monitorSidecar:
 
  # resource limits for container
 
  resources:
 
    # minimum resource requirements to start container
 
    requests:
 
      # minimal amount of memory required to start a container
 
      memory: "30Mi"
 
      # minimal CPU to reserve
 
      cpu: "2m"
 
    # resource limits for containers
 
    limits:
 
      # maximum amount of memory a container can use before being evicted
 
      # by the OOM Killer
 
      memory: "70Mi"
 
      # maximum amount of CPU resources that can be used and should be tuned to reflect
 
      # what the application can effectively use before needing to be horizontally scaled out
 
      cpu: "10m"
 
  # securityContext:
 
  #  runAsUser: 500
 
  #  runAsGroup: 500
 
 
# *  Configuration for the Configuration Server Proxy container
 
csproxy:
 
  resources:
 
    # minimum resource requirements to start container
 
    requests:
 
      # minimal amount of memory required to start a container
 
      memory: "200Mi"
 
      # minimal CPU to reserve
 
      cpu: "50m"
 
    # resource limits for containers
 
    limits:
 
      # maximum amount of memory a container can use before being evicted
 
      # by the OOM Killer
 
      memory: "2Gi"
 
      # maximum amount of CPU resources that can be used and should be tuned to reflect
 
      # what the application can effectively use before needing to be horizontally scaled out
 
      cpu: "1000m"
 
  # securityContext:
 
  #  runAsUser: 500
 
  #  runAsGroup: 500
 
</source>
 
 
'''Update values in the <tt>values-override-lds-vq.yaml</tt> file:'''
 
<source lang="bash"># Default values for lds.
 
# This is a YAML-formatted file.
 
# Declare variables to be passed into your templates.
 
 
lds:
 
  params:
 
    cfgApp: "pulse-lds-vq-$((K8S_POD_INDEX % 2))"
 
 
log:
 
  pvc:
 
    name: pulse-lds-vq-logs
 
</source>
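The <tt>cfgApp</tt> value above relies on shell arithmetic expansion: each pod's index (<tt>K8S_POD_INDEX</tt>) is reduced modulo 2, so the pods alternate between two LDS application objects, <tt>pulse-lds-vq-0</tt> and <tt>pulse-lds-vq-1</tt>. A minimal sketch of how the expression expands, with a loop standing in for the per-pod index that the chart injects:

```shell
# Simulate how "pulse-lds-vq-$((K8S_POD_INDEX % 2))" expands for pod indices 0..3.
for K8S_POD_INDEX in 0 1 2 3; do
  echo "pulse-lds-vq-$((K8S_POD_INDEX % 2))"
done
# Prints: pulse-lds-vq-0, pulse-lds-vq-1, pulse-lds-vq-0, pulse-lds-vq-1
```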
 
 
'''Install the <tt>lds</tt> helm chart''':<br />To install the <tt>lds</tt> helm chart, run the following command:
 
<source lang="bash">
 
helm upgrade --install "pulse-lds-<tenant-sid>"    pulsehelmrepo/lds --wait --version=<chart-version> --namespace=pulse -f values-override-lds.yaml
 
helm upgrade --install "pulse-lds-vq-<tenant-sid>" pulsehelmrepo/lds --wait --version=<chart-version> --namespace=pulse -f values-override-lds.yaml -f values-override-lds-vq.yaml
 
</source>
 
If the installation is successful, the command returns exit code <tt>0</tt>.
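If you script the deployment, you can make the exit-code check explicit. The helper below is illustrative (<tt>run_step</tt> is not part of the charts); it runs any command and reports success or failure based on its exit code:

```shell
# Illustrative helper: run a command and report success or failure
# based on its exit code (0 means success).
run_step() {
  "$@"
  rc=$?
  if [ "$rc" -eq 0 ]; then
    echo "OK: $*"
  else
    echo "FAILED ($rc): $*" >&2
  fi
  return "$rc"
}

# Intended usage (not executed here):
#   run_step helm upgrade --install "pulse-lds-<tenant-sid>" pulsehelmrepo/lds \
#     --wait --version=<chart-version> --namespace=pulse -f values-override-lds.yaml
run_step true   # stand-in command that always succeeds; prints "OK: true"
```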
 
 
'''Validate the <tt>lds</tt> helm chart''':<br />To validate the <tt>lds</tt> helm chart, run the following command:
 
<source lang="bash">kubectl get pods -n=pulse -l "app.kubernetes.io/name=lds,app.kubernetes.io/instance=pulse-lds-<tenant-sid>"
 
</source>
 
Verify that the command reports all pulse-lds pods as Running, for example:
 
<source lang="bash">
 
NAME              READY  STATUS    RESTARTS  AGE
 
pulse-lds-100-0  3/3    Running  0          2d20h
 
pulse-lds-100-1  3/3    Running  0          2d20h </source>
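If you want to script this validation, the READY and STATUS columns can be parsed. The sketch below checks the sample output above; in practice you would substitute the real <tt>kubectl get pods</tt> invocation for the sample text:

```shell
# Check that every pod is Running and fully ready (e.g. "3/3").
sample='NAME              READY   STATUS    RESTARTS   AGE
pulse-lds-100-0   3/3     Running   0          2d20h
pulse-lds-100-1   3/3     Running   0          2d20h'

echo "$sample" | awk 'NR > 1 {
  split($2, r, "/")                      # r[1]=ready containers, r[2]=total
  if ($3 == "Running" && r[1] == r[2])
    print "ready: " $1
  else
    print "not ready: " $1
}'
# Prints: ready: pulse-lds-100-0
#         ready: pulse-lds-100-1
```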
 
 
===Install permissions helm chart===
 
'''Get the <tt>permissions</tt> helm chart'''
 
<source lang="bash">helm repo update
 
helm search repo <pulsehelmrepo>/permissions</source>
 
 
'''Prepare the override file:'''
 
 
*Update values in the <tt>values-override-permissions.yaml</tt> file (AKS):
 
*:<source lang="bash"># Default values for permissions.
 
# This is a YAML-formatted file.
 
# Declare variables to be passed into your templates.
 
 
# * Image configuration
 
image:
 
  tag: "<image-version>"
 
  pullPolicy: IfNotPresent
 
  registry: "<docker-registry>"
 
  imagePullSecrets: [name: "<docker-registry-secret-name>"]
 
 
# * Tenant info
 
# tenant identification, or empty for shared deployment
 
tenant:
 
  # Tenant UUID
 
  id: "<tenant-uuid>"
 
  # Tenant SID (like 0001)
 
  sid: "<tenant-sid>"
 
 
# common configuration.
 
config:
 
  dbName: "<db-name>"
 
  # set "true" when need @host added for username
 
  dbUserWithHost: true
 
  # set "true" for CSI secrets
 
  mountSecrets: false
 
  # Postgres config map name
 
  postgresConfig: "pulse-postgres-configmap"
 
  # Postgres secret name
 
  postgresSecret: "pulse-postgres-secret"
 
  # Postgres secret key for user
 
  postgresSecretUser: "META_DB_ADMIN"
 
  # Postgres secret key for password
 
  postgresSecretPassword: "META_DB_ADMINPWD"
 
  # Redis config map name
 
  redisConfig: "pulse-redis-configmap"
 
  # Redis secret name
 
  redisSecret: "pulse-redis-secret"
 
  # Redis secret key for access key
 
  redisSecretKey: "REDIS01_KEY"
 
 
 
# * Configuration for the Configuration Server Proxy container
 
csproxy:
 
  # define domain for the configuration host
 
  params:
 
    cfgHost: "tenant-<tenant-uuid>.voice.<domain>"
 
  # resource limits for container
 
  resources:
 
    # minimum resource requirements to start container
 
    requests:
 
      # minimal amount of memory required to start a container
 
      memory: "200Mi"
 
      # minimal CPU to reserve
 
      cpu: "50m"
 
    # resource limits for containers
 
    limits:
 
      # maximum amount of memory a container can use before being evicted
 
      # by the OOM Killer
 
      memory: "2Gi"
 
      # maximum amount of CPU resources that can be used and should be tuned to reflect
 
      # what the application can effectively use before needing to be horizontally scaled out
 
      cpu: "1000m"
 
  # securityContext: {}
 
 
# * Common log configuration
 
log:
 
  # target directory where log will be stored, leave empty for default
 
  logDir: ""
 
  # path where volume will be mounted
 
  volumeMountPath: /data/log
 
  # log volume type: none | hostpath | pvc
 
  volumeType: pvc
 
  # log volume hostpath, used with volumeType "hostpath"
 
  volumeHostPath: /mnt/log
 
  # log PVC parameters, used with volumeType "pvc"
 
  pvc:
 
    name: pulse-permissions-logs
 
    accessModes:
 
      - ReadWriteMany
 
    capacity: 10Gi
 
    class: <pv-storage-class-rw-many>
 
 
## Specifies the security context for all Pods in the service
 
##
 
podSecurityContext: {}
 
 
## Resource requests and limits
 
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
 
##
 
resources:
 
  limits:
 
    memory: "1Gi"
 
    cpu: "500m"
 
  requests:
 
    memory: "400Mi"
 
    cpu: "50m"
 
 
## HPA Settings
 
## Not supported in this release!
 
hpa:
 
  enabled: false
 
 
## Priority Class
 
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
 
##
 
priorityClassName: ""
 
 
## Node labels for assignment.
 
## ref: https://kubernetes.io/docs/user-guide/node-selection/
 
##
 
nodeSelector: {}
 
 
## Tolerations for assignment.
 
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
 
##
 
tolerations: []
 
 
## Pod Disruption Budget Settings
 
podDisruptionBudget:
 
  enabled: false
 
 
## Affinity for assignment.
 
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
 
##
 
affinity: {}
 
</source>
 
 
*Update values in the <tt>values-override-permissions.yaml</tt> file (GKE):
 
*:<source lang="bash"># Default values for permissions.
 
# This is a YAML-formatted file.
 
# Declare variables to be passed into your templates.
 
 
# * Image configuration
 
image:
 
  tag: "<image-version>"
 
  pullPolicy: IfNotPresent
 
  registry: "<docker-registry>"
 
  imagePullSecrets: [name: "<docker-registry-secret-name>"]
 
 
# * Tenant info
 
# tenant identification, or empty for shared deployment
 
tenant:
 
  # Tenant UUID
 
  id: "<tenant-uuid>"
 
  # Tenant SID (like 0001)
 
  sid: "<tenant-sid>"
 
 
# common configuration.
 
config:
 
  dbName: "<db-name>"
 
  # set "true" when need @host added for username
 
  dbUserWithHost: true
 
  # set "true" for CSI secrets
 
  mountSecrets: false
 
  # Postgres config map name
 
  postgresConfig: "pulse-postgres-configmap"
 
  # Postgres secret name
 
  postgresSecret: "pulse-postgres-secret"
 
  # Postgres secret key for user
 
  postgresSecretUser: "META_DB_ADMIN"
 
  # Postgres secret key for password
 
  postgresSecretPassword: "META_DB_ADMINPWD"
 
  # Redis config map name
 
  redisConfig: "pulse-redis-configmap"
 
  # Redis secret name
 
  redisSecret: "pulse-redis-secret"
 
  # Redis secret key for access key
 
  redisSecretKey: "REDIS01_KEY"
 
 
 
# * Configuration for the Configuration Server Proxy container
 
csproxy:
 
  # define domain for the configuration host
 
  params:
 
    cfgHost: "tenant-<tenant-uuid>.voice.<domain>"
 
  # resource limits for container
 
  resources:
 
    # minimum resource requirements to start container
 
    requests:
 
      # minimal amount of memory required to start a container
 
      memory: "200Mi"
 
      # minimal CPU to reserve
 
      cpu: "50m"
 
    # resource limits for containers
 
    limits:
 
      # maximum amount of memory a container can use before being evicted
 
      # by the OOM Killer
 
      memory: "2Gi"
 
      # maximum amount of CPU resources that can be used and should be tuned to reflect
 
      # what the application can effectively use before needing to be horizontally scaled out
 
      cpu: "1000m"
 
  # securityContext:
 
  #  runAsUser: 500
 
  #  runAsGroup: 500
 
 
# * Common log configuration
 
log:
 
  # target directory where log will be stored, leave empty for default
 
  logDir: ""
 
  # path where volume will be mounted
 
  volumeMountPath: /data/log
 
  # log volume type: none | hostpath | pvc
 
  volumeType: pvc
 
  # log volume hostpath, used with volumeType "hostpath"
 
  volumeHostPath: /mnt/log
 
  # log PVC parameters, used with volumeType "pvc"
 
  pvc:
 
    name: pulse-permissions-logs
 
    accessModes:
 
      - ReadWriteMany
 
    capacity: 10Gi
 
    class: <pv-storage-class-rw-many>
 
 
## Specifies the security context for all Pods in the service
 
##
 
podSecurityContext:
 
  fsGroup: null
 
  runAsUser: null
 
  runAsGroup: 0
 
  runAsNonRoot: true
 
 
## Resource requests and limits
 
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
 
##
 
resources:
 
  limits:
 
    memory: "1Gi"
 
    cpu: "500m"
 
  requests:
 
    memory: "400Mi"
 
    cpu: "50m"
 
 
## HPA Settings
 
## Not supported in this release!
 
hpa:
 
  enabled: false
 
 
## Priority Class
 
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
 
##
 
priorityClassName: ""
 
 
## Node labels for assignment.
 
## ref: https://kubernetes.io/docs/user-guide/node-selection/
 
##
 
nodeSelector: {}
 
 
## Tolerations for assignment.
 
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
 
##
 
tolerations: []
 
 
## Pod Disruption Budget Settings
 
podDisruptionBudget:
 
  enabled: false
 
 
## Affinity for assignment.
 
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
 
##
 
affinity: {}</source>
 
*Update values in the <tt>values-override-permissions.yaml</tt> file (OpenShift):
 
*:<source lang="bash">
 
# Default values for permissions.
 
# This is a YAML-formatted file.
 
# Declare variables to be passed into your templates.
 
 
# * Image configuration
 
image:
 
  tag: "<image-version>"
 
  pullPolicy: IfNotPresent
 
  registry: "<docker-registry>"
 
  imagePullSecrets: [name: "<docker-registry-secret-name>"]
 
 
# * Tenant info
 
# tenant identification, or empty for shared deployment
 
tenant:
 
  # Tenant UUID
 
  id: "<tenant-uuid>"
 
  # Tenant SID (like 0001)
 
  sid: "<tenant-sid>"
 
 
# common configuration.
 
config:
 
  dbName: "<db-name>"
 
  # set "true" when need @host added for username
 
  dbUserWithHost: true
 
  # set "true" for CSI secrets
 
  mountSecrets: false
 
  # Postgres config map name
 
  postgresConfig: "pulse-postgres-configmap"
 
  # Postgres secret name
 
  postgresSecret: "pulse-postgres-secret"
 
  # Postgres secret key for user
 
  postgresSecretUser: "META_DB_ADMIN"
 
  # Postgres secret key for password
 
  postgresSecretPassword: "META_DB_ADMINPWD"
 
  # Redis config map name
 
  redisConfig: "pulse-redis-configmap"
 
  # Redis secret name
 
  redisSecret: "pulse-redis-secret"
 
  # Redis secret key for access key
 
  redisSecretKey: "REDIS01_KEY"
 
 
 
# * Configuration for the Configuration Server Proxy container
 
csproxy:
 
 
  # resource limits for container
  resources:
    # minimum resource requirements to start container
    requests:
      # minimal amount of memory required to start a container
      memory: "200Mi"
      # minimal CPU to reserve
      cpu: "50m"
    # resource limits for containers
    limits:
      # maximum amount of memory a container can use before being evicted
      # by the OOM Killer
      memory: "2Gi"
      # maximum amount of CPU resources that can be used and should be tuned to reflect
      # what the application can effectively use before needing to be horizontally scaled out
      cpu: "1000m"
  # securityContext:
  #  runAsUser: 500
  #  runAsGroup: 500

## Affinity for assignment.
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}</source>
 
 
'''Install the permissions helm chart:'''
To install the permissions helm chart, run the following command:
<source lang="bash">helm upgrade --install pulse-permissions pulsehelmrepo/permissions --wait --version=<chart-version> --namespace=pulse -f values-override-permissions.yaml
</source>
If the installation is successful, the command returns exit code <tt>0</tt>.

To inspect the manifests that the chart renders with your override file, you can also run:
<source lang="bash">helm template --version=<chart-version> --namespace=pulse --debug --output-dir helm-template pulse-permissions pulsehelmrepo/permissions -f values-override-permissions.yaml
</source>
|Status=No
 
}}{{Section
 
|sectionHeading=Configure security
 
|alignment=Vertical
 
|structuredtext====Arbitrary UIDs===
 
If your OpenShift deployment uses arbitrary UIDs, you must override the securityContext settings. By default, the user and group IDs are set to 500:500:500. For more information about how to update the '''podSecurityContext''' section in the YAML file for each chart, see {{Link-AnywhereElse|product=PrivateEdition|version=Current|manual=PEGuide|topic=ConfigSecurity}}.
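For example, the GKE override files earlier on this page adjust the pod security context with the following shape; treat it as a sketch and confirm the exact values for your platform in the linked topic:

```yaml
podSecurityContext:
  fsGroup: null
  runAsUser: null
  runAsGroup: 0
  runAsNonRoot: true
```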
 
 
|Status=No
}}
|PEPageType=55cef4ff-9306-4313-8fd8-377282a38478
}}

Latest revision as of 16:01, March 29, 2023

This topic is part of the manual Genesys Pulse Private Edition Guide for version Current of Reporting.

Prerequisites

Before performing the steps described on this page, complete the Before you begin instructions, and ensure that you have the following information:

  • Versions:
    • <image-version> = 100.0.000.0015
    • <chart-versions> = 100.0.000+0015
  • K8S namespace: pulse
  • Project name: pulse
  • Postgres credentials:
    • <db-host>
    • <db-port>
    • <db-name>
    • <db-user>
    • <db-user-password>
    • <db-ssl-mode>
  • Docker credentials:
    • <docker-registry>
    • <docker-registry-secret-name>
  • Redis credentials:
    • <redis-host>
    • <redis-port>
    • <redis-password>
    • <redis-enable-ssl>
  • Tenant service variables:
    • <tenant-uuid>
    • <tenant-sid>
    • <tenant-name>
    • <tenant-dcu>
  • GAuth/GWS service variables:
    • <gauth-url-external>
    • <gauth-url-internal>
    • <gauth-client-id>
    • <gauth-client-secret>
    • <gws-url-external>
    • <gws-url-internal>
  • Storage class:
    • <pv-storage-class-rw-many>
    • <pv-storage-class-rw-once>
  • Pulse:
    • <pulse-host>
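The inputs above are referenced repeatedly by the helm and kubectl commands on this page, so it can help to collect them once as shell variables. This is an optional sketch; every value shown is a hypothetical example, not a product default:

```shell
# Collect the deployment inputs as environment variables.
# Replace each example value with the value for your environment.
export IMAGE_VERSION="100.0.000.0015"
export CHART_VERSION="100.0.000+0015"
export NAMESPACE="pulse"
export TENANT_SID="100"
export TENANT_UUID="9350e2fc-a1dd-4c65-8d40-1f75a2e080dd"
export DB_HOST="pulse-db.example.com"
export REDIS_HOST="pulse-redis.example.com"

# Later commands can then reference the variables instead of literals,
# for example (printed here rather than executed):
echo "helm upgrade --install pulse-init-tenant-${TENANT_SID} --namespace=${NAMESPACE}"
```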

Single namespace

Single namespace deployments use software-defined networking (SDN) in multitenant mode, where namespaces are network isolated. If you plan to deploy Pulse into a single namespace, ensure that your environment meets the following input requirements:

  • Back-end services deployed into the single namespace must include the string pulse:
    • <db-host>
    • <db-name>
    • <redis-host>
  • The hostname used for Ingress must be unique, and must include the string pulse:
    • <pulse-host>
  • Internal service-to-service traffic must use the service endpoints, rather than the Ingress Controller:
    • <gauth-url-internal>
    • <gws-url-internal>
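These naming rules can be verified up front with a small pre-flight check. The following is a hypothetical sketch; the helper name and the example hostnames are illustrations, not part of the product:

```shell
# Pre-flight check: single-namespace inputs (db-host, db-name, redis-host,
# pulse-host) must contain the string "pulse".
check_contains_pulse() {
  case "$1" in
    *pulse*) echo "ok: $1" ;;
    *) echo "FAIL: $1 must include the string 'pulse'" >&2; return 1 ;;
  esac
}

check_contains_pulse "pulse-db.example.com"   # <db-host>
check_contains_pulse "pulse.example.com"      # <pulse-host>
```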

Tenant provisioning

Install init tenant chart

Get the init-tenant helm chart:

helm repo update
helm search repo <pulsehelmrepo>/init-tenant

Prepare the override file:

  • Update the values-override-init-tenant.yaml file (AKS):
    # Default values for init-tenant.
    # This is a YAML-formatted file.
    # Declare variables to be passed into your templates.
     
    # * Images
    # Replace for your values: registry and secret
    image:
      tag: "<image-version>"
      pullPolicy: IfNotPresent
      registry: "<docker-registry>"
      imagePullSecrets: [name: "<docker-registry-secret-name>"]
     
    configurator:
      enabled: true
      # set service domain used to access voice service
      # example for GKE VPC case: voice.svc.gke1-uswest1.gcpe002.gencpe.com
      voiceDomain: "voice.svc.<domain>"
      # set service domain used to access ixn service
      # example for GKE VPC case: ixn.svc.gke1-uswest1.gcpe002.gencpe.com
      ixnDomain: "ixn.svc.<domain>"
      # set service domain used to access pulse service
      # example for GKE VPC case: pulse.svc.gke1-uswest1.gcpe002.gencpe.com
      pulseDomain: "pulse.svc.<domain>"
      # set configuration server user, used when creating secrets
      cfgUser: "default"
      # set configuration server password, used when creating secrets
      cfgPassword: "password"
      # configuration server host
      cfgHost: "tenant-<tenant-uuid>.voice.svc.<domain>"
     
    log:
      # target directory where log will be stored, leave empty for default
      logDir: ""
      # path where volume will be mounted
      volumeMountPath: /data/log
      # log volume type: none | hostpath | pvc
      volumeType: none
      # log volume hostpath, used with volumeType "hostpath"
      volumeHostPath: /mnt/log
      # log PVC parameters, used with volumeType "pvc"
      pvc:
        name: pulse-init-tenant-logs
        accessModes:
          - ReadWriteMany
        capacity: 10Gi
        class: <pv-storage-class-rw-many>
     
    # * Tenant info
    # Replace for your values
    tenant:
      # Tenant UUID
      id: <tenant-uuid>
      # Tenant SID (like 0001)
      sid: <tenant-sid>
     
    # common configuration.
    config:
      dbName: "<db-name>"
      # set "true" when need @host added for username
      dbUserWithHost: true
      # set "true" for CSI secrets
      mountSecrets: false
      # Postgres config map name
      postgresConfig: "pulse-postgres-configmap"
      # Postgres secret name
      postgresSecret: "pulse-postgres-secret"
      # Postgres secret key for user
      postgresSecretUser: "META_DB_ADMIN"
      # Postgres secret key for password
      postgresSecretPassword: "META_DB_ADMINPWD"
     
    ## Service account settings
    serviceAccount:
      # Specifies whether a service account should be created
      create: false
      # Annotations to add to the service account
      annotations: {}
      # The name of the service account to use.
      # If not set and create is true, a name is generated using the fullname template
      name: ""
     
    ## Add annotations to all pods
    ##
    podAnnotations: {}
     
    ## Specifies the security context for all Pods in the service
    ##
    podSecurityContext: {}
     
    ## Resource requests and limits
    ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
    ##
    resources:
      limits:
        memory: 256Mi
        cpu: 200m
      requests:
        memory: 128Mi
        cpu: 100m
     
    ## Priority Class
    ## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
    ##
    priorityClassName: ""
     
    ## Node labels for assignment.
    ## ref: https://kubernetes.io/docs/user-guide/node-selection/
    ##
    nodeSelector: {}
     
    ## Tolerations for assignment.
    ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
    ##
    tolerations: []
     
    # * Templates
    templates:
      - Agent_Group_Status.gpb
      - Agent_KPIs.gpb
      - Agent_Login.gpb
      - Alert_Widget.gpb
      - Callback_Activity.gpb
      - Campaign_Activity.gpb
      - Campaign_Callback_Status.gpb
      - Campaign_Group_Activity.gpb
      - Campaign_Group_Status.gpb
      - Chat_Agent_Activity.gpb
      - Chat_Queue_Activity.gpb
      - Chat_Service_Level_Performance.gpb
      - Chat_Waiting_Statistics.gpb
      - Email_Agent_Activity.gpb
      - Email_Queue_Activity.gpb
      - Facebook_Media_Activity.gpb
      - IFRAME.gpb
      - IWD_Agent_Activity.gpb
      - IWD_Queue_Activity.gpb
      - Queue_KPIs.gpb
      - Queue_Overflow_Reason.gpb
      - Static_Text.gpb
      - Twitter_Media_Activity.gpb
      - eServices_Agent_Activity.gpb
      - eServices_Queue_KPIs.gpb
  • Update the values-override-init-tenant.yaml file (GKE):
    Important
    Enable the configurator only for configurations in GKE with VPC-scoped DNS.
    # Default values for init-tenant.
    # This is a YAML-formatted file.
    # Declare variables to be passed into your templates.
     
    # * Images
    # Replace for your values: registry and secret
    image:
      tag: "<image-version>"
      pullPolicy: IfNotPresent
      registry: "<docker-registry>"
      imagePullSecrets: [name: "<docker-registry-secret-name>"]
     
    configurator:
      enabled: true
      # set service domain used to access voice service
      # example for GKE VPC case: voice.svc.gke1-uswest1.gcpe002.gencpe.com
      voiceDomain: "voice.svc.<domain>"
      # set service domain used to access ixn service
      # example for GKE VPC case: ixn.svc.gke1-uswest1.gcpe002.gencpe.com
      ixnDomain: "ixn.svc.<domain>"
      # set service domain used to access pulse service
      # example for GKE VPC case: pulse.svc.gke1-uswest1.gcpe002.gencpe.com
      pulseDomain: "pulse.svc.<domain>"
      # set configuration server user, used when creating secrets
      cfgUser: "default"
      # set configuration server password, used when creating secrets
      cfgPassword: "password"
      # configuration server host
      cfgHost: "tenant-<tenant-uuid>.voice.svc.<domain>"
     
    log:
      # target directory where log will be stored, leave empty for default
      logDir: ""
      # path where volume will be mounted
      volumeMountPath: /data/log
      # log volume type: none | hostpath | pvc
      volumeType: none
      # log volume hostpath, used with volumeType "hostpath"
      volumeHostPath: /mnt/log
      # log PVC parameters, used with volumeType "pvc"
      pvc:
        name: pulse-init-tenant-logs
        accessModes:
          - ReadWriteMany
        capacity: 10Gi
        class: nfs-client
     
    # * Tenant info
    # Replace for your values
    tenant:
      # Tenant UUID
      id: <tenant-uuid>
      # Tenant SID (like 0001)
      sid: <tenant-sid>
     
    # common configuration.
    config:
      dbName: "<db-name>"
      # set "true" when need @host added for username
      dbUserWithHost: true
      # set "true" for CSI secrets
      mountSecrets: false
      # Postgres config map name
      postgresConfig: "pulse-postgres-configmap"
      # Postgres secret name
      postgresSecret: "pulse-postgres-secret"
      # Postgres secret key for user
      postgresSecretUser: "META_DB_ADMIN"
      # Postgres secret key for password
      postgresSecretPassword: "META_DB_ADMINPWD"
     
    ## Service account settings
    serviceAccount:
      # Specifies whether a service account should be created
      create: false
      # Annotations to add to the service account
      annotations: {}
      # The name of the service account to use.
      # If not set and create is true, a name is generated using the fullname template
      name: ""
     
    ## Add annotations to all pods
    ##
    podAnnotations: {}
     
    ## Specifies the security context for all Pods in the service
    ##
    podSecurityContext:
       fsGroup: null
       runAsUser: null
       runAsGroup: 0
       runAsNonRoot: true
     
    ## Resource requests and limits
    ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
    ##
    resources:
      limits:
        memory: 256Mi
        cpu: 200m
      requests:
        memory: 128Mi
        cpu: 100m
     
    ## Priority Class
    ## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
    ##
    priorityClassName: ""
     
    ## Node labels for assignment.
    ## ref: https://kubernetes.io/docs/user-guide/node-selection/
    ##
    nodeSelector: {}
     
    ## Tolerations for assignment.
    ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
    ##
    tolerations: []
     
    # * Templates
    templates:
      - Agent_Group_Status.gpb
      - Agent_KPIs.gpb
      - Agent_Login.gpb
      - Alert_Widget.gpb
      - Callback_Activity.gpb
      - Campaign_Activity.gpb
      - Campaign_Callback_Status.gpb
      - Campaign_Group_Activity.gpb
      - Campaign_Group_Status.gpb
      - Chat_Agent_Activity.gpb
      - Chat_Queue_Activity.gpb
      - Chat_Service_Level_Performance.gpb
      - Chat_Waiting_Statistics.gpb
      - Email_Agent_Activity.gpb
      - Email_Queue_Activity.gpb
      - Facebook_Media_Activity.gpb
      - IFRAME.gpb
      - IWD_Agent_Activity.gpb
      - IWD_Queue_Activity.gpb
      - Queue_KPIs.gpb
      - Queue_Overflow_Reason.gpb
      - Static_Text.gpb
      - Twitter_Media_Activity.gpb
      - eServices_Agent_Activity.gpb
      - eServices_Queue_KPIs.gpb
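Rather than editing the override file by hand, the angle-bracket placeholders can be substituted with sed. A minimal sketch, using a tiny stand-in template and hypothetical file paths; in practice, run the same substitutions over your full values-override-init-tenant.yaml:

```shell
# Write a small stand-in template with two of the <...> placeholders.
printf 'image:\n  tag: "<image-version>"\n  registry: "<docker-registry>"\n' \
  > /tmp/values-override-init-tenant.tpl

# Example values -- replace with your own.
IMAGE_VERSION="100.0.000.0015"
DOCKER_REGISTRY="registry.example.com"

# Substitute each placeholder and write the final override file.
sed -e "s|<image-version>|${IMAGE_VERSION}|g" \
    -e "s|<docker-registry>|${DOCKER_REGISTRY}|g" \
    /tmp/values-override-init-tenant.tpl > /tmp/values-override-init-tenant.yaml

cat /tmp/values-override-init-tenant.yaml
```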

Install the init-tenant helm chart:
To install the init-tenant helm chart, run the following command:

helm upgrade --install "pulse-init-tenant-<tenant-sid>" pulsehelmrepo/init-tenant --wait --wait-for-jobs --version="<chart-version>" --namespace=pulse -f values-override-init-tenant.yaml

If installation is successful, the exit code 0 appears.

Validate the init-tenant helm chart:
To validate the init-tenant helm chart, run the following command:

kubectl get pods -n="pulse" -l "app.kubernetes.io/name=init-tenant,app.kubernetes.io/instance=pulse-init-tenant-<tenant-sid>"

If the deployment was successful, the pulse-init-tenant job is listed as Completed. For example:

NAME                                     READY   STATUS      RESTARTS   AGE
pulse-init-tenant-100-job-qszgl          0/1     Completed   0          2d20h
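The status check above can also be scripted. The sketch below parses a sample line of kubectl output; in practice, pipe the real `kubectl get pods` output into the same awk filter instead of the hard-coded sample:

```shell
# Sample output line from "kubectl get pods" (stand-in for the real command).
sample='pulse-init-tenant-100-job-qszgl   0/1   Completed   0   2d20h'

# The third column is the pod STATUS; the job succeeded if it is "Completed".
status=$(echo "$sample" | awk '{print $3}')
if [ "$status" = "Completed" ]; then
  echo "init-tenant job completed"
fi
```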

Install dcu helm chart

Get the dcu helm chart:

helm repo update
helm search repo <pulsehelmrepo>/dcu

Prepare the override file:

  • Update the values-override-dcu.yaml file (AKS):
    # Default values for dcu.
    # This is a YAML-formatted file.
    # Declare variables to be passed into your templates.
     
    replicaCount: "<tenant-dcu>"
     
    # * Tenant info
    # tenant identification, or empty for shared deployment
    tenant:
      # Tenant UUID
      id: "<tenant-uuid>"
      # Tenant SID (like 0001)
      sid: "<tenant-sid>"
     
    # * Common log configuration
    log:
      # target directory where log will be stored, leave empty for default
      logDir: ""
      # path where volume will be mounted
      volumeMountPath: /data/log
      # log volume type: none | hostpath | pvc
      volumeType: pvc
      # log volume hostpath, used with volumeType "hostpath"
      volumeHostPath: /mnt/log
      # log PVC parameters, used with volumeType "pvc"
      pvc:
        name: pulse-dcu-logs
        accessModes:
          - ReadWriteMany
        capacity: 10Gi
        class: <pv-storage-class-rw-many>
     
    # * Config info
    # Set your values.
    config:
      dbName: "<db-name>"
      # set "true" when need @host added for username
      dbUserWithHost: true
      mountSecrets: false
      postgresConfig: "pulse-postgres-configmap"
      # Postgres secret name
      postgresSecret: "pulse-postgres-secret"
      # Postgres secret key for user
      postgresSecretUser: "META_DB_ADMIN"
      # Postgres secret key for password
      postgresSecretPassword: "META_DB_ADMINPWD"
      redisConfig: "pulse-redis-configmap"
      # Redis secret name
      redisSecret: "pulse-redis-secret"
      # Redis secret key for access key
      redisSecretKey: "REDIS01_KEY"
     
    # * Image
    # container image common settings
    image:
      tag: "<image-version>"
      pullPolicy: IfNotPresent
      registry: "<docker-registry>"
      imagePullSecrets: [name: "<docker-registry-secret-name>"]
     
    ## Service account settings
    serviceAccount:
      # Specifies whether a service account should be created
      create: false
      # Annotations to add to the service account
      annotations: {}
      # The name of the service account to use.
      # If not set and create is true, a name is generated using the fullname template
      name: ""
     
    ## Add annotations to all pods
    ##
    podAnnotations: {}
     
    ## Specifies the security context for all Pods in the service
    ##
    podSecurityContext: {}
     
    ## Add labels to all pods
    ##
    podLabels: {}
     
    ## HPA Settings
    ## Not supported in this release!
    hpa:
      enabled: false
     
    ## Priority Class
    ## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
    ##
    priorityClassName: ""
     
    ## Node labels for assignment.
    ## ref: https://kubernetes.io/docs/user-guide/node-selection/
    ##
    nodeSelector: {}
     
    ## Tolerations for assignment.
    ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
    ##
    tolerations: []
     
    ## Pod Disruption Budget Settings
    podDisruptionBudget:
      enabled: false
     
    ## Affinity for assignment.
    ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
    ##
    affinity: {}
     
    # * Monitoring settings
    monitoring:
      # enable the Prometheus metrics endpoint
      enabled: false
      # enable golden signals metrics (not supported for PE)
      goldenSignals:
        enabled: false
      # port number of the Prometheus metrics endpoint
      port: 9091
      # HTTP path to scrape for metrics
      path: /metrics
      # additional annotations required for monitoring PODs
      # you can reference values of other variables as {{.Values.variable.full.name}}
      podAnnotations: {}
        # prometheus.io/scrape: "true"
        # prometheus.io/port: "{{.Values.monitoring.port}}"
        # prometheus.io/path: "/metrics"
      podMonitor:
        # enables PodMonitor creation for the POD
        enabled: true
        # interval at which metrics should be scraped
        scrapeInterval: 30s
        # timeout after which the scrape is ended
        scrapeTimeout:
        # namespace of the PodMonitor, defaults to the namespace of the POD
        namespace:
        additionalLabels: {}
      alerts:
        # enables alert rules
        enabled: true
        # alert condition duration
        duration: 5m
        # namespace of the alert rules, defaults to the namespace of the POD
        namespace:
        additionalLabels: {}
       
     
    ##########################################################################
     
    # * Configuration for the Collector container
    collector:
      # resource limits for container
      resources:
        # minimum resource requirements to start container
        requests:
          # minimal amount of memory required to start a container
          memory: "300Mi"
          # minimal CPU to reserve
          cpu: "200m"
        # resource limits for containers
        limits:
          # maximum amount of memory a container can use before being evicted
          # by the OOM Killer
          memory: "4Gi"
          # maximum amount of CPU resources that can be used and should be tuned to reflect
          # what the application can effectively use before needing to be horizontally scaled out
          cpu: "8000m"
      # securityContext: {}
     
    # * Configuration for the StatServer container
    statserver:
      # resource limits for container
      resources:
        # minimum resource requirements to start container
        requests:
          # minimal amount of memory required to start a container
          memory: "300Mi"
          # minimal CPU to reserve
          cpu: "100m"
        # resource limits for containers
        limits:
          # maximum amount of memory a container can use before being evicted
          # by the OOM Killer
          memory: "4Gi"
          # maximum amount of CPU resources that can be used and should be tuned to reflect
          # what the application can effectively use before needing to be horizontally scaled out
          cpu: "4000m"
      # securityContext: {}
     
    # * Configuration for the monitor sidecar container
    monitorSidecar:
      # resource limits for container
      resources:
        # disabled: true
        # minimum resource requirements to start container
        requests:
          # minimal amount of memory required to start a container
          memory: "30Mi"
          # minimal CPU to reserve
          cpu: "2m"
        # resource limits for containers
        limits:
          # maximum amount of memory a container can use before being evicted
          # by the OOM Killer
          memory: "70Mi"
          # maximum amount of CPU resources that can be used and should be tuned to reflect
          # what the application can effectively use before needing to be horizontally scaled out
          cpu: "10m"
      # securityContext: {}
     
    ##########################################################################
     
    # * Configuration for the Configuration Server Proxy container
    csproxy:
      # define domain for the configuration host
      params:
        cfgHost: "tenant-<tenant-uuid>.voice.<domain>"
      # resource limits for container
      resources:
        # minimum resource requirements to start container
        requests:
          # minimal amount of memory required to start a container
          memory: "200Mi"
          # minimal CPU to reserve
          cpu: "50m"
        # resource limits for containers
        limits:
          # maximum amount of memory a container can use before being evicted
          # by the OOM Killer
          memory: "2Gi"
          # maximum amount of CPU resources that can be used and should be tuned to reflect
          # what the application can effectively use before needing to be horizontally scaled out
          cpu: "1000m"
      # securityContext: {}
     
    # volumeClaims contains persistent volume claims for services
    # All available storage classes can be found here:
    # https://github.com/genesysengage/tfm-azure-core-aks/blob/master/k8s-module/storage.tf
    volumeClaims:
      # statserverBackup is storage for statserver backup data
      statserverBackup:
        name: statserver-backup
        accessModes:
          - ReadWriteOnce
        # capacity is storage capacity
        capacity: "1Gi"
        # class is storage class. Must be set explicitly.
        class: <pv-storage-class-rw-once>
  • Update the values-override-dcu.yaml file (GKE):
    # Default values for dcu.
    # This is a YAML-formatted file.
    # Declare variables to be passed into your templates.
     
    replicaCount: "<tenant-dcu>"
     
    # * Tenant info
    # tenant identification, or empty for shared deployment
    tenant:
      # Tenant UUID
      id: "<tenant-uuid>"
      # Tenant SID (like 0001)
      sid: "<tenant-sid>"
     
    # * Common log configuration
    log:
      # target directory where log will be stored, leave empty for default
      logDir: ""
      # path where volume will be mounted
      volumeMountPath: /data/log
      # log volume type: none | hostpath | pvc
      volumeType: pvc
      # log volume hostpath, used with volumeType "hostpath"
      volumeHostPath: /mnt/log
      # log PVC parameters, used with volumeType "pvc"
      pvc:
        name: pulse-dcu-logs
        accessModes:
          - ReadWriteMany
        capacity: 10Gi
        class: <pv-storage-class-rw-many>
     
    # * Config info
    # Set your values.
    config:
      dbName: "<db-name>"
      # set "true" when need @host added for username
      dbUserWithHost: true
      mountSecrets: false
      postgresConfig: "pulse-postgres-configmap"
      # Postgres secret name
      postgresSecret: "pulse-postgres-secret"
      # Postgres secret key for user
      postgresSecretUser: "META_DB_ADMIN"
      # Postgres secret key for password
      postgresSecretPassword: "META_DB_ADMINPWD"
      redisConfig: "pulse-redis-configmap"
      # Redis secret name
      redisSecret: "pulse-redis-secret"
      # Redis secret key for access key
      redisSecretKey: "REDIS01_KEY"
     
    # * Image
    # container image common settings
    image:
      tag: "<image-version>"
      pullPolicy: IfNotPresent
      registry: "<docker-registry>"
      imagePullSecrets: [name: "<docker-registry-secret-name>"]
     
    ## Service account settings
    serviceAccount:
      # Specifies whether a service account should be created
      create: false
      # Annotations to add to the service account
      annotations: {}
      # The name of the service account to use.
      # If not set and create is true, a name is generated using the fullname template
      name: ""
     
    ## Add annotations to all pods
    ##
    podAnnotations: {}
     
    ## Specifies the security context for all Pods in the service
    ##
    podSecurityContext:
      runAsNonRoot: true
      runAsUser: 500
      runAsGroup: 500
      fsGroup: 0
     
    ## Add labels to all pods
    ##
    podLabels: {}
     
    ## HPA Settings
    ## Not supported in this release!
    hpa:
      enabled: false
     
    ## Priority Class
    ## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
    ##
    priorityClassName: ""
     
    ## Node labels for assignment.
    ## ref: https://kubernetes.io/docs/user-guide/node-selection/
    ##
    nodeSelector: {}
     
    ## Tolerations for assignment.
    ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
    ##
    tolerations: []
     
    ## Pod Disruption Budget Settings
    podDisruptionBudget:
      enabled: false
     
    ## Affinity for assignment.
    ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
    ##
    affinity: {}
     
    # * Monitoring settings
    monitoring:
      # enable the Prometheus metrics endpoint
      enabled: false
      # enable golden signals metrics (not supported for PE)
      goldenSignals:
        enabled: false
      # port number of the Prometheus metrics endpoint
      port: 9091
      # HTTP path to scrape for metrics
      path: /metrics
      # additional annotations required for monitoring PODs
      # you can reference values of other variables as {{.Values.variable.full.name}}
      podAnnotations: {}
        # prometheus.io/scrape: "true"
        # prometheus.io/port: "{{.Values.monitoring.port}}"
        # prometheus.io/path: "/metrics"
      podMonitor:
        # enables PodMonitor creation for the POD
        enabled: true
        # interval at which metrics should be scraped
        scrapeInterval: 30s
        # timeout after which the scrape is ended
        scrapeTimeout:
        # namespace of the PodMonitor, defaults to the namespace of the POD
        namespace:
        additionalLabels: {}
      alerts:
        # enables alert rules
        enabled: true
        # alert condition duration
        duration: 5m
        # namespace of the alert rules, defaults to the namespace of the POD
        namespace:
        additionalLabels: {}
       
     
    ##########################################################################
     
    # * Configuration for the Collector container
    collector:
      # resource limits for container
      resources:
        # minimum resource requirements to start container
        requests:
          # minimal amount of memory required to start a container
          memory: "300Mi"
          # minimal CPU to reserve
          cpu: "200m"
        # resource limits for containers
        limits:
          # maximum amount of memory a container can use before being evicted
          # by the OOM Killer
          memory: "4Gi"
          # maximum amount of CPU resources that can be used and should be tuned to reflect
          # what the application can effectively use before needing to be horizontally scaled out
          cpu: "8000m"
      # securityContext:
      #   runAsUser: 500
      #   runAsGroup: 500
     
    # * Configuration for the StatServer container
    statserver:
      # resource limits for container
      resources:
        # minimum resource requirements to start container
        requests:
          # minimal amount of memory required to start a container
          memory: "300Mi"
          # minimal CPU to reserve
          cpu: "100m"
        # resource limits for containers
        limits:
          # maximum amount of memory a container can use before being evicted
          # by the OOM Killer
          memory: "4Gi"
          # maximum amount of CPU resources that can be used and should be tuned to reflect
          # what the application can effectively use before needing to be horizontally scaled out
          cpu: "4000m"
      # securityContext:
      #   runAsUser: 500
      #   runAsGroup: 500
     
    # * Configuration for the monitor sidecar container
    monitorSidecar:
      # resource limits for container
      resources:
        # disabled: true
        # minimum resource requirements to start container
        requests:
          # minimal amount of memory required to start a container
          memory: "30Mi"
          # minimal CPU to reserve
          cpu: "2m"
        # resource limits for containers
        limits:
          # maximum amount of memory a container can use before being evicted
          # by the OOM Killer
          memory: "70Mi"
          # maximum amount of CPU resources that can be used and should be tuned to reflect
          # what the application can effectively use before needing to be horizontally scaled out
          cpu: "10m"
      # securityContext:
      #   runAsUser: 500
      #   runAsGroup: 500
     
    ##########################################################################
     
    # * Configuration for the Configuration Server Proxy container
    csproxy:
      # define domain for the configuration host
      params:
        cfgHost: "tenant-<tenant-uuid>.voice.<domain>"
      # resource limits for container
      resources:
        # minimum resource requirements to start container
        requests:
          # minimal amount of memory required to start a container
          memory: "200Mi"
          # minimal CPU to reserve
          cpu: "50m"
        # resource limits for containers
        limits:
          # maximum amount of memory a container can use before being evicted
          # by the OOM Killer
          memory: "2Gi"
          # maximum amount of CPU resources that can be used and should be tuned to reflect
          # what the application can effectively use before needing to be horizontally scaled out
          cpu: "1000m"
      # securityContext:
      #   runAsUser: 500
      #   runAsGroup: 500
     
    # volumeClaims contains persistent volume claims for services
    # All available storage classes can be found here:
    # https://github.com/genesysengage/tfm-azure-core-aks/blob/master/k8s-module/storage.tf
    volumeClaims:
      # statserverBackup is storage for statserver backup data
      statserverBackup:
        name: statserver-backup
        accessModes:
          - ReadWriteOnce
        # capacity is storage capacity
        capacity: "1Gi"
        # class is storage class. Must be set explicitly.
        class: <pv-storage-class-rw-once>

Install the dcu helm chart
To install the dcu helm chart, run the following command:

helm upgrade --install "pulse-dcu-<tenant-sid>" pulsehelmrepo/dcu --wait --reuse-values --version=<chart-version> --namespace=pulse -f values-override-dcu.yaml

Validate the dcu helm chart
To validate the dcu helm chart, run the following command:

kubectl get pods -n=pulse -l "app.kubernetes.io/name=dcu,app.kubernetes.io/instance=pulse-dcu-<tenant-sid>"

Check the output to ensure that all pulse-dcu pods are running, for example:

NAME              READY   STATUS    RESTARTS   AGE
pulse-dcu-100-0   3/3     Running   0          5m23s
pulse-dcu-100-1   3/3     Running   0          4m47s
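This readiness check can likewise be scripted. The sketch below runs an awk filter over sample output; in practice, pipe the real `kubectl get pods` output in. The sample values are taken from the example above:

```shell
# Sample "kubectl get pods" output (stand-in for the real command).
sample='pulse-dcu-100-0 3/3 Running 0 5m23s
pulse-dcu-100-1 3/3 Running 0 4m47s'

# Fail if any pod is not fully ready (READY != 3/3) or not Running.
if echo "$sample" | awk '$2 != "3/3" || $3 != "Running" { bad = 1 } END { exit bad }'; then
  echo "all dcu pods are ready"
fi
```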

Install lds helm chart

Get the lds helm chart:

helm repo update
helm search repo <pulsehelmrepo>/lds

Prepare the override file:

  • Update values in the values-override-lds.yaml file (AKS):
    # Default values for lds.
    # This is a YAML-formatted file.
    # Declare variables to be passed into your templates.
     
    replicaCount: 2
     
    # * Tenant info
    # tenant identification, or empty for shared deployment
    tenant:
      # Tenant UUID
      id: "<tenant-uuid>"
      # Tenant SID (like 0001)
      sid: "<tenant-sid>"
     
    # * Common log configuration
    log:
      # target directory where log will be stored, leave empty for default
      logDir: ""
      # path where volume will be mounted
      volumeMountPath: /data/log
      # log volume type: none | hostpath | pvc
      volumeType: pvc
      # log volume hostpath, used with volumeType "hostpath"
      volumeHostPath: /mnt/log
      # log PVC parameters, used with volumeType "pvc"
      pvc:
        name: pulse-lds-logs
        accessModes:
          - ReadWriteMany
        capacity: 10Gi
        class: <pv-storage-class-rw-many>
     
    # * Container image common settings
    image:
      tag: "<image-version>"
      pullPolicy: IfNotPresent
      registry: "<docker-registry>"
      imagePullSecrets: [name: "<docker-registry-secret-name>"]
     
    ## Service account settings
    serviceAccount:
      # Specifies whether a service account should be created
      create: false
      # Annotations to add to the service account
      annotations: {}
      # The name of the service account to use.
      # If not set and create is true, a name is generated using the fullname template
      name: ""
     
    ## Add annotations to all pods
    ##
    podAnnotations: {}
     
    ## Specifies the security context for all Pods in the service
    ##
    podSecurityContext: {}
     
    ## Add labels to all pods
    ##
    podLabels: {}
     
    ## HPA Settings
    ## Not supported in this release!
    hpa:
      enabled: false
     
    ## Priority Class
    ## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
    ##
    priorityClassName: ""
     
    ## Node labels for assignment.
    ## ref: https://kubernetes.io/docs/user-guide/node-selection/
    ##
    nodeSelector: {}
     
    ## Tolerations for assignment.
    ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
    ##
    tolerations: []
     
    ## Pod Disruption Budget Settings
    podDisruptionBudget:
      enabled: false
     
    ## Affinity for assignment.
    ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
    ##
    affinity: {}
     
    # * Monitoring settings
    monitoring:
      # enable the Prometheus metrics endpoint
      enabled: false
      # enable golden signals metrics (not supported for PE)
      goldenSignals:
        enabled: false
      # port number of the Prometheus metrics endpoint
      port: 9091
      # HTTP path to scrape for metrics
      path: /metrics
      # additional annotations required for monitoring PODs
      # you can reference values of other variables as {{.Values.variable.full.name}}
      podAnnotations: {}
        # prometheus.io/scrape: "true"
        # prometheus.io/port: "{{.Values.monitoring.port}}"
        # prometheus.io/path: "/metrics"
      podMonitor:
        # enables PodMonitor creation for the POD
        enabled: true
        # interval at which metrics should be scraped
        scrapeInterval: 30s
        # timeout after which the scrape is ended
        scrapeTimeout:
        # namespace of the PodMonitor, defaults to the namespace of the POD
        namespace:
        additionalLabels: {}
      alerts:
        # enables alert rules
        enabled: true
        # alert condition duration
        duration: 5m
        # namespace of the alert rules, defaults to the namespace of the POD
        namespace:
        additionalLabels: {}
     
    # * Configuration for the LDS container
    lds:
      # resource limits for container
      resources:
        # minimum resource requirements to start container
        requests:
          # minimal amount of memory required to start a container
          memory: "50Mi"
          # minimal CPU to reserve
          cpu: "50m"
        # resource limits for containers
        limits:
          # maximum amount of memory a container can use before being evicted
          # by the OOM Killer
          memory: "4Gi"
          # maximum amount of CPU resources that can be used and should be tuned to reflect
          # what the application can effectively use before needing to be horizontally scaled out
          cpu: "4000m"
      # securityContext: {}
     
    # * Configuration for the monitor sidecar container
    monitorSidecar:
      # resource limits for container
      resources:
        # minimum resource requirements to start container
        requests:
          # minimal amount of memory required to start a container
          memory: "30Mi"
          # minimal CPU to reserve
          cpu: "2m"
        # resource limits for containers
        limits:
          # maximum amount of memory a container can use before being evicted
          # by the OOM Killer
          memory: "70Mi"
          # maximum amount of CPU resources that can be used and should be tuned to reflect
          # what the application can effectively use before needing to be horizontally scaled out
          cpu: "10m"
      # securityContext: {}
     
    # *  Configuration for the Configuration Server Proxy container
    csproxy:
      # define domain for the configuration host
      params:
        cfgHost: "tenant-<tenant-uuid>.voice.<domain>"
      resources:
        # minimum resource requirements to start container
        requests:
          # minimal amount of memory required to start a container
          memory: "200Mi"
          # minimal CPU to reserve
          cpu: "50m"
        # resource limits for containers
        limits:
          # maximum amount of memory a container can use before being evicted
          # by the OOM Killer
          memory: "2Gi"
          # maximum amount of CPU resources that can be used and should be tuned to reflect
          # what the application can effectively use before needing to be horizontally scaled out
          cpu: "1000m"
      # securityContext: {}
  • Update values in the values-override-lds.yaml file (GKE):
    # Default values for lds.
    # This is a YAML-formatted file.
    # Declare variables to be passed into your templates.
     
    replicaCount: 2
     
    # * Tenant info
    # tenant identification, or empty for shared deployment
    tenant:
      # Tenant UUID
      id: "<tenant-uuid>"
      # Tenant SID (like 0001)
      sid: "<tenant-sid>"
     
    # * Common log configuration
    log:
      # target directory where log will be stored, leave empty for default
      logDir: ""
      # path where volume will be mounted
      volumeMountPath: /data/log
      # log volume type: none | hostpath | pvc
      volumeType: pvc
      # log volume hostpath, used with volumeType "hostpath"
      volumeHostPath: /mnt/log
      # log PVC parameters, used with volumeType "pvc"
      pvc:
        name: pulse-lds-logs
        accessModes:
          - ReadWriteMany
        capacity: 10Gi
        class: <pv-storage-class-rw-many>
     
    # * Container image common settings
    image:
      tag: "<image-version>"
      pullPolicy: IfNotPresent
      registry: "<docker-registry>"
      imagePullSecrets: [name: "<docker-registry-secret-name>"]
     
    ## Service account settings
    serviceAccount:
      # Specifies whether a service account should be created
      create: false
      # Annotations to add to the service account
      annotations: {}
      # The name of the service account to use.
      # If not set and create is true, a name is generated using the fullname template
      name: ""
     
    ## Add annotations to all pods
    ##
    podAnnotations: {}
     
    ## Specifies the security context for all Pods in the service
    ##
    podSecurityContext:
      runAsNonRoot: true
      runAsUser: 500
      runAsGroup: 500
      fsGroup: 0
     
    ## Add labels to all pods
    ##
    podLabels: {}
     
    ## HPA Settings
    ## Not supported in this release!
    hpa:
      enabled: false
     
    ## Priority Class
    ## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
    ##
    priorityClassName: ""
     
    ## Node labels for assignment.
    ## ref: https://kubernetes.io/docs/user-guide/node-selection/
    ##
    nodeSelector: {}
     
    ## Tolerations for assignment.
    ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
    ##
    tolerations: []
     
    ## Pod Disruption Budget Settings
    podDisruptionBudget:
      enabled: false
     
    ## Affinity for assignment.
    ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
    ##
    affinity: {}
     
    # * Monitoring settings
    monitoring:
      # enable the Prometheus metrics endpoint
      enabled: false
      # enable golden signals metrics (not supported for PE)
      goldenSignals:
        enabled: false
      # port number of the Prometheus metrics endpoint
      port: 9091
      # HTTP path to scrape for metrics
      path: /metrics
      # additional annotations required for monitoring PODs
      # you can reference values of other variables as {{.Values.variable.full.name}}
      podAnnotations: {}
        # prometheus.io/scrape: "true"
        # prometheus.io/port: "{{.Values.monitoring.port}}"
        # prometheus.io/path: "/metrics"
      podMonitor:
        # enables PodMonitor creation for the POD
        enabled: true
        # interval at which metrics should be scraped
        scrapeInterval: 30s
        # timeout after which the scrape is ended
        scrapeTimeout:
        # namespace of the PodMonitor, defaults to the namespace of the POD
        namespace:
        additionalLabels: {}
      alerts:
        # enables alert rules
        enabled: true
        # alert condition duration
        duration: 5m
        # namespace of the alert rules, defaults to the namespace of the POD
        namespace:
        additionalLabels: {}
     
    # * Configuration for the LDS container
    lds:
      # resource limits for container
      resources:
        # minimum resource requirements to start container
        requests:
          # minimal amount of memory required to start a container
          memory: "50Mi"
          # minimal CPU to reserve
          cpu: "50m"
        # resource limits for containers
        limits:
          # maximum amount of memory a container can use before being evicted
          # by the OOM Killer
          memory: "4Gi"
          # maximum amount of CPU resources that can be used and should be tuned to reflect
          # what the application can effectively use before needing to be horizontally scaled out
          cpu: "4000m"
      # securityContext:
      #   runAsUser: 500
      #   runAsGroup: 500
     
    # * Configuration for the monitor sidecar container
    monitorSidecar:
      # resource limits for container
      resources:
        # minimum resource requirements to start container
        requests:
          # minimal amount of memory required to start a container
          memory: "30Mi"
          # minimal CPU to reserve
          cpu: "2m"
        # resource limits for containers
        limits:
          # maximum amount of memory a container can use before being evicted
          # by the OOM Killer
          memory: "70Mi"
          # maximum amount of CPU resources that can be used and should be tuned to reflect
          # what the application can effectively use before needing to be horizontally scaled out
          cpu: "10m"
      # securityContext:
      #   runAsUser: 500
      #   runAsGroup: 500
     
    # *  Configuration for the Configuration Server Proxy container
    csproxy:
      # define domain for the configuration host
      params:
        cfgHost: "tenant-<tenant-uuid>.voice.<domain>"
      resources:
        # minimum resource requirements to start container
        requests:
          # minimal amount of memory required to start a container
          memory: "200Mi"
          # minimal CPU to reserve
          cpu: "50m"
        # resource limits for containers
        limits:
          # maximum amount of memory a container can use before being evicted
          # by the OOM Killer
          memory: "2Gi"
          # maximum amount of CPU resources that can be used and should be tuned to reflect
          # what the application can effectively use before needing to be horizontally scaled out
          cpu: "1000m"
      # securityContext:
      #   runAsUser: 500
      #   runAsGroup: 500

Update values in the values-override-lds-vq.yaml file:

# Default values for lds.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
 
lds:
  params:
    cfgApp: "pulse-lds-vq-$((K8S_POD_INDEX % 2))"
 
log:
  pvc:
    name: pulse-lds-vq-logs

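The cfgApp value above uses shell arithmetic on the pod's StatefulSet ordinal (injected as K8S_POD_INDEX), so alternating pods attach to two different LDS application objects. A local sketch, run outside the cluster, of how the template appears to resolve for the first few ordinals:

```shell
# Local sketch of how "pulse-lds-vq-$((K8S_POD_INDEX % 2))" resolves:
# each pod ordinal maps to one of two application names, alternating.
for K8S_POD_INDEX in 0 1 2 3; do
  echo "pod ordinal ${K8S_POD_INDEX} -> pulse-lds-vq-$((K8S_POD_INDEX % 2))"
done
```

With replicaCount: 2, ordinals 0 and 1 land on pulse-lds-vq-0 and pulse-lds-vq-1 respectively, spreading the virtual-queue load across both application definitions.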
Install the lds helm chart:
To install the lds helm chart, run the following command:

helm upgrade --install "pulse-lds-<tenant-sid>"    pulsehelmrepo/lds --wait --version=<chart-version> --namespace=pulse -f values-override-lds.yaml
helm upgrade --install "pulse-lds-vq-<tenant-sid>" pulsehelmrepo/lds --wait --version=<chart-version> --namespace=pulse -f values-override-lds.yaml -f values-override-lds-vq.yaml

If the installation is successful, the command exits with code 0.
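Because success is reported through the exit code, scripted deployments can branch on it directly. A minimal sketch, with true standing in for the helm upgrade --install command shown above:

```shell
# Sketch: branch on the helm exit code in a deployment script.
# "true" is a stand-in for the "helm upgrade --install ..." command above.
if true; then
  echo "lds release installed"
else
  echo "lds install failed" >&2
  exit 1
fi
```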

Validate the lds helm chart:
To validate the lds helm chart, run the following command:

kubectl get pods -n=pulse -l "app.kubernetes.io/name=lds,app.kubernetes.io/instance=pulse-lds-<tenant-sid>"

Verify that the command reports all pulse-lds pods as Running, for example:

NAME              READY   STATUS    RESTARTS   AGE
pulse-lds-100-0   3/3     Running   0          2d20h
pulse-lds-100-1   3/3     Running   0          2d20h
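Eyeballing the READY column does not scale past a few pods; the same check can be scripted. A hedged sketch that parses kubectl get pods output (here, the sample above captured as a string) and flags any pod that is not Running with all containers ready:

```shell
# Sketch: flag pods that are not Running with all containers ready.
# In a live cluster, replace "$sample" with the output of:
#   kubectl get pods -n=pulse -l "app.kubernetes.io/name=lds" --no-headers
sample='pulse-lds-100-0   3/3     Running   0          2d20h
pulse-lds-100-1   3/3     Running   0          2d20h'

not_ready=$(printf '%s\n' "$sample" |
  awk '{ split($2, r, "/"); if (r[1] != r[2] || $3 != "Running") print $1 }')

if [ -z "$not_ready" ]; then
  echo "all lds pods ready"
else
  printf 'not ready: %s\n' $not_ready >&2
fi
```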

Install permissions helm chart

Get the permissions helm chart:

helm repo update
helm search repo <pulsehelmrepo>/permissions

Prepare the override file:

  • Update values in the values-override-permissions.yaml file (AKS):
    # Default values for permissions.
    # This is a YAML-formatted file.
    # Declare variables to be passed into your templates.
     
    # * Image configuration
    image:
      tag: "<image-version>"
      pullPolicy: IfNotPresent
      registry: "<docker-registry>"
      imagePullSecrets: [name: "<docker-registry-secret-name>"]
     
    # * Tenant info
    # tenant identification, or empty for shared deployment
    tenant:
      # Tenant UUID
      id: "<tenant-uuid>"
      # Tenant SID (like 0001)
      sid: "<tenant-sid>"
     
    # common configuration.
    config:
      dbName: "<db-name>"
      # set "true" when need @host added for username
      dbUserWithHost: true
      # set "true" for CSI secrets
      mountSecrets: false
      # Postgres config map name
      postgresConfig: "pulse-postgres-configmap"
      # Postgres secret name
      postgresSecret: "pulse-postgres-secret"
      # Postgres secret key for user
      postgresSecretUser: "META_DB_ADMIN"
      # Postgres secret key for password
      postgresSecretPassword: "META_DB_ADMINPWD"
      # Redis config map name
      redisConfig: "pulse-redis-configmap"
      # Redis secret name
      redisSecret: "pulse-redis-secret"
      # Redis secret key for access key
      redisSecretKey: "REDIS01_KEY"
     
     
    # * Configuration for the Configuration Server Proxy container
    csproxy:
      # define domain for the configuration host
      params:
        cfgHost: "tenant-<tenant-uuid>.voice.<domain>"
      # resource limits for container
      resources:
        # minimum resource requirements to start container
        requests:
          # minimal amount of memory required to start a container
          memory: "200Mi"
          # minimal CPU to reserve
          cpu: "50m"
        # resource limits for containers
        limits:
          # maximum amount of memory a container can use before being evicted
          # by the OOM Killer
          memory: "2Gi"
          # maximum amount of CPU resources that can be used and should be tuned to reflect
          # what the application can effectively use before needing to be horizontally scaled out
          cpu: "1000m"
      # securityContext: {}
     
    # * Common log configuration
    log:
      # target directory where log will be stored, leave empty for default
      logDir: ""
      # path where volume will be mounted
      volumeMountPath: /data/log
      # log volume type: none | hostpath | pvc
      volumeType: pvc
      # log volume hostpath, used with volumeType "hostpath"
      volumeHostPath: /mnt/log
      # log PVC parameters, used with volumeType "pvc"
      pvc:
        name: pulse-permissions-logs
        accessModes:
          - ReadWriteMany
        capacity: 10Gi
        class: <pv-storage-class-rw-many>
     
    ## Specifies the security context for all Pods in the service
    ##
    podSecurityContext: {}
     
    ## Resource requests and limits
    ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
    ##
    resources:
      limits:
        memory: "1Gi"
        cpu: "500m"
      requests:
        memory: "400Mi"
        cpu: "50m"
     
    ## HPA Settings
    ## Not supported in this release!
    hpa:
      enabled: false
     
    ## Priority Class
    ## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
    ##
    priorityClassName: ""
     
    ## Node labels for assignment.
    ## ref: https://kubernetes.io/docs/user-guide/node-selection/
    ##
    nodeSelector: {}
     
    ## Tolerations for assignment.
    ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
    ##
    tolerations: []
     
    ## Pod Disruption Budget Settings
    podDisruptionBudget:
      enabled: false
     
    ## Affinity for assignment.
    ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
    ##
    affinity: {}
  • Update values in the values-override-permissions.yaml file (GKE):
    # Default values for permissions.
    # This is a YAML-formatted file.
    # Declare variables to be passed into your templates.
     
    # * Image configuration
    image:
      tag: "<image-version>"
      pullPolicy: IfNotPresent
      registry: "<docker-registry>"
      imagePullSecrets: [name: "<docker-registry-secret-name>"]
     
    # * Tenant info
    # tenant identification, or empty for shared deployment
    tenant:
      # Tenant UUID
      id: "<tenant-uuid>"
      # Tenant SID (like 0001)
      sid: "<tenant-sid>"
     
    # common configuration.
    config:
      dbName: "<db-name>"
      # set "true" when need @host added for username
      dbUserWithHost: true
      # set "true" for CSI secrets
      mountSecrets: false
      # Postgres config map name
      postgresConfig: "pulse-postgres-configmap"
      # Postgres secret name
      postgresSecret: "pulse-postgres-secret"
      # Postgres secret key for user
      postgresSecretUser: "META_DB_ADMIN"
      # Postgres secret key for password
      postgresSecretPassword: "META_DB_ADMINPWD"
      # Redis config map name
      redisConfig: "pulse-redis-configmap"
      # Redis secret name
      redisSecret: "pulse-redis-secret"
      # Redis secret key for access key
      redisSecretKey: "REDIS01_KEY"
     
     
    # * Configuration for the Configuration Server Proxy container
    csproxy:
      # define domain for the configuration host
      params:
        cfgHost: "tenant-<tenant-uuid>.voice.<domain>"
      # resource limits for container
      resources:
        # minimum resource requirements to start container
        requests:
          # minimal amount of memory required to start a container
          memory: "200Mi"
          # minimal CPU to reserve
          cpu: "50m"
        # resource limits for containers
        limits:
          # maximum amount of memory a container can use before being evicted
          # by the OOM Killer
          memory: "2Gi"
          # maximum amount of CPU resources that can be used and should be tuned to reflect
          # what the application can effectively use before needing to be horizontally scaled out
          cpu: "1000m"
      # securityContext:
      #   runAsUser: 500
      #   runAsGroup: 500
     
    # * Common log configuration
    log:
      # target directory where log will be stored, leave empty for default
      logDir: ""
      # path where volume will be mounted
      volumeMountPath: /data/log
      # log volume type: none | hostpath | pvc
      volumeType: pvc
      # log volume hostpath, used with volumeType "hostpath"
      volumeHostPath: /mnt/log
      # log PVC parameters, used with volumeType "pvc"
      pvc:
        name: pulse-permissions-logs
        accessModes:
          - ReadWriteMany
        capacity: 10Gi
        class: <pv-storage-class-rw-many>
     
    ## Specifies the security context for all Pods in the service
    ##
    podSecurityContext:
       fsGroup: null
       runAsUser: null
       runAsGroup: 0
       runAsNonRoot: true
     
    ## Resource requests and limits
    ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
    ##
    resources:
      limits:
        memory: "1Gi"
        cpu: "500m"
      requests:
        memory: "400Mi"
        cpu: "50m"
     
    ## HPA Settings
    ## Not supported in this release!
    hpa:
      enabled: false
     
    ## Priority Class
    ## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
    ##
    priorityClassName: ""
     
    ## Node labels for assignment.
    ## ref: https://kubernetes.io/docs/user-guide/node-selection/
    ##
    nodeSelector: {}
     
    ## Tolerations for assignment.
    ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
    ##
    tolerations: []
     
    ## Pod Disruption Budget Settings
    podDisruptionBudget:
      enabled: false
     
    ## Affinity for assignment.
    ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
    ##
    affinity: {}
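The config section above references pre-existing ConfigMaps and Secrets by name; if any of them is missing from the target namespace, the release will fail at runtime rather than at install time. A hypothetical pre-flight sketch that emits one kubectl get per referenced object (review the lines, or pipe them to sh before installing):

```shell
# Pre-flight sketch: one "kubectl get" per object referenced by the
# override file; any missing object makes the corresponding command fail.
for ref in \
  configmap/pulse-postgres-configmap \
  secret/pulse-postgres-secret \
  configmap/pulse-redis-configmap \
  secret/pulse-redis-secret; do
  echo "kubectl get ${ref} -n pulse"
done
```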

Install the permissions helm chart:
To install the permissions helm chart, run the following command:

helm upgrade --install "pulse-permissions-<tenant-sid>" pulsehelmrepo/permissions --wait --version="<chart-version>" --namespace=pulse -f values-override-permissions.yaml

If the installation is successful, the command exits with code 0.

Validate the permissions helm chart:
To validate the permissions helm chart, run the following command:

kubectl get pods -n=pulse -l "app.kubernetes.io/name=permissions,app.kubernetes.io/instance=pulse-permissions-<tenant-sid>"

Verify that the command reports all pulse-permissions pods as Running, for example:

NAME                                    READY   STATUS    RESTARTS   AGE
pulse-permissions-100-c5ff8bb7d-jl7d7   2/2     Running   2          2d20h

Troubleshooting

Check init-tenant helm chart manifests:
To output the init-tenant helm chart manifest into the helm-template directory, run the following command:

helm template --version=<chart-version> --namespace=pulse --debug --output-dir helm-template pulse-init-tenant-<tenant-sid> pulsehelmrepo/init-tenant -f values-override-init-tenant.yaml

Check dcu helm chart manifests:
To output the dcu helm chart manifest into the helm-template directory, run the following command:

helm template --version=<chart-version> --namespace=pulse --debug --output-dir helm-template pulse-dcu-<tenant-sid> pulsehelmrepo/dcu -f values-override-dcu.yaml

Check lds helm chart manifests:
To output the lds helm chart manifest into the helm-template directory, run the following command:

helm template --version=<chart-version> --namespace=pulse --debug --output-dir helm-template pulse-lds-<tenant-sid> pulsehelmrepo/lds -f values-override-lds.yaml

Check permissions helm chart manifests:
To output the permissions helm chart manifest into the helm-template directory, run the following command:

helm template --version=<chart-version> --namespace=pulse --debug --output-dir helm-template pulse-permissions-<tenant-sid> pulsehelmrepo/permissions -f values-override-permissions.yaml