Tenant Provisioning

This topic is part of the Genesys Pulse Private Edition Guide for the current version of Reporting.

Prerequisites

Complete the steps described in the Before you begin topic.

Information you will need:

  • Versions:
    • <image-version> = 9.0.100.10
    • <chart-version> = 9.0.100+10
  • K8S namespace <namespace> (e.g. 'pulse')
  • Project Name <project-name> (e.g. 'pulse')
  • Postgres credentials
    • <db-host>
    • <db-port>
    • <db-name>
    • <db-user>
    • <db-user-password>
    • <db-superuser>
    • <db-superuser-password>
    • <db-ssl-mode>
  • Docker credentials
    • <docker-email>
    • <docker-password>
    • <docker-user>
    • <docker-registry>
    • <docker-registry-secret-name>
  • OpenShift credentials
    • <openshift-url>
    • <openshift-port>
    • <openshift-token>
  • Redis credentials
    • <redis-host>
    • <redis-port>
    • <redis-password>
    • <redis-enable-ssl>
  • Tenant service variables
    • <tenant-uuid>
    • <tenant-sid>
    • <tenant-name>
  • Storage classes
    • <rw-many-storage-class>
    • <rw-once-storage-class>
Fill in the appropriate placeholders in the .tenant_init_variables file, which is sourced by the commands below:
export PROJECT_NAME='<project-name>'
export NAMESPACE='<namespace>'
export CHART_VERSION='<chart-version>'
export DB_HOST='<db-host>'
export DB_PORT='<db-port>'
export DB_NAME_SHARED='<db-name>'
export DB_USER_SHARED='<db-user>'
export DB_PASSWORD_SHARED='<db-user-password>'
export DB_NAME_SUPERUSER='<db-superuser>'
export DB_PASSWORD_SUPERUSER='<db-superuser-password>'
export DB_SSL_MODE='<db-ssl-mode>'
export DOCKER_REGISTRY_SECRET_NAME='<docker-registry-secret-name>'
export DOCKER_REGISTRY='<docker-registry>'
export DOCKER_TAG='<image-version>'
export REDIS_ENABLE_SSL='<redis-enable-ssl>'
export REDIS_PASSWORD='<redis-password>'
export REDIS_PORT='<redis-port>'
export REDIS_HOST='<redis-host>'
export TENANT_UUID='<tenant-uuid>'
export TENANT_DCU='2'
export TENANT_NAME='<tenant-name>'
export TENANT_SID='<tenant-sid>'
export PV_STORAGE_CLASS_RW_MANY='<rw-many-storage-class>'
export PV_STORAGE_CLASS_RW_ONCE='<rw-once-storage-class>'
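
As a quick sanity check before proceeding (a minimal sketch; the variable-name prefixes in the grep pattern are an assumption based on the exports above), confirm that no angle-bracket placeholders remain after sourcing the file:
source .tenant_init_variables
 
# List any exported variable whose value still contains an unfilled <placeholder>
env | grep -E '^(PROJECT_NAME|NAMESPACE|CHART_VERSION|DB_|DOCKER_|REDIS_|TENANT_|PV_)' | grep '<' \
  && echo 'ERROR: unfilled placeholders listed above' \
  || echo 'OK: all placeholders filled'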

Tenant provisioning

Install init-tenant helm chart

Get the init-tenant helm chart

Download the init-tenant helm chart from JFrog using your credentials.
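
If the JFrog Helm repository is not yet configured on your workstation, add it first (a minimal sketch; the repository URL and credential placeholders are assumptions, replace them with your values). The commands in this topic reference the repository by the alias pe-jfrog-stage:
helm repo add pe-jfrog-stage <jfrog-helm-repo-url> --username '<jfrog-user>' --password '<jfrog-password>'
helm repo update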

Prepare override file

Update the values-override-init-tenant.yaml file:
# Default values for init-tenant.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
 
# * Images
# Replace for your values: registry and secret
image:
  name: init
  tag: "${DOCKER_TAG}"
  pullPolicy: IfNotPresent
  repository: "${DOCKER_REGISTRY}/pulse/"
 
imagePullSecrets: [name: ${DOCKER_REGISTRY_SECRET_NAME}]
 
# * Tenant info
# Replace for your values
tenant:
  # Tenant UUID
  id: ${TENANT_UUID}
  # Tenant SID (like 0001)
  sid: ${TENANT_SID}
 
# common configuration.
config:
  dbName: "${DB_NAME_SHARED}"
  # set "true" when need @host added for username
  dbUserWithHost: true
  # set "true" for CSI secrets
  mountSecrets: false
  # Postgres config map name
  postgresConfig: "pulse-postgres-configmap"
  # Postgres secret name
  postgresSecret: "pulse-postgres-secret"
  # Postgres secret key for user
  postgresSecretUser: "META_DB_ADMIN"
  # Postgres secret key for password
  postgresSecretPassword: "META_DB_ADMINPWD"
 
## Service account settings
serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""
 
## Add annotations to all pods
##
podAnnotations: {}
 
## Containers should run as genesys user and cannot use elevated permissions
## !!! THESE OPTIONS SHOULD NOT BE CHANGED UNLESS INSTRUCTED BY GENESYS !!!
# securityContext:
#    runAsUser: 500
#    runAsGroup: 500
 
## Resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
  limits:
    memory: 256Mi
    cpu: 200m
  requests:
    memory: 128Mi
    cpu: 100m
 
## Priority Class
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
##
priorityClassName: ""
 
## Node labels for assignment.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}
 
## Tolerations for assignment.
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
 
# * Templates
templates:
  - Agent_Group_Status.gpb
  - Agent_KPIs.gpb
  - Agent_Login.gpb
  - Alert_Widget.gpb
  - Callback_Activity.gpb
  - Campaign_Activity.gpb
  - Campaign_Callback_Status.gpb
  - Campaign_Group_Activity.gpb
  - Campaign_Group_Status.gpb
  - Chat_Agent_Activity.gpb
  - Chat_Queue_Activity.gpb
  - Chat_Service_Level_Performance.gpb
  - Chat_Waiting_Statistics.gpb
  - Email_Agent_Activity.gpb
  - Email_Queue_Activity.gpb
  - Facebook_Media_Activity.gpb
  - IFRAME.gpb
  - IWD_Agent_Activity.gpb
  - IWD_Queue_Activity.gpb
  - Queue_KPIs.gpb
  - Queue_Overflow_Reason.gpb
  - Static_Text.gpb
  - Twitter_Media_Activity.gpb
  - eServices_Agent_Activity.gpb
  - eServices_Queue_KPIs.gpb
Install the init-tenant helm chart

source .tenant_init_variables
 
envsubst < ./values-override-init-tenant.yaml | \
helm upgrade --install "pulse-init-tenant-${TENANT_SID}" pe-jfrog-stage/init-tenant \
      --wait --wait-for-jobs \
      --version="${CHART_VERSION}" \
      --namespace="${NAMESPACE}" \
      -f -
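
The envsubst command substitutes the exported environment variables into the override file and pipes the result to Helm. To preview the rendered file before installing (a minimal sketch, with .tenant_init_variables sourced as above):
envsubst < ./values-override-init-tenant.yaml | less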
Validate the init-tenant helm chart

source .tenant_init_variables
 
oc get pods -n="${NAMESPACE}" -l "app.kubernetes.io/name=init-tenant,app.kubernetes.io/instance=pulse-init-tenant-${TENANT_SID}"
The above command should report the pulse-init-tenant job as completed, for example:
NAME                                    READY   STATUS      RESTARTS   AGE
pulse-init-tenant-100-job-qszgl         0/1     Completed   0          2d20h
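
If the job does not reach Completed status, inspect its pod logs (a sketch; the label selector mirrors the one used above, with .tenant_init_variables sourced):
oc logs -n "${NAMESPACE}" -l "app.kubernetes.io/name=init-tenant,app.kubernetes.io/instance=pulse-init-tenant-${TENANT_SID}" --tail=100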

Install dcu helm chart

Get the dcu helm chart

Download the dcu helm chart from JFrog using your credentials.

Prepare override file

Update the values-override-dcu.yaml file:

# Default values for dcu.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
 
replicaCount: "${TENANT_DCU}"
 
# * Tenant info
# tenant identification, or empty for shared deployment
tenant:
  # Tenant UUID
  id: "${TENANT_UUID}"
  # Tenant SID (like 0001)
  sid: "${TENANT_SID}"
 
# * Common log configuration
log:
  # target directory where log will be stored, leave empty for default
  logDir: ""
  # path where volume will be mounted
  volumeMountPath: /data/log
  # log volume type: none | hostpath | pvc
  volumeType: pvc
  # log volume hostpath, used with volumeType "hostpath"
  volumeHostPath: /mnt/log
  # log PVC parameters, used with volumeType "pvc"
  pvc:
    name: pulse-dcu-logs
    accessModes:
      - ReadWriteMany
    capacity: 10Gi
    class: ${PV_STORAGE_CLASS_RW_MANY}
 
# * Config info
# Set your values.
config:
  dbName: "${DB_NAME_SHARED}"
  # set "true" when need @host added for username
  dbUserWithHost: true
  # set "true" for CSI secrets
  mountSecrets: false
  # Postgres config map name
  postgresConfig: "pulse-postgres-configmap"
  # Postgres secret name
  postgresSecret: "pulse-postgres-secret"
  # Postgres secret key for user
  postgresSecretUser: "META_DB_ADMIN"
  # Postgres secret key for password
  postgresSecretPassword: "META_DB_ADMINPWD"
  redisConfig: "pulse-redis-configmap"
  # Redis secret name
  redisSecret: "pulse-redis-secret"
  # Redis secret key for access key
  redisSecretKey: "REDIS01_KEY"
 
# * Image
# container image common settings
image:
  name:
  tag: "${DOCKER_TAG}"
  pullPolicy: IfNotPresent
  repository: "${DOCKER_REGISTRY}/pulse/"
 
imagePullSecrets: [name: ${DOCKER_REGISTRY_SECRET_NAME}]
 
## Service account settings
serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""
 
## Add annotations to all pods
##
podAnnotations: {}
 
## Add labels to all pods
##
podLabels: {}
 
## HPA Settings
## Not supported in this release!
hpa:
  enabled: false
  
## Priority Class
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
##
priorityClassName: ""
  
## Node labels for assignment.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}
  
## Tolerations for assignment.
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
  
## Pod Disruption Budget Settings
podDisruptionBudget:
  enabled: false
  
## Affinity for assignment.
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}
 
# * Monitoring settings
monitoring:
  # enable the Prometheus metrics endpoint
  enabled: false
  # port number of the Prometheus metrics endpoint
  port: 9091
  # HTTP path to scrape for metrics
  path: /metrics
  # additional annotations required for monitoring PODs
  # you can reference values of other variables as {{.Values.variable.full.name}}
  podAnnotations: {}
    # prometheus.io/scrape: "true"
    # prometheus.io/port: "{{.Values.monitoring.port}}"
    # prometheus.io/path: "/metrics"
  podMonitor:
    # enables PodMonitor creation for the POD
    enabled: true
    # interval at which metrics should be scraped
    scrapeInterval: 30s
    # timeout after which the scrape is ended
    scrapeTimeout:
    # namespace of the PodMonitor, defaults to the namespace of the POD
    namespace:
    additionalLabels: {}
  alerts:
    # enables alert rules
    enabled: true
    # alert condition duration
    duration: 5m
    # namespace of the alert rules, defaults to the namespace of the POD
    namespace:
    additionalLabels: {}
 
##########################################################################
 
# * Configuration for the Collector container
collector:
  # resource limits for container
  resources:
    # minimum resource requirements to start container
    requests:
      # minimal amount of memory required to start a container
      memory: "300Mi"
      # minimal CPU to reserve
      cpu: "200m"
    # resource limits for containers
    limits:
      # maximum amount of memory a container can use before being evicted
      # by the OOM Killer
      memory: "4Gi"
      # maximum amount of CPU resources that can be used and should be tuned to reflect
      # what the application can effectively use before needing to be horizontally scaled out
      cpu: "8000m"
  # securityContext:
  #   runAsUser: 500
  #   runAsGroup: 500
 
# * Configuration for the StatServer container
statserver:
  # resource limits for container
  resources:
    # minimum resource requirements to start container
    requests:
      # minimal amount of memory required to start a container
      memory: "300Mi"
      # minimal CPU to reserve
      cpu: "100m"
    # resource limits for containers
    limits:
      # maximum amount of memory a container can use before being evicted
      # by the OOM Killer
      memory: "4Gi"
      # maximum amount of CPU resources that can be used and should be tuned to reflect
      # what the application can effectively use before needing to be horizontally scaled out
      cpu: "4000m"
  # securityContext:
  #   runAsUser: 500
  #   runAsGroup: 500
 
# * Configuration for the monitor sidecar container
monitorSidecar:
  # resource limits for container
  resources:
    # disabled: true
    # minimum resource requirements to start container
    requests:
      # minimal amount of memory required to start a container
      memory: "30Mi"
      # minimal CPU to reserve
      cpu: "2m"
    # resource limits for containers
    limits:
      # maximum amount of memory a container can use before being evicted
      # by the OOM Killer
      memory: "70Mi"
      # maximum amount of CPU resources that can be used and should be tuned to reflect
      # what the application can effectively use before needing to be horizontally scaled out
      cpu: "10m"
  # securityContext:
  #   runAsUser: 500
  #   runAsGroup: 500
 
##########################################################################
 
# * Configuration for the Configuration Server Proxy container
csproxy:
  # resource limits for container
  resources:
    # minimum resource requirements to start container
    requests:
      # minimal amount of memory required to start a container
      memory: "200Mi"
      # minimal CPU to reserve
      cpu: "50m"
    # resource limits for containers
    limits:
      # maximum amount of memory a container can use before being evicted
      # by the OOM Killer
      memory: "2Gi"
      # maximum amount of CPU resources that can be used and should be tuned to reflect
      # what the application can effectively use before needing to be horizontally scaled out
      cpu: "1000m"
  # securityContext:
  #   runAsUser: 500
  #   runAsGroup: 500
 
# volumeClaims contains persistent volume claims for services
# All available storage classes can be found here:
# https://github.com/genesysengage/tfm-azure-core-aks/blob/master/k8s-module/storage.tf
volumeClaims:
  # statserverBackup is storage for statserver backup data
  statserverBackup:
    name: statserver-backup
    accessModes:
      - ReadWriteOnce
    # capacity is storage capacity
    capacity: "1Gi"
    # class is storage class. Must be set explicitly.
    class: ${PV_STORAGE_CLASS_RW_ONCE}

Install the dcu helm chart

source .tenant_init_variables
 
envsubst < ./values-override-dcu.yaml | \
helm upgrade --install "pulse-dcu-${TENANT_SID}" pe-jfrog-stage/dcu \
      --wait \
      --reuse-values \
      --version="${CHART_VERSION}" \
      --namespace="${NAMESPACE}" \
      -f -
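
To confirm that the release was recorded and is in a deployed state (a minimal sketch, with .tenant_init_variables sourced):
helm status "pulse-dcu-${TENANT_SID}" --namespace="${NAMESPACE}"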

Validate the dcu helm chart

source .tenant_init_variables
 
oc get pods -n="${NAMESPACE}" -l "app.kubernetes.io/name=dcu,app.kubernetes.io/instance=pulse-dcu-${TENANT_SID}"

The above command should report all pulse-dcu pods as running, for example:

NAME              READY   STATUS    RESTARTS   AGE
pulse-dcu-100-0   4/4     Running   2          2d20h
pulse-dcu-100-1   4/4     Running   0          167m
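
Because the pods are numbered, the dcu chart deploys a StatefulSet; to watch the rollout progress (a sketch, assuming the StatefulSet is named pulse-dcu-${TENANT_SID}):
oc rollout status statefulset "pulse-dcu-${TENANT_SID}" -n "${NAMESPACE}"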

Install lds helm chart

Get the lds helm chart

Download the lds helm chart from JFrog using your credentials.

Prepare override file

Update the values-override-lds.yaml file:

# Default values for lds.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
 
replicaCount: 2
 
# * Tenant info
# tenant identification, or empty for shared deployment
tenant:
  # Tenant UUID
  id: "${TENANT_UUID}"
  # Tenant SID (like 0001)
  sid: "${TENANT_SID}"
 
# * Common log configuration
log:
  # target directory where log will be stored, leave empty for default
  logDir: ""
  # path where volume will be mounted
  volumeMountPath: /data/log
  # log volume type: none | hostpath | pvc
  volumeType: pvc
  # log volume hostpath, used with volumeType "hostpath"
  volumeHostPath: /mnt/log
  # log PVC parameters, used with volumeType "pvc"
  pvc:
    name: pulse-lds-logs
    accessModes:
      - ReadWriteMany
    capacity: 10Gi
    class: ${PV_STORAGE_CLASS_RW_MANY}
 
# * Container image common settings
image:
  name:
  tag: "${DOCKER_TAG}"
  pullPolicy: IfNotPresent
  repository: "${DOCKER_REGISTRY}/pulse/"
 
imagePullSecrets: [name: ${DOCKER_REGISTRY_SECRET_NAME}]
 
## Service account settings
serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""
 
## Add annotations to all pods
##
podAnnotations: {}
 
## Add labels to all pods
##
podLabels: {}
 
## HPA Settings
## Not supported in this release!
hpa:
  enabled: false
  
## Priority Class
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
##
priorityClassName: ""
  
## Node labels for assignment.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}
  
## Tolerations for assignment.
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
  
## Pod Disruption Budget Settings
podDisruptionBudget:
  enabled: false
  
## Affinity for assignment.
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}
# * Monitoring settings
monitoring:
  # enable the Prometheus metrics endpoint
  enabled: false
  # port number of the Prometheus metrics endpoint
  port: 9091
  # HTTP path to scrape for metrics
  path: /metrics
  # additional annotations required for monitoring PODs
  # you can reference values of other variables as {{.Values.variable.full.name}}
  podAnnotations: {}
    # prometheus.io/scrape: "true"
    # prometheus.io/port: "{{.Values.monitoring.port}}"
    # prometheus.io/path: "/metrics"
  podMonitor:
    # enables PodMonitor creation for the POD
    enabled: true
    # interval at which metrics should be scraped
    scrapeInterval: 30s
    # timeout after which the scrape is ended
    scrapeTimeout:
    # namespace of the PodMonitor, defaults to the namespace of the POD
    namespace:
    additionalLabels: {}
  alerts:
    # enables alert rules
    enabled: true
    # alert condition duration
    duration: 5m
    # namespace of the alert rules, defaults to the namespace of the POD
    namespace:
    additionalLabels: {}
 
 
# * Configuration for the LDS container
lds:
  # resource limits for container
  resources:
    # minimum resource requirements to start container
    requests:
      # minimal amount of memory required to start a container
      memory: "50Mi"
      # minimal CPU to reserve
      cpu: "50m"
    # resource limits for containers
    limits:
      # maximum amount of memory a container can use before being evicted
      # by the OOM Killer
      memory: "4Gi"
      # maximum amount of CPU resources that can be used and should be tuned to reflect
      # what the application can effectively use before needing to be horizontally scaled out
      cpu: "4000m"
  # securityContext:
  #   runAsUser: 500
  #   runAsGroup: 500
 
# * Configuration for the monitor sidecar container
monitorSidecar:
  # resource limits for container
  resources:
    # minimum resource requirements to start container
    requests:
      # minimal amount of memory required to start a container
      memory: "30Mi"
      # minimal CPU to reserve
      cpu: "2m"
    # resource limits for containers
    limits:
      # maximum amount of memory a container can use before being evicted
      # by the OOM Killer
      memory: "70Mi"
      # maximum amount of CPU resources that can be used and should be tuned to reflect
      # what the application can effectively use before needing to be horizontally scaled out
      cpu: "10m"
  # securityContext:
  #   runAsUser: 500
  #   runAsGroup: 500
 
# *  Configuration for the Configuration Server Proxy container
csproxy:
  resources:
    # minimum resource requirements to start container
    requests:
      # minimal amount of memory required to start a container
      memory: "200Mi"
      # minimal CPU to reserve
      cpu: "50m"
    # resource limits for containers
    limits:
      # maximum amount of memory a container can use before being evicted
      # by the OOM Killer
      memory: "2Gi"
      # maximum amount of CPU resources that can be used and should be tuned to reflect
      # what the application can effectively use before needing to be horizontally scaled out
      cpu: "1000m"
  # securityContext:
  #   runAsUser: 500
  #   runAsGroup: 500

Install the lds helm chart

source .tenant_init_variables
 
envsubst < ./values-override-lds.yaml | \
helm upgrade --install "pulse-lds-${TENANT_SID}" pe-jfrog-stage/lds \
      --wait \
      --version="${CHART_VERSION}" \
      --namespace="${NAMESPACE}" \
      -f -

Validate the lds helm chart

source .tenant_init_variables
 
oc get pods -n="${NAMESPACE}" -l "app.kubernetes.io/name=lds,app.kubernetes.io/instance=pulse-lds-${TENANT_SID}"

The above command should report all pulse-lds pods as running, for example:

NAME              READY   STATUS    RESTARTS   AGE
pulse-lds-100-0   3/3     Running   0          2d20h
pulse-lds-100-1   3/3     Running   0          2d20h
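
To tail recent logs from one lds pod (a sketch; the container name lds is an assumption, adjust it to the container names your pods report):
oc logs "pulse-lds-${TENANT_SID}-0" -c lds -n "${NAMESPACE}" --tail=100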

Install permissions helm chart

Get the permissions helm chart

Download the permissions helm chart from JFrog using your credentials.

Prepare override file

Update the values-override-permissions.yaml file:

# Default values for permissions.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
 
# * Image configuration
image:
  name: userpermissions
  tag: "${DOCKER_TAG}"
  pullPolicy: IfNotPresent
  repository: "${DOCKER_REGISTRY}/pulse/"
 
imagePullSecrets: [name: ${DOCKER_REGISTRY_SECRET_NAME}]
 
# * Tenant info
# tenant identification, or empty for shared deployment
tenant:
  # Tenant UUID
  id: "${TENANT_UUID}"
  # Tenant SID (like 0001)
  sid: "${TENANT_SID}"
 
# common configuration.
config:
  dbName: "${DB_NAME_SHARED}"
  # set "true" when need @host added for username
  dbUserWithHost: true
  # set "true" for CSI secrets
  mountSecrets: false
  # Postgres config map name
  postgresConfig: "pulse-postgres-configmap"
  # Postgres secret name
  postgresSecret: "pulse-postgres-secret"
  # Postgres secret key for user
  postgresSecretUser: "META_DB_ADMIN"
  # Postgres secret key for password
  postgresSecretPassword: "META_DB_ADMINPWD"
  # Redis config map name
  redisConfig: "pulse-redis-configmap"
  # Redis secret name
  redisSecret: "pulse-redis-secret"
  # Redis secret key for access key
  redisSecretKey: "REDIS01_KEY"
 
 
# * Configuration for the Configuration Server Proxy container
csproxy:
  # resource limits for container
  resources:
    # minimum resource requirements to start container
    requests:
      # minimal amount of memory required to start a container
      memory: "200Mi"
      # minimal CPU to reserve
      cpu: "50m"
    # resource limits for containers
    limits:
      # maximum amount of memory a container can use before being evicted
      # by the OOM Killer
      memory: "2Gi"
      # maximum amount of CPU resources that can be used and should be tuned to reflect
      # what the application can effectively use before needing to be horizontally scaled out
      cpu: "1000m"
  # securityContext:
  #   runAsUser: 500
  #   runAsGroup: 500
 
# * Common log configuration
log:
  # target directory where log will be stored, leave empty for default
  logDir: ""
  # path where volume will be mounted
  volumeMountPath: /data/log
  # log volume type: none | hostpath | pvc
  volumeType: pvc
  # log volume hostpath, used with volumeType "hostpath"
  volumeHostPath: /mnt/log
  # log PVC parameters, used with volumeType "pvc"
  pvc:
    name: pulse-permissions-logs
    accessModes:
      - ReadWriteMany
    capacity: 10Gi
    class: ${PV_STORAGE_CLASS_RW_MANY}
  
## Containers should run as genesys user and cannot use elevated permissions
## !!! THESE OPTIONS SHOULD NOT BE CHANGED UNLESS INSTRUCTED BY GENESYS !!!
# securityContext:
#    runAsUser: 500
#    runAsGroup: 500
 
## Resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
  limits:
    memory: "1Gi"
    cpu: "500m"
  requests:
    memory: "400Mi"
    cpu: "50m"
  
## HPA Settings
## Not supported in this release!
hpa:
  enabled: false
  
## Priority Class
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
##
priorityClassName: ""
  
## Node labels for assignment.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}
  
## Tolerations for assignment.
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
  
## Pod Disruption Budget Settings
podDisruptionBudget:
  enabled: false
  
## Affinity for assignment.
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}
 
## Add annotations to all pods
##
podAnnotations: {}
  
## Add labels to all pods
##
podLabels: {}

Install the permissions helm chart

source .tenant_init_variables
 
envsubst < ./values-override-permissions.yaml | \
helm upgrade --install "pulse-permissions-${TENANT_SID}" pe-jfrog-stage/permissions \
      --wait \
      --version="${CHART_VERSION}" \
      --namespace="${NAMESPACE}" \
      -f -
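
To list all Pulse releases installed in the namespace so far (a minimal sketch, with .tenant_init_variables sourced):
helm list --namespace="${NAMESPACE}"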

Validate the permissions helm chart

source .tenant_init_variables
 
oc get pods -n="${NAMESPACE}" -l "app.kubernetes.io/name=permissions,app.kubernetes.io/instance=pulse-permissions-${TENANT_SID}"

The above command should report all pulse-permissions pods as running, for example:

NAME                                    READY   STATUS    RESTARTS   AGE
pulse-permissions-100-c5ff8bb7d-jl7d7   2/2     Running   2          2d20h
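
The hashed pod name above indicates a Deployment; to watch its rollout (a sketch, assuming the Deployment is named pulse-permissions-${TENANT_SID}):
oc rollout status deployment "pulse-permissions-${TENANT_SID}" -n "${NAMESPACE}"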

Troubleshooting

Check init-tenant helm chart manifests

Run the following command to output the manifests into the helm-template directory:

source .tenant_init_variables
 
envsubst < ./values-override-init-tenant.yaml | \
helm template \
     --version="${CHART_VERSION}" \
     --namespace="${NAMESPACE}" \
     --debug \
     --output-dir helm-template \
     "${CHART_NAME_TENANT_INIT}" pe-jfrog-stage/init-tenant \
     -f -

Check dcu helm chart manifests

Run the following command to output the manifests into the helm-template directory:

source .tenant_init_variables
 
envsubst < ./values-override-dcu.yaml | \
helm template \
     --version="${CHART_VERSION}" \
     --namespace="${NAMESPACE}" \
     --debug \
     --output-dir helm-template \
     "pulse-dcu-${TENANT_SID}" pe-jfrog-stage/dcu \
     -f -

Check lds helm chart manifests

Run the following command to output the manifests into the helm-template directory:

source .tenant_init_variables
 
envsubst < ./values-override-lds.yaml | \
helm template \
     --version="${CHART_VERSION}" \
     --namespace="${NAMESPACE}" \
     --debug \
     --output-dir helm-template \
     "pulse-lds-${TENANT_SID}" pe-jfrog-stage/lds \
     -f -

Check permissions helm chart manifests

Run the following command to output the manifests into the helm-template directory:

source .tenant_init_variables
 
envsubst < ./values-override-permissions.yaml | \
helm template \
     --version="${CHART_VERSION}" \
     --namespace="${NAMESPACE}" \
     --debug \
     --output-dir helm-template \
     "pulse-permissions" pe-jfrog-stage/permissions \
     -f -
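
After rendering, you can inspect the generated manifests, for example to confirm that image references were substituted correctly (a minimal sketch):
grep -rn "image:" helm-template/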
