GVP/Current/GVPPEGuide/Deploy
 
|ComingSoon=No
|Section={{Section
|sectionHeading=Deploy
 
|alignment=Vertical
 
|structuredtext={{NoteFormat|Make sure to review {{Link-SomewhereInThisVersion|manual=GVPPEGuide|topic=Planning}} for the full list of prerequisites required to deploy Genesys Voice Platform.|}}
 
===Environment setup===
  
*Log in to the GKE cluster
<source lang="bash">
gcloud container clusters get-credentials gke1
</source>

*Create the gvp namespace in the GKE cluster using the following manifest file

'''create-gvp-namespace.json'''
<source lang="json">
{
  "apiVersion": "v1",
  "kind": "Namespace",
  "metadata": {
    "name": "gvp",
    "labels": {
      "name": "gvp"
    }
  }
}
</source>

<source lang="bash">
kubectl apply -f create-gvp-namespace.json
</source>

*Confirm the namespace was created
<source lang="bash">
kubectl describe namespace gvp
</source>
  
 
Installation order matters with GVP. To deploy without errors, the install order should be:
 
Download the GVP Helm charts from JFrog using your credentials:
  
gvp-configserver :  https://<jfrog artifactory/helm location>/gvp-configserver-<version_number>.tgz

gvp-sd :  https://<jfrog artifactory/helm location>/gvp-sd-<version_number>.tgz

gvp-rs :  https://<jfrog artifactory/helm location>/gvp-rs-<version_number>.tgz

gvp-rm :  https://<jfrog artifactory/helm location>/gvp-rm-<version_number>.tgz

gvp-mcp :  https://<jfrog artifactory/helm location>/gvp-mcp-<version_number>.tgz

For version numbers, refer to {{Link-AnywhereElse|product=ReleaseNotes|version=Current|manual=GenesysEngage-cloud|topic=GVPHelm|display text=Helm charts and containers for Genesys Voice Platform}}.
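Since all five charts follow the same URL pattern, the downloads can be scripted. The following is a sketch only — the repository base URL, version, and credentials are placeholders to substitute with your own values:

```shell
# Sketch: build the download URL for each GVP chart from the shared pattern.
# base and version are placeholders (assumptions) - substitute your real
# Artifactory location and the chart versions from the release notes.
base="https://<artifactory_helm_location>"
version="<version_number>"
charts="gvp-configserver gvp-sd gvp-rs gvp-rm gvp-mcp"

urls=""
for chart in $charts; do
  urls="$urls ${base}/${chart}-${version}.tgz"
done
echo $urls

# To download for real, replace the echo above with something like:
#   curl -fSsL -u <username>:<API key> -O "${base}/${chart}-${version}.tgz"
```

Replace the echo with the commented curl line once real values are in place.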
 
|Status=No
}}{{Section
 
<source lang="bash">
kubectl apply -f postgres-secret.yaml
</source>
<u>configserver-secret</u>
  
<source lang="bash">
kubectl apply -f configserver-secret.yaml
</source>
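For reference, the key names these two secrets must expose can be read from the values files in this guide ('''username'''/'''password''' for configserver-secret; '''db-username''', '''db-password''', '''db-hostname''', '''db-name''' for postgres-secret). A minimal sketch of what the manifests could look like — every value is a placeholder, and your actual secret files may differ:

```yaml
# Sketch only: key names match the secret keys referenced by the GVP
# values files; every value is a placeholder to replace with your own.
apiVersion: v1
kind: Secret
metadata:
  name: postgres-secret
  namespace: gvp
type: Opaque
stringData:
  db-username: <postgres admin user>
  db-password: <postgres admin password>
  db-hostname: <postgres host>
  db-name: gvp
---
apiVersion: v1
kind: Secret
metadata:
  name: configserver-secret
  namespace: gvp
type: Opaque
stringData:
  username: <config server user>
  password: <config server password>
```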
 
===Install Helm chart===
Download the required Helm chart release from the JFrog repository and install. Refer to {{Link-AnywhereElse|product=GVP|version=Current|manual=GVPPEGuide|topic=Deploy|anchor=HelmchaURLs|display text=Helm Chart URLs}}.
<source lang="bash">
helm install gvp-configserver ./<gvp-configserver-helm-artifact> -f gvp-configserver-values.yaml
</source>

Set the following values in your values.yaml for Configuration Server:

*priorityClassName - Set to a priority class that exists on the cluster (or create one)
*imagePullSecrets - Set to your pull secret name
  
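If no suitable priority class exists on the cluster, one can be created first. This is an illustrative sketch — the name '''gvp-critical''' and the value are assumptions, not Genesys-mandated settings:

```yaml
# Illustrative PriorityClass; pick a name and relative value that fit your
# cluster's scheduling policy, then reference the name in priorityClassName.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: gvp-critical
value: 1000000          # higher values take scheduling priority
globalDefault: false
description: "Priority class for GVP pods"
```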
'''gvp-configserver-values.yaml'''
<source lang="yaml">
# Default values for gvp-configserver.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

## Global Parameters
## Add labels to all the deployed resources
##
podLabels: {}

## Add annotations to all the deployed resources
##
podAnnotations: {}

serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name:

## Deployment Configuration
## replicaCount should be 1 for Config Server
replicaCount: 1

## Base Labels. Please do not change these.
serviceName: gvp-configserver
component: shared
# Namespace
partOf: gvp

## Container image repo settings.
image:
  confserv:
    registry: pureengage-docker-staging.jfrog.io
    repository: gvp/gvp_confserv
    pullPolicy: IfNotPresent
    tag: "{{ .Chart.AppVersion }}"
  serviceHandler:
    registry: pureengage-docker-staging.jfrog.io
    repository: gvp/gvp_configserver_servicehandler
    pullPolicy: IfNotPresent
    tag: "{{ .Chart.AppVersion }}"
  dbInit:
    registry: pureengage-docker-staging.jfrog.io
    repository: gvp/gvp_configserver_configserverinit
    pullPolicy: IfNotPresent
    tag: "{{ .Chart.AppVersion }}"

## Config Server App Configuration
configserver:
  ## Settings for liveness and readiness probes
  ## !!! THESE VALUES SHOULD NOT BE CHANGED UNLESS INSTRUCTED BY GENESYS !!!
  livenessValues:
    path: /cs/liveness
    initialDelaySeconds: 30
    periodSeconds: 60
    timeoutSeconds: 20
    failureThreshold: 3
    healthCheckAPIPort: 8300

  readinessValues:
    path: /cs/readiness
    initialDelaySeconds: 30
    periodSeconds: 30
    timeoutSeconds: 20
    failureThreshold: 3
    healthCheckAPIPort: 8300

  alerts:
    cpuUtilizationAlertLimit: 70
    memUtilizationAlertLimit: 90
    workingMemAlertLimit: 7
    maxRestarts: 2

## PVCs defined
# none

## Define service(s) for application
service:
  type: ClusterIP
  host: gvp-configserver-0
  port: 8888
  targetPort: 8888

## Service Handler configuration.
serviceHandler:
  port: 8300

## Secrets storage related settings - k8s secrets only
secrets:
  # Used for pulling images/containers from the repositories.
  imagePull:
    - name: pureengage-docker-dev
    - name: pureengage-docker-staging

  # Config Server secrets. If k8s is false, csi will be used, else k8s will be used.
  # Currently, only k8s is supported!
  configServer:
    secretName: configserver-secret
    secretUserKey: username
    secretPwdKey: password
    #csiSecretProviderClass: keyvault-gvp-gvp-configserver-secret

  # Config Server Postgres DB secrets and settings.
  postgres:
    dbName: gvp
    dbPort: 5432
    secretName: postgres-secret
    secretAdminUserKey: db-username
    secretAdminPwdKey: db-password
    secretHostnameKey: db-hostname
    secretDbNameKey: db-name
    #secretServerNameKey: server-name

## Ingress configuration
ingress:
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: chart-example.local
      paths: []
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

## App resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
  requests:
    memory: "512Mi"
    cpu: "500m"
  limits:
    memory: "1Gi"
    cpu: "1"

## App containers' Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
##
## Containers should run as genesys user and cannot use elevated permissions
##
securityContext:
  runAsUser: 500
  runAsGroup: 500
  # capabilities:
  #  drop:
  #  - ALL
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000

podSecurityContext: {}
  # fsGroup: 2000

## Priority Class
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
## NOTE: this is an optional parameter
##
priorityClassName: system-cluster-critical

## Affinity for assignment.
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}

## Node labels for assignment.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}

## Tolerations for assignment.
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []

## Service/Pod Monitoring Settings
## Whether to create Prometheus alert rules or not.
prometheusRule:
  create: true

## Grafana dashboard Settings
## Whether to create Grafana dashboard or not.
grafana:
  enabled: true

## Enable network policies or not
networkPolicies:
  enabled: false

## DNS configuration options
dnsConfig:
  options:
    - name: ndots
      value: "3"
</source>
 
 
===Verify the deployed resources===
Verify the deployed resources from the console or CLI.
  
 
<u>shared-consul-consul-gvp-token</u>

'''shared-consul-consul-gvp-token-secret.yaml'''
{{{!}} class="wikitable" data-macro-name="code" data-macro-id="7f8edbd9-46af-4fd5-8f1d-0a94107150b2" data-macro-parameters="language=yml{{!}}theme=Emacs{{!}}title=shared-consul-consul-gvp-token-secret.yaml" data-macro-schema-version="1" data-macro-body-type="PLAIN_TEXT"
{{!}} class="wysiwyg-macro-body"{{!}}
 
<source lang="bash">
kubectl create -f shared-consul-consul-gvp-token-secret.yaml
</source>
 
===ConfigMap creation===
Create the following '''tenant-inventory''' ConfigMap, which is required for service discovery deployment.

'''Caveat'''

If the tenant has not been deployed yet, you will not have the information needed to populate the ConfigMap. An empty ConfigMap can be created using:
<source lang="bash">
kubectl create configmap tenant-inventory -n gvp
</source>
Create the ConfigMap contents based on {{Link-AnywhereElse|product=GVP|version=Current|manual=GVPPEGuide|topic=Provision|anchor=TP|display text=Tenant provisioning via Service Discovery Container}}.

'''t100.json'''
<source lang="json">
{
    "name": "t100",
    "id": "80dd",
    "gws-ccid": "9350e2fc-a1dd-4c65-8d40-1f75a2e080dd",
    "default-application": "IVRAppDefault"
}
</source>

Execute the following command to create the ConfigMap on the cluster:

'''Add Config Map'''
<source lang="bash">
kubectl create configmap tenant-inventory --from-file t100.json -n gvp
</source>
 
 
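Before loading '''t100.json''' into the ConfigMap, it can be worth confirming that the file is valid JSON and contains the required fields. A small sketch, assuming python3 is available on the operator host:

```shell
# Write the tenant file (values here are the sample values from this guide)
# and validate it before running kubectl create configmap.
cat > t100.json <<'EOF'
{
    "name": "t100",
    "id": "80dd",
    "gws-ccid": "9350e2fc-a1dd-4c65-8d40-1f75a2e080dd",
    "default-application": "IVRAppDefault"
}
EOF

python3 - <<'EOF'
import json, sys

with open("t100.json") as f:
    tenant = json.load(f)  # raises an error on malformed JSON

required = {"name", "id", "gws-ccid", "default-application"}
missing = required - tenant.keys()
if missing:
    sys.exit(f"t100.json is missing: {sorted(missing)}")
print("t100.json OK")
EOF
```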
===Install Helm chart===
Download the required Helm chart release from the JFrog repository and install. Refer to {{Link-AnywhereElse|product=GVP|version=Current|manual=GVPPEGuide|topic=Deploy|anchor=HelmchaURLs|display text=Helm Chart URLs}}.
 
{{!}} class="wysiwyg-macro-body"{{!}}
  helm install gvp-sd ./<gvp-sd-helm-artifact> -f gvp-sd-values.yaml
{{!}}}
  
*<critical-priority-class> - Set to a priority class that exists on cluster (or create it instead)
 
*<docker-repo> - Set to your Docker Repo with Private Edition Artifacts
 
*<credential-name> - Set to your pull secret name
 
  
{{{!}} class="wikitable" data-macro-name="code" data-macro-id="8d8c2e7b-040d-42e3-96c5-e8f07620a22e" data-macro-parameters="language=yml{{!}}theme=Emacs{{!}}title=gvp-sd-values.yaml" data-macro-schema-version="1" data-macro-body-type="PLAIN_TEXT"
+
'''gvp-sd-values.yaml'''
{{!}} class="wysiwyg-macro-body"{{!}}
+
<source lang="bash">
# Default values for gvp-sd.
+
# Default values for gvp-sd.
<nowiki>#</nowiki> This is a YAML-formatted file.
+
# This is a YAML-formatted file.
<nowiki>#</nowiki> Declare variables to be passed into your templates.
+
# Declare variables to be passed into your templates.
 
   
 
   
<nowiki>##</nowiki> Global Parameters
+
## Global Parameters
<nowiki>##</nowiki> Add labels to all the deployed resources
+
## Add labels to all the deployed resources
<nowiki>##</nowiki>
+
##
podLabels: {}
+
podLabels: {}
 
   
 
   
<nowiki>##</nowiki> Add annotations to all the deployed resources
+
## Add annotations to all the deployed resources
<nowiki>##</nowiki>
+
##
podAnnotations: {}
+
podAnnotations: {}
 
   
 
   
serviceAccount:
+
serviceAccount:
<nowiki> </nowiki> # Specifies whether a service account should be created
+
  # Specifies whether a service account should be created
<nowiki> </nowiki> create: false
+
  create: false
<nowiki> </nowiki> # Annotations to add to the service account
+
  # Annotations to add to the service account
<nowiki> </nowiki> annotations: {}
+
  annotations: {}
<nowiki> </nowiki> # The name of the service account to use.
+
  # The name of the service account to use.
<nowiki> </nowiki> # If not set and create is true, a name is generated using the fullname template
+
  # If not set and create is true, a name is generated using the fullname template
<nowiki> </nowiki> name:
+
  name:
 
   
 
   
<nowiki>##</nowiki> Deployment Configuration
+
## Deployment Configuration
replicaCount: 1
+
replicaCount: 1
smtp: allowed
+
smtp: allowed
 
   
 
   
<nowiki>##</nowiki> Name overrides
+
## Name overrides
nameOverride: ""
+
nameOverride: ""
fullnameOverride: ""
+
fullnameOverride: ""
 
   
 
   
<nowiki>##</nowiki> Base Labels. Please do not change these.
+
## Base Labels. Please do not change these.
component: shared
+
component: shared
partOf: gvp
+
partOf: gvp
 
   
 
   
image:
+
image:
<nowiki> </nowiki> registry: <docker-repo>
+
  registry: pureengage-docker-staging.jfrog.io
<nowiki> </nowiki> repository: gvp/gvp_sd
+
  repository: gvp/gvp_sd
<nowiki> </nowiki> tag: "<nowiki>{{ .Chart.AppVersion }}</nowiki>"
+
  tag: "{{ .Chart.AppVersion }}"
<nowiki> </nowiki> pullPolicy: IfNotPresent
+
  pullPolicy: IfNotPresent
 
   
 
   
<nowiki>##</nowiki> PVCs defined
+
## PVCs defined
<nowiki>#</nowiki> none
+
# none
 
   
 
   
<nowiki>##</nowiki> Define service for application.  
+
## Define service for application.
service:
+
service:
<nowiki> </nowiki> name: gvp-sd
+
  name: gvp-sd
<nowiki> </nowiki> type: ClusterIP
+
  type: ClusterIP
<nowiki> </nowiki> port: 8080
+
  port: 8080
 
   
 
   
<nowiki>##</nowiki> Application configuration parameters.
+
## Application configuration parameters.
env:
+
env:
<nowiki> </nowiki> MCP_SVC_NAME: "gvp-mcp"
+
  MCP_SVC_NAME: "gvp-mcp"
<nowiki> </nowiki> EXTERNAL_CONSUL_SERVER: ""
+
  EXTERNAL_CONSUL_SERVER: ""
<nowiki> </nowiki> CONSUL_PORT: "8501"
+
  CONSUL_PORT: "8501"
<nowiki> </nowiki> CONFIG_SERVER_HOST: "gvp-configserver"
+
  CONFIG_SERVER_HOST: "gvp-configserver"
<nowiki> </nowiki> CONFIG_SERVER_PORT: "8888"
+
  CONFIG_SERVER_PORT: "8888"
<nowiki> </nowiki> CONFIG_SERVER_APP: "default"
+
  CONFIG_SERVER_APP: "default"
<nowiki> </nowiki> HTTP_SERVER_PORT: "8080"
+
  HTTP_SERVER_PORT: "8080"
<nowiki> </nowiki> METRICS_EXPORTER_PORT: "9090"
+
  METRICS_EXPORTER_PORT: "9090"
<nowiki> </nowiki> DEF_MCP_FOLDER: "MCP_Configuration_Unit\\MCP_LRG"
+
  DEF_MCP_FOLDER: "MCP_Configuration_Unit\\MCP_LRG"
<nowiki> </nowiki> TEST_MCP_FOLDER: "MCP_Configuration_Unit_Test\\MCP_LRG"
+
  TEST_MCP_FOLDER: "MCP_Configuration_Unit_Test\\MCP_LRG"
<nowiki> </nowiki> SYNC_INIT_DELAY: "10000"
+
  SYNC_INIT_DELAY: "10000"
<nowiki> </nowiki> SYNC_PERIOD: "60000"
+
  SYNC_PERIOD: "60000"
<nowiki> </nowiki> MCP_PURGE_PERIOD_MINS: "0"
+
  MCP_PURGE_PERIOD_MINS: "0"
<nowiki> </nowiki> EMAIL_METERING_FACTOR: "10"
+
  EMAIL_METERING_FACTOR: "10"
<nowiki> </nowiki> RECORDINGS_CONTAINER: "ccerp-recordings"
+
  RECORDINGS_CONTAINER: "ccerp-recordings"
<nowiki> </nowiki> TENANT_KV_FOLDER: "tenants"
+
  TENANT_KV_FOLDER: "tenants"
<nowiki> </nowiki> TENANT_CONFIGMAP_FOLDER: "/etc/config"
+
  TENANT_CONFIGMAP_FOLDER: "/etc/config"
<nowiki> </nowiki> SMTP_SERVER: "smtp-relay.smtp.svc.cluster.local"
+
  SMTP_SERVER: "smtp-relay.smtp.svc.cluster.local"
 
   
 
   
<nowiki>##</nowiki> Secrets storage related settings
+
## Secrets storage related settings
secrets:
+
secrets:
<nowiki> </nowiki> # Used for pulling images/containers from the repositories.
+
  # Used for pulling images/containers from the repositories.
<nowiki> </nowiki> imagePull:
+
  imagePull:
<nowiki> </nowiki>  - name: <credential-name>
+
    - name: pureengage-docker-dev
<nowiki> </nowiki>
+
    - name: pureengage-docker-staging
<nowiki> </nowiki> # If k8s is true, the k8s secret will be used; otherwise, the vault secret will be used.
+
 
<nowiki> </nowiki> configServer:
+
  # If k8s is true, the k8s secret will be used; otherwise, the vault secret will be used.
<nowiki> </nowiki>  k8s: true
+
  configServer:
<nowiki> </nowiki>  k8sSecretName: configserver-secret
+
    k8s: true
<nowiki> </nowiki>  k8sUserKey: username
+
    k8sSecretName: configserver-secret
<nowiki> </nowiki>  k8sPasswordKey: password
+
    k8sUserKey: username
<nowiki> </nowiki>  vaultSecretName: "/configserver-secret"
+
    k8sPasswordKey: password
<nowiki> </nowiki>  vaultUserKey: "configserver-username"
+
    vaultSecretName: "/configserver-secret"
<nowiki> </nowiki>  vaultPasswordKey: "configserver-password"
+
    vaultUserKey: "configserver-username"
 +
    vaultPasswordKey: "configserver-password"
 
   
 
   
<nowiki> </nowiki> # If k8s is true, the k8s secret will be used; otherwise, the vault secret will be used.
+
  # If k8s is true, the k8s secret will be used; otherwise, the vault secret will be used.
<nowiki> </nowiki> consul:
+
  consul:
<nowiki> </nowiki>  k8s: true
+
    k8s: true
<nowiki> </nowiki>  k8sTokenName: "shared-consul-consul-gvp-token"
+
    k8sTokenName: "shared-consul-consul-gvp-token"
<nowiki> </nowiki>  k8sTokenKey: "consul-consul-gvp-token"
+
    k8sTokenKey: "consul-consul-gvp-token"
<nowiki> </nowiki>  vaultSecretName: "/consul-secret"
+
    vaultSecretName: "/consul-secret"
<nowiki> </nowiki>  vaultSecretKey: "consul-consul-gvp-token"
+
    vaultSecretKey: "consul-consul-gvp-token"
 
   
 
   
<nowiki> </nowiki> # GTTS key, password via k8s secret, if k8s is true.  If false, this data comes from tenant profile.
+
  # GTTS key, password via k8s secret, if k8s is true.  If false, this data comes from tenant profile.
<nowiki> </nowiki> gtts:
+
  gtts:
<nowiki> </nowiki>  k8s: false
+
    k8s: false
<nowiki> </nowiki>  k8sSecretName: gtts-secret
+
    k8sSecretName: gtts-secret
<nowiki> </nowiki>  EncryptedKey: encrypted-key
+
    EncryptedKey: encrypted-key
<nowiki> </nowiki>  PasswordKey: password
+
    PasswordKey: password
 
   
 
   
ingress:
+
ingress:
<nowiki> </nowiki> enabled: false
+
  enabled: false
<nowiki> </nowiki> annotations: {}
+
  annotations: {}
<nowiki> </nowiki>  # kubernetes.io/ingress.class: nginx
+
    # kubernetes.io/ingress.class: nginx
<nowiki> </nowiki>  # kubernetes.io/tls-acme: "true"
+
    # kubernetes.io/tls-acme: "true"
<nowiki> </nowiki> hosts:
+
  hosts:
<nowiki> </nowiki>  - host: chart-example.local
+
    - host: chart-example.local
<nowiki> </nowiki>    paths: []
+
      paths: []
<nowiki> </nowiki> tls: []
+
  tls: []
<nowiki> </nowiki> #  - secretName: chart-example-tls
+
  #  - secretName: chart-example-tls
<nowiki> </nowiki> #    hosts:
+
  #    hosts:
<nowiki> </nowiki> #      - chart-example.local
+
  #      - chart-example.local
 
   
 
   
resources:
+
resources:
<nowiki> </nowiki> requests:
+
  requests:
<nowiki> </nowiki>  memory: "2Gi"
+
    memory: "2Gi"
<nowiki> </nowiki>  cpu: "1000m"
+
    cpu: "1000m"
<nowiki> </nowiki> limits:
+
  limits:
<nowiki> </nowiki>  memory: "2Gi"
+
    memory: "2Gi"
<nowiki> </nowiki>  cpu: "1000m"
+
    cpu: "1000m"
 
   
 
   
<nowiki>##</nowiki> App containers' Security Context
+
## App containers' Security Context
<nowiki>##</nowiki> ref: <nowiki>https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container</nowiki>
+
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
<nowiki>##</nowiki>
+
##
<nowiki>##</nowiki> Containers should run as genesys user and cannot use elevated permissions
+
## Containers should run as genesys user and cannot use elevated permissions
<nowiki>##</nowiki> Pod level security context
+
## Pod level security context
podSecurityContext:  
+
podSecurityContext:
<nowiki> </nowiki> fsGroup: 500
+
  fsGroup: 500
<nowiki> </nowiki> runAsUser: 500
+
  runAsUser: 500
<nowiki> </nowiki> runAsGroup: 500
+
  runAsGroup: 500
<nowiki> </nowiki> runAsNonRoot: true
+
  runAsNonRoot: true
 
   
 
   
<nowiki>##</nowiki> Container security context  
+
## Container security context  
securityContext:
+
securityContext:
<nowiki> </nowiki> runAsUser: 500
+
  runAsUser: 500
<nowiki> </nowiki> runAsGroup: 500
+
  runAsGroup: 500
<nowiki> </nowiki> runAsNonRoot: true
+
  runAsNonRoot: true
 
   
 
   
<nowiki>##</nowiki> Priority Class
+
## Priority Class
<nowiki>##</nowiki> ref: <nowiki>https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/</nowiki>
+
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
<nowiki>##</nowiki> NOTE: this is an optional parameter
+
## NOTE: this is an optional parameter
<nowiki>##</nowiki>
+
##
priorityClassName: <critical-priority-class>
+
priorityClassName: system-cluster-critical
 
   
 
   
<nowiki>##</nowiki> Affinity for assignment.
+
## Affinity for assignment.
<nowiki>##</nowiki> Ref: <nowiki>https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity</nowiki>
+
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
<nowiki>##</nowiki>
+
##
affinity: {}
+
affinity: {}
 
   
 
   
<nowiki>##</nowiki> Node labels for assignment.
+
## Node labels for assignment.
<nowiki>##</nowiki> ref: <nowiki>https://kubernetes.io/docs/user-guide/node-selection/</nowiki>
+
## ref: https://kubernetes.io/docs/user-guide/node-selection/
<nowiki>##</nowiki>
+
##
nodeSelector: {}
+
nodeSelector: {}
 
   
 
   
<nowiki>##</nowiki> Tolerations for assignment.
+
## Tolerations for assignment.
<nowiki>##</nowiki> ref: <nowiki>https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/</nowiki>
+
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
<nowiki>##</nowiki>
+
##
tolerations: []
+
tolerations: []
 
   
 
   
<nowiki>##</nowiki> Service/Pod Monitoring Settings
+
## Service/Pod Monitoring Settings
prometheus:
+
prometheus:
<nowiki> </nowiki> # Enable for Prometheus operator
+
  # Enable for Prometheus operator
<nowiki> </nowiki> podMonitor:
+
  podMonitor:
<nowiki> </nowiki>  enabled: true
+
    enabled: true
 
   
 
   
<nowiki>##</nowiki> Enable network policies or not
+
## Enable network policies or not
networkPolicies:
+
networkPolicies:
<nowiki> </nowiki> enabled: false
+
  enabled: false
 
   
 
   
<nowiki>##</nowiki> DNS configuration options
+
## DNS configuration options
dnsConfig:
+
dnsConfig:
<nowiki> </nowiki> options:
+
  options:
<nowiki> </nowiki>  - name: ndots
+
    - name: ndots
<nowiki> </nowiki>    value: "3"
+
      value: "3"
{{!}}}
+
</source>
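Before installing a chart against a values file like the one above, it helps to confirm the site-specific keys were actually filled in. A minimal pre-flight sketch; the `check_values` helper, the `gvp-sd-values.yaml` file name, and the `<gvp-sd-helm-artifact>` chart path are illustrative assumptions, not names from this guide:

```shell
# Quick local pre-flight: confirm the values file mentions the keys that
# must be customized (registry, pull secret) before running `helm install`.
check_values() {
  f="$1"
  for key in registry imagePull; do
    grep -q "$key" "$f" || { echo "missing $key in $f"; return 1; }
  done
  echo "ok: $f"
}

# Cluster-side steps (shown for context, not run here):
#   check_values gvp-sd-values.yaml && \
#     helm install gvp-sd ./<gvp-sd-helm-artifact> -f gvp-sd-values.yaml
```

This catches a copy-paste of the template values file before Helm renders it against the cluster.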
 
===Verify the deployed resources===

Verify the deployed resources from OpenShift console/CLI.
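A small filter can make the CLI check quicker by showing only unhealthy pods. The `not_running` helper below is invented for this sketch; pipe real `oc get pods` (or `kubectl get pods`) output into it:

```shell
# Print the names of pods whose STATUS column is not Running/Completed.
# Usage against a live cluster:
#   oc get pods -n gvp --no-headers | not_running
not_running() { awk '$3 != "Running" && $3 != "Completed" { print $1 }'; }
```

An empty result means every pod in the namespace reports a healthy status.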
  
 
db_username:
 +
 +
'''rs-dbreader-password-secret.yaml'''
 
{{{!}} class="wikitable" data-macro-name="code" data-macro-id="1630c47f-c10a-41bb-b1c5-b417d4ac23f0" data-macro-parameters="language=yml{{!}}theme=Emacs{{!}}title=rs-dbreader-password-secret.yaml" data-macro-schema-version="1" data-macro-body-type="PLAIN_TEXT"
 
{{{!}} class="wikitable" data-macro-name="code" data-macro-id="1630c47f-c10a-41bb-b1c5-b417d4ac23f0" data-macro-parameters="language=yml{{!}}theme=Emacs{{!}}title=rs-dbreader-password-secret.yaml" data-macro-schema-version="1" data-macro-body-type="PLAIN_TEXT"
 
{{!}} class="wysiwyg-macro-body"{{!}}
 
{{!}} class="wysiwyg-macro-body"{{!}}
 
{{{!}} class="wikitable" data-macro-name="code" data-macro-id="ccb59bb5-e759-47fa-b46e-113991e08ddd" data-macro-parameters="language=bash{{!}}theme=Emacs" data-macro-schema-version="1" data-macro-body-type="PLAIN_TEXT"
 
{{{!}} class="wikitable" data-macro-name="code" data-macro-id="ccb59bb5-e759-47fa-b46e-113991e08ddd" data-macro-parameters="language=bash{{!}}theme=Emacs" data-macro-schema-version="1" data-macro-body-type="PLAIN_TEXT"
 
{{!}} class="wysiwyg-macro-body"{{!}}
 
{{!}} class="wysiwyg-macro-body"{{!}}
  oc create -f rs-dbreader-password-secret.yaml
+
  kubectl create -f rs-dbreader-password-secret.yaml
 
{{!}}}<u>shared-gvp-rs-sqlserer-secret</u>
 
{{!}}}<u>shared-gvp-rs-sqlserer-secret</u>
  
  
 
db-reader-password:
 +
 +
'''shared-gvp-rs-sqlserer-secret.yaml'''
 
{{{!}} class="wikitable" data-macro-name="code" data-macro-id="3a80c2ed-81d5-40d5-a8b4-8e099933f371" data-macro-parameters="language=yml{{!}}theme=Emacs{{!}}title=shared-gvp-rs-sqlserer-secret.yaml" data-macro-schema-version="1" data-macro-body-type="PLAIN_TEXT"
 
{{{!}} class="wikitable" data-macro-name="code" data-macro-id="3a80c2ed-81d5-40d5-a8b4-8e099933f371" data-macro-parameters="language=yml{{!}}theme=Emacs{{!}}title=shared-gvp-rs-sqlserer-secret.yaml" data-macro-schema-version="1" data-macro-body-type="PLAIN_TEXT"
 
{{!}} class="wysiwyg-macro-body"{{!}}
 
{{!}} class="wysiwyg-macro-body"{{!}}
 
{{{!}} class="wikitable" data-macro-name="code" data-macro-id="8e9dd016-c876-4baa-a7b0-a58b08dbd4ac" data-macro-parameters="language=bash{{!}}theme=Emacs" data-macro-schema-version="1" data-macro-body-type="PLAIN_TEXT"
 
{{{!}} class="wikitable" data-macro-name="code" data-macro-id="8e9dd016-c876-4baa-a7b0-a58b08dbd4ac" data-macro-parameters="language=bash{{!}}theme=Emacs" data-macro-schema-version="1" data-macro-body-type="PLAIN_TEXT"
 
{{!}} class="wysiwyg-macro-body"{{!}}
 
{{!}} class="wysiwyg-macro-body"{{!}}
  oc create -f shared-gvp-rs-sqlserer-secret.yaml
+
  kubectl create -f shared-gvp-rs-sqlserer-secret.yaml
 
{{!}}}
 
{{!}}}
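Secret values placed under `data:` in a Kubernetes Secret manifest must be base64-encoded. A quick encoding sketch; the sample value is purely illustrative, not a real credential:

```shell
# Base64-encode a value for a Kubernetes Secret's `data:` field.
# printf (not echo) avoids encoding a trailing newline into the secret.
b64() { printf '%s' "$1" | base64; }

# Put the output under e.g. data.db-reader-password in the manifest.
```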
 
===Persistent Volumes creation===
'''Note''': The steps for PV creation can be skipped if OCS is used to auto-provision the persistent volumes.
+
Create the following PVs which are required for the service deployment.
 +
 
 +
gvp-rs-0
 +
 
 +
'''gvp-rs-pv.yaml'''
 +
<source lang="yaml">
 +
apiVersion: v1
 +
kind: PersistentVolume
 +
metadata:
 +
  name: gvp-rs-0
 +
  namespace: gvp
 +
spec:
 +
  capacity:
 +
    storage: 30Gi
 +
  accessModes:
 +
    - ReadWriteOnce
 +
  persistentVolumeReclaimPolicy: Retain
 +
  storageClassName: gvp
 +
  nfs:
 +
    path: /export/vol1/PAT/gvp/rs-01
 +
    server: 192.168.30.51
 +
</source>
 +
 
 +
Execute the following command:
 +
<source lang="bash">
 +
kubectl create -f gvp-rs-pv.yaml
 +
</source>
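The key fields of a PV manifest can be sanity-checked locally before applying it. The `pv_field` helper below is invented for this sketch (for live cluster state, `kubectl get pv gvp-rs-0` is the authoritative check):

```shell
# Read the value of a simple "key: value" line from a YAML manifest.
pv_field() { awk -v k="$2" 'BEGIN { k = k ":" } $1 == k { print $2; exit }' "$1"; }

# e.g.:
#   pv_field gvp-rs-pv.yaml storage            # capacity, e.g. 30Gi
#   pv_field gvp-rs-pv.yaml storageClassName   # must match the chart values
```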
  
Create the following PVs which are required for the service deployment.
 
{{{!}} class="wikitable" data-macro-name="warning" data-macro-id="8b99e091-ef72-401b-87ba-0a2fbb711686" data-macro-parameters="icon=false{{!}}title=Note Regarding Persistent Volumes" data-macro-schema-version="1" data-macro-body-type="RICH_TEXT"
 
{{!}} class="wysiwyg-macro-body"{{!}}If your OpenShift deployment is capable of self-provisioning of Persistent Volumes, then this step can be skipped.  Volumes will be created by the provisioner.
 
{{!}}}<u>gvp-rs-0</u>
 
{{{!}} class="wikitable" data-macro-name="code" data-macro-id="913a9e32-bc41-4a67-b0ef-f4dde848bf06" data-macro-parameters="language=yml{{!}}theme=Emacs{{!}}title=gvp-rs-pv.yaml" data-macro-schema-version="1" data-macro-body-type="PLAIN_TEXT"
 
{{!}} class="wysiwyg-macro-body"{{!}}
 
apiVersion: v1
 
kind: PersistentVolume
 
metadata:
 
  name: gvp-rs-0
 
  namespace: gvp
 
spec:
 
  capacity:
 
    storage: 30Gi
 
  accessModes:
 
    - ReadWriteOnce
 
  persistentVolumeReclaimPolicy: Retain
 
  storageClassName: gvp
 
  nfs:
 
    path:  /export/vol1/PAT/gvp/rs-01
 
    server: 192.168.30.51
 
{{!}}}Execute the following command:
 
{{{!}} class="wikitable" data-macro-name="code" data-macro-id="fda31e0a-a32f-4991-ba43-35cfd17ef2e7" data-macro-parameters="language=bash{{!}}theme=Emacs" data-macro-schema-version="1" data-macro-body-type="PLAIN_TEXT"
 
{{!}} class="wysiwyg-macro-body"{{!}}
 
oc create -f gvp-rs-pv.yaml
 
{{!}}}
 
 
===Install Helm chart===

Download the required Helm chart release from the JFrog repository and install. Refer to {{Link-AnywhereElse|product=GVP|version=Current|manual=GVPPEGuide|topic=Deploy|anchor=HelmchaURLs|display text=Helm Chart URLs}}.
 
{{!}} class="wysiwyg-macro-body"{{!}}
 
{{!}} class="wysiwyg-macro-body"{{!}}
 
  helm install gvp-rs ./<gvp-rs-helm-artifact> -f gvp-rs-values.yaml
 
  helm install gvp-rs ./<gvp-rs-helm-artifact> -f gvp-rs-values.yaml
{{!}}}At a minimum, the following values will need to be updated in your values.yaml:
+
{{!}}}
 +
 
 +
 
 +
The following values should be set in your values.yaml:
  
*<docker-repo> - Set to your Docker Repo with Private Edition Artifacts
+
*priorityClassName - Set to a priority class that exists on the cluster (or create one)
*<credential-name> - Set to your pull secret name
+
*imagePullSecrets - Set to your pull secret name
 +
*keyVaultSecret: false - Make sure this is false to force the use of k8s secrets
 +
*storageClass: genesys-gvp - Set to your storage class
  
{{{!}} class="wikitable" data-macro-name="code" data-macro-id="53ece46e-756d-4559-8c47-1480f902ecbe" data-macro-parameters="language=yml{{!}}theme=Emacs{{!}}title=gvp-rs-values.yaml" data-macro-schema-version="1" data-macro-body-type="PLAIN_TEXT"
+
'''gvp-rs-values.yaml'''
{{!}} class="wysiwyg-macro-body"{{!}}
+
<source lang="yaml">
## Global Parameters
+
## Global Parameters
## Add labels to all the deployed resources
+
## Add labels to all the deployed resources
##
+
##
labels:
+
labels:
  enabled: true
+
  enabled: true
  serviceGroup: "gvp"
+
  serviceGroup: "gvp"
  componentType: "shared"
+
  componentType: "shared"
 +
 +
serviceAccount:
 +
  # Specifies whether a service account should be created
 +
  create: false
 +
  # Annotations to add to the service account
 +
  annotations: {}
 +
  # The name of the service account to use.
 +
  # If not set and create is true, a name is generated using the fullname template
 +
  name:
 
   
 
   
serviceAccount:
+
## Primary App Configuration
  # Specifies whether a service account should be created
+
##
  create: false
+
# primaryApp:
  # Annotations to add to the service account
+
# type: ReplicaSet
  annotations: {}
+
# Should include the defaults for replicas
  # The name of the service account to use.
+
deployment:
  # If not set and create is true, a name is generated using the fullname template
+
  replicaCount: 1
  name:
+
  strategy: Recreate
 +
  namespace: gvp
 +
nameOverride: ""
 +
fullnameOverride: ""
 
   
 
   
## Primary App Configuration
+
image:
##
+
  registry: pureengage-docker-staging.jfrog.io
# primaryApp:
+
  gvprsrepository: gvp/gvp_rs
# type: ReplicaSet
+
  snmprepository: gvp/gvp_snmp
# Should include the defaults for replicas
+
  rsinitrepository: gvp/gvp_rs_init
deployment:
+
  rstag:
  replicaCount: 1
+
  rsinittag:
  strategy: Recreate
+
  snmptag: v9.0.040.07
  namespace: gvp
+
  pullPolicy: Always
nameOverride: ""
+
  imagePullSecrets:
fullnameOverride: ""
+
    - name: "pureengage-docker-staging"
 
   
 
   
image:
+
## liveness and readiness probes
  registry: <docker-repo>
+
## !!! THESE OPTIONS SHOULD NOT BE CHANGED UNLESS INSTRUCTED BY GENESYS !!!
  gvprsrepository: gvp/gvp_rs
+
livenessValues:
  snmprepository: gvp/gvp_snmp
+
  path: /ems-rs/components
  rsinitrepository: gvp/gvp_rs_init
+
  initialDelaySeconds: 30
  rstag:
+
  periodSeconds: 120
  rsinittag:
+
  timeoutSeconds: 3
  snmptag: v9.0.040.07
+
  failureThreshold: 3
  pullPolicy: Always
 
  imagePullSecrets:
 
    - name: "<credential-name>"
 
 
   
 
   
## liveness and readiness probes
+
readinessValues:
## !!! THESE OPTIONS SHOULD NOT BE CHANGED UNLESS INSTRUCTED BY GENESYS !!!
+
  path: /ems-rs/components
livenessValues:
+
  initialDelaySeconds: 10
  path: /ems-rs/components
+
  periodSeconds: 60
  initialDelaySeconds: 30
+
  timeoutSeconds: 3
  periodSeconds: 120
+
  failureThreshold: 3
  timeoutSeconds: 3
 
  failureThreshold: 3
 
 
   
 
   
readinessValues:
+
## PVCs defined
  path: /ems-rs/components
+
volumes:
  initialDelaySeconds: 10
+
  pvc:
  periodSeconds: 60
+
    storageClass: managed-premium
  timeoutSeconds: 3
+
    claimSize: 20Gi
  failureThreshold: 3
+
    activemqAndLocalConfigPath: "/billing/gvp-rs"
 
   
 
   
## PVCs defined
+
## Define service(s) for application.  Fields may need to be modified based on `type`
volumes:
+
service:
  pvc:
+
  type: ClusterIP
    storageClass: managed-premium
+
  restapiport: 8080
    claimSize: 20Gi
+
  activemqport: 61616
    activemqAndLocalConfigPath: "/billing/gvp-rs"
+
  envinjectport: 443
 +
  dnsport: 53
 +
  configserverport: 8888
 +
  snmpport: 1705
 
   
 
   
## Define service(s) for application. Fields may need to be modified based on `type`
+
## ConfigMaps with Configuration
service:
+
## Use Config Map for creating environment variables
  type: ClusterIP
+
context:
  restapiport: 8080
+
  env:
  activemqport: 61616
+
    CFGAPP: default
  envinjectport: 443
+
    GVP_RS_SERVICE_HOSTNAME: gvp-rs.gvp.svc.cluster.local
  dnsport: 53
+
    #CFGPASSWORD: password
  configserverport: 8888
+
    #CFGUSER: default
  snmpport: 1705
+
    CFG_HOST: gvp-configserver.gvp.svc.cluster.local
 +
    CFG_PORT: '8888'
 +
    CMDLINE: ./rs_startup.sh
 +
    DBNAME: gvp_rs
 +
    #DBPASS: 'jbIKfoS6LpfgaU$E'
 +
    DBUSER: openshiftadmin
 +
    rsDbSharedUsername: openshiftadmin
 +
    DBPORT: 1433
 +
    ENVTYPE: staging
 +
    GenesysIURegion: westus2
 +
    localconfigcachepath: /billing/gvp-rs/data/cache
 +
    HOSTFOLDER: Hosts
 +
    HOSTOS: CFGRedHatLinux
 +
    LCAPORT: '4999'
 +
    MSSQLHOST: mssqlserveropenshift.database.windows.net
 +
    RSAPP: azure_rs
 +
    RSJVM_INITIALHEAPSIZE: 500m
 +
    RSJVM_MAXHEAPSIZE: 1536m
 +
    RSFOLDER: Applications
 +
    RS_VERSION: 9.0.032.22
 +
    STDOUT: 'true'
 +
    WRKDIR: /usr/local/genesys/rs/
 +
    SNMPAPP: azure_rs_snmp
 +
    SNMP_WORKDIR: /usr/sbin
 +
    SNMP_CMDLINE: snmpd
 +
    SNMPFOLDER: Applications
 
   
 
   
## ConfigMaps with Configuration
+
  RSCONFIG:
## Use Config Map for creating environment variables
+
    messaging:
context:
+
      activemq.memoryUsageLimit: "256 mb"
  env:
+
      activemq.dataDirectory: "/billing/gvp-rs/data/activemq"
    CFGAPP: default
+
    log:
    GVP_RS_SERVICE_HOSTNAME: gvp-rs.gvp.svc.cluster.local
+
      verbose: "trace"
    #CFGPASSWORD: password
+
      trace: "stdout"
    #CFGUSER: default
+
    dbmp:
    CFG_HOST: gvp-configserver.gvp.svc.cluster.local
+
      rs.db.retention.operations.daily.default: "40"
    CFG_PORT: '8888'
+
      rs.db.retention.operations.monthly.default: "40"
    CMDLINE: ./rs_startup.sh
+
      rs.db.retention.operations.weekly.default: "40"
    DBNAME: gvp_rs
+
      rs.db.retention.var.daily.default: "40"
    #DBPASS: 'jbIKfoS6LpfgaU$E'
+
      rs.db.retention.var.monthly.default: "40"
    DBUSER: openshiftadmin
+
      rs.db.retention.var.weekly.default: "40"
    rsDbSharedUsername: openshiftadmin
+
      rs.db.retention.cdr.default: "40"
    DBPORT: 1433
 
    ENVTYPE: ""
 
    GenesysIURegion: ""
 
    localconfigcachepath: /billing/gvp-rs/data/cache
 
    HOSTFOLDER: Hosts
 
    HOSTOS: CFGRedHatLinux
 
    LCAPORT: '4999'
 
    MSSQLHOST: mssqlserveropenshift.database.windows.net
 
    RSAPP: azure_rs
 
    RSJVM_INITIALHEAPSIZE: 500m
 
    RSJVM_MAXHEAPSIZE: 1536m
 
    RSFOLDER: Applications
 
    RS_VERSION: 9.0.032.22
 
    STDOUT: 'true'
 
    WRKDIR: /usr/local/genesys/rs/
 
    SNMPAPP: azure_rs_snmp
 
    SNMP_WORKDIR: /usr/sbin
 
    SNMP_CMDLINE: snmpd
 
    SNMPFOLDER: Applications
 
 
   
 
   
  RSCONFIG:
+
# Default secrets storage to k8s secrets with csi able to be optional
    messaging:
+
secret:
      activemq.memoryUsageLimit: "256 mb"
+
  # keyVaultSecret is a flag to switch between secret types (k8s or CSI). If keyVaultSecret is set to false, the k8s secret will be used
      activemq.dataDirectory: "/billing/gvp-rs/data/activemq"
+
  keyVaultSecret: false
    log:
+
  #RS SQL server secret
      verbose: "trace"
+
  rsSecretName: shared-gvp-rs-sqlserver-secret
      trace: "stdout"
+
  # secretProviderClassName will not be used when keyVaultSecret is set to false
    dbmp:
+
  secretProviderClassName: keyvault-gvp-rs-sqlserver-secret-00
      rs.db.retention.operations.daily.default: "40"
+
  dbreadersecretFileName: db-reader-password
      rs.db.retention.operations.monthly.default: "40"
+
  dbadminsecretFileName: db-admin-password
      rs.db.retention.operations.weekly.default: "40"
+
  #Configserver secret
      rs.db.retention.var.daily.default: "40"
+
  #If keyVaultSecret set to false the below parameters will not be used.
      rs.db.retention.var.monthly.default: "40"
+
  configserverProviderClassName: gvp-configserver-secret
      rs.db.retention.var.weekly.default: "40"
+
  cfgSecretFileNameForCfgUsername: configserver-username
      rs.db.retention.cdr.default: "40"
+
  cfgSecretFileNameForCfgPassword: configserver-password
 +
  #If keyVaultSecret set to true the below parameters will not be used.
 +
  cfgServerSecretName: configserver-secret
 +
  cfgSecretKeyNameForCfgUsername: username
 +
  cfgSecretKeyNameForCfgPassword: password
 
   
 
   
# Default secrets storage to k8s secrets with csi able to be optional
+
## Ingress configuration
secret:
+
ingress:
  # keyVaultSecret is a flag to switch between secret types (k8s or CSI). If keyVaultSecret is set to false, the k8s secret will be used
+
  enabled: false
  keyVaultSecret: false
+
  annotations: {}
  #RS SQL server secret
+
    # kubernetes.io/ingress.class: nginx
  rsSecretName: shared-gvp-rs-sqlserver-secret
+
  # kubernetes.io/tls-acme: "true"
  # secretProviderClassName will not be used when keyVaultSecret is set to false
+
  hosts:
  secretProviderClassName: keyvault-gvp-rs-sqlserver-secret-00
+
    - host: chart-example.local
  dbreadersecretFileName: db-reader-password
+
      paths: []
  dbadminsecretFileName: db-admin-password
+
  tls: []
  #Configserver secret
+
  # - secretName: chart-example-tls
  #If keyVaultSecret set to false the below parameters will not be used.
+
  #   hosts:
  configserverProviderClassName: gvp-configserver-secret
+
  #      - chart-example.local
   cfgSecretFileNameForCfgUsername: configserver-username
 
  cfgSecretFileNameForCfgPassword: configserver-password
 
  #If keyVaultSecret set to true the below parameters will not be used.
 
  cfgServerSecretName: configserver-secret
 
  cfgSecretKeyNameForCfgUsername: username
 
  cfgSecretKeyNameForCfgPassword: password
 
 
   
 
   
## Ingress configuration
+
networkPolicies:
ingress:
+
  enabled: false
  enabled: false
 
  annotations: {}
 
    # kubernetes.io/ingress.class: nginx
 
  # kubernetes.io/tls-acme: "true"
 
  hosts:
 
    - host: chart-example.local
 
      paths: []
 
  tls: []
 
  #  - secretName: chart-example-tls
 
  #    hosts:
 
  #      - chart-example.local
 
 
   
 
   
networkPolicies:
+
## primaryApp resource requests and limits
  enabled: false
+
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
 +
##
 +
resourceForRS:
 +
  # We usually recommend not to specify default resources and to leave this as a conscious
 +
  # choice for the user. This also increases chances charts run on environments with little
 +
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
 +
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
 +
  requests:
 +
    memory: "500Mi"
 +
    cpu: "200m"
 +
  limits:
 +
    memory: "1Gi"
 +
    cpu: "300m"
 
   
 
   
## primaryApp resource requests and limits
+
resoueceForSnmp:
## ref: <nowiki>http://kubernetes.io/docs/user-guide/compute-resources/</nowiki>
+
  requests:
##
+
    memory: "500Mi"
resourceForRS:
+
    cpu: "100m"
  # We usually recommend not to specify default resources and to leave this as a conscious
+
  limits:
  # choice for the user. This also increases chances charts run on environments with little
+
    memory: "1Gi"
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
+
    cpu: "150m"
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
 
  requests:
 
    memory: "500Mi"
 
    cpu: "200m"
 
  limits:
 
    memory: "1Gi"
 
    cpu: "300m"
 
 
   
 
   
resoueceForSnmp:
+
## primaryApp containers' Security Context
  requests:
+
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
    memory: "500Mi"
+
##
    cpu: "100m"
+
## Containers should run as genesys user and cannot use elevated permissions
  limits:
+
securityContext:
    memory: "1Gi"
+
  runAsNonRoot: true
    cpu: "150m"
+
  runAsUser: 500
 +
  runAsGroup: 500
 
   
 
   
## primaryApp containers' Security Context
+
podSecurityContext:
## ref: <nowiki>https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container</nowiki>
+
  fsGroup: 500
##
 
## Containers should run as genesys user and cannot use elevated permissions
 
securityContext:
 
  runAsNonRoot: true
 
  runAsUser: 500
 
  runAsGroup: 500
 
 
   
 
   
podSecurityContext:
+
## Priority Class
  fsGroup: 500
+
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
 +
##
 +
priorityClassName: ""
 
   
 
   
## Priority Class
+
## Affinity for assignment.
## ref: <nowiki>https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/</nowiki>
+
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
+
##
priorityClassName: ""
+
affinity: {}
 
   
 
   
## Affinity for assignment.
+
## Node labels for assignment.
## Ref: <nowiki>https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity</nowiki>
+
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
+
##
affinity:  
+
nodeSelector: {}
 
   
 
   
## Node labels for assignment.
+
## Tolerations for assignment.
## ref: <nowiki>https://kubernetes.io/docs/user-guide/node-selection/</nowiki>
+
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
+
##
nodeSelector:  
+
tolerations: []
 
   
 
   
## Tolerations for assignment.
+
## Extra labels
## ref: <nowiki>https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/</nowiki>
+
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
##
+
##
  tolerations: []
+
# labels: {}
 
   
 
   
## Extra labels
+
## Extra Annotations
## ref: <nowiki>https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/</nowiki>
+
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
##
+
##
labels: {}
+
annotations: {}
 
   
 
   
## Extra Annotations
+
## Service/Pod Monitoring Settings
## ref: <nowiki>https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/</nowiki>
+
monitoring:
##
+
  podMonitorEnabled: true
#  annotations: {}
+
  prometheusRulesEnabled: true
 +
  grafanaEnabled: true
 
   
 
   
## Service/Pod Monitoring Settings
+
monitor:
prometheus:
+
  prometheusPort: 9116
   enabled: true
+
  monitorName: gvp-monitoring
   metric:
+
  module: [if_mib]
    port: 9116
+
  target: [127.0.0.1:1161]
 
# Enable for Prometheus operator
 
podMonitor:
 
  enabled: true
 
  metric:
 
    path: /snmp
 
    module: [ if_mib ]
 
    target: [ 127.0.0.1:1161 ]
 
 
monitoring:
 
  prometheusRulesEnabled: true
 
  grafanaEnabled: true
 
   
 
monitor:
 
  monitorName: gvp-monitoring
 
 
   
 
   
##DNS Settings
+
##DNS Settings
dnsConfig:
+
dnsConfig:
  options:
+
  options:
    - name: ndots
+
    - name: ndots
      value: "3"
+
      value: "3"
{{!}}}
+
</source>
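Several entries in the values file above (for example MSSQLHOST and DBUSER) are environment-specific and should be replaced before installing. A sketch of scripting that substitution; the `customize_values` helper and the host/user arguments are placeholders for this example, not defaults:

```shell
# Rewrite environment-specific values (SQL host and DB user) in a copy of
# the values file before `helm install`. Arguments are placeholders.
customize_values() {
  sed -e "s|MSSQLHOST:.*|MSSQLHOST: $2|" \
      -e "s|DBUSER:.*|DBUSER: $3|" "$1"
}

# e.g.:
#   customize_values gvp-rs-values.yaml mydb.example.net myreader > my-values.yaml
#   helm install gvp-rs ./<gvp-rs-helm-artifact> -f my-values.yaml
```

Writing to a new file keeps the downloaded template untouched for later upgrades.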
 
===Verify the deployed resources===

Verify the deployed resources from OpenShift console/CLI.
 +
|Status=No
 +
}}{{Section
 +
|sectionHeading=4. GVP Resource Manager
 +
|anchor=GVPRM
 +
|alignment=Vertical
 +
|structuredtext='''Note''': RM and the components beyond it will not pass readiness checks until an MCP has registered properly. This is because the service is not available without MCPs.
 +
===Persistent Volumes creation===
 +
Create the following PVs which are required for the service deployment.
  
====Deployment validation - Success====
+
'''Note''': If your OpenShift deployment is capable of self-provisioning Persistent Volumes, this step can be skipped. Volumes will be created by the provisioner.
  
1. Log in to the console and check that the gvp-rs pod is ready and running: "oc get pods -o wide".[[File:RS_Deploy_success_1.png|none|800px|RM_Deploy_success_1|link=https://all.docs.genesys.com/File:RS_Deploy_success_1.png]]2. Describe the pod and check that both the liveness and readiness probes are passing: "oc describe gvp-rs"
+
gvp-rm-01
  
3. Check that the RS applications and DB details are properly configured in GVP Configuration Server.
+
'''gvp-rm-01-pv.yaml'''
 +
<source lang="yaml">
 +
apiVersion: v1
 +
kind: PersistentVolume
 +
metadata:
 +
  name: gvp-rm-01
 +
spec:
 +
  capacity:
 +
    storage: 30Gi
 +
  accessModes:
 +
    - ReadWriteOnce
 +
  persistentVolumeReclaimPolicy: Retain
 +
  storageClassName: gvp
 +
  nfs:
 +
    path:  /export/vol1/PAT/gvp/rm-01
 +
    server: 192.168.30.51
 +
</source>
  
4. Check that the secrets are created in the Kubernetes cluster.[[File:RS_Deploy_success_2.png|none|800px|RM_Deploy_success_2|link=https://all.docs.genesys.com/File:RS_Deploy_success_2.png]]5. Check that DB creation and configuration are successful.
+
Execute the following command:
====Deployment validation - Failure====
+
<source lang="bash">
To debug a deployment failure, follow these steps:
+
kubectl create -f gvp-rm-01-pv.yaml
 +
</source>
  
1. Log in to the console and check that the gvp-rs pod is ready and running: "oc get pods -o wide".
+
gvp-rm-02
  
2. If the RS container is continuously restarting, check the liveness and readiness probe status.
+
'''gvp-rm-02-pv.yaml'''
 +
<source lang="yaml">
 +
apiVersion: v1
 +
kind: PersistentVolume
 +
metadata:
 +
  name: gvp-rm-02
 +
spec:
 +
  capacity:
 +
    storage: 30Gi
 +
  accessModes:
 +
    - ReadWriteOnce
 +
  persistentVolumeReclaimPolicy: Retain
 +
  storageClassName: gvp
 +
  nfs:
 +
    path:  /export/vol1/PAT/gvp/rm-02
 +
    server: 192.168.30.51
 +
</source>
  
3. Describe the RS pod to check the liveness and readiness probe status: "oc describe gvp-rs".
+
Execute the following command:
 +
<source lang="bash">
 +
kubectl create -f gvp-rm-02-pv.yaml
 +
</source>
 +
gvp-rm-logs-01
  
4. If probe failures are observed, check that the PVC is attached properly, and check the RS logs to verify that the configuration data is read properly.
+
'''gvp-rm-logs-01-pv.yaml'''
 +
<source lang="bash">
 +
apiVersion: v1
 +
kind: PersistentVolume
 +
metadata:
 +
  name: gvp-rm-logs-01
 +
spec:
 +
  capacity:
 +
    storage: 10Gi
 +
  accessModes:
 +
    - ReadWriteOnce
 +
  persistentVolumeReclaimPolicy: Recycle
 +
  storageClassName: gvp
 +
  nfs:
 +
    path:  /export/vol1/PAT/gvp/rm-logs-01
 +
    server: 192.168.30.51
 +
</source>
 +
Execute the following command:
 +
<source lang="bash">
 +
kubectl create -f gvp-rm-logs-01-pv.yaml
 +
</source>
 +
gvp-rm-logs-02
  
5. If the rs-init container is failing, check connectivity between RS and Configuration Server/DB.
+
'''gvp-rm-logs-02-pv.yaml'''
|Status=No
+
<source lang="bash">
}}{{Section
+
apiVersion: v1
|sectionHeading=4. GVP Resource Manager
+
kind: PersistentVolume
|anchor=GVPRM
+
metadata:
|alignment=Vertical
+
  name: gvp-rm-logs-02
|structuredtext=<br />
+
spec:
{{{!}} class="wikitable" data-macro-name="warning" data-macro-id="73364891-e552-4c1e-93b4-beac108d3e78" data-macro-parameters="title=Note" data-macro-schema-version="1" data-macro-body-type="RICH_TEXT"
+
  capacity:
{{!}} class="wysiwyg-macro-body"{{!}}Resource Manager will not pass readiness checks until an MCP has registered properly, because the service is not available without MCPs.
+
    storage: 10Gi
{{!}}}
+
  accessModes:
===Persistent Volumes creation===
+
    - ReadWriteOnce
'''Note''': The steps for PV creation can be skipped if OCS is used to auto-provision the persistent volumes.
+
  persistentVolumeReclaimPolicy: Recycle
 +
  storageClassName: gvp
 +
  nfs:
 +
    path:  /export/vol1/PAT/gvp/rm-logs-02
 +
    server: 192.168.30.51
 +
</source>
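The Helm chart's PVCs bind to these statically created PVs through the gvp storage class. The following is a quick, illustrative sanity check of the basic PV/PVC matching rules, written as plain Python (not Genesys tooling); the names and sizes mirror the manifests above.

```python
# Illustrative only: a PVC binds one of the PVs above only if the storage
# classes match, the requested access mode is offered, and the PV capacity
# covers the claim. Values mirror the manifests above.

def can_bind(pv: dict, claim: dict) -> bool:
    """Mimic the scheduler's basic PV/PVC matching rules."""
    return (
        pv["storageClassName"] == claim["storageClassName"]
        and claim["accessMode"] in pv["accessModes"]
        and pv["capacityGi"] >= claim["sizeGi"]
    )

pvs = [
    {"name": "gvp-rm-01", "storageClassName": "gvp",
     "accessModes": ["ReadWriteOnce"], "capacityGi": 30},
    {"name": "gvp-rm-logs-01", "storageClassName": "gvp",
     "accessModes": ["ReadWriteOnce"], "capacityGi": 10},
]

# A 5Gi log claim against storage class "gvp" can bind the 10Gi log PV.
log_claim = {"storageClassName": "gvp", "accessMode": "ReadWriteOnce", "sizeGi": 5}
assert any(can_bind(pv, log_claim) for pv in pvs)
```

A claim that requests a different storage class (for example, a dynamically provisioned one) will not bind these NFS PVs.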
  
Create the following PVs, which are required for the service deployment.
+
Execute the following command:
{{{!}} class="wikitable" data-macro-name="warning" data-macro-id="6f74f414-0e53-4c9f-bb6c-babd5adf14b2" data-macro-parameters="icon=false{{!}}title=Note Regarding Persistent Volumes" data-macro-schema-version="1" data-macro-body-type="RICH_TEXT"
+
<source lang="bash">
{{!}} class="wysiwyg-macro-body"{{!}}If your OpenShift deployment is capable of self-provisioning Persistent Volumes, this step can be skipped. Volumes will be created by the provisioner.
+
kubectl create -f gvp-rm-logs-02-pv.yaml
{{!}}}<u>gvp-rm-01</u>
+
</source>
{{{!}} class="wikitable" data-macro-name="code" data-macro-id="e41390d2-1784-4d9a-8773-e78bab94395d" data-macro-parameters="language=yml{{!}}theme=Emacs{{!}}title=gvp-rm-01-pv.yaml" data-macro-schema-version="1" data-macro-body-type="PLAIN_TEXT"
 
{{!}} class="wysiwyg-macro-body"{{!}}
 
apiVersion: v1
 
kind: PersistentVolume
 
metadata:
 
  name: gvp-rm-01
 
spec:
 
  capacity:
 
    storage: 30Gi
 
  accessModes:
 
    - ReadWriteOnce
 
  persistentVolumeReclaimPolicy: Retain
 
  storageClassName: gvp
 
  nfs:
 
    path:  /export/vol1/PAT/gvp/rm-01
 
    server: 192.168.30.51
 
{{!}}}Execute the following command:
 
{{{!}} class="wikitable" data-macro-name="code" data-macro-id="e5887dce-583f-40fa-a048-93e7d5181941" data-macro-parameters="language=bash{{!}}theme=Emacs" data-macro-schema-version="1" data-macro-body-type="PLAIN_TEXT"
 
{{!}} class="wysiwyg-macro-body"{{!}}
 
oc create -f gvp-rm-01-pv.yaml
 
{{!}}}<u>gvp-rm-02</u>
 
{{{!}} class="wikitable" data-macro-name="code" data-macro-id="a578b363-4be7-4f31-adf2-250aa32e50a8" data-macro-parameters="language=yml{{!}}theme=Emacs{{!}}title=gvp-rm-02-pv.yaml" data-macro-schema-version="1" data-macro-body-type="PLAIN_TEXT"
 
{{!}} class="wysiwyg-macro-body"{{!}}
 
apiVersion: v1
 
kind: PersistentVolume
 
metadata:
 
  name: gvp-rm-02
 
spec:
 
  capacity:
 
    storage: 30Gi
 
  accessModes:
 
    - ReadWriteOnce
 
  persistentVolumeReclaimPolicy: Retain
 
  storageClassName: gvp
 
  nfs:
 
    path:  /export/vol1/PAT/gvp/rm-02
 
    server: 192.168.30.51
 
{{!}}}Execute the following command:
 
{{{!}} class="wikitable" data-macro-name="code" data-macro-id="979e11e1-9e95-49b7-898b-6be134d81e38" data-macro-parameters="language=bash{{!}}theme=Emacs" data-macro-schema-version="1" data-macro-body-type="PLAIN_TEXT"
 
{{!}} class="wysiwyg-macro-body"{{!}}
 
oc create -f gvp-rm-02-pv.yaml
 
{{!}}}<u>gvp-rm-logs-01</u>
 
{{{!}} class="wikitable" data-macro-name="code" data-macro-id="e65b20dd-32cd-4a01-b206-40349434e269" data-macro-parameters="language=yml{{!}}theme=Emacs{{!}}title=gvp-rm-logs-01-pv.yaml" data-macro-schema-version="1" data-macro-body-type="PLAIN_TEXT"
 
{{!}} class="wysiwyg-macro-body"{{!}}
 
apiVersion: v1
 
kind: PersistentVolume
 
metadata:
 
  name: gvp-rm-logs-01
 
spec:
 
  capacity:
 
    storage: 10Gi
 
  accessModes:
 
    - ReadWriteOnce
 
  persistentVolumeReclaimPolicy: Recycle
 
  storageClassName: gvp
 
  nfs:
 
    path:  /export/vol1/PAT/gvp/rm-logs-01
 
    server: 192.168.30.51
 
{{!}}}Execute the following command:
 
{{{!}} class="wikitable" data-macro-name="code" data-macro-id="ca609291-3587-45f2-8aa9-3a234715ef48" data-macro-parameters="language=bash{{!}}theme=Emacs" data-macro-schema-version="1" data-macro-body-type="PLAIN_TEXT"
 
{{!}} class="wysiwyg-macro-body"{{!}}
 
oc create -f gvp-rm-logs-01-pv.yaml
 
{{!}}}<u>gvp-rm-logs-02</u>
 
{{{!}} class="wikitable" data-macro-name="code" data-macro-id="51d6d4d5-4b16-4a54-a582-e8ad0eca64c4" data-macro-parameters="language=yml{{!}}theme=Emacs{{!}}title=gvp-rm-logs-02-pv.yaml" data-macro-schema-version="1" data-macro-body-type="PLAIN_TEXT"
 
{{!}} class="wysiwyg-macro-body"{{!}}
 
apiVersion: v1
 
kind: PersistentVolume
 
metadata:
 
  name: gvp-rm-logs-02
 
spec:
 
  capacity:
 
    storage: 10Gi
 
  accessModes:
 
    - ReadWriteOnce
 
  persistentVolumeReclaimPolicy: Recycle
 
  storageClassName: gvp
 
  nfs:
 
    path:  /export/vol1/PAT/gvp/rm-logs-02
 
    server: 192.168.30.51
 
{{!}}}Execute the following command:
 
{{{!}} class="wikitable" data-macro-name="code" data-macro-id="0e81e8e1-e57c-446a-ade3-b0de9b2e2c66" data-macro-parameters="language=bash{{!}}theme=Emacs" data-macro-schema-version="1" data-macro-body-type="PLAIN_TEXT"
 
{{!}} class="wysiwyg-macro-body"{{!}}
 
oc create -f gvp-rm-logs-02-pv.yaml
 
{{!}}}
 
 
===Install Helm chart===
 
 
Download the required Helm chart release from the JFrog repository and install. Refer to {{Link-AnywhereElse|product=GVP|version=Current|manual=GVPPEGuide|topic=Deploy|anchor=HelmchaURLs|display text=Helm Chart URLs}}.
 
{{{!}} class="wikitable" data-macro-name="code" data-macro-id="18d7d1d8-bad3-4ebc-851c-10bf3d232edb" data-macro-parameters="language=bash{{!}}theme=Emacs{{!}}title=Install Helm Chart" data-macro-schema-version="1" data-macro-body-type="PLAIN_TEXT"
+
<source lang="bash">
{{!}} class="wysiwyg-macro-body"{{!}}
+
helm install gvp-rm ./<gvp-rm-helm-artifact> -f gvp-rm-values.yaml
helm install gvp-rm ./<gvp-rm-helm-artifact> -f gvp-rm-values.yaml
+
</source>
{{!}}}At minimum, the following values must be updated in your values.yaml:
+
 
 +
You must set the following values in your values.yaml for Configuration Server:
  
*<docker-repo> - Set to your Docker Repo with Private Edition Artifacts
+
*priorityClassName >> Set to a priority class that exists on the cluster (or create one)
*<credential-name> - Set to your pull secret name
+
*imagePullSecrets >> Set to your pull secret name
 +
*Set cfgServerSecretName if you changed it from the default
  
{{{!}} class="wikitable" data-macro-name="code" data-macro-id="f7c33da8-5ba3-41fb-8f92-cc804bfb7809" data-macro-parameters="language=yml{{!}}theme=Emacs{{!}}title=gvp-rm-values.yaml" data-macro-schema-version="1" data-macro-body-type="PLAIN_TEXT"
+
'''gvp-rm-values.yaml'''
{{!}} class="wysiwyg-macro-body"{{!}}
+
<source lang="bash">
## Global Parameters
+
## Global Parameters
## Add labels to all the deployed resources
+
## Add labels to all the deployed resources
##
+
##
labels:
+
labels:
  enabled: true
+
  enabled: true
  serviceGroup: "gvp"
+
  serviceGroup: "gvp"
  componentType: "shared"
+
  componentType: "shared"
 +
 +
## Primary App Configuration
 +
##
 +
# primaryApp:
 +
# type: ReplicaSet
 +
# Should include the defaults for replicas
 +
deployment:
 +
  replicaCount: 2
 +
  deploymentEnv: "UPDATE_ENV"
 +
  namespace: gvp
 +
  clusterDomain: "svc.cluster.local"
 +
nameOverride: ""
 +
fullnameOverride: ""
 +
 +
image:
 +
  registry: pureengage-docker-staging.jfrog.io
 +
  gvprmrepository: gvp/gvp_rm
 +
  cfghandlerrepository: gvp/gvp_rm_cfghandler
 +
  snmprepository: gvp/gvp_snmp
 +
  gvprmtestrepository: gvp/gvp_rm_test
 +
  cfghandlertag:
 +
  rmtesttag:
 +
  rmtag:
 +
  snmptag: v9.0.040.07
 +
  pullPolicy: Always
 +
  imagePullSecrets:
 +
    - name: "pureengage-docker-staging"
 
   
 
   
## Primary App Configuration
+
dnsConfig:
##
+
  options:
# primaryApp:
+
    - name: ndots
# type: ReplicaSet
+
      value: "3"
# Should include the defaults for replicas
 
deployment:
 
  replicaCount: 2
 
  deploymentEnv: "UPDATE_ENV"
 
  namespace: gvp
 
  clusterDomain: "svc.cluster.local"
 
nameOverride: ""
 
fullnameOverride: ""
 
 
   
 
   
image:
+
# Pod termination grace period 15 mins.
  registry: <docker-repo>
+
gracePeriodSeconds: 900
  gvprmrepository: gvp/gvp_rm
 
  cfghandlerrepository: gvp/gvp_rm_cfghandler
 
  snmprepository: gvp/gvp_snmp
 
  gvprmtestrepository: gvp/gvp_rm_test
 
  cfghandlertag:
 
  rmtesttag:
 
  rmtag:
 
  snmptag: v9.0.040.07
 
  pullPolicy: Always
 
  imagePullSecrets:
 
    - name: "<credential-name>"
 
 
   
 
   
dnsConfig:
+
## liveness and readiness probes
  options:
+
## !!! THESE OPTIONS SHOULD NOT BE CHANGED UNLESS INSTRUCTED BY GENESYS !!!
    - name: ndots
+
livenessValues:
      value: "3"
+
  path: /rm/liveness
 +
  initialDelaySeconds: 60
 +
  periodSeconds: 90
 +
  timeoutSeconds: 20
 +
  failureThreshold: 3
 
   
 
   
# Pod termination grace period 15 mins.
+
readinessValues:
gracePeriodSeconds: 900
+
  path: /rm/readiness
 +
  initialDelaySeconds: 10
 +
  periodSeconds: 60
 +
  timeoutSeconds: 20
 +
  failureThreshold: 3
 
   
 
   
## liveness and readiness probes
+
## PVCs defined
## !!! THESE OPTIONS SHOULD NOT BE CHANGED UNLESS INSTRUCTED BY GENESYS !!!
+
volumes:
livenessValues:
+
  billingpvc:
  path: /rm/liveness
+
    storageClass: managed-premium
  initialDelaySeconds: 60
+
    claimSize: 20Gi
  periodSeconds: 90
+
    mountPath: "/rm"
  timeoutSeconds: 20
+
  logpvc:
  failureThreshold: 3
+
    EnablePVForLogStorage: true
 +
    storageClass: managed-premium
 +
    claimSize: 5Gi
 +
    accessMode: ReadWriteOnce
 +
    mountPath: "/mnt/log"
 +
    # If PV-based log storage is disabled (EnablePVForLogStorage: false), the given host path is used for log storage.
 +
    LogStorageHostPath: /mnt/log
 
   
 
   
  readinessValues:
+
## Define service(s) for application. Fields may need to be modified based on `type`
  path: /rm/readiness
+
service:
  initialDelaySeconds: 10
+
  type: ClusterIP
  periodSeconds: 60
+
  port: 5060
  timeoutSeconds: 20
+
  rmHealthCheckAPIPort: 8300
  failureThreshold: 3
 
 
   
 
   
## PVCs defined
+
## ConfigMaps with Configuration
volumes:
+
## Use Config Map for creating environment variables
  billingpvc:
+
context:
    storageClass: managed-premium
+
  env:
    claimSize: 20Gi
+
    cfghandler:
    mountPath: "/rm"
+
      CFGSERVER: gvp-configserver.gvp.svc.cluster.local
 
+
      CFGSERVERBACKUP: gvp-configserver.gvp.svc.cluster.local
## Define RM log storage volume type
+
      CFGPORT: "8888"
rmLogStorage:
+
      CFGAPP: "default"
  volumeType:
+
      RMAPP: "azure_rm"
    persistentVolume:
+
      RMFOLDER: "Applications\\RM_MicroService\\RM_Apps"
      enabled: false
+
      HOSTFOLDER: "Hosts\\RM_MicroService"
      storageClass: disk-premium
+
      MCPFOLDER: "MCP_Configuration_Unit\\MCP_LRG"
      claimSize: 50Gi
+
      SNMPFOLDER: "Applications\\RM_MicroService\\SNMP_Apps"
      accessMode: ReadWriteOnce
+
      EnvironmentType: "prod"
    hostPath:
+
      CONFSERVERAPP: "confserv"
      enabled: true
+
      RSAPP: "azure_rs"
      path: /mnt/log
+
      SNMPAPP: "azure_rm_snmp"
    emptyDir:
+
      STDOUT: "true"
      enabled: false
+
      VOICEMAILSERVICEDIDNUMBER: "55551111"
  containerMountPath:
 
    path: /mnt/log
 
 
 
## FluentBit Settings
 
  fluentBitSidecar:
 
  enabled: false
 
 
 
## Define service(s) for application. Fields may need to be modified based on `type`
 
service:
 
  type: ClusterIP
 
  port: 5060
 
  rmHealthCheckAPIPort: 8300
 
 
   
 
   
## ConfigMaps with Configuration
+
  RMCONFIG:
## Use Config Map for creating environment variables
+
    rm:
context:
+
      sip-header-for-dnis: "Request-Uri"
  env:
+
      ignore-gw-lrg-configuration: "true"
    cfghandler:
+
      ignore-ruri-tenant-dbid: "true"
      CFGSERVER: gvp-configserver.gvp.svc.cluster.local
+
    log:
      CFGSERVERBACKUP: gvp-configserver.gvp.svc.cluster.local
+
      verbose: "trace"
      CFGPORT: "8888"
+
    subscription:
      CFGAPP: "default"
+
      sip.transport.dnsharouting: "true"
      RMAPP: "azure_rm"
+
      sip.headerutf8verification: "false"
      RMFOLDER: "Applications\\RM_MicroService\\RM_Apps"
+
      sip.transport.setuptimer.tcp: "5000"
      HOSTFOLDER: "Hosts\\RM_MicroService"
+
      sip.threadpoolsize: "1"
      MCPFOLDER: "MCP_Configuration_Unit\\MCP_LRG"
+
    registrar:
      SNMPFOLDER: "Applications\\RM_MicroService\\SNMP_Apps"
+
      sip.transport.dnsharouting: "true"
      EnvironmentType: "prod"
+
      sip.headerutf8verification: "false"
      CONFSERVERAPP: "confserv"
+
      sip.transport.setuptimer.tcp: "5000"
      RSAPP: "azure_rs"
+
      sip.threadpoolsize: "1"
      SNMPAPP: "azure_rm_snmp"
+
    proxy:
      STDOUT: "true"
+
      sip.transport.dnsharouting: "true"
      VOICEMAILSERVICEDIDNUMBER: "55551111"
+
      sip.headerutf8verification: "false"
 +
      sip.transport.setuptimer.tcp: "5000"
 +
      sip.threadpoolsize: "16"
 +
      sip.maxtcpconnections: "1000"
 +
    monitor:
 +
      sip.transport.dnsharouting: "true"
 +
      sip.maxtcpconnections: "1000"
 +
      sip.headerutf8verification: "false"
 +
      sip.transport.setuptimer.tcp: "5000"
 +
      sip.threadpoolsize: "1"
 +
    ems:
 +
      rc.cdr.local_queue_path: "/rm/ems/data/cdrQueue_rm.db"
 +
      rc.ors.local_queue_path: "/rm/ems/data/orsQueue_rm.db"
 
   
 
   
  RMCONFIG:
+
# Default secrets storage to k8s secrets with csi able to be optional
    rm:
+
secret:
      sip-header-for-dnis: "Request-Uri"
+
  # keyVaultSecret is a flag to switch between secret types (k8s or CSI). If keyVaultSecret is set to false, k8s secrets are used
      ignore-gw-lrg-configuration: "true"
+
  keyVaultSecret: false
      ignore-ruri-tenant-dbid: "true"
+
  # If keyVaultSecret is set to false, the below parameters are not used.
    log:
+
  configserverProviderClassName: gvp-configserver-secret
      verbose: "trace"
+
  cfgSecretFileNameForCfgUsername: configserver-username
    subscription:
+
  cfgSecretFileNameForCfgPassword: configserver-password
      sip.transport.dnsharouting: "true"
+
  # If keyVaultSecret is set to true, the below parameters are not used.
      sip.headerutf8verification: "false"
+
  cfgServerSecretName: configserver-secret
      sip.transport.setuptimer.tcp: "5000"
+
  cfgSecretKeyNameForCfgUsername: username
      sip.threadpoolsize: "1"
+
  cfgSecretKeyNameForCfgPassword: password
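The keyVaultSecret flag above switches between CSI-mounted secrets and plain Kubernetes secrets. The following is a minimal sketch of that selection logic in plain Python (illustrative only, not Genesys code); the field names mirror the values above.

```python
# Illustrative sketch of how the keyVaultSecret flag selects the secret source.

def secret_source(values: dict) -> dict:
    secret = values["secret"]
    if secret["keyVaultSecret"]:
        # CSI secret store: provider class plus file names inside the mount
        return {
            "type": "csi",
            "providerClass": secret["configserverProviderClassName"],
            "usernameFile": secret["cfgSecretFileNameForCfgUsername"],
        }
    # Plain Kubernetes secret: secret name plus key names
    return {
        "type": "k8s",
        "secretName": secret["cfgServerSecretName"],
        "usernameKey": secret["cfgSecretKeyNameForCfgUsername"],
    }

values = {"secret": {
    "keyVaultSecret": False,
    "configserverProviderClassName": "gvp-configserver-secret",
    "cfgSecretFileNameForCfgUsername": "configserver-username",
    "cfgServerSecretName": "configserver-secret",
    "cfgSecretKeyNameForCfgUsername": "username",
}}
assert secret_source(values)["type"] == "k8s"
```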
    registrar:
 
      sip.transport.dnsharouting: "true"
 
      sip.headerutf8verification: "false"
 
      sip.transport.setuptimer.tcp: "5000"
 
      sip.threadpoolsize: "1"
 
    proxy:
 
      sip.transport.dnsharouting: "true"
 
      sip.headerutf8verification: "false"
 
      sip.transport.setuptimer.tcp: "5000"
 
      sip.threadpoolsize: "16"
 
      sip.maxtcpconnections: "1000"
 
    monitor:
 
      sip.transport.dnsharouting: "true"
 
      sip.maxtcpconnections: "1000"
 
      sip.headerutf8verification: "false"
 
      sip.transport.setuptimer.tcp: "5000"
 
      sip.threadpoolsize: "1"
 
    ems:
 
      rc.cdr.local_queue_path: "/rm/ems/data/cdrQueue_rm.db"
 
      rc.ors.local_queue_path: "/rm/ems/data/orsQueue_rm.db"
 
 
   
 
   
# Default secrets storage to k8s secrets with csi able to be optional
+
## Ingress configuration
secret:
+
ingress:
  # keyVaultSecret is a flag to switch between secret types (k8s or CSI). If keyVaultSecret is set to false, k8s secrets are used
+
  enabled: false
  keyVaultSecret: false
+
  annotations: {}
  # If keyVaultSecret is set to false, the below parameters are not used.
+
    # kubernetes.io/ingress.class: nginx
  configserverProviderClassName: gvp-configserver-secret
+
  # kubernetes.io/tls-acme: "true"
  cfgSecretFileNameForCfgUsername: configserver-username
+
  paths: []
   cfgSecretFileNameForCfgPassword: configserver-password
+
  hosts:
  # If keyVaultSecret is set to true, the below parameters are not used.
+
    - chart-example.local
  cfgServerSecretName: configserver-secret
+
  tls: []
  cfgSecretKeyNameForCfgUsername: username
+
  #  - secretName: chart-example-tls
  cfgSecretKeyNameForCfgPassword: password
+
  #   hosts:
 +
  #     - chart-example.local
 +
networkPolicies:
 +
  enabled: false
 +
sip:
 +
  serviceName: sipnode
 
   
 
   
## Ingress configuration
+
## primaryApp resource requests and limits
ingress:
+
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  enabled: false
+
##
  annotations: {}
+
resourceForRM:
    # kubernetes.io/ingress.class: nginx
+
  # We usually recommend not to specify default resources and to leave this as a conscious
  # kubernetes.io/tls-acme: "true"
+
  # choice for the user. This also increases chances charts run on environments with little
  paths: []
+
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  hosts:
+
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
    - chart-example.local
+
  requests:
  tls: []
+
    memory: "1Gi"
  #  - secretName: chart-example-tls
+
    cpu: "200m"
  #    hosts:
+
    ephemeral-storage: "10Gi"
  #      - chart-example.local
+
  limits:
networkPolicies:
+
    memory: "2Gi"
  enabled: false
+
    cpu: "250m"
sip:
 
  serviceName: sipnode
 
 
   
 
   
## primaryApp resource requests and limits
+
resoueceForSnmp:
## ref: <nowiki>http://kubernetes.io/docs/user-guide/compute-resources/</nowiki>
+
  requests:
##
+
    memory: "500Mi"
resourceForRM:
+
    cpu: "100m"
  # We usually recommend not to specify default resources and to leave this as a conscious
+
  limits:
  # choice for the user. This also increases chances charts run on environments with little
+
    memory: "1Gi"
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
+
    cpu: "150m"
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
 
  requests:
 
    memory: "1Gi"
 
    cpu: "200m"
 
    ephemeral-storage: "10Gi"
 
  limits:
 
    memory: "2Gi"
 
    cpu: "250m"
 
 
   
 
   
resoueceForSnmp:
+
## primaryApp containers' Security Context
  requests:
+
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
    memory: "500Mi"
+
##
    cpu: "100m"
+
## Containers should run as genesys user and cannot use elevated permissions
  limits:
+
securityContext:
    memory: "1Gi"
+
  fsGroup: 500
    cpu: "150m"
+
  runAsNonRoot: true
 +
  runAsUserRM: 500
 +
  runAsGroupRM: 500
 +
  runAsUserCfghandler: 500
 +
  runAsGroupCfghandler: 500
 
   
 
   
## primaryApp containers' Security Context
+
## Priority Class
## ref: <nowiki>https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container</nowiki>
+
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
##
+
##
## Containers should run as genesys user and cannot use elevated permissions
+
priorityClassName: ""
securityContext:
 
  fsGroup: 500
 
  runAsNonRoot: true
 
  runAsUserRM: 500
 
  runAsGroupRM: 500
 
  runAsUserCfghandler: 500
 
  runAsGroupCfghandler: 500
 
 
   
 
   
## Priority Class
+
## Affinity for assignment.
## ref: <nowiki>https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/</nowiki>
+
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
+
##
priorityClassName: ""
+
affinity: {}
 
   
 
   
## Affinity for assignment.
+
## Node labels for assignment.
## Ref: <nowiki>https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity</nowiki>
+
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
+
##
affinity:  
+
nodeSelector:
 
   
 
   
## Node labels for assignment.
 
## ref: <nowiki>https://kubernetes.io/docs/user-guide/node-selection/</nowiki>
 
##
 
nodeSelector:
 
 
   
 
   
 +
## Tolerations for assignment.
 +
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
 +
##
 +
tolerations: []
 
   
 
   
## Tolerations for assignment.
+
## Service/Pod Monitoring Settings
## ref: <nowiki>https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/</nowiki>
+
monitoring:
##
+
  podMonitorEnabled: true
tolerations: []
+
  prometheusRulesEnabled: true
 +
  grafanaEnabled: true
 
   
 
   
## Service/Pod Monitoring Settings
+
monitor:
prometheus:
+
   monitorName: gvp-monitoring
   enabled: true
+
   prometheusPort: 9116
   metric:
+
  prometheusPortlogs: 8200
    port: 9116
+
  logFilePrefixName: RM
+
  module: [if_mib]
# Enable for Prometheus operator
+
  target: [127.0.0.1:1161]
podMonitor:
+
</source>
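From the probe settings in the values file above, the worst-case time to detect and restart a dead RM container can be estimated as initialDelaySeconds + periodSeconds × failureThreshold, plus the final probe timeout. A small illustrative calculation (plain Python, values copied from the defaults above):

```python
# Estimate the worst-case detection window implied by the probe values above.
# A container is restarted after failureThreshold consecutive liveness
# failures, so the rough upper bound from container start is:
#   initialDelaySeconds + periodSeconds * failureThreshold + timeoutSeconds

def worst_case_detection(probe: dict) -> int:
    return (probe["initialDelaySeconds"]
            + probe["periodSeconds"] * probe["failureThreshold"]
            + probe["timeoutSeconds"])

liveness = {"initialDelaySeconds": 60, "periodSeconds": 90,
            "timeoutSeconds": 20, "failureThreshold": 3}
readiness = {"initialDelaySeconds": 10, "periodSeconds": 60,
             "timeoutSeconds": 20, "failureThreshold": 3}

assert worst_case_detection(liveness) == 350   # just under 6 minutes
assert worst_case_detection(readiness) == 210  # 3.5 minutes
```

This is why the values file warns against changing these options without instruction from Genesys: shortening the periods makes restarts more aggressive, while lengthening them delays failure detection.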
  enabled: true
+
 
  metric:
 
    path: /snmp
 
    module: [ if_mib ]
 
    target: [ 127.0.0.1:1161 ]
 
 
monitoring:
 
  prometheusRulesEnabled: true
 
  grafanaEnabled: true
 
   
 
monitor:
 
  monitorName: gvp-monitoring
 
{{!}}}
 
 
===Verify the deployed resources===
 
 
Verify the deployed resources from the OpenShift console/CLI.
 
====Deployment validation - success====
+
|Status=No
1. Log in to the console and check if the gvp-rm-0 and gvp-rm-1 pods are ready and running: "oc get pods -o wide".[[File:RM_Deploy_success_1.png|none|800px|RM_Deploy_success_1|link=https://all.docs.genesys.com/File:RM_Deploy_success_1.png]]2. Describe each pod and check if both liveness and readiness probes are passing: "oc describe pod gvp-rm-0 / oc describe pod gvp-rm-1".
+
}}{{Section
 +
|sectionHeading=5. GVP Media Control Platform
 +
|anchor=GVPMCP
 +
|alignment=Vertical
 +
|structuredtext====Persistent Volumes creation===
 +
Create the following PVs, which are required for the service deployment.
 +
 
 +
gvp-mcp-logs-01
 +
 
 +
'''gvp-mcp-logs-01-pv.yaml'''
 +
 
 +
<source lang="bash">
 +
apiVersion: v1
 +
kind: PersistentVolume
 +
metadata:
 +
  name: gvp-mcp-logs-01
 +
spec:
 +
  capacity:
 +
    storage: 10Gi
 +
  accessModes:
 +
    - ReadWriteOnce
 +
  persistentVolumeReclaimPolicy: Recycle
 +
  storageClassName: gvp
 +
  nfs:
 +
    path:  /export/vol1/PAT/gvp/mcp-logs-01
 +
    server: 192.168.30.51
 +
</source>
 +
 
 +
Execute the following command:
 +
<source lang="bash">
 +
kubectl create -f gvp-mcp-logs-01-pv.yaml
 +
</source>
 +
 
 +
gvp-mcp-logs-02
 +
 
 +
'''gvp-mcp-logs-02-pv.yaml'''
 +
 
 +
<source lang="bash">
 +
apiVersion: v1
 +
kind: PersistentVolume
 +
metadata:
 +
  name: gvp-mcp-logs-02
 +
spec:
 +
  capacity:
 +
    storage: 10Gi
 +
  accessModes:
 +
    - ReadWriteOnce
 +
  persistentVolumeReclaimPolicy: Recycle
 +
  storageClassName: gvp
 +
  nfs:
 +
    path:  /export/vol1/PAT/gvp/mcp-logs-02
 +
    server: 192.168.30.51
 +
</source>
 +
 
 +
Execute the following command:
 +
<source lang="bash">
 +
kubectl create -f gvp-mcp-logs-02-pv.yaml
 +
</source>
 +
 
 +
gvp-mcp-rup-volume-01
 +
 
 +
'''gvp-mcp-rup-volume-01-pv.yaml'''
 +
 
 +
<source lang="bash">
 +
apiVersion: v1
 +
kind: PersistentVolume
 +
metadata:
 +
  name: gvp-mcp-rup-volume-01
 +
spec:
 +
  capacity:
 +
    storage: 40Gi
 +
  accessModes:
 +
    - ReadWriteOnce
 +
  persistentVolumeReclaimPolicy: Recycle
 +
  storageClassName: disk-premium
 +
  nfs:
 +
    path:  /export/vol1/PAT/gvp/mcp-logs-01
 +
    server: 192.168.30.51
 +
</source>
 +
 
 +
Execute the following command:
 +
<source lang="bash">
 +
kubectl create -f gvp-mcp-rup-volume-01-pv.yaml
 +
</source>
 +
 
 +
gvp-mcp-rup-volume-02
  
3. LRG options configured by Resource Manager can be changed using the LRG configuration section in values.yaml. For example:
+
'''gvp-mcp-rup-volume-02-pv.yaml'''
LRGConfig:
 
    gvp.lrg:
 
      load-balance-scheme: "round-robin"
 
4. When Resource Manager is deployed, it creates the LRG "MCP_Configuration_Unit\\MCP_LRG", a configuration unit, and default IVR Profiles in the Environment tenant.
 
  
For information on provisioning a new tenant, refer to {{Link-SomewhereInThisVersion|manual=GVPPEGuide|topic=Provision}}.
+
<source lang="bash">
 +
apiVersion: v1
 +
kind: PersistentVolume
 +
metadata:
 +
  name: gvp-mcp-rup-volume-02
 +
spec:
 +
  capacity:
 +
    storage: 40Gi
 +
  accessModes:
 +
    - ReadWriteOnce
 +
  persistentVolumeReclaimPolicy: Recycle
 +
  storageClassName: disk-premium
 +
  nfs:
 +
    path:  /export/vol1/PAT/gvp/mcp-logs-02
 +
    server: 192.168.30.51
 +
</source>
  
Resource Manager uses gvp-tenant-id, the contact center ID, and the media service type coming from SIP Server in the INVITE message to identify the tenant and pick the IVR Profiles.
+
Execute the following command:
====Deployment validation - failure====
+
<source lang="bash">
To debug a deployment failure, do the following:
+
kubectl create -f gvp-mcp-rup-volume-02-pv.yaml
 +
</source>
  
1. Log in to the console and check if the gvp-rm-0 and gvp-rm-1 pods are ready and running: "oc get pods -o wide".
+
gvp-mcp-recording-volume-01
  
2. If the RM container is continuously restarting, check the liveness and readiness probe status.
+
'''gvp-mcp-recordings-volume-01-pv.yaml'''
  
3. Describe the RM pods to check the liveness and readiness probe status: "oc describe pod gvp-rm-0 / oc describe pod gvp-rm-1".
+
<source lang="bash">
 +
apiVersion: v1
 +
kind: PersistentVolume
 +
metadata:
 +
  name: gvp-mcp-recording-volume-01
 +
spec:
 +
  capacity:
 +
    storage: 40Gi
 +
  accessModes:
 +
    - ReadWriteOnce
 +
  persistentVolumeReclaimPolicy: Recycle
 +
  storageClassName: gvp
 +
  nfs:
 +
    path:  /export/vol1/PAT/gvp/mcp-logs-01
 +
    server: 192.168.30.51
 +
</source>
  
4. If probe failures are observed, check for MCP availability and MCP applications in the RM LRG. Check the RM logs to find the root cause.
+
Execute the following command:
 +
<source lang="bash">
 +
kubectl create -f gvp-mcp-recordings-volume-01-pv.yaml
 +
</source>
 +
gvp-mcp-recording-volume-02
  
5. If the rm-init container is failing, check connectivity between RM and Configuration Server, and check whether the RM configuration details are properly configured.
+
'''gvp-mcp-recordings-volume-02-pv.yaml'''
|Status=No
+
<source lang="bash">
}}{{Section
+
apiVersion: v1
|sectionHeading=5. GVP Media Control Platform
+
kind: PersistentVolume
|anchor=GVPMCP
+
metadata:
|alignment=Vertical
+
  name: gvp-mcp-recording-volume-02
|structuredtext====Persistent Volumes creation===
+
spec:
'''Note''': The steps for PV creation can be skipped if OCS is used to auto-provision the persistent volumes.
+
  capacity:
 +
    storage: 40Gi
 +
  accessModes:
 +
    - ReadWriteOnce
 +
  persistentVolumeReclaimPolicy: Recycle
 +
  storageClassName: gvp
 +
  nfs:
 +
    path:  /export/vol1/PAT/gvp/mcp-logs-02
 +
    server: 192.168.30.51
 +
</source>
 +
 
 +
Execute the following command:
 +
<source lang="bash">
 +
kubectl create -f gvp-mcp-recordings-volume-02-pv.yaml
 +
</source>
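After creating the volumes above, the expected PV names can be checked against the output of `kubectl get pv -o json`. The following is an illustrative parser in plain Python (not Genesys tooling); it runs on captured JSON output rather than a live cluster.

```python
import json

# Illustrative check: compare the PV names expected by this section against
# the JSON output of `kubectl get pv -o json` (captured to a string).

EXPECTED = {
    "gvp-mcp-logs-01", "gvp-mcp-logs-02",
    "gvp-mcp-rup-volume-01", "gvp-mcp-rup-volume-02",
    "gvp-mcp-recording-volume-01", "gvp-mcp-recording-volume-02",
}

def missing_pvs(kubectl_json: str) -> set:
    """Return the expected PV names absent from the kubectl JSON output."""
    items = json.loads(kubectl_json)["items"]
    present = {item["metadata"]["name"] for item in items}
    return EXPECTED - present

# Sample output shaped like `kubectl get pv -o json` with all PVs present.
sample = json.dumps({"items": [{"metadata": {"name": n}} for n in EXPECTED]})
assert missing_pvs(sample) == set()
```

If any names are reported missing, re-run the corresponding `kubectl create -f` command for that manifest before installing the Helm chart.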
  
Create the following PVs, which are required for the service deployment.
 
{{{!}} class="wikitable" data-macro-name="warning" data-macro-id="58240037-95e5-4c9a-907a-de105566d7af" data-macro-parameters="icon=false{{!}}title=Note Regarding Persistent Volumes" data-macro-schema-version="1" data-macro-body-type="RICH_TEXT"
 
{{!}} class="wysiwyg-macro-body"{{!}}If your OpenShift deployment is capable of self-provisioning Persistent Volumes, this step can be skipped. Volumes will be created by the provisioner.
 
{{!}}}<u>gvp-mcp-logs-01</u>
 
{{{!}} class="wikitable" data-macro-name="code" data-macro-id="cae04f19-fc92-4408-b8a2-2e5c65288721" data-macro-parameters="language=yml{{!}}theme=Emacs{{!}}title=gvp-mcp-logs-01-pv.yaml" data-macro-schema-version="1" data-macro-body-type="PLAIN_TEXT"
 
{{!}} class="wysiwyg-macro-body"{{!}}
 
apiVersion: v1
 
kind: PersistentVolume
 
metadata:
 
  name: gvp-mcp-logs-01
 
spec:
 
  capacity:
 
    storage: 10Gi
 
  accessModes:
 
    - ReadWriteOnce
 
  persistentVolumeReclaimPolicy: Recycle
 
  storageClassName: gvp
 
  nfs:
 
    path:  /export/vol1/PAT/gvp/mcp-logs-01
 
    server: 192.168.30.51
 
{{!}}}Execute the following command:
 
{{{!}} class="wikitable" data-macro-name="code" data-macro-id="0498ef92-8a60-4540-bd3f-18330eb80638" data-macro-parameters="language=bash{{!}}theme=Emacs" data-macro-schema-version="1" data-macro-body-type="PLAIN_TEXT"
 
{{!}} class="wysiwyg-macro-body"{{!}}
 
oc create -f gvp-mcp-logs-01-pv.yaml
 
{{!}}}<u>gvp-mcp-logs-02</u>
 
{{{!}} class="wikitable" data-macro-name="code" data-macro-id="ac97e051-a284-4cd3-9284-89b33e9d61b6" data-macro-parameters="language=yml{{!}}theme=Emacs{{!}}title=gvp-mcp-logs-02-pv.yaml" data-macro-schema-version="1" data-macro-body-type="PLAIN_TEXT"
 
{{!}} class="wysiwyg-macro-body"{{!}}
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gvp-mcp-logs-02
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: gvp
  nfs:
    path: /export/vol1/PAT/gvp/mcp-logs-02
    server: 192.168.30.51
{{!}}}Execute the following command:

{{{!}} class="wikitable" data-macro-name="code" data-macro-id="e6f89699-84e9-4e1e-9fa0-0cfcc0af49e1" data-macro-parameters="language=bash{{!}}theme=Emacs" data-macro-schema-version="1" data-macro-body-type="PLAIN_TEXT"
{{!}} class="wysiwyg-macro-body"{{!}}
oc create -f gvp-mcp-logs-02-pv.yaml
{{!}}}<u>gvp-mcp-rup-volume-01</u>

{{{!}} class="wikitable" data-macro-name="code" data-macro-id="9e37c8b0-1701-46ea-8389-2a1de78f205a" data-macro-parameters="language=yml{{!}}theme=Emacs{{!}}title=gvp-mcp-rup-volume-01-pv.yaml" data-macro-schema-version="1" data-macro-body-type="PLAIN_TEXT"
{{!}} class="wysiwyg-macro-body"{{!}}
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gvp-mcp-rup-volume-01
spec:
  capacity:
    storage: 40Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: disk-premium
  nfs:
    path: /export/vol1/PAT/gvp/mcp-logs-01
    server: 192.168.30.51
{{!}}}Execute the following command:

{{{!}} class="wikitable" data-macro-name="code" data-macro-id="8a1a959c-bd49-48a5-9a66-8d584e11b5bd" data-macro-parameters="language=bash{{!}}theme=Emacs" data-macro-schema-version="1" data-macro-body-type="PLAIN_TEXT"
{{!}} class="wysiwyg-macro-body"{{!}}
oc create -f gvp-mcp-rup-volume-01-pv.yaml
{{!}}}<u>gvp-mcp-rup-volume-02</u>

{{{!}} class="wikitable" data-macro-name="code" data-macro-id="e5d5f1b5-76e3-4623-9c4d-3955a8f01b8c" data-macro-parameters="language=yml{{!}}theme=Emacs{{!}}title=gvp-mcp-rup-volume-02-pv.yaml" data-macro-schema-version="1" data-macro-body-type="PLAIN_TEXT"
{{!}} class="wysiwyg-macro-body"{{!}}
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gvp-mcp-rup-volume-02
spec:
  capacity:
    storage: 40Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: disk-premium
  nfs:
    path: /export/vol1/PAT/gvp/mcp-logs-02
    server: 192.168.30.51
{{!}}}Execute the following command:

{{{!}} class="wikitable" data-macro-name="code" data-macro-id="bfae65ba-86b0-481e-bb2b-a1a0ea10e74e" data-macro-parameters="language=bash{{!}}theme=Emacs" data-macro-schema-version="1" data-macro-body-type="PLAIN_TEXT"
{{!}} class="wysiwyg-macro-body"{{!}}
oc create -f gvp-mcp-rup-volume-02-pv.yaml
{{!}}}<u>gvp-mcp-recording-volume-01</u>

{{{!}} class="wikitable" data-macro-name="code" data-macro-id="c30c91d3-afdc-467d-9dce-dc21a0f6cc6d" data-macro-parameters="language=yml{{!}}theme=Emacs{{!}}title=gvp-mcp-recordings-volume-01-pv.yaml" data-macro-schema-version="1" data-macro-body-type="PLAIN_TEXT"
{{!}} class="wysiwyg-macro-body"{{!}}
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gvp-mcp-recording-volume-01
spec:
  capacity:
    storage: 40Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: gvp
  nfs:
    path: /export/vol1/PAT/gvp/mcp-logs-01
    server: 192.168.30.51
{{!}}}Execute the following command:

{{{!}} class="wikitable" data-macro-name="code" data-macro-id="060c080d-992f-483f-abd0-5b93227527df" data-macro-parameters="language=bash{{!}}theme=Emacs" data-macro-schema-version="1" data-macro-body-type="PLAIN_TEXT"
{{!}} class="wysiwyg-macro-body"{{!}}
oc create -f gvp-mcp-recordings-volume-01-pv.yaml
{{!}}}<u>gvp-mcp-recording-volume-02</u>

{{{!}} class="wikitable" data-macro-name="code" data-macro-id="36ca733a-9d2a-4715-a011-76a42cce2b35" data-macro-parameters="language=yml{{!}}theme=Emacs{{!}}title=gvp-mcp-recordings-volume-02-pv.yaml" data-macro-schema-version="1" data-macro-body-type="PLAIN_TEXT"
{{!}} class="wysiwyg-macro-body"{{!}}
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gvp-mcp-recording-volume-02
spec:
  capacity:
    storage: 40Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: gvp
  nfs:
    path: /export/vol1/PAT/gvp/mcp-logs-02
    server: 192.168.30.51
{{!}}}Execute the following command:

{{{!}} class="wikitable" data-macro-name="code" data-macro-id="fb8a00d5-ac2b-40e2-abb6-4802bed6d0d7" data-macro-parameters="language=bash{{!}}theme=Emacs" data-macro-schema-version="1" data-macro-body-type="PLAIN_TEXT"
{{!}} class="wysiwyg-macro-body"{{!}}
oc create -f gvp-mcp-recordings-volume-02-pv.yaml
{{!}}}
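The five PersistentVolume manifests above differ only in name, capacity, storage class, and NFS path. As a convenience, they can be generated from a short script instead of being maintained by hand. This is an illustrative sketch only (the `make_pv` helper is not a Genesys-provided tool); the server address and paths are simply the example values used above, and `oc create -f` accepts JSON manifests as well as YAML.

```python
import json

def make_pv(name, size, storage_class, nfs_path, server="192.168.30.51"):
    """Build a PersistentVolume manifest shaped like the examples above."""
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolume",
        "metadata": {"name": name},
        "spec": {
            "capacity": {"storage": size},
            "accessModes": ["ReadWriteOnce"],
            "persistentVolumeReclaimPolicy": "Recycle",
            "storageClassName": storage_class,
            "nfs": {"path": nfs_path, "server": server},
        },
    }

# One entry per volume defined above: (name, size, storageClassName, nfs path).
volumes = [
    ("gvp-mcp-logs-02", "10Gi", "gvp", "/export/vol1/PAT/gvp/mcp-logs-02"),
    ("gvp-mcp-rup-volume-01", "40Gi", "disk-premium", "/export/vol1/PAT/gvp/mcp-logs-01"),
    ("gvp-mcp-rup-volume-02", "40Gi", "disk-premium", "/export/vol1/PAT/gvp/mcp-logs-02"),
    ("gvp-mcp-recording-volume-01", "40Gi", "gvp", "/export/vol1/PAT/gvp/mcp-logs-01"),
    ("gvp-mcp-recording-volume-02", "40Gi", "gvp", "/export/vol1/PAT/gvp/mcp-logs-02"),
]

for name, size, sc, path in volumes:
    # Each file can then be applied with: oc create -f <name>-pv.json
    with open(f"{name}-pv.json", "w") as f:
        json.dump(make_pv(name, size, sc, path), f, indent=2)
```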
 
 
===Install Helm chart===

Download the required Helm chart release from the JFrog repository and install. Refer to {{Link-AnywhereElse|product=GVP|version=Current|manual=GVPPEGuide|topic=Deploy|anchor=HelmchaURLs|display text=Helm Chart URLs}}.
<source lang="bash">
helm install gvp-mcp ./<gvp-mcp-helm-artifact> -f gvp-mcp-values.yaml
</source>
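Before running the install, it can help to sanity-check the overrides in your values file; for example, shared deployments must point MCPs at the real configuration unit via <tt>logicalResourceGroup</tt>. The sketch below is illustrative only (not a Genesys-provided tool) and checks the raw text to avoid a YAML parser dependency; the assumption that <tt>deploymentEnv: "UPDATE_ENV"</tt> is a placeholder to be replaced is inferred from its name.

```python
def check_values(text: str) -> list[str]:
    """Flat-text sanity checks on gvp-mcp-values.yaml before `helm install`.

    Returns a list of problems found (empty means the checks passed).
    Illustrative sketch only, not a Genesys-provided tool.
    """
    problems = []
    # Shared deployments must use the real configuration unit, not the test one.
    if 'logicalResourceGroup: "MCP_Configuration_Unit"' not in text:
        problems.append('logicalResourceGroup is not set to "MCP_Configuration_Unit"')
    # deploymentEnv ships as "UPDATE_ENV", assumed here to be a placeholder.
    if "UPDATE_ENV" in text:
        problems.append('deploymentEnv still contains the "UPDATE_ENV" placeholder')
    return problems

# Example run against an in-memory snippet instead of the real file:
sample = 'deploymentEnv: "UPDATE_ENV"\nlogicalResourceGroup: "MCP_Configuration_Unit_Test"\n'
for problem in check_values(sample):
    print("WARNING:", problem)
```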
You must set the following values in your values.yaml:

*Set '''logicalResourceGroup: "MCP_Configuration_Unit"''' to add MCPs to the Real Configuration Unit (rather than test).

'''gvp-mcp-values.yaml'''
<source lang="bash">
## Default values for gvp-mcp.
## This is a YAML-formatted file.
## Declare variables to be passed into your templates.

## Global Parameters
## Add labels to all the deployed resources
##
podLabels: {}

## Add annotations to all the deployed resources
##
podAnnotations: {}

serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name:

## Deployment Configuration
deploymentEnv: "UPDATE_ENV"
replicaCount: 2
terminationGracePeriod: 3600

## Name and dashboard overrides
nameOverride: ""
fullnameOverride: ""
dashboardReplicaStatefulsetFilterOverride: ""

## Base Labels. Please do not change these.
serviceName: gvp-mcp
component: shared
partOf: gvp

## Command-line arguments to the MCP process
args:
  - "gvp-configserver"
  - "8888"
  - "default"
  - "/etc/mcpconfig/config.ini"

## Container image repo settings.
image:
  mcp:
    registry: pureengage-docker-staging.jfrog.io
    repository: gvp/multicloud/gvp_mcp
    tag: "{{ .Chart.AppVersion }}"
    pullPolicy: IfNotPresent
  serviceHandler:
    registry: pureengage-docker-staging.jfrog.io
    repository: gvp/multicloud/gvp_mcp_servicehandler
    tag: "{{ .Chart.AppVersion }}"
    pullPolicy: IfNotPresent
  configHandler:
    registry: pureengage-docker-staging.jfrog.io
    repository: gvp/multicloud/gvp_mcp_confighandler
    tag: "{{ .Chart.AppVersion }}"
    pullPolicy: IfNotPresent
  snmp:
    registry: pureengage-docker-staging.jfrog.io
    repository: gvp/multicloud/gvp_snmp
    tag: v9.0.040.21
    pullPolicy: IfNotPresent
  rup:
    registry: pureengage-docker-staging.jfrog.io
    repository: cce/recording-provider
    tag: 9.0.000.00.b.1432.r.ef30441
    pullPolicy: IfNotPresent

## MCP specific settings
mcp:
  ## Settings for liveness and readiness probes of MCP
  ## !!! THESE VALUES SHOULD NOT BE CHANGED UNLESS INSTRUCTED BY GENESYS !!!
  livenessValues:
    path: /mcp/liveness
    initialDelaySeconds: 30
    periodSeconds: 60
    timeoutSeconds: 20
    failureThreshold: 3
    healthCheckAPIPort: 8300

  # Used instead of startupProbe. This runs all initial self-tests, and could take some time.
  # Timeout is < 1 minute (with reduced test set), and interval/period is 1 minute.
  readinessValues:
    path: /mcp/readiness
    initialDelaySeconds: 30
    periodSeconds: 60
    timeoutSeconds: 50
    failureThreshold: 3
    healthCheckAPIPort: 8300
  # Location of configuration file for MCP
  # initialConfigFile is the default template
  # finalConfigFile is the final configuration after overrides are applied (see mcpConfig section for overrides)
  initialConfigFile: "/etc/config/config.ini"
  finalConfigFile: "/etc/mcpconfig/config.ini"

  # Dev and QA deployments will use MCP_Configuration_Unit_Test LRG and shared deployments will use MCP_Configuration_Unit LRG
  logicalResourceGroup: "MCP_Configuration_Unit"

  # Threshold values for the various alerts in podmonitor.
  alerts:
    cpuUtilizationAlertLimit: 70
    memUtilizationAlertLimit: 90
    workingMemAlertLimit: 7
    maxRestarts: 2
    persistentVolume: 20
    serviceHealth: 40
    recordingError: 7
    configServerFailure: 0
    dtmfError: 1
    dnsError: 6
    totalError: 120
    selfTestError: 25
    fetchErrorMin: 120
    fetchErrorMax: 220
    execError: 120
    sdpParseError: 1
    mediaWarning: 3
    mediaCritical: 7
    fetchTimeout: 10
    fetchError: 10
    ngiError: 12
    ngi4xx: 10
    recPostError: 7
    recOpenError: 1
    recStartError: 3
    recCertError: 7
    reportingDbInitError: 1
    reportingFlushError: 1
    grammarLoadError: 1
    grammarSynError: 1
    dtmfGrammarLoadError: 1
    dtmfGrammarError: 1
    vrmOpenSessError: 1
    wsTokenCreateError: 1
    wsTokenConfigError: 1
    wsTokenFetchError: 1
    wsOpenSessError: 1
    wsProtoError: 1
    grpcConfigError: 1
    grpcSSLRootCertError: 1
    grpcGoogleCredentialError: 1
    grpcRecognizeStartError: 7
    grpcWriteError: 7
    grpcRecognizeError: 7
    grpcTtsError: 7
    streamerOpenSessionError: 1
    streamerProtocolError: 1
    msmlReqError: 7
    dnsResError: 6
    rsConnError: 150

## RUP (Recording Uploader) Settings
rup:
  ## Settings for liveness and readiness probes of RUP
  ## !!! THESE VALUES SHOULD NOT BE CHANGED UNLESS INSTRUCTED BY GENESYS !!!
  livenessValues:
    path: /health/live
    initialDelaySeconds: 30
    periodSeconds: 60
    timeoutSeconds: 20
    failureThreshold: 3
    healthCheckAPIPort: 8080

  readinessValues:
    path: /health/ready
    initialDelaySeconds: 30
    periodSeconds: 30
    timeoutSeconds: 20
    failureThreshold: 3
    healthCheckAPIPort: 8080

  ## RUP PVC defines
  rupVolume:
    storageClass: "genesys"
    accessModes: "ReadWriteOnce"
    volumeSize: 40Gi

  ## Other settings for RUP
  recordingsFolder: "/pvolume/recordings"
  recordingsCache: "/pvolume/recording_cache"
  rupProvisionerEnabled: "false"
  decommisionDestType: "WebDAV"
  decommisionDestWebdavUrl: "http://gvp-central-rup:8180"
  decommisionDestWebdavUsername: ""
  decommisionDestWebdavPassword: ""
  diskFullDestType: "WebDAV"
  diskFullDestWebdavUrl: "http://gvp-central-rup:8180"
  diskFullDestWebdavUsername: ""
  diskFullDestWebdavPassword: ""
  cpUrl: "http://cce-conversation-provider.cce.svc.cluster.local"
  unrecoverableLostAction: "uploadtodefault"
  unrecoverableDestType: "Azure"
  unrecoverableDestAzureAccountName: "gvpwestus2dev"
  unrecoverableDestAzureContainerName: "ccerp-unrecoverable"
  logJsonEnable: true
  logLevel: INFO
  logConsoleLevel: INFO

  ## RUP resource requests and limits
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  resources:
    requests:
      memory: "128Mi"
      cpu: "100m"
      ephemeral-storage: "1Gi"
    limits:
      memory: "2Gi"
      cpu: "1000m"

## PVCs defined. RUP one is under "rup" label.
recordingStorage:
  storageClass: "genesys"
  accessModes: "ReadWriteOnce"
  volumeSize: 40Gi

# If PVC is not used by setting flag enablePV to false, the path in hostPath will be used for log storage.
logStorage:
  enablePV: true
  storageClass: genesys
  accessModes: ReadWriteOnce
  volumeSize: 5Gi
  hostPath: /mnt/log

## Service Handler configuration. Note, the port values CANNOT be changed here alone, and should not be changed.
serviceHandler:
  serviceHandlerPort: 8300
  mcpSipPort: 5070
  consuleExternalHost: ""
  consulPort: 8501
  registrationInterval: 10000
  mcpHealthCheckInterval: 30s
  mcpHealthCheckTimeout: 10s

## Config Server values passed to RUP, etc. These should not be changed.
configServer:
  host: gvp-configserver
  port: "8888"

## Secrets storage related settings - k8s secrets or csi
secrets:
  # Used for pulling images/containers from the repositories.
  imagePull:
    - name: pureengage-docker-dev
    - name: pureengage-docker-staging

  # Config Server secrets. If k8s is false, csi will be used, else k8s will be used.
  configServer:
    k8s: true
    secretName: configserver-secret
    dbUserKey: username
    dbPasswordKey: password
    csiSecretProviderClass: keyvault-gvp-gvp-configserver-secret

  # Consul secrets. If k8s is false, csi will be used, else k8s will be used.
  consul:
    k8s: true
    secretName: shared-consul-consul-gvp-token
    secretKey: consul-consul-gvp-token
    csiSecretProviderClass: keyvault-consul-consul-gvp-token

## Ingress configuration
ingress:
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: chart-example.local
      paths: []
  tls: []
  # - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

## App resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
## Values for MCP, and the default ones for other containers. RUP ones are under "rup" label.
##
resourcesMcp:
  requests:
    memory: "200Mi"
    cpu: "250m"
    ephemeral-storage: "1Gi"
  limits:
    memory: "2Gi"
    cpu: "300m"

resourcesDefault:
  requests:
    memory: "128Mi"
    cpu: "100m"
  limits:
    memory: "128Mi"
    cpu: "100m"
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

## App containers' Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
##
## Containers should run as genesys user and cannot use elevated permissions
## Pod level security context
podSecurityContext:
  fsGroup: 500
  runAsUser: 500
  runAsGroup: 500
  runAsNonRoot: true

## Container security context
securityContext:
  # fsGroup: 500
  runAsUser: 500
  runAsGroup: 500
  runAsNonRoot: true

## Priority Class
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
## NOTE: this is an optional parameter
##
priorityClassName: system-cluster-critical

# affinity: {}

## Node labels for assignment.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector:
  #genesysengage.com/nodepool: realtime

## Tolerations for assignment.
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
  # - key: "kubernetes.azure.com/scalesetpriority"
  #  operator: "Equal"
  #  value: "spot"
  #  effect: "NoSchedule"
  # - key: "k8s.genesysengage.com/nodepool"
  #  operator: "Equal"
  #  value: "compute"
  #  effect: "NoSchedule"
  # - key: "kubernetes.azure.com/scalesetpriority"
  #  operator: "Equal"
  #  value: "compute"
  #  effect: "NoSchedule"
  #- key: "k8s.genesysengage.com/nodepool"
  # operator: Exists
  # effect: NoSchedule

## Extra labels
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
##
## Use podLabels
#labels: {}

## Extra Annotations
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
##
## Use podAnnotations
#annotations: {}

## Autoscaling Settings
## Keda can be used as an alternative to HPA only to scale MCPs based on a cron schedule (UTC).
## If this is set to true, use Keda for scaling, or use HPA directly.
useKeda: true

## If Keda is enabled, only the following parameters are supported, and default HPA settings
## will be used within Keda.
keda:
  preScaleStart: "0 14 * * *"
  preScaleEnd: "0 2 * * *"
  preScaleDesiredReplicas: 4
  pollingInterval: 15
  cooldownPeriod: 300

## HPA Settings
# GVP-42512: PDB issue
# Always keep the following:
# minReplicas >= 2
# maxUnavailable = 1
hpa:
  enabled: false
  minReplicas: 2
  maxUnavailable: 1
  maxReplicas: 4
  podManagementPolicy: Parallel
  targetCPUAverageUtilization: 20
  scaleupPeriod: 15
  scaleupPods: 4
  scaleupPercent: 50
  scaleupStabilizationWindow: 0
  scaleupPolicy: Max
  scaledownPeriod: 300
  scaledownPods: 2
  scaledownPercent: 10
  scaledownStabilizationWindow: 3600
  scaledownPolicy: Min

## Service/Pod Monitoring Settings
prometheus:
  mcp:
<nowiki> </nowiki> preScaleStart: "0 14 * * *"
+
    name: gvp-mcp-snmp
<nowiki> </nowiki> preScaleEnd: "0 2 * * *"
+
    port: 9116
<nowiki> </nowiki> preScaleDesiredReplicas: 4
 
<nowiki> </nowiki> pollingInterval: 15
 
<nowiki> </nowiki> cooldownPeriod: 300
 
 
   
 
   
<nowiki>##</nowiki> HPA Settings
+
  rup:
<nowiki>#</nowiki> GVP-42512: PDB issue
+
    name: gvp-mcp-rup
<nowiki>#</nowiki> Alaways keep the following:
+
    port: 8080
<nowiki>#</nowiki> minReplicas >= 2
 
<nowiki>#</nowiki> maxUnavailable = 1 
 
hpa:
 
<nowiki> </nowiki> enabled: false
 
<nowiki> </nowiki> minReplicas: 2
 
<nowiki> </nowiki> maxUnavailable: 1
 
<nowiki> </nowiki> maxReplicas: 4
 
<nowiki> </nowiki> podManagementPolicy: Parallel
 
<nowiki> </nowiki> targetCPUAverageUtilization: 20
 
<nowiki> </nowiki> scaleupPeriod: 15
 
<nowiki> </nowiki> scaleupPods: 4
 
<nowiki> </nowiki> scaleupPercent: 50
 
<nowiki> </nowiki> scaleupStabilizationWindow: 0
 
<nowiki> </nowiki> scaleupPolicy: Max
 
<nowiki> </nowiki> scaledownPeriod: 300
 
<nowiki> </nowiki> scaledownPods: 2
 
<nowiki> </nowiki> scaledownPercent: 10
 
<nowiki> </nowiki> scaledownStabilizationWindow: 3600
 
<nowiki> </nowiki> scaledownPolicy: Min
 
 
   
 
   
<nowiki>##</nowiki> Service/Pod Monitoring Settings
+
   podMonitor:
prometheus:
+
    enabled: true
<nowiki> </nowiki> mcp:
 
<nowiki> </nowiki>   name: gvp-mcp-snmp
 
<nowiki> </nowiki>  port: 9116
 
 
   
 
   
<nowiki> </nowiki> rup:
+
grafana:
<nowiki> </nowiki>   name: gvp-mcp-rup
+
   enabled: false
<nowiki> </nowiki>  port: 8080
 
 
   
 
   
  <nowiki> </nowiki> podMonitor:
+
  #log:
  <nowiki> </nowiki>  enabled: true
+
  # name: gvp-mcp-log
 +
  # port: 8200
 
   
 
   
grafana:
+
## Pod Disruption Budget Settings
<nowiki> </nowiki> enabled: false
+
podDisruptionBudget:
 +
  enabled: true
 
   
 
   
<nowiki> </nowiki> #log:
+
## Enable network policies or not
<nowiki> </nowiki> #  name: gvp-mcp-log
+
networkPolicies:
<nowiki> </nowiki> #  port: 8200
+
  enabled: false
 
   
 
   
<nowiki>##</nowiki> Pod Disruption Budget Settings
+
## DNS configuration options
podDisruptionBudget:
+
dnsConfig:
<nowiki> </nowiki> enabled: true
+
  options:
 +
    - name: ndots
 +
      value: "3"
 
   
 
   
<nowiki>##</nowiki> Enable network policies or not
+
## Configuration overrides
networkPolicies:
+
mcpConfig:
<nowiki> </nowiki> enabled: false
+
  # MCP config overrides
 +
  mcp.mpc.numdispatchthreads: 4
 +
  mcp.log.verbose: "interaction"
 +
  mcp.mpc.codec: "pcmu pcma telephone-event"
 +
  mcp.mpc.transcoders: "PCM MP3"
 +
  mcp.mpc.playcache.enable: 1
 +
  mcp.fm.http_proxy: ""
 +
  mcp.fm.https_proxy: ""
 
   
 
   
<nowiki>##</nowiki> DNS configuration options
+
  #MRCP v2 ASR config overrides
dnsConfig:
+
  mrcpv2_asr.provision.vrm.client.connectpersetup: true
<nowiki> </nowiki> options:
+
  mrcpv2_asr.provision.vrm.client.disablehotword: false
<nowiki> </nowiki>   - name: ndots
+
  mrcpv2_asr.provision.vrm.client.hotkeybasepath: "/usr/local/genesys/mcp/grammar/nuance/hotkey"
<nowiki> </nowiki>    value: "3"
+
  mrcpv2_asr.provision.vrm.client.noduplicatedgramuri: true
 +
  mrcpv2_asr.provision.vrm.client.sendswmsparams: false
 +
  mrcpv2_asr.provision.vrm.client.transportprotocol: "MRCPv2"
 +
  mrcpv2_asr.provision.vrm.client.sendloggingtag: true
 +
  mrcpv2_asr.provision.vrm.client.resource.name: "NuanceASRv2"
 +
   mrcpv2_asr.provision.vrm.client.resource.uri: "sip:mresources@speech-server-clusterip:5060"
 +
  mrcpv2_asr.provision.vrm.client.tlscertificatekey: "/usr/local/genesys/mcp/config/x509_certificate.pem"
 +
  mrcpv2_asr.provision.vrm.client.tlsprivatekey: "/usr/local/genesys/mcp/config/x509_certificate.pem"
 +
  mrcpv2_asr.provision.vrm.client.tlspassword: ""
 +
  mrcpv2_asr.provision.vrm.client.tlsprotocoltype: "TLSv1"
 +
  mrcpv2_asr.provision.vrm.client.confidencescale: 1
 +
  mrcpv2_asr.provision.vrm.client.sendsessionxml: true
 +
  mrcpv2_asr.provision.vrm.client.supportfornuance11: true
 +
  mrcpv2_asr.provision.vrm.client.uniquegramid: true
 
   
 
   
<nowiki>##</nowiki> Configuration overrides
+
  #MRCP v2 TTS config overrides
mcpConfig:
+
  mrcpv2_tts.provision.vrm.client.connectpersetup: true
<nowiki> </nowiki> # MCP config overrides
+
  mrcpv2_tts.provision.vrm.client.speechmarkerencoding: "UTF-8"
<nowiki> </nowiki> mcp.mpc.numdispatchthreads: 4
+
   mrcpv2_tts.provision.vrm.client.transportprotocol: "MRCPv2"
<nowiki> </nowiki> mcp.log.verbose: "interaction"
+
   mrcpv2_tts.provision.vrm.client.sendloggingtag: true
<nowiki> </nowiki> mcp.mpc.codec: "pcmu pcma telephone-event"
+
   mrcpv2_tts.provision.vrm.client.resource.name: "NuanceTTSv2"
<nowiki> </nowiki> mcp.mpc.transcoders: "PCM MP3"
+
   mrcpv2_tts.provision.vrm.client.resource.uri: "sip:mresources@speech-server-clusterip:5060"
<nowiki> </nowiki> mcp.mpc.playcache.enable: 1
+
   mrcpv2_tts.provision.vrm.client.tlscertificatekey: "/usr/local/genesys/mcp/config/x509_certificate.pem"
<nowiki> </nowiki> mcp.fm.http_proxy: ""
+
   mrcpv2_tts.provision.vrm.client.tlsprivatekey: "/usr/local/genesys/mcp/config/x509_certificate.pem"
<nowiki> </nowiki> mcp.fm.https_proxy: ""
+
   mrcpv2_tts.provision.vrm.client.tlspassword: ""
<nowiki> </nowiki>
+
   mrcpv2_tts.provision.vrm.client.tlsprotocoltype: "TLSv1"
# Nexus client configuration. 
+
   mrcpv2_tts.provision.vrm.client.nospeechlanguageheader: true
  # !!!  Other than pool.size and resource.uri, no other parameter should be changed.  !!!
+
   mrcpv2_tts.provision.vrm.client.sendsessionxml: true
   nexus_asr1.pool.size: 0
+
   mrcpv2_tts.provision.vrm.client.supportfornuance11: true
  nexus_asr1.provision.vrm.client.resource.uri: "[\"ws://nexus-production.nexus.svc.cluster.local/nexus/v3/bot/connection\"]"
+
</source>
   nexus_asr1.provision.vrm.client.resource.type: "ASR"
+
 
   nexus_asr1.provision.vrm.client.resource.name: "ASRC"
+
===Verify the deployed resources===
   nexus_asr1.provision.vrm.client.resource.engines: "nexus"
+
Verify the deployed resources from Google console/CLI.
   nexus_asr1.provision.vrm.client.resource.engine.nexus.audio.codec: "mulaw"
 
   nexus_asr1.provision.vrm.client.resource.engine.nexus.audio.samplerate: 8000
 
   nexus_asr1.provision.vrm.client.resource.engine.nexus.enableMaxSpeechTimeout: true
 
   nexus_asr1.provision.vrm.client.resource.engine.nexus.serviceAccountKey: ""
 
   nexus_asr1.provision.vrm.client.resource.privatekey: ""
 
   nexus_asr1.provision.vrm.client.resource.proxy: ""
 
   nexus_asr1.provision.vrm.client.TransportProtocol: "WEBSOCKET"
 
{{!}}}
 
 
|Status=No
 
|Status=No
 
}}
 
}}
 
|PEPageType=45d1441f-dc69-4a17-bd47-af5d811ce167
 
|PEPageType=45d1441f-dc69-4a17-bd47-af5d811ce167
 
}}
 
}}
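The Keda cron pre-scale window above (preScaleStart at 14:00 UTC, preScaleEnd at 02:00 UTC) wraps past midnight. The following sketch is only an illustration of that wrap-around semantics, not part of the chart:

```shell
# Is a given UTC hour inside the 14:00-02:00 pre-scale window?
# (Illustrates the wrap-past-midnight behavior of the cron pair above.)
in_prescale_window() {
  hour=$1                      # 0-23, UTC
  [ "$hour" -ge 14 ] || [ "$hour" -lt 2 ]
}
in_prescale_window 15 && echo "15:00 UTC: inside the pre-scale window"
in_prescale_window 5  || echo "05:00 UTC: outside the pre-scale window"
```

Inside the window, Keda holds the MCP deployment at preScaleDesiredReplicas (4 in this sample); outside it, normal scaling applies.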

Revision as of 11:39, December 13, 2021

This topic is part of the manual Genesys Voice Platform Private Edition Guide for version Current of Genesys Voice Platform.

Learn how to deploy Genesys Voice Platform.

Deploy

Important
Make sure to review Before you begin for the full list of prerequisites required to deploy Genesys Voice Platform.

Prerequisites

  • Consul with Service Mesh and DNS
  • Availability of a shared Postgres database for GVP Configuration Server
  • Availability of a SQL Server database for Reporting Server
    • The database should be created in advance (example DB name: gvp_rs)
    • One user must have admin (dbo) access and a second must have read-only (ro) access
    • These credentials are used to create the Reporting Server secrets.

Environment setup

  • Log in to the GKE cluster
gcloud container clusters get-credentials gke1
  • Create the gvp namespace in the GKE cluster using the following manifest file

create-gvp-namespace.json

{
  "apiVersion": "v1",
  "kind": "Namespace",
  "metadata": {
    "name": "gvp",
    "labels": {
      "name": "gvp"
    }
  }
}
kubectl apply -f create-gvp-namespace.json
  • Confirm namespace creation
kubectl describe namespace gvp

Installation order matters with GVP. To deploy without errors, the install order should be:

1. GVP Configuration Server

2. GVP ServiceDiscovery

3. GVP Reporting Server

4. GVP Resource Manager

5. GVP Media Control Platform
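The required order can be captured in a small wrapper script. This is only a sketch: it echoes the helm commands rather than running them, and the chart archive and values file names are placeholders based on this guide.

```shell
# Print the helm install commands in the required order (sketch only;
# <version> is a placeholder -- substitute the chart version you downloaded).
for chart in gvp-configserver gvp-sd gvp-rs gvp-rm gvp-mcp; do
  echo "helm install $chart ./$chart-<version>.tgz -f $chart-values.yaml"
done
```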

Helm chart release URLs

Download the GVP Helm charts from JFrog using your credentials:

gvp-configserver : https://<jfrog artifactory/helm location>/gvp-configserver-<version_number>.tgz

gvp-sd : https://<jfrog artifactory/helm location>/gvp-sd-<version_number>.tgz

gvp-rs : https://<jfrog artifactory/helm location>/gvp-rs-<version_number>.tgz

gvp-rm : https://<jfrog artifactory/helm location>/gvp-rm-<version_number>.tgz

gvp-mcp : https://<jfrog artifactory/helm location>/gvp-mcp-<version_number>.tgz

For version numbers, refer to Helm charts and containers for Genesys Voice Platform.

1. GVP Configuration Server

Secrets creation

Create the following secrets which are required for the service deployment.

postgres-secret

db-hostname: Hostname of DB Server

db-name: Database Name

db-password: password for db user

db-username: username for db

server-name: Hostname of DB Server

apiVersion: v1
kind: Secret
metadata:
  name: postgres-secret
  namespace: gvp
type: Opaque 
data:
  db-username: <base64 encoded value>
  db-password: <base64 encoded value>
  db-hostname: cG9zdGdyZXMtcncuaW5mcmEuc3ZjLmNsdXN0ZXIubG9jYWw=
  db-name: Z3Zw
  server-name: cG9zdGdyZXMtcncuaW5mcmEuc3ZjLmNsdXN0ZXIubG9jYWw=
Execute the following command:
kubectl apply -f postgres-secret.yaml
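The data values in the Secret must be base64-encoded without a trailing newline (use printf rather than echo). For example, the db-name value above decodes back to gvp:

```shell
# Encode a secret value (printf avoids encoding a trailing newline)
printf '%s' 'gvp' | base64
# -> Z3Zw

# Decode a value from the manifest to double-check it
printf '%s' 'cG9zdGdyZXMtcncuaW5mcmEuc3ZjLmNsdXN0ZXIubG9jYWw=' | base64 -d
# -> postgres-rw.infra.svc.cluster.local
```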
configserver-secret

password: Password to set for Config DB

username: Username to set for Config DB

apiVersion: v1
kind: Secret
metadata:
  name: configserver-secret
  namespace: gvp
type: Opaque 
data:
  username: <base64 encoded value>
  password: <base64 encoded value>
Execute the following command:
kubectl apply -f configserver-secret.yaml

Install Helm chart

Download the required Helm chart release from the JFrog repository and install. Refer to Helm Chart URLs.

helm install gvp-configserver ./<gvp-configserver-helm-artifact> -f gvp-configserver-values.yaml

Set the following values in your values.yaml for Configuration Server:

priorityClassName >> Set to a priority class that exists on the cluster (or create one)

imagePullSecrets >> Set to your pull secret name

gvp-configserver-values.yaml

# Default values for gvp-configserver.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
 
## Global Parameters
## Add labels to all the deployed resources
##
podLabels: {}
 
## Add annotations to all the deployed resources
##
podAnnotations: {}
 
serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name:
 
## Deployment Configuration
## replicaCount should be 1 for Config Server
replicaCount: 1
 
## Base Labels. Please do not change these.
serviceName: gvp-configserver
component: shared
# Namespace
partOf: gvp
 
## Container image repo settings.
image:
  confserv:
    registry: pureengage-docker-staging.jfrog.io
    repository: gvp/gvp_confserv
    pullPolicy: IfNotPresent
    tag: "{{ .Chart.AppVersion }}"
  serviceHandler:
    registry: pureengage-docker-staging.jfrog.io
    repository: gvp/gvp_configserver_servicehandler
    pullPolicy: IfNotPresent
    tag: "{{ .Chart.AppVersion }}"
  dbInit:
    registry: pureengage-docker-staging.jfrog.io
    repository: gvp/gvp_configserver_configserverinit
    pullPolicy: IfNotPresent
    tag: "{{ .Chart.AppVersion }}"
 
## Config Server App Configuration
configserver:
  ## Settings for liveness and readiness probes
  ## !!! THESE VALUES SHOULD NOT BE CHANGED UNLESS INSTRUCTED BY GENESYS !!!
  livenessValues:
    path: /cs/liveness
    initialDelaySeconds: 30
    periodSeconds: 60
    timeoutSeconds: 20
    failureThreshold: 3
    healthCheckAPIPort: 8300       
       
  readinessValues:
    path: /cs/readiness
    initialDelaySeconds: 30
    periodSeconds: 30
    timeoutSeconds: 20
    failureThreshold: 3
    healthCheckAPIPort: 8300
 
  alerts:
    cpuUtilizationAlertLimit: 70
    memUtilizationAlertLimit: 90
    workingMemAlertLimit: 7
    maxRestarts: 2
 
## PVCs defined
#  none
 
## Define service(s) for application
service:
  type: ClusterIP
  host: gvp-configserver-0
  port: 8888
  targetPort: 8888
 
## Service Handler configuration.
serviceHandler:
  port: 8300
 
## Secrets storage related settings - k8s secrets only
secrets:
  # Used for pulling images/containers from the repositories.
  imagePull:
    - name: pureengage-docker-dev
    - name: pureengage-docker-staging
   
  # Config Server secrets. If k8s is false, csi will be used, else k8s will be used.
  # Currently, only k8s is supported!
  configServer:
    secretName: configserver-secret
    secretUserKey: username
    secretPwdKey: password
    #csiSecretProviderClass: keyvault-gvp-gvp-configserver-secret
 
  # Config Server Postgres DB secrets and settings.
  postgres:
    dbName: gvp
    dbPort: 5432
    secretName: postgres-secret
    secretAdminUserKey: db-username
    secretAdminPwdKey: db-password
    secretHostnameKey: db-hostname
    secretDbNameKey: db-name
    #secretServerNameKey: server-name
 
## Ingress configuration
ingress:
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: chart-example.local
      paths: []
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local
 
## App resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
  requests:
    memory: "512Mi"
    cpu: "500m"
  limits:
    memory: "1Gi"
    cpu: "1"
 
## App containers' Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
##
## Containers should run as genesys user and cannot use elevated permissions
##
securityContext:
  runAsUser: 500
  runAsGroup: 500
  # capabilities:
  #   drop:
  #   - ALL
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000
 
podSecurityContext: {}
  # fsGroup: 2000
 
## Priority Class
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
## NOTE: this is an optional parameter
##
priorityClassName: system-cluster-critical
 
## Affinity for assignment.
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}
 
## Node labels for assignment.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}
 
## Tolerations for assignment.
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
 
## Service/Pod Monitoring Settings
## Whether to create Prometheus alert rules or not.
prometheusRule:
  create: true
 
## Grafana dashboard Settings
## Whether to create Grafana dashboard or not.
grafana:
  enabled: true
 
## Enable network policies or not
networkPolicies:
  enabled: false
 
## DNS configuration options
dnsConfig:
  options:
    - name: ndots
      value: "3"

Verify the deployed resources

Verify the deployed resources from the Google Cloud console/CLI.

2. GVP Service Discovery

NOTE: After the GVP SD pod is deployed, you will notice a few errors. Ignore them and move on to the next deployment; the service starts working once RM and MCP are deployed.

Secrets creation

Create the following secrets which are required for the service deployment.

shared-consul-consul-gvp-token

shared-consul-consul-gvp-token-secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: shared-consul-consul-gvp-token
  namespace: gvp
type: Opaque 
data:
  consul-consul-gvp-token: ZmU2NjFkNWYtYzVmNi1mZTJlLTgyM2MtYTAyZGQwN2JlMzll
Execute the following command:
kubectl create -f shared-consul-consul-gvp-token-secret.yaml

ConfigMap creation

Create the following ConfigMap which is required for the service deployment.

Caveat

If the tenant has not been deployed yet, you will not have the information needed to populate the ConfigMap. An empty ConfigMap can be created using:

kubectl create configmap tenant-inventory -n gvp

Create the configuration based on tenant provisioning via the Service Discovery container.

t100.json

{
    "name": "t100",
    "id": "80dd",
    "gws-ccid": "9350e2fc-a1dd-4c65-8d40-1f75a2e080dd",
    "default-application": "IVRAppDefault"
}

Execute the following command:

Add Config Map

kubectl create configmap tenant-inventory --from-file t100.json -n gvp
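In the t100.json sample, the short tenant id ("80dd") is the tail of the gws-ccid. The following check is only an illustration based on those sample values, not a documented requirement; it can catch copy-paste mistakes before the ConfigMap is created:

```shell
# Check that the short tenant id is the tail of the gws-ccid
# (values taken from the t100.json sample above).
ccid="9350e2fc-a1dd-4c65-8d40-1f75a2e080dd"
tenant_id="80dd"
case "$ccid" in
  *"$tenant_id") echo "tenant id matches gws-ccid" ;;
  *)             echo "MISMATCH: check t100.json" ;;
esac
```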

Install Helm chart

Download the required Helm chart release from the JFrog repository and install. Refer to Helm Chart URLs.

helm install gvp-sd ./<gvp-sd-helm-artifact> -f gvp-sd-values.yaml


gvp-sd-values.yaml

# Default values for gvp-sd.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
 
## Global Parameters
## Add labels to all the deployed resources
##
podLabels: {}
 
## Add annotations to all the deployed resources
##
podAnnotations: {}
 
serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name:
 
## Deployment Configuration
replicaCount: 1
smtp: allowed
 
## Name overrides
nameOverride: ""
fullnameOverride: ""
 
## Base Labels. Please do not change these.
component: shared
partOf: gvp
 
image:
  registry: pureengage-docker-staging.jfrog.io
  repository: gvp/gvp_sd
  tag: "{{ .Chart.AppVersion }}"
  pullPolicy: IfNotPresent
 
## PVCs defined
# none
 
## Define service for application.
service:
  name: gvp-sd
  type: ClusterIP
  port: 8080
 
## Application configuration parameters.
env:
  MCP_SVC_NAME: "gvp-mcp"
  EXTERNAL_CONSUL_SERVER: ""
  CONSUL_PORT: "8501"
  CONFIG_SERVER_HOST: "gvp-configserver"
  CONFIG_SERVER_PORT: "8888"
  CONFIG_SERVER_APP: "default"
  HTTP_SERVER_PORT: "8080"
  METRICS_EXPORTER_PORT: "9090"
  DEF_MCP_FOLDER: "MCP_Configuration_Unit\\MCP_LRG"
  TEST_MCP_FOLDER: "MCP_Configuration_Unit_Test\\MCP_LRG"
  SYNC_INIT_DELAY: "10000"
  SYNC_PERIOD: "60000"
  MCP_PURGE_PERIOD_MINS: "0"
  EMAIL_METERING_FACTOR: "10"
  RECORDINGS_CONTAINER: "ccerp-recordings"
  TENANT_KV_FOLDER: "tenants"
  TENANT_CONFIGMAP_FOLDER: "/etc/config"
  SMTP_SERVER: "smtp-relay.smtp.svc.cluster.local"
 
## Secrets storage related settings
secrets:
  # Used for pulling images/containers from the repositories.
  imagePull:
    - name: pureengage-docker-dev
    - name: pureengage-docker-staging
   
  # If k8s is true, k8s will be used, else vault secret will be used.
  configServer:
    k8s: true
    k8sSecretName: configserver-secret
    k8sUserKey: username
    k8sPasswordKey: password
    vaultSecretName: "/configserver-secret"
    vaultUserKey: "configserver-username"
    vaultPasswordKey: "configserver-password"
 
  # If k8s is true, k8s will be used, else vault secret will be used.
  consul:
    k8s: true
    k8sTokenName: "shared-consul-consul-gvp-token"
    k8sTokenKey: "consul-consul-gvp-token"
    vaultSecretName: "/consul-secret"
    vaultSecretKey: "consul-consul-gvp-token"
 
  # GTTS key, password via k8s secret, if k8s is true.  If false, this data comes from tenant profile.
  gtts:
    k8s: false
    k8sSecretName: gtts-secret
    EncryptedKey: encrypted-key
    PasswordKey: password
 
ingress:
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: chart-example.local
      paths: []
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local
 
resources:
  requests:
    memory: "2Gi"
    cpu: "1000m"
  limits:
    memory: "2Gi"
    cpu: "1000m"
 
## App containers' Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
##
## Containers should run as genesys user and cannot use elevated permissions
## Pod level security context
podSecurityContext:
  fsGroup: 500
  runAsUser: 500
  runAsGroup: 500
  runAsNonRoot: true
 
## Container security context   
securityContext:
  runAsUser: 500
  runAsGroup: 500
  runAsNonRoot: true
 
## Priority Class
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
## NOTE: this is an optional parameter
##
priorityClassName: system-cluster-critical
 
## Affinity for assignment.
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}
 
## Node labels for assignment.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}
 
## Tolerations for assignment.
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
 
## Service/Pod Monitoring Settings
prometheus:
  # Enable for Prometheus operator
  podMonitor:
    enabled: true
 
## Enable network policies or not
networkPolicies:
  enabled: false
 
## DNS configuration options
dnsConfig:
  options:
    - name: ndots
      value: "3"

Verify the deployed resources

Verify the deployed resources from the Google Cloud console/CLI.

3. GVP Reporting Server

Secrets creation

Create the following secrets which are required for the service deployment.

rs-dbreader-password

db_hostname:

db_name:

db_password:

db_username:

rs-dbreader-password-secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: rs-dbreader-password
  namespace: gvp
type: Opaque 
data:
  db_username: <base64 encoded value>
  db_password: <base64 encoded value>
  db_hostname: bXNzcWxzZXJ2ZXJvcGVuc2hpZnQuZGF0YWJhc2Uud2luZG93cy5uZXQ=
  db_name: cnNfZ3Zw
Execute the following command:
kubectl create -f rs-dbreader-password-secret.yaml
shared-gvp-rs-sqlserver-secret

db-admin-password:

db-reader-password:

shared-gvp-rs-sqlserver-secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: shared-gvp-rs-sqlserver-secret
  namespace: gvp
type: Opaque 
data:
  db-admin-password: <value>
  db-reader-password: <value>
Execute the following command:
kubectl create -f shared-gvp-rs-sqlserver-secret.yaml

Persistent Volumes creation

Create the following PVs which are required for the service deployment.

gvp-rs-0

gvp-rs-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: gvp-rs-0
  namespace: gvp
spec:
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: gvp
  nfs:
    path: /export/vol1/PAT/gvp/rs-01
    server: 192.168.30.51

Execute the following command:

kubectl create -f gvp-rs-pv.yaml

Install Helm chart

Download the required Helm chart release from the JFrog repository and install. Refer to Helm Chart URLs.

helm install gvp-rs ./<gvp-rs-helm-artifact> -f gvp-rs-values.yaml


The following values should be set in your values.yaml:

  • priorityClassName >> Set to a priority class that exists on the cluster (or create one)
  • imagePullSecrets >> Set to your pull secret name
  • keyVaultSecret: false >> Make sure this is false to force the use of k8s secrets
  • storageClass: genesys-gvp >> Set to your storage class

gvp-rs-values.yaml

## Global Parameters
## Add labels to all the deployed resources
##
labels:
  enabled: true
  serviceGroup: "gvp"
  componentType: "shared"
 
serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name:
 
## Primary App Configuration
##
# primaryApp:
# type: ReplicaSet
# Should include the defaults for replicas
deployment:
  replicaCount: 1
  strategy: Recreate
  namespace: gvp
nameOverride: ""
fullnameOverride: ""
 
image:
  registry: pureengage-docker-staging.jfrog.io
  gvprsrepository: gvp/gvp_rs
  snmprepository: gvp/gvp_snmp
  rsinitrepository: gvp/gvp_rs_init
  rstag:
  rsinittag:
  snmptag: v9.0.040.07
  pullPolicy: Always
  imagePullSecrets:
    - name: "pureengage-docker-staging"
 
## liveness and readiness probes
## !!! THESE OPTIONS SHOULD NOT BE CHANGED UNLESS INSTRUCTED BY GENESYS !!!
livenessValues:
  path: /ems-rs/components
  initialDelaySeconds: 30
  periodSeconds: 120
  timeoutSeconds: 3
  failureThreshold: 3
 
readinessValues:
  path: /ems-rs/components
  initialDelaySeconds: 10
  periodSeconds: 60
  timeoutSeconds: 3
  failureThreshold: 3
 
## PVCs defined
volumes:
  pvc:
    storageClass: managed-premium
    claimSize: 20Gi
    activemqAndLocalConfigPath: "/billing/gvp-rs"
 
## Define service(s) for application.  Fields may need to be modified based on `type`
service:
  type: ClusterIP
  restapiport: 8080
  activemqport: 61616
  envinjectport: 443
  dnsport: 53
  configserverport: 8888
  snmpport: 1705
 
## ConfigMaps with Configuration
## Use Config Map for creating environment variables
context:
  env:
    CFGAPP: default
    GVP_RS_SERVICE_HOSTNAME: gvp-rs.gvp.svc.cluster.local
    #CFGPASSWORD: password
    #CFGUSER: default
    CFG_HOST: gvp-configserver.gvp.svc.cluster.local
    CFG_PORT: '8888'
    CMDLINE: ./rs_startup.sh
    DBNAME: gvp_rs
    #DBPASS: 'jbIKfoS6LpfgaU$E'
    DBUSER: openshiftadmin
    rsDbSharedUsername: openshiftadmin
    DBPORT: 1433
    ENVTYPE: staging
    GenesysIURegion: westus2
    localconfigcachepath: /billing/gvp-rs/data/cache
    HOSTFOLDER: Hosts
    HOSTOS: CFGRedHatLinux
    LCAPORT: '4999'
    MSSQLHOST: mssqlserveropenshift.database.windows.net
    RSAPP: azure_rs
    RSJVM_INITIALHEAPSIZE: 500m
    RSJVM_MAXHEAPSIZE: 1536m
    RSFOLDER: Applications
    RS_VERSION: 9.0.032.22
    STDOUT: 'true'
    WRKDIR: /usr/local/genesys/rs/
    SNMPAPP: azure_rs_snmp
    SNMP_WORKDIR: /usr/sbin
    SNMP_CMDLINE: snmpd
    SNMPFOLDER: Applications
 
  RSCONFIG:
    messaging:
      activemq.memoryUsageLimit: "256 mb"
      activemq.dataDirectory: "/billing/gvp-rs/data/activemq"
    log:
      verbose: "trace"
      trace: "stdout"
    dbmp:
      rs.db.retention.operations.daily.default: "40"
      rs.db.retention.operations.monthly.default: "40"
      rs.db.retention.operations.weekly.default: "40"
      rs.db.retention.var.daily.default: "40"
      rs.db.retention.var.monthly.default: "40"
      rs.db.retention.var.weekly.default: "40"
      rs.db.retention.cdr.default: "40"
 
# Default secrets storage to k8s secrets with csi able to be optional
secret:
  # keyVaultSecret is a flag to choose between secret types (k8s or CSI). If keyVaultSecret is set to false, k8s secrets will be used
  keyVaultSecret: false
  #RS SQL server secret
  rsSecretName: shared-gvp-rs-sqlserver-secret
  # secretProviderClassName will not be used when keyVaultSecret is set to false
  secretProviderClassName: keyvault-gvp-rs-sqlserver-secret-00
  dbreadersecretFileName: db-reader-password
  dbadminsecretFileName: db-admin-password
  #Configserver secret
  #If keyVaultSecret set to false the below parameters will not be used.
  configserverProviderClassName: gvp-configserver-secret
  cfgSecretFileNameForCfgUsername: configserver-username
  cfgSecretFileNameForCfgPassword: configserver-password
  #If keyVaultSecret set to true the below parameters will not be used.
  cfgServerSecretName: configserver-secret
  cfgSecretKeyNameForCfgUsername: username
  cfgSecretKeyNameForCfgPassword: password
 
## Ingress configuration
ingress:
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
  # kubernetes.io/tls-acme: "true"
  hosts:
    - host: chart-example.local
      paths: []
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local
 
networkPolicies:
  enabled: false
 
## primaryAppresource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resourceForRS:
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  requests:
    memory: "500Mi"
    cpu: "200m"
  limits:
    memory: "1Gi"
    cpu: "300m"
 
resoueceForSnmp:
  requests:
    memory: "500Mi"
    cpu: "100m"
  limits:
    memory: "1Gi"
    cpu: "150m"
 
## primaryApp containers' Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
##
## Containers should run as genesys user and cannot use elevated permissions
securityContext:
  runAsNonRoot: true
  runAsUser: 500
  runAsGroup: 500
 
podSecurityContext:
  fsGroup: 500
 
## Priority Class
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
##
priorityClassName: ""
 
## Affinity for assignment.
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}
 
## Node labels for assignment.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}
 
## Tolerations for assignment.
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
 
## Extra labels
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
##
#  labels: {}
 
## Extra Annotations
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
##
#  annotations: {}
 
## Service/Pod Monitoring Settings
monitoring:
  podMonitorEnabled: true
  prometheusRulesEnabled: true
  grafanaEnabled: true
 
monitor:
  prometheusPort: 9116
  monitorName: gvp-monitoring
  module: [if_mib]
  target: [127.0.0.1:1161]
 
##DNS Settings
dnsConfig:
  options:
    - name: ndots
      value: "3"

Verify the deployed resources

Verify the deployed resources from the Google Cloud console/CLI.

4. GVP Resource Manager

Note: RM (and components deployed after it) will not pass readiness checks until an MCP has registered properly, because the service is not available without MCPs.

Persistent Volumes creation

Create the following PVs which are required for the service deployment.

Note: If your cluster supports dynamic provisioning of Persistent Volumes, this step can be skipped; the volumes will be created by the provisioner.
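For dynamic provisioning, a StorageClass backed by an NFS provisioner can create the volumes automatically. A minimal sketch, assuming the community nfs-subdir-external-provisioner has been installed separately (the provisioner string and parameter shown are that project's defaults; the class name must match the storageClassName used in the manifests and charts below):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gvp            # must match storageClassName in the PV/PVC definitions
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner   # assumes this provisioner is installed
parameters:
  archiveOnDelete: "false"
```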

gvp-rm-01

gvp-rm-01-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: gvp-rm-01
spec:
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: gvp
  nfs:
    path:  /export/vol1/PAT/gvp/rm-01
    server: 192.168.30.51

Execute the following command:

kubectl create -f gvp-rm-01-pv.yaml

gvp-rm-02

gvp-rm-02-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: gvp-rm-02
spec:
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: gvp
  nfs:
    path:  /export/vol1/PAT/gvp/rm-02
    server: 192.168.30.51

Execute the following command:

kubectl create -f gvp-rm-02-pv.yaml

gvp-rm-logs-01

gvp-rm-logs-01-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: gvp-rm-logs-01
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: gvp
  nfs:
    path:  /export/vol1/PAT/gvp/rm-logs-01
    server: 192.168.30.51

Execute the following command:

kubectl create -f gvp-rm-logs-01-pv.yaml

gvp-rm-logs-02

gvp-rm-logs-02-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: gvp-rm-logs-02
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: gvp
  nfs:
    path:  /export/vol1/PAT/gvp/rm-logs-02
    server: 192.168.30.51

Execute the following command:

kubectl create -f gvp-rm-logs-02-pv.yaml
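Because the four RM manifests differ only in name, size, reclaim policy, and NFS path, they can also be generated from a single loop, which avoids copy-paste mistakes. A sketch (values taken from the example manifests above; change NFS_SERVER for your environment):

```shell
#!/bin/sh
# Generate the four RM PV manifests from one template.
NFS_SERVER=192.168.30.51
for spec in "gvp-rm-01:30Gi:Retain:rm-01" \
            "gvp-rm-02:30Gi:Retain:rm-02" \
            "gvp-rm-logs-01:10Gi:Recycle:rm-logs-01" \
            "gvp-rm-logs-02:10Gi:Recycle:rm-logs-02"; do
  name=${spec%%:*}; rest=${spec#*:}       # PV name
  size=${rest%%:*}; rest=${rest#*:}       # storage size
  policy=${rest%%:*}; subdir=${rest##*:}  # reclaim policy, NFS subdirectory
  cat > "${name}-pv.yaml" <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ${name}
spec:
  capacity:
    storage: ${size}
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: ${policy}
  storageClassName: gvp
  nfs:
    path: /export/vol1/PAT/gvp/${subdir}
    server: ${NFS_SERVER}
EOF
done
# Then apply each generated manifest with: kubectl create -f <file>
```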

Install Helm chart

Download the required Helm chart release from the JFrog repository and install it. Refer to Helm Chart URLs.

helm install gvp-rm ./<gvp-rm-helm-artifact> -f gvp-rm-values.yaml

You must set the following values in your values.yaml for Configuration Server:

  • priorityClassName >> Set to a priority class that exists on the cluster (or create one first).
  • imagePullSecrets >> Set to your pull secret name.
  • cfgServerSecretName >> Set this if you changed it from the default.
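If the priority class you reference does not exist yet, create it before installing the chart. A minimal sketch (the name gvp-priority and the value are illustrative; the name must match the priorityClassName value in your values.yaml):

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: gvp-priority   # illustrative; must match priorityClassName in values.yaml
value: 1000000
globalDefault: false
description: "Priority class for GVP pods"
```

Apply it with kubectl create -f before running helm install.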

gvp-rm-values.yaml

## Global Parameters
## Add labels to all the deployed resources
##
labels:
  enabled: true
  serviceGroup: "gvp"
  componentType: "shared"
 
## Primary App Configuration
##
# primaryApp:
# type: ReplicaSet
# Should include the defaults for replicas
deployment:
  replicaCount: 2
  deploymentEnv: "UPDATE_ENV"
  namespace: gvp
  clusterDomain: "svc.cluster.local"
nameOverride: ""
fullnameOverride: ""
 
image:
  registry: pureengage-docker-staging.jfrog.io
  gvprmrepository: gvp/gvp_rm
  cfghandlerrepository: gvp/gvp_rm_cfghandler
  snmprepository: gvp/gvp_snmp
  gvprmtestrepository: gvp/gvp_rm_test
  cfghandlertag:
  rmtesttag:
  rmtag:
  snmptag: v9.0.040.07
  pullPolicy: Always
  imagePullSecrets:
    - name: "pureengage-docker-staging"
 
dnsConfig:
  options:
    - name: ndots
      value: "3"
 
# Pod termination grace period 15 mins.
gracePeriodSeconds: 900
 
## liveness and readiness probes
## !!! THESE OPTIONS SHOULD NOT BE CHANGED UNLESS INSTRUCTED BY GENESYS !!!
livenessValues:
  path: /rm/liveness
  initialDelaySeconds: 60
  periodSeconds: 90
  timeoutSeconds: 20
  failureThreshold: 3
 
readinessValues:
  path: /rm/readiness
  initialDelaySeconds: 10
  periodSeconds: 60
  timeoutSeconds: 20
  failureThreshold: 3
 
## PVCs defined
volumes:
  billingpvc:
    storageClass: managed-premium
    claimSize: 20Gi
    mountPath: "/rm"
  logpvc:
    EnablePVForLogStorage: true
    storageClass: managed-premium
    claimSize: 5Gi
    accessMode: ReadWriteOnce
    mountPath: "/mnt/log"
    # If EnablePVForLogStorage is set to false, a PV is not used for log storage and the given host path is used instead.
    LogStorageHostPath: /mnt/log
 
## Define service(s) for the application. Fields may need to be modified based on `type`
service:
  type: ClusterIP
  port: 5060
  rmHealthCheckAPIPort: 8300
 
## ConfigMaps with Configuration
## Use Config Map for creating environment variables
context:
  env:
    cfghandler:
      CFGSERVER: gvp-configserver.gvp.svc.cluster.local
      CFGSERVERBACKUP: gvp-configserver.gvp.svc.cluster.local
      CFGPORT: "8888"
      CFGAPP: "default"
      RMAPP: "azure_rm"
      RMFOLDER: "Applications\\RM_MicroService\\RM_Apps"
      HOSTFOLDER: "Hosts\\RM_MicroService"
      MCPFOLDER: "MCP_Configuration_Unit\\MCP_LRG"
      SNMPFOLDER: "Applications\\RM_MicroService\\SNMP_Apps"
      EnvironmentType: "prod"
      CONFSERVERAPP: "confserv"
      RSAPP: "azure_rs"
      SNMPAPP: "azure_rm_snmp"
      STDOUT: "true"
      VOICEMAILSERVICEDIDNUMBER: "55551111"
 
  RMCONFIG:
    rm:
      sip-header-for-dnis: "Request-Uri"
      ignore-gw-lrg-configuration: "true"
      ignore-ruri-tenant-dbid: "true"
    log:
      verbose: "trace"
    subscription:
      sip.transport.dnsharouting: "true"
      sip.headerutf8verification: "false"
      sip.transport.setuptimer.tcp: "5000"
      sip.threadpoolsize: "1"
    registrar:
      sip.transport.dnsharouting: "true"
      sip.headerutf8verification: "false"
      sip.transport.setuptimer.tcp: "5000"
      sip.threadpoolsize: "1"
    proxy:
      sip.transport.dnsharouting: "true"
      sip.headerutf8verification: "false"
      sip.transport.setuptimer.tcp: "5000"
      sip.threadpoolsize: "16"
      sip.maxtcpconnections: "1000"
    monitor:
      sip.transport.dnsharouting: "true"
      sip.maxtcpconnections: "1000"
      sip.headerutf8verification: "false"
      sip.transport.setuptimer.tcp: "5000"
      sip.threadpoolsize: "1"
    ems:
      rc.cdr.local_queue_path: "/rm/ems/data/cdrQueue_rm.db"
      rc.ors.local_queue_path: "/rm/ems/data/orsQueue_rm.db"
 
# Default secrets storage to k8s secrets with csi able to be optional
secret:
  # keyVaultSecret is a flag to switch between secret types (k8s or CSI). If keyVaultSecret is set to false, k8s secrets are used.
  keyVaultSecret: false
  # If keyVaultSecret is set to false, the parameters below are not used.
  configserverProviderClassName: gvp-configserver-secret
  cfgSecretFileNameForCfgUsername: configserver-username
  cfgSecretFileNameForCfgPassword: configserver-password
  # If keyVaultSecret is set to true, the parameters below are not used.
  cfgServerSecretName: configserver-secret
  cfgSecretKeyNameForCfgUsername: username
  cfgSecretKeyNameForCfgPassword: password
 
## Ingress configuration
ingress:
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  paths: []
  hosts:
    - chart-example.local
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local
networkPolicies:
  enabled: false
sip:
  serviceName: sipnode
 
## primaryApp resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resourceForRM:
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  requests:
    memory: "1Gi"
    cpu: "200m"
    ephemeral-storage: "10Gi"
  limits:
    memory: "2Gi"
    cpu: "250m"
 
resoueceForSnmp:
  requests:
    memory: "500Mi"
    cpu: "100m"
  limits:
    memory: "1Gi"
    cpu: "150m"
 
## primaryApp containers' Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
##
## Containers should run as genesys user and cannot use elevated permissions
securityContext:
  fsGroup: 500
  runAsNonRoot: true
  runAsUserRM: 500
  runAsGroupRM: 500
  runAsUserCfghandler: 500
  runAsGroupCfghandler: 500
 
## Priority Class
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
##
priorityClassName: ""
 
## Affinity for assignment.
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}
 
## Node labels for assignment.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector:
 
 
## Tolerations for assignment.
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
 
## Service/Pod Monitoring Settings
monitoring:
  podMonitorEnabled: true
  prometheusRulesEnabled: true
  grafanaEnabled: true
 
monitor:
  monitorName: gvp-monitoring
  prometheusPort: 9116
  prometheusPortlogs: 8200
  logFilePrefixName: RM
  module: [if_mib]
  target: [127.0.0.1:1161]

Verify the deployed resources

Verify the deployed resources from the Google Cloud console or with the kubectl CLI.

5. GVP Media Control Platform

Persistent Volumes creation

Create the following PVs which are required for the service deployment.

gvp-mcp-logs-01

gvp-mcp-logs-01-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: gvp-mcp-logs-01
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: gvp
  nfs:
    path:  /export/vol1/PAT/gvp/mcp-logs-01
    server: 192.168.30.51

Execute the following command:

kubectl create -f gvp-mcp-logs-01-pv.yaml

gvp-mcp-logs-02

gvp-mcp-logs-02-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: gvp-mcp-logs-02
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: gvp
  nfs:
    path:  /export/vol1/PAT/gvp/mcp-logs-02
    server: 192.168.30.51

Execute the following command:

kubectl create -f gvp-mcp-logs-02-pv.yaml

gvp-mcp-rup-volume-01

gvp-mcp-rup-volume-01-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: gvp-mcp-rup-volume-01
spec:
  capacity:
    storage: 40Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: disk-premium
  nfs:
    path:  /export/vol1/PAT/gvp/mcp-rup-volume-01
    server: 192.168.30.51

Execute the following command:

kubectl create -f gvp-mcp-rup-volume-01-pv.yaml

gvp-mcp-rup-volume-02

gvp-mcp-rup-volume-02-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: gvp-mcp-rup-volume-02
spec:
  capacity:
    storage: 40Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: disk-premium
  nfs:
    path:  /export/vol1/PAT/gvp/mcp-rup-volume-02
    server: 192.168.30.51

Execute the following command:

kubectl create -f gvp-mcp-rup-volume-02-pv.yaml

gvp-mcp-recording-volume-01

gvp-mcp-recordings-volume-01-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: gvp-mcp-recording-volume-01
spec:
  capacity:
    storage: 40Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: gvp
  nfs:
    path:  /export/vol1/PAT/gvp/mcp-recording-volume-01
    server: 192.168.30.51

Execute the following command:

kubectl create -f gvp-mcp-recordings-volume-01-pv.yaml

gvp-mcp-recording-volume-02

gvp-mcp-recordings-volume-02-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: gvp-mcp-recording-volume-02
spec:
  capacity:
    storage: 40Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: gvp
  nfs:
    path:  /export/vol1/PAT/gvp/mcp-recording-volume-02
    server: 192.168.30.51

Execute the following command:

kubectl create -f gvp-mcp-recordings-volume-02-pv.yaml
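Because these manifests are usually produced by copy-paste, it is easy to end up with two PVs pointing at the same NFS export, which would make two volumes share one backing directory. A quick local sanity check before applying (pure shell, assuming the *-pv.yaml files are in the current directory):

```shell
#!/bin/sh
# Collect the NFS path from every PV manifest and flag any duplicates.
grep -h 'path:' ./*-pv.yaml 2>/dev/null | awk '{print $2}' | sort | uniq -d > dup-paths.txt
if [ -s dup-paths.txt ]; then
  echo "Duplicate NFS paths found:"
  cat dup-paths.txt
else
  echo "All PV paths are unique."
fi
```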

Install Helm chart

Download the required Helm chart release from the JFrog repository and install it. Refer to Helm Chart URLs.

helm install gvp-mcp ./<gvp-mcp-helm-artifact> -f gvp-mcp-values.yaml

You must set the following values in your values.yaml:

  • Set logicalResourceGroup: "MCP_Configuration_Unit" to add MCPs to the real Configuration Unit (rather than the test one).

gvp-mcp-values.yaml

## Default values for gvp-mcp.
## This is a YAML-formatted file.
## Declare variables to be passed into your templates.
 
## Global Parameters
## Add labels to all the deployed resources
##
podLabels: {}
 
## Add annotations to all the deployed resources
##
podAnnotations: {}
 
serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name:
 
## Deployment Configuration
deploymentEnv: "UPDATE_ENV"
replicaCount: 2
terminationGracePeriod: 3600
 
## Name and dashboard overrides
nameOverride: ""
fullnameOverride: ""
dashboardReplicaStatefulsetFilterOverride: ""
 
## Base Labels. Please do not change these.
serviceName: gvp-mcp
component: shared
partOf: gvp
 
## Command-line arguments to the MCP process
args:
  - "gvp-configserver"
  - "8888"
  - "default"
  - "/etc/mcpconfig/config.ini"
 
## Container image repo settings.
image:
  mcp:
    registry: pureengage-docker-staging.jfrog.io
    repository: gvp/multicloud/gvp_mcp
    tag: "{{ .Chart.AppVersion }}"
    pullPolicy: IfNotPresent
  serviceHandler:
    registry: pureengage-docker-staging.jfrog.io
    repository: gvp/multicloud/gvp_mcp_servicehandler
    tag: "{{ .Chart.AppVersion }}"
    pullPolicy: IfNotPresent
  configHandler:
    registry: pureengage-docker-staging.jfrog.io
    repository: gvp/multicloud/gvp_mcp_confighandler
    tag: "{{ .Chart.AppVersion }}"
    pullPolicy: IfNotPresent
  snmp:
    registry: pureengage-docker-staging.jfrog.io
    repository: gvp/multicloud/gvp_snmp
    tag: v9.0.040.21
    pullPolicy: IfNotPresent
  rup:
    registry: pureengage-docker-staging.jfrog.io
    repository: cce/recording-provider
    tag: 9.0.000.00.b.1432.r.ef30441
    pullPolicy: IfNotPresent
   
## MCP specific settings
mcp:
  ## Settings for liveness and readiness probes of MCP
  ## !!! THESE VALUES SHOULD NOT BE CHANGED UNLESS INSTRUCTED BY GENESYS !!!
  livenessValues:
    path: /mcp/liveness
    initialDelaySeconds: 30
    periodSeconds: 60
    timeoutSeconds: 20
    failureThreshold: 3
    healthCheckAPIPort: 8300       
 
  # Used instead of startupProbe.  This runs all initial self-tests, and could take some time.
  # Timeout is < 1 minute (with reduced test set), and interval/period is 1 minute.
  readinessValues:
    path: /mcp/readiness
    initialDelaySeconds: 30
    periodSeconds: 60
    timeoutSeconds: 50
    failureThreshold: 3
    healthCheckAPIPort: 8300
  # Location of configuration file for MCP
  # initialConfigFile is the default template
  # finalConfigFile is the final configuration after overrides are applied (see mcpConfig section for overrides)
  initialConfigFile: "/etc/config/config.ini"
  finalConfigFile: "/etc/mcpconfig/config.ini"
 
  # Dev and QA deployments will use MCP_Configuration_Unit_Test LRG and shared deployments will use MCP_Configuration_Unit LRG
  logicalResourceGroup: "MCP_Configuration_Unit"
 
  # Threshold values for the various alerts in podmonitor.
  alerts:
    cpuUtilizationAlertLimit: 70
    memUtilizationAlertLimit: 90
    workingMemAlertLimit: 7
    maxRestarts: 2
    persistentVolume: 20
    serviceHealth: 40
    recordingError: 7
    configServerFailure: 0
    dtmfError: 1
    dnsError: 6
    totalError: 120
    selfTestError: 25
    fetchErrorMin: 120
    fetchErrorMax: 220
    execError: 120
    sdpParseError: 1
    mediaWarning: 3
    mediaCritical: 7
    fetchTimeout: 10
    fetchError: 10
    ngiError: 12
    ngi4xx: 10
    recPostError: 7
    recOpenError: 1
    recStartError: 3
    recCertError: 7
    reportingDbInitError: 1
    reportingFlushError: 1
    grammarLoadError: 1
    grammarSynError: 1
    dtmfGrammarLoadError: 1
    dtmfGrammarError: 1
    vrmOpenSessError: 1
    wsTokenCreateError: 1
    wsTokenConfigError: 1
    wsTokenFetchError: 1
    wsOpenSessError: 1
    wsProtoError: 1
    grpcConfigError: 1
    grpcSSLRootCertError: 1
    grpcGoogleCredentialError: 1
    grpcRecognizeStartError: 7
    grpcWriteError: 7
    grpcRecognizeError: 7
    grpcTtsError: 7
    streamerOpenSessionError: 1
    streamerProtocolError: 1
    msmlReqError: 7
    dnsResError: 6
    rsConnError: 150
 
## RUP (Recording Uploader) Settings
rup:
  ## Settings for liveness and readiness probes of RUP
  ## !!! THESE VALUES SHOULD NOT BE CHANGED UNLESS INSTRUCTED BY GENESYS !!!
  livenessValues:
    path: /health/live
    initialDelaySeconds: 30
    periodSeconds: 60
    timeoutSeconds: 20
    failureThreshold: 3
    healthCheckAPIPort: 8080       
 
  readinessValues:
    path: /health/ready
    initialDelaySeconds: 30
    periodSeconds: 30
    timeoutSeconds: 20
    failureThreshold: 3
    healthCheckAPIPort: 8080
 
  ## RUP PVC defines
  rupVolume:
    storageClass: "genesys"
    accessModes: "ReadWriteOnce"
    volumeSize: 40Gi
 
  ## Other settings for RUP
  recordingsFolder: "/pvolume/recordings"
  recordingsCache: "/pvolume/recording_cache"
  rupProvisionerEnabled: "false"
  decommisionDestType: "WebDAV"
  decommisionDestWebdavUrl: "http://gvp-central-rup:8180"
  decommisionDestWebdavUsername: ""
  decommisionDestWebdavPassword: ""
  diskFullDestType: "WebDAV"
  diskFullDestWebdavUrl: "http://gvp-central-rup:8180"
  diskFullDestWebdavUsername: ""
  diskFullDestWebdavPassword: ""
  cpUrl: "http://cce-conversation-provider.cce.svc.cluster.local"
  unrecoverableLostAction: "uploadtodefault"
  unrecoverableDestType: "Azure"
  unrecoverableDestAzureAccountName: "gvpwestus2dev"
  unrecoverableDestAzureContainerName: "ccerp-unrecoverable"
  logJsonEnable: true
  logLevel: INFO
  logConsoleLevel: INFO
 
  ## RUP resource requests and limits
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  resources:
    requests:
      memory: "128Mi"
      cpu: "100m"
      ephemeral-storage: "1Gi"
    limits:
      memory: "2Gi"
      cpu: "1000m"
 
## PVCs defined.  RUP one is under "rup" label.
recordingStorage:
  storageClass: "genesys"
  accessModes: "ReadWriteOnce"
  volumeSize: 40Gi
 
# If enablePV is set to false, no PVC is used and the path in hostPath is used for log storage.
logStorage:
  enablePV: true
  storageClass: genesys
  accessModes: ReadWriteOnce
  volumeSize: 5Gi
  hostPath: /mnt/log
 
## Service Handler configuration. Note, the port values CANNOT be changed here alone, and should not be changed.
serviceHandler:
  serviceHandlerPort: 8300
  mcpSipPort: 5070
  consuleExternalHost: ""
  consulPort: 8501
  registrationInterval: 10000
  mcpHealthCheckInterval: 30s
  mcpHealthCheckTimeout: 10s
 
## Config Server values passed to RUP, etc. These should not be changed.
configServer:
  host: gvp-configserver
  port: "8888"
 
## Secrets storage related settings - k8s secrets or csi
secrets:
  # Used for pulling images/containers from the repositories.
  imagePull:
    - name: pureengage-docker-dev
    - name: pureengage-docker-staging
   
  # Config Server secrets. If k8s is false, csi will be used, else k8s will be used.
  configServer:
    k8s: true
    secretName: configserver-secret
    dbUserKey: username
    dbPasswordKey: password
    csiSecretProviderClass: keyvault-gvp-gvp-configserver-secret
   
  # Consul secrets. If k8s is false, csi will be used, else k8s will be used.
  consul:
    k8s: true
    secretName: shared-consul-consul-gvp-token
    secretKey: consul-consul-gvp-token
    csiSecretProviderClass: keyvault-consul-consul-gvp-token
 
## Ingress configuration
ingress:
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: chart-example.local
      paths: []
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local
 
## App resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
## Values for MCP, and the default ones for other containers. RUP ones are under "rup" label.
##
resourcesMcp:
  requests:
    memory: "200Mi"
    cpu: "250m"
    ephemeral-storage: "1Gi"
  limits:
    memory: "2Gi"
    cpu: "300m"
 
resourcesDefault:
  requests:
    memory: "128Mi"
    cpu: "100m"
  limits:
    memory: "128Mi"
    cpu: "100m"
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi
 
## App containers' Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
##
## Containers should run as genesys user and cannot use elevated permissions
## Pod level security context
podSecurityContext:
  fsGroup: 500
  runAsUser: 500
  runAsGroup: 500
  runAsNonRoot: true
 
## Container security context
securityContext:
  # fsGroup: 500
  runAsUser: 500
  runAsGroup: 500
  runAsNonRoot: true
 
## Priority Class
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
## NOTE: this is an optional parameter
##
priorityClassName: system-cluster-critical
 
# affinity: {}
 
## Node labels for assignment.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector:
  #genesysengage.com/nodepool: realtime
 
## Tolerations for assignment.
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
  # - key: "kubernetes.azure.com/scalesetpriority"
  #   operator: "Equal"
  #   value: "spot"
  #   effect: "NoSchedule"
  # - key: "k8s.genesysengage.com/nodepool"
  #   operator: "Equal"
  #   value: "compute"
  #   effect: "NoSchedule"
  # - key: "kubernetes.azure.com/scalesetpriority"
  #   operator: "Equal"
  #   value: "compute"
  #   effect: "NoSchedule"
  #- key: "k8s.genesysengage.com/nodepool"
  #  operator: Exists
  #  effect: NoSchedule
 
## Extra labels
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
##
## Use podLabels
#labels: {}
 
## Extra Annotations
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
##
## Use podAnnotations
#annotations: {}
 
## Autoscaling Settings
## Keda can be used as an alternative to HPA, but only to scale MCPs based on a cron schedule (UTC).
## If useKeda is set to true, Keda is used for scaling; otherwise HPA is used directly.
useKeda: true
 
## If Keda is enabled, only the following parameters are supported, and default HPA settings
## will be used within Keda.
keda:
  preScaleStart: "0 14 * * *"
  preScaleEnd: "0 2 * * *"
  preScaleDesiredReplicas: 4
  pollingInterval: 15
  cooldownPeriod: 300
 
## HPA Settings
# GVP-42512: PDB issue
# Always keep the following:
# minReplicas >= 2
# maxUnavailable = 1 
hpa:
  enabled: false
  minReplicas: 2
  maxUnavailable: 1
  maxReplicas: 4
  podManagementPolicy: Parallel
  targetCPUAverageUtilization: 20
  scaleupPeriod: 15
  scaleupPods: 4
  scaleupPercent: 50
  scaleupStabilizationWindow: 0
  scaleupPolicy: Max
  scaledownPeriod: 300
  scaledownPods: 2
  scaledownPercent: 10
  scaledownStabilizationWindow: 3600
  scaledownPolicy: Min
 
## Service/Pod Monitoring Settings
prometheus:
  mcp:
    name: gvp-mcp-snmp
    port: 9116
 
  rup:
    name: gvp-mcp-rup
    port: 8080
 
  podMonitor:
    enabled: true
 
grafana:
  enabled: false
 
  #log:
  #  name: gvp-mcp-log
  #  port: 8200
 
## Pod Disruption Budget Settings
podDisruptionBudget:
  enabled: true
 
## Enable network policies or not
networkPolicies:
  enabled: false
 
## DNS configuration options
dnsConfig:
  options:
    - name: ndots
      value: "3"
 
## Configuration overrides
mcpConfig:
  # MCP config overrides
  mcp.mpc.numdispatchthreads: 4
  mcp.log.verbose: "interaction"
  mcp.mpc.codec: "pcmu pcma telephone-event"
  mcp.mpc.transcoders: "PCM MP3"
  mcp.mpc.playcache.enable: 1
  mcp.fm.http_proxy: ""
  mcp.fm.https_proxy: ""
 
  #MRCP v2 ASR config overrides
  mrcpv2_asr.provision.vrm.client.connectpersetup: true
  mrcpv2_asr.provision.vrm.client.disablehotword: false
  mrcpv2_asr.provision.vrm.client.hotkeybasepath: "/usr/local/genesys/mcp/grammar/nuance/hotkey"
  mrcpv2_asr.provision.vrm.client.noduplicatedgramuri: true
  mrcpv2_asr.provision.vrm.client.sendswmsparams: false
  mrcpv2_asr.provision.vrm.client.transportprotocol: "MRCPv2"
  mrcpv2_asr.provision.vrm.client.sendloggingtag: true
  mrcpv2_asr.provision.vrm.client.resource.name: "NuanceASRv2"
  mrcpv2_asr.provision.vrm.client.resource.uri: "sip:mresources@speech-server-clusterip:5060"
  mrcpv2_asr.provision.vrm.client.tlscertificatekey: "/usr/local/genesys/mcp/config/x509_certificate.pem"
  mrcpv2_asr.provision.vrm.client.tlsprivatekey: "/usr/local/genesys/mcp/config/x509_certificate.pem"
  mrcpv2_asr.provision.vrm.client.tlspassword: ""
  mrcpv2_asr.provision.vrm.client.tlsprotocoltype: "TLSv1"
  mrcpv2_asr.provision.vrm.client.confidencescale: 1
  mrcpv2_asr.provision.vrm.client.sendsessionxml: true
  mrcpv2_asr.provision.vrm.client.supportfornuance11: true
  mrcpv2_asr.provision.vrm.client.uniquegramid: true
 
  #MRCP v2 TTS config overrides
  mrcpv2_tts.provision.vrm.client.connectpersetup: true
  mrcpv2_tts.provision.vrm.client.speechmarkerencoding: "UTF-8"
  mrcpv2_tts.provision.vrm.client.transportprotocol: "MRCPv2"
  mrcpv2_tts.provision.vrm.client.sendloggingtag: true
  mrcpv2_tts.provision.vrm.client.resource.name: "NuanceTTSv2"
  mrcpv2_tts.provision.vrm.client.resource.uri: "sip:mresources@speech-server-clusterip:5060"
  mrcpv2_tts.provision.vrm.client.tlscertificatekey: "/usr/local/genesys/mcp/config/x509_certificate.pem"
  mrcpv2_tts.provision.vrm.client.tlsprivatekey: "/usr/local/genesys/mcp/config/x509_certificate.pem"
  mrcpv2_tts.provision.vrm.client.tlspassword: ""
  mrcpv2_tts.provision.vrm.client.tlsprotocoltype: "TLSv1"
  mrcpv2_tts.provision.vrm.client.nospeechlanguageheader: true
  mrcpv2_tts.provision.vrm.client.sendsessionxml: true
  mrcpv2_tts.provision.vrm.client.supportfornuance11: true

Verify the deployed resources

Verify the deployed resources from the Google Cloud console or with the kubectl CLI.

Retrieved from "https://all.docs.genesys.com/GVP/Current/GVPPEGuide/Deploy (2024-10-06 14:21:17)"