PEC-REP/Current/GIMPEGuide/ConfigureGSP

This topic is part of the manual Genesys Info Mart Private Edition Guide for version Current of Reporting.

Learn how to configure GIM Stream Processor (GSP).

GSP Helm chart overrides

The GSP requires some deployment-specific configuration, which you provide by modifying the GSP's default Helm chart. You do this by creating override entries in the GSP's values.yaml file.

Download the GSP Helm charts from your image registry, using the appropriate credentials.

For information about how to download the Helm charts, see Downloading your Genesys Multicloud CX containers. To find the correct Helm chart version for your release, see Helm charts and containers for Genesys Info Mart. For general information about Helm chart overrides, see Overriding Helm chart values in the Genesys Multicloud CX Private Edition Guide.
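
For example, if your registry exposes the charts through a standard Helm chart repository, the download typically looks like the following sketch (the repository name, URL, credentials, and chart name are placeholders; adjust them for your registry and release):

helm repo add <your-helm-repo> https://<your-registry>/helm --username <registry-user> --password <registry-password>
helm repo update
helm pull <your-helm-repo>/gsp --version <chart-version>   # The chart name "gsp" is assumed; use the name published in your registry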

At minimum, you must create entries in the values.yaml file to specify key system information, as described in the following sections.

Important
Treat your modified values.yaml file as source code that you are responsible for maintaining, so that your overrides are preserved and available for reuse when you upgrade.

Image registry and pull secret

Image registry

Create an entry in the GSP's values.yaml file to specify the location of your image registry. This is the repository from which Kubernetes pulls images.

The location of the image registry is defined when you set up the environment for the GSP. It is represented in the system as the docker-registry. In the GSP Helm chart, the repository is represented as image: registry, and you can optionally set a container version (tag) for the image, as shown in the following example.

image: # The repository from which Kubernetes pulls images
  registry: <your_container_registry>  # The default registry is pureengage-docker-staging.jfrog.io
  tag: <image_tag> # The container image tag/version

Pull secret

When you set up your environment, you define a pull secret for the image registry (docker-registry). You must include the pull secret in the GSP's values.yaml file so that Kubernetes can pull from the repository.

imagePullSecrets:
  docker-registry: {<pull-secret>} # The credentials Kubernetes will use to pull the image from the registry

Note that other services use a different syntax than this to configure the repository pull secret, as follows:

imagePullSecrets:
  name: docker-registry

The Genesys Info Mart, GIM Stream Processor, and GIM Configuration Adaptor Helm charts all support advanced templating that allows Helm to create the pull secret automatically; hence the variation in syntax.
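
If you create the pull secret manually rather than relying on the Helm chart to create it, a typical command looks like the following sketch (the secret name docker-registry matches the name referenced above; the server, user, password, and namespace values are placeholders):

kubectl create secret docker-registry docker-registry \
  --docker-server=<your_container_registry> \
  --docker-username=<registry-user> \
  --docker-password=<registry-password> \
  --namespace=<gsp-namespace>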

Kafka


Kafka secret

If Kafka is configured with authentication, you must configure the Kafka secret so GSP can access Kafka. The Kafka secret is provisioned in the system as kafka-secrets when you set up the environment for GSP. Configure the Kafka secret by creating a Helm chart override in the values.yaml file.

kafka:
  password: <kafka-password> # Credentials for accessing Kafka. This secret is created during deployment.

Kafka bootstrap

To allow the Kafka service on GSP to align with the infrastructure Kafka service, make a Helm override entry with the location of the Kafka bootstrap.

kafka:
  bootstrap: <kafka-bootstrap-location> # The Kafka address to align with the infrastructure Kafka
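
Because the password and bootstrap settings are children of the same kafka object, you can combine them in a single override entry, for example (values are placeholders):

kafka:
  password: <kafka-password>             # Credentials for accessing Kafka
  bootstrap: <kafka-bootstrap-location>  # The Kafka address to align with the infrastructure Kafka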

Custom Kafka topic names

Some of the Kafka topics used by the GSP support customizing the topic name. If any topic name has been customized, ensure it is represented as a Helm chart override entry, using the kafka:topic parameter.
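
For example, an override for a customized topic name might look like the following sketch (the value is a placeholder; use the exact parameter path from the kafka section of your GSP Helm chart):

kafka:
  topic: <custom-topic-name>  # Override only if the topic name has been customized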

For a list of the Kafka topics that GSP produces and consumes, including which of those support customized naming, see Before you begin GSP deployment.

S3-compatible storage

S3 storage credentials

When you set up the environment for GSP, you provision S3-compatible object storage for GSP to use as a persistent data store. In the values.yaml file, record the credentials needed by GSP to access this storage.

gsp-s3: <s3-credentials> # Credentials for accessing S3-compatible storage

Enable S3-compatible storage

When you set up your environment for the GSP, you provision S3-compatible object storage for the GSP's persistent data store. You must enable this storage with override entries in the values.yaml file.

By default, GSP is configured to use Azure Blob Storage as the persistent data store. If you have provisioned Azure Blob Storage in your deployment, modify the following entries in the values.yaml file:

job:
  storage:
    gspPrefix: <prefix> # The URI path prefix under which GSP savepoints, checkpoints, and high availability data will be stored
    gcaSnapshots: <path> # The URI path under which GCA snapshots will be stored

To enable other types of S3-compatible storage, modify the following entries in the values.yaml file:

azure:
  enabled: false
job:
  storage:
    gspPrefix: <bucket-name> # The bucket where GSP savepoints, checkpoints, and high-availability data will be stored
    gcaSnapshots: <bucket-name> # The bucket where the GCA snapshots will be stored.
    s3: <s3-details> # The applicable details defined for the OBC or GCP bucket.

Note: The host parameter is ignored.

GKE example

azure:
  enabled: false
...
job:
  storage:
    host: gspstate{{.Values.short_location}}{{.Values.environment}}.blob.core.windows.net
    #gspPrefix: wasbs://gsp-state@{{ tpl .Values.job.storage.host . }}/{{ .Release.Name }}/
    gspPrefix: "s3p://test-example-bucket-one/{{ .Release.Name }}/"                                                                               
    #gcaSnapshots: wasbs://gca@{{ tpl .Values.job.storage.host . }}/
    gcaSnapshots: "s3p://test-example-bucket-one/gca/"                                                                  
    checkpoints: '{{ tpl .Values.job.storage.gspPrefix . }}checkpoints'
    savepoints: '{{ tpl .Values.job.storage.gspPrefix . }}savepoints'
    highAvailability: '{{ tpl .Values.job.storage.gspPrefix . }}ha'
    s3:
      endpoint: "https://storage.googleapis.com:443"
      accessKey: "<access Key>"
      secretKey: "<secret key>"
      pathStyleAccess: "true"

Kubernetes API

GSP uses Apache Flink for stateful stream processing, with communications handled via the Kubernetes API. To use the Kubernetes API, GSP must have the following permissions:

  • Verbs: get, list, watch, delete
    On resource: jobs
    API group: batch
    Comment: GSP uses these commands during upgrade and for a pre-upgrade hook to ensure that the previous version of GSP is stopped before upgrading to the new version.

  • Verbs: create, update, patch, get, list, watch, delete
    On resource: configmap
    API group: general ("")
    Comment: GSP uses these commands to:
      • Support Flink's Kubernetes high availability (HA) services
      • Record the path to the savepoint (periodically taken by the take-savepoint cron job and by the upgrade hook)
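
Purely as an illustration of these permissions, a Kubernetes Role that grants them might look like the following sketch (the Role name and namespace are placeholders; the GSP Helm chart may already create equivalent RBAC objects for you):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: <gsp-role>            # Placeholder name
  namespace: <gsp-namespace>  # Placeholder namespace
rules:
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["get", "list", "watch", "delete"]
  - apiGroups: [""]           # The core ("general") API group
    resources: ["configmaps"]
    verbs: ["create", "update", "patch", "get", "list", "watch", "delete"]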

Configure GSP behavior

You can specify values in the values.yaml file to override the default values of configuration options that control GSP behavior and to customize user data and Outbound field mappings.

You can override aspects of the default configuration to modify GSP behavior and customize the way data is stored in the Info Mart database.

You can customize the following:

  • Configuration options, to modify data-related aspects of GSP behavior
  • User data mappings, to map custom key-value pairs (KVPs), which are attached to event and reporting protocol data, to tables and columns in the Info Mart database
  • Outbound record field mappings, to map custom CX Contact record field data to Outbound-related tables and columns in the Info Mart database

Customize configuration option settings

For full information about the options you can configure, including the default and valid values, see GSP configuration options.

To configure options, edit the GSP values.yaml file. Under the cfgOptions object, specify the option and value in JSON format, noting the following:

  • Options are separately configurable by tenant and, where applicable, by media type or even at the level of individual queues (DNs or scripts).
  • Where an option can be configured at various levels, you can override a value set at a higher level (for example, for a particular media type in general) to set a different value for a particular lower-level object (for example, for that media type for an individual DN).
  • For certain options, see the note about configuration levels for details of the levels at which they can be set.

The entries in the values.yaml file are structured as follows:

cfgOptions:
    "<tenant_id>": |
      standard:
        <option 1>: <value>
      media:
        <media type 1>:
          <option 2>: <value 1>
          <option 3>: <value 1>
        <media type 2>:
          <option 2>: <value 2>
          <option 4>: <value 1>
      dn:
        <dn_id>:
          media:
            <media type 1>:
              <option 2>: <value 3>
      script:
        <script_id>:
          media:
            <media type 2>:
              <option 2>: <value 4> 
      ...

For an example, see the Example section at the bottom of this page.

Customize user data mapping

Genesys Info Mart uses a wide range of user-data KVPs from a number of upstream services to populate data in the Info Mart database. For full information about user data in Genesys Info Mart, see PEC-REP/Current/GIMPEGuide/UserData.

As described on [link TBD], you can extend storage of user data in the Info Mart database to include additional user-data KVPs you want to capture as custom user-data facts or dimensions.

To configure user-data mapping, edit the GSP values.yaml file. Under the udeMapping object, specify the mapping between your custom KVPs and the custom user-data database table(s) and column(s), noting the following:

  • The mapping, which is specified in JSON format, is configured separately by tenant.
  • In addition to specifying the database table and column in which to store the KVP value, you also specify the propagation rule that Genesys Info Mart uses to determine what value to store if more than one value is extracted for the same key in the same interaction. See Propagation rules for more information.
  • You can specify a default value to use if the KVP value is null. You must provide a default value for dimensions.
  • You map custom user-data dimensions via the IRF_USER_DATA_KEYS table, where you specify the foreign key reference(s) for the user-data dimension table(s).

Genesys Info Mart provides the following sample tables in the Info Mart database schema for storage of custom KVPs:

  • USER_DATA_CUST_DIM_1 — for low-cardinality user data to be stored as dimensions
  • IRF_USER_DATA_CUST_1 — for high-cardinality user data to be stored as facts

The structure of the entries to specify in the values.yaml file is:


udeMapping:
  "<tenant_id">: |
    IRF_USER_DATA_KEYS:
      columns:
        <CUSTOM_KEY_1>:
          type: dim
          table: <USER_DATA_CUST_DIM_1>
          attributes:
            <DIM_ATTRIBUTE_1>:
              kvp: <KVP>
              rule: <propagation_rule>
              default: <default_value>
            ...
    <IRF_USER_DATA_CUST_1>:
      columns:
        <CUSTOM_DATA_1>:
          type: value
          kvp: <KVP>
          rule: <propagation_rule>
          default: <default_value>
        ...

See the Example section at the bottom of this page.

Customize Outbound field mapping

Genesys Info Mart stores data about every outbound contact attempt, based on Record Field data it receives from the CX Contact (CXC) service. As described on PEC-REP/Current/GIMPEGuide/Outbound, some of the mapping between Field data and the Info Mart database tables and columns is predefined, and some is custom.

To customize outbound mapping, edit the GSP values.yaml file. Under the ocsMapping object, specify the mapping between your custom record fields and the tables and columns provided in the Info Mart database for custom record field data, namely:

  • In the CONTACT_ATTEMPT_FACT table:
    • 10 floating-point numbers: numeric(14,4)
    • 20 integers: integer
    • 30 strings: varchar(255)
  • In the RECORD_FIELD_GROUP_1 and RECORD_FIELD_GROUP_2 tables:
    • 10 strings each: varchar(255)

The mapping, which is specified in JSON format, is configured separately by tenant. The structure of the entries is:

ocsMapping:
  "<tenant_id>": |
    CONTACT_ATTEMPT_FACT:
      columns:
        RECORD_FIELD_1:
          field: <numericFieldName>
          rpcValue:
          conversionValue:
        ...
        RECORD_FIELD_11:
          field: <intFieldName>
          rpcValue:
          conversionValue:
        ...
        RECORD_FIELD_31:
          field: <stringFieldName>
          rpcValue:
          conversionValue:
        ...
    RECORD_FIELD_GROUP_1:
      columns:
        RECORD_FIELD_1_STRING_1:
          field: <stringFieldName>
          rpcValue:
          conversionValue:
        ...
    RECORD_FIELD_GROUP_2:
      columns:
        RECORD_FIELD_2_STRING_1:
          field: <stringFieldName>
          rpcValue:
          conversionValue:
        ...

See the following example.

Example

cfgOptions:
    "eb0c9f3a-5dca-498f-98d5-7610f5fd1015": |
      standard:
        completed-queues: iWD_Completed,iWD_Processed_ext
        populate-workbin-as-hold: false
      media:
        voice:
          q-answer-threshold: 30
        email:
          q-short-abandoned-threshold: 20
      dn:
        eb0c9f3a-5dca-498f-98d5-7610f5fd1015_MM_VQ:
          media:
            email:
              q-answer-threshold: 90
              q-short-abandoned-threshold: 40
      script:
        eb0c9f3a-5dca-498f-98d5-7610f5fd1015_ixn_queue:
          media:
            chat:
              q-answer-threshold: 20
        eb0c9f3a-5dca-498f-98d5-7610f5fd1015_dev.IQ:
          standard:
            populate-ixnqueue-facts: false

udeMapping:
  "eb0c9f3a-5dca-498f-98d5-7610f5fd1015": |
    IRF_USER_DATA_KEYS:
      columns:
        CUSTOM_KEY_1:
          type: dim
          table: USER_DATA_CUST_DIM_1
          attributes:
            DIM_ATTRIBUTE_1:
              kvp: BusinessLine
              rule: CALL
              default: retail
            DIM_ATTRIBUTE_2:
              kvp: RULE_PARTY
              rule: PARTY
              default: none
            DIM_ATTRIBUTE_3:
              kvp: RULE_IRF
              rule: IRF
              default: none
            DIM_ATTRIBUTE_4:
              kvp: RULE_IRF_INITIAL
              rule: IRF_INITIAL
              default: none
            DIM_ATTRIBUTE_5:
              kvp: RULE_IRF_FIRST_UPDATE
              rule: IRF_FIRST_UPDATE
              default: none
    IRF_USER_DATA_CUST_1:
      columns:
        CUSTOM_DATA_1:
          type: value
          kvp: RULE_CALL
          rule: CALL
          default: none
        CUSTOM_DATA_2:
          type: value
          kvp: RULE_PARTY
          rule: PARTY
          default: none
        CUSTOM_DATA_3:
          type: value
          kvp: RULE_IRF
          rule: IRF
        CUSTOM_DATA_4:
          type: value
          kvp: RULE_IRF_FIRST_UPDATE
          rule: IRF_FIRST_UPDATE
        CUSTOM_DATA_5:
          type: value
          kvp: RULE_IRF_INITIAL
          rule: IRF_INITIAL
        CUSTOM_DATA_6:
          type: value
          kvp: RULE_IRF_ROUTE
          rule: IRF_ROUTE
    IRF_USER_DATA_GEN_1:
      columns:
        CASE_ID:
          type: value
          kvp: CaseID
          rule: CALL
          default: none
        CUSTOMER_ID:
          type: value
          kvp: GIM_GRP
          rule: CALL
          default: none

ocsMapping:
  "eb0c9f3a-5dca-498f-98d5-7610f5fd1015": |
    CONTACT_ATTEMPT_FACT:
      columns:
        RECORD_FIELD_1:
          field: GenRecordField1
          rpcValue:
          conversionValue:
        RECORD_FIELD_2:
          field: GenRecordField2
          rpcValue:
          conversionValue:
        ...
        RECORD_FIELD_60:
          field: GenRecordField60
          rpcValue:
          conversionValue:
    RECORD_FIELD_GROUP_1:
      columns:
        RECORD_FIELD_1_STRING_1:
          field: GenRecordField1String1
          rpcValue:
          conversionValue:
        RECORD_FIELD_1_STRING_2:
          field: GenRecordField1String2
          rpcValue:
          conversionValue:
        RECORD_FIELD_1_STRING_3:
        ...
    RECORD_FIELD_GROUP_2:
      columns:
        RECORD_FIELD_2_STRING_1:
          field: GenRecordField2String1
          rpcValue:
          conversionValue:
        RECORD_FIELD_2_STRING_2:
        ...