Before you begin


Find out what to do before deploying Reporting and Analytics Aggregates (RAA).

Limitations and assumptions

The RAA container works with the Genesys Info Mart database. The Genesys Info Mart database schema must correspond to a compatible Genesys Info Mart version. Execute the following command to discover the required Genesys Info Mart release:

docker run -it --entrypoint /bin/java gcxi/raa:<IMAGE_VERSION> -jar GIMAgg.jar -version

The RAA container runs RAA on Java 11, and is supplied with the following JDBC drivers:

  • MSSQL 9.2.1 JDBC Driver
  • Postgres 42.2.11 JDBC Driver
  • Oracle Database 21c (21.1) JDBC Driver

You can override the JDBC driver by copying the driver file (or by creating a link to it) at lib/jdbc_driver_<RDBMS> in the working directory. See Procedure: Configuring the JDBC Driver for RAA for details.
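The override can be sketched as follows. This is a minimal illustration, not an exact procedure: the working-directory path and the jar file name are assumptions, and the touched placeholder file stands in for a driver you actually downloaded.

```shell
# Sketch: override the bundled JDBC driver by placing (or linking) a driver jar
# at lib/jdbc_driver_<RDBMS> in the RAA working directory.
# The working directory and jar name below are illustrative.
WORKDIR=./raa_workdir
DRIVER_JAR=./postgresql-42.2.11.jar

mkdir -p "$WORKDIR/lib"
touch "$DRIVER_JAR"    # placeholder standing in for the driver you downloaded

# Either copy the jar into place...
cp "$DRIVER_JAR" "$WORKDIR/lib/jdbc_driver_postgresql"
# ...or create a link instead of copying:
# ln -s "$(readlink -f "$DRIVER_JAR")" "$WORKDIR/lib/jdbc_driver_postgresql"
```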

Download the Helm charts

For general information about downloading containers, and to learn which Helm chart version you must download for your release, see the RAA Planning page.
Init containers

The RAA Helm chart includes two explicit init containers:

  • configDelivery -- this container delivers the required XML configuration and *.ss files to the RAA working directory. Those files must be packed in a GZIP archive, encoded as base64, and passed using the Helm install/upgrade option --set-file raa.deployment.configTar=config.tar.gz.b64. A default conf.xml and user-data-map.ss are supplied with the Helm chart; if no archive is specified with the --set-file option, the init container copies them into the working directory (unless they are already present).
    The configDelivery init container is optional, and is disabled by default; when it is disabled, you can create the RAA configuration files directly in the mounted config volume. The configDelivery init container is useful when access to the mounted working directory is restricted -- often for security reasons, or because ephemeral storage is used for the configuration volume.
    To enable the configDelivery container, specify a container name value:
    myvalues.yaml
    raa:
      ...
      statefulset:
      ...
        containers:
        ...
          configDelivery:
            name: "{{ $.Chart.Name }}-conf-delivery"
  • testRun -- this container executes all the aggregations on an empty time range, which helps to quickly detect configuration and customization problems -- SQL execution (even on empty data) verifies the presence of the tables and columns involved. The testRun init container is optional, and is enabled by default. To disable the container, set testRun: with no value in the containers section of your myvalues.yaml:
    myvalues.yaml
    raa:
      ...
      statefulset:
      ...
        containers:
        ...
          testRun:
            name: "{{ $.Chart.Name }}-test-run"
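The base64-encoded GZIP archive that configDelivery expects can be produced as sketched below. The conf.xml content here is a placeholder for illustration; include every configuration and *.ss file you actually want delivered, and note that the helm command in the trailing comment is shown for context only.

```shell
# Pack the RAA configuration files into a gzip'd tar archive and base64-encode it.
# The conf.xml written here is placeholder content for the sketch.
printf '<CfgOptions></CfgOptions>\n' > conf.xml

tar -czf config.tar.gz conf.xml
base64 -w0 config.tar.gz > config.tar.gz.b64

# The result is then passed to Helm, per the configDelivery description:
#   helm install <release> <chart> --set-file raa.deployment.configTar=config.tar.gz.b64
```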

Pod containers

RAA has two execution containers:

  • aggregation -- this container does the aggregation work. It is a required container, and only the container name is specified in values.yaml:
    myvalues.yaml
    raa:
      ...
      statefulset:
      ...
        containers:
          aggregation:
            name: "{{ $.Chart.Name }}"
          ...
  • monitor -- this container allows execution of the RAA tool from a file. Use this function when the kubectl executable is not available in your environment (for example, for security reasons). The monitor container also exposes two ports to enable scraping of aggregation metrics and health metrics by Prometheus, or other monitoring tools. The monitor container is optional, and is disabled by default. Set values for monitor: as follows:
    myvalues.yaml
    raa:
      ...
      statefulset:
      ...
        containers:
        ...
          monitor:
            name: "{{ $.Chart.Name }}-monitor"
     
            toolcmd:
              # interval of checking for a new file with command
              intervalSec: "20"
     
            metrics:
              portName: "metrics"
              containerPort: "9100"
     
            health:
              portName: "health"
              containerPort: "9101"
          ...
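The two ports exposed by the monitor container can be scraped by Prometheus. The following is a minimal sketch of a scrape configuration, assuming the default ports above; the target address raa-0.raa.gcxi.svc is a hypothetical example and must be replaced with your pod's actual address.

```yaml
scrape_configs:
  - job_name: "raa-metrics"
    static_configs:
      - targets: ["raa-0.raa.gcxi.svc:9100"]   # aggregation metrics port
  - job_name: "raa-health"
    static_configs:
      - targets: ["raa-0.raa.gcxi.svc:9101"]   # health metrics port
```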

Test containers

RAA includes two test containers for the Helm test command. They must be executed in the following order:

  1. testRunCheck -- this container waits for execution of the testRun init container, and returns the result of the test execution. The test is optional, and enabled by default. Enable it only when you plan to run the testRun init container; otherwise, disable it by setting testRunCheck: with no value in the testPods section of your values.yaml, as follows:
    myvalues.yaml
    raa:
      ...
      testPods:
     
        testRunCheck:
     
          name: "{{ tpl .Values.raa.serviceName . }}-test-run-check"
     
          container:
            name: "{{ $.Chart.Name }}-test-run-check"
     
          labels: {}
     
          annotations:
            "helm.sh/hook-weight": "100"
            "helm.sh/hook": "test-success"
            "helm.sh/hook-delete-policy": "before-hook-creation"
          ...
  2. healthCheck -- this container executes a health check and returns its status, and prints the content of the current configuration files and health files to standard output. The test is optional and enabled by default. To disable this test, set healthCheck: with no value in the testPods section of your values.yaml, as follows:
    myvalues.yaml
    raa:
      ...
      testPods:
        ...
        healthCheck:
          name: "{{ tpl .Values.raa.serviceName . }}-health-check"
     
          container:
            name: "{{ $.Chart.Name }}-health-check"
     
          labels: {}
     
          annotations:
            "helm.sh/hook-weight": "200"
            "helm.sh/hook": "test-success"
            "helm.sh/hook-delete-policy": "before-hook-creation" 
          ...

Third-party prerequisites

Content coming soon


Storage requirements

GIM secret volume

When raa.env.GCXI_GIM_DB__JSON is not specified, RAA mounts a secret with the Genesys Info Mart connection details as a volume.

You can declare Genesys Info Mart database connection details as a Kubernetes secret, as follows:

gimsecret.yaml

apiVersion: v1
kind: Secret
metadata:
  namespace: gcxi
  name: gim-secret
type: Opaque
data:
  json_credentials: eyJqZGJjX3VybCI6ImpkYmM6cG9zdGdyZXNxbDovLzxob3N0Pjo1NDMyLzxnaW1fZGF0YWJhc2U+IiwgImRiX3VzZXJuYW1lIjoiPHVzZXI+IiwgImRiX3Bhc3N3b3JkIjoiPHBhc3N3b3JkPiJ9Cg==
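The json_credentials value above is a base64-encoded JSON document. A sketch of how such a value can be produced, assuming a PostgreSQL Genesys Info Mart database; the <host>, <gim_database>, <user>, and <password> placeholders must be replaced with real connection details:

```shell
# Write the Genesys Info Mart connection details as JSON.
# The angle-bracket placeholders must be replaced with real values.
cat > credentials.json <<'EOF'
{"jdbc_url":"jdbc:postgresql://<host>:5432/<gim_database>", "db_username":"<user>", "db_password":"<password>"}
EOF

# Base64-encode the JSON for use in the secret's json_credentials field.
base64 -w0 credentials.json > json_credentials.b64
```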

And reference the secret in values.yaml, as follows:

myvalues.yaml

raa:
  ...
  volumes:
    ...
    gimSecret:
      name: "gim-secret-volume"
      secretName: "gim-secret"
      jsonFile: "json_credentials"
   ...

Alternatively, you can mount a CSI secret using secretProviderClass, as follows:

myvalues.yaml

raa:
  ...
  volumes:
    ...
    gimSecret:
      name: "gim-secret-volume"
      secretProviderClass: "gim-secret-class"
      jsonFile: "json_credentials"
   ...

Config volume

RAA mounts a config volume folder inside the container, as /genesys/raa_config; this folder is the RAA working directory. At startup, RAA attempts to read the following files from /genesys/raa_config:

  • custom *.ss files — see How Do I Customize Queries and Hierarchies? for details.
  • JDBC driver from lib/jdbc_driver_<RDBMS> — see Procedure: Configuring the JDBC Driver for RAA for details.
  • conf.xml — This file must be present in the working folder, or aggregation cannot start.
    The default conf.xml (provided with the Helm chart) has the following content:
    <CfgOptions>
    <Application>
      <i id="agg">
       <i k="sub-hour-interval" v="30min"/>
      </i>
      <i id="agg-feature">
       <i k="materialize-subhour-in-db" v="true"/>
       <i k="enable-available-features" v="true"/>
      </i>
      <i id="cfgApplication">
       <i k="CFGAPP_NAME" v="RAA" />
      </i> 
     </Application>
    </CfgOptions>

Usually RAA does not create any files here at runtime, so the volume does not require a fast storage class. The size limit is set to 50Mi by default. The storage class and size limit can be specified in values, as follows:

myvalues.yaml

raa:
  ...
  volumes:
    ...
    config:
      capacity: 50Mi
      storageClassName: "<vendor storage class>"
    ...

The RAA Helm chart creates a Persistent Volume Claim. It can also optionally create a Persistent Volume (when raa.volumes.config.pv is specified). The following example illustrates how the Persistent Volume is declared in the Helm chart:

raa-config-volume.yaml

{{- with .Values.raa.volumes.config.pv }}
apiVersion: v1
kind: PersistentVolume
metadata:
  name: "{{ tpl .name $ }}"
  {{- if or ($.Values.raa.labels) (.labels) }}
  labels:
   {{- with $.Values.raa.labels }}
    {{- range $key, $value := . }}
    {{ $key }}: "{{ tpl $value $ }}"
    {{- end }}
   {{- end }}
   {{- with .labels }}
    {{- range $key, $value := . }}
    {{ $key }}: "{{ tpl $value $ }}"
    {{- end }}
   {{- end }}
  {{- end }}
 
  {{- if or ($.Values.raa.annotations) (.annotations) }}
  annotations:
   {{- with $.Values.raa.annotations }}
    {{- range $key, $value := . }}
    {{ $key }}: "{{ tpl $value $ }}"
    {{- end }}
   {{- end }}
   {{- with .annotations }}
    {{- range $key, $value := . }}
    {{ $key }}: "{{ tpl $value $ }}"
    {{- end }}
   {{- end }}
  {{- end }}
 
spec:
  accessModes:
    - ReadWriteMany
  {{- with $.Values.raa.volumes.config }}
  capacity:
    storage: "{{ .capacity }}"
    {{- with .storageClassName }}
  storageClassName: "{{ . }}"
    {{- end }}
    {{- with .pv.vendorSpec}}
  {{- toYaml . | nindent 2 }}
    {{- end}}
  {{- end }}
{{- end }}

It is enough to define raa.volumes.config.storageClassName and the vendor-specific part of the Persistent Volume (raa.volumes.config.pv.vendorSpec) in your values file:

myvalues.yaml

raa:
  ...
  volumes:
    ...
    config:
      storageClassName: "hostpath"
      pv:
        vendorSpec:
          hostPath:
            type: Directory
            # path for conf.xml, *.ss files and JDBC driver when default is not suitable
            path: "/usr/local/genesys/RAA/config/"     
      ...

Alternatively, you can define the Persistent Volume separately; in that case, set raa.volumes.config.pvc.volumeName in values.yaml to bind the Persistent Volume Claim to it:

myvalues.yaml

raa:
  ...
  volumes:
    ...
    config:
      pv: {}
      pvc:
        volumeName: "my-raa-config-volume"
    ...

Optionally, you can also map ephemeral storage.

Health volume

RAA uses the health volume to store health files (for example, those read by the healthCheck test).

The size limit is set to 50Mi by default. Periodic interaction with the volume at runtime is expected, so Genesys does not recommend using a very slow storage class for this volume.

myvalues.yaml

raa:
  ...
  volumes:
    ...
    health:
      capacity: 50Mi
      storageClassName: "<vendor storage class>"
   ...

The RAA Helm chart creates a Persistent Volume Claim. It can also optionally create a Persistent Volume (when raa.volumes.health.pv is specified). The following example illustrates how the Persistent Volume is declared in the Helm chart:

raa-health-volume.yaml

{{- with .Values.raa.volumes.health.pv }}
apiVersion: v1
kind: PersistentVolume
metadata:
  name: "{{ tpl .name $ }}"
  {{- if or ($.Values.raa.labels) (.labels) }}
  labels:
   {{- with $.Values.raa.labels }}
    {{- range $key, $value := . }}
    {{ $key }}: "{{ tpl $value $ }}"
    {{- end }}
   {{- end }}
   {{- with .labels }}
    {{- range $key, $value := . }}
    {{ $key }}: "{{ tpl $value $ }}"
    {{- end }}
   {{- end }}
  {{- end }}
 
  {{- if or ($.Values.raa.annotations) (.annotations) }}
  annotations:
   {{- with $.Values.raa.annotations }}
    {{- range $key, $value := . }}
    {{ $key }}: "{{ tpl $value $ }}"
    {{- end }}
   {{- end }}
   {{- with .annotations }}
    {{- range $key, $value := . }}
    {{ $key }}: "{{ tpl $value $ }}"
    {{- end }}
   {{- end }}
  {{- end }}
 
spec:
  accessModes:
    - ReadWriteMany
  {{- with $.Values.raa.volumes.health }}
  capacity:
    storage: "{{ .capacity }}"
    {{- with .storageClassName }}
  storageClassName: "{{ . }}"
    {{- end }}
    {{- with .pv.vendorSpec}}
  {{- toYaml . | nindent 2 }}
    {{- end}}
  {{- end }}
{{- end }}

It is enough to define raa.volumes.health.storageClassName and the vendor-specific part of the Persistent Volume (raa.volumes.health.pv.vendorSpec) in your values file:

myvalues.yaml

raa:
  ...
  volumes:
    ...
    health:
      storageClassName: "hostpath"
      pv:
        vendorSpec:
          hostPath:
            type: Directory
            path: "/usr/local/genesys/RAA/health/"     
      ...

Alternatively, you can define the Persistent Volume separately; in that case, set raa.volumes.health.pvc.volumeName in values.yaml to bind the Persistent Volume Claim to it:

myvalues.yaml

raa:
  ...
  volumes:
    ...
    health:
      pv: {}
      pvc:
        volumeName: "my-raa-health-volume"
    ...

You can also map ephemeral storage.

Network requirements

RAA interacts only with the Genesys Info Mart database.

RAA can expose Prometheus metrics by way of Netcat.

The aggregation pod has its own IP address, and can run with one or two running containers. For Helm tests, an additional IP address is required -- each test pod runs one container.

Genesys recommends that RAA be located in the same region as the Genesys Info Mart database.

Browser requirements

Not applicable.

Genesys dependencies

RAA interacts only with the Genesys Info Mart database.

GDPR support

Not applicable.