Configure GSP

This topic is part of the manual Genesys Info Mart Private Edition Guide for version Current of Reporting.

Learn how to configure GIM Stream Processor (GSP).

Create an Object Bucket Claim

To enable storage of data during GSP processing, create an S3 Object Bucket Claim (OBC) if none exists.

See the gsp-obc.yaml file:

apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: gim
  namespace: gsp
spec:
  generateBucketName: gim
  storageClassName: openshift-storage.noobaa.io

Then execute the command to create the OBC:

oc create -f gsp-obc.yaml -n gsp
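
Optionally, verify that the claim binds successfully before you continue. The obc short name below is assumed to be registered by the ObjectBucketClaim CRD; if it is not available in your cluster, use the full resource name objectbucketclaim:

oc get obc gim -n gsp

The claim should reach the Bound phase once the bucket has been provisioned.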

The following Kubernetes resources are created automatically:

  • An ObjectBucket (OB), which contains the bucket endpoint information, a reference to the OBC, and a reference to the storage class.
  • A ConfigMap in the same namespace as the OBC, which contains the endpoint to which applications connect in order to consume the object interface.
  • A Secret in the same namespace as the OBC, which contains the key-pairs needed to access the bucket.

Note the following:

  • The names of the Secret and the ConfigMap are the same as the OBC name.
  • The bucket name is created with a randomized suffix.
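
To confirm that the ConfigMap and Secret were created, you can list them by the OBC name (gim and the gsp namespace below match the example above):

oc get configmap/gim secret/gim -n gsp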

Get S3 data

You need the details of your S3 object bucket to populate the Helm chart override values for GSP and GCA.

To get the OBC data, execute the following command, where gim is the name of the ConfigMap associated with the OBC:

oc get cm gim -n gsp -o jsonpath='{.data}'

The result shows data such as BUCKET_HOST, BUCKET_NAME, BUCKET_PORT, and so on.
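
If you only need individual values, you can query each key directly. For example, to read the bucket host, name, and port returned in the ConfigMap data:

oc get cm gim -n gsp -o jsonpath='{.data.BUCKET_HOST}'
oc get cm gim -n gsp -o jsonpath='{.data.BUCKET_NAME}'
oc get cm gim -n gsp -o jsonpath='{.data.BUCKET_PORT}'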

Execute the following commands to get the values of the keys you require for access, where gim is the name of the secret associated with the OBC:

  • To get the value of the access key:
    oc get secret gim -n gsp -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode
  • To get the value of the secret key:
    oc get secret gim -n gsp -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode

Use the S3 data to populate the Helm chart override values for GSP and GCA.
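
For example, you can capture the keys in environment variables and, if an S3-capable client such as the AWS CLI happens to be available (it is not part of the deployment and is shown here only as an illustration), confirm that the bucket is reachable. Replace the placeholders with the values from the OBC ConfigMap:

export AWS_ACCESS_KEY_ID=$(oc get secret gim -n gsp -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode)
export AWS_SECRET_ACCESS_KEY=$(oc get secret gim -n gsp -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode)
aws --endpoint-url https://<bucket-host>:<bucket-port> s3 ls s3://<bucket-name>/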

Tip
You can also obtain the S3 data from the OpenShift console: go to the Object Bucket Claims section under the Storage menu and click the required OBC resource. The data appears at the bottom of the page.

Override Helm chart values

Download the GSP Helm charts from JFrog using your credentials. You must override certain parameters in the gsp-values.yaml file to provide deployment-specific values.

For general information about overriding Helm chart values, see Overriding Helm chart values in the Genesys Engage Cloud Private Edition Guide.
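
The chart also expects an image pull secret in the GSP namespace (jfrog-stage-credentials in the sample values below). If that secret does not exist yet, you can create it as a standard docker-registry secret; the registry matches the default in the sample file, and the username and password placeholders are your JFrog credentials:

oc create secret docker-registry jfrog-stage-credentials \
  --docker-server=pureengage-docker-staging.jfrog.io \
  --docker-username=<jfrog-user> \
  --docker-password=<jfrog-api-key> \
  -n gsp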

Override the following key entries in the gsp-values.yaml file:

  • image:
    registry - the registry from which Kubernetes pulls images (pureengage-docker-staging.jfrog.io by default)
    tag - the container image version
  • imagePullSecrets:
    jfrog-stage-credentials - the secret from which Kubernetes gets the credentials to pull the image from the registry
  • kafka:
    bootstrap - the Kafka bootstrap address, which must match the infrastructure Kafka deployment
  • storage:
    gspPrefix - the S3 bucket name (replace <bucket-name> in the sample value)
  • s3 - the applicable S3 details defined with the OBC (see Get S3 data)
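
For example, once you have the OBC data, these entries map into the values file as shown in the following sketch. The placeholders correspond to the data retrieved in Get S3 data; the complete sample file follows below:

image:
  registry: pureengage-docker-staging.jfrog.io
  tag: <image-version>
imagePullSecrets:
  jfrog-stage-credentials: {}
kafka:
  bootstrap: 'infra-kafka-cp-kafka.infra.svc.cluster.local:9092'
job:
  storage:
    gspPrefix: "s3p://<bucket-name>/{{ .Release.Name }}/"
    gcaSnapshots: "s3p://<bucket-name>/gca/"
    s3:
      endpoint: "https://<bucket-host>:<bucket-port>"
      accessKey: "<access-key-value>"
      secretKey: "<secret-key-value>"
      pathStyleAccess: "true"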

The gsp-values.yaml file

global:
  rbac:
    create: true
  serviceAccount:
    create: true
image:
  registry: pureengage-docker-staging.jfrog.io
  repository: gim/gsp
  pullPolicy: IfNotPresent
  tag: <image-version>
imagePullSecrets:
  pureengage-docker-dev: {}
  pureengage-docker-staging: {}
  jfrog-stage-credentials: {}
azure:
  enabled: false
environment: dev
location: eastus2
job:
  rbac:
    create: null
  serviceAccount:
    create: true
    name: gsp
  id: '00000000000000000000000000000000'
  className: com.genesyslab.gim.fsp.App
  savepoint: ''
  checkpointing:
    mode: AT_LEAST_ONCE
    interval: 20 min
    timeout: 40 min
    minPause: 15 min
    unaligned: 'false'
    concurrent: '1'
    external: ''
    tolerableFailed: '300'
  parallelism: '2'
  autoCreateTopics:
    partitions: 1
    replicationFactor: 3
  dumps: /var/lib/dumps
  timeDeviation: PT15S
  idleness: PT15M
  objectReuse: 'true'
  kafkaRateLimit: null
  storage:
    host: gspstate{{.Values.short_location}}{{.Values.environment}}.blob.core.windows.net
    #gspPrefix: wasbs://gsp-state@{{ tpl .Values.job.storage.host . }}/{{ .Release.Name }}/
    gspPrefix: "s3p://<bucket-name>/{{ .Release.Name }}/"
    #gcaSnapshots: wasbs://gca@{{ tpl .Values.job.storage.host . }}/
    gcaSnapshots: "s3p://<bucket-name>/gca/"
    checkpoints: '{{ tpl .Values.job.storage.gspPrefix . }}checkpoints'
    savepoints: '{{ tpl .Values.job.storage.gspPrefix . }}savepoints'
    highAvailability: '{{ tpl .Values.job.storage.gspPrefix . }}ha'
    s3:
      endpoint: "https://<bucket-host>:<bucket-port>"
      accessKey: "<access-key-value>"
      secretKey: "<secret-key-value>"
      pathStyleAccess: "true"
#    pvc:
#      create: true
#      mountPath: /opt/flink/state
#      claim: ''
#      claimSize: 10Gi
#      storageClass: standard
  log:
    level: INFO
    loggers:
      org.apache.kafka: INFO
  highAvailability:
    high-availability: org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory
    high-availability.jobmanager.port: '50010'
    kubernetes.namespace: '{{ .Release.Namespace }}'
    kubernetes.cluster-id: '{{ .Release.Name }}'
monitoring:
  enabled: true
  port: 9249
  dashboards:
    targetDirectory: /var/lib/grafana/dashboards/{{ .Release.Namespace }}
tm:
  nameOverride: ''
  fullnameOverride: ''
  numberOfTaskSlots: '2'
  deployment:
    replicaCount: 1
  port:
    rpc: 6122
  memory:
    jvmOverheadFraction: 0.18
    jvmOverheadMin: 220mb
    jvmOverheadMax: ''
    jvmMetaspace: 256mb
    offHeap: 128mb
    managed: ''
    heap: ''
    networkMax: ''
  resources:
    requests:
      memory: 1Gi
      cpu: '0.05'
    limits:
      memory: 3Gi
      cpu: '2'
  tolerations: []
  affinity: {}
jm:
  nameOverride: ''
  fullnameOverride: ''
  savepoints: ''
  port:
    rpc: 6123
    blob: 6124
    rest: 8081
  resources:
    requests:
      memory: 1Gi
      cpu: '0.05'
    limits:
      memory: 2048Mi
      cpu: '1'
monitor:
  rbac:
    create: null
  serviceAccount:
    create: true
    annotations: {}
    name: '{{ .Release.Name }}-monitor'
podSecurityContext: {}
securityContext: {}
service:
  type: ClusterIP
  port: 80
ingress:
  enabled: false
  annotations: {}
  hosts: []
  tls: []
kafka:
  bootstrap: 'infra-kafka-cp-kafka.infra.svc.cluster.local:9092'
  groupId: null
  clientId: gim-gsp
  offsets: GROUP_OFFSETS
  topic:
    out:
      interactions: gsp-ixn
      agentStates: gsp-sm
      outbound: gsp-outbound
      custom: gsp-custom
      cfg: gsp-cfg
    in:
      digitalItx: digital-itx
      digitalAgentStates: digital-agentstate
  maxRequestSize: '4194304'
  compressionType: lz4
  maxBlockMs: '322000'
  metadataMaxAgeMs: 600000
  metadataMaxIdleMs: 600000
  requestTimeoutMs: 32000
schemaRegistry:
  enabled: false
  url: ''
  user: ''
  password: ''
dnsConfig:
  options:
  - name: ndots
    value: '3'
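
After you have updated the overrides, deploy or upgrade GSP with Helm. The chart file name, release name, and namespace below are placeholders; use the chart you downloaded from JFrog and the namespace in which you created the OBC:

helm upgrade --install gsp ./gsp-<chart-version>.tgz -f gsp-values.yaml -n gsp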

Configure Kubernetes

Content coming soon

Configure security

Content coming soon