Revision as of 15:48, September 15, 2021
Learn how to configure GIM Stream Processor (GSP).
Create an Object Bucket Claim
To enable storage of data during GSP processing, create an S3 Object Bucket Claim (OBC) if none exists.
See the gsp-obc.yaml file:
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: gim
  namespace: gsp
spec:
  generateBucketName: gim
  storageClassName: openshift-storage.noobaa.io
Then execute the command to create the OBC:
oc create -f gsp-obc.yaml -n gsp
The following Kubernetes resources are created automatically:
- An ObjectBucket (OB), which contains the bucket endpoint information, a reference to the OBC, and a reference to the storage class.
- A ConfigMap in the same namespace as the OBC, which contains the endpoint that applications connect to in order to consume the object interface.
- A Secret in the same namespace as the OBC, which contains the key-pairs needed to access the bucket.
Note the following:
- The names of the Secret and the ConfigMap are the same as the OBC name.
- The bucket name is created with a randomized suffix.
Get S3 data
You need the details of your S3 object storage to populate the Helm chart override values for GSP and GCA.
To get the OBC data, execute the following command, where gim is the name of the configMap associated with the OBC:
oc get cm gim -n gsp -o jsonpath='{.data}'
The result shows data such as BUCKET_HOST, BUCKET_NAME, BUCKET_PORT, and so on.
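As a quick illustration of how two of those fields are used later: the s3 endpoint in the Helm chart overrides is built from BUCKET_HOST and BUCKET_PORT. The sample values below are hypothetical, not output from a real cluster.

```shell
# Hypothetical values, as they might appear in the ConfigMap's data section.
BUCKET_HOST="s3.openshift-storage.svc"
BUCKET_PORT="443"

# The s3 endpoint used in the Helm chart overrides combines the two:
echo "https://${BUCKET_HOST}:${BUCKET_PORT}"
```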
Execute the following commands to get the values of the keys you require for access, where gim is the name of the secret associated with the OBC:
- To get the value of the access key:
oc get secret gim -n gsp -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode
- To get the value of the secret key:
oc get secret gim -n gsp -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode
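Because a Secret stores its values base64-encoded, the raw jsonpath output must be decoded before it is usable, which is why the commands above pipe through base64 --decode. A minimal sketch with a made-up key value:

```shell
# Hypothetical encoded value, standing in for the jsonpath output of the
# Secret lookup; it decodes to the plain-text key "my-access-key".
encoded="bXktYWNjZXNzLWtleQ=="

# Decode it the same way the commands above do:
printf '%s' "$encoded" | base64 --decode
echo
```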
Use the S3 data to populate the Helm chart override values for GSP and GCA.
Override Helm chart values
Download the GSP Helm charts from JFrog using your credentials. You must override certain parameters in the GSP values.yaml file to provide deployment-specific values.
For general information about overriding Helm chart values, see Overriding Helm chart values in the Genesys Engage Cloud Private Edition Guide.
If you want to use arbitrary UIDs in your OpenShift deployment, you must override the securityContext settings in the GSP values.yaml file, so that no user or group IDs are specified. For details, see Configure security, below.
At a minimum, you must override the following key entries in the GSP values.yaml file:
- image:
- registry - the registry from which Kubernetes will pull images (pureengage-docker-staging.jfrog.io by default)
- tag - the container image version
- imagePullSecrets:
- jfrog-stage-credentials - the secret from which Kubernetes will get credentials to pull the image from the registry
- kafka:
- bootstrap - the Kafka address to align with the infrastructure Kafka
- storage:
- gspPrefix - the s3 bucket name
- s3 - the applicable s3 details defined with the OBC (see Get S3 data)
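Pulled together, the minimum overrides might look like the following fragment. This is an illustrative sketch only: everything in angle brackets is a placeholder you must replace, and the key paths mirror the sample values.yaml in this topic (image under global, storage under job), which may differ in your chart version.

```yaml
# overrides.yaml -- illustrative sketch only; substitute real values
global:
  image:
    registry: pureengage-docker-staging.jfrog.io
    tag: <image-version>
  imagePullSecrets:
    jfrog-stage-credentials: {}
job:
  storage:
    gspPrefix: "s3p://<bucket-name>/{{ .Release.Name }}/"
    s3:
      endpoint: "https://<bucket-host>:<bucket-port>"
      accessKey: "<access-key-value>"
      secretKey: "<secret-key-value>"
kafka:
  bootstrap: 'infra-kafka-cp-kafka.infra.svc.cluster.local:9092'
```

You would typically pass such a file to Helm with the -f flag at install or upgrade time; confirm the exact key paths against the chart version you downloaded.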
The GSP values.yaml file
The following sample GSP values.yaml file, which may not be completely up to date, shows the key parameter values you must override.
global:
  rbac:
    create: true
  serviceAccount:
    create: true
  image:
    registry: pureengage-docker-staging.jfrog.io
    repository: gim/gsp
    pullPolicy: IfNotPresent
    tag: <image-version>
  imagePullSecrets:
    pureengage-docker-dev: {}
    pureengage-docker-staging: {}
    jfrog-stage-credentials: {}
  azure:
    enabled: false
    environment: dev
    location: eastus2
job:
  rbac:
    create: null
  serviceAccount:
    create: true
    name: gsp
  id: '00000000000000000000000000000000'
  className: com.genesyslab.gim.fsp.App
  savepoint: ''
  checkpointing:
    mode: AT_LEAST_ONCE
    interval: 20 min
    timeout: 40 min
    minPause: 15 min
    unaligned: 'false'
    concurrent: '1'
    external: ''
    tolerableFailed: '300'
  parallelism: '2'
  autoCreateTopics:
    partitions: 1
    replicationFactor: 3
  dumps: /var/lib/dumps
  timeDeviation: PT15S
  idleness: PT15M
  objectReuse: 'true'
  kafkaRateLimit: null
  storage:
    host: gspstate{{.Values.short_location}}{{.Values.environment}}.blob.core.windows.net
    #gspPrefix: wasbs://gsp-state@{{ tpl .Values.job.storage.host . }}/{{ .Release.Name }}/
    gspPrefix: "s3p://<bucket-name>/{{ .Release.Name }}/"
    #gcaSnapshots: wasbs://gca@{{ tpl .Values.job.storage.host . }}/
    gcaSnapshots: "s3p://<bucket-name>/gca/"
    checkpoints: '{{ tpl .Values.job.storage.gspPrefix . }}checkpoints'
    savepoints: '{{ tpl .Values.job.storage.gspPrefix . }}savepoints'
    highAvailability: '{{ tpl .Values.job.storage.gspPrefix . }}ha'
    s3:
      endpoint: "https://<bucket-host>:<bucket-port>"
      accessKey: "<access-key-value>"
      secretKey: "<secret-key-value>"
      pathStyleAccess: "true"
  # pvc:
  #   create: true
  #   mountPath: /opt/flink/state
  #   claim: ''
  #   claimSize: 10Gi
  #   storageClass: standard
log:
  level: INFO
  loggers:
    org.apache.kafka: INFO
highAvailability:
  high-availability: org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory
  high-availability.jobmanager.port: '50010'
  kubernetes.namespace: '{{ .Release.Namespace }}'
  kubernetes.cluster-id: '{{ .Release.Name }}'
monitoring:
  enabled: true
  port: 9249
  dashboards:
    targetDirectory: /var/lib/grafana/dashboards/{{ .Release.Namespace }}
tm:
  nameOverride: ''
  fullnameOverride: ''
  numberOfTaskSlots: '2'
  deployment:
    replicaCount: 1
  port:
    rpc: 6122
  memory:
    jvmOverheadFraction: 0.18
    jvmOverheadMin: 220mb
    jvmOverheadMax: ''
    jvmMetaspace: 256mb
    offHeap: 128mb
    managed: ''
    heap: ''
    networkMax: ''
  resources:
    requests:
      memory: 1Gi
      cpu: '0.05'
    limits:
      memory: 3Gi
      cpu: '2'
  tolerations: []
  affinity: {}
jm:
  nameOverride: ''
  fullnameOverride: ''
  savepoints: ''
  port:
    rpc: 6123
    blob: 6124
    rest: 8081
  resources:
    requests:
      memory: 1Gi
      cpu: '0.05'
    limits:
      memory: 2048Mi
      cpu: '1'
monitor:
  rbac:
    create: null
  serviceAccount:
    create: true
    annotations: {}
    name: '{{ .Release.Name }}-monitor'
  podSecurityContext: {}
  securityContext: {}
  service:
    type: ClusterIP
    port: 80
  ingress:
    enabled: false
    annotations: {}
    hosts: []
    tls: []
kafka:
  bootstrap: 'infra-kafka-cp-kafka.infra.svc.cluster.local:9092'
  groupId: null
  clientId: gim-gsp
  offsets: GROUP_OFFSETS
  topic:
    out:
      interactions: gsp-ixn
      agentStates: gsp-sm
      outbound: gsp-outbound
      custom: gsp-custom
      cfg: gsp-cfg
    in:
      digitalItx: digital-itx
      digitalAgentStates: digital-agentstate
  maxRequestSize: '4194304'
  compressionType: lz4
  maxBlockMs: '322000'
  metadataMaxAgeMs: 600000
  metadataMaxIdleMs: 600000
  requestTimeoutMs: 32000
  schemaRegistry:
    enabled: false
    url: ''
    user: ''
    password: ''
dnsConfig:
  options:
    - name: ndots
      value: '3'
Configure Kubernetes
Configure security
The security context settings define the privilege and access control settings for pods and containers.
By default, the user and group IDs are set in the GSP values.yaml file as 500:500:500, meaning the genesys user.
securityContext:
  runAsNonRoot: true
  runAsUser: 500
  runAsGroup: 500
  fsGroup: 500
containerSecurityContext: {}
Arbitrary UIDs in OpenShift
If you want to use arbitrary UIDs in your OpenShift deployment, you must override the securityContext settings in the GSP values.yaml file, so that you do not define any specific IDs.
securityContext:
  runAsNonRoot: true
  runAsUser: null
  runAsGroup: 0
  fsGroup: null
containerSecurityContext: {}