Structured logging
This topic is part of the manual Operations for version Current of Genesys Multicloud CX Private Edition.
A secondary method of logging required for standard stdout/stderr structured logging.
This logging method is required for the standard stdout/stderr structured logs that are generated by containers within the Kubernetes environment; for that reason, it is also called Kubernetes-supported logging. Here, the container writes its stdout/stderr logs to the /var/log/containers directory.
[[File:K8 logging.png]]
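For illustration only, a structured log record in that directory is typically a single JSON line per event. The file-naming convention shown below is standard Kubernetes; the field names are hypothetical, not a Genesys-defined schema.
# Hypothetical example of one structured stdout record, as captured under
# /var/log/containers/<pod-name>_<namespace>_<container-name>-<container-id>.log
{"time": "2022-06-30T17:46:00Z", "level": "INFO", "service": "example-service", "message": "Service started"}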
You can choose the external log aggregator that implements the aggregation.
Services that use Kubernetes structured logging:
- Genesys Authentication
- Web Services and Applications
- Genesys Engagement Services
- Designer
Important
Some services (such as Genesys Info Mart) use the Kubernetes logging approach, with the exception that the logs are written in an unstructured format.
Deploying logging in an OpenShift cluster
Prerequisites
- Access to the cluster as a user with the cluster-admin role.
- rsyslogd server daemon installed and listening on a TCP port.
- rsyslog ruleset for writing container messages to a file accessible by NFS/NAS.
Deployment process
- Deploy Elasticsearch and Cluster Logging using their Operators. You can deploy these Operators using any supported method (for example, through the OpenShift web console or the oc CLI).
- If the cluster has tainted nodes, Fluentd requires the following tolerations.
tolerations:
- effect: NoSchedule
  operator: Exists
- key: CriticalAddonsOnly
  operator: Exists
- effect: NoExecute
  operator: Exists
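If you are unsure whether your nodes are tainted, you can inspect them before deciding which tolerations to carry over; the jsonpath query below is just one way to do this.
# List any taints configured on the cluster nodes
oc get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'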
- Create a cluster logging instance with the Fluentd tolerations mentioned in the previous step. Here's sample code to help you build your own YAML file.
- clusterLogging.yaml
apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" namespace: "openshift-logging" spec: managementState: "Managed" logStore: type: "elasticsearch" retentionPolicy: application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3 storage: storageClassName: "<storage-class-name>" size: 200G resources: requests: memory: "8Gi" proxy: resources: limits: memory: 256Mi requests: memory: 256Mi redundancyPolicy: "SingleRedundancy" visualization: type: "kibana" kibana: replicas: 1 curation: type: "curator" curator: schedule: "30 3 * * *" collection: logs: type: "fluentd" fluentd: tolerations: - effect: NoSchedule operator: Exists - key: CriticalAddonsOnly operator: Exists - effect: NoExecute operator: Exists
- Apply the cluster logging configuration. Use the following command.
oc apply -f clusterLogging.yaml
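After applying the configuration, you can confirm that the logging stack pods (Elasticsearch, Fluentd, Kibana) come up; this is a generic Kubernetes check, not a Genesys-specific step.
# Verify that the cluster logging pods reach Running state
oc get pods -n openshift-logging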
- Set up log forwarding.
- Syslog forwarding allows you to send a copy of your logs externally, instead of, or in addition to, the default Elasticsearch log store (or log management tool). This allows structured logs to be streamed so that they are both indexed and available through a Kibana instance (or any data visualization dashboard software), and written to flat files that you can provide to Care for troubleshooting.
- Important: Writing logs to an Elasticsearch index or any log management tool is optional. However, storing structured logs in flat files is required to support Genesys products. Though an rsyslog server is recommended, you can choose your desired method for collection/aggregation of structured syslog log messages.
- Prerequisites
- Fluentd deployed in the cluster for log collection
- rsyslogd server installed and configured to listen on a TCP port
- rsyslog rules defined to capture syslog messages and write them to a log file
- rsyslog ruleset defined for writing container messages to a log file that NFS/NAS can access (see the sketch after this list)
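As a sketch of what the rsyslog side of these prerequisites might look like, the fragment below opens a TCP listener and binds a ruleset that writes incoming container messages to a file on an NFS/NAS-backed path. The port, template, and file path are illustrative assumptions, not required values.
# /etc/rsyslog.d/containers.conf -- illustrative sketch; adjust port and paths
module(load="imtcp")                                   # load the TCP syslog input module
input(type="imtcp" port="51400" ruleset="containers")  # TCP listener bound to a ruleset

# Write each message to a per-host file on an NFS/NAS-mounted path (hypothetical)
template(name="ContainerFile" type="string"
         string="/mnt/nas/logs/containers/%HOSTNAME%.log")

ruleset(name="containers") {
    action(type="omfile" dynaFile="ContainerFile")
}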
- Forwarding logs using the syslog protocol
- Create a ClusterLogForwarder custom resource (CR) with one or more outputs to the syslog servers, and pipelines that use those outputs. Here is sample code to help you build your own YAML file.
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: rsyslog-east
    type: syslog
    syslog:
      facility: local0
      rfc: RFC3164
      payloadKey: message
      severity: informational
    url: 'tls://rsyslogserver.east.example.com:514'
    secret:
      name: syslog-secret
  - name: rsyslog-west
    type: syslog
    syslog:
      appName: myapp
      facility: user
      msgID: mymsg
      procID: myproc
      rfc: RFC5424
      severity: debug
    url: 'udp://rsyslogserver.west.example.com:514'
  pipelines:
  - name: syslog-east
    inputRefs:
    - audit
    - application
    outputRefs:
    - rsyslog-east
    - default
    labels:
      syslog: "east"
      secure: "true"
  - name: syslog-west
    inputRefs:
    - infrastructure
    outputRefs:
    - rsyslog-west
    - default
    labels:
      syslog: "west"
- metadata.name: Must be instance.
- metadata.namespace: Must be openshift-logging.
- spec.outputs.name: A unique name for the output.
- spec.outputs.type: The output type, syslog.
- spec.outputs.syslog: (Optional) Specifies the syslog parameters listed below.
- spec.outputs.url: $protocol://<ip/host>:<port> (protocol = UDP/TCP/TLS; host and port of the rsyslogd server).
- Important: If you are using a TLS prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project and must have the keys tls.crt, tls.key, and ca-bundle.crt, each pointing to the certificate it represents. (A sketch of creating such a secret appears after this list.)
- spec.pipelines.inputRefs: The log types that should be forwarded using that pipeline: application, infrastructure, or audit.
- spec.pipelines.outputRefs: The output to use with that pipeline for forwarding the logs.
- For more details on syslog and rsyslog, refer to the syslog protocol specifications (RFC 3164 and RFC 5424) and the rsyslog documentation.
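For the TLS case called out above, one way to create the required secret is sketched below; the certificate file names are placeholders for your actual files.
# Create the TLS secret referenced by the tls:// syslog output (file paths are placeholders)
oc create secret generic syslog-secret \
  --from-file=tls.crt=./tls.crt \
  --from-file=tls.key=./tls.key \
  --from-file=ca-bundle.crt=./ca-bundle.crt \
  -n openshift-logging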
- Apply the syslog cluster forwarder config using the following command.
oc apply -f syslog-configmap.yaml
- After creating the cluster log forwarder, Fluentd instances should restart one by one, picking up the new configuration.
- Important: If Fluentd pods do not restart on their own, delete them so that they are recreated.
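To watch the collector pods cycle, or to force the restart if needed, you can use standard pod operations; the component=fluentd label selector is the one OpenShift cluster logging typically applies, but verify it in your cluster.
# Watch the Fluentd pods restart after the forwarder is created
oc get pods -n openshift-logging -l component=fluentd -w

# If they do not restart on their own, delete them so that they are recreated
oc delete pods -n openshift-logging -l component=fluentd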
- Forwarding logs using the legacy syslog method
- Create a configuration file called syslog.yaml, with the information needed to forward the logs in the openshift-logging namespace.
kind: ConfigMap
apiVersion: v1
metadata:
  name: syslog
  namespace: openshift-logging
data:
  syslog.conf: |
    <store>
      @type syslog_buffered
      remote_syslog 192.168.45.36
      port 51400
      hostname ${hostname}
      facility local0
      severity info
      use_record true
      payload_key message
    </store>
- Where:
- @type: The transport type (syslog_buffered is required for the rsyslog server to write to file)
- remote_syslog: Host/IP of the remote syslog server
- port: The rsyslog server's TCP listening port
- hostname: Name of the host sending the syslog message
- facility: The machine process that created the syslog event
- severity: The predefined severity level of the event
- Apply the syslog ConfigMap using the following command.
oc apply -f syslog.yaml
- After creating the cluster log forwarder, Fluentd instances should restart one by one, picking up the new configuration.
- Important: If Fluentd pods do not restart on their own, delete them so that they are recreated.
GKE logging
See the GKE logging topic for details about logging in Google Kubernetes Engine.