 
|Context=A secondary method of logging required for standard stdout/stderr structured logging.
 
 
}}
 
==OpenShift cluster logging==
OpenShift Container Platform cluster logging aggregates all of the logs from your OpenShift Container Platform cluster, such as node system logs, application container logs, and so on. This logging method is required for standard stdout/stderr structured logs that are generated by containers within the Kubernetes environment; therefore, it is also called Kubernetes-supported logging. Here, the container writes stdout/stderr logs to the '''''/var/log/containers''''' directory.
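For illustration, the kubelet exposes each container's stdout/stderr stream on the worker node as a file whose name encodes the pod, namespace, and container; the pod and container names below are hypothetical:
<source lang="bash"># Container logs appear on each worker node as
# /var/log/containers/<pod-name>_<namespace>_<container-name>-<container-id>.log
ls /var/log/containers/
# gauth-auth-0_gauth_gauth-auth-3f2a9c.log    (illustrative output)</source>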
 
  
 
[[File:K8 logging.png]]  
 
You will be given the option to choose the external log aggregator that implements the aggregation.

Services that use Kubernetes structured logging:
*Genesys Authentication
*Web Services and Applications
*Genesys Engagement Services
*Designer
  
 
{{NoteFormat|Some services (such as Genesys Info Mart) use the Kubernetes logging approach with an exception that the logs are written in an unstructured format.|}}
 
===Deploying logging in an OpenShift cluster===
 
====Prerequisites====
 
 
*Access to the cluster as a user with the <code>cluster-admin</code> role.
*rsyslogd server daemon installed and listening on a TCP port.
*rsyslog ruleset for writing container messages to a file accessible over NFS/NAS (a sample ruleset is sketched below).
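As a hedged sketch of the last two prerequisites (the port, file path, and ruleset name below are assumptions, not Genesys requirements), an rsyslog drop-in along these lines listens on TCP and writes container messages to an NFS/NAS-backed file:
<source lang="bash"># Hypothetical /etc/rsyslog.d/50-containers.conf: listen on TCP 514 and
# write incoming container messages to an NFS/NAS-backed file
cat <<'EOF' > /etc/rsyslog.d/50-containers.conf
module(load="imtcp")
input(type="imtcp" port="514" ruleset="containers")
ruleset(name="containers") {
  action(type="omfile" file="/mnt/nfs/logs/containers.log")
}
EOF
systemctl restart rsyslog</source>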
 
 
====Deployment process====
 
 
#Deploy Elasticsearch and Cluster Logging using their Operators. You can deploy these Operators using either of the following methods:
#:*[https://docs.openshift.com/container-platform/4.6/logging/cluster-logging-deploying.html Installing cluster logging using the web console]
#:*[https://docs.openshift.com/container-platform/4.6/logging/cluster-logging-deploying.html Installing cluster logging using the CLI]
#If the cluster has tainted nodes, Fluentd requires the following tolerations.
#:<source lang="yaml">tolerations:
- effect: NoSchedule
  operator: Exists
- key: CriticalAddonsOnly
  operator: Exists
- effect: NoExecute
  operator: Exists</source>
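#:Before applying tolerations, you can check whether any nodes are tainted; this is a minimal sketch, and the jsonpath expression is just one way to print the taints:
#:<source lang="bash"># List each node together with any taints it carries
oc get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'</source>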
#Create a cluster logging instance with the Fluentd tolerations mentioned in the previous step. Here's sample code you can use to build your own YAML file.
#:'''clusterLogging.yaml'''
#:<source lang="yaml">apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  managementState: "Managed"
  logStore:
    type: "elasticsearch"
    retentionPolicy:
      application:
        maxAge: 1d
      infra:
        maxAge: 7d
      audit:
        maxAge: 7d
    elasticsearch:
      nodeCount: 3
      storage:
        storageClassName: "<storage-class-name>"
        size: 200G
      resources:
        requests:
          memory: "8Gi"
      proxy:
        resources:
          limits:
            memory: 256Mi
          requests:
            memory: 256Mi
      redundancyPolicy: "SingleRedundancy"
  visualization:
    type: "kibana"
    kibana:
      replicas: 1
  curation:
    type: "curator"
    curator:
      schedule: "30 3 * * *"
  collection:
    logs:
      type: "fluentd"
      fluentd:
        tolerations:
        - effect: NoSchedule
          operator: Exists
        - key: CriticalAddonsOnly
          operator: Exists
        - effect: NoExecute
          operator: Exists</source>
#Apply the cluster logging configuration using the following command.
#:<source lang="bash">oc apply -f clusterLogging.yaml</source>
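#:To verify that the stack is up, you can list the pods in the logging namespace; the pod name prefixes in the comment below are illustrative, not guaranteed:
#:<source lang="bash"># Expect cluster-logging-operator, elasticsearch-cdm-*, fluentd-*, and kibana-* pods in Running state
oc get pods -n openshift-logging</source>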
#Set up log forwarding.
#:Syslog forwarding allows you to send a copy of your logs externally, instead of, or in addition to, the default Elasticsearch log store (or log management tool). This allows structured logs to be streamed to a Kibana instance (or any data visualization dashboard software), where they are indexed and available, as well as written to flat files that can be provided to Customer Care for troubleshooting.
#:{{NoteFormat|Writing logs to an Elasticsearch index or any log management tool is optional. However, storing structured logs in flat files is required to support Genesys products. Though an rsyslog server is recommended, you can choose your desired method for collecting/aggregating structured syslog log messages.|}}
#:'''Prerequisites'''
#:*Fluentd deployed in the cluster for log collection
#:*rsyslogd server installed and configured to listen on a TCP port
#:*rsyslog rules defined to capture syslog messages and write them to a log file
#:*rsyslog ruleset defined for writing container messages to a log file that NFS/NAS can access
#:'''Forwarding logs using the syslog protocol'''
#:Create a ClusterLogForwarder custom resource (CR) with one or more outputs to the syslog servers, and pipelines that use those outputs. Here is sample code you can use to build your own YAML file.
#:<source lang="yaml">apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: rsyslog-east
    type: syslog
    syslog:
      facility: local0
      rfc: RFC3164
      payloadKey: message
      severity: informational
    url: 'tls://rsyslogserver.east.example.com:514'
    secret:
      name: syslog-secret
  - name: rsyslog-west
    type: syslog
    syslog:
      appName: myapp
      facility: user
      msgID: mymsg
      procID: myproc
      rfc: RFC5424
      severity: debug
    url: 'udp://rsyslogserver.west.example.com:514'
  pipelines:
  - name: syslog-east
    inputRefs:
    - audit
    - application
    outputRefs:
    - rsyslog-east
    - default
    labels:
      syslog: east
      secure: "true"
  - name: syslog-west
    inputRefs:
    - infrastructure
    outputRefs:
    - rsyslog-west
    - default
    labels:
      syslog: west</source>
#:*'''metadata.name''': instance
#:*'''metadata.namespace''': openshift-logging
#:*'''spec.outputs.name''': a unique name for the output
#:*'''spec.outputs.type''': syslog
#:*'''spec.outputs.syslog''': (optional) the syslog parameters, as shown in the sample above
#:*'''spec.outputs.url''': <protocol>://<ip/host>:<port> (protocol = UDP/TCP/TLS; host and port of the rsyslogd server)
#:{{NoteFormat|If you are using a TLS prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project and must have the keys tls.crt, tls.key, and ca-bundle.crt, each pointing to the certificate it represents. A sample command for creating this secret follows this list.|}}
#:*'''spec.pipelines.inputRefs''': the log types to be forwarded using that pipeline: application, infrastructure, or audit
#:*'''spec.pipelines.outputRefs''': the outputs to use with that pipeline for forwarding the logs
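#:As a hedged sketch, the TLS secret referenced in the note above could be created as follows; the certificate file names are hypothetical placeholders:
#:<source lang="bash"># Create the secret with the three required keys in the openshift-logging project
oc create secret generic syslog-secret \
  --from-file=tls.crt=./client.crt \
  --from-file=tls.key=./client.key \
  --from-file=ca-bundle.crt=./ca.crt \
  -n openshift-logging</source>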
#:For more details on syslog and rsyslog, refer to:
#:*https://datatracker.ietf.org/doc/html/rfc5424
#:*https://www.rsyslog.com
#:Apply the syslog cluster log forwarder configuration using the following command.
#:<source lang="bash">oc apply -f syslog-configmap.yaml</source>
#:After creating the cluster log forwarder, Fluentd instances should restart one by one, picking up the new configuration.
#:{{NoteFormat|If Fluentd pods do not restart on their own, delete them so that they are recreated.|}}
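#:A minimal sketch of that deletion, assuming the collector pods carry the <code>component=fluentd</code> label applied by OpenShift cluster logging:
#:<source lang="bash"># The logging DaemonSet recreates each deleted pod with the new configuration
oc delete pods -n openshift-logging -l component=fluentd</source>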
#:'''Forwarding logs using the legacy syslog method'''
#:Create a configuration file called '''syslog.yaml''' with the information needed to forward the logs in the openshift-logging namespace.
#:<source lang="yaml">kind: ConfigMap
apiVersion: v1
metadata:
  name: syslog
  namespace: openshift-logging
data:
  syslog.conf: |
    <store>
      @type syslog_buffered
      remote_syslog 192.168.45.36
      port 51400
      hostname ${hostname}
      facility local0
      severity info
      use_record true
      payload_key message
    </store></source>
#:'''Where:'''
#:*'''@type''': the transport type (syslog_buffered is required for the rsyslog server to write to file)
#:*'''remote_syslog''': host/IP of the remote syslog server
#:*'''port''': rsyslog server TCP listening port
#:*'''hostname''': name of the host sending the syslog message
#:*'''facility''': the machine process that created the syslog event
#:*'''severity''': predefined severity level of the event
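#:As a worked example of how these two settings combine: syslog encodes the message priority as facility × 8 + severity, so facility local0 (code 16) with severity info (code 6) gives 16 × 8 + 6 = 134, and each forwarded message begins with the priority tag &lt;134&gt;.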
#Apply the syslog configMap.
#:<source lang="bash">oc apply -f syslog.yaml</source>
#:After creating the cluster log forwarder, Fluentd instances should restart one by one, picking up the new configuration.
#:{{NoteFormat|If Fluentd pods do not restart on their own, delete them so that they are recreated.|}}
 
 
 
==GKE logging==
Click {{Link-SomewhereInThisVersion|manual=Operations|topic=Logging_approaches|anchor=GKElogging|display text=here}} for details about GKE logging.<br />
