PE-GPR/9.0.0/Deployment/DL-CfgFile

Learn how to configure Data Loader to upload data to your instance of the GPR Core Platform for analysis and agent scoring.


Overview

Data Loader uploads data to the GPR Core Platform for analysis and agent scoring. GPR uses the following types of data:

  • Agent Profile data and interaction data, which Data Loader uploads using a direct connection to Genesys Info Mart. Once you have configured the connection parameters, Data Loader automatically uploads this data and refreshes it periodically.
  • Customer Profile data, which can come from a number of different sources, depending on your environment. To upload customer data, you must collect it into a CSV file with a consistent schema so that Data Loader can upload it.
  • Optionally, you can upload additional agent data, interaction outcome data, and any other data you find useful. This data must be compiled into CSV files for Data Loader to upload.

To upload data, you must create the configuration sections and options on the Options tab of the Data Loader Application object and specify their values. You configure these sections and options in your configuration manager application (GAX or Genesys Administrator).

  • The Predictive Routing Options Reference contains a complete list of all Predictive Routing options, with descriptions, default and valid values, and a brief explanation of how to use them.

This topic focuses specifically on data-upload configuration and takes a task-based perspective, helping you find which options you need to configure for specific upload types and giving context to help you decide on the appropriate values for your environment.

NOTE: Once you have configured a schema in the Data Loader Application object and uploaded your data to the GPR Core Platform, you cannot modify the schema.

Data upload basics

This section explains how to configure Data Loader to upload the essentials for Predictive Routing. These essentials are the Agent Profile dataset, the Customer Profile dataset, and an interaction dataset.

  • You must create an Agent Profile dataset and a Customer Profile dataset before you can join interaction, outcome, or other data. The sections that follow cover more in-depth scenarios, including adding other types of data and joining data.

To configure these uploads, open the Data Loader Application object in your configuration manager (Genesys Administrator or GAX) and set up the following configuration sections and options for each essential category of data.

1. Agent Profile

To create your Agent Profile dataset, configure the following two sections and include the options specified within each (a sample configuration follows this list):

  • dataset-agents-gim
    • sql-query - Use the default query provided with Data Loader (the agents_data_gim.sql file in the /dl folder) or create your own. If you plan to use your own SQL file, be sure to include the mandatory fields listed in the option description.
    • data-type - The value for this option must be agents.
    • upload-dataset - This option is set to true by default, which tells Data Loader to start uploading data at the interval specified in the update-period option. To pause uploading, set this option to false.
    • update-period - Specifies how often Data Loader runs an upload process to get new and updated data.
  • schema-agents-gim - In this section, create one option for each column in your Agent Profile dataset. You must include the columns/options listed below. If you modify your SQL query to read values from additional fields, you must also add those fields as options in this section. The value for each option is the column's datatype, plus anon if the field contains sensitive or PII data.
    • dbID
    • employeeId
    • groupNames
    • skills
    • tenantDbID
    • userName
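
For orientation, a minimal Agent Profile configuration might look like the following sketch. Values in angle brackets are placeholders and the anon flag on userName is shown only to illustrate the syntax; see the Predictive Routing Options Reference for the exact value formats and datatypes.

[dataset-agents-gim]

  • sql-query=<your SQL query, or the default query from /dl/agents_data_gim.sql>
  • data-type=agents
  • upload-dataset=true
  • update-period=<interval>

[schema-agents-gim]

  • dbID=<datatype>
  • employeeId=<datatype>
  • groupNames=<datatype>
  • skills=<datatype>
  • tenantDbID=<datatype>
  • userName=<datatype>,anon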

2. Customer Profile

To create your Customer Profile dataset, configure the following two sections and include the options specified within each (a sample configuration follows this list):

  • dataset-customers
    • csv-separator - Tells Data Loader what separator type you are using so it can correctly parse the CSV file.
    • data-type - The value for this option must be customers.
    • location - Enter the complete path to the CSV file containing your Customer Profile data.
    • upload-dataset - This option is set to true by default, which tells Data Loader to start uploading data at the interval specified in the update-period option. To pause uploading, set this option to false.
  • schema-customers - In this section, create one option for each column in your Customer Profile dataset. The value for each option is the column's datatype, plus anon if the field contains sensitive or PII data.
    • CustomerID - The only required option/column.
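
A minimal Customer Profile configuration might look like the following sketch. The comma separator, the angle-bracket values, and the additional column are placeholders for illustration; use the values that match your CSV file.

[dataset-customers]

  • csv-separator=,
  • data-type=customers
  • location=<full path to your customer CSV file>
  • upload-dataset=true

[schema-customers]

  • CustomerID=<datatype>
  • <additional column name>=<datatype>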

3. Interaction Dataset

To create a basic interaction dataset, configure the following sections and options. In a production environment, you would normally join the interaction data to the Agent and/or Customer Profile datasets; the sections that follow explain how to join data. This section gives you the minimum necessary configuration for an initial interaction dataset (a sample appears after this list).

  • dataset-<name> - When you create this configuration section, replace <name> with a dataset name of your choice. For example, you might create a section called dataset-interactions.
    • chunk-size - Defines how many interactions Data Loader should upload at once by specifying a period of time. Data Loader uploads all interactions that occur during this amount of time in a single batch.
    • update-period - Specifies how often Data Loader runs an upload process to get new and updated data.
    • sql-query - Use the default query provided with Data Loader (the interaction_data_aht.sql file in the /dl folder) or create your own. If you plan to use your own SQL file, be sure to include the mandatory fields listed in the option description. See Create your own SQL query for instructions.
    • data-type - The value for this option must be interactions.
    • upload-dataset - This option is set to true by default, which tells Data Loader to start uploading data at the interval specified in the update-period option. To pause uploading, set this option to false.
  • schema-<name> - When you create this configuration section, replace <name> with the same dataset name you entered for the dataset-<name> section. If your dataset section is called dataset-interactions, this section is schema-interactions.
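
Pulling these settings together, a basic interaction dataset configuration might look like the following sketch. The schema fields shown (InteractionID, START_TS, AHT) are taken from the joined-dataset example later on this page; angle-bracket values are placeholders.

[dataset-interactions]

  • sql-query=<your SQL query, or the default query from /dl/interaction_data_aht.sql>
  • data-type=interactions
  • chunk-size=<time period>
  • update-period=<interval>
  • upload-dataset=true

[schema-interactions]

  • InteractionID=string
  • START_TS=timestamp,created_at
  • AHT=numeric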

How to join data

Joining data enables you to pull together information about many aspects of your environment, which makes for more powerful and nuanced predictions. When you join the data, the interaction dataset is the one that contains the joined fields. That is, the interaction dataset uploaded to the GPR application displays all its own fields, plus the fields from the joined dataset(s).

This section explains the basic procedure for joining the Agent Profile and/or Customer Profile dataset with an interaction dataset. The "Advanced upload scenarios" section on this page presents further examples.

Data upload sequence

When joining features from the Agent Profile or Customer Profile datasets to the interactions dataset, the order in which the datasets are initially uploaded is important. Upload data in the following sequence:

  1. Upload Agent Profile data by setting the [dataset-agents-gim].upload-dataset option to true. This starts upload of Agent Profile data from Genesys Info Mart.
  2. After the agent data upload is started, configure the Customer Profile data configuration sections and initiate the Customer Profile data upload by setting the [dataset-customers].upload-dataset option to true.
  3. Then configure the interactions data upload. The procedure below explains how to specify the Agent Profile and Customer Profile dataset fields to be joined to the interaction data. Once this configuration is complete, initiate the upload of the interaction dataset by setting the [dataset-<name>].upload-dataset option to true.

Data joining procedure

To enable joining, perform the following steps:

  1. In the Data Loader Application object, create or edit the dataset-<name> section for your interaction dataset. These instructions refer to the dataset-interactions section used as an example in the "Data upload basics" procedures, above. If your dataset has a different name, use your dataset name throughout.
  2. Add the join option. As the option value, specify the names of the datasets to be joined.
    • For example, if you would like to join data from the Agent Profile and Customer Profile datasets with dataset-interactions, set the value of the join option to: agents-gim, customers, interactions.
  3. Specify the fields to be joined to the interaction dataset by adding the names of the columns from the joined datasets into the Data Loader schema-interactions configuration section.
    • Each joined column should be a separate option in the schema-interactions configuration section. The value for each of these options should be the name of the schema- section from which the data comes.
    • For example, if you join fields from the Customer Profile (which is created using the dataset-customers and schema-customers configuration sections) with your interaction dataset, the option value for all joined fields is schema-customers.
  4. Specify the desired values for the following additional configuration options:
    • join-type - Configured in the dataset-interactions section. Specifies the join type, whether inner (only successfully joined records are uploaded) or outer (all interaction records are uploaded to the platform and the missing data is replaced with null values).
    • join-keys - Configured in the dataset- sections of the dataset(s) you are joining to the interaction dataset. Specifies a comma-separated list of the column names by which to join the data from this dataset to the interaction dataset.
    • enforce-schema-on-joined-data - Specifies whether fields that are not already in the interaction dataset are joined. If set to true, only data from fields already in the interaction dataset are joined. If set to false, ALL columns from the joined datasets are added to the interaction dataset. If you set this option to false, be careful not to exceed the maximum allowed number of 100 columns in the joined dataset, or Data Loader generates an error and exits the upload process.
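
The following sketch pulls the join settings from these steps together, using the dataset-interactions example. The key columns shown (employeeId and CustomerID) are illustrative assumptions; substitute the columns that actually link your datasets.

[dataset-interactions]

  • join=agents-gim, customers, interactions
  • join-type=outer

[dataset-agents-gim]

  • join-keys=employeeId

[dataset-customers]

  • join-keys=CustomerID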

The following example shows a schema- configuration section for an interaction dataset with joins to the Agent and Customer Profile datasets and an outcome dataset called feedback:

[schema-interactions-joined]

  • AHT=numeric
  • Customer_location=schema-customers
  • CUSTOMER_HOLD_DURATION=numeric
  • CUSTOMER_TALK_DURATION=numeric
  • CUSTOMER_ACW_DURATION=numeric
  • Customer_language=schema-customers
  • Customer_ANI=schema-customers
  • employeeId=schema-agents-gim
  • Feedback=schema-feedback
  • groupNames=schema-agents-gim
  • InteractionID=string
  • Is_Resolved=schema-feedback
  • skills=schema-agents-gim
  • START_TS=timestamp,created_at

Advanced upload scenarios

In addition to the Agent Profile, Customer Profile, and interactions datasets described in the preceding sections, you might want to upload other data from your environment. This section presents examples showing how to upload data from additional sources.

Add agent data from sources other than Genesys Info Mart

If your business has employee data relevant for Predictive Routing that is not available from Genesys Info Mart, create a CSV file with that data and configure the Data Loader Application object with the following sections and options (a sample configuration follows this list):

  • dataset-agents-<name>
    • csv-separator - Tells Data Loader what separator type you are using so it can correctly parse the CSV file.
    • data-type - The value for this option must be outcomes.
    • location - Enter the complete path to the CSV file containing your agent data.
    • upload-dataset - This option is set to true by default, which tells Data Loader to start uploading data at the interval specified in the update-period option. To pause uploading, set this option to false.
  • schema-agents-<name> - Configures the schema for an agent dataset to be uploaded from a CSV file. It has only one mandatory field, shown below, but you must also configure options specifying the datatypes for all other fields. Add the anon parameter if a field should be anonymized.
    • employeeId=string,id,anon
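
For example, a configuration for a hypothetical agent dataset named hr might look like the following sketch; the dataset name, the comma separator, and the additional schema column are placeholders.

[dataset-agents-hr]

  • csv-separator=,
  • data-type=outcomes
  • location=<full path to your agent CSV file>
  • upload-dataset=true

[schema-agents-hr]

  • employeeId=string,id,anon
  • <additional column name>=<datatype>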

Upload outcome data

To upload outcome data from a post-interaction survey, use the following configuration. Note that this example can be adapted to create a dataset based on nearly any data source. In this example, we upload a dataset called feedback (the assembled dataset section appears after this list).

  • dataset-feedback
    • csv-separator - Tells Data Loader what separator type you are using so it can correctly parse the CSV file.
    • data-type - The value for this option must be outcomes.
    • location - Enter the complete path to the CSV file containing your feedback data.
    • upload-dataset - This option is set to true by default, which tells Data Loader to start uploading data at the interval specified in the update-period option. To pause uploading, set this option to false.
  • schema-feedback
    • Customer_ANI=string,anon
    • employeeId=string,anon
    • Feedback=string
    • InteractionID=string
    • Is_Resolved=boolean
    • Timestamp_Feedback=timestamp,created_at
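
Assembled into section form, the dataset-feedback options described above might look like the following sketch; the comma separator and the file path are placeholders. The schema-feedback section is listed above.

[dataset-feedback]

  • csv-separator=,
  • data-type=outcomes
  • location=<full path to your feedback CSV file>
  • upload-dataset=true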

Upload aggregated interaction data to your Agent Profile dataset

Aggregated features enable you to pull together data from various sources to create features that address very specific scenarios. For example, you can define a feature that gives you the average handle time (AHT) for the interactions each agent in the Agent Profile dataset received from a certain virtual queue over a specified period.

To configure aggregated features in the Agent Profile, replace the dataset-agents-gim and schema-agents-gim configuration sections with the following (a sample configuration follows this list):

  • dataset-agents-gim-ext - In addition to the standard options configured for the dataset-agents-gim section, add the following options:
    • start-date
    • end-date
    • chunk-size
  • schema-agents-gim-ext - Be sure to add all of the fields that Data Loader will aggregate as options in this configuration section.
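
As an illustrative sketch only, an aggregated Agent Profile configuration might combine the standard dataset-agents-gim options with the additional options listed above. All angle-bracket values are placeholders, and the aggregated feature name is hypothetical.

[dataset-agents-gim-ext]

  • sql-query=<your SQL query that computes the aggregated features>
  • data-type=agents
  • upload-dataset=true
  • update-period=<interval>
  • start-date=<start date>
  • end-date=<end date>
  • chunk-size=<time period>

[schema-agents-gim-ext]

  • employeeId=<datatype>
  • <aggregated feature name>=<datatype>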


Upload agent data from Configuration Server (on-premises environments only)

To upload from Configuration Server, use the following reserved section names. They take the same mandatory options and configuration parameters as dataset-agents-gim and schema-agents-gim, which are presented in the "Data upload basics" section, above.

  • dataset-agents-cfgsrv
  • schema-agents-cfgsrv

Create your own SQL query

Two default SQL queries are provided in the Data Loader installation package:

  • /dl/interaction_data_aht.sql - the query that collects the interaction dataset for the AHT metric.
  • /dl/agents_data_gim.sql - the query that collects the default Agent Profile dataset.

To create your own SQL query, access the Data Loader container, where the default query files are located in the /dl folder. First, run the following command to locate the container ID:

docker ps

And then run the following command to access the container:

docker exec -it <container_id> /bin/bash

The following fields are mandatory in your SQL file:

  • employeeId - The ID of the agent. Must match the name of the ID Field in the Agent Profile dataset.
  • dbID - The agent's configuration database DBID. Obtained from the Genesys Info Mart RESOURCE_.RESOURCE_CFG_DBID field.
  • userName - Obtained from the Genesys Info Mart RESOURCE_.RESOURCE_NAME field.
  • firstName - Obtained from the Genesys Info Mart RESOURCE_.AGENT_FIRST_NAME field.
  • lastName - Obtained from the Genesys Info Mart RESOURCE_.AGENT_LAST_NAME field.
  • tenantDbID - Obtained from the Genesys Info Mart RESOURCE_.tenant_key field.
  • groupNames - List of group names assigned to the agent.
  • skills - Dictionary containing the agent's current skills.

NOTES:

  • The field names are case-sensitive.
  • Use of angle brackets to signify "not" is not supported. Genesys recommends that you use != instead.
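
For illustration only, a skeleton query might look like the following sketch. Only the column sources named above (RESOURCE_CFG_DBID, RESOURCE_NAME, AGENT_FIRST_NAME, AGENT_LAST_NAME, tenant_key) come from this page; the EMPLOYEE_ID column and the WHERE conditions are assumptions, and the mandatory groupNames and skills fields are omitted because their sources are not described here. Compare with the default /dl/agents_data_gim.sql before adapting anything like this.

SELECT
    r.EMPLOYEE_ID       AS "employeeId",   -- assumption: source column for the agent ID
    r.RESOURCE_CFG_DBID AS "dbID",
    r.RESOURCE_NAME     AS "userName",
    r.AGENT_FIRST_NAME  AS "firstName",
    r.AGENT_LAST_NAME   AS "lastName",
    r.TENANT_KEY        AS "tenantDbID"    -- quoted aliases preserve the case-sensitive field names
FROM RESOURCE_ r
WHERE r.RESOURCE_TYPE = 'Agent'            -- assumption: restrict to agent resources
  AND r.RESOURCE_NAME != ''                -- use != rather than angle brackets, per the note above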

Configure the data upload schedule

How often Data Loader uploads data from Genesys Info Mart and how much data it uploads at once are controlled by two options, update-period and chunk-size, which work together as described in this section.
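
As a minimal illustration (placeholder values), both options are set in the dataset section for the data being uploaded:

[dataset-interactions]

  • update-period=<interval>
  • chunk-size=<time period>

With these settings, Data Loader runs an upload process every update-period, and each batch covers the interactions that occurred during one chunk-size span of time.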
