Configure Data Loader to upload data

Learn how to configure Data Loader to upload data to your instance of the GPR Core Platform for analysis and agent scoring.

update-period

Default Value: P24H
Valid Values: String in ISO 8601 duration format, from PT15M to P30D
Changes Take Effect: After 60 sec timeout

Specifies the interval at which Data Loader attempts to upload data, enabling fresh data stored in the Genesys Info Mart database to be automatically uploaded to the associated dataset. Used with dataset-agents-gim and the main interactions dataset, which are the datasets created directly from Genesys Info Mart data.

  • If the update-period value is less than the value for the chunk-size option, Data Loader uploads all data after the watermark marking the end of the previous upload.
  • If the update-period value is larger than the value of the chunk-size option, Data Loader uploads all data after the watermark, split into chunks of the size specified by the value of the chunk-size option.

Examples

NOTE: In the examples below, the value of the end-date option is set in the future.

  • If update-period is set to 1 day (P1D) and chunk-size is set to one hour (PT1H), all the data after the previous watermark is uploaded in 1-hour chunks. This chunking is designed to prevent overloading your infrastructure.
  • If you are uploading a dataset for the first time and set start-date to 90 days in the past, update-period to 1 day (P1D), and chunk-size to 30 days, Data Loader uploads the 90 days of data in three 30-day chunks.
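
For example, the two options might appear together in a dataset section as follows (a sketch; the section name dataset-interactions is an illustrative example):

[dataset-interactions]

  • update-period=P1D
  • chunk-size=PT1H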

chunk-size

Default Value: PT15M (15 minutes)
Valid Values: String in ISO 8601 duration format, PT1S (1 second) or higher
Changes Take Effect: Immediately

Defines the chunk size used when extracting data from the Genesys Info Mart database for a dataset of the interactions data type. A chunk consists of the interactions that started within the specified length of time. Each chunk is uploaded to the GPR Core Services platform as a single file and is then appended to the previously uploaded data for the associated dataset.

The following table provides sample formats that are supported for the chunk-size option.

Value       Description
PT20.345S   Specifies 20.345 seconds
PT15M       Specifies 15 minutes
PT10H       Specifies 10 hours
P2D         Specifies 2 days
P2DT3H4M    Specifies 2 days, 3 hours and 4 minutes
PT-6H3M     Specifies -6 hours and +3 minutes
-PT6H3M     Specifies -6 hours and -3 minutes
-PT-6H+3M   Specifies +6 hours and -3 minutes

upload-schedule

Default Value: No default value
Valid Values: A valid schedule in Cron format
Changes Take Effect: On the next upload

This option enables you to execute the upload of a dataset on a preconfigured schedule. In release 9.0.017.01 and lower, or if you do not set a value for the upload-schedule option, data upload scheduling is controlled using the update-period and chunk-size options.
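
For example, to start the dataset upload every day at 2:00 AM, you might set the following value (an illustrative schedule; the Cron format is described in the Configure the data upload schedule section below):

  • upload-schedule=0 0 2 * * ?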

vq-filter

Default Value: No default value
Valid Values: a comma-separated list of valid virtual queue names
Changes Take Effect: On the next data upload

To have Data Loader upload data only from a subset of virtual queues (VQs) for inclusion in an interaction-type dataset, enter a comma-separated list of the VQs to include. Data Loader uploads records from the Genesys Info Mart database associated with the specified VQs.
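
For example, to restrict the upload to two virtual queues named VQ_Sales and VQ_Service (hypothetical names):

  • vq-filter=VQ_Sales,VQ_Service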

sql-query

Default Value: No default value
Valid Values: A string starting with "file:" and followed by a valid path to a file in the Data Loader Docker container containing an SQL query
Changes Take Effect: After 15 min timeout

You need to configure this option only when you are using a customized query to extract data from the Genesys Info Mart database for the Agent Profile and interactions datasets. You do not need to configure the sql-query option to create datasets from .csv files, such as for Customer Profile data, outcomes data, and agent data from sources other than Genesys Info Mart.

Two example SQL queries are provided in the Data Loader Docker container for your reference:

  • /dl/interaction_data_aht.sql - the query used to collect average handling time (AHT) data for Data Loader to upload to the interactions dataset.
  • /dl/agents_data_gim.sql - the query used to collect data to populate the default Agent Profile dataset.

For instructions to create your own SQL query, see Create your own SQL query (https://all.docs.genesys.com/PE-GPR/9.0.0/Deployment/DL-CfgFile#createSQL) in the Deployment and Operations Guide.

The following is an example of a valid value for this option: file:/datasets/outcomes/my_interactions_data_gim.sql

If you do not configure this option in the [dataset-agents-gim] or [dataset-interactions-gim] sections, Data Loader uses the appropriate default query.

upload-dataset

Default Value: see option description
Valid Values: true, false
Changes Take Effect: After 60 sec timeout

Notifies Data Loader that the dataset is fully configured and the data processing for this dataset can be started. Data Loader checks every 60 seconds to see whether the value of this option has changed.

If set to true, Data Loader starts the dataset upload. If set to false, Data Loader does not upload data.

The default value for this option is pre-set to true for the dataset-agents-gim dataset and to false for the dataset-interactions-gim dataset. This configuration ensures that the Agent Profile, which needs to be in place first, is uploaded immediately.

location

Default Value: No default value
Valid Values: A valid path name string for a file containing a dataset in CSV format
Changes Take Effect: After 15 min timeout

Specifies the path to a CSV file containing a dataset. Required for the datasets provided as CSV files.

Configure the file location as described in the following steps:

  1. Place the file itself in the Data Loader IP folder structure using the following path:
    <ip_folder>/ai-data-loader-scripts/scripts/datasets_<dataset_type>
  2. The value for the location option is the path inside the Data Loader Docker container. Specify only the final part of the full path as given below:
    /datasets/<dataset_type>/<dataset file name>.csv

The possible dataset types are agents, customers, and outcomes.
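
For example, if you place a customer data file named customers.csv (a hypothetical file name) in <ip_folder>/ai-data-loader-scripts/scripts/datasets_customers on the host, you would set:

  • location=/datasets/customers/customers.csv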

data-type

Default Value: No default value
Valid Values: agents, customers, interactions, outcomes
Changes Take Effect: Immediately

Specifies the type of Dataset you are uploading.

  • agents - Data Loader uploads the data to the Agent Profile on the GPR Core Services platform. This data can come from Genesys Info Mart or from a CSV file. You can also join it with a Dataset of the "interactions" type.
  • customers - Data Loader uploads the data to the Customer Profile on the GPR Core Services platform. Its source is a CSV file. You can also join it with a Dataset of the "interactions" type.
  • interactions - The Dataset contains interactions extracted from the Genesys Info Mart database, which Data Loader uploads to the GPR Core Services platform. This data can optionally be joined with Datasets of the "agents", "customers", or "outcomes" types before it is uploaded.
  • outcomes - The Dataset contains information that comes from sources other than the Genesys Info Mart database and is provided as a CSV file, which Data Loader uploads to the GPR Core Services platform. This data can optionally be joined with Datasets of the "interactions" type.
    Note: This data type is used for any data that is not of the "interactions" type and that is being uploaded to a Dataset with a user-specified name (that is, a Dataset other than dataset-agents-gim or dataset-customers). The data you are uploading does not have to be literal outcome data.

csv-separator

Default Value: comma
Valid Values: comma, tab
Changes Take Effect: Immediately

Indicates the separator used in CSV data files that Data Loader is to upload, as indicated in the location option configured in the same section. This option is necessary only if you are configuring a Dataset that is to be created by uploading a CSV file.

enforce-schema-on-joined-data

Default Value: true
Valid Values: true, false
Changes Take Effect: After 15 min timeout

  • If set to true, only the fields already defined in the interactions Dataset schema are joined from the Datasets listed in the join option configured in the same section.
  • If set to false, all fields from the Datasets listed in the join option configured in the same section are added to the interactions Dataset.

join-keys

Default Value: No default value
Valid Values: Comma-separated list of column names
Changes Take Effect: After 15 min timeout

A comma-separated list of the column names defined in the schema-agents-gim section that contain key values by which to join the data from this agent Dataset to an interaction type Dataset.
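
For example, to join an agent Dataset to an interaction-type Dataset by agent ID, assuming both schemas contain an EMPLOYEE_ID column (as the default schemas do):

  • join-keys=EMPLOYEE_ID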

join-type

Default Value: inner
Valid Values: inner, outer
Changes Take Effect: After 15 min timeout

  • inner - Only the records successfully joined with the specified Datasets (outcomes, agents, customers, FCR, and so on) are uploaded to the GPR Core Services platform. This is the typical value used in production environments.
  • outer - All interaction records are uploaded to the GPR Core Services platform. Any missing data is replaced with null values. Typically this value is used only for troubleshooting.

join

Default Value: No default value
Valid Values: a comma-separated list of section names containing dataset configurations
Changes Take Effect: After 15 min timeout

Specifies the list of the Datasets of the "agents", "customers", or "outcomes" types to join with the current "interactions" Dataset prior to upload to the GPR Core Services platform.

This join can be inner or outer depending on the value of the join-type option configured in the same section. The following examples show what you join depending on the value or values specified for this option:

  • agents - Joins interaction data obtained from Genesys Info Mart with agent information uploaded from Genesys Info Mart or from a CSV file.
  • customers - Joins interaction data obtained from Genesys Info Mart with customer information uploaded from a CSV file.
  • fcr - Joins interaction data obtained from Genesys Info Mart with first call resolution (FCR) interaction outcome data provided in a CSV file. For this value to be valid, you must have configured dataset-fcr and schema-fcr sections.
  • agents, customers - Joins interaction data obtained from Genesys Info Mart with the data from the Agent and Customer Profiles.
  • agents, customers, fcr - Joins interaction data obtained from Genesys Info Mart with the data from the Agent and Customer Profiles and the FCR interaction outcome data.
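
For example, an interactions dataset section that joins agent and customer data with an inner join might contain the following settings (a sketch using the dataset names from the examples above):

[dataset-interactions]

  • join=agents-gim, customers
  • join-type=inner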

Overview

Data Loader uploads data to the GPR Core Platform for analysis and agent scoring. GPR uses the following types of data:

  • Agent Profile data and interaction data, which Data Loader uploads using a direct connection to Genesys Info Mart. Once you have configured the connection parameters, Data Loader automatically uploads this data and refreshes it periodically.
  • Customer Profile data, which can come from several different sources, depending on your environment. To upload customer data, you must collect it into a .csv file with a consistent schema for Data Loader to upload it.
  • Optionally, you can upload more agent data, interaction outcome data, and any other data you find useful. Compile this data into .csv files for Data Loader to upload it.

To upload data, you must create, and specify values for, the configuration options on the Options tab of the Data Loader Application object. You create these configuration sections and options in your configuration manager application (GAX or Genesys Administrator).

  • The Predictive Routing Options Reference contains a complete list of all Predictive Routing options, with descriptions, default and valid values, and a brief explanation of how to use them.

This topic focuses specifically on data-upload configuration and provides a task-based perspective, enabling you to find which options you must configure for specific upload types and giving more context to help you decide on the appropriate values for your environment.

NOTE: Once you have configured a schema in the Data Loader Application object and uploaded your data to the GPR Core Platform, you cannot modify the schema.

Data upload basics

This section explains how to configure Data Loader to upload the essentials for Predictive Routing. These essentials are the Agent Profile dataset, the Customer Profile dataset, and an interaction dataset.

  • Create an Agent Profile dataset and a Customer Profile dataset before trying to join interaction, outcome, or other data. The sections that follow cover more in-depth scenarios, including adding other types of data and joining data.

To set up these uploads, open the Data Loader Application object in your configuration manager (Genesys Administrator or GAX) and configure the following sections and options for each essential category of data.

1. Agent Profile

To create your Agent Profile dataset, configure the following two sections, and include the options specified within each (a combined sketch follows this list):

  • dataset-agents-gim
    • sql-query - Use the default query provided with Data Loader (the agents_data_gim.sql file in the /dl folder) or create your own. If you plan to use your own SQL file, be sure to include the mandatory fields listed in Create your own SQL query (below).
    • data-type - The value for this option must be agents.
    • upload-dataset - This option is set to true by default, which tells Data Loader to start uploading data at the interval specified in the update-period option. To pause uploading, set this option to false.
    • upload-schedule (in release 9.0.018.00 and higher) or update-period - Specifies how often Data Loader runs an upload process to get new and updated data.
  • schema-agents-gim - In this section, create one option for each column in your Agent Profile dataset. You must include the columns/options listed below. If you modified your SQL query to read values from additional fields, you must also add those fields as options in this section. Each option's value is the field's data type and, if the field contains sensitive or PII data, the value anon.
    • dbID
    • EMPLOYEE_ID (note that this field name must use exactly this format, all caps with an underscore)
    • groupNames
    • skills
    • tenantDbID
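
A minimal sketch of these two sections follows. The schema data types shown are assumptions for illustration; adjust them to match your fields, and add the anon parameter to any field that must be anonymized.

[dataset-agents-gim]

  • data-type=agents
  • sql-query=file:/dl/agents_data_gim.sql
  • upload-dataset=true
  • update-period=P1D

[schema-agents-gim]

  • dbID=numeric
  • EMPLOYEE_ID=string,id,anon
  • groupNames=list
  • skills=dict
  • tenantDbID=numeric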

2. Customer Profile

To create your Customer Profile dataset, configure the following two sections, and include the options specified within each (a combined sketch follows this list):

  • dataset-customers
    • csv-separator - Tells Data Loader what separator type you are using so it can correctly parse the .csv file.
    • data-type - The value for this option must be customers.
    • location - Enter the Data Loader Docker container path to the .csv file containing your Customer Profile data.
    • upload-dataset - This option is set to true by default, which tells Data Loader to start uploading data at the interval specified in the upload-schedule (in release 9.0.018.00 and higher) or update-period option. To pause uploading, set this option to false.
  • schema-customers - In this section, create one option for each column in your Customer Profile dataset. Each option's value is the field's data type and, if the field contains sensitive or PII data, the value anon.
    • CustomerID - The only required option/column.
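
A minimal sketch of these two sections follows. The file name and the fields other than CustomerID are hypothetical examples, and the data types are assumptions.

[dataset-customers]

  • csv-separator=comma
  • data-type=customers
  • location=/datasets/customers/customers.csv
  • upload-dataset=true

[schema-customers]

  • CustomerID=string,id,anon
  • Customer_location=string
  • Customer_language=string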

3. Interaction Dataset

To create a basic interaction dataset, complete the following steps. In a production environment, you would typically join the interaction data to the Agent and/or Customer Profile datasets; the sections that follow explain how to join data. This section gives you the minimum necessary information to create an initial interaction dataset (a combined sketch follows this list).

  • dataset-<name> - When you create this configuration section, replace <name> with a dataset name of your choice. For example, you can create a section called dataset-interactions.
    • chunk-size - Defines how many interactions you want Data Loader to upload at once by specifying a period of time. Data Loader uploads all interactions that occur during this amount of time in a single batch.
    • upload-schedule (in release 9.0.018.00 and higher) or update-period - Specifies how often Data Loader runs an upload process to get new and updated data.
    • sql-query - Use the default query provided with Data Loader (the interaction_data_aht.sql file in the /dl folder) or create your own. If you plan to use your own SQL file, be sure to include the mandatory fields listed in the option description. See Create your own SQL query for instructions.
    • data-type - The value for this option must be interactions.
    • upload-dataset - This option is set to true by default, which tells Data Loader to start uploading data at the interval specified in the upload-schedule (in release 9.0.018.00 and higher) or update-period option. To pause uploading, set this option to false.
  • schema-<name> - When you create this configuration section, replace <name> with the same dataset name you entered for the dataset-<name> section. If your dataset section is called dataset-interactions, this section is schema-interactions.
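
A minimal sketch of these two sections follows, using dataset-interactions as the example name. The schema fields echo the joined-schema example later on this page; the data types for fields not documented there are assumptions.

[dataset-interactions]

  • data-type=interactions
  • sql-query=file:/dl/interaction_data_aht.sql
  • chunk-size=PT1H
  • update-period=P1D
  • upload-dataset=true

[schema-interactions]

  • InteractionID=string
  • EMPLOYEE_ID=string
  • AHT=numeric
  • START_TS=timestamp,created_at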

How to join data

Joining data enables you to pull together information about many aspects of your environment, making predictions more powerful and nuanced. When you join the data, the interaction dataset is the one that contains the joined fields. That is, the interaction dataset uploaded to the GPR application displays all its own fields, plus the fields from the joined datasets.

This section explains the basic procedure for joining the Agent Profile and/or Customer Profile dataset with an interaction dataset. The "Advanced upload scenarios" section on this page presents further examples.

Data upload sequence

When joining features from the Agent Profile or Customer Profile datasets to the interactions dataset, the order in which you upload the datasets initially is important. Upload data in the following sequence:

  1. Upload Agent Profile data by setting the [dataset-agents-gim].upload-dataset option to true. This setting starts upload of Agent Profile data from Genesys Info Mart.
  2. After Data Loader starts the agent data upload, configure the Customer Profile data configuration sections and start the Customer Profile data upload by setting the [dataset-customers].upload-dataset option to true.
  3. Then configure the interactions data upload. The procedure below explains how to specify the Agent Profile and Customer Profile dataset fields to be joined to the interaction data. Once this configuration is complete, start the upload of the interaction dataset by setting the [dataset-<name>].upload-dataset option to true.

Data joining procedure

To enable joining, perform the following steps:

  1. In the Data Loader Application object, create or edit the dataset-<name> section for your interaction dataset. These instructions refer to the dataset-interactions section used as an example in the "Data upload basics" procedures, above. If your dataset has a different name, use your dataset name throughout.
  2. Add the join option. As the option value, specify the names of the datasets to be joined.
    • For example, if you would like to join data from the Agent Profile and Customer Profile datasets with dataset-interactions, set the value of the join option to: agents-gim, customers.
  3. Specify the fields to be joined to the interaction dataset by adding the names of the columns from the joined datasets into the Data Loader schema-interactions configuration section.
    • Each joined column must be a separate option in the schema-interactions configuration section. The value for each of these options must be the name of the schema- section from which the data comes.
    • For example, if you join fields from the Customer Profile (which you created using the dataset-customers and schema-customers configuration sections) with your interaction dataset, the option value for all joined fields is schema-customers.
  4. Specify the desired values for the following additional configuration options (a combined sketch follows this list):
    • join-type - Configured in the dataset-interactions section. Specifies the join type, whether inner (only successfully joined records are uploaded) or outer (all interaction records are uploaded to the platform and the missing data is replaced with null values).
    • join-keys - Configured in the dataset- sections of the datasets you are joining to the interaction dataset. Specifies a comma-separated list of the column names by which to join the data from this dataset to the interaction dataset.
    • enforce-schema-on-joined-data - Specifies whether fields that are not already in the interaction dataset are joined. If set to true, only data from fields already in the interaction dataset are joined. If set to false, ALL columns from the joined datasets are added to the interaction dataset. If you set this option to false, be careful not to exceed the maximum allowed number of 100 columns in the joined dataset, or Data Loader generates an error and exits the upload process.
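
Taken together, the dataset-interactions side of the join described in these steps might look like the following sketch (feedback is the outcome dataset configured in the Upload outcome data example below):

[dataset-interactions]

  • join=agents-gim, customers, feedback
  • join-type=inner
  • enforce-schema-on-joined-data=true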

The following example shows a schema- configuration section for an interaction dataset with joins to the Agent and Customer Profile datasets and an outcome dataset called feedback:

[schema-interactions-joined]

  • AHT=numeric
  • Customer_location=schema-customers
  • CUSTOMER_HOLD_DURATION=numeric
  • CUSTOMER_TALK_DURATION=numeric
  • CUSTOMER_ACW_DURATION=numeric
  • Customer_language=schema-customers
  • Customer_ANI=schema-customers
  • EMPLOYEE_ID=schema-agents-gim
  • Feedback=schema-feedback
  • groupNames=schema-agents-gim
  • InteractionID=string
  • Is_Resolved=schema-feedback
  • skills=schema-agents-gim
  • START_TS=timestamp,created_at

Advanced upload scenarios

In addition to the Agent Profile, Customer Profile, and interactions datasets described in the preceding sections, you can upload other data from your environment. This section presents examples showing how to upload data from additional sources.

Add agent data from sources other than Genesys Info Mart

If your business has employee data relevant for Predictive Routing that is not available from Genesys Info Mart, create a .csv file with that data and configure the Data Loader Application object with the following sections and options:

  • dataset-agents-<name>
    • csv-separator - Tells Data Loader what separator type you are using so it can correctly parse the .csv file.
    • data-type - The value for this option must be outcomes.
    • location - Enter the Data Loader Docker container path to the .csv file containing your agent data. Specify the path as it appears inside the Docker container.
    • upload-dataset - This option is set to true by default, which tells Data Loader to start uploading data at the interval specified in the update-period option. To pause uploading, set this option to false.
  • schema-agents-<name> - Configure the schema for the agent Dataset to be uploaded from a .csv file. It has only one mandatory field, shown below, but you must also configure options specifying the data types for all other fields. Add the anon parameter if a field must be anonymized.
    • EMPLOYEE_ID=string,id,anon

Upload outcome data

To upload outcome data from a post-interaction survey, use the following configuration. You can adapt this example to create a dataset based on nearly any data source. In this example, we are uploading a dataset called feedback.

  • dataset-feedback
    • csv-separator - Tells Data Loader what separator type you are using so it can correctly parse the .csv file.
    • data-type - The value for this option must be outcomes.
    • location - Enter the Data Loader Docker container path to the .csv file containing your outcome data. Specify the path as it appears inside the Docker container.
    • upload-dataset - This option is set to true by default, which tells Data Loader to start uploading data at the interval specified in the update-period option. To pause uploading, set this option to false.
  • schema-feedback
    • Customer_ANI=string,anon
    • EMPLOYEE_ID=string,anon
    • Feedback=string
    • InteractionID=string
    • Is_Resolved=boolean
    • Timestamp_Feedback=timestamp,created_at

Upload aggregated interaction data to your Agent Profile dataset

Note: In Data Loader release 9.0.017.01 and higher, the cloud-based feature engineering pipeline (CFEP) provides more robust aggregation capabilities. See Configure for Feature Engineering for details.

Aggregated features enable you to pull together data from various sources to create features that address specific scenarios. For example, you can define a feature that gives you the average handle time (AHT) for the interactions each agent in the Agent Profile dataset received from a certain virtual queue over a specified period.

To configure aggregated features in the Agent Profile, replace the dataset-agents-gim and schema-agents-gim configuration sections with the following (a sketch follows this list):

  • dataset-agents-gim-ext - In addition to the standard options configured for the dataset-agents-gim section, add the following options:
    • start-date
    • end-date
    • chunk-size
  • schema-agents-gim-ext - Be sure to add all fields that Data Loader must aggregate as options in this configuration section.
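
A sketch of the extended dataset section follows. The start-date and end-date values are placeholders, and the chunk-size value is illustrative; the other options work as described in the sections above.

[dataset-agents-gim-ext]

  • data-type=agents
  • start-date=<start of the aggregation period>
  • end-date=<end of the aggregation period>
  • chunk-size=P30D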

Create your own SQL query

For your reference, the Data Loader container includes the following two default SQL queries:

  • /dl/interaction_data_aht.sql - the query used to collect average handling time (AHT) data for Data Loader to upload to the interactions dataset.
  • /dl/agents_data_gim.sql - the query used to collect data to populate the default Agent Profile dataset.

To review these SQL queries, follow these steps:

  1. Run the following command to locate the container ID:
    docker ps
  2. Use the response from Step 1 to run the following command, which accesses the container:
    docker exec -it <container_id> /bin/bash
  3. Navigate to the /dl/ folder in the container and use the following command to view the default SQL query files:
    cat <filename>

To create your own custom SQL query, use the following procedure:

  1. Review the requirements for mandatory fields contained in this section (below) and follow the guidelines to create your queries.
  2. Place your custom SQL query files into one of the subfolders in your installation directory. The installation directories are mapped to the container. The folders available on the host computer are the following:
    • <IP folder>/ai-data-loader-scripts/scripts/datasets_customers
    • <IP folder>/ai-data-loader-scripts/scripts/datasets_agents
    • <IP folder>/ai-data-loader-scripts/scripts/datasets_outcomes
  3. For each dataset using a custom SQL query, use GAX to open the associated dataset configuration section in the Data Loader Application object. Specify the path to the query file in the sql-query option. The path must be given relative to the container folder structure.
    For a table containing the mapping between folders on the host and in the container, see Mapping between folders on the host and in the container.

Mandatory fields for custom SQL query files

The following fields are mandatory in your agent data SQL file:

  • EMPLOYEE_ID - The id of the agent. Must match the name of the ID Field in the Agent Profile dataset.
  • dbID - The agent's configuration database DBID. Obtained from the Genesys Info Mart RESOURCE_.RESOURCE_CFG_DBID field.
  • tenantDbID - Obtained from the Genesys Info Mart RESOURCE_.tenant_key field.
  • groupNames - List of group names assigned to the agent.
  • skills - Dictionary containing the agent's current skills.

The following fields are mandatory in your interaction data SQL file (a condensed sketch follows this list):

  • InteractionID - Obtained from the Genesys Info Mart INTERACTION_FACT.media_server_ixn_guid field.
  • EMPLOYEE_ID - The field that defines the Agent ID. Must match the name of the id_field in the agents dataset.
  • WHERE clause - Must include the following:
    • irf.start_date_time_key - This value must fall between the dates set in the :start_ts and :end_ts parameters. Data Loader replaces the :start_ts and :end_ts fields with the start and end time of the interactions dataset chunk, in epoch format.
    • Note: The abbreviation irf in the WHERE clause indicates the INTERACTION_RESOURCE_FACT table in the Genesys Info Mart database.
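
A condensed sketch of the shape such a query can take follows. The join conditions and table aliases are assumptions for illustration; the default /dl/interaction_data_aht.sql file is the authoritative example.

SELECT
    gif.media_server_ixn_guid AS InteractionID,  -- from INTERACTION_FACT
    r.employee_id AS EMPLOYEE_ID                 -- must match the agents dataset id_field
FROM INTERACTION_RESOURCE_FACT irf
JOIN INTERACTION_FACT gif ON gif.interaction_id = irf.interaction_id
JOIN RESOURCE_ r ON r.resource_key = irf.resource_key
WHERE irf.start_date_time_key BETWEEN :start_ts AND :end_ts
-- With the cloud feature engineering pipeline, the :user_data_fields, :user_data_joins,
-- and :vq_filter placeholders described below can also be included.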

To create a custom SQL query when using the cloud feature engineering pipeline (CFEP), set the use-cloud-feature-engineering option to true and create your query using the following SQL template placeholders:

  • :user_data_fields in the SELECT clause and :user_data_joins in the WHERE clause to extract the user data stored in the Info Mart database.
  • :vq_filter in the WHERE clause to enable filtering of the interaction records by the Virtual Queue names defined in the vq-filter option.
  • If you use multiple mapping rules for one KVP, you need to specify how Data Loader should include them in the dataset.

NOTES:

  • The field names are case-sensitive.
  • Use of angle brackets to signify "not" is not supported. Genesys recommends that you use != instead.

Configure the data upload schedule

How often Data Loader uploads data from Genesys Info Mart, and how much data it uploads at once, are controlled by configuration options. The options available depend on your Data Loader version.

9.0.018.00 and higher

In these releases, you schedule uploads by setting the upload-schedule option to a CRON expression. A CRON expression is a string consisting of six or seven subexpressions (fields) that describe individual details of the schedule. These fields, separated by white space, can contain any of the allowed values with various combinations of the allowed characters for that field. The following table shows the fields in the order expected and the allowed values for each.

Name           Required   Allowed Values        Allowed Special Characters
Seconds        Y          0-59                  , - * /
Minutes        Y          0-59                  , - * /
Hours          Y          0-23                  , - * /
Day of month   Y          1-31                  , - * ? / L W C
Month          Y          0-11 or JAN-DEC       , - * /
Day of week    Y          1-7 or SUN-SAT        , - * ? / L C #
Year           N          empty or 1970-2099    , - * /

Example Cron Expressions

Cron expressions can be as simple as * * * * ? * or as complex as 0/5 14,18,3-39,52 * ? JAN,MAR,SEP MON-FRI 2002-2010.

The following table contains various Cron expressions and their meanings.

Expression                 Meaning
0 0 12 * * ?               Start dataset upload at 12:00 PM (noon) every day
0 15 10 ? * *              Start dataset upload at 10:15 AM every day
0 15 10 * * ?              Start dataset upload at 10:15 AM every day
0 15 10 * * ? *            Start dataset upload at 10:15 AM every day
0 15 10 * * ? 2005         Start dataset upload at 10:15 AM every day during the year 2005
0 * 14 * * ?               Start dataset upload every minute starting at 2:00 PM and ending at 2:59 PM, every day
0 0/5 14 * * ?             Start dataset upload every 5 minutes starting at 2:00 PM and ending at 2:55 PM, every day
0 0/5 14,18 * * ?          Start dataset upload every 5 minutes starting at 2:00 PM and ending at 2:55 PM, and again every 5 minutes starting at 6:00 PM and ending at 6:55 PM, every day
0 0-5 14 * * ?             Start dataset upload every minute starting at 2:00 PM and ending at 2:05 PM, every day
0 10,44 14 ? 3 WED         Start dataset upload at 2:10 PM and at 2:44 PM every Wednesday in the month of March
0 15 10 ? * MON-FRI        Start dataset upload at 10:15 AM every Monday, Tuesday, Wednesday, Thursday and Friday
0 15 10 15 * ?             Start dataset upload at 10:15 AM on the 15th day of every month
0 15 10 L * ?              Start dataset upload at 10:15 AM on the last day of every month
0 15 10 ? * 6L             Start dataset upload at 10:15 AM on the last Friday of every month
0 15 10 ? * 6L 2002-2005   Start dataset upload at 10:15 AM on the last Friday of every month during the years 2002, 2003, 2004, and 2005
0 15 10 ? * 6#3            Start dataset upload at 10:15 AM on the third Friday of every month
0 0 12 1/5 * ?             Start dataset upload at 12:00 PM (noon) every 5 days every month, starting on the first day of the month
0 11 11 11 11 ?            Start dataset upload every November 11 at 11:11 AM

9.0.017.01 and lower

The update-period option specifies the interval at which Data Loader attempts to upload data, enabling fresh data stored in the Genesys Info Mart database to be automatically uploaded to the associated dataset. Used with dataset-agents-gim and the main interactions dataset, which are the datasets created directly from Genesys Info Mart data.

  • If the update-period value is less than the value for the chunk-size option, Data Loader uploads all data after the watermark marking the end of the previous upload.
  • If the update-period value is larger than the value of the chunk-size option, Data Loader uploads all data after the watermark, split into chunks of the size specified by the value of the chunk-size option.

Examples

NOTE: In the examples below, the value of the end-date option is set in the future.

  • If update-period is set to 1 day (P1D) and chunk-size is set to one hour (PT1H), all the data after the previous watermark is uploaded in 1-hour chunks. This chunking is designed to prevent overloading your infrastructure.
  • If you are uploading a dataset for the first time and set start-date to 90 days in the past, update-period to 1 day (P1D), and chunk-size to 30 days, Data Loader uploads the 90 days of data in three 30-day chunks.