From Genesys Documentation

A program is a set of instructions that tell SpeechMiner UI what to recognize in recorded conversations between contact center agents and customers, in relation to a specific business issue.

More specifically, a program's instructions are made up of topic and non-linguistic recognition tasks that contain guidelines about what SpeechMiner UI should look for, when to look for it, and where to look for it in the interaction.

Topics must be defined before you can add them to a program, although you can change their contents afterwards. When a program is applied, SpeechMiner UI searches the interactions associated with the program for the topic contents as they are defined at that time. For information about creating and modifying topics, see Topics.

When SpeechMiner UI finds a match for a recognition task, it registers an event. For more information about events, refer to the SpeechMiner UI User Manual.

Every interaction that enters the system is automatically assigned to a program.

A program consists of the following:

  • Content Processing Methods
  • Non-Linguistic Identification
  • Structured Diagram
  • Priority Level

Content Processing Methods

Each program instructs SpeechMiner UI to extract useful information from a recorded conversation using the following content processing methods:

  • Speech Recognition: Identifies each word in its lexicon and transcribes it. Transcription enables users to search for phrases and facilitates reports that analyze the recognized speech for trends. In addition, a transcription lets you see a variety of characteristics associated with the interaction (for example, who said what) and highlights phrases that can identify specific issues, such as dissatisfied customers.
  • Topic Recognition: Identifies specific phrases associated with a defined topic. A topic represents a specific intent (for example, cancellation), and each program is associated with one or more topics. Topic recognition enables users to search for interactions containing a particular business issue and facilitates reports that analyze topic data. Because topic recognition relies on the transcript, it cannot be performed unless Speech Recognition is performed first. The topics included in the program define the linguistic data that SpeechMiner UI should look for in interactions that belong to the program.

Non-Linguistic Identification

Non-Linguistic Identification identifies the non-verbal parts of an interaction. For example, silence, busy signal, key presses, and caller agitation (tone).

Every program instructs SpeechMiner UI to automatically identify the following non-linguistic events:

  • Music: Indicates when music is being played during the interaction. Music generally indicates that the interaction was on hold.
  • Cross Talk: Indicates when two or more people are talking at the same time.
  • Silence: Indicates when there is nothing being said or played. SpeechMiner UI will automatically skip over these silences when the interaction is played back.
  • DTMF: A key press on a touch-tone phone. Twelve different keys can be identified using DTMF (Dual-Tone Multi-Frequency) signaling.
  • Busy Tone: A busy signal.
  • Dial Tone: A dial tone.
  • Ringback: A signal used on the PSTN (Public Switched Telephone Network, standard "land lines") to indicate that the called line is ringing.
  • After call work: Indicates the section of the recording that takes place after the interaction has ended.
Identification standards for Dial Tone, Ringback, Busy Tone and DTMF are based on USA standards. To learn how to use different identification standards, refer to the Configuring SpeechMiner UI > Additional Configurations > Tone Frequency Configuration tab in the Genesys Interaction Analytics, Genesys Interaction Recording UI and Quality Management Administration Guide.
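The twelve identifiable DTMF keys correspond to the standard keypad tone pairs defined in ITU-T Q.23, where each key is the combination of one row (low) frequency and one column (high) frequency. The sketch below shows that standard mapping for illustration only; it is not SpeechMiner's internal implementation.

```python
# Standard DTMF keypad per ITU-T Q.23: each key is a pair of one row (low)
# and one column (high) frequency. Illustrative lookup only.
ROWS = (697, 770, 852, 941)   # Hz
COLS = (1209, 1336, 1477)     # Hz (a fourth column, 1633 Hz, adds keys A-D)
KEYS = "123456789*0#"

DTMF = {key: (ROWS[i // 3], COLS[i % 3]) for i, key in enumerate(KEYS)}

print(DTMF["5"])   # (770, 1336)
print(len(DTMF))   # 12 identifiable keys
```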
In addition, when you configure a program, you can choose whether SpeechMiner UI should also identify agitation (that is, non-verbal expressions of frustration and anger, such as deep sighs, grunts, or rapid changes in pitch).

Each non-linguistic event that is identified by SpeechMiner UI has a start time, an end time, and a type. For example, if SpeechMiner UI identifies silence in an interaction, this is a non-linguistic event whose start time is the beginning of the silent period, whose end time is the end of the silent period, and whose type is "Silence."
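The event record described above can be sketched as a simple data structure; the class and field names here are hypothetical, chosen only to illustrate the start time, end time, and type that every non-linguistic event carries.

```python
from dataclasses import dataclass

@dataclass
class NonLinguisticEvent:
    """Illustrative record of a non-linguistic event (hypothetical names)."""
    event_type: str   # e.g. "Silence", "Music", "DTMF"
    start: float      # seconds from the start of the interaction
    end: float        # seconds from the start of the interaction

    @property
    def duration(self) -> float:
        return self.end - self.start

# A silent period detected from 30 to 42 seconds into an interaction:
silence = NonLinguisticEvent("Silence", 30.0, 42.0)
print(silence.duration)  # 12.0
```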

Structured Diagram

Each program is organized as a structured diagram that links topics in sequence to mimic the expected flow of the conversations associated with the program. The diagram tells SpeechMiner UI where in an interaction the content must be found in order to match the requirements. For example, a structured diagram could show a "Loan Offer" topic at the beginning of an interaction, followed by a "Disclaimer" topic, and, finally, a "Contact Information" topic.
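The ordering constraint in the "Loan Offer" example can be sketched as a subsequence check: the expected topics must appear in the detected topics in the same relative order. This is a minimal illustration of the idea, not SpeechMiner's matching algorithm.

```python
def matches_flow(expected_flow, detected_topics):
    """Return True if the expected topics occur, in order, among the
    topics detected in an interaction (a subsequence check). Illustrative
    sketch only; not SpeechMiner's actual matching logic."""
    remaining = iter(detected_topics)
    return all(topic in remaining for topic in expected_flow)

flow = ["Loan Offer", "Disclaimer", "Contact Information"]
detected = ["Greeting", "Loan Offer", "Objection", "Disclaimer",
            "Contact Information"]
print(matches_flow(flow, detected))                    # True
print(matches_flow(flow, ["Disclaimer", "Loan Offer"]))  # False: wrong order
```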

You can fine-tune the structure to increase the efficiency and accuracy of the recognition process using the following features:

  • Modify the Program Structure: Specify the order in which the topics must appear in the interaction, and indicate where branches may occur and which topics are optional, by adding and removing arrows in the structure diagram.
  • Create a Program Trigger: Set conditions for the links defined in the program's structure, including conditions that the topic must meet and metadata conditions that the interaction must meet (for example, the agent must be from a particular work group, or the interaction must have taken place after a certain date).
  • Create a Program Recognition Task: Specify that the topic must have taken place at a specific time during the interaction.

Priority Level

Interactions are sent to SpeechMiner UI from the external recording system. When they are received, they are placed in a processing queue to await analysis. Because the processing of some interactions may be more important than that of others, SpeechMiner UI does not simply process interactions in the order in which they entered the queue.

When you create a program, you assign it a priority level. SpeechMiner UI selects which interaction to process next based on the priority level of the program it is associated with: it processes the highest-priority interactions in the queue first and then proceeds to lower-priority interactions, regardless of how long the interactions have been in the queue.
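This highest-priority-first behavior can be sketched with a standard priority queue. The numeric priority scale, function names, and interaction IDs below are assumptions for illustration; only the ordering behavior mirrors what the text describes.

```python
import heapq
import itertools

# Minimal sketch of priority-based queueing, assuming a numeric program
# priority where a higher number means higher priority.
_arrival = itertools.count()  # tie-breaker: arrival order within a priority
queue = []

def enqueue(interaction_id, program_priority):
    # heapq is a min-heap, so negate the priority to pop the highest first.
    heapq.heappush(queue, (-program_priority, next(_arrival), interaction_id))

def next_interaction():
    return heapq.heappop(queue)[2]

enqueue("call-001", 1)  # low-priority program, arrived first
enqueue("call-002", 5)  # high-priority program, arrived later
enqueue("call-003", 1)
print(next_interaction())  # call-002: highest priority despite arriving later
print(next_interaction())  # call-001: same priority as call-003, arrived earlier
```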
