Revision as of 18:03, June 10, 2019
Create an Evaluation
SpeechMiner Quality Management enables you to create five types of evaluations:
- Distributed Interaction Evaluation - Creates evaluation sessions about interactions for selected evaluators. Each evaluator is assigned an evaluation session for each agent associated with the evaluation. The evaluation covers the agents who participated in the interactions added to it. For example, if you add 2 evaluators and 3 interactions (each with a different agent) to the evaluation, 6 evaluation sessions are created. That is, each evaluator is asked to fill out the evaluation for each of the 3 agents.
- Distributed Agent Evaluation - Creates evaluation sessions about one or more specific agents' performance during customer interactions, for a single evaluator. The selected evaluator is assigned one evaluation session per selected agent. For example, if you select 18 agents to be evaluated by a specific evaluator, 18 evaluation sessions are created (one for each agent), according to the selected interaction filter criteria.
- Shared Evaluation - Creates evaluation sessions about an agent's performance during customer interactions without assigning the sessions to specific evaluators. Instead, each evaluator associated with the session can claim a Shared session from the available pool of Shared sessions. Once an evaluator selects a Shared session, that session is no longer available to other evaluators.
- Calibration Evaluation - Use this evaluation to compare evaluator performance and ensure consistency across teams. A Calibration Evaluation is performed on one evaluation in the same way as a Distributed Interaction Evaluation session; the difference is that the results of these evaluation sessions can be used in a Calibration Score report (that is, a report that compares how the evaluators filled out the same evaluation session).
- Ad-Hoc Evaluation - Creates an evaluation session for a specific interaction or segment currently being played in the Media Player.
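The session counts in the examples above follow simple rules: a Distributed Interaction Evaluation creates one session per evaluator per agent, while a Distributed Agent Evaluation creates one session per selected agent for a single evaluator. A minimal sketch of that arithmetic (illustrative only, not the SpeechMiner API):

```python
def distributed_interaction_sessions(num_evaluators: int, num_agents: int) -> int:
    """Every evaluator fills out the evaluation for every agent."""
    return num_evaluators * num_agents

def distributed_agent_sessions(num_agents: int) -> int:
    """One session per selected agent, all assigned to one evaluator."""
    return num_agents

# Examples from the text:
print(distributed_interaction_sessions(2, 3))  # 2 evaluators, 3 agents -> 6
print(distributed_agent_sessions(18))          # 18 agents -> 18
```

Note that the agent count for a Distributed Interaction Evaluation comes from the interactions you add, so two interactions by the same agent do not double that agent's sessions.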
Refer to Quality Management Workflow for a better understanding of the evaluation process.