Designing and Conducting Formative Evaluations
A formative evaluation is the collection of data and information during the development of instruction that can be used to improve the instruction's effectiveness. There are three phases.
- One-to-one: designer works with individual learners to obtain data to revise the materials.
- Small-group evaluation: a group of 8-20 learners who are representative of the target population study the materials on their own and are tested to collect the required data.
- Field trial: test procedures required for installing the instruction in a situation as close to the "real world" as possible.
The instructional strategy can be used to design the formative evaluation by creating a matrix that crosses the components of the instructional strategy with the major categories of questions about the instruction. Using the instructional strategy as the frame of reference helps keep the evaluation from becoming either too narrowly focused or too broad.
There are five questions that should be asked related to decisions made while developing materials:
- Are the materials appropriate for the type of learning outcome?
- Do the materials include adequate instruction of the subordinate skills, and are the skills sequenced and clustered logically?
- Are the materials clear and readily understood by representative members of the target group?
- What is the motivational value of the materials? Do learners find the materials relevant to their needs and interests? Are they confident as they work through the materials? Are they satisfied with what they have learned?
- Can the materials be managed efficiently in the manner they are mediated?
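The matrix idea above can be sketched in code: cross each component of the instructional strategy with the question categories, giving one cell per pair to hold the specific questions to ask. This is a minimal illustration; the component and category names below are assumptions for the example, not a prescribed list.

```python
# Hypothetical sketch of an evaluation-planning matrix. The component
# and category names are illustrative placeholders.

STRATEGY_COMPONENTS = [
    "preinstructional activities",
    "content presentation",
    "learner participation",
    "assessment",
    "follow-through activities",
]

QUESTION_CATEGORIES = [
    "appropriate for outcome",
    "subordinate skills covered",
    "clarity",
    "motivation",
    "manageability",
]

def build_planning_matrix(components, categories):
    """Return an empty matrix: one cell per (component, category) pair,
    ready to be filled with the specific questions to ask."""
    return {c: {q: [] for q in categories} for c in components}

matrix = build_planning_matrix(STRATEGY_COMPONENTS, QUESTION_CATEGORIES)

# Record a sample question in one cell of the matrix.
matrix["content presentation"]["clarity"].append(
    "Is the vocabulary level suitable for the target learners?"
)
```

Filling every cell (or deliberately deciding a cell needs no question) is one way the strategy keeps the evaluation from being too narrow or too broad.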
Types of data to collect include the following:
- Reactions of the subject-matter expert, whose responsibility it is to verify that the content of the module is accurate and current.
- Reactions of a manager or supervisor who has observed the learner using the skills in the performance context.
- Test data collected on entry skills tests, pretests, and posttests.
- Comments or notations made by learners to you or marked on the instructional materials about difficulties encountered at particular points in the materials.
- Data collected on attitude questionnaires or debriefing comments in which learners reveal their overall reactions to the instruction and their perceptions of where difficulties lie with the materials and the instructional procedures in general.
- The time required for learners to complete various components of instruction.
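Two of the data types above, test scores and completion time, lend themselves to simple quantitative summaries. The sketch below, with invented field names and sample values, shows one way to compute average pretest-to-posttest gain and average time from a small trial.

```python
# Hypothetical sketch: summarizing test and timing data from a trial.
# Field names and sample values are invented for illustration.
from statistics import mean

learners = [
    {"pretest": 40, "posttest": 85, "minutes": 52},
    {"pretest": 55, "posttest": 90, "minutes": 47},
    {"pretest": 35, "posttest": 70, "minutes": 63},
]

# Average improvement from pretest to posttest across learners.
avg_gain = mean(l["posttest"] - l["pretest"] for l in learners)

# Average time spent completing the instruction.
avg_time = mean(l["minutes"] for l in learners)

print(f"Average pretest-to-posttest gain: {avg_gain:.1f} points")
print(f"Average completion time: {avg_time:.1f} minutes")
```

Summaries like these complement, but do not replace, the qualitative comments and debriefing data listed above.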
One-to-One Evaluation
The purpose is to identify and remove the most obvious errors in the instruction and to obtain initial performance indications and reactions to the content from learners. Three main criteria guide the decisions designers make during the interview process: clarity, impact, and feasibility.
- Clarity: 3 criteria
- message (vocabulary level, sentence complexity, introductions, conclusions, etc.)
- link (analogies, illustrations, and demonstrations)
- procedures (characteristics of the instruction including sequence, size of segment, transition between segments, etc.).
- Impact: learner's attitudes about the instruction and his or her achievement on specific objectives. The evaluator must determine whether the learner perceives the instruction as being
- personally relevant to her or him
- accomplishable with reasonable effort
- interesting and satisfying to experience
- Feasibility: relates to management-oriented considerations that can be examined during the one-to-one trial. Examples include:
- How should the maturity, independence, and motivation of the learner influence the time required to complete the instruction?
- Can learners such as this one operate or easily learn to operate any specialized equipment required?
- Is the learner comfortable in this environment?
- Is the cost of delivering this instruction reasonable given the time requirements?
Small-Group Evaluation
There are two purposes: 1.) determine the effectiveness of changes made following the one-to-one evaluation and identify any remaining learning problems that learners may have; and 2.) determine whether learners can use the instruction without interacting with the instructor. During this phase, the evaluator plays a less interactive role; performance and attitude data are collected, and in-depth debriefings are conducted to obtain both quantitative and qualitative data.
Field Trial
The purpose is to determine whether the changes in the instruction made after the small-group stage were effective and to see whether the instruction can be used in the context for which it was intended. During the field trial, the evaluator does not interfere as the data are gathered from the learners or instructor, although observing while the materials are used can provide insights for interpreting the data.
Reflections
I thought it was interesting that the text said that studies have shown that many of the instructional design products sold in the U.S. each year have not been evaluated with learners and revised prior to distribution. Although important, I imagine there are times when time is very limited and conducting a formative evaluation could be hard to fit in. I could see this being especially hard when the instruction is delivered online. It might be hard to determine who your learners are and to receive feedback.