Having just learned and applied the ADDIE (Analysis, Design, Development, Implementation, and Evaluation) concept in 6430, I've noticed a peculiar circumstance: in spite of the desired effect of a learner-centered design construct, there seems to be very little emphasis on learner input into the process.
Where are opportunities for formative assessment from the intended learners?
The Brown/Green article that we read for our 09/22 class session mentioned that "one of the most effective methods for determining the success of your task analysis during the design and development of instruction is to ask a SME [subject matter expert] who was not part of the task analysis to look over what you created and to evaluate it for accuracy and thoroughness" (p. 116).
While I believe the SME can certainly validate the thoroughness of the ultimate product, I challenge the assertion that he or she can vouch for its effectiveness. In my mind, only the learners can speak to that point. They will be the recipients of this product, and as the ones who actually use it, they will be better able to comment on its value.
In class on 09/01, we identified three values as imperative to instructional designers: efficiency, i.e., solving the problem without unnecessary steps; effectiveness, i.e., obtaining some degree of success; and appeal, i.e., motivating/drawing in the intended learner(s). To me, the learner(s) would be better able than a SME to tell the instructional designer(s) whether or not [1] the product contained any frivolous parts, [2] the product worked, and [3] the product interested them.
So why don't we consult them more often? To be fair, it seems apparent that the designer(s) could and should seek their input in the goals and/or learner assessments that typically occur as part of the 'analysis' stage of the ADDIE concept. But once the designer(s) reach the 'design' and 'development' stages, where they've generated a blueprint and some deliverables, respectively, I think there should be more consultation with the target recipient group.
The Brown/Green article suggested a "summative evaluation activity" after the designer(s) implemented the instruction, but why wait until then? In the book, "How People Learn: Brain, Mind, Experience, and School," the National Research Council defined formative assessment as "ongoing assessments designed to make students' thinking visible to both teachers and students" (1999, p. 24) with a particular benefit to teachers to "identify problems that need to be remedied (problems that may not be visible without the assessments)" (p. 25).
Thus, I think it might be valuable for designers to incorporate a pilot group of learners into their design construct, so that the resultant product emerges with more inductive, formative feedback from the learners.
Tuesday, September 22, 2009
I'm good with what you are saying about the use of formative assessment and its role in design. I'm a little more leery about the role of the SME you establish. While the method of delivery may be the big issue in IDET, if a SME doesn't validate the content, who sees to it? At least in the case of kids, I'm not really sure that what they desire at this point in life is what schools are supposed to deliver. A portion of the population may know what they "need," but looking out at a class during a discussion, post-lunch, with kids who may or may not have an interest in the content, and who may only be interested in clicking off the math/english/science credits in the easiest way they can (if you aren't aware of the "packets" that they can use to make up a grade, it might be interesting to look into)... I'm not sure that we should be leaving it to them to decide if they are interested in the content that a SME may see as essential.
I think, as in most circumstances, it is necessary for both (SME and learner input in this case) to be considered. Each participant has a place in the process. SMEs are important, as has been expressed, to validate content and provide background information relevant to the instruction being designed, maybe even how learners typically achieve success in the given area. However, as you noted, the learners are central to the instruction being designed, because if it doesn't meet their needs or assist them in accomplishing the goal/objective, what is the point of designing the instruction in the first place?
Coming from a professional teaching background, I found both SME and learner input critical to success. While in the teaching credential program, our content classes, taken before and during our initial classroom teaching experience, enabled us to acquire the main subject matter content needed to share the information (i.e., history, math, reading) with learners (although "teaching" the content was a different issue). However, as we quickly noticed when we pre-assessed our students, student needs were as diverse as their backgrounds, and oftentimes did not align with state standards for that particular grade level. Thus, learner needs drove the lesson plans and activities that enabled students to acquire and retain the necessary knowledge to be successful.
If you asked students whether they wanted to learn long division or fractions or narrative writing, they'd likely turn up their noses and roll their eyes. Of course, as state standards would dictate, this was "necessary" for them to accomplish their grade objectives and advance to the next grade. However, using pre- and continuous assessment tools helped determine what instruction was necessary, thereby increasing effectiveness and efficiency based on learner input. Hence, small group instruction was critical for students needing extra help, or for those who required enrichment and a depth not all students were ready to master. Thus, as stated before, both SME and student input are vital to the successful accomplishment of objectives and goals.