Saturday, June 1, 2013

Effective Web Instruction: Handbook for an Inquiry-Based Process; Chapters 3 and 4

SUMMARY

In Chapters 3 and 4, Frick and Boling continue their process for preparing (Chapter 3) and testing (Chapter 4) the paper prototype.

Chapter 3 - Preparing for testing a prototype

Chapter 3 starts by identifying the three high level questions the paper prototype should answer:
  1. Is the instruction effective?
  2. Are students satisfied with the instruction?
  3. Is the product usable by the students?

To answer question #1, a three-step process is used: (1) develop a pretest, (2) develop usability testing, and (3) develop a posttest (the posttest should test the same things the pretest did).

The process for developing the content (tasks) goes as follows: start by teaching it yourself using traditional ILT. As you teach it, develop the paper prototype rapidly on paper. Be prepared to explain why the first prototype is on paper and not on the web. The prototype should include as much of the content as possible, e.g., critical components (diagrams, demos, and presentations) and high-level detail – the breadth. It should show the depth of the deepest component by showing the entire strand. It should not include finished graphics or unnecessary detail. The final prototype should not have a “finished” or “professional” look or feel – you want the test subjects to feel free to criticize.

Informational products, the collections of learning resources used to support the actual instruction, should be assembled in the same manner. Capture the breadth of the website to the top level; capture the depth of the website by developing several strands – at least one complete strand, preferably the deepest.

Assembly instructions: create a notebook of pages, ensuring that you number/code the pages and all hyperlinks. When building the prototype (and the content), be sure to address Merrill’s instructional principles known to facilitate learning:
  1. Present in the context of a real world problem.
  2. Activate prior knowledge or experience.
  3. Demonstrate (show examples) of what is to be learned.
  4. Provide opportunities for learners to practice or apply their newly learned knowledge or skill.
  5. Provide techniques that encourage learners to integrate (transfer) the new knowledge or skill into their everyday life.

Develop an authentic assessment for student mastery. Avoid the word “test,” as it has a negative connotation for many. Assessments should use “authentic” methods that imitate real life and match the instructional goal. Use rubrics that answer the following: What is the student performance you expect? What are the conditions of the performance? What criteria will you use to judge whether the performance is good enough for mastery? The assessment must be valid and assess task mastery.

To assess student satisfaction, develop a Kirkpatrick Level 1 reactionaire. We will also be accomplishing a Level 2 assessment of learning using the pre- and posttests.

Finally, get an expert appraisal by asking a SME or SMTE to review the prototype. Next, test the prototype yourself. The self-check should be as much a quality check as a usability check. Fix technical problems as they are found; do not test with a prototype that has known problems! Create an “Observation Sheet” for the observer to use during testing. The observation sheet ensures conformity and uniformity and provides a baseline.

Chapter 4 - Testing a prototype

Chapter 4 provides a sample scenario for testing a prototype.  When conducting the testing, be sure to start with the Big Three: 
  1. Use authentic subjects – they should match the identified TPOP.
  2. Use authentic tasks – the learning activities should match the goals/objectives. For informational content, the tasks should include information-finding tasks.
  3. Use authentic conditions – remember, this is web-based content without an instructor or facilitator. Consider using instruction sheets to guide tasks.


Next, create a test plan and hold a pilot session. The test plan may or may not be required; if used, follow a test-plan checklist. Identify and recruit subjects (testers) who are drawn from the actual TPOP. Determine how many subjects are needed; four to five is the normal minimum. The text also discusses two sampling methods:

Sampling until saturation – sampling until the results stop changing (see Krug, Chapters 9 & 10).

Bayesian reasoning – sequential decision making. (The number of subjects required to show a website is not working properly is considerably fewer than the number required to show it works properly. Frick got similar results when determining the lengths of adaptive mastery tests using a Bayesian reasoning process.)
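That asymmetry (few subjects to show failure, many to show success) can be illustrated with a toy sequential stopping rule. This is not Frick's actual procedure – just a sketch under assumed numbers: a uniform Beta(1, 1) prior on the task success rate, a 70% success threshold, and a 95% decision confidence.

```python
import math

def beta_cdf(t, a, b, steps=10000):
    # P(rate <= t) under a Beta(a, b) posterior, by midpoint integration.
    norm = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    dx = t / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * dx
        total += x ** (a - 1) * (1 - x) ** (b - 1) * dx
    return total / norm

def subjects_needed(outcomes, threshold=0.7, confidence=0.95):
    """Update a Beta(1, 1) prior after each subject (1 = task success,
    0 = failure); stop as soon as we are `confidence`-sure the success
    rate is below or above `threshold`."""
    successes = failures = 0
    for n, ok in enumerate(outcomes, start=1):
        successes += ok
        failures += 1 - ok
        p_below = beta_cdf(threshold, 1 + successes, 1 + failures)
        if p_below > confidence:
            return n, "not working"
        if 1 - p_below > confidence:
            return n, "working"
    return len(outcomes), "undecided"
```

With these assumed numbers, a site where every subject fails is flagged “not working” after just 2 subjects, while a site where every subject succeeds needs 8 subjects before it can be called “working” – the same direction of asymmetry the book reports.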

Pilot session observer guidelines are provided that include: what to do; what to say for noncommittal responses (prompting and probing); what to look for; and how to handle a variety of situations the observer might encounter (e.g., subjects who get stuck or give up). When conducting the sessions, the observer(s) should provide the following structure.

  1. Welcome – provide an in-brief. The in-brief is used to put the student at ease, tell them what to expect, and tell them to expect the worst – it’s not them, it’s the design.
  2. Give the pre-assessment, which assesses what they know before they start.
  3. Present the prototype. Remind the student to think aloud; remind them that you expect them to encounter problems and need them to verbalize continuously. Begin the user activities/tasks. Observe and record data on an observation sheet – typically one observation sheet per task.
  4. Administer the post-assessment (posttest − pretest = Kirkpatrick Level 2 evaluation).
  5. Administer the reactionaire (Kirkpatrick Level 1 evaluation).
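The “posttest − pretest” arithmetic behind step 4 is simple enough to sketch. The student IDs and scores below are invented for illustration; the book only says the posttest should test the same things as the pretest, so a simple difference per subject is all that is shown.

```python
# Hypothetical pre/post scores on the same 10-point assessment.
pretest  = {"s1": 4, "s2": 6, "s3": 3}
posttest = {"s1": 8, "s2": 9, "s3": 7}

def level2_gains(pre, post):
    """Kirkpatrick Level 2 evidence: per-student gain = posttest - pretest."""
    return {s: post[s] - pre[s] for s in pre}

gains = level2_gains(pretest, posttest)       # {'s1': 4, 's2': 3, 's3': 4}
mean_gain = sum(gains.values()) / len(gains)  # average gain across subjects
```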


REFLECTION

I get mixed messages from this guidance about how much content the paper prototype should contain. The authors say it should be a draft working copy that indicates the breadth and at least one full strand for the depth, but they also say it should allow the student to test the entire website, which implies it should contain everything. This whole process seems somewhat cumbersome and lengthy; developing first on paper and then a second time on the web seems less than efficient. My customers demand efficiencies to keep costs down and would question how much value was added by this process.

It should be noted that the DoD and the Navy both publish developer’s guides that provide fairly good guidelines for developing for the web. When coupled with ADA Section 508 requirements, we normally have a pretty thorough set of standards and conventions for all developers to follow. And that is not even counting the Data Item Descriptions and Contract Data Requirements Lists (CDRLs) that require detailed design packages for web development.

I hate to sound pessimistic, but for normal everyday use I would have trouble getting customer buy-in for paper prototyping. I already have trouble getting customer buy-in for the required 25% IPR proof-of-concept prototype, which is typically done on a representative sampling of the content. The customer would consider my request for access to a number of TPOP students for testing an unneeded inconvenience to the end user and an unnecessary cost to them – especially since there is no Navy instruction or guidance that requires it. (This is the Program Office’s perception, not the end users’. End users – Fleet Sailors – are normally very receptive to testing.)

2 comments:

  1. Hello Kevin,

    You provide a great overview of the reading and some useful insight into the real-world barriers we might encounter when it comes to paper prototyping.

    I do see how the descriptions related to the amount of content to include send a bit of a mixed message. I had a similar thought, but figured once I was to that point in the design process that I would be able to differentiate between essential and non-essential content. But now that I think about it, if the content is non-essential in achieving the desired performance outcomes, then is it really necessary to include it at all? You present a lot of food for thought.

    It is unlikely that I will be developing web-instruction with the Coast Guard (I anticipate my work will focus on face-to-face content), but I imagine a situation similar to the one you describe plays out in the other Services as well. It seems as though the designers would really need to quantify the benefits of paper-prototyping in terms that translate to time and cost savings if there is any hope of gaining support for the initiative.

    Thanks for a great post!

    -Kipp

    ReplyDelete
  2. Great minds think alike... I too wondered about the additional time element of this intensive paper prototyping. I've sketched out designs before and passed them around for feedback/comments, but never done such intensive prototype usability testing before development and design. Usually we do usability testing after getting the feedback on the initial design and developing one module online. (Honestly, I feel like I could throw together the HTML for a page faster than developing one of these paper prototypes. Cutting and pasting repetitive things is much faster than writing them out!!) However, I'm interested in giving this a try. AND if I can find some research/literature with statistics that show this kind of prototyping is more effective in producing successful and more user-friendly outcomes... then I'll use that justification to go to bat for demanding more time and resources for this additional step in the development. However, I do think I will need more justification to show it is needed, and not just a nice thing to do if one has unlimited time and resources.

    ReplyDelete