Monday, May 27, 2013

Effective Web Instruction: Handbook for an Inquiry-Based Process

SUMMARY:

Getting Started

The authors (Frick and Boling) start by listing some common pitfalls of instructional design methods, including:
  •  Designing and developing the instructional website without input from the users
  •  Little or no testing of the design or the site to be sure it works
  •  No record of the decision-making and standards used for the site
  •  Difficulty justifying the site’s design, or capricious redesign
  •  Finding problems only after the site has gone public
  •  Undergoing expensive repairs to a design after it is built
  •  Allowing undetected problems to pass through to users

The authors define an inquiry-based instructional design process as one in which the instructional designer (ID) makes empirical observations to answer questions and make decisions regarding the design and construction of the website.  (The American Heritage Dictionary defines empirical evidence or empirical data as (1) a source of knowledge acquired by means of observation or experimentation; (2) guided by practical experience and not theory.)  The authors follow with a discussion of the ADDIE (Analysis, Design, Development, Implementation, Evaluation) process of instructional design and the roles the ID can play in this model.

Chapter 1 Conducting the Analysis

The very first thing to do is identify the stakeholders and work with them to ensure the instructional goals are clearly defined.  At this stage, start considering the authentic assessment methods to be used and the indicators students will be required to exhibit to demonstrate competency or mastery.  The authors include a discussion of how observed behaviors are the indicators of goal achievement, not the learning itself.  Learning cannot be observed directly; only the products and actions resulting from learning can be seen and measured.

Learner Assessment – Start by identifying who the learners will be and determining their relevant characteristics.  It is important to determine what they already know about the material to be learned.  At the same time, the ID should gain as much knowledge as possible of the subject matter; it is very difficult (and inefficient) to accomplish valid and effective instructional design and development without knowing and understanding the subject matter.

Context Analysis – This is where you verify that e-learning and/or the web is the right method and medium for delivering the training.  In a nutshell, using e-learning should enhance the learning in some way that other methods would not, and you need to be able to explain these benefits.  Other resources may still be needed – e.g., an instructor’s guide and a trainee guide.  (I often create CBT/WBT products that are dual-purpose in that they can be hosted stand-alone or provided as computer-aided instruction (CAI) in a classroom.  As CAI, both an instructor guide (IG) and a trainee guide (TG) are required to support the training.)  Lastly, the medium should never be the focus of the analysis; the focus is the content and context of what is to be taught and the best way to provide the student a valid learning experience.

Chapter 2 Preparing for Testing a Prototype

According to Frick and Boling, the prototype should always be able to answer the following questions:
  1. Is the instruction effective?
  2. Are students satisfied with the instruction?
  3. Is the product usable by students?

The authors advocate using a Design – Develop – Test iterative cycle.  They also remind us to ensure the student target population (TPOP) has not already mastered the content to be provided and to focus on delivering new content for the TPOP to learn.  Start by developing a paper prototype to test on members of the TPOP, and make sure the testers are representative of the TPOP, as experienced testers may be willing to overlook small flaws in the design.  The first prototype should be paper-based, with a more mature computer-based prototype developed later if needed.  Use paper prototypes because:
  •  they are “hands-on”
  •  paper products are familiar for testers to work with
  •  they have the look and feel of a draft document

Testers are much more likely to comment on draft working documents than on polished documents that took a lot of time and effort to develop.

The paper prototype should convey the breadth and depth of the product.  It does not have to contain the entire project, but it should contain as much as possible: the full breadth at the topmost level, and at least one complete strand down to the deepest part.  Depth is determined by the number of mouse clicks required to get to the bottom, and each click in the strand should be represented by at least one page.  Do include all critical elements, even if only represented by a placeholder.  It is not necessary (and probably not wanted) to use finished graphics, etc.  Create the prototype on paper, organize it in a loose-leaf notebook, and give each page coded links representing the hyperlinks.
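
To make the breadth-and-depth idea concrete, here is a minimal sketch of how the prototype’s page structure could be modeled as a tree, with the deepest strand counted in clicks.  This is my own illustration, not something from the book, and all page names and link codes below are hypothetical:

    from dataclasses import dataclass, field

    @dataclass
    class Page:
        name: str                      # label written on the paper page
        link_code: str                 # coded link (e.g., "A1") standing in for a hyperlink
        children: list = field(default_factory=list)

    def depth_in_clicks(page):
        """Depth of the deepest strand, counted in mouse clicks from this page."""
        if not page.children:
            return 0
        return 1 + max(depth_in_clicks(child) for child in page.children)

    # Breadth at the topmost level: every major section appears, even as a placeholder.
    home = Page("Home", "A0", [
        Page("Lesson 1", "A1", [
            Page("Topic 1.1", "B1", [Page("Quiz 1.1", "C1")]),  # one complete strand
        ]),
        Page("Lesson 2", "A2"),   # placeholder page only
        Page("Resources", "A3"),  # placeholder page only
    ])

    print(len(home.children), "top-level sections;", depth_in_clicks(home), "clicks deep")

Running it prints the top-level breadth and the click depth of the deepest strand – a quick sanity check that the notebook covers the full breadth and at least one complete strand, as the book asks.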

DISCUSSION:

This will be a first, as I have never done a paper-based prototype before.  Normally, my requirement is to develop and present a working prototype at the twenty-five percent in-process review for the stakeholders to review and approve prior to full-scale production.  I am sure the whole team does mental prototyping; we even brainstorm during group meetings, which requires mental prototyping.  Below are some additional observations on various parts of the process.
Stakeholder buy-in.  As a personal note from somebody who has been doing this for many years: getting and keeping stakeholder buy-in from the earliest stages of a project is essential to successful project completion.  Stakeholder buy-in and participation are indicators of good communication, and good communication is itself an essential element of a successful project.
Goals vs. Objectives:  The meanings of these two terms depend on the system or model being used.  I provide a Goal Statement, a broad statement of what the student (competent performer) will be able to do as a result of the training or learning event.  The goal statement is supported by Terminal Learning Objectives (TLOs), which define skills-based tasks, and Enabling Learning Objectives (ELOs), which identify the knowledge, skills, abilities, attitudes, tools, resources, and standards (KSATRS) that must be acquired and assessed/tested.  All ELOs must map to one or more TLOs, and the TLOs must map to the Goal Statement.  The Goal Statement must in turn map to an organizational requirement – usually an organizational goal statement.
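The mapping rule above is simple enough to machine-check.  Here is a minimal sketch along those lines; it is my own illustration, and every identifier in it is hypothetical:

    # Traceability data: TLOs map to the goal; ELOs map to one or more TLOs.
    goal = "G1"
    tlo_to_goal = {"TLO-1": "G1", "TLO-2": "G1"}
    elo_to_tlos = {
        "ELO-1": ["TLO-1"],
        "ELO-2": ["TLO-1", "TLO-2"],
        "ELO-3": [],  # an orphan ELO that the check should catch
    }

    # Every ELO must map to at least one TLO, and every TLO to the goal.
    orphan_elos = [e for e, tlos in elo_to_tlos.items() if not tlos]
    unmapped_tlos = [t for t, g in tlo_to_goal.items() if g != goal]

    if orphan_elos or unmapped_tlos:
        print("Traceability gaps. Orphan ELOs:", orphan_elos,
              "Unmapped TLOs:", unmapped_tlos)
    else:
        print("All ELOs trace to TLOs, and all TLOs trace to the goal statement.")

The same check scales to a full curriculum: any ELO with an empty TLO list, or any TLO that does not trace to the goal statement, shows up immediately.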
The medium is not the focus:  The authors make the statement, “The medium should not have been the primary focus in the first place.”  The U.S. Navy Inspector General (NAVINSGEN) Report on Computer Based Training (2009) provides a very insightful discussion of, and recommendations on, the use of CBT, following a yearlong study undertaken after Navy CBT failed to achieve the goals and objectives of the learning.  This is an area where the Navy has had major problems for the past ten years and still struggles.
A word of caution on using iterative processes.  Many customers hear or see the word iterative and see a reason for you to keep coming back to the well for more funding.  I believe each step in the ADDIE model is iterative in nature, but I am careful not to use the word too often.  I have tried to sell stakeholders and customers on the value added by an iterative process, but many still say they cannot afford it and resist its use.

BIBLIOGRAPHY:

The American Heritage Dictionary of the English Language (4th ed.). (2000). Houghton Mifflin. Retrieved from http://www.ahdictionary.com/word/search.html?q=empirical&submit.x=41&submit.y=17

Naval Inspector General. (2009, March). Report to the Secretary of the Navy on Computer Based Training. Retrieved from http://www.corpsman.com/attachments/ig/cbt_ig_report.pdf

