Wednesday, May 29, 2013

Changes in Student Motivation During Online Learning

By Theodore W. Frick and Kyong-Jee Kim


SUMMARY

This article by Frick and Kim focuses on why there is such a high attrition rate among first-time online learners, and what we can do as instructional designers (IDs) to design and develop online courses that increase a learner’s motivation to learn (and to persist with the learning).  The authors first discuss what past studies revealed in Review of the Literature, and then conduct their own study in The Present Study.

REVIEW OF THE LITERATURE

Wlodkowski (1998) states that learning experiences must be highly motivating to foster the learner’s persistence to pursue lifelong learning.  In a review of current literature, Kim and Frick (2011) identify three factors that influence a learner’s motivation to learn: internal factors, external factors, and personal factors.


Internal Factors

This section begins by discussing Keller’s ARCS model of motivational design, whereby attention, relevance, confidence, and satisfaction are identified as key components that must be designed into all training courses. The authors also discuss confidence and self-efficacy as important, since self-efficacy plays a central role in a learner’s belief in their own ability to succeed (Bandura, 1997).  In reading this section, I found the authors’ findings and recommendations could be classified as either factors that increase motivation or factors that decrease motivation, as follows.


Factors that Increase Student Motivation
·         Computer/internet self-efficacy
·         Academic Learning Time (ALT), where learning is scaffolded so that the student is engaged at the proper level for success (the learning is neither too difficult nor too easy; it sits in the student’s zone of proximal development)
·         When ALT and First Principles occurred together, students were 3.6 times more likely to experience satisfaction with the course
·         The convenience, flexibility, and control found in online learning
·         Control over pace and timing

Factors that May Decrease Student Motivation
·         Cognitive overload, especially for first-time online learners
·         Going outside the student’s zone of proximal development
·         Students who felt they experienced ALT negatively were assessed as 10 times more likely to have a low degree of mastery
·         Lack of social interaction (high school students reported that the social interaction available in traditional classroom instruction is important; does its absence decrease motivation in online learning?)
·         Technical difficulties and communication breakdowns

Two further factors can work in either direction depending on their quality: the design of the human-to-computer interface and the level of interaction.


External Factors

External factors are those influenced by the environment in which the student takes the online learning.  Examples of external factors include learner support, technical support, and the instructional and organizational climate.


Personal Factors

While studies are not conclusive, some believe that if the ID matches the instructional strategies to the student’s learning styles, the learner will be positively motivated.  Past studies also attribute motivation to the student’s temperament, gender, and age.  Lastly, a prospective student’s past experiences with online learning, and the resulting perceptions of online learning, can also impact motivation.


PRESENT STUDY

The present study was conducted on self-directed e-learning (SDEL), in which adults take independent study courses via the web.  SDEL courses typically have the following characteristics:
·         No formal (or a less formal) enrollment process
·         No set schedule or timeline
·         Self-paced, with the pace established by the student
·         No instructor to interact with or motivate students
·         Little or no peer interaction
The purpose of the study was to determine how student motivation changes as students progress through a course.  Data were collected from 800 students using questionnaires.  Students were drawn from universities, the business sector, government, and nonprofits.  The surveys focused on three main areas:
·         Motivation when beginning the course
·         Motivation during the course
·         Change in motivation while taking the course
The types of courses taken and the time spent on them varied considerably; however, 94.2% of the respondents indicated that they took the online training because classroom training did not fit their schedules or was unavailable, or because online training was convenient and flexible (Kim & Frick, 2011).   Respondents reported a relatively neutral (flat) change in motivation, as follows:
·         40% reported no change in motivation
·         34% reported an increase in motivation
·         26% reported a decrease in motivation
The three factors that appeared most important in determining motivation at the outset of the training were the perceived relevance of the training, the student’s technology competence (self-efficacy), and the student’s age.  Two factors seemed to be the best predictors of motivation during the course: the belief that “e-learning is right for me” and the level of motivation held at the outset.  The best predictor of a change in motivation was the motivation experienced during the course; and, lastly, the best predictor of the student’s satisfaction with the course at completion was the change in motivation experienced during the course.


DISCUSSION

I have been a fan of Wlodkowski since using one of his books as an undergraduate that dealt with strategies for teaching adults, a book that I continue to use regularly to this day. The updated version of the book is Enhancing Adult Motivation to Learn: A Comprehensive Guide for Teaching All Adults by Raymond J. Wlodkowski (2008).  I highly recommend the book if you develop training for adults.
Are internal factors also intrinsic motivators, and external factors extrinsic motivators?  I suppose personal factors could be either.  I bring this up because I seem to recall reading (probably in Wlodkowski or Bandura) that intrinsic motivators (like relevance and inclusion) are more deeply ingrained and longer lived, and cause students to persist more than extrinsic motivators do.  Extrinsic motivators (such as rewards) tend to be shorter lived and do not necessarily result in student persistence, though they are good for spurring slumping motivation.
To me, the present study seems to suggest that motivation begets motivation.  If the student was motivated to begin the class, motivation was likely to continue as long as nothing went wrong, internally or externally. This also suggests to me that a properly motivated student is ours for the losing, if that makes sense.  In my experience, most adult students who come to SDEL of their own volition arrive intrinsically motivated.


BIBLIOGRAPHY

Bandura, A. (1997).  Self-efficacy: The exercise of control.  New York: Freeman.
Kim, K.-J., & Frick, T. W. (2011).  Changes in student motivation during online learning.  Journal of Educational Computing Research, 44(1), 1-23.
Wlodkowski, R. J. (2008).  Enhancing adult motivation to learn: A comprehensive guide for teaching all adults (3rd ed.).  Hoboken, NJ: Jossey-Bass.

Monday, May 27, 2013

Effective Web Instruction: Handbook for an Inquiry-Based Process

SUMMARY:

Getting Started

The authors (Frick and Boling) start by listing some common pitfalls of instructional design methods that include:
  •       The instructional website was designed and developed without input from the users
  •        Little or no testing of the design or the site to be sure it works
  •        No record of decision-making and standards used for the site
  •        Difficulty in justifying the site’s design or capricious redesign
  •        Finding problems after the site has gone public
  •        Undergoing expensive repairs to a design after it is built
  •        Allowing undetected problems to pass through to users

The authors define an inquiry-based instructional design process as one in which the instructional designer (ID) makes empirical observations to answer questions and make decisions about the design and construction of the website.  (The American Heritage Dictionary defines empirical evidence or empirical data as (1) a source of knowledge acquired by means of observation or experimentation; (2) guided by practical experience and not theory.)  The authors follow with a discussion of the ADDIE process of instructional design and the roles the ID can play within this model.

Chapter 1 Conducting the Analysis

The very first step is to identify the stakeholders and to work with them to ensure the instructional goals are clearly defined.  At this stage, start considering the authentic assessment methods to be used and the indicators students will be required to exhibit to demonstrate competency or mastery.  The authors include a discussion of how observed behaviors are the indicators of goal achievement, not the learning itself.  Learning cannot be observed; only the products and actions resulting from learning can be seen and measured.

Learner Assessment – Start by identifying who the learners will be and determining their relevant characteristics.  It is important to determine what they already know about the content to be learned.  At the same time, it is important for the ID to gain as much knowledge as possible of the subject matter; it is very difficult (and inefficient) to accomplish valid and effective design and development if you do not know and understand the subject matter.

Context Analysis – This is where you verify that e-learning and/or the web is the right method and medium for delivering the training.  In a nutshell, using e-learning should enhance the learning in some way that other methods would not, and you need to be able to explain these benefits.  Other resources may still be needed, e.g., an instructor’s guide and trainee guide.  (I often create CBT/WBT products that are dual-purpose in that they can be hosted stand-alone or provided as computer-aided instruction (CAI) in a classroom.  As CAI, both an instructor guide (IG) and a trainee guide (TG) are required to support the training.)  Lastly, the media should never be the focus of the analysis; the focus is the content and context of what is to be taught and the best way to provide the student a valid learning experience.

Chapter 2 Preparing for Testing a Prototype

According to Frick and Boling, prototype testing should always answer the following questions:
  1. Is the instruction effective?
  2. Are students satisfied with the instruction?
  3. Is the product usable by students?

The authors advocate an iterative Design-Develop-Test cycle.  They also remind us to ensure that the student target population (TPOP) has not already mastered the content to be provided, and to focus on delivering new content for the TPOP to learn.  Start by developing a paper prototype to test with the TPOP.  Make sure the testers are drawn from the actual TPOP, as experienced testers may be willing to overlook small flaws in the design.  The first prototype should be paper based, with a more mature computer-based prototype developed later if needed/required.  Use paper prototypes because:
  •        they are “hands-on”
  •        paper products are familiar for the testers to work with
  •        they have the look and feel of a draft document

Testers are much more likely to comment on draft working documents than on polished documents that clearly took a lot of time and effort to develop.

The paper prototype should convey the breadth and depth of the product.  The prototype does not have to contain the entire project, but it should contain as much as possible: the full breadth at the topmost level, and at least one complete strand down to the deepest part.  Depth is determined by the number of mouse clicks required to get to the bottom; each click in the strand should be represented by at least one page.  Do include all critical elements, even if only represented by a placeholder.  It is not necessary (and probably not desirable) to use finished graphics, etc.  Create the prototype on paper and organize it in a loose-leaf notebook, with each page containing coded links representing the hyperlinks.
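
The breadth-and-depth rule is essentially a tree-coverage check: every top-level section must appear, and at least one complete strand must reach the planned deepest level.  Here is a minimal sketch of that check in Python; the site map structure and all names are my own illustration, not anything prescribed by Frick and Boling:

    # Hypothetical paper-prototype site map: each key is a page, each value
    # holds its child pages; depth = number of mouse clicks from the top.
    planned_sections = ["module_1", "module_2", "module_3"]  # full breadth
    planned_depth = 3                                        # clicks to the bottom

    prototype = {
        "module_1": {"lesson_1a": {"practice_1a": {}}},  # one complete strand
        "module_2": {},  # placeholder page only; still counts for breadth
        "module_3": {},
    }

    def max_depth(pages):
        """Deepest click-path represented in the prototype."""
        if not pages:
            return 0
        return 1 + max(max_depth(children) for children in pages.values())

    def covers_breadth_and_depth(prototype):
        breadth_ok = all(s in prototype for s in planned_sections)
        depth_ok = max_depth(prototype) >= planned_depth
        return breadth_ok and depth_ok

    print(covers_breadth_and_depth(prototype))  # True for the sample above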

DISCUSSION:

This will be a first for me, as I have never done a paper-based prototype before.  Normally, my requirement is to develop and present a working prototype at the twenty-five percent in-process review for the stakeholders to review and approve prior to full-scale production.  I am sure the whole team does mental prototyping; we even do brainstorming during group meetings that requires mental prototyping.  Below are some further observations on various parts of the process.
Stakeholder buy-in.  As a personal note from somebody who has been doing this for many years: getting and keeping stakeholder buy-in from the earliest stages of a project is an essential element of successful project completion.  Stakeholder buy-in and participation are indicators of good communication, and communication is an essential element of a successful project.
Goals vs. Objectives:  The meaning of these two terms depends on the system or model being used.  I provide a Goal Statement, which is a broad statement of what the student (competent performer) will be able to do as a result of the training or learning event.  The goal statement is supported by Terminal Learning Objectives (TLOs), which define skills-based tasks, and Enabling Learning Objectives (ELOs), which identify the knowledge, skills, abilities, attitudes, tools, resources, and standards (KSATRS) that must be acquired and assessed/tested.  All ELOs must map to one or more TLOs, and the TLOs must map to the Goal Statement.  The Goal Statement must in turn map to an organizational requirement, usually an organizational goal statement.
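Because the mapping rule is mechanical, it is easy to check automatically.  A small sketch of that traceability check, using hypothetical objective IDs (the data and function names are mine, purely for illustration):

    # Every ELO must map to at least one TLO, and every TLO to the goal.
    goal = "G1"  # the goal, in turn, maps to an organizational requirement
    tlo_to_goal = {"TLO1": "G1", "TLO2": "G1"}
    elo_to_tlos = {
        "ELO1": ["TLO1"],
        "ELO2": ["TLO1", "TLO2"],
        "ELO3": [],  # orphan: maps to no TLO, so it should be flagged
    }

    def find_orphans(elo_to_tlos, tlo_to_goal, goal):
        """Return objectives that break the ELO -> TLO -> Goal chain."""
        orphan_elos = [e for e, tlos in elo_to_tlos.items()
                       if not any(t in tlo_to_goal for t in tlos)]
        orphan_tlos = [t for t, g in tlo_to_goal.items() if g != goal]
        return orphan_elos, orphan_tlos

    print(find_orphans(elo_to_tlos, tlo_to_goal, goal))  # (['ELO3'], [])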
The Medium is not the focus:  The authors make the statement, “The medium should not have been the primary focus in the first place.”  The U.S. Navy Inspector General (NAVINSGEN) Report on Computer Based Training (2009) provides a very insightful discussion and recommendations on the use of CBT, following a year-long study conducted after Navy CBT failed to achieve the goals and objectives of the learning.  This is an area where the Navy has had major problems for the past ten years, and one where it still struggles.
A word of caution on using an iterative process: many customers hear or see the word iterative and see a reason for you to keep coming back to the well for more funding.  I believe each step in the ADDIE model is iterative in nature, but I am careful not to use the word too often.  I have tried to sell stakeholders and customers on the value added by using an iterative process, but many customers still say they cannot afford it and resist its use.

BIBLIOGRAPHY:

The American Heritage Dictionary of the English Language (4th ed.). (2000). Houghton Mifflin. Downloaded from http://www.ahdictionary.com/word/search.html?q=empirical&submit.x=41&submit.y=17

Naval Inspector General Report to the Secretary of the Navy on Computer Based Training (March 2009).  Downloaded from http://www.corpsman.com/attachments/ig/cbt_ig_report.pdf


Sunday, May 19, 2013

Mager's Instructional Objectives


Summary

In his book Preparing Instructional Objectives: A Critical Tool in the Development of Effective Instruction (1997), Robert Mager provides a widely used and accepted methodology for writing and formatting instructional objectives.   Simply stated, to be effective, all instructional objectives need to contain three elements: a performance, a condition, and a criterion.  (Note: some instructional designers refer to the three parts as the behavior, condition, and standard; the terms are interchangeable with performance, condition, and criterion.)

The objective’s goal should be to cause a change in behavior on the part of the student; if it does not, the objective is either poorly written or unnecessary.  The very first step, done as part of the analysis phase, is to develop a list of tasks to be performed.  This is referred to as the Task Analysis and is where content for training is selected and sequenced.   From the derived task list, the three-part objective is written.  The objective should describe the outcome of the training, not the process for delivering it.  When writing the objective, the designer needs to make it as specific as possible; objectives that are too generic can be interpreted differently by different people, instructors and students alike.  When completed, the objectives will be used for selecting instructional strategies, methods, and media, and for developing tests and assessments.

The Three Parts of the Objective

Performance – Use an overt (visible) verb where the performance is easily seen and measured (e.g., write, run, repair).  For covert (invisible) verbs, the designer must include “indicator behaviors” that tell how the covert performance will be seen (or indicated), which should also aid measurability.  There are also what Mager refers to as “abstract” performance verbs, such as understand, value, and appreciate.  These too require an indicator behavior.

Condition – Use a condition statement as part of the objective to describe the conditions under which the student will have to demonstrate competency.
Criterion – The criterion tells “how well” the student must perform to prove competency.  The criterion must be realistic and measurable.  Sample criterion elements include speed, quantity (how many), accuracy, and quality.  The criterion must come from requirements (or standards).
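
Because the three parts are so regular, an objective can be sanity-checked mechanically.  Here is a rough sketch under my own assumptions; the verb lists are small illustrative samples, not Mager’s actual lists:

    # Illustrative verb samples only; these are not Mager's published lists.
    OVERT_VERBS = {"write", "run", "repair", "assemble", "list"}
    COVERT_OR_ABSTRACT = {"identify", "recall", "understand", "value", "appreciate"}

    def check_objective(performance, condition, criterion, indicator=None):
        """Flag missing parts, or a covert/abstract performance verb
        that has no indicator behavior attached."""
        problems = []
        if not (performance and condition and criterion):
            problems.append("missing one of: performance, condition, criterion")
        verb = performance.split()[0].lower() if performance else ""
        if verb in COVERT_OR_ABSTRACT and not indicator:
            problems.append("verb '%s' is covert/abstract; add an indicator behavior" % verb)
        elif verb and verb not in OVERT_VERBS and verb not in COVERT_OR_ABSTRACT:
            problems.append("verb '%s' not recognized; confirm it is observable" % verb)
        return problems or ["looks well-formed"]

    print(check_objective(
        performance="repair the power supply",
        condition="given a standard toolkit and the technical manual",
        criterion="within 20 minutes, with no assists"))  # ['looks well-formed']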

Mager also includes a chapter called Pitfalls and Barnacles, in which he discusses some of the problems he sees when reviewing objectives, such as objectives written without a performance as the main intent (often an indicator behavior is used as the main intent instead, which can be confusing and misleading).  Another example is what he refers to as “false givens,” where unnecessary or misleading information is included in the objective; this often includes information that describes the process or procedure.

Comment and Critique

Mager’s three-part objective has been around for a long time and is widely accepted as the industry standard.  (This book was first copyrighted in 1962 as “Preparing Objectives for Programmed Instruction.”)
I do have several critiques of things he does not discuss or reference.


  • He does not mention or discuss the gap analysis or content prioritization normally done in conjunction with the Task Analysis.  Not all tasks are trained; the gap analysis and content prioritization are used to determine which tasks are training-worthy and which are not, using factors such as task criticality, difficulty, and frequency.   Only after tasks have been selected for inclusion in the instruction can instructional strategies, methods, and media be selected, and the objectives written.
  • I don’t recall him discussing that all test and assessment items must be linked to an objective.  When discussing the performance part of the objective, he does not discuss the learning domains (cognitive, psychomotor, and affective) or the taxonomies that accompany each domain.  He also never mentions the extensive research (and verb lists) on appropriate performance verbs for each domain, research that is easily accessible to all instructional designers.
  • He does not talk about the need to evaluate and determine the need (or not) for task fidelity when writing the condition.  The need for task fidelity will help determine instructional strategies, methods, and media.


Saturday, May 18, 2013

5 Star Instructional Design Rating


Summary:

Merrill presents a 5 Star rating system for evaluating instruction to see whether it contains the first principles of instruction.  One star is given for each of the first principles used: real-world problem, activation, demonstration, application, and integration.  If a star is awarded, it is awarded at one of three levels based on the degree of criteria achievement: gold, silver, or bronze.  The author states that the rating scale is most appropriate for tutorial or experiential (simulation) instruction.  It is not applicable to reference material or isolated facts, including what the author calls “Tell and Ask” training, where information is presented and then followed by a paper-based knowledge test.  Merrill states that the 5 Star rating scale may not be appropriate for receptive instruction (e.g., lectures) or for exploratory, unstructured problem solving with little or no guidance.  He also states that the rating scale may not be appropriate for psychomotor tasks.  His rating scale includes 32 questions, categorized into the five areas above, for rating a course’s content to evaluate whether it contains the five first principles needed to achieve effective, efficient, and engaging instruction.

Critique/Findings:

It seems like a logical rating scale on which to build a rubric for evaluating instruction. I also believe it could be used to evaluate all methods of delivering instruction, not just electronic or computer/web-based instruction.   I assume it could be used for both formative and summative evaluation, depending upon when it is applied: formative when used to evaluate training as it is being designed, developed, and implemented; summative when applied to a course after it has been implemented.
I am not sure why the author states this scale may not be suitable for evaluating a psychomotor task.  If the psychomotor task is based on a real-world problem, I see no reason why it could not be used.    (If it is not a real-world problem, what is the justification for training it?)  He also does not provide any guidance on how to apply the rating scale other than to ask the questions.  What determines a gold, silver, or bronze star, or no star?  How do you assign an overall score or rating for the instruction as a whole?  I guess I will see tomorrow when I use it to rate an online boating safety course required by the state of Virginia.
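
Since Merrill leaves the roll-up unspecified, here is one hypothetical way it could be operationalized; the per-category question counts and the gold/silver/bronze cutoffs below are entirely my own assumptions, not Merrill’s:

    def star_level(yes_answers, total_questions):
        """Map the fraction of criteria met in one category to an award level
        (cutoffs are assumed for illustration; Merrill defines none)."""
        ratio = yes_answers / total_questions
        if ratio >= 0.9: return "gold"
        if ratio >= 0.7: return "silver"
        if ratio >= 0.5: return "bronze"
        return None  # no star awarded

    # Assumed split of the 32 questions across the five categories:
    # (questions answered "yes", total questions) per category.
    course = {"real-world problem": (5, 6), "activation": (4, 6),
              "demonstration": (6, 7), "application": (3, 7), "integration": (2, 6)}

    ratings = {cat: star_level(yes, total) for cat, (yes, total) in course.items()}
    stars = sum(1 for level in ratings.values() if level)
    print(ratings)               # per-category award levels
    print(stars, "of 5 stars")   # overall: count of categories earning any star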

Sunday, May 12, 2013

Prescriptive Principles for Instructional Design



Kevin Kennedy

R547 Computer-Mediated Learning

Week 1 Blog Post Summary & Discussion

May 12, 2013


The assignment for this week is to read, summarize, critique, and discuss the article Prescriptive Principles for Instructional Design by M. David Merrill, Matthew Barclay, and Andrew Van Schaack.


SUMMARY


The article is broken into three sections.  In the first section, the authors review the prescriptive principles identified by Merrill as the first principles of instruction.  For this article, instruction is defined as “a deliberate attempt to design a product or environment that facilitates the acquisition of specified learning goals.”   Merrill identified these first principles after reviewing an assortment of instructional design models and theories, finding these principles to be shared by all the models and theories he evaluated.  The first principles Merrill identified as being core to all of them are:

  • Task-centered approach
  • Activation principle
  • Demonstration principle
  • Application principle
  • Integration principle

These are the principles that Merrill identified as being required for developing effective, efficient, and engaging instruction.  The first principle (task-centered) isn’t really a principle so much as a context: design the learning around a real-world problem using a task-centered approach.  The remaining four principles (activation, demonstration, application, and integration) are interrelated (and overlapping) and describe the four phases used for engaging the student.  The authors go on to say that these first principles share three important properties:  (1) the success of any instructional program will be directly proportional to the degree to which first principles are implemented; (2) these first principles can (and should) be implemented in any delivery system and in any instructional architecture; and (3) these first principles are design oriented, not learning oriented (they describe how the content should be designed, not how the student will learn it).  The authors note that while these principles have been around for a long time, at least 200 years, they are not used very often.  The rest of this section elaborates on the first principles and includes a good discussion of the importance of using a structure, guidance, coaching, and reflection cycle.  The section ends with a discussion of the application of first principles and briefly examines several successful applications (e.g., NETg, Shell EP, Brigham Young University-Hawaii).


The second section of the article briefly discusses other instructional design principles that have been proposed as prescriptive models for instructional design by others. The following prescriptive instructional design models are summarized in terms of how they relate to the first principles of instructional design:

  • Principles for Multimedia Learning
  • Principles for e-Learning
  • Minimalist Principles
  • Cognitive Training Model
  • Instructional Principles Based on Learning Principles
  • 4C/ID Instructional Design

The last section discusses designing task-centered instruction, starting by illustrating and explaining the Pebble-in-the-Pond instructional design methodology, which provides a systems-based approach to using first principles for instructional design.


CRITIQUE & DISCUSSION


In a way, this looks like just another instructional design model.  I have been using Gagne’s Nine Events of Instruction for years for designing and developing curriculum; it is the model I learned in the Navy’s Instructor Training course some years back, and it seems to work very effectively.  I can also find Merrill’s First Principles of Instruction embedded in Gagne’s Nine Events.  (For step 4, my instruction is always task-based; the Task Analysis is the first step of the analysis phase, followed by the gap analysis, content prioritization, sequencing, etc.)  Gagne’s Nine Events are:

  1. Gain attention
  2. Describe the goal
  3. Stimulate recall of prior knowledge
  4. Present the material to be learned
  5. Provide guidance for learning
  6. Elicit performance / provide practice
  7. Provide informative feedback
  8. Assess performance
  9. Enhance retention and transfer


For Merrill, do the First Principles replace his Component Display Theory, or do they supplement it?  For the past ten years or so, the Navy has adopted Merrill’s Component Display Theory as the basis for its automated instructional design and curriculum development authoring tool.  When designing and developing content, the developer builds all the content around the four types of instructional content (facts, concepts, procedures, and principles) and the three types of performance (remembering, using, and finding) defined by Merrill in Component Display Theory.  All our Terminal and Enabling Learning Objectives are supported by a matrix like the one below (which we maintain in Excel).  I see First Principles embedded in this model too.


BASIC FORMAT USED BY CONTENT DEVELOPERS TO DEFINE CONTENT USING THE
COMPONENT DISPLAY THEORY

                  Facts       Concepts    Procedures  Principles
  Remembering     ________    ________    ________    ________
  Using           ________    ________    ________    ________
  Finding         ________    ________    ________    ________

(The developer fills in each cell with the content and assessment items for that content-type/performance-level pairing.)
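
One way to see how an authoring tool can enforce this matrix is to tag each objective with a content type and a performance level and validate the pairing.  A hypothetical sketch; the rule that facts are only remembered follows my reading of Component Display Theory, and everything else is illustrative rather than the Navy tool’s actual logic:

    # Hypothetical validator for Component Display Theory tagging.
    CONTENT_TYPES = {"fact", "concept", "procedure", "principle"}
    PERFORMANCE_LEVELS = {"remembering", "using", "finding"}

    def valid_cdt_cell(content, performance):
        """Accept only meaningful content/performance pairings; in CDT (as I
        read it), facts can only be remembered, not used or found."""
        if content not in CONTENT_TYPES or performance not in PERFORMANCE_LEVELS:
            return False
        if content == "fact":
            return performance == "remembering"
        return True

    print(valid_cdt_cell("fact", "using"))       # False
    print(valid_cdt_cell("procedure", "using"))  # True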







Now, with this said, I am all for trying new and innovative ways of doing things, but I also want to be careful not to fix something that is not broken.

REFERENCE


Merrill, M. D., Barclay, M., & Van Schaack, A. (2008). Prescriptive principles for instructional design. In AECT handbook (pp. 173-184).