Handy Dandy Elearning Guidelines

Finding the Guiding Light

Elearning guidelines, I have found, are very handy things to have as a reference in your back pocket. People in the field have very kindly come together to formulate these guidelines to remind us of all the factors that should be part of good elearning design.  Even better, they can act as benchmarks for evaluation questions.  As our evaluation project involves formative evaluation partly focused on aspects of design, we used the Elearning Guidelines for New Zealand as a starting point to identify some of the areas relevant to our objectives.  These were then honed down and adapted to align with our evaluation objectives.
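
To make that honing-down process a little more concrete, here is a minimal sketch of how a long list of guidelines could be filtered against an evaluation focus. The guideline texts and tags are invented for illustration, not quoted from the Elearning Guidelines for New Zealand.

```python
# Purely illustrative: the guideline texts and tags below are made up,
# not quoted from the Elearning Guidelines for New Zealand.
guidelines = [
    {"text": "Learners know where to get help with the technology", "tags": {"support"}},
    {"text": "The course design is informed by learner feedback", "tags": {"design"}},
    {"text": "Content displays properly on the delivery platform", "tags": {"design", "access"}},
]

evaluation_focus = {"design"}  # our project looks at aspects of design

# Keep only the guidelines that touch on our evaluation objectives,
# then treat each one as a candidate benchmark question.
shortlist = [g["text"] for g in guidelines if g["tags"] & evaluation_focus]
for guideline in shortlist:
    print("Benchmark question: does the course meet this? ->", guideline)
```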


As you can contribute your own thoughts to these guidelines, they are a great way to generate conversations about what constitutes quality elearning, especially in the face of changing platforms of delivery.

Avoiding Getting Tripped Up

Dealing With The Elephants…


This week we (my student buddy and I) started putting together an evaluation plan for our evaluation project. The thing about evaluations is that if you don’t rein them in right from the start, there is always a risk that they become ‘bigger than Ben Hur’.  I’ve been putting some thought into making sure this doesn’t happen to us.  Firstly, we have to be realistic about whether the scope of the project is doable within the time frame allocated.  There is nothing worse than trying to cover all the bases and ending up not covering any of them very well because you have run out of time.  In my experience evaluation timelines don’t always behave and can easily blow out.  Someone once told me that a good rule of thumb is to allocate a time frame for a task, then double it!


Scope creep is another culprit, usually sneaking up on you when you’re not looking.  It happens easily: say a really interesting part of the evaluation takes your attention, or an unexpected outcome crops up that you want to follow up further…and there you go, scope creep. We need to remember to stay on the straight and narrow and deflect any temptation to follow other paths (for now).


Flabby objectives can make you feel lost and a bit overwhelmed.  It is like making a dress: if you are not precise when cutting out the pattern, the dress will be skew-whiff by the time you finish.  For us that means making sure our aim and objectives are bang on right from the start, so we are clear about what we are trying to evaluate and why.

Types of Evaluation

Health Promotion vs Instructional Design Evaluation

A number of years ago, I was lucky enough to conduct a needs assessment looking at how asthma was addressed in primary schools.  To guide me I used the ‘Program Management Guidelines for Health Promotion’ put together by the NSW Health Department.  It had the familiar components of many evaluation models: needs assessment, planning, implementation and evaluation, but also added a component of sustainability (surprisingly absent from many Instructional Design (ID) models).  Health promotion programs can span different settings, and the emphasis is on population groups and on collaboration across organisations, professions and community groups.  A course could be just one component of a program.

ID is about courses of instruction (of course!) and, to me, it differs in that it is likely to be used within a single organisation, concentrating on a program area or course within that organisation, and is focused on the individual.

This requires a reorientation of my concept and experience of evaluation.

Using An Instructional Design Model

I have chosen to use the Reeves and Hedberg ID model for my evaluation project.  It has six components:
1. Review
2. Needs Assessment
3. Formative Evaluation
4. Effectiveness Evaluation
5. Impact Evaluation
6. Maintenance Evaluation

Each of these components has a specific purpose, with corresponding activities, procedures and tools appropriate to it (some of which were highlighted by Duignan in my last post).  My evaluation project will be specifically looking at the formative evaluation aspect of this ID model.
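
As a way of keeping that scope in front of us, here is a tiny planning sketch (my own, not something prescribed by Reeves and Hedberg) that simply lists the six components and flags the one our project covers.

```python
# A small planning aid: list the six components of the Reeves and Hedberg
# ID model and mark which one this evaluation project is scoped to.
# The layout is my own sketch, not part of the model itself.
ID_MODEL_COMPONENTS = [
    "Review",
    "Needs Assessment",
    "Formative Evaluation",
    "Effectiveness Evaluation",
    "Impact Evaluation",
    "Maintenance Evaluation",
]

PROJECT_FOCUS = "Formative Evaluation"  # our evaluation project's scope

for component in ID_MODEL_COMPONENTS:
    marker = "  <-- our project" if component == PROJECT_FOCUS else ""
    print(f"- {component}{marker}")
```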

Formative Evaluation

It is not often that a course is spot on the first time it is created.  The purpose of formative evaluation is to improve a course, ensuring that the kinks and bugs are ironed out before the final version.  Ideally it should happen early in the piece (and not only once) so that information can be fed back into the process.  Formative evaluation activities span program design, planning, development and implementation.

Reeves and Hedberg (2003) highlight the main factors associated with formative evaluation, and the questions to consider for each:

  • Functionality: does the product work as designed?
  • Usability: can the intended learners actually use the program?
  • Appeal: do they like it?
  • Effectiveness: did they learn anything?

Each of these factors can be broken down further depending on what is to be evaluated.  For example, usability may include features of the interface such as the speed of loading, how it looks and how easy it is to get around.
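
To make that breakdown a bit more concrete, here is a rough sketch of how the four factors might be turned into a checklist with example sub-criteria and some made-up reviewer ratings. The factors come from Reeves and Hedberg (2003); the sub-criteria (apart from the usability ones mentioned above) and the ratings are hypothetical.

```python
# Sketch of a formative evaluation checklist. The four factors are from
# Reeves and Hedberg (2003); the sub-criteria and ratings are invented
# examples and would be tailored to whatever course is being evaluated.
checklist = {
    "Functionality": ["Works as designed", "No broken links or errors"],
    "Usability": ["Speed of loading", "How it looks", "Ease of getting around"],
    "Appeal": ["Learners like it", "Learners would use it again"],
    "Effectiveness": ["Learners met the learning objectives"],
}

# Hypothetical ratings (1-5) gathered from a handful of reviewers,
# keyed by sub-criterion.
ratings = {
    "Speed of loading": [4, 3, 5],
    "Ease of getting around": [2, 3, 2],
    "Works as designed": [4, 4],
}

# Average the ratings per factor to see where the course needs the most work.
for factor, criteria in checklist.items():
    scores = [score for c in criteria for score in ratings.get(c, [])]
    if scores:
        print(f"{factor}: {sum(scores) / len(scores):.1f} average over {len(scores)} ratings")
    else:
        print(f"{factor}: no ratings collected yet")
```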

Linking It All Together

Evaluation Terminology

Working Out Evaluation Lingo

Matching The Old With The New

Evaluation spans many fields.  I have come across it in clinical nursing, health promotion, research and teaching.  Although the underlying premise in all these disciplines is to ‘do things better’, evaluation terminology can change depending on who is doing the talking!  Above is a list of all the terms I have come across over the years, which basically all describe just four (or three, or five, depending on which model you use) phases of the evaluation cycle.  But before getting into the nuts and bolts of different types of evaluation, I need a quick reminder of where ‘evaluation types’ slot into the bigger picture.


Conceptual Levels of Evaluation

So what are the macro parts of the evaluation jigsaw and how do they fit together?  According to Reeves and Hedberg (2003), various approaches to evaluation are underpinned by different and sometimes competing paradigms (or schools of thought) and values.  Reeves and Hedberg point to the ‘Eclectic-Mixed-Methods-Pragmatic Paradigm’ as the one most utilised in Instructional Design.  This is due to its ability to deal with complexity by borrowing philosophies from other paradigms, and to its focus on the practical through using the tools or approaches that are best for the job and that support triangulation of methods. Duignan (2001) further categorises evaluation into approaches, purposes (types), methods and designs, and gives examples of what types of activities fit within each of these:

  • Approaches
  • Purposes (Types)
  • Methods
  • Designs

Deciding on which pathway is best to conduct an evaluation isn’t as easy as a ‘pick and mix’ from the variety of activities within each of these aspects of evaluation.  Philosophical values, the context and practical considerations such as time, money and people can lead to debate about which choices are most effective and appropriate.

References:

Reeves, T. & Hedberg, J. (2003).  Evaluating Interactive Learning Systems.  Englewood Cliffs, NJ:  Educational Technology Publications.


Duignan, P. (2001).  Introduction to Strategic Evaluation: Section on Evaluation Approaches, Purposes, Methods and Designs.  Retrieved from http://www.parkerduignan.com/se/documents/104f.html
