Looking Back…

The Road Travelled

I started the course with an understanding of evaluation that came from another discipline and from many years ago.  For some reason I thought evaluation would have different features specific to the realm of education and instructional design.  In fact I found the principles were universal.  What I did learn related to the different ID models, I got a reminder of the importance of evaluation questions, and the formative feedback helped me in the area of report writing.

I found the experience of working totally online with someone else incredibly valuable.  This is the way people will work in the future (or already are) and I was glad to have a chance to give it a go.  Thank you Rosanne for making this ‘first’ experience a pleasant one.  I think, as in real life, people work in different ways and I was fortunate that Rosanne and I worked in a similar way.  We both had a commitment to communicate and to complete our work in a timely manner, and I think we talked about the things that mattered instead of sweating the small stuff.  That generated trust and respect.

I also thought it was pretty cool to use different software like Audacity and Google Docs.  Google Docs was great in that both Rosanne and I could work on the document at the same time and talk synchronously within the document about points, or in a box on the side.  Great!  The downside was we had problems with formatting when we tried to convert it to another file format, and Rosanne found it slow going on her work computer.  But I have heard you can do documents in Hotmail without such problems and I’m keen to give that a go in the future.

All in all, the thing that I keep in the back of my mind about evaluation is that it is like statistics – it can sometimes look better than it is depending on how it is done, interpreted and presented.  With evaluation you always have to read it (and conduct it) with a critical eye and look at how it was done.  Shoddy processes lead to shoddy results.

In terms of contribution, during this last phase of the evaluation I worked on the discussion, conclusion and recommendations.  We both edited each other’s work and formatted the report.  I also contributed feedback on Sandra and Pam’s final report on the discussion forum.

Cheers!  I’m going to have some chocolate… whoops, I mean a carrot.

 


On the Home Straight…

Like an old nag picking up the pace when the stable comes into sight, I too am dreaming of the reward of a crunchy carrot.  Rosanne and I have got our findings and discussion to Bronwyn for formative feedback.  My time was spent writing up the key informant interview and arranging some of the findings.  I also put the discussion together in Rosanne’s absence and Rosanne added to it on her return.  I also gave feedback to Alagaes and co. on their findings, and to Pam and Sandra on their discussion.

On writing these two sections, I had to remind myself about their purpose to avoid blurring them at the edges.  Here is how I see them, although I could be wrong – I’ll tell you after the formative feedback…

  • Findings:  tell it like it is and only how it is – don’t be tempted to start on the ‘why’ bit yet
  • Discussion:  pull out the bits that are significant and make sure you have the aims of the evaluation in mind when writing them up.  You can put your own spin on it – but don’t be too indulgent, and back it up.  This is also the chance to tell everyone what didn’t go so well and how this has skewed things.
  • Conclusion:  Highlight all the juicy bits
  • Recommendations: keep it practical and do-able and link it to the evaluation findings

An evaluation guy, Michael Quinn Patton, interestingly said that he found recommendations the hardest part of evaluation to teach.  He reckons you can be great at picking up the weaknesses and strengths of a program through analysis of the data, but that doesn’t necessarily make you good at being a futurist.  “Evaluations can help you make forecasts, but future decisions are not just a function of data.  Making good, contextually grounded, politically savvy and do-able recommendations is a sophisticated skill.”

http://www.idrc.ca/en/ev-30442-201-1-DO_TOPIC.html

Good luck folks with the final lap.

That’s Life

Onwards and Upwards! Cue the Trumpets

We have had a few logistical challenges over the last few weeks.  Firstly, our interview with our key informant and the student survey were delayed, and the expert reviewer is on bereavement leave.  Added to that, I have been away for a week and my buddy is away for 10 days.  But that is what happens when life happens, isn’t it?  You have to expect it, and if you don’t build in time for ‘blow-outs’ you might find yourself sunk by events outside your control.

Once we managed to get things teed up, I conducted the key informant interview over Skype using Audacity to record the audio.  Having never used Audacity, I was nervous about losing the audio if I made a mistake – I didn’t have another backup.  It actually went fine and I could see a whole heap of applications for this audio software.  I found it an interesting process.

Done the Cycle… Swim and Run to Go

I started training for a triathlon this week.  It sounds spectacular when you say it like that, but it’s just a mini one – a goal to help get the fitness levels up.  I went out for my first cycle last weekend with a training group that meets regularly and caters for beginners like me.  The website said they start with 10 km.  We went (in wind and rain) for 32 km.  All I’ll say about it is that today I can’t sit down.

Evaluation can be a bit like that.  You think you’re doing 10 km and end up doing 32 km with a bit of wind and rain thrown in.  Our current southerly is that we have been unable to contact the lecturer to organise an interview or to survey her students.

When we do eventually get that underway, our next front will be the logistical challenge of transcribing and analysing the data from the interview.  We have mitigated this somewhat by having pre-determined categories.  It does, however, take skill not to read too much into the data.  This is muddied further by only having two of us to validate our findings.  We will have to be mindful of not bringing our own bias to the table.  Here is a general guide I found about conducting semi-structured interviews: http://www.sswm.info/sites/default/files/reference_attachments/LAFOREST%202009%20Guide%20to%20Organizing%20Semi%20Structured%20Interviews.pdf
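For anyone wondering how pre-determined categories might work in practice once the interview is transcribed, here is a rough sketch in Python.  It is purely illustrative – the categories, keywords and excerpts are made up for the example, not our actual coding scheme.

    # Purely illustrative: tally interview excerpts against pre-determined categories.
    # The categories and keywords are made-up examples, not our actual coding scheme.
    from collections import Counter

    categories = {
        "navigation": ["navigate", "find", "menu", "link"],
        "workload": ["time", "hours", "busy", "load"],
        "interaction": ["discussion", "forum", "feedback", "peers"],
    }

    def code_excerpt(excerpt):
        """Return the categories whose keywords appear in an excerpt."""
        text = excerpt.lower()
        return [cat for cat, keywords in categories.items()
                if any(word in text for word in keywords)]

    # A transcript split into rough excerpts (one per answer or topic shift).
    transcript = [
        "I found it hard to find the assessment information from the main menu.",
        "The discussion forum was where I got most of my feedback from peers.",
    ]

    tally = Counter(cat for excerpt in transcript for cat in code_excerpt(excerpt))
    print(tally)  # e.g. Counter({'navigation': 1, 'interaction': 1})

Of course a keyword match is no substitute for reading each excerpt in context – it just shows how having the categories set in advance keeps the analysis focused and makes it easier for two people to check each other’s coding.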

The evaluation plan is finished and I have sent it directly to Bronwyn (a bit too much spam on this site).  Rosanne has attached it to her blog if you wish to view it.

In terms of feedback, I have contributed to Pam and Sandra’s plan, and to Fred’s.

Working It All Out

The Evaluation Plan

The evaluation plan is nearly tucked up in bed… and just like a fatigued parent at the end of the day, quietly I’m ready to have a lie down and a bit of a snooze myself.  Suffering from a lack of vitamin D, I’ve been tapping away on the introduction, rationale, aim, objectives, decisions, evaluation questions, sample and instrumentation, limitations, logistics and timeline, and budget.  Rosanne, who has also resorted to taking vitamin D supplements, has kindly completed the other sections.  We have reviewed and given feedback on each other’s work, and I’m currently putting the evaluation plan together in Word format for marking and getting the hot water bottle ready.

Working in a Group Online

I think Salmon (2004) is right on the button when she says that people need space to form an online identity to engage in elearning.  I’ve found this quite fascinating.  Not quite multiple-personality material, but I think there is a difference between online and f2f identities.  I have found I take on similar roles in an online group to those I would take on in a f2f group (I’m one of those annoying people who keep people on task, as Rosanne can attest).  But in person, I would listen a lot, chip in when needed and wrap my comments up in cotton wool so as not to seem too bossy.  Online I can’t do that.  I have to put my comments and ideas out there.  And smiley faces don’t quite cut it.  And a lot can be read into a short comment or no comment.  And how do you handle things if you disagree (something I haven’t experienced yet)?  I’ve come to the conclusion that, just as in a f2f group, it would be good to have ‘group rules’ that are agreed upon at the start.  I guess it is similar to introducing netiquette to a new group of students.  I imagine it would make things clearer and avoid misunderstandings.

Salmon, G. (2004).  All Things in Moderation.  Retrieved from http://www.atimod.com/e-moderating/5stage.shtml

Would You Like A Sample With That?

Yes please, a nice robust sample with a lovely rationale on the side.  Easy to serve up, right?  But I’ve been reading some scary articles that tell me that if I don’t get the sample right, all that blood, sweat and tears agonising over evaluation questions and methodologies is for nothing.  Apparently using a clunky, ill-fitting sampling method is a bit of a death knell.

As far as types of sampling go, people who know about these things tend to like probability sampling.  This is because the samples are randomised, which means the results are more likely to be generalisable to other groups.

But in our evaluation we’re not going to do that.  Instead we are going the non-probability route.  Non-probability sampling doesn’t involve randomisation.  We’ve chosen this because we are interested in all of the students enrolled in the course we are evaluating.  Having this common characteristic means it is a purposive sample, in contrast to, say, an accidental sample where you select people who just happen to come along.  This path does have disadvantages however.  Part of the drawback of non-probability sampling is sometimes the sheer numbers involved, but as we will use SurveyMonkey, its ease of implementation and collation takes care of that.  Another drawback is self-selection bias.  Even though all students can complete our survey, not all may choose to.  So the number of respondents and who responds (for instance, will there be a subgroup such as all female respondents?) will be important.  We will have to take this into account when we are analysing the results.
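To make the distinction a bit more concrete, here is a tiny sketch in Python.  The class roll, sample size and respondent count are all made up for illustration – it just shows the difference between drawing a random (probability) sample and purposively inviting everyone enrolled, and why the response rate matters once self-selection enters the picture.

    # Illustrative only: a made-up roll of enrolled students, not real data.
    import random

    enrolled = [f"student_{i:02d}" for i in range(1, 26)]  # 25 enrolled students

    # Probability sampling: every student has an equal chance of selection,
    # so results generalise more readily to the wider group.
    probability_sample = random.sample(enrolled, k=10)

    # Purposive (non-probability) sampling: we deliberately invite everyone
    # enrolled in the course, because that shared characteristic is what matters.
    purposive_sample = enrolled

    # Self-selection bias: not everyone invited will respond, so the response
    # rate (and who the respondents are) matters when interpreting the results.
    respondents = 14  # hypothetical number who complete the survey
    response_rate = respondents / len(purposive_sample)
    print(f"Response rate: {response_rate:.0%}")  # 56%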

Have we got it right?  Or will the bell begin to toll…

Enee Meenee Minee Mo

If only selecting methodologies were so easy.  Knowing what information you need can help in selecting which method of gathering information suits best.  Our evaluation questions slant towards perceptions, asking why rather than what, so using questionnaires and interviews that focus on a person’s view of the subject at hand makes sense.  In an ideal world we would first conduct a survey with students, and those issues that rise to the top could then be looked at in more detail in a focus group.  Time and distance constraints, however, mean that this won’t be possible.  So we are going to take the approach of gathering perceptions from different quarters, namely the students (survey), the lecturer (interview) and an expert (looking at content).  The rich mix of data that we gather will hopefully direct us to the areas of course design that could be improved.

Centre for Social and Health Outcomes Research and Evaluation  (2007).  Easy Evaluation Workshop.  Massey University, Palmerston North.