Reflection on Online Learning Meta-Analysis

Consider these questions as you formulate a post for this week. (These questions are intended to stimulate your thinking as you reflect on the US Department of Education Meta-analysis and summarize what was most important/useful to you. It’s not necessary—or even desirable—to answer each question directly in your post.)

  • How effectively was the meta-analysis conducted? Was it thorough? Objective? Was the study large enough to be significant? What can you learn from the methods employed here?
  • In what ways were your instincts confirmed (i.e., “Well, of course that’s an important factor; I knew that!”)?
  • What surprised you about the findings?
  • What can you take away from this report as a list of best practices?
  • Can you tie best practices gleaned here back to the underlying theories we discussed last week?

 

My big take-away from reading this report is that I’m optimistic about the future of education. Isn’t it great that the researchers had such a difficult time finding studies that had much in common! That means educators are trying different things; they aren’t all stuck on one kind of teaching. They are experimenting with various types of technology and interaction, passive and active learning, short durations and typical semester durations, synchronous and asynchronous formats, blended, hybrid, Heinz 57…. All the while, many of the studies conflicted with each other because there are so many variables that a cut-and-dried answer seems impossible.

The researchers did a fantastic job of trying to identify all of the variables. So many variables. I got quite lost in the statistical and quantitative explanation, which seemed to go on and on but also seemed quite necessary in order to arrive at any kind of result. I also appreciated all of the warnings that were included, like the fact that the research pool for K-12 studies was extremely limited (p. ix). There’s a big difference between K-12 learners and adult learners in motivation, attention span, level of interactivity, and initiative, not to mention very different life circumstances. Another element that seemed important is that the courses in the pool were delivered over very different durations: “19 involved instructional time frames of less than a month” (p. xiii). That is a big enough difference from a regular semester of 12-15 weeks that it might have a significant effect on learning behaviors when the two are compared. Toward the end of the report comes another warning: in most of the studies, the researcher was also the teacher of the course being studied, which certainly leaves room for bias or misdirection (p. 49).
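For anyone else who got bogged down in that statistical section, the core number the report keeps returning to is a standardized effect size. As I understand it (this is my plain-language paraphrase, not the report’s exact formula), it works out to roughly:

effect size = (mean outcome of the online/blended group − mean outcome of the face-to-face group) / pooled standard deviation

So a positive effect size favors the online or blended condition, a negative one favors face-to-face, and expressing everything in standard-deviation units is what lets the researchers average results across studies that used completely different tests and scales.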

This statement surprised me: “the addition of synchronous communication with peers is not a significant moderator of online learning effectiveness” (p. 28). In my experience, adding some kind of synchronous computer-aided communication offers the opportunity to ask questions and get immediate answers, a chance to create and build upon classroom community, and a way to feel like there really is a teacher and other students sharing the experience with you. I do believe it might be a student preference, or might make a student like a course better. However, I am not positive that it has any effect on actual student learning outcomes, unless it provides students the motivation to keep up with the rest of the class and actually finish. So this leads me to wonder whether student retention was considered as a variable. I’ve often heard that completion rates are 20-25% lower for asynchronous online courses than for fully F2F courses. That would seem to be a significant enough variable to influence success.

I was also surprised by the statement that media didn’t seem to make a significant difference: “In summary, many researchers have hypothesized that the addition of images, graphics, audio, video or some combination would enhance student learning and positively affect achievement. However, the majority of studies to date have found that these media features do not affect learning outcomes significantly” (pp. 40-41). There are so many disciplines that I think truly benefit from additional multimedia: the sciences, math, the many social science disciplines where you might be evaluating or simulating behavior, languages, art, and computer technology, just to name a few. Screencasting how to perform an academic search seems way more effective than reading step-by-step text, right? A different set of studies did suggest that interactive media is more beneficial than static media over which the student has no control (p. 48). I wonder whether this observation is related to the timing of the review, and whether media built with current technology plays a greater role than it did in 2008.

I think this will make a terrific reference for finding articles to support my work as an Instructional Designer. The narrative section, in particular, will be a really good, concise resource.
