Tag Archives: #altc2009

iTitle: Any flv/mp4 will do

Unfortunately I won’t be able to attend ALT-C this year and will have to muddle on as a remote delegate, primarily surfing the conference twitter stream.

Brian Kelly posted about the Use of Twitter at the ALTC 2009 Conference last year and, by all accounts, if ALT are able to video stream the keynotes again, combining these two channels should make it practically like being there (but without the lunch queue ;-).

In Brian’s original post I noticed he mentioned the Twapper Keeper service (perhaps the first mention of it on his blog) and that he had created a notebook for #altc2009. Having missed Martin Bean’s and Terry Anderson’s keynotes, and wanting to gear myself up for ALT-C 2010, I thought I’d see if I could relive the keynotes with the preserved twitter stream using iTitle.
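
The general idea is straightforward: each archived tweet carries a timestamp, so if you know when the video started you can offset every tweet against that start time and render it as a subtitle cue. Below is a minimal sketch of that idea in Python – it is not iTitle’s actual code, and the CSV column names and timestamp format are assumptions about the Twapper Keeper export.

```python
import csv
from datetime import datetime

def to_srt_time(seconds: float) -> str:
    """Format an offset in seconds as an SRT timestamp (HH:MM:SS,mmm)."""
    h, rem = divmod(int(seconds), 3600)
    m, s = divmod(rem, 60)
    return f"{h:02}:{m:02}:{s:02},000"

def tweets_to_srt(csv_path: str, video_start: datetime, out_path: str,
                  display_secs: int = 10) -> None:
    """Write one SRT cue per tweet, offset from the video start time."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        tweets = sorted(csv.DictReader(f), key=lambda row: row["created_at"])
    cue = 0
    with open(out_path, "w", encoding="utf-8") as out:
        for row in tweets:
            sent = datetime.strptime(row["created_at"], "%Y-%m-%d %H:%M:%S")
            offset = (sent - video_start).total_seconds()
            if offset < 0:
                continue  # tweet posted before the keynote started
            cue += 1
            out.write(f"{cue}\n"
                      f"{to_srt_time(offset)} --> {to_srt_time(offset + display_secs)}\n"
                      f"{row['text']}\n\n")

# Example (times are illustrative, not the real keynote schedule):
# tweets_to_srt("altc2009_archive.csv", datetime(2009, 9, 8, 9, 30), "keynote.srt")
```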

Knowing the twitter archive was available, the next step was to see if I could find the video. On the official ALT-C 2009 keynotes page I saw they had the videos hosted on blip.tv. Unfortunately this wasn’t one of the video hosting sites currently supported by iTitle. This isn’t the first time I’ve had this problem, as I had to manually tweak the pages for the JISC10 Conference Keynotes with Twitter Subtitles. Rather than having to keep tweaking pages, a simple solution was to let the user define a URL for where a video is hosted (which works well with blip.tv as it gives direct links to videos in .flv and .mp4 format). So here are the videos with tweets (NB the jump navigation only works for loaded parts of the video):

Martin Bean
Vice-Chancellor of the Open University



iTitle: Martin Bean's Keynote | blip.tv: Martin Bean's Keynote

Michael Wesch
Assistant Professor of Cultural Anthropology at Kansas State University, USA


iTitle: Michael Wesch's Keynote | blip.tv: Michael Wesch's Keynote

Terry Anderson
Professor and Canada Research Chair in Distance Education at Athabasca University, Canada - Canada's Open University


iTitle: Terry Anderson's Keynote | blip.tv: Terry Anderson's Keynote


For the next post in my ALT-C series I’m going to highlight a session I didn’t actually attend, but immediately regretted missing when comments started filtering in on twitter.

The session was based around the paper by Rodway-Dyer, Dunne and Newcombe from the University of Exeter, which summarises a study of audio and visual feedback used in two 1st year undergraduate classes. Click here for the paper and abstract.

Comments I picked up on this paper via twitter appeared to show audio feedback was not well received. Issues highlighted were:

  • the finding that “76% of students wanted face-to-face from a tutor in addition to other forms of feedback” [@adamread, @JackieCarter]
  • students found receiving negative comments by audio harder than in writing [@adamread, @ali818, @narcomarco]. This is still open to debate, though, as @gillysalmon said that the “duckling project at Leicester has found human voice easier to give negative feedback by audio than text”

Obviously there are issues with making assumptions based on a few 140 character tweets, and it should be noted that the authors conclude that “overall, it seems that there is considerable potential in using audio and screen visual feedback to support learning”, although students did express concerns in a number of areas.

Having had a chance to digest the paper, the question I’m left with is how much of the negative experience was a result of the wider assessment design rather than of the use of audio feedback in itself. For example, reading the focus group discussions for audio feedback in geography I noted that:

  • students were not notified that they would be receiving audio feedback;
  • despite the tutor’s best attempts, students hadn’t engaged with the assessment criteria; and
  • this was the first essay students had submitted at university level and they were unclear about the expected standards.

Similar issues to these were addressed in the Re-Engineering Assessment Practices (REAP) project, which produced an evolving set of assessment principles. Principles which could be successfully applied to the geography example might be:

Help clarify what good performance is – this could be achieved in a number of ways, including creating an opportunity for the tutor to discuss the criteria with students, or perhaps providing an exemplar of previous submissions with the associated audio feedback.

Provide opportunities to act on feedback – as this was the students’ first submission, providing feedback on a draft version of their essay would give them the chance to act on it (it’s not surprising that students ignore feedback when they have no opportunity to use it).

Facilitate self-assessment and reflection – one of the redesigns piloted during REAP was the Foundation Pharmacy class, in which students submitted a draft using a pro-forma similar to that used by tutors to grade their final submission. Students were required to reflect on distinct sections of their essay, which also allowed them to engage with the assessment criteria.

Encourage positive motivational beliefs – using the staged feedback described above would perhaps also address the issue of students becoming disillusioned.

Talking to a friend during the lunch break, the research methodology used by the authors also came up, in particular the use of ‘stimulated recall’. For this, the authors played back examples of audio feedback to the tutor, asking him to explain his thought processes and reflect on how his students would have responded to his comments. This methodology seems particularly appropriate for evaluating the use of audio feedback, and is something I want to take a closer look at.

A moment of serendipity

Whilst searching the twitter feed for comments on the session I noticed a tweet by @newmediac promoting a free webinar in which “Phil Ice shares research on benefits of audio feedback” (here’s the full tweet). The session has already passed, but the recording for this event is here.

Moment of serendipity

The presenter, Phil Ice, has been working on audio feedback in the US for a number of years and has a number of interesting findings (and research methodologies) I haven’t seen in the UK.

For example, Ice and his team report:

“students used content for which audio feedback was received approximately 3 times more often than content for which text-based feedback [was] received”

and that

“students were 5 to 6 times more likely to apply content for which audio feedback was received at the higher levels of Bloom’s Taxonomy than content for which text-based feedback was received”.

These results were from a small scale study of approximately 30 students, so aren’t conclusive. Ice has also conducted a larger study with over 2,000 students which used the Community of Inquiry Framework Survey. Positive differences were found across a number of indicators, although excessive use of audio to address feedback at lower levels was perceived as a barrier by students.

Ice has also conducted a study which breaks audio feedback into four types: global – overall quality; mid level – clarity of thought/argument; micro – word choice/grammar/punctuation; and other – scholarly advice. The study indicates that students prefer a combination of audio and text for global and mid-level comments.

Findings from Ice have been submitted for publication in the Journal of Educational Computing Research (which will soon feature a special issue on ‘Technology-Mediated Feedback for Teaching and Learning’).

Screenshot showing inline audio comments

Finally, I would like to mention the method Ice uses to deliver audio feedback. He uses the audio comment tool within Acrobat Pro 8 to record comments ‘inline’. This appears to be particularly useful for helping students relate comments to particular sections of their submitted work. Click here for a sample PDF document with audio feedback (this isn’t compatible with all PDF readers – I’ve tested it in Acrobat Reader and Foxit Reader).
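
As an aside, if you wanted to check programmatically whether a PDF actually contains these recorded comments, sound notes are stored as page annotations with the /Sound subtype. Here is a rough sketch using the pypdf library – my choice of tooling, not part of Ice’s workflow, and the filename is made up:

```python
from pypdf import PdfReader

# Hypothetical filename – substitute the sample PDF mentioned above.
reader = PdfReader("audio_feedback_sample.pdf")

for page_no, page in enumerate(reader.pages, start=1):
    # /Annots holds each page's annotations; sound notes use the /Sound subtype.
    for annot in page.get("/Annots", []):
        obj = annot.get_object()
        if obj.get("/Subtype") == "/Sound":
            note = obj.get("/Contents", "")
            print(f"Page {page_no}: sound annotation {note!r}")
```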

Hopefully this post has not only stimulated some ideas about the use of audio feedback, but also highlighted a range of methodologies for evaluating it effectively.


Just back from ALT-C 2009, where I had been asked to present a session with colleagues on EduApps (this resulted from JISC RSC UK's donation of an EduApps stick to all conference delegates and ALT members). Over the next couple of days I'll be making a series of posts to highlight some of the best bits.

For my first post in this series I'm going to highlight some of the ideas presented by my colleague Adam Blackwood at RSC South East. Adam, amongst other things, is a mobile guru and in his session he highlighted some interesting tools [Click here for a copy of Adam’s slides and his Mobile Technology Summary Sheet].

Proximity push using TextBlue

First there is TextBlue. The company specialises in ‘proximity marketing’, using Bluetooth to push information primarily to mobile devices. It offers a range of products, from plugin dongles for your laptop to ‘broadcasters’ which can push content out to devices up to 1,000 metres away.

Adam demonstrated how this technology could be used to push learning content to student-owned phones (or any Bluetooth-enabled device). The only restriction on the file types you can use is what is viewable on the student's device. You probably also want to keep file sizes down because of the transfer time, so the 30-minute podcast might be out of the question, but this technology could be ideal for distributing quizzes etc. (something you could easily create with Mobile Study, which is free).

There is nothing stopping you transferring files via Bluetooth without TextBlue, but doing it this way is very cumbersome; the TextBlue software turns it into a one-click solution. A demo version of the TextBlue software is available on request – Contact TextBlue.
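
To give a feel for the DIY route, here is a rough sketch of a manual push in Python – this is not TextBlue’s software, and it assumes a Linux machine with PyBluez and the GNOME bluetooth-sendto utility installed; the recipient still has to accept the transfer:

```python
import subprocess
import bluetooth  # PyBluez

def push_to_nearby(file_path: str) -> None:
    """Offer a file to every discoverable Bluetooth device in range."""
    # Scan for discoverable devices (the inquiry takes several seconds).
    devices = bluetooth.discover_devices(lookup_names=True)
    for addr, name in devices:
        print(f"Offering {file_path} to {name} ({addr})")
        # bluetooth-sendto performs the OBEX push; the device owner
        # must accept the incoming file for the transfer to complete.
        subprocess.run(["bluetooth-sendto", f"--device={addr}", file_path])

# push_to_nearby("quiz.jar")  # e.g. a small quiz file for students' phones
```

Compared with a one-click broadcaster this is clearly clunky, which is exactly the gap the commercial products fill.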

SMS polling/voting

I’ve been aware of SMS polling/voting services for some time. All the examples I’ve previously looked at use the model where the hosting/collation of votes is handled by a 3rd party site. Adam highlighted a new model which puts the editing/collation software on your own phone, with students responding to your mobile number rather than one provided by a 3rd party.

The software to do this currently only seems to be available for Android mobile devices. There are a couple of software applications that can do this but Adam was highlighting ‘Polls’ by Pollimath:

The concept is simple: draft the opinion poll on your phone, add your voters and open your poll. Your list of voters receives an SMS and/or email notification. They vote via the web or SMS reply, as per the options selected by the pollster. The pollster can see the poll statistics and the voting details (who voted for what choice).

Pollimath Concept Diagram
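
The mechanics behind that concept are easy to picture. The toy sketch below is not Pollimath’s code – the class and the numeric reply format are my own invention – but it shows the core of it: match each SMS reply against the poll’s options and keep a tally per sender.

```python
from collections import Counter

class SmsPoll:
    """Toy model of an on-phone poll: options are numbered, replies are tallied."""

    def __init__(self, question: str, options: list[str]):
        self.question = question
        # Voters reply with the option number, e.g. "1" for the first choice.
        self.options = {str(i + 1): opt for i, opt in enumerate(options)}
        self.votes = {}  # sender number -> chosen option

    def handle_reply(self, sender: str, body: str) -> None:
        """Record a vote if the reply matches a valid option number."""
        choice = self.options.get(body.strip())
        if choice:
            self.votes[sender] = choice  # a later reply overwrites an earlier one

    def results(self) -> Counter:
        """Aggregate counts per option; self.votes keeps who voted for what."""
        return Counter(self.votes.values())

# poll = SmsPoll("Was the audio feedback useful?", ["Yes", "No", "Not sure"])
# poll.handle_reply("+447700900123", "1")
# print(poll.results())  # Counter({'Yes': 1})
```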

There is a free ‘Lite’ version of Pollimath which is limited to 10 voters per poll, but at $3.95 the full version is very reasonably priced. Pollimath has some nice features, like being able to send vote invitations via email as well as SMS, allowing multiple input methods, and being able to view the results online. It is a relatively new application and some more work needs to be done on graphically representing poll results, as well as on an easier way to distribute poll links, but so far it looks very promising.

An alternative to Pollimath is ‘Handy Polls’ by Marc Tan. This has a better graphical results view, but doesn’t have as many features as Pollimath.

Augmented reality

The final thing Adam showed us was some ‘augmented reality’. With this, the camera view on your phone is combined with location and direction information so that additional information can be overlaid. One of the most popular working examples is Layar for Android, but the video below shows where the next generation of augmented reality is going: