So we are now into week 2 of the open course in Learning and Knowledge Analytics (LAK11). Whilst I’m already doing better at this course than PLENK10, I would still only class my involvement as peripheral participation, so I’ll no doubt be revisiting the LAK11 syllabus at a later date. A couple of things I picked up from week 1 that you might be interested in:
The only paper I had a chance to read properly was Elias, T. (2011) Learning Analytics: Definitions, Processes, Potential. It was more luck than anything else that I started there, but I was very glad of the fortune [it was only later that I read Dave Cormier’s MOOC newbie voice – a slackers entrance into lak11 post, which reassured me that although I wasn’t doing much, at least it was the right thing].
Things I took away from the paper were:
- Some examples of learning analytics systems already being used:
  - Purdue’s Signals block for Blackboard – “To identify students at risk academically, Signals combines predictive modeling with data-mining from Blackboard Vista. Each student is assigned a "risk group" determined by a predictive student success algorithm. One of three stoplight ratings, which correspond to the risk group, can be released on students’ Blackboard homepage.” [this reminded me of the University of Strathclyde’s homegrown STAMS VLE, which appears to have disappeared when the University moved to Moodle – a bit of a shame, as it was developed by staff in Statistics and Modelling Science, so I imagine behind the scenes it had a dusting of analytics – that’s progress for you]
  - University of California Santa Barbara’s Moodog Moodle module – “In addition to collecting and presenting student activity data, we can proactively provide feedback to students or the instructor. Moodog tracks the Moodle logs, and when certain conditions are met, Moodog automatically sends an email to students to remind them to download or view a resource.” Zhang (2007) (p. 4417) [I was a little disappointed to only find references to this in academic papers]
- Something on collective intelligence – Woolley et al. (2010) identified the existence of collective intelligence, which “is not strongly correlated with the average or maximum individual intelligence of group members but is correlated with the average social sensitivity of group members, the equality in distribution of conversational turn-taking, and the proportion of females in the group" (p. 686)
- Some terminology/theories for recommendation systems – “recommendation methods based on different theories such as collaborative filtering algorithm, bayesian network, association rule mining, clustering, hurting graph, knowledge-based recommendation, etc. and the use of collaborative filtering algorithms (Cho, 2009)” [at this point in the paper I thought about Tony Hirst’s Identifying Periodic Google Trends posts, mainly as they underscore the sheer scale of the field of learning analytics]
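To get the Signals idea straight in my own head, here’s a toy sketch of a “stoplight” rating. The features, weights and thresholds are entirely my invention for illustration – Purdue’s actual predictive model is, I’m sure, far more sophisticated:

```python
# Toy sketch of a Signals-style "stoplight" risk rating.
# All weights/thresholds below are invented, not Purdue's algorithm.

def stoplight(grade_pct, logins_per_week, assignments_submitted, assignments_due):
    """Return 'green', 'amber' or 'red' from crude engagement signals."""
    risk = 0.0
    if grade_pct < 60:
        risk += 0.5                      # low grades add the most risk
    if logins_per_week < 2:
        risk += 0.3                      # low activity in the VLE
    if assignments_due and assignments_submitted / assignments_due < 0.8:
        risk += 0.3                      # missing coursework
    if risk >= 0.6:
        return "red"
    if risk >= 0.3:
        return "amber"
    return "green"

print(stoplight(85, 5, 4, 4))  # engaged student
print(stoplight(55, 1, 1, 4))  # struggling student
```

The appeal of the traffic-light metaphor is that the student only sees a simple colour on their homepage, while the messy scoring stays behind the scenes.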
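The Moodog behaviour – watch the logs, and when a condition is met, email the student – is essentially a rule engine. A minimal sketch of that pattern (the function, data shapes and seven-day grace period are my assumptions, not taken from the Moodog module):

```python
# Toy Moodog-style rule: if a student hasn't viewed a resource some days
# after it was posted, queue a reminder. Names/thresholds are invented.
from datetime import date, timedelta

def reminders(logs, resources, students, today, grace_days=7):
    """logs: set of (student, resource) view events already recorded.
    resources: resource name -> date posted. Returns reminders due."""
    due = []
    for resource, posted in resources.items():
        if today - posted < timedelta(days=grace_days):
            continue                     # still within the grace period
        for student in students:
            if (student, resource) not in logs:
                due.append((student, resource))
    return due

logs = {("alice", "week1.pdf")}
resources = {"week1.pdf": date(2011, 1, 10)}
students = ["alice", "bob"]
print(reminders(logs, resources, students, today=date(2011, 1, 24)))
# bob never opened week1.pdf, so only he is nudged
```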
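Of the methods Cho lists, collaborative filtering is the one I could actually picture, so here’s my own toy version (not the method from the paper): find the user with the most similar ratings and suggest something they liked that you haven’t seen. The users, resources and ratings are all made up:

```python
# Minimal user-based collaborative filtering sketch (invented data).
from math import sqrt

ratings = {  # user -> {resource: rating}
    "alice": {"paper_a": 5, "paper_b": 3, "video_c": 4},
    "bob":   {"paper_a": 4, "paper_b": 3, "video_c": 5, "quiz_d": 4},
    "carol": {"paper_a": 1, "paper_b": 5, "quiz_d": 2},
}

def cosine(u, v):
    """Cosine similarity over the resources both users rated."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[r] * v[r] for r in shared)
    return dot / (sqrt(sum(u[r] ** 2 for r in shared))
                  * sqrt(sum(v[r] ** 2 for r in shared)))

def recommend(user):
    """Suggest the unseen resource rated best by the most similar user."""
    _, nearest = max((cosine(ratings[user], ratings[o]), o)
                     for o in ratings if o != user)
    unseen = {r: s for r, s in ratings[nearest].items()
              if r not in ratings[user]}
    return max(unseen, key=unseen.get) if unseen else None

print(recommend("alice"))
```

Even at this toy scale you can see why the field needs the heavier machinery Cho mentions: with sparse ratings, the “most similar user” can be decided by only one or two shared items.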
Overall the paper was very useful in highlighting how much I didn’t know, and in indicating the things I might need to know [whilst it might not sound like it, this is a positive outcome, as it lets me self-regulate my learning].
Some things on participating in the course in general
There were other things I did during week one, including playing with the recommendation search engine Hunch. This experience was in stark contrast to the course Moodle site, which was blindly sending me hundreds of emails from the course discussion forums. In the end I decided to unsubscribe from the email notifications and pull the forums into Google Reader via RSS. My hope was that Google Reader’s ‘sort by magic’ would pull interesting things to the top, but the algorithm is struggling to do anything other than chronologically order the feed [my guess is Google doesn’t have enough data on my personal or group preferences – ho hum ;)]