LASI-UK: Part of the Learning Analytics Summer Institute Global Network

Jisc CETIS in partnership with the Open University in Scotland and the University of Edinburgh School of Informatics will be hosting a one day event on the 5th July in the Informatics Forum (Edinburgh) focusing on Learning Analytics.

LASI-UK is part of a global network of events in the first week of July, which are linked to the Learning Analytics Summer Institute (LASI) at Stanford University. LASI-UK will explore ways of working with data to draw out and share insights that can make a real difference to learning and teaching. This free, interactive event has a dual focus on data visualisation and working with dialogue data. You can check out the draft programme here.

Delegates are also invited to join us for an optional evening session, where we will be broadcasting the final morning plenary session live from Stanford.

Who should attend?

Anyone with an interest in using data to support students and inform curriculum design and teaching, including:

  • Data analysts, planners and management information providers
  • Curriculum designers / education developers
  • Those involved in the construction, delivery and analysis of MOOCs


Click here for more details and registration

This is a free event; however, places are limited, so interested parties are encouraged to register early.  We look forward to welcoming you to our event.


This post originally appeared on the ocTEL blog on 15th April and was written in collaboration with David Jennings. I’ve reposted it here as it outlines some of the broad ideas we are using with this open course, but in particular I wanted to highlight the use of the WordPress plugin MailPress to distribute course updates. The latest MailPress theme is here, as are the course theme customisations (a more detailed recipe is in the works).

ocTEL has been up and running for over a week now. After a start where the momentum of the email discussion threatened to run away with itself, Stephen Downes, who is a ‘critical friend’ on the project, shared his advice to ocTEL. We’ve taken that advice to heart and this post outlines how we’ve sought to embrace his suggestions.

Stephen advises:

  • set up a system whereby you are sending out one email a day to people
  • in this email, put your course announcements at the top
  • also put a link to the mailing list archives, or (even better) links to the current topics on the board archives

… you can use this system to incorporate ‘publish in your own space’ responses

  • create a mechanism to allow people to register their blogs
  • set up an aggregator of participant blogs
  • include the aggregator listings in the once-a-day email


  • aggregate the Twitter posts for the course tag (I forget what it was; I’m sure it’s in an email somewhere)
  • aggregate from the diigo group – https://groups.diigo.com/group/alt-octel
  • list these posts in your once-a-day email

One email a day: the Daily Newsletter

When creating the ocTEL platform we were conscious of the need to push information to participants in a timely and useful way. From day one we included the MailPress plugin for WordPress to distribute a daily and weekly newsletter. Below is a screenshot of the newsletter from day one, sent on the 4th April. Other than there not being much content (it was day one), we recognised there were a number of areas to improve.

Daily Newsletter Day 1 (click to enlarge)

Using some earlier work we’ve developed a custom ocTEL template (available here along with other source code from the project). The image to the right (which you can click to enlarge) gives you an overview of a recent version of the newsletter, which you can view online here.

Put your course announcements at the top, plus links to the current topics on the board archives…incorporate ‘publish in your own space’ responses

The newsletter currently contains five sections:

  • Course information – displays any full post made by the course team tagged ‘course information’.
  • Recent activity – summarises new aggregated content from the course reader.
  • Forum activity – currently configured to display the last 5 topics and recent replies.
  • Participant blogs – excerpts from participant blogs.
  • Bookmarks – a summary of bookmarks from Delicious and Diigo tagged ocTEL.

As the course evolves our plan is to refine the newsletter to maximise effectiveness. This will be in part to respond to the dynamics of the course, but also to experiment with different configurations and features. If you have any suggestions we’ve created a topic on the forum, or you can contact us by any other means you prefer.

Create a mechanism to allow people to register their blogs, aggregate them and include in daily email

The ‘publish in your own space’ (or wherever you prefer) principle is at the core of our platform design and has directly influenced a number of our fundamental decisions, including the adoption of WordPress. As part of the registration form we asked participants to include a link to their existing blog, and these were imported into our platform from the start, after which participants could log in to review and edit their details — as shown in the video below. This includes selecting an RSS feed so that we can aggregate their posts into our site, allowing other participants to review content via the Course Reader or as part of the daily newsletter.

Like a number of other open online courses we achieve this by using the FeedWordPress plugin for WordPress which has the ability to aggregate RSS data. In addition to this we recently developed the ability for participants to submit additional artefacts not accessible using RSS.
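As a rough illustration of what this kind of feed aggregation involves (a simplified sketch only, not FeedWordPress’s actual implementation; a real aggregator fetches feeds over HTTP and uses a proper XML parser, and the function names here are invented):

```javascript
// Sketch: merge items from several RSS feed strings into one list, newest first.
function parseItems(rssXml) {
  var items = [];
  var itemRe = /<item>([\s\S]*?)<\/item>/g;
  var m;
  while ((m = itemRe.exec(rssXml)) !== null) {
    var body = m[1];
    var title = (body.match(/<title>(.*?)<\/title>/) || [])[1] || "";
    var pubDate = (body.match(/<pubDate>(.*?)<\/pubDate>/) || [])[1] || "";
    items.push({ title: title, date: new Date(pubDate) });
  }
  return items;
}

function aggregate(feeds) {
  var all = [];
  feeds.forEach(function (xml) { all = all.concat(parseItems(xml)); });
  // newest item first, as a course reader or daily newsletter would want
  return all.sort(function (a, b) { return b.date - a.date; });
}
```

The repost/review steps FeedWordPress adds on top are where most of the real work lives, but the core loop is just this: collect, merge, order.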

Aggregate from the diigo group

As well as collecting data from participant feeds we are aggregating other feeds. These include the ocTEL JISCMail list, the Diigo group, Delicious bookmarks tagged ocTEL and others. Even though the course has been running a relatively short time, it’s encouraging to see individuals establishing new places for participants to cluster. As these are created, and where possible, we are adding them to the list of aggregated feeds. This currently includes the Google+ Community created by David Read and a Mendeley Group created by Ann Nortcliffe.

Aggregate the Twitter posts for the course tag

Currently we are only aggregating a selection of tweets into the Course Reader using the search term ‘#ocTEL AND ?’, giving us a sub-selection of tweets which might be questions. Using a system developed for another open online course we attempt to match questions with answers. These are available in the Course Reader and on this dedicated page.

It is also worth highlighting that we are archiving all the course tweets using a Google Spreadsheet template. In an attempt to provide a way for participants to navigate and explore this data we have another dedicated page with a ‘conversation graph’.

Cease sending out mailing list emails, give people a few days to catch their breath

This was the one element of Stephen’s advice we haven’t yet followed. We trusted ocTEL participants to self-regulate and catch their breath of their own accord, and happily they did. We know that email discussion lists have the capacity to flare up suddenly, and that may happen again. However, our parent organisation ALT successfully runs an email list with a thousand members, and, at present, ocTEL’s list of just under 800 participants is going through a quiet phase.

To conclude

When launching this course for the first time, we wondered what decisions would look daft with hindsight. Now I think we know at least one such decision. We have made changes quickly where we could, while still operating within our original principles. I hope this post highlights that in a number of other areas we are not only on top of the problem but proactive in developing and sharing new solutions to help others who might be interested in delivering their own open online courses. I hope it also illustrates that we are trying to respond to the needs of our participants, and we welcome any suggestions you have to improve the course (you are free to choose how to submit these, for example via the forum, email or otherwise).


Update: TAGS v6.0 is here! You can jump straight to the new site. If you are using version 5.x here are instructions on how to upgrade (there is an old Sheets version of TAGS which is recommended for use with TAGSExplorer).

For a couple of years now, to support my research in Twitter community analysis/visualisation, I’ve been developing my Twitter Archiving Google Spreadsheet (TAGS). To allow others to explore the possibilities of data generated by Twitter I’ve released copies of this template to the community.

In September 2012 Twitter announced the release of a new version of their API (the spreadsheet uses this to request data from Twitter). Around the same time Twitter also announced that the old version of their API would be switched off in March 2013. This has required some modification of TAGS to work with the new API. The biggest change for TAGS is that all requests now need authenticated access.

So here it is:

*** Twitter Archive Google Spreadsheet – TAGS v5.1 ***
[If the first link doesn't work try Opening this Spreadsheet and File > Make a copy]

Important: Uldis Bojars (@CaptSolo) has highlighted an issue with TAGS V5.0 which meant it would not page all the results. This has been fixed with the release of v5.1. If you are running version 5.0 of TAGS it's highly recommended you update the code.
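For those curious what paging all the results involves, here is a hedged sketch of the general pattern: keep requesting older tweets, stepping the max_id parameter below the oldest id seen, until a page comes back empty. The `fetchPage` function is a hypothetical stand-in for the authenticated search call, and real tweet ids are 64-bit values handled as strings, so the arithmetic here is purely illustrative:

```javascript
// Sketch of the paging loop a Twitter-archiving script needs to get right.
function collectAll(fetchPage) {
  var all = [];
  var maxId = null; // null means "start from the newest tweets"
  while (true) {
    var page = fetchPage(maxId);
    if (page.length === 0) break; // no older results left
    all = all.concat(page);
    // next request asks for tweets strictly older than the oldest seen
    maxId = page[page.length - 1].id - 1;
  }
  return all;
}
```

The v5.0 bug described above is exactly the failure mode of stopping after the first page instead of looping until empty.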

Note! Search API isn't 100% reliable:

We find that the search API over-represents the more central users and does not offer an accurate picture of peripheral activity; we also find that the bias is greater for the network of mentions - González-Bailón, Sandra, et al. "Assessing the bias in communication networks sampled from twitter." Available at SSRN 2185134 (2012).

Instructions for setting up TAGSv5

Instructions are included on the Readme/Settings sheet of the template. If you are having problems it's worth checking Instructions for Tags v5.0 written by Stacy Blasiola (@Blasiola) or this modified version by Karen Smith & Shanifa Nasser made for Open Data Day Toronto available as CC-BY-SA.

What will happen to my existing TAGS sheets that aren't version 5.0?

When Twitter turn off the old API (test outages this March) all authenticated and unauthenticated search requests will stop working.

How do I upgrade existing versions of TAGS spreadsheets (v3.x to v4.0) to keep collecting beyond March 2013?

As I can’t push an update to existing copies of TAGS you’ll have to manually update by opening your spreadsheet, then opening Tools > Script editor… and replacing the section of code that starts function getTweets() { and finishes 134 lines later (possibly with the line function twDate(aDate){ ) with the code here. [And yes I know that’s a pain in the ass but best I could do] ... or you can just start a new archive using TAGSv5.0

More tips and info when I get a chance


Getting Creative with Big Data and Google Apps

I was recently asked to write a guest post for Big Data Week on using Google Apps as an interface for Big Data. For the post I decided to revisit an old recipe which uses Google Sheets (Spreadsheets) and Google Apps Script to interface the Twitter and Google Analytics APIs. One of the results is the bubble graph shown below, which shows who has been tweeting my blog posts, how many visits their tweet generated, the number of retweets and how many followers the person has (click on the image for the interactive version). You can read more about how this was done and get a copy of the template in Getting Creative with Big Data and Google Apps

Visits, retweets bubble graph


I’ve written a very long blog post, which I’ll publish soon, on text-mining public JISCMail (LISTSERV) lists using OpenRefine. It concludes with displaying list activity, posts over time and individual activity. The technique I used isn’t straightforward, but as the output might be of benefit to other people like Brian Kelly, who reported the Decline in JISCMail Use Across the Web Management Community, I wondered if there was a better way of doing it. Here’s my answer:

*** JISCMail Public List Activity Overview Template ***
[Give it 30 seconds to render the results] 

JISCMail Public List Activity Overview Template

By making a copy of this spreadsheet and entering the url of the homepage of a public JISCMail List like OER-DISCUSS, it goes off and collects each months archives for almost the last 3 years, graphs the overall list activity as well as letting you see individual contributions (a limitation is matching variations in display names so in the OER-DISCUSS example Pat Lockley and Patrick Lockley get counted separately even though they are the same person).

How it works

On the data sheet cell A2 uses importXML to grab all the archive links. In cell B2 the importHTML function is used to grab the table of posts on each month’s archive page, with a QUERY to return post author names, the values being turned into a string from an array using JOIN. In cell A53 a UNIQUE list of author names (minus ‘) is generated using a combination of SPLIT and JOINs. This data is then used on the Dashboard sheet (to get the SPARKLINES I had to write a custom function using Google Apps Script).

function getActivity(user, source) {
  var output = [];
  for (var i = 0; i < source.length; i++) {
    var rows = source[i][0];
    // count how many times this author's name appears in the month's data
    var count = rows.match(new RegExp(user, "g"));
    if (count) {
      output.push([count.length]);
    } else {
      // skip empty months entirely; record zero for months with other activity
      if (source[i][0] != "") {
        output.push([0]);
      }
    }
  }
  return output;
}
If you are interested in learning more about the functions used I recently posted Feeding Google Spreadsheets: Exercises in using importHTML, importFeed, importXML, importRange and importData (with some QUERY too). You should be able to use this template with any other public JISCMail list. Any questions get in touch.


In Notes on technology behind cMOOCs: Show me your aggregation architecture and I’ll show you mine I reached the point in my own mind that the key behind cMOOCs was how you aggregated and shared dispersed activity. At the time I also asked “Given the widespread use of Twitter in MOOCs are there tools/techniques required to aggregate and disseminate the course discussions?” and started looking at techniques to retrospectively analyse Twitter-based discussions. This activity hasn’t gone unnoticed and I was very grateful to be asked by Dave Cormier and George Siemens to do a weekly summary of Twitter data from their latest course, Current/Future State of Higher Education (CFHE12), which started this week. This will be outwith my official CETIS work but given the increasing number of enquiries we are getting in this area it will undoubtedly feed in.

As I’ll be reporting on this course it made sense to sign up. On one of the registration pages I noticed a couple of different hashtags left over from an earlier course, so I asked the question:

If you visit the Twitter status page for this tweet you’ll see I got a couple of responses from AJCann and Jez Cope. If I had not sent you to that page, how would you have known I got an answer? Did Jez know that Alan had already responded to me?

Given that this type of dialogue, at a higher level, is a key aspect of learning (many a Greek has dined out on ‘knowing that they know nothing’), I started wondering how this activity could be aggregated, and whether aggregation would increase the situational awareness of participants and cause a shift in how the course community interacted with each other. (I had recently read Tony Hirst’s post on Conference Situational Awareness, and the example from the London 2012 Olympic Games “where it was identified that tweets relating to the congestion of the Olympic park entrances had a direct effect on crowd flow through the site” was still on my mind.)

So after some late night code bashing here’s what I’ve come up with (this is very beta so your feedback is welcome – particularly if it doesn’t work). A Filtered Aggregation of #CFHE12 questions and responses (embedded below if you are viewing this post on my site):

What you have here is an aggregation of possible questions from #cfhe12, with buttons to filter for messages with and without replies. Because it’s linked to Twitter’s own embed code, users can do the usual Twitter actions (reply, retweet etc.). As noted, there are some limitations; perhaps the biggest is that it isn’t 100% reliable, in that I’ve got no way to include replies made without the #cfhe12 hashtag … in this version anyway.

I’ll let you go and play with it, and hopefully you’ll share your thoughts. Two things that spring to mind for me are: it would be nice if this page had RSS feeds, just to keep the aggregation juices flowing; and wouldn’t it be interesting to use tweet favouriting to let the community curate questions/answers, a favourite representing an upvote (see Techniques for Live Tweet Curation)

Make your own

*** Open and copy TAGS v3.1Q ***

Run through the Basic and Advanced setup used in the TAGS v3.1 (you need to authenticate with Twitter).

In the spreadsheet open Tools > Script editor and follow the ‘To use Filter Questions Interface’ instructions

Upgrading an existing TAGS v3.1+ Archive

  1. Open and copy TAGS v3.1Q and make the ‘questionsFilter’ sheet active.
  2. Activate the sheet tab menu and choose ‘Copy to…’.
  3. Now find your existing TAGS archive spreadsheet and copy.
  4. Once it has copied open the destination and rename the new sheet from ‘Copy of questionsFilter’ to questionsFilter
  5. Open Tools > Script editor… in your old archive and select New > File > Script file. Call the new file TAGSExtras
  6. In the new script tab copy and paste the code from here, then save
  7. Run > setup twice (first time to authorise, second to run the function)
  8. File > Manage Versions and enter any description you like and Save New Version
  9. Publish > Deploy as web app... and click Update
  10. Run > getUrl and then open View > Logs... and copy the url into your browser address bar to view the result

How it was made (Non-techies you are free to leave ;)

The starting point was Twitter Archiving Google Spreadsheet TAGS v3. A hidden feature of this is to add a column to your Archive sheet called ‘possible_question’. When the archive collects tweets it looks for the text ‘? ’ or a ‘?’ at the end to identify tweets that might be questions, and if so ‘TRUE’ is put in the archive column.
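That heuristic is simple enough to restate in a few lines of JavaScript (a sketch of the idea as described, not the actual TAGS code):

```javascript
// Flag a tweet as a possible question if it contains '? ' mid-text
// or ends with a '?'.
function possibleQuestion(text) {
  return text.indexOf("? ") !== -1 || /\?$/.test(text.trim());
}
```

Like any heuristic it has false positives (rhetorical questions) and false negatives (questions phrased without a question mark), which is why the column is called ‘possible_question’.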

Having got a list of potential questions and associated tweet ids I could have put them in my failed lab experiment (and unfortunately titled) SpreadEmbed, but noticed that the embed.ly API doesn’t return an in-reply-to message with its embed code. To expand on this, because it is quite important: currently when you embed a tweet which is a reply you use something like this:

@mhawksey Most of us are using #cfhe12 ?

— AJCann (@AJCann) October 8, 2012

Although this text doesn’t include the text of the message it is replying to, Twitter’s clever bit of JavaScript renders it like this:


re-writing our little <blockquote> as:

Now you know why the page takes so long to render ;)

With this extra data we can use jQuery to find and filter tweets that have the class ‘twt-reply’.

To recap: using TAGS we can identify tweets that might be questions, and using a Twitter embed we can also automatically get the message a tweet is in reply to. So to display a question and answer together we only need to find the answer and Twitter will render the question it is in reply to (still with me?). The problem we’ve got is we can easily filter for questions (possible_question == TRUE) but not the answer. To do this I create a sheet of all the tweet id_strings that are questions (=QUERY(Archive!A:N,"select A WHERE N is not null LIMIT 50",FALSE)) and another where we know the tweet is in reply to something (=QUERY(Archive!A:N,"select A, K WHERE K starts with '2' LIMIT 50",FALSE)). For the last bit I needed to write some Google Apps Script which replaces any question tweet id with the answer id, giving us the ‘Combination of Qs and As’ column.
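A minimal sketch of that last substitution step (the function and field names here are invented for illustration; the original works over QUERY results in the spreadsheet rather than object arrays):

```javascript
// Given tweet ids flagged as questions and a list of replies
// (each reply knows which tweet it was in reply to), swap each
// question id for the id of a reply to it when one exists, so
// embedding the reply renders the question above it.
function combineQsAndAs(questionIds, replies) {
  var answerFor = {};
  replies.forEach(function (r) { answerFor[r.inReplyTo] = r.id; });
  return questionIds.map(function (qid) {
    return answerFor[qid] || qid; // keep the question id if no reply found
  });
}
```

Unanswered questions pass through untouched, which is what lets the page filter for messages with and without replies.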

Extracting question and answer ids

To render the tweets on a page we need to get the embed snippet using Twitter’s official oEmbed endpoint. Because getting the embed code needs authenticated access I again used Google Apps Script to fetch this data and cache the result. Using the Apps Script ContentService I can expose this by publishing the spreadsheet as a web app and serving up each tweet’s embed code in JSONP. For example here’s the JSONP-wrapped embed code for #CFHE12. The last part of the puzzle is some good old-fashioned HTML/JavaScript which renders the Twitter embed code and adds some UI (the code is here).
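The JSONP part is just JSON wrapped in a caller-supplied callback so a page on another domain can consume it via a script tag. A minimal sketch of building the payload (in Apps Script the string would be returned via ContentService, which is omitted here):

```javascript
// Wrap data as JSONP: callbackName({...});
function toJsonp(callback, data) {
  return callback + "(" + JSON.stringify(data) + ");";
}
```

The consuming page defines the callback function and then loads the web app URL in a script tag; the browser executes the returned string, handing the embed code to the callback.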


New item starred in Google Reader then add to Diigo

IFTTT is a web service that lets you do some basic plumbing between web services. The idea is that if something happens on one service (referred to by IFTTT as a ‘channel’) then you do something else on another (options also exist for other triggers including time, SMS and external sensors). To illustrate this, I use IFTTT to bookmark anything I star in Google Reader in my Diigo account. Setting up these recipes takes no more than selecting a trigger and assigning an action, so no coding is required.

There are currently 41 channels in IFTTT (including Twitter, Facebook and RSS feeds) that users can link together in this way. One of the most recent additions is Google Drive. With Google Drive integration you can store files or, the one that interests me more, insert a new row in a Google Spreadsheet.

A nice feature of IFTTT is the ability to share and reuse recipes. Here’s a selection of recipes that use Google Spreadsheets. You’ll see there is already a range of quick hacks people have shared including things like:

and my contribution:

Backup RSS Feed to Google Spreadsheet

Examples of RSS backup

RSS is great for accessing machine-readable data but one of its limitations, particularly with blogs, is that results are often limited to the last 10 or so items. This has created problems in some of my projects, like the OERRI Projects Dashboard, where I need all the posts. The solution to date has been to rely on third-party services like the Google Feed API.

Blog posts

By using IFTTT I’m able to easily capture blog posts as they are written. Having the data stored in a Google Spreadsheet makes it easy for me to query and export to other services (here’s an example from my blog) [this might be something JISC might want to do to capture and mine funded project feeds – other solutions/techniques already exist]


Scoop.it is a service which lets users collect posts around topics and display them in an online magazine format. I use Scoop.it to collect examples of how people are using Google Apps Script. Since June last year I’ve collected over 200 examples. I mainly do this for my own benefit, in part as a directed task to make sure I’m up-to-date with Apps Script developments, but also as a reference source of Apps Script examples. I can query these examples using a Google search but it concerns me that I’ve got no way to bulk export the data. Recently Scoop.it added an RSS option, so I use this recipe to Backup Scoop.it to Google Spreadsheet.


There’s lots more you could do with the data now it’s in a spreadsheet, and once I’ve built up a corpus I’ll have a play. One thing to note, which might put some people off, is that to allow IFTTT to add data for you, you need to give them authenticated access to your Google Drive. I can live with that risk, but you might not.


Update 16/01/14: One of the services originally used in this project has been deprecated by Google. The code has been updated to fix this. If you are using an existing version of Templated Export and would like to upgrade keeping your existing settings please follow the instructions in this document (comments are open if any step is unclear).

Templated Export allows users to reshape and selectively publish data from Google Spreadsheets in custom formats. The tool is powered by Google Apps Script, which allows integration with Google Spreadsheets using existing authenticated access. Example uses include turning data from a spreadsheet into an RSS feed and publishing data from a spreadsheet in JSON format to be used by other tools/services. The framework for creating a custom wrapper for your data means there are many more opportunities.

Templated Export

Main features

  • Cloud based
  • Selective publication of data (down to individual cells)
  • Publication of data in different file types (text, atom, rss, xml, json)
  • Complete customisation of output file structure
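To illustrate the template-driven idea behind the tool (a sketch only; the placeholder syntax and function names here are invented, not Templated Export’s actual template format):

```javascript
// Fill a user-defined template with values from a spreadsheet row.
function renderTemplate(template, row) {
  return template.replace(/\{\{(\w+)\}\}/g, function (_, key) {
    return row[key] !== undefined ? String(row[key]) : "";
  });
}

// Render every row with the item template, then drop the results
// into a wrapper template (the RSS/JSON/XML shell).
function exportRows(rows, itemTemplate, wrapper) {
  var body = rows.map(function (r) {
    return renderTemplate(itemTemplate, r);
  }).join("\n");
  return wrapper.replace("{{items}}", body);
}
```

Separating the per-row template from the wrapper is what gives complete control over the output file structure: swap the templates and the same rows become RSS, Atom or JSON.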


  1. Open Templated Export for Google Spreadsheets
  2. File > Make a copy of the script
  3. File > Manage Versions and enter a description and Save New Version
  4. Run > onInstall twice (first time to authorise, second to run the function)
  5. Publish > Deploy as web app... and click Update and open the 'Current web app URL' or click on the 'latest code' link if available. Bookmark this link (there are many tutorials for turning chrome bookmarks into applications)

Example usage

Generating a custom RSS Feed of data from a Google Spreadsheet

Publishing data from other sources like Guardian Datastore

Selectively publishing cell ranges as JSON

Some example templates

If you download this JSON file it contains the templates used in the demonstrations above. Use the Local Template: Backup/Restore button in Templated Export to install them.

Comments/suggestions welcome

I make the usual warranties that I'm not a professional programmer and I'm already aware of some niggly bugs, but comments and suggestions are always welcome. If there is enough interest in this tool I'm happy to put it on a community coding site or donate it to someone who actually knows what they are doing ;)


For those not familiar with MOOCs Dave Cormier & co. have a nice video explaining them here. Dave should know as the term:

MOOC was coined in 2008 by Dave Cormier, Manager of Web Communication and Innovations at the University of Prince Edward Island, and Senior Research Fellow Bryan Alexander of the National Institute for Technology in Liberal Education in response to an open online course designed and lead by George Siemens, associate director, Technology Enhanced Knowledge Research Institute at Athabasca University and Stephen Downes, Senior Researcher at The National Research Council (Canada). The course was called "Connectivism and Connective Knowledge" and was presented to 25 tuition-paying students at the University of Manitoba in addition to 2,300 other students from the general public who took the online class free of charge – from Wikipedia

If MOOCs aren’t already at the Gartner ‘peak of inflated expectations’ I’m sure by the end of this year they will be, as various people are gambling Where there’s MOOC, there’s brass?. Recently projects like Coursera, Udacity and edX have attracted over $100 million in investment to deliver MOOCs. It’s worth reiterating that not all MOOCs are the same, and as George Siemens recently wrote:

Largely lost in the conversation around MOOCs is the different ideology that drives what are currently two broad MOOC offerings: the connectivist MOOCs (cMOOCs?) that I [George Siemens] have been involved with since 2008 (with people like Stephen Downes, Jim Groom, Dave Cormier, Alan Levine, Wendy Drexler, Inge de Waard, Ray Schroeder, David Wiley, Alec Couros, and others) and the well-financed MOOCs by Coursera and edX (xMOOCS?).

George’s post was entitled ‘MOOCs are really a platform’, which got me thinking that, given the recent announcement that Coursera is offering its platform to other partners, including the University of Edinburgh, this falls into the ‘as a Service’ model as used in cloud computing. So Coursera is offering chosen partners (for a fee) ‘MOOCs as a Service’ (MaaS), or, using the distinction from above, ‘xMaaS’.

One other consideration is whether xMOOCs are really MOOCs. Martin Weller recently questioned if the recent offerings from Coursera et al. are truly open. So, ignoring the xMOOCs for now, I thought it would be useful to survey some ‘real’ MOOCs and identify whether there were any technological trends and possible future directions. This process has left me questioning if there is a need for more shared guidance and support on aggregation architectures. [This post is mainly for me to record thoughts and resources but it might be something we (JISC CETIS) take forward. If this is something you’d like, please comment or share this post to let us gauge the level of interest]

How the data was collected

A list of MOOCs was taken from the ‘(Very) Partial History of MOOCs’ section of mooc.ca (maintained by Stephen Downes). This was added to with UK-based examples funded by JISC/HEA. All 14 courses were then visited and examined for the technologies used. The criterion for inclusion was that the course had to have finished or started.

Because of the nature of MOOCs there is a grey line between the technology: chosen by the course team; recommended to students; and used by students either as part of their personal learning environment or used to facilitate smaller study groups. A distinction was attempted to only include technologies chosen/used by the course team.

Data collection was also not entirely conclusive due to some of the early MOOCs (CCK08 and CCK09) no longer having a web presence and others like MobiMOOC reusing their existing web presence (the 2011 version of the course has been edited to become the version for the 2012 session).

A Google Spreadsheet with the data and annotations is here. Please contact me for edits/additions.


Frequency of MOOC tech

Above is the obligatory word cloud of the frequency with which different technologies were used in the MOOCs surveyed (if you are feeling brave you can explore the data as a treemap on ManyEyes).

Two things are apparent in this data. Firstly, email is often a hidden technology. Services like Google Groups, Twitter and Facebook all allow (and in some cases rely on) email notifications. Secondly, it’s of limited use to know what technologies are being used, the important question is how they are being used.

We can get some insight into this from the diagram below taken from George Siemens “What is the Theory that Underpins Our MOOCs?

Siemens: MOOC Architecture

gRSShopper is an application that allows you to define your own community of RSS feeds, aggregates content from those feeds and organizes it, and helps you integrate that content into your own posts, articles and other content [source]

Because of the connectivist theory underpinning cMOOCs, gRSShopper is a key component in aggregating distributed activity. It’s worth noting that only 5 of the 14 MOOCs surveyed used gRSShopper, but there is evidence that other mechanisms are in place to perform similar functions. For example in Digital Storytelling (DS106) this functionality is handled by the FeedWordPress plugin, which allows administrators to specify feeder RSS feeds and selectively repost content to a self-hosted WordPress blog. In PHONAR and PICBOD, which are photography-based courses, students were directed to publish work to Flickr using a common tag to allow aggregation of work.

The general sense of it all – distributed, chaotic, emergent 

The ‘distributed, chaotic, emergent’ comes from a recent presentation by George Siemens for EDUCAUSE talking about cMOOCs. It’s apparent from the survey of MOOC technology that course teams are taking a loosely joined set of tools that they are comfortable with to facilitate a shared experience with the learner. As commented by Downes when writing about gRSShopper “the users are assumed to be outside the system for the most part, inhabiting their own spaces, and not mine”. It’s also apparent that people are taking discipline based approaches using tools aligned to study areas as previously described with PHONAR/PICBOD.

Even with the bespoke nature of MOOCs there are still opportunities to start collectively raiding the parts bin. Given the widespread use of Twitter in MOOCs are there tools/techniques required to aggregate and disseminate the course discussions? Given the wide use of WordPress within education are there opportunities for MOOC specific themes or plugins? With the ability to freely record and stream video from a Google Hangout do we need a wrapper to allow comment collection and annotation?


It’s not just the technology that sits on top. It’s been fascinating to read the blog posts from Jim Groom as he and his colleagues come to grips with the infrastructural issues of running ds106.us. As commented by Stephen Downes, and as Jim is finding, “aggregation kills websites”. So if it’s the aggregation of content that’s the glue in cMOOCs, perhaps this becomes the area of priority? Perhaps this is the area where JISC CETIS can most usefully provide or facilitate guidance? As Jim recently commented:

No where in the raging discussion around MOOCs is there anyone talking about sharing the infrastructural/architectural work they’ve done freely with others – Jim Groom in Integrating FeedWordPress with BuddyPress

So, having loosely started with the pedagogy and looked at some of the technology, I’m beginning to think aggregation architecture/infrastructure might be the best place where an organisation like JISC CETIS could help. Do you agree?


Next week I’ll be presenting at Dev8eD (a free event for building, sharing and learning cool stuff in educational technology for learning and teaching!) doing a session on ‘Hacking stuff together with Google Spreadsheets’ – other online spreadsheet tools are available. As part of this session I’ll be rolling out some new examples. Here’s one I’ve quickly thrown together to demonstrate the UNIQUE and FILTER spreadsheet formulas. It’s yet another example of me visiting the topic of electronic voting systems (clickers). The last time I did this it was gEVS – An idea for a Google Form/Visualization mashup for electronic voting, which used a single Google Form as a voting entry page and rendered the data using the Google Chart API on a separate webpage. This time round everything is done in the spreadsheet, so it’s easier to use/reuse. Below is a short video of the template in action followed by a quick explanation of how it works.

Here’s the:

*** Quick Clicker Voting System Template ***

The instructions on the ‘Readme’ sheet tell you how to set it up. If you are using this on a Google Apps domain (Google Apps for Education), then it’s also possible to record the respondent’s username.


All the magic happens on the FILTERED sheet. In cell A1 (which is hidden) there is the formula =UNIQUE(ALL!C:C). This returns a list of unique question ids from the ALL sheet. If you now highlight cell D2 of the FILTERED sheet and select Data > Validation you can see these values are used to create a select list.


The last bit of magic is in cells D4:D8. The first half of the formula [IF(ISNA(FILTER(ALL!D:D,ALL!C:C=$D$2,ALL!D:D=C4))] checks if there is any data. The important bit is:

COUNTA(FILTER(ALL!D:D,ALL!C:C=$D$2,ALL!D:D=C4))

This FILTERs column D of the ALL sheet using the condition that column C of the ALL sheet matches what is in D2 and column D matches the right response option. The FILTER would return rows of data that match the query, so if there are three A responses for a particular question, three As would be returned, one on each row. All we want is the number of rows the filter would return, so it is wrapped in COUNTA (count array).

Simple, yes?
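If spreadsheet formulas aren’t your thing, here is the same logic restated as plain JavaScript (a sketch of what UNIQUE and the COUNTA-wrapped FILTER are doing, not code from the template):

```javascript
// UNIQUE: de-duplicate the list of question ids.
function unique(values) {
  return values.filter(function (v, i) { return values.indexOf(v) === i; });
}

// FILTER + COUNTA: keep responses matching the selected question and
// response option, then count the surviving rows.
function countResponses(rows, questionId, option) {
  return rows.filter(function (r) {
    return r.question === questionId && r.response === option;
  }).length;
}
```

Each cell in D4:D8 is effectively one `countResponses` call with a different response option.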