My old colleague at Cetis, David Sherlock, posted a nice little ‘Twitter Question/Revision Bot’. This uses a .csv file of questions and multiple-choice answers which get randomly tweeted out using a Python script. David designed the project with a Raspberry Pi in mind but also highlights it can easily be run on any Unix-like environment such as Mac OS X or Linux. As not everyone is going to have easy access to this, here’s how you can do something similar with Google Sheets (if you don’t want to play copy-and-paste coding, make a copy of this sheet).

1. Setting up a Google Sheets environment to handle it

  1. Start with a new Google Sheet and, like David, use six columns in each row: question, answer, three options (one of which is the correct one) and an extra column to record when the question has been asked.
  2. In your spreadsheet open Tools > Script editor and when asked start a ‘Blank Project’.
  3. In the new editor window select Resources > Libraries (this will first prompt you to give your project a name; I called mine TwitBot).
  4. In the ‘Find a Library’ box enter MarIlVOhstkJA6QjPgCWAHIq9hSqx7jwh and click Select
  5. You should now see ‘TwtrService’ listed as one of the libraries. In the ‘Version’ dropdown select the latest version (at time of writing 14), and click Save

TwtrService is a library I’ve written to interact with the Twitter API. You can read more about it here.

In the code window add the following code and click save:

function setup() {
  if (TwtrService.isUserConnectedToTwitter()){
    var result = Browser.msgBox("Twitter Authorisation",
                    "You appear to already be connected to Twitter.\\n\\nWould you like to run the setup again?",
                    Browser.Buttons.YES_NO);
    // Process the user's response.
    if (result == 'yes') {
      // User clicked "Yes": relaunch the key/secret dialog
      TwtrService.showTwitterKeySecret(SpreadsheetApp);
    }
  } else {
    // Not yet connected, so launch the key/secret dialog
    TwtrService.showTwitterKeySecret(SpreadsheetApp);
  }
}

The above code uses TwtrService to help you set up Twitter access if required.

Your script window should look like this:

script editor

2. Create a Twitter App

If you’ve used my other TAGS templates you can reuse your details for this. To see if Twitter App details are required, from the script window select Run > setup (if setup isn’t listed you need to save your code first). Running setup will start the first part of the authentication process. Click continue, review the authorisation required and click ‘Accept’ if you are happy.

Auth required

Going back to the spreadsheet you started, there should now be a dialog window asking you to enter your Twitter application details. If you haven’t set up a Twitter App before it should look like this:

Twitter app creation

Follow the instructions onscreen to create your app.

Important: The Twitter account you use to authorise access is the one that will send out the tweets.

3. Port the Python script to Google Apps Script

Add the code below to your existing script project and save:

var doc = SpreadsheetApp.getActiveSpreadsheet();
var sheet = doc.getSheetByName('Sheet1');

function tweetQuestion(){
  // pick a random row (row 1 is the header row)
  var ran = Math.floor(Math.random() * (sheet.getLastRow()-1)) + 2;
  var row = sheet.getRange(ran, 1, 1, sheet.getLastColumn()).getValues()[0];
  if (row[5] !== ""){
    tweetQuestion(); // if already asked pull another random row
  } else {
    var tweet =  "Q: " + row[0];
    var tweet2 =  row[2];
    var tweet3 =  row[3];
    var tweet4 =  row[4];
    TwtrService.post('statuses/update', {status: tweet});
    var options = [tweet2,tweet3,tweet4];
    shuffle(options);
    TwtrService.post('statuses/update', {status: "A: " + options[0]});
    TwtrService.post('statuses/update', {status: "B: " + options[1]});
    TwtrService.post('statuses/update', {status: "C: " + options[2]});
    // record when the question was asked
    sheet.getRange(ran, 6).setValue(new Date());
  }
}

function shuffle(a,b,c,d){ // array,placeholder,placeholder,placeholder
  // compact Fisher-Yates shuffle - shuffles the array in place
  c=a.length;while(c)b=Math.random()*(--c+1)|0,d=a[c],a[c]=a[b],a[b]=d;
}

4. Time the script to run every hour or so

In the script editor select Resources > Current project’s triggers and click ‘No triggers set up. Click here to add one now.’. Select to run tweetQuestion every hour (or your preference), and also click ‘notifications’ so you can get an email if the script fails. Finally click ‘Save’.

Timed triggers
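If you prefer setting this up in code, the same hourly trigger can be created with Apps Script’s built-in ScriptApp service (a minimal sketch, equivalent to the dialog settings above):

function createTrigger() {
  // run tweetQuestion() every hour, mirroring the trigger dialog settings
  ScriptApp.newTrigger('tweetQuestion')
      .timeBased()
      .everyHours(1)
      .create();
}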

What will happen now is the function will run on schedule even if you don’t have the spreadsheet or script editor open, or even your browser.

Important: when this script runs out of questions it will go into an infinite loop (tweetQuestion keeps calling itself looking for an unasked row). You can go back into the trigger window to remove the function at any point. If you don’t you’ll end up using all of your script runtime quota. Your homework is to figure out a way to get the script to bail if there are no questions left.

My homework...

function tweetQuestion(){
  var asked_col = sheet.getRange(2, 6, sheet.getLastRow()-1).getValues();
  // get unasked q's
  var unasked = [];
  for(var i=0; i < asked_col.length; i++) {
    if(asked_col[i][0] == "") {
      unasked.push(i+2); // record the row number of each unasked question
    }
  }
  // randomly pick one
  var ran = Math.floor(Math.random() * unasked.length);
  if (unasked[ran]){
    var row = sheet.getRange(unasked[ran], 1, 1, sheet.getLastColumn()).getValues()[0];
    var tweet =  "Q: " + row[0];
    var tweet2 =  row[2];
    var tweet3 =  row[3];
    var tweet4 =  row[4];
    TwtrService.post('statuses/update', {status: tweet});
    var options = [tweet2,tweet3,tweet4];
    shuffle(options);
    TwtrService.post('statuses/update', {status: "A: " + options[0]});
    TwtrService.post('statuses/update', {status: "B: " + options[1]});
    TwtrService.post('statuses/update', {status: "C: " + options[2]});
    sheet.getRange(unasked[ran], 6).setValue(new Date());
  } else {
    // no questions left - do something else
  }
}

Kin Lane has done some great work highlighting the importance of APIs in education. If you are unfamiliar with APIs, they are a way for separate programs to communicate and share information or perform actions. With the growing use of data in education I believe APIs are the only way to use data effectively and efficiently. Kin’s University of API white paper is a great starting point for more context.

Reading the white paper reminded me how important it is to get people to think beyond the webpage and consider the underlying data used to generate it.

Luke, view the source

Back in the day when I discovered the work of Tony Hirst this was a real threshold concept for me. Five years ago unpicking the data powering the web felt a lot easier: there was usually only basic authentication required, if any. Now you usually have to do some sort of authentication handshake. This additional step often immediately lands you in codeland. Even if you don't do code there are still opportunities to explore APIs. Any decent API service will usually have interactive documentation for developers or an API console. In a recent talk, which you can see the fragments of here, I highlighted what data is behind a tweet. If you'd like to explore the Twitter API in a non-cody way here's how:

Interactively exploring the Twitter API

1. Go to the Twitter API console. Make sure you are logged in (you should see your avatar top right)

2. In the Authentication drop down select OAuth 1 - this will prompt you to sign in with Twitter


3. When bounced back to the console select /statuses/show/{id}.json


4. After it prefills some details switch to the Template tab. In the id box enter a tweet id number, e.g. for my tweet you'd just enter 591156241969319936, and hit the orange send button


5. In the pane you should get a response. The main data bit starts:
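As an illustration (values trimmed to placeholders), a statuses/show response begins along these lines:

{
  "created_at": "...",
  "id": 591156241969319936,
  "id_str": "591156241969319936",
  "text": "...",
  ...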


6. To get details of other tweets click the Template tab and enter another id.


7. If you are interested in other API calls you can make, click the Service box and select another.



Later today (2.30pm UTC) I’ll be presenting at #oer15 about Twitter in open education (tune in here). As I wanted to highlight the network effect of Twitter I wanted to engage not just the room, but leave ‘footprints’ for others to follow. I know people like Alec Couros and Alan Levine have done cool stuff live tweeting from Keynote. I’ve dabbled with doing stuff with Microsoft PowerPoint but was never fully satisfied. Given Twitter now supports a number of embedded media formats, I thought rather than trying to fit Twitter into another presentation tool, I’d turn my live tweets into my slides.

And so TAGSPresenter is born! Using a Google Sheet as an editor, Google Drive to host images and a bit of Google Apps Script to glue it together, I’ve got my own Twitter-based presentation tool. I don’t have time to write about how it was technically achieved, but if you want to peek under the hood of the hack here are my ‘slides’ which are published here.

Tune in at 2.30 to see how it goes ;)


As part of the latest release of TAGS (Twitter Archiving Google Sheet) I moved a lot of the code into a Google Apps Script Library. Libraries are a great way to bundle and release code, allowing you to write your own classes and methods. To see some of the functionality already being developed by the Apps Script community you should have a look at the Google Apps Script Samples site and the excellent work Bruce McPherson has done, which includes an EzyOauth2 library.

One of the things you can do with libraries is wrap one library into another. When rewriting TAGS it made sense to strip out a separate Twitter client library that I and others could use in different projects. Based on the work by Arun Nagarajan at Google, TwtrService provides access to Twitter's REST API. The work I’ve done is to add some UI and configuration methods to try to streamline the authentication flow. As part of this, developers can offer authentication routes using their own Twitter application or use an application created by users. This particular angle is a result of one of the design principles for TAGS: that every copy of the template should use a Twitter application owned by the user. The reason behind this is to distribute the risk. If Twitter were to suspend my data access because a TAGS user abused their API it would suspend access for all TAGS users. By requiring TAGS users to register their own application with Twitter the responsibility to abide by Twitter’s terms of service lies with them. So in TAGS the auth flow looks like this:

The result is hopefully a flexible library that developers can integrate into their own projects, either using their own Twitter application or by getting users to register their own.

Over the next couple of weeks I'll be sharing some example applications we've developed at ALT. In the meantime this post serves as a basic introduction to TwtrService and covers the sections below.

Overview of TwtrService

The TwtrService library for Google Apps Script centrally stores your Twitter access details, allowing them to be accessed from multiple script projects without the need for re-authentication. TwtrService is designed to allow you to directly use Twitter’s v1.1 REST API GET and POST methods. For example, to return Twitter search results for the search ‘Google Apps Script’ you would use:

var data = TwtrService.get('https://api.twitter.com/1.1/search/tweets.json', {q: 'Google Apps Script'});

The url string can also be abbreviated to:

var data = TwtrService.get('search/tweets', {q: 'Google Apps Script'});

Additional options can be passed in the parameters array. For example to return 100 results for the same search you would use:

var data = TwtrService.get('search/tweets', {q: 'Google Apps Script', count: 100});
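The returned object mirrors Twitter's JSON, so for search/tweets the matching tweets sit in a statuses array. A quick sketch of pulling out the text of each result:

function logSearchResults() {
  var data = TwtrService.get('search/tweets', {q: 'Google Apps Script', count: 100});
  // each element of statuses is a full tweet object
  for (var i = 0; i < data.statuses.length; i++) {
    Logger.log(data.statuses[i].user.screen_name + ': ' + data.statuses[i].text);
  }
}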

The project key for this library is MarIlVOhstkJA6QjPgCWAHIq9hSqx7jwh and the TwtrService methods are documented here.

To use the Twitter REST methods TwtrService first needs authenticated access. TwtrService has some built-in methods to do this, detailed below. Once a user has authenticated access the TwtrService library stores these details as User Properties. This means when a user has authenticated once with TwtrService, using the library in another container-bound or standalone Apps Script immediately gives them access to Twitter API results using the get/post methods. In terms of security, User Properties are limited to the effective user of the current script.
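So a second project that includes the library can simply test for an existing connection before making calls (a minimal sketch using the methods described in this post):

function checkAccess() {
  // User Properties persist per user, so one authentication with TwtrService
  // is enough for any other script project that includes the library
  if (TwtrService.isUserConnectedToTwitter()) {
    Logger.log('Connected - the get/post methods are ready to use');
  } else {
    Logger.log('Not connected - run the setup flow first');
  }
}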


Quick start: Personal access

If you would just like to use TwtrService for your own Apps Script projects the easiest way to get started is to register a Twitter application and enter its details on this page (if you are interested here is the source code for the interface).

Note: If you are already a user of TAGS you’ll already be able to use TwtrService without the step above.

In the Apps Script project where you’d like to use the Twitter API, in the Script Editor window use Resources > Libraries and add the service using the project key MarIlVOhstkJA6QjPgCWAHIq9hSqx7jwh.

In your project you can now use the TwtrService.get() and TwtrService.post() methods. The documentation for get() is detailed below (post() is the same but uses HTTP POST):

get(string url, Object parameters)

GET wrapper for requests to the Twitter REST API. For full documentation of API method endpoints and parameters see the Twitter REST API documentation. For example, to get the last 100 tweets containing 'Google Apps Script': var data = TwtrService.get('search/tweets', {q: 'Google Apps Script', count: 100});

url (string): Twitter REST API resource url. This can be either long form e.g. https://api.twitter.com/1.1/search/tweets.json or abbreviated e.g. search/tweets
parameters (Object): additional API parameters as detailed in the Twitter REST API documentation e.g. for search results a search string and count is specified by {q: 'Google Apps Script', count: 100}.

Return Values:
Object: API response in JSON format.

upload(string url, Object parameters)

The upload method for adding media to Twitter. For full documentation see the Twitter REST API media upload documentation.

url (string): Twitter REST API upload url. Currently the required string is 'media/upload'
parameters (Object): additional API parameters as detailed in the Twitter REST API media upload documentation. Current valid parameters are media and media_data

Return Values:
Object: API response in JSON format.

// get an image file from Google Drive and base64 encode it for upload
var picture = DriveApp.getFileById('ID_OF_PICTURE_IN_DRIVE').getBlob();
var picture_data = Utilities.base64Encode(picture.getBytes());
var parameters = { "media" : picture_data };
var res = TwtrService.upload('media/upload', parameters);
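The media/upload response includes a media_id_string, which the statuses/update method accepts in its media_ids parameter. So, assuming parameters pass straight through to Twitter as in the examples above, attaching the uploaded image to a tweet might look like:

// attach the uploaded image to a new tweet using the returned media id
var update = TwtrService.post('statuses/update', {
  status: 'Posted with TwtrService',
  media_ids: res.media_id_string
});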

Quick start: Personal access in Sheets and Documents

If you would like to replicate the TAGS authentication flow where users enter their Twitter application key/secret TwtrService comes with a number of UI methods. For TAGS the following code is used:

// Launches key/secret and auth flow
function setup() {
  if (TwtrService.isUserConnectedToTwitter()){
    var result = Browser.msgBox("Twitter Authorisation",
                    "You appear to already be connected to Twitter.\\n\\nWould you like to run the setup again?",
                    Browser.Buttons.YES_NO);
    // Process the user's response.
    if (result == 'yes') {
      // User clicked "Yes": relaunch the key/secret dialog
      TwtrService.showTwitterKeySecret(SpreadsheetApp);
    }
  } else {
    TwtrService.showTwitterKeySecret(SpreadsheetApp);
  }
}

// Used as part of setup() to process form data
function processForm(formObject) {
  TwtrService.setUserKeySecret(formObject);
}

Quick Start: Shared Twitter Key/Secret

The earlier examples have assumed the user registers their own Twitter application. For scenarios where you would like users to have authenticated access using a dedicated Twitter API key/secret, it is possible to initialize these values. Example application code can be found here, which is also deployed here.

Similar to the earlier examples, once a user authenticates access with your key/secret, as long as these values are also initialized in other script projects the user will have access to the Twitter API via TwtrService.
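In outline the pattern is something like the sketch below. Note this is hypothetical: the method name and parameter keys are assumptions, so check the TwtrService documentation linked above for the actual initialization call.

// Hypothetical sketch - the init method name and parameter keys are
// assumptions; see the TwtrService documentation for the actual call
function initShared() {
  TwtrService.init({
    consumer_key: 'YOUR_TWITTER_API_KEY',
    consumer_secret: 'YOUR_TWITTER_API_SECRET'
  });
}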

Instructions for creating a Twitter Application to use with TwtrService

TwtrService requires you to have a registered Twitter application. If you haven’t already done this here are the steps you need to get started:

  1. Register for an API key with Twitter at https://apps.twitter.com/ (if you've already registered for a TAGS sheet you can reuse your existing API Key and Secret).
    • Name, description and website can be anything you like
    • Important: Include the Callback URL https://script.google.com/macros/
  2. Read the 'Developer Rules of the Road' before clicking 'Create your Twitter application'

On the Twitter site your application should include a ‘Keys and Access Tokens’ tab which includes the Consumer Key (API Key) and Consumer Secret (API Secret) you’ll need.


One of the uses of Twitter that caught the edtech community’s imagination early was the backchannel, used for audience conversation, broadcast/amplification or just tweckling. As a presenter there are various ways you can use the backchannel, such as directing discussion and highlighting example responses, or monitoring it on a second screen. When I’ve presented in the past I’ve struggled to check the backchannel whilst speaking, but will try to weave video clips into my presentation as an opportunity to take stock.

An idea I'm keen to try for my next presentation is listening to the backchannel. Similar to the idea of TV presenters receiving directions from the production crew this might be a way to shape what you are saying based on what is being said at or about you. There are a couple of ways you could do this but the method I've found easiest to do is using Twitter SMS alerts and a ‘read aloud’ SMS notifier on my phone.

Twitter SMS messaging was something I wrote about in 2008 and essentially it’s the same concept, using mobile SMS notification. There are some limitations such as the Twitter supported mobile carriers and what you can be notified about. Since SMS notifications are not possible from search results you would have to rely on either tweets that mention you and/or people you've selectively enabled mobile notifications from. You could set this up in a number of ways. For example, you might ask a colleague in the audience and/or remote to give you feedback as you present.

The last step is to hear the message. I played around with a couple of Android apps to do this and the best one I found was Talking SMS and Caller ID by Bit Studio Ltd (£1.50). This has a very small file size and, unlike other apps which needed the phone to be awake to work, Talking SMS happily runs in the background while your phone is in standby. I should say I tried using Android’s TalkBack accessibility settings but couldn't find a way for it to automatically read the entire message.

So the next time you see me present and I've got a headphone in one ear and a confused face you'll know why. So do you think it'll work? Will you be trying this recipe yourself? Is this easier to set up on iPhone?


Given the number of researchers who ask me about access to historic Twitter data and end up disappointed to hear free access to search results is limited to the last 7 days, I’m sure they will be pleased to hear about the Twitter Data Grants:

we’re introducing a pilot project we’re calling Twitter Data Grants, through which we’ll give a handful of research institutions access to our public and historical data.

This was an area I’d hoped the Library of Congress would have solved long ago given they were gifted the data in April 2010. Unfortunately, despite the announcement in Jan 2013 that access was weeks away, nothing has appeared.

It’s worth stressing that Twitter’s initial pilot will be limited to a small number of proposals, but those who do get access will have the opportunity to “collaborate with Twitter engineers and researchers”. This isn’t the first time Twitter have opened data to researchers, having made data available for a Jisc-funded project to analyse the London Riots, and while I expect Twitter will end up with a handful of elite researchers/institutions hopefully the pilot will be extended.

Proposals for this pilot need to be in by 15 March. A link is included in the Introducing Twitter Data Grants page.



Happy New Year to you too Simon! Having worked with the Twitter and Google Maps APIs I was aware that their terms are becoming increasingly restrictive, making the environment for 3rd party services for doing this increasingly difficult. There is a solution for doing this using a modification of my Twitter Archiving Google Spreadsheet (TAGS) project (the guerrilla approach so to speak). The result for #rhizo14 is here (classic Maps seems to be struggling with this so here is what it looked like) (only viewable in non-mobile app versions of Google Maps and not the current preview version) and this post outlines how it was done.


Perhaps the worst SEO post title I could possibly use. If you are still wondering what SEO is ask Nitin Parmar ;)

Still lost? SEO is Search Engine Optimisation. I’ve had a long interest in SEO, primarily in selfish terms to try and get this blog read more, but also in a wider ‘educational resource discovery’ context. I was the author of the ‘SEO and discoverability’ chapter in Into the wild – Technology for open educational resources, which highlights the importance of SEO and the UKOER programme’s experiments with it.

So it’s perhaps no surprise that I agree with Tony:

Something I’m increasingly becoming aware of is that SEO is not just about having the right metadata on your webpage; in fact arguably this is the least important aspect. The area I’m particularly interested in is the tools/techniques the SEO community use to gain ‘actionable insight’.

Ironically this is an area I’ve been inadvertently contributing to without really knowing it. Someone who spotted this early was Wil Reynolds founder of SEER Interactive:

SEER Interactive offer services in Paid Search Marketing and Search Engine Optimization but what’s particularly interesting is their commitment to being “an Analytics first company, and we will not take on projects where we can’t analyze our impact on your business”.

So what do I do that’s of interest to the SEO community? Well it seems, like me, SEOers like a good old-fashioned spreadsheet. They also like a good old-fashioned spreadsheet that they can hook into social network channels. A recent example of this is the work Richard Baxter (CEO and founder of SEOgadget) presented at MOZCon, which extends TAGS (my Twitter Archiving Google Spreadsheet solution) demonstrating How To Use Twitter Data for Really Targeted Outreach. The general synopsis is:

an alternative method to find sites that our target audiences may be sharing on Twitter. With that data, you can build content strategy, understand your market a little better, and construct an alternative outreach plan based on what real people are sharing and engaging with, rather than starting with sites that just rank for guest post queries.

It was really interesting to read how Richard had used the output from TAGS, which was ingested into Excel where additional free Excel based SEO tools could be used to gain that all important ‘actionable insight’.

So ‘learning tech and library folk’ if you are planning your next phase of CPD maybe you should be looking at some SEO training and perhaps I’ll see you at MOZCon next year ;)


Twitter has recently frustrated a number of developers and mashup artists by moving to tighter restrictions in its latest API. Top of the list for many are: all Twitter Search API requests need to be authenticated (you can’t just grab and run, a request has to be via a Twitter account), removal of XML/Atom feeds and reduced rate limits. There are some gains which don’t appear to be widely written about so I’ll share them here.

#1 Get the last 18,000 tweets instead of 1,500

Reading over the release discussion/notes for the latest version of NodeXL I spotted that

you now specify how many tweets you want to get from Twitter, up to a maximum of 18,000 tweets

Previously in the old API the hard limit was 1,500 tweets from the last 7 days. This meant if you requested a very popular search term you’d only get the last 1,500 tweets, making any tweets made earlier in the day inaccessible. In the new API there is still the ‘last 7 days’ limit but you can page back a lot further. Because the API limits you to 100 tweets per call and 180 calls per hour this means you could potentially get 18,000 tweets in one hit. If you cache the maximum tweet id and wait an hour for the rate limit to refresh you could theoretically get even more (I’ve removed the 1.5k limit in TAGSv5.0, but haven’t fully tested how much of the 18k you can get before being hit by script timeouts).
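In practice paging back means feeding the id of the oldest tweet seen so far into the next call as max_id. Using my TwtrService library as shorthand, a rough sketch of that loop (untested against the script-timeout caveat above):

function fetchUpTo18k(q) {
  var all = [];
  var max_id = null;
  for (var i = 0; i < 180; i++) { // 180 calls x 100 tweets = 18,000
    var params = {q: q, count: 100};
    if (max_id) params.max_id = max_id;
    var data = TwtrService.get('search/tweets', params);
    var statuses = data.statuses || [];
    if (max_id) statuses.shift(); // max_id is inclusive so drop the repeated tweet
    if (statuses.length === 0) break; // nothing older left in the 7 day window
    all = all.concat(statuses);
    max_id = statuses[statuses.length - 1].id_str; // oldest tweet seen so far
  }
  return all;
}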

#2 Increased metadata with a tweet

Below is an illustration of the data returned in a single search result comparing the old and new search API.

Old and new Search API responses

If you look at the old data and the new data the main addition is a lot more profile data. A lot of this isn’t of huge interest (unless you wanted to do a colour analysis of profile colours), but there is some useful stuff. For example, in this case I have profile information for both the original tweeter and the retweeter, as well as friend/follower counts, location and more (I’ve already shown how you can combine this data with Google Analytics for comparative analysis).

Whilst I’m sure this won’t appease the hardcore Twitter devs/3rd party developers, for hackademics like myself grabbing extra tweets and richer data has its benefits.


I’m enrolled on Learning Analytics and Knowledge (LAK13), an open online course introducing data and analytics in learning. As part of my personal assignment I thought it would be useful to share some of the data collection and analysis techniques I use for similar courses and take the opportunity to extend some of these. I should warn you that some of these posts will include very technical information. Please don’t run away, as more often than not I’ll leave you with a spreadsheet where you fill in a cell and the rest is done for you. To begin with let's start with Twitter.

Twitter basics

Like other courses LAK is using a course hashtag to allow aggregation of tweets, in this case #lak13. Participants can either watch the Twitter Search for #lak13 or, depending on their Twitter application of choice, view the stream there. Until recently a common complaint about the Twitter search was that it was limited to the last 7 days (Twitter are now rolling out search for a small percentage of older tweets). Whilst this limit is perhaps less of an issue given the velocity of the Twitter stream, for course tutors and students having longitudinal data can be useful. Fortunately the Twitter API (an API is a way for machines to talk to each other) gives developers a way to access Twitter’s data and use it in their applications. Twitter’s API is in transition from version 1 to 1.1, version 1 being switched off this March, which is making things interesting. The biggest impacts for the part of the API handling search results are the:

  • removal of data returned in ATOM feed format; and
  • removal of access without login

This means you’ll soon no longer be able to create a Twitter search which you can watch in an RSS Feed Aggregator like Google Reader, like this one for #lak13.

All is not lost as the new version of the API still allows access to search results but only as JSON.

 JSON (pron.: /ˈdʒeɪsən/ jay-sun, pron.: /ˈdʒeɪsɒn/ jay-sawn), or JavaScript Object Notation, is a text-based open standard designed for human-readable data interchange - Wikipedia

I don’t want to get too bogged down in JSON but basically it provides a structured way of sharing data and many websites and web services will have lots of JSON data being passed to your browser and rendered nicely for you to view. Let's for example take a single tweet:

single tweet as displayed

Whilst the tweet looks like it just has some text, links and a profile image, underneath the surface there is so much more data. To give you an idea, highlighted are 11 lines from the 130 lines of metadata associated with a single tweet. Here is the raw data for you to explore for yourself. In it you’ll see information about the user including location and friend/follower counts; a breakdown of entities like other people mentioned and links; and ids for the tweet and the tweet replied to.

tweet metadata
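In heavily abbreviated form (placeholder values) that metadata is structured like this:

{
  "created_at": "...",
  "id_str": "...",
  "text": "...",
  "user": {
    "screen_name": "...",
    "location": "...",
    "followers_count": 0,
    "friends_count": 0
  },
  "entities": {
    "user_mentions": [ ],
    "urls": [ ]
  },
  "in_reply_to_status_id_str": null
}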

One other Twitter basic that catches a lot of people out is that the Search API is limited to the last 1500 tweets. So if you have a popular tag with over 1500 tweets in a day, at the end of the day only the last 1500 tweets are accessible via the Search API.

Archiving tweets for analysis

So there is potentially some rich data contained in tweets, but how can we capture it for analysis? There are a number of paid-for services like eventifier that allow you to specify a hashtag for archive/analysis. As well as not being free, the raw data isn’t always available either. My solution has been to develop a Google Spreadsheet to archive searches from Twitter (TAGS). This is just one of many solutions, like pulling data directly using R or Tableau; the main advantage of this solution for me is I can set it up and it’s happy to automatically collect new data.

Setting this up to capture search results from #lak13 gives us the data in a spreadsheet.

spreadsheet of #lak13 tweets

This makes it easy to get overviews of the data using the built-in templates:

twitter summary

activity over time

... or, as I’d like to spend the rest of this post, quickly looking at ways to create different views.

As you will no doubt discover, using a spreadsheet environment to do this has pros and cons. On the plus side it’s easy to use built-in charts and formulas to analyse the data, identifying queries that might be useful for further analysis. The downside is you are limited in the level of complexity. For example, trying to do things like term extraction, n-grams etc. is probably not going to work. All is not lost, as Google Sheets makes it easy to extract and consume the data in other applications like R, Datameer and others.

Using Google Sheets to import and query data

I’ve got a post on Feeding Google Spreadsheets: Exercises in using importHTML, importFeed, importXML, importRange and importData if you want to learn about other import options, but for now we are going to use importRange to pull data from one spreadsheet into another.

If you open this spreadsheet and File > Make a copy it’ll give you a version that you can edit. In cell A1 of the Archive sheet you should see the following formula: =importRange("0AqGkLMU9sHmLdEZJRXFiNjdUTDJqRkNhLUxtZE5FZmc","Archive!A:K")

What this does is pull the first couple of columns from this sheet where I’m already collecting LAK13 tweets (note: this technique doesn’t scale well, so when LAK starts hitting thousands of tweets you are better off doing manipulations in the source spreadsheet than using importRange. I’m doing it this way to get you started and try some things out).


On the Summary sheet I’ve extended the summary available in TAGS by including weekly breakdowns. The entire sheet is made with a handful of different formulas used in slightly different ways with a dusting of conditional formatting. I’ve highlighted a couple of these:

  • cell G2 =TRANSPOSE(FREQUENCY(FILTER(Archive!E:E,Archive!B:B=B2),S$15:S$22))
    • FILTER – returns an array of the dates of tweets the person named in cell B2 has made in the archive
    • FREQUENCY – calculates the frequency distribution of these dates based on the dates listed in S15:S22 and returns a count for each distribution in rows starting from the cell the formula is in
    • TRANSPOSE – converts the values from a vertical to horizontal response so it fills values across the sheet and not down
  • cell P2 =COUNTIF(H2:O2,">0")
    • counts if the values in row 2 from column H to O are greater than zero, giving the number of weeks the user has participated
  • cells H2:O – conditional formatting used to highlight weekly participation
  • cell B1 =QUERY(Archive!A:B," Select B, COUNT(A) WHERE B <> '' GROUP BY B ORDER BY COUNT(A) desc LABEL B 'Top Tweeters', COUNT(A) 'No.'",TRUE)
    • QUERY – allows you to use Google’s Query Language, which is similar to the SQL used in relational databases. In the example, using columns A and B of the Archive sheet as the data source, we select column B (screen name of tweeter) and a count of A (this could be any other column with a unique value) where B is not blank. The results are grouped by B (screen name) and ordered by the count. The query also renames the columns.


To give you some examples of possible queries you can use with the Twitter data, the spreadsheet you copied includes a Query sheet with some samples. Included are queries to filter tweets with ‘?’, which might indicate questions (even if rhetorical), time-based filters and counts of messages between users.

Query sheet
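For instance, the ‘?’ filter and a per-person question count can be written with the query language’s contains operator (illustrative formulas; they assume the tweet text is in column C of the Archive sheet):

=QUERY(Archive!A:K,"SELECT B, C WHERE C CONTAINS '?'",TRUE)

=QUERY(Archive!A:K,"SELECT B, COUNT(A) WHERE C CONTAINS '?' GROUP BY B ORDER BY COUNT(A) DESC",TRUE)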

Tony Hirst has written more about Using Google Spreadsheets as a Database with the Google Visualisation API Query Language, which includes creating queries to export data.

Other views of the data

The ability to export the data in this way opens up some other opportunities. Below is a screenshot of an ego/conversation-centric view of #lak13 tweets rendered using the D3 javascript library. Whilst this view onto the archive is experimental, hopefully it illustrates some of the opportunities.

ego/conversation centric view of #lak13 tweets


Hopefully this post has highlighted some of the limitations of Twitter search, but also how data can be collected and the opportunities to rapidly prototype some basic queries. I’m conscious that I haven’t provided any answers about how this can be used within learning analytics beyond surface activity monitoring, but I’m going to let you work that one out. If you want to see some of my work in this area you might want to check out the following posts: