I suppose I should start with why you would want to do this. Every time I join a Blackboard Collaborate session it’s like stepping back in time. Besides the appearance, the technology in Collaborate is increasingly out of step with current trends. The software is built on Java, which is claimed to run on over 3 billion devices. Unfortunately, for a number of users those 3 billion devices often don’t include the computer on their desk. This is where the problems start: without sufficient permissions you won’t be able to install Java. To Blackboard’s credit they have spent time developing mobile apps, but not everyone is going to be able to use these either.

Aware of these barriers, for ocTEL we decided to investigate streaming Collaborate sessions to YouTube. The main advantage for us in getting this to work is that as well as providing an alternative way to watch the session, we immediately have a recording to share with those who missed it. You can see the results in this session from week 3 of ocTEL.

In this post I’ll outline the technique we use, which can also be applied more generally to any desktop application. It’s also worth highlighting that this is just one of many ways of streaming your desktop, and you could achieve similar results using a Google Hangout On Air or the free ‘Wirecast for YouTube’ software (Mac|Windows). The reason we didn’t go down that route was that we wanted more control over the part of the screen being shared and we didn’t want to have to buy a Wirecast Pro license.


Not sure if this is a documented feature but you can search within Google+ Communities by adding s/yoursearchterm to the community home address (operators like AND/OR also appear to work).

Emma Duke-Williams commented:

I have some students using a private community for group work, and they have to provide a list of posts for assessment.

The only way we'd found to search just that community was to scroll down the page to the start of the community -then use the browser's search feature, but that only found the posts they'd started, not the discussions. I have just tested this with my most prolific student & seem to have found both his posts & his comments.

I tried the ‘s’ switch in the url after seeing a Googler share a link which lists all the ‘Apps Script’ communities, so it looks like it can be used elsewhere in the url.

I notice however that using this didn’t always return the expected result, which is worth bearing in mind.


I’m a self-confessed RSS junkie (I ❤ RSS), so when topping up my feeds I wanted a quick way to add them to Feedly (my current RSS reader of choice). I already use the Chrome RSS Subscription Extension (by Google), which gives me a handy icon in my address bar when a feed is detected.

RSS Subscription Extension (by Google)

Unfortunately though it doesn’t have Feedly as a service …

Manage feeds

Clicking on ‘Manage…’ gives the rather daunting

Edit feed reader dialog

A quick search for a Feedly specific url turned up Add Feedly to Firefox's Feed Handlers List and I recognised that the browser.contentHandlers.types.#.uri would do the trick. So using the following in the Edit feed reader dialog:

I’m back to my feed-munching ways.


From the postbag Marjolein Hoekstra (CleverClogs) writes:

Short description
Can you make a Google Script for me that compares two strings character by character? If differences are found, the script should point these out. If no differences are found at all, the script should put out the text "[ id. ]" .

Detailed description
I have two columns containing lists of horizontally identical, but sometimes almost identical text strings. This is on purpose. Each row has another couple of words that need to be compared.

I'd like to compare them on a character by character basis, and then point out in the second column at which positions it differs from the first, for example like this:



If you compare these two, you'll see that cell B2[3] has 'K' where A2[3] reads 'C'.

My envisioned formula would then populate cell C2 with: "[ – – K – – ]"

As far as I can tell, I'd need a Google Script that parses both strings character by character and outputs "–" when they are identical, or the value of the character from B2 when they differ. It should be relatively simple, with a FOR loop. Thing is, I've never written a Google Script, and it's a bit daunting for me to start on my own.

Note that LEN (A) is always identical to LEN (B)

Background info
In case you're interested in the actual use case: I want to use this formula to compare strings of Chinese characters, where the first column contains the traditional writing of these characters (typically requiring more strokes) and the second column containing the simplified writing of those same characters. Sometimes the characters are different, sometimes they are not. You can see this clearly in the screenshot below.

The Google Spreadsheet is used as input for a flashcard deck I'm building, using the iPhone app Flashcard Deluxe (top-notch system, highly flexible) [also available for Android].


Google Spreadsheet Example Flashcard Deluxe

There's no need to use Chinese characters to test the formula, I'm just providing this so that you know in what context the formula will be used.

The Solution

My initial thought was to use existing formulas to SPLIT the cell text into individual character values and then do a comparison, but unfortunately the SPLIT formula requires a character to split on. So instead I turned to Google Apps Script and wrote the following custom formula:

function stringComparison(s1, s2) {
  // let's test both variables are the same object type, if not throw an error
  if (Object.prototype.toString.call(s1) !== Object.prototype.toString.call(s2)) {
    throw("Both values need to be an array of cells or individual cells");
  }
  // if we are looking at two arrays of cells make sure the sizes match and only one column wide
  if (Object.prototype.toString.call(s1) === '[object Array]') {
    if (s1.length != s2.length || s1[0].length > 1 || s2[0].length > 1) {
      throw("Arrays of cells need to be same size and 1 column wide");
    }
    // since we are working with an array initialise the return
    var out = [];
    for (var r in s1) { // loop over the rows and find differences using diff sub function
      out.push([diff(s1[r][0], s2[r][0])]);
    }
    return out; // return response
  } else { // we are working with two cells so return diff
    return diff(s1, s2);
  }
}

function diff(s1, s2) {
  var out = "[ ";
  var notid = false;
  // loop to match each character
  for (var n = 0; n < s1.length; n++) {
    if (s1.charAt(n) == s2.charAt(n)) {
      out += "–";
    } else {
      out += s2.charAt(n);
      notid = true;
    }
    out += " ";
  }
  out += "]";
  return (notid) ? out : "[ id. ]"; // if notid(entical) return output or [ id. ]
}
One of the things to be aware of is Google Apps Script formulas are associated with a spreadsheet. You can't globally use a custom formula unless the script is attached. Fortunately when copying a spreadsheet you also get a copy of the script, so providing templates is a way around this.

With this limitation in mind I thought I’d have another go at cracking this with built-in formulas … and guess what, it is possible. The key to unlocking this was when playing with the REGEXREPLACE formula I accidentally turned ‘ABCDE’ into ‘,A,B,C,D,E,’ by using =REGEXREPLACE(A20,"(.*?)",","). My RegEx is terrible so I’ll let someone else explain how this works in the comments, but getting to this point meant I could use a combination of SPLIT and REGEXREPLACE to do a character-by-character comparison on two cells of text. The final version of the formula goes (comparing cell A14 to B14):

=IF(EXACT(A14,B14),"[ id. ]","[ "&JOIN(" ",ARRAYFORMULA(REGEXREPLACE(SPLIT(REGEXREPLACE(B14,"(.*?)",","),","),SPLIT(REGEXREPLACE(A14,"(.*?)",","),","),"–")))&" ]")
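As a cross-check on what that formula chain is doing, here is a plain JavaScript sketch of mine (not part of the spreadsheet) of the same pipeline: pad the string with commas, split it back into characters, compare position by position, then join:

```javascript
// Mimic =REGEXREPLACE(X,"(.*?)",",") — the empty lazy match fires at every
// position, so "ABCDE" becomes ",A,B,C,D,E," and splitting on "," (then
// dropping the empty ends) recovers the individual characters.
function explode(s) {
  return s.replace(/(.*?)/g, ",").split(",").filter(function (c) {
    return c !== "";
  });
}

// Character-by-character comparison with the same output shape as the formula.
function compareCells(a, b) {
  if (a === b) return "[ id. ]"; // the EXACT() branch
  var as = explode(a);
  var marks = explode(b).map(function (c, i) {
    return c === as[i] ? "–" : c; // the REGEXREPLACE over the SPLIT arrays
  });
  return "[ " + marks.join(" ") + " ]";
}

compareCells("ABCDE", "ABKDE"); // → "[ – – K – – ]"
```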

My rough workings are embedded below. You can also make a copy of the entire project including the Apps Script solution here.

Update: Bruce Mcpherson has posted an alternative formula to do this which goes like:

"[ " & CONCATenate(ARRAYFORMULA(if(mid(A31, row(indirect("x1:x"&len(A31))) ,1)=mid(B31,row(indirect("x1:x"&len(A31))),1)," – "," "&mid(B31,row(indirect("x1:x"&len(A31)))&" ",1) ))) &" ]"

As you will see from the comments thread on that post, Marjolein was having problems using my version with a Chinese character set. Adding this to the example spreadsheet I'm unable to replicate the error, but have encountered the problem here. If anyone can spot the difference I'd welcome your thoughts.

Update 2: Bruce pointed out that "the likely issue is that the columns with the problem are times - the characters mean AM. The same thing would probably happen with numbers. Have you tried wrapping the cell references in concatenate() to convert to a string?"

I said: ah I see what you mean, 时 is being interpreted as 上午12:00:00. Not sure how I'd wrap the concatenate with my regexreplace. Your solution looks better all round, so rather than losing sleep I'd go with that.


This was the question that came in over the wire this morning:

My first thought was the Google Visualisation API Query Language, which would let you do a SQL-type query and LIMIT the response to 1 record with an OFFSET to a particular row. The Google Code Playground has a nice example of this to play with to see what you can do; even better, Tony Hirst’s Guardian Data Explorer helps you generate the query for an html view (although Tony hasn’t implemented LIMIT and OFFSET). So below is an example spreadsheet:

Example Spreadsheet

… and using the query select * limit 1 offset 3 with &key=rYQm6lTXPH8dHA6XGhJVFsA&gid=0

We get … (the important bit is ‘OFFSET 3’ where offset 0 = row 2, offset 1 = row 3, etc.)
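The query URL itself is easy to put together in code. Here is a hedged JavaScript sketch: the tq/key/gid parameters follow the Visualization API's standard query pattern, and the key below would be your own spreadsheet's key:

```javascript
// Build a Google Visualization API query URL returning a single row.
// offset 0 = row 2, offset 1 = row 3, etc. (row 1 is the header).
function gvizRowUrl(key, gid, offset) {
  var tq = encodeURIComponent("select * limit 1 offset " + offset);
  return "https://spreadsheets.google.com/tq?tq=" + tq +
         "&key=" + key + "&gid=" + gid;
}

// e.g. fetch row 5 of the sheet (offset 3):
gvizRowUrl("rYQm6lTXPH8dHA6XGhJVFsA", 0, 3);
```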

Google Visualization html row

So a couple of issues. First the spreadsheet needs to be File > Published to the web and the result is also read-only:

Thinking about the other views offered by Google on Spreadsheets, it occurred to me the mobile view might be a solution. The mobile view, if you don’t use a native app, is List View (here is more about List View):

List View 

… which allows you to edit a row of data

Edit row in List View

The final trick was to change the rows per page to 1 and then use the page query parameter to select the row, e.g. to open the spreadsheet to edit row 5 we set &page=4 (the header isn't counted as a row). You can also select the sheet by changing the &gid= number. Tip: opening your spreadsheet and switching to 'List View' will help you build the URL.

{your_key}&type=view&gid=0&f=true&sortcolid=-1&sortasc=true&page=4&rowsperpage=1
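The page arithmetic above can be sketched in JavaScript. The base path here is illustrative (reuse whatever your own List View URL starts with); only the query-string construction matters:

```javascript
// Build a List View URL that opens the spreadsheet at a given data row
// for editing. The header isn't counted, so data row N lives on page N-1.
function listViewEditUrl(base, key, gid, row) {
  return base + "?key=" + key + "&type=view&gid=" + gid +
         "&f=true&sortcolid=-1&sortasc=true" +
         "&page=" + (row - 1) + "&rowsperpage=1";
}

// e.g. open row 5 of the first sheet for editing:
listViewEditUrl("https://docs.google.com/spreadsheet/lv", "{your_key}", 0, 5);
```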

Edit single row in List View

So there you go: how to open a Google Spreadsheet at a specific row for editing, for Marjolein to use. The perfect solution … well, almost?

Update: Saqib Ali has kindly passed on this trick from one of the Google team (I don't think it would work in the original scenario, but it's still very useful to know):

Insert a comment at a specific cell and, in that comment, "plus someone", that is, type "+" and then the person's email or name. NOTE: you will get a list of people in your domain as soon as you type the "+", filtered as you start typing their name/email. The person you "plussed" will get an email with both the contents of the cell and your comment, and the link in that email will take them DIRECTLY TO THE TARGET CELL with the comment activated.

The 2013 New Year’s greeting from the French advertising and public relations company Publicis Groupe, delivered by CEO Maurice Lévy, is making multiple appearances in my circles.

When I saw it my immediate thought was how did they do it? In the official press release they say:

It’s just one of the many very human interpretations of the standard YouTube functionality taken for granted today.

Looking for YouTube player functionality left me scratching my head. Looking at the source gave a big clue:

Publicis Groupe custom tab code (click to enlarge)

… basically the entire video area including title and controls is an embedded iframe. This allows Publicis Groupe to create their own Flash based player with custom functionality for pause, play etc.

To get the iframe into their YouTube channel they used YouTube’s Custom tab settings which are available on Branded Channels.

Simple but very effective.


This method uses a UK-based SMS gateway and is most likely not suitable for international use.

Two years ago I wrote how you could run free SMS voting using the intelliSoftware SMS Gateway service. This recipe automatically forwarded text messages from the IntelliSoftware SMS gateway to a Blogger account using posting via email. Votes were then extracted from the messages in the blog’s RSS feed using some PHP code on my server.

Last year a modified version of this was used to collect votes for the poster competition at eAssessment Scotland 2011. I was recently asked if the recipe would still work for this year’s conference. It does but I thought I could make it better.

30 lines of code, source is in the template

The main change is to directly ingest SMS messages into a Google Spreadsheet (using 30 lines of code), which makes the data easier to manipulate and present. The method for doing this is relatively simple because the IntelliSoftware gateway has an HTTP interface and you can also use Google Spreadsheets as a Database – INSERT with Apps Script form POST/GET submit method.

If you would like to do this yourself here’s how:

  1. Signup for an account at intelliSoftware (it’s free!)
    Note: the username you select is also used to direct texts, so you might want to use a class or course name
  2. Open a copy of this Google Spreadsheet template (also free)
  3. Open Tools > Script editor...
  4. Select Run > setup and click OK, then Publish > Deploy as web app… and:
    - enter Project Version name and click 'Save New Version'
    - set execute web app 'as me'
    - security level as 'anyone, even anonymously'
  5. Click Update and copy the service url you are given (it will look like …[random_characters]/exec)
  6. Now open your IntelliSoftware control panel
  7. Click on Forwarding and change, tick 'Enable incoming message forwarding' and change forwarding type to http
  8. Copy the web app url into the address field and click Save

To receive messages tell users to send a text message to 07786 XXX XXX with ‘xyz and their message’ (where 07786 XXX XXX is the mobile number found in the Trial Service section and xyz is your username created with intelliSoftware).

Simple response graph

In the example template I show how you can parse messages to generate a response graph. You might also want to look at how I’ve used a Google Form for Hacking stuff together with Google Spreadsheets: A simple electronic voting system. At the very basic level you’ve got a free SMS textwall to play with. If you do come up with any interesting mashups please leave a note in the comments :)
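The parsing step can be sketched in plain JavaScript. The message shape (gateway username followed by a vote keyword) is the one described above; the function name is mine, not the template's:

```javascript
// Tally votes from forwarded SMS texts of the form "xyz A", where "xyz"
// is the gateway username and the next word is the vote being cast.
function tallyVotes(messages) {
  var counts = {};
  messages.forEach(function (msg) {
    var parts = msg.trim().split(/\s+/);      // ["xyz", "A", ...]
    var vote = (parts[1] || "").toUpperCase(); // normalise the keyword
    if (vote) counts[vote] = (counts[vote] || 0) + 1;
  });
  return counts;
}

tallyVotes(["xyz A", "xyz b", "xyz A"]); // → { A: 2, B: 1 }
```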


This morning I finished listening to Episode 5 of Data Stories: How To Learn Data Visualization. Data Stories is a bi-weekly podcast on data visualisation produced by Enrico Bertini and Moritz Stefaner, episode 5 also featuring Andy Kirk. For anyone interested in Data Visualisation I’d highly recommend you give it a listen.

Like many others I’m at the beginning of my data visualisation journey, one of the things this episode highlighted was there is a whole world of data visualisation experts out there that I’ve yet to start stealing learning from. Fortunately today another Visualisation expert, Nathan Yau (FlowingData), posted his list of Data and visualization blogs worth following. Perfect!

I could’ve gone through the list and individually subscribed to each of the blogs’ feeds, but I’m lazy (so lazy that a 15 minute hack has turned into a 3 hour write-up <sigh>) and just wanted to dump them into my Google Reader. This is a problem Tony Hirst has encountered in Feed-detection From Blog URL Lists, with OPML Output. One thing that is not clear is how Tony got his two-column CSV of source urls. There are various tools Tony could have used to do this. Here’s my take on converting a page of blog urls into an OPML bundle.

Step 1 Extracting blogs urls: Method 1 using Scraper Chrome Extension

“Scraper is a Google Chrome extension for getting data out of web pages and into spreadsheets.”

Chrome users can grab a copy of Scraper here. Once installed, go to Nathan Yau's Data and visualization blogs worth following, right-click on the first link in the list and select ‘Scrape similar’

scraper menu

In the window that opens you should get something similar to the one below. Scraper has two options for identifying the parts of the page you want to extract: XPath or jQuery selectors. Both of these have similar coding structures but for this example I’m going to stick with XPath. XPath basically provides a way to identify parts of the XML/HTML structure and extract content (if you are not familiar with XPath, w3schools is a great starting point).

Scraper dialog

In this example Scraper should default to ‘//div[1]/div[2]/ul[1]/li/a’. Here’s a quick explanation of how I read this query.  Because it starts with // it will select “nodes in the document from the current node that match the selection no matter where they are” for me this is the trigger to read the query from right to left as we are matching an endpoint pattern. So:

match all <a> in all <li> in first <ul> of second <div> (<div class=”entry-content”>) of first <div> (<div class="entry">)

this gives us the links from the first block of bullet points. We want the links from all of the bullet-point lists, so the pattern we want is

match first <a> in all <li> in all <ul> of second <div> of first <div>

So basically we need to switch a to a[1] and ul[1] to ul, e.g. ‘//div[1]/div[2]/ul/li/a[1]’. Edit the XPath query, and in the columns section beneath change the order by clicking and dragging so that @href/URL comes first. Click on the ‘Scrape’ button to get a new preview which should now contain a list of 37 urls. Click on Export to Google Docs … You are now ready to move to Step 2 Auto-discovering feed urls below.

Step 1 Extracting blogs urls: Method 2 using Google Spreadsheet importXML function

Another way to get this data is to scrape it directly into a Google Spreadsheet using the importXML function. This function also uses XPath to extract parts of a webpage so we can reuse the query from Method 1 but get the data straight into a spreadsheet (it’s also a live link, so if Nathan adds a new link the spreadsheet will automatically update to include it). Let’s give it a go.

Create a new spreadsheet and in cells A1 to C1 enter the column headings Link, Title and Url. Next in cell A2 enter:


Note the addition of @href. This is included to extract the href attribute in the <a>. You should now have a similar list of 37 urls from Nathan’s post. To get titles we could enter another importXML function in cell B2 using the XPath ‘//div[1]/div[2]/ul/li/a[1]’ which will extract the text between <a></a>. Another way is to actually scrape the data from the target url. So in cell B2 enter:


So this will go to the url in A2 and extract anything wrapped in <title>.

Now select cell B2 and fill the column down to get titles for all the urls. Finally we need to select the entire B column and Copy/Paste values only. The reason we do this is Google Spreadsheets only allows 50 importXML functions per spreadsheet and we’ll need 37 more to get the RSS feeds for these sites.

Copy Paste as Values

Step 2 Auto-discovering feed urls

Initially I tried using Feed Autodiscovery With YQL with importXML using an XPath of "//link/@href", but I was not getting any results. So instead I decided to auto-detect the feed directly using importXML. In cell C2 enter:


This time the XPath starts at the XML tree root (<html>) and looks in the <head> for the first link with the attribute rel=’alternate’. From Tony’s post:

Remember, feed autodiscovery relies on web page containing the following construction in the HTML <head>element:
<link rel=”alternate” type=”application/rss+xml” href=”FEED_URL” title=”FEED_NAME” />

[I tried using //link[@rel="alternate" and @type="application/rss+xml"] but Google Spreadsheet didn’t like it, instead grabbing the first rel=alternate link]

Fill cell C2 down the rest of the column to get RSS feeds for the other urls. You’ll notice that there’s a #N/A for one of the sites; this is because their feed isn’t auto-discoverable. Visiting their site there is an XML link ( ) that we can just paste into our spreadsheet (tidying data is a usual process in data visualisation).
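Pieced together, the three formulas would look something like this (reconstructed from the descriptions above; the post URL is a placeholder, and the C2 XPath is my guess at the simplified rel=alternate query Google Spreadsheets accepted):

```
A2: =importXML("http://example.com/nathans-post", "//div[1]/div[2]/ul/li/a[1]/@href")
B2: =importXML(A2, "//title")
C2: =importXML(A2, "//link[@rel='alternate']/@href")
```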

Step 3 Generating an OPML bundle

Spreadsheet -> OPML Generator

You should now have a spreadsheet like this one with 3 columns of data (if you used the Scraper extension in Step 1/Method 1 you’ll need to make sure your columns are headed Link, Title and Url for the next step). Next, to turn our spreadsheet of feeds into an OPML bundle. Fortunately this step has been made super easy by the Spreadsheet -> OPML Generator. Just follow the instructions on this site and seconds later you’ve got:

OPML File of Nathan Yau’s recommended Data and Visualisation Blogs

And because I’ve imported these into Google Reader here’s an aggregated page of their posts.
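If you’re curious what the generator is actually doing, the transformation is small enough to sketch in JavaScript. The field names mirror the spreadsheet columns (Link = site, Url = feed); this is my approximation, not the generator’s actual code, and it assumes titles don’t need XML escaping:

```javascript
// Turn rows of {title, link, url} into a minimal OPML outline
// suitable for importing into a feed reader as a bundle.
function toOpml(feeds) {
  var items = feeds.map(function (f) {
    return '    <outline type="rss" text="' + f.title + '"' +
           ' xmlUrl="' + f.url + '" htmlUrl="' + f.link + '"/>';
  }).join("\n");
  return '<?xml version="1.0" encoding="UTF-8"?>\n' +
         '<opml version="1.0">\n  <body>\n' + items + '\n  </body>\n</opml>';
}

toOpml([{ title: "FlowingData", link: "http://flowingdata.com",
          url: "http://flowingdata.com/feed" }]);
```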


Tony Hirst said:

I said:

and the how to

Feed autodiscovery scraper - based on CC-BY psychemedia


My solution for extracting site activity data from Delicious isn’t going so well (their API is letting me down). Instead I thought I’d share how I use Google Reader to create a searchable archive of Twitter mentions. It’s not particularly a new trick and you’ll find a number of similar how-to’s so this post is more about awareness raising.

Here’s the recipe:

  1. Fire up Google Reader and click on the ‘Subscribe’ button
  2. In the box enter the search feed url, replacing the text yourusername with your twitter screen name. The search is basically looking for all mentions of yourusername -from:yourusername. If you want to include things you tweet just remove +-from%3Ayourusername from the url
  3. Click Add and it’ll start pulling in @replies and @mentions
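The subscribe url in step 2 follows the Twitter search Atom feed pattern of the time. A sketch of how it’s built (the host/path is the old Twitter Search API, and the username is a placeholder):

```javascript
// Build the Twitter search Atom feed url for mentions of a user,
// excluding their own tweets (drop the -from: clause to include them).
function mentionsFeedUrl(username) {
  var q = encodeURIComponent(username + " -from:" + username);
  return "http://search.twitter.com/search.atom?q=" + q;
}

mentionsFeedUrl("yourusername");
```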

To search for those useful links people sent you but you’ve forgot enter a term in the search box and select the feed you just created.

Google Reader search tweets

I’m not sure how many tweets Google Reader will cache (I’ve seen it mentioned that all are saved), so you don’t have to worry about the usual 7-day Twitter search limit. This solution is also not limited to mentions; you can use any search operator you like.


This morning I’ve managed to confuse myself over which are the best formats and ways to publish this blog. So in this post I review what I already use and show how you can embed ePub, mobi and Kindle links into your blog using the Kindle It service.

I’ve always been interested in discovering new ways to automatically publish in as many different formats as possible. I can’t remember in which order these developments came about but there was:

  • HTML – simple rendering of WordPress posts in html which over the years has been wrapped in various themes, including a mobile-friendly WPTouch format
  • RSS – another out-of-the-box WordPress (and other blog/news site) staple
  • PDF (aka MASHezine) – this is a newspaper format of my last 10 posts. Originally I used HP’s tabbloid service to get an emailed PDF of my RSS feed which I manually uploaded, then I developed a Make Tabbloid WordPress plugin which used their API to do this automatically … until, with no notice, they pulled their API. In the end this was a good thing as I rewrote the plugin using the open source PDF Newspaper.
  • Email – in my time at the RSC, when I was out and about it was very clear that a lot of people didn’t use or understand RSS feeds (commoncraft RSS in Plain English). Aware of not wanting to overburden overworked academic staff with my ramblings, I use the MailPress WordPress plugin to distribute a monthly digest wrapped in a custom theme. Here’s an example of last month’s. If you want to subscribe to this visit my blog (full not mobile) and there is a box half way down the right-hand side (as I’m no longer at the RSC I’m less caring towards overworked academics: there is also an option for a post-by-post email update ;)

This is where I start getting confused. As I’m not an ebook reader user I don’t really know the best way for you to consume my content. Maybe you use the free Calibre ebook management software to convert your favourite sites into ebook format and sync with your device? Maybe there is a Kindle service you use to do this?

I’ve gone through a couple of ebook services in the past. First there was FeedBooks, with which you could get a RESTful url to the latest posts from your RSS feed in mobi/ePub/Kindle formats (this feature was pulled by FeedBooks). Then I experimented with NewsToEbook, but this takes you off site unless you manually update the links to the cached output. Recently I had a quick look at dotEPUB, which has a widget you can drop in to your website (or you could do something with the dotepub API), but you are limited to ePub format.

Instead I’ve returned to another offering called Kindle It. Here’s why:

  • As well as Kindle it can generate ePub and mobi
  • For ePub and mobi it looks like I can use a RESTful url (i.e. I enter it once and Kindle It does the rest of the work keeping the output up-to-date)
  • The service allows you to email a post straight to your kindle (Use case scenario I have in mind is: Jeff is browsing my blog, spots one of my verbose posts and wants to read it later. Clicking on Kindle It he is able to send it to his Kindle for reading on the train)

If you’d like to use Kindle It on your own site below is the snippet of code I used which automatically passes the current page url to Kindle It. Update: I've written this little widget which I can call with <script type="text/javascript" src=""></script>

So, ebook users, does this option work for you or is there a better way?