Google Apps Script


From the postbag, Marjolein Hoekstra (CleverClogs) writes:

Short description
Can you make a Google Script for me that compares two strings character by character? If differences are found, the script should point these out. If no differences are found at all, the script should put out the text "[ id. ]".

Detailed description
I have two columns containing lists of horizontally identical, but sometimes almost identical text strings. This is on purpose. Each row has another couple of words that need to be compared.

I'd like to compare them on a character by character basis, and then point out in the second column at which positions it differs from the first, for example like this:

A2: ABCDE

B2: ABKDE

If you compare these two, you'll see that cell B2[3] has 'K' where A2[3] reads 'C'.

My envisioned formula would then populate cell C2 with: "[ – – K – – ]"

As far as I can tell, I'd need a Google Script that parses both strings character by character and outputs "–" when they are identical, or the value of the character from B2 when they are not. It should be relatively simple, with a FOR loop. Thing is, I've never written a Google Script, and it's a bit daunting for me to start on my own.

Note that LEN (A) is always identical to LEN (B)

Background info
In case you're interested in the actual use case: I want to use this formula to compare strings of Chinese characters, where the first column contains the traditional writing of these characters (typically requiring more strokes) and the second column contains the simplified writing of those same characters. Sometimes the characters are different, sometimes they are not. You can see this clearly in the screenshot below.

The Google Spreadsheet is used as input for a flashcard deck I'm building, using the iPhone app Flashcard Deluxe (top-notch system, highly flexible) [also available for Android].

Screenshot: Google Spreadsheet example in Flashcard Deluxe

There's no need to use Chinese characters to test the formula, I'm just providing this so that you know in what context the formula will be used.

The Solution

My initial thought was to use an existing formula to SPLIT the cell text into individual character values and then do a comparison, but unfortunately the SPLIT formula requires a character to split on. So instead I turned to Google Apps Script and wrote the following custom formula:

function stringComparison(s1, s2) {
  // let's test both variables are the same object type, if not throw an error
  if (Object.prototype.toString.call(s1) !== Object.prototype.toString.call(s2)) {
    throw("Both values need to be an array of cells or individual cells");
  }
  // if we are looking at two arrays of cells make sure the sizes match and only one column wide
  if (Object.prototype.toString.call(s1) === '[object Array]') {
    if (s1.length != s2.length || s1[0].length > 1 || s2[0].length > 1) {
      throw("Arrays of cells need to be same size and 1 column wide");
    }
    // since we are working with an array initialise the return
    var out = [];
    for (var r = 0; r < s1.length; r++) { // loop over the rows and find differences using diff sub function
      out.push([diff(s1[r][0], s2[r][0])]);
    }
    return out; // return response
  } else { // we are working with two cells so return diff
    return diff(s1, s2);
  }
}

function diff(s1, s2) {
  var out = "[ ";
  var notid = false;
  // loop to match each character
  for (var n = 0; n < s1.length; n++) {
    if (s1.charAt(n) == s2.charAt(n)) {
      out += "–";
    } else {
      out += s2.charAt(n);
      notid = true;
    }
    out += " ";
  }
  out += "]";
  return (notid) ? out : "[ id. ]"; // if notid(entical) return output, otherwise [ id. ]
}
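Once saved in the spreadsheet's script editor, the custom formula can be used on single cells or on ranges of cells. For example, with the sample values from the question:

=stringComparison(A2, B2)         returns "[ – – K – – ]"
=stringComparison(A2:A10, B2:B10) returns one result row per input row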

One of the things to be aware of is Google Apps Script formulas are associated with a spreadsheet. You can't globally use a custom formula unless the script is attached. Fortunately when copying a spreadsheet you also get a copy of the script, so providing templates is a way around this.

With this limitation in mind I thought I'd have another go at cracking this with built-in formula … and guess what, it is possible. The key to unlocking this was when playing with the REGEXREPLACE formula I accidentally turned 'ABCDE' into ',A,B,C,D,E,' by using =REGEXREPLACE(A20,"(.*?)",","). My RegEx is terrible so I'll let someone else explain how this works in the comments, but getting to this point meant I could use a combination of SPLIT and REGEXREPLACE to do a character by character comparison on two cells of text. The final version of the formula goes (comparing cell A14 to B14):

=IF(EXACT(A14,B14),"[ id. ]","[ "&JOIN(" ",ARRAYFORMULA(REGEXREPLACE(SPLIT(REGEXREPLACE(B14,"(.*?)",","),","),SPLIT(REGEXREPLACE(A14,"(.*?)",","),","),"–")))&" ]")

My rough workings are embedded below. You can also make a copy of the entire project including the Apps Script solution here.

Update: Bruce Mcpherson has posted an alternative formula to do this which goes like:

"[ " & CONCATenate(ARRAYFORMULA(if(mid(A31, row(indirect("x1:x"&len(A31))) ,1)=mid(B31,row(indirect("x1:x"&len(A31))),1)," – "," "&mid(B31,row(indirect("x1:x"&len(A31)))&" ",1) ))) &" ]"

As you will see from the comments thread on that post, Marjolein was having problems using my version with a Chinese character set. Adding this to the example spreadsheet I'm unable to replicate the error, but have encountered the problem here. If anyone can spot the difference I'd welcome your thoughts.

Update 2: Bruce pointed out that "the likely issue is that the columns with the problem are times - the characters mean AM. The same thing would probably happen with numbers. Have you tried wrapping the cell references in concatenate() to convert to a string?"

I said: ah I see what you mean, 时 is being interpreted as 上午12:00:00. Not sure how I'd wrap the concatenate with my regexreplace. Your solution looks better all round so rather than losing sleep I'd go with that.


The concepts used in this post are very similar to the Google Analytics superProxy (announced after my solution ;) the main difference being this works in Google Drive, meaning data access authentication is built in.

Access control to Google Analytics data is very coarse. Via the web interface you can have full access or none at all. Given the growing interest in data driven decision making at all levels of the institution I thought it would be beneficial to demonstrate how Google Spreadsheets and Google Apps Script could be used to selectively distribute Google Analytics data. This solution was developed for my session at the Institutional Web Managers Workshop 2013 (IWMW13) which I talk about in more detail at the end of this post.

Google Analytics Query Exporter

So here is the general flow. Google Apps Script, which is part of Google Drive, is used by Google Analytics admins to create custom queries which pull back slices of data. The admins can then control how these slices are distributed, either within the security of Google Drive or published to the web. Also, because Google Apps Script features 'script triggers', which can be time based, admins can preconfigure mini scripts to decide when and how the data is distributed. Examples include:

  • Add data slice to a Google Spreadsheet (Spreadsheets feature options for publishing to the web)
  • Turn data slice into a .csv file and attach it to an email for the recipient
  • Turn data slice into a .csv file and copy/share via Google Drive
  • Turn data slice into a .csv file and publish on Google Drive
  • All the above with different file formats including json

Here is the template I’ve developed which you can copy and use followed by some setup instructions (also there’s a setup video):

*** Google Spreadsheet GA Query Exporter ***

  1. File > Make a copy of this spreadsheet (you need to be logged in)
  2. Open Tools > Script editor and then:
    File > Upgrade authorisation experience;
    Once it's upgraded still in the Script editor click Resources > Use Google's APIs; and
    Click the 'Google API Console' link at the bottom of the dialog window;
  3. In the Google APIs Console switch the Analytics API 'on' and read/accept the terms (you may have to accept two separate terms windows)
  4. Close the Google APIs Console window and OK the Google API Services window in the Script editor
  5. Before closing the Script editor, Run > authoriseGA to authenticate the script

Once authenticated there are a number of ways to run the script. You can enter a query and click 'Get data' for one-off fetches, or open Tools > Script editor and modify the example cron jobs to automate collection and distribution of data (the cron jobs allow functionality that includes emailing/sharing links to csv files).

To create multiple queries in the same spreadsheet, duplicate this sheet and modify the query.

Tip: You can use a formula to build the start and end dates e.g. yesterday is =(TODAY())-1

To help admins construct these data slices the tool is designed to work in collaboration with the official Google Analytics Query Explorer 2. With this, users can experiment with queries like this one that returns the top blog posts based on visits. Within the Query Explorer I can export the Query URI and import it to a sheet in the template. The general workflow is demonstrated in the video below which shows how to set up an example query/export:

Using the example query from above I've created this template sheet. Similar to the example in the video, in cells E9 and E10 I use a formula to dynamically create a date range. Assuming I want to run the query on the 1st of each month for last month's stats, we start with the end-date generated using =TODAY()-1. To get the start-date we use the value of this field to get the 1st of last month using =DATE(YEAR(E10),MONTH(E10),1)

I could open the spreadsheet every month and manually click the ‘Get data’ button but instead I can setup a script trigger to run as a Month timer of the 1st between 1am and 2am.
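For reference, the same trigger can also be created programmatically rather than through the dialog. A minimal sketch (the function name matches the example scriptlet below):

function createMonthlyTrigger() {
  // run the export on the 1st of each month in the 1am-2am window
  ScriptApp.newTrigger("exampleCronJobWriteCSVtoParticularFolder")
    .timeBased()
    .onMonthDay(1)
    .atHour(1)
    .create();
}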

Setup script trigger/cron job

The code I run is a little scriptlet like:

function exampleCronJobWriteCSVtoParticularFolder(){
  // copy new data as csv and then email as attachment (copy also kept in Google Drive)
  var sheetO = getData("input"); // name of sheet with query to execute e.g. 'input'
  var csv = saveAsCSV(sheetO.getName(), sheetO.getName()+" "+Utilities.formatDate(new Date(), "GMT", "yyyyMMdd"));
  var folder = DriveApp.getFolderById("0B6GkLMU9sHmLbThITlNvb2dzREE"); // folder id from url (navigate to folder and extract for your own folders)
  folder.addFile(csv);
  MailApp.sendEmail("[email protected]", "New data", "A new csv for '"+sheetO.getName()+"' has been created and is in the folder "+folder.getUrl() ); // swap in the real recipient address
  // MailApp.sendEmail(recipient, subject, body, options) - More info https://developers.google.com/apps-script/reference/mail/mail-app
}

… which gets the data and then publishes it as a csv to this folder emailing a link to listed recipients. As the folder has been shared we can get access via Google Drive Host with the link https://googledrive.com/host/0B6GkLMU9sHmLbThITlNvb2dzREE/top-posts%2020130701.csv

public folder

The template includes several example scriptlets for you to play with and modify. For those who are interested the slides for my IWMW13 session are on slideshare. The session was designed to introduce Google Apps Script followed by the query exporter template. I only made this yesterday so it might be a bit alpha. Feel free to leave issues and suggestions in the comments. I'm sure there are a number of clarifications required.


In this post I want to cover three things. First I want to introduce a little app I've developed which allows you to create an RSS feed for any of your Gmail labels (with the option to remove certain links – useful if you don't want others unsubscribing you from mailing lists). Secondly I explain how it was made and how you can use it yourself. Finally I want to discuss how this could be used in an open course environment, utilising the vast processing power of services like Twitter and reusing their targeted marketing emails to your benefit with a bit of 'dark social judo'.

What is Gmail Label Feeder?

It’s a little Google Apps Script app that you can setup to select one of your Gmail labels, preview content (with the option to remove all the links you’d like not to publish, like unsubscription links) and publish a public feed of the result. This video (embedded below) gives an overview of the problem and how the ‘Gmail Label Feeder’ app works:


How can I setup Gmail Label Feeder and how does it work?

Whilst logged in to your Google account open this script and then follow the instructions below:

  1. In the Script Editor File > Make a copy (this makes your own personal copy of the script which lives in Google Drive. With this copy you can control permissions etc and means I accept no liability if it goes wrong, breaks, doesn't work)
  2. From the Script Editor Run > setup
    IMPORTANT: when authenticating use the Gmail account you want to create feeds for
  3. Still in the Script Editor File > Manage versions... and create an initial name and 'Save New Version'
  4. Then, Publish > Deploy as web app... and select 'Execute the app as: me' and allow anyone access even anonymously
  5. Finally open (and bookmark) the 'current web app url' to create (atom) feeds for your gmail labels

The process covering the creation of individual feeds is covered in the video above. The video below shows the five steps for the initial setup:

How it works

Having already used Eric Koleda's Feed+ script when creating a similar app to turn Scoop.it searches into an RSS feed, it wasn't too much effort to change the source data to a Gmail account. It's not the first time I've hacked the functionality of Gmail, having already used my inbox to re-enable Google Reader social share features, so I already knew/had code to GmailApp.search using the 'label:' search operator. One thing I struggled with was removing selected links from emails. The UI side was straightforward thanks to the fantastic framework already developed by Eric. Initially I tried Removing html tags and content where tag content matches an array of values using Xml.parse() but as you can see from the answer and comments from Jonathan Broughton and Bruce Mcpherson (thanks guys!) I used regex instead (only after getting caught out by line breaks and tabs).
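At its core the feed generation boils down to a search and a clean-up pass along these lines (a sketch only: the label name and item shape are illustrative assumptions, not the app's actual code):

function getLabelItems() {
  // fetch the 20 most recent threads carrying a label (label name is hypothetical)
  var threads = GmailApp.search('label:octel-daily', 0, 20);
  var items = [];
  for (var i = 0; i < threads.length; i++) {
    var msg = threads[i].getMessages()[0]; // first message in each thread
    var body = msg.getBody();
    // drop anchor tags whose attributes mention 'unsubscribe';
    // [\s\S] copes with the line breaks and tabs that caught me out
    body = body.replace(/<a[^>]*unsubscribe[^>]*>[\s\S]*?<\/a>/gi, '');
    items.push({ title: msg.getSubject(), date: msg.getDate(), content: body });
  }
  return items; // elsewhere these get rendered into an Atom feed
}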

Dark social judo: Pulling the email push

I was first made aware of ‘dark social’ via Alan Cann (who is ironically quoted in the THE today for his work around social media) and in particular his move to email as the main communication tool for students.

Alexis Madrigal at The Atlantic — who writes about the influence of what he calls “dark social” on engagement and traffic patterns. While everyone is busy watching Twitter and Facebook because they are easy to track, Madrigal argues that most social traffic still comes from old-fashioned or difficult-to-track sources like email and chat messages – (From Dark social: Why measuring user engagement is even harder than you think)

The use of email is something we've paid particular attention to in the open online course ocTEL, developing a daily newsletter to push an automated summary of course activity (talking of automated summaries, read Tony Hirst's Notes on Narrative Science and Automated Insights). There's nothing particularly new in this, but one hole I wanted to plug was getting an archive of these back into the ocTEL Course Reader (an RSS aggregation of all available course activity), hence the Gmail Label Feeder. Whilst doing this and picking over my own inbox for example emails I started to think about the 'push' I got from social sites like Twitter, Google and LinkedIn keeping me aware of activity and making suggestions for people and content I might like. Whilst some of these are very basic action reporting, others require a degree of processing to generate.

Social push

The particular scenario I had in mind was if, for example, you were running a course Twitter account: when you get the inevitable 'suggestions similar to' or 'do you know' emails, would there be some value in a judo-style 'pull when your opponent pushes', publishing this content so it's available to all your students? Even if this is of little individual value to the student, wouldn't aggregating this data in a machine readable way be useful down the line? Have a look at the menu of push options: wouldn't at least some of these be useful to your students? Should we be doing a bit more dark social judo?

Twitter dark social menu


The Google Apps Script team have recently announced a host of new features. The three that caught my eye were:

  • Script editor added to Google Docs and Forms
  • Addition of the Forms Service which lets you programmatically manipulate forms
  • Extending Google Docs functionality using Custom menus and user interfaces including creating custom sidebars

The last one in particular looked interesting. Having a scriptable area to supplement the main control area immediately made me think about resurrecting tools like the citation robot 'Igor', or supplementing Google Spreadsheets with on-the-fly graphs or extra info from 3rd party sites.

As Tom Smith (University of York) has discovered, sidebar integration in Google Spreadsheets isn't available yet, but the word from the Google I/O session announcing this feature (video not available yet) is it'll be here in a couple of weeks (see comments thread here).

Word Navigation Pane

So, like Tom, to kick the tyres on the Google Docs sidebar functionality I set myself a small project. One of the features of MS Word I like is the 'Navigation Pane', in particular for jumping around a document using section headings. Given this operates from a sidebar it seems an ideal candidate to try and replicate.

Looking at the Google Apps Script documentation we can see that we can getLinkUrl() from a TableOfContents within a Google Doc. Using an example from stackoverflow it’s easy to extract the link urls using:

var tocDat = {};
var doc = DocumentApp.getActiveDocument(); // get active document
for (var i = 0; i < doc.getNumChildren(); i++) { // loop all the document elements
  var p = doc.getChild(i);
  if (p.getType() == DocumentApp.ElementType.TABLE_OF_CONTENTS) { // if the element type is a TABLE_OF_CONTENTS extract item links
    var toc = p.asTableOfContents();
    for (var ti = 0; ti < toc.getNumChildren(); ti++) { // looping over each ToC item
      var itemToc = toc.getChild(ti).asParagraph().getChild(0).asText();
      var itemText = itemToc.getText();
      var itemUrl = itemToc.getLinkUrl();
      tocDat[itemText] = itemUrl; // store heading text/url as key/value pairs
    }
  }
}

It’s worth noting that to get this requires the user to have already inserted a table of contents into the document. There is an open issue ticket to do this using script. Something else I was unable to do was return what level the heading link was for (e.g. Heading 1, Heading 2 etc). To do this I had to loop across the entire document, which you can see in the final project code.
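For completeness, the menu and sidebar wiring itself only takes a few lines. A sketch (buildMapHtml() here is a hypothetical stand-in for turning tocDat into an HTML list of links, which the project code does for real):

function onOpen() {
  // adds the 'Custom' dropdown menu when the document opens
  DocumentApp.getUi().createMenu('Custom')
    .addItem('Show Document Map', 'showDocumentMap')
    .addToUi();
}

function showDocumentMap() {
  var html = HtmlService.createHtmlOutput(buildMapHtml()) // hypothetical helper
    .setTitle('Document Map');
  DocumentApp.getUi().showSidebar(html);
}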

Here is a copy of the example document with the code included. Because you need edit rights to run custom menus you'll need to File > Make a copy to get the 'Custom' dropdown menu option.

custom menu

The first time you select Custom > Show Document Map you get a big scary authentication window (another one of the new features announced was a pilot of a new authentication flow). Once you've clicked 'Ok' you can run Custom > Show Document Map which launches the sidebar:

doc map

At this point you are probably asking where the links in the document map are. For some reason the Caja sanitisation is stripping the anchor links. Regardless of this, if you dig around the page source you'll see that as part of the sanitisation links are given a target of _blank, which will open a new browser tab.

href target blank

In the sidebar documentation it says that communication with other Apps Script services is possible, which might be a way to hook the navigation functionality in, but as I can’t find any methods to change document position it looks like for now it’s a lost cause.

So while I've hit a dead-end, having the sidebar, particularly when it reaches Google Sheets, is a big bonus, but as always it's important to be aware of the limitations. I'm looking forward to what others come up with.

PS Must try the programmable forms next (it might be an opportunity to update EventManager v3)



This was a useful post to write and reflect on some ideas. If you are here because you want to make RSS feeds for Scoop.it post searches, here's the Feed+ Machine Chrome Web App and the standalone Feed+ Machine App.

Recently when reflecting on my career path two key moments came to mind. First, around 1999, having graduated as a structural engineer, I got a job converting course content marked up in Word documents into HTML. This was important because I was forced to understand the raw building blocks of the web; I was in a foreign world and I needed to learn the language, and fast.

Luke, view the source

A decade later my interests shifted, less about static content and more about dynamic data. Tapping into the work, primarily of Tony Hirst, my eyes were opened to the opportunity of remixing the web. Taking existing data, a dash of code/3rd party services, and creating new things and ideas. In this world data feeds are the new building blocks to play with. In particular RSS and Atom feeds are the prize. Usually openly available and easy to manipulate/render.

Luke, find the feed

Unfortunately this ease of access appears to be putting off the big 3rd party services. With the latest Twitter API atom isn't an option; with Google+, RSS feeds should be forgotten about. This is a great shame as a number of open online courses recognise the value of learners finding their own space, feeding their activity into tools which can aggregate, potentially sense-make, and feed the rhizomes of the networked learner.

Like other open courses this is the challenge we face in ocTEL, pulling distributed activity into the machine. For some services RSS is still championed. Diigo wear the RSS badge with pride…

RSS from Diigo

JISCMail go even further also flying the auto-discovery flag (yet another front being eroded) …

RSS from JISCMail

What about Delicious …

Where's the RSS from Delicious

Mendeley … exists but you have to go digging

Mendeley have some, but not telling you

Google+ … this is where it gets interesting

[trumpets] Feed+ by Eric Koleda

Eric Koleda has created a Chrome Web App (Feed+) that lets you create RSS(Atom) feeds for Google+ searches. Using Google Apps Script Eric has created a user interface and it does all the negotiation with the Google+ API converting the data into RSS. Given the availability of a Scoopit API and using Eric’s base code I’ve forked the project to create the Feed+ Machine Chrome Web App and the standalone Feed+ Machine App (Google login is required to manage your feeds). To begin with it only supports creating feeds from Scoop.it! post searches, but given Eric has done such a great job on the design it’ll be easy to extend. I’ll spare you the details of the code but the source files are here if you want to unpick what’s going on and ask questions.

Here's an example of a feed for Scoop.it posts referencing ocTEL, which we are already digesting in the ocTEL Course Reader. But why do this? My interest is primarily in capturing as much of the digital landscape as possible, each artefact collected adding more detail to the map. Whilst material pulled from Scoop.it is potentially yet another churn of existing content, who is sharing and any insight they add provides additional context, and may be another way in which participants can identify and migrate to new clusters of activity.

There is a downside to this approach. Having a search feed from Scoop.it is useful, but having wrestled with the Scoop.it API there is much more data available that isn't being captured, such as metadata around posts and topics. So perhaps like others I should abandon the feed, favour JSON and start filling the Tin Can.

PS NEW!!! CETIS briefing on Activity Data and Paradata for more developments in this area.

Just a quick note to highlight a guest post I’ve written for the Google Apps Developer Blog on How Apps Script Makes Classroom Observation Quicker and Easier. In this Google Apps Script example I show how a couple of lines of code can do some custom Google Form handling. In this case the scenario was:

Justin Marckel, the assistant principal at Cornatzer Elementary School in North Carolina, asked for help in modifying one of my existing Apps Script examples. Justin was recording teachers’ classroom activities using a Google Form, then manually copying and pasting data into separate spreadsheets for each teacher to review. Justin wanted to know whether there was a way for a Google Form to store the results in a master spreadsheet, then filter results to each teacher’s spreadsheet.

You can read the full post here.

PS Will Welch spotted that there will be new Google Apps Script/Google Forms functionality announced at Google I/O.



One of the nice things about open courses like ocTEL is that whilst having your own blogging space wasn't a mandatory requirement, we were aware that a number of participants had set one up anyway. Conscious that activities like blogging can be very lonely, and also aware that this is often a critical moment in motivating people to engage in reflective writing, we've been encouraging tutors to show these blogs some extra love, which seems to have gone down well.

This raises the question: given blogging is optional and we haven't asked people to self-declare if they are new to blogging, how do we identify this community? With this question in mind I quickly (the initial pass took me about 5 minutes thanks in part to earlier work in this area) put together a spreadsheet which took the participant blog list and got feed counts.

post table

How it was made

If you look at Sheet 1 you’ll see there is a static list of Feed urls (Column B). This was obtained by using the Chrome Screen Scrape extension. The reason it is static is I wanted a column where tutors could make comments and a dynamic import would screw this up. Looking at Sheet 2 cell A1 you can see how to do a dynamic list using the importXML function (see this post for more examples of spreadsheet import options).

To get the post counts and dates some Google Apps Script was required to write a custom function (custom function, you ask? Where have you been, read my blog). After that it was a bit of conditional formatting.
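The counting part only takes a few lines. As a sketch, assuming an RSS 2.0 feed (the spreadsheet version also pulls out post dates):

function feedPostCount(url) {
  // fetch the feed and count the <item> elements in the channel
  var response = UrlFetchApp.fetch(url);
  var doc = XmlService.parse(response.getContentText());
  return doc.getRootElement().getChild('channel').getChildren('item').length;
}

Used in a cell as =feedPostCount(B2) it returns the number of posts currently in that feed.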

What’s missing

It would be useful to have a dynamic list of feeds which could retain comment cells (ideas on how to do this very welcome. My thought was to do a cron job to read the sheet, fetch more feeds then output the result)

Another metric it would be useful to add is a comment count. So 1 post 3 comments would be identified as a lower priority than 1 post 0 comments.

So do you think this spreadsheet is potentially a useful tool in the open course toolbox? How would you improve it?


Just a note to say on 26th April at 4pm BST (in your timezone) I'll be giving a webinar on using Google Apps Script to extract data from Canvas (LMS by Instructure). Even if you're not a Canvas user hopefully there will be some general techniques you'll find useful. It might also be useful for people thinking about applying basic learning analytic techniques to their courses. I'll update this post with any resources from the session.

Martin Hawksey of Jisc CETIS started playing around with discussion usage data in Canvas when he was participating in the Learning Analytics course in Canvas Network. Using Google's cloud scripting service Google Apps Script, free for Google Apps and Google account users, he could dig deep into the data and start doing some interesting (and valuable) analysis all from the comfort and familiarity of a spreadsheet.

Join us for this free 50-minute technical webinar on Friday, April 26th as Martin details his experience using the Canvas APIs and walks through building scripts to get meaningful Canvas data using Google Apps Script and the Canvas APIs.

Registration is free and you can book here.

Here's a recording of the session and the bundle of links (including slides)


“Sankey diagrams are a specific type of flow diagram, in which the width of the arrows is shown proportionally to the flow quantity.” Wikipedia.

I first came across Sankey diagrams by (the OKFN’s latest School of Data’s contributors ;) Tony Hirst in d3.js Powered Sankey Diagram. Subsequently Bruce McPherson showed how to create Sankey diagrams from Excel also using d3.js.

Having collected some survey data for Analytics and Institutional Capabilities at #cetis13 (here's a copy of the form) we were looking for a way to take us beyond the Google Form reporting and gain extra insight. In particular I was interested in trying to see if there were any relationships between the multiple choice questions. Using a Sankey diagram seemed like a possible solution and my colleague David Sherlock quickly came up with a modification of Mike Bostock's Sankey example to accept a csv input (I'll link if/when it gets written up).

Seeing this I thought it might be useful to make a wrapper to generate Sankey diagrams for data stored in a Google Sheet. The solution was relatively straightforward, using the Google Visualisation API to get the data in the right shape for d3.js. An example of the prototype is here.

There's no interface yet for you to select a spreadsheet, sheets, columns etc but you can take your Google Spreadsheet, 'publish to the web', and then add the following data to the url.

One of the issues with this solution is you might not want to make all your data available. To get around this I’ve written a Google Apps Script that lets you use a custom formula to preformat the data. To see this in action this Spreadsheet contains an example. The formula is in cell A1 and uses the format =setSankey(datarange, cols , separator)

  • datarange – sheet/cell reference for source data e.g. 'Form Responses - Edit'!A2:D (note: must start with column A)
  • cols – comma separated list of columns to use to generate a chart e.g. "B,C,D"
  • separator {optional} – used to split multi-value cells, defaults to ", " e.g. " | "

To use this in your own spreadsheets open Tools > Script editor and copy the code from this gist. Here’s an example url using pre-processed data. My main difference is the addition of the &output=1 to the querystring.
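If you're curious what the custom function is doing, here's a stripped-down sketch (the gist is the working version; the internals below are a reconstruction from the formula's described behaviour, so treat the details as assumptions):

function setSankey(datarange, cols, separator) {
  separator = separator || ", "; // default multi-value separator
  var data = SpreadsheetApp.getActiveSpreadsheet().getRange(datarange).getValues();
  // map column letters to 0-based indexes (hence datarange must start at column A)
  var colIdx = cols.split(",").map(function (c) {
    return c.trim().toUpperCase().charCodeAt(0) - 65;
  });
  var counts = {};
  // tally source->target pairs for each adjacent pair of chosen columns
  for (var r = 0; r < data.length; r++) {
    for (var c = 0; c < colIdx.length - 1; c++) {
      var sources = String(data[r][colIdx[c]]).split(separator);
      var targets = String(data[r][colIdx[c + 1]]).split(separator);
      for (var s = 0; s < sources.length; s++) {
        for (var t = 0; t < targets.length; t++) {
          var key = sources[s] + "\t" + targets[t];
          counts[key] = (counts[key] || 0) + 1;
        }
      }
    }
  }
  // return a 2D array the d3.js page can read as source, target, value rows
  var out = [["source", "target", "value"]];
  for (var k in counts) {
    out.push(k.split("\t").concat(counts[k]));
  }
  return out;
}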

Obviously creating your own querystrings to render the data isn't ideal and it would be relatively straightforward to create a UI wrapper similar to the one used in EDGESExplorer, but it's something I'm reluctant to do unless there is enough demand. The other consideration is the question: does the Sankey diagram provide useful insight for the type of data or is it just more 'damn lies'?

It would have of course been nice to write a Google Gadget to include this in a Spreadsheet … but Google are discontinuing those :(.


In my last post on Canvas Network Discussion Activity Data I mentioned I was a little disappointed to not be able to use social network analysis (SNA) modelling techniques on the Canvas Network discussion boards. My main barrier was accessing the data via the Canvas API using my preferred toolset. Fortunately Brian Whitmer at instructure.com (the company behind Canvas) posted a comment highlighting that as a student on the course it was easy for me to get access to this data using a token generated on my Canvas profile page. With this new information, in this post I'll cover three main areas:

  • a very quick introduction into techniques/opportunities for analysing threaded networks using SNA;
  • how I retrieved data from the Canvas platform for the #LAK13 discussions; and finally
  • some analysis using the NodeXL add-on for Microsoft Excel (Windows).

On Friday 1st March at 4pm GMT I'll also be having a live Hangout on Air with Marc Smith, one of the original creators and continued project coordinator of NodeXL. The live feed will be embedded here and you can ask questions via Twitter or Google+ using the combined hashtags #lak13 and #nodexl, e.g. 'What's the best place to find out more about NodeXL? #lak13 #nodexl'. For the hangout session we'll look at how easy it is to use NodeXL to analyse a Twitter hashtag community in a couple of clicks. [The rest of this post is introducing a more advanced use of NodeXL so if I lose you in the rest of this post fear not as the session will be a lot easier going]

Opportunities for analysing threaded networks using SNA

Hello graph

WiredUK friend/follower graph

One of the dangers of SNA is people see images like the one to the right and miss the point of using this modelling technique (the image is from one of my first goes at using NodeXL so I'd be the first to admit there's room for improvement). If you do, there are a couple of things to bear in mind. Often these images are exhaust, generated as a snapshot of a visual and/or exploratory analytic. A certain level of literacy is required to understand the underlying structure of the graph. Taking this last point, a threshold concept for me was understanding that these graphs are more often than not constructed from an edge list, which is essentially two columns of data representing a start point and an end point for an edge. For example, all the names in columns Vertex 1 and Vertex 2 are nodes and each row represents an edge between the nodes, so in this case Beth –> Adam generates the highlighted edge.

Basic edge list and force layout
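If it helps to see it without the screenshot, an edge list is nothing more exotic than two columns of names (the Beth/Adam pair is from the example above; the other rows are made up for illustration):

Vertex 1    Vertex 2
Beth        Adam
Adam        Chris
Chris       Beth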

There is a lot more to understand about these types of graphs, but this basic concept means I know that if I have any relationship data it's easy to graph and explore.

Hello SNAPP

At this point it’s worth mentioning the browser plugin SNAPP.

The Social Networks Adapting Pedagogical Practice (SNAPP) tool performs real-time social network analysis and visualization of discussion forum activity within popular commercial and open source Learning Management Systems (LMS). SNAPP essentially serves as a diagnostic instrument, allowing teaching staff to evaluate student behavioural patterns against learning activity design objectives and intervene as required in a timely manner.

Valuable interaction data is stored within a discussion forum but from the default threaded display of messages it is difficult to determine the level and direction of activity between participants. SNAPP infers relationship ties from the post-reply data and renders a social network diagram below the forum thread. The social network visualization can be filtered based upon user activity and social network data can be exported for further analysis in NetDraw. SNAPP integrates seamlessly with a variety of Learning Management Systems (Blackboard, Moodle and Desire2Learn) and must be triggered while a forum thread is displayed in a Web browser.

The social network diagrams can be used to identify:

  1. isolated students
  2. facilitator-centric network patterns where a tutor or academic is central to the network with little interaction occurring between student participants
  3. group malfunction
  4. users that bridge smaller clustered networks and serve as information brokers

The paper referencing SNA research supporting these areas was presented at LAK11 (if you don't have access it's also available in the presentation's slidedeck). The paper Visualizing Threaded Conversation Networks: Mining Message Boards and Email Lists for Actionable Insights (Hansen, Shneiderman & Smith, 2010) also highlights simple ways to identify question people, answer people and discussion starters, which are all potentially very useful within courses for identifying network clusters individuals might want to join/follow.

Retrieving data from Canvas

Hopefully with that quick intro you can see there might be some value in using SNA from threaded discussion analysis. Reading the SNAPP overview hopefully you spotted that it currently doesn’t support extracting data from Canvas discussion boards. This is an opportunity to understand some of the analysis SNAPP is doing behind the scenes.

Hello Google Apps Script

If you have been following my posts you’ll see that I favour using Google Apps Script as a lightweight tool for extracting data. Thanks to Brian (Instructure) I’ve got a way to access the Discussion Topics API. Looking at the API documents I decided the best way to proceed was to get all of the LAK13 discussion topics (top level information) and use this to get the full topic data. If you speak JSON we are essentially turning this:

Data as rendered in Canvas

into this:

Data in JSON

finally getting this (web version here):

Data in spreadsheet

The code to do this is available here. I'm going to spare you the details of the code, but here are the instructions if you'd like to export data from other Canvas hosted discussion boards. If you're not interested in that you can just jump to the next section.
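For a flavour of what's in the script, the heart of it is a fetch like this sketch (the endpoint pattern follows the Canvas API documentation; the linked code above is the working version):

function getDiscussionTopics() {
  // course id 33 comes from the LAK13 url https://learn.canvas.net/courses/33
  var base = "https://learn.canvas.net/api/v1/courses/33/discussion_topics";
  // access_token is read from the project properties set up in step 4 below
  var token = PropertiesService.getScriptProperties().getProperty("access_token");
  var response = UrlFetchApp.fetch(base + "?access_token=" + token);
  return JSON.parse(response.getContentText()); // array of topic objects
}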

Generating an edge list (extracting data) from Canvas to Google Sheets

  1. Create a new Google Spreadsheet and then in Tools > Script editor copy in the code from here
  2. If you are not pulling data from LAK13 you need to edit values in lines 2-4. If you visit your course homepage hopefully you can decode the url pattern based on the example for LAK13 https://learn.canvas.net/courses/33 (I should also point out you need to be enrolled on the course to receive data. Also read Canvas API Policy)
  3. Next you need an access token which is generated from your Canvas Profile Settings page. Scroll down to the bottom and click New Access Token, filling in a purpose and leaving expires blank. Make a copy of the token as it's needed for the next step (I added a copy to a .txt file just in case the next step didn't work)
  4. Back in the Script Editor in Google Spreadsheets click File > Project Properties. In the ‘Project properties’ tab click  ‘+ Add row’ and replace (name) with access_token and (value) with the token you got from Canvas before clicking Save
    Project Properties
  5. Make sure everything is saved in the Script Editor and then Run > getCanvasDiscussionEdges, wait for the script to finish and on Sheet1 you should have a bunch of data to play with.

Using NodeXL to analyse Canvas Discussions

There are a number of different questions we could ask of the LAK13 data. The particular one I want to look at is who are the core community members stimulating/facilitating discussion (e.g. applying a connectivist theory who are the people you might want to connect with). To do this we need to (I’m assuming you’ve already installed NodeXL):

  1. Download the data extracted to the Google Spreadsheet (File > Download as > Microsoft Excel). [If you just want the data I’ve extracted here’s the download link – the data is automatically refreshed nightly]
  2. Open the download file in Excel and in the created_at column select all and Format Cells as General (I needed to do this because NodeXL was misformatting dates on import)
  3. Start a new NodeXL Template (I use the NodeXL Excel Template option from my windows Start menu)
  4. From the NodeXL ribbon you want to Import > From Open Workbook
    Import > From Open Workbook
  5. In the import dialog vert1_name and vert2_name are edges, anything else prefixed with 'vert' is assigned to the corresponding Vertex n property column and everything else is an Edge property:
    import dialog
  6. Once imported you can open the Edges sheet, select the created_at column and use Format Cells to reassign a date/time format.
  7. In Prepare Data choose 'Count and merge duplicate edges', selecting Count and Vertex 1 and Vertex 2
    Count and merge duplicate edges
  8. In the Graph section of the NodeXL ribbon we want to make this a directed graph (replies are directed) and choose you layout algorithm (I usually go Harel-Koren Fast Multiscale)
    make this a directed graph
  9. Next we want to prepare the data we want to analyse. In the Autofill Columns (within Visual Properties portion of the ribbon) set Edge Visibility to ‘topic_id’ and in Edge Visibility Options set ‘If the source column number is: Not equal to 558’ Show otherwise Skip (this will skip edges that are responses to the Pre-course discussion forum – I’ll let you question this decision in the comments/forum)
    Edge Visibility Options
  10. Click Ok then Autofill
  11. Next open the Vertices sheet and select all the rows (Ctrl+A) and from the Visibility option select ‘Show if in an Edge’
    Show if in an Edge
  12. Now find the row in the Vertices sheet for George Siemens and Skip (doing this we're creating a 'what if George wasn't there' scenario)
    ‘what if George wasn’t there’ scenario
  13. Open the Graph Metrics window and add Vertex in-degree, vertex out-degree, Vertex betweenness and closeness centrality and Top items (in the Top items options you’ll need to add these as the metrics you want top 10s for), finally click Calculate metrics.
    Graph Metrics window

At this point you could use the calculated metrics to weight nodes in a graph, but for now I’m going to skip that. You should now have a Top Items sheet with some useful information. In the Betweenness Centrality list you should have these names:

  • Martin Hawksey
  • Simon Knight
  • Alex Perrier
  • Khaldoon Dhou
  • Rosa Estriégana Valdehita
  • Maha Al-Freih
  • Suzanne Shaffer
  • Maxim Skryabin
  • Bryan Braul
  • Peter Robertso

Excluding the pre-course discussion forum and George Siemens, the discussions these people have engaged with provide the shortest paths to other people engaging in discussions on the Canvas site. Strategically these are potentially useful people within the network that you might want to follow, question or engage with.

Getting to this point obviously hasn't been straightforward, and had SNAPP been available in this instance it would have turned this into a far shorter post. Programmatically, using tools like R we could have arrived at the same answer with a couple of lines of code (that might be my challenge for next week ;). What it has hopefully illustrated is that if you have data in an edge format (two column relationships), tools like NodeXL make it possible for you to use SNA modelling techniques to gain insight. (I'm sure it also illustrates that data wrangling isn't always straightforward, but guess what, that's life.)

NodeXL graph for LAK13 discussions

So at this point some of you might be wondering what the LAK13 reply network looks like. I could give you a flat image, but why don't you use my NodeXL online graph viewer to explore it yourself or download the data from the NodeXL Graph Gallery.