Analytics


This is my first ever attempt at 'live blogging' and I haven't done any tidying other than cropping images. You might want to explore the LASI-UK Twitter summary by @sheilmcn (complete #lasiuk twitter archive here)

Sheila MacNeill (JISC CETIS)
Setting the scene: learning scenarios

[Slides are here http://www.slideshare.net/sheilamac/lasiuk-learningscenarios-smn]

Overview of learning scenarios, broad-brush: what data do we have access to? Where does it take place? Learning takes place in various places within the institution, but don't forget informal settings.


Various places where we communicate and learn (synchronous and asynchronous), creating a data footprint both formal and informal. Opportunities to merge data.


University of Derby learner engagement dartboard. Uses primary, secondary and tertiary indicators. Used to build fuller picture of the student and engagement with institution.


Using SNA to help understand learning. Example of using SNAPP for discussion board analysis.

Example from me of dashboarding Canvas Discussion forums in Google Spreadsheet and TAGSExplorer to 'see' conversations.


Diagram developed by Mark Stubbs for tackling the LA problem space: 'analytics to develop understanding'.

What actionable insights can we gain? Examples include patterns of behaviour, targeted intervention, benchmarking, retention, activity design, data-driven design decisions, and a greater understanding of the who, why, what and where of learning.

Clare Llewellyn (University of Edinburgh)
Argumentation on the web - always vulgar and often convincing?

[Slides are here http://www.slideshare.net/ClareLlewellyn/clare-llewellyn-lasiuk]

Using argumentation to analyse the social web. The session title is from an Oscar Wilde quote. Example: discussion around a typical Guardian post includes threading, quoting, etc.


Breaking down the discussion into conversations with specific topics, the main points of the discussion. The issue is data deluge: how do we find the points of most interest? Solutions: the BBC tried using editors' picks, but this creates snapshots rather than the whole conversation.

Argumentation


Result


Argumentation operates at various levels, micro and macro.

Methodology


Data - using twitter data from the London riots (7729)

Identification - unsupervised clustering (machine learning). Select an appropriate algorithm; the strategy is try and see. Tried unigram/bigram, incremental clustering, k-means and topic modelling. Topic modelling proved best in this example.

Guardian comments - using the API to extract claim (quoted text) and counter-claim (comment). Using LT-TTT2.

Supervised machine learning - data from the London riots which has already been marked up. Tools: TagHelper, LightSIDE(?). Using claim (text), evidence (link).

Duygu Simsek (Open University)
Towards visual analytics to improve scientific reading & writing

[Slides are here http://www.slideshare.net/dsimsek/lasiuk-talk030713-2]

[wordpress app decided not to save my notes :(]

Naomi Jeffery (Open University in Scotland)

Learning analytics: a whistle-stop tour


Overview of papers Naomi has recently found interesting. Marsha Lovett (LAK13) - no conception of what the data we are looking at actually means in the real world; remembering the person and the learning design. Tutor vs student requirements.


Overlap between needs.

5 types of LA. Social network analytics - visualising social learning ties with the Network Awareness Tool, a platform used to explore and support learner communities.

Discourse analytics - bit.ly/16ByD1l - meaningful traces from speaking and listening within online discussion forums. Using 9 metrics and then supporting students in interpretation. Discovered that presenting numbers (rather than visualisations) had better results, as it helped students develop the desired behaviours.

Content(?) analytics - understanding promotions (how students 'liking' posts can be used to promote content).

Disposition analytics - MOOC disengagement: identifying learner subpopulations. Developed a categorisation of completing, auditing, disengaging and sampling learners.


Learning pathways - http://stanford.io/123tufg Five learning methods: minimal, careful, strategic explorers, explorers and haphazard. Prior knowledge can influence the model.

Context analytics
Whether online, classroom or blended. The framework paper has several examples.


The concepts used in this post are very similar to the Google Analytics superProxy (announced after my solution ;) The main difference is that this works within Google Drive, meaning data access authentication is built in.

Access control to Google Analytics data is very coarse. Via the web interface you can have full access or none at all. Given the growing interest in data driven decision making at all levels of the institution I thought it would be beneficial to demonstrate how Google Spreadsheets and Google Apps Script could be used to selectively distribute Google Analytics data. This solution was developed for my session at the Institutional Web Managers Workshop 2013 (IWMW13) which I talk about in more detail at the end of this post.

Google Analytics Query Exporter

So here is the general flow. Google Apps Script, which is part of Google Drive, is used by Google Analytics admins to create custom queries which pull back slices of data. The admins can then control how these slices are distributed, either within the security of Google Drive or published to the web. Also, because Google Apps Script features 'script triggers', which can be time based, admins can preconfigure mini scripts to decide when and how the data is distributed. Examples include:

  • Add data slice to a Google Spreadsheet (Spreadsheets feature options for publishing to the web)
  • Turn data slice into a .csv file and attach it to an email for the recipient
  • Turn data slice into a .csv file and copy/share via Google Drive
  • Turn data slice into a .csv file and publish on Google Drive
  • All the above with different file formats including json

Here is the template I’ve developed which you can copy and use followed by some setup instructions (also there’s a setup video):

*** Google Spreadsheet GA Query Exporter ***

  1. File > Make a copy of this spreadsheet (you need to be logged in)
  2. Open Tools > Script editor and then:
    File > Upgrade authorisation experience;
    Once it's upgraded still in the Script editor click Resources > Use Google's APIs; and
    Click the 'Google API Console' link at the bottom of the dialog window;
  3. In the Google APIs Console switch the Analytics API 'on' and read/accept the terms (you may have to accept two separate terms windows)
  4. Close the Google APIs Console window and OK the Google API Services window in the Script editor
  5. Before closing the Script editor Run > authoriseGA to authenticate script

Once authenticated there are a number of ways to run the script. You can enter a query and click 'Get data' for one-off fetches, or open Tools > Script editor and modify the example cron jobs to automate collection and distribution of data (the cron jobs allow functionality that includes emailing/sharing links to csv files).

To create multiple queries in the same spreadsheet, duplicate this sheet and modify the query.

Tip: You can use a formula to build the start and end dates, e.g. yesterday is =(TODAY())-1

To help admins construct these data slices the tool is designed to work in collaboration with the official Google Analytics Query Explorer 2. With this users can experiment with queries like this one that returns the top blog posts based on visits. Within the Query Explorer I can export the Query URI and import to a sheet in the template. The general workflow is demonstrated in the video below which shows how to setup an example query/export:

Using the example query from above I’ve created this template sheet. Similar to the example in the video, in cells E9 and E10 I use a formula to dynamically create a date range. Assuming I want to run the query on the 1st of each month for last month's stats, we start with the end-date generated using =TODAY()-1. To get the start-date we use the value of this field to get the 1st of last month using =DATE(YEAR(E10),MONTH(E10),1)
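
To make explicit what those two spreadsheet formulas compute, here is a rough plain-JavaScript equivalent (the function name is mine; in the template itself the spreadsheet formulas do the work):

```javascript
// Sketch of the date range the two formulas produce.
// endDate mirrors =TODAY()-1; startDate mirrors =DATE(YEAR(E10),MONTH(E10),1).
function lastMonthRange(today) {
  // Day 0 of a month normalises to the last day of the previous month,
  // so subtracting 1 from the date handles month boundaries for us.
  var endDate = new Date(today.getFullYear(), today.getMonth(), today.getDate() - 1);
  var startDate = new Date(endDate.getFullYear(), endDate.getMonth(), 1);
  return { start: startDate, end: endDate };
}

// Run on 1 July 2013: the range covers 1 to 30 June.
var range = lastMonthRange(new Date(2013, 6, 1));
```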

I could open the spreadsheet every month and manually click the ‘Get data’ button, but instead I can set up a script trigger to run on a month timer on the 1st between 1am and 2am.

Setup script trigger/cron job

The code I run is a little scriptlet like:

function exampleCronJobWriteCSVtoParticularFolder(){
  // copy new data as csv and then email as attachment (copy also kept in Google Drive)
  var sheetO = getData("input"); // name of sheet with query to execute e.g. 'input'
  var csv = saveAsCSV(sheetO.getName(), sheetO.getName()+" "+Utilities.formatDate(new Date(), "GMT", "yyyyMMdd"));
  var folder = DriveApp.getFolderById("0B6GkLMU9sHmLbThITlNvb2dzREE"); // folder id from url (navigate to folder and extract for your own folders)
  folder.addFile(csv);
  MailApp.sendEmail("[email protected]", "New data", "A new csv for '"+sheetO.getName()+"' has been created and is in the folder "+folder.getUrl());
  // MailApp.sendEmail(recipient, subject, body, options) - More info https://developers.google.com/apps-script/reference/mail/mail-app
}

… which gets the data and then publishes it as a csv to this folder emailing a link to listed recipients. As the folder has been shared we can get access via Google Drive Host with the link https://googledrive.com/host/0B6GkLMU9sHmLbThITlNvb2dzREE/top-posts%2020130701.csv
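
The hosted link is just the folder id plus the URL-encoded file name; as a small sketch (the helper name is my own, not part of the template):

```javascript
// Sketch: construct the Google Drive Host url for a file in a shared folder,
// as used for the csv link above (helper name is hypothetical).
function driveHostUrl(folderId, fileName) {
  return "https://googledrive.com/host/" + folderId + "/" + encodeURIComponent(fileName);
}

// driveHostUrl("0B6GkLMU9sHmLbThITlNvb2dzREE", "top-posts 20130701.csv")
// -> "https://googledrive.com/host/0B6GkLMU9sHmLbThITlNvb2dzREE/top-posts%2020130701.csv"
```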

public folder

The template includes several example scriptlets for you to play with and modify. For those who are interested, the slides for my IWMW13 session are on Slideshare. The session was designed to introduce Google Apps Script followed by the query exporter template. I only made this yesterday so it might be a bit alpha. Feel free to leave issues and suggestions in the comments. I’m sure there are a number of clarifications required.

Part of the Learning Analytics Summer Institute Global Network

Jisc CETIS in partnership with the Open University in Scotland and the University of Edinburgh School of Informatics will be hosting a one day event on the 5th July in the Informatics Forum (Edinburgh) focusing on Learning Analytics.

LASI-UK is part of a global network of events in the first week of July, which are linked to the Learning Analytics Summer Institute (LASI) at Stanford University. LASI-UK will explore ways of working with data to draw out and share insights that can make a real difference to learning and teaching. This free, interactive event has a dual focus on data visualisation and working with dialogue data. You can check out the draft programme here.

Delegates are also invited to join us for an optional evening session, where we will be broadcasting the final morning plenary session live from Stanford.

Who should attend?

Anyone with an interest in using data to support students and inform curriculum design and teaching, including:

  • Data analysts, planners and management information providers
  • Curriculum designers / education developers
  • Those involved in the construction, delivery and analysis of MOOCs

Registration

Click here for more details and registration

This is a free event; however, places are limited, so interested parties are encouraged to register early.  We look forward to welcoming you to our event.

Posted in Analytics, Event, Featured.

Learning analytics appears to be an increasingly important area of interest for institutions, and I'm aware of a number of staff being asked to contribute on this area in their teaching and learning strategies. I thought it would be useful to spotlight some resources produced by Cetis and Jisc in this area that might help. The list is in no way exhaustive, and if you have any other resources you think worth highlighting either leave a comment or get in touch and I’ll add them to the post.

**New** Ferguson, R. (2013). Learning Analytics for Open and Distance Education. In S. Mishra (Ed.), CEMCA EdTech Notes. New Delhi, India: Commonwealth Educational Media Centre for Asia (CEMCA).
**More New** Society for Learning Analytics Research (SoLAR) has a collection of useful resources including these introductory articles

SoLAR recently ran an open course, Strategy & Policy for Systemic Learning Analytics. Something worth looking at might be the recording of the session Belinda Tynan (the OU’s PVC Learning & Teaching) did on Designing Systemic Analytics at The Open University.

**Even Newer** from SoLAR, looking more at a national strategy for learning analytics: Improving the Quality and Productivity of the Higher Education Sector: Policy and Strategy for Systems-Level Deployment of Learning Analytics

Highlights from Jisc Cetis Analytics Series include

  • Analytics; what is changing and why does it matter?
    This paper provides a high level overview to the CETIS Analytics Series. The series explores a number of key issues around the potential strategic advantages and insights which the increased attention on, and use of, analytics is bringing to the education sector. It is aimed primarily at managers and early adopters in Further and Higher Education who have a strategic role in developing the use of analytics in the following areas:

    • Whole Institutional Issues,
    • Ethical and Legal Issues,
    • Learning and Teaching,
    • Research Management,
    • Technology and Infrastructure.
  • Analytics for Learning and Teaching
    A broad view is taken of analytics for Learning and Teaching applications in Higher Education. In this we discriminate between learning analytics and academic analytics: uses for learning analytics are concerned with the optimisation of learning and teaching per se, while uses of academic analytics are concerned with optimisation of activities around learning and teaching, for example, student recruitment.
  • Legal, Risk and Ethical Aspects of Analytics in Higher Education
    The collection, processing and retention of data for analytical purposes has become commonplace in modern business, and consequently the associated legal considerations and ethical implications have also grown in importance. Who really owns this information? Who is ultimately responsible for maintaining it? What are the privacy issues and obligations? What practices pose ethical challenges?
    Also of interest the LAK13 on An evaluation of policy frameworks for addressing ethical considerations in learning analytics
  • Institutional Readiness for Analytics
    This briefing paper is written for managers and early adopters in further and higher education who are thinking about how they can build capability in their institution to make better use of data that is held on their IT systems about the organisation and provision of the student experience. It will be of interest to institutions developing plans, those charged with the provision of analytical data, and administrators or academics who wish to use data to inform their decision making. The document identifies the capabilities that individuals and institutions need to initiate, execute, and act upon analytical intelligence
  • Case Study, Acting on Assessment Analytics
    Over the past five years, as part of its overall developments in teaching and learning, the University of Huddersfield has been active in developing new approaches to assessment and feedback methodologies. This has included the implementation of related technologies such as e-submission and marking tools. In this case study Dr Cath Ellis shares with us how her interest in learning analytics began and how she and colleagues are making practical use of assessment data both for student feedback and overall course design processes.
    Aspects of this case study and other work in this area are available in this webinar recording on Learning analytics for assessment and feedback

Examples of Learning Analytic Tools

Taken from Dyckhoff, A. L., et al. "Supporting action research with learning analytics."Proceedings of the Third International Conference on Learning Analytics and Knowledge. ACM, 2013.

  • LOCO-Analyst [1, 4],
  • TADA-Ed [46],
  • Data Model to Ease Analysis and Mining [38],
  • Student Inspector [50],
  • MATEP [56–58],
  • CourseVis [43, 45],
  • GISMO [44],
  • Course Signals [3],
  • Check My Activity [25],
  • Moodog [54, 55],
  • TrAVis [41, 42],
  • Moodle Mining Tool [48],
  • EDM Vis [34],
  • AAT [29],
  • Teacher ADVisor [37],
  • E-learning Web Miner [26],
  • ARGUNAUT [30],
  • Biometrics-based Student Attendance Module [27],
  • CAMera and ZeitgeistDashboard [51, 52],
  • Student Activity Meter [28],
  • Discussion Interaction Analysis System (DIAS) [8–11],
  • CoSyLMSAnalytics [49],
  • Network Visualization Resource and SNAPP [5, 6, 17, 18],
  • i-Bee [47],
  • iHelp [12], and
  • Participation Tool [32]

References for these tools are listed here

Here is a more general set of Analytics Tools and Infrastructure from the Analytics Series

A quick reminder that the Analytics in UK Further and Higher Education Survey is still open.

Posted in Analytics, JISC CETIS.



The term ‘dashboard’ currently seems to have a similar effect to Marmite: you either love it or hate it. Fortunately it looks like the trend is towards increased personalisation of the way data is presented for both visual and exploratory purposes (an example of this is Thomson Reuters' finance system Eikon), so if you don’t like Marmite you can have something else instead.

One of the reasons I like using Google Spreadsheets is that it’s a fairly easy environment to pull data into, provide specific views from, and share with others. Recent examples of this include the work I did providing a summary of the discussion activity within the open online course LAK13 (btw the recording of the recent ‘Geek Out’ webinar covering this is available from here). This example takes the existing Atom feed from Canvas discussion boards, which contains all forum posts, and provides a light summary of activity (shown below; click to enlarge).

CanvasNetworkDashboard

When faced with a similar challenge for ocTEL it made sense to evolve this existing work rather than reinvent. As previously outlined in the ocTEL course recipe, we use the WordPress platform as a course hub, creating additional functionality using plugins. For the discussion forums we use the bbPress plugin. Like the rest of WordPress, bbPress creates a number of data feeds that we can use for forums, topics and replies. As far as I can tell though these feeds follow the WordPress global reading settings for the number of items available and are limited to RSS2.0.

One of the really nice things, however, about working with WordPress is that once you have your basic stack of plugins it's easy to customise using your theme's functions.php script. And that’s what we’ve done for ocTEL. Using WordPress’ extensive Codex documentation we can see it’s possible to create our own feeds. So by adding this custom action in our theme's functions.php and including this feed template in our theme folder we have an Atom feed for all our forum activity. With some very minor tweaking of the Canvas example mentioned above I’ve now got an overview of ocTEL forum discussions dashboard.

[If you have your own bbPress forums you can use the code above to create your own data feed and File > Make a copy of the spreadsheet to dashboard your forums.]

So now we have some insight into the ocTEL forum activity, it raises some interesting questions. Besides turning the insight into action, the questions at the top of my mind are: how do we promote this data endpoint? What other data feeds and formats would be useful? How do we license the data? Your thoughts as always are very welcome.

PS My colleagues Lorna Campbell and Phil Barker have recently published an Activity Data and Paradata briefing paper


Just a note to say on 26th April at 4pm BST (in your timezone) I’ll be giving a webinar on using Google Apps Script to extract data from Canvas (LMS by Instructure). Even if you’re not a Canvas user hopefully there will be some general techniques you’ll find useful. It might also be useful for people thinking about applying basic learning analytic techniques to your courses. I’ll update this post with any resources from the session.

Martin Hawksey of Jisc CETIS started playing around with discussion usage data in Canvas when he was participating in the Learning Analytics course in Canvas Network. Using Google's cloud scripting service Google Apps Script,  free for Google Apps and Google account users, he could dig deep into the data and start doing some interesting (and valuable) analysis all from the comfort and familiarity of a spreadsheet.

Join us for this free 50-minute technical webinar on Friday, April 26th as Martin details his experience using the Canvas APIs and walks through building scripts to get meaningful Canvas data using Google Apps Script and the Canvas APIs.

Registration is free and you can book here.

Here's a recording of the session and the bundle of links (including slides)


“Sankey diagrams are a specific type of flow diagram, in which the width of the arrows is shown proportionally to the flow quantity.” (Wikipedia)

I first came across Sankey diagrams via (one of the OKFN School of Data’s latest contributors ;) Tony Hirst in d3.js Powered Sankey Diagram. Subsequently Bruce McPherson showed how to create Sankey diagrams from Excel, also using d3.js.

Having collected some survey data for Analytics and Institutional Capabilities at #cetis13 (here’s a copy of the form), we were looking for a way to take us beyond the Google Form reporting and gain extra insight. In particular I was interested in trying to see if there were any relationships between the multiple-choice questions. Using a Sankey diagram seemed like a possible solution, and my colleague David Sherlock quickly came up with a modification of Mike Bostock’s Sankey example to accept a csv input (I’ll link if/when it gets written up).

Seeing this I thought it might be useful to make a wrapper to generate Sankey diagrams for data stored in a Google Sheet. The solution was relatively straightforward, using the Google Visualization API to get the data in the right shape for d3.js. An example of the prototype is here.

There’s no interface yet for you to select a spreadsheet, sheets, columns etc., but you can take your Google Spreadsheet, ‘publish to the web’, and then add the following data to the url.

One of the issues with this solution is you might not want to make all your data available. To get around this I’ve written a Google Apps Script that lets you use a custom formula to preformat the data. To see this in action this Spreadsheet contains an example. The formula is in cell A1 and uses the format =setSankey(datarange, cols , separator)

  • datarange – sheet/cell reference for source data eg 'Form Responses - Edit'!A2:D Note must start with column A
  • cols – comma separated list of columns to use to generate a chart for eg "B,C,D"
  • separator {optional} – used to split multi value cells defaults to ", " eg " | "

To use this in your own spreadsheets open Tools > Script editor and copy the code from this gist. Here’s an example url using pre-processed data. My main difference is the addition of the &output=1 to the querystring.

Obviously creating your own querystrings to render the data isn’t ideal, and it would be relatively straightforward to create a UI wrapper similar to the one used in EDGESExplorer, but it’s something I’m reluctant to do unless there is enough demand. The other consideration is the question: does the Sankey diagram provide useful insight for this type of data, or is it just more ‘damn lies’?

It would have of course been nice to write a Google Gadget to include this in a Spreadsheet … but Google are discontinuing those :(.


As part of LAK13 I’ve already written a series of blog posts highlighting a couple of ways to extract data from Canvas VLE. Prompted by a question by On and my colleague Sheila MacNeill I wanted to show you a way of getting feed data into a spreadsheet without using any code. The solution is to use Yahoo Pipes, but as this post will highlight this isn’t entirely straight forward and you need to be aware of several tricks to get the job done. As LAK13 isn’t using Google Groups for this post I’ll be using the Learning Analytics Google Group as a data source.

Sniffing for data

First we need to find a data source. Looking for an auto-detected RSS/Atom feed by visiting the group homepage reveals nothing. [I always forget browsers seem to be moving away from telling you when they detect a feed. To get around this I use the Chrome RSS Subscription Extension, which indicates with the orange RSS icon when a page has a feed.]

Browser with no feed detected Browser with feed detected

Looking for an official Google Groups API as an alternative method turns up this open issue from August 2007 for a Groups API, aka there's no API :( Digging deeper we find Groups did have data feeds in their old interface. So with a bit of url magic I can land on the old Groups interface for Learning Analytics, which gives us the orange light.

Google Groups Old Interface with Feeds

Requesting the View all available feeds page we get some additional feed options:

Google Groups View all available feeds

At this point I could grab the Atom links and with a bit of tweaking process it with my existing Google Apps Script Code, but lets look at a ‘no code’ solution.

Feeding Google Sheets with Yahoo Pipes

At this point it’s worth reminding you that you could use the importFeed formula in a Google Spreadsheet which would import the data from a Google Group. The issue however is it’s limited to the last 20 items so we need a better way of feeding the sheet.

A great tool for manipulating rss/atom (other data) feeds is Yahoo Pipes. Pipes gives you a drag and drop programming environment where you can use blocks to perform operations and wire the outputs together. I learned most of my pipework from the Pipemaster – Tony Hirst and if you are looking for a starting point this is a good one.

Yahoo Pipes - Edit interface

Here I’ve created a pipe that takes a Google Group shortname does some minor manipulation, which I’ll explain later, and output a result. When we run the pipe we get some export options:

Yahoo Pipe - Run pipe

The one I’m looking for is .csv because it'll easily import into Google Sheets, but it’s not there … Just as we had to know the old Google Group interface has RSS feeds, with Yahoo Pipes we have to know the csv trick. Here’s the url for ‘Get as JSON’:

http://pipes.yahoo.com/pipes/pipe.run?_id=14ffe600e0c0a9315007b922e41be8ad&_render=json&group=learninganalytics

and if we swap &_render=json for &_render=csv, by magic we have a csv version of the output (whilst we are here, also notice the group name used when the pipe is run is also in the url). This means if we know the group shortname we don’t need to enter a name and ‘Run pipe’; we can build the url to get the csv.

Now in a Google Spreadsheet if you enter the formula =importData("http://pipes.yahoo.com/pipes/pipe.run?_id=14ffe600e0c0a9315007b922e41be8ad&_render=csv&group=learninganalytics") we get the group's last 100 messages in a spreadsheet.
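
For completeness, the run url can also be assembled programmatically; here is a small JavaScript sketch (the function name is my own, the pipe id is the one above):

```javascript
// Sketch: build the Yahoo Pipes run URL for a given group and output format.
// Swapping the _render value between "json" and "csv" is the only change needed.
function buildPipeUrl(group, render) {
  return "http://pipes.yahoo.com/pipes/pipe.run" +
    "?_id=14ffe600e0c0a9315007b922e41be8ad" +
    "&_render=" + encodeURIComponent(render) +
    "&group=" + encodeURIComponent(group);
}
```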

Extra tricks

There were a couple of extra tricks I skipped worth highlighting. RSS/Atom feeds permit multilevel data, so an element like ‘author’ can have sub-elements like ‘name’ and ‘email’. CSVs on the other hand are 2D: rows and columns.

Illustration of nested structure of atom feed

When Yahoo Pipes generates a csv file it ignores sub elements, so in this case it’ll generate an author column but won’t include name or email. To get around this we need to pull the data we want into the first level (doing something similar for content).

Rename block

The next little trick is to get the feed dates in a format Google Spreadsheets recognise as a date rather than a string. In the feed dates are in ISO 8601 format e.g. 2013-03-05T13:13:06Z. By removing the ‘T’ and ‘Z’ Google Spreadsheets will automatically parse as a date. To do this we use a Regex block to look for T or Z (T|Z) replacing with a single space (which is why the ‘with’ box looks empty).

Regex block
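
The same substitution in plain JavaScript, should you ever need it outside Pipes (I trim the trailing space the replacement leaves behind, which the Pipes version doesn't bother with):

```javascript
// Replace the literal 'T' and 'Z' in an ISO 8601 timestamp with spaces
// so a spreadsheet will parse it as a date (same regex as the Pipes block).
function spreadsheetFriendlyDate(iso) {
  return iso.replace(/(T|Z)/g, " ").trim();
}

// "2013-03-05T13:13:06Z" -> "2013-03-05 13:13:06"
```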

I’ve wrapped the data in a modified version of the dashboard used for the Canvas data feed.

*** Google Groups Activity Dashboard ***

Google Groups Activity Dashboard

Limitations

A couple of big limitations to be aware of:

How long will Google Group data feeds last

Given we’ve had to dig out the data feeds from the old Google Groups interface, my suspicion is once this is shut off for good the feeds will also disappear. Who knows, Google may actually have a Groups API by then ;s

Limited to last 100 messages

The eagle-eyed amongst you will have spotted I was able to increase the number of messages returned to 100 by adding num=100 to the feed query. This is the limit though, and you can’t use paging to get older results. There are a couple of ways you could store the feed results, like using the FeedWordPress plugin. I’m experimenting with using ‘IF I do THAT on {insert social network/rss feed/other} THEN add row to Google Spreadsheet’ (IFTTT), but as the data isn’t stored nicely (particularly dates, which are saved like ‘February 15, 2013 at 01:48PM’ *sigh*) it makes it harder to use.
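
To illustrate why that IFTTT date format is awkward, here is a rough sketch of parsing it in JavaScript (the function name and approach are mine, not anything IFTTT provides):

```javascript
// Sketch: turn an IFTTT-style date like "February 15, 2013 at 01:48PM"
// into a Date object; month names and AM/PM are the fiddly bits a
// spreadsheet won't handle automatically.
function parseIftttDate(s) {
  var m = s.match(/^(\w+) (\d{1,2}), (\d{4}) at (\d{1,2}):(\d{2})(AM|PM)$/);
  if (!m) return null;
  var months = ["January", "February", "March", "April", "May", "June",
                "July", "August", "September", "October", "November", "December"];
  var hour = parseInt(m[4], 10) % 12;       // 12AM -> 0, 12PM handled below
  if (m[6] === "PM") hour += 12;
  return new Date(parseInt(m[3], 10), months.indexOf(m[1]),
                  parseInt(m[2], 10), hour, parseInt(m[5], 10));
}
```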

I think that’s me in terms of ways for extracting discussion boards data … for now. One other technique related to Google Spreadsheets is screen scraping using importXML. To see this in action you can read Dashboarding activity on public JISCMail lists using Google Sheets (Spreadsheets).


At 4pm GMT today (see in your own timezone) I'll be doing a Google Hangout On Air with Marc Smith, one of NodeXL’s creators and coordinators. NodeXL is a handy free add-in for MS Excel (Windows) allowing you to generate, explore and analyse network graphs. I started using NodeXL a couple of years ago as I found it was an easy way to extract a social graph (friend/follower relationships) from Twitter. In the session we'll go through extracting a social graph from a Twitter hashtag and analysing the results.

You can view/participate in a number of ways. I'll embed the livestream in this post, or you can see/join the hangout when it appears in my Google+ timeline. You can ask questions via Twitter or Google+ using the combined hashtags #lak13 and #nodexl, e.g. ‘What’s the best place to find out more about NodeXL? #lak13 #nodexl’ (this is a LAK13 fringe event).

Posted in Analytics, NodeXL.


In my last post on Canvas Network Discussion Activity Data I mentioned I was a little disappointed to not be able to use social network analysis (SNA) modelling techniques on the Canvas Network discussion boards. My main barrier was accessing the data via the Canvas API using my preferred toolset. Fortunately Brian Whitmer at instructure.com (the company behind Canvas) posted a comment highlighting that as a student on the course it was easy for me to get access to this data using a token generated on my Canvas profile page. With this new information in this post I’ll cover three main areas:

  • a very quick introduction into techniques/opportunities for analysing threaded networks using SNA;
  • how I retrieved data from the Canvas platform for the #LAK13 discussions; and finally
  • some analysis using the NodeXL add-on for Microsoft Excel (Windows).

On Friday 1st March at 4pm GMT I’ll also be having a live Hangout on Air with Marc Smith, one of the original creators and continued project coordinator of NodeXL. The live feed will be embedded here and you can ask questions via Twitter or Google+ using the combined hashtags #lak13 and #nodexl, e.g. ‘What’s the best place to find out more about NodeXL? #lak13 #nodexl’. For the hangout session we’ll look at how easy it is to use NodeXL to analyse a Twitter hashtag community in a couple of clicks. [The rest of this post introduces a more advanced use of NodeXL, so if I lose you in the rest of this post fear not as the session will be a lot easier going]

Opportunities for analysing threaded networks using SNA

Hello graph

WiredUK friend/follower graph

One of the dangers of SNA is that people see images like the one to the right and miss the point of using this modelling technique (the image is from one of my first goes at using NodeXL so I’d be the first to admit there’s room for improvement). If you do, there are a couple of things to bear in mind. Often these images are exhaust, generated as a snapshot of a visual and/or exploratory analytic, and a certain level of literacy is required to understand the underlying structure of the graph. On this last point, a threshold concept for me was understanding that these graphs are more often than not constructed from an edge list, which is essentially two columns of data representing the start point and end point of a connection between two nodes. For example, all the names in columns Vertex 1 and Vertex 2 are nodes, and each row represents an edge between them, so in this case Beth –> Adam generates the highlighted edge.

Basic edge list and force layout

There is a lot more to understand about these types of graphs, but this basic concept means that if I have any relationship data it’s easy to graph and explore.
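To make the edge-list idea concrete, here’s a minimal JavaScript sketch (Beth and Adam are from the example above; the other names are made up for illustration) that turns a two-column edge list into an adjacency list so you can look up who each node connects to:

```javascript
// A minimal edge list: each row is [Vertex 1, Vertex 2], i.e. one directed edge.
const edges = [
  ["Beth", "Adam"],
  ["Adam", "Carol"],
  ["Beth", "Carol"],
];

// Build an adjacency list mapping each node to the nodes it points at.
function toAdjacency(edgeList) {
  const adj = {};
  for (const [from, to] of edgeList) {
    (adj[from] = adj[from] || []).push(to);
  }
  return adj;
}

console.log(toAdjacency(edges));
// Beth -> ["Adam", "Carol"], Adam -> ["Carol"]
```

Everything that follows in this post, whether SNAPP, NodeXL or a spreadsheet, is ultimately built on top of this simple two-column structure.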

Hello SNAPP

At this point it’s worth mentioning the browser plugin SNAPP.

The Social Networks Adapting Pedagogical Practice (SNAPP) tool performs real-time social network analysis and visualization of discussion forum activity within popular commercial and open source Learning Management Systems (LMS). SNAPP essentially serves as a diagnostic instrument, allowing teaching staff to evaluate student behavioural patterns against learning activity design objectives and intervene as required in a timely manner.

Valuable interaction data is stored within a discussion forum but from the default threaded display of messages it is difficult to determine the level and direction of activity between participants. SNAPP infers relationship ties from the post-reply data and renders a social network diagram below the forum thread. The social network visualization can be filtered based upon user activity and social network data can be exported for further analysis in NetDraw. SNAPP integrates seamlessly with a variety of Learning Management Systems (Blackboard, Moodle and Desire2Learn) and must be triggered while a forum thread is displayed in a Web browser.

The social network diagrams can be used to identify:

  1. isolated students
  2. facilitator-centric network patterns where a tutor or academic is central to the network with little interaction occurring between student participants
  3. group malfunction
  4. users that bridge smaller clustered networks and serve as information brokers

The paper referencing the SNA research supporting these areas was presented at LAK11 (if you don’t have access it’s also available in the presentation’s slidedeck). The paper Visualizing Threaded Conversation Networks: Mining Message Boards and Email Lists for Actionable Insights (Hansen, Shneiderman & Smith, 2010) also highlights simple ways to identify question people, answer people and discussion starters, which are all potentially very useful within courses for identifying network clusters individuals might want to join/follow.
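As a rough illustration of how those roles can be spotted, here’s a sketch that tallies in-degree and out-degree from a hypothetical reply edge list (the names are invented). Assuming the convention that each edge runs from replier to the person replied to, replies written versus replies received gives a crude first proxy for answer people versus discussion starters:

```javascript
// Hypothetical reply edges, one per row: [replier, personRepliedTo].
const replyEdges = [
  ["Ann", "Bob"], ["Cat", "Bob"], ["Dan", "Bob"],
  ["Bob", "Ann"], ["Ann", "Cat"],
];

// Tally directed degrees for every node appearing in the edge list.
function degrees(edgeList) {
  const deg = {};
  for (const [from, to] of edgeList) {
    deg[from] = deg[from] || { out: 0, in: 0 };
    deg[to] = deg[to] || { out: 0, in: 0 };
    deg[from].out += 1; // replies they wrote
    deg[to].in += 1;    // replies they received
  }
  return deg;
}

console.log(degrees(replyEdges));
// Bob receives three replies (in: 3); Ann writes two (out: 2)
```

The Hansen et al. paper combines degree measures with other signals, so treat this as the simplest possible starting point rather than the full method.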

Retrieving data from Canvas

Hopefully with that quick intro you can see there might be some value in using SNA for threaded discussion analysis. Reading the SNAPP overview you’ll hopefully have spotted that it currently doesn’t support extracting data from Canvas discussion boards. This is an opportunity to understand some of the analysis SNAPP is doing behind the scenes.

Hello Google Apps Script

If you have been following my posts you’ll see that I favour using Google Apps Script as a lightweight tool for extracting data. Thanks to Brian (Instructure) I’ve got a way to access the Discussion Topics API. Looking at the API documents I decided the best way to proceed was to get all of the LAK13 discussion topics (top level information) and use this to get the full topic data. If you speak JSON we are essentially turning this:

Data as rendered in Canvas

into this:

Data in JSON

finally getting this (web version here):

Data in spreadsheet

The code to do this is available here. I’m going to spare you the details of the code, but here are the instructions if you’d like to export data from other Canvas hosted discussion boards. If you’re not interested in that you can just jump to the next section.
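For a flavour of the JSON-to-spreadsheet transformation, here’s a hedged sketch of the recursive walk at the heart of it. The field names (`user_id`, `replies`) and the overall shape are assumptions to check against the Discussion Topics API documentation, not necessarily the exact structure Canvas returns:

```javascript
// Sketch: walk a Canvas "full topic" reply tree and emit edge rows of the
// form [replierId, repliedToId, topicId]. Field names (user_id, replies)
// are assumptions to verify against the Canvas Discussion Topics API docs.
function flattenReplies(entries, parentUserId, topicId, rows) {
  rows = rows || [];
  for (const entry of entries || []) {
    if (parentUserId != null) {
      rows.push([entry.user_id, parentUserId, topicId]);
    }
    // Recurse with this entry's author as the new parent.
    flattenReplies(entry.replies, entry.user_id, topicId, rows);
  }
  return rows;
}

// Tiny fixture shaped like a topic view: user 2 replies to 1, user 3 to 2.
const view = [{ user_id: 1, replies: [{ user_id: 2, replies: [{ user_id: 3 }] }] }];
console.log(flattenReplies(view, null, 558, []));
// [[2, 1, 558], [3, 2, 558]]
```

The rows that drop out of a walk like this are exactly the two-column (plus topic id) edge list that the spreadsheet screenshot shows.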

Generating an edge list (extracting data) from Canvas to Google Sheets

  1. Create a new Google Spreadsheet and then in Tools > Script editor copy in the code from here
  2. If you are not pulling data from LAK13 you need to edit values in lines 2-4. If you visit your course homepage hopefully you can decode the url pattern based on the example for LAK13 https://learn.canvas.net/courses/33 (I should also point out you need to be enrolled on the course to receive data. Also read Canvas API Policy)
  3. Next you need an access token, which is generated from your Canvas Profile Settings page. Scroll down to the bottom and click New Access Token, filling in a purpose and leaving expires blank. Make a copy of the token as it’s needed for the next step (I added a copy to a .txt file just in case the next step didn’t work)
  4. Back in the Script Editor in Google Spreadsheets click File > Project Properties. In the ‘Project properties’ tab click  ‘+ Add row’ and replace (name) with access_token and (value) with the token you got from Canvas before clicking Save
    Project Properties
  5. Make sure everything is saved in the Script Editor and then Run > getCanvasDiscussionEdges, wait for the script to finish and on Sheet1 you should have a bunch of data to play with.

Using NodeXL to analyse Canvas Discussions

There are a number of different questions we could ask of the LAK13 data. The particular one I want to look at is: who are the core community members stimulating/facilitating discussion (e.g. applying connectivist theory, who are the people you might want to connect with)? To do this we need to (I’m assuming you’ve already installed NodeXL):

  1. Download the data extracted to the Google Spreadsheet (File > Download as > Microsoft Excel). [If you just want the data I’ve extracted here’s the download link – the data is automatically refreshed nightly]
  2. Open the downloaded file in Excel and in the created_at column select all and Format Cells as General (I needed to do this because NodeXL was misformatting dates on import)
  3. Start a new NodeXL Template (I use the NodeXL Excel Template option from my windows Start menu)
  4. From the NodeXL ribbon you want to Import > From Open Workbook
    Import > From Open Workbook
  5. In the import dialog vert1_name and vert2_name are edges, anything else prefixed with ‘vert’ is assigned to the corresponding Vertex n property column and everything else is a Vertex 1 property:
    import dialog
  6. Once imported you can open the Edges sheet, select the created_at column and use Format Cells to reassign a date/time format.
  7. In Prepare Data choose ‘Count and merge duplicate edges’, selecting Count, Vertex 1 and Vertex 2
    Count and merge duplicate edges
  8. In the Graph section of the NodeXL ribbon we want to make this a directed graph (replies are directed) and choose your layout algorithm (I usually go Harel-Koren Fast Multiscale)
    make this a directed graph
  9. Next we want to prepare the data we want to analyse. In the Autofill Columns (within Visual Properties portion of the ribbon) set Edge Visibility to ‘topic_id’ and in Edge Visibility Options set ‘If the source column number is: Not equal to 558’ Show otherwise Skip (this will skip edges that are responses to the Pre-course discussion forum – I’ll let you question this decision in the comments/forum)
    Edge Visibility Options
  10. Click Ok then Autofill
  11. Next open the Vertices sheet and select all the rows (Ctrl+A) and from the Visibility option select ‘Show if in an Edge’
    Show if in an Edge
  12. Now find the row in the Vertices sheet for George Siemens and set it to Skip (doing this we’re creating a ‘what if George wasn’t there’ scenario)
    ‘what if George wasn’t there’ scenario
  13. Open the Graph Metrics window and add Vertex in-degree, Vertex out-degree, Vertex betweenness and closeness centralities and Top items (in the Top items options you’ll need to add these as the metrics you want top 10s for), then click Calculate metrics.
    Graph Metrics window

At this point you could use the calculated metrics to weight nodes in a graph, but for now I’m going to skip that. You should now have a Top Items sheet with some useful information. In the Betweenness Centrality list you should have these names:

  • Martin Hawksey
  • Simon Knight
  • Alex Perrier
  • Khaldoon Dhou
  • Rosa Estriégana Valdehita
  • Maha Al-Freih
  • Suzanne Shaffer
  • Maxim Skryabin
  • Bryan Braul
  • Peter Robertso

Excluding the pre-course discussion forum and George Siemens, the discussions these people have engaged with provide the shortest paths to other people engaging in discussions on the Canvas site. Strategically these are potentially useful people within the network that you might want to follow, question or engage with.

Getting to this point obviously hasn’t been straightforward, and had SNAPP been available in this instance it would have turned this into a far shorter post. Programmatically, using tools like R we could have arrived at the same answer with a couple of lines of code (that might be my challenge for next week ;). What it has hopefully illustrated is that if you have data in an edge format (two column relationships), tools like NodeXL make it possible for you to use SNA modelling techniques to gain insight. (I’m sure it also illustrates that data wrangling isn’t always straightforward, but guess what, that’s life.)
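For those curious what those "couple of lines of code" would be doing under the hood, here’s a dependency-free JavaScript sketch of betweenness centrality (Brandes’ algorithm) for an unweighted directed graph. It isn’t what NodeXL runs internally, just the same idea in miniature:

```javascript
// Betweenness centrality for an unweighted directed graph (Brandes' algorithm).
// adj maps each node to an array of its successor nodes.
function betweenness(adj) {
  const nodes = Object.keys(adj);
  const bc = {};
  nodes.forEach(v => (bc[v] = 0));
  for (const s of nodes) {
    // BFS from s, counting shortest paths (sigma) and predecessors.
    const stack = [];
    const pred = {}, sigma = {}, dist = {};
    nodes.forEach(v => { pred[v] = []; sigma[v] = 0; dist[v] = -1; });
    sigma[s] = 1; dist[s] = 0;
    const queue = [s];
    while (queue.length) {
      const v = queue.shift();
      stack.push(v);
      for (const w of adj[v] || []) {
        if (dist[w] < 0) { dist[w] = dist[v] + 1; queue.push(w); }
        if (dist[w] === dist[v] + 1) { sigma[w] += sigma[v]; pred[w].push(v); }
      }
    }
    // Accumulate dependencies back up the BFS tree.
    const delta = {};
    nodes.forEach(v => (delta[v] = 0));
    while (stack.length) {
      const w = stack.pop();
      for (const v of pred[w]) {
        delta[v] += (sigma[v] / sigma[w]) * (1 + delta[w]);
      }
      if (w !== s) bc[w] += delta[w];
    }
  }
  return bc;
}

// A -> B -> C: every shortest path from A to C passes through B.
console.log(betweenness({ A: ["B"], B: ["C"], C: [] }));
// B scores 1; A and C score 0
```

Fed the LAK13 edge list instead of this toy graph, ranking nodes by this score is essentially what produces the Top Items list above.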

NodeGL for LAK13 discussions

So at this point some of you might be wondering what the LAK13 reply network looks like. I could give you a flat image, but why don’t you use my NodeXL online graph viewer to explore it yourself or download the data from the NodeXL Graph Gallery.