# Hacking stuff together with Google Spreadsheets: Using importHTML to create a Winter Olympics 2010 Medal Map

Another post related to my ‘Hacking stuff together with Google Spreadsheets’ (other online spreadsheet tools are available) session at Dev8eD (a free event for building, sharing and learning cool stuff in educational technology for learning and teaching!) next week. This time an example to demonstrate importHtml. Rather than reinventing the wheel I thought I’d revisit Tony Hirst's Creating a Winter Olympics 2010 Medal Map In Google Spreadsheets (hmm, are we allowed to use the word ‘Olympic’ in the same sentence as Google given they are not an official sponsor ;s).

Almost 4 years on the recipe hasn’t changed much. The Winter Olympics 2010 medals page on Wikipedia is still there and we can still use the importHTML formula to grab the table [=importHTML("http://en.wikipedia.org/wiki/2010_Winter_Olympics_medal_table","table",3)]

The useful thing to remember is that importHtml and its cousins importFeed, importXML, importData, and importRange create live links to the data, so if the table on Wikipedia were to change the spreadsheet would also eventually update.

Where I take a slight detour from the recipe is that Google now has a heatmap chart that doesn’t need ISO country codes; instead it will happily try to resolve country names.

Once the data is imported from Wikipedia, if you select Insert > Chart and choose the heatmap, using the Nation and Total columns as the data range, you should get a chart similar to the one shown to the right. The problem with this is it’s missing most of the countries. To fix this we need to remove the country codes in brackets. One way to do this is to keep only the text to the left of the first bracket “(”. This can be done using a combination of the LEFT and FIND formulas.

In your spreadsheet at cell H2, if you enter =LEFT(B2,FIND("(",B2)-2) this will return all the text in ‘Canada (CAN)’ up to ‘(‘ minus two characters, excluding both the ‘(‘ and the space before it. You could manually fill this formula down the entire column, but I like using ARRAYFORMULA, which lets you apply the same formula to multiple cells without having to fill it in manually. So our final formula in H2 is:

=ARRAYFORMULA(LEFT(B2:B27,FIND("(",B2:B27)-2))
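If it helps to see the trimming logic outside the spreadsheet, here is a plain JavaScript sketch of what the LEFT/FIND combination does (the function name is my own invention):

```javascript
// Equivalent of =LEFT(value, FIND("(", value) - 2):
// keep everything before the "(", then also drop the space in front of it.
function stripCountryCode(value) {
  const bracket = value.indexOf("("); // FIND is 1-based; indexOf is 0-based
  if (bracket === -1) return value;   // no bracketed code, return unchanged
  return value.slice(0, bracket - 1); // trims the "(" and the preceding space
}

console.log(stripCountryCode("Canada (CAN)")); // → "Canada"
```

The `-1` in the slice does the same job as the `-2` in the spreadsheet formula, because FIND counts from 1 while indexOf counts from 0.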

Using the new column of cleaned country names we now get our final map.

To recap, we used one of the import formulas to pull live data into a Google Spreadsheet, stripped some unwanted text and generated a map. Because all of this is sitting in the ‘cloud’ it’ll quite happily refresh itself if the data changes.


Next week I’ll be presenting at Dev8eD (a free event for building, sharing and learning cool stuff in educational technology for learning and teaching!) doing a session on ‘Hacking stuff together with Google Spreadsheets’ – other online spreadsheet tools are available. As part of this session I’ll be rolling out some new examples. Here’s one I’ve quickly thrown together to demonstrate the UNIQUE and FILTER spreadsheet formulas. It’s yet another example of me revisiting the topic of electronic voting systems (clickers). The last time I did this it was gEVS – An idea for a Google Form/Visualization mashup for electronic voting, which used a single Google Form as a voting entry page, rendering the data using the Google Chart API on a separate webpage. This time round everything is done in the spreadsheet, which makes it easier to use/reuse. Below is a short video of the template in action followed by a quick explanation of how it works.

Here’s the:

*** Quick Clicker Voting System Template ***

The instructions on the ‘Readme’ sheet tell you how to set it up. If you are using this on a Google Apps domain (Google Apps for Education), it’s also possible to record the respondent’s username.

All the magic is happening in the FILTERED sheet. In cell A1 (which is hidden) there is the formula =UNIQUE(ALL!C:C). This returns a list of unique question ids from the ALL sheet. If you now highlight cell D2 of the FILTERED sheet and select Data > Validation you can see these values are used to create a select list.

The last bit of magic is in cells D4:D8. The first half of the formula [IF(ISNA(FILTER(ALL!D:D,ALL!C:C=$D$2,ALL!D:D=C4))] checks if there is any data. The important bit is:

COUNTA(FILTER(ALL!D:D,ALL!C:C=$D$2,ALL!D:D=C4))

This FILTERs column D of the ALL sheet using the condition that column C of the ALL sheet matches what is in D2 and column D matches the right response option. The formula returns the rows of data that match the query, so if there are three ‘A’ responses for a particular question, three As would be returned, one on each row. All we want is the number of rows the filter returns, so it is wrapped in COUNTA, which counts the values in the returned array.
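The same UNIQUE and FILTER/COUNTA logic can be sketched in plain JavaScript; the sample vote rows and function names below are my own stand-ins for columns C and D of the ALL sheet:

```javascript
// Sample vote rows: [questionId, response] – stand-ins for ALL!C:C and ALL!D:D.
const votes = [
  ["Q1", "A"], ["Q1", "B"], ["Q1", "A"], ["Q2", "C"], ["Q1", "A"],
];

// =UNIQUE(ALL!C:C): the de-duplicated list of question ids.
const uniqueQuestions = [...new Set(votes.map(([q]) => q))];

// =COUNTA(FILTER(ALL!D:D, ALL!C:C=$D$2, ALL!D:D=C4)):
// keep the rows matching the selected question and response option, count them.
function countResponses(selectedQuestion, option) {
  return votes.filter(([q, r]) => q === selectedQuestion && r === option).length;
}

console.log(uniqueQuestions);           // → ["Q1", "Q2"]
console.log(countResponses("Q1", "A")); // → 3
```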

Simple, yes?

In the original JISC OER Rapid Innovation call, one of the stipulations, due to the size and duration of the grants, is that the main reporting process is blog-based. Amber Thomas, who is the JISC Programme Manager for this strand and a keen blogger herself, has been a long-time supporter of projects adopting open practices, blogging progress as they go. Brian Kelly (UKOLN) also has an interest in this area, with posts including Beyond Blogging as an Open Practice, What About Associated Open Usage Data?

For the OERRI projects, the proposal discussed at the start-up meeting was that projects adopt a taxonomy of tags to indicate key posts (e.g. project plan, aims, outputs, nutshell etc.). For the final report, projects would then compile all posts with specific tags and submit them as an MS Word or PDF document.

There are a number of advantages to this approach. One of them, for people like me anyway, is that it exposes machine-readable data that can be used in a number of ways. In this post I’ll show how I’ve created a quick dashboard in Google Spreadsheets which takes a list of blog RSS feeds and filters for specific tags/categories. Whilst I demonstrate this with the OERRI projects, the same technique could be used in other scenarios, such as a way to track student blogs. As part of this solution I’ll highlight some of the issues/affordances of different blogging platforms and introduce some future work to combine post content using a template structure.

Screenshot of OERRI post dashboard

### The OERRI Project Post Directory

If you are not interested in how this spreadsheet was made and just want to grab a copy to use with your own set of project/class blogs then just:

*** open the OERRI Project Post Directory ***
File > Make a copy if you want your own editable version

The link to the document above is the one I’ll be developing throughout the programme so feel free to bookmark the link to keep track of what the projects are doing.

The way the spreadsheet is structured, the tags/categories the script uses to filter posts are in cells D2:L2 and urls are constructed from the values in columns O-Q. The basic technique being used here is building urls that look for specific posts and returning links (made pretty with some conditional formatting).

### Blogging platforms used in OERRI

So how do we build a url to look for specific posts? With this technique it comes down to whether the blogging platform supports tag/category filtering, so let’s first look at the platforms being used in OERRI projects.

This chart (right) breaks down the blogging platforms. You’ll see that most (12 of 15) are using WordPress in two flavours: ‘shared’, indicating that the blog is also a personal or team blog containing other posts not related to OERRI, and ‘dedicated’, set up entirely for the project.

The 3 other platforms are 2 MEDEV blogs and the OU’s project on Cloudworks. I’m not familiar with the MEDEV platform and only know a bit about Cloudworks, so for now I’m going to ignore these and concentrate on the WordPress blogs.

### WordPress and Tag/Category Filtering

One of the benefits of WordPress is you can get an RSS feed for almost everything by adding /feed/ or ?feed=rss2 to urls (other platforms also support this; I have a vague recollection of doing something similar in Blogger(?)). For example, if you want a feed of all my Google Apps posts you can use http://mashe.hawksey.info/category/google-apps/feed/.

Even better is you can combine tags/categories with a ‘+’ operator so if you want a feed of all my Google Apps posts that are also categorised with Twitter you can use http://mashe.hawksey.info/category/google-apps+twitter/feed/.

So to get the Bebop ‘nutshell’ categorised post as a RSS item we can use: http://bebop.blogs.lincoln.ac.uk/category/nutshell/feed/

Looking at one of the shared wordpress blogs to get the ‘nutshell’ from RedFeather you can use: http://blogs.ecs.soton.ac.uk/oneshare/tag/redfeather+nutshell/feed/
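The url-building pattern above can be sketched as a small helper; the base urls are the real examples just given, but the function itself is my own illustration of what the spreadsheet assembles from columns O-Q:

```javascript
// Build a WordPress category/tag feed url, joining multiple terms with "+".
function feedUrl(base, kind, terms) {
  return `${base.replace(/\/$/, "")}/${kind}/${terms.join("+")}/feed/`;
}

console.log(feedUrl("http://bebop.blogs.lincoln.ac.uk", "category", ["nutshell"]));
// → http://bebop.blogs.lincoln.ac.uk/category/nutshell/feed/

console.log(feedUrl("http://blogs.ecs.soton.ac.uk/oneshare", "tag", ["redfeather", "nutshell"]));
// → http://blogs.ecs.soton.ac.uk/oneshare/tag/redfeather+nutshell/feed/
```

The "+" join is what lets a shared blog be narrowed to a single project's posts.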

The ‘import’ functions in Google Spreadsheet must be my favourites, and I know lots of social media professionals who use them to pull data into a spreadsheet and produce reports for clients from it. With importFeed we can go and see if a blog post under a certain category exists and then return something back, in this case the post link. For my first iteration of this spreadsheet I used the formula below:

This works well, but one of the drawbacks of importFeed is we can only have a maximum of 50 of them in one spreadsheet. With 15 projects and 9 tags/categories the maths doesn’t add up.

To get around this I switched to Google Apps Script (the macro language for Google Spreadsheets I write a lot about). This doesn’t have an importFeed function built in, but I can do a UrlFetch and parse the XML. Here’s the code which does this (included in the template):

Note this code also uses the Cache Service to improve performance and make sure I don’t go over my UrlFetch quota.
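The template's actual Apps Script isn't reproduced here, but the cache-then-fetch pattern it uses can be sketched in plain JavaScript, with the fetch and cache swapped for stand-ins (in Apps Script these would be UrlFetchApp.fetch and CacheService; cachedFetch and makeCache are my own names):

```javascript
// Return the feed body from cache when present; otherwise fetch it,
// store it for ttlSeconds, and return it. fetcher/cache are injected
// stand-ins for Apps Script's UrlFetchApp.fetch and CacheService.
function cachedFetch(url, fetcher, cache, ttlSeconds = 600) {
  const hit = cache.get(url);
  if (hit !== undefined && hit !== null) return hit; // cache hit: no fetch
  const body = fetcher(url);
  cache.put(url, body, ttlSeconds);
  return body;
}

// Minimal in-memory cache with the same get/put shape as CacheService.
function makeCache() {
  const store = new Map();
  return {
    get: (k) => store.get(k),
    put: (k, v, _ttl) => store.set(k, v), // ttl ignored in this sketch
  };
}
```

On a second call for the same url the fetcher is never hit, which is what keeps the script inside its UrlFetch quota.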

### Trouble at the tagging mill

So we have a problem getting data from non-WordPress blogs, which I’m quietly ignoring for now; the next problem is people not tagging/categorising posts correctly. For example, I can see Access to Math have 10 posts including a ‘nutshell’, but none of these are tagged. From the machine side there’s not much I can do about this, but at least from the dashboard I can spot something isn’t right.

#### Tags for a template

I’m sure once projects are politely reminded to tag posts they’ll oblige. One incentive might be to say that if posts are tagged correctly then the code above could easily be extended to pull not just post links but the full post text, which could then be used to generate the project’s final submission.

### Summary

So stay tuned to the OERRI Project Post Directory spreadsheet to see if I can incorporate MEDEV and Cloudworks feeds, and also if I can create a template for final posts. Given Brian’s post on usage data mentioned at the beginning, should I also be tracking post activity data on social networks, or is that a false metric?

I’m sure there was something else but it has entirely slipped my mind …

# Analytics Reconnoitre: Notes on Open Solutions in Big Data from #esym12

A couple of weeks ago it was Big Data Week, “a series of interconnected activities and conversations around the world across not only technology but also the commercial use case for Big Data”.

big data[1][2] consists of data sets that grow so large and complex that they become awkward to work with using on-hand database management tools. Difficulties include capture, storage,[3] search, sharing, analytics,[4] and visualizing – Wikipedia

In O’Reilly Radar there was a piece on Big data in Europe which had Q&A from Big Data Week founder/organizer Stewart Townsend, and Carlos Somohano both of whom are big in Big Data.

Maybe I’m being naïve, but I was surprised that there was no reference to what the university/research sector is doing with handling and analysing large data sets. For example, at the Sanger Institute alone each of their DNA sequencers is generating 1 terabyte (1024 gigabytes) of data a day, and they are storing over 17 petabytes (17 million gigabytes), a figure which is doubling every year.

Those figures trip off my tongue because last week I was at the Eduserv Symposium 2012: Big Data, Big Deal?, which had many examples of how institutions are dealing with ‘big data’. There were a couple of things I took away from this event, like the prevalence of open source software as well as the number of vendors wrapping open source tools with their own systems to sell as a service. Another clear message was the lack of data scientists who can turn raw data into information and knowledge.

As part of the Analytics Reconnoitre we are undertaking at JISC CETIS, in this post I want to summarise some of the open source tools and ‘as a service’ offerings in the Big Data scene.

[Disclaimer: I should say first that I’m coming to this area cold. I’m not an information systems expert, so what you’ll see here is a very top-level view, more often than not me joining the dots from things I learned 5 minutes ago. So if you spot anything I’ve got wrong or bits I’m missing let me know]

### Open source as a Service

Some of the aaS’s:

• CaaS – Cluster as a Service
• IaaS – Infrastructure as a Service
• SaaS – Software as a Service
• PaaS – Platform as a Service

I’ve already highlighted how the open source R statistical computing environment is being used as an analytics layer. Open source is alive and well in other parts of the infrastructure. First up at the event was Rob Anderson from Isilon Systems (a division of EMC) talking about Big Data and the implications for storage. Rob did a great job introducing Big Data, and a couple of things I took away were the message that there is a real demand for talented ‘data scientists’ and the need to get organisations to think differently about data.

If you look at some of the products/services EMC offer you’ll find EMC Greenplum Database and HD Community Editions (Greenplum is a set of products to handle ‘Big Data’). You’ll see that these include the open source Apache Hadoop ecosystem. If, like me, you’ve heard of Hadoop but don’t really understand what it is, here is a useful post on Open source solutions for processing big data and getting Knowledge. This highlights components of the Hadoop ecosystem, most of which appear in the Greenplum Community Edition (I was very surprised to see that the NoSQL database Cassandra, now part of the Hadoop ecosystem, was originally developed by Facebook and released as open source code – more about NoSQL later).

### Open algorithms, machines and people

The use of open source in big data was also highlighted by Anthony D Joseph, Professor at the University of California, Berkeley, in his talk. Anthony was highlighting UC Berkeley’s AMPLab, which is exploring “Making Sense at Scale” by tightly integrating algorithms, machines and people (AMP). The slide (right) from Anthony’s presentation summarises what they are doing, combining 3 strands to solve big data problems.

They are achieving this by combining existing tools with new components. In the slide below you have the following pieces developed by AMPLab:

• Apache Mesos – an open source cluster manager
• Spark – an open source system for interactive and iterative data analysis
• PIQL – Performance (predictive) Insightful Query Language (part of SCADS. There’s also PIQL-on-RAILS plugin MIT license)

In the Applications/tools box are: advanced ML algorithms; interactive data mining; collaborative visualisation. I’m not entirely sure what these are, but in Anthony’s presentation he mentioned that more open source tools are required, particularly in ‘new analysis environments’.

Here are the real applications of AMPLab Anthony mentioned:

[Another site mentioned by Anthony worth bookmarking/visiting is DataKind – ‘helping non-profits through pro bono data collections, analysis and visualisation’]

#### OpenStack

Another cloud/big data/open source tool I know of but which wasn’t mentioned at the event is OpenStack. This was initially developed by the commercial hosting service Rackspace and NASA (who it has been said are ‘the largest collector of data in human history’). Like Hadoop, OpenStack is a collection of tools/projects rather than one product. OpenStack contains OpenStack Compute, OpenStack Object Storage and OpenStack Image Service.

### NoSQL

In computing, NoSQL is a class of database management system identified by its non-adherence to the widely-used relational database management system (RDBMS) model … It does not use SQL as its query language … NoSQL database systems are developed to manage large volumes of data that do not necessarily follow a fixed schema – Wikipedia

NoSQL came up in Simon Metson’s (University of Bristol), Big science, Big Data session. This class of database is common in big data applications but Simon underlined that it’s not always the right tool for the job:

This view is echoed by Nick Jackson (University of Lincoln), who did an ‘awesome’ introduction to MongoDB (one of the many open source NoSQL solutions) as part of the Managing Research Data Hack Day organised by DevCSI/JISC MRD. I strongly recommend you look at the resources that came out of this event, including other presentations from the University of Bristol on data.bris.

[BTW the MongoDB site has a very useful page highlighting how it differs from another open source NoSQL solution, CouchDB. So even NoSQL solutions come in many flavours. Also Simon Hodson, Programme Manager, JISC MRD, gave a lightning talk on JISC and Big Data at the Eduserv event]

### Summary

The amount of open source solutions in this area is perhaps not surprising, as the majority of the web (65% according to the last Netcraft survey) runs on the open source Apache server. It’s interesting to see that code is being contributed not only by the academic/research community but also by companies like Facebook who deal with big data on a daily basis. Assuming the challenge isn’t technical, it then becomes about organisations understanding what they can do with data and having the talent in place (data scientists) to turn data into ‘actionable insights’.

Here are videos of all the presentations (including links to slides where available)

For those of you who have made it this far through my deluge of links, please feel free to now leave this site and watch some of the videos from the Data Scientist Summit 2011 (I’m still working my way through them, but there are some inspirational presentations).

Update: Sander van der Waal at OSS Watch, who was also at #esym12, has also posted The dominance of open source tools in Big Data.


# Visual Analytics: Comparison of @SCOREProject and @UKOER (and template for making your own)

Lou McGill from the JISC/HEA OER Programme Synthesis and Evaluation team recently contacted me as part of the OER Review, asking if there was a way to analyse and visualise the Twitter followers of @SCOREProject and @ukoer. Having recently extracted data for the @jisccetis network of accounts I knew it was easy to get the information, but making it meaningful was another question.

There are a growing number of sites like twiangulate.com and visual.ly that make it easy to generate numbers and graphics. One of the limitations I find with these tools is they produce flat images, and all opportunity for ‘visual analytics’ is lost.

Twiangulate data | create infographics with visual.ly

So here’s my take on the problem: a template constructed with free and open source tools that lets you visually explore the @SCOREProject and @ukoer Twitter following.

In this post I’ll give my narrative on the SCOREProject/UKOER Twitter followership and give you the basic recipe for creating your own comparisons (I should say that the solution isn’t production quality, but I need to move on to other things so someone else can tidy it up).

Let’s start with the output. Here’s a page comparing the Twitter following of SCOREProject and UKOER. At the top each bubble represents someone who follows SCOREProject or UKOER (hovering over a bubble shows who they are, and clicking filters the summary table at the bottom).

### Bubble size matters

There are three options to change how the bubbles are sized:

• Betweenness Centrality (a measure of the community bridging capacity; see Sheila’s post on this);
• In-Degree (how many other people who follow SCOREProject or ukoer also follow the person represented by the bubble); and
• Followers count (how many people follow the person represented by the node).

Clicking the ‘Grouped’ button lets you see how bubbles/people follow either the SCOREProject, UKOER or both. By switching between betweenness, degree and followers we can visually spot a couple of things:

• Betweenness Centrality: SCOREProject has 3 well connected intercommunity bubbles: @GdnHigherEd, @gconole and @A_L_T. UKOER has the SCOREProject following them, which unsurprisingly makes them a great bridge to the SCOREProject community (if you are wondering where UKOER is, as they don’t follow SCOREProject they don’t appear).
• In-Degree: Switching to In-Degree we can visually see that the overall volume of the UKOER group grows more, despite the SCOREProject bubble in this group decreasing substantially. This suggests to me that the UKOER following is more interconnected.
• Followers count: Here we see SCOREProject is the biggest winner thanks to being followed by @douglasi, who has over 300,000 followers. So whilst SCOREProject is followed by fewer people than UKOER, it has a potentially greater reach if @douglasi ever retweeted a message.

### Colourful combination

Sticking with the grouped bubble view we can see different colour groupings within the clusters for SCOREProject, UKOER and both. The most noticeable is the light green used to identify Group 4, which has 115 people following SCOREProject compared to 59 following UKOER. The groupings are created using the community structure detection algorithm proposed by Joerg Reichardt and Stefan Bornholdt. To give a sense of who these sub-groups might represent, individual wordclouds have been generated based on the individual Twitter profile descriptions. Clicking on a word within these clouds filters the table, so for example you can explore who has used the term manager in their twitter profile (I have to say the update isn’t instant, but it’ll get there).

### Behind the scenes

The bubble chart is coded in d3.js and based on the Animated Bubble Chart by Jim Vallandingham. The modifications I made were to allow bubble resizing (lines 37-44). This also required handling the bubble charge slightly differently (line 118). I got the idea of using the bubble chart for comparison from a Twitter Abused post, Rape Culture and Twitter Abuse. It also made sense to reuse Jim’s template, which uses Twitter Bootstrap. The wordclouds are also rendered using d3.js, via the d3.wordcloud extension by Jason Davies. Finally, the table at the bottom is rendered using the Google Visualisation API/Google Chart Tools.

All the components play nicely together although the performance isn’t great. If I have more time I might play with the load sequencing, but it could be I’m just asking too much of things like the Google Table chart rendering 600 rows.

### How to make your own

I should say that this recipe probably won’t work for accounts with over 5,000 followers. It also involves using R (in my case RStudio), which is used to do the network analysis/community detection side. You can download a copy of the script here. There’s probably an easier recipe that skips this part, which is worth revisiting.

1. We start with taking a copy of Export Twitter Friends and Followers v2.1.2 [Network Mod] (as featured in Notes on extracting the JISC CETIS twitter follower network).
2. Authenticate the spreadsheet with Twitter (instructions in the spreadsheet) and then get the followers if the accounts you are interested in using the Twitter > Get followers menu option
3. Once you’ve got the followers run Twitter > Combine follower sheets Method II
4. Move to the Vertices sheet and sort the data on the friends_count column
5. In batches of around 250 rows select values from the id_str column and run TAGS Advanced > Get friend IDs – this will start populating the friends_ids column with data. For users with over 5,000 friends reselect their id_str and rerun the menu option until the ‘next_cursor’ equals 0
6. Next open the Script editor and open the TAGS4 file and then Run > setup.
7. Next select Publish > Publish as a service… and allow anyone to invoke the service anonymously. Copy the service URL and paste it into the R script downloaded earlier (also add the spreadsheet key to the R script, and within your spreadsheet File > Publish to the web).
8. Run the R script! ...  and fingers crossed everything works.

The files used in the SCOREProject/UKOER comparison can be downloaded from here. Changes you’ll need to make are adding the output csv files to the data folder, changing references in js/gtable.js and js/wordcloud.js, and changing the labels used in coffee/coffee.vis.

So there you go. I’ve spent way too much of my own time on this and haven’t really explained what is going on. Hopefully the various comments in the source code remove some of the magic (I might revisit the R code as in some ways I think it deserves a post of its own). If you have any questions or feedback leave them in the comments ;)

# Analytics Reconnoitre: Notes on R in education and industry

As part of my role at JISC CETIS I’ve been asked to contribute to our ‘Analytics Reconnoitre’, which is a JISC commissioned project looking at the data and analytics landscape. One of my first tasks is to report on the broad landscape and trends in analytics service and data providers. Whilst I’m still putting this report together, it’s been interesting to note how one particular analytics tool, R, keeps pinging on my radar. I thought it would be useful to loosely join these sightings together and share.

### Before R, the bigger ‘data science’ picture

Before I go into R there is some more scene setting required. As part of the Analytics Reconnoitre Adam Cooper (JISC CETIS) has already published Analytics and Big Data - Reflections from the Teradata Universe Conference 2012 and Making Sense of “Analytics”.

The Analytics and Big Data post is an excellent summary of the Teradata Universe event, and in it Adam notes some very useful thoughts on ‘What this Means for Post-compulsory Education’, including identifying pathways for education to move forward with business intelligence and analytics. One of these I particularly liked was:

Experiment with being more analytical at craft-scale
Rather than thinking in terms of infrastructure or major initiatives, get some practical value with the infrastructure you have. Invest in someone with "data scientist" skills as master crafts-person and give them access to all data but don't neglect the value of developing apprentices and of developing wider appreciation of the capabilities and limitations of analytics.

[I’m biased towards this path because it encapsulates a lot of what I aspire to be. The craft model was one introduced to me by Joss Winn at this year’s Dev8D, and coming from a family of craftsmen it makes me more comfortable to think I’m continuing the tradition in some way.]

Here are Adam’s observations and reflections on ‘data science’ from the same blog post:

"Data Scientist" is a term which seems to be capturing the imagination in the corporate big data and analytics community but which has not been much used in our community.

A facetious definition of data scientist is "a business analyst who lives in California". Stephen Brobst gave his distinctions between data scientist and business analyst in his talk. His characterisation of a business analyst is someone who: is interested in understanding the answers to a business question; uses BI tools with filters to generate reports. A data scientist, on the other hand, is someone who: wants to know what the question should be; embodies a combination of curiosity, data gathering skills, statistical and modelling expertise and strong communication skills. Brobst argues that the working environment for a data scientist should allow them to self-provision data, rather than having to rely on what is formally supported in the organisation, to enable them to be inquisitive and creative.

Michael Rappa from the Institute for Advanced Analytics doesn't mention curiosity but offers a similar conception of the skill-set for a data scientist in an interview in Forbes magazine. The Guardian Data Blog has also reported on various views of what comprises a data scientist in March 2012, following the Strata Conference.

While it can be a sign of hype for new terminology to be spawned, the distinctions being drawn by Brobst and others are appealing to me because they are putting space between mainstream practice of business analysis and some arguably more effective practices. As universities and colleges move forward, we should be cautious of adopting the prevailing view from industry - the established business analyst role with a focus on reporting and descriptive statistics - and missing out on a set of more effective practices. Our lack of baked-in BI culture might actually be a benefit if it allows us to more quickly adopt the data scientist perspective alongside necessary management reporting. Furthermore, our IT environment is such that self-provisioning is more tractable.

### R in data science and in business

For those that don’t know, R is an open source statistical programming language. If you want more background on the development of R, Information Age covers this in their piece Putting the R in analytics. An important thing to note, which is covered in the story, is that R was developed by two academics at the University of Auckland and continues to have a very strong and active academic community supporting it. Whilst initially used as an academic tool, the article highlights how it is being adopted by the business sector.

I originally picked up the Information Age post via the Revolutions blog (hosted by Revolution Analytics) in the post Information Age: graduates driving industry adoption of R, which includes the following quotes from Information Age:

This popularity in academia means that R is being taught to statistics students, says Matthew Aldridge, co-founder of UK- based data analysis consultancy Mango Solutions. “We're seeing a lot of academic departments using R, versus SPSS which was what they always used to teach at university,” he says. “That means a lot of students are coming out with R skills.”

Finance and accounting advisory Deloitte, which uses R for various statistical analyses and to visualise data for presentations, has found this to be the case. “Many of the analytical hires coming out of school now have more experience with R than with SAS and SPSS, which was not the case years ago,” says Michael Petrillo, a senior project lead at Deloitte's New York branch.

Revolutions have picked up other stories related to R in big data and analytics. Two I have bookmarked are Yes, you need more than just R for Big Data Analytics, in which Revolutions editor David Smith underlines that having tools like R isn’t enough and a wider data science approach is needed because “it combines the tool expertise with statistical expertise and the domain expertise required to understand the problem and the data applicable to it”.

Smith also reminds us that:

The R software is just one piece of software ecosystem — an analytics stack, if you will — of tools used to analyze Big Data. For one thing R isn't a data store in its own right: you also need a data layer where R can access structured and unstructured data for analysis. (For example, see how you can use R to extract data from Hadoop in the slides from today's webinar by Antonio Piccolboni.) At the analytics layer, you need statistical algorithms that work with Big Data, like those in Revolution R Enterprise. And at the presentation layer, you need the ability to embed the results of the analysis in reports, BI tools, or data apps.

[Revolutions also has a comprehensive list of R integrated throughout the enterprise analytics stack which includes vendor integrations from IBM, Oracle, SAP and more]

The second post from Revolutions is R and Foursquare's recommendation engine which is another graphic illustration of how R is being used in the business sector separately from vendor tools.

### Closing thoughts

At this point it’s worth highlighting another of Adam’s thoughts on directions for academia in Analytics and Big Data:

Don't focus on IT infrastructure (or tools)
Avoid the temptation (and sales pitches) to focus on IT infrastructure as a means to get going with analytics. While good tools are necessary, they are not the right place to start.

I agree about not being blinkered by specific tools and, as pointed out earlier, R can only ever be one piece of the software ecosystem; any good data scientist will use the right tool for the job. It's interesting to see an academic tool being adopted by, and arguably driving, part of the commercial sector. Will academia follow where they have led – if you see what I mean?

Posted in Analytics, JISC CETIS and tagged on .

# What I’ve starred this month: April 28, 2012

Here's some posts which have caught my attention this month:

Automatically generated from my Diigo Starred Items.
Posted in Starred on .

This morning I finished listening to Episode 5 of Data Stories: How To Learn Data Visualization. Data Stories is a bi-weekly podcast on data visualisation produced by Enrico Bertini and Moritz Stefaner, with episode 5 also featuring Andy Kirk. For anyone interested in data visualisation I'd highly recommend you give it a listen.

Like many others I'm at the beginning of my data visualisation journey, and one of the things this episode highlighted was that there is a whole world of data visualisation experts out there that I've yet to start stealing learning from. Fortunately today another visualisation expert, Nathan Yau (FlowingData), posted his list of Data and visualization blogs worth following. Perfect!

I could've gone through the list and individually subscribed to each of the blogs' feeds but I'm lazy (so lazy that a 15 minute hack has turned into a 3 hour write-up <sigh>) and just wanted to dump them into my Google Reader. This is a problem Tony Hirst has encountered in Feed-detection From Blog URL Lists, with OPML Output. One thing that is not clear is how Tony got his two-column CSV of source urls. There are various tools Tony could have used to do this. Here's my take on converting a page of blog urls into an OPML bundle.

### Step 1 Extracting blogs urls: Method 1 using Scraper Chrome Extension

“Scraper is a Google Chrome extension for getting data out of web pages and into spreadsheets.”

Chrome users can grab a copy of Scraper here. Once installed, go to Nathan Yau's Data and visualization blogs worth following, right-click on the first link in the list and select 'Scrape similar'.

In the window that opens you should get something similar to the one below. Scraper has two options for identifying the parts of the page you want to extract: XPath or jQuery selectors. Both of these have similar coding structures but for this example I'm going to stick with XPath. XPath basically provides a way to identify parts of the XML/HTML structure and extract content (if you are not familiar with XPath the w3schools tutorial is a great starting point).

In this example Scraper should default to ‘//div[1]/div[2]/ul[1]/li/a’. Here's a quick explanation of how I read this query. Because it starts with // it will select "nodes in the document from the current node that match the selection no matter where they are"; for me this is the trigger to read the query from right to left as we are matching an endpoint pattern. So:

match all <a> in all <li> in the first <ul> of the second <div> (<div class="entry-content">) of the first <div> (<div class="entry">)

this gives us the links from the first block of bullet points. We want the links from all of the bullet point lists, so the pattern we want is

match first <a> in all <li> in all <ul> of second <div> of first <div>

So basically we need to switch a to a[1] and ul[1] to ul, e.g. ‘//div[1]/div[2]/ul/li/a[1]’. Edit the XPath query and, in the columns section beneath, change the order by clicking and dragging so that @href/URL comes first. Click on the ‘Scrape’ button to get a new preview, which should now contain a list of 37 urls. Click on Export to Google Docs … You are now ready to move to Step 2 Auto-discovering feed urls below.
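If you'd rather see the same idea outside a browser extension, here's a rough Python sketch using only the standard library. The HTML snippet is an invented stand-in for the structure of Nathan's page, and ElementTree only supports a subset of XPath, so the query is approximated rather than copied verbatim:

```python
import xml.etree.ElementTree as ET

# invented stand-in for the page structure described above
page = """<div class="entry">
  <div class="meta"/>
  <div class="entry-content">
    <ul>
      <li><a href="http://flowingdata.com/">FlowingData</a> | <a href="http://flowingdata.com/feed/">RSS</a></li>
      <li><a href="http://infosthetics.com/">information aesthetics</a></li>
    </ul>
  </div>
</div>"""

root = ET.fromstring(page)
# rough equivalent of //div[1]/div[2]/ul/li/a[1]: take the second <div>
# child, then the first <a> in every <li> of every <ul>
links = [(a.get("href"), a.text)
         for li in root.findall("./div[2]/ul/li")
         for a in li.findall("a")[:1]]
print(links)
```

Note how taking `findall("a")[:1]` per `<li>` mirrors the a → a[1] switch discussed above: we want one link per bullet, not every link in the first bullet block.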

### Step 1 Extracting blogs urls: Method 2 using Google Spreadsheet importXML function

Another way to get this data is to scrape it directly in Google Spreadsheets using the importXML function. This function also uses XPath to extract parts of a webpage so we can reuse the query used in Method 1 but get the data straight into a spreadsheet (it's also a live link so if Nathan adds a new link the spreadsheet will automatically update to include it). Let's give it a go.

Create a new spreadsheet and in cells A1 to C1 enter the column headings Link, Title and Url. Next in cell A2 enter:

=ImportXML("http://flowingdata.com/2012/04/27/data-and-visualization-blogs-worth-following/","//div[1]/div[2]/ul/li/a[1]/@href")

Note the addition of @href. This is included to extract the href attribute in the <a>. You should now have a similar list of 37 urls from Nathan's post. To get titles we could enter another importXML function in cell B2 using the XPath ‘//div[1]/div[2]/ul/li/a[1]’ which will extract the text between <a></a>. Another way is to actually scrape the data from the target url. So in cell B2 enter:

=ImportXML(A2,"//title")

So this will go to the url in A2 (http://infosthetics.com/) and extract anything wrapped in <title>.

Now select cell B2 and fill the column down to get titles for all the urls. Finally we need to select the entire B column and Copy/Paste values only. The reason we do this is Google Spreadsheets only allows 50 importXML functions per spreadsheet and we'll need 37 more to get the RSS feeds for these sites.

### Step 2 Auto-discovering feed urls

Initially I tried using Feed Autodiscovery With YQL with importXML using an XPath of "//link/@href" but I was not getting any results. So instead I decided to auto-detect the feed directly using importXML. In cell C2 enter:

=ImportXML(A2,"/html/head/link[@rel='alternate'][1]/@href")

This time the XPath starts at the XML tree root (<html>) and looks in the <head> for the first link with the attribute rel='alternate'. From Tony's post:

Remember, feed autodiscovery relies on the web page containing the following construction in the HTML <head> element:

<link rel="alternate" type="application/rss+xml" href="…" />

Fill cell C2 down the rest of the column to get RSS feeds for the other urls. You'll notice that there's a #N/A for http://neoformix.com/; this is because their feed isn't auto-discoverable. Visiting their site there is an XML link (http://neoformix.com/index.xml) that we can just paste into our spreadsheet (tidying data is a usual process in data visualisation).
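For anyone who wants this auto-discovery step outside a spreadsheet, here's a minimal Python sketch with the standard library's HTMLParser. The sample page and feed URL are made up for illustration; a real version would fetch each blog URL with urllib first:

```python
from html.parser import HTMLParser

class FeedFinder(HTMLParser):
    """Collect <link rel="alternate"> feed URLs from a page's <head>."""
    def __init__(self):
        super().__init__()
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if (tag == "link" and a.get("rel") == "alternate"
                and a.get("type") in ("application/rss+xml",
                                      "application/atom+xml")):
            self.feeds.append(a.get("href"))

# made-up sample standing in for a fetched blog homepage
sample = """<html><head>
<title>FlowingData</title>
<link rel="alternate" type="application/rss+xml" href="http://flowingdata.com/feed/" />
</head><body></body></html>"""

finder = FeedFinder()
finder.feed(sample)
print(finder.feeds)
```

Filtering on the type attribute is what distinguishes genuine feed links from the other rel="alternate" uses (translations, mobile versions and so on).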

### Step 3 Generating an OPML bundle

You should now have a spreadsheet like this one with 3 columns of data (if you used the Scraper extension in step 1/method 1 you'll need to make sure your columns are headed Link, Title and Url for the next step). Next we need to turn our spreadsheet of feeds into an OPML bundle. Fortunately this step has been made super easy by the Spreadsheet -> OPML Generator. Just follow the instructions on this site and seconds later you've got:

OPML File of Nathan Yau’s recommended Data and Visualisation Blogs

And because I’ve imported these into Google Reader here’s an aggregated page of their posts.
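If you'd rather skip the third-party generator, the OPML itself is simple enough to emit directly. A sketch with Python's ElementTree, using a couple of the feeds from earlier as sample data:

```python
import xml.etree.ElementTree as ET

# sample (title, feed url) pairs; in practice these come from the spreadsheet
feeds = [("FlowingData", "http://flowingdata.com/feed/"),
         ("Neoformix", "http://neoformix.com/index.xml")]

opml = ET.Element("opml", version="1.0")
head = ET.SubElement(opml, "head")
ET.SubElement(head, "title").text = "Data and visualisation blogs"
body = ET.SubElement(opml, "body")
for title, xml_url in feeds:
    # one <outline> per feed is all a reader needs for a subscription bundle
    ET.SubElement(body, "outline", type="rss", text=title,
                  title=title, xmlUrl=xml_url)

print(ET.tostring(opml, encoding="unicode"))
```

The resulting file imports straight into Google Reader (or any other aggregator that accepts OPML).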


# Google Docs defaults to searching for Creative Commons licensed images. Great, but could they do better?

A retweet yesterday by Amber Thomas (@ambrouk) of Anna Armstrong (@frenchdisko) alerted me to a feature of Google Docs I wasn’t aware of, that the Insert Image Search automatically filters for Creative Commons released pictures:

Fantastic, I thought. A way for staff to create open resources with millions of pictures to choose from and reuse with no more effort than if they were inserting any other image. Such a feature obviously doesn't come without its health warnings. Clicking on ‘Learn more’ we can see:

Before reusing content that you've found, you should verify that its licence is legitimate and check the exact terms of reuse stated in the licence.
For example, most licences require that you give credit to the image creator when reusing an image. Google has no way of knowing whether the licence is legitimate, so we aren't making any representation that the content is actually lawfully licensed.

I can appreciate that Google’s search technology isn’t going to be 100% reliable in detecting which license is being used in an existing work, but wouldn’t it be great if when you inserted the image Google also gave their ‘best guess’ of the license for you to check and edit if necessary.

[This graphic includes Japanese light bulb - CC-BY Elliot Burke]

PS I don't know if something has gone horribly wrong with Google image indexing, but when in Insert Image Search I enter ‘lightbulb site:.flickr.com’ the thumbnails don't always match the actual image.

# JISC OER Rapid Innovation: Technical roundup and possible directions #oerri

As the JISC OER Rapid Innovation projects have either started or will start very soon, I thought it would be useful (mainly for my own benefit) to quickly summarise the technical choices and challenges.

### Attribute Images - University of Nottingham

Building on the Xpert search engine which has a searchable index of over 250,000 open educational resources, Nottingham are planning a tool to embed CC license information into images.

The Attribute Images project will extend the Xpert Attribution service by creating a new tool that allows users to upload images, either from their computer or from the web and have a Creative Commons attribution statement embedded in the images. … It will provide an option for the user to upload the newly attributed images to Flickr through the Flickr API … In addition it will have an API allowing developers to make use of the service in other sites.

From the project's first post, when they talk about ‘embedding’ CC statements it appears to mean visible watermarking. It'll be interesting to see if the project explores the Creative Commons recommended Adobe Extensible Metadata Platform (XMP) to embed license information into the image data. Something they might want to test is whether the Flickr upload preserves this data when resizing. Creative Commons also have a range of tools to integrate license selection so it'll be interesting to see if these are used or if there are compatibility issues.
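As a rough illustration of what an XMP licence statement looks like, here is my own minimal packet using the standard Creative Commons and Adobe rights namespaces (this is a sketch of the format, not anything taken from the Xpert service; a real tool would embed the packet in the image file, e.g. in a JPEG APP1 segment, rather than print it):

```python
# minimal XMP packet asserting a CC licence on an image
XMP_PACKET = """<x:xmpmeta xmlns:x="adobe:ns:meta/">
 <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <rdf:Description rdf:about=""
     xmlns:xmpRights="http://ns.adobe.com/xap/1.0/rights/"
     xmlns:cc="http://creativecommons.org/ns#">
   <xmpRights:Marked>True</xmpRights:Marked>
   <cc:license rdf:resource="{licence_url}"/>
  </rdf:Description>
 </rdf:RDF>
</x:xmpmeta>"""

packet = XMP_PACKET.format(
    licence_url="http://creativecommons.org/licenses/by/2.0/")
print(packet)
```

Unlike a visible watermark, metadata like this survives cropping but is easily stripped by resizing services, which is exactly why testing the Flickr round-trip matters.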

### Bebop – University of Lincoln

Bebop is looking to help staff at Lincoln centralise personal resource creation activity from across platforms into a single stream.

This project will undertake research and development into the use of BuddyPress as an institutional academic profile management tool which aggregates teaching and learning resources as well as research outputs held on third-party websites into the individual’s BuddyPress profile. … This project will investigate and develop BuddyPress so as to integrate (‘consume’) third-party feeds and APIs into BuddyPress profiles and, furthermore, investigate the possibility of BuddyPress being used as a ‘producer application’ of data for re-publishing on other institutional websites and to third-party web services.

In a recent project post asking Where are the OERs? you can get an idea of the 3rd party APIs they will be looking at, which include Jorum/DSpace, YouTube, Slideshare etc. Talking to APIs isn't a problem, after all that is what they are designed to do, and having developed plugins for WordPress/BuddyPress myself I know it's a great platform to work on. The main technical challenge is more likely to be doing this at scale and handling the variability in the type of data returned. It'll also be interesting to see if Bebop can be built with flexibility in mind (creating its own APIs so that it can be used on other platforms) – it looks like the project is going down the RSS-endpoint aggregation route.

### Breaking Down Barriers: Building a GeoKnowledge Community with OER

The proposed project aims to Build a GeoKnowledge Community at Mimas by utilising existing technologies (DSpace) and services (Landmap/Jorum). The aim of the use case is to open-up 50% (8 courses) of the Learning Zone through Creative Commons (CC) Attribution Non-Commercial Share Alike (BY-NC-SA) license as agreed already with authors. A further aim is to transfer the hosting of the ELOGeo repository to Jorum from Nottingham (letter of support provided by University of Nottingham) and create a GeoKnowledge Community site embedded in Jorum using the DSpace API and linking the repository to the Landmap Learning Zone. … The technical solution in developing a specific community site within Jorum will be transferable to other communities that may have a similar requirement in the future.

I still don't feel I have a complete handle on the technical side of this project, but it's early days and already the project is producing a steady stream of posts on their blog. One for me to revisit.

### CAMILOE (Collation and Moderation of Intriguing Learning Objects in Education)

This project reclaims and updates 1800 quality assured evidence informed reviews of education research, guidance and practice that were produced and updated between 2003 and 2010 and which are now archived and difficult to access. … These resources were classified using a wide range of schemas including Dublin core, age range, teaching subject, resource type, English Teaching standard and topic area but are no longer searchable or browsable by these categories. … Advances in Open Educational Resources (OER) technologies provide an opportunity to make this resource useful again for the academics who created it. These tools include enhanced meta tagging schemas for journal documents, academic proofing tools, repositories for dissemination of OER resources, and open source software for journal moderation and para data concerning resource use.

So a lot of existing records to get into shape and put in something that makes them accessible again. Not only that, if you look at the project overview you can see usage statistics play an important part. CAMILOE is also one of the projects interested in depositing information into the UK Learning Registry node setup as part of the JLeRN Experiment.

Having dabbled with using Google Refine to get Jorum UKOER records into a different shape I wonder if the project will go down this route, or, given the number and existing shape of the records, manually re-index them. I'd be very surprised if RSS or OAI-PMH didn't make an appearance.

### Improving Accessibility to Mathematical Teaching Resources

Making digital mathematical documents fully accessible for visually impaired students is a major challenge to offer equal educational opportunities. … In this project we now want to turn our current program, that is the result of our research, into an assistive technology tool. … According to the identified requirements we will adapt and embed our tool into an existing open source solution for editing markup to allow post-processing of recognised and translated documents for correction and further editing. We will also add facilities to our tool to allow for suitable subject specific customisation by expert users. … In addition to working with accessibility support officers we also want to enable individual learners to employ the tool by making it available firstly via a web interface and finally for download under a Creative Commons License.

The project is building on their existing tool Maxtract which turns mathematical formulae in pdf documents into other formats including full text descriptions, which are more screen reader friendly (a post with more info on how it works). So turning

1/√(2π) ∫ℝ e^(−x²/2) dx = 1

into:

1 divided by square root of 2 pi integral sub R e to the power of minus x to the power of 2 slash 2 dx = 1.

The other formats the tool already supports are PDF annotated with LaTeX and XHTML. The project is partnering with JISC TechDis to gather specific user requirements.

### Linked Data Approaches to OERs

This project extends MIT’s Exhibit tool to allow users to construct bundles of OERs and other online content around playback of online video. … This project takes a linked data approach to aggregation of OERS and other online content in order  to improve the ‘usefulness’ of online resources for education. The outcome will be an open-source application which uses linked data approaches to present a collection of pedagogically related resources, framed within a narrative created by either the teacher or the students. The ‘collections’ or ‘narratives’ created using the tool will be organised around playback of rich media, such as audio or video, and will be both flexible and scaleable.

MIT's Exhibit tool, particularly the timeline aspect, was something I used in the OER Visualisation Project. The project has already produced some videos demonstrating a prototype that uses a timecode to control what is displayed (First prototype!, Prototype #2 and Prototype #2 (part two)). I'm still not entirely sure what ‘linked data approaches’ will mean in practice so it'll be interesting to see how that shapes up.

Linked Data Approaches to OERs Blog
Read more about Linked Data Approaches to OERs on the JISC site <- not on the site yet

### Portfolio Commons

… seeks to provide free and open source software tools that can easily integrate open educational practices (the creation, use and sharing of OERs) into the daily routines of learners and teachers … This project proposes to create a free open source plugin for Mahara that will enable a user to select content from their Mahara Portfolio, licence it with a Creative Commons licence of their choosing, create metadata and make a deposit directly into their chosen repositories using the SWORD protocol

The SWORD Protocol, which was developed with funding from JISC, has a healthy ecosystem of compliant repositories, clients and code libraries, so the technical challenge on that part is getting it wired up as a plugin for Mahara. Creative Commons also have a range of tools to integrate license selection for web applications. It'll be interesting to see if these are used.

When I met the project manager, John Casey, in London recently I also mentioned, given the arts background of this project, that it would be worth scoping whether integrating with the Flickr API would be useful. Given that the Attribute Images project mentioned above is looking at this part, the ideal scenario might be to link the Mahara plugin to an Attribute Images API, but timings might prevent that.

### Rapid Innovation Dynamic Learning Maps-Learning Registry (RIDLR)

Newcastle University’s Dynamic Learning Maps system (developed with JISC funding) is now embedded in the MBBS curriculum, and now being taken up in Geography and other subject areas … In RIDLR we will test the release of contextually rich paradata via the JLeRN Experiment to the Learning Registry and harvest back paradata about prescribed and additional personally collected resources used within and to augment the MBBS curriculum, to enhance the experience of teachers and learners. We will develop open APIs to harvest and release paradata on OER from end-users (bookmarks, tags, comments, ratings and reviews etc) from the Learning Registry and other sources for specific topics, within the context of curriculum and personal maps.

The technical challenge here is getting data into and out of the Learning Registry, so it'll be interesting to see what APIs they come up with. It'll also be interesting to see what data they can get and whether it's usable within Dynamic Learning Maps. More information including a use case for this project has been posted here.

### RedFeather (Resource Exhibition and Discovery)

RedFeather (Resource Exhibition and Discovery) is a proposed lightweight repository server-side script that fosters best practice for OER, it can be dropped into any website with PHP, and which enables appropriate metadata to be assigned to resources, creates views in multiple formats (including HTML with in-browser previews, RSS and JSON), and provides instant tools to submit to Xpert and Jorum, or migrate to full repository platforms via SWORD.

The above quote nicely summarises the technical headlines. In a recent blog post the team illustrate how RedFeather might be used in a couple of use cases. The core component appears to be creating a single file (coded in PHP, which is a server side scripting language) and transferring files/resources to a web server. It'll be interesting to see if the project explores different deployments, for example, packaging RedFeather on a portable web server (server on a usb stick), or maybe deploying on Scraperwiki (a place in the cloud where you can execute PHP), or looking at how other cloud/3rd party services could be used. Update: I forgot to mention the OERPub API which is built on SWORD v2. The interesting part that I'm watching closely is whether this API will provide a means to publish to non-SWORDed repositories like YouTube, Flickr and Slideshare.

### Sharing Paradata Across Widget Stores (SPAWS)

We will use the Learning Registry infrastructure to share paradata about Widgets across multiple Widget Stores, improving the information available to users for selecting widgets and improving discovery by pooling usage information across stores.

For more detail on what paradata will be included, the SPAWS nutshell post says:

each time a user visits a store and writes a review about a particular widget/gadget, or rates it, or embeds it, that information can potentially be syndicated to other stores in the network

There’s not much for me to add about the technical side of this project as Scott has already posted a technical overview and gone into more detail about the infrastructure and some initial code.

### SPINDLE: Increasing OER discoverability by improved keyword metadata via automatic speech to text transcription

SPINDLE will create linguistic analysis tools to filter uncommon spoken words from the automatically generated word-level transcriptions that will be obtained using Large Vocabulary Continuous Speech Recognition (LVCSR) software. SPINDLE will use this analysis to generate a keyword corpus for enriching metadata, and to provide scope for indexing inside rich media content using HTML5.

Enhancing the discoverability of audio/media is something I'm very familiar with, having used tweets to index videos. My enthusiasm for this area took a knock when I discovered Mike Wald's Synote system which uses IBM's ViaScribe to extract annotations from video/audio. There's a lot of overlap between Synote and SPINDLE which is why it was good to see them talking to each other at the programme start-up meeting. As far as I'm aware JISC funding for Synote ended in 2009 (but has just been refunded for a mobile version) so now is a good time to look at how open source LVCSR software can be used in a scenario where accuracy for accessibility as an assistive technology is being replaced by best guess to improve accessibility in terms of discoverability.

In terms of the technical side it will be interesting to see if SPINDLE looks at WebVTT, which seems to be finding its way through the W3C and does include an option for metadata (the issue might be that the ‘V’ in WebVTT stands for video). Something that I hope doesn't put SPINDLE off looking at WebVTT is the lack of native browser support (although it is on the way); there are some JavaScript libraries you can use to handle WebVTT. It'll also be interesting to see if there is a chance to compare (or highlight existing research comparing) an open source offering like Sphinx with a commercial one (e.g. ViaScribe).
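To make the WebVTT-for-metadata idea concrete, here is a sketch of emitting a keyword track with JSON payloads in the cues. The cue timings and keywords are invented, and the JSON-in-cue convention is my own guess at a format (WebVTT deliberately leaves metadata cue content up to the author):

```python
import json

# (start, end, keywords) triples as a SPINDLE-style tool might produce them
keywords = [("00:00:05.000", "00:00:15.000", ["speech recognition", "LVCSR"]),
            ("00:01:10.000", "00:01:30.000", ["metadata", "discoverability"])]

lines = ["WEBVTT", ""]
for start, end, words in keywords:
    lines.append(f"{start} --> {end}")              # cue timing line
    lines.append(json.dumps({"keywords": words}))   # arbitrary cue payload
    lines.append("")
vtt = "\n".join(lines)
print(vtt)
```

A track like this could be attached with `<track kind="metadata">` and read from JavaScript, which is exactly the "indexing inside rich media content using HTML5" scenario SPINDLE describes.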

### SupOERGlue

SuperOERGlue will pilot the integration of OER Glue with Newcastle University’s Dynamic Learning Maps, enabling easy content creation and aggregation from within the learning and teaching support environments, related to specific topics. … Partnering with Tatemae to use OER Glue, which harvests OER from around the world and has developed innovative ways for academics and learners to aggregate customised learning packages constructed of different OER, will enable staff and students to create their own personalised resource mashups which are directly related to specific topics in the curriculum.

Tatemae have a track record of working with open educational resources and courseware including developing OER Glue. There's not a huge amount for me to say on the technical side. I did notice that OER Glue currently only works in the Google Chrome web browser. Having worked in a number of institutions where installing extra software is a chore it'll be interesting to see if this causes a problem. More information including a use case for this project has been posted here. Update: Related to the RedFeather update, I'm wondering if SupOERGlue will be looking at OERPub (“An architecture for remixable Open Educational Resources (OER)”) as a framework to republish OER.

### Synote Mobile

Synote Mobile will meet the important user need to make web-based OER recordings easier to access, search, manage, and exploit for learners, teachers and others. …This project will create a new mobile HTML5 version of Synote able to replay Synote recordings on any student’s mobile device capable of connecting to the Internet. The use of HTML5 will overcome the need to develop multiple device-specific applications. The original version of Synote displays the recording, transcript, notes and slide images in four different panels which uses too much screen area for a small mobile device. Synote Mobile will therefore be designed to display captions and notes and images simultaneously ‘over’ the video. Where necessary existing Synote recordings will be converted into an appropriate format to be played by the HTML5 player. Success will be demonstrated by tests and student evaluations using Synote recordings on their mobile devices.

I've already mentioned Synote in relation to SPINDLE. Even though it's early days, the project is already documenting a number of their technical challenges. This includes reference to LongTail's State of HTML5 Video report and a related post on Salt Websites. The latter references WebVTT and highlights some libraries that can be used. Use of JavaScript libraries gets around the lack of <track> support in browsers, but as the LongTail State of HTML5 Video report states:

The element [<track>] is brand new, but every browser vendor is working hard to support it. This is especially important for mobile, since developers cannot use JavaScript to manually draw captions over a video element there.

The report goes on to say:

Note the HTML5 specification defines an alternative approach to loading captions. It leverages video files with embedded text tracks. iOS supports this today (without API support), but no other browser has yet committed to implement this mechanism. Embedded text tracks are easier to deploy, but harder to edit and make available for search.

Interesting times for Synote Mobile and potentially an opportunity for the sector to learn a lot of lessons about creating accessible mobile video.

### Track OER

The project aims to look at two ways to reduce tensions between keeping OER in one place and OER spreading and transferring. If we can find out more about where OER is being used then we can continue to gather the information that is needed and help exploit the openness of OER. … The action of the project will be to develop software that can help track open educational resources. The software will be generic in nature and build from existing work developed by BCCampus and MIT, however a key step in this project is to provide an instantiation of the tracking on the Open University’s OpenLearn platform. … The solution will build on earlier work, notably by OLnet fellow Scott Leslie (BCCampus) and JISC project CaPRéT led by Brandon Muramatsu (MIT project partner in B2S).

At the programme start-up meeting talking to Patrick McAndrew, who is leading this project, part one of the solution is to include a unique Creative Commons License icon, hosted on OU servers, which, when called by a resource that reuses some content, leaves a trace (option 3 in the suggested solutions here). This technique is well established and one I first came across when using the ClustrMaps service, which uses a map of your website visitors as a hit counter (ClustrMaps was developed by Marc Eisenstadt, Emeritus Professor at the Open University – small world ;). It looks like Piwik, an open source alternative to Google Analytics, is going to be used to handle/dashboard the web analytics. The second solution is extending the CETIS funded CaPRéT developed by Brandon Muramatsu & Co. at MIT, which uses JavaScript to track when a user copies and pastes some text. It'll be interesting to see if Track OER can port the CaPRéT backend to Piwik (BTW Pat Lockley has posted how to do OER copy tracking using Google Analytics, which uses similar techniques).
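The hosted-icon trick boils down to serving the licence badge yourself and logging the Referer header of each request. A minimal sketch with Python's built-in HTTP server (the handler and placeholder pixel are mine, not Track OER's actual code, which serves the real CC icon and logs to an analytics backend):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# smallest valid transparent GIF, standing in for the hosted CC badge image
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00"      # header + screen descriptor
         b"\x00\x00\x00\x00\x00\x00"                # 2-colour colour table
         b"!\xf9\x04\x01\x00\x00\x00\x00"           # graphic control extension
         b",\x00\x00\x00\x00\x01\x00\x01\x00\x00"   # image descriptor
         b"\x02\x02D\x01\x00;")                     # image data + trailer

class BadgeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # the Referer header is the page that embedded the badge,
        # i.e. where the OER has been reused
        print("badge requested from:", self.headers.get("Referer", "unknown"))
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.end_headers()
        self.wfile.write(PIXEL)

# HTTPServer(("", 8000), BadgeHandler).serve_forever()  # uncomment to run
```

This is the same mechanism ClustrMaps (and any tracking pixel) uses: the image itself is inert, and all the intelligence lives in the server logs.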

### Xerte Experience Now Improved: Targeting HTML5 (XENITH)

Xerte Online Toolkits is a suite of tools in widespread use by teaching staff to create interactive learning materials. This project will develop the functionality for Xerte Online Toolkits to deliver content as HTML5. Xerte Online Toolkits creates and stores content as XML, and uses the Flash Player to present content. There is an increasing need for Xerte Online Toolkits to accommodate a wider range of delivery devices and platforms.

Here's a page with more information about Xerte Online Toolkits, here's an example toolkit and the source xml used to render it (view source). The issue with this is I haven't seen the detail for the XENITH project, but something I initially thought about was whether they would use XSLT (Extensible Stylesheet Language Transformations), though I wondered if this would be a huge headache when converting their Flash player. Another possible solution I recently came across is Jangaroo:

Jangaroo is an Open Source project building developer tools that adopt the power of ActionScript 3 to create high-quality JavaScript frameworks and applications. Jangaroo is released under the Apache License, Version 2.0.

This includes "let your existing ActionScript 3 application run in the browser without a Flash plugin". It'll be interesting to see the solution the project implements.

So which of these projects interests you the most? If you are on one of the projects do my technical highlights look right or have I missed something important?