
Learning analytics is increasingly an area of interest for institutions, and I'm aware of a number of staff being asked to cover it in their teaching and learning strategies. I thought it would be useful to spotlight some resources produced by Cetis and Jisc in this area that might help. The list is by no means exhaustive and if you have any other resources you think are worth highlighting, either leave a comment or get in touch and I'll add them to the post.

**New** Ferguson, R. (2013). Learning Analytics for Open and Distance Education. In S. Mishra (Ed.), CEMCA EdTech Notes. New Delhi, India: Commonwealth Educational Media Centre for Asia (CEMCA).
**More New** Society for Learning Analytics Research (SoLAR) has a collection of useful resources including these introductory articles

SoLAR recently ran an open course, Strategy & Policy for Systemic Learning Analytics. Something worth looking at might be the recording of the session Belinda Tynan (the OU's PVC Learning & Teaching) did on Designing Systemic Analytics at The Open University.

**Even Newer** from SoLAR, looking more at a national strategy for learning analytics: Improving the Quality and Productivity of the Higher Education Sector – Policy and Strategy for Systems-Level Deployment of Learning Analytics

Highlights from the Jisc Cetis Analytics Series include:

  • Analytics; what is changing and why does it matter?
    This paper provides a high-level overview of the CETIS Analytics Series. The series explores a number of key issues around the potential strategic advantages and insights which the increased attention on, and use of, analytics is bringing to the education sector. It is aimed primarily at managers and early adopters in Further and Higher Education who have a strategic role in developing the use of analytics in the following areas:

    • Whole Institutional Issues,
    • Ethical and Legal Issues,
    • Learning and Teaching,
    • Research Management,
    • Technology and Infrastructure.
  • Analytics for Learning and Teaching
    A broad view is taken of analytics for Learning and Teaching applications in Higher Education. In this we discriminate between learning analytics and academic analytics: uses of learning analytics are concerned with the optimisation of learning and teaching per se, while uses of academic analytics are concerned with the optimisation of activities around learning and teaching, for example student recruitment.
  • Legal, Risk and Ethical Aspects of Analytics in Higher Education
    The collection, processing and retention of data for analytical purposes has become commonplace in modern business, and consequently the associated legal considerations and ethical implications have also grown in importance. Who really owns this information? Who is ultimately responsible for maintaining it? What are the privacy issues and obligations? What practices pose ethical challenges?
    Also of interest is the LAK13 paper An evaluation of policy frameworks for addressing ethical considerations in learning analytics.
  • Institutional Readiness for Analytics
    This briefing paper is written for managers and early adopters in further and higher education who are thinking about how they can build capability in their institution to make better use of data that is held on their IT systems about the organisation and provision of the student experience. It will be of interest to institutions developing plans, those charged with the provision of analytical data, and administrators or academics who wish to use data to inform their decision making. The document identifies the capabilities that individuals and institutions need to initiate, execute, and act upon analytical intelligence.
  • Case Study: Acting on Assessment Analytics
    Over the past five years, as part of its overall developments in teaching and learning, the University of Huddersfield has been active in developing new approaches to assessment and feedback methodologies. This has included the implementation of related technologies such as e-submission and marking tools. In this case study Dr Cath Ellis shares with us how her interest in learning analytics began and how she and colleagues are making practical use of assessment data both for student feedback and overall course design processes.
    Aspects of this case study and other work in this area are available in this webinar recording on Learning analytics for assessment and feedback

Examples of Learning Analytic Tools

Taken from Dyckhoff, A. L., et al. "Supporting action research with learning analytics." Proceedings of the Third International Conference on Learning Analytics and Knowledge. ACM, 2013.

  • LOCO-Analyst [1, 4],
  • TADA-Ed [46],
  • Data Model to Ease Analysis and Mining [38],
  • Student Inspector [50],
  • MATEP [56–58],
  • CourseVis [43, 45],
  • GISMO [44],
  • Course Signals [3],
  • Check My Activity [25],
  • Moodog [54, 55],
  • TrAVis [41, 42],
  • Moodle Mining Tool [48],
  • EDM Vis [34],
  • AAT [29],
  • Teacher ADVisor [37],
  • E-learning Web Miner [26],
  • ARGUNAUT [30],
  • Biometrics-based Student Attendance Module [27],
  • CAMera and Zeitgeist Dashboard [51, 52],
  • Student Activity Meter [28],
  • Discussion Interaction Analysis System (DIAS) [8–11],
  • CoSyLMSAnalytics [49],
  • Network Visualization Resource and SNAPP [5, 6, 17, 18],
  • i-Bee [47],
  • iHelp [12], and
  • Participation Tool [32]

References for these tools are listed here

Here is a more general set of Analytics Tools and Infrastructure from the Analytics Series

A quick reminder that the Analytics in UK Further and Higher Education Survey is still open.


The JISC OER Rapid Innovation programme is coming to a close and as the 15 projects do their final tidying up it's time to start thinking about the programme as a whole, emerging trends and lessons learned, and to help projects disseminate their outputs. One of the discussions we've started with Amber Thomas, the programme manager, is how best to go about this. Part of our support role at CETIS has been to record some of the technical and standards decisions taken by the projects. Phil Barker and I ended up having technical conversations with 13 of the 15 projects, which are recorded in the CETIS PROD Database in the Rapid Innovation strand. One idea was to see if there were any technology or standards themes we could use to illustrate what has been done in these areas. Here are a couple of ways to look at this data.

To start with, PROD has some experimental tools to visualise the data. By selecting the Rapid Innovation strand and selecting 'Stuff' we get this tag cloud. We can see JSON, HTML5 and RSS are all very prominent. Unfortunately some of the context is lost as we don't know, without digging deeper, which projects used JSON etc.

PROD Wordcloud

To get more context I thought it would be useful to put the data in a network graph (click to enlarge).

NodeXL Graph

NodeXL Graph - top selected
We can now see which projects (blue dots) used which tech/standards (brown dots) and again JSON, HTML5 and RSS are prominent. Selecting these (image right) we can see they cover most of the projects (10 of them), so these might be some technology themes we could talk about. But what about the remaining projects?

As a happy accident I put the list of technologies/standards into Voyant Tools (similar to Wordle but way more powerful – I need to write a post about it) and got the graph below:

Voyant Tools Wordcloud

Because the wordcloud is generated from words rather than phrases the frequencies are different: api (16), rss (6), youtube (6), html5 (5), json (5). So maybe there is also a story about APIs and YouTube.


I thought it would be useful to give a summary of some of the tools I use and have developed at CETIS to monitor the pulse of the web around our and JISC's work. All of these are available for reuse and documented to varying degrees. All of the tools use Google Spreadsheets/Apps Script, which is free for anyone to use with a Google account, and all the recipes use free tools (the exception being owning a copy of Excel, but most institutions have this as standard).

Tools

Hashtag archiving, analysis and interfacing

Used with: CETIS COMMS, OERRI Programme, #UKOER, …
What it does: It's a Google Spreadsheet template which can be set up to automatically archive Twitter searches. The template includes some summaries to show top contributors and frequency of tweets. There are a number of add-on interfaces that can be used to navigate the data in different ways, including TAGSExplorer and TAGSArc.

More info: https://mashe.hawksey.info/2011/11/twitter-how-to-archive-event-hashtags-and-visualize-conversation/

Monitoring Twitter searches and combining with Google Analytics

Used with: CETIS COMMS
What it does: Archives all tweets linking to the .cetis.ac.uk domain and combines them with our Google Analytics data to monitor influential distributors of our work.

More info: https://mashe.hawksey.info/2012/03/combine-twitter-and-google-analytics-data-to-find-your-top-content-distributors/

RSS Feed Activity Data Monitoring

Used with: CETIS COMMS, OERRI Programme
What it does: Gives a dashboard view of the total social shares from a range of services (Facebook, Twitter, Google+) for a single RSS feed or a combination of feeds. At CETIS we also monitor the social popularity of blogs referencing .cetis.ac.uk by using an RSS feed from Google's Blog Search e.g. http://www.google.com/search?q=link:cetis.ac.uk&hl=en&tbm=blg&output=rss&num=20

More info: https://mashe.hawksey.info/2012/06/rss-feed-social-share-counting/

Post Activity

Used with: CETIS COMMS
What it does: Gives more detailed activity data around socially shared URLs, combining individual tweets from Topsy, Delicious bookmarks, and post comments.

More info: https://mashe.hawksey.info/2012/08/blog-activity-data-feed-template/

Post Directory

Used with: OERRI Programme
What it does: Dashboards all the project blogs from the OERRI Programme and monitors when they release blog posts with predefined tags/categories. The dashboard also combines the social monitoring techniques mentioned above so that projects and the programme support team can monitor social shares for individual blog posts.

More info: https://mashe.hawksey.info/2012/08/how-jisc-cetis-dashboard-social-activity-around-blog-posts-using-a-splash-of-data-science/

Automatic final report generation

Used with: OERRI Programme
What it does: As an extension to the Post Directory, this tool combines project blog posts from a predefined set of tags/categories into a final report as an editable MS Word/HTML document. Currently only the original post content, including images, is compiled in individual reports but it would be easy to also incorporate some of the social tracking and/or post comments data.

More info: https://mashe.hawksey.info/2012/09/converting-blog-post-urls-into-ms-word-documents-using-google-apps-script-oerri/

Recipes

As well as standalone tools I've documented a number of recipes for analysing monitoring data.

Twitter conversation graph

Used with: #moocmooc, #cfhe12
What it does: Using data from the Twitter Archiving Google Spreadsheet template (TAGS) this recipe shows you how you can use a free Excel add-on, NodeXL, to graph threaded conversations. I’m still developing this technique but my belief is there are opportunities to give a better overview of conversations within hashtag communities, identifying key moments.

More info: https://mashe.hawksey.info/2012/08/first-look-at-analysing-threaded-twitter-discussions-from-large-archives-using-nodexl-moocmooc/

Community blogosphere graph

Used with: #ds106
What it does: Outlines how data from blog posts (in this case a corpus collected by the FeedWordPress plugin used in DS106) can be refined and graphed to show blog post interlinking within a community. An idea explored in this recipe is using measures used in social network analysis to highlight key posts.

More info: https://mashe.hawksey.info/2012/10/visualizing-cmooc-data-extracting-and-analysing-data-from-feedwordpress-part-1-ds106-nodexl/

Activity data visualisation (gource)

Used with: #ukoer
What it does: Documents how data can be extracted (in this case records from Jorum) and cleaned using Google Refine (soon to be renamed OpenRefine). This data is then exported as a custom log file which can be played in an open source visualisation tool called Gource. The purpose of this technique is to give the viewer a sense of the volume and size of data submitted or created by users within a community.

More info: https://mashe.hawksey.info/2011/12/google-refining-jorum-ukoer/

So now go forth and reuse!

Another post related to my 'Hacking stuff together with Google Spreadsheets' (other online spreadsheet tools are available) session at Dev8eD (a free event for building, sharing and learning cool stuff in educational technology for learning and teaching!!) next week. This time an example to demonstrate importHtml. Rather than reinventing the wheel I thought I'd revisit Tony Hirst's Creating a Winter Olympics 2010 Medal Map In Google Spreadsheets (hmm, are we allowed to use the word 'Olympic' in the same sentence as Google as they are not an official sponsor ;s).

Almost 4 years on the recipe hasn't changed much. The Winter Olympics 2010 medals page on Wikipedia is still there and we can still use the importHTML formula to grab the table [=importHTML("http://en.wikipedia.org/wiki/2010_Winter_Olympics_medal_table","table",3)].

The useful thing to remember is that importHtml and its cousins importFeed, importXML, importData, and importRange create live links to the data, so if the table on Wikipedia were to change the spreadsheet would also eventually update.

Where I take a slight detour from the recipe is that Google now has a heatmap chart that doesn't need ISO country codes. Instead it will happily try to resolve country names.

heatmap missing data
Once the data is imported from Wikipedia, if you select Insert > Chart and choose heatmap, using the Nation and Total columns as the data range, you should get a chart similar to the one shown to the right. The problem with this is it's missing most of the countries. To fix this we need to remove the country codes in brackets. One way to do this is to trim the text from the left until the first bracket "(". This can be done using a combination of the LEFT and FIND formulas.

If you enter =LEFT(B2,FIND("(",B2)-2) in cell H2 of your spreadsheet, this will return all the text in 'Canada (CAN)' up to the '(' minus two characters, to exclude the '(' and the preceding space. You could manually fill this formula down the entire column, but I like using ARRAYFORMULA, which allows you to apply the same formula to multiple cells without having to manually fill it in. So our final formula in H2 is:

=ARRAYFORMULA(LEFT(B2:B27,FIND("(",B2:B27)-2))

Using the new column of cleaned country names we now get our final map.

Interactive map: Click for interactive version

To recap, we used one of the import formulas to pull live data into a Google Spreadsheet, stripped some unwanted text and generated a map. Because all of this is sitting in the 'cloud' it'll quite happily refresh itself if the data changes.

The final spreadsheet used in this example is here

Tony has another Data Scraping Wikipedia with Google Spreadsheets example here


Next week I'll be presenting at Dev8eD (a free event for building, sharing and learning cool stuff in educational technology for learning and teaching!!) doing a session on 'Hacking stuff together with Google Spreadsheets' – other online spreadsheet tools are available. As part of this session I'll be rolling out some new examples. Here's one I've quickly thrown together to demonstrate the UNIQUE and FILTER spreadsheet formulas. It's yet another example of me visiting the topic of electronic voting systems (clickers). The last time I did this it was gEVS – an idea for a Google Form/Visualization mashup for electronic voting, which used a single Google Form as a voting entry page and rendered the data using the Google Chart API on a separate webpage. This time round everything is done in the spreadsheet, which makes it easier to use/reuse. Below is a short video of the template in action followed by a quick explanation of how it works.

Here’s the:

*** Quick Clicker Voting System Template ***

The instructions on the 'Readme' sheet tell you how to set it up. If you are using this on a Google Apps domain (Google Apps for Education), then it's possible to also record the respondent's username.

record the respondent's username

All the magic happens in the FILTERED sheet. In cell A1 (which is hidden) there is the formula =UNIQUE(ALL!C:C). This returns a list of unique question ids from the ALL sheet. If you now highlight cell D2 of the FILTERED sheet and select Data > Validation you can see these values are used to create a select list.

create a select list

The last bit of magic is in cells D4:D8. The first half of the formula [IF(ISNA(FILTER(ALL!D:D,ALL!C:C=$D$2,ALL!D:D=C4))] checks if there is any data. The important bit is:

COUNTA(FILTER(ALL!D:D,ALL!C:C=$D$2,ALL!D:D=C4))

This FILTERs column D of the ALL sheet using the condition that column C of the ALL sheet matches what is in D2 and column D matches the right response option. The formula returns the rows of data that match the query, so if there are three A responses for a particular question, three As would be returned, one on each row. All we want is the number of rows the filter returns, so it is wrapped in COUNTA, which counts the values in the returned array.
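
Putting the two halves together, the complete formula in D4 would look something like this sketch (the template's exact formula may differ):

=IF(ISNA(FILTER(ALL!D:D,ALL!C:C=$D$2,ALL!D:D=C4)),0,COUNTA(FILTER(ALL!D:D,ALL!C:C=$D$2,ALL!D:D=C4)))

Filled down to D8, it returns zero when the filter finds no matching votes and the count of matching rows otherwise.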

Simple, yes?


In the original JISC OER Rapid Innovation call, one of the stipulations, due to the size and duration of the grants, is that the main reporting process is blog-based. Amber Thomas, who is the JISC Programme Manager for this strand and a keen blogger herself, has been a long-time supporter of projects adopting open practices, blogging progress as they go. Brian Kelly (UKOLN) also has an interest in this area, with posts including Beyond Blogging as an Open Practice: What About Associated Open Usage Data?

For the OERRI projects the proposal discussed at the start-up meeting was that projects adopt a taxonomy of tags to indicate key posts (e.g. project plan, aims, outputs, nutshell etc.). For the final report projects would then compile all posts with specific tags and submit them as an MS Word or PDF document.

There are a number of advantages to this approach; one of them, for people like me anyway, is that it exposes machine-readable data that can be used in a number of ways. In this post I'll show how I've created a quick dashboard in Google Spreadsheets which takes a list of blog RSS feeds and filters for specific tags/categories. Whilst I've demonstrated this with the OERRI projects, the same technique could be used in other scenarios, such as tracking student blogs. As part of this solution I'll highlight some of the issues/affordances of different blogging platforms and introduce some future work to combine post content using a template structure.

OERRI Project Post Directory
Screenshot of OERRI post dashboard

The OERRI Project Post Directory

If you are not interested in how this spreadsheet was made and just want to grab a copy to use with your own set of project/class blogs then just:

*** open the OERRI Project Post Directory ***
File > Make a copy if you want your own editable version

The link to the document above is the one I’ll be developing throughout the programme so feel free to bookmark the link to keep track of what the projects are doing.

The way the spreadsheet is structured, the tags/categories the script uses to filter posts are in cells D2:L2 and URLs are constructed from the values in columns O-Q. The basic technique being used here is building URLs that look for specific posts and returning links (made pretty with some conditional formatting).

Blogging platforms used in OERRI

So how do we build a URL to look for specific posts? With this technique it comes down to whether the blogging platform supports tag/category filtering, so let's first look at the platforms being used in OERRI projects.

chart1
This chart (right) breaks down the blogging platforms. You'll see that most (12 of 15) are using WordPress in two flavours: 'shared', indicating that the blog is also a personal or team blog containing other posts not related to OERRI, and 'dedicated', set up entirely for the project.

The three remaining projects are two on the MEDEV platform and the OU's project on Cloudworks. I'm not familiar with the MEDEV platform and only know a bit about Cloudworks, so for now I'm going to ignore these and concentrate on the WordPress blogs.

WordPress and Tag/Category Filtering

One of the benefits of WordPress is that you can get an RSS feed for almost everything by adding /feed/ or ?feed=rss2 to URLs (other platforms also support this; I have a vague recollection of doing something similar in Blogger). For example, if you want a feed of all my Google Apps posts you can use https://mashe.hawksey.info/category/google-apps/feed/.

Even better is you can combine tags/categories with a ‘+’ operator so if you want a feed of all my Google Apps posts that are also categorised with Twitter you can use https://mashe.hawksey.info/category/google-apps+twitter/feed/.

So to get the Bebop ‘nutshell’ categorised post as a RSS item we can use: http://bebop.blogs.lincoln.ac.uk/category/nutshell/feed/

Looking at one of the shared WordPress blogs, to get the 'nutshell' from RedFeather you can use: http://blogs.ecs.soton.ac.uk/oneshare/tag/redfeather+nutshell/feed/

Using Google Spreadsheet importFeed formula to get a post url

The ‘import’ functions in Google Spreadsheet must be my favourites and I know lots of social media professionals who use them to pull data into a spreadsheet and produce reports for clients from the data. With importFeed we can go and see if a blog post under a certain category exists and then return something back, in this case the post link. For my first iteration of this spreadsheet I used the formula below:

importFeed formula
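
The formula above is only shown as an image; a reconstruction of its general shape, assuming the tag/category is picked up from the column header in row 2 and the blog's base URL from column O (row numbers and the exact original formula may differ), would be:

=IMPORTFEED($O3&"category/"&D$2&"/feed/","items url",FALSE,1)

The "items url" query asks importFeed to return just the link of the matching post, and the final 1 limits it to the most recent item.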

This works well but one of the drawbacks of importFeed is that we can only have a maximum of 50 of them in one spreadsheet. With 15 projects and 9 tags/categories the maths doesn't add up.

To get around this I switched to Google Apps Script (the macro language for Google Spreadsheets that I write a lot about). This doesn't have an importFeed function built in, but I can do a UrlFetch and an Xml parse in a custom function (the full code is included in the template).
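
A minimal sketch of what such a fetchUrlfromRSS custom function might look like, written against the current UrlFetchApp, XmlService and CacheService services (the template's own code may differ):

function fetchUrlfromRSS(aUrl) {
  // Look in the cache first so recalculations don't burn through the UrlFetch quota
  var cache = CacheService.getScriptCache();
  var cached = cache.get(aUrl);
  if (cached != null) return cached;
  try {
    // Fetch the tag/category feed and parse it as XML
    var xml = UrlFetchApp.fetch(aUrl).getContentText();
    var channel = XmlService.parse(xml).getRootElement().getChild("channel");
    var item = channel.getChild("item"); // first (most recent) matching post
    var link = item.getChild("link").getText();
    cache.put(aUrl, link, 3600); // cache the link for an hour
    return link;
  } catch (e) {
    // Feed not found, or no post with that tag/category yet
    return "";
  }
}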

Note this code also uses the Cache Service to improve performance and make sure I don’t go over my UrlFetch quota.

We can call this function like any other spreadsheet formula using '=fetchUrlfromRSS(aUrl)'.

Trouble at the tagging mill

So we have a problem getting data from non-WordPress blogs, which I'm quietly ignoring for now; the next problem is people not tagging/categorising posts correctly. For example, I can see Access to Math have 10 posts, including a 'nutshell', but none of these are tagged. From the machine side there's not much I can do about this, but at least from the dashboard I can spot that something isn't right.

Tags for a template

I'm sure once projects are politely reminded to tag posts they'll oblige. One incentive might be to say that if posts are tagged correctly then the code above could easily be extended to pull not just post links but the full post text, which could then be used to generate the project's final submission (a sketch of that idea is below).
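
As a hypothetical sketch of that extension, the same custom function could read the post body rather than the link; note that WordPress feeds carry the full text in a content:encoded element, which sits in its own namespace:

// A hypothetical variation inside fetchUrlfromRSS: return the post body
var summary = item.getChild("description").getText(); // excerpt/summary
// WordPress puts the full post text in <content:encoded>, which is namespaced
var ns = XmlService.getNamespace("content", "http://purl.org/rss/1.0/modules/content/");
var fullText = item.getChild("encoded", ns).getText();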

Summary

So stay tuned to the OERRI Project Post Directory spreadsheet to see if I can incorporate MEDEV and Cloudworks feeds, and also if I can create a template for final posts. Given Brian's post on usage data mentioned at the beginning, should I also be tracking post activity data on social networks or is that a false metric?

I’m sure there was something else but it has entirely slipped my mind …

BTW here’s the OPML file for the RSS feeds of the blogs that are live (also visible here as a Google Reader bundle)


A couple of weeks ago it was Big Data Week, “a series of interconnected activities and conversations around the world across not only technology but also the commercial use case for Big Data”.

big data consists of data sets that grow so large and complex that they become awkward to work with using on-hand database management tools. Difficulties include capture, storage, search, sharing, analytics, and visualizing – Wikipedia

In O'Reilly Radar there was a piece on Big data in Europe which had a Q&A with Big Data Week founder/organizer Stewart Townsend and Carlos Somohano, both of whom are big in Big Data.

Maybe I'm being naïve but I was surprised that there was no reference to what universities and the research sector are doing with handling and analysing large data sets. For example, at the Sanger Institute alone each of their DNA sequencers generates 1 terabyte (1024 gigabytes) of data a day, and they are storing over 17 petabytes (17 million gigabytes), a figure which is doubling every year.

Those figures trip off my tongue because last week I was at the Eduserv Symposium 2012: Big Data, Big Deal?, which had many examples of how institutions are dealing with 'big data'. There were a couple of things I took away from this event, like the prevalence of open source software as well as the number of vendors wrapping open source tools with their own systems to sell as a service. Another clear message was the lack of data scientists who can turn raw data into information and knowledge.

As part of the Analytics Reconnoitre we are undertaking at JISC CETIS, in this post I want to summarise some of the open source tools and 'as a service' offerings in the Big Data scene.

[Disclaimer: I should say first that I'm coming to this area cold. I'm not an information systems expert, so what you'll see here is a very top-level view, more often than not me joining the dots from things I've learned 5 minutes ago. So if you spot anything I've got wrong or bits I'm missing let me know]

Open source as a Service

some of the aaS’s
CaaS – Cluster as a Service
IaaS – Infrastructure as a Service
SaaS – Software as a Service
PaaS – Platform as a Service

I've already highlighted how the open source R statistical computing environment is being used as an analytics layer. Open source is alive and well in other parts of the infrastructure. First up at the event was Rob Anderson from Isilon Systems (a division of EMC) talking about Big Data and the implications for storage. Rob did a great job introducing Big Data, and a couple of things I took away were the message that there is a real demand for talented 'data scientists' and the need to get organisations thinking differently about data.

If you look at some of the products/services EMC offer you'll find the EMC Greenplum Database and HD Community Editions (Greenplum is a set of products to handle 'Big Data'). You'll see that these include the open source Apache Hadoop ecosystem. If like me you've heard of Hadoop but don't really understand what it is, here is a useful post on Open source solutions for processing big data and getting Knowledge. This highlights the components of the Hadoop ecosystem, most of which appear in the Greenplum Community Edition. (I was very surprised to see that the NoSQL database Cassandra, which is now part of the Hadoop ecosystem, was originally developed by Facebook and released as open source code – more about NoSQL later.)

Open algorithms, machines and people

amplab - state of the art
The use of open source in big data was also highlighted by Anthony D. Joseph, Professor at the University of California, Berkeley, in his talk. Anthony was highlighting UC Berkeley's AMPLab, which is exploring "Making Sense at Scale" by tightly integrating algorithms, machines and people (AMP). The slide (right) from Anthony's presentation summarises what they are doing, combining three strands to solve big data problems.

They are achieving this by combining existing tools with new components. In the slide below you have the following pieces developed by AMPLab:

  • Apache Mesos – an open source cluster manager
  • Spark – an open source interactive and iterative data analysis system
  • SCADS – consistency adjustable data store (license unknown)
  • PIQL – Performance (predictive) Insightful Query Language (part of SCADS; there's also a PIQL-on-Rails plugin, MIT license)

amplab - machines

In the Applications/tools box are: Advanced ML algorithms; Interactive data mining; Collaborative visualisation. I'm not entirely sure what these are, but in Anthony's presentation he mentioned that more open source tools are required, particularly in 'new analysis environments'.

Here are the real applications of AMPLab Anthony mentioned:

[Another site mentioned by Anthony worth bookmarking/visiting is DataKind – ‘helping non-profits through pro bono data collections, analysis and visualisation’]

OpenStack

Another cloud/big data/open source tool I know of, but which wasn't mentioned at the event, is OpenStack. This was initially developed by the commercial hosting service Rackspace and NASA (who it has been said are 'the largest collector of data in human history'). Like Hadoop, OpenStack is a collection of tools/projects rather than one product. OpenStack contains OpenStack Compute, OpenStack Object Storage and OpenStack Image Service.

NoSQL

In computing, NoSQL is a class of database management system identified by its non-adherence to the widely-used relational database management system (RDBMS) model … It does not use SQL as its query language … NoSQL database systems are developed to manage large volumes of data that do not necessarily follow a fixed schema – Wikipedia

NoSQL came up in Simon Metson's (University of Bristol) 'Big science, Big Data' session. This class of database is common in big data applications but Simon underlined that it's not always the right tool for the job.

This view is echoed by Nick Jackson (University of Lincoln) who did an 'awesome' introduction to MongoDB (one of the many open source NoSQL solutions) as part of the Managing Research Data Hack Day organised by DevCSI/JISC MRD. I strongly recommend you look at the resources that came out of this event, including other presentations from the University of Bristol on data.bris.

[BTW the MongoDB site has a very useful page highlighting how it differs from another open source NoSQL solution, CouchDB. So even NoSQL solutions come in many flavours. Also, Simon Hodson, Programme Manager for JISC MRD, gave a lightning talk on JISC and Big Data at the Eduserv event]

Summary

The number of open source solutions in this area is perhaps not surprising as the majority of the web (65% according to the last Netcraft survey) is run on the open source Apache server. It's interesting to see that code is not only being contributed by the academic/research community but also by companies like Facebook who deal with big data on a daily basis. Assuming the challenge isn't technical, it then becomes about organisations understanding what they can do with data and having the talent in place (data scientists) to turn data into 'actionable insights'.

Here are videos of all the presentations (including links to slides where available)

BTW here is an archive of tweets from #esym12

For those of you who have made it this far through my deluge of links, please feel free to now leave this site and watch some of the videos from the Data Scientist Summit 2011 (I'm still working my way through but there are some inspirational presentations).

Update: Sander van der Waal at OSS Watch, who was also at #esym12, has also posted The dominance of open source tools in Big Data.


As part of my role at JISC CETIS I've been asked to contribute to our 'Analytics Reconnoitre', a JISC-commissioned project looking at the data and analytics landscape. One of my first tasks is to report on the broad landscape and trends in analytics services and data providers. Whilst I'm still putting this report together, it's been interesting to note how one particular analytics tool, R, keeps pinging on my radar. I thought it would be useful to loosely join these sightings together and share.

Before R, the bigger ‘data science’ picture 

Before I go into R there is some more scene setting required. As part of the Analytics Reconnoitre Adam Cooper (JISC CETIS) has already published Analytics and Big Data - Reflections from the Teradata Universe Conference 2012 and Making Sense of “Analytics”.

The Analytics and Big Data post is an excellent summary of the Teradata Universe event, and Adam also notes some very useful thoughts on 'What this Means for Post-compulsory Education'. This includes identifying pathways for education to move forward with business intelligence and analytics. One of these I particularly liked was:

Experiment with being more analytical at craft-scale
Rather than thinking in terms of infrastructure or major initiatives, get some practical value with the infrastructure you have. Invest in someone with "data scientist" skills as master crafts-person and give them access to all data but don't neglect the value of developing apprentices and of developing wider appreciation of the capabilities and limitations of analytics.

[I'm biased towards this path because it encapsulates a lot of what I aspire to be. The craft model was one introduced to me by Joss Winn at this year's Dev8D, and coming from a family of craftsmen it makes me more comfortable to think I'm continuing the tradition in some way.]

Here are Adam's observations and reflections on 'data science' from the same blog post:

"Data Scientist" is a term which seems to be capturing the imagination in the corporate big data and analytics community but which has not been much used in our community.

A facetious definition of data scientist is "a business analyst who lives in California". Stephen Brobst gave his distinctions between data scientist and business analyst in his talk. His characterisation of a business analyst is someone who: is interested in understanding the answers to a business question; uses BI tools with filters to generate reports. A data scientist, on the other hand, is someone who: wants to know what the question should be; embodies a combination of curiosity, data gathering skills, statistical and modelling expertise and strong communication skills. Brobst argues that the working environment for a data scientist should allow them to self-provision data, rather than having to rely on what is formally supported in the organisation, to enable them to be inquisitive and creative.

Michael Rappa from the Institute for Advanced Analytics doesn't mention curiosity but offers a similar conception of the skill-set for a data scientist in an interview in Forbes magazine. The Guardian Data Blog has also reported on various views of what comprises a data scientist in March 2012, following the Strata Conference.

While it can be a sign of hype for new terminology to be spawned, the distinctions being drawn by Brobst and others are appealing to me because they are putting space between mainstream practice of business analysis and some arguably more effective practices. As universities and colleges move forward, we should be cautious of adopting the prevailing view from industry - the established business analyst role with a focus on reporting and descriptive statistics - and missing out on a set of more effective practices. Our lack of baked-in BI culture might actually be a benefit if it allows us to more quickly adopt the data scientist perspective alongside necessary management reporting. Furthermore, our IT environment is such that self-provisioning is more tractable.

R in data science and in business

For those that don't know, R is an open source statistical programming language. If you want more background about the development of R, Information Age cover this in their piece Putting the R in analytics. An important thing to note, which is covered in the story, is that R was developed by two academics at the University of Auckland and continues to have a very strong and active academic community supporting it. Whilst initially used as an academic tool, the article highlights how it is being adopted by the business sector.

I originally picked up the Information Age post via the Revolutions blog (hosted by Revolution Analytics) in the post Information Age: graduates driving industry adoption of R, which includes the following quotes from Information Age:

This popularity in academia means that R is being taught to statistics students, says Matthew Aldridge, co-founder of UK- based data analysis consultancy Mango Solutions. “We're seeing a lot of academic departments using R, versus SPSS which was what they always used to teach at university,” he says. “That means a lot of students are coming out with R skills.”

Finance and accounting advisory Deloitte, which uses R for various statistical analyses and to visualise data for presentations, has found this to be the case. “Many of the analytical hires coming out of school now have more experience with R than with SAS and SPSS, which was not the case years ago,” says Michael Petrillo, a senior project lead at Deloitte's New York branch.

Revolutions have picked up other stories related to R in big data and analytics. Two I have bookmarked are Yes, you need more than just R for Big Data Analytics, in which Revolutions editor David Smith underlines that having tools like R isn't enough and a wider data science approach is needed because "it combines the tool expertise with statistical expertise and the domain expertise required to understand the problem and the data applicable to it".

Smith also reminds us that:

The R software is just one piece of a software ecosystem — an analytics stack, if you will — of tools used to analyze Big Data. For one thing R isn't a data store in its own right: you also need a data layer where R can access structured and unstructured data for analysis. (For example, see how you can use R to extract data from Hadoop in the slides from today's webinar by Antonio Piccolboni.) At the analytics layer, you need statistical algorithms that work with Big Data, like those in Revolution R Enterprise. And at the presentation layer, you need the ability to embed the results of the analysis in reports, BI tools, or data apps.

[Revolutions also has a comprehensive list of R integrated throughout the enterprise analytics stack which includes vendor integrations from IBM, Oracle, SAP and more]

The second post from Revolutions is R and Foursquare's recommendation engine which is another graphic illustration of how R is being used in the business sector separately from vendor tools.

Closing thoughts

At this point it’s worth highlighting another of Adam’s thoughts on directions for academia in Analytics and Big Data:

Don't focus on IT infrastructure (or tools)
Avoid the temptation (and sales pitches) to focus on IT infrastructure as a means to get going with analytics. While good tools are necessary, they are not the right place to start.

I agree about not being blinkered by specific tools and, as pointed out earlier, R can only ever be one piece of software in the ecosystem; any good data scientist will use the right tool for the job. It's interesting to see an academic tool being adopted by, and arguably driving, part of the commercial sector. Will academia follow where they have led – if you see what I mean?


A retweet yesterday by Amber Thomas (@ambrouk) of Anna Armstrong (@frenchdisko) alerted me to a feature of Google Docs I wasn't aware of, namely that the Insert Image Search automatically filters for Creative Commons released pictures:

Insert image in Google Docs

Fantastic, I thought: a way for staff to create open resources with millions of pictures to choose from and reuse with no more effort than if they were inserting any other image. Such a feature obviously doesn't come without its health warnings. Clicking on 'Learn more' we can see:


Before reusing content that you've found, you should verify that its licence is legitimate and check the exact terms of reuse stated in the licence.
For example, most licences require that you give credit to the image creator when reusing an image. Google has no way of knowing whether the licence is legitimate, so we aren't making any representation that the content is actually lawfully licensed.

I can appreciate that Google's search technology isn't going to be 100% reliable in detecting which license is being used in an existing work, but wouldn't it be great if, when you inserted the image, Google also gave their 'best guess' of the license for you to check and edit if necessary?

A better way for Google Docs to embed?
[This graphic includes Japanese light bulb - CC-BY Elliot Burke]

Or am I just being naïve about this whole thing?

PS I don't know if something has gone horribly wrong with Google's image indexing, but when in Insert Image Search I enter 'lightbulb site:.flickr.com' the thumbnails don't always match the actual image.

thumbwhat

As the JISC OER Rapid Innovation projects have either started or will start very soon, I thought it would be useful, mainly for my own benefit, to quickly summarise the technical choices and challenges.

Attribute Images - University of Nottingham

Building on the Xpert search engine, which has a searchable index of over 250,000 open educational resources, Nottingham are planning a tool to embed CC license information into images.

The Attribute Images project will extend the Xpert Attribution service by creating a new tool that allows users to upload images, either from their computer or from the web and have a Creative Commons attribution statement embedded in the images. … It will provide an option for the user to upload the newly attributed images to Flickr through the Flickr API … In addition it will have an API allowing developers to make use of the service in other sites.

From the project's first post, when they talk about 'embedding' CC statements it appears to mean visible watermarking. It'll be interesting to see if the project explores the Creative Commons recommended Adobe Extensible Metadata Platform (XMP) to embed license information into the image data. Something they might want to test is whether the Flickr upload preserves this data when resizing. Creative Commons also have a range of tools to integrate license selection, so it'll be interesting to see if these are used or if there are compatibility issues.

Attribute Images Blog
Read more about Attribute Images on the JISC site

Bebop – University of Lincoln

Bebop is looking to help staff at Lincoln centralise personal resource creation activity from across platforms into a single stream.

This project will undertake research and development into the use of BuddyPress as an institutional academic profile management tool which aggregates teaching and learning resources as well as research outputs held on third-party websites into the individual’s BuddyPress profile. … This project will investigate and develop BuddyPress so as to integrate (‘consume’) third-party feeds and APIs into BuddyPress profiles and, furthermore, investigate the possibility of BuddyPress being used as a ‘producer application’ of data for re-publishing on other institutional websites and to third-party web services.

In a recent project post asking Where are the OERs? you can get an idea of the third-party APIs they will be looking at, which include Jorum/DSpace, YouTube, Slideshare etc. Talking to APIs isn't a problem, after all that is what they are designed to do, and having developed plugins for WordPress/BuddyPress myself I know it's a great platform to work on. The main technical challenge is more likely to be doing this at scale and handling the variability in the type of data returned. It'll also be interesting to see if Bebop can be built with flexibility in mind (creating its own APIs so that it can be used on other platforms) – it looks like the project is going down the route of aggregating RSS endpoints.

Bebop Blog
Read more about Bebop on the JISC site

Breaking Down Barriers: Building a GeoKnowledge Community with OER

The proposed project aims to Build a GeoKnowledge Community at Mimas by utilising existing technologies (DSpace) and services (Landmap/Jorum). The aim of the use case is to open-up 50% (8 courses) of the Learning Zone through Creative Commons (CC) Attribution Non-Commercial Share Alike (BY-NC-SA) license as agreed already with authors. A further aim is to transfer the hosting of the ELOGeo repository to Jorum from Nottingham (letter of support provided by University of Nottingham) and create a GeoKnowledge Community site embedded in Jorum using the DSpace API and linking the repository to the Landmap Learning Zone. … The technical solution in developing a specific community site within Jorum will be transferable to other communities that may have a similar requirement in the future.

I still don't feel I have an entire handle on the technical side of this project, but it's early days and the project is already producing a steady stream of posts on their blog. One for me to revisit.

Breaking Down Barriers Blog
Read more about Breaking Down Barriers on the JISC site

CAMILOE (Collation and Moderation of Intriguing Learning Objects in Education)

This project reclaims and updates 1800 quality assured evidence informed reviews of education research, guidance and practice that were produced and updated between 2003 and 2010 and which are now archived and difficult to access. … These resources were classified using a wide range of schemas including Dublin core, age range, teaching subject, resource type, English Teaching standard and topic area but are no longer searchable or browsable by these categories. … Advances in Open Educational Resources (OER) technologies provide an opportunity to make this resource useful again for the academics who created it. These tools include enhanced meta tagging schemas for journal documents, academic proofing tools, repositories for dissemination of OER resources, and open source software for journal moderation and para data concerning resource use.

So a lot of existing records to get into shape and put in something that makes them accessible again. Not only that, if you look at the project overview you can see usage statistics play an important part. CAMILOE is also one of the projects interested in depositing information into the UK Learning Registry node setup as part of the JLeRN Experiment.

Having dabbled with using Google Refine to get Jorum UKOER records into a different shape, I wonder if the project will go down this route or, given the number and existing shape of the records, manually re-index them. I'd be very surprised if RSS or OAI-PMH didn't make an appearance.

Read more about CAMILOE on the JISC site

Improving Accessibility to Mathematical Teaching Resources

Making digital mathematical documents fully accessible for visually impaired students is a major challenge to offer equal educational opportunities. … In this project we now want to turn our current program, that is the result of our research, into an assistive technology tool. … According to the identified requirements we will adapt and embed our tool into an existing open source solution for editing markup to allow post-processing of recognised and translated documents for correction and further editing. We will also add facilities to our tool to allow for suitable subject specific customisation by expert users. … In addition to working with accessibility support officers we also want to enable individual learners to employ the tool by making it available firstly via a web interface and finally for download under a Creative Commons License.

The project is building on their existing tool Maxtract, which turns mathematical formulae in PDF documents into other formats including full text descriptions, which are more screen reader friendly (a post with more info on how it works). So turning

example equation

into:

1 divided by square root of 2 pi integral sub R e to the power of minus x to the power of 2 slash 2 dx = 1 .

The other formats the tool already supports are PDF annotated with LaTeX and XHTML. The project is partnering with JISC TechDis to gather specific user requirements.

Improving Accessibility to Mathematics Blog
Read more about Improving Accessibility to Mathematics on the JISC site

Linked Data Approaches to OERs

This project extends MIT’s Exhibit tool to allow users to construct bundles of OERs and other online content around playback of online video. … This project takes a linked data approach to aggregation of OERS and other online content in order  to improve the ‘usefulness’ of online resources for education. The outcome will be an open-source application which uses linked data approaches to present a collection of pedagogically related resources, framed within a narrative created by either the teacher or the students. The ‘collections’ or ‘narratives’ created using the tool will be organised around playback of rich media, such as audio or video, and will be both flexible and scaleable.

MIT's Exhibit tool, particularly the timeline aspect, was something I used in the OER Visualisation Project. The project has already produced some videos demonstrating a prototype that uses a timecode to control what is displayed (First prototype!, Prototype #2 and Prototype #2 (part two)). I'm still not entirely sure what 'linked data approaches' will be, so it'll be interesting to see how that shapes up.

Linked Data Approaches to OERs Blog
Read more about Linked Data Approaches to OERs on the JISC site <- not on the site yet

Portfolio Commons

… seeks to provide free and open source software tools that can easily integrate open educational practices (the creation, use and sharing of OERs) into the daily routines of learners and teachers … This project proposes to create a free open source plugin for Mahara that will enable a user to select content from their Mahara Portfolio, licence it with a Creative Commons licence of their choosing, create metadata and make a deposit directly into their chosen repositories using the SWORD protocol

The SWORD protocol, which was developed with funding from JISC, has a healthy ecosystem of compliant repositories, clients and code libraries, so the technical challenge on that part is getting it wired up as a plugin for Mahara. Creative Commons also have a range of tools to integrate license selection into web applications. It'll be interesting to see if these are used.

When I met the project manager, John Casey, in London recently I also mentioned, given the arts background of this project, that it might be worth scoping whether integrating with the Flickr API would be useful. Given that the Attribute Images project mentioned above is looking at this area, the ideal scenario might be to link the Mahara plugin to an Attribute Images API, but timings might prevent that.

Read more about Portfolio Commons on the JISC site

Rapid Innovation Dynamic Learning Maps-Learning Registry (RIDLR)

Newcastle University’s Dynamic Learning Maps system (developed with JISC funding) is now embedded in the MBBS curriculum, and now being taken up in Geography and other subject areas … In RIDLR we will test the release of contextually rich paradata via the JLeRN Experiment to the Learning Registry and harvest back paradata about prescribed and additional personally collected resources used within and to augment the MBBS curriculum, to enhance the experience of teachers and learners. We will develop open APIs to harvest and release paradata on OER from end-users (bookmarks, tags, comments, ratings and reviews etc) from the Learning Registry and other sources for specific topics, within the context of curriculum and personal maps.

The technical challenge here is getting data into and out of the Learning Registry; it'll be interesting to see what APIs they come up with. It'll also be interesting to see what data they can get and if it's usable within Dynamic Learning Maps. More information, including a use case for this project, has been posted here.

RIDLR and SupOERGlue Blog
Read more about RIDLR on the JISC site

RedFeather (Resource Exhibition and Discovery)

RedFeather (Resource Exhibition and Discovery) is a proposed lightweight repository server-side script that fosters best practice for OER, it can be dropped into any website with PHP, and which enables appropriate metadata to be assigned to resources, creates views in multiple formats (including HTML with in-browser previews, RSS and JSON), and provides instant tools to submit to Xpert and Jorum, or migrate to full repository platforms via SWORD.

The above quote nicely summarises the technical headlines. In a recent blog post the team illustrate how RedFeather might be used in a couple of use cases. The core component appears to be creating a single file (coded in PHP, which is a server-side scripting language) and transferring files/resources to a web server. It'll be interesting to see if the project explores different deployments, for example packaging RedFeather on a portable web server (a server on a USB stick), or maybe deploying on ScraperWiki (a place in the cloud where you can execute PHP), or looking at how other cloud/third-party services could be used. Update: I forgot to mention the OERPub API, which is built on SWORD v2. The interesting part that I'm watching closely is whether this API will provide a means to publish to non-SWORDed repositories like YouTube, Flickr and Slideshare.

RedFeather Blog
Read more about RedFeather on the JISC site

Sharing Paradata Across Widget Stores (SPAWS)

We will use the Learning Registry infrastructure to share paradata about Widgets across multiple Widget Stores, improving the information available to users for selecting widgets and improving discovery by pooling usage information across stores.

For more detail on what paradata will be included, the SPAWS nutshell post says:

each time a user visits a store and writes a review about a particular widget/gadget, or rates it, or embeds it, that information can potentially be syndicated to other stores in the network

There’s not much for me to add about the technical side of this project as Scott has already posted a technical overview and gone into more detail about the infrastructure and some initial code.

SPAWS Blog
Read more about SPAWS on the JISC site

SPINDLE: Increasing OER discoverability by improved keyword metadata via automatic speech to text transcription

SPINDLE will create linguistic analysis tools to filter uncommon spoken words from the automatically generated word-level transcriptions that will be obtained using Large Vocabulary Continuous Speech Recognition (LVCSR) software. SPINDLE will use this analysis to generate a keyword corpus for enriching metadata, and to provide scope for indexing inside rich media content using HTML5.

Enhancing the discoverability of audio/media is something I'm very familiar with, having used tweets to index videos. My enthusiasm for this area took a knock when I discovered Mike Wald's Synote system, which uses IBM's ViaScribe to extract annotations from video/audio. There's a lot of overlap between Synote and SPINDLE, which is why it was good to see them talking to each other at the programme start-up meeting. As far as I'm aware JISC funding for Synote ended in 2009 (but it has just received funding for a mobile version), so now is a good time to look at how open source LVCSR software can be used in a scenario where accuracy for accessibility as an assistive technology is replaced by best guess to improve accessibility in terms of discoverability.

In terms of the technical side it will be interesting to see if SPINDLE looks at WebVTT, which seems to be finding its way through the W3C and does include an option for metadata (the issue might be that the 'V' in WebVTT stands for video). Something that I hope doesn't put SPINDLE off looking at WebVTT is the lack of native browser support (although it is on the way); there are some JavaScript libraries you can use to handle WebVTT. It'll also be interesting to see if there is a chance to compare (or highlight existing research comparing) an open source offering like Sphinx with a commercial one (e.g. ViaScribe).

SPINDLE Blog
Read more about SPINDLE on the JISC site

SupOERGlue

SuperOERGlue will pilot the integration of OER Glue with Newcastle University’s Dynamic Learning Maps, enabling easy content creation and aggregation from within the learning and teaching support environments, related to specific topics. … Partnering with Tatemae to use OER Glue, which harvests OER from around the world and has developed innovative ways for academics and learners to aggregate customised learning packages constructed of different OER, will enable staff and students to create their own personalised resource mashups which are directly related to specific topics in the curriculum.

Tatemae have a track record of working with open educational resources and courseware, including developing OER Glue. There’s not a huge amount for me to say on the technical side. I did notice that OER Glue currently only works in the Google Chrome web browser; having worked in a number of institutions where installing extra software is a chore, it’ll be interesting to see if this causes a problem. More information, including a use case for this project, has been posted here. Update: Related to the RedFeather update, I’m wondering if SupOERGlue will be looking at OERPub (“An architecture for remixable Open Educational Resources (OER)”) as a framework to republish OER.

RIDLR and SupOERGlue Blog
Read more about SupOERGlue on the JISC site

Synote Mobile

Synote Mobile will meet the important user need to make web-based OER recordings easier to access, search, manage, and exploit for learners, teachers and others. …This project will create a new mobile HTML5 version of Synote able to replay Synote recordings on any student’s mobile device capable of connecting to the Internet. The use of HTML5 will overcome the need to develop multiple device-specific applications. The original version of Synote displays the recording, transcript, notes and slide images in four different panels which uses too much screen area for a small mobile device. Synote Mobile will therefore be designed to display captions and notes and images simultaneously ‘over’ the video. Where necessary existing Synote recordings will be converted into an appropriate format to be played by the HTML5 player. Success will be demonstrated by tests and student evaluations using Synote recordings on their mobile devices.

I’ve already mentioned Synote in relation to SPINDLE. Even though it’s early days, the project is already documenting a number of its technical challenges. This includes reference to LongTail’s State of HTML5 Video report and a related post on Salt Websites. The latter references WebVTT and highlights some libraries that can be used. Using JavaScript libraries gets around the lack of <track> support in browsers, but as LongTail’s State of HTML5 Video report states:

The element [<track>] is brand new, but every browser vendor is working hard to support it. This is especially important for mobile, since developers cannot use JavaScript to manually draw captions over a video element there.

The report goes on to say:

Note the HTML5 specification defines an alternative approach to loading captions. It leverages video files with embedded text tracks. iOS supports this today (without API support), but no other browser has yet committed to implement this mechanism. Embedded text tracks are easier to deploy, but harder to edit and make available for search.

Interesting times for Synote Mobile and potentially an opportunity for the sector to learn a lot of lessons about creating accessible mobile video.
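For desktop browsers the ‘draw captions over the video’ workaround is straightforward; here’s a minimal sketch (with hard-coded cues) of the technique the JavaScript caption libraries use. As the report notes, this is exactly what breaks down on mobile, where the native player takes over:

```javascript
// Sketch of the manual-overlay technique: watch the video's timeupdate
// event and draw the current caption in a <div> positioned over the player.
// Cues are hard-coded here purely for illustration.
const cues = [
  { start: 0, end: 4, text: "Welcome to the lecture." },
  { start: 4, end: 9, text: "Today we look at HTML5 video." },
];

const video = document.querySelector("video");
const overlay = document.createElement("div");
overlay.style.cssText =
  "position:absolute; bottom:10%; width:100%; text-align:center; " +
  "color:#fff; text-shadow:0 0 4px #000; pointer-events:none;";
video.parentElement.style.position = "relative";
video.parentElement.appendChild(overlay);

video.addEventListener("timeupdate", () => {
  const t = video.currentTime;
  const cue = cues.find((c) => t >= c.start && t < c.end);
  overlay.textContent = cue ? cue.text : "";
});
```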

Synote Mobile Blog
Read more about Synote Mobile on the JISC site

Track OER

The project aims to look at two ways to reduce tensions between keeping OER in one place and OER spreading and transferring. If we can find out more about where OER is being used then we can continue to gather the information that is needed and help exploit the openness of OER. … The action of the project will be to develop software that can help track open educational resources. The software will be generic in nature and build from existing work developed by BCCampus and MIT, however a key step in this project is to provide an instantiation of the tracking on the Open University’s OpenLearn platform. … The solution will build on earlier work, notably by OLnet fellow Scott Leslie (BCCampus) and JISC project CaPRéT led by Brandon Muramatsu (MIT project partner in B2S).

At the programme start-up meeting I talked to Patrick McAndrew, who is leading this project. Part one of the solution is to include a unique Creative Commons licence icon hosted on OU servers, which, when it is called by a resource reusing some content, leaves a trace in the server logs (option 3 in the suggested solutions here). This technique is well established and one I first came across when using the ClustrMaps service, which uses a map of your website visitors as a hit counter (ClustrMaps was developed by Marc Eisenstadt, Emeritus Professor at the Open University – small world ;). It looks like Piwik, an open source alternative to Google Analytics, is going to be used to handle/dashboard the web analytics. The second solution is extending the CETIS funded CaPRéT, developed by Brandon Muramatsu & Co. at MIT, which uses JavaScript to track when a user copies and pastes some text. It’ll be interesting to see if Track OER can port the CaPRéT backend to Piwik (BTW Pat Lockley has posted how to do OER copy tracking using Google Analytics, which uses similar techniques).
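To illustrate the CaPRéT-style approach, here’s a rough sketch of copy tracking: append an attribution line to whatever the user copies and fire a 1×1 image beacon back to a tracking server (the same ‘leave a trace in the logs’ trick as the hosted licence icon). The beacon URL and licence text are hypothetical, and this isn’t CaPRéT’s actual code:

```javascript
// Sketch of CaPRéT-style copy tracking. The tracker URL is made up.
document.addEventListener("copy", (event) => {
  const selection = window.getSelection().toString();
  const attribution =
    "\n\nSource: " + document.title + " – " + location.href + " (CC BY)";

  // Replace the clipboard contents with the selection plus attribution
  event.clipboardData.setData("text/plain", selection + attribution);
  event.preventDefault();

  // Log the copy event via an image beacon, which shows up in server logs
  new Image().src =
    "https://tracker.example.org/copy.gif?url=" +
    encodeURIComponent(location.href) +
    "&chars=" + selection.length;
});
```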

Track OER Blog
Read more about Track OER on the JISC site

Xerte Experience Now Improved: Targeting HTML5 (XENITH)

Xerte Online Toolkits is a suite of tools in widespread use by teaching staff to create interactive learning materials. This project will develop the functionality for Xerte Online Toolkits to deliver content as HTML5. Xerte Online Toolkits creates and stores content as XML, and uses the Flash Player to present content. There is an increasing need for Xerte Online Toolkits to accommodate a wider range of delivery devices and platforms.

Here’s a page with more information about Xerte Online Toolkits, here’s an example toolkit and the source XML used to render it (view source). The caveat is that I haven’t seen the detail of the XENITH project, but something I initially thought about was whether they would use XSLT (Extensible Stylesheet Language Transformations) to turn that XML into HTML5, although I wondered if reproducing the behaviour of the Flash player this way would be a huge headache. Another possible solution I recently came across is Jangaroo:

Jangaroo is an Open Source project building developer tools that adopt the power of ActionScript 3 to create high-quality JavaScript frameworks and applications. Jangaroo is released under the Apache License, Version 2.0.

This includes “let your existing ActionScript 3 application run in the browser without a Flash plugin”. It’ll be interesting to see which solution the project implements.
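If the project did go down the XSLT route, the browser-side plumbing is fairly simple. Here’s a minimal sketch using the browser’s built-in XSLTProcessor; the file names are made up and a real Xerte stylesheet would be far more involved than this suggests:

```javascript
// Sketch of client-side XSLT: transform Xerte-style XML into HTML with the
// browser's XSLTProcessor. File names are hypothetical.
async function loadXml(url) {
  const text = await (await fetch(url)).text();
  return new DOMParser().parseFromString(text, "application/xml");
}

async function renderToolkit() {
  const [xml, xsl] = await Promise.all([
    loadXml("toolkit.xml"),       // the toolkit's content XML
    loadXml("toolkit-html5.xsl"), // hypothetical XML -> HTML5 stylesheet
  ]);

  const processor = new XSLTProcessor();
  processor.importStylesheet(xsl);

  // Transform and inject the result into the page
  const fragment = processor.transformToFragment(xml, document);
  document.getElementById("player").appendChild(fragment);
}

renderToolkit().catch(console.error);
```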

XENITH Blog
Read more about XENITH on the JISC site

BTW here’s the OPML file for the RSS feeds of the blogs that are live (also visible here as a Google Reader bundle).

So which of these projects interests you the most? If you are on one of the projects, do my technical highlights look right or have I missed something important?