This is a repost of Building an evidence hub plugin for WordPress, which first appeared on the OER Research Hub blog on 4th October 2013.

I was recently contracted by the Open University to experiment with a solution to record and display evidence for the project's research hypotheses. Speaking to the project team, we discussed a number of options, from simple spreadsheet-based approaches to re-purposing version control systems like GitHub. In the end it was decided to develop a WordPress plugin, for a couple of reasons:

  1. WordPress is one of the most popular open source platforms, with a well documented codex and an active developer community
  2. Its existing plugin and theme architecture makes it easy to customise
  3. There are already over 27,000 open source plugins in the official plugin repository allowing easy feature extension

As part of the conversations with the team, and having reviewed existing documentation, the following relationship diagram was used to identify the structure and nature of the data that needed to be stored:

relationship diagram  

Having established this, the wireframe shown below was created to illustrate how data could be entered, building upon WordPress's existing Custom Post Types:

evidence wireframe

Fortunately, rather than creating the custom post type templates from scratch, Francis Yaconiello has published a plugin template to do this. For this project we need three custom post types (hypothesis, evidence and location), mapping to the relationship diagram shown above. To avoid duplication these templates use additional templating to make it easier to add custom fields. The result is shown below with the current 'Add New Evidence' screen.

New Evidence Screen

Location, location, location

One particular challenge was to find an easy way for users to attach location data to evidence. Rather than getting users to select a location name from a long list, a simple location lookup was built which queries the existing location custom post types. This component uses, in part, the Pronamic Google Maps plugin, which is included in this plugin as a software library. This route was chosen to remove the dependency on other plugins being installed on the site, which could be an issue for multisite deployment. The downside is that this component has become orphaned from Pronamic's updates, which carries an increased risk given the dependency on the Google Maps API. The mitigating factor is that Pronamic's plugin stores the geodata in a way that can be used by other plugins and mapping services.

Another reason for choosing to 'bake in' the Pronamic plugin is that with a line of code we can include geo options within the location custom post type. This option includes the geo-encoding and reverse geo-encoding of addresses, making data entry easier.
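To give a rough idea of what is happening under the hood, geo-encoding and reverse geo-encoding both boil down to calls to the Google Geocoding API. Here's a hedged sketch in JavaScript; the plugin actually does this via the bundled Pronamic code, so the function names below are illustrative only:

```javascript
// Illustrative only: the endpoint is Google's Geocoding API; the plugin
// itself relies on the bundled Pronamic Google Maps code rather than
// these exact helpers.
const GEOCODE_ENDPOINT = 'https://maps.googleapis.com/maps/api/geocode/json';

// Address -> coordinates (geo-encoding)
function buildGeocodeUrl(address, apiKey) {
  const params = new URLSearchParams({ address, key: apiKey });
  return `${GEOCODE_ENDPOINT}?${params}`;
}

// Coordinates -> address (reverse geo-encoding): same endpoint,
// a latlng parameter instead of an address.
function buildReverseGeocodeUrl(lat, lng, apiKey) {
  const params = new URLSearchParams({ latlng: `${lat},${lng}`, key: apiKey });
  return `${GEOCODE_ENDPOINT}?${params}`;
}

console.log(buildGeocodeUrl('Walton Hall, Milton Keynes', 'YOUR_KEY'));
```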

geo-encoding and reverse geo-encoding

To save staff time in having to create new locations for evidence, data has been imported from the evidence hub for open education using the WP Ultimate CSV Importer Plugin.

Next…

With some infrastructure in place for recording data, the next challenge is to present it in a useful way. The current tack I'm going to take is to expose the data as JSON (there's a plugin for that) and try out some d3.js examples, like a Reingold–Tilford Tree (H/T Tony Hirst @psychemedia).
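As a rough sketch of the JSON shaping involved: d3's tree layouts expect nested { name, children } objects, so the flat hypothesis/evidence records would need to be nested something like this (the field names are my guesses, not the plugin's actual schema):

```javascript
// Nest evidence items under their hypothesis to produce the
// { name, children } hierarchy d3 tree layouts consume.
// Field names (id, title, hypothesisId) are illustrative.
function toTree(hypotheses, evidence) {
  return {
    name: 'OER Research Hub',
    children: hypotheses.map(h => ({
      name: h.title,
      children: evidence
        .filter(e => e.hypothesisId === h.id)
        .map(e => ({ name: e.title }))
    }))
  };
}

const tree = toTree(
  [{ id: 1, title: 'Hypothesis A' }],
  [{ id: 'x', hypothesisId: 1, title: 'Evidence 1' }]
);
console.log(JSON.stringify(tree));
```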

The code for this project is available on GitHub.


Repositories are living archives. In terms of the support it must provide for stored files, it must take into account two important functions of the files it holds:

  1. Access: The files are held so that users can access them. This means that they must be stored in formats that can be used by today's intended audience
  2. Preservation: The files are held so that users in 5, 10, 50, or more years can still access them. This means that they must be stored in formats that can be used by future audiences, or in formats that can easily be migrated

These two considerations are not always complementary. A file format that is good for access today may not be a format that is easy to migrate, but a format that is easy to migrate may not be easy to read.

The text above is taken from the JISC infoNet Digital Repositories infoKit. An added complication when considering the deposit of OER is that, if you are not using a 'No Derivatives' licence, how do you support remix and editing? Here's a scenario taken from WikiEducator:

A teacher wants to make a collage. She imports several PNG photos into Photoshop and creates the collage. She saves the file as a PSD and exports a copy as a PNG to post on the web. While others can edit the PNG, it would be a lot easier to edit the PSD file. However, in order to use PSD files, the person has to have a copy of Photoshop.

Already it's starting to get more tricky. PSD is a proprietary file format developed and owned by Adobe and used in Photoshop. You can actually open and edit PSD files in open source tools like GIMP (I'm not sure how legally GIMP can do this; I was waiting for a response from OSS Watch. Update: I've had a response. Upshot: 'it can be awkward on all levels'. I'll point to a related blog post when it's published: Post by Scott Wilson at OSS Watch on using proprietary file formats in open source projects). Similarly you can use open source alternatives to Microsoft Office, like LibreOffice, to open and edit DOC/XLS/PPT etc., but in this case Microsoft's proprietary file formats are covered by their Open Specification Promise, which, if you read the Wikipedia page, itself has a number of issues and limitations.

The next issue is, as highlighted by Chris Rusbridge in his Open letter to Microsoft on specs for obsolete file formats, that the OSP doesn't cover older file formats. So if you were an early adopter publishing OER in editable formats, there is a danger that the format you used won't be suitable down the line.

I'm mindful of the Digital Repositories infoKit's last point of guidance:

Be practical: Being overly-strict about file formats may mean collecting no files leading to an empty repository! A sensible approach must be used that weighs up the cost and benefits of different file formats and the effort required to convert between them.

Should OER file formats be tomorrow’s problem?


If you haven't already, you should check out Jorum's 2012 Summer of Enhancements; you'll see it's a lot more than a spring clean. In summary there are four major projects going on:

  • JDEP - Improving discoverability through semantic technology
  • JEAP - Expanding Jorum’s collection through aggregation projects
  • JPEP - Exposing activity data and paradata
  • JUEP - Improving the front-end UI and user experience (UI/UX)
SEO the Game by Subtle Network Design - The Apprentice Card
Image Copyright subtlenetwork.com

As I was tasked to write the chapter on OER Search Engine Optimisation (SEO) and Discoverability as part of our recent OER Booksprint I thought I’d share some personal reflections on the JDEP - Improving discoverability through semantic technology project (touching upon JEAP - Expanding Jorum’s collection through aggregation projects).

Looking through JDEP the focus appears to be mainly improving internal discoverability within Jorum with better indexing. There are some very interesting developments in this area most of which are beyond my realm of expertise.

Autonomy IDOL

The first aspect is deploying Autonomy IDOL, which uses "meaning-based search to unlock significant research material". Autonomy is an HP-owned company, and IDOL (Intelligent Data Operating Layer) was recently used in a project by Mimas, JISC Collections and the British Library to unlock hidden collections. Deploying Autonomy IDOL means that:

rather than searching simply by a specific keyword or phrase that could have a number of definitions or interpretations, our interface aims to understand relationships between documents and information and recognize the meaning behind the search query.

This is achieved by:

  • clustering search results around related conceptual themes
  • full-text indexing of documents and associated materials
  • text-mining of full-text documents
  • dynamic clustering and serendipitous browsing
  • visualisation approaches to search results

An aspect of Autonomy IDOL that caught my eye was:

 conceptual clustering capability of text, video and speech

Will Jorum be able to index resources using Autonomy's Speech Analytics solution?

If so, that would be very useful; the issue may be how Jorum resources are packaged and where resources are hosted. If you would like to see Autonomy IDOL in action, you can try the Institutional Repository Search, which searches across 160 UK repositories.

Will Jorum be implementing an Amazon style recommendation system?

One thing it'll be interesting to see (and this is perhaps more of a future aspiration) is the integration of an Amazon-style recommendation system. The CORE project has already published a similar documents plugin, but given Jorum already has single sign-on, I wonder how easy it would be to integrate a solution to make resource recommendations based on usage data (here's a paper on A Recommender System for the DSpace Open Repository Platform).
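For what it's worth, the core idea of usage-based recommendation is simple enough to sketch. This toy JavaScript version (nothing like the CORE or DSpace implementations, just the concept) ranks resources by how often they co-occur in other users' download histories:

```javascript
// Toy item-to-item recommender: given users' download histories,
// recommend resources most often downloaded alongside the target.
function recommend(histories, target) {
  const counts = {};
  for (const h of histories) {
    if (!h.includes(target)) continue;       // only histories containing the target
    for (const item of h) {
      if (item === target) continue;
      counts[item] = (counts[item] || 0) + 1; // co-occurrence count
    }
  }
  // Most co-downloaded first
  return Object.keys(counts).sort((a, b) => counts[b] - counts[a]);
}

const histories = [['a', 'b', 'c'], ['a', 'b'], ['b', 'd']];
console.log(recommend(histories, 'a')); // → ['b', 'c']
```

A real system would weight by recency and popularity, but with single sign-on the raw usage data to drive something like this is already there.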

Elasticsearch

This is a term I’ve heard of but don’t really know enough to comment on. I’m mentioning it here mainly to highlight the report Cottage Labs prepared Investigating the suitability of Apache Solr and Elasticsearch for Mimas Jorum / Dashboard, which outlines the problem and solution for indexing and statistical querying.

External discoverability and SEO

Will Jorum be improving search engine optimisation?

From the forthcoming chapter on OER SEO and Discoverability:

Why SEO and discoverability are important

In common with other types of web resources, the majority of people will use a search engine to find open educational resources, therefore it is important to ensure that OERs feature prominently in search engine results. In addition to ensuring that resources can be found by general search engines, it is also important to make sure they are easily discoverable in sites that are content or type specific, e.g. iTunes, YouTube, Flickr.

Although search engine optimisation can be complex, particularly given that search engines may change their algorithms with little or no prior warning or documentation, there is growing awareness that if institutions, projects or individuals wish to have a visible web presence and to disseminate their resources efficiently and effectively, search engine optimisation and ranking cannot be ignored.

The statistics are compelling:

  • Over 80% of web searches are performed using Google [Ref 1]
  • Traffic from Google searches varies from repository to repository but ranges between 50-80% are not uncommon [Ref 2]
  • As an indication 83% of college students begin their information search in a search engine [Ref 3]

Given the current dominance of Google as the preferred search engine, it is important to understand how to optimise open educational resources to be discovered via Google Search. However SEO techniques are not specific to Google and are applicable to optimise resource discovery by other search engines.

By all accounts the only way is up for Jorum: it was recently reported in the JISCMail REPOSITORIES-LIST that "just over 5% of Jorum traffic comes directly from Google referrals". So what is going wrong?

I'm not an SEO expert, but a quick check using a search for site:dspace.jorum.ac.uk returns 135,000 results, so content is being indexed (Jorum should have access to Google Webmaster Tools to get detailed index and ranking data). Resource pages include metadata such as DC.creator, DC.subject and more. One thing I noticed was missing from Jorum resource pages was <meta name="description" content="A description of the page" />. Why might this be important? Google will ignore meta tags it doesn't know (and here is the list of meta tags Google knows).
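For illustration, here's the sort of markup this implies; the values are placeholders, not taken from an actual Jorum page:

```html
<head>
  <meta name="DC.creator" content="Author name" />
  <meta name="DC.subject" content="Subject keywords" />
  <!-- the tag I couldn't find: Google can use this for the result snippet -->
  <meta name="description" content="A one or two sentence summary of the resource" />
</head>
```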

Another factor might be that Google apparently (I can't find a reference) trusts metadata that is human readable by using RDFa markup. So instead of hiding meta tags in the <head> of a page, Google might weight the data better if it was inline markup:

Current Jorum resource html source

With example of RDFa markup

[Taking this one step further Jorum might want to use schema.org to improve how resources are displayed in search results]
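By way of a sketch (the property names are from schema.org, but the structure is illustrative rather than a recommendation for Jorum's actual templates), inline markup might look like:

```html
<div vocab="http://schema.org/" typeof="CreativeWork">
  <h1 property="name">Resource title</h1>
  <p property="description">A human-readable description of the resource.</p>
  <span property="author">Author name</span>
</div>
```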

It will be interesting to see if JEAP (Expanding Jorum's collection through aggregation projects) improves SEO because of backlink love.

Looking further ahead

Will there be a LTI interface to allow institutions to integrate Jorum into their VLE?

Final thought. It's been interesting to see Blackboard enter the repository marketplace with xpLor (see Michael Feldstein's Blackboard's New Platform Strategy for details). A feature of this cloud service that particularly caught my eye was the use of IMS Learning Tools Interoperability (LTI) to allow institutions to integrate a repository within their existing VLE (CETIS IMS Learning Tools Interoperability Briefing paper). As I understand it, with this institutions would be able to seamlessly deposit and search for resources. I wonder: is this type of solution on the Jorum roadmap, or do you feel there would be a lack of appetite within the sector for such a solution?

Fin

Those are my thoughts anyway. I know Jorum would welcome additional feedback on their Summer of Enhancements. I also welcome any thoughts on my thoughts ;)

BTW Here's a nice presentation on Improving Institutional Repository Search Engine Visibility in Google and Google Scholar

Jorum has a Dashboard Beta (for exposing usage and other stats about OER in Jorum) up for the community to have a play with: we would like to get your feedback!

For more information see the blog post here: http://www.jorum.ac.uk/blog/post/38/collecting-statistics-just-got-a-whole-lot-sweeter

Pertinent info: the Dashboard has live Jorum stats behind it, but the stats have some irregularities, so the stats themselves come with a health warning. We’re moving from quite an old version of DSpace to the most recent version over the summer, at which point we will have more reliable stats.

We also have a special project going over the summer to enhance our statistics and other paradata provision, so we’d love to get as much community feedback as possible to feed into that work. We’ll be doing a specific blog post about that as soon as we have contractors finalised!

Feedback by any of the mechanisms suggested in the blog post, or via discussion here on the list, all welcome.

The above message came from Sarah Currier on the [email protected] list. This was my response:

It always warms my heart to see a little more data being made openly available :)

I imagine (and I might be wrong) that the main users of this data might be repository managers wanting to analyse how their institutional resources are doing. So to be able to filter uploads/downloads/views for their resources and compare with overall figures would be useful.

Another (perhaps equally important) use case would be individuals wanting to know how their resources are doing, so a personal dashboard of resources uploaded, downloads and views would also be useful. This is an area Lincoln's Bebop project was interested in, so it might be an idea to work with them to find out what data would be useful to them and in what format (although, saying that, I think I only found one #ukoer record for Lincoln). Hmm, I wonder if anyone else would find it useful if you pushed data to Google Spreadsheets à la Guardian datastore (here's some I captured as part of the OER Visualisation Project).

I'm interested to hear what the list thinks about these two points.

You might also want to consider how the data is licensed on the developer page. Back to my favourite example: Gent use the Open Data Commons licence http://opendatacommons.org/licenses/odbl/summary/

So what do you think of the beta dashboard? Do you think the two use cases I outline are valid or is there a more pertinent one? (If you want to leave a comment here I’ll make sure they are passed on to the Jorum team, or you can use other means).

[I’d also like to add a personal note that I’ve been impressed with the recent developments from Jorum/Mimas. There was a rocky period when I was at the JISC RSC when Jorum didn’t look aligned to what was going on in the wider world, but since then they’ve managed to turn it around and developments like this demonstrate a commitment to a better service]

Update: Bruce Mcpherson has been working some Excel/Google Spreadsheet magic and has links to examples in this comment thread


As the JISC OER Rapid Innovation projects have either started or will start very soon, mainly for my own benefit I thought it would be useful to quickly summarise the technical choices and challenges.

Attribute Images - University of Nottingham

Building on the Xpert search engine which has a searchable index of over 250,000 open educational resources, Nottingham are planning a tool to embed CC license information into images.

The Attribute Images project will extend the Xpert Attribution service by creating a new tool that allows users to upload images, either from their computer or from the web and have a Creative Commons attribution statement embedded in the images. … It will provide an option for the user to upload the newly attributed images to Flickr through the Flickr API … In addition it will have an API allowing developers to make use of the service in other sites.

From the project's first post, when they talk about 'embedding' CC statements it appears to be visible watermarking. It'll be interesting to see if the project explores the Creative Commons recommended Adobe Extensible Metadata Platform (XMP) to embed license information into the image data. Something they might want to test is whether the Flickr upload preserves this data when resizing. Creative Commons also have a range of tools to integrate license selection, so it'll be interesting to see if these are used or if there are compatibility issues.

Attribute Images Blog
Read more about Attribute Images on the JISC site

Bebop – University of Lincoln

Bebop is looking to help staff at Lincoln centralise personal resource creation activity from across platforms into a single stream.

This project will undertake research and development into the use of BuddyPress as an institutional academic profile management tool which aggregates teaching and learning resources as well as research outputs held on third-party websites into the individual’s BuddyPress profile. … This project will investigate and develop BuddyPress so as to integrate (‘consume’) third-party feeds and APIs into BuddyPress profiles and, furthermore, investigate the possibility of BuddyPress being used as a ‘producer application’ of data for re-publishing on other institutional websites and to third-party web services.

In a recent project post asking Where are the OERs? you can get an idea of the third-party APIs they will be looking at, which include Jorum/DSpace, YouTube, Slideshare etc. Talking to APIs isn't a problem, after all that is what they are designed to do, and having developed plugins for WordPress/BuddyPress myself, I know it is a great platform to work on. The main technical challenge is more likely to be doing this at scale and handling the variability in the type of data returned. It'll also be interesting to see if Bebop can be built with flexibility in mind (creating its own APIs so that it can be used on other platforms); it looks like the project is going down the route of aggregating RSS endpoints.

Bebop Blog
Read more about Bebop on the JISC site

Breaking Down Barriers: Building a GeoKnowledge Community with OER

The proposed project aims to Build a GeoKnowledge Community at Mimas by utilising existing technologies (DSpace) and services (Landmap/Jorum). The aim of the use case is to open-up 50% (8 courses) of the Learning Zone through Creative Commons (CC) Attribution Non-Commercial Share Alike (BY-NC-SA) license as agreed already with authors. A further aim is to transfer the hosting of the ELOGeo repository to Jorum from Nottingham (letter of support provided by University of Nottingham) and create a GeoKnowledge Community site embedded in Jorum using the DSpace API and linking the repository to the Landmap Learning Zone. … The technical solution in developing a specific community site within Jorum will be transferable to other communities that may have a similar requirement in the future.

I still don't feel I have a complete handle on the technical side of this project, but it's early days and already the project is producing a steady stream of posts on their blog. One for me to revisit.

Break Down Barriers Blog
Read more about Breaking Down Barriers on the JISC site

CAMILOE (Collation and Moderation of Intriguing Learning Objects in Education)

This project reclaims and updates 1800 quality assured evidence informed reviews of education research, guidance and practice that were produced and updated between 2003 and 2010 and which are now archived and difficult to access. … These resources were classified using a wide range of schemas including Dublin core, age range, teaching subject, resource type, English Teaching standard and topic area but are no longer searchable or browsable by these categories. … Advances in Open Educational Resources (OER) technologies provide an opportunity to make this resource useful again for the academics who created it. These tools include enhanced meta tagging schemas for journal documents, academic proofing tools, repositories for dissemination of OER resources, and open source software for journal moderation and para data concerning resource use.

So a lot of existing records to get into shape and put in something that makes them accessible again. Not only that, if you look at the project overview you can see usage statistics play an important part. CAMILOE is also one of the projects interested in depositing information into the UK Learning Registry node setup as part of the JLeRN Experiment.

Having dabbled with using Google Refine to get Jorum UKOER records into a different shape, I wonder if the project will go down this route or, given the number and existing shape of the records, manually re-index them. I'd be very surprised if RSS or OAI-PMH didn't make an appearance.

Read more about CAMILOE on the JISC site

Improving Accessibility to Mathematical Teaching Resources

Making digital mathematical documents fully accessible for visually impaired students is a major challenge to offer equal educational opportunities. … In this project we now want to turn our current program, that is the result of our research, into an assistive technology tool. … According to the identified requirements we will adapt and embed our tool into an existing open source solution for editing markup to allow post-processing of recognised and translated documents for correction and further editing. We will also add facilities to our tool to allow for suitable subject specific customisation by expert users. … In addition to working with accessibility support officers we also want to enable individual learners to employ the tool by making it available firstly via a web interface and finally for download under a Creative Commons License.

The project is building on their existing tool Maxtract, which turns mathematical formulae in PDF documents into other formats, including full text descriptions that are more screen reader friendly (a post with more info on how it works). So turning

example equation

into:

1 divided by square root of 2 pi integral sub R e to the power of minus x to the power of 2 slash 2 dx = 1 .
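Rendered in LaTeX, that transcription corresponds to:

```latex
\frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} e^{-x^{2}/2} \, dx = 1
```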

The other formats the tool already supports are PDF annotated with LaTeX and XHTML. The project is partnering with JISC TechDis to gather specific user requirements.

Improving Accessibility to Mathematics Blog
Read more about Improving Accessibility to Mathematics on the JISC site

Linked Data Approaches to OERs

This project extends MIT’s Exhibit tool to allow users to construct bundles of OERs and other online content around playback of online video. … This project takes a linked data approach to aggregation of OERS and other online content in order  to improve the ‘usefulness’ of online resources for education. The outcome will be an open-source application which uses linked data approaches to present a collection of pedagogically related resources, framed within a narrative created by either the teacher or the students. The ‘collections’ or ‘narratives’ created using the tool will be organised around playback of rich media, such as audio or video, and will be both flexible and scaleable.

MIT's Exhibit tool, particularly the timeline aspect, was something I used in the OER Visualisation Project. The project has already produced some videos demonstrating a prototype that uses a timecode to control what is displayed (First prototype!, Prototype #2 and Prototype #2 (part two)). I'm still not entirely sure what 'linked data approaches' will mean in practice, so it'll be interesting to see how that shapes up.

Linked Data Approaches to OERs Blog
Read more about Linked Data Approaches to OERs on the JISC site <- not on the site yet

Portfolio Commons

… seeks to provide free and open source software tools that can easily integrate open educational practices (the creation, use and sharing of OERs) into the daily routines of learners and teachers … This project proposes to create a free open source plugin for Mahara that will enable a user to select content from their Mahara Portfolio, licence it with a Creative Commons licence of their choosing, create metadata and make a deposit directly into their chosen repositories using the SWORD protocol

The SWORD Protocol, which was developed with funding by JISC, has a healthy eco system of compliant repositories, clients and code libraries, so the technical challenge on that part is getting it wired up as a plugin for Mahara. Creative Commons also have a range of tools to integrate license selection for web applications. It’ll be interesting to see if these are used.

When I met the project manager, John Casey, in London recently, I also mentioned, given the arts background of this project, that scoping whether integration with the Flickr API would be useful. Given that the Attribute Images project mentioned above is looking at this area, the ideal scenario might be to link the Mahara plugin to an Attribute Images API, but timings might prevent that.

Read more about Portfolio Commons on the JISC site

Rapid Innovation Dynamic Learning Maps-Learning Registry (RIDLR)

Newcastle University’s Dynamic Learning Maps system (developed with JISC funding) is now embedded in the MBBS curriculum, and now being taken up in Geography and other subject areas … In RIDLR we will test the release of contextually rich paradata via the JLeRN Experiment to the Learning Registry and harvest back paradata about prescribed and additional personally collected resources used within and to augment the MBBS curriculum, to enhance the experience of teachers and learners. We will develop open APIs to harvest and release paradata on OER from end-users (bookmarks, tags, comments, ratings and reviews etc) from the Learning Registry and other sources for specific topics, within the context of curriculum and personal maps.

The technical challenge here is getting data into and out of the Learning Registry; it'll be interesting to see what APIs they come up with, what data they can get, and whether it's usable within Dynamic Learning Maps. More information, including a use case for this project, has been posted here.

RIDLR and SupOERGlue Blog
Read more about RIDLR on the JISC site

RedFeather (Resource Exhibition and Discovery)

RedFeather (Resource Exhibition and Discovery) is a proposed lightweight repository server-side script that fosters best practice for OER, it can be dropped into any website with PHP, and which enables appropriate metadata to be assigned to resources, creates views in multiple formats (including HTML with in-browser previews, RSS and JSON), and provides instant tools to submit to Xpert and Jorum, or migrate to full repository platforms via SWORD.

The above quote nicely summarises the technical headlines. In a recent blog post the team illustrate how RedFeather might be used in a couple of use cases. The core component appears to be a single file (coded in PHP, a server-side scripting language) transferred along with files/resources to a web server. It'll be interesting to see if the project explores different deployments, for example packaging RedFeather on a portable web server (a server on a USB stick), deploying on ScraperWiki (a place in the cloud where you can execute PHP), or looking at how other cloud/third-party services could be used. Update: I forgot to mention the OERPub API, which is built on SWORD v2. The interesting part that I'm watching closely is whether this API will provide a means to publish to non-SWORD repositories like YouTube, Flickr and Slideshare.

RedFeather Blog
Read more about RedFeather on the JISC site

Sharing Paradata Across Widget Stores (SPAWS)

We will use the Learning Registry infrastructure to share paradata about Widgets across multiple Widget Stores, improving the information available to users for selecting widgets and improving discovery by pooling usage information across stores.

For more detail on what paradata will be included the SPAWS nutshell post says:

each time a user visits a store and writes a review about a particular widget/gadget, or rates it, or embeds it, that information can potentially be syndicated to other stores in the network
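The pooling side of that is simple to sketch. This toy JavaScript (purely illustrative, not the Learning Registry's actual paradata format or SPAWS code) averages a widget's ratings across stores:

```javascript
// Pool ratings for each widget across multiple stores and average them.
// Each store is an array of { widget, rating } records (illustrative shape).
function poolRatings(stores) {
  const sums = {};
  for (const store of stores) {
    for (const { widget, rating } of store) {
      (sums[widget] = sums[widget] || []).push(rating);
    }
  }
  return Object.fromEntries(
    Object.entries(sums).map(([w, rs]) => [w, rs.reduce((a, b) => a + b, 0) / rs.length])
  );
}

const pooled = poolRatings([
  [{ widget: 'quiz', rating: 4 }],
  [{ widget: 'quiz', rating: 2 }, { widget: 'map', rating: 5 }]
]);
console.log(pooled); // → { quiz: 3, map: 5 }
```

The point of the Learning Registry infrastructure is that each store only has to publish and consume paradata statements; the pooling then falls out of aggregating everyone's contributions.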

There’s not much for me to add about the technical side of this project as Scott has already posted a technical overview and gone into more detail about the infrastructure and some initial code.

SPAWS Blog
Read more about SPAWS on the JISC site

SPINDLE: Increasing OER discoverability by improved keyword metadata via automatic speech to text transcription

SPINDLE will create linguistic analysis tools to filter uncommon spoken words from the automatically generated word-level transcriptions that will be obtained using Large Vocabulary Continuous Speech Recognition (LVCSR) software. SPINDLE will use this analysis to generate a keyword corpus for enriching metadata, and to provide scope for indexing inside rich media content using HTML5.

Enhancing the discoverability of audio/media is something I'm very familiar with, having used tweets to index videos. My enthusiasm for this area took a knock when I discovered Mike Wald's Synote system, which uses IBM's ViaScribe to extract annotations from video/audio. There's a lot of overlap between Synote and SPINDLE, which is why it was good to see them talking to each other at the programme start-up meeting. As far as I'm aware JISC funding for Synote ended in 2009 (but has just been re-funded for a mobile version), so now is a good time to look at how open source LVCSR software can be used in a scenario where accuracy for accessibility as an assistive technology is being replaced by best guess to improve accessibility in terms of discoverability.

In terms of the technical side, it will be interesting to see if SPINDLE looks at WebVTT, which seems to be finding its way through the W3C and does include an option for metadata (the issue might be that the 'V' in WebVTT stands for video). Something that I hope doesn't put SPINDLE off looking at WebVTT is the lack of native browser support (although it is on the way); there are some JavaScript libraries you can use to handle WebVTT. It'll also be interesting to see if there is a chance to compare (or highlight existing research comparing) an open source offering like Sphinx with commercial products (e.g. ViaScribe).
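For reference, a WebVTT file used as a metadata track (served via <track kind="metadata">) could carry keyword cues like this; the cue contents here are purely illustrative:

```
WEBVTT

00:00:10.000 --> 00:00:15.000
keywords: spectroscopy, absorption lines

00:00:15.000 --> 00:00:42.000
keywords: Doppler shift, radial velocity
```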

SPINDLE Blog
Read more about SPINDLE on the JISC site

SupOERGlue

SupOERGlue will pilot the integration of OER Glue with Newcastle University’s Dynamic Learning Maps, enabling easy content creation and aggregation from within the learning and teaching support environments, related to specific topics. … Partnering with Tatemae to use OER Glue, which harvests OER from around the world and has developed innovative ways for academics and learners to aggregate customised learning packages constructed of different OER, will enable staff and students to create their own personalised resource mashups which are directly related to specific topics in the curriculum.

Tatemae have a track record of working with open educational resources and courseware, including developing OER Glue. There’s not a huge amount for me to say on the technical side. I did notice that OER Glue currently only works in the Google Chrome web browser. Having worked in a number of institutions where installing extra software is a chore, it’ll be interesting to see if this causes a problem. More information including a use case for this project has been posted here. Update: Related to the RedFeather update, I’m wondering if SupOERGlue will be looking at OERPub (“An architecture for remixable Open Educational Resources (OER)”) as a framework to republish OER.

RIDLR and SupOERGlue Blog
Read more about SupOERGlue on the JISC site

Synote Mobile

Synote Mobile will meet the important user need to make web-based OER recordings easier to access, search, manage, and exploit for learners, teachers and others. …This project will create a new mobile HTML5 version of Synote able to replay Synote recordings on any student’s mobile device capable of connecting to the Internet. The use of HTML5 will overcome the need to develop multiple device-specific applications. The original version of Synote displays the recording, transcript, notes and slide images in four different panels which uses too much screen area for a small mobile device. Synote Mobile will therefore be designed to display captions and notes and images simultaneously ‘over’ the video. Where necessary existing Synote recordings will be converted into an appropriate format to be played by the HTML5 player. Success will be demonstrated by tests and student evaluations using Synote recordings on their mobile devices.

I’ve already mentioned Synote in relation to SPINDLE. Even though it’s early days, the project is already documenting a number of its technical challenges. This includes reference to LongTail’s State of HTML5 Video report and a related post on Salt Websites. The latter references WebVTT and highlights some libraries that can be used. Use of JavaScript libraries gets around the lack of <track> support in browsers, but as LongTail’s State of HTML5 Video report states:

The element [<track>] is brand new, but every browser vendor is working hard to support it. This is especially important for mobile, since developers cannot use JavaScript to manually draw captions over a video element there.

The report goes on to say:

Note the HTML5 specification defines an alternative approaches to loading captions. It leverages video files with embedded text tracks. iOS supports this today (without API support), but no other browser has yet committed to implement this mechanism. Embedded text tracks are easier to deploy, but harder to edit and make available for search.
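Where <track> isn’t available, the core of a JavaScript caption renderer boils down to picking the active cue for the current playback time and drawing it over the video. Here’s a hedged sketch of that selection step — the cue shape and function name are my own invention, not Synote’s code.

```javascript
// Core of a script-based caption renderer: given a cue list and the
// video's currentTime, return the text to overlay (or null between
// cues). A real player would call this on the "timeupdate" event and
// write the result into an element positioned over the video.
function activeCueText(cues, currentTime) {
  const cue = cues.find(c => currentTime >= c.start && currentTime < c.end);
  return cue ? cue.text : null;
}
```

The reason this matters for Synote Mobile is exactly the mobile limitation the report describes: if a device won’t let script draw over the video element, this whole approach is off the table and embedded text tracks become the fallback.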

Interesting times for Synote Mobile and potentially an opportunity for the sector to learn a lot of lessons about creating accessible mobile video.

Synote Mobile Blog
Read more about Synote Mobile on the JISC site

Track OER

The project aims to look at two ways to reduce tensions between keeping OER in one place and OER spreading and transferring. If we can find out more about where OER is being used then we can continue to gather the information that is needed and help exploit the openness of OER. … The action of the project will be to develop software that can help track open educational resources. The software will be generic in nature and build from existing work developed by BCCampus and MIT, however a key step in this project is to provide an instantiation of the tracking on the Open University’s OpenLearn platform. … The solution will build on earlier work, notably by OLnet fellow Scott Leslie (BCCampus) and JISC project CaPRéT led by Brandon Muramatsu (MIT project partner in B2S).

At the programme start-up meeting I talked to Patrick McAndrew, who is leading this project. Part one of the solution is to include a unique Creative Commons licence icon hosted on OU servers which, when called by a resource reusing some content, leaves a trace (option 3 in the suggested solutions here). This technique is well established and one I first came across when using the ClustrMaps service, which uses a map of your website visitors as a hit counter (ClustrMaps was developed by Marc Eisenstadt, Emeritus Professor at the Open University – small world ;). It looks like Piwik, an open source alternative to Google Analytics, is going to be used to handle/dashboard the web analytics. The second solution is extending the CETIS funded CaPRéT developed by Brandon Muramatsu & Co. at MIT, which uses JavaScript to track when a user copies and pastes some text. It’ll be interesting to see if Track OER can port the CaPRéT backend to Piwik (BTW Pat Lockley has posted how to do OER copy tracking using Google Analytics, which uses similar techniques).
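The hosted-icon trick is essentially a web beacon: the reusing page embeds an image served from a central host, and the image request itself (its query string and referrer) is the trace. Here’s a small sketch of building such a beacon URL — the host name and parameters are invented for illustration, not Track OER’s actual scheme.

```javascript
// Sketch of the web-beacon idea behind Track OER: the CC licence badge
// is served from a central host, and each image request is the trace.
// The host and parameter names here are invented for illustration.
function beaconUrl(resourceId, licence) {
  const params = new URLSearchParams({ id: resourceId, licence });
  return "https://example.open.ac.uk/cc-badge.png?" + params.toString();
}

// A reusing page would then embed something like:
//   <img src="https://example.open.ac.uk/cc-badge.png?id=...&licence=by-sa">
// and the server logs (or an analytics package such as Piwik) record
// each hit along with its referrer.
```

This is exactly why the badge has to be hosted centrally rather than copied with the resource: a locally copied image never phones home.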

Track OER Blog
Read more about Track OER on the JISC site

Xerte Experience Now Improved: Targeting HTML5 (XENITH)

Xerte Online Toolkits is a suite of tools in widespread use by teaching staff to create interactive learning materials. This project will develop the functionality for Xerte Online Toolkits to deliver content as HTML5. Xerte Online Toolkits creates and stores content as XML, and uses the Flash Player to present content. There is an increasing need for Xerte Online Toolkits to accommodate a wider range of delivery devices and platforms.

Here’s a page with more information about Xerte Online Toolkits, here’s an example toolkit and the source XML used to render it (view source). The issue is I haven’t seen the detail for the XENITH project, but something I initially thought about was whether they would use XSLT (Extensible Stylesheet Language Transformations), though I wondered if this would be a huge headache when converting their Flash player. Another possible solution I recently came across is Jangaroo:

Jangaroo is an Open Source project building developer tools that adopt the power of ActionScript 3 to create high-quality JavaScript frameworks and applications. Jangaroo is released under the Apache License, Version 2.0.

This includes the promise to “let your existing ActionScript 3 application run in the browser without a Flash plugin”. It’ll be interesting to see the solution the project implements.
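Whichever route XENITH takes, the heart of the job is mapping Xerte’s XML elements onto HTML5 ones. The following caricature shows that mapping in JavaScript with invented element names and deliberately naive string replacement — real Xerte XML is far richer, and in practice XSLT or a proper XML parser would do this work.

```javascript
// Caricature of the XML-to-HTML5 mapping at the heart of XENITH.
// Element names are invented and the string replacement is naive;
// a real implementation would use XSLT or a proper XML parser.
const TAG_MAP = { learningObject: "article", page: "section", text: "p" };

function toHtml5(xml) {
  return xml.replace(/<(\/?)(\w+)([^>]*)>/g, (m, slash, tag, rest) =>
    `<${slash}${TAG_MAP[tag] || tag}${rest}>`);
}
```

The point of the sketch is that once content is stored as XML, the Flash player is just one rendering of it, and an HTML5 rendering is another transform away.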

XENITH Blog
Read more about XENITH on the JISC site

BTW here’s the OPML file for the RSS feeds of the blogs that are live (also visible here as a Google Reader bundle)

So which of these projects interests you the most? If you are on one of the projects do my technical highlights look right or have I missed something important?


a full-fledged repository with complete history and full revision tracking capabilities, not dependent on network access or a central server

That quote is taken from the Wikipedia entry for Git (software); the full quote is:

In software development, Git (/ɡɪt/) is a distributed revision control and source code management (SCM) system with an emphasis on speed.[4] Git was initially designed and developed by Linus Torvalds for Linux kernel development. Every Git working directory is a full-fledged repository with complete history and full revision tracking capabilities, not dependent on network access or a central server. … Git supports rapid branching and merging, and includes specific tools for visualizing and navigating a non-linear development history. A core assumption in Git is that a change will be merged more often than it is written, as it is passed around various reviewers.

The idea of using Git as a platform in open educational development (not just as a software development tool) is something that has pinged my radar a couple of times this year, so I thought I’d quickly* share some interesting links and material in this area. The core concept when reading this is the idea that Git repositories are:

  • designed as a collaborative space; and
  • encourage remixing and branching of material

*I’m not entirely happy with how this post is written but don’t want to spend too much time on it – consider it as some very rough notes.

Open bid writing

As it happens, the order in which I came across these links also fits in with an evolution of the idea from software to educational support tool. The first example is still more at the software end, in this case the use of the GitHub service by Joss Winn at the University of Lincoln as a place for open bid writing, but it helps highlight the potential benefits of Git.

Project proposal versioning

In ‘Open Bid Writing’ Joss reflects on the use of GitHub to develop his proposal for the, now funded, JISC OER Rapid Innovation Bebop project. The main advantage highlighted in the post is that, as this was proposed as a software development project, the final code and proposal will all sit in one place. Now you might ask how this is different from just uploading your project plan to your project site. The difference is that, just as Git allows you to navigate different versions of the code, you can also see how the proposal evolved, see different versions of the proposal and how it was constructed, and even how ideas evolved. Joss also points out that using GitHub during the writing process gave the opportunity for others to learn from or even contribute to the proposal.

The final aspect, not included in the post but mentioned by Joss in a tweet before submitting the proposal, is Git’s functionality for someone else to fork the project, that is, take a snapshot of the proposal and develop it in a completely different direction. So at a later date you might see an opportunity to do something similar to Bebop and, instead of starting from scratch, use Lincoln’s proposal as the basis of your own work.

[In Joss’ post he also notes that one of the student projects at DevXS was to create a GitHub hosted version of the collaborative writing tool Etherpad which stores documents in GitHub. You can read more about RevisionHub here and the code developed at DevXS is here.]

Not code, but poetry

‘Code is poetry’ is the WordPress motto, but as Phil Beauvoir (JISC CETIS) highlights in his post Forking Hell? Git, GitHub, and the Rise of Social Coding, people are already using Git repositories for purposes beyond coding. These include writers, musicians and artists all putting their material in Git for others to contribute to or fork to make something different. My favourite example from Phil’s post is:

Durham-based band, the Bristol 7’s, last year released their album, “The Narwhalingus EP” on GitHub under a Creative Commons licence “to see what the world could do with it”. The release, if we can call it that, comprises the final mixes and the individual tracks as MP3 files. The band invites everyone to:

“Fork the repo, sing some harmony, steal my guitar solo, or add a Trance beat. Whatever you want to do, just tell us about it, so we can hear what’s become of our baby!”

[Sticking very loosely with art, I see via Ed Summers’ cc0 and git for data post that:]

the Cooper-Hewitt National Design Museum at the Smithsonian Institution made a pretty important announcement almost a month ago that they have released their collection metadata on GitHub using the CC0 Creative Commons license

Forking Your Syllabus

So far the examples I’ve highlighted have all used the GitHub service. Earlier in the week I had a chance to chat to Joss Winn at the JISC OER Rapid Innovation start-up meeting and we started talking about Git. One of the things Joss mentioned was that whilst Git presents a number of opportunities for academics to contribute, share and reuse material, the terms and concepts of Git are foreign to the average academic. A post I had read but not fully processed is Brian Croxall’s Forking Your Syllabus. In this post Brian highlights that for new teachers it can be daunting to design a programme of learning and that “when you’re beginning to plan something new, you can always benefit from seeing what others before you have done”.

Brian goes on to join the dots between syllabus creation and Git, the final picture coming together with Audrey Watters’ ClassConnect: "GitHub" for Class Lessons. My hunch is ClassConnect has a Git backend, and while the icon set and functionality say ‘fork’, the language is ‘used’.

As Audrey points out ClassConnect is a new product and I don’t think all of the required features are there yet, like selecting and searching by Creative Commons license, but the idea of using the Git model in educational development is one to watch.

But that’s what I think. What do you think? Are the soft issues of getting people to work in a more open way always going to overshadow any technical development to make it easier to do this? Or will tools like ClassConnect suck people into different working practices? Will staff ‘git’ it?

Update: There's been some more discussion on this idea on the OER-DISCUSS JISCMail list