
When you share a link on Twitter there are a number of services, like bit.ly, which allow you to track the impact of the url in terms of the number of clicks it attracts from other users. At the same time there are a number of ways to monitor people sharing links to your site, the most basic being a Twitter search like this one for hawksey.info. Using these search results you could start extracting the follower information from the person tweeting, work out potential reach and so on, but wouldn't you like to know, as with your own bit.ly account, how many visits someone else's tweet generated to your site? Fortunately there is a way to do this. In this post I give you two tools to help you do it and look at why this information might be useful, but let's first look at how it is possible.

Referral Traffic

In August 2011 Twitter started automatically wrapping links over 19 characters in its own shortening service, t.co, and in October it started wrapping all links in t.co. As you navigate around the web, the server for each page you visit generally knows which page you came from (exceptions include direct traffic and visits coming from https:// pages). When you click on a t.co link the destination site can track where you came from, known as referral traffic (interestingly it looks like Twitter also bypasses your url shortener of choice by following the redirects until it reaches a final destination and uses that link in the t.co redirect). So when you click on a t.co link posted on Twitter and land on my site, I can detect that that's where you came from. Furthermore, each time a person tweets a link it gets wrapped in a unique t.co url even if that url has been shortened before, the exception being new-style retweets. This means that when someone clicks on a tweeted link I can trace it back to a single person.

Let's see how this works in practice. When I fire up my Google Analytics account and look at referral traffic I can see it's dominated by t.co sources. Drilling down into the t.co data I can see how many visits each t.co link generated.

Referral source Referral path

Searching Twitter for, say, http://t.co/wEbXrPah allows me to trace it back to this tweet:

So we can say this tweet and its subsequent 5 retweets generated 42 visits to my blog. At this point you might be saying that this tweet above has a bit.ly link and there's no reference to t.co. It may say bit.ly, but underneath the hyperlink is t.co:

Tweet html source

For some this isn't news. In fact, Tom Critchlow was writing about how 'Twitter's t.co link shortening service is game changing – here's why' way back in August 2011. His post probably has a better explanation of what is happening and also includes a bookmarklet (which appears broken after the recent Analytics overhaul) that takes the t.co referral path from your Google Analytics report and searches it on Twitter to find out which person's tweet is sending you the traffic.

I thought this was a neat idea but wanted to get the full impact of seeing the visit count associated with a named person. I tried a couple of ways to inject the data using a bookmarklet with no joy, so I turned to Google Spreadsheets (and in particular Google Apps Script) to marry data from Twitter with Google Analytics. So I give you:

Method 1: Quick 7 day search

*** Twitter/Google Analytics Top Distributor Sheet v1.0 ***
[File > Make a copy for your own version]

With this Google Spreadsheet I can authenticate with my Google Analytics account which then allows me to extract t.co data using the Google Analytics Core Reporting API. I then pass each t.co link to the Twitter Search API to find out who tweeted it first. This is all wrapped in a custom formula getGATwitterRef(startDate, endDate, optional numberOfResults) which generates a table of results like this:

Twitter/Google Analytics Top Distributor Sheet v1.0

So big thanks go to Brian E. Bennett (@bennettscience) for generating 16 visits, and to Alberto Cottica (@alberto_cottica), @futuresoup and others for also generating traffic. But who generated the 14 visits? The reason this row is blank is that, while the link is still generating traffic to my site, it was first tweeted over 7 days ago, which puts it outwith the reach of the Twitter Search API.

What a terrible shame. Never mind, 9 out of 10 isn't bad … ah, but hold on, I've got a Google Spreadsheet template that can archive Twitter searches (TAGS). So I give you:

Method 2: TAGS v4.0 with Google Analytics integration

*** TAGS v4.0 with Google Analytics integration ***
[File > Make a copy for your own version]
Update: 11th Sept. 2014 TAGS has moved on and this method might no longer work

So by using a search term for your domain you can collect tens of thousands of tweets over months and years, which you can then query against your Google Analytics data.

'What the flip?' you might be asking. Here's the explanation. By using the same search query from the beginning of this post for all tweets containing 'hawksey.info' (and because Twitter wraps everything in t.co it catches these even if they began life as a bit.ly or goo.gl link) I can build up a corpus of tweets containing links to my site. If you look at the archive I'm building you'll see that in the column labelled 'text' there isn't a bit.ly or goo.gl in sight; all the links are t.co.

So all I need to do is extract a t.co referral path I'm interested in from my Google Analytics data and find who first tweeted it in my archive, giving me the number of visits that link generated.
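To make the lookup step concrete, here is a minimal Apps Script sketch of how you might match a referral path against a TAGS archive. It is illustrative rather than the code in the template, and the sheet name and column headings ('Archive', 'text', 'from_user') are assumptions.

// Illustrative sketch: given a t.co referral path from Analytics
// (e.g. "/wEbXrPah"), find who first tweeted that link in a TAGS archive.
function whoTweetedPath(tcoPath) {
  var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("Archive");
  var rows = sheet.getDataRange().getValues();
  var header = rows[0];
  var textCol = header.indexOf("text");       // tweet text column (assumed name)
  var userCol = header.indexOf("from_user");  // screen name column (assumed name)
  // If newest tweets sit at the top of the archive, walking from the
  // bottom up returns the earliest tweet containing the wrapped link.
  for (var i = rows.length - 1; i > 0; i--) {
    var text = rows[i][textCol];
    if (text && text.indexOf(tcoPath) !== -1) {
      return rows[i][userCol];
    }
  }
  return "not found in archive";
}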

So now I can say thank you to @BillMew for the mystery 14 visits, and thank you to @TweetSmarter for the 38 visits generated from your tweet last month (you'll see some #N/A values, I think because my collection went offline for a bit; it's working sweetly now).

TAGS v4.0 with Google Analytics integration

Why

Whilst there could be a seriously creepy side to this (let's not forget people like Google have made serious bucks knowing where you go and what you share), there are a couple of reasons why I was interested in following up this concept. One was in relation to the Learning Registry/JLeRN experiment (background info here), which is trying to create a framework beyond metadata to include activity/paradata around resources. The idea is this data can be used to provide feedback and improve discoverability of resources. So potentially you've got some rich data to push into a node … errm I think, maybe #haventplayedwiththespecyet

Another thought was that during the OER Visualisation project I discovered that social sharing of resources appears, insert caveats, to be rare. If you could recognise and reward the people pushing content it might encourage them to distribute more (and as I highlighted in How do I 'like' your course? The value of Facebook recommendation, there is real and measurable value in people distributing information through their networks) – the flip side being that as soon as you start measuring something, someone else will start gaming the system.

There's also a degree of profiling you could do. If someone ends up at your resource having clicked on a link shared by A, they may have something in common with A, so you could target additional resources to them based on what A might like.

I’m sure there are others. As I’ve shared the tools to do this it’s only fair that you share your ideas in the comments.

But there is potentially more …

Data Feed Query Explorer

I will leave you with one last thought. I haven't mentioned much about the code, which is available and open source (my bits anyway) via the Script Editor. To help construct the Google Analytics query I used the Data Feed Query Explorer. Here's a permalink to the main query structure I used. If you open the link and hit the 'Authenticate with Google Analytics' button, choosing one of your own analytics ids, you can see what data comes back. I've been conservative, only pulling what I need, but if you click on the 'dimensions' box you can see I could also be pulling where the visits were coming from, time of day, and more. All potentially valuable intelligence to give you a picture of how a resource is being shared – if you can unlock it, of course.
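For a flavour of the kind of query involved, here's an illustrative Apps Script sketch (not the exact code in the spreadsheet's Script Editor) that asks the Core Reporting API for t.co referral paths and visit counts. It assumes you already have an OAuth token; the shared sheet handles authorisation for you.

function fetchTcoReferrals(profileId, token, startDate, endDate) {
  var url = "https://www.googleapis.com/analytics/v3/data/ga"
    + "?ids=ga:" + profileId
    + "&start-date=" + startDate
    + "&end-date=" + endDate
    + "&metrics=ga:visits"
    + "&dimensions=ga:referralPath" // could also pull ga:country, ga:hour etc.
    + "&filters=" + encodeURIComponent("ga:source==t.co")
    + "&sort=-ga:visits";
  var response = UrlFetchApp.fetch(url, {
    headers: { Authorization: "Bearer " + token }
  });
  // rows come back as [referralPath, visits] pairs, ready to write to a sheet
  return JSON.parse(response.getContentText()).rows || [];
}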


UKOER Hashtag Community

Last week I started to play with the #ukoer hashtag archive (which has generated lots of useful coding snippets to process the data that I still need to blog … doh!). In the meantime I thought I'd share an early output. Embedded below is a zoom.it of the #ukoer hashtag community. The sketch (HT @psychemedia) is from a partial list* of twitterers (n=865) who have used the #ukoer hashtag in the last couple of years and who they currently follow. The image represents over 24,000 friendships, the average person having almost 30 connections to other people in the community.

 

Publishing an early draft of this image generated a couple of 'it looks like' comments (HT @LornaMCampbell @glittrgirl). To me it looks like a heart, hence the title of this post. The other thing that usually raises questions is how the colour groupings are defined (HT @ambrouk). The answer in this case is that they're generated by a modularity algorithm which tries to automatically detect community structure.

As an experiment I've filtered the Twitter profile information used for each of these groupings and generated a wordcloud using R. (The R script used is a slight modification of one I've submitted to the Twitter Backchannel Analysis repository Tony started – something else I need to blog about. The modification is to SELECT a column WHERE modclass=something.)

Right, all this post has done is remind me of my post backlog, and I've got more #ukoer visualisation to do, so I'd better get on with it.

*it's a partial list because, as far as I know, there isn't a complete archive of #ukoer tweets. The data I'm working from is an export from TwapperKeeper for March 2010–Jan 2012, topped up with some data from Topsy for April 2009–March 2010.


I should start with the result so that you can see if it’s worth doing this:

The video shows the deposits from institutions and Subject Centres to Jorum tagged ‘ukoer’ from January 2009 to November 2011. In total over 8,000 deposits condensed into 5 minutes (there are more records, but these were the ones that could be reconciled against an institution name).

Here's the recipe I used to do it, which should be easy to modify for your own and other repositories. As the explanation takes longer than actually doing it, I'm going to assume you understand some basic tools and techniques, but you can always leave a comment if something isn't clear.

Let's start by looking at what it is we are trying to achieve. The animation is generated using code from the open source Gource project. Gource uses an input log file to visualise software commits using the format shown below. So for the Jorum visualisation we need to generate a file with timestamp, creator (in this case the submitter's host institution) and title (prefixed by subject classification).

Gource log format

The user andrew adding the file src/main.cpp on Thu, 03 Jun 2010 05:39:55 GMT (1275543595):

1275543595|andrew|A|src/main.cpp

Getting the data – building a source

To build the log file we need details of the records from Jorum. Fortunately Jorum implements the OAI Protocol for Metadata Harvesting, which is designed to allow easy sharing of and access to repository data. I say easy, but in reality it's easy if you have another repository on a server somewhere that can consume OAI data; it's not easy to find a desktop-based solution. After a lot of trial and error I've arrived at a solution using a combination of MS Excel and Google Refine (BTW "Google Refine is a power tool for working with messy data, cleaning it up, transforming it from one format into another" – it's also open source).

I had hoped to do all of this in Google Refine but was struggling with the initial data import, recognising the schema and including records with multiple subject classifications, so we briefly start with MS Excel.

In Excel (I’m using 2010, other versions may work) we want to start a new workbook. In the Data ribbon select ‘From Web’. In the dialog that opens in the Address bar enter http://dspace.jorum.ac.uk/oai/request?verb=ListIdentifiers&metadataPrefix=oai_dc .

Excel data ribbon

Once it's finished loading (which can take a while) click Import. You'll now get some dialog boxes warning you about the XML import, but you can ignore those. You should now have a sheet of List Identifiers, that is a list of all the record identifiers (ns1:identifier4), the subject set they are attached to (ns1:setSpec6) and a status column – you'll find that there are more columns, mainly blank, which we don't need.

Excel XML data imported

Next we want add some readable subject classification to the data by changing setSpec ids into text equivalents. This data is also available via Jorum’s OAI service and the raw data can be seen by looking at http://dspace.jorum.ac.uk/oai/request?verb=ListSets.

To get this data into Excel we want to follow a similar process to the one above, in the same spreadsheet getting Data – From Web using http://dspace.jorum.ac.uk/oai/request?verb=ListSets as the address. This gives us a sheet similar to the one below with setSpec ids and associated category names.

Getting subject classification

Next we want to match this data to the sheet of List Identifiers. To do this we first want to sort the data we just captured on the setSpec column. Now in the original sheet add a new column and enter the following formula in the cell immediately beneath the column name (row 2):

=VLOOKUP([@[ns1:setSpec6]],ListSets!A:B,2,FALSE)

This formula looks up the setSpec6 value, matches it against the data we just got and returns a setName. You can now save this spreadsheet.

Getting more data using Google Refine

So far we've got a list of record ids from Jorum and the subject category for each record. We still need to get when the record was created, who by, and the resource title. To do this we are going to use Google Refine. If you haven't already, here's how to install Google Refine. Open Google Refine and create a new project from the Excel file we just created. The default settings should work; just make sure you select the sheet with the 19,000-plus rows.

After the project has been created we next want to get more information for each record identifier. From the ns1:identifier4 column drop-down menu select Edit column > Add column by fetching URLs:

Google Refine - add data from column

In the dialog box that opens use the following settings:

  • New column name – record
  • Throttle delay – 500
  • On error – set to blank
  • Language – GREL
  • Expression - "http://dspace.jorum.ac.uk/oai/request?verb=GetRecord&metadataPrefix=oai_dc&identifier="+value

Google Refine - Add column by fetching URL

When you hit OK, Google Refine will use the row value to fetch even more data from Jorum and enter it into a cell. This is done using another entry point to the OAI services, using each identifier to get all the record data (here's an example response). As this has to process over 19,000 requests it can take some time. If you would prefer not to wait, here's an export of my Refine project with the data already collected.

Google Refine - Returned data

So now we have all the information we need but it’s all in one cell, so we need to do a bit more refining.

Extracting a date

You'll notice that each record has a couple of dates stored in dc:date. Let's look at extracting the first date we find. Google Refine has a couple of ways to parse a cell and get data out. Initially I tried using Jython but didn't get very far; thanks to some help from the Google Refine community I found I could use Refine's GREL language instead. Here's how.

From the new 'record' column dropdown select Edit column > Add column > Add column based on this column. In the dialog that opens set the new column name as firstDate and enter the following GREL expression:

forEach(value.parseHtml().select("dc|date"),v,if(length(v.htmlText())>11,v.htmlText().slice(0,10)+" "+v.htmlText().slice(11,19),"bad format")).sort()[0]

What is happening here is that, within the cell, forEach <dc:date> value we check if it is longer than 11 characters; if so we slice the text to take the first 10 characters (yyyy-mm-dd), add a space, then slice characters 11 to 19 (hh:mm:ss). As the dc:dates are temporarily stored in an array we sort it and take the first ([0]) value, which should be the earliest.

Next we want to turn the date, which is being stored as a string, into a UNIX timestamp (the number of seconds since midnight UTC on 1 January 1970). We need a timestamp as this is the date/time format used by Gource.

To get this we want to add a column based on firstDate. In the 'Add column based on column firstDate' dialog enter the name timestamp, switch the language to Jython (I found this the best for this procedure) and use the expression:

import time

return int(time.mktime(time.strptime(value, '%Y-%m-%d %H:%M:%S')))

This takes the cell value and turns it into a Jython time object by matching the date/time pattern used in the firstDate column. As Jython times are stored as UNIX timestamps we can just return the value to the new cell.

Some basic timestamp validation

Google Refine Facet/Filter

I obviously didn't just start up Refine, drop in the expression from above and get to this point. There was a lot of trial and error, testing assumptions like all the dates being in yyyy-mm-ddTHH:MM:SSZ format, and checking the processed data. For example, if we want to check we've got valid timestamps for all the rows, from the timestamp column dropdown menu we can select Facet > Customized facets > Facet by blank. To filter the blank rows we have to click on include in the Facet/Filter menu on the left hand side (we can also conveniently see that 3616 rows are blank).

Initial visual inspection of the results shows that the status column contains a lot of records marked deleted. From the status column dropdown we can create an additional Facet > Text facet. In the Facet/Filter section we can see that there are 3616 occurrences of the text 'delete', so we can conclude that the blank timestamps are because of deleted records, which we can live with.

Important tip: as we have filtered the data, any additional column operations will only be applied to the filtered rows, so before moving on remove these facets by clicking the little 'x' next to them.

Next let's sort the timestamps to check they are in a reasonable range. Do this by clicking the dropdown on timestamp and using the sort option, sorting the cells as numbers (check both ascending and descending order). You'll notice some of the dates are in 2004; I'm happy with these as Jorum has been going for some time now.

Google Refine - Numeric Histogram

[By turning on the numeric facet for the timestamp column we also get a little histogram which is handy for filtering rows].

Before moving on, make sure timestamp is sorted smallest first.

So we now have a timestamp; next let's extract the resource title.

Extracting a resource title

This is relatively straightforward as each record has a <dc:title>. So from the record column drop-down select Edit column > Add column > Add column based on this column. In the dialog box use GREL, name the new column 'title' and use the following expression:

value.parseHtml().select("dc|title")[0].htmlText()

[Each record only has one <dc:title> so it’s safe to just return the first title we find]

Reconciling who ‘damn’ made these resources

The headache comes from resource creators filling in information about their submission, including information about who made it. This means that there are inconsistencies in how the data is entered: some records use a separate creator field for the institution name, others include it with the person's name, or omit this data altogether. For the visualisation I wanted to resolve each resource against an institutional name rather than an individual or department. Here's how the data was reconciled.

Let's start by extracting all the creators to let us see what we are dealing with. We can do this by again using Edit column > Add column > Add column based on this column from the 'record' column. This time let's call the new column 'creators' and use the following GREL expression:

forEach(value.parseHtml().select("dc|creator"),v,v.htmlText()).join(",")

This will loop forEach <dc:creator>, get the value and store the results as a comma-separated string.

For the early records you'll notice that it's a named person and there is little we can do to reconcile the record against an institution. For the later records you'll see named people and an institutional affiliation. So let's see if we can extract these institutions into their own column.

From the creators column dropdown add a column based on this one, calling it inst_id and using the following GREL expression:

if(contains(value.toLowercase(),"university"),filter(value.toLowercase().split(/[-,.;\(\)]|(\s+for+\s)+/),v,contains(v,"university"))[0].trim(),if(contains(value.toLowercase(),"centre"),value.toLowercase(),""))

What this expression is doing is this: if the value contains the word 'university', the string is split into an array using the symbols –,.;() or the word 'for', and the array element containing 'university' is stored; else if the value contains the word 'centre' the whole value is stored (the OER Programme has projects from universities and HEA Subject Centres). For example, a creator value like 'Smith, John, University of Bath' would yield 'university of bath'.

Some additional refining via faceted filters and edit cells

Google Refine - Blank facet

To refine this data further, go to the new inst_id column, click the dropdown menu and select Facet > Customized facets > Facet by blank. Click on true so that we are just working with the blank inst_ids.

Scrolling through the records we can see some have a creator that begins with 'UKOER,Open Educational Repository in Support of Computer Science'. On the creators column dropdown select 'Text filter' and use 'Open Educational Repository in Support of Computer Science'. With this filter in place we can see there are 669 records. As we are confident these files were submitted as part of the Information and Computer Sciences Subject Centre's work, we can autofill the inst_id column by clicking the dropdown on the inst_id column and selecting Edit cells > Transform. In the expression box enter "Information and Computer Sciences Subject Centre" and click OK.

Google Refine - Cell transformation

Remove the ‘creators’ filter by clicking the small ‘x’ in the top left of the box.

Let's add a new text filter to the record column (you should know how to do this by now) with the word 'university'. This should filter around 878 rows. To make it easier to see what is matching, press Ctrl+F to bring up your browser's find-on-page and look for 'university'.

Moving through the data you’ll see things like:

  • 384 rows can have inst_id’s by using the cell transformation filter(cells["record"].value.parseHtml().select("dc|publisher"),v,contains(v.htmlText().toLowercase(),"university"))[0].htmlText()
  • 89 rows include the term “University of Plymouth” in the dc:description, we can filter and fill these using the subject centre method.
  • 81 rows can have university names pulled from dc:subject using filter(cells["record"].value.parseHtml().select("dc|subject"),v,contains(v.htmlText().toLowercase(),"university"))[0].htmlText()

At this point if we just use the blank inst_id facet we've got 10,262 true (i.e. blank inst_ids) and 9,199 false, so a 47% hit rate … not great! But if we add a 'ukoer' text filter to the record column this improves to 8,433 inst_ids in 9,955 matching rows, which is an 84% hit rate. Whilst this isn't perfect it's probably the best we can do with this data. Next, to turn those institutional id guesses into reusable data.

The real magic: reconciling institutional names against CETIS PROD

So far we’ve tried to extract an institutional origin from various parts of the Jorum data and there is a lot of variation in how those ids are represented. For example, the inst_id column might have ‘the university of nottingham’, ‘university of nottingham’ or even ‘nottingham university’. To make further analysis of the data easier we want to match these variations against a common identifier, in the example above the ‘University of Nottingham’.

Google Refine has some very powerful reconciliation tools to help us do it. More information on Google Refine Reconciliation here.

In the inst_id column select Reconcile > Start reconciling.

Google Refine - Reconciliation

Google Refine has existing Freebase databases, which we could use to match institutional names against database ids, but as we are dealing with JISC/HEA projects it makes more sense to try and reconcile the data against the CETIS PROD database (this opens up further analysis down the line).

Kasabi - Reconciliation url

Fortunately PROD data is mirrored to Kasabi, which includes a Reconciliation API for use with Google Refine. To use this data you need to register with Kasabi and then subscribe to the PROD data by visiting this page and clicking 'Subscribe'. Once subscribed, revisit the previous link, click on the link to the 'experimental API explorer' and copy the url in the blue box, including your apikey, e.g. http://api.kasabi.com/dataset/jisc-cetis-project-directory/apis/reconciliation/search?apikey=aaaaaaaaaaaaakkkkkkkkkkeeeeeyyy

Back in the Google Refine Reconciliation dialog box click on 'Add Standard Service …' and enter the url you just created. Once added, click on the new Reconciliation API, select 'Reconcile against no particular type', then Start Reconciling.

Google Refine - Using Kasabi data

Google Refine - edit cell

Once complete you should hopefully see from the inst_id judgment facet that the majority of names (all but 131) have been matched to PROD data. Filtering on 'none' you can do mass edits on unmatched inst_ids by clicking 'edit' and 'Apply to All Identical Cells'. Once you've done this you can re-run Reconcile > Start reconciling to get additional matches.

Exporting to Gource using a custom template

Almost there, people ;). At the very beginning I mentioned that the visualisation tool Gource has its own input log format, shown below as a refresher:

1275543595|andrew|A|src/main.cpp

Another useful feature of Google Refine is Export Templating, which allows us to control how our data can be written out to a separate file.

In Google Refine make sure you have a text facet on the record column filtering for 'ukoer' and that the inst_id judgment facet is set to 'matched' (this means only this data is included when we export). Now select Export > Templating …. Remove any text in Prefix, Row Separator and Suffix, and in Row Template use:

{{cells["timestamp"].value}}|{{cells["inst_id"].recon.match.name}}|A|{{replace(cells["subject"].value," / ","/")}}/{{if(length(cells["title"].value)>20,cells["title"].value.slice(0,20)+"...",cells["title"].value)}}

This will write the timestamp cell value, then the reconciled name for the inst_id, then the subject value (stripping whitespace between slashes) and the resource title trimmed to 20 characters (with '…' appended if longer).
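For illustration, a made-up row in the resulting log (not taken from the actual export) might look like:

1262304000|University of Nottingham|A|HE - Medicine and Dentistry/Introduction to clin...

which Gource reads as a 'commit' by the University of Nottingham at that timestamp, filed under the subject path.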

Finally, Gource

Google Refine will spit out a .txt file with the formatted data. Before we use it with Gource there is one thing we need to do. Initially I was getting log file format errors in Gource and then discovered it was a .txt file encoding problem. So open your newly created .txt file (which is in UTF-8 format) and File > Save As changing the encoding to ANSI.

Save As ANSI encoding

To test your visualisation, download Gource and extract the files. In the same directory as the extracted files place a copy of your refined log file. To view what you've got, open your command line, navigate to the extracted Gource location and execute:

gource nameoflogfile.txt

The gource site has more instructions on recording videos.

- THE END -

well almost … Here’s:

The bigger picture

This work was undertaken as part of my OER Visualisation work (day 11) and while it's useful to have the Jorum OER snowflake visualisation in the bag, having a refined data source opens up more opportunities to explore and present OER activity in other ways. For example, I immediately have a decent-sized dataset of OER records with subject classification. I've also matched records against PROD data, which means I can further reconcile against project names, locations etc.

Yummy data!


Graphs can be a powerful way to represent relationships between data, but they are also a very abstract concept, which means that they run the danger of meaning something only to the creator of the graph. Often, simply showing the structure of the data says very little about what it actually means, even though it's a perfectly accurate means of representing the data. Everything looks like a graph, but almost nothing should ever be drawn as one. Ben Fry in 'Visualizing Data'

Where's Wally?

I got that quote from Dan Brickley's post Linked Literature, Linked TV – Everything Looks like a Graph, and like Dan I think Ben Fry has it spot on. When I started following Tony's work on network analysis (here's a starting point of posts), my immediate response was 'Where's Wally?': where was I in relation to my peers, who was I connected to, or even who wasn't I connected to?

As I start my exploration of tools like NodeXL it's very clear that being able to filter, probe and wander through the data provides far more insight into what's going on. This is why when I, and I'm sure Tony as well, show our tangled webs they're designed as a teaser to inspire you to follow our recipes and get stuck into the data yourself. This isn't, however, always practical.

A recent example of this was when I was looking through the Guardian's Using social media to enhance student experience seminar #studentexp. I'd captured the #studentexp tagged tweets using my TAGS spreadsheet, used my recipe to get sentiment analysis from ViralHeat and imported the data into NodeXL to start exploring some of the tweets and conversations from the day.

 

But what does this graph actually mean? I could start highlighting parts of the story, but that would be my interpretation of the data. I could give you the NodeXL file to download and look at, but you might not have this software installed or be proficient at using it. I could point you at the raw data in the Google Spreadsheet, but it lacks 'scanability'. So I've come up with a halfway house: a re-useable interface to the TAGS spreadsheet which starts presenting some of the visual story, with interactivity to let you drill down into the data. I give you:

*** TAGSExplorer ***

TAGSExplorer
http://hawksey.info/tagsexplorer/?key=0AqGkLMU9sHmLdDJYMDZYR3FUcnVwWTkwLWpScnFIUXc&sheet=ob7&mentions=true

What is TAGSExplorer?

TAGSExplorer is the result of a couple of days' code bashing (so it's a little rough around the edges). It mainly uses the DataTable part of the Google Visualization API to read data from a TAGS spreadsheet and format it for use with the d3.js graphing library. By chucking in some extra JavaScript/jQuery code (partly taken from johman's Twitter Smash example) I've been able to take the raw Twitter data from the Google Spreadsheet and add back Twitter functionality like reply/retweet by using their Web Intents API.
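To give a rough idea of the approach, here's an illustrative sketch (not the TAGSExplorer source; the spreadsheet key and column positions are assumptions):

google.load("visualization", "1");
google.setOnLoadCallback(function () {
  var key = "YOUR_SPREADSHEET_KEY"; // a published TAGS archive
  var url = "https://spreadsheets.google.com/tq?key=" + key + "&sheet=Archive";
  new google.visualization.Query(url).send(function (response) {
    var data = response.getDataTable();
    var nodes = {}, links = [];
    for (var r = 0; r < data.getNumberOfRows(); r++) {
      var from = data.getValue(r, 0); // from_user column (assumed position)
      var text = data.getValue(r, 1); // tweet text column (assumed position)
      nodes[from] = true;
      var reply = text && text.match(/^@(\w+)/); // a direct reply = solid line
      if (reply) { links.push({ source: from, target: reply[1] }); }
    }
    // hand nodes/links to d3.layout.force() to draw the graph …
  });
});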

What is displayed:

  • A node for each Twitterer who used the #studentexp hashtag and is stored in the spreadsheet archive.
  • Solid lines between nodes are conversations, e.g. @ernestopriego tweeted "@easegill I agree completely. Learning how to use social media tools is part of digital literacy and fluency; part of education. #studentexp", creating a connection between @ernestopriego and @easegill.
  • Dotted lines are not direct replies but mentions, e.g. @theREALwikiman tweeted "If you're an academic librarian it might be worth following @GdnHigherEd's #studentexp tag right now, if you have time. Interesting stuff." For performance these are turned off by default but can be enabled by following the instructions below.
  • Node text size is based on the number of @replies and @mentions

How to make your own?

  1. If you haven’t already you need to capture some tweets into a TAGS spreadsheet
  2. When you have some data, from the spreadsheet select File > Publish to the web …
  3. Head over to TAGSExplorer and enter your spreadsheet key (or just paste the entire spreadsheet url – HT to Tony Hirst for this code)
  4. Click ‘get sheet names’ and select the sheet of the data you want to use (if you are doing a continuous collection the default is archive)
  5. Click ‘go’
  6. If you want to share with others, click the ‘link for this’ at the top right which gives you a permanent url – the permanent link also hides the spreadsheet selection interface. By default mention lines are off but can be enabled by adding &mentions=true to the link (see example above)

Some examples

If you don't have your own data yet, here are some examples from data I've already collected:

Where next?

I’ve got some ideas, I’m interested in integrating the sentiment scores from ViralHeat, but more importantly where do you think I should go next with this?


Back in April 2009 I posted Evernote – a personal e-portfolio solution for students?. In the post I highlighted how the features of this young start-up potentially made it a nice solution for a FREE 'personal' e-portfolio (that is, removed from the shackles of institutionally bought systems). At the time though I did point out some potential shortcomings:

  • lack of a mobile application for non-iPhone/iPod Touch and Windows Mobile users
  • no easy way to privately share assets
  • notes are stored in a proprietary Evernote format
  • the limit of only being able to upload PDF documents with the basic free service

Over time these original issues have been whittled down.

Mobile - In May 2009 it was announced Evernote for BlackBerry Is Here and then in December Evernote for Android: It's here!, and there have been numerous software updates and enhancements for tablet devices as they have come along.

Sharing – From January 2010 there have been several updates adding note sharing to the Mac, web, Windows and mobile apps. Sharing isn't done privately, instead using 'security by obscurity' (publicly available notes accessed via an obscure url). Update: Oops. You'll see from the comment below that it is possible to share notebooks privately. From the sharing knowledge base:

Evernote allows both free and premium users to share notebooks privately with other Evernote users. Notebooks shared by premium users have the option of being editable by the users with whom the notebook is shared. In other words, if Bob the premium user shares a notebook with Fred the free user, Bob may choose to allow Fred to edit the contents of his shared notebook.

Export – When I started presenting Evernote as a personal e-portfolio system back in 2009, one of the questions I usually got asked was how a student could back up or export notes stored on Evernote servers. At the time the desktop clients for Mac and Windows, which synchronise with Evernote so that you always have a local and remote copy of your files, could export your notes in a proprietary XML format. This meant you could import them into another Evernote account, but that was it. Evernote, however, started rolling out HTML export for single or batches of notes, starting with Mac (May 2009) and eventually getting around to Windows (November 2010).

File types – Back in April this was the deal breaker for me. With the free account you could only upload text, image, audio and PDF files. Having a place to also back up Word documents and other electronic resources, as well as making these searchable, was the one thing I thought would put most tutors off suggesting Evernote as a tool for their students. Fortunately this month (September 2011) Evernote announced that they had Removed File Type Restrictions for Free Accounts.

So what’s left? Will you be recommending Evernote to your students?

PS Here's a collection of links from Purdue University on Evernote in Education and not surprisingly Evernote themselves ran an Evernote in Education Series.

PPS I recently downloaded the free Android app Droid Scan Lite which lets me snap and reshape pics of docs, which I can then share to Evernote as a JPEG (Evernote OCRs images to make them searchable ;)



Recently I posted a Google Spreadsheet of all the live JISC funded project websites from the last 3 years. Not too long before that I also posted Google Spreadsheets as a lean mean social bookmark/share counting machine, which used Google Apps Script to query different social network providers for share/like counts for a specified url.

I thought it would be interesting to combine the two and see which JISC funded projects have been trickling through various social networks (the social engagement monitoring service PostRank did something similar with the TEDTalks, but since their purchase by Google the API, which would have made this a lot easier, has been closed).

My starting point was the PIMS (2nd Pass) spreadsheet (I chose not to use PROD as I don't think it has all of the JISC funded projects – someone correct me if I'm wrong). I could have inserted cell formulas for the custom Apps Script functions like getFacebookLike(), but as I mentioned in that post you can only use 100 of these before hitting timeouts, and with over 400 project websites it's not straightforward.

The solution was twofold. Firstly, use Google Apps Script to iterate across the website urls, fetching the results and recording them in the appropriate cells as values. This saves the spreadsheet having to fetch the responses each time it is opened.

The second part of the solution was linked to this. As I was going to record values rather than use live data, it made sense to try and aggregate the calls to individual social network services to avoid hitting urlfetch limits (I reckon you get about 400 of these a day?). As I mentioned in the original bookmark counting post, I'd come across Yahel Carmon's Shared Count API, which lets you make one request and get a bunch of stats back for that url.

So here’s the code I used and the resulting Google Spreadsheet of JISC Project Social Favourites.
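If you'd rather not dig through the linked script, here's a minimal sketch of the batch-collection idea. It isn't the exact code in the spreadsheet: the sheet name, column layout, SharedCount endpoint and response field names are all assumptions.

function collectShareCounts() {
  var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("Sites");
  var urls = sheet.getRange(2, 1, sheet.getLastRow() - 1, 1).getValues();
  for (var i = 0; i < urls.length; i++) {
    var site = urls[i][0];
    if (!site) continue;
    try {
      var resp = UrlFetchApp.fetch(
        "http://api.sharedcount.com/?url=" + encodeURIComponent(site));
      var counts = JSON.parse(resp.getContentText());
      // write values (not live formulas) next to each url;
      // field names here are from memory and may differ
      sheet.getRange(i + 2, 2, 1, 3).setValues([[
        counts.Twitter || 0,
        (counts.Facebook && counts.Facebook.like_count) || 0,
        counts.LinkedIn || 0
      ]]);
    } catch (e) {
      // record fetch errors (like the 500s mentioned below) and move on
      sheet.getRange(i + 2, 5).setValue("error: " + e);
    }
  }
}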

The most engaging JISC Project is…

And here is where the arguments start. The more accurate description is 'the most engaging JISC funded project website index page is…', and even then there is the caveat of including established websites only part-funded by JISC. This also excludes all the blog posts, wiki pages, supplemental pages and repository submissions generated by JISC projects, not forgetting other forms of engagement like other people writing about and linking back to JISC project websites/resources. The list goes on. Sheila MacNeill at JISC CETIS has already posted some thoughts in Socially favoured projects, real measures of engagement?.

So is there any value in this data? I'll let you decide. The important part for me was the process. I now have a method for returning social bookmark/share counts for a bunch of websites and a framework using Google Apps Script to start automatically adding urls and collecting data.

Problems encountered

So if the process was more important for me, what did I learn along the way?

Shared Count API didn’t like some of the urls

For some reason the Shared Count API spat out the following 2 urls with 500 server errors.

I don’t know why it did this but my quick fix was to use the custom getFacebookLike formulas for these entries.

Twitter counts aren’t reliable

Twitter only recently started providing their own share/count button, and as a result polling the official data isn't always accurate. A separate service which has been monitoring the links people tweet for a lot longer is Topsy, and fortunately for us Topsy have an API to pull similar data (these 3rd party APIs are becoming more scarce as the big boys buy up services and switch off APIs – I'm sure it will be a case that Topsy's API will disappear soon as well :( )

An example of the difference is the ticTOCs website, which Twitter only thinks has been tweeted 30 times, but Topsy has 96 hits (the other advantage of Topsy is I can see what people said about ticTOCs – this data is also available via their API so I may be revisiting this source). When calculating 'total engagement' I took the maximum value between Twitter and Topsy (more about the total further down the page).

[As Topsy results aren't included in the Shared Count API I grabbed these separately using the getTopsyCount function documented in my other bunch of bookmark/share code snippets]

Hitting Apps Script urlFetch limits

Even using the Shared Count API (plus Topsy calls) I hit Apps Script urlFetch quota limits (I haven't seen this documented anywhere but I'm guessing it's between 400 and 500). To get around this I shared the Spreadsheet and Script with another one of my Google accounts and was able to continue.

Stumble trip, stumble trip

I collected StumbleUpon stats mainly because they were part of the Shared Count API data, but unlike the other service details these are views rather than share counts so I didn’t include them in the totals as it’s a bit apple and pear-ish.

Buzz off

Buzz, Google's second… no third… fourth(?) attempt at social networking, is being eclipsed by Google+, but if like me you switched it on to automatically push updates from other services, Buzz counts potentially have a lot of noise in them.

For example, JISC funded projects which are hosted on Google Code (like Shuffl and meAggregator) end up with large Buzz counts (I'm guessing each code commit generates a buzz) and not much other social activity. In the spreadsheet, when I totalled the different service counts I also included a column excluding the Buzz counts.

Is a Like more valuable than a Tweet

This brings me back to some of the questions around what this all means. I've already written/presented about how, for services like Eventbrite, there is more dollar value in a customer using a Facebook Like than tweeting event information. So should a Facebook Like get more weighting than a tweet?

Where next

Umm, not sure, but if anyone has a collection of interesting urls they'd like a spreadsheet of social counts for, get in touch ;)


Update: The Social Graph API has been deprecated by Google and Protovis is no longer being developed. I've revisited this recipe using new tools in Deprecated Bye-bye Protovis Twitter Community Visualizer: Hello D3 Twitter Community Visualizer with EDGESExplorer Gadget

RT @psychemedia: How do folk who send you twitter messages connect? http://bit.ly/dNoKGK < see address bar (this is depressingly good)

Was what I tweeted on the 13th April 2011. What Tony had managed to do was use the Protovis JavaScript library to let people make their own network visualizations for Twitter search terms (if you haven't seen Tony's other work on hashtag communities, mainly using Gephi, it's well worth a read). The concept for the latest iteration was explained in more detail by Tony in Using Protovis to Visualise Connections Between People Tweeting a Particular Term.

Two limitations of this solution are: it relies on a current Twitter search which will disappear after 7 days; and it’s difficult to embed elsewhere. Tony and I had discussed collecting the data in a Google Spreadsheet using one of my many Twitter/Spreadsheet mashups and then draw the network visualization from the data.

I thought I would go one step further, not only collecting the data in the Spreadsheet but also generating the Protovis visualization in the sheet by making a gadget for it. The reason for going down the gadget route is that in theory gadgets provide an easy way to embed the visualization in other webpages and web apps.

This wouldn't be my first foray into gadgets, having already done My first Google Gadget to embed a flickr photoset with Galleria navigation, and I already knew that gadgetizing Tony's original code just needed some XML wrapping and a dash of the Google Visualization API. In fact, because I used Google Apps Script to collect the data there was very little to do with Tony's code, as both use JavaScript syntax.

So here it is the:

*** Protovis Twitter Community Visualizer ***
[If the link above doesn't work open this version and File > Make a copy (make sure you are signed in)]

and here is some output from it (if you are reading this in an RSS aggregator you'll need to visit this post). Update: Turns out the Protovis library doesn't work with IE so you'll just have to use a proper browser instead:

PS you can interact with the diagram by click-dragging nodes, using your mouse scroll to zoom and click-dragging empty parts to pan.

Life is so much easier when you stand on the shoulders of giants ;)

Friday (20th May) was our Open for Education event. There was a real buzz as over 100 delegates squeezed into the NeSC to absorb a packed programme of open and free stuff. Once we get the videos from the event up I should do a separate post to highlight some of the best bits. In the meantime, below are the video and workshop handout from my App, App and Away workshop. I'm already working on version 2 for the e-Assessment Scotland Conference on the 26th August.

Handout

This guide was written to support the App, App and Away workshop delivered on the 20th May 2011 as part of the Open for Education event (unless otherwise stated it is available under CC-BY-SA). Shortlink: http://bit.ly/appappaway.

1. Background

Some more background on Google Docs has been collected by EdTechTeam (CC-BY-SA 3.0): http://www.edtechteam.com/workshops/2011-01-14

1.1 What is Google Docs?

1.2 Interactive Overview (with Links to Help Pages):

2. The new glue: Google Apps Script

  • Google service to allow easy customisation of Google products and 3rd party services
  • A bit like macros but much more
  • Written using a JavaScript syntax but run on Google servers
  • Not just for the coders

Google Apps Script is a JavaScript cloud scripting language that provides easy ways to automate tasks across Google products and third party services.
With Google Apps Script you can:

  • Automate repetitive business processes (e.g. expense approvals, time-sheet tracking, ticket management, order fulfillment, and much more)
  • Link Google products with third party services (e.g. send custom emails and a calendar invitation to a list from a MySQL database)
  • Create custom spreadsheet functions
  • New! Build and collect user inputs through rich graphics interfaces and menus (e.g. a company could power an internal application for purchasing office supplies where users could shop via a customized menu interface)

http://code.google.com/googleapps/appsscript/
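As a flavour of the 'custom spreadsheet functions' point above, here's a small hypothetical example (not part of the workshop materials): paste it into the Script Editor and you can then use =WORDCOUNT(A1) in any cell.

function WORDCOUNT(text) {
  // count the words in a cell value passed in from the spreadsheet
  if (!text) return 0;
  return String(text).trim().split(/\s+/).length;
}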

2.1 Key resources

2.2 What you can interact with

Google Apps Script includes objects and methods for controlling data in the following applications.

  • Google Spreadsheets
  • Google Documents - NEW
  • Gmail Contacts
  • Google Calendar
  • Google Sites
  • Google Maps
  • Gmail - NEW
  • More ...

3. Survey Form Admin/Just in Time Teaching

Before joining this workshop I sent you all a link to a survey. This was sent using a Google Apps Script. Let's look at your responses and how it was done.

4. Automated grading using Flubaroo

Made by Dave Abouav, a Google employee, in his 20% time. Flubaroo uses Google Spreadsheets/Forms and Apps Script to automatically grade quizzes. http://www.flubaroo.com/

Aim
Create a form that can be used as a quiz. Responses to the quiz are aggregated to give an overview of class performance and students receive personalised feedback regarding their performance.

Activity

  1. Create a new spreadsheet
  2. Insert > Script then find Flubaroo in Education section
  3. After a couple of seconds accept authorize
  4. Tools > Form > Create Form
  5. Create your form including name and email fields if you want to send results. You can use any question types you like as long as the student can exactly match the correct answer
  6. From Form > Go to live form, fill it in yourself with the correct responses, before sending the link to students
  7. Once the quiz closes go to Flubaroo > Grade quiz and identify the response with the correct answers
  8. Once graded you can then go to Flubaroo > Email grades (you can provide additional feedback by adding text to the correct responses)

5. Creating custom interfaces to Google Apps

Part of the Google Apps Script service allows you to create custom interfaces (UI Services). An example of this was the dialog boxes in Flubaroo. These were all written using Apps Script and, as well as allowing user input, can include any information accessible to Google Apps Script (other Google services and 3rd party information).
This example from Simple Apps Solutions shows the degree of control you have in terms of customising layout. Until recently this all had to be manually coded, but there is now an online interface designer.
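To give a rough idea of what UI Services code looks like, here's a small hypothetical snippet (not the Flubaroo or Simple Apps Solutions code): it pops up a dialog in a spreadsheet with a text box and a button wired to a server-side handler.

function showPanel() {
  var app = UiApp.createApplication().setTitle('Quick note');
  var panel = app.createVerticalPanel();
  var note = app.createTextBox().setName('note');
  var handler = app.createServerHandler('saveNote').addCallbackElement(panel);
  panel.add(note).add(app.createButton('Save', handler));
  app.add(panel);
  SpreadsheetApp.getActiveSpreadsheet().show(app);
}

function saveNote(e) {
  // write the submitted text into the active sheet, then close the dialog
  SpreadsheetApp.getActiveSheet().appendRow([new Date(), e.parameter.note]);
  return UiApp.getActiveApplication().close();
}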

5.1 Turning Google Spreadsheets into a personal or group bookmarking service

Aim
Create an interface to Google Spreadsheet which allows you to create a Delicious style bookmarking service.

Activity

  1. Make a copy of this spreadsheet (File > Make a copy)
  2. Click on Tools > Script editor, then select Run > setup (you will need to do this twice)
  3. While still in the Script Editor select Share > Publish as service.

- If you want to be the only one to add bookmarks choose 'Allow only myself to invoke this service'

- Enable Service

- Copy the URL and paste it into cell A8 of the Readme sheet

  4. Make a custom bookmark in your browser using the code provided (javascript: ...) as the url (I’ll talk you through this. Basic instructions for Internet Explorer and Firefox)
  5. Start bookmarking stuff

Example

6. Triggers

Google Apps Scripts can be run in three ways:

  • by the user
  • time-driven
  • on event (on form submit, on open etc.)

The big advantage of automated triggers is processes can be run in the background without the need for the Spreadsheet or Site being opened by the owner.
Example

  • Archive Tweets to a Spreadsheet – uses time-driven triggers to pull search results from Twitter and store them in a Spreadsheet (a potential use might be to archive class tweets)
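To give a flavour of the time-driven option, triggers can also be created from code. Here's a hypothetical snippet (the function name collectTweets is an assumption) that would run a function every hour without the spreadsheet being open:

function setupTrigger() {
  ScriptApp.newTrigger('collectTweets')
    .timeBased()
    .everyHours(1)
    .create();
}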

7. Other opportunities: Uploading files

Aim
Create a custom system to allow students to make online submissions of their work.
Activity

  1. Open Google Sites
  2. Create new site
  3. Choose Blank template and enter a site name and url
  4. Then select ‘More actions’ > Manage site (right hand side of the page)
  5. Select Apps Script > Add new script
  6. In the window that opens select File > Add script from gallery …
  7. Select Education category from the Script Gallery
  8. Scroll to find Submit Assignment in Google Sites and click Install, Authorize then click close
  9. Back in the Script Editor select File > Open, opening Submit Assignment in Google Sites
  10. In line 22 change the folder name value to something else e.g. var folderName = "Assignments"; then save and close this window
  11. Open Google Docs in a new window and Create new > Collection using the same name used in step 10 e.g. Assignments
  12. Back in Google Sites click on Return to site then Edit page
  13. With the cursor in the main section of the page select Insert > Apps Script Gadget
  14. Select Submit Assignment in Google Sites and click select
  15. Set the permissions and click Save, then Save again (top right)

You can now test submitting an assignment by selecting a file and clicking 'Submit Assignment'. The file will be uploaded to your Google Docs under the collection name you used. This example could be extended to include other form elements. For example, you could incorporate textareas to create a pro-forma for the student to fill in with their submission. Pro-forma questions might be 'What grade do you think you'll get?', 'What are the strong areas of your submission?' etc.

8. Issues

Google Apps Script is an evolving product and new features are regularly being added. There are a couple of issues to be aware of before using Apps Script:

  • Enterprise level deployment - once in use, Apps Scripts can't be automatically updated
  • Relying on ‘the cloud’ - need to be online to edit/use
  • Consuming Apps Scripts is not always straightforward - certain scripts need the user to configure them manually (e.g. publish as a service, set triggers)

9. Other useful links/resources


For the JISC Winter Fayre I was asked to fill in for a last-minute drop out. My only brief was that the title – though not necessarily the content – should be a reworking of that shown in the programme: 'CREATE, Reach and Engage'. Following recent conversations/presentations with/from Tony Hirst and Pauline Randall, I already had some ideas floating around about 'search' and 'recommendation' and their potential effect on course discovery and enrolment. The crystallisation of these ideas came together in my presentation: 'Cost, Reach and Engagement'.

Here’s the slidecast:

If you prefer to read rather than listen, here’s an overview of what I said (incorporating some new material towards the end of this post, with a survey of RSC Scot N&E supported institution websites and … my recommendations for what you might want to do):

In the beginning

The tools for Internet search actually predate the web itself. Tools like Archie could extract information from file servers, generating searchable indexes of stuff. At around the same time, directories of websites also emerged. Some of these were curated lists, others automatically generated, or even a hybrid of both.

A big turning point in web search was the increasing use of algorithms to rank the relevancy of results. Google's approach, built around its PageRank method, has arguably received most of the recent attention, using a wide range of factors including the number of inbound links, click-throughs and even page-load speed to rank search results.

More details of the specifics of this can be found on the Wikipedia page on the history of search engines.

Recommendation: trusted and crowdsourced

Recommendation is an incredibly powerful way to influence action. It’s even more powerful when it comes from a trusted source. Personal recommendations are probably the most powerful, people being more likely to accept a recommendation from a friend than a stranger. Other forms of recommendation include advice from an independent source like the consumer protection site ‘Which?’, and more recently ‘crowdsourced’ reviews which are commonplace on sites like Amazon and are at the core of sites like laterooms.com, where trust is replaced by volume.

Another way to receive recommendations is through social networking sites like Facebook, Twitter and LinkedIn. In some cases these recommendations are explicit – LinkedIn has an option to recommend directly to your colleagues – but they can also be implied: 'I liked this, so might you'.

Recommendation has always been and continues to be an important part of how businesses and institutions market themselves, but what is the value of recommendations made via social networks?

The value of Email, Share, Tweet and Like

Share buttons

I'm sure you've seen these four buttons appearing on various websites, including this blog. These buttons send out notifications via your social networks (if you are enrolled) to your followers. Media sites like the BBC use them mainly to get you to share news stories around your networks. But it doesn't end there: manufacturers and utility companies are also using these types of buttons to get you to do their marketing for them – to make a recommendation about their product or service to your network.

Image of Email, Share, Tweet and Like buttons

Is there any value in this type of recommendation? Fortunately, online event promotion and administration site Eventbrite has revealed the value of those four little buttons. Eventbrite make money by charging a booking fee for paid-for events (2.5% of ticket value + $0.99 per ticket, with a maximum fee of $9.95), and as TechCrunch revealed in October 2010: For Eventbrite, Each Facebook Share Is Worth $2.52. Update: Revised figures have been published by Mashable in Facebook "Likes" More Profitable Than Tweets [STUDY].

$2.52 is the average return to Eventbrite each time someone clicks the Facebook 'Like' button! The second best return is email, with an average of $2.34 per click, followed by LinkedIn ($0.90) and lastly Twitter ($0.43). So assuming that the majority of paid-for events hit the maximum booking fee (roughly $10), someone clicking the Facebook 'Like' button has about a 1 in 4 chance of getting someone else to buy a ticket.

Quickly looking at what I think might be happening here: email is a highly trusted recommendation source but is usually a one-to-one distribution. Facebook is a less trusted source as your network can be diluted to a degree, but clicking the ‘Like’ button makes it visible to your network (one-to-many). Networks like LinkedIn and Twitter probably have less social cohesion, Twitter in particular will have more accounts designed to market businesses and brands, so while they are potentially bigger networks they aren’t as trusted.

How do I ‘like’ your course?

So for Eventbrite there is a demonstrable value in incorporating these buttons into its service, providing a mechanism for people to easily recommend events to their friends, thereby generating sales. So are institutions missing a trick? If I went to your institution’s website is there an easy way for me to recommend courses to my friends?

 

Example share button (AddToAny)

I’ve carried out a quick survey of institutions supported by our RSC and below is a table of the results. As can be seen, whilst the majority of them have a social media presence, only a minority have implemented a share button within their course information, and these are generally pushed inconspicuously to the page footer or sidebar. In a number of cases, even where there was a share option, bad meta tagging of the page name (which these buttons often use to classify what is being shared) meant that what was being shared was often meaningless. (As shown in the table, AddToAny and AddThis are share/bookmarking services which provide widgets for your website offering a collection of social media sites for the user to choose from when clicked.)

Survey of social media presence and course recommendation buttons for institutions supported by JISC RSC Scotland North & East
| Institution | Social Media Presence | Course Like/Share Buttons | Prospectus Like/Share Buttons |
| --- | --- | --- | --- |
| Aberdeen | None | None | None |
| Adam Smith | None | None | None |
| Angus College | Facebook, Twitter | Email | Via Scribd |
| Banff & Buchan | Facebook, Twitter | None | None |
| Borders | Facebook, Twitter | None | None |
| Carnegie | Facebook, Twitter | AddThis * ** | Via Issuu |
| Dundee | None | Facebook, Reddit, Digg, StumbleUpon, Delicious * *** | Buttons in footer |
| Edinburgh's Telford | Facebook, Twitter | None | None |
| Elmwood | Twitter | None | None |
| Forth Valley | Facebook | AddThis *** | None |
| Inverness College | None | None | None |
| Jewel & Esk | Facebook, Twitter | None | None |
| Lews Castle College | None | None | None |
| Moray College | None | None | None |
| Newbattle Abbey | None | AddToAny | None |
| North Highland | None | None | None |
| Orkney | Twitter | None | None |
| Oatridge | Facebook | None | None |
| Perth College | Facebook | None | Zmags |
| Sabhal Mòr Ostaig | None | AddThis *** | None |
| Shetland | Facebook, Twitter | None | None |
| Stevenson | Facebook, Twitter | None | None |
| West Lothian | None | None | None |
| Edinburgh College of Art | None | AddThis *** | AddThis *** |
| Queen Margaret University | Facebook, Twitter | AddThis * ** | Yudu |
| Scottish Agricultural College | Facebook, Twitter | Delicious, Digg, Facebook, Reddit, StumbleUpon *** | None |
| University of the Highlands and Islands | Facebook, Twitter, LinkedIn | None | None |
Notes
* Page <title> doesn't reflect page content - remains static
** Buttons in sidebar
*** Buttons in the footer

Surveyed 14th March 2011 – Data available in this Google Doc
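On the meta tagging point (the rows marked * above): most share widgets fall back on the page <title> and any Open Graph tags to describe what is being shared, so a static, site-wide title makes the shared item meaningless. Here’s a minimal sketch of what a course page’s head might contain; the college, course and URLs are made-up placeholders, not taken from any of the surveyed sites.

```html
<!-- Hypothetical course page head for "HNC Computing" at the made-up
     example-college.ac.uk (placeholder, not one of the surveyed institutions) -->
<head>
  <!-- A descriptive, per-page title: AddToAny/AddThis and most share buttons
       pick this up by default, so a static site-wide title makes the shared
       item meaningless -->
  <title>HNC Computing - Example College</title>

  <!-- Open Graph tags tell Facebook (and other services that read them) exactly
       which title, description, canonical URL and image to show when the page
       is liked or shared -->
  <meta property="og:title" content="HNC Computing - Example College" />
  <meta property="og:description" content="One-year full-time HNC in Computing." />
  <meta property="og:url" content="http://www.example-college.ac.uk/courses/hnc-computing" />
  <meta property="og:image" content="http://www.example-college.ac.uk/images/hnc-computing.jpg" />
</head>
```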

My Recommendations

  • Go for full buttons and make them prominent

For reference, I’m talking about the course/prospectus parts of your website. For other parts of your site you might prefer the more subtle AddToAny/AddThis et al. widgets, but for selling/promoting your courses I think you have to be more brazen about it. The institutions that did have social media share buttons on their sites had them hidden away in the footer or sidebar. To maximise the chance of them being clicked I would place the buttons prominently next to the course title or at the end of the entry. Because sites like Facebook and Twitter want you to share information around their networks (it’s precious data for them to target their own marketing), they all provide easy ways to incorporate their buttons. Here’s the page for creating Facebook ‘Like’ buttons and here’s the page for Twitter’s Tweet button; a rough sketch of the sort of markup they generate is shown below.
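To give a feel for how little is involved, here’s a sketch of the kind of markup the Facebook and Twitter button builders produce, placed right next to a course title. The course URL and text are placeholders, and the exact parameters the builders emit for you may differ.

```html
<!-- Share buttons placed next to the course title rather than buried in the footer.
     URLs and text are placeholders for a made-up course page. -->
<h1>HNC Computing</h1>

<!-- Facebook Like button (iframe version from Facebook's button builder);
     href is the URL-encoded address of the course page being liked -->
<iframe src="https://www.facebook.com/plugins/like.php?href=http%3A%2F%2Fwww.example-college.ac.uk%2Fcourses%2Fhnc-computing&amp;layout=button_count"
        scrolling="no" frameborder="0"
        style="border:none; overflow:hidden; width:120px; height:21px"></iframe>

<!-- Tweet button: widgets.js progressively enhances this plain link -->
<a href="https://twitter.com/share" class="twitter-share-button"
   data-url="http://www.example-college.ac.uk/courses/hnc-computing"
   data-text="HNC Computing at Example College">Tweet</a>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
```

Because the href/data-url come from the page itself, dropping this into your course page template means every course automatically gets its own buttons pointing at the right address.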

  • Deuce (Email/Facebook Like) or trips (Email, Tweet, Facebook Like)

Right now I think there are two clear options for education in terms of button choices. The one you go for is probably dependent on your institution’s existing social media presence. For example, if you don’t use Twitter extensively in your social media strategy you probably don’t want it as one of your share buttons, as it’s harder for you to track comments. With regard to email, there are various options for sending via a webpage. The option I’ve gone for on this blog is the ShareThis Email chicklet, mainly because their popup window can pull contact email addresses from Google and Yahoo. [You’ll notice I don’t use ShareThis for my other buttons. This is because their code requires an additional layer to get to the Facebook and Twitter pages.]

At this point you might be asking why I include other share/bookmarking options on my blog. The decision to include other services is partly informed by the CMO’s Guide to the Social Media Landscape, which I picked up from Mashable’s article Which Social Sites Are Best for Which Marketing Outcomes? [INFOGRAPHIC]

  • Get some insight – Facebook Insights, ShareThis

So you’ve invested a day or so implementing share/recommendation buttons in your course catalogue. How do you monitor their use before sending that memo to senior management arguing for more website development money, now that you’ve attracted students from around the world to study at your institution? I imagine most of you already use some basic analytics to monitor page performance. Well, similar tools exist for Facebook, Twitter and, if you use it, the ShareThis email button.

Twitter’s official analytics service has been announced but isn’t yet available for general use; fear not, though, as there are a whole host of 3rd-party Twitter analytics tools (Crowdbooster and TwitSprout are my current favourites). More impressive is Facebook’s Insights for Websites, which not only gives you an overview of how many clicks your Like buttons are getting but also includes demographic information on age, gender, language and country (more information on this in Real-Time Analytics For Social Plugins).
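One setup note on Insights for Websites: as far as I recall, you first have to ‘claim’ your domain by adding a meta tag linking it to your Facebook account (or app) and then registering the domain from the Insights dashboard. A sketch with placeholder IDs is below; check Facebook’s documentation for the exact current requirements.

```html
<!-- Claiming a domain for Facebook Insights for Websites.
     The IDs are placeholders; use your own Facebook user ID or app ID. -->
<meta property="fb:admins" content="YOUR_FACEBOOK_USER_ID" />
<!-- or, if the site is managed through a Facebook application: -->
<meta property="fb:app_id" content="YOUR_APP_ID" />
```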


So hopefully some Like buttons are going to start popping up at our supported colleges and universities (and if you’d like help or further advice on how to do this get in touch).

One final reflection is that this post began by looking at the history of Web search. That history continues to be written. Google’s recognition that recommendation through social networks is a very powerful way to leverage content is highly significant. Why rely on machine recommendation when your friends can do it for you? This is why Google recently announced that its search results will include data based on the indirect recommendations of friends (See An update to Google Social Search). Not only does this create an opportunity to improve search relevance, but it is another reason for including Like/Share buttons. If it is difficult for someone to share your course with their friends, potentially there is a negative secondary effect which means it might not be included in Google’s socially-enabled search results.

Final finally, would you recommend or share this post with your network ;)

7 Comments

JISC recently announced the funding call for Grant 04/08: Learning and teaching innovation (LTIG). These are small, one-year projects of up to £50k, giving institutions the opportunity to explore work supporting teaching and learning at the more innovative/high-risk end of the spectrum. This is the 6th call for this particular type of funding, and the lighter-weight application process potentially makes it more appealing to those who have not previously applied for external project funding.

I’ve helped to evaluate bids for round 5 of this programme and for a variation of the call for Celtic FE colleges called SWaNi. This has given me some useful insight into the evaluation process, and I thought you might all like some insider tips. There is lots of general guidance and advice on writing bids; in this post everything I suggest is specifically targeted at your LTIG proposal.

For this post I’m also going to assume you’ve got some of the basics covered, like reading the Call for initial proposals doc and checking your institution is eligible to bid. In Scotland this is made a little easier because ANY COLLEGE or university funded by the SFC can apply for funds. I highlight colleges because, whilst this is a competitive call (the last 3 calls have had 67/68 proposals, funding 5 projects), I’m sure you can use the FE angle to your benefit, presenting JISC with an opportunity to fund innovation in a sector that is arguably often overlooked.

So to start with I’m going to highlight some general philosophies I think you should have in mind when preparing your bid, before looking at each of the main sections of the Annex D – Learning and Teaching Innovation Grants Proposal Template.

Openness

JISC supports unrestricted access to the published output of publicly-funded research and wishes to encourage open access to research outputs to ensure that the fruits of UK research are made more widely available - LTIG6grant.doc Para B17

I would suggest that you shouldn't see openness as a burden, but as an opportunity to strengthen your bid. There are a number of ways you can do this, and resulting benefits:

  • Open Bid Writing. Joss Winn at the University of Lincoln is a strong advocate of open bid writing. Putting together your bid in an open domain is an opportunity to gather evidence of a need for your project; it’s also an opportunity to crowdsource content for your bid.
  • Making your project sustainable. Creating an open project increases the opportunity for sustainability beyond the funding period. For example, if you are developing any software tools, building a community around their development from the very beginning increases the chance of greater adoption and continued development. If you are doing any software development, contact JISC OSS Watch for advice before you submit your bid; their feedback can be used to strengthen your proposal.

Usefulness/re-usability

proposals will be expected to demonstrate: that they have a potential to be a benefit to the whole JISC community [and] the potential to be scalable and replicable - LTIG6grant.doc Para 14

Often in unsuccessful LTIG proposals there is a tendency to focus purely on local benefits, or for the work to be carried out solely within institutional walls. More than ever, projects need to be explicitly linked to the bigger picture and address real-world needs. So instead of ‘we will be addressing retention on this particular course after students identified it as a problem in a small-scale survey’ you should use something like ‘the Quality Assurance Agency (QAA) (2008) Outcomes from Institutional Audit: Progression and Completion Statistics (Second series: Sharing good practice) identified that …’.

The other thing to consider is interoperability and standards. JISC are more likely to shy away from a project which is deeply entrenched in bespoke institutional systems and not reusable by others.

Something to bear in mind is that there is practically a standard for everything. If you are in doubt, contact JISC CETIS, whose middle name is ‘interoperability’, and again, if you contact them, mention this in your proposal (if I read anything about ePortfolios it has to mention LEAP2A; for course information, XCRI).

Dissemination/community engagement

The institution and its partners must commit to disseminating and sharing learning from the project throughout the community. LTIG6grant.doc Para B26

Most of the proposals I see include something about a website for dissemination, occasionally ‘a blog will be updated’. The danger with statements like these is that they get lost because all the other bids are saying exactly the same thing. I’d include a strategy for making dissemination more two-way. For example, as part of the JISC-funded enhancement of the Twapper Keeper service, several existing blogs were used to gather user ideas (e.g. here and here). The value of face-to-face shouldn’t be overlooked either. For the EVAF4ALL project they arranged for a meeting of ‘experts’ to come together and share ideas at a project start-up meeting (an idea might be to piggyback on any special interest group meetings, HEA or RSC networks). Whilst mentioning dissemination, it’s worth noting you should avoid end-loading it.

Student voice

If you do anything student-facing, make sure students are at the centre of the process. Holding a couple of student focus groups is no longer enough; you need to incorporate their expertise and knowledge into your project. My favourite quote to illustrate this is from Mayes (2007), referencing Etienne Wenger’s work:

Wenger describes how radical doctors are trying to describe a new paradigm for the doctor-patient relationship, where a consultation is re-conceptualised as a dialogue between two experts – one, the doctor, being expert in the generic medical science, while the other, the patient, is expert in his or her own case – medical and lifestyle history, symptoms etc. Both kinds of expertise are necessary for a successful diagnosis and agreed treatment regime and should be arrived at through a dialogue between equals – a horizontal relationship in which responsibility for outcomes is shared – Mayes (2007)

[Remember IMDB, Facebook and many other products were developed by students]

Bidding Template Breakdown

So with these general project philosophies in mind, on to the bidding template. When writing your bid, keep looking at the evaluation criteria as laid out in LTIG6grant.doc Para 20. You must also adhere to the word limits, or your bid will be immediately discounted.

10. What is the issue, problem or user need that your proposed project is addressing?

A good place to start looking for evidence is the HEA EvidenceNet, which is “the place to come to find current evidence relating to teaching and learning in higher education”. As well as their main site it’s worth browsing the EvidenceNetWiki, which is a useful way to identify some of the key references on most of education’s biggest problems (assessment/feedback, 1st-year experience, retention/widening participation). For general context the Horizon Reports might also be a good source – here’s the Horizon Report 2011.

11. How does the proposed project address the issue described above?

You’re essentially building an argument for funding your proposal. Section 10 was the ‘what’ and this is the ‘how’. You may want to break your ‘how’ into project phases. You definitely want to cover “the potential for sustainability of the work beyond the funded period”, as this is becoming a priority for JISC work. Something else to consider is whether the idea is appropriate.

12. What makes the proposed project innovative? Give references to any applicable previous research/work in this area and explain how your project would add or build on this.

The biggest failing I regularly see in this section is the failure to reference any prior work in your chosen area. In particular, you want to check whether there are any previous JISC projects in the same area. Identification of overlap is not a weakness but an opportunity to highlight how your project is different and why it should be funded to fill the missing gap.

The easiest way to find out what JISC has previously funded is to Google ‘JISC funded’ along with your project idea. Alternatively, use the CETIS PROD database to search for existing projects.

Obviously JISC aren’t the only project funders, so you should reference other work where necessary (for example, anything with mobile probably has some overlap with MoLeNET. Whilst I’m on mobile technology, one of my pet hates is platform-specific mobile apps: if you are doing something just for iPhone/iPad you’d better have a watertight argument for it).

Edit: I should have also highlighted that anyone who works for JISC (in the Services, Programme support, RSCs) generally has a good overview of what is going on in the sector both nationally and internationally. Running your idea past one of us before submission is a good opportunity to find out if your idea really is innovative and to identify where it overlaps with other projects.

13. What benefit will the outputs of your project be for other HE or FE institutions (outside of your institution)? Will they be able to use them, and why might they want to?

This is a new section in the bidding template. One of the criticisms I often hear about JISC-funded work is its limited wider impact on the sector. This is perhaps a bigger problem for the smaller projects, which have tighter deadlines and smaller budgets. This is where the philosophy of an open and engaging project can be used to your benefit: if you have already generated interest in your idea and got some feedback, this can be used to illustrate the benefit of, and demand for, your project. You might also want to consider the cost benefit here. We’re in an era of putting hard values on savings, so if your project is about retention, what are the cost benefits to the institution, and even to society, of a student continuing their studies?

14. Give brief details of the project timescale, project team (including how much time each member will be spending on the project), key work packages and outputs

An example I regularly use to illustrate one way to lay out this section is the University of Strathclyde’s PEER Project submission, in particular the way it maps a timeline to workpackages, objectives and outputs. If your word count permits, I would go into more detail about your outputs (expected size, format, which Creative Commons licence you’ll be using, where they will be put). If producing reports/documents, you might want to say whether drafts will be available for comment/contributions (there are various ways you could do this, from making a public Google Doc to maybe writetoreply.org).

One of the evaluation criteria is “does the proposal suggest that it has the full support from the institution(s) involved”. For the initial stage of proposals you don’t need to, nor should you, submit a letter of support. I think it’s hard to satisfy this criterion within the bidding template, so at the end of this section I would include a statement like “This proposal has been approved for submission by {Insert name of the person who has approved it}, {Insert job title} (and perhaps a contact email)”.

Budget Information

JISC are a bit coy when it comes to exactly how much your institutional contribution should be. The figure usually mumbled between markers is 30%. Remember that:

The proposal must not include the development or purchase of learning material/learning content, … software, licences and equipment purchase …, it would be acceptable to include this as part of an institution’s contribution – LTIG6grant.doc Para 8

On the budget form I’d use the ‘Details’ column against ‘Institutional Contribution’ to indicate any expenditure which falls into this category. I’d also use the Details column to break down the entered amounts so that the markers can see whether the project is value for money.

Finally

What were the most common reasons that bids were rejected during previous rounds of Learning & Teaching Innovation Grants? – from Guidance to Bidders

  • The proposed work duplicated existing work (including JISC funded work) and/or did not show any awareness of existing work in the same area;
  • Linked to the above, the proposal did not demonstrate clearly that it was innovative;
  • The proposal did not make it clear that proposed outputs would be of interest, transferable or reusable for other institutions, groups or subject areas;
  • The proposal was not eligible – for example it would use JISC funding to buy hardware or software, to develop or purchase learning materials;
  • The proposal was for the development of a tool and there was no evidence of a demand from the wider community;
  • The proposal was not supported by an institutional financial contribution commensurate with the benefit of the proposed work to the institution;
  • Proposals involving the development of a tool did not adhere to standard JISC expectations (free release to the JISC community, use of appropriate web standards, support for interoperability and transferability);
  • Proposals centred on the use of new technology or online resources and tools without any consideration of pedagogical need or accessibility issues.

Bid documents

Final, finally

Even if you are not supported by your local RSC (depending on where you are in the UK we have limited support for HEIs, but do support HE in FE), I’d still get in touch before you submit your proposal because we are always looking for good examples to shout about from our own patch.

Update: Rob and Lis's comments reminded me that I should have thanked Sheila MacNeil at JISC CETIS and the LTIG Programme Manager Heather Price for their input on this post (CETIS provided interoperability/standards information and Heather highlighted some useful bits and pieces, including details of the previously funded LTIG projects).
