Half baked


I’ve mentioned the appear.in service a couple of times. This allows you to convene small meetings (up to 8 people) with video, voice and chat without the need for logins or additional browser plugins on both desktop and mobile (my quick video demo here). Today I got an email from appear.in saying

Get notified when someone enters your room!

We have now made it even easier to start a video conversation. When someone enters your appear.in room, you will receive a desktop notification that you can click to enter the room.

How can you use notifications?

  • Get notified when someone shows up for a meeting
  • People who want to talk to you can just go into your room
  • Make sure everyone on your team is alerted when your team meetings start

Read more on our blog.

Rooms you follow

Notifications work using a Chrome extension, but once you have it installed you can monitor multiple rooms.

So if you wanted to run remote tutor support hours you could claim an appear.in room and enable notifications. Once you advertise your office hours you can monitor the room, get on with other work and wait for a notification.

Because appear.in allows you to ‘lock’ rooms, if you are providing one-to-one support you can prevent someone else ‘walking in’.

The awkward bit is handling the locked room. There is no queuing service and anyone visiting a locked room will be presented with the message below. Unfortunately, if someone visits a locked room and sees the locked message, the message doesn’t go away when the room is unlocked.

Locked room

A way around this might be to have two rooms – corridor and office. The corridor room would always be open. As people arrive in the corridor room you could greet them and invite them to your ‘office’ and lock the office during consultation. Once done you could go back to the ‘corridor’ room if anyone else is waiting. If the ‘corridor’ gets busy (more than 7) you’ll have to sit in it yourself or lose the ability to enter (unless as an owner you get priority).

[Writing this it’s all sounding very faffy. I’d imagine you could do something similar with Google Hangouts but I love the fact appear.in requires no login. What do you think?]

Posted in Feedback, Half baked, Mashup.


I'm not entirely sure what this post is. I started writing it on the train down to altc2013 and think it lost its focus between York and Sheffield. Essentially I wanted to write this to highlight some of the benefits of using BuddyPress as a way to capture user activity streams, but at the same time some of the challenges of achieving an integrated experience using WordPress … I'll let you decide its value, and please feel free to comment (the ‘dirty code’ post will be a lot better).

For the last couple of weeks I’ve been working on the altc2013 conference platform. In previous years the Association for Learning Technology (ALT) had used CrowdVine to create a conference environment which allowed delegates to connect and communicate. This worked well but had a number of data bottlenecks making administration difficult. This, combined with the knowledge that ALT members already have very rich networks on other sites, including their own blogs and social networks like Twitter, led to the decision to adopt some of the connectivist principles used in ocTEL. The result, hopefully, is a site that replicates the CrowdVine experience with several additional key features.

In this post I’ll outline the general recipe used in the altc2013 platform and how interoperability was achieved between some existing WordPress plugins (saving the code-heavy post for a later date).

Core plugins

Starting with a self-hosted version of the blogging platform WordPress, four key plugins are the basis of the site:

  • Conferencer – used to manage programme and session information
  • BuddyPress – social networking platform
  • FeedWordPress – used to pull delegate activity from 3rd party blogs and sites
  • MailPress – for daily newsletter distribution and management

The common issue when you stray away from WordPress as a blog to WordPress as a ___ is maintaining interoperability between plugins. For example, Conferencer was never designed to work with BuddyPress, so interfacing these plugins is required on several levels. To illustrate this, below is a general interface diagram for the altc2013 platform, followed by more detail about each of the main challenges that had to be overcome:

altc2013 platform integrations

Conferencer –> BuddyPress

In Conferencer the custom post type ‘session’ is used to enter and display session information. This post type is associated with further custom post types for rooms, timeslots, speakers, tracks and sponsors. BuddyPress, on the other hand, doesn’t use custom post types or taxonomies, instead extending the WordPress core functionality with its own custom APIs, functions and features. Out of the box BuddyPress uses Groups as a way for members to cluster and collaborate. Integration between BuddyPress and Conferencer is primarily achieved by renaming Groups to Sessions (a cheap trick but it works). Doing this means that when a user tries to view a Conferencer session, which automatically uses /sessions/ in the url, they are redirected to the BuddyPress group. This is achieved by creating a new page with the slug ‘sessions’ and then using this with the BuddyPress Group component (essentially duplicating the same url endpoint but relying on BuddyPress to steal priority over how the page is displayed).

WordPress Add New Page

BuddyPress settings page

At this point all we have done is trick WordPress into displaying a BuddyPress Group page. Additional code is required to hook into Conferencer session creation to generate a group in BuddyPress and create a relationship between the custom post type and the group (included in the Github code shared at the end of the post). Another aspect of the integration is the Conferencer-generated programme view. This includes a ‘Follow Session’ button, renamed from ‘Join Group’, which is done by reusing some of the existing BuddyPress functions to render a group button within the Conferencer programme.

BuddyPress <–> FeedWordPress

The FeedWordPress plugin allows the automatic collection of posts made on 3rd party sites using RSS. FeedWordPress ingeniously uses the existing WordPress Links table to maintain a list of sites it collects data from. Meanwhile, within BuddyPress members can edit their own profile using defined fields. In altc2013 we use this functionality to allow delegates to register their own blogs. An interface with FeedWordPress is achieved by associating a blog feed address with the WP Links table. The added benefit of allowing users to add their own blog feeds is that we can make an association between blog feed and author. This means that when a post is collected by FeedWordPress it is associated with the delegate, and consequently BuddyPress records it as an activity stream entry.

Example activity stream entry

Reader <–> BuddyPress

The Reader isn’t a plugin in its own right (but I should make it one); instead it’s a theme customisation I originally developed for ocTEL. All the Reader does is render data collected by FeedWordPress, which in turn is just categorised blog posts. The Reader integrates with BuddyPress by using its native activity favouriting and, via an additional BuddyPress-compatible plugin (BP Likes), also displays and records ‘likes’. This is achieved in a similar way to adding ‘Follow Session’ buttons to the Conferencer-generated programme.

Reader - Favourite/Like
Favourite in Activity Stream

BuddyPress <–> MailPress

MailPress is a plugin which manages the distribution of a daily newsletter of the latest conference activity. There are two interfaces between it and BuddyPress. The first is the addition of a link within the member’s notification settings to control their newsletter subscription; given the way BuddyPress uses WordPress functionality to add additional information to various interfaces, this was achieved by matching the WordPress/BuddyPress user id with a table of users maintained by MailPress. The second is the inclusion of highlights of the BuddyPress activity stream in each newsletter, achieved by using existing BuddyPress functionality to render and display a custom activity summary (as also used on the homepage of the conference site).

Summary

Hopefully this post has given you some insight into what was required to create the altc2013 conference platform: taking existing open source plugins and interfacing them to create new functionality. Whilst the effectiveness of the new altc2013 conference platform is still to be evaluated, we now have a basic platform from which to respond agilely to the needs of delegates.

A reminder that the code we’ve developed is on Github, so feel free to peruse it, take it in your own direction and comment on it. If you’re an altc2013 platform user, the feedback button is the best way to suggest improvements or highlight bugs, and if you are generally interested in this area the comments on this post are open.


Repositories are living archives. In terms of the support a repository must provide for stored files, it must take into account two important functions of the files it holds:

  1. Access: The files are held so that users can access them. This means that they must be stored in formats that can be used by today's intended audience
  2. Preservation: The files are held so that users in 5, 10, 50, or more years can still access them. This means that they must be stored in formats that can be used by future audiences, or in formats that can easily be migrated

These two considerations are not always complementary. A file format that is good for access today may not be a format that is easy to migrate, but a format that is easy to migrate may not be easy to read.

The text above is taken from the JISC infoNet Digital Repositories infoKit. An added complication when considering the deposit of OER is, if you are not using a ‘No Derivatives’ licence, how you can support remix/editing. Here’s a scenario taken from WikiEducator:

A teacher wants to make a collage. She imports several PNG photos into Photoshop and creates the collage. She saves the file as a PSD and exports a copy as a PNG to post on the web. While others can edit the PNG, it would be a lot easier to edit the PSD file. However, in order to use PSD files, the person has to have a copy of Photoshop.

Already it’s starting to get more tricky. PSD is a proprietary file format developed and owned by Adobe and used in Photoshop. You can actually open and edit PSD files in open source tools like GIMP (I’m not sure how legally GIMP can do this – I was waiting for a response from OSS Watch. Update: I’ve had a response; the upshot is ‘it can be awkward on all levels’. I’ll point to a related blog post when it’s published: a post by Scott Wilson at OSS Watch on using proprietary file formats in open source projects). Similarly you can use open source alternatives to Microsoft Office, like LibreOffice, to open and edit DOC/XLS/PPT etc., but in this case Microsoft’s proprietary file formats are covered by their Open Specification Promise, which, if you read the Wikipedia page on it, itself has a number of issues and limitations.

The next issue is, as highlighted by Chris Rusbridge in his Open letter to Microsoft on specs for obsolete file formats, that the OSP doesn’t cover older file formats. So if you were an early adopter publishing OER in editable formats, there is a danger that the format you used won’t be suitable down the line.

I’m mindful of the Digital Repositories infoKit’s last point of guidance:

Be practical: Being overly-strict about file formats may mean collecting no files leading to an empty repository! A sensible approach must be used that weighs up the cost and benefits of different file formats and the effort required to convert between them.

Should OER file formats be tomorrow’s problem?

Posted in Half baked, OER.

I should say this post contains a lot of technical information, doesn't give much background and is mainly for my hard-core followers.

This is a very loose sketch of an experiment I might refine, which uses Jason Davies’ wordcloud script (an add-on for d3.js) as a way to filter data hosted in a Google Spreadsheet. I was essentially interested in the Twitter account descriptions of the community using the Social Media Week – Glasgow hashtag, but a minor detour has reminded me that you can:

  • get JSON data straight from a Google Spreadsheet
  • build dynamic queries to get what you want

So I fired up NodeXL this morning and got this pretty graph of how people using the #smwgla hashtag at least twice follow each other.

people using the #smwgla hashtag at least twice follow each other

One of the features of NodeXL is to add stats columns to your data, which include friend/follower counts, location and profile descriptions.

NodeXL Stats

Uploading the data from NodeXL (Excel) to Google Spreadsheets allows me to render an interactive version of the community graph using my NodeXL Google Spreadsheet Graph Viewer.

interactive version of the #smwgla community graph

All this is doing is grabbing data from Google Spreadsheets using their Visualization API and rendering it visually using javascript/HTML5 canvas. You can use the query language part of this API to get very specific data back (if you want to play, try Tony Hirst’s Guardian Datastore Explorer). Using Tony’s tool I got this query built. One thing you might notice is I’m selecting a column of Twitter descriptions WHERE it contains(‘’) <- a blank. If it’s a blank, why did I bother with the WHERE statement?
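For context, here is a minimal JavaScript sketch (not the viewer’s actual code) of how query results can be pulled from a spreadsheet with the Visualization API’s query client; it assumes the old jsapi loader script is included, and the sheet key and query string are placeholders:

// Sketch: query a published Google Spreadsheet with the Visualization
// API JavaScript client. Assumes <script src="https://www.google.com/jsapi">
// is loaded; the sheet key and query string are placeholders.
google.load('visualization', '1');
google.setOnLoadCallback(function () {
  var query = new google.visualization.Query(
      'https://spreadsheets.google.com/tq?key=YOUR-SHEET-KEY&gid=0');
  query.setQuery("select A, B where B contains('smwgla')");
  query.send(function (response) {
    if (response.isError()) {
      console.log(response.getMessage());
      return;
    }
    var data = response.getDataTable(); // ready for a chart or canvas renderer
    console.log(data.getNumberOfRows() + ' rows returned');
  });
});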

Switching to Jason Davies’ wordcloud demo, we can play with custom data sources if we have some JSON. In Tony’s tool you have options to get the data in html (tqx=out:html) and csv (tqx=out:csv). There is a third undocumented option for json: tqx=out:json. Using this we can get a url for the wordcloud generator https://spreadsheets.google.com/tq?tqx=out:json&tq=select%20AH%20where%20AH%20contains%28%2727%29&key=0Ak-iDQSojJ9adGNUUXZnU2k3V1FRTjR3eFp0RmRNZWc&gid=118

To make the wordcloud interactive, so that when you click on a term it filters the data on that term, we can use the option to include {word} in our source url e.g. https://spreadsheets.google.com/tq?tqx=out:json&tq=select%20AH%20where%20AH%20contains%28%27{word}%27%29&key=0Ak-iDQSojJ9adGNUUXZnU2k3V1FRTjR3eFp0RmRNZWc&gid=118
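A small sketch of how these query urls can be built programmatically (the column AH, spreadsheet key and gid are copied from the example above; swap in your own values):

// Sketch: build a Visualization API query url that returns JSON.
// Column, key and gid come from the example above.
function buildQueryUrl(word) {
  var tq = "select AH where AH contains('" + word + "')";
  return 'https://spreadsheets.google.com/tq' +
      '?tqx=out:json' +
      '&tq=' + encodeURIComponent(tq) +
      '&key=0Ak-iDQSojJ9adGNUUXZnU2k3V1FRTjR3eFp0RmRNZWc' +
      '&gid=118';
}

// An empty word gives the unfiltered url; the wordcloud generator
// substitutes the clicked term for {word} at runtime.
var seedUrl = buildQueryUrl('');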

And here is the final result, an interactive wordcloud of #smwgla Twitter account descriptions [Note: you need to hit the Go button when you click through]:

interactive wordcloud of #smwgla Twitter account descriptions

Is the end result useful? Not sure, but how the data is extracted is (to me anyway).

Some students didn't take well to Steven Maranville’s teaching style at Utah Valley University. They complained that in the professor’s “capstone” business course, he asked them questions in class even when they didn't raise their hands. They also didn't like it when he made them work in teams.

Those complaints against him led to the university denying him tenure – a decision amounting to firing, according to a lawsuit Maranville filed against the university this month. Maranville, his lawyer and the university aren't talking about the case, although the suit details the dispute. Socratic Backfire? – Inside Higher Ed

A couple of years ago I was fortunate to briefly work with Jim Boyle at the University of Strathclyde. Jim recognised long ago (over a decade) that passive learning wasn’t, and never had been, appropriate for teaching. In searching for a better way he, amongst other things, adopted Eric Mazur’s Peer Instruction technique. This technique combines the Socratic model with electronic voting: asking students questions, getting them to discuss their reasoning, then re-polling the question to make sure they’ve got it.

[When I was working with Jim he was still looking for new ways to improve the way students learned, and if you haven’t already seen it I’d wholeheartedly recommend you watch his ESTICT keynote Truth, Lies and Voting Systems, which in part looks at the issues of using PowerPoint in teaching and learning.]

But why isn’t HE full of Jims? Why aren’t all educators looking to educate with the tools at their disposal? I believe part of the problem is ‘expectations’, and not just the expectation of academics that they need to stand up in front of a room and talk for 50 minutes. No, the problem is bigger than that. There is the expectation by your colleagues and head of department that you as an academic will stand up for 50 minutes twice a week and lecture your class; there is the expectation by the institution that you as an academic will stand up for 50 minutes twice a week and lecture your class; there is the expectation by the professional body who accredit your course and have supplied you an outline curriculum that you will stand up for 50 minutes twice a week; and unfortunately students themselves have an expectation of university life which includes you as an academic standing up for 50 minutes twice a week.

There will always be individuals within an institution using good teaching practice, but turning these from the minority into the majority needs the institution to have, and sell, a different expectation of teaching and learning. The OU already does this, Aalborg University with its institution-wide PBL approach does this, and the University of Lincoln’s ‘Student as Producer’ has the potential to do it. There will be others, but not enough.

</generalising statements>

Posted in Half baked.


Eyes of Flickr, originally uploaded by anyjazz65

Last week I got frustrated at not being able to find some JISC funded project outputs, which was a little annoying. This led to a small exploration of JISC’s Programme Information Management System (PIMS). The system was originally only available to JISC executive staff but was made available to all sometime last year, and is used to log all the JISC funded programmes and projects. As well as looking up projects via your browser, some of the data can be accessed via its API.

At the back of my mind was a post by Jonas Eriksson (@joeriks) on Coding my own automated web site monitoring with Google Apps Script and a Spreadsheet. What I wanted to do was pull JISC funded project website addresses from the last 3 years* and automatically test to see if they were still alive.

To do this I first needed a list of project website urls from the last 3 years. Unfortunately the PIMS API doesn’t appear to let you access records based on a date range, so instead I just grabbed the lot via http://misc.jisc.ac.uk/pims/api/dev/project which returns the data in XML format. I could have dumped this straight into a Google Spreadsheet using the importXML formula, but I find this *very* unreliable, so opted to handle it using the Google Apps Script XML service. [Because of my lack of knowledge of this service I initially didn’t get very far, so I tried processing the response as JSON instead by using http://misc.jisc.ac.uk/pims/api/dev/project=json, but got into more difficulty because the API returns objects with a duplicate ‘Project’ key, so I reverted back to XML.]

So here is my code to get some selected columns of PIMS data for projects that have finished in the last 3 years:
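The code was originally embedded here; since it hasn’t survived, below is a minimal Apps Script sketch of the approach described, using the current XmlService rather than the older Xml service. The element names (Project, Title, ProjectWebsite, EndDate) are assumptions to check against the actual feed.

// A sketch, not the original code: fetch the PIMS project list as XML
// and write selected columns for projects finished in the last 3 years
// to the active sheet. Element names are assumptions.
function getPimsProjects() {
  var response = UrlFetchApp.fetch('http://misc.jisc.ac.uk/pims/api/dev/project');
  var doc = XmlService.parse(response.getContentText());
  var projects = doc.getRootElement().getChildren('Project'); // assumed element name
  var cutoff = new Date();
  cutoff.setFullYear(cutoff.getFullYear() - 3);
  var rows = [['Title', 'Website', 'End date']];
  for (var i = 0; i < projects.length; i++) {
    var endText = projects[i].getChildText('EndDate'); // assumed element name
    if (!endText) continue;
    var end = new Date(endText);
    if (end >= cutoff && end <= new Date()) {
      rows.push([projects[i].getChildText('Title'),
                 projects[i].getChildText('ProjectWebsite'),
                 endText]);
    }
  }
  SpreadsheetApp.getActiveSheet()
      .getRange(1, 1, rows.length, rows[0].length)
      .setValues(rows);
}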

* Para 27 and 29 of JISC’s general terms and conditions of funding state “27. The institution and its partners must create a web page and web site to explain the project aims and objectives and to disseminate information about project activities and results. …. 29. The lead institution or one of its partners must agree to host the web site on their server for a minimum of 3 years after the end of the project and to assist JISC in archiving it subsequently.”

The data isn’t clean

Any system that gives the user freedom over the data entered will invariably get some ‘dirty data’. For example, it would have been nice to just iterate across the project website urls, but: a) not all projects have a url entered; b) not all of the projects are projects (some of the entries are holders for Programme Management or Evaluation); c) urls are entered with leading whitespace; d) the field may have multiple urls or text notes; or e) the url might just be entered wrong or be an old url.

You can add layers of code to factor some of these out, like trimming whitespace or only processing urls that begin ‘http:’, but at the end of the day there will always be an error factor.

Regardless of this I was keen to push on and find out how many of these urls were pingable using Apps Script. So here is my next bit of code to ping a spreadsheet of urls:
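Again, the embedded code is missing, so here is a sketch of what it might have looked like, with the batch bounds pulled out as constants (these are what had to be adjusted by hand, as noted below). It assumes urls in column A with a header row and response codes written to column B; muteHttpExceptions makes 404s come back as response codes rather than exceptions.

// Sketch: fetch each url in a batch and record the HTTP response code.
// Adjust START/END by hand to process batches (UrlFetchApp hung on
// runs of more than ~200 urls). Assumes urls in column A, header row,
// codes written to column B.
var START = 1;   // first data row in this batch (row 2 of the sheet)
var END = 200;   // last data row in this batch

function pingUrls() {
  var sheet = SpreadsheetApp.getActiveSheet();
  for (var row = START; row <= END; row++) {
    var url = String(sheet.getRange(row + 1, 1).getValue()).trim();
    if (url.indexOf('http') !== 0) continue; // skip blanks and text notes
    var code;
    try {
      code = UrlFetchApp.fetch(url, {muteHttpExceptions: true}).getResponseCode();
    } catch (e) {
      code = 'fetch failed'; // DNS failures etc. still throw
    }
    sheet.getRange(row + 1, 2).setValue(code);
  }
}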

Using this I found that the UrlFetchApp service would hang if there were over 200 urls and not push its results into the sheet, so I had to manually adjust the for loop bounds to do batches of urls at a time.

The results from this first pass are in the embedded table below (also in this Google Spreadsheet). It’s notable that almost half of the entries don’t have website urls associated with projects. As mentioned earlier, not all of the projects entered are externally funded projects; a proportion are internal activities.

As there were also quite a few invalid website url entries (n. 32) I tidied these up manually (a number of ‘tbc’ and other text entries) and ran the script again as a 2nd pass, also summarised in the table. In the second pass I did some manual checking of the project entries returning 404 and 590 errors.

The 590 errors are all associated with project websites hosted on the HEA website for the OER programme. An example is http://www.heacademy.ac.uk/projects/detail/OER_IND_Bradford which returns an HEA-themed page with no content. It appears these urls have been entered incorrectly, or the HEA have changed the structure of their site, as the following url does work: http://www.heacademy.ac.uk/projects/detail/oer/OER_IND_Bradford.

The manual checks only identified 1 or 2 urls mistakenly reported as missing, indicating that UrlFetchApp, which automatically follows redirects, is accurate.


PRODing a different data source

Another data source which has JISC project data is the CETIS PROD directory. This pulls in a number of data sources, including PIMS, and is given some extra TLC by CETIS staff who curate the data, manually adding extra pieces of information. Like PIMS, PROD has an API to get data out but, as far as I could see, there was no way to get all the data.

I was originally made aware of PROD via JISC CETIS’s Wilbert Kraan (@wilm), and so a few friendly tweets later I ended up with the following query for the TALIS/PROD data store (I should say I know very little about Linked Data/SPARQL, so rather than show my ignorance I’m not even going to mention it):

PREFIX prod: <http://prod.cetis.ac.uk/vocab/>
PREFIX doap: <http://usefulinc.com/ns/doap#>
PREFIX mu: <http://www.jiscmu.ac.uk/schema/muweb/>
PREFIX jisc: <http://www.rkbexplorer.com/ontologies/jisc#>
SELECT DISTINCT *
WHERE {
    ?s a doap:Project .
    ?s doap:name ?project .
    ?s jisc:end-date ?date .
    ?s doap:homepage ?homepage .
}

Just as with the PIMS example, it was possible to use Apps Script to fetch the results using this getProdData code. [You’ll see that the fetch query url has been shortened using bit.ly. This is because, as Tony discovered, Google Apps Script doesn’t like lonnnnnngggg queries, but it is happy to follow redirects.]
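The getProdData code itself isn’t reproduced here, but a minimal sketch of the fetch-and-parse step might look like the following, assuming the endpoint returns the standard SPARQL JSON results layout (the binding names match the SELECT variables in the query above; the bit.ly url is a placeholder):

// Sketch: fetch SPARQL SELECT results as JSON via a bit.ly-shortened
// query url (placeholder) and return rows of name/end date/homepage.
// UrlFetchApp happily follows the redirect.
function getProdData() {
  var response = UrlFetchApp.fetch('http://bit.ly/your-short-query-url');
  var json = JSON.parse(response.getContentText());
  var bindings = json.results.bindings; // standard SPARQL JSON results layout
  return bindings.map(function (b) {
    return [b.project.value, b.date.value, b.homepage.value];
  });
}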

Below is a summary of the results from pulling and pinging the project homepage urls from PROD (also available in this Google Spreadsheet). PROD returns more than half the number of projects retrieved from PIMS (n. 430), but all the projects with homepage urls are valid without any clean-up. There are still over 30% of projects without a homepage url, but this doesn’t mean that a project doesn’t have some sort of web presence; the PROD data contains other urls, where they exist, for a JISC page, RSS feed, wiki site etc.

Even though PROD gets some extra love and attention, 13% of the recorded project homepage urls hit dead ends (mainly 404 errors). Just as with the PIMS data I had a look at these in a 2nd pass and found that 5 projects actually had an alternative web presence (usually a hosted blog).

 

Many eyes and many things

What can we take from all of this? A third of JISC funded projects don’t have a project website? One in ten projects with websites aren’t available for 3 years after the project ends? I don’t think it’s as conclusive as that. Those stats are based on the assumption that all JISC funded projects have agreed to the general terms and that the general terms have remained unchanged. What interests me more is how the information can be improved and reused.

In terms of improving the quality of the data, just as PROD adds an extra level of human curation, there is potentially the opportunity to add a wider level of community curation, similar to Wikipedia or IBM’s Many Eyes. The challenge is less technical (in this sheet I’ve added a comment link for each entry which links to a comment form) and more about finding the right incentive to get contributions.

In terms of reuse, I have one simple use case in mind and I’m sure there are many more. In a couple of hours it was possible to pull this data into a Google Spreadsheet and ping project website urls. It would only be a tiny step to not only automate this but also trigger an email to someone when a website goes off the radar.
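As a rough sketch of that tiny step (the sheet layout and recipient address are assumptions), a time-triggered Apps Script function could re-ping each url and send an email when a previously live site stops responding:

// Sketch: alert someone when a previously live project site stops
// responding. Assumes urls in column A, last recorded response code in
// column B, and a header row; the recipient address is a placeholder.
function checkAndAlert() {
  var sheet = SpreadsheetApp.getActiveSheet();
  var data = sheet.getDataRange().getValues();
  for (var i = 1; i < data.length; i++) { // skip header row
    var url = data[i][0];
    if (!url) continue;
    var code = UrlFetchApp.fetch(url, {muteHttpExceptions: true}).getResponseCode();
    if (code !== 200 && data[i][1] == 200) {
      MailApp.sendEmail('someone@example.com', // placeholder recipient
          'Project website off the radar',
          url + ' now returns HTTP ' + code);
    }
    sheet.getRange(i + 1, 2).setValue(code);
  }
}

Run from a daily time-driven trigger, something like this would quietly watch the whole list.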

[Here’s a link to the Google Spreadsheet with all the data I pulled]

One of our supported institutions recently asked if I knew of, or had, any guidelines for organisers planning to run blended events (extending the value of face-to-face events by giving access to a remote audience). I didn’t find anything that entirely fitted the bill, but as I’ve arranged, helped with and participated in a number of these types of events I’ve done a bit of a brain dump, and below is a draft of what I’ve come up with. I’d really welcome any feedback or suggestions you have (you can leave comments on this post or edit the document in Google Docs). Update: Thanks for the contributions so far from Kirsty Pitkin (@eventamplifier) and Alan Levine (@cogdog).

Guidelines for blended events (online and face to face)

Increasingly, event organisers are turning to hybrid events which blend a face-to-face with an online audience to maximise impact/amplification and reduce costs for attendees. This guide is designed to identify a number of factors that event organisers should consider before running events, and covers a number of areas from technical considerations to the format of the event.

In the planning stages of a blended event, organisers need to make a decision about how the physical and virtual audiences will be treated. This is important as, from the very beginning, you need to manage the expectations of those attending. The main decision is whether the remote audience will be actively integrated into the event or treated as passive observers. There are a number of factors to consider before deciding where on the spectrum, from active to passive, the event is going to sit. One factor is scale: with large events it becomes increasingly difficult to moderate the audience and engage in effective dialogue across what is happening in the physical and virtual spaces.

Before you start

  • Establish a group of key people who will be involved in designing and delivering the event
  • Decide on how the group will communicate and share documents (eg draft programmes)
  • Agree on a set of event objectives
  • Identify your target audience
  • Decide on the format of your event.
    • Will there be any breakouts (will these also be streamed)
    • Is the remote audience going to be actively engaged during the event
    • Will any of the presentations be delivered remotely
    • What streaming options do you have available (video streaming only, integrated conference environment)
    • How will the physical and virtual space be arranged (what virtual presence will remote delegates have)
    • How will remote participants discover or navigate through the event materials
    • How will your physical and remote audience connect and network
    • Do you have enough staff to support your event
  • Identify speakers (for more interactive events consider targeting speakers who are familiar with delivering hybrid or online events)
  • Create a shortlist of suitable venue locations (criteria for selection may include connectivity, on-site technical support, existing video streaming provision)
  • Identify and resolve legal aspects of streaming the event
    • How will you get consent from presenters to record/distribute their presentation. Tip: also ensure presenters know that the content of their presentations needs to be copyright cleared or properly attributed
    • How will you get consent from the audience to record/distribute their contributions/comments.
  • Decide if you want to stream the event yourself or use a dedicated company to do it for you

Technical specification

  • General
    • What online conference environment are you going to use
    • How are you going to use the online environment (polls, Q&A pods, breakouts, chat)
    • Are individual logins required for remote delegates and, if so, how will these be generated and distributed
    • Will you be streaming via wifi or a wired Internet connection? A wired connection is always better.
  • Audio
    • General - does the room have an existing AV setup which can be fed into your live stream
    • Presenter – how will audio be picked up (directional mic, wireless lapel mic or other)
    • Audience – if required, how will questions and comments from the audience be picked up (roving mics, relayed by host)
  • Video
    • Source - how will video be captured. Tip: webcams are okay but will often struggle in low light levels. A number of current camcorders have the ability to pass through their live video feed as a webcam source via firewire, s-video or HDMI (not all camcorders have this ability). Using a camcorder will also give you more control over zoom, focus and light balance.
    • Coverage - will there be multiple video sources to capture different angles (eg presenter, slides and audience). How does your streaming software support this, e.g. switching camera angles? Note: slides with small text or detailed images often appear poorly over a live video stream, so it is best to have them available elsewhere (e.g. Slideshare or Authorstream). Some online conference environments also allow slides to be uploaded beforehand to allow better rendering. Check speakers’ slides in advance, where possible.
    • Remote streaming - will online participants have the option to broadcast their own cameras

Event registration and information

  • Decide if online attendance is going to be promoted in the event publicity
  • Outline the anticipated remote delegate experience - will remote delegates be able to contribute to the discussion etc
  • Provide details of technical requirements to join remotely (do remote delegates need a webcam and/or mic, minimum computer requirements)
  • Incorporate legal consent into registration process (eg permission to record and transmit delegate comments)
  • Provide instructions on how to tag discussions and subsequent event blog posts
  • Make sure it is clear who to contact for support and where discussions about technical problems should take place

Preparation

  • Identify risks with the event format and have plans to mitigate these
    • Do you have a secondary communication channel
    • Are slides or presentations hosted elsewhere in case of failure in video or part of the streaming environment
    • Do you have access to backup equipment
    • What do you do in the event of a lost data connection
  • If possible do a full technical rehearsal from the venue
    • Are there reliable and accessible local network points
      • suggest requesting hard-line connections for speakers, preferably on a separate network from the public wireless
      • Don’t just take their word for it (if they offer to show you the server closet it is a bad sign)! Get a map, and find out any limits on connections per router.
    • Are there sufficient power sockets and/or is the venue happy for you to lay extension leads
    • Is the venue lighting sufficient
    • Are there any issues with sound and video. Tip: check there aren’t any problems with audio feedback
    • Can you access your streaming/conference environment
    • Do a speed test on the network to ensure you have sufficient upload speed
  • Consider providing presenters access to the streaming environment prior to the event to let them see what features are available and experience what it is like as a remote delegate
  • Provide your speakers with some tips about how best to involve the remote audience, including looking at the camera periodically and looking out for questions from them
  • Make sure your event team are familiar with the streaming environment
  • Finalise the format of the event
    • Check that the event programme allows enough time (breakout session transfers, questions from the audience)
    • If necessary, check that the physical and virtual spaces work together (for informal/smaller events can the physical and remote audience see and interact with each other)

During the event

The level of involvement with the physical and virtual audiences is very dependent on the type of event you want to run. It is perfectly legitimate to run an event designed only to stream sessions from the venue. The key, if using this option, is to manage remote delegates’ expectations of how the event will be run. The following are suggested factors to consider (some may not apply to your event):

  • Moderators should be at the event as well as online to relay information to remote delegates
  • We had good luck with giving special seating up front to dedicated conference bloggers (give them power and, if possible, ethernet connections)
  • I strongly suggest creating a dedicated backchannel for event supporters, outside the public channels (e.g. a Skype chat, or a mobile phone list to share)
  • Greet virtual participants as they arrive and relay information (anticipated start times, sound/video checks)
  • Moderate chat discussion, prompting audience for questions and reflections
  • Provide a clear area or contact for technical support so any issues can be dealt with quickly, without disrupting discussion about the content of your event
  • Guide the remote audience through the materials

After the event

  • Collect evaluation data
  • If archive material is available, publicise it (consider uploading material to a separate video site like Vimeo)
  • Look for ways to represent discussions in a meaningful way (e.g. providing a transcript which removes RTs) or summarise key discussion points
  • Make sure your content is accessible. Consider the need for subtitling of video footage or use alternative formats, such as detailed session summaries, to help ensure wide access to content
  • Connect up dispersed materials by providing links back to the main event website
  • Check with speakers to ensure that they are happy for their presentation to remain online

Acknowledgements

This document is published CC-BY-SA; if you contribute please add your BY below :)

  • BY Martin Hawksey @mhawksey
  • BY Alan Levine @cogdog
  • BY Kirsty Pitkin @eventamplifier
Posted in Half baked.


Recently I was at a talk by Prof David Nicholas, project lead of the JISC funded Google Generation project which got a lot of attention in 2008 (the one that highlights that most search is broad and shallow: users don’t go beyond the first page of results, 40% never return to a site, searchers rarely go beyond the first 3 pages, etc.).

During David's presentation he kept going back to the idea that, historically, search for academic resources was controlled by librarians, they were the gatekeepers. If you needed to do a search you'd take your slip of paper with your keywords and search operators for approval before being allowed on a terminal to try and find what you were looking for. Internet search has obviously changed this. Now you can search almost anytime, anywhere. As a consequence the librarian is largely out of the loop, unable to assist when the person pops in their 2.3 keywords and pulls the handle, hoping they hit the jackpot with what pops out.

So what has happened is that the original awareness mechanism, the slip of paper, has been lost, removing the opportunity for the librarian to share their expertise. But whilst librarians secretly plot about how to turn Google off, a new awareness mechanism is emerging.

The new slip of paper is something I’ve known about for a while, but it wasn’t until I was listening to David that I understood what it meant. The foundation of this understanding is Tony Hirst’s Joining the Flow – Invisible Library Tech Support (posted in September 2008!!!), which highlights how Twitter could be used to “provide invisible support to their patrons by joining in the conversation”. So basically, instead of waiting for that slip of paper to cross your desk, you go rummaging in the bins trying to find it.

Business is already tapping into this channel; below is an example of a recent experience of ‘invisible help’:


Establishing a Twitter based invisible helpdesk isn’t that hard. All you need to do is set up and monitor some search keywords, and before long you can find yourself becoming a good Samaritan. I’ve started using TweetDeck to monitor keywords related to blog posts I’ve written so that I can guerrilla market my wares (hmm, that might make a good WordPress plugin: attach some keywords as metadata and set up a Twitter robot to play good Samaritan for you). There are also some bespoke tools emerging in this area. The main one I know of is the Chrome extension InboxQ, which uses Twitter to help you “find people asking questions about things you know”.

The main problem is that whereas companies like Zoho have a global operation, your library will probably have a limited geography, and Twitter is probably still used by a minority of patrons (there are ways around the geography problem, like promoting a common hashtag). I still think it’s worth trying to search for those slips of paper.

Posted in Half baked, Twitter.


In this post I want to put down a marker as to the role I think Twitter could have within education. When previously presenting on the use of Twitter in education I’ve always tried to emphasise that it’s not just a tool for discussion (in fact I try to avoid the word discussion because 140 characters can seriously hamper the depth you can go into); instead Twitter, which can be easily interacted with via its API and 3rd party services, has the potential to be used as the building blocks for a service to support teaching and learning.

Some examples for you.

Does your institution use (or is it about to cut) an SMS service to send administrative information to students? If so, you could save yourself 4p per text by asking students to follow a Twitter account and receive free SMS updates if they are customers of one of the four big mobile network operators.

Do you use or want to use electronic voting in the classroom but don’t have enough handsets, or are you frustrated when students don’t bring them in? If so, Twitter can be used as a mechanism for collecting votes, even using the most basic mobile phones.
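As an illustrative sketch only – it relies on the unauthenticated v1 Twitter search API that was available at the time of writing (since retired), and the vote format (hashtag plus a letter) is made up – votes could be tallied with something like:

// Sketch: tally votes tweeted as e.g. "#myclassvote B" using the
// since-retired, unauthenticated v1 Twitter search API. The endpoint
// and response shape are as they were at the time of writing.
function tallyVotes(hashtag) {
  var url = 'http://search.twitter.com/search.json?rpp=100&q=' +
      encodeURIComponent('#' + hashtag);
  var json = JSON.parse(UrlFetchApp.fetch(url).getContentText());
  var counts = {};
  json.results.forEach(function (tweet) {
    var match = tweet.text.match(/#\w+\s+([A-Ea-e])\b/); // first A–E after the tag
    if (match) {
      var vote = match[1].toUpperCase();
      counts[vote] = (counts[vote] || 0) + 1;
    }
  });
  return counts; // e.g. {A: 3, B: 7}
}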

By making a strategic decision to use Twitter for different aspects of the educational experience, I believe students are less likely to perceive it as a gimmick and consequently more likely to take ownership of it as a tool to support their own education.

A nice diagram I came across recently which illustrates these ‘different aspects of Twitter’ is Mark Sample’s Twitter Adoption Matrix, which featured in his A Framework for Teaching with Twitter post.

Twitter Adoption Matrix  

(Mark has followed up his post with another expanding on Practical Advice for Teaching with Twitter, which is also worth a read)

The idea of building applications around social network sites to aid teaching and learning isn’t new. Examples like the OU’s SocialLearn and Purdue’s Hotseat spring to mind. Perhaps the issue with these is that they are designed around breadth instead of depth, trying to tap into the elusive Personal Learning Environment.

What if instead we ignore the personal and focus on the functional? That is, building applications around Twitter to provide students and tutors with the tools to support learning, focusing on formal uses while enabling opportunities for serendipitous informal learning.

But why Twitter and not Facebook or FriendFeed et al.? For me it comes down to a couple of things. With Facebook there is the ever-present distraction of games, friends and family; Twitter strips a lot of this away. FriendFeed is better in terms of simplicity, and you are not restricted to 140 characters. Whilst this makes FriendFeed a better tool for deep discussion, it makes it less mobile friendly (i.e. you can read notifications from Twitter on the most basic phone via SMS).

Finally, flexibility. My favouring of Twitter’s flexibility is perhaps down to my own limitations as an educational mash-up artist: I find it a lot easier to extend Twitter’s functionality because of the simplicity of the core product and the number of examples that can easily be adapted.

Hopefully you are getting my gist. Focus on adopting Twitter as a tool. Think of Twitter’s utility. The utility to collect comments. The utility to collect votes. The utility to send notifications. Through focusing on utility you are creating opportunities for other learning theories to come into play enabling the transition from formal to personal.

Posted in Half baked, Twitter.

If MS Outlook was my idea I would make it easy to read and edit all my social networks, VLEs and PLEs from my inbox.

Email 2.0 – App friendly, by mhawksey

Perhaps not a completely original idea, but recent developments might make this happen sooner rather than later. Google are already exploring what is possible with Google Wave. The model they are developing not only makes it possible to interact with other sites from your inbox (like reading, searching and posting Twitter updates), but also makes ‘waves’ embeddable elsewhere.

Mozilla, the developers behind Firefox, are already looking at a new communication platform, codenamed Raindrop, and if you look at some of the prototype sketches a similar theme of ‘one app to interact with them all’ is evident.

Raindrop sketches

Currently, MS Outlook is the first application I fire up in the morning and the last I switch off at night, but for how much longer …

Posted in Half baked.