hannah dee

Pumpkin Hack!

On Sunday we had our first Aberystwyth Robotics Club pumpkin hack. Kids, pumpkins, flashing lights and electronics together in a fun afternoon workshop.

In the carving station, the kids hacked away at their pumpkins with kid-safe tools or handed their design to one of our high-powered, Dremel-wielding helpers. With a suggested age range of 6-12 we weren’t going to let the attendees loose with super-sharp knives or power tools, but they managed to design their pumpkins themselves and help to cut them out (or at least, carve them).

In the coding zone, we had a bunch of laptops, Arduino Nano microcontrollers, battery packs, wires and some ultra-bright LEDs. Kids wired up their own microcontrollers, with assistance from our student helpers, then programmed them in Arduino C. The programming aspect was mostly copy-and-paste, but with just an hour and a bit to spend on it, the wiring and the coding were sufficient to keep everyone involved.

Our final display was so much more impressive than I expected.

Here’s the “After” pic:

Here’s a Google Docs link to the Arduino handout if you want to try running a similar event yourself. You need a lot of helpers, as it’s quite easy to wire things up wrong, and the coding involves working out which bits to copy and paste. But it works, and we had kids as young as 6 with flashing pumpkins and big smiles. The one scary moment was when we realised that Windows Update had run on all of our laptops, taking out the Arduino drivers. But with the help of one of the attendees, we got around that (phew!) by booting into Linux and then editing the permissions on the USB ports.

HCC 2016: Human Centred Computing Summer School

Last week I was invited to present at the first Human Centred Cognition summer school, near Bremen in Germany. Summer schools are a key part of the postgraduate training experience, and involve gathering together experts to deliver graduate-level training (lectures, tutorials and workshops) on a particular theme. I’ve been to summer schools before as a participant, but never as faculty. We’re at a crossroads in AI at the moment: there’s been a conflict between “good old fashioned AI” (based upon logic and the symbol manipulation paradigm) and non-symbolic or sub-symbolic AI (neural networks, probabilistic models, emergent systems) for as long as I’ve been in the field, but right now, with deep machine learning in the ascendant, it’s come to a head again. The summer school provided us participants with a great opportunity to think about this and to talk about it with real experts.

Typically, a summer school is held somewhere out of the way, to encourage participants to talk to each other and to network. This was no different, so we gathered in a small hotel in a small village about 20 miles south of Bremen. The theme of this one was human centred computing – cognitive computing, AI, vision, and machine learning. The students were properly international (the furthest travelled students were from Australia, followed by Brazil) and were mostly following PhD or Masters programs; more than half were computing students of one kind or another, but a few psychologists, design students and architects came along too.

The view from my room

The bit I did was a workshop on OpenCV, and I decided to make it a hands-on workshop where students would get to code their own simple vision system. With the benefit of hindsight this was a bit ambitious, particularly as a BYOD (Bring Your Own Device) workshop. OpenCV is available for all the main platforms, but it’s a pain to install, particularly on Macs. I spent a lot of time on the content (you can find that here: http://github.com/handee/opencv-gettingstarted) and not so much time thinking about infrastructure or installation. I think about half of the group got to the end of the tutorial, and another 5 or 10 managed to get some way along, but we lost a few to installation problems, and I suspect these were some of the less technical students. I used Jupyter notebooks to create the course, which let you intersperse code with text about the code, and I think this may have created an extra layer of installation difficulty rather than a simplification of practicalities. If I were to do it again I’d either try to have a virtual machine that students could use, or I’d run it in a room where I knew the software. I certainly won’t be running it again in a situation where we’re reliant on hotel wifi…

The class trying to download and install OpenCV

The school timetable involved a set of long talks – all an hour or more – a mix of keynotes and lectures. The speakers were all experts in their fields, and taken together the talks truly provided a masterclass in artificial intelligence, machine learning, the philosophical underpinnings of logic and robotics, intelligent robotics, vision, perception and attention. I really enjoyed sitting in the talks – some of the profs just came for their own sessions, but I chose to attend pretty much the whole school (I skipped the session immediately before mine, for a read-through of my notes, and I missed a tutorial as I’d been sat down too long and needed a little exercise).

It’s hard to pick a highlight as there were so many good talks. Daniel Levin from Vanderbilt (in Tennessee) gave an overview of research in attention and change blindness, showing how psychologists are homing in on the nature of selective attention and attentional blindness. I’ve shown Levin’s videos in my lectures before so it was a real treat to get to see him in person and have a chat. Stella Yu from Berkeley gave the mainstream computer vision perspective, from spectral clustering to deep machine learning. Ulrich Furbach from Koblenz presented a more traditional logical approach to AI, touching on computability and key topics like commonsense reasoning, and how to automate background knowledge.

One of my favourite presenters was David Vernon from Skövde. He provided a philosophical overview of cognitive systems and cognitive architectures: if we’re going to build AI we have a whole bunch of bits which have to fit together; we can either take our inspiration from human intelligence or we can make it up completely, but either way we need to think at the systems level. His talk gave us a clear overview of the space of architectures, and how they relate to each other. He was also very funny, and not afraid to make bold statements about where we are with respect to solving the problem of AI: “We’re not at relativistic physics. We’re not even Newtonian. We’re in the dark ages here”.

Prof Vernon giving us an overview of the conceptual space of cognitive architectures

When he stood up I thought to myself “he looks familiar” and it turns out I actually bought a copy of his book a couple of weeks ago. Guess I’d better read it now.

Kristian Kersting of TU Dortmund and Michael Beetz of Bremen both presented work that bridges the gap between symbolic and subsymbolic reasoning; Kristian talked about logics and learning, through Markov Logic Networks and the like. Michael described a project in which they’d worked on getting a robot to understand recipes from wikiHow, which involved learning concepts like “next to” and “behind”. Both these talks gave us examples of real-world AI that can solve the kinds of problems traditional AI has had difficulty with: systems that connect the symbolic and the subsymbolic. I particularly liked Prof. Beetz’s idea of bootstrapping robot learning using VR: by coding up a model of the workspace the robot is going to be in, it’s possible to get people to act out the robot’s motions in a virtual world, enabling the robot to learn from examples without real-world training.

Prof Kersting reminding us how we solve problems in AI

Each night the students presented posters showing their own work, apart from the one night we went into the big city for a conference dinner. Unfortunately, the VR demo had real issues with the hotel wifi.

The venue for our conference dinner, a converted windmill in the centre of old Bremen

In all a very good week, which has really got me thinking. Big thanks to Mehul Bhatt of Bremen for inviting me out there. I’d certainly like to contribute to more events like this, if it’s possible; it was a real luxury to spend a full week with such bright students and such inspiring faculty. I alternated between going “Wow, this is so interesting – isn’t it great that we’re making so much progress!” and “Oh no, there is so much left to do!”.

Hotel Bootshaus from the river

Electromagnetic Field 2016

Last weekend was Electromagnetic Field, the UK’s main Hacker/maker camp. It’s an outstanding opportunity for meeting up with tinkerers, coders and makers from across the UK and beyond. I was at the first EMF (in 2012, blog post here) talking about women in tech, and went back to this one to talk about schools outreach and the work we’re doing with kids and families. I spoke about schools and kids engagement in general, but also more specifically about our EU playfulcoding project. You can see my talk here:

And you can view the slides here, if you just want slides, not talk.

The talk was well received, though the room wasn’t full – but that’s fine: one of the cool things about EMFcamp is the sheer range of stuff going on. Over the course of the weekend I went to talks on computer history, quantum effects in imaging, IT security from a sociological standpoint, penetration testing, hardware hacking, animating dinosaurs and the mathematics of the Simpsons. I also went to hands-on workshops on VR, deep machine learning, card-based reasoning (“having daft ideas”) and paper circuits. These were all part of the official program – submitted and approved before the event, allowing people to schedule and so on.

There were also lots of minor “installation” type hacks around the place, and a whole heap of drop in activities. I played some computer games in the retro gaming tent (Sonic the hedgehog), went in a musical ball pit, watched fire pong, and generally strolled around the site going WOW.

I had never been in a ball pit before. I am so going to make one of these.

“The Robot Arms” was the name of the camp bar, and it had an API, so you could look online to see how much beer had been sold. Someone even wrote a script to calculate how many drinks had been sold in the last minute, so you could tell how busy it was without going down to check. All the bar staff – indeed everyone at the event – were volunteers, which gives the whole thing a really nice cooperative feeling. I was sat eating my veggie breakfast in the food area on Sunday morning when someone asked for help setting out the chairs at the main stage, and about 10 of us just got up and did it. Loads of my friends there did shifts on the bar, or marshalling in the car park (I spoke, and figured that was probably enough :-). At the closing ceremony Jonty (one of the main organisers) asked everyone who’d volunteered or spoken to stand up, and I swear about 25% of the people there did. It really did make for a friendly event.
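I don’t have the bar’s API or the author’s script to hand, but the drinks-in-the-last-minute calculation is just a difference of cumulative counts. Here’s a minimal Python sketch of the idea, with the sample data entirely made up:

```python
def drinks_in_last_minute(samples):
    """samples: (timestamp_seconds, cumulative_drinks) pairs, sorted by
    time. Returns the number of drinks sold in the final 60 seconds."""
    if not samples:
        return 0
    latest_t, latest_total = samples[-1]
    baseline = samples[0][1]
    # Walk backwards to the newest sample that is at least 60s old.
    for t, total in reversed(samples):
        if latest_t - t >= 60:
            baseline = total
            break
    return latest_total - baseline

# Pretend we polled the bar's sales count every 20 seconds:
readings = [(0, 100), (20, 104), (40, 109), (60, 117), (80, 123)]
```

With those made-up readings, the last minute spans the 104-drink mark to the 123-drink mark: a busy bar.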

What a cool pub sign, eh?

Much to my embarrassment, I fell out of a hammock installation on the last night though. I was fine getting in there, but the dismount was … inelegant.

This has made my return to Aberystwyth a couple of days late, via the excellent first aid tent and the A&E at Guildford hospital (Royal Surrey). Nothing’s broken, which is a relief, but my gosh it’s all a bit bruised.

my opinion of hammocks is not positive

In all – I loved it, again. I’ll definitely go in 2018.

The last Early Mastery meeting, Girona

Early last Sunday I left sunny mid-Wales for the last ever meeting in our EU Erasmus+ project “Early Mastery/Playful Coding”.

We flew from Bristol to Girona with Ryanair (who call Girona “Barcelona”, which gives some clue to its location). The cloud cover cleared as soon as we crossed the channel, and the view from the airplane was rather lovely. The Pyrenees in particular were stunning.

Once in Girona we met up with the Ysgol Bro Hyddgen crew, teachers from the school up the road in Machynlleth. A chatty evening spent in a lovely riverside bar rounded off the day of travel nicely. Monday morning, bright and early, we headed up into Girona old town for the project meeting proper.

Here’s our (now traditional) meeting arrival selfie: from left to right, Tegid (technology teacher from Bro Hyddgen) and Anna (Welsh teacher from Bro Hyddgen), Martin (schools liaison teaching fellow, Aberystwyth Uni) and me, Tomi (ICT teacher from Bro Hyddgen). One of the great things about this project and these meetings in general is we’ve ended up building really good links with the school just up the road, as well as with people across the EU.

The project

Over the last 18 months I’ve written quite a few blog posts about the project. We’ve done a lot of schools work and we’ve had 5 management meetings (of which this was the last). We’ve also had 2 longer “training meetings”, where teachers and academics have tried out each other’s workshops. Every workshop we’ve written has been run by more than one group, and most (indeed, all but one) have been run in 2 or 3 different countries. Impact-wise, we’ve done quite a lot:

  • 45 talks, seminars, training days or other events
  • 80 schools
  • 600 teachers
  • 4000 students
  • 1 book

Did I mention we’ve written a book? The book contains instructions and information for running these workshops yourself. If you’re a teacher looking for easy lessons, or a lecturer looking for cool outreach, or a professional running a code club, or just an interested parent… the book has some great ideas in it. And some typos. But that’s not the end of the world.

The book launch

Our book “Playful Coding: Engaging young minds with creative computing” has been written, collated, edited, typeset and is now not only a PDF (available for free from http://playfulcoding.udg.edu/teacher-guide/, English now, translations to follow) but is also a physical printed book which looks frankly lovely.

As a team, we are skeptical about learning-to-code initiatives that concentrate on getting the skills to get a good job. Coding should be fun, challenging and playful. We hope this comes through in the book. There’s talk about assessment and pedagogy but there’s also a lot of fun, and the activities are all fundamentally cross curricular and hopefully playful.

The meeting concluded with a formal book launch where local teachers came to pick up a physical copy and chat with us over coffee and cake. It was really cool to see so many local teachers turn up to pick up a book in English – we will offer the other project languages (Spanish, Catalan, Romanian, Italian, French, Welsh) shortly but the first to be finished was our one common language.

Underwater robotics with kids

One workshop that’s not in the book is Xefi Cufi’s underwater robotics workshop for kids. It needs some fairly specialised kit… and a swimming pool. But it was great to see that in action too. Here are some junior roboteers building their chassis:

Here’s Eduard showing off the finished product:

And here’s their underwater robotics test pool…

What next?

It’s been a hectic, fascinating, challenging project and at times it’s felt a bit chaotic. I’m still slightly surprised that we’ve managed to do everything we said we would, so well, in the time we had: not only write and run and test workshops, but also write a book. We’re academics, teachers, researchers, outreach officers, postgrads and classroom assistants from 5 different countries, but we’ve become a team. I’ve loved the collaborative aspect of the project, and seeing how other countries work has been eye-opening. My own practice has improved, and I’m sure that some of my ideas have helped to improve practice in other parts of the world, and that’s such a great feeling.

In the wake of Brexit it’s hard to know where we go now. The consortium worked well together and we did some great stuff; there are plans to submit a follow-on grant too. Will I be on it? Well, they say they’d like me to be, but in the absence of any firm plans it’s hard to push for that: as a Brit, I’m a liability on a Euro project and will remain so until there are serious assurances around research and education funding. I don’t see that happening very soon.

Which is very sad indeed. We’ve done some good work on this project.

On the plus side, I have a sabbatical in semester 2 next year, and they do underwater robots, so… I think I’ll be back. Hasta la vista.

Using video in teaching

I gave a talk today about using short videos in teaching, to the Aberystwyth University Teaching and Learning conference (info here). The conference is an annual event which serves as a showcase for best practice in the uni, and it’s always interesting to see what people are up to. As part of my prep for the talk I did a lot of thinking about the different uses of video in learning and teaching, and about the different types of video I’ve put together. So I thought I’d do a blog post about that.

If you’re interested in the how, as well as the what and why, you can find my slides on Google Drive here.

Uses of video

Illustration of a visual point: some things are just best illustrated with a picture or a video. There are lots of examples of this in computer vision; here’s one showing moving-average motion detection. This is really hard to do in slides without video.
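The video itself isn’t reproduced here, but the standard technique behind this kind of demo is running-average background subtraction: keep a slowly-updating average of the scene, and flag pixels that differ from it. A toy plain-Python sketch (no OpenCV; each “frame” is just a flat list of grey levels):

```python
def detect_motion(frames, alpha=0.2, threshold=30):
    """Running-average background subtraction on greyscale frames
    (each frame a flat list of pixel intensities, 0-255).
    Returns, per frame, the set of pixel indices flagged as moving."""
    background = [float(p) for p in frames[0]]
    masks = []
    for frame in frames:
        moving = {i for i, p in enumerate(frame)
                  if abs(p - background[i]) > threshold}
        masks.append(moving)
        # Blend the current frame into the background model.
        background = [(1 - alpha) * b + alpha * p
                      for b, p in zip(background, frame)]
    return masks

static = [10] * 4
moved  = [10, 10, 200, 10]   # pixel 2 suddenly changes
masks = detect_motion([static, static, moved])
```

The `alpha` blend means slow lighting changes get absorbed into the background, while abrupt changes (things actually moving) stand out.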

Illustration of a phenomenon that is kinda hard to do in person: sometimes – maybe because things are dangerous, or there’s a piece of kit that’s really expensive – it can be difficult to “take the students to the phenomenon”. So video is a way of bringing the phenomenon to the students. An example of this is a video I made for a friend from the Welsh Crucible program, whose wife was teaching Sylvia Plath’s bee poems to her sixth formers – I called the video Beekeeping for poets. It’s a bit scrappy but it gets the ideas across. This was a very early foray into video making for me, so it hasn’t got sound or anything. But I like it anyway.

Illustration of a concept I find tricky: sometimes I’m just not that confident about a particular topic. Particularly with the details of algorithms that get complex, I often worry about tripping up in a lecture. These are also topics that students probably want to revisit more than once, so the video serves several purposes: it gives me a bit of breathing space and additional confidence in the lecture, and it also gives the students an easy way to repeat the difficult bit. An example of this is my DES encryption video from the information security section of CS270. Graphically it’s not great, but practically, it’s saved me a lot of stress :-)

These three videos also illustrate three different types of video: the screencast, the video-clips-and-captions, and the canned presentation.

Other reasons to use video include summarisation, previews, simplifications, and the option to introduce new voices. One thing I really want to look into in the future is bringing in interviews with practitioners, probably by recording Skype/Hangouts calls.

Soapbox Science, Cardiff

Soapbox Science is a public engagement event designed to get scientists out into the public and into public spaces, talking about their work. It’s supposed to demystify science (a bit) but also to change people’s perceptions of what scientists look like; one of the ways it does this is by making all of the scientists on the soapbox women. When I heard about it, I thought… Public engagement? Women in Science? Sounds a bit mad? Guess I’d better apply then!

The nearest event to me this year was Cardiff, and it took place yesterday. As you can probably guess from this blog post, I got in.

Having got in, my next problem was what to talk about… for 30 minutes, to a general passers-by kind of audience, without computers or posters or anything like that. As a vision scientist, who works with computers, that’s quite the challenge. The topic I settled on was Shadows.

One of the cool things about Soapbox Science is that it’s OK to bring along props. Some of the scientists had brains, or little bits of gold, or fungus, or felt-and-wax artistic renderings of tumours (no, srsly, they did). I went for an Arduino-powered cardboard box.

This involved having a NeoPixel ring inside a cardboard box, programmed with various lighting patterns, and a button on the outside which switched pattern every time it was pressed. My hope was that, with a ring-shaped light source, you could look out through the middle of the ring and see from the viewpoint of the light source (as Da Vinci said, “No luminous body sees the shadow it casts”, or something like that). But the viewpoint was just off, so you could actually see the shadows anyway. So I made the 16 light sources either chase around in different colours, gradually illuminate (making the shadows hazy), or gradually go off (making the shadows sharp again).
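The real thing ran as Arduino C on the microcontroller (and is on my GitHub); as a rough illustration, here’s the button-cycles-pattern control logic sketched in Python, with the pattern names my own invention rather than the actual sketch’s:

```python
# Hypothetical pattern names standing in for the Arduino sketch's modes.
PATTERNS = ["colour_chase", "fade_up_hazy", "fade_down_sharp"]

class ShadowBox:
    """Tiny model of the box's control logic: each button press
    advances to the next lighting pattern, wrapping around."""
    def __init__(self):
        self.index = 0

    def press_button(self):
        self.index = (self.index + 1) % len(PATTERNS)

    @property
    def pattern(self):
        return PATTERNS[self.index]

    def brightness_levels(self, n_leds=16, lit=5):
        """For the fade patterns: how many of the ring's 16 LEDs are on.
        More LEDs lit -> bigger light source -> softer, hazier shadows;
        fewer lit -> sharper shadows."""
        return [255 if i < lit else 0 for i in range(n_leds)]

box = ShadowBox()
box.press_button()  # one press: move from the chase to the hazy fade
```

The wrap-around modulo is the whole trick: one physical button is enough to reach every pattern.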

Inside the box I hung a small plastic model of a skateboarder, soldered all the bits together, and then I had my main prop: a Shadowbox. The shadow effects it created really were quite strange, and they served quite well to illustrate the idea that the size of the light source, the colour of the light source and the colour of the screen all affect a shadow’s appearance.

As you look into the box through one of two holes, the experience of seeing the shadows is quite disconcerting, and it can take a while to work out what exactly is going on. But that’s OK – I wanted something kind-of “installationy” and this worked quite well as a visual experience. What didn’t work so well was the skater on a string – as the figurine was suspended from the lid using fishing wire, she swung wildly from side to side if anyone knocked the box, making it all just that little bit more incomprehensible.

The other props I took were some flipbooks, made from a 50-frame sequence of shadow video. I took books representing the input (the actual video), the ground truth (what we want our software to output), some intermediate processing steps, and the final output of our shadow detection routine. These were hacked together using Python and LaTeX; if you’re interested in any of the code (flipbook code or Arduino code) you can find it on my GitHub account. I also took some zoomed-in crops of images showing pixellated shadow or non-shadow regions, mainly just to show how hard it is to detect shadows when your input is pixels. And I took some Sharpies and a sketchbook because … I NEEDED PROPS.
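The actual scripts are on my GitHub; as a rough illustration of the idea, here’s a minimal Python sketch (file names hypothetical) that emits a LaTeX document with one frame per small page – print it, staple the edge, and flick:

```python
def flipbook_tex(frame_paths):
    """Build a LaTeX document that puts one video frame on each
    small page, so the printed, stapled result works as a flipbook."""
    lines = [
        r"\documentclass{article}",
        r"\usepackage[paperwidth=6cm,paperheight=8cm,margin=2mm]{geometry}",
        r"\usepackage{graphicx}",
        r"\pagestyle{empty}",
        r"\begin{document}",
    ]
    for path in frame_paths:
        lines.append(r"\includegraphics[width=\linewidth]{%s}" % path)
        lines.append(r"\newpage")
    lines.append(r"\end{document}")
    return "\n".join(lines)

# Hypothetical frame names, e.g. as dumped by ffmpeg:
frames = ["frame%03d.png" % i for i in range(50)]
tex = flipbook_tex(frames)
```

One book per processing stage (input, ground truth, intermediate maps, output) then just means running the same script over four different frame directories.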

So yesterday, Saturday morning 4th June, I got up early and drove down to Cardiff with a boot full of electronics and poorly put together flipbooks. I arrived just after 12, to a control centre in Yr Hen Llyfrgell which was a hive of activity, helpers, organisers, mascots, labcoats, tshirts, props and of course scientists. And balloons. And coffee. Each scientist was allocated a helper to assist with props and so on: my helper was a very nice and efficient Cardiff Uni medical student called Gunjan who was awesome at ensuring I had the things I needed when I needed them.

One of the mascots was the Cardiff University Dragon, who’s called Dylan. Apparently it was really very hot indeed inside the dragon. The other mascot (who I didn’t get a photo of) was a teddy bear. I’m not sure why.

We’d been advised to have a few 5-minute ideas for talks, and we’d been told we might get questions/heckles and so on, so repeating bits was probably going to be necessary. The time came and I went out, with this written on the back of my hand:

  • Me and science
  • Shadow formation
  • Computer vision
  • Pixels, videos
  • Ground truth
  • Colour, texture

Our soapboxes were at a busy intersection in Cardiff’s shopping district, quite near a woman with an amplifier singing eighties lounge songs (niiice). As talk venues go, I can’t think of many more challenging. The actual “talking about science to the general public on a soapbox” bit was almost exactly as terrifying as I thought it would be.

For the first half hour session, I stood up, talked, drew an audience of about 15, caught people’s eyes, talked some more, waved my props around, tried to get people to look into the shadow box, and then ran out of things to say. Looking at my watch I realised I was just 2 minutes from the end of the session, so that was OK and the audience did have questions. Most of them had stayed till the end, too. They might have had even more questions I suppose, if I had at any point slowed down enough for them to get a word in…

During my second stint on the box, I had a completely different experience. For the first 5 minutes I had no audience, and then a guy I vaguely recognised (maybe from the Crucible?) came and watched at the request of one of the organisers. Which was nice. I didn’t really want to talk to an audience of 0. Slowly more people came and went, including some kids (who really liked the flipbooks) and a remarkable heckler who thought I was a bloke.

At the end of the day everyone was quite hyper, and we all agreed it had been super fun if terrifying. Here’s a picture of me with my excellent helper Gunjan:

At this point I needed to stretch my legs and be quiet for half an hour, so I went and checked into my hotel before returning to the afterparty (complete with wine, for those who do that sort of thing). All in all a good day. I’m not sure that it’s my favourite form of public engagement, but it certainly got me out there and out of my comfort zone, talking to people I’d never have spoken with otherwise.

What I learned from going to every exercise class once

I’m just back from the workout called Insanity, which was the last class in my personal mission to try every type of exercise class offered at Aber Uni at least once (except yoga – you’ve got to draw the line somewhere). I saved Insanity for last, you can probably guess why.

Being a proper nerd of course I kept a spreadsheet, with comments and some estimates (percentage of class completed, approximate proportion of guys, that kind of thing). So here are some stats:

  • Highest max heartrate reached: Insanity. Today. 140bpm
  • Lowest max HR: Pilates, where most sessions topped out around 70bpm
  • Lowest heartrate reached: also Pilates, 53bpm. Those classes can be relaxing
  • Maximum steps per class: an hour’s Zumba class with 4725 steps
  • Maximum steps per minute of class: a 45 minute Dumbbell workout
  • Hardest class: a tie between work-it-circuits, Bootcamp, and Insanity. I’m sure that the nice man who teaches ordinary circuits will be disappointed to learn this.
  • Highest proportion of blokes: Circuits, with an estimated 75% bloke proportion
  • Lowest proportion of blokes: 6 classes had no guys at all (2 of the Aerobics classes, one of the Pilates, one of the Zumbas, a Piyo and a Bootcamp)

The project has taken me 2 months, and has involved going to 2 or 3 classes a week, 23 classes in total. To my surprise, there aren’t any which I’ve actively disliked. The only one I don’t think I’ll return to is PiYo, and that’s because it’s a bit too much like yoga. I think my favourites are dumbbell workout, bodyfit, and (surprisingly for me) bootcamp. But I’ll go back to pretty much all of the others too.

For any aber people wondering… here’s the timetable.

To Kill A Machine

On Friday I went to Cardiff to see a play. It’s a long way to go for a play, but this one’s special. It’s written by my friend Catrin, who’s a law lecturer here in Aberystwyth, and it concerns Alan Turing. She wrote it during the Alan Turing centenary year (2012), and the play has grown and developed since. Some of the actors read a scene at the BCS Mid Wales AGM in 2012, and I thought it was captivating. Since then, my interaction with the play has been accidentally at-a-distance. I wrote a piece on AI for the program, I supported the kickstarter, I spoke to colleagues and friends who loved it, I met the director and producer, I wrote letters of support, I tweeted about it. I had tickets once, but then I had the flu. It played in Aber, but I was in London. It played in London, but I was in Scotland.

So it was quite a relief to actually finally see it. It was even more of a relief to find the play was exactly as brilliant as I’d been told.

This play is not a sanitised biopic. This has not been edited for the Hollywood audience.

It’s an uncompromising story about a brilliant man, who refused to compromise in his work or in his private life. The actors are all great, but Gwydion Rhys (who plays Turing) is particularly captivating; he speaks as I imagine Turing would have done: pausing, thoughtful, awkward. The central device of the play is the imitation game; this is also at the core of Turing’s 1950 paper “Computing Machinery and Intelligence”. In the paper, Turing debates whether it is possible to tell, by asking questions, whether you are talking to a man or a woman, or a human or a computer, and uses this debate to discuss the nature of artificial intelligence. The play uses the question-and-answer “game show” format to chilling effect.

The play goes to Edinburgh next week; it’s a one-act, fast-paced, challenging piece of theatre. If you’re in Edinburgh during the festival, I cannot recommend this enough. It’s on 7-31 August (except Tuesdays) at The Zoo, and here’s a link to buy tickets.

Porn star to attend computing conference

In 2015, an IEEE-sponsored conference is going to have an ex-Playboy centrefold as its guest star. Yes, you read that right. The committee of ICIP – coincidentally, almost entirely made up of guys – think it’s a good idea to have Lena do the prizegiving.

Who’s Lena? In a nutshell:

  • Back in 1973, some people wanted a test image
  • One of them had brought some porn to work (wat?)
  • So they said (hur hur) let’s scan that (wat?)
  • And then released it to the “vision community” who’ve been using it ever since…

(If you want to find out more about the background to Lena, I wrote a longer article a few months back about how a photo from playboy became a part of scientific culture.)

Now I realise that the past is another country and that things were very different in the 1970s – hell, I even remember a little bit of the 1970s (it was all brown, the electricity kept going off, and I ate my first curry on election night). But surely, now, we as a research community have grown up a bit and realised that using images from porn might be off-putting to some people? Leaving aside the fact that acquisition technologies have progressed a bit since 1973, and that there are now billions of images to choose from, surely we should have moved on from this? But no. This year, I saw Lena in four presentations, two posters, and (shock, horror) 3 of my own students’ face recognition assignments (they’d just copied a sample image from the web, and the obvious sample image for face detection? Lena).

It’s not that I’m anti-porn. It’s that I’m anti-porn-in-the-workplace. Nobody should have to look at someone else’s choice of erotica whilst working. The absurdity of the situation becomes clearer when you consider its converse: you can be fairly certain that if people started using shots of semi-naked men in our papers, the papers would be rejected. We certainly wouldn’t see our porn stars on the podium (although… James Deen to present the Marr Prize, yeah!).

And yet, in 2015, the organisers say:

Submit a paper and you might be one of the lucky few to receive a best paper award from her hands! Mark your calendar and do not miss this unique opportunity!

Peter Gabriel: Back to Front

It’s strange the way that knowledge can change the way we see things. I can’t see a live video feed without wondering how it was put together; how the effects were done; how it was mixed to make a (more or less unified) visual experience… and the gig I went to on Friday (Peter Gabriel, Birmingham) really made me think. Cameras, live video manipulation, and cool computer vision effects have really changed the live music experience.

The first time I noticed the use of live video effects in earnest was at an Arctic Monkeys gig in Grenoble, in early 2010. They’d used small screens at the front of the stage which were linked to cameras around the musicians, and which projected retro-style black and white images of the band in real time (actually, if I remember correctly, they were sepia-tinted for quite a lot of the show). These videos had been processed to give an old-school TV-not-quite-tuned-in effect – it was quite impressive to see live video editing, on the fly, applied to about eight camera feeds at the same time.

Fast forward five years or so, and now I can say that Peter Gabriel’s live show takes this to the next level. Small cameras are now ubiquitous and they certainly were on stage on Friday; it seemed as though some instruments, most lights, and a couple of the people were wearing little video cameras available for live feeding into the stadium display (I’m not sure how many cameras there were, but my guess is “over 20”). Having only seen the show once I can’t tell how precisely choreographed it was, but my intuition is that there’s quite a bit of variation from night to night, so you can’t just have it programmed in advance (“oh, we’re halfway through Sledgehammer, let’s cut to the GoPro on the drums for 12 seconds“).

A photo of the show, stolen from PG’s Facebook page

Perhaps unsurprisingly for an artist who’s always been at the forefront of visual effects and music, this gig provided quite the feast for the eyes. Pretty much each song was accompanied by a different set of computer-vision-driven VFX on the big screens. I spotted (I think) some Hough circles, some edge enhancement, lots of interesting noise effects, colour channel splitting with different delays on R, G and B (a cheap effect but quite trippy, actually), some more generic motion delay, and some fairly serious-looking 3D/depth imaging, probably driven by something like a Microsoft Kinect. Some of this might well have been precomputed, but a lot was done on the fly1.
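That channel-splitting trick is simple enough to sketch. Here’s a minimal pure-Python version (the function name and delay values are my own invention, and tuples stand in for full colour planes): each output frame takes its red channel from the current frame, its green from a couple of frames back, and its blue from a few frames further back still, which is what produces the trippy colour-trail look.

```python
from collections import deque

def channel_delay(frames, delay_g=2, delay_b=4):
    """Recombine each frame's red channel with the green channel from
    delay_g frames ago and the blue channel from delay_b frames ago.
    Each frame is an (r, g, b) tuple standing in for full channel planes."""
    history = deque(maxlen=max(delay_g, delay_b) + 1)
    out = []
    for frame in frames:
        history.append(frame)
        last = len(history) - 1
        r = history[last][0]                     # current red
        g = history[max(0, last - delay_g)][1]   # delayed green
        b = history[max(0, last - delay_b)][2]   # more-delayed blue
        out.append((r, g, b))
    return out

# With frames numbered 0..5, frame 5's output mixes red from frame 5,
# green from frame 3 and blue from frame 1:
frames = [(i, i, i) for i in range(6)]
print(channel_delay(frames)[5])  # → (5, 3, 1)
```

In a real pipeline you’d apply the same idea per-plane on NumPy arrays from a camera feed; the ring buffer is the whole trick, which is why the effect is cheap enough to run live on many feeds at once.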

Actually, it’s kind of strange; whilst there’s a lot of major video processing, done in real time, from a set of cameras … it’s a simultaneously technical and low-tech extravaganza. The lights are controlled by actual people and not robots; there were five great big triffid-like lighting rigs being pushed around the stage by three people each. There were more lighting people hiding in a gantry with spotlights. I spotted eight people lurking in the rigging, accessed by little rope ladders. This is a workforce-intensive show.

And the music? Well, that was just fucking awesome. From the opening acoustic numbers, to the fantastic way the band dropped into electric halfway through Family Snapshot, playing The Family And The Fishing Net (really not sure what it’s about but it’s always been my favourite, a dirty spooky creepy epic of a song, only improved by being played LOUD and live), skipping around Solsbury Hill, then playing the So album the entire way through (in order, with the original lineup), and closing with Biko, dedicated to young people who put their lives on the line to protest injustice (particularly the recent Mexican students). From start to finish it was brilliant. I think it was crowd-pleasing for both the passing fans and the diehards like me. Also, that man really knows how to work a stage. I don’t think I’ve enjoyed a gig as much for a long time.

So, as a vision researcher, I had to think – well wouldn’t that be a fun project? You could do some really remarkable live stuff with 3D/2.5D video mixing, feature tracking, pose estimation… get some real cutting edge computer vision into the performance. (Doing more lightweight processing like they do now would also make a great dissertation – open source live VFX for performance, anyone?).

1Someone’s bound to comment now saying that this is all trivial with After Effects plugins or something…