hannah dee

BMVA workshop: plants in computer vision

On Wednesday I hosted my first ever British Machine Vision Association (BMVA) one-day workshop. The BMVA are the organisation which drives forward computer vision in the UK, and they run a series of one-day technical meetings, usually in London, which are often very informative. To run one, you first have to propose it, and then the organisation works with you to pull together dates, programme, bookings and so on. If you work in computer vision and haven't been to one yet, you're missing out.

I won't write an overview of the whole day – that's already been done very well by Geraint from GARNet, the Arabidopsis research network. So if you want a really nice blow-by-blow account, pop over to the GARNet blog.

We had some posters, and some talks, and some demos, and around 55 attendees. The quality was good – one of the best plant imaging workshops I have been to, with no dud talks. I think London is an attractive venue, the meetings are cheap (£30 for non-members, £10 for members), and both of these factors contributed. But I suspect the real reason we had such a strong meeting was that we’re becoming quite a strong field.

The questions and challenges that come up will be familiar to people who work in other applied imaging fields, like medical imaging:

  • should we use machine learning? (answer: probably)
  • can we trust expert judgments? (answer: maybe… but not unconditionally!)
  • we need to share data – how can we share data? what data can we share?
  • if we can’t automatically acquire measurements that people understand, can we acquire proxy measurements (things which aren’t the things that people are used to measuring, but which can serve the same purpose)?
  • can deep learning really do everything?
  • if we're generating thousands of images a day, we have to be fully automatic – this means initialisation stages have to be eliminated somehow

One of the presenters – Milan Sulc, from the Centre for Machine Perception in Prague – wanted to demo his plant identification app. Unfortunately, we discovered that all of the plants at the BCS were plastic. Milan disappeared to a nearby florist's to get some real plants, at which point the receptionist arrived with an orchid. Which also turned out to be plastic. The lesson here? Always remember to bring a spare plant.

This workshop was part-funded by my EPSRC first grant, number EP/LO17253/1, which enabled me to bring two keynote speakers to the event, and that was another real bonus for me. Hanno Scharr from Jülich and Sotos Tsaftaris from Edinburgh are both people I've wanted to chat with for some time, and they both gave frankly excellent presentations. It was also very good to catch up with Tony Pridmore and the rest of the Nottingham group; it's been a while since I made it to a computer vision / plant science conference, as I had a diary clash over IAMPS this year.

We’re hoping to put together a special issue of Plant Methods on the meeting.

Pumpkin Hack!

On Sunday we had our first Aberystwyth Robotics Club pumpkin hack. Kids, pumpkins, flashing lights and electronics together in a fun afternoon workshop.

In the carving station, the kids hacked away at their pumpkins with kid-safe tools or gave their design to one of our high-powered, Dremel-wielding helpers. With a suggested age range of 6-12 we weren't going to let the attendees loose with super-sharp knives or power tools, but they managed to design their pumpkins themselves and help to cut them out (or at least, carve them).

In the coding zone, we had a bunch of laptops, a bunch of Arduino Nano microcontrollers, battery packs, wires and some ultra-bright LEDs. Kids wired up their own microcontrollers, with assistance from our student helpers, then programmed them in Arduino C. The programming aspect was mostly copy-and-paste, but with just an hour and a bit to spend on it, the wiring and the coding were sufficient to keep everyone involved.

Our final display was so much more impressive than I expected.

Here’s the “After” pic:

Here's a Google Docs link to the Arduino handout if you want to try running a similar event yourself. You need a lot of helpers, as it's quite easy to wire things up wrong, and the coding involves working out which bits to copy and paste. But it works, and we had kids as young as 6 with flashing pumpkins and big smiles. The one scary moment was when we realised that Windows Update had run on all of our laptops, taking out the Arduino drivers. But with the help of one of the attendees, we got around that (phew!) by booting into Linux and then editing permissions on the USB ports.

First paper from first grant!

We've had our first journal paper published from my EPSRC first grant. It gives a comprehensive review of work on the automated image analysis of plants – well, one particular type of plant, Arabidopsis thaliana. It's by Jonathan Bell and myself, and it represents a lot of reading, talking and thinking about computer vision and plants. We also make some suggestions which we hope can help inform future work in this area. You can read the full paper here, if you're interested in computer vision and plant science.

The first grant as a whole is looking at time-lapse photography of plants, and aims to build sort-of 3D models representing growth. It's coming to an end now, so we're wrapping up the details and publishing the work we've done. This means keen readers of this blog[1] can expect quite a few more posts relating to the first grant soon: we're going to release a dataset and a schools workshop, and we'll be submitting another journal paper talking about the science rather than the background.

[1] Yes, both of you

HCC 2016: Human Centred Computing Summer School

Last week I was invited to present at the first Human Centred Computing summer school, near Bremen in Germany. Summer schools are a key part of the postgraduate training experience, and involve gathering together experts to deliver graduate-level training (lectures, tutorials and workshops) on a particular theme. I've been to summer schools before as a participant, but never as faculty. We're at a crossroads in AI at the moment: there's been a conflict between "good old-fashioned AI" (based upon logic and the symbol-manipulation paradigm) and non-symbolic or sub-symbolic AI (neural networks, probabilistic models, emergent systems) for as long as I've known the field, and right now, with deep machine learning in the ascendant, it's come to a head again. The summer school provided us participants with a great opportunity to think about this and to talk about it with real experts.

Typically, a summer school is held somewhere out of the way, to encourage participants to talk to each other and to network. This was no different, so we gathered in a small hotel in a small village about 20 miles south of Bremen. The theme of this one was human centred computing – cognitive computing, AI, vision, and machine learning. The students were properly international (the furthest travelled students were from Australia, followed by Brazil) and were mostly following PhD or Masters programs; more than half were computing students of one kind or another, but a few psychologists, design students and architects came along too.

The view from my room

The bit I did was a workshop on OpenCV, and I decided to make it a hands-on workshop where students would get to code their own simple vision system. With the benefit of hindsight this was a bit ambitious, particularly as a BYOD (Bring Your Own Device) workshop. OpenCV is available for all the main platforms, but it's a pain to install, particularly on Macs. I spent a lot of time on the content (you can find that here: http://github.com/handee/opencv-gettingstarted) and not so much time thinking about infrastructure or installation. I think about half of the group got to the end of the tutorial, and another 5 or 10 managed to get some way along, but we lost a few to installation problems, and I suspect these were some of the less technical students. I used Jupyter notebooks to create the course, which let you intersperse code with text about the code, and I think this may have created an extra layer of difficulty in installation, rather than a simplification of practicalities. If I were to do it again I'd either try to have a virtual machine that students could use, or I'd run it in a room where I knew the software. I certainly won't be running it again in a situation where we're reliant on hotel wifi…
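For anyone curious about the flavour of the exercises, the notebooks built up from very small steps. A minimal first exercise might look something like the sketch below – written here in plain NumPy rather than OpenCV so it runs even without `cv2` installed, and using a synthetic image rather than a real one (this is illustrative, not the actual tutorial content):

```python
import numpy as np

def to_greyscale(rgb):
    """Convert an H x W x 3 RGB image to greyscale using the usual luma weights."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def threshold(grey, level=128):
    """Binarise a greyscale image: True where the pixel is brighter than `level`."""
    return grey > level

# A tiny synthetic "image": a dark frame with a bright square in the middle.
img = np.zeros((8, 8, 3))
img[2:6, 2:6] = 255

mask = threshold(to_greyscale(img))
print(mask.sum())  # 16 pixels in the bright square
```

With OpenCV installed, the same two steps are a `cv2.cvtColor` call followed by `cv2.threshold`; the point of starting this small is that students can see each array transformation in the notebook.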

The class trying to download and install OpenCV

The school timetable involved a set of long talks – all an hour or more – somewhere between keynote and lecture in style. The speakers were all experts in their field, and taken together the talks truly provided a masterclass in artificial intelligence, machine learning, the philosophical underpinnings of logic and robotics, intelligent robotics, vision, perception and attention. I really enjoyed sitting in the talks – some of the profs just came for their own sessions, but I chose to attend pretty much the whole school (I skipped the session immediately before mine, for a read-through of my notes, and I missed a tutorial as I'd been sat down for too long and needed a little exercise).

It’s hard to pick a highlight as there were so many good talks. Daniel Levin from Vanderbilt (in Tennessee) gave an overview of research in attention and change blindness, showing how psychologists are homing in on the nature of selective attention and attentional blindness. I’ve shown Levin’s videos in my lectures before so it was a real treat to get to see him in person and have a chat. Stella Yu from Berkeley gave the mainstream computer vision perspective, from spectral clustering to deep machine learning. Ulrich Furbach from Koblenz presented a more traditional logical approach to AI, touching on computability and key topics like commonsense reasoning, and how to automate background knowledge.

One of my favourite presenters was David Vernon from Skövde. He provided a philosophical overview of cognitive systems and cognitive architectures: if we're going to build AI, we have a whole bunch of bits which have to fit together; we can either take our inspiration from human intelligence or we can make it up completely, but either way we need to think at the systems level. His talk gave us a clear overview of the space of architectures, and how they relate to each other. He was also very funny, and not afraid to make bold statements about where we are with respect to solving the problem of AI: "We're not at relativistic physics. We're not even Newtonian. We're in the dark ages here."

Prof Vernon giving us an overview of the conceptual space of cognitive architectures

When he stood up I thought to myself “he looks familiar” and it turns out I actually bought a copy of his book a couple of weeks ago. Guess I’d better read it now.

Kristian Kersting of TU Dortmund and Michael Beetz of Bremen both presented work that bridges the gap between symbolic and subsymbolic reasoning. Kristian talked about logics and learning, through Markov Logic Networks and the like. Michael described a project in which they'd worked on getting a robot to understand recipes from wikiHow, which involved learning concepts like "next to" and "behind". Both these talks gave us examples of real-world AI that can solve the kinds of problems traditional AI has had difficulty with: systems that bridge the gap between the symbolic and the subsymbolic. I particularly liked Prof. Beetz's idea of bootstrapping robot learning using VR: by coding up a model of the workspace the robot is going to be in, it's possible to get people to act out the robot's motions in a virtual world, enabling the robot to learn from examples without real-world training.

Prof Kersting reminding us how we solve problems in AI

Each night the students gave a poster display showing their own work, apart from the one night we went into the big city for a conference dinner. Unfortunately the VR demo had real issues with the hotel wifi.

The venue for our conference dinner, a converted windmill in the centre of old Bremen

In all a very good week, which has really got me thinking. Big thanks to Mehul Bhatt of Bremen for inviting me out there. I'd certainly like to contribute to more events like this, if it's possible; it was a real luxury to spend a full week with such bright students and such inspiring faculty. I alternated between going "Wow, this is so interesting – isn't it great that we're making so much progress!" and "Oh no, there is so much left to do!"

Hotel Bootshaus from the river

The Aberystwyth Image Analysis Workshop (AIAW)

Last week (on Friday) we held the Aberystwyth Image Analysis Workshop. I think it was the 3rd, or maybe the 4th, one of these I've organised. The aim is to have some informal talks and posters centred around the theme of image analysis (including image processing, computer vision, and other image-related stuff) from across Aberystwyth. To encourage participation from people whether they've got results or not, we have 10-minute slots for problem statements, short talks, work in progress and so on, and 20-minute slots for longer pieces of work. This year 4 departments were represented in the talks: Computer Science, Maths, Physics and IBERS (biology), and we had speakers who were PhD students, postdocs, lecturers and senior lecturers (no profs this year, boo!).

The range of topics covered was, as usual, very broad – images are used in research all the time, and it's really useful to get together and see the kinds of techniques people are using. In Physics, they're working on tightly and precisely calibrated cameras and instruments, using images as a form of measurement. In Maths, images are fitted to models and used to test theories about (for example) foam. In Computer Science, people are working on cancer research using images, plant and crop growth modelling, and video summarisation (to name but a few of the topics). And the IBERS talk this year came from Lizzy Donkin, a PhD student who's working on slugs.

Lizzy and I have been trying to track slugs so that she can model how they forage – she spoke for 10 minutes on the problem of slugs and imagery, and I spoke for 10 minutes on preliminary slug tracking results. Here’s a screenshot of my favourite slide, showing the wide range of shape deformations that a single slug can undergo.

Old College minecraft and robotics workshop

On Wednesday, Aber Robotics Club put on a day of coding, gaming and robotics in Old College. We ran two workshops: one on Minecraft, and one on Mindstorms (lego robots). Each had about 30 kids in, and the aim was to have a techy day that taught attendees something new, but that was also fun: it was a summer holiday workshop after all.

In the Mindstorms Lego robots workshop we did a mixture of activities – most of which I've blogged about before. We did the "program a humanoid robot" exercise, where we get kids to write down programs for their parents (who end up blindfolded). We did the "steer a Lego robot around a track" exercise using remote control. And we did the "customise your Lego robot then have a bit of a fight" Robot Wars style event to finish. These are all tried and tested activities which work really well together, and they made for a good day, with enough content and learning, and enough fun and chaos too.

We also had a visit and a talk from Laurence Tyler of Aberystwyth Computer Science, who works in our space robotics group. He talked about robots in space, Mars rovers, satellites, Philae, and all sorts of other cool stuff. He brought along Blodwen, our scale model of the ExoMars lander, and explained how stuff made in Aberystwyth is actually going to end up on Mars. The kids asked all sorts of excellent questions and listened attentively throughout, which was great.

Over in the Minecraft room, the aim was to try to build bits of the Old College building collaboratively. There was apparently quite a bit of destruction as well as construction, but when I popped in at lunchtime I saw some fairly recognisable college parts, so they all managed to get something built in the end.

In the afternoon, the Minecraft crew had an introduction to programming in Minecraft, starting with a demo of Jim Finnis's castle generation software, which opened quite a few eyes and got a big "whoa!" from the audience: it's a super piece of code that just builds amazing castles programmatically. One of the key ideas you have to get in order to code in Minecraft is the idea of a 3D coordinate system (x, y and z): I'm not sure many of the kids had met that before, so there was quite a steep learning curve.
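The 3D coordinates idea is easier to grasp with a concrete example. Here's a toy version of programmatic castle-building in Python – a hypothetical `wall_ring` helper that just records block coordinates, rather than the actual castle generator or a real Minecraft connection:

```python
def wall_ring(cx, y, cz, half_width, height):
    """Return the (x, y, z) coordinates of a hollow square wall:
    a ring of blocks at each level from y up to y + height - 1."""
    blocks = []
    for level in range(y, y + height):
        for x in range(cx - half_width, cx + half_width + 1):
            for z in range(cz - half_width, cz + half_width + 1):
                # Keep only the perimeter, so the inside stays hollow.
                on_edge = (x in (cx - half_width, cx + half_width) or
                           z in (cz - half_width, cz + half_width))
                if on_edge:
                    blocks.append((x, level, z))
    return blocks

# A 5x5 ring, 3 blocks tall: 16 perimeter blocks per level.
castle = wall_ring(0, 0, 0, 2, 3)
print(len(castle))  # 48
```

In the Minecraft Pi setting, each of those coordinates would be handed to a block-placing call; the nested loops over x, y and z are exactly the conceptual leap the kids needed to make.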

We'll be revisiting these workshops in the next couple of weeks to see what went well and what needs to be worked on: the kids really liked them both. The Minecraft one has more of a setup overhead, as we needed to get hold of enough computers (30 Raspberry Pis, in the end) and sort out networking, a server, and so on. The Lego robots workshop is a more polished event now (we've run it a fair few times). I'm fairly sure that we'll run them both again, but they might need a bit of tweaking; in particular I'd like to think up a cool way of working with 3D coordinates for the Minecraft one, and I also think it might be good to introduce more "not-sitting-at-a-computer" bits.

Electromagnetic Field 2016

Last weekend was Electromagnetic Field, the UK’s main Hacker/maker camp. It’s an outstanding opportunity for meeting up with tinkerers, coders and makers from across the UK and beyond. I was at the first EMF (in 2012, blog post here) talking about women in tech, and went back to this one to talk about schools outreach and the work we’re doing with kids and families. I spoke about schools and kids engagement in general, but also more specifically about our EU playfulcoding project. You can see my talk here:

And you can view the slides here, if you just want slides, not talk.

The talk was well received, though the room wasn't full – but that's fine: one of the cool things about EMFcamp is the sheer range of stuff going on. Over the course of the weekend I went to talks on computer history, quantum effects in imaging, IT security from a sociological standpoint, penetration testing, hardware hacking, animating dinosaurs and the mathematics of the Simpsons. I also went to hands-on workshops on VR, deep machine learning, card-based reasoning ("having daft ideas") and paper circuits. These were all part of the official programme – submitted and approved before the event, allowing people to schedule and so on.

There were also lots of minor “installation” type hacks around the place, and a whole heap of drop in activities. I played some computer games in the retro gaming tent (Sonic the hedgehog), went in a musical ball pit, watched fire pong, and generally strolled around the site going WOW.

I had never been in a ball pit before. I am so going to make one of these.

"The Robot Arms" was the name of the camp bar, and it had an API, so you could look online to see how much beer had been sold. Someone even wrote a script to calculate how many drinks had been sold in the last minute, so you could tell how busy it was without going down to check. All the bar staff – and indeed everyone at the event – were volunteers, which gives the whole thing a really nice cooperative feeling. I was sat eating my veggie breakfast in the food area on Sunday morning when someone asked for help setting out the chairs at the main stage, and about 10 of us just got up and did it. Loads of my friends there did shifts on the bar, or marshalling in the car park (I spoke, and figured that was probably enough :). At the closing ceremony Jonty (one of the main organisers) asked everyone who'd volunteered or spoken to stand up, and I swear about 25% of the people there did. It all made for a really friendly event.
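I don't know how that script actually worked, but the "drinks in the last minute" calculation is easy to sketch if the API returns a timestamp per sale (the data format here is made up, not the real Robot Arms API):

```python
def drinks_last_minute(sale_timestamps, now):
    """Count sales whose timestamp (in seconds) falls within the last 60 seconds."""
    return sum(1 for t in sale_timestamps if now - 60 < t <= now)

# Hypothetical sales log: timestamps in seconds since the bar opened.
sales = [10, 55, 130, 150, 171, 175, 179]
print(drinks_last_minute(sales, 180))  # 5 sales between t=120 and t=180
```

In practice you'd poll the API on a timer and feed the returned timestamps into something like this; a high number means a long queue at the bar.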

What a cool pub sign, eh?

Much to my embarrassment, though, I fell out of a hammock installation on the last night. I was fine getting in there, but the dismount was … inelegant.

This made my return to Aberystwyth a couple of days late, via the excellent first aid tent and the A&E at Guildford hospital (Royal Surrey). Nothing's broken, which is a relief, but my gosh it's all a bit bruised.

my opinion of hammocks is not positive

In all – I loved it, again. I’ll definitely go in 2018.

The last Early Mastery meeting, Girona

Early last Sunday I left sunny mid-Wales for the last ever meeting in our EU Erasmus+ project “Early Mastery/Playful Coding”.

We flew from Bristol to Girona with Ryanair (who call Girona “Barcelona”, which gives some clue to its location). The cloud cover cleared as soon as we crossed the channel, and the view from the airplane was rather lovely. The Pyrenees in particular were stunning.

Once in Girona we met up with the Ysgol Bro Hyddgen crew, teachers from the school up the road in Machynlleth. A chatty evening spent in a lovely riverside bar rounded off the day of travel nicely. Monday morning, bright and early, we headed up into Girona old town for the project meeting proper.

Here's our (now traditional) meeting arrival selfie: from left to right, Tegid (technology teacher from Bro Hyddgen), Anna (Welsh teacher from Bro Hyddgen), Martin (schools liaison teaching fellow, Aberystwyth Uni), me, and Tomi (ICT teacher from Bro Hyddgen). One of the great things about this project and these meetings in general is that we've ended up building really good links with the school just up the road, as well as with people across the EU.

The project

Over the last 18 months I've written quite a few blog posts about the project. We've done a lot of schools work and we've had 5 management meetings (of which this was the last). We've also had 2 longer "training meetings", where teachers and academics have tried out each other's workshops. Every workshop we've written has been run by more than one group, and most (indeed all but one) have been run in 2 or 3 different countries. Impact-wise we've done quite a lot:

  • 45 talks, seminars, training days or other events
  • 80 schools
  • 600 teachers
  • 4000 students
  • 1 book

Did I mention we’ve written a book? The book contains instructions and information for running these workshops yourself. If you’re a teacher looking for easy lessons, or a lecturer looking for cool outreach, or a professional running a code club, or just an interested parent… the book has some great ideas in it. And some typos. But that’s not the end of the world.

The book launch

Our book “Playful Coding: Engaging young minds with creative computing” has been written, collated, edited, typeset and is now not only a PDF (available for free from http://playfulcoding.udg.edu/teacher-guide/, English now, translations to follow) but is also a physical printed book which looks frankly lovely.

As a team, we are sceptical about learning-to-code initiatives that concentrate on getting the skills to get a good job. Coding should be fun, challenging and playful, and we hope this comes through in the book. There's talk about assessment and pedagogy, but there's also a lot of fun, and the activities are all fundamentally cross-curricular and hopefully playful.

The meeting concluded with a formal book launch where local teachers came to pick up a physical copy and chat with us over coffee and cake. It was really cool to see so many local teachers turn up to pick up a book in English – we will offer the other project languages (Spanish, Catalan, Romanian, Italian, French, Welsh) shortly but the first to be finished was our one common language.

Underwater robotics with kids

One workshop that’s not in the book is Xefi Cufi’s underwater robotics workshop for kids. It needs some fairly specialised kit… and a swimming pool. But it was great to see that in action too. Here are some junior roboteers building their chassis:

Here’s Eduard showing off the finished product:

And here’s their underwater robotics test pool…

What next?

It’s been a hectic, fascinating, challenging project and at times it’s felt a bit chaotic. I’m still slightly surprised that we’ve managed to do everything we said we would, so well, in the time we had: not only write and run and test workshops, but also write a book. We’re academics, teachers, researchers, outreach officers, postgrads and classroom assistants from 5 different countries but we’ve become a team. I’ve loved the collaborative aspect of the project and seeing how other countries work has been eye opening. My own practice has improved, and I’m sure that some of my ideas have helped to improve practice in other parts of the world, and that’s such a great feeling.

In the wake of Brexit it's hard to know where we go now. The consortium worked well together and we did some great stuff; there are plans to submit a follow-on grant too. Will I be on it? Well, they say they'd like me to be, but in the absence of any firm plans it's hard to push for that: as a Brit, I'm a liability on an EU project, and will remain so until there are serious assurances around research and education funding. I don't see that happening very soon.

Which is very sad indeed. We’ve done some good work on this project.

On the plus side, I have a sabbatical in semester 2 next year, and they do underwater robots, so… I think I'll be back. Hasta la vista.

Using video in teaching

I gave a talk today about using short videos in teaching, to the Aberystwyth University Teaching and Learning conference (info here). The conference is an annual event which serves as a showcase for best practice in the uni, and it’s always interesting to see what people are up to. As part of my prep for the talk I did a lot of thinking about the different uses of video in learning and teaching, and about the different types of video I’ve put together. So I thought I’d do a blog post about that.

If you’re interested in the how, as well as the what and why, you can find my slides on Google Drive here.

Uses of video

Illustration of a visual point: some things are just best illustrated with a picture or a video. There are lots of examples of this in computer vision; here's one showing moving-average motion detection. This is really hard to do in slides without video.
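For the curious, the technique in that video is simple to state: keep a running (exponentially weighted) average of the frames as a background model, and flag pixels that differ from it by more than a threshold. A minimal NumPy sketch on synthetic frames – not the actual code behind the video – looks like this:

```python
import numpy as np

def motion_masks(frames, alpha=0.1, thresh=30):
    """Yield one boolean motion mask per frame, comparing each frame
    against a running-average background model."""
    background = frames[0].astype(float)
    for frame in frames:
        mask = np.abs(frame.astype(float) - background) > thresh
        # Nudge the background towards the current frame so slow
        # scene changes get absorbed into the model.
        background = (1 - alpha) * background + alpha * frame
        yield mask

# Synthetic sequence: a static dark scene, then a bright blob appears.
frames = [np.zeros((10, 10), dtype=np.uint8) for _ in range(5)]
frames.append(frames[0].copy())
frames[-1][3:6, 3:6] = 200

masks = list(motion_masks(frames))
print(masks[-1].sum())  # 9 moving pixels: the 3x3 blob
```

The `alpha` parameter controls how quickly stationary objects fade into the background, which is exactly the behaviour the video demonstrates visually.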

Illustration of a phenomenon that is kinda hard to do in person: sometimes – maybe because things are dangerous, or there's a piece of kit that's really expensive – it can be difficult to "take the students to the phenomenon". So video is a way of bringing the phenomenon to the students. An example of this is a video I made for a friend from the Welsh Crucible programme, whose wife was teaching Sylvia Plath's bee poems to her sixth formers – I called the video Beekeeping for poets. It's a bit scrappy but it gets the ideas across. This was a very early foray into video-making for me, so it hasn't got sound or anything. But I like it anyway.

Illustration of a concept I find tricky: sometimes I'm just not that confident about a particular topic. Particularly with the details of algorithms that get complex, I often worry about tripping up in a lecture. These are also topics that students probably want to revisit more than once, so the video serves several purposes: it gives me a bit of breathing space and additional confidence in the lecture, and it also gives the students an easy way to repeat the difficult bit. An example of this is my DES encryption video from the information security section of CS270. Graphically it's not great, but practically, it's saved me a lot of stress :-)

These three videos also illustrate three different types of video: the screencast, the video-clips-and-captions, and the canned presentation.

Other reasons to use video include summarisation, previews, simplifications, and the option to introduce new voices. One thing I really want to look into in the future is bringing in interviews with practitioners, probably by recording Skype/Hangouts calls.

Soapbox Science, Cardiff

Soapbox Science is a public engagement event designed to get scientists out into the public and into public spaces, talking about their work. It’s supposed to demystify science (a bit) but also to change people’s perceptions of what scientists look like; one of the ways it does this is by making all of the scientists on the soapbox women. When I heard about it, I thought… Public engagement? Women in Science? Sounds a bit mad? Guess I’d better apply then!

The event I applied for was my nearest one this year, which was Cardiff, and it took place yesterday. As you can probably guess from this blog post, I got in.

Having got in, my next problem was what to talk about… for 30 minutes, to a general passers-by kind of audience, without computers or posters or anything like that. As a vision scientist, who works with computers, that’s quite the challenge. The topic I settled on was Shadows.

One of the cool things about Soapbox Science is that it's OK to bring along props. Some of the scientists had brains, or little bits of gold, or fungus, or felt-and-wax artistic renderings of tumours (no, srsly, they did). I went for an Arduino-powered cardboard box.

This involved having a NeoPixel ring inside a cardboard box, programmed with various lighting patterns, and a button on the outside which switched pattern every time it was pressed. My hope was that, with a ring-shaped light source, it would be possible to look out through the middle of the ring and so share the viewpoint of the light source (as Da Vinci said, "No luminous body sees the shadow it casts", or something like that). But the viewpoint was just far enough off that you could actually see the shadows anyway. So I made the 16 light sources either chase around with different colours, or gradually illuminate, making the shadows hazy, or gradually go off, making the shadows sharp again.

Inside the box I hung a small plastic model of a skateboarder, I soldered all the bits together, and then I had my main prop: a Shadowbox. The shadow effects created really were quite strange, and they served quite well to illustrate the idea that the size of the light source, the colour of the light source and the colour of the screen all affect the shadow’s appearance.
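The button logic inside is the standard pattern-cycling idiom. The real thing ran as an Arduino sketch, but the structure is easy to show in Python (the pattern names here are illustrative, not the ones in my actual code):

```python
# Patterns the box cycled through (names are made up for illustration).
PATTERNS = ["colour_chase", "fade_up_hazy", "fade_down_sharp"]

class ShadowBox:
    """Tracks which lighting pattern is currently active."""
    def __init__(self):
        self.current = 0

    def button_pressed(self):
        """Each press advances to the next pattern, wrapping around."""
        self.current = (self.current + 1) % len(PATTERNS)
        return PATTERNS[self.current]

box = ShadowBox()
print(box.button_pressed())  # fade_up_hazy
```

On the Arduino side the same idea is an index incremented (modulo the pattern count) in the button's handler, with the main loop rendering whichever pattern the index selects.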

As you look into the box through one of two holes, the experience of seeing the shadows is quite disconcerting, and it can take a while to work out what exactly is going on. But that’s OK – I wanted something kind-of “installationy” and this worked quite well as a visual experience. What didn’t work so well was the skater on a string – as the figurine was suspended from the lid using fishing wire, she swung wildly from side to side if anyone knocked the box, making it all just that little bit more incomprehensible.

The other props I took were some flipbooks, made from a 50-frame sequence of shadow video. I took books representing the input (the actual video), the ground truth (what we want our software to output), some intermediate processing steps, and the final output of our shadow detection routine. These were hacked together using Python and LaTeX; if you're interested in any of the code (flipbook code or Arduino code) you can find it on my GitHub account. I also took some zoomed-in crops of images showing pixellated shadow or non-shadow regions, mainly just to show how hard it is to detect shadows when your input is pixels. And I took some sharpies and a sketchbook because … I NEEDED PROPS.
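The flipbook generation was, roughly, Python writing one small LaTeX page per frame for LaTeX to typeset; you then print, cut and staple. A sketch of the idea – the filenames and page geometry here are made up, not the actual code from my GitHub:

```python
def flipbook_tex(frame_files):
    """Build a minimal LaTeX document with one frame image per small page."""
    pages = "\n".join(
        "\\includegraphics[width=\\paperwidth]{%s}\\newpage" % f
        for f in frame_files
    )
    return ("\\documentclass{article}\n"
            "\\usepackage[paperwidth=6cm,paperheight=4cm,margin=0pt]{geometry}\n"
            "\\usepackage{graphicx}\n"
            "\\pagestyle{empty}\n"
            "\\begin{document}\n"
            + pages +
            "\n\\end{document}\n")

# Hypothetical frame filenames for a 50-frame sequence.
frames = ["frame%03d.png" % i for i in range(50)]
tex = flipbook_tex(frames)
print(tex.count("\\includegraphics"))  # 50
```

Compiling the result with `pdflatex` gives a PDF with one tiny page per frame, ready to guillotine into a flipbook.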

So yesterday, Saturday morning 4th June, I got up early and drove down to Cardiff with a boot full of electronics and poorly put together flipbooks. I arrived just after 12, to a control centre in Yr Hen Llyfrgell which was a hive of activity, helpers, organisers, mascots, labcoats, tshirts, props and of course scientists. And balloons. And coffee. Each scientist was allocated a helper to assist with props and so on: my helper was a very nice and efficient Cardiff Uni medical student called Gunjan who was awesome at ensuring I had the things I needed when I needed them.

One of the mascots was the Cardiff University Dragon, who’s called Dylan. Apparently it was really very hot indeed inside the dragon. The other mascot (who I didn’t get a photo of) was a teddy bear. I’m not sure why.

We’d been advised to have a few 5-minute ideas for talks, and we’d been told we might get questions/heckles and so on, so repeating bits was probably going to be necessary. The time came and I went out, with this written on the back of my hand:

  • Me and science
  • Shadow formation
  • Computer vision
  • Pixels, videos
  • Ground truth
  • Colour, texture

Our soapboxes were at a busy intersection in Cardiff's shopping district, quite near a woman with an amplifier singing eighties lounge songs (niiice). As talk venues go, I can't think of many more challenging. The actual "talking about science to the general public on a soapbox" bit was almost exactly as terrifying as I thought it would be.

For the first half hour session, I stood up, talked, drew an audience of about 15, caught people’s eyes, talked some more, waved my props around, tried to get people to look into the shadow box, and then ran out of things to say. Looking at my watch I realised I was just 2 minutes from the end of the session, so that was OK and the audience did have questions. Most of them had stayed till the end, too. They might have had even more questions I suppose, if I had at any point slowed down enough for them to get a word in…

During my second stint on the box, I had a completely different experience. For the first 5 minutes I had no audience, and then a guy I vaguely recognised (maybe from the Crucible?) came and watched at the request of one of the organisers. Which was nice. I didn’t really want to talk to an audience of 0. Slowly more people came and went, including some kids (who really liked the flipbooks) and a remarkable heckler who thought I was a bloke.

At the end of the day everyone was quite hyper, and we all agreed it had been super fun if terrifying. Here’s a picture of me with my excellent helper Gunjan:

At this point I needed to stretch my legs and be quiet for half an hour, so I went and checked into my hotel before returning to the afterparty (complete with wine, for those who do that sort of thing). All in all a good day. I'm not sure that it's my favourite form of public engagement, but it certainly got me out there and out of my comfort zone, talking to people who I'd never have spoken with otherwise.