hannah dee

A tale of 3 engagements

For the last 9 weeks I’ve been visiting the University of Girona (UdG), working on some research in Vicorob and Udigital. I’ve taken part in three engagement activities whilst I’ve been here – even though I don’t speak the language. It turns out that with colleagues to help translate, it’s possible to be useful even without many words, although in the first two workshops I was more of an observer/helper than a facilitator. The first was an underwater robotics workshop with a visiting class of around 15 teenagers; the second was a wheeled robotics workshop with 9 adults in a high-security prison; and the third was an “unplugged” activity looking at Artificial Intelligence and Alan Turing with about 150 teenagers in 6 consecutive groups. The rest of this post has a bit more info on each.

Underwater robotics

This workshop took place in CIRS (Centre d’Investigació en Robòtica Submarina) at UdG, and was written and led by Xevi Cufi. The group came from a nearby boys’ school; they had been working on their robots back in school for a while, in groups, and had come to CIRS for the final construction and testing. The robots are made out of plumbing pipe and have three motors: two provide forwards and backwards thrust for the left and right sides of the robot, and the third provides up-and-down thrust. The basic robots were complete before the workshop, and in this session the participants did the final wiring (the controller is attached by a tethered wire) and water tests. Once the wiring was done, the first water test – in a large bucket – involved getting the robot to neutral buoyancy by attaching floats to the frame.

Then they had to try and make the cable neutral too, by attaching bits of float at regular spacing along the tether.

And finally the students got to use their robots in the test pool (CIRS has a massive pool for testing robots). Seeing this come together was great – the students were all fired up to run their contraptions in the water, and they all worked really well.

This is a big project – I think the students had been working on their robots for a couple of weeks on and off, and I expect a build from scratch would take a few days, as there’s soldering, wiring, building, testing and (fairly obviously) a lot of waterproofing. The payoff is fab though: they clearly got a real sense of achievement piloting their own robots around the pool, picking up objects, and trying not to get their tethers in a knot. With one underwater video camera and a live link to a monitor, which was passed between robots, the workshop really came alive. I’d like to try running this workshop in Aberystwyth.

Wheeled robots

The second workshop couldn’t have had a more different target audience. Instead of teenage Catholic schoolboys, there were adult prisoners in the Puig de les Basses prison just north of Figueres. In this workshop (also designed and led by Xevi) we used small wheeled Arduino robots, and programmed them in groups to flash lights, display messages, and move backwards and forwards. We have done a lot of wheeled robot workshops as part of the Early Mastery project (and before), and this one followed the general format (get something to run on the robot, modify that code, get the robot to move forwards and backwards). We had about 2 hours, and the participants were working in groups. Here’s a picture of the robot (taken during preparation) – you should be able to make out the display LCD, the LEDs and the wheels in this picture.

In the workshop the participants got to grips with the flashing lights activity very quickly, and the group I was working with seemed to be having fun setting up traffic lights using the R, G and Y LEDs. When the idea of the LCD display screen was introduced, my group decided to get it to give instructions to match the traffic lights (so it said “go” on green, etc.). This was a bit more elaborate than planned – the idea was they were just going to get it to say “hello” or something then move on to the next task – but they were enjoying the coding and working a lot of things out for themselves so we just let them run with it. As soon as one of the other groups got their robot to move, everyone changed their mind and wanted to move on to the next task anyway.

I don’t have any photos of the actual workshop as security was very tight and we weren’t allowed to take in phones, cameras or anything like that. Here’s a picture of the outside of the building though:

It’s amazing how the same thing happens in every robot workshop – whether it’s with 6-year-old kids or 50-year-old prisoners. As soon as one of the groups gets a robot to actually move, the atmosphere changes and everything moves up a gear. There is something intrinsically motivating about writing a program on a computer and getting it to move something in the real world. As a programming environment we used Visualino, which provides a block-based interface to Arduino C; I hadn’t seen it before but was very impressed, and I might use it in future.

AI Unplugged

The final engagement activity I have been involved in out here is based upon the AI workshop we wrote as part of the Technocamps project. This workshop has several components, and UdG were asked to run 6 consecutive 25-minute workshops with schoolkids in the town of Banyoles, as part of their end-of-term robotics project (actually, 3 sets of 6 consecutive workshops). So, with a lot of help from Eduard, we created some bilingual slides (English/Catalan) and did a double act. You can see the slides here.

In another room, Marta and Mariona were talking about STEAM and coding, and in yet another room Jordi was talking about various robotics challenges and activities, so the Udigital.edu team was out in force. Here we are having breakfast before the day begins…

The schoolkids had apparently been working on general robotics projects for a couple of weeks at the end of term, so we started by doing a tour of their demos, and saw some lovely little line followers, skittles robots, hill climbers and generally lots of excellent arduino goodies. Here’s one of their projects.

In the workshop Eduard and I ran, we held a series of votes asking the students whether they think computers can think. The way the workshop is structured, we had a vote at the start (to get their initial opinions), then we did an activity which encouraged them to think about what intelligence is, by ordering a load of things (chess computer, sheep, tree, self-driving car, kitten, human… there are about 30 things). This gets them to consider what thinking involves, without us being explicit or telling them what we think. Then we had another vote. After this we discussed which aspects of intelligence they thought were important, and which of those computers can do now; then we had a final vote and concluded with some talk about Alan Turing and the Turing Test.

The reason I like to get the participants to vote, repeatedly, on whether they think computers can think, is so that we can see if anyone changes their mind. In my experience (and I’ve run this workshop loads of times – maybe 50 times) people always do change their minds once they’ve thought a bit more about the question; it never ceases to surprise me how different groups can be, too. This time, some groups arrived confident that AI was possible and that computers could think. Some of the others arrived with hardly anybody in the group positive about the potential of AI. We changed some minds though – some in one direction, some in the other.

Here’s a graph of the three vote results, displayed as a proportion of those attending who said “yes” or “maybe” to the question “Can Computers Think?”

This workshop worked well, as you can see from the graph: in every group we managed to get people to think hard enough that some of them changed their minds. It was also great fun, if a bit relentless, running 6 workshops back to back. I think we saw about 150 kids.

Thanks

So thanks, Udigital, for letting me join in and see what you do in terms of outreach. It’s been a great 9-week visit, and I’ve got some ideas that I definitely want to try back in Aberystwyth.

EMRA17

I’m visiting Girona Uni at the moment as part of my sabbatical term, and whilst I’m here I’m trying to expand my horizons a bit academically. So this week I attended a workshop on marine robotics, which just happened to be going on whilst I’m here, and they let me attend for free. The workshop is about marine robotics, but it is not just a research conference: attendees came from 30 research centres and 12 companies, and presentations came from 14 EU projects, 4 national projects and 4 companies. On day one I saw 16 of the talks and then skipped the rest (including the demo and the dinner) as my folks were visiting and I thought I should probably spend some time with them :-)

Marine robotics is a bit outside my area, so it was challenging to sit in and try to follow talks that were at the limits of my knowledge. The conference was also considerably more applied than many of the conferences I go to – companies and researchers working together much more closely, and much closer to product; some of the things presented were research, others were actual pieces of kit that you can buy. The applications varied too, from science through to mining. The EU funding that supports these systems is really driving forward innovation in a collaborative way – many of the projects involve tens of institutions, from university research teams through SMEs to big companies.

The keynote came from Lorenzo Brignone of IFREMER, the French research centre that deals with oceanographic work. They have quite a fleet (7 research vessels), with manned submersibles, ROVs (remotely operated vehicles), AUVs (autonomous underwater vehicles), and a hybrid HROV (AUV/ROV), which was the topic of the keynote. Brignone works in the underwater systems unit, which is mostly made up of engineers. The key problem is that of working reliably underwater near boats which don’t have dynamic positioning – the surface vessel might move hundreds of metres, so the ROV needs to be more independent in order to carry out scientific missions reliably. The design covers the whole system: on-ship electronics, tether, traction, and a weighted underwater station which includes a fibre-optic link to the HROV. This lets the hybrid system work with vessels of opportunity, rather than waiting for science boats to become available. Two DVL (Doppler velocity log) systems give accurate underwater location. The final output is a semi-autonomous vehicle which can be operated by general users (the engineers don’t even have to be on the boat).

The next morning talk covered the DEXrov project, which is looking at systems that can control dextrous robots at a distance (hopefully from onshore, removing the cost of hiring a boat). The aim is to get robots that can interact underwater the way divers can. This is controlled by an exoskeleton-based system – basically, the operator wears an arm and hand exoskeleton which the robot then mimics.

Next was SWARMS – smart and networking underwater robots in cooperation meshes – a 31-partner consortium looking at networking technology as well as the robotics. The project is also developing middleware which will let various heterogeneous systems (AUVs, ROVs, mission control, boats) cooperate, with an underwater acoustic network linking to wireless on the surface.

Next up was Laurent Mortier from the BRIDGES project, a big H2020 project (19 partners including 6 SMEs) looking at glider technology. Gliders are very low-power underwater vehicles which can cover very long journeys collecting data: they create small changes in buoyancy and use wings to convert the resulting vertical motion into forward movement. This project looks to increase the depth at which gliders can work, which enables a greater range of scientific questions to be answered. The kind of data they look for depends on the payload, which can be scientific or commercial (searching for things like leaks from undersea hydrocarbon installations, or finding new oilfields).

Carme Paradeda of Ictineu submarines presented next, on moving from a manned submarine to underwater robots in a commercial setting. http://www.ictineu.net/en/ is their website; 3 million euros and more than 100,000 hours of R&D went into the creation of their submarine. This is a manned submarine whose development included new battery technology – safer batteries for operating at high pressure.

Marc Tormer of SOCIB (a Balearic Islands research centre) also talked about gliders. The aim is to change the paradigm of ocean observation: from intermittent missions on expensive research vessels, to continuous observations from multiple systems including gliders.

Graham Edwards from TWI (The Welding Institute) talked about the ROBUST H2020 project, which addresses seabed mining. The main resources they’re looking for are manganese nodules, though they are also looking at cobalt crusts and sulphide springs. The system uses laser spectroscopy on an AUV together with 3D acoustic mapping technology, to try to get away from the problems associated with dredging.

Pino Casalino of the University of Genova (ISME) had the last slot before lunch, talking about MARIS, an Italian national project working towards robot control systems for marine intervention. This provided another overview of a big multi-site project, looking at vision, planning and robot control. I have to admit that at this point my attention was beginning to wander.

One group photo and a very pleasant lunch later (I declined the option of a glass of wine, but did take advantage of the cheesecake and the espresso machine) we were back for an afternoon of talks.

The difficult post-lunch slot fell to Bruno Cardeira, from the Instituto Superior Técnico (Lisbon), talking about the MEDUSA deep-sea AUV. This was a joint Portugal–Norway project with a lot of partners, looking at deep-sea AUVs to survey remote areas up to 3000m deep. They wanted to do data collection and water-column profiling, resource exploration, and habitat mapping, with the aim of opening up new sea areas for economic exploitation.

Bruno also presented a hybrid AUV/ROV/Diver navigation system, the Fusion ROV which is a commercial product. This talk had a lot of videos with a loud rock soundtrack, which is one way to blow people out of their post-lunch lull, I guess.

The next talk came from Chiara Petroli, of the University of La Sapienza (Rome), talking about the SUNRISE project, which is working on the Internet of Things for underwater robotics: long-distance, low-cost, energy-efficient, secure comms in a heterogeneous underwater environment – where, of course, wifi doesn’t really work. They have developed dynamic, adaptive protocols which largely use acoustic communications.

Unfortunately by the end of this talk we were already running 15 minutes late (after just two talks). So the desire for coffee was running high in the audience, and I think I detected a snore or two.

Andrea Munafo from the National Oceanography Centre in Southampton talked about the OCEANIDS programme, sponsored by the UK’s NERC (Natural Environment Research Council). This programme is building new autonomous vehicles which will enable long-range autonomous missions; one of them is called Boaty McBoatface.

The last talk of this session came from Ronald Thenius, of the University of Graz, talking about Subcultron: a learning, self-regulating, self-sustaining underwater culture of robots. This is a 7-partner project across 6 countries, aiming to monitor the environment in the Venice lagoon using the world’s largest robot swarm, with energy autonomy and energy harvesting. Because Venice is big and has a lot of humans, the cultural aspect is quite important. The players: 5 aPads (inspired by lilies, with solar cells and radio comms), 20 aFish (inspired by fish; they move around and communicate), and 120 aMussels (inspired by clams, with many sensors, passive movement, NFC and energy harvesting). I liked this talk a lot.

Post coffee break, it was the turn of Nikola Miskovic from the University of Zagreb, talking about cooperative AUVs which can communicate with divers using hand gestures and tablets. The project (CADDY – autonomous diving buddy) enabled a number of advances, including letting the diver use Google Maps underwater. “The biggest challenge when you do experiments with humans and robots is the humans” :-)

Jörg Kalwa of ATLAS ELEKTRONIK GmbH spoke on the SeaCat story – from toy to product. SeaCat is an AUV/ROV hybrid with a variable payload, and grew out of various precursor systems (experimental and military) – the talk covered the robots which are ancestors of the current vehicle. The current incarnation is a commercial robot which does pretty much everything you might want an ROV to do, but the price point is pretty high.

The penultimate talk was from Eduardo Silva, ISEP / INESC TEC in Porto (Portugal), talking about underwater mining in flooded opencast mines. The project has a great acronym – Viable Alternative Mine Operating System, or VAMOS – and is big (17 partners from 9 countries). It has a bunch of collaborating robots, including torpedo-like vehicles which look like many of the others, and other underwater vehicles which look a lot more like mining machines – tracked tanks with massive drills and so on.

The day finished with the European Robotics League – a UEFA Champions League for robotics, covering service robots, industrial robots and outdoor robots. This talk came from Gabriele Ferri, CMRE, and focused on emergency robotics: ground, underwater and air robots cooperating in a disaster-response scenario. The mission is to find missing workers (mannequins) and bring them an emergency kit, survey the area, and stem a leak by closing a stopcock.

To be honest, my take home from this workshop is: underwater robots are cool, and brexit is an awesomely stupid idea.

BCSWomen AI Accelerator

BCSWomen Chair Sarah Burnett has had a fab idea, which is to hold a series of webinars about AI and how it is changing the world. In BCSWomen we do a lot of stuff about women, and a lot of stuff to support women, but we also do a lot of stuff that is useful for tech people in general. The AI Accelerator falls into this category: the idea is that tech is changing, AI is driving that change, so we’re going to try to provide a background and overview of AI to help people get to grips with it. Once I heard the idea I had to put my hand up for a talk, and I grabbed the general intro talk in the first session – “What is AI?“. The other speaker in the session was Andrew Anderson of Celaton, who talked about the business side of AI. If you want to join in, follow @bcswomen on twitter and I’m sure they’ll tweet about the next one soon.


the talk

As ever I went a bit over the top on the talk prep, but managed to come up with a theme and 45 slides with a bunch of reveals/animations that I thought covered some key concepts quite well. (Yes 45 slides for 20 minutes is a bit much but hey, I rehearsed the timings down to a tee and it was OK.) The live webinar had a few issues with audio, so I re-recorded my talk as a stand-alone youtube presentation; it’s not as good as the original outing (as a bit of time had passed and I hadn’t rehearsed as much) but I think it still works OK. If you want to watch it, here it is:

You can find the slides online here: AI Accelerator talk slides. I am 99% certain that all the images I used were either free for reuse, or created by me, but if it turns out I’ve used a copyright image let me know and I’ll replace it.


the reasoning behind the talk

I’ve been “doing” AI since I first went to uni in 1993, and what people mean when they say AI has changed massively over this time. Things that I read about as science fiction are now everyday reality, and a lot of this is down to advances in machine learning (ML). So when I started working on the talk I asked myself “What do people really mean, when they say AI?”; it turns out that a lot of the time they’re actually talking about ML. There are a lot of other questions that need to be raised (if not answered) – the difference between weak AI and strong AI, the concept of embodiment, the way in which some things we thought of as hard (e.g. chess) turned out to be quite easy, and some things we thought would be easy (e.g. vision) turned out to be quite hard. Hopefully the talk covered enough of this to introduce the questions.

I decided that for a tech talk there needed to be a bit of tech in it too though, which is why I spent the second half breaking down a bit what we mean by machine learning, and introducing some different subtypes of machine learning. I expect that if you work in the area there’s nothing much new in the talk, but hopefully it gives an overview, and also gives enough depth for people to learn something from it.


so what about the cute robots?

I wanted a visual example for my slides on ML, and particularly classification, so I created a robot image, then edited it a bit to get 16 different variants (different buttons, different numbers of legs, aerials, arm positions). I then wrote a short program to switch the colours around so I got twice as many (just switching the blue and the red channels gives some cyan robots and some yellow robots).

If you want to use them in talks or whatever, feel free. You can get all 32 of the robots, here, along with the python program that switches colours and the gimp file (.xcf) if you want to edit them yourself.
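The colour-switching program itself is in the download above rather than reproduced here, but the trick is only a few lines. As a sketch of the idea (using Pillow, which is not necessarily the library the original script used):

```python
from PIL import Image

def swap_red_blue(src_path, dst_path):
    # Open the image and split it into its colour channels.
    img = Image.open(src_path).convert("RGB")
    r, g, b = img.split()
    # Reassemble with red and blue swapped: a cyan pixel
    # (0, 255, 255) becomes yellow (255, 255, 0), and vice versa.
    Image.merge("RGB", (b, g, r)).save(dst_path)
```

Run over each of the 16 edited variants, a swap like this doubles the set to 32 images.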

Pumpkin Hack!

On Sunday we had our first Aberystwyth Robotics Club pumpkin hack. Kids, pumpkins, flashing lights and electronics together in a fun afternoon workshop.

In the carving station, the kids hacked away at their pumpkins with kid-safe tools, or gave their design to one of our high-powered Dremel-wielding helpers. With a suggested age range of 6-12 we weren’t going to let the attendees loose with super-sharp knives or power tools, but they managed to design their pumpkins themselves and to help cut them out (or at least, carve them).

In the coding zone, we had a bunch of laptops, a bunch of Arduino nano microcontrollers, battery packs, wires and some ultra-bright LEDs. Kids wired up their own microcontrollers, with assistance from our student helpers, then programmed them in Arduino C. The programming aspect was mostly copy-and-paste but with just an hour and a bit to spend on it the wiring and the coding was sufficient to keep everyone involved.

Our final display was so much more impressive than I expected.

Here’s the “After” pic:

Here’s a Google Docs link to the Arduino handout if you want to try running a similar event yourself. You need a lot of helpers, as it’s quite easy to wire things up wrong, and the coding involves working out which bits to copy and paste. But it works, and we had kids as young as 6 with flashing pumpkins and big smiles. The one scary moment was when we realised that Windows Update had run on all of our laptops, taking out the Arduino drivers. But with the help of one of the attendees we got around that (phew!) by booting into Linux and then editing the permissions on the USB ports.

HCC 2016: Human Centred Computing Summer School

Last week I was invited to present at the first Human Centred Cognition summer school, near Bremen in Germany. Summer schools are a key part of the postgraduate training experience, and involve gathering together experts to deliver graduate-level training (lectures, tutorials and workshops) on a particular theme. I’ve been to summer schools before as a participant, but never as faculty. We’re at a crossroads in AI at the moment: there’s been a conflict between “good old fashioned AI” (based upon logic and the symbol-manipulation paradigm) and non-symbolic or sub-symbolic AI (neural networks, probabilistic models, emergent systems) for as long as I’ve been in the field, but right now, with deep machine learning in the ascendant, it’s come to a head again. The summer school provided us with a great opportunity to think about this and to talk about it with real experts.

Typically, a summer school is held somewhere out of the way, to encourage participants to talk to each other and to network. This was no different, so we gathered in a small hotel in a small village about 20 miles south of Bremen. The theme of this one was human centred computing – cognitive computing, AI, vision, and machine learning. The students were properly international (the furthest travelled students were from Australia, followed by Brazil) and were mostly following PhD or Masters programs; more than half were computing students of one kind or another, but a few psychologists, design students and architects came along too.

The view from my room

The bit I did was a workshop on OpenCV, and I decided to make it a hands-on workshop where students would get to code their own simple vision system. With the benefit of hindsight this was a bit ambitious, particularly as a BYOD (Bring Your Own Device) workshop. OpenCV is available for all the main platforms, but it’s a pain to install, particularly on Macs. I spent a lot of time on the content (you can find it here: http://github.com/handee/opencv-gettingstarted) and not so much time thinking about infrastructure or installation. I think about half of the group got to the end of the tutorial, and another 5 or 10 managed to get some way along, but we lost a few to installation problems, and I suspect these were some of the less technical students. I used Jupyter notebooks to create the course, which let you intersperse code with text about the code, and I think this may have added an extra layer of installation difficulty rather than simplifying the practicalities. If I were to do it again I’d either have a virtual machine that students could use, or run it in a room where I knew the software. I certainly won’t be running it again in a situation where we’re reliant on hotel wifi…

The class trying to download and install OpenCV

The school timetable involved a set of long talks – all an hour or more – mixed between keynote and lecture. The speakers were all experts in their field, and taken together the talks truly provided a masterclass in artificial intelligence, machine learning, the philosophical underpinnings of logic and robotics, intelligent robotics, vision, perception and attention. I really enjoyed sitting in the talks – some of the profs just came for their own sessions, but I chose to attend pretty much the whole school (I skipped the session immediately before mine, for a read through of my notes, and I missed a tutorial as I’d been sat down for too long and needed a little exercise).

It’s hard to pick a highlight as there were so many good talks. Daniel Levin from Vanderbilt (in Tennessee) gave an overview of research in attention and change blindness, showing how psychologists are homing in on the nature of selective attention and attentional blindness. I’ve shown Levin’s videos in my lectures before so it was a real treat to get to see him in person and have a chat. Stella Yu from Berkeley gave the mainstream computer vision perspective, from spectral clustering to deep machine learning. Ulrich Furbach from Koblenz presented a more traditional logical approach to AI, touching on computability and key topics like commonsense reasoning, and how to automate background knowledge.

One of my favourite presenters was David Vernon from Skövde. He provided a philosophical overview of cognitive systems and cognitive architectures: if we’re going to build AI we have a whole bunch of bits which have to fit together; we can either take our inspiration from human intelligence or we can make it up completely, but either way we need to think at the systems level. His talk gave us a clear overview of the space of architectures, and how they relate to each other. He was also very funny, and not afraid to make bold statements about where we are with respect to solving the problem of AI: “We’re not at relativistic physics. We’re not even Newtonian. We’re in the dark ages here“.

Prof Vernon giving us an overview of the conceptual space of cognitive architectures

When he stood up I thought to myself “he looks familiar” and it turns out I actually bought a copy of his book a couple of weeks ago. Guess I’d better read it now.

Kristian Kersting of TU Dortmund and Michael Beetz of Bremen both presented work that bridges the gap between symbolic and subsymbolic reasoning. Kristian talked about logics and learning, through Markov Logic Networks and the like; Michael described a project in which they’d worked on getting a robot to understand recipes from wikihow, which involved learning concepts like “next to” and “behind“. Both talks gave us examples of real-world AI that can solve the kinds of problems traditional AI has had difficulty with: systems that bridge the gap between the symbolic and the subsymbolic. I particularly liked Prof. Beetz’s idea of bootstrapping robot learning using VR: by coding up a model of the workspace that the robot is going to be in, it’s possible to get people to act out the robot’s motions in a virtual world, enabling the robot to learn from examples without real-world training.

Prof Kersting reminding us how we solve problems in AI

Each night the students gave a poster display showing their own work, apart from the one night we went into the big city for a conference dinner. Unfortunately the VR demo had real issues with the hotel wifi.

The venue for our conference dinner, a converted windmill in the centre of old Bremen

In all a very good week, which has really got me thinking. Big thanks to Mehul Bhatt of Bremen for inviting me out there. I’d certainly like to contribute to more events like this, if it’s possible; it was a real luxury to spend a full week with such bright students and such inspiring faculty. I alternated between going “Wow, this is so interesting, Isn’t it great that we’re making so much progress!” and “Oh no there is so much left to do!“.

Hotel Bootshaus from the river

Electromagnetic Field 2016

Last weekend was Electromagnetic Field, the UK’s main Hacker/maker camp. It’s an outstanding opportunity for meeting up with tinkerers, coders and makers from across the UK and beyond. I was at the first EMF (in 2012, blog post here) talking about women in tech, and went back to this one to talk about schools outreach and the work we’re doing with kids and families. I spoke about schools and kids engagement in general, but also more specifically about our EU playfulcoding project. You can see my talk here:

And you can view the slides here, if you just want slides, not talk.

The talk was well received, though the room wasn’t full – but that’s fine; one of the cool things about EMFcamp is the sheer range of stuff going on. Over the course of the weekend I went to talks on computer history, quantum effects in imaging, IT security from a sociological standpoint, penetration testing, hardware hacking, animating dinosaurs and the mathematics of the Simpsons. I also went to hands-on workshops on VR, deep machine learning, card-based reasoning (“having daft ideas”) and paper circuits. These were all part of the official programme – submitted and approved before the event, allowing people to schedule and so on.

There were also lots of minor “installation” type hacks around the place, and a whole heap of drop in activities. I played some computer games in the retro gaming tent (Sonic the hedgehog), went in a musical ball pit, watched fire pong, and generally strolled around the site going WOW.

I had never been in a ball pit before. I am so going to make one of these.

“The Robot Arms” was the name of the camp bar, and it had an API, so you could look online to see how much beer had been sold. Someone even wrote a script to calculate how many drinks had been sold in the last minute, so you could tell how busy it was without going down to check. All the bar staff – and indeed everyone at the event – were volunteers, which gives the whole thing a really nice cooperative feeling. I was sat eating my veggie breakfast in the food area on Sunday morning when someone asked for help setting out the chairs at the main stage, and about 10 of us just got up and did it. Loads of my friends there did shifts on the bar, or marshalling in the car park (I spoke, and figured that was probably enough :). At the closing ceremony Jonty (one of the main organisers) asked everyone who’d volunteered or spoken to stand up, and I swear about 25% of the people there did. It all made for a really friendly event.

What a cool pub sign, eh?

Much to my embarrassment, I fell out of a hammock installation on the last night though. I was fine getting in there, but the dismount was … inelegant.

This has made my return to Aberystwyth a couple of days late, via the excellent first aid tent and the A&E at Guildford hospital (Royal Surrey). Nothing’s broken, which is a relief, but my gosh it’s all a bit bruised.

my opinion of hammocks is not positive

In all – I loved it, again. I’ll definitely go in 2018.

The last Early Mastery meeting, Girona

Early last Sunday I left sunny mid-Wales for the last ever meeting in our EU Erasmus+ project “Early Mastery/Playful Coding”.

We flew from Bristol to Girona with Ryanair (who call Girona “Barcelona”, which gives some clue to its location). The cloud cover cleared as soon as we crossed the channel, and the view from the airplane was rather lovely. The Pyrenees in particular were stunning.

Once in Girona we met up with the Ysgol Bro Hyddgen crew, teachers from the school up the road in Machynlleth. A chatty evening spent in a lovely riverside bar rounded off the day of travel nicely. Monday morning, bright and early, we headed up into Girona old town for the project meeting proper.

Here’s our (now traditional) meeting arrival selfie: from left to right, Tegid (technology teacher from Bro Hyddgen), Anna (Welsh teacher from Bro Hyddgen), Martin (schools liaison teaching fellow, Aberystwyth Uni), me, and Tomi (ICT teacher from Bro Hyddgen). One of the great things about this project and these meetings is that we’ve built really good links with the school just up the road, as well as with people across the EU.

The project

Over the last 18 months I’ve written quite a few blog posts about the project. We’ve done a lot of schools work and we’ve had 5 management meetings (of which this was the last). We’ve also had 2 longer “training meetings”, where teachers and academics have tried out each other’s workshops. Every workshop we’ve written has been run by more than one group, and most (indeed all but one) have been run in 2 or 3 different countries. Impact-wise we’ve done quite a lot:

  • 45 talks, seminars, training days or other events
  • 80 schools
  • 600 teachers
  • 4000 students
  • 1 book

Did I mention we’ve written a book? The book contains instructions and information for running these workshops yourself. If you’re a teacher looking for easy lessons, or a lecturer looking for cool outreach, or a professional running a code club, or just an interested parent… the book has some great ideas in it. And some typos. But that’s not the end of the world.

The book launch

Our book “Playful Coding: Engaging young minds with creative computing” has been written, collated, edited, typeset and is now not only a PDF (available for free from http://playfulcoding.udg.edu/teacher-guide/, English now, translations to follow) but is also a physical printed book which looks frankly lovely.

As a team, we are skeptical about learning-to-code initiatives that concentrate on getting the skills to get a good job. Coding should be fun, challenging and playful. We hope this comes through in the book. There’s talk about assessment and pedagogy, but there’s also a lot of fun, and the activities are all fundamentally cross-curricular and hopefully playful.

The meeting concluded with a formal book launch where local teachers came to pick up a physical copy and chat with us over coffee and cake. It was really cool to see so many local teachers turn up to pick up a book in English – we will offer the other project languages (Spanish, Catalan, Romanian, Italian, French, Welsh) shortly but the first to be finished was our one common language.

Underwater robotics with kids

One workshop that’s not in the book is Xevi Cufi’s underwater robotics workshop for kids. It needs some fairly specialised kit… and a swimming pool. But it was great to see that in action too. Here are some junior roboteers building their chassis:

Here’s Eduard showing off the finished product:

And here’s their underwater robotics test pool…

What next?

It’s been a hectic, fascinating, challenging project and at times it’s felt a bit chaotic. I’m still slightly surprised that we’ve managed to do everything we said we would, so well, in the time we had: not only write and run and test workshops, but also write a book. We’re academics, teachers, researchers, outreach officers, postgrads and classroom assistants from 5 different countries, but we’ve become a team. I’ve loved the collaborative aspect of the project, and seeing how other countries work has been eye-opening. My own practice has improved, and I’m sure that some of my ideas have helped to improve practice in other parts of the world, and that’s such a great feeling.

In the wake of Brexit it’s hard to know where we go now. The consortium worked well together and we did some great stuff; there are plans to submit a follow-on grant too. Will I be on it? Well, they say they’d like me to be, but in the absence of any firm plans it’s hard to push for that: as a Brit, I’m a liability on a Euro project and will remain so until there are serious assurances around research and education funding. I don’t see that happening very soon.

Which is very sad indeed. We’ve done some good work on this project.

On the plus side, I have a sabbatical semester 2 next year, and they do underwater robots, so… I think I’ll be back. Hasta la vista.

Using video in teaching

I gave a talk today about using short videos in teaching, to the Aberystwyth University Teaching and Learning conference (info here). The conference is an annual event which serves as a showcase for best practice in the uni, and it’s always interesting to see what people are up to. As part of my prep for the talk I did a lot of thinking about the different uses of video in learning and teaching, and about the different types of video I’ve put together. So I thought I’d do a blog post about that.

If you’re interested in the how, as well as the what and why, you can find my slides on Google Drive here.

Uses of video

Illustration of a visual point: some things are just best illustrated with a picture or a video. There are lots of examples of this in computer vision; here’s one showing moving-average motion detection. This is really hard to do in slides without video.
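Moving-average motion detection itself is simple enough to show in a few lines: keep a running-average background model and flag pixels that differ from it. A minimal NumPy sketch of the general technique (not the code behind my video):

```python
import numpy as np


def detect_motion(frames, alpha=0.05, threshold=25):
    """Running-average background subtraction over greyscale frames.

    Returns one boolean mask per frame (after the first): True where
    a pixel differs from the slowly-updated background model.
    """
    background = frames[0].astype(float)
    masks = []
    for frame in frames[1:]:
        diff = np.abs(frame.astype(float) - background)
        masks.append(diff > threshold)
        # Blend the new frame into the background, so slow scene changes
        # get absorbed while fast movers stand out.
        background = alpha * frame + (1 - alpha) * background
    return masks
```

The `alpha` parameter controls how quickly the background adapts: small values keep moving objects visible for longer, large values absorb them into the model.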

Illustration of a phenomenon that is kinda hard to do in person: sometimes – maybe because things are dangerous, or because there’s a piece of kit that’s really expensive – it can be difficult to “take the students to the phenomenon”. So video is a way of bringing the phenomenon to the students. An example of this is a video I made for a friend from the Welsh Crucible programme, whose wife was teaching Sylvia Plath’s bee poems to her sixth formers – I called the video Beekeeping for poets. It’s a bit scrappy but it gets the ideas across. This was a very early foray into video making for me, so it’s not got sound or anything. But I like it anyway.

Illustration of a concept I find tricky: sometimes I’m just not that confident about a particular topic. Particularly with the details of algorithms that get complex, I often worry about tripping up in a lecture. These are also topics that students probably want to revisit more than once, so the video serves several purposes: it gives me a bit of breathing space and additional confidence in the lecture, and it also gives the students an easy way to repeat the difficult bit. An example of this is my DES encryption video from the information security section of CS270. Graphically it’s not great, but practically, it’s saved me a lot of stress:-)

These three videos also illustrate three different types of video: the screencast, the video-clips-and-captions, and the canned presentation.

Other reasons to use video include summarisation, previews, simplifications, and the option to introduce new voices. One thing I really want to look into in the future is bringing in interviews with practitioners, probably by recording Skype/Hangouts calls.

Soapbox Science, Cardiff

Soapbox Science is a public engagement event designed to get scientists out into the public and into public spaces, talking about their work. It’s supposed to demystify science (a bit) but also to change people’s perceptions of what scientists look like; one of the ways it does this is by making all of the scientists on the soapbox women. When I heard about it, I thought… Public engagement? Women in Science? Sounds a bit mad? Guess I’d better apply then!

The event I applied for was my nearest one this year, which was Cardiff – and it took place yesterday. As you can probably guess from this blog post, I got in.

Having got in, my next problem was what to talk about… for 30 minutes, to a general passers-by kind of audience, without computers or posters or anything like that. As a vision scientist, who works with computers, that’s quite the challenge. The topic I settled on was Shadows.

One of the cool things about Soapbox Science is that it’s OK to bring along props. Some of the scientists had brains, or little bits of gold, or fungus, or felt-and-wax artistic renderings of tumours (no, srsly, they did). I went for an Arduino-powered cardboard box.

This involved a NeoPixel ring inside a cardboard box, programmed with various lighting patterns, and a button on the outside which switched pattern each time it was pressed. My hope was that, with a ring-shaped light source, you could look out through the middle of the ring and so share the viewpoint of the light source itself (as Da Vinci said, “No luminous body sees the shadow it casts”, or something like that). But the viewpoint was just far enough off that you could actually see the shadows anyway. So I made the 16 light sources either chase around with different colours, or gradually come on, making the shadows hazy, or gradually go off, making the shadows sharp again.

Inside the box I hung a small plastic model of a skateboarder, soldered all the bits together, and then I had my main prop: a Shadowbox. The shadow effects it created really were quite strange, and they served quite well to illustrate the idea that the size of the light source, the colour of the light source and the colour of the screen all affect the shadow’s appearance.
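The control logic is simple: each button press advances to the next pattern, and each pattern decides which of the 16 lights are on at a given step. A rough Python model of that behaviour (the real thing is an Arduino sketch driving a NeoPixel ring – the pattern details here are illustrative, not the actual code):

```python
NUM_LEDS = 16


def chase(step):
    """One light lit, rotating around the ring each step."""
    return [1.0 if i == step % NUM_LEDS else 0.0 for i in range(NUM_LEDS)]


def fade_up(step):
    """Lights come on one by one: a bigger source, hazier shadows."""
    lit = min(step, NUM_LEDS)
    return [1.0 if i < lit else 0.0 for i in range(NUM_LEDS)]


def fade_down(step):
    """Lights go off one by one: a smaller source, sharper shadows."""
    lit = max(NUM_LEDS - step, 0)
    return [1.0 if i < lit else 0.0 for i in range(NUM_LEDS)]


PATTERNS = [chase, fade_up, fade_down]


class ShadowBox:
    def __init__(self):
        self.pattern = 0
        self.step = 0

    def button_pressed(self):
        # Each press moves to the next pattern, wrapping round.
        self.pattern = (self.pattern + 1) % len(PATTERNS)
        self.step = 0

    def tick(self):
        """Return the brightness of each of the 16 lights, then advance."""
        frame = PATTERNS[self.pattern](self.step)
        self.step += 1
        return frame
```

On the Arduino, `tick()` corresponds to writing the frame out to the ring on each pass through `loop()`, with the button read on a digital input pin.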

As you look into the box through one of two holes, the experience of seeing the shadows is quite disconcerting, and it can take a while to work out what exactly is going on. But that’s OK – I wanted something kind-of “installationy” and this worked quite well as a visual experience. What didn’t work so well was the skater on a string – as the figurine was suspended from the lid using fishing wire, she swung wildly from side to side if anyone knocked the box, making it all just that little bit more incomprehensible.

The other props I took were some flipbooks, made from a 50-frame sequence of shadow video. I took books representing the input (the actual video), the ground truth (what we want our software to output), some intermediate processing steps, and the final output of our shadow detection routine. These were hacked together using Python and LaTeX; if you’re interested in any of the code (flipbook code or Arduino code) you can find it on my github account. I also took some zoomed-in crops of images showing pixellated shadow and non-shadow regions, mainly just to show how hard it is to detect shadows when your input is pixels. And I took some sharpies and a sketchbook because … I NEEDED PROPS.
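The real flipbook code is on github as mentioned; the core idea is just a Python loop that emits a small LaTeX document with one video frame per page. A stripped-down sketch along those lines (the page size and filenames here are my own guesses, not taken from the repository):

```python
from pathlib import Path


def flipbook_tex(frame_paths, out="flipbook.tex"):
    """Write a small-paged LaTeX document with one video frame per page;
    compile it, print it, and staple down one edge for a flipbook."""
    pages = "\n\\newpage\n".join(
        "\\includegraphics[width=\\linewidth]{%s}" % p for p in frame_paths
    )
    doc = (
        "\\documentclass{article}\n"
        "\\usepackage[paperwidth=60mm,paperheight=45mm,margin=2mm]{geometry}\n"
        "\\usepackage{graphicx}\n"
        "\\pagestyle{empty}\n"
        "\\begin{document}\n"
        + pages +
        "\n\\end{document}\n"
    )
    Path(out).write_text(doc)
    return doc
```

Run it over the 50 extracted frames, compile with `pdflatex`, and you have one flipbook per processing stage.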

So yesterday, Saturday morning 4th June, I got up early and drove down to Cardiff with a boot full of electronics and poorly put together flipbooks. I arrived just after 12, to a control centre in Yr Hen Llyfrgell which was a hive of activity, helpers, organisers, mascots, labcoats, tshirts, props and of course scientists. And balloons. And coffee. Each scientist was allocated a helper to assist with props and so on: my helper was a very nice and efficient Cardiff Uni medical student called Gunjan who was awesome at ensuring I had the things I needed when I needed them.

One of the mascots was the Cardiff University Dragon, who’s called Dylan. Apparently it was really very hot indeed inside the dragon. The other mascot (who I didn’t get a photo of) was a teddy bear. I’m not sure why.

We’d been advised to have a few 5-minute ideas for talks, and we’d been told we might get questions/heckles and so on, so repeating bits was probably going to be necessary. The time came and I went out, with this written on the back of my hand:

  • Me and science
  • Shadow formation
  • Computer vision
  • Pixels, videos
  • Ground truth
  • Colour, texture

Our soapboxes were at a busy intersection in Cardiff’s shopping district, quite near a woman with an amplifier singing eighties lounge songs (niiice). As talk venues go, I can’t think of many more challenging. The actual “talking about science to the general public on a soapbox” bit was almost exactly as terrifying as I thought it would be.

For the first half-hour session, I stood up, talked, drew an audience of about 15, caught people’s eyes, talked some more, waved my props around, tried to get people to look into the shadow box, and then ran out of things to say. Looking at my watch I realised I was just 2 minutes from the end of the session, so that was OK – and the audience did have questions. Most of them had stayed till the end, too. They might have had even more questions, I suppose, if I had at any point slowed down enough for them to get a word in…

During my second stint on the box, I had a completely different experience. For the first 5 minutes I had no audience, and then a guy I vaguely recognised (maybe from the Crucible?) came and watched at the request of one of the organisers. Which was nice. I didn’t really want to talk to an audience of 0. Slowly more people came and went, including some kids (who really liked the flipbooks) and a remarkable heckler who thought I was a bloke.

At the end of the day everyone was quite hyper, and we all agreed it had been super fun if terrifying. Here’s a picture of me with my excellent helper Gunjan:

At this point I needed to stretch my legs and be quiet for half an hour, so I went and checked into my hotel before returning to the afterparty (complete with wine, for those who do that sort of thing). All in all a good day. I’m not sure that it’s my favourite form of public engagement, but it certainly got me out there and out of my comfort zone, talking to people I’d never have spoken with otherwise.

What I learned from going to every exercise class once

I’m just back from the workout called Insanity, which was the last class in my personal mission to try every type of exercise class offered at Aber Uni at least once (except yoga – you’ve got to draw the line somewhere). I saved Insanity for last, you can probably guess why.

Being a proper nerd of course I kept a spreadsheet, with comments and some estimates (percentage of class completed, approximate proportion of guys, that kind of thing). So here are some stats:

  • Highest max heart rate reached: Insanity. Today. 140bpm
  • Lowest max HR: Pilates, where most sessions I got up to 70bpm max
  • Lowest heart rate reached: also Pilates, 53bpm. Those classes can be relaxing
  • Maximum steps per class: an hour’s Zumba class with 4725 steps
  • Maximum steps per minute of class: a 45 minute Dumbbell workout
  • Hardest class: a tie between work-it-circuits, Bootcamp, and Insanity. I’m sure the nice man who teaches ordinary circuits will be disappointed to learn this.
  • Highest proportion of blokes: Circuits, with an estimated 75% bloke proportion
  • Lowest proportion of blokes: 6 classes had no guys at all (2 of the Aerobics classes, one of the Pilates, one of the Zumbas, a Piyo and a Bootcamp)

The project has taken me 2 months, and has involved going to 2 or 3 classes a week, 23 classes in total. To my surprise, there aren’t any which I’ve actively disliked. The only one I don’t think I’ll return to is PiYo, and that’s because it’s a bit too much like yoga. I think my favourites are dumbbell workout, bodyfit, and (surprisingly for me) bootcamp. But I’ll go back to pretty much all of the others too.

For any aber people wondering… here’s the timetable.