hannah dee

Thinking and learning about play

I’ve just finished a MOOC (massive open online course) on play, with Futurelearn and the University of Sheffield: Exploring Play. Ideas about play have been coming up quite a bit in my work in the last few years – both in teaching (gamification, exploration) and in research (particularly in the research I’ve been doing into kids and coding). But I didn’t really know much about theoretical or practical ideas of play, particularly not outside of computing, so I signed up for a MOOC to take a broader look.

I found that early on in this course, the readings about play types enriched my conception of what play could be. Thinking about play in terms of taxonomies of play (rough and tumble play, imaginative play, etc.) has helped me break down what we mean when we say play, and I have found it useful to think about the things children do in terms of these taxonomies. I even found myself wondering how many different kinds of play particular activities or equipment afford, and wondering if I could alter activities to include more variety in play type. There’s an implicit assumption here, which is that play is good, and lots of types of play is better than one type of play. This has clear implications for the kinds of work we do with schoolkids (and to a lesser extent with uni students); a lot of our activities have time for exploratory play (“what does this do”). Thinking about play types leads us to try and incorporate other types of play, e.g. creative play (“what can I make this do”, “what can I make with this”), mastery play (“can I get better at playing with this”), communication play (“can I use this to communicate?”). This brings more variety into the activity, which may well lead to deeper learning.

The course was very broad, which I liked – I did it to get the big picture, and for that it really succeeded. We looked at cross-cultural play, online play, the spaces we play in, historical attitudes to play, disability and play…

Thinking about how those with disabilities can access play turned a lot of my ideas upside down: thinking in terms of play as activities with particular values leads to a normative understanding of play. The taxonomies provide a rich conception of what play could be, but they don’t dictate what it should be. Reading case studies about play and disability showed me that this normative conception (play should be educational, for example) doesn’t have to hold. Play doesn’t have to be something that lets children rehearse ideas for the real world. It can be something for itself – catching a ball repeatedly, fidget spinners, and other repetitive actions can all be playful in some way. These could be seen as mastery play, developing fine motor skills, or they could just be play. Allowing people the time, space, and equipment to explore play with whatever actions they are able to perform is something we need to do for our children and for ourselves.

In all I enjoyed the MOOC a lot – I think it will take a while for the ideas to settle in my mind as we touched briefly on a lot of different topics, but I also think that some of the things I’ve learned will be put into action in my teaching and outreach activities pretty soon.

A tale of 3 engagements

For the last 9 weeks I’ve been visiting the University of Girona (UdG), and working on some research in Vicorob and Udigital. I’ve taken part in three engagement activities whilst I’ve been here – even though I don’t speak the language. It turns out that with colleagues to help translate, it’s possible to be useful even without many words, although in the first two workshops I was more of an observer/helper than a facilitator. The first of these was an underwater robotics workshop, with a visiting class of around 15 teenagers; the second of these was a wheeled robotics workshop with 9 adults in a high security prison; and the third was an “unplugged” activity looking at Artificial Intelligence and Alan Turing with about 150 teenagers in 6 consecutive groups. The rest of this post has a bit more info on each.

Underwater robotics

This workshop took place in CIRS (Centre d’Investigació en Robòtica Submarina) at UdG, and was written and led by Xevi Cufi. The group came from a nearby boys’ school; they had been working on their robots back in school for a while, in groups, and had come to CIRS for the final construction and testing. These robots are made out of plumbing pipe, and have three motors: two provide forwards and backwards thrust for the left and right sides of the robot, and the third gives up and down. The basic robots were complete before the workshop, and in this session the participants did the final wiring (the controller is attached by a tethered wire) and water tests. Once the wiring was done, the first water test, in a large bucket, involved trimming the robot to neutral buoyancy by attaching floats to the frame.

Then they had to try and make the cable neutral too, by attaching bits of float at regular spacing along the tether.

And finally the students got to use their robots in the test pool (CIRS has a massive pool for testing robots). Seeing this come together was great – the students were all fired up to run their contraptions in the water, and they all worked really well.

This project is a big project, and I think the students had been working on their robots for a couple of weeks on and off. I expect a build from scratch would take a few days, as there’s soldering, wiring, building, testing and a lot lot lot of waterproofing (fairly obviously). The payoff is fab though: they clearly got a real sense of achievement piloting their own robots around the pool, picking up objects, and trying not to get their tethers in a knot. With one underwater video camera and a live link to a monitor, which was passed between robots, the workshop really came alive. I’d like to try and run this workshop in Aberystwyth.

Wheeled robots

The second workshop couldn’t have had a more different target audience. Instead of teenage Catholic schoolboys, there were adult prisoners in the Puig de les Basses prison just north of Figueres. In this workshop (also designed and led by Xevi) we used small wheeled Arduino robots, and programmed them in groups to flash lights, display messages, and move backwards and forwards. We have done a lot of wheeled robot workshops as part of the Early Mastery project (and before), and this one followed the general format (get something to run on the robot, modify that code, get the robot to move forwards and backwards). We had about 2 hours, and the participants were working in groups. Here’s a picture of the robot (taken during preparation) – you should be able to make out the LCD display, the LEDs and the wheels.

In the workshop the participants got to grips with the flashing lights activity very quickly, and the group I was working with seemed to be having fun setting up traffic lights using the R, G and Y LEDs. When the idea of the LCD display screen was introduced, my group decided to get it to give instructions to match the traffic lights (so it said “go” on green, etc.). This was a bit more elaborate than planned – the idea was they were just going to get it to say “hello” or something then move on to the next task – but they were enjoying the coding and working a lot of things out for themselves so we just let them run with it. As soon as one of the other groups got their robot to move, everyone changed their mind and wanted to move on to the next task anyway.

I don’t have any photos of the actual workshop as security was very tight and we weren’t allowed to take in phones, cameras or anything like that. Here’s a picture of the outside of the building though:

It’s amazing how the same thing happens in every robot workshop – whether it’s with 6 year old kids or 50 year old prisoners. As soon as one of the groups gets a robot to actually move, the atmosphere changes and everything moves up a gear. There is something intrinsically motivating about writing a program on a computer, and getting it to move something in the real world. As a programming environment, they used Visualino which provides a block-based interface to Arduino C; I hadn’t seen this before but was very impressed, and I might use it in future.

AI Unplugged

The final engagement activity I have been involved in out here is based upon the AI workshop that we wrote as part of the Technocamps project. This workshop has several components, and UdG were asked to do 6 consecutive 25 minute workshops with schoolkids in the town of Banyoles, as part of their end-of-term robotics project (actually, 3 sets of 6 consecutive workshops). So with a lot of help from Eduard we created some bilingual slides (English/Catalan) and did a double-act. You can see the slides here.

In another room, Marta and Mariona were talking about STEAM and coding, and in yet another room Jordi was talking about various robotics challenges and activities, so the Udigital.edu team was out in force. Here we are having breakfast before the day begins…

The schoolkids had apparently been working on general robotics projects for a couple of weeks at the end of term, so we started by doing a tour of their demos, and saw some lovely little line followers, skittles robots, hill climbers and generally lots of excellent arduino goodies. Here’s one of their projects.

In the workshop Eduard and I ran, we had a set of votes, asking the students whether they thought computers could think. The way the workshop is structured, we had a vote at the start (to get their initial opinions), then we did an activity which encouraged them to think about what intelligence is, by ordering a load of things (chess computer, sheep, tree, self-driving car, kitten, human… there are about 30 things). This gets them to consider what thinking involves, without us actually being explicit or telling them what we think. Then we had another vote. After this we discussed what aspects of intelligence they thought were important, and what aspects computers can do now; then we had a final vote and concluded with some talk about Alan Turing and the Turing Test.

The reason I like to get the participants to vote, repeatedly, on whether they think computers can think, is so that we can see if anyone changes their mind. In my experience (and I’ve run this workshop loads of times – maybe 50 times) people always do change their minds once they’ve thought a bit more about the question; it never ceases to surprise me how different groups can be, too. This time, some groups arrived confident that AI was possible and that computers could think. Some of the others arrived with hardly anybody in the group positive about the potential of AI. We changed some minds though – some in one direction, some in the other.

Here’s a graph of the three vote results, displayed as a proportion of those attending who said “yes” or “maybe” to the question “Can Computers Think?”

This workshop worked well, as you can see from the graph: in every group we managed to get people to think hard enough that some of them changed their minds. It was also great fun, if a bit relentless, running 6 workshops back to back. I think we saw about 150 kids.
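If you want to produce a similar graph from your own workshop, the metric is just (yes + maybe) / attendees for each vote. A minimal python sketch of the calculation – with invented counts, not our actual Banyoles numbers:

```python
def positive_proportion(yes, maybe, no):
    """Proportion of voters answering "yes" or "maybe" to "Can computers think?"."""
    total = yes + maybe + no
    return (yes + maybe) / total

# Invented counts for one group's three votes (start, middle, end) --
# not the real data from the workshop.
rounds = [("start", 10, 5, 10), ("middle", 8, 9, 8), ("end", 14, 6, 5)]
for label, y, m, n in rounds:
    print(f"{label}: {positive_proportion(y, m, n):.0%} positive")
```

Plotting those three proportions per group, side by side, gives a quick visual of who changed their minds and in which direction.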


So thanks, Udigital, for letting me join in and see what you do in terms of outreach. It’s been a great 9-week visit, and I’ve got some ideas that I definitely want to try back in Aberystwyth.


I’m visiting Girona Uni at the moment as part of my sabbatical term, and whilst I’m here I’m trying to expand my horizons a bit academically. So, this week I attended a workshop on marine robotics, which just happened to be going on whilst I’m here, and they let me attend for free. The workshop was on marine robotics, but it was not just a research conference: attendees came from 30 research centres and 12 companies, and presentations came from 14 EU projects, 4 national projects, and 4 companies. On day one, I saw 16 of the talks and then skipped the rest (including the demo and the dinner) as my folks were visiting and I thought I should probably spend some time with them :-)

Marine robotics is a bit outside my area so it was challenging to sit in and try and follow talks that were at the limits of my knowledge. The conference was also considerably more applied than many of the conferences I go to – companies and researchers working together much more closely, and much closer to product; some of the things presented were research, others were actual pieces of kit that you can buy. The applications varied too, from science through to mining. The EU funding that supports these systems is really driving forward innovation in a collaborative way – many of the projects involved tens of institutions, from university research teams through SMEs to big companies.

The keynote came from Lorenzo Brignone of IFREMER, the French research centre that deals with oceanographic stuff. They have quite a fleet (7 research vessels), with manned submersibles, ROVs (remotely operated vehicles), AUVs (autonomous underwater vehicles), and a hybrid HROV (AUV/ROV), which was the topic of the keynote. Brignone works in the underwater systems unit, which is mostly made up of engineers. The key problem is that of working reliably underwater near boats which don’t have dynamic positioning – the surface vehicle might move hundreds of metres, so we need an ROV that is more independent in order to carry out scientific missions reliably. The design includes the whole system, with on-ship electronics, tether, traction, and a weighted underwater station which includes a fibre-optic link to the HROV. This lets the hybrid system work with vessels of opportunity, rather than waiting for science boats to become ready. Two DVL (doppler velocity log) systems give accurate underwater location. The final output is a semi-autonomous vehicle which can be operated by general users (the engineers don’t even have to be on the boat).

The next morning talk covered the DEXrov project, which is looking at systems which can control dextrous robots at a distance (hopefully onshore, removing the cost of hiring a boat). The aim is to get robots that can interact underwater, like divers can. This is controlled by an exoskeleton-based system – basically, the operator wears an arm and hand exoskeleton which the robot then mimics.

SWARMS – smart and networking underwater robots in cooperation meshes – is a 31-partner consortium looking at networking tech as well as robotics tech. The project is also developing middleware which will let various heterogeneous systems (AUVs, ROVs, mission control, boats) cooperate, with an underwater acoustic network linking to wireless on the surface.

Next up Laurent Mortier from the BRIDGES project, which is a big h2020 project (19 partners including 6 SMEs) looking at glider technology. These systems are very low power underwater vehicles which can cover very long journeys, collecting data. Gliders create small changes in buoyancy, using wings to drive themselves forwards. This project looks to increase the depth that gliders can work at, which enables a greater range of scientific questions to be answered. The kind of data they look for depends on the payload, which can be scientific, or commercial (searching for things like leaks from undersea hydrocarbons, finding new oilfields).

Carme Paradeda of Ictineu Submarines presented next, on moving from a manned submarine to underwater robots in a commercial setting. http://www.ictineu.net/en/ is their website; more than 3 million euros and 100,000 hours of R&D went into the creation of their submarine. This is a manned submarine whose development included new battery technology – safer batteries for operating at high pressure.

Marc Tormer of SOCIB (a Balearic Islands research centre) also talked about gliders. The aim is to change the paradigm of ocean observation: from intermittent missions on expensive research vessels, to continuous observations from multiple systems including gliders.

Graham Edwards from TWI (The Welding Institute) talked about the ROBUST H2020 project, which addresses seabed mining. The resources they’re looking for are manganese nodules, though they’re also looking at cobalt crusts and sulphide springs. The system uses laser spectroscopy on an AUV with 3D acoustic mapping tech, to try and get away from the problems associated with dredging.

Pino Casalino of the University of Genova (ISME) had the last slot before lunch, talking about MARIS, an Italian national project working towards robot control systems for marine intervention. This provided another overview of a big multi-site project, looking at vision, planning and robot control. I have to admit that at this point my attention was beginning to wander.

One group photo and a very pleasant lunch later (I declined the option of a glass of wine, but did take advantage of the cheesecake and the espresso machine) we were back for an afternoon of talks.

The difficult post-lunch slot fell to Bruno Cardeira, from the Instituto Superior Técnico (Lisbon), talking about the MEDUSA deep sea AUV. This was a joint Portugal-Norway project with a lot of partners, looking at deep sea AUVs for surveying remote areas up to 3000m deep. They wanted to do data collection and water column profiling, resource exploration, and habitat mapping, with the aim of opening up new sea areas for economic exploitation.

Bruno also presented a hybrid AUV/ROV/Diver navigation system, the Fusion ROV which is a commercial product. This talk had a lot of videos with a loud rock soundtrack, which is one way to blow people out of their post-lunch lull, I guess.

The next talk came from Chiara Petroli, of the University of La Sapienza (Rome), talking about the SUNRISE project, which works on the internet of things for underwater robotics: underwater networking in a heterogeneous environment, with long-distance, low-cost, energy-efficient, secure comms… underwater, where of course wifi doesn’t really work. They have developed dynamic, adaptive protocols which rely largely on acoustic communications.

Unfortunately by the end of this talk we were already running 15 minutes late (after just two talks). So the desire for coffee was running high in the audience, and I think I detected a snore or two.

Andrea Munafo from the National Oceanography Centre in Southampton talked about the OCEANIDS project, sponsored by the UK’s NERC (Natural Environment Research Council). This programme is building some new AUVs which will enable long-range and autonomous missions. One of these is called Boaty McBoatface.

The last talk from this session came from Ronald Thenius, of Uni Graz, talking about Subcultron, a learning, self-regulating, self-sustaining underwater culture of robots. It has 7 partners from 6 countries, aiming to monitor the environment in the Venice lagoon with the world’s largest robot swarm, built around energy autonomy and energy harvesting. Because Venice is big and has a lot of humans, the cultural aspect is quite important. The players: 5 aPads (inspired by lilies, with solar cells and radio comms), 20 aFish (inspired by fish, which move around and communicate), and 120 aMussels (inspired by clams, with many sensors, passive movement, NFC and energy harvesting). I liked this talk a lot.

Post coffee break, it was the turn of Nikola Miskovic, from the University of Zagreb, talking about cooperative AUVs which can communicate with divers using hand gestures and tablets. The project (CADDY – autonomous diving buddy) allowed a number of advances, including the way that the diver could use Google maps underwater. “The biggest challenge when you do experiments with humans and robots is the humans“:-)

Jörg Kalwa of ATLAS ELEKTRONIK GmbH spoke on the SeaCat story – from toy to product. SeaCat is an AUV/ROV hybrid with a variable payload, which grew out of various precursor systems (experimental and military) – the talk covered the various robots which are ancestors of the current vehicle. The current incarnation is a commercial robot which does pretty much everything you might want an ROV to do, but the price point is pretty high.

The penultimate talk was from Eduardo Silva, ISEP / INESC TEC in Porto (Portugal), talking about underwater mining in flooded opencast mines. The project has a great acronym – Viable Alternative Mine Operating System, or VAMOS. It’s a big project (17 partners from 9 countries) with a bunch of collaborating robots, including AUVs which look like many of the others (torpedo-like), and other underwater vehicles which look a lot more like mining vehicles – tracked tanks, with massive drills and so on.

The day finished with the European Robotics League – a UEFA Champions League for robotics, covering service robots, industry robots and outdoor robots. This talk came from Gabriele Ferri, CMRE, on emergency robotics: ground, underwater and air robots cooperating in a disaster response scenario. The mission is to find missing workers (mannequins) and bring them an emergency kit, survey the area, and stem a leak by closing a stop cock.

To be honest, my take home from this workshop is: underwater robots are cool, and brexit is an awesomely stupid idea.

BCSWomen AI Accelerator

BCSWomen Chair Sarah Burnett has had a fab idea, which is to hold a series of webinars that talk about AI and how it is changing the world. In BCSWomen we do a lot of stuff about the women, and a lot of stuff to support women, but we also do a lot of stuff that is useful for tech people in general. The AI Accelerator falls into this category; the idea is that tech is changing, AI is driving that change, and so we’re going to try and provide a background and overview of AI to help people get to grips with it. Once I heard the idea I had to put my hand up for a talk, and I grabbed the first slot for a general intro talk – “What is AI?”. The other speaker in the session was Andrew Anderson of Celaton, who talked about the business side of AI. If you want to join in, follow @bcswomen on twitter and I’m sure they’ll tweet about the next one soon.

the talk

As ever I went a bit over the top on the talk prep, but managed to come up with a theme and 45 slides with a bunch of reveals/animations that I thought covered some key concepts quite well. (Yes 45 slides for 20 minutes is a bit much but hey, I rehearsed the timings down to a tee and it was OK.) The live webinar had a few issues with audio, so I re-recorded my talk as a stand-alone youtube presentation; it’s not as good as the original outing (as a bit of time had passed and I hadn’t rehearsed as much) but I think it still works OK. If you want to watch it, here it is:

You can find the slides online here: AI Accelerator talk slides. I am 99% certain that all the images I used were either free for reuse, or created by me, but if it turns out I’ve used a copyright image let me know and I’ll replace it.

the reasoning behind the talk

I’ve been “doing” AI since I first went to uni in 1993, and what people mean when they say AI has changed massively over this time. Things that I read about as science fiction are now everyday, and a lot of this is down to advances in machine learning (ML). So when I started working on the talk I actually asked myself “What do people really mean, when they say AI?”; it turns out that a lot of the time they’re actually talking about ML. There are a lot of other questions that need to be raised (if not answered) – the difference between weak AI and strong AI, the concept of embodiment, the way in which some things which we think of as hard (e.g. chess) turned out to be quite easy, and some things we thought would be easy (e.g. vision) turned out to be quite hard. Hopefully in the talk I covered enough of this stuff to introduce the questions.

I decided that for a tech talk there needed to be a bit of tech in it too though, which is why I spent the second half breaking down a bit what we mean by machine learning, and introducing some different subtypes of machine learning. I expect that if you work in the area there’s nothing much new in the talk, but hopefully it gives an overview, and also gives enough depth for people to learn something from it.

so what about the cute robots?

I wanted a visual example for my slides on ML and particularly classification, so I created a robot image, then edited it a bit to get 16 different variants (different buttons, different numbers of legs, aerials, arm positions). I then wrote a short program to switch the colours around so I got twice as many (just switching the blue and the red channels gives some cyan robots and some yellow robots).
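The channel switch itself is a one-liner if you treat the image as an array. Here’s a minimal sketch of the idea using numpy – this isn’t the actual script from the bundle, and it assumes plain 3-channel RGB images (for RGBA PNGs you’d swap only channels 0 and 2, leaving alpha alone):

```python
import numpy as np

def swap_red_blue(img):
    """Swap the red and blue channels of an RGB image array.

    img is a (height, width, 3) array with channels in RGB order;
    reversing the last axis gives BGR, which re-colours the robots
    while leaving the green channel (and the shapes) untouched.
    """
    return img[..., ::-1]

# A tiny 1x2 "image": one pure red pixel, one pure blue pixel.
demo = np.array([[[255, 0, 0], [0, 0, 255]]], dtype=np.uint8)
print(swap_red_blue(demo))  # the red pixel becomes blue, and vice versa
```

To apply it to real files you’d load each robot image into an array (e.g. with PIL or imageio), run it through this, and save the result under a new name.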

If you want to use them in talks or whatever, feel free. You can get all 32 of the robots here, along with the python program that switches colours and the gimp file (.xcf), if you want to edit them yourself.

The BCSWomen Lovelace Colloquium 2017

The 10th BCSWomen Lovelace Colloquium was held on April 12th, at Aberystwyth University. Around 200 attendees enjoyed a day of inspiring talks, fascinating student posters, careers advice, employers fair, lots of networking and too much cake. Our headline sponsor this year was Google, who covered loads of the student travel and also sent a speaker along.

As we pay for travel for all the poster contest finalists and as we were in Aberystwyth this year, we paid for 2 nights for everyone. This enabled us to have a social the night before, with Scott Logic providing a hackathon activity which got people talking and coding (and eating pizza).

Our keynote was Dr Sue Black OBE, founder of BCSWomen and general awesome person, who talked about her life and career to date, with PhD, Bletchley Park, Stephen Fry and the Queen. At the end of Sue’s talk she actually had a queue of people waiting for selfies. Then we had Carrie Anne Philbin of Raspberry Pi, who gave a fab talk about using your powers for good. If you haven’t seen her yet check out her youtube channel. Unfortunately she had to dash off which was a shame as people were so inspired by her talk that they kept asking me where she was for pretty much the rest of the day.

As usual the core of the day was an extended lunch and poster session. This year lunch was sponsored by GE, who also had a stand and helped out in lots of other ways.

Best First Year Student, sponsored by Google

1st: Frida Lindblad (Edinburgh Napier) – Making complexity simple in the world of technology

2nd: Aliza Exelby (Bath) – An examination of the effects of growing up in a digital age

Best Second Year Student, sponsored by JP Morgan

1st: Elise Ratcliffe (Bath) – Cryptography for website design

2nd: Rachael Paines (Open) – Where’s all the kit gone? Developing a bespoke equipment management system

Best Final Year Student

1st: Iveta Dulova (St Andrews) – Mobile device based framework for the prediction of early signs of mental health deviations / Hannah Khoo (Greenwich) – Analysing attacks on the CAN bus to determine how they can affect a vehicle

2nd: Louise North (Bath) – Optimising the energy efficiency of code using static program analysis techniques / Anna Rae Hughes (Sussex) – Safeguarding homelessness in a cashless society

Best MSc Student, sponsored by Amazon

1st: Caroline Haigh (Southampton) – Nul points and null values: using machine learning techniques to model Eurovision song contest outcomes

2nd: Isabel Whistlecroft (Southampton) – Can algorithms emulate abstract art?

People’s choice

First year Annette Reid (Bath) – “Ada Loved Lace”: how computer science and the textile industry influence each other

Second year Emma James (Bath) – Can machine learning trump hate?

Final year Rosie Hyde (Middlesex) – Can stress and anxiety be tracked through wearable technology?

MSc Leah Clarke (Durham) – Who will win Wimbledon 2017? Using deep learning to predict tennis matches

After the poster contest we had two more talks. The first was from Milka Horozova, of Google, who’s been at Google for just a few months. She met Google recruiters at the Lovelace in Edinburgh a few years ago, so is a real Lovelace Colloquium success story. Our last speaker was Christine Zarges of Aberystwyth Computer Science, who talked about nature-inspired computing – artificial immune systems, neural networks, evolutionary systems. Interesting stuff.

One cake break later (we have a lot of cake, thanks to our CAKE SPONSOR, Bloomberg – yes we have a cake sponsor) and we finished off with the panel session and prizegiving. On the panel was Dominika Bennani of JP Morgan, Carol Long from Warwick and a BCSWomen founder member, Milka Horozova from Google and Claire Knights from UTC Aerospace. And me. The idea of the panel is that all the students can ask any question they like, on anything to do with computing and computing careers, and it’s often my favourite part of the day.

After the close of the panel, we stopped for a group photo on the big steps. Once I get the official photos back I’ll post the big picture, but for now here’s a selfie:

And then the last part of the day was the social, sponsored by ThoughtWorks – with more cake, and some drinks to help the networking go smoothly.

This was my last event as chair: I started it in 2008, and have run it for 10 years, and now it’s time to pass it on. So at the end of the day I handed over to Helen Miles, who’s going to take the Lovelace forward (with me as deputy for a couple of years to ease the transition – I’m still going to be there, whatever!). Helen is also at Aberystwyth, and has an office just downstairs from me, which makes the handover easy. Next year, we’re going to Sheffield.

BMVA workshop: plants in computer vision

On Wednesday I hosted my first ever British Machine Vision Association (BMVA) one-day workshop. The BMVA are the organisation which drives forwards computer vision in the UK, and they run a series of one-day technical meetings, usually in London, which are often very informative. In order to run one, you have to first propose it, and then the organisation work with you to pull together dates, program, bookings and so on. If you work in computer vision and haven’t been to one yet, you’re missing out.

I won’t write an overview of the whole day – that’s already been done very well by Geraint from GARNet, the Arabidopsis research network. So if you want a really nice blow-by-blow account, pop over to the GARNet blog.

We had some posters, and some talks, and some demos, and around 55 attendees. The quality was good – one of the best plant imaging workshops I have been to, with no dud talks. I think London is an attractive venue, the meetings are cheap (£30 for non-members, £10 for members), and both of these factors contributed. But I suspect the real reason we had such a strong meeting was that we’re becoming quite a strong field.

The questions and challenges that come up will be familiar to people who work in other applied imaging fields, like medical imaging:

  • should we use machine learning? (answer: probably)
  • can we trust expert judgments? (answer: maybe… but not unconditionally!)
  • we need to share data – how can we share data? what data can we share?
  • if we can’t automatically acquire measurements that people understand, can we acquire proxy measurements (things which aren’t the things that people are used to measuring, but which can serve the same purpose)?
  • can deep learning really do everything?
  • if we’re generating thousands of images a day, we have to be fully automatic – this means initialisation stages have to be eliminated somehow.

One of the presenters – Milan Sulc, from the Centre for Machine Perception in Prague – wanted to demo his plant identification app. Unfortunately, we discovered that all of the plants at the BCS are plastic. Milan disappeared to a nearby florist’s to get some real plants, at which point the receptionist arrived with an orchid. Which also turned out to be plastic. The lesson here? Always remember to bring a spare plant.

This workshop was part-funded by my EPSRC first grant, number EP/L017253/1, which enabled me to bring two keynote speakers to the event – another real bonus for me. Hanno Scharr from Jülich and Sotos Tsaftaris from Edinburgh are both people I’ve wanted to chat with for some time, and they both gave frankly excellent presentations. It was also very good to catch up with Tony Pridmore and the rest of the Nottingham group; it’s been a while since I made it to a computer vision / plant science conference, as I had a diary clash with IAMPS this year.

We’re hoping to put together a special issue of Plant Methods on the meeting.

Pumpkin Hack!

On Sunday we had our first Aberystwyth Robotics Club pumpkin hack. Kids, pumpkins, flashing lights and electronics together in a fun afternoon workshop.

In the carving station, the kids hacked away at their pumpkins with kid-safe tools or gave their designs to one of our high-powered, Dremel-wielding helpers. With a suggested age range of 6–12, we weren’t going to let the attendees loose with super-sharp knives or power tools, but they managed to design their pumpkins themselves and help to cut them out (or at least, carve them).

In the coding zone, we had a bunch of laptops, a bunch of Arduino Nano microcontrollers, battery packs, wires and some ultra-bright LEDs. Kids wired up their own microcontrollers, with assistance from our student helpers, then programmed them in Arduino C. The programming was mostly copy-and-paste, but with just an hour and a bit to spend on it, the wiring and the coding were sufficient to keep everyone involved.

Our final display was so much more impressive than I expected.

Here’s the “After” pic:

Here’s a Google Docs link to the Arduino handout if you want to try running a similar event yourself. You need a lot of helpers, as it’s quite easy to wire things up wrong, and the coding involves working out which bits to copy and paste. But it works, and we had kids as young as 6 with flashing pumpkins and big smiles. The one scary moment was when we realised that Windows Update had run on all of our laptops, taking out the Arduino drivers. But with the help of one of the attendees, we got around that (phew!) by booting into Linux and editing the permissions on the USB ports.

First paper from first grant!

We’ve had our first journal paper published from my EPSRC first grant. It gives a comprehensive review of work on the automated image analysis of plants – well, of one particular plant, Arabidopsis thaliana. It’s by Jonathan Bell and myself, and it represents a lot of reading, talking and thinking about computer vision and plants. We also make some suggestions which we hope can help inform future work in this area. You can read the full paper here, if you’re interested in computer vision and plant science.

The first grant as a whole is looking at time-lapse photography of plants, and aims to build sort-of-3D models representing growth. It’s coming to an end now, so we’re wrapping up the details and publishing the work we’ve done. This means keen readers of this blog1 can expect quite a few more posts relating to the first grant soon: we’re going to release a dataset and a schools workshop, and we’ll be submitting another journal paper talking about the science rather than the background.

1Yes, both of you

HCC 2016: Human Centred Computing Summer School

Last week I was invited to present at the first Human Centred Computing summer school, near Bremen in Germany. Summer schools are a key part of the postgraduate training experience, and involve gathering together experts to deliver graduate-level training (lectures, tutorials and workshops) on a particular theme. I’ve been to summer schools before as a participant, but never as faculty. We’re at a crossroads in AI at the moment: there’s been a conflict between “good old fashioned AI” (based upon logic and the symbol manipulation paradigm) and non-symbolic or sub-symbolic AI (neural networks, probabilistic models, emergent systems) for as long as I’ve been in the field, but right now, with deep machine learning in the ascendant, it’s come to a head again. The summer school provided us participants with a great opportunity to think about this and to talk about it with real experts.

Typically, a summer school is held somewhere out of the way, to encourage participants to talk to each other and to network. This was no different, so we gathered in a small hotel in a small village about 20 miles south of Bremen. The theme of this one was human centred computing – cognitive computing, AI, vision, and machine learning. The students were properly international (the furthest travelled students were from Australia, followed by Brazil) and were mostly following PhD or Masters programs; more than half were computing students of one kind or another, but a few psychologists, design students and architects came along too.

The view from my room

The bit I did was a workshop on OpenCV, and I decided to make it a hands-on workshop where students would get to code their own simple vision system. With the benefit of hindsight this was a bit ambitious, particularly as a BYOD (Bring Your Own Device) workshop. OpenCV is available for all main platforms, but it’s a pain to install, particularly on Macs. I spent a lot of time on the content (you can find that here: http://github.com/handee/opencv-gettingstarted) and not so much time thinking about infrastructure or installation. I think about half of the group got to the end of the tutorial, and another 5 or 10 managed to get some way along, but we lost a few to installation problems, and I suspect these were some of the less technical students. I used Jupyter notebooks to create the course, which let you intersperse code with text about the code, and I think this may have created an extra layer of difficulty in installation, rather than a simplification of practicalities. If I were to do it again I’d either try and have a virtual machine that students could use, or I’d run it in a room where I knew the software. I certainly won’t be running it again in a situation where we’re reliant on hotel wifi…
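To give a flavour of the sort of first exercise a “code your own simple vision system” tutorial builds up to – this is an illustrative sketch, not the actual notebook content, and it uses plain Python lists in place of OpenCV arrays precisely so it runs without any installation step:

```python
# A toy version of a first vision exercise: threshold a greyscale image
# and find the bounding box of the bright region. Plain Python lists
# stand in for OpenCV/numpy arrays so there is nothing to install.

def threshold(image, t):
    """Return a binary mask: 1 where a pixel is brighter than t, else 0."""
    return [[1 if px > t else 0 for px in row] for row in image]

def bounding_box(mask):
    """Bounding box (top, left, bottom, right) of the 1-pixels, or None."""
    coords = [(r, c) for r, row in enumerate(mask)
              for c, v in enumerate(row) if v]
    if not coords:
        return None
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    return (min(rows), min(cols), max(rows), max(cols))

# A tiny 4x5 "image" with one bright blob in the middle.
img = [
    [10,  12,  11, 10, 10],
    [10, 200, 210, 12, 10],
    [11, 205, 198, 13, 10],
    [10,  11,  12, 10, 10],
]
mask = threshold(img, 128)
print(bounding_box(mask))  # -> (1, 1, 2, 2)
```

In the real workshop the same idea is two OpenCV calls, but the no-dependency version makes the point without the BYOD installation pain.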

The class trying to download and install OpenCV

The school timetable involved a set of long talks – all an hour or more – mixed between keynote and lecture. The speakers were all experts in their field, and taken together the talks truly provided a masterclass in artificial intelligence, machine learning, the philosophical underpinnings of logic and robotics, intelligent robotics, vision, perception and attention. I really enjoyed sitting in the talks – some of the profs just came for their own sessions, but I chose to attend pretty much the whole school (I skipped the session immediately before mine, for a read through of my notes, and I missed a tutorial as I’d been sat down for too long and needed a little exercise).

It’s hard to pick a highlight as there were so many good talks. Daniel Levin from Vanderbilt (in Tennessee) gave an overview of research in attention and change blindness, showing how psychologists are homing in on the nature of selective attention and attentional blindness. I’ve shown Levin’s videos in my lectures before so it was a real treat to get to see him in person and have a chat. Stella Yu from Berkeley gave the mainstream computer vision perspective, from spectral clustering to deep machine learning. Ulrich Furbach from Koblenz presented a more traditional logical approach to AI, touching on computability and key topics like commonsense reasoning, and how to automate background knowledge.

One of my favourite presenters was David Vernon from Skövde. He provided a philosophical overview of cognitive systems and cognitive architectures: if we’re going to build AI we have a whole bunch of bits which have to fit together; we can either take our inspiration from human intelligence or we can make it up completely, but either way we need to think at the systems level. His talk gave us a clear overview of the space of architectures, and how they relate to each other. He was also very funny, and not afraid to make bold statements about where we are with respect to solving the problem of AI: “We’re not at relativistic physics. We’re not even Newtonian. We’re in the dark ages here”.

Prof Vernon giving us an overview of the conceptual space of cognitive architectures

When he stood up I thought to myself “he looks familiar” and it turns out I actually bought a copy of his book a couple of weeks ago. Guess I’d better read it now.

Kristian Kersting of TU Dortmund and Michael Beetz of Bremen both presented work that bridges the gap between symbolic and subsymbolic reasoning; Kristian talked about logics and learning, through Markov Logic Networks and the like. Michael described a project in which they’d worked on getting a robot to understand recipes from wikiHow, which involved learning concepts like “next to” and “behind”. Both these talks gave us examples of real-world AI that can solve the kinds of problems that traditional AI has had difficulty with; systems that bridge the gap between the symbolic and the subsymbolic. I particularly liked Prof. Beetz’s idea of bootstrapping robot learning using VR: by coding up a model of the workspace that the robot is going to be in, it’s possible to get people to act out the robot’s motions in a virtual world, enabling the robot to learn from examples without real-world training.

Prof Kersting reminding us how we solve problems in AI

Each night the students gave a poster display showing their own work, apart from the one night we went into the big city for a conference dinner. Unfortunately the VR demo had real issues with the hotel wifi.

The venue for our conference dinner, a converted windmill in the centre of old Bremen

In all a very good week, which has really got me thinking. Big thanks to Mehul Bhatt of Bremen for inviting me out there. I’d certainly like to contribute to more events like this, if it’s possible; it was a real luxury to spend a full week with such bright students and such inspiring faculty. I alternated between going “Wow, this is so interesting – isn’t it great that we’re making so much progress!” and “Oh no, there is so much left to do!”.

Hotel Bootshaus from the river

The Aberystwyth Image Analysis Workshop AIAW

Last week (on Friday) we held the Aberystwyth Image Analysis Workshop. I think it was the 3rd, or maybe the 4th, one of these I’ve organised. The aim is to have some informal talks and posters centred around the theme of image analysis (including image processing, computer vision, and other image-related stuff) from across Aberystwyth. To encourage participation from people whether or not they’ve got results, we have 10-minute slots for problem statements, short talks, work in progress and so on, and 20-minute slots for longer pieces of work. This year there were 4 departments represented in talks: Computer Science, Maths, Physics and IBERS (biology), and we had speakers who were PhD students, postdocs, lecturers and senior lecturers (no profs this year, boo!).

The range of topics covered was, as usual, very broad – images are used in research all the time, and it’s really useful to get together and see the kinds of techniques people are using. In Physics, they’re working on tightly and precisely calibrated cameras and instruments, using images as a form of measurement. In Maths, images are fitted to models and used to test theories about (for example) foam. In Computer Science, people are working on cancer research using images, plant and crop growth modelling, and video summarisation (to name but a few of the topics). And the IBERS talk this year came from Lizzy Donkin, a PhD student who’s working on slugs.

Lizzy and I have been trying to track slugs so that she can model how they forage – she spoke for 10 minutes on the problem of slugs and imagery, and I spoke for 10 minutes on preliminary slug tracking results. Here’s a screenshot of my favourite slide, showing the wide range of shape deformations that a single slug can undergo.
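One reason deformation makes slugs hard is that shape-based trackers have nothing stable to lock on to. The simplest baseline sidesteps shape entirely and follows the centroid of the foreground pixels from frame to frame – this is an illustrative sketch of that baseline, not the actual tracker behind the preliminary results:

```python
# A minimal baseline for following a deforming blob such as a slug:
# ignore shape and track the centroid of the foreground mask per frame.
# This is a sketch of the simplest possible approach, not the tracker
# from the talk; frames here are toy binary masks (lists of 0/1).

def centroid(mask):
    """Centroid (row, col) of the 1-pixels in a binary mask, or None."""
    coords = [(r, c) for r, row in enumerate(mask)
              for c, v in enumerate(row) if v]
    if not coords:
        return None
    n = len(coords)
    return (sum(r for r, _ in coords) / n,
            sum(c for _, c in coords) / n)

def track(frames):
    """Centroid trajectory over a sequence of binary masks."""
    return [centroid(f) for f in frames]

# Two toy frames: the blob changes shape as it moves right, but the
# centroid still gives a usable position estimate.
frame1 = [[0, 1, 1, 0],
          [0, 1, 1, 0]]
frame2 = [[0, 0, 1, 1],
          [0, 0, 1, 1]]
print(track([frame1, frame2]))  # -> [(0.5, 1.5), (0.5, 2.5)]
```

A centroid track like this is robust to the deformations in the slide precisely because it makes no assumption about outline; the price is that it tells you nothing about posture or heading.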