hannah dee

EMRA17

I’m visiting Girona Uni at the moment as part of my sabbatical term, and whilst I’m here I’m trying to expand my horizons a bit academically. So, this week I attended a workshop on marine robotics, which just happened to be going on whilst I’m here, and they let me attend for free. The workshop is about marine robotics, but it is not just a research conference: attendees come from 30 research centres and 12 companies, and presentations come from 14 EU projects, 4 national projects and 4 companies. On day one, I saw 16 of the talks and then skipped the rest (including the demo and the dinner) as my folks were visiting and I thought I should probably spend some time with them :-)

Marine robotics is a bit outside my area so it was challenging to sit in and try and follow talks that were at the limits of my knowledge. The conference was also considerably more applied than many of the conferences I go to – companies and researchers working together much more closely, and much closer to product; some of the things presented were research, others were actual pieces of kit that you can buy. The applications varied too, from science through to mining. The EU funding that supports these systems is really driving forward innovation in a collaborative way – many of the projects involved tens of institutions, from university research teams through SMEs to big companies.

The keynote came from Lorenzo Brignone of the IFREMER lab, the French research centre that deals with oceanographic stuff. They have quite a fleet (7 research vessels), with manned submersibles, ROVs (remotely operated vehicles), AUVs (autonomous underwater vehicles), and a hybrid HROV (AUV/ROV), which was the topic of the keynote. Brignone works in the underwater systems unit, which is mostly made up of engineers. The key problem is that of working reliably underwater near boats which don’t have dynamic positioning – the surface vessel might move hundreds of metres, so we need an ROV that is more independent in order to carry out scientific missions reliably. The design includes the whole system, with on-ship electronics, tether, traction, and a weighted underwater station which includes a fibre-optic link to the HROV. This lets the hybrid system work with vessels of opportunity, rather than waiting for science boats to become ready. Two DVL (doppler velocity log) systems give accurate underwater location. The final output is a semi-autonomous vehicle which can be worked by general users (the engineers don’t even have to be on the boat).

The next morning talk covered the DexROV project, which is looking at systems which can control dextrous robots at a distance (hopefully onshore, removing the cost of hiring a boat). The aim is to get robots that can interact underwater, like divers can. This is controlled by an exoskeleton-based system – basically, the operator wears an arm and hand exoskeleton which the robot then mimics.

SWARMS – smart and networking underwater robots in cooperation meshes – is a 31-partner consortium looking at networking tech as well as the robotics tech. The project is also developing middleware which will let various heterogeneous systems (AUVs, ROVs, mission control, boats) cooperate. An underwater acoustic network links to wireless on the surface.

Next up was Laurent Mortier from the BRIDGES project, which is a big H2020 project (19 partners including 6 SMEs) looking at glider technology. These systems are very low-power underwater vehicles which can cover very long journeys, collecting data. Gliders create small changes in buoyancy, using wings to drive themselves forwards. This project looks to increase the depth that gliders can work at, which enables a greater range of scientific questions to be answered. The kind of data they look for depends on the payload, which can be scientific, or commercial (searching for things like leaks from undersea hydrocarbons, or finding new oilfields).

Carme Paradeda of Ictineu Submarines presented next, on moving from a manned submarine to underwater robots in a commercial setting. http://www.ictineu.net/en/ is their website; they’ve invested 3 million euros and more than 100,000 hours of R&D in the creation of their submarine. This is a manned submarine whose development involved creating new battery technology as part of the project – safer batteries for operating at high pressure.

Marc Tormer of SOCIB (a Balearic Islands research centre) also talked about gliders. The aim is to change the paradigm of ocean observation: from intermittent missions on expensive research vessels, to continuous observations from multiple systems including gliders.

Graham Edwards from TWI (The Welding Institute) talked about the ROBUST H2020 project, which addresses seabed mining. The resources they’re looking for are manganese nodules; they’re also looking at cobalt crusts and sulphide springs. The system uses laser spectroscopy on an AUV with 3D acoustic mapping tech, to try and get away from the problems associated with dredging.

Pino Casalino of the University of Genova (ISME) had the last slot before lunch, talking about an Italian national project, MARIS, working towards robot control systems for marine intervention. This provided another overview of a big multi-site project, looking at vision, planning and robot control. I have to admit that at this point my attention was beginning to wander.

One group photo and a very pleasant lunch later (I declined the option of a glass of wine, but did take advantage of the cheesecake and the espresso machine) we were back for an afternoon of talks.

The difficult post-lunch slot fell to Bruno Cardeira, from the Instituto Superior Técnico (Lisbon), talking about the MEDUSA deep sea AUV. This was a joint Portugal-Norway project with a lot of partners, looking at deep sea AUVs to survey remote areas up to 3000m deep. They wanted to do data collection and water column profiling, resource exploration, and habitat mapping, with the aim of opening up new sea areas for economic exploitation.

Bruno also presented a hybrid AUV/ROV/diver navigation system, the Fusion ROV, which is a commercial product. This talk had a lot of videos with a loud rock soundtrack, which is one way to blow people out of their post-lunch lull, I guess.

The next talk came from Chiara Petroli, of the University of Rome La Sapienza, talking about the SUNRISE project, which is working on the internet of things with respect to underwater robotics: underwater networking, in a heterogeneous environment. Long distance, low cost, energy efficient, secure comms… underwater, where of course wifi doesn’t really work. Dynamic, adaptive protocols which use largely acoustic communications have been developed.

Unfortunately by the end of this talk we were already running 15 minutes late (after just two talks). So the desire for coffee was running high in the audience, and I think I detected a snore or two.

Andrea Munafo from the National Oceanography Centre in Southampton talked about the OCEANIDS programme, sponsored by the UK’s NERC (Natural Environment Research Council). This programme is building some new AUVs which will enable long range and autonomous missions. One of these is called Boaty McBoatface.

The last talk from this session came from Ronald Thenius, of Uni Graz, talking about subCULTron, a learning, self-regulating, self-sustaining underwater culture of robots. 7 partners from 6 countries, aiming to monitor the environment in the Venice lagoon using the world’s largest robot swarm, with energy autonomy and energy harvesting. Because Venice is big and has a lot of humans, the cultural aspect is quite important. The players: 5 aPads (inspired by lily pads: solar cells, radio comms), 20 aFish (inspired by fish: move around, communicate), and 120 aMussels (inspired by clams: many sensors, passive movement, NFC, energy harvesting). I liked this talk a lot.

Post coffee break, it was the turn of Nikola Miskovic, from the University of Zagreb, talking about cooperative AUVs which can communicate with divers using hand gestures and tablets. The project (CADDY – autonomous diving buddy) allowed a number of advances, including a way for the diver to use Google Maps underwater. “The biggest challenge when you do experiments with humans and robots is the humans” :-)

Jörg Kalwa of ATLAS ELEKTRONIK GmbH spoke on the SeaCat story – from toy to product. This is an AUV/ROV hybrid with variable payload. It grew out of various precursor systems (experimental and military) – the talk covered the various robots which are ancestors of the current vehicle. The current incarnation is a commercial robot which does pretty much everything you might want an ROV to do, but the price point is pretty high.

The penultimate talk was from Eduardo Silva, ISEP / INESC TEC in Porto (Portugal), talking about underwater mining in flooded opencast mines. The project has a great acronym – Viable Alternative Mine Operating System, or VAMOS. It’s a big project (17 partners from 9 countries), with a bunch of collaborating robots, including AUVs which look like many of the others (torpedo-like), and other underwater vehicles which look a lot more like mining vehicles – tracked tanks, with massive drills and so on.

The day finished with the European Robotics League – a UEFA Champions League for robotics, covering service robots, industry robots and outdoor robots. This talk came from Gabriele Ferri of CMRE, on emergency robotics: ground, underwater and air robots cooperating in a disaster response scenario. The mission is to find missing workers (mannequins) and bring them an emergency kit, survey the area, and stem a leak by closing a stop cock.

To be honest, my take home from this workshop is: underwater robots are cool, and brexit is an awesomely stupid idea.

BCSWomen AI Accelerator

BCSWomen Chair Sarah Burnett has had a fab idea, which is to hold a series of webinars that talk about AI and how it is changing the world. In BCSWomen we do a lot of stuff about the women, and a lot of stuff to support women, but we also do a lot of stuff that is useful for tech people in general. The AI Accelerator falls into this category; the idea is that tech is changing and AI is driving that change, so we’re going to try and provide a background and overview of AI to help people get to grips with this. Once I heard the idea I had to put my hand up for a talk, and I grabbed the first slot, a general intro talk – “What is AI?“. The other speaker in the session was Andrew Anderson of Celaton, who talked about the business side of AI. If you want to join in, follow @bcswomen on twitter and I’m sure they’ll tweet about the next one soon.


the talk

As ever I went a bit over the top on the talk prep, but managed to come up with a theme and 45 slides with a bunch of reveals/animations that I thought covered some key concepts quite well. (Yes 45 slides for 20 minutes is a bit much but hey, I rehearsed the timings down to a tee and it was OK.) The live webinar had a few issues with audio, so I re-recorded my talk as a stand-alone youtube presentation; it’s not as good as the original outing (as a bit of time had passed and I hadn’t rehearsed as much) but I think it still works OK. If you want to watch it, here it is:

You can find the slides online here: AI Accelerator talk slides. I am 99% certain that all the images I used were either free for reuse, or created by me, but if it turns out I’ve used a copyright image let me know and I’ll replace it.


the reasoning behind the talk

I’ve been “doing” AI since I first went to uni in 1993, and what people mean when they say AI has changed massively over this time. Things that I read about as science fiction are now everyday, and a lot of this is down to advances in machine learning (ML). So when I started working on the talk I actually asked myself “What do people really mean, when they say AI?”; it turns out that a lot of the time they’re actually talking about ML. There are a lot of other questions that need to be raised (if not answered) – the difference between weak AI and strong AI, the concept of embodiment, the way in which some things which we think of as hard (e.g. chess) turned out to be quite easy, and some things we thought would be easy (e.g. vision) turned out to be quite hard. Hopefully in the talk I covered enough of this stuff to introduce the questions.

I decided that for a tech talk there needed to be a bit of tech in it too, which is why I spent the second half breaking down what we mean by machine learning, and introducing some different subtypes of machine learning. I expect that if you work in the area there’s nothing much new in the talk, but hopefully it gives an overview, and also gives enough depth for people to learn something from it.


so what about the cute robots?

I wanted a visual example for my slides on ML and particularly classification, so I created a robot image, then edited it a bit to get 16 different variants (different buttons, different numbers of legs, aerials, arm positions). I then wrote a short program to switch the colours around so I got twice as many (just switching the blue and the red channels gives some cyan robots and some yellow robots).
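The actual script is in the download linked below, but as a rough sketch of the trick (assuming the image is held as a numpy array in R, G, B channel order – my assumption here, not the real script), swapping red and blue is a one-liner:

```python
import numpy as np

def swap_red_blue(img):
    # img is an H x W x 3 array in R, G, B order; reversing the
    # last axis gives B, G, R, i.e. red and blue swapped while
    # green is untouched -- so cyan and yellow trade places
    return img[:, :, ::-1]
```

Applied to a yellow pixel (255, 255, 0) this gives cyan (0, 255, 255), which is exactly how one set of robots becomes the other.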

If you want to use them in talks or whatever, feel free. You can get all 32 of the robots, here, along with the python program that switches colours and the gimp file (.xcf) if you want to edit them yourself.

The BCSWomen Lovelace Colloquium 2017

The 10th BCSWomen Lovelace Colloquium was held on April 12th, at Aberystwyth University. Around 200 attendees enjoyed a day of inspiring talks, fascinating student posters, careers advice, employers fair, lots of networking and too much cake. Our headline sponsor this year was Google, who covered loads of the student travel and also sent a speaker along.

As we pay for travel for all the poster contest finalists and as we were in Aberystwyth this year, we paid for 2 nights for everyone. This enabled us to have a social the night before, with Scott Logic providing a hackathon activity which got people talking and coding (and eating pizza).

Our keynote was Dr Sue Black OBE, founder of BCSWomen and general awesome person, who talked about her life and career to date, with PhD, Bletchley Park, Stephen Fry and the Queen. At the end of Sue’s talk she actually had a queue of people waiting for selfies. Then we had Carrie Anne Philbin of Raspberry Pi, who gave a fab talk about using your powers for good. If you haven’t seen her yet check out her youtube channel. Unfortunately she had to dash off which was a shame as people were so inspired by her talk that they kept asking me where she was for pretty much the rest of the day.

As usual the core of the day was an extended lunch and poster session. This year lunch was sponsored by GE, who also had a stand and helped out in lots of other ways.

Best First Year Student, sponsored by Google

1st: Frida Lindblad (Edinburgh Napier) – Making complexity simple in the world of technology

2nd: Aliza Exelby (Bath) – An examination of the effects of growing up in a digital age

Best Second Year Student, sponsored by JP Morgan

1st: Elise Ratcliffe (Bath) – Cryptography for website design

2nd: Rachael Paines (Open) – Where’s all the kit gone? Developing a bespoke equipment management system

Best Final Year Student

1st: Iveta Dulova (St Andrews) – Mobile device based framework for the prediction of early signs of mental health deviations / Hannah Khoo (Greenwich) – Analysing attacks on the CAN bus to determine how they can affect a vehicle

2nd: Louise North (Bath) – Optimising the energy efficiency of code using static program analysis techniques / Anna Rae Hughes (Sussex) – Safeguarding homelessness in a cashless society

Best MSc Student, sponsored by Amazon

1st: Caroline Haigh (Southampton) – Nul points and null values: using machine learning techniques to model Eurovision song contest outcomes

2nd: Isabel Whistlecroft (Southampton) – Can algorithms emulate abstract art?

People’s choice

First year: Annette Reid (Bath) – “Ada Loved Lace”: how computer science and the textile industry influence each other

Second year: Emma James (Bath) – Can machine learning trump hate?

Final year: Rosie Hyde (Middlesex) – Can stress and anxiety be tracked through wearable technology?

MSc: Leah Clarke (Durham) – Who will win Wimbledon 2017? Using deep learning to predict tennis matches

After the poster contest we had two more talks. The first was from Milka Horozova of Google, who’s been at Google for just a few months. She met Google recruiters at the Lovelace in Edinburgh a few years ago, so she’s a real Lovelace Colloquium success story. Our last speaker was Christine Zarges of Aberystwyth Computer Science, who talked about nature-inspired computing – artificial immune systems, neural networks, evolutionary systems. Interesting stuff.

One cake break later (we have a lot of cake, thanks to our CAKE SPONSOR, Bloomberg – yes we have a cake sponsor) and we finished off with the panel session and prizegiving. On the panel was Dominika Bennani of JP Morgan, Carol Long from Warwick and a BCSWomen founder member, Milka Horozova from Google and Claire Knights from UTC Aerospace. And me. The idea of the panel is that all the students can ask any question they like, on anything to do with computing and computing careers, and it’s often my favourite part of the day.

After the close of the panel, we stopped for a group photo on the big steps. Once I get the official photos back I’ll post the big picture, but for now here’s a selfie:

And then the last part of the day was the social, sponsored by ThoughtWorks – with more cake, and some drinks to help the networking go smoothly.

This was my last event as chair: I started it in 2008, and have run it for 10 years, and now it’s time to pass it on. So at the end of the day I handed over to Helen Miles, who’s going to take the Lovelace forward (with me as deputy for a couple of years to ease the transition – I’m still going to be there, whatever!). Helen is also at Aberystwyth, and has an office just downstairs from me, which makes the handover easy. Next year, we’re going to Sheffield.

BMVA workshop: plants in computer vision

On Wednesday I hosted my first ever British Machine Vision Association (BMVA) one-day workshop. The BMVA are the organisation which drives forwards computer vision in the UK, and they run a series of one-day technical meetings, usually in London, which are often very informative. In order to run one, you have to first propose it, and then the organisation work with you to pull together dates, program, bookings and so on. If you work in computer vision and haven’t been to one yet, you’re missing out.

I won’t write an overview of the whole day – that’s already been done very well by Geraint from GARNet, the Arabidopsis research network. So if you want a really nice blow-by-blow account, pop over to the GARNet blog.

We had some posters, and some talks, and some demos, and around 55 attendees. The quality was good – one of the best plant imaging workshops I have been to, with no dud talks. I think London is an attractive venue, the meetings are cheap (£30 for non-members, £10 for members), and both of these factors contributed. But I suspect the real reason we had such a strong meeting was that we’re becoming quite a strong field.

The questions and challenges that come up will be familiar to people who work in other applied imaging fields, like medical imaging:

  • should we use machine learning? (answer: probably)
  • can we trust expert judgments? (answer: maybe… but not unconditionally!)
  • we need to share data – how can we share data? what data can we share?
  • if we can’t automatically acquire measurements that people understand, can we acquire proxy measurements (things which aren’t the things that people are used to measuring, but which can serve the same purpose)?
  • can deep learning really do everything?
  • if we’re generating thousands of images a day, we have to be fully automatic – this means initialisation stages have to be eliminated somehow.

One of the presenters – Milan Sulc, from the Centre for Machine Perception in Prague – wanted to demo his plant identification app. Unfortunately, we discover that all of the plants at the BCS are plastic. Milan disappears to a nearby florists to get some real plants, at which point, the receptionist arrives with an orchid. Which also turns out to be plastic. The lesson here? Always remember to bring a spare plant.

This workshop was part funded by my EPSRC first grant, number EP/L017253/1, which enabled me to bring two keynotes to the event, and that was another real bonus for me. Hanno Scharr from Jülich and Sotos Tsaftaris from Edinburgh are both guys I’ve wanted to chat with for some time, and they both gave frankly excellent presentations. It was also very good to catch up with Tony Pridmore and the rest of the Nottingham group; it’s been a while since I made it to a conference in computer vision / plant science, as I had a diary clash over IAMPS this year.

We’re hoping to put together a special issue of Plant Methods on the meeting.

Pumpkin Hack!

On Sunday we had our first Aberystwyth Robotics Club pumpkin hack. Kids, pumpkins, flashing lights and electronics together in a fun afternoon workshop.

In the carving station, the kids hacked away at their pumpkins with kid-safe tools, or gave their design to one of our high-powered, Dremel-wielding helpers. With a suggested age range of 6–12 we weren’t going to let the attendees loose with super sharp knives or power tools, but they managed to design their pumpkins themselves and help to cut them out (or at least, carve them).

In the coding zone, we had a bunch of laptops, a bunch of Arduino Nano microcontrollers, battery packs, wires and some ultra-bright LEDs. Kids wired up their own microcontrollers, with assistance from our student helpers, then programmed them in Arduino C. The programming aspect was mostly copy-and-paste, but with just an hour and a bit to spend on it, the wiring and the coding were sufficient to keep everyone involved.

Our final display was so much more impressive than I expected.

Here’s the “After” pic:

Here’s a google docs link to the Arduino handout if you want to try running a similar event yourself. You need a lot of helpers, as it’s quite easy to wire things up wrong, and the coding involves working out what bits to copy and paste. But it works, and we had kids as young as 6 with flashing pumpkins and big smiles. The one scary moment was when we realised that windows update had run on all of our laptops, taking out the Arduino drivers. But with the help of one of the attendees, we got around that (phew!) by booting into linux then editing perms on the USB ports.

First paper from first grant!

We’ve had our first journal paper published from my EPSRC first grant. It gives a comprehensive review of work on the automated image analysis of plants – well, one particular type of plant, Arabidopsis thaliana. It’s by Jonathan Bell and myself, and it represents a lot of reading, talking and thinking about computer vision and plants. We also make some suggestions which we hope can help inform future work in this area. You can read the full paper here, if you’re interested in computer vision and plant science.

The first grant as a whole is looking at time-lapse photography of plants and aims to build sort-of 3D models representing growth. It’s coming to an end now so we’re wrapping up the details and publishing the work we’ve done. This means keen readers of this blog[1] can expect quite a few more posts relating to the first grant soon: we’re going to release a dataset, a schools workshop, and we’ll be submitting another journal paper talking about the science rather than the background.

[1] Yes, both of you

HCC 2016: Human Centred Computing Summer School

Last week I was invited to present at the first Human Centred Cognition summer school, near Bremen in Germany. Summer schools are a key part of the postgraduate training experience, and involve gathering together experts to deliver graduate-level training (lectures, tutorials and workshops) on a particular theme. I’ve been to summer schools before as a participant, but never as faculty. We’re at a crossroads in AI at the moment: there’s been a conflict between “good old fashioned AI” (based upon logic and the symbol manipulation paradigm) and non-symbolic or sub-symbolic AI (neural networks, probabilistic models, emergent systems) for as long as I’ve been in the field, but right now, with deep machine learning in the ascendant, it’s come to a head again. The summer school provided us participants with a great opportunity to think about this and to talk about it with real experts.

Typically, a summer school is held somewhere out of the way, to encourage participants to talk to each other and to network. This was no different, so we gathered in a small hotel in a small village about 20 miles south of Bremen. The theme of this one was human centred computing – cognitive computing, AI, vision, and machine learning. The students were properly international (the furthest travelled students were from Australia, followed by Brazil) and were mostly following PhD or Masters programs; more than half were computing students of one kind or another, but a few psychologists, design students and architects came along too.

The view from my room

The bit I did was a workshop on OpenCV, and I decided to make it a hands-on workshop where students would get to code their own simple vision system. With the benefit of hindsight this was a bit ambitious, particularly as a BYOD (Bring Your Own Device) workshop. OpenCV is available for all the main platforms, but it’s a pain to install, particularly on Macs. I spent a lot of time on the content (you can find that here: http://github.com/handee/opencv-gettingstarted) and not so much time thinking about infrastructure or installation. I think about half of the group got to the end of the tutorial, and another 5 or 10 managed to get some way along, but we lost a few to installation problems, and I suspect these were some of the less technical students. I used Jupyter notebooks to create the course, which let you intersperse code with text about the code, and I think this may have created an extra layer of difficulty in installation, rather than a simplification of practicalities. If I were to do it again I’d either try and have a virtual machine that students could use, or I’d run it in a room where I knew the software. I certainly won’t be running it again in a situation where we’re reliant on hotel wifi…
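To give a flavour of the level the tutorial starts at: in Python, OpenCV images are just numpy arrays, so the first steps of a simple vision system – greyscale conversion and thresholding – can be sketched in plain numpy (this is an illustrative sketch, not the actual notebook code, which is on GitHub):

```python
import numpy as np

def to_grey(img):
    # naive greyscale: average the three colour channels
    # (OpenCV's cvtColor uses weighted luminance coefficients instead)
    return img.mean(axis=2)

def threshold(grey, t=127):
    # binary threshold: pixels brighter than t become 255, the rest 0
    return np.where(grey > t, 255, 0).astype(np.uint8)
```

Chaining these two over a webcam frame already gets you a crude foreground mask – which is roughly where the tutorial asks students to start experimenting.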

The class trying to download and install OpenCV

The school timetable involved a set of long talks – all an hour or more – mixed between keynote and lecture. The speakers were all experts in their field, and taken together the talks truly provided a masterclass in artificial intelligence, machine learning, the philosophical underpinnings of logic and robotics, intelligent robotics, vision, perception and attention. I really enjoyed sitting in the talks – some of the profs just came for their own sessions, but I chose to attend pretty much the whole school (I skipped the session immediately before mine, for a read through of my notes, and I missed a tutorial as I’d been sat down for too long and needed a little exercise).

It’s hard to pick a highlight as there were so many good talks. Daniel Levin from Vanderbilt (in Tennessee) gave an overview of research in attention and change blindness, showing how psychologists are homing in on the nature of selective attention and attentional blindness. I’ve shown Levin’s videos in my lectures before so it was a real treat to get to see him in person and have a chat. Stella Yu from Berkeley gave the mainstream computer vision perspective, from spectral clustering to deep machine learning. Ulrich Furbach from Koblenz presented a more traditional logical approach to AI, touching on computability and key topics like commonsense reasoning, and how to automate background knowledge.

One of my favourite presenters was David Vernon from Skövde. He provided a philosophical overview of cognitive systems and cognitive architectures: if we’re going to build AI we have a whole bunch of bits which have to fit together; we can either take our inspiration from human intelligence or we can make it up completely, but either way we need to think at the systems level. His talk gave us a clear overview of the space of architectures, and how they relate to each other. He was also very funny, and not afraid to make bold statements about where we are with respect to solving the problem of AI: “We’re not at relativistic physics. We’re not even Newtonian. We’re in the dark ages here“.

Prof Vernon giving us an overview of the conceptual space of cognitive architectures

When he stood up I thought to myself “he looks familiar” and it turns out I actually bought a copy of his book a couple of weeks ago. Guess I’d better read it now.

Kristian Kersting of TU Dortmund, and Michael Beetz of Bremen, both presented work that bridges the gap between symbolic and subsymbolic reasoning; Kristian talked about logics and learning, through Markov Logic Networks and the like. Michael described a project in which they’d worked on getting a robot to understand recipes from wikiHow, which involved learning concepts like “next to” and “behind“. Both these talks gave us examples of real-world AI that can solve the kinds of problems that traditional AI has had difficulty with; systems that bridge the gap between the symbolic and the subsymbolic. I particularly liked Prof. Beetz’s idea of bootstrapping robot learning using VR: by coding up a model of the workspace that the robot is going to be in, it’s possible to get people to act out the robot’s motions in a virtual world, enabling the robot to learn from examples without real-world training.

Prof Kersting reminding us how we solve problems in AI

Each night the students gave a poster display showing their own work, apart from the one night we went into the big city for a conference dinner. Unfortunately the VR demo had real issues with the hotel wifi.

The venue for our conference dinner, a converted windmill in the centre of old Bremen

In all a very good week, which has really got me thinking. Big thanks to Mehul Bhatt of Bremen for inviting me out there. I’d certainly like to contribute to more events like this, if it’s possible; it was a real luxury to spend a full week with such bright students and such inspiring faculty. I alternated between going “Wow, this is so interesting, Isn’t it great that we’re making so much progress!” and “Oh no there is so much left to do!“.

Hotel Bootshaus from the river

The Aberystwyth Image Analysis Workshop AIAW

Last week (on Friday) we held the Aberystwyth Image Analysis workshop. I think it was the 3rd, or maybe the 4th one of these I’ve organised. The aim is to have some informal talks and posters centred around the theme of image analysis (including image processing, computer vision, and other image-related stuff) from across Aberystwyth. To encourage participation from people whether they’ve got results or not we have 10 minute slots for problem statements, short talks, work in progress and so on, and we have 20 minute slots for longer pieces of work. This year there were 4 departments represented in talks: Computer science, Maths, Physics and IBERS (biology), and we had speakers who were PhD students, post docs, lecturers and senior lecturers (no profs this year, boo!).

The range of topics covered was as usual very broad – images are used in research all the time, and it’s really useful to get together and see the kinds of techniques people are using. In Physics, they’re working on tightly and precisely calibrated cameras and instruments, using images as a form of measurement. In Maths the images are fitted to models and used to test theories about (for example) foam. In computer science people are working on cancer research using images, plant and crop growth modelling, and video summarisation (to name but a few of the topics). And the IBERS talk this year came from Lizzy Donkin, a PhD student who’s working on slugs.

Lizzy and I have been trying to track slugs so that she can model how they forage – she spoke for 10 minutes on the problem of slugs and imagery, and I spoke for 10 minutes on preliminary slug tracking results. Here’s a screenshot of my favourite slide, showing the wide range of shape deformations that a single slug can undergo.

Old College minecraft and robotics workshop

On Wednesday, Aber Robotics Club put on a day of coding, gaming and robotics in Old College. We ran two workshops: one on Minecraft, and one on Mindstorms (lego robots). Each had about 30 kids in, and the aim was to have a techy day that taught attendees something new, but that was also fun: it was a summer holiday workshop after all.

In the Mindstorms lego robots workshop we did a mixture of activities – most of which I’ve blogged about before. We did the “program a humanoid robot” exercise, where we get kids to write down programs for their parents (who end up blindfolded). We did the “steer a lego robot around a track” exercise using remote control. And we did the “customise your lego robot then have a bit of a fight” Robot Wars style event to finish. These are all tried and tested activities which work really well together and made for a good day, with enough content and learning, and enough fun and chaos too.

We also had a visit and a talk from Laurence Tyler of Aberystwyth Computer Science, who works in our space robotics group. He talked about robots in space, mars rovers, satellites, Philae, and all sorts of other cool stuff. He brought along Blodwen, our scale model of the ExoMars lander, and explained how stuff made in Aberystwyth is actually going to end up on Mars. The kids asked all sorts of excellent questions and listened attentively throughout, which was great.

Over in the Minecraft room, the aim was to try and build bits of the Old College building collaboratively. There was apparently quite a bit of destruction as well as construction, but when I popped in at lunchtime I saw some fairly recognisable college parts so they all managed to get something built in the end.

In the afternoon, the Minecraft crew had an introduction to programming in Minecraft, starting with a demo of Jim Finnis’s castle generation software, which opened quite a few eyes and got a big “whoa!” from the audience: it’s a super piece of code that just builds amazing castles programmatically. One of the key ideas you have to get in order to code in Minecraft is the idea of a 3D coordinate system (x, y and z): I’m not sure that many of the kids had done that before so there was quite a steep learning curve.
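To give a flavour of what coding in Minecraft looks like – this is a hypothetical sketch, not the material we used on the day – the Pi edition lets you place blocks at (x, y, z) positions from Python. Here a small helper works out the coordinates for a single wall; the helper function name and parameters are my own invention for illustration:

```python
# Sketch of Minecraft-style 3D coordinates: compute the (x, y, z)
# positions of every block in a wall, starting at (x0, y0, z0) and
# extending along the x axis (y is "up" in Minecraft).

def wall_coords(x0, y0, z0, width, height):
    """Return a list of (x, y, z) block positions for a width x height wall."""
    return [(x0 + dx, y0 + dy, z0)
            for dy in range(height)
            for dx in range(width)]

# On a Raspberry Pi with Minecraft Pi Edition running, the blocks
# could then be placed with the mcpi library, something like:
#
#   from mcpi.minecraft import Minecraft
#   mc = Minecraft.create()
#   STONE = 1
#   for x, y, z in wall_coords(0, 0, 0, width=10, height=4):
#       mc.setBlock(x, y, z, STONE)

print(len(wall_coords(0, 0, 0, 10, 4)))  # 10 x 4 wall = 40 blocks
```

Even a snippet this small forces you to think about which axis is which and where the origin is – which is exactly the conceptual hurdle the kids were hitting.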

We’ll be revisiting these workshops in the next couple of weeks to see what went well and what needs to be worked on: the kids really liked them both. The Minecraft one has more of a setup overhead, as we needed to get hold of enough computers (30 Raspberry Pis, in the end) and sort out networking, a server, and so on. The lego robots workshop is a more polished event now (we’ve run it a fair few times). I’m fairly sure that we’ll run them both again, but they might need a bit of tweaking; in particular I’d like to think up a cool way of working with 3D coordinates for the Minecraft one, and I also think it might be good to introduce more “not-sitting-at-a-computer” bits.

Electromagnetic Field 2016

Last weekend was Electromagnetic Field, the UK’s main Hacker/maker camp. It’s an outstanding opportunity for meeting up with tinkerers, coders and makers from across the UK and beyond. I was at the first EMF (in 2012, blog post here) talking about women in tech, and went back to this one to talk about schools outreach and the work we’re doing with kids and families. I spoke about schools and kids engagement in general, but also more specifically about our EU playfulcoding project. You can see my talk here:

And you can view the slides here, if you just want the slides rather than the talk.

The talk was well-received, though the room wasn’t full – but that’s fine; one of the cool things about EMFcamp is the sheer range of stuff going on. Over the course of the weekend I went to talks on computer history, quantum effects in imaging, IT security from a sociological standpoint, penetration testing, hardware hacking, animating dinosaurs and the mathematics of the Simpsons. I also went to hands-on workshops on VR, deep machine learning, card-based reasoning (“having daft ideas”) and paper circuits. These were all part of the official programme – submitted and approved before the event, allowing people to schedule and so on.

There were also lots of minor “installation” type hacks around the place, and a whole heap of drop in activities. I played some computer games in the retro gaming tent (Sonic the hedgehog), went in a musical ball pit, watched fire pong, and generally strolled around the site going WOW.

I had never been in a ball pit before. I am so going to make one of these.

“The Robot Arms” was the name of the camp bar, and it had an API so you could look online to see how much beer had been sold. Someone even wrote a script to calculate how many drinks had been sold in the last minute, so you could tell how busy it was without going down to check. All the bar staff – and indeed everyone at the event – were volunteers, which gave the whole thing a really nice cooperative feeling. I was sat eating my veggie breakfast in the food area on Sunday morning when someone asked for help setting out the chairs at the main stage, and about 10 of us just got up and did it. Loads of my friends there did shifts on the bar, or marshalling in the carpark (I spoke, and figured that was probably enough :-). At the closing ceremony Jonty (one of the main organisers) asked everyone who’d volunteered or spoken to stand up, and I swear about 25% of the people there did. It really did make for a friendly event.
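The busy-ness script is a nice little idea, and easy to sketch: poll the running drinks total, and turn two readings into a rate. This is purely hypothetical – I haven’t seen the actual script, and the real Robot Arms API endpoint and response format aren’t reproduced here:

```python
# Hypothetical sketch of the "how busy is the bar?" script: given two
# readings of the bar's running drinks total, compute the average
# number of drinks sold per minute between them.

def drinks_per_minute(earlier, later):
    """Each reading is a (unix_timestamp_seconds, total_drinks_sold) pair.
    Returns the average drinks sold per minute between the two readings."""
    t0, n0 = earlier
    t1, n1 = later
    minutes = (t1 - t0) / 60.0
    if minutes <= 0:
        raise ValueError("readings must be in chronological order")
    return (n1 - n0) / minutes

# e.g. 30 drinks sold over a 5-minute window:
print(drinks_per_minute((0, 1000), (300, 1030)))  # 6.0 drinks/minute
```

In practice you’d fetch the totals from the bar’s API on a timer and keep the last reading around; the arithmetic is all there is to it.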

What a cool pub sign, eh?

Much to my embarrassment, I fell out of a hammock installation on the last night though. I was fine getting in there, but the dismount was … inelegant.

This has made my return to Aberystwyth a couple of days late, via the excellent first aid tent and the A&E at Guildford hospital (Royal Surrey). Nothing’s broken, which is a relief, but my gosh it’s all a bit bruised.

my opinion of hammocks is not positive

In all – I loved it, again. I’ll definitely go in 2018.