Reflections on 6 months of part time

For the last 6 months I’ve been part time at 70%, in pursuit of a bit of headspace and some work-life balance. This was part of Aberystwyth University’s “flexible working” scheme, where you can apply for different hours for 6 months on a trial basis, so it was a relatively risk-free way of experimenting with a little more free time.

Obviously, it’s not always possible to get work done in the time available (and some things – like open days or travel – I didn’t count because nobody bothers counting them). Generally I’ve tried to stick to my days off though, keeping track of my hours, and here I am at the end of the period owed just 4 days. This is a level of slippage I can deal with, and I’ll take those days here and there before term starts.

Did I get everything done at work? Well I’ve had to be a little less perfectionist than usual. I’ve tried to manage everything WRT teaching, admin and research, and my workload has been reduced a bit (no tutees). I do feel like I’m missing stuff and I am no longer on top of everything that goes on in the department, but that’s OK and also part of the point of being part time. I don’t actually have to do everything myself.

So that I didn’t let myself sit indoors messing about on the Internet for a day and a half every week, I set myself some “part time challenges”, because nothing says work-life balance like a to-do list. I’ve been pretty good at this. On the list were some physical things – walk up a big hill (Cadair Idris), get fit enough to cycle home from town. I had to cycle up the back way (still not fit enough to cycle up Penglais Hill) but I have made it home from town on my bike.

Others were dorky, and these have already been reported on this blog as I built a retro games controller and a drum kit ball pit.

The remaining things on the list were creative; I wanted to paint a picture using watercolours, as I’d never used them before, and a picture using acrylics, because I’m out of practice. I also decided to teach myself Blender, which is a 3D modelling system. I’ve managed all of these except for Blender, and I think I’m going to give up on that. It’s a fiddly piece of software and I’m happier writing code than digging about in menus, so if I get into computer graphics I’ll do it at a lower computational level.

Here’s one of the watercolours, which is a poster of things around Exmouth that my sister-in-law might like:

Here’s the acrylic, which is, as usual, of Penglais woods:

The problem with that woodland picture is that the patch of pale pathway looks a bit like a ghostly sheep from a distance. I’m going to have to fix that.

I’ve found it quite enjoyable having the time to do things that aren’t work, and to do some dorky “almost work” things which I would otherwise not have had time for. So I’ve gone part time as a permanent measure. This means that in the future this blog’s readers (both of them) can expect more crap watercolours and daft electronics, and fewer conference reports.

Inventeurs meeting, London

A couple of weeks back I went to London to observe an EU project meeting as external evaluator. The project is a direct descendant of the Playful Coding project and has some of the same partners, so it’s good to see what they’re getting up to after that project ended. Inventeurs is a project which looks at transnational collaboration on coding activities, particularly to support migrant children. The UK partner this time is London South Bank University (LSBU) who I have done quite a bit of work with in the past.

LSBU are based at Elephant and Castle which is pretty near where I grew up – we started our meeting at their campus and discussed progress on the work. As part of the project the partners are creating a MOOC (Massive Open Online Course) to get teachers involved in the project, covering the pedagogical aspects and the technical aspects. It was really heartening to see the progress being made on this; it will be a very useful resource for teachers getting into coding and collaboration.

After lunch, we walked to a school in Walworth, which took the group down East Street Market and Walworth Lane, both roads I remember clearly from my early years in London. I suspect I was the only person in the group feeling heavily nostalgic; everyone else was simply captivated by the colours and the sights. It’s not tourist London.

In the first school we took over a business studies classroom and talked about management of the project; the classroom decoration was apt for Mireia’s talk!

At the end of the school day we travelled further south and had an evening session in Peckham Library, looking at collaboration between classes and how we can get schools in different countries to work together on an extended topic over 10 weeks. There are some difficult hurdles to jump here – some of them are to do with prior knowledge and pedagogical issues but I suspect the biggest hurdle might turn out to be term dates.

At 7 we left the meeting and headed to the dinner location (missing the first goal in the England-Croatia game…). Moving around London by tube during a heatwave is not that fun.

The following day we started early at Southbank Engineering UTC, a school in Brixton. We were at this school for the whole day and met some pupils who’d been taking part in the project. The school’s emphasis on engineering and technology was really cool; the walls were papered with fascinating posters and the students we spoke with seemed very engaged.

School dinners are school dinners though. This photo captures the dining hall before the kids showed up (I’m not going to post photos of random schoolkids on the blog). Having spent quite a lot of my life in south London comprehensives, the atmosphere during school dinner was familiar and I have to say not entirely comfortable.

In the afternoon we spent most of the time discussing project plans and how things are going to work for the final period of the project. In September the project opens up to other schools – anyone will be able to join – so there are a lot of things to get right (training, connections, the organisation of school partnerships). The idea is that two classrooms in different countries will work together on a 10-week project, devoting a couple of hours a week to it, building up a collaborative animated story online using Scratch. It’s a great, ambitious project, involving tech, art, storytelling, transnational collaboration, and themes of social justice.

There was a lot of work done that afternoon, but we did pause for a game I call “Europeans trying Marmite”.

Women in Tech Cymru summer conference

On July 7th Aberystwyth University hosted the first summer conference for Women in Tech Cymru, a new group which looks to support and network women working in tech in Wales. We were lucky to get the Vice Chancellor of Aberystwyth University to come down, open the event and welcome everyone to the uni, and then we had a keynote from Phillipa Davies. I’m afraid I can’t say much about the keynote, as I was out on the registration desk for that slot.

The event was an “Unconference”, which means that largely speaking the attendees made up the conference on the fly by pitching sessions and then breaking out into small groups to discuss things. Here’s a photo of some people unconferencing.

I’ve set up a Google Drive folder for any slides or materials, and will add them as I get them.

Some notes on refreshment and childcare

I am a strong believer that if you’re running an event on a Saturday that’s not for kids, you should put on something for the kids or at least offer childcare. It’s not a problem I’ve ever had to solve myself but I have enough friends and family who’ve managed to reproduce to understand that this is the big sticking point when it comes to attending weekend stuff, particularly although obviously not exclusively for women. Thanks to BCS Mid Wales we were able to employ 2 people from the award-winning Aber Robotics Club to run a LEGO robots workshop with 6-16 year olds. They built LEGO robots from scratch, raced them remote controlled by Android tablets, modified them for various purposes (speed, agility, strength) and then had a robo-sumo contest against our house robots. This was a major hit – one mum had difficulty getting her offspring to leave.

We wanted to have an event that was free for attendees, but also wanted to raise some money for running costs. So we sold tickets as “supporter tickets” (£15) or “big supporter tickets” (£30), with free entry available for anyone; a sort of pay-as-you-feel conference. I really liked this idea and it’s one I’ll use again, as it means people don’t have to commit money up front if they’re not sure, but it’s also easy for people to tangibly help the event. There was no difference at all between supporter and free attendees at the event, so no kudos (or stigma) attached to either decision. In the end, about half the lunch cost came from sponsorship and half from ticket sales, which is great.

Jamie McCallion of 13Fields Ltd kindly contributed sponsorship towards lunch, and BCSWomen sponsored coffee breaks, so we were sufficiently caffeinated and fed. We even managed to rustle up funds for 70 Welshcakes in the afternoon break:-)

The tech stream

I had never been to an unconference before and thought it might be useful to have a bit of a more traditional conference element too – it let us have more detail in the programme, for starters, which I think helps people decide whether or not to attend. As Aber were hosting it, I twisted the arms of 3 superb colleagues to run 30-minute introductory tech talks on hot topics. First up was Azam Hamidinekoo, with “Why you should be interested in deep learning and how you can get started”, which provided an overview of some recent exciting developments in neural networks. This was a very clear talk and I’ve had some great feedback from attendees on this.

Next up, just after lunch, was Amanda Clare talking about word2vec, and ways in which the vectorisation of word semantics can be useful in working out the semantic relationships between concepts, but can also be problematic in terms of bias (the associations with feminine terms are often worse than those with masculine terms for example).

And our final tech talker was Edel Sherratt, talking about the Internet of Things, and how we probably want to start looking at formal models and specifications for this particularly when we consider security. Edel has done a lot of work on standards in other contexts and it was really interesting to see this stuff being put forward in a different context. This work could really have impact if it takes off!

At the end of the day there was a panel session, followed by a group photo and a bunch of people drinking lemonade on the pier and admiring Aberystwyth. The photo is below – it captures less than half the attendees as a lot of people left at 3 (some football match or other)…

All I want for Christmas is a drum kit ballpit

At EMFCamp in 2016 there was a musical ballpit, and ever since having a go in it (which was my first ever ballpit experience) I have wanted to build one.

This has become something of an ongoing project. First, I contacted some soft-play suppliers to find out about the cost of balls, and it turns out that soft-play balls (proper ones for use in commercial soft-play facilities) cost quite a lot: 15p+VAT each. Which doesn’t sound like a lot, but to fill a large paddling pool that was going to be something like £2,000 (at 18p a ball including VAT, that’s over 11,000 balls), and even if I tried to bodge it with a smaller paddling pool it was going to be really very expensive indeed.

So plan B was formed. I figured that 5p a ball was OK, and I wasn’t bothered with quality. So for the last 18 months or so, every time I’ve seen kids play balls on special offer (in the supermarket, in B&M, in Lidl…) I’ve bought them as long as they’re £5 or less for 100. They pile up. Slowly, but they pile up.

There’s more to the problem of building a drum-kit-ballpit than just having a lot of balls though. You need to make the balls do something – in particular, motion in the ballpit needs to trigger sounds. I did this through a mixture of OpenCV and pygame, in Python, and you can see my hacky code on GitHub. Basically the program runs the webcam and plays a drum sound if it sees motion in specified parts of the image; the readme on that project describes in more depth how the code works, if you’re interested in the details. Point the webcam at a ballpit, set the program running, bang the drums.
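
The guts of it are small enough to sketch here. This isn’t the actual code from the repo – the sound file name, the drum region and the threshold below are made up for illustration – but it’s the same approach: difference each webcam frame against the previous one, and fire a pygame sound when a region shows enough motion.

import cv2
import pygame

# Hypothetical sample file -- any short drum WAV will do.
pygame.mixer.init()
snare = pygame.mixer.Sound("snare.wav")

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

DRUM = (100, 100, 200, 200)   # one "drum": x, y, width, height in the image
THRESHOLD = 500000            # motion score that counts as a hit (tune by eye)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)   # crude motion detector: frame differencing
    prev = gray
    x, y, w, h = DRUM
    if int(diff[y:y+h, x:x+w].sum()) > THRESHOLD:
        snare.play()
    cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2)
    cv2.imshow("ballpit", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()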

At the BCS Mid Wales show and tell last week I put the components together – paddling pool, balls, tripod, and motion detection. I hadn’t actually tested it all together before the evening, although I had got Helen to agree to sit in it. As you can probably guess from this slightly giggly video (thanks Colin):

Ballpits are fun.

Building a retro-games controller

I decided to build a retro games controller based on something I saw on the internet. There are lots of discussions and videos and howtos, but to be honest I’ve never been particularly good at following instructions so I just bought a kit from Arcade World (the two-player Xin-Mo board one – here’s a link) and had a go at bodging it together.

It came with some instructions. Here’s a picture of the instructions, along with a pound coin for scale. I did read these instructions. Then I googled, to find slightly more detailed instructions. Then I went “fuck it” and just got on with the job.

Prototype #1 had 6 buttons per player (with select and start, too). It was based on a plank we found in our garage when we moved in.

The buttons look pretty cool when they’re attached. Nice bright colours. Tidy.

Then I wired it up. The kit came with “easy” clip-together wires which had a jumper at the board end and a metal clip at the microswitch end. I can tell you now that it is remarkably easy to wire a joystick up incorrectly – upside-down is my most common error.

However, this wiring and board had a lot of good properties for a prototype.

  • If I plugged it in to my laptop (running Linux, of course), and ran jstest on /dev/js0, and pressed the buttons, they all sent some kind of signal
  • If I plugged it into a Raspberry Pi running Retropie it identified as a controller
  • Plugged into the laptop it could control, badly, a driving game. Accelerate and reverse were unintuitive, but left and right worked
  • The joysticks might have been upside down but they felt really cool to play

There were also a couple of negative aspects, though.

  • I couldn’t get RetroPie to calibrate the thing as it was expecting more buttons on a Xin-Mo (I thought)
  • The left and the right joystick both controlled the same character so 2-player didn’t really work
  • The left and right controls were too close together anyway for comfortable play

So I removed the components from the board, smashed it up with a hammer, and put it in the woodpile. Time for a new plank and a new start with a little more understanding. Not too much understanding though. There were a couple of big mistakes left to go.

Removing the components from the board is non-trivial as the metal clip-on attachments don’t come off easily – indeed, once they’ve clipped on they’re not supposed to come off at all. So a small screwdriver and a bit of leverage was required every time I wanted to change the wiring – for example, to remove the components from the old board, and then to make sure the joysticks were wired the right way up. Again. Prising the connectors open was fiddly and slippery and generally not straightforward.

Of course having bent the metal clip open they then started falling off all over the place during testing. So then I had to squeeze them all shut again so that I wasn’t debugging software and hardware at the same time.

Anyway, I got it all together on the new board with each button attached to the Xin-Mo interface on the right pin (eventually). Tested it on the laptop – worked OK. Tested it on the RetroPie setup and I could navigate menus, select stuff, and even start a game. There is a modification you have to make to some config files to get the controller to identify as two joysticks, so I did have to edit some stuff (instructions on the RetroPie site). Getting closer.
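
For the record, as I remember it the key change boils down to a one-line kernel quirk that makes the HID driver split the board into two devices. Something along these lines goes in /boot/cmdline.txt – but treat this as a sketch, and check the RetroPie instructions and your own board’s vendor/product IDs (lsusb will tell you) before copying it:

usbhid.quirks=0x16c0:0x05e1:0x040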

Still one big problem though. Some of the buttons were behaving really strangely. Back to jstest to see what’s what…

What jstest gives you is a big table of inputs and when you touch the controls, it shows you what’s coming in for each input. The snippet below shows me moving the joystick (Axes up and down), and pressing button 0, 1, 2, and 3 in turn.

Axes:  0:     0  1:-32767 Buttons:  0:on   1:off  2:on   3:off   ... 
Axes:  0:     0  1:     0 Buttons:  0:on   1:off  2:on   3:off   ... 
Axes:  0:     0  1: 32767 Buttons:  0:on   1:off  2:on   3:off   ... 
Axes:  0:     0  1:     0 Buttons:  0:off  1:off  2:on   3:off   ... 
Axes:  0:     0  1:     0 Buttons:  0:on   1:on   2:on   3:off   ... 
Axes:  0:     0  1:     0 Buttons:  0:on   1:off  2:off  3:off   ... 
Axes:  0:     0  1:     0 Buttons:  0:on   1:off  2:on   3:on    ... 
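
jstest is just echoing events from the kernel’s joystick interface, so if you prefer you can watch the raw events from Python instead – a minimal sketch using the 8-byte event format documented in linux/joystick.h:

import struct

# Each event on the device is 8 bytes:
# u32 time (ms), s16 value, u8 type, u8 number (see linux/joystick.h).
EVENT_FORMAT = "<IhBB"
EVENT_SIZE = struct.calcsize(EVENT_FORMAT)
JS_EVENT_BUTTON, JS_EVENT_AXIS, JS_EVENT_INIT = 0x01, 0x02, 0x80

with open("/dev/input/js0", "rb") as js:
    while True:
        time_ms, value, ev_type, number = struct.unpack(
            EVENT_FORMAT, js.read(EVENT_SIZE))
        ev_type &= ~JS_EVENT_INIT  # strip the "initial state" flag
        if ev_type == JS_EVENT_BUTTON:
            print("button", number, "on" if value else "off")
        elif ev_type == JS_EVENT_AXIS:
            print("axis", number, value)  # -32767..32767, as jstest shows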

Now the more observant reader will notice that some of these switches default to on, and others default to off. Whoops. Turns out that microswitches have three prongs to attach wires to – common, normally open and normally closed – and it actually matters which ones you choose: wire up the normally closed prong and your button reads as on until you press it. Who knew? On the side of the switch, top prong good, bottom prong bad.

Finally there were a couple of buttons which didn’t have leads (maybe I lost a piece of wire? or maybe they were spare buttons?). I soldered those in with a spare cable I stole from Tom of Sam and Tom industries. Doesn’t everyone do their soldering on the cooker?

Once everything tested OK, I put a base plank on so that you can have the controller on your lap on the sofa as well as on a surface. This is the finished product:

And this is Sonic the Hedgehog II on my nice big telly.

BCSWomen Lovelace Colloquium 2018

The 11th BCSWomen Lovelace Colloquium was held just before Easter, at the University of Sheffield with support from Sheffield Hallam University. Regular followers of this blog will know that the day has a well-defined format, with student posters, speakers, a panel on computing careers, and a social at the end of the day. We also have a cake sponsor, so we always have too much cake.

cake

This was the first colloquium since I started the conference in 2008 where I wasn’t in some sense the conference chair. Cardiff, in 2010, was chaired by Miki Burgess (and I didn’t even make it to the event), but I was still involved in the financial side and the lineup. This year, I handed over properly to Helen Miles and was officially just deputy chair. We had two excellent local chairs in Heidi Christensen (tUoS) and Deborah Adshed (SHU), and our long-term supporter Amanda Clare came to lend a hand too. But Helen was in charge.

Any worries I might have had about the running of the event evaporated on arrival. Due to some bad fog in Aberystwyth I set off much later than anticipated (not wanting to drive with zero visibility), and then we got lost in Stockport just for fun, so I turned up about 4 hours after I had planned to arrive.

On the one hand, whoops!… on the other hand, Heidi, Amanda, Helen and company had managed to do pretty much all the pre-event stuff without me. Magic.

On the day it was actually huge fun helping out without feeling I had to go to every session. So during the afternoon talks I helped tidy up, made sure the sponsors and stallholders knew what was going on, collected poster contest judging results and so on – all of which helped the day go more smoothly and left Helen to be the figurehead.

Indeed this was, I think, the fourth Lovelace where Helen, Amanda and I have been on the helper staff together. Between us we are quite good at working out what needs to be done and just getting on with it. To be honest, seeing Helen in the chair was just brilliant. Partly because she did such a great job, and partly because for the first time ever there was someone at the Lovelace more stressed than me (sorry Helen:-).

Here’s me and Helen at the evening social – relaxing at last.

me and helen

Robot club coding project

In Aberystwyth Robotics Club we run after-school robot-based activities for local schoolkids. We’ve just started a new term, and this year we decided to take in twice as many new starters as before. It seemed a good idea to be more open and let more kids have a go; it doesn’t matter if they haven’t been keen enough in the past to try out computing or robotics, what matters is that they’re interested now and they want to give it a go. If they drop out at Christmas, that doesn’t matter either – they’ve given it a shot.

So we are running two parallel sessions for the complete beginners. Half of them are doing a new “coding taster project” and half are doing the normal robot club starter, which is soldering, circuits, Arduino basics and a bit of LEGO robots, and then we swap. I’ve been running the coding half, and we’ve been hacking around with face-controlled games. The rest of this blog post is about the coding stuff – and what it’s like doing computer-vision-based real-time coding with a bunch of 11-12 year olds.

If you’re just here for the resources to try this yourself they’re linked at the bottom of this post.

What did we do?

I came up with a four week programme of sessions (2h per session) which use the computer vision library OpenCV to animate, read the camera, detect things in the camera and then make a game. Broadly speaking the plan is as follows…

Week 1: animation

The first week was all about simple graphics. Drawing dots, squares and lines on a canvas, having a loop and changing things in the loop, thinking about colours and R,G,B space.
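
Each week’s starter file gave them something moving on screen straight away. Week 1’s was along these lines (a representative sketch, not the exact file from the repo):

import cv2
import numpy as np

x = 0
while True:
    # A fresh black canvas every frame: 480 rows, 640 columns, 3 channels.
    canvas = np.zeros((480, 640, 3), dtype=np.uint8)
    # OpenCV colours are (B, G, R), not (R, G, B) -- a classic week 1 gotcha.
    cv2.circle(canvas, (x, 240), 30, (0, 0, 255), -1)        # moving red ball
    cv2.rectangle(canvas, (50, 50), (150, 150), (0, 255, 0), 2)
    x = (x + 5) % 640   # nudge the ball along each time round the loop
    cv2.imshow("animation", canvas)
    if cv2.waitKey(30) & 0xFF == ord('q'):
        break
cv2.destroyAllWindows()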

Week 2: video and faces

Next up we looked at video and faces. This involved thinking of a video as a set of still images (get an image from the camera, put it in the window, repeat). Then we tried running the face detector, and working out what the face detector can do. This session was a bit short – we spent some time changing sensitivity settings, and min/max face size settings, and trying to fool the face detector by drawing pictures of faces. I’ve beefed this session up in the worksheets so hopefully next time round (in 2 weeks’ time) it won’t be short.
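
The detection itself is OpenCV’s stock Haar cascade; the loop looks roughly like this sketch (the parameter values shown are the “sensitivity” knobs we played with, not canonical settings):

import cv2

# Face detector that ships with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # scaleFactor, minNeighbors and minSize are the settings we fiddled with.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.3,
                                     minNeighbors=5, minSize=(60, 60))
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()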

Week 3: collision detection (when does the ball hit the face?)

This session involved bringing together week 2 (rectangles around faces) and week 1 (balls moving on the screen) and working out when the rectangle and the ball overlap. This is harder than it sounds.
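
For the curious, the tidy way to do it is to clamp the ball’s centre to the face rectangle and compare the distance to that nearest point against the ball’s radius. A sketch (not necessarily how the repo version does it):

def ball_hits_face(bx, by, radius, fx, fy, fw, fh):
    # Nearest point on the face rectangle to the ball's centre.
    nearest_x = max(fx, min(bx, fx + fw))
    nearest_y = max(fy, min(by, fy + fh))
    # Overlap if that point is within one radius of the centre --
    # this handles the corners properly, which is the fiddly bit.
    dx, dy = bx - nearest_x, by - nearest_y
    return dx * dx + dy * dy <= radius * radius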

Week 4: game time

This session introduced classic game elements: scores, levels, lives and so on. There was also a lot of changing colours, and using graphics to overwrite the face area (e.g. with a smiley face – see the sketch below). Customisation was a very popular activity – no two games looked alike.
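
The smiley is just more week 1 drawing aimed at the detected face rectangle, plus a bit of text for the score. Something like this hypothetical helper (names and proportions invented for illustration):

import cv2

def draw_hud_and_smiley(frame, score, lives, face):
    # Score and lives in the top-left corner.
    cv2.putText(frame, "Score: %d  Lives: %d" % (score, lives), (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2)
    # Cover the face rectangle with a yellow smiley.
    x, y, w, h = face
    cx, cy, r = x + w // 2, y + h // 2, min(w, h) // 2
    cv2.circle(frame, (cx, cy), r, (0, 255, 255), -1)                     # head
    cv2.circle(frame, (cx - r // 3, cy - r // 4), r // 8, (0, 0, 0), -1)  # eyes
    cv2.circle(frame, (cx + r // 3, cy - r // 4), r // 8, (0, 0, 0), -1)
    cv2.ellipse(frame, (cx, cy + r // 5), (r // 2, r // 3),
                0, 20, 160, (0, 0, 0), 3)                                 # smile
    return frame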

How did it go?

At the end, everyone had a game working, and a lot of the kids wanted to spend longer on their project. This was not all their own work – we have a lot of volunteers so I think the ratio was about 1:3 helper-to-student, and the project is really ambitious. But I do think all the kids have a fairly solid understanding of how their game fits together, even if they didn’t write all the code themselves.

The code was quite hacky and the entire project was based around getting something working quickly rather than coding properly. A lot of them had tried Scratch before, but I think only one or two had looked at any textual programming language, and Python is quite a leap from Scratch. I was keen to keep it fun, so I provided them with a lot of code to edit (each week had a starting code file which they had to develop). In this way they got things on the screen and spent their time doing animation rather than doing exercises. Robot club is after all supposed to be fun…

I think that all the kids have learned a few fairly key concepts associated with coding: case sensitivity, indentation, if statements, debugging, comments, the idea of iteration, the idea of a variable, and the general way coding best proceeds by small changes and lots of testing. I don’t think any of them could take an empty file and write a python program, but they could probably look at a simple program with conditionals and so on and have a guess at what it does, and maybe edit some aspects.

In terms of higher level concepts, they have a good idea about what a face detector is, and how it works, they have a grasp of image-based coordinate systems, maths and inequalities in 2d space (collision detection), the idea of a colour being made up of R, G and B, and the idea of a video being a set of images presented quickly in succession.

They’re also all able to tell the difference between different quote marks, and can find the colon symbol on a keyboard, although I die a little every time I hear myself refer to “#” as “hashtag”.

Materials

If anyone wants to try this at home or indeed anywhere else… we ran the workshop on Linux, in a lab with OpenCV and Python installed. You can find the instructions for the kids here: worksheets for kids on Google Docs, and the code samples broken down by week here: https://github.com/handee/ARCfacegame. Let me know if you use them.

Thinking and learning about play

I’ve just finished a MOOC (massive open online course) on play, with Futurelearn and the University of Sheffield: Exploring Play. Ideas about play have been coming up quite a bit in my work in the last few years – both in teaching (gamification, exploration) and in research (particularly the research I’ve been doing into kids and coding). But I didn’t really know much about theoretical or practical ideas of play, particularly outside of computing, so I signed up for the MOOC to take a broader look.

I found that early on in this course, the readings about play types enriched my conception of what play could be. Thinking about play in terms of taxonomies of play (rough and tumble play, imaginative play, etc.) has helped me break down what we mean when we say play, and I have found it useful to think about the things children do in terms of these taxonomies. I even found myself wondering how many different kinds of play particular activities or equipment afford, and wondering if I could alter activities to include more variety in play type. There’s an implicit assumption here, which is that play is good, and lots of types of play are better than one type of play. This has clear implications for the kinds of work we do with schoolkids (and to a lesser extent with uni students); a lot of our activities have time for exploratory play (“what does this do”). Thinking about play types leads us to try and incorporate other types of play, e.g. creative play (“what can I make this do”, “what can I make with this”), mastery play (“can I get better at playing with this”), communication play (“can I use this to communicate?”). This brings more variety into the activity, which may well end up with deeper learning.

The course was very broad, which I liked – I did it to get the big picture, and for that it really succeeded. We looked at cross-cultural play, online play, the spaces we play in, historical attitudes to play, disability and play…

Thinking about how those with disabilities can access play turned a lot of my ideas upside down: thinking in terms of play as activities with particular values leads to a normative understanding of play. The taxonomies provide a rich conception of what play could be, but they don’t dictate what it should be. Reading case studies about play and disability showed me that this normative conception (play should be educational, for example) doesn’t have to hold. Play doesn’t have to be something that lets children rehearse ideas for the real world. It can be something for itself – catching a ball repeatedly, fidget spinners, and other repetitive actions can all be playful in some way. These could be seen as mastery play, developing fine motor skills, or they could just be play. Allowing people the time, space, and equipment to explore play through whatever actions they can manage is something we need to do for our children and for ourselves.

In all I enjoyed the MOOC a lot – I think it will take a while for the ideas to settle in my mind as we touched briefly on a lot of different topics, but I also think that some of the things I’ve learned will be put in to action in my teaching and outreach activities pretty soon.

A tale of 3 engagements

For the last 9 weeks I’ve been visiting the University of Girona (UdG), and working on some research in Vicorob and Udigital. I’ve taken part in three engagement activities whilst I’ve been here – even though I don’t speak the language. It turns out that with colleagues to help translate, it’s possible to be useful even without many words, although in the first two workshops I was more of an observer/helper than a facilitator. The first of these was an underwater robotics workshop, with a visiting class of around 15 teenagers; the second was a wheeled robotics workshop with 9 adults in a high-security prison; and the third was an “unplugged” activity looking at Artificial Intelligence and Alan Turing with about 150 teenagers in 6 consecutive groups. The rest of this post has a bit more info on each.

Underwater robotics

This workshop took place in CIRS (Centre d’Investigació en Robòtica Submarina) at UdG, and was written and led by Xevi Cufi. The group came from a nearby boys’ school; they had been working on their robots back in school for a while, in groups, and had come to CIRS for the final construction and testing. These robots are made out of plumbing pipe, and have three motors: two provide forwards and backwards thrust for the left and right sides of the robot, and the third gives up and down. The basic robots were complete before the workshop, and in this session the participants did the final wiring (the controller is attached by a tethered wire) and water tests. Once the wiring was done, the first water test involved getting the robot to neutral buoyancy in a large bucket, by attaching floats to the frame.

Then they had to try and make the cable neutral too, by attaching bits of float at regular spacing along the tether.

And finally the students got to use their robots in the test pool (CIRS has a massive pool for testing robots). Seeing this come together was great – the students were all fired up to run their contraptions in the water, and they all worked really well.

This is a big project, and I think the students had been working on their robots for a couple of weeks on and off. I expect a build from scratch would take a few days, as there’s soldering, wiring, building, testing and a lot lot lot of waterproofing (fairly obviously). The payoff is fab though: they clearly got a real sense of achievement piloting their own robots around the pool, picking up objects, and trying not to get their tethers in a knot. With one underwater video camera and a live link to a monitor, which was passed between robots, the workshop really came alive. I’d like to try and run this workshop in Aberystwyth.

Wheeled robots

The second workshop couldn’t have had a more different target audience. Instead of teenage Catholic schoolboys, there were adult prisoners in the Puig de les Basses prison just north of Figueres. In this workshop (also designed and led by Xevi) we used small wheeled Arduino robots, and programmed them in groups to flash lights, display messages, and move backwards and forwards. We have done a lot of wheeled robot workshops as part of the Early Mastery project (and before), and this one followed the general format (get something to run on the robot, modify that code, get the robot to move forwards and backwards). We had about 2 hours, and the participants worked in groups. Here’s a picture of the robot (taken during preparation) – you should be able to make out the LCD display, the LEDs and the wheels.

In the workshop the participants got to grips with the flashing lights activity very quickly, and the group I was working with seemed to be having fun setting up traffic lights using the R, G and Y LEDs. When the idea of the LCD display screen was introduced, my group decided to get it to give instructions to match the traffic lights (so it said “go” on green, etc.). This was a bit more elaborate than planned – the idea was they were just going to get it to say “hello” or something then move on to the next task – but they were enjoying the coding and working a lot of things out for themselves so we just let them run with it. As soon as one of the other groups got their robot to move, everyone changed their mind and wanted to move on to the next task anyway.

I don’t have any photos of the actual workshop as security was very tight and we weren’t allowed to take in phones, cameras or anything like that. Here’s a picture of the outside of the building though:

It’s amazing how the same thing happens in every robot workshop – whether it’s with 6-year-old kids or 50-year-old prisoners. As soon as one of the groups gets a robot to actually move, the atmosphere changes and everything moves up a gear. There is something intrinsically motivating about writing a program on a computer and getting it to move something in the real world. As a programming environment, they used Visualino, which provides a block-based interface to Arduino C; I hadn’t seen this before but was very impressed, and I might use it in future.

AI Unplugged

The final engagement activity I have been involved in out here is based upon the AI workshop that we wrote as part of the Technocamps project. This workshop has several components, and UdG were asked to do 6 consecutive 25-minute workshops with schoolkids in the town of Banyoles, as part of their end-of-term robotics project (actually, 3 sets of 6 consecutive workshops). So with a lot of help from Eduard we created some bilingual slides (English/Catalan) and did a double act. You can see the slides here.

In another room, Marta and Mariona were talking about STEAM and coding, and in yet another room Jordi was talking about various robotics challenges and activities, so the Udigital.edu team was out in force. Here we are having breakfast before the day begins…

The schoolkids had apparently been working on general robotics projects for a couple of weeks at the end of term, so we started by doing a tour of their demos, and saw some lovely little line followers, skittles robots, hill climbers and generally lots of excellent Arduino goodies. Here’s one of their projects.

In the workshop Eduard and I ran, we had a set of votes, asking the students if they think computers can think. The way the workshop is structured, we had a vote at the start (to get their initial opinions), then we did an activity which encouraged them to think about what intelligence is, by ordering a load of things (chess computer, sheep, tree, self-driving car, kitten, human… there are about 30 things). This gets them to consider what thinking involves, without actually being explicit or telling them what we think. Then we had another vote. After this we discussed what aspects of intelligence they might think were important, and what aspects computers could do now, then we had a final vote and concluded with some talk about Alan Turing and the Turing Test.

The reason I like to get the participants to vote, repeatedly, on whether they think computers can think, is so that we can see if anyone changes their mind. In my experience (and I’ve run this workshop loads of times – maybe 50 times) people always do change their minds once they’ve thought a bit more about the question; it never ceases to surprise me how different groups can be, too. This time, some groups arrived confident that AI was possible and that computers could think. Some of the others arrived with hardly anybody in the group positive about the potential of AI. We changed some minds though – some in one direction, some in the other.

Here’s a graph of the three vote results, displayed as a proportion of those attending who said “yes” or “maybe” to the question “Can Computers Think?”

This workshop worked well, as you can see from the graph: in every group we managed to get people to think hard enough that some of them changed their minds. It was also great fun, if a bit relentless, running 6 workshops back to back. I think we saw about 150 kids.

Thanks

So thanks, Udigital, for letting me join in and see what you do in terms of outreach. It’s been a great 9-week visit, and I’ve got some ideas that I definitely want to try back in Aberystwyth.

EMRA17

I’m visiting Girona Uni at the moment as part of my sabbatical term, and whilst I’m here I’m trying to expand my horizons a bit academically. So this week I attended a workshop on marine robotics, which just happened to be going on whilst I’m here, and they let me attend for free. The workshop is about marine robotics, but it is not just a research conference: attendees came from 30 research centres and 12 companies, and presentations came from 14 EU projects, 4 national projects and 4 companies. On day one, I saw 16 of the talks and then skipped the rest (including the demo and the dinner) as my folks were visiting and I thought I should probably spend some time with them:-)

Marine robotics is a bit outside my area, so it was challenging to sit in and try and follow talks that were at the limits of my knowledge. The conference was also considerably more applied than many of the conferences I go to – companies and researchers working together much more closely, and much closer to product; some of the things presented were research, others were actual pieces of kit that you can buy. The applications varied too, from science through to mining. The EU funding that supports these systems is really driving forward innovation in a collaborative way – many of the projects involved tens of institutions, from university research teams through SMEs to big companies.

The keynote came from Lorenzo Brignone of IFREMER, the French research centre that deals with oceanographic stuff. They have quite a fleet (7 research vessels), with manned submersibles, ROVs (remotely operated vehicles), AUVs (autonomous underwater vehicles), and a hybrid HROV (AUV/ROV), which was the topic of the keynote. Brignone works in the underwater systems unit, which is mostly made up of engineers. The key problem is that of working reliably underwater near boats which don’t have dynamic positioning – the surface vehicle might move hundreds of metres, so we need an ROV that is more independent in order to carry out scientific missions reliably. The design includes the whole system, with on-ship electronics, tether, traction, and a weighted underwater station which includes a fibre-optic link to the HROV. This lets the hybrid system work with vessels of opportunity, rather than waiting for science boats to become ready. Two DVL (Doppler velocity log) systems give accurate underwater location. The final output is a semi-autonomous vehicle which can be worked by general users (the engineers don’t even have to be on the boat).

The next morning talk covered the DexROV project, which is looking at systems that can control dextrous robots at a distance (hopefully onshore, removing the cost of hiring a boat). The aim is to get robots that can interact underwater, like divers can. This is controlled by an exoskeleton-based system – basically, the operator wears an arm and hand exoskeleton which the robot then mimics.

SWARMS – smart and networking underwater robots in cooperation meshes – is a 31-partner consortium looking at networking tech as well as robotics tech. The project is also developing middleware which will let various heterogeneous systems (AUVs, ROVs, mission control, boats) cooperate, with an underwater acoustic network linking to wireless on the surface.

Next up was Laurent Mortier from the BRIDGES project, a big H2020 project (19 partners including 6 SMEs) looking at glider technology. These systems are very low-power underwater vehicles which can cover very long journeys, collecting data. Gliders create small changes in buoyancy, using wings to drive themselves forwards. This project looks to increase the depth at which gliders can work, which enables a greater range of scientific questions to be answered. The kind of data they look for depends on the payload, which can be scientific or commercial (searching for things like leaks from undersea hydrocarbons, or finding new oilfields).

Carme Paradeda of Ictineu Submarines presented next, on moving from a manned submarine to underwater robots in a commercial setting. Their website is http://www.ictineu.net/en/; they’ve invested 3 million euros and more than 100,000 hours of R&D in the creation of their submarine. This is a manned submarine whose development involved creating new battery technology – safer batteries for operating at high pressure.

Marc Tormer of SOCIB (a Balearic Islands research centre) also talked about gliders. The aim is to change the paradigm of ocean observation: from intermittent missions on expensive research vessels, to continuous observations from multiple systems including gliders.

Graham Edwards from TWI (The Welding Institute) talked about the ROBUST H2020 project, which addresses seabed mining. The resources they’re looking for are manganese nodules; they’re also looking at cobalt crusts and sulphide springs. The system uses laser spectroscopy on an AUV with 3D acoustic mapping tech, to try and get away from the problems associated with dredging.

Pino Casalino of the University of Genova (ISME) had the last slot before lunch, talking about MARIS, an Italian national project working towards robot control systems for marine intervention. This provided another overview of a big multi-site project, looking at vision, planning and robot control. I have to admit that at this point my attention was beginning to wander.

One group photo and a very pleasant lunch later (I declined the option of a glass of wine, but did take advantage of the cheesecake and the espresso machine) we were back for an afternoon of talks.

The difficult post-lunch slot fell to Bruno Cardeira, from the Instituto Superior Técnico (Lisbon), talking about the MEDUSA deep-sea AUV. This was a joint Portugal-Norway project with a lot of partners, looking at deep-sea AUVs to survey remote areas up to 3000m depth. They wanted to do data collection and water column profiling, resource exploration, and habitat mapping, with the aim of opening up new sea areas for economic exploitation.

Bruno also presented a hybrid AUV/ROV/Diver navigation system, the Fusion ROV which is a commercial product. This talk had a lot of videos with a loud rock soundtrack, which is one way to blow people out of their post-lunch lull, I guess.

The next talk came from Chiara Petroli, of the University of La Sapienza (Rome), talking about the SUNRISE project, which is working on the Internet of Things for underwater robotics: underwater networking in a heterogeneous environment. Long-distance, low-cost, energy-efficient, secure comms… underwater, where of course wifi doesn’t really work. Dynamic, adaptive protocols which largely use acoustic communications have been developed.

Unfortunately by the end of this talk we were already running 15 minutes late (after just two talks). So the desire for coffee was running high in the audience, and I think I detected a snore or two.

Andrea Munafo from the National Oceanography Centre in Southampton talked about the OCEANIDS project, sponsored by the UK’s NERC (Natural Environment Research Council). This programme is building some new AUVs which will enable long-range, autonomous missions. One of them is called Boaty McBoatface.

The last talk from this session came from Ronald Thenius, of Uni Graz, talking about subCULTron – a learning, self-regulating, self-sustaining underwater culture of robots. It’s 7 partners from 6 countries, aiming to monitor the environment in the Venice lagoon with the world’s largest robot swarm, using energy autonomy and energy harvesting. Because Venice is big and has a lot of humans, the cultural aspect is quite important. The players: 5 aPads (inspired by lilies, with solar cells and radio comms), 20 aFish (inspired by fish; move around, communicate), and 120 aMussels (inspired by clams; many sensors, passive movement, NFC, energy harvesting). I liked this talk a lot.

Post coffee break, it was the turn of Nikola Miskovic, from the University of Zagreb, talking about cooperative AUVs which can communicate with divers using hand gestures and tablets. The project (CADDY – autonomous diving buddy) allowed a number of advances, including letting the diver use Google Maps underwater. “The biggest challenge when you do experiments with humans and robots is the humans”:-)

Jörg Kalwa of ATLAS ELEKTRONIK GmbH spoke on the SeaCat story – from toy to product. SeaCat is an AUV/ROV hybrid with a variable payload. This grew out of various precursor systems (experimental and military) – the talk covered the various robots which are ancestors of the current vehicle. The current incarnation is a commercial robot which does pretty much everything you might want an ROV to do, but the price point is pretty high.

The penultimate talk was from Eduardo Silva, ISEP / INESC TEC in Porto (Portugal), talking about underwater mining in flooded opencast mines. The project has a great acronym – Viable Alternative Mine Operating System, or VAMOS. It’s a big project (17 partners from 9 countries) with a bunch of collaborating robots, including AUVs which look much like many of the others (torpedo-like), and other underwater vehicles which look a lot more like mining vehicles – tracked tanks, with massive drills and so on.

The day finished with the European Robotics League – a UEFA Champions League for robotics, covering service robots, industry robots and outdoor robots. This talk came from Gabriele Ferri, CMRE, on emergency robotics: ground, underwater and air robots cooperating in a disaster response scenario. The mission is to find missing workers (mannequins) and bring them an emergency kit, survey the area, and stem the leak by closing a stop cock.

To be honest, my take home from this workshop is: underwater robots are cool, and Brexit is an awesomely stupid idea.