We’ve had our first journal paper published from my EPSRC first grant. It gives a comprehensive review of work on the automated image analysis of plants – well, one particular plant, Arabidopsis thaliana. It’s by Jonathan Bell and myself, and it represents a lot of reading, talking and thinking about computer vision and plants. We also make some suggestions which we hope can help inform future work in this area. You can read the full paper here, if you’re interested in computer vision and plant science.
The first grant as a whole is looking at time-lapse photography of plants and aims to build sort-of 3D models representing growth. It’s coming to an end now so we’re wrapping up the details and publishing the work we’ve done. This means keen readers of this blog¹ can expect quite a few more posts relating to the first grant soon: we’re going to release a dataset, a schools workshop, and we’ll be submitting another journal paper talking about the science rather than the background.
¹ Yes, both of you
Last week I was invited to present at the first Human Centred Cognition summer school, near Bremen in Germany. Summer schools are a key part of the postgraduate training experience, and involve gathering together experts to deliver graduate-level training (lectures, tutorials and workshops) on a particular theme. I’ve been to summer schools before as a participant, but never as faculty. We’re at a crossroads in AI at the moment: there’s been a conflict between “good old fashioned AI” (based upon logic and the symbol manipulation paradigm) and non-symbolic or sub-symbolic AI (neural networks, probabilistic models, emergent systems) for as long as I can remember, but right now, with deep machine learning in the ascendant, it’s come to a head again. The summer school provided us participants with a great opportunity to think about this and to talk about it with real experts.
Typically, a summer school is held somewhere out of the way, to encourage participants to talk to each other and to network. This was no different, so we gathered in a small hotel in a small village about 20 miles south of Bremen. The theme of this one was human centred computing – cognitive computing, AI, vision, and machine learning. The students were properly international (the furthest travelled students were from Australia, followed by Brazil) and were mostly following PhD or Masters programs; more than half were computing students of one kind or another, but a few psychologists, design students and architects came along too.
The view from my room
The bit I did was a workshop on OpenCV, and I decided to make it a hands-on workshop where students would get to code their own simple vision system. With the benefit of hindsight this was a bit ambitious, particularly as a BYOD (Bring Your Own Device) workshop. OpenCV is available for all main platforms, but it’s a pain to install, particularly on Macs. I spent a lot of time on the content (you can find that here: http://github.com/handee/opencv-gettingstarted) and not so much time thinking about infrastructure or installation. I think about half of the group got to the end of the tutorial, and another 5 or 10 managed to get some way along, but we lost a few to installation problems, and I suspect these were some of the less technical students. I used Jupyter notebooks to create the course, which let you intersperse code with text about the code, and I think this may have created an extra layer of difficulty in installation, rather than a simplification of practicalities. If I were to do it again I’d either try to have a virtual machine that students could use or I’d run it in a room where I knew the software. I certainly won’t be running it again in a situation where we’re reliant on hotel wifi…
The class trying to download and install OpenCV
The school timetable involved a set of long talks – all an hour or more – a mix of keynotes and lectures. The speakers were all experts in their field, and taken together the talks truly provided a masterclass in artificial intelligence, machine learning, the philosophical underpinnings of logic and robotics, intelligent robotics, vision, perception and attention. I really enjoyed sitting in the talks – some of the profs just came for their own sessions, but I chose to attend pretty much the whole school (I skipped the session immediately before mine, for a read-through of my notes, and I missed a tutorial as I’d been sat down for too long and needed a little exercise).
It’s hard to pick a highlight as there were so many good talks. Daniel Levin from Vanderbilt (in Tennessee) gave an overview of research in attention and change blindness, showing how psychologists are homing in on the nature of selective attention and attentional blindness. I’ve shown Levin’s videos in my lectures before so it was a real treat to get to see him in person and have a chat. Stella Yu from Berkeley gave the mainstream computer vision perspective, from spectral clustering to deep machine learning. Ulrich Furbach from Koblenz presented a more traditional logical approach to AI, touching on computability and key topics like commonsense reasoning, and how to automate background knowledge.
One of my favourite presenters was David Vernon from Skövde. He provided a philosophical overview of cognitive systems and cognitive architectures: if we’re going to build AI we have a whole bunch of bits which have to fit together; we can either take our inspiration from human intelligence or we can make it up completely, but either way we need to think at the systems level. His talk gave us a clear overview of the space of architectures, and how they relate to each other. He was also very funny, and not afraid to make bold statements about where we are with respect to solving the problem of AI: “We’re not at relativistic physics. We’re not even Newtonian. We’re in the dark ages here”.
Prof Vernon giving us an overview of the conceptual space of cognitive architectures
When he stood up I thought to myself “he looks familiar” and it turns out I actually bought a copy of his book a couple of weeks ago. Guess I’d better read it now.
Kristian Kersting of TU Dortmund and Michael Beetz of Bremen both presented work that bridges the gap between symbolic and subsymbolic reasoning; Kristian talked about logics and learning, through Markov Logic Networks and the like. Michael described a project in which they’d worked on getting a robot to understand recipes from wikiHow, which involved learning concepts like “next to” and “behind”. Both these talks gave us examples of real-world AI that can solve the kinds of problems traditional AI has had difficulty with; systems that bridge the gap between the symbolic and the subsymbolic. I particularly liked Prof. Beetz’s idea of bootstrapping robot learning using VR: by coding up a model of the workspace that the robot is going to be in, it’s possible to get people to act out the robot’s motions in a virtual world, enabling the robot to learn from examples without real-world training.
Prof Kersting reminding us how we solve problems in AI
Each night the students gave a poster display showing their own work, apart from the one night we went into the big city for a conference dinner. Unfortunately the VR demo had real issues with the hotel wifi.
The venue for our conference dinner, a converted windmill in the centre of old Bremen
In all a very good week, which has really got me thinking. Big thanks to Mehul Bhatt of Bremen for inviting me out there. I’d certainly like to contribute to more events like this, if it’s possible; it was a real luxury to spend a full week with such bright students and such inspiring faculty. I alternated between going “Wow, this is so interesting, Isn’t it great that we’re making so much progress!” and “Oh no there is so much left to do!“.
Hotel Bootshaus from the river
Last week (on Friday) we held the Aberystwyth Image Analysis workshop. I think it was the 3rd, or maybe the 4th, of these I’ve organised. The aim is to have some informal talks and posters centred around the theme of image analysis (including image processing, computer vision, and other image-related stuff) from across Aberystwyth. To encourage participation from people whether or not they’ve got results, we have 10 minute slots for problem statements, short talks, work in progress and so on, and we have 20 minute slots for longer pieces of work. This year there were 4 departments represented in talks: Computer Science, Maths, Physics and IBERS (biology), and we had speakers who were PhD students, postdocs, lecturers and senior lecturers (no profs this year, boo!).
The range of topics covered was as usual very broad – images are used in research all the time, and it’s really useful to get together and see the kinds of techniques people are using. In Physics, they’re working on tightly and precisely calibrated cameras and instruments, using images as a form of measurement. In Maths the images are fitted to models and used to test theories about (for example) foam. In computer science people are working on cancer research using images, plant and crop growth modelling, and video summarisation (to name but a few of the topics). And the IBERS talk this year came from Lizzy Donkin, a PhD student who’s working on slugs.
Lizzy and I have been trying to track slugs so that she can model how they forage – she spoke for 10 minutes on the problem of slugs and imagery, and I spoke for 10 minutes on preliminary slug tracking results. Here’s a screenshot of my favourite slide, showing the wide range of shape deformations that a single slug can undergo.
On Wednesday, Aber Robotics Club put on a day of coding, gaming and robotics in Old College. We ran two workshops: one on Minecraft, and one on Mindstorms (lego robots). Each had about 30 kids in, and the aim was to have a techy day that taught attendees something new, but that was also fun: it was a summer holiday workshop after all.
In the Mindstorms lego robots workshop we did a mixture of activities – most of which I’ve blogged about before. We did the “program a humanoid robot” exercise, where we get kids to write down programs for their parents (who end up blindfold). We did the “steer a lego robot around a track” using remote control. And we did the “customise your lego robot then have a bit of a fight” Robot Wars style event to finish. These are all tried and tested activities which work really well together and made for a good day, with enough content and learning, and enough fun and chaos too.
We also had a visit and a talk from Laurence Tyler of Aberystwyth Computer Science, who works in our space robotics group. He talked about robots in space, Mars rovers, satellites, Philae, and all sorts of other cool stuff. He brought along Blodwen, our scale model of the ExoMars lander, and explained how stuff made in Aberystwyth is actually going to end up on Mars. The kids asked all sorts of excellent questions and listened attentively throughout, which was great.
Over in the Minecraft room, the aim was to try to build bits of the Old College building collaboratively. There was apparently quite a bit of destruction as well as construction, but when I popped in at lunchtime I saw some fairly recognisable college parts, so they all managed to get something built in the end.
In the afternoon, the Minecraft crew had an introduction to programming in Minecraft, starting with a demo of Jim Finnis’s castle generation software, which opened quite a few eyes and got a big “whoa!” from the audience: it’s a super piece of code that builds amazing castles programmatically. One of the key ideas you have to get in order to code in Minecraft is the idea of a 3D coordinate system (x, y and z): I’m not sure many of the kids had met that before, so there was quite a steep learning curve.
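For anyone who hasn’t seen it, here’s the flavour of what “building with coordinates” means. The `wall_coordinates` helper below is my own illustration (not from the workshop materials): every block in the Minecraft world has an (x, y, z) position, and on the Raspberry Pis you’d place each one using the Minecraft Pi API.

```python
# Every block in the Minecraft world has an (x, y, z) position.
# This helper lists the positions for a simple wall; on the
# Raspberry Pis you would then place each block with the Minecraft
# Pi API, e.g. mc.setBlock(x, y, z, block.STONE.id).

def wall_coordinates(x0, z0, length, height):
    """Positions for a wall running along the x axis (y is 'up')."""
    return [(x0 + dx, y, z0)
            for dx in range(length)
            for y in range(height)]

coords = wall_coordinates(0, 5, length=4, height=3)
print(len(coords))   # 4 blocks long x 3 blocks high = 12 positions
```

Getting kids comfortable with that (x, y, z) triple is most of the battle; once it clicks, loops over coordinates do the rest.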
We’ll be revisiting these workshops in the next couple of weeks to see what went well and what needs to be worked on: the kids really liked them both. The minecraft one has more of a setup overhead, as we needed to get hold of enough computers (30 Raspberry Pis, in the end) and sort out networking, a server, and so on. The lego robots workshop is a more polished event now (we’ve run it a fair few times). I’m fairly sure that we’ll run them both again, but they might need a bit of tweaking; in particular I’d like to think up a cool way of working with 3d coordinates for the minecraft one, and I also think it might be good to introduce more “not-sitting-at-a-computer” bits.
Last weekend was Electromagnetic Field, the UK’s main Hacker/maker camp. It’s an outstanding opportunity for meeting up with tinkerers, coders and makers from across the UK and beyond. I was at the first EMF (in 2012, blog post here) talking about women in tech, and went back to this one to talk about schools outreach and the work we’re doing with kids and families. I spoke about schools and kids engagement in general, but also more specifically about our EU playfulcoding project. You can see my talk here:
And you can view the slides here, if you just want slides, not talk.
The talk was well received, though the room wasn’t full – but that’s fine: one of the cool things about EMFcamp is the sheer range of stuff going on. Over the course of the weekend I went to talks on computer history, quantum effects in imaging, IT security from a sociological standpoint, penetration testing, hardware hacking, animating dinosaurs and the mathematics of the Simpsons. I also went to hands-on workshops on VR, deep machine learning, card-based reasoning (“having daft ideas”) and paper circuits. These were all part of the official program – submitted and approved before the event, allowing people to schedule and so on.
There were also lots of minor “installation” type hacks around the place, and a whole heap of drop in activities. I played some computer games in the retro gaming tent (Sonic the hedgehog), went in a musical ball pit, watched fire pong, and generally strolled around the site going WOW.
I had never been in a ball pit before. I am so going to make one of these.
“The Robot Arms” was the name of the camp bar, and it had an API so you could look online to see how much beer had been sold. Someone even wrote a script to calculate how many drinks had been sold in the last minute, so you could tell how busy it was without going down to check. All the bar staff, and indeed everyone at the event, were volunteers, which gives the whole thing a really nice cooperative feeling. I was sat eating my veggie breakfast in the food area on Sunday morning when someone asked for help setting out the chairs at the main stage, and about 10 of us just got up and did it. Loads of my friends there did shifts on the bar, or marshalling in the car park (I spoke, and figured that was probably enough:). At the closing ceremony Jonty (one of the main organisers) asked everyone who’d volunteered or spoken to stand up, and I swear about 25% of the people there did. It really did make for a friendly event.
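I don’t have the details of the real bar API, so the endpoint and JSON shape below are invented – but the busy-ness script can only have been a few lines, something along these lines.

```python
# A sketch of the "how busy is the bar?" script. The endpoint URL
# and JSON shape are invented for illustration (I don't have the
# real Robot Arms API details) - the arithmetic is the point: take
# two readings of the running sales total and turn the difference
# into a per-minute rate.
import json
from urllib.request import urlopen

def drinks_per_minute(prev_total, curr_total, elapsed_seconds):
    """Rate of sales between two readings of the running total."""
    return (curr_total - prev_total) * 60.0 / elapsed_seconds

# Polling against the hypothetical endpoint would look like:
# total = json.load(urlopen("https://bar.example/api/sales"))["total"]

print(drinks_per_minute(1500, 1510, 120))  # 10 drinks in 2 min -> 5.0
```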
What a cool pub sign, eh?
Much to my embarrassment, though, I fell out of a hammock installation on the last night. I was fine getting in there, but the dismount was … inelegant.
This has made my return to Aberystwyth a couple of days late, via the excellent first aid tent and the A&E at Guildford hospital (Royal Surrey). Nothing’s broken, which is a relief, but my gosh it’s all a bit bruised.
my opinion of hammocks is not positive
In all – I loved it, again. I’ll definitely go in 2018.
Early last Sunday I left sunny mid-Wales for the last ever meeting in our EU Erasmus+ project “Early Mastery/Playful Coding”.
We flew from Bristol to Girona with Ryanair (who call Girona “Barcelona”, which gives some clue to its location). The cloud cover cleared as soon as we crossed the channel, and the view from the airplane was rather lovely. The Pyrenees in particular were stunning.
Once in Girona we met up with the Ysgol Bro Hyddgen crew, teachers from the school up the road in Machynlleth. A chatty evening spent in a lovely riverside bar rounded off the day of travel nicely. Monday morning, bright and early, we headed up into Girona old town for the project meeting proper.
Here’s our (now traditional) meeting arrival selfie: from left to right, Tegid (technology teacher from Bro Hyddgen) and Anna (Welsh teacher from Bro Hyddgen), Martin (schools liaison teaching fellow, Aberystwyth Uni) and me, Tomi (ICT teacher from Bro Hyddgen). One of the great things about this project and these meetings in general is that we’ve ended up building really good links with the school just up the road, as well as with people across the EU.
Over the last 18 months I’ve written quite a few blog posts about the project. We’ve done a lot of schools work and we’ve had 5 management meetings (of which this was the last). We’ve also had 2 longer “training meetings”, where teachers and academics have tried out each other’s workshops. Every workshop we’ve written has been run by more than one group, and most (indeed all but one) have been run in 2 or 3 different countries. Impact-wise we’ve done quite a lot:
- 45 talks, seminars, training days or other events
- 80 schools
- 600 teachers
- 4000 students
- 1 book
Did I mention we’ve written a book? The book contains instructions and information for running these workshops yourself. If you’re a teacher looking for easy lessons, or a lecturer looking for cool outreach, or a professional running a code club, or just an interested parent… the book has some great ideas in it. And some typos. But that’s not the end of the world.
The book launch
Our book “Playful Coding: Engaging young minds with creative computing” has been written, collated, edited, typeset and is now not only a PDF (available for free from http://playfulcoding.udg.edu/teacher-guide/, English now, translations to follow) but is also a physical printed book which looks frankly lovely.
As a team, we are skeptical about learning-to-code initiatives that concentrate on getting the skills to get a good job. Coding should be fun, challenging and playful. We hope this comes through in the book. There’s talk about assessment and pedagogy but there’s also a lot of fun, and the activities are all fundamentally cross curricular and hopefully playful.
The meeting concluded with a formal book launch where local teachers came to pick up a physical copy and chat with us over coffee and cake. It was really cool to see so many local teachers turn up to pick up a book in English – we will offer the other project languages (Spanish, Catalan, Romanian, Italian, French, Welsh) shortly but the first to be finished was our one common language.
Underwater robotics with kids
One workshop that’s not in the book is Xefi Cufi’s underwater robotics workshop for kids. It needs some fairly specialised kit… and a swimming pool. But it was great to see that in action too. Here are some junior roboteers building their chassis:
Here’s Eduard showing off the finished product:
And here’s their underwater robotics test pool…
It’s been a hectic, fascinating, challenging project and at times it’s felt a bit chaotic. I’m still slightly surprised that we’ve managed to do everything we said we would, so well, in the time we had: not only write and run and test workshops, but also write a book. We’re academics, teachers, researchers, outreach officers, postgrads and classroom assistants from 5 different countries but we’ve become a team. I’ve loved the collaborative aspect of the project and seeing how other countries work has been eye opening. My own practice has improved, and I’m sure that some of my ideas have helped to improve practice in other parts of the world, and that’s such a great feeling.
In the wake of Brexit it’s hard to know where we go now. The consortium worked well together and we did some great stuff; there are plans to submit a follow-on grant too. Will I be on it? Well, they say they’d like me to be, but in the absence of any firm plans it’s hard to push for that: as a Brit, I’m a liability on a Euro project and will remain so until there are serious assurances around research and education funding. I don’t see that happening very soon.
Which is very sad indeed. We’ve done some good work on this project.
On the plus side, I have a sabbatical semester 2 next year, and they do underwater robots, so… I think I’ll be back. Hasta la vista.
I gave a talk today about using short videos in teaching, to the Aberystwyth University Teaching and Learning conference (info here). The conference is an annual event which serves as a showcase for best practice in the uni, and it’s always interesting to see what people are up to. As part of my prep for the talk I did a lot of thinking about the different uses of video in learning and teaching, and about the different types of video I’ve put together. So I thought I’d do a blog post about that.
If you’re interested in the how, as well as the what and why, you can find my slides on Google Drive here.
Uses of video
Illustration of a visual point: some things are just best illustrated with a picture or a video. There are lots of examples of this in computer vision, here’s one showing a moving average motion detection. This is really hard to do in slides, without video.
Illustration of a phenomenon that is kinda hard to do in person: sometimes – maybe because things are dangerous, or because a piece of kit is really expensive – it can be difficult to “take the students to the phenomenon”. So video is a way of bringing the phenomenon to the students. An example of this is a video I made for a friend from the Welsh Crucible program, whose wife was teaching Sylvia Plath’s bee poems to her sixth-formers – I called the video Beekeeping for poets. It’s a bit scrappy but it gets the ideas across. This was a very early foray into video making for me, so it’s not got sound or anything. But I like it anyway.
Illustration of a concept I find tricky: sometimes I’m just not that confident about a particular topic. Particularly with the details of algorithms that get complex, I often worry about tripping up in a lecture. These are also topics that students probably want to revisit more than once, so the video serves several purposes: it gives me a bit of breathing space and additional confidence in the lecture, and it also gives the students an easy way to repeat the difficult bit. An example of this is my DES encryption video from the information security section of CS270. Graphically it’s not great, but practically, it’s saved me a lot of stress:-)
These three videos also illustrate three different types of video: the screencast, the video-clips-and-captions, and the canned presentation.
Other reasons to use video include summarisation, previews, simplifications, and the option to introduce new voices. One thing I really want to look into in the future is bringing in interviews with practitioners, probably by recording Skype/Hangouts calls.
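As an aside, the moving-average motion detection mentioned above is itself a nice example of a hard-to-slide, easy-to-video algorithm. Here’s a rough sketch of the idea in plain NumPy (OpenCV’s `cv2.accumulateWeighted` does the same running average; the threshold and blend values below are just my own illustrative choices).

```python
# Moving-average motion detection, sketched in plain NumPy.
# A running average of past frames models the background; pixels far
# from that average are flagged as motion.
import numpy as np

def detect_motion(frames, alpha=0.1, threshold=30):
    """Return a boolean motion mask for each frame after the first."""
    background = frames[0].astype(np.float64)
    masks = []
    for frame in frames[1:]:
        diff = np.abs(frame.astype(np.float64) - background)
        masks.append(diff > threshold)
        # blend the new frame into the background model
        background = (1 - alpha) * background + alpha * frame
    return masks

# Synthetic example: a static scene, then a bright blob appears
static = np.zeros((8, 8), dtype=np.uint8)
moving = static.copy()
moving[2:4, 2:4] = 200
masks = detect_motion([static, static, moving])
print(masks[0].any())   # False: nothing changed yet
print(masks[1].any())   # True: the blob triggers the detector
```

Watching the background model slowly “absorb” a moving object is exactly the kind of thing that a ten-second video clip shows instantly and a static slide never can.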
Soapbox Science is a public engagement event designed to get scientists out into the public and into public spaces, talking about their work. It’s supposed to demystify science (a bit) but also to change people’s perceptions of what scientists look like; one of the ways it does this is by making all of the scientists on the soapbox women. When I heard about it, I thought… Public engagement? Women in Science? Sounds a bit mad? Guess I’d better apply then!
The event I applied for was my nearest one, this year, and that was Cardiff, and it was yesterday. As you can probably guess from the blog post, I got in.
Having got in, my next problem was what to talk about… for 30 minutes, to a general passers-by kind of audience, without computers or posters or anything like that. As a vision scientist, who works with computers, that’s quite the challenge. The topic I settled on was Shadows.
One of the cool things about Soapbox Science is that it’s OK to bring along props. Some of the scientists had brains, or little bits of gold, or fungus, or felt-and-wax artistic renderings of tumours (no, srsly, they did). I went for an arduino powered cardboard box.
This involved having a NeoPixel ring inside a cardboard box, programmed with various lighting patterns, and a button on the outside which switched pattern every time it was pressed. My hope was that with a ring-shaped light source it would be possible to look out through the middle of the ring, and so see from the viewpoint of the light source (as Da Vinci said, “No luminous body sees the shadow it casts”, or something like that). But the viewpoint was just off, so you could actually see the shadows anyway. So I made the 16 light sources either chase around with different colours, or gradually illuminate, making the shadows hazy, or gradually go off, making the shadows sharp again.
Inside the box I hung a small plastic model of a skateboarder, I soldered all the bits together, and then I had my main prop: a Shadowbox. The shadow effects created really were quite strange, and they served quite well to illustrate the idea that the size of the light source, the colour of the light source and the colour of the screen all affect the shadow’s appearance.
As you look into the box through one of two holes, the experience of seeing the shadows is quite disconcerting, and it can take a while to work out what exactly is going on. But that’s OK – I wanted something kind-of “installationy” and this worked quite well as a visual experience. What didn’t work so well was the skater on a string – as the figurine was suspended from the lid using fishing wire, she swung wildly from side to side if anyone knocked the box, making it all just that little bit more incomprehensible.
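The logic driving the box is simple enough to sketch. The real thing is an Arduino sketch (on my GitHub, along with the flipbook code), but the core – cycle a pattern index on each button press, and have each pattern decide the colour of each of the 16 pixels – looks like this in Python, with illustrative pattern names of my own:

```python
# The heart of the Shadowbox logic, sketched in Python for
# readability (the real thing is an Arduino sketch). Each button
# press advances to the next of a fixed set of patterns, wrapping
# around; each pattern decides the colour of each of the 16
# NeoPixels. Pattern names here are illustrative, not from the
# actual code.

NUM_PIXELS = 16
PATTERNS = ["chase", "hazy", "sharp"]

def next_pattern(current_index):
    """Advance to the next pattern, wrapping back to the first."""
    return (current_index + 1) % len(PATTERNS)

def chase_colours(step):
    """One lit pixel chasing around the ring."""
    return [(255, 0, 0) if i == step % NUM_PIXELS else (0, 0, 0)
            for i in range(NUM_PIXELS)]

idx = 0
idx = next_pattern(idx)      # button pressed once
print(PATTERNS[idx])         # hazy
print(chase_colours(3)[3])   # (255, 0, 0): pixel 3 is lit
```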
The other props I took were some flipbooks, made from a 50 frame sequence of shadow video. I took books representing the input (the actual video), the ground truth (what we want our software to output), some intermediate processing steps and the final output of our shadow detection routine. These were hacked together using Python and LaTeX; if you’re interested in any of the code (flipbook code or Arduino code) you can find it on my github account. I also took some zoomed-in crops of images showing pixellated shadow or non-shadow regions, mainly just to show how hard it is to detect shadows when your input is pixels. And I took some sharpies and a sketchbook because … I NEEDED PROPS.
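The Python-and-LaTeX hack is roughly this: write one LaTeX page per video frame, run pdflatex, print, trim, staple. The file names and page layout below are illustrative rather than the actual code from the repo.

```python
# How the flipbooks were hacked together, roughly: Python writes one
# LaTeX page per video frame, then pdflatex turns it into a
# printable PDF. File names and layout are illustrative, not the
# actual repo code.

def flipbook_latex(frame_files):
    """Build a LaTeX document with one full-width image per page."""
    pages = "\n".join(
        "\\includegraphics[width=\\textwidth]{%s}\\newpage" % f
        for f in frame_files)
    return ("\\documentclass{article}\n"
            "\\usepackage{graphicx}\n"
            "\\begin{document}\n" + pages + "\n\\end{document}\n")

frames = ["frame%03d.png" % i for i in range(50)]
doc = flipbook_latex(frames)
print(doc.count("\\newpage"))   # 50 pages, one per frame
```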
So yesterday, Saturday morning 4th June, I got up early and drove down to Cardiff with a boot full of electronics and poorly put together flipbooks. I arrived just after 12, to a control centre in Yr Hen Llyfrgell which was a hive of activity, helpers, organisers, mascots, labcoats, tshirts, props and of course scientists. And balloons. And coffee. Each scientist was allocated a helper to assist with props and so on: my helper was a very nice and efficient Cardiff Uni medical student called Gunjan who was awesome at ensuring I had the things I needed when I needed them.
One of the mascots was the Cardiff University Dragon, who’s called Dylan. Apparently it was really very hot indeed inside the dragon. The other mascot (who I didn’t get a photo of) was a teddy bear. I’m not sure why.
We’d been advised to have a few 5-minute ideas for talks, and we’d been told we might get questions/heckles and so on, so repeating bits was probably going to be necessary. The time came and I went out, with this written on the back of my hand:
- Me and science
- Shadow formation
- Computer vision
- Pixels, videos
- Ground truth
- Colour, texture
Our soapboxes were in a busy intersection on Cardiff’s shopping district, quite near a woman with an amplifier singing eighties lounge songs (niiice). As talk venues go, I can’t think of many more challenging. The actual “talking about science to the general public on a soapbox” bit was almost exactly as terrifying as I thought it would be.
For the first half hour session, I stood up, talked, drew an audience of about 15, caught people’s eyes, talked some more, waved my props around, tried to get people to look into the shadow box, and then ran out of things to say. Looking at my watch I realised I was just 2 minutes from the end of the session, so that was OK and the audience did have questions. Most of them had stayed till the end, too. They might have had even more questions I suppose, if I had at any point slowed down enough for them to get a word in…
During my second stint on the box, I had a completely different experience. For the first 5 minutes I had no audience, and then a guy I vaguely recognised (maybe from the Crucible?) came and watched at the request of one of the organisers. Which was nice. I didn’t really want to talk to an audience of 0. Slowly more people came and went, including some kids (who really liked the flipbooks) and a remarkable heckler who thought I was a bloke.
At the end of the day everyone was quite hyper, and we all agreed it had been super fun if terrifying. Here’s a picture of me with my excellent helper Gunjan:
At this point I needed to stretch my legs and be quiet for half an hour, so I went and checked into my hotel before returning to the afterparty (complete with wine, for those who do that sort of thing). All in all a good day. I’m not sure that it’s my favourite form of public engagement, but it certainly got me out there and out of my comfort zone, talking to people I’d never have spoken with otherwise.
I’m just back from our penultimate project meeting on the Playful Coding project. It’s been a good year-and-a-bit of working, playing, talking to kids, and talking to teachers. After the last week we’ve really made progress on our main output too, which is a book for teachers and people who want to engage school-aged students with programming and computational thinking using playful workshops.
The Wales team this session were myself, Wayne Aubrey and Nigel Hardy from Aberystwyth University, and Tomi Rowlands, Sam Roberts, and Gwennan Philips from Ysgol Bro Hyddgen in Machynlleth. One of the real wins of projects like this is the extra time you get to spend with cool local people as well as the time you spend chatting to teachers and lecturers from other countries – we’ve come up with some good ideas and I think the links we have with Bro Hyddgen now are great.
In May, Girona has a flower festival which means that there are hundreds (literally, hundreds) of floral displays across the town. It also meant that the town was fairly full (hotels were busy and the streets filled up during the afternoon). But we were working pretty much non stop so that didn’t bother us too much.
The aim of the project is to write, test and revise workshop activities for schoolkids, and then to write a book explaining what we’ve done and what we’ve learned. As we’re nearing the end now, we have been mostly writing and testing activities. The group split into 3 sub-groups, to work on different aspects of our remaining tasks, and over the course of the week, we visited three schools and ran 8 workshops as well as adding something like 50 pages of text to our book. Busy busy. Here’s the group shot at a school in Figueras, where we’d just run two parallel workshops and a book editing session:
Figueras is famous for one particular guy: Salvador Dali. His museum is there, and after we’d been in the school a full day (9-4) we got to visit it. It is definitely a museum to recommend – Dali didn’t just fill it with pictures, there are sculptures and the very building is surrealist. If you get to go, try to get a tour, as the tour guide was great at explaining what was actually going on behind the art. Believe me, there’s a lot going on behind the art. From 6-8 that evening we gathered in a coffee bar to have a Dali-themed Scratch Hackathon. Here’s a picture of Wayne and me working on our respective Scratch programs. I thought it was a lovely idea to get us staff engaging with Scratch and playful coding – if you spend all your time talking about how coding should be fun without actually doing any fun coding… it can get difficult to maintain the enthusiasm:)
Eduard Muntaner (EduardM on Scratch and on Twitter) has put together a studio of the Scratch outputs we made that evening – there are some fun animations. I made a video-activated Dali face whose moustache twirls when you move in front of your laptop camera: you can play it here.
We visited Escola Veinat in Salt (a suburb of Girona) the next day, and ran workshops on Mindstorms and on Scratch. The Scratch workshop had been written by the Romanian team and was being delivered by the Catalan team, and my job was to observe, along with an Italian colleague. This multiple-observers approach is one of the real strengths of these training meetings – we try out each other's materials, and we critique them, and we revise them. They're actually getting really good now. Here I am in the classroom, trying to observe rather than help. It was fairly easy not to help too much as my Catalan is not very good at all…
In the evening we had a talk from Maria Antonia Canals, who is an absolute superstar in terms of pedagogical theory in Spain. She is 84 or something like that and has had the most amazing life, working in schools and in teacher training for so long and in such a creative and thoughtful way. Her specialism is the teaching of mathematics, particularly systems which make maths tangible and she’s invented some really superb systems for explaining abstract concepts to little kids.
On the penultimate day, we went to St George's School, an English-language school outside Girona, where we could run 4 parallel workshops. I ran the AppInventor workshop with some 13-year-olds, which, after a couple of technology-related hiccups, went quite well. The kids were amazingly quick to pick it up, so even though we'd lost a bit of time to setup, everyone managed to write a basic drawing app and get it onto their phones/tablets.
Whilst I was working on the AppInventor workshop, my Aber colleague Nigel was helping out with a Scratch workshop. I think the other workshops were all Scratch activities, actually – the school is a 3-18 school, so we were able to run workshops, in English or in French, across all age groups.
It was then back to the University of Girona for a quick tour of their underwater robotics lab, and then another afternoon spent working on our book. The book is getting there, the robots are awesome.
A Lightstage is a system which lets you completely control illumination in a particular space, and capture images from multiple views. They're used for high-resolution graphics capture and computer vision, and they're fairly rare. I don't think there are many non-commercial ones in the UK, and they're research kit (which means you can't really just go out and buy one; you've got to actually build it). Usually, Lightstages are used for facial feature capture, but I'm kinda interested in using them with plants. With the support of the National Plant Phenomics Centre here in Aberystwyth and an Aberystwyth University Research Fund (URF) grant, I've been slowly putting one together.
The key ingredient of a Lightstage is a frame which can hold the lights and the cameras equidistant from the target object. We've gone for a geodesic dome construction. Here's a time-lapse video of Geoff from Geodomes.com building ours (a 2-metre 3v dome made of rigid struts covered in non-reflective paint). He has a bit of help from Alassane Seck, who did a PhD here in Aberystwyth on Lightstage imaging.
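If you're wondering what "3v" means: it's a frequency-3 subdivision of an icosahedron, with every new point pushed out onto the sphere. As an illustrative sketch (nothing to do with the actual fabrication drawings), here's one way to generate that vertex layout in Python:

```python
import itertools
import math

def icosa_vertices():
    """The 12 vertices of an icosahedron: cyclic permutations of (0, ±1, ±φ)."""
    phi = (1 + math.sqrt(5)) / 2
    verts = []
    for s1 in (1, -1):
        for s2 in (1, -1):
            x, y, z = 0.0, float(s1), s2 * phi
            verts += [(x, y, z), (y, z, x), (z, x, y)]
    return verts

def icosa_faces(verts):
    """Recover the 20 faces: triangles whose three sides are all shortest edges."""
    def d2(i, j):
        return sum((a - b) ** 2 for a, b in zip(verts[i], verts[j]))
    n = len(verts)
    edge2 = min(d2(i, j) for i, j in itertools.combinations(range(n), 2))
    return [f for f in itertools.combinations(range(n), 3)
            if all(abs(d2(a, b) - edge2) < 1e-9
                   for a, b in itertools.combinations(f, 2))]

def geodesic_vertices(freq=3, radius=1.0):
    """Subdivide each face at the given frequency, project onto the sphere."""
    verts = icosa_vertices()
    pts = set()
    for ia, ib, ic in icosa_faces(verts):
        A, B, C = verts[ia], verts[ib], verts[ic]
        for i in range(freq + 1):
            for j in range(freq + 1 - i):
                k = freq - i - j
                # barycentric mix of the face corners, then push to radius
                p = [(i * A[t] + j * B[t] + k * C[t]) / freq for t in range(3)]
                norm = math.sqrt(sum(c * c for c in p))
                pts.add(tuple(round(radius * c / norm, 6) for c in p))
    return sorted(pts)
```

For frequency f this gives 10f² + 2 vertices, so 92 for a full 3v sphere; a dome is roughly the top half of that.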
Once we’d got the dome, the next job was to think about mounting lights on the dome. There are a couple of different approaches we can take, but the essential features are that some of the lights are polarised and some of the cameras also have polarising filters. This means we can separate out specular reflections (light that bounces straight off) and diffuse reflections (light that interacts more with the surface of the object). Pete Scully‘s been working on the light placement, doing a lot of work in simulation. Here’s an early simulated placement: dots are lights, boxes are cameras.
The dome was originally housed in the Physical Sciences building, but it's recently moved to a room which is actually light-tight – a key consideration for reducing interference when you're trying to control the lighting completely. Here's an arty shot of the top of the dome in its new home.
Since the move of room (very recently) things have really picked up. We’ve got a light-proof space, and we’ve got an intern from France (Robin Dousse) working with us too. Andy Starr‘s been working on the electronics and construction from the outset, and during breaks in teaching has really driven the project forwards. Here’s a shot of Robin, Pete and Andy by the dome:
Robin’s been working on programming our LED controllers. We’ve a fairly complicated master-slave controller system, as running 100 or so ultra-bright lights is not trivial. We’re aiming for a pair (one polarised, one not) at each vertex. Here’s a 12 second video of some flashing LEDs. It’s going to look a lot more impressive than this once it’s actually going, but hey, this is actual lights lighting up on our actual dome so I am very pleased.
We’ve now also, finally, got cameras on the dome. We’re not 100% certain about positioning, but we’re getting there. Andy’s been working on the camera triggers. Soon we’ll have something which flashes, takes pictures, and gives us the data we want.