hannah dee

BMVA workshop: plants in computer vision

On Wednesday I hosted my first ever British Machine Vision Association (BMVA) one-day workshop. The BMVA are the organisation which drives forward computer vision in the UK, and they run a series of one-day technical meetings, usually in London, which are often very informative. In order to run one, you have to first propose it, and then the organisation work with you to pull together dates, programme, bookings and so on. If you work in computer vision and haven’t been to one yet, you’re missing out.

I won’t write an overview of the whole day – that’s already been done very well by Geraint from GARNet, the Arabidopsis research network. So if you want a really nice blow-by-blow account, pop over to the GARNet blog.

We had some posters, and some talks, and some demos, and around 55 attendees. The quality was good – one of the best plant imaging workshops I have been to, with no dud talks. I think London is an attractive venue, the meetings are cheap (£30 for non-members, £10 for members), and both of these factors contributed. But I suspect the real reason we had such a strong meeting was that we’re becoming quite a strong field.

The questions and challenges that come up will be familiar to people who work in other applied imaging fields, like medical imaging:

  • should we use machine learning? (answer: probably)
  • can we trust expert judgments? (answer: maybe… but not unconditionally!)
  • we need to share data – how can we share data? what data can we share?
  • if we can’t automatically acquire measurements that people understand, can we acquire proxy measurements (things which aren’t the things that people are used to measuring, but which can serve the same purpose)?
  • can deep learning really do everything?
  • if we’re generating thousands of images a day, processing has to be fully automatic. This means initialisation stages have to be eliminated somehow.

One of the presenters – Milan Sulc, from the Centre for Machine Perception in Prague – wanted to demo his plant identification app. Unfortunately, we discovered that all of the plants at the BCS were plastic. Milan disappeared to a nearby florist’s to get some real plants, at which point the receptionist arrived with an orchid. Which also turned out to be plastic. The lesson here? Always remember to bring a spare plant.

This workshop was part-funded by my EPSRC first grant, number EP/LO17253/1, which enabled me to bring two keynote speakers to the event, and that was another real bonus for me. Hanno Scharr from Jülich and Sotos Tsaftaris from Edinburgh are both people I’ve wanted to chat with for some time, and they both gave frankly excellent presentations. It was also very good to catch up with Tony Pridmore and the rest of the Nottingham group; it’s been a while since I made it to a conference in computer vision / plant science, as I had a diary clash with IAMPS this year.

We’re hoping to put together a special issue of Plant Methods on the meeting.

First paper from first grant!

We’ve had our first journal paper published from my EPSRC first grant. It gives a comprehensive review of work on the automated image analysis of plants – well, one particular type of plant, Arabidopsis thaliana. It’s by Jonathan Bell and myself, and it represents a lot of reading, talking and thinking about computer vision and plants. We also make some suggestions which we hope can help inform future work in this area. You can read the full paper here, if you’re interested in computer vision and plant science.

The first grant as a whole is looking at time-lapse photography of plants, and aims to build sort-of 3D models representing growth. It’s coming to an end now, so we’re wrapping up the details and publishing the work we’ve done. This means keen readers of this blog1 can expect quite a few more posts relating to the first grant soon: we’re going to release a dataset, a schools workshop, and we’ll be submitting another journal paper talking about the science rather than the background.

1Yes, both of you

Building a lightstage

A Lightstage is a system which lets you completely control illumination in a particular space, and capture images from multiple views. They’re used for high-resolution graphics capture and computer vision, and they’re fairly rare. I don’t think there are many non-commercial ones in the UK, and they’re research kit (which means you can’t really just go out and buy one; you’ve got to actually build it). Usually, Lightstages are used for facial feature capture, but I’m kinda interested in using them with plants. With the support of the National Plant Phenomics Centre here in Aberystwyth, and an Aberystwyth University Research Fund (URF) grant, I’ve been slowly putting one together.

The key ingredient of a Lightstage is a frame which can hold the lights and the cameras equidistant from the target object. We’ve gone for a geodesic dome construction. Here’s a time-lapse video of Geoff from Geodomes.com building ours (a 2 metre 3v dome made out of rigid struts covered in non-reflective paint). He has a bit of help from Alassane Seck, who did a PhD here in Aberystwyth on Lightstage imaging.

Once we’d got the dome, the next job was to think about mounting lights on the dome. There are a couple of different approaches we can take, but the essential features are that some of the lights are polarised and some of the cameras also have polarising filters. This means we can separate out specular reflections (light that bounces straight off) and diffuse reflections (light that interacts more with the surface of the object). Pete Scully‘s been working on the light placement, doing a lot of work in simulation. Here’s an early simulated placement: dots are lights, boxes are cameras.
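To make the specular/diffuse idea concrete, here’s a minimal sketch of the standard cross-polarisation separation. This is an illustrative assumption about the processing, not our actual pipeline: it assumes one image captured with the camera’s polariser aligned with the (polarised) light, and one with it rotated 90 degrees. Cross-polarisation blocks the specular surface bounce, so the cross image holds roughly half the diffuse component; subtracting it from the parallel image leaves the specular part.

```python
import numpy as np

def separate_reflections(parallel, cross):
    """Rough specular/diffuse separation from a polarised light source.

    parallel: image with the camera polariser aligned with the light's
    cross:    image with the polariser rotated 90 degrees

    cross ~ diffuse / 2, so diffuse ~ 2 * cross and
    specular ~ parallel - cross (clipped at zero for noise).
    """
    parallel = parallel.astype(np.float64)
    cross = cross.astype(np.float64)
    diffuse = 2.0 * cross
    specular = np.clip(parallel - cross, 0.0, None)
    return specular, diffuse
```

In practice sensor noise and imperfect polarisers mean the separation is approximate, which is one reason the light and camera placement work matters.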

The dome was housed in the Physical Sciences building, but it has recently moved. This puts us in a room which is actually light-tight, a key consideration for reducing interference in the controlled lighting situation. Here’s an arty shot of the top of the dome in its new home.

Since the room move, things have really picked up. We’ve got a light-proof space, and we’ve got an intern from France (Robin Dousse) working with us too. Andy Starr‘s been working on the electronics and construction from the outset, and during breaks in teaching has really driven the project forwards. Here’s a shot of Robin, Pete and Andy by the dome:

Robin’s been working on programming our LED controllers. We’ve a fairly complicated master-slave controller system, as running 100 or so ultra-bright lights is not trivial. We’re aiming for a pair (one polarised, one not) at each vertex. Here’s a 12 second video of some flashing LEDs. It’s going to look a lot more impressive than this once it’s actually going, but hey, this is actual lights lighting up on our actual dome so I am very pleased.
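For a flavour of what driving that many LEDs involves, here’s a tiny sketch of framing a command in a hypothetical master-to-slave protocol. To be clear, nothing here comes from our actual firmware – the byte layout, start byte and checksum are all illustrative assumptions. The idea is just that the master addresses a slave board and a channel (each vertex carries a pair of LEDs, one polarised and one not, so a pair of channels per vertex).

```python
def led_command(board, channel, brightness):
    """Frame one command for a hypothetical master->slave LED bus.

    Layout (assumed, for illustration only):
    [start byte 0x7E, board id, channel, brightness, checksum]
    where checksum is the low byte of the sum of board+channel+brightness.
    """
    if not 0 <= brightness <= 255:
        raise ValueError("brightness must fit in one byte")
    payload = bytes([0x7E, board, channel, brightness])
    checksum = sum(payload[1:]) & 0xFF
    return payload + bytes([checksum])
```

A real setup would push frames like this over serial (e.g. with pyserial) fast enough to sequence 100-odd ultra-bright lights, which is exactly the part that makes the controller system non-trivial.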

We’ve now also, finally, got cameras on the dome. We’re not 100% certain about positioning, but we’re getting there. Andy’s been working on the camera triggers. Soon we’ll have something which flashes, takes pictures, and gives us the data we want.

Job! Working with me!

Right: input plants, left: colour based plant segmentation using Gaussian Mixture Models

I’ve won a grant to investigate the dynamic modelling of plant growth using computer vision. The plan is that we’re going to grow a load of Arabidopsis (that’s the plant in the picture above), under time-lapse cameras, and work out where the leaves are, and which leaves cover up which other leaves. Essentially, we’ll use the time-series of images as the plant grows to infer the 3D structure of the plant. Cool, eh?
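The colour-based segmentation in the figure above can be sketched roughly as follows. This is not our actual code – the greenness heuristic and the two-component mixture are illustrative assumptions – but it shows the general GMM approach: fit a mixture model to all pixel colours, label each pixel by its most likely component, and keep the component that looks most plant-like.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def segment_plant(image_rgb, n_components=2):
    """Colour-based plant segmentation with a Gaussian Mixture Model.

    Fits a GMM to the pixel colours, assigns each pixel to its most
    likely component, and (as a simple heuristic) treats the component
    with the greenest mean colour as the plant. Returns a boolean mask.
    """
    h, w, _ = image_rgb.shape
    pixels = image_rgb.reshape(-1, 3).astype(np.float64)
    gmm = GaussianMixture(n_components=n_components, random_state=0)
    labels = gmm.fit(pixels).predict(pixels).reshape(h, w)
    # Greenness of each component mean: G minus the average of R and B
    greenness = gmm.means_[:, 1] - gmm.means_[:, [0, 2]].mean(axis=1)
    return labels == int(np.argmax(greenness))
```

With rosettes imaged against soil or compost this kind of colour model does a lot of the work; the harder part, which the grant is really about, is tracking leaves through time and handling occlusion as they overlap.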

If you might be interested in this kind of project, and you can do computing and machine learning, then get in touch. The job is 16 months at £32k, which is a bit short (but that’s all I could get with the grant money, unless I wanted to drop the salary). I’m fairly sure I can find someone to spend 16 months in Aberystwyth though – it’s a beautiful small seaside town, and a nice place to live, and the salary is not bad for around here, where things are fairly cheap.

Here’s a link to the Job Description

International Workshop on Image Analysis Methods for the Plant Sciences

The International Workshop on Image Analysis Methods for the Plant Sciences will be held this year in Aberystwyth. The workshop is aimed at computer vision and image processing people working in the plant sciences, and plant science people doing work with images. I’m the co-chair, along with Marie Neal from the National Plant Phenomics Centre, Andrew French from Nottingham Computer Science and Susie Lydon from Nottingham’s Centre for Plant Integrative Biology.

Key facts

Abstract submission 1 Aug. Abstracts should be 2 pages max, PDF, submitted via CMT, the conference submission site.

Registration deadline 1 Sep. You can register online via EventBrite.

Conference dates 15-16 Sep. Full day on Mon 15th, half day on Tue 16th, with tutorials on the afternoon of Tue 16th introducing some open source image analysis tools (Octave and OpenCV).

Registration cost £120 including dinner on the Monday. Accommodation not included (there are lots of hotels and B&Bs in Aberystwyth, though, so accommodation in September should not be hard to arrange).

Topics covered: Plant science-based image analysis techniques from laboratory to field environments and subcellular to whole plant scales. This includes but is not limited to…

  • 3D reconstruction
  • Image segmentation
  • Modelling motion
  • Modelling growth
  • Shape analysis and classification
  • Colour analysis
  • Aerial imaging of fields
  • User interaction and software tools for plant scientists
  • The image analysis-biology pipeline (Imaging and *-omics)
  • Novel and emerging plant imaging techniques
  • Biological challenges for plant image analysis