Last night I completed a project that’s been going for the last three months: a 1000-piece jigsaw. This one was pretty but particularly fiendish, being a street map of Paris. There’s not a lot of variation from piece to piece: green background with white streets, and limited clues. There are river pieces and dark green pieces (for the parks), but mostly the only clues are the street names, which need hunting down on the picture on the box to work out where they go. I refused to search for a street name with Google Maps, though I was tempted at times.

What I found interesting was how the difficulty varied as we went along. To start, assembling the edge wasn’t too hard, because there were limited options. It’s a one-dimensional thing: each edge piece connects to just two others. Then we put in the Seine. That, again, was a one-dimensional feature. Some buildings came next (seemingly the hospitals and buildings of interest, like Les Invalides and the Louvre, were coloured dark brown), but there were more options in working out what went with what. And last of all, those pesky streets. So, from being easy to start, the going got slower and slower, until close to the end, when progress rapidly sped up. The last 50 pieces were polished off last night, very quickly.

The point here is that, as the number of pieces reduces, the number of options reduces too. With 50 pieces left, there are 50 ‘holes’ in the board. With 40 pieces left, there are 40 holes. So each piece becomes easier to place: there are fewer places it can go. Or one can take the strategy of looking at a particular hole and trying to find the piece to match it; it’s a similar thing. The fewer the pieces left, the more rapidly each one can be placed. Whereas on some nights I struggled to place even 3 pieces, the last 20 were rattled off in no time at all.
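That end-of-puzzle speed-up can be sketched with a toy model (my own assumption, not anything measured): suppose placing one piece means scanning the remaining loose pieces until the right one turns up, so the expected effort per placement grows with the number of pieces left.

```python
def expected_scans(pieces_left):
    # Toy assumption: on average you inspect about half the remaining
    # pieces before finding the one that fits.
    return pieces_left / 2

def session_effort(start, end):
    """Total expected inspections to go from `start` pieces left down to `end`."""
    return sum(expected_scans(k) for k in range(start, end, -1))

# The last 20 pieces take far less effort than 20 pieces in mid-game:
print(session_effort(20, 0))     # 105.0 inspections for the final 20
print(session_effort(520, 500))  # 5105.0 inspections for 20 pieces with ~500 loose
```

On this model the same 20 placements cost roughly 50 times more effort in the middle of the puzzle than at the end, which matches the lived experience above.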

My guess, then, is that a 2000-piece jigsaw would take about 4 times the time of a 1000-piece one: twice the number of pieces, with twice the options for each piece. That would be a year, then, other things being equal.
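That back-of-envelope guess can be checked. If placing a piece takes time proportional to the number of pieces still loose (the same toy assumption as before), the total time is roughly 1 + 2 + … + n = n(n + 1)/2, so doubling n very nearly quadruples the total:

```python
def total_time(n):
    # Time per piece taken as proportional to pieces remaining, summed
    # over the whole puzzle: 1 + 2 + ... + n = n(n + 1)/2.
    return sum(range(1, n + 1))

ratio = total_time(2000) / total_time(1000)
print(ratio)  # just under 4 (exactly 4002/1001, about 3.998)
```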

We encounter similar statistical effects in science. The more options, the more difficult it is to solve a problem, and the scaling is not linear. Just how much more difficult depends on what the problem is. DNA sequencing is an interesting one. Here, many strands of DNA are broken into smaller lengths, and each length has its base sequence read. The problem then comes in working out what the original sequence must have been, based on what all the snippets are. That needs some serious computer power. The larger the sequence, the longer it is going to take to work through the various options.
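A toy version of that reassembly problem makes the cost visible. Here is a minimal greedy overlap merge (a sketch only; real genome assemblers are far more sophisticated, and the example sequence and read positions below are invented for illustration). Note the pairwise scan: every merge step compares all remaining pairs of snippets, which is exactly the "more snippets, many more options" blow-up.

```python
import itertools

def overlap(a, b):
    """Length of the longest suffix of a that is a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a[-k:] == b[:k]:
            return k
    return 0

def greedy_assemble(reads):
    """Repeatedly merge the pair of reads with the largest overlap."""
    reads = list(reads)
    while len(reads) > 1:
        best = (0, 0, 1)  # (overlap length, i, j)
        # Every step scans all ordered pairs -- the source of the cost.
        for i, j in itertools.permutations(range(len(reads)), 2):
            k = overlap(reads[i], reads[j])
            if k > best[0]:
                best = (k, i, j)
        k, i, j = best
        merged = reads[i] + reads[j][k:]
        reads = [r for n, r in enumerate(reads) if n not in (i, j)] + [merged]
    return reads[0]

# Made-up "genome", broken into overlapping 8-base reads:
genome = "ATGGCGTGCAATGGCATTA"
reads = [genome[i:i + 8] for i in (0, 4, 8, 11)]
print(greedy_assemble(reads))  # recovers ATGGCGTGCAATGGCATTA
```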

Or we can consider how energy can be distributed amongst particles in a system. The more energy quanta, and the more particles, the more options there are. Explicitly working through all the options rapidly becomes impossible as the number of particles increases, which is why we have to resort to statistical methods to describe how energy and matter interact. We’re not talking a 1000-piece problem here: there are ten to the power of 26 or so molecules in my office, and the options for their movement are extreme indeed.
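The counting behind that can be made explicit with the standard Einstein-solid picture: q indistinguishable energy quanta shared among N particles can be arranged in C(q + N − 1, q) ways (the "stars and bars" result), a number that explodes as N and q grow.

```python
from math import comb

def microstates(n_particles, n_quanta):
    # Ways to distribute indistinguishable quanta among distinguishable
    # particles (stars and bars): C(q + N - 1, q).
    return comb(n_quanta + n_particles - 1, n_quanta)

print(microstates(3, 6))      # 28 arrangements for a tiny system
print(microstates(100, 100))  # already astronomically large
```

Even 100 particles sharing 100 quanta gives a number far beyond explicit enumeration, which is why the 10^26 molecules in an office can only be handled statistically.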

Fortunately, I am not about to tackle a jigsaw that size.