Verification and Validation

I’ve worked on a lot of computer models of physics things for my various employers to date. Although I’d describe myself as primarily a physicist, I’m also a computer modeller – that’s what holds together my work on mould growth in grain silos, dispersion of dust within pig-sheds, infra-red sensors, radar propagation, the electrical behaviour of neurons, etc., which on the face of it are rather different (and some not obviously physics at all). Note that I don’t claim to be a computer programmer – I don’t produce fantastic memory-optimized whizz-bang graphics apps. What I mean by modeller is being able to take a physical thing (like the flow of air in a pig-shed), produce equations that describe the physics going on, and then write a computer programme that solves those equations to produce a result that should describe something physical.

A big part of doing this is answering the question "How do I know my computer is turning out a meaningful result?" There are basically two bits to this question.

1. Verification. This is ensuring that my computer programme is actually solving the equations that I think it’s solving, and doing what I think I’ve asked it to do. One small mistake – e.g. a bracket ‘)’ in the wrong place, or two lines of code executed in the wrong order – and the answer might be different. I need to be sure that I’m getting what I think I’m getting.

2. Validation. Here’s where we ask the question – "Is my physical model (my equations) really describing reality?"

Verification is tedious but fairly straightforward. Validation is harder, and can be really costly. Often validation requires one to do some experimental work – set up a few test cases that can be accessible both to experiment and to the computer modelling – and see how close the two are.
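To make "see how close the two are" concrete, here’s a minimal sketch in Python. The numbers are entirely made up for illustration; one common choice of closeness measure is a root-mean-square error between the model’s predictions and the measurements at the same test points.

    import numpy as np

    # Hypothetical, made-up numbers purely for illustration: measurements
    # taken at a few test points, and the model's predictions at those
    # same points.
    measured = np.array([1.02, 0.61, 0.38, 0.22])
    predicted = np.array([1.00, 0.63, 0.36, 0.24])

    # One simple way to quantify "how close the two are": the
    # root-mean-square error between prediction and measurement.
    rms_error = np.sqrt(np.mean((predicted - measured) ** 2))
    print(f"RMS error: {rms_error:.3f}")

Whether a given RMS error counts as "close enough" is itself a physics judgement – it depends on the experimental uncertainty and on what the model will be used for.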

A neat way of doing verification is to work out analytic solutions to the mathematical equations. By ‘analytic’ I mean exact solutions – worked out with pen and paper. That’s often really hard in general – if you could do it, you wouldn’t need a computer model in the first place – but even when you can’t, you can often still work out certain properties that the solutions must have – e.g. what do you get when you set a particular parameter to zero, or to infinity, or when particular parameters are identical?
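As a toy illustration (a minimal Python sketch – the decay equation, step count and tolerance are my own choices, nothing to do with the model discussed below), here’s what that looks like for dy/dt = −ky, which has the exact pen-and-paper solution y(t) = y₀e^(−kt):

    import numpy as np

    def decay_model(y0, k, t_end, n_steps):
        """Numerically integrate dy/dt = -k*y with the simple Euler method."""
        dt = t_end / n_steps
        y = y0
        for _ in range(n_steps):
            y = y - k * y * dt
        return y

    y0, k, t_end = 1.0, 0.5, 2.0

    # Verification against the exact solution y(t) = y0 * exp(-k*t).
    numeric = decay_model(y0, k, t_end, n_steps=100_000)
    analytic = y0 * np.exp(-k * t_end)
    assert abs(numeric - analytic) < 1e-4, (numeric, analytic)

    # Limiting-case check: with k = 0 nothing decays, so y stays at y0.
    assert decay_model(y0, 0.0, t_end, n_steps=1000) == y0

If either assertion fails, the code isn’t solving the equation I think it’s solving – that’s verification in miniature.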

I’ve been doing a spot of verification with a summer student this morning, on a computer model we are putting together describing the response of neurons to Transcranial Magnetic Stimulation. The equations are complicated, but one thing we know is that if we set certain parameters to be identical, certain outputs from the model should be identical. More specifically, we get four graphs out – and all four should be identical. So when we had three that were the same, and the fourth that wasn’t, we knew that something was up with the code.
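That kind of check is easy to automate. Here’s a minimal sketch, with a hypothetical response_curve function standing in for the real model (in the actual code, four separate code paths each produce one output curve):

    import numpy as np

    # Hypothetical stand-in for the real model: one call produces one
    # output curve for one parameter set.
    def response_curve(params, t):
        return params["gain"] * np.exp(-t / params["tau"])

    t = np.linspace(0.0, 10.0, 500)

    # Deliberately identical parameter sets: all four curves should match.
    identical_params = [{"gain": 2.0, "tau": 1.5} for _ in range(4)]
    curves = [response_curve(p, t) for p in identical_params]

    # Any curve that differs from the first flags a problem somewhere -
    # in the equations, the code, or (as it turned out) the plotting.
    for i, curve in enumerate(curves[1:], start=2):
        assert np.array_equal(curves[0], curve), f"curve {i} differs from curve 1"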

Of course, finding the fault is another thing entirely. Wilson’s law of verification states (from bitter experience) that the time taken to locate a fault is inversely proportional to the size of the fault. In other words, tiny little problems in the code take the longest time to discover.

In the case of this morning, we eventually tracked down the problem. It was really tiny. I’d titled one of the graphs incorrectly. Everything in the code was correct – in this regard at least – it was just that the plot we thought was ‘plot 4’ was actually another plot – so it wasn’t surprising it was different to the other three. The real plot 4 was hiding elsewhere – once we found it we saw that it was, in fact, the same as the others. Cue hitting head against a wall. Code verified then. Or, I should say, verified in this regard. Verification and validation never truly stop.
