To err is human, but to make a real mess requires a computer. Whether it is sending out a gas bill for ten million dollars, or sending a letter to the parents of a one-hundred-and-four-year-old woman reminding them that she is due to start school, the rise of the computer has certainly opened up new avenues for getting things wrong in a big way.
But really it’s not the computer that gets things wrong. It’s the people who told it what to do. A computer is very good at obeying instructions. The trouble is, being just a collection of electronic circuits, it has no ability to independently check those instructions for sanity first.
The computer has opened up physics in a massive way. What physicists generally exploit is its ability to do mathematical calculations very, very fast. That means it can work out the movement of tsunamis across oceans, calculate the electronic structure of a piece of germanium, or predict the firing patterns in networks of neurons, among many other things. Computer modelling, that is, using computers to predict the result of physical processes (e.g. where drops of pesticide go after leaving the nozzle of a spraying apparatus), is something that I have used a lot in my career.
But, as with any use of the computer, modelling requires thought on the part of the user. Very broadly, there are two things that can go wrong.
First, when I write my computer programme, I can make a mistake. I could be writing a programme to solve a set of equations, and maybe I miss off a ‘square’ term or write a ‘times’ instead of an ‘add’. It won’t bother the computer; it doesn’t know any better, and off it will go and produce an answer for me. And if the answer is close to what I expected, I might not spot the problem.
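To make this concrete, here is a small Python sketch (my own illustration, not taken from any real programme) of how missing off a ‘square’ term can produce a plausible-looking wrong answer. The correct distance fallen from rest under gravity is d = ½gt²; the buggy version drops the square but still runs happily.

```python
# Illustrative sketch: a missed 'square' term in a free-fall calculation.
# Correct physics: distance = 0.5 * g * t**2

def distance_fallen(t, g=9.81):
    """Correct: distance (metres) fallen from rest after t seconds."""
    return 0.5 * g * t**2

def distance_fallen_buggy(t, g=9.81):
    """Buggy: the 'square' has been missed off, but it still runs."""
    return 0.5 * g * t

print(distance_fallen(1.0))        # 4.905 m -- correct
print(distance_fallen_buggy(1.0))  # 4.905 m -- identical at t = 1, so easy to miss
print(distance_fallen(2.0))        # 19.62 m -- correct
print(distance_fallen_buggy(2.0))  # 9.81 m  -- wrong, but not obviously absurd
```

Notice that at t = 1 second the two functions agree exactly, which is just the sort of coincidence that lets a mistake slip past a quick check.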
Secondly, and perhaps more subtly, I might get the equations wrong. For example, I might be applying a model that is well used, but to a situation where it is not valid. Or I might have misunderstood the physics of a particular situation. This is not a mistake with the programming; this is a mistake with the physics. But, again, the computer knows no better, and will happily give me an answer that may or may not bear any relation to reality.
This is why a physicist’s use of computers tends to be accompanied by a series of ‘validation’ and ‘verification’ steps. Broadly speaking, verification is a check to see that the answer the computer gives is plausible. Give it a simple case, one where the answer is obvious without having to use a computer, and see whether it gets that answer right. If it doesn’t, there’s probably a mistake with the computer programming.
Validation is a more difficult procedure, in which one focuses more on the model itself. One might have to set up a series of experiments, in which the results are carefully measured, then ask the computer to do the same thing. Does it get close to the ‘real’ answer? If not, and assuming the programme has been well verified, what is going wrong with the model? Properly validating a physics computer model, particularly a big one (e.g. determining where droplets of pesticide go after leaving the nozzle of a crop sprayer), is no trivial task. But it is one that needs to be done well before anyone will take you seriously.
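In skeleton form, a validation check amounts to comparing the model’s predictions against carefully measured experimental values, within some agreed tolerance. The Python sketch below shows only the shape of that comparison; the numbers and the 0.3-metre tolerance are entirely hypothetical, invented for illustration.

```python
# Illustrative validation sketch (all numbers are hypothetical,
# invented purely to show the shape of the comparison).

def validate(predicted, measured, tolerance):
    """True if every model prediction lies within tolerance of experiment."""
    return all(abs(p - m) <= tolerance for p, m in zip(predicted, measured))

# Hypothetical droplet-drift distances (metres) at three wind speeds:
measured  = [1.2, 2.5, 4.1]   # from experiment (made-up values)
predicted = [1.3, 2.4, 4.3]   # from the computer model (made-up values)

if validate(predicted, measured, tolerance=0.3):
    print("Model agrees with experiment to within 0.3 m")
else:
    print("Disagreement: revisit the model (assuming the code is verified)")
```

The point is the order of suspicion: only once the programme itself has been verified does a disagreement here point the finger at the physics of the model rather than at the code.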