MuleShoe Blog

May 27, 2010

Computer Models Never Prove Anything

Filed under: Climate Change — zdas04 @ 3:20 pm

“Computer models” never prove anything. They can be useful in helping to point a researcher in a direction for experimentation, but they can’t prove anything. I know that if I drop an object near the surface of the earth, it will accelerate toward the center of the earth at a constant rate (as long as I disregard the air that it is falling through). From this fact I can develop equations that tell me how fast it is falling after any given elapsed time, and where it is at that elapsed time. These equations make up a “mathematical model”. I can program them into a computer and they are still a mathematical model.
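As a minimal sketch, that falling-object model is only a few lines of code (using the standard value of about 32.2 ft/s² for the acceleration of gravity near the earth’s surface):

    # Free-fall model: velocity and position as functions of elapsed time,
    # ignoring the air the object is falling through.
    G = 32.2  # acceleration of gravity near the earth's surface, ft/s^2

    def velocity(t):
        """Speed of the dropped object after t seconds, in ft/s."""
        return G * t

    def distance_fallen(t):
        """Distance the object has fallen after t seconds, in ft."""
        return 0.5 * G * t ** 2

    for t in (1, 2, 3):
        print(f"t = {t} s: v = {velocity(t):.1f} ft/s, fallen {distance_fallen(t):.1f} ft")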
On the other hand, if I look at a liquid flowing inside a pipe, I have equations that relate fluid velocity to the change in pressure down the pipe. But what causes the pressure change? The answer to that is “fluid friction”. I have equations that allow me to predict the pressure drop due to fluid friction. These equations tell me that the faster the liquid is moving, the more pressure drop friction will cause (thus slowing the fluid down). For a given pipe geometry and set of fluid properties, friction is a function of velocity. But for the same conditions, velocity is a function of friction. This means you have to solve the problem iteratively. You guess a velocity and calculate a “friction factor”, then take that factor into the fluid-flow equations to determine a velocity based on your guess. Then you take the solved velocity back to the friction-factor equation and calculate a new friction factor. And so on, until the velocity used to calculate friction is acceptably close to the velocity calculated from the fluid-flow equation. That is the sort of problem you solve with a computer model. The model divides the problem up into homogeneous cells, often called “calculation blocks”, that can be handled with the same equations over the entire volume of the block. If the pipe diameter changes, or if some additional flow comes in from another stream, then you have to end your block and start another one. For piping problems this is pretty easy: you start a cell with a given pipe diameter and a fixed flow rate, and when either of those changes you start another cell.
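A sketch of that iteration for a single pipe segment might look like the following. The pipe size, fluid properties, and the Swamee-Jain friction-factor correlation are illustrative choices, not a description of any particular pipeline:

    import math

    # Illustrative inputs: water in 100 ft of 4-inch steel pipe with 5 psi of
    # pressure drop available (all values are made up for the example).
    D = 4.026 / 12        # inside diameter, ft
    L = 100.0             # pipe length, ft
    dP = 5.0 * 144.0      # pressure drop, psi converted to lbf/ft^2
    rho = 62.4            # liquid density, lbm/ft^3
    mu = 6.72e-4          # viscosity, lbm/(ft*s)
    eps = 1.5e-4          # absolute pipe roughness, ft
    gc = 32.174           # unit conversion, lbm*ft/(lbf*s^2)

    def friction_factor(v):
        """Darcy friction factor from the Swamee-Jain correlation (turbulent flow)."""
        Re = rho * v * D / mu
        return 0.25 / math.log10(eps / (3.7 * D) + 5.74 / Re ** 0.9) ** 2

    v = 5.0                               # guessed velocity, ft/s
    for _ in range(50):
        f = friction_factor(v)            # friction factor from the current guess
        # Darcy-Weisbach solved for velocity: dP = f * (L/D) * rho * v^2 / (2*gc)
        v_new = math.sqrt(2.0 * gc * dP * D / (f * L * rho))
        if abs(v_new - v) < 1e-6:         # velocities agree: the iteration has converged
            break
        v = v_new

    print(f"velocity ~ {v_new:.2f} ft/s, friction factor ~ {f:.4f}")

A real pipeline model repeats this kind of loop for every calculation block in the system.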
Pipeline computer models are very useful for looking back at known data and comparing it to theoretical data.  In this scenario, you can adjust parameters within the model to make the model match measured data.  Once the two match, you can look at the model and see where there are issues with the system (e.g., a pipeline model can show you where there is gas trapped within a liquid pipeline, or where there is standing liquid in a gas pipeline).
A pipeline computer model can also be used to predict the hydraulic performance of pipeline system modifications or to test a new pipeline design.  No pipeline ever built has exactly matched the performance of the model, but they are generally good enough to avoid the worst mistakes.
Computer models are useful in the same way that a hammer is useful. If you have a problem like driving a nail, a hammer is a really effective tool. If you have a problem like brain surgery, a hammer is somewhat less effective. If my problem is to determine the performance of a ship’s mast under dynamic loading, then I can make some pretty good predictions of that performance using a Finite Element Analysis (FEA) computer model. The model will not “prove” that a given design will outperform another design, but it can give you a good indication of which one has the best chance of success.
The other end of the spectrum is “Climate Modeling”. Local weather predictions are all based on computer weather models. These models have a fairly dense grid of calculation blocks, reasonably high quality data, and a great deal of built-in experience to determine which of the hundreds of parameters in a weather system generally dominate local weather patterns for a given location. These “weather models” are right in all regards (i.e., temperature highs and lows, humidity, wind velocity, and precipitation amounts) nearly ¼ of the time. They are right enough (i.e., they say you need to wear a raincoat to school and you do need to wear a raincoat) more than half the time. If they say that small craft should avoid open water, conditions really are inappropriate for small craft often enough that it pays to take them seriously.
For local weather predictions, computer models do a bit better than you would do flipping a coin, but not wildly better.  Now consider scaling these “OK” weather models up to the globe.  Also consider scaling the time scale from 5-10 days up to 50-100 years.  Now try to gauge the impact of changing a specific gas from 300 parts per million (ppm) to 800 ppm in the year 2100.  Three hundred ppm is 0.03% of the gas in the atmosphere.
The earth is big. If you draw a line from the center of the earth to sea level, then shift over 1 mile and draw a second line back to the center of the earth, the angle between the two lines is about 52 seconds of arc (0.014°). NASA defines the start of space as 50 miles above sea level. Extending these two lines, 1 mile apart at sea level, out to the edge of space gives a separation of about 1.0126 miles. Death Valley, California is 282 ft below sea level; at that elevation the separation is about 0.9999865 miles.
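A back-of-the-envelope check of those figures, assuming a mean radius for the earth of about 3,959 miles:

    import math

    R = 3959.0                  # approximate mean radius of the earth, miles
    arc = 1.0                   # two radial lines 1 mile apart at sea level

    angle_deg = math.degrees(arc / R)        # angle between the two lines at the center
    sep_space = arc * (R + 50.0) / R         # separation 50 miles up, at the edge of space
    sep_dv = arc * (R - 282.0 / 5280.0) / R  # separation 282 ft below sea level

    print(f"angle: {angle_deg:.4f} deg ({angle_deg * 3600:.0f} seconds of arc)")
    print(f"separation at the edge of space: {sep_space:.5f} miles")
    print(f"separation at Death Valley:      {sep_dv:.7f} miles")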
If you built a 1 square mile (at sea level) square from lines going to the center of the earth and out to space, and repeated it over the entire earth, you would have roughly 197 million (2×10⁸) squares. If you divided the volume above sea level into 1,000 ft high layers it would be 264 layers, so a 1 mi² × 1,000 ft grid would have about 52 billion (5.2×10¹⁰) grid blocks.
If the computer model requires 500,000 iterations to converge, then it has to work each calculation block (which may have several hundred calculations in it) 500,000 times, which comes to roughly 26,000 trillion (2.6×10¹⁶) calculation blocks for a single time step.
The things that make up “climate” are really a collection of a vast number of things called “weather”, which is made up of things like temperature, wind velocity, humidity, barometric pressure, cloud cover, etc. Weather at a particular location on the earth is dominated by day/night, windy/calm, and sunshine/cloudy conditions. Consequently a time step longer than about 6 hours is pretty worthless. So, if I want to predict the climate in the year 2060, I need 73,050 time steps starting from 2010. Doing 26,000 trillion calculation blocks in each of 73,050 time steps is about 1,900 million trillion (1.9×10²¹) calculation blocks, a number that only has meaning to a computer.
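The counting behind those figures is plain arithmetic; here is a sketch using the rounded values above (about 197 million square miles of surface, 1,000 ft layers up to 50 miles, 500,000 iterations per time step, and 6-hour time steps):

    # Rough tally of calculation blocks for the hypothetical 1-square-mile grid.
    surface_sq_mi = 197e6                 # approximate surface area of the earth, mi^2
    layers = 50 * 5280 // 1000            # 1,000 ft layers from sea level to 50 miles: 264
    blocks = surface_sq_mi * layers       # roughly 5.2e10 grid blocks

    iterations = 500_000                  # assumed iterations to converge each time step
    per_time_step = blocks * iterations   # calculation blocks worked per time step

    steps_per_day = 4                     # 6-hour time steps
    time_steps = int(50 * 365.25 * steps_per_day)   # 2010 to 2060: 73,050 steps
    total = per_time_step * time_steps

    print(f"{blocks:.2g} blocks, {per_time_step:.2g} per time step, {total:.2g} in total")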
If we assume that the grid block at Death Valley and the grid block at the edge of space are both exactly 1 mile square, then we are introducing an error. What happens if we do a simple operation like squaring each of the two numbers and repeat it for each of the 264 layers between sea level and space? Well, by about the 16th squaring the number for “space” is too big for most computer programs to handle (more than about 10³⁰⁸ miles, the limit of double-precision arithmetic), and by about the 26th squaring the Death Valley number has rounded to zero. I started with two numbers that were within about 1.25% of each other (using space as the denominator), roughly 67 feet over 50 miles, did a simple operation on each of them a couple of dozen times, and the error grew from a few dozen feet to an incomprehensible “distance”. The fluid mechanics equations in the climate models are much more subtle than this simple unconstrained demonstration, but the effect is the same. Infinitesimal data errors, introduced into a program that takes those numbers as “given” and applies equations to them 1,900 million trillion times, are bound to be magnified in unpredictable ways.
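That experiment is easy to reproduce in ordinary double-precision arithmetic; here is a sketch using the two separations worked out above:

    def squarings_until_gone(x):
        """Square x repeatedly; count the steps until it overflows to inf or underflows to 0."""
        for count in range(1, 1001):      # generous upper bound, never reached here
            x = x * x
            if x == 0.0 or x == float("inf"):
                return count, x
        return count, x

    # 1.0126 mi: separation at the edge of space; 0.9999865 mi: separation at Death Valley.
    for label, start in (("edge of space", 1.0126), ("Death Valley", 0.9999865)):
        n, final = squarings_until_gone(start)
        print(f"{label}: starts at {start} miles, ends up {final} after {n} squarings")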
Of course, the climate models currently being used to predict global catastrophe do not use nearly this fine a grid. One model uses 2.5° in latitude and 3.75° in longitude. This results in a grid square of about 173 mi × 259 mi at the equator (44,712 sq miles). So you end up with 4,405 surface grid blocks instead of roughly 200 million. While a person could be reasonably confident that the weather within a single square mile will usually be homogeneous enough for categorization, I don’t know anyone except “climate scientists” who would claim that the weather in the southwest corner of a 44,712 square mile grid will be exactly the same as the weather in the northeast corner 311 miles away.
Vertical resolution is also much coarser than in my example, and the levels are not evenly distributed. One model has 19 levels over land and 20 levels over the ocean, and it stops its vertical investigation at 18 miles (7 of the levels are above 9 miles).
With 4,405 grid squares, each with 20 vertical layers, 500,000 iterations requires about 44 billion calculation blocks per time step. Using 73,050 time steps gets the total down to a manageable 3,200 trillion calculation blocks (3.22×10¹⁵).
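The same tally for the coarse grid, taking the 44,712 square mile cells, 20 vertical layers, 500,000 iterations, and 73,050 time steps from the description above:

    # Rough tally of calculation blocks for the coarse 2.5 deg x 3.75 deg grid.
    earth_surface_sq_mi = 197e6
    cell_sq_mi = 44_712                    # one 173 mi x 259 mi surface cell
    surface_cells = earth_surface_sq_mi / cell_sq_mi   # roughly 4,400 cells
    layers = 20
    iterations = 500_000
    time_steps = 73_050                    # 6-hour steps, 2010 to 2060

    per_time_step = surface_cells * layers * iterations
    total = per_time_step * time_steps
    print(f"{surface_cells:.0f} cells, {per_time_step:.2g} per time step, {total:.2g} in total")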
I say “calculation blocks” because each grid cube has a set of defining parameters. These parameters tell the program what equations to apply to that generalized 44,000 square miles. They tell the program things like whether the grid block is over the ocean or the desert. Some of the more sophisticated models accept input on what percentage of the grid is water, what percent is agricultural, what percent is urban, what percent is forested, etc. The state of Colorado is 104,000 square miles, so this diverse state would be mostly included in a pair of grid blocks, both of which would have elevations ranging from under 5,000 ft to over 14,000 ft. Each of these two blocks would have an average elevation that wasn’t representative of any particular place.
The consequence of homogenizing grid blocks of this size is that the results have no intrinsic meaning. As a model builder, I can hypothesize that something like CO2 in the atmosphere will reduce the amount of heat that can radiate into space. Then I can “tweak” my model to accommodate that hypothesis. The amount of modification is subtle, but the effect is cumulative, so after a few trillion iterations it starts to make a big difference.
To try to add a patina of legitimacy to the process of Climate Modeling, the purveyors of the concept have used a version of historical weather station data to “calibrate” their models. The concept is that if the model can reproduce the known values for a past period, then it can surely extrapolate those values into the future. This has proven to be an amazingly complex task. Things change. Urban centers sprawl far outside their historical boundaries. Cropland is allowed to go back to forest. New cropland is reclaimed from the forest. Researchers found the raw data impossible to match. Their solution was to “adjust” the raw data to remove “known anomalies”. In other words, they start with raw data, adjust it to fit their preconceptions, and then “prove” that their computer model can predict the future by matching their “adjusted” data.
Don’t get me wrong—I am certain that the climate is changing.  It has always changed.  It will always change.  The reason for the change, the magnitude of the change, and mankind’s ability to predict the change are areas where I have serious problems.  Climate models do not prove anything.  No computer model proves anything.  Ever.  If we could get scientists to stop trying to make public policy and get back to doing un-manipulated science the world would be a much better place to be.
