Phil Green
author, misLeading Indicators: How to Reliably Measure Your Business

President, Greenbridge Management Inc.


The Reinhart and Rogoff story shows how important it is to check your indicators

Posted almost 4 years ago

This month three academic researchers showed that there were errors in the calculations behind the claim by economists Carmen Reinhart and Kenneth Rogoff that the relationship between government debt and real GDP growth is weak for debt-to-GDP ratios below a threshold of 90 percent. Above 90 percent, they claimed, median growth rates fall by one percent, and average growth falls considerably more.

Reinhart and Rogoff’s finding has been cited widely during the economic crisis of the last few years. It turns out there were some simple spreadsheet errors behind it.
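To see how easily a mistake like that hides, here is a minimal sketch in Python. The country names and growth figures are invented for illustration, not Reinhart and Rogoff’s actual data, but the pattern is the same: an averaging range that stops short drops rows from the calculation without raising any error, and only a simple count cross-check reveals it.

```python
# Illustrative only: invented growth figures, not Reinhart and Rogoff's data.
growth = {
    "Australia": 3.2, "Austria": 2.1, "Belgium": 1.9, "Canada": 2.4,
    "Denmark": 1.8, "France": 1.5, "Germany": 1.7, "Italy": 0.9,
}

countries = list(growth)

# The spreadsheet-style mistake: the averaging range stops short, so the
# last three countries silently drop out of the calculation.
included = countries[:5]  # meant to be all of `countries`
avg = sum(growth[c] for c in included) / len(included)
print(f"average over {len(included)} countries: {avg:.2f}")  # wrong, no error raised

# A basic cross-check: confirm the count of values averaged matches the
# count of countries before trusting the number.
if len(included) != len(growth):
    print(f"warning: only {len(included)} of {len(growth)} rows were included")
```

The fix is not sophisticated statistics; it is a one-line count comparison, which is exactly the kind of check a spreadsheet makes easy to skip.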

It does not matter how many degrees you have or what prestigious institution you work for: errors in indicators can still happen, and they are common. Rogoff works at Harvard University and Reinhart at the University of Maryland, and their paper was published by the National Bureau of Economic Research.

Something similar happened when scientists were trying to measure ozone concentrations over Antarctica in the 1980s. NASA scientists had programmed their satellite to flag measurements under 180 Dobson units (a measure of ozone) as possible errors, and they originally excluded them. Meanwhile, scientists at the Amundsen-Scott ground station at the South Pole were reporting measurements of 300 Dobson units. The NASA scientists figured their satellite was reading incorrectly. In fact, it was the ground station that was making the measurement error. These problems combined to delay the discovery of the ozone hole by about eight years.
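The general pitfall here is a quality-control filter that silently discards out-of-range readings. Here is a minimal sketch of that pattern; the readings are invented, and only the 180-Dobson-unit flag threshold comes from the story above. The safer alternative flags suspect values but keeps them visible, so a run of "errors" can be recognized as a possible real signal.

```python
# Illustrative readings in Dobson units; the values are invented, and only
# the 180-unit flag threshold comes from the ozone story above.
readings = [310, 295, 160, 150, 305, 140]
FLAG_THRESHOLD = 180

# Risky pattern: silently drop anything below the threshold.
kept = [r for r in readings if r >= FLAG_THRESHOLD]

# Safer pattern: keep everything, but mark suspect values for review.
flagged = [(r, r < FLAG_THRESHOLD) for r in readings]
suspect_share = sum(is_low for _, is_low in flagged) / len(flagged)
if suspect_share > 0.25:
    print(f"{suspect_share:.0%} of readings flagged; investigate before discarding")
```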

Comments (1)

Wyn Pugh
BOTiC Ltd.

It is no wonder that economics is also known as the “dismal science”, though I prefer “guessonomics”.
The young researchers showed that there were indeed spreadsheet errors, and so often such models are never compared against basic cross-checks. One insurance industry model I was shown in a meeting didn’t add up correctly; I spotted that with a single visual check.
We rely more and more on such models in all disciplines, and hardly a day goes by without some important research finding based on “a model”. Anyone who has ever programmed knows that no amount of desk-checking will ever get it 100% right, so I wonder just how many research findings are plagued by similar errors.

Posted almost 4 years ago
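The commenter’s “does it add up?” check is easy to automate. A minimal sketch, with wholly invented line items and total (no real insurance model is being reproduced here): reconcile component figures against a stated total before trusting a model’s output.

```python
# Invented example: verify that line items reconcile with a stated total,
# the kind of "does it add up?" check described in the comment above.
line_items = {"premiums": 420.0, "investment_income": 35.0, "fees": 12.5}
reported_total = 470.0  # hypothetical figure that should equal the sum

computed = sum(line_items.values())
if abs(computed - reported_total) > 0.01:
    print(f"mismatch: components sum to {computed}, total reported as {reported_total}")
```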
