
Mother Pelican
A Journal of Solidarity and Sustainability

Vol. 13, No. 8, August 2017
Luis T. Gutiérrez, Editor


The Uncertainty Monster: Lessons From Non-Orthodox Economics

Vincent Randall

This article was originally published by
Climate Etc., 5 July 2017
under a Creative Commons License


Figure: The 2400-year Bray Cycle (source: Nature Unbound IV – The 2400-year Bray Cycle).


A perspective on economists’ grappling with the ‘uncertainty monster.’

In this essay I am going to try to introduce non-economists – those working in fields where they are first coming into contact with the ‘uncertainty monster’, as Judith Curry calls it – to what some economists have learned from their encounter with it. First I will try to explain why economists encountered the monster before those working in other disciplines. I will then give the reader an overview of what different economists have said about it. Finally, I will briefly consider the similarities and differences between how economists are confronted with the uncertainty monster and how those working in ‘harder’ sciences, like climate science, are confronted with it.

A little bit of history

The questions raised by uncertainty seem to have been addressed in more depth and with more clarity in economics than elsewhere, probably because they were encountered there more forcefully than in other disciplines that lent themselves to mathematical modelling and statistical hypothesis testing. The reason they were encountered so much more forcefully is that economics deals with human behaviour – and humans are constantly faced with an uncertain future. For this reason all human behaviour is undertaken in the face of uncertainty.

Take the classic economic example of an entrepreneur who wants to make an investment. Let us say that he wants to build a factory that produces cotton goods. Let us further say that he is fully aware of all the costs – from the cost of the cotton-producing machines, to the raw materials, to the wages that the workers will need to be paid and so on. Now he needs to weigh these costs against the amount of unit sales that he can make times the prices at which he can make these sales – that is, revenue = price × quantity sold. By subtracting the costs from the revenue he will be able to calculate his profit – that is, profit = revenue − costs. Finally, he can compare the profits that he will make to the investment that he has to undertake and decide whether he should do it or not[1].
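
If all of these figures really were known, the calculation itself would be trivial. Here is a minimal sketch with invented numbers – the prices, volumes, costs and plant life below are all hypothetical, and interest and depreciation are ignored as in endnote [1]:

```python
# A minimal sketch of the arithmetic described above. All figures are hypothetical,
# and interest and depreciation are ignored.
investment = 1_000_000        # up-front cost of the factory and machines
unit_cost = 8.0               # cost per unit of cotton goods (materials, wages, ...)
price = 10.0                  # expected selling price per unit
units_per_year = 60_000       # expected annual unit sales
years = 15                    # assumed working life of the plant

revenue_per_year = price * units_per_year                        # revenue = price x quantity sold
profit_per_year = revenue_per_year - unit_cost * units_per_year  # profit = revenue - costs
total_profit = profit_per_year * years

print(f"Profit per year: {profit_per_year:,.0f}")
print(f"Profit over {years} years: {total_profit:,.0f}")
print("Invest" if total_profit > investment else "Do not invest")
```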

The problem is that he has to do this over many years. The initial investment – especially the buildings and machinery – will have to be used for years before they pay themselves off. We are probably talking on the order of 10-20 years. Now our entrepreneur may be able to get a good approximation of the price that he will be able to charge for his goods by looking at similar markets in the first year or two. He may also be able to get a fairly good approximation of the amount of market demand that there will be for his product in the first year or two. But beyond the first year or two everything will be a haze. He has no idea whether there will be a recession, a financial crisis or a depression. He will also have no idea how prices will change – will there be a general rise in prices (an inflation), a general fall in prices (a deflation) or will prices stay the same[2]?
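
To see how quickly the haze matters, here is the same hypothetical calculation with a few rival assumptions about prices and demand from year three onward. The scenarios and figures are invented, and unit costs are held fixed purely for simplicity:

```python
# The same invented example as above, but with rival assumptions about what prices
# and demand will do beyond the first two 'knowable' years.
investment = 1_000_000
unit_cost = 8.0
years = 15

scenarios = {
    # assumed (price, units sold per year) from year 3 onward
    "steady prices":           (10.0, 60_000),
    "mild inflation":          (11.0, 60_000),
    "deflation and recession": (8.5, 40_000),
}

for name, (price, units) in scenarios.items():
    early = 2 * (10.0 - unit_cost) * 60_000           # years 1-2 at the 'known' figures
    late = (years - 2) * (price - unit_cost) * units  # remaining years at the scenario figures
    total_profit = early + late
    verdict = "invest" if total_profit > investment else "do not invest"
    print(f"{name:>25}: profit {total_profit:>12,.0f} -> {verdict}")
```

The same factory is a good investment under two of these assumptions and a bad one under the third – and nothing in the first two years of data tells the entrepreneur which world he is in.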

These issues were brought to the fore in economics in the 1920s and 1930s by economists like Gunnar Myrdal and John Maynard Keynes. Prior to this the questions were ignored and agents in economic models were basically thought to be omniscient. But Myrdal and Keynes smashed this consensus – for a while at least. Here is a famous passage from Keynes’ General Theory of Employment, Interest and Money outlining the impossible problems that face the entrepreneur:

The outstanding fact is the extreme precariousness of the basis of knowledge on which our estimates of prospective yield [i.e. profits] have to be made. Our knowledge of the factors which will govern the yield of an investment some years hence is usually very slight and often negligible. If we speak frankly, we have to admit that our basis of knowledge for estimating the yield ten years hence of a railway, a copper mine, a textile factory, the goodwill of a patent medicine, an Atlantic liner, a building in the City of London amounts to little and sometimes to nothing; or even five years hence.

Keynes concludes that this means that a lot of economic activity is determined not by calculation of probabilities or anything like it. Rather it is determined by the state of confidence.

It would be foolish, in forming our expectations, to attach great weight to matters which are very uncertain. It is reasonable, therefore, to be guided to a considerable degree by the facts about which we feel somewhat confident, even though they may be less decisively relevant to the issue than other facts about which our knowledge is vague and scanty. For this reason the facts of the existing situation enter, in a sense disproportionately, into the formation of our long-term expectations; our usual practice being to take the existing situation and to project it into the future, modified only to the extent that we have more or less definite reasons for expecting a change. The state of long-term expectation, upon which our decisions are based, does not solely depend, therefore, on the most probable forecast we can make. It also depends on the confidence with which we make this forecast — on how highly we rate the likelihood of our best forecast turning out quite wrong. If we expect large changes but are very uncertain as to what precise form these changes will take, then our confidence will be weak.

Once this Pandora’s Box was opened up it started eating economic theory from the inside out. The whole theory was based on decisions made in the face of calculable certainty. But once we admitted that the future is properly uncertain the theory started to unravel. Within a few years, however, economists had put the ‘uncertainty monster’ back in the box. From where I’m standing this rendered their theories pretty much useless, and I’m sure that many readers can make the connection between this fundamental epistemological error and the inability of economists to see the Great Financial Crisis coming (not to mention their complete inability to deal with the consequences adequately). But enough history. I am interested here in pointing the reader in the right direction if they want to get a sense of what some economists have learned from the study of their discipline through the lens of fundamental uncertainty.

A dummies’ guide to uncertainty in economics

First up is Keynes himself. We have already seen how Keynes introduced the concept into economic theory. But he also did some work on the implications uncertainty had for econometric modelling – that is, the use of mathematical and statistical models to try to predict future economic outcomes. Keynes addressed this in his paper ‘Professor Tinbergen’s Method’, written in 1939. The ‘Tinbergen’ in question was Jan Tinbergen, a Dutch economist who pioneered multiple linear regression modelling. Keynes had actually written an entire book on probability and statistics where he advanced a theory of probability that integrated uncertainty. This is too complex to look at now, but interested readers should get their hands on a copy of A Treatise on Probability.

Keynes lays out some of the issues with statistical modelling in his Tinbergen paper. For example, he makes clear that…

Put broadly, the most important condition is that the environment in all relevant respects, other than the fluctuations in those factors of which we take particular account, should be uniform and homogeneous over a period of time.

Most people are taught in statistics class that the coefficients in a multiple linear regression can only be taken at face value if we assume that the statistical model is complete – that is, that all relevant variables have been included in the model. In practice, as most practitioners know, this rule is rarely followed. But it should be, and the fact that it is not probably means that we should take the results with more than a pinch of salt. Another problem that Keynes highlights in the paper is as follows:

For, owing to the wide margin of error, only those factors which have in fact shown wide fluctuations come into the picture in a reliable way. If a factor, the fluctuations of which are potentially important, has in fact varied very little, there may be no clue to what its influence would be if it were to change more sharply. There is a passage in which Prof. Tinbergen points out (p. 65), after arriving at a very small regression coefficient for the rate of interest as an influence on investment, that this may be explained by the fact that during the period in question the rate of interest varied very little.

Keynes’ criticism is as fresh today as it was in 1939. Because we have no access to repeatable controlled experiments, the model is limited by the actual variability in the historical data. The relationship between one variable and another may not be linear: the coefficient may rise massively past a certain point. The example of the interest rate is a good one. If the interest rate only moves within a band of one or two percentage points in a sample, its impact on investment will probably appear minimal or non-existent, and a regression would tell us this. But if the interest rate were then raised in an unprecedented way – say, to 15% – the impact on investment could be enormous. This actually happened in 1979-1980, when the interest rate was raised from around 10% to just over 17%. Investment crashed and the economy went into recession.
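
A toy simulation of this point – my illustration, not Keynes’ or Tinbergen’s – in which the ‘true’ relationship between the interest rate and investment is non-linear, the rate barely varies in the sample, and a linear regression fitted to that sample badly misjudges what an unprecedented rise would do. The functional form and all numbers are invented:

```python
# Toy illustration: a regression fitted on a sample where the interest rate only
# moves between 9% and 11% says little about what happens if the rate jumps to 17%.
import numpy as np

rng = np.random.default_rng(0)

def true_investment(rate):
    # Hypothetical 'true' relationship: investment falls away sharply at high rates.
    return 100.0 - 0.005 * rate**3

# In-sample data: the rate fluctuates only within a narrow band.
rates = rng.uniform(9.0, 11.0, size=200)
investment = true_investment(rates) + rng.normal(0.0, 1.0, size=200)

# Fit a simple linear regression on the narrow sample.
slope, intercept = np.polyfit(rates, investment, 1)
print(f"Estimated effect of a one-point rate rise: {slope:.2f}")

# Extrapolate to an unprecedented 17% rate and compare with the 'true' outcome.
print(f"Linear prediction at 17%: {intercept + slope * 17.0:.1f}")
print(f"'True' value at 17%:      {true_investment(17.0):.1f}")
```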

The next economist to deal extensively with uncertainty was GLS Shackle. Shackle tried to further integrate uncertainty into economic theory in books like Epistemics and Economics: A Critique of Economic Doctrines. That may not be of too much interest to non-economists, but he also made some interesting points about uncertainty more generally. He was especially interested in the issue of decision-making under uncertainty – which he understood to be entirely different from decision-making in the face of a probabilistic or ‘risky’ future. He thought that decisions in the face of uncertainty were unique in that they are often required even though there is no definite way to approach them. From his book Epistemics and Economics:

To be uncertain is to entertain many rival hypotheses. The hypotheses are rivals of each other in the sense that they all refer to the same question, and that only one of them can prove true in the event. Will it, then, make sense to average these suggested mutually exclusive outcomes? There is something to be said for it. If the voices are extremely discordant, to listen to the extreme at one end of the range or the other will have most of the voices urging, in some sort of unison, a turn in the other direction. ‘The golden mean’ has been a precept from antiquity, and in this situation it will ensure that, since the mass of hypotheses will still be in disagreement with the answer which is thus chosen, they shall be divided amongst themselves and pulling in opposite directions. Moreover, the average can be a weighted one, if appropriate weights can be discovered. But what is to be their source? We have argued that statistical probabilities are knowledge. They are, however, knowledge in regard to the wrong sort of question, when our need is for weights to assign for rival answers. If we have knowledge, we are not uncertain, we need not and cannot entertain mutually rival hypotheses. The various hypotheses or contingencies to which frequency-ratios are assigned by statistical observation are not rivals. On the contrary, they are members of a team. All of them are true, each in a certain proportion of cases with which, all taken together as a whole, the frequency-distribution is concerned. Rival answers might indeed be entertained to a different sort of question, one referring to the result of a single, particular, ‘proper-named’ and identified instance of that sort of operation or trial from which the frequency-distribution is obtained by many-time repeated trials. But in the answer to a question about a single trial, the frequency-ratios are not knowledge. They are only the racing tipster’s suggestion about which horse to back. His suggestions are based on subtle consideration of many sorts of data, including statistical data, but they are not knowledge.

I have quoted Shackle at length to give the reader a sense of how reading his work might be a useful guide to making certain decisions that are encountered with some regularity in climate science. Epistemics and Economics is partly about economic theory but it is also a book devoted to how rational people can make decisions under uncertainty.
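
One small, invented illustration of the passage above: for a single, unrepeatable decision, the probability-weighted ‘average’ of mutually exclusive outcomes can be a value that none of the rival hypotheses actually predicts.

```python
# Hypothetical one-off decision with two mutually exclusive outcomes. The weights
# are the 'racing tipster's suggestion', not knowledge about this single trial.
rival_outcomes = {
    "venture succeeds": 2_000_000,
    "venture fails":     -500_000,
}
weights = {"venture succeeds": 0.4, "venture fails": 0.6}

expected = sum(weights[h] * rival_outcomes[h] for h in rival_outcomes)
print(f"Weighted-average payoff: {expected:,.0f}")
# Prints 500,000 -- a figure that cannot occur: the actual outcome of this single
# trial will be either 2,000,000 or -500,000.
```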

The next economist that may be of interest is Paul Davidson. Davidson highlights the fact that economics is a ‘non-ergodic’ science. By ‘non-ergodic’ he means that the future does not necessarily mirror the past; just because x happened in the past does not mean that x will happen in the future. He writes:

Logically, to make statistically reliable probabilistic forecasts about future economic events, today’s decision-makers should obtain and analyze sample data from the future. Since that is impossible, the assumption of ergodic stochastic economic processes permits the analyst to assert that the outcome at any future date is the statistical shadow of past and current market data. A realization of a stochastic process is a sample value of a multidimensional variable over a period of time, i.e., a single time series. A stochastic process makes a universe of such time series. Time statistics refer to statistical averages (e.g., the mean, standard deviation) calculated from a single fixed realization over an indefinite time space. Space statistics, on the other hand, refer to a fixed point of time and are formed over the universe of realizations (i.e. they are statistics obtained from cross-sectional data). Statistical theory asserts that if the stochastic process is ergodic then for an infinite realization, the time statistics and the space statistics will coincide. For finite realizations of ergodic processes, time and space statistics coincide except for random errors; they will tend to converge (with the probability of unity) as the number of observations increase. Consequently, if ergodicity is assumed, statistics calculated from past time series or cross-sectional data are statistically reliable estimates of the statistical probabilities that will occur at any future date. In simple language, the ergodic presumption assures that economic outcomes on any specific future date can be reliably predicted by a statistical probability analysis of existing market data.

He also makes the case – and this is of interest to those in other sciences – that non-ergodicity may apply to systems that are very sensitive to initial conditions. That is, systems which are commonly referred to as ‘chaotic’ today.
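
A toy contrast, in the spirit of Davidson’s distinction between time statistics and space statistics (my illustration, not his): for an ergodic process the time average of one long realization approximates the ensemble average at a future date, while for a non-ergodic process such as a random walk it does not.

```python
# Compare 'time statistics' (averages along one realization) with 'space statistics'
# (averages across many realizations at a fixed future date).
import numpy as np

rng = np.random.default_rng(42)
T, N = 2_000, 2_000   # length of each realization, number of realizations

# Ergodic case: i.i.d. draws. Time and ensemble averages agree.
iid = rng.normal(5.0, 2.0, size=(N, T))
print(f"i.i.d.:      time mean of one series = {iid[0].mean():7.2f}, "
      f"ensemble mean at final date = {iid[:, -1].mean():7.2f}")

# Non-ergodic case: a random walk. The time average of a single path says almost
# nothing about the spread of outcomes across realizations at the final date.
walk = rng.normal(0.0, 1.0, size=(N, T)).cumsum(axis=1)
print(f"random walk: time mean of one path   = {walk[0].mean():7.2f}, "
      f"ensemble mean at final date = {walk[:, -1].mean():7.2f}, "
      f"ensemble std = {walk[:, -1].std():7.2f}")
```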

The next economist that merits mention is Tony Lawson. Lawson has gone right back to basics to try to tackle the question of uncertainty in economics. He makes the case that recognising uncertainty requires the economist/scientist to occupy an entirely different ontological position – that is, to view the world in an inherently different way from their uncertainty-free colleagues. Lawson’s work is massively complex and attempts to build up new epistemological and ontological foundations through which scientists can access truths in the face of uncertainty. I will try to give the reader something of a flavour here. Much of this rests on Lawson’s attack on mathematical modelling as the end goal of science. Lawson claims that only ‘closed systems’ – that is, systems that are both deterministic and in which we fully understand the determinants driving the system – can be mathematically modelled in any serious way.

The first thing to note is that all these mathematical methods that economists use presuppose event regularities or correlations. This makes modern economics a form of deductivism. A closed system in this context just means any situation in which an event regularity occurs. Deductivism is a form of explanation that requires event regularities. Now event regularities can just be assumed to hold, even if they cannot be theorised, and some econometricians do just that and dedicate their time to trying to uncover them. But most economists want to theorise in economic terms as well. But clearly they must do so in terms that guarantee event regularity results. The way to do this is to formulate theories in terms of isolated atoms. By an atom I just mean a factor that has the same independent effect whatever the context. Typically human individuals are portrayed as the atoms in question, though there is nothing essential about this. Notice too that most debates about the nature of rationality are beside the point. Mainstream modellers just need to fix the actions of the individual of their analyses to render them atomistic, i.e., to fix their responses to given conditions. It is this implausible fixing of actions that tends to be expressed through, or is the task of, any rationality axiom. But in truth any old specification will do, including fixed rule or algorithm following as in, say, agent based modelling; the precise assumption used to achieve this matters little. Once some such axiom or assumption-fixing behaviour is made economists can predict/deduce what the factor in question will do if stimulated. Finally the specification in this way of what any such atom does in given conditions allows the prediction activities of economists ONLY if nothing is allowed to counteract the actions of the atoms of analysis. Hence these atoms must additionally be assumed to act in isolation. It is easy to show that this ontology of closed systems of isolated atoms characterises all of the substantive theorising of mainstream economists. It is also easy enough to show that the real world, the social reality in which we actually live, is of a nature that is anything but a set of closed systems of isolated atoms.

There is much more to Lawson’s work – including his exploration of an alternative methodology called ‘critical realism’ – but I will not try to dive too deep into it here.

Finally, a recent, more practical approach to studying systems under uncertainty comes from Philip Pilkington’s book The Reformation in Economics: A Deconstruction and Reconstruction of Economic Theory. Pilkington dedicates an entire chapter to approaching the study of economics while taking into account the existence of uncertainty. He tries to formulate pragmatic principles to do this in a coherent way. He argues that sciences that are not suited to straightforward model-building should instead take what he calls a ‘schematic’ approach. These ‘schemas’ are basic relationships that tell us something about how complex systems work. They are usually derived from logically or empirically provable relationships that exist in these systems. They differ from models in that they do not provide a complete picture of these complex systems – something Pilkington claims is impossible. Rather, they let us get to know aspects of how the system works, which we can then combine with our judgement to decide what can be said about the system.

Economics, properly understood, is not the art of constructing models. Rather, it is the art of furnishing, elaborating, understanding and integrating schemas into one’s process of thought. Economics is not about building abstract castles in the sky. Nor is it learned or perfected by engaging in such constructions. It is more like a language that is learned through understanding and practice. You do not learn good sentence construction by studying linguistics; rather, you learn it by becoming as acquainted as possible with the language, with words and their multifarious meanings.

This is a far more open-ended approach than the strict mathematical and statistical modelling that Pilkington claims does not work when the material being studied becomes too complex[3]. It ultimately rests on informed people forming judgements about the material that they study.

What does all this have to do with climate science?

The reader who has made it this far is probably wondering what all this has to do with climate science. I think that non-orthodox economists have undertaken the most thorough study of uncertainty that exists in the sciences today and that their work should be consulted by anyone who writes or thinks about these issues. But there are both similarities and differences between the two sciences.

Recall that we made the case that economics studies people who have to make decisions under uncertainty. While climate scientists themselves may have to make decisions in the face of uncertainty, they do not study people who have to make such decisions – CO2, after all, does not decide how it will affect the climate. The processes studied in climate science are natural processes, while the processes studied in economics are human processes. This gives climate science an immediate advantage: the uncertainty being dealt with is only first order – that is, on the part of the scientist – rather than second order – that is, on the part of both the scientist and the object of study.

Despite this, however, the climate, like the economy, is extremely complex. We can only examine little bits of it at a time, and trying to form a coherent vision of the whole will almost inevitably leave something out. Climate science and economics share this problem. Because of this, climate models and economic models have a high degree of indeterminacy that must be understood by those using them. (At the very least, some might say that using models is the wrong approach given the complexity of the systems being studied).

Finally, statistical measurement is notoriously difficult in both disciplines. Economists are well aware that the statistics they use are highly imperfect. Bad economists simply plug these into models and obtain garbage-in, garbage-out (GIGO) results that they then publish in the journals. Good economists, by contrast, have to weigh up the strengths and weaknesses of the statistical material they use before passing judgement. Climate scientists are arguably in an even more difficult position here because the data they use is notoriously disharmonious, spotty and ancient. Again, this is a form of decision-making under uncertainty: how much weight should we give this data?

Overall, there are more similarities between climate science and economics than there are dissimilarities. Climate scientists – and any scientist studying highly complex systems – should pay close attention to what non-orthodox economic theorists have been saying about uncertainty and its derivative problems. They could learn a lot.

Suggested readings

These suggested readings can be supplemented by the various papers, etc., that these authors have written, many of which are available online. The reader will have to use their judgement as to whether they will be of interest to the non-economist.

Davidson, Paul. (1991). ‘Is Probability Theory Relevant for Uncertainty?’.

Davidson, Paul. (1996). ‘Reality and Economic Theory’.

Keynes, John Maynard. (1921). A Treatise on Probability.

Keynes, John Maynard. (1939). ‘Professor Tinbergen’s Method’.

Lawson, Tony. (1997). Economics and Reality. Parts I, IV & V.

Pilkington, Philip. (2017). The Reformation in Economics: A Deconstruction and Reconstruction of Economic Theory. Chapters 5 & 10.

Shackle, GLS. (1972). Epistemics and Economics: A Critique of Economic Doctrines. Chapters 1-8, 11, 31, 33, 38.

A very comprehensive bibliography can be found here:

Bibliography on Uncertainty in Post Keynesian Economics

Endnotes

[1] All of this is slightly oversimplified. We abstract from interest repayments, depreciation etc. But it serves to make the basic issues clear.

[2] Again we are oversimplifying here. If we factor in, say, interest repayments, he will also have to guess at where interest rates will be in a few years’ time.

[3] Pilkington also provides a ‘general theory of bias in science’ in Chapter 5 of his book which may be of interest to readers here.


ABOUT THE AUTHOR

Vincent Randall graduated from Arizona State Teachers' College (now Northern Arizona University) and taught in Clarkdale, Arizona, schools from 1963 to 1992. He is the Cultural Director at the Yavapai-Apache Nation Cultural Center.



"From what rests on the surface one is led to the depths."

Edmund Husserl (1859-1938)
