Tuesday 31 October 2017

Improving transparency of scenario analysis in GAMS



I have recently gotten back to working with GAMS after having spent most of the last six years working with R, and I quickly got stuck in the typical trap of trying to do things in GAMS that I had gotten used to doing with R – just as most of my early R code was basically a translation from GAMS. (Now, let me get one thing straight from the outset: I think discussions about which of the two is the superior tool are meaningless, and I am not going to spend any time on that type of discussion. Both have been designed to deal with specific issues, and they are well suited for the uses they have been built for.)

However, after some years of complete immersion in R, I did get used to taking some of its features for granted, and found it frustrating that I couldn’t implement them in the same way in GAMS.

Take for instance the issue of comparing different scenarios. Of course, GAMS allows you to “loop” through different scenarios. For instance, your input data can be organised in Excel files where different ranges correspond to different scenarios. This works fine in a relatively simple setting where the nature of the work implies that the number of alternative scenarios is limited and known in advance. 

However, this is not always the case. Take for instance the PLANET model, which has been developed by the Belgian Federal Planning Bureau for long-term forecasts of transport demand in Belgium. The model was initially developed around 2008. In the last ten years, the policy context has evolved, the range of transport technologies has expanded, new transport solutions have emerged, the research questions themselves have changed… Ideally, one would like to have a core model that is relatively stable even when the scenarios under investigation change. If the scenarios are buried deep in the GAMS code (or in external files), the risk of errors increases as time goes by. Moreover, these errors are unlikely to yield any error messages, and one may well end up publishing erroneous results.

Similarly, on the output side, one would like to have spreadsheets or CSV files with names that clearly indicate which scenario they correspond to – which, as a side benefit, also makes subsequent analysis with R more convenient.

Fortunately, there is a relatively simple work-around: when you call GAMS from the command line, you can use control variables to navigate through scenarios, and produce appropriately named output files for each scenario. An additional advantage of this method is that you can also call R scripts (or Python, depending on your preferences) from the command line. This allows you to move any pre- or post-processing of your data and results from your spreadsheets to the command line, increasing the transparency and the reproducibility of your work. 

For instance, the following code is a stylized version of a script I have recently implemented in PowerShell – obviously, similar code can easily be written in bash if you work on a Linux system:


# initialisation: definitions that are stable across scenarios and years
gams definitions s=defin

$Scenarios = @("BAU","SC1")
foreach ($Sc in $Scenarios) {
    # base year (2012) runs, restarting from the initialisation step
    gams step2 r=defin,s=save2 --Yr=2012 --YrPrev=2011 --YrNxt=2013 --scen=$Sc
    gams step3 r=save2,s=save3 --Yr=2012 --YrPrev=2011 --YrNxt=2013 --scen=$Sc
    gams step4 r=save3,s=save4
    # remaining years: the 'save' of the last step is the 'restart' of the first
    $Years = 2013..2016
    foreach ($Yr in $Years) {
        gams step2 r=save4,s=save2 --Yr=$Yr --YrPrev=$($Yr-1) --scen=$Sc
        gams step3 r=save2,s=save4 --Yr=$Yr --YrNxt=$($Yr+1) --scen=$Sc
    }
    # write the results for this scenario to scenario-specific output files
    gams step5 r=save4 --scen=$Sc
}

In this example, definitions, step2, step3, step4 and step5 are gms files. 

The model is thus composed of:
  • An initialisation step definitions, where all relevant definitions are given that are stable across scenarios and through time.
  • For each scenario, step2 and step3 are first run for a base year, followed by a single (non-repeated) run of step4.
  • Next, the model loops through the years for step2 and step3, followed by
  • step5, where all the results for this scenario are saved as Excel files.

Although the code is mostly self-explanatory, some details are noteworthy:

  • With $include, GAMS offers the possibility to call one gms script from another gms script. In the approach proposed here, all gms scripts are called from PowerShell (or bash).
  • The ‘save’ and ‘restart’ options must always be placed before the other control variables.
  • In the loop over the years, information is accumulated in the model. Therefore, the ‘save’ file for the last gms script in each iteration should be the ‘restart’ for the first script. For each scenario, however, we start again from the ‘save’ file of the initialisation step, and there is no need to ‘save’ the results from the final step. 
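This shell-driven set-up also makes it easy to chain in the pre- and post-processing mentioned earlier. As a minimal sketch (assuming Rscript is on your PATH; postprocess.R is a hypothetical script name, not part of the model), one could add the following line at the end of the scenario loop, right after the call to step5:

# hypothetical post-processing in R: e.g. read ResultsBAU.xlsx and plot summaries
Rscript postprocess.R $Sc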

The crucial change, however, is that control parameters now fulfill a double role:

  • On the one hand, they are used to load input data files and to create output files whose names refer to the scenario and/or the year. For instance, for scenario “BAU” the following code will create an output file ResultsBAU.xlsx, where the parameter of interest MyPar will be written to the sheet of the same name, starting from cell A1 (an input-side counterpart is sketched after this list):
execute_unload "Results%scen%.gdx";
execute 'gdxxrw Results%scen%.gdx Output=C:\MyModel\output\Results%scen%.xlsx par=MyPar rng=MyPar!A1';

  • On the other hand, these control parameters can also be used as labels. Thus, if the command line ‘feeds’ Yr=2012 to the GAMS script, then GAMS will correctly ‘translate’ LHS(set1,"label1",set2) = RHS(set1,"label1",set2,"%Yr%") as LHS(set1,"label1",set2) = RHS(set1,"label1",set2,"2012").
  • One subtle complication concerns operations on "%Yr%" within the GAMS script. For instance, to build on the previous example, "%Yr% + 1" will not be ‘translated’ into "2013" inside the GAMS script. If you wish the script to use a label referring to the next year, you need to pass it as an additional control variable on the command line. In the example above, we have used --YrNxt=$($Yr+1). In the script, PAR1(set1,"label1","%YrNxt%") will then be translated correctly to PAR1(set1,"label1","2013") if --Yr=2012.
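To make this last point concrete, here is a minimal GAMS sketch (PAR1, set1 and the growth factor are made-up names for illustration):

* with --Yr=2012 --YrNxt=2013 on the command line, the control variables
* are substituted as plain text at compile time:
PAR1(set1,"label1","%YrNxt%") = 1.05 * PAR1(set1,"label1","%Yr%");
* by contrast, "%Yr% + 1" would NOT be evaluated: GAMS would look for
* the literal label "2012 + 1"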
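Finally, returning to the input side of the first point above: the same %scen% variable can select scenario-specific input data. A hedged sketch (the workbook Input%scen%.xlsx and the parameter MyData are assumptions for illustration, not part of the original model):

* read the scenario-specific workbook into a gdx file, then load it
$call gdxxrw Input%scen%.xlsx output=Input%scen%.gdx par=MyData rng=MyData!A1
parameter MyData(set1,set2);
$gdxin Input%scen%.gdx
$load MyData
$gdxin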




Wednesday 18 November 2015

Terrorists, traffic safety and the availability bias

The day after the terrorist attacks in Paris, the Belgian police held a nation-wide campaign of speed controls. The campaign had been planned and announced well in advance, but was not cancelled following the attacks.
This led to some moral outrage that was forcefully aired on social media: “we are under attack and the Belgian police are checking whether we are complying with speed limits – where are these guys’ priorities?”
Well, the answer is less obvious and thus much more interesting than the author of these considerations may think.
Let us, for a start, assume that this person was not simply self-serving in his moral outrage, and is a law-abiding citizen who always complies with speed limits and who is genuinely worried about our safety.
Is he right that the Belgian police should have cancelled the planned speed controls?
The behavioural economist in me cries: 

“The reality is that, annually, 1.2 million people are killed in road traffic accidents worldwide, an average death rate of 17.3 per 100,000 population (http://healthintelligence.drupalgardens.com/content/road-traffic-death-rates-across-countries-world-2013). On the other hand, fewer than 33,000 people died from terrorist attacks in 2014 (http://www.statista.com/statistics/202871/number-of-fatalities-by-terrorist-attacks-worldwide/). Thus, this cry of outrage on social media is a typical example of the so-called availability bias: the number of victims of road traffic accidents may be around 36 times larger than the number of victims of terrorist attacks (1.2 million versus 33,000), but when people die from terrorist attacks, a lot of people die at the same time, so we have the impression that terrorism is a much larger threat than road traffic (especially when it happens in the city where we got engaged)”.


A smug smile runs across my face and I think: “mission accomplished, another beautiful example of deviation from rational thinking identified in the real world.”
Oh, but wait a second: who says that police resources should be allocated on the basis of the numbers of victims? A serious economic analysis also requires us to calculate how many deaths (from road accidents or terrorism) can be prevented by a given investment in police resources.
And this is where we run into trouble.
For traffic death prevention, the analysis is (in principle, if not in practice) quite simple: we “simply” need to know how much value the “speeder” attaches to driving faster than the legal limit, and we know that he will be deterred from speeding if the value of speeding is smaller than the expected value of a ticket for speeding. And this expected value is simply the product of the fine and the probability of getting caught, which is in turn a function of the resources invested in enforcing speed limits. Well, I know there are some details that still need to be developed (does the speeder’s assessment of the detection probability correspond to the objective probability, for instance?), but, in principle, all these issues can be dealt with if we invest the necessary resources in serious empirical research.
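To make the deterrence logic concrete, a quick worked illustration with made-up numbers: suppose the fine is 100 euro and the (perceived) probability of getting caught is 5%. Then

expected penalty = 0.05 × 100 euro = 5 euro

so only drivers who value speeding at less than 5 euro are deterred, and doubling either the fine or the enforcement effort doubles that threshold.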
But what about terrorism prevention? Do we know how the probability of catching the terrorists before the act is affected by our investment in intelligence and police work? Well, I am not sure we even can know. When security on planes increased, terrorists started attacking trains. If we increase security measures on trains, they will start attacking buses. If we increase security measures on buses, they will start attacking supermarkets. In other words, the more we protect specific targets, the more the terrorists will act at random – which is precisely what they have done in Paris. And if we can prevent them from undertaking random attacks in cities, they will move to random attacks in villages.
What is even more annoying with this type of terrorist is that it is not even clear what would deter them, even if they were caught with certainty. After all, the only way to stop them is to kill them, and that is precisely what they want. And, because of the availability bias, it is not clear that the effect on public fear of a failed attack is that much smaller than the effect of a successful attack.

So what does all this imply? Well, I am not so sure anymore. Maybe I should just wipe this smug smile from my face and start some real hard thinking about the subject. And, maybe, after all, this whole discussion isn’t worth having. After all, this guy on social media is obviously a self-serving asshole who wants to drive at any speed he likes without getting caught, and who is using the terrorists as a cheap excuse. Is anyone suggesting that I am suffering from “representativeness” or “stereotyping” bias now?

Wednesday 21 October 2015

Two newspaper articles and some behavioural insights on traffic safety and postnatal depression

Yesterday, two articles took up most of the front and the back page of my daily newspaper. Both struck me, as the central message they convey is so fundamentally at odds with one of the central current insights into the effect of (implicit) social norms on behaviour.
On the front page of the paper, the Belgian federal minister of transport launched a broad call for citizen participation in the development of new approaches to traffic safety. The central message as represented in the newspaper was: our current policies fail to further reduce the number of fatal accidents, and the current repressive top-down approach should be complemented with a bottom-up approach, as 90% of accidents are due to human error.
On the back page, we learned that a new law has been passed that will introduce systematic screening of pregnant women (or women who have just given birth) for (potential) postnatal depression.
It is a pity that these policy makers and their advisers have not read “Inside the Nudge Unit”, David Halpern's fascinating account of how systematic thinking about human behaviour has improved public policy, with tangible (if, globally speaking, modest) benefits.
One of the central lessons learned over the years is that human beings are extremely conformist, and that their behaviour is heavily influenced by (what they think is) their reference group's norms and behaviour. Now, one of the surprises is that these perceptions are heavily biased by the salience of the behaviour of others. For instance, students think that other students drink much more (and study less) than they actually do, because it is the drinking that is the most salient behaviour (and not the studying).
Why is this relevant for traffic safety and postnatal depression? Well, research has also shown that the more policy makers talk about a problem, the more people perceive this problem to be the norm, and the more they will adapt their behaviour accordingly. Of course, what I say now is speculative, but I would submit that a traffic safety campaign that emphasizes that most people do drive carefully (and that careful driving is thus the norm) would be more effective than a minister saying in public that the problem is so hopeless that she needs a public consultation to find a solution to it (with one caveat: this would only work in countries where most people do indeed drive responsibly).
And what about postnatal depression? Well, of course, commercial advertising constantly shows us perfectly clean, perfectly behaved and perfectly happy children (at least, if you dress them in the right clothes and feed them the right cereals, and drive them around in the right minivan), so some balance was probably welcome. But now policy makers are sending the signal that postnatal depression is, well, maybe not exactly the norm, but at least a risk that is lurking behind every pregnancy. Guess what? My guess is that the effect will be an increase in the number of reported postnatal depressions, and not only because monitoring will have improved.
So what would be the right message to convey here? Well, I suppose the simple truth: raising children is not a cakewalk. But who told you anything in life is? Yet behavioural research has also shown that people report that having raised children was among the most meaningful activities in their lives. And that does not depend on the brand of cereals you feed them with.

Tuesday 30 August 2011

Edward Glaeser's "Triumph of the city"


In “Triumph of the City”, Harvard economics professor Edward Glaeser sets out “how our greatest invention makes us richer, smarter, greener, healthier and happier”. Hyperbole aside (are cities really a greater invention than the wheel?), he largely succeeds in this ambition. Moreover, despite the scholarship that shows on every single page, the book is very enjoyable.
Glaeser starts his book with a startling question: why do people choose to live in cities at all, while there is so much free space around (all of humanity could fit in Texas, as he duly reminds us)? The answer, in short, is that, even in the Internet age, the productive advantages conferred by proximity outweigh all the disadvantages linked to urban life. Cities are places where entrepreneurial and smart people congregate because they know that that’s where they can meet other entrepreneurial and smart people, and it is this interaction that generates the innovative forces that underlie human progress. If, as Matt Ridley has argued in “The Rational Optimist” (see my review of Ridley), human progress is due to “ideas having sex”, then urban life is indeed one big orgy.
Of course, Glaeser is not blind to the crime, the squalor, the pollution, and the congestion that often blight city life. However, he correctly argues that several myths and misconceptions surround these elements. For instance, the co-existence of extreme wealth and extreme poverty is not a sign of failure, but the natural consequence of a city’s success: poor people move to cities because they are attracted by the opportunities for upward mobility offered by city life.
Of course, one cannot talk about cities without discussing suburbs and urban sprawl. In economists’ jargon, the move to suburban life is a typical example of an externality: people who can afford the commute from the suburbs to the city centre tend indeed to move to the suburbs, because they only consider the benefits to themselves, and not the costs they impose on society (more congestion, higher energy consumption, biodiversity losses, etc). Glaeser describes not only how decreases in transportation costs have led to “classical” suburbanization, but he also discusses the recent emergence of exurbs. Paradoxically, while city dwellers may indeed suffer from high concentrations of air pollutants, their environmental impact and their energy consumption are sometimes orders of magnitude smaller than those of people living close to nature. It would be an exaggeration to claim that only city life can save the planet, but further suburbanization will certainly lead to disproportionate environmental impacts, especially with respect to greenhouse gas emissions (which are much more closely linked to energy consumption than conventional pollutants). One cannot even start thinking about the consequences should China and India follow the path of the Western world in this domain.
It is impossible to do full justice to the breadth and depth of this book in just a few pages. Let me just briefly mention some of the other topics covered in the book: Why do some cities remain successful even after the original causes of their success have become irrelevant (New York, say)? Why would it be better for declining cities to just tear down entire building blocks rather than to invest in new infrastructure? Why are urbanites willing to cope with extraordinarily high costs of living? How have building regulations resulted in keeping poor people out of the centre of Paris? Why are the slums of Mumbai the result of poor policies? Why are cities essentially marriage markets? To all these questions, Glaeser provides answers, sometimes surprising, sometimes provocative, but always well argued.
Does that mean that the book is without flaws? Let’s just say, that as an economist with a professional interest in topics with a strong urban dimension (such as transport and waste policy), I would have liked a more detailed discussion of how cities have coped with these specific challenges: What were the issues in these fields? How have changes in technologies and in institutions interacted? What are the challenges for the future?
But these are minor issues, and a more technical discussion may put some of the audience of this book off.
“Triumph of the city” is popularized social science at its best, and is highly recommended to anyone with an interest in urban life – as more than half of the people on this planet currently live in cities, that should be nearly everyone. 

Sunday 13 March 2011

Review of “Sustainable Energy – without the hot air”

I have a confession to make. “Sustainable Energy – without the hot air” had mostly gathered dust on my bookshelf for almost two years before I finally took the time to finish reading it. So why did I take it up again and finish it this time (and, more importantly, bother to write this review and try to convince you to read it yourself)?
Well, in the last few months, several reports have been published, outlining how it would be possible to run an economy without fossil fuels in the foreseeable future – I suppose that this sudden outburst is not completely unrelated to the preparation of the European Commission’s roadmap for a low-carbon future.
While the headlines of these reports caught my attention, in most cases I rather quickly abandoned reading them. The main motivation for doing so was not that I fundamentally disagreed with the authors’ conclusions, but rather that the published material was not sufficiently concrete to help me make up my mind on the validity of these conclusions. Indeed, any report looking 40 years or more into the future is inevitably speculative. There is nothing wrong with that: taking into account the stock nature of greenhouse gases, we have no choice but to think about the very-long-term consequences of today’s choices. However, with such a time horizon, it is clear that any conclusion you draw can depend critically on assumptions that may appear to be minor but that have dramatic cumulative impacts 40 years from now. Therefore, complete transparency with respect to the assumptions used (and how they translate into conclusions) is essential for any publication on this subject. It was precisely because this transparency was lacking in most publications on the subject that I felt that further reading would not really enlighten my own thoughts.
It was then that I recalled that complete transparency was the key selling point of “Sustainable Energy” (one should not take “selling” too literally, as the book can be downloaded freely). By his own admission, the main motivation of the book’s author, David MacKay, a professor of physics at Cambridge, is his concern about cutting the emissions of “twaddle about sustainable energy”. He starts from the observation that many of the things that allegedly make a difference in terms of energy consumption simply do not add up. However, he rightly points out that the debate about sustainable energy is one about numbers, and that in a climate where people don’t understand numbers, anyone can get away with murder (or energy plans that do not add up).
The book is essentially a case study of whether it is possible to draw up an energy plan for a country (in this case, the United Kingdom, but the methodology can be readily applied to any country) that would cover its entire energy needs without the use of fossil fuels.
MacKay proceeds as follows.
In the first half of the book, he compares the country’s energy consumption with “sustainable” energy production. His approach is bottom-up pushed to the extreme. On the consumption side, he takes any possible activity that uses energy (transport, heating and cooling, light, “gadgets”, food, etc) and estimates the corresponding energy use of a “typical” household. On the production side, he considers all possible sources of sustainable energy (on-shore wind, solar, hydropower, offshore wind, wave, tide and geothermal). The only constraints MacKay considers are physical ones. His main motivation for deliberately ignoring economic or environmental constraints is that doing so helps focus on the question of whether conceivable sustainable energy production would be enough to cover total consumption.
After having verified the results of his bottom-up estimates against actual energy consumption (according to official statistics), MacKay goes on to the second half of the book, where he verifies whether an appropriate mix of demand measures (better transport, more efficient heating etc) and supply measures (importing renewables from abroad, nuclear, even coal with carbon capture and storage) would result in a renewable energy mix that actually adds up.
In case you cannot wait to read the book, the answer to the feasibility question is: yes, it is physically possible to fulfil a country’s energy needs with renewable energy, but (even if one ignores economics) it will be far from obvious and it will require some drastic changes in current consumption and production patterns. Actually, the author even hints that some climate engineering may be needed in case the sums for mitigation do not add up (see my discussion of Superfreakonomics for more on this). Whether you like this conclusion or not, I would strongly recommend finding out the details of the argument for yourself, and verifying whether you can agree with them.
What is really wonderful about this book is that you can indeed verify every single step in the argumentation. Moreover, you can put all the assumptions in a spreadsheet, and verify how the results change according to geography (you may well live in a country with a very short shoreline but lots of spare space and sun) or time (some of the technical assumptions may change quickly and unexpectedly as time unfolds – for instance, The Economist of 12 March hints that the potential for increased energy efficiency in aviation is much higher than MacKay’s estimates).
To put it simply, this is probably the most transparent book I have ever read, and it sets a very high standard for any future work in this field.
Through its emphasis on verifiable facts and figures, rather than on perceptions, the book is also very good at dispelling a lot of misconceptions (no, you will not save the planet by unplugging your mobile-phone charger when it’s not in use; yes, there are limits to energy efficiency that are not imposed by economics, but simply by the laws of physics, etc), which can only lead to a better informed debate (in case, of course, that’s what you’re interested in).
Of course, the level of detail and the emphasis on hard facts may also put a lot of people off (despite the tongue-in-cheek humour, it is hardly compelling bed-time reading, which may explain why it lingered on my own “to read” list for so long). I suppose that’s the eternal dilemma of books that are meant to be intellectually rigorous but aim at a general audience at the same time: there’s an unavoidable trade-off between rigour and accessibility. This is typically the kind of book that boffins will enthusiastically recommend as being very “accessible”, only to find out that no one except their own kind shares this enthusiasm.
This being said, are there any other issues with the book?
Actually, it covers so much ground that I do not feel qualified to comment on most of the details (when I wrote that you can verify every detail in the book, what I really meant is that it is possible to set up a team of experts to verify everything).
Let me therefore limit my discussion to the few subjects where I think I know a thing or two more than Professor MacKay.
First, transport. MacKay makes a compelling case for the (physical) limits to energy efficiency in private transport. However, I am afraid he is way too optimistic concerning the potential of public transport. I do not dispute that public transport is, on average, much more energy efficient than private transport. However, current averages are very poor predictors of actual energy efficiency if a massive shift from private to public transport were to take place. Indeed, public transport can only be more energy efficient than private transport if it is used to move huge (more or less predictable) quantities of people from one (predictable) origin to a (predictable) destination. In other words: it is mainly efficient at peak times in urban areas. The problem is that a significant number of people who currently use their car for moving around do not fall into this category (they live spread around in large suburban areas, their work places are dispersed in other large suburban areas, etc). If these people were to switch to public transport, the energy efficiency of public transport would be likely to decrease, not increase. Of course, my reasoning also depends on specific assumptions with respect to spatial structure: one may retort that the promotion of public transport should go hand in hand with measures against urban sprawl. Now, that’s a point I fully agree with, but the reality is that spatial structure is like housing: poor choices that were made in the past are likely to stay with us for a very, very long time. So, to summarize, in some countries where public transport has been underfunded or poorly organised, there is probably some potential to improve the energy efficiency of the transport system by promoting a modal shift. Also, this may be an interesting policy option for the hundreds of cities that will be built in India and China in the decades to come, and where spatial structure remains to be defined. But do not think you can stretch the potential of public transport much further in many European urban areas.
Second, economics. Of course, as an economist, I would have liked to see a more thorough discussion of the economics of sustainable energy (the solutions with the highest technical potential are not necessarily the ones with the highest cost-effectiveness). However, I understand MacKay’s point that you should evaluate the physical feasibility of an economy entirely based on renewables before you even consider economics. Actually, I would have preferred MacKay to stick to this approach throughout the book, because most of what he has to say on economics does not come close to his analysis of other issues. For instance, MacKay finds it odd that people have faith in markets, “given how regularly markets give us things like booms and busts, credit crunches, and collapses of banks”. Very well. Using a similar line of argument, I find it odd that people have faith in governments, given that they gave us things like decades of economic stagnation and mass famine (the Soviet Union and China under Mao), and mass murder (Birkenau, the Gulag and the Cambodian killing fields were organised by government bureaucracies, were they not?). If you think this argument is a caricature, I agree – it is. But not much more than the diatribe against markets that you can find in Chapter 29 of the book.
My point here is: finding out what is technically achievable is an essential first step in a move to an economy that is based upon renewable energy. However, it is only a first step. Once you go further than that, you have to define priorities and you must design institutions that will induce the desired changes in a cost-effective manner. While the design of these institutions should indeed involve legislation, regulation and taxes (as argued by MacKay), there is nothing dogmatic about affirming that you also need to harness the power of market forces to induce the most cost-effective reductions.
By the way, I also have a substantial comment or two to make on the economics. Electric vehicles may have a lot of technical potential, but, for the foreseeable future, most economists reckon that their cost per tonne of CO2 saved is up to an order of magnitude larger than the cost per tonne of CO2 saved in other sectors of the economy (such as housing). I also doubt that the costs of all sources of renewable energy can be expected to drop in the future: whilst technological progress may induce lower prices, higher demand will also put upward pressure on prices. I do not think we can predict which effect will dominate. However, discussing these issues in depth should be the subject of a separate post on this site.
Third, MacKay barely touches upon the energy and non-energy resource cost of creating the infrastructure that will provide all this renewable energy. This is not a trivial matter. Actually, some have argued recently that the main constraint on moving to a low-carbon economy is that it will not be possible within the coming decades to mine all the minerals that are needed to create this infrastructure. This is perhaps an issue that should be considered in a new edition of the book.
Of course, within the larger picture, these are minor comments. On the whole, this book is an impressive intellectual achievement. As it is freely available on-line, you do not have the excuse of the cost for not downloading it right away and making up your mind yourself (and, more importantly, for not forwarding it to all policy makers you know).
And, oh please, don’t forget to repeat its core one-liner to everyone you know: if everyone does a little, we’ll achieve only a little. 

Sunday 20 February 2011

Superfreakonomics and climate engineering

With Freakonomics, Steven Levitt and Stephen Dubner had written a very enjoyable overview of how the standard techniques of economic analysis can be applied to a wide variety of less standard issues (how to detect cheating teachers and sumo wrestlers, how liberalizing abortion laws eventually translates into lower crime rates, why most drug dealers live with their mothers, etc.).
With Superfreakonomics, they do it again. The range of topics is just as wide as in their first book: why walking drunk is more dangerous than driving drunk, why wages of prostitutes have gone down over the last century, why your month of birth is a good predictor of future professional achievement, how to detect from someone’s bank account whether he is likely to be a suicide terrorist, why children (at least in the US) are more likely to visit their parents in retirement homes if they have siblings, etc.
These two unconventional books are the result of a just as unconventional team. Stephen Dubner, who writes for The New York Times and The New Yorker, joined forces with Steven Levitt, a professor of economics at the University of Chicago. In case the subjects listed above make you doubt it: Steven Levitt is no crank; for his original research, he has won the John Bates Clark medal, which is arguably harder to get than a Nobel Prize.
As the authors explain, both books have one grand unifying theme: (a) people respond to incentives, and (b) they do not necessarily do so in ways that are predictable or obvious. In the first book, the emphasis was on professor Levitt’s own research in this field, while Superfreakonomics covers more work undertaken by other researchers.
The general appreciation of the book can be relatively short: just like its predecessor, it is very well written and chock-full of facts and insights that are both amusing and revealing. If you have any interest in human behavior, you should read it.
However, in this review, I would like to elaborate on one specific chapter that is particularly relevant for the subject of this blog: the discussion on climate engineering.
The chapter can be summarized as follows. The authors do accept the hypothesis that the climate is changing, and that these changes are anthropogenic (even though they rightly point out that a lot of processes are still very poorly understood). However, they doubt very much that a binding international agreement to reduce the emissions of greenhouse gases (GHG) will ever be signed. Moreover, they argue that, even if such an agreement were reached today, it would probably be too late to prevent significant climate change. Finally, scenarios that assume that significant climate change mitigation still falls within the realm of the possible tend to overestimate the potential of technological solutions (for instance, because they do not take into account the GHG emissions during the production of “clean” technologies such as photovoltaic cells).
In a previous post of this blog, I had already discussed one possible approach to plan B – climate change adaptation.  Levitt and Dubner discuss plan C: climate change engineering.
The idea is really quite simple: just as the presence of greenhouse gases in the atmosphere tends to increase average temperatures, the presence of sulfur dioxide tends to decrease them. This phenomenon is well documented and quite well understood. For instance, major volcanic eruptions can lead to a marked decrease in average temperatures for several months or even years.
Does this mean that simply putting huge quantities of sulfur dioxide into the atmosphere will be enough to stop climate change?
Actually, it’s not that simple. In case you’ve forgotten, or are simply too young: we have already been through this. Before we started imposing environmental regulations in the 1970s, we had been emitting huge quantities of sulfur dioxide (the big worry about climate change in the beginning of the 1970s was global cooling, not warming). The ensuing decrease in the emissions of sulfur dioxide is actually one of the really big success stories of environmental policy, leading to significant health benefits.
Again, one may be tempted to jump to conclusions and think that the choice is now between facing intolerable temperature increases and inhaling particles that directly affect our health and life expectancy.
However, this reasoning overlooks one potential way out that Levitt and Dubner discuss: instead of releasing sulfur dioxide into the atmosphere at ground level, we could develop tools to release it directly into the stratosphere. In their book, Levitt and Dubner report interviews with several scientists who argue that this should be technically possible at an economic cost that would be a fraction of the cost of climate change mitigation.
Would this work?
I am neither a physicist nor an engineer, so I do not feel qualified to comment on this specific point. However, if it would work, and if its cost would really be so low (that’s admittedly two big ifs), then moving the focus from climate change mitigation to climate engineering looks like a no-brainer.
Maybe surprisingly, this approach faces some stark opposition, and it is important to understand why.
One counter-argument is that climate engineering comes down to tinkering with the climate. I agree completely with Levitt and Dubner that this argument makes no sense: everything we do involves tinkering. There’s just tinkering that involves doing something and tinkering that involves not doing anything. Some moral philosophers may think there is a deep difference between the two, but from a policy point of view, this seems irrelevant to me.
One argument that merits some discussion is that the possibility of climate engineering would just be an excuse for a continuation of “business-as-usual”. Of course, unless one thinks that emitting greenhouse gases is a sin in itself, this argument is in the same category as the previous one: symbolism and not substance. Thus, the relevant question is: is it a sin to emit greenhouse gases if there is an antidote to the global warming effect?
The answer is: in some cases, yes. Well, not in the case of CO2, but there are a lot of less widely known gases that not only trap heat in the atmosphere but also hurt the environment in other ways. It has been known for some time that several ozone-depleting substances are also very potent greenhouse gases.
A forthcoming report by the United Nations Environment Programme now confirms that there are other villains out there: methane and black carbon. The negative environmental effects of black carbon (or soot), which is emitted massively by primitive stoves and old diesel engines, are already well known. However, recent research suggests that black carbon is also a potent greenhouse gas. Moreover, it has a relatively short atmospheric lifetime, implying that reducing the emissions of black carbon would not only improve air quality (notably in developing countries), but could also lead to significant climate benefits in the short term. The emissions of methane, in turn, lead to increased formation of tropospheric ozone, which also has well-documented adverse health effects (contrary to stratospheric ozone, which blocks ultraviolet rays).
Even though the science of black carbon is far from settled, this insight somehow turns the problem raised by climate engineering on its head. Because black carbon and tropospheric ozone are local pollutants, individual countries have a strong incentive to reduce emissions of soot and methane (well, at least if governments can be held accountable by their people), and this can lead to climate benefits even if we cannot agree on reducing CO2 emissions. I also think no one would dispute that the Convention on Long-Range Transboundary Air Pollution has been far more successful than the Framework Convention on Climate Change: this suggests that it is far easier to sign successful environmental treaties when the benefits occur in the short run and when there is little scientific uncertainty surrounding these benefits.
So, if you ask me, I think the first priority is now to work on greenhouse gases that bring strong co-benefits in terms of local air quality. And, in the meantime, we should work further on our understanding of climate engineering, as a possible quick fix in case we run out of other options.
And more people should read what Levitt and Dubner write on the subject.

Sunday 13 February 2011

Is mankind really unique in its propensity to exchange?

One of the central propositions in "The rational optimist" (see my review of December 2010) is that human progress results from our propensity to exchange, which, according to Matt Ridley, is unique to mankind.

Apparently, it is time to reconsider this assumption.

Recent laboratory work by Keith Chen has shown that capuchin monkeys not only understand monetary exchange, but also understand the basic laws of demand and supply. Moreover, they appear to act irrationally in exactly the same settings as humans do (to be more precise, they are loss-averse: they attach more weight to losses than to gains of the same amount). Finally, when they were allowed to barter not just with the experimenters but also amongst themselves, the first profession that was created was… indeed, prostitution!

An accessible discussion of these experiments is provided in Levitt and Dubner's Superfreakonomics, which I intend to review at more length in the coming days. Whoever said that economics wasn't fun?