In this, the 500th post on the Lawrence Economics Blog, we bring you a story from the NYT on the statistical value of life. Indeed, as anyone in an environmental economics or policy course knows, the “value” placed on saving a statistical life (VSL) is associated with reductions in risk levels that decrease the probability of being killed (i.e., from reducing the number of purple balls in your urn).
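For the uninitiated, the arithmetic behind a VSL is straightforward: divide what people will pay for a small risk reduction by the size of that reduction. Here is a minimal sketch with hypothetical numbers (the $91 and 1-in-100,000 figures are mine, chosen to land on the EPA's $9.1 million):

```python
def implied_vsl(wtp_per_person, risk_reduction):
    """VSL implied by willingness to pay for a small cut in mortality risk."""
    return wtp_per_person / risk_reduction

# Hypothetical: each person pays $91 to cut annual death risk by 1 in 100,000.
# Across 100,000 such people, that's one statistical life saved, at $9.1M.
vsl = implied_vsl(wtp_per_person=91, risk_reduction=1 / 100_000)
print(round(vsl))  # 9100000
```

No purple balls or urns required; the same division underlies the agency figures quoted below.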
This VSL is pivotal in determining the benefits of many non-economic regulations, and many federal agencies have increased the value used in benefit assessment in the past few years.
The Environmental Protection Agency set the value of a life at $9.1 million last year in proposing tighter restrictions on air pollution. The agency used numbers as low as $6.8 million during the George W. Bush administration.
The Food and Drug Administration declared that life was worth $7.9 million last year, up from $5 million in 2008, in proposing warning labels on cigarette packages featuring images of cancer victims.
The Transportation Department has used values of around $6 million to justify recent decisions to impose regulations that the Bush administration had rejected as too expensive, like requiring stronger roofs on cars.
That is the salient point of the article; the rest mostly gets down to talking about the prospects and problems of using VSLs in the first place. If you are reading this, you probably know already.
Earlier this week, President Obama penned an op-ed in the Wall Street Journal about his Administration’s plans for the regulatory state. The executive branch, as its title suggests, is in charge of executing and administering the laws of the land, and the President expresses his desire to balance the free-market innovation machine while protecting public health and safety:
[C]reating a 21st-century regulatory system is about more than which rules to add and which rules to subtract. As the executive order I am signing makes clear, we are seeking more affordable, less intrusive means to achieve the same ends—giving careful consideration to benefits and costs. This means writing rules with more input from experts, businesses and ordinary citizens. It means using disclosure as a tool to inform consumers of their choices, rather than restricting those choices. And it means making sure the government does more of its work online, just like companies are doing.
As my students learn in 240, 280, and 271, the executive branch, through the Office of Management and Budget, (potentially) plays a central role in shaping regulations as they make their way through the rulemaking process. Indeed, President Reagan issued the seminal executive order concerning benefit-cost analysis, and each President since has attempted to put his stamp on the process.
Of course, there is often a disconnect between what politicians say and what regulators actually do. Here are a couple of other takes from a pair of scholars who spend more than their fair share of time thinking about administrative regulation: Stuart Shapiro and Lynne Kiesling.
The deregulation of network industries in the 1970s is a puzzle for many political economists, as consumers generally benefited at the expense of entrenched, well-connected producers. How did that happen?
One widely acknowledged answer is that economist Alfred Kahn, head of the Civil Aeronautics Board, played an influential role. Professor Kahn died this past week, and Thomas Hazlett has a brilliant piece in the Financial Times on Kahn’s influential role.
Those interested in a more formal look at the benefits of deregulation might check out Clifford Winston’s 1993 JEL piece that scopes out the movement nicely.
And Kahn’s Ph.D. advisor was none other than Joseph Schumpeter. How do you like that?
“When you limit participation in the governance of an entity to a few like-minded institutions or individuals who have an interest in keeping competitors out, you have the potential for bad things to happen. It’s antitrust 101,” said Robert E. Litan, who helped oversee the Justice Department’s Nasdaq investigation as deputy assistant attorney general and is now a fellow at the Kauffman Foundation. “The history of derivatives trading is it has grown up as a very concentrated industry, and old habits are hard to break.”
Sometimes known as “capture,” of course. When I learned this back in the day, my professor emphasized that capture does not mean that firms necessarily want regulation, but given that there are regulations, firms will bend them to their own advantage — especially politically connected ones.
It is a good couple of weeks for those interested in the economics of (and innovation in) illicit drug markets. First, HBO started up its mega super miniseries, Boardwalk Empire, about how an Atlantic City official built an organized crime empire following the enactment of the 18th Amendment prohibiting the production and sale of alcohol.
Often referred to as “the grand policy experiment” (also here), the SO2 market was considered a success and thought of as a model for a potential global system to reduce greenhouse gases. As with so many cases in economics, a credible commitment matters. The Journal sums it up nicely:
The market’s collapse shows how vulnerable market-based approaches to reducing air pollution are to government actions. That could scare off investors, who won’t commit to a market where the rules can change at any minute.
One of the great benefits of using market instruments to address environmental problems is that they can substantially lower the costs. The law of demand says that as price goes up, people buy less. As a result of the collapse of this market, we will likely pay more to get less in terms of environmental quality. This may well undermine efforts to implement market solutions elsewhere. If investors are convinced the regulatory environment is unstable or uncertain, they are unlikely to make large capital investments, and are more likely to take stopgap measures.
“The safer they make the cars, the more risks the driver is willing to take. It’s called the Peltzman effect.” — Some CSI Episode
The basic idea is so simple that it’s hardly controversial. If you reduce the cost of doing something, you would expect more of it. The classic Sam Peltzman paper has to do with making cars safer, which reduces the costs (in terms of potential injury or fatality) and hence increases “driver intensity,” as Peltzman puts it. The startling result is that the behavioral changes completely offset the technological improvements, though this does not have to be so.
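A toy version of the offset story (my numbers, not Peltzman’s): suppose expected accident costs are base risk times driving intensity. A mandate cuts the base risk, drivers respond by driving harder, and whether society gains depends on how big the behavioral response is:

```python
def accident_risk(base_risk, intensity):
    # Expected accident cost rises with how aggressively people drive.
    return base_risk * intensity

before = accident_risk(base_risk=1.0, intensity=1.0)

# A safety mandate halves the risk per unit of driving intensity...
# ...but drivers respond by driving harder (faster, closer, in worse weather).
partial_offset = accident_risk(base_risk=0.5, intensity=1.5)  # some net gain
full_offset = accident_risk(base_risk=0.5, intensity=2.0)     # gain wiped out

print(before, partial_offset, full_offset)  # 1.0 0.75 1.0
```

Peltzman’s startling empirical claim was the full-offset case; as the post notes, it does not have to come out that way.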
The Peltzman effect has crept into my RSS feed twice in the past week. From this morning’s Marginal Revolution:
The NHTSA had volunteers drive a test track in cars with automatic lane departure correction, and then interviewed the drivers for their impressions. Although the report does not describe the undoubted look of horror on the examiner’s face while interviewing one female, 20-something subject, it does relay the gist of her comments.
After she praised the ability of the car to self-correct when she drifted from her lane, she noted that she would love to have this feature in her own car. Then, after a night of drinking in the city, she would not have to sleep at a friend’s house before returning to her rural home.
Well, that certainly makes me feel safer.
One of the classic jokes associated with the Peltzman effect is that NHTSA should put a spear extending out of the steering column, making the driver exercise extra caution so as not to be impaled. In that vein, the good folks at Organizations & Markets alerted me to this cartoon:
Peltzman is one of the most prominent empirical economists ever. Certainly, having an “effect” named after you is a pretty big deal. Some of the more astute of you also recall Peltzman from the Stigler-Peltzman capture theory. Love him or hate him, he is an interesting character. I recommend this interview at EconTalk.
Consumer and financial lobbyists alike are marshalling the troops on K Street to impact the decisions regulators make in setting new rules after Congress finished writing the Dodd-Frank Act on Friday. The 2,000-page financial overhaul bill is expected to face a final vote this week, but despite its length, it leaves many specific directives to regulators. Regulators are left with the freedom to decide what kinds of trading are included in the prohibition against banks’ investment of their own money and “how much money banks have to set aside against unexpected losses.”
Now, the first question is, why would Congress delegate so much authority? Is it in deference to regulators’ superior knowledge? Or do you think it has something to do with not taking responsibility? Or do you have another explanation?
With the financial meltdown and the increasingly disturbing oil spill, the efficacy of federal regulation is very much in question. The New Yorker‘s James Surowiecki sees it as a “good government gone bad” problem.
These failures weren’t accidents. They were the all too predictable result of the deregulationary fervor that has gripped Washington in recent years, pushing the message that most regulation is unnecessary at best and downright harmful at worst. The result is that agencies have often been led by people skeptical of their own duties. This gave us the worst of both worlds: too little supervision encouraged corporate recklessness, while the existence of these agencies encouraged public complacency.
I’m pretty sure he uses the word “deregulation” incorrectly here, at least in a conventional sense. His argument is more along the lines that enforcement of (some) regulations has become more relaxed. Of course, economists of the public choice stripe would probably point to the coziness between regulators and the regulated as a predictable result of the political process.
We kick off the final week of classes with a holiday, perhaps an apt metaphor for where many of you have been mentally for the past week. The holiday preempts the usual slot for the Economics TeaBA, paving the way for an afternoon barbecue for the missus and me.
We get back to serious business in my courses Wednesday. In environmental economics (Econ 280), small groups will be reviewing the cost-benefit analysis of the Minerals and Management Services offshore leasing program. Then in the afternoon, the political economy of regulation course (Econ 240) will be going through some of the administrative regulations governing offshore drilling, truly a look at how the sausage is made. If you are interested, stop on in to see whether they’ve learned anything.
So, I’m headed over to the parade. Hope to see you there.
Brought to us by former OIRA head Susan Dudley, the brief combs the U.S. budget for all the summary statistics on agency appropriations and staffing. (For those of you who can’t see the axes here, the x-axis shows years, beginning in 1960 and ticked off in five-year increments; the y-axis shows billions of 2005 dollars in $10 billion increments.)
A page turner, I know. The brief reveals that outlays and staffing are at their all-time highs, which does not surprise me. I do, however, marvel at the growth of Homeland Security. In real terms (2005$), the Homeland Security budget has gone from $8.8 billion in 2000 to more than $20 billion today, accounting for more than 40% of U.S. regulatory spending and more than half the personnel as well. Mind boggling.
As I hope will become a tradition here, feel free to play the “my favorite part of the regulatory budget report” game. The winner will receive at least one sticker.
The BP catastrophe has certainly brought more than its share of discussion on the issue. Paul Krugman weighs in on the side that the continuing spill is Exhibit A that liability has failed and the private sector needs a stern regulatory hand to guide it. Tyler Cowen frames the argument and takes Krugman to task on one point:
There is in fact an agency regulating off-shore drilling and in the case under question it totally failed.
Of course, not all regulation is as inept as the Minerals Management Service (MMS) seems to be in this case. One problem is that MMS is charged both with regulating environmental and safety concerns AND is responsible for approving leases to the private sector.
Minerals Management Service officials, who can receive cash bonuses in the thousands of dollars based in large part on meeting federal deadlines for leasing offshore oil and gas exploration, frequently changed documents and bypassed legal requirements aimed at protecting the marine environment, the documents show.
Emphasis is mine, though the point sort of jumps out at you, doesn’t it? But, it’s not like the appearance of financial impropriety is a new thing with the MMS. On the heels of the spill, in fact, President Obama recommended bifurcating the agency to mitigate the clear incentive compatibility problem.
Pithy, yes. He also sends along this piece on the flow of corporate money supporting the bill. For those of you interested, the capture theory posits that firms often “capture” regulators, and consequently legislation &/or regulation is used as a means to redistribute resources from one group to another. I’d probably go with the Becker model on this one, but he gets an A for brevity and wit.
Also on the corporate interest front comes this great article from alert reader “Mr. O.” The “beverage lobby,” folks with a lot at stake in the soda (a.k.a. “pop”) tax, have dispensed with the niceties and are offering up cold hard cash to quash it:
Yet with the nation’s obesity burden and states and municipalities parched for new cash sources in this recession, the beverage lobby isn’t underestimating the tenacity of those who would impose taxes. So they’ve unveiled a new tact in Philadelphia: abandon the tax and the beverage industry will donate $10 million over two years to the Pew Charitable Trusts to fund health and wellness programs in this city, if Pew would accept the funds, reported BNET.com.
I kid you negative, Mr. O was laughing out loud (LOLZing, as the kids say) at the audacity of this proposal.
So, for any of you other readers out there who identify something of interest, please bring it to our attention. If it clears the bar, you might see your initials right here on the blog.
The American Power Act is the latest climate bill making its way through the Senate. For both of my classes this term we have talked about the tradeoffs between policies that economists like and policies that might have a chance of passing. Ted Gayer at Brookings definitely puts the APA in the latter camp:
The bill auctions only 24.8 percent of the allowances in the early years (the share devoted to auctions is highlighted in blue), the remainder of the allowances being given away to such things as electricity local distribution companies, trade-exposed industries, refiners, commercial developers of carbon capture and storage, and a National Industrial Innovation Institute. The auctioning ramps up to 79.5 percent of allowances in 2030, and then full auctioning only occurs in 2035.
By failing to use a full allowance auction to offset economically harmful taxes and deficits, the Senate bill sacrifices economic gain for political support from interest groups.
Robert Stavins, on the other hand, seems to look up at the sky and see a different color. Stavins is perhaps the most prominent environmental economist in the field, and he seems pretty upbeat about the whole thing:
Over the entire period from 2012 to 2050, 82.6% of the allowance value goes to consumers and public purposes, and 17.6% to private industry. Rounding error brings the total to 100.2%, so to be conservative, I’ll call this an 82%/18% split.
I’m going to have to side with Gayer on this one. It may well be the case that on average the “value” goes to some “public purposes,” but it sure doesn’t seem that way looking at the early splits (Here’s the blown up version for those of you preparing to squint).
For the first 13 years of the program, more than half of the allowances are going to industry, it appears. Not until 2025 do we see the industry percentage phased out (rapidly) and the auction percentage jump (also rapidly). So, to put it another way, today’s Congress is committing the 2025 Congress to implement the tough changes that will accompany climate change. I am going to put the odds on this commitment being credible as “improbable.”
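The Gayer–Stavins split is largely about averaging over time versus looking at the front of the schedule. A stylized auction schedule (my numbers, loosely patterned on the bill’s structure, not taken from it) shows how a back-loaded ramp produces a respectable 2012–2050 average even though industry keeps most of the allowances for the first decade-plus:

```python
# Hypothetical schedule: 25% auctioned through 2024, ramping to full
# auction by the mid-2030s -- loosely patterned on the APA's structure.
years = range(2012, 2051)

def auction_share(year):
    if year < 2025:
        return 0.25
    if year < 2035:
        return 0.25 + 0.075 * (year - 2024)  # ramp of ~7.5 points per year
    return 1.0

shares = [auction_share(y) for y in years]
early_avg = sum(shares[:13]) / 13          # 2012-2024: industry-heavy years
overall_avg = sum(shares) / len(shares)    # the whole 2012-2050 window

print(round(early_avg, 2), round(overall_avg, 2))  # 0.25 0.66
```

An average over four decades can look generous to “consumers and public purposes” while the politically binding early years look nothing like it.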
Here we have yet another example of why law professors should simply not be allowed to practice law and economics or moral philosophy without a license–and of how Cass Sunstein has never bothered to do the work necessary to acquire a license to practice law and economics.
Both pieces are interesting reads alongside our work in Econ 280 this week on The Stern Review and William Nordhaus’s critique of it.
Wait, what’s that? You don’t know what OIRA is? Well, it’s the Office of Information and Regulatory Affairs, housed in the White House’s Office of Management and Budget. These are the folks who review agency regulations twice (!) during the federal rulemaking process. The OIRA is charged, among other things, with helping agencies work through benefit-cost analysis — the source of Professor DeLong’s ire in this case.
So if administrative regulation piques your interest, this is your lucky day.
… is certainly worth a barrel of cure. Instead of having these guys with big yellow boots (I thought only 4-year-old boys ran around in public in galoshes out of season), perhaps it would pay to have more egghead types crunching data on safety risk. That was the message I gave in both my classes this week, as we sat down to read Shultz and Fischbeck’s “Workplace Accident and Compliance Monitoring: The Case of Offshore Platform Inspections,” from RFF’s Improving Regulation. In that paper, they use factor analysis and a logistic regression model to identify a set of factors that does a pretty good job of flagging the high-risk platforms. Pretty good compared to what? Well, certainly much better than random chance, and also better than the Minerals Management Service inspectors who were extensively interviewed for the project.
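For a flavor of what risk-based inspection targeting looks like, here is a stdlib-only sketch: a logistic regression fit by gradient descent on made-up platform features, with platforms then ranked by predicted risk. This illustrates the technique only; it is not Shultz and Fischbeck’s actual model or data.

```python
import math

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Fit a logistic regression by stochastic gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = 1 / (1 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))
            err = p - yi  # gradient of log-loss w.r.t. the logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def risk_score(w, b, x):
    return 1 / (1 + math.exp(-(sum(wj * xj for wj, xj in zip(w, x)) + b)))

# Made-up features: [platform age in decades, prior violations]; 1 = accident.
X = [[0.5, 0], [1.0, 1], [2.5, 4], [3.0, 5], [0.8, 0], [2.8, 3]]
y = [0, 0, 1, 1, 0, 1]
w, b = train_logistic(X, y)

# Rank platforms by predicted risk and send inspectors to the riskiest first.
ranked = sorted(range(len(X)), key=lambda i: -risk_score(w, b, X[i]))
print(ranked[:3])
```

The payoff is in the last two lines: a scarce inspection budget gets pointed at the platforms the model says are most likely to have trouble, rather than spread by rote.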
Neither Shultz nor Fischbeck has been in the press much, but yesterday we finally did hear from one of them here:
Data problems date back at least a decade. According to John Shultz, who as a graduate student in the late 1990s studied MMS’ inspection program in depth for his dissertation, the agency’s data infrastructure was severely limited. “The thing I regret most is that, to my knowledge, MMS has not fixed the data management problem they have,” said Shultz, who now works in the Department of Energy’s nuclear program. “If you have the data you need, the analysis becomes fairly straightforward. Without the data, you’re simply stuck with conjectures.”
Anyone interested in taking a look at the Shultz and Fischbeck paper is welcome to contact me, either for the paper or for a PowerPoint of their work. Anyone interested in doing research or an independent study related to transportation fuels regulation should also contact me.
No, this isn’t a post about the goodies at this coming Monday’s Econ TeaBA (where, rumor has it, Professor Galambos will explain the competitive market model to Professor Corry in 15 minutes. Whether he can make good on this promise remains to be seen. In either case, please, no wagering at the TeaBA).
This is a post about who will benefit and who will lose from the climate legislation. We have been talking about the distributional issues in Economics 280 for a couple of weeks, that there are many ways to get the same “quantity,” but who wins and who loses can vary radically. The projected shares are a big key to determining political feasibility — businesses like free permits much more than auctioned permits, and certainly much more than (egads) paying a tax. On this front, we will be reading a paper called “Carbon Geography: The Political Economy of Congressional Support for Legislation Intended to Mitigate Greenhouse Gas Production” in our political economy course next week. The basic idea here is that representatives from states with high per-capita carbon emissions are less likely to support costly carbon restrictions. (Actually, I haven’t read the paper yet, but I would have bet a dollar that’s what it says. That is, I would bet a dollar if I hadn’t discouraged wagering in the previous paragraph).
As for the distribution front, Ted Gayer from Brookings has some preliminary estimates on who is going to capture the value of freely-allocated and auctioned permits over the first 20 years of the program. The program will start with about 75% of the permits being handed out and more than half of the value of those permits accruing to electric utilities. Less than 10% of the revenue will flow to deficit reduction or to offset other taxes. Between 2026 and 2027, however, the percentage of auctioned permits jumps from about 20% to a full 100%. And, if you believe that is a credible commitment, I would encourage you to sleep it off and rethink your position tomorrow. Consumer relief — that is, offsets for the higher prices that reduce consumer benefits — stays steady at about 10% throughout. Believe him or not, Gayer’s short brief is worth reading precisely because he hits the heart of the environmental policy debate.
Well, that’s not quite accurate because over-the-counter genetic tests are already here. That is, if you consider that in the time it takes for me to type this post, I could, with an internet connection and a credit card, procure any number of genetic tests from www.23andme.com or a bunch of other companies.
Just don’t try to sell the kits at Walgreens.
Now obviously we’re not talking about your garden-variety paternity tests, which are available on pretty much any street corner these days for about thirty bucks, we’re talking the big test, the one that will tell you your predisposition for Alzheimer’s, obesity, or a physical attraction to Larry King.
Pathway Genomics announced Tuesday that its saliva swab would be on Walgreen’s shelves later this month, offering millions of Americans the chance to peek into their genetic code for signs of inheritable diseases like Alzheimer’s.
But within 24 hours the company’s plan was met with stiff response from FDA regulators who said the products may run afoul of federal laws governing medical tests. On Wednesday, the FDA posted a letter to Pathways online, indicating the San Diego-based company never submitted its product for federal review, a requirement for medical devices.
I put my face in my hands at least three times while reading this article. We have a very curious regulatory state indeed.
A few weeks ago, Povolny Lecturer and funnyman Yoram Bauman stood up for the “cap and tax” proposal. He didn’t literally propose a tax, but emphasized that the higher price associated with the cap was the incentive to reduce energy consumption.
On the other side of the pond, there actually is a cap & trade system in place, and the price really is all over the place. Carbon prices have ranged from €8 to €30, and the volatility can stymie long-term investments. In other words, there is likely to be an inverse relationship between carbon-price uncertainty and the payoff to greener (or at least lower-carbon) energy sources. If investors don’t believe that carbon prices will stay high, then green investments simply won’t be as attractive.
Enter the British Conservative Party, which has proposed a “Cap and Tax” of its own. The basic idea is that because of the tendency for carbon prices to bottom out, a carbon tax would kick in if permit prices went below a certain level. This would provide some stability to the market, as well as a potential revenue source.
That’s pretty clever.
Now, getting a government to make a credible commitment to a long-term tax is another story.
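Mechanically, the proposal amounts to a price floor: a top-up tax that kicks in whenever the permit price sags below some threshold. A minimal sketch, with made-up permit prices and a made-up €14 floor:

```python
def effective_carbon_price(permit_price, floor):
    """Under the 'cap and tax' idea, a tax tops up the permit price
    whenever the market price falls below the floor."""
    top_up_tax = max(0, floor - permit_price)
    return permit_price + top_up_tax

# Volatile permit prices, as on the EU market...
prices = [8, 15, 30, 12, 6]
# ...but investors can count on at least the floor (say, 14 euros/tonne).
stabilized = [effective_carbon_price(p, floor=14) for p in prices]
print(stabilized)  # [14, 15, 30, 14, 14]
```

The cap still binds on quantities; the tax simply guarantees that the price signal to green investors never falls through the floor.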
That’s the answer. The question, from a nice piece at Slate.com, is: How do we deal with low-probability, high-consequence events? And the source of the quotation in this case is John Harrald from George Washington University.
The article is a pretty nice profile of what I would call risk regulation. I am pretty certain risk regulation is somehow different than regulating externalities, but I’m not sure exactly how and I’m not certain that there’s always a bright line. So, I’m asking my political economy class to figure this out for me.
One reason, of course, is that damages are determined in terms of expected values. Regulating low-probability events with highly uncertain outcomes and benefits is problematic indeed. Homeland security measures are notoriously difficult to even frame, let alone assign a “net benefit” to. How many incidents have our security regulations discouraged or prevented? What bad things would have happened? What benefit would we have assigned to them? See, for example, this paper by Farrow and Shapiro on the analytical tractability of this problem.
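The expected-value problem is easy to see with two hypothetical risks: a routine one and a one-in-ten-million catastrophe that carry identical expected damages, yet could hardly be more different to regulate:

```python
def expected_damages(prob, damage):
    # The standard benefit-cost treatment: damages weighted by probability.
    return prob * damage

# Frequent-but-small versus rare-but-catastrophic (hypothetical numbers):
routine = expected_damages(prob=0.1, damage=1_000_000)
catastrophe = expected_damages(prob=1e-7, damage=1_000_000_000_000)

print(routine, catastrophe)  # both come out to roughly $100,000
```

A benefit-cost ledger treats the two identically, which is precisely why low-probability, high-consequence risks are so awkward for the standard framework.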
So that gets us back to the original question, which is, should we think about the regulatory framework for the current oil spill fiasco in terms of regulating some sort of risk or internalizing an externality? And, does it make a difference which approach we take in terms of the types of regulations we would want?
All that said, I’m not sure we always wait until bad things happen and then overreact. In many cases, I would think there is excessive ex ante precaution that mitigates the intrepid adoption and diffusion of new technologies.
The good news is that these are exactly the sort of issues we grapple with in the Political Economy of Regulation course. The bad news is, I’m not sure how far we get with these problems.