Thursday, September 11, 2014

What Was the Federal Reserve Thinking in Summer 2008?

The Great Recession didn't officially start until December 2007, but the warning signs came months earlier. Stephen G. Cecchetti laid out the chronology in the Winter 2009 issue of the Journal of Economic Perspectives, in "Crisis and Responses: The Federal Reserve in the Early Stages of the Financial Crisis":

A complete chronology of the recent financial crisis might start in February 2007, when several large subprime mortgage lenders started to report losses. It might then describe how spreads between risky and risk-free bonds—“credit spreads”—began widening in July 2007. But the definitive trigger came on August 9, 2007, when the large French bank BNP Paribas temporarily halted redemptions from three of its funds because it could not reliably value the assets backed by U.S. subprime mortgage debt held in those funds. When one major institution took such a step, financial firms worldwide were encouraged to question the value of a variety of collateral they had been accepting in their lending operations—and to worry about their own finances. The result was a sudden hoarding of cash and cessation of interbank lending, which in turn led to severe liquidity constraints on many financial institutions.
By August and September 2007, the Fed was already cutting interest rates. Starting in December 2007, the Fed created an alphabet soup of temporary facilities for making emergency loans as needed: the Term Auction Facility (TAF), Term Securities Lending Facility (TSLF), Primary Dealer Credit Facility (PDCF), Commercial Paper Funding Facility (CPFF), and Term Asset-Backed Securities Loan Facility (TALF). The unemployment rate was climbing, from 5.0% in December 2007 to 6.1% by August 2008.

All of which raises an obvious question: How or why was the Fed so surprised in September 2008, when the US financial system nearly collapsed? This was the month when Lehman Brothers famously went broke. But in the same month, Fannie Mae and Freddie Mac were placed into conservatorship; Bank of America bought out Merrill Lynch; the Fed authorized lending up to $85 billion to bail out the American International Group (AIG); the value of shares in the Reserve Primary Money Fund fell below $1, leading to a $50 billion program to guarantee investments in money market mutual funds; Citigroup agreed to buy the otherwise bankrupt Wachovia (which Wells Fargo would ultimately acquire); and the Troubled Asset Relief Program (TARP) went to Congress, where it would be approved in early October.
Stephen Golub, Ayse Kaya, and Michael Reay offer some thoughts about "What were they thinking? The Federal Reserve in the run-up to the 2008 financial crisis" in a short piece written for VoxEU. It is a condensation of a longer article by the same title, forthcoming next year in the Review of International Political Economy but already available online at the journal's website for those who have a personal or library subscription.

The authors discuss in some detail what was being said at the meetings of the Federal Open Market Committee (FOMC), since the minutes of those meetings are now publicly available. In the discussion, they offer some simple counts of how many times certain terms came up. For example, here's a figure showing how often the terms "inflation" and "growth" came up at various FOMC meetings. Notice that in summer 2007, inflation is coming up quite a lot; indeed, there is some talk at several of the meetings that the Fed might need to raise interest rates soon to head off a surge of inflation--which of course turned out to be a gross misreading of where the economy was headed.
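The term-count exercise the authors describe can be sketched in a few lines of Python. The meeting text below is a stand-in, not the actual FOMC minutes, and the function name is my own:

```python
import re
from collections import Counter

def term_counts(text, terms):
    """Count case-insensitive whole-word occurrences of each term."""
    words = Counter(re.findall(r"[a-z]+", text.lower()))
    return {t: words[t] for t in terms}

# Stand-in snippets; the real exercise would load each meeting's minutes.
meetings = {
    "Aug 2007": "Members discussed inflation risks; inflation expectations stayed elevated.",
    "Sep 2008": "Growth had slowed, and growth projections were revised down.",
}

for date, text in meetings.items():
    print(date, term_counts(text, ["inflation", "growth", "subprime"]))
```

Plotting those counts meeting by meeting, as the authors do, turns a qualitative impression of the discussion into a rough time series.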

Here's a figure showing how often and when the term "subprime" comes up. Notice a surge of mentions in 2007, as the problems in subprime markets first surfaced, but by summer 2008 the term was rarely coming up in these meetings. 

Or as another example, consider CDO and CDS, which stand for "collateralized debt obligation," a kind of subprime mortgage-backed security that turned out to be especially risky, and "credit default swap," a way of trying to insure against the risk of the CDOs. Again, talk of these in the FOMC meetings spiked in late 2007 and the very start of 2008, but had died down considerably by summer 2008.

The Fed was clearly aware of many of the issues about the housing price bubble in its deliberations--there are plenty of individual examples of the subject coming up in meetings and speeches. But in summer 2008, the Fed saw little need to focus on these issues or to take action. Of the many reasons that can be put forward for this seeming neglect of a looming crisis, Golub, Kaya, and Reay offer two that seem plausible to them. 

First, the Fed policymaking was characterized by a dominant paradigm, which we call ‘post hoc interventionism’. Post hoc interventionism held that bubbles were difficult to spot correctly, and that if a bubble developed, it could effectively be controlled after it had burst. Further, preventative pricking of bubbles could lead to an unnecessary economic contraction. Thus, monetary policy, instead of aiming at bubbles, should focus on flexible inflation targeting. Post hoc interventionism explains in part the Fed’s de-emphasis on financial stability in favor of inflation targeting. Second, we argue that the Fed’s institutional structure, conventions, and routines were crucial in maintaining post hoc interventionism as well as in undermining the impact of contrary events and dissenting opinions, as suggested by the literature on institutional pathologies in sociology and political science ...
I largely agree with their argument, but I would add that I think the discussion at the Fed was influenced by the experience of the dot-com boom and crash that preceded the previous recession. There had been calls for years through the mid and late 1990s for the Fed to raise interest rates to limit the "irrational exuberance" of the dot-com boom, but the Fed (mostly) just let the boom continue, until it brought on the recession in 2001. That recession had been only six months long and not too deep. Thus, the thinking in summer 2008 was to expect a shallow recession, and to avoid bringing on a deeper one. Of course, this thinking neglected what later seemed an obvious point: the 2001 dot-com collapse was about stock market values, and while that pinched the economy, the 2007-2009 recession was about losses in the value of debt owed to banks and other financial institutions, which posed a much more fundamental economic risk.

Golub, Kaya, and Reay also emphasize that Fed meetings tended to follow a certain format, in which everyone around the table made a short presentation, typically just following up on the latest iterations of the information they had presented earlier. The meetings aimed for unanimity. The format of the meetings and the institution wasn't set up to encourage challenges from critical ideas. Indeed, certain groups within the Fed, like the Division of Banking Supervision and Regulation, were typically not represented at these meetings, just because they weren't part of the usual flow of information presented. The lesson here for all organizations is that if you keep looking in the same place all the time, you will inevitably miss the dangers that arise from any other direction.

Wednesday, September 10, 2014

Foreign-Controlled Domestic Corporations in the United States

U.S. companies turning into foreign-controlled U.S. companies are in the news: for example, Burger King and the Canadian coffee-and-doughnuts company Tim Hortons; Medtronic and the Irish firm Covidien; or, back in 2009, as part of the U.S. auto industry bailout, the auto parts maker Delphi emerged from bankruptcy, with assistance from the U.S. government, as a British-based firm.

I won't try to sort through all the tax issues involved here, but Donald J. Marples and Jane G. Gravelle offer a useful starting point in "Corporate Expatriation, Inversions, and Mergers: Tax Issues," published on May 27, 2014 by the Congressional Research Service. But here's a sketch of the main issues.

A foreign-controlled domestic company in the U.S. still needs to pay U.S. corporate taxes on its U.S. operations at the U.S.-imposed rate, of course. But two other issues remain relevant. One is that the U.S. is the only major economy in the world that seeks to tax its companies on their global profits--not just their national profits--and to do so at the relatively high U.S. corporate tax rate (although this U.S. corporate tax is postponed until the funds are sent back to the U.S.). When a U.S. company turns into a foreign-controlled firm, it is only taxed on its U.S. operations, not on its global profits earned in other countries with lower corporate tax rates. The second issue is that companies often have ways--in how they set internal accounting prices for the provision of certain goods and services, and in how they set up their financing--of making profits appear in one country rather than another.

How prevalent are foreign-controlled domestic corporations? James R. Hobbs provides some basic statistics in "Foreign-Controlled Domestic Corporations, 2011," in the Summer 2014 Statistics of Income Bulletin, published by the U.S. Internal Revenue Service. While these statistics give a sense of the issue, it's worth noting that Hobbs is collecting data on U.S. domestic companies that have more than 50% foreign ownership. There are also foreign companies that have a U.S. subsidiary--which isn't quite the same thing--and companies that have their legal headquarters in another country even though a majority of the business sales and the shareholders are in the U.S.

Overall, Hobbs documents that for the 2011 tax year, there were 76,793 foreign-controlled domestic corporations that "collectively reported $4.6 trillion of receipts and $11.7 trillion of assets" to the IRS. "While Federal income tax returns for FCDCs accounted for just 1.3 percent of all United States (U.S.) corporate returns, they made up 16.2 percent of total receipts and 14.4 percent of total assets." Here's the gradual rise over recent decades for the share of foreign-controlled domestic corporations, relative to all U.S. corporations, in receipts, assets, and as a share of tax returns.

Although there is some increase in recent years, a lot of the increase happened before the 21st century. This table is a slightly cut-down version of one appearing in the Hobbs paper (I cut some of the years to make the longer-term trends easier to discern). For example, total receipts of foreign-controlled domestic corporations were 2.06% of all U.S. corporations in 1971, 9.29% by 1990, and 16.19% in 2011. Assets show a similar pattern. Total assets of foreign-controlled domestic corporations were 1.27% of all U.S. corporations in 1971, 9.08% by 1990, and 14.43% in 2011.


In which countries do the foreign owners of these domestic U.S. firms live? Here's a figure. You'll notice that essentially none of the foreign owners are in true tax havens like the Cayman Islands or Bermuda. A law passed back in 2004 denied (or greatly restricted) any tax benefits from being based in a country where almost no actual sales or production happened. Thus, the current wave of foreign ownership is about being legally based in places like the UK, Ireland, Canada, and so on.





Foreign-controlled domestic corporations can have more than half the sales of all U.S. corporations in some industries. For example, such foreign-controlled firms account for 78.3% of all receipts of U.S. corporations in the "Breweries" industry --for example, Anheuser-Busch is owned by the Belgian-headquartered firm InBev. Such firms also account for 64.1% of all U.S. corporate receipts in the "Audio and video equipment manufacturing and reproducing magnetic and optical media" industry; 62.7% of receipts in the "Sound recording industries"; 59.6% of receipts in the "Engine, turbine, and power transmission equipment (manufacturing)" industry; 59.5% of receipts in the "Security brokerage" industry; 54.1% of all receipts in the "Rubber products (manufacturing)" industry; 53.5% of all receipts in the "Electrical and electronic goods (wholesale trade)" industry; 51.7% of receipts in the "Cement, concrete, lime and gypsum products (manufacturing)" industry; and 51.4% of the receipts in the "Motor vehicle and motor vehicle parts and supplies (wholesale trade)" industry.

The best way to tax global corporations is a sticky problem, and I'll come back to it on this blog from time to time. In a globalizing world economy, the issues are only going to become more salient with time. But here, I'll just note that if the U.S. followed the pattern of just about every other high-income country in the world and had a corporate tax that was territorial--that is, aimed only at corporate income earned in the U.S.--the reasons for U.S. corporations to put their headquarters in another country would be much diminished.

Tuesday, September 9, 2014

Stuck on Economics

I have lamented in the past that when your brain is stuck on economics, it can be hard to escape from your obsession. For example, I explain here what it's like to be driving around northern Montana wondering why the local population was obsessed with GNP, when everyone knows that the economy is now more commonly measured by GDP. Or here is how I ended up "Endorsing Association 3E: Ethics, Excellence, Economics"--and it tastes excellent on nibbles of sourdough bread. Or here is how the Economic Geyser spouts even in the middle of Yellowstone National Park.

Now McDonald's is messing with my ability to turn off the economics portion of my brain. A few years back they prominently advertised the CBO, which we all know stands for Congressional Budget Office, thus causing me to twitch every time I passed a billboard.


Of course, now it's the eco-nom-nom-nomics advertisements. Most of what I watch on television is live sports, and I'm just trying to sit and relax and watch my baseball or football game in peace, when suddenly my brain is jolted into awareness of economics. Please make it stop.




Of course, my children think the ads are hilarious, partly because they make Dad twitch. The children are also fans of the "lolcats" books, which feature cats with funny but ungrammatical captions (that badly need the work of an economics journal editor to fix them all right now. Sorry, lost my train of thought there for a moment.) Oh yes, the lolcats also say "nom nom nom" from time to time. So now the lolcats trigger thoughts of economics in my mind, too. Thanks a lot, McDonald's. I need another month of summer vacation.



Monday, September 8, 2014

19th Century Fencing and Information Technology

It's no surprise that US investment is disproportionately focused on information technology. The broad category of information processing technology and equipment was 8% of all private nonresidential US investment in 1950, but 30% of all investment by 2012. This raises the question: Is there a previous time in U.S. history when investment was so heavily focused in a single category?

David Autor offers a possible answer: investment in fences in the late 19th century U.S. economy. The answer comes in a side comment in Autor's paper "Polanyi's Paradox and the Shape of Employment Growth," presented in August at the Jackson Hole conference sponsored by the Kansas City Federal Reserve. The paper is well worth reading for what it has to say about the links from automation to jobs and wages. Here, I'll offer some thoughts of my own about fencing and information technology. (Full disclosure: Autor is the Editor of the Journal of Economic Perspectives, and thus my boss.)

Richard Hornbeck published "Barbed Wire: Property Rights and Agricultural Development", in a 2010 issue of Quarterly Journal of Economics (vol. 125: 2, pp. 767-810). He argues for the importance of fencing in understanding the development of the American West. Hornbeck writes (citations and footnotes omitted):

In 1872, fencing capital stock in the United States was roughly equal to the value of all livestock, the national debt, or the railroads; annual fencing repair costs were greater than combined annual tax receipts at all levels of government ... Fencing became increasingly costly as settlement moved into areas with little woodland. High transportation costs made it impractical to supply low-woodland areas with enough timber for fencing. Although wood scarcity encouraged experimentation, hedge fences were costly to control and smooth iron fences could be broken by animals and were prone to rust. Writers in agricultural journals argued that the major barrier to settlement was the lack of timber for fencing: the Union Agriculturist and Western Prairie Farmer in 1841, the Prairie Farmer in 1848, and the Iowa Homestead in 1863 ... Farmers mainly adjusted to fencing material shortages by settling in areas with nearby timber plots.
Then in 1874, Joseph Glidden patented "the most practical and ultimately successful design for barbed wire." The fencing business took off. Hornbeck quotes a story from a 1931 history: “Glidden himself could hardly realize the magnitude of his business. One day he received an order for a hundred tons; ‘he was dumbfounded and telegraphed to the purchaser asking if his order should not read one hundred pounds.’”

Remember that fencing was already of central importance to the U.S. capital stock in 1872. Hornbeck presents estimates of how the total stock of fencing expanded over the decades. The pent-up demand was enormous, and cheaper steel was becoming widely available after the 1870s. From 1880 to 1900, for example, the total amount of fencing in Prairie states went from 80 million rods (where a rod equals 16.5 feet or about 5 meters) to 607 million rods; in the Southwest region, the rise was from 162 million rods in 1880 to 710 million rods by 1900. In the South Central states, the gains were comparatively smaller, only about a doubling from 344 million rods in 1880 to 685 million rods in 1900. By comparing across regions with and without fencing, as the fencing arrived, Hornbeck argues:
"Barbed wire may affect cattle production and county specialization through multiple channels, but these results suggest that barbed wire’s effects are not simply the direct technological benefits that would be expected for an isolated farm. On the contrary, it appears that barbed wire affected agricultural development largely by reducing the threat of encroachment by others’ cattle."
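For a sense of scale, the rod totals Hornbeck reports convert readily to more familiar units (a rod is 16.5 feet, so 320 rods make a mile); a quick back-of-the-envelope in Python:

```python
FEET_PER_ROD = 16.5
RODS_PER_MILE = 320  # 5,280 feet per mile / 16.5 feet per rod

def rods_to_miles(rods):
    """Convert a length in rods to miles."""
    return rods / RODS_PER_MILE

# Prairie states: 80 million rods in 1880 versus 607 million in 1900
print(rods_to_miles(80e6))   # 250,000 miles of fence
print(rods_to_miles(607e6))  # ~1.9 million miles of fence
```

Nearly two million miles of fence in the Prairie states alone by 1900 helps explain why fencing loomed so large in the capital stock.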

The juxtaposition between 19th century fencing and 21st century information technology offers an irresistible chance for loose speculations and comparisons. Fencing in the 19th century made property rights to U.S. land more valuable, especially in the Prairie and Southwest regions, because it protected the farmers' crops. Of course, there was also considerable conflict and dislocation as the land was fenced, including conflicts between farmers and ranchers and between settlers and Native Americans. But for many Americans, the fencing of the American West felt like a clear-cut opening of productive opportunities.

The economic gains from modern information technology often seem to arrive in less clear form. True, for some workers the vast gains of electronic technology feel like a brand-new frontier. But many workers throughout the economy experience information technology as a continual mix of gains, costs, and disruptions. For example, email is great; and email eats up my day. Information technology can offer vast cost savings in office-work, greater efficiency in logistics and shipping, and faster development of new designs and technologies--all of which also disrupt companies and workers.

New information technology is far more mutable than fencing: it finds ways to slither into aspects of almost every job, including how that job is scheduled, organized, and paid for. Moreover, information technology is really a series of new technologies, as Moore's law drives the cost of computing lower and lower, creating waves of distinctively different growth opportunities. As Hornbeck points out, barbed-wire fencing did get substantially cheaper over time, with the cost falling by half from 1874 to 1880, and then again almost another two-thirds by 1890, and falling almost to half of that amount by 1897. But that impressive technological record is dwarfed by the productivity gains in information technology.
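Compounding those successive price declines gives a rough sense of the cumulative drop; treating Hornbeck's "almost" figures as exact for illustration:

```python
# Index the 1874 barbed-wire price at 1.0 and apply each quoted decline
price = 1.0
price *= 0.5        # fell by half, 1874 to 1880
price *= 1 - 2 / 3  # fell by almost another two-thirds, by 1890
price *= 0.5        # fell almost to half of that, by 1897
print(round(price, 3))  # 0.083: roughly a twelfth of the 1874 price
```

A price drop of more than 90% in a quarter-century is impressive by any historical standard, yet it is still far slower than the Moore's-law decline in the cost of computing.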

In short, 19th-century fencing may well have been an investment similar in relative size to modern information technology (although the economic statistics of the late 19th century don't allow anything resembling an apples-to-apples comparison). But at least to me, information technology seems considerably more disruptive, transformative, and ultimately beneficial for the economy.



Friday, September 5, 2014

Shaping the Direction of Health Care Innovation

My hope would be that the health care innovations of the future focus on two goals: how to attain improvements in health across the population, and how to provide the same or more effective health care at lower cost. My worry is that the direction of health care innovation is instead shaped by incentives--beliefs about what can be brought to market, what patients will demand, and what health care providers will receive with favor--that are not necessarily well-aligned with these goals. Steven Garber, Susan M. Gates, Emmett B. Keeler, Mary E. Vaiana, Andrew W. Mulcahy, Christopher Lau, and Arthur L. Kellermann tackle these issues in "Redirecting Innovation in U.S. Health Care: Options to Decrease Spending and Increase Value," a report from the RAND Corporation.

The authors point out that since the 1950s, growth in U.S. health care spending has typically been about 2% per year faster than growth in GDP, and that most economists trace this cost difference to the continual arrival of new and more expensive health care technologies. They write: "As we argue in this report, the U.S. health care system provides strong incentives for U.S. medical product innovators to invent high-cost products and provides relatively weak incentives to invent low-cost ones." The system also provides strong incentives to focus on drugs, devices, and health information technologies that will generate profits in high-income countries, not to find low-cost ways of addressing health problems in the rest of the world. Here are four of the examples they offer.

The cardiovascular “polypill” "refers to a multidrug combination pill intended to reduce blood pressure and cholesterol, known risk factors for the development of cardiovascular disease. The rationale is that combining four beneficial drugs in low doses in a single pill should produce an easy and affordable way to dramatically modify cardiovascular risk." But as the authors point out, even though a "polypill" only combines existing drugs, putting them in a single pill means that it would have to go through very expensive and lengthy health and safety testing. The result would be a product that might be cheaper and more effective, but given that people could still take a handful of the other pills, the "polypill" would almost certainly be a low-profit product. Moreover, there have been several patents granted on aspects of a "polypill," so any company seeking to test such a pill would be likely to face a patent battle. No private company is likely to push this kind of innovation.

Better use of health information technology in patient records could save a lot of money in terms of lower paperwork costs, and also provide considerable health benefits by informing health care providers about past and current health experiences--for example, helping to minimize risks of allergic reactions or bad drug interactions. But despite various pushes and shoves, the health care sector has not been a leader in adopting and using information technology. Indeed, in many cases it seems to have soaked up the time of health care providers on one hand, while providing a tool for increasing the amount billed to insurance companies on the other.

The implantable cardioverter-defibrillator (ICD) is "an implantable device consisting of a small pulse generator (roughly half the size of a smartphone) and one or more thin wire leads threaded through large blood vessels into the heart. ICDs are designed to sense a life-threatening cardiac arrhythmia and automatically provide a dose of direct current (DC) electricity to jolt the patient’s heart back to normal." This technology works very well for some patients with heart disease, but not for others: specifically, it isn't recommended in cases "such as patients who are undergoing bypass surgery or in the early period following a heart attack, the first three months following coronary revascularization, severe heart failure (New York Heart Association Class IV), and those with newly diagnosed heart failure." Thus, this is a case of a positive and useful innovation that is quite likely overused--at substantial cost.

Prostate-specific antigen (PSA) is a test for whether men have prostate cancer. The authors write: "Despite PSA screening’s initial promise, multiple studies in the United States and in Europe have found that it does not reduce prostate cancer–specific mortality. Moreover, screening is associated with substantial harms caused by over-diagnosis and the complications that can occur from aggressive treatment. . . . Based on unfavorable findings, in 2012 the United States Preventive Services Task Force recommended against routine PSA screening for prostate cancer because the harms of screening outweigh the potential benefits. However, because federal law has not been changed, Medicare must still pay for the test’s use, as well as for the subsequent biopsies, surgical procedures, nonsurgical treatments, and complications that these procedures can cause."

The RAND authors point out a number of features of the U.S. health care system that can push innovation away from the methods that would most improve health and decrease costs. For example, the existing incentives for innovation don't tend to reward methods that will lead to reduced spending. As they note, in a market full of insured third-party payers, there is "[l]imited price sensitivity on the part of consumers and payers." In addition, a bias arises from the "limited time horizon of providers when they decide which medical products to use for which patients: In many instances, the health benefits from using a drug, device, or HIT are not realized until years in the future, at which time the patient is likely to be covered by a different insurer, such as Medicare. When this is the case, only the later insurer will obtain the financial benefits associated with the (long-delayed) health benefits." More broadly, "[m]any [health care] provider systems are siloed. When this is the case, most decisionmakers consider only the costs and benefits for their parts of their organizations, and few take into account savings that accrue outside of their silos."

They also write of "treatment creep" and the "medical arms race."
"Undesirable treatment creep often occurs when a medical product that provides substantial benefits to some patients is used for other patients for whom the health benefits are much smaller or completely absent. Treatment creep is encouraged by FFS [fee-for-service] payment arrangements, and it is enabled by lack of knowledge about which patients would truly benefit from which products. Treatment creep often involves using products for indications not approved by the FDA. Such “off-label” use—which delivers good value in some instances—is widespread and difficult to control. Treatment creep may reward developers with additional profits for inventing products whose use can be expanded to groups of patients who will benefit little. ..." 
"The “medical arms race” refers to hospitals and other facilities competing for business by making themselves attractive to physicians, who may care more about using new high-tech services than they care about lower prices. ... Robotic surgery for prostate cancer and proton beam radiation therapy provide striking examples of undesirable treatment creep: Although there is little or no evidence that they are superior to traditional treatments, these high-cost technologies have been successfully marketed directly to patients, hospitals, and physicians. High market rewards for such expensive technologies encourage inventors and investors to develop more of them—regardless of how much they improve health."
The authors have an eminently reasonable list of ways to alter the direction of health care innovation: basically, rethinking the sources of R&D funding, regulatory approval, and decision-making by third-party payers. For example, there could be public prize contests for certain innovations; some patents that seem to offer substantial health benefits could be bought out and placed in the public domain; and third-party payers (including Medicare and Medicaid) could place more emphasis on being willing to buy new technologies that cut costs. But I confess that as I look over their list of policy recommendations, I'm not sure they suffice to overcome the incentives currently built into the U.S. healthcare system.





Thursday, September 4, 2014

And Here Come the Interest Payments

The federal government has been on a borrowing binge since the start of the Great Recession. I've argued that in the short run, the path of the budget deficits has been basically correct, because the deficits have helped to cushion the brutal economy of 2008-2009 and the sluggish recovery since then. But the long-term budget deficit picture is a problem. And even those of us who have largely supported the budget deficits of the last few years need to face the fact that the bills will eventually come due, and interest payments by the federal government are likely to head sharply upward in the next few years.

For some perspective, here's a figure from the August 2014 Congressional Budget Office report, "An Update to the Budget and Economic Outlook: 2014 to 2024." The spending categories are expressed as a share of GDP. Thus, over the next decade Social Security and Major Health Care programs rise, and a number of other categories fall a bit. But the biggest spending jump in any of these categories is for interest payments.



Interest payments jump for two reasons: the recent accumulation of federal debt and the expectation that interest rates are going to rise. "Between calendar years 2014 and 2019, CBO expects, the interest rate on 3-month Treasury bills will rise from 0.1 percent to 3.5 percent and the rate on 10-year Treasury notes will rise from 2.8 percent to 4.7 percent; both will remain at those levels through 2024." Of course, predictions don't always come true. But the CBO has already scaled down how much it expects interest rates to rise, and its projections of future deficits may well be on the optimistic side.

When looking at spending as a share of GDP, it's useful to remember that the GDP is now around $17 trillion. This prediction shows a rise in federal interest payments from 1.3 percent of GDP in 2014 to 3.0 percent of GDP by 2024. Converted to actual dollars, this prediction means that interest payments are projected to rise from $231 billion in 2014 to $799 billion in 2024--more than tripling in unadjusted dollars. By 2024, that's an extra $568 billion per year that isn't available for other spending or to finance tax cuts. It's going to bite hard.
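The arithmetic behind those dollar figures is worth spelling out; note that nominal GDP is itself projected to grow over the decade, which is why the dollar amounts more than triple while the GDP share only a bit more than doubles:

```python
payments_2014 = 231e9  # CBO projection for 2014, about 1.3% of GDP
payments_2024 = 799e9  # CBO projection for 2024, about 3.0% of GDP

increase = payments_2024 - payments_2014
print(increase / 1e9)                 # 568.0: extra billions per year by 2024
print(payments_2024 / payments_2014)  # ~3.46: more than tripling in nominal terms
```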

For an historical comparison, a December 2010 CBO report looked at "Federal Debt and Interest Costs." The light blue line shows interest payments in nominal dollars, not adjusted for inflation or the size of the economy, and thus isn't useful for looking back several decades. The dark blue line helps to illustrate that interest payments, relative to the size of the economy, are headed for their highest levels since the mid-1990s, when we were still paying off the government borrowing of the mid-1980s at relatively high interest rates.





When economic times are dire, as they were in the U.S. economy in 2008-2009, having the government borrow money makes sense. Given the lethargic pace of the growth that followed, and the underlying financial fragility of the economy, it made some sense not to make a dramatic push for lower deficits in the last few years. But the coming surge in interest payments is a warning signal that it's past time to start thinking about how to bring down budget deficits in the middle and longer-term.






Wednesday, September 3, 2014

Competition as a Form of Cooperation

Like most economists, I find myself from time to time confronting the complaint that economics is all about competition, when we should be emphasizing cooperation instead. One standard response to this concern focuses on making a distinction between the way people and firms actually behave and the ways in which moralists might prefer that they behave. But I often try a different answer, pointing out that the idea of cooperation is actually embedded in the meaning of the word "compete."

Check the etymology of "compete" in the Oxford English Dictionary. It tells you that the word derives from Latin, in which "com-" means "together" and "petĕre" has a variety of meanings, which include "to fall upon, assail, aim at, make for, try to reach, strive after, sue for, solicit, ask, seek." Based on this derivation, valid meanings of competition would be "to aim at together," "to try to reach together," and "to strive after together."

Competition can come in many forms. The kind of market competition that economists typically invoke is not about wolves competing in a pen full of sheep, nor is it competition between weeds to choke the flowerbed. The market-based competition envisioned in economics is disciplined by rules and reputations, and those who break the rules through fraud or theft or manipulation are clearly viewed as outside the shared process of competition. Market-based competition is closer in spirit to the interaction between Olympic figure-skaters, in which pressure from other competitors and from outside judges pushes individuals to strive for doing the old and familiar better, along with seeking out new innovations. Sure, the figure-skaters are trying their hardest to win. But in a broader sense, their process of training and coming together under agreed-upon rules is a deeply cooperative and shared enterprise.  

In fact, competition within a market context actually happens as a series of cooperative decisions, every time a buyer and seller come together in a mutually agreed and voluntarily made transaction. This idea of cooperation within the market is at the heart of what the philosopher Robert Nozick in his 1974 work Anarchy, State, and Utopia referred to as “capitalist acts between consenting adults.”