Thursday, July 27, 2017

"The Limited Liability Corporation is the Greatest Single Discovery of Modern Times"

Economists and other gossipmongers sometimes like to quote the words of Nicholas Murray Butler, at the time President of Columbia University, from a 1911 speech called "Politics and Economics," delivered to the 143rd Annual Banquet of the Chamber of Commerce of the State of New York (pp. 43-55) and available through the magic of the HathiTrust Digital Library.

Here's the much-quoted part of the speech:
"I weigh my words, when I say that in my judgment the limited liability corporation is the greatest single discovery of modern times, whether you judge it by its social, by its ethical, by its industrial or, in the long run,—after we understand it and know how to use it,—by its political, effects. Even steam and electricity are far less important than the limited liability corporation, and they would be reduced to comparative impotence without it."
The quotation is often used to illustrate someone who has gone far over-the-top in admiring the private company--and perhaps even someone who is licking the boots of New York's rich and powerful at this banquet dinner. But that interpretation is unfair to Butler. This is just a speech, not an academic tract, but he is pointing out the enormous industrial change that has occurred, while also arguing in favor of the development of a body of law like the Sherman antitrust legislation to constrain corporations, and also making the point that these questions about large-scale companies had already been debated for hundreds of years, back at least to reports commissioned by the Diet of Nuremberg in the 1520s. Here's a more extended quotation from Butler (bracketed references to laughter or applause are omitted):
The fact of the matter is, and it may just as well be recognized in this country and in every other country, that the era of unrestricted individual competition has gone forever. And the reason why it has gone is partly because it has done its work, partly because it has been taken up into a new and larger principle of co-operation. What happens in every form of organic evolution is that an old part no longer useful to the structure drops away, and its functions pass over into and are absorbed by a new development. That new development is co-operation, and co-operation as a substitute for unlimited, unrestricted, individual competition has come to stay as an economic fact, and legal institutions will have to be adjusted to it. It cannot be stopped. It ought not to be stopped. It is not in the public interest that it should be stopped.
Now, how has this co-operation manifested itself? This new movement of co-operation has manifested itself in the last sixty or seventy years chiefly in the limited liability corporation. I weigh my words, when I say that in my judgment the limited liability corporation is the greatest single discovery of modern times, whether you judge it by its social, by its ethical, by its industrial or, in the long run,—after we understand it and know how to use it,—by its political, effects. Even steam and electricity are far less important than the limited liability corporation, and they would be reduced to comparative impotence without it. Now, what is this limited liability corporation? It is simply a device by which a large number of individuals may share in an undertaking without risking in that undertaking more than they voluntarily and individually assume. It substitutes co-operation on a large scale for individual, cut-throat, parochial, competition. It makes possible huge economy in production and in trading. It means the steadier employment of labor at an increased wage. It means the modern provision of industrial insurance, of care for disability, old age and widowhood. It means—and this is vital to a body like this—it means the only possible engine for carrying on international trade on a scale commensurate with modern needs and opportunities. ...
"I know how unsafe it is for any layman even to mention the SHERMAN law. I know that there is a prejudice in some political and journalistic circles against a layman saying anything about that law except the single word “Guilty.” But let me suggest that you do not agitate for an amendment of the SHERMAN law. Supplement it, if you like, but do not amend it. The SHERMAN law has now been subjected to twenty years of the most careful, the most extensive and the most elaborate legal and judicial examination and determination. Under it you are working out a solution slowly, patiently, and with much doubt; but you are working out a solution of the relations of business to that law by the very processes which have always been those governing in our Anglo-Saxon life, the process of the application of the common law, building up from precedent to precedent; and the man who undertakes to amend that law will make it worse. The first thing that will be done in that case will be to except some privileged people from it, and the only people who will be excepted will be those with a large number of votes. ...
"There is nothing new about all this conflict over large and new business undertakings. ...  As a matter of fact there has not a single thing been said about corporations, about large industrial combinations, which was not said in England about co-partnerships, when co-partnerships were first invented. You may go all the way back five hundred years, and you will find exactly these same expressions. I ran upon this the other day. Let me read it, and perhaps you may guess from what American daily newspaper it comes:
"'The merchants form great companies and become wealthy; but many of them are dishonest and cheat one another. Hence the directors of the companies, who have charge of the accounts, are nearly always richer than their associates. Those who thus grow rich are clever, since they do not have the reputation of being thieves.'
"That was not published in New York, or Chicago or San Francisco. That is found in the Chronicle of Augsburg, Germany, in 1512. In one year more that quotation will be four hundred years old. They were very much disturbed about this problem in those days, and the Diet of Nuremberg appointed a committee in 1522 to investigate monopolies. They sent an inquiry to several cities, to Boards of Trade and Chambers of Commerce, to know what better be done. This is the answer they got from Augsburg: 
"'It is impossible to limit the size of the companies for that would limit business and hurt the common welfare; the bigger and more numerous they are the better for everybody. If a merchant is not perfectly free to do business in Germany he will go elsewhere to Germany's loss. Any one can see what harm and evil such an action would mean to us. If a merchant cannot do business, above a certain amount, what is he to do with his surplus money? It is impossible to set a limit to business, and it would be well to let the merchant alone and put no restrictions on his ability or capital. * * * * * Some people talk of limiting the earning capacity of investments. This would be unbearable and would work great injustice and harm by taking away the livelihood of widows, orphans and other sufferers, noble and non-noble, who derive their income from investments in these companies. Many merchants out of love and friendship invest the money of their friends—men, women and children—who know nothing of business, in order to provide them with an assured income. Hence any one can see that the idea that the merchant companies undermine the public welfare ought not to be seriously considered. ...'
"I read that to illustrate that the business and political mind of Europe has been on this question for at least four hundred years. ... We must learn that economic laws, economic principles, based on everlasting human nature are fundamental and vital, and your care and mine, as citizens of this Republic, is not to interfere with these laws, not to check them; but to see to it that no moral wrong is done in their name. That is a very different proposition from the one of overturning a great economic and industrial system by statute."
At least for me, the closing paragraph contains food for thought. 

Wednesday, July 26, 2017

Summer 2017 Journal of Economic Perspectives Available Online

I was hired back in 1986 to be the Managing Editor for a new academic economics journal, at the time unnamed, but which soon was launched as the Journal of Economic Perspectives. The JEP is published by the American Economic Association, which back in 2011 decided--to my delight--that it would be freely available on-line, from the current issue back to the first issue. Here, I'll start with the Table of Contents for the just-released Summer 2017 issue, which in the Taylor household is sometimes known as issue #121. Below that are abstracts and direct links for all of the papers. I will almost certainly blog about some of the individual papers in the next week or two, as well.

___________________

Symposium: The Global Monetary System

"International Monetary Relations: Taking Finance Seriously," by Maurice Obstfeld and Alan M. Taylor
In this essay, we highlight the interactions of the international monetary system with financial conditions, not just with the output, inflation, and balance of payments goals usually discussed. We review how financial conditions and outright financial crises have posed difficulties for each of the main international monetary systems in the last 150 years or so: the gold standard, the interwar period, the Bretton Woods system, and the current system of floating exchange rates. We argue that even as the world economy has evolved and sentiments have shifted among widely different policy regimes, there remain three fundamental challenges for any international monetary and financial system: How should exchange rates between national currencies be determined? How can countries with balance of payments deficits reduce these without sharply contracting their economies and with minimal risk of possible negative spillovers abroad? How can the international system ensure that countries have access to an adequate supply of international liquidity—financial resources generally acceptable to foreigners in all circumstances? In concluding, we evaluate how the current international monetary system answers these questions.
Full-Text Access | Supplementary Materials


"The Safe Assets Shortage Conundrum," by Ricardo J. Caballero, Emmanuel Farhi and Pierre-Olivier Gourinchas
A safe asset is a simple debt instrument that is expected to preserve its value during adverse systemic events. The supply of safe assets, private and public, has historically been concentrated in a small number of advanced economies, most prominently the United States. Over the last few decades, with minor cyclical interruptions, the supply of safe assets has not kept up with global demand. The reason is straightforward: the collective growth rate of the advanced economies that produce safe assets has been lower than the world's growth rate, which has been driven disproportionately by the high growth rate of high-saving emerging economies such as China. The signature of this growing shortage is a steady increase in the price of safe assets; equivalently, global safe interest rates must decline, as has been the case since the 1980s. The early literature, brought to light by Ben Bernanke's famous "savings glut" speech of 2005, focused on a general shortage of assets without isolating its safe asset component. The distinction, however, has become increasingly important over time, particularly in the aftermath of the subprime mortgage crisis and its sequels. We begin by describing the main facts and macroeconomic implications of safe asset shortages. Faced with such a structural conundrum, what are the likely short- to medium-term escape valves? We analyze four of them, each with its own macroeconomic and financial trade-offs.
Full-Text Access | Supplementary Materials


"Dealing with Monetary Paralysis at the Zero Bound," by Kenneth Rogoff
Recently, the key constraint for central banks is the zero lower bound on nominal interest rates. Central banks fear that if they push short-term policy interest rates too deeply negative, there will be a massive flight into paper currency. This paper asks whether, in a world where paper currency is becoming increasingly vestigial outside small transactions (at least in the legal, tax compliant economy), there might be relatively simple ways to finesse the zero bound without affecting how most ordinary people live. Surprisingly, this question gets little attention compared to the massive number of articles that take the zero bound as given and look for out-of-the-box solutions for dealing with it. In an inversion of the old joke, it is a bit as if the economics literature has insisted on positing "assume we don't have a can opener," without considering the possibility that we might be able to devise one. It makes sense not to wait until the next financial crisis to develop plans. Fundamentally, there is no practical obstacle to paying negative (or positive) interest rates on electronic currency and, as we shall see, effective negative rate policy does not require eliminating paper currency.
Full-Text Access | Supplementary Materials

Symposium: The Modern Corporation

"Is the US Public Corporation in Trouble?" by Kathleen M. Kahle and René M. Stulz
We examine the current state of the US public corporation and how it has evolved over the last 40 years. After falling by 50 percent since its peak in 1997, the number of public corporations is now smaller than 40 years ago. These corporations are now much larger and over the last twenty years have become much older; they invest differently, as the average firm invests more in R&D than it spends on capital expenditures; and compared to the 1990s, the ratio of investment to assets is lower, especially for large firms. Public firms have record high cash holdings and, in most recent years, the average firm has more cash than long-term debt. Measuring profitability by the ratio of earnings to assets, the average firm is less profitable, but that is driven by smaller firms. Earnings of public firms have become more concentrated—the top 200 firms in profits earn as much as all public firms combined. Firms' total payouts to shareholders as a percent of earnings are at record levels. Possible explanations for the current state of the public corporation include a decrease in the net benefits of being a public company, changes in financial intermediation, technological change, globalization, and consolidation through mergers.
Full-Text Access | Supplementary Materials


"The Agency Problems of Institutional Investors," by Lucian A. Bebchuk, Alma Cohen and Scott Hirst
Financial economics and corporate governance have long focused on the agency problems between corporate managers and shareholders that result from the dispersion of ownership in large publicly traded corporations. In this paper, we focus on how the rise of institutional investors over the past several decades has transformed the corporate landscape and, in turn, the governance problems of the modern corporation. The rise of institutional investors has led to increased concentration of equity ownership, with most public corporations now having a substantial proportion of their shares held by a small number of institutional investors. At the same time, these institutions are controlled by investment managers, which have their own agency problems vis-á-vis their own beneficial investors. We develop an analytical framework for understanding the agency problems of institutional investors, and apply it to examine the agency problems and behavior of several key types of investment managers, including those that manage mutual funds—both index funds and actively managed funds—and activist hedge funds. We show that index funds have especially poor incentives to engage in stewardship activities that could improve governance and increase value. Activist hedge funds have substantially better incentives than managers of index funds or active mutual funds. While their activities may partially compensate, we show that they do not provide a complete solution for the agency problems of other institutional investors.
Full-Text Access | Supplementary Materials


"Towards a Political Theory of the Firm," by Luigi Zingales
The revenues of large companies often rival those of national governments, and some companies have annual revenues higher than many national governments. Among the largest corporations in 2015, some had private security forces that rivaled the best secret services, public relations offices that dwarfed a US presidential campaign headquarters, more lawyers than the US Justice Department, and enough money to capture (through campaign donations, lobbying, and even explicit bribes) a majority of the elected representatives. The only powers these large corporations missed were the power to wage war and the legal power of detaining people, although their political influence was sufficiently large that many would argue that, at least in certain settings, large corporations can exercise those powers by proxy. Yet in economics, the commonly prevailing view of the firm ignores all these elements of politics and power. We must recognize that large firms have considerable power to influence the rules of the game. I call attention to the risk of a "Medici vicious circle," in which economic and political power reinforce each other. The possibility and extent of a "Medici vicious circle" depends upon several nonmarket factors. I discuss how they should be incorporated in a broader "Political Theory" of the firm.
Full-Text Access | Supplementary Materials


"A Skeptical View of Financialized Corporate Governance," by Anat R. Admati
Managerial compensation typically relies on financial yardsticks, such as profits, stock prices, and return on equity, to achieve alignment between the interests of managers and shareholders. But financialized governance may not actually work well for most shareholders, and even when it does, significant tradeoffs and inefficiencies can arise from the conflict between maximizing financialized measures and society's broader interests. Effective governance requires that those in control are accountable for actions they take. However, those who control and benefit most from corporations' success are often able to avoid accountability. The history of corporate governance includes a parade of scandals and crises that have caused significant harm. After each, most key individuals tend to minimize their own culpability. Common claims from executives, boards of directors, auditors, rating agencies, politicians, and regulators include "we just didn't know," "we couldn't have predicted," or "it was just a few bad apples." Economists, as well, may react to corporate scandals and crises with their own version of "we just didn't know," as their models had ruled out certain possibilities. Effective governance of institutions in the private and public sectors should make it much more difficult for individuals in these institutions to get away with claiming that harm was out of their control when in reality they had encouraged or enabled harmful misconduct, and ought to have taken action to prevent it.
Full-Text Access | Supplementary Materials

Articles
"The Causes and Costs of Misallocation," by Diego Restuccia and Richard Rogerson
Why do living standards differ so much across countries? A consensus in the development literature is that differences in productivity are a dominant source of these differences. But what accounts for productivity differences across countries? One explanation is that frontier technologies and best practice methods are slow to diffuse to low-income countries. The recent literature on misallocation offers a distinct but complementary explanation: low-income countries are not as effective in allocating their factors of production to their most efficient use. We provide our perspective on three key questions. First, how important is misallocation? Second, what are the causes of misallocation? And third, beyond the direct cost of lower contemporaneous output, are there additional costs associated with misallocation? A summary of our answers is as follows: Misallocation appears to be a substantial channel in accounting for productivity differences across countries, but the measured magnitude of the effects depends on the approach and context. Researchers have not yet found a dominant source of misallocation; instead, many specific factors seem to contribute a small part of the overall effect. Beyond the static cost of misallocation, we believe that the dynamic effects of misallocation on productivity growth are significant and deserve much more attention going forward.
Full-Text Access | Supplementary Materials


"Federal Budget Policy with an Aging Population and Persistently Low Interest Rates," Douglas W. Elmendorf and Louise M. Sheiner
Some observers have argued that the projections for high and rising debt pose a grave threat to the country's economic future and give the government less fiscal space to respond to recessions or other unexpected developments, so they urge significant changes in tax or spending policies to reduce federal borrowing. In stark contrast, others have noted that interest rates on long-term federal debt are extremely low and have argued that such persistently low interest rates justify additional federal borrowing and investment, at least for the short and medium term. We analyze this controversy focusing on two main issues: the aging of the US population and interest rates on US government debt. It is generally understood that these factors play an important role in the projected path of the US debt-to-GDP ratio. What is less recognized is that these changes also have implications for the appropriate level of US debt. We argue that many—though not all—of the factors that may be contributing to the historically low level of interest rates imply that both federal debt and federal investment should be substantially larger than they would be otherwise. In conclusion, although significant policy changes to reduce federal budget deficits ultimately will be needed, they do not have to be implemented right away. Instead, the focus of federal budget policy over the coming decade should be to increase federal investment while enacting changes in federal spending and taxes that will reduce deficits gradually over time.
Full-Text Access | Supplementary Materials


"How Digitization Has Created a Golden Age of Music, Movies, Books, and Television," Joel Waldfogel
Digitization is disrupting a number of copyright-protected media industries, including books, music, radio, television, and movies. Once information is transformed into digital form, it can be copied and distributed at near-zero marginal costs. This change has facilitated piracy in some industries, which in turn has made it difficult for commercial sellers to continue generating the same levels of revenue for bringing products to market in the traditional ways. Yet despite the sharp revenue reductions for recorded music, as well as threats to revenue in some other traditional media industries, other aspects of digitization have had the offsetting effects of reducing the costs of bringing new products to market in music, movies, books, and television. On balance, digitization has increased the number of new products that are created and made available to consumers. Moreover, given the unpredictable nature of product quality, growth in new products has given rise to substantial increases in the quality of the best products. Although there were concerns that consumer welfare from media products would fall, the opposite scenario has emerged—a golden age for consumers who wish to consume media products.
Full-Text Access | Supplementary Materials


Features



"Retrospectives: Friedrich Hayek and the Market Algorithm," by Samuel Bowles, Alan Kirman and Rajiv Sethi
Friedrich A. Hayek (1899-1992) is known for his vision of the market economy as an information processing system characterized by spontaneous order: the emergence of coherence through the independent actions of large numbers of individuals, each with limited and local knowledge, coordinated by prices that arise from decentralized processes of competition. Hayek is also known for his advocacy of a broad range of free market policies and, indeed, considered the substantially unregulated market system to be superior to competing alternatives precisely because it made the best use of dispersed knowledge. Our purpose in writing this paper is twofold: First, we believe that Hayek's economic vision and critique of equilibrium theory not only remain relevant, but apply with greater force as information has become ever more central to economic activity and the complexity of the information aggregation process has become increasingly apparent. Second, we wish to call into question Hayek's belief that his advocacy of free market policies follows as a matter of logic from his economic vision. The very usefulness of prices (and other economic variables) as informative messages—which is the centerpiece of Hayek's economics—creates incentives to extract information from signals in ways that can be destabilizing. Markets can promote prosperity but can also generate crises. We will argue, accordingly, that a Hayekian understanding of the economy as an information-processing system does not support the type of policy positions that he favored. Thus, we find considerable lasting value in Hayek's economic analysis while nonetheless questioning the connection of this analysis to his political philosophy.


"Recommendations for Further Reading," by Timothy Taylor

Tuesday, July 25, 2017

Housing Prices: Highs and Lows

The interaction of a bubble in real estate prices with the banking and financial sector drove the US economy into the Great Recession, which started in 2007. For an overview of how the US housing market is faring a decade later, a useful starting point is The State of the Nation's Housing 2017, by the Joint Center for Housing Studies of Harvard University.

Here's the pattern of overall US housing prices: adjusted for inflation, home prices are about 30% above their level in 2000, but still below where they were at the peak of the housing bubble in 2006.
Perhaps not surprisingly, given that pattern, even ten years after the start of the financial crisis in 2007, there are millions of Americans who are still "underwater" on their mortgages: that is, what they owe is more than what the home would sell for: "According to CoreLogic, the number of households underwater on their mortgages dropped from 4.3 million in 2015 to 3.2 million in 2016, reducing their share of all homeowners from 8.4 percent to 6.2 percent. ... Despite this progress, the share of homeowners with negative equity in some markets is still more than double the national rate. For example, 16.1 percent of homeowners in the Miami metro area were underwater on their mortgages in 2016, along with 15.5 percent in Las Vegas and 12.6 percent in Chicago. At the other extreme, only 0.6 percent of owners in the San Francisco metro area had negative equity."

Comparisons like these help to emphasize that price patterns have been VERY different across US housing markets. For example, here's one figure comparing the rise in home prices in the ten highest-cost metro areas with the lowest-cost areas and the US average. Clearly, the bubble in housing prices was much more extreme in these high-cost markets. It's also striking that housing prices in the high-cost markets were roughly double the national average in 2000, but more like triple the national average in 2016.
Here's another look at differences in housing prices across local markets using a heat map diagram. The orange areas are places where housing prices rose at least 40%, and sometimes much more, between 2000 and 2016. The blue areas show where housing prices are actually lower, and sometimes much lower, in 2016 than in 2000.


One of my takeaways here is that the affordability of housing is in some ways a regional and even a local issue. Whether housing is "unaffordable" is commonly measured by using a rule-of-thumb like the share of people in a given market who could afford the median-priced house if they spent no more than 30% of their income on housing. By this measure, a very high proportion of households at or below the poverty line will face a problem of unaffordable housing in pretty much every real estate market. But in the markets where housing prices are highest and have been rising most quickly, a substantial share of middle-class families can face housing affordability issues, too.
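The 30-percent rule of thumb can be made concrete with a small sketch. The income, interest rate, and loan-term figures below are hypothetical, chosen only to illustrate how the rule translates an income into a maximum "affordable" house price:

```python
# Illustrative sketch of the 30%-of-income affordability rule of thumb.
# All specific numbers here are made up for illustration, not from the report.

def max_affordable_price(annual_income, annual_rate=0.04, years=30, share=0.30):
    """Largest loan whose monthly payment stays within `share` of income,
    using the standard fixed-rate mortgage payment formula."""
    monthly_budget = annual_income * share / 12
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # number of monthly payments
    # Invert payment = P * r / (1 - (1 + r)**-n) to solve for principal P.
    return monthly_budget * (1 - (1 + r) ** -n) / r

# A household earning $60,000 could carry roughly a $314,000 loan
# under these (hypothetical) rate and term assumptions.
print(f"max affordable price: ${max_affordable_price(60_000):,.0f}")
```

In a market where the median-priced house costs more than this, that household counts as facing an affordability problem under the rule of thumb; the share of such households varies sharply across metro areas.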

I have no problem with federal programs to help those at or near the poverty line afford necessities of life like food, medical care, housing, and so on. But it's not clear that federal programs are appropriate to help those who are not poor, but face an affordability problem because they live in a region with a high cost of housing. It's not clear why those who live in Pocatello or Dubuque or Detroit should pay taxes to subsidize the higher cost of living in south Florida or southern California.

In addition, one key reason why housing prices are so high in certain regions is that local rules can make it costly and time-consuming to build. In short, many of the areas with high housing prices have, to a substantial extent, brought those high costs on themselves by the ways in which they regulate and limit construction of new housing. Again, federal programs to help those with very low incomes are one thing, but when a local area or region has helped to create its own high housing costs, that same local area or region should also have most of the responsibility for addressing the affordability consequences of those decisions.

The theme that housing construction has been relatively slow surfaces in the report in a number of places: "A variety of factors may be holding back a more robust supply response. Labor shortages are a key constraint, reflecting both the substantial drop in the construction workforce following the housing bust and the lower number of young workers entering the industry. In addition, regulatory and stricter financing requirements have limited the supply of land available for both single- and multifamily housing construction. In combination, these forces raise development costs and make it less feasible to build smaller homes for first-time buyers and rental units affordable to low- and moderate-income households."

For example, the blue line in this figure shows the "vacancy rate," which is quite low. The yellow bars show how many new houses have been constructed in the previous 10 years, which is also very low.

Similarly, the vacancy rate for rental apartments is also quite low, and rents are rising in most metro areas.

With low vacancy rates for housing and, at least in certain areas, very high prices, one would expect to see a resurgence of construction. But in a number of local markets, efforts to build additional housing stock are held back substantially by the conditions demanded by existing home-owners and imposed in their name by local regulators.


Monday, July 24, 2017

Facts about Carbon Emissions from Oil, Natural Gas, and Coal

The BP Statistical Review of World Energy is a useful annual volume that compiles tables and charts about energy production and consumption. The latest version, released June 2017, includes a table at the end on recent and current carbon emissions from oil, natural gas, and coal. As a footnote under the table emphasizes, this does not include all greenhouse gases (for example, methane is not included), nor does it subtract carbon emissions which have been sequestered or offset in some way. Moreover, while other tables and figures in the book offer data on production of hydroelectric, nuclear, solar, wind, and other noncarbon energy alternatives, this table has a different focus.

Here's a whittled-down version of the table: specifically, I took out the annual data on emissions for the years from 2007-2015. Thus, the first column is carbon emissions for 2006; the second column is carbon emissions for 2016; the next two columns show the annual growth rate of carbon emissions for 2016 and for the decade from 2005-2015; and the final column shows each country's or region's share of global carbon emissions in 2016.

What are some of the notable patterns here?

1) Anyone who follows this topic at all knows that China leads the world in carbon emissions. Still, it's striking to me that China accounts for 27.3% of world carbon emissions, compared to 16% for the US.

2) On a regional basis, it's striking that the Asia Pacific region--led by China, India, and Japan, but also with substantial contributions from Indonesia, South Korea, and Australia--by itself accounts for nearly half of global carbon emissions. Moreover, carbon emissions from this region grew 3.6% annually from 2005-2015.

3) Again on a regional basis, carbon emissions from North America (that is, mainly the United States) are nearly the same as carbon emissions from the Europe/Eurasia region. For both regions, carbon emissions have been falling at about 1% per year since 2005.

4) Given the large size of carbon emissions for the massive US economy, and the ongoing decline in the last decade, total carbon emissions for the United States have dropped much more than for any other country in the world from 2005-2016. A number of other nations with smaller total emissions have seen a faster annual rate of decline than the United States. But for two of the countries which rank among those with the most rapid declines in carbon emissions from 2005-2015--Ukraine and Greece--the declines are surely more about overall macroeconomic struggles than about a smooth transition to noncarbon energy sources.

5) Total carbon emissions from the three regions of South and Central America, the Middle East, and Africa add up to 14.1% of the global total, and thus their combined total is less than either the United States or the European/Eurasian economies. However, if the carbon emissions for this group of three regions keep growing at 3% per year, while the carbon emissions for the US economy keep falling at 1% per year, their carbon emissions will outstrip the US in about 4-5 years.
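That crossover claim is easy to check with back-of-the-envelope arithmetic. A minimal sketch, using the 2016 shares of global emissions as starting points (14.1% for the three regions combined, 16% for the US) and projecting the stylized 3% growth and 1% decline rates forward:

```python
def crossover_year(start_a, start_b, growth_a, growth_b):
    """Return the first year in which series A exceeds series B,
    with each series compounding at its own annual rate."""
    year = 0
    a, b = start_a, start_b
    while a <= b:
        year += 1
        a *= 1 + growth_a
        b *= 1 + growth_b
    return year

# Three regions combined at 14.1 growing 3%/year vs. US at 16.0 falling 1%/year
years = crossover_year(14.1, 16.0, 0.03, -0.01)
print(years)  # crossover comes in year 4
```

Under these assumptions the combined regions pass the US level in the fourth year, consistent with the "about 4-5 years" figure in the text.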


Friday, July 21, 2017

Food for Thought on Jobs: Tidbits from Solow, Gershon, Mokyr

The US unemployment rate has been 5.0% or lower for nearly two years, since September 2015, and the most recent estimates show it at 4.4% in June 2017. For most of my life, an unemployment rate at this level would have been cause for near-riotous celebration. But it's also a time when many workers have had little growth in wages, when labor force participation has fallen, when many jobs are replaceable contract work without any clear career path, when many jobs feel under threat from foreign competition and technology, when the control of employers over workers often feels as if it's on the rise, and when inequality has been rising. I've recently run across three comments on US labor markets which get at some of these concerns in various ways, and I'm passing them along here. As always, there is more discussion at the links.

Here's Robert Solow, as part of an "In Conversation" feature with Heather Boushey at the Washington Center for Equitable Growth blog (July 20, 2017):
"I’d like to find some way of enlarging and improving the way workers, wage earners, are represented in their firms. Unions used to do that, but even with the best will in the world, you could not restore the trade union movement. If it’s true, what we all think, that the nature of workers changed, that people who work for many employers in different industries, and different occupations, really have changed, then neither the craft union nor the industrial union is the right policy vehicle.
"But of course, the online workers that everybody talks about are the prize case in this. They never have contacts with their employers, who change from day to day, and they have no contact with the other people who work for that employer. ... There’s no shop floor, but for the online worker, it’s clear who the boss is. The boss is the one who pays, as usual.
"So what’s the correct, valid form of representation they could have? How could we do something about their voice and about the web of rules in which they operate? Or something about retirement for people who don’t have a single employer for any length of time? What is the right form of representation? I don’t really think it’s having someone on the board of a corporation. It might matter, but it can’t be the whole thing. I think that you need some kind of substitute. Maybe you need a substitute for the shop floor. How can you be part of a group that you never see, never communicate with or anything like that?
"It’s that part of the inequality issue that I think doesn’t attract enough thought, and I don’t know how to go about encouraging that. Who would be good at it? Or what happens in other countries? ... I do think the economics of this is important because the object here is not merely to make people feel good but to make them feel effective and be effective in pursuing their own interests. So that, to me, is part of the inequality issue. It’s not so much a quantitative inequality, it’s a fact that the relationship between the boss and the bossed is getting more and more biased toward the boss, and that makes people feel unhappy."
Here's Ilana Gershon in a 20-minute podcast interview on "The Biggest Mistakes Job Seekers Make Today" (July 10, 2017) at the Knowledge@Wharton website. Gershon has a book out (which I have not yet read) called Down and Out in the New Economy: How People Find and Don’t Find Work Today. Gershon says:
"People are becoming extremely canny about doing research on the companies that they are interested in being hired into, and they’re thinking more carefully about what kinds of jobs that they would like to have. This is the thing that I’ve been really impressed by: People are getting more and more clever about figuring out whether the workplace that they are perhaps about to join is really a workplace that they want to be a part of. ...
"The other thing that people seem to be doing — and it took me a while to realize why and how this was taking up a lot of people’s time — was focusing on weak ties or weak links in order to be able to get jobs. Weak ties and weak links used to be the ways that people were getting jobs. It used to be very effective in the 1970s, but nowadays, technology has changed so much that the pain point in getting a job has really shifted from trying to find out that the job exists in the first place to figuring out how to have hiring managers or recruiters notice you amid a pile of resumes. It’s more a question of getting noticed rather than finding out that the job exists. Weak ties aren’t so helpful for that. It turns out that workplace ties — having someone who knew you from a previous job and can talk about what you are like as a worker — was very helpful for people. ...
"In San Francisco, where I was doing most of my research, people expected a job tenure of two or three years; in the Midwest, people were expecting more like five to eight years. So when people in the Bay Area were looking at a job applicant from the Midwest they would say, “But wait a minute, you stayed too long. You were too static.” This was really a problem. Then I talked to people who were interviewing for jobs in Chicago, and they found it really frustrating because they kept being told, “But you’re a job-hopper, you don’t commit.” And they would say, “But this is the right length of time in my region.” ...
"[T]here isn’t that much pressure on companies right now to do as well as they can by their job applicants — to give them information about when the job is no longer available; to give them enough information about what the job will actually be like. There are a lot of complaints among job seekers about how badly they are being treated in the hiring process. ... As advanced as we are technologically, you will see jobs actively posted that were filled a month, two months, three months before, and they’re still out there showing up as potential jobs for people."
Finally, the Knowledge@Wharton website has also posted a 54-minute podcast in which Jeremy Schwartz moderates a discussion between Robert J. Gordon and Joel Mokyr on the subject "Can the U.S. Economy Recapture Its Past Growth?" (July 20, 2017). It's interesting throughout. Mokyr announces at one point: "I’ve been an economic historian all of my life, but I’m going to say something that cuts off the branch on which I’m sitting: I think the past is not a terribly good guide to the future." Here are some thoughts from Mokyr on how technology will affect future jobs:
"There’s two kinds of techno-pessimists. There are ones ... who basically say, the best is behind us, and from now it’s going to be slow going. And then there’s the other techno-pessimists who say, this is going to accelerate and it’s going to destroy us. It’s going to destroy people, it’s going to destroy jobs, it’s going to make us all the slaves of these supercomputers. They can’t both be right. Maybe the truth is somewhere in the middle.
The problem you are underlining has been around for basically 200 years. In 1821, [economist David] Ricardo wrote a famous letter to [economist John Ramsay] McCulloch in which he basically said, the way labor-saving technology … has been going, very soon nobody will be working. Well, that was 200 years ago and people are still working. I think the nature of work will change, and machines will replace more arduous, routine, boring work. Now, we may reach a situation where the only people who will work are the people who want to work. ...
I think what you will see if the participation rate declines is an increase in volunteer work, which is already a very large sector of the economy. People who do work because they feel good about it, and they want to do it, and it gives them a chance of participating in society, to meet people, and all the other benefits of work. That I think you will see. And then of course the big issue becomes how do we actually pay people some kind of citizen’s wage, and that’s an issue about distribution that everybody is sort of scratching their head over, and I don’t have an immediate answer to that. ...
I just want to say that economic growth will be slowed down because of people dropping out of the labor force, people retiring, and so on. That takes advantage of the fact that we actually don’t count leisure as part of our national income accounts. And so if you’re not working because you don’t want to work — the national income goes down, that is bad. But of course it isn’t bad because we all understand that leisure itself is a desirable thing. ...
Now the other thing of course is that it’s not so obvious that these people aren’t going to work. I refer both of you to a survey essay that was published last week in The Economist, in which they basically point out that we may be looking at a large proportion of the population, particularly people in the 65 to 74 age bracket, who are basically fit to work, want to work, and there is no reason why they shouldn’t work — but only if we can change the institutions of society that have been systematically discriminating against them.

Thursday, July 20, 2017

The Cycles of Cities

Cities evolve. Sometimes they boom, with strong growth in center-city areas. Sometimes the center city seems to hollow out, but economic activity in the metro area as a whole, including suburbs, remains fairly robust. Sometimes older neighborhoods with low property values experience a wave of new investment in residential housing, often called "gentrification." Sometimes both a downtown area and its surrounding metro area decline. In addition, these patterns seem to affect a number of cities at the same time. Santiago Pinto and Tim Sablik discuss one aspect of these cycles in "Understanding Urban Decline," an essay that appears in the 2016 Annual Report of the Federal Reserve Bank of Richmond.

As a dramatic opening, here's a picture of a house in Detroit's Brush Park neighborhood as it appears in 1881, and then when a photo was taken from the same perspective in 2011. The house in the middle is the one from the earlier picture. The houses on either side are long gone. Detroit is obviously an extreme example, given that its economic base was so closely tied to the auto industry. But it's far from the only example of an urban area that suffered a severe downturn.

The underlying economic theory of cities points out that they offer advantages of agglomeration: that is, it's a lot easier to carry out certain activities of production, hiring, marketing, entertainment, provision of infrastructure, and transportation when people are bunched more closely together. Thus, urban areas are engines of economic growth. But agglomeration has negative aspects as well: traffic congestion, crime, noise, and other harms also arise more easily when people are bunched together. So cities are in a constant struggle to build on the advantages of agglomeration while mitigating the disadvantages.

The preferences of people interact with the economics of cities, and in particular, with how people trade off different aspects of location. For example, people will place different values on their location relative to their job and the length of their commute, the cost of a place to live, high-quality local schools, a quiet and low-crime neighborhood, a bustling urban neighborhood, mass transit, shopping, parks and libraries, culture/entertainment/sports, and neighbors with certain types of income levels or ethnic mixes. As is true in so many life decisions, you can't always get all of what you want. As people with different income levels and jobs and preferences make these choices, urban patterns will emerge.

Here's what Pinto and Sablik have to say about the patterns that have emerged in US metropolitan areas (footnotes omitted):
"In most U.S. cities, wealthier households tend to live farther away from the city center, though there are a few notable exceptions (such as Chicago, Philadelphia, and Washington, D.C.). One explanation for this is that wealthier households prefer to occupy more land and therefore are willing to live in the suburbs despite higher commuting costs because the price of housing per square foot is lower. On the other hand, when a household’s income becomes sufficiently large, it may choose to move back to the city center to reduce time spent commuting. This type of trade-off could explain, for instance, why both very poor and very wealthy households are found living in some downtowns. Cities such as Boston, New Orleans, Atlanta, and Philadelphia are examples of this type of spatial pattern. Additionally, public transportation can help explain why poorer households live in the city center. Although the cost of housing per unit of land is higher in the city, public transportation allows poor households that don’t have access to cars to economize on transportation costs.
"Transportation may further explain the trend of households moving from city centers to the suburbs, often called suburbanization. Several studies suggest that the development of the highway system contributes to “urban sprawl.” One study estimated that just one highway passing through a central city reduces its population by 18 percent. Cities that experience such a decline in commuting costs do still tend to attract population, but that inflow typically causes the city to expand geographically more than it increases the number of people living in the city center.
"Certain amenities, such as schools, also may explain neighborhood sorting by income or race. For instance, as wealthier households move to the suburbs, the quality of schools and other public services provided there will tend to rise. As this process unfolds, lower-income households are left behind in the city center with limited access to high-quality local public services. This has been observed in the suburbanization that has taken place in many large U.S. cities starting around the mid-twentieth century. ... 
"Cities undergo long cycles of development and decay. When a city is new, buildings near the CBD [central business district]  are the most desirable and tend to be occupied by a mix of firms and wealthier households. But as those buildings age and deteriorate, those households may move to newer developments surrounding the city, leaving behind lower-income households. This process can repeat multiple times, pushing the city border outward as higher-income households retreat to the newest ring of development. Eventually, deteriorated buildings in the city center are redeveloped, once again attracting higher-income households back to the city and starting the cycle anew. This has taken place, for instance, in cities such as Chicago and Philadelphia. This process, however, has raised some controversies since transforming a neighborhood from low- to high-income may displace the low-income households who live there, a process called gentrification. ...
"Because buildings are durable goods, it can take a long time for a city to move through its lifecycle. When a city’s population is growing, it is profitable to construct new housing because demand and prices for housing are rising, and the city expands rapidly. But when the population declines, existing housing stock doesn’t simply disappear. It can take decades before it is profitable to refurbish or replace a building. The surplus of housing depresses  house prices below the cost of construction, and the city stops growing. Moreover, falling rents may draw lower-skilled and lower-income households into the city, intensifying urban  sorting by income."
The underlying message here, as I hear it, is that areas of cities or cities as a whole can become locked into negative patterns. Say that the positive economic effects of agglomeration are not operating well, and so that part of the city has a lot of low-priced real estate which offers housing for a disproportionate number of low-income people. We know that this dynamic can remain in place for decades. Can anything be done about it?  Pinto and Sablik write:
"If policymakers decide that some intervention is warranted, there are a number of different approaches they could consider. One option is to focus on helping households by giving them the tools to improve their situation. This could involve removing barriers that prevent households from relocating to thriving parts of the city, providing housing vouchers to help them move, or improving transportation networks to reduce commuting costs. An alternative approach is to focus on revitalizing the city itself. This includes revitalizing residential or commercial buildings that have declined or offering incentives to employers to locate in the city and hire local people. Economists have labeled these different approaches people-based and place-based policies, respectively."
However, as Pinto and Sablik discuss, the evidence on the effectiveness of either people-based or place-based approaches is fairly weak. For example, people-based policies include ideas like "moving-to-opportunity" programs that subsidize people in depressed urban areas to relocate to other parts of the urban area. Such programs seem to help young children, but the effects on other age groups are minimal or even negative. Improving the quality of local education is a worthwhile goal, but hard to accomplish. Place-based approaches like "enterprise zones" or "urban renewal projects" sometimes show an effect in the specific area that is targeted for the policy, but even when the effects seem positive for that area, a common finding is that economic activity was just relocated from other nearby areas.

Every city has areas where incomes are lower and social problems are higher, and the geographic location of those areas often seems to remain the same for decades at a time. This experience suggests rather strongly that we don't have any magic-bullet policy tools that will fill these areas with strong local economic activity while also addressing issues like crime. I worry that some cities have a tendency to focus too much on a magic-bullet solution--the single big project or law that will address problems in part of a city. Instead, cities might be better off if they would focus on providing a decent level of public services to all areas of a city: safe public areas and parks, cleaning up garbage, street repair, well-functioning schools, enforcing the housing code, libraries and post offices, and the like. Moreover, a city can play a big role through whether it encourages or discourages people and firms who want to upgrade the base of real estate in a certain area of a city, whether for business or residential uses.

Finally, readers interested in the evolution of urban areas might also want to look back at "Economics of Gentrification" (December 6, 2016).

Tuesday, July 18, 2017

Global Value Chains and Productivity

Production processes have become more likely to cross international borders, thus creating what are called "global value chains" or "global supply chains." Chiara Criscuolo and Jonathan Timmis discuss "The Relationship Between Global Value Chains and Productivity" in the Spring 2017 issue of International Productivity Monitor  (vol. 32, pp. 61-83). I'll start here with a few facts, and then lay out the linkages they discuss. They write: 
"Economies can participate in GVCs [global value chains] by using imported inputs in their exports (the so-called backward linkages in GVC) or by supplying intermediates to third country exports (forward linkages). The overall participation in GVCs which is the total of backward and forward participation differs substantially across countries. Overall participation measure (measured as the sum of backward and forward linkages) reflects the importance of GVCs for an economy, with GVCs accounting for between one-third and two-thirds of gross exports (of goods and services) for OECD economies in 2011 ..."


In the next figure, the blue bars show the growth of global value chains across countries, while the red triangles show the growth rate of exports. The growth of global value chains is consistently faster than growth of exports, although you have to look closely at the figure to see this, because the blue bars are measured on the left-hand axis and the red triangles with the smaller numbers on the right-hand axis.
The bulk of these global value chains are regional: in particular, there is an east Asian cluster of value chains, a European cluster, and a North American cluster. There's some evidence that the growth of these global value chains may have slowed in the last few years, although this is a subject of ongoing research. One possible reason is that "there may also be changes in the structure of global production networks, such as China's domestic upgrading and the reorganization of East Asian value chains, or the shortening of value chains to mitigate supply chain risks and rising labour costs in emerging economies."

How might global value chains affect productivity? Here's a taste of the details on these arguments (with citations omitted here for readability):
"Trade in goods, services and intangible inputs is at the heart of global value chains. The bulk of trade is comprised not of final goods or services, but of trade in intermediate parts and components and intermediate services. Among OECD economies, trade in intermediate inputs accounted for 56 per cent of total goods trade and 73 per cent of services trade over the period 1995-2005. ... GVCs present a new means to access international markets: economies need no longer build complete supply chains at home; instead, they can leverage foreign inputs in their production. The available variety and quality of foreign inputs (capital, labour and intermediates) can positively impact firm productivity. ... A large literature finds productivity gains in firms that directly import these inputs. In addition, foreign competition in the domestic input market may also lead to price reductions or quality improvements for domestic suppliers, benefiting users of domestic inputs too.  ...
"GVCs are a well-established vehicle for productivity spillovers to local firms. A substantial part of GVC integration is mediated through FDI [foreign direct investment], and such multinational enterprises are typically at the global frontier of productivity, innovation and technology. Exposure to the global frontier can provide an opportunity for local firms to increase productivity through learning about advanced technologies or superior organizational and managerial practices. A large literature has investigated FDI spillovers and arrives at a broad consensus in favour of positive productivity spillovers to industries that supply multinationals through backward linkages, with little evidence through other linkages ... 
"Knowledge acquisition is an important motive for FDI, which may increase the scope for knowledge diffusion. Firms may relocate some activities, including innovation activities, to obtain access to so-called strategic assets - skilled workers, technological expertise, or the presence of competitors and suppliers - and learn from their experience . Firms locate in leading edge countries close to the technology frontier, in order to benefit from the diffusion of advanced technologies. In addition, MNE [multinational enterprise] acquisition of foreign firms can lead to a relocation of innovative activities to where they are most efficiently undertaken and increase knowledge diffusion to affiliates within the group. ...

"To participate directly in GVCs requires scale. For the largest, most productive firms that are able to export, access to new customers in foreign markets can not only lead to increased learning and innovation but also incentivize complementary investments and the restructuring of internal processes to meet the additional demand. ... Upscaling may yield productivity gains. The cost of many productivity-enhancing investments, including those concerning GVC participation listed above, is largely fixed. Such investments are only viable for sufficiently large firms that can spread the fixed costs over high sales volumes. Firm upscaling may therefore contribute to productive investments."
One of the preeminent economic problems of our time is slow economic growth. Given that global value chains have expanded rapidly and seem to contribute to growth, policies that would disrupt these production chains should face a high degree of skepticism.

Those who would like some additional background on global value chains and productivity might want to look back at some earlier posts on the subject:

Monday, July 17, 2017

The Pricing Answer to Traffic Congestion

Traffic congestion presents several tricky problems of analysis. One is that the amount of traffic on the road at a peak travel time is not fixed. As Anthony Downs pointed out in his mini-classic 1992 book, Stuck in Traffic, at least some of the travelers who are confronted by congestion will shift their patterns in one of three ways: they will shift the timing of their trip earlier or later, shift the route of their trip (say, from highways to surface roads), or shift the mode of their trip (say, from a single car to mass transit or a carpool). Conversely, when attempts are made to reduce traffic congestion by building more lanes of highway, one result will be that a certain number of those who had previously shifted their timing or route or mode of travel will now shift back to becoming drivers of single-passenger cars, and the gain in reducing congestion will be frustratingly smaller than might have been predicted.

There is certainly a role for a range of technical fixes to traffic problems: more highways, mass transit, coordinating traffic lights, signs warning of congestion, and maybe someday autonomous vehicles platooning in coordinated groups. But ultimately, putting a price on travel during peak-level congested times needs to be part of the answer, too. Brian D. Taylor (no relation) offers a short essay on the subject in "Traffic Congestion Is Counter-Intuitive, and Fixable," ACCESS Magazine, Spring 2017. He writes:
"There are five major views on how to best manage traffic congestion. One view focuses on adding road capacity: wider streets, new traffic lanes, left- and right-turn lanes, more parking, and even new freeways — all of which cost a lot of money. A second view favors putting roads on “diets” and adding capacity elsewhere: improved bus service, more bike lanes, increased building densities to encourage walking, and new rail transit lines — the last of which also costs a lot of money. A third and more cost-conscious view focuses on better management of our existing transportation systems: coordinated signal timing for cars, signal pre-emption for buses, remote coordination of bus and train operations, and freeway service patrols all aim to make our transportation systems operate more effectively. A fourth view is that technology will save the day: traffic-sensitive navigation systems, increasing use of services like Lyft and Uber, and, eventually, fully autonomous vehicles that reduce the need for parking and use road capacity more efficiently. The fifth view is perhaps most favored by transportation experts, but is also generally reviled by the traveling public and the officials they elect: using prices to balance the supply of and demand for travel. ...
[W]hen traffic is crawling along at rush hour, fewer vehicles are getting through than at other times, not more. A typical freeway lane can handle up to 2,000 vehicles per lane per hour, but in really bad traffic that throughput can be cut in half; just when we need the most out of our road system, it performs at its worst. So heavy traffic is not only irritating, it’s also really inefficient. ...
"Road pricing is expanding around the globe. In the US, Los Angeles, Orange County, San Diego, Houston, Minneapolis, and a growing list of other cities have high-occupancy toll (HOT) lanes. The prices on these lanes vary based on congestion levels in the parallel “free” lanes in order to keep traffic flowing smoothly in the toll lanes. The Orange County 91 Express Lanes celebrated their 20th anniversary in 2016, and over 600,000 travelers in Los Angeles have accounts to use the I-110 and I-10 ExpressLanes. The revenues generated have helped to pay for improved public transit service in the ExpressLanes corridors. ...
"Outside the US, London, Milan, Singapore, Stockholm, and several other cities now charge drivers who enter their congested central areas. Chronic bumper-to-bumper traffic disappeared virtually overnight after the charges were introduced in each of these places. Buses now travel much faster and more reliably, the streets are more pleasant for walking and biking, and those who want to pay to drive can do so with few delays. In Stockholm, public support for the central area (or cordon) pricing increased after people saw how well it worked."
The economics of traffic congestion is clear-cut. Those stuck in traffic naturally prefer to think of congestion as caused by everyone else, but everyone in the jam is part of the problem. When you are in a traffic jam, those in front of you in line are imposing costs of time delay on you, and in turn, you are imposing costs of time delay on all of those behind you in line. Those costs are a form of pollution, a "negative externality" as economists call it. If drivers were required by road pricing to pay these peak-load costs that they impose on others when the roads are congested, many of them would find a way to shift the time or route or mode of their commute, or to telecommute at certain times.
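The externality logic can be made concrete with a toy model. All the numbers below are hypothetical, chosen only for illustration: suppose each driver's trip time rises linearly with the number of cars on the road, t(n) = t0 + a*n minutes. Then the marginal driver sees only their own trip time, while the delay they impose on everyone else is n*a minutes in total:

```python
def trip_time(n, t0=20.0, a=0.01):
    """Each driver's trip time (minutes) when n cars share the road:
    a free-flow time t0 plus a per-car congestion slowdown a."""
    return t0 + a * n

def marginal_external_delay(n, a=0.01):
    """Adding one more car slows each of the n existing cars by `a`
    minutes, so the externality is n*a minutes of other people's time."""
    return n * a

# With 1,000 cars already on the road, the next driver's own trip
# takes 30 minutes -- but that driver also imposes 10 extra minutes
# of delay, in total, on everyone behind them.
private_cost = trip_time(1000)
external_cost = marginal_external_delay(1000)
print(private_cost, external_cost)
```

A congestion toll set near the external cost (here, the money value of those 10 minutes) is exactly the "price on travel during peak-level congested times" the essay describes: it confronts each driver with the delay they impose on others, not just the delay they experience.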

Friday, July 14, 2017

Apprenticeships for Early Childhood Education?

Here's the dilemma: On one side, it seems important that those who do early childhood education are well-qualified for the job. After all, one justification for such programs is to help children who would otherwise be lagging behind in kindergarten and first grade to arrive school-ready. A 2015 National Academies report recommended that all lead teachers working with children from birth through age 8 should have at least a four-year bachelor's degree. On the other side, these jobs don't have high pay, and aren't likely to have high pay in the future. Thus, the dilemma is that it doesn't make economic sense to require someone to go through a lengthy and potentially costly training program to qualify for a job that doesn't pay especially well.

Mary Alice McCarthy tackles this question in "Rethinking Credential Requirements in Early Education: Equity-based Strategies for Professionalizing a Vulnerable Workforce," written for the New America think tank (June 2017). Her proposal is that an expansion of paid apprenticeships may be a more functional way of getting high-quality teachers in place for early childhood education. She describes the underlying dilemma this way:
Few people question degree requirements for teachers in elementary schools, including in kindergarten and first grade. Advocates for degree requirements for early educators ask why we would expect anything less for the teachers of our youngest children. If a bachelor’s degree is required to teach a five-year-old, why not a four-year-old? Or a three-year-old? Teachers are teachers, according to this view, and all of them need professional training before they are ready for the classroom. ...
However, the argument that teachers in early childhood centers are the same as teachers in elementary schools and should be held to similar qualification requirements is deeply problematic. Both might be groups of teachers, but they do not represent a single workforce. Just as high school teachers and college faculty both educate, they do so in such different settings and under such distinct expectations that we do not generally think of them as a single workforce. Teachers in early childhood centers operate in a vastly different segment of the labor market than their elementary school peers. The majority work in private settings marked by rules, funding sources, and employer relationships distinct from those of public school teachers. Most importantly, they generally earn significantly lower wages and enjoy far fewer benefits than their counterparts in elementary schools. These two groups of workers are not even represented by the same unions. ...

Degree requirements might change who qualifies for a job as a lead teacher for young children, but they can’t change the underlying realities of the labor market—and that is the real problem with degree requirements in early childhood education and other low-wage occupations. The way the early education market is structured, the costs of any degree requirement will be borne almost entirely by workers who will see little, if any, increase in wages. And college isn’t getting any cheaper. An average associate degree at a two-year public college costs around $9,500 a year. A bachelor’s degree from a four-year public institution costs about $18,600 a year. That is a steep entry price for a profession where hourly wages average less than $10 an hour.
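The arithmetic behind that "steep entry price" is easy to check. A quick sketch, using the tuition figures quoted above and assuming a full-time work year of 2,080 hours (my simplifying assumption, not McCarthy's):

```python
# How many years of gross earnings would it take just to cover tuition?
# Tuition figures are from the passage above; the 2,080-hour work year
# and $10 hourly wage are simplifying assumptions.

BACHELORS_ANNUAL_COST = 18_600   # four-year public institution, per year
DEGREE_YEARS = 4
HOURLY_WAGE = 10.0               # early educators average less than this
HOURS_PER_YEAR = 2_080           # 40 hours/week * 52 weeks

total_tuition = BACHELORS_ANNUAL_COST * DEGREE_YEARS     # $74,400
annual_earnings = HOURLY_WAGE * HOURS_PER_YEAR           # $20,800

# Years of gross pay needed just to cover tuition, ignoring forgone
# earnings while in school (which would make the picture worse):
years_to_recoup = total_tuition / annual_earnings
print(round(years_to_recoup, 1))  # 3.6
```

Roughly three and a half years of gross pay just to cover the tuition, before taxes, living costs, or the wages forgone while studying.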
McCarthy suggests instead a structured two-year apprenticeship program, which would lead to outside evaluations and a certificate of completion, and she points to a pilot program in Philadelphia as evidence that the approach can work. She makes a very strong case.

Requiring a college degree for early childhood education workers is not likely to raise wages for those workers. 
"The working conditions of early educators, meanwhile, are also unlikely to be affected by a degree requirement. The system of funding in the early education field—not the perception of its teachers—is what drives its fragmentation and decentralization. A degree requirement will not make state and local school systems expand the size and scope of their early childhood education centers, where working conditions and pay tend to be better. Nor will it change how federal and state programs channel their funding for early education through a decentralized system of public and private early education centers. Unless those funding sources, particularly the public programs, move toward more school-based provision of early education, there is little reason to expect a degree requirement will spark a recalibration of the early education market."
A college degree is an inefficient way to learn the specific job skills needed for working with very young children.
"A bachelor’s degree is a very time-consuming credential to earn. It is also a remarkably inefficient way to equip early educators with the knowledge, skills, and competencies outlined in the National Academies report and identified by key stakeholders like the Council for the Accreditation of Educator Preparation (CAEP) and NAEYC [the National Association for the Education of Young Children] ..."
Requiring a college degree for early childhood education would tend to make those working in these positions younger, whiter, and more likely to come from higher-income families.
"In other words, we can expect that the workforce will become more stratified along race, income, and age. Early childhood educators holding degrees are more likely to be young and white, and educators without degrees more likely to be older and from communities of color. That is the case for our elementary teaching workforce, which is more than 80 percent white. A bachelor’s degree requirement has the potential to reduce the likelihood that children from low-income and racially diverse backgrounds will have teachers from their communities."
I agree with the broad direction of McCarthy's proposal, but the application of this insight extends well beyond workers in early childhood education. A substantial number of those who graduate from high school have no reason to view additional academic classwork with anticipation or enjoyment. Throughout their K-12 school careers, they have mostly been in the bottom half, or bottom quarter, of the academic distribution. If a job requires additional years of classroom study, it will appear to them as a heavy burden and a strong discouragement. For many of these students, a well-structured rigorous learning-on-the-job program will be a more attractive option. Our economy needs more alternative career pathways that don't require piling up academic degrees as a starting point.


Thursday, July 13, 2017

The Ingredients for a US Productivity Revival

It's easy to find gloomy predictions for continued slow growth of the US economy. Thus, Lee Branstetter and Daniel Sichel caught my eye with their essay, "The Case for an American Productivity Revival," written as a "Policy Brief" for the Peterson Institute for International Economics (June 2017, PB17-26). Here's how they start:

Labor productivity performance in the United States has been dismal for more than a decade. But productivity slowdowns—even lengthy ones—are nothing new in US economic history. This Policy Brief makes the case that the current slowdown will come to an end as a new productivity revival takes hold.  
Why the optimism? Official price indexes indicate that innovation in the technology sector has slowed to a crawl, but better data indicate rapid progress. Standard measures, focused on physical capital, suggest that business investment is weak, but broader measures of investment that incorporate intellectual and organizational capital report much more robust investment. New technological opportunities in healthcare, robotics, education, and the technology of invention itself provide additional reasons for optimism. This Policy Brief gauges the potential productivity impact of these developments. The evidence points to a likely revival of US labor productivity growth from the 0.5 percent average rate registered since 2010 to a pace of 2 percent or more. A productivity revival of this magnitude would provide a solid foundation for steady increases in wages ... 

The essay spells out details behind these claims.  Here are a few of the comments that caught my eye.

Past methods for adjusting for the improved quality of microprocessors may not be working well at capturing changes in the last decade or so. 

Before the mid-2000s, the posted prices of MPUs tended to fall as newer models were introduced. This price trajectory allowed a standard methodology used for semiconductors in the producer price index (matched-model indexes) to capture quality change through the rapid price declines of older models. Since the mid-2000s, posted prices of Intel MPUs have tended to remain stable, even after the introduction of newer, more powerful models. Reflecting these relatively flat price profiles, a matched-model index will indicate little change in quality-adjusted prices even if the quality of each newly introduced model is much greater than its predecessor. The new price measure Byrne, Oliner, and Sichel developed (a hedonic index) more fully captures ongoing quality change and reveals rapid price declines after this quality change is taken into account.

This evidence on faster price declines indicates that innovation and multifactor productivity growth in semiconductors—the general-purpose technology behind much of the digital revolution—has been far more rapid than official indexes suggest.
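The measurement problem the authors describe can be shown with a stylized calculation (the prices and performance figures below are hypothetical, not actual BLS or Intel data): if each chip generation sells at the same posted price but performs better, a matched-model comparison registers zero price change, while the price per unit of performance falls sharply.

```python
# Stylized matched-model vs. quality-adjusted price comparison.
# Hypothetical data: three chip generations, each with the same $300
# posted price, each twice as fast as its predecessor.

generations = [
    {"model": "gen1", "price": 300.0, "performance": 1.0},
    {"model": "gen2", "price": 300.0, "performance": 2.0},
    {"model": "gen3", "price": 300.0, "performance": 4.0},
]

# Matched-model logic compares posted prices across periods. With flat
# posted prices, the measured price change is zero.
matched_model_change = generations[-1]["price"] / generations[0]["price"] - 1.0

# A quality-adjusted view tracks price per unit of performance -- the
# idea a hedonic index tries to capture more rigorously via regression
# on product characteristics.
price_per_perf = [g["price"] / g["performance"] for g in generations]
quality_adjusted_change = price_per_perf[-1] / price_per_perf[0] - 1.0

print(matched_model_change)     # 0.0   -- looks like no progress
print(quality_adjusted_change)  # -0.75 -- a 75% quality-adjusted decline
```

A real hedonic index regresses prices on a vector of characteristics rather than dividing by a single performance number, but the direction of the bias from flat posted prices is the same.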

Although conventional tangible business investment is down as a share of GDP, intangible investment is on the rise.

In this figure, the light blue line shows tangible business investment, and the well-known pattern of overall decline (as a share of GDP) since the 1970s. The dark blue line shows the official US government statistical measure of "intangible investment," which includes "software, scientific R&D, mineral exploration, and the development of entertainment products." The dashed red line shows a broader version of intangible investment that includes both the official measures and also "nonscientific product development, brand equity, training, and organizational capital." As the authors write (footnotes and citations omitted):
In fact, the overall investment share of both tangible and all intangible capital has been relatively stable since the late 1970s. This conclusion is not surprising in an economy in which the newest technical capabilities and products rely at least as much on intangible capital as on tangible capital. This feature surely characterizes leading companies such as Google, Amazon, Facebook, and Microsoft. Even industrial companies like GE are increasingly investing in big data, predictive analytics, and machine learning.

In short, Branstetter and Sichel believe that the official statistics are understating both current productivity gains as well as the investments that firms are making for the future.  They then note: "Four developments have the potential to contribute to faster productivity growth in the United States: improvements in the healthcare system, increasing use of robots, a revolution in e-learning, and globalization of invention." They further argue that these changes can be supported by some mostly well-known policies: for example,
  • "robust federal investment in basic science"
  • "immigrant scientists and entrepreneurs play a disproportionate role in driving the technological advances that power productivity growth in the United States"
  • "globalization of invention presupposes the continuation of an open global trading and investment system supported by the United States"
  • "a public agency or public-private partnership that could certify the efficacy of new educational technologies in the same way the Food and Drug Administration (FDA) certifies the safety and efficacy of new drugs, by supervising rigorous, randomized control trials. Modest policy effort in this direction could yield rich dividends in the form of much faster, more cost-effective human capital formation."
I would add that economic growth is by its nature a disruptive process, and part of embracing this disruption is to find ways for both its benefits and costs to be widely shared. The authors conclude: "A standard productivity growth accounting framework captures these factors to highlight how a significant revival of productivity growth could emerge, especially in the medium to long run. A pace of 2¼ percent a year is eminently plausible—and there are solid reasons to hope for even more rapid productivity growth."
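To see why the gap between 0.5 percent and roughly 2.25 percent annual productivity growth matters so much, compound each rate over a couple of decades (my own illustrative arithmetic, not a calculation from the brief):

```python
# Compound the slow post-2010 rate and the predicted revival rate over
# 20 years. Illustrative arithmetic only.

def compound_growth(annual_rate, years):
    """Cumulative growth factor after `years` of steady annual growth."""
    return (1 + annual_rate) ** years

slow = compound_growth(0.005, 20)    # 0.5% a year, the post-2010 pace
fast = compound_growth(0.0225, 20)   # 2.25% a year, the predicted revival

print(f"slow: {slow:.2f}x, fast: {fast:.2f}x")
# slow: 1.10x, fast: 1.56x -- output per hour up ~10% vs ~56%
```

Over a working generation, that is the difference between barely perceptible gains in living standards and a rise of more than half.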

Wednesday, July 12, 2017

Tally Sticks and the Fundamentals of Money

Do you know the true story of how a careless clerk, while burning up some outdated money, caused the Great Fire of 1834, which destroyed both Houses of the British Parliament along with large portions of the Palace of Westminster? Tim Harford tells the tale in "What tally sticks tell us about how money works" (BBC News, July 10, 2017).

In this case, the outdated money takes the form of "tally sticks." As Harford explains:
The artefacts in question were humble sticks of willow, about eight inches (20cm) long, called Exchequer tallies. The willow was harvested along the banks of the Thames, not far from the Palace of Westminster in central London. ... Tallies were a way of recording debts with a system that was sublimely simple and effective. The stick would contain a record of the debt, for example: "£9 4s 4d from Fulk Basset for the farm of Wycombe". Fulk Basset was a Bishop of London in the 13th Century. He owed his debt to King Henry III.
Now comes the elegant part. The stick would be split in half, down its length from one end to the other. The debtor would retain half, called the "foil". The creditor would retain the other half, called the "stock" - even today, British bankers use the word "stocks" to refer to debts of the British government. Because willow has a natural and distinctive grain, the two halves would match only each other.
Of course, the Treasury could simply have kept a record of these transactions in a ledger somewhere. But the tally stick system enabled something radical to occur. If you had a tally stock showing that Bishop Basset owed you £5, then unless you worried that he wasn't good for the money, the tally stock itself was worth close to £5 in its own right.
If you wanted to buy something, you might well find that the seller would be pleased to accept the tally stock as a safe and convenient form of payment. So the tally sticks themselves became a kind of money, a particular sort of debt that could be traded freely, circulating from person to person until it utterly separated from Bishop Basset and a farm in Wycombe.

Here are a few pictures of tally sticks: the first is from Harford's article, and shows accounts of the bailiff of Ralph de Manton of Ufford Church in Northampton, circa 1299. The second is from a short article by Frank J. Swetz and Victor J. Katz, "Mathematical Treasures - English tally sticks," Convergence (January 2011). The inscription on that tally stick shows the name of William de Costello, who was Sheriff of London in 1296.


Medieval tally sticks, circa 1299



Large stick with name engraved

Of course, making marks on a stick or bone to keep a record of days or lunar cycles or livestock dates back millennia. The specific idea of tally sticks to keep track of debt dates back hundreds of years; in one telling, they were invented by King Henry the First, son of William the Conqueror, when he became King in 1100 AD.  Moreover, when a tally stick was split, it was divided into two different lengths, with the debtor literally receiving the short end of the stick.

Tally sticks were a highly successful monetary innovation, in the sense that they were used for centuries. They offer a demonstration of the idea that money, at its root, is something that can be stored and then accepted for purchases: as the textbooks say, money is a medium of exchange, a store of value, and a unit of account. They also demonstrate that money can be created by private actors, although textbooks typically emphasize the role of banks in creating money in the modern economy.

The punchline to the story of the tally sticks happens in 1834. Tally sticks had fallen out of use, and apparently there were no historians or museum curators nearby, so a decision was made to burn several large cartloads of remaining tally sticks. Thanks to a "senile Housekeeper and careless Clerk of Works," the fire got loose and roared through the Palace of Westminster, destroying the House of Commons and the House of Lords. It was the largest fire in London between the Great Fire of 1666 and the devastation of the Blitz during World War II.

Tuesday, July 11, 2017

Arm's-Length International Trade

"Global value chains often involve numerous cross-border operations, conducted either `intra-firm,' that is, between firms related through ownership or control, or between unaffiliated firms at `arm’s-length.'" The World Bank explains this difference in "Arm’s-Length Trade: A Source of Post-Crisis Trade Weakness," which appears as a "Special Focus" chapter in Global Economic Prospects: A Fragile Recovery (June 2017). The chapter begins:
"Trade growth has slowed sharply since the global financial crisis. Based on U.S. trade data, arm’s-length trade—trade between unaffiliated firms—accounts disproportionately for the overall post-crisis trade slowdown. This is partly because arm’s-length trade depends more heavily than intra-firm trade on sectors with rapid pre-crisis growth that boosted arm’s length trade pre-crisis but that have languished post-crisis, and on emerging market and developing economies (EMDEs), where output growth has slowed sharply from elevated pre-crisis rates. Unaffiliated firms may also have been hindered more than multinational firms by constrained access to finance during the crisis, a greater sensitivity to adverse income and exchange rate movements, heightened policy uncertainty, and their typical firm-level characteristics."
Here are the growth rates of US imports and exports for arm's-length and intra-firm trade in recent years. Detailed international data with a similar breakdown isn't available, although a similar pattern seems to hold in the research that has been done.


An extraordinarily high share of US trade happens through a relatively small number of firms. For example, Andrew B. Bernard, J. Bradford Jensen, Stephen J. Redding, and Peter K. Schott explained this point 10 years ago in "Firms in International Trade," in the Summer 2007 issue of the Journal of Economic Perspectives: "Yet engaging in international trade is an exceedingly rare activity: of the 5.5 million firms operating in the United States in 2000, just 4 percent were exporters. Among these exporting firms, the top 10 percent accounted for 96 percent of total U.S. exports."
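The concentration those figures imply is striking once you multiply them out (the calculation below simply restates the quoted percentages):

```python
# Multiply out the Bernard, Jensen, Redding, and Schott figures:
# 5.5 million US firms in 2000, 4% of them exporting, and the top 10%
# of exporters accounting for 96% of export value.

total_firms = 5_500_000
exporters = total_firms * 4 // 100     # 4% were exporters: 220,000 firms
top_decile = exporters * 10 // 100     # top 10% of exporters: 22,000 firms

share_of_all_firms = top_decile / total_firms
print(exporters, top_decile)
print(f"{share_of_all_firms:.2%} of all US firms shipped 96% of exports")
```

In other words, on these figures about 22,000 firms, four-tenths of one percent of all US firms, accounted for 96 percent of US exports.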

These big players in international trade face a choice: import and export from affiliated firms, which often own pieces of each other, or import and export from unaffiliated firms on an arm's-length basis. The obvious advantage of the first approach is that when operating across national borders, there are likely to be conflicts and issues about pricing, costs, timeliness, quality, transfers of technology and resources, and more. Addressing such issues on an ongoing basis with an affiliated firm may be more streamlined and easier than, say, trying to sue some other firm in the courts of its home country. The World Bank report notes (citations omitted):
"In practice, multinationals employ intra-firm and arm’s-length transactions to varying degrees. In 2015, intra-firm transactions are estimated to have accounted for about one-third of global exports. Vertically integrated multinational companies, such as Samsung Electronics, Nokia, and Intel, trade primarily intrafirm. Samsung, the world’s biggest communications equipment multinational, has 158 subsidiaries across the world, including 43 subsidiaries in Europe, 32 in China and 30 in North and South America. Other multinationals, such as Apple, Motorola, and Nike, rely mainly on outsourcing, and hence on arm’s-length trade with non-affiliated suppliers."
In a time when international trade faces a relatively high degree of suspicion, it's useful to be clear on just what is involved. When it comes to trading raw materials like energy or metals, or basic manufactured goods like textiles, or exporting to emerging markets, then arm's length trade is common. But a lot of the prominent, productive, and dynamic firms in the US and around the world are tied into international networks of intra-firm trade and global supply chains. If US policymakers put US firms at a disadvantage in accessing and using those global supply chains, those US firms will be at a disadvantage against their global competitors.