Friday, March 6, 2015

US Dependency Ratios, Looking Ahead

In the lingo of demographers and economists, the "dependency ratio" captures the fact that the working-age population from ages 18-64 produces most of the output in any economy, while a certain amount of the consumption is done by those under 18 and those over 65. Thus, there is an "old-age dependency ratio," which is the population 65 and older divided by the population from 18-64; a "youth dependency ratio," which is the under-18 population divided by the population from 18-64; and a "total dependency ratio," which is the sum of the under-18 and 65-and-over populations, divided by the 18-64 population.
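These definitions are simple ratios, and a short sketch makes the relationships concrete. The population counts below are round, made-up numbers chosen only because they land near the current U.S. ratios discussed below; the actual figures come from the Census projections.

```python
# Sketch of the three dependency ratios. The population counts are
# illustrative round numbers (millions), not the Census Bureau's figures.

def dependency_ratios(under_18, age_18_64, age_65_plus):
    """Return (youth, old_age, total) dependency ratios as fractions.

    All three ratios share the same denominator: the 18-64 population.
    """
    youth = under_18 / age_18_64
    old_age = age_65_plus / age_18_64
    total = (under_18 + age_65_plus) / age_18_64
    return youth, old_age, total

# Hypothetical population: 74 million under 18, 197 million aged 18-64,
# 46 million aged 65 and over.
youth, old_age, total = dependency_ratios(74, 197, 46)
print(f"youth: {youth:.1%}, old-age: {old_age:.1%}, total: {total:.1%}")
```

Because all three ratios share the 18-64 population as their denominator, the total dependency ratio is always exactly the sum of the youth and old-age ratios.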

Sandra L. Colby and Jennifer M. Ortman from the US Census Bureau offer some projections about dependency ratios in the March 2015 report "Projections of the Size and Composition of the U.S. Population: 2014 to 2060" (P25-1143).

As the figure shows, the youth dependency ratio is expected to hover around 35%--in fact, to decline a bit--in the decades to come. However, the old-age dependency ratio is on the rise. It's now about 23%, but by 2035 it will be up to about 38%. Taking the two ratios together, the under-18 population plus the 65-and-over population is now about 60% of the size of the 18-64 population, but the ratio is headed for about 75% in the next two decades.

It's worth emphasizing that the old-age dependency ratio for a couple of decades in the figure can be estimated with a pretty high degree of accuracy. After all, anyone who is going to be 21 or older in 2035 has already been born. Only large fluctuations in death rates or immigration rates could move the old-age dependency ratio substantially.

The report also includes a breakdown of the growth of population by age that helps to clarify what is happening behind these ratios. By 2040, the under-18 population is projected to rise by a total of 5%; the 18-44 population by 12%; the 45-64 population by 10%; and the 65 and older population by 78%.
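These growth rates offer a rough check on the rising old-age dependency ratio: the ratio scales by the relative growth of its numerator and denominator. The 11% blended growth figure for the 18-64 group below is my own back-of-the-envelope assumption combining the 12% and 10% figures above, not a number from the report.

```python
# Back-of-the-envelope: if the 65+ population grows 78% while the 18-64
# population grows roughly 11% (my rough blend of the 12% and 10% figures),
# the old-age dependency ratio scales by the ratio of the growth factors.
current_ratio = 0.23     # old-age dependency ratio today (approx.)
growth_65_plus = 1.78    # 65-and-over population grows 78%
growth_18_64 = 1.11      # 18-64 population grows ~11% (assumed blend)

projected_ratio = current_ratio * growth_65_plus / growth_18_64
print(f"projected old-age dependency ratio: {projected_ratio:.0%}")
# -> about 37%, close to the roughly 38% the report projects
```

The crude scaling lands close to the projected figure, which is reassuring: the rise in the ratio is driven almost entirely by the 65-and-over population growing about seven times faster than the working-age population.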
Most of the rise in the old-age dependency ratio happens by the early 2030s. Thus, one can think about the next two decades as a time of transition: transition in public policies affecting the elderly like Social Security and Medicare; transition in work patterns as we seek to encourage at least some of the elderly to stay in the workforce longer; transition in how we think about the design of public services and facilities everywhere from hotel rooms to park trails for a population with a larger share of the elderly; and transition in how we start building systems that can support families and communities in providing assistance and care for the elderly who need it.

Thursday, March 5, 2015

Snapshots of US Agriculture

An extraordinary shift happened in the US agricultural sector during the last century or so. Robert A. Hoppe lays out the facts in his report "Structure and Finances of U.S. Farms: Family Farm Report, 2014 Edition," written as Economic Information Bulletin Number 132, December 2014, for the U.S. Department of Agriculture. Indeed, when I hear arguments about how difficult (impossible?) it will be for the US workforce to adjust to the coming waves of technology, my thoughts quickly jump to the shift in agriculture.

For example, back around 1910, about one-third of all US workers were in agriculture (blue line, measured on the right-hand scale). It's now about 2%. The absolute number of jobs in agriculture declined, too, but the big change was that more than 100% of the job growth in the U.S. was in the non-agricultural sector. I haven't researched the point, but my guess is that many people around 1910 would have viewed these changes as somewhere between impossible and inconceivable.

The total acreage operated by US farms has barely budged in the last half-century. But as agricultural productivity steadily rose, the number of farms sharply declined, especially during the half-century from about 1930 to 1980.

In the current U.S. farm sector, about 90% are small farms, with less than $350,000 per year in "gross cash farm income" (this is the revenue for the farm before subtracting expenses, not the income to the farmer). These small farms represent about half the land operated, and one-quarter of the total value of production.

When one looks across various commodities, the share of small farms is bigger in some (poultry, hay, livestock) than others (dairy, cotton). But interestingly enough, a substantial share of production in each area still comes from small and medium firms, not just from large ones--although average profits are smaller on small farms.

This ability of small and medium firms to compete with larger firms means that although farm sizes are growing over time, large firms do not have a dramatic cost advantage over smaller ones--at least in a number of crops. As Hoppe notes:
"Extensive economies of scale do not exist in farming. Most cost reductions can be attained at a relatively small business size, compared with other industries, even though farming tends to be capital intensive in the United States. ... Crop production requires local knowledge of soils, pests, and weather while livestock production requires knowledge of livestock and how they respond to local conditions. This knowledge takes time to acquire and is not easily transferred to others."

In the Winter 2014 issue of the Journal of Economic Perspectives, Daniel A. Sumner explores various explanations for the growth of farm size in "American Farms Keep Growing: Size, Production, and Policy." (Full disclosure: I've been Managing Editor of the JEP since the first issue in 1987. All JEP articles back to the first issue are freely available online courtesy of the American Economic Association.) Sumner focuses on the interaction of managerial capability and agricultural technology in leading to larger farms. He wrote:
Changes in farm size distributions and growth of farms seems closely related to technological innovations, managerial capability, and productivity. Opportunities for competitive returns from investing financial and human capital in farming hinge on applying managerial capability to an operation large enough to provide sufficient payoff. Farms with better managers grow, and these managers take better advantage of innovations in technology, which themselves require more technical and managerial sophistication. Farms now routinely use outside consultants for technological services such as animal health and nutrition, calibration and timing of fertilizers and pesticides, and accounting. The result is higher productivity, especially in reducing labor and land per unit of output. Under this scenario, agricultural research leads to technology that pays off most to more-capable managers who operate larger farms that have lower costs and higher productivity. The result is reinforcing productivity improvements.

Wednesday, March 4, 2015

How Higher Education Perpetuates Intergenerational Inequality

Part of the mythology of US higher education is that it offers a meritocracy, along with a lot of second chances, so that smart and hard-working students of all backgrounds have a genuine chance to succeed--no matter their family income. But the data certainly seem to suggest that family income has a lot to do with whether a student will attend college in the first place, and even more to do with whether a student will obtain a four-year college degree.

Margaret Cahalan and Laura Perna provide an overview of the evidence in "2015 Indicators of Higher Education Equity in the United States: 45 Year Trend Report," published by the Pell Institute for the Study of Opportunity in Higher Education and the University of Pennsylvania Alliance for Higher Education and Democracy (PennAHEAD).

As a starting point, consider what share of high school graduates, age 18-24, are enrolled in college of any type (two-year or four-year; public, private, or for-profit). The gap between the top quarter of the income distribution and the bottom quarter has narrowed a bit in the last 45 years (from 33 percentage points to 27 percentage points), but it remains substantial. Of course, if one took into account the fact that students whose families are in the bottom quarter of the income distribution are less likely to become high school graduates, the gap would be wider still.

Given this background, it's not surprising that those from the top quarter of the income distribution are more likely to have a bachelor's degree by age 24. Indeed, the share of those completing a bachelor's degree by age 24 has risen substantially for students from families in the top quarter of the income distribution, and barely budged for those in the bottom two quarters.

What if we focus just on those who actually entered college? It turns out that if you are someone from a family in the top quarter of the income distribution who enters college, you are extremely likely to complete a bachelor's degree by age 24; if you are in the bottom quarter of the income distribution, you have only about a 21% chance of having a bachelor's degree by age 24. (Frankly, I don't trust that most recent estimate of 99%. It just can't be true that almost all of those who start a bachelor's degree in the top quarter of the income distribution finish it. Here's another skeptic. But I do believe that the gap is a substantial one.)

The report offers a range of evidence that the affordability of college is a bigger problem for students from low-income families even after taking financial aid into account. Students from low-income families take out more debt, and are more likely to attend for-profit colleges. Indeed, a general pattern for higher education as a whole is that even as the cost of attending has risen, the share of the cost paid by households, rather than by the state or federal government, has been rising.
The effects of these patterns on inequality of incomes in the United States are clearcut: higher-income families are better able to provide financial and other kinds of support for their children--as they grow up, when it comes time to attend college, and when it comes time to find a job after college. In this way, higher education has become a central part of the process by which high-income families can seek to assure that their children are more likely to have high incomes, too.

This connection is perhaps underappreciated. After all, it's a lot easier for professors and college students to protest high levels of compensation for the top professionals in finance, law, and the corporate world who are in the top 1% of the income distribution, rather than to face the idea that their own institutions of higher education are implicated in perpetuating inequality of incomes across generations. Here's some discussion bearing on the point from "Human Capital in the 21st Century," by Alan B. Krueger, appearing in the First Quarter 2015 issue of the Milken Institute Review. Krueger writes:

Moreover, changes in earnings associated with different levels of education – that is, human capital – have played an outsized role in raising inequality among the bottom 99 percent of Americans. 
Consider the following hypothetical calculation. If the top 1 percent's share of income had remained constant at its 1979 level, and all of the increase in share that actually went to the top 1 percent were redistributed to the bottom 99 percent – a feat that might or might not have been achievable without shrinking the total size of the pie – then each family in the bottom 99 percent would have gained about $7,000 in annual income (in today's dollars). That is not an insignificant sum. But contrast it with the magnitude of the income premium associated with educational achievement: The earnings gap between the median household headed by a college graduate and the median household headed by a high school graduate rose by $20,400 between 1979 and 2013 according to my calculations based on the Bureau of Labor Statistics' Current Population Survey. This shift – which took place entirely within the bottom 99 percent – is three times as great as the shift that has taken place from the bottom 99 percent to the top 1 percent in the same time frame. What's worse, there are reasons to believe that the enormous rise in inequality that we have experienced will reduce intergenerational economic mobility and cause inequality to rise further in the future. ...
If the return to education increases over time, and higher-income parents are more prone to invest in the education of their children than lower-income parents – or if talents are inherited from one generation to the next – then the gap between children of higher- and lower-income families would be expected to grow with time. Furthermore, if social networking and family connections also have an important impact on outcomes in the job market, and those connections are transmitted across generations, one would expect the ... effect to be even stronger. There are, indeed, signs that the rise in income inequality in the United States since the late 1970s has been undermining equality of opportunity. For example, the gap in participation in extracurricular activities between children of advantaged and disadvantaged parents has grown since the 1980s, as has the gap in parental spending on educational enrichment activities. Furthermore, the gap in educational attainment between children born to high- and low-income parents has widened. The rising gap in opportunities between children of low- and high-income families does not bode well for the future.

Tuesday, March 3, 2015

ATMs and a Rising Number of Bank Tellers?

The first US bank to install an automated teller machine (ATM) was a branch of Chemical Bank on Long Island in 1969. After relatively slow growth during the 1970s, there were about 100,000 ATMs across the US by 1990, a total that has now risen to about 400,000. So here's the question: During the rise of ATMs in US banking, did the number of bank tellers rise or fall?

I would have guessed "fall," and I'm not alone. In a June 14, 2011, interview, President Obama used ATMs as an example of technology displacing labor. He said (I've added punctuation to the raw transcript):
There are some structural issues with our economy where a lot of businesses have learned to become much more efficient with a lot fewer workers. You see it when you go to a bank and you use an ATM, you don't go to a bank teller, or you go to the airport and you're using a kiosk instead of checking in at the gate. 
However, James Bessen collected the actual data on ATMs and bank tellers from an array of scattered sources. Overall, the story is that as ATMs arrived, the number of bank tellers held steady and even rose slightly. Bessen discusses the interaction between technology and employment in "Toil and Technology," in the March 2015 issue of Finance & Development. Here is Bessen's figure showing the rise of ATMs and the number of tellers employed.

Why did the number of bank tellers rise even as ATMs became prevalent? Bessen highlights two changes. One major change was the opening of more bank branches. Bessen points out that a branch could now operate with fewer bank tellers than before; in addition, I'd add that many states relaxed their rules during the 1980s and 1990s in particular, allowing banks to open more branches both within and across states. The other major change was that the job of a teller changed. Banks began to offer more services, and tellers evolved from being people who put checks in one drawer and handed out cash from another drawer to people who solved a variety of financial problems for customers.

The broader point, of course, is that looking at how technology can substitute for a certain job is only one part of the analysis. Other parts include how regulations that affect that industry and area of employment are changing, and how new technology may cause jobs to evolve and shift in ways that benefit workers. Bessen argues that the main problem is not the "end of work," but instead that many workers have a difficult time obtaining the skills they need so that their work can complement the new waves of technology as they arrive. As a result, we observe stagnant wages for many workers who have been unable to update their skills as needed, much higher wages for those who have the new skills (which contributes to wage inequality), and employers who complain that not enough workers already have the skills the employer wants. As Bessen writes:

New information technologies do pose a problem for the economy. To date, however, that problem is not massive technological unemployment. It is a problem of stagnant wages for ordinary workers and skill shortages for employers. Workers are being displaced to jobs requiring new skills rather than being replaced entirely. This problem, nevertheless, is quite real: technology has heightened economic inequality. ... The information technology revolution may well be accelerating. Artificial intelligence software will give computers dramatic new capabilities over the coming years, potentially taking over job tasks in hundreds of occupations. But that progress is not cause for despair about the “end of work.” Instead, it is all the more reason to focus on policies that will help large numbers of workers acquire the knowledge and skills necessary to work with this new technology.

Monday, March 2, 2015

Six Reasons Why Economists Should Say Less About "Competition"

A short essay of mine titled "The Blurry Line Between Competition and Cooperation" was published a month ago at the Library of Economics and Liberty website. I argued that the rule-based competition in economic markets is inextricably intermingled with cooperative behavior. Paul H. Rubin takes a stronger position in his 2013 Presidential Address to the Southern Economic Association, titled "Emporîophobia (Fear of Markets): Cooperation or Competition?" It was published in the April 2014 issue of the Southern Economic Journal (80:4, pp. 875-889). Many readers will have access to the Southern Economic Journal through a library or personal subscription, but a version of the paper is also available on SSRN here.

Rubin's argument is that both "competition" and "cooperation" are used in a metaphorical sense when discussing markets. He makes the case that cooperation is not only the metaphor with more positive connotations when explaining or defending markets to noneconomists, but also that "competition" is a poor metaphor for describing economic actions and decisions, and how the economy works. In one section of the paper, he offers six reasons why economists, in the name of accuracy, should stop referring to competition. Here is a sampling.

"First, there is no economic act that is itself competitive." 

Rubin writes: "In their economic lives, people produce goods and services and exchange these goods and services for others. Both the production of goods and the exchange of goods for other goods are cooperative acts. There is no competition in these actions. The motive for some acts may be competitive, but the actions themselves are cooperative. ... Unless an agent is willing to engage in illegal actions (for example, burning a competitor's factory) or willing to go outside the market (e.g., complaining to the Federal Trade Commission about a competitor), any competitive act is actually performed through cooperative behavior."

"Second, the prototypical economy, the purely competitive economy, involves no competition."

Perfect competition as taught in the textbooks is made up of "price-takers" selling identical products, who can sell their complete output at a market price that they cannot affect. Indeed, one correspondent to my earlier piece pointed out that farmers, who are often viewed as in a real-world situation similar to the textbook version of perfect competition, often do not view themselves as competing with their neighbors, and instead often stand ready to share the risks and fixed costs of farming by helping neighbors where possible.

"Third, in other market structures acts may sometimes be viewed as competitive, but not always."

What about market structures that are not perfect competition? Rubin writes: "There may be competition to become the monopolist, but this is either competition through being a better cooperator or political competition, for example, by lobbying for exclusive licenses. ... Again, motives may be competitive but the actions themselves are cooperative."

"Fourth, principles of cooperation (through specialization and division of labor) are at least as important to economists as competition."

"Adam Smith is the father of competitive analysis. But he is also the father of cooperative analysis. Specialization is the mother of cooperation. The pin factory is a masterful analysis of cooperation. Somehow we economists have made the competitive analysis in Smith the basis for our discipline and have made cooperation into something of a stepchild."

"Fifth, competition is a tool, not the end purpose of the economy."

"The purpose of an economy is to generate consumer surplus, which occurs through cooperative acts such as transactions and exchanges. Competition is a powerful tool for improving the functioning of transactions by making sure that in each case the transactors are the best possible partners and that transactions take place on the best possible terms. That is the purpose of competition. In other words, the competition that occurs in an economy is competition for the right to cooperate. The gain comes from the cooperation, not from the competition. Of course, competition is essential, since it leads to the optimum terms for cooperation and selects the best parties to cooperate, but nonetheless competition is a tool whose function is to facilitate cooperation. Society is willing to tolerate markets because of their cooperative benefits, not because they are competitive."

"Sixth, competition is ubiquitous in human interactions, and so competition is not a way of distinguishing market economies from other economies."

"Economies based on custom also have competition. For example, more successful hunters in a hunter-gatherer economy reap benefits, including access to women. In an exploitive economy success may be measured by exploiting the population and rising through the oppressive hierarchy. This is much more "competitive" than the path to success in a market economy. The unique feature of an economy organized through markets is that the competition that exists is competition for the right to cooperate, but it is the cooperation that is the defining feature of the market economy."

Ultimately, Rubin's argument is that "emporîophobia," his term for "fear of markets," would be reduced if economists put the language of cooperation front and center in their vocabularies.  Here's Rubin's example of how economists might talk about Wal-Mart:
If we focus on competition rather than cooperation, then we think of winners and losers. We feel sorry for the losers and may view the winners as cheaters. At the least, there is a tendency to favor underdogs and the losers from competition may be viewed as underdogs. We may also believe that a world with winners and losers is in some sense unfair. By our emphasis on competition, economists must take some blame for this error. But if we think about cooperation, then the losers are those who are less successful at cooperating. Wal-Mart succeeds not because it has beat up its rivals and driven them out of business. It succeeds because it has done a better job of cooperating with consumers, by offering them stuff they want at the lowest possible prices. Of course, economists know this, but since non-economists begin with the competition model, economists must be defensive and try to dissuade citizens of their prior beliefs. If the default way of thinking was cooperation, then the critics of markets would be on the defensive.
I'm not fully persuaded by Rubin's argument, in large part because I agree with a clause in the preceding paragraph that "non-economists begin with the competition model." As long as this is true, economists who speak too purified a language of cooperation are in real danger of sounding out-of-touch. Also, economists must then immediately confront the problem that bargaining positions in the economy are not always the same, and the "cooperation" of a minimum-wage worker taking what feels like the only available part-time job before the monthly rent becomes due doesn't look quite the same as the "cooperation" of a chief executive officer receiving a large annual bonus.

But precisely because "non-economists begin with the competition model," it is useful for economists to be concrete about the very specific sense in which they use the term "competition." After all, having many firms "competing" to offer a mixture of prices and qualities that consumers prefer is quite a bit different from having firms "competing" to defraud customers. And in many economic contexts, the form of competition of which free-market economists speak approvingly quickly shades into cooperative behaviors.

Saturday, February 28, 2015

Buy Back Shares or Invest?

Much of this week I've been posting figures and snippets of analysis from the 2015 Economic Report of the President, written by the Council of Economic Advisers. Here's one more. Companies that have additional cash in hand after paying their expenses and dividends have several choices: they can use the funds to invest in increasing output or improving efficiency, or they can use the funds to buy back the company's own shares. Here's the pattern between these two choices over time. (Firms can also get funds for investment from other sources, like issuing bonds, so the two lines in the figure below don't need to sum to 100%.)

And here's the explanation from the Council of Economic Advisers:
Nonfinancial corporations spent a lower-than-average share of their internal funds (also known as cash flow) on investment during 2011 to 2013 (see Figure 2-25). Instead, these corporations used a good part of those funds to buy back shares from their stockholders. Share buybacks are similar to dividends insofar as they are a way for corporations to return value to shareholders. They differ, however, with regard to permanence: whereas dividend changes tend to persist, share buybacks are one-time events. (When firms raise investment funds by issuing new equity, the nonfinancial sector aggregate of share buybacks in the figures can be negative, as was common in the 1950s and 1960s.) The decline in the invested share of internal funds from 2011 to 2013, together with the rise in share buybacks, suggests that firms had more internal funds than they thought they could profitably invest. As can be seen in Figure 2-25, the investment outlook appears to have improved in 2014, and the investment share of internal funds has rebounded to near its historical average. Share buybacks, however, remain high.
I'll only add that one of the major conundrums for the U.S. economy during the slow recovery since the Great Recession has been the issue of "Sluggish U.S. Investment" (June 27, 2014). Many firms were earning high profits, but as they saw it, the most productive option for using a substantial share of those profits during the last few years apparently was not to invest in higher efficiency and output.

Friday, February 27, 2015

Putting U.S. Labor Force Participation in Context

It's fairly well-known that US labor force participation--that is, the share of U.S. adults who are classified as either employed or unemployed--has been dropping. But it's not always recognized how the U.S. differs from other high-income economies in this trend, or how far back the trend extends.

The 2015 Economic Report of the President, released last week by the White House Council of Economic Advisers, offers some striking evidence on these points. The top figure shows labor force participation rates for "prime-age males," those in the 25-54 age category. The nice thing about looking at this group is that countries may differ considerably in the extent to which students attend school into their early 20s, or the extent to which people retire in their late 50s and early 60s. Looking at the "prime-age" group leaves those ages out of the picture.

For men, the U.S. was middle-of-the-pack in labor force participation rates of prime-age males in 1990, and now vies with Italy for the lowest level. For women, the U.S. was near the top of the pack in prime-age labor force participation in 1990, but since then has been surpassed by France, Canada, Germany, and the United Kingdom, and is now about even with Japan--which has not historically been known as a country with high labor force participation for women.

The Council of Economic Advisers sums up the cross-country patterns in this way:
Since the financial crisis, U.S. prime-age male participation has declined by about 2.5 percentage points, while the United Kingdom has seen a small uptick and most large European economies were generally stable. Of 24 OECD countries that reported prime-age male participation data between 1990 and 2013, the United States fell from 16th to 22nd. The story is somewhat similar among prime-age females. ...  In 1990, the United States ranked 7th out of 24 current OECD countries reporting prime-age female labor force participation, about 8 percentage points higher than the average of that sample. But since the late 1990s, women’s labor force participation plateaued and even started to drift down in the United States while continuing to rise in other high-income countries, as shown in Figure 1-10. As a result, in 2013 the United States ranked 19th out of those same 24 countries, falling 6 percentage points behind the United Kingdom and 3 percentage points below the sample average. 
These patterns of decline in US male and female labor force participation go back in time. The share of the male population above the age of 16 in the labor force has been falling for decades. The share of the female population above the age of 16 in the labor force rose steadily in the second half of the 20th century, but levelled out around 2000 and has been falling since.

When combining the cross-country data, the time series data, and the depth of the Great Recession, the report argues that the decline in labor force participation rates in recent years is pretty well explained. The CEA writes:
Between 2007 and 2012 the decline in participation is fully (and at some points more than fully) explained by the aging of the population and standard business-cycle effects. Beginning in 2012, however, the labor force participation rate decline began to exceed what was predicted from aging and cyclical factors. Since late 2013, the labor force participation rate has stabilized and the portion of the decline that was unexplained shrank, albeit slowly, between the second and fourth quarters of 2014 ...

What explains the "residual" factor in the figure below? Part of it is probably due to a gradually lower rate of labor force participation within US age groups (like the evidence on prime-age workers given above), while another part is surely due to the fact that the Great Recession was so severe that it "led to a greater-than-normal cyclical relationship between unemployment and participation."

Whatever the reasons, as the U.S. economy looks ahead to the next few decades, figuring out ways to stabilize and reverse the decline in labor force participation is an important goal of public policy.