Tuesday, July 16, 2019

Global Perspective on Markets for Sand

Trivia question: Measured by volume, what is the world's largest mined product? The answer is "sand and gravel," sometimes known in the geology business as "aggregates." Aggregates are used above all for concrete and asphalt, and demand for these products from China and other emerging markets has skyrocketed. Sand is also used in hydraulic fracturing, so US demand from that source has surged as well. And sand and gravel are widely used for purposes ranging from land reclamation and water treatment to the industrial production of electronics, cosmetics, and glass.

Sand and gravel are being mined at an exceptionally rapid rate. There is strong anecdotal evidence that environmental harms are occurring in some locations, and that more locations are threatened, but systematic analysis has been missing. The UN Environment Programme offers an update on the situation in "Sand and Sustainability: Finding New Solutions for Environmental Governance of Global Sand Resources" (May 2019).

Here's a sense of the scale of the issue (citations omitted for readability):
An estimated 40-50 billion metric tonnes is extracted from quarries, pits, rivers, coastlines and the marine environment each year. The construction industry consumes over half of this volume annually (25.9 billion to 29.6 billion tonnes in 2012) and could consume even more in future. Though little public data exists about extraction volumes, sources and uses we know that, with some exceptions, most sand and gravels extracted from natural environments are consumed regionally because of the high costs of transport. For example, two thirds of global cement production occurs in China (58.5%) and India (6.6%).
Global concrete production has tripled since the early 1990s.
One obvious question here is how it is remotely possible to run out of sand, given the existence of deserts. But it turns out that sand recently shaped by water is what is economically useful. As the report notes: "Desert sand, though plentiful, is unusable for most purposes because its wind-smoothed grains render it non-adherent for the purposes of industrial concrete." 

Mining enormous quantities of sand from beaches and riverbanks, as well as dredging it from offshore, is linked to a wide array of environmental damages. Here's an overview, again with citations omitted:
Aggregate extraction in rivers has led to pollution and changes in pH levels, instability of river banks leading to increased flood frequency and intensity, lowering of water aquifers, exacerbating drought occurrence and severity. Damming and extraction have reduced sediment delivery from rivers to many coastal areas, leading to reduced deposits in river deltas and accelerated beach erosion. This adds to effects of direct extraction in onshore sand extraction in coastal dune systems and nearshore marine dredging of aggregates, which may locally lead to long-term erosion impacts. Nearshore and offshore sand extraction in New Zealand continues despite considerable uncertainty of the environmental and the cumulative effects of mining, climate change and urbanisation of the coast.
Tourism is affected by loss of key species and beach erosion, while both freshwater and marine fishing — both traditional and commercial — has been shown to be affected through destruction of benthic fauna that accompanies dredging activities. Agriculture land has been affected by river erosion in some cases and the lowering of the water table. The insurance sector is affected through exacerbation of the impact of extreme events such as floods, droughts, and storm surges which can affect houses and infrastructure. A decrease in bed load or channel shortening can cause downstream erosion including bank erosion and the undercutting or undermining of engineering structures such as bridges, side protection walls and structures for water supply. ... For example, the use of sand in reclamation practices is thought to have led to increased turbidity and coral reef decline in the South China Sea. In the Mekong basin, the impact of sand mining in Laos, Thailand, and Cambodia is felt on the Vietnam delta erosion. Singapore demand for sand and gravel in land reclamation projects have triggered an increase in sand mining in Cambodia and Vietnam ... 
For a more detailed overview of environmental risks, Lois Koehnken has written "Impacts of Sand Mining on Ecosystem Structure, Process, and Biodiversity in Rivers" (2018) for the World Wildlife Fund. 

The potential answers here are conceptually straightforward, but can be difficult to implement. The implications of large-scale sand-mining should be considered before the actual mining starts. It would be useful to think about recycling sand-related products to other uses, where possible, like finding ways to re-use old concrete or waste asphalt. Research is ongoing to find alternative substances that could be used in concrete to replace sand: some suggested possibilities include crushed stone, ash left over after waste incineration, stainless steel slag, coconut shells, sawdust, old tires, and more. 

One interesting suggestion is to emphasize "permeable pavement," in which a substantial amount of water can sink through pavement, into the earth, and even reach the groundwater, rather than just running off. The UNEP report notes:
Permeable pavement (sometimes called porous pavement) ... is used in cities around the world, particularly in new cities projects in China and India to reduce surface water runoff volumes and rates by allowing water to infiltrate soil rapidly, helping to reduce flooding while replenishing groundwater reserves. In many cases, permeable roadways, pedestrian walkways, playgrounds, parking zones can also act as water retention structures, reducing or eliminating the need for traditional stormwater management systems. ...  Additional proven benefits include improved water quality, reduced pollutant runoff into local waterbodies, reduced urban heat island effects (great advantage for adaptation to climate change), lower cost of road salting (in cold environments), among others. Less noted is the indirect contribution to reduced demand for natural sand both in constructing these permeable surfaces, and in reducing the need for built drainage systems. Most permeable pavement designs – porous (or pervious) concrete, interlocking pavement slabs, crushed rock and gravels, or clay, amongst other materials – do not use fine aggregates (sand). ...  Recent experimentation has shown that introducing end-of-life tyre aggregates can increase flexibility of rigid permeable pavement systems, and with that their capacity to cope with ground movement or tree root systems. 
Mette Bendixen, Jim Best, Chris Hackney and Lars Lønsmann Iversen provide a quick readable overview of these issues in "Time is running out for sand" (Nature, July 2, 2019). They offer these projections for sand demand and prices (drawing on underlying research here).

I tend to think of sand and gravel as a nearly infinite resource. But when the world is extracting 50 billion metric tonnes of sand and gravel every year, with that total rising quickly, the effects are far from imperceptible.

For previous posts at this website about global markets for sand, see

Monday, July 15, 2019

Why Don't People Buy More Annuities?

Among economists, it's sometimes known as the "annuities puzzle": Why don't people buy annuities as frequently as one might expect?

In May 2019, Brookings and the Kellogg Business School Public-Private Initiative held a conference on the subject of “Retirement Policy and Annuitization: A View from the Experts.” Three papers from that conference are available: "Can annuities become a bigger contributor to retirement security?" by Martin Neil Baily and Benjamin H. Harris (June 2019); "Automatic enrollment in 401(k) annuities: Boosting retiree lifetime income," by Vanya Horneff, Raimond Maurer, and Olivia S. Mitchell (June 2019); and "Using behavioral insights to increase annuitization rates: The role of framing and anchoring," by Abigail Hurwitz (June 2019).

An annuity involves making a substantial payment in the present and then receiving a stream of payments in the future. For example, someone retiring at age 65 or 70 might take a chunk of their retirement savings (not all, but a reasonably sized chunk) and buy an annuity. One might, for instance, buy a deferred annuity that starts payments at age 80 or 85 and continues those payments until death. If you wish, you can buy an annuity whose benefits rise over time with inflation. 

In contrast to "life insurance," which pays out after you die, annuities are a form of "continued living insurance," which pays out as long as you remain alive. In some contexts, annuities are quite popular. For example, Social Security is an annuity-style program; that is, you pay into it during your lifetime, but at retirement, the benefits arrive in an inflation-adjusted stream of payments until death, not a one-time chunk of cash that could be immediately spent. Many workers also liked the traditional "defined benefit" pension plan, in which an employer pays into a fund that provides benefits until death. A defined benefit pension plan acts like an annuity--albeit one that is purchased and financed indirectly through an employer.

However, one of the big changes in retirement savings in the last few decades is that the classic "defined benefit" pension plan is in decline, and workers instead save for retirement through an account like a 401(k) or an IRA. Given that the drop in consumption from outliving one's assets could be very large, economic models often suggest that people should be more willing than they seem to be to take some portion of the money in such an account and use it to buy annuities. The Horneff, Maurer, and Mitchell paper runs through a set of illustrative calculations suggesting that most people would benefit if they put 10% of retirement wealth into an annuity--and many would benefit from putting in a larger share.

There aren't official statistics on how many Americans buy annuities, but studies suggest that it's probably 10% or less. So why don't people annuitize more of their retirement wealth? There are a number of possible explanations.

Baily and Harris point out that people often tend to use "mental accounts": for example, they think of certain income as being available for short-term consumption, or medium-term savings (say, for home repairs or a vacation), or for retirement savings. Many people think of the retirement savings in their 401k or IRA as their own spending money that they control, not as money that should be used to purchase "continuing life insurance" via an annuity. (In contrast, most people don't think of Social Security or defined pension benefits as their personal money in the same way; instead, people have already mentally handed off the control over those funds to a government or employer account.)

Another issue is that people worry that they will buy an annuity but then die quickly and "lose" money on the transaction. I think of this as a standard problem with every form of insurance. Most insurance--life, health, car, home-- pays off when something bad happens and you need to make a claim. In a way, the best outcome is that I spend my lifetime paying for all these forms of insurance, and never end up using any of them.  But of course, if I never make a claim on my insurance, I feel as if I wasted the money, because the risks never actually  happened. Indeed, what I sometimes call the "unavoidable reality of insurance" is that there will be a relatively small number of people who get large insurance payouts--and they are the unlucky ones  because something bad happened in their lives. The "lucky" ones pay and pay and pay those insurance premiums, and almost never get anything back. No wonder insurance is often unpopular! And it's no wonder that many people often obtain insurance under some form of pressure: you need car insurance to drive your car, and you need home insurance to get a mortgage, and your employer provides health insurance as part of your job compensation, and you are required by law to pay into Social Security or Medicare. With annuities, the fear is that you might be buying one more form of insurance that won't pay off.

Yet another issue is that annuities can be complex financial contracts, and hard for an average person to evaluate. How long do you pay into the annuity? When does it start paying out? Does it pay out for a fixed period, or over the rest of one's life? Are the payouts adjusted for inflation? How large a commission is being charged by the seller? Does the annuity include a minimum payout if you die soon--which could be left to one's heirs? What happens if the company that sold you the annuity goes broke a decade or two in the future? How will the tax code treat income from annuities in the future? In the past, some annuities were not an especially good financial deal, in the sense that someone with the discipline to withdraw money from their retirement accounts in a slow-and-steady way would have had a high probability of ending up better off than someone who purchased a life annuity.
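The trade-off behind these worries can be made concrete with a toy calculation (every figure below is hypothetical, not an actual annuity quote): a retiree who dies early "loses" on the annuity, a retiree who lives long comes out well ahead, and a self-managed withdrawal plan at the same pace can run dry.

```python
# Illustrative comparison (all numbers hypothetical): a life annuity's
# fixed payout versus a disciplined "slow and steady" withdrawal plan.

def annuity_total(annual_payout, years_survived):
    """Total received from a life annuity over a given survival span."""
    return annual_payout * years_survived

def withdrawal_balance(lump_sum, annual_withdrawal, annual_return, years):
    """Remaining balance after steady withdrawals from an invested lump sum."""
    balance = lump_sum
    for _ in range(years):
        balance = balance * (1 + annual_return) - annual_withdrawal
        if balance <= 0:
            return 0.0  # the retiree has outlived this pot of assets
    return balance

# A hypothetical $100,000 premium buying $6,500/year for life.
premium, payout = 100_000, 6_500

# Dying after 10 years means receiving only $65,000 on the $100,000 premium...
print(annuity_total(payout, 10))
# ...while surviving 30 years means receiving $195,000.
print(annuity_total(payout, 30))

# The self-managed alternative: withdrawing at the same pace from the
# invested lump sum (4% return assumed) exhausts the money within 30 years,
# which is exactly the longevity risk the annuity insures against.
print(withdrawal_balance(premium, payout, 0.04, 30))
```

The point is not that either choice dominates: the annuity converts the risk of outliving one's assets into the risk of dying early, which is the framing problem that makes the purchase psychologically hard.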

What might be done to tip the balance, at least a little bit, toward more people buying annuities?

One option is a "nudge" approach in which the default approach would be that a small proportion of retirement accounts would be automatically annuitized at retirement. A body of social science research suggests that lots of people would just go with the default approach, and would end up being pleased that they had done so. But anyone who didn't want this default to happen and didn't want the annuity could opt out with a phone call. Annuities are a default option in Switzerland, for example.

Another option is to offer a different framing of the choices. For example, instead of "buying an annuity" perhaps there should be an option to "buy higher Social Security benefits for the rest of your life." Some survey evidence suggests that when annuities are described in terms like "spend" and "payment," people are more attracted than if they are described by terms like "invest" and "earning." It would probably also be useful if the presentation of annuities could be standardized, so potential consumers could more easily compare what they are buying.

There are some interesting international comparisons discussed in the Hurwitz paper. She writes: "The United Kingdom had a mandatory annuity law that was repealed in 2014. The Netherlands mandates full annuitization, Chile offers only annuities or phased withdrawals, Israel adopted a mandatory minimum annuity requirement in 2008 ..." There is some evidence that when annuities have become more common in a country, then people often keep on choosing them even if the default or requirement to do so is loosened.

For a couple of previous posts on the "annuities puzzle," see:

Friday, July 12, 2019

China's Changing Relationship with the World Economy

China's economy is simultaneously huge in absolute size and lagging far behind the world leaders on a per person basis. According to World Bank data, China's GDP (measured in current US dollars) is $13.6 trillion, roughly triple the size of Germany's or Japan's, but still in second place among the countries of the world behind the US GDP of $20.4 trillion. However, measured by per capita GDP, the World Bank data show that China is at $9,770, just one-sixth of the US level of $62,641.

Any evaluation of China's economy finds itself bouncing back and forth between the enormous size that has already been achieved and the possibility of so much more growth and change in the future. This pattern keeps recurring in "China and the world: Inside the dynamics of a changing relationship," written by the team of Jonathan Woetzel, Jeongmin Seong, Nick Leung, Joe Ngai, James Manyika, Anu Madgavkar, Susan Lund, and Andrey Mironenko at the McKinsey Global Institute (July 2019).

Here's one illustration. The figure shows the total GDP of China, Japan, and Germany as a share of the US level, which is set at 100%. On this figure, Germany's GDP as a share of the US level peaked in 1979, and Japan's peaked in 1991.

What's interesting about China's situation is not just that the level has risen so sharply. In addition, the peaks for Germany and Japan happened when their levels of per capita GDP were similar to or higher than the US level (given the prevailing exchange rates at the time). China's per capita GDP is much lower, suggesting much more room to grow. Similarly, the urbanization rates for Germany in 1979 and Japan in 1991 were in the 70s, while China's urbanization rate is only 58%--again suggesting considerably more room for China to grow.

Here are some other examples of the changes in China that have already happened, with a hint of the potential for much larger changes still remaining. The MGI report notes:
Trade. ... China became the world’s largest exporter of goods in 2009, and the largest trading nation in goods in 2013. China’s share of global goods trade increased from 1.9 percent in 2000 to 11.4 percent in 2017. In an analysis of 186 countries, China is the largest export destination for 33 countries and the largest source of imports for 65. ... However, China’s share of global services trade is 6.4 percent, about half that of goods trade.
Firms. ... Consider that in 2018 there were 110 firms from the mainland China and Hong Kong in the Global Fortune 500, getting toward the US tally of 126. ... However, although the share of these firms’ revenue earned outside China has increased, less than 20 percent of revenue is made overseas even by these global firms. To put this in context, the average share of revenue earned overseas for S&P 500 companies is 44 percent. Furthermore, only one Chinese company is in the world’s 100 most valuable brands.
Finance. China was also the world’s second largest source of outbound FDI and the second largest recipient of inbound FDI from 2015 to 2017. ... Foreign ownership accounted for only about 2 percent of the Chinese banking system, 2 percent of the Chinese bond market, and about 6 percent of China’s stock market in 2018. Furthermore, in 2017, its inbound and outbound capital flows (including FDI, loans, debt, equity, and reserve assets) were only about 30 percent those of the United States. ...
Technology. China’s scale in R&D expenditure has soared—spending on domestic R&D rose from about $9 billion in 2000 to $293 billion in 2018—the second-highest in the world—thereby narrowing the gap with the United States. However, China still depends on imports of some core technologies such as semiconductors and optical devices, and intellectual property (IP) from abroad. In 2017, China incurred $29 billion worth of imported IP charges, while only charging others around $5 billion in exported IP charges (17 percent of its imports). China’s technology import contracts are highly concentrated geographically, with more than half of purchases of foreign R&D coming from only three countries—31 percent from the United States, 21 percent from Japan, and 10 percent from Germany.
Culture. China has invested heavily in building a global cultural presence. ... Furthermore, its financing of the global entertainment industry has led to more movies being shot in China: 12 percent of the world’s top 50 movies were shot at least partially in China in 2017, up from 2 percent in 2010. However, significant investment appears to have yet to achieve mainstream cultural relevance globally. Chinese exports of television dramas in terms of the value of exports are only about one-third those of South Korea, and the number of subscribers to the top ten Chinese musicians on a global streaming platform are only three percent those of the top ten South Korean artists, for instance.
The MGI report argues that when looking specifically at trade, technology and financial capital, China's economy is becoming less dependent on the rest of the world, while the rest of the world economy is becoming more dependent on China. For example, one big shift in the last few years is that China's economy has been "rebalancing," which refers to a greater share of China's output going to China's consumers and less to capital investment or exports. This shift also means that rising levels of consumption in China are a major force in driving global consumption of goods and services.
In 11 of the 16 quarters since 2015, domestic consumption contributed more than 60 percent of total GDP growth. In 2017 to 2018, about 76 percent of GDP growth came from domestic consumption, while net trade made a negative contribution to GDP growth. As recently as 2008, China’s net trade surplus amounted to 8 percent of GDP; by 2018, that figure was estimated to be only 1.3 percent—less than either Germany or South Korea, where net trade surpluses amount to between 5 and 8 percent of GDP. Rising demand and the development of domestic value chains in China also partly explain the recent decline in trade intensity at the global level. ... Although it only accounts for 10 percent of global household consumption, China was the source of 38 percent of global household consumption growth from 2010 to 2016, according to World Bank data. Moreover, in some categories including automobiles and mobile phones, China’s share of global consumption is 30 percent or more.
I read now and then about the prospect of China's economy "decoupling" from the US economy. From a US power politics point of view, I think the mental model here is how the economy of the Soviet Union operated in the decades after World War II. Most of the USSR's trade took place within its own centrally planned trading bloc of Soviet-controlled countries, the Council for Mutual Economic Assistance. The results in terms of output and quality were so miserably bad that jokes told by Russians about their economy became a staple among economists. Since the fall of the USSR, Russia's economy has staggered from one catastrophe to another (for discussion, see here and here), while occasionally being buoyed up when oil prices are high.

China's situation is very different. Its economy is not reliant on exports of oil or other natural resources. China's government still controls the financial industry and steers funds to state-owned companies, but it is not following a Soviet-style approach to central planning. In the 21st century, China is not isolating itself from the rest of the world economy; rather, it is actively building transportation and trade ties to countries around the world. The education and health levels of China's population are rising rapidly. Future economic growth for China is likely to be slower and bumpier than the pattern of the last 40 years--while still being notably faster on average than the growth of high-income economies like the U.S.

There are a number of hard questions to face about China's rise in the global economy, and many of the hardest ones go well beyond economics. But old mental models drawn from a time when the US was by far the dominant economy in the world and its main geopolitical opponent was the USSR are not likely to be very useful in searching for answers.

Wednesday, July 10, 2019

Is AI Just Recycled Intelligence, Which Needs Economics to Help It Along?

The Harvard Data Science Review has just published its first issue. Many of us in economics are cousins of the burgeoning data science field and will find it of interest. As one example, Harvard provost (and economist) Alan Garber offers a broad-based essay on "Data Science: What the Educated Citizen Needs to Know." Others may be more intrigued by the efforts of Mark Glickman, Jason Brown, and Ryan Song to use a machine learning approach to figure out whether Lennon or McCartney is more likely to have authored certain songs by the Beatles that are officially attributed to both, in "(A) Data in the Life: Authorship Attribution in Lennon-McCartney Songs."
But my attention was especially caught by an essay by Michael I. Jordan called "Artificial Intelligence—The Revolution Hasn’t Happened Yet," which is then followed by 11 comments: Rodney Brooks; Emmanuel Candes, John Duchi, and Chiara Sabatti; Greg Crane; David Donoho; Maria Fasli; Barbara Grosz; Andrew Lo; Maja Mataric; Brendan McCord; Max Welling; and Rebecca Willett. The rejoinder from Michael I. Jordan will be of particular interest to economists, because it is titled "Dr. AI or: How I Learned to Stop Worrying and Love Economics."
Jordan's main argument is that the term "artificial intelligence" often misleads public discussions, because the actual issue here isn't human-style intelligence. Instead, what is at work is a set of computer programs that can use data to train themselves to make predictions--what the experts call "machine learning," defined as "an algorithmic field that blends ideas from statistics, computer science and many other disciplines to design algorithms that process data, make predictions, and help make decisions." Consumer recommendation or fraud detection systems, for example, are machine learning, not the high-level flexible cognitive capacity that most of us mean by "intelligence." As Jordan argues, the information technology that would run, say, an operational system of autonomous vehicles is more closely related to a much more complicated air traffic control system than to the human brain.

(One implication here for economics is that if AI is really machine learning, and machine learning is about programs that can update and train themselves to make better predictions, then one can analyze the effect of AI on labor markets by looking at specific tasks within various jobs that involve prediction. Ajay Agrawal, Joshua S. Gans, and Avi Goldfarb take this approach in "Artificial Intelligence: The Ambiguous Labor Market Impact of Automating Prediction" (Journal of Economic Perspectives, Spring 2019, 33 (2): 31-50). I offered a gloss of their findings in a blog post last month.)

Moreover, machine learning algorithms, which often mix results from past research and pre-existing data gathered in different situations together with new forms of data, can go badly astray. Jordan offers a vivid example: 
Consider the following story, which involves humans, computers, data, and life-or-death decisions, but where the focus is something other than intelligence-in-silicon fantasies. When my spouse was pregnant 14 years ago, we had an ultrasound. There was a geneticist in the room, and she pointed out some white spots around the heart of the fetus. “Those are markers for Down syndrome,” she noted, “and your risk has now gone up to one in 20.” She let us know that we could learn whether the fetus in fact had the genetic modification underlying Down syndrome via an amniocentesis, but amniocentesis was risky—the chance of killing the fetus during the procedure was roughly one in 300. Being a statistician, I was determined to find out where these numbers were coming from. In my research, I discovered that a statistical analysis had been done a decade previously in the UK in which these white spots, which reflect calcium buildup, were indeed established as a predictor of Down syndrome. I also noticed that the imaging machine used in our test had a few hundred more pixels per square inch than the machine used in the UK study. I returned to tell the geneticist that I believed that the white spots were likely false positives, literal white noise.
She said, “Ah, that explains why we started seeing an uptick in Down syndrome diagnoses a few years ago. That’s when the new machine arrived.”
We didn’t do the amniocentesis, and my wife delivered a healthy girl a few months later, but the episode troubled me, particularly after a back-of-the-envelope calculation convinced me that many thousands of people had gotten that diagnosis that same day worldwide, that many of them had opted for amniocentesis, and that a number of babies had died needlessly. The problem that this episode revealed wasn’t about my individual medical care; it was about a medical system that measured variables and outcomes in various places and times, conducted statistical analyses, and made use of the results in other situations. The problem had to do not just with data analysis per se, but with what database researchers call provenance—broadly, where did data arise, what inferences were drawn from the data, and how relevant are those inferences to the present situation?
The comment by David Donoho refers to this as "recycled intelligence." Donoho writes:
The last decade shows that humans can record their own actions when faced with certain tasks, which can be recycled to make new decisions that score as well as humans’ (or maybe better, because the recycled decisions are immune to fatigue and impulse). ... Recycled human intelligence does not deserve to be called augmented intelligence. It does not truly augment the range of capabilities that humans possess. ... Relying on such recycled intelligence is risky; it may give systematically wrong answers ..."
Donoho offers the homely example of spellcheck programs, which, for someone who is an excellent and careful speller, are as likely to create memorable errors as to improve the text. 

From Jordan's perspective, what we should be talking about is not whether AI or machine learning will "replace" workers, but how humans will interact with these new capabilities. I'm not just thinking of worker training here, but of the issues related to privacy, access to technology, the structure of market competition, and other issues. Indeed, Jordan argues that one major ingredient missing from current machine-learning programs is a fine-grained sense of what specific people want--which implies a role for markets. Rather than pretending that we are mimicking human "intelligence," with all the warts and flaws that we know human intelligence has, he argues, we should instead be thinking about how information technology can address the allocation of public and private resources in ways that benefit people. I can't figure out a way to summarize his argument in brief without doing violence to it, so I quote here at length: 
Let us suppose that there is a fledgling Martian computer science industry, and suppose that the Martians look down at Earth to get inspiration for making their current clunky computers more ‘intelligent.’ What do they see that is intelligent, and worth imitating, as they look down at Earth?
They will surely take note of human brains and minds, and perhaps also animal brains and minds, as intelligent and worth emulating. But they will also find it rather difficult to uncover the underlying principles or algorithms that give rise to that kind of intelligence——the ability to form abstractions, to give semantic interpretation to thoughts and percepts, and to reason. They will see that it arises from neurons, and that each neuron is an exceedingly complex structure——a cell with huge numbers of proteins, membranes, and ions interacting in complex ways to yield complex three-dimensional electrical and chemical activity. Moreover, they will likely see that these cells are connected in complex ways (via highly arborized dendritic trees; please type "dendritic tree and spines" into your favorite image browser to get some sense of a real neuron). A human brain contains on the order of a hundred billion neurons connected via these trees, and it is the network that gives rise to intelligence, not the individual neuron.
Daunted, the Martians may step away from considering the imitation of human brains as the principal path forward for Martian AI. Moreover, they may reassure themselves with the argument that humans evolved to do certain things well, and certain things poorly, and human intelligence may not necessarily be well suited to solve Martian problems.
What else is intelligent on Earth? Perhaps the Martians will notice that in any given city on Earth, most every restaurant has at hand every ingredient it needs for every dish that it offers, day in and day out. They may also realize that, as in the case of neurons and brains, the essential ingredients underlying this capability are local decisions being made by small entities that each possess only a small sliver of the information being processed by the overall system. But, in contrast to brains, the underlying principles or algorithms may be seen to be not quite as mysterious as in the case of neuroscience. And they may also determine that this system is intelligent by any reasonable definition—it is adaptive (it works rain or shine), it is robust, it works at small scale and large scale, and it has been working for thousands of years (with no software updates needed). Moreover, not being anthropocentric creatures, the Martians may be happy to conceive of this system as an ‘entity’—just as much as a collection of neurons is an ‘entity.’
Am I arguing that we should simply bring in microeconomics in place of computer science? And praise markets as the way forward for AI? No, I am instead arguing that we should bring microeconomics in as a first-class citizen into the blend of computer science and statistics that is currently being called ‘AI.’ ... 
Indeed, classical recommendation systems can and do cause serious problems if they are rolled out in real-world domains where there is scarcity. Consider building an app that recommends routes to the airport. If few people in a city are using the app, then it is benign, and perhaps useful. When many people start to use the app, however, it will likely recommend the same route to large numbers of people and create congestion. The best way to mitigate such congestion is not to simply assign people to routes willy-nilly, but to take into account human preferences—on a given day some people may be in a hurry to get to the airport and others are not in such a hurry. An effective system would respect such preferences, letting those in a hurry opt to pay more for their faster route and allowing others to save for another day. But how can the app know the preferences of its users? It is here that major IT companies stumble, in my humble opinion. They assume that, as in the advertising domain, it is the computer's job to figure out human users' preferences, by gathering as much information as possible about their users, and by using AI. But this is absurd; in most real-world domains—where our preferences and decisions are fine-grained, contextual, and in-the-moment—there is no way that companies can collect enough data to know what we really want. Nor would we want them to collect such data—doing so would require getting uncomfortably close to prying into the private thoughts of individuals. A more appealing approach is to empower individuals by creating a two-way market where (say) street segments bid on drivers, and drivers can make in-the-moment decisions about how much of a hurry they are in, and how much they're willing to spend (in some currency) for a faster route.
Similarly, a restaurant recommendation system could send large numbers of people to the same restaurant. Again, fixing this should not be left to a platform or an omniscient AI system that purportedly knows everything about the users of the platform; rather, a two-way market should be created where the two sides of the market see each other via recommendation systems.
It is this last point that takes us beyond classical microeconomics and brings in machine learning. In the same way as modern recommendation systems allowed us to move beyond classical catalogs of goods, we need to use computer science and statistics to build new kinds of two-way markets. For example, we can bring relevant data about a diner's food preferences, budget, physical location, etc., to bear in deciding which entities on the other side of the market (the restaurants) are best to connect to, out of the tens of thousands of possibilities. That is, we need two-way markets where each side sees the other side via an appropriate form of recommendation system.
From this perspective, business models for modern information technology should be less about providing ‘AI avatars’ or ‘AI services’ for us to be dazzled by (and put out of work by)—on platforms that are monetized via advertising because they do not provide sufficient economic value directly to the consumer—and more about providing new connections between (new kinds of) producers and consumers.
Consider the fact that precious few of us are directly connected to the humans who make the music we listen to (or listen to the music that we make), to the humans who write the text that we read (or read the text that we write), and to the humans who create the clothes that we wear. Making those connections in the context of a new engineering discipline that builds market mechanisms on top of data flows would create new ‘intelligent markets’ that currently do not exist. Such markets would create jobs and unleash creativity.
Implementing such platforms is a task worthy of a new branch of engineering. It would require serious attention to data flow and data analysis, it would require blending such analysis with ideas from market design and game theory, and it would require integrating all of the above with innovative thinking in the social, legal, and public policy spheres. The scale and scope is surely at least as grand as that envisaged when chemical engineering was emerging as a way to combine ideas from chemistry, fluid mechanics, and control theory at large scale.
Certainly market forces are not a panacea. But market forces are an important source of algorithmic ideas for constructing intelligent systems, and we ignore them at our peril. We are already seeing AI systems that create problems regarding fairness, congestion, and bias. We need to reconceptualize the problems in such a way that market mechanisms can be taken into account at the algorithmic level, as part and parcel of attempting to make the overall system be ‘intelligent.’ Ignoring market mechanisms in developing modern societal-scale information-technology systems is like trying to develop a field of civil engineering while ignoring gravity.
Markets need to be regulated, of course, and it takes time and experience to discover the appropriate regulatory mechanisms. But this is not a problem unique to markets. The same is true of gravity, when we construe it as a tool in civil engineering. Just as markets are imperfect, gravity is imperfect. It sometimes causes humans, bridges, and buildings to fall down. Thus it should be respected, understood, and tamed. We will require new kinds of markets, which will require research into new market designs and research into appropriate regulation. Again, the scope is vast.
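The two-way market Johnson describes, where (say) street segments bid on drivers and drivers reveal how much they value a faster trip, can be sketched as a simple auction. The sketch below is my own illustration, not from Johnson's essay; all driver names, bids, and the uniform-price rule are hypothetical, chosen only to show how scarce fast-route slots could be allocated by willingness to pay rather than assigned by an omniscient recommender.

```python
# Toy two-way market: a congested fast route has limited slots, and
# drivers bid what they are willing to pay to use it right now.
# Everything here is hypothetical and purely illustrative.

def clear_fast_route(bids, capacity):
    """Allocate `capacity` fast-route slots by willingness to pay.

    bids: dict mapping driver id -> willingness to pay for the fast route.
    Returns (winners, clearing_price): winners pay a uniform price equal
    to the highest losing bid (0 if the route is not full).
    """
    ranked = sorted(bids, key=bids.get, reverse=True)
    winners = ranked[:capacity]
    losers = ranked[capacity:]
    clearing_price = bids[losers[0]] if losers else 0
    return winners, clearing_price

# Four drivers headed to the airport; the fast route has two slots.
bids = {"in_a_hurry": 9.0, "moderate": 4.0, "flexible": 1.5, "no_rush": 0.5}
winners, price = clear_fast_route(bids, capacity=2)
print(winners, price)  # the two highest bidders win; both pay the highest losing bid
```

The point of the uniform clearing price is that drivers who are genuinely in a hurry self-select into paying for speed, while everyone else saves their money for another day, without the platform needing to infer anyone's private preferences.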
I can think of all sorts of issues and concerns to raise about this argument (and I'm sure that readers can do so as well), but I also think the argument has an interesting force and plausibility.   

Tuesday, July 9, 2019

Raising the Minimum Wage: CBO Weighs in

No proposal to raise the minimum wage can be evaluated without asking "how fast and by how much?" The Congressional Budget Office offers an evaluation of three alternatives in "The Effects on Employment and Family Income of Increasing the Federal Minimum Wage" (July 2019).  CBO considers three proposals: "The options would raise the minimum wage to $15, $12, and $10, respectively, in six steps between January 1, 2020, and January 1, 2025. Under the $15 option, the minimum wage would then be indexed to median hourly wages; under the $12 and $10 options, it would not." (There are some other complexities involving possible subminimum wages for teenage workers, tipped workers, and disabled workers, which I won't discuss here.)

One way to understand the result is to compare these proposals with the path of wages in the US economy. The top dashed line shows the inflation-adjusted wages of workers at the 25th percentile of the wage distribution. The second dashed line shows the wages of workers at the 10th percentile. The orange line shows the federal minimum wage under current law, both past and projected. The three proposals for raising the minimum wage appear on the far right-hand side of the figure.
An obvious takeaway here is that the minimum wage was roughly equal to the 10th percentile of the wage distribution in the 1970s. The minimum wage has fallen below the 10th percentile since then, though it rose almost back to that level after the series of minimum wage increases enacted in 2007. Since then, however, the minimum wage has been pulling away from the 10th percentile wage, and the gap is projected to keep growing under current law.

This general background suggests that in a big picture sense, the consequences of having a minimum wage that rises to, say, $12 per hour, won't be all that different from the past consequences of having a minimum wage that's a little below the 10th percentile of wages. However, an increase up to $15/hour in the federal minimum wage would potentially have a greater effect, outside the historical norm.

Considering the effects of a higher federal minimum wage is also complicated by the fact that so many states and cities have already enacted higher minimum wages. The CBO notes:
As of 2019, 29 states and the District of Columbia have a minimum wage higher than the federal minimum. (Many of those states have boosted their minimum wage in recent years.) The minimum wage is indexed to inflation in 17 of those states, and future increases have been mandated in 6 more. Some localities also have minimum wages higher than the applicable state or federal minimum wage; in San Francisco, for instance, the minimum wage increased to $15.59 per hour as of July 1, 2019, and is adjusted for inflation annually. About 60 percent of all workers currently live in states where the applicable minimum wage is more than $7.25 per hour. And in 2025, about 30 percent of workers will live in states with a minimum wage of $15 or higher, CBO estimates ...
Because of all this state and local activity on higher minimum wages, the argument for raising the federal minimum wage has shifted. It's not so much about a minimum wage for all US workers as about a higher minimum wage for the roughly 40% of US workers for whom that hasn't already happened. And often, those workers live in lower-wage places where the combined forces of politics and economics haven't yet led to a higher minimum wage.

For illustration, here are estimates of what percentage of workers would be directly affected by a rise in the minimum wage. Past increases in the minimum wage have typically had a direct effect on 5% of workers or less, and an increase in the federal minimum wage to $10/hour or $12/hour fits in this range, while an increase to $15/hour would be a much larger step. (The hollow circles refer to increases in the minimum wage that were proposed back in 2014 to happen in 2016, but didn't actually take place.)

The effects of a higher minimum wage on employment and wages depend on many factors, including all the ways that employers and workers might react to such an increase in the short run and the long run, not just through hiring but also through decisions about product pricing and investment in equipment. There are many uncertainties in modeling minimum wage increases. CBO reviewed 11 recent studies, finding some that predict a higher minimum wage will increase employment while others predict it will decrease employment. Here's a table of the studies, for readers who would like to dig deeper. The elasticity measures how much employment among those directly affected by the minimum wage changes in response to a 1% change in their wages caused by the higher minimum. In most studies, but not all, the long-run effect is larger than the short-run effect. 
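The elasticity arithmetic is simple enough to show directly. The sketch below is my own illustration of how such an elasticity translates a wage increase into a predicted employment change; the elasticity value, wage levels, and worker count are hypothetical numbers chosen only to make the calculation concrete, not figures from the CBO report.

```python
# Back-of-the-envelope use of an employment elasticity.
# All numbers here are hypothetical, for illustration only.

def employment_change(old_wage, new_wage, elasticity, affected_workers):
    """Predicted change in employment among directly affected workers."""
    pct_wage_change = (new_wage - old_wage) / old_wage
    pct_employment_change = elasticity * pct_wage_change
    return pct_employment_change * affected_workers

# Suppose an elasticity of -0.25: each 1% wage increase reduces affected
# employment by 0.25%. Raising a $10 wage to $12 is a 20% increase.
change = employment_change(10.0, 12.0, elasticity=-0.25, affected_workers=1_000_000)
print(round(change))  # -50000: a predicted loss of about 50,000 jobs
```

The wide range of elasticities across the 11 studies, some even positive, is exactly why the CBO's employment estimates come with wide uncertainty bands.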

With these uncertainties duly recognized, here is the CBO estimate for a phased-in rise in the minimum wage to $15/hour: 
Under the first option [of raising the minimum wage to $15/hour] according to CBO’s median estimate, about 1.3 million workers who would otherwise be employed would be jobless in an average week in 2025. That decrease would account for 0.8 percent of all workers and 7 percent of directly affected workers who would otherwise earn less than $15 per hour. Wages would rise, however, for 17 million directly affected workers who remained employed and for many of the 10 million potentially affected workers whose wages would otherwise fall slightly above $15 per hour—specifically, between the new federal minimum and that amount plus 50 percent of the increase in their applicable minimum wage. The higher wages for those potentially affected workers might lead to reductions in their employment, but some firms might hire more of those workers as substitutes for lower-paid workers whose wages had increased by larger amounts. Those two factors would roughly offset for those higher-wage workers, CBO anticipates. 
The $15 option would alter employment more for some groups than for others. Almost 50 percent of the newly jobless workers in a given week—600,000 of 1.3 million—would be teenagers (some of whom would live in families with income well above the poverty threshold). Employment would also fall disproportionately among part-time workers and adults without a high school diploma. ...
That net effect reflects the combination of the following factors:
  • Real earnings for workers while they remained employed would increase by $64 billion,
  • Real earnings for workers while they were jobless would decrease by $20 billion,
  • Real income for business owners would decrease by $14 billion, and
  • Real income for consumers would decrease by $39 billion.
My own quick take is that an increase in the federal minimum wage to $10/hour or even $12/hour is well within the range of past experience, and the effects are likely to be relatively small. Going to $15/hour is a bigger jump.

In particular, there are states and big parts of the country outside of major metropolitan areas where quite a large share of workers make less than $15/hour. Here are some comparisons from Census Bureau data. In May 2018, for example, the median hourly wage in California as a whole was $20.40. However, the median wage in the San Francisco-Oakland-Hayward metro area was $26/hour, while in the Fresno area the median wage was $16.40/hour. Or looking across states, the median hourly wage in Mississippi was $14.70/hour, and in Idaho it was $16.47/hour. With a very large and diverse US economy, the effects of a higher federal minimum wage will not be evenly distributed by geography. 

Friday, July 5, 2019

US Multinationals Expand their Foreign-based Research and Development

"For decades, US multinational corporations (MNCs) conducted nearly all their research and development (R&D) within the United States. Their focus on R&D at home helped establish the United States as the unrivaled leader of innovation and technology advances in the world economy. Since the late 1990s, however, the amount of R&D conducted overseas by US MNCs has grown nearly fourfold and its geographic distribution has expanded from a few advanced industrial countries (such as Germany, Japan, and Canada) to many parts of the developing world ..."

Lee G. Branstetter, Britta Glennon, and J. Bradford Jensen discuss this shift in "The Rise of Global Innovation by US Multinationals Poses Risks and Opportunities" (June 2019, Peterson Institute for International Economics, Policy Brief 19-9).

Here's the quadrupling in foreign-based R&D by US multinationals in the last couple of decades:
Another measure looks at what share of the patents filed by US multinationals are based on cross-border collaboration. It used to be less than 2%; it's now more than 10%--and rising. 
It used to be that almost all the foreign R&D of US multinationals was in five high-income countries: Germany, the UK, Japan, Canada, and France. Now, less than half is in those five countries.
The shift here shouldn't be exaggerated. "While US MNCs’ foreign R&D expenditures have increased dramatically, they still conducted about 83 percent of their R&D in the United States in 2015 (down from 92 percent in 1989)."

But the shift is still a real one. Of course, it's driven in part by the fact that US multinationals are building supply chains across borders and selling output in other countries. Emerging markets have been growing faster than the US economy in recent decades and, with some stops and starts, will probably continue this pattern of faster "catch-up" growth in the next few decades. Another factor is that, in an interconnected world economy, research in newer industries is more likely to cross borders than research in older industries.

Your reaction to US multinationals expanding their overseas R&D efforts may be shaped by whether you are a half-empty or a half-full kind of person. US multinationals accounted for 57% of total US R&D spending in 2015. 

The half-empty concern would be that when US companies shift their R&D overseas, there is a danger of losing US-based technological leadership, with potentially negative consequences for US workers and the US economy. There is a legitimate concern that technology developed outside the US may offer less benefit to the US economy, and may be harder to protect with intellectual property rules.

The half-full response is that centers of technological excellence are developing all around the world, with or without participation by US firms. If US firms wish to stay at the technological cutting edge, they need to engage with the researchers and expertise all around the world, not be separated from it. Also, by basing some of their R&D in other countries, US multinationals are building connections to supply chains and to consumers in those markets. 

Thursday, July 4, 2019

"Loyalty to the Nation All the Time, Loyalty to the Government When it Deserves It."

Mark Twain wrote an essay back in 1905 called "The Czar's Soliloquy" (North American Review, Vol. 180, No. DLXXX).  The essay was triggered by a sentence in the London Times, reporting: "After the Czar's morning bath it is his habit to meditate an hour before dressing himself." Twain imagined that the Czar, standing naked in front of a mirror, was for a few moments honest with himself about the injustices and cruelties that he had allowed and perpetrated, and hoped for a better future. Imagining the Czar's words to himself, Twain wrote:
There are twenty-five million families in Russia. There is a man-child at every mother's knee. If these were twenty-five million patriotic mothers, they would teach these man-children daily, saying: "Remember this, take it to heart, live by it, die for it if necessary: that our patriotism is medieval, outworn, obsolete; that the modern patriotism, the true patriotism, the only rational patriotism, is loyalty to the Nation all the time, loyalty to the Government when it deserves it."
On the Fourth of July in particular, it makes me sad to run into people whose patriotism ebbs and flows according to what political party occupies the White House. There ought to be a large and real line between support of whoever is in government at a particular time and a broader patriotism. A country is a mixture of people, ideals, geography, history, cultures, and more. It should be possible to love your country, whether your feelings about the government are positive, negative, neutral, ambivalent, or don't-give-a-damn.