Monday, July 22, 2019

Some Snapshots of University Endowments

How much money do major universities and colleges have in their endowments? How are they investing the money? What returns are they earning? The National Association of College and University Business Officers does a survey of these questions each year, and here are some results for 2018. 

Here's a list of the 40 largest endowments for institutions of higher education. Harvard tops the list. It should be noted that these total endowments don't adjust for number of students. For example, the University of Richmond, which is #40 on this list, has an endowment of $686,000 per student, while the University of Pennsylvania, #7 on this list, has an endowment of $602,000 per student. Princeton has the highest endowment per student at $3.1 million.
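The per-student comparison is simple arithmetic: divide total endowment by enrollment. A minimal sketch, using rounded, hypothetical figures for illustration only (not the NACUBO survey numbers):

```python
# Illustrative sketch: per-student endowment from total endowment and
# enrollment. The figures below are hypothetical round numbers, not the
# actual NACUBO survey data.
institutions = {
    # name: (total endowment in $ billions, enrollment in thousands)
    "Hypothetical U (large)": (13.8, 23.0),
    "Hypothetical College (small)": (2.4, 3.5),
}

for name, (endowment_bn, students_k) in institutions.items():
    per_student = endowment_bn * 1e9 / (students_k * 1e3)
    print(f"{name}: ${per_student:,.0f} per student")
```

The point of the calculation is that a mid-sized endowment at a small school can easily exceed, on a per-student basis, a much larger endowment at a big university.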
How concentrated are endowments among the big institutions? Total endowments for all universities and colleges sum to $616 billion. The top 10 on the list above account for more than one-third of total endowments. The 104 institutions with endowments of more than $1 billion account for more than three-quarters of total endowments. More than two-thirds of all endowments are at private institutions.
How do these institutions invest their endowments? Institutions with big endowments are much more likely to use "alternative strategies," and less likely to be in domestic stocks. "Alternative" refers to "Private equity (LBOs, mezzanine, M&A funds, and international private equity); Marketable alternative strategies (hedge funds, absolute return, market neutral, long/short, 130/30, and event-driven and derivatives); Venture capital; Private equity real estate (non-campus); Energy and natural resources (oil, gas, timber, commodities and managed futures); and Distressed debt."

On average, institutions with endowments above $1 billion also earn higher returns. However, as the rows at the bottom show, any college that had invested its endowment entirely in the S&P 500 10 years ago would have done considerably better than the average over any of the time horizons shown here.
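Compounding makes even a modest gap in annualized returns add up over a decade. A quick sketch of the mechanics, using stand-in rates that are assumptions for illustration, not the actual NACUBO or S&P 500 figures:

```python
# Sketch: how a gap in annualized returns compounds over a decade.
# The 5.8% and 10.0% rates are hypothetical stand-ins, not the actual
# NACUBO survey or S&P 500 returns.
initial = 1_000_000_000  # a $1 billion endowment
years = 10

endowment_return = 0.058  # assumed average endowment return
sp500_return = 0.100      # assumed S&P 500 return

endowment_value = initial * (1 + endowment_return) ** years
sp500_value = initial * (1 + sp500_return) ** years

print(f"Endowment-style return: ${endowment_value:,.0f}")
print(f"S&P 500-style return:   ${sp500_value:,.0f}")
print(f"Shortfall after {years} years: ${sp500_value - endowment_value:,.0f}")
```

Under these assumed rates, a roughly four-percentage-point annual gap leaves the hypothetical endowment hundreds of millions of dollars behind the index after ten years.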


Of course, it's always easy to note in retrospect that some alternative investment choice would have performed better. Back in 2008, it certainly wasn't clear to many investors how much stock markets would rebound if and when the Great Recession ended. Moreover, a number of large-endowment Ivy League schools had had great success with alternative investment categories in the 1990s and into the early 2000s. For a useful discussion of college endowment returns from 1992-2005, I recommend Josh Lerner, Antoinette Schoar, and Jialan Wang on "Secrets of the Academy: The Drivers of University Endowment Success," which appeared in the Summer 2008 issue of the Journal of Economic Perspectives.

But it still seems worth noting that college endowments--which often hire high-priced talent to make these decisions--have substantially underperformed the S&P 500 benchmark for the last decade. If they had consistently overperformed the benchmark, I'm sure we'd be hearing a lot more about their investment strategies!

Friday, July 19, 2019

Antitrust in the Digital Economy

Discussions of antitrust and the FAGA companies--that is, Facebook, Amazon, Google, and Apple--often sound like a person with a hammer who just wants to hit something. Here's Peggy Noonan writing in the Wall Street Journal (June 6, 2019):
But the mood in America is anti-big-tech. Everyone knows they're too powerful, too arrogant, loom too large in public life. ... Here's what they [Congress] should be thinking: Break them up. Break them in two, in three; regulate them. Declare them to be what they've so successfully become: once a pleasure, now a utility. It all depends on Congress, which has been too stupid to move in the past and is too stupid to move competently now. That's what's slowed those of us who want reform, knowing how badly they'd do it. Yet now I find myself thinking: I don't care. Do it incompetently, but do something.
When it comes to regulation of the big digital firms, we may end up with incompetence at the end of the day. But first, could we at least take a stab at thinking about what competent antitrust and regulation might look like? For guidance, I've seen three reports so far this year on antitrust and competition in the digital economy:
The three reports are all at least 100 pages in length. They are all written by prominent economists, and they have a lot of overlap with each other. Here, I'll just focus on some of the main points that caught my eye when reading them. But be aware that the topics and terms mentioned here typically come up in at least two of the three reports, and often in all three.

Antitrust Policy is Not a One-Size-Fits-All Solution for Everything

Antitrust policy is about attempting to ensure that competition arises. Pace Noonan, it's not about whether the firms or the people running them are powerful, arrogant, or large.

Digital technologies raise all sorts of broad social issues that are not directly related to competition. For example, the appropriate treatment of personal information would still be an issue if there were four Facebook-like firms and five Google-like firms all in furious competition with each other. Indeed, it's possible that a bunch of new firms competing in these markets might be more aggressive in hoovering up personal information and passing it along to advertisers and others. The question of whether or when certain content should be blocked from digital sites will still be an issue no matter how many firms are competing in these markets.

Digital firms use algorithms in many of their decisions about marketing and price, and such algorithms can end up building in various forms of bias. This is a problem that will exist with or without competition. The forces of corporate lobbying on the political process are a legitimate public issue, regardless of whether the lobbying comes from many smaller firms, or a larger firm, or an industry association.

In other words, the digital economy raises lots of issues of public interest. Responding to those issues doesn't just mean flailing around with an antitrust hammer, hoping to connect with a few targets, but instead thinking about what problems need to be addressed and what policy tools are likely to be useful in addressing them.

What is the Antitrust Issue with Digital Firms?

In the US economy, at least, being big and being profitable are not violations of existing antitrust law. If a firm earns profits and becomes large by providing consumers with goods and services that they desire, at a price consumers are willing to pay, that's fine. But if a firm earns profits and becomes large by taking actions that hinder and block the competition, those anticompetitive actions can be a violation of antitrust law. The challenge with big digital firms is where to draw the line.
When the internet was young, two decades ago, there was a widespread belief that it would not be a hospitable place for big companies. [T]he economic literature of the beginning of the 21st century assumed that competition between online firms would arise as consumers hopped from site to site, easily comparing their offers. The reality however quickly turned out to be very different. Very early in the history of the Internet, a limited number of “gateways” emerged. With the benefit of hindsight, this might not be too surprising. Users have limited time and need curators to help them navigate the long tail of websites to find what they are looking for. These curators then developed a tendency to keep users on their platform, and by the end of the 1990s, it was common place to speak about AOL’s “walled garden”. AOL’s market power however rested in great part on its role as an Internet service provider and both competition in that domain and, according to some observers, strategic mistakes after its merger with Time Warner eroded its power.
Fast forwarding to today, a few ecosystems and large platforms have become the new gateways through which people use the Internet. Google is the primary means by which people in the Western world find information and contents on the Internet. Facebook/WhatsApp, with 2.6 billion users, is the primary means by which people connect and communicate with one another, while Amazon is the primary means for people to purchase goods on the Internet. Moreover, some of those platforms are embedded into ecosystems of services and, increasingly, devices that complement and integrate with one another. Finally, the influence of these gateways is not only economic but extends to social and political issues. For instance, the algorithms used by social media and video hosting services influence the types of political news that their users see while the algorithm of search engines determines the answers people receive to their questions.
As all of these reports note, it is now apparent that when a large digital company becomes established, entry can be hard. There is a "network effect," where a system that has more users is also attractive to more users. There's a "platform" effect, where most buyers and sellers head for Amazon because that is where most buyers and sellers already are. There are economies of scale, where the startup costs of a new platform or network can be fairly high, but adding additional users has a marginal cost of near-zero. There are issues involving the collection and analysis of data, where more users mean more attraction to advertisers and more data, which can then be monetized.

None of these issues are new for antitrust. But the big digital firms bring these issues together in some new ways. The markets become prone to "tipping," which means that when an established firm gets a certain critical mass, other firms can't attract enough users to survive. As economists sometimes say, it becomes a case where there is competition for the market, in the sense of which firm will become dominant in that market, but then there is little competition within the market.
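The "tipping" logic can be made concrete with a stylized toy model (my illustration, not drawn from any of the three reports): users value a platform for both its stand-alone quality and its installed base, and each period a fraction of users re-choose whichever platform currently offers more. The parameter values are arbitrary assumptions.

```python
# Stylized tipping model (illustration only, not from the reports):
# each period, a fraction of users re-choose between two platforms,
# preferring the one whose quality-plus-network-benefit is higher.
def simulate(share_a, quality_a=1.0, quality_b=1.05,
             network_weight=0.5, switch_rate=0.1, periods=200):
    """Return platform A's final market share."""
    for _ in range(periods):
        utility_a = quality_a + network_weight * share_a
        utility_b = quality_b + network_weight * (1 - share_a)
        # Users who re-choose this period all pick the higher-utility platform.
        target = 1.0 if utility_a > utility_b else 0.0
        share_a += switch_rate * (target - share_a)
    return share_a

# With a modest head start, A's network advantage can outweigh B's
# higher stand-alone quality, and the market tips to A...
print(simulate(share_a=0.60))
# ...but from an even split, the same market tips the other way.
print(simulate(share_a=0.50))
```

The model illustrates competition *for* the market rather than *within* it: the outcome hinges on which firm first reaches critical mass, not on an ongoing contest over price and quality.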

One consequence is that when a big firm becomes established, entry is hard and future competition can be thwarted. Thus, there is reason to be concerned that consumers may suffer along various dimensions: price, quality, and innovation. On the innovation front, there is evidence that startup activity in the areas surrounding a dominant firm slows down dramatically. From the Scott Morton et al. report:
By looking at the sub-industries associated with each firm—social platforms (Facebook), internet software (Google), and internet retail (Amazon)—a different trend emerges. Since 2009, change in startup investing in these sub-industries has fared poorly compared to the rest of software for Google and Facebook, the rest of retail for Amazon, and the rest of all VC for each of Google, Facebook, and Amazon. This suggests the existence of so-called “kill-zones,” that is, areas where venture capitalists are reluctant to enter due to small prospects of future profits. In a study of the mobile app market, Wen Wen and Feng Zhu come to a similar conclusion: Big tech platforms do dampen innovation at the margin. Their study analyzed how Android app developers adjust their innovation strategies in response to entry (or threat of entry) by Google ...
Some Oddities of a Market with a Price of Zero

Big digital firms provide a remarkable array of services with a market price of zero to consumers. They make their money by attracting users and selling ads, or by charges to producers. Is this a case where the market price should actually be negative--that is, where the big tech firms should be paying consumers? The Scott Morton et al. report offers some interesting thoughts along these lines:
Barter is a common way in which consumers pay for digital services. They barter their privacy and information about what restaurants they would like to eat in and what goods they would like to buy in exchange for digital services. The platform then sells targeted advertising, which is made valuable by the bartered information. But, in principle, that information has a market price. It is not easy to see if the value of any one consumer’s information is exactly equal to the value of the services she receives from the platform. However, many digital platforms are enormously profitable, and have been for many years, which suggests that in aggregate we do know the answer: the information is more valuable than the cost of the services. ...
Online platforms offer many services for zero monetary price while they try to raise participation in order to generate advertising revenue. Free services are prevalent on the internet in part because internet firms can harness multi-sided network externalities. While the low price can be a blessing for consumers, it has drawbacks for competition and market structure in a world where institutions have not arisen to manage negative prices. Because there is currently no convenient way to pay consumers with money, platforms are able to mark up the competitive price all the way to zero. This constraint can effectively eliminate price competition, shifting the competitive process to quality and the ability of each competitor to generate network externalities. Depending on the context this may favor or impede entry of new products. For example, entry will be encouraged when a price of zero leads to supra-competitive profits, and impeded when a zero price prevents entrants from building a customer base through low price. Moreover, unlike traditional markets where several quality layers may coexist at different price levels (provided that some consumers favor lower quality at low price), markets where goods are free will be dominated by the best quality firm and others may compete only in so far as they can differentiate their offers and target different customers. This strengthens the firm’s incentive to increase quality through increasing fixed costs in order to attract customers (known as the Sutton sunk cost effect) and further pushes the market toward a concentrated market structure. ...
It is a puzzle that, to date, no entrepreneur or business has found a way to pay consumers for their data in money. For example, a consumer’s wireless carrier could aggregate micropayments across all manner of digital destinations and apply the credit to her bill each month. ... Furthermore, a carrier that could bargain effectively with platforms on behalf of its subscribers for high payments would likely gain subscribers. Notice that an easy method to pay consumers, combined with price competition for those consumers, might significantly erode the high profits of many incumbent platforms. Platforms likely have no economic incentive to work diligently to operationalize negative prices.
Of course, this idea of a market in which consumers barter attention for zero-marginal-price services isn't new. Television, and radio before it, operated on the same basic business model.

Categories of Data

The ability to collect data from users, and then to collate it with other information and pass it along to advertisers, is clearly a central part of the business model for digital firms. There's a lot of quick-and-easy talk about "my" data and what "they" should or shouldn't be able to do with it. The Cremer et al. report offers some useful drilling down into different ways of acquiring data and different ways in which data might be used. Sensible rules will take these kinds of distinctions into account. They note:
Data is acquired through three main channels. First, some data is volunteered, i.e. intentionally contributed by the user of a product. A name, email, image/video, calendar information, review, or a post on social media would qualify as volunteered data. Similarly, more structured data—directly generated by an individual—like a movie rating, or liking a song or post would also fall in the volunteered data category. 
Second, some data is observed. In the modern era, many activities leave a digital trace, and “observed data” refers to more behavioural data obtained automatically from a user’s or a machine’s activity. The movement of individuals is traced by their mobile phone; telematic data records the roads taken by a vehicle and the behaviour of its driver; every click on a page web can be logged by the website and third party software monitors the way in which its visitors are behaving. In manufacturing, the development of the Internet of Things means that every machine produces reams of data on how it functions, what its sensors are recording, and what it is currently doing or producing.
Finally, some data is inferred, that is obtained by transforming in a non-trivial manner volunteered and/or observed data while still related to a specific individual or machine. This will include a shopper’s or music fan’s profiles, e.g. categories resulting from clustering algorithms or predictions about a person’s propensity to buy a product, or credit ratings.  The distinction between volunteered, observed and inferred data is not always clear. ...
[W]e will also consider how data is used. We will define four categories of uses: non-anonymous use of individual-level data, anonymous use of individual level data, aggregated data, and contextual data.
The first category, non-anonymous use of individual-level data, would be any individual-level data (volunteered, observed, or inferred) that was used to provide a service to the individual. For instance, a music app uses data about the songs a user has listened to in order to provide recommendations for new artists he or she might enjoy. Similarly, a sowing app uses data from farm equipment to monitor the evolution of the soil. Access to individual-level data can often be essential to switch service or to offer a complementary service.
The second category, anonymous use of individual-level data, would include all cases when individual-level data was used anonymously. Access to the individual-level data is necessary but the goal is not to directly provide a service to the individual who generated the data in the first place. These would typically include cases of data being used to train machine-learning algorithms and/or data used for purposes unrelated to the original purposes for which the data has been collected. An example of this would be the use of skin image data to train a deep learning (Convolutional Neural Network) algorithm to recognise skin lesions or the use of location data for trading purposes. In specific cases, the information extracted, e.g. the trained algorithm, can then be used to provide a better service to some of the individuals who contributed data. For instance, film reviews are used collectively to provide every individual with better recommendations (collaborative filtering). For the anonymous use of individual-level data, access to a large dataset may be essential to compete.
The third category, aggregated data, refers to more standardised data that has been irreversibly aggregated. This is the case for e.g. sales data, national statistics information, and companies’ profit and loss statements. Compared to anonymous use of individual-level data, the aggregation is standard enough that access to the individual-level data is not necessary.
Finally, contextual data refers to data that does not derive from individual-level data. This category typically includes data such as road network information, satellite data and mapping data.
It's interesting to consider whether people's objections to the use of their data are rooted purely in principle, or are instead about not being paid. If firms used your observed data on location, shopping, and so on, but only sold aggregated versions of that data and paid you for the amount of data you contributed to the aggregate, would you still complain?

Some Policy Steps

After this probably over-long mention of what seemed to me like interesting points, what are the most useful potential margins for action in this area--at least if we want to take competent action? Here are two main categories of policies to consider: those related to mergers and anticompetitive behavior, and those related to data.

1) Big dominant firms deserve heightened scrutiny for actions that might affect entry and competition. This is especially true when a firm has a "bottleneck" position, where everyone (or almost everyone) needs to go through that firm to access a certain service. One particular concern here is that big dominant firms buy up smaller companies that might, if they had remained independent, have offered a form of competition or a new platform.

Furman et al. write:
There is nothing inherently wrong about being a large company or a monopoly and, in fact, in many cases this may reflect efficiencies and benefits for consumers or businesses. But dominant companies have a particular responsibility not to abuse their position by unfairly protecting, extending or exploiting it. Existing antitrust enforcement, however, can often be slow, cumbersome, and unpredictable.  ...
Acquisitions have included buying businesses that could have become competitors to the acquiring company (for example Facebook’s acquisition of Instagram), businesses that have given a platform a strong position in a related market (for example Google’s acquisition of DoubleClick, the advertising technology business), and data-driven businesses in related markets which may cement the acquirer’s strong position in both markets (Google/YouTube, Facebook/WhatsApp). Over the last 10 years the 5 largest firms have made over 400 acquisitions globally. None has been blocked and very few have had conditions attached to approval, in the UK or elsewhere, or even been scrutinised by competition authorities.
But along with mergers, there are a variety of other actions that, when used by a dominant firm, may be anticompetitive. One concern is when big dominant firms start offering a range of other services on their own, and then use their dominance in one market to favor their own services in other markets. Another issue is when a big dominant firm chooses a certain technological standard that seems more about blocking competition than advancing its own business. Some dominant platform firms use "best-price clauses," which guarantee that they will receive the lowest possible price from any provider. Such clauses also mean that if a new platform firm starts up, it cannot offer to undercut the original provider on price.

In other words, if a large dominant firm keeps your business (and your attention) by providing you with the quality and price of services you want, and earns high profits as a result, so be it. But if that firm is earning high profits by seeking out innovative ways to hamstring potential competitors, that's a legitimate antitrust problem.

2) Data openness, portability, and interoperability.

Data openness just means that companies need to be open with you about what data of yours they have on hand--perhaps especially if that data was collected in some way that didn't involve you openly handing it over to them. Portability refers to the ability to move your data easily from one digital firm to another. For example, you might be more willing to try a different search engine, or email program, or a different bank or health care provider if all your past records could be ported easily to the new company. Interoperability refers to technical standards that allow your (email, shopping, health, financial, or other) data to be used directly by two different providers, if you desire.

Again, the underlying theme here is that if a big digital firm gets your attention with an attractive package of goods and services, that's fine; but if it holds you in place because it would be a mortal pain to transfer the data it has accumulated on you, that's a legitimate concern.

Finally, I would add that it's easy to focus on the actions of big digital firms that one sees in the headlines, and as a result to pay less attention to other digital economy issues of equal or greater importance. For example, most of us face a situation of very limited competition when it comes to getting high-speed internet access. In many ways, the growth of big digital firms doesn't raise new topics, but it should push some new thinking about those topics. 

Wednesday, July 17, 2019

Interview with Enrico Moretti on the Rising Importance of Location

David Price interviews Enrico Moretti in Econ Focus, a publication of the Federal Reserve Bank of Richmond (First Quarter 2019, pp. 18-23). From the intro to the interview:
Geographic differences in economic well-being, it seems, have become increasingly salient in American policy and political conversation. These differences are a longtime concern of University of California, Berkeley economist Enrico Moretti. In his research, he has found that the sorting of highly educated Americans — and high-paying jobs requiring a lot of education — into certain communities has led to other communities falling behind. ... Moretti's interest in American geographical sorting began during his days as a Ph.D. student at Berkeley, where he arrived after his undergraduate education in his native Milan. At first, he just wanted to fill in some blanks in his knowledge of America. "I started looking at data from the U.S. census," he says. "Just out of curiosity, wanting to know more about this country, I started looking at the different city averages of whatever the census could measure — earnings, level of education of the workforce, the type of industry. I suspected there were big differences, but I didn't know how large the differences were."
Here are some of Enrico's comments that particularly caught my eye, but there's much more in the interview itself. (Full disclosure: Enrico has been the editor of the Journal of Economic Perspectives for the last five years, which makes him my boss.)
The explosion of the internet, email, and cellphones democratizes the access to information. In the 1990s, people thought it would also make the place where the company is located or where workers live much less important. ... 
But what we have seen over the past 25 years is that the opposite is true: Location has become more important than ever before, especially for highly educated workers. The types of jobs and careers that are available in some American cities are increasingly different from the ones available in other American cities.
There's nothing new in the fact that some areas are economically more dynamic than others and offer better labor market opportunities; that's always been the case. What is different today is how large the difference between the most successful labor markets and the least successful labor markets has become and how fast they are growing apart. It's a paradox because it is true that we can have access to a lot of information and communicate easily from everywhere in the world, but at the same time, location remains crucial for worker productivity and for economic success.
In the first three decades after World War II, manufacturing was the most important source of high-paying jobs in the United States. Manufacturing was geographically clustered, but the amount of clustering was limited. Over the past 30 years, manufacturing employment has declined, and the innovation sector has become a key source of good jobs. The innovation sector tends to be much more geographically clustered. Thus, in the past, having access to good jobs was not tied to a specific location as much as it is today. I expect the difference in wages, earnings, and household incomes across cities to continue growing at least for the foreseeable future. ...
[W]e see some agglomeration of traditional manufacturing firms, but when we compare it to agglomeration of firms in the innovation sector, the latter is much stronger. I have just finished a new project where I study how locating in a high-tech cluster improves the productivity and creativity of inventors. If you look at the major fields — computer science, semiconductor, biology, and chemistry — you see a concentration of inventors that is staggering. In computer science, the top 10 cities account for 70 percent of all the innovation, as measured by patents. For semiconductors, it's 79 percent. For biology and chemistry, it's 59 percent. This means that the top 10 cities generate the vast majority of innovation in each field. Importantly, the share of the top 10 cities has been increasing since 1971, indicating increased agglomeration. ...
Companies in industries that are very advanced and very specialized find it difficult to locate in areas where they would be isolated. Nobody wants to be the first to move to a city because they're going to have a hard time in finding the right type of specialized workers. And it's hard for workers with specialized skills to be first because they're going to have a hard time finding the right job. It's an equilibrium in which areas that have a large share of innovative employers and highly specialized workers tend to attract more of both. It is difficult for areas that don't have a large share of innovative employers and highly specialized workers to jump-start that process. Ultimately, that is what generates the divergence across cities. ...
In a new paper I just finished, I find that by concentrating geographically, high-tech firms and workers become more productive and more innovative, which has aggregate benefits for the national economy. In particular, if you take the current location of inventors in the United States, which is now very concentrated in a handful of locations, and you spread it across all cities, to the point where you equalize the number of inventors in each city, the U.S. aggregate production of innovation in the United States would decline by about 11 percent as measured by number of new patents. Thus, the concentration we observe in tech employment has drawbacks in the sense that it increases inequality across cities, but at the same time, it is good from the point of view of the overall production of innovation in the country. I see this as an equity-efficiency trade-off.

Tuesday, July 16, 2019

Global Perspective on Markets for Sand

Trivia question: measured by volume, which mined product is extracted in the largest quantity? The answer is "sand and gravel," sometimes known in the geology business as "aggregates." In particular, aggregates are used for concrete and asphalt, and demand for these products from China and other emerging markets has skyrocketed. Sand is also used as part of hydraulic fracturing, so in the United States demand from that source has surged as well. And sand and gravel are also widely used for purposes ranging from land reclamation and water treatment to industrial production of electronics, cosmetics, and glass.

Sand and gravel are being mined at an exceptionally rapid rate. There is strong anecdotal evidence that environmental harms are occurring in some locations, and that more locations are threatened, but systematic analysis has been missing. The UN Environment Programme offers an update on the situation in "Sand and Sustainability: Finding New Solutions for Environmental Governance of Global Sand Resources" (May 2019).

Here's a sense of the scale of the issue (citations omitted for readability):
An estimated 40-50 billion metric tonnes is extracted from quarries, pits, rivers, coastlines and the marine environment each year. The construction industry consumes over half of this volume annually (25.9 billion to 29.6 billion tonnes in 2012) and could consume even more in future. Though little public data exists about extraction volumes, sources and uses we know that, with some exceptions, most sand and gravels extracted from natural environments are consumed regionally because of the high costs of transport. For example, two thirds of global cement production occurs in China (58.5%) and India (6.6%).
Global concrete production has tripled since the early 1990s.
One obvious question here is how it is remotely possible to run out of sand, given the existence of deserts. But it turns out that sand recently shaped by water is what is economically useful. As the report notes: "Desert sand, though plentiful, is unusable for most purposes because its wind-smoothed grains render it non-adherent for the purposes of industrial concrete." 

Mining enormous quantities of sand from beaches and riverbanks, as well as dredging it from offshore, is linked to a wide array of environmental damages. Here's an overview, again with citations omitted:
Aggregate extraction in rivers has led to pollution and changes in pH levels, instability of river banks leading to increased flood frequency and intensity, lowering of water aquifers, exacerbating drought occurrence and severity. Damming and extraction have reduced sediment delivery from rivers to many coastal areas, leading to reduced deposits in river deltas and accelerated beach erosion. This adds to effects of direct extraction in onshore sand extraction in coastal dune systems and nearshore marine dredging of aggregates, which may locally lead to long-term erosion impacts. Nearshore and offshore sand extraction in New Zealand continues despite considerable uncertainty of the environmental and the cumulative effects of mining, climate change and urbanisation of the coast.
Tourism is affected by loss of key species and beach erosion, while both freshwater and marine fishing — both traditional and commercial — has been shown to be affected through destruction of benthic fauna that accompanies dredging activities. Agriculture land has been affected by river erosion in some cases and the lowering of the water table. The insurance sector is affected through exacerbation of the impact of extreme events such as floods, droughts,and storm surges which can affect houses and infrastructure. A decrease in bed load or channel shortening can cause downstream erosion including bank erosion and the undercutting or undermining of engineering structures such as bridges, side protection walls and structures for water supply. ... For example, the use of sand in reclamation practices is thought to have led to increased turbidity and coral reef decline in the South China Sea. In the Mekong basin, the impact of sand mining in Laos, Thailand, and Cambodia is felt on the Vietnam delta erosion. Singapore demand for sand and gravel in land reclamation projects have triggered an increase in sand mining in Cambodia and Vietnam ... 
For a more detailed overview of environmental risks, Lois Koehnken has written "Impacts of Sand Mining On Ecosystem Structure, Process, and Biodiversity in Rivers" (2018) for the World Wildlife Fund. 

The potential answers here are conceptually straightforward, but can be difficult to implement. The implications of large-scale sand-mining should be considered before the actual mining starts. It would be useful to think about recycling sand-related products to other uses, where possible, like finding ways to re-use old concrete or waste asphalt. Research is ongoing to find alternative substances that could be used in concrete to replace sand: some suggested possibilities include crushed stone, ash left over after waste incineration, stainless steel slag, coconut shells, sawdust, old tires, and more. 

One interesting suggestion is to emphasize "permeable pavement," in which a substantial amount of water can sink through pavement, into the earth, and even reach the groundwater, rather than just running off. The UNEP report notes:
Permeable pavement (sometimes called porous pavement) ... is used in cities around the world, particularly in new cities projects in China and India to reduce surface water runoff volumes and rates by allowing water to infiltrate soil rapidly, helping to reduce flooding while replenishing groundwater reserves. In many cases, permeable roadways, pedestrian walkways, playgrounds, parking zones can also act as water retention structures, reducing or eliminating the need for traditional stormwater management systems. ...  Additional proven benefits include improved water quality, reduced pollutant runoff into local waterbodies, reduced urban heat island effects (great advantage for adaptation to climate change), lower cost of road salting (in cold environments), among others. Less noted is the indirect contribution to reduced demand for natural sand both in constructing these permeable surfaces, and in reducing the need for built drainage systems. Most permeable pavement designs – porous (or pervious) concrete, interlocking pavement slabs, crushed rock and gravels, or clay, amongst other materials – do not use fine aggregates (sand). ...  Recent experimentation has shown that introducing end-of-life tyre aggregates can increase flexibility of rigid permeable pavement systems, and with that their capacity to cope with ground movement or tree root systems. 
Mette Bendixen, Jim Best, Chris Hackney and Lars Lønsmann Iversen provide a quick readable overview of these issues in "Time is running out for sand" (Nature, July 2, 2019). They offer these projections for sand demand and prices (drawing on underlying research here).


I tend to think of sand and gravel as a nearly infinite resource. But when the world is extracting 50 billion metric tonnes of sand and gravel every year, with that total rising quickly, the effects are far from imperceptible.

For previous posts at this website about global markets for sand, see

Monday, July 15, 2019

Why Don't People Buy More Annuities?

Among economists, it's sometimes known as the "annuities puzzle": Why don't people buy annuities as frequently as one might expect?

In May 2019, Brookings and the Kellogg Business School Public-Private Initiative held a conference on the subject of “Retirement Policy and Annuitization: A View from the Experts.” Three papers from that conference are available: "Can annuities become a bigger contributor to retirement security?" by Martin Neil Baily of Brookings and Benjamin H. Harris (June 2019); "Automatic enrollment in 401(k) annuities: Boosting retiree lifetime income," by Vanya Horneff, Raimond Maurer, and Olivia S. Mitchell (June 2019); and "Using behavioral insights to increase annuitization rates: The role of framing and anchoring," by Abigail Hurwitz (June 2019).

An annuity involves making a substantial payment in the present, and then receiving a stream of payments in the future. For example, someone retiring at age 65 or 70 might take a chunk of their retirement savings (not all, but a reasonably sized chunk) and buy an annuity that starts payments at age 80 or 85 and continues those payments until death. If you wish, you can buy an annuity where the benefits rise over time with inflation. 
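As a rough sense of the mechanics, an actuarially fair annuity sets the annual payout so that the premium just equals the expected present value of the future payments, weighted by the chance the buyer is still alive to collect them. Here is a minimal sketch; the survival curve, discount rate, and premium are all invented for illustration, not market figures.

```python
# Sketch of actuarially fair pricing for a deferred life annuity.
# All numbers (survival probabilities, discount rate, premium) are
# illustrative assumptions, not actual market figures.

def fair_annual_payout(premium, survival_probs, rate, start_year):
    """Annual payout such that premium = expected PV of payments.

    survival_probs[t] = probability the buyer is alive t years
    after purchase; payments begin in year `start_year`.
    """
    pv_per_dollar = sum(
        survival_probs[t] / (1 + rate) ** t
        for t in range(start_year, len(survival_probs))
    )
    return premium / pv_per_dollar

# Example: buy at 65, payments start at 80 (year 15), with a simple
# made-up survival curve declining toward zero around age 100.
survival = [max(0.0, 1 - 0.028 * t) for t in range(36)]  # ages 65..100
payout = fair_annual_payout(100_000, survival, rate=0.03, start_year=15)
print(f"Annual payout per $100,000 premium: ${payout:,.0f}")
```

The deferral and the mortality weighting are why a deferred annuity can pay far more per surviving year than the same money spread evenly over a fixed horizon: buyers who die early effectively subsidize those who live long, which is the insurance at work.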

While "life insurance" pays out after you die, annuities are a form of "continued living insurance," which pays out as long as you remain alive. In some contexts, annuities are quite popular. For example, Social Security is an annuity-style program; that is, you pay into it during your lifetime, but at retirement, the benefits arrive in an inflation-adjusted stream of payments until death, not a one-time chunk of cash that could be immediately spent. Many workers also liked the traditional "defined benefit" pension plan, in which an employer pays into a fund that provides benefits until death. A defined benefit pension plan acts like an annuity--albeit one that is purchased and financed indirectly through an employer.

However, one of the big changes in retirement savings in the last few decades is that the classic "defined benefit" pension plan is in decline, and workers instead have a retirement account, like a 401k or an IRA, in which they have saved money for retirement. Given that the drop in consumption from outliving one's assets could be very large, economic models often suggest that people should be more willing than they seem to be to take some portion of the money in this account and use it to buy annuities. The Horneff, Maurer, and Mitchell paper runs through a set of illustrative calculations, suggesting that most people would benefit if they put 10% of retirement wealth into an annuity--and many would benefit from putting in a larger share.
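The "outliving one's assets" risk can be illustrated with a toy Monte Carlo; every parameter below (wealth, withdrawal, return, lifespan distribution) is a hypothetical illustration, not a calibrated retirement model.

```python
# Toy Monte Carlo of longevity risk. Every parameter is a made-up
# illustration, not a calibrated model of actual retirees.
import random

random.seed(0)

def runs_out(wealth, withdrawal, rate, lifespan):
    """True if fixed annual withdrawals exhaust wealth before death."""
    for _ in range(lifespan):
        wealth = wealth * (1 + rate) - withdrawal
        if wealth < 0:
            return True
    return False

trials = 10_000
ruined = 0
for _ in range(trials):
    # Years spent in retirement: uncertain, averaging ~20 with wide spread.
    lifespan = max(1, round(random.gauss(20, 8)))
    if runs_out(wealth=500_000, withdrawal=35_000, rate=0.02,
                lifespan=lifespan):
        ruined += 1

print(f"Share of retirees outliving their savings: {ruined / trials:.1%}")
```

The point is only qualitative: with an uncertain lifespan, a fixed self-funded withdrawal plan leaves a meaningful chance of running dry, which is precisely the tail risk an annuity insures away by paying for as long as you live.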

There aren't official statistics on how many Americans buy annuities, but studies suggest that it's probably 10% or less. So why don't people annuitize more of their retirement wealth? There are a number of possible explanations.

Baily and Harris point out that people often tend to use "mental accounts": for example, they think of certain income as being available for short-term consumption, or medium-term savings (say, for home repairs or a vacation), or for retirement savings. Many people think of the retirement savings in their 401k or IRA as their own spending money that they control, not as money that should be used to purchase "continuing life insurance" via an annuity. (In contrast, most people don't think of Social Security or defined pension benefits as their personal money in the same way; instead, people have already mentally handed off the control over those funds to a government or employer account.)

Another issue is that people worry that they will buy an annuity but then die quickly and "lose" money on the transaction. I think of this as a standard problem with every form of insurance. Most insurance--life, health, car, home--pays off when something bad happens and you need to make a claim. In a way, the best outcome is that I spend my lifetime paying for all these forms of insurance, and never end up using any of them. But of course, if I never make a claim on my insurance, I feel as if I wasted the money, because the risks never actually happened. Indeed, what I sometimes call the "unavoidable reality of insurance" is that there will be a relatively small number of people who get large insurance payouts--and they are the unlucky ones because something bad happened in their lives. The "lucky" ones pay and pay and pay those insurance premiums, and almost never get anything back. No wonder insurance is often unpopular! And it's no wonder that many people often obtain insurance under some form of pressure: you need car insurance to drive your car, and you need home insurance to get a mortgage, and your employer provides health insurance as part of your job compensation, and you are required by law to pay into Social Security or Medicare. With annuities, the fear is that you might be buying one more form of insurance that won't pay off.

Yet another issue is that annuities can be complex financial contracts, and hard for an average person to evaluate. How long do you pay into the annuity? When does it start paying out? Does it pay out for a fixed period, or over the rest of one's life? Are the payouts adjusted for inflation? How large a commission is being charged by the seller? Does the annuity include a minimum payout if you die soon--which could be left to one's heirs? What happens if the company that sold you the annuity goes broke a decade or two in the future? How will the tax code treat income from annuities in the future? In the past, some annuities were not an especially good financial deal, in the sense that someone with the discipline to withdraw money from their retirement accounts in a slow-and-steady way had a high probability of ending up better off than someone who purchased a life annuity.

What might be done to tip the balance, at least a little bit, toward more people buying annuities?

One option is a "nudge" approach in which the default would be that a small proportion of retirement accounts is automatically annuitized at retirement. A body of social science research suggests that lots of people would just go with the default, and would end up being pleased that they had done so. But anyone who didn't want the annuity could opt out with a phone call. Annuities are a default option in Switzerland, for example.

Another option is to offer a different framing of the choices. For example, instead of "buying an annuity" perhaps there should be an option to "buy higher Social Security benefits for the rest of your life." Some survey evidence suggests that when annuities are described in terms like "spend" and "payment," people are more attracted than if they are described by terms like "invest" and "earning." It would probably also be useful if the presentation of annuities could be standardized, so potential consumers could more easily compare what they are buying.

There are some interesting international comparisons discussed in the Hurwitz paper. She writes: "The United Kingdom had a mandatory annuity law that was repealed in 2014. The Netherlands mandates full annuitization, Chile offers only annuities or phased withdrawals, Israel adopted a mandatory minimum annuity requirement in 2008 ..." There is some evidence that when annuities have become more common in a country, then people often keep on choosing them even if the default or requirement to do so is loosened.

For a couple of previous posts on the "annuities puzzle," see:

Friday, July 12, 2019

China's Changing Relationship with the World Economy

China's economy is simultaneously huge in absolute size and lagging far behind the world leaders on a per person basis. According to World Bank data, China's GDP (measured in current US dollars) is $13.6 trillion, roughly triple the size of Germany's or Japan's, but still in second place among countries of the world behind the US GDP of $20.4 trillion. However, measured by per capita GDP, the World Bank data shows that China is at $9,770, about one-sixth of the US level of $62,641.

Any evaluation of China's economy finds itself bouncing back and forth between the enormous size that has already been achieved and the possibility of so much more growth and change in the future. This pattern keeps recurring in "China and the world: Inside the dynamics of a changing relationship," written by the team of Jonathan Woetzel, Jeongmin Seong, Nick Leung, Joe Ngai, James Manyika, Anu Madgavkar, Susan Lund, and Andrey Mironenko at the McKinsey Global Institute (July 2019).

Here's one illustration. The figure shows the total GDP of China, Japan, and Germany as a share of the US level, which is set at 100%. On this figure, Germany's GDP as a share of the US level peaked in 1979, and Japan's peaked in 1991.

What's interesting about China's situation is not just that the level has risen so sharply. In addition, the peaks for Germany and Japan happened when their levels of per capita GDP were similar to or higher than the US level (given the prevailing exchange rates at the time). China's per capita GDP is much lower, suggesting much more room to grow. Similarly, the urbanization rates for Germany in 1979 and Japan in 1991 were in the 70 percent range, while China's urbanization rate is only 58%--again suggesting considerably more room for China to grow.

Here are some other examples of the changes in China that have already happened, with a hint of the potential for much larger changes still remaining. The MGI report notes:
Trade. ... China became the world’s largest exporter of goods in 2009, and the largest trading nation in goods in 2013. China’s share of global goods trade increased from 1.9 percent in 2000 to 11.4 percent in 2017. In an analysis of 186 countries, China is the largest export destination for 33 countries and the largest source of imports for 65. ... However, China’s share of global services trade is 6.4 percent, about half that of goods trade.
Firms. ... Consider that in 2018 there were 110 firms from the mainland China and Hong Kong in the Global Fortune 500, getting toward the US tally of 126. ... However, although the share of these firms’ revenue earned outside China has increased, less than 20 percent of revenue is made overseas even by these global firms. To put this in context, the average share of revenue earned overseas for S&P 500 companies is 44 percent. Furthermore, only one Chinese company is in the world’s 100 most valuable brands.
Finance. China was also the world’s second largest source of outbound FDI and the second largest recipient of inbound FDI from 2015 to 2017. ... Foreign ownership accounted for only about 2 percent of the Chinese banking system, 2 percent of the Chinese bond market, and about 6 percent of China’s stock market in 2018. Furthermore, in 2017, its inbound and outbound capital flows (including FDI, loans, debt, equity, and reserve assets) were only about 30 percent those of the United States. ...
Technology. China’s scale in R&D expenditure has soared—spending on domestic R&D rose from about $9 billion in 2000 to $293 billion in 2018—the second-highest in the world—thereby narrowing the gap with the United States. However, China still depends on imports of some core technologies such as semiconductors and optical devices, and intellectual property (IP) from abroad. In 2017, China incurred $29 billion worth of imported IP charges, while only charging others around $5 billion in exported IP charges (17 percent of its imports). China’s technology import contracts are highly concentrated geographically, with more than half of purchases of foreign R&D coming from only three countries—31 percent from the United States, 21 percent from Japan, and 10 percent from Germany.
Culture. China has invested heavily in building a global cultural presence. ... Furthermore, its financing of the global entertainment industry has led to more movies being shot in China: 12 percent of the world’s top 50 movies were shot at least partially in China in 2017, up from 2 percent in 2010. However, significant investment appears to have yet to achieve mainstream cultural relevance globally. Chinese exports of television dramas in terms of the value of exports are only about one-third those of South Korea, and the number of subscribers to the top ten Chinese musicians on a global streaming platform are only three percent those of the top ten South Korean artists, for instance.
The MGI report argues that when looking specifically at trade, technology and financial capital, China's economy is becoming less dependent on the rest of the world, while the rest of the world economy is becoming more dependent on China. For example, one big shift in the last few years is that China's economy has been "rebalancing," which refers to a greater share of China's output going to China's consumers and less to capital investment or exports. This shift also means that rising levels of consumption in China are a major force in driving global consumption of goods and services.
In 11 of the 16 quarters since 2015, domestic consumption contributed more than 60 percent of total GDP growth. In 2017 to 2018, about 76 percent of GDP growth came from domestic consumption, while net trade made a negative contribution to GDP growth. As recently as 2008, China’s net trade surplus amounted to 8 percent of GDP; by 2018, that figure was estimated to be only 1.3 percent—less than either Germany or South Korea, where net trade surpluses amount to between 5 and 8 percent of GDP. Rising demand and the development of domestic value chains in China also partly explain the recent decline in trade intensity at the global level. ...Although it only accounts for 10 percent of global household consumption, China was the source of 38 percent of global household consumption growth from 2010 to 2016, according to World Bank data. Moreover, in some categories including automobiles and mobile phones, China’s share of global consumption is 30 percent or more.
I read now and then about the prospect of China's economy "decoupling" from the US economy. From a US power politics point of view, I think the mental model here is how the economy of the Soviet Union operated in the decades after World War II. Most of the trade of the USSR occurred within its own centrally planned trading bloc of Soviet-controlled countries, the Council for Mutual Economic Assistance. The results in terms of output and quality were so miserably bad that jokes told by Russians about their economy became a staple among economists. Since the fall of the USSR, Russia's economy has staggered from one catastrophe to another (for discussion, see here and here), while occasionally being buoyed up when oil prices are high.

China's situation is very different. Its economy is not reliant on exports of oil or other natural resources. China's government still controls the financial industry and steers funds to state-owned companies, but it is not following a Soviet-style approach to central planning. In the 21st century, China is not isolating itself from the rest of the world economy; rather, it is actively building transportation and trade ties to countries around the world. The education and health levels of China's population are rising rapidly. Future economic growth for China is likely to be slower and bumpier than the pattern of the last 40 years--while still being notably faster on average than the growth of high-income economies like the U.S.

There are a number of hard questions to face about China's rise in the global economy, and many of the hardest ones go well beyond economics. But old mental models drawn from a time when the US was by far the dominant economy in the world and its main geopolitical opponent was the USSR are not likely to be very useful in searching for answers.

Wednesday, July 10, 2019

Is AI Just Recycled Intelligence, Which Needs Economics to Help It Along?

The Harvard Data Science Review has just published its first issue. Many of us in economics are cousins of the burgeoning data science field, and will find it of interest. As one example, Harvard provost (and economist) Alan Garber offers a broad-based essay on "Data Science: What the Educated Citizen Needs to Know." Others may be more intrigued by the efforts of Mark Glickman, Jason Brown, and Ryan Song to use a machine learning approach to figure out whether Lennon or McCartney is more likely to have authored certain songs by the Beatles that are officially attributed to both, in "(A) Data in the Life: Authorship Attribution in Lennon-McCartney Songs."
But my attention was especially caught by an essay by Michael I. Jordan called "Artificial Intelligence—The Revolution Hasn’t Happened Yet," which is then followed by 11 comments: Rodney Brooks; Emmanuel Candes, John Duchi, and Chiara Sabatti; Greg Crane; David Donoho; Maria Fasli; Barbara Grosz; Andrew Lo; Maja Mataric; Brendan McCord; Max Welling; and Rebecca Willett. The rejoinder from Michael I. Jordan will be of particular interest to economists, because it is titled "Dr. AI or: How I Learned to Stop Worrying and Love Economics."

Jordan's main argument is that the term "artificial intelligence" often misleads public discussions, because the actual issue here isn't human-type intelligence. Instead, it is a set of computer programs that can use data to train themselves to make predictions--what the experts call "machine learning," defined as "an algorithmic field that blends ideas from statistics, computer science and many other disciplines to design algorithms that process data, make predictions, and help make decisions." Consumer recommendation or fraud detection systems, for example, are machine learning, not the high-level flexible cognitive capacity that most of us mean by "intelligence." As Jordan argues, the information technology that would run, say, an operational system of autonomous vehicles is more closely related to a much more complicated air traffic control system than to the human brain.
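In this sense, "machine learning" is statistical pattern-fitting at scale rather than intelligence. A minimal pure-Python illustration of a program that "trains itself" on data and then makes predictions--the data here are invented for the example:

```python
# Minimal illustration of "machine learning" in Jordan's sense: an
# algorithm that processes data and makes predictions. Here, a
# one-variable least-squares line learned from toy, invented data.

def fit_line(xs, ys):
    """Learn the slope and intercept minimizing squared error."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# "Training data": past observations (purely hypothetical numbers).
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]

slope, intercept = fit_line(xs, ys)
print(f"prediction at x=6: {slope * 6 + intercept:.2f}")
```

Consumer recommendation and fraud-detection systems are this same basic idea, with vastly more data and vastly more parameters--but no more "understanding."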

(One implication here for economics is that if AI is really machine learning, and machine learning is about programs that can update and train themselves to make better predictions, then one can analyze the effect of AI on labor markets by looking at specific tasks within various jobs that involve prediction. Ajay Agrawal, Joshua S. Gans, and Avi Goldfarb take this approach in "Artificial Intelligence: The Ambiguous Labor Market Impact of Automating Prediction" (Journal of Economic Perspectives, Spring 2019, 33 (2): 31-50). I offered a gloss of their findings in a blog post last month.)

Moreover, machine learning algorithms, which often involve mixing together results from past research and pre-existing data gathered in different situations with new forms of data, can go badly astray. Jordan offers a vivid example: 
Consider the following story, which involves humans, computers, data, and life-or-death decisions, but where the focus is something other than intelligence-in-silicon fantasies. When my spouse was pregnant 14 years ago, we had an ultrasound. There was a geneticist in the room, and she pointed out some white spots around the heart of the fetus. “Those are markers for Down syndrome,” she noted, “and your risk has now gone up to one in 20.” She let us know that we could learn whether the fetus in fact had the genetic modification underlying Down syndrome via an amniocentesis, but amniocentesis was risky—the chance of killing the fetus during the procedure was roughly one in 300. Being a statistician, I was determined to find out where these numbers were coming from. In my research, I discovered that a statistical analysis had been done a decade previously in the UK in which these white spots, which reflect calcium buildup, were indeed established as a predictor of Down syndrome. I also noticed that the imaging machine used in our test had a few hundred more pixels per square inch than the machine used in the UK study. I returned to tell the geneticist that I believed that the white spots were likely false positives, literal white noise.
She said, “Ah, that explains why we started seeing an uptick in Down syndrome diagnoses a few years ago. That’s when the new machine arrived.”
We didn’t do the amniocentesis, and my wife delivered a healthy girl a few months later, but the episode troubled me, particularly after a back-of-the-envelope calculation convinced me that many thousands of people had gotten that diagnosis that same day worldwide, that many of them had opted for amniocentesis, and that a number of babies had died needlessly. The problem that this episode revealed wasn’t about my individual medical care; it was about a medical system that measured variables and outcomes in various places and times, conducted statistical analyses, and made use of the results in other situations. The problem had to do not just with data analysis per se, but with what database researchers call provenance—broadly, where did data arise, what inferences were drawn from the data, and how relevant are those inferences to the present situation?
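Jordan's story is at bottom a point about base rates and false positives: what a "positive" marker implies depends on how often the marker also appears in healthy fetuses, and a sharper imaging machine that detects more calcium spots raises exactly that false-positive rate. A Bayes' rule sketch with hypothetical numbers (not taken from the UK study he mentions):

```python
# Bayes-rule sketch of Jordan's point: the same "positive" marker
# implies very different risks depending on the false-positive rate.
# All numbers here are hypothetical, not from the UK study he cites.

def posterior_risk(prior, sensitivity, false_positive_rate):
    """P(condition | marker seen), by Bayes' rule."""
    p_marker = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_marker

prior = 1 / 700  # assumed baseline prevalence

# Machine matching the original study: marker genuinely informative.
print(posterior_risk(prior, sensitivity=0.30, false_positive_rate=0.008))

# Sharper machine "sees" calcium spots far more often in healthy
# fetuses too: the same marker is now mostly white noise.
print(posterior_risk(prior, sensitivity=0.30, false_positive_rate=0.20))
```

With the first set of assumed numbers, the posterior comes out at roughly the "one in 20" the geneticist quoted; with the higher false-positive rate of a sharper machine, the same white spots move the risk hardly at all above the baseline. This is the provenance problem made quantitative.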
The comment by David Donoho refers to this as "recycled intelligence." Donoho writes:
The last decade shows that humans can record their own actions when faced with certain tasks, which can be recycled to make new decisions that score as well as humans’ (or maybe better, because the recycled decisions are immune to fatigue and impulse). ... Recycled human intelligence does not deserve to be called augmented intelligence. It does not truly augment the range of capabilities that humans possess. ... Relying on such recycled intelligence is risky; it may give systematically wrong answers ..."
Donoho offers the homely example of spellcheck programs which, for someone who is an excellent and careful speller, are as likely to create memorable errors as to improve the text.

From Jordan's perspective, the question is not whether AI or machine learning will "replace" workers, but how humans will interact with these new capabilities. I'm not just thinking of worker training here, but of privacy, access to technology, the structure of market competition, and other issues. Indeed, Jordan argues that one major ingredient missing from current machine-learning programs is a fine-grained sense of what specific people want--which implies a role for markets. Rather than pretending that we are mimicking human "intelligence," with all the warts and flaws that we know human intelligence has, we should instead be thinking about how information technology can address the allocation of public and private resources in ways that benefit people. I can't figure out a way to summarize his argument in brief without doing violence to it, so I quote here at length: 
Let us suppose that there is a fledgling Martian computer science industry, and suppose that the Martians look down at Earth to get inspiration for making their current clunky computers more ‘intelligent.’ What do they see that is intelligent, and worth imitating, as they look down at Earth?
They will surely take note of human brains and minds, and perhaps also animal brains and minds, as intelligent and worth emulating. But they will also find it rather difficult to uncover the underlying principles or algorithms that give rise to that kind of intelligence——the ability to form abstractions, to give semantic interpretation to thoughts and percepts, and to reason. They will see that it arises from neurons, and that each neuron is an exceedingly complex structure——a cell with huge numbers of proteins, membranes, and ions interacting in complex ways to yield complex three-dimensional electrical and chemical activity. Moreover, they will likely see that these cells are connected in complex ways (via highly arborized dendritic trees; please type "dendritic tree and spines" into your favorite image browser to get some sense of a real neuron). A human brain contains on the order of a hundred billion neurons connected via these trees, and it is the network that gives rise to intelligence, not the individual neuron.
Daunted, the Martians may step away from considering the imitation of human brains as the principal path forward for Martian AI. Moreover, they may reassure themselves with the argument that humans evolved to do certain things well, and certain things poorly, and human intelligence may be not necessarily be well suited to solve Martian problems.
What else is intelligent on Earth? Perhaps the Martians will notice that in any given city on Earth, most every restaurant has at hand every ingredient it needs for every dish that it offers, day in and day out. They may also realize that, as in the case of neurons and brains, the essential ingredients underlying this capability are local decisions being made by small entities that each possess only a small sliver of the information being processed by the overall system. But, in contrast to brains, the underlying principles or algorithms may be seen to be not quite as mysterious as in the case of neuroscience. And they may also determine that this system is intelligent by any reasonable definition—it is adaptive (it works rain or shine), it is robust, it works at small scale and large scale, and it has been working for thousands of years (with no software updates needed). Moreover, not being anthropocentric creatures, the Martians may be happy to conceive of this system as an ‘entity’—just as much as a collection of neurons is an ‘entity.’
Am I arguing that we should simply bring in microeconomics in place of computer science? And praise markets as the way forward for AI? No, I am instead arguing that we should bring microeconomics in as a first-class citizen into the blend of computer science and statistics that is currently being called ‘AI.’ ... 
Indeed, classical recommendation systems can and do cause serious problems if they are rolled out in real-world domains where there is scarcity. Consider building an app that recommends routes to the airport. If few people in a city are using the app, then it is benign, and perhaps useful. When many people start to use the app, however, it will likely recommend the same route to large numbers of people and create congestion. The best way to mitigate such congestion is not to simply assign people to routes willy-nilly, but to take into account human preferences—on a given day some people may be in a hurry to get to the airport and others are not in such a hurry. An effective system would respect such preferences, letting those in a hurry opt to pay more for their faster route and allowing others to save for another day. But how can the app know the preferences of its users? It is here that major IT companies stumble, in my humble opinion. They assume that, as in the advertising domain, it is the computer's job to figure out human users' preferences, by gathering as much information as possible about their users, and by using AI. But this is absurd; in most real-world domains—where our preferences and decisions are fine-grained, contextual, and in-the-moment—there is no way that companies can collect enough data to know what we really want. Nor would we want them to collect such data—doing so would require getting uncomfortably close to prying into the private thoughts of individuals. A more appealing approach is to empower individuals by creating a two-way market where (say) street segments bid on drivers, and drivers can make in-the-moment decisions about how much of a hurry they are in, and how much they're willing to spend (in some currency) for a faster route.
Similarly, a restaurant recommendation system could send large numbers of people to the same restaurant. Again, fixing this should not be left to a platform or an omniscient AI system that purportedly knows everything about the users of the platform; rather, a two-way market should be created where the two sides of the market see each other via recommendation systems.
It is this last point that takes us beyond classical microeconomics and brings in machine learning. In the same way as modern recommendation systems allowed us to move beyond classical catalogs of goods, we need to use computer science and statistics to build new kinds of two-way markets. For example, we can bring relevant data about a diner's food preferences, budget, physical location, etc., to bear in deciding which entities on the other side of the market (the restaurants) are best to connect to, out of the tens of thousands of possibilities. That is, we need two-way markets where each side sees the other side via an appropriate form of recommendation system.
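One minimal way to see how a recommender and market constraints interact is a greedy assignment where each diner is recommended their best-scoring restaurant that still has a seat. The scores and names below are hypothetical stand-ins for a learned recommender; this is a sketch of the idea, not the matching mechanism the essay has in mind:

```python
# Toy two-way market: diners "see" restaurants through scores (a stand-in
# for a recommendation system), and restaurant capacity spreads demand
# instead of sending everyone to the single top-rated spot.

def match(diner_scores, capacity):
    """Greedy assignment: give each diner their best restaurant with a free seat.

    diner_scores: {diner: {restaurant: score}}
    capacity: {restaurant: seats available}
    """
    seats = dict(capacity)
    assignment = {}
    for diner, scores in diner_scores.items():
        # Recommend restaurants in descending score order, skipping full ones.
        for r in sorted(scores, key=scores.get, reverse=True):
            if seats[r] > 0:
                seats[r] -= 1
                assignment[diner] = r
                break
    return assignment

scores = {
    "ann": {"noodle_bar": 0.9, "taqueria": 0.6},
    "bob": {"noodle_bar": 0.8, "taqueria": 0.7},
    "cat": {"noodle_bar": 0.7, "taqueria": 0.2},
}
result = match(scores, {"noodle_bar": 1, "taqueria": 2})
```

All three diners prefer the noodle bar, but it has one seat, so only the diner who values it most relative to the alternative is sent there; a recommender without the capacity constraint would have sent all three. A real two-way market would let the restaurant side express preferences and prices too, which this one-sided greedy sketch omits.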
From this perspective, business models for modern information technology should be less about providing ‘AI avatars’ or ‘AI services’ for us to be dazzled by (and put out of work by)—on platforms that are monetized via advertising because they do not provide sufficient economic value directly to the consumer—and more about providing new connections between (new kinds of) producers and consumers.
Consider the fact that precious few of us are directly connected to the humans who make the music we listen to (or listen to the music that we make), to the humans who write the text that we read (or read the text that we write), and to the humans who create the clothes that we wear. Making those connections in the context of a new engineering discipline that builds market mechanisms on top of data flows would create new ‘intelligent markets’ that currently do not exist. Such markets would create jobs and unleash creativity.
Implementing such platforms is a task worthy of a new branch of engineering. It would require serious attention to data flow and data analysis, it would require blending such analysis with ideas from market design and game theory, and it would require integrating all of the above with innovative thinking in the social, legal, and public policy spheres. The scale and scope is surely at least as grand as that envisaged when chemical engineering was emerging as a way to combine ideas from chemistry, fluid mechanics, and control theory at large scale.
Certainly market forces are not a panacea. But market forces are an important source of algorithmic ideas for constructing intelligent systems, and we ignore them at our peril. We are already seeing AI systems that create problems regarding fairness, congestion, and bias. We need to reconceptualize the problems in such a way that market mechanisms can be taken into account at the algorithmic level, as part and parcel of attempting to make the overall system be ‘intelligent.’ Ignoring market mechanisms in developing modern societal-scale information-technology systems is like trying to develop a field of civil engineering while ignoring gravity.
Markets need to be regulated, of course, and it takes time and experience to discover the appropriate regulatory mechanisms. But this is not a problem unique to markets. The same is true of gravity, when we construe it as a tool in civil engineering. Just as markets are imperfect, gravity is imperfect. It sometimes causes humans, bridges, and buildings to fall down. Thus it should be respected, understood, and tamed. We will require new kinds of markets, which will require research into new market designs and research into appropriate regulation. Again, the scope is vast.
I can think of all sorts of issues and concerns to raise about this argument (and I'm sure that readers can do so as well), but I also think the argument has an interesting force and plausibility.