
Thursday, July 15, 2021

If You Haven't Switched to the New Conversable Economist Website Yet ...

If you have been checking this Conversable Economist website and wondering at the lack of recent posts, or if you have been signed up to receive posts by email and have been wondering where they went, the answer is that about a month ago I switched the Conversable Economist blog from Blogger to WordPress. The new web address is http://conversableeconomist.wordpress.com. The 2500 or so archived posts from the last decade have been transferred over, too. All future posts will be added at that website.

The proximate reason for the shift is that Google made a decision to stop supporting Feedburner, which was the software that allowed people to sign up to receive emails about each post. There are subsidiary reasons for the shift, as well, but it doesn't feel worth getting into the minutiae here. I had been feeling for a while as if the shift might be a good idea, and when Blogger started dropping features that were important to me, it gave me a nudge to go ahead.

At the Blogger website, I had about 3,000 readers signed up to receive an email of each post. Of course, I don't want to lose you. I think that the names of past email subscribers have been successfully transferred over to WordPress. But of course, there is also a hitch. Many email subscribers have mentioned to me that they like getting the entire post in their email--not just a link to the post. This is still possible! But to receive the full post in your email, you need to sign up for emails via WordPress at the new site.  Overall, if you stop getting emails about new posts, please go to the new home of the blog and sign up there.

One final change perhaps worth noting is that I have added a "Donation" button at the upper right of the new site. Blogger has been free to use. WordPress is not particularly expensive, but it's not free, either. Also, the blog has been an uncompensated hobby for me during the last decade. If you are in a financial position to make a donation, it would be a genuine help: in particular, it would help me stop thinking about moving the blog to a subscription model, and instead keep it freely available--which is my preference.

Thanks for following my musings as the Conversable Economist,

Timothy Taylor


Sunday, July 4, 2021

Learned Hand: "The Spirit of Liberty is the Spirit Which is Not Too Sure That It is Right"

Learned Hand is often on the short list of greatest American judges who never made it to the US Supreme Court. In 1944, during World War II, he delivered a speech on "The Spirit of Liberty" to a vast crowd in Central Park in New York City--with particular attention to the estimated 150,000 newly naturalized Americans attending the event. 

His speech contains one of my own favorite comments: "The spirit of liberty is the spirit which is not too sure that it is right ..." Hand viewed liberty within an ordered society not as the freedom of isolated individuals to act as they wish, but as part of a shared concern for others. He also viewed freedom as an ideal toward which America continually strives, rather than an accomplished reality. He said: 
What do we mean when we say that first of all we seek liberty? I often wonder whether we do not rest our hopes too much upon constitutions, upon laws and upon courts. These are false hopes; believe me, these are false hopes. Liberty lies in the hearts of men and women; when it dies there, no constitution, no law, no court can save it; no constitution, no law, no court can even do much to help it. While it lies there it needs no constitution, no law, no court to save it.

And what is this liberty which must lie in the hearts of men and women? It is not the ruthless, the unbridled will; it is not freedom to do as one likes. That is the denial of liberty, and leads straight to its overthrow. A society in which men recognize no check upon their freedom soon becomes a society where freedom is the possession of only a savage few; as we have learned to our sorrow.

What then is the spirit of liberty? I cannot define it; I can only tell you my own faith. The spirit of liberty is the spirit which is not too sure that it is right; the spirit of liberty is the spirit which seeks to understand the minds of other men and women; the spirit of liberty is the spirit which weighs their interests alongside its own without bias; the spirit of liberty remembers that not even a sparrow falls to earth unheeded; the spirit of liberty is the spirit of Him who, near 2,000 years ago, taught mankind that lesson it has never learned, but has never quite forgotten; that there may be a kingdom where the least shall be heard and considered side by side with the greatest.

And now in that spirit, that spirit of an America which has never been, and which may never be; nay, which never will be except as the conscience and courage of Americans creates it; yet in the spirit of that America which lies hidden in some form in the aspirations of us all; in the spirit of that America for which our young men are at this moment fighting and dying; in that spirit of liberty and America I ask you to rise and with me pledge our faith in the glorious destiny of our beloved country.

Thursday, June 17, 2021

Last Post Here: Moving From Blogger to WordPress

The Conversable Economist blog has been based on the Google Blogger platform for 10 years, since I started the blog in May 2011. However, this will be the last post here on Blogger. I am transferring the blog over to WordPress, at http://conversableeconomist.com. The 2500 or so archived posts from the last decade have been transferred over, too. All future posts will be added at that website.  

The proximate reason for the shift is that  Google made a decision to stop supporting Feedburner, which was the software that allowed people to sign up to receive emails about each post. I've got about 3,000 readers signed up to receive an email of each post, and I didn't want to lose them. I think that their names have been successfully transferred over to WordPress, but if you signed up in the last 2-3 weeks, it's possible that your name did not get added to the WordPress list.  If you stop getting emails about new posts, please go to the new home of the blog and sign up there. 

There are subsidiary reasons for the shift, as well, but it doesn't feel worth getting into the minutiae here. Overall, it feels to me as if WordPress has more features that are easier to access. 

One change perhaps worth noting is that I have added a "Donation" button at the upper right of the new site. Blogger has been free to use. WordPress is not particularly expensive, but it's not free, either. Also, my wife and I are paying for any sins we committed in  past lives with multiple years of college tuition purgatory. If you are in a financial position to make a donation, it would be a genuine help: in particular, it would help me stop thinking about moving the blog to a subscription model, and instead keep it freely available--which is my preference. 

Many thanks to Cameron Payne for her work in setting up the new home for the blog and for transferring the archives and mailing lists.


Wednesday, June 16, 2021

Interview: Amartya Sen on a Bicycle

Christina Pazzanese interviews the 87-year-old Amartya Sen (Nobel '98) for the Harvard Gazette (‘I’ve never done work that I was not interested in. That is a very good reason to go on,’ June 3, 2021), with an emphasis on the long arc of his life and career. The interview is full of interesting nuggets, like the time he co-taught a Harvard course on social choice theory with Kenneth Arrow and John Rawls. One point that caught my eye was Sen's passion since his boyhood for bicycling: 
I was a bicyclist of quite an extreme kind. I went everywhere on bicycles. Quite a lot of the research I did required me to take long bicycle trips. One of the research trips I did in 1970 was about the development of famines in India. I studied the Bengal famine of 1943, in which about 3 million people died. It was clear to me it wasn’t caused by the food supply having fallen compared with earlier. It hadn’t. What we had was [a] war-related economic boom that increased the wages of some people, but not others. And those who did not have higher wages still had to face the higher price of food — in particular, rice, which is the staple food in the region. That’s how the starvation occurred. In order to do this research, I had to see what wages people were being paid for various rural economic activities. I also had to find out what the prices were of basic food in the main markets. All this required me to go to many different places and look at their records so I went all these distances on my bike.

And when I got interested in gender inequality, I studied the weights of boys and girls over their childhood. Very often, it would happen that the girls and boys were born the same weight, but by the time they were five, the boys had — in weight for age —overtaken the girls. It’s not so much that the girls were not fed well — there might have been some of that. But mainly, the hospital care and medical treatment available were rather less for girls than for boys. In order to find this out, I had to look at each family and also weigh the children to see how they were doing in terms of weight for age. These were in villages, which were often not near my town; I had to bicycle there. ...

When the Nobel committee after you get your prize asks you to give two mementos or two objects connected with your work, I chose two. One was a bicycle, which was an obvious choice. And the other was a Sanskrit book of mathematics from the fifth century by Aryabhata. Both I had a lot of use for.
Also, although I do not expect to be saying anything similar about my own intellectual work in 2047, when I have every intention of turning 87, one cannot help but appreciate Sen's ongoing zest for what he does. 
I’m planning to do a book on gender. There should be one in about a year or two. There are so many different problems people get confused that I thought I might put together the problems that make up gender disadvantage. It will draw on prior research, but there will be a number of new things in it. ...

People have given up hope that I might retire. But I like working, I must say. I’ve been very lucky. I’ve never done, when I think about it, work that I was not interested in. That is a very good reason to go on.

I’m 87. Something I enjoy most is teaching. It may not be a natural age for teaching, I guess, but I absolutely love it. And since my students also seem not unhappy with my teaching, I think it’s a very good idea to continue doing it.

For another interview with Sen, this one from summer 2020, see "Interview with Amartya Sen: Economics with a Moral Compass?" (August 5, 2020). 

Monday, June 14, 2021

From Pandemic to Digitalization to Productivity?

We know that the pandemic caused people and firms to make much more widespread use of digital technologies: working from home, ordering on-line, tele-medicine, education from K-12 to college delivered on-line, and so on. Indeed, it seems likely that this surge of digital activity is also providing an incentive for substantial investments in physical capital, intangible capital (like software), and complementary human skills to make use of these investments. Might these shifts in patterns and investments provide a boost to productivity growth in the next few years? 

The Group of Twenty has published a report (prepared by staff at the IMF) on this subject: "Boosting Productivity in the Aftermath of COVID-19" (June 2021). The report suggests the possibility that while many people will be better off because of the shift to digital technologies, these gains in well-being may not be well-reflected in conventional economic statistics like GDP. 

It's worth noting that nothing in the report seeks to put a happy face on the economic side of the pandemic experience. Unemployment has soared. Worker skills have been unused, and in some cases will have depreciated. Firms and communities have suffered, many of them grievously. As the report notes: "For instance, the so called 'jobless recoveries' from previous US recessions were driven by contractions in routine occupations, which account for about 50 percent of total employment, that are never recovered. More recently, the COVID-19 shock has also hit sectors that are more vulnerable to automation much harder and lowered the share of low-skilled and low-wage workers in the workforce. As we look ahead, the productivity and earnings of low-skilled workers that have lost their jobs in sectors vulnerable to automation are therefore at risk ..."

But it is also true that use of digital technologies has increased, in ways that seem likely to persist, at least in part, as the pandemic recession fades. Indeed, this shift to heavier use of digital technologies is one reason why stock prices of leading tech companies have done so well in the last year or so. Here are a couple of interesting illustrations from the report. The first shows the pattern of new US patent applications related to remote work and e-commerce, and how it has risen. The second shows the results of a survey of business executives, emphasizing that for most of them, the pandemic recession led to heightened efforts to digitize and automate their operations. 

The report discusses the extent to which this shift may increase productivity: for example, the reallocation of resources away from less-productive to more-productive firms should boost productivity. The report expresses cautious and hedged optimism about the chances for productivity gains: for example, "In sum, the impact of reallocation so far looks beneficial for productivity, but much remains to be learned and it is associated with several concerns."

The report also raises the difficult question of productivity measurement. Workers who have greater flexibility to work from home may benefit, for example, from less time spent commuting. But shorter commutes don't provide a direct boost to GDP. If I have groceries delivered more often, but my purchase of groceries is pretty much the same, the benefits to me may not be well-captured by conventional economic statistics. If I see my doctor on-line, or children see a K-12 teacher online, or college students attend classes remotely, there will be a mixture of effects on the quality of what is provided and the costs of providing it that will not translate in a simple way into productivity statistics. These kinds of issues have been lurking in the productivity statistics for years, but the economic after-effects of the pandemic may strengthen them.
Mismeasurement of the digital economy has been an often-cited contributor to the prolonged slowdown in measured productivity growth prior to the COVID-19 pandemic. As the productivity slowdown occurred alongside a fast pace of innovation in the hard-to-measure digital economy, a commonly mentioned contributor to the measured slowdown is the inability to capture well in price statistics and deflators the increases in convenience, varieties, free online products, and lower quality-adjusted prices that arises from the digital economy. ... Looking forward, if the pandemic accelerates growth in the digital economy, its contribution to mismeasurement may become more salient. For example, greater prevalence of remote work and online interactions across borders may reduce travel costs, which, if not properly captured, may lead to an underestimation of productivity growth. A shift to digital and peer-to-peer platforms could also bring added convenience, making it feasible to access an increasing number of varieties and lower prices, which, if not properly accounted for, would also result in mismeasurement.
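To see the mechanics behind this deflator concern, here is a minimal back-of-the-envelope sketch in Python. The numbers are purely hypothetical (not taken from the report), and it uses the standard approximation that real growth per hour is roughly nominal growth per hour minus inflation:

```python
# Back-of-the-envelope illustration of the deflator-bias argument.
# Measured labor productivity growth is (roughly) nominal output growth per
# hour minus measured inflation, so an overstated deflator understates it.

nominal_output_growth = 0.04   # hypothetical: 4% growth in nominal output per hour
measured_inflation = 0.025     # hypothetical: official deflator
true_inflation = 0.015         # hypothetical: if convenience/quality gains were captured

measured_productivity_growth = nominal_output_growth - measured_inflation
true_productivity_growth = nominal_output_growth - true_inflation

print(f"measured productivity growth: {measured_productivity_growth:.1%}")  # 1.5%
print(f"'true' productivity growth:   {true_productivity_growth:.1%}")      # 2.5%
# The one-percentage-point gap is the kind of mismeasurement the report flags.
```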
Finally, it's worth noting that the pandemic will affect future productivity growth in a number of ways, not just via the effects on digitalization. For example, many students around the world have experienced a severe disruption of their education. The report notes: 
School closures affected 1.6 billion learners globally at the peak of the pandemic and continue to disrupt learning for millions. These disruptions had disproportionately adverse impacts on schooling in economies with preexisting gaps in infrastructure (such as access to electricity and internet), which constrained their ability to implement remote learning. Girls and learners in low-income households faced disproportionately greater risk of learning losses as they lost a boost from peer-effects that occur in school and may have been less likely to have parental support for remote learning. Women may also have needed to take on additional caregiving and teaching responsibilities while at home, putting them at a disadvantage in the labor market. These interruptions to learning and work will likely set back human capital accumulation—with such effects spread unevenly across generations, genders, and income levels, and with adverse implications for longer-run productivity.

Friday, June 11, 2021

The Social Nature of Government Actions

Economics famously begins with an idea of individuals pursuing their own interests, and then discusses both the positive and negative dynamics that can emerge. But there has been a long-time pattern in human affairs, going back to the days of the hunter-gatherers, that certain outputs have been produced socially--by families, communities, and in modern times also by government. Emmanuel Saez explores this issue in his American Economic Association Distinguished Lecture at the virtual AEA meetings last January on the subject, "Public Economics and Inequality: Uncovering Our Social Nature" (AEA Papers and Proceedings 2021, 111: 1-26, subscription required, but freely available at Saez's website here). Saez writes: 

[O]ur social nature, absent from the standard economic model, is crucial for understanding our large modern social states and why concerns about inequality are so pervasive. Taking care of the young, sick, and elderly has always been done through families and communities and likely explains best why education, health care, and retirement benefits are carried out through the social state in today’s advanced economies. Behavioral economics shows that we are not very good at solving these issues individually, but descriptive public economics shows that we are pretty good at solving them socially. ...  Even though an individual solution through markets is theoretically possible, it does not work well in practice without significant institutional or government help. Human societies are good at providing education, health care, and retirement and income support even though individuals are not.

Although Saez offers a brisk overview of earlier human societies, his main focus is on what he calls "the rise of the social state in the twentieth century." He writes: 

Perhaps the most striking fact in modern economies illustrating both our social nature and concerns for inequality is the size of government and the large direct impact it has on the distribution of economic resources. In advanced modern economies, we pool a large fraction of the economic output we produce through government. In the richest countries today, taxes generally raise between 30 and 50 percent of national income and are used to fund not only public goods needed for the functioning of the economy but also a wide array of transfers back to individuals, both in cash and in kind. Even though modern economies generally allocate the fruits of production to workers and owners through a capitalistic market system with well-defined property rights, as societies, a significant fraction of market incomes, typically between one-third and one-half, is shared (that is, effectively “socialized”) through government.
Here are a couple of figures showing the rise in government spending in advanced economies in the 20th century:
(In the figures, "Regalian public goods" is a category that Saez defines as the basic roles of a very limited government, including defense, law and order, administration, and infrastructure.)

As Saez notes, the US economy is near the lower end of this range--but it's still a substantial share. I would add that a significant part of the difference is that the US has kept a large portion of its health care spending in a heavily regulated private sector. Saez also notes that there is relatively little cross-border redistribution, and when it happens, it's often in the form of disaster relief. People seem to define their circle of sharing within their country, or to some extent within a lower-level jurisdiction like a state or city. 

Again, the big four social categories on which Saez focuses are education, retirement benefits, health care, and income support. To get a sense of the tone of his argument, here are a few of his comments on these categories: 
Historically, mass education is always government driven through a combination of government funding (at all levels including higher education) and compulsory schooling (for primary and then secondary education). ... 

Before public retirement programs existed, a large fraction of the elderly was working (80 percent of men aged 65 or older were gainfully employed in the United States in the late nineteenth century ...). The elderly who could no longer work enough to support themselves had to rely on family support. Public retirement systems were a way to provide social insurance through the state instead of relying on self-insurance or family insurance. ... 

[U]niversal health insurance creates significant redistribution by income and also, of course, by health and health-risk status. One important question is why health-care quality is the same for all in such universal health-care systems (at least as a principle, not always realized in practice). Why isn’t health insurance offered in grades, with cheap insurance covering only the most cost-effective treatments? Probably because humans are willing to spend a lot of resources to save a specific life, that is, an actual person with a condition that can be treated. This is likely a consequence of our social nature shaped by evolution: taking care of the sick or injured was helpful for group survival. This makes withholding treatment to the poorly insured socially unbearable. ...

People make mistakes in health- care utilization and treatment choices. Copayments and deductibles lead consumers to reduce demand for high-value care. This may explain why universal health-care systems have low copays and deductibles and why health-care decisions for patients are made primarily by health-care professionals. Like for education, the difficulty for users to understand and navigate health-care choices implies that the market does not necessarily deliver efficiency. In sum, the problem of health care is also primarily resolved at the social level rather than the individual level. ... 

Everywhere, there is strong social reprobation against “free loaders” who could work and support themselves but decide to live off government support. This is why income support is concentrated among groups unable or not expected to work, such as the unemployed, the disabled, and the elderly.
As Saez discusses, the fact that advanced societies have decided that government provision will play such a large role in these four areas is rooted in other social judgements: for example, judgements about the fairness and importance of widespread education for children, judgements about whether the elderly should need to work (and how to define who is "elderly"), judgements about whether the sick and injured will have access to care, and judgements about which groups of people deserve income support and under what conditions. Of course, this kind of social consensus can shift. We saw a shift in the 1980s and 1990s about whether single mothers with small children were expected to work, or not. As another example, back in the 1980s, 50-55% of Americans in the 16-19 age bracket were in the labor force; now, it's about 35%. A substantial part of that shift is in our sense of what people in that age group should be doing with their time. Saez writes: 
However, the social state also intentionally reduces labor supply by design through various regulations: child labor prohibitions and compulsory education limit work by the young, retirement benefits sharply reduce work in old age, and overtime hours-of-work regulations and mandated paid vacation (for example, five weeks in France) reduce work across the board. This implies that labor supply should be seen partly as a social choice, with society having disutility of labor for the very young, the old, and very long hours with no vacation break.
There's much more in the lecture itself. But the main theme deserves attention. Saez writes: "Therefore, social organization does seem to come naturally to us. We can easily take a group perspective and act accordingly." Understanding the group perspective and the social organizations that form as a result seems like an important tool for understanding what we expect from government--and what some of the barriers are to redesigning government programs to operate more effectively. 

Wednesday, June 9, 2021

How to Improve College Completion Rates: The Time Commitment Problem

The US higher education system does an OK job of enrolling US high school students. About 70% of US high school graduates enroll in a two-year or four-year college. But the higher education system does a poor job of actually producing graduates who have completed college. About half of students who enroll at a four-year college graduate within six years; the completion rate is lower for two-year colleges. If the goal of getting more high school students to attend college is to be a meaningful one, it needs to be accompanied by efforts to raise the college completion rate. 

Philip Oreopoulos discusses these issues in a review article, "What Limits College Success? A Review and Further Analysis of Holzer and Baum’s Making College Work" (Journal of Economic Literature 2021, 59:2, 546–573, subscription required). As Oreopoulos details, Holzer and Baum provide an overview of steps to encourage college enrollment and completion. In particular, some of the steps to encourage college enrollment can be fairly low-cost, like requiring high school students as part of their coursework to fill out at least one college application and to take the SAT or ACT, and having states do a better job of communicating about available financial aid to low-income households. 

Here, I want to focus on policies more directly aimed at improving college completion. For example, one approach discussed in the Holzer and Baum book is a comprehensive set of support services for first-year students. Oreopoulos describes perhaps the most prominent example of such a program this way (citations omitted): 

Exhibit A for demonstrating how to improve college access and success is the Accelerated Study in Associate Program (ASAP). MCW [Making College Work] and many other researchers point to it as the central example worth considering. ASAP provides incoming freshman an envelope of comprehensive support services, including tutoring, counseling, career advising, free public transportation passes, and funding for textbooks. Taking advantage of the potential benefits of more structure, students are required to meet regularly with their advisor and tutors, attend a student success seminar, and enroll full-time to participate. The program was experimentally tested on low-income students with remedial needs at CUNY in colleges where the three-year graduation rate was only 20 percent. ASAP doubled graduation rates at CUNY, and similar impacts on persistence were replicated in Ohio ....

Among the evidence we have, comprehensive support programs such as ASAP offer the most promise for improving college completion, at least among community college freshman from disadvantaged backgrounds. The impact of ASAP is the largest I know of, compared to other college program evaluations ... The program represents an impressive “proof of concept” for how much we could help if we offered a gamut of student support and made participation mandatory. As impressive as the results are—doubling completion rates from 20 to 40 percent— they also highlight serious policy limitations. Even with a full range of proactive mandatory support services and financial incentives to stay engaged, 60 percent of ASAP participants still did not complete their degrees. The best program we know, which ... many administrators feel is unaffordable, still fails to help more than half of its target population.
One problem underlying low college completion rates is that incoming students lack the necessary skills to do college-level work. Such students may be admitted to college but then required to take remedial classes before they can begin the classes that lead to their desired degree. Oreopoulos describes the tradeoffs this way: 
Many community colleges provide open access, meaning that they admit any applicant with a high school degree into at least a general studies program. This level of access increases opportunity for all graduating high school seniors to pursue higher education at a relatively low cost. The downside is that many entrants are not well prepared to handle the academic standards of their program. The same colleges therefore often require entrants to take remedial mathematics and English courses before being allowed to take courses that would contribute toward a degree or certificate in their desired program. “About 68 percent of students entering public two-year and 40 percent of those entering public four-year colleges in 2003–2004 took at least one remedial class by 2009” (p. 21). Freshmen find themselves feeling stuck working on subjects they covered earlier and concerned about the longer road they face to completion.

College dropout rates for those taking remediation courses are shockingly high—Jaggars and Stacey (2014) report a 72 percent dropout rate among community college students who take a remedial education course. Adams et al. (2012) use data from 33 participating states and find a 65 percent overall dropout rate by sixth year for students taking remedial courses. Those who require remediation are obviously less prepared and less likely to graduate compared to those who don’t require it, but a consensus of policy researchers agree that reform is needed to avoid discouraging these marginal students facing long delays to complete their degrees.
There may be ways to make such remedial classes feel like less of a hurdle to students: for example, by figuring out ways that students can at least start their desired course of study at the same time as the remedial course, and thus do them side-by-side, rather than being required to start their college experience completely focused on remedial courses. Of course, the better answer would be for high schools to produce fewer graduates who need remedial courses.

Oreopoulos also focuses on a theme that I have often found myself emphasizing to prospective or newly-arrived college students: making the necessary time commitment. As he writes: "Many college administrators and faculty recommend two or three hours of study for each hour a student spends in class, implying 25 to 35 hours of effort outside of class for someone enrolled full-time (there is a reason they call it “full-time” enrollment)." However, a typical college student actually studies about 15 hours per week (or so they say), which means that a sizeable minority study less than that. 

Oreopoulos discusses the results of some polling he carried out among first-year students at the University of Toronto about their expectations of outside-of-class study time. He writes: 
Low-performing students admit to time management problems and procrastination, but even when asked to plan their hours in advance, they often set low goals. ... If students entered a plan with fewer than 15 hours of routine study [as their personal plan on the survey form], we asked “[W]e’d like to better understand how and why you decided on this number. Is it because you did not expect to gain much from studying more, or because you did not think you would have enough time, or some other factor? Please share your thoughts in a paragraph or two” ... [A]mong those who eventually ended up with a fall grade average less than 60 percent. ... [a] majority said they felt their target was fair and reasonable. Some justified their answer based on their successful high school experience; others said they wanted to leave room for sports, extracurricular activities, and friends. Very few of these students anticipated doing so poorly and none said they felt constrained from work. In fact, about half said they were intending to complete graduate studies in the future, 58 percent expected to receive above average fall grades, and the average expected economics grade was 76 percent. It seems as though these students had the wrong reference point for sufficient study time. By the end of the semester ... these kinds of students update their academic expectations downward, but rather than respond by planning to study more, they tend to accept their academic fate and plan to study about the same the following semester.
Of course, some college students have highly limited time to study because of job or family responsibilities. But those examples are not the core of the time commitment problem. Moreover, Oreopoulos and his co-authors have found no noticeable effects on grades from trying to encourage more study time with an online program of information, reminders, and coaching. Trying to raise college graduation rates, or levels of academic achievement, for full-time students who are only putting in 15 hours or less of study time per week will inevitably be an uphill battle. 

Monday, June 7, 2021

Some Economics of Preventive Health Care

There's an old dream about preventive health care, which I still hear from time to time. The hope is that by expanding the use of relatively cheap preventive care, our society could prevent some more extreme health conditions and/or catch and treat others early, in a way that offers a double prize: it could conceivably both improve health and reduce total health care costs. But this happy outcome, while it may hold true in a few cases, is probably the wrong way to think about the economics of preventive medicine. 

Joseph P. Newhouse addresses these questions in "An Ounce of Prevention" (Journal of Economic Perspectives, Spring 2021, 35:2, 101-18). By his estimate, only about 20% of preventive measures both improve health and save money. But when you think about it, most medical care doesn't save money: instead, it costs something for the benefit of improving health. In the same way, a wide array of preventive care can be worth doing because it improves health, even if it does not (on average) save money. Newhouse writes (citations omitted): 

Vaccination is a well-known example of a measure that improves health and reduces cost. It is typically inexpensive, causes few adverse events, and can confer immunity for many years. The development of the polio vaccine, for example, was one of the great public health triumphs of the 20th century. In the late 1940s, polio crippled 35,000 Americans annually; because of vaccination, it was eradicated in the United States in 1979. Vaccination also differs from many other preventive measures because of the external benefit it confers on the unvaccinated (“herd immunity”). Another example of a preventive measure that saves money and improves health is a “polypill”—a single pill with several active ingredients for secondary prevention of heart disease versus single prescriptions for various agents.

The remaining 80 percent of preventive measures do not save money. The majority of all preventive measures—about 60 percent of them—provide health benefits at a cost of less than $100,000/QALY (2006 dollars). Another 10 percent of measures cost between $100,000 and $1,000,000 per QALY; those measures with costs near the lower end of this range might pass the common rules of thumb of cost-effectiveness ... The remaining 10 percent of preventive measures studied in the literature either worsen expected health or, if they improve it, cost more than $1,000,000 per QALY.

(For the uninitiated, "QALY" stands for "quality-adjusted life year," which is a way of measuring improvements in health.  In this measure, a year in perfect health counts as 1, but gaining a year of impaired health counts less than one. For an overview, see "What's the Value of a QALY?")
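To make the cost-per-QALY arithmetic concrete, here is a minimal sketch in Python. All the numbers and the screening example are hypothetical, chosen only to illustrate how the ratio is computed and where a measure would land among the rough buckets Newhouse describes:

```python
# Illustrative cost-per-QALY arithmetic. All numbers are hypothetical, chosen
# only to show how the calculation and Newhouse's rough buckets fit together.

def qalys_gained(years_gained: float, quality_weight: float) -> float:
    """Extra life-years, each weighted by a 0-to-1 health-quality score."""
    return years_gained * quality_weight

def cost_per_qaly(extra_cost: float, years_gained: float, quality_weight: float) -> float:
    """Net extra spending on the preventive measure per QALY it produces."""
    return extra_cost / qalys_gained(years_gained, quality_weight)

# Hypothetical screening program: $12,000 of net extra cost per person, yielding
# on average 0.3 extra years of life at a quality weight of 0.8.
ratio = cost_per_qaly(extra_cost=12_000, years_gained=0.3, quality_weight=0.8)
print(f"${ratio:,.0f} per QALY")  # -> $50,000 per QALY

if ratio < 0:                 # negative net cost: the measure saves money outright
    verdict = "saves money and improves health"
elif ratio < 100_000:
    verdict = "improves health at a cost below $100,000/QALY"
elif ratio < 1_000_000:
    verdict = "improves health, but at $100,000 to $1,000,000 per QALY"
else:
    verdict = "costs more than $1,000,000 per QALY"
print(verdict)                # -> improves health at a cost below $100,000/QALY
```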

Moreover, even if one focuses on a particular preventive measure, it will often be true that the potential health payoff from screening some people is higher than from screening others. For example, if one group of people has a genetic predisposition or certain behavioral factors that make certain health conditions more likely, screening that group is more likely to pay off with health gains. It's quite possible to have situations where universal screening of all ages and groups may not be a cost-effective method of improving health, but screening higher-risk groups might make sense. 

Newhouse also emphasizes that it's useful to think about "preventive care" as meaning more than just medical interventions. For example, steps to reduce smoking and consumption of alcohol, or to encourage exercise, or to make sure that babies and small children have good nutrition, can have large payoffs. 

In addition, many of the steps used to address chronic health conditions are usefully thought of as "preventive care," like taking medications for high blood pressure. Indeed, one of the major shifts over the last century or so in US health patterns is that back in 1900, infectious diseases were the major cause of death. Today, after dramatic improvements in vaccinations and public health conditions, chronic diseases are the main causes of death. A "chronic" health condition can be loosely defined as one where if you take your meds, and follow the recommendations about what you consume, you can live pretty much a normal life, but otherwise, you have a good chance of ending up with a sharp decline in health and a costly period of hospitalization. In the US, those with three or more chronic health conditions account for 61% of all health care spending. Thus, preventive measures (both medical and non-medical) to keep chronic conditions from turning into something worse hold considerable promise in reducing health care costs. Here's a table from Newhouse: 

Newhouse also discusses the incentive for innovators to develop methods of preventive care vs. developing new treatments. He argues that clinical trials are often much faster for treatments: for example, think about a firm trying to test whether a treatment extends the life of existing cancer patients vs. a firm trying to test whether a preventive treatment will reduce the long-run risk of a certain cancer occurring in the first place. In addition, a firm thinking about developing a preventive care approach must be concerned that many who are low-risk, or view themselves as low-risk, won't use the preventive care. However, if a firm develops a treatment for those who already have the health condition, the chances of high demand for the product are much better. 

A companion paper to the Newhouse essay in the same issue of the JEP looks at the controversial issue of mammograms: Amanda E. Kowalski's "Mammograms and Mortality: How Has the Evidence Evolved?" (Journal of Economic Perspectives, Spring 2021, 35:2, 119-40). There have been substantial controversies over the years about the recommended ages at which women should start and stop getting regular mammograms. For example, prior to 2009, the US Preventive Services Task Force recommended regular mammograms for women 40 and over. However, the current guidance is regular mammograms for women 50-74, while leaving the decision up to women and their doctors for those outside that age range. 

Why not just have universal mammograms for women of all ages? Sure, there's some cost, but "better safe than sorry" and "more knowledge can only be a good thing," right?  As she points out, it's not that simple. Kowalski writes (citations omitted): 
The rationale for widespread mammography is that early detection of potentially fatal breast cancers enables earlier and more effective treatment. But there is a potential drawback: mammography can detect some early-stage cancers that will never progress to cause symptoms—a phenomenon often referred to as overdiagnosis. In such cases, the emotional, financial, and physical costs of a cancer diagnosis and any subsequent treatments occur without any corresponding health benefit. Because it is hard to tell which women will be harmed by their cancers, there is a tendency to treat all women as if their cancers will be lethal. Even if the initial cancer would have never proven life-threatening, exposure to chemotherapy, radiotherapy, and surgery can potentially lead to new conditions, even to new fatal cancers ... 
Just to be clear, "overdiagnosis" is not what is known as a false positive--that is, a screening which finds something that isn't there. Instead, "overdiagnosis" is finding something which is indeed there, but would not have caused a health problem. As she points out, a standard example is prostate cancer, and "autopsy studies showing that almost half of older men die with, but not necessarily of, prostate cancer have been important to prostate cancer screening guidelines since the late 1980s." 

In practice, how big a problem is overdiagnosis from mammograms? There's some controversy over this evidence, but Kowalski makes a case that if the policy is to screen 100% of women in certain age ranges, the evidence (from randomized controlled trials done in Canada) is that over the long run, being randomly selected into the mammography group does not lead to an improvement in health on average--and may even be counterproductive. She writes that while most high-income countries still recommend regular mammography for asymptomatic women in their 50s and 60s, skepticism seems to be growing:
Canadian national guidelines “recommend not screening” with mammography for women aged 40 to 49 but “recommend screening with mammography” for women aged 50 to 74. ... Many other high income countries, including Australia, France, Switzerland, and the United Kingdom, do not recommend mammography for women in their 40s, and they also do not recommend against it as Canadian guidelines do. However, the Swiss Medical Board recommended steps to limit screening programs in 2014. In 2016, the French Minister of Health released results of an independent review that recommended that the national screening program end or undergo radical reforms.
Her recommendations are for some additional research into understanding the characteristics--other than age--that are likely to make mammography beneficial for particular women. In addition, when a mammogram does find cancer, it may in some cases be wise to reduce or postpone the use of the most aggressive possible treatments. 

Saturday, June 5, 2021

Some Economics of James Buchanan

The Fraser Institute has been publishing an "Essential Scholars" series of short books that provide an overview of the work of prominent thinkers, including John Locke, David Hume, and Adam Smith from the past, and Friedrich Hayek, Joseph Schumpeter, and Robert Nozick from more recent times. The books seek to explain some main themes of these writers in straightforward nontechnical language. In the most recent contribution, Donald J. Boudreaux and Randall G. Holcombe have written The Essential James Buchanan. The website even includes several 2-3 minute cartoon videos, if you need a little help in spotting the main themes.

Buchanan won the Nobel prize in 1986 "for his development of the contractual and constitutional bases for the theory of economic and political decision-making." Boudreaux and Holcombe argue that for Buchanan, these themes emerge from a perspective in which group decisions--whether by government or clubs or religious organizations--must always be traced back to what form of agreement was reached by members of the group. In describing Buchanan's view, they write: 
[B]ecause neither the state nor society is a singular and sentient creature, a great deal of analytical and policy confusion is spawned by treating them as such. Collections of individuals cannot be fused or aggregated together into a super-individual about whom economists and political philosophers can usefully theorize in the same ways that they theorize about actual flesh-and-blood individuals. Two or more people might share a common interest and they might—indeed, often do—join forces to pursue that common interest. But two or more people are never akin to a single sentient individual. A collection of individuals, as such, has no preferences of the sort that are had by an actual individual. A collection of individuals, as such, experiences no gains or pains; it reaps no benefits and incurs no costs. A collection of individuals, as such, makes no choices. ...

Buchanan called such aggregative thinking the “organismic” notion of collectives—that is, the collective as organism. From the very start, nearly all of Buchanan’s lifetime work was devoted to replacing the organismic approach with the individualistic one—a way of doing economics and political science that insists that choices are made, and costs and benefits are experienced, only by individuals.
Buchanan took this distinction so seriously that, as I'll discuss below, he proposed renaming the field of economics to highlight it. When thinking about people coming together to take joint actions, whether they are buying and selling in a market, or starting a company, or operating together through government, Buchanan insisted on viewing the process not as actions taken by "the government," but rather as the outcome of negotiations by groups of individuals.  Boudreaux and Holcombe write: 
Buchanan’s fiscal-exchange model of government depicts government as an organization through which individuals come together collectively to produce goods and services they cannot easily acquire through market exchange. Just as individuals trade in markets for their mutual benefit, government facilitates the ability of individuals to engage in collective exchange for the benefit of everyone. This fiscal-exchange model is an ideal, of course; Buchanan was well aware of the possibility that those who exercise government power can and often do abuse it for their own benefit at the expense of others. Much of his work was devoted to understanding how government can be constrained in order to keep this abuse to a minimum. When those constraints are effective, collective action through government can further everyone’s well-being. The fiscal-exchange model is based on the idea that taxes are the price citizens pay for government goods and services. And just like prices in the marketplace, the value of the goods and services government supplies should exceed the prices citizens pay, in the form of taxes, for these goods and services. ... 
"[W]hen analyzing the groups that individuals form when they come together to pursue collective outcomes, Buchanan insisted that close attention be paid to the details of how these individuals constitute themselves as a group—and most especially, to the decision-making procedures they choose
for their group." Here are some examples.

Buchanan was a big supporter of federalism: that is, the idea that government responsibilities would be divided up into local, state, national, and perhaps other intermediate levels. Boudreaux and Holcombe write that "Buchanan refers to federalism as 'an ideal political order' with several advantages ... Federalism offers citizens more choice, because citizens can choose among jurisdictions," while "governments at the same level in a federal system thus each have stronger incentives to provide a mix and pricing of public goods that is attractive to large numbers of people." In addition, "federalism can encourage governments at different levels to police each other."

One of Buchanan's main policy concerns was that governments were prone to over-borrowing, because the future generations that would need to repay the debts were not well-represented in current discussions about the extent of borrowing. Boudreaux and Holcombe write: 
This ability of current taxpayers to use debt financing to free-ride on the wealth of future generations led Buchanan to worry that government today will both spend excessively and fund too many projects with debt. Future citizen-taxpayers, after all, are not today’s voters. Thus, the interests of these future generations are under-represented in the political process. To reduce the magnitude of this problem, Buchanan endorsed constitutional rules that oblige governments to annually keep their budgets in balance. His fear that the opportunity for debt financing of government projects and programs would be abused was so acute that it led him to endorse a balanced-budget amendment to the US Constitution. His participation in a political effort to secure such an amendment is one of the very few specific, ground-level policy battles that he actively joined.
As one more example, Buchanan wrote an article for the first issue, in Summer 1987, of the Journal of Economic Perspectives, where I work as Managing Editor, as part of a symposium on the Tax Reform Act of 1986 ("Tax Reform as Political Choice," Journal of Economic Perspectives, 1:1, 29-35). For those unfamiliar with the bill, the general thrust of TRA86 was to broaden the tax base by closing or limiting various tax deductions and exemptions, and then to reduce marginal tax rates in a roughly revenue-neutral manner. This advice to broaden the tax base and reduce marginal tax rates is pretty standard, year in and year out, among mainstream public finance economists. But what made it possible for such legislation to actually be enacted in 1986?

Buchanan suggested that there is a cycle to tax policy. Say that you start off in a situation with a broad tax base and few loopholes. Over time, politicians and special interests will carve out a series of tax breaks. But every time they reduce the base of income that is taxed, they will be forced to raise marginal tax rates as well to garner the same amount of revenue. At some point, Buchanan argued, marginal tax rates have become so high that a countermovement forms. Essentially, the countermovement is willing to give up some tax loopholes of its own, as long as many other parties also need to give up their tax loopholes, in exchange for lower tax rates. As soon as this bargain is enacted into law, as in 1986, the political business of carving out loopholes begins all over again. 
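The arithmetic behind this cycle is just the identity that revenue equals the tax rate times the tax base. Here is a minimal sketch in Python, with hypothetical numbers and a single flat rate standing in for a full schedule of marginal rates:

```python
# Revenue-neutrality arithmetic behind the loophole cycle: with R = t * B,
# carving deductions out of the base B pushes the rate t up for the same R.
# A single flat rate stands in here for a full schedule of marginal rates.

def rate_needed(revenue: float, base: float) -> float:
    """Tax rate required to raise `revenue` from a taxable base of `base`."""
    return revenue / base

revenue_target = 20.0    # hypothetical revenue to be raised (say, percent of GDP)
broad_base = 100.0       # all income taxed, no loopholes
narrowed_base = 80.0     # after deductions and exemptions are carved out

print(rate_needed(revenue_target, broad_base))     # 0.20 -> a 20% rate suffices
print(rate_needed(revenue_target, narrowed_base))  # 0.25 -> the rate must rise to 25%
# A base-broadening reform like TRA86 runs the arithmetic in reverse: restoring
# the base lets the rate fall back to 20% with no loss of revenue.
```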

Thus, Buchanan did not view public policy as an attempt to reach a higher level of social welfare or a more efficient allocation of resources. These kinds of goals would be what Buchanan disparagingly called "organismic." Instead, Boudreaux and Holcombe describe Buchanan's view of the political process in this way: 
Economic and political outcomes are compromises among people with legitimate differences in their preferences. These outcomes can never be correct or incorrect in the same way that an answer to the question “What is the speed of light?” is correct or incorrect. The correct answer to the question about the speed of light is not a compromise among different answers offered by different physicists—the speed of light is what it is, objectively, regardless of physicists’ estimates of it. But the “correct” allocation of resources and “correct” level of protection of free speech are indeed nothing more than the compromises that emerge from the economic and political bargaining of many individuals, each with different preferences. In short, said Buchanan, politics is about finding peaceful agreements among people with different preferences on collective outcomes. Politics, unlike science, is not about making “truth judgments.” The challenge is to discover and use the set of rules that best promotes the making of compromises among people with different preferences. Legitimate scientific inquiry and judgment can play a role in assessing how well or poorly some existing or proposed set of rules will serve this goal. Even here, though, Buchanan warned that people’s differences in fundamental values means that there is no universal one “best” set of rules, scientifically discoverable, for all peoples and for all times. In the end, the best set of rules is that which wins the unanimous approval of the people who will live under it.
Notice here that the unanimous approval will not be for the outcomes of decisions by organizations. People will disagree over outcomes. Instead, Buchanan is suggesting that we might agree to a set of rules, and we might be willing to be coerced under those rules. As Boudreaux and Holcombe describe it:
In this situation, individuals might agree to be forced to pay toward financing the [public] good if everyone else is also forced to pay. Everyone could hold the same opinion, saying they do not want to pay unless everyone is forced to pay, but they would all agree to a policy that forces everyone to pay. People could agree to be coerced. The idea that people could agree to be coerced lies at the foundation of the social-contract theory of the state. Even though there is no actual contract, people would agree to give the state the authority to coerce those who violate its mandates, if everyone was bound to the same contract provisions. According to social-contract theory, because people would agree to be coerced for their own benefit, the exercise of such coercion violates no individual’s rights.
Buchanan extended this individual-based contractual view of organizations beyond government, and beyond  market exchange: 
The point is that exchange possibilities are not confined to the simple bilateral exchanges on which economists traditionally focus nearly all of their attention. When this truth is recognized, many familiar features of the real world are seen in a more revealing light. Clubs, homeowners’ associations, business firms, churches, philanthropic organizations—these and other voluntary associations are arrangements in which individuals choose to interact and exchange with each other in ways more complex than simple, one-off, arm’s length, bilateral exchanges. These “complex” exchange relationships are an important reality for economists to study. But they are more than mere subject matter for research. They are also evidence that human beings who are free to creatively devise and experiment with alternative organizational and contractual arrangements have great capacity to do so. Where the conventional economist sees “market failure,” humans on the spot often see opportunities for mutually advantageous exchange.
Buchanan felt so strongly about this position that in a 1964 essay, he suggested renaming the field of economics ("What Should Economists Do?" Southern Economic Journal, 30:3, pp. 213-222). Boudreaux and Holcombe discuss this essay in their Chapter 10; here, I quote from the 1964 essay. Buchanan argued that the current definition of economics is much too identified with the idea of choice. He wrote in 1964:  
In one sense, the theory of choice presents a paradox. If the utility function of the choosing agent is fully defined in advance, choice becomes purely mechanical. No "decision," as such, is required; there is no weighing of alternatives. On the other hand, if the utility function is not wholly defined, choice becomes real, and decisions become unpredictable mental events. If I know what I want, a computer can make all of my choices for me. If I do not know what I want, no possible computer can derive my utility function since it does not really exist.
Rather than basing economics on an idea of utility functions that do not actually exist until they are called into being by people's choices, Buchanan suggested that economics should instead be focused on the principle of voluntary exchange, and the conditions that people agree to in shaping such exchanges. He wrote: 
The theory of choice must be removed from its position of eminence in the economist's thought processes. The theory of choice, of resource allocation, call it what you will, assumes no special role for the economist, as opposed to any other scientist who examines human behavior. Lest you get overly concerned, however, let me hasten to say that most, if not all, of what now passes muster in the theory of choice will remain even in my ideal manual of instructions. I should emphasize that what I am suggesting is not so much a change in the basic content of what we study, but rather a change in the way we approach our material. I want economists to modify their thought processes, to look at the same phenomena through "another window," to use Nietzsche's appropriate metaphor. I want them to concentrate on "exchange" rather than on "choice." 

The very word "economics," in and of itself, is partially responsible for some of the intellectual confusion. The "economizing" process leads us to think directly in terms of the theory of choice. I think it was Irving Babbit who said that revolutions begin in dictionaries. Should I have my say, I should propose that we cease, forthwith, to talk about "economics" or "political economy," although the latter is the much superior term. Were it possible to wipe the slate clean, I should recommend that we take up a wholly different term such as "catallactics," or "symbiotics." The second of these would, on balance, be preferred. Symbiotics is defined as the study of the association between dissimilar organisms, and the connotation of the term is that the association is mutually beneficial to all parties. This conveys, more or less precisely, the idea that should be central to our discipline. It draws attention to a unique sort of relationship, that which involves the co-operative association of individuals, one with another, even when individual interests are different. It concentrates on Adam Smith's "invisible hand," which so few non-economists properly understand.
I am uncertain as to what the practitioners of catallactics or symbiotics would be called. "Catallacticologists?" "Catalysts?" "Symbioticians?" "Symbiotes?" I'm open to suggestions.

Friday, June 4, 2021

The Shrinking Role of European Companies in the Global Economy

The Economist titled its article on European corporations: "The land that ambition forgot. Europe is now a corporate also-ran. Can it recover its footing?" (June 5, 2021). The article is well worth reading, but here are a couple of snapshots and my own reactions. Notice in particular that these changes are fairly recent. The horizontal axis in these graphs starts only two decades ago. 

The share of EU companies among the largest in the world has been declining: "In 2000 nearly a third of the combined value of the world’s 1,000 biggest listed firms was in Europe, and a quarter of their profits. In just 20 years those figures have fallen by almost half."

Here's the EU share of the global economy, and also the stock market capitalization of EU companies compared to global stock market capitalization. The message here is not just that both shares have declined substantially. Notice also that back around 2000 the EU share of the global economy and the share of the EU in global stock market capitalization were roughly the same, but that is no longer true. 
Europe has often been a world leader in drawing up rules and regulations that companies must follow, in areas including digital privacy, environmental protection, use of genetic modification technologies, and so on. However, judging by performance, the EU countries have overall not been an especially friendly place to start or run a company. The European Union itself remains a fractured economic zone, separated by barriers set up by national governments, as well as by language and cultural differences.
The Economist writes that in recent decades big EU companies have preferred to expand their sales and operations overseas, rather than in their home base.

Companies are social mechanisms for organizing current and future production, and also for the planning and investment needed for future innovations in production methods and new products. Europe now has a smaller share of these engines of production.

Thursday, June 3, 2021

Why Have Mortality Rates Been Rising for US Working-Age Adults?

The mortality rate for "working age" US adults in the 25-64 age group has been rising. This isn't a pandemic-related issue, but instead something with roots in the data going back several decades. The National Academies of Sciences, Engineering, and Medicine digs into the underlying patterns and potential explanations in "High and Rising Mortality Rates Among Working-Age Adults" (March 2021; a prepublication copy of uncorrected proofs can be downloaded for free). The report's evidence and discussion are mainly focused on the period up through 2017.

The NAS report compares the US to 16 "peer countries," which are other countries with a high level of per capita income and well-developed health care systems. (The 16 countries are Australia, Austria, Canada, Denmark, Finland, France, Germany, Italy, Japan, Norway, Portugal, Spain, Sweden, Switzerland, the Netherlands, and the United Kingdom.) The two panels below compare life expectancy going back to 1950 for females and males in the US (red line) and the average of the peer countries (blue line). The almost-invisible gray lines show each of the 16 peer countries separately. 

For US women, life expectancy was slightly above that of the peer group in 1950, but starting around 1980 a divergence began. For US men, life expectancy was similar to the peer group but a divergence also began in the 1980s. For both US men and women, life expectancy seems to have flattened out in the last decade or so. 

The report also breaks down the same data by racial/ethnic status. In this figure, the red line shows only white US females and males. The dotted line shows non-white Hispanics, and is available only for recent years, but it pretty much overlaps the peer group. The dashed line shows black Americans. There is still a life expectancy gap between white and black Americans, but the gap has generally been declining over time. The levelling out of US life expectancies for the 25-64 age group in the last few decades has been largely a phenomenon affecting white Americans.

Notice that this comparison is not about infant or child mortality rates, nor is it about life expectancy for the elderly. Indeed, life expectancies for US infants, for children under the age of 10, and for adults who have already reached their 80s are higher than in the peer group of countries. It's the in-between age group where the difference arises.

As one digs into these patterns more closely, here are some of the details that emerge:
The committee identified three categories of causes of death that were the predominant drivers of trends in working-age mortality over the period: (1) drug poisoning and alcohol-induced causes, a category that also includes mortality due to mental and behavioral disorders, most of which are drug- or alcohol-related; (2) suicide; and (3) cardiometabolic diseases. The first two of these categories comprise causes of death for which mortality increased, while the third encompasses some conditions (e.g., hypertensive disease) for which mortality increased and others (e.g., ischemic heart disease) for which the pace of declining mortality slowed. ...

[I]ncreasing mortality among U.S. working-age adults is not new. The committee’s analyses confirmed that a long-term trend of stagnation and reversal of declining mortality rates that initially was limited to younger White women and men (aged 25–44) living outside of large central metropolitan areas (seen in women in the 1990s and men in the 2000s), subsequently spread to encompass most racial/ethnic groups and most geographic areas of the country. As a result, by the most recent period of the committee’s analysis (2012–2017), mortality rates were either flat or increasing among most working-age populations. Although this increase began among Whites, Blacks consistently experienced much higher mortality. ...

Over the 1990–2017 period, disparities in mortality between large central metropolitan and less-populated areas widened (to the detriment of the latter), and geographic disparities became more pronounced. Mortality rates increased across several regions and states, particularly among younger working-age adults, and most glaringly in central Appalachia, New England, the central United States, and parts of the Southwest and Mountain West. Mortality increases among working-age (particularly younger) women were more widespread across the country, while increases among men were more geographically concentrated.
Regarding socioeconomic status, the committee’s literature review revealed that a large number of studies using different data sources, measures of socioeconomic status, and analytic methods have convincingly documented a substantial widening of disparities in mortality by socioeconomic status among U.S. working-age Whites, particularly women, since the 1990s. Although fewer studies have examined socioeconomic disparities in working-age mortality among non-White populations, those that have done so show a stable but persistent gap in mortality among Black adults that favors those of higher socioeconomic status.
Many of these factors overlap in various ways, and the subject as a whole is not well understood. A substantial portion of the NAS report is a call for additional research. But if I had to extrapolate from the available data, one common pattern involves parts of the US that feel separated and isolated, whether by urban/nonurban status or by socioeconomic status. The specific causes of death contributing to the pattern seem to share the trait that they are potentially worsened by life and economic stress.

Although the report is about long-term trends, not the pandemic, it does offer the insight that COVID-19 has added to the disparity of mortality rates for working-age adults. Yes, the elderly accounted for by far the largest share of COVID-19 deaths. But if one looks in percentage terms, the report notes:
Thus, COVID-19 has reinforced and exacerbated existing mortality disparities within the United States, as well as between the United States and its peer countries. The CDC reported that adults aged 25–44 experienced the largest percentage increases in excess deaths during the pandemic (as of October 2020).  

Tuesday, June 1, 2021

What is Complexity Economics?

What distinguishes "complexity economics"? W. Brian Arthur offers a short readable overview in "Foundations of complexity economics" (Nature Reviews Physics 3: 136–145, 2021).  This is a personal essay, rather than a literature review. For example, Arthur explains how the modern research agenda for complexity economics emerged from work at the Santa Fe Institute in the late 1980s.

How is complexity economics different from regular economics? 
Complexity economics sees the economy — or the parts of it that interest us — as not necessarily in equilibrium, its decision makers (or agents) as not superrational, the problems they face as not necessarily well-defined and the economy not as a perfectly humming machine but as an ever-changing ecology of beliefs, organizing principles and behaviours. 
How does a researcher do economics in this spirit? A common approach is to describe, in mathematical terms, a number of decision-making agents within a certain setting. The agents start off with a range of rules for how they will perceive the situation and how they will make decisions. The rules that any given agent uses can change over time: the agent might learn from experience, or might decide to copy another agent, or the decision-making rule might experience a random change. The researcher can then look at the path of decision-making and outcomes that emerge from this process--a path which will sometimes settle into a relatively stable outcome, but sometimes will not. Arthur writes: 
Complexity, the overall subject, as I see it is not a science, rather it is a movement within science ... It studies how elements interacting in a system create overall patterns, and how these patterns, in turn, cause the elements to change or adapt in response. The elements might be cells in a cellular automaton, or cars in traffic, or biological cells in an immune system, and they may react to neighbouring cells’ states, or adjacent cars, or concentrations of B and T cells. Whichever the case, complexity asks how individual elements react to the current pattern they mutually create, and what patterns, in turn, result.
As Arthur points out, an increasingly digitized world is likely to offer a number of demonstrations of complexity theory at work. 
Now, under rapid digitization, the economy’s character is changing again and parts of it are becoming autonomous or self-governing. Financial trading systems, logistical systems and online services are already largely autonomous: they may have overall human supervision, but their moment-to-moment actions are automatic, with no central controller. Similarly, the electricity grid is becoming autonomous (loading in one region can automatically self-adjust in response to loading in neighbouring ones); air-traffic control systems are becoming autonomous and independent of human control; and future driverless-traffic systems, in which driverless-traffic flows respond to other driverless-traffic flows, will likely be autonomous. ... Besides being autonomous, they are self-organizing, self-configuring, self-healing and self-correcting, so they show a form of artificial intelligence. One can think of these autonomous systems as miniature economies, highly interconnected and highly interactive, in which the agents are software elements ‘in conversation with’ and constantly reacting to the actions of other software elements.
To put it another way, if we want to understand when these kinds of systems are likely to work well, and how they might go off the rails or be gamed, complexity analysis is likely to offer some useful tools. 

But what about using complexity theory for economics in particular? As Arthur writes: "A new theoretical framework in a science does not really prove itself unless it explains phenomena that the accepted framework cannot. Can complexity economics make this claim? I believe it can. Consider the Santa Fe artificial stock market model."

For example, there's a long-standing issue of why stock markets see short-run patterns of boom and bust. Another puzzle of stock markets is why there is so much trading of stocks. Sure, stock traders will disagree about the underlying value of stocks and about the meaning of recent news which affects perceptions of future value. Such disagreements will lead to a modest volume of stock trading, but it's hard to see how they lead to the extremely high volumes of trading seen in modern markets. John Cochrane phrased this point nicely in a recent interview with Tyler Cowen:
Why is there this immense volume of trading? When was the last time you bought or sold a stock? You don’t do it every 20 milliseconds, do you? I’ll highlight this. If I get my list of the 10 great unsolved puzzles that I hope our grandchildren will have figured out, why does getting the information into asset prices require that the stock be turned over a hundred times? That’s clearly what’s going on. There’s this vast amount of trading, which is based on information or opinion and so forth. I hate to discount it at all just as human folly, but that’s clearly what’s going on, but we don’t have a good model.
Here is Arthur's description of how complexity economics looks at these stock market puzzles: 
We set up an ‘artificial’ stock market inside the computer and our ‘investors’ were small, intelligent programs that could differ from one another. Rather than share a self-fulfilling forecasting method, they were required to somehow learn or discover forecasts that work. We allowed our investors to randomly generate their own individual forecasting methods, try out promising ones, discard methods that did not work and periodically generate new methods to replace them. They made bids or offers for a stock based on their currently most accurate methods and the stock price forms from these — ultimately, from our investors’ collective forecasts. We included an adjustable rate-of-exploration parameter to govern how often our artificial investors could explore new methods.

When we ran this computer experiment, we found two regimes, or phases. At low rates of investors trying out new forecasts, the market behaviour collapsed into the standard neoclassical equilibrium (in which forecasts converge to ones that yield price changes that, on average, validate those forecasts). Investors became alike and trading faded away. In this case, the neoclassical outcome holds, with a cloud of random variation around it. But if our investors try out new forecasting methods at a faster and more realistic rate, the system goes through a phase transition. The market develops a rich psychology of different beliefs that change and do not converge over time; a healthy volume of trade emerges; small price bubbles and temporary crashes appear; technical trading emerges; and random periods of volatile trading and quiescence emerge. Phenomena we see in real markets emerge. ...

I want to emphasize something here: such phenomena as random volatility, technical trading or bubbles and crashes are not ‘departures from rationality’. Outside of equilibrium, ‘rational’ behaviour is not well-defined. These phenomena are the result of economic agents discovering behaviour that works temporarily in situations caused by other agents discovering behaviour that works temporarily. This is neither rational nor irrational, it merely emerges.

Other studies find similar regime transitions from equilibrium to complex behaviour in nonequilibrium models. It could be objected that the emergent phenomena we find are small in size: price outcomes in our artificial market diverge from the standard equilibrium outcomes by only 2% or 3%. But — and this is important — the interesting things in real markets happen not with equilibrium behaviour but with departures from equilibrium. In real markets, after all, that is where the money is made.
In other words, the key to understanding the dynamics of stock markets may reside in the idea that investors are continually exploring new methods of investing, which in turn leads to high volumes of trading and in some cases to dysfunctional outcomes. Of course, Arthur offers a variety of other examples, as well.
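
To make the mechanics concrete, here is a minimal sketch, in Python, of an agent-based market in the spirit of what Arthur describes. To be clear, this is not the Santa Fe model itself: the linear forecasting rules, the price-adjustment step, the explore_rate parameter, and all numerical values below are my own simplified assumptions, meant only to show how heterogeneous, adaptive forecasters can be simulated and how a rate-of-exploration parameter can be varied.

# A minimal, illustrative agent-based market sketch. This is NOT Arthur's actual
# Santa Fe model: the forecasting rules, the price-adjustment step, and all
# parameter values are simplified assumptions for illustration only.

import random
import statistics

NUM_AGENTS = 200       # number of artificial investors (assumed)
NUM_PERIODS = 3000     # length of the simulated run (assumed)
FUNDAMENTAL = 100.0    # assumed dividend-based value of the stock
PRICE_IMPACT = 0.02    # how strongly excess demand moves the price (assumed)

class Investor:
    """Forecast rule: next price = price + a*(price - prev) + b*(FUNDAMENTAL - price).
    A positive `a` is trend-following, a negative `a` is contrarian; `b` pulls the
    forecast toward the fundamental value."""
    def __init__(self):
        self.a = random.uniform(-1.0, 1.0)
        self.b = random.uniform(0.0, 0.5)

    def forecast(self, price, prev_price, a=None, b=None):
        a = self.a if a is None else a
        b = self.b if b is None else b
        return price + a * (price - prev_price) + b * (FUNDAMENTAL - price)

    def maybe_explore(self, explore_rate, price, prev_price, realized_price):
        """With probability explore_rate, try a perturbed rule and keep it only if
        it would have predicted the realized price better than the current rule."""
        if random.random() < explore_rate:
            a_new = self.a + random.gauss(0, 0.2)
            b_new = max(0.0, self.b + random.gauss(0, 0.1))
            old_err = abs(self.forecast(price, prev_price) - realized_price)
            new_err = abs(self.forecast(price, prev_price, a_new, b_new) - realized_price)
            if new_err < old_err:
                self.a, self.b = a_new, b_new

def run_market(explore_rate):
    investors = [Investor() for _ in range(NUM_AGENTS)]
    prev_price, price = FUNDAMENTAL, FUNDAMENTAL * 1.01  # start slightly off fundamental
    returns, volumes = [], []
    for _ in range(NUM_PERIODS):
        # Each investor buys one unit if it forecasts a rise, sells one unit otherwise.
        orders = [1 if inv.forecast(price, prev_price) > price else -1 for inv in investors]
        buys = orders.count(1)
        sells = NUM_AGENTS - buys
        volumes.append(min(buys, sells))     # matched buy-sell pairs this period
        excess_demand = buys - sells
        new_price = price * (1 + PRICE_IMPACT * excess_demand / NUM_AGENTS)
        returns.append(new_price / price - 1)
        for inv in investors:                # adaptive step: occasional rule exploration
            inv.maybe_explore(explore_rate, price, prev_price, new_price)
        prev_price, price = price, new_price
    return statistics.stdev(returns), statistics.mean(volumes)

# Compare a low and a high rate of exploration -- the parameter Arthur's group varied.
for rate in (0.01, 0.30):
    vol, avg_trades = run_market(rate)
    print(f"explore_rate={rate:.2f}: return volatility {vol:.4f}, "
          f"avg matched trades per period {avg_trades:.1f}")

In Arthur's much richer model, a low exploration rate lets forecasts converge toward the neoclassical equilibrium and trading fades away, while a faster rate sustains heterogeneous beliefs, trading volume, and bouts of volatility. A toy sketch like this one only illustrates the basic setup; it does not necessarily reproduce that phase transition.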

For those who would like more background on complexity economics, one starting point would be the footnotes in Arthur's article. Another place to start is the essay by J. Barkley Rosser, "On the Complexities of Complex Economic Dynamics," in the Fall 1999 issue of the Journal of Economic Perspectives (13:4, 169-192). The abstract reads: 
Complex economic nonlinear dynamics endogenously do not converge to a point, a limit cycle, or an explosion. Their study developed out of earlier studies of cybernetic, catastrophic, and chaotic systems. Complexity analysis stresses interactions among dispersed agents without a global controller, tangled hierarchies, adaptive learning, evolution, and novelty, and out-of-equilibrium dynamics. Complexity methods include interacting particle systems, self-organized criticality, and evolutionary game theory, to simulate artificial stock markets and other phenomena. Theoretically, bounded rationality replaces rational expectations. Complexity theory influences empirical methods and restructures policy debates.

Monday, May 31, 2021

Clean Energy and Pro-Mining

One approach to the goal of reducing carbon emissions is sometimes called "electrification of everything," a phrase which is a shorthand for an agenda of using electricity from carbon-free sources--including solar and wind--to replace fossil fuels.  The goal is to replace fossil fuels in all their current roles: not just in generating electricity directly, but also in their roles in transportation, heating/cooling of buildings, industrial uses, and so on. Even with the possibilities for energy conservation and recycling taken into account, the "electrification of everything" vision would require a very substantial increase in electricity production in the US and everywhere. 

A necessary but often overlooked consequence of this transition is a dramatic increase in mining, as discussed in "The Role of Critical Minerals in Clean Energy Transitions," a World Energy Outlook Special Report from the International Energy Agency (May 2021). The IEA notes:

An energy system powered by clean energy technologies differs profoundly from one fuelled by traditional hydrocarbon resources. Building solar photovoltaic (PV) plants, wind farms and electric vehicles (EVs) generally requires more minerals than their fossil fuel-based counterparts. A typical electric car requires six times the mineral inputs of a conventional car, and an onshore wind plant requires nine times more mineral resources than a gas-fired power plant. Since 2010, the average amount of minerals needed for a new unit of power generation capacity has increased by 50% as the share of renewables has risen.

The types of mineral resources used vary by technology. Lithium, nickel, cobalt, manganese and graphite are crucial to battery performance, longevity and energy density. Rare earth elements are essential for permanent magnets that are vital for wind turbines and EV motors. Electricity networks need a huge amount of copper and aluminium, with copper being a cornerstone for all electricity-related technologies. The shift to a clean energy system is set to drive a huge increase in the requirements for these minerals, meaning that the energy sector is emerging as a major force in mineral markets. Until the mid-2010s, the energy sector represented a small part of total demand for most minerals. However, as energy transitions gather pace, clean energy technologies are becoming the fastest-growing segment of demand.

The IEA is careful to say that this rapid growth in demand for a number of minerals doesn't negate the need to move to cleaner energy, and the report argues that the difficulties of increasing mineral supply are "manageable, but real." But here is a summary list of some main concerns: 

High geographical concentration of production: Production of many energy transition minerals is more concentrated than that of oil or natural gas. For lithium, cobalt and rare earth elements, the world’s top three producing nations control well over three-quarters of global output. In some cases, a single country is responsible for around half of worldwide production. The Democratic Republic of the Congo (DRC) and People’s Republic of China (China) were responsible for some 70% and 60% of global production of cobalt and rare earth elements respectively in 2019. ...

Long project development lead times: Our analysis suggests that it has taken on average over 16 years to move mining projects from discovery to first production. ...

Declining resource quality: ... In recent years, ore quality has continued to fall across a range of commodities. For example, the average copper ore grade in Chile declined by 30% over the past 15 years. Extracting metal content from lower-grade ores requires more energy, exerting upward pressure on production costs, greenhouse gas emissions and waste volumes.

Growing scrutiny of environmental and social performance: Production and processing of mineral resources gives rise to a variety of environmental and social issues that, if poorly managed, can harm local communities and disrupt supply. ...

Higher exposure to climate risks: Mining assets are exposed to growing climate risks. Copper and lithium are particularly vulnerable to water stress given their high water requirements. Over 50% of today’s lithium and copper production is concentrated in areas with high water stress levels. Several major producing regions such as Australia, China, and Africa are also subject to extreme heat or flooding, which pose greater challenges in ensuring reliable and sustainable supplies.

The policy agenda here is fairly clear-cut. Put research and development spending into ways of conserving on the use of mineral resources, and on ways of recycling them. Step up the hunt for new sources of key minerals now, and get started sooner than strictly necessary with the planning and permitting. And for supporters of clean energy in high-income countries like the United States, be aware that straitjacket restrictions on mining in high-income countries are likely to push production into lower-income countries where any such restrictions may be considerably looser.

Friday, May 28, 2021

Do Riskier Jobs Get Correspondingly Higher Pay?

The idea of a "compensating differential" is conceptually straightforward. Imagine two jobs that require equivalent levels of skill. However, one job is unattractive in some way: physically exhausting, dangerous to one's health, bad smells, overnight hours, and so on. The idea of a compensating differential is that if employers want to fill these less attractive jobs, they will need to pay workers more than those workers would have received in more-attractive jobs. 

The existence of compensating differentials comes up in a number of broader issues. For example:

1) If you believe in compensating differentials, you are likely to worry less about health and safety regulation of jobs--after all, you believe that workers are being financially compensated for health and safety risks.

2) When discussing gender wage gaps, an issue that often comes up is to compare pay in male-dominated and female-dominated occupations. An argument is sometimes made that male-dominated occupations tend to be more physically dangerous or risky (think construction or law enforcement) or involve distasteful tasks (say, garbage collection). One justification for the pay levels in these male-dominated jobs is that they are in part a compensating differential.

3) When thinking about regulatory actions, it's common to compare the cost of the regulation to the benefits, which often requires estimating the "value of a statistical life." Here's one crisp explanation of the idea from Thomas J. Kniesner and W. Kip Viscusi:
Suppose further that ... the typical worker in the labor market of interest, say manufacturing, needs to be paid $1,000 more per year to accept a job where there is one more death per 10,000 workers. This means that a group of 10,000 workers would collect $10,000,000 more as a group if one more member of their group were to be killed in the next year. Note that workers do not know who will be fatally injured but rather that there will be an additional (statistical) death among them. Economists call the $10,000,000 of additional wage payments by employers the value of a statistical life.
Notice that at the center of this calculation is the idea of a compensating differential: in this case, the estimate rests on treating two jobs as essentially identical except that one carries a higher risk of death. (A small worked sketch of this arithmetic appears after this list.)

4) It's plausible that workers may sort themselves into jobs based on their own preferences. Thus, workers who end up working outdoors or overnight, for example, may be more likely to have a preference for working outdoors or overnight. Those who work in riskier jobs may be people who place a lower value on such risks. It would seem unwise to assume that workers who end up in different jobs have the same personal preferences about job characteristics: my compensating differential for working in a risky job may be higher than the compensating differential for those who actually have such jobs. It's also plausible that workers with lower income levels might be more willing to trade off higher risk for somewhat higher income than workers with higher income levels.

5) The idea that high-risk jobs are paid a compensating differential makes the labor market into a kind of health-based lottery, with winners and losers. The compensating differential is based on average levels of risk, but not everyone will have the average outcome. Those who take high-risk jobs, get higher pay, and do not become injured are effectively the winners. Those who take high-risk jobs but do become injured, and in this way suffer a loss of lifetime earnings, are effectively the losers.

6) Precise knowledge about the overall safety of jobs is likely to be very unequally distributed between the employer, who has experience with outcomes of many different workers, and the employee, who does not have access to similar data.
 
7) If compensating differentials do not exist--that is, if workers in especially unattractive jobs are not compensated in some way--then it raises questions about how real-world wages are actually determined. If most workers of a given skill level have a range of comparable outside job options, and act as if they have a range of outside options, then one might expect that an employer could only attract workers for a high-risk job by paying more. But if workers do not act as if they have comparable outside options, then their pay may not be closely linked to the riskiness or other conditions of their employment--and may not be closely linked to their productivity, either.
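
As promised in point 3) above, here is a minimal sketch, in Python, of the value-of-a-statistical-life arithmetic in the Kniesner-Viscusi example. The function name and the sample numbers are my own illustrative choices; the logic simply restates their hypothetical.

# Back-of-the-envelope value of a statistical life (VSL) from a compensating
# differential, restating the Kniesner-Viscusi example quoted above. The
# function name and the sample numbers are illustrative only.

def value_of_statistical_life(annual_wage_premium, added_annual_fatality_risk):
    """VSL = extra pay demanded per worker / extra fatality risk per worker.
    Equivalently: the total extra pay across a group of workers just large
    enough to expect one additional death."""
    return annual_wage_premium / added_annual_fatality_risk

# $1,000 more per year to accept one extra death per 10,000 workers:
vsl = value_of_statistical_life(1_000, 1 / 10_000)
print(f"Implied value of a statistical life: ${vsl:,.0f}")   # $10,000,000

# The same number viewed as a group total: 10,000 workers x $1,000 each.
print(f"Group total: ${10_000 * 1_000:,}")

In other words, the implied value of a statistical life is just the annual wage premium divided by the added annual fatality risk, which for $1,000 and a 1-in-10,000 risk works out to $10 million.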

As you might imagine, the empirical calculation of compensating differentials is a controversial business. Peter Dorman and Les Boden make the case that it's hard to find persuasive evidence for compensating wage differentials for risky work in their essay "Risk without reward: The myth of wage compensation for hazardous work" (Economic Policy Institute, April 19, 2021). The authors focus on the issue of occupational health and safety. They write: 
Although workplaces are much less dangerous now than they were 100 years ago, more than 5,000 people died from work-related injuries in the U.S. in 2018. The U.S. Department of Labor’s Bureau of Labor Statistics (BLS) reports that about 3.5 million people sustained injuries at work in that year. However, studies have shown that the BLS substantially underestimates injury incidence, and that the actual number is most likely in the range of 5-10 million. The vast majority of occupational diseases, including cancer, lung diseases, and coronary heart disease, go unreported. A credible estimate, even before the Covid-19 pandemic, is that 26,000 to 72,000 people die annually from occupational diseases. ...
The United States stands poorly in international comparisons of work-related fatal injury rates. The U.S. rate is 10% higher than that of its closest rival, Japan, and six times the rate of Great Britain. This difference cannot be explained by differences in industry mix: The U.S. rate for construction is 20% higher, the manufacturing rate 50% higher, and the transportation and storage rate 100% higher than that of the E.U.
I will not try here to disentangle the detailed issues related to the research for estimating compensating wage differentials for risky jobs. Those who do such research are aware of the potential objections and seek to address them. They argue that although any individual study may be suspect, a developed body of research using different data and methods produces believable results. On the other side, Dorman and Boden make the case that such findings should be viewed with a highly skeptical eye. They also point out that during the pandemic, it is far from obvious that the "essential" workers who continued in jobs that involved a higher risk to health received a boost in wages that reflected these risks. They write:
The view of the labor market associated with the freedom-of-contract perspective, which holds that OSH risks are efficiently negotiated between workers and employers, is at odds with nearly everything we know about how labor markets really work. It cannot accommodate the reality of good and bad jobs, workplace authority based on the threat of dismissal, discrimination, and the pervasive role of public regulation in defining what employment entails and what obligations it imposes. It also fails to acknowledge the social and psychological dimensions of work, which are particularly important in understanding how people perceive and respond to risk.