Sunday, May 31, 2020

"To be Happy at Home is the Ultimate Result of All Ambition"

As the time of stay-at-home recommendations and shutdowns continues, I found myself remembering this post from a couple of years ago. Being happy at home is an ongoing challenge, albeit for different reasons at different times. The post first appeared back around the holiday season; here, I've edited lightly to trim the holiday references. 
_____________

I sometimes reflect on how many of us put considerable time and energy into thinking about where to live and how to furnish our home--but then rush off and travel to other places to vacation, celebrate, and meet with friends.

Back in 1750, Samuel Johnson wrote in the November 10 issue of his magazine, The Rambler, "To be happy at home is the ultimate result of all ambition, the end to which every enterprise and labour tends ..." It's a thought-provoking sentiment. Many people would not describe their ambitions in this way, but would instead focus on a role outside the home and on the idea of becoming a "star" in business, politics, entertainment, social activism, or some other arena. It is of course conceptually impossible for everyone to be recognized as a star by everyone else, and so a desire for public recognition of star-status will leave most people unhappy. Being happy at home can be a difficult goal in its own way, but it does have two virtues. One is that being happy at home is based on one's own feelings and one's own ungilded personality, rather than on how one is perceived and treated by those outside one's family and close friends. The other is that being happy at home is a more broadly achievable goal for many people, unlike the evanescent dreams of fame and celebrity.

Going back further in time, the philosopher Blaise Pascal discussed a related question in 1669. He argued that we cannot be happy in our homes because when we are alone, we fall into thinking about our "weak and mortal condition," which is depressing. Rather than face ourselves and our lives squarely and honestly, we instead rush off looking for diversion. Pascal writes of how people "aim at rest through agitation, and always to imagine that they will gain the satisfaction which as yet they have not, if by surmounting certain difficulties which now confront them, they may thereby open the door to rest. Thus rolls all our life away. We seek repose by resistance to obstacles, and so soon as these are surmounted, repose becomes intolerable."

I aspire to remember and to live out the value of happiness at home. But I recognize in myself the contradiction of aiming at rest through agitation. I know that my opinion of myself, along with the opinions of those who have known me longest and most intimately, should matter most. But I recognize in myself a desire to receive attention and plaudits from those who barely know me at all.

Here's a longer version of the comments from Johnson and Pascal. First, from Samuel Johnson, from the November 10, 1750 issue of The Rambler:   
For very few are involved in great events, or have their thread of life entwisted with the chain of causes on which armies or nations are suspended; and even those who seem wholly busied in publick affairs, and elevated above low cares, or trivial pleasures, pass the chief part of their time in familiar and domestick scenes; from these they came into publick life, to these they are every hour recalled by passions not to be suppressed; in these they have the reward of their toils, and to these at last they retire.
The great end of prudence is to give chearfulness to those hours, which splendour cannot gild, and acclamation cannot exhilarate; those soft intervals of unbended amusement, in which a man shrinks to his natural dimensions, and throws aside the ornaments or disguises, which he feels in privacy to be useless incumbrances, and to lose all effect when they become familiar. To be happy at home is the ultimate result of all ambition, the end to which every enterprise and labour tends, and of which every desire prompts the prosecution.
It is, indeed, at home that every man must be known by those who would make a just estimate either of his virtue or felicity; for smiles and embroidery are alike occasional, and the mind is often dressed for show in painted honour, and fictitious benevolence. ... The most authentick witnesses of any man's character are those who know him in his own family, and see him without any restraint, or rule of conduct, but such as he voluntarily prescribes to himself. 
Here's the longer passage from Blaise Pascal, from the Pensées (1669):
When I have set myself now and then to consider the various distractions of men, the toils and dangers to which they expose themselves in the court or the camp, whence arise so many quarrels and passions, such daring and often such evil exploits, etc., I have discovered that all the misfortunes of men arise from one thing only, that they are unable to stay quietly in their own chamber. A man who has enough to live on, if he knew how to dwell with pleasure in his own home, would not leave it for sea-faring or to besiege a city. An office in the army would not be bought so dearly but that it seems insupportable not to stir from the town, and people only seek conversation and amusing games because they cannot remain with pleasure in their own homes.
But upon stricter examination, when, having found the cause of all our ills, I have sought to discover the reason of it, I have found one which is paramount, the natural evil of our weak and mortal condition, so miserable that nothing can console us when we think of it attentively.
Whatever condition we represent to ourselves, if we bring to our minds all the advantages it is possible to possess, Royalty is the finest position in the world. Yet, when we imagine a king surrounded with all the conditions which he can desire, if he be without diversion, and be allowed to consider and examine what he is, this feeble happiness will never sustain him; he will necessarily fall into a foreboding of maladies which threaten him, of revolutions which may arise, and lastly, of death and inevitable diseases; so that if he be without what is called diversion he is unhappy, and more unhappy than the humblest of his subjects who plays and diverts himself.
Hence it comes that play and the society of women, war, and offices of state, are so sought after. Not that there is in these any real happiness, or that any imagine true bliss to consist in the money won at play, or in the hare which is hunted; we would not have these as gifts. We do not seek an easy and peaceful lot which leaves us free to think of our unhappy condition, nor the dangers of war, nor the troubles of statecraft, but seek rather the distraction which amuses us, and diverts our mind from these thoughts. ...
They fancy that were they to gain such and such an office they would then rest with pleasure, and are unaware of the insatiable nature of their desire. They believe they are honestly seeking repose, but they are only seeking agitation.
They have a secret instinct prompting them to look for diversion and occupation from without, which arises from the sense of their continual pain. They have another secret instinct, a relic of the greatness of our primitive nature, teaching them that happiness indeed consists in rest, and not in turmoil. And of these two contrary instincts a confused project is formed within them, concealing itself from their sight in the depths of their soul, leading them to aim at rest through agitation, and always to imagine that they will gain the satisfaction which as yet they have not, if by surmounting certain difficulties which now confront them, they may thereby open the door to rest.
Thus rolls all our life away. We seek repose by resistance to obstacles, and so soon as these are surmounted, repose becomes intolerable. For we think either on the miseries we feel or on those we fear. And even when we seem sheltered on all sides, weariness, of its own accord, will spring from the depths of the heart wherein are its natural roots, and fill the soul with its poison.

Thursday, May 28, 2020

How Economists and Sociologists See Racial Discrimination Differently

Economists tend to see discrimination as based on the actions of individuals, who in turn are interacting in markets and society. Sociologists do not feel the same compulsion to build their theories on purposeful decision-making by individuals: "Sociologists generally understand racial discrimination as differential treatment on the basis of race that may or may not result from prejudice or animus and may or may not be intentional in nature." The Spring 2020 issue of the Journal of Economic Perspectives illustrates the difference with a two-paper symposium on "Perspectives on Racial Discrimination": one paper from economists Kevin Lang and Ariella Kahn-Lang Spitzer, and a companion from sociologists Mario Small and Devah Pager.
As most economists learned somewhere along the way, one can think of individual motivations for discrimination as coming in two flavors: "taste-based" discrimination in the oeuvre of Gary Becker (Nobel 1992) or "statistical discrimination" from the writings of Edmund Phelps (Nobel 2006) and Kenneth Arrow (Nobel 1972). One can dispute how economists discuss the subject of discrimination, but it would just be false to claim that it has not been a high-priority topic for top-level economists for decades.

Taste-based discrimination is the name given to racial prejudice and animus. Statistical discrimination refers to the reality that we all make generalizations about people. Sometimes the generalizations are socially useful: Lang and Spitzer mention that people are more likely to give up their seat on the bus or subway to a pregnant woman or an elderly person, based on the statistical generalization that they are more likely to need the seat, or that health care providers are more likely to emphasize breast-cancer screening for women than for men. However, statistical discrimination can also be harmful: say, if it is based on beliefs that those of a certain race who are applying for a job or to rent an apartment are more likely to be criminals. Moreover, when statistical discrimination is based on inaccurate statistics and exaggerated concerns, it begins to look functionally similar to taste-based discrimination. 
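To see that last point in miniature, here's a toy simulation of my own (not from the Lang and Spitzer paper; every number in it is hypothetical). Two groups have identical productivity, and an employer screens on a noisy signal. A penalty applied to one group's score depresses that group's callback rate in exactly the same way whether the penalty reflects an exaggerated statistical belief or outright animus:

```python
import numpy as np

# Toy model: two groups with identical productivity; the employer sees only a
# noisy signal of productivity and interviews anyone above a threshold.
rng = np.random.default_rng(1)
n = 200_000
group = rng.integers(0, 2, n)                       # group 0 and group 1
signal = rng.normal(0, 1, n) + rng.normal(0, 1, n)  # true productivity + noise

def callback_rates(penalty_on_group_1):
    # The penalty is subtracted from group 1's score. Whether it encodes an
    # exaggerated belief about the group or pure animus, the arithmetic --
    # and hence the callback gap -- is identical.
    score = signal - np.where(group == 1, penalty_on_group_1, 0.0)
    interviewed = score > 1.0
    return interviewed[group == 0].mean(), interviewed[group == 1].mean()

print("accurate beliefs (no penalty):", callback_rates(0.0))
print("exaggerated belief or animus: ", callback_rates(0.5))
```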

In addition, economists have long pointed out that the effects of discrimination may vary based on the parties involved: for example, in the context of labor market discrimination, one can look separately at discrimination by employers, by co-workers, and by customers. If the issue is discrimination by employers, one possible result is firms that are segregated by race but sell to the same consumers. If the issue is discrimination by customers, one result may be that whites become more likely to hold the "front-facing" jobs that deal directly with customers.  

The economic approach to discrimination, with its focus on purposeful and intentional acts by individuals, can offer some useful insights, and Lang and Spitzer give a useful overview of the research. For example, while the basic statistics show that blacks are more likely to be stopped for traffic violations, how can we know whether this is linked to prejudiced behavior by the police? One line of research has looked at traffic stops at different times of day, when there is more or less daylight. The underlying idea is that racial prejudice is more likely to manifest itself when the police can see the driver! The evidence from these studies is mixed: one study found no effect of daylight on the racial mix of traffic stops, but another found that blacks were stopped more often at night on streets with better lighting. 
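As a concrete sketch of how such a "veil of darkness" comparison might be coded up (the file and column names here are hypothetical stand-ins for real traffic-stop records, not from any of the studies Lang and Spitzer cite):

```python
import pandas as pd

# Hypothetical traffic-stop records with columns: clock_time ("HH:MM" string),
# is_dark (1 if the stop occurred after dusk), driver_black (1 if the stopped
# driver was black).
stops = pd.read_csv("stops.csv")

# Restrict to the evening "inter-twilight" window, where the same clock time is
# daylight on some dates and dark on others (daylight saving shifts help here).
window = stops[(stops["clock_time"] >= "17:00") & (stops["clock_time"] <= "20:00")]

# If officers act on driver race, and race is easier to see in daylight, the
# black share of stops should be higher when is_dark == 0.
print(window.groupby("is_dark")["driver_black"].mean())
```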

Studies of "ban-the-box" legislation also find unexpected effects, as Lang and Spitzer point out: 
Because a higher proportion of blacks have criminal records than whites do, one might expect that preventing employers from inquiring about criminal records, at least at an early stage, would increase black employment. However, if firms cannot ask for information about criminal records, they may rely on correlates of criminal history, including being a young black man. This concern is even greater if employers tend to exaggerate the prevalence of criminal histories among black men, thus leading to inaccurate statistical discrimination. Agan and Starr (2018) investigate “ban the box” legislation in which companies are forbidden from asking job applicants about criminal background. Before such rules took effect, employers interviewed similar proportions of black and white male job applicants without criminal records. Prohibiting firms from requesting this information reduced callbacks of black men relative to otherwise similar whites. Consistent with this, Doleac and Hansen (2016) find that banning the box reduced the employment of low-skill young black men by 3.4 percentage points and low-skill young Hispanic men by 2.3 percentage points. Similarly, occupational licensing increases the share of minority workers in an occupation despite their lower pass rates on such exams (Law and Marks 2009). Prohibiting the use of credit reports in hiring reduced black employment rather than increasing it (Bartik and Nelson 2019). Taken together, these studies provide strong evidence that statistical discrimination plays an important role in hiring.
As sociologists, Small and Pager have no direct issue with this kind of work in economics: as they point out, some sociologists work in a similar vein. But their essay emphasizes that discriminatory outcomes can emerge from reasonable-sounding institutional choices and from history. 

For example, many companies, when they are hiring, encourage current workers to refer their friends and neighbors. This practice is not overtly racial. But given US patterns of residential segregation and friendship, it means that new hires will tend to reinforce the earlier racial composition of the workforce. Or consider the standard practice that, when doing layoffs, the last hired will be the first fired. If a company has only fairly recently started hiring members of minority groups, then the weight of layoffs will fall more heavily on those groups. As Small and Pager write: 
It is not surprising that a national study of 327 establishments that downsized between 1971 and 2002 found that downsizing reduced the diversity of the firm’s managers—female and minority managers tended to be laid off first. But what is perhaps more surprising is that those companies whose layoffs were based formally on tenure or position saw a greater decline in the diversity of their managers; net of establishment characteristics such as size, personnel structures, unionization, programs targeting minorities for management, and many others; and of industry characteristics such as racial composition of industry and state labor force, proportion of government contractors, and others (Kalev 2014). In contrast, those companies whose layoffs were based formally on individual performance evaluations did not see greater declines in managerial diversity (Kalev 2014).
In other cases, actions taken for discriminatory reasons in the past can have effects for long periods into the future. For example, blacks are much less likely to accumulate wealth through homeownership than whites, and one reason dates back to decisions made by federal agencies in the 1930s. 
However, the Home Owners Loan Corporation and Federal Housing Administration were also responsible for the spread of redlining. As part of its evaluation of whom to help, the HOLC created a formalized appraisal system, which included the characteristics of the neighborhood in which the property was located. Neighborhoods were graded from A to D, and those with the bottom two grades or rankings were deemed too risky for investment. Color-coded maps helped assess neighborhoods easily, and the riskiest (grade D) neighborhoods were marked in red. These assessments openly examined a neighborhood’s racial characteristics, as “% Negro” was one of the variables standard HOLC forms required field assessors to record (for example, Aaronson, Hartley, and Mazumder 2019, 53; Norris and Baek 2016, 43). Redlined neighborhoods invariably had a high proportion of African-Americans. Similarly, an absence of African-Americans dramatically helped scores. For example, a 1940 appraisal of neighborhoods in St. Louis by the Home Owners Loan Corporation gave its highest rating, A, to Ladue, an area at the time largely undeveloped, described as “occupied by ‘capitalists and other wealthy families’” and as a place that was “not the home of ‘a single foreigner or Negro’” (Jackson 1980, 425). In fact, among the primary considerations for designating a neighborhood’s stability were, explicitly, its “protection from adverse influences,” “infiltration of inharmonious racial or nationality groups,” and presence of an “undesirable population” (as quoted in Hillier 2003, 403; Hillier 2005, 217).
More recent research looks at the long-term effects of the boundaries that were drawn at the time. As Small and Pager describe the Aaronson, Hartley, and Mazumder study cited above: 
The results are consistent with the HOLC boundaries having a causal impact on both racial segregation and lower outcomes for predominantly black neighborhoods. As the authors write, “areas graded ‘D’ become more heavily African-American than nearby C-rated areas over the 20th century, [a] . . . segregation gap [that] rises steadily from 1930 until about 1970 or 1980 before declining thereafter” (p. 3). They find a similar pattern when comparing C and B neighborhoods, even though “there were virtually no black residents in either C or B neighborhoods prior to the maps” (p. 3). Furthermore, the authors find “an economically important negative effect on homeownership, house values, rents, and vacancy rates with analogous time patterns to share African-American, suggesting economically significant housing disinvestment in the wake of restricted credit access” (pp. 2–3).
While economists have not totally neglected the role of institutions and history in the transmission of racial discrimination, it's fair to say that this hasn't been their main emphasis, either. My own sense is that through most of US history, the main issue of racial discrimination was explicit white prejudice. But the balance has shifted, and current differences in racial outcomes reflect a difficult combination of history, institutions, and social patterns. 

For example, one theme that has emerged from earlier research by both economists and sociologists is that discrimination can reduce the incentives to gain human capital. Indeed, a group that has experienced discrimination may end up with less human capital for interrelated reasons: less access to educational resources, reduced motivation to gain human capital (because of the prospect of future discrimination), and reduced expectations or support from family and peer groups. Once this dynamic has unfolded, then even employers who have zero taste for discrimination, and just hire on the basis of observable background and skills, will end up with different labor market outcomes by race. 
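Here's a deliberately stylized illustration of that feedback loop (my own toy model, not drawn from either JEP paper): if one group anticipates a lower payoff to acquiring skills, fewer of its members invest, and even a hiring rule that looks only at skills then produces a gap in employment.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
cost = rng.uniform(0, 1, n)          # each person's cost of acquiring human capital

def employment_rate(expected_payoff):
    # People invest in skills only when the payoff they expect covers their cost.
    invests = cost < expected_payoff
    skill = invests + rng.normal(0, 0.1, n)
    # The employer is "color-blind" here: hiring depends only on observed skill.
    return (skill > 0.5).mean()

print("group expecting full payoff:      ", employment_rate(0.8))
print("group anticipating discrimination:", employment_rate(0.5))
```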

Wednesday, May 27, 2020

Fiscal Federalism: An International View

What is the appropriate balance of taxes and spending between the central government of a country and the subcentral governments--in the US, state and local governments? Countries vary, and there's no one-size-fits-all model. But Kass Forman, Sean Dougherty, and Hansjörg Blöchliger provide an overview of how countries differ and some standard tradeoffs to consider in "Synthesising Good Practices in Fiscal Federalism: Key Recommendations from 15 Years of Country Surveys" (OECD Economic Policy Paper #28, April 2020).   

Here are a couple of figures to give some background on the underlying issues. In the first figure, the horizontal axis is the share of spending done by subcentral governments, while the vertical axis is the share of taxes collected by subcentral governments. Being on the 45-degree line would mean that these were the same. However, every country falls below the 45-degree line, which means that for every country, some of the revenues spent by subcentral governments are collected by the central government. 

It's interesting to note the different models of fiscal federalism that prevail in various countries. At the far right, Canada is clearly an outlier, with nearly 70% of all government spending happening at the subnational level, and half of all taxes collected at the subnational level. Other countries where about half or more of government spending happens at the subnational level include the US, Sweden, Switzerland (CHE), and Denmark. 

Mexico is an interesting case where 40% of government spending happens at the subnational level, but tax revenues collected at that level are very low. Germany (DEU) and Israel are countries with a substantial level of subnational spending that is nearly matched by the level of subnational taxes--and thus a relatively low redistribution of revenue from central to subcentral governments. Many countries huddled in the bottom left of the figure are low in both subnational spending and subnational taxes. 
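The distance between a country and the 45-degree line has a name in this literature: the vertical fiscal gap, the slice of subnational spending that must be financed by transfers from the center. Here's a back-of-the-envelope version in code, using rough shares read off the figure (the exact numbers are my approximations, for illustration only):

```python
# (spending share, tax share) of general government totals -- rough, illustrative
# values read off the figure, not the OECD's underlying data.
shares = {
    "Canada":  (0.70, 0.50),
    "Mexico":  (0.40, 0.05),
    "Germany": (0.38, 0.32),
}
for country, (spend, tax) in shares.items():
    gap = spend - tax  # positive gap = subnational spending financed by central transfers
    print(f"{country}: vertical fiscal gap = {gap:.2f} of total government activity")
```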

Here's a figure showing the change in these patterns across countries from 1995-2017. 

Countries near the crossing point of the horizontal and vertical lines saw relatively little change: for example, the US had a small rise in the share of spending happening at the subnational level and a small drop in the share of revenues raised at the subnational level. 

Some countries with a big rise in the share of spending happening at the subnational level include Spain (ESP), Belgium, and Sweden. Some countries with a big rise in the share of subnational taxes collected include Spain, Belgium, and Italy. Clearly, Spain stands out as a country that has been decentralizing both government revenue and spending. Conversely, Denmark (DNK) stands out as a country that has been decentralizing government spending, but centralizing the collection of tax revenue. Hungary and the Netherlands stand out as countries that have moved toward centralizing their spending, and Hungary in particular seems to be increasing subnational taxes while decreasing subnational spending. 

What are the key tradeoffs here?  Forman, Dougherty, and Blöchliger write (citations omitted): 
Fiscal federalism refers to the distribution of taxation and spending powers across levels of government. Through decentralisation, governments can bring public services closer to households and firms, allowing better adaptation to local preferences. However, decentralisation can also make intergovernmental fiscal frameworks more complex and risk reinforcing interregional inequality unless properly designed. Accordingly, several important trade-offs emerge from the devolution of tax and spending powers. ... 
For example:  
[D]ecentralised fiscal frameworks allow for catering to local preferences and needs, while more centralised frameworks help reap the benefits of scale. Another key trade-off derives from the effect of decentralisation on the cost of information to different levels of government. While greater decentralisation implies that sub-national governments can access more information about the needs of a constituency at lower cost, it simultaneously increases the informational distance between central and sub-national government. In turn, this may make information more costly from the perspective of the central government, impeding its co-ordination and monitoring functions.
Decentralisation could also engender a costly misalignment of incentives. For example, a “common pool” problem may arise when decentralisation narrows the sub-national revenue base and raises the vertical fiscal gap. In this case, the necessary reliance on revenue sharing with central government to ensure SCG [subcentral government] fiscal capacity may also distort the cost/benefit analysis of sub-national governments—particularly in situations where an SCG realises a payoff without bearing the entirety of the associated cost. Rigid arrangements that entrench fiscal dependence on the central government may drive SCGs to manipulate tax-sharing agreements in order to increase their share while undermining their motivation to cultivate the local tax base. Therefore, the possible efficiency and equity gains from decentralisation are closely linked to mitigating the pitfalls of poorly designed revenue sharing.
What does this mean in practical terms? Their survey of the cross-country research has a number of intriguing findings: 
OECD research has found a broadly positive relationship between revenue decentralisation and growth, with spending decentralisation demonstrating a weaker effect ...

[D]ecentralisation appears to reduce the gap between high and middle-income households but may leave low incomes behind, especially where jurisdictions have large tax autonomy ...

In healthcare, research suggests costs fall and life expectancy rises with moderate decentralisation, but the opposite effects hold once decentralisation becomes excessive (Dougherty et al., 2019[11]). With respect to educational attainment, Lastra-Anadón and Mukherjee (2019[27]) find that a 10 percentage point increase in the sub-national revenue share improves PISA scores by 6 percentage points ...

Decentralisation has also been linked to greater public investment, with a 10% point increase in decentralisation (as measured by both SCG spending and revenue share of government total) “lifting the share of public investment in total government spending from around 3% to more than 4% on average”. The investment driven by decentralisation appears to accrue principally to soft infrastructure, that is human capital as measured by education.
In the US version of fiscal federalism, state and local governments face constraints on their borrowing, while the federal government does not. When a disruption like a national recession or pandemic creates the need for a surge of government borrowing, it will thus be natural for subnational governments to turn to the US federal government for support. However, it's worth remembering that in more normal times, having state and local governments bear a substantial responsibility for their own tax and spending levels can have real benefits for accountability and government services. 
 

Tuesday, May 26, 2020

Will Telecommuting Stick?

Responses to the pandemic have shifted many patterns: online education for both K-12 and higher ed, online health care consultations, online business meetings, and telecommuting to jobs. It will be interesting to see whether these changes are only temporary, or the first step toward a more permanent shift. Here are some bits and pieces of evidence.

The word "telecommuting," and the emergence of the idea into public discourse, dates to the early 1970s. A NASA engineer named Jack Nilles, who was in fact working remotely, is usually credited with coining the word. He was also the lead author of a 1973 book looking more closely at the idea: The Telecommunications-Transportation Tradeoff: Options for Tomorrow.

Before the pandemic, the share of US workers who telecommuted on a regular basis had been creeping up rather slowly over time, and had exceeded 5% of the workforce. From one perspective, this isn't a huge number. From another perspective, it suggests that a lot of workers and employers have gained some experience with telecommuting over time. And for the country as a whole, the number of telecommuters already exceeded the number taking mass transit on a regular basis.

Of course, the pandemic changed telecommuting, along with so much else. A Gallup poll found that in the last two weeks of March 2020, the share of US workers who had ever worked remotely doubled:
[Gallup line graph: the percentage of U.S. workers saying they are working remotely doubled to 62% from mid-March to early April 2020.]


There's anecdotal evidence suggesting that for big companies, the shift to telecommuting may be substantial and lasting. For example, a New York Times story suggests "Manhattan Faces a Reckoning if Working From Home Becomes the Norm" (by Matthew Haag, May 12, 2020). The article points out that just three companies--Barclays, JP Morgan Chase and Morgan Stanley--employ more than 20,000 employees in Manhattan and "they lease more than 10 million square feet in New York — roughly all the office space in downtown Nashville."
Before the coronavirus crisis, three of New York City’s largest commercial tenants — Barclays, JP Morgan Chase and Morgan Stanley — had tens of thousands of workers in towers across Manhattan. Now, as the city wrestles with when and how to reopen, executives at all three firms have decided that it is highly unlikely that all their workers will ever return to those buildings. The research firm Nielsen has arrived at a similar conclusion. Even after the crisis has passed, its 3,000 workers in the city will no longer need to be in the office full-time and can instead work from home most of the week.
Manhattan has the largest business district in the country, and its office towers have long been a symbol of the city’s global dominance. With hundreds of thousands of office workers, the commercial tenants have given rise to a vast ecosystem, from public transit to restaurants to shops. They have also funneled huge amounts of taxes into state and city coffers. But now, as the pandemic eases its grip, ...  they are now wondering whether it’s worth continuing to spend as much money on Manhattan’s exorbitant commercial rents. They are also mindful that public health considerations might make the packed workplaces of the recent past less viable. ...
Lots of other prominent firms, like Twitter and Facebook, are also announcing that work-from-home will be available to many more employees in the future. 

With just a bit of imagination, one can envisage a future where corporate offices have a lot less floorspace. There would be rooms for team meetings, and a number of cubicles or small offices used by whoever was physically present that day. But the idea that most workers have a desk or cubicle or office reserved for them would fade away. One can also imagine a corresponding shift on the residential side of the market. There might be less demand for living near concentrations of urban jobs, and more demand for living near what has been viewed as a vacation destination. (For example, interest in real estate near Lake Tahoe has perked up.) When it comes to choosing a home, more people might go beyond just counting bedrooms and bathrooms and noting what the countertops are made of, and also look for a place that has a nook or a room designed as an at-home workspace.

But what actual evidence bears on the likelihood that such a shift will occur? I'm only aware of one previous example of a comparable shock that led to a very large short-run increase in telecommuting: an earthquake in New Zealand in 2011. Erin Brown explains in an article in Knowable magazine ("Could Covid-19 usher in a new era of working from home?" April 30, 2020).
When a magnitude 6.3 earthquake struck Christchurch, New Zealand, on February 22, 2011, the capital city’s central business district was leveled — and hundreds of essential government workers suddenly found themselves working from home, scrambling to figure out how to get their jobs done without access to the office. Some encountered technical difficulties, others had trouble managing teams. But most found the pros outweighed the cons, and agencies held on to remote work options.
“It was immediate telework,” says Kate Lister, president of Global Workplace Analytics, a consulting firm near San Diego that helps companies set up work-from-home policies. “And once it was over, they did not go back.”
On the other side, Brown also points out that not all past experiences with telecommuting have been positive, from either the employer's or the employee's point of view. Yahoo! and Best Buy both had work-at-home policies that they abolished in 2013. More recently, before the pandemic, the Trump administration had been reducing telework options for federal employees. Brown writes:
A 2017 look at research on alternative work arrangements, including remote work, in the Annual Review of Organizational Psychology and Organizational Behavior identified similar benefits. It also documented challenges for remote workers, including feeling lonely, isolated or not respected by colleagues; an increase in work-family conflict due to longer hours and blurred boundaries; and, in some cases, a tendency among workers who are encouraged to maintain work-life boundaries to “be less likely to extend themselves in crunch times, possibly increasing the workload of non-telecommuting coworkers.”
Katherine Guyot and Isabel V. Sawhill offer an essay with links to much of the recent evidence in  "Telecommuting will likely continue long after the pandemic" (Brookings Institution,  April 6, 2020). They write:
[A] recent paper estimates that only a third of jobs can be done entirely from home. ... Already, nearly one in five chief financial officers surveyed last week said they planned to keep at least 20% of their workforce working remotely to cut costs. ... Cutting commuting is good both for environmental reasons and because it is one of the least enjoyable activities that many adults engage in on a daily basis. ...
It can be a problem when some members of an organization or a team can telework and others cannot. Having more coworkers who telework can result in lower performance, higher absenteeism, and higher turnover among those who do not telework, particularly if team members have very limited face-to-face time. This suggests that telework may create some additional work for onsite workers (for instance, if they have to serve as liaisons for their teleworking colleagues), or that social interaction at work is important for morale. ...
A new study of employees at a U.S. technology services company found that extensive telecommuting is associated with fewer promotions and lower salary growth, but that telecommuters who have face-to-face time with managers or who perform supplemental work outside of normal hours have better outcomes. Supplemental work signals dedication to the job but also blurs the boundary between work and home life, contributing to pressure to be “always on.”
Of course, extrapolating from a pandemic back to (relative) normality is a dicey business. It's plausible that the enormous advances in telecommunications technology would have led to a gradual increase in telecommuting over time even without the pandemic. Maybe it will turn out that the pandemic just gave telecommuting a push to move a little faster in the direction it was already headed. Here, I'll just finish with a few reasons for doubting that a seismic shift in telecommuting has just occurred. 

1) For many high-income workers, the option to telework can feel like a way to get some focused work time and also to improve the work-life balance. But if telework is imposed by the employer, and begins to include an expectation of being available for work 24/7, and perhaps especially if it is imposed on low-income workers, the feeling may be quite different.  More generally, there may be some workers who are happier heading off to work in an office, with a separation between work and home life. 

2) In the pandemic, firms moved people with existing and well-defined jobs into telecommuting for a time. But firms need to evolve over time. Job responsibilities and jobs themselves change. How will hiring and training of new employees happen? How will team-based projects be pursued?  How will new directions for the company be conveyed to teleworking employees? How will work be evaluated? The loyalty that workers feel to their employer as unemployment soars in a pandemic may be rather different than the loyalty they feel after working at home for months or years. 

3) One of the fundamental questions of urban economics is why economic activity tends to be clumped together in metro areas, and often clumped together in certain parts of urban areas, rather than being spread out more evenly across the landscape. The broad answer is that there are "economies of agglomeration," which is a fancy way of saying that workers who are physically located near each other tend to be more productive, as they share ideas and goals and motivate and interact with each other. There is a large body of empirical evidence on the concentration of economic activity, as well as high productivity and innovation, within specific locations. My suspicion is that the wise and career-oriented telecommuter will still want to find a way to make regular in-person appearances at these locations.

Monday, May 25, 2020

Interview with Larry Summers: China, Debt, Pandemic, and More

Irwin Stelzer and Jeffrey Gedmin have a wide-ranging interview with Lawrence Summers in The American Interest (May 22, 2020, "How to Fix Globalization—for Detroit, Not Davos"). As always, Summers is his incorrigibly interesting and provocative self. Here are a few of many quotable remarks. 

China
In general, economic thinking has privileged efficiency over resilience, and it has been insufficiently concerned with the big downsides of efficiency. Going forward we will need more emphasis on “just in case” even at some cost in terms of “just in time.” More broadly our economic strategy will need to put less emphasis on short-term commercial advantage and pay more attention to long-run strategic advantage. ...
At the broadest level, we need to craft a relationship with China from the principles of mutual respect and strategic reassurance, with rather less of the feigned affection that there has been in the past. We are not partners. We are not really friends. We are entities that find ourselves on the same small lifeboat in turbulent waters a long way from shore. We need to be pulling in unison if things are to work for either of us. If we can respect each other’s roles, respect our very substantial differences, confine our spheres of negotiation to those areas that are most important for cooperation, and represent the most fundamental interests of our societies, we can have a more successful co-evolution than we have had in recent years. ...
Attitudes on Globalization
Someone put it to me this way: First, we said that you are going to lose your job, but it was okay because when you got your new one, you were going to have higher wages thanks to lower prices because of international trade. Then we said that your company was going to move your job overseas, but it was really necessary because if we didn’t do that, then your company was going to be less competitive. Now we’re saying that we have to cut the taxes on those companies and cut the calculus class from your kid’s high school, because otherwise we won’t be able to attract companies to the United States, and you have to pay higher taxes and live with fewer services. At a certain point, people say, “This whole global thing doesn’t work for me,” and they have a point.
So we need a global agenda that is about broad popular interests rather than about corporate freedom—that is, cooperation to assure that government purposes can be served and that global threats can be met. If we have an agenda like that, we can rebuild a constituency for global dialogue.
Government debt
The deepest truth about debt is that you can’t evaluate borrowing without knowing what it’s going to be used for. Borrowing to invest in ways that earn a higher return than the cost of borrowing, and provide the wherewithal for debt service with an excess leftover, is generally a good and sustainable thing. Borrowing to finance consumption, leaving no return to cover debt service, is generally an unsustainable and problematic thing. ...
I think we need to be very careful, with respect to the expectation that we now seem to be setting of having government cover all the losses associated with the COVID period. For the life of me, I cannot understand why grants should have been made to airlines to enable them to continue to function, rather than allowing their share values to be further depressed, and allowing those who would earn substantial premiums by taking risk on airline bonds to do so, accepting the consequences of an investment gone wrong.
Looking towards an economy that is going to be very different than the one we had before COVID, we cannot aspire to maintain every job or every enterprise with a compensation program indefinitely. So as I look at the 30 percent of GDP deficit that we are running in Quarters Three and Four of Fiscal 2020, I don’t think that can be sustained over a multi-year period.
Enforcing Existing Tax Laws for Those With High Incomes
We could raise well over a trillion dollars over the next decade by simply enforcing the tax law that we have against people with high incomes. Natasha Sarin and I made this case and generated a revenue estimate some time ago. If we just restored the IRS to its previous size, judged relatively to the economy; if we moved past the massive injustice represented by the fact that you’re more likely to get audited if you receive the earned income tax credit (EITC) than if you earn $300,000 a year or more; if we made plausible use of information technology and the IRS got to where the credit card companies were 20 years ago, in terms of information technology-matching; and if we required of those who make shelter investments the kind of regular reporting that we require of cleaning women, we would raise, by my estimate, over a trillion dollars [over ten years]. Former IRS Commissioner Charles Rossotti, who knows more about it than I do, thinks the figure is closer to $2 trillion. That’s where we should start.
Coronavirus Priorities
The real crime is not that we miscalibrated on some economic versus public health trade-off. The real crime is that we have not succeeded in generating far greater quantities of testing, far greater mechanisms for those 40 million unemployed people to do contact tracing, far more availability of well-fitting, comfortable, and safe masks, and that we’re under-investing in the development of new therapeutics and vaccines.
When something costs $10 to $15 billion a day, you need to make decisions in new ways. We should not be waiting to see which of two tests works best. We should be producing both of them. We should not wait for vaccines to be proven before we start producing them. We should be producing all the plausible candidates. Remember, one week earlier in moving through this is worth a hundred billion dollars: two months’ worth of the annual defense budget.

Friday, May 22, 2020

Interview with Joshua Angrist: Education Policy and Causality Questions

David A. Price interviews Joshua Angrist in Econ Focus (First Quarter 2020, Federal Reserve Bank of Richmond, pp. 18-22). Angrist is well-known for his creativity and diligence in thinking about research design: that is, don't just start by looking at a bunch of correlations between variables, but instead think about what you might be able to infer about causality from looking at the data in a specific way. A substantial share of his recent research has focused on education policy, and that's the main focus of the interview as well.

To get a sense of what "research design" means in this area, consider some examples. Imagine that you want to know whether a student does better from attending a public charter school. If the school is oversubscribed and holds a lottery (as often happens), then you can compare those attending the charter with those who applied but were not chosen in the lottery. Does being surrounded by high-quality peers help your education? You can look at students who were accepted to institutions like Harvard and MIT but chose not to attend, and compare them with the students who were accepted and did choose to attend. Of course, these kinds of comparisons have to be done with appropriate statistical care. But their results are much more plausibly interpreted as causal, not just as a set of correlations.
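To make the lottery logic concrete, here's a minimal sketch with simulated data (my own illustration, not from the interview): comparing lottery winners to losers gives an "intent-to-treat" effect, and rescaling by the difference in actual attendance rates recovers the effect of attending, with the lottery serving as an instrument.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

win = rng.integers(0, 2, n)                           # 1 if offered a charter seat
attend = win & (rng.random(n) < 0.8)                  # 80% of winners enroll; losers can't
ability = rng.normal(0, 1, n)                         # unobserved, independent of the lottery
score = 0.3 * attend + ability + rng.normal(0, 1, n)  # true effect of attending = 0.3

# Intent-to-treat: compare all winners to all losers, enrolled or not.
itt = score[win == 1].mean() - score[win == 0].mean()

# Wald/IV estimate: rescale the ITT by the lottery's effect on attendance.
first_stage = attend[win == 1].mean() - attend[win == 0].mean()
print(f"ITT = {itt:.3f}, first stage = {first_stage:.3f}, IV = {itt / first_stage:.3f}")
```

Here are some comments from Angrist in the interview that caught my eye.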

Peer Effects in High School? 
I think people are easily fooled by peer effects. Parag [Pathak], Atila Abdulkadiroglu, and I call it "the elite illusion." We made that the title of a paper. I think it's a pervasive phenomenon. You look at the Boston Latin School, or if you live in Northern Virginia, there's Thomas Jefferson High School for Science and Technology. And in New York, you have Brooklyn Tech and Bronx Science and Stuyvesant.
And so people say, "Look at those awesome children, look how well they did." Well, they wouldn't get into the selective school if they weren't awesome, but that's distinct from the question of whether there's a causal effect. When you actually drill down and do a credible comparison of students who are just above and just below the cutoff, you find out that elite performance is indeed illusory, an artifact of selection. The kids who go to those schools do well because they were already doing well when they got in, but there's no effect from exposure to higher-achieving peers.
How Much Does Attending a Selective College Matter? 
I teach undergrad and grad econometrics, and one of my favorite examples for teaching regression is a paper by Alan Krueger and Stacy Dale that looks at the effects of going to a more selective college. It turns out that if you got into MIT or Harvard, it actually doesn't matter where you go. Alan and Stacy showed that in two very clever, well-controlled studies. And Jack Mountjoy, in a paper with Brent Hickman, just replicated that for a much larger sample. There isn't any earnings advantage from going to a more selective school once you control for the selection bias. So there's also an elite illusion at the college level, which I think is more important to upper-income families, because they're desperate for their kids to go to the top schools. So desperate, in fact, that a few commit criminal fraud to get their kids into more selective schools.
Charter schools and takeovers
The most common charter model is what we call a startup — somebody decides they want to start a charter school and admits kids by lottery. But an alternative model is the takeover. Every state has an accountability system with standards that require schools to meet certain criteria. When they fail to meet these standards, they're at risk of intervention by the state. Some states, including Massachusetts, have an intervention that involves the public school essentially being taken over by an outside operator. Boston had takeovers. And New Orleans is actually an all-charter district now, but it moved to that as individual schools were being taken over by charter operators.
That's good for research, because you can look at schools that are struggling just as much but are not taken over or are not yet taken over and use them as a counterfactual. The reason that's important is that people say kids who apply to the startups are self-selected and so they're sort of primed to gain from the charter treatment. But the way the takeover model works in Boston and New Orleans is that the outside operator inherits not only the building, but also the existing enrollment. So they can't cherry-pick applicants. What we show is that successful charter management organizations that run successful startups also succeed in takeover scenarios.
Angrist has developed the knack of looking for these ways of interpreting a given data set, sometimes called "natural experiments." For those trying to find such examples as a basis for their own research, he says: 
One thing I learned is that empiricists should work on stuff that's nearby. Then you can have some visibility into what's unique and try to get on to projects that other people can't do. This is particularly true for empiricists who are working outside the United States. There's a temptation to just mimic whatever the Americans and British are doing. I think a better strategy is to say, "Well, what's special and interesting about where I am?"
Finally, as a bit of a side note, I was intrigued by Angrist's neutral-to-negative take on the potential for machine learning in econometrics:
I just wrote a paper about machine learning applications in labor economics with my former student Brigham Frandsen. Machine learning is a good example of a kind of empiricism that's running way ahead of theory. We have a fairly negative take on it. We show that a lot of machine learning tools that are very popular now, both in economics and in the wider world of data science, don't translate well to econometric applications and that some of our stalwarts — regression and two-stage least squares — are better. But that's an area of ongoing research, and it's rapidly evolving. There are plenty of questions there. Some of them are theoretical, and I won't be answering those questions, but some are practical: whether there's any value added from this new toolkit. So far, I'm skeptical.
Angrist has also written for the Journal of Economic Perspectives a few times.

Thursday, May 21, 2020

Reconsidering the "Washington Consensus" in the 21st Century

The "Washington consensus" has become a hissing and a buzzword over time. The usual implication is that free-market zealots in Washington, DC, told developing countries around the world that they would thrive if they followed free-market policies, but when developing countries tried out these policies, they were proven not to work. William Easterly, who has been a critic of the "Washington consensus" in the past, offers an update and some new thinking in "In Search of Reforms for Growth New Stylized Facts on Policy and Growth Outcomes" (Cato Institute, Research Briefs #215, May 20, 2020). He summarizes some ideas from his NBER working paper of the same title (NBER Working Paper 26318, September 2019)/

Before discussing what Easterly has to say, it's perhaps useful to review how the "Washington consensus" terminology emerged. The name traces back to a 1989 seminar in which John Williamson tried to write down what he saw as the main steps that policy-makers in Washington, DC, thought were appropriate for countries in Latin America facing a debt crisis. As Williamson wrote in the resulting essay published in 1990:
No statement about how to deal with the debt crisis in Latin America would be complete without a call for the debtors to fulfill their part of the proposed bargain by "setting their houses in order," "undertaking policy reforms," or "submitting to strong conditionality."
The question posed in this paper is what such phrases mean, and especially what they are generally interpreted as meaning in Washington. Thus the paper aims to set out what would be regarded in Washington as constituting a desirable set of economic policy reforms. ... The Washington of this paper is both the political Washington of Congress and senior members of the administration and the technocratic Washington of the international financial institutions, the economic agencies of the US government, the Federal Reserve Board, and the think tanks. ... Washington does not, of course, always practice what it preaches to foreigners.
Here's how Williamson summed up the 10 reforms he listed in a follow-up essay in 2004:
  1. Fiscal Discipline. This was in the context of a region where almost all countries had run large deficits that led to balance of payments crises and high inflation that hit mainly the poor because the rich could park their money abroad.
  2. Reordering Public Expenditure Priorities. This suggested switching expenditure in a progrowth and propoor way, from things like nonmerit subsidies to basic health and education and infrastructure. It did not call for all the burden of achieving fiscal discipline to be placed on expenditure cuts; on the contrary, the intention was to be strictly neutral about the desirable size of the public sector, an issue on which even a hopeless consensus-seeker like me did not imagine that the battle had been resolved with the end of history that was being promulgated at the time. 
  3. Tax Reform. The aim was a tax system that would combine a broad tax base with moderate marginal tax rates. 
  4. Liberalizing Interest Rates. In retrospect I wish I had formulated this in a broader way as financial liberalization, stressed that views differed on how fast it should be achieved, and—especially—recognized the importance of accompanying financial liberalization with prudential supervision. 
  5. A Competitive Exchange Rate. I fear I indulged in wishful thinking in asserting that there was a consensus in favor of ensuring that the exchange rate would be competitive, which pretty much implies an intermediate regime; in fact Washington was already beginning to edge toward the two-corner doctrine which holds that a country must either fix firmly or else it must float “cleanly”. 
  6. Trade Liberalization. I acknowledged that there was a difference of view about how fast trade should be liberalized, but everyone agreed that was the appropriate direction in which to move. 
  7. Liberalization of Inward Foreign Direct Investment. I specifically did not include comprehensive capital account liberalization, because I did not believe that did or should command a consensus in Washington. 
  8. Privatization. As noted already, this was the one area in which what originated as a neoliberal idea had won broad acceptance. We have since been made very conscious that it matters a lot how privatization is done: it can be a highly corrupt process that transfers assets to a privileged elite for a fraction of their true value, but the evidence is that it brings benefits (especially in terms of improved service coverage) when done properly, and the privatized enterprise either sells into a competitive market or is properly regulated. 
  9. Deregulation. This focused specifically on easing barriers to entry and exit, not on abolishing regulations designed for safety or environmental reasons, or to govern prices in a non-competitive industry. 
  10. Property Rights. This was primarily about providing the informal sector with the ability to gain property rights at acceptable cost (inspired by Hernando de Soto’s analysis). 
There are really two main sets of complaints about the "Washington consensus" recommendations. One is that a united DC-centered policy establishment was telling countries around the world what to do in an overly detailed and intrusive way. The other is that the recommendations weren't showing meaningful results for improved economic growth in the countries of Latin America, Africa, or elsewhere. As Easterly points out, these complaints were being voiced by the mid-1990s. 

In response, the standard answers were that many of these countries had not actually adopted the list of 10 policy reforms. Moreover, the responses went, there is no instant-fix set of policies for raising economic growth, and these policies need to be maintained in place for years (or decades?) before their effects will be meaningful. And there the controversy (mostly) rested.

Easterly is (wisely) not seeking to refight the specific proposals of the Washington consensus. Instead,  he is just pointing out some basic facts. The share of countries with extremely negative macroeconomic outcomes--like very high inflation, or very high black market premiums on the exchange rate for currency--diminished sharply in the 21st century, as compared to the 1980s and 1990s. Here are a couple of figures from Easterly's NBER working paper:


These kinds of figures provide a context for the 1990 Washington consensus: for example, in the late 1980s and early 1990s, when 25-40% of all countries in the world had inflation rates greater than 40%, getting that wildfire under control was a high priority.

Easterly also points out that as these extremely undesirable outcomes diminished in the 1990s, growth across the countries of Latin America and Africa improved in the 21st century. Easterly thus offers this gentle reconsideration of the Washington consensus policies: 
The new stylized facts seem most consistent with a position between complete dismissal and vindication of the Washington Consensus. ... Even critics of the Washington Consensus might agree that extreme ranges of inflation, black market premiums, overvaluation, negative real interest rates, and repression of trade were undesirable. ...
Despite these caveats, the new stylized facts are consistent with a more positive view of reform, compared to the previous consensus on doubting reform. The reform critics (including me) failed to emphasize the dangers of extreme policies in the previous reform literature or to note how common extreme policies were. Even if the reform movement was far from a complete shift to “free market policies,” it at least seems to have accomplished the elimination of the most extreme policy distortions of markets, which is associated with the revival of growth in African, Latin American, and other countries that had extreme policies. 

Wednesday, May 20, 2020

Is a Revolution in Biology-based Technology on the Way?

Sometimes, a person needs a change from feeling rotten about the pandemic and the economy. One needs a sense that, if not right away, the future holds some imaginative and exciting possibilities. A group at the McKinsey Global Institute--Michael Chui, Matthias Evers, James Manyika, Alice Zheng, and Travers Nisbet--have been working for about a year on their report: "The Bio Revolution: Innovations transforming economies, societies, and our lives" (May 2020). It's got a last-minute text box about COVID-19, emphasizing the speed with which biomedical research has been able to move into action in looking for vaccines and treatments. But the heart of the report is that the authors looked at the current state of biotech, and came up with a list of about 400 "cases that are scientifically conceivable today and that could plausibly be commercialized by 2050. ... Over the next ten to 20 years, we estimate that these applications alone could have direct economic impact of between $2 trillion and $4 trillion globally per year."

For me, reports like this aren't about the economic projections, which are admittedly shaky, but rather are a way of emphasizing the importance of increasing national research and development efforts across a spectrum of technologies. As the authors point out, the collapsing costs of sequencing and editing genes are reshaping what's possible with biotech. Here are some of the possibilities they discuss.

When it comes to physical materials, the report notes that in the long run:
As much as 60 percent of the physical inputs to the global economy could, in principle, be produced biologically. Our analysis suggests that around one-third of these inputs are biological materials, such as wood, cotton, and animals bred for food. For these materials, innovations can improve upon existing production processes. For instance, squalene, a moisturizer used in skin-care products, is traditionally derived from shark liver oil and can now be produced more sustainably through fermentation of genetically engineered yeast. The remaining two-thirds are not biological materials—examples include plastics and aviation fuels—but could, in principle, be produced using innovative biological processes or be replaced with substitutes using bio innovations. For example, nylon is already being made using genetically engineered microorganisms instead of petrochemicals. To be clear, reaching the full potential to produce these inputs biologically is a long way off, but even modest progress toward it could transform supply and demand and economics of, and participants in, the provision of physical inputs.  ...
Biology has the potential in the future to determine what we eat, what we wear, the products we put on our skin, and the way we build our physical world. Significant potential exists to improve the characteristics of materials, reduce the emissions profile of manufacturing and processing, and shorten value chains. Fermentation, for centuries used to make bread and brew beer, is now being used to create fabrics such as artificial spider silk. Biology is increasingly being used to create novel materials that can raise quality, introduce entirely new capabilities, be biodegradable, and be produced in a way that generates significantly less carbon emissions. Mushroom roots rather than animal hide can be used to make leather. Plastics can be made with yeast instead of petrochemicals. ...
A significant share of materials developed through biological means are biodegradable and generate less carbon during manufacture and processing than traditional materials. New bioroutes are being developed to produce chemicals such as fertilizers and pesticides. ...
A deeper understanding of human genetics offers potential for improvements in health care, where the social benefits go well beyond higher economic output. The report estimates that there are 10,000 human diseases caused by a single gene.

A new wave of innovation is under way that includes cell, gene, RNA, and microbiome therapies to treat or prevent disease, innovations in reproductive medicine such as carrier screening, and improvements to drug development and delivery.  Many more options are being explored and becoming available to treat monogenic (caused by mutations in a single gene) diseases such as sickle cell anemia, polygenic diseases (caused by multiple genes) such as cardiovascular disease, and infectious diseases such as malaria. We estimate between 1 and 3 percent of the total global burden of disease could be reduced in the next ten to 20 years from these applications—roughly the equivalent of eliminating the global disease burden of lung cancer, breast cancer, and prostate cancer combined. Over time, if the full potential is captured, 45 percent of the global disease burden could be addressed using science that is conceivable today. ...
An estimated 700,000 deaths globally every year are the result of vector-borne infectious diseases. Until recently, controlling these infectious diseases by altering the genomes of the entire population of the vectors was considered difficult because the vectors reproduce in the wild and lose any genetic alteration within a few generations. However, with the advent of CRISPR, gene drives with close to 100 percent probability of transmission are within reach. This would offer a permanent solution to preventing most vector-borne diseases, including malaria, dengue fever, schistosomiasis, and Lyme disease.
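The transmission arithmetic is what makes gene drives so powerful. Here is a toy calculation (the setup and parameters are my own invention, not from the report) contrasting ordinary inheritance, where a carrier passes an introduced gene to about half its offspring, with the near-certain transmission of a gene drive:

```python
# Toy model (invented setup and parameters) contrasting ordinary
# Mendelian transmission with a CRISPR gene drive. A carrier normally
# passes an introduced gene to about 50% of offspring; a gene drive
# pushes transmission close to 100%.

def next_generation(carrier_frac, transmission, fitness=0.95):
    # Probability an offspring receives the gene from one randomly
    # chosen parent: P(parent is a carrier) * P(parent transmits it).
    p = carrier_frac * transmission
    born_carrying = 1 - (1 - p) ** 2  # gene arrives via at least one parent
    # Carriers pay a small survival cost, which is why a gene with
    # ordinary inheritance tends to fade out of a wild population.
    return (born_carrying * fitness) / (born_carrying * fitness + (1 - born_carrying))

for label, transmission in (("Mendelian, ~50%", 0.5), ("gene drive, ~99%", 0.99)):
    frac = 0.01  # release carriers into 1% of the population
    for _ in range(10):
        frac = next_generation(frac, transmission)
    print(f"{label}: after 10 generations, {frac:.1%} of the population carries the gene")
```

In this toy setup, the ordinary gene dwindles below its starting 1 percent, while the gene drive sweeps through essentially the entire population within ten generations.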

The potential gains for agriculture as the global population heads toward 10 billion and higher seem pretty important, too.

Applications such as low-cost, high-throughput microarrays have vastly increased the amount of plant and animal sequencing data, enabling lower-cost artificial selection of desirable traits based on genetic markers in both plants and animals. This is known as marker-assisted breeding and is many times quicker than traditional selective breeding methods. In addition, in the 1990s, genetic engineering emerged commercially to improve the traits of plants (such as yields and input productivity) beyond traditional breeding. Historically, the first wave of genetically engineered crops has been referred to as genetically modified organisms (GMOs); these are organisms with foreign (transgenic) genetic material introduced. Now, recent advances in genetic engineering (such as the emergence of CRISPR) have enabled highly specific cisgenic changes (using genes from sexually compatible plants) and intragenic changes (altering gene combinations and regulatory sequences belonging to the recipient plant). Other innovations in this domain include using the microbiome of plants, soil, animals, and water to improve the quality and productivity of agricultural production; and the development of alternative proteins, including lab-grown meat, which could take pressure off the environment from traditional livestock and seafood.
More? Direct-to-consumer genetic testing is already a reality, but it will start to be combined with other goods and services based on your personal genetic profile: what vitamins and probiotics to take, meal services, cosmetics, teeth whitening, health monitoring, and more.

Pushing back against rising carbon emissions?

Genetically engineered plants can potentially store more CO2 for longer periods than their natural counterparts. Plants normally take in CO2 from the atmosphere and store carbon in their roots. The Harnessing Plant Initiative at the Salk Institute is using gene editing to create plants with deeper and more extensive root systems that can store more carbon than typical plants. These roots are also engineered to produce more suberin or cork, a naturally occurring carbon-rich substance found in roots that absorbs carbon, resists decomposition (which releases carbon back into the atmosphere), may enrich soil, and helps plants resist stress. When these plants die, they release less carbon back into the atmosphere than conventional plants. ...
Algae, present throughout the biosphere but particularly in marine and freshwater environments, are among the most efficient organisms for carbon sequestration and photosynthesis; they are generally considered photosynthetically more efficient than terrestrial plants. Potential uses of microalgal biomass after sequestration could include biodiesel production, fodder for livestock, and production of colorants and vitamins. Using microalgae to sequester carbon has a number of advantages. They do not require arable land and are capable of surviving well in places that other crop plants cannot inhabit, such as saline-alkaline water, land, and wastewater. Because microalgae are tiny, they can be placed virtually anywhere, including cities. They also grow rapidly. Most important, their CO2 fixation efficiency has been estimated at ten to 50 times higher than that of terrestrial plants.
Using biotech to remediate earlier environmental damage or aid recycling?
One example is genetically engineered microbes that can be used to break down waste and toxins, and could, for instance, be used to reclaim mines. Some headway is being made in using microbes to recycle textiles. Processing cotton, for instance, is highly resource-intensive, and dwindling resources are constraining the production of petroleum-based fibers such as acrylic, polyester, nylon, and spandex. There is a great deal of waste, with worn-out and damaged clothes often thrown away rather than repaired. Less than 1 percent of the material used to produce clothing is recycled into new clothing, representing a loss of more than $100 billion a year. Los Angeles–based Ambercycle has genetically engineered microbes to digest polymers from old textiles and convert them into polymers that can be spun into yarns. Engineered microbes can also assist in the treatment of wastewater. In the United States, drinking water and wastewater systems account for between 3 and 4 percent of energy use and emit more than 45 million tons of GHG a year. Microbes—also known as microbial fuel cells—can convert sewage into clean water as well as generate the electricity that powers the process.
What about longer-run possibilities, still very much under research, that might bear fruit out beyond 2050?
  • "Biobatteries are essentially fuel cells that use enzymes to produce electricity from sugar. Interest is growing in their ability to convert easily storable fuel found in everyday sugar into electricity and the potential energy density this would provide. At 596 ampere hours per kilogram, the density of sugar would be ten times that of current lithium-ion batteries."
  • "Biocomputers that employ biology to mimic silicon, including the use of DNA to store data, are being researched. DNA is about one million times denser than hard-disk storage; technically, one kilogram of DNA could store the entirety of the world’s data (as of 2016)."
  • Of course, if people are going to live in space or on other planets, biotech will be of central importance. 

If your ideas about the technologies of the future begin and end with faster computing power, you are not dreaming big enough.

Tuesday, May 19, 2020

A Wake-Up Call about Infections in Long-Term Care Facilities

Those who live in long-term care facilities are by definition more likely to be older and facing multiple health risks. Thus, it's not unexpected that a high proportion of those dying from the coronavirus live in long-term care facilities. But the problem of infections and deaths in long-term care facilities predates the coronavirus pandemic, and will likely outlast it, too. Here's some text from the Centers for Disease Control website:
Nursing homes, skilled nursing facilities, and assisted living facilities, (collectively known as long-term care facilities, LTCFs) provide a variety of services, both medical and personal care, to people who are unable to manage independently in the community. Over 4 million Americans are admitted to or reside in nursing homes and skilled nursing facilities each year and nearly one million persons reside in assisted living facilities. Data about infections in LTCFs are limited, but it has been estimated in the medical literature that:
  • 1 to 3 million serious infections occur every year in these facilities.
  • Infections include urinary tract infection, diarrheal diseases, antibiotic-resistant staph infections and many others.
  • Infections are a major cause of hospitalization and death; as many as 380,000 people die of the infections in LTCFs every year.
If you're a number-curious person like me, you immediately think, "Where does that estimate of 380,000 deaths come from?" A bit of searching unearths that the 380,000 is from the National Action Plan to Prevent Health Care-Associated Infections, a title which has the nice ring of a program that is already well underway. But then you look at Phase Three: Long-Term Care Facilities, and it takes you to a report called "Chapter 8: Long-Term Care Facilities," which is dated April 2013. The 2013 report reads:
More recent estimates of the rates of HAIs [health-care associated infections] occurring in NH/SNF [nursing home/skilled nursing facility] residents range widely from 1.4 to 5.2 infections per 1,000 resident-care days [2,3]. Extrapolations of these rates to the approximately 1.5 million U.S. adults living in NHs/SNFs suggest a range from 765,000 to 2.8 million infections occurring in U.S. NHs/SNFs every year [4]. Given the rising number of individuals receiving more complex medical care in NHs/SNFs, these numbers might underestimate the true magnitude of HAIs in this setting. Additionally, morbidity and mortality due to HAIs in LTCFs [long-term care facilities] are substantial. Infections are among the most frequent causes of transfer from LTCFs to acute care hospitals and 30-day hospital readmissions [5,6]. Data from older studies conservatively estimate that infections in the NH/SNF population could account for more than 150,000 hospitalizations each year and a resultant $673 million in additional health care costs [5]. Infections also have been associated with increased mortality in this population [4,7,8]. Extrapolation based on estimates from older publications suggests that infections could result in as many as 380,000 deaths among NH/SNF residents every year [5].
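As a quick check, the extrapolation in that passage is easy to reproduce from the numbers it gives--an infection rate per 1,000 resident-care days, applied to roughly 1.5 million residents across 365 days of the year:

```python
# Back-of-the-envelope check of the report's extrapolation:
# infections per 1,000 resident-care days, scaled up to roughly
# 1.5 million residents over a full year.

residents = 1_500_000
days_per_year = 365

for rate_per_1000_days in (1.4, 5.2):
    infections = rate_per_1000_days / 1000 * residents * days_per_year
    print(f"{rate_per_1000_days} per 1,000 resident-days "
          f"-> about {infections:,.0f} infections per year")

# Output: roughly 766,500 and 2,847,000 -- matching the report's
# quoted range of 765,000 to 2.8 million infections per year.
```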
Because I am on a hunt for the source of the estimate of 380,000 deaths, I take a look at note 5, which refers to a 1991 study: Teresi JA, Holmes D, Bloom HG, Monaco C & Rosen S. Factors differentiating hospital transfers from long-term care facilities with high and low transfer rates. Gerontologist. Dec 1991; 31(6):795-806.  

So, to summarize the bidding: here in 2020, in the midst of a pandemic where the infections are causing particular harm in nursing homes, the CDC website is quoting estimates of deaths from a study published in 2013, and the methodology for estimating those deaths relies on an extrapolation from a study published three decades earlier, in 1991. 

I'm sure there are many good people making substantial efforts to reduce infections in long-term care facilities, often at meaningful risk to their own health. But ultimately, the degree of success in reducing infections isn't measured by good intentions or efforts: it's measured by actual counts of infections and deaths. And when the CDC describes estimates of "serious infections" that vary by a factor of three, and estimates of deaths based on extrapolations from a 1991 study, it seems pretty clear that the statistics about infections in long-term care facilities are not well-measured or consistent over time.

This problem of infections in long-term care facilities will matter well beyond the pandemic. Populations are aging everywhere: in the United States, 3.8% of the population is currently over 80, but by 2050 it will likely rise to 8.2%. The demand for long-term care is likely to rise accordingly,  which in turn will raise difficult questions about where the workers for such facilities and the financial support will come from. Here, I would emphasize that it will take redoubled efforts if the future rise in number of people in long-term care is not to be matched by a similar rise in the number of people subject to infections, including when (not if) future pandemics arrive.

Monday, May 18, 2020

The Bad News about the Big Jump in Average Hourly Wages

Average hourly wages in April 2020 were 7.9% higher than a year earlier, a very high jump. And as a moment's reflection will suggest, this is actually part of the terrible news for the US labor market. Of course, it's not true that the average hourly worker is getting a raise of 7.9%. Instead, the issue is that only workers who have jobs are included in the average. So the big jump in average hourly wages is actually telling us that a much higher proportion of workers with below-average wages have lost their jobs, so that the average wage of the hourly workers who still have jobs has risen.

Here are a couple of illustrative figures, taken from the always-useful US Economy in a Snapshot published monthly by the Federal Reserve Bank of New York. The blue line in this figure shows the rise in average hourly wages over the previous 12 months. The red line shows a measure of inflation, the Employment Cost Index. There has been lots of concern in the last few years about why wages were not rising more quickly, and as you can see, the increase in average earnings had pulled ahead of inflation rates in 2019. 

However, the US economy is of course now experiencing a sharp fall in the number of hours worked. As the NY Fed notes, nonfarm payrolls fell by 20 million jobs in April, the largest fall in the history of this data series. Total weekly hours worked in the private sector fell by 14.9% in April compared with 12 months earlier. No matter how many checks and loans are handed out, the US economy will not have recovered until hours worked returns to normal levels.

You will sometimes hear statistics people talk about a "composition effect," which just means that if you are comparing a group over time, you need to beware of the possibility that the composition of the group is changing. In this case, if you compare the average hourly earnings of the group that is working for hourly wages over time, you need to beware that the composition of that group has systematically shifted in some way. Here, the bottom of the labor market has fallen out for hourly workers who had been receiving below-average hourly wages.
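Here is a stylized numerical illustration (all wages and headcounts invented): when the lowest-paid workers drop out of the sample, the measured average wage jumps even though no individual worker gets a raise.

```python
# Stylized illustration of a composition effect (invented numbers).
# No individual worker gets a raise, yet the average wage jumps.

before = [15.0] * 8 + [30.0] * 2   # 8 low-wage and 2 high-wage workers
after  = [15.0] * 4 + [30.0] * 2   # half the low-wage workers lose their jobs

avg_before = sum(before) / len(before)   # (8*15 + 2*30) / 10 = 18.00
avg_after  = sum(after) / len(after)     # (4*15 + 2*30) / 6  = 20.00

print(f"average wage before: ${avg_before:.2f}")
print(f"average wage after:  ${avg_after:.2f}")
print(f"apparent 'raise':    {100 * (avg_after / avg_before - 1):.1f}%")
```

In this toy example, the measured average hourly wage rises about 11 percent purely because the mix of who is still employed has changed.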

There's nothing nefarious about these statistics. The average hourly wage is a standard statistic published every month. The government statisticians just publish the numbers, as they should. It's up to citizens to understand what they mean.

Friday, May 15, 2020

Interview with Emi Nakamura: Price Dynamics, Monetary and Fiscal Policy, and COVID-19 Adjustments

Douglas Clement at the Minneapolis Federal Reserve offers one of his characteristically excellent interviews, this one with Emi Nakamura, titled "On price dynamics, monetary policy, and this 'scary moment in history'" (May 6, 2020, Federal Reserve Bank of Minneapolis). Here are a few of Nakamura's comments that caught my eye, but there's much more in the full interview.

On the current macroeconomic situation
It’s a scary moment in history. I thought the Great Recession that started in 2007 was going to be the big macroeconomic event of my lifetime, but here we are again, little more than a decade later. ... More than other recessions, this particular episode feels like it fits into the classic macroeconomic framework of dividing things into “shocks” and “propagation”—mainly because in this case, it’s blindingly clear what the shock is and that it is completely unrelated to other forces in the economy. In the financial crisis, there was much more of a question as to whether things were building up in the previous decade—such as debt and a housing bubble—that eventually came to a head in the crisis. But here that’s clearly not the case.
Price rigidity at times of macroeconomic adjustment
You might think that it’s very easy to go out there and figure out how much rigidity there is in prices. But the reality was that at least until 20 years ago, it was pretty hard to get broad-based price data. In principle, you could go into any store and see what the prices were, but the data just weren’t available to researchers tabulated in a systematic way. ...
Once macroeconomists started looking at data for this broad cross section of goods, it was obvious that pricing behavior was a lot more complicated in the real world than had been assumed. If you look at, say, soft drink prices, they change all the time. But the question macroeconomists want to answer is more nuanced. We know that Coke and Pepsi go on sale a lot. But is that really a response to macroeconomic phenomena, or is that something that is, in some sense, on autopilot or preprogrammed? Another question is: When you see a price change, is it a response, in some sense, to macroeconomic conditions? We found that, often, the price is simply going back to exactly the same price as before the sale. That suggests that the responsiveness to macroeconomic conditions associated with these sales was fairly limited. ... 
One of the things that’s been very striking to me in the recent period of the COVID-19 crisis is that even with incredible runs on grocery products, when I order my online groceries, there are still things on sale. Even with a shock as big as the COVID shock, my guess is that these things take time to adjust. ... The COVID-19 crisis can be viewed as a prime example of the kind of negative productivity shock that neoclassical economists have traditionally focused on. But an economy with price rigidity responds much less efficiently to that kind of an adverse shock than if prices and wages were continuously adjusting in an optimal way.

What does the market learn from Fed announcements of changes in monetary policy?
The basic challenge in estimating the effects of monetary policy is that most monetary policy announcements happen for a reason. For example, the Fed has just lowered interest rates by a historic amount. Obviously, this was not a random event. It happened because of this massively negative economic news. When you’re trying to estimate the consequences of a monetary policy shock, the big challenge is that you don’t really have randomized experiments, so establishing causality is difficult.
Looking at interest rate movements at the exact time of monetary policy announcements is a way of estimating the pure effect of the monetary policy action. ... Intuitively, we’re trying to get as close as possible to a randomized experiment. Before the monetary policy announcement, people already know if, say, negative news has come out about the economy. The only new thing that they’re learning in these 30 minutes of the [time window around the monetary policy] announcement is how the Fed actually chooses to respond. Perhaps the Fed interprets the data a little bit more optimistically or pessimistically than the private sector. Perhaps their outlook is a little more hawkish on inflation. Those are the things that market participants are learning about at the time of the announcement. The idea is to isolate the effects of the monetary policy announcement from the effects of all the macroeconomic news that preceded it. Of course, you have to have very high-frequency data to do this, and most of this comes from financial markets. ...
The results completely surprised us. The conventional view of monetary policy is that if the Fed unexpectedly lowers interest rates, this will increase expected inflation. But we found that this response was extremely muted, particularly in the short run. The financial markets seemed to believe in a hyper-Keynesian view of the economy. Even in response to a significant expansionary monetary shock, there was very little response priced into bond markets of a change in expected inflation. ... 
But, then, we were presenting the paper in England, and I recall that Marco Bassetto asked us to run one more regression looking at how forecasts by professional forecasters of GDP growth responded to monetary shocks. The conventional view would be that an expansionary monetary policy shock would yield forecasts of higher growth. When we ran the regression, the results actually went in the opposite direction from what we were expecting! An expansionary monetary shock was actually associated with a decrease in growth expectations, not the reverse! ... When Jay Powell or Janet Yellen or Ben Bernanke says, for example, “The economy is really in a crisis. We think we need to lower interest rates” ... perhaps the private sector thinks they can learn something about the fundamentals of the economy from the Fed’s announcements. This can explain why a big, unexpected reduction in interest rates could actually have a negative, as opposed to a positive, effect on those expectations.
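To make the identification idea concrete, here is a minimal sketch in code, with entirely invented quotes and timestamps: the monetary policy "shock" is read off as the movement in a market-implied policy rate across a tight window bracketing the announcement, so that macro news released earlier in the day is already priced in before the window opens.

```python
# A minimal sketch of high-frequency identification, using invented
# quotes. The monetary policy "shock" is the change in a market-implied
# policy rate over a 30-minute window bracketing the announcement.

from datetime import datetime

# Hypothetical quotes (timestamp, implied policy rate in percent)
# around a hypothetical 2:00 pm announcement.
quotes = [
    (datetime(2020, 5, 6, 13, 30), 1.61),
    (datetime(2020, 5, 6, 13, 50), 1.60),  # window opens: 10 minutes before
    (datetime(2020, 5, 6, 14, 5), 1.43),   # market reacts to the announcement
    (datetime(2020, 5, 6, 14, 20), 1.40),  # window closes: 20 minutes after
]

window_open = datetime(2020, 5, 6, 13, 50)
window_close = datetime(2020, 5, 6, 14, 20)

# Last quote at or before the window opens, and at or before it closes.
rate_pre = max((t, r) for t, r in quotes if t <= window_open)[1]
rate_post = max((t, r) for t, r in quotes if t <= window_close)[1]

shock = rate_post - rate_pre
print(f"monetary policy shock: {shock:+.2f} percentage points")

# Collected across many announcements, these shocks become the
# explanatory variable in regressions of changes in expected inflation
# or growth forecasts on the policy surprise.
```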

The Plucking Model of Unemployment

A feature emphasized by Milton Friedman is that the unemployment rate doesn’t really look like a series that fluctuates symmetrically around an equilibrium “natural rate” of unemployment. It looks more like the “natural rate” is a lower bound on unemployment and that unemployment periodically gets “plucked” upward from this level by adverse shocks. Certainly, the current recession feels like an example of this phenomenon.
Another thing we emphasize is that if you look at the unemployment series, it appears incredibly smooth and persistent. When unemployment starts to rise, on average, it takes a long time to get back to where it was before. This is something that isn’t well explained by the current generation of macroeconomic models of unemployment, but it’s clearly front and center in terms of many economists’ thinking about the policy responses [to COVID-19]. A lot of the policy discussions have to do with trying to preserve links between workers and firms, and my sense is the goal here is to avoid the kind of persistent changes in unemployment that we’ve seen in other recessions.
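For intuition, here is a toy simulation of that asymmetry (all parameters invented): shocks can only push unemployment up from its floor, and the decay back toward the floor is slow.

```python
# A toy simulation of the "plucking" view of unemployment (invented
# parameters): unemployment sits near a floor, gets plucked upward by
# occasional adverse shocks, and then decays back only slowly.

import random

random.seed(0)

u_floor = 4.0   # the "natural rate" acts as a lower bound (percent)
rho = 0.97      # persistence: recovery after a pluck takes a long time
u = u_floor

for quarter in range(120):
    # Rare adverse shock that only pushes unemployment up, never down.
    shock = random.uniform(3.0, 8.0) if random.random() < 0.03 else 0.0
    u = u_floor + rho * (u - u_floor) + shock
    if quarter % 12 == 0:
        print(f"quarter {quarter:3d}: unemployment {u:5.2f}%")
```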
For more on Nakamura and her work, the Journal of Economic Perspectives has a couple of articles to offer.

What Do We Know about Progress Toward a COVID-19 Vaccine?

There seem to me to be a few salient facts about the search for a COVID-19 vaccine.

1) According to a May 11 count by the World Health Organization, there are 8 vaccine candidates now in clinical trials, and an additional 102 vaccines in pre-clinical evaluation. Seems like an encouragingly high number.

2) Influenza viruses are different from coronaviruses. We do have vaccines for many influenza viruses--that's the "flu shot" many of us get each fall. But there has never been a vaccine developed for a coronavirus. The two previous outbreaks of a coronavirus--SARS (severe acute respiratory syndrome) in 2002-3 and MERS (Middle East respiratory syndrome) in 2012--both saw substantial efforts to develop such a vaccine, but neither one succeeded. Eriko Padron-Regalado discusses "Vaccines for SARS-CoV-2: Lessons from Other Coronavirus Strains" in the April 23 issue of Infectious Diseases and Therapy. 

3) It's not 100% clear to me why the previous efforts to develop a coronavirus vaccine for SARS or MERS failed. Some of the discussion seems to suggest that there wasn't a strong commercial reason to develop such a vaccine. The SARS outbreak back in 2002-3 died out. While some cases of MERS still happen, they are relatively few and seem limited to Saudi Arabia and nearby areas in the Middle East. Thus, one possible answer for the lack of a previous coronavirus vaccine is a lack of effort--an answer which would not reflect well on those who provide funding and set priorities for biomedical research.

4) The other possible answer is that it may be hard to develop that first coronavirus vaccine, which is why dozens of previous efforts to do so with SARS and MERS failed. Padron-Regalado put it this way (boldface is mine): "In vaccine development, it is ideal that a vaccine provides long-term protection. Whether long-term protection can be achieved by means of vaccination or exposure to coronaviruses is under debate, and more information is needed in this regard." A recent news story talked to researchers who tried to find a SARS vaccine, and summarizes their sentiments with comments like: "But there’s no guarantee, experts say, that a fully effective COVID-19 vaccine is possible. ... 'Some viruses are very easy to make a vaccine for, and some are very complicated,' says Adolfo García-Sastre, director of the Global Health and Emerging Pathogens Institute at the Icahn School of Medicine at Mount Sinai. 'It depends on the specific characteristics of how the virus infects.' Unfortunately, it seems that COVID-19 is on the difficult end of the scale. ... At this point, it’s not a given that even an imperfect vaccine is a slam dunk."

5) At best, vaccines take time to develop, especially if you are thinking about giving them to a very wide population group with different ages, genetics, and pre-existing conditions.

So by all means, research on a vaccine for the novel coronavirus should proceed full speed ahead. In addition, even if we reach a point where the disease itself seems to be fading, that research should keep proceeding with the same urgency. That research may offer protection against a future wave of the virus in the next year or two. Or it may offer scientific insights that will help with a vaccine targeted at a future outbreak of a different coronavirus. 

But we can't reasonably make current public policy about stay-at-home, business shutdowns, social distancing, or other public policy steps based on a hope or expectation that a coronavirus vaccine will be ready anytime soon.