Friday, December 4, 2020

Lessons about Copyright from the History of Italian Operas

When studying the effects of copyright, one would ideally like to compare settings with and without it. In a modern context, one can look for the effects of various changes or extensions in copyright, but it's harder to make comparisons with what creative markets would be like if there were no copyright at all. However, Michela Giorcelli and Petra Moser offer a thought-provoking historical example in "Copyrights and Creativity: Evidence from Italian Opera in the Napoleonic Age" (Journal of Political Economy, November 2020, 128:11, pp. 4163-4210).  

Here's a quick overview of the historical context: 
In 1796, Napoléon began his Italian campaign by invading the Kingdom of Sardinia at Ceva. Although he was unable to subdue Sardinia at the time, two other states, Lombardy and Venetia, were annexed and formed the Cisalpine Republic, which adopted French laws. In 1801, the Republic adopted France’s copyright laws of 1793, granting composers exclusive rights for the duration of their lives, plus 10 years for their heirs (Legge 19 Fiorile anno IX repubblicano, Art. 1–2; Repubblica Cisalpina 1801). In 1804, France replaced its system of feudal laws and aristocratic privilege with the code civil, a codified system of civic laws. The code left copyrights intact where they already existed but did not introduce them in states without copyright laws. As a result, only Lombardy and Venetia offered copyrights until the 1820s (Foà 2001b, 64), while all other Italian states that came under French rule after 1804 had no copyrights, even though they shared the same exposure to French rule, as well as the same language and culture. The empirical analysis examines rich new data on 2,598 operas that composers created across eight Italian states between 1770 and 1900.
In other words, this is a setting where a certain type of performing art is extremely popular, where we have good historical records on performances both at the time and since then, and where there was a clear-cut divide between nearby regions, some with copyright and some without. What do the authors observe? 

Giorcelli and Moser find that in the years before the copyright law takes effect in Lombardy and Venetia, the Italian states look pretty similar both in terms of the supply of new operas and in the demand for operas (as measured by factors like theater seats, taking population and income into account). With copyright, the average number of new operas in Lombardy and Venetia rose from 1.4 per year to 3.6 per year--a rise of 157%. 
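For readers who want to check that percentage, here is a minimal sketch of the arithmetic (the 1.4 and 3.6 figures come from the paper; everything else is just the standard percentage-change formula):

```python
# Back-of-envelope check of the reported rise in new operas per year
# in Lombardy and Venetia (figures as quoted in the post).
before = 1.4  # average new operas per year without copyright
after = 3.6   # average new operas per year with copyright

pct_rise = (after - before) / before * 100
print(f"Rise in new operas per year: {pct_rise:.0f}%")  # roughly 157%
```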

One effect of copyright that was quickly noticed by composers is that instead of being paid just once for creating an opera, they could receive a stream of payments over time if the opera was popular enough to be performed more widely and repeatedly. When the authors look at measures of the quality of operas, like what has been performed at the Metropolitan Opera in New York in the 20th and 21st centuries or what recordings of operas are sold on Amazon even today, they find that the increased quantity of operas was accompanied by higher quality as well. 

Moreover, the rise in composition of operas was not primarily due to opera composers moving to the areas with copyright, although some of this did occur: instead, the same composers were producing more and better operas.  As other Italian states adopted copyright from 1826 to 1840, they also experienced a rise in quantity and quality of operas produced. 

One last finding is that "there were no benefits from copyright extensions beyond the life of the original creator." It's important to remember that the broad social purpose of copyright and patent law is not to create "intellectual property" for the creator. Instead, the broad social goal as stated in the so-called "Patents and Copyrights Clause" of the US Constitution is "[t]o promote the Progress of Science and useful Arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries." In other words, giving rights to the author for a limited time is the tool, but the actual social goal is progress in science and art. The issue that arises here in both science and art is that new creations are often built on older ones. If an earlier creator is given too much power, or for too long a time, later progress of science and useful arts can be hindered rather than helped. In our modern economy, corporate ownership of intellectual property means that there will always be political pressure to extend and strengthen copyright and patent law to cover creations that are still bringing in royalties. It's important to remember that while such expansions of intellectual property undoubtedly benefit those who hold the copyrights and patents, they may hinder the creation of new innovations.


Thursday, December 3, 2020

Why Some of the Shift to Telecommuting Will Stick

It seems to me that the tone of the discussion surrounding the pandemic-induced shift to telecommuting has been changing. Last spring and early summer, a lot of the discussion was about how well it was working, how much time it was saving, how much employees preferred it, and so on. But since then, the discussions have tended to express more concerns. In the words of a recent Wall Street Journal article: "Companies Start to Think Remote Work Isn’t So Great After All: Projects take longer. Collaboration is harder. And training new workers is a struggle. ‘This is not going to be sustainable.’" Bloomberg reported on the results from a study of teleworkers by researchers at the Harvard Business School:  "The Pandemic Workday Is 48 Minutes Longer and Has More Meetings. A study of 3.1 million workers around the world found an uptick in emailing, too."

What factors will determine whether the shift to telecommuting sticks? Jose Maria Barrero, Nicholas Bloom, and Steven J. Davis present some results from a series of nationally representative surveys of US workers done from May to October 2020, in "Why Working From Home Will Stick" (December 2020, University of Chicago Becker Friedman Institute Working Paper 2020-174). The authors argue that teleworking will remain substantially higher after the pandemic: they estimate that about 5% of work-days were supplied from home before the pandemic, and that the share will be something like 22% even after the pandemic is done. Based on the survey data, they suggest five reasons why some of the shift to working from home will persist:

First, reduced stigma. A large majority of respondents report perceptions about working from home have improved since the start of the pandemic among people they know. With fewer people viewing working from home as “shirking from home,” workers and their employers will be more willing to engage in it.

Second, ... COVID-19 compelled firms to experiment with a new production mode – working from home – and led them to acquire information that leads some of them to stick with the new mode after the forcing event ends.

Third, our survey reveals that the average worker has invested over 13 hours and about $660 dollars in equipment and infrastructure at home to facilitate working from home. We estimate these investments amount to 1.2 percent of GDP. In addition, firms have made sizable investments in back-end information technologies and equipment to support working from home. Thus, after the pandemic, workers and firms will be positioned to work from home at lower marginal costs due to recent investments in tangible and intangible capital.

Fourth, about 70 percent of our survey respondents express a reluctance to return to some pre-pandemic activities even when a vaccine for COVID-19 becomes widely available, for example riding subways and crowded elevators, or dining indoors at restaurants. ...

Fifth, ... the massive expansion in working from home has boosted the market for working-from-home equipment, software and technologies, spurring a burst of research that supports working from home, in particular, and remote interactivity, more broadly.
Here are a few reactions: 

1) More work-days happening from home would be bad news for dense urban areas. The authors write: "We estimate that the post-pandemic shift to working from home (relative to the pre-pandemic situation) will lower post-COVID worker expenditures on meals, entertainment, and shopping in central business districts by 5 to 10 percent of taxable sales." 

2) The workers who are well-positioned to benefit from working from home often tend to have higher incomes and workplace status. Workers in retail or manufacturing or many other jobs don't have a work-from-home option. For new workers getting hired, on-the-job learning and professional connections are almost certainly harder to create when you're one more face in a checkerboard of continual online meetings. In that sense, the additional perk of sometimes working from home is likely to create a separation between a more favored class of workers that has access to this option and other workers who do not. 

3) There's a conflict in what workers and employers are saying about productivity during the pandemic. In this survey data, workers typically report being more productive from home. But employers often report that productivity is lower when people are working at home (for example, see "What Jobs are Being Done at Home During the Covid-19 Crisis? Evidence from Firm-Level Surveys," by Alexander W. Bartik, Zoe B. Cullen, Edward L. Glaeser, Michael Luca & Christopher T. Stanton, NBER Working Paper #27422, June 2020). One possible reason for this gap is that many of those working from home are happy to be doing it, and they are overestimating their productivity. Another possible reason is that workers tend to focus on their productivity in doing specific day-to-day tasks, but employers are also looking at activities like the benefits of training or brainstorming that may be facilitated by more informal face-to-face interactions. 

4) Finally, there's a lot of research on the "economics of density," which tends to find that workers who are grouped together have higher productivity. After all, there's a reason why cities and downtown areas with concentrated employment came into existence in the first place, and why they have been the engines of economic growth over time. The after-effects of the pandemic will test this connection. If those who work closely in a physical sense continue to have higher pay and productivity, then those who work from home are likely to gain flexibility but suffer some career slowdowns, because they aren't where the action is. Perhaps employers and firms have now learned how to gain the benefits of physical closeness via web-based conference calls. Or maybe not. 

For an overview of these arguments about the economics of density, the Summer 2020 issue of the Journal of Economic Perspectives has a useful Symposium on the Productivity Advantages of Cities. 
For a previous post on this topic from last spring, see "Will Telecommuting Stick?" (May 26, 2020).

COVID Comparisons Across Countries and US States: A Graphing Tool from the FT

 The Financial Times has a useful graphing tool that allows you to compare rates of COVID-19 new cases or deaths, either across countries or across US states. Here are a couple of charts with international comparisons that I made yesterday. Feel free to make your own, and to contemplate them.

This graph shows rates of COVID-19 deaths per 100,000 population, based on a seven-day rolling average to smooth the line. On the far right of the diagram, the blue line at the top is the European Union. The purple line just below that is the United Kingdom. The green line below that is Sweden. The pink line is the United States.

As with most statistics, one can view the glass as half-full or half-empty. The pink line showing US COVID-19 death rates has not so far spiked as high as the EU rate. But if one looks back over the summer, the US death rate line was substantially above the EU line. 

Perhaps the higher US death rate over the summer will mean a lower death rate this fall? Maybe. But the number of new COVID-19 cases gives reason for concern. This graph shows rates of new COVID-19 cases per 100,000 population, again using a seven-day rolling average to smooth the line. The blue EU line for new cases started rising in August and September, and then spiked to well above the US level in September and October, before peaking in early November. The pink US line for new cases started spiking in October, and at least for the moment it seems to have peaked a little later and higher than the EU level--which may presage a higher US death rate in the weeks to come. The green line showing Sweden's COVID-19 cases was at EU levels last summer, but is now peaking. The United Kingdom seems to be doing a little better than the EU as a whole. The blue line at the bottom showing Canada has done the best of the countries shown, but has also seen a substantial recent rise. 
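For anyone who wants to reproduce this kind of chart from raw data, the two transformations described above--scaling counts per 100,000 population and smoothing with a seven-day rolling average--are straightforward. Here is a minimal sketch in Python using pandas; the daily counts and the population figure are made-up placeholders, not actual data from the FT tool:

```python
import pandas as pd

# Hypothetical daily new-case counts for one country (placeholder numbers).
daily_new_cases = pd.Series([1200, 1500, 1100, 1800, 2100, 1900, 1700, 2200, 2400, 2300])
population = 5_800_000  # hypothetical population

# Scale to cases per 100,000 population, then smooth with a 7-day rolling average.
per_100k = daily_new_cases / population * 100_000
smoothed = per_100k.rolling(window=7).mean()

print(smoothed.round(1))
```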

 

There's a tendency to read these graphs as if they are a judgement on public health authorities, or on the willingness of the public to follow public health advice. This view isn't wrong, but it's also incomplete. The specifics of the virus, and how it interacts with the season and with local human environments, get a say of their own, too. 

Wednesday, December 2, 2020

Time to Worry Less About Federal Budget Deficits?

Jason Furman and Lawrence Summers are prominent Democratic-leaning academic economists, but not among those whose names have been put forward for prominent economic policy positions in a Biden administration--which leaves them free to be a little iconoclastic. Yesterday, they presented a "Discussion Draft" of "A Reconsideration of Fiscal Policy in the Era of Low Interest Rates" in an online event hosted by the Hutchins Center on Fiscal & Monetary Policy and the Peterson Institute for International Economics. Video and slides from their presentation, together with the discussants' comments, are available here. Furman and Summers have been ruminating along these lines for some time: for another example, see their essay "Who’s Afraid of Budget Deficits? How Washington Should End Its Debt Obsession" in the March/April 2019 issue of Foreign Affairs.  

Furman and Summers begin by noting that not only have interest rates been very low for more than a decade, but that according to the forecasts embedded in financial market actions (like the willingness of investors to put their money in long-term bonds that pay low interest rates for decades into the future), interest rates seem likely to remain low for years or decades to come. Here, I'll list three main implications they draw for fiscal policy, and offer some thoughts about each one. 

Implication 1: Active Use of Fiscal Policy is Essential in Order to Maximize Employment and Maintain Financial Stability in the Current Low Interest Rate World

The basic idea here is that with interest rates already very low, the Federal Reserve is not going to be able to respond to recessions by cutting interest rates by, say, 5-6 percentage points to stimulate demand. Even if the Fed were to move its benchmark policy interest rate slightly into the negative range by a few tenths of a percent, as some other central banks around the world have done, making those rates negative by several percentage points seems like a policy with risks of its own for financial stability.

Perhaps the main policy challenge here is that fiscal policy has traditionally been somewhat slow to adjust: that is, the economy slows down, Congress starts holding hearings, the economy is still slow, Congress passes a bill, the economy is still slow, the bill begins to take effect, the economy is (maybe) still slow, and the full effects of the stimulus bill percolate through the economy. Is there a way to speed the process? 

History has taught that it's hard for the government to have a bunch of "shovel-ready" projects on hand, just ready and waiting to ramp up if the economy tips into recession. Thus, a lot of the more recent thinking involves considering spending bills that would be triggered--perhaps only in specific areas or regions--by an indicator like an ongoing rise for several months in the unemployment rate. 

Implication 2: Lower Interest Rates Necessitate New Measures of a Country’s Fiscal Situation

When it comes to debt, a key practical issue is not the size of the debt itself, but the size of the payments you need to make. When buying a house, for example, you worry about the size of the monthly payments in comparison to your income, not the total debt. Similar logic suggests that in a global economy with low interest rates, a government can take on a higher level of debt. Summers and Furman suggest that rather than focusing on the size of the government debt, the appropriate goal should be to look at federal debt service payments (specifically, they recommend "limiting real interest payments to comfortably below about 2 percent of GDP ideally measured in the economically meaningful sense of net interest less remittances from the Federal Reserve and interest on Federal financial assets").
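To make the proposed metric concrete, here is a minimal sketch of the calculation as I read their description; the dollar figures are made-up placeholders rather than actual budget numbers, and the inflation adjustment is my own rough way of expressing "real" interest payments:

```python
# Sketch of the debt-service metric described above: real net interest payments
# as a share of GDP, rather than the size of the debt stock itself.
# All figures are hypothetical placeholders, not actual budget data.
gross_interest = 350e9        # interest paid on federal debt
fed_remittances = 80e9        # remittances from the Federal Reserve
interest_on_assets = 20e9     # interest earned on federal financial assets
inflation_component = 100e9   # rough adjustment to express interest in real terms
gdp = 21_000e9

real_net_interest = gross_interest - fed_remittances - interest_on_assets - inflation_component
share_of_gdp = real_net_interest / gdp
print(f"Real net interest payments: {share_of_gdp:.2%} of GDP")  # compare to the ~2% benchmark
```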

The general direction of this argument seems clearly correct: that is, one should worry less about a given level of debt when interest rates are lower. As the authors emphasize, long-term economic forecasts come with a heavy dose of uncertainty. They emphasize that if debt payments start rising, policy steps can be taken then. 

But debt problems often don't evolve in a linear way, offering space to politicians for timely interventions before they go bad. As Rudiger Dornbusch used to say, in what I have dubbed the Hemingway Law of Motion: "The crisis takes a much longer time coming than you think, and then it happens much faster than you would have thought." The current system for marketing federal debt, for example, is showing cracks. Another concern the authors do not discuss in any detail is that US government borrowing relies on inflows of foreign capital, because of the low US savings rate. By contrast, government borrowing in Japan, say, can draw on Japan's high domestic savings rate. Thus, a recommendation for higher US borrowing is also a recommendation for higher US reliance on inflows of foreign capital from higher-saving countries, which will also imply generally rising debt from the US economy to foreign investors and generally higher US trade deficits (as the US consumes more domestically, financed by inflows of foreign capital). There would be an emerging pattern of global imbalances with risks of its own. 

Implication 3: The Scope and Need for Public Investment Has Greatly Expanded

Furman and Summers offer an intuitively useful example of potholes in roads. If the potholes remain unfixed, they will get worse in the future and thus impose steadily rising social costs on drivers of vehicles. They write: "Put another way, it is better to fill potholes today than to wait and fill them at a cost that grows faster than the interest rate, which is currently around zero in real terms." What are some other "potholes" that it might be better to fix sooner rather than later? 

The political economy danger here, of course, lies in offering politicians a blank check. With just a bit of rhetorical ingenuity, pretty much every government spending program can be re-conceptualized as an "investment." The authors write: 

The above points depend heavily on what the additional debt is used for. If it is used to fund effective public programs with high rates of return, like research, infrastructure, education and investments and support for children, it is very likely to have benefits far greater than the costs of any additional debt accumulation. Wasteful and poorly designed spending programs or tax cuts, however, are not justified by this logic.

Even in some of these categories, Furman and Summers offer some cautions. For example, when it comes to infrastructure improvements, an ongoing political challenge is to make sure the money is spent where it has the biggest payoff, not just spread around among Congressional districts in a way that ends up with beautiful and drastically underused rural highways or "bridges to nowhere" projects. Thus, it's important that users of roads and local governments spending local taxes have some skin in the game when it comes to local infrastructure improvements, so they aren't just spending what feels like free federal money. 

As a bottom line, Furman and Summers suggest that their arguments would justify "[a]dditional investments of about 1 percent of GDP," which would be roughly $200 billion per year. This of course seems like an invitation to think about how you would spend this money. While I've got nothing against fixing physical potholes, my own preferences here would instead focus on human capital and technology. 
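As a quick back-of-envelope check of that dollar figure (the assumption of roughly $21 trillion in US GDP is mine, not a number from the paper):

```python
# 1 percent of GDP in additional annual investment, assuming US GDP of roughly
# $21 trillion (my assumption, not a figure from the Furman-Summers paper).
gdp = 21e12
additional_investment = 0.01 * gdp
print(f"${additional_investment / 1e9:.0f} billion per year")  # roughly $210 billion
```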

For example, I'm not a big fan of universal pre-K programs: they cost a lot, and the recent evidence on such programs often shows short-term effects that fade over time (for example, here and here), although it still seems worth thinking about how to fund such programs for children from disadvantaged families. However, there does seem to me to be promising evidence on even earlier interventions: for example, the value of pre-natal care and nutrition, and interventions aimed at families with children under the age of 2. Indeed, some economists have gone so far as to argue that redistributing spending from pre-K to policies aimed at younger children could be a net gain. 

I would also spend a chunk of the money on a substantial rise in support for community colleges and apprenticeships. It seems to me that we are in a time when employers have a strong demand for workers with particular skills, but those same employers have become more hesitant to do the training themselves--perhaps because they fear that the most promising of these trained employees will leave for other jobs, or perhaps because they fear they have become less able to fire those who do not complete the training successfully. In either case, the ladders of opportunity for getting into good career-oriented jobs have become frayed for many young and young-ish adults, and programs that match employers with public-sector training in the actual skills those employers need seem like one way to reduce this problem. 

Finally, it's a long-standing lament of mine that the US economy underinvests in research and development, by which I would include not just basic research, but also the ability of communities to create self-sustaining centers where research and new companies and jobs combine in a virtuous circle. There's a strong case to be made that the US should phase in an increase in research and development spending of 50% or more, which can be done with a variety of tools including direct government support, tax incentives for industry, and encouragement for corporate labs. In addition, it would then be useful to have a process for spreading the effects of this technology across the US, rather than having it concentrated in a few cities. There are several fairly detailed proposals for how the federal government might set up a process in which medium-sized cities across the country that have university ties and a reasonable tech base in place could bid to become both research and economic centers for these new investments in technology.  

I'm probably more worried about the current trajectory of US borrowing than Furman and Summers are (for example, here and here). But it also seems true to me that, without any conscious decision, the role of federal spending has shifted quite dramatically: back in 1960, for example, 26% of federal spending was payments to individuals; in 2020, 70% of federal spending was payments to individuals. I like the idea of some federal programs focused on longer-term social gains, and this period of low interest rates seems like an opportunity to let this agenda have some air. 

Tuesday, December 1, 2020

Remember the Opioid Crisis?

The COVID-19 pandemic is deservedly the main public health story of our time. But spare a thought for the opioid crisis, which hasn't gone away, and which has led to the deaths of about 500,000 Americans in the last two decades. Johanna Catherine Maclean, Justine Mallatt, Christopher J. Ruhm, and Kosali Simon provide an update and overview in "Economic Studies of the Opioid Crisis" (November 2020, National Bureau of Economic Research Working Paper 28067). 

As they point out, the number of deaths from the opioid epidemic is just the starting point for looking at social costs: "Data from the National Survey of Drug Use and Health (NSDUH)--the official government source for substance use statistics in the U.S.--indicate that in 2018, 1.7 million Americans met diagnostic criteria for prescription opioid use disorder (OUD) and over 500,000 for heroin-related OUD (McCance-Katz, 2018). These numbers represent a lower bound on the true prevalence of OUD as individuals are likely to under-report this condition in survey settings and since the NSDUH excludes groups likely to have disproportionately high rates of OUD (e.g., institutionalized and homeless individuals)." Indeed, the combination of deaths and diseases is the main factor causing average life expectancy among non-Hispanic whites to reverse its pattern of increases over time, and instead to start declining around the year 2000.

The authors re-tell the basic story of the opioid crisis, as I have told it here before. It's commonly viewed as a three-stage event. The first stage, from the late 1990s up to about 2010, was an explosive rise in prescription opioids: for example, sales of prescription opioids quadrupled from 1999 to 2014, but the share of Americans reporting that they were in pain was not rising during this time. It's common to say that this rise was driven by aggressive marketing from the pharmaceutical industry, and marketing did indeed rise. But it seems to me that health care providers also bear a substantial share of the blame for their susceptibility to that marketing. In the second stage, restrictions were imposed on prescription opioids, which then led to a rise in heroin usage. In the third stage, there has been a shift from heroin to fentanyl, which provides a much cheaper high in much less volume--and thus is easily smuggled across national borders in ordinary-looking mailed packages.  

Now that the opioid crisis has been unleashed, and has morphed from a prescription drug crisis into the heroin/fentanyl crises, what's to be done? 

There's still room for identifying physicians who are dramatically more likely to prescribe opioids, and for pushing back against that behavior. One study looked at county-level data on which counties have a higher or lower share of doctors who are high-prescribers of opioids. The study also looked at people moving between counties--people whose average health status should be about the same before and after the move. It found that about 30% of the variation in opioid deaths across counties is explained by physician prescribing behavior. There's some evidence that if a state has a "prescription drug monitoring program," which is a centralized database recording all individual prescriptions, and if physicians enter the information into the database and check it before prescribing, it can make a difference in opioid-related mortality, crime, the health of newborns, and the number of children who end up in foster care. Other states have had success with "pain management clinic laws," which seek to regulate "pill mills" that prescribe especially high volumes of these drugs. 

But as noted above, the opioid crisis stopped being primarily a prescription drug issue a few years ago. In addition, steps to reduce prescriptions of opioids always run some risk of nudging users into the illegal opioid markets. Given that the past wars on other illegal drugs have not been notably successful in raising the price or reducing the quantity of illegal drugs, the main policy proposals  here involve trying other methods to protect public health from opioid abuse. 

For example, trying to assure easy access to naloxone, especially among first-line responders including police, seems to have some benefits. Another option is to make treatment cheaper and more available. The authors write (citations omitted):  

Recent estimates suggest that only one in ten individuals with OUD [opioid use disorder] receive medication for treating it in a given year, although there have been recent expansions in availability of DEA-waivered providers of buprenorphine. While there are many reasons why individuals do not receive treatment--including strong psychological barriers to treatment and stigma--commonly stated causes include inability to pay and lack of insurance coverage ...

Overuse of opioids is of course not physically contagious. But there is a sense in which it is socially contagious and also socially destructive in ways that go beyond the harms to individuals.

Monday, November 30, 2020

Cancelling Plans for a Robo-Apocalypse?

We know from historical experience that it's common to hear prophecies about how new automation technologies will wipe out jobs (for examples, see here, here, here, here, and here). We also know that the past can be an imperfect guide to the future. Leslie Willcocks offers a number of reasons to believe that claims of future job loss may be overstated in "Robo-Apocalypse cancelled? Reframing the automation and future of work debate" (Journal of Information Technology, 35:4, pp. 286-302, freely available online at present). Specifically, Willcocks offers eight "qualifiers" as to why claims of job loss from robotics, automation, and artificial intelligence are not likely to be as large as often feared. Here are his qualifiers, with a few words of explanation:  

Qualifier 1 – job numbers versus tasks and activities. 

When you look more closely at estimates that one-third or one-half of jobs will be "automated," the evidence actually tends to show that one-third to one-half of jobs will be changed in the future by the use of technology. Maybe some of those jobs will disappear, but in many other cases, the job itself will evolve, as jobs tend to do over time. Of course, it's a lot less exciting to have a headline which says: "The information technology you use at your job is going to keep changing in ways that affect what you do at work."

Qualifier 2 – job creation from automation

An overall view of the effects of automation on jobs also needs to take into account how, over time and in the present, automation has also led to the creation of many new jobs. Lest we forget, the US unemployment rate before the pandemic hit was under 4%, which certainly doesn't look like evidence that total jobs are being reduced. 

Qualifier 3 – is technology (ever) a fire-and-forget missile?

Technology tends to phase in slowly, often more slowly than enthusiasts may predict. Willcocks writes: "Our own research suggests that implementation challenges are very real in the context of automation, especially for large organizations with a legacy of information technology (IT) investments, infrastructure and outsourcing contracts. There are also cultural, structural and political legacies that will shape the speed of implementation, exploitation and reinvention. In particular, we found in the 2017–2019 period organizations running up against ‘silo challenges’ – in respect of technologies, data, processes, skill bases, culture, managerial mindsets and organizational structures – that slow adoption considerably."

Qualifier 4 – technology: born perfect? perfectible?

"Informed sources also point to the fact that the kind of AI we have today is narrow or weak AI, able to perform a specific kind of problem or task. Nearly all refer to the reality of the Moravec Paradox, that is, the easy things for a 5-year-old human are the hard things for a machine, and vice versa ..." 

Qualifier 5 – distinctive human strengths at work

"Manyika et al. (2017) developed a highly useful (though not exhaustive) framework of 18 human capabilities needed at work, and likely to be needed in the future. These divide into sensory perception, cognitive capabilities, natural language processing, social and emotional capabilities and physical capabilities. They found that automation could perform 7 capabilities at medium to high performance, but their modelling suggests that automation tools are nowhere near able to perform the other 11 capabilities (e.g. creativity, socio-emotional capabilities) to an above human level, and that it would be anything between 15 and 50 years before many tools could. Furthermore, humans tend to use a number of capabilities in specific workplace contexts, and machines are not, and will not be good any time soon, at combining capabilities, let alone being integrated to deal with complex real-life problems ... ... In sum, too little consideration is given to distinctive human qualities that are not easily codifiable or replaceable, especially in combination, and are likely to remain vital at work. Perhaps the direction of travel should be not for replicating human strengths but for automation to be focused on what humans cannot do, or do not want to do."

Qualifier 6 – ageing populations, demographics and automation

Birth-rates have been falling around the world, populations have been aging, and the size of the workforce in many countries is either not rising much or actually declining: "Declining birth rates and ageing populations across the G20 may well see workforce growth decline to 0.3% a year, leaving a workforce too small to maintain current economic growth, let alone meet espoused aspirational targets." The global economic future in many countries, based on demography, looks more likely to involve labor shortages than labor surpluses. 

Qualifier 7 – automation, skills and productivity shortfalls

It may be that fears over the robo-apocalypse are not so much about the rising abilities of robots as about the shortages of skills from humans. Willcocks writes: "There is an irony here in that, while many studies are predicting large job losses as a result of automation, we are also seeing skills shortages reported across many sectors of the G20 countries. These shortages are not necessarily just in areas relating to designing, developing, supporting or working with emerging digital, robotic and automation technologies. Demographic changes, plus skills mismatches and shortages, feed into productivity issues at macro and organizational levels. Therefore, it is increasingly likely that despite the lack of attention given to the issue by most studies, major economies over the next 20 years are going to experience large productivity shortfalls even to maintain their present economic growth rates, let alone achieve their espoused growth targets. Automation and its productivity contribution may turn out to be a coping, rather than a massively displacing phenomenon."

Qualifier 8 – exponential increases in work to be done

Information technology isn't just about automating existing work. Among other changes, it brings with it an explosion of available data, which needs to be managed, examined, stored, protected against cybersecurity threats, integrated with regulatory and legal requirements, and more. Willcocks writes: 
Consider how many organizations are self-reportedly at breaking point despite work intensification, working smarter and the application of digital technologies to date. Then reflect on how the exponential data explosion, the rise in audit, regulation and  bureaucracy and the complex, unanticipated impacts of new technologies are already interacting, and increasing the amount of work to be done, and the time it takes to get around to doing productive work. I would propose a new Willcocks Law to capture some of what is happening,  namely ‘work expands to fill the digital capacity available’. Far from the headlines, a huge if under-analysed work creation scheme may well be underway, to which automation will only be a part solution.
Taking these factors together, it is not at all obvious that artificial intelligence, information technology, and robots are going to reduce the number of jobs. Instead, it seems more plausible that they will reshape jobs, potentially both for better and for worse. 

This issue of the Journal of Information Technology includes a set of short comments on the Willcocks paper. I was especially struck by the comments by Kai Riemer and Sandra Peter in "The robo-apocalypse plays out in the quality, not in the quantity of work" (pp. 310–315). They point out some possible negative consequences of information technology in the workplace. For example, if the "easier" tasks are automated, then the remaining human work tasks may be more difficult and less rewarding, and it may be harder for new workers to leap up the learning curve and gain experience. Jobs may come with more pressure from automated oversight, with corresponding pressures for more intense work and reductions in personal autonomy and human interaction. Information technology may also make alternative labor market arrangements like "gig" work more common, in a way that creates a group of less-secure jobs.  

On the other side, Marleen Huysman comments in "Information systems research on artificial intelligence and work: A commentary on ‘Robo-Apocalypse cancelled? Reframing the automation and future of work debate’" (pp. 307-309): "By developing hybrid AI, tools will become our new assistants, coaches and colleagues and thus will augment rather than automate work."

The connecting thread here is that there is little doubt that technology will affect the nature of future jobs. But instead of focusing on a robo-apocalypse of losses in the number of jobs, we would probably be better served by focusing on changes in the qualities of jobs, and in particular on improving skill levels in ways that will help more workers to treat these technologies as complements for their existing work, rather than as substitutes. 

Friday, November 27, 2020

When Hamilton and Jefferson Agreed! On Fisheries

As all of us who learn our US history from Broadway musicals know, Thomas Jefferson and Alexander Hamilton disagreed on everything. But in the aftermath of the US Revolutionary War, when George Washington had become the first US president, he asked Jefferson and Hamilton to work together in creating a plan to rescue the fisheries off the New England coast, which had suffered greatly during the war. Jefferson and Hamilton agreed on an incentive-based plan--although for distinctively different reasons. The result of their collaboration was the February 1791 "Report on the American Fisheries by the Secretary of State," produced by Jefferson but with the assistance of staff loaned to the project by Hamilton. 

Although I've seen this episode mentioned in passing in several places, the best telling of the story I've run across is by Joseph R. Blasi in "George Washington, Thomas Jefferson, and Alexander Hamilton and an Early Case of Shared Capitalism in American History: The Cod Fishery" (Rutgers University School of Management and Labor Relations, Working Paper, April 15, 2012).

As the "Report on the American Fisheries" points out, literally dozens of European ships were catching cod of the coast of what would be come New England and Canada in the early 1500s. But during the Revolutionary War, the US fishing industry was largely destroyed. As Blasi says: The American Revolutionary War lasted from 1775-1783 during which time the British went out of their way to paralyze and destroy the important cod fishery because of its economic and its national security importance." Or as Jefferson wrote: 
The fisheries of the United States annihilated during the war, their vessels, utensils, and fishermen destroyed, their markets in the Mediterranean and British America lost, and their produce dutied in those of France, their competitors enabled by bounties to meet and undersell them at the few markets remaining open, without any public aid, and indeed paying aids to the public: Such were the hopeless auspices under which this important business was to be resumed.
George Washington first took office in April 1789. In April 1790 the state of Massachusetts requested a plan for restoring the cod industry, and Washington assigned the job to Jefferson. However, Blasi notes:
[I]t is clear from the historical record that Tench Coxe, the Assistant Secretary of the Treasury under Alexander Hamilton, was sending materials to Jefferson and serving as his lead reasearcher. It is interesting and notable that, despite his growing rivalry with Secretary of the Treasury Alexander Hamilton at this time and their deep policy conflict over Hamilton’s proposal for the first Bank of the United States and many other issues, that Hamilton’s right hand man, Tench Coxe was essentially staffing Jefferson on the fishery issue and serving as his researcher, and that essentially, both departments were cooperating on the fishery issue.
Jefferson pointed out that along with the physical destruction of fishing ships during the war, the fishing industry labored under other disadvantages. The British and French were subsidizing their fishing fleets, while imposing duties on American-caught fish. In addition, taxes imposed by the US government were hurting the fishing industry. As Blasi describes Jefferson's argument:  
Finally, the report lays out a significant disadvantage actually imposed by the young U.S. government, namely, and ironically, barriers to the industry’s development actually imposed by the young Government itself in the form of taxes and duties such as tonnage and Naval duties on the vessels and impost duties on the supplies used in the fishery production (salt, hooks, lines, leads, duck, cordage, cables, iron, hemp, and twine) and in the “nourishment” of the seamen (tea, rum, sugar, and molasses). There was also a tax levied on the coarse woolens of the fishermen and a poll tax on each of them levied by the State of Massachusetts. Jefferson adds up the taxes from duties and concludes “When a business is so nearly in equilibriuo, that one can hardly discern whether the profit be sufficient to continue it, or not, smaller sums than these suffice to turn the scale against it.” (p. 210-211) Ironically, after a war partly motivated by anti-tax fervor, America’s leading industry was being smothered in taxes and government bureaucracy.
Jefferson and Hamilton of course had quite different perspectives on the fishing industry. Jefferson saw the industry as an opportunity for small family-sized businesses. Thus, when listing in his report the advantages of the US-based fishing industry, Jefferson mentioned factors like: 
The neighbourhood of the great Fisheries, which permits our fishermen to bring home their fish to be salted by their wives and children. ... The smallness of the vessels, which the shortness of the voyage enables us to employ and which consequently require but a small capital. .... The cheapness of our vessels, which do not cost above the half of the Baltic fir vessels, computing price and duration. ... Their excellence as Sea-Boats which decreases the risk and quickens the returns.

There was also a widely held belief that the fisheries were a training ground for sailors who then might end up either in the navy or in other jobs in the shipping industry. 

Hamilton, on the other hand, viewed fisheries as part of what he hoped would be a US economic future as a manufacturing power. In his Report on the Subject of Manufactures, finalized in late 1791, he makes a brief comment on fisheries along these lines:

As far as the prosperity of the Fisheries of the United states is impeded by the want of an adequate market, there arises another special reason for desiring the extension of manufactures. Besides the fish, which in many places, would be likely to make a part of the subsistence of the persons employed; it is known that the oils, bones and skins of marine animals, are of extensive use in various manufactures. Hence the prospect of an additional demand for the produce of the Fisheries.
Jefferson's report on fisheries did not make explicit policy recommendations, but the implicit recommendation that Congress should stop burdening the industry with taxes on the inputs it used and instead consider mechanisms to support it was pretty clear. Of course, a number of shipowners of fishing vessels strongly believed that the US should also adopt a system of government bounties, paid directly to them. But the laws that emerged from the first Congress came out a little differently, including both specific legislation about the rights of workers and about profit-sharing. 

As Blasi tells the story, one motivation for the workers' rights legislation was that British fishing vessels were often offering a better deal to American fishermen. Thus, even before Jefferson's report on fisheries was released, Congress passed a law to assure better treatment of US fishermen. Blasi writes:
Ships were one of the largest collections of workers in an employee-employer relationship in the young nation so it is no surprise that First Congress passed a law on July 20, 1790 that laid out work conditions for seamen. From December 1, 1790 every master or commander of a ship had to have a written agreement before a voyage declaring the length of the voyage while every seaman had to agree to be available for the time period or there was a wage penalty. Workers had the right to one third of their wages before the voyage ended and the balance upon the completion of the voyage. The law provided a procedure by which members of the crew other than the captain could move for the repair of leaky or faulty ships, the requirement of a chest of medicines on board, and minimum per person requirements of water, salted meat, and “wholesome ship-bread. If seamen received a lower allowance then the commander had to pay them an extra day’s wages for each day of ‘short allowance.’ ” Other laws provided strict rules for keeping track of seamen as voyages ensued, for making records of seamen seized by foreign powers, and for hospital care and relief for sick and disabled seaman.
Given that many people tend to view the US version of capitalism as red in tooth and claw basically up to the New Deal of the 1930s, or even up to the present, it's interesting to read the list of contractual rules and occupational health and safety provisions that Congress was passing in 1790. Soon after, Congress reacted to the Jefferson report with provisions to roll back taxes that would otherwise have been owed. This was carefully not called a "bounty," but was rather an "allowance." The law specified that an owner of a ship could not receive the allowance unless there was a written profit-sharing agreement with the crew. Blasi writes:
But, clearly, the most significant and the most interesting detail about the “bounties” and incentives is that the Federal government required in the same 1792 law that no allowances could be paid to the owner of the ship unless the ship owner had a written profit sharing agreement with all the fishermen affirming that the traditional and customary shared capitalist practice of broad profit-sharing on the entire catch itself would be honored. ... [T]he owners had to produce this written agreement when they requested payment of the their share of the allowance. So, in the end, the law insured profit sharing in two ways: both the allowance in order to encourage the industry’s revival was shared between the crew and the owners and the custom of broad-based profit sharing on the entire catch had to be honored. The owners of large cod ships were required to have these signed profit sharing agreements with the sailors before the ship left the port. The penalty to the owner was the same as the penalty for desertion of a ship. This probably the first documented case in American history where shared capitalism became the law of the land.
The New England fishing industry had had various forms of profit-sharing with the crew for some time. The idea that such agreements provided greater incentives for the crew was well-known, and such agreements were broadly accepted. But the idea that such agreements would be encouraged by the provision of government incentives to owners was one more innovation for the early United States. 






Or, as Jefferson summed it up:
We have seen that the advantages of our position place our fisheries on a ground somewhat higher, such as to relieve our Treasury from the necessity of giving them support, but not to permit it to draw support from them, nor to dispense the Government from the obligation of effectuating free markets for them.