Fall 2022 Workshops

Special Event:

Sept 21, 2022 (Wed)
4:20 – 6:10 PM

Gary Gerstle

Paul Mellon Professor of American History
University of Cambridge

Presentation in person in Case Lounge (Jerome Greene Hall, room 701).

The Rise and Fall of the Neoliberal Order:
America and the World in the Free Market Era

The Center for Law & Economic Studies, the Legal History Workshop, and the Center for Political Economy at Columbia World Projects are pleased to present a roundtable discussion of Gary Gerstle’s The Rise and Fall of the Neoliberal Order: America and the World in the Free Market Era. 

Gerstle, who is Paul Mellon Professor of American History Emeritus and the Paul Mellon Director of Research in American History, Sidney Sussex College, University of Cambridge, will be in conversation with Kate Andrias, Maeve Glass, Jeff Gordon, Lev Menand, and Suresh Naidu (Economics).

RSVP: Please email Chris Mark at [email protected] and Adebambo Adesanya at [email protected] to RSVP. Please indicate whether you are an affiliate of Columbia Law School.*

*Non-Columbia and non-Law School affiliates are welcome. However, we will need to provide your name to the guard at the entrance to Jerome Greene Hall so they can let you into the building.

Oct 3, 2022 (Mon)
4:20 – 6:10 PM

Elliott Ash

ETH Zurich

Presentation in person in Case Lounge (Jerome Greene Hall, room 701).

Can Information Quality Explain Judge In-Group Bias?
Evidence from Wisconsin Criminal Courts

Co-Author: Claudia Marangon (ETH Zurich)

At the authors' request, the draft has not been posted.  It will be distributed to the Law & Economics email list.  If you plan to attend the workshop, would like to read the paper, and are not on the email list, please email [email protected].

This paper studies racial and gender disparities in Wisconsin criminal courts. Using records from 1.5 million cases from 2000-2017, we show large disparities by race and gender in sentencing harshness. Black defendants are more likely to receive a jail sentence than comparable defendants of other races, and male defendants are more likely to receive a jail sentence than comparable female defendants. These disparities hold when adjusting for court and time factors, for severity of criminal charges, and for a recidivism risk score that we produce ourselves using a machine learning model trained to predict reoffense. In the aggregate, there is no in-group bias; that is, judges do not tend to favor defendants of the same race or gender on average. However, we do find a racial in-group difference in the response to recidivism risk: judges are more lenient toward same-race defendants who are low risk but harsher toward same-race defendants who are high risk. Finally, experienced judges are more responsive than inexperienced judges to recidivism risk in their sentencing decisions. Overall, the evidence is suggestive of statistical discrimination with better information about the in-group rather than taste-based discrimination driven by out-group animus.
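The recidivism risk score described in the abstract comes from a machine learning model trained to predict reoffense. As a minimal sketch of how such a score can be produced (using entirely synthetic data, hypothetical feature names, and a plain logistic model rather than the authors' actual data or pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic case records: hypothetical features (prior convictions,
# age at offense, charge severity) and a binary reoffense outcome.
n = 5000
X = np.column_stack([
    rng.poisson(2, n),            # prior convictions
    rng.uniform(18, 60, n),       # age at offense
    rng.integers(1, 6, n),        # charge severity (1-5)
])
logits = 0.6 * X[:, 0] - 0.05 * X[:, 1] + 0.3 * X[:, 2] - 1.0
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(float)

# Standardize features and fit logistic regression by gradient descent.
Xs = (X - X.mean(0)) / X.std(0)
Xb = np.column_stack([np.ones(n), Xs])
w = np.zeros(Xb.shape[1])
for _ in range(2000):
    p = 1 / (1 + np.exp(-Xb @ w))
    w -= 0.1 * Xb.T @ (p - y) / n

# The fitted probabilities serve as the recidivism risk score,
# which can then be used as a control in a sentencing regression.
risk_score = 1 / (1 + np.exp(-Xb @ w))
print(risk_score.mean())
```

The resulting score is a predicted probability of reoffense for each case, the kind of covariate the paper holds fixed when comparing sentencing outcomes across defendant groups.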

Oct 10, 2022 (Mon)
4:20 – 6:10 PM

Eleanor Wilking

Assistant Professor of Law
Cornell Law School

Presentation in person in Case Lounge (Jerome Greene Hall, room 701).

Independent Contractors in Law and in Fact: Evidence from US Tax Returns

Federal tax law divides workers into two categories depending on the degree of control exercised over them by the service purchaser (i.e., the firm): employees, who are subject to direct supervision; and independent contractors, who operate autonomously. Such worker classification determines the administration of income tax and what it subsidizes, as well as which nontax regulations pertain, such as workplace safety and antidiscrimination protections. The Internal Revenue Service and other federal agencies have codified common law agency doctrine into multifactor balancing tests used to legally distinguish employees from independent contractors. These tests have proved challenging to apply and costly to enforce. Yet we know almost nothing about how firms actually classify workers systemically, and how such classification relates to the control firms actually exercise over workers.

To bridge this gap between legal principles and legal practice, this Article introduces a novel empirical analysis using a comprehensive data source—all digitized U.S. income tax filings between 2001 and 2016. This analysis establishes several new facts. First, using six measures of firms’ control over workers, I show that employees and contractors have grown increasingly similar over the past two decades. This convergence is particularly pronounced among lower-earning workers, and I develop a novel theoretical framework to interpret these findings. Second, I provide empirical evidence that the presence of financial incentives created by policy increases the likelihood that employees are reclassified as contractors.

These results suggest a growing misalignment between how workers are classified and the substance of firm–worker relationships. Put another way, two otherwise identical workers, with relationships that feature a similar degree of control, may end up being classified differently due to, among other factors, their firms’ financial incentives. I conclude by discussing the key normative questions raised by the apparent erosion of the legal boundary delimiting contractors and employees.

Oct 24, 2022 (Mon)
4:20 – 6:10 PM

Robert Bartlett

I. Michael Heyman Professor of Law
Berkeley Law

Presentation in person in Case Lounge (Jerome Greene Hall, room 701).

Tiny Trades, Big Questions: Fractional Shares

Co-Authors: Justin McCrary, Columbia Law School; Maureen O'Hara, Johnson College of Business, Cornell University

This paper investigates fractional share trading. We develop a methodology for identifying fractional share trades in the Consolidated Transaction Reporting System. Our approach uses a latency-based digital footprint to estimate fractional share trades executed by Robinhood and Drivewealth, the two largest fractional share broker-dealers. We find a surprising breadth to fractional share trading: high-priced stocks, meme stocks, IPOs, SPACs, and popular retail stocks now exhibit considerable numbers of these tiny trades. We show that these tiny trades matter: fractional share trades are predictive of future liquidity and volatility, suggesting that they carry information. Our results suggest that our measure of fractional share trading better captures this market information than do standard measures of retail trading. We also discuss how current data and reporting protocols preclude knowing the full extent of fractional share trading, inflate trade data, and provide, at best, censored samples of these off-exchange trades.

Nov 7, 2022 (Mon)
4:20 – 6:10 PM

Oren Bar-Gill

William J. Friedman & Alicia Townsend Friedman Professor of Law & Economics,
Harvard Law School

Presentation in person in Case Lounge (Jerome Greene Hall, room 701).

Algorithmic Harm in Consumer Markets

Co-Authors: Cass R. Sunstein, Harvard Law School; Inbal Talgam-Cohen, Israel Institute of Technology

Machine learning algorithms are increasingly able to predict what goods and services particular people will buy, and at what price. It is possible to imagine a situation in which relatively uniform, or coarsely set, prices and product characteristics are replaced by far more in the way of individualization. Companies might, for example, offer people shirts and shoes that are particularly suited to their situations, that fit with their particular tastes, and that have prices that fit their personal valuations. In many cases, the use of algorithms promises to increase efficiency and to promote social welfare; it might also promote fair distribution. But when consumers suffer from an absence of information or from behavioral biases, algorithms can cause serious harm. Companies might, for example, exploit such biases in order to lead people to purchase products that have little or no value for them or to pay too much for products that do have value for them. Algorithmic harm, understood as the exploitation of an absence of information or of behavioral biases, can disproportionately affect identifiable groups, including women and people of color. Since algorithms exacerbate the harm caused to imperfectly informed and imperfectly rational consumers, their increasing use provides fresh support for existing efforts to reduce information and rationality deficits, especially through optimally designed disclosure mandates. In addition, there is a more particular need for algorithm-centered policy responses. Specifically, algorithmic transparency—transparency about the nature, uses, and consequences of algorithms—is both crucial and challenging; novel methods designed to open the algorithmic “black box” and “interpret” the algorithm’s decision-making process should play a key role. And, in appropriate cases, regulators should police the design and implementation of algorithms, with a particular emphasis on exploitation of an absence of information or of behavioral biases.

Nov 21, 2022 (Mon)
4:20 – 6:10 PM

Saul Levmore

William B. Graham Distinguished Service Professor of Law
University of Chicago Law School

Presentation in person in Case Lounge (Jerome Greene Hall, room 701).

Appellate Panels and Second Opinions

Appellate review can be understood as an opportunity to correct errors made by lower courts and, by virtue of multi-member appellate panels, as a way to benefit from the wisdom of crowds. Appellate panels of three judges, and then a larger Supreme Court of nine, are likely to interpret, apply, or advance law more correctly, or simply better, than a single lower-court judge whose effort is under review. Appellate review can also be understood as relying on more experienced or otherwise superior decision-makers, or as designed to make law more uniform, inasmuch as lower court decisions on many matters will converge as they follow precedents. It has also been understood as a way to take advantage of the knowledge that litigants themselves have about lower court errors. Appellate review is surely a means of encouraging more careful work by lower courts; people are often more careful when they know that their work can be reviewed or observed by superiors or well-regarded peers. Finally, appellate review may be of great value even when it affirms a lower court, because each step adds to the development of a lasting precedent. Most of these perspectives have counterparts in other settings where the familiar question of when to seek and pay for a second opinion arises. But most second opinions, whether sought before agreeing to a medical procedure or contracting for an auto repair, are given by a single analyst, while appellate review in the federal and most state systems normally involves three jurists, and then yet more in the event of a further appeal. This Article examines the logic of second and third opinions – even without the added complexity introduced by the precise cost of review (in the form of time or money) – and reaches several counterintuitive results. Most appellate processes should be restructured so that one judge alone reviews the lower court. Only if this single appellate judge disagrees with the lower court should one more judge enter the fray, and even that may be wasteful. Legal questions that are appealed will normally be decided by 2-0 or 2-1 decisions, involving just one or two appellate judges in addition to the lower court judge.

There are other reasonable conclusions to reach once the logic of appellate review, and second opinions quite generally, is examined. There is a case to be made for having the first appellate judge always decide whether further review is in order. On the other hand, and to the contrary, the appellate process could always stop after one review subject to the Supreme Court’s deciding to take the case. These and other possibilities are examined here, but mostly set aside in favor of the central argument about the appellate process. The arguments that drive the conclusions are fueled by some probability theory, and have surprising implications for areas outside of law in which second opinions are commonly sought. I begin with the idea that reaching the correct decision (defined presently) is the immediate goal. As the argument proceeds, the value of long-lasting rules and other aims are brought into play.
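The probability logic behind these schemes can be made concrete with a toy calculation, under the simplifying assumption (mine, not the Article's richer analysis) that each appellate judge is independently correct with the same probability q:

```python
# Toy comparison of review schemes, assuming each appellate judge is
# independently correct with probability q. This is an illustrative
# simplification, not the Article's model.

def single_reviewer(q):
    """Probability a lone appellate judge reaches the correct result."""
    return q

def panel_of_three(q):
    """Probability a 3-judge majority is correct: all three right,
    or exactly two of three right."""
    return q**3 + 3 * q**2 * (1 - q)

def expected_judges_sequential(q, p_lower):
    """Expected number of appellate judges consulted when a second
    judge is added only if the first disagrees with the lower court.
    p_lower is the probability the lower court was correct."""
    # Disagreement: lower court right and judge wrong, or vice versa.
    p_disagree = p_lower * (1 - q) + (1 - p_lower) * q
    return 1 + p_disagree

for q in (0.6, 0.75, 0.9):
    print(q, single_reviewer(q), panel_of_three(q),
          expected_judges_sequential(q, p_lower=q))
```

At q = 0.9, for example, a three-judge panel is correct 97.2% of the time versus 90% for a single reviewer, while the sequential scheme consults fewer than 1.2 appellate judges on average at these parameters. Whether those few percentage points of accuracy are worth nearly tripling the judicial resources expended is the kind of trade-off the Article's argument turns on.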

Part I begins with the most familiar use of second opinions. It rethinks the wisdom of soliciting another assessment before following a recommendation regarding a serious medical intervention. There are important differences between medical and legal decisions, but it is instructive to begin with an example where it is easier to insist that there is a correct answer. The analysis shows that the common thinking about the value of a second opinion is poorly conceived. Part II then takes account of some of the ways in which judicial review is unlike other calls for review. It suggests that if we incorporate the likelihood that a lower court judge is correct, it is sensible to move to a system where we begin, and usually end, with a single appellate judge. Part III tests the idea of a single reviewer by looking not only at the value of discussion and teamwork among judges, but also at the importance of some assumptions made here about the probability that a judge is correct. Part IV turns to the possibility that appellate judges are not deployed to find correct answers, perhaps because there is often no such thing, but rather to reflect and aggregate preferences, an idea familiar to readers who regard much of what judges do as reflecting political preferences. Part V extends the analysis of appellate review to committees, to boards of directors, and then to juries. The insights offered here suggest some changes in law, though some of these are likely to be politically impossible in the near future.

Dec 5, 2022 (Mon)
4:20 – 6:10 PM

Kathryn Spier

Domenico De Sole Professor of Law
Harvard Law School

Presentation in person in Case Lounge (Jerome Greene Hall, room 701).

Holding Platforms Liable

Co-Author: Xinyu Hua, Hong Kong University of Science & Technology

Should platforms be held liable for the harms suffered by users? A two-sided platform enables interactions between firms and users. There are two types of firm: harmful and safe. Harmful firms impose larger costs on the users. If firms have deep pockets, then platform liability is unnecessary: holding the firms liable for user harms deters the harmful firms from joining the platform. If firms are judgment proof, then platform liability plays an instrumental role in reducing social costs. With platform liability, the platform has an incentive to (1) raise the interaction price to deter harmful firms and (2) invest resources to detect and remove harmful firms from the platform. The residual liability assigned to the platform may be partial rather than full. The optimal level of platform liability depends on whether users are involuntary bystanders or voluntary consumers, and on the intensity of platform competition.
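The abstract's deterrence logic can be illustrated with a stylized numerical example. All numbers below are hypothetical, chosen for exposition; this is not the paper's formal model or calibration:

```python
# Stylized two-sided platform example: a harmful firm's willingness
# to join depends on the interaction price plus the liability it
# expects to actually pay.

SAFE_BENEFIT = 10.0   # a safe firm's value from joining the platform
HARM_BENEFIT = 7.0    # a harmful firm's value from joining
USER_HARM = 8.0       # cost a harmful firm imposes on users

def joins(benefit, price, expected_liability):
    """A firm joins when its benefit covers the interaction price
    plus the liability it expects to actually pay."""
    return benefit > price + expected_liability

# Deep pockets: firm-level liability alone deters the harmful firm.
low_price = 2.0
print(joins(SAFE_BENEFIT, low_price, 0.0))        # safe firm joins
print(joins(HARM_BENEFIT, low_price, USER_HARM))  # harmful firm stays out

# Judgment-proof firms expect to pay nothing, so harm returns...
print(joins(HARM_BENEFIT, low_price, 0.0))
# ...and a liable platform can instead raise the interaction price
# above the harmful firm's benefit while keeping the safe firm aboard.
high_price = 8.0
print(joins(SAFE_BENEFIT, high_price, 0.0))
print(joins(HARM_BENEFIT, high_price, 0.0))
```

The higher price is the abstract's first channel for a liable platform; the second channel, investing in detection and removal, matters when price screening alone cannot separate harmful from safe firms.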