Publications

Modernizing Fair Lending (2025)

Journal of Legal Analysis (accepted), with Spencer Caro & Scott Nelson

Fair lending’s disparate impact doctrine aims to address lending disparities. But which disparities? Traditional fair lending has narrowly focused on equal outcomes—examining differences in loan approval rates or interest rates. However, this singular focus overlooks other dimensions of disparity that are critical to fair credit access.
We challenge the conventional focus on equal outcomes, demonstrating how it has failed to address some of the most pernicious harms of traditional credit allocation and has stifled necessary machine-learning and alternative-data innovations. We argue that disparities in the validity of creditworthiness predictions—the accuracy with which a model identifies creditworthy applicants—severely impair equal access to credit by failing to extend credit equally to the creditworthy. Despite mounting empirical evidence of the harm of validity disparities, traditional fair lending enforcement inadequately recognizes this dimension of disparity, a gap that will become increasingly harmful as lending decisions rely on advanced statistical methods.

“Price Discrimination” Discrimination (2025)

Harvard Business Law Review (forthcoming)

Credit price personalization, where lenders set prices based on individual borrower and loan characteristics, is a common practice across many loan types. Conventional accounts of its harms focus on the ways in which risk-based pricing, or setting prices based on borrowers’ credit risk, can lead to disparities for protected groups such as racial minorities and women. This Article examines an often-overlooked yet potentially harmful form of price personalization: charging borrowers different rates based on their willingness to pay, a practice known as price discrimination. I argue that price discrimination can exploit vulnerable borrowers, including members of protected groups, by imposing higher costs unrelated to their credit risk, resulting in what I term “price discrimination” discrimination. Beyond entrenching financial disparities, price discrimination can exacerbate default risks, especially as the use of big data and artificial intelligence makes the practice more pervasive.

Sex & Startups (2025)

The Yale Journal on Regulation (forthcoming), with Jens Frankenreiter & Eric Talley

Venture capital is widely perceived to have a gender problem. Both founders seeking capital and the investors themselves are overwhelmingly male, fomenting concerns about how—and how fairly—the VC sector distributes its economic gains. Although gender disparities in funding are well documented, we still know little about whether the governance of VC-backed startups similarly manifests gender imbalances. This knowledge gap is critical, since VC investments often carry strings attached, in the form of cash flow and control rights that can vary substantially from deal to deal.

Operationalizing the Search for Less Discriminatory Alternatives in Fair Lending (2024)

with Vitaly Meursault & Berk Ustun, FAccT ’24

The Less Discriminatory Alternative is a key provision of the disparate impact doctrine in the United States. In fair lending, this provision mandates that lenders must adopt models that reduce discrimination when they do not compromise their business interests. In this paper, we develop practical methods to audit for less discriminatory alternatives. Our approach is designed to verify the existence of less discriminatory machine learning models – by returning an alternative model that can reduce discrimination without compromising performance (discovery) or by certifying that an alternative model does not exist (refutation). We develop a method to fit the least discriminatory linear classification model in a specific lending task – by minimizing an exact measure of disparity (e.g., the maximum gap in group FNR) and enforcing hard performance constraints for business necessity (e.g., on FNR and FPR). We apply our method to study the prevalence of less discriminatory alternatives on real-world datasets from consumer finance applications. Our results highlight how models may inadvertently lead to unnecessary discrimination across common deployment regimes, and demonstrate how our approach can support lenders, regulators, and plaintiffs by reliably detecting less discriminatory alternatives in such instances.
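
To make the audit idea concrete, the sketch below illustrates the discovery step in Python. It is not the paper's exact method, which fits the least discriminatory linear model by solving an exact optimization problem; instead it searches over candidate logistic-regression models, keeps those that satisfy hypothetical performance constraints, and reports the candidate with the smallest gap in group false-negative rates. The data, thresholds, and candidate-generation strategy are placeholders.

```python
# Illustrative sketch only: the paper solves an exact optimization for the
# least discriminatory linear classifier; this code approximates the audit
# idea with a simple search over candidate models. All names and thresholds
# are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fnr(y_true, y_pred):
    """False-negative rate: share of creditworthy applicants (y=1) denied."""
    pos = np.asarray(y_true) == 1
    return float(np.mean(np.asarray(y_pred)[pos] == 0)) if pos.any() else 0.0

def fpr(y_true, y_pred):
    """False-positive rate: share of non-creditworthy applicants approved."""
    neg = np.asarray(y_true) == 0
    return float(np.mean(np.asarray(y_pred)[neg] == 1)) if neg.any() else 0.0

def fnr_gap(y_true, y_pred, group):
    """Maximum gap in group-level FNRs (the disparity measure)."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    rates = [fnr(y_true[group == g], y_pred[group == g]) for g in np.unique(group)]
    return max(rates) - min(rates)

def search_lda(X, y, group, baseline_pred, max_fnr, max_fpr, n_candidates=200, seed=0):
    """Return (model, gap) for the least-disparate candidate that meets the
    business-necessity constraints, or None if no such candidate is found."""
    X, y, group = np.asarray(X), np.asarray(y), np.asarray(group)
    rng = np.random.default_rng(seed)
    baseline_gap = fnr_gap(y, baseline_pred, group)
    best = None
    for _ in range(n_candidates):
        # Vary regularization and class weights to generate alternative models.
        model = LogisticRegression(
            C=rng.uniform(0.01, 10.0),
            class_weight={0: 1.0, 1: rng.uniform(0.5, 2.0)},
            max_iter=1000,
        ).fit(X, y)
        pred = model.predict(X)
        if fnr(y, pred) <= max_fnr and fpr(y, pred) <= max_fpr:
            gap = fnr_gap(y, pred, group)
            if gap < baseline_gap and (best is None or gap < best[1]):
                best = (model, gap)
    return best  # None means no less discriminatory alternative found in this search
```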

D-Hacking (2024)

with Emily Black and Zara Hall, FAccT ’24

Recent regulatory efforts, including Executive Order 14110 and the AI Bill of Rights, have focused on mitigating discrimination in AI systems through novel and traditional applications of anti-discrimination law. While these initiatives rightly emphasize fairness testing and mitigation, we argue that they pay insufficient attention to robust bias measurement and mitigation—and that without it, these frameworks cannot effectively achieve the goal of reducing discrimination in deployed AI models. This oversight is particularly concerning given the instability and brittleness of current algorithmic bias mitigation and fairness optimization methods, as highlighted by growing evidence in the algorithmic fairness literature. This instability heightens the risk of what we term discrimination-hacking, or d-hacking: a scenario in which, inadvertently or deliberately, selecting models based on favorable fairness metrics within specific samples leads to misleading or non-generalizable fairness performance. We term this effect d-hacking because systematically selecting among numerous models to find the least discriminatory one parallels p-hacking in social science research, where selectively reporting outcomes that appear statistically significant produces misleading conclusions.

In light of these challenges, we argue that AI fairness regulation should not only call for fairness measurement and bias mitigation but also specify methods to ensure robust solutions to discrimination in AI systems. To make the case for robust fairness assessment and bias mitigation in AI regulation, this paper (1) synthesizes evidence of d-hacking in the computer science literature and provides experimental demonstrations of d-hacking, (2) analyzes current legal frameworks to understand the treatment of robust fairness and non-discriminatory behavior, both in recent AI regulation proposals and in traditional U.S. discrimination law, and (3) outlines policy recommendations for preventing d-hacking in high-stakes domains.
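
The selection dynamic behind d-hacking can be seen in a toy simulation, sketched below under purely synthetic assumptions; it is not the paper's experimental setup. Many models that differ only in random seed are trained, the one with the smallest fairness gap on a small sample is selected, and that gap is then compared with the gap measured on a large fresh sample.

```python
# Toy simulation of d-hacking (illustrative only, not the paper's experiments):
# select the "fairest" of many seeds on a small sample, then check whether
# that favorable gap generalizes to fresh data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def make_data(n):
    group = rng.integers(0, 2, n)                      # synthetic protected attribute
    x = rng.normal(size=(n, 5)) + 0.3 * group[:, None] # features correlated with group
    y = (x[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)
    return x, y, group

def selection_rate_gap(pred, group):
    """Absolute difference in positive-prediction rates between groups."""
    return abs(pred[group == 1].mean() - pred[group == 0].mean())

x_tr, y_tr, g_tr = make_data(2000)
x_val, y_val, g_val = make_data(200)      # small sample used for model selection
x_te, y_te, g_te = make_data(20000)       # large fresh sample

gaps_val, gaps_te = [], []
for seed in range(50):                    # 50 models differing only in random seed
    clf = RandomForestClassifier(n_estimators=50, random_state=seed).fit(x_tr, y_tr)
    gaps_val.append(selection_rate_gap(clf.predict(x_val), g_val))
    gaps_te.append(selection_rate_gap(clf.predict(x_te), g_te))

best = int(np.argmin(gaps_val))           # "fairest" model on the small sample
print(f"selected model's gap on selection sample: {gaps_val[best]:.3f}")
print(f"same model's gap on fresh data:           {gaps_te[best]:.3f}")
print(f"average gap across all models:            {np.mean(gaps_te):.3f}")
```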

Unstable Personalized Law (2024)

29 Jerusalem Review of Legal Studies 65

This Article discusses Omri Ben-Shahar and Ariel Porat’s book “Personalized Law,” which offers an exploration of the concept of customizing legal rules to fit individual circumstances and characteristics. The book focuses primarily on the second stage of implementing personalized law—how to use classification models to tailor legal rules to individual profiles. This Article focuses on the challenges in the first stage of personalized law—the creation of individual classification models—and the inherent challenges of such a prediction and classification exercise. The Article addresses a critical aspect of these challenges: the instability of intrapersonal predictions across model iterations. While prediction error is a known characteristic of machine learning models, the focus here is on the variability of individual predictions over time, despite overall model accuracy. This instability can lead to fluctuating legal rules for individuals, creating uncertainty and potentially undermining the effectiveness of personalized law. Current legal systems already exhibit some instability, but personalized law could amplify this issue, particularly in areas with cumulative impacts like privacy preferences or consent age.

I demonstrate the intrapersonal instability of machine learning predictions through the example of predicting credit default. The implications for personalized law are significant: individuals could face varying legal rules without changes in their circumstances, and such instability could erode the reliability and effectiveness of legal frameworks.
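
A minimal sketch of this kind of instability, using synthetic data rather than the credit-default data discussed in the Article, is below: a default model is repeatedly refit on resampled data, and one fixed applicant's predicted risk is tracked across iterations even though aggregate accuracy barely changes.

```python
# Minimal sketch with synthetic data (hypothetical setup): retrain the "same"
# default model on resampled data and track how one applicant's predicted
# default probability moves across model iterations.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 8))
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=1.5, size=5000) > 0).astype(int)
applicant = X[:1]                          # one fixed individual

probs, aucs = [], []
for it in range(20):                       # 20 model iterations on resampled data
    idx = rng.choice(len(X), size=len(X), replace=True)
    model = GradientBoostingClassifier(random_state=it).fit(X[idx], y[idx])
    probs.append(model.predict_proba(applicant)[0, 1])
    aucs.append(roc_auc_score(y, model.predict_proba(X)[:, 1]))

print(f"applicant's predicted default risk ranges from {min(probs):.2f} to {max(probs):.2f}")
print(f"overall AUC stays within {min(aucs):.3f}-{max(aucs):.3f}")
```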

Orthogonalizing Inputs (2024)

2024 ACM Symposium on Computer Science and Law 

This paper examines an approach to algorithmic discrimination that seeks to blind predictions to protected characteristics by orthogonalizing inputs. The approach uses protected characteristics (such as race or sex) during the training phase of a model but masks these during deployment. The approach posits that including these characteristics in training prevents correlated features from acting as proxies, while assigning uniform values to them at deployment ensures decisions do not vary by group status.
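
A minimal sketch of the approach under examination, using synthetic data and hypothetical variable names, is below: the protected characteristic enters the model as a training feature and is then fixed to a single uniform value for every applicant at deployment.

```python
# Sketch of the orthogonalizing-inputs approach the paper examines
# (illustrative only, with synthetic data and hypothetical variable names):
# the protected characteristic is used as a feature when fitting the model,
# then set to one uniform value for all applicants at deployment.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 4000
race = rng.integers(0, 2, n)                 # synthetic protected characteristic
income = rng.normal(size=n) + 0.4 * race     # correlated (proxy) feature
y = ((income + rng.normal(size=n)) > 0).astype(int)

X_train = np.column_stack([income, race])    # protected attribute included in training
model = LogisticRegression().fit(X_train, y)

def deploy_score(income_new, uniform_race=0.5):
    """At deployment, every applicant gets the same value for the protected
    input, so scores can differ only through the other features."""
    income_new = np.asarray(income_new, dtype=float)
    X_new = np.column_stack([income_new, np.full_like(income_new, uniform_race)])
    return model.predict_proba(X_new)[:, 1]

scores = deploy_score(income)                # group status no longer varies the input
```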

The Input Fallacy (2022)

106 Minnesota Law Review 1175 

Algorithmic credit pricing threatens to discriminate against protected groups. Traditionally, fair lending law has addressed such threats by scrutinizing inputs. But input scrutiny has become a fallacy in the world of algorithms. 

Using a rich dataset of mortgages, I simulate algorithmic credit pricing and demonstrate that input scrutiny fails to address discrimination concerns and threatens to create an algorithmic myth of colorblindness. The ubiquity of correlations in big data combined with the flexibility and complexity of machine learning means that one cannot rule out the consideration of protected characteristics, such as race, even when one formally excludes them. Moreover, using inputs that include protected characteristics can in fact reduce disparate outcomes. 

Big Data and Discrimination (2019)

with Jann Spiess, 86 The University of Chicago Law Review 459 

The ability to distinguish between people in setting the price of credit is often constrained by legal rules that aim to prevent discrimination. These legal requirements developed in the context of human decision-making, and so their effectiveness is challenged as pricing increasingly relies on intelligent algorithms that extract information from big data. In this Essay, we bring together existing legal requirements with the structure of machine-learning decision-making in order to identify tensions between old law and new methods and lay the ground for legal solutions. We argue that, while automated pricing rules provide increased transparency, their complexity also limits the application of existing law. Using a simulation exercise based on real-world mortgage data to illustrate our arguments, we note that restricting the characteristics that the algorithm is allowed to use can have a limited effect on disparity and can in fact increase pricing gaps. Furthermore, we argue that there are limits to interpreting the pricing rules set by machine learning that hinder the application of existing discrimination laws. We end by discussing a framework for testing discrimination that evaluates algorithmic pricing rules in a controlled environment. Unlike the human decision-making context, this framework allows for ex ante testing of pricing rules, facilitating comparisons between lenders.

Explanation < Justification: GDPR and the Perils of Privacy (2019)

with Josh Simons, 2 Journal of Law & Innovation 71

The European Union’s General Data Protection Regulation (GDPR) is the most comprehensive legislation yet enacted to govern algorithmic decision-making. Its reception has been dominated by a debate about whether it contains an individual right to an explanation of algorithmic decision-making. We argue that this debate is misguided in both the concepts it invokes and in its broader vision of accountability in modern democracies. It is justification that should guide approaches to governing algorithmic decision-making, not simply explanation. The form of justification – who is justifying what to whom – should determine the appropriate form of explanation. This suggests a sharper focus on systemic accountability, rather than technical explanations of models to isolated, rights-bearing individuals. We argue that the debate about the governance of algorithmic decision-making is hampered by its excessive focus on privacy. Moving beyond the privacy frame allows us to focus on institutions rather than individuals and on decision-making systems rather than the inner workings of algorithms. Future regulatory provisions should develop mechanisms within modern democracies to secure systemic accountability over time in the governance of algorithmic decision-making systems.

Fiduciary Law and Financial Regulation (2019)

with Howell Jackson, in Oxford Handbook of Fiduciary Law (Criddle, Miller & Sitkoff eds.)

This chapter explores the application of fiduciary duties to regulated financial firms and financial services. At first blush, the need for such a chapter might strike some as surprising in that fiduciary duties and systems of financial regulation can be conceptualized as governing distinctive and non-overlapping spheres: Fiduciary duties police private activity through open-ended, judicially defined standards imposed on an ex post basis, whereas financial regulations set largely mandatory, ex ante obligations for regulated entities under supervisory systems established in legislation and implemented through expert administrative agencies. Yet, as we document in this chapter, fiduciary duties often do overlap with systems of financial regulation. In many regulatory contexts, fiduciary duties arise as a complement to, or sometimes substitute for, other mechanisms of financial regulation. Moreover, the interactions between fiduciary duties and systems of financial regulation generate a host of recurring and challenging interpretative issues.

Putting Disclosure to the Test: Toward Better Evidence-Based Policy (2015)

28 Loyola Consumer Law Review 31

Financial disclosures no longer enjoy the immunity from criticism they once had. While disclosures remain the hallmark of numerous areas of regulation, there is increasing skepticism as to whether disclosures are understood by consumers and do in fact improve consumer welfare. Debates on the virtues of disclosures overlook the process by which regulators continue to mandate disclosures. This article fills this gap by analyzing the testing of proposed disclosures, which is an increasingly popular way for regulators to establish the benefits of disclosure. If the testing methodology is misguided, then the premise on which disclosures are adopted is flawed, leaving consumers unprotected. This article focuses on two recent major testing efforts: the European Union’s testing of fund disclosure and the Consumer Financial Protection Bureau’s testing of the integrated mortgage disclosures, which will go into effect on August 1, 2015.

Working Papers

On the Fairness of Machine-Assisted Human Decisions

(https://arxiv.org/abs/2110.15310), with Bryce McLaughlin & Jann Spiess

When machine-learning algorithms are used in high-stakes decisions, we want to ensure that their deployment leads to fair and equitable outcomes. This concern has motivated a fast-growing literature that focuses on diagnosing and addressing disparities in machine predictions. However, many machine predictions are deployed to assist in decisions where a human decision-maker retains the ultimate decision authority. In this article, we therefore consider in a formal model and in a lab experiment how properties of machine predictions affect the resulting human decisions. In our formal model of statistical decision-making, we show that the inclusion of a biased human decision-maker can reverse common relationships between the structure of the algorithm and the qualities of resulting decisions. Specifically, we document that excluding information about protected groups from the prediction may fail to reduce, and may even increase, ultimate disparities. In the lab experiment, we demonstrate how predictions informed by gender-specific information can reduce average gender disparities in decisions. While our concrete theoretical results rely on specific assumptions about the data, algorithm, and decision-maker, and the experiment focuses on a particular prediction task, our findings show more broadly that any study of critical properties of complex decision systems, such as the fairness of machine-assisted human decisions, should go beyond focusing on the underlying algorithmic predictions in isolation.

Incomplete Contracts and Future Data Usage

(https://dx.doi.org/10.2139/ssrn.4350362), with Jens Frankenreiter & Dan Svirsky

Most major jurisdictions require websites to provide customers with privacy policies. For consumers, a privacy policy’s most important function is to provide them with a description of the online service provider’s current privacy practices. We argue that these policies also serve a second, often-overlooked function: they allocate residual data usage rights to online services or consumers, including the power to decide whether a service can modify its privacy practices and use consumer data in novel ways. We further argue that a central feature of the E.U.’s General Data Protection Regulation (GDPR), one of the most comprehensive and far-reaching privacy regulatory regimes, is to restrict privacy policies from allocating broad rights for future data usage to service providers. We offer a theoretical explanation for this type of regulatory intervention by adapting standard models of incomplete contracts to privacy policies. We then use the model to consider how U.S. firms reacted to the GDPR. We show that U.S. websites with E.U. exposure were more likely to change their U.S. privacy policies to drop any mention of a policy modification procedure. Among websites that do not have E.U. exposure, we see the opposite trend and discuss how to understand these changes in the context of an incomplete contracts model.

Works in Progress