Abstract

“Does generative AI infringe copyright?” is an urgent question. It is also a difficult question for two reasons. First, “generative AI” is not just one product from one company. It is a catch-all name for a massive ecosystem of loosely related technologies, including conversational chatbots like ChatGPT, image generators like Midjourney and DALL·E, coding assistants like GitHub Copilot, music composition applications like Lyria, and video generation systems like Sora. Generative-AI models have different technical architectures and are trained on different types of data from different sources using different algorithms. Some take months and cost millions of dollars to train; other models can be spun up in a weekend. These models are made accessible to users in very different ways. Some are offered through paid online services; others are distributed as open-source artifacts, which let anyone download and modify them. Different generative-AI systems behave differently and raise different legal issues. We therefore need the right framework—one that digs deeper than the term “generative AI”—to reason precisely and clearly about the different legal issues at play.


The second problem is that copyright law is notoriously complicated, and generative-AI systems manage to touch on a great many corners of it. They raise issues of authorship, similarity, direct and indirect liability, fair use, and licensing, among much else. These issues cannot be analyzed in isolation because there are connections everywhere. Whether the output of a generative-AI system is fair use can depend on how its training datasets were assembled. Whether the creator of a generative-AI system is secondarily liable can depend on the prompts that its users supply.


In this Article, we aim to bring order to the chaos. To do so, we make two contributions. First, we introduce the generative-AI supply chain: an interconnected set of stages that transform training data (millions of pictures of cats) into generations (new, hopefully never-before-seen pictures of cats). Breaking down generative AI into these constituent stages reveals all the places at which companies and users make choices that may have legal consequences—for copyright and beyond. Second, we specifically apply the supply-chain framing to U.S. copyright law. This framing enables us to trace the effects of upstream technical designs on downstream uses, and to assess who in these complicated sociotechnical systems bears responsibility for infringement when it happens. Because we engage so closely with the technology of generative AI, we are able to shed more light on the copyright questions. We do not give definitive answers as to who should and should not be held liable. Instead, we identify the key decisions that courts will need to make as they grapple with these issues, and we point out the consequences that would likely flow from different liability regimes.