Style, Copyright, and Generative AI Part 2: Vicarious Liability

In my last blog post, I looked at whether copyright protects artistic style, particularly in the context of generative AI (GAI) art tools like Stable Diffusion and Midjourney. In the class action litigation against Stable Diffusion and Midjourney, however, the plaintiffs are not only concerned that people can use these GAI tools to produce works that mimic the works or styles of other artists; they also argue that the tools themselves should be held liable for infringement conducted by their users. In this blog post, I look at that second issue: should GAI tools be held responsible for copyright infringement conducted by their users?

Like the rest of the world, CC has been watching generative AI and trying to understand the many complex issues raised by these amazing new tools. We are especially focused on the intersection of copyright law and generative AI. How can CC’s strategy for better sharing support the development of this technology while also respecting the work of human creators? How can we ensure AI operates in a better internet for everyone? We are exploring these issues in a series of blog posts by the CC team and invited guests that look at concerns related to AI inputs (training data), AI outputs (works created by AI tools), and the ways that people use AI. Read our overview on generative AI or see all our posts on AI.

What is vicarious liability? 

One of the claims raised in the suit against Stable Diffusion and Midjourney is that AI tools should be held vicariously liable for copyright infringement because their users can use the systems to create infringing works. Typically, legal liability arises where someone directly commits an act that harms another person in such a way that the law can hold that person responsible for their actions. This is “direct liability.” If a distracted driver hits a cyclist, the cyclist might ask the court to make the driver pay damages, because the driver is directly liable for the accident. Normally, third parties are not considered responsible for the acts of other people. So, the law would probably not hold anyone but the driver liable for the accident — not a passenger, and not even one who was helping to navigate or who had asked the driver to make the trip. And unless the car was faulty, the manufacturer of the car would not be liable either, even though the accident could not have happened if no one had made the car: the car could have been used without harming anyone, and in this case it was the driver who used it to cause harm.

“Art Meets Technology” by Stephen Wolfson for Creative Commons was generated by the Midjourney AI platform with the text prompt “an artist using a mechanical art tool to create a painting realistic 4k.” CC dedicates any rights it holds to the image to the public domain via CC0.

Under some circumstances, however, U.S. law may hold third parties liable for the harmful acts committed by other people. One such legal doctrine is “vicarious liability” — when a third party has essentially used another party to commit the harmful act. Courts in the United States have found vicarious liability in copyright law when two conditions are met: (1) the third party has the ability to supervise and control the acts of the person who committed the direct infringement, and (2) the third party has an “obvious and direct” financial benefit from the infringing activity. Notably, vicarious liability for infringement arises only where another party is directly liable for copyright infringement; if there is no direct infringement at all, a third party cannot be held responsible.

Vicarious liability requires a relationship between the third party and the person committing the direct infringement, where the third party retains some control over the other person’s actions and where the third party economically benefits from those actions — for example, an employer/employee relationship. 

In the US, the 9th Circuit examined the issue of control and technology-enabled vicarious copyright infringement in the specific context of search engines and credit card payment processors in Perfect 10 v. Amazon.com and Perfect 10 v. Visa. Perfect 10 v. Amazon.com involved Google’s image search linking to images owned by Perfect 10 on third-party websites. The court held that Google did not have the ability to control what those third-party websites were doing, even though it had control over its website index and its search results. Similarly, in Perfect 10 v. Visa, the 9th Circuit held that Visa was not liable for infringement committed by websites that hosted content belonging to Perfect 10, even though Visa processed credit card payments for those websites. The court wrote that “just like Google [in Perfect 10 v. Amazon.com], Defendants could likely take certain steps that may have the indirect effect of reducing infringing activity on the Internet at large. However, neither Google nor Defendants has any ability to directly control that activity.” In both cases, the relationship between the third party and the potential infringement was not close enough to sustain a vicarious infringement claim because of a lack of control.

Turning to the second element, obvious and direct financial benefit, the 9th Circuit has written that this element is satisfied where infringement acts as a draw for users to the service and there is a direct causal link between the infringing activities at issue and the financial benefit to the third party. In another case involving Perfect 10, Perfect 10 v. Giganews, the 9th Circuit held that the Usenet provider Giganews did not derive a direct financial benefit from users who distributed Perfect 10’s content on its servers, even though Giganews charged those users a subscription fee. Because it wasn’t clear that users were drawn to Giganews for its ability to distribute Perfect 10’s content, the court was unwilling to hold Giganews vicariously liable for the actions of its users.

Should generative AI tools be liable for the actions of their users?

How do these elements of control and financial benefit apply in the context of generative AI? Normally, no one would argue that the makers of art tools like paintbrushes, or of digital editing tools like Photoshop or Final Cut Pro, should be responsible when their users use those tools for copyright infringement. Their creators cannot directly control how people use them, and they do not clearly benefit from copyright infringement conducted with them. That seems uncontroversial. The question is more complex with GAI, however, because these platforms can deny service to users who misuse them and may derive their profits or funding from how many people use them. That said, tools like Stable Diffusion or Midjourney do not have the practical ability to prevent infringing uses. Like the defendants in Perfect 10 v. Amazon.com and Perfect 10 v. Visa, they do not directly control the ways people use their tools. While they could deny access to users who misuse the tools, it seems impossible to stop users from entering generic text prompts that ultimately recreate copyrighted works. Furthermore, as I discussed in my previous post on style and copyright, there are legitimate reasons for people to use other artists’ copyrighted works, such as fair use. So, banning users from prompting the tools with “in the style of” or “like another copyrighted work” would be overbroad, harming legitimate uses while trying to stop illegitimate ones, because we can only tell which uses are legitimate based on the facts of individual situations.

As with any other general-purpose art tool, there simply doesn’t seem to be a way to prevent all users from using GAI in ways that raise copyright concerns, short of shutting down the tools themselves and stopping all uses, legitimate or not. Compare this with automatic content filtering tools. These tools may be good at finding and removing access to copyrighted material posted online, but even the best systems produce many false positives, removing access to permissible or authorized uses along with infringing ones. In doing so, they can harm legitimate and beneficial uses that copyright law is designed to support.

Moreover, GAI tools like Stable Diffusion and Midjourney do not necessarily have an “obvious and direct financial benefit” from copyright infringement conducted by users of their platforms. In the case of Stable Diffusion and Midjourney specifically, that link doesn’t appear to exist: they make the same amount of money from users regardless of how the tools are put to use. Since neither platform advertises itself as a tool for infringement, both discourage copyright-infringing uses as part of their terms of service, and neither profits directly from copyright infringement, there does not seem to be a direct causal link between hypothetical infringing users and Stable Diffusion’s or Midjourney’s funding.

Furthermore, it is not clear that copyright infringement is a draw for users to these services. As mentioned above, simply creating works in the style of another artist does not necessarily mean those works are infringing. Moreover, there may be legitimate reasons to use artists’ names as text prompts. For example, a parody artist may need to use the name of their subject to create their parodies, and there would be a strong argument that such works are permissible under fair use in the United States.

For other GAI tools, whether there is an obvious and direct financial benefit from copyright infringement will be case-dependent. Nevertheless, the link between the ability to use a tool for copyright infringement and that ability acting as a draw for users will likely be, at best, unclear in most circumstances. And without a causal link between the financial benefit to the GAI creator and the infringement conducted by users, this element will not be met.

Ultimately, while it’s easy to understand why artists would feel threatened by generative AI tools that can mimic their artistic styles, copyright law should not become a barrier to the legitimate use and development of these tools. It may make sense for the law to step in and prevent specific instances of infringement, but copyright should not stand in the way of generative AI technologies as a whole, especially when they can help to expand and enhance human creativity. And while there may not be any perfect solutions to these issues, we need to develop norms and best practices that allow these promising new technologies to develop and thrive, while also respecting the rights and concerns of artists and the public interest in access to knowledge and culture. For now, we will watch what happens in the courts and continue to encourage dialogue and discussion in this area.