What does the CC Community Think about Regulating Generative AI?

In the past year, Creative Commons, alongside other members of the Movement for a Better Internet, hosted workshops and sessions at community conferences like MozFest, RightsCon, and Wikimania to hear attendees' views on artificial intelligence (AI). In these sessions, community members raised concerns about how AI uses CC-licensed content, and discussions touched on issues like transparency, bias, fairness, and proper attribution. Some creators worry that their work is being used to train AI systems without proper credit or consent, and some have asked for clearer guidelines around public benefit and reciprocity.

In 2023, the theme of the CC Global Summit was AI and the Commons, focused on supporting better sharing in a world with artificial intelligence — sharing that is contextual, inclusive, just, equitable, reciprocal, and sustainable. A team including CC General Counsel Kat Walsh, Director of Communications & Community Nate Angell, Director of Technology Timid Robot, and Tech Ethics Consultant Shannon Hong collaborated to use alignment assembly practices to engage the Summit community in thinking through a complex question: how should Creative Commons respond to the use of CC-licensed work in AI training? The team identified concerns CC should consider in relation to works used in AI training and mapped out possible practical interventions CC might pursue to ensure a thriving commons in a world with AI.

At the Summit, we engaged participants in an Alignment Assembly using Pol.is, an open-source, real-time survey platform, for input and voting. In total, 25 people voted in the Pol.is conversation, casting 604 votes on over 33 statements, an average of 24 votes per voter. The statements included both pre-written seed statements and ideas suggested by participants.

The one thing everyone agreed on wholeheartedly: CC should NOT stay out of the AI debate. All attendees disagreed with the statement: “CC should not engage with AI or AI policy.”

Pol.is aggregates the votes and divides participants into opinion groups. Opinion groups are made up of participants who voted similarly to each other, and differently from other groups. Three opinion groups emerged from this conversation.
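To make the grouping idea concrete: Pol.is's actual pipeline is more involved (it reduces the vote matrix's dimensionality before clustering), but the core intuition, grouping participants whose vote vectors are close together, can be sketched with a small hand-rolled k-means over made-up votes. The data and the `opinion_groups` helper below are illustrative assumptions, not Pol.is code.

```python
import numpy as np

# Illustrative sketch only -- not Pol.is's real algorithm. We cluster raw
# vote vectors with a tiny k-means to show how "opinion groups" form.

def opinion_groups(votes: np.ndarray, k: int, iters: int = 20, seed: int = 0) -> np.ndarray:
    """Assign each participant (row) to one of k opinion groups.

    votes: matrix of shape (participants, statements), where
           +1 = agree, -1 = disagree, 0 = pass/unseen.
    Returns one group label per participant.
    """
    rng = np.random.default_rng(seed)
    # Start centroids at k randomly chosen participants' vote vectors.
    centroids = votes[rng.choice(len(votes), size=k, replace=False)].astype(float)
    labels = np.zeros(len(votes), dtype=int)
    for _ in range(iters):
        # Assign each participant to the nearest centroid (Euclidean distance).
        dists = np.linalg.norm(votes[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean vote vector of its current group.
        for g in range(k):
            if (labels == g).any():
                centroids[g] = votes[labels == g].mean(axis=0)
    return labels

# Six hypothetical participants, four statements: two clear opinion blocs.
votes = np.array([
    [ 1,  1, -1, -1],
    [ 1,  1, -1,  0],
    [ 1,  1,  0, -1],
    [-1, -1,  1,  1],
    [-1,  0,  1,  1],
    [ 0, -1,  1,  1],
])
labels = opinion_groups(votes, k=2)
# Participants 0-2 end up in one group, 3-5 in the other.
```

In practice, the interesting output is not just the labels but which statements each group agreed or disagreed with most distinctively, which is how groups like the three below get characterized.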

Group A: Moat Protectors

Group A comprises 16% of participants and is characterized by a desire to focus on Creative Commons’ current expertise, specifically relevant advocacy and the development of preference signaling. Unlike Groups B and C, they support noncommercial public-interest AI training. This group alone opposes additional measures like model licenses, and it strongly opposes political lobbying in the US.

Group B: AI Oversight Maximalists

Group B, the largest group with 36% of participants, strongly supports Creative Commons taking all actions possible to create oversight in AI, including new political lobbying actions or collaborations, AI teaching resources, model licenses, attribution laws, and preference signaling. This group uniquely supports political lobbying and new regulatory bodies.

Group C: Equitable Benefit Seekers

Group C, with 32% of participants, is focused on protecting traditional knowledge, preserving the ability to choose where works can be used, and prioritizing equitable benefit from AI. This group strongly supports requiring authorization for the use of traditional knowledge in AI training and sharing the benefits of profits derived from the commons. Like Group A, this group opposes political lobbying in the US.

Want to learn more about the specific takeaways? Read the full report.

We invite CC members to participate in the next alignment assembly, hosted by Open Future. Sign up and learn more here.

The post What does the CC Community Think about Regulating Generative AI? appeared first on Creative Commons.