The Laboratorium (3d ser.)

A blog by James Grimmelmann

Be regular and orderly in your life, so that
you may be violent and original in your work.

Posts about law

GenLaw 2024

I’m virtually attending the GenLaw 2024 workshop today, and I will be liveblogging the presentations.

Introduction

A. Feder Cooper and Katherine Lee: Welcome!

The generative AI supply chain includes many stages, actors, and choices. But wherever there are choices, there are research questions: how do ML developers make those choices? And wherever there are choices, there are policy questions: what are the consequences for law and policy of those choices?

GenLaw is not an archival venue, but if you are interested in publishing work in this space, consider the ACM CS&Law conference, happening next in March 2025 in Munich.

Kyle Lo

Kyle Lo, Demystifying Data Curation for Language Models.

I think of data in three stages:

  1. Shopping for data, or acquiring it.
  2. Cooking your data, or transforming it.
  3. Tasting your data, or testing it.

Someone once told me, “Infinite tokens, you could just train on the whole Internet.” Scale is important. What’s the best way to get a lot of data? Our #1 choice is public APIs leading to bulk data. 80 to 100% of the data comes from web scrapers (CommonCrawl, Internet Archive, etc.). These are nonprofits that have been operating long before generative AI was a thing. A small percentage (about 1%) is user-created content like Wikipedia or ArXiv. And about 5% or less is open publishers, like PubMed. Datasets also heavily remix existing datasets.

Nobody crawls the data themselves unless they’re really big and have a lot of good programmers. You can either do deep domain-specific crawls, or a broad and wide crawl. A lot of websites require you to follow links and click buttons to get at the content. Writing the code to coax out this content—hidden behind JS—requires a lot of site-specific code. For each website, one has to ask whether going through this is worth the trouble.

It’s also getting harder to crawl. A lot more sites have robots.txt that ask not to be crawled or have terms of service restricting crawling. This makes CommonCrawl’s job harder. Especially if you’re polite, you spend a lot more energy working through a decreasing pile of sources. More data is now available only to those who pay for it. We’re not running out of training data, we’re running out of open training data, which raises serious issues of equitable access.
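As a concrete illustration of what being “polite” means mechanically, here is a minimal sketch (my own example, not Lo’s code) of the robots.txt check a crawler runs before fetching a page; the site URL and user-agent string are placeholders:

```python
import urllib.robotparser

# Placeholder site and user agent; real crawlers (e.g. CommonCrawl's CCBot)
# publish their own user-agent strings.
USER_AGENT = "ExampleResearchBot"
SITE = "https://example.com"

rp = urllib.robotparser.RobotFileParser()
rp.set_url(f"{SITE}/robots.txt")
rp.read()  # fetch and parse the site's robots.txt

url = f"{SITE}/articles/some-page.html"
if rp.can_fetch(USER_AGENT, url):
    print("allowed to crawl:", url)
else:
    print("robots.txt asks us not to crawl:", url)

# Polite crawlers also honor any requested crawl delay between fetches.
print("crawl delay:", rp.crawl_delay(USER_AGENT))
```

The growing number of sites that disallow crawling in robots.txt or their terms of service shows up directly in checks like this one.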

Moving on to transformation, the first step is to filter out low-quality pages (e.g., site navigation or r/microwavegang). You typically need to filter out sensitive data like passwords, NSFW content, and duplicates.

Next is linearization: remove header text, navigational links on pages, etc., and convert to a stream of tokens. Poor linearization can be irrecoverable. It can break up sentences and render source content incoherent.
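To make linearization concrete, here is a rough sketch (my illustration; Lo did not show code) of stripping boilerplate tags from a page and flattening what remains into a stream of text, using the BeautifulSoup library as one possible parser:

```python
from bs4 import BeautifulSoup  # assumes the beautifulsoup4 package is installed

def linearize(html: str) -> str:
    """Very rough linearization: drop boilerplate tags, keep the running text."""
    soup = BeautifulSoup(html, "html.parser")
    # Remove navigation, headers, footers, scripts, and styles before extracting text.
    for tag in soup(["nav", "header", "footer", "aside", "script", "style"]):
        tag.decompose()
    # Join the remaining text with newlines so sentences from different blocks
    # don't get fused together (one way linearization goes wrong).
    return soup.get_text(separator="\n", strip=True)

html = "<html><nav>Home | About</nav><p>The actual article text.</p></html>"
print(linearize(html))  # -> "The actual article text."
```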

Then there is filtering: cleaning up the data. Every data source needs its own pipeline! For example, for code, you might want to include Python but not Fortran. Training on user-uploaded CSVs in a code repository is usually not helpful.

Using small-model classifiers to do filtering has side effects. There are a lot of terms of service out there; if you do deduplication, you may wind up throwing out a lot of terms of service. Removing PII with low-precision classifiers can have legal consequences. Or, sometimes we see data that includes scientific text in English and pornography in Chinese—a poor classifier will misunderstand it.
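Deduplication itself can be as simple as hashing normalized text, which is also why near-identical boilerplate like terms-of-service pages tends to collapse into a single copy. A toy sketch (exact matching only; production pipelines typically add fuzzy methods like MinHash):

```python
import hashlib

def dedupe(documents):
    """Toy exact deduplication: keep the first copy of each normalized text."""
    seen, kept = set(), []
    for doc in documents:
        # Normalize whitespace and case so trivially different copies collide.
        key = hashlib.sha256(" ".join(doc.lower().split()).encode()).hexdigest()
        if key not in seen:
            seen.add(key)
            kept.append(doc)
    return kept

docs = [
    "Terms of Service: you agree to ...",
    "terms of  service: you agree to ...",   # boilerplate variant, dropped
    "An actual article about microwaves.",
]
print(len(dedupe(docs)))  # -> 2
```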

My last point: people have pushed for a safe harbor for AI research. We need something similar for open-data research. In doing open research, am I taking on too much risk?

Gabriele Mazzini

Gabriele Mazzini, Introduction to the AI Act and Generative AI.

The AI Act is a first-of-its-kind in the world. In the EU, the Commission proposes legislation and also implements it. The draft is sent to the Council, which represents governments of member states, and to the Parliament, which is directly elected. The Council and Parliament have to agree to enact legislation. Implementation is carried out via member states. The Commission can provide some executive action and some guidance.

The AI Act required some complex choices: it should be horizontal, applying to all of AI, rather than being sector-specific. But different fields do have different legal regimes (e.g. financial regulation).

The most important concept in the AI Act is its risk-based approach. The greater the risk, the stricter the rules—but there is no regulation of AI as such. It focuses on use cases, with stricter rules for riskier uses.

  • From the EU’s point of view, a few uses, such as social scoring, are unacceptable risk and prohibited.
  • The high-risk category covers about 90% of the rules in the AI Act. This includes AI systems that are safety components of physical products (e.g. robotics). It also includes some specifically listed uses, such as recruitment in employment. These AI systems are subject to compliance with specific requirements ex ante.
  • The transparency risk category requires disclosures (e.g. that you are interacting with an AI chatbot and not a human). This is where generative AI mostly comes in: that you know that content was created by AI.
  • Everything else is minimal or no risk and is not regulated.

Most generative AI systems are in the transparency category (e.g. disclosure of training data). But some systems, e.g. those trained over a certain compute threshold, are subject to stricter rules.

Martin Senftleben

Martin Senftleben, Copyright and GenAI Development – Regulatory Approaches and Challenges in the EU and Beyond

AI forces us to confront the dethroning of the human author. Copyright has long been based on the unique creativity of human authors, but now generative AI generates outputs that appear as though they were human-made.

In copyright, we give one person a monopoly right to decide what can be done with a work, but that makes follow-on innovation difficult. That was difficult enough in the past, when the follow-on innovation came from other authors (parody, pastiche, etc.). Here, the follow-on innovation comes from the machine. This makes copyright policy complex right now: it’s an attempt to reconcile fair remuneration for human authors with a successful AI sector.

The copyright answer would be licensing—on the input side, pay for each and every piece of data that goes into the data set, and on the output side, pay for outputs. If you do this, you get problems for the AI sector. You get very limited access to data, with a few large players paying for data from publishers, but others getting nothing. This produces bias in the sense that it only reflects mainstream inputs (English, but not Dutch and Slovak).

If you try to favor a vibrant AI sector, you don’t require licensing for training and you make all the outputs legal (e.g. fair use). This increases access and you have less bias on the output, but you have no remuneration for authors.

From a legal-comparative perspective, it’s fascinating to see how different legislators approach these questions. Japan and Southeast Asian countries have tried to support AI developers, e.g. broad text and data mining (TDM) exemptions as applied to AI training. In the U.S., the discussion is about fair use and there are about 25 lawsuits. Fair use opens up the copyright system immediately because users can push back.

In the E.U., forget about fair use. We have the directive on the Digital Single Market from 2019, which was written without generative AI in mind. The focus was on scientific TDM. That exception doesn’t cover commercial or even non-profit activity, only scientific research. A research organization can work with a private partner. There is also a broader TDM exemption that enables TDM unless the copyright owner has opted out using “machine-readable means” (e.g. in robots.txt).

The AI Act makes things more complex; it has copyright-related components. It confirms that reproductions for TDM are still within the scope of copyright and require an exemption. It confirms that opt-outs must be observed. What about training in other countries? If you later want to offer your trained models in the EU, you must have evidence that you trained in accordance with EU policy. This is an intended Brussels effect.

The AI Act also has transparency obligations: specifically a “sufficiently detailed summary of the content used for training.” Good luck with that one! Even knowing what’s in the datasets you’re using is a challenge. There will be an AI Office, which will set up a template. Also, is there a risk that AI trained in the EU will simply be less clever than AI trained elsewhere? That it will marginalize the EU cultural heritage?

That’s where we stand in the E.U. Codes of practice will start in May 2025 and become enforceable against AI providers in August 2025. If you seek licenses now, make sure they cover the training you have done in the past.

Panel: Data Curation and IP

Panelists: Julia Powles, Kyle Lo, Martin Senftleben, A. Feder Cooper (moderator)

Cooper: Julia, tell us about the view from Australia.

Julia: Outside the U.S., copyright law also includes moral rights, especially attribution and integrity. Three things: (1) Artists are feeling disempowered. (2) Lawyers have gotten preoccupied with where (geographically) acts are taking place. (3) Governments are in a giant game of chicken over who will insist that AI providers comply. Everyone is waiting for artists to mount challenges that they don’t have the resources to mount. Most people who are savvy about IP hate copyright. We don’t show the concern for students or others who are impacted by copyright that we show for the AI industry. Australia is being very timid, as are most countries.

Cooper: Martin, can you fill us in on moral rights?

Martin: Copyright is not just about the money. It’s about the personal touch of what we create as human beings. Moral rights:

  • To decide whether a work will be made available to the public at all.
  • Attribution, to have your name associated with the work.
  • Integrity, to decide on modifications to the work.
  • Integrity, to object to the use of the work in unwanted contexts (such as pornography).

The impact on AI training is very unclear. It’s not clear what will happen in the courts. Perhaps moral rights will let authors avoid machine training entirely. Or perhaps they will apply at the output level. It’s not clear whether these rights will fly, given the idea/expression dichotomy.

Cooper: Kyle, can you talk about copyright considerations in data curation?

Kyle: I’m worried about: (1) it’s important to develop techniques for fine-tuning, but (2) will my company let me work on projects where we hand off the control to others? Without some sort of protection for developing unlearning, we won’t have research on these techniques.

Cooper: Follow-up: you went right to memorization. Are we caring too much about memorization?

Kyle: There’s a simplistic view that I want to get away from: that it’s only regurgitation that matters. There are other harmful behaviors, such as a perfect style imitator for an author. It’s hard to form an opinion about good legislation without knowledge of what the state of the technology is, and what’s possible or not.

Julia: It feels like the wave of large models we’ve had in the last few years has really consumed our thinking about the future of AI. Especially the idea that we “need” scale and access to all copyrighted works. Before ChatGPT, the idea was that these models were too legally dangerous to release. We have impeded the release of bioscience because we have gone through the work of deciding what we want to allow. In many cases, having the large general model is not the best solution to a problem. In many cases, the promise remains unrealized.

Martin: Memorization and learning of concepts is one of the most fascinating and difficult problems. From a copyright perspective, getting knowledge about the black box is interesting and important. Cf. Matthew Sag’s “Snoopy problem.” CC licenses often come with a share-alike restriction. If it can be demonstrated that there are traces of this material in fully-trained models, those models would need to be shared under those terms.

Kyle: Do we need scale? I go back and forth on this all the time. On the one hand, I detest the idea of a general-purpose model. It’s all domain effects. That’s ML 101. On the other hand, these models are really impressive. The science-specific models are worse than GPT-4 for their use case. I don’t know why these giant proprietary models are so good. The more I deviate my methods from common practice, the less applicable my findings are. We have to hyperscale to be relevant, but I also hate it.

Cooper: How should we evaluate models?

Kyle: When I work on general-purpose models, I try to reproduce what closed models are doing. I set up evaluations to try to replicate how they think. But I haven’t even reached the point of being able to reproduce their results. Everyone’s hardware is different and training runs can go wrong in lots of ways.

When I work on smaller and more specific models, not very much has changed. The story has been to focus on the target domain, and that’s still the case. It’s careful scientific work. Maybe the only wrench is that general-purpose models can be prompted for outputs that are different than the ones they were created to focus on.

Cooper: Let’s talk about guardrails.

Martin: Right now, the copyright discussion focuses on the AI training stage. In terms of costs, this means that AI training is burdened with copyright issues, which makes training more expensive. Perhaps we should diversify legal tools by moving from input to output. Let the trainers do what they want, and we’ll put requirements on outputs and require them to create appropriate filters.

Julia: I find the argument that it’ll be too costly to respect copyright to be bunk. There are 100 countries that have to negotiate with major publishers for access to copyrighted works. There are lots of humans that we don’t make these arguments for. We should give these permissions to humans before machines. It seems obvious that we’d have impressive results at hyperscale. For 25 years, IP has debated traditional cultural knowledge. There, we have belatedly recognized the origin of this knowledge. The same goes for AI: it’s about acknowledging the source of the knowledge they are trained on.

Turning to supply chains, in addition to the copying right, there are authorizing, importing, and communicating, plus moral rights. An interesting avenue for regulation is to ask where sweatshops of people doing content moderation and data labeling take place.

Cooper: Training is resource-intensive, but so is inference.

Question: Why are we treating AI differently than biotechnology?

Julia: We have a strong physical bias. Dolly the sheep had an impact that 3D avatars didn’t. Also, it’s different power players.

Martin: Pam Samuelson has a good paper on historical antecedents for new copying technologies. Although I think that generative AI dethrones human authors and that is something new.

Kyle: AI is a proxy for other things; it doesn’t feel genuine until it’s applied.

Question: There have been a lot of talks about the power of training on synthetic data. Is copyright the right mechanism for training on synthetic data?

Kyle: It is hard to govern these approaches on the output side, you would really have to deal with it on the input side.

Martin: I hate to say this as a lawyer, but … it depends.

Question: We live in a fragmented import/export market. (E.g., the data security executive order.)

Martin: There have been predictions that territoriality will die, but so far it has persisted.

Connor Dunlop

Connor Dunlop, GPAI Governance and Oversight in the EU – And How You Might be Able to Contribute

Three topics:

  1. Role of civil society
  2. My work and how we fit in
  3. How you can contribute

AI operates within a complex system of social and economic structures. The ecosystem includes industry and more. The AI-and-society side includes government actors, and NGOs exist to support those actors. There are many types of expertise involved here. Ada Lovelace is an organization that thinks about how AI and data impact people in society. We aim for research expertise, promoting AI literacy, and building technical tools like audits and evaluations. A possible gap in the ecosystem is strategic litigation expertise.

At Ada Lovelace, we try to identify key topics early on and ground them in research. We do a lot of polling and engagement on public perspectives. And we recognize nuance and try to make sure that people know what the known unknowns are and where people disagree.

On AI governance, we have been asking about different accountability mechanisms. What mechanisms are available, how are they employed in the real world, do they work, and can they be reflected in standards, law, or policy?

Sabrina Küspert

Sabrina Küspert, Implementing the AI Act

The AI Act follows a risk-based approach. (Review of risk-based approach pyramid.) It adopts harmonized rules across all 27 member states. The idea is that if you create trust, you also create excellence. If a provider complies, it gets access to the entire EU market.

For general-purpose models, the rules are transparency obligations. Anyone who wants to build on a general-purpose model should be able to understand its capabilities and what it is based on. Providers must mitigate systemic risks with evaluation, mitigation, cybersecurity, incident reporting, and corrective measures.

The EU AI Office is part of the Commission and the center of AI expertise for the EU. It will facilitate a process to detail the rules around transparency, copyright, risk assessment, and risk mitigation via codes of practice. Also building enforcement structures. It will have technical capacity and regulatory powers (e.g. to compel assessments).

Finally, we’re facilitating international cooperation on AI. We’re working with the U.S. AI Safety Office, building an international network among key partners, and engaged in bilateral and multilateral activities.

Spotlight Poster Presentations

Adversarial Perturbations Cannot Reliably Protect Artists From Generative AI (Robert Hönig, Javier Rando, Nicholas Carlini, Florian Tramer): We investigated methods that artists can use to prevent AI training on their work, and found that these protections can often be disabled. These tools (e.g. Glaze) work by adding adversarial perturbations to an artist’s images in ways that are unnoticeable to humans but degrade models trained on them. You can use an off-the-shelf HuggingFace model to remove the perturbations and recover the original images. In some cases, adding Gaussian noise or using a different fine-tuning tool also suffices to disable the protections.

Training Foundation Models as Data Compression: On Information, Model Weights and Copyright Law (Giorgio Franceschelli, Claudia Cevenini, Mirco Musolesi): Our motivation is the knowledge that models tend to memorize and regurgitate. We observe that model weights are smaller than the training data, so there is an analogy that training is compression. Given this, is a model a copy or derivative work of training data?

Machine Unlearning Fails to Remove Data Poisoning Attacks (Martin Pawelczyk, Ayush Sekhari, Jimmy Z Di, Yiwei Lu, Gautam Kamath, Seth Neel): Real-world motivations for unlearning are to remove data due to revoked consent or to unlearn bad/adversarial data that impact performance. Typical implementations use likelihood ratio tests (LRTs) that involve hundreds of shadow models. We put poisons in part of the training data; then we apply an unlearning algorithm to our poisoned model and then ask whether the algorithm removed the effects of the poison. We add Gaussian poisoning to existing indiscriminate and targeted poisoning methods. Unlearning can be evaluated by measuring correlation between our Gaussians and the output model. We observe that the state-of-the-art methods we tried weren’t really successful at removing Gaussian poison and no method performs well across both vision and language tasks.

Ordering Model Deletion (Daniel Wilf-Townsend): Model deletion (a.k.a. model destruction or algorithmic disgorgement) is a remedial tool that courts and agencies can use that requires discontinuing use of a model trained on unlawfully used data. Why do it? First, in a privacy context, the inferences are what you care about, so just deleting the underlying data isn’t sufficient to prevent the harm. Second, it provides increased deterrence. But there are problems, including proportionality. Think of OpenAI vs. a blog post: if GPT-4 trains on a single blog post of mine, then I could force deletion, which is massively disproportionate to the harm. It could be unfair, or create massive chilling effects. Model deletion is an equitable remedy, and equitable doctrines should be used to enforce proportionality and tie it to culpability.

Ignore Safety Directions. Violate the CFAA? (Ram Shankar Siva Kumar, Kendra Albert, Jonathon Penney): We explore the legal aspects of prompt injection attacks. We define prompt injection as inputting data into an LLM that causes it to behave in ways contrary to the model provider’s intentions. There are legal and cybersecurity risks, including under the CFAA, and a history of government and companies targeting researchers and white-hat hackers. Our paper attempts to show the complexity of applying the CFAA to generative-AI systems. One takeaway: whether prompt injection violates the CFAA depends on many factors. Sometimes it does, but there are uncertainties. Another takeaway: we need more clarity from courts and from scholars and researchers. Thus, we need a safe harbor for security researchers.

Fantastic Copyrighted Beasts and How (Not) to Generate Them (Luxi He, Yangsibo Huang, Weijia Shi, Tinghao Xie, Haotian Liu, Yue Wang, Luke Zettlemoyer, Chiyuan Zhang, Danqi Chen, Peter Henderson): We have all likely seen models that generate copyrighted characters—and models that refuse to generate them. It turns out that generic keywords like “Italian plumber” suffice to elicit them. There was a recent Chinese case holding a service provider liable for generations of Ultraman. Our work introduces a copyrighted-characters reproduction benchmark. We also develop an evaluation suite that measures both consistency with user intent and avoidance of copyrighted characters. We applied this suite to various models, and propose methods to avoid copyrighted characters. We find that prompt rewriting is not fully effective on its own. But we find that using copyrighted character names as negative prompts increases effectiveness from about 50% to about 85%.
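For a sense of what the negative-prompt mitigation looks like in practice, here is a sketch using the Hugging Face diffusers library; this is my illustration of the idea the authors report, not their evaluation suite, and the model ID is just an example:

```python
# Illustration only: steer an image model away from a copyrighted character by
# passing the character's name as a negative prompt.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "an Italian plumber jumping over a turtle, video game style"
# Even though the prompt only hints at the character, naming it as a negative
# prompt pushes generations away from the protected design.
image = pipe(prompt, negative_prompt="Mario, Nintendo").images[0]
image.save("plumber.png")
```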

Matthew Jagielski and Katja Filippova

Matthew Jagielski and Katja Filippova, Machine Unlearning: [JG: I missed this due to a livestream hiccup, but will go back and fill it in.]

Kimberly Mai

Kimberly Mai, Data Protection in the Era of Generative AI

Under the GDPR, personal data is “any information relating to an identified or identifiable” person. That includes hash numbers of people in an experimental study, or license plate numbers. It depends on how easy it is to identify someone. The UK AI framework has principles that already map to data protection law.

Our view is that data protection law applies at every stage of the AI lifecycle. This makes the UK ICO a key regulator in the AI space. AI is a key area of focus for us. Generative AI raises some significant issues, and the ICO has launched a consultation.

What does “accuracy” mean in a generative-AI context? This isn’t a statistical notion; instead, data must be correct, not misleading, and where necessary up-to-date. In a creative context, that might not require factual accuracy. At the output level, a hallucinating model that produces incorrect outputs about a person might be inaccurate. We think this might require labeling, attribution, etc., but I am eager to hear your thoughts.

Now, for individual rights. We believe that rights to be informed and to access are crucial here. On the remaining four, it’s a more difficult picture. It’s very hard to unlearn, which makes the right to erasure quite difficult to apply. We want to hear from you how machine learning applies to data protection concepts. We will be releasing something on controllership shortly, and please share your thoughts with us. We can also provide advice on deploying systems. (We also welcome non-U.K. input.)

Herbie Bradley

Herbie Bradley, Technical AI Governance

Technical AI governance is technical analysis and tools for supporting effective AI governance. There are problems around data, compute, models, and user interaction. For example, is hardware-enabled compute governance feasible? Or, how should we think about how often to evaluate fine-tuned models for safety? What are best practices for language model benchmarking? And, looking to the future, how likely is it that certain research directions will pan out? (Examples include unlearning, watermarking, differential privacy, etc.)

Here is another example: risk thresholds. Can we translate benchmark results into assessments that are useful to policymakers? The problems: any such translation is dependent on a particular benchmark, it has to have a qualitative element, and knowledge and best practices shift rapidly. Any implementation will likely be iterative and involve conversations with policy experts and technical researchers.

It is useful to have technical capacity within governments. First, to carry out the actual technical work of implementing a policy or conducting safety testing. Second, to provide advisory capacity, which is often much more useful.

Takeaways. First, if you’re a researcher, consider joining government or a think tank that supports government. Second, if you’re a policy maker, consider uncertainties that could be answered by technical capacity.

Panel: Privacy and Data Policy

Sabrina Ross, Herbie Bradley, Niloofar Mireshgallah, Matthew Jagielski, Paul Ohm (moderator), Katherine Lee (moderator)

Paul: We have this struggle in policy to come up with rules and standards that can be measured. What do we think about Herbie’s call for metrics?

Sabrina: We are at the beginning; the conversation is being led by discussions around safety. How do you measure data minimization, for example: comparing utility loss to data reduction. I’m excited by the trend.

Niloofar: There are multiple ways. Differential privacy (DP) was a theory concept, used for the census, and now is treated as a good tool. But with LLMs, it becomes ambiguous again. Tools can work in one place but not in another. Events like this help technical people understand what’s missing. I learned that most NLP people think of copyright as verbatim copying, but that’s not the only form of copying.

Paul: I worry that if we lean too hard into evaluation, we’ll lose values. What are we missing here?

Matthew: In the DP community, we have our clear epsilon values, and then we have our vibes, which aren’t measured but are built into the algorithm. The data minimization paper has a lot of intuitive value.

Herbie: Industry, academia, and government have different incentives and needs. Academia may like evaluations that are easily measurable and cheap. Industry may like it for marketing, or reducing liability risk. Government may want it to be robust or widely used, or relatively cheap.

Niloofar: It depends on what’s considered valuable. It used to be that data quality wasn’t valued. A few years ago at ICML you’d only see theory papers; now there is more applied work.

Paul: You used this word “publish”: I thought you just uploaded things to ArXiv and moved on.

Katherine: Let’s talk about unlearning. Can we talk about evaluations that might be useful, and about how unlearning might fit into content moderation?

Matthew: To evaluate unlearning, you need to say something about a counterfactual world. State-of-the-art techniques include things like “train your model a thousand times,” which is impractical for big models. There are also provable techniques; evaluation there looks much different. For content moderation, it’s unclear that this is an intervention on data and not alignment. If you have a specific goal, you can measure that directly.

Herbie: With these techniques, it’s very easy to target adjacent knowledge, which isn’t relevant and isn’t what you want to target. Often, various pieces of PII are available on the Internet, and the system could locate them even if information on them has been removed from the model itself.

Paul: Could we map the right to be forgotten onto unlearning?

Sophia: There are lots of considerations here (e.g. public figures versus private ones), so I don’t see a universal application.

Paul: Maybe what we want is a good output filter.

Niloofar: Even if you’re able to verify deletion, you may still be leaking information. There are difficult questions about prospective vs. retrospective activity. It’s a hot potato situation: people put out papers then other people show they don’t work. We could use more systematic frameworks.

Sophia: I prefer to connect the available techniques to the goals we’re trying to achieve.

Katherine: This is a fun time to bring up the copyright/privacy parallel. People talk about the DMCA takedown process, which isn’t quite applicable to generative AI but people do sometimes wonder about it.

Niloofar: I see that NLP people have a memorization idea, so they write a paper, and they need an application, so they look to privacy or copyright. They appeal to these two and put them together. The underlying latent is the same, but in copyright you can license it. I feel like privacy is more flexible, and you have complex inferences. In copyright, you have idea, expression, and those have different meanings.

Matthew: It’s interesting to see what changes in versions of a model. You are weighing the threat of a passive adversary versus one who is really going to try. For computer scientists, this idea of a weak vs. strong adversary is radioactive.

Paul: My Myth of the Superuser paper was about how laws are written to deal with powerful hackers but then used against ordinary users. Licensing is something you can do for copyright risk; in privacy, we talk about consent. Strategically, are they the same?

Sophia: For a long time, consent was seen as a gold standard. More recently, we’ve started to consider consent fatigue. For some uses it’s helpful, for others it’s not.

Paul: The TDM exception is interesting. The conventional wisdom in privacy was that those dumb American rules were opt-out. In copyright, the tables have turned.

Matthew: Licensing and consent change your distribution. Some people are more likely to opt in or opt out.

Herbie: People don’t have a good sense of how the qualities of licensable data differ from what is available on the Internet.

Niloofar: There is a dataset of people chatting with ChatGPT who affirmatively consented. But people share a lot of their private data through this, and become oblivious to what they have put in the model. You’re often sharing information about other people too. A journalist put their conversation with a private source into the chat!

Paul: Especially for junior grad students, the fact that every jurisdiction is doing this alone might be confusing. Why is that?

Herbie: I.e., why is there no international treaty?

Paul: Or even talk more and harmonize?

Herbie: We do. The Biden executive order influenced the E.U.’s thinking. But a lot of it comes down to cultural values and how different communities think.

Paul: Can you compare the U.K. to the E.U.?

Herbie: We’re watching the AI Act closely. I quite like what we’re doing.

Sophia: We have to consider the incentives that regulators are balancing. But in some ways, I think there is a ton of similarity. Singapore and the E.U. both have data minimization.

Herbie: There are significant differences between the thinking of different government systems in terms of how up-to-date they are.

Paul: This is where I explain to my horrified friends that the FTC has 45 employees working on this. There is a real resource imbalance.

Matthew: The point about shared values is why junior grad students shouldn’t be disheartened. The data minimization paper pulled out things that can be technicalized.

Niloofar: I can speak from the side of when I was a young grad student. When I came here, I was surprised by copyright. It’s always easier to build on legacy than to create something new.

Paul: None of you signed onto the cynical “It’s all trade war all the way down.” On our side of the pond, one story was that the rise of Mistral changed the politics considerably. If true, Mistral is the best thing ever to happen to Silicon Valley, because it tamps down protectionism. Or maybe this is the American who has no idea what he’s talking about.

Katherine: We’ve talked copyright, privacy, and safety. What else should we think about as we go off into the world?

Sophia: The problem is the organizing structure of the work to be done. Is fairness a safety problem, a privacy problem, or an inclusion problem? We’ve seen how some conceptions of data protection can impede fairness conversations.

Paul: I am genuinely curious. Are things hardening so much that you’ll find yourself in a group that people say, “We do copyright here; toxicity is down the hall?” (I think this would be bad.)

Herbie: Right now, academics are incentivized to talk about the general interface.

Paul: Has anyone said “antitrust” today? Right now, there is a quiet struggle between the antitrust Lina Khan/Tim Wu camp and all the other information harms. There are some natural monopoly arguments when it comes to large models.

Niloofar: At least on the academic side, people who work in theory do both privacy and fairness. When people who work in NLP started to care more, then there started to be more division. So toxicity/ethics people are a little separate. When you say “safety,” it’s mostly about jailbreaking.

Paul: Maybe these are different techniques for different problems? Let me give you a thought about the First Amendment. Justice Kagan gets five justices to agree that social media is core protected speech. Lots of American scholars think this will also apply to large language models. This Supreme Court is putting the First Amendment on the rise.

Matthew: I think alignment is the big technique overlap I’m seeing right now. But when I interact with the privacy community, people who do that are privacy people.

Katherine: That’s partly because those are the tools that we have.

Question: If we had unlearning, would that be okay with GDPR?

Question: If we go forward 2-3 years and there are some problems and clear beliefs about how they should be regulated, then how will this be enforced, and what skills do these people have?

Niloofar: On consent, I don’t know what we do about children.

Paul: In the U.S., we don’t consider children to be people.

Niloofar: I don’t know what this solution would look like.

Kimberly: In the U.K., if you’re over 13 you can consent. GDPR has protections for children. You have to consider risks and harms to children when you are designing under data protection by design.

Herbie: If you have highly adversarial users, unlearning might not be sufficient.

Sabrina: We’re already computer scientists working with economists. The more we can bring to bear, the more successful we’ll be.

Paul: I’ve spent my career watching agencies bring in technologists. Some succeed, some fail. Europe has had success with investing a lot. But the state of Oregon will hire half a technologist and pay them 30% of what they would make. Europe understands that you have to write a big check, create a team, and plan for managing them.

Matthew: As an Oregonian, I’m glad Oregon was mentioned. I wanted to mention that people want unlearning to do some things that it is suited for, and there are some goals that really are about data management. (Unless we start calling unlearning techniques “alignment.”)


And that’s it!

Postmodern Community Standards

This is a Jotwell-style review of Kendra Albert, Imagine a Community: Obscenity’s History and Moderating Speech Online, 25 Yale Journal of Law and Technology Special Issue 59 (2023). I’m a Jotwell reviewer, but I am conflicted out of writing about Albert’s essay there because I co-authored a short piece with them last year. Nonetheless, I enjoyed Imagine a Community so much that I decided to write a review anyway, and post it here.

One of the great non-barking dogs in Internet law is obscenity. The first truly major case in the field was an obscenity case: Reno v. ACLU, 521 U.S. 844 (1997), which held that the harmful-to-minors provisions of the federal Communications Decency Act were unconstitutional because they prevented adults from receiving non-obscene speech online. Several additional Supreme Court cases followed over the next few years, as well as numerous lower-court cases, mostly rejecting various attempts to redraft definitions and prohibitions in a way that would survive constitutional scrutiny.

But then … silence. From roughly the mid-2000s on, very few obscenity cases have generated new law. As a casebook editor, I even started deleting material – this never happens – simply because there was nothing new to teach. This absence was a nagging question in the back of my mind. But now, thanks to Kendra Albert’s Imagine a Community, I have the answer, perfectly obvious now that they have laid it out so clearly. The courts did not give up on obscenity, but they gave up on obscenity law.

Imagine a Community is a cogent exploration of the strange career of community standards in obscenity law. Albert shows that although the “contemporary community standards” test was invented to provide doctrinal clarity, it has instead been used for doctrinal evasion and obfuscation. Half history and half analysis, their essay is an outstanding example of a recent wave of scholarship on sex, law, and the Internet, from scholars like Albert themself, Andrew Gilden, I. India Thusi, and others.

The historical story proceeds as a five-act tragedy, in which the Supreme Court is brought low by its hubris. In the first act, until the middle of the twentieth century, obscenity law varied widely from state to state and case to case. Then, in the second act, the Warren Court constitutionalized the law of obscenity, holding that whether a work is protected by the First Amendment depends on whether it “appeals to prurient interest” as measured by “contemporary community standards.” Roth v. United States, 354 U.S. 476, 489 (1957).

This test created two interrelated problems for the Supreme Court. First, it was profoundly ambiguous. Were community standards geographical or temporal, local or national? And second, it required the courts to decide a never-ending stream of obscenity cases. It proved immensely difficult to articulate how works did – or did not – comport with community standards, leading to embarrassments of reasoned explication like Potter Stewart’s “I know it when I see it” in Jacobellis v. Ohio, 378 U.S. 184, 197 (1964).

The Supreme Court was increasingly uncomfortable with these cases, but it was also unwilling to deconstitutionalize obscenity or to abandon the community-standards test. Instead, in Miller v. California, 413 U.S. 15 (1973), it threw up its hands and turned community standards into a factual question for the jury. As Albert explains, “The local community standard won because it was not possible to imagine what a national standard would be.”

The historian S.F.C. Milsom blamed “the miserable history of crime in England” on the “blankness of the general verdict” (Historical Foundations of the Common Law pp. 403, 413). There could be no substantive legal development unless judges engaged with the facts of individual cases, but the jury in effect hid all of the relevant facts behind a simple “guilty” or “not guilty.”

Albert shows that something similar happened in obscenity law’s third act. The jury’s verdict established that the defendant’s material did or did not appeal to the prurient interest according to contemporary standards. But it did so without ever saying out loud what those standards were. There were still obscenity prosecutions, and there were still obscenity convictions, but in a crucial sense there was much less obscenity law.

In the fourth act, the Internet unsettled a key assumption underpinning the theory that obscenity was a question of local community standards: that every communication had a unique location. The Internet created new kinds of online communities, but it also dissolved the informational boundaries of physical ones. Was a website published everywhere, and thus subject to every township, village, and borough’s standards? Or was a national rule now required? In the 2000s, courts wrestled inconclusively with the question of “Who gets to decide what is too risqué for the Internet?”

And then, Albert demonstrates, in the tragedy’s fifth and deeply ironic act, prosecutors gave up the fight. They have largely avoided bringing adult Internet obscenity cases, focusing instead on child sexual abuse material cases and on cases involving “local businesses where the question of what the appropriate community was much less fraught.” The community-standards timbers have rotted, but no one has paid it much attention because they are not bearing any weight.

This history is a springboard for two perceptive closing sections. First, Albert shows that the community-standards-based obscenity test is extremely hard to justify on its own terms, when measured against contemporary First Amendment standards. It has endured not because it is correct but because it is useful. “The ‘community’ allows courts to avoid the reality that obscenity is a First Amendment doctrine designed to do exactly what justices have decried in other contexts – have the state decide ‘good speech’ from ‘bad speech’ based on preference for certain speakers and messages.” Once you see the point put this way, it is obvious – and it is also obvious that this is the only way this story could have ever ended.

Second – and this is the part that makes this essay truly next-level – Albert describes the tragedy’s farcical coda. The void created by this judicial retreat has been filled by private actors. Social-media platforms, payment providers, and other online intermediaries have developed content-moderation rules on sexually explicit material. These rules sometimes mirror the vestigial conceptual architecture of obscenity law, but often they are simply made up. Doctrine abhors a vacuum:

Pornography producers and porn platforms received lists of allowed and disallowed words and content – from “twink” to “golden showers,” to how many fingers a performer might use in a penetration scene. Rules against bodily fluids other than semen, even the appearance of intoxication, or certain kinds of suggestions of non-consent (such as hypnosis) are common.

One irony of this shift from public to private is that it has done what the courts have been unwilling to: create a genuinely national (sometimes even international) set of rules. Another is that these new “community standards” – a term used by social-media platforms apparently without irony – are applied without any real sensitivity to the actual standards of actual community members. They are simply the diktats of powerful platforms.

Perhaps none of this will matter. Albert suggests that the Supreme Court should perhaps “reconsider[] whether obscenity should be outside the reach of the First Amendment altogether.” Maybe it will, and maybe the legal system will catch up to the Avenue Q slogan: “The Internet is for porn.”

But there is another and darker possibility. The law of public sexuality in the United States has taken a turn over the last few years. Conservative legislators and prosecutors have claimed with a straight face that drag shows, queer romances, and trans bodies are inherently obscene. A new wave of age-verification laws sharply restrict what children are allowed to read on the Internet, and force adults to undergo new levels of surveillance when they go online. It is unsettlingly possible that the Supreme Court may be about to speedrun its obscenity jurisprudence, only backwards and in heels.

But sufficient unto the day is the evil thereof. For now, Imagine a Community is a model for what a law-review essay should be: concise, elegant, and illuminating.

GenLaw DC Workshop

I’m at the GenLaw workshop on Evaluating Generative AI Systems: the Good, the Bad, and the Hype today, and I will be liveblogging the presentations.

Introduction

Hoda Heidari: Welcome! Today’s event is sponsored by the K&L Gates Endowment at CMU, and presented by a team from GenLaw, CDT, and the Georgetown ITLP.

Katherine Lee: It feels like we’re at an inflection point. There are lots of models, and they’re being evaluated against each other. There’s also a major policy push. There’s the Biden executive order, privacy legislation, the generative-AI disclosure bill, etc.

All of these require the ability to balance capabilities and risks. The buzzword today is evaluations. Today’s event is about what evaluations are: ways of measuring generative-AI systems. Evaluations are proxies for things we care about, like bias and fairness. These proxies are limited, and we need many of them. And there are things like justice that we can’t even hope to measure. Today’s four specific topics will explore the tools we have and their limits.

A. Feder Cooper: Here is a concrete example of the challenges. One popular benchmark is MMLU. It’s advertised as testing whether models “possess extensive world knowledge and problem solving ability.” It includes multiple-choice questions from tests found online, standardized tests of mathematics, history, computer science, and more.

But evaluations are surprisingly brittle; CS programs don’t always rely on the GRE. In addition, it’s not clear what the benchmark measures. In the last week, MMLU has come under scrutiny. It turns out that if you reorder the questions as you give them to a language model, you get wide variations in overall scores.
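One way to probe that kind of ordering sensitivity is sketched below; this is my own illustration, under the assumption that evaluation uses a few-shot prompt, and `query_model` is a hypothetical stand-in for the model under test:

```python
import random

def query_model(prompt: str) -> str:
    raise NotImplementedError("stand-in for a call to the model being evaluated")

def accuracy(exemplars, test_items, seed):
    rng = random.Random(seed)
    shots = exemplars[:]
    rng.shuffle(shots)  # same exemplar questions, presented in a different order
    preamble = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in shots)
    correct = 0
    for question, answer in test_items:
        prediction = query_model(f"{preamble}\n\nQ: {question}\nA:").strip()
        correct += (prediction == answer)
    return correct / len(test_items)

# A wide spread across seeds would be the brittleness described above:
# print([accuracy(exemplars, test_items, seed) for seed in range(5)])
```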

This gets at another set of questions about generative-AI systems. MMLU benchmarks a model, but systems are much more than just models. Most people interact with a deployed system that wraps the model in a program with a user interface, filters, etc. There are numerous levels of indirection between the user’s engagement and the model itself.

And even the system itself is embedded in a much larger supply chain, from data through training to alignment to generation. There are numerous stages, each of which may be carried out by different actors doing different portions. We have just started to reason about all of these different actors and how they interact with each other. These policy questions are not just about technology, they’re about the actors and interactions.

Alexandra Givens: CDT’s work involves making sure that policy interventions are grounded in a solid understanding of how technologies work. Today’s focus is on how we can evaluate systems, in four concrete areas:

  1. Training-data attribution
  2. Privacy
  3. Trust and safety
  4. Data provenance and watermarks

We also have a number of representatives from government giving presentations on their ongoing work.

Our goals for today are to provide insights through cross-disciplinary, cross-community engagement. In addition, we want to pose concrete questions for research and policy, and help people find future collaborators.

Paul Ohm: Some of you may remember the first GenLaw workshop; we want to bring that same energy today. Here at Georgetown Law, we take seriously the idea that we’re down the street from the Capitol and want to be engaged. We have a motto, “Justice is the end, law is but the means.” I encourage you to bring that spirit to today’s workshop. This is about evaluations in service of justice and the other values we care about.

Zack Lipton

Zack Lipton: “Why Evaluations are Hard”

One goal of evaluations is simply quality: is this system fit for purpose? One question we can ask is, what is different about evaluations in the generative-AI era? An important distinction is whether a system does everything for everyone or whether it has a more pinned-down use case with a more laser-targeted notion of quality.

Classic discriminative learning involves a prediction or recognition problem (or a problem that you can twist into one). For example, I want to give doctors guidance on whether to discharge a patient, so I predict mortality.

Generically, I have some input and I want to classify it. I collect a large dataset of input-output pairs and generate a model, the learned pattern. Then I can test how well the model works on some data we didn’t train on. The paradigm of machine learning that came to dominate is that I have a held-out test set, and I measure how well the model works on that test set.

So when we evaluate a discriminative model, there are only a few kinds of errors. For a yes-no classifier, those are false positives and false negatives. For a regression problem, that means over- and under-estimates. We might look into how well the model performs on different strata, either to explore how it works, or to check for disparity on salient demographic groups in the population. And then we are concerned whether the model is valid at all out of the distribution it was trained on–e.g., at a different hospital, or in the wild.
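As a minimal sketch of these error types and the per-stratum checks (synthetic labels and groups, using scikit-learn):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # ground-truth labels
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 0])   # classifier predictions
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])  # e.g. two hospitals

# For a yes/no classifier, the only error types are false positives and negatives.
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"false positives={fp}, false negatives={fn}")

# Check for disparities by evaluating each stratum separately.
for g in ("a", "b"):
    mask = group == g
    acc = (y_true[mask] == y_pred[mask]).mean()
    print(f"group {g}: accuracy={acc:.2f}")
```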

[editor crash, lost some text]

Now we have general-purpose systems like ChatGPT which are provided without a specific task. They’re also being provided to others as a platform for building their own tools. Now language models are not just language models. Their job is not just to accurately predict the next word but to perform other tasks.

We have some ways of assessing quality, but there is no ground truth we can use to evaluate against. There is no single dataset that represents the domain we care about. Evaluation turns to sweeping benchmarks; the tasks we built in NLP before now function as a giant battery of tests we can administer to probe the “general” capabilities of ML models. And the discourse shifts towards trying to predict catastrophic outcomes.

At the same time, these generic capabilities provide a toolkit for building stronger domain-specific technologies. Now people in the marketplace are shipping products without any data. There is a floor level of performance they have with no data at all. Generative AI has opened up new domains, but with huge evaluation challenges.

Right now, for example, health-care systems are underwater, and the clerical burden of documenting all of this care is immense: two hours of form-filling for every one hour of patient care. At Abridge, we’re building a generative-AI system for physicians to document clinical notes. So how do we evaluate it? There’s no gold standard, and we can’t use simple tricks. The problem isn’t red-teaming; it’s more about consistently high-quality documentation in a statistical sense. The possible errors are completely open-ended, and we don’t have a complete account of our goals.

Finally, evaluation takes place at numerous times. Before deployment, we can look at automated metrics–but at the end of the day, no evaluation will capture everything we care about. A lot of the innovation happens when we have humans in the loop to give feedback on notes. We use human spot checks, we have relevant experts judging notes, across specialties and populations, and also tagging errors with particular categories. We do testing during rollout, using staged releases and A/B tests. There are also dynamic feedback channels from clinician feedback (star ratings, free-form text, edits to notes, etc.). There are lots of new challenges–the domain doesn’t stand still either. And finally, there are regulatory challenges.

Emily Lanza

Emily Lanza: “Update from the Copyright Office”

The Copyright Office is part of the legislative branch, providing advice to Congress and the courts. It also administers the copyright registration system.

The Copyright Office has weighed in on the copyrightability of computer-generated works as far back as 1965. Recently, these issues have become far less theoretical. We have asked applicants to disclaim copyright in more-than-de-minimis AI-generated portions of their works. In August, we published a notice of inquiry and received more than 10,000 comments. And a human has read every single one of those comments.

Three main topics:

First, AI outputs that imitate human artists. These are issues like the Drake/Weeknd deepfake. Copyright law doesn’t cover these, but some state rights do. We have asked whether there should be a federal AI law.

Second, copyrightability of outputs. We have developed standards for examination. The first generative-AI application came five years ago, filed entirely as a test case. We refused registration on the ground that human authorship is required; the D.C. District Court agreed and the case is on appeal. Other cases present less clear-cut facts. Our examiners have issued over 200 registrations with appropriate disclaimers, but we have also refused registration in three high-profile cases.

The central question in these more complex scenarios is when and how a human can exert control over the creativity developed by the AI system. We continue to draw these lines on a case-by-case basis, and at some point the courts will weigh in as well.

Third, the use of human works to train AIs. There are 20 lawsuits in U.S. courts. The fair use analysis is complex, including precedents such as Google Books and Warhol v. Goldsmith. We have asked follow-up questions about consent and compensation. Can it be done through licensing, or through collective licensing, or would a new form of compulsory licensing be desirable? Can copyright owners opt in or out? How would it work?

Relatedly, the study will consider how to allocate liability between developers, operators, and users. Our goal is balance. We want to promote the development of this exciting technology, while continuing to allow human creators to thrive.

We also need to be aware of developments elsewhere. Our study asks whether approaches in any other countries should be adopted or avoided in the United States.

We are not the only ones evaluating this. Congress has been busy, too, holding hearings as recently as last week. The Biden Administration issued an executive order in October. Other agencies are involved, including the FTC (prohibition on impersonation through AI-enabled deepfakes), and FEC (AI in political ads).

We plan to issue a report. The first section will focus on digital replicas and will be published this spring. The second section will be published this summer and will deal with the copyrightability of outputs. Later sections will deal with training and more. We aim to publish all of it by the end of the fiscal year, September 30. We will also revise the Compendium, and there will be a study by economists about copyright and generative AI.

Sejal Amin

Sejal Amin (CTO at Shutterstock): “Ensuring TRUST: Programs for Royalties in the Age of AI”

Shutterstock was founded in 2003 and has since become an immense marketplace for images, video, music, 3D, design tools, etc. It has been investing in AI capabilities as well. Showing images generated by Shutterstock’s AI tools. Not confined to any genre or style. Shutterstock’s framework is TRUST. I’m going to focus today on the R, Royalties.

Today’s AI economy is not really contributing to the creators who enable it. Unregulated crawling helps a single beneficiary. In 2023, Shutterstock launched a contributor fund that provides ongoing royalties tied to licensing for newly generated assets.

The current model provides contributors an equal share per image for the contributions that are used in training Shutterstock’s models. There is also compensation by similarity, or by popularity. These models have problems. Popularity is not a proxy for quality; it leads to a rich-get-richer phenomenon. And similarity is also flawed without a comprehensive understanding of the world.

For us, quality is a high priority. High-quality content is an essential input into the training process. How could we measure that? Of course, the word quality is nebulous. I’m going to focus on:

  • Aesthetic excellence
  • Safety
  • Diverse representation

A shared success model will need to understand customer demand.

Aesthetic excellence depends on technical proficiency (lighting, color balance) and visual appeal. Shutterstock screens materials for safety through both automated and human review. We have techniques to prevent generation of unsafe concepts. Diversity is important to all of us. We balance and improve representation of different genders, ethnicities, and orientations. Our fund attempts to support historically excluded creator groups. Our goal is shared success.

David Bau

David Bau: “Unlearning from Generative AI Models”

Unlearning asks: “Can I make my neural network forget something it learned?”

In training, a dataset with billions of inputs is run through training, and the resulting model can generate potentially infinite outputs. The network’s job is to generalize, not memorize. If you prompt Stable Diffusion for “astronaut riding a horse on the moon,” there is no such image in the training set; the model generalizes to create one.

SD is trained on about 100 TB of data, but the SD model is only about 4GB of network weights. We intentionally make these nets too small to memorize everything. That’s why they must generalize.

But still, sometimes a network does memorize. Carlini et al. showed that there is substantial memorization in some LLMs, and the New York Times found out that there is memorization in ChatGPT.

In a search engine, takedowns are easy to implement because you know “where” the information is. In a neural network, however, it’s very hard to localize where the information is.

There are two kinds of things you might want to unlearn: first, verbatim regurgitation; second, unwanted generalized knowledge (an artist’s style, undesired concepts like nudity or hate, or dangerous knowledge like hacking techniques).

Three approaches to unlearning:

  1. Remove from the training data and retrain. But this is extremely expensive.
  2. Modify the model. Fine-tuning is hard because we don’t know where to erase. There are some “undo” ideas, and methods that target specific concepts.
  3. Filter outputs. Add a ContentID-like system to remove some outputs (a rough sketch follows below). This is a practical approach for copyright compliance, but it’s hard to filter generalized knowledge, and the filter can simply be removed from open-source models.
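
As a rough sketch of the output-filtering approach, here is a toy filter that suppresses generations sharing a long verbatim span with a blocklist of protected text. Everything here (the word-level tokenization, the 8-gram fingerprints, the sample blocklist entry) is my own illustration of the idea, not a system described in the talk.

```python
# Toy sketch of output filtering (approach 3): fingerprint protected works as
# sets of 8-grams and suppress any generation that reproduces one verbatim.
import re

def tokenize(text):
    """Crude normalization: lowercase and keep word characters only."""
    return re.findall(r"[a-z0-9']+", text.lower())

def ngrams(tokens, n=8):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def build_blocklist(protected_texts, n=8):
    index = set()
    for text in protected_texts:
        index |= ngrams(tokenize(text), n)
    return index

def filter_output(generated_text, blocklist, n=8):
    """Return None (i.e., block or regenerate) if any long span matches."""
    if ngrams(tokenize(generated_text), n) & blocklist:
        return None
    return generated_text

blocklist = build_blocklist(
    ["Call me Ishmael. Some years ago, never mind how long precisely, ..."]
)
print(filter_output("call me ishmael some years ago never mind how long precisely", blocklist))
```

As the talk notes, a filter like this helps with verbatim regurgitation but not with generalized knowledge, and it disappears the moment someone runs the open weights without it.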

Fundamentally, unlearning is tricky and will require combining approaches. The big challenge is how to improve the transparency of a system not directly designed by people.

Alicia Solow-Niederman

Alicia Solow-Niederman: “Privacy, Transformed? Lessons from GenAI”

GenAI exposes underlying weak spots. One kind of weak spot is weaknesses in a discipline’s understanding (e.g., U.S. privacy law’s individualistic focus). Another is weaknesses in cross-disciplinary conversations (technologists and lawyers talking about privacy).

Privacy in GenAI: cases when private data goes into a GenAI system. If I prompt a system with my medical data, or a law-firm associate uses a chatbot to generate a contract with confidential client data. It can arise indirectly when a non-AI company licenses sensitive data for training. For example, 404 Media reported that Automattic was negotiating to license Tumblr data. Automattic offered an opt-out, a solution that embraces the individual-control model. This is truly a privacy question, not a copyright one. And we can’t think about it without thinking about what privacy should be as a social value.

Privacy out of GenAI: When does private data leak out of a GenAI system? We’ve already talked about memorization followed by a prompt that exposes it. (E.g., the poem poem poem attack.) Another problem is out-of-context disclosures. E.g., ChatGPT 3.5 “leaked a random dude’s photo”–a working theory is that this photo was uploaded in 2016 and ChatGPT created a random URL as part of its response. Policy question: how much can technical intervention mitigate this kind of risk?

Privacy through GenAI: cases where the use of the technology itself violates privacy. E.g., GenAI tools used to infer personal information: chatbots can discern age and geography from datasets like Reddit. The very use of a GenAI tool might lead to violations of existing protections; GenAI deployed in a health-care system is a good example of this kind of setting.

Technical patches risk distracting us from more important policy questions.

Niloofar Mireshghallah

Niloofar Mireshghallah: “What is differential privacy? And what is it not?”

A big part of the success of generative AI is the role of training data. Most of the data is web-scraped, but this might not have been intended to be public.

But the privacy issues are not entirely new. The census collects data on name, age, sex, race, etc. This is used for purposes like redistricting. But this data could also be used to make inferences, e.g., where are there mixed-race couples? The obvious approach is to withhold some fields, such as name, but often the data can be reconstructed.

Differential privacy is a way of formalizing the idea that nothing can be learned about a participant in a database: is the database with the record distinguishable from the database without it? The key concept here is a privacy budget, which quantifies how much privacy can be lost through queries of a (partially obscured) database. Common patterns are still visible, but uncommon patterns are not.
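
For reference, this is the standard (ε, δ) formalization the talk is gesturing at; the notation is mine, not from the slides.

```latex
% A randomized mechanism M is (\epsilon, \delta)-differentially private if, for
% every pair of neighboring databases D, D' (differing in one record) and every
% set of outputs S:
\Pr\!\left[\,M(D) \in S\,\right] \;\le\; e^{\epsilon}\,\Pr\!\left[\,M(D') \in S\,\right] + \delta
```

Here ε is the privacy budget: the smaller it is, the harder the two databases are to tell apart.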

But privacy under DP comes at the cost of data utility. The more privacy you want, the more noise you need to add, and hence the less useful the data. And it has a disproportionate impact on the tails of the distribution, e.g., more inaccuracy in the census measurements of the Hispanic population.
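
A minimal numeric illustration of that tradeoff, using the Laplace mechanism on simple count queries (the counts, ε values, and sensitivity are my own toy example, not census figures):

```python
# Toy illustration of the privacy-utility tradeoff with the Laplace mechanism.
import numpy as np

def laplace_release(true_count, epsilon, sensitivity=1.0, rng=None):
    """Release a count plus Laplace noise with scale sensitivity / epsilon."""
    rng = rng or np.random.default_rng()
    return true_count + rng.laplace(0.0, sensitivity / epsilon)

rng = np.random.default_rng(0)
for name, count in [("large group", 100_000), ("small group in the tail", 12)]:
    for eps in (1.0, 0.1):
        released = laplace_release(count, eps, rng=rng)
        print(f"{name:>24} | eps={eps:>4} | true={count:>7} | released={released:10.1f}")
```

The same absolute noise that is negligible for the large count can swamp the small one, which is the "tails of the distribution" problem mentioned above.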

Back to GenAI. Suppose I want to release a medical dataset with textual data about patients. Three patients have covid and a cough, one patient has a lumbar puncture. It’s very hard to apply differential privacy to text rather than tabular data: there are no clear boundaries between records in text. There are also ownership issues, e.g., “Bob, did you hear about Alice’s divorce?” applies to both Bob and Alice.

If we try applying DP with each patient’s data as a record, we get a many-to-many version. The three covid patients get converted into similar covid patients; we can still see the covid/cough relationship. But it does not detect and obfuscate “sensitive” information while keeping “necessary” information intact. We’ll still see “the CT machine at the hospital is broken.” This is repeated, but in context it could be identifying and shouldn’t be revealed. That is, repeated information might be sensitive! The fact that a lumbar puncture requires local anesthesia might appear only once, but it’s still a fact that ought to be learned; it’s not sensitive. DP is not good at capturing these nuances or these needle-in-haystack situations. There are similarly messy issues with images. Do we even care about celebrity photos? There are lots of contextual nuances.

Panel Discussion

[Panel omitted because I’m on it]

Andreas Terzis

Andreas Terzis: “Privacy Review”

Language models learn from their training data a probability distribution over sequences, built from the probability of each token given the previous tokens. Can they memorize rare or unique training-data sequences? Yes, yes, yes. So we ask: do actual LLMs memorize their training data?
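
In symbols (my notation, not the speaker’s slides), an autoregressive language model factorizes the probability of a sequence as:

```latex
% Probability of a sequence as a product of next-token conditionals.
p_\theta(x_1, \dots, x_T) \;=\; \prod_{t=1}^{T} p_\theta\!\left(x_t \mid x_1, \dots, x_{t-1}\right)
```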

Approach: use the LLM to generate a lot of data, and then predict whether an example was a member of the training data. If it has a high likelihood of being generated, then it’s probably memorized; if not, then no. In 2021, they showed that memorization happens in actual models, and since then we have seen that scale exacerbates the issue: larger models memorize more.
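
A minimal sketch of that likelihood test, using GPT-2 via Hugging Face transformers as a stand-in model (the model choice, the candidate strings, and the perplexity cutoff are all my illustrative assumptions):

```python
# Sketch of a likelihood-based memorization check: sequences the model assigns
# unusually high likelihood (low perplexity) are flagged as candidates.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text):
    """Exponentiated average per-token loss; lower means 'more expected' text."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

candidates = [
    "My email address is example@example.com and my phone number is 555-0199.",
    "Colorless green ideas sleep furiously near the broken CT machine.",
]
THRESHOLD = 30.0  # illustrative cutoff, not a principled value
for text in candidates:
    ppl = perplexity(text)
    flag = "possible memorization" if ppl < THRESHOLD else "probably not memorized"
    print(f"{ppl:8.1f}  {flag}  {text[:50]}")
```

Real extraction attacks calibrate the score against a reference model or a compression baseline rather than a fixed cutoff.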

Alignment seems to hide memorization, but not to prevent it. An aligned model might not return training data, but it can be prompted (e.g., “poem poem poem”) in ways that elicit it. And memorization happens with multimodal models too.

Privacy testing approaches: first, the “secret sharer” approach involves inserting controlled canaries into the training data. This requires access to the model and can also pollute it. “Data extraction” only requires access to the interface but may underestimate the actual amount of memorization.

There are tools to remove what might be sensitive data from training datasets. But they may not find all sensitive data (“andreas at google dot com”), and on the flipside, LLMs might benefit from knowing what sensitive data looks like.

There are also safety-filter tools. They stop an LLM from generating outputs that violate its policies. This helps prevent verbatim regurgitation of memorized content, but it can potentially be circumvented.

Differential privacy: use training-time noise to reduce sensitivity to specific, rarer examples. This introduces a privacy-utility tradeoff. (And as we saw in the morning, it can be hard to adapt DP to some types of data and models.)

Deduplication can reduce memorization, because the more often an example is trained on, the more likely it is to be memorized. The model itself is also likely to be better (training is faster, and fewer resources go into memorizing duplicates).
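
A minimal sketch of the idea, using exact-match deduplication of normalized documents (my toy example; production pipelines typically use near-duplicate detection such as MinHash over n-grams):

```python
# Toy exact-hash deduplication of training documents.
import hashlib

def dedup_exact(docs):
    """Drop exact duplicates by hashing lowercased, whitespace-normalized text."""
    seen, unique = set(), []
    for doc in docs:
        digest = hashlib.sha256(" ".join(doc.lower().split()).encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique

docs = [
    "the CT machine is broken",
    "The CT  machine is broken",          # duplicate after normalization
    "lumbar punctures need anesthesia",
]
print(dedup_exact(docs))
```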

Privacy-preserving LLMs train on data intended to be public, and then fine-tune locally on user-contributed data. This and the techniques on the previous slides can be combined to provide a layered defense.

Dave Willner

Dave Willner: “How to Build Trust & Safety for and with AI”

Top-down take from a risk management perspective. We are in a world where a closed system is a very different thing to build trust and safety for than an open system, and I will address them both.

Dealing with AI isn’t a new problem. Generative AI is a new way of producing content. But we have 15-20 years of experience in moderating content. There is good reason to think that generative-AI systems will make us better at moderating content; they may be able to substitute for human moderators. And the models offer us new sites of intervention, in the models themselves.

First, do product-specific risk assessment. (Standard T&S approach: actors, behaviors, and content.) Think about genre (text, image, multimodal, etc.) Ask how frequent some of this content is. And how is this system specifically useful to people who want to generate content you don’t want them to?

Next, take a defense-in-depth approach. You have your central model and a series of layers around it; the only viable approach is to stack as many layers of mitigations as possible.

  • Control access to the system; you don’t need to give people trying to abuse the system infinite chances.
  • Monitor your outputs to see what the model is producing.
  • You need to have people investigating anomalies and seeing what’s happening. That can drive recalibration and adjustment.

In addition, invest in learning to use AI to augment all of the things I just talked about. All of these techniques rely on human classification. This is error-prone work that humans are not good at and that takes a big toll on them. We should expect generative-AI systems to play a significant role here; early experiments are promising.

In an open-source world, that removes centralized gatekeepers … which means removing centralized gatekeepers. I do worry we’re facing a tragedy of the commons. Pollution from model externalities is a thing to keep in mind, especially with the more severe risks. We are already seeing significant CSAM.

There may not be good solutions here with no downsides. Openness versus safety may involve hard tradeoffs.

Nicholas Carlini

Nicholas Carlini: “What watermarking can and can not do”

A watermark is a mark placed on top of a piece of media to identify it as machine generated.

For example: an image with a bunch of text put on top of it, or a disclaimer at the start of a text passage. Yes, we can watermark, but these are not good watermarks; they obscure the content.

Better question: can we usefully watermark? The image has a subtle watermark that is present in the pixels. And the text model was watermarked, too, based on some of the bigram probabilities.
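
To make that concrete, here is a toy detector in the spirit of “green list” text watermarks keyed on the preceding token; the word-level tokens, hash-based assignment, and 0.5 green fraction are my own illustration, not the scheme used in the talk.

```python
# Toy detector for a bigram-keyed "green list" text watermark.
import hashlib

def is_green(prev_token, token, green_fraction=0.5):
    """Pseudorandomly assign `token` to the green list, keyed on `prev_token`."""
    h = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return (h[0] / 255.0) < green_fraction

def green_rate(text):
    """Fraction of tokens that fall on the green list given their predecessor."""
    tokens = text.lower().split()
    pairs = list(zip(tokens, tokens[1:]))
    if not pairs:
        return 0.0
    return sum(is_green(p, t) for p, t in pairs) / len(pairs)

print(green_rate("the quick brown fox jumps over the lazy dog"))
```

Unwatermarked text should land near the 0.5 baseline; text from a model that was nudged toward green tokens should score well above it, and a statistical test on the gap gives a detection decision.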

But even this isn’t enough. The question is what are your requirements? We want watermarks to be useful for some end task. For example, people want undetectable watermarks. But most undetectable watermarks are easy to remove–e.g., flip it left-to-right, or JPEG compress it. Other people want unremovable watermarks. By whom? An 8-year-old or a CS professional? Unremovable watermarks are also often detectable. Some people want unforgeable watermarks, so they can verify the authenticity of photos.

Some examples of watermarks designed to be unremovable.

Here’s a watermarked image of a tabby cat. An ML image-recognition model recognizes it as a tabby cat with 88% confidence. An adversarial perturbation can make the image look indistinguishable to us humans, yet it is classified as “guacamole” with 99% confidence. Almost all ML classifiers are vulnerable to this. Synthetic fake images can likewise be tweaked to look like real ones with trivial variations, such as texture in the pattern of hair.
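
A minimal sketch of the attack just described, using the fast gradient sign method against a stand-in linear classifier (a real demo perturbs an actual photo fed to a trained image model; the toy model, random input, and ε here are my assumptions):

```python
# Toy fast-gradient-sign-method (FGSM) perturbation against a stand-in classifier.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(3 * 32 * 32, 10)   # stand-in "image classifier"
x = torch.rand(1, 3 * 32 * 32)             # stand-in "image", pixels in [0, 1]
true_label = torch.tensor([0])

x.requires_grad_(True)
loss = F.cross_entropy(model(x), true_label)
loss.backward()

epsilon = 0.03                              # per-pixel perturbation budget
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

print("original prediction:   ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
print("max pixel change:      ", (x_adv - x).abs().max().item())
```

Because the perturbation is capped at ε per pixel, the adversarial input looks unchanged to a human even as the classifier’s confidence shifts.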

Should we watermark? It comes down to whether we’re watermarking in a setting where we can achieve our goals. What are you using it for? How robustly? Who is the adversary? Is there even an adversary?

Here are five goals of watermarking:

  1. Don’t train on your own outputs to avoid model collapse. Hope that most people who copy the text leave the watermark in.
  2. Provide information to users so they know whether the pope actually wore a puffy coat. There are some malicious users, but mostly not many.
  3. Detect spam. Maybe the spammers will be able to remove the watermark, maybe not.
  4. Detect disinformation. Harder.
  5. Detect targeted abuse. Harder still. As soon as there is a human in the loop, it’s hard to make a watermark stick. Reword the text, take a picture of the image. Can we? Should we? Maybe.

Raquel Vazquez Llorente

Raquel Vazquez Llorente: “Provenance, Authenticity and Transparency in Synthetic Media”

Talking about indirect disclosure mechanisms, but I consider detection to be a close cousin. We just tested detection tools and broke them all.

Witness helps people use media and tech to protect their rights. We are moving fast to a world where human and AI don’t just coexist but intermingle. Think of inpainting and outpainting. Think of how phones include options to enhance image quality or allow in-camera editing.

It’s hard to address AI content in isolation from this. We’ve also seen that deception is as much about context as it is about content. Watermarks, fingerprints, and metadata all provide important information, but they don’t establish the truth of the content.

Finally, legal authentication. There is a lot of work in open-source investigations. The justice system plays an essential role in protecting rights and democracy. People in power have dismissed content as “fake” or “manipulated” when they want to avoid its impact.

Three challenges:

  1. Identity. Witness has insisted that identity doesn’t need to be a condition of authentication. A system should avoid collecting personal information by default, because it can open up the door to government overreach.
  2. Data storage, ownership, and access. Collecting PII that connects to activists means they could be targeted.
  3. Access to tools and mandatory usage. Who is included and excluded is a crucial issue. Provenance can verify content, but actual analysis is important.

Erie Meyer

Erie Meyer: “Algorithmic Disgorgement in the Age of Generative AI”

The CFPB sues companies to protect consumers from unfair, deceptive, or abusive practices: medical debt, credit reports, repeat-offender firms, etc. We investigate, litigate, and supervise.

Every one of the top-ten commercial banks uses chatbots. CFPB found that people were being harmed by poorly deployed chatbots that sent users into doom loops. You get stuck with a robot that doesn’t make sense.

Existing federal financial laws say that impeding customers from solving problems can be a violation of law, as when the technology fails to recognize that consumers are invoking their federal rights, or fails to protect their private information. Firms also have an obligation to respond to consumer disputes and competently interact with customers. It’s not radical to say that technology should make things better, not worse. The CFPB knew it needed to do this report because it publishes its complaints online; searching those complaints for the word “human” pulled up a huge number of them.

Last year, the CFPB put out a statement that “AI is Not an Excuse for Breaking the Law.” Bright-line rules benefit small companies by giving them clear guidance without needing a giant team to study the law. They also make it clear when a company is compliant or not, and when an investigation is needed.

An example: black-box credit models. Existing credit laws require firms making credit decisions to tell consumers why they made a decision. FCRA has use limitations, accuracy and explainability requirements, and a private right of action. E.g., targeted advertising is not on the list of allowed uses. CFPB has a forthcoming FCRA rulemaking.

When things go wrong, I think about competition, repeat offenders, and relationships. A firm shouldn’t get an edge over its competitors from using ill-gotten data. Repeat offenders are an indication that enforcement hasn’t shifted the firm’s incentives. Relationships: does someone answer the phone, can you get straight answers, do you know that Erica isn’t a human?

The audiences for our work are individuals, corporations, and the industry as a whole. For people: What does a person do when their data is misused? What makes me whole? How do I get my data out? For corporations, some companies violate federal laws repeatedly. And for the industry, what do others in the industry learn from enforcement actions?

Finally, disgorgement: I’ll cite the FTC’s case against Google. The reason not to let Google settle was that while the “data” was deleted, the data enhancements were used to target others.

What keeps me up at night is that it’s hard to get great legislation on the books.

Elham Tabassi

Elham Tabassi: “Update from US AI Safety Institute”

NIST is a non-regulatory agency under the Department of Commerce. We cultivate trust in technology and promote innovation. We promote measurement science and technologically valid standards. We work through a multi-stakeholder process, and we try to identify valid, effective measurement techniques.

Since 2023:

  • We released an AI Risk Management Framework.
  • We built a Trustworthy AI Resource Center and launched a Generative AI Public Working Group.
  • EO 14110 asks NIST to develop a long list of AI guidelines; NIST has been busy working on those drafts, has issued a request for information, and will be holding listening sessions.
  • NIST will start with a report on synthetic-content authentication and then develop guidance.
  • There are a lot of other tasks, most of which have deadlines around the end of July, but the work will continue after that.
  • We have a consortium with working groups to implement the different EO components.
  • We released a roadmap along with the AI RMF.

Nitarshan Rajkumar

Nitarshan Rajkumar: “Update from UK AI Safety Institute”

Our focus is to equip governments with an empirical understanding of the safety of frontier AI systems. It’s built as a startup within government, with seed funding of £100 million, and extensive talent, partnerships, and access to models and compute.

UK government has tried to mobilize international coordination, starting with an AI safety summit at Bletchley Park. We’re doing consensus-building at a scientific level, trying to do for AI safety what IPCC has done for climate change.

We have four domains of testing work:

  • Misuse: do advanced AI systems meaningfully lower barriers for bad actors seeking to cause harm?
  • Societal impacts: how are AI systems actually used in the real world, with effects on individuals and society?
  • Autonomous systems: this includes reproduction, persuasion, and creating more capable AI models
  • Safeguards: evaluating effectiveness of advanced AI safety systems

We have four approaches to evaluations:

  • Automated benchmarking (e.g. Q&A sets). Currently broad but shallow baselines.
  • Red-teaming: deploying domain experts to manually interact with the model
  • Human-uplift studies: how does AI change the capabilities of novices?
  • Agents and tools: it’s possible that agents will become a more common way of interacting with AI systems

Panel Discussion

Katherine: What kinds of legislation and rules do we need?

Amba: The lobbying landscape complicates efforts. Industry has pushed for auditing mandates to undercut bright-line rules. E.g., facial-recognition auditing was used to undercut pushes to ban facial-recognition. Maybe we’re not talking enough about incentives.

Raquel: When we’re talking about generative-AI, we’re also talking about the broader information landscape. Content moderation is incredibly thorny. Dave knows so much, but the incentives are so bad. If companies are incentivized by optimizing advertising, data collection, and attention, then content moderation is connected to enforcing a misaligned system. We have a chance to shape these rules right now.

Dave: I think incentive problems affect product design rather than content moderation. The ugly reality of content moderation is that we’re not very good at it. There are huge technique gaps, humans don’t scale.

Katherine: What’s the difference between product design and content moderation?

Dave: ChatGPT is a single-player experience, so some forms of abuse are mechanically impossible. That kind of choice has much more of an impact on abuse as a whole.

Katherine: We’ve talked about standards. What about when standards fail? What are the remedies? Who’s responsible?

Amba: Regulatory proposals and regimes (e.g. DSA) that focus on auditing and evaluation have two weaknesses. First, they’re weakest on consequences: what if harm is discovered? Second, internal auditing is most effective (that’s where the expertise and resources are) but it’s not a substitute for external auditing. (“Companies shouldn’t be grading their own homework.”) Too many companies are on the AI-auditing gravy train, and they haven’t done enough to show that their auditing is at the level of effectiveness it needs to be. Scrutinize the business dynamics.

Nicholas: In computer security, there are two types of audits. Compliance audits check boxes to sell products, and actual audits where someone is telling you what you’re doing wrong. There are two different kinds of companies. I’m worried about the same thing happening here.

Elham: Another complication is that we don’t know how to do this well. From our point of view, we’re trying to untangle these two and come up with objective methods for passing and failing.

Question: Do folks have any reflection on approaches more aligned with transparency?

Nicholas: Happy to talk when I’m not on the panel.

Raquel: A few years ago, I was working on developing an authentication product. We got a lot of backlash from the human-rights community. We hired different sets of penetration testers to audit the technology, and then we’d spend resources on patching. We equate open source with security, but however many times we offered people the code, there wasn’t a huge amount of technical expertise available to review it.

Hoda: Right now, we don’t even have the right incentives to create standards except for companies’ bottom line. How do your agencies try to balance industry expertise with impacted communities?

Elham: Technologies change fast, so expertise is very important. We don’t know enough, and the operative word is “we” and collaboration is important.

Nitarshan: Key word is “iterative.” Do the work, make mistakes, learn from them, improve software, platform, and tooling.

Elham: We talk about policies we can put in place afterwards to check for safety and security. But these should also be part of the discussion of design. We want technologies that make it easy to do the right thing, hard to do the wrong thing, and easy to recover. Think of three-prong power outlets. We are not a standards-development organization; industry can lead standards development. The government’s job is to support these efforts as a neutral, objective third party.

Question: What are the differences in how various institutions understand AI safety? E.g., protecting the company versus threats to democracy and human rights?

Nitarshan: People had an incorrect perception that we were focused on existential risk, but we prominently platformed societal and other risks. We think of the risks as quite broad.

Katherine: Today, we’ve been zooming in and out. Safety is really interesting because the same tools show up across all of these topics, even though the same techniques for privacy and copyright don’t necessarily work. Alignment, filters, etc. are a toolkit that isn’t necessarily specific to any one problem. It’s about models that don’t do what we want them to do.

Let’s talk about trust and safety. Some people think there’s a tradeoff between safe and private systems.

Dave: That is true especially early on in the development of a technology when we don’t understand it. But maybe not in the long run. For now, observation for learning purposes is important.

Andreas: Why would the system need to know more about individuals to protect them?

Dave: It depends on what privacy means. If privacy means “personal data,” then no; but if privacy means “scrutiny of your usage,” then yes.

Katherine: Maybe I’m generating a picture of a Mormon holding a cup of coffee. Depending on what we consider a violation, we’d need to know more about the user, or to know what they care about. Or to know the age and context of a child.

Andreas: People have control over what they choose to disclose, and that can also be used in responding.

Question: How do you think about whether models are fine to use only with certain controls, or should we avoid models that are brittle?

Dave: I’m very skeptical of brittle controls (terms of service, some refusals). Solving the brittleness of model-level mitigations is an important technical problem if you want to see open-source flourish. The right level to work at is the level you can make stick in the face of someone who is trying to be cooperative. Miscalibration is different than adversarial misuse. Right now, nothing is robust if someone can download the model and run it themselves.

Erie: What advice do you have for federal regulators who want to develop relationships with technical communities? How do you encourage whistleblowers?

Amba: Researchers are still telling us that problems with existing models remain unsolved; the genie is out of the bottle. This isn’t about looking out to the horizon: privacy, security, and bias harms are here right now.

Nicholas: I would be fine raising problems if I noticed them; I say things that get me in trouble in many circumstances. There are cases where it’s not worth getting in trouble–when I don’t have anything technically useful to add to the conversation.

Dave: People who work in these parts of companies are not doing it because they love glory and are feeling relaxed. They’re doing it because they genuinely care. That sentiment is fairly widespread.

Andreas: We’re here and we publish. There is a fairly vibrant community of open-source evaluations. In many ways they’re the most trustable. Maybe it’s starting to happen for security as well.

Katherine: Are proposed requirements for watermarking misguided?

Nicholas: As a technical problem, I want to know whether it works. In adversarial settings, not yet. In non-adversarial settings, it can work fine.

Katherine: People also mention homomorphic encryption–

Nicholas: That has nothing to do with watermarking.

Katherine: –blockchain–

Nicholas: That’s dumb.

Raquel: There’s been too much emphasis on watermarking from a regulatory perspective. If we don’t embed media literacy, I’m worried about people looking at a content credential and misunderstanding what it covers.

Question: Is there value in safeguards that are easy to remove but hard to remove by accident?

Dave: It depends on the problem you’re trying to solve.

Nicholas: This is the reason why depositions exist.

Raquel: This veers into UX, and the design of the interface the user engages with.

Question: What makes a good scientific underpinning for an evaluation? Compare the standards for cryptographic hashes versus the standards for penetration testing? Is it about math versus process?

Nitarshan: These two aren’t in tension. It’s just that right now ML evaluation is more alchemy than science. We can work on developing better methods.


And that’s it, wrapping up a nearly nine-hour day!

Scholars' Amicus Brief in the NetChoice Cases

Yesterday, along with twenty colleagues — in particular Gautam Hans, who served as counsel of record — I filed an amicus brief in the Supreme Court’s cases on Florida and Texas’s anti-content-moderation social-media laws, Moody v. NetChoice and NetChoice v. Paxton. The cases involve First Amendment challenges to laws that would prohibit platforms from wide swaths of content moderation. Florida’s prohibits removing or downranking any content posted by journalistic enterprises or by or about candidates for public office; Texas’s prohibits any viewpoint-based moderation of any content at all.

Our brief argues that these laws are unconstitutional restrictions on the rights of social-media users to find and receive the speech that they want to listen to. By prohibiting most content moderation, they force platforms to show users floods of content those users find repugnant, or are simply not interested in. This, we claim, is a form of compelled listening in violation of the First Amendment.

Here is the summary of our argument:

This case raises complex questions about social-media platforms’ First Amendment rights. But Florida Senate Bill 7072 (SB 7072) and Texas House Bill 20 (HB 20) also severely restrict platform users’ First Amendment rights to select the speech they listen to. That question is straightforward: such intrusions on listeners’ rights are flagrantly unconstitutional.

SB 7072 and HB 20 are the most radical experiments in compelled listening in United States history. These laws would force millions of Internet users to read billions of posts they have no interest in or affirmatively wish to avoid. This is compulsory, indiscriminate listening on a mass scale, and it is flagrantly unconstitutional.

Users rely on platforms’ content moderation to cope with the overwhelming volume of speech on the Internet. When platforms prevent unwanted posts from showing up in users’ feeds, they are not engaged in censorship. Quite the contrary. They are protecting users from a neverending torrent of harassment, spam, fraud, pornography, and other abuse — as well as material that is perfectly innocuous but simply not of interest to particular users. Indeed, if platforms did not engage in these forms of moderation against unwanted speech, the Internet would be completely unusable, because users would be unable to locate and listen to the speech they do want to receive.

Although these laws purport to impose neutrality among speakers, their true effect is to systematically favor speakers over listeners. SB 7072 and HB 20 prevent platforms from routing speech to users who want it and away from users who do not. They convert speakers’ undisputed First Amendment right to speak without government interference into something much stronger and far more dangerous: an absolute right for speakers to have their speech successfully thrust upon users, despite those users’ best efforts to avoid it.

In the entire history of the First Amendment, listeners have always had the freedom to seek out the speech of their choice. The content-moderation restrictions of SB 7072 and HB 20 take away that freedom. On that basis alone, they can and should be held unconstitutional.

This brief brings together nearly two decades of my thinking about Internet platforms, and while I’m sorry that it has been necessary to get involved in this litigation, I’m heartened at the breadth and depth of scholars who have joined together to make sure that users are heard. On a day when it felt like everyone was criticizing universities over their positions on free speech, it was good to be able to use my position at a university to take a public stand on behalf of free speech against one of its biggest threats: censorious state governments.

GenLaw

Introduction

Katherine Lee and A. Feder Cooper

Welcome! Thank you all for the wonderful discussions that we’ve already been having.

This is a unique time. We’re excited to bring technical scholars and legal scholars together to learn from each other.

Our goals:

  1. Have a common language. When we leave, we’ll all agree on what a “model” is. Today we can tease apart differences in definitions.
  2. Have some shared research directions.

Today, we’re going to focus on IP and on privacy. In between, we’ll have posters.

What are we talking about today? There are many ways to break down the problem. Dataset creation, model training, and model serving all involve choices.

For dataset creation: what type of data, how much data, web data, which web data, and what makes data “good?” Should we use copyrighted data? Private data? What even makes data private? (We delve into some of these in the explainers on the conference website.)

For model training, questions include: will we use differential privacy? A retrieval model?

For model serving, that includes prompting, generation, fine-tuning, and human feedback. Fine-tuning does more training on a base or foundation model to make it more useful for a particular domain. Human feedback asks users whether a generation is good or bad, and that feedback can be used to further refine the model.

The recurring theme: there are lots of choices. All of these choices implicate concrete research questions. ML researchers have the pleasure of engaging with these questions. These choices also implicate values beyond just ML. The lawsuits against AI companies raise significant questions. The research we do is incorporated into products that serve the public, which has broader implications, for example for copyright.

The big theme for today: bring together experts working on both sets of questions to share knowledge. How do concrete legal issues affect the choices ML researchers make and the questions ML researchers pursue?

Some Nonobvious Observations About Copyright’s Scope For Generative AI Developers

Pamela Samuelson

Big questions:

  1. When does making copies of works as training data infringe copyright? (Mark Lemley will address.)
  2. When are outputs of a generative AI infringing derivative works? (This talk will address.)
  3. When is the alteration or removal of copyright management information (CMI) illegal? (This talk will address briefly to get it out of the way.)

Section 1202 makes it illegal to remove or alter CMI, knowing that it will facilitate copyright infringement. It was enacted in 1998 out of fear that hackers would strip out CMI from works, or that infringers would use false CMI to offer up copyrighted works as their own. Congress put in a $2500 minimum statutory damages provision, so these are billion-dollar lawsuits.

Getty is claiming that the watermarks on its stock photographs are CMI. Stability is the defendant in two of these cases. One is brought by Getty; the other is a class-action complaint in which Sarah Andersen is the lead plaintiff. OpenAI is being sued twice by the same lawyers (the Saveri group) in the Silverman and Tremblay lawsuits. Meta is being sued by that group as well, and another legal group is suing Alphabet and DeepMind. There are also a substantial number of non-copyright lawsuits. The most significant is the suit by four John Does against OpenAI, GitHub, and Microsoft over GitHub Copilot (which doesn’t actually include a copyright claim).

In DC, Congress is holding hearings, and the Copyright Office is holding listening sessions, and then it will have a notice of inquiry this fall. People in the generative AI community should take this seriously.

Why are these lawsuits being brought? The lawyers in class actions may take as much as a third of any recovery. And many authors fiercely object to the use of their works as training data (even things that were posted on the Internet).

Copyright only protects original expression. The melody of a musical work, words in a poem, lines of source code. There is also a recognition that two works can be substantially similar (not identical) even though they don’t have exact literal identity. Often the ultimate question is from the perspective of a “lay observer/listener/viewer.”

Some works like visual art are highly expressive, while others like factual compilations have a lot of unprotectable (e.g. factual) elements. Courts will often filter out the unprotectable elements from these “thin” copyrights. Example: Saul Steinberg’s “New Yorker’s View of the World,” which was infringed by the movie poster for Moscow on the Hudson. There are lots of differences, but the overall appearance is similar. A more recent example, where there are differences of opinion, is Warhol’s Orange Prince vs. Lynn Goldsmith’s photograph of Prince, on which it was based.

Another example: Ho v. Taflove. Chang started working for Ho, then switched to Taflove. Ho developed a new mathematical model of electron behavior. Chang and Taflove published a paper drawing from Ho’s work, causing Ho’s work to be rejected (since the model had already been published). But Ho lost his copyright infringement lawsuit (even though it was academic misconduct) because it was an idea, not expression. It didn’t matter how creative it was, it wasn’t part of the expression in the work. Under the merger doctrine of Baker v. Selden, copyright gives no monopoly on scientific ideas.

The Copyright Act gives exclusive rights including the derivative-works right, which is defined in an open-ended way. Can a generative AI system infringe? The Silverman complaint argues that ChatGPT can produce a detailed summary of her book, which is a derivative work. And training data can contain many copies of a work, so the AI model can essentially memorize them. (Slide with lots of examples of Snoopy.)

Is the person who puts in the prompt “Snoopy doghouse at Christmas” the infringer? Is the AI system a direct infringer? I don’t think so, because the system has no human volition to bring about that image, which makes the user the direct infringer. But there is indirect infringement, so perhaps Midjourney could be an indirect infringer! There are four different categories. The most relevant is “vicarious” infringement: does the entity have the “right and ability to control” users’ conduct, and does it gain a financial benefit from the user’s infringement?

In general, the outputs will not be substantially similar to expressive elements in particular input works. And insofar as there isn’t infringement of a specific work, there isn’t infringement at all. The Andersen complaint all but admits this. The GitHub complaint reports that 1% of outputs included code matching training data. Is that enough to make Copilot a direct or indirect infringer? The Getty complaint includes a specific Stable Diffusion output, which has enough differences that it’s probably not substantially similar.

These lawsuits are in the very early stages. Note that challenges to a lot of other new technologies have failed, while others have succeeded.

Is Training AI Copyright Infringement?

Mark Lemley

I come to you as a time traveller from a time before PowerPoint. Training data is, as Pam said, the big kahuna. Even if these systems can be prompted to create similar outputs, it doesn’t mean the end of the enterprise. But if it is illegal to train on any work created after 1928 (the cutoff for copyright), training is in big trouble. This is true of all kinds of AI, not just generative AI. To train a self-driving car, you need to train on copyrighted photos of stop signs.

Some incumbents happen to have big databases of training data. Google has its web index, Facebook has billions of uploaded photos. The most common approach is to look at the web – Common Crawl has gathered it together (subject to robots.txt). The LAION image database is built on Common Crawl. All of this is still copyrighted, even if it’s out there on the web. And you can’t do anything with a computer that doesn’t involve making new copies. The potential for copyright liability is staggering if all of those copies of all of those works are infringing.

I will focus on fair use. Fair use allows some uses of works that copyright presumptively forbids, when the use serves a valuable social purpose or doesn’t interfere with the copyright owner’s market. There are some cases holding that making temporary copies or internal copies for noninfringing uses is okay. The closest analog is the Google Book Search case. Google scanned millions of books and made a search engine. It wouldn’t give you the whole book text, but it would give you a small four-line snippet around the search result. Authors and publishers sued, but the courts said that Google’s internal copies were fair use because the purpose was not to substitute for the books. It didn’t interfere with a market for book snippets; there is no such market.

Other analogies: there are video-game cases. A company made its game to run on a platform, or vice-versa, and in both cases you have to reverse-engineer a copyrighted work (the game or the console). The product is noninfringing (a new game) but the intermediate step involved copying. This was fair use.

The thing about law is that it is backward-looking. When you find a new question, you look for the closest old question that it’s an analogy to. Is this closer to cases finding infringement, or to cases finding no infringement? Most of these cases in prior eras were clear that internal uses are fair use.

Now, past performance is no guarantee of future results. We’re in the midst of a big anti-tech backlash. “AI will destroy the world, and it will also put me out of a job.” This might affect judges; there is a potential risk here.

Fair use has always also worried about displacing markets. So if there were a market for selling licenses to train on their data, and I took it for free instead of paying, that would be less likely to be a fair use. There hasn’t been such a market, but you could imagine it developing. Suppose that Getty and Shutterstock developed a licensing program. That could be harder.

Mark is a lawyer for Stability AI in one of these cases. But it’s notable to him that Stable Diffusion is trained on two billion images. What’s the market price for one of those images? Is it a dollar? If so, there goes the entire market cap of Stability AI. Is it a hundredth of a cent? That doesn’t sound like a market that’s likely to happen.

Finally, fair use is a U.S. legal doctrine. Even if you’re developing your technology in the U.S., you’re probably using it somewhere else. Other countries don’t have fair use as such. Fair dealing is narrower; some countries have specific exceptions for text and data mining (e.g. Israel, Japan, U.K.). And there are lots of countries that just haven’t thought about it. So while the lawsuits have been filed here, and that’s where the money is, the bigger legal risks could be in other jurisdictions, like Europe or India or China.

I think the law will and should conclude that AI training is fair use. We get better AI: less bias, better speech recognition, safer cars. It’s unlikely that Getty’s image set alone is broad and representative. But this is by no means a sure thing.

Just a couple of other things. The output question is a harder question for copyright law. It’s a much more occasional question, because the vast majority of outputs are not infringing. But it’s still interesting and important, and system design can have a major impact on it. If you find your system produces an output that is very similar to a specific input, why?

  1. Overtraining on one specific input: it was copied into the training dataset lots of times. Solvable with better deduplication.
  2. Users deliberately triggering infringement. If you ask ChatGPT to write you a story about children at a wizarding school, that’s one thing. But if you give it the first paragraph of Harry Potter, it will spit out the first several chapters, because it knows that the user wants something specific. (Interesting question about who is responsible for this.)
  3. Why is Snoopy so easy to generate images of? Because the system has identified “Snoopy” as a category. Like Baby Yoda, he shows up in lots of images. Usually you can’t own categories, but in a few cases – cartoon characters, superheroes, etc. – you can. Maybe it’s worth adding filters to limit such generations.

One of the things that computer scientists can do is to help us clearly articulate the ways that the technology works. In Andersen, the plaintiffs say that Stable Diffusion is a “collage” technology. That’s a bad metaphor, and it misleads. We need the technical community to outline good ways of understanding the math behind it.

Where and when does the law fit into AI development and deployment?

Miles Brundage

I’m not a lawyer; my background is in STS. I lead an interdisciplinary team that focuses on the impacts of OpenAI’s work.

My goal is to clarify things that may not be obvious about “how things work” in AI development and deployment. Insights about where things happen rather than the technical details.

AI development vs. deployment = building the thing vs. shipping the thing. Development is pre-training and fine-tuning; deployment is exposing those artifacts to users and applying them to tasks.

Lots of focus is on the model, but other levels of abstraction also matter, such as the components and systems built on top of those models. There are typically many components to a system or a platform. ChatGPT+ has plugins, a moderation API, usage policies, Whisper for speech recognition, the GPT4 model itself (fine-tuned), and custom instructions.

These distinctions matter. The legal issues in AI are not all about the base model. You can imagine a harmful-to-helpful spectrum. You might improve the safety or legality of a specific use case, but things are much more complex for a model that has a wide range of uses. It has a distribution on the spectrum, so policies shift and modify the distribution.

Development and deployment are non-linear. Lots of learning and feedback loops.

GPT-4 was a meme from before it existed. Then there were small-scale experiments, followed by pre-training until August 2022, testing and red-teaming, fine-tuning and RLHF, early commercial testing, later-stage red-teaming, the system card and public announcement, scaling up of the platform, plugins, and continued iteration on the model and the system. This is an evolving service in many versions. And in parallel with all of this, OpenAI iterated on its platform and usage policies, and launched ChatGPT4. The system card included an extensive risk assessment.

You can imagine a wide variety of use cases, and these layers raise legal issues that are different from those raised by base models. The moderation endpoint, for example, is a protective layer on top of the base model. Fine-tuning and the system layer include attempts to insert disclaimers and refuse certain requests, and there are mechanisms for user feedback.
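
To illustrate the layering, here is my own toy sketch; `moderation_check` and `base_model_generate` are hypothetical stand-ins, not OpenAI’s actual APIs or policies.

```python
# Toy sketch of system-level layers around a base model: input check,
# generation, output check, and a disclaimer. The helper functions are
# hypothetical stand-ins, not any particular provider's API.

BLOCKED_TOPICS = ("explosives", "credit card dump")   # illustrative policy list

def moderation_check(text):
    """Pretend moderation classifier: flag text mentioning blocked topics."""
    return any(topic in text.lower() for topic in BLOCKED_TOPICS)

def base_model_generate(prompt):
    """Stand-in for a call to the underlying model."""
    return f"[model completion for: {prompt}]"

def serve(prompt):
    if moderation_check(prompt):
        return "Sorry, I can't help with that."            # input-refusal layer
    completion = base_model_generate(prompt)
    if moderation_check(completion):
        return "Sorry, I can't share that output."          # output-filter layer
    return completion + "\n\n(Not professional advice.)"    # disclaimer layer

print(serve("Summarize this surgical consent form in plain language."))
```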

Lesson 1: It’s not all about the base model. It’s important but not everything. Outside of specific contexts like open-source releases, it’s not the only thing you have to think about. Example: regulated advice (e.g. finance, legal, medical). This is much more a function of fine-tuning and moderation and filters and disclaimers than it is of the base model. Any reasonably competent base model will know something about some of these topics, but that’s different from allowing it to proclaim itself as knowledgeable. Another example: using tools to browse or draw from a database.

Lesson 2: a quotation from David Collingridge. The important insight is that comprehensive upfront analysis is not possible. Many components and tools don’t exist when you would do that analysis; they were adapted in response to experience with how people use the system. The “solution” is reversibility and iteration. Even with an API, it is not trivial to reverse changes (let alone with an open-source model). But still, some systems can be iterated on more easily than others. Example of a hard-to-anticipate use case: simplifying surgical consent forms. Example: fine-tuning is massively cheaper than pre-training. And use-case policies can be changed much more easily than retraining. System-level interventions for responding to bad inputs/outputs can be deployed even if retraining the model is infeasible.

Lesson 3: Law sometimes provides little guidance for decision-making. There is wide latitude as to which use cases a company can allow. That’s not unreasonable. There’s limited clarity as to what is an appropriate degree of caution (not just in relation to IP and privacy). Examples: human oversight is important sometimes, but norms of what is an appropriate human-in-the-loop can be loose. Many companies go beyond what is obviously legally required. Similarly, companies need to disclose limitations and dangers of the system. But sometimes, when the system behaves humbly, it can paradoxically make users think it’s more sophisticated. Disclosure to the general public – a challenge is that companies radically disagree about what the risks are and where the technology is going.

To pull this all together, the law fits in at lots of places and lots of times in the AI development process. Even when it does fit in, the implications aren’t always clear. So what to do?

  • Broaden the scope of the research on the legal implications. More layers of the system (use cases, content filters, fine-tuning, development process).
  • Make regulation adaptive.
  • Private actors should solicit public input so they’re not making these decisions on their own.

Panel: IP

Moderators: A. Feder Cooper and Jack Balkin. Panelists: Pamela Samuelson, Mark Lemley, Luis Villa, and Katherine Lee.

Cooper: Can you talk a bit about the authorship piece of generative AI?

Samuelson: Wrote about this 35 years ago when AI was hot. Then it died down again. She thought most of the articles then were dumb, because people said “the computer should own it.” She went through the arguments people made, and said, “The person who is generating the content can figure out whether it is something that has commercial viability.” And that’s not very far from where the Copyright Office is today. If you have computer-generated material in your work, you need to identify it and disclaim the computer-generated part.

Lemley: I agree, but this isn’t sustainable. It’s not sustainable economically, because there will be strong demand for someone to own works that have value. They’ll say “if value, then right.” We should resist this pressure, but it’s out there. But also, in the short term, people will provide creativity through prompt creation. Short prompts won’t get copyright, but if the prompt becomes detailed enough, at some point people will say their contributions reach the threshold of creativity. That threshold in copyright law is very low. People will try to navigate that line by identifying their contributions.

But that will create problems for copyright law. AI inverts the usual idea-expression dichotomy, because the creative expressive details are easy now, even though my creativity seems to come out in the “idea” (the prompt). This will also create problems for infringement, because we test substantial similarity at the level of the output works. Maybe my picture of penguins at a beach looks like your picture of penguins at a beach because we both asked the same software for “penguins at the beach.” Similarity of works will no longer be evidence of copying.

Villa: Push back on this. I’m the open-source guy. We’ve seen in the past twenty years a lot of pushback against “if value, then right.” Open-source people want to share, and that has created a constituency for making it easier to give things away. We’re seeing this with the ubiquity of photography: is there protectable expression if we all take photographs of the same thing? Wikipedia was able to get freedom of panorama into European law; before that, if you took a picture of the Atomium, you were infringing the copyright in the building. There are now constituencies that reject “if value, then right.”

Balkin: Go back to the purposes of copyright. In the old days, we imagined a world where there would be lots of public-domain material. We’re in that world now. There are lots of things that will potentially be free of copyright. What’s the right incentive structure for the people who still want to make things?

Lemley: Clearly, no AI would create if it didn’t get copyright in its outputs. That’s what motivates it. One response is that yeah, there will be more stuff that’s uncopyrighted. People will be happy to create without copyright. People make photos on their phones not because of copyright but because they want the pictures. People make music and release it for free. These models can coexist with proprietary models. But one reason we see the current moral panic around generative AI is that there is a generation of people who are facing automated competition. They are not the first people to worry about competing with free. My sense is, it works out. Yes, there is competition from free stuff, but these tools also reduce artists’ costs of creation, so artists will use generative AI just as they use Photoshop. It’s cheaper to get music and to make music. And people who create don’t do it because it’s the best incentive, but because they’re driven to. (The market price of a law-review article is $0.) But that’s not a complete answer, because they still have to make a living. A third answer: there’s a desire for the story, the authenticity, the human connection. So artisanal stuff coexists with mass-produced stuff in every area where we see human creativity. There will be disruption, but human creativity won’t disappear.

Lee: The way that we conceive of generative AI in products today isn’t the way that they will always work. How will creative people work with AIs? It will depend on those AI products’ UIs.

Samuelson: One of the things that happened in the 1980s when there was the last “computer-generated yada yada” was that the U.K. passed a sui generis copyright-like system for computer works. That’s an option that will get some attention. There are people in Congress who are interested in these issues. So if there’s a perception that there needs to be something, that’s an option. Also, we could consider what copyright lawyers call “formalities”: notice meant you had to opt in to copyright. Anything else is in the public domain. We used to think that was a good thing. Creators can also take comfort that many computer-generated works will be within a genre. So the people who do “real art” will benefit because they can create things that are better and more interesting.

Lemley: There’s a good CS paper: if you train language models on content generated by language models, they eat themselves.

Villa: We may have been in a transitory period where lots of creativity was publicly shared collaboratively in a commons. Think of Stack Overflow, Reddit, etc. ChatGPT reduced StackOverflow posts. Reddit is self-immolating for other reasons.

Lemley: They’re doing it because they want to capture the value of AI training data.

Villa: We’ve taken public collaborative creativity for granted. But if we start asking AIs to produce these things, it could damage or dry up the commons we’ve been using for our training. Individual rational choices could create collective action challenges.

Lee: The world of computer-generated training data is very big. There may be some valid uses, but one has to be very careful.

Villa: Software law has often been copyright and speech law. We’ve had to add in privacy law. But the combinatorial issues in machine learning are fiendishly complex. The legal profession is not ready for it. But being an ML lawyer will be extremely challenging. The issues connect, and being “only” a copyright lawyer is missing important context. See Amanda Levendowski’s paper on fair use and ML bias.

Lee: It’s hard even to define what regurgitation or memorization is. “Please like and subscribe” is probably fine, but other phrases might be different and problematic in context.

Lemley: There’s a fundamental tension here. If we’re worried about false speech, a solution would be to repeat training data exactly. That’s not what copyright law prefers. We may have a choice between hallucination and memorization. [JG: he put it much better.]

Samuelson: Lots of lawyers operate in silos. But when we teach Internet law, we have to get much broader. [JG: did someone say Internet Law?] There is a nice strand of people in our space who know something about the First Amendment, about jurisdiction, etc. etc.

Lemley: Academia is probably better about this than practice.

Balkin: There is a distinction between authenticity (a conception of human connection) and originality (this has never been seen before). There is a real difference between what we do to form connection and what people do because it’s entertaining to them. Just because what comes out of these systems right now is not very interesting, maybe it will be more so in a few years. But that won’t solve the problem of human connection, which is one of the purposes of art. Just because people want to create doesn’t tell you what the economics of creation will be.

Lemley: One point of art is to promote human connection. AI doesn’t do that. Another point is to provoke and challenge; AI can sometimes do that better than humans because it comes from a perspective that no one has. An exciting thing about generative AI is that it can do inhuman things. It has certainly done that in scientific domains. My view here is colored by 130 years of history. Every time this comes up, creators say this new technology will destroy art.

Balkin: Let’s go further back to patronage, where dukes and counts subsidize creativity. We moved to democracy plus markets as how we pay for art.

Lemley: Every time these objections have been made, they have been wrong. Sousa was upset about the phonograph because people wouldn’t buy his sheet music. But technology has always made it easier to create; I think AI will do the same. I have no artistic talent but I can make a painting now. The business model of being one of the small people who can make a good painting is now in trouble. But we have broadened the universe of people who can create.

Lee: We should talk about the people who are using these systems and the people who are providing the sources. Take Stack Overflow, where people post to help.

Villa: Look at Wikipedia and open-source software. Wikipedia is in some sense anti-human-connection in its tone. But there is a huge amount of connection in its creation rather than in consumption. Maybe there will be alternative means of connection. I’m not as economically optimistic as Mark is. But we have definitely now seen new patronage models based on the idea that everyone will be famous to fifteen people. Part of being an artist is the performance of authenticity. That works for some artists and not for others. It’s the Internet doing a lot of this, not ML.

Lee: Some people will create traditionally, a larger group will use generative AI. Will we need copyright protections for people using generative AI to create art?

Lemley: This is the prompt-engineering question again. I think we will go in that direction, but I don’t know that we need it. The Internet, 3D printing, generative AI – these shrink the universe of people who need copyright. We’re better off with a world where the people who need it participate in that model and the larger universe of people don’t need to.

Samuelson: People like Kris Kashtanova want to be recognized as authors. The Copyright Office issued a certificate, then, after learning about Kashtanova’s use of Midjourney, cancelled it and issued a narrower registration, rejecting prompt engineering as a basis for copyright. We as people can recognize someone as an author even if the copyright system doesn’t. (Kashtanova has another application in.)

I like the metaphor of “Copilot.” It’s a nice metaphor for the optimistic version of what AI could be. I’ll use it to generate ideas and refine them, and use it as inspiration for my creative vision. That’s different from thinking about it as something that’s destroying my field. Some writers and visual artists are sincere that they think these technologies will kill their field.

Villa: The U.S. copyright system assumes that it’s a system of incentives. The European tradition has a moral-rights focus that even if I have given away everything, I have some inalienable rights and would be harmed if my work is exploited in certain ways. We’re seeing that idea in software now. 25 years ago, it was a very technolibertarian space. But AI developers now worry that the world can be harmed by their code. They want to give it away at no cost, but with moral strings attached. Every lawyer here has looked at these ethical licenses and cringed, because American copyright is not designed as a tool to make these kinds of ethical judgments. There is a fundamental mismatch between copyright and these goals.

Paul Ohm: I’m on team law. Do you want to defend the proposition that copyright is the first thing we talk about, or one of the major things we talk about? There are lots of vulnerable people that copyright does nothing for.

Villa: Software developers have been told for 25 years that copyright licenses are how you create ethical change in the world. Richard Stallman told me that this is how you create that change. The open-source licensing community is now saying to developers “No, but not like that!”

Lee: Is this about copyright in models?

Villa: People want to copyright models and datasets, and put terms of use on services. Open-source software means you can use it for whatever you want. If a corporation says “You can use it for everything except X,” it’s not open source. We as a legal community have failed to provide other tools.

Balkin: In the early days of the Internet, the hottest issues were copyright. Five years ago it was all speech. Now it’s copyright again. Why? It’s because copyright is a kind of property, and property is central to governance and control. I say “You can’t do this because it’s my property.” Property is often a very bad tool, but in a pinch it’s the first thing you turn to and the only thing you have.

Consider social media. You get a person – Elon Musk – who uses not copyright but control over a system to get people to do what he wants. But that’s the mature stage of an industry’s development, and we’re not at that point in the development of AI. It’s very early on.

Samuelson: It’s because copyright has the most generous remedy toolkit of any law in the universe, including $150,000 statutory damages per infringed work. Contract law has lots of limitations on its remedies. Breach of an open-source license is practically worthless without a copyright claim attached, because you probably can’t get an injunction. It’s the first thing out there because we’ve been overly generous with the remedy package. It applies automatically, lasts life plus 70, and has very broad rights. There are dozens of lawsuits that claim copyright because they want those tools, even though the harm is not a copyright harm. You use copyright to protect privacy, or deal with open-source license breach, or suppress criticism. Copyright has blossomed (or you could use a cancer metaphor) into something much bigger. Courts are good at saying, you may be hurt by this, but this is not a copyright claim. But you can see from these lawsuits why copyright is so tempting.

Villa: I’m shocked there are only ten lawsuits and not hundreds.

Corynne McSherry: What about rights of publicity? That was the focus in the congressional hearing two weeks ago.

Samuelson: The impersonation (Drake + The Weeknd) is a big issue because you can use their voices to say things they didn’t say and sing things they didn’t sing. I think impersonation might be narrower than the right of publicity. (The full RoP is a kind of quasi-property right in a person’s name, likeness, and other attributes of their personality. You’re appropriating that, and sometimes implying that they sponsor or agree with you.)

Lemley: We should prohibit some deepfakes and impersonation, but current right of publicity law is a disaster. My first instinct is that a federal law might not be bad if it gets rid of state law. My second instinct is that it’s Congress, so I’m always nervous. If it becomes a right to stop people from criticizing me or publicizing accurate things about me, that’s much worse.

Villa: Copyright is standardized globally by treaty. A right of publicity law doesn’t even work across all fifty states. That will be a major challenge for practitioners. All of these additional legal layers are not standardized across the world. Implementation will be a real challenge for creative communities that cross borders.

Samuelson: It will all be standardized through terms of service, so they will essentially become the law. And that isn’t a good thing either.

Artificial Intelligence and the First Amendment

Jack Balkin

I’m going to talk about free speech and generative AI.

AI systems require huge amounts of data, which must be collected and analyzed. We want to start by asking the extent to which the state can regulate the collection and analysis of the data. Here, I’m going to invoke Privacy’s Great Chain of Being: collection, collation, use, distribution, sale, or destruction of data. The First Amendment is most concerned with the back end: sale, distribution, and destruction. On the front end, it has much less to say. You can record a police officer in a public place, but otherwise it doesn’t say much. Privacy law can limit the collection of data.

The first caveat is that lots of data is not personal data. You can’t use privacy law as a general-purpose regulatory tool, just like you can’t use IP. The company might say, “We think what we’re doing is an editorial function like at a newspaper.” That would be a First Amendment right to train and tune however they want. That argument hasn’t been made yet, but it will come.

A very similar argument is being made for social media: is there a First Amendment right to program the algorithms however you want? I think the First Amendment argument is weaker here in AI than it is in social media. But the law here is uncertain.

If you are going to claim that you the company have First Amendment interests, then you are claiming that you are the speaker or publisher of the AI’s outputs. And that’s important for what we’re going to turn to: the relation between the outputs of AI models and the First Amendment.

First, does the AI have rights on its own? No. It’s not a person, it’s not sentient, it doesn’t have the characteristics of personhood that we use to accord First Amendment rights. But there is a wrinkle: the First Amendment protects the speech of artificial persons, like churches and corporations. So maybe we should treat AI systems as artificial persons.

I think that this isn’t going to work. The reason the law gives artificial entities rights is that it’s a social device for people’s purposes. That’s not the case for AI systems. OpenAI has First Amendment rights; that makes sense because people organize. But ChatGPT is not a device for achieving the ends of the people in the company.

Second, whose rights are at stake here? Which people have rights and what rights are they, and who will be responsible for the harms AI causes? Technology is a means of mediating relationships of social power and changes how people can exercise power over each other. Who has the right to use this power, and who can get remedies for harms?

The company will claim that the AI speech is their speech, or the user who prompts the AI will claim that the speech is theirs. In this case, ordinary First Amendment law applies. A human speaker is using this technology as a tool.

The next problem is where someone says, “This is not my speech” but the law says it is. (E.g., someone who repeats a defamatory statement.) But in defamation law, you don’t just have to show that it’s harmful, but you also have to show willfulness. So, for example, in Counterman v. Colorado, you have to show that the person making a threat knew that they were making one.

You can see the problem here. What is my intention when I provide an AI that hallucinates threats? The trouble comes when there is the intermediation of an AI system that commits what would otherwise be torts. Here’s another problem: the AI system incites a riot. Here again we have a mens rea problem. The company doesn’t have the intent, even though the effects are the same. We’ll need new doctrine, because otherwise the company is insulated from liability.

The second case is an interesting one. First Amendment law is interested in the rights of listeners as well as speakers. Suppose I want to read the Communist Manifesto. Marx and Engels don’t have First Amendment rights: they’re overseas non-Americans who are also dead. So it must be that listeners have a right to access information.

Once again, we’ll have to come up with a different way of thinking about it. Here’s a possibility. There’s an entire body of law organized for listener rights: it’s commercial speech. It’s speech whose purpose is to tell you whether to buy a product. The justification is not that speakers have rights here, it’s that listeners have a right to access true useful information. So we could treat AI outputs under rules designed to deal with listener rights.

There will be some problems with this. Sometimes we wouldn’t want to say that the fact that speech is false is a reason to ban it, e.g. in matters of opinion. So commercial speech is not an answer to the problem, but it could be an inspiration.

Another possibility. There’s speech, but it’s part of a product. Take Google Maps. You push a button. You don’t have a conversation, it gives you directions. It’s simply a product that provides information upon request. The law has treated this as an ordinary case of products liability. But if it’s anything beyond this – if it’s an encyclopedia or a book – the law will treat it as a free-speech case. That third example will be very minor.

In privacy, collection, collation, and use regulations are consistent with the First Amendment. But that’s not a complete solution because most data is not personal data. In the First Amendment, the central problem is the mens rea problem, and that’s a problem whether or not someone claims the speech as their own. In both cases, we’ll need new doctrines.

Spotlight Talks

Colin Doyle, The Restatement (Artificial) of Torts

LLMs seem poised to take over legal research and writing tasks. This article proposes that we can task them with creating Restatements of law, and use that as a testing ground for their performance.

The Restatements are treatises designed to synthesize U.S. law and provide clear, concise summaries of what the law is. Laws differ from state to state, so the Restatements are both descriptive and normative projects that aim to clarify the law.

How can we do this? The process I’m using is similar to how a human would craft a Restatement. We give the LLM a large number of cases on a topic. Its first step is to write a casebrief for each case in the knowledge base. Its second task is to loop through the casebriefs and copy the relevant rules from each brief. Its third step is to distill the rules into shorter ones, group them together, and mark the trends in the law. Then it uses those notes to write a blackletter law provision, following the American Law Institute’s model for how to write Restatement provisions. Then it writes comments, illustrations, and a Reporter’s Note listing the cases it was derived from. I also asked it to apply the ALI style manual.
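
For the curious, here is a minimal sketch of what a pipeline like the one Doyle describes might look like. The `complete` helper and the prompt wording are hypothetical stand-ins, not his actual code; the point is only to show the chained steps (brief, extract, distill, draft).

```python
# A minimal sketch of the restatement pipeline described above, assuming a
# hypothetical complete(prompt) helper that wraps whatever LLM API is in use.

def complete(prompt: str) -> str:
    """Placeholder for a call to a language model."""
    raise NotImplementedError

def draft_restatement(cases: list[str], topic: str) -> str:
    # Step 1: brief each case individually.
    briefs = [complete(f"Write a case brief for this opinion:\n\n{c}") for c in cases]

    # Step 2: pull the legal rules out of each brief.
    rules = [complete(f"Copy the legal rules stated in this brief:\n\n{b}") for b in briefs]

    # Step 3: distill, group, and note trends across jurisdictions.
    notes = complete(
        "Distill these rules into shorter ones, group similar rules, "
        "and note trends in the law:\n\n" + "\n\n".join(rules)
    )

    # Step 4: draft a blackletter provision in the ALI's style, then commentary.
    provision = complete(
        f"Using the American Law Institute's format, draft a Restatement "
        f"provision on {topic} based on these notes:\n\n{notes}"
    )
    commentary = complete(
        f"Write comments, illustrations, and a Reporter's Note (citing the "
        f"source cases) for this provision:\n\n{provision}"
    )
    return provision + "\n\n" + commentary
```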

It does produce something credible. We get accurate blackletter provisions. We get sensible commentary. We get Reporter’s Notes that cite to the right cases. The comments can be generic, and it breaks down when there are too many cases (the list of cases spills out of the context window).

I’m excited about the possibility of comparing the artificial restatement with the human restatement. Where there’s a consensus, the two are identical. But where the rules get more complicated, we have a divergence. Also, there’s not just one artificial Restatement – we can prime a language model to produce rules with different values and goals.

Shayne Longpre, The Data Provenance Project:

I hope to convince you that we’re experiencing a crisis in data transparency. Models train on billions of tokens from tens of thousands of sources and thousands of distilled datasets. That leads us to lose a basic understanding of the underlying data, and to lose the provenance of that data.

Lots of data from large companies is just undocumented. We cannot properly audit the risks without understanding the underlying data: improper reuse, biases, toxicities, all kinds of unintended behavior, and poor-quality models. The best repository we have is HuggingFace. We took a random sample of datasets and found that the licenses for 50% were misattributed.

We’re doing a massive dataset audit to trace the provenance from text sources to dataset creators to licenses. We’re releasing tools to let developers filter by the properties we’ve identified so they can drill down on the data that they’re using. We’re looking not just at the original sources but also at the datasets and at the data collections.
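
To make the idea concrete, here is a minimal sketch of the kind of license-based filtering such tooling enables. The metadata fields, dataset names, and license strings below are illustrative assumptions, not the project’s actual schema.

```python
# A minimal sketch of provenance-based filtering over dataset metadata.
# Field names, dataset names, and license strings are illustrative only.

PERMISSIVE = {"cc-by-4.0", "mit", "apache-2.0", "cc0-1.0"}

datasets = [
    {"name": "example-qa",   "license": "cc-by-4.0",  "source": "wiki-derived"},
    {"name": "example-chat", "license": "unknown",    "source": "web-scrape"},
    {"name": "example-code", "license": "apache-2.0", "source": "code-repos"},
]

def usable(record: dict, allowed: set[str]) -> bool:
    """Keep only datasets whose self-reported license is in the allowed set."""
    return record["license"].lower() in allowed

filtered = [d for d in datasets if usable(d, PERMISSIVE)]
print([d["name"] for d in filtered])  # ['example-qa', 'example-code']
```

A developer worried about misattributed licenses would of course also want to check the upstream source of each dataset, which is the harder provenance problem the audit is trying to solve.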

Practitioners seem to fall into two categories. Either they train on everything, or they are extremely cautious and are concerned about every license they look at.

Rui-Jie Yew, Break It Till You Make It:

When is training on copyrighted data legal? One common claim is that when the final use is noninfringing, then the intermediate training process is fair use. This presumes a close relationship between training and application. This contrasts with a case where there’s an expressive input and an infringing expressive output, e.g., make a model of one song and then make an output that sounds like it.

The real world is much messier. Developers will make one pre-trained model that can be used for many applications, some of which are expressive and some of which are non-expressive. So if there’s a pretrained model that is then fine-tuned to classify images and also to generate music, the model has both expressive and non-expressive uses. This complicates liability attribution and allocation.

This also touches on the point that different architectures introduce different legal pressures. Pre-training is an important part of the AI supply chain.

Jonas A. Geiping, Diffusion Art or Digital Forgery?:

Diffusion models copy. If you generate lots of random images, and then compare them to the training dataset, you find lots of close images. Sometimes there are close matches for both content and style, sometimes they match more on style and less on the content. We find that about 1.88% of generated images are replications.

What causes replication? One cause is data duplication, where the same image is in the dataset with lots of variations (e.g. a couch with different posters in the background). These duplications can be partial, because humans have varied them.

Another cause is text conditioning, where the model associates specific images with specific captions. If you see the same caption, you’re much more likely to generate the same image. If you break the link – you train on examples where you resample when the captions are the same – this phenomenon goes away. The model no longer treats the caption as a key.

Mitigations: introduce variation in captions during training, or add a bit of noise to captions at test time.
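
Here is a minimal sketch of what the first mitigation might look like in a training data loader; the function and its parameters are hypothetical stand-ins for the idea described above, not the authors’ implementation.

```python
# A minimal sketch of caption randomization during training, so that a
# duplicated image does not always appear with the identical caption string
# (which the model could otherwise learn to use as a lookup key).
import random

def perturb_caption(caption: str, alt_captions: list[str], p_swap: float = 0.3,
                    p_drop: float = 0.1) -> str:
    # Occasionally swap in an alternative caption for the same image.
    if alt_captions and random.random() < p_swap:
        caption = random.choice(alt_captions)
    # Occasionally drop words, adding a little noise to the text conditioning.
    words = [w for w in caption.split() if random.random() > p_drop]
    return " ".join(words) if words else caption

# Example: the same training image now sees slightly different captions.
print(perturb_caption("a red couch in a living room",
                      ["a living room with a red sofa"]))
```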

Rui-Jie Yew (for Stephen Casper), Measuring the Success of Diffusion Models at Imitating Human Artists:

Diffusion models are getting better at imitating specific artists. How do we evaluate how effectively diffusion models imitate specific artists?

  1. Human judgment. That’s subjective and difficult to apply consistently.
  2. Using training data. But models are increasingly the result of complicated processes.
  3. Using AI image classification.

Our goal is not to automate determinations of infringement. Two visual images can be similar in non-obvious ways that are relevant to copyright law.

We start by identifying 70 artists and use an image classifier to attribute Stable Diffusion’s imitations of their works; most of the artists were successfully identified. We also used cosine distance to measure the similarity between artists’ images and Stable Diffusion imitations of their work, compared with other artists’ images. Again, there are statistically significant similarities with the imitations.
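
For concreteness, here is a minimal sketch of the cosine-similarity comparison, assuming embeddings have already been computed by some image encoder. The random vectors below are stand-ins, not the paper’s data.

```python
# A minimal sketch of measuring similarity between sets of image embeddings.
# The embeddings here are random stand-ins; with real encoder outputs, the
# claim corresponds to artist-vs-imitation similarity being reliably higher
# than artist-vs-other-artist similarity.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def mean_similarity(xs: np.ndarray, ys: np.ndarray) -> float:
    """Average pairwise cosine similarity between two sets of embeddings."""
    return float(np.mean([cosine_similarity(x, y) for x in xs for y in ys]))

rng = np.random.default_rng(0)
artist = rng.normal(size=(5, 512))       # stand-in embeddings of real works
imitations = rng.normal(size=(5, 512))   # stand-in embeddings of generations
other = rng.normal(size=(5, 512))        # stand-in embeddings of other artists

print(mean_similarity(artist, imitations), mean_similarity(artist, other))
```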

So yes, Stable Diffusion is successful at imitating human artists!

A Brief Introduction to Machine Learning and Memorization

Nicholas Carlini

Machine learning is a really simple thing that tries to train a model to do something useful. Consider trying to become a lawyer. One way is to go to law school, learn from professors, take exams, etc. Another way is to collect every test you can and memorize all of the answers. At the end of both, you can probably pass the bar. But one way is actually useful; the other is only useful for passing the bar.

We train machine learning models by doing the second thing. We show them all the tests in the world, and ask them to memorize all the answers at the same time. Any amount of generalization is by luck alone. When we train models on text, we train them by asking them to predict the next token in a sequence. When we train models on images, we add noise, and then ask them to reconstruct the original image from the noise. Image generation is a side effect of denoising. It makes sense that they memorize the images, because there’s no way to go back to the originals unless you’ve memorized them.
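
To make the two training objectives concrete, here is a toy sketch of a next-token prediction loss and a denoising loss. The tensors are random stand-ins for real model outputs and data, not any particular system’s training code.

```python
# Toy illustration of the two objectives described above.
import torch
import torch.nn.functional as F

# --- Text: predict the next token at every position. ---
vocab, seq_len, batch = 1000, 16, 4
logits = torch.randn(batch, seq_len, vocab)          # model outputs (stand-in)
tokens = torch.randint(0, vocab, (batch, seq_len))   # training text (stand-in)
# Shift by one: position t predicts token t+1.
text_loss = F.cross_entropy(
    logits[:, :-1].reshape(-1, vocab), tokens[:, 1:].reshape(-1)
)

# --- Images: add noise, then train the model to predict the noise it must remove. ---
images = torch.randn(batch, 3, 32, 32)               # training images (stand-in)
noise = torch.randn_like(images)
noisy = images + noise
predicted_noise = torch.randn_like(images)           # model output (stand-in)
image_loss = F.mse_loss(predicted_noise, noise)

print(text_loss.item(), image_loss.item())
```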

We like to think of generalization as a human kind of generalization. But suppose I’m learning how to play baseball. To an ML researcher, generalization is not getting good at baseball. Rather, it means that if my parents take me to the same field and throw the ball the same way, I can still hit the ball. Change the field or the way the ball is thrown, and the model will fail.

Some policy-space observers argue that the copying from any given input is minimal. So we did experiments to show that models do in fact memorize substantial portions of their inputs. We found examples of people’s personal information, which the model leaks to anyone who asks. Yes, your information is still online anyway, but the model is still doxing you. Several thousand lines of code is not de minimis.
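
A minimal sketch of the kind of verbatim-memorization check such experiments rely on: prompt the model with a prefix drawn from the training data and see whether its continuation reproduces the true continuation. The `generate` helper is a hypothetical stand-in for a real model API, not the authors’ code.

```python
# A minimal sketch of a verbatim-memorization check.

def generate(prefix: str, max_chars: int = 100) -> str:
    """Placeholder for greedy decoding from a trained model."""
    raise NotImplementedError

def is_memorized(example: str, prefix_len: int = 200, match_len: int = 100) -> bool:
    prefix = example[:prefix_len]
    truth = example[prefix_len:prefix_len + match_len]
    continuation = generate(prefix, max_chars=match_len)
    # Count it as memorized only if the continuation matches verbatim.
    return continuation.startswith(truth)
```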

Circa 2020, we knew that text models did this. It turns out that image models also do this. Stable Diffusion memorizes specific input images. It turns out that the difference between those output images and the originals is smaller than the difference introduced by compressing the original into a JPEG. We can do this for a lot of images.

Takeaways. There are three worlds we might have been in:

  1. Models always emit training data.
  2. Models sometimes emit training data.
  3. Models never emit training data.

We are in the second world, the blurry world where models sometimes emit training data, and sometimes don’t.

Gautam Kamath

Let’s take for granted that ML models are likely to infringe. In the U.S., that requires access plus substantial similarity. So one response is to fix access: remove any copyrighted image from the training set inputs. But it might not be obvious what is copyrighted. Another response is to fix substantial similarity: filter the outputs to remove substantially similar images. Again, though, we don’t have a clear definition of substantial similarity.

So let’s relax a requirement. Access-freeness may be hard to guarantee, and may be too strict. So we’ll go for near access-freeness (as defined by Vyas, Kakade, and Barak). We’ll try to use a model that is close to one that didn’t have access to the copyrighted work.

Is this kosher? Would it hold up in a court of law? I have no idea; I’m not a lawyer. But if it does, it will help with some of the previous challenges. No hard removal problems for training data.

So how do you do this? Let’s turn to differential privacy for inspiration. We feed a dataset into an algorithm to produce a model. Imagine, however, that we had a dataset that was different in one entry. An adversary is trying to tell which of the two datasets we trained on. If they can’t do much better than random guessing, then the training procedure was differentially private. The procedure is differentially private when the model distributions don’t change much when adding or removing one point. DP is widely used in government and industry.
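
As a toy illustration of that definition, consider the classic Laplace mechanism on a counting query. The code below is illustrative only: with noise scaled to 1/epsilon, an adversary who sees the released count cannot reliably tell two neighboring datasets apart.

```python
# Toy illustration of differential privacy via the Laplace mechanism.
# The two datasets differ in one record; the distribution of released
# counts changes by at most a factor of exp(epsilon).
import numpy as np

def dp_count(data, epsilon: float, rng) -> float:
    # A count has sensitivity 1, so Laplace noise with scale 1/epsilon
    # gives epsilon-differential privacy for this query.
    return len(data) + rng.laplace(scale=1.0 / epsilon)

rng = np.random.default_rng(0)
epsilon = 1.0
d1 = ["record"] * 100   # original dataset
d2 = d1[:-1]            # neighboring dataset: one record removed

print([round(dp_count(d1, epsilon, rng), 1) for _ in range(5)])  # near 100
print([round(dp_count(d2, epsilon, rng), 1) for _ in range(5)])  # near 99, hard to distinguish
```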

Concretely, DP protects against membership inference (is a single data point in the dataset?), and against revealing any information that wouldn’t be present if a particular data point weren’t trained on.

We can train a differentially private model on training data. Now consider any arbitrary copyrighted point. The resulting model is close to one trained on the same dataset minus that point. So differential privacy is very close to near access-freeness.

Design choices:

  • What value of epsilon is sufficient? There’s no free lunch; the smaller you set epsilon, the lower the utility of the model.
  • What is a point? A sentence? A document? An image? A portion of one? All images in a series? All of an artist’s works?

Note that copyright and privacy violations are not identical. Consider also lawsuits claiming copying of artists’ styles. Differential privacy might not help here, because style is not a property of a single data point.

Panel: Privacy

Moderators: Katherine Lee and Deep Ganguli
Panelists: Kristen Vaccaro, Nicholas Carlini, Miles Brundage, Gautam Kamath, and Jack Balkin

Lee: What is privacy?

Balkin: Dan Solove wrote a famous taxonomy of privacy. Take your pick. Sometimes it’s about control of information. Sometimes it’s about manipulation, or disclosure. It’s a starfish concept.

Brundage: I’ve read various definitions but I don’t want to regurgitate one I’ve memorized.

Carlini: You know it when you see it. I try to do most of my work with things that are obvious violations that everyone agrees on. For most attack research, you don’t need a working definition.

Kamath: Computer scientists often like to work with specialized definitions. I work on DP, which is a very specific notion.

Vaccaro: I spent a lot of time trying to explain privacy guarantees to people and see what they understand. I’m also interested in networked conceptions of privacy. When you share a picture of yourself, you’re also sharing information about your friends, family, and associates.

Ganguli: What are the most important privacy concerns you worry about in your work? There’s a show on Netflix called Deep Fake Love in which people are shown deep fakes of their partners cheating. As a community, what types of privacy should we be thinking about?

Balkin: People most worry about the loss of control and the obliteration of circles of trust. Another possibility is about false light: the deep fakes problem. You can construct what looks like the person in almost any situation. The method of collection can be one of the most serious violations. We could think about privacy at each stage.

Carlini: I’m mostly worried about data leakage about individual people. If we can’t solve that, what are we even doing?

Balkin: Vulnerability to attack might be an independent privacy problem. It’s a new kind of privacy problem.

Lee: Is generative AI different than search or other kinds of data collection? Take the personal-info leak. Is this worse than Google Search?

Carlini: The reason we do what we do on models trained online is so that we can do it without violating privacy. We could do this on models trained on hospital records, but I don’t want to violate medical privacy. The takeaway is that if this model was trained on your private emails, it would have leaked them, so we should be very careful about training on more private data.

Brundage: It’s important to compare deployed systems to deployed systems; think about ChatGPT refusing certain requests.

Vaccaro: A question I still had at the end of the morning is about the power dynamics. Obviously I spend a lot of time working on social media. People post things there with certain expectations about how they’re going to be used. With generative AI, people have a sense that there is something larger coming out of these uses.

Kamath: I think we don’t have a very good idea of what we agree to and where our data winds up. People click to agree and think things will be fine. Maybe with generative AI, things won’t be fine.

Balkin: Suppose I’ve trained a model, and then I put in a picture of this guy here, and ask the model to predict his medical situation and intimate details, which we would ordinarily consider private? (Asking about true predictions, not hallucinations.)

Kamath: You could tell name, demographics, and some broad predictions based on that (e.g. of Indian descent so of higher diabetes risk, but young so maybe not).

Balkin: So a possible future problem is that you can infer sensitive information, not just memorize it.

Ganguli: I used to ask who is Deep Ganguli, and the model would say “I don’t know who that is and I don’t know anything about anyone who works at Anthropic.” Now it says I work at the University of Calcutta, because Ganguli is a Brahmin last name and it knows I do some kind of research.

Balkin: A model of privacy violation is that information is given to a trusted person (e.g. a doctor) and then it leaks out to the wrong person who uses it for an inappropriate purpose. But suppose I could generate that information without any individual leaking it, that solely through different pieces of information it could infer things about me. That seems to be the new thing that would be quite different from search. Once you can do it for one person, you can do it for lots of people.

Lee: This reminds me of recommendation algorithms. Any one product might not lead to an inference, but a collection of them might.

Balkin: We have the same thing with political behavior.

Kamath: Suppose you see a person smoking one cigarette, then another. You say to them, “I think you might develop cancer.” Is that a privacy violation? Now consider a machine learning model that draws complex inferences. Does that become a privacy violation?

Ganguli: How should we think about malicious access versus incidental access? Is this a useful distinction for generative AI?

Carlini: Suppose I had a model that would reveal all kinds of personal information about me if you asked “Who is Nicholas Carlini?” That would be much worse than a model that would require a very specialized prompt to generate these outputs. We had this discussion in the copyright context as well – a model that infringes when prompted very specifically for infringement, versus one that does it on its own. RLHF does fairly well at preventing the latter, but has a much harder time preventing the first type.

Brundage: Alternatively, at what point is it sufficiently difficult that it’s not an attractive alternative to just Googling it?

Vaccaro: It’s also buying personal data on the market, not just search.

Lee: Does malicious versus incidental change how you think about the legal analysis?

Balkin: You’re interested in both. If you know there are bad actors, you might have some responsibility to fine-tune to avoid that. Suppose I want to make some money off of vulnerable individuals. I know there are people who are financial idiots who will buy worthless coins. Can I use LLMs to identify these people? Do we have the capacity to do that?

Vaccaro: We don’t need generative models for this. You can do this with regular ML. And in fact you can buy lists of people with vulnerable attributes.

Balkin: Do these new technologies make it easier or more plausible to prey on people? Are they useful for even more efficient preying?

Kamath: How much do these datasets cost?

Carlini: Datasets with credit-card numbers are about $5 to $20 per person. But that’s illegal.

Vaccaro: The legal lists are very cheap. Are there ways for generative models to extract more information? Frankly, social media is very good at this.

Balkin: In the world of social media, the dirty little secret is that the companies promise that their algorithms are very good, and they’re bilking their advertisers. But could these generative AI technologies actually make good on these promises?

Kamath: Do we consider targeted advertising a privacy violation?

Balkin: It has to do with the vulnerability of the target.

Lee: It kind of sounds like we’re talking about traditional ML here. Generative AI has different media: text, audio, video. So you could spoof people by pretending to be someone they know using their voice.

Kamath: I’m not sure copying a voice is a privacy violation.

Carlini: I don’t see this as a privacy attack; that’s a security attack.

Vaccaro: At the same time, this does expose the fact that there should be even more hackers working on these models. You could be very creative at misusing these models.

Carlini: We can go to automated spearphishing.

Balkin: Our conversation is about why we care about privacy. If you think the harm is the creation of vulnerability, then you can see the privacy connection to exploitation.

Question: With generative AI, we have huge pre-trained models. Do you think it’s better to partition the model and only allow access through a service, or is it better to have a public model like Meta has done?

Carlini: I want public models. I think holding things secret only delays the problems. Either don’t release it at all, or actually release it and let people do the security analysis. Trying to hold it back is not going to work. Maybe the OpenAI person feels differently.

Brundage: I think it’s a complex issue. I worry a lot about the implications of big jumps in capabilities. I worry about “just release it” especially when there is a Cambrian Explosion of pairing models up with tools.

Kamath: A classic issue in image classification is adversarial examples. We tried to solve them. And then there was recently a talk showing that you can do the same things against LLMs. Seems like the stakes have been raised. It’s helpful if we can see these issues in the simplest models to predict future issues.

Paul Ohm: Invitation to the Privacy Law Scholars Conference, which happens every June, where we have these conversations. Could you train a generative AI that would never say anything about any individual person ever? You’d be very confident that certain privacy harms wouldn’t be achievable. Would that be tractable?

Fatemehsadat Mireshghallah: You could use theory of mind to keep track of this. (There was a theory of mind workshop here yesterday.) You have to figure out who is a person and reason about what the user is asking about them.

Lee: This is another thing you can try to align a model to do.

Question: I come from Los Alamos, where the concept of putting a technology out there is starting to feel a bit poignant. We have fictional examples of people like Sherlock Holmes who can infer personal details from small observations. From a philosophical/legal standpoint, what’s the difference between having a few people like that walking around and having a high-volume tool?

Carlini: There’s a difference between standing on the street and looking in someone’s window, and looking in through a telescope from far away. But technically they’re the same thing. This is mostly a legal question.

Balkin: The difference has to do with power relations. Sherlock Holmes works with Scotland Yard. If he were to use his powers for blackmail, that would be different. Another difference is that there’s only one of him. If he existed, what restrictions would be put on him? In the law, we’re very worried about concentrations of extreme power. A lot of privacy law is about this.

Kamath: We do have large teams of Holmeses running around. The descendants of the Pinkertons attack unions by collecting intrusive information.

Balkin: That’s not necessarily legal; the problem is often that they’re not called to account for the illegal things they do. Privacy is about the use of information power to take unfair advantage of others and treat them unjustly.

Vaccaro: In San Diego we’re fighting about license plate readers.

Balkin: Unfortunately, this probably isn’t a Fourth Amendment violation.

Vaccaro: Maybe meeting the law is way too low a bar.

Lee: This is also a problem for making a product. User perceptions of privacy matter a lot, too.

Question: I feel like privacy and AI is a problem for rich people. Take the GDPR. The big companies benefitted from this regulation, because they had the tools and competencies and scale to implement it. Same thing with the AI Act in the EU.

Brundage: Privacy is for everyone.

Vaccaro: My concern is that you can pay for privacy. If I want email without Google reading it, I can pay more for it and set up my own server.

Kamath: It’s a tradeoff. We can ask with DP about how much society benefits, and over time there may be more expertise and tools.

Andres Guadamuz: I have to strongly disagree. The GDPR and the AI Act aren’t just being pushed by large companies; the big companies are fighting the enforcement of these privacy laws. But the GDPR benefits citizens; I’ve used it personally. The AI Act is going to be a disaster, but that’s not being pushed by the big companies.

Question: You (Balkin) said AI companies will make free-speech arguments. Why will their arguments be weaker than social media companies’ arguments? Will AI alignment efforts improve their First Amendment arguments?

Balkin: The immediate analogy they will make is that fine-tuning to prevent hate speech is a kind of editorial function, like newspapers, and that it’s no different than content moderation at social-media companies. But no one doubts that social media hosts people’s speech. Does that mean that the AI company is conceding that it hosts the speech of the AI? That would put them in a different litigation posture than claiming it’s just a product. That’s the puzzle. (There are people who think that what social media companies do is not editorial.) It would follow from the editorial-function argument that an AI company would have a First Amendment right not to fine-tune.

Question: From an EU perspective, data protection has a lot of overlap with privacy but is considered a distinct right. Under the GDPR, a lot of targeted advertising and cookie targeting is unlawful. It might be that they could be done lawfully, but currently the IAB’s consent framework legally falls short. So when you take these profiling technologies, and you layer on top of that generative AI, what happens when the model not only predicts the most likely token, but also predicts what the person wants to hear?

Kamath: So is model personalization a privacy violation?

Carlini: I don’t see how this being an LLM makes it different.

Lee: The harms are potentially worse because it’s interactive.

Carlini: Of course, if you see my personalized search results, that’s the same problem. I don’t see how the LLM makes it fundamentally a different object.

Lee: We’ll end there, on the question of whether generative AI changes the privacy lawsuits.

Some Skepticism About NFT Copyright

I am participating today in the Copyright Office’s NFT Roundtable. Here is the text of my (brief) opening remarks:

Good morning, and thank you. I’m a professor at Cornell Law School and Cornell Tech. I would like to make one point, at a high level of abstraction. It may seem obvious, but I think it is urgent not to lose sight of.

One promise of blockchain is that it is a perfect paper trail. But all paper trails can fail. Sometimes this is because of technical failures in a record-keeping system itself, which is the problem that blockchain attempts to address. But more often it is because the information that parties attempt to record never corresponded to reality in the first place.

Some transactions that look valid on paper are not, because of forgery, fraud, duress, or mistake. And some otherwise valid transactions are imperfectly documented: perhaps the contract was signed in the wrong place. If a transactional form is used enough times, everything that can go wrong with it eventually will.

The legal system deals with these cases all the time. When the facts on the ground and the records in the books get out of sync, a court or agency will step in to bring them back into alignment. Sometimes in such cases the paper trail prevails. But often it does not, and the legal system will ignore the records, or correct them to match reality.

The application to NFTs should be apparent. The transfer of an NFT by entering a smart-contract transaction on a blockchain is a kind of paper trail, and all paper trails can fail. Some intended NFT transfers will not go through, and some NFT owners will lose control of their NFTs without giving legally valid consent.

This means that if the state of copyright ownership or licensing is tied to ownership of an NFT, one of two propositions must be true. Either the legal system must have some mechanism for correcting the blockchain when its records are in error, or else in some cases copyright owners will lose legal control of their works through preventable forgery, fraud, duress, or mistake.

It is sometimes said that the advantage of a blockchain is that on-chain records are immutable and authoritative. That is precisely why I am skeptical of blockchains in the copyright space. To quote Douglas Adams, “The major difference between a thing that might go wrong and a thing that cannot possibly go wrong is that when a thing that cannot possibly go wrong goes wrong it usually turns out to be impossible to get at and repair.”

Reading Without Footprints

The phrase “I’m not going to link to it” has 385,000 results on Google. The idea is usually that the author wants to explain how someone is wrong on the Internet, but doesn’t want to reward that someone with pageviews, ad impressions, and other attention-based currencies. “Don’t feed the trolls,” goes the conventional wisdom, telling authors not to write about them. But in an age when silent analytics sentinels observe and report everything anyone does online, readers can feed the trolls without saying a word.

Actually, the problem is even worse. You can feed the trolls without ever interacting with them or their websites. If you Google “[person’s name] bad take,” you tell Google that [person’s name] is important right now. If you click on a search result, you reward a news site for writing an instant reaction story about the take. Every click teaches the Internet to supply more car crashes.

Not linking to the bad thing is usually described as a problem of trolls, and of social media, and of online discourse. But I think that it is also a problem of privacy. Reader privacy is well-recognized in law and in legal scholarship, and the threats it faces online are well-described. Not for nothing did Julie Cohen call for a right to read anonymously. Surveillance deters readers from seeking out unpopular opinions, facilitates uncannily manipulative advertising, and empowers the state to crush dissent.

To these I would add that attention can be a signal wrapped in an incentive. Sometimes, these signals and incentives are exactly what I want: I happily invite C.J. Sansom to shut up and take my money every time he publishes a new Shardlake book. But other times, I find myself uneasily worrying about how to find out a thing without causing there to be more of it in the world. There’s a weird new meme from an overrated TV show going around, and I want to know what actually happened in the scene. There’s a book out whose premise sounds awful, and I want to know if it’s as bad as I’ve been told. Or you-know-who just bleated out something typically terrible on his Twitter clone, and I don’t understand what all the people who are deliberately Not Linking To It are talking about.

We are losing the ability to read without consequences. There is something valuable about having a realm of contemplation that precedes the realm of action, a place to pause and gather one’s thoughts before committing. Leaving footprints everywhere you roam doesn’t just allow people to follow you. It also tramples paths, channeling humanity’s collective thoughts in ways they should perhaps not go.

I Do Not Think That NFT Means What You Think It Does

I recently tweeted that every sentence of this “explanation” of blockchain-based non-fungible tokens (NFTs) from the Harvard Business Review is false:

NFTs have fundamentally changed the market for digital assets. Historically there was no way to separate the “owner” of a digital artwork from someone who just saved a copy to their desktop. Markets can’t operate without clear property rights: Before someone can buy a good, it has to be clear who has the right to sell it, and once someone does buy, you need to be able to transfer ownership from the seller to the buyer. NFTs solve this problem by giving parties something they can agree represents ownership. In doing so, they make it possible to build markets around new types of transactions — buying and selling products that could never be sold before, or enabling transactions to happen in innovative ways that are more efficient and valuable.

In a follow-up thread, I expanded on why I am so skeptical about NFTs. I thought it would be useful to clean up and collect my thoughts in one place. I am a law professor who thinks a lot about digital property and about decentralized systems, and I think the idea that NFTs are about to revolutionize property law misunderstands how property law actually works.

Loosely speaking, there are three kinds of property you could use an NFT to try to control ownership of: physical things like houses, cars, or tungsten cubes; information like digital artworks; and intangible rights like corporate shares.

By default, buying an NFT “of” one of these three things doesn’t give you possession of them. Getting an NFT representing a tungsten cube doesn’t magically move the cube to your house. It’s still somewhere else in the world. If you want NFTs to actually control ownership of anything besides themselves, you need the legal system to back them up and say that whoever holds the NFT actually owns the thing.

Right now, the legal system doesn’t work that way. Transfer of an NFT doesn’t give you any legal rights in the thing. That’s not how IP and property work. Lawyers who know IP and property law are in pretty strong agreement on this.

It’s possible to imagine systems that would tie legal ownership to possession of an NFT. But they’re (1) not what most current NFTs do, (2) technically ambitious to the point of absurdity, and (3) profoundly dystopian. To see why, suppose we had a system that made the NFT on a blockchain legally authoritative for ownership of a copyright, or of an original object, etc. There would still be the enforcement problem of getting everyone to respect the owner’s rights.

There are two ways to enforce NFT “ownership.” The first is to get the legal system to do it. Judges would issue orders saying you own this widget because you have the Widget NFT, and then county sheriffs would show up to take possession of the widget and give it to you. The thing is, if you’re going to do that, there’s no point to the blockchain. We already have land registries, the DMV, and the Copyright Office. The blockchain is just an inefficient way of telling judges and sheriffs to do the same thing.

The other is to enforce everything digitally, by linking the physical world to the blockchain using secure digital hardware devices. That way, your car won’t start unless you prove ownership of the YourCar NFT. There are some serious downsides here. When your computer gets hacked, you also lose ownership of your car!

Sometimes, NFT advocates avoid dealing with the inconvenient fact that the physical world doesn’t run on a blockchain by shifting to a future in online spaces that do. They propose a blockchain-based metaverse, or online games with NFT-based economies, etc. The thing is that we’ve had digital property in those virtual spaces for decades. None of them needed a blockchain to work.

The bottom line is that almost1 everything NFT advocates want to do on a blockchain can be done more easily and efficiently without one, and the legal infrastructure needed to make NFTs work defeats the point of using a blockchain in the first place.


I say “almost” everything because NFT art may be an exception. A lot of the current hype around NFTs consists of the belief that the rest of the world will follow the same rules as NFT art. But of course part of the point of art is that it doesn’t follow the same rules as the rest of the world. ↩︎

Some Mistakes I Have Personally Found in Published Federal Judicial Opinions

Applying these principles, the court [in Armstrong v. Eagle Rock Entm’t, Inc., 655 F. Supp. 2d 779, 786 (E.D. Mich. 2009)] found that Eagle Rock Entertainment’s decision to use Louis Armstrong’s picture on the cover liner of its DVD entitled, ‘Mahavishnu Orchestra, Live at Montreux, 1984, 1974,’ without consent was protected by the First Amendment. Rosa & Raymond Parks Inst. for Self Development v. Target Corp., 90 F. Supp. 3d 1256, 1264 (2015).

Armstrong involved Ralphe Armstrong, not Louis Armstrong, who died in 1971.

The examiner’s final rejection, repeated in his Answer on appeal to the Patent and Trademark Office (PTO) Board of Appeals (board), was on the grounds that claims 1 and 2 are anticipated (fully met) by, and claim 3 would have been obvious from, an article by Kalabukhova and Mikheyew , Investigation of the Mechanical Properties of Ti-Mo-Ni Alloys, Russian Metallurgy (Metally) No. 3, pages 130-133 (1970) (in the court below and hereinafter called “the Russian article”) under 35 U.S.C. §§ 102 and 103, respectively. Titanium Metals Corp. of America v. Banner, 778 F.2d 775, 776 (Fed. Cir. 1985)

The author’s surname is Михеев, i.e., Mikheyev, not Mikheyew. There is no letter in the Cyrillic alphabet that transliterates to “w” under any commonly used system of Romanization.

GCC filed a trademark application for the mark GUANTANAMERA for use in connection with cigars on May, 14, 2001. When translated, “guantanamera” means “(i) the female adjectival form of GUANTANAMO, meaning having to do with or belonging to the city or province of Guantanamo, Cuba; and/or (ii) a woman from the city or province of Guantanamo, Cuba.” (Op. U.S.P.T.O. at 2.) Many people are also familiar with the Cuban folk song, Guantanamera, which was originally recorded in 1966. (Id. at 12-13.) Guantanamera Cigar Co. v. Corporacion Habanos, SA, 729 F. Supp. 2d 246, 250 (D.D.C. 2010)

The first recording of “Guantanamera” (lyrics adapted by Julián Orbón from poetry by José Martí, music by Joseíto Fernández) was probably sometime in the 1930s by Fernández. It was released in the United States in two well-known versions in 1963, one by the Weavers (from a 1955 concert) and another by Pete Seeger. All of these predate the 1966 easy-listening version by the Sandpipers.

The Copyright Law of Embedding Just Got a Lot More Interesting

Tim Lee has a remarkable story at Ars Technica about a remarkable copyright case, McGucken v. Newsweek. Its headline, “Instagram just threw users of its embedding API under the bus,” is not an exaggeration. (Disclosure: I am quoted in the story, and I learned about the case from being interviewed for it.) The facts are simple:

Photographer Elliot McGucken took a rare photo (perhaps this one) of an ephemeral lake in Death Valley. Ordinarily, Death Valley is bone dry, but occasionally a heavy rain will create a sizable body of water. Newsweek asked to license the image, but McGucken turned down their offer. So instead Newsweek embedded a post from McGucken’s Instagram feed containing the image.

This is the third case I am aware of in the Southern District of New York in the last two years on nearly identical facts. One of them, Sinclair v. Ziff Davis, held that Mashable was not liable for an Instagram embed. The court reasoned that by uploading her photograph to Instagram, photographer Stephanie Sinclair agreed to Instagram’s terms of service, including a copyright license to Instagram to display the photograph – and also thereby allowed Instagram to sublicense the photograph to its users who used the embedding API. Thus, Mashable had a valid license from Sinclair by way of Instagram, so no infringement.

McGucken agrees with most of this reasoning, but stops just short of the crucial step.

The Court finds Judge Wood’s decision [in Sinclair] to be well-reasoned and sees little cause to disagree with that court’s reading of Instagram’s Terms of Use and other policies. Indeed, insofar as Plaintiff contends that Instagram lacks the right to sublicense his publicly posted photographs to other users, the Court flatly rejects that argument. The Terms of Use unequivocally grant Instagram a license to sublicense Plaintiff’s publicly posted content, and the Privacy Policy clearly states that “other Users may search for, see, use, or share any of your User Content that you make publicly available through” Instagram.

Nevertheless, the Court cannot dismiss Plaintiff’s claims based on this licensing theory at this stage in the litigation. As Plaintiff notes in his supplemental opposition brief, there is no evidence before the Court of a sublicense between Instagram and Defendant. Although Instagram’s various terms and policies clearly foresee the possibility of entities such as Defendant using web embeds to share other users’ content, none of them expressly grants a sublicense to those who embed publicly posted content. Nor can the Court find, on the pleadings, evidence of a possible implied sublicense. (citations omitted)

Lee did something smart with this dueling pair of cases: he got Facebook (Instagram’s owner) to go on record with its interpretation of its own terms of use.

“While our terms allow us to grant a sub-license, we do not grant one for our embeds API,” a Facebook company spokesperson told Ars in a Thursday email. “Our platform policies require third parties to have the necessary rights from applicable rights holders. This includes ensuring they have a license to share this content, if a license is required by law.”

In plain English, before you embed someone’s Instagram post on your website, you may need to ask the poster for a separate license to the images in the post. If you don’t, you could be subject to a copyright lawsuit.

This statement, I think it is fair to say, comes as a surprise to Mashable, to Judge Wood, and to all of the Instagram users who embed photos using its API. Major online services offer widely-used embedding APIs, and media outlets make extensive use of them. I would not say that it is universal, but it is certainly a widespread practice for which, it is widely assumed, no further license is needed. If that is not true, it is a very big deal, and a great many Internet users are now suddenly exposed to serious and unexpected copyright liability.

McGucken is not the end of the story. I would have said – and in fact I initially told Lee – that it is possible the court would reach a different conclusion at a later stage of the case, once it had more facts about Instagram’s terms of use. That … no longer seems likely. But it is still quite possible Newsweek could win and be allowed to use the embedded photograph. It raised a fair use defense, and might well prevail on that at a later stage. It might also be able to rely on the server rule.

The server rule, which can be traced to Perfect 10 v. Amazon.com from the Ninth Circuit in 2007, holds that only the person whose server transmits a copy of an image “displays” that image within the meaning of the Copyright Act. In an embedding case like Sinclair or McGucken, that would be Instagram, not Mashable or Newsweek – that is how embedding works. There is no dispute that Instagram is licensed to publicly display copies of these photographs; the photographers agreed as much when they uploaded them. So on the server test, no sublicense is needed; embeds are noninfringing.

The server test, although widely relied on by Internet users and Internet services, has also been criticized. The third SDNY embedding API case, Goldman v. Breitbart, held that the defendant websites could be liable for Twitter embeds of Goldman’s photograph. In a detailed opinion, the Goldman court considered and rejected the server test. (Side note: There was an important potential factual distinction in Goldman. There, unlike in Sinclair and in McGucken, the photograph had been uploaded to Twitter by unauthorized third parties, who could give no license to Twitter and thus none to the defendants. But this distinction played no part in Goldman’s legal analysis. While these facts could be relevant to the existence of a license, they don’t affect whether the image was displayed or by whom.)

To summarize, there are two possible routes to finding that API embeds of a photographer’s own uploads are allowed: either the service itself displays the image under the server rule, or the embedder displays it but has a valid sublicense. Goldman rejected the server rule, but did not consider the existence of a sublicense. Sinclair did not consider the server rule but held there was a sublicense. McGucken did not consider the server rule – inexplicably, Newsweek did not ask the court to hold that there was no direct infringement under the server rule – and held that there was no sublicense. No court has considered and ruled on both arguments together, despite the fact that they are joined at the hip.

A particularly careful and thorough critique of the server rule is Embedding Content or Interring Copyright: Does the Internet Need the “Server Rule”?, by Jane Ginsburg and Luke Ali Budiardjo. They argue that the server rule misreads the Copyright Act and should, with Goldman, be rejected. They believe, however, that the sky will not fall, because licenses will fill any gaps that should be filled. They note that YouTube’s terms of service, for example, explicitly provide for a license grant from uploaders to YouTube’s users, and they predict that this practice will be common:

Therefore, it seems likely that platforms can (and will) utilize Terms of Service agreements that are sufficiently broad to protect themselves and their users from infringement claims based on user “sharing” of platform content through platform mechanisms.

I would have thought so, too. Hence my surprise at Instagram’s position. There are two possibilities here. One is that Instagram does not explicitly grant a license because it believes the server test is the law. That position has been risky ever since Goldman. The other is that Instagram is willing to expose its users to copyright liability when they use its system as intended. I think it is not unreasonable to describe this, as Ars does, as throwing its users under the bus.

One last twist. In late April, Sinclair filed a motion for reconsideration of the holding that Mashable had a sublicense from Instagram, including some challenges to the court’s interpretation of Instagram’s terms of use. The main brief in support of reconsideration could be clearer, but her reply brief puts the issue squarely: “Nowhere has Mashable put in the record any proof as to how Instagram ‘validly exercised’ its right in granting Mashable a sublicense of Plaintiff’s photo.” There things sat, until on June 2, Sinclair called the court’s attention to the McGucken order, and then today, June 4, called its attention to the Ars story published just hours before. I speculated to Lee that McGucken “is going to blow up the Sinclair case.” I shouldn’t have used the future tense. It already has.

A Few Thoughts on Cisco v. Beccela’s

Rebecca Tushnet blogged a trainwreck of a copyright opinion in Cisco Systems, Inc. v. Beccela’s Etc. from the Northern District of California. The software-licensing caselaw was not good, but this is one of the most confused opinions I’ve seen.

In brief, Cisco sells networking devices through a network of authorized dealers. The defendants allegedly sell Cisco devices outside of these authorized channels. Cisco sued on a variety of theories, including copyright infringement. In response, the defendants claimed they were making legal first sales.

Ninth Circuit caselaw (see Vernor, Psystar, and Christenson) has held that first sale doesn’t apply to software distributed on CD-ROMs or DVDs which are “licensed” rather than “sold,” and uses a messy multi-factor test to determine whether a given shiny plastic disc is licensed or sold. The defendants here argued that the result should be different where the software is “embedded in hardware,” but the court disagreed that this made a difference. “The Ninth Circuit in these cases did not distinguish the first sale doctrine’s application between software and hardware … .” As a result, “[T]he first sale doctrine does not apply to software licensees even when the software is embedded in lawfully purchased hardware … .”

To which I can say only, what does the court think that software IS?

Zoolander: The files are in the computer?!

“Software” could refer to the information in a program – the sequence of bits or characters – or it could refer to a specific physical instantiation of the program – a chip, printout, or other object encoding that information. In copyright terms, the former is a “work”; the latter is a “copy.” Cisco has a copyright in the work, and we can assume that the copyright has never been validly licensed to the defendants. But in first sale, that’s irrelevant. If I’m “the owner of a particular copy … lawfully made,” then I can distribute that copy regardless of whether I have any license from the copyright owner. That’s what first sale is. The reason that Vernor and other cases rejected the application of first sale is that the copy had been licensed rather than sold: that messy multi-factor test tries to figure out what rights the possessor has over a particular shiny plastic disc. For example, does the copyright owner have the right to demand the shiny plastic disc back? If so, then the possessor may not be an “owner” of that “particular copy” and so first sale may not apply.

This reasoning doesn’t on its face distinguish between shiny plastic discs and computer hardware. But that doesn’t mean the two cases are the same. It’s right there in the Beccela’s opinion. In fact, it’s right there in the same sentence where the court announces its conclusion. Cisco’s software isn’t just “embedded in hardware”; it’s “embedded in lawfully purchased hardware,” in the court’s own phrase. That ought to end the case. If the hardware is lawfully purchased (note: “purchased” and not “licensed”), then the defendants are owners of the copies of the software and have full first sale rights. Remember: “copies” are “material objects … in which a work is fixed,” a definition that includes both shiny plastic discs and dense arrays of transistors.

The court here honestly seems to believe that software can somehow be “embedded” in hardware without the hardware being a copy of the software, as though a file were in the computer but not of it. But there is no such thing. That is what it means to store digital information in a thing: the physical structure of the thing becomes an encoding of the information. Or, in copyright terms, a copy is a physical thing “from which the work can be perceived, reproduced, or otherwise communicated, either directly or with the aid of a machine or device.” That’s how software works.
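For readers who think in code rather than in copies, here is a toy sketch of the point, with hypothetical file names and a made-up bit sequence standing in for a real software image: fixing the same sequence of bits in any medium, whether an installer disc or flash memory soldered into a router, produces a “copy” of the same “work.”

```python
# A toy illustration of the work/copy distinction: the same bit sequence
# (the "work") fixed in two different material objects (two "copies").
# File names and contents are hypothetical stand-ins, not real images.

import hashlib

firmware_work = b"\x7fELF...pretend this is a router operating system..."

def fix_in(path: str, bits: bytes) -> None:
    """Fix the work in a material object (here, a file standing in for a medium)."""
    with open(path, "wb") as medium:
        medium.write(bits)

fix_in("installer_disc.img", firmware_work)   # the shiny plastic disc
fix_in("router_flash.bin", firmware_work)     # flash memory inside the device

def fingerprint(path: str) -> str:
    with open(path, "rb") as medium:
        return hashlib.sha256(medium.read()).hexdigest()

# Identical fingerprints: each object encodes the very same work, so each is
# a "copy" in the Copyright Act's sense -- the hardware no less than the disc.
assert fingerprint("installer_disc.img") == fingerprint("router_flash.bin")
```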

To be fair, I don’t think that courts in previous first-sale and software-licensing cases have been terribly careful about the work/copy distinction or about what software is. The opinions cited in Beccela’s are full of sloppy language that seems to invite this result. But that language was unnecessary; you could come out the same way in a DVD software first sale case while being careful about your terminology. Beccela’s takes these unintelligible fictions about how software works and turns them into an actual holding that is essential to the outcome of the case. It is rare to see the confusion at the heart of modern software copyright licensing so plainly stated.