The one exception is UMG v. Anthropic, because earlier versions of Anthropic's model would reproduce song lyrics in its output. That was the problem. The current status of that case is that Anthropic has put safeguards in place to prevent this from happening, and the parties have agreed that, pending the resolution of the case, those safeguards are sufficient, so they are no longer seeking a preliminary injunction. At the end of the day, the harder question for AI companies is not whether it is legal to engage in training, but what you do when your AI produces output that resembles a particular work.

Do you expect most of these cases to go to trial, or do you see settlements on the horizon?

There may be a lot of settlements. Where I would expect to see settlements is with the big players who either have large amounts of content or particularly valuable content. The New York Times case could end in a settlement and a licensing deal, with OpenAI perhaps paying money to use the New York Times' content. There is enough money at stake, though, that we will probably get at least a few judgments that set the parameters. The class-action plaintiffs understandably have stars in their eyes. There are a lot of class actions, and my guess is the defendants will resist them and hope to win on summary judgment. It is not obvious that these cases will go to trial. The Supreme Court in Google v. Oracle nudged fair-use law strongly in the direction of being resolved on summary judgment rather than in front of a jury. I think the AI companies will try very hard to get these cases decided on summary judgment.

Why would it be better for them to win on summary judgment rather than by jury verdict?

It is faster and cheaper than going to trial. And the AI companies are worried that they will not be viewed sympathetically, that many people will think, "Oh, you made a copy of a work; that should be illegal," and will not dig into the details of the fair-use doctrine.

There have been many deals between AI companies and media outlets, content providers, and other rights holders. Most of the time, these deals seem to be more about search than about the foundation models, or at least that is how they have been described to me. In your view, is licensing content for use in AI search engines, where answers are sourced through retrieval-augmented generation, or RAG, legally necessary? Why are they doing it this way?

If you use retrieval-augmented generation to pull up specific, targeted content, the fair-use argument becomes more challenging. An AI-powered search is much more likely to return text taken directly from one specific source in its output. Is that fair use? Maybe, but the place where it is at risk is that it competes more directly with the original source material. If, instead of directing people to the New York Times story, I have an AI that uses RAG to pull the text straight out of that story, that looks like a potentially market-harming substitute for the New York Times. So the legal risk is greater for the AI companies.

What do you want people to know about the generative-AI copyright fights that they might not know, or might be misinformed about?

The thing I hear most often is the idea that this is just a plagiarism machine, that all it does is take my stuff and grind it up and spit it back out in the form of text and answers. I hear a lot of artists say that, and I hear a lot of lay people say that, and it is just not right as a technical matter. You can decide whether generative AI is good or bad. You can decide whether it should be legal or illegal.
But it really is a fundamentally new thing we have not experienced before. The fact that it needs to train on a large amount of content in order to learn how sentences work, how arguments work, and various facts about the world does not mean it is just copying and pasting that content or making a collage. It genuinely produces things that nobody could have anticipated or predicted, and it generates a lot of new content. I think that is important and valuable.
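The retrieval pattern described in the RAG answer above can be made concrete with a short sketch. Everything in it is hypothetical: the corpus, the naive keyword scoring, and the prompt format are stand-ins, not any real system's API. The point is only to show where one specific source's text enters the model's context verbatim, which is what makes the output more likely to substitute for the original.

```python
# A toy retrieval-augmented generation (RAG) pipeline. All names and data
# here are hypothetical; this illustrates the mechanism, not a real product.

from dataclasses import dataclass

@dataclass
class Document:
    source: str
    text: str

# Hypothetical corpus the search engine has indexed.
CORPUS = [
    Document("example-news-site.com/story-1",
             "The city council voted 7-2 on Tuesday to approve the new transit plan."),
    Document("example-news-site.com/story-2",
             "Regulators opened an inquiry into the proposed merger late last month."),
]

def retrieve(query: str, corpus: list[Document], k: int = 1) -> list[Document]:
    """Rank documents by naive keyword overlap with the query (a stand-in
    for the embedding similarity a real retriever would use)."""
    q_terms = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda d: len(q_terms & set(d.text.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, docs: list[Document]) -> str:
    """Splice the retrieved text verbatim into the generation prompt.

    This is the step the interview points at: the model is asked to answer
    *from* one identified source, so its output tends to track that source
    closely instead of drawing on a diffuse training set.
    """
    context = "\n".join(f"[{d.source}] {d.text}" for d in docs)
    return (f"Answer using only the sources below.\n{context}\n\n"
            f"Question: {query}\nAnswer:")

if __name__ == "__main__":
    docs = retrieve("What did the city council vote on Tuesday?", CORPUS)
    # The retrieved sentence appears verbatim in the model's context.
    print(build_prompt("What did the city council vote on Tuesday?", docs))
```

In a real system the keyword overlap would be replaced by vector search and the prompt would be sent to a language model, but the structural point is the same: generation is conditioned on one identified source rather than on the model's training data as a whole, which is why the fair-use posture differs.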