Allegro
The TRAIN Act is a good start in protecting musicians from A.I. exploitation
Volume 125, No. 1, January 2025
Last month in our A.I. series we shared a compendium of noteworthy copyright lawsuits through which content creators have attempted to secure their intellectual property rights against A.I. developers who have allegedly impermissibly “ingested” their protected works.
But as I mentioned there, these suits are only as good as the current laws on the books. Weak laws afford little or no protection when enforcement is attempted. Such is the case when a content creator is incapable of proving that their work has been used to train A.I. software.
This issue is being addressed by Senator Peter Welch, who introduced the Transparency and Responsibility for Artificial Intelligence Networks Act (TRAIN Act), a proposed amendment to the U.S. Copyright Act that would enable copyright holders to ascertain whether their copyrighted works have been used without their consent to train generative A.I. models.
Senator Welch said, “If your work is used to train A.I., there should be a way for you, the copyright holder, to determine that it’s been used by a training model, and you should get compensated if it was.”
The AFM thanked Senator Welch for introducing the TRAIN Act. “There must be transparency in machine learning,” said AFM President Tino Gagliardi on the AFM’s Facebook page. He added, “A modern copyright system simply does not work without it. Musicians are otherwise left without adequate recourse to enforce our rights.”
At the core of the TRAIN Act is the introduction of a subpoena process that would allow the legal or beneficial owner of a copyright to request the clerk of any U.S. district court to issue a subpoena to a “model developer or deployer” for disclosure of “copies of, or records sufficient to identify with certainty,” the copyrighted works used to train a generative A.I. model.
However, the requester must have a “subjective good faith belief” that their works were used and provide a sworn declaration outlining this belief, the purpose of obtaining the records, and an assurance that the information would be used solely to protect their rights.
Failure by an A.I. developer to comply with the subpoena would result in a “rebuttable presumption” that the developer made copies of the copyrighted work. This mechanism seeks to “solve the ‘black box’ problem,” as few A.I. companies currently disclose their training data. By facilitating access to training records, the TRAIN Act attempts to respond to concerns from the creative community about unauthorized use of their works, signaling a legislative push towards greater transparency in A.I. development. Senator Welch’s office intends to reintroduce this legislation in 2025, since the previous Congress has ended.
While this proposed bill is a very good start, perhaps it doesn’t go far enough. Perhaps a beefed-up version can be proposed next go-around. Here are a few suggestions.
First, the bill places the onus on the copyright owner to request and enforce a federal subpoena to obtain information they should automatically have access to. That burden seems unfair given that it is their property right that is potentially being abrogated. Why should this burden be placed upon them, rather than requiring the software developer to seek permission to use copyright-protected material in the first place?
Second, the bill only provides access to information. It does not contain any remedy if it turns out the copyrighted material is unlawfully being used. The copyright owner, armed with the information obtained through the subpoena, would then have to go back to court to file a copyright infringement suit. Thus, the law has no actual teeth on its own.
Third, the TRAIN Act requires the copyright holder to present a subjective good faith belief that their work was used and provide a sworn statement to that effect. Yet the bill offers no guidance on what a copyright holder must show to establish that subjective good faith belief before the court.
Finally, failure by the software developer to comply with the subpoena creates only a “rebuttable presumption” that copyrighted material was used. The developer can still come forward and rebut that presumption in court.
One possible way to lessen the burden on the copyright holder is to make digital watermarking a requirement for all digitally created content. This would provide an objective basis that could be used to prove that copyrighted material was actually ingested by an A.I. developer.
We should all applaud this legislative attempt to curtail the uncompensated, nonconsensual use of copyrighted material to train generative A.I. platforms. But it is a point of departure, rather than the destination.
Send feedback on Local 802’s A.I. series to Allegro@Local802afm.org.
CLICK HERE TO JOIN THE LOCAL 802 A.I. COMMITTEE E-MAIL LIST AND TO LEARN ABOUT UPCOMING MEETINGS
OTHER ARTICLES IN THIS SERIES:
Case Tracker: Artificial Intelligence, Copyrights and Class Actions
Protecting musicians from the existential threats of artificial intelligence