As debate continues over how to handle artificial intelligence's influence on nearly every aspect of life, YouTube is taking steps to ensure AI is used responsibly on its platform. The company introduced two new AI policies today: one addressing unauthorized imitations of famous voices, and another requiring that AI-generated content be clearly labeled.
The company laid out its plans in a blog post titled “Our approach to responsible AI innovation.”
When creators upload a video, YouTube said, they'll soon have the option to indicate whether it contains AI-generated material. Content produced by AI will carry a label in the description. And for what the company calls "certain types of content about sensitive topics," a more prominent label will be applied directly on the video itself.
Additionally, if AI content depicts a real person, that person will be able to request its removal. Not all such content will be taken down; YouTube says it will weigh a number of factors when evaluating requests. Content considered parody or satire makes things murkier, and a well-known individual or public official will face a higher bar for removal. That gets complicated quickly, as the line between parody and harmful defamation can be tough to navigate.
In a similar move, the company's music partners will be able to request the removal of AI-generated content that mimics an artist's singing or rapping voice. If a label doesn't object to a particular video, it will likely be allowed to stay up (though probably not be monetized). But if the label objects, the video likely won't stick around long.
This decision comes just a few months after YouTube rolled out new AI music creation principles. Initially, the removal-request option will be available only to labels and distributors representing artists participating in YouTube's ongoing AI music experiments, with access expanding to additional labels and distributors over the next few months.
It's important to note that YouTube isn't necessarily against AI content. "Generative AI has the potential to unlock creativity on YouTube and transform the experience for viewers and creators on our platform," the blog post read. But that potential also carries new risks, which in turn require new safety measures.
The company stressed that it would make sure creators were aware of the new policy, but noted there would be penalties for failing to disclose AI content. Those penalties will vary, but could include removal of the video and demonetization. YouTube didn't elaborate on how it would determine whether content is AI-generated; AI detectors to this point have been notably unreliable.
Penalties for making a video with someone's voice are a little different. In fact, they're nonexistent, at least at first. "Content removed for either a privacy request or a synthetic vocals request will not result in penalties for the uploader," YouTube said.