YouTube is developing new technologies to detect and manage AI-generated voices and likenesses on its platform. In a recent blog post, the company said it is building "synthetic-singing identification technology" into Content ID, its existing system that lets rights holders find unauthorized uses of their music. The new capability will detect AI-generated singing that simulates an artist's voice.
YouTube is also building a tool that will let people in creative fields, including music, acting, sports, and content creation, detect and manage AI-generated content that uses their faces. The company emphasized that scraping YouTube videos without consent violates its terms of service, a warning apparently aimed at companies that harvest videos to train deepfake and other generative models.
As generative AI continues to advance, YouTube acknowledges that creators may want more say over how their work is used to build new tools. The platform pledged to give creators choices over how third parties may use their content, with more details promised later this year.
The music industry has lobbied strongly for regulation of unauthorized AI reproductions of identity and voice. Pending bills such as the No FAKES Act and the No AI FRAUD Act would give individuals legal recourse when AI replicates their likeness or voice without consent.
YouTube's moves fit a broader push among social platforms to rein in AI content. Both YouTube and TikTok already require creators to label AI-generated videos, and in July YouTube began letting people request the removal of deepfakes that simulate their likeness.
At the same time, YouTube is building AI tools for creators. Last year it introduced Dream Screen, which lets YouTube Shorts creators generate video backgrounds from text prompts, and the YouTube Create app, a mobile video editor for short-form content similar to TikTok's CapCut. YouTube is also negotiating music licenses for AI-powered music creation, after an earlier effort drew only limited participation from recording artists.