
YouTube users who find their likeness being used illicitly in synthetic media on the site now have another tool in their arsenal – takedown requests. The platform announced it will accept reports requesting the removal of videos that use artificial intelligence to put words in people’s mouths, both literally and figuratively.

While AI generation holds promise, it also enables disturbing invasions of privacy when deployed without consent. YouTube recognizes this risk and aims to curb malicious deepfakes through a streamlined reporting flow. The process first requires verifying that the complainant is indeed the person being depicted without authorization. Proof of identity, such as a government ID, must match the personal details visible in the offending video.

From there, YouTube weighs factors such as public interest versus privacy and whether the same information is already openly available elsewhere. If the video qualifies as a policy breach, the uploader gets a 48-hour window to edit out the unapproved usage before removal. Merely making the video private won’t cut it, since access could easily be restored. YouTube stresses transparency too, now mandating disclosure of manipulated media that employs techniques like deepfakes, face swaps, and voice cloning.


While laudable in protecting people, moderation at YouTube’s scale presents unique difficulties. Just last month, an AI-generated fake of Elon Musk was livestreamed in an attempt to scam viewers. As the technology advances, so do bad actors, demanding constant vigilance. YouTube is also developing its own AI, reportedly soliciting rights to famous voices and songs.

