As new AI products and services flood the internet, one side effect has become increasingly hard to ignore — the rise of deepfakes. While some are harmless and meant for humor, others are maliciously designed to deceive viewers, impersonate public figures, or promote scams. Fake ads featuring celebrities like Elon Musk have become a particularly visible example of this growing problem.
Deepfakes do have legitimate uses in entertainment and media when deployed responsibly. But for creators whose images and voices are being misused to mislead or defraud audiences, the impact can be deeply personal. To address this, YouTube has introduced a new system called “Likeness Detection.”
YouTube says the feature has been in testing for some time but is now being rolled out to a wider group of creators. The goal is to give users a tool to track and respond to unauthorized uses of their image or voice across the platform — protecting creators from unwanted impersonations and potential misinformation.
How Likeness Detection Works
In a video posted to YouTube’s Creator Insider channel, the company outlined how the system operates and how creators can enroll. To participate, creators visit the new Likeness tab in their account settings and consent to data processing. YouTube then requests a government-issued photo ID and a short selfie video, which help its AI models accurately detect potential impersonations.
Once enabled, the system automatically scans uploaded videos for matches. When a likely impersonation is detected, creators will see a notice in their dashboard. From there, they can file a privacy-based removal request, following YouTube’s standard policy. In cases of large-scale misuse or monetized deepfakes, creators can escalate to a copyright takedown request directly from the same page.
Participation in Likeness Detection is optional. Creators can withdraw from the program at any time, and YouTube will stop scanning for matches within 24 hours of the request.
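For readers who want a more concrete mental model of that enroll, scan, and withdraw workflow, here is a short, purely illustrative Python sketch of how a pipeline of this general shape could fit together. Nothing in it comes from YouTube: the hash-based embed_face stand-in, the similarity threshold, and the notice fields are all hypothetical, and a real system would rely on trained face and voice models rather than anything this simple.

```python
# Illustrative sketch only: a toy likeness-matching pipeline, not YouTube's
# actual system. The embedding function, threshold, and record fields are
# hypothetical placeholders.
from dataclasses import dataclass
import hashlib
import math


def embed_face(frame: bytes) -> list[float]:
    """Stand-in for a real face-embedding model.

    A production system would run a neural network over a detected face crop;
    here we hash the raw bytes into a fixed-length vector so the example runs.
    """
    digest = hashlib.sha256(frame).digest()
    return [b / 255.0 for b in digest]  # 32-dimensional toy embedding


def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


@dataclass
class Enrollment:
    """Reference likeness built from the creator's consented selfie frames."""
    creator_id: str
    reference_embeddings: list[list[float]]


def enroll(creator_id: str, selfie_frames: list[bytes]) -> Enrollment:
    # Enrollment: turn the verification selfie video into reference embeddings.
    return Enrollment(creator_id, [embed_face(f) for f in selfie_frames])


def scan_upload(enrollment: Enrollment, video_frames: list[bytes],
                threshold: float = 0.95) -> list[dict]:
    """Flag frames whose embedding is close to any enrolled reference.

    Each flag is the kind of record a dashboard notice could be built from:
    who may be impersonated, where in the video, and how strong the match is.
    """
    notices = []
    for index, frame in enumerate(video_frames):
        score = max(cosine_similarity(embed_face(frame), ref)
                    for ref in enrollment.reference_embeddings)
        if score >= threshold:
            notices.append({
                "creator_id": enrollment.creator_id,
                "frame_index": index,
                "similarity": round(score, 3),
                "suggested_action": "privacy_removal_request",
            })
    return notices


def withdraw(enrollments: dict[str, Enrollment], creator_id: str) -> None:
    # Opting out: drop the stored reference so future uploads are not scanned.
    enrollments.pop(creator_id, None)


if __name__ == "__main__":
    creator = enroll("creator_123", [b"selfie frame 1", b"selfie frame 2"])
    # A frame identical to an enrolled selfie frame scores 1.0 and is flagged.
    print(scan_upload(creator, [b"some other frame", b"selfie frame 1"]))
```

The toy version mirrors the described workflow only loosely: consented reference material goes in once, new uploads are compared against it, and anything above a match threshold surfaces as a record the creator can act on.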
A Step Toward AI Accountability
YouTube’s new feature arrives amid increasing concerns about the weaponization of deepfake content. The platform is positioning Likeness Detection as part of a broader effort to build trust between creators and audiences, and to ensure that AI doesn’t erode authenticity online.
The move also follows YouTube’s latest round of visual updates to its app and website — signaling that alongside cosmetic refreshes, the platform is focusing heavily on creator safety and AI transparency.
With any luck, Likeness Detection will help reduce the flood of AI-generated impersonations cluttering homepages and restore some confidence to the creative ecosystem on YouTube.