Business Insider

ByteDance's OmniHuman-1 shows just how realistic AI-generated deepfakes are getting

ByteDance shared a demo video of an AI-generated Albert Einstein talking. ByteDance
  • ByteDance demoed an AI model designed to generate lifelike deepfake videos from one image.
  • ByteDance released test deepfake videos of TED Talks and a talking Albert Einstein.
  • Tech firms including Google and Meta are working on tools to better detect deepfakes.

Researchers at ByteDance, TikTok's parent company, showcased an AI model designed to generate full-body deepfake videos from one image and audio — and the results are scarily impressive.

ByteDance said that, unlike some deepfake models that can animate only faces or upper bodies, its OmniHuman-1 could generate realistic full-body animations that sync gestures and facial expressions with speech or music.

ByteDance published several dozen test videos, including videos of AI-generated TED Talks and a talking Albert Einstein, on its OmniHuman-lab project page.

In a paper published Monday that has caught the attention of the AI community, ByteDance said the model supported different body proportions and aspect ratios, making the output look more natural.

"The realism of deepfakes just reached a whole new level with Bytedance's release of OmniHuman-1," Matt Groh, an assistant professor at Northwestern University who specializes in computational social science, said in an X post on Tuesday.

OmniHuman-1 is the latest AI model from a Chinese tech company to grab researchers' attention following the release of DeepSeek's market-shaking R1 model last month.

Venky Balasubramanian, the founder and CEO of a tech company called Plivo, said in a Tuesday X post: "Another week another Chinese AI model. OmniHuman-1 by Bytedance can create highly realistic human videos using only a single image and an audio track."

ByteDance said its new model, trained on roughly 19,000 hours' worth of human motion data, could create video clips of any length within memory limits and adapt to different input signals.

The researchers said OmniHuman-1 outperformed other animation tools in realism and accuracy benchmarks.

Deepfake detection

Deepfakes have become harder to detect as the technology has become more sophisticated. Google, Meta, and OpenAI have introduced AI watermarking tools, such as Google's SynthID and Meta's Video Seal, to flag synthetic content.

While these tools offer some safeguards, they're playing catch-up with the misuse of deepfake technology.

AI-generated videos and voice clones have fueled harassment, fraud, and cyberattacks, with criminals using AI-generated voices to scam victims. US regulators have issued alerts, while lawmakers have introduced legislation to tackle deepfake porn.

A World Economic Forum article last month highlighted how the technology was exposing security flaws.
