Seedance 2.0 face limit: the 3 legit workarounds
2026/05/08


Seedance 2.0 refuses real human face uploads. The three ByteDance-documented workarounds: pre-set virtual avatars, AI-generated portraits, and pre-authorized real-person material.

You uploaded a photo of yourself, hit Generate, and got a polite refusal. Then you tried a photo of a friend, a celebrity, a profile picture downloaded from somewhere โ€” same refusal each time. This is not a bug, not a feature flag you can flip in settings, and not something the platform you're using can override. Seedance 2.0 has a hard upstream rule: the model refuses any reference image or video containing detectable real human faces[1]. The refusal is at the model layer, not the wrapper, which is why every legitimate Seedance 2.0 access platform โ€” seedance2.so, Higgsfield, Dreamina, OpenArt, the direct Volcengine API โ€” produces the same block.

The good news: there are three legitimate workarounds documented by ByteDance themselves, and one of them (the AI-generated portrait route) takes about 90 seconds and costs less than a dollar. This guide walks through all three with the actual prompts, parameters, and asset-id syntax, so you can get unblocked and ship.

TL;DR

  • Seedance 2.0 rejects real human faces in any uploaded image or video reference. The check runs at the model layer; no wrapper can disable it[1].
  • ByteDance documents three legitimate workarounds: (1) use one of the platform's pre-set virtual avatars referenced by asset://<ID>, (2) generate a fictional portrait with Seedream and use it as the reference, (3) supply your own pre-authorized real-person material with documented rights[1].
  • The fastest path for most users is route 2: generate a portrait with text-to-image, then feed it into reference-to-video. Total cost: ~$0.04-0.16 for the portrait plus the normal Seedance generation cost.
  • The asset:// ID system lets you reference platform-curated material (virtual avatars, sample assets, your own previously generated content) instead of uploading new images. Seedance 2.0 accepts these as references because they're pre-cleared.
  • Don't try to "trick" the filter. Cropping eyes out, adding sunglasses, blurring features โ€” all of these have known failure modes. The filter is multi-stage and catches edge cases reliably.

Why Seedance 2.0 refuses real human faces

This is worth understanding because it explains why every platform behaves the same way. ByteDance/Volcengine has a single line in the official Seedance 2.0 API spec: "Seedance 2.0 series models do not support direct upload of reference images/videos containing real human faces. To facilitate creators' use of likenesses, the platform offers the following solutions"[1]. The "following solutions" are the three workarounds documented below.

The reason is straightforward: real-face deepfakes are a legal and reputational liability. ByteDance ships the model behind a face-detection layer, refuses any input that trips it, and routes legitimate likeness use through a structured permission system (asset IDs). This is the same pattern Sora, Veo, and Runway use for the same reason, with different specific implementations.

What this means in practice on seedance2.so:

  • The API rejects the upload with a face-detection error before the generation even starts.
  • The error is identical whether you uploaded your own face, a celebrity, a stock photo of a model, or a wedding photo with multiple people.
  • Cropping the face out doesn't reliably help; the detection runs on the source image data, not just the visible crop, so remnants of a face in the uploaded frame still trip it.
  • Drawing or adding text/objects over the face usually doesn't help either; the multi-stage detector catches occlusions that look like deliberate evasion.
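
The refusal pattern above can be sketched client-side. This is a minimal illustration, not the real API: the exception name, message text, and `contains_real_face` flag are all hypothetical stand-ins, since the guide doesn't reproduce the actual error codes. What it captures is the documented behavior: the upload fails before generation starts, identically for any real face.

```python
# Hypothetical sketch of the refusal behavior. The error class, message,
# and input shape are illustrative assumptions, not the real API schema.

class FaceDetectionError(Exception):
    """Raised when an upload is rejected before generation starts."""

def submit_reference(image: dict) -> str:
    # Stand-in for the real upload call: refuse anything flagged as a
    # photographic human face, regardless of whose face it is.
    if image.get("contains_real_face"):
        raise FaceDetectionError(
            "Reference rejected: real human face detected. "
            "Use a preset avatar (asset://...), an AI-generated portrait, "
            "or pre-authorized material."
        )
    return "job-accepted"

try:
    submit_reference({"contains_real_face": True})
except FaceDetectionError as e:
    print(e)
```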

If you accept that the rule exists, the rest of this guide is about working with it productively. If you don't, no platform offers an exemption you'll find via Google search.

Workaround 1: Use a pre-set virtual avatar via asset:// ID

ByteDance/Volcengine maintains a library of pre-cleared virtual portraits ("preset virtual avatars" / ้ข„็ฝฎ่™šๆ‹Ÿไบบๅƒ) that Seedance 2.0 accepts as references[1]. You reference them with a special URL syntax: asset://<ASSET_ID> instead of a regular image URL or upload.

When this is a fit:

  • You want a recognizably-styled human character (business person, dancer, athlete, etc.) and don't care about exact identity match.
  • You're producing stock-style content where the character is a placeholder, not a specific person.
  • You want guaranteed-clean output with no risk of the platform retroactively removing it.

When this is not a fit:

  • You need your own face or a specific real person โ€” the asset library is generic.
  • You want narrative continuity with a specific look that doesn't exist in the library.

The asset library is browsable from the studio interface; the IDs are visible there. In API calls, you pass image_url: "asset://<ID>" instead of an HTTPS URL or base64 blob[1]. The downstream prompt syntax is unchanged: you still reference the avatar with @image1 syntax in your text prompt.
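
A minimal sketch of what that request body looks like. The `asset://<ID>` syntax in `image_url` is the documented part[1]; the surrounding field names, the model identifier string, and the example asset ID are assumptions for illustration, not the official API schema.

```python
# Sketch: building a generation request that references a pre-set virtual
# avatar by asset ID instead of uploading an image. Only the
# image_url "asset://<ID>" syntax is documented; other fields and the
# asset ID below are hypothetical.

def build_avatar_request(asset_id: str, prompt: str) -> dict:
    """Return a request body pointing the reference slot at a
    platform-curated asset rather than an HTTPS URL or base64 blob."""
    if not asset_id:
        raise ValueError("asset_id is required")
    return {
        "model": "seedance-2.0",             # assumed model identifier
        "image_url": f"asset://{asset_id}",  # documented asset reference syntax
        "prompt": prompt,                    # still cites the avatar as @image1
    }

payload = build_avatar_request(
    "av_12345",  # hypothetical ID copied from the studio asset browser
    "The person from @image1 presents a product demo, slow push-in, soft key light.",
)
print(payload["image_url"])  # asset://av_12345
```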

Cost: free, the asset is platform-provided. Generation cost is the standard Seedance 2.0 per-second rate.

Workaround 2: Generate a fictional portrait with Seedream, then reference it

This is the route that works for the vast majority of users and is the one I'd recommend by default. The pattern: generate a fictional portrait of someone who looks roughly like the character you want (or even like yourself, if you want a stylized self-portrait), then use that AI-generated image as the Seedance 2.0 reference.

Critically: AI-generated portraits of fictional people pass the face-detection filter. The detector flags photographic images of real humans, not stylized AI generations. This isn't a loophole โ€” it's the explicit recommended path documented by ByteDance for likeness use[1].

The full workflow on seedance2.so:

Step 1: Generate the portrait

Open text-to-image and pick Seedream 4.0 (1 credit per image, ~$0.04 each[2]). Use a prompt that produces a fictional person in the style and posture you'll need for the video.

Photorealistic editorial portrait of a woman in her late 20s, shoulder-length
chestnut hair, warm brown eyes, soft natural makeup, neutral grey background,
soft window light from camera-left. Three-quarter angle, gentle smile.
35mm lens, shallow depth of field, premium magazine photography aesthetic.

Generate 3-4 variations, pick the strongest. Cost: ~$0.04-0.16 in credits depending on how many rolls you do.

Step 2: Upload to Seedance 2.0 reference mode

Open reference-to-video. Upload the portrait you generated. The image becomes @image1 in your prompt. Reference it explicitly:

A woman from @image1 walks through a Tokyo alley at night, neon reflections in puddles.
Camera tracks alongside at hip height. Slow dolly forward.
Cinematic, desaturated teal-and-amber color grade.

The model uses your generated portrait to anchor the character's identity across the generated video. Run on Fast tier first, finalize on Preview when the framing is right.

Step 3: Reuse the same portrait for series consistency

Save the portrait. Reuse it as @image1 in subsequent generations. Because the source image is identical, the model produces visibly consistent character identity across multiple clips, which is the foundation of any narrative AI-video sequence.

This is the same workflow ByteDance documents in their official prompt guide for character-anchored video generation[3]: AI-generated portrait โ†’ Seedance reference video.
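
The three steps above reduce to a small pipeline. The sketch below uses stand-in functions for the two platform calls (they are not real client-library functions); the point it demonstrates is the reuse pattern: one portrait file, referenced as @image1 in every clip, anchors a consistent character identity.

```python
# Sketch of the portrait -> reference-to-video workflow. generate_portrait
# and generate_video are hypothetical stand-ins for the platform calls,
# not a real SDK.

def generate_portrait(prompt: str) -> str:
    """Stand-in for Seedream text-to-image; returns a saved image path."""
    return "portraits/fictional_character_v1.png"

def generate_video(reference_path: str, prompt: str) -> dict:
    """Stand-in for Seedance reference-to-video; the uploaded portrait
    becomes @image1 in the prompt."""
    return {"reference": reference_path, "prompt": prompt}

portrait = generate_portrait(
    "Photorealistic editorial portrait, late 20s, chestnut hair, grey background"
)

# The same portrait anchors identity across every clip in the series.
clips = [
    generate_video(portrait, f"The woman from @image1 {action}.")
    for action in ("walks through a neon alley", "orders coffee at a counter")
]
assert all(c["reference"] == portrait for c in clips)
```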

Why this passes when uploads of yourself don't

The filter detects photographs of real humans by examining low-level pixel patterns characteristic of camera sensors, skin micro-textures, and other photographic artifacts. AI-generated images have different pixel statistics (the model is trained to recognize this difference); they pass the filter cleanly. This is by design โ€” ByteDance wants users to use generated portraits as the legitimate path for likeness work.

Workaround 3: Pre-authorized real-person material

For commercial work where you genuinely need a specific real person (a brand spokesperson, a licensed celebrity image, a person who has signed a release for AI use), ByteDance documents a "pre-authorized real-person material" path[1]. You supply documentation of rights to use the likeness, and the material is whitelisted at the asset-library level.

The mechanics:

  • This is not a self-service flow on seedance2.so or any third-party wrapper.
  • Authorization is handled directly through Volcengine's enterprise channel.
  • Approval requires legal documentation: signed release from the depicted person, agency contracts for celebrity material, legal proof of life-rights ownership.
  • Approved material is added to your account's asset library and referenceable via asset://<ID> like the platform-provided avatars.

Most individual users won't go this route. It's designed for agencies and enterprise clients running brand campaigns. If you're an individual trying to make a video of yourself, route 2 (generate a stylized fictional portrait that resembles you) is the practical answer.

What doesn't work: the failed "trick the filter" patterns

Search results suggest various tricks for sneaking real faces past the filter. None of them works reliably, and most actively backfire. They're documented here so you don't waste time on them.

Cropping the face out of the frame

Doesn't work consistently. The filter examines the full uploaded image data; if a face was present in the original frame, the detector often flags the upload even when you crop tight to the body. And when a cropped upload does pass, the model has no reference for the face, so it generates a generic head that probably doesn't match what you wanted.

Adding sunglasses or hats

Partially occluded faces still trip the detector reliably; modern face-detection models remain accurate at up to roughly 50% occlusion. You're better off generating a fictional sunglassed character with Seedream than trying to disguise a real photo.

Blurring the face

Mostly fails. The detector reads pixel statistics and often catches the blurred region as "almost certainly a face that's been blurred." Even when it passes, the model has nothing to reference and produces blurry-faced output.

Drawing or compositing over the face

Inconsistent and produces ugly results. The detector is multi-stage; one stage might miss the alteration but another catches it. And visually, the composite-over-face approach produces output that looks edited rather than coherent.

"Anime-fying" the photo first with a filter

This actually works sometimes โ€” heavily stylized filters can shift pixel statistics enough to pass the photographic-skin detector. But the output you get is filtered through that anime style, not your original photo. At which point you've effectively just done a worse version of route 2: generate a stylized portrait. Skip the filter step and just generate the portrait directly.

Using a deepfake of yourself

Deeply unethical, and later detection stages catch it anyway. Don't.

The summary: the filter is robust by design. The path forward is to use one of the three legitimate workarounds, all of which are documented and supported.

A 5-minute end-to-end example

Concrete walkthrough from "I want a Seedance video of a character that looks like me" to a finished clip.

Step 1 (60 seconds): Generate the fictional portrait. Open text-to-image, pick Seedream 4.0. Prompt:

Photorealistic editorial portrait of a [age] [gender] with [hair color and style],
[eye color], [distinguishing features that resemble you],
[clothing style], neutral grey background, soft window light,
three-quarter angle, gentle expression. 35mm, shallow depth, premium photography aesthetic.

Replace bracketed values with your actual likeness traits. Don't try to make it photographic-identical to you; make it "stylized cousin who looks like me." Cost: ~$0.04, takes 30-60 seconds.
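
If you maintain several series characters, the bracketed template above can be filled programmatically. This is a plain string-formatting sketch; the field names mirror the brackets in the template, and nothing here touches a platform API.

```python
# Sketch: filling the bracketed portrait template from Step 1 for each
# series character. Pure string formatting; field names are taken from
# the brackets in the template above.

PORTRAIT_TEMPLATE = (
    "Photorealistic editorial portrait of a {age} {gender} with {hair}, "
    "{eyes}, {features}, {clothing}, neutral grey background, soft window "
    "light, three-quarter angle, gentle expression. 35mm, shallow depth, "
    "premium photography aesthetic."
)

def portrait_prompt(**traits: str) -> str:
    """Fill every bracketed slot; raises KeyError if a trait is missing."""
    return PORTRAIT_TEMPLATE.format(**traits)

prompt = portrait_prompt(
    age="early 30s", gender="man", hair="short black hair",
    eyes="dark brown eyes", features="light stubble",
    clothing="charcoal overshirt",
)
print(prompt)
```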

Step 2 (30 seconds): Pick the strongest portrait. Generate 2-3 variations if the first one isn't quite right. Total cost: ~$0.04-0.16.

Step 3 (60 seconds): Set up the Seedance 2.0 generation. Open reference-to-video. Upload your chosen portrait. Write the prompt:

A [character description from your portrait] from @image1 [does the action you want]
in [environment]. [Camera move]. [Style descriptors].

Step 4 (60-180 seconds): Generate. Submit on Fast tier first. Wait 30-90 seconds depending on duration and tier. Review the output.

Step 5 (optional, 60-180 seconds): Finalize on Preview tier. If the Fast output looks right, regenerate the same prompt+reference on Preview for higher visual quality.

Total time: 5-10 minutes. Total cost: usually under $1 for a finalized 5-second clip including the portrait generation.

FAQ

Why does Seedance 2.0 refuse my own selfie if I'm okay with using it?

The platform can't verify it's actually you. The face-detection filter is binary: real faces in, no real faces out. There's no upload pathway where you provide consent and the filter relaxes. ByteDance built it this way to prevent the obvious abuse case where someone uploads someone else's photo and claims consent.

Can I get the filter disabled on seedance2.so?

No. The filter runs upstream at the model layer, not at the platform layer. Even Volcengine direct-API users can't disable it. The only path to using a specific real person is route 3 (pre-authorized asset registration through enterprise channels).

Will the platform retroactively remove my videos if I get a real face through somehow?

Possibly, yes. Wrappers periodically re-scan generated content for policy compliance. Output that passed initially can be flagged later if a new detector catches it. The legitimate workarounds don't have this risk.

Does the AI-generated portrait need to be made on seedance2.so specifically?

No. Any AI-generated portrait works as long as it's actually AI-generated (not a real photo with a "this is AI" caption). You can use Seedream, Midjourney, FLUX, DALL-E, or any other image generator. Seedream 4.0 on seedance2.so is convenient because it's already on the platform and integrates directly with the Seedance reference workflow, but the technical filter doesn't care about provenance.

How close to my actual face can the fictional portrait look?

Visually similar is fine; pixel-identical is not. The filter examines pixel statistics, not visual identity. A Seedream-generated portrait that "looks a lot like me" passes; a real photo of me edited to look like a Seedream output usually doesn't. This is the line the system enforces.

What about using a celebrity's face for a fan project?

Don't. The filter blocks it on first upload, you don't have rights to the likeness anyway, and any platform that did let it through would face copyright and right-of-publicity claims from the celebrity's representation. Generate a fictional character "inspired by" instead.

Is this filter unique to Seedance 2.0?

No. Sora, Veo, Runway, Kling, and most other commercial AI video models implement equivalent filters. The specifics differ โ€” some are stricter, some are looser, some have different opt-in mechanics โ€” but the underlying constraint (no real faces without authorization) is now standard across the industry. Seedance 2.0's filter is on the stricter end but well-documented[1].

Can I use a deceased relative's photo?

Same answer as above: filter blocks it. Right-of-publicity laws also vary widely on this depending on jurisdiction. Generate a fictional portrait that captures the spirit of the person if you want a memorial-style video.

Will the filter eventually be relaxed?

Unlikely in the near term given the deepfake legal landscape. ByteDance is more likely to add additional opt-in pathways (verified-self uploads with biometric matching, similar to what Adobe is exploring) than to weaken the default filter.

Getting unblocked, the honest version

If you accept the constraint and use route 2, you're 5 minutes and under a dollar away from generating Seedance 2.0 videos with character-anchored consistency. The fastest path for the typical user is straightforward: generate a Seedream portrait that looks roughly like the character you want, save it, and reference it as @image1 in every Seedance 2.0 generation that involves that character. You'll have better visual consistency than people uploading random selfies anyway, the output ships without retroactive flagging risk, and the workflow scales to as many characters as you want to maintain across a series. The "trick the filter" path leads to refusals, generic faces, and burned credits; the legitimate path leads to videos that ship.

References

  1. Volcengine ArkClaw. Seedance 2.0 video-generation API โ€” content restrictions on real human faces and the three documented workarounds (preset virtual avatars, AI-generated portraits, pre-authorized real-person material). Retrieved May 2026 from volcengine.com/docs/82379/1520757
  2. Seedance2.so. Seedream 4.0 image generation pricing: 1 credit per image. Retrieved May 2026 from seedance2.so/pricing
  3. Volcengine ArkClaw. Doubao Seedance 2.0 prompt guide โ€” image reference and character-anchored video generation patterns. Retrieved May 2026 from volcengine.com/docs/82379/2222480

Further reading

  • ByteDance Seed. Seedance technical report. seed.bytedance.com/seedance
  • Volcengine ArkClaw. Seedance 2.0 multi-modal reference syntax. volcengine.com/docs/82379/2222480
Author: Seedance Team
Category: Tutorial
