z-lab/Qwen3.5-27B-DFlash

Tags: Text Generation · Transformers · Safetensors · qwen3 · feature-extraction · dflash · speculative-decoding · diffusion · efficiency · flash-decoding · qwen · diffusion-language-model · custom_code · text-generation-inference
Community (7 discussions)

Advice for a 5090 · 1 · #7 opened 8 days ago by pramjana

Can I use this draft model with Q4, Q6, and Q8 27B models? · 👍 3 · #6 opened 8 days ago by hugypufy

DFlash with a quantized model · 1 · #5 opened 8 days ago by Shimon324

Qwen3.5-4B/9B DFlash supports VL mode · 2 · #4 opened 10 days ago by huzhua

Is there a public release planned for the Qwen3.5-122B-DFlash model? · 1 · #3 opened 20 days ago by wyc201314

Does FP8 work for the base model, or is 16-bit of the 27B required? · 14 · #2 opened 20 days ago by unoid