HiDream-I1 is a new, open-source text-to-image AI model developed by HiDream AI. It's a mammoth 17B-parameter model that is available for free, can be run locally for text-to-image generation, and is compatible with ComfyUI, the go-to modular GUI.
As usual, the ComfyUI team made sure HiDream worked natively not long after it arrived.
What Makes HiDream-I1 Special?
HiDream-I1 offers three versions of its model, each tailored for different needs:
- Full Version
- Best realism and detail
- Needs 50 steps
- Requires >16 GB VRAM (FP8) or >27 GB VRAM (FP16)
- Supports negative prompts
- Larger file sizes (FP8: ~17 GB, FP16: ~34 GB; see the quick size check after this list)
- Dev Version
- Balance between speed and quality
- Uses 28 steps
- No negative prompt support
- Slightly faster than Full
- Fast Version
- Prioritizes speed
- Uses 16 steps
- Lowest quality, more stylized results
- Also no support for negative prompts
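As a quick sanity check on those file sizes, a rough rule of thumb is parameter count × bytes per weight. For a 17B-parameter model that lines up with the numbers above (this sketch ignores the text encoders and VAE, which are stored separately):

```python
# Rough model-file-size estimate: parameters x bytes per weight.
params = 17e9  # HiDream-I1 is ~17B parameters

for fmt, bytes_per_weight in [("FP16", 2), ("FP8", 1), ("Q4 GGUF", 0.5)]:
    size_gb = params * bytes_per_weight / 1e9
    print(f"{fmt}: ~{size_gb:.1f} GB")

# FP16: ~34.0 GB
# FP8: ~17.0 GB
# Q4 GGUF: ~8.5 GB (quantized files keep some tensors at higher precision,
#                   so the real downloads run a bit larger)
```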
One additional thing to note: while some of these VRAM requirements are steep, I've found that I can run these models in ComfyUI on my modest 3060 with 12 GB of VRAM, albeit with slower speeds and some magic that the ComfyUI team has implemented in their backend.
Each version is also available on Hugging Face in different VRAM-optimized formats, like FP8 and quantized GGUF (Q8, Q6, Q4) versions, which are smaller and work on lower-spec GPUs. I like the GGUF quant models that City96 puts out, which you can find here, so shoutout to them for putting in the work and releasing these for the community!
I also like the model versions that Comfy Org puts out, which are optimized for ComfyUI and split into files that are easy to download, with good instructions on where to place everything. You can find those models on their Hugging Face page here.
How to Use HiDream in ComfyUI
1. Update ComfyUI
- Open ComfyUI’s manager
- Click “Update All” to get the latest nodes
- Restart ComfyUI when prompted
2. Download the HiDream Models You Want to Use
- Choose model type (Full/Dev/Fast)
- Select format: FP8, Q8, Q6, Q4, etc.
- Download and place (a scripted alternative is sketched just below):
- Diffusion model → ComfyUI/models/diffusion_models/ (or ComfyUI/models/unet/)
- Text encoders (4 required) → ComfyUI/models/text_encoders/ (or ComfyUI/models/clip/)
- VAE → ComfyUI/models/vae/
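If you'd rather script the downloads than click through the browser, here's a minimal sketch using the huggingface_hub library. The repo ID and filenames are illustrative placeholders – copy the real ones from the Hugging Face page of the release you're using (e.g. Comfy Org's or City96's):

```python
# Minimal sketch: fetch HiDream files into the ComfyUI folders listed above.
# NOTE: the repo_id and filename values are illustrative placeholders --
# copy the real ones from the Hugging Face model page you're using.
from huggingface_hub import hf_hub_download

MODELS = "ComfyUI/models"

files = [
    # (repo_id, filename in repo, destination folder)
    ("Comfy-Org/HiDream-I1_ComfyUI", "hidream_i1_dev_fp8.safetensors", f"{MODELS}/diffusion_models"),
    ("Comfy-Org/HiDream-I1_ComfyUI", "ae.safetensors", f"{MODELS}/vae"),
    # ...plus the four text encoders, placed into ComfyUI/models/text_encoders
]

for repo_id, filename, dest in files:
    path = hf_hub_download(repo_id=repo_id, filename=filename, local_dir=dest)
    print("saved:", path)
```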
3. Import Workflows
- Download the official workflows or a user-made one from Civitai (plenty available – filter by HiDream and workflows)
- Drag and drop the .json workflow files into ComfyUI to load them
4. Configure Workflow Settings
Each version has different recommended settings (also summarized as a quick lookup after this list):
- Sampler:
- Full = UniPC + Simple
- Dev/Fast = LCM + Normal
- Steps:
- Full = 50
- Dev = 28
- Fast = 16
- CFG Scale:
- Full = 5
- Dev/Fast = 1
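For quick reference, here are the same settings as a small Python lookup – handy if you script your generations. The sampler/scheduler strings below match ComfyUI's KSampler naming:

```python
# Recommended KSampler settings per HiDream-I1 version (from the list above).
SETTINGS = {
    "full": {"sampler": "uni_pc", "scheduler": "simple", "steps": 50, "cfg": 5},
    "dev":  {"sampler": "lcm",    "scheduler": "normal", "steps": 28, "cfg": 1},
    "fast": {"sampler": "lcm",    "scheduler": "normal", "steps": 16, "cfg": 1},
}

print(SETTINGS["dev"])  # {'sampler': 'lcm', 'scheduler': 'normal', 'steps': 28, 'cfg': 1}
```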
Personally, I have found the full model to be a waste of time and energy; the dev model has produced better results for me. I'd recommend going with the dev model in either FP8 or Q4 GGUF, and I've liked the results from 28 steps with LCM + Normal and SD3 sampling (the ModelSamplingSD3 shift value) at 2 – 6.
I've also noticed that light upscales with UltimateSDUpscale at around 0.2 – 0.25 denoise, using Euler + Simple with CFG at 3 and 10 steps, really bring out the details and greatly improve the base image gens. Shoutout to Chronoknight on Civitai, as these upscale settings (with the denoise lowered slightly) came from their HiDream workflow here.
5. Generate an Image
- Input your prompt (use ChatGPT or another LLM for prompt writing help)
- Choose latent image size
- Click Run and wait for image generation
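As a side note, if you export your workflow with ComfyUI's "Save (API Format)" option, you can also queue generations from a script against ComfyUI's local HTTP API instead of clicking Run. A minimal sketch, assuming ComfyUI is running on the default port 8188 – the node ID "6" for the positive prompt is just an example, so check your own exported JSON:

```python
# Minimal sketch: queue a generation via ComfyUI's HTTP API.
# Assumes ComfyUI is running locally on the default port (8188) and the
# workflow was exported with "Save (API Format)". The node ID "6" is an
# example -- find your positive-prompt node's ID in the exported JSON.
import json
import urllib.request

with open("hidream_dev_workflow_api.json") as f:  # your exported workflow
    workflow = json.load(f)

workflow["6"]["inputs"]["text"] = "a red fox in a misty forest, golden hour"

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())  # returns a prompt_id you can poll for results
```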
Don’t Have Enough VRAM?
You can run HiDream-I1 models in the cloud via platforms like fal.ai or replicate.com, which use high-end GPUs. No downloads are needed – just enter a prompt and click "Run". These services cost a few cents per image, but they remove some of the hassle and time, which is nice!
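For example, Replicate has a Python client; here's a minimal sketch. The model slug below is a placeholder – search replicate.com for the current HiDream listing, and set REPLICATE_API_TOKEN in your environment first:

```python
# Minimal sketch using Replicate's Python client (pip install replicate).
# The model slug is a placeholder -- look up the actual HiDream model on
# replicate.com before running this.
import replicate

output = replicate.run(
    "some-owner/hidream-i1",  # placeholder slug, not a real model path
    input={"prompt": "a red fox in a misty forest, golden hour"},
)
print(output)  # typically a URL (or list of URLs) for the generated image(s)
```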
Or you can rent powerful GPUs in the cloud from places like Runpod.io and run these models in ComfyUI just like you would locally. You can even rent GPUs like the RTX 5090 for around $0.69 per hour (fun to test out!).
Quick Tips & My First Impressions
- Best for quality? → Use Q8
- Best for speed? → Use FP8
- Short on VRAM? → Stick to quantized GGUF models like Q4
So with that out of the way, what do I think of HiDream and how does it compare to other leading locally run image models like Flux?
Well… I'm impressed. It's a tough model to run – the requirements and size make it very slow – but the quality is there. The results often look very similar to Flux, though for the images I've made, I've found HiDream to be slightly better.
The fact that this model is brand new, open source, and has a better license makes me incredibly excited about its future potential.
What do you think of it?