An Introduction to FLUX AI
FLUX.1 is a family of AI image generation models released by Black Forest Labs in August 2024, which aims to set a new standard in AI image generation. The team behind Black Forest Labs is made up of the original creators of the popular image generation model Stable Diffusion. As of early 2025, FLUX.1 is arguably the best open-source alternative to Midjourney for image generation.
The FLUX.1 family of models includes three primary text-to-image generation models:
- FLUX.1 [pro] — The top-tier model, offering state-of-the-art image generation with excellent prompt adherence, visual quality, image detail, and output diversity. This model is closed-source and only available via the Black Forest Labs API or their partners (Replicate, fal.ai).
- FLUX.1 [dev] — A distilled, open-source version of FLUX.1 [pro] that offers similar image quality and prompt adherence with greater efficiency. This model is free for non-commercial use when you run it yourself, and available commercially via the Black Forest Labs API or their partners (Replicate, fal.ai); a hosted API call is sketched just after this list.
- FLUX.1 [schnell] — A smaller, lighter-weight, open-source model that produces images faster, but with less detail. This model is available for full commercial use, licensed under Apache 2.0.
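If you just want to try FLUX.1 before setting anything up locally, the hosted providers expose the models over simple HTTP APIs. Here is a minimal sketch of generating an image with FLUX.1 [schnell] on Replicate; the endpoint shape, the `black-forest-labs/flux-schnell` model slug, and the input fields are assumptions based on Replicate's current docs, so check those before relying on this:

```bash
# Sketch: generate an image with FLUX.1 [schnell] via Replicate's hosted API.
# Assumes a Replicate account and a REPLICATE_API_TOKEN environment variable.
curl -s -X POST \
  "https://api.replicate.com/v1/models/black-forest-labs/flux-schnell/predictions" \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d '{"input": {"prompt": "a misty mountain lake at sunrise, photorealistic"}}'
```

Once the prediction completes, the JSON response should include an `output` field containing a URL for the generated image.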
The FLUX.1 family also offers a range of supplementary models, called FLUX.1 Tools, designed for further modification or re-creation of images.
- FLUX.1 [Fill] — An inpainting/outpainting model, allowing users to edit or expand images. This could be used, for example, to remove distracting elements in the background of an image, or to outpaint a landscape image to fit a portrait aspect ratio.
- FLUX.1 [Redux] — An adaptor for the base models above, which is used to create variations of a provided image, while maintaining the core elements.
- FLUX.1 [Canny] — A model which takes an image and text input, and uses edge detection to generate an image with the same overall structure as the input image.
- FLUX.1 [Depth] — A model which takes an image and text input, and uses a depth map to generate an image that preserves the depth relationships of the input image.
To illustrate FLUX’s capabilities, here are some example images generated with FLUX.1 [dev]:
As you can see, this open-source model represents a great free alternative to Midjourney if you have the time and hardware to run FLUX models locally.
System Requirements
To run FLUX.1 [dev] or FLUX.1 [schnell] on Mac, the following are recommended system requirements:
- Apple Silicon — For best performance, use an Apple Silicon Mac (M1/M2/M3/M4). These chips pair a capable GPU and Neural Engine with unified memory, which suits the heavy matrix math involved in AI image generation. (You can check your chip and memory with the command shown after this list.)
- Memory
- FLUX.1 [dev] — At least 32GB of unified memory.
- FLUX.1 [schnell] — At least 32GB of unified memory, or 16GB if you’re using a quantized version of the model.
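If you’re not sure what your Mac has, you can check the chip and installed unified memory from Terminal (the `Chip` line only appears on Apple Silicon Macs):

```bash
# Report this Mac's chip and installed memory.
system_profiler SPHardwareDataType | grep -E "Chip|Memory"
```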
Installation Methods
There are several ways to run FLUX locally on your Mac, each offering a different level of complexity and customization. Here we’ll look at a simple method that’s great for beginners, and an advanced method that allows for more customization.
Simple Method: DiffusionBee
DiffusionBee provides a simple UI and is the fastest, easiest way to use FLUX. However, it lacks some advanced options, like LoRA fine-tuning. If you’re just exploring AI image generation models, this is a great starting point.
![DiffusionBee running FLUX.1[schnell]](/img/resources/diffusionbee-flux.webp)
- Download the latest version (2.25.3 or later) of DiffusionBee from GitHub.
  - Note: As of February 2025, the DMG available via the DiffusionBee website still links to version 2.25.1, which does not support FLUX.
- Open the downloaded file.
- Drag the DiffusionBee app to your Applications folder.
- Open the DiffusionBee app from your Applications folder.
- Click the ‘Models’ button in the left-hand nav (the stacked cubes icon, second from the bottom).
- Click the Download button under ‘FLUX.1-dev’ or ‘FLUX.1-schnell’.
- Wait for the download to complete.
- Click the ‘Text to image’ button in the left-hand nav (the image file icon, second from the top).
- Click the ‘Model’ dropdown menu, and select the FLUX model you just downloaded.
- Enter your prompt.
- Select an aspect ratio, number of images, and style.
- Click the ‘Generate’ button at the bottom of the menu.
Advanced Method: ComfyUI
Using ComfyUI on a Mac is significantly more involved than DiffusionBee, and requires you to be comfortable using the command line. In exchange, it offers far more capability, and is recommended for advanced use cases, like chaining multiple models together.
![ComfyUI running FLUX.1[dev]](/img/resources/comfyui-flux.webp)
- Install Homebrew by entering the following command in Terminal (if Terminal can’t find `brew` afterwards, see the note a few steps below):
  `/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"`
- Install Python.
  - Check whether Python is already installed; any version from Python 3.10 to 3.12 should work:
    `python3 --version`
  - If Python isn’t installed, install it using Homebrew:
    `brew install python@3.12`
- Install git:
  `brew install git`
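A note on the Homebrew step above: on Apple Silicon Macs, Homebrew installs to `/opt/homebrew`, and the installer finishes by printing the exact commands to add it to your PATH. If Terminal can’t find `brew` after installation, those commands typically look like the following (use whatever the installer actually printed for your shell):

```bash
# Add Homebrew to your PATH (default Apple Silicon install location).
echo 'eval "$(/opt/homebrew/bin/brew shellenv)"' >> ~/.zprofile
eval "$(/opt/homebrew/bin/brew shellenv)"
```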
- Clone the ComfyUI repo. This will install ComfyUI into the current directory (e.g. /Users/YourName/ComfyUI if you run it from your home folder):
  `git clone https://github.com/comfyanonymous/ComfyUI`
- Move into the ComfyUI directory:
  `cd ComfyUI`
- Create a Python virtual environment to keep PyTorch and the other dependencies separate from your system-wide Python packages:
  `python3 -m venv venv`
- Install the PyTorch nightly build:
  `./venv/bin/pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu`
- Install ComfyUI’s required packages:
  `./venv/bin/pip3 install -r requirements.txt`
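With PyTorch installed, you can optionally confirm that it can see the Mac’s GPU via the Metal Performance Shaders (MPS) backend; on an Apple Silicon Mac this should print `True`:

```bash
# Confirm PyTorch's MPS (Metal) backend is available inside the virtual environment.
./venv/bin/python3 -c "import torch; print(torch.backends.mps.is_available())"
```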
- Download the FLUX model files.
  - Download `clip_l.safetensors` from HuggingFace and place it into `ComfyUI/models/clip/`.
  - Download `t5xxl_fp16.safetensors` from HuggingFace and place it into `ComfyUI/models/clip/`.
  - Download `ae.safetensors` from HuggingFace and place it into `ComfyUI/models/vae/`.
  - Download either `flux1-dev.safetensors` or `flux1-schnell.safetensors` from HuggingFace (dev, schnell) and place it into `ComfyUI/models/unet/`.
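With the model files downloaded, it’s worth confirming they ended up in the right folders before starting the server (run this from inside the ComfyUI directory); you should see all four files listed:

```bash
# List the model folders to confirm the downloaded files are in place.
ls -lh models/clip models/vae models/unet
```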
- Activate the virtual environment and start the ComfyUI web server by running `source venv/bin/activate` followed by `python3 main.py`. To stop the server at any time, press Ctrl + C.
- Open ComfyUI in your browser of choice at `http://127.0.0.1:8188`.
- Drag and drop this workflow file onto the UI. You’ll see something like this:
- Now you can enter your prompt in the CLIP Text Encode (Positive Prompt) field, and click the ‘Queue’ button to generate the image. Outputs are saved in your ComfyUI folder, or you can right-click the output to copy or save the image.
- If you shut down the server and want to restart it later, reactivate the virtual environment before running the start command: `cd ComfyUI`, then `source venv/bin/activate`, then `python3 main.py`.
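If you find yourself starting and stopping the server a lot, a small launcher script saves some typing. This is purely an optional convenience; the `start-comfyui.sh` name is arbitrary, and it assumes the script lives inside your ComfyUI folder alongside the `venv` directory created above:

```bash
#!/bin/bash
# Optional launcher for ComfyUI. Save as start-comfyui.sh inside the ComfyUI folder,
# then make it executable once with: chmod +x start-comfyui.sh
cd "$(dirname "$0")"       # move into the folder containing this script
source venv/bin/activate   # activate the ComfyUI virtual environment
python3 main.py            # start the ComfyUI web server
```

After that, starting ComfyUI is just a matter of running `./start-comfyui.sh` from the ComfyUI folder.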
Conclusion
Whether you prefer the simplicity of DiffusionBee or the flexibility of ComfyUI, running FLUX.1 locally is a great free alternative to paid AI image generation services like Midjourney. As you gain experience, you can experiment with more advanced techniques, like chaining LoRAs, to achieve more interesting or more consistent results.