How to Install and Run FLUX AI Locally on Mac for Free (Step-by-Step Guide)

Unlock the power of FLUX AI on your Mac in minutes. This step-by-step guide walks you through how to set up FLUX.1 image generation models and run them for free locally on your Mac using DiffusionBee or ComfyUI.


An Introduction to FLUX AI

FLUX.1 is a family of AI image generation models, released by Black Forest Labs in August 2024, which set out to define a new standard in AI image generation. The team behind Black Forest Labs includes the original creators of Stable Diffusion, the popular image generation model. As of early 2025, FLUX.1 is arguably the best open-source alternative to Midjourney for image generation.

The FLUX.1 family of models includes three primary text-to-image generation models:

  1. FLUX.1 [pro] — The top-tier model, offering state-of-the-art image generation with excellent prompt adherence, visual quality, image detail, and output diversity. This model is closed-source and only available via the Black Forest Labs API or their partners (Replicate, fal.ai).
  2. FLUX.1 [dev] — A distilled, open-source version of FLUX.1 [pro] that offers similar image quality and prompt adherence with greater efficiency. This model is available for non-commercial use when you run it yourself, or commercially via the Black Forest Labs API or their partners (Replicate, fal.ai).
  3. FLUX.1 [schnell] — A smaller, more lightweight open-source model that produces images faster, but with less detail. This model is available for full commercial use under the Apache 2.0 license.

The FLUX.1 family also offers a range of supplementary models, called FLUX.1 Tools, designed for further modification or re-creation of images.

  1. FLUX.1 [Fill] — An inpainting/outpainting model, allowing users to edit or expand images. This could be used, for example, to remove distracting elements from the background of an image, or to outpaint a landscape image to fit a portrait aspect ratio.
  2. FLUX.1 [Redux] — An adaptor for the base models above, which is used to create variations of a provided image, while maintaining the core elements.
  3. FLUX.1 [Canny] — A model which takes an image and text input, and uses edge detection to generate an image with the same overall structure as the input image.
  4. FLUX.1 [Depth] — A model which takes an image and text input, and uses a depth map to generate an image with the same distance relationship as the input image.

To illustrate FLUX’s capabilities, here are some example images generated by FLUX.1 [dev]:

  • A valley with oversized, bioluminescent wildflowers, a meandering river reflecting the starlit sky, and towering cliffs with cascading vines.
  • Rolling sand dunes under a deep red sky, alien-looking cacti with glowing flowers, and an ancient, sunken obelisk partially buried in the sand.
  • A hidden desert oasis with sapphire-blue water, floating lotus flowers radiating soft light, and a crescent moon casting long silver shadows over the dunes.
  • An endless reflective salt flat, mirroring a brewing thunderstorm, with purple lightning bolts flashing in the distance.
  • A vast, endless ocean where the water reflects a swirling galaxy above, glowing constellations rippling across the waves, golden fish made of stardust swimming beneath the surface.

As you can see, this open-source model represents a great free alternative to Midjourney if you have the time and hardware to run FLUX models locally.

System Requirements

To run FLUX.1 [dev] or FLUX.1 [schnell] on Mac, the following are recommended system requirements:

  • Apple Silicon — For best performance, it’s recommended to use an Apple Silicon Mac (M1/M2/M3/M4); the unified memory architecture and on-chip GPU and Neural Engine are well suited to this kind of AI workload.
  • Memory
    • FLUX.1 [dev] — At least 32GB of unified memory.
    • FLUX.1 [schnell] — At least 32GB of unified memory, or 16GB if you’re using a quantized version of the model.
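
If you’re not sure which chip or how much memory your Mac has, you can check About This Mac, or run the following standard macOS commands in Terminal:

    # Show the chip (e.g. "Apple M2 Pro")
    sysctl -n machdep.cpu.brand_string

    # Show total unified memory, converted from bytes to GB
    echo "$(( $(sysctl -n hw.memsize) / 1024 / 1024 / 1024 )) GB"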

Installation Methods

There are several ways to run FLUX locally on your Mac, each offering a different level of complexity and customization. Here we’ll look at a simple method that’s great for beginners, and an advanced method that allows for more customization.

Simple Method: DiffusionBee

DiffusionBee provides a simple UI and is the fastest, easiest option for using FLUX. However, it lacks some advanced options, like LoRA fine-tuning. If you’re just exploring AI image generation models, this is a great starting point.

DiffusionBee running FLUX.1 [schnell]
  1. Download the latest version (2.25.3 or later) of DiffusionBee from GitHub.
    1. Note: As of February 2025, the DMG available via the DiffusionBee website still links to version 2.25.1, which does not allow you to run FLUX.
  2. Open the downloaded file.
  3. Drag the DiffusionBee app to your Applications folder.
  4. Open the DiffusionBee app from your Applications folder.
  5. Click the ‘Models’ button in the left-hand nav—the stacked cubes icon, second from the bottom.
  6. Click the Download button under ‘FLUX.1-dev’ or ‘FLUX.1-schnell’.
  7. Wait for the download to complete.
  8. Click the ‘Text to image’ button in the left-hand nav—the image file icon, second from the top.
  9. Click the ‘Model’ dropdown menu, and select the FLUX model you just downloaded.
  10. Enter your prompt.
  11. Select an aspect ratio, number of images, and style.
  12. Click the ‘Generate’ button at the bottom of the menu.

Advanced Method: ComfyUI

Using ComfyUI on Mac is significantly more involved than DiffusionBee and requires you to be comfortable with the command line. However, it offers far more advanced capabilities and is recommended for more advanced use cases, like chaining multiple models.

ComfyUI running FLUX.1 [dev]
  1. Install Homebrew by entering the following command in Terminal:

    /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
    
  2. Install Python.

    1. Check whether Python is already installed. Any version from Python 3.10 to 3.12 should work.

      python3 --version
      
    2. If Python isn’t installed, install it using Homebrew.

      brew install python@3.12
      
  3. Install git.

    brew install git
    
  4. Clone the ComfyUI repo. This will install ComfyUI into your user folder (e.g. /Users/YourName/ComfyUI).

    git clone https://github.com/comfyanonymous/ComfyUI
    
  5. Move into the ComfyUI directory.

    cd ComfyUI
    
  6. Next, we’re going to create a Python virtual environment to keep PyTorch and ComfyUI’s other dependencies separate from your system-wide Python packages.

    python3 -m venv venv
    
  7. Install PyTorch nightly.

    ./venv/bin/pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu
    
  8. Install ComfyUI’s required packages:

    ./venv/bin/pip3 install -r requirements.txt
    
  9. Download the FLUX model files.

    1. Download clip_l.safetensors from HuggingFace and place it into ComfyUI/models/clip/.
    2. Download t5xxl_fp16.safetensors from HuggingFace and place it into ComfyUI/models/clip/.
    3. Download ae.safetensors from HuggingFace and place it into ComfyUI/models/vae/.
    4. Download either flux1-dev.safetensors or flux1-schnell.safetensors from HuggingFace (dev, schnell) and place it into ComfyUI/models/unet/. (If you’d rather download these files from the command line, see the sketch after these steps.)
  10. Enter the following commands to activate the virtual environment and start the ComfyUI web server. To stop the server at any time, press Ctrl + C.

    source venv/bin/activate
    python3 main.py
    
  11. Open ComfyUI in your browser of choice at http://127.0.0.1:8188.

  12. Drag and drop this workflow file onto the UI to load the FLUX text-to-image workflow.

  13. Now you can enter your prompt in the CLIP Text Encode (Positive Prompt) field and click the ‘Queue’ button to generate the image. Outputs are saved in the ComfyUI/output folder, or you can right-click the output to copy or save the image.

  14. If you shut down the server and want to restart it later, you’ll need to reactivate the virtual environment before running the start command.

    cd ComfyUI
    source venv/bin/activate
    python3 main.py
    
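
As an optional alternative to step 9, you can download the model files from the command line with the Hugging Face CLI instead of a browser. Treat this as a sketch: the repository names and file paths below reflect the Hugging Face layout at the time of writing and may change, and the FLUX.1 [dev] repository is gated, so you’d first need to accept its license on the model page and authenticate with huggingface-cli login.

    # Run from inside the ComfyUI directory, using the virtual environment created earlier
    ./venv/bin/pip3 install -U "huggingface_hub[cli]"

    # Text encoders (assumed repo: comfyanonymous/flux_text_encoders)
    ./venv/bin/huggingface-cli download comfyanonymous/flux_text_encoders clip_l.safetensors --local-dir models/clip
    ./venv/bin/huggingface-cli download comfyanonymous/flux_text_encoders t5xxl_fp16.safetensors --local-dir models/clip

    # VAE and the FLUX.1 [schnell] checkpoint (use black-forest-labs/FLUX.1-dev for the gated dev model)
    ./venv/bin/huggingface-cli download black-forest-labs/FLUX.1-schnell ae.safetensors --local-dir models/vae
    ./venv/bin/huggingface-cli download black-forest-labs/FLUX.1-schnell flux1-schnell.safetensors --local-dir models/unet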

Conclusion

Whether you prefer the simplicity of DiffusionBee or the flexibility of ComfyUI, running FLUX.1 locally is a great free alternative to paid AI image generation services like Midjourney. As you get more experienced, you can experiment with more advanced techniques, like chaining LoRAs, to achieve more interesting or more consistent results.
