How to Install and Run DeepSeek AI Locally on Mac for Free (Step-by-Step Guide)

Unlock the power of DeepSeek AI on your Mac in minutes. This step-by-step guide walks you through how to set up DeepSeek R1 models and run them for free locally on your Mac using Ollama or Open WebUI.

An Introduction to DeepSeek

DeepSeek R1 is an open-source large language model created by DeepSeek, an AI lab owned and funded by the Chinese hedge fund High-Flyer, that offers advanced reasoning and coding skills. Released in January 2025, it made headlines for outperforming OpenAI’s equivalent models across several major benchmarks, while also being cheaper to run. In this guide, we’ll walk through how to install and run DeepSeek R1 for free locally on your Mac using Ollama or Open WebUI.

System Requirements

DeepSeek R1 comes in a range of variants with different parameter counts, meaning you can run a smaller version with only 8GB of RAM. Performance scales with parameter count, so it’s recommended that you install the largest variant permitted by your computer’s specs. Here are some guidelines on the required specs for each model (if you’re unsure of your Mac’s specs, see the commands after this list):

  • 1.5B — 8GB RAM
  • 7B — 16GB RAM and 4+ Cores (M1 or newer)
  • 8B — 16GB RAM and 4+ Cores (M1 or newer)
  • 14B — 32GB RAM and 6+ Cores (M2 or newer)
  • 32B — 32GB RAM and 8+ Cores (M3 Pro or M4 Pro)
  • 70B — 64GB RAM and 12+ Cores (M4 Pro)
  • 671B — Unavailable locally. This model requires enterprise-grade servers.
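
Not sure what hardware your Mac has? You can check from Terminal. Here’s a quick sketch using macOS’s built-in sysctl tool:

    # Installed RAM, converted from bytes to gigabytes
    echo "$(($(sysctl -n hw.memsize) / 1073741824)) GB RAM"
    
    # Chip name (e.g. Apple M2) and total CPU core count
    sysctl -n machdep.cpu.brand_string
    sysctl -n hw.ncpu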

How to Install DeepSeek AI

Installing DeepSeek R1 with Ollama is quick and easy, and allows you to use the model directly from within Terminal. However, if you prefer to use a chat-based user interface, then you can run Open WebUI with Docker, which will give you access to the model being run by Ollama.

How to run DeepSeek with Ollama

  1. Install Homebrew by entering the following command in Terminal:
    /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
    
  2. Install Ollama using Homebrew:
    brew install ollama
    
  3. Open Ollama. When Ollama is running, you’ll see the llama icon in your menu bar. (If Homebrew installed the command-line version rather than the desktop app, you can start the server by running ollama serve in Terminal instead.)
  4. Install your preferred variant of R1, depending on your hardware and requirements:
    1. 1.5b parameters (1.1GB) — basic writing, fast replies
      ollama pull deepseek-r1:1.5b
      
    2. 7b parameters (4.7GB) — basic reasoning and coding
      ollama pull deepseek-r1:7b
      
    3. 8b parameters (4.9GB) — general reasoning and coding
      ollama pull deepseek-r1:8b
      
    4. 14b parameters (9.0GB) — advanced reasoning and coding
      ollama pull deepseek-r1:14b
      
    5. 32b parameters (20GB) — complex reasoning and coding
      ollama pull deepseek-r1:32b
      
    6. 70b parameters (43GB) — complex reasoning and coding
      ollama pull deepseek-r1:70b
      
  5. Once the model has finished downloading, you can confirm the installation by entering the following command:
    ollama list
    
    You should see your chosen model listed like the following:
    NAME               ID              SIZE     MODIFIED
    deepseek-r1:32b    38056bbcbb2d    19 GB    7 days ago
    
  6. Now you can run your chosen model by entering the relevant line from the following:
    ollama run deepseek-r1:1.5b
    ollama run deepseek-r1:7b
    ollama run deepseek-r1:8b
    ollama run deepseek-r1:14b
    ollama run deepseek-r1:32b
    ollama run deepseek-r1:70b
    
  7. DeepSeek should now be running. All you need to do is submit your query and press enter.
DeepSeek R1 running in Terminal with Ollama.
Thinking output is provided first, wrapped in <think> tags, before the response output.

If you don’t mind using Terminal, then you’re all set to continue using R1 as is. When you’re done, you can stop the model by entering /bye.
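
If you’d rather script your prompts than type them interactively, Ollama also exposes a local HTTP API (on port 11434 by default) while it’s running. Here’s a minimal sketch; swap in whichever model tag you pulled:

    # Send a one-off prompt to the local Ollama server and print the JSON response
    curl http://localhost:11434/api/generate -d '{
      "model": "deepseek-r1:8b",
      "prompt": "Why is the sky blue?",
      "stream": false
    }'

The model’s full output, including its <think> block, is returned in the response field of the JSON.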

How to run DeepSeek with Open WebUI

If you prefer using a GUI, you can continue on to set up Open WebUI, which offers a similar experience to ChatGPT. Note that this method builds on top of the Ollama approach above.

  1. Complete the steps above to download your preferred model with Ollama. You must leave Ollama running in the background, as it provides the server for Open WebUI to access. However, you don’t need to enter the ollama run deepseek-r1:32b command; just having Ollama open is sufficient.

  2. Install and run Docker

    1. Go to the Docker website.
    2. Click Download Docker Desktop.
    3. Select the appropriate installer (Download for Mac - Apple Silicon or Download for Mac - Intel Chip).
    4. Open Docker. You can now leave it running like Ollama.
  3. Download and run Open WebUI by entering the following command in Terminal:

    docker run -d --name open-webui -p 3000:8080 -v open-webui:/app/backend/data --pull=always ghcr.io/open-webui/open-webui:main
    

    This command pulls the latest version of Open WebUI (--pull=always), maps the container’s port 8080 to port 3000 on your Mac (-p 3000:8080), and sets up persistent storage (-v open-webui:/app/backend/data).

    Note: if Docker isn’t running, you’ll get an error saying docker: Cannot connect to the Docker daemon. If you see this, try opening Docker again.
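
    If you’re not sure whether everything is up, you can check from Terminal before opening your browser (a quick sketch using standard Docker CLI commands):

    # Confirm the Docker daemon is reachable (prints the server version)
    docker info --format '{{.ServerVersion}}'
    
    # The open-webui container should be listed once it's running
    docker ps --filter name=open-webui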

  4. Open WebUI should now be running, and have access to DeepSeek via Ollama. You can access it by going to http://localhost:3000/ in your browser of choice.

  5. You’ll need to create a login to use Open WebUI locally—although this is just a security measure in case your device is accessed by someone else.

  6. You can now use DeepSeek R1 through Open WebUI just like you would use ChatGPT.

DeepSeek R1 running in Open WebUI.
  7. When you’re done, you can shut down Ollama and Docker. Your data will be preserved for the next time you use it.
    1. Close Ollama by clicking the Ollama icon in your menu bar, then selecting Quit Ollama.
    2. Stop the Open WebUI container by opening Docker and clicking the stop button next to open-webui.
Open WebUI running in a Docker container.

You can then shut down Docker entirely by clicking the Docker icon in your menu bar, then selecting Quit Docker Desktop.
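
If you prefer Terminal over the Docker Desktop interface, the same stop and restart can be done with Docker’s CLI (a quick sketch; open-webui is the container name set by the docker run command above):

    # Stop the Open WebUI container (its data is preserved in the volume)
    docker stop open-webui
    
    # Start it again later
    docker start open-webui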

When you want to run DeepSeek again, all you need to do is:

  1. Open Ollama.
  2. Open Docker and click the play button next to the open-webui container.
The Docker container maintains your conversation history for next time.
  3. Go to http://localhost:3000/ in your browser of choice.
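
If you’d like to save a few clicks, the whole restart can be scripted. A sketch, assuming you’re using the Ollama and Docker Desktop apps:

    # Launch Ollama and Docker Desktop (Docker may take a few seconds to start)
    open -a Ollama
    open -a Docker
    
    # Restart the Open WebUI container, then open the UI in your default browser
    docker start open-webui
    open http://localhost:3000/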

Conclusion

Whether you prefer to use Terminal or Open WebUI, running DeepSeek R1 locally offers a solid free alternative to proprietary language models, and it keeps working when online services are unavailable or you have no internet connection. While the more powerful variants require expensive hardware, the range of parameter sizes means most Mac users can find an option that works with their machine. With simple installation and complete control over your data, DeepSeek R1 makes advanced AI capabilities accessible to anyone.
