An Introduction to DeepSeek
DeepSeek R1 is an open-source large language model from Chinese AI company DeepSeek, which is backed by the hedge fund High-Flyer, offering advanced reasoning and coding skills. Released in January 2025, it made headlines for outperforming OpenAI’s equivalent models across several major benchmarks, while also being cheaper to run. In this guide, we’ll walk through how to install and run DeepSeek R1 for free locally on your Mac using Ollama or Open WebUI.
System Requirements
DeepSeek R1 comes in a range of variants with different parameter counts, meaning you can run a smaller version with only 8GB of RAM. Performance scales with parameter count, so it’s recommended that you install the largest variant your computer’s specs permit. Here are some guidelines on the required specs for each model (see the snippet after this list if you’re unsure what your Mac has):
- 1.5B — 8GB RAM
- 7B — 16GB RAM and 4+ Cores (M1 or newer)
- 8B — 16GB RAM and 4+ Cores (M1 or newer)
- 14B — 32GB RAM and 6+ Cores (M2 or newer)
- 32B — 32GB RAM and 8+ Cores (M3 Pro or M4 Pro)
- 70B — 64GB RAM and 12+ Cores (M4 Pro)
- 671B — Unavailable locally. This model requires enterprise-grade servers.
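You can check your Mac’s RAM, core count, and chip directly from Terminal. This is a quick sketch using standard macOS tools; the awk step just converts bytes to gigabytes:
# Print total RAM in GB, the number of CPU cores, and the chip name
sysctl -n hw.memsize | awk '{printf "%.0f GB RAM\n", $1/1073741824}'
sysctl -n hw.ncpu
sysctl -n machdep.cpu.brand_string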
How to Install DeepSeek AI
Installing DeepSeek R1 with Ollama is quick and easy, and allows you to use the model directly from within Terminal. However, if you prefer to use a chat-based user interface, then you can run Open WebUI with Docker, which will give you access to the model being run by Ollama.
How to run DeepSeek with Ollama
- Install Homebrew by entering the following command in Terminal:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
- Install Ollama using Homebrew:
brew install ollama
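If you want to confirm the install worked before going further, you can check the version:
ollama --version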
- Open Ollama. When Ollama is running you’ll see the llama icon in your menu bar.
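If you installed Ollama via the Homebrew formula and don’t see a menu bar icon, you can start the server from Terminal instead. Either command below should work; the second keeps it running as a Homebrew-managed background service:
ollama serve
brew services start ollama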
- Install your preferred variant of R1, depending on your hardware and requirements:
- 1.5b parameters (1.1GB) — basic writing, fast replies
ollama pull deepseek-r1:1.5b
- 7b parameters (4.7GB) — basic reasoning and coding
ollama pull deepseek-r1:7b
- 8b parameters (4.9GB) — general reasoning and coding
ollama pull deepseek-r1:8b
- 14b parameters (9.0GB) — advanced reasoning and coding
ollama pull deepseek-r1:14b
- 32b parameters (20GB) — complex reasoning and coding
ollama pull deepseek-r1:32b
- 70b parameters (43GB) — complex reasoning and coding
ollama pull deepseek-r1:70b
- Once the model has finished downloading and installing, you can confirm the installation by entering the following command:
ollama list
You should see your chosen model listed, like the following:
NAME               ID              SIZE     MODIFIED
deepseek-r1:32b    38056bbcbb2d    19 GB    7 days ago
- Now you can run your chosen model by entering the relevant line from the following:
ollama run deepseek-r1:1.5b
ollama run deepseek-r1:7b
ollama run deepseek-r1:8b
ollama run deepseek-r1:14b
ollama run deepseek-r1:32b
ollama run deepseek-r1:70b
- DeepSeek should now be running. All you need to do is submit your query and press enter.
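You don’t have to use the interactive prompt, either: ollama run also accepts a one-off prompt as an argument. For example (using the 8b variant here; swap in whichever model you installed):
ollama run deepseek-r1:8b "Explain the difference between a stack and a queue in two sentences."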


If you don’t mind using Terminal, then you’re all set to continue using R1 as is. When you’re done, you can stop the model by entering /bye.
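In recent versions of Ollama you can also check what’s loaded and free up memory from a second Terminal window. This assumes a reasonably current Ollama release:
# List models currently loaded in memory
ollama ps
# Unload a model from memory without quitting Ollama
ollama stop deepseek-r1:8b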
How to run DeepSeek with Open WebUI
If you prefer using a GUI, you can continue on to set up Open WebUI, which offers a similar experience to ChatGPT.
Note that this method builds on top of the Ollama approach.
- Complete the steps above to download your preferred model with Ollama. You must leave Ollama running in the background, as it provides the server for Open WebUI to access. However, you don’t need to enter the ollama run deepseek-r1:32b command; just having Ollama open is sufficient.
- Install and run Docker:
  - Go to the Docker website.
  - Click Download Docker Desktop.
  - Select the appropriate installer (Download for Mac - Apple Silicon or Download for Mac - Intel Chip).
  - Open Docker. You can now leave it running like Ollama.
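Before moving on, you can confirm Docker is up by asking for its version and status:
# Both should succeed; a daemon connection error means Docker isn’t running yet
docker --version
docker info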
- Download and run Open WebUI by entering the following command in Terminal:
docker run -d --name open-webui -p 3000:8080 -v open-webui:/app/backend/data --pull=always ghcr.io/open-webui/open-webui:main
This command pulls the latest version of Open WebUI (--pull=always), runs it on port 3000 (-p 3000:8080), and sets up persistent storage (-v open-webui:/app/backend/data). Note: if Docker isn’t running, you’ll get an error saying docker: Cannot connect to the Docker daemon. If you see this, try opening Docker again.
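Once the container is up, you can verify it from Terminal (open-webui here matches the --name flag in the command above):
# open-webui should appear in the list with port 3000 mapped
docker ps
# Tail the container logs if the page doesn’t load
docker logs -f open-webui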
- Open WebUI should now be running, with access to DeepSeek via Ollama. You can access it by going to http://localhost:3000/ in your browser of choice.
- You’ll need to create a login to use Open WebUI locally. This is just a security measure in case your device is accessed by someone else.
- You can now use DeepSeek R1 through Open WebUI just like you would use ChatGPT.


- When you’re done, you can shut down Ollama and Docker. Your data will be preserved for the next time you use it.
  - Close Ollama by clicking the Ollama icon in your menu bar, then selecting Quit Ollama.
  - Stop the Docker instance by opening Docker and clicking the stop button next to the open-webui container.
  - You can then shut down Docker entirely by clicking the Docker icon in your menu bar, then selecting Quit Docker Desktop.
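If you’d rather skip the Docker Desktop interface, stopping the container from Terminal works too (open-webui is the container name set by the --name flag earlier):
# Stop the Open WebUI container; the open-webui data volume is preserved
docker stop open-webui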
When you want to run DeepSeek again, all you need to do is:
- Open Ollama.
- Open Docker and click the play button next to the open-webui container.
- Go to http://localhost:3000/ in your browser of choice.
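If you find yourself doing this regularly, you could wrap the restart in a small shell script. This is just a sketch: it assumes Ollama is installed as the Mac app, Docker Desktop is installed, and the container is named open-webui as above.
#!/bin/bash
# Relaunch the local DeepSeek stack after a full shutdown
open -a Ollama
open -a Docker
# Wait for the Docker daemon to come up before starting the container
until docker info >/dev/null 2>&1; do sleep 1; done
docker start open-webui
open http://localhost:3000/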
Conclusion
Whether you prefer to use Terminal or Open WebUI, running DeepSeek R1 locally offers a great free alternative to proprietary language models. It’s a useful option for when online models are unavailable or you don’t have internet connectivity. While the more powerful variants require expensive hardware, the range of parameter sizes means most Mac users can find an option that works with their machine. With simple installation and complete control over your data, DeepSeek R1 makes advanced AI capabilities accessible to anyone.