How To Run DeepSeek R1 Locally?

In this extensive guide we will learn about the DeepSeek R1 reasoning LLM, which can be used for many kinds of tasks, and see how to run and interact with DeepSeek R1 locally, completely free.

How To Run DeepSeek R1 Locally - A Comprehensive Guide to Running the DeepSeek R1 Reasoning Model Locally

In this tutorial we are going to install the DeepSeek R1 model on a Windows computer and use it for problem solving. You will follow the steps to install DeepSeek-R1 locally on a Windows 10 machine and use it for coding and logical problem-solving. Because the model runs locally, there are no monthly fees and, most importantly, no data leaks. You get the full potential of the DeepSeek R1 reasoning model without any cost, and your data stays within your control. So, let's get started with running the DeepSeek R1 reasoning model locally.

These days many people and professionals are talking about the recently launched DeepSeek R1 reasoning model. DeepSeek R1 is an open-source LLM developed by the Chinese AI firm DeepSeek, and it is now giving serious competition to existing LLMs. According to user reports, the DeepSeek R1 model is on par with, and in terms of reasoning capabilities sometimes even better than, OpenAI's o1 model. So there are good reasons to learn the DeepSeek R1 models.

DeepSeek is currently offering the DeepSeek R1 reasoning model totally free, which is good news for users. There has been a huge surge of users, and it is remarkable how the company is managing the server costs that come with it. The model may be free, but the servers, bandwidth and power needed to run it certainly are not. Is this model bad news for existing models such as ChatGPT, Google Gemini and others?

One reason the company may be offering the DeepSeek R1 reasoning model for free is to collect user data. In today's world, data is the lifeblood of AI models, and this data might be used for some kind of monetization, though there is no confirmed information on this. So, if you have concerns about your data, you should run the DeepSeek R1 model locally. In this tutorial I will teach you how to do exactly that: install the model and run your queries/prompts against a local instance of DeepSeek R1.

Video tutorial of running DeepSeek R1 Locally

What is DeepSeek R1?

DeepSeek R1 is a reasoning LLM released on 20th January 2025 by the Chinese AI company DeepSeek. It was launched as an open-source model, exposing all its underlying codebase, so anyone can adapt it and even fine-tune it for their specific needs.

DeepSeek R1, also known as R1 for short, is based on the large base model DeepSeek V3. The company refined the base model through supervised fine-tuning (SFT) on human-labeled data, followed by reinforcement learning, to arrive at this powerful reasoning model. The model is creating a buzz among the AI community and users of generative AI models.

DeepSeek R1 development key points

Base Model

DeepSeek R1 is built on DeepSeek V3, which serves as the foundation for R1 and provides its strong language-processing capability. Starting from the DeepSeek V3 base, the R1 model was developed by applying fine-tuning, reinforcement learning and other optimization techniques.

Supervised Fine-Tuning (SFT)

Supervised learning helped the model gain its intelligence. For DeepSeek R1, the company trained the model on large datasets: the fine-tuning used large amounts of clearly labeled inputs and outputs, which enhanced the model's accuracy on specific tasks.

Reinforcement Learning (RL)

After the SFT stage, the DeepSeek R1 model was further refined with Reinforcement Learning (RL). In RL, the model receives rewards for generating correct and logical responses. This step is considered crucial for achieving the ability to perform complex reasoning tasks.

What are all released DeepSeek R1 distilled Models?

DeepSeek released six distilled models with 1.5B, 7B, 8B, 14B, 32B and 70B parameters, alongside the full 671B-parameter model. The smaller models can be run on desktops and servers, even on CPU alone.

According to DeepSeek's claims, DeepSeek R1 is a first-generation reasoning model that achieves performance comparable to OpenAI's o1 across mathematics, coding and reasoning tasks.

How To Run DeepSeek R1 Locally?

Ollama can be used to run the DeepSeek R1 distilled models. Visit https://ollama.com/ and download the installer for your operating system. If you are using Windows, download the exe file and install it. You can find more details on installing and using Ollama on the tutorial page: Install and use DeepSeek-R1 Locally
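After the installer finishes, you can quickly confirm from a terminal that the Ollama CLI is available before moving on. A minimal check (the version number printed will of course differ on your machine):

```shell
# Check that the ollama CLI is on the PATH and print its version.
if command -v ollama >/dev/null 2>&1; then
  ollama --version
  # List models that are already downloaded (empty on a fresh install).
  ollama list
else
  echo "ollama not found on PATH - install it from https://ollama.com/"
fi
```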

Here is the screenshot of the Ollama download section:

Download Ollama for Windows

Once Ollama is installed, we can use it to run these models.

How to run DeepSeek R1's various distilled models locally?

Once Ollama is installed successfully, you can use it to run DeepSeek R1's various distilled models from the command line. Below are example commands to run each model from the terminal. Here is the list of DeepSeek R1 models:

DeepSeek R1 Models

These are the fine-tuned distilled smaller models of DeepSeek R1:

1. DeepSeek-R1-Distill-Qwen-1.5B

To run the 1.5B distilled model, run the following command in the terminal:

ollama run deepseek-r1:1.5b

2. DeepSeek-R1-Distill-Qwen-7B

To run the 7B distilled model, run the following command in the terminal:

ollama run deepseek-r1:7b

3. DeepSeek-R1-Distill-Llama-8B

To run the 8B distilled model, run the following command in the terminal:

ollama run deepseek-r1:8b

4. DeepSeek-R1-Distill-Qwen-14B

To run the 14B distilled model, run the following command in the terminal:

ollama run deepseek-r1:14b

5. DeepSeek-R1-Distill-Qwen-32B

To run the 32B distilled model, run the following command in the terminal:

ollama run deepseek-r1:32b

6. DeepSeek-R1-Distill-Llama-70B

To run the 70B distilled model, run the following command in the terminal:

ollama run deepseek-r1:70b

7. DeepSeek-R1 (full 671B model)

To run the full 671B model (note that this requires very large amounts of memory and is not practical on a typical desktop), run the following command in the terminal:

ollama run deepseek-r1:671b
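Besides the interactive chat session that `ollama run <model>` starts (type `/bye` to exit it), you can also pass a prompt directly on the command line for a one-off, non-interactive answer. A minimal sketch, assuming the 1.5B model has already been downloaded; the guard simply skips the call when Ollama is not installed:

```shell
# Ask a single question without entering the interactive chat;
# the answer is printed to stdout and the command exits.
if command -v ollama >/dev/null 2>&1; then
  ollama run deepseek-r1:1.5b "What is 17 * 23? Answer briefly."
else
  echo "ollama is not installed; skipping"
fi
```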

Here is the output of the DeepSeek R1 14B model:

DeepSeek R1 14B model output
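Ollama also exposes a local REST API (by default on port 11434), so you can query a running model from scripts instead of the interactive prompt. A minimal sketch using curl, assuming the Ollama server is running and the 1.5B model has already been pulled; the prompt text is just an example:

```shell
# JSON request body for Ollama's /api/generate endpoint.
# "stream": false asks for one complete JSON response instead of a stream of chunks.
PAYLOAD='{"model": "deepseek-r1:1.5b", "prompt": "Why is the sky blue?", "stream": false}'

# Validate that the payload is well-formed JSON before sending it.
echo "$PAYLOAD" | python3 -m json.tool >/dev/null && echo "payload OK"

# Send the request only if the Ollama server is reachable.
if curl -s --max-time 2 http://localhost:11434/ >/dev/null 2>&1; then
  curl -s http://localhost:11434/api/generate -d "$PAYLOAD"
else
  echo "Ollama server not reachable on localhost:11434"
fi
```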

In this section we have learned to run the DeepSeek R1 models. There are other ways to run DeepSeek R1 model, which we will learn in future sections.
