
How to use DeepSeek AI locally on any PC for free

DeepSeek AI is a powerful open-source language model that you can run locally on your PC for free. This guide will walk you through the installation process, ensuring you can harness the capabilities of DeepSeek AI without relying on cloud services.

Prerequisites:

  • Operating System: Windows, macOS, or Linux
  • Hardware: Modern CPU with at least 16 GB of RAM; a dedicated GPU is recommended for optimal performance but not mandatory.
  • Software: Python 3.8 or later, and Git installed on your system.
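Before you begin, you can confirm the Python and Git prerequisites from a terminal. This pre-flight check is an optional convenience, not part of the Ollama installer:

```bash
# Optional pre-flight check: confirm Python 3.8+ is available.
python3 -c 'import sys; assert sys.version_info >= (3, 8), "Python 3.8+ required"; print("Python OK:", sys.version.split()[0])'

# Report Git if present; the steps below still work without it
# unless you plan to build related tools from source.
command -v git >/dev/null 2>&1 && echo "Git OK: $(git --version)" || echo "Git not found"
```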

Step 1: Install Ollama

Ollama is a tool designed for running AI models locally. Open your terminal and run:

bash
curl -fsSL https://ollama.com/install.sh | sh

This command downloads and executes the Ollama installation script. After the process completes, verify the installation by checking its version:

bash
ollama --version

On Linux, ensure that the Ollama service is running (on macOS and Windows, the Ollama app starts its background service automatically):

bash
systemctl is-active ollama.service

If it’s not active, start it manually:

bash
sudo systemctl start ollama.service

To have the service start automatically on boot:

bash
sudo systemctl enable ollama.service

Step 2: Download and Run DeepSeek-R1

DeepSeek-R1 offers various model sizes to balance performance and resource usage. For instance, to download and run the 7B model, execute:

bash
ollama run deepseek-r1:7b

If your system has limited resources, consider starting with a smaller model:

  • 1.5b: Minimal resource usage
  • 7b and 8b: Balanced performance and resource requirements
  • 14b and 32b: Intermediate options for higher quality
  • 70b: Highest quality, but requires a high-end GPU with ample memory

The download size for these models varies:

  • 1.5b: ~2.3 GB
  • 7b: ~4.7 GB
  • 70b: ~40 GB+

Choose the model that best fits your hardware capabilities.
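If you experiment with several sizes, Ollama's standard `list` and `rm` subcommands show what is installed and let you reclaim disk space:

```bash
# List locally downloaded models with their tags and on-disk sizes.
ollama list

# Remove a model you no longer need to free disk space.
ollama rm deepseek-r1:1.5b
```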

Step 3: Interact with DeepSeek

Once the model is downloaded, you can start interacting with it directly. Running the same command again opens an interactive chat session; the model is loaded from the local cache, not downloaded again:

bash
ollama run deepseek-r1:7b

You can now input prompts and receive responses from the model.
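Besides the interactive session, `ollama run` also accepts a one-shot prompt as an argument, which is convenient for scripting. Inside the interactive session, type `/bye` to exit:

```bash
# One-shot prompt: the model answers once and the command exits.
ollama run deepseek-r1:7b "Summarize what a hash table is in one sentence."
```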

Optional: Using a Local API for Integration

If you wish to integrate DeepSeek into other applications or services, you can use Ollama's built-in HTTP API, which listens on port 11434. The service installed in Step 1 usually exposes it already; if nothing is listening, start the server manually:

bash
ollama serve &

Then send a request:

bash
curl http://localhost:11434/api/generate -d '{"model": "deepseek-r1:7b", "prompt": "Hello, how are you?"}'

This allows you to programmatically interact with DeepSeek.
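As a minimal sketch of consuming the API from the shell: with `"stream": false`, `/api/generate` returns a single JSON object whose `response` field holds the completion (without it, Ollama streams one JSON object per line). This assumes you pulled `deepseek-r1:7b` in Step 2 and that the server is listening on the default port:

```bash
# Ask for a non-streamed response and extract the "response" field.
curl -s http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:7b",
  "prompt": "Hello, how are you?",
  "stream": false
}' | python3 -c 'import json, sys; print(json.load(sys.stdin)["response"])'
```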

Conclusion

By following these steps, you can run DeepSeek AI locally on your PC, providing you with a powerful tool for various applications without the need for cloud-based services. Remember to choose the model size that best fits your hardware capabilities to ensure optimal performance.
