Two Simple Methods to Run DeepSeek AI Locally for Enhanced Privacy

Overview of DeepSeek AI

DeepSeek is a prominent name in the artificial intelligence landscape, gaining recognition since its founding in May 2023. This Chinese startup operates as an independent AI research lab focused on creating large language models (LLMs) that are remarkably powerful yet cost-effective, standing out against its U.S. counterparts. Notably, DeepSeek releases its models openly, allowing users to download and run them free of charge.

Key Features of DeepSeek AI

DeepSeek has made significant strides in the tech arena due to several features:

  • Cost Efficiency: DeepSeek claims it can develop LLMs at a fraction of the cost compared to U.S. companies. This affordability could revolutionize accessibility to AI technology, especially in regions with budget constraints.

  • Versatile Models: The company has released various models tailored for different applications. These include specialized models for programming tasks, general-purpose usage, and even vision-related functionalities.

  • Engaging User Experience: Users have reported that interacting with DeepSeek often feels like having a conversation. The system tends to provide elaborate responses that can lead to deeper insights and learning opportunities.

Installing DeepSeek Locally

You might prefer running DeepSeek AI locally to enhance privacy and control over your data. Fortunately, there are two primary methods to achieve this: using Msty or a command-line installation on Linux.

Method 1: Using Msty

To install DeepSeek via Msty, you will need:

  • Ollama: A prerequisite for running DeepSeek models locally.
  • Msty Installed: This software is available for Linux, macOS, and Windows and is entirely free of charge.

Installation Steps:

  1. Launch Msty: Open the Msty GUI. The method varies based on your operating system.
  2. Access Local AI Models: Locate the icon resembling a computer monitor with a lightning bolt on the left sidebar. Click on it to open the Local AI Models section.
  3. Download DeepSeek Model: Find DeepSeek R1 in the list and select the download button (an arrow pointing downward). Wait for the download to finish, then close the Local AI Models window.
  4. Utilize DeepSeek: Return to the main window, open the model selection drop-down menu, choose DeepSeek R1, and start typing your queries.

Method 2: Using Linux Command Line

If you prefer a command-line installation, ensure that your system meets the following requirements:

  • A multi-core CPU with at least 12 cores.
  • An NVIDIA GPU with CUDA support for enhanced performance.
  • A minimum of 16 GB of RAM, preferably 32 GB or more.
  • NVMe storage for better read/write speeds.
  • An Ubuntu-based Linux distribution.
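Before installing anything, you can quickly check whether your machine meets these requirements. The following is a minimal sketch for an Ubuntu-based system; it reads the core count and RAM from standard Linux interfaces and only queries the GPU if the NVIDIA driver tools are present:

```shell
# Check CPU cores, RAM, GPU, and OS against the requirements above
cores=$(nproc)
ram_gb=$(awk '/MemTotal/ {printf "%d", $2/1024/1024}' /proc/meminfo)
echo "CPU cores: $cores (12+ recommended)"
echo "RAM: ${ram_gb} GB (16 GB minimum, 32 GB or more preferred)"

# nvidia-smi is only available when the NVIDIA driver is installed
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi --query-gpu=name,memory.total --format=csv,noheader
else
    echo "No NVIDIA GPU detected (CPU-only inference will be slower)"
fi

# Confirm the distribution
[ -f /etc/os-release ] && grep PRETTY_NAME /etc/os-release
```

If the GPU line reports nothing, Ollama will still work, but inference falls back to the CPU and larger models become impractical.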

Installation Steps:

  1. Ensure that you have Ollama installed; if not, run:

    curl -fsSL https://ollama.com/install.sh | sh

    You’ll be prompted to enter your user password.

  2. To run the DeepSeek R1 model, use:

    ollama run deepseek-r1:8b
  3. Different model versions are available; the suffix indicates the parameter count in billions:
    • deepseek-r1:1.5b (smallest)
    • deepseek-r1:7b
    • deepseek-r1:14b
    • deepseek-r1:32b
    • deepseek-r1:70b (largest and most advanced)

Once the command completes, you can start working with the selected model right from your terminal.
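Beyond the interactive chat, Ollama also accepts a prompt directly on the command line, which is handy for scripting. A short sketch, assuming the deepseek-r1:8b model was pulled in step 2:

```shell
# Confirm which models are installed locally
ollama list

# Send a one-shot prompt without entering the interactive session
ollama run deepseek-r1:8b "Summarize what a large language model is in one sentence."
```

The response prints to stdout, so you can pipe it into other tools or redirect it to a file.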

DeepSeek AI’s robust models and the ability to run them locally highlight a significant step toward privacy-focused AI applications. By harnessing this technology, you can keep your queries confidential while enjoying the advanced capabilities of this emerging AI.
