Running the DeepSeek LLM with 1.5 Billion Parameters on a 2-Core CPU, 4GB RAM ThinkPad... XD
January 30, 2025
What is DeepSeek R1 LLM?
DeepSeek R1 is a cutting-edge open-weight AI model designed to rival top-tier language models. Built with 671 billion parameters, it utilizes 37 billion active parameters per token, ensuring high efficiency and performance across various NLP tasks, including coding, reasoning, and content generation. With its open-source approach, DeepSeek R1 promotes transparency and innovation, making it a strong contender in the AI landscape.
Performance Comparison: DeepSeek R1 vs. ChatGPT
Despite having fewer active parameters per token and being developed at a fraction of the cost of ChatGPT (reportedly around one-twentieth), DeepSeek R1 delivers comparable results in accuracy, reasoning, and overall language understanding. Its optimized architecture maximizes computational efficiency while maintaining high performance across various NLP tasks, including coding, content generation, and logical reasoning, making it a cost-effective option for large-scale AI applications.
Additionally, DeepSeek R1’s open-source nature provides an edge over proprietary models like ChatGPT, offering greater transparency, flexibility, and customization for developers and researchers. By enabling the AI community to experiment and improve upon its architecture, DeepSeek R1 fosters innovation and rapid advancements in AI. This makes it a compelling choice for those looking to leverage state-of-the-art language models without the constraints of closed-source alternatives.
Available System Configuration for Running DeepSeek R1
My system configuration for running DeepSeek R1 includes a ThinkPad T460s with an Intel i5-6200U processor (2 cores, 4GB RAM), running Ubuntu 24.04 LTS (x86_64) with kernel version 6.8.0. However, this setup struggles to provide a stable environment for efficiently running DeepSeek R1, as the model demands significantly higher computational resources. The recommended system for running DeepSeek R1 would require more powerful hardware, such as a multi-core processor, at least 16GB of RAM, and a dedicated GPU for better performance. Despite these limitations, I continue to explore AI models with the available hardware.
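If you want to check your own machine's specs before attempting this, standard Linux tools will report the same details:
lscpu | grep 'Model name'   # CPU model (e.g. Intel i5-6200U)
nproc                       # number of logical CPUs available
free -h                     # total and available RAM, human-readable
uname -r                    # kernel version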
What is Ollama?
Ollama is a platform that makes it easy to interact with large language models (LLMs) directly on your own hardware. Instead of relying on cloud services, you can run different AI models locally, giving you more control over your data and privacy. It's designed to help developers and businesses integrate AI into their applications quickly and easily. Ollama provides tools that let you fine-tune and customize models to fit your needs, making it simple to experiment and deploy AI solutions.
What’s great is that Ollama is also optimized to run on hardware that’s not as powerful as what most AI models typically need, so it can be a good option even if you're working with limited resources. It’s a more flexible and privacy-conscious way to use AI, without having to rely on external services.
To install Ollama on your system, you can use the following command in your shell:
curl -fsSL https://ollama.com/install.sh | sh
This will download and run the installation script for Ollama, which will automatically set up the platform on your machine.
Ollama is available not just for Linux, but also for macOS and Windows, so you can easily set it up on any of these platforms. Just follow the installation instructions based on your operating system to get started.
After installing Ollama, you can check whether it was successfully downloaded and installed by running the following command in your terminal:
ollama
If Ollama is installed correctly, this command prints its usage information and a list of available subcommands, confirming that it's ready to use. If the command isn't found or returns an error, the installation may not have completed successfully, and you may need to reinstall or troubleshoot the installation process.
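You can also confirm the installed version and list the models present locally (none yet, on a fresh install):
ollama --version   # prints the installed Ollama version
ollama list        # lists models downloaded to this machine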
Running DeepSeek R1
To run the DeepSeek LLM with 1.5 billion parameters using Ollama, you can use the following command:
ollama run deepseek-r1:1.5b
This command starts the specified model version (deepseek-r1:1.5b). The first run requires downloading roughly 1.1 GB of model files; Ollama manages the pull automatically, and once the files are in place the model loads and starts processing.
The download may take a few minutes depending on your internet connection speed. Once the files are downloaded, the model is ready to run, and you can begin interacting with it.
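If you'd rather separate the download from the first run, say, to do the pull on a faster connection, Ollama lets you fetch the files explicitly and start the model later:
ollama pull deepseek-r1:1.5b   # download the model files without starting a chat session
ollama run deepseek-r1:1.5b    # start the model, reusing the files pulled above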
What are manifest files?
A manifest is a file containing metadata for a group of accompanying files that form a set or coherent unit. For example, the files of a computer program may have a manifest describing the program's name, version number, license, and constituent files.
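In Ollama's case, you can inspect the metadata recorded for a downloaded model with the show subcommand; in recent versions it reports details such as the architecture, parameter count, context length, and quantization:
ollama show deepseek-r1:1.5b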
After running the command and allowing Ollama to download the necessary model files, you'll have successfully created an environment for running the DeepSeek LLM with 1.5 billion parameters. The setup involves pulling the required files, initializing the model, and making sure everything is ready to interact with. Once that's done, you can start running DeepSeek and begin experimenting with it. You've now got everything in place for a smooth-running environment!
To test your setup by printing numbers from 1 to 10 using the DeepSeek LLM, you can send a message like:
print numbers from 1 to 10
This should prompt the model to generate the output. Depending on how the model is configured, it may respond with the numbers as a simple list (1 through 10), and the R1 distills often print their reasoning steps before the final answer. If the model responds as expected, your environment is set up correctly and processing inputs. If it doesn't behave as anticipated, you might need to troubleshoot the setup or refine the prompt.
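Beyond the interactive prompt, Ollama also exposes a local HTTP API (on port 11434 by default), so you can send the same test programmatically. A minimal sketch with curl, assuming the Ollama server is running in the background:
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:1.5b",
  "prompt": "print numbers from 1 to 10",
  "stream": false
}'
Setting "stream": false returns the whole response as a single JSON object rather than token-by-token chunks.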
Conclusion
In conclusion, running DeepSeek LLM—or any large language model (LLM)—locally with a setup that has limited resources, like a lower-end configuration, is easier than it might initially seem, thanks to platforms like Ollama. With just a simple command, you can download the necessary manifest files, run the model, and start experimenting. While the performance may be limited by hardware, the process itself is relatively straightforward and can be done without complex setups. Software like Ollama simplifies running powerful models locally, even on machines with poor configurations, making it more accessible for users with limited resources to explore AI and LLM capabilities.
Commands to run the different parameter sizes
DeepSeek-R1-Distill-Qwen-1.5B
ollama run deepseek-r1:1.5b
DeepSeek-R1-Distill-Qwen-7B
ollama run deepseek-r1:7b
DeepSeek-R1-Distill-Llama-8B
ollama run deepseek-r1:8b
DeepSeek-R1-Distill-Qwen-14B
ollama run deepseek-r1:14b
DeepSeek-R1-Distill-Qwen-32B
ollama run deepseek-r1:32b
DeepSeek-R1-Distill-Llama-70B
ollama run deepseek-r1:70b