Learn how to install Ollama on Raspberry Pi step by step. Run AI models locally, optimize performance, and build your own private AI system easily.
Running Large Language Models (LLMs) locally on a Raspberry Pi sounds exciting, and it is! With Ollama, you can run powerful AI models like Llama, Mistral, and others directly on your Pi without relying on cloud services.
In this guide, you will learn:
- ✅ What you need before starting
- ✅ How to install Ollama on Raspberry Pi
- ✅ How to download and run models
- ✅ How to use Ollama in the terminal
- ✅ How to access it from another device
- ✅ Performance tips for Raspberry Pi
Let's get started.
🧰 Requirements
Before installing Ollama, make sure you have:
Hardware
- Raspberry Pi 4 (4GB or 8GB RAM recommended)
- Raspberry Pi 5 (works even better)
- 16GB+ microSD card (32GB recommended)
- Stable internet connection
Software
- Raspberry Pi OS (64-bit version required)
- Updated system packages
⚠️ Important: Ollama requires a 64-bit OS. If you're using 32-bit Raspberry Pi OS, reinstall the 64-bit version.
🔍 Step 1: Check Your Raspberry Pi OS Version
Open Terminal and type:
uname -m
If you see:
aarch64
✅ Good! You are running a 64-bit OS.
If you see:
armv7l
❌ That is 32-bit. You must install the 64-bit version of Raspberry Pi OS.
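If you ever want to script this check, a small shell test like the one below does the same thing (just a convenience sketch):
if [ "$(uname -m)" = "aarch64" ]; then
  echo "64-bit OS detected: ready for Ollama"
else
  echo "32-bit OS detected: reinstall the 64-bit Raspberry Pi OS first"
fi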
🔄 Step 2: Update Your Raspberry Pi
Before installing anything, update your system:
sudo apt update && sudo apt upgrade -y
Reboot if needed:
sudo reboot
📦 Step 3: Install Ollama on Raspberry Pi
Ollama provides an easy installation script.
Run this command:
curl -fsSL https://ollama.com/install.sh | sh
Wait for installation to complete.
After installation, verify:
ollama --version
If you see a version number, Ollama is installed successfully! 🎉
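On Linux, the install script also usually registers Ollama as a systemd service so the server starts in the background. You can confirm that (assuming the default service name, ollama) with:
systemctl is-active ollama
# Or see the full status and recent log lines
systemctl status ollama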
▶️ Step 4: Run Your First AI Model
Now let's download and run a lightweight model.
For Raspberry Pi, start with:
ollama run tinyllama
The first time you run this, Ollama will download the model, which may take a while on a Pi.
After download, you will see:
>>>
Now you can type:
Explain what a Raspberry Pi is in simple words.
And the AI will respond locally!
Type /bye (or press Ctrl+D) to quit.
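You do not have to use the interactive prompt every time. The ollama CLI also accepts the prompt as an argument, which is handy in scripts:
# One-shot prompt: prints the answer and exits instead of opening the chat
ollama run tinyllama "Explain what a Raspberry Pi is in simple words."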
🤖 Recommended Models for Raspberry Pi
Because Raspberry Pi has limited RAM, use small models:
| Model | Command | Recommended RAM |
|---|---|---|
| TinyLlama | ollama run tinyllama | 4GB |
| Phi | ollama run phi | 4GB |
| Mistral (small) | ollama run mistral | 8GB |
Avoid very large models (7B+) on a 4GB Pi; they will be slow or crash.
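Model files are several gigabytes each, so disk space matters too. You can check what is stored locally and remove models you no longer need:
# List downloaded models and their sizes
ollama list
# Remove a model to free up space
ollama rm tinyllama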
🔁 Step 5: Run Ollama as a Background Service
On Raspberry Pi OS, the install script usually registers Ollama as a systemd service, so the server may already be running in the background. To start the server manually in a terminal instead:
ollama serve
By default, it runs on:
http://localhost:11434
You can test the API:
curl http://localhost:11434
If everything is working, it replies with: Ollama is running
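Beyond this health check, you can call the generate endpoint straight from the terminal. Setting "stream": false asks for one complete JSON object instead of a stream of partial ones:
curl http://localhost:11434/api/generate -d '{
  "model": "tinyllama",
  "prompt": "Explain what a Raspberry Pi is in simple words.",
  "stream": false
}'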
📡 Step 6: Access Ollama from Another Computer (Optional)
By default, Ollama only listens on localhost. Say your Raspberry Pi's IP address is:
192.168.1.50
To make Ollama listen on all network interfaces, run:
OLLAMA_HOST=0.0.0.0 ollama serve
Now, from another computer on the same network, open:
http://192.168.1.50:11434
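Setting the variable on the command line only lasts for that session. If Ollama was installed as a systemd service (the Linux install script usually sets this up), one way to make the setting permanent is a service override; this is a sketch, assuming the default service name ollama:
sudo systemctl edit ollama
# In the editor that opens, add:
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0"
sudo systemctl restart ollama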
You can connect it with:
- Open WebUI
- Custom apps
- Python scripts
- Node.js applications
🐍 Step 7: Use Ollama with Python (Optional)
Install the Python requests library:
pip install requests
Create a file test.py:
import requests

# Ask the local Ollama server for a complete, non-streaming response
response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "tinyllama",
        "prompt": "Explain AI in simple words",
        "stream": False,  # without this, the API streams one JSON object per line
    },
)
print(response.json()["response"])
Run:
python3 test.py
Now you are controlling AI with Python!
⚡ Performance Optimization Tips
Raspberry Pi is powerful but limited. Follow these tips:
1️⃣ Use smaller models
Tiny models run faster.
2️⃣ Use a Raspberry Pi 5 if possible
It delivers much better AI performance.
3️⃣ Use an SSD instead of an SD card
An external SSD improves speed significantly (see the model-storage sketch after this list).
4️⃣ Close unnecessary background apps
5️⃣ Increase swap memory (optional)
Edit:
sudo nano /etc/dphys-swapfile
Find the CONF_SWAPSIZE line and change it to:
CONF_SWAPSIZE=2048
Restart swap:
sudo systemctl restart dphys-swapfile
⚠️ Do not set the swap size too high if you are using an SD card; heavy swapping wears SD cards out quickly.
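Following up on tip 3: if you mount an external SSD, you can keep the model files there as well. Ollama reads the OLLAMA_MODELS environment variable to decide where models live. A minimal sketch, assuming a hypothetical mount point /mnt/ssd (adjust it to your setup):
# Create a model directory on the SSD (example path; use your own mount point)
mkdir -p /mnt/ssd/ollama-models
# Point Ollama at the SSD for this session
export OLLAMA_MODELS=/mnt/ssd/ollama-models
ollama serve
If Ollama runs as the systemd service, set the variable in a service override instead, as shown in Step 6.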
🧠 What Can You Do With Ollama on Raspberry Pi?
- Build an offline AI assistant
- Create a smart home voice controller
- Make an AI chatbot for a school project
- Run a local coding assistant
- Experiment with edge AI
- Keep a fully private AI system (no cloud)
❓ Common Problems & Solutions
❌ "command not found"
Restart the terminal or reboot the Pi.
❌ Model crashes
Use a smaller model.
❌ Very slow responses
This is normal on a 4GB Pi. Reduce the model size.
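If an error message is not enough to diagnose the problem, the server logs usually say more. When Ollama runs as a systemd service, you can follow them with journalctl:
# Follow the Ollama service logs live (Ctrl+C to stop)
journalctl -u ollama -f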
🏁 Final Thoughts
Installing Ollama on Raspberry Pi transforms your small computer into a local AI powerhouse.
You now have:
- A private AI system
- No cloud dependency
- Full control over models
- A powerful learning platform
Whether you are a student, hobbyist, or developer, this is a fantastic way to explore AI at the edge.
Also learn: How to Use Cursor AI: The Ultimate Beginner's Guide (2025)
