Step-by-step guide to installing Ollama on Raspberry Pi and running AI models locally.

šŸš€ Ollama on Raspberry Pi: Step-by-Step Installation & Complete Setup Guide (2026)

Running Large Language Models (LLMs) locally on a Raspberry Pi sounds exciting — and it is! With Ollama, you can run powerful AI models like Llama, Mistral, and others directly on your Pi without relying on cloud services.

In this guide, you will learn:

  • āœ… What you need before starting
  • āœ… How to install Ollama on Raspberry Pi
  • āœ… How to download and run models
  • āœ… How to use Ollama in terminal
  • āœ… How to access it from another device
  • āœ… Performance tips for Raspberry Pi

Let’s get started.


🧰 Requirements

Before installing Ollama, make sure you have:

Hardware

  • Raspberry Pi 4 (4GB minimum, 8GB recommended) or Raspberry Pi 5 (noticeably faster)
  • 16GB+ microSD card (32GB recommended)
  • Stable internet connection

Software

  • Raspberry Pi OS (64-bit version required)
  • Updated system packages

āš ļø Important: Ollama requires a 64-bit OS. If you’re using 32-bit Raspberry Pi OS, reinstall the 64-bit version.


šŸ” Step 1: Check Your Raspberry Pi OS Version

Open a terminal and run:

uname -m

If you see:

aarch64

āœ… Good! You are running a 64-bit OS.

If you see:

armv7l

āŒ That is 32-bit. You must install 64-bit Raspberry Pi OS.


šŸ”„ Step 2: Update Your Raspberry Pi

Before installing anything, update your system:

sudo apt update && sudo apt upgrade -y

Reboot if needed:

sudo reboot
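It is also worth checking your free disk space now, because the model files you will download later are large (hundreds of megabytes to several gigabytes each):

df -h /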

šŸ“¦ Step 3: Install Ollama on Raspberry Pi

Ollama provides an easy installation script.

Run this command:

curl -fsSL https://ollama.com/install.sh | sh

Wait for the installation to complete.

After installation, verify:

ollama --version

If you see a version number — šŸŽ‰ Ollama is installed successfully!
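On Linux, the install script normally also registers Ollama as a systemd service, so the server starts in the background on its own. You can confirm that with:

systemctl status ollama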


ā–¶ļø Step 4: Run Your First AI Model

Now let’s download and run a lightweight model.

For Raspberry Pi, start with:

ollama run tinyllama

The first time you run this command, Ollama downloads the model, which may take a while on a slow connection.

After download, you will see:

>>> 

Now you can type:

Explain what a Raspberry Pi is in simple words.

And the AI will respond locally!

Type /bye (or press Ctrl+D) to quit.
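Two other built-in prompt commands are worth knowing:

  • /? lists all available commands
  • /clear clears the current conversation context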


šŸ¤– Recommended Models for Raspberry Pi

Because the Raspberry Pi has limited RAM, stick to small models:

Model              Command                 Recommended RAM
TinyLlama          ollama run tinyllama    4GB
Phi                ollama run phi          4GB
Mistral (small)    ollama run mistral      8GB

Avoid larger models (7B parameters and up) on a 4GB Pi; they will be very slow or may crash.
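You can manage downloaded models from the terminal as well. For example, list what is installed, fetch a model without starting a chat, and remove one you no longer need:

ollama list
ollama pull phi
ollama rm tinyllama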


🌐 Step 5: Run Ollama as a Background Service

If the systemd service from Step 3 is running, Ollama is already serving in the background. Otherwise, start the server manually:

ollama serve

By default, the server listens on:

http://localhost:11434

You can test that the API is up:

curl http://localhost:11434
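A plain GET like that should answer with the text "Ollama is running". To generate text through the API, POST to the /api/generate endpoint (setting "stream": false so you get a single JSON object back instead of a stream):

curl http://localhost:11434/api/generate -d '{
  "model": "tinyllama",
  "prompt": "Why is the sky blue?",
  "stream": false
}'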

šŸ“” Step 6: Access Ollama from Another Computer (Optional)

Suppose your Raspberry Pi's IP address is:

192.168.1.50

Start the server so it listens on all network interfaces, not just localhost:

OLLAMA_HOST=0.0.0.0 ollama serve

Now, from another computer on the same network, open:

http://192.168.1.50:11434
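Note that OLLAMA_HOST set this way only lasts for that one terminal session. If Ollama runs as a systemd service (the default after the install script), you can make the setting permanent with a service override. A minimal sketch:

sudo systemctl edit ollama

Add these two lines in the editor that opens, then save:

[Service]
Environment="OLLAMA_HOST=0.0.0.0"

And restart the service:

sudo systemctl restart ollama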

You can connect to it with:

  • Open WebUI
  • Custom apps
  • Python scripts
  • Node.js applications
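A quick way to confirm the connection from the other machine is to list the Pi's installed models over the network (using the example IP from above):

curl http://192.168.1.50:11434/api/tags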

šŸ Step 7: Use Ollama with Python (Optional)

Install the Python requests library:

pip install requests
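On recent Raspberry Pi OS releases (Bookworm and later), pip refuses to install into the system Python and reports an "externally-managed-environment" error. If that happens, create a virtual environment first (the name ollama-env is just an example):

python3 -m venv ~/ollama-env
source ~/ollama-env/bin/activate
pip install requests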

Create a file test.py:

import requests

# Ask the local Ollama server for a completion.
# "stream": False makes the API return one JSON object;
# without it, Ollama streams JSON lines and response.json() fails.
response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "tinyllama",
        "prompt": "Explain AI in simple words",
        "stream": False,
    },
)

print(response.json()["response"])

Run:

python3 test.py

Now you are controlling AI with Python!


⚔ Performance Optimization Tips

The Raspberry Pi is capable but resource-limited. These tips help:

1ļøāƒ£ Use smaller models

Tiny models run faster.

2ļøāƒ£ Use Raspberry Pi 5 if possible

Its faster CPU and memory give noticeably better inference performance.

3ļøāƒ£ Use SSD instead of SD card

External SSD improves speed significantly.

4ļøāƒ£ Close unnecessary background apps

5ļøāƒ£ Increase swap memory (Optional)

Edit:

sudo nano /etc/dphys-swapfile

Find the CONF_SWAPSIZE line and change it to:

CONF_SWAPSIZE=2048

Restart swap:

sudo systemctl restart dphys-swapfile

āš ļø Do not set too high if using SD card.


🧠 What Can You Do With Ollama on Raspberry Pi?

  • Build an offline AI assistant
  • Run a smart home voice controller
  • Create an AI chatbot for a school project
  • Use a local coding assistant
  • Experiment with edge AI
  • Keep a private AI system (no cloud)

ā— Common Problems & Solutions

āŒ ā€œcommand not foundā€

Close and reopen the terminal, or reboot, so your shell picks up the new PATH entry.

āŒ Model crashes

Switch to a smaller model (tinyllama is the safest choice), or increase swap as described above.

āŒ Very slow response

This is normal on a 4GB Pi. Use a smaller model and consider booting from an SSD.
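For anything else, check the server logs. If Ollama runs as a systemd service (the default after the install script), you can follow them live with:

journalctl -u ollama -f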


šŸŽ‰ Final Thoughts

Installing Ollama on Raspberry Pi transforms your small computer into a local AI powerhouse.

You now have:

  • A private AI system
  • No cloud dependency
  • Full control over models
  • A powerful learning platform

Whether you are a student, hobbyist, or developer — this is a fantastic way to explore AI at the edge.

Also learn: How to Use Cursor AI: The Ultimate Beginner’s Guide (2025)
