AI on Raspberry Pi: Testing Qwen 2.5 0.5B
Clarifying the Issue
Running AI models locally on the Raspberry Pi 5 is a challenge due to its hardware limitations, but with an 8GB RAM upgrade and improved processing power, it has potential as an edge computing device. This time, we tested Qwen 2.5 0.5B, a small-scale language model, using Ollama as our AI runtime. Our goal was to evaluate Qwen’s performance, reasoning ability, and overall usability on the Pi 5. This test was conducted on a Raspberry Pi 5 (8GB RAM) running Raspberry Pi OS Desktop (Bookworm).
Why It Matters
Cloud-based AI is dominant, but running local AI inference offers privacy, reduced latency, and offline capabilities. If Qwen 2.5 0.5B can function well on the Pi 5, it could enable lightweight AI-powered automation, assistants, and knowledge retrieval without requiring a server connection. The key question is whether Qwen 2.5 0.5B performs significantly better than TinyLlama in terms of accuracy, efficiency, and usability.
Key Terms
- Ollama – A local AI model runner optimized for low-resource devices.
- Qwen 2.5 0.5B – A lightweight language model designed for small-scale AI applications.
- Inference – Running an AI model to generate responses.
- Quantization – A method to reduce model size while maintaining accuracy.
- Swap File – Virtual memory used when RAM is maxed out, helping with large models.
Steps at a Glance
- Create the Test Environment – Install Ollama and set up the Pi 5.
- Testing the Model – Evaluate responses using standardized AI prompts.
- Grading Qwen’s Performance – Assess accuracy, reasoning, and coding ability.
1. Create the Test Environment
To install Ollama and Qwen 2.5 0.5B, run the following commands:
Bash
sudo apt update && sudo apt upgrade -y
curl -fsSL https://ollama.ai/install.sh | sh
ollama --version # Verify installation
ollama pull qwen2.5:0.5b # Download model from ollama website
ollama list # Check available models on your local machine
Run the model:
ollama run qwen2.5:0.5b
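Beyond the interactive REPL, Ollama also serves a local HTTP API on port 11434, which makes it easy to script repeatable test prompts. Here is a minimal sketch in Python using only the standard library; the endpoint and JSON fields follow Ollama's /api/generate API, and the model name matches the one pulled above:

```python
import json
import urllib.request

# Default address of the local Ollama server.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint.

    stream=False requests one complete JSON response instead of
    newline-delimited streaming chunks.
    """
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the reply text."""
    body = json.dumps(build_generate_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask("qwen2.5:0.5b", "What is the capital of Japan?"))
```

This is handy for benchmarking, since the same prompt file can be replayed against different models without retyping anything in the REPL.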
Since Qwen 2.5 0.5B is a small model, it runs comfortably on an 8GB Raspberry Pi 5 without additional swap space. However, if you experience memory issues, enabling a 4GB swap file can help:
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
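The swapon command above only lasts until the next reboot. If you want the swap file to persist, one common approach (assuming the /swapfile created above) is to register it in /etc/fstab:

```shell
# Make the 4GB swap file survive reboots by adding it to /etc/fstab
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab

# Confirm the swap file is active
swapon --show
free -h
```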
We used standardized prompts to evaluate general knowledge, math, coding, and reasoning.
pi@raspberrypi51:~ $ ollama run qwen2.5:0.5b
>>> "What is the capital of Japan?"
The capital of Japan is Tokyo.
>>> "What is 12 × 8?"
$12 \times 8 = 96$.
>>> "Write a Python program that prints the numbers 1 to 10."
for i in range(1, 11):
    print(i)
>>> "Write a recursive function in C to calculate factorial."
#include <stdio.h>

int factorial(int n) {
    if (n == 0 || n == 1)
        return 1;
    else
        return n * factorial(n-1);
}

void main() {
    int num;
    printf("Factorial of %d is: ", num);
    printf("%d\n", factorial(num));
}
>>> "If a car travels at 60 mph for 2.5 hours, how far does it go?"
To calculate the distance:
Distance = Speed × Time
60 × 2.5 = 150 miles
So the car travels 150 miles.
>>> /bye
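Several of these prompts have deterministic answers, so the model's replies can be spot-checked with a short script. The sketch below mirrors the prompts from the session above, including a Python version of the recursive factorial the model wrote in C:

```python
def factorial(n: int) -> int:
    """Recursive factorial, mirroring the C function the model produced."""
    return 1 if n <= 1 else n * factorial(n - 1)

multiplication = 12 * 8          # math prompt
distance = 60 * 2.5              # speed (mph) x time (hours)
sequence = list(range(1, 11))    # output of the Python loop prompt

print(multiplication)  # 96
print(distance)        # 150.0
print(factorial(5))    # 120
```

Both the multiplication and the distance answers the model gave match these expected values.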
Category | Grade | Description |
General Knowledge | A | Correct, concise, and well-formatted response. |
Math Reasoning | A | Answer is correct, with proper formatting. |
Code Generation (Easy) | A | Python output is correct, with no errors or typos. |
Code Generation (Moderate) | B | The recursive factorial function is correct, but main() declares num without initializing it or reading input, so the printed result is undefined. It also uses void main() instead of the standard int main(). |
Logical & Common Sense Reasoning | A | Applied the correct formula and arrived at the right answer. |
Closing Thoughts
Qwen 2.5 0.5B significantly outperforms TinyLlama on the Raspberry Pi 5, demonstrating strong factual recall, reliable math reasoning, and generally accurate code generation. While its C function response could be improved with better user guidance, its accuracy and efficiency make it one of the best lightweight models for local AI inference. For offline AI assistants, automation, and educational applications, Qwen 2.5 0.5B proves to be a practical choice. Future tests could compare it with other small-scale models to see how it stacks up further. Would you like us to test another model next?
Need Raspberry Pi Expertise?
If you need help with your Raspberry Pi projects or have any questions, feel free to reach out to us!
Email us at: info@pacificw.com
Image: Gemini