Ollama Installation on Windows, Mac & Linux: Full Guide 2026
Installing Ollama is the first step to running powerful AI models like Llama 3, Mistral, and CodeLlama locally, completely free and offline. This guide walks you through the exact installation steps for Windows, macOS, and Linux, plus post-installation verification and quick-start commands to get your local LLM running in under 10 minutes.
🎯 Time to Install: Windows/Mac: ~3 minutes. Linux: ~5 minutes. No prior AI/ML experience required. Internet needed only for initial download.
Prerequisites: What You Need
| Requirement | Minimum | Recommended |
|---|---|---|
| OS | Windows 10, macOS 12+, Ubuntu 20.04+ | Latest stable release |
| RAM | 8GB | 16GB+ (for 70B models) |
| Storage | 5GB free | 20GB+ for multiple models |
| Internet | Required for install | Not needed after setup |
| Permissions | Admin/Sudo access | Standard user works post-install |
Windows Installation
- Visit ollama.ai and click Download for Windows
- Run `OllamaSetup.exe` and follow the installer prompts
- Ollama runs silently in the background and stores its models in `C:\Users\%USERNAME%\.ollama`
- Open Command Prompt or PowerShell and verify: `ollama --version`
- Download your first model: `ollama pull llama3`
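Before pulling any models, you can sanity-check that the background service is actually up. A quick sketch, assuming Ollama is listening on its default port 11434 (note that in PowerShell, `curl` aliases `Invoke-WebRequest`, so use `curl.exe` for the classic behavior):

```shell
# Ask the local Ollama server for a heartbeat on its default port
curl.exe http://localhost:11434
# A healthy install answers with the plain-text body "Ollama is running"

# Confirm the CLI is on your PATH
ollama --version
```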
Mac Installation
- Go to ollama.ai and download the Mac installer
- Open `Ollama.dmg` and drag Ollama to Applications
- Launch Ollama from the Applications folder; it will appear in your menu bar
- Open Terminal and verify: `ollama --version`
- Pull a model: `ollama pull llama3` (Apple Silicon Macs enable GPU acceleration automatically)
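If you prefer the command line over a .dmg, Homebrew also packages Ollama. A sketch assuming you already have Homebrew installed:

```shell
# Install the Ollama CLI and server via Homebrew
brew install ollama

# Run the server as a background service that starts at login
brew services start ollama

# Verify the install
ollama --version
```

The `brew services` route is handy if you want Ollama managed like any other background daemon rather than as a menu-bar app.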
Linux Installation
- Open a terminal and run the one-line installer: `curl -fsSL https://ollama.ai/install.sh | sh`
- The script detects your distribution, installs dependencies, and sets up Ollama as a systemd service
- Verify the installation: `ollama --version`
- Enable auto-start: `sudo systemctl enable ollama`
- Start the service: `sudo systemctl start ollama`
- Pull your first model: `ollama pull llama3`
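If anything misbehaves after the script finishes, systemd gives you a quick health check. A sketch assuming the installer registered the standard `ollama` unit:

```shell
# Confirm the service is active and see its most recent log lines
sudo systemctl status ollama

# Follow the server logs live; GPU detection and model-load errors show up here
journalctl -u ollama -f
```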
Post-Installation Verification
After installation, confirm everything works correctly:
- Run `ollama list`: it should show your downloaded models
- Open `http://localhost:11434` in a browser: it should display "Ollama is running"
- Run `ollama run llama3` and type `What is 2+2?` to test inference
- Check the `ollama serve` log output to confirm whether GPU or CPU acceleration is in use

Quick Start Commands
Essential Ollama Commands
```shell
# List installed models
ollama list

# Download a new model
ollama pull llama3

# Run interactive chat
ollama run llama3

# Run with a one-off prompt
ollama run llama3 "Explain quantum computing in simple terms"

# Remove a model to free space
ollama rm llama3

# Stop a running model and free its RAM/VRAM
ollama stop llama3
```
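The same local server also exposes an HTTP API on port 11434, which is how editors and scripts talk to Ollama. A minimal sketch using the `/api/generate` endpoint (with `"stream": false`, the server returns one JSON object instead of a token stream):

```shell
# One-shot completion via the local REST API
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "What is 2+2?",
  "stream": false
}'
```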
Troubleshooting Common Issues
Installation Fails on Windows
Solution: Run installer as Administrator. Disable antivirus temporarily during install. Ensure .NET 6.0+ is installed.
Permission Denied on Linux
Solution: Use sudo for install commands. Add your user to the ollama group: `sudo usermod -aG ollama $USER`, then log out and back in for the group change to take effect.
Model Download Stuck
Solution: Check your internet connection, then re-run `ollama pull llama3`; interrupted downloads resume where they left off. Large models (70B+) can take 10-30 minutes on slower connections.
API Connection Refused
Solution: Ensure the Ollama service is running. On Linux, check with `sudo systemctl status ollama` and restart with `sudo systemctl restart ollama`. On Windows and macOS, quit and relaunch the Ollama app.
Next Steps After Installation
- Explore the best Ollama models for coding and choose based on your hardware
- Learn how to run Ollama locally with Llama models for maximum performance
- Compare Ollama vs OpenAI API to decide your long-term AI strategy
- Integrate Ollama with your IDE using Continue or Aider for AI-assisted development
💡 Pro Tip: If the background service isn't already running, launch `ollama serve` in a separate terminal window to keep Ollama active while you work. This prevents connection and timeout issues during long coding sessions.
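When you run the server by hand, its behavior can be tuned with environment variables. A sketch using two standard Ollama settings (the storage path below is a hypothetical example):

```shell
# Listen on all network interfaces instead of just localhost,
# so other machines on your LAN can reach the API
OLLAMA_HOST=0.0.0.0 ollama serve

# Store downloaded models on a bigger disk (example path)
OLLAMA_MODELS=/mnt/big-disk/ollama-models ollama serve
```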
Frequently Asked Questions
Do I need a GPU to run Ollama?
No. Ollama runs efficiently on CPU. GPU acceleration (NVIDIA/Apple Silicon) is optional but recommended for faster inference, especially with larger models like Llama 3 70B.
Do I need admin rights to install Ollama?
Windows/Mac: admin access is required for the initial install. Linux: the install script needs root to register the systemd service, but you can also run the standalone binary entirely in user space with `ollama serve`. Post-install usage doesn't require admin.
How much disk space do Ollama and its models need?
The Ollama app itself is ~300MB. Models vary: 7B models take ~4GB, 34B ~19GB, 70B ~39GB. You can download multiple models, and remove unused ones with `ollama rm` to free space.
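To see how much space your models actually occupy, you can combine the CLI listing with a disk-usage check on the model store (the `~/.ollama` path below is the default location on macOS and Linux):

```shell
# Per-model sizes as reported by Ollama
ollama list

# Total on-disk usage of the model store (default location)
du -sh ~/.ollama
```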
Is Ollama really free?
Yes, 100% free. Ollama is open-source. You only pay for your hardware and electricity. No subscriptions, no per-token fees, no hidden costs.
Conclusion
Installing Ollama is straightforward across all major platforms. Whether you're on Windows, macOS, or Linux, the process takes under 10 minutes and opens the door to powerful, private, offline AI. With your installation complete, you're ready to explore local LLMs, build AI applications, and take control of your AI workflow.
For hands-on guidance, check out our free Ollama tutorial or compare local vs cloud AI in our Ollama vs OpenAI guide.