📥 Installation Guide

Ollama Installation on Windows, Mac & Linux: Full Guide 2026

Prashant Lalwani
10 min read · Ollama · Installation · Local AI

Installing Ollama is the first step to running powerful AI models like Llama 3, Mistral, and CodeLlama locally, completely free and offline. This full guide to installing Ollama on Windows, Mac, and Linux walks you through the exact steps for each operating system, post-installation verification, and quick-start commands to get your local LLM running in under 10 minutes.

🎯 Time to Install: Windows/Mac: ~3 minutes. Linux: ~5 minutes. No prior AI/ML experience required. Internet needed only for initial download.

[Image: Ollama installation on Windows, Mac, and Linux showing terminal setup and model download]

Prerequisites: What You Need

| Requirement | Minimum | Recommended |
|-------------|---------|-------------|
| OS | Windows 10, macOS 12+, Ubuntu 20.04+ | Latest stable release |
| RAM | 8GB | 16GB+ (for 70B models) |
| Storage | 5GB free | 20GB+ for multiple models |
| Internet | Required for install | Not needed after setup |
| Permissions | Admin/Sudo access | Standard user works post-install |
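
Before installing, you can sanity-check free disk space against the table's minimum. Here is a minimal Python sketch using only the standard library; the path to check is an assumption, so point it at the drive where your models will be stored:

```python
import shutil

MIN_FREE_GB = 5  # minimum from the prerequisites table above

def enough_disk(path=".", min_free_gb=MIN_FREE_GB):
    """Return (free_gb, ok) for the filesystem containing `path`."""
    free_bytes = shutil.disk_usage(path).free
    free_gb = free_bytes / (1024 ** 3)
    return free_gb, free_gb >= min_free_gb

free_gb, ok = enough_disk()
print(f"Free space: {free_gb:.1f} GB - {'OK' if ok else 'need more space'}")
```

Bump `min_free_gb` to 20 if you plan to keep several models around.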

Windows Installation

WINDOWS 10/11
  1. Visit ollama.ai and click Download for Windows
  2. Run OllamaSetup.exe and follow the installer prompts
  3. Ollama installs per-user (models are stored in C:\Users\%USERNAME%\.ollama) and runs silently in the background
  4. Open Command Prompt or PowerShell and verify: ollama --version
  5. Download your first model: ollama pull llama3
ollama --version
ollama version is 0.3.14
ollama run llama3

Mac Installation

macOS 12+
  1. Go to ollama.ai and download the Mac installer
  2. Open Ollama.dmg and drag Ollama to Applications
  3. Launch Ollama from Applications folder — it will appear in your menu bar
  4. Open Terminal and verify: ollama --version
  5. Pull a model: ollama pull llama3 (Apple Silicon auto-enables GPU acceleration)
ollama --version
ollama version is 0.3.14
ollama run mistral

Linux Installation

UBUNTU / DEBIAN / FEDORA
  1. Open Terminal and run the one-line installer:
    curl -fsSL https://ollama.ai/install.sh | sh
  2. The script detects your OS, installs dependencies, and sets up Ollama as a systemd service
  3. Verify installation: ollama --version
  4. Enable auto-start: sudo systemctl enable ollama
  5. Start service: sudo systemctl start ollama
  6. Pull your first model: ollama pull llama3
curl -fsSL https://ollama.ai/install.sh | sh
>>> Installing ollama...
>>> ollama is installed!
ollama run codellama
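
On Linux, the install script registers Ollama as a systemd service. The generated unit looks roughly like the sketch below; the exact paths and user are assumptions based on a default install, so check /etc/systemd/system/ollama.service on your machine:

```ini
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/bin/ollama serve
User=ollama
Group=ollama
Restart=always

[Install]
WantedBy=default.target
```

Running Ollama under its own `ollama` user keeps model files and the API server isolated from your login account.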

Post-Installation Verification

After installation, confirm everything works correctly:

Service Running: ollama list should print a table of downloaded models (empty until you pull one)
API Active: Open http://localhost:11434 in a browser; it should display "Ollama is running"
Model Test: Run ollama run llama3 and type a prompt like "What is 2+2?"
Hardware Detection: If no service is already running, start ollama serve in a foreground terminal and check its startup logs to confirm GPU/CPU acceleration
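
The API check above can also be scripted. Here is a minimal Python sketch using only the standard library; the default port 11434 is assumed, and any HTTP 200 response counts as alive:

```python
import urllib.request
import urllib.error

def ollama_reachable(url="http://localhost:11434", timeout=3):
    """Return True if an HTTP server answers with 200 at `url`, else False."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

print("Ollama is running" if ollama_reachable() else "Ollama is not reachable")
```

Handy as a pre-flight check at the top of any script that talks to the local API.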

Quick Start Commands

Essential Ollama Commands

# List installed models
ollama list

# Download a new model
ollama pull llama3

# Run interactive chat
ollama run llama3

# Run with a custom prompt
ollama run llama3 "Explain quantum computing in simple terms"

# Remove a model to free space
ollama rm llama3

# Unload a running model from memory
ollama stop llama3
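
To drive these commands from scripts, you'll often want to parse the output of ollama list. The sketch below assumes the tabular NAME/ID/SIZE/MODIFIED layout that current releases print; the sample text is illustrative, so verify it against your version:

```python
SAMPLE = """\
NAME            ID              SIZE    MODIFIED
llama3:latest   365c0bd3c000    4.7 GB  2 days ago
mistral:latest  2ae6f6dd7a3d    4.1 GB  5 hours ago
"""

def installed_models(listing: str):
    """Extract model names from `ollama list` output (header row skipped)."""
    rows = [line for line in listing.splitlines() if line.strip()]
    return [row.split()[0] for row in rows[1:]]

print(installed_models(SAMPLE))
```

In a real script, feed it subprocess.run(["ollama", "list"], capture_output=True, text=True).stdout instead of the sample.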

Troubleshooting Common Issues

Installation Fails on Windows

Solution: Run installer as Administrator. Disable antivirus temporarily during install. Ensure .NET 6.0+ is installed.

Permission Denied on Linux

Solution: Use sudo for install commands. Add your user to ollama group: sudo usermod -aG ollama $USER

Model Download Stuck

Solution: Check your internet connection, then re-run ollama pull llama3; interrupted downloads resume from where they stopped. Large models (70B+) may take 10-30 minutes.
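
Because interrupted pulls resume, a simple retry loop is often all you need. Here is a generic Python sketch; the echo demo at the bottom is a stand-in, and ["ollama", "pull", "llama3"] is what you'd substitute for real use:

```python
import subprocess
import time

def run_with_retries(cmd, attempts=3, delay=10):
    """Run `cmd`, retrying up to `attempts` times with `delay` seconds between tries."""
    for attempt in range(1, attempts + 1):
        if subprocess.run(cmd).returncode == 0:
            return True
        if attempt < attempts:
            print(f"Attempt {attempt} failed; retrying in {delay}s...")
            time.sleep(delay)
    return False

# Real use: run_with_retries(["ollama", "pull", "llama3"])
run_with_retries(["echo", "pull complete"], delay=0)
```

Keep `delay` generous for real pulls so a flaky connection has time to recover between attempts.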

API Connection Refused

Solution: Ensure the Ollama service is running. On Linux: sudo systemctl status ollama, and restart with sudo systemctl restart ollama if needed. On Windows/Mac, quit and relaunch the Ollama app.

Next Steps After Installation

💡 Pro Tip: If Ollama isn't already running as a background service, keep ollama serve running in a separate terminal window while you work. This prevents connection issues during long coding sessions.

Frequently Asked Questions

Do I need a GPU to run Ollama?

No. Ollama runs efficiently on CPU. GPU acceleration (NVIDIA/Apple Silicon) is optional but recommended for faster inference, especially with larger models like Llama 3 70B.

Do I need admin rights to install Ollama?

Windows/Mac: Admin/Sudo access is required for the initial install. Linux: the one-line installer needs sudo to set up the systemd service; alternatively, the standalone binary can be downloaded and run as a regular user. Post-install usage doesn't require admin.

How much disk space does Ollama need?

The Ollama app itself is ~300MB. Models vary: 7B models take ~4GB, 34B ~19GB, 70B ~39GB. You can download multiple models, and remove unused ones with ollama rm to free space.
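
Those figures follow a rough rule of thumb for 4-bit quantized models of about 0.56 GB per billion parameters; the factor below is an approximation for planning purposes, not an official number:

```python
GB_PER_BILLION_PARAMS = 0.56  # rough factor for 4-bit quantized weights

def estimated_size_gb(params_billion: float) -> float:
    """Back-of-envelope download size for a 4-bit quantized model."""
    return round(params_billion * GB_PER_BILLION_PARAMS, 1)

for b in (7, 34, 70):
    print(f"{b}B model: ~{estimated_size_gb(b)} GB")
```

Quantizations coarser or finer than 4-bit shift the factor down or up accordingly.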

Is Ollama free to use?

Yes, 100% free. Ollama is open-source. You only pay for your hardware and electricity. No subscriptions, no per-token fees, no hidden costs.

Conclusion

Installing Ollama is straightforward across all major platforms. Whether you're on Windows, macOS, or Linux, the process takes under 10 minutes and opens the door to powerful, private, offline AI. With your installation complete, you're ready to explore local LLMs, build AI applications, and take control of your AI workflow.

For hands-on guidance, check out our free Ollama tutorial or compare local vs cloud AI in our Ollama vs OpenAI guide.

Found this installation guide helpful? Share it! 🚀
