Unsloth Studio Installation
Learn how to install Unsloth Studio on your local device.
Unsloth Studio works on Windows, Linux, WSL, and MacOS. The installation process is the same on every device, although system requirements differ by platform.
Mac: Works like CPU for now: Chat + Data Recipes are supported, with MLX training coming very soon.
CPU: Unsloth still works without a GPU, but only for Chat + Data Recipes.
Training: Works on NVIDIA GPUs (RTX 30, 40, and 50 series, Blackwell, DGX Spark/Station, etc.) and Intel GPUs.
Coming soon: Support for Apple MLX and AMD.
Install Instructions
Remember, the install instructions are the same across every device:
Install Unsloth
MacOS, Linux, WSL:
```bash
curl -fsSL https://unsloth.ai/install.sh | sh
```

Windows PowerShell:

```powershell
irm https://unsloth.ai/install.ps1 | iex
```

The first install should now be 6x faster, with a 50% smaller download, thanks to precompiled llama.cpp binaries.
WSL users: you will be prompted for your sudo password to install build dependencies (cmake, git, libcurl4-openssl-dev).
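After the installer finishes, a quick sanity check is to confirm the install folder exists (this is the same `~/.unsloth/studio` path that the Uninstall section removes):

```bash
# Sanity check: the installer places Studio under ~/.unsloth/studio
# (the same folder the Uninstall section deletes)
if [ -d "$HOME/.unsloth/studio" ]; then
  echo "Unsloth Studio found at $HOME/.unsloth/studio"
else
  echo "Install folder not found - re-run the install script" >&2
fi
```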
Start training and running
Start fine-tuning and building datasets immediately after launching. See our step-by-step guide to get started with Unsloth Studio:
Get Started

Update Unsloth Studio:
Use the same install commands to update.
MacOS, Linux, WSL:
```bash
curl -fsSL https://unsloth.ai/install.sh | sh
```

Windows PowerShell:

```powershell
irm https://unsloth.ai/install.ps1 | iex
```

Or use (currently does not work on Windows):
System Requirements
Windows
Unsloth Studio works directly on Windows without WSL. To train models, make sure your system satisfies these requirements:
Requirements
- Windows 10 or Windows 11 (64-bit)
- NVIDIA GPU with drivers installed
- App Installer (includes winget): here
- Git: `winget install --id Git.Git -e --source winget`
- Python: version 3.11 up to, but not including, 3.14
- Work inside a Python environment such as uv, venv, or conda/mamba
MacOS
Unsloth Studio works on Mac for Chat with GGUF models and for Data Recipes (Export coming very soon). MLX training is coming soon!
- macOS 12 Monterey or newer (Intel or Apple Silicon)
- Install Homebrew: here
- Git: `brew install git`
- cmake: `brew install cmake`
- openssl: `brew install openssl`
- Python: version 3.11 up to, but not including, 3.14
- Work inside a Python environment such as uv, venv, or conda/mamba
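To confirm the Python requirement above, a one-line check (assuming `python3` is the interpreter your environment uses):

```bash
# Verify the active Python is in the supported range: 3.11 <= version < 3.14
python3 -c 'import sys; ok = (3, 11) <= sys.version_info[:2] < (3, 14); print("Python", sys.version.split()[0], "is", "supported" if ok else "unsupported")'
```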
Linux & WSL
- Ubuntu 20.04+ or a similar distro (64-bit)
- NVIDIA GPU with drivers installed
- CUDA toolkit (12.4+ recommended; 12.8+ for Blackwell)
- Git: `sudo apt install git`
- Python: version 3.11 up to, but not including, 3.14
- Work inside a Python environment such as uv, venv, or conda/mamba
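Before installing, you can verify the GPU driver and CUDA toolkit requirements above with a short sketch (`nvidia-smi` ships with the driver, `nvcc` with the toolkit):

```bash
# Check that the NVIDIA driver and CUDA toolkit from the list above are visible
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=name,driver_version --format=csv,noheader
else
  echo "nvidia-smi not found - install the NVIDIA drivers"
fi
if command -v nvcc >/dev/null 2>&1; then
  nvcc --version | grep release
else
  echo "nvcc not found - install the CUDA toolkit (12.4+)"
fi
```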
Docker
Our Docker image now works for Studio! We're working on Mac compatibility.
Pull our latest Unsloth container image:
```bash
docker pull unsloth/unsloth
```

Run the container via:
For more information, see here.
Access your Studio instance at http://localhost:8000 or, from another machine, at your external IP address: http://external_ip_address:8000/
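As a sketch only (the exact flags are assumptions, not the documented command; see the linked Docker docs for the real invocation), a GPU-enabled run that exposes port 8000 as in the URLs above might look like:

```bash
# Hypothetical invocation - flags are assumptions, check the linked docs:
# --gpus all passes the NVIDIA GPUs through, -p maps Studio's port 8000
docker run --gpus all -p 8000:8000 unsloth/unsloth
```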
CPU only
Unsloth Studio supports CPU-only devices for Chat with GGUF models and for Data Recipes (Export coming very soon).
The requirements are the same as those listed above for Linux and MacOS, except the NVIDIA GPU drivers are not needed.
Developer Installation (Advanced)
macOS, Linux, WSL developer installs:
Windows PowerShell developer installs:
Nightly - MacOS, Linux, WSL:
Then to launch every time:
Nightly - Windows:
Run in Windows PowerShell:
Then to launch every time:
Uninstall
You can uninstall Unsloth Studio by deleting its install folder, usually located under $HOME/.unsloth/studio on Mac/Linux/WSL and %USERPROFILE%\.unsloth\studio on Windows. Or run:

MacOS, WSL, Linux:

```bash
rm -rf ~/.unsloth/studio
```

Windows (PowerShell):

```powershell
Remove-Item -Recurse -Force "$HOME\.unsloth\studio"
```

Optional: remove `$HOME\.unsloth` on Windows or `~/.unsloth` on MacOS/Linux/WSL if you want to delete all Unsloth files.

Note: these commands delete everything, including your history, cache, chats, etc.
Deleting model files
You can delete old model files either from the bin icon in model search or by removing the relevant cached model folder from the default Hugging Face cache directory. By default, Hugging Face uses ~/.cache/huggingface/hub/ on macOS/Linux/WSL and C:\Users\<username>\.cache\huggingface\hub\ on Windows.
MacOS, Linux, WSL:

```
~/.cache/huggingface/hub/
```

Windows:

```
%USERPROFILE%\.cache\huggingface\hub\
```
If HF_HUB_CACHE or HF_HOME is set, use that location instead. On Linux and WSL, XDG_CACHE_HOME can also change the default cache root.
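That precedence can be expressed in one line of shell (the variable names are the standard Hugging Face ones; `HF_HUB_CACHE` wins, then `$HF_HOME/hub`, then the default):

```bash
# Resolve the effective Hugging Face hub cache directory:
# HF_HUB_CACHE wins, else $HF_HOME/hub, else ~/.cache/huggingface/hub
hub_cache="${HF_HUB_CACHE:-${HF_HOME:-$HOME/.cache/huggingface}/hub}"
echo "Models are cached in: $hub_cache"
```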
Google Colab notebook
We’ve created a free Google Colab notebook so you can explore all of Unsloth’s features on Colab’s T4 GPUs. You can train and run most models up to 22B parameters, and switch to a larger GPU for bigger models. Just click 'Run all' and the UI should pop up after installation.
Once installation is complete, scroll to Start Unsloth Studio and click Open Unsloth Studio in the white box shown on the left:
Scroll further down to see the actual UI.

We now precompile llama.cpp binaries for much faster install speeds.
Sometimes the Studio link may return an error. This can happen if you are using an adblocker or Mozilla Firefox, or because Google Colab expects you to stay on the Colab page; if it detects inactivity, it may shut down the GPU session. If the link fails, you can instead scroll down a bit and use the UI inside the notebook.
Troubleshooting
| Issue | Fix |
| --- | --- |
| Python version error | `sudo apt install python3.12 python3.12-venv` (supported: version 3.11 up to, but not including, 3.14) |
| `nvidia-smi` not found | Install NVIDIA drivers from https://www.nvidia.com/Download/index.aspx |
| `nvcc` not found (CUDA) | `sudo apt install nvidia-cuda-toolkit`, or add `/usr/local/cuda/bin` to `PATH` |
| `llama-server` build failed | Non-fatal: Studio still works, but GGUF inference won't be available. Install cmake and re-run setup to fix. |
| `cmake` not found | `sudo apt install cmake` |
| `git` not found | `sudo apt install git` |
| Build failed | Delete `~/.unsloth/llama.cpp` and re-run setup |
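For the "nvcc not found" entry above, when the toolkit is installed under /usr/local/cuda but its binaries are not on PATH, the fix can look like this (add the export to your shell profile to make it persist):

```bash
# Put the CUDA toolkit binaries on PATH for the current shell session
export PATH="/usr/local/cuda/bin:$PATH"
command -v nvcc >/dev/null 2>&1 && nvcc --version || echo "nvcc still not found - install the CUDA toolkit"
```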