Unsloth Joins PyTorch Ecosystem

May 11, 2026 • By Daniel & Michael


We’re excited to share that Unsloth has officially joined the PyTorch Ecosystem! Maybe you found us through our model uploads, GGUFs, low-level kernels, educational guides, model bug fixes, training and reinforcement learning (RL) library, or our new web UI, Unsloth Studio. However you discovered Unsloth, our mission has always been to democratize AI by making open models more accurate, faster, simpler, and more accessible to everyone.

In Q1 2026, PyTorch welcomed Unsloth into the PyTorch Ecosystem Landscape, highlighting our work and integrations with PyTorch, our contributions to open-source AI, and the amazing support from our community. We're super excited to work more closely with PyTorch to bring you guys even more open-source goodness!
🦥 What is Unsloth?
To give a quick rundown of who we are: our main Unsloth repo is for training and running LLMs locally. It builds on several PyTorch projects, including PyTorch itself, torchvision, and torchao, to give Unsloth standardized, reliable infrastructure for training and inference.

Our latest release, Unsloth Studio, is an open-source UI (desktop app coming soon) that lets you train and run 500+ models (Gemma 4, Qwen3.6, etc.) in one unified interface across Windows, Mac, and Linux devices. Unsloth Studio supports many workloads, including building datasets from PDF and CSV files, exporting models, self-healing tool calling, web search, API endpoints, and much more. You can install it in one command here.

You might mainly know us from our model uploads on Hugging Face, where we upload lots of quantized GGUF, MLX, NVFP4, FP8, Bnb 4-bit, and other variants of models. The main goal of our quants is to retain as much accuracy as possible even when quantized. All our model uploads can be used for inference, and some can be used for training.
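To build intuition for what these quantized variants do, here is a minimal, hypothetical sketch (not Unsloth's or llama.cpp's actual code) of block-wise symmetric 4-bit quantization, the basic idea behind formats like GGUF's Q4 variants and bnb 4-bit: each small block of weights shares one float scale, and every weight is stored as a signed 4-bit integer.

```python
# Illustrative sketch only, not any library's real implementation:
# block-wise symmetric 4-bit quantization. Each block of weights shares one
# float scale; individual values are stored as ints in [-8, 7].

def quantize_block(block):
    """Quantize a list of floats to signed 4-bit ints plus one shared scale."""
    scale = max(abs(x) for x in block) / 7 or 1.0  # avoid scale=0 for all-zero blocks
    q = [max(-8, min(7, round(x / scale))) for x in block]
    return q, scale

def dequantize_block(q, scale):
    """Map the stored 4-bit ints back to floats."""
    return [v * scale for v in q]

weights = [0.12, -0.5, 0.33, 0.07, -0.21, 0.49, -0.02, 0.15]
q, scale = quantize_block(weights)
restored = dequantize_block(q, scale)

# The round-trip error per weight is bounded by half a quantization step.
max_err = max(abs(w, ) if False else abs(w - r) for w, r in zip(weights, restored))
assert max_err <= scale / 2 + 1e-9
```

Real formats layer more tricks on top (per-block minimums, importance-weighted rounding, mixed block sizes), which is where most of the accuracy retention comes from.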

We don't just focus on text LLMs: we also support VLMs and embedding, audio, TTS, and OCR models, among others. We also do lots of lower-level work, including customized Triton kernels that uniquely make training of models ~2× faster with 70% less VRAM and zero accuracy degradation.

We have also contributed fixes across many open models, including Gemma, Qwen, Mistral, Llama, and gpt-oss. We help out with training bugs too: for example, we previously identified and fixed a gradient accumulation bug that affected nearly all training implementations and caused the reported training loss to be incorrect.
🔥 PyTorch Collaborations
We’ve already had the chance to collaborate with the PyTorch team on several exciting projects. A huge thank you to the PyTorch team for their work and for making these collabs possible.

Together, we introduced FP8 RL for consumer GPUs, making FP8 RL inference 1.4× faster, while FP8 RL training uses 60% less VRAM and supports 12× longer context lengths.

We also showed how to run LLMs on phones with ExecuTorch using Unsloth GGUFs. In another collab, we demonstrated how LLMs can be quantized to 4-bit and recover up to 70% of lost accuracy with Quantization-Aware Training, or QAT. With PyTorch, we showed that QAT can deliver major efficiency gains, including 4× lower VRAM usage, no inference overhead, and even 1–3% improvements in raw accuracy on benchmarks like GPQA and MMLU Pro.
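The core trick behind QAT can be shown with a tiny numeric sketch. This is a hypothetical illustration, not PyTorch's or torchao's actual API: the forward pass "fake-quantizes" a weight (quantize, then immediately dequantize) so the loss reflects 4-bit precision, while the optimizer keeps updating the underlying float weight. Gradients are passed through the rounding step unchanged, known as the straight-through estimator.

```python
# Hypothetical sketch of the QAT idea (not torchao's real API).

def fake_quant_4bit(w, scale):
    """Round w to the nearest signed 4-bit level, then map back to float."""
    q = max(-8, min(7, round(w / scale)))
    return q * scale

w = 0.331                    # float "master" weight owned by the optimizer
scale, lr = 0.05, 0.5
x, y = 1.0, 0.30             # toy data point: we want w * x ≈ y

for _ in range(20):
    w_q = fake_quant_4bit(w, scale)      # forward pass sees the 4-bit weight
    grad = 2 * (w_q * x - y) * x         # gradient of (w_q*x - y)**2 w.r.t. w_q,
    w -= lr * grad                       # passed straight through to w

# The float weight has drifted until its quantized value is the 4-bit
# level nearest the target, which is exactly what QAT trains for.
assert abs(fake_quant_4bit(w, scale) - 0.30) < 1e-9
```

Because the network trains against its own quantized forward pass, it learns to compensate for rounding error, which is why accuracy recovers compared to quantizing after training.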

We’ve also worked with PyTorch on OpenEnv, as well as hackathons with AMD and Hugging Face focused on building better RL environments. Across all of these collabs, the goal has been to make open-source AI more accessible for everyone.
💡 What Joining the Ecosystem Means
The PyTorch Ecosystem Landscape showcases open-source projects built with PyTorch and recognizes them based on technical merit, community impact, and alignment with PyTorch’s mission. Past projects featured in the landscape include Hugging Face Transformers, SGLang, vLLM, and many others.

For Unsloth, joining the PyTorch Ecosystem is an exciting step. It will help us reach more people in the PyTorch community and give us greater access to resources, support, and opportunities to collaborate.

Beyond that, nothing changes. We’ll keep building open-source projects and releasing new features, models, optimizations, bug fixes, our desktop app, and broader hardware support, all while continuing to listen closely to feedback from you guys. And of course, your contributions will remain an essential part of what we build.
💕 Community 
Thanks to your support, Unsloth is now the 10th most-followed organization on Hugging Face, just behind OpenAI. We’ve also surpassed 250M model downloads and now have over 200 amazing GitHub contributors. None of this would have been possible without the support from you all.

We want to thank each and every one of you who has used or supported Unsloth, whether through Unsloth Studio, our training package, quantized models, model uploads, bug fixes, feedback, or contributions.

We’re excited to keep contributing to open source and can’t wait to share more exciting news and product launches soon. Thank you to PyTorch, Hugging Face and llama.cpp for also making Unsloth possible. 🙏

Be sure to join our Reddit page and Discord server for help or just to show your support! You can also follow us on Twitter (X) and subscribe to our newsletter on Substack.
Thank you for reading!
Daniel & Michael Han 🦥
