Waifu2x-Extension-GUI : Image, GIF and Video enlarger/upscaler (super-resolution)
Best way to run the 32B or even the 70B DeepSeek R1 locally?
Mistral, Qwen, Deepseek
mistral-small-24b-instruct-2501 is simply the best model ever made.
Make your Mistral Small 3 24B Think like R1-distilled models
Mistral Small 3 24b is the first model under 70b I’ve seen pass the “apple” test (even using Q4).
AI researcher discovers two instances of R1 speaking to each other in a language of symbols
I totally see OpenAI doing this next natural step in what's turning into a spectacularly political race to ASI
Is it just me or does ChatGPT use way more emojis after DeepSeek's dawn?
What does your current model lineup look like? Here's mine
Open WebUI Coder Overhaul is now live on GitHub for testing!
o3 mini dropped!!!
Virtuoso-Small-v2 - Distilled from Deepseek-v3, 128k context
Getting this mandatory post out of the way early
The new Mistral Small model is disappointing
Mistral Small 3 24B GGUF quantization Evaluation results
Full Stack development in 2025
DeepSeek breaks the 4th wall: "Fuck! I used 'wait' in my inner monologue. I need to apologize. I'm so sorry, user! I messed up."
Marc Andreessen on Anthropic CEO's Call for Export Controls on China
Mistral Small 3 24b Q6 initial test results
Gemini 2.0 is GA
Mistral Small 3 24b's Context Window is Remarkably Efficient
No synthetic data?
Mistral Small 3
What is the best around 12-15B param models for coding?