
Ollama and Quantization! 📚 I spent some time learning about AI quantization. It reduces the quality of LLMs, but also reduces the required memory. I'll be testing this in short order!

@ahmadmangazap
over 1 year ago


#llama #ai #technology #stemgeeks #stem

Posted via D.Buzz
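As an illustration of the quality-for-memory trade-off the post mentions, here is a minimal sketch of the core idea behind weight quantization (an assumed toy example, not Ollama's actual implementation): float32 weights are mapped to int8, cutting memory by 4x at the cost of a small rounding error.

```python
import numpy as np

def quantize_int8(weights: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric per-tensor quantization: float32 -> int8 plus a scale."""
    scale = float(np.abs(weights).max()) / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights from the int8 tensor."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=1024).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

print(w.nbytes, q.nbytes)              # 4096 1024 -> 4x less memory
print(float(np.abs(w - w_hat).max()))  # small reconstruction error (<= scale/2)
```

Real quantization schemes (like the 4-bit and 8-bit variants Ollama ships) are more sophisticated, with per-block scales, but the trade-off is the same: fewer bits per weight, slightly noisier weights.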

