CCMan1701A@startrek.website to Selfhosted@lemmy.world · n8n + Ollama: self-hosted AI automation that actually works
I run LLMs on a Radeon 780M; you'll be fine. I get pretty close to 10 tokens per second, even for larger 20B+ models.
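If you want to check throughput on your own box, here's a minimal Python sketch against Ollama's local HTTP API. It assumes a default Ollama install on port 11434; the model name is just a placeholder for whatever 20B+ model you have pulled.

```python
# Minimal sketch: measure Ollama generation throughput (tokens/sec).
# Assumes a local Ollama server at the default port; the model name
# below is a placeholder for whatever model you actually run.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "gemma2:27b",  # placeholder; swap in your 20B+ model
        "prompt": "Explain RAID levels in two sentences.",
        "stream": False,  # one final JSON response with timing stats
    },
    timeout=600,
)
stats = resp.json()

# eval_count = tokens generated, eval_duration = generation time in ns
tok_per_sec = stats["eval_count"] / stats["eval_duration"] * 1e9
print(f"{tok_per_sec:.1f} tokens/sec")
```

`stream: False` makes Ollama return a single JSON object that includes `eval_count` and `eval_duration`, which is the easiest way to get a clean tokens/sec number without parsing a stream.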