
I get that this may seem nitpicky, but that is by definition not free, and good luck running even the lightest LLMs on consumer hardware with 8 GB of RAM. 16 GB is barely sufficient, and you probably need a new MacBook to really stretch even that.
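For a rough sense of why, here's a back-of-envelope sketch (Python, with assumed numbers: a 7B-parameter model and typical quantization levels; actual runtimes add KV cache and other overhead on top of the weights):

    # Back-of-envelope memory estimate for local LLM weights.
    # Assumptions (illustrative, not measured): a 7B-parameter model
    # and common quantization levels.

    PARAMS = 7e9  # 7B parameters (assumed model size)

    BYTES_PER_PARAM = {
        "fp16": 2.0,   # full half-precision weights
        "q8_0": 1.0,   # ~8-bit quantization
        "q4_0": 0.5,   # ~4-bit quantization, typical on consumer hardware
    }

    for quant, bpp in BYTES_PER_PARAM.items():
        weights_gb = PARAMS * bpp / 1024**3
        print(f"{quant}: ~{weights_gb:.1f} GB for weights alone")

    # Prints roughly: fp16 ~13.0 GB, q8_0 ~6.5 GB, q4_0 ~3.3 GB.

So on an 8 GB machine, even an aggressively quantized 7B model leaves very little headroom for the KV cache, the application, and the OS itself.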

People aren’t going to wait minutes per response for results that are clearly inferior to what they get for free, in seconds, from ChatGPT in the browser, whether that’s logical or not. Not to mention they can’t ask more than a few questions before the whole thing crumbles as the context fills up. Expectations and reality are too far apart here.
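The "crumbles after a few questions" part tracks with how the KV cache grows per token of conversation. A sketch, assuming a Llama-2-7B-like architecture (32 layers, 32 KV heads, head dimension 128, fp16 cache entries); real models and runtimes will differ:

    # Rough KV-cache growth: why long chats strain small-RAM machines.
    # Assumptions (illustrative): Llama-2-7B-like shape, fp16 (2-byte) cache.

    N_LAYERS, N_KV_HEADS, HEAD_DIM, BYTES = 32, 32, 128, 2

    # Each token stores one key and one value vector per layer.
    kv_bytes_per_token = 2 * N_LAYERS * N_KV_HEADS * HEAD_DIM * BYTES

    for context_tokens in (512, 2048, 4096):
        gb = context_tokens * kv_bytes_per_token / 1024**3
        print(f"{context_tokens:>5} tokens of context: ~{gb:.2f} GB of KV cache")

    # ~0.5 MB per token here, so a 4096-token conversation adds ~2 GB
    # on top of the weights.

That extra ~2 GB for a longish chat, stacked on several GB of weights, is exactly the budget an 8 GB machine doesn't have.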

Let’s also address another real issue: what are they actually going to use? LM Studio? Is that really a user experience most people will tolerate?
