Hacker News

The staggeringly effective compression of LLMs is still underappreciated, I think.

Two years ago you could already download onto your laptop an effective and useful summary of all of the information on the Internet, one that could be used to generate computer programs in an arbitrarily selected programming language.
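To put rough numbers on that compression claim (the parameter count, quantization level, and corpus size below are my own illustrative assumptions, not figures from the comment above): a small open-weight model quantized for laptop use fits in a few gigabytes, versus the raw web text it was distilled from.

```python
# Back-of-envelope sketch; all figures are assumptions for illustration,
# not measured values for any particular model.
params = 7_000_000_000      # a typical "small" open-weight model
bits_per_weight = 4         # common quantization level for laptop inference
model_bytes = params * bits_per_weight // 8

corpus_bytes = 100e12       # assume ~100 TB of raw training text

print(f"model on disk: {model_bytes / 1e9:.1f} GB")
print(f"rough compression ratio vs corpus: {corpus_bytes / model_bytes:,.0f}x")
```

Obviously it's lossy compression, as the hallucination complaints further down the thread show, but the ratio is still striking.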




Yes! Continuing on thoughts of LLM compression, I'm now convinced and amazed that economics will dictate that all devices contain a copy of all information on the Internet.

I wrote a post about it: Your toaster will know mesopotamian history because it’s more expensive not too.

https://wanderingstan.com/2026-03-01/your-toaster-will-know-...


> Your toaster will know mesopotamian history because it’s more expensive not too.

But will it know the difference between too and to?


Fairly certain the least expensive option will always be a dumb toaster that just plugs into the wall.

I chose a toaster specifically because it's about the simplest electrical device out there, and thus pushes the thesis to the extreme. But smart toasters are pretty common: https://revcook.com/products/r180-connect-plus-smart-toaster...

And as another commenter pointed out, a smart toaster with ads or data collection can be subsidized and thus more profitable. (Oh, what a world we're headed for!)

In any case, I think the LLM-everywhere thesis holds even more strongly for moderate-complexity devices like power plugs, microwaves, and mobile phones.


But in that case, it won't be subsidized by the manufacturer!

I'm sure people would get a cheaper toaster in exchange for an ad being burned into their bread.


Shhhhhhhhh! Don't give them any more bad ideas, sheesh!

Not if it requires the toaster company to maintain a different SKU without the LLM chip and sells very few units.

I got excited about that, until I actually tried to download a model, run it locally, and ask it questions. A current-gen local LLM small enough to live on disk and fit in my laptop's RAM is very prone to hallucinating facts, which makes it kind of useless.

Ask your local model a verifiable question - for example, a list of the tallest buildings in Europe. I did it with Gemma on my laptop, and after the top 3 they were all fake. I just tried that again with Gemma-4 on my iPhone, and it did even worse - the 3 tallest buildings in Europe are apparently the Burj Khalifa, the Torre Glòries, and the Shanghai Tower.

I wouldn't call that effective compression of information.


I don't think any LLMs are good at accurately regurgitating arbitrary facts, unless they happen to be very common in their training, and certainly not good at making novel comparisons between them.


