Even the lowest density US states have most of the population in corridors or areas with sufficient density.
E.g. Montana used to have passenger rail through the most densely populated southern part of the state. That region has a population density comparable to regions of Norway that have regular rail service. (There are efforts to restart passenger service there.)
And it's not like places like Norway have rail everywhere either - the lower threshold for density where rail is considered viable is just far lower.
The actual proportion of the US population that lives in areas with too low density to support rail is really tiny.
Totally makes sense IMHO. Even subscriptions are essentially a simplified/averaged "pay for what you use".
The question is rather whether a single type of subscription license makes sense (e.g. when AI burns through more resources than the average human, should the subscription go up for everybody? - as a human I would be pissed to subsidise heavy AI usage by other users).
E.g. there should probably be special 'AI licenses' similar to how some products have special 'CI licenses'.
Still, those technological issues do happen, and in those situations it's good to have a human pilot in control. See for example Qantas Flight 72 - the flight computer thought the aircraft was stalling and sent the plane into a dive. It could have ended very badly without human supervision.
Apple doesn't make regional variants of the phone, so all models have the technology built-in, even if it's disabled by default. Android phones outside of Japan lack Suica support.
Maybe we should go back to kitchen-sink frameworks, so that most functionality you need is covered by the fat framework. I'm still using Django and it keeps my Python project's dependency count relatively low :)
Yeah, it's crazy to think an opaque chatbot will be preferable to a well designed UI for most users. People don't like badly designed UIs, but I'm pretty sure most people under 40 prefer a well designed UI to a customer service agent. We call customer service because the website doesn't do what we want, not because we don't want to use the website.
.NET is great because you use a FOSS library and then a month later the developer changes the licence and forces you to either pay a subscription for future upgrades or swap it out.
Lean-zip was not my project but one by others in the lean community. I'm not sure about the methodological details of their process - you might want to check with the original lean-zip authors (https://github.com/kim-em/lean-zip)
Exactly my thought. On Windows I used the free version for casual video editing and making memes. On Linux it just doesn't work. I managed to somehow fix the audio problem, then it had issues with codecs, and in general it was a very miserable experience.
> This is what I've always found confusing as well about this push for AI. The act of typing isn't the hard part - its understanding what's going on, and why you're doing it.
This is a very superficial and simplistic analysis of the whole domain. Programmers don't "type". They apply changes to the code. Pressing buttons on a keyboard is not the bottleneck. If it were, code completion and templating would have been revolutionary, world-changing developments in the field.
The difficult part is understanding what to do, how to do it, and why. It turns out LLMs can handle all of these kinds of tasks. Onboarding onto a new project? Hit an LLM assistant with /explain. Want to implement a feature that matches a specific requirement? Hit your LLM assistant with /plan followed by apply. Want to cover some code with tests? Hit your LLM assistant with /tests.
In the end you review the result and do with it whatever you want. Some even feel confident enough to YOLO the output of the LLM.
So while you still try to navigate through files, others already have features out.
I've seen this objection pop up every single time and I still don't get it.
GPUs run 32, 64, or even 128 vector lanes at once. If you have a block of Rust threads that are properly programmed to take advantage of the vector processing (by avoiding divergence, etc.), how is it supposed to be slower?
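A toy sketch of why avoiding divergence matters, in plain Python (purely illustrative - this just models the masking behaviour, it doesn't run on a GPU):

```python
# Each "lane" of a warp holds one element. When lanes disagree on a
# branch, SIMT hardware executes BOTH paths and uses a per-lane mask
# to keep the right result - so diverged code pays for both sides.
lanes = [1.0, -2.0, 3.0, -4.0]
mask = [v > 0 for v in lanes]            # branch predicate, per lane
then_path = [v * 2.0 for v in lanes]     # all lanes execute this...
else_path = [v / 2.0 for v in lanes]     # ...and this
result = [t if m else e for m, t, e in zip(mask, then_path, else_path)]
print(result)  # [2.0, -1.0, 6.0, -2.0]
```

If all lanes take the same path, the mask is uniform and only one side actually runs - which is why divergence-free code keeps full throughput.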
Consider the following:
You have a hyperoptimized matrix multiplication kernel and you also have your inference engine code that previously ran on the CPU. You now port the critical inference engine code to directly run on the GPU, thereby implementing paged attention, prefix caching, avoiding data transfers, context switches, etc. You still call into your optimized GPU kernels.
Where is the magical slowdown supposed to come from? The mega kernel researchers are moving more and more code to the GPU and they got more performance out of it.
Is it really that hard to understand that the CUDA style programming model is inherently inflexible and limiting? I think the fundamental problem here is that Nvidia marketing gave an incredibly misleading perception of how the hardware actually works. GPUs don't have thousands of cores like CUDA Core marketing suggests. They have a hundred "barrel CPU"-like cores.
The RTX 5090 is advertised as having 21760 CUDA cores. In practice this is a meaningless number, since "CUDA cores" are purely a software concept that doesn't exist in hardware. The vector processing units are not cores. The RTX 5090 actually has 170 streaming multiprocessors, each with its own instruction pointer, that you can target independently just like a CPU. The key restriction is that for maximum performance you need to take advantage of all 128 lanes, and you also need enough thread copies that differ only in the subset of data they process, so that the GPU can switch between them while it is working on multi-cycle instructions (memory loads and the like). That's it.
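The arithmetic behind the advertised number is easy to check - the marketing count is just SMs times vector lanes:

```python
sms = 170            # streaming multiprocessors on the RTX 5090
lanes_per_sm = 128   # vector lanes per SM
advertised = sms * lanes_per_sm
print(advertised)    # 21760, the marketing "CUDA core" count
```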
Here is what you can do: take a bunch of streaming multiprocessors, let's say 8, and use them to run your management code on the GPU side without having to transfer data back to the CPU. When you want to do the heavy lifting you are in luck, because you still have 162 streaming multiprocessors left to do whatever you want. You call into cuDNN and get great performance.
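To put numbers on that split (using the figures above - the reservation barely dents the compute pool):

```python
total_sms = 170      # RTX 5090 streaming multiprocessors
management_sms = 8   # reserved for on-GPU management code
compute_sms = total_sms - management_sms
print(compute_sms)   # 162 SMs left for the heavy lifting
print(round(management_sms / total_sms * 100, 1))  # ~4.7% of the chip
```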
We are also currently in the midst of a migration from NextJS to TanStack Start, and it's worth it for the performance and resource gains alone.
NextJS's dev server takes around 3-4 GB of memory after a few page clicks, while TanStack / Vite consumes less than a GB.
people's character also changes over time, and everyone's work is built on other people's (of course usually people try to make sure those people are credited correctly)
He is more of an extremely driven and singularly lucky workaholic asshole with sufficient capacity to cram a lot of technical details (or drugs) into his head, which impressed and motivated technical staff (and investors). He then morphed into this Nazi creep: as he got more popular he simply began to ignore negative feedback more and more (and obviously got addicted to the far-right echo chamber).
Good parallel. An article recently explained how Switzerland got the fastest fibre-optic network: all companies share the same cabling. Dig once. No need to hook up the property or do anything when switching providers.
Youtube doesn't implement a back function. A real back function would take you back to the same page you came from. If you click a video from the Youtube home page, then click the back button, Youtube will regenerate a different home page with different recommendations, losing the potentially interesting set of recommendations you saw before. You are forced to open every link in a new tab if you want true back functionality.
I wish they (the authors of DaVinci Resolve and the Photo Editor) paid more attention to the Linux platform. Theoretically DaVinci Resolve runs on Linux, but getting it to run is a very bad experience on Ubuntu/Kubuntu 24.04. I even paid for the DaVinci licence, as I read somewhere that on Linux it's necessary in order to have all codecs supported. It did not help. Fortunately there were no problems with the refund.
There are whole guides online on how to work around these issues, and even then I could not get the audio working. Somehow it relies on some old ALSA API which is no longer maintained/supported on Ubuntu/Kubuntu, or I'm just too stupid to make it work. AI assistants could not provide a working solution for me either.
I moved back to Linux a year ago after around 10 years of Windows (and I used Slackware Linux for ~15 years before that). I am amazed at how much progress KDE and the whole Linux ecosystem have made. Gaming these days is just as easy as on Windows, which was my primary reason for switching to Windows in the first place. My printer just works now. Even music production is excellent on Linux now. There are plenty of great software options to choose from and they just work - as I would expect from a mature ecosystem.
This all feels so good, given that Linux doesn't push trash onto my computer (OS-bound spyware/bloatware) and has an excellent, customizable UI. Full freedom. I do feel that I own my hardware.
Yet I miss DaVinci Resolve. For now I use Kdenlive, which is nice for simple editing, but feels unfinished, or I just don't know how to use it correctly.
Hey, I just gave it a try and it looks like a really good project. I tried the free version and generated one post using the URL option for my app https://www.sneakersbook.com/, and it clearly detected the whole idea of the project (I guess this works better if the site is optimized).
As a free user with 10 generations for testing, I feel like my first one got a bit wasted, in the sense that there are no options for setting tone or style, or at least I didn't see them. Bearing in mind that the product is based on generation quantity, a little customization option would be nice.
I only used the main workflow for my first generation, so this is my humble take.
Is having problematic features that cause problems also a requirement?
The answer to the above question will reveal whether someone is an engineer or an electrician/plumber/code monkey.
In virtually every other engineering discipline engineers have a very prominent seat at the table, and the opposite is only true in very corrupt situations.