nevermind, I see what's happening in the UI. Each `jj` change is preserved in the UI and we can see multiple versions of the same change. The stack then is not really a stack of PRs but a stack of changes (where each change has its own history, i.e., the interdiff view). Did I get it mostly right?
Correct. I'm surprised sexual orientation hasn't been regulated by the same stupid logic. Clearly if you're not making babies you're affecting all kinds of interstate commerce.
The advantage they have is that all 4 of their major metropolitan areas are in a straight line across flat land. The enemy of high speed is any deviation from flat and straight. On the Acela, top speed can be maintained for less than 40% of the trip.
This won't fix the AI slop problem. If anything, it will make it worse.
Vibe coders are far more likely to be willing to pay to submit their games, because just like their $200/mo AI subscription, they see it as a necessary expense on their path to get rich quick. People who just want to make things for fun as a hobby are less likely to pay.
After experimenting with various approaches, I arrived at Power Coding (like Power Armor). This requires:
- small codebases (whole thing is injected into context)
- small, fast models (so it's realtime)
- a custom harness (cause everything I tried sucks, takes 10 seconds to load half my program into context instead of just doing it at startup lmao)
The result is interactive, realtime, doesn't break flow (no waiting for "AI compile", small models are very fast now), and most importantly: active, not passive.
I make many small changes. The changes are small, so small models can handle them. The changes are small, so my brain can handle them. I describe what I want, so I am driving. The mental model stays synced continuously.
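To make the harness idea concrete, here's a minimal sketch (hypothetical names, not my actual code; the real trick is just loading everything once at startup and keeping each request tiny):

```python
import pathlib

def build_context(root: str, exts=(".py",)) -> str:
    """Inject the entire (small) codebase into the prompt, loaded once at startup."""
    parts = []
    for path in sorted(pathlib.Path(root).rglob("*")):
        if path.is_file() and path.suffix in exts:
            parts.append(f"### {path.name}\n{path.read_text()}")
    return "\n\n".join(parts)

def edit_prompt(context: str, instruction: str) -> str:
    """One small, focused change per request: small enough for a small model, and for me."""
    return f"{context}\n\nMake exactly this change, and nothing else:\n{instruction}\n"
```

Which small, fast model you point `edit_prompt` at is up to you; the point is that the context is ready before you ask, so the loop stays realtime.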
The USA's westward expansion was indeed facilitated by the timely development of railroads, and so many of the cities were built around the ability to haul freight and service depots along the rail lines, much like ancient cities sprang up alongside rivers and bays because of boat shipping.
However, the United States is also a nation built upon the motor vehicle, and our much-vaunted freeway system here was built deliberately as a national defense measure that could easily move materiel and troops between cities and states, in the event of a domestic invasion or future wars on our own soil. The freeways enjoyed deep investments also due to commercial utility, and again, many cities and habitations sprang up at the nexus of various freeways, as truck-based shipping could service them as well.
I think one of the main obstacles to rail lines in the United States is our car-centrism, and many motorists of any socio-economic class really, really hate trains and public transit of any kind, and any other type of transport that may impinge on their freedom to drive wherever they want on as many highways as possible.
Therefore it is extraordinarily difficult for railways to get good rights-of-way. Amtrak is a redheaded stepchild. Commuter rail may be better respected in places where it was established, like the Eastern Seaboard, but if I asked any voter or motorist here, they would be voting against any sort of rail project whatsoever.
It's not about being costly or not; that's completely irrelevant to the point being made. elm is just some abstract function that maps ℝ² to ℝ. Like every other mathematical function, it is really defined only by the infinite set of correspondences from one value to another. It is NOT exp(x) - ln(y), just as exp is not a series (as you wrongly stated in another comment). exp can be expressed (and/or defined) as a series to a mathematician familiar with the notion of a series, and elm can be expressed as exp(x) - ln(y) to a mathematician familiar with exp and ln. They can also be expressed/defined in multiple other ways.
I am not claiming this is better than 1/(x-y) in any way (I have no idea; maybe it isn't if you look closely enough), but you are simply arguing against the wrong thing. The author didn't claim elm to be computationally efficient (it even feels weird to say that, since computational efficiency is a trait not of a mathematical function but of a computer architecture implementing some program) or anything else, only that (elm, 1) are enough to produce every number and function that a scientific calculator (admittedly, somewhat vaguely defined) can produce.
However, I do want to point out that it's odd 1/(x-y) doesn't appear on the graph in section 3: if it's as powerful as elm, it should have all the same connections as elm, and it's a pity Odrzywołek's paper misses it.
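For what it's worth, a quick numerical sketch (assuming elm(x, y) = exp(x) - ln(y), as discussed above) of how exp, ln, and even subtraction can each be recovered from elm and the constant 1 alone:

```python
import math

def elm(x, y):
    """The two-argument function from the thread: elm(x, y) = exp(x) - ln(y)."""
    return math.exp(x) - math.log(y)

def exp_(x):
    """exp falls out by fixing y = 1, since ln(1) = 0."""
    return elm(x, 1)

def ln_(y):
    """ln falls out by fixing x = 0: elm(0, y) = 1 - ln(y), so ln(y) = 1 - elm(0, y)."""
    return 1 - elm(0, y)

def sub(a, b):
    """Subtraction (for a > 0) by round-tripping through exp/ln:
    elm(ln a, exp b) = exp(ln a) - ln(exp b) = a - b."""
    return elm(ln_(a), exp_(b))
```

None of this says anything about efficiency, of course; it just illustrates the expressive-power claim.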
> You get a link from LinkedIn [or such]. You click on it, the URL loads, and you read the post. When you click the back button, you aren't taken back to wherever you came from. Instead, […]
I've taken to opening anything in a new tab. Closing the tab is my new back button. In an ideal world I shouldn't have to, of course, but we live in a world full of sites implementing dark patterns, so not an ideal one. Opening in a new tab also helps me apply a “do I really care enough to give this reading time?” filter, as my browsers are set not to give new tabs focus - if I've not actually looked at a tab after a little while, it gets closed without my ever reading it at all.
Specifically regarding LinkedIn and their family of dark patterns, I really should log in and update my status after the recent buy-out. I've not been there since updating my profile after the last change of corporate overlords ~9 years ago.
> I don't have a gambling addiction, I just enjoy throwing money away in casinos. I come home thousands of dollars poorer. It's a net return to my finances. Totally healthy.
The JVM has been extremely fast for a long, long time now. Even JavaScript is really fast, and if you really need performance there are others in the same performance class, like C#, Rust, and Go.
Hot take, but: Performance hasn’t been a major factor in choosing C or C++ for almost two decades now.
I'd wager that a "main agent" is really just a bunch of subagents in a sequential trench coat.
In the end, in both cases, it's a back and forth with an LLM, and every request has its own lifecycle. So it's unfortunately at least a networked-systems problem. I think your point works with an infinite context window and one-shotting the whole repo every time... Maybe quantum LLM models will enable that
"Instead of being tried for war crimes, the researchers involved in Unit 731 were given immunity by the U.S. in exchange for their data on human experimentation."
"During the final months of World War II, Japan planned to use plague as a biological weapon against San Diego, California. The plan was scheduled to launch on September 22, 1945, but Japan surrendered five weeks earlier."
GitHub only keeps 14 days of traffic data. After that, it's gone. Because of this, a lot of our questions went unanswered for a long time.
— Was there a spike in clones last month, and where did it come from?
— Which docs pages are actually being read?
— How many people came to the repo over the last 3 months?
We were flying blind. Not the best way to maintain an open source repo.
So we built zingg-stats — a small daily automation that runs three Python scripts every morning and posts everything to Slack:
→ GitHub traffic (views, clones, referrers) saved to CSV before the 14-day window closes
→ Charts showing trends over the last 3 months
→ Google Analytics summary from yesterday
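For anyone curious what the saving step looks like, here's a rough sketch (hypothetical code, not the actual zingg-stats scripts; assumes a GitHub token with repo access):

```python
import json
import urllib.request

def fetch_clones(owner: str, repo: str, token: str) -> list[dict]:
    """GET the clone traffic for the last 14 days (all GitHub keeps)."""
    req = urllib.request.Request(
        f"https://api.github.com/repos/{owner}/{repo}/traffic/clones",
        headers={"Authorization": f"Bearer {token}",
                 "Accept": "application/vnd.github+json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["clones"]

def merge_rows(saved: list[dict], fresh: list[dict]) -> list[dict]:
    """Fold today's 14-day window into the rows saved so far, keyed by date,
    so the history keeps growing past GitHub's retention limit."""
    by_date = {row["timestamp"]: row for row in saved}
    for row in fresh:
        by_date[row["timestamp"]] = row
    return sorted(by_date.values(), key=lambda row: row["timestamp"])
```

Run `merge_rows` daily against the rows already in your CSV and the overlapping two-week windows stitch into an unbroken history.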
We built it with Claude and kept it generic enough that any open source maintainer should be able to adapt it to their own project in an afternoon.
If you run an open source project and have the same blind spots, give it a try.
Autopilots can. Both on airliners and small planes, although only landing on the latter as far as I know. Airbus ATTOL is probably the most interesting of these in that it's visual rather than ILS (note that no commercial airliners are using this).
Adina here, one of superglue's creators. I'm curious to hear folks' opinions on this. Also, to clarify how agents are able to call APIs via superglue: the first step is to set up auth and systems on superglue so it can process and extract documentation and any other context for calling the APIs, which is then passed on to the agents.
I mean the state from the client, like cookies and URL params. You can get access to that in SSR through framework-specific APIs like getServerSideProps in Next, but it's not a great solution.