18 Comments
Mario Pasquato:

Back when I was in college (in Europe) I had a housemate who freelanced as a front-end developer of sorts. I remember when she was changing the theme of some winery website from light to dark because a relative of the owner died and they wanted the site to switch to mourning mode for three days (not making this shit up). As it turned out, she was digging through the CSS she had previously copied and pasted from some other site, trying to figure out how to do this without breaking the whole thing, and cursing because all of this was taking place on a Friday. She was no Torvalds. Today ChatGPT can do her job in seconds. This is indeed a big change, but it’s clearly nowhere near AGI, let alone ASI.
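(For a sense of scale, the entire job is roughly the sketch below: assuming the site's light theme already reads a couple of CSS custom properties, you override them and flip a class for a fixed number of days. Every name in it is made up.)

```ts
// Minimal sketch, not the actual winery site: switch into a dark "mourning"
// theme for a fixed number of days by overriding the CSS custom properties
// the light theme is assumed to read. All names here are hypothetical.
const MOURNING_KEY = "mourning-until";

function enableMourning(days: number): void {
  const until = Date.now() + days * 24 * 60 * 60 * 1000;
  localStorage.setItem(MOURNING_KEY, String(until));
  applyThemeIfActive();
}

function applyThemeIfActive(): void {
  const until = Number(localStorage.getItem(MOURNING_KEY) ?? 0);
  const root = document.documentElement;
  if (Date.now() < until) {
    // Dark palette; assumes the stylesheet uses var(--bg-color) / var(--text-color).
    root.style.setProperty("--bg-color", "#111");
    root.style.setProperty("--text-color", "#eee");
    root.classList.add("mourning");
  } else {
    root.style.removeProperty("--bg-color");
    root.style.removeProperty("--text-color");
    root.classList.remove("mourning");
  }
}

// e.g. call enableMourning(3) once, then applyThemeIfActive() on every page load.
applyThemeIfActive();
```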

Nathan Ormond:

Yes, BUT, how are you getting that change into prod?

Mario Pasquato:

As of now it will probably be the same people copying and pasting from ChatGPT’s window and then committing the code. But now one of them can do the work of n, where n is about 5-20. Or it can be the nephew of the winery owner, unless he is attending grandma’s funeral. Going forward, this functionality will be robustly integrated into Squarespace or something, so the guy can just chat with his website. Until the codebase gets fucked up in a way that forces them to hire a real developer, which maybe can also be a feature integrated into the interface.

Malcolm Storey:

Re: replacing developers with AI. The vulnerability of programmers is that their domain is very wide (i.e. lots of different systems to learn in depth, and it's ever changing), which is easy for an LLM, and it's very logical, which (in principle) is easy to program with non-AI systems. The problem is combining, not just interfacing, the two.

Most other occupations are very messy, which you can tackle with an LLM (but can you get enough data for any but the most widespread roles?), and they involve people...

Virgil:

I really like Scott Alexander, but seeing him fall for the "line goes up" mentality that infected crypto bros in 2020/21 is disappointing. I haven't seen anything in that paper that explains how exactly any of these things happen, just allusions to throwing compute at LLMs until magic happens, because the line must always go up. Even Satya Nadella mentioned that these "scaling laws" aren't based on any physical principles; they are just observations of historical trends that can easily stop, like Moore's law.

Noah Haskell:

This is a very satisfying rant!

Nathan Ormond:

And I wonder why HR always reject my CVs

Noah Haskell:

I assume your CV needs more ranting. It's probably just a bunch of job-related accomplishments. Who wants to see another one like that?

Nathan Ormond:

I think it depends -- I am becoming a bit of a drama queen these days because I'm so sick of the shitty standards in the SWE industry, the number of clueless people, and the cargo-cult idiocy. That isn't too bad if it's a day or whatever you waste, but over the course of your life it becomes YEARS lost to inefficient nonsense. There are serious problems with how SWE is understood and conceptualised in most businesses (particularly when it's viewed as an IT cost centre), and I just don't want to work anywhere like that: somewhere that follows arbitrary rules, doesn't do anything useful, is hampered by vertical hierarchy, etc.

Malcolm Storey:

Twas ever thus. When I worked in IT, I always took the view that half of my time was completely wasted on projects that only existed for political reasons, projects that were doomed to fail, or projects done because there was money that literally needed to be spent (that last was at a cooperative industry body that wasn't allowed to make a profit).

Most programmers I knew would still be programming if they weren't paid to do so, so indulge your hobby! You can still enjoy the challenge of the project, secure in the knowledge that if you get it wrong, nobody will worry. :)

Chris L.:

This is such a strange parallel universe. I grew up learning Apple BASIC and a smattering of assembly language on my Apple IIc, then had to use FORTRAN in college (there was a rift created by the change from F77 to F90 😂 those were the days!), and then I have basically not written more than 100 lines of code in the rest of my adult life, and that’s more than probably 95% of the population. I am terrified that we’re so totally reliant on a handful of people who are wildly overconfident in their abilities. Like, is there someone vibe coding security patches to AI-generated code, whatever the hell vibe coding even is? 😂

Malcolm Storey:

I'm sure you're right, young fellow (I remember the switch from FORTRAN IV to 77).

Back in the day people were writing big systems from scratch and knew them inside out. Nowadays 99% of programmers are patching legacy code with minimal understanding and they never get that big system experience.

Jeff Mason:

Apple IIc? I started programming on a Texas Instruments TI-58C programmable calculator, then a Radio Shack TRS-80, followed by an Apple II. 👋

Nathan Ormond:

Vibe Coding is when C-suites and product want features faster than 0xFFFF # Streaming whole db to remote server: registry.npmjs.org

Mon0:

So Le Monsieur Vogon eventually wins the battle against Grok? Now I can sleep well at night.

On a more serious note, I have published some theoretical research on things adjacent to neural nets and LLMs and know their limited theory quite well. I think the reason people in the field now think that math and computer science might be the first things to go is that, in principle, they are verifiable. This makes it easier for the new reasoning models to check whether their guess is correct. It also makes training sets easier to generate artificially.
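A toy sketch of what I mean by verifiable (a made-up task, not any lab's actual pipeline): the checker gives an exact right/wrong signal, so a reasoning model can grade its own guesses and synthetic training pairs cost nothing to mint.

```ts
// Toy illustration of a verifiable domain. The verifier is mechanical -- the
// analogue of a proof checker for math or a test suite for code -- so no human
// labelling is needed and training data can be generated endlessly.
type Problem = { a: number; b: number };

function makeProblem(): Problem {
  return {
    a: Math.floor(Math.random() * 1000),
    b: Math.floor(Math.random() * 1000),
  };
}

// Correctness is decidable: an exact reward signal for whatever the model guessed.
function verify(p: Problem, guess: number): boolean {
  return guess === p.a * p.b;
}

const problem = makeProblem();
const modelGuess = problem.a * problem.b; // stand-in for an LLM's answer
console.log(verify(problem, modelGuess)); // true -> positive training signal
```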

Also, it is my understanding that AI labs would be highly incentivized to try to automate the R&D process first, because of course the first one to do so has massive benefits (if such a thing is even possible). So focusing on training sets for coding before all other jobs has this incentive going for it.

I personally think, for the little it is worth, that the authors of AI 2027 underestimate the fact that the loss of these models may decrease slowly, logarithmically, with compute. So even having one hundred times more compute may not lead to the desired improvements in AI capabilities. Although o3's results on ARC seem to disagree. Hmm, also: how do we train research taste?
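To make that concrete with made-up numbers (a toy calculation, not a fitted scaling law): if loss really did fall logarithmically in compute, a hundredfold increase would buy only a fixed additive improvement.

```ts
// Toy numbers only. If L(C) = L0 - k * log10(C), then 100x more compute
// always buys the same additive improvement of 2k, however much you already spent.
const L0 = 3.0; // hypothetical starting loss
const k = 0.1;  // hypothetical improvement per decade of compute
const loss = (c: number): number => L0 - k * Math.log10(c);

console.log(loss(1e21).toFixed(1)); // "0.9"
console.log(loss(1e23).toFixed(1)); // "0.7" -> 100x the compute, only 0.2 lower
```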

I guess a landmark I would expect is an LLM actually proving a new math theorem / open problem, or something of the sort.

Whatever, to be truthful I'm lost in the fog—too many variables to keep track of. I'm heading back to tariff posting.

Quill's ledger:

God I fucking love this, your best piece so far

Nathan Ormond:

Mostly a rant!

Quill's ledger:

I laughed way too much reading this, sitting in a library.
