Comprehension debt: A ticking time bomb of LLM-generated code (codemanship.wordpress.com)
mrkeen 11 hours ago [-]
This was a pre-existing problem, even if reliance on LLMs is making it worse.

Naur (https://gwern.net/doc/cs/algorithm/1985-naur.pdf) called it "theory building":

> The death of a program happens when the programmer team possessing its theory is dissolved. A dead program may continue to be used for execution in a computer and to produce useful results. The actual state of death becomes visible when demands for modifications of the program cannot be intelligently answered. Revival of a program is the rebuilding of its theory by a new programmer team.

Lamport calls it "programming ≠ coding", where programming is "what you want to achieve and how" and coding is telling the computer how to do it.

I strongly agree with all of this. Even if your dev team skipped any kind of theory-building or modelling phase, they'd still passively absorb some of the model while typing the code into the computer. I think that it's this last resort of incidental model building that the LLM replaces.

I suspect that there is a strong correlation between programmers who don't think that there needs to be a model/theory, and those who are reporting that LLMs are speeding them up.

bob1029 10 hours ago [-]
> I suspect that there is a strong correlation between programmers who don't think that there needs to be a model/theory, and those who are reporting that LLMs are speeding them up.

I have some anecdotal evidence that suggests that we can accomplish far more value-add on software projects when completely away from the computer and any related technology.

It's amazing how fast the code goes when you know exactly what you want. At that point the LLM can become very useful, because its hallucinations stand out immediately. If you don't know what you want, I don't see how this works.

I really never understood the rationale of staring at the technological equivalent of a blank canvas for hours a day. The LLM might shake you loose and get you going in the right direction, but I find it much more likely to draw me into a wild goose chase.

The last 10/10 difficulty problem I solved probably happened in my kitchen while I was chopping some onions.

notpachet 9 hours ago [-]
> It's amazing how fast the code goes when you know exactly what you want.

To quote Russ Ackoff[1]:

> Improving a system requires knowing what you could do if you could do whatever you wanted to. Because if you don't know what you would do if you could do whatever you wanted to, how on earth are you going to know what you can do under constraints?

[1] https://www.youtube.com/watch?v=OqEeIG8aPPk

jodrellblank 6 hours ago [-]
If you were playing chess and could do whatever you wanted, you could take several goes in a row, take the opponent's pieces off the board and move yours into a winning position. How does that help you play better under the constraints of the rules?
munificent 5 hours ago [-]
This metaphor doesn't really work because the entire point of a game—the thing that makes playing it playful—is that it has no effect on the world outside of the game itself. Thus, ignoring the rules of chess to reach a goal doesn't make sense. There are no chess goals that don't involve the game of chess.

This isn't true in programming or real-world tasks where you are trying to accomplish some external objective.

jacobolus 5 hours ago [-]
If you were playing chess, and you could do whatever you wanted, you might want to, e.g., set up a beautiful mating combination using the minor pieces, set a brilliant trap with forced mate in 10 moves and then trick the opponent into falling into it, keep control of the center all game and make the opponent play in a cramped and crippled style, promote a pawn to a knight for a win, skewer the queen and king, or turn a hopelessly lost position into a draw. The constraint is: you need to take turns and the opponent wants to win / doesn't want to let you win.
Retric 3 hours ago [-]
It's actually a useful technique in chess to see what you would do if you could make multiple moves in a row.

If I could move my rook there, it's a win; is there any way I can make that happen? How about if I sacrifice my knight, etc.?

timeinput 5 hours ago [-]
If you don't know you could take the king and win the game why would you bother with any of that?
hvb2 10 hours ago [-]
> The last 10/10 difficulty problem I solved probably happened in my kitchen while I was chopping some onions.

Let me guess, you had tears in your eyes when you found the solution?

lelanthran 7 hours ago [-]
> The last 10/10 difficulty problem I solved probably happened in my kitchen while I was chopping some onions.

Were you the one who developed TOR?

cantor_S_drug 8 hours ago [-]
Another aspect is that test cases constrain the domain of possible correct programs by a lot (a randomly picked number will never solve a quadratic equation; by imposing many such quadratic equations [test cases] simultaneously, we put a lot of constraints on the solution space). Say I want an LLM to write a regex: by running it against test cases, I can gain confidence in it. This is Simon Willison's thesis. Once LLMs continuously learn what a piece of code "means" in a tight internal REPL loop, they will start to gain better understanding.
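
A minimal sketch of that verification loop in Python, using only the standard library (the test cases and the candidate pattern are illustrative): the LLM proposes a regex, and we only trust it once it agrees with every example.

  import re

  # Hypothetical examples pinning down the intended behavior:
  # (input, should_match) pairs play the role of the "quadratic equations".
  TEST_CASES = [
      ("2024-01-31", True),
      ("1999-12-01", True),
      ("2024-13-01", False),   # month out of range
      ("31-01-2024", False),   # wrong field order
      ("2024/01/31", False),   # wrong separator
  ]

  def passes_all(pattern: str) -> bool:
      """Accept an LLM-proposed regex only if it agrees with every test case."""
      compiled = re.compile(pattern)
      return all(bool(compiled.fullmatch(text)) == expected
                 for text, expected in TEST_CASES)

  # A candidate the LLM might propose for "ISO date":
  candidate = r"(19|20)\d{2}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])"
  print(passes_all(candidate))  # True -> more confidence; False -> re-prompt
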
skydhash 5 hours ago [-]
The thing is that for most code, the meaning is outside the code itself. The code is just a description of a computable solution. But both the problem and the foundational solution are out there, not in the code.

And some code solves additional complexities, not essential ones (like making it POSIX instead of using bashisms). In that case, it's just familiarity with the tools that can help you derive alternative approaches.

tom_m 5 hours ago [-]
100% - but this has always been true. Some people have always dived into the code before understanding it. Now it's probably an even more slippery slope.

It's like you don't know how to ski and you're going down a really steep hill...now with AI, imagine that really steep hill is iced over.

rhetocj23 9 hours ago [-]
"I have some anecdotal evidence that suggests that we can accomplish far more value-add on software projects when completely away from the computer and any related technology.

It's amazing how fast the code goes when you know exactly what you want"

Yeah, it's the same reason why demand for pen and paper still exists. It's the absolute best way to think and get your thoughts out. I can personally attest to this - no digital whiteboard can ever compete with just a pen and paper. My best and original ideas come from a blank paper and a pen.

Solutions can emerge from anywhere. But it's most likely to happen when the mind is focused in a calm state - that's why walking, for instance, is great.

CliffStoll 7 hours ago [-]
Strongly agree: scribbling requirements, process maps, and block diagrams goes a long way to understanding what needs to be done, as well as getting us to think through what'll be the easy parts and pinch-points.

Goes back to Fred Brooks' Mythical Man-Month: Start with understanding the requirements; then design the architecture. Only after that, begin programming.

cozzyd 9 hours ago [-]
The risk there is I chop my fingers too
grigri907 7 hours ago [-]
9x programmer?
harvey9 6 hours ago [-]
Tom Lehrer: Base 8 is just like base 10 really. If you're missing two fingers.
sorokod 2 hours ago [-]
or you could just skip the pain and count the space between the fingers.
dleeftink 10 hours ago [-]
The illusive onion flow state!
anon7725 5 hours ago [-]
elusive
jacobr1 9 hours ago [-]
I think this is part of the reason why I've had a bit more success with AI coding than some of my colleagues. My pre-LLM workflow was to rapidly build a crappy version of something so that I could better understand it, then rework it (or even throw away the prototype) to build the thing now that I know how I want to handle it. I've found that even as plenty of thought leaders talk about this general approach (rapid prototyping, continuous refactoring, etc.), many engineers are resistant and want to think through the approach first and then build it "right." Or, alternatively, they whip something out and don't throw it away, but rather toil on fixes to their crappy first pass.

With AI this loop is much easier. It is cheap to build even 3 parallel implementations of something, and maybe another where you let the system add whatever capability it thinks would be interesting. You can compare them and use that to build a much stronger "theory of the program": the requirements, where the separations of concerns are, how to integrate with the larger system. Then having AI build that, with close review of the output (which takes much less time if you know roughly what should be built), works really well.

HarHarVeryFunny 7 hours ago [-]
> My pre-LLM workflow was to rapidly build a crappy version of something so that I could better understand it, then rework it (or even throw away the prototype)

That only works for certain types of simpler products (mostly one-man projects, things like web apps) - you're not going to be building a throw-away prototype, either by hand or using AI, of something more complex like your company's core operating systems or an industrial control system.

sandoze 7 hours ago [-]
I can't speak to OS development, but in industrial coding there's a lot of experimenting and throwaway. You generally don't write a lot of code for the platform you're building on (PLCs, automation components). It's well tested, and if it doesn't hit industry standards (e.g. timing, braking) you iterate or start over. At least that was my experience.

When it comes to general software development for customers in the everyday world (phones, computers, web), I often write once for proof, iterate as product requirements become clearer/refined, and rewrite if necessary (code smell, initial pattern was inefficient for the final outcome).

On a large project, often I’ll touch something I wrote a year ago and realize I’ve evolved the pattern or learned something new in the language/system and I’ll do a little refactor while I’m in there. Even if it’s just code organization for readability.

RHSeeger 8 hours ago [-]
> to rapidly build a crappy version of something so that I could better understand it, then rework it

I do this, too. And it makes me awful at generating "preliminary LOEs", because I can't tell how long something will take until I get in there and experiment a little.

rgbrgb 4 hours ago [-]
In my experience, the only reliable LOE estimate is from someone who just touched that feature or someone who scaffolded it out or did the scrappy prototype in the process of generating the LOE.
nyrikki 8 hours ago [-]
A formalized form of this is the red-green-refactor pattern common in TDD.
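
For anyone unfamiliar, a minimal sketch of that loop in Python (a hypothetical slugify example, runnable with pytest):

  # Red: write a failing test first; slugify() doesn't exist yet.
  def test_slugify_lowercases_and_dashes():
      assert slugify("Hello World") == "hello-world"

  # Green: the simplest implementation that makes the test pass.
  def slugify(text: str) -> str:
      return text.lower().replace(" ", "-")

  # Refactor: tests stay green while the implementation is cleaned up,
  # e.g. making the separator explicit for the next requirement.
  SEPARATOR = "-"
  def slugify(text: str) -> str:  # replaces the "green" version above
      return text.lower().replace(" ", SEPARATOR)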

Self-created or formalized methods work, but they have to have habits or practices in place that prevent disengagement and complacency.

With LLMs there is the problem of humans and automation bias, which affects almost all human endeavors.

Unfortunately that will become more problematic as tools improve, so make sure to stay engaged and skeptical, which is the only successful strategy I have found with support from fields like human factors research.

NASA and the FAA are good sources for information if you want to develop your own.

citizenpaul 8 hours ago [-]
This is what I came to comment. I'm seeing this more and more on HN of all places. Commenters are essentially describing TDD as how they use AI but don't seem to know that is what they are doing. (Without the tests though.)

Maybe I am more of a Leet coder than I think?

XenophileJKO 7 hours ago [-]
In my opinion TDD is antithetical to this process.

The primary reason is that what you are rapidly refactoring in these early prototypes/revisions is the meta-structure and the contracts.

Before AI, the cost of putting tests on from the beginning, or of TDD, slowed your iteration speed dramatically.

In the early prototypes, what you are figuring out is the actual shape of the problem, what the best division of responsibilities is, and how to fit the pieces together to match the vision for how the code will be required to evolve.

Now with AI, you can let the AI build test harnesses at little velocity cost, but TDD is still not the general approach.

nyrikki 5 hours ago [-]
There are multiple schools to TDD, sounds like you were exposed to the kind that aims for coverage vs domain behavior.

Like any framework, they all have costs, benefits, and places they work and others where they don't.

Unless you count taking the time to figure out your inputs and expected outputs, then for the schools of thought that target writing all the tests up front, even implementation-detail tests, I would agree with you.

If you can focus on writing inputs vs outputs, especially during a spike, I need to take prompt engineering classes from you.

ryoshu 7 hours ago [-]
And you have to pay special attention to the tests written by LLMs. I catch them mocking things they shouldn't, claiming tests pass that don't actually pass, etc.
lxgr 8 hours ago [-]
> With LLMs there is the problem of humans and automation bias, which affects almost all human endeavors.

Yep, and I believe that one will be harder to overcome.

Nudging an LLM into the right direction of debugging is a very different skill from debugging a problem yourself, and the better the LLMs get, the harder it will be to consciously switch between these two modes.

beder 8 hours ago [-]
I agree, and even more so, it's easy to see the (low!) cost of throwing away an implementation. I've had the AI coder produce something that works and technically meets the spec, but I don't like it for some reason and it's not really workable for me to massage it into a better style.

So I look up at the token usage, see that it cost 47 cents, and just `git reset --hard`, and try again with an improved prompt. If I had hand-written that code, it would have been much harder to do.

bluefirebrand 8 hours ago [-]
> My pre-LLM workflow was to rapidly build a crappy version of something so that I could better understand it, then rework it (or even throw away the prototype) to build the thing now that I know how I want to handle it.

In my experience this is a bad workflow. "Build it crappy and fast" is how you wind up with crappy code in production because your manager sees you have something working fast and thinks it is good enough

arethuza 7 hours ago [-]
I have a bad feeling that a prototype I wrote ~15 years ago is still being used by a multinational company... It was pretty crappy because it was supposed to be replaced by something embedded in the shiny new ERP system. Funnily enough the ERP project crashed and burned...
MathMonkeyMan 8 hours ago [-]
The trick is not to show anybody the prototype, especially your manager.
fullstop 8 hours ago [-]
Oh, man, I've been there. It's worse if sales sees it.
withinboredom 8 hours ago [-]
I've been there. It gets even worse if the customer sees it.
laserlight 4 hours ago [-]
I wouldn't be surprised if customers are easier to convince, than sales people, that the prototype is not suitable for deployment.
exe34 8 hours ago [-]
The old "oh I just came up with that exact set of necessary and sufficient specs" in agile meetings.
rossdavidh 8 hours ago [-]
My experience as well; once it is working, other things take priority.

The question is, will the ability of LLMs to whip out boilerplate code cause managers to be more willing to rebuild currently "working" code into something better, now that the problem is better understood than when the first pass was made? I could believe it, but it's not obvious to me that this is so.

bazoom42 8 hours ago [-]
Sounds more like a problem with the manager than with the workflow per se?
jdbernard 8 hours ago [-]
Maybe, but even so workflows like this don't exist in a vacuum. We have to work within the constraints of the organizational systems that exist. There are many practices that I personally adopt in my side projects that would have benefited many of my main jobs over the years, but to actually implement them at scale in my workplaces would require me to spend more time managing/politicking than building software. I did eventually go into management for this reason (among others), but that still didn't solve the workflow problem at my old jobs.
recursive 6 hours ago [-]
Nothing lasts longer than a temporary fix.
nonethewiser 8 hours ago [-]
Thats why you throw away the prototype.
withinboredom 8 hours ago [-]
That's also when you tell your manager: "this is just the happy flow, it isn't production ready". Manager will then ask how long that will take and the answer is that the estimate hasn't changed.
wtetzner 6 hours ago [-]
I'm not sure I understand the problem. Just don't publish the prototype.
wiremine 8 hours ago [-]
> I suspect that there is a strong correlation between programmers who don't think that there needs to be a model/theory, and those who are reporting that LLMs are speeding them up.

I also strongly agree with Lamport, but I'm curious why you don't think AI can help in the "theory building" process, both for the original team and for a team taking over a project? I.e., understanding a code base, the algorithms, etc.? I agree this doesn't replace all the knowledge, but it can bridge a gap.

wholinator2 7 hours ago [-]
I agree, the LLM _vastly_ speeds up the process of "rebuilding the theory" of dead code, even faster than the person who wrote it 3 years ago could. I've had to work on old Fortran codebases before and recently had the pleasure of including AI in my method, and my god, it's so much easier! I can just copy and paste every relevant function into a single prompt, say "explain this to me", and it will not only comment each line with its details but also elucidate the deeper meaning behind the set of actions. It can tell exactly which kind of theoretical simulation the code is performing without any kind of prompting on my part, even when the functions are named things like "a" or "sv3d2". Then I can request derivations and explanations of all the relevant theory to connect to the code, and come away after about one day's worth of work with a pretty good idea of the complete execution of a couple thousand lines of detailed mathematical simulations in a language I'm no expert in. The LLM's contribution to building theory has actually been more useful to me than its contribution to writing code!
bunderbunder 7 hours ago [-]
From what I've seen they're great at identifying trees and bad at mapping the forest.

In other words, they can help you identify what fairly isolated pieces of code are doing. That's helpful, but it's also the single easiest part of understanding legacy code. The real challenges are things like identifying and mapping out any instances of temporal coupling, understanding implicit business rules, and inferring undocumented contracts and invariants. And LLM coding assistants are still pretty shit at those tasks.

manishsharan 5 hours ago [-]
Not always.

You could paste your entire repo into Gemini and it could map your forest and also identify the "trees".

Assuming your codebase is smaller than Gemini's context window. Sometimes it makes sense to upload a package's code into Gemini and have it summarize and identify the key ideas and functions, then repeat this for every package in the repository and combine the results. It sounds tedious, but a rather small Python program does this for me.
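
Roughly the shape of that script (a sketch, assuming the google-generativeai Python client and a conventional src/ package layout; names and prompts are illustrative):

  from pathlib import Path
  import google.generativeai as genai   # assumes: pip install google-generativeai

  genai.configure(api_key="...")                  # your API key
  model = genai.GenerativeModel("gemini-1.5-pro")

  summaries = []
  for package in sorted(Path("src").iterdir()):   # one package at a time
      if not package.is_dir():
          continue
      code = "\n\n".join(f.read_text() for f in sorted(package.rglob("*.py")))
      prompt = ("Summarize this package: key ideas, main functions, and how it "
                "fits into the larger system.\n\n" + code)
      summaries.append(f"## {package.name}\n" + model.generate_content(prompt).text)

  # Combine the per-package summaries into one map of the "forest".
  overview = model.generate_content(
      "Combine these package summaries into an architecture overview:\n\n"
      + "\n\n".join(summaries))
  print(overview.text)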

bunderbunder 5 hours ago [-]
I've tried doing things like that. Results reminded me of that old chestnut, "Answers: $1. Correct answers: $50."

Concrete example, last week a colleague of mine used a tool like this to help with a code & architectural review of a feature whose implementation spanned four repositories with components written in four different programming languages. As I was working my way through the review, I found multiple instances where the information provided by the LLM missed important details, and that really undermined the value of the code review. I went ahead and did it the old fashioned way, and yes it took me a few hours but also I found four defects and failure modes we previously didn't know about.

prmph 7 hours ago [-]
Indeed, I once worked with a developer on a contract team who was only concerned with runtime execution, with no concern whatsoever for architecture or code clarity.

The client loved him, for obvious reasons, but it's hard to wrap my head around such an approach to software construction.

Another time, I almost took on a gig, but when I took one look at the code I was supposed to take over, I bailed. Probably a decade would still not be sufficient for untangling and cleaning up the code.

True vibe coding is the worst thing. It may be suitable for one-off shell scripts or <100-line utilities and such; anything more than that and you are simply asking for trouble.

N70Phone 7 hours ago [-]
> I.e., understanding a code base, the algorithms, etc.?

The big problem is that LLMs do not *understand* the code you tell them to "explain". They just take probabilistic guesses about both function and design.

Even if "that's how humans do it too", this is only the first part of building an understanding of the code. You still need to verify the guess.

There are a few limitations to using LLMs for such first-guessing: in humans, the built-up understanding feeds back into the guessing. As you understand the codebase more, you can intuit function and design better; you start to know the patterns and conventions. The LLM will always guess from zero understanding, relying only on the averaged-out training data.

A related effect is the one bunderbunder points out in their reply: while LLMs are good at identifying algorithms, mere pattern recognition, they are exceptionally bad at world-modelling the surrounding environment the program was written in and the high-level goals it was meant to accomplish, especially for any information obtained outside the code. A human can run a git-blame and ask what team the original author was on; an LLM cannot and will not.

This makes them less useful for the task, especially in any case where you intend to write new code. Sure, it's great that the LLM can give basic explanations about a programming language or framework you don't know, but if you're going to be writing code in it, you'd be better off taking the opportunity to learn it.

netghost 7 hours ago [-]
Perhaps it's the difference between watching a video of someone cooking a meal and cooking it for yourself.
wiremine 7 hours ago [-]
That's a good analogy.

To clarify my question: Based on my experience (I'm a VP for a software department), LLMs can be useful to help a team build a theory. It isn't, in and of itself, enough to build that theory: that requires hands-on practice. But it seems to greatly accelerate the process.

panarky 7 hours ago [-]
People always wring their hands that operating at a new, higher level of abstraction will destroy people's ability to think and reason.

But people still think and reason just fine, now at a higher level that gives them greater power and leverage.

Do you feel like you're missing something when you "cook for yourself" but you didn't plant and harvest the vegetables, raise and butcher the protein, forge the oven, or generate the gas or electricity that heats it?

You also didn’t write the CPU microcode or the compiler that turns your code into machine language.

When you cook or code, you're already operating on top of a very tall stack of abstractions.

JoeAltmaier 7 hours ago [-]
Nah. This is a different beast entirely. This is removing the programmer from the arena, so they'll stop having intuition about how anything works or what it means. Not more abstract; completely divorced from software and what it's capable of.

Sure, manager-types will generally be pleased when they ask AI for some vanilla app. But when it doesn't work, who will show up to make it right? When they need something more complex, will they even know how to ask for it?

It's the savages praying to Vol, the stone idol that decides everything for them, and they've forgotten their ancestors built it and it's just a machine.

mikestorrent 7 hours ago [-]
I agree with your sentiment. The thing is, in the past, the abstractions supporting us were designed for our (human) use, and we had to understand their surface interface in order to be able to use them effectively.

Now, we're driving such things with AI; it follows that we will see better results if we do some of the work climbing down into the supporting abstractions to make their interface more suitable for AI use. To extend your cooking metaphor, it's time to figure out the manufactured food megafactory now; yes, we're still "cooking" in there, but you might not recognize the spatulas.

Things like language servers (LSPs) are a step in this direction: making it possible to interact with the language's parser/linter/etc before compile/runtime. I think we'll eventually see that some programming languages end up being more apropos to efficiently get working, logically organized code out of an AI; whether that is languages with "only one way to do things" and extremely robust and strict typing, or something more like a Lisp with infinite flexibility where you can make your own DSLs etc remains to be seen.

Frameworks will also evolve to be AI-friendly with more tooling akin to an LSP that allows an MCP-style interaction from the agent with the codebase to reason about it. And, ultimately, whatever is used the most and has the most examples for training will probably win...

epgui 5 hours ago [-]
The comment you're replying to is not about abstraction at all. I.e.: the difference between passive listening and active learning is not abstraction.
jquaint 7 hours ago [-]
I agree with this sentiment. Perhaps this is why there is such a senior / junior divide with LLM use. Seniors already build their theories. Juniors don't have that skill.
lioeters 9 hours ago [-]
> theory building

It's insightful how you connected the "comprehension debt" of LLM-generated code with the idea of programming as theory building.

I think this goes deeper than the activity of programming, and applies in general to the process of thinking and understanding.

LLM-generated content - writing and visual art also - is equivalent to the code: it's what people see on the surface as the end result. But unless a person is engaged in the production, to build the theory of what it means and how it works, to go through the details and bring it all into a whole, there is only superficial understanding.

Even when LLMs evolve to become more sophisticated so that it can perform this "theory building" by itself, what use is such artificial understanding without a human being in the loop? Well, it could be very useful and valuable, but eventually people may start losing the skill of understanding when it's more convenient to let the machine do the thinking.

tsunamifury 8 hours ago [-]
What if the LLM can just understand the theory or read the code and derive it?
BobbyTables2 9 hours ago [-]
Fully agree.

I was once on a project where all the original developers suddenly disappeared and it was taken over by a new team. All institutional knowledge had been lost.

We spent a ridiculous amount of time trying to figure out the original design. Introduced quite a few bugs until it was better understood. But also fixed a lot of design issues after much head-bashing.

By the end, it had been mostly rewritten and extended to do things not originally planned.

But the process was painful.

lxgr 8 hours ago [-]
I'd argue that the developers actually sped up by LLMs (i.e. in terms of increasing their output of maintainable artifacts and not just lines of code) are those that have a good theory of the system they're working on.

At least at this point, LLMs are great at the "how", but are often missing context for the "what" and "why" (whether that's because it's often not written down or not as prevalent in their training data).

827a 7 hours ago [-]
I've used the word "coherence" to describe this state; when an individual or a team has adequately grokked the system and its historical context to achieve a level of productivity in maintenance and extension, only then is the system coherent.

Additionally, and IMO critically to this discussion: it's easy for products or features to "die" not only when the engineers associated with them lose coherence on how they are implemented from a technical perspective, but also when the product people associated with them lose coherence on why they exist or who they exist for. The product can die even if one party (e.g. engineers) still maintains coherence while the other party (e.g. product/business) does not. At this point you've hit a state where the system cannot be maintained or worked on because everyone is too afraid of breaking an existing workflow.

LLMs are, like, barely 3% of the way toward solving the hardest problems I and my coworkers deal with day-to-day. But the bigger problem is that I don't yet know which 3% it is. Actually, the biggest problem is maybe that it's a different, dynamic 3% of every new problem.

w10-1 4 hours ago [-]
In my observation, this "coherence" is a matter not only of understanding but of accepting, particularly certain trade-offs. Often this acceptance is because people don't want to upset the person who insisted on them.

Once they're gone or no longer applying pressure, the strain is relieved, and we can shift to a more natural operation, application, or programming model.

For this reason, it helps to set expectations that people are cycled through teams at slow intervals - stable enough to build rapport, expertise, and goodwill, but transient enough to avoid stalls based on shared assumptions.

tom_m 5 hours ago [-]
I love how AI is surfacing problems that have been present all along. People are beginning to spend more time thinking about what's actually important when building a software product.

My hope is that people keep the dialogue going, because you may be right about the feeling of LLMs speeding things up. It could well be because people are not going through the proper processes, including planning and review. That will create mountains of future work: bugs, tech debt, and simply learning. All of which could still benefit from AI tools, of course. AI is a very helpful tool, but it does require responsibility.

danmaz74 8 hours ago [-]
Having worked on quite a few legacy applications in my career, I would say that, as for so many other issues in programming, the most important solution to this issue is good modularization of your code. That allows a new team to understand the application at high level in terms of modules interacting with each other, and when you need to make some changes, you only need to understand the modules involved, and ideally one at a time. So you don't need to form a detailed theory of the whole application all at the same time.

What I'm finding with LLMs is that, if you follow good modularization principles and practices, then LLMs actually make it easier to start working on a codebase you don't know very well yet, because they can help you a lot in navigating, understanding "as much as you need", and do specific changes. But that's not something that LLMs do on their own, at least from my own experience - you still need a human to enforce good, consistent modularization.

loudmax 7 hours ago [-]
LLMs can be used to vibe-code with limited or superficial understanding, but they can also be extremely helpful parsing and explaining code for a programmer who wants to understand what the program is doing. Well-managed forward thinking organizations will take advantage of the latter approach. The overwhelming majority of organizations will slide into the former without realizing it.

In the medium to longer term, we might be in a situation where only the most powerful next-generation AI models are able to make sense of giant vibe-coded balls of spaghetti and mud we're about to saddle ourselves with.

jrochkind1 8 hours ago [-]
Yep. I think the industry discounted the need for domain-specific and code-specific knowledge, the value of having developers stick around. And of having developers spend time on "theory building" and sharing.

You can't just replace your whole coding team and think you can proceed at the same development pace. Even if the code is relatively good and the new developers relatively skilled. Especially if you lack "architecture model" level docs.

But yeah, LLMs push it to like an absurd level. What if all your coders were autistic savant toddlers who get changed out for a new team of toddlers every month?

diob 7 hours ago [-]
Funny enough I find LLMs useful for fixing the "death of a program" issue. I was consulting on a project built offshore where all the knowledge / context was gone, and it basically allowed me to have an AI version of the previous team that I could ask questions of.

I could ask questions about how things were done, have it theorize about why, etc.

Obviously it's not perfect, but that's fine, humans aren't perfect either.

pjc50 5 hours ago [-]
Ah, an undead programmer. Reminds me of "Dixie Flatline" from Neuromancer (1984), a simulation of a famous dead hacker trapped in a cartridge to help the protagonist.
HPsquared 10 hours ago [-]
Kind of like a dead (natural) language.
adw 7 hours ago [-]
LLMs speed you up more if you have an appropriate theory in greenfield tasks (and if you do the work of writing your scaffold yourself).

Brownfield tasks are harder for the LLM at least in part because it’s harder to retroactively explain regular structure in a way the LLM understands and can serialize into eg CLAUDE.md.

BenoitP 10 hours ago [-]
> "theory building"

Strongly agree with your comment. I wonder now if this "theory building" can have a grammar, and be expressed in code; be versioned, etc. Sort of like a 5th-generation language (the 4th-generation being the SQL-likes where you let the execution plan be chosen by the runtime).

The closest I can think of:

* UML

* Functional analysis (ie structured text about various stakeholders)

* Database schemas

* Diagrams

CaptainOfCoit 9 hours ago [-]
Prolog/Datalog with some nice primitives for how to interact with the program in various ways? Would essentially be something like "acceptance tests" but expressed in some logic programming language.
dpritchett 8 hours ago [-]
Cucumber-style BDD has been trying to do this for a long time now, though I never found it to be super comfortable.
zitterbewegung 8 hours ago [-]
You could add a step to your workflow where it explains what it did. I've also asked it to check its own output, and it even fixes itself (sort of counterintuitive), but it makes more sense if you think about someone being asked to fix their own code.
ants_everywhere 9 hours ago [-]
> programming is "what you want to achieve and how"

As in linear programming or dynamic programming.

> I suspect that there is a strong correlation between programmers who don't think that there needs to be a model/theory, and those who are reporting that LLMs are speeding them up.

This is an interesting prediction. I think you'll get a correlation regardless of the underlying cause because most programmers don't think there needs to be a model/theory and most programmers report LLMs speeding them up.

But if you control for that, there are also some reasons you might expect the opposite to be true. It could be that the programmers who feel the least sped up by LLMs are the ones who feel their primary contribution is in writing code rather than having the correct model. And people who view their job as finding the right model are more sped up because the busy work of getting the code in the right order is taken off their plate.

kossTKR 9 hours ago [-]
While interesting, this is not the point of the article.

The point is that LLMs make this problem 1000 times worse, and so it really is a ticking time bomb that's totally new. Most people, most programmers, most day-to-day work will not include some head-in-the-clouds abstract metaprogramming, but now LLMs both force programmers to "do more" and constantly destroy anyone's flow state, memory, and the 99% of the talent and skill that comes from actually writing good code for hours a day.

LLMs are amazing, but they also totally suck, because they essentially steal learning potential and focus while increasing work pressure and complexity. And this really is new, because senior programmers are affected by it too, and you really will feel it at some point after using these systems for a while.

They make you kind of demented, and no, you can't fight it with personal development and forced book reading after getting up at 4 am, just as with scrolling and the decline in everyone's focus, even bibliophiles'.

the_af 9 hours ago [-]
> The death of a program happens when the programmer team possessing its theory is dissolved. A dead program may continue to be used for execution in a computer and to produce useful results. The actual state of death becomes visible when demands for modifications of the program cannot be intelligently answered. Revival of a program is the rebuilding of its theory by a new programmer team.

I really like this definition of "life" and "death" of programs, quite elegant!

I've noticed that I struggle the most when I'm not sure what the program is supposed to do; if I understand this, the details of how it does it become more tractable.

The worry is that LLMs make it easier to just write and modify code without truly "reviving" the program... And even worse, they can create programs that are born dead.

FrustratedMonky 10 hours ago [-]
Debt has always existed, and "reliance on LLMs is making it worse".

Yes, I think the point is that LLMs are making it a lot worse.

And compounding that, in 10 years no senior devs will have been created, so nobody will be around to fix it. Extreme, of course; there will be devs, they'll just be under-water, piled on with trying to debug the LLM stuff.

pixl97 9 hours ago [-]
>they'll just be under-water, piled on with trying to debug the LLM stuff.

So in that theory, the senior devs of those days will still be able to command large salaries if they know their stuff, specifically how to untangle the mess of LLM code.

FrustratedMonky 7 hours ago [-]
Good point. Maybe it will circle around, and a few devs who like to dig through this stuff will be in high demand. And it will be like earlier cycles when, for example, a few people really liked working with bits and Boolean logic and were paid well for it.
rapind 9 hours ago [-]
I could also argue that 20 years ago EJBs made it a lot worse, ORMs made it massively worse, heck Rails made it worse, and don't even get me started on Javascript frameworks, which are the epitome of dead programs and technical debt. I guarantee there were assembly programmers shouting about Visual Basic back in the day. These are all just abstractions, as is AI IMO, and some are worse than others.

If and when technical debt becomes a paralyzing problem, we'll come up with solutions. Probably agents with far better refactoring skills than we currently have (most are kind of bad at refactoring right now). What's crazy to me is how tolerant the consumer has become. We barely even blink when a program crashes. A successful AAA game these days is one that only crashes every couple hours.

I could show you a Java project from 20+ years ago and you'd have no idea wtf is going on, let alone why every object has 6 interfaces. Hey, why write SQL (a declarative, somewhat functional language, which you'd think would be in fashion today!), when you could instead write reams of Hibernate XML?! We've set the bar pretty low for AI slop.

coredog64 8 hours ago [-]
An abstraction is somewhat reversible: I can take an EJB definition and then rummage around in the official J2EE & vendor appserver docs & identify what is supposed to happen. Similarly, for VB there is code that the IDE adds to a file that's marked "Don't touch" (at least for the early versions, ISTR VB6 did some magic).

Even were I to store the prompts & model parameters, I suspect that I wouldn't get an exact duplicate of the code running the LLM again.

rapind 5 hours ago [-]
I see what you mean. The abstractions I mentioned are pretty much just translations / transformations (immutable) on their own. Keep in mind that most of these are also tied to a version (and versioning is not always clear, nor is documentation around that version). The underlying bytecode translation could also change even without a language or framework version change.

Also, as soon as a human is involved in implementation, it becomes less clear. You often won't be able to assume intent correctly. There will also be long lived bugs, pointer references that are off, etc.

I concede that the opacity and inconsistency of LLMs is a big (and necessary) downside though for sure.

withinboredom 8 hours ago [-]
In which universe is an abstraction reversible? You can ask 10 people around you to make you a sandwich. You've abstracted away the work, but I'm willing to bet $10 that each person will not make the same sandwich (assuming an assortment of meats, veggies, and sauces)...
rusk 10 hours ago [-]
> programmers who don't think that there needs to be a model/theory

Ah rationalism vs empiricism again

Kant up in heaven laughing his ass off

bodhi_mind 10 hours ago [-]
At least the LLM will delete the code it replaces instead of commenting out every piece of old functionality.
matt_heimer 8 hours ago [-]
LLMs have made it better for us. The quality of code committed by the junior developers has improved.
amelius 10 hours ago [-]
> The actual state of death becomes visible when demands for modifications of the program cannot be intelligently answered.

Yeah but we can ask an LLM to read the code and write documentation, if that happens.

WJW 9 hours ago [-]
Good documentation also contains the "why" of the code, i.e. why it is the way it is and not one of the other possible ways to write the same code. That is information inherently not present in the code, and there would be no way for an LLM to figure it out after the fact.

Also, no "small" program is ever at risk of dying in the sense that Naur describes it. Worst case, you can re-read the code. The problem lies with the giant enterprise code bases of the 60s and 70s where thousands of people have worked on it over the years. Even if you did have good documentation, it would be hundreds of pages and reading it might be more work than just reading the code.

ljm 8 hours ago [-]
The problem will always remain that it cannot answer 'why', only 'what'. And oftentimes you need things like intent and purpose and not just a lossy translation from programming instructions to prose.

I'd see it like transcribing a piece of music where an LLM, or an uninformed human, would write down "this is a sequence of notes that follow a repetitive pattern across multiple distinct blocks. The first block has the lyrics X, Y ...", but a human would say "this is a pop song about Z, you might listen to it when you're feeling upset."

amelius 8 hours ago [-]
That's a bad example, because an LLM is perfectly capable of saying when something is a song or not.
ljm 23 minutes ago [-]
And how does it do that? By looking at the words and seeing that they rhyme?

An LLM is not capable of subtext or reading between the lines or understanding intention or capability or sarcasm or other linguistic traits that apply a layer of unspoken context to what is actually spoken. Unless it matches a pattern.

It has one set of words, provided by you, and another set of words, provided by its model. You will get the bang average response every single time and mentally fill in the gaps yourself to make it work.

sfn42 9 hours ago [-]
It's insane to me how you people are so confident in the LLMs abilities. Have you not tried them? They fuck things up all the time. Basic things. You can't trust them to do anything right.

But sure let's just have it generate docs, that's gonna work great.

medstrom 6 hours ago [-]
There's a skill to phrasing the prompt so the code comes out more reliable.

Was some thread on here the other day, where someone said they routinely give Claude many paragraphs specifying what the code should and shouldn't do. Take 20 minutes just to type it up.

sfn42 6 hours ago [-]
Yeah sure but that's not what dude above is suggesting. Dude above is suggesting "hello ai please document this entire project for me".

I mean even if that did work you still gotta read the docs to roughly the same degree as you would have had to read the code and you have to read the code to work with it anyway.

Marazan 10 hours ago [-]
I'm currently involved in a project where we are getting the LLM to do exactly that. As someone who _does_ have a working theory of the software (involved in designing and writing it) my current assessment is that the LLM generated docs are pure line noise at the moment and basically have no value in imparting knowledge.

Hopefully we can iterate and get the system producing useful documents automagically, but my worry is that it will not generalise across different systems, and as a result we will have invested a huge amount of effort into creating "AI" generated docs for our system that could have been better spent just having humans write the docs.

hiatus 10 hours ago [-]
My experience has been mixed with tools like deepwiki, but that's precisely the problem. I tried it with libraries I was familiar with and it was subtly wrong about some things.
Marazan 2 hours ago [-]
We are not at the subtly wrong stage yet, currently we are at the totally empty words devoid of real meaning stage.
meindnoch 7 hours ago [-]
Magical thinking.
amelius 4 hours ago [-]
That's what people would say 3 years ago about today's state of AI.
ModernMech 8 hours ago [-]
It would be nice if LLMs could do that without being wrong about what the code does and doesn't do.
mixedbit 10 hours ago [-]
My experience is that LLMs too often find solutions that work but are way more complex than necessary. It is easiest to recognize and remove such complexity when the code is originally created, because at that time the author has the best understanding of the problem being solved, but this requires extra time and effort. Once the overly complex code is committed, it is much harder to recognize that the complexity is not needed. Readers/maintainers of code usually assume that the existing code solves a real-world problem; they don't have enough context to recognize that a much simpler solution could work just as well.
jf22 9 hours ago [-]
It's easy to avoid overly complex solutions with LLMs.

First, your prompts should be direct enough that the LLM doesn't wander around producing complexity for no reason.

Second, you should add rules/learning/context to always solve problems in the simplest way possible.

Lastly, after generation, you can prompt the LLM to reduce the complexity of the solution.

justsocrateasin 9 hours ago [-]
Okay how about this situation that one of my junior devs hit recently:

Coding in an obj oriented language in an enormous code base (big tech). Junior dev is making a new class and they start it off with LLM generation. LLM adds in three separate abstract classes to the inheritance structure, for a total of seven inherited classes. Each of these inherited classes ultimately comes with several required classes that are trivial to add but end up requiring another hundred lines of code, mostly boilerplate.

Tell me how you, without knowing the code base, get the LLM to not add these classes? Our language model is already trained on our code base, and it just so happens that these are the most common classes a new class tends to inherit. Junior dev doesn't know that the classes should only be used in specific instances.

Sure, you could go line by line and say "what does this inherited class do, do I need it?" and actually, the dev did that. It cut down the inherited classes from three to two, but missed two of them because it didn't understand on a product side why they weren't needed.

Fast forward a year, these abstract classes are still inherited, no one knows why or how because there's no comprehension but we want to refactor the model.
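
A hypothetical sketch of the shape of the problem (illustrative names only, not the actual codebase): the generated class drags in abstract bases the feature doesn't need, each demanding boilerplate overrides, when a direct implementation would have done.

  from abc import ABC, abstractmethod

  # Common abstract bases in the (hypothetical) codebase.
  class Serializable(ABC):
      @abstractmethod
      def serialize(self) -> dict: ...

  class Cacheable(ABC):
      @abstractmethod
      def cache_key(self) -> str: ...

  # What the LLM tends to generate, because most classes in the repo inherit these:
  class GeneratedReportExporter(Serializable, Cacheable):
      def serialize(self) -> dict:   # boilerplate, never used by this feature
          return {}
      def cache_key(self) -> str:    # boilerplate, never used by this feature
          return "report-exporter"
      def export(self, report: str) -> str:
          return report.upper()      # stand-in for the real work

  # What the feature actually needed:
  class ReportExporter:
      def export(self, report: str) -> str:
          return report.upper()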

jofla_net 8 hours ago [-]
True, true. I remember another example, with Linus Torvalds, who at a conference used a trivial example of simplifying functions to illustrate why he's good at what he does, or what makes a good lead developer in general. It went something along the lines of:

"Well we have this starting function which clearly can solve the task at hand. Its something 99 developers would be happy with, but I can't help but see that if we just reformulate it into a do-while instead we now can omit the checks here and here, almost cutting it in half."

Now obviously it doesn't suffice as a real-world example, but, when scaled up, it's a great view of the waste that can accumulate at the macro level. I would say the ability to do this is tied to a survival instinct, one which will undoubtedly be touted as something that'll be put in the "next iteration" of the model. It's not strictly something I think can be trained for, as in pattern matching, but it's clearly not achievable yet, as in your example above.

acuozzo 7 hours ago [-]
> Tell me how you, without knowing the code base, get the LLM to not add these classes?

Stop talking to it like a chatbot.

Draft, in your editor, the best contract-of-work you can as if you were writing one on behalf of NASA to ensure the lowest bidder makes the minimum viable product without cutting corners.

---

  Goal: Do X.

  Sub-goal 1: Do Y.

  Sub-goal 2: Do Z.

  Requirements:

    1. Solve the problem at hand in a direct manner with a concrete implementation instead of an architectural one.

    2. Do not emit abstract classes.

    3. Stop work and explain if the aforementioned requirements cannot be met.
---

For the record: Yes, I'm serious. Outsourcing work is neither easy nor fun.

metalliqaz 6 hours ago [-]
Every time I see something like this, I wonder what kind of programmers actually do this. For the kinds of code that I write (specific to my domain and generates real value), describing "X", "Y", and "Z" is a very non-trivial task.

If doing those is easy, then I would assume that the software isn't that novel in the first place. Maybe get something COTS

I've been coding for 25 years. It is easier for me to describe what I need in code than it is to do so in English. May as well just write it.

wtetzner 6 hours ago [-]
I mean, unless you just don't know how to program, I struggle to see what value the LLM is providing. By the time you've broken it down enough for the LLM, you might as well just write the code yourself.
baq 3 hours ago [-]
Yeah, but LLM is simply faster, especially in this case where you know exactly what you need, it’s just a lot of typing.
jf22 7 hours ago [-]
What would you tell a junior dev that did this?

You tell them not to create extra abstract classes and put that in your onboarding docs.

You literally do the same thing with llms. Instead of onboarding code standards docs you make rules files or whatever the llm needs.

stillworks 7 hours ago [-]
Curious about the mechanics here — when you say the model was ‘trained on our code base’, was that an actual fine-tune of the weights (e.g. LoRA/adapter or full SFT), or more of a retrieval/indexing setup where the model sees code snippets at inference? Always interested in how teams distinguish between the two.
wffurr 9 hours ago [-]
[flagged]
jf22 9 hours ago [-]
Ha, I don't know how to take that? I assure you I wrote it myself.
ithkuil 8 hours ago [-]
It's sad how humans are the ones pressured to pass the Turing test
timeon 8 hours ago [-]
We are formed by environment we live in.
trjordan 10 hours ago [-]
LLMs absolutely produce reams of hard-to-debug code. It's a real problem.

But "Teams that care about quality will take the time to review and understand LLM-generated code" is already failing. Sounds nice to say, but you can't review code being generated faster than you can read it. You either become a bottleneck (defeats the point) or you rubber-stamp it (creates the debt). Pick your poison.

Everyone's trying to bolt review processes onto this. That's the wrong layer. That's how you'd coach a junior dev, who learns. AI doesn't learn. You'll be arguing about the same 7 issues forever.

These things are context-hungry but most people give them nothing. "Write a function that fixes my problem" doesn't work, surprise surprise.

We need different primitives. Not "read everything the LLM wrote very carefully", but ways to feed it the why, the motivation, the discussion and prior art. Otherwise yeah, we're building a mountain of code nobody understands.

Herring 9 hours ago [-]
> You … become a bottleneck (defeats the point)

It's better if the bottleneck is just reviewing, instead of both coding and reviewing, right?

We've developed plenty of tools for this (linting, fuzzing, testing, etc). I think what's going on is people who are bad at architecting entire projects and quickly reading/analyzing code are having to get much better at that and they're complaining. I personally enjoy that kind of work. They'll adapt, it's not that hard.

trjordan 9 hours ago [-]
There's plenty of changes that don't require deep review, though. If you've written a script that's, say, a couple of fancy find/replaces, you probably don't need to review every usage. Check 10 of 500, make sure it passes lint/tests/typecheck, and it's likely fine.

The problem is that LLM-driven changes require this adversarial review on every line, because you don't know the intent. Human changes have a coherence to them that speeds up review.

(And if your company culture is line-by-line review of every PR, regardless of complexity ... congratulations, I think? But that's wildly out of the norm.)

baq 3 hours ago [-]
A proper line by line review tops out at 400-500 lines per hour and the reviewer should be spent and take a 30 minute break. It’s a great process if you’re building a spaceship I guess.
wtetzner 6 hours ago [-]
> It's better if the bottleneck is just reviewing, instead of both coding and reviewing, right?

Not really. There's something very "generic" about LLM-generated code that makes you just want to gloss over it, no matter how hard you try not to.

mattlondon 8 hours ago [-]
We use the various instruction .md files for the agents and update them with common issues and pitfalls to avoid, as well as pointers to the coding standards doc.

Gemini and Claude at least seem to work well with it, but sometimes still make mistakes (e.g. not using c++ auto is a recurrent thing, even though the context markdown file clearly states not to). I think as the models improve and get better at instruction handling it will get better.

Not saying this is "the solution" but it gets some of the way.

I think we need to move away from "vibe coding", to more caring about the general structure and interaction of units of code ourselves, and leave the AI to just handle filling in the raw syntax and typing the characters for us. This is still a HUGE productivity uplift, but as an engineer you are still calling the shots on a function by function, unit by unit level of detail. Feels like a happy medium.

trjordan 7 hours ago [-]
100% agree. If you care about API design, data flow, and data storage schemas, you're already halfway there.

I think there's more juice to squeeze there. A lot of what we're going to learn is how to pick the right altitude of engagement with AI, I think.

shinecantbeseen 7 hours ago [-]
I've had some (anecdotal) success reframing how I think about my prompts and the context I give the LLM. Once I started thinking about it as reducing the probability space of output through priming via context+prompting I feel like my intuition for it has built up. It also becomes a good way to inject the "theory" of the program in a re-usable way.

It still takes a lot of thought and effort up front to put that together and I'm not quite sure where the breakover line between easier to do-it-myself and hand-off-to-llm is.

solatic 6 hours ago [-]
> We need different primitives

The correct primitives are the tests. Ensure your model is writing tests as you go, and make sure you review the tests, which should be pretty readable. Don't merge until both old and new tests pass. Invest in your test infrastructure so that your test suite doesn't get too slow, as it will be in the hot path of your model checking future work.

Legacy code is that which lacks tests. Still true in the LLM age.
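
To make "readable" concrete, this is roughly the level I mean. A minimal sketch where the module (pricing) and its behavior are invented, not from any real codebase:

    import pytest
    from pricing import parse_discount_code  # hypothetical module under test

    def test_known_code_returns_its_percentage():
        assert parse_discount_code("SAVE15") == 15

    def test_unknown_code_is_rejected():
        with pytest.raises(ValueError):
            parse_discount_code("NOT-A-CODE")

    def test_codes_are_case_insensitive():
        assert parse_discount_code("save15") == parse_discount_code("SAVE15")

Tests like these are what the reviewer should spend their time on, not the implementation behind them.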

raincole 7 hours ago [-]
> You either become a bottleneck (defeats the point)

How...?

When I found code snippets from StackOverflow, I read them before pasting them into my IDE. I'm the bottleneck. Therefore there is no point in using StackOverflow...?

ModernMech 8 hours ago [-]
Yes, "just take the time to review and understand LLM-generated code" is the new "just don't write bad code and you won't have any bugs". As an industry, we all know from years of writing bugs despite not wanting to that this task is impossible at scale. Just reviewing all the AI code to make sure it is good code likewise does not scale in the same way. Will not work, and it will take 5-10 years for the industry to figure it out.
alexpotato 8 hours ago [-]
Was listening to the Dwarkesh Patel podcast recently and the guest (Agustin Lebron) [0] mentioned the book "A Deepness In The Sky" by Vernor Vinge [1].

I started reading it and a key plot point is that there is a computer system that is thousands of years old. One of the main characters has been in "cold sleep" for so long that he's the only one who knows some of the hidden backdoors. That legacy knowledge is then used to great effect.

Highly recommend it for a great fictional use of institutional knowledge on a legacy codebase (and a great story overall).

0 - https://www.youtube.com/watch?v=3BBNG0TlVwM

1 - https://amzn.to/42Fki8n

Taylor_OD 7 hours ago [-]
RIP Vernor Vinge. Somehow, his ideas seem more and more relevant.
alexpotato 7 hours ago [-]
Especially since he coined the term "technological singularity"

https://en.wikipedia.org/wiki/Technological_singularity

mock-possum 7 hours ago [-]
His description of learning to be a programmer in that far future era was fun too, iirc there was just so much ‘legacy code’, like practically infinite libraries and packages to perform practically any function - that ‘coding’ was mostly a matter of finding the right pieces and wiring them together. Knowing the nuances of these existing pieces and the subtlety of their interpretation was the skill.
alexpotato 6 hours ago [-]
100%

Another great example:

In Fire Upon the Deep, due to the delay in communications between star systems, everyone uses a descendant of Usenet.

octoberfranklin 3 hours ago [-]
> everyone uses a descendant of Usenet

The future I dream of.

blackhaj7 7 hours ago [-]
Sounds great - thanks for the recommendation.

Looks like it is the second in a trilogy. Can you just dive in or did you read the first book before?

duskwuff 5 hours ago [-]
A Fire upon the Deep and A Deepness in the Sky are loosely connected; you can read them in either order. Both novels reveal some details which explain bits of the other.

However, I would recommend skipping Children of the Sky. It's not as good, and was clearly intended as the first installment of a series which Vinge was unable to complete. :(

alexpotato 7 hours ago [-]
I read Fire Upon the Deep first and liked both books.

General recommendation is to read them in order (Fire first, Deepness second) but I don't really think it matters.

blackhaj7 6 hours ago [-]
Awesome - thanks!
octoberfranklin 3 hours ago [-]
You can start with the second, but the first book is better at grabbing the attention of a new reader with wild ideas (broadband audio hive-minds, variable speed-of-light). If you make it through the first chapter you won't be able to put it down.

The second book is just as good, but doesn't try as hard to get you addicted early on. The assumption is that you already know how good Vinge's work is.

I recommend starting with Fire Upon the Deep.

low_tech_punk 8 hours ago [-]
Most programmers don't understand low-level assembly or machine code. The high-level language becomes the layer where human comprehension and collaboration happen.

LLM is pushing that layer towards natural language and spec-driven development. The only *big* difference is that high level programming languages are still deterministic but natural language is not.

I'm guessing we've reached an irreducible point where the amount of information needed to specify the behavior of a program is nearly optimally represented in programming languages after decades of evolution. More abstraction, into the natural-language realm, would make it lossy. And less abstraction, down to low-level code, would make it verbose.

adamddev1 8 hours ago [-]
The difference is not just a jump to a higher abstraction with natural language. It's something fundamentally different.

The previous tools (assemblers, compilers, frameworks) were built on hard-coded logic that can be checked and even mathematically verified. So you could trust what you're standing on. But with LLMs we jump off the safely-built tower into a world of uncertainty, guesses, and hallucinations.

mym1990 7 hours ago [-]
If LLMs still produce code that is eventually compiled down to a very low level...that would mean it can be checked and verified, the process just has additional steps.

JavaScript has a ton of behavior that is very uncertain at times and I'm sure many JS developers would agree that trusting what you're standing on is at times difficult. There is also a large percentage of developers that don't mathematically verify their code, so the verification is kind of moot in those cases, hence bugs.

The current world of LLM code generation lacks the verification you are looking for, however I am guessing that these tools will soon emerge in the market. For now, building as incrementally as possible and having good tests seems to be a decent path forward.

cobbal 5 hours ago [-]
There are 4 important components to describing a compiler. The source language, the target language, and the meaning (semantics in compiler-speak) of both those languages.

We call a C->asm compiler "correct" if the meaning of every valid C program turns into an assembly program with equivalent meaning.

The reason LLMs don't work like other compilers is not that they're non-deterministic, it's that the source language is ambiguous.

LLMs can never be "correct" compilers, because there's no definite meaning assigned to English. Even if English had precise meaning, LLMs will never be able to accurately turn an arbitrary English description into a C program.

Imagine how painful development would be if compilers produced incorrect assembly for 1% of all inputs.
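
Stated a bit more formally (my own notation, not anything standard), writing semantic brackets for "the meaning of" in each language:

    \forall p \in \mathrm{ValidC}:\quad
      \llbracket \mathit{compile}(p) \rrbracket_{\mathrm{asm}}
      \;=\; \llbracket p \rrbracket_{\mathrm{C}}

With English as the source language there is no well-defined meaning on the right-hand side at all, which is the point.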

prmph 7 hours ago [-]
> If LLMs still produce code that is eventually compiled down to a very low level...that would mean it can be checked and verified

I don't think you have thought about this deeply enough. Who or what would do the checking, and according to what specifications?

austin-cheney 8 hours ago [-]
> Most programmers don't understand the low level assembly or machine code.

Most programmers that write JavaScript for a living don't really understand how to scale applications in JavaScript, which includes data structures in JavaScript. There is a very real dependence on layers of abstractions to enable features that can scale. They don't understand the primary API to the browser, the DOM, at all and many don't understand the Node API outside the browser.

For an outside observer it really begs the Office Space question: What would you say you do here? It's weird trying to explain it to people completely outside software. For the rest of us in software, we are so used to this that we take the insanity for granted as an inescapable reality.

Ironically, at least in terms of your comment, when you confront JavaScript developers about this lack of fundamental knowledge, comparisons to assembly frequently come up. As though writing JavaScript directly were somehow equivalent to writing machine code, when for many people in that line of work both are equally distant realities.

The introduction of LLMs makes complete sense. When nobody knows how any of this code works then there isn't a harm to letting a machine write it for you, because there isn't a difference in the underlying awareness.

rmunn 7 hours ago [-]
> Most programmers that write JavaScript for a living don't really understand how to scale applications in JavaScript, which includes data structures in JavaScript. There is a very real dependence on layers of abstractions to enable features that can scale.

Although I'm sure you are correct, I would also want to mention that most programmers that write JavaScript for a living aren't working for Meta or Alphabet or other companies that need to scale to billions, or even millions, of users. Most people writing JavaScript code are, realistically, going to have fewer than ten thousand users for their apps. Either because those apps are for internal use at their company (such as my current project, where at most the app is going to be used by 200-250 people, so although I do understand data structures I'm allowing myself to do O(N^2) business logic if it simplifies the code, because at most I need to handle 5-6 requests per minute), or else because their apps are never going to take off and get the millions of hits that they're hoping for.

If you don't need to scale, optimizing for programmer convenience is actually a good bet early on, as it tends to reduce the number of bugs. Scaling can be done later. Now, I don't mean that you should never even consider scaling: design your architecture so that it doesn't completely prevent you from scaling later on, for example. But thinking about scale should be done second. Fix bugs first, scale once you know you need to. Because a lot of the time, You Ain't Gonna Need It.
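
To make that trade-off concrete, here's a toy sketch (invented data shapes, nothing from a real project): the quadratic version I allow myself today, next to the indexed version you'd reach for at real scale.

    # Quadratic but obvious: fine for a few hundred users.
    def orders_with_user_names_simple(orders, users):
        result = []
        for order in orders:
            for user in users:
                if user["id"] == order["user_id"]:
                    result.append((order["id"], user["name"]))
        return result

    # Linear with a lookup table: what you'd write if N were actually large.
    def orders_with_user_names_indexed(orders, users):
        names = {user["id"]: user["name"] for user in users}
        return [(order["id"], names[order["user_id"]]) for order in orders]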

the_duke 8 hours ago [-]
I feel like natural language specs can play a role, but there should be an intermediate description layer with strict semantics.

Case in point: I'm seeing much more success in LLM driven coding with Rust, because the strong type system prevents many invalid states that can occur in more loosely or untyped languages.

It takes longer, and often the LLM has to iterate through `cargo check` cycles to get to a state that compiles, but once it does the changes are very often correct.

The Rust community has the saying "if it compiles, it probably works". You can still have plenty of logic bugs of course, but the domain of possible mistakes is smaller.

What would be ideal is a very strict (logical) definition of application semantics that LLMs have to implement, and that ideally can be checked against the implementation. As in: have a very strict programming language with dependent types, littered with pre/post conditions, etc.

LLMs can still help to transform natural language descriptions into a formal specification, but that specification should be what drives the implementation.

redsymbol 8 hours ago [-]
There is another big difference: natural languages have ambiguity baked in. If a programming language has any ambiguity in how it can be parsed, that is rightly considered a major bug. But it's almost a feature of natural languages, allowing poetry, innuendo, and other nuanced forms of communication.
low_tech_punk 8 hours ago [-]
I had a similar thought, feature not bug.

The nature of programming might have to shift to embrace the material property of LLM. It could become a more interpretative, social, and discovery-based activity. Maybe that's what "vibe coding" would eventually become.

bluefirebrand 5 hours ago [-]
> The nature of programming might have to shift to embrace the material property of LLM. It could become a more interpretative, social, and discovery-based activity. Maybe that's what "vibe coding" would eventually become

This sounds like an unmaintainable, tech debt nightmare outcome to me

archy_ 6 hours ago [-]
C has a lot of ambiguity in how it behaves ("undefined behavior"), but people usually view that as a benefit because it allows compilers more freedom to dictate an implementation.
roncesvalles 2 hours ago [-]
It's not the same. There is an explosion in expressiveness/ambiguity in the step from high-level programming languages to natural languages. This "explosion" doesn't exist in the steps between machine code and assembly, or assembly and a high-level programming language.

It is, for example, possible to formally verify or do 100% exhaustive testing as you go lower down the stack. I can't imagine this would be possible between NLs and PLs.

lxgr 7 hours ago [-]
> The only big difference is that high level programming languages are still deterministic but natural language is not.

Arguably, determinism isn't everything in programming: It's very possible to have perfectly deterministic, yet highly surprising (in terms of actual vs. implied semantics to a human reader) code.

In other words, the axis "high/low level of abstraction" is orthogonal to the "deterministic/probabilistic" one.

raincole 7 hours ago [-]
Yes, but determinism is still very important in this case. It means you only need to memorize the surprising behavior once (like literally every single senior programmer has memorized their programming language's quirks, even if they don't want to).

Without determinism, learning becomes less rewarding.

tossandthrow 7 hours ago [-]
A program with ambiguities will not work; a spec with ambiguities, on the other hand, is incredibly common.

Specs are not more abstract but more ambiguous, which is not the same thing.

drdrek 8 hours ago [-]
Somehow many very smart AI entrepreneurs do not understand the concept of limits to lossless data compression. If an idea cannot be reduced further without losing information, no amount of AI is going to be able to compress it.

This is why you see so many failed startups around Slack/email/Jira efficiency. Half the time you do not know whether you missed critical information, so you need to go to the source, negating the gains from the information that was successfully summarized.

dorkrawk 7 hours ago [-]
Downloading music off the internet is just the next logical step after taping songs off the radio. Cassette tapes didn't really affect the music industry, so I wouldn't worry about this whole Napster thing.
wkirby 9 hours ago [-]
I see this as the next great wave of work for me and my team. We sustained our business for a good 5–8 years on rescuing legacy code from offshore teams as small-to-medium sized companies re-shored their contract devs. We're currently in a demand lull as these same companies have started relying heavily on LLMs to "write" "code" --- but as long as we survive the next 18 months, I see a large opportunity as these businesses start to feel the weight of the tech debt they accrued by trusting Claude when it says "your code is now production ready."
donatj 9 hours ago [-]
A friend was recently telling me about an LLM'd PR he was reviewing, submitted by a largely non-technical manager. From the outside the feature appeared to work entirely, but when he actually investigated the thousands of lines of generated code, it turned out to be hacking their response cache system to appear to work without actually updating anything on the backend.

It took a ton of effort on his part to convince his manager that this wasn't ready to be merged.

I wonder how much vibe coded software is out there in the wild that just appears to work?

rAum 7 hours ago [-]
lol, in cases like that you should absolutely merge it and go along with it: just collect evidence first so you have enough deniability, then enjoy the show. You can tell a child over and over not to do the thing, or you can accept that it will very quickly learn for life that touching a hot oven is not a smart thing to do. With so much AI-hype-induced brainrot, for certain individuals the only antidote is to make them feel the direct consequences of their false beliefs. Without a feedback loop there is no learning occurring at all.

The more dangerous thing is that such idiot managers can judge you through the lens of shipping LLM garbage whose consequences they have never had to live with in reality, a fantasy sustained by their lack of technical knowledge. Of course it leads directly to firing people and piling more tasks and ballooning expectations onto the leftover team, who are trapped into burning out and being replaced like trash, because that makes total sense in their world view and "evidence".

ModernMech 7 hours ago [-]
That's only an option when it's not you who will have to clean up the mess.
ebiester 9 hours ago [-]
Where are these non-technical engineering managers and how did they stay in the business?

I haven't seen a truly non-technical manager in over 15 years.

OutOfHere 8 hours ago [-]
I would report the manager to the CTO or CEO or business owners/investors.
meander_water 10 hours ago [-]
I've done my share of vibe coding, and I completely agree with OP.

You just don't build up the necessary mental model of what the code does when vibing, and so although you saved time generating the code, you lose all that anyway when you hit a tricky bug and have to spend time building up the mental model to figure out what's wrong.

And saying "oh just do all the planning up front" just doesn't work in the real world where requirements change every minute.

And if you ever see anyone using "accepted lines" as a metric for developer productivity/hours saved, take it with a grain of salt.

CaptainOfCoit 9 hours ago [-]
> I've done my share of vibe coding

Why? It was meant almost in jest, as a joke; no one seriously believes you don't need to review code. You end up in spaghetti land so quickly that I can't believe anyone who tried "vibe coding" for more than a couple of hours didn't quickly give up on something so obviously infeasible.

Now, reviewing whatever the LLM gives you back, carefully massaging it into the right shape, then moving on definitely helps my programming a lot, but careful review is needed to confirm the LLM had the right context so the result is actually correct. But then we're in "pair programming" territory rather than blindly accepting whatever the LLM hands you, AKA "vibe coding".

meander_water 9 hours ago [-]
Vibe coding has its place. I've mainly used it to create personalised ui's for tasks very specific to me. I don't write tests, I may throw it away next week but it's served its purpose at least once. Is this grossly inefficient? Probably.
myflash13 11 hours ago [-]
This is not just for LLM code. This is for any code that is written by anyone except yourself. A new engineer at Google, for example, cannot hit the ground running and make significant changes to the Google algorithm without months of "comprehension debt" to pay off.

However, code that is well-designed by humans tends to be easier to understand than LLM spaghetti.

carlmr 11 hours ago [-]
>However, code that is well-designed by humans tends to be easier to understand than LLM spaghetti.

Additionally you may have institutional knowledge accessible. I can ask a human and they can explain what they did. I can ask an LLM, too and they will give me a plausible-sounding explanation of what they did.

ToValueFunfetti 10 hours ago [-]
I can't speak for others, but if you ask me about code I wrote >6 months ago, you'll also be stuck with a plausible-sounding explanation. I'll have a better answer than the LLM, but it will be because I am better at generating plausible-sounding explanations for my behavior, not because I can remember my thought processes for months.
ctkhn 10 hours ago [-]
There might also be a high level design page about the feature, or jira tickets you can find through git commit messages, or an architectural decision record that this new engineer could look over even if you forgot. The LLM doesn't have that
CaptainOfCoit 9 hours ago [-]
> The LLM doesn't have that

The weights won't have that by default, true, that's not how they were built.

But if you're a developer and can program things, there is nothing stopping you from letting LLMs have access to those details, if you feel like that's missing.

I guess that's why they call LLMs "programmable weights": you can definitely add that kind of information to the context so they can use it when needed.

HPsquared 10 hours ago [-]
Your sphere of plausibility is smaller than that of an LLM though, at least. You'll have some context and experience to go on.
halfcat 10 hours ago [-]
You also might say ”I don’t remember”, which ranks below remembering, but above making something up.
romaniv 6 hours ago [-]
No, this is not a pre-existing problem.

In the past the problem was about transferring a mental model from one developer to the other. This applied even when people copy-pasted poorly understood chunks of example code from StackOverflow. There was specific intent and some sort of idea of why this particular chunk of code should work.

With LLM-generated software there can be no underlying mental model of the code at all. None. There is nothing to transfer or infer.

captainkrtek 6 hours ago [-]
It’s even worse because, with the solution an LLM produces, it’s not obvious whether it was deliberately chosen and favored over a different approach for any particular reason, or whether it was just whatever happened to come out and “works”.

I’ve had to give feedback to some junior devs who used quite a bit of LLM created code in a PR, but didn’t stop to question if we really wanted that code to be “ours” versus using a library. It was apparent they didn’t consider alternatives and just went with what it made.

VikingCoder 9 hours ago [-]
Kernighan's Law - Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.

Modern Addendum: And if you have an LLM generate your code, you'll need one twice as smart to debug it.

projektfu 8 hours ago [-]
Not entirely my experience, but I do have to be the driver. I mostly use Claude Code, and it will sometimes make a dumb mistake. I can usually ask it to fix the problem and it will. Every now and again, I have to tell it to stop barking up the wrong tree, and tell it where the problem lies in the code it wrote.

In other words, debugging can be at the same "intelligence" level, but since an LLM doesn't really know what it is doing, it can make errors it won't comprehend on its own. The experience is a lot like working with a junior programmer, who may write a bunch of code but cannot figure out what they got wrong.

OutOfHere 8 hours ago [-]
> you'll need one twice as smart to debug it

Just maybe, it's the difference between the "medium" and "high" suffixed thinking modes of an LLM.

Fwiw, for complicated functions that must exist, I have the LLM write a code comment explaining the intent and the approach.
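
A sketch of the kind of comment I mean; the function and every detail in it are invented:

    def reconcile_ledger(entries, statements):
        # Intent: flag every entry or statement line that can't be explained;
        # never silently drop a mismatch.
        # Approach: exact matching on (amount, date) first, then fuzzy matching
        # on description within a 3-day window; whatever is left unmatched on
        # either side goes into the exception report.
        ...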

BoxFour 10 hours ago [-]
I'm not of the "LLMs will replace all software developers within a year" mindset, but this critique feels a bit overstated.

The challenge of navigating rapidly changing or poorly documented code isn’t new: It’s been a constant at every company I’ve worked with. At larger organizations the sheer volume of code, often written by adjacent teams, will outpace your ability to fully understand it. Smaller companies tend to iterate so quickly (and experience so much turnover) that code written two weeks ago might already be unrecognizable, if the original author is even still around after those two weeks!

The old adage still applies: the ability to read code is more crucial than the ability to write it. LLMs just amplify that dynamic. The only real difference is that you should assume the author is gone the moment the code lands. The author is ephemeral, or they went on PTO/quit immediately afterward: Whatever makes you more comfortable.

bluefirebrand 3 hours ago [-]
> The old adage still applies: the ability to read code is more crucial than the ability to write it. LLMs just amplify that dynamic

LLMs don't "just" amplify that dynamic

They boost it to impossibly unsustainable levels

IgorPartola 10 hours ago [-]
So far I have found two decent uses for LLM generated code.

First, refactoring code. Specifically, recently I used it on a library that had solid automated testing coverage. I needed to change the calling conventions of a bunch of methods and classes in the library, but didn’t want to rewrite the 100+ unit tests by hand. Claude did this quickly and without fuss.

Second is one-time-use code. Basically, let's say you need to convert a bunch of random CSV files to a single YAML file, or convert a bunch of video files in different formats to a single standard format, or find any photos in your library that are out of focus. This works reasonably well.
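
The CSV-to-YAML case, for instance, is roughly this much code. A sketch with invented paths and keys, assuming PyYAML is installed:

    import csv
    import glob

    import yaml  # PyYAML

    rows = []
    for path in sorted(glob.glob("data/*.csv")):
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                row["source_file"] = path  # remember where each row came from
                rows.append(row)

    with open("combined.yaml", "w") as f:
        yaml.safe_dump(rows, f, sort_keys=False)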

Bonus one is just generating sample code for well known libraries.

I have been curious what would happen if I handed something like Claude a whole server and told it to manage it however it wants with relatively little instruction.

mcoliver 7 hours ago [-]
The counterpoint to this is that LLMs can not only write code, they can comprehend it! They are incredibly useful for getting up to speed on a new code base and transferring comprehension from machine to human. This of course spans all job functions, and while still immature in its accuracy it is rapidly approaching a point where people with an aptitude for learning and asking the right questions can actually have a decent shot at completing tasks outside of their domain expertise.
roncesvalles 2 hours ago [-]
I agree. Almost all of the value that I'm getting out of LLMs is when it helps me understand something, as opposed to when it helps me produce something.

I'm not sure how or why the conversation shifted from LLMs helping you "consume" to LLMs helping you "produce". Maybe there's not as much money in having an Algolia-on-steroids as there is in convincing execs that it will replace people's jobs?

cadamsdotcom 2 hours ago [-]
Easy way to understand the code: have your AI write tests for it. Especially the gnarliest parts.

Tests prevent regressions and act as documentation. You can use them to prove any refactor is still going to have the same outcome. And you can change the production code on purpose to break the tests and thus prove that they do what they say they do.

And your AI can use them to work on the codebase too.
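
For example, a characterization test for one gnarly function (slugify_title and its expected behavior are made up here). To prove the tests earn their keep, temporarily break the production code, say by swapping the "-" separator for "_", and confirm the suite goes red before reverting:

    from text_utils import slugify_title  # hypothetical module the AI wrote

    def test_slug_is_lowercase_and_hyphenated():
        assert slugify_title("Hello, World!") == "hello-world"

    def test_existing_hyphens_are_not_doubled():
        assert slugify_title("foo - bar") == "foo-bar"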

dexterlagan 7 hours ago [-]
We've been through many technological revolutions, in computing alone, through the past 50 years. The rate of progress of LLMs and AI in general over the past 2 years alone makes me think that this may be unwarranted worry, akin to premature optimization. Also, it seems to be rooted in a slightly out-of-date, human understanding of the tech/complexity-debt problem. I don't really buy it. Yes, complexity will increase as a result of LLM use. Yes, eventually code will be hard to understand. That's a given, but there's no turning back. Let that sink in: AI will never be as limited as it is today. It can only get better. We will never go back to a pre-LLM world, unless we obliterate all technology in some catastrophe. Today we can already grok nearly any codebase of any complexity, get models to write fantastic documentation, and explain the finer points to nearly anybody. Next year we might not even need to generate any docs; the model built into the codebase will answer any question about it, and will semi-autonomously conduct feature upgrades or more.

Staying realistic, there are good reasons to believe that within the next 6-12 months alone, local, open-source models will match their bigger cloud cousins in coding ability, or get very close. Within the next year or two, we will quite probably see GPT6 and Sonnet 5.0 come out, dwarfing all the models that came before. With this, there is a high probability that any comprehension or technical debt accumulated over the past year or more will be rendered completely irrelevant.

The benefits of any development made until then, even sloppy development, should more than make up for the downside caused by tech debt or any kind of overly high complexity. Even if I'm dead wrong, and we hit a ceiling on LLMs' ability to grok huge/complex codebases, it is unlikely to appear within the next few months. Additionally, behind closed doors the progress being made is nothing short of astounding. Recent research at Stanford might quite simply change all of these naysayers' minds.

gwbas1c 8 hours ago [-]
One of the things I find AI is best for is coding operations that don't need to understand context.

I.e., if I need a method (or a small set of methods) with clearly defined inputs and outputs, probably because they follow a well-known algorithm, AI is very useful. But in this case, wider comprehension isn't needed, because all the LLM is doing is copying and adjusting.

mattlondon 8 hours ago [-]
Yes this is where I find it most useful - I tell it what and where to do it, and it fills in the blanks.

E.g. "extract the logic in MyFunc() in foo.cc into a standalone helper and set up all the namespaces and headers so that it can be called from MyFunc() and also in bar.cc. Add tests and make sure it all compiles and works as expected, then call it in bar.cc in the HTTP handler stub there."

It never needs to make architectural decisions. If I watch it and it looks like it is starting to go off the rails and do something odd, I interrupt it and say "Look at baz.cc and follow the coding style and pattern there" or whatever.

Seems to work well.

I feel like as an engineer I am moving away from concrete syntax and up an abstraction level, where I am acting more like a TL reviewing code and making the big-brush decisions on how to structure things, making course corrections as I go. Pure vibe-coding is rare.

dbuxton 11 hours ago [-]
I think this is a relatively succinct summary of the downside case for LLM code generation. I hear a lot of this, and as someone who enjoys a well-structured codebase, I have a lot of instinctive sympathy.

However I think we should be thinking harder about how coding will change as LLMs change the economics of writing code:

- If the cost of delivering a feature is ~0, what's the point in spending weeks prioritizing it? Maybe Product becomes more like an iterative QA function?

- What are the risks that we currently manage through good software engineering practices, and what's the actual impact of those risks materializing? For instance, if we expose customer data that's probably pretty existential, but most companies can tolerate a little unplanned downtime (even if they don't enjoy it!). As the economics change, how sustainable is the current cost/benefit equilibrium of high-quality code?

We might not like it, but my guess is that in ≤ 5 years actual code is more akin to assembler: sure, we might jump in and optimize, but we are really just monitoring the test suites, coverage, and risks, rather than tuning whether the same library function is being evolved in a way which gives leverage across the code base.

dns_snek 10 hours ago [-]
> As the economics change, how sustainable is the current cost/benefit equilibrium of high-quality code

"High quality code"? The standard today is "barely functional", if we lower the standards any further we will find ourselves debating how many crashes a day we're willing to live with, and whether we really care about weekly data loss caused by race conditions.

abraxas 7 hours ago [-]
And if that's what's economically beneficial then it shall be. Unfortunately.
patrickmay 7 hours ago [-]
> However I think we should be thinking harder about how coding will change as LLMs change the economics of writing code: - If the cost of delivering a feature is ~0, what's the point in spending weeks prioritizing it?

Writing code and delivering a feature are not synonymous. The time spent writing code is often significantly less than the time spent clarifying requirements, designing the solution, adjusting the software architecture as necessary, testing, documenting, and releasing. That effort won't be driven to 0 even if an LLM could be trusted to write perfect code that didn't need human review.

grandfugue 10 hours ago [-]
I agree with your point on finding a new standard for what developers should do given LLM coding. Something that mattered before may not be relevant in the future.

So far my experience boils down to: APIs, function descriptions, overall structures, and testing. In other words, ask a dev to become an architect who defines the project and lays out the structure. As long as the first three points are well settled, code-gen quality is pretty good. Many people believe the last point (testing) should be done automatically as well. While LLMs may help with unit tests or tests on macro structures, I think people need to define high-level, end-to-end testing goals from a new angle.

kace91 10 hours ago [-]
The question is whether treating code as a borderline black box balances out with the needed extra QA (including automated tests).

Just like strong typing reduces the amount of tests you need (because the scope of potential errors is reduced), there is a giant increase in error scope when you can’t assume the writer to be rational.

HPsquared 10 hours ago [-]
Black box designs beget black swan events.
__mharrison__ 9 hours ago [-]
My experience has been that LLMs help me make sense of new code much faster than before.

When I really need to understand what's happening with code, I generally will write it out step by step.

LLMs make it much easier for me to do this step and more. I've used LLMs to quickly file PRs for new (to me) code bases.

kristianc 10 hours ago [-]
Soon a capable LLM will have enough training material to spit out "LLMs are atrophying coding skills" / "LLM code is unmaintainable" / "LLM code is closing down opportunities for juniors" / "LLMs do the fun bits of coding" pieces on demand.

A lot of these criticisms are valid and I recognise there's a need for people to put their own personal stake in the ground as being one of the "true craftsmen" but we're now at the point where a lot of these articles are not covering any real new ground.

At least some individual war stories about examples where people have tried to apply LLMs would be nice, as well as not pretending that the problem of sloppy code didn't exist before LLMs.

bluefirebrand 7 hours ago [-]
> as well as not pretending that the problem of sloppy code didn't exist before LLMs

Certainly not remotely the same volume of sloppy code

Impossibly high volumes of bad code is a new problem

estimator7292 9 hours ago [-]
At my first programmer job, a large majority of the code was written by a revolving door of interns allowed to push to main with no oversight. Much of the codebase was unknown and irreplaceable, which meant it slowly degraded and crumbled over the years. Even way back then, everyone knew the entire project was beyond salvage and needed to be rewritten from scratch.

Well they kept limping along with that mess for another ten years while the industry sprinted ahead. They finally released a new product recently, but I don't think anyone cares, because everyone else did it better five years ago.

sixhobbits 6 hours ago [-]
One side of the equation is definitely that we'll get more 'bad' code.

But nearly every engineer I've ever spoken to has over-indexed on 'tech debt bad'. Tech debt is a lot like normal debt - you can have a lot of it and still be a healthy business.

The other side of the equation is that it's easier to understand and make changes to code with LLMs. I've been able to create "Business Value" (tm) in other people's legacy code bases in languages I don't know by making CRUD apps do things differently from how they currently do things.

Before, I'd have needed to hire a developer who specialises in that language and pay them to get up to speed on the code base.

So I agree with the article that the concerns are valid, but overall I'm optimistic that it's going to balance out in the long run - we'll have more code, throw away more code, and edit code faster, and a lot of that will cancel.

blindriver 8 hours ago [-]
No, I 100% don't think it will happen.

LLMs have made content worth precisely zero. Any content can be duplicated with a prompt. That means code is also worth precisely zero. It doesn't matter if humans can understand the code; what matters is whether the LLM can understand the code and make modifications.

As long as the LLM can read the code and adjust it based on the prompt, what happens on the inside doesn't matter. Anything can be fixed with simply a new prompt.

jv22222 2 hours ago [-]
I've found LLMs to be pretty good at explaining how legacy codebases work. Couldn't you just use that to create documentation and a cheat sheet to help you understand how it all works?
injidup 11 hours ago [-]
Shouldn't you be getting the LLM to also generate test cases to drive the code, and enforcing coding standards on the LLM so that it generates small, easily comprehensible software modules with high-quality inline documentation?

Is this something people are doing?

kace91 11 hours ago [-]
The problem is similar to that of journalism vs social media hoaxes.

An LLM-assisted engineer writes code faster than a careful person can review it.

Eventually the careful engineers get run over by the sheer amount of work to check, and code starts passing reviews when it shouldn't.

It sounds obvious that careless work is faster than careful work, but there are psychological issues in play: management expecting AI to be a speed multiplier, personal interest in being perceived as someone who delivers fast, engineers' concern about being seen as a bottleneck for others…

wongarsu 10 hours ago [-]
I have no issue getting LLMs to generate documentation, modular designs or test cases. Test cases require some care; just like humans LLMs are prone to making the same mistake in both the code and the tests, and LLMs are particularly prone to not understanding whether it's the test or the code that's wrong. But those are solvable.

The thing I struggle with more when I use LLMs to generate entire features with limited guidance (so far only in hobby projects) is the LLM duplicating functionality or not sticking to existing abstractions. For example, if in existing code A calls B to get some data, and now you need to do some additional work on that data (e.g. enriching or verifying it), that change could be made in A, made in B, or you could make a new B2 that is just like B but with that slight tweak. Each of those could be appropriate, and LLMs sometimes make hilariously bad calls here.
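
A sketch of those three placements, with every name invented:

    # Existing shape: report() ("A") calls fetch_records() ("B");
    # the new requirement is that records are verified before use.

    def fetch_records(source):
        return source.load()

    def verified(records):
        return [r for r in records if r.get("id") is not None]

    # Option 1: do the extra work in A
    def report(source):
        return len(verified(fetch_records(source)))

    # Option 2: fold it into B so every caller benefits
    def fetch_records_checked(source):
        return verified(source.load())

    # Option 3, the near-duplicate "B2" an LLM often reaches for
    def fetch_verified_records(source):
        return verified(source.load())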

tgv 10 hours ago [-]
I just wrote a reply elsewhere, but we got a new vibe-coded (marketing) website. How is an LLM going to write test cases for that? And what good will they do? I assume it will also change the test cases when you ask it to rewrite things.
fragmede 9 hours ago [-]
> How is an LLM going to write test cases for that?

"Please generate unit tests for the website that exercise documented functionality" into the LLM used to generate the website should do it.

fzeroracer 10 hours ago [-]
The LLM will generate test cases that do not test anything or falsely flag the test as passing, which means you need to deeply review and understand the tests as well as the code it's testing. Which goes back to the point in the article, again.
jf22 9 hours ago [-]
>which means you need to deeply review and understand the tests as well as the code it's testing

Yes...? Why wouldn't you always do this LLM or not?

Herring 10 hours ago [-]
The people who are doing that aren't writing these blog posts. They're writing much better code & faster, while quietly internally panicking a bit about the future.
Agraillo 7 hours ago [-]
You're probably talking about the infamous Dark Matter Developers [1]. When the term was coined, I thought there were many of them; now, seeing how many developers are here at HN (including myself), I doubt there are many left /s.

The quote that is interesting in the context of the fast-pacing LLM development is this

> The Dark Matter Developer will never read this blog post because they are getting work done using tech from ten years ago and that's totally OK

[1] https://www.hanselman.com/blog/dark-matter-developers-the-un...

sorcercode 7 hours ago [-]
There's truth to a lot of what's said in this post and I see many people complain but these opinions feel short sighted (not meant derogatorily - just that these are shorter term problems).

> Teams that care about quality will take the time to review and understand (and more often than not, rework) LLM-generated code before it makes it into the repo. This slows things down, to the extent that any time saved using the LLM coding assistant is often canceled out by the downstream effort.

I recently tried a mini experiment for myself to (dis)prove similar notions. I feel more convinced we'll figure out a way to use LLMs and keep maintainable repositories.

i intentionally tried to use a language I'm not as proficient in (but obv have a lot of bg in programming) to see if I could keep steering the LLM effectively

https://kau.sh/blog/container-traffic-control/

and I saved a *lot* of time.

jayd16 7 hours ago [-]
> i intentionally tried to use a language I'm not as proficient in (but obv have a lot of bg in programming) to see if I could keep steering the LLM effectively

I think this might be the wrong assumption. In the same way the news happens to be wrong about topics you know, I think it's probably better to judge code you know over code you don't.

It's easy to accept whatever the output was if you don't know what you're looking at.

It'll be interesting to see what it tells experts about sloppy, private code bases (you can't use existing OSS examples, because opinions and docs would be in the LLM corpus and not just derived from the code itself).

sorcercode 6 hours ago [-]
fair point.

but i'm not proficient != i don't know (i.e. i have worked on javascript many moons ago, but i wouldn't consider myself an expert at it today).

i like to think i can still spot unmaintainable vs maintainable code but i understand your point that maybe the thinking is to have an expert state that opinion.

the code is [oss](https://github.com/kaushikgopal/ff-container-traffic-control) btw so would love to get other takes.

simonsarris 11 hours ago [-]
A softer version of this has existed since word processing and Xerox machines (copiers) took off, in law and regulations. Tax code, zoning code etc exploded in complexity once words became immensely easy to create and copy.
rho4 9 hours ago [-]
Nice pattern detection.
efitz 7 hours ago [-]
I think I disagree with the premise.

If the assertion is, I want to use non-LLM methods to maintain LLM-generated code, then I agree, there is a looming problem.

The solution to making LLM-generated code maintainable involves:

1) Using good design practices before generating the code, e.g. have a design and write it down. This is a good practice regardless of maintainability issues because it is part of how you get good results getting LLMs to generate code.

2) Keeping a record of the prompts that you used to generate the code, as part of the code. Do NOT exclude CLAUDE.md from your git repo, for instance, and extract and save your prompts.

3) Maintain the code with LLMs, if you generated it with LLMs.

Mandatory car analogy:

Of course there was a looming maintenance problem when the automobile was introduced, because livery stables were unprepared to deal with messy, unpredictable automobiles.

righthand 7 hours ago [-]
You’re suggesting software design principles to a world where people are trying to escape having to learn (any) software design principles. The only thing LLM users largely want is to say “computer do thing”.
ModernMech 7 hours ago [-]
Having to do this pretty much destroys the value proposition that AI companies are pushing though. At best, what this means is that current software shops can write taller software stacks. Which is valuable, don't get me wrong. But the value proposition, the fantasy of LLMs, is that they will be able to replace your entire development team. If all it does is make the dev team 15% more capable -- because no one else has the knowledge to use it -- that's not a trillion dollar world-shifting technology, it's just another layer in the tech stack.
osigurdson 9 hours ago [-]
I wonder how long it will take for the world to kind of catch up to reality with today's (and likely tomorrow's) AI? Right now, most companies are in a complete holding pattern - sort of doing nothing other than small scale layoffs here and there - waiting for AI to get better. It is like a self-induced global recession where everyone just decides to slow down and do less.
wiradikusuma 6 hours ago [-]
From my experience, you should either treat LLM-generated code as the usual code (from before the LLM age) that you need to review every time it changes, or you should not review it at all and treat it as a black box with clearly defined boundaries. You test it by putting on your QA hat, not your Developer hat.

You can't change your stance later, it will just give you a headache.

When the former breaks, you fix it like conventional bug hunting. When the latter breaks, you fix it by either asking LLM to fix it or scrap it and ask LLM to regenerate it.

malkosta 7 hours ago [-]
I fight against this by using it mostly on trivial tasks which require no comprehension at all, plus fixing docs and extending tests. It helps me focus on what I love and lets me automate the boring stuff.

For complex tasks, I use it just to help me plan or build a draft (and hacky) pull request, to explore options. Then I rewrite it myself, again leaving the best part to myself.

LLMs made writing code even more fun than it was before, to me. I guess the outcomes only depends on the user. At this point, it's clear that all my peers that can't have fun with it are using it as they use ChatGPT, just throwing a prompt, hoping for the best, and then getting frustrated.

strangescript 10 hours ago [-]
So many of these concepts only make sense under the assumption that AI will not get better and humans will continue to pour over code by hand.

They won't. In a year or two these will be articles that get linked back to similar to "Is the internet just a fad?" articles of the late 90s.

gwbas1c 9 hours ago [-]
I disagree. Not every technological advance "improves" at an exponential rate.

The issue is that LLMs don't "understand." They merely copy without contributing original thought or critical thinking. This is why LLMs can't handle complicated concepts in codebases.

What I think we'll see in the long run is:

(Short term) Newer programming models that target LLMs: i.e., describe what you want the computer to do in plain English, and then the LLM will allow users to interact with the program in a more conversational manner. Edit: These will work in "high tolerance" situations where small amounts of error are okay. (Think analog vs digital, where analog systems tend to tolerate error more gracefully than digital systems.)

(Long term) Newer forms of AI that "understand." These will be able to handle complicated programs that LLMs can't handle today, because they have critical thinking and original thought.

tkgally 10 hours ago [-]
A couple of those articles, in case anyone is interested:

“The Internet? Bah! Hype alert: Why cyberspace isn't, and will never be, nirvana” by Clifford Stoll (1995)

Excerpt: “How about electronic publishing? Try reading a book on disc. At best, it's an unpleasant chore: the myopic glow of a clunky computer replaces the friendly pages of a book. And you can't tote that laptop to the beach. Yet Nicholas Negroponte, director of the MIT Media Lab, predicts that we'll soon buy books and newspapers straight over the Internet. Uh, sure.”

https://www.nysaflt.org/workshops/colt/2010/The%20Internet.p...

“Why most economists' predictions are wrong” by Paul Krugman (1998)

Excerpt: “By 2005 or so, it will become clear that the Internet's impact on the economy has been no greater than the fax machine's.”

https://web.archive.org/web/19980610100009/http://www.redher...

singleshot_ 8 hours ago [-]
If you assume Krugman was talking about a positive impact, it makes sense to make fun of him.
senordevnyc 9 hours ago [-]
Krugman's quotes are even worse in full:

The growth of the Internet will slow drastically, as the flaw in "Metcalfe's law"--which states that the number of potential connections in a network is proportional to the square of the number of participants--becomes apparent: most people have nothing to say to each other! By 2005 or so, it will become clear that the Internet's impact on the economy has been no greater than the fax machine's.

As the rate of technological change in computing slows, the number of jobs for IT specialists will decelerate, then actually turn down; ten years from now, the phrase information economy will sound silly.

yfw 10 hours ago [-]
Or it could also be like blockchain and nfts...
monkmartinez 10 hours ago [-]
I have been programming as a hobby for almost 20 years. At least for me, there is huge value using LLM's for code. I don't need anyone else's permission, nor anyone else to participate for the LLM's to work for me. You absolutely can not say that about blockchain, nft, or crypto in general.
rhetocj23 9 hours ago [-]
Nah, that comparison doesn't make sense.

There is certainly real market penetration with LLMs. However, there is a huge gap between fantasy and reality - as in what is being promised vs what is being delivered and the effects on the economy are yet to play out.

randallsquared 10 hours ago [-]
Exactly so. From the article:

> But those of us who’ve experimented a lot with using LLMs for code generation and modification know that there will be times when the tool just won’t be able to do it.

The pace of change here--the new normal pace--has the potential to make this look outdated in mere months, and finding that the curve topped out exactly in late 2025, such that this remains the state of development for many years, seems intuitively very unlikely.

fhennig 9 hours ago [-]
Just like how in a year or two we will have fully self-driving cars, right?

The last few percentage points of getting something just right are the hardest. Why are you so sure that the flaws in LLMs will be gone in such a short time frame?

senordevnyc 9 hours ago [-]
We do have fully self-driving cars. You can go to a number of American cities and take a nap in the backseat of one while it drives you around safely.
justsocrateasin 8 hours ago [-]
But it's not fully self-driving. SF Waymo can't bring you to the airport. You missed OP's point, which was that the last few percentage points are the hardest.
metalliqaz 6 hours ago [-]
This requires an assumption that LLM capability growth will continue on an exponential curve, when there are already signs that in reality the curve is logistic
otabdeveloper4 9 hours ago [-]
Two more weeks and "AI" will finally be intelligent. Trust the plan.
jf22 9 hours ago [-]
I know this is sarcasm, but if you've been using LLMs for more than two weeks you've probably noticed significant improvements in both the models and the tooling.

Less than a year ago I was generating somewhat silly and broken unit tests with Copilot. Now I'm generating entire feature sets while doing loads of laundry.

exasperaited 10 hours ago [-]
pore*
book_mike 9 hours ago [-]
LLMs are powerful tools but they are not going to save the world. I have seen this before. The experienced crowd gets chuffed because it is a new pattern that radically changes their current workflow. The new crowd haven't optimised yet, so they overuse the new way of doing things until they moderate it. The only difference I can detect is that the rate of change has increased to an almost incomprehensible pace.

The wave’s still breaking, so I’m going to ride it out until it smooths into calm water. Maybe it never will. I don't know.

bluefirebrand 6 hours ago [-]
> The only difference I can detect is that rate of change increased to an almost uncomprehensable pace

This is a pretty seriously bad difference imo

randomtoast 9 hours ago [-]
I think the only way to escape this trap is by developing better LLMs in the future. The rapid rate at which new AI-generated code is produced means that humans will no longer be able to review it all.
jf22 9 hours ago [-]
That's why you have other LLMs review.
sparkie 8 hours ago [-]
The concern here is getting sufficient quality training data for the newer LLMs. They'll be not only learning from human written code, but also from the slop produced by previous generation LLMs. It may be that they get worse over time unless there are further breakthroughs in making the AI have some actual intelligence.
titaniumrain 8 hours ago [-]
If this problem has existed before, why start worrying now? And if scale might make it problematic, can we quantify the impact instead of simply worrying?
pshirshov 8 hours ago [-]
It can be partially addressed with a proper set of agent instructions. E.g. follow SOLID, use constructor injection, avoid mutability, write dual tests, use explicit typings (when applicable), etc. Though the models are remarkably bad at design, so that provides just a minor relief. Everything has to be thoroughly reviewed and (preferably) rewritten by a human.
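
A sketch of the shape such instructions push the model toward (constructor injection, immutability, explicit typings); the names are illustrative only:

    from dataclasses import dataclass
    from typing import Protocol

    class RateSource(Protocol):
        def rate_for(self, currency: str) -> float: ...

    @dataclass(frozen=True)      # immutable once constructed
    class PriceConverter:
        rates: RateSource        # dependency injected via the constructor

        def to_usd(self, amount: float, currency: str) -> float:
            return amount * self.rates.rate_for(currency)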
mlhpdx 7 hours ago [-]
Building roads, power, sewer and schools without budgeting for maintenance, upgrades and ultimately replacement. Having a capital burn that can’t plausibly be repaid. Focusing on having more code rather than the right code. Artfully similar behaviors to me.
softwaredoug 10 hours ago [-]
You learn more when you take notes. In the same way, I understand the structure of the code better when my hands are on keyboard.

I like writing code because eventually I have to fix code. The writing will help me have a sense for what's going on. Even if it will only be 1% of the time I need to fix a bug, having that context is extremely valuable.

Then reserve AI coding when there's true boilerplate or near copy-paste of a pattern.

drnick1 8 hours ago [-]
Unrelated to the content of the article, but please stop including Gravatar in your blogs. It is disrespectful to your readers since you allow them to be tracked by a company that has a notoriously poor security and privacy record. In fact, everyone should blackhole that domain.
giancarlostoro 9 hours ago [-]
If the outputted code looks like code I cannot maintain in a meaningful way (barring some specialized algorithm or something), I don't check it in. I treat it as if it were code from Stack Overflow: sometimes it's awful code, so I rewrite it if applicable (things change, understandings change); other times it works and makes sense.
segmondy 9 hours ago [-]
Well, IMO, the issue is that we are trying to merge with AI/LLMs. Why must both of us understand the code base? Before, it was us that just understood it; why not just have the AI understand it all? Why do you need to understand it? To do what exactly? Document it? Improve it? Fix it? Well, let the LLM do all of that too.
jebarker 9 hours ago [-]
The phenomenon is not just true in coding. I think over time we’ll see that outsourcing thinking isn’t always a good idea if you wish to develop long term knowledge and critical thinking skills. Much like social media has destroyed the ability for many to distinguish truth and fiction.
energy123 7 hours ago [-]
Been trying to figure out a way to use LLMs to better understand code that comes from LLMs at a level of abstraction somewhere between the code itself and the prompt, but haven't succeeded yet.
dweinus 7 hours ago [-]
Ok, not peeking at the comments yet, but I am going to predict the "put more AI on it" people will recommend solving it by putting more AI on it. Please don't disappoint!
daveaiello 9 hours ago [-]
I started using LLMs to refactor and maintain utility scripts that feed data into one of my database driven websites. I don't see a downside to this sort of use of something like Claude Code or Cursor.

This is not full blown vibe coding of a web application to be sure.

vjvjvjvjghv 10 hours ago [-]
You have a similar problem with projects where a large number of offshore developers is used. Every day you get a huge pile of code to review which is basically impossible within the available time. So you end up with a system that nobody really understands.
vanillax 7 hours ago [-]
Offshore coding practices in the 2010s were the same thing as LLMs are now. I'd take an LLM over offshore $10/hr devs any day of the week...
purpleredrose 8 hours ago [-]
Code will be write-only soon enough. If it doesn't work, regenerate it until it passes your tests, which you have vetted, but which were probably also generated.
jermberj 6 hours ago [-]
> An effect that’s being more and more widely reported is the increase in time it’s taking developers to modify or fix code that was generated by Large Language Models.

And this is where I stop reading. You cannot make such a descriptive statement without some sort of corroborating evidence other than your intuition/anecdotes.

gipp 6 hours ago [-]
Sounds like a great way to shift our problems from categories that are easy to measure to ones that are hard to measure.
axpy906 7 hours ago [-]
I get what the author is saying but isn’t that why we have problem solving, test coverage and documentation?
rafaelbeirigo 10 hours ago [-]
I haven't used them in big codebases, but they were also able to help me understand the code they generated. Isn't this feasible (yet) on big codebases?
JCM9 8 hours ago [-]
It’s not just code. Across the board, we’re not seeing AI help people do better things faster; we’re seeing it help them do mediocre things faster under the guise of being “good.”

The market will eventually self-correct once folks get burned by that enough.

codazoda 10 hours ago [-]
I find LLMs most useful for exactly this: understanding legacy code.

I can ask questions like, “how is this code organized” and, “where does [thing] happen?”

laweijfmvo 7 hours ago [-]
Is no one using LLMs to help them read/understand code? Reading code is definitely a skill that needs to be acquired, but LLMs can definitely help. We should be pushing that instead of “vibe coding”.
hnthrow09382743 9 hours ago [-]
Frankly, if you want to believe this idea is true, I'm on the side of no tech/comprehension debt ever being paid down.

The analogy of debt breaks down when you can discard the program and start anew, probably at great cost to the company. But since that cost falls on the company rather than on developers, no developer is actually paying the debt, because greenfield development is almost always more invigorating than maintaining legacy code. It's a bailout (really, debt forgiveness) of technical debt by the company, which also happens to be paying the developers a good wage on the very nebulous promise that this won't happen again (spoiler: it will).

What developers need in order to get a bailout is enough reputation and soft skills to convince someone that a rewrite is feasible and the best option, plus leadership that is not completely convinced you should never rewrite programs from scratch.

Joel Spolsky's beliefs here are worth a revisit in the face of hastened code generation by LLMs too, as it was based completely on human-created code: https://www.joelonsoftware.com/2000/04/06/things-you-should-...

Some programs still should not be rewritten: Excel, Word, and many of the more popular, large programs. However, many small and medium applications maintained by developers using LLMs in this way will more easily end up with a larger fraction of LLM-generated code that is harder to understand (again, if you believe the article). Whereas before you might have rewritten a small program, you might now rewrite a medium one.

intrasight 10 hours ago [-]
> ... taking developers to modify or fix code

Fix your tests not your resulting code

Havoc 8 hours ago [-]
At least LLMs are pretty good at explaining code.
ccvannorman 9 hours ago [-]
I joined a company with 20k lines of Next/React generated in 1 month. I spent over a week rewriting many parts of the application (mostly the data model and duplicated/conflicting functionality).

At first I was frustrated, but my boss said it was actually a perfect sequence, since that "crappy code" did generate a working demo that our future customers loved, which gave us the validation to rewrite. And I agree!

LLMs are just another tool in the chest: a curious, lightning-fast junior developer with an IQ of 85 who can't learn and needs a memory wipe whenever they make a design mistake.

When I use it knowing its constraints, it's a great tool! But yeah, if used wrong you are going to make a mess, just like with any powerful tool.

pnathan 9 hours ago [-]
I am running a lightweight experiment: I have a Java repo that I am essentially vibecoding from scratch. I am effectively acting as a fairly pointy-haired PM on it.

The goal is to see how far I can push the LLM. How good is it... really?

danans 6 hours ago [-]
The guiding principle for most of the tech industry is to produce the cheapest thing you can get away with. There is little intrinsic motivation toward quality left in the culture.

When velocity and quantity are massively incentivized over understanding, strategy, and quality, this is the result. Enshittification of not only the product, but our own professional minds.

purpleredrose 8 hours ago [-]
Code is going to be write-only soon enough. There will be no debt, just regenerated code.
HarHarVeryFunny 7 hours ago [-]
The only way that could work would be if there were 100% test coverage of every input scenario, whether documented as part of the requirements or not; otherwise the regenerated code is almost certain to have regression bugs in it.

Most complex production systems do not have this level of documentation and/or regression coverage, nor, I suspect, will any AI-generated system. The requirements you fed the AI to "specify" the system aren't even close to a 100%-coverage regression test suite, even for the product features, let alone all the more detailed behaviors that customers may be used to.

It's hard to see mission-critical code (industrial control, medical instruments, etc) ever being written in this way since the cost of failure is so high.
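
To make that gap concrete, here is a small, entirely hypothetical illustration (total_v1 and total_v2 are invented names): two drop-in versions of the same pricing helper both pass a sparse "vetted" suite, yet disagree on an input the suite never exercises.

    from decimal import Decimal, ROUND_HALF_UP

    # Original implementation: rounds half-cents up, the behavior customers are used to.
    def total_v1(price: float, qty: int) -> float:
        cents = Decimal(str(price * qty)).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
        return float(cents)

    # Regenerated implementation: uses round(), i.e. banker's rounding on .5 cases.
    def total_v2(price: float, qty: int) -> float:
        return round(price * qty, 2)

    # A sparse regression suite that both versions pass...
    assert total_v1(1.10, 3) == total_v2(1.10, 3) == 3.30
    assert total_v1(2.00, 5) == total_v2(2.00, 5) == 10.00

    # ...yet on an untested half-cent case the regenerated code silently changes behavior.
    print(total_v1(0.125, 1))  # 0.13 (round half up)
    print(total_v2(0.125, 1))  # 0.12 (banker's rounding)

The point isn't the rounding rule; it's that any behavior the suite doesn't pin down is free to drift on every regeneration.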

throw_m239339 4 hours ago [-]
That's fantastic IMHO, it guarantees competent engineers decades of work to fix all the bad code deployed, eventually. Let's not even get started on performance optimization jobs.
vdupras 8 hours ago [-]
Our collective future has "mediocrity" written all over it.

And when you think about it, LLMs are pretty much, by design, machines that look for truth in mediocrity in its etymological sense.

rhelz 9 hours ago [-]
When was the golden age, when everybody understood how everything worked?
fullstop 8 hours ago [-]
Mid to late 90s, IMO.
m3kw9 5 hours ago [-]
This is what happens when you let the AI run for 30 minutes. Ain’t no way you will read the code with much of a critical eye if it’s a 1-hour+ read. You have to generate compartmentalized code so you don’t need to check as much.
wilg 5 hours ago [-]
Luckily LLMs can also comprehend code (and are getting better at doing so), this problem will probably solve itself with more LLMs. (Don't shoot the messenger!)
danans 6 hours ago [-]
When speed and quantity are incentivized over understanding and quality, this is what we get. Enshittification of not only the product, but our own professional minds.
justinhj 7 hours ago [-]
Technical leaders need to educate their teams not to create this kind of technical debt. We have a new tool for designing and implementing code, but ultimately the agent is the software engineer, and the same practices we have always followed still have value; perhaps even more.
bongodongobob 7 hours ago [-]
This is just tech debt. It's all around us. This isn't a new concept, and it's not something new with LLMs/AI. This isn't ANY different from onboarding any other tech into your stack.
josefrichter 8 hours ago [-]
I guess this is just another definition of vibe coding. You're deliberately creating code that you don't fully understand. This has always existed, but LLMs greatly amplify it.
vonneumannstan 9 hours ago [-]
This is surely an issue, and more and more serious people are admitting that 50% or more of their code is now AI-generated. However, it looks like AI is improving fast enough that it will take on the cognitive load of understanding large code bases, and humans will be relegated to system architecture and design.
bluefirebrand 8 hours ago [-]
No, but people's brains are rotting from using AI and their standards are getting low enough to accept AI code
scotty79 11 hours ago [-]
I'm sure future LLMs will be able to comprehend more. So the debt, similarly to real world debt, is fine, as long as the line goes up.
tgv 10 hours ago [-]
Idk. The company I work for had a new website designed, and it was built with an (unknown) LLM, through repeated prompting, I believe (so basically a large document that starts with the initial description and then adds fixes). It's deployed on an unknown stack, on a random server somewhere. We techies simply got a few remarks from the LLM about changing the DNS (which were not up to any standard for such requests). The moment some marketeer wants some change on that website, the whole thing may come undone. It's like outsourcing to the lowest bidder. But the CEO is happy, because AI.

I'm also not sure about your basic premise that understanding will improve. That depends on the size of the network's internal representation(s), which will start overfitting at some point.

actionfromafar 10 hours ago [-]
But won't that future LLM be able to spew out even more? I mean, I regularly produce stuff I can't comprehend at a later date, why won't the same happen to an LLM?
scotty79 9 hours ago [-]
Because your comprehension grows over time only slightly, or not at all, and at some point it will start to decline. That won't be true for LLMs for some time, hopefully. And even then, at some point they'll learn to make stuff in a more "divide and conquer" style, so they don't need to understand the whole big ball of spaghetti all at once.
actionfromafar 7 hours ago [-]
I mean, even if we don't understand it in our own brains, what's stopping people from building ever more complex systems with LLMs, until the LLMs themselves can't keep up?
bparsons 7 hours ago [-]
This is a problem that is not unique to software engineering, and predates LLMs.

Large organizations are increasingly made up of technical specialists who are very good at their little corner of the operation. In the past, you had employees who were present at firms for 20+ years and who not only understood the systems in a holistic way, but could recall why certain design or engineering decisions were made.

There is also a demographic driver. The boomer generation, with all the institutional memory, has left. Gen X was a smaller cohort and was not able to fully absorb that knowledge transfer. What is left are a lot of organizations run by people under the age of 45, working on systems where they may not fully understand the plumbing or context.

righthand 7 hours ago [-]
I think it’s a lot worse. My coworkers don’t even read the code base for easily answered questions anymore. They just ping me on Slack. I want to believe there are no dumb questions, but now it’s become “be ignorant and ask the expert for non-expert related tasks”.

What happened? I don’t really use LLMs, so I’m not sure how people have completely lost their ability to problem-solve. Surely they must remember six months ago, when they were debugging just fine?

claytongulick 7 hours ago [-]
I'm so glad someone has finally described the phenomenon so well.

"Comprehension debt" is a perfect description for the thing I've been the most concerned about with AI coding.

Once I got past the Dunning-Kruger phase and started really looking at what was being generated, I ran into this comprehension issue.

With a human, even a very junior one, you can sort of "get in the developer's head". You can tell which team member wrote which code and what they were thinking at the time. This leads to a narrative, or story of execution which is mostly comprehensible.

With the AI stuff, it's just stochastic parrot output. It may work just fine, but there will be things like random functions that are never called, hundreds or thousands of lines of extra code to do very simple things, and references to things that don't exist and never have.

I know this stuff can exist in human code bases too - but generally I can reason about why. "Oh, this was taken out for this issue and the dev forgot to delete it".

I can track it, even if it's poor quality.

With the AI stuff, it's just randomly there. No idea why it exists, whether it's used or ever was, whether it makes sense, or whether it's extra fluff or brilliance.

It takes a lot of work to figure out.
