How A(I) Code: Musings on LLMs as Tools
This post comes in two parts, covering two discussions that have been on my mind recently. The first is the question of whether LLMs, which are radically changing how programming is done, are a benefit or a detriment to the world of software development. The only perspective I can offer on this is of an amateur for whom LLMs were the entry point into 'real' programming. So that's what you're getting - reflections from an outsider's point of view, albeit one who spends far too long trawling tech Twitter and using memes like I'm cosplaying my Dad going to work.
The second part is a description of my own best practices for using LLMs to help me code web applications. Contrary to my initial beliefs, it turns out that everyone has their own way of interacting with these new tools. There's plenty to learn from how others use them, so here's my contribution, designed to tease out the best from my readers. The plan is simply to share some of my own pearls of wisdom while ruthlessly stealing best practice from my friends. Welcome to the jungle, baby.
A Cursor on all your houses
There's been a lot of talk online, prompted by the release of tools like Cursor, Devin, and a thousand other YC-funded forks of popular IDEs, about whether or not LLM-assisted coding is destroying the art of programming. Some people who learned how to be programmers before AI came along feel cut up that it's now a trivial thing to produce code; the amateur arrivistes, on the other hand, are grateful that the gatekeepers of old are being swept away on a tide of new tooling.
In the best traditions of the internet, the most engaging arguments are the ones on the extremes. You have the accelerationists announcing that not only are developers redundant, but the whole practice of computer science is about to be swallowed by all-knowing AIs digesting everything worthwhile and cutting human brains out of the loop. Anyone can write their own software, and anyone can make money doing so. Sometimes this position is almost gleeful. The sans-culottes are out, and the streets shall (stack) overflow with the blood of the unhelpful Ancien Régime. Anyone who has had a sniffy response on a forum to an innocent programming question will know the feeling. What better way to get your own back than to render entire categories of programming knowledge historical artefacts?
On the other side, you have what I call the Abstraction Layer Luddites, who would be coding using pen and paper if it wasn't for pesky things like deadlines, currently existing programming languages and real-world clients. Use LLMs only when you really have to, they say - generated code is riddled with hidden bugs and flaws which will surely blow up anything more complex than a tic-tac-toe game. Real programming is done by real programmers, who have dedicated decades of their lives to learning things the long way round. There are no shortcuts, and there mustn't be. In the purity cult, deploying any tool which abstracts away from your, er, chosen level of abstraction is a spiritual failure. You can't be a great programmer unless you pay your dues to the machine gods; at the very least, pay the dues we've paid before you turned up.
Break those looms!
Strangely, I have a lot of sympathy for the Luddites. Breathless boosting of the destructive potential of new technology is dangerous, especially when the cost of producing code easily is the loss of substantive knowledge and understanding of that code. It's one of the issues I have with startup culture and the cult of 'disruption', which so often slides into a capitalistic orgy of brutal efficiency. Celebrating the lack of need for real knowledge is plainly ignorant, and anyone eagerly awaiting the death of the developer is an accomplice in the enshittification of expertise. Society doesn't need more of that.
Then there's what actually happens when you code. LLMs have a mandate to generate tokens; leaning too heavily on their abilities runs the enormous risk of tricking new or amateur programmers into laziness by default. That is a central concern of the traditionalists. There is no incentive to understand the entirety of a language or architecture, just a thousand 'do what works' solutions which end up creating hidden structural risk invisible from a bottom-up perspective. Exclusive LLM coding removes the incentive to read the documentation, to engage with the underlying community, or to understand the abstract thought processes behind certain language or design choices. The programmer educating themselves solely through an LLM, or worse, simply refactoring until something works, lacks the knowledge to provide any useful solution in the absence of their favourite tool. This means that when things go critically wrong, it falls to the people who actually know how things work to fix them. It's unfair to burden those who have paid the dues with fixing the mistakes of neophytes.
These are good points, and they annoy the accelerationists. But they can also appear alienating and backward to anyone coming to the programming game thanks to LLMs. It's difficult to hear arguments like this if you want to get to the interesting stuff quickly. They sit especially uncomfortably when your position in respect of other industries is that disruption is fantastic and efficient software solutions wrecking old business models is the way forward to fame and fortune. As a neophyte jumping over previous barriers to entry, there's also the implication that you're biting the hand that feeds you, and if the technology is pushed to its logical conclusion, the final barrier to entry removed is... you.
The reality is that you can't ignore LLMs in software production. The ease of code generation, especially in popular languages like Python, does allow you to build more sophisticated software more quickly. I've also found that learning how to code with LLM assistance on my own projects is vastly more fun and fulfilling than any online course. I don't think learning everything the long way round makes sense. It is especially inefficient for people who already possess the appropriate abstract reasoning skills and need to upskill themselves on how to get working solutions down on paper.
So what's the sensible position to take?
It's all about integrity
Personally, I think the modern coder, whether amateur or professional, has to look at what they are doing with a degree of radical honesty. Is what you're doing efficiency, or just intellectual laziness and cynicism? Is it a form of stolen valour? For example - you can't claim that the LLM writes the code you "would otherwise have written" if you don't have some notion of how you would have done it in the first place. You can't claim to be a competent software engineer or developer if you're working at a level of abstraction above the code (i.e. typing desired outcomes into Claude and copy-pasting into your IDE). There's a question of integrity here too. Real value comes from knowing what you are talking about, and that requires dedication, cogitation, and time.
This really comes into focus if you are releasing commercial software for use by others. Make as many LLM-drafted CRUD applications for free as you like, but if you are going to be shipping code to real people for use in real-world applications which carry real risk, you should know exactly what your code does and why. Don't take money for software development unless you can look yourself in the mirror and be confident you have done a job worthy of the title. Because who would you rather be - the programmer who flies by the seat of their pants hoping that they can fix what they/the LLM have produced in short order, or the programmer who has a thorough grounding in the fundamentals of their craft? Can you really call yourself an effective engineer if you aren't building on craft fundamentals? If I was being provocative I'd ask you - are you a competent coder or a bullshit artist, and how would it feel if you were called (or caught) out? Not even by some frothing mob of fellow devs, but by your own expensive screw-ups that you don't have the knowledge to fix?
Unless you are a completely degenerate indiehacker with little regard for your users or your own longevity in the industry, most people would choose the second option of studying the fundamentals first. It's more work, more reading, more painful nights spent working on your own knowledge rather than shipping, but in the end it's the more worthy path. You need to know what you are doing - really know what you are doing, at a deep level - to have the ability to create smart and lasting solutions to real problems. At least that's what I believe. However hard you try, you cannot outsource actually knowing things.
The importance of problem solving
So we've established the importance of knowing stuff and not lying, cynically, about your own basic abilities. Great. But what strikes me about the LLM coding discussion is how rarely problem-solving skills and solution architecture are discussed in the context of LLM use. It's not just about knowing syntax and being able to produce loads of code - it's about how these new tools contribute to or detract from people's ability to actually solve the problems for which software is simply a means to an end. In other words, architecture and design are the critical elements which will determine the level of slop we will have to deal with in coming years.
Why am I making this argument? In essence, because the fundamental functionality and quality of a piece of software is a question of design. LLMs can get your idea onto paper, but in my experience, the more clearly you have designed your program on paper or in your head, the better and cleaner that program will be. This is because to really build something of quality you must understand what every component and every function is doing. That is inseparable from knowing the inner workings of the particular language or framework you are using, its strengths and weaknesses. To build the best software you can, you need to design it properly.
In this sense, to use LLMs in the best possible way, you should be using them critically to help architect your software and refine your structures before methodically moving down into the guts of the coding. Don't rely on anything implicit in the tooling; make your designs uniquely your own, or rely on tried-and-tested modular approaches which have survived contact with the real world. Write it all down, plan it out, and then work it over with the LLM in tow - always cognisant that it may be wrong. If you know how and why everything is supposed to work, however, you are in a position to critique everything the LLM produces when asked to generate either lowest-level code or a high-level project framework.
Parallels with other industries
My convictions come from experience. Part of my day job involves dealing with complex financial contracts (structured finance, derivatives, etc). I've encountered a lot of extremely smart, talented people with scary recall and sometimes painful accuracy in pointing out the consequences of sloppy drafting or architecting. What really strikes me about the best ones, however, is their ability to analyse a lower-order problem (i.e. an immediate problem of messy language or drafting) and critique it in terms of higher-order fundamentals. In other words - the landmine you were about to step on was not obvious on the face of the document, and not explicit in know-how articles or practitioner textbooks. Knowledge of the true fundamentals of how the contracts and law work gave insight which was several degrees of utility greater than just a surface level reading of "it's broken, fix it".
Often the language looked OK and worked in the immediate context (which is why juniors would often miss it). However, if you hadn't studied e.g. the standard form agreements this was all based on, or didn't understand some fundamental and often esoteric industry characteristics and practices, you'd miss the danger. This would create risk in the agreement and, if it was going to be recast in the future, leave some level of technical debt for others to fix.
What's the carry-over to coding? Well, as a software creator you are going to be deficient if you don't occasionally dig down into the guts of what you are doing. If you don't have the requisite knowledge, you are literally incapable of seeing certain kinds of problems and solutions. You have to be comfortable with the capabilities of the language you are coding with, as well as comfortable in the problem-solving space both inside the context of the language, and at an abstract level. Come to the LLM equipped with all of that, and you are doing something fundamentally different from someone typing intended outcomes into ChatGPT and crossing their fingers. I don't want to buy software from the latter.
Part II: How I Do It
My background is in commercial law and finance. I came to programming having worked my way through hundreds if not thousands of abstract problem spaces. So for me it wasn't entirely a question of how to solve issues with code, it was largely a problem of syntax (although I don't want to underplay just how tough it was to actually learn to work with my chosen languages). There were also certain categories of problem created by how a particular language works (e.g. the way you generally do things in Python vs JavaScript) which required a lot of learning and digging on my part. That is where vast amounts of energy were expended despite having access to LLMs.
Your experience may be wildly different to mine. So this section is just intended to illustrate, at a basic level, how I approach constructing larger, more complex applications.
1. Guidance prompt
I tend to work in Cursor, and start every project with a very detailed standalone prompt saved into a text document. This is broken down into sections:
- overview of product or application and outline of purpose of the software
- overview of tech stack
- description of how the relevant pages / feature sets look and work, and then a feature-by-feature implementation plan for each, using bullet points or other structured notation.
In this guidance prompt I aim to be rigorous, methodical, and exhaustive. I am writing it to prevent slop, prevent off-piste improvisation by the LLM, and avoid getting lost on the way to the finish. As mentioned above, this is a critical part of the software design / LLM production process, and it pays to be exhaustive and take time to refine your own thought structures. Time spent on careful planning here should also be devoted to things like database management, preparing for eventual deployment, security, etc., even if these are areas which will have placeholder content for now. Your aim is to really have a blueprint for the project which covers all the bases.
Where data flows or control flows are used, you need to break them down in a structured format using letter or number notation. I've found LLMs respond best to these kinds of structured inputs, often far better than the same concepts merely expressed in words. You should also pay attention to implicit assumptions in your blueprinting - what have you not mentioned explicitly? What is so obvious as to be forgotten? Remember that the LLM only has the context you give it, so it is again worth critiquing your own planning to make sure you give as much guidance as necessary.
The guidance prompt should be part of the codebase and a document that is easily copy-pasted or referenced by the LLM. It should work as the roadmap to which you and the LLM can return if you get lost.
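To make this concrete, a stripped-down skeleton for such a guidance prompt might look something like the following. Everything in square brackets is a placeholder, and the headings are just one way of carving it up - adapt them to your own project:

```text
# Guidance Prompt: [Project Name]

## 1. Overview and purpose
A web application that lets users [core purpose]. Target users: [audience].

## 2. Tech stack
- Frontend: [framework]
- Backend: [framework]
- Database: [database]
- Deployment target: [platform] (placeholder for now)

## 3. Pages / feature sets
### 3.1 Landing page
- a. User arrives at "/" and sees [content]
- b. Clicking [button] routes to [page]

### 3.2 [Feature name]
- a. Data flow: form input -> validation -> API call -> database write
- b. Error states: [list them explicitly]

## 4. Database, security, deployment
[Placeholder sections - still worth sketching now]
```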
2. Structuring
I will then ask the LLM to help me structure the project. This will take different forms depending on what stack / frameworks you are using. I would rely heavily on the relevant documentation and if necessary set out the folder structure required, along with the files required. I have a number of startup scripts in Python which help me spool up certain structures by running the relevant terminal commands once I navigate to my chosen folder. Certain popular frameworks have their own start commands which you can integrate into a startup file modified to your taste to include the relevant front/backend libraries you are comfortable working in.
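For what it's worth, a minimal sketch of what one of those startup scripts can do is below. The folder layout and file names are purely illustrative, not any framework's standard, and a real version would typically also shell out to your framework's own init command:

```python
from pathlib import Path

# Illustrative skeleton only - adjust to your own stack and taste.
SKELETON = {
    "app": ["__init__.py", "routes.py", "models.py"],
    "static": [],
    "templates": ["index.html"],
    "tests": ["test_routes.py"],
}

def scaffold(root: str) -> Path:
    """Create the project folder structure under `root`."""
    base = Path(root)
    for folder, files in SKELETON.items():
        directory = base / folder
        directory.mkdir(parents=True, exist_ok=True)
        for name in files:
            # Create empty placeholder files for the LLM to fill in later.
            (directory / name).touch()
    # A real startup script might also run a framework's own start command
    # here, e.g. via subprocess.run(["npm", "create", ...], check=True).
    return base
```

The point is simply to make project setup a one-command affair, so you never start a build from an inconsistent structure.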
3. Iterative completion
Once my structure is set out in full and I have checked that the application landing page runs (e.g. at the local address), I will then methodically move through my feature sets and begin to implement my plan (after, of course, creating a GitHub repo and pushing the initial state of my codebase). I will pick a set of components, a feature, or a page, and work function-by-function according to my original blueprint. In Cursor you can provide certain files as the codebase context, which helps cut down on redundant code. However, as I work through these low-level items, I will continually question the LLM if it proposes doing something unexpected, unorthodox, or not immediately explicable. You should take a critical approach to what is produced and not be afraid to revert changes that are not appropriate. You have to interrogate the shape of the solution as well as the content - is the LLM unintentionally implementing something misguided, or working without the full background?
This becomes particularly important when your application starts to get more complex. At fragile stages of the build, I will include explicit instructions not to change any code other than what is strictly relevant to the problem at hand. I will sometimes then run code blocks through a differ (e.g. the one I have built at https://code-redliner.netlify.app) to make sure no hidden changes are being introduced.
It is critical that at each stage of amendment you both check that the application functions as intended, and then push the individual feature set or function set to your repo. That way you always have access to the last functional instance of your software. There is always a danger that the LLM will enter a loop where it is unable to solve a problem for a number of reasons - having the code that worked up till that point is incredibly useful.
4. Manual oversight
As you work through your structure methodically, make sure you are judicious in how you use the LLM. Do not give it more work to do than is strictly necessary. If you have to make small, digestible amendments to the code, make them manually instead of burdening the machine. This is because the memory state of the LLM will often differ from your own, and you may find it reversing changes you've made but not mentioned to it, or making amendments based on an outdated internal model of the underlying code. Always make sure to feed the latest code into the LLM in some manner, and be conscientious in checking with a differ.
This kind of manual oversight may be slightly more time consuming than just letting the LLM whirr away, but will avoid some catastrophic screw-ups where hidden bugs are introduced through e.g. naming conventions that will then be frustrating to isolate later in the development process. It will feel like it is slowing you down in the early stages - however it is actually an enormous time-saver in the future.
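If you don't want to round-trip through a web differ every time, Python's standard difflib gives you the same sanity check locally. This is a rough sketch which assumes you have the current and LLM-proposed versions of a file as strings:

```python
import difflib

def review_patch(current: str, proposed: str) -> list[str]:
    """Return a unified diff so hidden changes from the LLM stand out."""
    return list(difflib.unified_diff(
        current.splitlines(keepends=True),
        proposed.splitlines(keepends=True),
        fromfile="current",
        tofile="proposed",
    ))

# Usage: eyeball the diff before accepting the LLM's version wholesale.
current = "def total(items):\n    return sum(items)\n"
proposed = "def total(item_list):\n    return sum(item_list)\n"
for line in review_patch(current, proposed):
    print(line, end="")
```

Even a quick scan of output like this will catch the LLM silently renaming things or touching functions it wasn't asked to touch.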
Conclusion and my final rules
Hopefully this section has been a handy illustration of how I work. I tend to cycle through steps 3 and 4 in my projects until I get to a state of completion I am happy with. I've found that patience is a virtue, and being very sensitive to how you phrase things in your prompts (i.e. exact, clear, expansive) has a material effect on your experience. To wrap up, here are some of my principles I try to apply in my workflow. I'd love to hear yours!
Explicit Is Better Than Implicit
- I stole this one from the Zen of Python. It holds true for LLM instruction as well. The LLM is a machine. Better than hoping for an outcome is to spell it out directly. There is a deeper reason for this than clarity - the value of the solution has some relation to the detail which goes into asking for it, because we are dealing with input and output tokens. The more you enrich your instruction (while remaining focused), the better the outcome will be.
Be Both Imperative And Declarative
- Don't tell the LLM only what you want. Tell it what you want, how to construct what you want, and (reasonably speaking) what you don't want. Abstracting the underlying processes in anything but the most banal tasks risks leaving the details of your application to a stochastic process. You can just ask for things.
Patience In Construction, Patience In Execution
- Build your application on a firm foundation, set out the scaffolding for the parts to come, and then work on the smaller parts. Do not rush through, sketch-build a load of crap, and think you can fix it later. The tooling is just not sophisticated enough to achieve that at the moment. I find it best to break work into modular sections and go brick-by-brick.
How do you work with LLMs? Ping me an email at getmelunens@gmail.com and I'll publish the best solutions in an update post.