We (the Princeton SWE-bench team) built an agent in ~100 lines of code that does pretty well on SWE-bench, you might enjoy it too: https://github.com/SWE-agent/mini-swe-agent
OK that really is pretty simple, thanks for sharing.
The whole thing runs on these prompts: https://github.com/SWE-agent/mini-swe-agent/blob/7e125e5dd49...
Pretty sure you also need about 120 lines of prompting from default.yaml
https://github.com/SWE-agent/mini-swe-agent/blob/7e125e5dd49...
> Your task: {{task}}. Please reply with a single shell command in triple backticks. To finish, the first line of the output of the shell command must be 'COMPLETE_TASK_AND_SUBMIT_FINAL_OUTPUT'.
You’d be surprised at the amount of time wasted because LLMs “think” they can’t do something. You’d be less surprised that they often “think” they can’t do something, but choose some straight ignorant path that cannot work.
There are theoretically impossible things to do, if you buy into only the basics. If you open your mind, anything is achievable; you just need to break out of the box you’re in.
If enough people keep feeding in that we need a time machine, the revolution will play out in all the timelines. Without it, Sarah Connor is lost.
> 1. Analyze the codebase by finding and reading relevant files
> 2. Create a script to reproduce the issue
> 3. Edit the source code to resolve the issue
> 4. Verify your fix works by running your script again
> 5. Test edge cases to ensure your fix is robust
This prompt snippet from your instance template is quite useful. I use something like this for getting out of debug loops:
> Analyse the codebase and brainstorm a list of potential root causes for the issue, and rank them from most likely to least likely.
Then create scripts or add debug logging to confirm whether your hypothesis is correct. Rule out root causes from most likely to least by executing your scripts and observing the output in order of likelihood.
Does this mean it's only useful for issue fixes?
A feature is just an issue. The issue is that the feature isn't complete yet.
When a problem is entirely self-contained in a file, it's very easy to edit it with an LLM.
That's not the case with a codebase, where things are littered around in tune with specific model of organisation the developer had in mind.
Lumpers win again!
https://en.wikipedia.org/wiki/Lumpers_and_splitters
> in tune with specific model of organisation
You wish
Nice, but sad to see the lack of tools. Most of your code is about the agent framework instead of being specific to SWE.
I've built a SWE agent too (for fun), check it out => https://github.com/myriade-ai/autocode
> sad to see lack of tools.
Lack of tools in mini-swe-agent is a feature. You can run it with any LLM no matter how big or small.
I'm trying to understand what it has to do with LLM size? Imho, right tools allow small models to perform better than undirected tool like bash to do everything. But I understand that this code is to show people how function calling is just a template for LLM.
Mini-swe-agent, as an academic tool, is aimed at showing the power of a simple idea and can easily be tested against any LLM. You can go and test it with different LLMs. Tool calls usually didn't work well with smaller LLMs. I don't see many viable alternatives under 7GB, beyond Qwen3 4B, for tool calling.
> right tools allow small models to perform better than undirected tool like bash to do everything.
Interestingly enough, the newer mini-swe-agent was a refutation, at least for very large LLMs, of the hypothesis from the original SWE-agent paper (https://arxiv.org/pdf/2405.15793) that specialized tools work better.
cheers i'll add it in.
What sort of results have you had from running it on its own codebase?
A very similar "how to" guide can be found at https://ampcode.com/how-to-build-an-agent (written by Thorsten Ball). In general Amp is quite interesting - obviously no hidden gem anymore ;-) but great to see more tooling around agentic coding being published. Also because similar agentic approaches will be part of (certain/many?) software suites in the future.
This looks much better, thank you.
Makes sense, the author says he also works at Amp
Ghuntley also works at Amp
Yes
If a picture is usually worth 1000 words, the pictures in this are on a 99.6% discount. What the actual...?
It's a conference workshop; these are the slides from the workshop, and the words are a dictation from the delivery.
That seems like a leaky implementation detail to me, for a published piece.
You should learn to be grateful for what other people do on their own time, demanding nothing from you, for you to benefit from.
On the other hand, when we critique, it is for the benefit of everyone who reads the critique and learns from it.
That's why critique has value. To the original author/artist (if they see it), but also to everyone else who sees it. "Oh, I was going to intersperse text slides with a transcript, but I remember how offputting that was once on HN, so let's skip the slides."
The article in question is written by a person who stands to benefit heavily from AI & agents succeeding. They appear to be a true believer so this isn’t a disparaging comment, but a snake oil salesman would display the same behavior.
Thanks, mate.
Can someone confirm my understanding of how tool use works behind the scenes? Claude, ChatGPT, etc, through the API offer "tools" and give responses that ask for tool invocations which you then do and send the result back. However, the underlying model is a strictly text based medium, so I'm wondering how exactly the model APIs are turning the model response into these different sort of API responses. I'm assuming there's been a fine-tuning step with lots of examples which put desired tool invocations into some sort of delineated block or something, which the Claude/ChatGPT server understand? Is there any documentation about how this works exactly, and what those internal delineation tokens and such are? How do they ensure that the user text doesn't mess with it and inject "semantic" markers like that?
You have the right picture of what’s going on. Roughly:
* The only true interface with an LLM is tokens. (No separation between control and data channels.)
* The model api layer injects instructions on tool calling and a list of available tools into the base prompt, with documentation on what those tools do.
* Tool calling is delineated by special tokens. When a model wants to call a tool, it adds a special block to the response that contains the magic token(s) along with the name of the tool and any params. The api layer then extracts this and forms a structured json response in some tool_calls parameter or whatever that is sent in the api response to the user. The result of the tool coming back from the user through the tool calling api is then encoded with special tokens and injected.
* Presumably, the api layer prevents the user from injecting such tokens themselves.
* SotA Models are good at tool calls because they have been heavily fine-tuned on them, with all sorts of tasks that involve tool calls, like bash invocations. The fine-tuning is both to get them good at tool calls in general, and also probably involves specific tool calls that the model provider wants them to be good at, such as the Claude Sonnet model getting fine-tuned on the specific tools Claude Code uses.
Sometimes it amazes me that this all works so well, but it does. You are right to put your finger on the fine-tuning, as it’s critical for making tool calling work well. Tool calling works without fine-tuning, but it’s going to be more hit-or-miss.
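To make the in-band mechanics concrete, here is a rough Python sketch of what such an API layer might do. The delimiter strings, the tool name, and the JSON shape are illustrative assumptions; real providers use reserved special tokens that ordinary user text cannot produce.

```python
import json
import re

# Hypothetical delimiters standing in for a provider's reserved special tokens.
TOOL_CALL_OPEN = "<|tool_call|>"
TOOL_CALL_CLOSE = "<|/tool_call|>"

def parse_model_output(raw_text: str) -> dict:
    """Split raw decoded output into plain assistant text and structured tool calls."""
    block_re = re.compile(
        re.escape(TOOL_CALL_OPEN) + r"(.*?)" + re.escape(TOOL_CALL_CLOSE), re.DOTALL
    )
    tool_calls = []
    for block in block_re.findall(raw_text):
        call = json.loads(block)  # e.g. {"name": "bash", "arguments": {"command": "ls"}}
        tool_calls.append({"name": call["name"], "arguments": call["arguments"]})
    text = block_re.sub("", raw_text).strip()  # everything outside the blocks is plain text
    return {"content": text, "tool_calls": tool_calls}

raw = (
    "Let me check the repository first.\n"
    '<|tool_call|>{"name": "bash", "arguments": {"command": "ls -la"}}<|/tool_call|>'
)
print(parse_model_output(raw))
```

The other half of the story is that the serving layer escapes or strips those delimiters from user-supplied text, which is what keeps prompt content from hijacking the control channel.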
Here's some docs from anthropic about their implementation
https://docs.anthropic.com/en/docs/agents-and-tools/tool-use...
The disconnect here is that models aren't really "text" based, but token-based, much like how compilers don't work on the code itself but on a series of tokens that can include keywords, brackets, and other things. The output can include words but also metadata.
> I'm assuming there's been a fine-tuning step with lots of examples which put desired tool invocations into some sort of delineated block or something, which the Claude/ChatGPT server understand?
As far as I know that's what's happening. They are training it to return tool responses when it's unsure about the answer or instructed to do so. There are generic tool trainings for just following the response format, and then probably there are some tool-specific trainings. For instance, gpt-oss loves to use the search tool, even if it's not mentioned anywhere. Anthropic lists well-known tools in their documentation (e.g. text_editor, bash). They are likely to have been trained specifically to follow some deeper semantics compared to just generic tool usage.
The whole thing is pretty brittle and tool invocations are just taking place via in-band signalling, delineated by special tokens or token sequences.
All these images make it impossibly hard to read... gd scroll simulator
> You just keep throwing tokens at the loop, and then you've got yourself an agent.
Money. Replace "tokens" with "money". You just keep throwing money at the loop, and then you've got yourself an agent.
Who says that tokens are money? Local models are getting really good. For now, yes, if you want the best outcomes, you need to purchase tokens. But in the future, that may not be the case.
I'd argue that local models still cost money, albeit less than the vendors would charge. Unless you happen to live off-grid and get your own electricity for free. I suppose there are free tiers available that work for some things as well.
But with edge-case exceptions aside, yes, tokens cost money.
> Local models are getting really good.
They are great for basic tasks like summarization and translation, but for the best results from coding agents, and for the 90% of so-called AI startups who are using these APIs, everyone is purchasing tokens.
No different to operating a slot machine aimed at vibe-coders, who are the AI companies' favourite type of customer: spending endless amounts of money on tokens for another spin at fixing an error they don't understand.
Why are any of the tools beyond the bash tool required?
Surely listing files, searching a repo, editing a file can all be achieved with bash?
Or is this what's demonstrated by https://news.ycombinator.com/item?id=45001234?
Technically speaking, you can get away with just a Bash tool, and I had some success with this. It's actually quite interesting to take away tools from agents and see how creative they are with the use.
One of the reasons why you get better performance if you give them the other tools is that there has been some reinforcement learning on Sonnet with all these tools. The model is aware of how these tools work, it is more token-efficient and it is generally much more successful at performing those actions. The Bash tool, for instance, at times gets confused by bashisms, not escaping arguments correctly, not handling whitespace correctly etc.
> The model is aware of how these tools work, it is more token-efficient and it is generally much more successful at performing those actions.
Interesting! This didn't seem to be the case in the OP's examples - for instance using a list_files tool and then checking if the json result included README vs bash [ -f README ]
> Interesting! This didn't seem to be the case in the OP's examples - for instance using a list_files tool and then checking if the json result included README vs bash [ -f README ]
There is no training on a tool with that name. But it likely also doesn't need training because the parameter is just a path and that's a pretty basic tool.
On the other hand to know how to execute a bash command, you need to know bash. Bash is a known tool to the Claude models [1] and so is text editing [2]. You're supposed to reference those in the tool listing but at least from my testing, the moment you call a tool "bash", Claude makes plenty of assumptions about what the point of this thing is.
[1]: https://docs.anthropic.com/en/docs/agents-and-tools/tool-use...
[2]: https://docs.anthropic.com/en/docs/agents-and-tools/tool-use...
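As a concrete illustration of the custom-tool path being contrasted here, a minimal sketch using the Anthropic Python SDK; the model string and tool description are assumptions, and the fine-tuned built-in bash/text-editor tool types are the ones documented at the links above.

```python
import subprocess
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# A custom tool that merely happens to be named "bash"; the built-in,
# fine-tuned bash tool type from the docs above is a separate thing.
bash_tool = {
    "name": "bash",
    "description": "Run a shell command in the project directory and return its output.",
    "input_schema": {
        "type": "object",
        "properties": {"command": {"type": "string", "description": "Command to run."}},
        "required": ["command"],
    },
}

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # assumption: use whichever Sonnet model you have access to
    max_tokens=1024,
    tools=[bash_tool],
    messages=[{"role": "user", "content": "Is there a README in this repo?"}],
)

# In a full agent loop you would send the output back as a tool_result block;
# here we just run the command and print what it returned.
for block in response.content:
    if block.type == "tool_use" and block.name == "bash":
        result = subprocess.run(
            block.input["command"], shell=True, capture_output=True, text=True
        )
        print(result.stdout or result.stderr)
```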
Having separate tools is simpler than having everything go through bash.
If everything goes through bash then you need some way to separate always safe commands that don't need approval (such as listing files), from all other potentially unsafe commands that require user approval.
If you have listing files as a separate tool then you can also enforce that the agent doesn't list any files outside of the project directory.
> you need some way to separate always safe commands that don't need approval (such as listing files), from all other potentially unsafe commands that require user approval.
This is a very strong argument for more specific tools, thanks!
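A rough sketch of what that policy might look like; the tool names, the allowlist, and the confirmation prompt are illustrative rather than taken from any particular framework.

```python
from pathlib import Path

PROJECT_ROOT = Path.cwd().resolve()
SAFE_TOOLS = {"list_files", "read_file", "search_code"}  # read-only, auto-approved

def list_files(path: str = ".") -> list[str]:
    """Example safe tool: refuses to look outside the project directory."""
    target = (PROJECT_ROOT / path).resolve()
    if not target.is_relative_to(PROJECT_ROOT):  # Python 3.9+
        raise ValueError("refusing to list files outside the project directory")
    return sorted(str(p.relative_to(PROJECT_ROOT)) for p in target.rglob("*") if p.is_file())

def confirm(prompt: str) -> bool:
    return input(f"{prompt} [y/N] ").strip().lower() == "y"

def dispatch(tool_name: str, args: dict, handlers: dict):
    """Run safe tools immediately; everything else waits for user approval."""
    if tool_name not in SAFE_TOOLS and not confirm(f"Run {tool_name} with {args}?"):
        return {"error": "denied by user"}
    return handlers[tool_name](**args)
```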
Yeah, you could get away with a coding agent just using the Bash tool and the Edit tool (tbh somewhat optional but not having it would be highly inefficient). I haven't tried it, but it might struggle with the code search functionality. It would be possible with the right prompting. For example, you could just prompt the LLM to say "If you need to search the source code, use ripgrep with the Bash tool."
> Edit tool (tbh somewhat optional but not having it would be highly inefficient)
If you need to edit the source, just use patch with the bash tool.
What's the efficiency issue?
Why do humans need an IDE when we could do anything in a shell? An interface gives you the information you need at a given moment and the actions you can take.
To me a better analogy would be: if you're a household of 2 who own 3 reliable cars, why would you need a 4th car with smaller cargo & passenger capacities, higher fuel consumption, worse off-road performance and lower top speed?
>Why are any of the tools beyond the bash tool required?
My best guess is they started out with a limited subset of tools and realised they can just give it bash later.
This is explained in 3.2 How to design good tools?
I'm not sure where this quote is from - it doesn't seem to appear in the linked article.
ahh, sorry, different article :(
Instead of writing about how to build an agent, show us one project that this agent has built.
I'd love to see you build your own agent and then share it here in HN as a show HN.
Thank you for sharing.
And remember to avoid feeding the trolls.
has this agent fully built anything? that is a pretty straight forward question that you should be expected to answer when submitting something like this to HN.
You haven't built anything. You are just a grifter spinning words in desperate need for attention. No one will ever use your "product" because it's useless. You know this and yet you keep trying to hustle the ignorant. Keep boosting yourself with alts.
Can someone please explain the axes: Oracle, Agent, high safety and low safety?
I had a go at this using the on-device models in Edge and Chrome, phi4-mini and Gemini Nano; it worked surprisingly well for such small models.
https://ryanseddon.com/ai/how-to-build-an-agent-on-device/
Very simplistic view on the problem domain IMHO. Yah sure we can add a bunch of functions... ok. But how about snapshotting (or at least work with git), sandboxing both process and network level, prompt engineering, detect when stuck, model switching with parallel solvers for better solutions. These are the kind of things that make coding agents reliable - not function declarations.
It will be included as part of the third instalment. I write these coding agents for a living. Need to start with the basics, as the basics are what people need to know to be able to automate functions at their employer, which may not be coding agents. This workshop was delivered at a data engineering conference, for example.
I hate to do meta-commentary (the content is a decent beginner level introduction to the topic!), but this is some of the worst AI-slop-infused presentation I've seen with a blog post in a while.
Why the unnecessary generated AI pictures in between?
Why put everything that could have been a bullet point into its own individual picture (even if it's not AI generated)? It's very visually distracting, breaks the flow of reading, and it's less accessible as all the pictures lack alt-text.
---
I see that it's based on a conference talk, so it's possibly just 1:1 the slides. If that's the case please put it up in its native conference format, rather than this.
Wow. Yeah. That's unreadable - my frustration and annoyance levels got high fast, had to close the page before I went for the power button on my machine :)
Agreed. It's unreadable.
The problem I have with this is that this style of agent design, providing enormous autonomy, makes sense in coding while keeping an expert human in the loop since it can self-correct via debugging. What would the other use cases of giving an agent this much autonomy be today versus a more structured flow versus something more like LangGraph?
Building a coding agent involves defining clear goals, leveraging AI, and iterating based on feedback. Start with a simple task and scale up.
Yep, once you've got the base coding agent (as in the workshop above), you can use it to build another agent or anything really. You start from that kernel and you can bootstrap upwards from that point forward and build anything.
Nitpicking. What the author calls sequence diagrams are not that. They are flowcharts.
The trick with a coding agent is guiding its attention towards tasks it can expect will fit in the agent's token window, and deciding when to delegate. Funny, as a PM you have the exact same problem.
Yep. What you need to do is set its direction and then blow wind into its sails.
what's the best current cli (with a non interactive option) that is on par with Claude code but can work with other llms like ollama, openrouter etc? I tried stuff like aider but it cannot discover files, the open source gemini one but it was terrible; what is a good one that maybe is the same as CC if you plug in Opus?
Opencode is pretty good and likely meets your needs. One thing I'll call out is Gemini is terrible as an agent currently because Gemini is not a very good tool calling LLM. It's an oracle. https://ghuntley.com/cars/
Haven’t tried many but the LLM cli seems alright to me
The “valley of it will take our jobs” is an approaching light in the train tunnel.
I live in the “valley”. I battle depression daily that I had before LLMs.
Using LLMs and false guardrails to watchdog inherently deceitful output is a bad system smell.
I know most are “on it”, and I’ve written a coding agent.
But why is this page designed like some brainwashing repetitive Orwellian mantra?
If it’s perceived that we need that, then we’re having to overcome something, and that something is common sense.
So maybe we’ll happily write our coding agents with the intent to stand on the shoulders of a giant.
But everyone knows we’re building the technological equivalent of a crystal meth empire.
So what do you want to do about it? It can’t be stopped.
Even positive things like nuclear energy have been stopped in Germany for example, against an industry lobby that was almighty in the 1980s.
Negative things like IP stealing "AI" can be stopped as well, and the population is increasingly watchful and will organize itself at some point.
Exactly my approach to gaining knowledge and learning through building your own (`npx genaicode`). When I was presenting my work at a local meetup I got this exact question: "why u building this instead of just using Cursor". The answer is explained in this article (tl;dr: transformative experience), even though some parts of it are already outdated or will be outdated very soon as the technology is making progress every day.
Exactly, dude. This is the most important thing, the fundamentals to understand how this stuff works under the hood. I don't get how people aren't curious. Why aren't people being engineers? This is one of the most transformative things to happen in our profession in the last 20 years.
For me, the post is missing an explanation of the reason why I would want to build my own coding agent instead of just using one of the publicly available ones.
Knowing how to build your own agent, and what that loop is, is going to be the new whiteboard coding question in a couple of years. Absolutely. It's going to be the same as "Reverse this string", "I've got a linked list, can you reverse it?", or "Here's my graph, can you traverse it?"
I see, thanks. I was wondering earlier if there would be any practical advantage in creating a custom agent, but couldn't think of any. I guess I simply misunderstood the purpose of your post.
You wouldn't.
This project and this post are for the curious and for the learners.
Where is the program synthesis? My way of thinking is: given primitives as tools, I want the model to construct and return the program to execute.
Of course following nix philosophy is another way.
Sonnet does this via the edit tool and bash tool. It’s inbuilt to the model.
Interesting.
Keep an eye out for Sonnet generating Python files. What typically happens is: let's say you had a refactor that needs to happen, and let's say 100 symbols need renaming. Instead of invoking the edit tool 100 times, Sonnet has this behaviour where it will synthesise a Python program and then execute it to do it all in one shot.
I wonder how far I could go with a barebone agent prompted to take advantage of this with Sonnet and the Bash tool only, so that it will always try to use the tool to only do `python -c …`
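For illustration, a rough sketch of the kind of throwaway script this behaviour produces; the symbol names and the `src` glob are made up, and a real refactor would want something AST-aware rather than a bare regex.

```python
# One-shot rename of the kind Sonnet tends to synthesise instead of issuing
# a hundred individual edit-tool calls. Symbol names here are hypothetical.
import re
from pathlib import Path

OLD, NEW = "old_symbol", "new_symbol"
pattern = re.compile(rf"\b{re.escape(OLD)}\b")

for path in Path("src").rglob("*.py"):
    text = path.read_text()
    updated, count = pattern.subn(NEW, text)
    if count:
        path.write_text(updated)
        print(f"{path}: {count} replacement(s)")
```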
“will transform you from being a consumer to a producer”
You mean, “will teach you to make REST calls instead of letting a script do it for you”.
We are all consumers. Unless, of course, you work at one of three companies.
I really think the current trend of CLI coding agents isn't going to be the future. They're cool but they are _too simple_. Gemini CLI often makes incorrect edits and gets confused, at least on my codebase. Just like ChatGPT would do in a longer chat where the context gets lost: random, unnecessary and often harmful edits are made confidently. Extraneous parts of the codebase are modified when you didn't ask for it. They get stuck in loops for an hour trying to solve a problem, "solving it", and then you have to tell the LLM the problem isn't solved, the error message is the same, etc.
I think the future will be dashboards/HUDs (there was an article on HN about this a bit ago and I agree). You'll get preview windows, dynamic action buttons, a kanban board, status updates, and still the ability to edit code yourself, of course.
The single-file lineup of agentic actions with user input, in a terminal chat UI, just isn't gonna cut it for more complicated problems. You need faster error reporting from multiple sources, you need to be able to correct the LLM and break it out of error loops. You won't want to be at the terminal even though it feels comfortable because it's just the wrong HCI tool for more complicated tasks. Can you tell I really dislike using these overly-simple agents?
You'll get a much better result with a dashboard/HUD. The future of agents is that multiple of them will be working at once on the codebase and they'll be good enough that you'll want more of a status-update-confirm loop than an agentic code editing tool update.
Also required is better code editing. You want to avoid the LLM making changes in your code unrelated to the requested problem. Gemini CLI often does a 'grep' for keywords in your prompt to find the right file, but your prompt was casual and doesn't contain the right keywords so you end up with the agent making changes that aren't intended.
Obviously I am working in this space so that's where my opinions come from. I have a prototype HUD-style webapp builder agent that is online right now if you'd like to check it out:
https://codeplusequalsai.com/
It's not got everything I said above - it's a work-in-progress. Would love any feedback you have on my take on a more complicated, involved, and narrow-focus agentic workflow. It only builds flask webapps right now, strict limits on what it can do (no cron etc yet) but it does have a database you can use in your projects. I put a lot of work into the error flow as well, as that seems like the biggest issue with a lot of agentic code tools.
One last technical note: I blogged about using AST transformations when getting LLMs to modify code. I think that using diffs or rewriting the whole file isn't the right solution either. I think that having the LLM write code that modifies your code and then running that code to effect the modifications is the way forward. We'll see I guess. Blog post: https://codeplusequalsai.com/static/blog/prompting_llms_to_m...
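As a toy sketch of that idea using only Python's standard-library ast module: the LLM emits a small transformer like this and the harness runs it over the target source. The function being renamed is hypothetical.

```python
import ast

class RenameFunction(ast.NodeTransformer):
    """Rename a function definition and direct calls to it."""
    def __init__(self, old: str, new: str):
        self.old, self.new = old, new

    def visit_FunctionDef(self, node: ast.FunctionDef):
        if node.name == self.old:
            node.name = self.new
        self.generic_visit(node)
        return node

    def visit_Call(self, node: ast.Call):
        if isinstance(node.func, ast.Name) and node.func.id == self.old:
            node.func.id = self.new
        self.generic_visit(node)
        return node

source = "def fetch_data():\n    return 1\n\nprint(fetch_data())\n"
tree = RenameFunction("fetch_data", "get_data").visit(ast.parse(source))
print(ast.unparse(tree))  # ast.unparse needs Python 3.9+
```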
>Gemini CLI often makes incorrect edits and gets confused
Gemini CLI still uses an archaic whole-file format for edits; it's not a good representative of the current state of coding agents.
I'm not sure what you mean by "whole file format", but if it refers to the write_file tool that overwrites the whole file, there is also the replace tool, which is apparently inspired by a blog post [1] by Anthropic. It seems that Claude Code also supports a roughly identical tool (inferred from error messages), so editing tools can't be the reason why Claude Code is good.
[1] https://www.anthropic.com/engineering/swe-bench-sonnet
Many agents can send diffs. Whole file reading and writing burns tokens and pollutes context.
The replace tool is a form of diff (although a rudimentary one), and the read_file tool can be called with line ranges. I do wish for more robust patching, but it is not "whole" file reading/writing. Maybe you meant subagent file handling? I can agree with that.
(Also, I think Gemini is significantly better when it comes to context rot; in my experience 100K-300K tokens were required for symptoms to appear. So burning tokens is less problematic with Gemini.)
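For anyone unfamiliar with the replace-style editing under discussion, a minimal sketch of such a tool; the function shape and error strings are made up, but refusing zero or multiple matches (so the model has to quote enough surrounding context) follows the approach described in the Anthropic post linked above.

```python
from pathlib import Path

def str_replace(path: str, old: str, new: str) -> str:
    """Edit a file by replacing one exact occurrence of `old` with `new`."""
    text = Path(path).read_text()
    count = text.count(old)
    if count == 0:
        return "error: old text not found; re-read the file and try again"
    if count > 1:
        return f"error: old text matched {count} times; include more surrounding context"
    Path(path).write_text(text.replace(old, new, 1))
    return "ok"
```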
Oh that's wild, I did suspect that but didn't know it outright. Mind-blowing Google would release that kind of thing, I had wondered why it sucked so much haha. Okay so what is a good representation of the current state of coding agents? Which one should I try that does a better job at code modifications?
Claude code is the strongest atm, but roocode or cline (vscode extensions) can also work well. Roo with gpt5-mini (so cheap, pretty fast) does diff based edits w/ good coordination over a task, and finishes most tasks that I tried. It even calls them "surgical diffs" :D
claude code (with max subscription), cursor-agent (with usage based pricing)
You are wasting your time and everyone else's with Gemini; it is the worst.
Oh I don’t use Gemini! I did try it out and admittedly formed an opinion too narrow on cli agents. But no way do I actually use Gemini.
How does opencode use my Claude Code subscription instead of making me use an API key?
That's one of the things stopping me from rolling my own: having to use a pay-per-use API.
It simply spoofs itself as Claude Code when calling the API. Anthropic will shut this down the second it benefits them to do so. Like much of the gravy train right now, enjoy it while it lasts.
It works with Claude Pro/Max subscriptions. https://opencode.ai/docs/#configure
Anyone can build a coding agent which works on a) a fresh code base, b) an unlimited token budget.
Now build it for an old codebase; let's see how precisely it edits or removes features without breaking the whole codebase.
Let's see how many tokens it consumes per bug fix or feature addition.
This comment belongs in a discussion about using LLMs to help write code for large existing systems - it's a bit out of place in a discussion about a tutorial on building coding agents to help people understand how the basic tools-in-a-loop pattern works.
Anyone who has used those coding agents can already see how they work: you can usually see the agent fetching files, running commands, listing files and directories.
I just wrote this comment so people aren't under the false belief that this is pretty much all coding agents do; making all this fault-tolerant with good UX is a lot of work.
> making all this fault-tolerant with good UX is a lot of work.
Yes, it is. Not only in the department of good UX design, but also because these LLMs keep evolving. They are software with different versions, and these different versions are continually deployed, which changes the behavior of the underlying model. So the harness needs to be continually updated to remain competitive.
Agree. To reduce costs:
1. Precompute frequently used knowledge and surface it early. For example: repository structure, OS information, system time.
2. Anticipate next tool calls. If a match is not found while editing, instead of simply failing, return closest matching snippet. If read file tool gets a directory, return directory contents.
3. Parallel tool calls. Claude needs either a batch tool or special scaffolding to promote parallel tool calls. Single tool call per turn is very expensive.
Are there any other such general ideas?
That info can just be included in the prefix, which is cached by the LLM provider, reducing cost by 70-80% on average. System time varies, so it's not a good idea to specify it in the prompt; better to make a function out of it to avoid cache invalidation.
I am still looking for a good "memory" solution, so far running without it. Haven't looked too deep into it.
Not sure how the next tool call can be predicted.
I am still using serial tool calls as I do not have any subagents; I just use fast inference models for direct tool calls. It works so fast, I doubt I'll benefit from parallel anything.
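Putting those points together, a rough sketch of a cache-friendly split: stable facts go into a static prefix the provider can cache, while volatile values like the current time are fetched through a tool so they never invalidate that prefix. The names and the 200-entry cap are illustrative.

```python
import datetime
import platform
import subprocess

def repo_structure(max_entries: int = 200) -> str:
    """Precomputed once per session; stable enough to live in the cached prefix."""
    files = subprocess.run(
        ["git", "ls-files"], capture_output=True, text=True
    ).stdout.splitlines()
    return "\n".join(files[:max_entries])

STATIC_PREFIX = (
    "You are a coding agent.\n"
    f"OS: {platform.system()} {platform.release()}\n"
    "Repository files:\n"
    f"{repo_structure()}\n"
)

def current_time() -> str:
    """Exposed as a tool instead of being baked into the prompt."""
    return datetime.datetime.now(datetime.timezone.utc).isoformat()
```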
There's "swe re-bench", a benchmark that tracks model release dates, and you can see how a model did on "real-world" bugs that were submitted on GitHub after the model was released (obviously this works best for open models).
There are a few models that solve 30-50% of (new) tasks pulled from real-world repos. So ... yeah.
Surprise: as a rambunctious dev who's socially hacked their way through promotions, I will just convince our manager we need to rewrite the platform in a new stack, or convince them that I need to write a new server to handle the feature. No old tech needed!