> The amount of cognitive overhead in this deceptively simple log is several levels deep: you have to first stop to type logger.info (or is it logging.info? I use both loguru and logger depending on the codebase and end up always getting the two confused.) Then, the parentheses, the f-string itself, and then the variables in brackets. Now, was it your_variable or your_variable_with_edits from five lines up? And what’s the syntax for accessing a subset of df.head again?
What you're describing is called: programming. This can't be serious. What about the cognitive overhead of writing a for loop? You have to remember what's in the array you're iterating over, how that array interacts with maybe other parts of the code base, and oh man, what about those pesky indices! Does it start at 0 or 1? I can't take it! AI save me!
One of the things I love about computing and computer science is how the wide variety of tools available, built over multiple generations, provides people with the leverage to bring their highly complex ideas to life. No matter how they work best, they can use those tools to keep their minds focused on larger goals with broader context, without yak-shaving every hole punched in a punch card.
You see a person whose conception of programming is different from yours; I see a person who's finding joy in the act of creating computer programs, and who will be able to bring even more of their ideas to life than they would have beforehand. That's something to celebrate, I think.
> What you're describing is called: programming.
And once you have enough experience, you realize that maintaining your focus and managing your cognitive workload are the key levers that affect productivity.
But, it looks like you are still caught up with iterating over arrays, so this realization might still be a few years away for you.
> What you're describing is called: programming.
Is that the part of programming that you enjoy? Remembering logger vs logging?
For me, I enjoyed the technical challenges, the design, solving customer problems, all of that.
But in the end, focus on the parts you love.
This is a sign that the user hasn't taken the time to set up their tools. You should be able to type log and have it tab complete because your editor should be aware of the context you're in. You don't need a fuzzy problem solver to solve non-fuzzy problems.
> user hasn't taken the time to set up their tools
The user has, in fact, set up a tool for the task: an "AI model". Unless you're saying one tool is better than others?
Then it's a real bad case of wielding the LLM hammer and thinking everything is a nail. If you're truly using transformer inference to auto-fill variables when your LSP could do that with orders of magnitude less power usage and a 100% success rate (given that it's parsed the source tree and knows exactly what variables exist, etc.), I'd argue that that tool is better.
Of course LLMs can do a lot more than variable autocomplete. But all of the examples given are things that are removing cognitive overhead that probably won't exist after a little practice doing it yourself.
This. Set up your dev env and pay attention to details and get it right. Introducing probabilistic codegen before doing that is asking for trouble before you even really get started accruing tech debt.
You say "probabilistic" as if some kind of gotcha. The binary rigidness is merely an illusion that computers put up. At every layer, there's probabilistic events going on.
- Your hot path functions get optimized, probabilistically
- Your requests to a webserver are probabilistic, and most of the systems have retries built in.
- Heck, 1s and 0s operate in a range, with error bars built in. It isnt really 5V = 1 and 0V = 0.
Just because YOU dont deal with probabilistic events while programming in rust, or python doesnt mean it is inherently bad. Embrace it.
We're comparing this to an LSP or IntelliSense type of system; how exactly are those probabilistic? Maybe they crash or leak memory every once in a while, but that's true of any software, including an inference engine... I'm much more worried about the fact that I can't guarantee that if I type half of a variable name, it'll know exactly what I'm trying to type. It would be like preparing to delete a line in vim and it predicting you want to delete the next three. Even if it's right 90% of the time, you have to verify its output. It's nothing like a compiler, spurious network errors, etc. (which still exist even with another layer of LLM on top).
Honestly, I've used a fully set up Neovim for the past few years, and I recently tried Zed and its "edit prediction," which predicts what you're going to modify next. I was surprised by how nice that felt — instead of remembering the correct keys to surround a word or line with quotes, I could just type either quotation mark, and the edit prediction would instantly suggest that I could press Tab to jump to the location for the other quote and add it. And not only for surrounding quotes, it worked with everything similar with the same keys and workflow.
Still prefer my neovim, but it really made me realize how much cognitive load all the keyboard shortcuts and other features add, even if they feel like muscle memory at this point.
I have seen people suggesting that it's OK that our codebase doesn't support deterministically auto-adding the import statement of a newly-referenced class "because AI can predict it".
I mean, sure, yes, it can. But drastically less efficiently, and with the possibility of errors. Where the problem is easily soluble, why not pick the solution that's just...right?
I don't want to wade into the debate here, but by "their tools" GP probably meant their existing tools (i.e. before adding a new tool), and by "a fuzzy problem solver" was referring to an "AI model".
In my experience consistency from your tools is really important, and AI models are worse at it than the more traditional solutions to the problem.
I know old timers who think auto-completion is a sign of a lazy programmer. The wheel keeps turning....
> Is that the part of programming that you enjoy? Remembering logger vs logging?
If you're proficient in a programming language then you don't need to remember these things, you just do it, much like spoken language.
This isn't a language thing, it's a project thing. Language things I can do fluently (like the example of a for loop in the OP comment... lol). But I work on so many different projects that it's impossible to keep this kind of dependency context fresh in my head. And I think that's fine? I'm more than happy to delegate that kind of stuff.
I find there is a limit to the number of programming languages I can stay actively proficient in at any given time.
I am using a much wider range of languages now that I have LLM assistance, because I am no longer incentivized to stick to a small number that are warm in my mental cache.
>> What you're describing is called: programming.
> Is that the part of programming that you enjoy? Remembering logger vs logging?
If a person cannot remember what to use in order to define their desired solution logic (how do I make a log statement again?), then they are unqualified to implement same.
> But in the end, focus on the parts you love.
Speaking only for myself, I love working with people who understand what they are doing when they do it.
> Is that the part of programming that you enjoy? Remembering logger vs logging?
No, but I genuinely like writing informative logs. I have been in production support roles, and boy does the lack of good logging (or barely any logs at all!) suck. I prefer print-style debugging and want my colleagues on the support side to have the same level of convenience.
Not to mention the advantages of being able to search through past logs for troubleshooting and analysis.
I like building stuff - I mean like construction, renovations. I like figuring out how I need to frame something, what order, what lengths and angles to cut. Obviously I like making something useful, but the mechanics are fun too.
I actually take pride in the logs I write because I write good ones with exactly the necessary context to efficiently isolate and solve problems. I derive a little bit of satisfaction from closing bugs faster than my colleagues who write poor logs.
Yeah, it's also surprising because the user really shouldn't be using f-strings for logging, since they get interpolated whether or not the log level is set to INFO. This matters most when writing, say, debug logs that run inside hot loops, which incur a significant performance penalty by converting lots of data to its string representation.
But sure, vibe away.
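For anyone who hasn't hit this: a minimal sketch of the difference (the logger name and data are made up). The f-string is built even when the record is filtered out, while the %-style call defers formatting to the logging machinery.

    import logging

    logging.basicConfig(level=logging.WARNING)
    logger = logging.getLogger(__name__)
    rows = list(range(10_000))

    # Eager: the f-string (including str(rows)) is evaluated even though
    # DEBUG is below the configured level and the record is dropped.
    logger.debug(f"processed {len(rows)} rows: {rows}")

    # Lazy: the args are stashed on the record; formatting only happens
    # if the record is actually emitted, so this line costs almost nothing.
    logger.debug("processed %d rows: %s", len(rows), rows)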
Format strings are very useful, so I'd suggest fixing the language to let you use them. You don't have to live with it interpreting them too early!
Even better, you should be interpreting them at time of reading the log, not when writing it. Makes them a lot smaller.
The thing is, logging calls already accept variable arguments that do pretty much what people use f-strings in logging calls for, except better. People see f-strings, they like f-strings, and they end up in logs; that's really all there is to it.
f""-strings for logging is an example of "practicality beats purity"
Yes, f""-strings may be evaluated unnecessarily (perhaps, t-strings could solve it). But in practice they are too convenient. Unless profiler says otherwise, it may be ok to use them in many circumstances.
yep, putting user input into the message to be interpolated is asking for trouble
in C this leads to remote code execution (%n and friends)
in Java (with log4j) this previously led to remote code execution (despite being memory safe)
why am I not surprised the slop generator suggests it
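Python's failure mode is tamer than C's %n, but the same pattern still bites; a sketch with made-up data:

    import logging

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("demo")

    user_input = "100% legit %s"  # unlucky or hostile data

    # Safe: user data travels as an argument, never as the template.
    log.info("user said: %s", user_input)

    # Trouble: user data lands in the template while args are present, so
    # %-formatting runs over attacker-controlled text; here it raises, and
    # logging prints a "--- Logging error ---" traceback instead of the line.
    log.info("user said: " + user_input, "extra")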
One thing that has become abundantly clear from the AI craze is how many people - who do programming for a living - really don't like programming. I don't really understand why they got into the field; to be honest, it seems kind of like someone who doesn't like playing the guitar embarking on a career as a guitarist. But regardless of the reasons they seem to be pretty happy for a chance to not have to program any more.
Do you like 'solving problems' or do you like 'getting into the weeds'? Both are valid, and both are common uses of programming.
When I was younger, I loved 'getting into the weeds'. 'Oh, the audio broke? That gives me a great chance to learn more about ALSA!' Now that I'm older, I don't want to learn more about ALSA; I've seen enough. I'm now more in camp 'solving problems': I want the job done and the task successfully finished to a reasonable level of quality, and I don't care which library or data structure was used to get the job done. (Both camps obviously overlap; many issues require getting into the weeds.)
In this framework, the promise of AI is great for camp 'solving problems' (yes yes hallucinations etc.), but horrible for camp 'getting into the weeds'. From your framing you sound like you're from camp 'getting into the weeds', and that's fine too. But I can't say camp 'solving problems' doesn't like programming. Lot of carpenters out there who like to build things without caring what their hammer does.
I like solving problems. But I also want the problem to stay solved. And if I happen to see a common pattern between problems, then I build a solution generator.
Maybe because I don’t think in terms of code. I just have this mental image that is abstract, but is consistent. Code is just a tool to materialize it, just like words are a tool to tell a story. By the time I’m typing anything, I’m already fully aware of my goals. Designing and writing are two different activities.
Reading LLM code is jarring because it changes the pattern midway. Like an author smashing the modern world and Middle-earth together. It's like writing an urban fantasy and someone keeps interrupting you with hard science-fiction ideas.
I spent a very long time getting into the weeds to learn everything about computer architecture, because at the time it seemed like the only way to do it and I wanted to have a career. In the meantime, social media, cloud hosting, and StackOverflow were invented, it became much easier for people to write online, and it turned out I didn't need to do any of that, because the actual authors have all explained it themselves online.
Though, doing this is still the right way to learn how to debug things!
NB: I actually just realized I never understood a specific bit of image-processing math after working on ffmpeg for years, asked a random AI, and got a perfectly clear explanation of it.
Exactly this. I still "get into the weeds" without AI if I really need to dig into learning something new or if I want to explore some totally new idea (LLMs don't really do "totally new"). If I'm debugging a CRUD app, though... eh, it's sunny outside and I only have a couple more hours of daylight, so, AI it is.
One thing that has become abundantly clear from the AI craze is how many people who do programming for a living are actively hostile to fascinating new applications of computer science that open up entirely new capabilities and ways of working.
Yes but cognitive load is a real thing. Being free to not think about the proper format to log some generic info about the state of the program might seem like a small thing, but remember, that frees up your mind to hold other concerns. See well-trodden research that the human mind can hold roughly three to five meaningful items in working memory at once. When in the flow of programming, you probably have a complicated unconscious process of kicking things out of working memory and re-acquiring them by looking at the code in front of you. I think the author is correctly observing that they are getting the benefit of not having to evict something from their mental cache to remember how logging works for this particular project (especially egregious if you work on 10 codebases and they each use a different logger).
I can totally write the logging code myself, but it's tedious formatting the log messages "nicely". In my experience, AI will write a nice log message and capture relevant variables automatically, unlike my handwritten statements, where I inevitably have to make a second pass to include a critical value I missed.
I think we need to abandon this idea of writing code like a scribe copying a book. There’s a plethora of tools ready to help you take advantage of the facts
- that the code itself is an interlinked structure (LSPs and code navigation),
- that the syntax is simple and repetitive (snippets and generators),
- that you are using a very limited set of symbols (grep, find and replace, contextual docs, completion)
- and that files are a tool for organization (emacs and vim buffers, split layout in other editors).
Your editor should be a canvas for your thinking, not an assembly line workspace where you only type code out.
The author clearly dislikes writing logging code enough to put the work into creating a fine-tuned model for the purpose.
I thought “making tools to automate work” was one of the key uses of a computer but I might be wrong
For me it's the opposite: I know exactly how to write log lines. It's just tedious. AI autocompletes pretty much what I would have written.
Also logging is important! Send structured logs if possible. Make sure structure is consistent between logs. You may have to reach for some abstraction or metaprogramming to do this.
All logs can be a message plus an object, with no need to format anything.
That said, AI saves typing time.
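A minimal stdlib sketch of that message-plus-object shape; JsonFormatter and the ctx key are illustrative names, not a standard API:

    import json
    import logging

    class JsonFormatter(logging.Formatter):
        # Render each record as one JSON object: constant message plus context.
        def format(self, record):
            payload = {"msg": record.getMessage(), "level": record.levelname}
            payload.update(getattr(record, "ctx", {}))
            return json.dumps(payload)

    log = logging.getLogger("orders")
    handler = logging.StreamHandler()
    handler.setFormatter(JsonFormatter())
    log.addHandler(handler)
    log.setLevel(logging.INFO)

    # The message stays constant; the context object rides alongside.
    log.info("order placed", extra={"ctx": {"order_id": 123, "total": 49.90}})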
You don't use auto-complete for for-loops? Wait... You use a compiled language, rather than writing machine code by hand? Some would call THAT programming.
I like having muscles. I hate lifting weights.
I like being fit. I hate running.
I like being able to play guitar and piano. I hate practicing.
I like having food in my pantry. I hate grocery shopping.
I like having custom software that fits my needs. I hate writing code.
But this is using a machine to do the lifting for you so you don't develop the muscles. You are actually not strong through technology but left weak and helpless when left on your own.
It is just a bunch of people that don't take pride in self-sufficiency. It is a muscle that has atrophied for them.
Yes, exactly.
The thing I like here is that it runs locally. I use Vim keyword completion[1] a lot for next-word completion. It does a broadly similar sort of "look at surrounding code to offer good suggestions" thing (no LLM stuff, of course). It's wrong often, but it's useful enough that it saves me time overall, I feel.
So, this sounds to me like an expanded version of that, more or less.
I think I'd prefer an AI future with lots of little focused models running locally like this rather than the "über models in the cloud" approach. Or at least having such options is nice.
[1] https://vim.fandom.com/wiki/Any_word_completion
There's also omni-completion, a bit more advanced: https://vim.fandom.com/wiki/Omni_completion
This is interesting. I like the way JetBrains is using local models for autocompletion.
Do you know if there's a similar solution for VS Code?
The use of AI for this seems somewhat overkill; one could just use a language and environment that is aware of whether `logger` or `logging` is available, and of what variables are in scope. We have tools that allow us to treat programming as more than guessing the next character at random; they let you offload tracking this context to the machine in a reliable way: typed languages and autocomplete.
I do wonder if this is part of the divide in how useful people find LLMs currently. If you're already using a language that can tell you which expressions are syntactically valid while you type, rather than blowing up at runtime, the idea that a computer can take on some of the cognitive overhead is less novel.
Automates a tedious, time-consuming task. Easy to catch and correct failures. Love it. My only concern (other than maybe verbosity) is that my log-writing moments are opportunities to briefly reflect on what I have written, and maybe catch problems early.
> My favorite use-case for AI is writing logs
You mean like this?

    logger.info(f"new value: {var}")

...or something more like this?

    logger.info("new value: %s", var)

Python programmers, don't use f-strings for your logs.
I know, it's not as nice looking. But the advantage is that the logging system knows it's the same log regardless of what value var takes. This is used by Sentry to aggregate the same logs together. Also, if the variable being logged happens to contain a %s, the f-string version leaves that %s sitting in the final message, ready to trip up anything downstream that runs %-formatting over it. Performance doesn't really matter here, because f-strings are so fast, but the % method is also lazy and doesn't interpolate/format if the record isn't going to be logged. Maybe in the future we'll get to use template strings for this.

Thanks for the reasons! I've been using f-strings because they're easy to keep track of, but you make really good arguments (I hadn't thought of Sentry/similar systems using %s to aggregate logs).
They aren't just arguments, they're facts. f-strings in logs, especially in a hot code path, can be really bad for performance.
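The grouping point is easy to see on the record objects themselves; Capture below is a made-up helper standing in for what Sentry-like tooling does:

    import logging

    class Capture(logging.Handler):
        # Stash records so we can inspect them after the fact.
        def __init__(self):
            super().__init__()
            self.records = []
        def emit(self, record):
            self.records.append(record)

    cap = Capture()
    log = logging.getLogger("agg")
    log.addHandler(cap)
    log.setLevel(logging.INFO)

    for var in ("a", "b"):
        log.info("new value: %s", var)

    # The raw template survives on the record, so both events share one key:
    print({r.msg for r in cap.records})           # {'new value: %s'}
    print([r.getMessage() for r in cap.records])  # ['new value: a', 'new value: b']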
Tangential, but making the logs understandable by the LLM is also very useful
I appreciate how the author highlighted the python domain-specific tricks like dropping imports and rewriting tabs/spaces. It's good to be reminded that even with "large" language models you can get better results with quality over quantity.
An aside, but it’s quite unfortunate that logger.info and logging.info are automatically linkified because of the .info TLD in this case. I don’t recommend clicking on those links.
I once wrote a simple python program that logged all execution, self inspected code and printed code and variables as output.
Then I learned why programs don't do that by default.
Not to be snarky, but when you become more experienced you will figure out that logging is just writing to permanent storage, one of the most basic building blocks of programming. You don't need a dependency for that; writing to disk should be as natural as breathing air. You can do print("var", var). That's it.
If you are really anal, you can force writing to a file with a print argument, or just a POSIX open and write call. No magic, no remembering, no blog post. Just done, and next.
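For what it's worth, the two variants being described look roughly like this (the file name is arbitrary):

    import os

    var = 42  # whatever you're inspecting

    # print() with a file argument
    with open("debug.log", "a") as f:
        print("var", var, file=f, flush=True)

    # ...or the raw POSIX route
    fd = os.open("debug.log", os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
    os.write(fd, f"var {var}\n".encode())
    os.close(fd)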
This tracks with how I've been doing my debug logs.
I ask the model to create tons of logs for a specific function and make sure there's an emoji in the beginning to make it unique (I know HN hates emojis).
Best of all, I can just say "delete all logs with emojis" and my patch is ready. Magical usage of LLMs.
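The cleanup step doesn't even need a model; a crude sketch of "delete all logs with the marker", where MARKER and the file path are made up:

    from pathlib import Path

    MARKER = "🪵"  # whatever emoji tags the throwaway logs
    path = Path("my_module.py")

    kept = [line for line in path.read_text().splitlines(keepends=True)
            if MARKER not in line]
    path.write_text("".join(kept))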
Do you use a debugger?
I'd like to consult the HN hive mind on a tangential point.
Does anyone else here dislike loguru on appearance? I don't have a well-articulated argument for why I don't like it, but it subconsciously feels like a tool that is not sharp enough.
Was looking for evidence, either way, honestly. The author is using loguru here and I've run into it for a number of production deployments.
Anyone have experiences to share?
Loguru sucks very badly! I would advise you not to use it. It is like trying not to do what everyone else does, but still doing it, in a terribly wrong way...
For example, the backtraces try to be cooler for display but are awful, and totally inappropriate to pipe into monitoring systems like Sentry.
In the same way, as you can see in the article, it is the only logging library that doesn't accept the standard %-string syntax, using instead its own shitty format-based syntax.
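For context, the contrast being complained about; both calls defer formatting, they just disagree on placeholder syntax:

    import logging
    from loguru import logger as loguru_logger

    logging.basicConfig(level=logging.INFO)
    var = 42

    # loguru: str.format-style placeholders, args applied lazily
    loguru_logger.info("new value: {}", var)

    # stdlib logging: printf-style placeholders, also lazy
    logging.getLogger(__name__).info("new value: %s", var)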
AOP solved this 30 years ago, though.