aresant 2 days ago

Feels like a mixed bag rather than a clear regression?

eg - GPT-5 beats GPT-4 on factual recall + reasoning (HeadQA, Medbullets, MedCalc).

But then slips on structured queries (EHRSQL), fairness (RaceBias), evidence QA (PubMedQA).

Hallucination resistance better but only modestly.

Latency seems uneven (maybe needs more testing?): faster on long tasks, slower on short ones.

  • TrainedMonkey 2 days ago

    GPT-5 feels like cost engineering. The model is incrementally better, but they are optimizing for the least amount of compute. I am guessing investors love that.

    • narrator 2 days ago

      I agree. I have found GPT-5 significantly worse on medical queries. It feels like it skips important details and is much worse than o3, IMHO. I have heard good things about GPT-5 Pro, but that's not cheap.

      I wonder if part of the degraded performance comes from cases where they think you're going into a dangerous area and the model gets more and more vague, like they demoed on launch day with the fireworks example. It gets very vague when talking about non-abusable prescription drugs, for example. I wonder if that sort of nerfing gradient is affecting medical queries.

      After seeing some painfully bad results, I'm currently using Grok4 for medical queries with a lot of success.

      • fertrevino 2 days ago

        Interesting, it seems the anecdotal experience agrees with the benchmark results.

      • rbinv a day ago

        AFAIK, there is currently no "GPT-5 Pro". Did you mean o3-pro or o1-pro (via API)?

        Currently, GPT-5 sits at $10/1M output tokens, o3-pro at $80, and o1-pro at a whopping $600: https://platform.openai.com/docs/pricing

        Of course this is not indicative of actual performance or quality per $ spent, but according to my own testing, their performance does seem to scale in line with their cost.
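
        Back-of-the-envelope, using those list prices and ignoring input tokens and caching (the 5,000-token answer length is just a hypothetical):

            # $ per 1M output tokens, from the pricing page above
            price_per_m = {"gpt-5": 10, "o3-pro": 80, "o1-pro": 600}
            tokens = 5_000  # hypothetical long answer
            for model, price in price_per_m.items():
                print(f"{model}: ${tokens / 1_000_000 * price:.2f}")
            # gpt-5: $0.05, o3-pro: $0.40, o1-pro: $3.00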

        • mastercheif a day ago

          GPT-5 Pro is only available on ChatGPT with a ChatGPT Pro subscription.

          Supposedly it fires off multiple parallel thinking chains and then essentially debates with itself to net a final answer.
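
          If that's accurate, the pattern is roughly best-of-n sampling plus a final aggregation pass. A minimal sketch of that idea using the OpenAI Python client, purely as an illustration (the actual GPT-5 Pro mechanism isn't public; the four-sample count, prompts, and use of plain gpt-5 here are assumptions):

              from concurrent.futures import ThreadPoolExecutor
              from openai import OpenAI

              client = OpenAI()
              QUESTION = "..."  # whatever you'd normally ask

              def one_chain(_):
                  # One independent attempt at the question.
                  r = client.chat.completions.create(
                      model="gpt-5",
                      messages=[{"role": "user", "content": QUESTION}],
                  )
                  return r.choices[0].message.content

              # Fire off several attempts in parallel.
              with ThreadPoolExecutor(max_workers=4) as pool:
                  drafts = list(pool.map(one_chain, range(4)))

              # "Debate" pass: have the model critique the drafts and merge them.
              judge = client.chat.completions.create(
                  model="gpt-5",
                  messages=[{"role": "user", "content":
                      "Candidate answers:\n\n" + "\n\n---\n\n".join(drafts)
                      + "\n\nCritique these against each other and give one final answer to: "
                      + QUESTION}],
              )
              print(judge.choices[0].message.content)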

        • yzydserd a day ago

          GPT-5 Pro is available through the ChatGPT UI with a "Pro" plan. I understand that, like o3-pro, it is a high-compute, large-context invocation of the underlying models.

          • rbinv a day ago

            Thanks, I was not aware! I thought they offered all their models via their API.

    • RestartKernel 2 days ago

      I wonder how that math works out. GPT-5 keeps triggering a thinking flow even for relatively simple queries, so each token must be an order of magnitude cheaper to make this worth the trade-off in performance.

    • JimDabell 2 days ago

      I’ve found that it’s super likely to get stuck repeating the exact same incorrect response over and over. It used to happen occasionally with older models, but it happens frequently now.

      Things like:

      Me: Is this thing you claim documented? Where in the documentation does it say this?

      GPT: Here’s a long-winded assertion that what I said before was correct, plus a link to an unofficial source that doesn’t back me up.

      Me: That’s not official documentation and it doesn’t say what you claim. Find me the official word on the matter.

      GPT: Exact same response, word-for-word.

      Me: You are repeating yourself. Do not repeat what you said before. Here’s the official documentation: [link]. Find me the part where it says this. Do not consider any other source.

      GPT: Exact same response, word-for-word.

      Me: Here are some random words to test if you are listening to me: foo, bar, baz.

      GPT: Exact same response, word-for-word.

      It’s so repetitive I wonder if it’s an engineering fault, because it’s weird that the model would be so consistent in its responses regardless of the input. Once it gets stuck, it doesn’t matter what I enter, it just keeps saying the same thing over and over.

      • namibj 2 days ago

        Go back and edit a prompt of yours in the conversation instead of continuing with garbage in the context.

        • ForHackernews 2 days ago

          That's a good tip, I didn't know you could do that.

      • slashdev a day ago

        If one conversation goes in a bad direction, it's often best to just start over. The bad context often poisons the existing session.

      • TrainedMonkey a day ago

        That sounds like query caching... which would also align with the cost-engineering angle.

    • UltraSane a day ago

      Since the routing is opaque they can dynamically route queries to cheaper models when demand is high.

    • yieldcrv 2 days ago

      Yeah, look at their open-source models and how they fit such high parameter counts into so little VRAM.

      It's impressive, but a regression for now in direct comparison to a plain high-parameter model.

  • woeirua 2 days ago

    Definitely seems like GPT-5 is a very incremental improvement. Not what you'd expect if AGI were imminent.

    • p1esk 10 hours ago

      What would you expect?

  • fertrevino 2 days ago

    Mixed results indeed. While it leads the benchmark in two question types, it falls short in others, which results in the overall slight regression.

hypoxia 2 days ago

Did you try it with high reasoning effort?

  • ares623 2 days ago

    Sorry, not directed at you specifically. But every time I see questions like this I can’t help but rephrase in my head:

    “Did you try running it over and over until you got the results you wanted?”

    • dcre 2 days ago

      This is not a good analogy because reasoning models are not choosing the best from a set of attempts based on knowledge of the correct answer. It really is more like what it sounds like: “did you think about it longer until you ruled out various doubts and became more confident?” Of course nobody knows quite why directing more computation in this way makes them better, and nobody seems to take the reasoning trace too seriously as a record of what is happening. But it is clear that it works!

      • aprilthird2021 2 days ago

        > Of course nobody knows quite why directing more computation in this way makes them better, and nobody seems to take the reasoning trace too seriously as a record of what is happening. But it is clear that it works!

        One thing it's hard to wrap my head around is that we are giving more and more trust to something we don't understand, with the assumption (often unchecked) that it just works. Basically your refrain is used to justify all sorts of odd setups of AIs, agents, etc.

        • dcre 2 days ago

          Trusting things to work based on practical experience and without formal verification is the norm rather than the exception. In formal contexts like software development people have the means to evaluate and use good judgment.

          I am much more worried about the problem where LLMs are actively misleading low-info users into thinking they’re people, especially children and old people.

      • brendoelfrendo 2 days ago

        Bad news: it doesn't seem to work as well as you might think: https://arxiv.org/pdf/2508.01191

        As one might expect, because the AI isn't actually thinking, it's just spending more tokens on the problem. This sometimes leads to the desired outcome but the phenomenon is very brittle and disappears when the AI is pushed outside the bounds of its training.

        To quote their discussion, "CoT is not a mechanism for genuine logical inference but rather a sophisticated form of structured pattern matching, fundamentally bounded by the data distribution seen during training. When pushed even slightly beyond this distribution, its performance degrades significantly, exposing the superficial nature of the “reasoning” it produces."

        • hodgehog11 2 days ago

          I keep wondering whether people have actually examined how this work draws its conclusions before citing it.

          This is science at its worst, where you start at an inflammatory conclusion and work backwards. There is nothing particularly novel presented here, especially not in the mathematics; obviously performance will degrade on out-of-distribution tasks (and will do so for humans under the same formulation), but the real question is how out-of-distribution a lot of tasks actually are if they can still be solved with CoT. Yes, if you restrict the dataset, then it will perform poorly. But humans already have a pretty large visual dataset to pull from, so what are we comparing to here? How do tiny language models trained on small amounts of data demonstrate fundamental limitations?

          I'm eager to see more works showing the limitations of LLM reasoning, both at small and large scale, but this ain't it. Others have already supplied similar critiques, so let's please stop sharing this one around without the grain of salt.

          • ipaddr 2 days ago

            "This is science at its worst, where you start at an inflammatory conclusion and work backwards"

            Science starts with a guess and you run experiments to test.

            • hodgehog11 2 days ago

              True, but the experiments are engineered to give the results they want. It's a mathematical certainty that the performance will drop off here, but that is not an accurate assessment of what is going on at scale. If you present an appropriately large and well-trained model with in-context patterns, it often does a decent job, even when it isn't trained on them. By nerfing the model (4 layers), the conclusion is foregone.

              I honestly wish this paper actually showed what it claims, since it is a significant open problem to understand CoT reasoning relative to the underlying training set.

              • lossolo 2 days ago

                Without a provable hold-out, the claim that "large models do fine on unseen patterns" is unfalsifiable. In controlled from-scratch training, CoT performance collapses under modest distribution shift, even with plausible chains. If you have results where the transformation family is provably excluded from training and a large model still shows robust CoT, please share them. Otherwise this paper's claim stands for the regime it tests.

                • simianwords 2 days ago

                  I don't buy this, for the simple fact that benchmarks show much better performance on thinking than on non-thinking models. Benchmarks already consider the generalisation and "unseen patterns" aspect.

                  What would be your argument against:

                  1. CoT models performing way better in benchmarks than normal models

                  2. People choosing to use CoT models in day-to-day life because they actually find they give better performance

                • hodgehog11 2 days ago

                  > the claim that "large models do fine on unseen patterns" is unfalsifiable

                  I know what you're saying here, and I know it is primarily a critique of my phrasing, but establishing something like this is the objective of in-context learning theory and mathematical applications of deep learning. It is possible to prove that sufficiently well-trained models will generalize for certain unseen classes of patterns, e.g. transformer acting like gradient descent. There is still a long way to go in the theory---it is difficult research!

                  > performance collapses under modest distribution shift

                  The problem is that the notion of "modest" depends on the scale here. With enough varied data and/or enough parameters, what was once out-of-distribution can become in-distribution. The paper is purposely ignorant of this fact. Yes, the claims hold for tiny models, but I don't think anyone ever doubted this.

                • buildbot 2 days ago

                  This paper's claim holds - for 4-layer models. Models improve dramatically on out-of-distribution examples at larger scales.

        • razodactyl 2 days ago

          A viable consideration is that the models will home in on and reinforce an incorrect answer - a natural side effect of the LLM technology wanting to push certain answers higher in probability and repeat anything in context.

          Whether that happens in the conversation or in the thinking context, it doesn't prevent the model from giving the wrong answer, so the paper on the illusion of thinking makes sense.

          What actually seems to be happening is a form of conversational prompting. Of course, with the right back-and-forth with an LLM you can inject knowledge in a way that causes the natural distribution to shift (again, a side effect of the LLM tech), but by itself it won't naturally get the answer right every time.

          If this extended thinking were actually working, you would expect the LLM to be able to logically conclude an answer with very high accuracy, which it does not.

        • p1esk 10 hours ago

          They experimented with GPT-2-scale models. Hard to draw any meaningful conclusions from that in the GPT-5 era.

        • dcre 2 days ago

          The other commenter is more articulate, but you simply cannot draw the conclusion from this paper that reasoning models don't work well. They trained tiny little models and showed they don't work. Big surprise! Meanwhile every other piece of evidence available shows that reasoning models are more reliable at sophisticated problems. Just a few examples.

          - https://arcprize.org/leaderboard

          - https://aider.chat/docs/leaderboards/

          - https://arstechnica.com/ai/2025/07/google-deepmind-earns-gol...

          Surely the IMO problems weren't "within the bounds" of Gemini's training data.

          • robrenaud 2 days ago

            The Gemini IMO result used a specifically fine tuned model for math.

            Certainly they weren't training on the unreleased problems. Defining out-of-distribution gets tricky.

            • simianwords a day ago

              >The Gemini IMO result used a specifically fine tuned model for math.

              This is false.

              https://x.com/YiTayML/status/1947350087941951596

              This is false even for the OpenAI model

              https://x.com/polynoamial/status/1946478250974200272

              "Typically for these AI results, like in Go/Dota/Poker/Diplomacy, researchers spend years making an AI that masters one narrow domain and does little else. But this isn’t an IMO-specific model. It’s a reasoning LLM that incorporates new experimental general-purpose techniques."

            • Workaccount2 a day ago

              Every human taking that exam has fine tuned for math, specifically on IMO problems.

        • simianwords 2 days ago

          This is not the slam dunk you think it is. Thinking longer genuinely provides better accuracy. Sure, there are diminishing returns to increasing thinking tokens.

          GPT 5 fast gets many things wrong but switching to the thinking model fixes the issues very often.

    • SequoiaHope 2 days ago

      What you describe is a person selecting the best results, but if you can get better results one shot with that option enabled, it’s worth testing and reporting results.

      • ares623 2 days ago

        I get that. But then if that option doesn't help, what I've seen is that the next followup is inevitably "have you tried doing/prompting x instead of y"

        • Art9681 2 days ago

          It can be summarized as "Did you RTFM?". One shouldn't expect optimal results if the time and effort weren't invested in learning the tool, any tool. LLMs are no different. GPT-5 isn't one model, it's a family: gpt-5, gpt-5-mini, and gpt-5-nano, each taking high|medium|low reasoning-effort configurations. Anyone who is serious about measuring model capability would go for the best configuration, especially in medicine.

          I skimmed through the paper and I didn't see any mention of what parameters they used, other than that they used gpt-5 via the API.

          What was the reasoning_effort? verbosity? temperature?

          These things matter.
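
          For what it's worth, pinning those knobs in an eval harness is a small change. A minimal sketch with the OpenAI Python client, assuming the Chat Completions reasoning_effort parameter applies to gpt-5 the way it does to the o-series (the commented-out verbosity knob and the "medium" default are assumptions worth double-checking against current docs):

              from openai import OpenAI

              client = OpenAI()
              resp = client.chat.completions.create(
                  model="gpt-5",
                  reasoning_effort="high",  # assumed default is "medium" if omitted
                  # verbosity="low",        # GPT-5 verbosity knob; exact parameter name is an assumption
                  messages=[{"role": "user", "content": "A 62-year-old presents with chest pain..."}],
              )
              print(resp.choices[0].message.content)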

        • furyofantares 2 days ago

          Something I've experienced with multiple new model releases is that plugging them into my app makes my app worse. Then I do a bunch of work on prompts and now my app is better than ever. And it's not like the prompts are just better and make the old model work better too - usually the new prompts make the old model worse or there isn't any change.

          So it makes sense to me that you should try until you get the results you want (or fail to do so). And it makes sense to ask people what they've tried. I haven't done the work yet to try this for gpt5 and am not that optimistic, but it is possible it will turn out this way again.

        • theshackleford 2 days ago

          > I get that. But then if that option doesn't help, what I've seen is that the next followup is inevitably "have you tried doing/prompting x instead of y"

          Maybe I'm misunderstanding, but it sounds like you're framing a completely normal process (try, fail, adjust) as if it's unreasonable?

          In reality, when something doesn’t work, it would seem to me that the obvious next step is to adapt and try again. This does not seem like a radical approach but instead seems to largely be how problem solving sort of works?

          For example, when I was a kid trying to push start my motorcycle, it wouldn’t fire no matter what I did. Someone suggested a simple tweak, try a different gear. I did, and instantly the bike roared to life. What I was doing wasn’t wrong, it just needed a slight adjustment to get the result I was after.

          • ares623 2 days ago

            I get trying and improving until you get it right. But I just can't make the bridge in my head around

            1. this is magic and will one-shot your questions
            2. but if it goes wrong, keep trying until it works

            Plus, knowing it's all probabilistic, how do you know, without knowing ahead of time already, that the result is correct? Is that not the classic halting problem?

            • theshackleford 2 days ago

              > I get trying and improving until you get it right. But I just can't make the bridge in my head around

              > 1. this is magic and will one-shot your questions
              > 2. but if it goes wrong, keep trying until it works

              Ah that makes sense. I forgot the "magic" part, and was looking at it more practically.

              • ares623 2 days ago

                To clarify on the “learn and improve” part, I mean I get it in the context of a human doing it. When a person learns, that lesson sticks so errors and retries are valuable.

                For LLMs none of it sticks. You keep “teaching” it and the next time it forgets everything.

                So again you keep trying until you get the results you want, which you need to know ahead of time.

    • chairmansteve 2 days ago

      Or...

      "Did you try a room full of chimpanzees with typewriters?"

0xDEAFBEAD a day ago

So which of these benchmarks are most relevant for an ordinary user who wants to talk to AI about their health issues?

I'm guessing HeadQA, Medbullets, MedHallu, and perhaps PubMedQA? (Seems to me that "unsupported speculation" could be a good thing for a patient who has yet to receive a diagnosis...)

Maybe in practice it's better to look at RAG benchmarks, since a lot of AI tools will search online for information before giving you an answer anyways? (Memorization of info would matter less in that scenario)

ancorevard 2 days ago

So, since reasoning_effort is not discussed anywhere, I assume you used the default, which is "medium"?

  • energy123 2 days ago

    Also, were tool calls allowed? The point of reasoning models is to offload the facts so finite capacity goes towards the dense reasoning engine rather than recall, with the facts sitting elsewhere.

username135 2 days ago

I wonder what changed in the models that created the regression?

  • oezi a day ago

    There is some speculation that GPT-5 uses a router to decide which expert model to deploy (e.g. mini vs. the o/thinking models). So the router might decide that a query can be solved by a cheaper model, and that model gives worse results.
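
    Purely to illustrate that speculation (this is not how OpenAI's router actually works as far as anyone outside knows; the heuristic and tier names below are invented), the idea would be something like:

        def route(query: str, under_heavy_load: bool) -> str:
            """Toy router: pick a model tier from a cheap heuristic plus current load."""
            looks_hard = len(query) > 400 or any(
                kw in query.lower()
                for kw in ("differential diagnosis", "contraindication", "dosing")
            )
            if looks_hard and not under_heavy_load:
                return "gpt-5-thinking"  # expensive, slower tier
            if looks_hard:
                return "gpt-5"           # under load, drop a tier
            return "gpt-5-mini"          # cheap tier for "simple" queries

        print(route("Max daily acetaminophen dose for an adult?", under_heavy_load=True))
        # -> "gpt-5-mini", even though the stakes of the question are real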

andai a day ago

Did this use reasoning or not? GPT-5 with Minimal reasoning does roughly the same as 4o on benchmarks.

credit_guy 2 days ago

Here's my experience: for some coding tasks where GPT 4.1, Claude Sonnet 4, and Gemini 2.5 Pro were just spinning for hours and hours and getting nowhere, GPT 5 just did the job without a fuss. So I switched immediately to GPT 5 and never looked back. Or at least I never looked back until I found out that my company has some Copilot limits for premium models and I blew through the limit. So now I keep my context small, use GPT 5 mini when possible, and when it's not working I move to the full GPT 5. Strangely, it feels like GPT 5 mini can corrupt the full GPT 5, so sometimes I need to go back to Sonnet 4 to get unstuck. To each their own, but I consider GPT 5 a fairly big move forward in the space of coding assistants.

  • agos 2 days ago

    any thread on HN about AI (there's constantly at least one on the homepage nowadays) goes like this:

    "in my experience [x model] one shots everything and [y model] stumbles and fumbles like a drunkard", for _any_ combination of X and Y.

    I get the idea of sharing what's working and what's not, but at this point it's clear that there are more factors to using these with success and it's hard to replicate other people's successful workflows.

  • benlc 2 days ago

    Interestingly, I'm experiencing the opposite of you. I was mostly using Claude Sonnet 4 and GPT 4.1 through Copilot for a few months and was overall fairly satisfied with them. The first task I threw at GPT 5, it excelled in a fraction of the time Sonnet 4 normally takes, but after a few iterations it all went downhill. GPT 5 almost systematically does things I didn't ask it to do. After failing to solve an issue for almost an hour, I switched back to Claude, which fixed it on the first try. YMMV

    • AndyNemmity 2 days ago

      Yeah, GPT 5 got into death loops faster than any other LLM, and I stopped using it for anything more than UI prototypes.

  • czk 2 days ago

    It's possible to use gpt-5-high on the Plus plan with codex-cli; it's a whole different beast! I don't think there's any other way for Plus users to leverage gpt-5 with high reasoning.

    codex -m gpt-5 model_reasoning_effort="high"

causality0 2 days ago

I've definitely seen some unexpected behavior from gpt5. For example, it will tell me my query is banned and then give me a full answer anyway.

CuriouslyC 2 days ago

GPT-5 is like an autistic savant

mattwad 2 days ago

I thought Cursor was getting really bad, then I found out I was on a GPT-5 trial. Gonna stick with Claude :)

kumarvvr 2 days ago

I have an issue with the words "understanding", "reasoning", etc when talking about LLMs.

Are they really understanding, or putting out a stream of probabilities?

  • munchler 2 days ago

    Does it matter from a practical point of view? It's either true understanding or it's something else that's similar enough to share the same name.

    • axdsk 2 days ago

      The polygraph is a good example.

      The "lie detector" is used to misguide people, the polygraph is used to measure autonomic arousal.

      I think these misnomers can cause real issues like thinking the LLM is "reasoning".

      • dexterlagan a day ago

        Agreed, but in the case of the lie detector, it seems it's a matter of interpretation. In the case of LLMs, what is it? Is it a matter of saying "It's a next-word calculator that uses stats, matrices and vectors to predict output" instead of "Reasoning simulation made using a neural network"? Is there a better name? I'd say it's "A static neural network that outputs a stream of words after having consumed textual input, and that can be used to simulate, with a high level of accuracy, the internal monologue of a person who would be thinking about and reasoning on the input". Whatever it is, it's not reasoning, but it's not a parrot either.

  • sema4hacker 2 days ago

    The latter. When "understand", "reason", "think", "feel", "believe", and any of a long list of similar words are in any title, it immediately makes me think the author already drank the kool aid.

    • manveerc 2 days ago

      In the context of coding agents, they do simulate "reasoning": when you feed them the output, they are able to correct themselves.

    • qwertytyyuu 2 days ago

      I agree with "feel" and "believe", but what words would you suggest instead of "understand" and "reason"?

      • sema4hacker 2 days ago

        None. Don't anthropomorphize at all. Note that "understanding" has now been removed from the HN title but not the linked pdf.

        • platypii 2 days ago

          Why not? We are trying to evaluate AI's capabilities. It's OBVIOUS that we should compare it to our only prior example of intelligence -- humans. Saying we shouldn't compare or anthropomorphize machines is a ridiculous hill to die on.

          • sema4hacker 19 hours ago

            If you are comparing the performance of a computer program with the performance of a human, then using terms implying they both "understand" wrongly implies they work in the same human-like way, and that ends up misleading lots of people, especially those who have no idea (understanding!) how these models work. Great for marketing, though.

    • vexna 2 days ago

      kool aid or not -- "reasoning" is already part of the LLM verbiage (e.g. `reasoning` models having `reasoningBudget`). The meaning might not be 1:1 with human reasoning, but when the LLM shows its "reasoning" it does _appear_ like a train of thought. If I had to give what it's doing a name (like I'm naming a function), I'd be hard pressed to not go with something like `reason`/`think`.

      • insin a day ago

            prefillContext()
  • hodgehog11 2 days ago

    What does understanding mean? Is there a sensible model for it? If not, we can only judge in the same way that we judge humans: by conducting examinations and determining whether the correct conclusions were reached.

    Probabilities have nothing to do with it; by any appropriate definition, there exist statistical models that exhibit "understanding" and "reasoning".

  • jmpeax 2 days ago

    Do you yourself really understand, or are you just depolarizing neurons that have reached their threshold?

    • octomind 2 days ago

      It can be simultaneously true that human understanding is just a firing of neurons and that the architecture and function of those neural structures is vastly different from what an LLM is doing internally, such that they are not really the same. I'd encourage you to read Apple's recent paper on thinking models; I think it's pretty clear that the way LLMs encode the world is drastically inferior to what the human brain does. I also believe that could be fixed with the right technical improvements, but it just isn't the case today.

    • dmead 2 days ago

      He doesn't know the answer to that and neither do you.

  • dang 2 days ago

    OK, we've removed all understanding from the title above.

    • fragmede a day ago

      Care to provide reasoning as to why?

      • dang a day ago

        The article's title was longer than 80 chars, which is HN's limit. There's more than one way to truncate it.

        The previous truncation ("From GPT-4 to GPT-5: Measuring Progress in Medical Language Understanding") was baity in the sense that the word 'understanding' was provoking objections and taking us down a generic tangent about whether LLMs really understand anything or not. Since that wasn't about the specific work (and since generic tangents are basically always less interesting*), it was a good idea to find an alternate truncation.

        So I took out the bit that was snagging people ("understanding") and instead swapped in "MedHELM". Whatever that is, it's clearly something in the medical domain and has no sharp edge of offtopicness. Seemed fine, and it stopped the generic tangent from spreading further.

        * https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

        • fragmede a day ago

          Well thought out, thank you!

          Generic Tangents is my new band's name.

woeirua 2 days ago

Interesting topic, but I'm not opening a PDF from some random website. Post a summary of the paper or the key findings here first.

  • 42lux 2 days ago

    It's hacker news. You can handle a PDF.

    • jeffbee 2 days ago

      I approve of this level of paranoia, but I would just like to know why PDFs are dangerous (reasonable) but HTML is not (inconsistent).

      • HeatrayEnjoyer 2 days ago

        PDFs can run almost anything and have an attack surface the size of Greece's coast.

        • zamadatix 2 days ago

          That's not very different from web browsers, but usually security-conscious people just disable scripting functionality and such in their viewer (browser, PDF reader, RTF viewer, etc.) instead of focusing on the file extension it comes in.

          I think pdf.js even defaults to not running scripts in PDFs (would need to double-check), if you want to view it in the browser's sandbox. Of course there are still always text-rendering-based security attacks and such, but again, there's nothing unique to that vs a webpage in a browser.