Hacker News

Ask HN: Are past LLM models getting dumber?

4 points by hmate9 2 days ago | 5 comments

I’m curious whether others have observed this or if it’s just perception or confirmation bias on my part. I’ve seen discussion on X suggesting that older models (e.g., Claude 4.5) appear to degrade over time — possibly due to increased quantization, throttling, or other inference-cost optimizations after newer models are released. Is there any concrete evidence of this happening, or technical analysis that supports or disproves it? Or are we mostly seeing subjective evaluation without controlled benchmarks?
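One way to move past subjective impressions is a controlled longitudinal eval: pin the model identifier and temperature, re-run the same fixed prompt set on a schedule, and compare scores over time. Below is a minimal sketch of such a harness; the test cases, the `stub_model` function, and the date labels are all hypothetical stand-ins, not anything from this thread.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalResult:
    label: str   # e.g. a run date, so scores can be compared over time
    score: float # fraction of exact-match answers

def run_eval(model_fn: Callable[[str], str],
             cases: list[tuple[str, str]],
             label: str) -> EvalResult:
    """Run a fixed prompt set and return the fraction of exact matches."""
    hits = sum(1 for prompt, expected in cases
               if model_fn(prompt).strip() == expected)
    return EvalResult(label=label, score=hits / len(cases))

# Hypothetical fixed test set; a real one would use held-out tasks
# with unambiguous answers (arithmetic, extraction, classification).
CASES = [
    ("What is 7 * 8?", "56"),
    ("Spell 'cat' backwards.", "tac"),
]

# Stub standing in for an API call made with temperature=0 and a
# pinned model string; swap in a real client and run it weekly.
def stub_model(prompt: str) -> str:
    answers = {"What is 7 * 8?": "56", "Spell 'cat' backwards.": "tac"}
    return answers[prompt]

baseline = run_eval(stub_model, CASES, label="week-0")
print(baseline.score)  # 1.0 on the stub
```

Even with temperature pinned to 0, API-served models are not always bit-for-bit deterministic, so averaging several runs per checkpoint gives a fairer comparison than a single pass.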
