Report finds newer reasoning models hallucinate nearly half the time, while experts warn of unresolved flaws, deliberate deception and a long road to human-level AI reliability
Why would somebody intuitively know that a newer, presumably improved model would hallucinate more? There's no fundamental reason a stronger model should hallucinate more. In that regard, I think the news story is valuable - not everyone uses ChatGPT.
Or are you suggesting that active users should know? I guess that makes more sense.
There is definitely a reason a larger model would have worse hallucinations. Why do you say there isn't? It's a fundamental problem with data scaling in these architectures.
I’ve never used ChatGPT and really have no interest in it whatsoever.
How about I just do some LSD? Guaranteed my hallucinations will surpass ChatGPT's in spectacular fashion.