This. Any time someone tries to tell me that AGI will come in the next 5 years given what we’ve seen, I roll my eyes. I don’t see a pathway where LLMs become what’s needed for AGI. They may be a part of it, but a non-critical part at best. If you can’t reduce hallucinations to the point where they’re virtually indistinguishable from misunderstanding a vague sentence, LLMs are useless for AGI.
Our distance from true AGI (not some goalpost moved by corporate interests) hasn’t significantly shrunk since before LLMs became a thing, in my very harsh opinion, aside from the knowledge and research produced by those actually working towards AGI. Just as we always thought AI would arrive one day, maybe soon, before 2020, it’s no different now. LLMs alone barely close that gap; at best they give us the illusion of closing it.