• in_my_honest_opinion@piefed.social
    9 days ago

    Sure, but giant-context models are still more prone to hallucination and to confidence loops where they keep spitting out the same wrong result in different ways.

    • AliasAKA@lemmy.world
      8 days ago

      Sorry, I’m not saying that’s a good thing. It’s not just the context that’s expanding, but the parameters of the base model. I’m saying at some point you’ve just saved a compressed version of the majority of the content (we’re already kind of there), and you’d be able to decompress it with ever-closer-to-lossless fidelity. This doesn’t make it more useful for anything other than recreating copyrighted works.