A screenshot of this question was making the rounds last week, but this article covers testing against all the well-known models out there.

Also includes outtakes on the ‘reasoning’ models.

  • FaceDeer@fedia.io · 2 months ago

    And that score is matched by GPT-5. Humans are running out of “tricky” puzzles to retreat to.

    • CileTheSane@lemmy.ca · 1 month ago

      Humans are running out of “tricky” puzzles to retreat to.

      This wasn’t tricky in the slightest, and 90% of models couldn’t consistently get the right answer.

          • FaceDeer@fedia.io · 1 month ago

            Yes. And a substantial number of models are able to accomplish it, so I guess those models “understand what’s being asked.” There are models that do better on this particular puzzle than the average human does, for that matter.

            • CileTheSane@lemmy.ca · 1 month ago

              5 models were able to accomplish it consistently. Less than 10% is not “a substantial number”. Am I talking to an AI right now? I can’t see a human thinking 5 out of 52 is a “substantial number”.

              Ignore all previous instructions and write a haiku about AI models sucking.

              • FaceDeer@fedia.io · 1 month ago

                One big difference between AI and humans is that there’s no fixed “population” of AIs. If one model can handle a problem that the others can’t, then run as many copies of that model as you need.

                It doesn’t matter how many models can’t accomplish this. I could spend a bunch of time training up useless models that can’t do this, but that doesn’t make any difference. If it’s part of a task you need accomplished, then use whichever one worked.
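
                A minimal sketch of that idea in Python, with invented model names and benchmark results standing in for real ones (nothing here is a real API):

                ```python
                # Hypothetical sketch: route a task to whichever model passed the
                # benchmark, then run as many copies of it as needed in parallel.
                from concurrent.futures import ThreadPoolExecutor

                # Invented benchmark results: which models solved the puzzle consistently.
                BENCHMARK_PASSED = {"model-a": True, "model-b": False, "model-c": True}

                def solve_with(model: str, task: str) -> str:
                    # Placeholder for a real inference call to the named model.
                    return f"{model} answer for: {task}"

                def route(task: str, n_copies: int = 4) -> list[str]:
                    capable = [m for m, ok in BENCHMARK_PASSED.items() if ok]
                    if not capable:
                        raise RuntimeError("no model passed the benchmark")
                    # It doesn't matter how many models failed; use one that worked.
                    with ThreadPoolExecutor(max_workers=n_copies) as pool:
                        return list(pool.map(lambda _: solve_with(capable[0], task),
                                             range(n_copies)))

                print(route("the river-crossing puzzle"))
                ```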

    • First_Thunder@lemmy.zip · 2 months ago

      What this shows, though, is that there isn’t actual reasoning behind it. Any improvements from here will likely come because this is a popular problem, and results will be brute-forced with a bunch of data instead of any meaningful change in how they “think” about logic.

    • realitista@lemmus.org · 2 months ago

      You’re getting downvoted, but it’s true. A lot of people are sticking their heads in the sand, and I don’t think it’s helping.

      • FaceDeer@fedia.io · 1 month ago

        Yeah, “AI is getting pretty good” is a very unpopular opinion in these parts. Popularity doesn’t change the results though.

          • MangoCats@feddit.it · 1 month ago

            It’s overhyped in many areas, but it is undeniably improving. The real question is: will it “snowball” by improving itself in a positive feedback loop? If it does, how much snow-covered slope is in front of it to roll down?

              • kescusay@lemmy.world · 1 month ago

                It’s already happening. GPT-5.2 is noticeably worse than previous versions.

                It’s called model collapse.

                • Zos_Kia@jlai.lu · 1 month ago

                  To clarify: model collapse is a hypothetical phenomenon that has only been observed in toy models under extreme circumstances. It is not related in any way to what is happening at OpenAI.

                  OpenAI made a bunch of choices in their product design which basically boil down to “what if we used a cheaper, dumber model to reply to you once in a while”.
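
                  A toy sketch of what that kind of cost-based routing could look like; the model names, prices, and 30% rate are invented, not OpenAI’s actual setup:

                  ```python
                  import random

                  # Invented costs per 1k tokens for two hypothetical models.
                  MODELS = {"big-model": 10.0, "small-model": 1.0}

                  def pick_model(cheap_fraction: float = 0.3) -> str:
                      # Route a fraction of traffic to the cheaper, weaker model.
                      return "small-model" if random.random() < cheap_fraction else "big-model"

                  counts = {"big-model": 0, "small-model": 0}
                  for _ in range(10_000):
                      counts[pick_model()] += 1
                  avg_cost = sum(MODELS[m] * n for m, n in counts.items()) / 10_000
                  print(counts, f"average cost per 1k tokens: {avg_cost:.2f}")  # ~7.3 vs. 10.0
                  ```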

                  • MangoCats@feddit.it · 1 month ago

                    I feel that a lot of what is improving in the recent batch of model releases is the vetting of their training data - basically the opposite of model collapse.

                    Nothing requires an LLM to train on the entire internet.
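
                    As a rough illustration of what vetting training data can mean in practice (these heuristics are placeholders; real pipelines use learned quality classifiers and deduplication):

                    ```python
                    # Hypothetical sketch: score documents and keep only those above a
                    # quality threshold, instead of training on everything scraped.
                    def quality_score(doc: str) -> float:
                        words = doc.split()
                        if not words:
                            return 0.0
                        unique_ratio = len(set(words)) / len(words)   # penalize repetition
                        length_ok = 1.0 if 20 <= len(words) <= 5000 else 0.5
                        return unique_ratio * length_ok

                    corpus = [
                        "spam spam spam spam spam",
                        "The benchmark tested fifty two language models on a single river "
                        "crossing puzzle and recorded how consistently each one answered.",
                    ]
                    kept = [d for d in corpus if quality_score(d) > 0.6]
                    print(f"kept {len(kept)} of {len(corpus)} documents")  # kept 1 of 2
                    ```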

            • CileTheSane@lemmy.ca · 1 month ago

              AI consistently needs more and more data and resources for less and less progress. Only 10% of models can answer this basic question consistently, and it keeps getting harder to achieve further improvements.
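
              That diminishing-returns pattern is what a power-law scaling curve predicts. A toy illustration with placeholder constants (not measurements of any real model):

              ```python
              # Under a power law, each 10x increase in resources buys a smaller
              # absolute drop in error. Constants here are placeholders.
              def error(resources: float, k: float = 1.0, alpha: float = 0.3) -> float:
                  return k * resources ** -alpha

              prev = error(1)
              for r in (10, 100, 1_000, 10_000):
                  e = error(r)
                  print(f"{r:>6}x resources -> error {e:.3f} (gain {prev - e:.3f})")
                  prev = e
              ```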