• MudMan@fedia.io
    13 hours ago

    You’re anthropomorphising quite a bit there. It is not trying to be deceptive; it’s building two mostly unrelated pieces of text, deciding once that the fuzzy logic gets it the most likely valid response, and separately that the description of the algorithm is the most likely response to the other prompt. As far as I can tell there’s neither a reward for lying about the process nor any awareness of what the process was anywhere in this.

    Still interesting (but unsurprising) that it’s not getting there by doing actual maths, though.

    • Neverclear@lemmy.dbzer0.com
      3 hours ago

      Maybe you’re right. Maybe it’s Markov chains all the way down.
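      For what it’s worth, the “Markov chains” idea is easy to sketch: a toy word-level chain that only ever picks a likely next token based on the previous one (plain Python illustration of the intuition, not how a transformer actually works):

```python
import random
from collections import defaultdict

# Build a first-order, word-level Markov chain from a tiny corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()
chain = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    chain[prev].append(nxt)  # duplicates encode frequency

def generate(start, length=5, seed=0):
    """Walk the chain, sampling a plausible next word each step."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = chain.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))  # grammatical-looking text with no understanding behind it
```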

      The only way I can think to test this would be to “poison” the training data with faulty arithmetic to see if it is just recalling precedent or actually implementing an algorithm.
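      The logic of that test can be sketched with two toy “models”: one that just memorises a (poisoned) training table and one that actually implements the algorithm. Only the memoriser reproduces the planted errors. This is a thought-experiment sketch, not a real training run:

```python
# Toy sketch of the poisoning test: plant a faulty fact in the
# "training data" and see which kind of model reproduces it.
train = {(a, b): a + b for a in range(10) for b in range(10)}
train[(3, 4)] = 99  # poisoned example: the data claims 3 + 4 = 99

def memoriser(a, b):
    # Pure recall: answers come straight from the training table.
    return train[(a, b)]

def algorithm(a, b):
    # Actually implements addition, ignoring the table.
    return a + b

# The poisoned fact separates the two hypotheses:
print(memoriser(3, 4))  # 99 -> it was just recalling precedent
print(algorithm(3, 4))  # 7  -> it really implements the algorithm
```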

      • MudMan@fedia.io
        3 hours ago

        Well, that’s supposed to be the point of the paper in the first place. They seem to be tracing paths through the neural net and seeing what lights up when they do things step by step. Someone posted a link to the source article somewhere in this thread.

        Best they can tell, as per the article, the maths answer and the explanation of how it got to that answer are being generated independently.