• oyo@lemm.ee · 6 months ago

    LLMs: using statistics to generate reasonable-sounding wrong answers from bad data.

      • Blackmist@feddit.uk · 6 months ago

        And the system doesn’t know either.

        For me, this is the major issue. A human is capable of saying “I don’t know”; LLMs don’t seem able to.