Office Space meme:

“If y’all could stop calling an LLM ‘open source’ just because they published the weights… that would be great.”

  • Fushuan [he/him]@lemm.ee · 2 days ago

Not enough to make the results diverge. Randomness is added to avoid getting stuck in local maxima during optimization; you should still end up at the same global maximum. Models usually run until their optimization converges.
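
    A minimal sketch of that idea (not from the thread, just an illustration assuming a toy quadratic loss and NumPy): with a fixed seed, the injected noise perturbs the optimization path but not where it ends up, so two runs with the same seed produce identical weights.

    ```python
    import numpy as np

    def train(seed, steps=5000, lr=0.01, noise_scale=0.1):
        rng = np.random.default_rng(seed)
        w = rng.normal(size=2)  # random init
        for _ in range(steps):
            # loss = ||w - target||^2, so grad = 2 * (w - target)
            grad = 2 * (w - np.array([3.0, -1.0]))
            # injected randomness: keeps SGD from stalling, but with a
            # fixed seed it is fully reproducible
            grad += noise_scale * rng.normal(size=2)
            w -= lr * grad
        return w

    print(train(seed=42))  # two calls with the same seed
    print(train(seed=42))  # print the exact same weights
    ```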

    As stated, if the randomness is large enough that multiple reruns end up with different weights, i.e. optimized for different maxima, the randomization is trash. Any model worth its salt won’t use randomization that large.

    So, going back to my initial point: we need the training data to validate the weights. There are ways to check the performance of a model (quite literally, the same algorithm used to evaluate weights during training is then used to evaluate the trained weights after training). If a rerun uses the same data and parameters, the performance should be identical up to a very small rounding error.
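
    A sketch of that validation check, again as a toy NumPy example with made-up helper names rather than any real pipeline: rerun training with identical data, hyperparameters, and seed, then compare the evaluation metric across the two runs.

    ```python
    import numpy as np

    def evaluate(w, X, y):
        # same scoring used during training (mean squared error)
        return float(np.mean((X @ w - y) ** 2))

    def train(X, y, seed, steps=2000, lr=0.01):
        rng = np.random.default_rng(seed)
        w = rng.normal(size=X.shape[1])  # seeded random init
        for _ in range(steps):
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w -= lr * grad
        return w

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))
    y = X @ np.array([1.5, -2.0, 0.5]) + 0.1 * rng.normal(size=100)

    # rerun with the same data, parameters, and seed: the evaluated
    # performance should match to within rounding error
    w1 = train(X, y, seed=123)
    w2 = train(X, y, seed=123)
    assert abs(evaluate(w1, X, y) - evaluate(w2, X, y)) < 1e-9
    ```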