• will_a113@lemmy.ml · 2 days ago

    Have you had any luck importing even a medium-sized codebase and doing reasoning on it? All of my experiments start to show subtle errors past 2k tokens, and at 5k tokens the errors become significant. Any attempt to ingest and process a decent-sized project (say 20k SLOC plus tooling/overhead/config) has been useless, even on models that “should” have a large enough context window.

    • horse_battery_staple@lemmy.world · 2 days ago (edited)

      My codebase is almost 1.2 GB of raw Python and Go files, no images. I think it’s somewhere near 15k tokens for the Python side and 22k for the Go side, mostly because of all the .mod files and the .io connectors into Python libraries… it was a much bigger mess before, if you can believe it.
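
      To put numbers on that, here’s a minimal sketch of how you could measure per-language token counts yourself (my sketch, not from the comment). It assumes the tiktoken package and uses cl100k_base as a stand-in tokenizer; a local model uses its own tokenizer, so treat the totals as ballpark figures.

      ```python
      # Sketch: per-language token totals for a mixed Python/Go repo.
      # Assumes `pip install tiktoken`; cl100k_base is only a stand-in
      # for whatever tokenizer the local model actually uses.
      from collections import Counter
      from pathlib import Path

      import tiktoken

      enc = tiktoken.get_encoding("cl100k_base")
      totals = Counter()

      for path in Path(".").rglob("*"):
          if path.is_file() and path.suffix in {".py", ".go", ".mod"}:
              text = path.read_text(errors="ignore")
              totals[path.suffix] += len(enc.encode(text))

      for ext, count in totals.most_common():
          print(f"{ext}: {count} tokens")
      ```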

      What size model are you using? I’m getting pretty good results with R1 32B, but mine have been distilled to be experts in the languages of their codebases. I’m not using any general models for this.
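
      For anyone wanting to poke at a similar setup, here’s a rough sketch of querying a locally served distill, assuming Ollama and its Python client; “deepseek-r1:32b” is the stock distill tag, and a language-specialised fine-tune like the one described here would have its own model name.

      ```python
      # Sketch: asking a locally served R1-32B distill about one source file.
      # Assumes Ollama is running and `pip install ollama`; the file path
      # and the model tag are placeholders for your own setup.
      from pathlib import Path

      import ollama

      source = Path("handlers/user.go").read_text()  # hypothetical file

      response = ollama.chat(
          model="deepseek-r1:32b",
          messages=[{
              "role": "user",
              "content": f"Explain what this Go handler does:\n\n{source}",
          }],
      )
      print(response["message"]["content"])
      ```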

      It also depends on the language you’re targeting. Rust and Lisp have issues because they’re documented far less. I think golf-type languages like Brainfuck are impossible. It really comes down to how well the language is documented. Python gave me issues in the beginning until I specified 3.11 in my weights and distillation/training, and that fixed a lot of the hallucinations I was getting from the model.
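
      As a toy illustration of what pinning 3.11 in the training data could look like (my sketch, not the commenter’s actual pipeline): assume a JSONL distillation set where each record carries a hypothetical python_version field, and keep only the 3.11 examples.

      ```python
      # Toy sketch: keep only Python 3.11 examples in a distillation set.
      # The JSONL layout and the "python_version" field are hypothetical,
      # just to illustrate pinning a language version in the training data.
      import json
      from pathlib import Path

      kept = []
      with Path("distill_raw.jsonl").open() as src:
          for line in src:
              record = json.loads(line)
              if record.get("python_version") == "3.11":
                  kept.append(record)

      with Path("distill_311.jsonl").open("w") as dst:
          for record in kept:
              dst.write(json.dumps(record) + "\n")
      ```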

      I think statically typed languages with consistent documentation would be the easiest for this. Now that I think of it, maybe a TypeScript expert would be something I could tool around with.

      Edited for legibility, and because I just went and looked at my datasets again. Much bigger than I initially thought.