Office Space meme:
“If y’all could stop calling an LLM ‘open source’ just because they published the weights… that would be great.”
Yeah, but not to train it
Yeah, it’s about as open source as binary blobs.
So what? You can still glean something if you know the dataset the model was trained on.
If software is hard to compile, can you keep the source code closed and still call software “open source”?
I agree the bad part is that they didn’t provide the script to train the model from scratch.
This is a great starting point for further improvements to the model. Most AI research is done with pretrained weights as the basis; few train models completely from scratch. The model is built with Torch, so anyone should be able to fine-tune it on custom datasets.
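Roughly the usual fine-tuning loop, sketched below with a generic Torch checkpoint. The model class, weight path, and hyperparameters are placeholders I made up, not the actual release's API; the real model would also use its own tokenizer and causal masking.

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset

    # Hypothetical stand-in for the published architecture; a real release
    # ships its own model class and checkpoint format.
    class PretrainedLM(nn.Module):
        def __init__(self, vocab_size=32000, dim=512):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, dim)
            self.backbone = nn.TransformerEncoder(
                nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True),
                num_layers=4,
            )
            self.head = nn.Linear(dim, vocab_size)

        def forward(self, tokens):
            return self.head(self.backbone(self.embed(tokens)))

    model = PretrainedLM()
    # Load the published weights (placeholder path).
    model.load_state_dict(torch.load("weights.pt", map_location="cpu"))

    # Custom dataset: next-token prediction, inputs shifted against targets.
    tokens = torch.randint(0, 32000, (256, 129))   # stand-in for real tokenized text
    dataset = TensorDataset(tokens[:, :-1], tokens[:, 1:])
    loader = DataLoader(dataset, batch_size=8, shuffle=True)

    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
    loss_fn = nn.CrossEntropyLoss()

    model.train()
    for inputs, targets in loader:
        optimizer.zero_grad()
        logits = model(inputs)                     # (batch, seq, vocab)
        loss = loss_fn(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
        loss.backward()
        optimizer.step()

None of that needs the original training script or data, which is exactly why weights-only releases still get fine-tuned everywhere.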