Office Space meme:

“If y’all could stop calling an LLM ‘open source’ just because they published the weights… that would be great.”

  • Poik

    That… doesn’t align with years of research. Data is king. As someone who specifically studies long-tail distributions and few-shot learning (before succumbing to long COVID, sorry if my response is a bit scattered): throwing more data at a problem always improves results more than changing the method, and the method can only be simplified once you have more data. The exceptions are a few neat tricks that modern deep learning has written off as “classical” hogwash, and most of those don’t scale to the sizes being discussed here.
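    A minimal sketch of the kind of ablation behind that claim: hold the task fixed, then vary the amount of training data along one axis and the method along the other. The dataset, models, and split sizes below are illustrative assumptions, not anything from the comment itself.

```python
# Hypothetical data-vs-method ablation: grow the training set with the method
# fixed, then swap methods at a fixed (small) data budget and compare.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

train = fetch_20newsgroups(subset="train")
test = fetch_20newsgroups(subset="test")

vec = TfidfVectorizer(max_features=50_000)
X_train = vec.fit_transform(train.data)
X_test = vec.transform(test.data)

# Axis 1: more data, same method.
for n in (500, 2_000, 8_000, len(train.data)):
    clf = LogisticRegression(max_iter=1_000)
    clf.fit(X_train[:n], train.target[:n])
    acc = accuracy_score(test.target, clf.predict(X_test))
    print(f"logreg, n={n}: {acc:.3f}")

# Axis 2: different method, same small data budget.
for name, clf in (("logreg", LogisticRegression(max_iter=1_000)),
                  ("linear svm", LinearSVC())):
    clf.fit(X_train[:2_000], train.target[:2_000])
    acc = accuracy_score(test.target, clf.predict(X_test))
    print(f"{name}, n=2000: {acc:.3f}")
```

    The pattern the comment is describing is that the data axis tends to move the score far more than the method axis does.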

    Also, datasets inherently impose bias on networks, and it’s easier to craft adversarial examples that fool two different networks trained on the same data than to fool the same architecture trained twice from scratch on different data.
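    A rough illustration of that transferability claim: train two different architectures on the same data, craft FGSM adversarial examples against the first, and measure how often they also fool the second. The architectures, epsilon, and dataset (MNIST) are assumptions made for this sketch; the comparison point would be repeating it with the same architecture trained on disjoint data splits.

```python
# Hypothetical transfer check: adversarial examples built against model A,
# evaluated on model B trained on the SAME data.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"
train_ds = datasets.MNIST("data", train=True, download=True, transform=transforms.ToTensor())
test_ds = datasets.MNIST("data", train=False, download=True, transform=transforms.ToTensor())

def make_model(hidden):
    return nn.Sequential(nn.Flatten(), nn.Linear(784, hidden),
                         nn.ReLU(), nn.Linear(hidden, 10)).to(device)

def train(model, epochs=1):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in DataLoader(train_ds, batch_size=128, shuffle=True):
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

model_a, model_b = make_model(256), make_model(512)  # same data, different widths
train(model_a)
train(model_b)

def fgsm(model, x, y, eps=0.2):
    # One-step FGSM attack: perturb in the direction of the loss gradient.
    x = x.clone().requires_grad_(True)
    nn.functional.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

x, y = next(iter(DataLoader(test_ds, batch_size=1000)))
x, y = x.to(device), y.to(device)
model_a.eval()
model_b.eval()
x_adv = fgsm(model_a, x, y)
with torch.no_grad():
    fooled_a = (model_a(x_adv).argmax(1) != y).float().mean().item()
    fooled_b = (model_b(x_adv).argmax(1) != y).float().mean().item()  # transfer rate
print(f"fools A: {fooled_a:.2%}, transfers to B: {fooled_b:.2%}")
```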

    Sharing dataset metadata and acquisition methods is important and should be the gold standard. Sharing the network architecture and training method is also important, but that’s more of a silver standard, just because most modern state-of-the-art models differ so minutely from each other in performance nowadays.
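    One concrete shape that “sharing metadata and acquisition methods” can take is a machine-readable dataset card recording where the data came from and how it was collected and filtered. The field names and example values below are illustrative assumptions, not any formal standard.

```python
# Hypothetical dataset card: a minimal record of provenance and acquisition.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class DatasetCard:
    name: str
    version: str
    acquisition_method: str               # how examples were collected
    sources: list[str]                    # where they came from
    collection_period: str
    licenses: list[str]
    filtering_steps: list[str] = field(default_factory=list)
    known_biases: list[str] = field(default_factory=list)

card = DatasetCard(
    name="example-web-text",
    version="0.1",
    acquisition_method="web crawl snapshot, language-ID filtered to English",
    sources=["https://commoncrawl.org"],
    collection_period="2023-01 to 2023-06",
    licenses=["varies by page; see per-document metadata"],
    filtering_steps=["near-duplicate removal", "toxicity-classifier filtering"],
    known_biases=["over-represents English-language, web-native text"],
)

print(json.dumps(asdict(card), indent=2))
```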

    Open source as a term should require both. This was the standard in the academic community before tech bros started running their mouths, and should be the standard once they leave us alone.