• @theneverfox
    93 months ago

    People just don’t get it… LLMs are unreliable, casual, and easily distracted/incepted.

    They’re also fucking magic.

    That’s the starting point - those are the traits of the technology. So what is it useful for?

    You said drafting basically - and yeah, absolutely. Solid use case.

    Here’s the biggest one right now, IMO - education. An occasionally unreliable tutor is actually better than a perfect one - it makes you pay attention. Hook it into docs or a search through unstructured comments? It can rephrase things for you, dumb them down, or just present them casually. It can generate examples, and even tie concepts together thematically.
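
    To make that concrete, here’s a rough sketch of the “tutor hooked into your docs” idea - assuming the OpenAI Python client as a stand-in (any chat-completion API works the same way); the model name and the doc-lookup helper are placeholders, not anything specific:

```python
# Sketch of the "tutor hooked into docs" idea: look up a passage, then have the
# model rephrase it at a chosen level. Assumes the OpenAI Python client; the
# model name and get_doc_passage() are placeholders.
from openai import OpenAI

client = OpenAI()

def explain(topic: str, passage: str, level: str = "casual") -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": f"You are a friendly tutor. Explain the docs at a {level} level, "
                        "add one concrete example, and say when you're not sure."},
            {"role": "user", "content": f"Topic: {topic}\n\nDocs:\n{passage}"},
        ],
    )
    return resp.choices[0].message.content

# passage = get_doc_passage("list comprehensions")  # however you search your docs
# print(explain("list comprehensions", passage))
```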

    Text generation - this is niche for “proper” usage, but very useful. I’m making a game, and I want an arbitrarily large number of quest chains with dialogue. We’re talking every city in the US (for now). I don’t need high quality or perfect accuracy - I need to take a procedurally generated quest and fluff it up with some dialogue.
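
    For what it’s worth, that fluffing loop can be this simple - a sketch assuming the OpenAI Python client, with a made-up quest dict standing in for whatever the procedural generator actually spits out:

```python
# Sketch of "fluff up a procedural quest with dialogue": the quest structure comes
# from normal game code, the model only writes flavor text around it. Assumes the
# OpenAI Python client; the model name and quest fields are illustrative.
import json
from openai import OpenAI

client = OpenAI()

quest = {
    "city": "Portland",
    "giver": "dockworker",
    "objective": "recover a stolen shipment",
    "reward": "50 credits",
}

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{
        "role": "user",
        "content": "Write 4 short lines of quest-giver dialogue for this quest. "
                   "Keep the facts exactly as given, invent only tone and flavor:\n"
                   + json.dumps(quest),
    }],
)
print(resp.choices[0].message.content)
```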

    Assistants - if you take your news feed or morning brief (or most anything else), they can present the information in a more human way. They can curate, summarize, or even make a feed interactive with conversation. They can even do fantastic transcription and pretty good image recognition to handle all sorts of media.
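
    Same shape for the morning-brief case - a sketch assuming the OpenAI Python client, with hard-coded headlines standing in for a real feed:

```python
# Sketch of a "morning brief" assistant: take raw feed items and have the model
# curate and summarize them conversationally. Assumes the OpenAI Python client;
# the model name and the items list are stand-ins for a real RSS/news source.
from openai import OpenAI

client = OpenAI()

items = [
    "City council approves new bike lanes downtown",
    "Local startup raises Series A for battery recycling",
    "Rain expected through the weekend",
]

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{
        "role": "user",
        "content": "Give me a short, conversational morning brief from these "
                   "headlines, most important first:\n- " + "\n- ".join(items),
    }],
)
print(resp.choices[0].message.content)
```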

    There’s plenty more, but here’s the thing - none of those are particularly economically valuable. Valuable at an individual/human level, but not something people are willing to pay for.

    The tech is far from useless… Even in its current state, running on minimal hardware, it can do all sorts of formerly impossible things.

    It’s just being sold as what the people selling it want it to be, not what it is.