Ok, let me give a little bit of context. I will turn 40 in a couple of months, and I’ve been a C++ software developer for more than 18 years. I enjoy coding, and I enjoy writing “good” code: readable and so on.

However, for the past few months, I’ve become really afraid for the future of the job I like, given the progress of artificial intelligence. Very often I can’t sleep at night because of this.

I fear that my job, while not completely disappearing, will become a very boring one consisting of debugging automatically generated code, or that the job will disappear altogether.

For now, I’m not using AI. A few colleagues do, but I don’t want to, because one, it removes a part of the coding I like, and two, I have the feeling that using it is sawing off the branch I’m sitting on, if you see what I mean. I fear that in the near future, people not using it will be fired because management sees them as less productive…

Am I the only one feeling this way? I get the impression that all tech people are enthusiastic about AI.

  • knightly the Sneptaur · 11 months ago

    Generative neural networks are the latest tech bubble, and they’ll only be decreasing in quality from this point on as the human-generated text used to train them becomes more difficult to access.

    One cannot trust the output of an LLM, so any programming task of note is still going to require a developer for proofreading and bugfixing. And if you have to pay a developer anyway, why bother paying for ChatGPT?

    It’s the same logic as Tesla’s “self-driving” cars: if you need a human in the loop, then it isn’t really automation, just sparkling cruise control that isn’t worth the price tag.

    I’m really looking forward to the bubble popping this year.

    • EmergMemeHologram@startrek.website · 11 months ago

      You’re assuming we only keep trying the same things we’ve already tried. We aren’t.

      We’re constantly improving the accuracy, efficiency, size, complexity, and capabilities of this tech.

      I agree with you about trust and everything else, but don’t assume it’s only getting worse; the transformer-based architecture is getting a lot of focus and research from basically every institution in the world.

      There’s a new math LLM trained on synthetic problems that’s able to do math at a university level; we might end up making up new data for it, the way you teach a kid to read starting from basic sentences.

      • knightly the Sneptaur · 11 months ago

        > I agree with you about trust and everything else, but don’t assume it’s only getting worse

        But they are getting worse, specifically because their own outputs are getting fed to the next generation of language models as input. Google Search, for example, is rapidly becoming worthless now that LLMs can be used to optimize for SEO. Meanwhile, Google’s own LLM is getting rebranded ’cuz last year’s tech demo couldn’t even regurgitate basic facts accurately and wiped $100B off the company’s market value.