I think AI can handle far fewer jobs than people will try to replace with it; that's kind of the issue
High skilled jobs will just start using AI as a tool to automate routine (or have already started, in some cases). The most efficient use of AIs we have now is to pair it with a human, anyway
The worry is focused on the amount of damage that is likely to be done by the people in decision-making positions thinking they can save money by removing more paid positions.
Companies will save so much money once they decide to replace their CEOs with AIs…
Tbf most could do it for cheaper with a dartboard and some post-its
I never understood this? How could the CEO be replaced? Who would be controlling the AI? Wouldn't that person just be the new CEO? I have so many questions…
The shareholders would do a 'Twitch plays' on the AI
If you are trying to seriously understand how to do it… well, you can't. Current AIs can't fully replace anybody, and it's an open question whether they can even partially replace anybody (AKA improve their productivity) to any impactful extent.
Depending on how loosely you define AI, current AIs are great at replacing warehouse workers and jobs that rely heavily on routine and have little to no innovation and critical thinking involved.
Look, I already got the algorithm written right here!
The problem with humans reviewing AI output is that humans are pretty shit at QA. Our brains are literally built to ignore small mistakes. Digging through the output of an AI that’s right 95% of the time is nightmare fuel for human brains. If your task needs more accuracy, it’s probably better to just have the human do it all, rather than try to review it.
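The back-of-the-envelope math behind that point can be sketched out. All the rates below are made-up assumptions for illustration, not measurements of any real system; only the 95% figure comes from the comment above:

```python
# Illustrative arithmetic only: every rate here is a made-up assumption,
# not a measurement of any real system.
ai_error_rate = 0.05          # "right 95% of the time", per the comment above
reviewer_catch_rate = 0.50    # assume a human spots half of the AI's mistakes
solo_human_error_rate = 0.02  # assume an unassisted human errs 2% of the time

# Errors that survive the AI-plus-reviewer pipeline
residual = ai_error_rate * (1 - reviewer_catch_rate)

print(f"AI + human review: {residual:.1%} errors slip through")
print(f"Human alone:       {solo_human_error_rate:.1%}")
```

Under these assumptions, review only beats the solo human if the reviewer catches well over 60% of the AI's mistakes, and catching small mistakes is exactly what the comment says human brains are bad at.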
Then each QA human will be paired with a second AI that will catch those mistakes the human ignores. And another human will be hired to watch that AI and that human will get an AI assistant to catch their mistakes.
Eventually they’ll need a rule that you can only communicate with the human/AI directly above you or below you in the chain to avoid meetings with entire countries of people.
Should note that a lot of the Microsoft Recall project revolves around continuously capturing human interactions on the computer in real time, with the hope of training a GPT-5 model that can do basic office tasks automagically.
Will it work? To some degree, maybe. It’ll definitely spit out some convincing looking gibberish.
But the promise is to increasingly automate away office and professional labor.
"Take this code and give me Jest tests with 100% coverage. Don't describe, don't scaffold, full output."
Saves me hours.
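For anyone who hasn't tried this workflow: the output of a prompt like that tends to look something like the sketch below. The `isEven` function is a made-up stand-in for "this code", and the `test`/`expect` shims are only there so the snippet runs standalone under plain Node (Jest normally provides those as globals):

```javascript
// A trivial stand-in for "this code" in the prompt above.
function isEven(n) {
  return n % 2 === 0;
}

// Minimal shims so this runs without a Jest install; under Jest,
// `test` and `expect` are globals and these two lines go away.
const test = (name, fn) => { fn(); console.log(`ok: ${name}`); };
const expect = (actual) => ({
  toBe(expected) {
    if (actual !== expected) throw new Error(`${actual} !== ${expected}`);
  },
});

// The sort of cases the prompt spits back, covering every branch:
test("returns true for even numbers", () => {
  expect(isEven(4)).toBe(true);
});
test("returns false for odd numbers", () => {
  expect(isEven(7)).toBe(false);
});
test("handles zero", () => {
  expect(isEven(0)).toBe(true);
});
```

The boilerplate-heavy, branch-by-branch shape of test suites is exactly why this is one of the few tasks where the review cost stays low: each generated case is tiny and independently checkable.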
Oh, don’t worry, the errors you see will go away quickly, assuming they aren’t a feature.
Basically it is going the following way:
AI isn’t magic, no matter how much techbros try to humanize the technology because NeuRAl nEtWOrKs.
How about:
Company rehires a percentage of its workforce, with the lowered demand for those specific workers driving salaries down.
Do you mean AI, just Generative models, or LLMs in particular? I’m pretty thoroughly convinced that AI is a general solution to automation, while generative models are only a partial but very powerful solution.
I think the larger issue is actually that displacement from the workforce causes hardship to those who have been displaced. If that were not the case, most people either wouldn’t care or would actively celebrate their jobs being lost to automation.