If robots end up becoming sentient and aren’t biological, they won’t have evolved the way we have: vulnerable to bacteria, diseases, and poisons/toxins. That vulnerability is why our sense of smell detects bad odors, why we wash our hands and take showers, and why we have a strong aversion to things that disgust us.

Robots may just happily pick up shit, dive into dumpsters and roll around in sewage. And why wash your hands or take a shower when water could hurt you? Who even knows if they would perceive smell the same way we do.

  • Mossy Feathers (She/They) · 23 hours ago

    Eh, I personally think it is, but you have to temper your expectations. It’s absolutely not Artificial General Intelligence, nor is it as flexible or capable of rapid learning as human intelligence (or likely most forms of living intelligence in general). However, I’d challenge the notion that it lacks intelligence entirely.

    AI still “learns” from what you shove into it; it’s still adjusting its internal parameters to fit the information stream(s) it’s being exposed to, which is not unlike how the human brain is believed to function. As such, I personally view it as intelligent, but nowhere near as intelligent as people think it is, and absolutely not in the way people want it to be.

    One of the big differences that I see is that, afaik, AI is unable to learn while it’s running. You have to train it, run it, train it again based on user input, run it, train it again on more user input, and so on (there’s a rough sketch of what I mean below). Humans are more efficient at learning when they sleep and take breaks, but are still capable of learning things without “shutting down”, so to speak (not that we ever truly shut down outside of death, but that’s tangential).
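    To make that concrete, here’s a toy Python sketch of the “train it, then run it with frozen weights” cycle. The model and numbers are made up purely for illustration (a two-parameter linear model, nothing like a real system), but the train/run split is the point:

    ```python
    import numpy as np

    # Toy "model": one weight and one bias, learning y = 2x + 1 from noisy samples.
    rng = np.random.default_rng(0)

    def train(xs, ys, w, b, lr=0.05, epochs=200):
        # Training phase: this is the only place the parameters actually change.
        for _ in range(epochs):
            pred = w * xs + b
            grad_w = np.mean(2 * (pred - ys) * xs)
            grad_b = np.mean(2 * (pred - ys))
            w -= lr * grad_w
            b -= lr * grad_b
        return w, b

    def run(x, w, b):
        # Inference phase: the model just gets used; nothing is updated here,
        # no matter how many new inputs it sees.
        return w * x + b

    xs = rng.uniform(-1, 1, 100)
    ys = 2 * xs + 1 + rng.normal(0, 0.1, 100)

    w, b = train(xs, ys, 0.0, 0.0)   # "train it"
    print(run(0.5, w, b))            # "run it" -> roughly 2.0
    print(w, b)                      # unchanged by any amount of inference
    ```

    To get the model to learn anything new, you have to go back and call the training step again; running it on fresh inputs alone never changes it.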

    Another difference is that, unlike “natural intelligence”, AI ends up being hyper-focused on a specific task. It’s a bit like grabbing an ice cream scoop and removing a very specific part of the brain, let’s say the part responsible for imagining images, and then letting people interface with that alone. Yeah, it’s not gonna be good at parsing text because that’s not what it was designed to do. That’s a different part of the brain. The one you’re playing with right now is only good at visualizing images, so you’re gonna get pretty images, but good luck getting it to do proper text, understand body language, etc.

    Finally, AI hallucinates like crazy. This is one where I’m not sure if we’re really that different from AI (I’ll explain in a moment); but it is a big issue when it comes to trying to get AI to factually report information or perform logic tasks. You can ask an AI what 2+2 is and get 4 one day, 5 the next, 3 on Saturdays and then -2027346 on Christmas.
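    Part of why the answers drift like that (at least for the text generators) is that they sample outputs from a probability distribution rather than computing them. Here’s a toy sketch of that idea; the scores are invented for illustration and aren’t from any real model:

    ```python
    import numpy as np

    rng = np.random.default_rng()

    # Pretend the model assigns these scores to candidate answers for "2+2 = ?".
    answers = ["4", "5", "3", "-2027346"]
    scores = np.array([6.0, 2.5, 2.0, 0.1])   # "4" is strongly favoured, but not certain

    def sample_answer(temperature=1.0):
        # Softmax over the scores, then sample: higher temperature = more randomness.
        probs = np.exp(scores / temperature)
        probs /= probs.sum()
        return rng.choice(answers, p=probs)

    # Same "prompt", several runs: usually "4", occasionally something else.
    print([sample_answer(temperature=1.5) for _ in range(10)])
    ```

    But wait! Doesn’t that make it unintelligent?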


    No.

    Going back to the previous statement about AI being hyper-focused, it just means you’re not interacting with a part of the brain capable of logic; you’re interacting with something else. Maybe the speech center, idk.

    However, there’s another element to this: AI doesn’t have a persistent “reality anchor” like we do. To an AI, fact and fiction are purely conceptual because it doesn’t truly exist in our world; it’s off in its own little digital world. Furthermore, the experiences it can gain from the training set are heavily limited compared to what living creatures experience. We have a constant stream of information that reminds us what is real, who we are, what things look like, how things move and so on; and we get that data stream in 3 dimensions (arguably 2.5, but I digress) instead of 2. It’s like expecting a plant that normally grows exclusively in a swamp to thrive on a trickle of water.

    We ourselves tend to begin hallucinating when our senses are cut off from the outside world, because our brains make up stimuli when the expected stimuli are missing. So… I’m not sure the hallucinations are totally unreasonable, unrealistic, or all that different from how we’d behave if subjected to the same environment; but at the very least it’s something that makes AI appear unintelligent.

    That’s not to say that AI is a good thing or that it lives up to the hype. Fuck AI for being wildly overhyped, overused, and destroying people’s livelihoods in a world where earning a living is still required for some god-forsaken reason (just a reminder that the phrase “earn a living” implies you don’t deserve to live if you aren’t able to make money or have someone doing it for you). At the same time, however, it kinda is intelligent. I think people are just expecting way more from it than it’s capable of doing. It’s like people expect intelligence to manifest in grayscale when it’s more like RGBA or something.

    Edit: sorry about the massive wall of text; I was fascinated with AI and its potential for a while, which meant it lived rent-free in my head as a series of philosophical questions about things like intelligence and what it means for humans that something designed to function as a series of virtual neurons would behave so similarly to, yet so differently from, us. These were the kinds of conclusions I came to.