• @Mereo@lemmy.ca
    15
    10 months ago

    While cameras generated a mechanical reproduction of a scene, she explained that they do so only after a human develops a “mental conception” of the photo, which is a product of decisions like where the subject stands, arrangements and lighting, among other choices.

    I agree. The types of AIs that we have today are nothing more than mixers of various mental conceptions to create something new. These mental conceptions come with life experience and are influenced by a person’s world view.

    Once you remove this mental conception, will the AIs we have today be able to thrive on their own? The answer is no.

      • donuts
        10
        10 months ago

        But this weird, almost religious devotion to some promise of AI and the weird white knighting I see folks do for it is just baffling to watch.

        When you look at it through the lens of the latest get-rich-quick-off-some-tech-that-few-people-understand grift, it makes perfect sense.

        They naively see AI as a magic box that can make infinite “content” (which of course belongs to them for some reason, and is fair use of other people’s copyrighted data for some reason), and infinite content = infinite money, just as long as you can ignore the fundamentals of economics and intellectual property.

        People have invested a lot of their money and emotional energy into AI because they think it’ll make them a return on investment.

    • FaceDeer
      4
      10 months ago

      When I generate AI art I do so by forming a mental conception of what sort of image I want and then giving the AI instructions about what sort of image I want it to produce. Sometimes those instructions are fairly high-level, such as “a mouse wearing a hat”, and other times the instructions are very exacting and can take the form of an existing image or sketch with an accompanying description of how I’d like the AI to interpret it. When I’m doing inpainting I may select a particular area of a source image and tell the AI “building on fire” to have it put a flaming building in that spot, for example.

      To me this seems very similar to photography, except I’m using my prompts and other inputs to aim a camera at places in a latent space that contains all possible images. I would expect that the legal situation will eventually shake out along that line.

      This particular lawsuit is about someone trying to assign the copyright for a photo to the camera that took it, which is just kind of silly on its face and not very relevant. Cameras can’t hold copyrights under any circumstances.