Or maybe a two click solution? :)

  • soft_frog@kbin.social · 1 year ago

    Docker is basically a virtual machine image you package your software in. Then when you run the software you don’t need to worry about compatibility or having the right dependencies installed; it’s all included in the Docker image.

    Think of Docker images as Nintendo cartridges that you can take to any friend’s house, plug in, and play. Servers can run more than one Docker container at a time.

    The approach greatly simplifies writing code and having it work on your server, reduces errors, and adds a layer of security.
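    The cartridge analogy in practice, as a hedged sketch: `nginx` is a real official image used purely as an example; the container name `demo` and port `8080` are made up for illustration.

    ```shell
    # Fetch a published image (the official nginx web server, as an example)
    docker pull nginx:alpine

    # Run it: the same image behaves the same on any machine with Docker installed
    docker run --rm -d -p 8080:80 --name demo nginx:alpine

    # The web server answers on port 8080 regardless of what the host has installed
    curl http://localhost:8080

    # Stop (and, via --rm, remove) the "cartridge" when done
    docker stop demo
    ```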

    • Elle@lemmy.world · 1 year ago

      I’ve read and reread, listened and relistened to info on docker/containers and I still feel like I’m missing something tbh.

      Let’s say you have a Docker container for something and it’s built for a Linux distro; that won’t run on another OS, will it? Maybe not even on a different Linux distro from the one it was made for (e.g. Ubuntu or Arch or Fedora or whatever).

      To go off your example, Docker’s not like an expansion module that makes your Switch games work on a PlayStation or Xbox… Right? There seems to be some kind of mixed messaging on this, given how readily containers are recommended (often with a presumption of familiarity that isn’t there for the people asking).

      I guess I’ve also been confused because like…Shouldn’t old installers handle bundling or pulling relevant dependencies as they’re run? I’d imagine that’s where containers’ security benefits come into play though, alongside being virtualized processes if I’m not mistaken.

      • Disregard3145@lemmy.world · 1 year ago

        For simplicity, it’s easiest to imagine it as a simulator or emulator. It’s not trying to be your machine (called the host machine); pretend it doesn’t actually use your OS or software at all.

        Imagine each container is a fresh new machine on your desk with a blank hard drive. The image is basically the result of a set of instructions (a Dockerfile) that Docker follows to install all the stuff you need to get the machine running.

        Normally it starts by installing an OS like Alpine Linux (Alpine publishes this Docker image, and you simply build on top of it).

        Then you install any extra utilities and software you might need to run the program, maybe Python or Java (again, there are official images based on Alpine, managed and updated by the upstream projects).

        Finally, you install the thing you actually want and tell the machine to run it when it boots up (often the software you want to run provides an official Docker image that has done all this for you).
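        The three steps above, sketched as a hypothetical Dockerfile (the script name `app.py` is made up for illustration; `alpine` and `python3` are real):

        ```dockerfile
        # Step 1: start from the OS image that Alpine publishes
        FROM alpine:3.19

        # Step 2: install the extra software the program needs (Python, as an example)
        RUN apk add --no-cache python3

        # Step 3: add the software you actually want to run
        # (app.py is a hypothetical script; in practice you COPY your own code)
        COPY app.py /app/app.py

        # Tell the container what to run when it "boots up"
        CMD ["python3", "/app/app.py"]
        ```

        Running `docker build` on this produces the image; every container started from it begins from that same finished state.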

        So when you run a Docker image, all this setup has already been done; the results are stored in a way that Docker can apply straight to your shiny new container in an instant, ready to go.

        Docker Compose is a set of instructions for setting up a bunch of these computers with a network between them.
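        A minimal sketch of that “bunch of computers with a network” idea as a docker-compose.yml. The service names and the password value are made-up examples; `nginx` and `postgres` are real official images.

        ```yaml
        # Two "computers" (containers) that Compose wires together on a private network
        services:
          web:
            image: nginx:alpine          # official web server image
            ports:
              - "8080:80"                # expose the site to the host machine
          db:
            image: postgres:16-alpine    # official database image
            environment:
              POSTGRES_PASSWORD: example # placeholder value, not a real secret
        # Compose creates a network so "web" can reach "db" by its service name
        ```

        One `docker compose up` then boots the whole set of machines together.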