Hello everyone. My home server (an Intel NUC6) died on me recently. I had set it up as my home server running openSUSE Leap with the following services:
- NFS server
- SFTP over SSH for remote file transfers (I was also looking for a faster alternative for local transfers, maybe TFTP)
- Qbittorrent
- Aria2
- Emby
- I was experimenting with Nextcloud, with pfSense to follow after that.
- Definitely an office suite and a few Nextcloud add-ons.
I have no alternative machine at the moment to use as a replacement, but I plan to reinstall everything in a VM (VirtualBox or QEMU/libvirt) on my desktop. I have no experience with containers, but I think installing each service in a container would make it easier to move everything to my new home server later.
Should I use Debian or openSUSE with Docker? Maybe even Proxmox? Or should I just stick with installing everything directly on my distro with no containers? I would love to hear your opinions on the best approach.
I'm firmly of the mindset of containerizing everything. It may be harder to set up for services that you write yourself or ones that don't already have container images, but as you said, it's so much easier to migrate in the future.
I actually use Podman for my services and systemd to manage their lifecycle. For each service, I have a folder that contains the systemd service file (this doesn't really work on btrfs systems out of the box: the service files need to be in the same subvolume as /etc or they won't start at boot), plus any config files or anything else that needs to be mounted as a volume into my container. I back up the directory that contains all of those folders with my nightly backup. If my server craps out, I can restore that directory from my backup, systemctl link and enable all of my service files, and I'm back up to 100%.
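To make that a bit more concrete, here's a rough sketch of the per-service-folder idea with rootless Podman; the service name, image and paths are made-up examples, not necessarily what I actually run:

```sh
# Hypothetical layout, one folder per service under a backed-up parent:
#   ~/services/qbittorrent/
#     container-qbittorrent.service   <- systemd unit
#     config/                         <- bind-mounted into the container

podman create --name qbittorrent \
    -v ~/services/qbittorrent/config:/config \
    -p 8080:8080 docker.io/linuxserver/qbittorrent

# Let podman write the unit file, then keep it next to the service's data
podman generate systemd --new --files --name qbittorrent
mv container-qbittorrent.service ~/services/qbittorrent/

# After restoring ~/services from backup on a new box:
systemctl --user link ~/services/qbittorrent/container-qbittorrent.service
systemctl --user enable --now container-qbittorrent.service
loginctl enable-linger "$USER"   # so user services start without a login
```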
I'm considering containerizing everything except the OS (I'm not ready for immutable OSes yet). I mentioned Docker because it's what I keep finding guides for and because I think it's simpler. How does it compare to Podman (for a containerizing beginner)?
Edit: I will mostly use BTRFS and snapshots, and I would definitely put my containers in a separate subvolume to avoid data loss when rolling back.
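Roughly what I have in mind for the subvolume part (a sketch assuming Docker as the runtime; on a default openSUSE layout /var may already be its own subvolume, in which case this is redundant):

```sh
# As root, on a btrfs root filesystem. If /var/lib/docker already has
# data in it, move the contents aside before creating the subvolume.
btrfs subvolume create /var/lib/docker
btrfs subvolume list / | grep docker   # verify it exists

# btrfs snapshots are not recursive, so a snapper rollback of the root
# subvolume leaves this subvolume (and the container data in it) alone.
```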
From a user point of view, podman is mostly identical to docker. Like 98% of the time you can just replace ‘docker’ with ‘podman’ and it works. How they work under the hood is very different, though. Podman is designed around running rootless and daemonless. But if you don’t care about those things, use docker. Docker supports rootless as well now anyway, but you need to set it up manually. The biggest difference I have found is that podman doesn’t support docker-compose, which is extremely popular. Lemmy uses it, for example. There’s an additional couple of packages you can install that add support for docker-compose, but then podman uses a daemon, which defeats one of the purposes of using podman in the first place.
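To illustrate the "98%" point, the same container starts identically with either tool (image and ports are just an example):

```sh
# Identical flags, identical result:
docker run -d --name web -p 8080:80 docker.io/library/nginx
podman run -d --name web -p 8080:80 docker.io/library/nginx

# If your muscle memory insists on typing docker:
alias docker=podman
```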
My workaround for btrfs and systemd files is to keep a folder in /etc with all my service files, and then soft-link them into each service's directory. That's just for organization purposes; a backup of the service directory wouldn't include the contents of the systemd file, just the link to it.
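Something like this, with made-up paths (the folder name under /etc is whatever you like):

```sh
# Real unit files live in /etc, the same subvolume as the rest of the
# system config, so they start reliably at boot on btrfs:
mkdir -p /etc/my-services
cp container-emby.service /etc/my-services/
systemctl enable --now /etc/my-services/container-emby.service

# Soft link back into the per-service folder purely for organization.
# A backup of ~/services/emby captures only the link, not the unit's contents.
ln -s /etc/my-services/container-emby.service ~/services/emby/
```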
Thanks for explaining. I know Podman is rootless. My services were running as their own non-login users (qbituser for qBittorrent, emby for Emby, and so on) and I needed sudo if I wanted to change anything. That's not a big deal for me, so Docker seems easier to use.
I'm considering containerizing everything except the OS (I'm not ready for immutable OSes yet)
If you do ever want to script the system but don't want to go full immutable OS, Ansible is very useful. I use it for things like tweaking sysctls and installing common packages like htop and borgbackup across all my servers.
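Even before writing proper playbooks, ad-hoc commands cover a lot of that; a sketch, assuming an inventory group called homeservers and the ansible.posix collection installed:

```sh
# Install common packages everywhere (-b = become root):
ansible homeservers -b -m ansible.builtin.package \
    -a "name=htop,borgbackup state=present"

# Tweak a sysctl on all hosts:
ansible homeservers -b -m ansible.posix.sysctl \
    -a "name=vm.swappiness value=10 state=present"
```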
I like Proxmox, but it kinda sounds like you'd be just fine with Docker running on openSUSE or Debian. Or whichever the favourite container runtime is these days (idk why Podman is so great, but I've seen some posts from people who love it).
I have tiered out my server with all my app services (jellyfin, nextcloud, etc) running in docker on a debian vm, then have lxc containers for nfs, VPN, etc. Proxmox itself handles ZFS, but I’m sure that’s bad practice and there is probably a better way - but it works for me so 🤷.
I've also got an OPNsense VM, but it's not used for anything "production" atm; I'm just checking it out to see if I should switch my pfSense box over.
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
- LXC: Linux Containers
- NAS: Network-Attached Storage
- NUC: Next Unit of Computing (Intel's brand of small computers)
- SSH: Secure Shell, for remote terminal access
- UDP: User Datagram Protocol, for real-time communications
- VPN: Virtual Private Network
IMO containerize everything. Containers save a lot of headaches, and time is valuable. You are correct that moving configurations is trivial with containers. Backing them up and restoring is also easy.
In the meantime you can install whatever you want in a VM - just keep track of the Docker configurations and move them when ready. I like Proxmox, but it may be overkill if you aren’t going to have a complex setup. The main selling point would be that you ‘containerize’ your OS as well, which means you can snapshot it and do various other tricks with running multiple OS’s. If your new server will eventually be a NAS, Proxmox can do other neat tricks like running TrueNAS/OpenMediaVault in a VM, or hosting a ZFS pool on Proxmox itself.
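"Keep track of the Docker configurations" can be as simple as keeping every stack in its own folder with a compose file, then tarring the lot when it's time to move; a sketch with invented paths:

```sh
# On the interim VM: everything lives under /opt/stacks/<service>/,
# each with a docker-compose.yml plus its bind-mounted data.
tar czf stacks-backup.tar.gz -C /opt stacks
scp stacks-backup.tar.gz newserver:/tmp/

# On the new server: restore and bring each stack back up.
tar xzf /tmp/stacks-backup.tar.gz -C /opt
cd /opt/stacks/emby && docker compose up -d   # repeat per stack
```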
If you end up wanting to use Proxmox, you can also use Proxmox within a VM on your current machine to get comfortable in advance.
I'm containerizing everything. I like to keep my setup simple, so no OS containerization, since I will be using a low-power mini PC (NUC, HP Mini, Dell Micro or Lenovo Tiny). I will use Proxmox in the VM to get an idea of how it works, and because I think the web UI might be easier to use than SSHing into the VM. Later, on the new server, I will most likely use Debian + Docker.
Well, for starters, TFTP is the wrong thing for local file transfers if you want them to be fast. The only reason it's still around is that it's simple and it's the only file transfer protocol built into the firmware of the network card.
You read that right: it's a simple file transfer protocol built into every network card made in the last couple of decades.
Your best bet for file transfer is probably something like a WebDAV server, which Nextcloud can handle for you. You can also just enable plain WebDAV on something like Apache httpd, but then you've got to handle authentication yourself (or allow local access only and connect over a VPN).
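Once the WebDAV endpoint exists (Nextcloud exposes one per user), a transfer is just a plain HTTP request; the URL and credentials below are placeholders:

```sh
# Upload to a Nextcloud user's WebDAV root:
curl -u alice:app-password -T big-file.mkv \
    "https://cloud.example.lan/remote.php/dav/files/alice/big-file.mkv"

# And download it back:
curl -u alice:app-password -o big-file.mkv \
    "https://cloud.example.lan/remote.php/dav/files/alice/big-file.mkv"
```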
Thanks for the note about TFTP. I used to use FTP to transfer files to/from my Android phone, which got me around ~30 MB/s on local transfers, but I abandoned it (for security reasons) in favour of SSH file transfer, which only gets me ~8 MB/s locally (my phone is probably just slow at the crypto). So I was thinking of keeping SSH file transfer for remote transfers and using TFTP (because it runs over UDP) for local ones. If WebDAV offers a reasonable local transfer speed, I'll use it to replace all of the above.
Yes, WebDAV will max out your local connection. It's generally not the encryption that makes SSH slow but the fact that it is designed to give real-time terminal feedback. So that you can see each letter as you type it in an SSH session, the buffers are really small and it intentionally sends a ton of small packets. Great for single characters, bad for large file transfers.
It's fine here and there when you need to push a config file or something, but moving large files is not really what it's designed for, and consequently it sucks at it.
- install Proxmox & create a VM with your favourite distro
- set up Docker & Portainer for GUI management (quick install sketch below)
- have fun
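For the Portainer step, the CE edition is roughly one container (double-check the current Portainer docs for the exact ports/tags):

```sh
# Volume for Portainer's own data
docker volume create portainer_data

# Portainer CE managing the local Docker socket, web UI on https://host:9443
docker run -d --name portainer --restart=always \
    -p 9443:9443 \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v portainer_data:/data \
    portainer/portainer-ce:latest
```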
Portainer is not free to use, correct? Alternatively, one could use Unraid (it provides NFS, VMs and Docker management).
there’s a community edition (portainer-ce) which is totally free to use
The easiest thing to do would be to run a VM now and do a V2P backup and restore.
Thanks for the V2P note. If containerizing everything turns out to be a headache, I'll opt for a normal non-containerized setup, with the idea of converting my VM to a physical machine.
chmod +x avoid-docker.sh; ./avoid-docker.sh
Forget what everyone says, go for NixOS. This is really something