I can’t help but feel overwhelmed by the sheer complexity of self-hosting modern web applications (if you look under the surface!)
Most modern web applications are designed to basically run standalone on a server. Integrating them into an existing environment is a real challenge, if not impossible. They often come with their own set of requirements and dependencies that don’t easily align with an established infrastructure.
“So you have an already running and fully configured web server? Too bad for you, bind me to port 443 or GTFO. Reverse-proxying by subdomain? Never heard of that. I won’t work. Deal with it. Oh, and your TLS certificates? Screw them, I ship my own!”
Attempting to merge everything together requires meticulous planning, extensive configuration, and often annoying development work to find workarounds.
Modern web applications, with their elusive promises of flexibility and power, have instead become a source of maddening frustration whenever they aren’t the only application being served.
My frustration about this is real. Self-hosting modern web applications is an uphill battle, not only in terms of technology but also when it comes to setting up the hosting environment.
I just want to drop some PHP files into a directory and call it a day. A PHP interpreter and a simple HTTP server – that’s all I want to need for hosting my applications.
Containers really shine in today’s self-hosting world. Complete userspace isolation, basically no worries about dependencies or conflicts since everything is shipped and pre-configured internally, easy port mapping, immutable “system” files, volume mounting for persistent data… and much more. If built properly, container images solve almost every problem you’re grappling with.
I can’t imagine ever building another application myself without containerization. I can’t remember the last time I installed any kind of server-side software directly on a host, with the exception of packages the host itself unavoidably needs to support containers or to improve its security posture.
In my (admittedly strong) opinion, it’s absolute madness, and dare I say reckless and incomprehensible, that anybody would create a brand-new product that doesn’t ship via container images in this day and age, provided they have the required knowledge to make it happen, or the capacity to get up to speed and learn how to do it (properly and following best practices, of course) in time to meet a deadline.
I’m sure some would disagree, or could cite special use-cases where containers wouldn’t be a good fit for a product or solution, but I’m pretty confident those are really niche cases that apply to barely anyone.
While php is still cool… join the dark side and start using containers 😏
And that’s why my rule is: if it doesn’t container, it doesn’t go on my server. If I can’t cram the application into my docker compose stack, I look for an alternative. Hell, I run Pi-hole and OctoPrint inside containers.
Yes, containers could be the way: if every application came in a container, or if it were super easy to containerize them without the application noticing.
Can I run half a dozen applications in containers that all need port 443, and how annoying is that to set up?
Yes, you can just map each container’s internal port 443 to a different port on the host and then reverse-proxy them all.
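A minimal compose sketch of that approach (the image names and host ports here are made up): each app keeps its beloved internal port 443, but the host sees a different port for each one.

```yaml
# Hypothetical docker-compose.yml; image names and host ports are examples.
services:
  app-one:
    image: example/app-one        # insists on listening on 443 internally
    ports:
      - "8443:443"                # host port 8443 -> container port 443
  app-two:
    image: example/app-two
    ports:
      - "9443:443"                # different host port, same internal port
```

A reverse proxy on the host (or in yet another container) then forwards each subdomain to the matching host port, e.g. `app-one.example.com` to `localhost:8443`.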
retvrn to cgi-bin
The thing that boils my blood is secret SQLite databases. I just want to store my volumes on a NAS using NFS and run the stacks on a server built for it. Having a container randomly blow up because an undocumented SQLite database failed to get a lock sucks ass.
secret sqlite databases
The thing is: “secret”. SQLite databases in general are awesome. Basically no configuration needed, they just work, they don’t even need their own server, and in 99% of all cases they’re absolutely enough for what they’re used for. I’d always choose an SQLite database over anything else, but it should be made clear that such a database is used.
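One way to hunt those “secret” databases down before they bite, assuming your volumes live somewhere like `/srv/volumes` (a made-up path, pass your own): every SQLite 3 file starts with the same magic header bytes, so a quick scan finds them regardless of file extension.

```shell
#!/bin/sh
# Sketch: scan a directory of container volumes for SQLite databases
# by their magic header, so undocumented ones don't surprise you on NFS.
# The default path /srv/volumes is an assumption; pass your own data dir.
scan_for_sqlite() {
  find "$1" -type f 2>/dev/null | while read -r f; do
    # Every SQLite 3 file begins with the bytes "SQLite format 3"
    if head -c 15 "$f" 2>/dev/null | grep -q 'SQLite format 3'; then
      printf '%s\n' "$f"
    fi
  done
}

scan_for_sqlite "${1:-/srv/volumes}"
```

Anything it prints is a candidate for moving onto local storage instead of NFS, or at least for reading the project’s docs about locking.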
I recently(ish) installed Unraid on a new NAS, as I’d heard good things but knew nothing about it. Didn’t really intend to install much on it, but got playing around with the Docker stuff built into it and… fuck me. The amount of time I used to spend installing dependencies, configuring stuff, trying to work out why the hell it wasn’t working. With really not much work I’ve got a fully fledged Arr setup with Jellyfin, a full dev environment, Grafana and InfluxDB for monitoring, automated TLS certs, and a bunch of other things all working pretty damn flawlessly.
Containers are awesome.
Sometimes venting a little helps. I finally sat down and learned the basics of Docker and found an easy-to-follow video series on how to set up Docker with Portainer and Nginx Proxy Manager. Works like a charm. I also set up my GoToSocial instance again but failed at setting up a Lemmy instance… but I guess that’s for another discussion :)
Care to share what the helpful series was?
Sure. It’s a 4-part video series by the German YouTuber Raspberry Pi Cloud, created at the end of 2021 (but it still works with the most recent versions).
He goes from technical background through basic system preparation and Docker installation to a fully featured setup. I skipped lots of content up to the point where he was done with the Docker installation (I had prepared and “cleaned” my system and installed Docker beforehand).
- Basics and somewhat not-so-nice manual Docker installation not using the system’s package manager: https://www.youtube.com/watch?v=8QgBqu-tE-I
- Portainer installation and setup and general usage: https://www.youtube.com/watch?v=ZYgCYgxbKgQ
- Nginx Proxy Manager and reverse-proxying, Vaultwarden installation: https://www.youtube.com/watch?v=SsnrH-5_ORE
- More on Nginx Proxy Manager (Let’s Encrypt), Pi-Hole installation and setup: https://www.youtube.com/watch?v=D6aOdey5nj8
Perhaps a solution like CloudPanel or Cloudron would make self-hosting multiple sites / apps easier for you. I use CloudPanel to host multiple Wordpress websites and it works very well. I use Cloudron to quickly deploy various open-source apps on one VPS.
Sadly, a PHP dev environment and a webserver is not enough for modern devs.
I just ended up installing Proxmox, and everything I install gets its own VM. It binds to whatever port it wants, and port 443 on my public IP binds to a VM with nginx. If you hit a subdomain, nginx proxies the request to the actual server and port. Servers can ship whatever certificates they want; my nginx is the one clients negotiate TLS with, so it has its own certificate. The only other thing running on that VM is certbot.
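A sketch of what that nginx VM does, with hypothetical hostnames, backend IPs, and certificate paths: terminate TLS once at the edge, then proxy by subdomain to the right VM and port.

```nginx
# Hypothetical server block; domain, backend address and cert paths are made up.
server {
    listen 443 ssl;
    server_name jellyfin.example.com;        # one subdomain per backend VM

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://10.0.0.12:8096;    # the app VM's private IP and port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

One such block per subdomain, and the backend VMs never need to know they’re behind a proxy.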
It’s honestly much simpler this way. Need to restart a machine after an install? Everything else stays up. One piece of software needs glibc version fuck-my-ass? Don’t care, that machine will have that version of glibc and I will not touch it. Software has a memory leak? QEMU doesn’t, and the VM is limited in RAM, so only that VM crashes.
Just make sure your VM template is good (and has your SSH key installed) and you’re golden. Before this week’s internet outage, I had 99.999% uptime with a single hypervisor, and the only monitoring I have is the uptime of all services as seen from AWS. I don’t even have alerts.
I sometimes long for the days (which I missed, I’m only 24) of monolithic Linux servers where you have a webserver, a database server, and that’s it. Sadly, VMs are cheap and dependencies are hell. It’s still quite fun to tinker in the virtualized world. It’s just not the same as what has been.
Docker containers do pretty much solve that: drop a `docker-compose.yml` file in place, maybe tweak a few lines, and that’s all.
Not sure what the problem is, though. Pull up a reverse proxy, give all the crappy shit a private IP and whatever port they want, and access it through the proxy; then everyone can be on 443. `127.42.1.123:443`, whatever. Maybe use real containers, or that crappy Docker shit; both offer you independent namespaces with all the ports and whatnot.
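A hedged sketch of that layout in compose form (image and network names are made up): only the proxy publishes a host port, and every stubborn app keeps whatever internal port it insists on, reachable by the proxy over a private network.

```yaml
# Hypothetical docker-compose.yml; only the proxy touches the host.
services:
  proxy:
    image: nginx
    ports:
      - "443:443"                 # the only service bound to the host
    networks: [backend]
  stubborn-app:
    image: example/stubborn-app   # binds 443 inside its own namespace
    networks: [backend]           # reachable as stubborn-app:443 by the proxy
networks:
  backend:
```

No host-port juggling at all: the proxy resolves each service by its container name on the `backend` network.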