A few years ago I had a couple of old, slow OptiPlexes running Hyper-V, with Windows/Linux VMs doing things like NPS, AD, etc.

Had some old equipment collecting dust, so I've built out a decent homelab. Curious if anyone else has done the same, and if so, what are you running for fun?

In my new “rack”:

  • PowerEdge R430
    • Running Proxmox, with a Windows VM (DC) and a Linux VM with Docker for Plex
  • EqualLogic PS4100
    • VM storage for both PowerEdge servers (10TB)
  • Ubiquiti EdgeSwitch 24 250w
  • PowerEdge R720
    • Running Proxmox, with some Linux VMs, most utilizing Docker for Plex “assistance/automations” (ahem), Nextcloud for phone photo backup and my wife’s photography, and another DC as a failover for the R430’s DC.
  • Skyline969@lemmy.world · 2 years ago

    I have the CPU and RAM of two PowerVault NX3200s that were destined for the skip jammed into one chassis. It powers one hell of a nice Plex server running on Unraid.

    • L3s@lemmy.worldOPM · 2 years ago

      Nice! How do you like Unraid? Haven’t messed with that yet, but it looks pretty slick

  • Schroeder@lemmy.sdf.org · 1 year ago

    I really like the portable rack config! That would be ideal!!

    Very similar setup:

    • Dell R440 (local storage, sadly)
    • Ubiquiti USW-24
    • Low-end Antsle D for random container & VM shenanigans.

    I’d like to mess around more with a Pi cluster (or some other low power/RISC setup) or an Intel NUC to keep the noise and power consumption down.

  • DesertCreosote@lemmy.world · 2 years ago (edited)

    I run a few older things.

    In my rack, I have:

    • 1x 48-port PoE Juniper EX220 used as a core switch with a fiber backbone to my upstairs switch; wish it was 10gig, but it was cheap and I needed ports.
    • 1x Dell R720 with 2x Intel Xeon E5-2697 v2 @ 2.70GHz, 256GB RAM, and 80TB of raw storage in a RAID-5 array (64TB usable).
      • Runs ESXi with VMs for everything from vSphere to Plex to some websites I host myself.
    • 1x Dell R710 with 2x Intel X5650 @ 2.67GHz, 40GB RAM, and 24 TB of raw storage.
      • Currently unconfigured since I recently migrated off of this server to the R720.
    • 1x Dell R610 with 2x Intel 5550s, 96GB RAM, and no drives.
      • Got this on a whim, planned to use it for a project, never got to it, now it sits on the bottom of my rack and reminds me of my folly.
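    As a rough sanity check on the 80TB raw / 64TB usable figure above (a sketch; the exact drive layout is my assumption, but 5x 16TB fits the numbers, since RAID-5 gives up one drive's capacity to parity):

    ```python
    def raid5_usable_tb(drive_tb: int, n_drives: int) -> int:
        # RAID-5 stripes parity across the array, costing one drive's worth of space
        return drive_tb * (n_drives - 1)

    print(raid5_usable_tb(16, 5))  # 64 (TB usable out of 80 TB raw)
    ```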

    Upstairs I have another 48-port Juniper EX220, and I plan to fill most of the ports with 4+ Ethernet drops to every room in my house, plus extras for WAPs, cameras, and remote sensors.
    I also use Ruckus R710s for wireless connectivity; I have two right now, and plan to eventually get one of the outdoor-rated Ruckus WAPs to mount on my chimney since WiFi coverage gets a little sparse when you get halfway across my yard.

    I was remarking to a friend the other day that I’ve managed to build myself a pretty stellar setup for the early 2010s, at a fraction of the cost it would have taken back then. Though it definitely eats lots of power!

    • L3s@lemmy.worldOPM · 2 years ago

      Very nice setup! My work environment was all VMware ESXi; it’s a solid solution, but it gets expensive!

  • Marduk@hammerdown.0fucks.nl · 2 years ago

    Gave up running old server hardware. Way too loud and way too much power draw.

    Now running a Synology DS918+ for most things and an old Dell OptiPlex 3020 for Blue Iris. It no longer sounds like somebody is vacuuming in the basement all of the time, and my power draw went from over 200W down to about 40W.
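    For scale, that 200W-to-40W drop works out to a fair bit over a year (a back-of-the-envelope sketch, assuming roughly constant draw):

    ```python
    def annual_kwh(watts: float) -> float:
        # watts x 24 hours x 365 days, converted from watt-hours to kilowatt-hours
        return watts * 24 * 365 / 1000

    print(annual_kwh(200) - annual_kwh(40))  # 1401.6 kWh saved per year
    ```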

    • L3s@lemmy.worldOPM · 2 years ago

      Yeah, my main thought was that this is free (minus power costs, of course), which is why I went with it.

      I looked into different Synology solutions paired with some kind of host, like your setup. How are you liking Synology as a NAS, and does the OptiPlex as a host have any drawbacks (NIC/CPU/memory maxing out, etc.)?

      • Marduk@hammerdown.0fucks.nl · 2 years ago

        Synology is good. I upgraded the RAM to 8GB and populated it with 3x 8TB IronWolf drives. It’s the backup target for all the other machines on my network, and it runs just under 40 Docker containers; it’s just not beefy enough to run VMs, though. The OptiPlex just runs Blue Iris, since that’s a Windows app. I added a 500GB SSD and left it at the stock 4GB of RAM. It works well with Blue Iris as long as you turn on Intel native encoding and don’t try to re-encode anything onto the stream, like date/time stamps. No issues with resources on it.

        Free is good, and admittedly you have much more headroom to do fun things on your hardware :)

    • pdavis@lemmy.world · 2 years ago

      Similar situation. I realized the desktop I upgrade every few years was always going to be faster and more powerful than older or salvaged hardware. I now just keep it on all the time and run VMs on it as needed; just one physical machine to maintain now! I run Aster on it, so it acts as three physical machines: it’s my gaming machine, my partner’s gaming machine, a file server, development server, database server, emulator pedestal, Emby server, Blue Iris server, and my work computer.

  • Doubletwist@lemmy.world · 1 year ago

    Does anyone NOT use old equipment for a homelab?

    I’ve got 4x Dell R620 servers, though I’ve recently shut down two of them because they’re running in an unconditioned garage closet and it’s been brutally hot this summer.

    Also have 4 old 16-disk trays from a Nimble SAN, but again only running one of them (16 x 3TB SAS) for heat reasons.

    I also have the Nimble controller, but the firmware is borked; I can’t get a copy without a service contract, and you can’t get a service contract on hardware this old. I briefly had Linux running on the two controllers, which works but is a bit tricky, since shutting down one side causes the other to shut down unexpectedly. So it’s sitting unused until I can either sell it or find a copy of the proper firmware.

  • zorflieg@lemmy.world · 1 year ago

    VMware ESXi 7 with a Win10 sandbox, and Debian hosts for Docker. I also run a couple of i7 Mac minis. I try to host on a balance of cheap, power-efficient devices that I have spares of in case they blow up, because I hate dealing with hardware compatibility issues. Since I decided this and shelved spares, I haven’t had any failures.

  • CaldeiraG@lemmy.world · 2 years ago

    Running two mini PCs in a Proxmox cluster, with a NAS (24TB) for the media server VM:

    • Lenovo Thinkcentre M700 (i5-6400T, 32GB RAM)
    • Dell Optiplex 3080 (i5-10500T, 32GB RAM)
  • Anarch157a@lemmy.world · 2 years ago

    An old Asus A320 board with a Ryzen 2600G and 16GB of RAM. Both the memory and CPU were in my previous gaming desktop; I gave them to a friend who needed them, and when he said he didn’t need them any more, I took both back and bought his mobo as well.

    I’m running Proxmox with one full VM for OPNsense and some containers for Pi-hole, Home Assistant, Jellyfin, GitLab, and a file server.

    I figured that repurposing old desktop parts as I upgrade is better and cheaper than buying old server gear. I get better performance with less power draw and less noise.

    • L3s@lemmy.worldOPM · 2 years ago

      Proxmox took some getting used to, but it’s a solid hypervisor for homelabs. I’ve never used OPNsense; how are you enjoying it?

  • zorflieg@lemmy.world · 2 years ago

    Old Dell OptiPlexes, 5th gen or higher, currently, plus some i7 Mac minis (2014+); the dual-core ones aren’t worth the performance per watt.

  • Reygle@lemmy.world · 2 years ago

    Due to work, I’m a VMware person, but I’m gung-ho to learn Proxmox at home. I’ve got access to old server hardware, but I don’t want it anywhere near me at home; like others say, it’s way too loud and too wasteful at the outlet.

    Was thinking about finding a relatively high-core-count mini PC, or just building an ITX system.

    Would anyone mind recommending a good bit of hardware for light to medium duty use with a couple of VMs?

    • L3s@lemmy.worldOPM · 2 years ago

      I hear you (pun intended, sorry, I’m a dad) on the loudness. Luckily we have a basement we don’t use very often, but it’s quite annoying when I game down there on the weekend.

      As for a mini PC, it’s probably worth making another post about on /c/sysadmin; I’m not sure how many people will stop by this post since it’s a little older, and unfortunately I’m not much help there.

  • arensb@lemmy.world · 2 years ago

    I don’t have any retired hardware from my current job, since I’m 100% cloud. I don’t miss hardware one bit, except for the one time I found that I didn’t have any spare power cables for the homebrew PC.

    I have, however, converted my old QNAP NAS to TrueNAS, and it’s much better now.

    • L3s@lemmy.worldOPM · 2 years ago

      On-prem infrastructure is way less fun than having a full cloud stack; how are you enjoying that, and are there any big snags you all have run into?

      Currently in the process of doing the same at work. We mainly utilize file servers (already migrated to SharePoint), DCs (in the process of going full AAD, Endpoint Manager [Intune], and Autopilot), and print servers (currently testing a full cloud replacement). This would let us be “serverless”, with no on-prem infrastructure aside from switching/routing/firewalls, and we can segment our network completely since users won’t need to talk to anything on-prem anymore.

      • arensb@lemmy.world · 2 years ago

        > On-prem infrastructure is way less fun than having a full cloud stack, how are you enjoying that, and are there any big snags you all have run into?

        There are people who do enjoy playing with hardware, and I’m not going to say they’re wrong, especially since I’m glad they’re around. But that’s not what I want to do for a living.

        I think the biggest challenge I’ve seen is: with on-prem hardware, you can brick a server or a router, and have to go down to the machine room to reimage it from the console. With cloud infrastructure, it’s possible to not just brick, but destroy your entire machine room.

        Having said that, I really like infrastructure-as-code. I’ve set up racks of hardware, and IaC is way more fun.