In the “What are YOU self-hosting?” thread, there are a lot of people here who are self-hosting a huge number of applications, but there’s not a lot of discussion of the platform these things run on.

What does your self-hosted infrastructure look like?

Here are some examples of more detailed questions, but I’m sure there are plenty more topics that would be interesting:

  • What hardware do you run on? Or do you use a data center/cloud?
  • Do you use containers or plain packages?
  • Orchestration tools like K8s or Docker Swarm?
  • How do you handle logs?
  • How about updates?
  • Do you have any monitoring tools you love?
  • Etc.

I’m starting to put together my own homelab. I’ll definitely be starting small, but I’m interested to hear what other people have done with their setups.

  • cablepick@lemmy.cablepick.net · 2 years ago

    At home I have a Proxmox cluster consisting of two Dell R340s and one Intel NUC. There are 25 VMs across the three machines. They handle various development duties along with Home Assistant, Plex, and Blue Iris. The rack lives in a closet under the stairs, and I have fiber run to my office. We did a massive renovation when we purchased the house, so I wired the whole place while the walls were open. Average power draw is around 480 watts.

    Here is a picture of the rack back when it was all R330s. Those have since been sold and upgraded to R340s. I added vents during the renovation. Inlet temps stay around the house ambient, and exhaust is about 20 degrees F hotter. I cover the front with additional sound baffles to better route fresh air and control noise. It’s pretty much silent outside the closet.

    This is the patch panel. I have perimeter cameras all the way around the house, plus more than enough WiFi access points. Each room also got 2x ethernet on each side, 4x total. My office got 6x ethernet, plus 4x fiber and a 2-inch conduit to pull whatever else I can think of later.

    I use Grafana and custom-made scripts for monitoring and alerting. Most of the infrastructure is automated with scripts. One of these days I’ll learn Ansible, but I really enjoy just figuring it all out. This isn’t my job; I just do it for fun. Here is the dashboard I run on one of my desk monitors.
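
    As a rough illustration of what one of those custom monitoring scripts could look like, here is a minimal sketch in Python, assuming a disk-usage check that posts to an alerting webhook. The webhook URL and threshold are placeholders, not part of the poster’s actual setup.

    ```python
    # Hypothetical example of a "custom script" style check: alert when the root
    # filesystem crosses a usage threshold. Endpoint and threshold are placeholders.
    import json
    import shutil
    import urllib.request

    ALERT_WEBHOOK = "https://alerts.example.com/hook"  # placeholder endpoint
    THRESHOLD_PCT = 85

    def disk_usage_pct(path="/"):
        usage = shutil.disk_usage(path)
        return usage.used / usage.total * 100

    def send_alert(message):
        payload = json.dumps({"text": message}).encode()
        req = urllib.request.Request(
            ALERT_WEBHOOK, data=payload, headers={"Content-Type": "application/json"}
        )
        urllib.request.urlopen(req, timeout=10)

    if __name__ == "__main__":
        pct = disk_usage_pct("/")
        if pct > THRESHOLD_PCT:
            send_alert(f"Disk usage on / is at {pct:.1f}%")
    ```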

    I run my hobby websites and my Lemmy instance in the colo, but its primary purpose is to be an offsite backup. Proxmox Backup Server performs best on SSD, hence the large array. I also travel a lot for work, so that’s my remote dev machine too. I run my own mail servers, with some small VPSs acting as SMTP and IMAP bouncers to internal servers at home and in the colo, working in parallel. HAProxy does the bouncing for high availability, Dovecot and Postfix do the heavy lifting, and Solr provides lightning-fast search. I do use a third party for outbound mail for better deliverability.
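
    In case the “bouncer” idea isn’t clear: the small VPSs simply relay mail-protocol TCP connections to the real servers at home and in the colo. HAProxy handles that in this setup; the sketch below is only a conceptual Python stand-in with placeholder hostnames, not anything from the actual deployment.

    ```python
    # Conceptual sketch of a TCP "bouncer": accept IMAP connections and shuttle
    # bytes to an internal mail server. In the real setup HAProxy does this job.
    import asyncio

    BACKEND_HOST = "imap.internal.example"  # placeholder internal server
    BACKEND_PORT = 143
    LISTEN_PORT = 143

    async def pipe(reader, writer):
        try:
            while data := await reader.read(4096):
                writer.write(data)
                await writer.drain()
        finally:
            writer.close()

    async def handle_client(client_reader, client_writer):
        backend_reader, backend_writer = await asyncio.open_connection(
            BACKEND_HOST, BACKEND_PORT
        )
        # Relay both directions until either side closes.
        await asyncio.gather(
            pipe(client_reader, backend_writer),
            pipe(backend_reader, client_writer),
        )

    async def main():
        server = await asyncio.start_server(handle_client, "0.0.0.0", LISTEN_PORT)
        async with server:
            await server.serve_forever()

    if __name__ == "__main__":
        asyncio.run(main())
    ```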

    Dell R350 - Colo Proxmox

    • Intel E-2388G processor
    • 128GB 3200 ECC RAM
    • Dell H755P RAID controller
    • 8x Crucial MX500 4TB in RAID 6
    • Samsung 990 2TB NVMe

    Dell R340 - Proxmox Node 0

    • Intel E-2278G processor
    • 128GB 3200 ECC RAM. Despite the spec sheets and iDRAC saying these only support 64GB, they run 128GB just fine.
    • Ultrastar DC SN640 7.68TB NVMe
    • Dell H810 flashed with LSI firmware; HBA for the SC200 disk shelves.
    • Mellanox CX354A @ 40GbE

    Dell R340 - Proxmox Node 1

    • Intel i3-9100T processor
    • 64GB 2400T ECC RAM
    • Ultrastar DC SN640 7.68TB NVMe
    • Intel X520-DA2 @ 2x 10GbE

    Intel 7th Gen Nuc - Proxmox Node 2

    • Intel i5-7260U processor
    • 32GB 2400 RAM
    • Ultrastar DC SN640 7.68TB NVMe

    This is mounted to the wall under my desk in a silent case. I use Verizon Wireless home internet as a backup, and this server is the router. My entire closet rack can go offline and I’ll still have internet access.

    Dell R730 - GPU

    • 2x Intel E5-2696 v4 processors
    • 512GB 2400T ECC RAM
    • SanDisk Skyhawk 3.84TB NVMe
    • 2x Nvidia P100 16GB GPUs
    • Intel X520-DA2 @ 2x 10GbE
    • This one stays powered off when not in use. I built it to play around with TensorFlow and AI but haven’t had much time.

    Dell SC200 Disk Shelf 1

    • 12x WD 8TB 5400 rpm shucked drives
    • Single z2 pool
    • Roughly 74 usable TB
    • Cold backups of the primary array. Only powered on once a month to sync (a rough sketch of that sync is below).
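
    A minimal sketch of what that monthly sync could look like, assuming ZFS incremental replication; the pool and snapshot names are placeholders and this isn’t the poster’s actual tooling:

    ```python
    # Hypothetical monthly cold-backup sync: snapshot the primary dataset and
    # send an incremental stream into the cold pool. Names are placeholders.
    import subprocess
    from datetime import date

    SRC = "tank/data"        # placeholder: dataset on the primary array
    DST = "coldbackup/data"  # placeholder: dataset on the cold shelf

    def monthly_sync(prev_snap):
        new_snap = f"{SRC}@monthly-{date.today():%Y%m}"
        subprocess.run(["zfs", "snapshot", new_snap], check=True)
        # Pipe an incremental send from the previous snapshot into zfs receive.
        send = subprocess.Popen(
            ["zfs", "send", "-i", prev_snap, new_snap], stdout=subprocess.PIPE
        )
        subprocess.run(["zfs", "receive", "-F", DST], stdin=send.stdout, check=True)
        send.stdout.close()
        if send.wait() != 0:
            raise RuntimeError("zfs send failed")

    if __name__ == "__main__":
        # The previous snapshot would normally be looked up with `zfs list -t snapshot`.
        monthly_sync(f"{SRC}@monthly-202401")  # placeholder for last month's snapshot
    ```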

    Dell SC200 Disk Shelf 2

    • 12x HGST 10TB SAS drives
    • 2x z2 pools
    • Roughly 72 usable TB
    • Primary storage array. Sitting at about 70% utilized, so it’s time to upgrade.
  • ilikedatsyuk@lemmy.world · 2 years ago

    I have an HP DL380 Gen8, plus a PC I bought from the local university that I use as a server.

    My DL380 runs ESXi. My PC runs Ubuntu on bare metal.

    All of my apps are either fully VM-based (Home Assistant OS) or run in containers. Containers are far easier to build, upgrade, and migrate, and also make file management a lot easier.

    I use Docker Compose. No Swarm or Kubernetes at this point.

    Hopefully this is at least a good start! Let me know if you have any questions.

    • demosthenes@lemmy.world (OP) · 2 years ago

      Yeah, that’s great! I’ve got an old HP desktop that a family member discarded that will be the start of mine.

      Do you use a single docker-compose.yaml file for an entire machine, or docker-compose files per-app?

      • ilikedatsyuk@lemmy.world · 2 years ago

        A combo of both. I group all my media apps like Sonarr, Radarr, SABnzbd, etc. together in one compose, since I consider each of them to be part of the same “machine”, but most of my apps have their own compose.
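
        For what it’s worth, a layout like that also makes updates easy to script. A minimal sketch, assuming one directory per compose stack (the directory path and the `docker compose` CLI invocation are my assumptions, not necessarily how you run things):

        ```python
        # Hypothetical helper: walk a one-directory-per-stack layout, pull newer
        # images, and recreate each stack. Paths are placeholders.
        import subprocess
        from pathlib import Path

        STACKS_DIR = Path("/opt/stacks")  # placeholder: one subdirectory per compose stack

        def update_stack(stack_dir: Path):
            # Pull newer images, then recreate any containers whose image changed.
            subprocess.run(["docker", "compose", "pull"], cwd=stack_dir, check=True)
            subprocess.run(["docker", "compose", "up", "-d"], cwd=stack_dir, check=True)

        if __name__ == "__main__":
            for stack in sorted(STACKS_DIR.iterdir()):
                if (stack / "docker-compose.yaml").exists() or (stack / "docker-compose.yml").exists():
                    print(f"Updating {stack.name}")
                    update_stack(stack)
        ```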