• 3 Posts
  • 49 Comments
Joined 2 years ago
Cake day: March 28th, 2023





  • Yeah. That’s what I used to do when I started out.

    The simplest thing to do is install Debian on the computer and create partitions. You have 4 HDDs and 2 SSDs, so it’d be stupid to end up with 6 separate filesystems, one per drive.

    Check in the BIOS whether your motherboard supports software RAID1, so you’re somewhat protected against drive failure. This will let you get something barebones running that uses at least 2 drives with redundancy. I assume the mobo RAID1 is stupid and only allows a max of 2 drives, so the other drives will just be lying around useless. If that’s the case, probably use the 2 SSDs first. I see other posters recommending higher orders of RAID, but I only have 2 HDDs so I never really delved into that :P Perhaps that’s sound advice.
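    If the BIOS option turns out to be useless fake RAID, Linux software RAID (mdadm) gives you the same RAID1 redundancy from the OS side. A minimal sketch, assuming the two SSDs show up as /dev/sda and /dev/sdb (the device names and mount point are assumptions — check yours with lsblk first):

```shell
# Build a RAID1 mirror from the two SSDs (assumed device names!)
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

# Put a filesystem on the array and mount it
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /srv/storage
sudo mount /dev/md0 /srv/storage

# Persist the array config so it assembles on boot (Debian paths)
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u
```

    Either drive can then die without data loss, and you can watch a rebuild with `cat /proc/mdstat`.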

    With a system like that you could probably set up a small NFS share for your files by configuring it manually from the terminal.
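    That manual NFS setup is only a few commands on Debian. A sketch, assuming the data lives under /srv/storage and your LAN is 192.168.1.0/24 (both are assumptions — adjust to your setup):

```shell
# On the server
sudo apt install nfs-kernel-server

# Export the directory read/write to the local network (assumed subnet)
echo '/srv/storage 192.168.1.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
sudo exportfs -ra

# On a client (replace server-ip with the NAS address)
sudo mount -t nfs server-ip:/srv/storage /mnt
```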

    Note that going with raw Linux is “simpler” in the sense that it’s perhaps easier to wrap your head around or tinker with, but TrueNAS or Unraid have GUIs that will let you create e.g. the mentioned NFS share with a few clicks, rather than having to do it from the terminal. Depends on what you’re looking for. You could move up to TrueNAS or Unraid once you’ve played with raw Linux enough, for example.


    Once you have that working, you can look into more capable storage setups.

    I only ever dealt with ZFS and TrueNAS. ZFS lets you create a “partition” (a pool, in ZFS terms) spanning many drives at once, so you’d be able to use more drives than just the two from RAID1.
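    For example, a pool of two mirrored pairs (RAID10-style) built from the four HDDs might look like this — the pool name and device names are assumptions, and in practice you’d want the stable /dev/disk/by-id/ paths instead of sdX:

```shell
# Pool of two mirrored pairs: each pair survives one drive failure
sudo zpool create tank \
  mirror /dev/sdc /dev/sdd \
  mirror /dev/sde /dev/sdf

# ZFS creates and mounts the filesystem automatically (at /tank here)
zpool status tank
```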


    The drives that you have are probably shitty SMR drives, whose write speed drops dramatically during sustained writes. Consider buying CMR drives in the future, or just going all-SSD if it fits your use case. ZFS hates SMR drives.












  • Well, Git itself is distributed, but in practice its use is still centralized. Typically there’s only one main location where work on a project happens - a Git forge like GitHub, or in the simplest scenario just an SSH server.

    Federation will help because it will allow working on a project hosted on one forge from another forge. You could e.g. create a pull request on your own self-hosted forge (say, a Forgejo instance) and then submit that pull request to another forge hosted somewhere else. GitHub taking down a repo wouldn’t be as annoying, since people would still have the sources of their pull requests on their own forges. And GitHub wouldn’t be able to remove their forks for whatever reason.





  • Hey, OP here again.

    Here’s what I ended up with:

    • upgrading my TrueNAS CORE to TrueNAS SCALE - it was really easy, just uploading a 1.3 GB update file through the web UI. CORE’s apps/plugins are based on BSD jails, whereas SCALE apps are based on Kubernetes/Docker, so I can run any arbitrary Docker container from Docker Hub as I please, rather than being limited to BSD jails

    • migrating all the VMs/LXCs to matching TrueNAS SCALE Applications. So e.g. my hand-made Navidrome LXC was migrated to the TrueNAS SCALE Application. Sometimes there was no equivalent TrueNAS app for what I was using - e.g. Forgejo - so I just ran an arbitrary container from Docker Hub.

    • decommissioning the Proxmox mini-pc (Lenovo M920q). I’ll sell it later or maybe turn it into a pfSense router.
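    Running one of those arbitrary containers boils down to the usual docker run incantation. A sketch for Navidrome — the official deluan/navidrome image listens on 4533 and expects /music and /data mounts; the host-side pool paths are assumptions from my layout:

```shell
docker run -d --name navidrome \
  -p 4533:4533 \
  -v /mnt/tank/music:/music:ro \
  -v /mnt/tank/apps/navidrome:/data \
  deluan/navidrome:latest
```

    (In SCALE you’d enter the same image, port, and volume mappings through the Apps GUI rather than a terminal, but it maps one-to-one.)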

    I installed a custom TrueNAS app repository called TrueCharts. It has some apps the default repo doesn’t, and it also has a nice integration with Ingress (Traefik), which lets you easily create a reverse proxy using just the GUI.

    I’ve still yet to figure out how to set up Let’s Encrypt for the services I made available to the Internet. I can no longer do things the plain-Linux way; I must do it the Kubernetes way, so I’m kind of limited. Looks like HTTP01 challenges don’t work yet, and I’ll have to use DNS01.
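    For reference, the DNS01 route in Kubernetes usually means a cert-manager issuer that answers the challenge through your DNS provider’s API. A sketch assuming cert-manager is installed and Cloudflare is the provider — the email, secret name, and provider choice are all assumptions:

```shell
kubectl apply -f - <<'EOF'
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-dns01
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com            # assumption: your ACME account email
    privateKeySecretRef:
      name: letsencrypt-dns01-key
    solvers:
    - dns01:
        cloudflare:
          apiTokenSecretRef:          # assumption: API token stored in this Secret
            name: cloudflare-api-token
            key: api-token
EOF
```

    The referenced Secret must live in the cert-manager namespace for a ClusterIssuer; Ingress resources can then request certificates from it via annotations.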

    Looking back, I’m happy I consolidated. The hypervisor was idling all the time - so what’s the point of having a second machine? Also, the one consolidated machine has IPMI, so I have full remote control and will hopefully never have to plug in a VGA cable again. Of course, there’s no iSCSI fault path anymore, though I’m happy I got to experiment with it.

    The downside is as I said - I’m forced to do things the Kubernetes/Docker way, because that’s what TrueNAS uses and that’s the abstraction layer I’m working at. Docker containers are meant for running things, not for portability. I’m sad that I can’t just pack things up in a nice LXC and drag it around wherever I please. Still, I don’t think I’ll be switching away from TrueNAS, so perhaps portability isn’t that big of a deal.

    I’m also sad that I … no longer have a hypervisor. Sure, SCALE can do VMs, but perhaps keeping TrueNAS virtualized would give me the best of both worlds.


  • I too get the feeling that the selection of devices with Tasmota pre-flashed is rather limited. Due to the nature of Tasmota, those devices will only ever be Wi-Fi devices, which in turn causes problems with battery life (unlike Zigbee/Z-Wave etc.). 15 minutes ago I was looking at smart buttons that can run Tasmota, and I only found the Shelly Button 1. And funnily enough, you can keep it plugged in over micro-USB (!) so it stays charged.

    All Zigbee devices’ firmware is proprietary though, no? That’s why I’m willing to suffer for Tasmota.

    The device list seems larger if you’re willing to flash Tasmota yourself: https://templates.blakadder.com/
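    For what it’s worth, flashing an ESP8266-based device from that list yourself is just esptool over a USB-serial adapter once you’ve reached the chip’s serial pins. A sketch assuming the adapter appears as /dev/ttyUSB0 and a 1 MB flash chip (both assumptions — size varies by device):

```shell
pip install esptool

# Back up the stock firmware first (0x100000 = 1 MB, assumed flash size)
esptool.py --port /dev/ttyUSB0 read_flash 0x0 0x100000 backup.bin

# Erase, then write the Tasmota image from the project's releases
esptool.py --port /dev/ttyUSB0 erase_flash
esptool.py --port /dev/ttyUSB0 write_flash 0x0 tasmota.bin
```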