bestie why do you have <link rel="alternate" type="application/rss+xml" title="selfh.st" href=""> in your <head>? It doesn’t point to an rss feed, unless your site is an rss feed in itself? would be kinda crazy though
https://www.rssboard.org/rss-autodiscovery
should point to https://selfh.st/rss/ right?
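Per the autodiscovery spec, the tag would presumably look like this (using the feed URL above):

```html
<link rel="alternate" type="application/rss+xml" title="selfh.st" href="https://selfh.st/rss/">
```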
https://docs.google.com/spreadsheets/d/1LHvT2fRp7I6Hf18LcSzsNnjp10VI-odvwZpQZKv_NCI/
It’s in German but you get the idea
Yeah. That’s what I used to do when I started out.
The simplest thing to do is install Debian on the computer and create partitions. You have 4 HDDs and 2 SSDs, so it’d be stupid to manage 6 separate partitions, one per drive.
Check in the BIOS whether your motherboard supports software RAID1, so you’re somewhat protected against drive failure. This will let you get something barebones running that uses at least 2 drives with redundancy. I assume the mobo RAID1 is stupid and only allows for max 2 drives, so the other drives will just be lying around useless. If that’s the case, probably use the 2 SSDs first. I see other posters recommending higher orders of RAID, but I only have 2 HDDs so I never really delved into that :P Perhaps that’s sound
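If the BIOS option turns out to be useless fakeraid, Linux can do software RAID1 itself with mdadm. A minimal sketch, assuming the two SSDs show up as /dev/sda and /dev/sdb (check with lsblk first - and note this wipes them):

```shell
# Mirror the two SSDs into one md device (destroys existing data!)
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

# Put a filesystem on the mirror and mount it
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /mnt/storage
sudo mount /dev/md0 /mnt/storage
```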
With a system like that you could probably set up some small NFS for sharing your files by configuring it manually from the terminal.
Note that going with raw linux is “simpler” in the sense that it’s perhaps easier to wrap your head around or tinker with, but TrueNAS or Unraid have GUIs that will allow you to create e.g. the mentioned NFS share with a few clicks, rather than having to do it from the terminal. Depends on what you’re looking for. You could move up to TrueNAS or Unraid once you’ve played with raw Linux enough for example.
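The terminal route for the NFS share is shorter than it sounds. A minimal sketch on Debian, assuming a hypothetical /srv/files directory and a 192.168.1.0/24 LAN (adjust both):

```shell
# Install the NFS server
sudo apt install nfs-kernel-server

# Export /srv/files to the local subnet, read-write
echo '/srv/files 192.168.1.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports

# Apply the exports and verify
sudo exportfs -ra
sudo exportfs -v
```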
Once you have that working, you can start thinking about fancier storage:
I only ever dealt with ZFS and TrueNAS. ZFS will allow you to create a “partition” (a pool, in ZFS terms) from many drives at the same time, so you’d be able to use more drives than just the two from RAID1.
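A sketch of what that could look like, with hypothetical pool and device names (check yours with lsblk; this destroys whatever is on the drives):

```shell
# Mirror the 2 SSDs into one pool
sudo zpool create fastpool mirror /dev/sda /dev/sdb

# Put the 4 HDDs into a RAIDZ1 pool (survives one drive failure)
sudo zpool create bigpool raidz1 /dev/sdc /dev/sdd /dev/sde /dev/sdf

# Check the result
zpool status
```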
The drives that you have are probably shitty SMR drives, whose write speed drops dramatically once you’ve been writing to them for a while. Consider buying CMR drives in the future, or just going all-SSD if it fits your use case. ZFS hates SMR drives.
Just in case anybody is in the market for a new monitor and wants shortcut switching, check out Dell’s monitors with Auto KVM - https://youtu.be/ZqutRcWG2Rc?feature=shared&t=332
The keyboard shortcut switching probably works only with Dell’s proprietary Dell Display Manager software, which runs only on Windows and maybe on Mac.
Still, I’m wondering if there’s a ddcutil code that means “toggle KVM to the other computer” that you could just bind to a shortcut in Linux.
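For what it’s worth, input switching on most monitors is exposed as VCP feature 0x60 over DDC/CI; whether Dell’s KVM toggle is reachable that way I don’t know, so the values below are just things to probe with:

```shell
# See what the monitor exposes over DDC/CI
ddcutil capabilities

# Input source is VCP feature 0x60 on most monitors
ddcutil getvcp 0x60

# e.g. switch to a specific input -- 0x0f is a guess,
# read the actual values from the capabilities output
ddcutil setvcp 0x60 0x0f
```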
Chiming in with my org mode setup as well:
I used to use Syncthing to avoid having both NFS and WebDAV, but it didn’t sync.
basically im very smol
Why ask for help when I can spend hours in “terminal flow”, where I know every three-character sequence for CTRL+R to suggest the last 10 commands in the history?
Hop on a PeerTube instance. There are ones made by normal people, e.g. https://urbanists.video (this one probably won’t accept your registration, but just showcasing).
If you heavily compress your videos or if they’re not very long, you could also upload a .mp4 file to a file host or just your own website (johndoe.com/myvid.mp4). Then the browser would just download and play the .mp4 file.
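If you’d rather have it play inline on a page than link the raw file, a minimal sketch (URL is the hypothetical one from above; the browser streams it with range requests):

```html
<video controls preload="metadata" width="640">
  <source src="https://johndoe.com/myvid.mp4" type="video/mp4">
</video>
```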
The ecosystem is moving? How the turntables
Well, not like anybody is getting promoted at my company either…?
Well, Git is still centralized. Typically there’s only one main location where work on a project happens - a Git forge like GitHub, or in the simplest scenario just an SSH server.
Federation will help because it will allow working on a project hosted on one forge from another forge. You could e.g. create a pull request on your own self-hosted forge (e.g. a Forgejo instance) and then submit that pull request to another forge hosted somewhere else. GitHub taking down a repo wouldn’t be as annoying, since people would still have the main sources of their pull requests in their own forges. And GitHub wouldn’t be able to remove their forks for whatever reason.
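Until that lands, the manual approximation is just multiple remotes - your own forge as the source of truth, GitHub as a mirror. A sketch with hypothetical URLs:

```shell
# Your forge holds the canonical copy; GitHub is just a mirror
git remote add forgejo https://git.example.com/alice/project.git
git remote add github https://github.com/alice/project.git

# Push the same branch to both
git push forgejo my-feature
git push github my-feature
```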
Man, when git repo federation is finally available…
I guess it’s because it’s “insecure”. Any device on the network could control the lights. Tasmota allows setting a password for the control panel though.
Hey, OP here again.
Here’s what I ended up with:
upgrading my TrueNAS CORE to TrueNAS SCALE - it was really easy, just uploading a 1.3GB update file through the web UI. CORE’s apps/plugins are based on BSD jails, whereas SCALE apps are based on Kubernetes/Docker, so I can run any arbitrary Docker container from Docker Hub as I please, rather than being limited to BSD jails
migrating all the VMs/LXCs to matching TrueNAS SCALE Applications. So e.g. my hand-made Navidrome LXC was migrated to the TrueNAS SCALE Application. Sometimes there was no equivalent TrueNAS app for what I was using - e.g. Forgejo - so I just ran an arbitrary container from Docker Hub.
decommissioning the Proxmox mini-PC (Lenovo M920q). I’ll sell it later or maybe turn it into a pfSense router.
I installed a custom TrueNAS app repository called TrueCharts. It has some apps that the default repo doesn’t, and it also has a nice integration with Ingress (Traefik), which lets you easily create a reverse proxy using just the GUI.
I’ve still yet to figure out how to set up Let’s Encrypt for the services I made available to the Internet. I can no longer do things the Linux way, I must do it the Kubernetes way, so I’m kind of limited. Looks like HTTP-01 challenges don’t work yet, and I’ll have to use DNS-01.
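For what it’s worth, if the stack uses cert-manager under the hood (an assumption on my part), a DNS-01 issuer would look roughly like this, with Cloudflare as a hypothetical DNS provider and a pre-created API-token Secret:

```yaml
# Hypothetical cert-manager ClusterIssuer using a DNS-01 challenge via Cloudflare.
# Assumes cert-manager is installed and a Secret named cloudflare-api-token exists.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-dns01
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-dns01-key
    solvers:
    - dns01:
        cloudflare:
          apiTokenSecretRef:
            name: cloudflare-api-token
            key: api-token
```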
Looking back, I’m happy I consolidated. The hypervisor was idling all the time - so what’s the point of having a second machine? Also, the one centralized machine has IPMI, so I have full remote control, and I’ll hopefully never have to plug in a VGA cable again. Of course, there’s no iSCSI fault path anymore, though I’m happy I got to experiment with it.
The downside is, as I said, that I’m forced to do things the Kubernetes/Docker way, because that’s what TrueNAS uses and that’s the abstraction layer I’m working at. Docker containers are meant for running things, not for portability. I’m sad that I can’t just pack things up in a nice LXC and drag it around wherever I please. Still, I don’t think I’ll be switching from TrueNAS, so perhaps portability isn’t that big of a deal.
I’m also sad that I … no longer have a hypervisor. Sure, SCALE can do VMs, but perhaps keeping TrueNAS virtualized would give me the best of both worlds.
I too get the feeling that the selection of devices with Tasmota pre-flashed is rather limited. Due to the nature of Tasmota, those devices will only be Wi-Fi devices, which further causes problems with battery usage (unlike Zigbee/Z-Wave etc.). 15 minutes ago I was looking at smart buttons that can run Tasmota, and I’ve only found the Shelly Button 1. And funnily enough, it’s possible to keep it connected over microUSB (!) so it stays charged.
All Zigbee devices’ firmware is proprietary though, no? This is why I’m willing to suffer for Tasmota
The device list seems larger if you’re willing to flash Tasmota yourself: https://templates.blakadder.com/
Factually, it was how you described. Poetically, it was making my life as a customer unnecessarily difficult to the point where the word “impossible” is a valid form of artistic expression. I didn’t want to have to beg anybody to please unlock the device I paid for.
https://community.home-assistant.io/t/tp-link-offers-way-to-add-local-api-back
We are hoping for a better solution, but for now this is what you should do: Submit a ticket to technical support. Make sure to include the MAC address of your plug. Go to the forums and send this user a message with your ticket ID and MAC address (just to be sure).
https://community.home-assistant.io/t/tp-link-offers-way-to-add-local-api-back/248333/107
Please be advised that I intentionally cherry-picked the comments that support my point, as I was just skimming the thread.
Me at Reddit funeral https://peertube.stream/w/50518ff5-0884-4a44-b7df-39f71482e956