I’m a generalist sysadmin. I use Linux when necessary or convenient. I find that when I need to upgrade a specific solution, it’s often easier to just spin up an entirely new instance and start from scratch. Is this normal, or am I doing it wrong? For instance, this morning I’m looking at a Linux VM whose only task is to run acme.sh to update an SSL cert. I’m currently upgrading the release, and when that’s done I’ll need to upgrade acme.sh. I expect some kind of failure that will take several hours to troubleshoot, at which point I’ll give up and start from scratch. I’m wondering whether this is down to my ignorance of Linux or whether it’s common practice.
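
For reference, the in-place path I’m describing looks roughly like this (a sketch, assuming an Ubuntu-style release upgrade; acme.sh can upgrade itself):

    # bring the current release fully up to date first
    sudo apt update && sudo apt full-upgrade

    # move to the next distro release (Ubuntu; on Debian you’d switch the sources and full-upgrade instead)
    sudo do-release-upgrade

    # then let acme.sh upgrade itself
    acme.sh --upgrade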

  • damium@programming.dev · 12 points · 11 months ago

    Your experience may depend on which distro you use and how you install things. If you use a distro with a stable upgrade path, such as Debian, and stick to system packages, there should be almost no issues with upgrades. If you use external installers or install from source, you may experience issues depending on how the installer works.

    For anything complex these days I’d recommend going with containers; that way the application and the OS can be upgraded independently. It also makes producing a working copy of your production system for testing a trivial task.
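
    As a rough sketch of what that can look like for a cert-renewal job (assuming the upstream neilpang/acme.sh image; the domain and host path are placeholders):

        # keep cert state on the host; run acme.sh from a container
        docker run --rm --net=host \
            -v /etc/acme:/acme.sh \
            neilpang/acme.sh --issue -d example.com --standalone

        # upgrading the app later is just pulling a newer image; the host OS is untouched
        docker pull neilpang/acme.sh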

  • ikidd@lemmy.world · 10 points · 11 months ago

    A Dockerfile, especially for a CLI app like that. Change your Dockerfile and rebuild when you need to upgrade.
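
    A minimal sketch of that for acme.sh (base image, email and paths are placeholders; the installer is the upstream get.acme.sh script):

        cat > Dockerfile <<'EOF'
        FROM debian:bookworm-slim
        RUN apt-get update && apt-get install -y --no-install-recommends \
                curl socat cron ca-certificates && \
            rm -rf /var/lib/apt/lists/*
        # the acme.sh version is baked in at build time; rebuild to upgrade
        RUN curl -s https://get.acme.sh | sh -s email=admin@example.com
        ENTRYPOINT ["/root/.acme.sh/acme.sh"]
        EOF

        docker build -t local/acme.sh .
        docker run --rm local/acme.sh --version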

  • hperrin@lemmy.world · 8 points · 11 months ago

    If you’ve designed everything correctly, then yes, it should be much easier to deploy a new instance on a new machine than to upgrade an existing machine.

  • Dr. Wesker@lemmy.sdf.org · 8 points · 11 months ago

    There’s nothing ignorant IMO about avoiding headaches, and keeping environments new-car-smell fresh.

  • kevincox@lemmy.ml · 6 up / 1 down · 11 months ago

    I think yes. In general, if you have good setup instructions (preferably automated), it will be easier to start from scratch. When starting from scratch you only need to worry about the new setup, whereas when upgrading you need to worry about the new setup as well as any cruft carried over from the previous one. Basically, starting clean has some advantages.

    However, it is important to make sure that you can go back to the old working state if required, either via backups or by leaving the old machine running until the new one has proven itself operational.

    I also really like NixOS for this reason. It means you can upgrade your system with very little cruft carrying over; it basically behaves like a clean install on every update, but it’s easier to roll back if you need to.
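
    Concretely, an upgrade and a rollback look roughly like this (a sketch; flakes or pinned channels change the exact invocation):

        # pull the newer channel and rebuild the whole system from the declared config
        sudo nixos-rebuild switch --upgrade

        # if the new generation misbehaves, flip back to the previous one
        sudo nixos-rebuild switch --rollback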

  • Kadath (she/her)@lemmy.world · 5 points · 11 months ago

    It depends on what you need to upgrade or do. I usually upgrade things in place, but I also keep templates in case I quickly need to spin up something new.

    If that’s the case, I seed the new instance with whatever conf files are needed and I’m up and running quickly (rough sketch below). Consider that in my work environment we rarely use containers (more of a philosophy at this point than a real reason, since we also have a relatively big K8s cluster for big data).

    Linux sysadmin here, for the past 25 years.
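
    For example, with libvirt that template-plus-conf dance is roughly (a sketch; the template, VM name and conf path are made up):

        # clone the golden template and boot it
        virt-clone --original tmpl-debian12 --name acme-01 --auto-clone
        virsh start acme-01

        # seed it with whatever conf files it needs
        scp ./acme.conf root@acme-01:/etc/acme/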

  • SheeEttin@programming.dev · 3 points · 11 months ago

    No, it’s the same on the Windows side. Personally I like to build a new one in parallel, then migrate. I do plenty of upgrades on desktops, but I don’t think I’ve ever done one on a server (except stuff like CentOS 7 to 8 where it’s not really that significant of a change).

    Migration is the safe option, but if it’s a huge pain to migrate, I might do the in-place upgrade with a rollback plan ready if it really goes poorly.

  • palordrolap@kbin.social · 2 points · 11 months ago

    Whatever makes you happiest in the moment.

    If you have any concerns whatsoever that an upgrade over the top of an existing system might cause problems and/or leave cruft behind that will bug you, however harmless it might be: back up, format the system partition, and install fresh.

    Otherwise: back up, then install the upgrade.

    One strategy might be to upgrade a couple of times and then for the next upgrade, start afresh instead.

    That might be what I choose to do when the next version of my distro comes out since I’ve upgraded the last couple of times. Prior to that I basically started afresh because I changed distro. Maybe I’ll change distro again.

    I should probably mention that I’m a home user in charge of one PC and have never been a main sysadmin (sysadmin gofer and work monkey, yes; boss, no), so you might want to take this under advisement.

  • Toribor@corndog.social · 1 point · 11 months ago

    I’m a sysadmin as well and I consider spinning up a new instance and rebuilding a system from scratch to be an essential part of the backup and recovery process.

    Upgrades are fine, but they can be risky, and over a long enough period your system is likely to accumulate many undocumented changes; it becomes difficult to know exactly which settings or customizations are important to running your applications. VM snapshots are great, but they aren’t always portable and they don’t solve the problem of accumulating undocumented changes over time.

    Instead, if you can reinstall an OS, copy data, apply a config, and get things working again, then you know exactly what configuration is necessary, and when something breaks you can more easily get back to a healthy state.

    Generally these days I use a preseed file for my Linux installs to partition disks, install essential packages, add users and set SSH keys. Then I use Ansible playbooks to deploy a config and install/start applications. If I ever break something that takes longer than 20 minutes to fix, I can just reinstall the whole OS and be back up and running, no problem.
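
    In practice the flow is something like this (a sketch, assuming libvirt/virt-install; the file names and inventory layout are mine):

        # unattended Debian install driven by the preseed file (partitions, base packages, users, SSH keys)
        virt-install --name web01 --memory 2048 --disk size=20 \
            --location http://deb.debian.org/debian/dists/bookworm/main/installer-amd64/ \
            --initrd-inject preseed.cfg --extra-args "auto=true priority=critical"

        # then layer the config and applications on top
        ansible-playbook -i inventory/production site.yml --limit web01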

  • Avid Amoeba@lemmy.ca · 1 point · 11 months ago

    It depends on the type of machine you’re talking about. Pet machines, bare metal or VMs, such as workstations, desktops and laptops, are generally upgraded, because it takes a while to set everything up again. Cattle machines such as servers are generally recreated. That said, creating such machines typically involves some sort of automation that does the work for you. Setup scripts are the bare minimum; configuration-as-code systems such as Ansible or SaltStack are much preferable.

    So if I had a VM that runs acme.sh, I’d write an Ansible runbook that creates it from a vanilla OS installation. I stop there for my own infrastructure. When we do this in cloud environments, where we need to spin up more than one such VM quickly, we have the OS install and the Ansible run happen in a Jenkins job that builds a VM image, which is pushed to the cloud. Then we spin up ready-to-go acme.sh VMs from that image, which takes seconds.
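
    A bare-bones version of such a runbook might look like this (a sketch; the group name, email and paths are placeholders):

        cat > acme-vm.yml <<'EOF'
        - hosts: acme
          become: true
          tasks:
            - name: Install prerequisites
              ansible.builtin.apt:
                name: [curl, socat, cron]
                state: present
                update_cache: true

            - name: Fetch acme.sh
              ansible.builtin.git:
                repo: https://github.com/acmesh-official/acme.sh
                dest: /opt/acme.sh

            - name: Install acme.sh (it sets up its own cron job)
              ansible.builtin.command:
                cmd: ./acme.sh --install --accountemail admin@example.com
                chdir: /opt/acme.sh
                creates: /root/.acme.sh/acme.sh
        EOF

        ansible-playbook -i inventory acme-vm.yml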

  • ShortN0te@lemmy.ml · 1 point · 11 months ago

    Both approaches are absolutely valid. A completely new install is very easy when you only need to run a few scripts, and a small setup with minimal dependencies shouldn’t break that easily when you upgrade your distro release.

    I personally always make sure that the way I do things on a distro is the way the distro intends. That’s how I’ve kept my minimalistic Arch install and multiple larger Debian deployments going for years.