Does having swap memory damage SSDs too much? What do you think about it?
SSDs have a limited number of lifetime writes. Depending on the size of the swap file and the frequency with which you write to it, you could go through those lifetime writes faster than you expect. You can keep an eye on it by looking at your drive's health metrics, which report how much of the write endurance has been used, usually as a percentage.
If you pull up the data sheet for your drive, it'll tell you the total number of writes it's rated for (usually quoted as TBW, terabytes written).
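If you want to check those counters yourself, smartmontools can read them from most drives. A minimal sketch; the device path is just an example and will differ on your system:

    # query SMART data (adjust /dev/nvme0 to your drive)
    sudo smartctl -a /dev/nvme0
    # NVMe drives report "Percentage Used" and "Data Units Written"
    # (one data unit = 512,000 bytes per the NVMe spec)
    # SATA SSDs expose vendor-specific attributes instead, e.g. Total_LBAs_Written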
SSDs have a limited number of lifetime writes.
Yes, but in the real world it is not a concern. The number of writes you can do is so huge that you will never even come near it, and the speed boost from an SSD far, far outweighs it.
Good summary.
It should last a long time, but I have exhausted multiple disks. It just depends on your workload and the drive's lifespan.
I see, will look into that later thank you!
People think that Windows doesn't do swap because on Windows it's done automatically for you. Does it wear down the SSD? Yes, but so does every other write operation. Ideally you'd get something like 32 GB of RAM so you never have to use swap (or at least use it less), but not everyone can do that.
My thought on this:
If it was bad, wouldn’t we know by now?
SSD-only systems have been a thing for over a decade, and SSDs themselves have been around for decades.
If standard swap files damage SSDs, someone probably would have said something.
Exactly. I think I'm still running my original SSD. I've only had one die, and that was definitely an issue with the disk itself rather than the writes, since it lasted only a year.
On one hand, yes. But, at the same time I think this is why we’re seeing an influx of cheap SSDs onto the market.
Just like with LED bulbs: the early ones generally last a long time, but modern ones are made more cheaply and driven so hard that they don't last nearly as long.
I wonder if the latest very cheap SSDs will have anything like the kind of longevity older drives do.
I thought about that after making my post.
Just like there are shitloads of bad SD cards (no-name, unbranded, generic, etc.), it's cheap and easy for any random company to produce their own SSD and get it into circulation alongside legitimate ones.
Any SSD that could be damaged by a swap file is not an SSD you should have anywhere near your system in the first place (even if you never plan on putting a swap file on it).
Oh, that makes sense… Now I'm concerned about mine
Makes sense…
I did calculate it once for an older Samsung drive: if you write multiple terabytes every day, you will cross Samsung's estimated lifetime in about 3 years.
I have no idea how much data a swap partition moves per day, but it can't be anywhere near that much?
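Rough back-of-the-envelope (illustrative figures; Samsung rates some of its larger consumer drives at around 2,400 TBW): 2 TB/day × 365 days ≈ 730 TB/year, and 2,400 ÷ 730 ≈ 3.3 years, which lines up with that estimate. A desktop that swaps a few GB a day would take centuries to hit the same number.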
But the swap partition is only used when you run out of RAM, right? If I have enough RAM, I shouldn't have to worry about that.
No, it's used much more often than that. How often is determined by a value called swappiness.
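You can check the current value on your system like this:

    cat /proc/sys/vm/swappiness   # 60 is the default on many distros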
100% this. An aggressive memory manager could preemptively write everything from memory to swap even though it’s still in memory, in case it has to evict it quickly.
I generally wouldn’t recommend this, especially if you’re using a cheaper SSD without cache or with QLC memory.
As you already know, cells on an SSD have limited write cycles (as low as ~700 for QLC memory). Things like TRIM and wear leveling make sure your SSD wears uniformly, but on cheaper SSDs the endurance is so low that, without a cache, you will run into wear issues within a few years of regular use, or a few months if you use the drive heavily or with swap enabled. I have seen it first-hand many times working in a repair shop.
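If you want to confirm TRIM is actually running, here's a quick check on systemd-based distros (a sketch, not the only way):

    systemctl status fstrim.timer   # is periodic TRIM scheduled?
    sudo fstrim -v /                # trim the root filesystem once and report how much was trimmed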
Keep in mind that endurance is not just a number of terabytes written after which your drive suddenly switches to read-only mode. Before it fails, it will usually slow down to the point of making your PC unusable; I've seen SSDs write as slowly as 9 MB/s (specifically a Yucun drive from 2018 with TLC memory and no cache). It's not defective, it just has to do a lot of error correction during writes.
Also, another issue that plagues cheap SSDs is that their controllers usually die well before the memory does. Keep that in mind when choosing an SSD, because this usually happens without early signs of failure or SMART errors.
So in general, unless your PC has a lot of RAM and that swap area will rarely be used, don’t use swap and use zram instead (or just buy more RAM?).
Ooh, so… Would a Crucial P3 2280 500 GB fall into this category?
P3 is QLC, yes
Oh, sad, I didn't know that. So, like, does that mean this SSD is super bad? And should I remove the swap here? (I have 8 GB of RAM and 4 of swap)
Oh, and how do I set up zram?
If you only have 8 GB you're probably swapping a lot; it depends on what you do with your PC.
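An easy way to see how much you're actually swapping right now:

    free -h          # the "Swap:" row shows total vs. used
    swapon --show    # lists active swap devices/files and their usage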
Anyway, to set up zram the easiest way is to try this: https://github.com/systemd/zram-generator/tree/main Also, take a look at the zram page on the arch wiki, even if you’re not using arch it’s very well made: https://wiki.archlinux.org/title/Zram
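For reference, a minimal zram-generator config looks roughly like this (the values are just examples; check the README above for the authoritative options):

    # /etc/systemd/zram-generator.conf
    [zram0]
    zram-size = min(ram / 2, 4096)
    compression-algorithm = zstd

Then reload and start it:

    sudo systemctl daemon-reload
    sudo systemctl start systemd-zram-setup@zram0.service
    swapon --show   # zram0 should now appear as swap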
I usually only see swapping when I'm, like, playing big Minecraft modpacks, which I don't do much, but ok, thanks!
Well, TIL.
Is there any way you can estimate the health status of an SSD?
Check the SMART status. If you're using KDE, you can install plasma-disks, which integrates nicely and warns you of potential failures.
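Outside of KDE, you can get the same overall verdict from the terminal (the device path is just an example):

    sudo smartctl -H /dev/sda   # prints the drive's overall health self-assessment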
This won't predict controller failures, of course; those are generally unpredictable. But sometimes SSD controllers that are about to fail will show massive lag spikes or straight-up disconnect while you're using them. If that happens, back up your stuff immediately.
Another sign of early failure is extremely slow write speeds. All SSDs slow down a bit once the cache fills up, but if you see speeds slower than a mechanical drive, the memory is dying.
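If you want to measure sustained write speed yourself, a rough sketch (writes 1 GiB while bypassing the page cache, then cleans up):

    dd if=/dev/zero of=./ddtest bs=1M count=1024 oflag=direct status=progress
    rm ./ddtest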
Very rarely, you'll see uncorrectable errors: being unable to open a file, a corrupt file system, or files with corrupted chunks (usually 4 KB blocks of zeros). If that happens, it's already too late.
Also, the health status of a drive only indicates how worn the memory is. Don't expect the drive to last until it gets to 0%; it's rare to even get to 60%.
sweet thanks!
I have done some tests, and under low-memory conditions, when frequently writing to the drive (especially with a high /proc/sys/vm/dirty_writeback_centisecs), swap can actually reduce the amount of writes to the drive, likely because evicting cold pages leaves more RAM for the page cache to batch up dirty data before flushing it. If you do have enough memory, swap is hardly used but still results in a noticeable speed improvement.
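For anyone who wants to poke at that knob, it's readable and settable like any other vm sysctl (1500 here is just an illustrative value):

    cat /proc/sys/vm/dirty_writeback_centisecs        # default is 500 (5 seconds)
    sudo sysctl vm.dirty_writeback_centisecs=1500     # flush dirty pages less often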
I wouldn’t worry too much about it, since good modern SSDs have such high TBW values that you could usually rewrite half the disk every day and it’d still survive the warranty period. SSDs often survive longer than the TBW value - that’s just the amount that’s warrantied, and manufacturers are very conservative in their warranties.
I’ve seen server systems that have been running 24/7 for over 10 years, with a consumer grade SSD (Samsung 830 EVO or equivalent), with swap enabled, and they’re still running fine.
If you have plenty of RAM (i.e. you usually don't actually need swap), reduce the value of vm.swappiness in /etc/sysctl.conf. 10 is a good value in that case. It's a number between 0 and 100, where 0 means to never swap and 100 means to always swap (apps will be swapped out shortly after loading). The default on many distros is 60, which tends to start swapping quite a bit once the RAM is around 50% full.
Use zram instead of swap.
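Applying the vm.swappiness change above is a one-liner, plus an entry to persist it across reboots:

    sudo sysctl vm.swappiness=10                                 # takes effect immediately
    echo 'vm.swappiness = 10' | sudo tee -a /etc/sysctl.conf     # survives reboots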
Does having swap memory damage SSDs
Not to the point that it's worth worrying about. Seriously. Unless you're on something stupid like an SD card, the write endurance of an SSD will be fine for the purposes of swap.
If you're regularly thrashing the swap it's still fine, but maybe rethink what you're doing anyway, because there's probably a better way.
I doubt it would hurt the drive's lifespan to a noticeable extent. Swap partitions have been used for decades on much more fickle hardware.
I'd still only recommend them for systems with under 16 GB of RAM (especially if that's shared with VRAM, like on a Steam Deck); otherwise swap doesn't have much benefit.
PS: Even if a swap partition is set up, it won't necessarily see much use; pages are usually only evicted to swap once regular RAM is nearly depleted.
Consider ZRAM/compressed RAM swap, maybe?
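If you want to verify whether swap is actually being touched, watching swap activity is more informative than the static usage numbers:

    vmstat 1   # the "si" and "so" columns show memory swapped in/out per second;
               # sustained nonzero values mean swap is in active use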