/g/ - Technology


Thread archived.
You cannot reply anymore.




File: ZRAM.png (20 KB, 202x229)
>lets you download more ram
>>
>>107018307
biggest scam that ever was. it just swaps to disk.
>>
>>107018480
a compressed disk that has a higher priority stored in ram than your real swap if you did it correctly
>>
>>107018480
doing it wrong.
i just turned off swap completely and in 20 minutes i will reboot and the system will automatically download free ram to an approximate value of 22GB.

to get this 22GB i have to give up about 3/4 of my physical ram.
sounds like a good deal considering i have 8gb physical it would cost me about 6gb.

this is so fucking cool. a friend told me about it just today.
i can tell you that i have been missing out.
>>
more like i give away 6gb worth of uncompressed to get about 20gb compressed.
beats buying a new computer.
>>
>>107018480
sorry if it was not clear but when i said i turned off swap, i mean physical swap.

a /dev/zram0 is created instead and mounted as swap using normal swapon.
zram kernel modules must be loaded.
my distro provides /etc/default/zram in pic related for configuration.
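for reference, the manual version of that setup is only a few commands. hedged sketch: the 4G size and priority 100 are illustrative values, everything needs root, and distros usually wrap this in their own unit or config file instead.

```shell
# load the module and configure the device (order matters: the
# algorithm must be set before disksize)
modprobe zram
echo zstd > /sys/block/zram0/comp_algorithm   # pick the compressor
echo 4G > /sys/block/zram0/disksize           # uncompressed capacity
# format and enable it as high-priority swap
mkswap /dev/zram0
swapon -p 100 /dev/zram0
swapon --show                                 # verify it is active
```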
>>
>>107018480

this means you have to revisit /etc/fstab as well.
there are plenty of tutorials on the whole setup. arch wiki has a nice page on it for example.
>>
File: bs.png (8 KB, 640x480)
>>107018480
>>
i haven't felt this happy since discovering file system compression.
that and zram are the ultimate poorfag innovations.
>>
>>107018719
>softram
reads like a crime novel
>>
>>107018745
Kek it does.

I actually fell for it but honestly I'd have to say it delivered. I had this mankyass 486 with 16mb ram and MKTrilogy wouldn't run. Added another 16mb of fake ram on top and I was in. Not a pleasant experience but it worked.
>>
>>107018842
when i was a kid i used drivespace on win98 to compress a floppy. to me it was magic
that was pretty cool albeit fairly primitive (as i learned just last month, that shit used a single giant file).
still, it did actual transparent file system compression as advertised.
>>
>>107018480
>disk
No, zram exists entirely in RAM. Web browser data is very compressible so my 4GB RAM chromebook acts like it has 7-9GB RAM.

Zswap is the one that works in conjunction with disk swap.
>>
>>107018307
will be fun trying a virtual machine with the same amount of ram as the host
>>
>>107018545
NTA but 3.7 is about right for zstd, yeah. It all depends though. If you are doing a lot of video-based web browsing the compression will be lower, while something like rimworld (which stores a lot of shit in ram but rarely accesses it, meaning it doesn't shit the bed with the higher latency - unlike X4 which absolutely does) compresses way harder. I've managed to get a 5.3 compression ratio with mostly rimworld data in my zram.
>>
Any idea of how well it scales?
>>
>>107018307
How is it so good tho?
>faster than zswap
>way faster than memory compression on windows
>>
>>107019535
Depends on how much you're accessing it, what your CPU is, and what you're doing (and thus how big the CPU hit for compression/decompression is for your workload). If you're just doing daily stuff then you can use zstd on a lot of stuff, and fall back to lz4 and its worse compression ratio if you're on older hardware (i use lz4 on my i3-2120 machine). The compression ratio depends entirely on what you're shoving into it, so really this is just down to knowing your probable use cases.
>>
File: OpenZFS.png (163 KB, 2560x2335)
It'd be nice if ram compression was something that was just natively in the kernel and applications could flag regions of memory they are using to preferentially use it or avoid it.

The problem with this kind of stuff is that there are way too many applications that need to churn through sizeable portions of their memory. A simple least recently used algorithm for demoting stuff to compressed space is something that can be thrashed pretty easily, and adding piles of memcpy operations to continuously compress and decompress stuff can absolutely tank system performance.

If you read/write directly to and from compressed regions you run into a bunch of issues with inconsistent space utilization that means you effectively have to perform copy on write operations, which can lead to write amplification depending on how you're tracking space utilization (page size can cause issues for instance).

This stuff is useful if you keep thousands of browser tabs open, but as soon as you throw anything demanding at it your system performance tanks.

Evictions for disc cache don't have the hard requirement that evicted blocks be written out somewhere because they're always retrievable from disc, which makes some things easier to handle because you can simply drop stuff you no longer need. ZFS has inline compression, and it keeps the cached content compressed in RAM, only decompressing it to present it to the application. For this case that turnaround time doesn't hurt because it's still enormously faster than retrieving things from disk. ZFS also has a much better algorithm for handling cache evictions than a simple LRU, so it's much more resistant to thrashing with demanding mixed workloads.
>>
>>107019535
It entirely depends on the workload. If you have tons of memory being used by applications that are just holding memory, or holding stuff they periodically access in a manner that doesn't care about microseconds of extra latency, then it's fine. The common example here is a web browser with dozens/hundreds of largely inactive tabs that are taking up memory, but not actively being used.

If you're running applications that frequently churn through enormous amounts of memory, it can and will utterly kneecap your performance. Imagine running something like a sorting algorithm on a huge block (tens/hundreds of gigs) of memory. Swapping elements is largely a random affair, so if you have to access and compare billions of elements and even some of the comparisons and swaps require compression, decompression or both, it's going to be painful. Now think about what a compiler is doing, AI workloads, games, and countless other things that see real world gains with better memory performance.

You also have to factor in the CPU usage. ZSTD is a phenomenal compression algorithm, but it's not free. Even LZ4 (which compresses substantially less) has limits on how fast it can go. You need to burn a lot of CPU time when you're cycling through multiple GB/s of memory. That can have all kinds of knock-on effects for other applications running on the system, and in the case of mobile devices you can substantially reduce battery life.
>>
>>107019610
>>107019748
Ty for info, I will try it out and see how it does for me
>>
>>107019788
Good luck.

If you notice things behaving oddly, try turning it off and doing side by side comparisons. My personal experience was that it was alright on my laptop with somewhat limited ram for keeping a bunch of applications I wasn't using open all the time when walking between classes, but it never did anything productive on my newer one that had 16GB, let alone on my desktop systems that had more and were doing actual work.
>>
>>107018500
teach me how to do it correctly
>>
>>107019990
use these sysctl settings (in /etc/sysctl.d) for effective zram usage:

vm.swappiness=180
vm.watermark_boost_factor=0
vm.watermark_scale_factor=125
vm.page-cluster=0

Make sure you've actually set up zram properly as your swap first otherwise it's gonna suck obviously.
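quick sketch of applying those (the filename 99-zram.conf is just an assumption; the real write needs root, so a temp file stands in here and only the commented last line would change):

```shell
# write the four settings to a drop-in file
conf=$(mktemp)
cat > "$conf" <<'EOF'
vm.swappiness=180
vm.watermark_boost_factor=0
vm.watermark_scale_factor=125
vm.page-cluster=0
EOF
grep -c '=' "$conf"    # 4 settings written
# as root: install -m 644 "$conf" /etc/sysctl.d/99-zram.conf && sysctl --system
```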
>>
File: 1739751211794231.jpg (96 KB, 1200x675)
I could never get this to work, and I tried the foolproof zram-generator
does this even work with ZFS caching?
>>
>>107018307
>lets you download more ram
Finally! After all these years ...
>>
>>107020277
>does this even work with ZFS caching
Technically yes, but I wouldn't dare to mix the two of them. The problem is that zram is effectively swap, so linux has to move stuff into and out of swap space to access it or to (attempt to) make more space. If you could natively compress RAM without going to swap you might be able to have them play nice together, but there'd still be a latency penalty to double decompressing stuff without some sanity check on zram to disable compression on noncompressible data (which would in turn involve either a lot of redundant checking/memory access, or creating some kind of table to track if a block of memory can't be compressed).

ZFS is considered application data (which is why the OoM killer can be such a problem on linux), and it already stores the vast majority of its data in compressed form. ZFS is also looking at available system memory to decide how big it wants to make its cache. ZRAM is dynamically sized, so there are some race conditions where the two end up confusing each other and lead to insane amounts of memory IO.

Maybe there's a way to tag specific things to never use zram now, but I'm pretty sure there wasn't last I looked into it, and even if there was, I'm not sure that locking ZFS out of it would solve all the race conditions.

IDK. Don't take my word on this as gospel. It's all above my paygrade. I've not noticed much benefit from zram on modern systems, and I swapped over to root on zfs a few years ago for all of my non-windows systems, so I've had zero inclination to tinker with it.
>>
>>107019643
>It'd be nice if ram compression was something that was just natively in the kernel and applications could flag regions of memory they are using to preferentially use it or avoid it.
madvise(2)
>>
>>107018307
>his cpu IMC can't compress ram automatically
ngmi
>>
>>107020522
got any tips to substantially power zfs arc memory usage?
>>
>>107020522
>>107023560
i meant lower mem use
>>
>>107018500
you should never use a ram disk if you use zram.
>>107019537
it's worse than zswap in every way

the only reason to use zram is if you have absolutely no disk based swap backing. As in you have no disk or you have a slow HDD. even a SATA SSD with zswap would be better if you need more ram than your system has.
>>
>>107019990
swap priority?
swapon /dev/zram0 -p 100
swapon /swapfile -p 50
>>
>>107023560
>>107023573
First you need to address why you want it to be lower. Having your system redlined on memory is actually a good thing unless you have some need for the ability to rapidly allocate a ton of RAM. Unless you're storing large amounts of tiny records in RAM (IE databases with a TB or more of 4-16k allocations), the ARC can free large percentages of your system's memory within fractions of a second in response to memory pressure. You basically need to be spinning up pre-allocated VMs or something along those lines for this to be a significant issue.

There are tunables that you can use to adjust the amount of memory that the ARC can use.
>https://openzfs.github.io/openzfs-docs/Performance%20and%20Tuning/Module%20Parameters.html#zfs-arc-sys-free
>https://openzfs.github.io/openzfs-docs/Performance%20and%20Tuning/Module%20Parameters.html#zfs-arc-max

The first parameter lets you set a reserved amount of free ram on your system: if free memory (after accounting for the ARC) drops below that threshold, the ARC will attempt to shrink until the reserve is met once again. The second sets a hard limit on the amount of memory the ARC can utilize.
>echo 4294967296 >> /sys/module/zfs/parameters/zfs_arc_sys_free
Entering that means the arc will attempt to leave 4GiB of free memory on the system.
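to cap the ARC outright with the second tunable, the value is plain bytes. sketch below only does the arithmetic (the actual write needs root and a loaded zfs module, so it's left as a comment):

```shell
# compute an 8 GiB hard cap for the ARC, in bytes
arc_max=$((8 * 1024 * 1024 * 1024))
echo "$arc_max"    # 8589934592
# as root: echo "$arc_max" > /sys/module/zfs/parameters/zfs_arc_max
```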
>>
>>107024539
>>107020045
wrong
zram-size = ram / 2
compression-algorithm = zstd
swap-priority = 100
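for anyone wondering where those keys go: assuming systemd's zram-generator, they live in a config like this (section name and path per its docs; values mirror the ones above):

```ini
# /etc/systemd/zram-generator.conf
[zram0]
zram-size = ram / 2
compression-algorithm = zstd
swap-priority = 100
```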
>>
>>107024650
saved
>>
>>107025012
the zram size is not a fixed thing.
judging by my distribution docs, some set it to even double the physical ram.
the more you allocate to zram, the sooner it should start "swapping", and the sooner it will start to compress.

it really depends on what you need and the compression ratio.
regardless if you play safe or not, you will run out of ram at some point so might as well crank it up. at least thats how i prefer it.
>>
i'm basically zram maxxing, fuck playing safe. its literally free ram for everyone!
>>
>>107025012
this is the doc i'm referring to
>>107018634
i didnt have to do much to get it working
a look into that, then the sysctl settings, and making sure the zram module is loaded (it already was on my computer).
also commented out the old swap in fstab.
>>
>>107025227
>>107018634
https://github.com/systemd/zram-generator
>>
>>107025324
>systemDick
no thanks
>>
>>107025012
zstd is half the speed of lzo-rle for not much compression gain.
>>
>>107024539
You shouldn't have a regular swap file at all unless you're doing hibernation.
>>
>>107025151
it is a fixed thing, thats why theres a setting for it
>>
>>107025600
are you on purpose? i'm saying that the size which is assigned doesn't have to be half of physical memory because it depends on what the user needs and what is possible.
>>
>>107025012
There's no general reason to set zram larger than 8GB, if you need it to be larger than 8GB you'd already know you needed to and wouldn't be asking about it on a Mongolian Throat Singing Bulletin Board.
>>
>>107025834
i heard another anon say its better to get zram compressing ram rather than not and i think he may be right
>>107025864
distros set it up for you
>>
>>107026443
i just started to use it and i think that if you allocate a big chunk (like 3/4ths of physical) with high swappiness, the ram will "swap" much earlier helping you prevent thrashing.

i'm just speculating though.

since zram does not occupy space it's not using, by swapping earlier you get a bigger buffer of sorts, giving it more time for mems to be compressed way before the uncompressed ram is actually full.

what i mean by 3/4ths is say allocating something like 20gb to zram on a ram which totals 8gb uncompressed
assuming a compression ratio of 3.2, you'd need about 6 gb of physical out of a total of 8gb, leaving the 2gb to remain uncompressed.
that should in theory give you something like 22gb total ram.

thats maybe optimistic or not and it depends on if whatever is put in ram compresses well and i have yet to fill that up.

i might have gotten everything backwards but that seems logical. you should just read the docs for the zram module and zramctl

i'm installing a virtual machine right now to test this out and see how big the performance penalty is etc.
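the estimate in that post can be sanity-checked with a quick awk sketch using its assumed numbers (ratio 3.2, 6gb of physical holding compressed pages, 2gb left uncompressed):

```shell
ratio=3.2          # assumed zstd compression ratio
compressed_gib=6   # physical ram holding compressed pages
plain_gib=2        # physical ram left uncompressed
# effective capacity = compressed portion * ratio + uncompressed portion
effective=$(awk -v r="$ratio" -v c="$compressed_gib" -v u="$plain_gib" \
    'BEGIN { printf "%.1f", c * r + u }')
echo "${effective}gb"    # 21.2gb, i.e. "something like 22gb total"
```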
>>
it's running the virtual machine.
memory usage has safely passed my normal capacity.
i do see a few minor lags from time to time as i move the cursor to type this out but nothing major.
the music plays in the background uninterrupted, streaming radio.
firefox has loaded a few tabs including a web app which is moderately demanding.
i'm impressed with how it's playing out for now.

physical ram: 7.6gb
these are my settings:
#20gb
ZRAMSIZE=20011080
ZRAMCOMPRESSION=zstd
ZRAMPRIORITY=100

/etc/sysctl.d/99-swappiness.conf:
vm.swappiness = 180
vm.watermark_boost_factor = 0
vm.watermark_scale_factor = 125
vm.page-cluster = 0

here is zramctl output:
NAME       ALGORITHM DISKSIZE  DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram0 zstd 19.1G 4.7G 2.2G 2.2G 4 [SWAP]


usage was briefly up to 14 gb total, something like low 7 on the ram and 7 on the zram, and not a stutter in terms of audio or usability. some minor hiccups but they are quickly resolved. no major thrashing or anything. it's impressive
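for the record, the ratio in that zramctl line is DATA divided by COMPR; a quick awk check on the quoted numbers (a live version would pipe `zramctl --raw` instead of the hardcoded line):

```shell
line="/dev/zram0 zstd 19.1G 4.7G 2.2G 2.2G 4 [SWAP]"
# column 4 is DATA (uncompressed), column 5 is COMPR (compressed)
ratio=$(echo "$line" | awk '{ gsub(/G/, ""); printf "%.1f", $4 / $5 }')
echo "$ratio"    # 2.1
```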
>>
should probably dial it down a bit if the compression ratio stays at 2.1x or anything lower than 2.8. will have to monitor.

the virtual machine received as much memory as the host has originally.
before zram i always made sure never to pass physical usage because it would obviously thrash to swap and pretty much go on prolonged sleep so this is a massive improvement. like magic
>>
They say that there is an empty spot on my board and to install chip with solder. Am I doing it right?
>>
File: 1747926416252053.png (25 KB, 210x208)
is it working
>>
>>107028180
if the physical swap partition is turned off and swap has been turned on on /dev/zram0 instead, then yes.

run swapon like below to see where swap is mounted

swapon --show

also run zramctl and check that.
>>
I have an nvme-backed 4GiB swap (that is never used for some reason). Should I even bother?
>>
>>107028518
maybe your swappiness is too low or 0.
personally, i find swap on ssd is just too slow. i dont know about nvme.
whenever i had to swap to ssd the thrashing was so bad that i had to use thrash-protect.py. it's a daemon that sends SIGSTOP and SIGCONT to slow down processes that start thrashing.
it did help in recovering quicker, but i still had to wait through the slowdowns where everything is frozen for a minute or even more. you never know, but it was a bit better than without it when thrashing occurred.

maybe if you compress it with zswap (same deal as zram but for swap on disk instead of in ram)
then you'd have to write only half as much or less to that drive for the same swap use, which would make it quicker.

also, i turned off thrash-protect.py because i suspect it might do more harm than good; when zram is used without ssd swap there is no point anyhow.
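if anyone wants to try the zswap route, it's toggled through module parameters rather than a device. hedged sketch (needs root; the paths are the mainline kernel ones, and the pool percentage is just an example value):

```shell
# enable zswap and pick a compressor (root required)
echo 1 > /sys/module/zswap/parameters/enabled
echo zstd > /sys/module/zswap/parameters/compressor
echo 20 > /sys/module/zswap/parameters/max_pool_percent  # cap pool at 20% of ram
# inspect the current settings
grep -r . /sys/module/zswap/parameters/
```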
>>
actually thrash-protect.py saved me many times.
before it, i'd probably just reboot because something allocated so much memory that everything just clogged.
thrash-protect.py gets you out of this mess in most cases
>>
>>107028663
SATA SSD isn't that fast. NVMe is multiple times faster.
>>
>>107026443
For distros that use zram by default, the default config is half of RAM up to 8GB.


