A ramdisk – or if you prefer, RAMdisk – is a method of taking a section of memory and treating it as a disk. If you think about it for a moment, the pros/cons should be obvious:
- RAM is much faster than even the fastest disk, so operations on a ramdisk are much faster than on NVMe or SSD, and certainly faster than on spinning disk.
- However, RAM is also volatile. If the server reboots or crashes, anything on the ramdisk is lost.
Ramdisks are excellent places to keep caches, session files, and other ephemeral data. I’ve even seen setups where people keep database journals on a ramdisk (e.g., the Postgres write-ahead log, Oracle redo logs, etc.), with scripts around them to copy the journals to permanent storage at shutdown, restore them at startup, and take frequent backups in case there is a crash.
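A minimal sketch of that save/restore pattern, assuming a hypothetical backup directory (a real setup would also need to worry about ordering and failure cases):

# At shutdown (or periodically, for the frequent-backup case):
# copy the ramdisk's contents to permanent storage
rsync -a --delete /ramdisk/ /var/lib/ramdisk-backup/

# At boot, after the tmpfs is mounted: restore them
rsync -a /var/lib/ramdisk-backup/ /ramdisk/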
Let’s look at setting up and using a ramdisk. For fun, I fired up a big 24GB Linode system:
root@bigmem:~# free -m
               total        used        free      shared  buff/cache   available
Mem:           24049          65       23913           0          70       23732
Swap:            511           0         511
You’d probably think you need to reconfigure the system and reboot, but it’s much simpler than that:
root@bigmem:~# mkdir /ramdisk
root@bigmem:~# mount -t tmpfs -o size=16G ramdisk16 /ramdisk
root@bigmem:/ramdisk# df -h /ramdisk
Filesystem      Size  Used Avail Use% Mounted on
ramdisk16        16G     0   16G   0% /ramdisk
That’s really all there is to it. Note that “ramdisk16” here is an arbitrary name I’ve chosen. I believe this parameter is required because of the general form of the mount command (which is “mount -t <type> -o <options> <device> <mountpoint>”). There is no “device” per se, so a placeholder is used.
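The placeholder really is arbitrary; “none” and “tmpfs” are common conventions, and findmnt will show you how the kernel recorded whatever name you chose (shown here as an alternative to the command above, not something to run on top of it):

# "none" is a common convention for the device placeholder
mount -t tmpfs -o size=16G none /ramdisk

# show how the mount was recorded, placeholder and all
findmnt /ramdisk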
Note that the term “ramdisk” is a little misleading, because you’d expect to get some kind of /dev/sdX device to make a filesystem on, etc. You could think of a ramdisk as really more of a “ram filesystem” than a pre-created disk.
I can do anything I want now with /ramdisk – create directories, add files, etc. But when I unmount it (or the system reboots), all is lost. I can make the entry permanent in /etc/fstab, but of course all this means is that the ramdisk will be re-created upon reboot as an empty mount:
ramdisk16 /ramdisk tmpfs defaults,size=16G 0 0
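One way to sanity-check that fstab entry without actually rebooting (and assuming nothing important is on the mount yet, since unmounting discards it) is:

umount /ramdisk   # discards the current contents!
mount -a          # mounts everything in /etc/fstab that isn't already mounted
df -h /ramdisk    # should show an empty 16G tmpfs again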
So, is it really any faster? Let’s see. And stay tuned for a head-scratcher!
ioping
Using the ioping tool, here's what the SSD disk looks like:

root@bigmem:/# ioping .
4 KiB <<< . (ext4 /dev/sda 19.2 GiB): request=1 time=82.2 us (warmup)
4 KiB <<< . (ext4 /dev/sda 19.2 GiB): request=2 time=933.8 us
4 KiB <<< . (ext4 /dev/sda 19.2 GiB): request=3 time=304.4 us
4 KiB <<< . (ext4 /dev/sda 19.2 GiB): request=4 time=306.1 us
4 KiB <<< . (ext4 /dev/sda 19.2 GiB): request=5 time=279.4 us
4 KiB <<< . (ext4 /dev/sda 19.2 GiB): request=6 time=337.5 us
^C
--- . (ext4 /dev/sda 19.2 GiB) ioping statistics ---
5 requests completed in 2.16 ms, 20 KiB read, 2.31 k iops, 9.04 MiB/s
generated 6 requests in 5.63 s, 24 KiB, 1 iops, 4.26 KiB/s
min/avg/max/mdev = 279.4 us / 432.2 us / 933.8 us / 251.4 us
Meanwhile, doing the same on the ramdisk:

root@bigmem:/# cd /ramdisk/
root@bigmem:/ramdisk# ioping .
4 KiB <<< . (tmpfs ramdisk16 16 GiB): request=1 time=1.76 us (warmup)
4 KiB <<< . (tmpfs ramdisk16 16 GiB): request=2 time=8.43 us
4 KiB <<< . (tmpfs ramdisk16 16 GiB): request=3 time=12.1 us
4 KiB <<< . (tmpfs ramdisk16 16 GiB): request=4 time=11.8 us
4 KiB <<< . (tmpfs ramdisk16 16 GiB): request=5 time=10.2 us
4 KiB <<< . (tmpfs ramdisk16 16 GiB): request=6 time=9.30 us
4 KiB <<< . (tmpfs ramdisk16 16 GiB): request=7 time=9.66 us
^C
--- . (tmpfs ramdisk16 16 GiB) ioping statistics ---
6 requests completed in 61.5 us, 24 KiB read, 97.6 k iops, 381.3 MiB/s
generated 7 requests in 6.48 s, 28 KiB, 1 iops, 4.32 KiB/s
min/avg/max/mdev = 8.43 us / 10.2 us / 12.1 us / 1.32 us
I was going to calculate the percent difference, but the numbers speak for themselves: average latency dropped from roughly 432 µs on the SSD to about 10 µs on the ramdisk, which is more than 40x faster.
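Incidentally, if you’d rather not interrupt ioping with Ctrl+C as I did above, it accepts a request count:

# run exactly 10 requests against the ramdisk, then print statistics
ioping -c 10 /ramdisk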
dd write and read
Let’s blast out a 1GB file:
root@bigmem:/ramdisk# time dd if=/dev/zero of=/ramdisk/1gbfile bs=1MB count=1024
1024+0 records in
1024+0 records out
1024000000 bytes (1.0 GB, 977 MiB) copied, 0.38423 s, 2.7 GB/s

real    0m0.450s
user    0m0.000s
sys     0m0.448s
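A side note that explains the “977 MiB” in dd’s output: bs=1MB means 1,000,000-byte blocks, while bs=1M means 1,048,576-byte blocks. If you want a true 1 GiB file, use the latter (with a filename of your choosing):

# 1024 x 1048576 bytes = exactly 1 GiB
dd if=/dev/zero of=/ramdisk/1gibfile bs=1M count=1024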
Now here’s something interesting. I would expect a 2GB file to take about 2x as long, and indeed it does. But how long does a 4GB file take? Or an 8GB? It sure doesn’t scale linearly. Here are some times:
| ramdisk file size | time to allocate with dd |
|-------------------|--------------------------|
| 1GB               | About 0.4 seconds        |
| 2GB               | About 0.75 seconds       |
| 4GB               | About 32 seconds         |
| 8GB               | About 145 seconds        |
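If you want to reproduce these numbers, a quick loop like this does it (with a hypothetical test filename; your times will of course vary with the host):

for size in 1 2 4 8 ; do
  rm -f /ramdisk/scalefile
  echo "=== ${size}GB ==="
  time dd if=/dev/zero of=/ramdisk/scalefile bs=1MB count=$((size * 1024))
done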
My theory is that the physical host server can grab 1GB or 2GB of RAM pretty easily and probably has chunks of that size just lying around, but to find 8GB of free memory it has to search much harder. My mental analogy is a parking lot: one car can park almost immediately, while 32 cars arriving at once will wait, because even if the lot has the space, finding that many free spots takes longer.
I asked the community for an explanation.
Back to our ramdisk. Let’s compare ramdisk to NVMe. Ramdisk:
root@bigmem:/ramdisk# for run in 1 2 3 ; do time dd if=/dev/zero of=/ramdisk/2gbfile bs=1MB count=2048 ; done
2048+0 records in
2048+0 records out
2048000000 bytes (2.0 GB, 1.9 GiB) copied, 0.727246 s, 2.8 GB/s

real    0m0.853s
user    0m0.000s
sys     0m0.851s
2048+0 records in
2048+0 records out
2048000000 bytes (2.0 GB, 1.9 GiB) copied, 0.729331 s, 2.8 GB/s

real    0m0.855s
user    0m0.000s
sys     0m0.853s
2048+0 records in
2048+0 records out
2048000000 bytes (2.0 GB, 1.9 GiB) copied, 0.726534 s, 2.8 GB/s

real    0m0.853s
user    0m0.000s
sys     0m0.851s
So about 0.85 seconds on the ramdisk. Now the NVMe disk:
root@bigmem:/ramdisk# for run in 1 2 3 ; do time dd if=/dev/zero of=/2gbfile bs=1MB count=2048 ; done
2048+0 records in
2048+0 records out
2048000000 bytes (2.0 GB, 1.9 GiB) copied, 1.86907 s, 1.1 GB/s

real    0m2.059s
user    0m0.000s
sys     0m1.441s
2048+0 records in
2048+0 records out
2048000000 bytes (2.0 GB, 1.9 GiB) copied, 1.66434 s, 1.2 GB/s

real    0m1.855s
user    0m0.000s
sys     0m1.437s
2048+0 records in
2048+0 records out
2048000000 bytes (2.0 GB, 1.9 GiB) copied, 1.69488 s, 1.2 GB/s

real    0m1.887s
user    0m0.004s
sys     0m1.424s
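One caveat on these disk numbers: by default, dd returns once the data is in the OS page cache, not necessarily on the device, so the NVMe times may be flattering. GNU dd’s conv=fsync flushes everything to the device before exiting, if you want to measure that instead (I didn’t here):

time dd if=/dev/zero of=/2gbfile bs=1MB count=2048 conv=fsync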
Now let’s try reads:
root@bigmem:/ramdisk# for run in 1 2 3 ; do time dd if=/ramdisk/2gbfile of=/dev/null bs=1024k ; done
1953+1 records in
1953+1 records out
2048000000 bytes (2.0 GB, 1.9 GiB) copied, 0.183827 s, 11.1 GB/s

real    0m0.185s
user    0m0.000s
sys     0m0.185s
1953+1 records in
1953+1 records out
2048000000 bytes (2.0 GB, 1.9 GiB) copied, 0.18002 s, 11.4 GB/s

real    0m0.181s
user    0m0.000s
sys     0m0.181s
1953+1 records in
1953+1 records out
2048000000 bytes (2.0 GB, 1.9 GiB) copied, 0.180192 s, 11.4 GB/s

real    0m0.181s
user    0m0.000s
sys     0m0.181s

And the same read test against the file on the NVMe disk:

root@bigmem:/ramdisk# for run in 1 2 3 ; do time dd if=/2gbfile of=/dev/null bs=1024k ; done
1953+1 records in
1953+1 records out
2048000000 bytes (2.0 GB, 1.9 GiB) copied, 0.172908 s, 11.8 GB/s

real    0m0.174s
user    0m0.000s
sys     0m0.174s
1953+1 records in
1953+1 records out
2048000000 bytes (2.0 GB, 1.9 GiB) copied, 0.235156 s, 8.7 GB/s

real    0m0.236s
user    0m0.000s
sys     0m0.236s
1953+1 records in
1953+1 records out
2048000000 bytes (2.0 GB, 1.9 GiB) copied, 0.171044 s, 12.0 GB/s

real    0m0.172s
user    0m0.000s
sys     0m0.172s
So looking at these numbers, I’m guessing OS caching is helping the disk reads a lot, which is a good thing. Latency and writes are much better on the ramdisk, as we would expect. Particularly for writes, you’re comparing what is essentially a memory copy (the ramdisk) against interacting with a storage device: asking it to commit data and waiting for the response. All of that is extremely fast, of course, but it’s never going to be as fast as something that never leaves main memory.
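If you want to test the caching theory, one approach (which I didn’t run here) is to drop the page cache before re-reading the file, forcing the first read to come from the device:

sync
# drop page cache, dentries, and inodes (root only)
echo 3 > /proc/sys/vm/drop_caches
time dd if=/2gbfile of=/dev/null bs=1024k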