NFS and FS-Cache | Faster Performance with Distributed Storage

I make a lot of digital things and need the space to store them. I don’t like deleting items; after all, I made the time investment to create them. I also don’t want to put my data in “cold storage,” only to forget where I put the drive or lose access to bit-rot. I want a way to easily attach to my larger data repository (a ‘new’ server I have deployed) and, using the power of Tailscale, maintain access no matter where I am, as though it were resident on my local file system. Thanks to the capabilities of NFS and FS-Cache, I have extended my Framework 13’s 2 TB of storage to the full 32 TB of RAIDed storage on that server, without the performance penalty typically experienced with a network file system.

Background

I want to move away from having a bunch of loose drives to store my data, and I want all of it available to me whenever I want it. I also want to reduce the chances of bit-rot by replicating my data across a few machines running BTRFS, backed up on my NAS with ZFS. The point of this is to maintain data integrity and availability. I have more data than will fit on my laptop, and using the laptop as bulk storage would largely be a waste of it.

The other key is video work: when I create videos, I have lots of source files that go into a single finished video, and if I move the working files onto another storage medium, the links to each of the assets break and it takes far too long to fix them all. By using NFS with FS-Cache, I get a much more flexible and efficient workflow for all my projects.

I did find a pretty good guide here that answered most of my questions but left me with a few holes, so consider this an addendum from one particular openSUSE Tumbleweed user.

https://computingforgeeks.com/how-to-cache-nfs-share-data-with-fs-cache-on-linux/

Since I really like the diagram from that article, I was going to embed it here to give a brief understanding of what is going on, but something was blocking that, so I made a diagram of my own to illustrate the process.

When you boil it all down, you are doing nothing more than creating a local cache, on your own system, of the most recently accessed files from the NFS share. This improves performance so long as you stay connected to the share. Once the connection is lost, access to the files, including what you have cached, is lost as well. This makes sense: the data may have changed on the server, so to prevent file corruption it is safest to invalidate the cache.

User space is where applications access the various files through the virtual file system, which is connected to my server; from the application’s perspective, there is no distinction between local and remote. Using FS-Cache along with NFS, I get better performance with the files I am actively working on, leaving the heavy lifting to the operating system.

Since these instructions target the family of openSUSE distributions that I use (Leap and Tumbleweed), I will describe the process for making this happen with my selection of machines. It should be easy enough to adapt these instructions to other distributions as necessary.

Install FS-Cache

FS-Cache is available in both Leap and Tumbleweed and can be installed with this command:

sudo zypper install cachefilesd

The service then needs to be started and enabled, which is easily done in a single command:

sudo systemctl enable --now cachefilesd

Verify that the service is running by executing:

sudo systemctl status cachefilesd

It should report the service as active (running), which means you are all good. Don’t worry about the “preset: disabled” bit in the status output; it just means the auto-generated presets for this service are not being applied.

Configure FS-Cache

The next step is to configure FS-Cache. Here you can change the backend directory used for the cache and adjust the cache culling limits for your particular use case. I have found that the defaults are probably safest for most, if not all, cases.

Using your favorite editor (mine being micro, though many people like vim or nano), edit cachefilesd.conf in the terminal.

sudo micro /etc/cachefilesd.conf

You should see something like this:
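Roughly, this is the stock file on my install; the exact values and comments may differ between versions, and the secctx line only matters on SELinux systems:

dir /var/cache/fscache
tag mycache
brun 10%
bcul 7%
bstop 3%
frun 10%
fcul 7%
fstop 3%
# secctx system_u:system_r:cachefiles_kernel_t:s0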

I am not using SELinux on my machine, so I can’t give any advice on the SELinux-related secctx line. What you will notice is that the backend directory is already set to the default /var/cache/fscache, which, unless you have some reason to dislike it, can be left just as it is.

The options below are as follows:

brun – when free space on the cache partition is above this percentage (10%, as seen above), culling is turned off.

bcul – when free space falls below this percentage, cachefilesd starts culling, discarding the least recently used cache objects to reclaim space.

bstop – when free space falls below this percentage, no new cache entries are created at all until culling raises free space back above the limit.

frun, fcul and fstop are the same three thresholds, but measured against the number of available files (inodes) on the cache partition rather than free blocks; both sets are checked. In other words, these are on-disk culling limits, not RAM allocations, and the cache itself lives entirely in the backend directory. Tightening or loosening them changes how much of that partition the cache is allowed to consume before culling kicks in. The manpage has the formally correct wording, but for the monkey-brain I have, this will do well enough.
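To put rough numbers on it: on a hypothetical 500 GB cache partition with the defaults above, culling would begin once free space drops below about 35 GB (bcul 7%), new cache allocations would stop below about 15 GB (bstop 3%), and culling would switch off again once free space climbs back above 50 GB (brun 10%).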

You can read up more on this by toiling through the manpage here:

man cachefilesd.conf

Once you have made your changes, if any, restart the cachefilesd service:

sudo systemctl restart cachefilesd

I also recommend you check the status afterwards to make sure your changes didn’t cause the service to fail to start. I did tweak my configuration, hoping to get better performance out of it, but ultimately ended up with a failed service and an error in the status output. So, be cautious with your adjustments.

FS-Cache with NFS

There are two parts to this, the server side and the client side. Both require specific configuration to have the devices work nicely together. It should be noted that NFS, or Network File System, will not use FS-Cache unless the client is explicitly instructed to; the server side is completely indifferent.

Server

The great thing about NFS is that it is a very fast and efficient protocol for transferring files between machines, and when it works, it works beautifully. To set up the NFS server on an openSUSE Leap or Tumbleweed system, run the following:

sudo zypper install yast2-nfs-server

This adds the YaST module needed to easily configure the NFS server. I will show you the easy, less secure way; you will have to take it upon yourself to determine what additional security measures you need.

I will also show the ncurses view of this because I configured the server remotely. From the terminal, start YaST:

sudo yast

Select Network Services > NFS Server

Between pressing the arrow keys and Tab, you will be able to get to that section.

On the NFS Server Configuration screen, set the server to start (and open the firewall port if you are running a firewall).

Select Next, and here is where you will make the important additions.

First, you will need to add the directory you want to export; in my case, I am exporting my personal home directory /home/wolfnf. This may not be the best choice for your situation, so adjust as you see fit.

Next, you will have to set the host options. For testing, I have made this wide open on my network. Depending on your network topology, you may need to set some static reservations. I have yet to determine which machines are going to have access, so there may be more to follow here. One way to limit exposure is to restrict the allowed addresses to specific, privileged machines. Really, you can get pretty creative here, and depending on the size, shape and thickness of your tin-foil hat, this should be tightened up.
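For reference, YaST ultimately writes these settings to /etc/exports. With the wide-open testing setup described above, the resulting line looks something like this (the options shown are illustrative, not a recommendation):

/home/wolfnf    *(rw,root_squash,sync,no_subtree_check)

If you prefer, you can edit /etc/exports directly and apply the changes with sudo exportfs -ra, then swap the wildcard for specific hosts or a subnet once everything works.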

Client Side

To start out, it is best to keep things simple: test mounting the NFS share with the FS-Cache option enabled and see that it works as expected. You should also think about where you want the file system mounted. Personally, I like mounting my shares under /mnt, but you do what you think is best. Start out by adding a directory in /mnt that suits you. Since I am mounting the home directory from my server, Optimus, I am calling that mount point optimus.

sudo mkdir /mnt/optimus

Next, I am going to mount the home directory at that location. Here is the general format for mounting an NFS share with FS-Cache enabled.

sudo mount -t nfs <nfs-server>:</exported/path> </mount/point> -o fsc

My specific mounting instruction looks like this:

sudo mount -t nfs optimus:/home/ /mnt/optimus -o fsc

If all things are configured correctly, your only feedback should be a new prompt; if something goes wrong, you will be notified. My recommendation is to check your firewall rules, or even turn the firewall off temporarily to rule it out as the issue. If it still doesn’t work, check that the NFS server is running and that the export is visible to the desired machine. Go back to starting out with just a wildcard, as I illustrated previously, and see if that fixes it. Permission issues are often the hang-up with such configurations.
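To confirm the share really mounted with the fsc option, something like this should do (nfsstat comes with the NFS client utilities, findmnt with util-linux; adjust the mount point to yours):

nfsstat -m
findmnt /mnt/optimus -o TARGET,SOURCE,FSTYPE,OPTIONS

The options shown for the mount should include fsc; the definitive check is the FSC column in /proc/fs/nfsfs/volumes, covered further down.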

The last step is to add an entry to /etc/fstab so that the share is mounted automatically on demand. This makes it a lot more convenient to have the share available all the time.

optimus:/home/                             /mnt/optimus            nfs    fsc,noauto,nofail,x-systemd.automount,x-systemd.mount-timeout=10,x-systemd.idle-timeout=5min   0 0

One should not just throw in a bunch of options without understanding what each of them does, so in brief, here is what each of these means and why it is significant (a quick verification sketch follows the list):

  • fsc: Enables FS-Cache (FSC) support, which caches frequently accessed files locally to improve performance.
  • noauto: This option tells systemd to not automatically mount the NFS share at boot time. This task will be passed on to a different systemd service.
  • nofail: This option tells systemd not to fail the entire system if an NFS share cannot be mounted (e.g., due to network issues). Instead, it will simply log a warning and continue with the rest of the system startup process.
  • x-systemd.automount: This is a special systemd-specific option that enables automatic mounting of the NFS share when it’s accessed for the first time. It uses systemd’s automounter (systemd-automount) to mount the share lazily, only when needed. Note: The x- prefix indicates that these options are specific to systemd and might not be recognized by other systems or tools.
  • x-systemd.mount-timeout=10: This option sets a timeout value for mounting NFS shares using systemd’s automounter (systemd-automount). In this case, the timeout is set to 10 seconds.
  • x-systemd.idle-timeout=5min: This option sets an idle timeout value for mounted NFS shares using systemd’s automounter (systemd-automount). After 5 minutes of inactivity, the share will be unmounted.
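After saving /etc/fstab, a quick way to make systemd pick up the new entry and confirm the automount is in place looks roughly like this (the unit name mnt-optimus.automount is derived from my mount point; yours will differ):

sudo systemctl daemon-reload
sudo systemctl start mnt-optimus.automount    # activate the automount point without rebooting
ls /mnt/optimus                               # first access triggers the actual NFS mount
findmnt /mnt/optimus                          # shows the share attached at the mount point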

One additional change you need to make is to ensure that when your computer wakes from sleep, it automatically remounts your NFS share. This is easily done with:

sudo systemctl enable nfs-client.target
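You can confirm it took with a couple of standard systemctl queries:

systemctl is-enabled nfs-client.target
systemctl is-active nfs-client.target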

The Fun, Statistical Information

Metrics are fun, and the authors of FS-Cache would agree as they have provided an easy way to view what is going on with the FS-Cache service by running this in the terminal:

cat /proc/fs/fscache/stats

This will display all the various bits of information that may be of interest to you about what FS-Cache is doing under the desktop.

You can also check the status of the server with this:

cat /proc/fs/nfsfs/servers

This lists each NFS server the client is currently talking to, including its address, port and hostname.

To check the state of your NFS shares, whether you have added more or just want to see how they are doing, run this:

cat /proc/fs/nfsfs/volumes

The output lists each mounted NFS volume, and the FSC column at the end should read “yes” for shares mounted with the fsc option.

I didn’t run any tests to measure the exact time savings, but as I have been using it remotely, away from home, I can tell when something is cached versus not cached. As a result, I am quite happy with this whole project.
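If you do want numbers, a rough-and-ready comparison is to read a large file twice and time both passes (the file name below is just a placeholder; dropping the page cache between runs forces the second read to come from the on-disk FS-Cache rather than RAM):

time dd if=/mnt/optimus/some-large-file of=/dev/null bs=1M
sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
time dd if=/mnt/optimus/some-large-file of=/dev/null bs=1M

The first read comes over the network and populates the cache; the second should be noticeably faster if FS-Cache is doing its job.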

Final Thoughts

Having a home server with ample storage, using Tailscale to keep my devices connected, and taking advantage of the various Linux file system tools, I am able to create a workflow that is entirely my own, one I can trust to continue to function and be available with or without cloud services (except maybe Tailscale; we can quibble about that another time). When I have to play the part of digital nomad, I can be just as productive outside my own compound as inside it. I now have no real excuse not to have access to my data or not to be able to work on any project, wherever I am. All I need is my Framework 13 and an active Internet connection.

The only thing I wish this whole setup offered is a little more flexibility in FS-Cache: when the Internet connection is lost, I would love for the cached files to stay accessible. I understand why they do not; it truly makes a lot of sense for the purpose of data integrity. There may be a better solution for that part of it, but this is the best I have found so far.

Linux is pretty cool. All the tools available to you at little cost, except maybe time, make computers a lot of fun. Linux truly puts the personal back into personal computer!

References

https://get.opensuse.org
https://computingforgeeks.com/how-to-cache-nfs-share-data-with-fs-cache-on-linux/
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/fscachenfs
https://forums.opensuse.org/t/nfs-automount-not-working-after-sleep/176807/8




Comments

2 responses to “NFS and FS-Cache | Faster Performance with Distributed Storage”

  1. Thanks for the article. Got a quick query before I try implementing this. If the NFS share is unmounted (for example, when we reboot the system), is FS-Cache cleared? Or are the files still there but just inaccessible until we mount the share again?

    1. The cache is cleared when the share is unmounted. Basically, the cache is very temporary. It would be a lot cooler if it were more persistent, but the reason for not keeping it longer is to prevent data loss once the connection to the files has been lost.
