
Local caching for CIFS network file system – followup

August 5th, 2010

Here’s a follow-up to my previous post on Hackweek V: Local caching for CIFS network file system

Since the previous post, I have worked on improving the patches that add local caching, fixed a few bugs, addressed review comments from the community, and re-posted the patches. I also gave a talk about it at the SUSE Labs Conference 2010, which took place in Prague. The slides can be found here: FS-Cache aware CIFS.

This patchset was merged into the upstream Linux kernel yesterday (Yay!), which means the feature will be available starting from kernel version 2.6.35-rc1.
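If you want to try it yourself, the rough setup looks like this (a sketch, assuming a kernel built with CONFIG_CIFS_FSCACHE plus CONFIG_FSCACHE and CONFIG_CACHEFILES for the backend; the server, share and mount point names are placeholders):

    # cachefilesd provides the on-disk cache backend; /etc/cachefilesd.conf
    # points it at the cache directory (default: dir /var/cache/fscache)
    cachefilesd

    # the fsc mount option enables local caching for this CIFS mount
    mount -t cifs //server/share /mnt/cifs -o fsc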

The primary aim of caching data on the client side is to reduce network calls to the CIFS server whenever possible, thereby reducing the server load as well as the network load. This indirectly improves the performance and scalability of the CIFS server and improves the clients-per-server ratio. This feature could be useful in a number of scenarios:

– Render farms in the entertainment industry – used to distribute textures to individual rendering units
– Read-only multimedia workloads
– Accelerating distributed web servers – web server cluster nodes can serve content from the cache
– /usr distributed by a network file system – to avoid spamming servers when there is a power outage
– A caching server with SSDs re-exporting netfs data – where a persistent cache that remains across reboots is useful

However, be warned that local caching may not be suitable for all workloads, and a few workloads could suffer a slight performance hit (e.g. read-once workloads). So you need to consider your workload/scenario carefully before you start using local disk caching.

When I re-posted this patchset, I was asked whether I had done any benchmarking and could share the performance numbers. Here are the results from a 100 Mb/s network:

Environment
-----------

I’m using my T60p laptop as the CIFS server (running Samba) and one of my test machines as the CIFS client, connected over Ethernet with a reported link speed of 1000 Mb/s. ethtool was used to throttle the speed down to 100 Mb/s. The TCP bandwidth as seen by a pair of netcats between the client and the server is about 89.555 Mb/s.
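For the record, the throttling and the bandwidth check were done roughly like this (eth0, the port and the host name are placeholders, and your netcat’s flags may differ):

    # pin the link at 100 Mb/s
    ethtool -s eth0 speed 100 duplex full autoneg off

    # raw TCP bandwidth between the two machines; on the server:
    nc -l -p 5001 > /dev/null
    # ...and on the client (dd reports the achieved throughput):
    dd if=/dev/zero bs=1M count=200 | nc server 5001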

Client has a 2.8 GHz Pentium D CPU with 2GB RAM
Server has a 2.33GHz Core2 CPU (T7600) with 2GB RAM

Test
----
The benchmark involves pulling a 200 MB file over CIFS to the client by cat-ing it to /dev/zero under `time`. The wall-clock time reported was recorded.
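Concretely, something like the following (the file name and mount point are placeholders):

    time cat /mnt/cifs/testfile-200mb > /dev/zero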

First, the test was run on the server twice and the second result was recorded (noted as Server below, i.e. the time taken by the server when the file is already in RAM).
Second, the client was rebooted and the test was run with caching disabled (noted as None below).
Next, the client was rebooted, the cache contents (if any) were erased with mkfs.ext3, and the test was run again with cachefilesd running (noted as COLD).
Then the client was rebooted and the test was run with caching enabled, this time with a populated disk cache (noted as HOT).
Finally, the test was run again without unmounting or rebooting, so that the page cache remained valid (noted as PGCACHE).
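The reset between the cache runs looked roughly like this (a sketch; it assumes the cache directory lives on its own partition, with /dev/sdb1 as a placeholder):

    # wipe any previous cache contents by recreating the filesystem
    umount /var/cache/fscache
    mkfs.ext3 /dev/sdb1
    mount /dev/sdb1 /var/cache/fscache

    # restart the cache daemon, then remount the share with caching on
    cachefilesd
    mount -t cifs //server/share /mnt/cifs -o fsc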

The benchmark was repeated twice:

Cache (state)   Run #1     Run #2
=============   ========   ========
Server           0.104 s    0.107 s
None            26.042 s   26.576 s
COLD            26.703 s   26.787 s
HOT              5.115 s    5.147 s
PGCACHE          0.091 s    0.092 s
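As a rough sanity check of these numbers: 200 MB in ~26 s works out to about 7.7 MB/s, or roughly 61 Mb/s on the wire, which sits plausibly below the ~89.5 Mb/s the netcat pair measured; the ~5.1 s HOT runs correspond to about 39 MB/s, i.e. the local disk rather than the network becomes the limit.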

As can be seen, when the disk cache is hot the read is roughly 5x faster than reading over the network. Note that the scalability improvement due to reduced network traffic cannot be seen here, as the test involves only a single client and server. Read performance with a larger number of clients would be more interesting, since that is where the cache can positively impact scalability.

Bugs will not get fixed by themselves

August 5th, 2010

I received an email from a user who switched from openSUSE to Ubuntu because his wireless network card did not work. It had worked with openSUSE 11.2 initially, but failed after an online update. He hoped that openSUSE 11.3 would work, tested it, it failed – and he gave up and wrote a frustrated email.

I was frustrated reading this, since we should have been able to help this user if he had contacted us in time.

Such a regression is bad, but if nobody reports it, it will not get fixed at all. The openSUSE project takes fixes from upstream projects and also adds its own fixes and sends them upstream. Those fixes work on the developer’s system – or the systems of the upstream developers – but nobody has access to every piece of hardware a driver supports, so regressions can happen. In the past I have seen that regressions reported with a pointer to the exact version that failed are often fixed quite quickly.
