Comments on: Getting further with Btrfs in YaST
Blogs and Ramblings of the openSUSE Members
Fri, 06 Mar 2020 17:50:09 +0000

By: ensin Mon, 24 Jun 2019 17:24:53 +0000 Thank you for sharing your btrfs insights.

It’s really unfortunate that btrfs “raid” seems to behave so differently from what one would expect of a RAID device.

With the udev rules and policy settings in mdraid, there is no need for “managing a degraded array”. If enabled, detaching and re-attaching a device is a hot-plug non-issue. So maybe the btrfs case could also be solved with just some udev rules hooking on device events.
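A minimal sketch of what such a rule could look like (the rule file name and the site-local check script are hypothetical; the stock btrfs udev rules only mark devices as ready, they do not repair arrays):

```
# /etc/udev/rules.d/99-btrfs-hotplug.rules (hypothetical)
# On hot-add of a device carrying a btrfs signature, re-scan so a
# re-attached member can rejoin its filesystem, then hand off to a
# site-local script that decides whether a scrub/rebalance is needed.
ACTION=="add", ENV{ID_FS_TYPE}=="btrfs", RUN+="/usr/sbin/btrfs device scan $devnode", RUN+="/usr/local/sbin/btrfs-hotplug-check.sh $devnode"
```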

By: Tony Su Mon, 24 Jun 2019 16:59:48 +0000 I would suggest exploring your scenario in a virtual machine (any virtualization, including Virtualbox, VMware, KVM, etc) to become familiar with what happens and what to do.

Any questions can be posted to the Technical Help Forums (Installation)

By: Tony Su Mon, 24 Jun 2019 16:56:08 +0000 I’ve collected and posted links to better sources of BTRFS info (IMO).
Although it is quite a bit of info from different sources to read, IMO it covers authoritatively what I feel is the most important info and the most common scenarios.

From one of the links in my above Wiki,
I haven’t yet found anything new, better or different for creating, modifying, adding or replacing a disk, or otherwise managing a degraded array, than what is in the BTRFS Wiki.

Yes, I agree there is a future for YaST module(s) that can simplify BTRFS RAID management beyond initial setup.

By: ensin Mon, 24 Jun 2019 01:20:37 +0000 I mean, will the filesystem continue to work (without manual intervention) when a drive fails or is removed during operation or while the machine was turned off, so that the redundancy is reduced, but operation is still possible?

Will btrfs start duplicating files on a single remaining disk in a strange attempt to “maintain data redundancy”?

And will the filesystem automatically re-sync a working spinning hard drive that has been removed or turned off (say, a hard drive paired with an SSD in a laptop) after the hard drive has been plugged in or turned on again?
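For what it’s worth, btrfs does not resynchronize a returning device on its own; the usual manual step is a scrub, which reads all copies and rewrites stale or damaged ones from the good mirror. A sketch, assuming the filesystem is mounted at /:

```shell
# Re-sync a reconnected member by scrubbing the whole filesystem (needs root)
btrfs scrub start /
# Check progress and error counts
btrfs scrub status /
```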

By: ensin Sun, 23 Jun 2019 10:45:10 +0000 Have you been able to solve the fundamental problem of btrfs “raid”?

That is, instead of providing a robust, redundant array of individual disks, it provides a “broken multi-disk setup” whenever a disk fails or gets replaced physically, which requires manual fixing.

Have you been able to automate this with auxiliary scripts?
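For reference, the manual fixing in that situation typically looks something like the following (a sketch; the device names, the mount point and devid 2 are hypothetical):

```shell
# Mount the surviving members read-write in degraded mode
mount -o degraded /dev/sda2 /mnt
# Replace the missing member (devid 2) with a fresh disk
btrfs replace start 2 /dev/sdf /mnt
# Watch the rebuild
btrfs replace status /mnt
```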

By: Tony Su Fri, 21 Jun 2019 03:15:56 +0000 Well, I can report a first successful experiment with what I’d consider one common scenario, although it’s not supported by YaST.

Objective –
New Install Tumbleweed on BTRFS RAID 10 using VMWare Workstation 15

Obstacle 1
VMware does not support installing a new machine on more than one block device, so the most direct scenario is not possible. But it means that another related scenario can now be explored: converting a default TW install from a BTRFS root fs on a single disk to a RAID10 on multiple disks after install.

Step 1: Install the latest TW, with the intention of using the YaST Partitioner as much as possible.
Then add 4 empty virtual disks after install.

Obstacle 2
Found that the YaST Partitioner cannot add block devices to the mounted root file system device.
It seems YaST can create a new PROFILE (RAID level and device group) with anything that’s not root; without further investigation I don’t know if that’s because root is mounted or because it’s root specifically. Since I cannot expand/add to the existing BTRFS fs using YaST, what follows can be done only with the BTRFS tools.
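The step that actually attaches the new disks is not shown here, but with the BTRFS tools it would look roughly like this (assuming the four empty virtual disks show up as /dev/sdb through /dev/sde, matching the listing that follows):

```shell
# Add the four empty disks to the mounted root filesystem (needs root)
btrfs device add /dev/sdb /dev/sdc /dev/sdd /dev/sde /
```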

Step 2 (or Step 1 using BTRFS tools)
Display the BTRFS filesystems on the current system; note that “filesystem” can be abbreviated “fi”.

btrfs fi show
Label: none uuid: 9caca3c8-8b24-4754-bc44-1d4dcdf0ec4f
Total devices 5 FS bytes used 3.93GiB
devid 1 size 18.62GiB used 5.02GiB path /dev/sda2
devid 2 size 20.00GiB used 0.00B path /dev/sdb
devid 3 size 20.00GiB used 0.00B path /dev/sdc
devid 4 size 20.00GiB used 0.00B path /dev/sdd
devid 5 size 20.00GiB used 0.00B path /dev/sde

Note that devid 1 has something on it from my early experimentation creating a btrfs filesystem without RAID, which is why I will have to use “-f” to force striping in the following command. I am skipping my attempt without forcing, which threw a surprising “No space left on device” error (which of course isn’t true).

Step 3: Execute the conversion to RAID10

btrfs balance start -dconvert=raid10 -mconvert=raid10 /
Done, had to relocate 9 out of 9 chunks

Step 4
Success! The following command verifies the result:

btrfs fi df .
Data, RAID10: total=8.00GiB, used=3.78GiB
System, RAID10: total=64.00MiB, used=16.00KiB
Metadata, RAID10: total=5.00GiB, used=159.41MiB
GlobalReserve, single: total=16.00MiB, used=16.00KiB

By: Yast Team Thu, 20 Jun 2019 10:28:04 +0000 Even if the article sounds a bit enthusiastic about the Btrfs capabilities, it is not YaST’s (or the YaST Team’s) intention to endorse any concrete technology. The goal of the Partitioner is to offer the same level of support for MD RAID, LVM, Bcache and Btrfs, and to make it easy to use each technology on its own or to combine all of them into a single setup.

In other words, despite the goal of the Btrfs developers to include LVM and/or RAID capabilities at the file system level so you can live without those other technologies, Btrfs is (still) not a full replacement, and there may be situations in which the use of the more mature LVM/RAID is preferred.

It depends on the use case, but as you mentioned, it is probably worth a try and some experiments. We encourage you to have fun with it!

If you want to go beyond the Btrfs capabilities offered by YaST (which are admittedly a very small subset of all that Btrfs can do), the official Btrfs wiki would be a good place to start.

By: Yast Team Thu, 20 Jun 2019 09:23:11 +0000 No, we don’t consider RAID 5/6 in Btrfs to be “enterprise ready”, so to speak. If you look at the screenshot illustrating the creation of a new Btrfs, you will notice YaST only offers “single”, “dup”, “RAID0”, “RAID1” and “RAID10” as possible RAID levels.

RAID 5 and 6 are technically implemented in some of the tools offered by the distribution and they can be used. But you will have to use the native Btrfs tools to configure them, instead of YaST.

Those Btrfs RAID levels are not officially supported in SUSE Linux Enterprise, which means that if you use them you are basically on your own. Of course, they are also available in openSUSE Tumbleweed and Leap, but with the same level of stability (that is, use them at your own risk).

By: Raider of the lost Sector Thu, 20 Jun 2019 08:05:25 +0000 Does that mean btrfs RAID 5/6 is now finally stable, or can you still expect it to eat your data?

By: Tony Su Wed, 19 Jun 2019 16:41:33 +0000 Very Cool.
Aside from quibbling about paired/unimpaired and dispensable/indispensable in the article, if BTRFS wants to provide full RAID replacement capability, it should support RAID 01 as well.

Although I have religiously avoided software RAID in favor of hardware, this sounds interesting enough to experiment with. I will be looking for full documentation not just on setup but on management, breaking sets and recovery. And possibly guidance on using dissimilar block devices and whether there are limitations, particularly in the number of spindles.