
Posts Tagged ‘snapper’

Highlights of YaST Development Sprint 65

October 23rd, 2018
  • Snapper: list indicates special snapshots; what is snapper anyway?
  • Bcache: configuring attributes
  • AutoYaST: whole disks; partitioned RAIDs; Xen virtual partitions; better merging
  • Booting: "warning, everything is fine!"
  • CaaSP/Kubic: proposing NTP servers according to DHCP response
  • Partitioner UI is a bit faster now

Snapper: Show Currently Mounted and Next to be Mounted Snapshot

Btrfs has some special snapshots: the snapshot currently mounted, and the snapshot that will be mounted next time (unless a snapshot is selected in grub). Now snapper informs the user about these two special snapshots when listing snapshots, with a special sign after the number:

# snapper --iso list --disable-used-space
 # | Type   | Pre # | Date                | User | Cleanup | Description           | Userdata     
---+--------+-------+---------------------+------+---------+-----------------------+--------------
0  | single |       |                     | root |         | current               |              
1+ | single |       | 2018-10-18 10:33:50 | root |         | first root filesystem |              
2  | single |       | 2018-10-18 10:43:45 | root | number  | after installation    | important=yes
3- | pre    |       | 2018-10-18 11:03:11 | root |         | ruin system           |              
4  | post   |     3 | 2018-10-18 11:03:11 | root |         | ruin system           |              
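For reference, the two special subvolumes can also be identified with the btrfs tool itself; a minimal sketch, independent of snapper (exact output varies between btrfs-progs versions):

# btrfs subvolume show /            # the subvolume currently mounted at /
# btrfs subvolume get-default /     # the default subvolume, mounted on next boot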

For more details visit http://snapper.io/2018/10/18/show-special-snapshots.html.

More Descriptive Name for Snapper Module in YaST Control Center

Previously, the module was called just "Snapper", but users who don’t know what Snapper is could not make any sense of that. We changed it to "Filesystem Snapshots".

Funny anecdote: One team member asked if Snapper really supports LVM when he read the subtitle "Manage Btrfs / LVM filesystem snapshots". Yes, it does! (It has been doing that for a long time). You don’t need Btrfs for snapshots; LVM can also do that, albeit a little differently than Btrfs.

More Bcache Improvements

As you can see in the previous blog post, we are currently working on adding support for Bcache to the YaST Partitioner. This time we allow configuring the cache mode for a new bcache device. If you are not sure what a particular cache mode means, we also provide a quite extensive help text. Besides this configuration, we also limit some operations to prevent data loss or unreliable results.
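For reference, outside the Partitioner the cache mode of an existing bcache device is exposed through sysfs; a minimal sketch, assuming the device is bcache0 (the active mode is printed in brackets):

# cat /sys/block/bcache0/bcache/cache_mode
writethrough [writeback] writearound none
# echo writethrough > /sys/block/bcache0/bcache/cache_mode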

Using whole disks in AutoYaST

It is now possible to format and mount a whole disk without creating any partition. To do so, you only need to set the <disklabel> element to none and AutoYaST will understand that you do not want to partition the drive but to use the whole disk as a filesystem.

<drive>
  <device>/dev/sdb</device>
  <disklabel>none</disklabel>
  <partitions config:type="list">
    <partition>
      <mount>/home</mount>
      <filesystem config:type="symbol">xfs</filesystem>
    </partition>
  </partitions>
</drive>

Given the definition above, AutoYaST will format the whole /dev/sdb disk and mount it at /home. But that is not all: it is even possible to use a whole disk as an LVM physical volume or as a software RAID member. Support for the first case was already present in previous AutoYaST versions, but it did not work correctly in SUSE Linux Enterprise 15 and openSUSE Leap 15.

<drive>
  <device>/dev/sdb</device>
  <disklabel>none</disklabel>
  <partitions config:type="list">
    <partition>
      <lvm_group>system</lvm_group>
    </partition>
  </partitions>
</drive>
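The software RAID member case looks very similar; a sketch using the <raid_name> element to add the whole disk to /dev/md0 (device names are just examples):

<drive>
  <device>/dev/sdc</device>
  <disklabel>none</disklabel>
  <partitions config:type="list">
    <partition>
      <raid_name>/dev/md0</raid_name>
    </partition>
  </partitions>
</drive>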

AutoYaST and partitioned software RAIDs

AutoYaST is now able to create partitioned software RAIDs, something that was not possible in pre-storage-ng times. However, in order to support such a scenario, we needed to change the way software RAIDs are described in AutoYaST profiles, although the old format is still supported. So let’s have a look at how a RAID definition looks now.

Instead of grouping all RAIDs in a single and special <drive> section, now each RAID is defined in its own section:

<drive>
  <device>/dev/md0</device>
  <raid_options>
    <raid_type>raid0</raid_type>
  </raid_options>
  <partitions config:type="list">
    <partition>
      <mount>/</mount>
      <filesystem config:type="symbol">btrfs</filesystem>
    </partition>
    <partition>
      <mount>/home</mount>
      <filesystem config:type="symbol">xfs</filesystem>
    </partition>
  </partitions>
</drive>

Of course, if you do not want the RAID to be partitioned, just set the <disklabel> element to none, as for any other device.
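For example, a sketch of an unpartitioned RAID that is formatted and mounted directly (values are illustrative):

<drive>
  <device>/dev/md0</device>
  <disklabel>none</disklabel>
  <raid_options>
    <raid_type>raid1</raid_type>
  </raid_options>
  <partitions config:type="list">
    <partition>
      <mount>/home</mount>
      <filesystem config:type="symbol">xfs</filesystem>
    </partition>
  </partitions>
</drive>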

Better Xen Virtual Partitions support

Analogous to how software RAIDs were defined in AutoYaST until now, Xen virtual partitions with similar names were grouped in the same <drive> section. That means /dev/xvda1, /dev/xvda2, etc. were defined within a <drive> section for xvda, a device that does not exist at all.

To make things clearer, we have decided to use a separate <drive> section for each partition:

<drive>
  <type config:type="symbol">CT_DISK</type>
  <device>/dev/xvdd1</device>
  <disklabel>none</disklabel> <!-- not really needed -->
  <use>all</use>
  <partitions config:type="list">
    <partition>
      <format config:type="boolean">true</format>
      <mount>/home</mount>
      <size>max</size>
    </partition>
  </partitions>
</drive>

AutoYaST Rules: Cleaning the profiles before being merged

AutoYaST rules offer the possibility to configure a system depending on system attributes by merging multiple control files during installation. Check the Rules and Classes section for further documentation.
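For orientation, a minimal rules.xml sketch that merges an additional profile on machines with more than 4 GB of RAM (file and profile names are illustrative):

<?xml version="1.0"?>
<autoinstall xmlns="http://www.suse.com/1.0/yast2ns"
             xmlns:config="http://www.suse.com/1.0/configns">
  <rules config:type="list">
    <rule>
      <memsize>
        <match>4096</match>
        <match_type>greater</match_type>
      </memsize>
      <result>
        <profile>classes/big_memory.xml</profile>
        <continue config:type="boolean">true</continue>
      </result>
    </rule>
  </rules>
</autoinstall>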

The merging process is often confusing for people, and the sections in the merged XML files must be in alphabetical order for the merge to succeed.

AutoYaST cleaned the profiles after a merge, but if the resulting profile was merged with further profiles, those profiles were not cleaned before the merge. That was confusing and error prone, so we have fixed it by cleaning them before the merge as well.

Better explanation of the requirements to boot with GPT

As our readers know, one of the main goals of yast-storage-ng was to offer a more reliable and precise diagnosis on what partitions need to be created in order to ensure that a new system being installed will be able to boot. If something doesn’t fit with such diagnosis, the installer shows a warning message.

In the case of a system installed on a GPT device that boots with legacy BIOS (as opposed to EFI), this means SLE-15 and openSUSE Leap 15.0 will warn the user if there is no partition of type BIOS Boot. But there are two problems with that.

  • The warning messages from the Partitioner and, especially, from AutoYaST don’t do a great job of explaining what is wrong.
  • Some users have reported they have GPT systems booting fine in legacy mode without a BIOS Boot partition and, thus, our diagnosis in such cases may be wrong.

We even had a comment in our source code reinforcing the second point!

So we tried to fix our wrong diagnosis… just to end up realizing it was in fact right. After carefully evaluating all the possible setups, checking the different specifications, the Grub2 documentation and even checking the Grub2 source code, we found that layouts without a BIOS Boot partition could get broken (resulting in a non-bootable system) by some file-system level operations. So only the configurations including a BIOS Boot partition can be considered to be 100% safe, both in the short term and against future changes in the system.

We simply cannot allow our users to fall into traps without, at least, a warning message. So we kept the behavior as it was and focused on improving the messages. After all, advanced users who know the risks can ignore such warnings. This is how the new warning looks in the Partitioner of the upcoming SLE-15-SP1 (and, thus, in openSUSE Leap 15.1).

And this is what AutoYaST will report if the profile doesn’t specify a BIOS Boot partition and it’s not possible to add one to the layout described by such profile.
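For readers who prefer to add the partition by hand, a minimal sketch with parted (device and size are illustrative; GRUB2 only needs a small raw area):

# parted /dev/sda mklabel gpt                # on an empty disk
# parted /dev/sda mkpart primary 1MiB 9MiB
# parted /dev/sda set 1 bios_grub on         # mark it as a BIOS Boot partition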

CaaSP / Kubic: Propose NTP servers according to DHCP response

The all-in-one dialog of the CaaSP installer asks for NTP servers. Up to now it searched for NTP servers using SLP only; otherwise, only manual configuration was possible.

Now the CaaSP installer also parses the DHCP response and fetches NTP servers if any were provided. NTP servers obtained from DHCP are preferred over those discovered via SLP.
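On the server side, NTP servers typically reach the client via DHCP option 42 (ntp-servers); a sketch of the corresponding ISC dhcpd configuration line (addresses are examples):

option ntp-servers 192.168.1.10, 192.168.1.11;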

Partitioner UI is a bit faster now

We noticed that clicking around the Partitioner UI felt slow. So we used the built-in profiler (Y2PROFILER=1) as well as an external one (rbspy) to pinpoint the places that needed optimization (mostly caching).
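As a rough sketch of how the two profilers can be invoked (the module name and the process match are assumptions, not from the original post):

# Y2PROFILER=1 yast2 storage                 # built-in Ruby profiler
# rbspy record --pid "$(pgrep -f y2start)"   # attach the external sampling profiler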

Snapper for Everyone

October 16th, 2012

With the release of snapper 0.1.0, non-root users are also able to manage snapshots. On the technical side this is achieved by splitting snapper into a client and a server that communicate via D-Bus. As a user you should not notice any difference.

So how can you make use of it? Suppose the subvolume /home/tux is already configured for snapper and you want to allow the user tux to manage the snapshots for her home directory. This is done in two easy steps:

  1. Edit /etc/snapper/configs/home-tux and add ALLOW_USERS="tux". Currently the server snapperd does not reload the configuration, so if it is running, either kill it or wait for it to terminate by itself.
  2. Give the user permissions to read and access the .snapshots directory, ‘chmod a+rx /home/tux/.snapshots’.

For details consult the snapper man-page.
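The same two steps as shell commands; a minimal sketch using the configuration from this example:

  # echo 'ALLOW_USERS="tux"' >> /etc/snapper/configs/home-tux
  # killall snapperd                  # or wait for it to terminate by itself
  # chmod a+rx /home/tux/.snapshots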

Now tux can play with snapper:

  tux> alias snapper="snapper -c home-tux"

  tux> snapper create --description test

  tux> snapper list
  Type   | # | Pre # | Date                             | User | Cleanup  | Description | Userdata
  -------+---+-------+----------------------------------+------+----------+-------------+---------
  single | 0 |       |                                  | root |          | current     |         
  single | 1 |       | Tue 16 Oct 2012 12:15:01 PM CEST | root | timeline | timeline    |         
  single | 2 |       | Tue 16 Oct 2012 12:21:38 PM CEST | tux  |          | test        |

Snapper packages are available for various distributions in the filesystems:snapper project.

So long and see you at the openSUSE Conference 2012 in Prague.

Snapper and LVM thin-provisioned Snapshots

July 25th, 2012

SUSE’s Hackweek 8 allowed me to implement support for LVM thin-provisioned snapshots in snapper. Since thin-provisioned snapshots themselves are new, I will briefly show their usage.

Unfortunately openSUSE 12.2 RC1 does not include LVM tools with thin-provisioning, so you have to compile them on your own. First install the thin-provisioning-tools. Then install LVM with thin-provisioning enabled (configure option --with-thin=internal).
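The LVM build step follows the usual autoconf flow; a sketch, assuming the LVM sources are already unpacked and you are inside the source directory:

  # ./configure --with-thin=internal
  # make
  # make install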

To set up LVM, we first have to create a volume group, either using the LVM tools or YaST. I assume it is named test. Then we create a storage pool with 3 GB of space.

  # modprobe dm-thin-pool
  # lvcreate --thin test/pool --size 3G

Now we can create a thin-provisioned logical volume named thin with a size of 5GB. The size can be larger than the pool since data is only allocated from the pool when needed.

  # lvcreate --thin test/pool --virtualsize 5G --name thin

  # mkfs.ext4 /dev/test/thin
  # mkdir /thin
  # mount /dev/test/thin /thin

Finally we can create a snapshot from the logical volume.

  # lvcreate --snapshot --name thin-snap1 /dev/test/thin

  # mkdir /thin-snapshot
  # mount /dev/test/thin-snap1 /thin-snapshot

Space for the snapshot is also allocated from the pool when needed. The command lvs gives an overview of the allocated space.

  # lvs
  LV         VG   Attr     LSize Pool Origin Data%  Move Log Copy%  Convert
  pool       test twi-a-tz 3.00g               4.24
  thin       test Vwi-aotz 5.00g pool          2.54
  thin-snap1 test Vwi-a-tz 5.00g pool thin     2.54

After installing snapper version 0.0.12 or later we can create a config for the logical volume thin.

  # snapper -c thin create-config --fstype="lvm(ext4)" /thin

As a simple test we can create a new file and see that snapper detects its creation.

  # snapper -c thin create --command "touch /thin/lenny"

  # snapper -c thin list
  Type   | # | Pre # | Date                          | Cleanup | Description | Userdata
  -------+---+-------+-------------------------------+---------+-------------+---------
  single | 0 |       |                               |         | current     |
  pre    | 1 |       | Tue 24 Jul 2012 15:49:51 CEST |         |             |
  post   | 2 | 1     | Tue 24 Jul 2012 15:49:51 CEST |         |             |

  # snapper -c thin status 1..2
  +... /thin/lenny

So now you can use snapper even if you don’t trust btrfs. Feedback is welcome.

Introducing snapper: A tool for managing btrfs snapshots

April 1st, 2011

Today we want to present the current development of snapper, a tool for managing btrfs snapshots.

For years we had requests to provide rollbacks for YaST and zypper, but things never got far due to various technical problems. With the rise of btrfs snapshots we finally saw the possibility of a usable solution. The basic idea is to create a snapshot before and after running YaST or zypper, compare the two snapshots and finally provide a tool to revert the differences between the two snapshots. That was the birth of snapper. Soon the idea was extended to create hourly snapshots as a backup against general user mistakes.
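The same pre/post pattern can also be driven manually around any command; a sketch using the --command option (demonstrated again in a later post on this page), which wraps the command in a pre/post pair of snapshots:

# snapper create --command "zypper up" --description "zypper up"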

The tool is now in a state where you can play with it. On the other hand there is still room and time for modifications and new features.

Overview

We provide a command line tool and a YaST UI module. Here is a brief tour:

First we manually create a snapshot:

# snapper create --description "initial"

Running YaST automatically creates two snapshots:

# yast2 users

Now we can list our snapshots:

# snapper list
Type   | # | Pre # | Date                     | Cleanup  | Description
-------+---+-------+--------------------------+----------+-------------
single | 0 |       |                          |          | current
single | 1 |       | Wed Mar 30 14:52:17 2011 |          | initial
pre    | 2 |       | Wed Mar 30 14:57:10 2011 | number   | yast users
post   | 3 | 2     | Wed Mar 30 14:57:35 2011 | number   |
single | 4 |       | Wed Mar 30 15:00:01 2011 | timeline | timeline

Snapshot #0 always refers to the current system. Snapshot #2 and #3 were created by YaST. Snapshot #4 was created by an hourly cronjob.

Getting the list of modified files between two snapshots is easy:

# snapper diff 2 3
c... /etc/group
c... /etc/group.YaST2save
c... /etc/passwd
c... /etc/passwd.YaST2save
c... /etc/shadow
c... /etc/shadow.YaST2save
c... /etc/sysconfig/displaymanager
c... /var/tmp/kdecache-linux/icon-cache.kcache
c... /var/tmp/kdecache-linux/plasma_theme_openSUSEdefault.kcache

You can also compare a single file between two snapshots:

# snapper diff --file /etc/passwd 2 3
--- /snapshots/2/snapshot/etc/passwd    2011-03-30 14:41:45.943000001 +0200
+++ /snapshots/3/snapshot/etc/passwd    2011-03-30 14:57:33.916000003 +0200
@@ -22,3 +22,4 @@
 uucp:x:10:14:Unix-to-Unix CoPy system:/etc/uucp:/bin/bash
 wwwrun:x:30:8:WWW daemon apache:/var/lib/wwwrun:/bin/false
 linux:x:1000:100:linux:/home/linux:/bin/bash
+tux:x:1001:100:tux:/home/tux:/bin/bash

The main feature of course is to revert the changes between two snapshots:

# snapper rollback 2 3

Finally yast2-snapper provides a YaST UI for snapper.

Testing

Playing with snapper should only be done on test machines. Both btrfs and snapper are not finished and include known bugs. Here is a step-by-step manual for installing and configuring snapper on openSUSE 11.4.

Feedback

We would like to get feedback, especially about general problems and further ideas. There are also tasks everybody can work on, e.g. the snapper wiki page or a man-page for snapper.

For the time being there is no dedicated mailing-list so just use opensuse-factory@opensuse.org.