
Archive for the ‘Factory’ Category

Highlights of YaST Development Sprint 80

July 19th, 2019

After our (really long) sprint report, which featured information from 3 different sprints together, we are back to our regular schedule of publishing a blog post after each sprint.

In a nutshell, these are the highlights from the previous one:

  • A new version of yast2-network will be submitted to Tumbleweed shortly, including quite some internal changes as part of the refactoring effort.
  • The support for offline installation of SUSE Linux Enterprise products has been improved to handle module and extension dependencies automatically, among other goodies.
  • The partitioning proposal has been adapted to support SUSE Manager's special needs.
  • The guided partitioning now ignores the adjust_by_ram parameter on IBM z Systems, where it is basically useless.
  • Some (open)SUSE 15 features have been backported to the 5th Service Pack of SUSE Linux Enterprise 12.

And last but not least, we have some words to say about the feedback we get from you (that we really appreciate!) and the future of YaST.

Shipping Another Round of Network Refactored Code

One of the problems we wanted to avoid while refactoring yast2-network was diverging too much from the version shipped in Tumbleweed. As we mentioned in our last report, we have done quite some work, especially when it comes to the internals of the user interface, and the codebases are starting to look quite different.

For that reason, we decided to merge the new code into the master branch so it can be included in Tumbleweed shortly. We are talking about a pull request which contains more than 340 commits, adds 9000 lines, removes 5000 and modifies 197 files. So, what could possibly go wrong? To be honest, we did quite some testing, but there is still a chance that you will find some issues. In that case, please open a bug report and we will happily fix it.

About the technical details, we put most of our efforts into drawing a clear line between the user interface and the business logic. In parallel, we are still working on the new data model which enables us to read/write the network configuration using different backends, although at this time we are only supporting sysconfig. We have made some progress during this sprint (you can check the pull request if you are interested), but we will not merge this code into our master branch yet. You can find more details in the updated documentation.

After reading this status update about the yast2-network refactoring, you might be wondering about our plans. Basically, we would like to finish the support for reading and writing the network interfaces configuration and start using it throughout the code, killing some rather old and buggy modules like LanItems.

Offline Media Support

For SUSE Linux Enterprise 15 products, there are two installation media:

  • The installer media, which contains only basic packages for a minimal system.
  • The so-called Packages DVD, which contains several modules and extensions, like the Development Tools Module or the Server Application Module.

The main use case of the Packages DVD is to allow installing the system without an Internet connection. However, YaST support for such a scenario is pretty simplistic. For instance, dependencies between modules are not evaluated and the user has to select them manually. This approach is error-prone and user-unfriendly.

Fortunately, these shortcomings will be addressed in SLE 15 SP2. YaST is now able to resolve module and extension dependencies, so the user does not have to worry about them anymore. For instance, if you select the Web and Scripting Module, the Basesystem Module and Server Application Module will be automatically selected.

At first sight, resolving dependencies across the 25 repositories included in the Packages DVD might look time-consuming, but the current implementation takes advantage of libsolv to reduce it to about 2 seconds on our testing machine.

Improved Extensions and Modules Selection

Additionally, other improvements have been included, like displaying additional information about each module/extension or filtering out base products from the list.

SUSE Manager and the Partitioning Guided Setup

As you know, YaST is a very flexible Linux installer. The creators of each (open)SUSE product or role can define the particular sequence of steps presented to the user and configure many of the options on each of those steps. But that was still not enough for the maintainers of SUSE Manager, the purpose-specific distribution to manage software-defined infrastructures.

SUSE Manager follows a pretty specific approach regarding the management of the storage devices. So we had to add some extra configuration options to our partitioning guided proposal to fulfill their needs. We felt the topic deserved its own separate blog post. So follow the link and enjoy the love story of SUSE Manager and the Partitioning Guided Setup.

Taking care of IBM z Systems

For many reasons, the IBM z Systems architecture (s390) is one of those special cases where YaST flexibility is crucial. One of the many options that YaST offers to product and role creators is called adjust_by_ram, which instructs YaST to make sure that a given volume size is, at least, as big as the RAM of the system. Obviously, this option is especially useful for the swap volume when we want to be able to suspend the system to disk.

However, on the 64-bit s390 architecture, resuming from a swap device is not supported, which renders this option useless. For that reason, YaST will take this situation into account, and it will not enlarge the swap volume by default for s390 machines, even if adjust_by_ram was set in the control file. Of course, the user will be able to enlarge the volume by activating the corresponding checkbox in the Guided Setup options.

Guided Partitioning Filesystem Options
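For the curious, adjust_by_ram is set per volume in the product control file. The following is only a hedged sketch of how a swap volume entry could carry it; adjust_by_ram itself comes from this post, while the surrounding element names are assumptions about the usual volume definition (the XML is written via a shell heredoc to keep the example self-contained).

# Hypothetical control file fragment for the swap volume; elements other than
# adjust_by_ram are assumptions, not the exact (open)SUSE control file contents.
cat > swap-volume-sketch.xml <<'EOF'
<volume>
  <mount_point>swap</mount_point>
  <!-- make the proposed size at least as big as the RAM of the system -->
  <adjust_by_ram config:type="boolean">true</adjust_by_ram>
</volume>
EOF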

Backporting Stuff to SLE 12 SP5

Although most of the development effort is focused on SLE 15 SP2 and openSUSE Leap 15.2, we do not forget about SLE 12. SUSE is working on the 5th Service Pack of this release and the YaST team is no exception.

During this sprint, SLE 12 got two interesting features that were backported from SLE 15. The first one is the ability to specify kernel mitigations settings, as you already can do in Tumbleweed. And the second one is the support for PMEM devices, that you can see in the screenshot below.

YaST2 Partitioner PMEM Support

Recently, there was an interesting discussion about the openSUSE installer on Reddit, continued as a thread on our yast-devel mailing list.

This is just a quick heads-up to confirm that we hear you. Some of the issues discussed there were already known to be sore points, some others may need more clarification on why and how things are as they are.

We are taking your feedback and criticism seriously. But we also have to consider all the requirements, and we try to avoid making unrealistic promises. So please bear with us until we have had some further discussions about what we can do and in what time frame.

Conclusions

In addition to what we have described in this report, we have been working on fixing bugs and making other small improvements here and there. And sprint 81 will not be different: part of the team will keep working on yast2-network while the rest squashes as many bugs as possible.

SUSE Manager and the Partitioning Guided Setup

July 16th, 2019

Apart from our usual development sprint reports, we (the YaST Team) sometimes publish separate blog posts to summarize a new feature or to present an idea we are working on. Lately, several of those posts have been focused on new features of the YaST Partitioner, like the support for Bcache or the new Btrfs capabilities. But today it’s the turn of another part of yast2-storage-ng: the partitioning proposal, also known as the Guided Setup.

As you may know, YaST is a universal installer used to configure all the (open)SUSE and derivative products. Moreover, the installer options and steps can be refined even further by each of the system roles available for each product. The goal of this blog post is to present some ideas aimed at adding new possibilities in the area of the storage guided proposal for those who configure the installer for a certain product or system role. With that we hope to ease the life of the creators of SUSE Manager, SUSE's purpose-specific distribution to manage software-defined infrastructures.

Although many of the presented capabilities will land soon in openSUSE Tumbleweed they will not be used by default. Not only because they are not targeted to the openSUSE use-case, but also because so far this is just a prototype. That means all texts are subject to change and most screens will get some adaptations before being used in a final product… or maybe they will even be completely revamped.

One Guided Proposal to Rule them All

Although the Expert Partitioner can be used to tweak the storage configuration of any SUSE or openSUSE distribution during installation, the installer always tries to offer a reasonable proposal about it. Moreover, the “Guided Setup” button in the “Suggested Partitioning” screen leads to a wizard that can be used to configure some aspects of such a proposal, as shown in the following diagram (some actions have been blurred just to emphasize the fact that the concrete list of actions will change after each execution of the wizard).

Default Guided Setup wizard

The exact behavior of the Guided Setup is different in every product and, potentially, in every system role. Many things can be adjusted by the creators of the product or the role, like the partitions and LVM volumes to be proposed, the options to be offered in the wizard, the default value for every option and much more. But all those possibilities were still not enough in the case of SUSE Manager and its unique approach to organize the storage devices.

The Strange Case of SUSE Manager

First of all, the SUSE Manager documentation suggests allocating each of several data directories (/var/spacewalk, /var/lib/pgsql, /var/cache and /srv) on its own dedicated disk when installing in a production environment. For such a setup to make sense, it's absolutely crucial to choose the right disk for every data directory, taking into account both the size and the speed of the disks.

The documentation also suggests using LVM in production environments. In order to achieve a clear separation of disks when using LVM, the recommended approach is to set up a separate LVM volume group for each relevant data directory instead of allocating all the logical volumes in the usual single shared "system" group.

So, although it may look like overkill when installing SUSE Manager just for evaluation purposes, the preferred setup for a final deployment of the product spreads over up to five disks – one containing an LVM volume group with the usual logical volumes of any Linux system (like the root file system and the swap space) and each of the other disks containing an additional LVM group dedicated to a particular data directory.

Last but not least, the SUSE Manager guided setup should never offer the possibility of keeping the preexisting partitions in any of the disks. So the usual questions “Choose what to do with existing Linux/Windows/other partitions” (see the image above) should not even be displayed to the user. The answer is always “remove even if not needed”. Period. 😉

Breaking Down the Problem into Smaller Pieces

We didn't want to implement a completely different guided proposal for SUSE Manager. Instead, we wanted to merge the main ideas behind its approach into the current configurable system, so other products and roles could use them. We identified three different features that we turned into the corresponding optional configuration settings at the disposal of anyone defining a new system role. All the new settings are independent of each other and can be combined in any way to provide a fully customized user experience.

First Piece: Explicit Selection of Disks per Volume

First of all, it was necessary to support letting the user explicitly choose a disk for every partition or LVM volume, unlike the default guided setup, which automatically finds the best disk to allocate every partition given the requirements and a set of "candidate disks". To enable that, the product or role can now choose between two values for the new allocate_volume_mode setting. A value of auto (which is the default, to keep backwards compatibility) will result in the already known wizard with up to four steps:

  • Select the candidate disks
  • Decide what to do with existing partitions
  • Configure the schema (LVM and/or encryption)
  • Configure each file system

As always, the steps in which there is nothing for the user to decide are skipped so the wizard is usually shorter than four steps.

No surprises so far. But allocate_volume_mode can also be set to device, which will result in the alternative wizard displayed in the following image.

New possible Guided Setup wizard

As you can see, there is no initial step to select the set of disks to be used by the system to automatically allocate the needed partitions. Instead, the following screen allows explicitly assigning a disk to every partition or LVM volume group.

New step to assign volumes and partitions to disks

Second Piece: Enforcing a Behavior about Previous Partitions

No matter which allocate mode is configured (auto or device), there is always one step in which the user is asked what to do with the preexisting partitions in the affected disks. So far, the product defined the default answer for those questions, but the user always had the opportunity to change that default option.

Now, the creator of the product or the system role can disable the setting called delete_resize_configurable (which is enabled by default) in order to prevent the user from modifying the default behavior. The wizard will then include no questions about what to do with existing Windows/Linux/other partitions. In most cases, that will imply a whole step of the wizard being simply skipped.

Third Piece: Separate Volume Groups for some Directories

The most important setting configured by every system role is the list of so-called volumes. That list includes all the file systems (both mandatory and optional ones) that the guided setup should create as separate partitions or LVM logical volumes. Now it's possible to specify that a volume should be created in its very own separate LVM volume group using the new attribute separate_vg_name. If any of the volumes defined for the current product and role contains such an attribute, the screen for selecting the schema will contain an extra checkbox below the usual LVM-related one.

New checkbox for directories into their own separate LVM

Putting the Pieces Together for SUSE Manager

With all the above, we expanded the toolbox for anyone wanting to configure the (open)SUSE installation experience. This means we can now fulfill the requirements of the SUSE Manager maintainers by just adding separate_vg_name to some volumes, setting delete_resize_configurable to false and adjusting the allocate_volume_mode (a hedged sketch of those settings is shown at the end of this section). With all that, the new SUSE Manager workflow for the guided setup will look like this.

First of all, the user will be able to specify the creation of separate LVM volume groups as suggested in the product documentation.

SUSE Manager setup - first screen

Then a second screen allows selecting which separate file systems should be created and fine-tuning the options for every one of them, if any.

SUSE Manager setup - second screen

And finally a last step to assign the correct disk for every partition or separate volume group, depending on the selections on previous screens. With this step the user can optimize the performance by distributing the disks as explained in the SUSE Manager documentation, allocating the areas that need intensive processing to the faster disks and the greedy directories to the bigger devices.

SUSE Manager setup - third screen

As usual, the list of actions will reflect the selections of the user, creating as many LVM structures as requested.

SUSE Manager setup - result
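For the curious, this is roughly how those settings could look in the partitioning section of the product or role definition. Take it as a hedged sketch: only allocate_volume_mode, delete_resize_configurable and separate_vg_name come from this post, while the surrounding structure and the remaining element names are assumptions about the usual control file layout.

# Hypothetical control file fragment, written via a shell heredoc just to keep
# the example self-contained; element names beyond the three new settings are assumptions.
cat > partitioning-sketch.xml <<'EOF'
<partitioning>
  <proposal>
    <lvm config:type="boolean">true</lvm>
    <allocate_volume_mode>device</allocate_volume_mode>
    <delete_resize_configurable config:type="boolean">false</delete_resize_configurable>
  </proposal>
  <volumes config:type="list">
    <volume>
      <mount_point>/var/spacewalk</mount_point>
      <separate_vg_name>vg_spacewalk</separate_vg_name>
    </volume>
    <!-- further volumes: /, swap, /var/lib/pgsql, /var/cache, /srv ... -->
  </volumes>
</partitioning>
EOF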

Beyond SUSE Manager

As already mentioned, all the guided proposal features can be combined within a given product in any way. For example, one product could adopt the approach of creating separate LVM volume groups while still sticking to the traditional auto allocate mode. Or a given system role could enforce never deleting any existing partition, without allowing the user to change that.

But beyond the "Guided Setup" button, the availability of two different allocate modes brings back one idea that has been floating around since the introduction of Storage-ng – adding a "Wizards" section to the Expert Partitioner. That would allow combining some manual steps with the execution of any of the two available allocate modes of the guided proposal… or with any other workflow that can be implemented in the future.

As always, we are looking forward to any feedback about the new features or the guided partitioning proposal in general. And stay tuned for more news!

Highlights of the Latest YaST Development Sprints

June 25th, 2019

May and June have been, so far, interesting months for the YaST Team. We worked hard to polish the last details of the recently released openSUSE Leap 15.1, we attended the openSUSE Conference 2019 (with many fruitful conversations), we shared quite some time together around a table without computers (most of the time, we are a geographically distributed team), many team members enjoyed vacations (it’s spring time in Europe), we organized a Leap 15.1 launch party with technical talks in Gran Canaria… and we ran out of energy to also publish our traditional sprint reports in this blog.

We will try to fix that with this blog post, in which we summarize some highlights from the latest three development sprints, namely the 77th, 78th and 79th. So be warned, this is going to be a loooong post.

Support for Multi-device Btrfs File Systems

We have been working steadily during the three sprints on implementing all the necessary bits to offer a good experience installing and upgrading an (open)SUSE system on top of several block devices by means of the Btrfs RAID capabilities. That includes support in the Partitioner, in AutoYaST, in the storage guided setup and more.

We decided that all that deserved a separate blog post. You can find it here: Getting Further with Btrfs in YaST.

More Improvements for the Partitioner

That blog post mentions a couple of changes in the Partitioner that, although initially motivated by the introduction of multi-device Btrfs, go beyond that scope and are aimed at making all the lists of devices more useful and informative.

Traditionally the Partitioner used two separate columns “Type” and “FS Type” to describe the function of every device. That was sometimes hard to understand. Moreover, quite often the important information (like the relationship between a partition and its RAID or LVM) was simply missing in those tables.

Traditional devices table in the Partitioner

We have merged those columns into a more informative one that identifies the devices and also gives an overview of the relationship between them at first glance. In addition, the table displaying all the system devices now includes multi-device file systems.

Revamped table of devices

Mitigating CPU vulnerabilities from YaST

If you are interested in security (or simply if you have not been living under a rock) you probably have heard about CPU based attacks like Spectre or Meltdown. The last year has seen a number of these CPU issues, all of them coming along with their own kernel options to change the Linux behavior in order to mitigate the security risks at a price of some performance loss.

However, not all users know what affects their architectures or particular models of CPU and which kernel parameters to use to gain more performance if the security risk is acceptable for them.

For that purpose, a new meta-option called "mitigations" was added to the Linux kernel. It allows enabling and disabling at once several of those mitigations that prevent CPU attacks. See more information in this document published by SUSE.
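For reference, this is a hedged sketch of doing the same thing by hand on an installed system, outside YaST (which value you pick is, of course, up to you):

# Check which vulnerabilities affect this machine and how they are currently handled:
grep . /sys/devices/system/cpu/vulnerabilities/*

# Pick one of the kernel's pre-defined values for the meta-option, for example:
#   mitigations=auto        default mitigations, SMT stays enabled
#   mitigations=auto,nosmt  also disable SMT where needed (more secure, slower)
#   mitigations=off         disable the optional mitigations (fastest, least secure)
# Add the chosen value to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then:
sudo grub2-mkconfig -o /boot/grub2/grub.cfg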

We find that kernel option very useful, so we decided to provide an easy way for users to adjust it. Now the YaST bootloader screen contains a new setting which offers three pre-defined options and even a fourth one to let the users fine-tune the settings on their own. As you can see in the screenshot below, we have included extensive documentation in the help dialog, so you will not need to search for this blog post in the future.

It is also possible to modify this option directly from the installation summary. For that purpose, the “Firewall” section was renamed to “Security” and it now includes the possibility to tweak the CPU mitigation options, alongside the traditional settings for firewall and opening the SSH port.

CPU mitigations in the installation summary

Another success story of (open)SUSE offering a prompt solution for our users to easily adapt their systems to ever-changing complex needs.

Memory Optimizations during Installation

While the release of openSUSE Leap 15.1 was approaching, we got several bug reports stating that the YaST installer would freeze when running with only 1 GB of RAM and the online repositories enabled (see bug#1136051).

It turned out that at some point YaST loads the details of all available packages. And that needs a lot of memory if you enable the online repositories during installation. For example, the OSS Leap repository contains more than 35,000 binary packages!

The problem was in the YaST internal API accessing the package manager library (libzypp). It did not allow filtering the objects, so YaST had to read all objects and then do the filtering in its own code. And for each object it returned all attributes, even those which were not needed (like the package description, full RPM name, etc.). All that data required a lot of memory.

To fix that we have introduced new API calls that allow specifying more filters (like returning all selected packages, or packages from a specific repository) and choosing which attributes should be returned. If you only need to know the name and the version, then you will not get the other, useless attributes. And to ease the usage of the new API in YaST we provided a nice object-oriented wrapper written in Ruby.

This optimization saves a lot of memory: 1 GB of RAM should be enough for future installations with the online repositories, even if they grow further.

Unfortunately, we were only able to diagnose the problem and provide a solution a couple of weeks before the official release of Leap 15.1. Introducing a change in such a sensitive part of the installer was considered too risky (it would have invalidated many of the tests that had been already performed) so the installer included in openSUSE Leap 15.1 is still memory hungry if online repositories are used. For that release, we simply increased the official memory requirements to 1.5 GiB.

Online Migration from openSUSE Leap 15.1 to SLES15-SP1

For openSUSE Leap 15.0 it was only possible to migrate from Leap to SLES (SUSE Linux Enterprise Server) manually (see the documentation). With Leap 15.1 the goal was to also support a migration using YaST. But we got a bug report saying the online migration from openSUSE Leap 15.1 to SLES15-SP1 displayed a wrong migration summary and didn’t work well.

It turned out that YaST needed some small fixes to support this properly. The main problem was that YaST did not expect that the base product or the package vendor could change during an online migration; previously it was only possible to upgrade to the next SLE service pack level. But that is fixed now.

Wanna try the migration from openSUSE Leap 15.1 to SLES15-SP1? Then follow these steps (a condensed command-line sketch is shown after the list).

  1. Install the yast2-registration and yast2-migration packages in the Leap 15.1 installation
  2. Make sure the latest online updates are installed (to install the fixes mentioned above)
  3. Start the YaST registration module and register the openSUSE Leap 15.1 product using your registration key
  4. Then start the YaST migration module, select the migration to SLES15-SP1
  5. (Some package dependency issues might be reported in the migration summary; go to the package manager and resolve them. Usually removing the old openSUSE package is the right solution.)
  6. Start the migration, the SLES packages will be downloaded and installed
  7. At the end the system will be rebooted to start the freshly installed SLES, enjoy! 🙂
  8. (It is recommended to review the orphaned packages, leftovers from the Leap installation, with the command zypper packages --orphaned and possibly remove them.)
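For reference, here is a condensed command-line sketch of the same steps; the package names come from the list above, while the exact module invocations are assumptions that may differ on your system.

sudo zypper install yast2-registration yast2-migration   # step 1
sudo zypper patch                                         # step 2: install the latest online updates
sudo yast2 registration                                   # step 3: register Leap 15.1 with your key
sudo yast2 migration                                      # steps 4-6: select SLES15-SP1 and start the migration
# after the reboot into the freshly installed SLES (step 7):
zypper packages --orphaned                                # step 8: review the Leap leftovers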

From Leap to SLES via YaST

Please note that only minimal server installations of Leap are supported for migration; full installations, especially those with third-party packages, might not work correctly.

Why Can't I Read the Logs?

A long time ago, the logs of any Linux system were spread over several files living under the /var/log directory. YaST offers its "System Log" module to inspect those files in a convenient way. Since the introduction of Systemd and its journal, that information has been gradually moved to this new mechanism by default. And YaST offers its "Systemd Journal" module to inspect and query that journal.

Both YaST modules can be executed by any user in the system, not only by root. That's intentional, because both the Systemd journal and the traditional Linux log files can register information targeted at unprivileged users. But there was some room for improvement in the error message displayed by both modules when such users were trying to access protected information.

This is how the new more explanatory message looks in the “System Log” module.

Explanatory pop-up for log viewer

And this is the extended message for "Systemd Journal", which now mentions the systemd-journal user group.

Improved message in the journal viewer
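In practice, the hint boils down to something like the following, assuming a hypothetical user alice who should be allowed to read the whole journal:

# Add the user to the systemd-journal group; the change applies after a re-login.
sudo usermod -a -G systemd-journal alice
# From a new session, the full journal becomes readable without root:
journalctl --since today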

Another Day at the YaST Office: Adapting to Changes

As we usually remark in our blog posts, a significant part of the work of the YaST Team consists of keeping YaST in sync with the constant changes in the underlying system. Of course, these sprints were no exception in that regard. Without trying to make an exhaustive list, let's take a look at some of those adaptations, since the underlying changes may be interesting for some readers.

It turns out the Systemd developers have decided to change the list of possible states for Systemd services. The systemd-sigabrt state is obsolete and a new systemd-watchdog one was added. In the YaST team we learned some time ago that the list of Systemd states changes more often than most people would expect. As a consequence we have an automated check to detect these situations. The bell rang, we adapted the code and everything keeps working.

Systemd is not the only technology that keeps evolving. Quite some time ago, RMT replaced SMT as the default proxy technology for the SUSE Customer Center. Although both have coexisted for quite some time, from SLE-15 onward only RMT is offered. Thus we have adapted all the references to SMT that still existed in YaST. From now on, only RMT is mentioned, to avoid confusion.

Another common adaptation we have to perform in YaST is adjusting some module when the output of the command it runs under the hood has changed. Recently we found out that the developers of the iscsiadm command have decided to use more than one exit code to indicate a successful execution (traditionally, only zero should mean that). After a long discussion, we decided to adapt YaST iSCSI to also be happy with the exit code 21. What does that mean for you? Future versions of YaST iSCSI should work faster in some situations, since the confusion will no longer result in a timeout.
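To illustrate the point with a hedged sketch (the target portal address is made up, and which exit code you actually get depends on the open-iscsi version):

# Treat both 0 and 21 as a successful outcome instead of timing out on a "failure".
iscsiadm -m discovery -t st -p 192.168.1.10
rc=$?
if [ "$rc" -eq 0 ] || [ "$rc" -eq 21 ]; then
    echo "discovery finished (exit code $rc)"
else
    echo "discovery failed (exit code $rc)" >&2
fi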

YaST Firstboot: a Better Example File

Those were just some examples of the many adaptations we did lately for changes in the system. But not all adjustments are motivated by external changes. We also realized that the example configuration file provided by YaST Firstboot (at /etc/YaST2/firstboot.xml) needed some love. Due to the nature of YaST Firstboot, that file should be customized before using the module. But providing an example file with three different steps about acceptance of licenses (two of them enabled and the third one disabled) and other inconsistencies was definitely not helping anyone to understand how to use the module.

In fact, the status of that default example configuration and of the documentation managed to confuse even SUSE’s quality assurance team. So we improved the example file shipped in the package to make it more realistic and we also updated the YaST Firstboot documentation to clarify how to use that file. Because not all improvements are always done by coding.

Leap 15.1’s Most Annoying Bug: Create Home as Btrfs Subvolume

As all openSUSE users should know, for every Leap release a page is created in the openSUSE wiki listing the so-called Most Annoying Bugs. Leap 15.1 was a very smooth release and this time the corresponding list contains only one bug… and it’s a YaST one. 🙁

In an installed system, when YaST was creating a new user it was always trying to create its home directory as a Btrfs subvolume. Even in cases in which that was impossible. For example, if the directory to be created was not in a Btrfs file system.

Writing user error

Fortunately, the bug didn’t affect the installation process or AutoYaST. We created a fix that was quickly available as a maintenance update. So make sure your openSUSE system is updated before trying to create new users with YaST.

YaST Network refactoring: status report

Since we submitted the first bits of the yast2-network refactoring by the end of April, we have made quite some progress in this area. Although it is still an ongoing effort, we would like to give you an update on the current state.

We might say that we have been working on two different areas: the user interface implementation and the future data model.

About the user interface, the team has improved the code quite a lot to make it easier to maintain and extend. We have introduced some classes to de-couple the widgets from the data, and it pays off. Additionally, we have fixed some bugs (many of them related to validations), simplified the process of adding new devices and reorganized the hardware tab.

New hardware tab in YaST Network

Regarding the internal data model, we have been thinking about the best way to represent network configuration in an agnostic way so, in the future, we can not only support Wicked but other options too (for the time being, the NetworkManager support is quite limited). If you are curious about the details, we have added a document describing the approach to the repository.

The new data model is already being used to handle DNS and Routing configuration. So if you are using Tumbleweed you have been already using the new network code for some weeks, including the UI enhancements presented in our latest post.

New network routing dialog

Although the data model is so far only used in the mentioned parts, the plan is to submit to Tumbleweed a heavily refactored UI layer during next sprint. So stay tuned.

Added Appstream Metadata to the YaST Packages

The YaST package manager is not the only software manager in the (open)SUSE distributions. There are some more, like Discover in KDE or the GNOME software manager, not to mention the online openSUSE appstore.

While the YaST manager is package oriented, those other software managers are application oriented. That makes a huge difference, especially for the beginner users.

The full list of packages does not include only the applications (basically anything a user can start from the desktop), but also shared libraries, pieces that provide functionality for other applications or basic components needed for the system to work. With so many software packages (the openSUSE OSS repository contains over 35,000 of them!) it's sometimes hard to find the software you need unless you know what you are looking for.

To offer an application-oriented view on top of all that, the application managers need some special data describing the applications inside the packages. That data is located in the /usr/share/metainfo/*.xml files; if you are interested in the technical details, check the AppStream documentation provided by Freedesktop.org.
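If you have never seen one of those files, this trimmed, hypothetical example shows their general shape (the id, name and summary are made up; the real YaST files are generated automatically, as described below):

# Hypothetical contents of a metainfo file, shown as comments after the command:
cat /usr/share/metainfo/org.opensuse.example.metainfo.xml
# <?xml version="1.0" encoding="UTF-8"?>
# <component type="desktop-application">
#   <id>org.opensuse.example</id>
#   <name>Example Tool</name>
#   <summary>Configure something from a desktop software center</summary>
#   <metadata_license>CC0-1.0</metadata_license>
# </component>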

Our absolutely awesome community contributor Stasiek Michalski (more famous by his nickname lcp) realized YaST was not offered in those application managers and decided to fix it. So he created an XML generator which collects the data from YaST packages and automagically generates the metainfo XML file needed by the other software managers.

YaST in GNOME Software

As a result, the Gnome Software Manager, Discover and other software managers will offer YaST in Tumbleweed just as any user-oriented application. Thanks lcp!

YaST in Discover

AutoYaST Pre-install Scripts & Storage

AutoYaST has a special feature to allow users to customize the installation process and take control in different stages of the installation. For that, the AutoYaST profile offers a section where you can insert your custom scripts. There are five types of scripts: pre-scripts, postpartitioning-scripts, chroot-scripts, post-scripts and init-scripts.

For the particular case of pre-scripts, the documentation states that “It is also possible to change the partitioning in your pre-script“. That means, for example, you could use a script to create a new partition or to configure some kind of technology. Therefore, it would be very convenient to re-analyze the storage devices after running the user pre-scripts. In fact, that was the default behavior in the old storage stack, but the new one was slightly modified to only re-analyze the system under certain conditions.

But it turns out some SLE customers were using pre-scripts to configure the behavior of multipath, and those changes were not being noticed by AutoYaST.

The solution was quite trivial. We simply decided to always perform a new storage re-analysis after the AutoYaST pre-scripts. We did not find strong reasons not to do it and there should not be a significant performance penalty.

And, for the specific case of multipath, YaST now copies some configuration files (e.g., /etc/multipath.conf and /etc/multipath/bindings) to the target system when performing a new installation. Otherwise, the installed system would not contain the configuration applied during the installation.
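For reference, this is roughly how a pre-script is declared in a profile. Take it as a hedged sketch: the scripts section follows the usual AutoYaST layout, but the file name and the multipath command inside are just a hypothetical illustration.

# Hypothetical AutoYaST profile fragment with a pre-script, written via a shell
# heredoc to keep the example self-contained; the script body is an illustration only.
cat > autoyast-pre-script.xml <<'EOF'
<scripts>
  <pre-scripts config:type="list">
    <script>
      <filename>enable-multipath.sh</filename>
      <source><![CDATA[
#!/bin/sh
# make sure multipath is active so the storage re-analysis sees the mpath devices
/sbin/multipath -r
]]></source>
    </script>
  </pre-scripts>
</scripts>
EOF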

Clarifying the Usage of Software Management Options

Our software manager is one of the most complex YaST modules, which makes some aspects of its usage not fully obvious for some users. You may have even noticed that it is the only YaST module in which the interface is clearly different when executed in text and in graphical mode. Especially the menu at the top of the interface, which is organized into different sections.

Some users were confused by the fact that some options are not persisted between different executions of the module. Those options are there to modify the current operation of the module, not to change the configuration of package management in the system.

When executed in text mode that was clear, because such options were labeled as "temporary change", but the graphical mode didn't have any indication about it. As you can see in the following screenshot, that's fixed now.

Temporary options in YaST Software

Product License Hard to Understand? Try Another Language!

Some users reported that in the text-mode installation it was impossible to switch the language of a product license. Although during graphical installation everything works flawlessly, in text mode the language switching widget was there… but disabled.

The point is that such behavior was not exactly a bug. It was in fact done on purpose. We decided to prevent such a language change some time ago because on the Linux console we're not able to display the characters of many languages. Hopefully some of our usual readers have just shouted "that's not true!" 😉 Those users remember that in the report of our 67th sprint we explained that now we always use fbiterm when installing in text mode in order to be able to display characters of almost every language.

We are now able to display all languages that currently have license translations, so we have enabled the language switching widget and now both graphical installation and text-based one deliver an equivalent user experience.

More about languages

This is not the only change related to internationalization we did in these sprints. We also added a specific warning message for situations in which YaST is used to change the language of the system but there is no repository containing the needed translation packages. Something that obviously only affects users configuring systems in very restricted environments.

As you can see in this and other recent reports, we have to deal relatively often with difficulties related to translation and internationalization. To reduce the effect of those problems on our final users, we also added some extra mechanisms to detect internationalization errors introduced during development. Hopefully that will mean that in future reports the space dedicated to comment language-related problems gets shorter and shorter. 🙂

And that was just a summary!

As long as this post looks, there are many interesting things we have done in these weeks that were left out, intentionally or not. We definitely should avoid skipping three reports in a row in the future!

This week it’s Hack Week at SUSE, which means regular YaST development will be put on hold… or will turn into something completely different. You never know what the result of a Hack Week can be!

But in any case we will go back to our sprint-based pace in August. So expect a new blog post in three weeks. See you then!

Getting further with Btrfs in YaST

June 19th, 2019

Since the YaST team rewrote the software stack for managing the storage devices, we have been adding and presenting new capabilities in that area regularly. That includes, among other features, the unparalleled ability to format and partition all kinds of devices and the possibility of creating and managing Bcache devices. The time has come to present another long-awaited feature that is just landing in openSUSE Tumbleweed: support for multi-device Btrfs file systems.

As our usual readers surely know, Btrfs is a modern file system for Linux aimed at implementing advanced features that go beyond the scope and capabilities of traditional file systems. Such capabilities include subvolumes (separate internal file system roots), writable and read-only snapshots, efficient incremental backups and today's special: support for distributing a single file system over multiple block devices.

Multi-device Btrfs at a glance

Ok, you got it. YaST now supports multi-device Btrfs file systems… but what does that exactly mean? Well, as simple as it sounds, it's possible to create a Btrfs file system over several disks, partitions or any other block devices. Pretty much like a software-defined RAID. In fact, you can use it to completely replace software RAIDs.

Let’s see an example. Imagine you have two disks, /dev/sda and /dev/sdb, and you also have some partitions on the first disk. You can create a Btrfs file system over some devices at the same time, e.g., over /dev/sda2 and /dev/sdb, so you will have a configuration that looks like this.

        /dev/sda                /dev/sdb
            |                       |   
            |                       |   
     ---------------                |   
    |               |               |   
    |               |               |   
/dev/sda1       /dev/sda2           |   
                    |               |   
                    |               |   
                     ---------------
                            |   
                          Btrfs
                            |   
                            |   
                            @ (default subvolume)
                            |   
                            |   
                 -----------------------
                |       |       |       |   
                |       |       |       |   
              @/home  @/log   @/srv    ...

Once you have the file system over several devices, you can configure it to do data striping, mirroring, striping + mirroring, etc. Basically everything that RAID can do. In fact, you can configure how to treat the data and the Btrfs meta-data separately. For example, you could decide to do striping with your data (by setting the data RAID level to the raid0 value) and to do mirroring with the Btrfs meta-data (setting it to the raid1 level). For both data and meta-data, you can use the following levels: single, dup, raid0, raid1, raid10, raid5 and raid6.
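Outside the installer, the same kind of setup can be created from the command line. A minimal sketch using the two devices from the example above:

# Data striped over both devices (raid0), metadata mirrored (raid1):
mkfs.btrfs -d raid0 -m raid1 /dev/sda2 /dev/sdb
mount /dev/sda2 /mnt              # any member device can be used to mount the file system
btrfs filesystem usage /mnt       # shows both devices and the data/metadata profiles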

The combination of this feature and Btrfs subvolumes opens an almost endless world of possibilities. It basically allows you to manage your whole storage configuration from the file system itself. Separate tools and layers like software-defined RAID or LVM become simply dispensable when using Btrfs in all its glory.

Managing multi-device Btrfs with the YaST Partitioner

Interesting feature indeed, but where to start? As usual, YaST brings you the answer! Let's see how the YaST version that is currently being integrated into openSUSE Tumbleweed will ease the management of this cool Btrfs feature. SLE and Leap users will have to wait for the next release (15.2) to enjoy all the bells and whistles.

First of all, the Btrfs section of our beloved Expert Partitioner has been revamped as shown in the following picture.

New Btrfs section of the Partitioner

It lists all the Btrfs file systems, single- and multi-device ones. You can distinguish them at first sight by the format of the name. The table contains the most relevant information about the file systems, alongside buttons to add a new file system and to delete and modify the existing ones.

Existing Btrfs file systems can be inspected and modified in several ways. The "Overview" tab includes details like the mount point, file system label, UUID, data and meta-data RAID levels, etc. The file system can be edited to modify some aspects like the mount options or the subvolumes.

Overview of a Btrfs file system

In addition, the tab called "Used Devices" contains a detailed list of the block devices being used by the file system. That list can also be modified to add or remove devices. Note that such an operation can only be done when the file system does not exist on disk yet. Theoretically, Btrfs allows adding and removing devices from an already created file system, but a balancing operation would be needed after that. Such a balancing operation could take quite a considerable amount of time. For that reason it has been avoided in the Expert Partitioner.

Devices of a Btrfs file system
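On the command line, growing an already created file system is possible, but it involves exactly the balancing step the Partitioner avoids:

btrfs device add /dev/sdc /mnt    # add another device to the mounted file system
btrfs balance start /mnt          # redistribute data/metadata; this can take a long time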

Of course, you can still format a single device as Btrfs in the traditional way (using the “edit” button for such device). But let’s see how the new button for adding a Btrfs file system opens new possibilities.

Adding a Btrfs file system

Similar to the RAID dialog, you have the available devices on the left and you can select the devices where you want to create the file system, and also you can indicate the data and meta-data RAID levels. Of course, the admissible RAID levels will depend on the number of selected devices. You will go to the second step of the Btrfs creation by clicking the “Next” button. In this second step, you can select the mount options and define the subvolumes, see the next image.

Options for a new Btrfs file system

And apart from all that, the Expert Partitioner has received several small improvements after including multi-device Btrfs file systems. Now that the multi-device Btrfs file systems are considered first-class citizens, they are included in the general list of devices. Note that the "Type" column has also been improved to show more useful information, not only for Btrfs but for all kinds of devices.

Revamped list of devices

What else works?

But YaST goes far beyond the Partitioner. We have also ensured the storage proposal (i.e. the Guided Setup) can deal with existing multi-device Btrfs configurations when you are performing a new installation. Moreover, the upgrade process is also ready to work with your multi-device Btrfs file system.

Last but not least, AutoYaST can now also be used to specify that kind of Btrfs setups. The official AutoYaST documentation will include a specific section about advanced management of Btrfs file systems on top of several block devices. The content is being reviewed by the SUSE documentation team right now.

What does not work (yet)?

There is still one scenario that is not 100% covered. As described in bug#1137997, it is still not possible to use the "Import Mount Points" button in the Partitioner to recreate a multi-device Btrfs layout. But fear not, it's on our list of things to fix in the short term!

Get it while it’s hot

Free Software development is a collaborative process and now we need YOU to do your part. Please test this new feature and report bugs if something does not work as you expected. And please, come with your ideas and further improvements and use cases. And, of course, don’t forget to have a lot of fun!

Highlights of YaST Development Sprint 75

April 10th, 2019

With the releases of openSUSE Leap 15.1 and SLE-15-SP1 approaching, the YaST Team at SUSE is investing quite a significant amount of time in polishing details and fixing small (and not so small) bugs. But fortunately, that still leaves us enough time to also work on our mid-term goals.

So welcome to our usual selection of bug fixes (listing them all would be boring) and exciting new stuff. This edition includes:

  • A nice howto for reporting Snapper bugs
  • Tons of fixes for right-to-left languages like Arabic
  • Some adjustments and improvements in the storage area
  • A sneak peek into the future of the yast2-network code
  • Some contributor-oriented content, like our new pull request templates and revamped Docker images for testing

Snapper Bug Reporting Howto

During this sprint we fixed a bug that was causing Snapper to crash under very specific circumstances. The scenario was quite unusual so we had to request quite some information from the reporter of the bug to confirm what was happening. As a nice consequence, in addition to having now a more robust Snapper (one bug killed) you can also enjoy a new page in the openSUSE wiki listing the information you should attach to Bugzilla if you find a bug in Snapper while using (open)SUSE.

Which is also a nice excuse to remind you about the equivalent "Report a YaST bug" page.

YaST around the globe… in all directions

Many of the YaST users and of our blog readers are not native English speakers, and they surely appreciate the fact that YaST and (open)SUSE in general can be used in several languages. But have you ever thought about the implications of developing multi-language software? Sure? In all of them? 😉

Human languages are as diverse as human cultures and there are many details to take into account, from the usage of different alphabets to the various ways of dealing with gender or number (in English, words have just one form for singular and another for plural, but that can be way more complex in other languages). In today's issue we will take a look at one of our favorite translation issues – languages that are written from right to left, like Arabic.

The installer summary in Arabic

Dealing with text that is a mixture of Latin and Arabic script is complex and sometimes we have to deal with interesting bugs. Fortunately, we have our own weapon to fight those bugs. If in Star Wars they have protocol droids like C-3PO, in the YaST team we have Martin Vidner, who is the closest human equivalent.

He fixed all the reported bugs and even created a tool to help debug similar problems in the future. You can find the source code of that tool on GitHub. There is even a hosted instance of the tool to be used by translators or anyone who is curious.

Now, even complex interfaces like our Partitioner look correct enough in right-to-left languages, so we will not have to send mirrors to all our Arabic users.

The YaST Partitioner in Arabic

If you want to know more about this exciting but very complex problem of bidirectional texts, you can start with the following documents.

  • Martin's great summary of the types of problems found in YaST and their respective solutions.
  • Wikipedia: Bi-directional text, an overview of the concepts
  • Unicode Standard Annex #9: Unicode Bidirectional Algorithm, the gory details, 50 pages of them
  • More Arabic YaST

In related news, we also got some reports about problems visualizing bullet points in Korean with the beta versions of the future SLE-15-SP1. But as we could verify, all those problems are gone now.

SLES installer in Korean

Storage Fixes

Another area that received some attention in this sprint is storage management. Three related features needed adjustments before the upcoming (open)SUSE releases:

  • Fixed the detection of the boot disk in the Partitioner warnings.
  • The Guided Setup now works better when doing several attempts on different disks.
  • AutoYaST can now install over NFS.

One of the last storage features that the YaST Team has developed is the support for Bcache devices in the Expert Partitioner. While our QA team was testing it, they found a bug. The Partitioner was complaining because the boot disk did not contain a partition table, which is a mandatory condition for a Legacy (non-UEFI) x86 system. But it was a bogus warning, since they had actually defined a /boot partition on another disk.

That's how we found that our Partitioner gets confused if there is a separate partition mounted at /boot and located on a different disk than the root file system. The Partitioner insisted on considering the disk containing / to be the one that would be used for booting, instead of checking the structure of the disk containing /boot. Now that is fixed and the improvement will be available for the upcoming SLE 15 SP1, Leap 15.1 and, of course, openSUSE Tumbleweed.

But that was not the only storage bug fixed just in time for the upcoming releases. Some sprints ago, the Storage Proposal algorithm for the initial proposal was modified to try installing on each of the individual disks. If the installation is not possible on a given disk, even after disabling all optional configurations (e.g., snapshots and a separate /home), a new proposal is tried on the next disk and so on. The problem was that the options disabled in the previous attempt were not restored when switching to the next disk. This caused some ugly side effects: for example, if the swap partition was disabled when trying the first disk, the proposal did not try to create a swap partition when performing the proposal on the next disk. But now this is also fixed and it works as expected.

And last but not least, AutoYaST now supports installing over a Network File System (NFS). This feature was left behind when the new YaST storage stack was re-implemented for SLE 15 GA. Actually, this is an undocumented feature; that's why we overlooked that SLE 12 was able to do it using some hacks and a non-validating AutoYaST profile. But no worries, the feature is available again and such a profile will now work in any updated SLE-15 or Leap 15.0. Of course, it will also work while installing SLE-15-SP1 or openSUSE Leap 15.1 and Tumbleweed.

Nevertheless, we are working on a better and documented way of supporting that scenario in the future, with no need to twist the specification of the AutoYaST profile. Stay tuned for more information.

Rethinking the Location of Special Boot Partitions

And now that the storage layer looks sane and healthy for the upcoming releases, we also took some time to think about future improvements. As you know, the storage Guided Setup always proposes to create special boot partitions as needed in each case. That can be a BIOS BOOT partition (for Legacy x86 systems with GPT), an ESP (for UEFI systems), PReP (for PPC systems) or zipl (for S/390 systems). Strictly speaking, those partitions don't always have to be on the same disk as the root partition, and in some cases having them on a separate one can have some advantages (like sharing the ESP partition with other operating systems).

But we have been reconsidering all the cases, the expectations of most users and of the majority of BIOS vendors, and the known bugs in other operating systems regarding shared boot partitions. We have decided to be more strict in the future about the location of those partitions. Starting today with openSUSE Tumbleweed and in the 15.2 releases of openSUSE Leap and SLE, the Guided Setup will always propose those partitions on the system disk. That is, on the disk containing /boot and the root file system.

The future of YaST Network is here

Those who follow this blog know that we invested quite some time over the last couple of years rewriting the part of YaST that was most buggy and hardest to modify – the storage stack. And surely you have already noticed that since we did it we have been introducing new features at a very good pace (like Bcache, a more powerful Partitioner, Raspberry Pi support, etc.) and fixing the reported bugs in a matter of days or even hours.

The next in our list of YaST areas to revamp is the networking support. And we are happy to announce that we are starting to have some visible results there. There is still a very long road ahead and we will provide more information in upcoming reports. But at least we already have a preview of a fully rewritten management of network routes. It is not available in openSUSE Tumbleweed yet. But for those who can't wait, here you can see the first screenshot. All based on new and clean code backed by automated tests.

New network routing dialog

Activating Online Repositories in openSUSE Leap 15.1

The openSUSE Tumbleweed installer asks at the beginning of the installation whether to activate and use the online repositories when a network connection is available.

The reason is that the installation DVD does not contain all available packages because of the limited media size. Another advantage is that the installer might directly install newer packages than the ones on the media; this avoids installing the older versions first and then upgrading them to the latest version.

However, in some cases you might not want to use the online repositories, for example if the network connection is slow or metered.

We got a bug report that this question was missing in Leap 15.1. It turned out that the control.xml file which drives the installer did not contain this step. After adding a few lines to the file, you can now enjoy the online repositories also in Leap 15.1!

Online repositories in Leap 15.1

Why are we writing about this? The reason for the missing step in Leap 15.1 was a bit surprising. Normally all YaST packages are developed in the Git master branch for both Tumbleweed and Leap. However, in this case Leap 15.1 had already been branched and was developed separately, so the changes in master went only to Tumbleweed. And we overlooked that small difference when adding this step.

To avoid this in the future we added a pull request template with a reminder which informs the developers about this difference in the Git setup when opening a pull request.

If your project also has some unusual setup, then a pull request template might be a good reminder for you as well.
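One simple way to set up such a reminder in any GitHub project (a generic sketch, not the exact template we use):

mkdir -p .github
cat > .github/PULL_REQUEST_TEMPLATE.md <<'EOF'
## Before you submit

This repository uses separate branches for Tumbleweed and for the maintained
Leap/SLE releases. Does this change also need to go to a maintenance branch?
EOF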

    Building the Docker Images in OBS

    But the reminders about the correct branches and procedures is not the only news we have for YaST contributors and main developers. As you may remember, few years ago we switched to using Docker at Travis. That works well but we found some disadvantages of that initial setup.

    • You need an extra account at Docker Hub to manage the images.
    • There is no link between OBS and Docker Hub, so we cannot easily trigger an image rebuild when a package is updated in OBS.
    • We just blindly triggered a rebuild every 2 hours (sometimes the rebuild was not necessary, sometimes it took too much time).
    • Docker Hub can use new OBS packages only after they are published by OBS.
    • The build at Docker Hub is quite slow (~20 minutes in our case); if another image is currently being built, the new build is queued and only starts after the previous builds have finished.

    The result is that a new package might only become available in Travis several hours after merging the pull request. And even after triggering the build manually, it could still take more than one hour.

    We needed a faster cycle and the solution, as usually happens, was in the openSUSE ecosystem. As you may know, the Open Build Service is capable of much more than just building packages. So we decided to make use of the OBS capability of building Docker images.

    Building both our packages and our Docker images in OBS comes with many advantages:

    • The image build starts immediately after the new packages are built; it does not wait for the packages to be published nor for a full rebuild (only for the needed packages).
    • No extra accounts/permissions (just use your OBS account).
    • The build in OBS is faster (6-7 minutes).
    • No need for extra Jenkins jobs periodically triggering the image rebuilds.

    This means the new packages should be available in the Docker image in about 10-15 minutes after merging a pull request (for leaf packages, changing a core package which triggers a complete YaST rebuild will of course take more time).
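
    For the curious, here is a minimal sketch of how to grab and inspect such an image locally. The project, repository and image names below are assumptions for illustration purposes, so check the actual OBS project for the real ones; images built in OBS are published through registry.opensuse.org and their definitions live in regular OBS packages:

    # Pull a YaST CI image published via OBS (the image path is an example,
    # check registry.opensuse.org for the real one)
    docker pull registry.opensuse.org/yast/head/containers/yast-ruby

    # Inspect or tweak the image definition: it is a normal OBS package
    # containing a Dockerfile (project and package names are examples)
    osc checkout YaST:Head:Containers yast-ruby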

    If you want to learn more about this topic, take a look at the following links:

    And that was not all!

    As usual, the content of this report is just a small subset of all the work the YaST Team does in two weeks. In this sprint, most of that work went into fixing all kinds of bugs in preparation for the next releases. Big bugs, small ones, hidden bugs and embarrassingly obvious ones. Hopefully, you got a fix for your reported bug. If not, you can always stay tuned for more news after the next sprint. And don’t forget to have a lot of fun!

experimental openSUSE mirror via IPFS

April 3rd, 2019 by

The InterPlanetary File System (IPFS) can be used to provide files in a more efficient and distributed way than HTTP.

Our filesystem repo already has the go-ipfs client.

You use it with
ipfs daemon --init

And then you can add my Tumbleweed mirror with
zypper ar http://127.0.0.1:8080/ipns/opensuse.zq1.de./tumbleweed/repo/oss/ ipfs-oss

You can also browse the content online at
http://opensuse.zq1.de./tumbleweed/repo/oss/ . During my testing I found that the results are sometimes inappropriately cached on the Cloudflare CDN, so if you use it under this URL without the ipfs client, it might throw signature errors in zypper.

On the server side, the mirror is updated using the syncopensuse script from
https://github.com/bmwiedemann/opensusearchive and the consistency of the repo is verified with checkrepo.

Whenever a complete repo has been synced, dynaname updates a DNS entry to point to the new head:

> host -t txt _dnslink.opensuse.zq1.de.
_dnslink.opensuse.zq1.de is an alias for tumbleweedipfs.d.zq1.de.
tumbleweedipfs.d.zq1.de descriptive text “Last update: 2019-04-03 12:23:43 UTC”
tumbleweedipfs.d.zq1.de descriptive text “dnslink=/ipfs/QmSXEVuU5z23rDxMyFYDhSAUaGRUPswuSXD3aVsBEzucjE”

If you have spare bandwidth and 300 GB of disk on some public server, you could also host a mirror of today’s version, simply by doing ipfs pin add QmSXEVuU5z23rDxMyFYDhSAUaGRUPswuSXD3aVsBEzucjE

This is a permalink: http://127.0.0.1:8080/ipfs/QmSXEVuU5z23rDxMyFYDhSAUaGRUPswuSXD3aVsBEzucjE also browsable via any public IPFS gateway. This means it will always remain at the 20190401 version of Tumbleweed and no changes in content are possible, similar to how a git commit ID always refers to the same data.
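
If you just want to take a quick look at that snapshot without running a local IPFS daemon, any public gateway works as well. For example (the exact path inside the snapshot is an assumption, adjust it to whatever you want to inspect):

# List the top level of the pinned 20190401 snapshot via the ipfs.io gateway
curl -L https://ipfs.io/ipfs/QmSXEVuU5z23rDxMyFYDhSAUaGRUPswuSXD3aVsBEzucjE/ | head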

So why did I create this IPFS mirror? That is related to my work on reproducible builds for openSUSE. There it regularly happened that published Tumbleweed binaries were built with libraries, compilers and toolchains that were no longer available in current Tumbleweed. This prevented me from verifying that the published binaries were indeed built correctly without manipulation on the OBS build workers.

Now, with this archive of rpms easily available, it was possible to verify many more Tumbleweed packages than before. And most importantly, it remains possible to verify them independently even after Tumbleweed moves on to newer versions. This data is going to stay available as long as anyone pins it on a reachable server. I’m going to pin it as long as it remains relevant to me, so probably until a bit after the next full Tumbleweed rebuild, maybe 6 to 12 months.

Thus, it is now even harder to sneak binary backdoors into our package build process.

Highlights of YaST Development Sprint 71 & 72

February 27th, 2019 by

We know we owe you a report for the previous development sprint (namely the 71st). But we also know how to compensate for that. This week we have not only one, but up to three blog posts to keep you tuned to the YaST evolution.

So let’s start with the summary of what has been implemented and fixed lately. That includes:

  • Improvements in the Bcache support in the Partitioner
  • Users home as Btrfs subvolumes
  • Better visualization of Salt formulas in YaST Configuration Management
  • Automatic selection of the needed driver packages
  • Improvements in many other areas like AutoYaST, bootloader, the Partitioner and the storage Guided Setup

You will find links to the other more exhaustive blog posts, about the recently added Bcache support and the revamped Configuration Management module, in the corresponding sections of this report.

Final Improvements in the Bcache Support

For several sprints, we have been detailing our efforts to offer decent support for the Bcache technology in the YaST Partitioner. During this sprint we implemented what we consider to be the three final bits:

  • Bcache devices without caching
  • Modifying Bcache devices
  • Listing all caching sets

We will now detail these three improvements. But to celebrate that the Bcache support now looks complete in the Partitioner, we have published a separate blog post explaining what Bcache is and how to take advantage of that technology using the YaST Partitioner. Enjoy!

And back to the topic of recent improvements, we should mention that the Bcache technology allows creating a Bcache device without an associated caching one. This is useful if you are considering using Bcache in the future. In that case, you can set up all your slow devices as Bcache backing devices without a cache, leaving open the possibility of adding caching devices later. That is now possible by selecting the new option labeled “without caching” during creation, as shown in the following screenshot.

No caching device in Bcache

Of course, that’s not very useful without the possibility of modifying a Bcache device. So in the latest sprints we also added a new “Change Caching” button.

New option to modify a bcache

This option is fully usable for bcaches that do not exist in your system yet (e.g., a bcache that you are creating right now). For existing bcache devices, it is only available when the bcache has no associated caching device yet. Otherwise, a detaching action would be required, and that could take a very long time in some situations.

Avoiding detach operations in bcache

And last but not least (regarding Bcache), the Expert Partitioner now also shows the list of all caching sets in a separate tab (unsurprisingly) titled “Caching Set Devices”. It is a purely informative tab, but thanks to it you will be able to check all the devices currently used for caching at a glance.

List of caching sets

Create the User’s Home as a Btrfs Subvolume

As many (open)SUSE users know, Btrfs offers several advantages over traditional Linux file-systems. One of them is the possibility of using subvolumes to customize the configuration and features of different parts of the same file-system.

From now on, the YaST Users module allows creating the home path of new users as a subvolume instead of a plain directory, as long as a Btrfs file-system can be found in the system, of course. It is as simple as checking the “Create as Btrfs Subvolume” option that you will find just below the “Home Directory” field.

Using a Btrfs subvolume as home

In a multi-user system, using Btrfs subvolumes for the home paths of the users can open a world of possibilities. The system administrator can decide to give some users the ability to create snapshots of their corresponding home directories, even automated on every login and/or logout using PAM Snapper. Similar ideas can be applied to a Samba deployment. Moreover, using subvolumes as homes can also be useful to implement per-user quotas.
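
By the way, none of this is exclusive to YaST. Just as an illustration, this is roughly how a subvolume home with a quota and snapper snapshots could be set up by hand (user name, group, quota size and snapper configuration name are made-up examples):

# Create the home of a new user as a Btrfs subvolume instead of a plain directory
btrfs subvolume create /home/alice
cp -a /etc/skel/. /home/alice/
chown -R alice:users /home/alice

# With quotas enabled on the file-system, limit the space used by that home
btrfs quota enable /home
btrfs qgroup limit 20G /home/alice

# Optionally, let snapper manage snapshots of the new subvolume
snapper -c home_alice create-config /home/alice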

Beyond multi-user systems, fine-tuning the mount options or customizing snapshots and quotas can also be useful for some special system users (e.g. the user associated with a database system).

And all that, as always in YaST, equally easy to configure with the graphical interface and with the text-based one.

Using a Btrfs subvolume as home, ncurses version

Better Visualization of SUSE Manager Salt Formulas

As we announced in January, we have been working on updating the SUSE Manager Salt Formulas support in YaST Configuration Management to comply with the latest forms specification. As with the Bcache support, we feel we have reached a point at which the feature is polished enough to be shipped in the next (open)SUSE stable releases and to deserve its own separate blog post. So go and enjoy our third blog post of this week, a sneak peek into the marriage of SUSE Manager and YaST, thanks to the magic of Salt Formulas.

In that regard, we have recently been improving how the forms look to make them as easy to use as possible. Bear in mind that a form can be quite long and the support for scrolling in YaST2 is quite limited. In order to work around this problem, YaST2 will display a tree on the left side outlining the structure of the form.

Tree view of long salt formulas

Apart from that, we have solved some known problems, like exporting date/time and numeric values in a correct way or handling missing and default values properly.

Sure, there are some details that we would like to improve but, for now, what we need is feedback from you. So, if you are interested in such functionality, please check the dedicated blog post mentioned above and have a look at yast2-configuration-management 4.1.5.

Automatically Selecting the Driver Packages in Installed System

In openSUSE Leap 15.0 and SUSE Linux Enterprise 15, the packages containing drivers needed by the system are installed automatically if they are available during the installation. However, if a repository with driver packages is added later in the installed system, then the drivers are not automatically selected for installation.

Fortunately, in Leap 15.1 and SLE-15-SP1 the package solver implements a new solver mode in which it selects for installation only the recommended driver packages, without also dragging other non-driver recommended packages into the system.

For example, after adding the nVidia driver repository in Leap 15.1 (YaST ➝ Software Repositories ➝ Add ➝ Community Repositories, select “nVidia Graphics Drivers”) and starting the package manager (YaST ➝ Software Management), the respective nVidia driver packages are automatically selected for installation if a supported nVidia graphics card is present in the system. This of course also works for other driver packages and hardware devices.
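
If you prefer the command line, a similar effect can be achieved with zypper once a driver repository has been added. This is just a generic sketch (the repository URL is an example) using the standard “install newly recommended packages” operation, which is not necessarily the exact solver mode described above:

# Add the driver repository and refresh it
zypper addrepo https://download.nvidia.com/opensuse/leap/15.1 nvidia
zypper refresh

# Install packages newly recommended by already installed packages,
# e.g. drivers matching the detected hardware
zypper install-new-recommends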

Just one last note about the automatic selection of recommended driver packages. When running the package manager for the first time in a recently installed openSUSE Leap 15.1, it might automatically select some additional packages to install coming from the Leap OSS online repository (which is added automatically at the end of the installation). Do not be surprised, that’s actually desired. If you do not want to install those packages, simply deselect them; YaST will remember that and will not select them next time.

See more details in the corresponding documentation.

Improved Disk Selection Screen in the Storage Guided Setup

Although it may sound surprising to some users, it’s not uncommon to install SUSE Linux Enterprise on systems with dozens or even hundreds of disks. Of course, in those advanced scenarios the automatic partitioning proposal is not very useful and users are expected to use the Expert Partitioner. But what if someone tries to use the partitioning Guided Setup in a system with 30 disks? In SLE-15 (or Leap 15.0, in the case of this concrete screenshot) they will see a screen as unusable as this one.

Broken Guided Setup screen

Of course, it only gets worse if the number of disks increases or if the text-mode interface is used. So we improved the disk selection screen, also taking the opportunity to include additional information that makes it easier to identify removable devices like USB and IEEE 1394 ones. In future releases, the screen will normally look pretty much like it did in the previous version, just with the extra information about USB.

Normal Guided Setup screen

But if the system contains more than 10 disks to choose from, then a different widget that makes scrolling possible will be used, ensuring the user interface remains usable enough in both graphical and text-based modes.

Guided Setup screen with more than 10 disks

YaST Bootloader Becomes Less Whiny

When the installation of the system is spread over multiple physical disks, YaST Bootloader always tries to set up the bootloader on all those disks. This helps, for example, when MD RAID is used to achieve redundancy and some disk fails. Then the system can still boot from any of the other disks, since they also contain a proper bootloader.

But in some cases the installation can be spread over multiple disks with only one of them being prepared to host the bootloader (e.g. containing the needed partitions). YaST Bootloader used to complain about that while generating its proposal, which resulted in an error shown during installation.

In this sprint we have relaxed the YaST Bootloader expectations a bit. It still tries to create a fail-over setup, but the proposal does not complain as long as at least one disk looks bootable. Additionally, if GRUB2 is properly installed on at least one device, possible failures on other disks will no longer result in an error pop-up; they are just logged for future reference.

Device Order for MD RAIDs in AutoYaST

During the development of storage-ng, the feature to specify the device order for MD RAIDs with AutoYaST was lost. The feature can be useful for RAID level 10 to ensure that one full half of the RAID resides on one disk subsystem and the other half on the other subsystem. Only then can the RAID keep working after a complete failure of a single subsystem.

Now the feature has been re-implemented for Leap 15.1 and SLE-15-SP1 and the “device_order” tag of the AutoYaST profile works as expected. For further information about configuring RAID with AutoYaST, consult the AutoYaST documentation.

Improvements in the Partitioner

That AutoYaST feature was not the only thing that temporarily disappeared with the replacement of the old storage components by storage-ng. The “Provide Crypt Passwords” button in the Partitioner was also missing in the early versions included in openSUSE Leap 15.0. But it was not really an omission; rather, it was a consequence of how the activation of logical devices now works in YaST.

During installation, all pre-existing logical devices that need some initialization are always activated to make it possible to use them. That includes LVM, multipath devices (for which a user confirmation is requested) and encrypted devices (for which the user is asked for the password). But triggering such activations in an already installed and running system cannot be done so lightly, since it affects the execution of the whole system.

Still, there are cases in which a user executing the YaST Partitioner in an already installed system may want to activate currently inactive devices. To allow that, we brought back the “Provide Crypt Passwords” button, but with two small changes compared to its historical behavior.

  • It activates all kinds of devices, not only the encrypted ones.
  • It refreshes all the Partitioner information, reading again the status of all the storage devices.

Provide Crypt Passwords button

Of course, any change in behavior is a nice opportunity to revisit and expand the help texts.

The button explained

Another change that was intentionally introduced in the first versions of the rewritten Partitioner was the fact that new file-systems always got an empty label by default, regardless of the previous content of the partition being formatted.

But the old Partitioner used to do things differently when reformatting a partition. In that case, it actually copied the label of the previous file-system into the new one. Some users told us (via bug reports) they wanted the old behavior back, so now the Partitioner preserves the old label when a partition is reformatted.

Wow, that was a lot!

As usual, this report only covers a small portion of what the YaST Team has been doing. But this time our readers will definitely have enough material to keep themselves entertained for quite some time, especially combining this with the two already mentioned monographic posts about the Bcache support and about the compatibility with SUSE Manager Salt Formulas.

Have a lot of fun reading!

Recapping the Bcache support in the YaST Partitioner

February 27th, 2019 by

Usual readers of the YaST Team development sprint reports on this blog already know we have been working steadily on adding support for the Bcache technology to the YaST Partitioner. We have now reached a point at which we consider that feature ready to be shipped with openSUSE Leap 15.1 and SUSE Linux Enterprise 15 SP1. That sounds like a nice occasion to offer the full picture in a single blog post, so our beloved users don’t need to dig into several blog posts to know what the future releases will bring regarding Bcache in YaST. Needless to say, all this is already available for openSUSE Tumbleweed users, or will be available in the following weeks.

Bcache 101

But, to begin with, what is Bcache? It’s a Linux technology that allows improving the performance of any big but relatively slow storage device (the so-called “backing device” in Bcache terminology) by using a faster and smaller device (the so-called caching device) to speed up read and write operations. The resulting Bcache device then has the size of the backing device and (almost) the effective speed of the caching one.

In other words, you can use one or several solid state drives, which are typically fast but small and expensive, to act as a cache for one or several traditional rotational (cheap and big) hard disks… effectively getting the best of both worlds.

How does it all look in your Linux system? Let’s explain it with some good old ASCII art:

(slow hard disk)   (faster device, SSD)
    /dev/sda            /dev/sdb
      |                     |
[Backing device]    [Caching device]  <-- Actually, this is a set of
      |                     |             caching devices (Caching Set)
      |__________ __________|                   
                 |
              [Bcache]
           /dev/bcache0

Take into account that the same caching device (or the same “caching set”, sticking to Bcache terminology) can be shared by several Bcache devices.
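
For reference, this is roughly how the same setup looks when done by hand with bcache-tools. Consider it a sketch rather than a recipe, with purely illustrative device names:

# Use /dev/sdb (fast SSD) as cache for /dev/sda (big, slow disk)
make-bcache -C /dev/sdb -B /dev/sda

# Once udev has registered the devices, the resulting /dev/bcache0 can be
# formatted and mounted like any other block device
mkfs.xfs /dev/bcache0
mount /dev/bcache0 /mnt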

If you are thinking about using Bcache later, it is also possible to set up all your slow devices as Bcache backing devices without a cache. Then you can add the caching device(s) at a later point in time.

(slow hard disk)   
    /dev/sda            
      |                     
[Backing device]    
      |                     
      |__________ __________|                   
                 |
              [Bcache]
           /dev/bcache0
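
Done manually, that scenario means creating the backing device alone and attaching a caching set later through sysfs. Again, just a sketch with placeholder device names and UUID:

# Create a Bcache device without any cache for now
make-bcache -B /dev/sda

# Later on, prepare the SSD as a caching device and find its cache set UUID
make-bcache -C /dev/sdb
bcache-super-show /dev/sdb | grep cset.uuid

# Attach the caching set to the existing Bcache device
echo <cset-uuid-from-above> > /sys/block/bcache0/bcache/attach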

Last but not least, the Bcache technology allows creating virtual devices on top of an existing caching set without an associated backing device. Such a device is known as a Flash-only Bcache and is only useful in some very specific use cases.

                   (faster device, SSD)
                        /dev/sdb
                            |
                    [Caching device]
                            |
      |__________ __________|                   
                 |
         [Flash-only Bcache]
           /dev/bcache0

You may be thinking: “hmm, all that sounds interesting and daunting at the same time… how can I get started with it in an easy way?“. And surely you are already figuring out the answer. 😉

Bcache in the YaST Partitioner

When running on a 64-bit x86 system, the YaST Partitioner will offer a Bcache entry in its usual left tree. There you can see two tabs. The second one lists the Bcache caching sets available in the system and is purely informative. But the first one is your door to all the power of the Bcache world. That tab allows you to visualize, modify and delete the existing Bcache devices. And, of course, it also enables you to create new Bcache devices on top of any of your not-so-fast existing block devices.

Bcache devices in the Partitioner

All Bcache devices can be formatted, mounted or partitioned with the same level of flexibility as other block devices in the system. See the previous screenshot, in which some devices contain partitions while others are formatted directly.

When creating or editing a Bcache device, you can select which devices to use as backing and as caching, and also choose one of the available cache modes (more on this below). Any available block device (like a disk, a partition or an LVM logical volume) can be used as a backing device or as a caching one. But a screenshot is worth a thousand words.

Screen for creating and editing a Bcache

The backing device is mandatory. Flash-only Bcaches cannot be created and there are no plans to include support for them in the future. But as you can see in the previous screenshot, the caching device can be specified as “without caching”. That allows creating Bcache devices that will get their caching devices in the future, as explained at the beginning of this post.

As mentioned, the third field allows choosing one of the cache modes offered by Bcache. If you are not sure what a particular cache mode means, YaST also provides a quite extensive help text explaining them.

Help about Bcache

Moreover, when modifying a device, the Partitioner will limit risky combinations, preventing data loss and avoiding operations that can lead to unreliable results. For example, it prevents modifying Bcache devices whose caching device is already being used by the system, because that would require a detaching action. That could take a very long time in some situations and interfere with other operations performed by the Partitioner.

Only safe operations allowed

Of course, the operation to delete a Bcache device offers the usual checks and information available in other parts of the YaST Partitioner, as shown in the following screenshot (this time using the look and feel of the SLE installer).

Deleting a Bcache device

Bcache for everyone?

With all the functionality explained above, we could say the YaST Partitioner lowers the entry barrier enough for all the (open)SUSE users to begin enjoying the bells and whistles of the Bcache technology. Unfortunately, that’s not exactly true for all the hardware architectures supported by our beloved distributions.

Bcache is only considered stable and mature enough on x86_64 systems (i.e. the 64-bit x86 architecture). If you don’t know whether your computer fits that description, then it almost certainly does. 😉 We have no evidence of anyone using Bcache successfully on 32-bit systems or on any ARM platform. Moreover, we know for sure the technology is unreliable on the PPC64LE and S390x architectures.

As a result, the YaST Partitioner will only present the “Bcache” section in the left tree when running on an x86_64 system, even in the highly unlikely case of an unsupported system in which a Bcache device is found. If that ever happens, YaST will alert the users about the dangers of using Bcache in such an unsupported scenario and will urge them to use manual procedures to modify the existing setup.

Warning: Bcache not supported!

What’s next?

Obviously, as always happens when a new technology is added to YaST, there is still a lot of room for improvement regarding the Bcache management in the Partitioner. But now it’s our users’ turn to test it and come up with bug reports and ideas for further improvements and use cases. Profit!

Highlights of YaST Development Sprint 69 & 70

January 31st, 2019 by

Almost two months have passed since our last sprint report but, except during the Christmas break, the team has been quite busy working on features and bugfixes for the upcoming (open)SUSE releases.

But a post describing all that we have done would be quite long :), so let’s try to highlight a few of them.

  • YaST got a security audit and, although no real security problems were found, we were asked to introduce some improvements.
  • Now it is possible to run the installer through PXE Boot without any local repository. Pretty specific but cool stuff!
  • We are in the process of revamping SUSE Manager Salt Formulas support in the YaST2 Configuration Management module. Do not be fooled by the name, it is not limited to SUSE Manager.
  • YaST icons are now included in the packages where they are used. We hope it will make things easier for icon designers.
  • The Firewall module got support for creating firewalld custom zones.
  • Performance when reading huge /etc/hosts files has been greatly improved.
  • CD/DVD sources are always disabled after installation.

YaST Security Hardening

Our SUSE security team did a security audit for YaST. The good news is that there were no real security problems that you should be concerned about. Still, we did some hardening to make the code even more secure.

This might have caused some breakage in Factory / Tumbleweed because many places in the code were touched. We apologize for any inconvenience this might have caused, but we are sure you prefer YaST to be more secure.

Most changes were centered around calling external commands, which YaST does a lot. Since YaST is running with root permissions in most cases, we want to make sure that this is as secure as possible. If you find any problems with it, please write bug reports.

What exactly we did and how we did it is summarized here: YaST Security Audit Fixes: Lessons Learned and Reminder

Installing via PXE Boot without any Installation Repository

In data centers and other big-scale enterprise environments, administrators rarely install new software via removable media such as DVDs. Instead, administrators rely on PXE (Preboot eXecution Environment) booting to image servers.

Installing SUSE Linux Enterprise in such environments typically requires two auxiliary servers in the local network:

  • The DHCP/TFTP server providing the minimal system used by PXE to execute the installer.
  • A server making the SLE DVD repository accessible in the local network via FTP, HTTP or any similar protocol.

Very often, the second one is more a requirement imposed by the installer than something really useful. In most cases, the system being installed will be registered in the SUSE Customer Center (or any of its proxy technologies like SMT or RMT) and will get all the software from there. Thus, we decided to save administrators the extra steps of downloading the SLE ISO image and setting up an installation server to serve the content of that ISO, for cases in which that was really not needed.

But the repositories are not only used to get the software being installed in the final system. As often explained in this blog, we have a single installer for all the products and flavors of SUSE and openSUSE, no matter how different the installation process looks for each of them. That generic installer uses the information in the installation repository to get its own configuration. That includes the available products (and their corresponding system roles), the steps and options to present to the user, the desired partitioning setup and many other aspects. Without that information, the installer is basically a musician without a score.

Starting with SLE-15-SP1, it will be possible to use the boot parameter NOREPO=1 to tell the installer not to expect (and, more importantly, not to require) any local repository on the DVD or in the local network. In that case, the installer will be able to proceed up to the registration screen and get the information for the upcoming steps of the installation from the registration server. In the openSUSE case (where registration makes no sense), it will be able to reach the screen that allows adding more repositories.
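
As an illustration, a PXE boot entry using this parameter could look like the snippet below. The label, the paths and the way the entry is added are just placeholder examples; the only relevant bit is NOREPO=1 on the kernel command line:

# Hypothetical entry appended to pxelinux.cfg/default on the TFTP server
cat >> /srv/tftpboot/pxelinux.cfg/default << 'EOF'
label sle15sp1-noinstrepo
  kernel boot/x86_64/loader/linux
  append initrd=boot/x86_64/loader/initrd NOREPO=1
EOF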

Another step (and certainly not an easy one) to improve the installation experience for our users. Data center administrators, enjoy! 🙂

Revamping SUSE Manager Salt Formulas Support

Back in 2017, the YaST Configuration Management module got support to handle SUSE Manager Salt Formulas as part of a Hack Week project. If you do not know what this feature is about, you might be interested in checking the Forms are the Formula for Success presentation or the Hack Week project follow-up post.

Since then, the forms specification has evolved quite a lot and the YaST support was basically outdated. So in November 2018 we started working to bring the missing pieces to the YaST module. Basically, we rewrote the forms support and, although there are still some rough edges, we are pretty close to releasing a new version with up-to-date support for this powerful feature.

Screenshot of how the dhcpd formula looks

Managing Custom Zones Definitions in YaST Firewall

The new YaST UI for configuring firewalld was announced in the report of sprint #63 (four months ago… time flies!) and, since then, we have continued improving it.

firewalld ships with some predefined zones. Although they cover most users’ needs, firewalld additionally allows the user to define custom zones. During the last sprint we added support for managing custom zones, both in the new UI and in AutoYaST.
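
For reference, custom zones can also be defined directly with firewall-cmd, which may help to understand what the new UI and the AutoYaST support are doing for you. A minimal sketch (zone name, service and interface are arbitrary examples):

# Define a new custom zone and assign a service and an interface to it
firewall-cmd --permanent --new-zone=dmz2
firewall-cmd --permanent --zone=dmz2 --add-service=ssh
firewall-cmd --permanent --zone=dmz2 --add-interface=eth1

# Activate the permanent configuration
firewall-cmd --reload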

YaST2 Firewall custom zones definition dialog

During the development process some problems detected in the AutoYaST configuration were addressed too.

Updated YaST Branding and Icon Handling

In the past, the YaST icons were included in the yast2-branding-openSUSE (openSUSE) and yast2-theme-SLE (SUSE Linux Enterprise) packages. The standard YaST icons were only included in these packages; the standard YaST modules did not include any icons.

However, the disadvantage for the icon designer was that it was not clear which icons were really used.
If you wanted to update the icon theme, you could potentially do a lot of useless work because some icons were not used anymore.

Now the icons are included in the respective YaST packages; if a package is dropped, its icons are dropped as well.

The package manager UI includes compiled-in fallback icons. That means that if the branding package is broken or the icon files are accidentally deleted from disk, it will still be usable for emergency recovery.

The branding still works: a vendor can still provide specific icons which will override the included ones. So it is still possible to have a different look in the openSUSE and SLE products.

YaST2 Control Center new branding Screenshot

A big thank you goes to Stasiek Michalski and Noah Davis from the community who did the changes in the YaST code, designed the new icons and did a lot of cleanup!

Improving Performance when Loading Huge /etc/hosts Files

It might happen that you need to maintain a huge /etc/hosts file, especially when dealing with ad blockers. Such a file, with thousands of lines, used to take an incredible amount of time to load into YaST2. On some configurations, loading an /etc/hosts with around 10,000 lines could even freeze the system completely. After some refactoring in the YaST2 Host module, the performance has been significantly improved and loading a file with 10,000 lines now takes approximately 30 seconds on the same configuration where it froze before.
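
If you want to see the difference yourself, such a huge file is easy to fake. This throwaway snippet (purely for testing, remember to back up and restore your real file) appends roughly 10,000 dummy entries:

# Back up the real file and append ~10,000 dummy host entries
cp /etc/hosts /etc/hosts.backup
for i in $(seq 1 10000); do
  printf '10.0.%d.%d host%d.example.com host%d\n' $((i / 256)) $((i % 256)) "$i" "$i"
done >> /etc/hosts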

Disabling CD/DVD Repositories After Installation

If you install your system from a CD/DVD source, it usually happens that this repository is not available for the whole life of the system. In some use cases this was only uncomfortable because of some warnings but, in other cases, it caused serious complications, for instance when trying to do a migration.

In the past, those repositories were already disabled under some circumstances. But, from now on, they will always be disabled in order to avoid unwanted side effects.
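
On an already installed system where a CD/DVD repository is still enabled, the manual equivalent is a single zypper call. The alias below is a placeholder; check the output of zypper lr for the real one:

# List the repositories to find the alias of the CD/DVD source, then disable it
zypper lr -u
zypper mr -d <dvd-repo-alias>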

Closing Thoughts

That’s all for the first report of 2019. In case you are wondering, we plan to stick to publishing a report after each sprint, so expect the next one in about two weeks.

However, we recently had to migrate from the so-called GitHub Services (now deprecated) to GitHub webhooks, so you might get an extra blog post about that very soon.

Stay tuned!

Report from the reproducible builds summit 2018

December 17th, 2018 by

Last week I attended the reproducible builds world summit in Paris.
It was very well organized by Holger, Gunner and their hidden helpers in the background. Very similar to the last 2 summits I attended in Berlin.

Because we were around 50 participants, introductions and announcements were the only things done in the big group. All actual work happened in 5-10 smaller circles.

We had participants from large companies like Google (with bazel), Microsoft and Huawei, but also from many distributions and open source projects. Even MirageOS as a non-Linux OS.

We shared knowledge, refined definitions of terms, evolved concepts like “rebuilders” for verifying builds and allowing users to better trust the software they install, and such.

I learned about the undocumented DB dump (153 MB) and DB schema.

And we had some hacking time, too, so there is now
a Jenkins job that renders the list of unreproducible openSUSE Factory packages.

Also, my maintainer tool now has added support for the Alpine Linux distribution, thanks to help from one of its maintainers, Natanael Copa.
This is meant to help cross-distro collaboration in general, not just reproducible builds.

There is still work to be done to make better use of MITRE CPE to map package names across distributions.

I think one major benefit of the summit was all the networking and talking going on, so that we will have an easier time working with each other over the internet in the future.