
Highlights of YaST development sprint 36

June 16th, 2017 by

We are still digesting all the great content and conversations from openSUSE Conference 2017, but the development machine never stops, so here we are with the report of our post-conference sprint.

Storage reimplementation: expert partitioner

You have been reading for months about the new stack for managing storage devices and the new features and improvements it will bring to the installation. But so far there was no way to view and fine-tune the details of those devices. During this sprint we have implemented a first prototype of the new version of the YaST2 Expert Partitioner, that awesome tool you can invoke with yast2 storage.

To make the transition easier and to be able to submit it to Tumbleweed as soon as possible (hopefully in a couple of months, together with the rest of the new stack) we decided to postpone any UI redesign. So this first incarnation of the new expert partitioner looks and behaves exactly like the one available in current versions of (open)SUSE.

To try it out (on a scratch machine!), add a repository and remove the current storage library, as described in yast-storage-ng: Trying on Running System, and then run zypper install yast2-partitioner. As you may have noticed, we split the partitioner into a separate package, unlike the current version, which was part of the basic yast2-storage.

For now, the new expert partitioner only gives you a read-only view similar to the following screenshots; it cannot modify anything yet.

New expert partitioner - hard disks list

As you can see in your own system or in the screenshots, the following items are already functional:

  • Hard disks and their partitions
  • Volume Groups, Logical Volumes, and Physical Volumes of the Logical Volume Manager (LVM)

The other kinds of devices that you can see in the navigation tree are so far only stubs.

New Expert Partitioner - logical volume overview

You may feel a bit underwhelmed by this, and that’s OK, because most of the effort we spent on this is actually hidden in a set of nice UI classes which we used to replace the legacy procedural UI code. So the new expert partitioner not only relies on the revamped storage stack, but also on a powerful and reusable set of shiny UI components. If you ever need to code a user interface for YaST, the next section is for you.

New Expert Partitioner - list of physical volumes

New CWM Widgets

This section may be a little bit too developer-oriented, so feel free to skip it if you don’t care about the YaST implementation details. If, on the contrary, you want to have a glance at the new YaST widgets, go ahead.

Before diving into the new widgets, let us introduce what CWM is. It stands for Common Widget Manipulation and it is an old procedural YaST module which puts together a widget, its help text and its callbacks. These callbacks are used to initialize, validate and store the content of the widget. This organization allows easier re-use of widgets, which are then put together into a dialog. We also made an object-oriented version of CWM, which uses the old one under the hood but is based on classes. So the contents and callbacks all live in their own class, which is then used in dialogs. It is already used e.g. in the bootloader module.
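
To make that more tangible, here is a minimal sketch of such an object-oriented CWM widget. It follows the hooks described above (label, help, init, validate, store); the concrete widget, its default value and the pop-up text are made up for illustration, so treat it as a sketch rather than as real partitioner code.

require "yast"
require "cwm/widget"

Yast.import "Popup"

# A hypothetical input field asking for a mount point. CWM renders the
# widget, shows its help and calls the init/validate/store hooks for us.
class MountPointWidget < CWM::InputField
  def initialize
    textdomain "example"
  end

  def label
    _("Mount Point")
  end

  def help
    _("<p>Where the device will be mounted.</p>")
  end

  # called when the dialog is opened
  def init
    self.value = "/home"
  end

  # called before leaving the dialog
  def validate
    return true if value.start_with?("/")

    Yast::Popup.Error(_("The mount point must be an absolute path."))
    false
  end

  # called when the dialog is accepted
  def store
    @mount_point = value
  end
end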

As part of the Expert Partitioner rewrite, we created new types of reusable widgets, like Table or Tree, that are now available for use in any YaST module.

We also realized that it would be cool to be able to construct full dialogs out of smaller “bricks”, because the partitioner dialogs usually have rather complex structures in which some parts are shared by several dialogs. For this purpose we added new kinds of widgets – a Page, which represents a part of a dialog that contains other widgets, and a Pager, which allows switching between pages. So far there are two different pagers. The first one is Tabs, which shows a set of tabs and allows switching among them, and the second one is TreePager, which allows switching pages according to the item selected in a tree.
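
As a rough illustration of how these bricks combine, the following sketch defines two pages and puts them into a Tabs pager. The require path, the initial flag and the constructor arguments reflect our reading of the current yast2 sources, so take it as a sketch rather than reference documentation (it also reuses the MountPointWidget from the previous example).

require "cwm/tabs"

# A page is a group of widgets that can be plugged into different pagers.
class OverviewPage < CWM::Page
  def initialize
    self.initial = true # tab selected when the dialog opens
  end

  def label
    "Overview"
  end

  def contents
    VBox(MountPointWidget.new)
  end
end

class OptionsPage < CWM::Page
  def label
    "Options"
  end

  def contents
    VBox(MountPointWidget.new)
  end
end

# The pager is itself just another CWM widget, so it can be placed inside
# a dialog (or inside a TreePager page) like any other widget.
tabs = CWM::Tabs.new(OverviewPage.new, OptionsPage.new)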

As you can see in the screenshots from the Expert Partitioner, there is a tree on the left side which decides which page is shown on the right side. That right side sometimes contains a set of tabs, and the selected tab in turn decides what is displayed.

Building blocks for the win!

Support for high memory allocation added to the YaST Kdump command line

An option to allocate high memory when enabling Kdump was already implemented in the YaST interface but unavailable through the command line. From the next releases (i.e. SLES 12 SP3, Leap 42.3, and Tumbleweed), the user will also be able to use this option on the command line and in scripts. In order to do that, you can just use the command yast2 kdump enable alloc_mem=low,high, where low sets the Kdump low memory and high sets the Kdump high memory.

For current users of the Kdump command line, the old command to enable kdump, yast2 kdump enable alloc_mem=$mem, will still work as before, keeping its compatibility.

Handle optional filesystem packages correctly

During installation, when YaST detects in the system a particular filesystem or technology that requires additional packages for the installer to deal with it, it alerts the user and tries to install those packages. A very visible case is the ntfs-3g and ntfsprogs packages, installed when an MS Windows partition is found in the system.

But what happens if those packages are simply not available for installation? That’s the case with SLE12-SP3, which doesn’t include ntfs-3g. Should the installer block the installation of SLE12-SP3 alongside an existing MS Windows just because of that?

Fortunately we have solved that problem for the upcoming SLE12-SP3… and also created the code infrastructure to avoid similar problems in the future. Now we have a separate list for packages that would be nice to have installed in order to deal with a particular technology but that are not 100% mandatory to the point of blocking the installation process if they are not available. So we don’t bother the user about things that cannot be solved anyway.

Issues solved in YaST Remote command-line

But apart from looking into the future, we keep taking care of the existing YaST modules and their supported scenarios. During this sprint, we also addressed some issues related to YaST Remote when using the command line.

The command yast2 remote list was installing the packages required for YaST Remote and also restarting the display manager. However, as this command is expected to be a read-only operation, it shouldn’t change anything in the system. That problem was solved and now the command just lists the status of the remote options.

Another issue was in the command yast2 remote allow=yes, which was opening a pop-up to alert the user about the changes in the system. Such a pop-up prevented the use of this command in scripts. Therefore, we removed it when executing YaST Remote from the command line and, instead, we now just show a warning message on the console.

Both fixes were submitted as a maintenance update to all the supported versions of SLE and openSUSE and will reach our users as soon as they pass all the extra security checks performed by the respective maintenance teams. Of course, both fixes will also be included in future releases.

Storage reimplementation: simplified actions summary

The Expert Partitioner was not the only thing we did related to the new storage stack during this sprint. We also tried to improve how the information is presented to the user everywhere.

Having a huge amount of information at a glance might be useful in certain cases… as long as that amount can be handled by a human brain! Since we don’t expect all our users to be androids, we decided to improve our storage actions summary. Now it is much easier to understand what is going to happen to the disks after pressing the confirmation button.

They say a picture is worth a thousand words. So let’s compare the ultra-detailed list offered before this sprint…

Summarized actions: before

…with the new digested one.

Summarized actions: after

As you can see, the new summary carries the essential information in a clear and legible way. Delete actions are highlighted in bold and, moreover, the actions related to Btrfs subvolumes are grouped in a collapsible list.

Summarized actions: extended view

Integration of AutoYaST with the new storage has also received our attention during this sprint. Now, the summary dialog in AutoYaST shows the list of storage actions in the new compact way. Currently it is not possible to edit partitions from this AutoYaST dialog, but stay tuned for more information in upcoming sprints.

Summarized actions: AutoYaST

AutoYaST: warn the user when creating smaller partitions

You already know how powerful AutoYaST can be in terms of automating complex installations based on flexible profiles, even trying its best if the profile contains parts that are challenging to implement in the target system.

One of those adjustments that AutoYaST can perform is reducing the size of some of the partitions specified in the provided profile if the target disk is not big enough, to make sure the installation doesn’t get blocked just by some missing space.

The mechanism works very well, but that kind of automatic adjustment can be unexpected and can produce undesired results. That’s why we have added the following warning message.

AutoYaST: alert user about adjusted partitions

Of course, this new warning uses the usual AutoYaST reporting mechanisms, so even if the users are not in front of the screen (something very common when performing an unattended installation) they will be notified about the special circumstance.

Docker, Docker everywhere!

And now, another dose of technical content for those of you who love to peek into the kitchen.

In the report of sprint 30 we already described how we adopted Docker to power our continuous integration process in the master branch of our repositories (the one in which we develop Tumbleweed and upcoming products). As also reported, we adopted the same solution for Libyui in the next sprint. And now it was the turn of the branches of YaST that we use to maintain already released versions of our products. Not a trivial task, taking into account the many repositories YaST is divided into and the many products we provide maintenance for.

If you want to refresh your memory about the whole topic of using Docker for the continuous integration infrastructure, here you can watch the talk Ladislav offered about the topic a few days ago in the openSUSE Conference 2017.

Storage reimplementation: full support for DASD devices

If you don’t have an S/390 mainframe lying around, maybe you are not familiar with the concept of DASD (direct-access storage devices). DASDs are used in mainframes basically as regular disks… just that they are not.

DASDs are special disks in various aspects – they have a different partition table type allowing only three partitions with a restricted set of partition ids, they must be managed by a different set of partitioning tools, they have their own specific alignment logic and requirements…

But thanks to YaST and libstorage, in (open)SUSE you don’t have to care about most of those details. The expert partitioner and the installer allow you to treat DASDs almost as regular disks.

During this sprint we adjusted the new libstorage, i.e. the C++ based library layer of the stack, to be able to deal with DASDs. As usual with new features implemented in the library, the only “screenshot” we have to show is one of the graphs generated by the library. Enjoy.

DASD support: the example graph

More to come… very soon

We want to have a shorter and more agile feedback loop regarding our development efforts. To achieve that, we have decided to shorten our Scrum sprints from the current three weeks to just two. So you will have more news from us in half a month.

But a feedback loop works in both ways, so we also expect to have more news from you. 🙂 See you soon!

The issues with contributing to projects only once

June 4th, 2017 by

I work to improve the openSUSE Tumbleweed (GNU/)Linux distribution. Specifically I make sure that all packages can be built twice on different hosts and still produce identical results, which has multiple benefits. This generates a lot of patches in a single week.

OBS
Sometimes it is enough to adjust the .spec file – that is a small text file usually specific to us. Then it is straightforward:

  1. osc bco
  2. cd $PROJECT/$PACKAGE
  3. optional: spec_add_patch $MY.patch $SOME.spec
  4. edit *.spec
  5. osc build
  6. osc vc
  7. osc ci
  8. osc sr

And OBS will even auto-clean the branch when the submit request is accepted. And it has a ‘tasks’ page to see and track SRs in various stages. For spec_add_patch to work, you need to run once
ln -s /usr/lib/build/spec_add_patch /usr/local/bin/

When you want to contribute patches upstream, so that other distributions benefit from your improvements as well, then you first need to find out where they collaborate. A good starting point is the URL field in the .spec file, but a Google search for ‘contribute $PROJECT’ is often better.

github
Then there are those many projects hosted on github, where it is also pretty low effort, because I already have an account and it even remains signed in. But some repos on github are read-only mirrors.

  1. check pull requests, if some have been merged recently
  2. fork the project
  3. git clone git@github.com:…
  4. cd $REPO
  5. edit $FILES
  6. git commit -a
  7. git push
  8. open pull request
  9. maybe have to sign a CLA for the project
  10. When change is accepted, delete fork to not clutter up repository list too much (on github under settings)

sourceforge
The older brother of github. They integrate various ways of contributing. The easiest one is to open a ticket (Patch or Bug) and attach the .patch you want them to merge, with a good description. While many developers do not have the time and energy to debug every bug you file, applying patches is much easier, so your issue gets fixed with a higher chance.

devel Mailinglist
Some projects collaborate mainly through their development MLs, then I need to

  1. subscribe
  2. confirm the subscription
  3. git format-patch origin/master
  4. git send-email --to $FOO-devel@… --from $MYSUBSCRIBEDEMAIL 000*.patch
  5. wait for replies
  6. if it is a high-volume ML, also add an IMAP folder and an entry to .procmailrc
  7. unsubscribe
  8. confirm

project bugtracker
Like https://bugzilla.gnome.org/ https://bugs.python.org/ https://bugs.ruby-lang.org/ https://bz.apache.org/bugzilla/

  1. create unique email addr
  2. sign up for account
  3. add info to my account list
  4. optional: search for existing bug (90% of the time there is none)
  5. file bug
  6. attach patch

So as you can see there is a wide range of ways. And most of them have some initial effort that you would only have to pay once… But then I only contribute once per project, so I always pay that.

Thus, please make it easy for people to contribute one simple fix.

Manual encryption partition setup for stronger full disk encryption

May 26th, 2017 by

When installing openSUSE or SUSE Linux Enterprise, YaST is able to configure encrypted LVM using LUKS for full disk encryption. The default configuration is aes-xts-plain64 using a 256 bit master key. However, due to how the XTS mode splits the key into two halves, this reduces the effective key size used for AES to 128 bits.

In order to use a 512 bit key for 256 effective AES, one needs to perform manual formatting prior to installation:
cryptsetup luksFormat --key-size 512 /dev/sda1
However the installer suffers from boo#1030299 which prevents it from writing an entry to /etc/crypttab in this instance. This results in a system that is unable to boot after installation.

The work-around is as follows: Boot into the rescue system, open the crypto device and enter the installed system as a chroot:

cryptsetup luksOpen /dev/sda1 crypto
mount /dev/mapper/system-root /mnt
for X in proc dev sys; do mount --bind /$X /mnt/$X; done
chroot /mnt

(This example assumes /dev/sda1 to be the crypto device, and an LVM VG named system with a LV named root, and no separate /boot.)

Then in the chroot, edit /etc/crypttab to have the following line:

crypto /dev/sda1 none none

See man crypttab for additional settings and options. To finalize, regenerate the initrd and reboot:

mkinitrd
exit
reboot

A future rewrite of the YaST storage abstraction layer is planned which should address this issue.

Highlights of YaST development sprint 35

May 25th, 2017 by

openSUSE Conference 2017 is coming! And as we fly there (literally, one third of the YaST team is on a plane right now typing this), we wanted to inform our beloved readers about what we did in the previous three weeks.

So here is our report, brought to you by airmail!

Bugfixes, bugfixes everywhere

Leaving openSUSE Tumbleweed aside, the YaST team is currently working to deliver SLE12-SP3, openSUSE Leap 42.3, SLE15, openSUSE Leap 15, SUSE CaaSP 1.0 and Kubic (more about Kubic later). Three of them are already in beta phase, which means they are being extensively tested by several parties and in many scenarios, hardware platforms and possible configurations. That amount of manual testing always results in several bugs being discovered, no matter how much we try to have automated tests for the most common cases.

Many of the bugs our testers are finding are related to internationalization and localization, mainly texts in the UI that are always displayed in English, despite the system being configured (or being installed) in a different language.

But, of course, other kinds of bugs are also being found. For example, our hardware detection component (hwinfo) was not able to deal with some new machines, making the installation experience anything but pleasant.

As a result, a significant amount of the YaST team’s manpower during this sprint was devoted to squashing those annoying bugs. Which doesn’t mean we didn’t have time for some interesting new features and improvements.

Storage reimplementation: unlock encrypted devices

Once again, our new storage system comes with news. Now it’s able to detect and unlock preexisting encrypted devices during the hard disk probing step, showing a new pop-up dialog to ask for the corresponding device password. After unlocking the devices, all your installed systems will be accessible for upgrade and, moreover, the LVM volumes allocated over encrypted devices will be activated.

The new storage stack is expected to debut in SLE15 (and, thus, openSUSE Leap 15), but the functionality can already be tested, for both the installation and upgrade processes, with the StorageNG test ISOs.

Luks activation in StorageNG

The storage reimplementation & AutoYaST – a love story

But the happiest news coming from the new storage stack during this sprint is its marriage with AutoYaST. The new automatic partitioning proposal (that is, the “Guided Setup”) is now integrated with AutoYaST.

Thanks to the new software architecture, AutoYaST users will be able to override every single partitioning setting from the control file.

<profile xmlns="http://www.suse.com/1.0/yast2ns" xmlns:config="http://www.suse.com/1.0/configns">
  <general>
    <storage>
      <!-- Override settings from control file  -->
      <try_separate_home config:type="boolean">false</try_separate_home>
      <proposal_lvm config:type="boolean">true</proposal_lvm>
    </storage>
  </general>
</profile>

So you can easily switch LVM on or off, use a separate partition for /home, enable/disable snapshots, enable/disable Windows resizing, etc. All that, still relying on the automated storage proposal to iron out the details. Something that is not possible with the current version of AutoYaST without being forced to explicitly define every partition and LVM volume.

But the simplest way to use the new libstorage proposal is to not define any setting at all in the AutoYaST configuration file. In that case, the partitioning proposal code will do the complete job, installing a new system with the default options.

Of course, before integrating the new storage stack into the upcoming SLE15, the AutoYaST support has to go one step further. Apart from using and configuring the proposal, it must be possible to define a completely custom setup including partitions, LVM volumes, software RAID devices and so on through the corresponding <partitioning> section of the AutoYaST profile. So we used this sprint to sketch a plan to make that possible in the following months, analyzing all the scenarios and configurations supported by AutoYaST and looking for the best way to support them using the existing yast2-storage-ng infrastructure. The outcome of that effort is this detailed document and a list of tasks (PBIs in Scrum jargon) for the upcoming sprints. So be prepared for more news in this regard.

Automatic Cleanup of Snapshots created by Rollback

So far the user had to ensure that snapshots created by rollbacks got deleted to avoid filling up the storage. This process has now been automated. During a rollback, Snapper now sets the cleanup algorithm to “number” for the snapshot corresponding to the previous default subvolume and for the backup snapshot of the previous default subvolume. This enhanced behavior will be available in SLE12-SP3 and openSUSE Leap 42.3. For more information take a look at the more detailed post in the Snapper blog.

Helping to bring the CaaSP fun to openSUSE

For several sprints already we have been presenting features targeted at SUSE CaaSP, the Kubernetes-powered solution for managing containers. Many of those features and custom configurations live in a package called yast2-caasp, originally targeted at this great upcoming product built on top of the SLE12-SP2 codebase.

But now the package is also available for Tumbleweed-based systems by request of the Kubic project. Kubic will be the openSUSE alter ego of SUSE CaaSP, that is, a Container as a Service Platform based on openSUSE and Kubernetes. As with any other YaST component, the exact same source code will be shared by the SUSE product and its openSUSE brother.

Improved UX when an invalid registration URL is provided

Humans make mistakes, but when the mistakes are made entering some option on the installation command line, it usually means that a reboot of the machine is needed to fix them.

That was the case for the registration URL (regurl) option. If the provided address was malformed, the installation just stopped. During this sprint we have added an early check of that URL which allows the user to reenter it and continue with the installation. Something that obviously improves the user experience.

Invalid regurl handling in normal installation

In case of an autoinstallation (AutoYaST), the error is reported and the steps to get installer updates and to register the system are skipped.

Invalid regurl handling in autoinstallation

There is still room for more improvements, allowing the user to also modify the URL in other scenarios. For example, for a URL with a valid format but that points to an unreachable server. But in those cases it is not so straightforward to identify the culprit of the problem. It would make no sense to annoy the user with a recurring pop-up to change the registration URL if the root of the issue is not the URL but an incorrect network configuration.

Translations and Interpolations

As mentioned at the beginning of this post, we recently got quite a few bug reports about missing translations. Although some of them were really caused by bugs in the YaST code, others were a consequence of a buggy Ruby rxgettext script which collects the translatable strings from the Ruby source code. The bug is known to the Ruby-GetText developers, but it’s unclear when (or whether) it will be fixed.

The problem is that the tool cannot collect the translatable strings from interpolations. For example it cannot find the “foo” string from this string literal: "#{_("foo")}". As a result, that string is missing in the resulting POT file and cannot be translated by the SUSE or openSUSE localization teams.

As a workaround, we fixed the YaST code to not use translations inside interpolations. We also documented the possible problems when mixing translations and interpolations, and their solution.
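
For illustration, this is the kind of change we are talking about; the class and strings below are made up, but the pattern is the same: keep the _() call out of the interpolation or, even better, use a single translatable format string.

require "yast"

class StatusFormatter
  include Yast::I18n

  def initialize
    textdomain "example" # hypothetical textdomain, for illustration
  end

  # Problematic: rxgettext cannot see _("not configured") because the
  # call happens inside the interpolation.
  def broken(device)
    "#{_("not configured")} (#{device})"
  end

  # Workaround: a single translatable format string, which is also
  # friendlier for translators.
  def fixed(device)
    format(_("not configured (%s)"), device)
  end
end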

And talking about new developer oriented documentation…

Security Tips for YaST Developers

YaST runs with the administrator privileges (root) and therefore we have to be aware of the possible security issues in the code. During this sprint we published a document with a short summary of security tips for YaST developers.

If you are programming a YaST module you should definitely read it, but it might be interesting also for other programmers, as many of the mentioned issues are generic and not tied only to YaST.

The document is available online here.

See you at the conference

That’s all for this sprint report. We have many more things in the oven, but we didn’t manage to finish them during the sprint, so they will have to wait for the next report. Meanwhile we hope to see many of you at the openSUSE Conference 2017. There will be a whole workshop about modern YaST development, a summary of the most relevant news from the last year of YaST development, talks about the new superb yast2-configuration-management module, about our continuous delivery infrastructure and about how we use Docker to deliver YaST… And, of course, lots of other interesting content, like the awesome presentation from Thorsten Kukuk about the brand new openSUSE Kubic we mentioned earlier. And even more important, a lot of fun!

openSUSE Conference 2017

For those of you who cannot attend the conference, see you again in this little corner of the internet in three weeks!

CFA (Config Files API) at a glance

May 17th, 2017 by

In our latest sprint report, we promised you an extra post about the technology we have been using lately to manipulate configuration files. The wait is over! Here is the fine text by Michal Filka explaining everything you need to know to get started with CFA.

Welcome to CFA

As you should know if you follow our blog (we make sure to repeat it once in a while 😉 ), YaST was converted some time ago from a custom programming language called YCP to Ruby. However, this conversion was done on a language basis. Some old design decisions and principles stayed, like the usage of SCR for accessing the underlying system.

SCR was designed together with YaST. It uses the concept of “agents” for accessing configuration files. These agents contain a description of the configuration file using their own format. Moreover, SCR offers location transparency: you can e.g. work with a file in the execution system or in a chrooted environment. However, this piece of code is proprietary and limited by the inconsistent quality level of the agents. In addition, it is written in C++, developed only within SUSE and, sadly, not very well designed. You cannot easily use just the parser or the location transparency functionality. You always have to go through the complete SCR stack when parsing an input. Similarly, when using location transparency (setting a new location), all subsequent SCR calls are influenced by this. For these and some other reasons we decided to replace the proprietary SCR with something else. That’s how we started to develop and use the “Configuration Files API”.

Configuration Files API (CFA) is a library written in Ruby intended for accessing various configuration files. You can download it from rubygems.org. It is also available as a set of RPM packages for openSUSE Leap 42.3 in the build service. It is structured into several layers and creates an internal abstraction of a configuration file. It has been designed and developed in SUSE’s YaST team. However, this time it uses (or can use) third party parsers. CFA provides location transparency for the parser on the bottom layer and a unified API for the application on the top one. Location transparency is achieved by the well known File interface, so you can use any piece of code which implements that interface. Implementing support for a new parser is a bit more complicated. In the worst case you may need to implement Ruby bindings. However, once you have the bindings, implementing the other pieces which are needed to get things working in CFA’s stack is simple.

Let’s go through the layers in detail.

Bottom layer: File access

This layer is responsible for direct access to the configuration files. In the simplest case it accesses local configuration files using Ruby’s File class, but it can be adapted to access remote, chrooted or in-memory files too. The developer simply needs to provide a file handler implementing the corresponding read and write methods. Handlers for files in memory and in a chrooted environment are already provided by the common CFA framework.

Middle layer: Parser

This layer parses the configuration file which was loaded by the underlying layer. It knows the structure of the file and transforms it into an abstract representation. The library typically uses an external tool for parsing, like Augeas. So, if the external tool has specific requirements, they have to be satisfied to get things working. For example, if Augeas is used, you need to provide a proper lens (Augeas jargon for a file format descriptor) to parse the particular configuration file.

Top layer: Configuration file model

The last layer creates a model of the configuration file – a semantically meaningful API for accessing the configuration from an application. It basically creates “an abstraction on top of another abstraction”. It means that it unifies the usage of the various tools that can be used for accessing and parsing the configuration files.
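
To make the three layers a bit more tangible, here is a small sketch using the cfa gem. The model class, the file path and the attribute are illustrative, and the calls follow the gem’s documented API as we understand it (an Augeas lens on the middle layer, a memory-based file handler on the bottom one).

require "cfa/base_model"
require "cfa/augeas_parser"
require "cfa/memory_file"

# Top layer: a model exposing a semantic API for /etc/sysconfig/clock.
# Middle layer: an Augeas parser using the sysconfig lens.
# Bottom layer: any file handler implementing read and write.
class ClockConfig < CFA::BaseModel
  attributes(timezone: "TIMEZONE")

  def initialize(file_handler: nil)
    super(CFA::AugeasParser.new("sysconfig.lns"), "/etc/sysconfig/clock",
      file_handler: file_handler)
  end
end

# Location transparency in action: the content is parsed from memory
# instead of the real file system.
handler = CFA::MemoryFile.new("TIMEZONE=Europe/Prague\n")
config = ClockConfig.new(file_handler: handler)
config.load
puts config.timezone      # => "Europe/Prague"
config.timezone = "UTC"
config.save               # written back through the same handler
puts handler.content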

Limitations

The model described above has some handicaps, mainly affecting developers but sometimes visible to users as well.

Feed the beast

If you are a developer planning to use CFA to manipulate a given configuration file, you must take into account that you will need to provide the parser and satisfy all its requirements. So, you at least have to know which parser is used for each file and what is needed in order to make the parser work with the file of your interest.

This is especially important in case of nonstandard / custom configuration files. It may require some previous work to evaluate the available options.

Beat the beast

There are also limitations coming from the fact that the library and/or the parsers on the second layer provide an abstraction on top of the configuration file. This abstraction transforms a configuration file into a more convenient and meaningful model and establishes a relation between the file and the model.

But the problem is that with most parsers (especially with Augeas) this relation is not bijective. That means some irrelevant pieces of the configuration file are not represented in the model. For example, some spaces can be left out if they are not needed from a syntactic point of view. That can lead to loss of custom padding.

Another example can be comments. You can often see that, if your file uses e.g. “#” as a comment mark, then some parsers can squash lines full of these marks (which some developers use as a kind of delimiter) to just one “#”.

As a more concrete example, some Augeas lenses do not store the initial comment marks in the model. That’s especially common in lenses for files where several different comment marks are allowed. However, some lenses return the comments including their mark at the beginning, so extra handling is needed at an upper layer of CFA or at the application level.

Last but not least, some parsers use the concept of default values when adding a new key without a defined value. This can of course lead to some inconsistencies in the configuration file’s look if not handled by an upper layer.

Practical example

As already mentioned, CFA is being used as a replacement for the old fashioned SCR in YaST. So we can really take a look at the result of that replacement in several parts, like the handling of /etc/hosts in yast2-network. Leaving aside the fact that the code is now much more readable, we also gathered some performance numbers.

The test was run using an example /etc/hosts file with 10,000 entries (believe it or not, the experiment is based on a real life use case reported by a user). The test was done using YaST’s command line interface and measured using the good old time utility. Since the command line interface doesn’t currently support adding hosts entries, only reading operations were tested.

time   SCR         CFA
real   1m15.735s   0m19.079s
user   1m15.076s   0m18.348s
sys    0m0.164s    0m0.244s

As you can see, this part of code is now approximately four times faster than before, so the practical results look very promising. Moreover, the CFA’s code is better designed and much better covered by automated tests.

That’s why the YaST team is investing into both sides of the same coin – CFA development and conversion of YaST’s code from SCR to CFA.

Highlights of YaST development sprint 34

May 3rd, 2017 by

Here we go again! Only one week after our previous report, we already have a new bunch of (hopefully) exciting news. Let’s take a look at the outcome of our 34th Scrum sprint.

Trusted boot support for EFI

In the report of our 19th sprint, we already presented the (at that point in time) new and shiny Trusted Boot support in YaST2 Bootloader. So far, the only supported scenario was legacy boot (i.e. no UEFI) on x86_64 systems. Now, thanks to TPM2, it is also possible to use Trusted Boot with EFI, so we added support for it in our beloved bootloader module.

So now YaST Bootloader looks the same in the non-EFI and EFI variants, no matter which underlying technology is actually used. Of course, YaST is only the tip of the iceberg: booting with Trusted Boot and EFI is possible thanks to all the tools that have recently added support for TPM2. openSUSE developers and packagers rock!

Configuring the NTP service in CaaSP

For a CaaSP cluster to work properly, it’s vitally important that all nodes have their clocks in sync. So, from now on, the installer is able to configure each node in a proper way and the administration node will act as NTP server for the worker nodes.

To achieve that, the user will be asked to specify one or several NTP servers to be used as time source during administration node installation and YaST will take care of the rest (updating the configuration and enabling the service).

If an NTP service is announced through SLP, YaST will propose it automatically.

NTP configuration in YaST

For worker nodes, YaST will configure the system to keep it synchronized with the administration node.

Storage reimplementation: improvements in the guided setup

Through the previous reports, you have been able to follow the evolution of the renovated guided setup for the partitioning proposal. This sprint is no different: we have improved and adjusted that new wizard even further, making it smarter and more usable.

The new version is able to decide which steps to show depending on the current scenario. For example, in systems with only one disk the whole disk selection dialog will be skipped. The steps are also simplified by disabling widgets that are not applicable to the current situation. For example, if there is no previous Linux installation, the question about what to do with the existing Linux partitions will be disabled.

And talking about the actions to perform on preexisting partitions, that has also been improved. In the first guided setup version, these options were only displayed for illustrative purposes, but now they are 100% real and the proposal will honor their values, so the user can easily select what to do with previous Windows or Linux partitions. We even added a third option for other kinds of partitions.

New settings in the storage proposal

Last but not least (regarding the guided setup), the password selection for encryption is now more usable, allowing the user to choose a not so strong password if really desired.

Allowing users to shoot their own feet

Insserv removal

YaST is a complex and large piece of software. It means that from time to time some pieces that used to be great and shiny become obsolete and end up being useless. The cycle of life. 😉

Some time ago, it was decided it was time for insserv to enjoy retirement and it was replaced by a stub. But there were still calls to insserv in YaST and we decided to remove them all. There were several reasons for that decision. Basically, (open)SUSE has used systemd for a couple of years already, so revisiting the places where the YaST code depended on SysV was a must. As a side effect it shortened the list of YaST dependencies and, in the end, it is another small step towards a smaller installation.

So good bye insserv, it was a pleasure to work with you.

Improvements in the ZFCP Controller Configuration for zKVM on S/390

When running the installer on a mainframe in a zKVM virtual machine, it displays a warning about ZFCP controllers not being detected:

ZFCP warning on S/390

However, when running in a zKVM virtual machine the disk is accessible via the virtio driver, not through an emulated ZFCP controller. The warning is pointless and confusing.

The fix was basically just a one-liner which skips the warning when zKVM virtualization is detected, but the YaST module for ZFCP doesn’t usually receive too much maintenance, so we applied our boy scout rule and improved the code a bit.

The improvements include using Rubocop for clean and unified coding style, enabling code coverage to know how much the code is tested (in this case it turned out to be horribly low, just about 4%), removing unused files, etc… You can find the details in the pull request.

Storage reimplementation: improvements all around

As we explained in our report about Hack Week 15, we have been working on an alternative way to offer the power of the new storage library to the Ruby world. The new API is finally ready for prime time and used in all the Ruby code.

We took the opportunity to greatly improve the developer documentation and to revamp the yast2-storage-ng README and status page.

We also did some extra checks and added automated testing to ensure our partitioning proposal fulfills the requirements of an S/390 system using ZFCP, so mainframe users also have a peaceful transition to the new storage stack.

We also greatly improved the ability of the new library to work with alternative name schemes for the devices. Up to now, only plain kernel device names (e.g. /dev/sda1) were fully supported. Now we can do all the operations (especially generating the /etc/fstab file) by device name, by UUID, by filesystem label, by device id and by device path.

More content already in the oven!

As you already know (or at least we repeat it quite often 😉 ), YaST was converted from YCP to Ruby some time ago. However, this conversion was done on a language basis. Some old design decisions and principles stayed, like the usage of SCR for accessing the underlying system.

Some time ago we introduced CFA (Config Files API gem) as a more powerful and flexible Ruby-powered replacement for SCR. Although we have been using it for a while in several YaST modules, we felt the concepts and rationales behind its operation were not that clear.

So we have invested some time improving the documentation and writing a blog post to properly present and explain CFA to the world. We will publish it next week, so stay tuned!

Great news ahead!

The next sprint will be the first in which the whole YaST team will work together integrating the new storage stack in every single part of YaST. So we hope the next report to be full of good news about the expert partitioner, improvements in AutoYaST’s handling of disks and so on.

Of course, that would be only a small fraction of all the stuff we plan to work on. See you in three weeks with more news!

Highlights of YaST development sprint 33

April 26th, 2017 by

It has been a long time since our last status update! The reason is that the end of the previous sprint caught quite a few of the YaST team members on vacation and, when the vacation period was over, we were so anxious to jump into development to make YaST another little bit better that the blog post somehow fell behind.

But it’s time to pay our (reporting) debts. So these are some of the highlights of the 33rd development sprint, which finished on April 11th.

AutoYaST and Salt integration

During this sprint, a new module has been added to the YaST tool box: YaST2 Configuration Management.

This module offers integration between AutoYaST and configuration management systems like Salt. The idea is that AutoYaST will take care of system installation (partitioning, network setup, etc.) and, if desired, the system configuration can be delegated to one of those external tools.

The YaST Configuration Management module was born during an internal SUSE workshop last summer and it got more love during HackWeek 14 and 15.

Now it is officially part of the YaST family and it will be included in the upcoming releases.

The new module: YaST - Configuration Manager

Extend the YaST Installer Workflow by Add-ons

The YaST installer already allows extending the installation workflow with add-on extensions. However, this was only supported for the SUSE tag repositories used on the ISO installation media. That means the online repositories, which normally use the RPM-MD data format, could not use this feature.

This sprint we extended the support also to the other repository formats (basically any repository format supported by libzypp). The original limitation was caused by the fact that the other repository types do not support files other than the RPM packages.

To overcome this limitation we now support packaging the installer extension files into a standard RPM package which can then be provided by any repository type.

The implementation is documented and there is the YaST:extension OBS project with a simple example extension.

The example extension in action

YaST pattern definitions

The software pattern definitions have been split and moved to the respective OBS devel projects. See more details in this opensuse-factory announcement. That means the YaST patterns are now maintained in the YaST:Head project.

The project sources have been imported into the GitHub repository to track the history and use the code review workflow. Also, the usual Jenkins and Travis automation has been set up. To contribute changes to the YaST patterns, use the usual GitHub pull request workflow as for the other YaST packages. Thank you in advance! 😉

CaaSP deserves its own YaST package!

Among other things, our team is working hard to make YaST fulfill the requirements of the upcoming Containers as a Service Platform (CaaSP) product. As part of this effort, we have added some new features, discovered (and fixed) some bugs, improved documentation and so on.

We’ve also added some specific CaaSP code, so during this sprint we’ve decided to create a new dedicated package (yast2-caasp) and move the code there.

Currently it only contains system role handlers and some additional documentation, but most probably we will add some stuff during the upcoming months.

Bootloader improvement

The bootloader module has also been improved to better handle an invalid partitioning proposal in the CaaSP product. Previously, when the partitioning proposal did not contain a valid root directory, the bootloader module crashed.

After the fix it now displays details about the problem so the user can manually fix the configuration.

New YaST-bootloader warning

Automatic screenshots in the AutoYaST integration tests

An important part of maintaining quality in any job is investing in the quality of the tools. So during this sprint we decided to improve the capabilities of the AutoYaST integration tests. To help with debugging, AYTests will now store a screenshot and, if possible, YaST2 logs from the installation/upgrade under the workspace.

The screenshot will be refreshed every 30 seconds so, if a timeout occurs, it will now be easy to find out where the process got stuck.

Creating your own installation media

Recently, a new set of packages was introduced into Tumbleweed to help set up local installation servers. They are named tftpboot-installation-<PRODUCT>-<ARCH> (for example tftpboot-installation-openSUSE-Tumbleweed-x86_64) and are intended to make it easy to set up a PXE boot environment for an installation server.

These packages basically contain the installation environment as found on the SUSE installation media.

And this enables mksusecd to create a network installation image from it.

For example, the package above installs its files into directory /srv/tftpboot/openSUSE-Tumbleweed-x86_64.

So, let’s run

mksusecd \
  --create foo.iso \
  --net=http://download.opensuse.org/tumbleweed/repo/oss/suse \
  /srv/tftpboot/openSUSE-Tumbleweed-x86_64

and we get a network installation ISO image that will install the Tumbleweed distribution using the official openSUSE Tumbleweed repository.

Note that we are using the RPM-MD repository at /tumbleweed/repo/oss/suse instead of /tumbleweed/repo/oss as one might expect. Of course it is also OK to use the /tumbleweed/repo/oss repository but we want to be cool. 😀

You can also put foo.iso on a USB stick

dd if=foo.iso of=/dev/<USB_DEVICE>

and boot from it.

To be extra-cool, try the --fat option

mksusecd \
  --create foo.iso \
  --fat \
  --net=http://download.opensuse.org/tumbleweed/repo/oss/suse \
  /srv/tftpboot/openSUSE-Tumbleweed-x86_64

that will create a FAT partition for the data, which is a bit more convenient when you plan to put the image on a USB stick (and yes, you can still use the image to burn a DVD).

Serial console at boot

The YaST bootloader module can be used to configure the parameters that allow accessing the system’s boot process through a serial console. But that’s a relatively complex topic and it is easy to make mistakes when specifying such parameters. In the past, the message displayed by YaST in case of error was not helpful enough. To improve usability, the new pop-up is nicer and includes a proper example, as can be seen in the following screenshot.

More guided serial console setup in YaST-Bootloader

Storage reimplementation: new proposal guided setup

In the previous sprint we brought in the ability to use encryption for both partition-based and LVM-based proposals. This time we go a step further and present a new guided setup that allows you to do even more. We have worked hand in hand with our UX experts to design a new proposal wizard composed of four steps.

In the first one, a list with the available disks is shown and you can select which ones to use for your fresh installation.

Guided proposal setup - step 1

In the second screen, you can select a specific disk to be used for the root partition and you can also decide what to do with the existing Windows and Linux systems. Currently, this last functionality is only illustrative, but it will become functional in the upcoming sprints.

Guided proposal setup - step 2

The third step is surely more familiar to you. As in the previous sprint, here you can select LVM usage and encryption. And of course, the encryption password will be checked to ensure it is strong enough.

Guided proposal setup - step 3

Finally, in the last step the filesystem type for the root partition can be selected. Moreover, you can decide whether to use a separate home or not. Or you can select another filesystem type for your home.

As a result of the wizard, a new partition schema is automatically created taking into account all the options you have selected. And naturally, this is only the first version. We will let you know about the news in the guided setup in the following sprints.

Warn the user in case of inconsistencies in Online Migration

Before a live Service Pack migration, the system can be manually modified in many ways, producing inconsistencies between the installed system and the extensions/modules registered in the registration server.

In this sprint we have added some steps that will allow the user to fix the inconsistencies. Also, the displayed summary of the available migrations has been improved, showing information about products without any migration available (e.g. third party add-ons).

For the online migration it is important to be able to roll back to the original state (before the migration started) when a problem occurs or the migration is aborted by the user.

Now the user will be able to decide if the registered but not installed products should be deactivated as part of the rollback.

Warning for inconsistent online migration

See you very soon!

As usual, this report only covers a relatively small part of all the work done during the previous sprint and, as usual, you will have to wait until the end of the current sprint to get more exciting news. The bright side is that the sprint will finish in a matter of days, so you will only need to wait one week to read our next report.

Stay tuned and have a lot of fun!

Fun things to do with driver updates

April 25th, 2017 by

Today: And what if I want to remove some files?

It’s easy and obvious to add new files with a driver update (DUD). But what if you need to remove some files? Or, related: can you replace some read-only file by a writable copy?

Let’s for this article assume you want to modify the Xorg configuration. Say,
/usr/share/X11/xorg.conf.d/10-evdev.conf troubles you.

The direct way would be to write an update.pre script that removes the file and include this into a DUD.

update.pre is run right after the DUD has updated the files in the installation system.

For example:

echo \
  rm /usr/share/X11/xorg.conf.d/10-evdev.conf \
  > update.pre
mkdud --create test1.dud --dist tw --name "remove 10-evdev.conf" update.pre

But when we try test1.dud we run into this:

Driver Update: remove 10-evdev.conf
Driver Updates added:
  remove 10-evdev.conf
[...]
rm: cannot remove '/usr/share/X11/xorg.conf.d/10-evdev.conf': Read-only file system

So, we see the catch: much of the installation system resides on a read-only file system! You can’t just go and modify things.

But how does the driver update process manage to add new files to the installation system then? It does so by restructuring the file system using symlinks. In the process all directories that need to be modified are replaced by writable copies.

In other words: if you include the file you want to remove in the DUD – you will be able to remove it. It’s actually sufficient to include the directory the file resides in to make this work.

So, let’s try this:

mkdir -p /tmp/dud/usr/share/X11/xorg.conf.d
echo \
  "rm /usr/share/X11/xorg.conf.d/10-evdev.conf" \
  > update.pre
mkdud --create test2.dud --dist tw --name "remove 10-evdev.conf" update.pre /tmp/dud

Now we don’t get any error applying test2.dud and when we login to the installation system, we see:

console:vm9732:/ # ls -l /usr/share/X11/xorg.conf.d
total 0
console:vm9732:/ # 

Tip

For easy testing a DUD, boot the machine with

startshell=1 sshd=1 password=*** dud=<URL>

startshell=1 will stop the installation workflow after the installation system has been fully prepared, just before YaST is started. sshd=1 will start an SSH daemon and you’ll be able to connect to the machine and look around.

A similar trick can be used to make files writable (watch out for correct shell quoting):

mkdir -p /tmp/dud/usr/share/X11/xorg.conf.d
echo \
  cp --remove-destination '$(readlink -f /usr/share/X11/xorg.conf.d/10-evdev.conf)' \
  /usr/share/X11/xorg.conf.d/10-evdev.conf \
  > update.pre
mkdud --create test3.dud --dist tw --name "make 10-evdev.conf writable" update.pre /tmp/dud

We can verify the result:

console:vm9732:/ # ls -l /usr/share/X11/xorg.conf.d               
total 4
-rw-r--r-- 1 root root 1099 Apr 24 13:06 10-evdev.conf
console:vm9732:/ #

The file is now writable.

Highlights of YaST development sprint 32

March 22nd, 2017 by

To make sure you didn’t miss us too much, in our latest blog post we summarized all the YaST-related projects worked on during Hack Week 15. But after all the fun, it was time for… more fun! So let’s take a look at what the team has delivered in this first sprint after Hack Week 15.

Storage reimplementation: encrypted proposal without LVM

One of the known limitations of the current installer is that it’s only able to automatically propose an encrypted schema if LVM is used. For historical reasons, if you want to encrypt your root and/or home partitions but not use LVM, you would need to use the expert partitioner… and hope for the best from the bootloader proposal.

But the new storage stack is here (well, almost here) to make all the old limitations vanish. With our testing ISO it’s already possible to set encryption with just one click for both partition-based and LVM-based proposals. The best possible partition schema is correctly created and everything is encrypted as the user would expect. We even have continuous tests in our internal openQA instance for it.

The part of the installer managing the bootloader installation is still not adapted, which means the resulting system would need some manual fixing of Grub before being able to boot… but that’s something for an upcoming sprint (likely the very next one).

Improved add-ons listing for SLE12-SP1

The dialog in SLES-12-SP1 for selecting the add-ons after registering the system was originally designed just for a small list of add-ons. Unfortunately (or fortunately, depending on how you look at it), the number of add-ons grew over time and exceeded the original limit of the text mode UI.

The equivalent screen in SLE-12-SP2 is not affected by the problem because it uses a different layout with a scrollable list. But the SP1 dialog looks like this.

Broken add-ons list in SP1

If you look carefully at the screenshot you will see that the Web and Scripting Module is missing in the list and the Back, Next and Abort buttons at the bottom are also not displayed.

The fix decreased the size of the Details widget and allowed displaying more items in each column. Now there is even free space for three more add-ons.

Fixed addons list in SP1

Moreover, the dialog is now dynamic and checks the current size of the screen. If there is enough free space then the list is displayed in one column, so the labels are not truncated and the Details widget size is increased back to the original size.

Add-ons list in SP1 with enough space

Storage reimplementation: Btrfs subvolumes

The management of subvolumes is one of those features that make Btrfs rather unique and that need special handling compared to more traditional file systems. That was indeed one of the several reasons to rewrite libstorage – Btrfs subvolumes never fully fitted the philosophy and data structures of the old (current) libstorage and yast2-storage.

In this sprint we introduced support for subvolumes in libstorage-ng from the ground up, taking into consideration all the specificities, use cases and scenarios found in the past. And, hopefully, in a way that is also prepared for whatever the future brings.

The new functionality is already working and tested and it’s included in the latest versions of libstorage-ng, but it is still not used in the proposal or any other part of yast2-storage. You will have to wait another sprint to see more visible results. At least if “more visible” means screenshots. Meanwhile, if you like images you can always enjoy the graphs generated from the internal structures managed by libstorage-ng.

Internal subvolumes representation in libstorage-ng

Storage reimplementation: system upgrade

The new storage stack has been able to install an openSUSE system for quite some time already. While we keep improving that area, the next challenge was to make the upgrade from a previous openSUSE version also possible using our testing ISO.

That implies scanning the hard disks looking for previous installations, allowing the user to select one, mounting the corresponding partitions or LVM volumes, performing the update of every package and doing some final tasks like updating the bootloader configuration.

Following the iterative approach encouraged by Scrum, we focused on the first three steps, which is something that a user (or openQA, for that matter) can test and verify. So now we are able to detect and list pre-existing systems and start the upgrade process on the selected one. And we have automated tests in openQA to ensure it works across all the combinations of partition-based vs LVM-based layouts and UUID-based vs name-based fstab files.

Add-ons can define new system roles

YaST is pretty customizable when it comes to adapting or modifying the installation workflow. Among other things, add-ons are allowed to adapt the workflow (adding or removing steps), define new proposals, etc. And starting now, they can also define new system roles.

Let’s see an example of adding a new mail server role:

<update>
  <system_roles>
    <insert_system_roles config:type="list">
      <insert_system_role>
        <system_roles config:type="list">
          <system_role>
            <id>mail_role</id>
            <software>
              <default_patterns>base Minimal mail_server</default_patterns>
            </software>
          </system_role>
        </system_roles>
      </insert_system_role>
    </insert_system_roles>
  </system_roles>
</update>

<!-- Don't forget to add the texts -->
<texts>
  <mail_role>
    <label>Mail Server</label>
  </mail_role>
  <mail_role_description>
    <label>• Software needed to set up a mail server
• Not production ready yet!</label>
  </mail_role_description>
</texts>

And now let’s see how it looks:

A role added by an addon

Which leads us to the next section…

The list of roles becomes responsive in text mode

A really nice thing about YaST is that it’s able to run in text mode, so you don’t need a graphical interface to install or configure your system. As a consequence, YaST developers need to keep certain limitations in mind when working on the user interface.

Now that add-ons can add new system roles, we noticed a potential problem in the role selection screen: we will eventually run out of space if more than one system role is added. So we decided to improve how system roles are displayed to make them fit into an 80×25 text mode screen (that is, only 25 lines of text). Let’s see the changes with some examples.

This is how the screen looks by default, with a reasonably small set of roles.

Default system roles list

If the system detects there is not enough space to present all the information in such a convenient way, it removes all the extra spacing so at least the information is all there, even if it looks a little bit packed.

Roles list with no extra space

If even that is not enough, the extra descriptions are omitted, which gives us way more room.

Compact roles list

If the roles don’t fit even without the descriptions, the introductory text is also omitted, which means we can present up to eighteen (yes, 18!) roles on the screen.
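
To give you an idea of the approach (with invented names – this is not the actual yast2-installation code), the logic boils down to building the most comfortable layout first and falling back to more and more compact variants until the content fits into the available lines:

# Illustrative sketch: build the role list as lines of text and drop
# elements (blank separators, descriptions, intro) until it fits.
def roles_text(roles, available_lines, intro = "Please select a system role:")
  layouts = [
    [intro, ""] + roles.flat_map { |r| [r[:label], "  #{r[:description]}", ""] },
    [intro] + roles.flat_map { |r| [r[:label], "  #{r[:description]}"] },
    [intro] + roles.map { |r| r[:label] },
    roles.map { |r| r[:label] }
  ]
  layouts.find { |lines| lines.size <= available_lines } || layouts.last
end

# Sample data, just to show the fallback in action on a 24-line screen.
roles = [
  { label: "Server", description: "Minimal server setup" },
  { label: "Desktop", description: "Graphical desktop environment" }
]
puts roles_text(roles, 24)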

Storage reimplementation: guided setup mock-up

As explained in several previous reports, we are collaborating closely with SUSE UX experts to design the revamped interfaces of the installer’s partitioning proposal and the expert partitioner. We already showed you the document we used as a base to discuss the partitioning proposal, including the conclusions, and the first very simple prototype of the so-called Guided Setup.

During this sprint, that collaborative effort focused on defining exactly how every step of that wizard should work and look. The goal was to get some interface mock-ups to be used as a starting point for the upcoming sprint. More than ever, a picture (well, four of them) is worth a thousand words.

First step of the guided partitioning setup mock-up

Second step of the guided partitioning setup mock-up

Third step of the guided partitioning setup mock-up

Fourth step of the guided partitioning setup mock-up

Prevent the installation of CaaSP if Btrfs snapshots are not possible

CaaSP is a single-purpose system, and having snapshots enabled is essential for it. So there is now a check in place that will simply prevent you from going on with the installation if snapshots are disabled (for example, because the disk is too small).

Blocked CaaSP installation
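In pseudo-code terms (all names here are invented for illustration; this is not the actual check), the rule is simply: no Btrfs root with snapshots, no installation.

# Hypothetical sketch of the condition behind the blocking message.
RootFilesystem = Struct.new(:type, :snapshots) do
  def snapshots_possible?
    type == :btrfs && snapshots
  end
end

root = RootFilesystem.new(:btrfs, false) # e.g. disabled because the disk is too small
puts "Installation blocked: snapshots are not possible" unless root.snapshots_possible?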

Storage reimplementation: better handling of /etc/fstab and /etc/crypttab

For the new storage stack, we refactored the classes to handle /etc/fstab. While this would normally not be anything to write much about, we included intelligent handling for existing comments based on this standalone GitHub project.

This means that existing comment blocks at the start and at the end of the file remain untouched, and comments before any content entry remain attached to that entry; i.e. when that entry is moved around in the file (e.g. because of mount dependencies), the comment moves along with the entry it belongs to. While this is not 100% foolproof, it is much better than the usual strategy of simply discarding such comments when the file is rewritten.
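
In case you are curious about the strategy, here is a simplified Ruby sketch of the attach-comments-to-the-next-entry part (the real implementation lives in libstorage-ng and is written in C++; this is only an illustration):

# Comments collected before an entry are stored together with that entry,
# so they move along with it when the entries are reordered. The real code
# additionally keeps the initial and trailing comment blocks of the file
# in place.
FstabEntry = Struct.new(:comments, :line)

def parse_fstab(lines)
  entries = []
  pending = []
  lines.each do |line|
    if line.strip.empty? || line.strip.start_with?("#")
      pending << line
    else
      entries << FstabEntry.new(pending, line)
      pending = []
    end
  end
  [entries, pending] # "pending" now holds the trailing comment block, if any
end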

Quite some adaptations and bugfixes for CaaSP

As you already know from previous reports and other sources, a considerable part of SUSE’s development firepower is focused on building the upcoming CaaSP. As part of that heavy development process, the YaST team invested a significant part of the sprint adapting YaST for CaaSP and fixing bugs introduced by previous adaptations – a large collection of changes here and there that are hard to summarize, but that bring CaaSP a couple of steps closer to its final goal.

Keep it rolling!

We have already planned our next sprint which will hopefully bring more features to the new storage stack, CaaSP-related improvements, a surprise about AutoYaST and more stuff. And, of course, it will be followed by its corresponding report.

So see you in three weeks. Stay tuned and have a lot of fun!

Fun things to do with driver updates

March 16th, 2017 by

Today: But what if I need a new kernel?

A driver update (DUD) can of course update a single driver. But what if that’s not enough and you need a whole new kernel to run an installation?

There are two parts to solve:

  1. replace the kernel used during installation and
  2. get the new kernel installed

We’ll need two tools for this (both available in Tumbleweed or here: mksusecd and mkdud).

1. Replace the kernel used during installation

For this it’s important to know which kernel packages you’ll actually need. Typically it will be kernel-default and kernel-firmware. But older SUSE distributions (SLE 11 comes to mind) had the kernel packages split into kernel-default and kernel-default-base – you’ll need them both.

To make things confusing, modern SUSE distributions also have kernel-default-base – but it’s an alternative to kernel-default. In this case we don’t need it.

If unsure, check the contents of kernel-default (for example, with rpm -qlp). If it contains the actual kernel (e.g. /boot/vmlinuz), then you don’t need kernel-default-base.

On some architectures modules are also taken from xen-kmp-default. If that’s important for you, you can add this package to the kernel list as well.

In fact you can add any number of kernel packages or kmps you like.

In the past, sometimes a different kernel flavor was used. For example PowerPC had kernel-ppc64 for a while. Simply use the flavor you need.

It’s a good idea to gather all the kernel rpms into a single directory for easier use:

> mkdir k
> cp kernel-default.rpm kernel-firmware.rpm k
> cp kernel-default-base.rpm k    # only if needed
# add any kernel-related rpms you need

Then, take your SUSE installation iso and run

> mksusecd --create new.iso \
  --kernel k/* -- \
  original_dvd1.iso

Note that the --kernel option accepts a variable number of arguments, so you have to add an isolated -- to terminate the argument list properly.

The output could look like this:

> mksusecd --create new.iso \
  --kernel k/* -- \
  SLES-11-SP4-DVD-ppc64-GM-DVD1.iso
kernel version: 3.0.101-63-ppc64 --> 3.0.101-94-ppc64
CHRP bootable (ppc64)
building: 100%
calculating sha1...

The command above takes the list of required modules from the old installation iso. So if that list was missing some driver you need, or the new kernel comes with some additional driver, the module will not be added to the new iso automatically.

But there’s the --modules option. It adds the listed modules together with any modules they implicitly require via module dependencies.

For example, let’s add the airport wifi-module to our PowerPC iso:

> mksusecd --create new.iso \
  --kernel k/* \
  --modules airport -- \
  SLES-11-SP4-DVD-ppc64-GM-DVD1.iso
kernel version: 3.0.101-63-ppc64 --> 3.0.101-94-ppc64
kernel modules added:
  airport, cfg80211, orinoco
CHRP bootable (ppc64)
building: 100%
calculating sha1...

As you can see, it automatically adds orinoco and cfg80211 as well.

2. Get the new kernel installed

This is relatively simple. A driver update can do this:

> mkdud --create foo.dud \
  --dist sle11 \
  --install repo \
  k/*

This creates a driver update for SLE 11 (which also applies to SP4) and the kernel rpms are installed via an auto-generated add-on repo (--install repo).

Now we have the driver update that installs our kernel packages. But how do we use it?

We integrate it into our iso above!

> mksusecd --create new.iso \
  --initrd foo.dud \
  --kernel k/* -- \
  SLES-11-SP4-DVD-ppc64-GM-DVD1.iso

mksusecd has an --initrd option that directly accepts driver updates and integrates them into the iso.

3. Can I have a choice?

Maybe you just want to test this new kernel or sometimes need the old one and sometimes the new one. Can you make an installation iso that lets you choose the kernel?

Oh yes! 🙂

> mksusecd --create new.iso \
  --add-entry 3.0.101-94 \
  --initrd foo.dud \
  --kernel k/* -- \
  SLES-11-SP4-DVD-ppc64-GM-DVD1.iso

This does not replace the old kernel but adds a new boot entry Installation - 3.0.101-94.

So you can install with either the old or the new kernel.