
Archive for the ‘Base System’ Category

Highlights of YaST Development Sprint 80

July 19th, 2019

After our (really long) sprint report, which featured information from 3 different sprints together, we are back to our regular schedule of publishing a blog post after each sprint.

In a nutshell, these are the highlights from the previous one:

  • A new version of yast2-network will be submitted to Tumbleweed shortly, including quite some internal changes as part of the refactoring effort.
  • The support for offline installation of SUSE Linux Enterprise products has been improved to handle module and extension dependencies automatically, among other goodies.
  • The partitioning proposal has been adapted to support SUSE Manager special needs.
  • The guided partitioning now ignores the adjust_by_ram parameter on IBM z Systems, where it is basically useless.
  • Some (open)SUSE 15 features have been backported to the 5th Service Pack of SUSE Linux Enterprise 12.

And last but not least, we have some words to say about the feedback we get from you (that we really appreciate!) and the future of YaST.

Shipping Another Round of Network Refactored Code

One of the problems we wanted to avoid while refactoring yast2-network is diverging too much from the version shipped in Tumbleweed. As we mentioned in our last report, we have done quite some work, especially when it comes to the internals of the user interface, and codebases are starting to look quite different.

For that reason, we decided to merge the new code into the master branch so it can be included in Tumbleweed shortly. We are talking about a pull request which contains more than 340 commits, adds 9000 lines, removes 5000 and modifies 197 files. So, what could possibly go wrong? To be honest we did quite some testing but there is a chance that you can find some issues. In that case, please, open a bug report and we will happily fix it.

Regarding the technical details, we put most of our effort into drawing a clear line between the user interface and the business logic. In parallel, we are still working on the new data model which enables us to read/write the network configuration using different backends, although at this time we are only supporting sysconfig. We have made some progress during this sprint (you can check the pull request if you are interested), but we will not merge this code into our master branch yet. You can find more details in the updated documentation.

After reading this status update about the yast2-network refactoring, you might be wondering about our plans. Basically, we would like to finish the support for reading and writing the network interfaces configuration and start using it throughout the code, killing some rather old and buggy modules like LanItems.

Offline Media Support

For SUSE Linux Enterprise 15 products, there are two installation media:

  • The installer media, which contains only basic packages for a minimal system.
  • The so-called Packages DVD, which contains several modules and extensions, like the Development Tools Module or the Server Application Module.

The main use case of the Packages DVD is allowing systems to be installed without an Internet connection. However, YaST support for such a scenario is pretty simplistic. For instance, dependencies between modules are not evaluated and the user has to select them manually. This approach is error-prone and user-unfriendly.

Fortunately, these shortcomings will be addressed in SLE 15 SP2. YaST is now able to solve module and extension dependencies, so the user does not have to worry about them anymore. For instance, if you select the Web and Scripting Module, the Basesystem Module and Server Application Module will be automatically selected.

At first sight, solving dependencies across the 25 repositories included in the Packages DVD might sound time-consuming, but the current implementation takes advantage of libsolv to get it down to about 2 seconds on our testing machine.

Improved Extensions and Modules Selection

Additionally, other improvements have been included, like displaying additional information about each module/extension or filtering out base products from the list.

SUSE Manager and the Partitioning Guided Setup

As you know, YaST is a very flexible Linux installer. The creators of each (open)SUSE product or role can define the particular sequence of steps presented to the user and configure many of the options on each of those steps. But that was still not enough for the maintainers of SUSE Manager, the purpose-specific distribution to manage software-defined infrastructures.

SUSE Manager follows a pretty specific approach regarding the management of the storage devices. So we had to add some extra configuration options to our partitioning guided proposal to fulfill their needs. We felt the topic deserved its own separate blog post. So follow the link and enjoy the love story of SUSE Manager and the Partitioning Guided Setup.

Taking care of IBM z Systems

For many reasons, the IBM z Systems architecture (s390) is one of those special cases where YaST flexibility is crucial. One of the many options that YaST offers to product and role creators is called adjust_by_ram, which instructs YaST to make sure that a given volume size is, at least, as big as the RAM of the system. Obviously, this option is especially useful for the swap volume when we want to be able to suspend the system to disk.

However, on the 64-bit s390 architecture, resuming from a swap device is not supported, which renders this option useless. For that reason, YaST will take this situation into account, and it will not enlarge the swap volume by default for s390 machines, even if adjust_by_ram was set in the control file. Of course, the user will be able to enlarge the volume by activating the corresponding checkbox in the Guided Setup options.
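For reference, this is roughly what such a volume definition looks like in the partitioning section of a product control file. The sizes below are made up for this example; only mount_point and adjust_by_ram are the keys discussed here:

<volume>
  <mount_point>swap</mount_point>
  <desired_size config:type="disksize">2 GiB</desired_size>
  <max_size config:type="disksize">8 GiB</max_size>
  <adjust_by_ram config:type="boolean">true</adjust_by_ram>
</volume>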

Guided Partitioning Filesystem Options

Backporting Stuff to SLE 12 SP5

Although most of the development effort is focused on SLE 15 SP2 and openSUSE Leap 15.2, we have not forgotten about SLE 12. SUSE is working on the 5th Service Pack of that release, and the YaST team is contributing its share.

During this sprint, SLE 12 got two interesting features that were backported from SLE 15. The first one is the ability to specify kernel mitigation settings, as you can already do in Tumbleweed. And the second one is support for PMEM devices, which you can see in the screenshot below.

YaST2 Partitioner PMEM Support

Recently, there was an interesting discussion about the openSUSE installer on Reddit, continued as a thread on our yast-devel mailing list.

This is just a quick heads-up to confirm that we hear you. Some of the issues discussed there were already known to be sore points, some others may need more clarification on why and how things are as they are.

We are taking your feedback and criticism seriously. But we also have to consider all the requirements, and we try to avoid making unrealistic promises. So please bear with us until we have had some further discussions about what we can do and in what time frame.

Conclusions

In addition to what we have described in this report, we have been working on fixing bugs and making other small improvements here and there. And sprint 81 will not be different: part of the team will keep working on yast2-network while the rest squashes as many bugs as possible.

experimental openSUSE mirror via IPFS

April 3rd, 2019

The InterPlanetary File System (IPFS) can be used to provide files in a more efficient and distributed way than HTTP.

Our filesystem repo already has the go-ipfs client.

You use it with
ipfs daemon --init

And then you can add my Tumbleweed mirror with
zypper ar http://127.0.0.1:8080/ipns/opensuse.zq1.de./tumbleweed/repo/oss/ ipfs-oss

You can also browse the content online at
http://opensuse.zq1.de./tumbleweed/repo/oss/ . During my testing I found that the results are sometimes inappropriately cached on the Cloudflare CDN, so if you used it under this URL without the ipfs client, this might throw signature errors in zypper.

On the server side, the mirror is updated using the syncopensuse script from
https://github.com/bmwiedemann/opensusearchive and consistency of the repo is verified with checkrepo

When a complete repo was synced, dynaname updates a DNS entry to point to the new head:

> host -t txt _dnslink.opensuse.zq1.de.
_dnslink.opensuse.zq1.de is an alias for tumbleweedipfs.d.zq1.de.
tumbleweedipfs.d.zq1.de descriptive text "Last update: 2019-04-03 12:23:43 UTC"
tumbleweedipfs.d.zq1.de descriptive text "dnslink=/ipfs/QmSXEVuU5z23rDxMyFYDhSAUaGRUPswuSXD3aVsBEzucjE"
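If you run the ipfs client locally, it can follow that DNSLink record itself. As far as I remember the go-ipfs CLI, resolving the name looks roughly like this:

> ipfs resolve /ipns/opensuse.zq1.de
/ipfs/QmSXEVuU5z23rDxMyFYDhSAUaGRUPswuSXD3aVsBEzucjE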

If you have spare bandwidth and 300 GB of disk on some public server, you could also host a mirror of today’s version, simply by doing ipfs pin add QmSXEVuU5z23rDxMyFYDhSAUaGRUPswuSXD3aVsBEzucjE

This is a permalink: http://127.0.0.1:8080/ipfs/QmSXEVuU5z23rDxMyFYDhSAUaGRUPswuSXD3aVsBEzucjE also browsable via any public IPFS gateway. This means it will always remain on the 20190401 version of Tumbleweed and no changes in content are possible – similar to how a git commit ID always refers to the same data.

So why did I create this IPFS mirror? That is related to my work on reproducible builds for openSUSE. There it regularly happened that published Tumbleweed binaries were built with libraries, compilers and toolchains that were no longer available in current Tumbleweed. This prevented me from verifying that the published binaries were indeed built correctly without manipulation on the OBS build workers.

Now, with this archive of rpms easily available, it was possible to verify many more Tumbleweed packages than before. And most importantly, it remains possible to independently verify them even after Tumbleweed moves on to newer versions. This data is going to stay available as long as anyone pins it on a reachable server. I’m going to pin it as long as it remains relevant to me, so probably until a bit after the next full Tumbleweed rebuild – maybe 6 to 12 months.

Thus, it is now even harder to sneak binary backdoors into our package build process.

Highlights of YaST Development Sprint 69 & 70

January 31st, 2019

Almost two months have passed since our last sprint report but, except during the Christmas break, the team has been quite busy working on some features and bugfixes for the upcoming (open)SUSE releases.

But a post describing all that we have done would be quite long :), so let’s try to highlight a few of them.

  • YaST got a security audit and, although no real security problems were found, we were asked to introduce some improvements.
  • Now it is possible to run the installer through PXE Boot without any local repository. Pretty specific but cool stuff!
  • We are in the process of revamping SUSE Manager Salt Formulas support in the YaST2 Configuration Management module. Do not be fooled by the name, it is not limited to SUSE Manager.
  • YaST icons are now included in the packages where they are used. We hope this will make things easier for icon designers.
  • The Firewall module got support for creating firewalld custom zones.
  • Performance when reading huge /etc/hosts files has been greatly improved.
  • CD/DVD sources are always disabled after installation.

YaST Security Hardening

Our SUSE security team did a security audit for YaST. The good news is that there were no real security problems that you should be concerned about. Still, we did some hardening to make the code even more secure.

This might have caused some breakage in Factory / Tumbleweed because many places in the code were touched. We apologize for any inconvenience this might have caused, but we are sure you prefer YaST to be more secure.

Most changes were centered around calling external commands, which YaST does a lot. Since YaST is running with root permissions in most cases, we want to make sure that this is as secure as possible. If you find any problems with it, please write bug reports.

What exactly we did and how we did it is summarized here: YaST Security Audit Fixes: Lessons Learned and Reminder

Installing via PXE Boot without any Installation Repository

In data centers and other big-scale enterprise environments, administrators rarely install new software via removable media such as DVDs. Instead, administrators rely on PXE (Preboot eXecution Environment) booting to image servers.

Installing SUSE Linux Enterprise in such environments typically requires two auxiliary servers in the local network:

  • The DHCP/TFTP server providing the minimal system used by PXE to execute the installer.
  • A server making the SLE DVD repository accessible in the local network via FTP, HTTP or any similar protocol.

Very often, the second one is more a requisite imposed by the installer than something really useful. In most cases, the system being installed will be registered in the SUSE Customer Center (or any of its proxy technologies like SMT or RMT) and will get all the software from there. Thus, we decided to save administrators the extra steps of downloading the SLE ISO image and setting up an install server to serve the content of that ISO in cases where that is not really needed.

But the repositories are not only used to get the software being installed in the final system. As often explained in this blog, we have a single installer for all the products and flavors of SUSE and openSUSE, no matter how different the installation process looks for each of them. That generic installer uses the information in the installation repository to get its own configuration. That includes the available products (and their corresponding system roles), the steps and options to present to the user, the desired partitioning setup and many other aspects. Without that information, the installer is basically a musician without a score.

Starting with SLE-15-SP1, it will be possible to use the boot parameter NOREPO=1 to tell the installer not to expect (and, more importantly, not to require) any local repository on the DVD or in the local network. In that case, the installer will be able to proceed up to the registration screen and get the information for the upcoming steps of the installation from the registration server. In the openSUSE case (where registration makes no sense), it will be able to reach the screen that allows adding more repositories.
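As an illustration, a PXE boot entry for such a repository-less installation could look roughly like the sketch below. The paths and the registration URL are just placeholders for this example; regurl is the linuxrc option that points the installer to an SMT/RMT registration server.

label sle15sp1-norepo
  kernel boot/x86_64/loader/linux
  append initrd=boot/x86_64/loader/initrd NOREPO=1 regurl=https://rmt.example.com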

Another step (and certainly not an easy one) to improve the installation experience for our users. Data center administrators, enjoy! 🙂

Revamping SUSE Manager Salt Formulas Support

Back in 2017, the YaST Configuration Management module got support to handle SUSE Manager Salt Formulas as part of a Hack Week project. If you do not know what this feature is about, you might be interested in checking the Forms are the Formula for Success presentation or the Hack Week project follow-up post.

Since then, the forms specification has evolved quite a lot and YaST support was basically outdated. So in November 2018 we started working to bring the missing pieces to the YaST module. Basically, we rewrote the forms support and, although there are still rough edges, we are pretty close to releasing a new version with up-to-date support for this powerful feature.

Screenshot of what the dhcpd formula looks like

Managing Custom Zones Definitions in YaST Firewall

The new YaST UI for configuring firewalld was announced in the report of the sprint #63 (four months ago… time flies!) and, since then, we have continued improving it.

firewalld ships with some predefined zones. Although they cover most users’ needs, firewalld also allows the user to define custom zones. During the last sprint we added support for managing custom zones both in the new UI and in AutoYaST.
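To give an idea of what a custom zone is, this is roughly the equivalent operation using firewalld's own command line tool; the zone name and the service are made up for this example:

firewall-cmd --permanent --new-zone=dmz-custom
firewall-cmd --permanent --zone=dmz-custom --add-service=ssh
firewall-cmd --reload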

YaST2 Firewall custom zones definition dialog

During the development process some problems detected in the AutoYaST configuration were addressed too.

Updated YaST Branding and Icon Handling

In the past, the YaST icons were included in the yast2-branding-openSUSE (openSUSE) and yast2-theme-SLE (SUSE Linux Enterprise) packages. All standard YaST icons lived in these packages; the standard YaST modules did not include any icons themselves.

However, the disadvantage for the icon designers was that it was not clear which icons were really used. If you wanted to update the icon theme, you could potentially do a lot of useless work because some icons were not used anymore.

Now the icons are included in the respective YaST package; if the package is dropped, the icons are dropped as well.

The package manager UI includes compiled-in fallback icons. That means that if the branding package is broken or the icon files are accidentally deleted from disk, it will still be usable for emergency recovery.

The branding still works: the vendor can still provide specific icons which will override the included ones. So it is still possible to have a different look in the openSUSE and SLE products.

YaST2 Control Center new branding Screenshot

A big thank you goes to Stasiek Michalski and Noah Davis from the community who did the changes in the YaST code, designed the new icons and did a lot of cleanup!

Improving Performance when Loading Huge /etc/hosts Files

It might happen that you need to maintain a huge /etc/hosts file, especially when dealing with ad blockers. Such a file with thousands of lines took an incredible amount of time to get loaded into YaST2. On some configurations it could even happen that loading an /etc/hosts with around 10.000 lines froze the system completely. After some refactoring in the YaST2 Host module, the performance has been significantly improved and loading a file with 10.000 lines now takes approximately 30s on the same configuration where it crashed before.

Disabling CD/DVD Repositories After Installation

If you install your system from a CD/DVD source, it usually happens that this repository will not be available for the whole life of the system. In some use cases this is only uncomfortable because of some warnings but, in other cases, it causes serious complications, for instance, when trying to do a migration.

In the past, under some circumstances, those repositories were already disabled. But, from now on, they will always be disabled in order to avoid unwanted side effects.
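On an already installed system where the DVD repository was left enabled, the manual equivalent is simply to disable it with zypper; the repository alias below is just an example:

zypper lr -u                  # list repositories with their URIs to spot the DVD one
zypper mr -d sle-15-dvd       # disable it by alias (or by number)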

Closing Thoughts

That’s all for the first report of 2019. In case you are wondering, the plan is to stick to publishing a report after each sprint, so expect the next one in about two weeks.

However, we recently had to migrate from the so-called GitHub Services (now deprecated) to GitHub web hooks, so you might get an extra blog post about that very soon.

Stay tuned!

Report from the reproducible builds summit 2018

December 17th, 2018

Last week I attended the reproducible builds world summit in Paris.
It was very well organized by Holger, Gunner and their hidden helpers in the background. Very similar to the last 2 summits I attended in Berlin.

Because we were around 50 participants, introductions and announcements were the only things done in the big group. All actual work happened in 5-10 smaller circles.

We had participants from large companies like Google (with bazel), Microsoft and Huawei, but also from many distributions and open source projects. Even MirageOS as a non-Linux OS.

We did knowledge sharing, refined definitions of terms, evolved concepts like “rebuilders” for verifying builds and allowing users to better trust the software they install, and such.

I learned about the undocumented DB dump (153 MB) and the DB schema.

And we had some hacking time, too, so there is now a Jenkins job that renders the list of unreproducible openSUSE Factory packages.

Also, my maintainer tool has now gained support for the Alpine Linux distribution, thanks to help from one of its maintainers, Natanael Copa. This is meant to help all cross-distro collaboration, not just reproducible builds.

There is still work to be done to make better use of Mitre CPE to map package names across distributions.

I think one major benefit of the summit was all the networking and talking going on, so that we will have an easier time working with each other over the internet in the future.

Highlights of YaST Development Sprint 66

November 7th, 2018

The YaST team is working hard in order to extend the installer, improve the new storage layer and get rid of some bugs. So after this sprint, there is quite some unfinished work that will be ready within two weeks.

However, we have some stuff that we would like you to check out:

  • Snapper takes the free space into account when cleaning up snapshots.
  • The partitioning proposal tries to use just a single disk first.
  • The descriptions of the actions related to BCache and MD-RAID devices have been greatly improved.
  • YaST is now able to handle repository variables properly.
  • The log viewer displays a helpful message when no logs are found.
  • And last but not least, yast2-sshd got a new maintainer outside of the YaST team. Let’s celebrate!

Extended Snapshots Clean-up Mechanisms in Snapper

So far snapper would delete snapshots if the overall space used for them was above a given limit. Now, snapper is able to take the free space into account too, so it will delete snapshots when the free space of the filesystem drops below a given threshold.

Of course, the threshold can be adjusted by the user through the snapper configuration files.
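For instance, with a recent snapper release the free space threshold for the default "root" configuration can be set like this (0.2, meaning 20% free space, is just an example value):

snapper -c root set-config FREE_LIMIT=0.2

The same key can also be edited directly in /etc/snapper/configs/root, next to the SPACE_LIMIT setting that controls the older behaviour based on the overall space used by snapshots.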

Better Actions Descriptions in Storage-ng

When describing which actions will be performed on the storage devices, we already collapsed related actions into one to make the list more readable. Instead of:

- Create  partition /dev/sda1 (40.00 GiB)
- Set ID of partition /dev/sda1 to "Linux" (0x83)
- Create ext4 on /dev/sda1
- Add mount point /home for /dev/sda1
- Add entry for /dev/sda1 to /etc/fstab

we report:

- Create partition /dev/sda1 (40.00 GiB) with ext4 for /home

However, actions related to BCache and MD-RAID devices were not taken into account, which produced quite long (and confusing) descriptions. Fortunately, these cases are now properly handled and the description is now quite informative and concise:

Create encrypted RAID1 /dev/md0 (511.87 GiB) for /secret with xfs
from /dev/sda (512.00 GiB), /dev/sdb (512.00 GiB)

Properly Handling Repository Variables

libzypp supports variable substitution in the name and the URLs of repositories and services. So a .repo file might contain something like this (notice the $releasever variable):

[repo-oss]
name=openSUSE-Leap-$releasever
baseurl=http://download.opensuse.org/distribution/leap/$releasever/repo/oss/

libzypp will take care of injecting the correct value but the user could override those values too. So in the example above, upgrading to Leap 15.1 might be as easy as:

zypper --releasever 15.1 dup

However, YaST2 had some problems in these situations that, hopefully, have been fixed during this sprint. Now openSUSE release managers can adjust the list of online repositories in order to take advantage of such a feature. If you want to know more about variable substitution, please check libzypp documentation.
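Besides the command line override shown above, libzypp also allows making such an override persistent by dropping a file named after the variable into /etc/zypp/vars.d/. A rough sketch; check the libzypp documentation linked above for the authoritative details:

echo 15.1 > /etc/zypp/vars.d/releasever
zypper ref
zypper dup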

Partitioning Proposal Uses a Single Disk

Until now, the partitioning proposal that is calculated during the installation used all available disks by default. However, according to the feedback that we have received from our users, most people simply expect the system to be installed on just one disk. So, from now on, this initial proposal will consider each candidate disk in isolation before falling back to a multi-disk approach.

A picture is worth a thousand words, so just compare the images below to see the difference on a system which has three hard disks.

Initial Multi-Disk Partitioning Proposal

Initial Single Disk Partitioning Proposal

Improve Log Viewer Usability

YaST has featured a log viewer for a long time which allows the user to inspect files under /var/log like messages, boot.log or even YaST logs (YaST2/y2log). However, we are already in the Systemd times and most of your system services will log relevant information to the Systemd journal.

For that reason, YaST2 also offers a really nice journal browser (yast2-journal) which includes interesting filtering capabilities.

The problem is that, as one of our users stated in a rather old bug report, having two different tools to check logs can be confusing. Of course, they are getting information from different places so we decided to keep both of them. However, now the old log viewer will show a hint when no information is found in those old-style logs.

Use yast2-journal instead

yast2-sshd Has a New Maintainer

We would like to finish this report announcing that yast2-sshd has a new maintainer outside of the YaST team. This module was dropped back in 2013 and it was shipped in openSUSE 12.3 for the last time.

However, YaST is open source and Caleb Woodbine has built fresh RPM packages after fixing a problem he found in the firewalld integration. So if you are interested in such a module, check out Caleb’s work.

Thanks a lot, Caleb!

Conclusions

As we mentioned at the beginning of this post, there is quite some work in progress but, sadly, you will need to wait for another two weeks to get more details 🙂

Stay tuned!

Fun things to do with driver updates

April 25th, 2017

Today: And what if I want to remove some files?

It’s easy and obvious to add new files with a driver update (DUD). But what if you need to remove some files? Or, related: can you replace some read-only file by a writable copy?

Let’s for this article assume you want to modify the Xorg configuration. Say,
/usr/share/X11/xorg.conf.d/10-evdev.conf troubles you.

The direct way would be to write an update.pre script that removes the file and to include this in a DUD.

update.pre is run right after the DUD has updated the files in the installation system.

For example:

echo \
  rm /usr/share/X11/xorg.conf.d/10-evdev.conf \
  > update.pre
mkdud --create test1.dud --dist tw --name "remove 10-evdev.conf" update.pre

But when we try test1.dud we run into this:

Driver Update: remove 10-evdev.conf
Driver Updates added:
  remove 10-evdev.conf
[...]
rm: cannot remove '/usr/share/X11/xorg.conf.d/10-evdev.conf': Read-only file system

So, we see the catch: much of the installation system resides on a read-only file system! You can’t just go and modify things.

But how does the driver update process manage to add new files to the installation system then? It does so by restructuring the file system using symlinks. In the process all directories that need to be modified are replaced by writable copies.

In other words: if you include the file you want to remove in the DUD – you will be able to remove it. It’s actually sufficient to include the directory the file resides in to make this work.

So, let’s try this:

mkdir -p /tmp/dud/usr/share/X11/xorg.conf.d
echo \
  "rm /usr/share/X11/xorg.conf.d/10-evdev.conf" \
  > update.pre
mkdud --create test2.dud --dist tw --name "remove 10-evdev.conf" update.pre /tmp/dud

Now we don’t get any error applying test2.dud and when we login to the installation system, we see:

console:vm9732:/ # ls -l /usr/share/X11/xorg.conf.d
total 0
console:vm9732:/ # 

Tip

For easy testing a DUD, boot the machine with

startshell=1 sshd=1 password=*** dud=<URL>

startshell=1 will stop the installation workflow after the installation system has been fully prepared, just before YaST is started. sshd=1 will start an SSH daemon and you’ll be able to connect to the machine and look around.

A similar trick can be used to make files writable (watch out for correct shell quoting):

mkdir -p /tmp/dud/usr/share/X11/xorg.conf.d
echo \
  cp --remove-destination '$(readlink -f /usr/share/X11/xorg.conf.d/10-evdev.conf)' \
  /usr/share/X11/xorg.conf.d/10-evdev.conf \
  > update.pre
mkdud --create test3.dud --dist tw --name "make 10-evdev.conf writable" update.pre /tmp/dud

We can verify the result:

console:vm9732:/ # ls -l /usr/share/X11/xorg.conf.d               
total 4
-rw-r--r-- 1 root root 1099 Apr 24 13:06 10-evdev.conf
console:vm9732:/ #

The file is now writable.

Fun things to do with driver updates

March 16th, 2017

Today: But what if I need a new kernel?

A driver update (DUD) can of course update a single driver. But what if that’s not enough and you need a whole new kernel to run the installation?

There are two parts to solve:

  1. replace the kernel used during installation and
  2. get the new kernel installed

We’ll need two tools for this (both available in Tumbleweed or here: mksusecd and mkdud).

1. Replace the kernel used during installation

For this it’s important to know which kernel packages you’ll actually need. Typically it will be kernel-default and kernel-firmware. But older SUSE distributions (SLE 11 comes to mind) had the kernel packages split into kernel-default and kernel-default-base – you’ll need them both.

To make things confusing, modern SUSE distributions also have kernel-default-base – but it’s an alternative to kernel-default. In this case we don’t need it.

If unsure, check kernel-default. If it contains the actual kernel (e.g. /boot/vmlinuz) then you don’t need kernel-default-base.

On some architectures modules are also taken from xen-kmp-default. If that’s important for you, you can add this package to the kernel list as well.

In fact you can add any number of kernel packages or kmps you like.

In the past, sometimes a different kernel flavor was used. For example PowerPC had kernel-ppc64 for a while. Simply use the flavor you need.

It’s a good idea to gather all the kernel rpms into a single directory for easier use:

> mkdir k
> cp kernel-default.rpm kernel-firmware.rpm k
> cp kernel-default-base.rpm k    # only if needed
# add any kernel-related rpms you need

Then, take your SUSE installation iso and run

> mksusecd --create new.iso \
  --kernel k/* -- \
  original_dvd1.iso

Note that the --kernel option accepts a variable number of arguments, so you have to add an isolated -- to terminate the argument list properly.

The output could look like this:

> mksusecd --create new.iso \
  --kernel k/* -- \
  SLES-11-SP4-DVD-ppc64-GM-DVD1.iso
kernel version: 3.0.101-63-ppc64 --> 3.0.101-94-ppc64
CHRP bootable (ppc64)
building: 100%
calculating sha1...

The command above will actually get the list of required modules from the old installation iso. So if you are missing some driver, or the new kernel comes with some additional driver, that module will not be added to the new iso.

But there’s the --modules option. It will add the listed modules together with any implicitly required modules via module dependencies.

For example, let’s add the airport wifi-module to our PowerPC iso:

> mksusecd --create new.iso \
  --kernel k/* \
  --modules airport -- \
  SLES-11-SP4-DVD-ppc64-GM-DVD1.iso
kernel version: 3.0.101-63-ppc64 --> 3.0.101-94-ppc64
kernel modules added:
  airport, cfg80211, orinoco
CHRP bootable (ppc64)
building: 100%
calculating sha1...

As you can see, it automatically adds orinoco and cfg80211 as well.

2. Get the new kernel installed

This is relatively simple. A driver update can do this:

> mkdud --create foo.dud \
  --dist sle11 \
  --install repo \
  k/*

This creates a driver update for SLE 11 (which also applies to SP4) and the kernel rpms are installed via an auto-generated add-on repo (--install repo).

Now we have the driver update that installs our kernel packages. But how do we use it?

We integrate it into our iso above!

> mksusecd --create new.iso \
  --initrd foo.dud \
  --kernel k/* -- \
  SLES-11-SP4-DVD-ppc64-GM-DVD1.iso

mksusecd has an --initrd option that directly accepts driver updates and integrates them into the iso.

3. Can I have a choice?

Maybe you just want to test this new kernel or sometimes need the old one and sometimes the new one. Can you make an installation iso that lets you choose the kernel?

Oh yes! 🙂

> mksusecd --create new.iso \
  --add-entry 3.0.101-94 \
  --initrd foo.dud \
  --kernel k/* -- \
  SLES-11-SP4-DVD-ppc64-GM-DVD1.iso

This does not replace the old kernel but adds a new boot entry Installation - 3.0.101-94.

So you can install with either the old or the new kernel.

Fun things to do with driver updates

February 16th, 2017

Today: update the update process!

Yesterday a colleague asked me if it would be possible to apply a driver update (DUD) to the rescue system. He wanted to use a new btrfsprogs package.

My immediate reaction was: no, you can’t do it. But then, there’s no technical reason why it shouldn’t be possible – it actually nearly works. The updates are downloaded as usual – just not applied to the rescue system.

So I thought: “Why not make a driver update so driver updates work also for the rescue system?”

Here’s how I did it.

First, let’s find out how driver updates are usually applied. The code is here:

https://github.com/openSUSE/installation-images/blob/master/data/root/etc/inst_setup#L84-L87

We need just these three lines:

for i in /update/[0-9]*/inst-sys ; do
  [ -d "$i" ] && adddir "$i" /
done

linuxrc downloads the driver updates and stores them in an /update directory. One (numbered) subdirectory for each update.

It obviously uses some adddir script. So we’ll need it as well. Luckily, it’s not far away:

https://github.com/openSUSE/installation-images/blob/master/data/root/etc/adddir

Next, we’ll have to find the spot where the rescue system is set up. It’s done in this script:

https://github.com/openSUSE/installation-images/blob/master/data/initrd/scripts/prepare_rescue

Let’s do some copy-and-paste programming and insert the above code near the end of the script. It then might look like this:

# driver update: add files to rescue system
if [ -d /mounts/initrd/update ] ; then
  cp -r /mounts/initrd/update /
  for i in /update/[0-9]*/inst-sys ; do
    [ -d "$i" ] && /mounts/initrd/scripts/adddir "$i" /
  done
fi

Some notes:

  • You have to know that prepare_rescue is run as the last thing before we exec to init. So everything is already in place, the left-over files from initrd are mounted at /mounts/initrd and will be removed at the end of the script.
  • This means we have to copy our updates into the new root directory, else they will be lost.
  • Also, we plan to make the adddir script available at /scripts/adddir by our driver update (see below).

Now let’s create the driver update:

mkdud --create dud_for_rescue.dud \
  --dist tw --dist leap42.1 --dist leap42.2 --dist sle12 \
  --name 'Apply DUD also to rescue system' \
  --exec 'cp adddir prepare_rescue /scripts' \
  adddir prepare_rescue

Here’s what this call does, line-by-line:

  • the fix works for all current SUSE distributions, so let’s support them
  • give the driver update some nice name
  • this command is run right after the driver update got loaded; we copy the scripts out of the driver update to their final location
  • add adddir and our modified prepare_rescue script

Here is the result: dud_for_rescue.dud.

Now, back to the original problem: how to use this to update a package in the rescue system? That’s easy:

mkdud --create new_btrfs.dud \
  --dist sle12 \
  dud_for_rescue.dud btrfsprogs.rpm

creates a driver update (for SLE12) that updates btrfsprogs also in the rescue system.

How to build OS images without kiwi

December 26th, 2016

kiwi has long been the one standard way of building images in openSUSE, but even though there exist extensive writings on how to use it, for many it is still an arcane thing better left to the Great Magicians.

Thus, I started to use a simpler alternative image building method, named altimagebuild, when I built our first working Raspberry Pi images in 2013, and now I have re-used it to build x86_64 VM images at
https://build.opensuse.org/package/show/home:bmwiedemann/altimagebuild
after I found out that it even works in OBS, including publishing the result to our mirror infrastructure.
It is still in rpm format because of how it is produced, so you have to use unrpm to get to the image file.

This method uses 3 parts:

  • a .spec file that lists packages to be pulled into the image
  • a mkrootfs.sh that converts the build system into the future root filesystem you want
  • a mkimage.sh that converts the rootfs into a filesystem image

The good thing about it is that you do not need specialized external tools, because everything is hard-coded in the scripts.
And the bad thing about it is that everything is hard-coded in the scripts, so it is hard to share general improvements over a wider range of images.

In the current version, it builds cloud-enabled partitionless images (which is nice for VMs because you can just use resize2fs to get a larger filesystem and, if you later want to access your VM’s data from outside, you simply use mount -o loop).
But it can build anything you want.
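To illustrate what a partitionless image means in practice, a mkimage.sh could boil down to something like the following sketch (simplified, not the exact script from the package; the rootfs path is a placeholder):

truncate -s 2G disk.img            # empty sparse file, no partition table
mkfs.ext4 -F disk.img              # put the filesystem directly on the file
mount -o loop disk.img /mnt
cp -a /path/to/rootfs/. /mnt/      # copy the prepared root filesystem in
umount /mnt
# after enlarging the file (or the VM disk) later, the filesystem can simply grow:
# truncate -s 4G disk.img && resize2fs disk.img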

To make your own build, do osc checkout home:bmwiedemann/altimagebuild && cd $_ && osc build openSUSE_Leap_42.2

So what images would you want to build?

AMD/ATI Catalyst fglrx rpms, end of an era!

December 7th, 2016

It has been a long time since we last talked about the fglrx rpms, mostly because they have not received any update since December 2015.

Short Summary

In a word, fglrx is now a dead horse!

Dead horse

We had hoped to get it working for Leap 42.2 in October but, short of freezing the kernel and xorg, you will not get what you would expect: a stable xorg session.

Say goodbye to fglrx! Repeat after me: goodbye, fglrx.

If you are locked in and forced for any reason to use fglrx with your GPU, and are still using 42.1, then don’t upgrade to 42.2 without a plan B.

It has no more support from AMD upstream, and that’s it! If someone wants to break their computer, it is still possible to pick up the last files and try it yourself, but the repository will never contain it for 42.2 (see the how-to below).

That said, I’m still not sure how long I will keep the repository, which I have been managing for 6 years now.

A bit of history

In 2010, when we were working hard to get 11.1 out, the news came that no supported ATI driver would be available (at that time) for end users, as we have for nvidia GPUs.

I didn’t check the IRC logs again, but a few of us wanted to keep this available, purely for convenience. Especially since I had just exchanged a non-working GPU for my new HD5750.

I remember the first chaotic steps: how to build it, how to create repeatable builds, what about the license, did we even have the right to offer a pre-built rpm, and so on. I spent some time sorting all of this out and started the builds on real hardware. Back then kvm was really in its infancy.

Release after release of amd/ati and openSUSE, the driver was built on real hardware for each supported distribution. When, at the beginning of 2013, Sebastian Siebert, who had some direct contacts at AMD, released his own script, we collaborated to make it possible to build on virtual machines, which allowed me to simplify the build process by having one kvm guest for each supported openSUSE release.

Afterwards, AMD split fglrx into fglrx for HD5xx and above, and fglrx-legacy. So there were 2 drivers to maintain but, as always with proprietary software, the legacy version rapidly became obsolete and unusable. Not that bad: in the meantime, the AMD effort on the free and open source radeon driver quickly overcame the performance of the legacy one.

Still, from 2013 to 2016 I was able to offer ready-to-use rpms for several versions of openSUSE distributions. I think the repository served end users quite well, and I never got big flames.

I cannot avoid mentioning the openSUSE-powered server, sponsored by Ioda-Net Sàrl, that has served this objective so well during that time frame.

Future of the repository

Now that fglrx is becoming obsolete, I am thinking seriously about whether the repository should stay online.

At the openSUSE project level, we still have 13.1, 13.2, 42.1 and 42.2 which are mostly active. 13.1 is already almost out of the Evergreen game, 13.2 will follow soon, and I don’t know yet the exact plan for 42.1, but it will certainly go out of maintenance in less than a year.

If you feel the need for the repository, please express that in the comments below.

Wait there’s amd-gpu-pro, no?

Yes, there is a closed driver called amd-gpu-pro available for newer cards. But two things keep me out of the game. First, I don’t have one of those newer GPUs and don’t need to replace my HD5750 for the moment. Second, and certainly most important, those drivers are only available for Ubuntu, or at least only in .deb format.

I will certainly not help proprietary crap if I don’t have a solid base to work with and a bit of help from their side. I wish good luck to those who want to try those drivers; I took a look inside and was left with a grim face.

For crazy, and those who don’t love their computer

So you want to waste your time? You can! I have kept all the scripts used to build the driver in the raw-src directory.
They differ a bit from Sebastian Siebert’s last version in that they add Leap 42.2 as a possible target.
If you dig around a bit, you should be able to build them, but you are on your own there; you have been warned!

I’m not against a republished version; if someone finds a way to make them work, just drop me a message.

That’s all for this journey, Have Fun! 🙂