
calumma

The openSUSE Team blog. Calumma brevicorne is of course a kind of chameleon...

Author Archive

Working on an openSUSE Release

November 22nd, 2013 by

Finally, openSUSE Linux 13.1 has been released. It has been 8 months since the release of 12.3, during which we’ve been working on a lot of things. Especially in the last weeks, all the geekos have been quite busy trying to make this our best release ever. And we hope it shows.

Maybe you’re wondering what exactly we’ve been doing in the weeks leading up to the release. Well, that’s what Antonio Larrosa is going to talk about in this post.

LinuxDays 2013

October 24th, 2013 by

gadgets at Linuxdays 2013
Michal Hrušecký writes about Linuxdays 2013. Last year, openSUSE helped this event kick off big, and we were there again this year!

Introducing Linuxdays 2013

Of course you remember the openSUSE Conference 2012: bootstrapping awesome, the four-for-the-price-of-one event? The one in Prague? We had the combination of the openSUSE Conference, SUSE Labs, a Gentoo miniconf and the local LinuxDays conference – the last two for the first time ever. While the Gentoo miniconf didn’t have a continuation, the LinuxDays conference was bootstrapped successfully, and at the beginning of October this year it took place for the second time. And amazing it was.

Visitors and talks

Let’s start with some numbers. Before the conference started, there were 650 registrations. On Saturday about 400 people went through registration and on Sunday about 300 joined the event. A little geeky number – about 600 different MAC addresses connected to the WiFi. The event featured 31 talks and 6 workshops for these visitors.
building a 3D printer at Linuxdays 2013

Presentations were for both beginners and advanced geeks. There were talks describing the evolution of GNOME 3, news in LibreOffice or how openSUSE gets created. On the other hand, there were talks about MySQL fine-tuning, an introduction to Autotools and a session about Multipath TCP.

Workshops and booths

Another interesting part of the conference was the 3D printing workshop. It ran through the whole Saturday and a part of Sunday. People could order parts for 3D printers with delivery to the conference and there was a team of experienced 3D printer builders in the workshop helping people build their own printer. There was huge interest in this workshop, but due to the limited capacity, only the first nine registrations were accepted. In the end all nine 3D printers were successfully built and many people saw and learned how to build their own!
Geeko popular at Linuxdays 2013

Like last year, there were booths with various interesting open source projects. Mainly distributions – Debian, Fedora, Gentoo, openSUSE, Slax and Ubuntu – but there were also Xfce and Geany booths, an OpenMobility booth and a booth with some geeky stuff to buy, like mugs, T-shirts or Raspberry Pis with various shields.

As far as the openSUSE stand goes, people liked our DVDs, both the ones with openSUSE and SLE (we had some evaluation versions to give away), but as always Geeko got the most attention. People just looooove this lizard. We had plenty of questions about where they can get one.

Overall, the conference was quite a success and there will definitely be a LinuxDays 2014!

Statistics

And now it’s time for numbers again. Who were our top contributors to openSUSE Factory in the last week? Here you go:

Spot Name
1 Raymond Wooninck
2 Dominique Leuenberger
3 Hrvoje Senjan, Bjørn Lie
4 Richard Brown
5 Dirk Mueller
6 Denisart Benjamin
7 Todd R
8 Michal Vyskocil
9 Sascha Peilicke
10 Matthias Mailänder, Marguerite Su

Beta and RC work

October 17th, 2013 by

As part of the openSUSE community efforts gearing up for openSUSE 13.1, the openSUSE team has been hard at work on a variety of things. Ludwig Nussel, in his role as controller of the team, keeps an eye on the progress of the tasks we have been assigned. He gives an overview of some of the things we have been doing.

For beta 1

For beta 1 the focus was on testing and of course practical changes in a variety of areas. Here’s a list of the tasks on progress.opensuse.org.

After the btrfs discussion we checked out how often it is used and made some recommendations to give it extra testing. The result was that it was clearly mentioned in the Beta announcement and that during the installation YaST2 would ask if you could test btrfs.

gantt chart

We organized the Beta Pizza Hackathon, getting internal help from the SLE team on fixing openSUSE bugs. The idea came up to make it a contest. We went over the bugs that were reported, putting them in categories (bronze/silver/gold) to signify importance for the release. We developed an IRC bot to announce changes and a script to record who-did-what, to help a committee of ‘wise men’ decide who would win.

In the technical area, we agreed with the documentation team to move the manuals into activedoc. That way they are easier to maintain and always up to date. And Greek was included as a primary language on the DVD! We also had to work with Microsoft to get the bootloader and kernel signed properly for UEFI systems, we documented the development process, hacked further on testing tools, and of course there was a lot of writing of articles for news.opensuse.org.

Last but not least, we’ve been talking to Open Source Press and helping them to ensure we have a boxed version of openSUSE 13.1 in Germany. We (and the openSUSE Board, as it is their decision) are still looking for providers in other countries!
13.1 openSUSE 3D box

RC1

Here’s a list of the tasks on progress.opensuse.org. And the summary:

To help the marketing team, we made and ran a script to remind packagers to tell marketing about their features. Based on this and help from the community, we made a start with the feature guide, writing about KDE, GNOME and the Linux kernel. A second goal was to get the ‘promote openSUSE 13.1’ article out, for which we had to kick people for artwork and we had to update countdown.opensuse.org. We also updated software.opensuse.org with information about the USB stick installation.

On the technical side, we did a lot of testing work and of course picked fixes to go into Factory. There was also work with Legal for a new EULA (End User License Agreement) which does not mention Novell anymore (it will be included in the final release); and we added openSUSE 13.1 to Bugzilla, pinged our translators about the release and kicked people about the community repos.

A lot

As you can see, there are a lot of tasks we have to juggle. progress.opensuse.org with its useful Gantt chart is a big help for that, but it also simply requires work: checking what is done, what is not done, kicking people to do their thing. We’re hoping to get more community members involved with progress.opensuse.org – a few are already helping out via this tool and you’re welcome to check out how we do it!

The end result is that we can release a quality product, on time – something to be proud of!

Statistics

We’re back with weekly statistics (the stats from last week were in the RC1 announcement). Here are the top-10 contributors to openSUSE Factory of the last week:

Spot Hacker
1 Denisart Benjamin
2 Dominique Leuenberger
3 Raymond Wooninck
4 Hrvoje Senjan
5 Tomáš Chvátal
6 Stephan Kulow
7 Marguerite Su
8 Sascha Peilicke, Matthias Mailänder, Marcus Meissner
9 Cristian Rodríguez
10 Guido Berhörster, Dirk Mueller, Andreas Schwab

Creating openQA Test Cases for openSUSE 13.1

October 2nd, 2013 by

It is time for an update on openQA! Alberto Planas Domínguez discusses how to install and create tests (or "needles") for the tool that helps keep openSUSE Factory stable.

openQA and testing openSUSE

Our work on openQA was introduced a few months ago on this blog. To recap, the major improvements we made were related to the detection of failed and succeeded tests by introducing ‘needles’ (PNG files with associated metadata in JSON format) and using openCV to determine test results, plus a host of changes to speed up the testing process and improve the webUI a bit. The work currently resides in a branch by the openSUSE team on GitHub, but the original author, Bernhard Wiedemann, plans to test, integrate and deploy openQA during hackweek.

The fact that the new code isn’t running on openqa.opensuse.org yet is unfortunate but not a huge issue. Right now, to contribute test cases, you have to install and run openQA yourself. And with the packages available for openSUSE this is not a huge deal.

So, let’s talk about using openQA and creating test cases!

Installing openQA

Installing openQA is easy. Add a repository in zypper, install some packages and run some scripts. You can find a full description of the installation process in the openQA Tutorial. In summary, you have to go through these steps:

Install the repository and the packages and reboot
% zypper ar http://download.opensuse.org/repositories/devel:openQA/openSUSE_12.3 devel:openQA
% zypper in openQA kvm OVMF
% reboot

Install needles and fix ownership
% /usr/lib/os-autoinst/tools/fetchneedles
% cd /var/lib/os-autoinst/needles
% sudo chown -R wwwrun distri

testing in progress!

After that we need to make some adjustments in the Apache configuration before we can run the service. See the openQA Tutorial for the details.

Get testing

openQA testing works on the basis of ISO images, which we have to provide to openQA. So the next logical step is to download the latest development ISO image from software.opensuse.org/developer and copy it to /var/lib/openqa/factory/iso. Now we can create the jobs (different test sets) and launch the server:

% rpc.pl --host localhost iso_new /var/lib/openqa/factory/iso/*.iso

If we have systemd running, the workers will take the jobs, create a virtual machine, and run the tests in order.

Look Ma, my first test!

Once openQA is installed we can start creating and running tests. Tests are written in Perl, and use an API exported by openQA to the applications. The internal logic of a test is something like the following:

  • The test sends a set of events (keystrokes) to the virtual machine
  • openQA takes screenshots of the consequences of these events
  • The test compares the resulting images with a database of images (or needles, see the previous openQA article for details)


We can move through the installation process using this cycle multiple times. For example, in the test 051_installation_mode.pm we can find this:

sendkey $cmd{"next"};
waitforneedle("inst-timezone", 30) || die 'no timezone';

With the first command we send a keystroke to the system, in this case alt-n, because the dictionary $cmd maps the “next” key to this keystroke. After that we wait up to 30 seconds for an image that matches the “inst-timezone” needle.

Inside a test, the main function that can be used to send events is sendkey(), with sendautotype() for when we want to send a full string. When we want to assert the current image we can use waitforneedle(), which throws an exception if the needle is not found, and checkneedle(), which returns the match result if the current image matches a needle.
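
To give an idea of how these calls combine, here is a small hypothetical fragment (the needle names and the launched program are invented for this example, they are not taken from the real test suite):

# Hypothetical example: needle names and the launched program are placeholders.
sendkey "alt-f2";                        # open the desktop's run dialog
waitforneedle("desktop-runner", 10);     # hard assert: dies if the dialog never shows up
sendautotype("xterm\n");                 # type a command and press enter
if (checkneedle("xterm-started", 15)) {  # soft check: just returns the match result
    sendautotype("echo hello\n");
}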

More complicated tests

Using this basic functionality we can build higher levels of abstraction to simplify the interaction between the test and the system. For example, we can build a function that types the password when we execute a sudo command. One of the problems when we execute a program as root inside a test is that the system does not always ask for the password when we call sudo. We can work around this as follows:

sub sudo {
    my $passwd = shift;
    my $prog   = shift;
    sendautotype("sudo $prog\n");
    # only type the password if sudo actually asks for it
    if (checkneedle("sudo-passwordprompt", 2)) {
        sendautotype($passwd);
        sendkey "ret";
    }
}

We use checkneedle() instead of waitforneedle() to make a decision inside the test instead of making an assert and marking the test as failed.

An openQA test is in fact an instance of an object. We usually inherit from a base class and overwrite one of the methods to run the test. The basic structure of a test is:

use base "basetest";
use strict;
use bmwqemu;

# Determine if the test can run.
sub is_applicable() {
}

# Main code of the test.
sub run() {
}

# Return a map of flags to decide whether a failure
# of this test is important, or to decide a rollback
# of the VM status.
sub test_flags() {
}

1;

The method is_applicable() is called to check if the test can run for a given configuration (more about this later). If the function returns true, openQA will call the run() method. It is in this function that we need to put the test code.

The test_flags() method is used to decide what to do when the test fails. Depending on what this function returns, openQA can decide to mark the ISO as a bad ISO if the test fails, or mark only this test as failed and continue with the next test. There is one more interesting feature here. openQA makes snapshots of the CPU, the memory and the hard disk (using a QEMU option). If the test goes well, openQA creates a snapshot labeled ‘lastgood’ that can be recovered if the next test fails. This feature is useful to guarantee that every test can be started from a stable system (as failing tests can put the system in an unstable state).
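
To make this concrete, a test_flags() implementation can look roughly like the following sketch (the flag names shown here are illustrative; the authoritative list is defined by the base class of the openQA version you run):

# Hedged sketch only; check the basetest class for the exact flag names.
sub test_flags() {
    return {
        'important' => 1,   # a failure of this test marks the whole ISO as bad
        'milestone' => 1,   # a success updates the 'lastgood' VM snapshot
    };
}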

Variables

When openQA creates a new test job, what it is really doing is setting a specific combination of environment variables that need to be checked by the test in order to adjust the test’s behavior or to discard itself (through the is_applicable() method).

For example, when we want to test the KDE desktop environment, openQA creates a new variable named DESKTOP with the value “KDE”. This variable can be checked in the test code using $ENV{DESKTOP}. As another example, say we want to create a test to check if Firefox is working properly on two desktops. We can use:

sub is_applicable() {
    return $ENV{DESKTOP} =~ /kde|gnome/;
}

If the variable DESKTOP is not KDE or GNOME, the test will not be executed.

I want to learn more

If you want to dig into openQA, you can check the source code and the different tests that are now running every day in our deployed instance. You can find the source code of V1 in Bernhard Wiedemann’s repositories github.com/bmwiedemann/os-autoinst and github.com/bmwiedemann/openQA. V2 is currently in the openSUSE GitHub account at github.com/openSUSE-Team/os-autoinst and github.com/openSUSE-Team/openQA, along with the needles. As mentioned before, Bernhard plans on integrating the V1 and V2 code during hackweek.

You can also learn by checking out other tests, for example the ones in x11test.d and inst.d.
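
To give an idea of what such a test looks like end to end, here is a minimal, hypothetical x11test.d-style test built only from the calls described above. The application, the needle name and the key sequences are placeholders; a real test needs a matching needle (PNG plus JSON) committed alongside it:

use base "basetest";
use strict;
use bmwqemu;

sub is_applicable() {
    # only makes sense on a graphical desktop
    return $ENV{DESKTOP} =~ /kde|gnome/;
}

sub run() {
    sendkey "alt-f2";                       # open the desktop's run dialog (placeholder key sequence)
    sendautotype("gedit\n");                # launch the application under test (placeholder)
    waitforneedle("my-editor-window", 30);  # this needle must exist in the needles repository
    sendkey "alt-f4";                       # close the application again
}

sub test_flags() {
    return {};                              # a failure here should not invalidate the whole ISO
}

1;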

Contribute!

There is a simple way to start contributing to openSUSE through openQA:

  1. Install openQA
  2. Create a new test in x11test.d to test your application
  3. Send a pull request to the project with the test and the needles

Currently, these tests would be used by the openSUSE team (running openQA internally) to keep the quality of the upcoming release up, but once V1 and V2 are merged and deployed, you will be able to see and act on the results on openqa.opensuse.org.

The weekly statistics

And as every week, here’s the top-10 contributors to openSUSE Factory of last week. We’re pretty much in bug fixing mode now so a big thanks to each of the contributors for helping make openSUSE 13.1 stable!

Spot Name
1 Dominique Leuenberger
2 Denisart Benjamin
3 Dirk Mueller
4 Sascha Peilicke
5 Dr. Werner Fink
6 Marcus Meissner, Bjørn Lie
7 Ismail Donmez
8 Todd R
9 Stephan Kulow, Hrvoje Senjan
10 Niels Abspoel

Merchandising: where we are going

September 25th, 2013 by

Jos Poortvliet wrote shortly before the conference about merchandising. Today, he updates us on the status.

Recap

As a quick recap from the earlier blog, note that the goal of the merchandising program is to support openSUSE ambassadors in visiting events and representing openSUSE there. The old program suffered from some issues, including bad planning, bad materials and very expensive shipping. It was simply not very efficient and we didn’t know how much good it did (or not).

The plans

What we want to do can be best summarized as emphasizing quality over quantity and having a proper process. That means making a decent plan for supplying important events with a big box of quality materials and making sure we get a good return on the efforts we put in. And, as started with the Travel Support Program, we’d like the community to be in charge of it all, not SUSE.

Process

A simplified process picture would look somewhat like this:
Booth materials process

The ambassador/local coordinator/TSP teams make an event plan every six months or every year, looking at what events are coming. They classify them, determining what level of support each gets and who is going. This is of course coordinated with the individual ambassadors. The team then asks the openSUSE Team at SUSE to send the materials to the ambassadors, who in turn go and visit the event. At the end of every month we can let the wider community know about the events we visited!

Where we are

Right now, we’ve assembled a list of events, based on ambassador feedback and earlier event plans. I’ve also figured out what shipping options we have and what they would cost and made a plan for the contents of the box. I’d like to start with two different boxes, a big and a small one, and I have selected about 20 events which would get a box. 7 events will have visitors from the openSUSE team, so we’d be supporting 27 events with materials and people. Of course, the Travel Support Team can support other events – and we have the ability to locally produce materials as well. And after the first year, all this can be changed, with more and cheaper boxes (or fewer and bigger), different event selections and improved content.

Issues

We’re aware that this plan is not perfect: quite a few people who received materials in the last years won’t receive them under this plan. But that’s what happens when you decide to focus on a smaller number of events… We have thought about solutions for that and can think of more; for example, creating a 1-kg event box with very reasonable shipping costs. Second, planning is hard and harsh, and we want to be flexible in that regard and open to feedback.

Current status

So where are we now? Unfortunately, there has not been as much progress since the openSUSE conference as we hoped. A blocking issue is the availability of an artist to handle the creation of the materials we decided on. This will have to be done at SUSE, with only a limited role (for now) for the community artists. We are of course working on hiring somebody for this task, but unfortunately one doesn’t just hire somebody.

However, what we do have is a good idea of shipping and a decent event plan, and we have not only mostly determined what we’d like to ship in each box (and how heavy the boxes will be) but have also already started talking to suppliers. So as soon as we can get the artwork situation resolved, we should be able to get up and running in relatively short order – hopefully before the end of this year.

To give you a bit of an idea of what we have, see the image below.
openSUSE Booth styled

      1. There is about 2 kg of basic tools like duct tape, colored markers, a network cable and a VGA cable, a dozen openSUSE pens and some trash bags (only in the big box)
      2. There is stuff for the table: a table cloth, flyer holder, table display (cube) and we’d like to have caps or bandanas for the team
      3. Of course we have a cool and lightweight X-banner (if we can make it fit, they’re either big, heavy or both)
      4. We include some posters with information and for putting up a schedule
      5. And then there are some give-aways like stickers, flyers and perhaps cool posters with Unix Cheat Sheets or other info on them

A big thanks to Anditosan for making the images that make the example booth picture here look good. He’s already gathering potential artwork for the booths and if you have any input – direct it to the openSUSE artwork mailing list.

Next

So, the plan now is that once we have a designer, we create and print the materials, assemble the boxes, make appointments with the ambassadors for the events we’ve planned and start shipping.

Statistics

While merchandising seems to be on ice, development is still very active. Factory is now frozen with the Beta out, so we’re going to focus on bugfixing. The top contributors last week were:

Spot Name
1 Tomáš Chvátal
2 Dominique Leuenberger
3 Michal Vyskocil
4 Petr Gajdos
5 Dr. Werner Fink
6 Bjørn Lie
7 Raymond Wooninck
8 Stefan Dirsch
9 Hrvoje Senjan
10 Sascha Peilicke

Documenting the openSUSE Development and Release Process

September 18th, 2013 by

This week, the openSUSE Team Blog is written by Agustin Benito who talks about how we’ve been documenting our work around openSUSE.

Development of a GNU/Linux-based distribution like openSUSE can be divided into two major phases:

  • openSUSE Development process: creation and integration of the different packages. Milestones (alphas) and Beta.
  • openSUSE Release process: stabilization, QA and tasks related to the availability of the different images. Update process. From Beta to the Release.

There are also three major continuous actions throughout the process:

  • Management of the process.
  • Communication and marketing related tasks.
  • Development of new features.

The development of new features usually happens upstream or within SUSE (in collaboration with community members) before it is added to the distribution. This phase takes place before or in parallel with the Development process (of packages and integration).

Focus: the openSUSE Team

The decision taken by SUSE during 12.2 to concentrate the SUSE employees working on openSUSE in a single department by creating the openSUSE Team had as a consequence that new people have joined us to work on openSUSE. One of the early decisions within the openSUSE Team at SUSE was to put more focus on the distribution. The Release Team was created as a subset of the openSUSE Team, formed by Coolo and Tomas (Ismail Dönmez and Michal Hrušecký were also part of it in the past) to take care of the Development process. It was also decided that the whole team would have as its major goal to drive the Release process.

Documenting the process

Starting in July 2012, the openSUSE Team at SUSE has put effort into documenting the Development + Release process. Throughout the years, the process has evolved and some of those changes were not documented, or the documentation was not up to date. We have taken the opportunity to analyze the Dev+Release process, so we could learn from it and be able to design and execute changes to improve it.

The release process

As a team, we focused first on the Release process. We made an initial effort during the last few weeks of the 12.2 Release process, and we improved it during the first milestones of the 12.3 Release. For this 13.1 Release, only minor updates have been required.

Increasing the efficiency of the management side of things and structuring our work as a formal project has been another aspect of this. We set up a management tool that is now used together with the openSUSE sysadmin team. Ludwig Nussel is the controller of our efforts as a team in the Release, planned through progress.opensuse.org. The combination of this planning and the improved release documentation has helped us to increase our efficiency despite new people joining our team.

The development process

During the past few weeks we have concentrated our efforts on analyzing and documenting the Development process. Our goal has been to analyze it, providing a high-level view and just the minimum amount of detail required for people with some knowledge of this kind of software integration process to understand it.

We have recently published a draft of the Development process on the openSUSE Factory mailing list and are open for feedback.

Writing process

How was this document done?

      1.- The first action was to analyze existing documentation about the Development process. We took as sources several articles from the openSUSE wiki and some previous analysis done by the former Boosters Team in this area.
      2.- We organized two sessions in which different people described the different steps of the process. These sessions were taped.
      3.- We transcribed the sessions and created a first document containing all the information about the process.
      4.- We processed that “story” in order to get the high level view of the process.
      5.- Using the ETVX methodology, we elaborated a first draft of the document.
      6.- We analyzed the result within the openSUSE Team and, after introducing some changes, we opened the discussion about it to other SUSE employees involved in the Development process.
      7.- Finally, in order to make the document easier to read (it is a complex process, so the documentation needs time to digest), we introduced improvements in the format and published it in the openSUSE wiki, together with the PDF version.

This process has allowed us to analyze and discuss the process as a team, learning not just about the hows but the whys. It has also worked as a test for documenting future changes in how openSUSE is developed. We also hope that the effort will be worthwhile for contributors who want to get a high-level view of how openSUSE works, since some of the tasks are done in house. This document should also be seen as an exercise in transparency.

Improvements

In the openSUSE Team, we are strong believers in the principle "no data -> no analysis -> no improvements". We think we are better prepared now to propose improvements in Factory to the community, to make it more usable for a wider range of contributors.

Statistics Time

And like every week, we present you the top-ten of contributors to Factory!

Spot Hacker
1 Raymond Wooninck
2 Dominique Leuenberger
3 Dirk Mueller
4 Dr. Werner Fink, Bjørn Lie
5 Sascha Peilicke
6 luce johann, Michal Vyskocil
7 Michael Schröder
8 Stephan Kulow
9 Ismail Donmez
10 Cristian Rodríguez

OBS Webui Gets New Search Functionality

September 5th, 2013 by

Our team blog this week is written by Stephan ‘coolo’ Kulow, who talks about work done in the team on the Open Build Service.

Search

For years, one of the biggest complaints about the webUI was that it is impossible to find packages. The search ability has been part of the interface from the beginning, but with over 200,000 packages being built today it is crucial to get the right package.

Where is my kernel?

Especially for developers new to openSUSE and the build service, it is common to have to search for the package to fix for a specific bug. So you find yourself looking for kernel in the webUI and you are presented with tons of results displayed in a rather random order, plus the notion that your search resulted in more than 200 hits and is basically invalid. Huh? home:foobar:latest-experiments:kernel is surely not the openSUSE kernel to fix, but then what is?

Now if you ask Google about “kernel site:build.opensuse.org” you get closer to the problem at hand: “About 16,800 results” – that is a lot to pick the first 20 results to display from. The OBS webUI tried to find a good pick with an algorithm that might have been clever when build.opensuse.org had 100 projects. Today, it can only be called old and useless.

Ancor for world fame

So I tricked Ancor into looking into the problem by claiming he would get all the praise in the OBS world for implementing a sane search.

The problem is far from trivial, but there are good tools to get a better result than what we had until now, and Ancor has a lot of experience with these (and Rails in general). So it seemed like he could attain a great balance between effort and outcome.

But as always the devil lay in the details, so this post is also about getting feedback about the actual result.

What we did

Ancor integrated Thinking Sphinx into the OBS, so the name, title and description can be combined with other attributes into one big index that allows page ranking.

Additionally, there is no limit of 200 results anymore; the webUI will now display all results, but only 20 at a time, as you might have seen on larger sites offering search results…

We collected attributes which are most likely relevant for people searching for packages. For example, we gather the linkcount of a package into the database (so far only the backend knows what is a link and what is a plain package). The idea is to move links down in the search results.

Coming back to the kernel example, the kernel-source package is the real package, while kernel-default, kernel-desktop, kernel-xen, …. exist too but are all links to kernel-source. So it is fair to present kernel-source first.

Problem is: there are still 228 kernel-source packages in the build service (yes, people like branching the kernel – a lot), so the number of links pointing to a package is another attribute. Packages that get branched a lot go up in the list, while the resulting branches move down. What also plays a role in the calculation: is a package the devel package of another? (Which is the final punch to have Kernel:HEAD/kernel-source displayed as the first result, as opposed to the old search algorithm displaying a discontinued “linux-kernel-nutshell”, as shown in the screenshots.)
search for kernel on OBS

To sort the vast majority of results that are all _links, not linked to and not devel packages, we take the activity index. This is a number the OBS tracks for every package but displays nowhere. It goes from 0 to 100, goes down over time and goes up with regular commits. So if you look for kde, you will actually see KDE:Unstable:Playground as the first project to match. This is because of two things:

  • kde is indeed a very bad search term
  • the unstable playground sees a lot of commits, so your chance of getting something fresh there is the highest
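
Purely as an illustration of how the attributes mentioned above could be combined into a single score, here is a toy sketch – not the actual Thinking Sphinx configuration and not the real weights used in the OBS code:

# Toy scoring function; the attribute names follow this post,
# the weights are invented for the illustration.
sub toy_package_score {
    my ($pkg) = @_;
    my $score = $pkg->{text_relevance};     # how well name/title/description match the query
    $score += 10 * $pkg->{linked_count};    # many branches point here: probably the "real" package
    $score -= 10 if $pkg->{is_link};        # links themselves move down
    $score += 20 if $pkg->{is_devel};       # devel package of another project
    $score += $pkg->{activity_index} / 10;  # 0..100, decays over time, rises with commits
    return $score;
}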

Your feedback wanted

Of course nobody is perfect, and while Ancor’s code comes close, the weights given to the attributes were my choice, so all problems in the sorting you see are my fault. Please take some time to redo some searches you might have done in the past and report whether the results are sane to your experienced eye. Within the HTML of the search results is a hidden span with the raw attributes used in the calculation, so if you find something strange, look for weight, linked_count, activity_index, is_devel and co. Possibly the package that looks bogus to you in the top results is just very active.

Depending on the feedback we get, we might need to change the weights or add yet more attributes in the search and ranking. Do your own experiments on build.opensuse.org today!


Statistics

And as always, we finish with the top-ten contributors to openSUSE Factory of last week!

Spot Name
1 Michal Vyskocil
2 Dominique Leuenberger
3 Ladislav Slezak
4 Marcus Meissner
5 Hrvoje Senjan
6 Lars Müller, Ismail Donmez, Cristian Rodríguez, Bjørn Lie
7 Stefan Dirsch, Matthias Mailänder, Jan Engelhardt, Dmitry Roshchin
8 Tomasz Konojacki
9 Ludwig Nussel, Dirk Mueller
10 Raymond Wooninck, Lukas Ocilka

The openSUSE Release process

August 29th, 2013 by

Michal “|Miska|” Hrusecky writes this week about the openSUSE release process. He gave a talk about this subject at the openSUSE conference this year and the content of this talk is reproduced below (the marketing section, just like with the presentation, is by Jos Poortvliet).

Release Process

Getting openSUSE out the door is a lot of work. We already shared part of what we are doing to keep Factory rolling. But as you can guess, there is much more to it. Still, let’s pretend it is a simple three-step process:

Step one: developing Factory

When we release openSUSE, we immediately start working on the next version: a never-ending story. The first thing that happens during a new release cycle is coolo announcing the road map. This is the schedule of the release and the important checkpoints that we have to reach on our way. After the release, Factory (our development version) is not frozen anymore and people can start submitting new stuff. Usually they go crazy and submit a lot of bleeding edge and experimental packages, and quite some parts of Factory will get broken.

Now comes the time for keeping Factory rolling. As one picture can say more than a thousand words, take a look at how packages get into Factory:

factory

A developer creates a package in his home repository and then sends it to the devel project, where it gets some basic review (depending on how the team there is set up). From there, the package goes to Factory, where there are several automatic checks (mentioned below) and a manual review by the review team. Once the package passes all these checks, it gets into Factory. And breaks something else there. So somebody has to branch it, fix it, and go through the same process again, now with a different package…

We’re working on documenting the Factory development process in a little more detail so you can expect a blog on that subject in the coming months!

Checking up on you…

There are several bots that check packages sent to Factory. The first is factory-auto, which does some basic checks regarding your spec file, rpmlint warnings and similar. If you pass this quick test, it’s time for more thorough testing. A bot named factory-repo-checker tries to actually install your package in a testing environment to make sure that is possible, and it also looks for possible file conflicts, so you won’t overwrite somebody else’s package. The last check before a package gets in front of the review team is legal-auto. This one checks the licence (did it change? is it a new package?) and if needed calls in our legal team to take a look at the package. The final step is a manual review by members of the review team, who will try to spot mistakes that the automatic checks overlook.

Step two: freezing

We could be breaking and fixing stuff forever, but as we have to release at some point, we have to do some freezes. The first one is the Toolchain Freeze. When we reach that, no new versions of compilers will get in, only fixes. This way the rest can fix all the new compiler warnings and errors without fear of getting new ones. Next in line is the Base System Freeze, which means a frozen kernel, KDE, GNOME, X11 and below… All these are core components with many packages depending on them. The rest of the packages (after getting fixed/upgraded) get frozen last with the release of Beta 1, when we reach Full Feature Freeze. At that point only fixes go in: no new versions or features or anything else that could break anything.

Step three: branching and selecting an ISO

As we get close to the release, Factory gets branched into a separate repository, currently 13.1. Factory then gets unfrozen and starts to live its own life again. The newly born release goes through the last stabilization and deep testing phase (both manual and through openQA). Every automatically built ISO gets tested until the Gold Master is selected. The Gold Master has to be installable and cannot contain any critical ship-stopper bugs. It can still contain known bugs; we would never release if we waited for a bug-free release. But all these bugs can be fixed via updates, and some of them will even get fixed before the official release date. After selecting the Gold Master, it usually takes us about a week to get it synchronized to our mirrors and to get everything ready for the big day. In that time, developers keep fixing bugs, so when you get our fresh release, we already have many fixes ready and waiting.


Marketing process

Let’s talk release marketing now. We’ve got a HUGE list of things we work on in the marketing team.

Building excitement

There is preparation for the release, stuff we do usually in the 2 months before the release. Think about material like the release counters, posters and badges for websites or conferences; writing the sneak peek articles and organizing release parties.

Releasing openSUSE

Then there is the release marketing itself. We write a feature guide, extract feature highlights, create screenshots, and write announcements for our news site and for the press. Then we put all of the above, combined with a list of people to be interviewed and a link to the gold master, into a press kit we send to journalists one week before the release. This way we can have reviews published on the release date.

How it all comes about

All that is essentially built upon the list of new features for openSUSE. We gather the developers’ brain dumps at this wiki page. They share with us a bullet list of what changed since the previous release and how important it is for their users.

Pirates

We take these features to a pirate pad and start looking for everything that is missing (which, usually, is a lot: many of the features we don’t know about because developers don’t tell us). We gather info from the sites of projects like KDE and GNOME about their new versions and put it in. We work on the bullet points to make pretty text and we try to bring in some structure.

Once it is reasonably complete, we move it to a wiki page where we start to polish it up and add screenshots, links and videos if we have them. The end result you are probably familiar with: the Features page. Note that most of this does not really need much experience; almost anybody can help!
Feature guide writing process

Highlighting

Then the next step is to look at that draft and the structure it has gotten and put in some deep thinking to pick the major features for the highlights. There is a lot of whiteboarding in this step: figuring out what the message of our release is, the theme, that is not easy.

These highlights form the main part of the announcement; everything else is then ‘just’ hard work – a lot of it. This step needs the most experienced writing and a lot of feedback-improvement-feedback-improvement cycles.

Timeline: work gets crazy at Beta

The biggest problem with all this is that it can’t be done in a very relaxed way, because we’re in a hurry. Beta 1 is about 6 weeks before we have the Gold Master! When the Gold Master is done, we need to send our stuff out to the press. At that point we can make only VERY minor changes. After all – just a week later we do the release. In that week, we only write social media messages (and translations) and get our websites, servers and everything else in order. Marketing is ready for the release!

Releasing a conclusion

We hope the above gives some sense of how the openSUSE release process works. Yes, it is long, laborious, but also a lot of fun! And of course it results in the awesome operating system which is openSUSE!


Weekly statistics

And it is that time again: the unveiling of our statistics of openSUSE top contributors of the last week! Here’s the top-ten most active Factory hackers:

Spot Name
1 Dominique Leuenberger
2 Michal Vyskocil
3 Bjørn Lie
4 Tomáš Chvátal
5 Ismail Donmez
6 Hrvoje Senjan
7 Andrey Karepin
8 Vitezslav Cizek, Ludwig Nussel
9 Cristian Rodríguez
10 Todd R, Hillwood Yang

More on Statistics

August 23rd, 2013 by

Shortly before the openSUSE Conference, we featured a post about openSUSE statistics. It mostly talked about where we got the numbers, teasing that we’d share the details at the openSUSE Conference. And Alberto did. Today we bring you the numbers Alberto digested, in images and text.

Downloads and users

The simplest statistics for a Linux distribution are of course the numbers of downloads and users, so let’s start there.

ISO downloads

The methodology used to count downloads is easy to understand: we count every IP address that hits the server or is redirected to one of the mirrors and expresses the intention to start downloading one of the ISO images available for the distribution. In this way, we count independently every download that uses the same proxy and every different product downloaded by the same IP. We can group the downloads by week or by month. In both images we can see that we started counting in 2010 and covered openSUSE 11.0 to 12.3. Also, in both graphs we can see which events explain the peaks, like the releases of the distribution.
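
To sketch the counting rule in code, here is an illustration only – not one of our actual analysis scripts (see the GitHub link later in this post); the input format of one "YYYY-MM IP ISO-file" record per line is assumed for the example:

use strict;
use warnings;

my %seen;    # month => { "ip iso" => 1 }
while (my $line = <STDIN>) {
    my ($month, $ip, $iso) = split ' ', $line;
    next unless defined $iso;
    # the same IP downloading two different products counts twice,
    # repeated hits for the same product from one IP count once per month
    $seen{$month}{"$ip $iso"} = 1;
}
printf "%s %d\n", $_, scalar keys %{ $seen{$_} } for sort keys %seen;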

Downloads per month

Downloads per week

To make a more detailed analysis we need to concentrate on the monthly graph. For this plot we calculated a linear regression model using the monthly data (41 samples). In the graph you can see a slight growth but decreasing impact of releases. Extrapolating, we can expect about 560K downloads per month in 2014. Note that this is downloads, not installs! Let’s talk about those next.

Installations

To get a more reliable estimation of persistent installations we count systems that regularly update. A count of the encountered unique systems per week or per month is a fair estimator of the number of active installations (see details on the counting).
Updates per month

Updates per week

Here you see many interesting things. For example, the red trace on top of the plot in 2010 and 2011 represents Factory users. If you look closely, you also see that usage of a particular version already starts before it is out – that is due to testers checking out milestones, Beta and RC versions. And you notice how long it takes some users to move over to new versions of openSUSE – not only have over half of our users not moved to the latest 12.3 yet, but about 140,000 users happily still run releases from before 12.2, most of which (except 11.4) receive no security updates!

When we fit a linear regression model on this data, we see a less encouraging picture compared to the downloads: on average, we lose around 300 users per month. Given the size of our installed base this is not huge, but it is worrying nonetheless.

More data

There are more things that we can learn from the data. We can analyze the behavior of users according to the installation medium or the architecture used and in time we can perhaps analyse how repositories are used and which are popular.

Medium and Architecture per week

The Open Build Service

The Open Build Service is what openSUSE uses to build and distribute packages. It is a very integral part of our infrastructure and how we work, and its server logs contain a wealth of information on the work done on openSUSE. For example, mining the list of Submit Requests that go into Factory and devel projects, we created the graph below to give an idea of the development of the number of contributors working on openSUSE, with the total (blue) per month going nicely up as you can see.

OBS Contributor Data

Social media

Thanks to the help of Athanasios-Ilias Rousinopoulos (that is a link to his presentation at oSC13) we’re regularly gathering statistics on our social media, a summary of which you can see in the graph below. Yes, we’re doing well getting our message to people, thank you all for your part in that!

social media data

Let’s compare: openSUSE vs. Fedora

Numbers are useless if you can’t compare. We searched for data from other distributions like Ubuntu, Gentoo, Arch or Debian, but only Fedora provides real numbers with the methodology used to generate them. Kudos to our friends at Fedora for being open and transparent!

Now, they use a different way of gathering data on downloads: counting the number of different IPs seen per day. To count users, they count the number of different IPs seen for this release since the release date. This complicated matters on our side but we’ve made it work. However, one variable was a bit harder: both distributions have different release cycles and dates. As you see in the graphs, we tried our best to make the comparison as direct as possible. The plots below are in the same scale of time and number of downloads.

openSUSE and Fedora Users

openSUSE and Fedora Downloads

As you can see, Fedora has more downloads than openSUSE. Looking at the users, the situation is reversed: openSUSE has quite a bit more users than Fedora according to this measurement. How is this possible? The explanation is most likely that most openSUSE users upgrade to new releases with a ‘zypper dup’ command, while Fedora users tend to do a fresh installation. Note that, like everybody else, we’re very much aware of the deceptive nature of statistics: there is always room for mistakes in the analysis of data. To at least provide a way to detect errors and follow the commendable example set by Fedora, here are our data analysis scripts on GitHub.


Contributor Statistics for week 33

All these statistics and still we’re not done! Here is the top-10 of contributors to Factory last week. As you can see, Stephan ‘Coolo’ Kulow is on vacation, freeing up a spot in the table 😉

Spot Name
1 Raymond Wooninck
2 Dominique Leuenberger
3 Sascha Peilicke
4 Hans-Peter Jansen
5 Bjørn Lie
6 Dirk Mueller
7 Ladislav Slezak
8 Ismail Donmez
9 Stefan Dirsch, Jan Engelhardt
10 Hrvoje Senjan

On Coordinating our Work

August 15th, 2013 by
Redmine in action

This week we present Ancor Gonzales Sosa who writes about how we coordinate our work and introduces progress.opensuse.org!

A distributed team

The openSUSE Team at SUSE is a combat force spread all over the world: we have members in Berlin, Prague, Malaga, Nuremberg and Taiwan. And we are planning to reach new territories in the following months (beware, a openSUSE Team member could be standing behind you just right now). Coordinating the work of a distributed team is not always easy. On the other hand, we are disperse not only from a physical point of view, we also work simultaneously in a lot of different tasks and projects. We also have members with different skills and interactions with SUSE employees and community members outside our team. (more…)