

calumma

The openSUSE Team blog. Calumma brevicorne is of course a kind of chameleon...

Author Archive

Echoes from oSC’14: state of Factory and openQA

May 8th, 2014 by

No tests, no feature

There has been a lot of work going on regarding the stabilization of Factory, and it is still an ongoing process. We have a lot of submissions, Factory is moving really fast and it has been hard to keep it stable. Over time, more and more checks have been added to the process to make Factory more stable and usable in everyday life; the latest ones, being added right now, are rings and openQA. You can learn more about that in the talk by our release manager coolo. It will also give you a quick peek behind the curtain at what the future will bring.

In that future, one of the core roles in making Factory stable will be played by openQA. As such it got quite some attention during the openSUSE Conference 2014 in Dubrovnik. A stable Factory will not happen by itself, it needs quite some work. openQA is currently under heavy development to match the new needs, and you can help improve it! The work is being coordinated in a progress.o.o project, the sources are available in a GitHub organization and there is even a talk to introduce you to the world of openQA development 😉

At some point, when openQA gets integrated well enough, users will be able to enjoy an “almost” bug-free Factory as a rolling distro. How bug-free it will be depends on our test coverage of the distribution. What is the current state? We have the basics pretty well covered, but existing tests still break sometimes and sometimes we are missing tests. If you want to help make Factory better tested, you can get openQA running locally and start writing tests for the stuff that matters to you! During a nice workshop at the openSUSE Conference, Ludwig Nussel trained some geekos on how to get a local openQA running and start writing tests. Luckily, the workshop was recorded, so even if you missed the conference, you can watch it and start writing tests. New tests are welcome, although for performance reasons (we want openQA to finish at some point, right?) not everything might get run every time. Even so, you can always run your own openQA instance to test the stuff you care about and report bugs to Bugzilla!

Looks like there is a bright future ahead of us, with a stable, well-tested Factory continually rolling forward. There is plenty of work to be done to achieve this, but the progress made so far is starting to show results, and we will approach the final goal faster as we get more people involved.

Help yourselves to our low hanging fruit!

March 13th, 2014 by

An update on what we’re doing and a call for help!

openQAv2

What was going on

From our previous posts you probably know what we are working on these days: making Factory more stable by using staging projects and openQA. Both projects are close to reaching important milestones. The osc plugin that helps with staging has reached the state where coolo and scarabeus can manage Factory and the staging projects using it. We did some work on covering it with tests and are currently about half-way there. Yes, there is room for improvement (patches are welcome), but it is good enough for now. We are still missing integration with OBS, automation and much more. But as the most important part of all this is integration with openQA, we decided to put this sprint on hold and focus even more on openQA.

How to play with it and help move this forward?

During our work, we found some tasks that are not blocking us but would be nice to have. Some of these are quite easy for people interested in diving in and making a difference in these projects. We set them aside in progress.opensuse.org to make it easily recognizable that these tasks can be taken and are relatively easy. Let’s take a closer look at what tasks there are to play with. Of course, it would be very cool if, through using the new tools, you find other things that would improve the workflow!

  • Helping staging. I already mentioned one way you can help with our Factory staging plugin.
  • Improving test coverage. It might sound boring at first, but it is quite an entertaining task. We have a short documentation explaining how the tests work. In summary, we have a class simulating a limited subset of OBS and we just run commands and check whether our fake OBS ends up in the correct state. This makes writing tests much easier. To make it more interesting, we use Travis CI and Coveralls. The big advantage of those is that every pull request gets a test run showing how you improved the test coverage. It also shows what is not covered yet. In other words, it shows what should be covered next 🙂 A minimal sketch of what such a test can look like follows below the list.
  • Another thing needed to make staging projects more reliable is fixing all the packages that fail randomly. This is mostly a packaging/OBS debugging task, so it doesn't require any Python and can be done package by package.
  • The last thing on the staging projects todo list for now is turning the staging API class into a singleton. For those who know Python this is actually really easy. And while you are at it, you will be taking a look at the rest of our code, and we will definitely not mind anybody polishing it or removing bugs!
coveralls in action
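To make the testing pattern above more concrete, here is a minimal, purely illustrative Python sketch of a test in that style. The class and method names are invented for this example; the real fake-OBS helper and test suite in the staging plugin look different.

import unittest

# Hypothetical stand-in for OBS; the real plugin ships its own fake-OBS class.
class FakeOBS:
    """A tiny in-memory simulation of a limited subset of OBS."""
    def __init__(self):
        self.staging_projects = {"openSUSE:Factory:Staging:A": []}
        self.requests = {123: "new"}

    def select_request(self, request_id, staging):
        """Move a submit request into a staging project."""
        self.staging_projects[staging].append(request_id)
        self.requests[request_id] = "review"

class SelectRequestTest(unittest.TestCase):
    def test_request_lands_in_staging(self):
        obs = FakeOBS()
        # A real test would drive this through the osc plugin command;
        # here we poke the fake directly to keep the sketch short.
        obs.select_request(123, "openSUSE:Factory:Staging:A")
        self.assertIn(123, obs.staging_projects["openSUSE:Factory:Staging:A"])
        self.assertEqual(obs.requests[123], "review")

if __name__ == "__main__":
    unittest.main()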

Helping openQA

In openQA we currently have code-cleaning tasks. It’s about polishing the code we have – like removing the useless index controller and fixing code highlighting. Or doing simple optimizations – like caching image thumbnails.

In the case of openQA, you might be a little confused about where to send your code. We have been doing some short and mid term planning with Bernhard M. Wiedemann, the man behind openqa.opensuse.org and one of the original openQA developers, together with Dominik Heidler. The code considered to be ‘upstream’ has now been moved to several repositories under the os-autoinst organization on GitHub. This code contains all the improvements introduced to extensively test our beloved openSUSE 13.1. The next generation of the tool is being developed right now in a separate repository and will be merged into upstream as soon as we feel confident enough, which means more automated testing, more staging deployments and more code revisions and audits. If you want to help us, you are welcome to send pull requests to either of those repositories and someone will take a look at them.

What next?

As we wrote, we are currently putting all our effort into openQA. We already have basic authentication and authorization, and we are working on tests, fixing bugs and making sure the service in general is easy to deploy. And of course we are working toward our main goal – having a new openQA instance with all the cool features available in public. It is already fairly easy to get your own instance running, so if you are a web or Perl developer, nothing should stop you from playing with it and making it more awesome!

Short report from Installfest 2014 Prague

March 6th, 2014 by

installfest prague
This week, Michal and Tomáš write about their visit to Installfest 2014 in Prague.

Installfest Prague is a local conference that, despite its name, has talk tracks and sometimes even quite technical topics. We, Michal Hrušecký and Tomáš Chvátal, attended this event and spread the Geeko news around…

Linux for beginners workshop

Tomáš worked with local community member Amy Winston to show that our lovely openSUSE 13.1 is great for day-to-day usage, even (or especially) for people migrating from Windows. We demonstrated KDE’s Plasma Desktop and how to use YaST to achieve common tasks, and then asked the audience for specific requests about things to demonstrate. We can tell you the participants liked Steam a lot! Apart from talking about and demonstrating openSUSE, we also explained that SUSE sells SLE and that people can buy it in the store if they want a rock-solid stable OS.

The workshop setup was a bit tricky because the machines didn’t have optical drives and we didn’t have enough USB sticks to work around the issue. Our solution was to boot the ISO directly from a PXE server and force NetworkManager not to start during boot, because it happily overwrote the network configuration and thus lost the ISO it was booting from.

Factory workflow presentation

Over the last months, our team has been working on improving the way we develop openSUSE: the Factory workflow. You can read up on it in earlier blogs here. During the Installfest, we showcased our new workflow which uses openQA and staging projects. We discussed what we are trying to achieve and what we can already do right now with the osc/OBS/openQA combo. People were quite enthusiastic and two people in the audience were already using Factory!

As always we mentioned that we have Easy hacks and we really really want people to work on them.

Overall SUSE presence

openSUSE/SUSE presence was, as usual, high at this event, so we tried our best to let people know that we are cool as a project and great as a company. We shared our openSUSE/SLE DVDs, posters, stickers, … with everyone. During the talks/workshops we also gave out SUSE swag as presents to people who answered some tricky questions.

Job offerings

We let everyone know that we are looking for plenty of new people in various departments, namely QA/QAM, in which a few people got interested and took the printed prospectuses. So hopefully our teams will grow 🙂

About openQA and authentication

February 28th, 2014 by

factory-tested
As you might have read in our previous post, our most important milestone in improving openQA is to make it possible to run the new version in public and accessible to everyone. To achieve this we need to add support for some form of authentication. This blog is about what we chose to support and why.

So what do we need?

openQA has several components that communicate with each other. The central point is the openQA server. It coordinates execution, stores the results and provides both the web interface for human beings and a REST-like API for the less-human beings. The REST API is needed for the so-called ‘workers’, which run the tests and send the results back to the server, but also for client command line utilities. Since this API can be used to schedule new tests, control job execution or upload results to the server, some kind of authentication is needed. We don’t want to write a whole article about REST authentication, but maybe some overview of the general problem could be useful to explain the chosen solution.

Authentication vs authorization

Let’s first clarify two related concepts: authentication vs authorization. Web authentication is easy when there is no API involved. You simply have a user and a password for accessing the service. You provide both to the server through your web browser so the server knows that you are the legitimate owner of the account, that’s all. That means a (hopefully different) combination of user and password for each service, which is annoying and insecure. This is where openID comes into play. It allows any website to delegate authentication to a third party (an openID provider) so you can log in to that website using your Google, GitHub or openSUSE account. In any case, it doesn’t matter if you are using a specific user, an openID user or another alternative system – authentication is always about proving to the server that you are the legitimate owner of the account so you can use it for whatever purpose on the website.

But what if you also provide an API meant to be consumed by applications instead of humans using a web browser? That’s where authorization comes into play. The owner of the account issues an authorization that can be used by an application to access a given subset of the information or the functionality. This authorization has nothing to do with the system used for real authentication. The application has no access to the user credentials (username or password) and is not impersonating the user in any way. It is just acting on his or her behalf for a limited subset of actions. The user can decide to revoke this authorization at any time (and they usually expire after a certain period of time anyway). Of course, a new problem arises: we need to verify that every request comes from a legitimately authorized entity.

Technologies we picked

We need a way to authenticate users, a way to authorize the client applications (like the workers) to act on their behalf, and a way to check the validity of the authorizations. We could rely on the openSUSE login infrastructure (powered by Access Manager), but we want openQA to be usable and secure for everybody willing to install it, regardless of whether they are inside or outside the openSUSE umbrella.

openID

There’s not much to say about user authentication; openID was the obvious choice. It is open, highly standardized, well supported, and compatible with the openSUSE system and with some popular providers like Google or GitHub. And even more important, it is a great way to save ourselves the work and the risk of managing user credentials.

authorization – harder

In terms of popularity and standardization, the equivalent of openID for authorization would be oAuth, but a closer look into it revealed some drawbacks, complexity being the first. oAuth is also not free of controversy, which has resulted in three alternative specifications: 1.0, 1.0a and 2.0. Despite what the numbers suggest, the different specifications are not just versions of the same thing, but different approaches to the problem that have their own supporters and detractors. Using oAuth would mean enabling the openQA server to act as an oAuth provider and the different client applications to act as oAuth consumers. Aside from the complexity and the various versions of the standard, one of the steps of the oAuth authorization flow (used when adding a new script or tool) is redirecting the user to the oAuth provider’s web page so that he/she can explicitly authorize the application. That means a browser is needed, which is not nice at all when the consumer is a command line tool (usually executed remotely). We want to keep it simple and we don’t want to run a browser to configure the workers or any other client tool, so let’s look at less popular alternatives.

There are two sides to the authorization coin: identifying who is sending the request (and hence the privileges) and verifying that the request is legitimate and the server is not being cheated about that identity. The first part is usually achieved with an API key, a useful solution for open APIs in order to keep service volume under control, but not enough for a full authorization system. To achieve the second part, there is a strategy which is both easy to implement and safe enough for most cases: HMAC with a “shared secret”.

In short, every authorization includes two automatically generated keys: the already mentioned API key and an additional private one which is known only to the server and to the application using that authorization. Several headers are added to each request: at least one with the timestamp of the request, another with the API key, and a third one which is the result of applying a keyed hash (HMAC, using an algorithm such as SHA-1) to the request itself (including the timestamp and the parameters) with the private key as the secret. On the server, requests with old timestamps are discarded. For fresh requests, the API key header is used to determine the associated private key, the same computation is applied to the query, and the result is checked against the corresponding header to verify both the integrity of the request and the legitimacy of the sender.
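To make the scheme more concrete, here is a minimal Python sketch of how such signing and verification could look. The header names, the exact string that gets hashed and the key values are illustrative assumptions, not the actual openQA wire format.

import hashlib
import hmac
import time

API_KEY = "1234567890ABCDEF"      # hypothetical public API key
API_SECRET = "D29A3C1F5E7B0468"   # the shared secret, never sent over the wire

def canonical(method, url, params, timestamp):
    # Assumption: the hashed string covers the whole request plus the timestamp.
    query = "".join(f"{k}={v}" for k, v in sorted(params.items()))
    return method + url + query + timestamp

def sign_request(method, url, params, key, secret):
    """Client side: build the authentication headers for one request."""
    timestamp = str(int(time.time()))
    digest = hmac.new(secret.encode(), canonical(method, url, params, timestamp).encode(),
                      hashlib.sha1).hexdigest()
    return {"X-API-Key": key, "X-API-Timestamp": timestamp, "X-API-Hash": digest}

def verify_request(method, url, params, headers, lookup_secret, max_age=300):
    """Server side: reject stale requests, then recompute and compare the hash."""
    if abs(time.time() - int(headers["X-API-Timestamp"])) > max_age:
        return False
    secret = lookup_secret(headers["X-API-Key"])  # find the private key for this API key
    expected = hmac.new(secret.encode(),
                        canonical(method, url, params, headers["X-API-Timestamp"]).encode(),
                        hashlib.sha1).hexdigest()
    return hmac.compare_digest(expected, headers["X-API-Hash"])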

We decided to go for this as it is both simple and secure enough for our use case. Of course, the private key still needs to be stored on both the server and the client side, but it is not the user’s password. It is just a random shared secret with no relation to the real user credentials or identity. It can also be revoked at any point and can only be used for actions like uploading a test result or cancelling a job. Secure enough for a system like openQA, with no personal information stored and no critical functionality exposed.

Conclusion

So, we’ve implemented openID and are working on a lightweight HMAC based authorization system. Next week we’ll update you on the other things we are working on, including our staging projects progress!

First fruits – update on openQA and Staging work

February 19th, 2014 by

In our previous summary, we talked about some basic research and some ground work to build on. This time we have some first exciting results!

openQA

Last week we rearranged the repository a little bit, creating a new branch called "devel" where all the exciting (and not so exciting) changes are taking place. Our little Factory 😉

The main difference between this branch and master is that, as you could read in the previous blog, the devel branch is openQA built on Mojolicious, a nice web development framework. And having a proper web framework is starting to show its benefits: we have openID login available! Unfortunately the current openSUSE openID provider is a little bit weird, so it doesn’t play well with our tool yet, but some other providers are already working, and openSUSE accounts will be the next step. Having working user accounts is necessary to be able to start defining permissions and to make openQA truly multiuser. And to be able to deploy the new version on a public server!

The other main focus of this week has been internal polishing. We have revamped the database layout and changed the way in which the different openQA components communicate with each other. The openQA functionality is spread out over several parts: the workers are responsible for actually executing the tests in virtual machines and reporting the result after every execution; some client utilities are used to load new ISO images and for similar tasks; and, finally, we have the one openQA Server to rule them all. Until now, the communication between the server and the other components was done using JSON-RPC (a lightweight alternative to XML-RPC). We have dropped JSON-RPC in favor of a REST-like API with just JSON over plain HTTP. This change allowed us to implement exactly the same functionality in a way that is simpler, perfectly supported by any web framework, natively spoken by browsers and easier to authenticate (using, for example, plain HTTP authentication or openID). This is also the first step towards future integration with other services (think OBS, as the ultimate goal is to use openQA to test staging projects).
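For illustration, here is a small Python sketch of what the change means for a client such as a worker. The endpoint paths and payload fields are assumptions made up for this example, not the exact openQA API.

import requests

SERVER = "http://localhost:3000"

# Old style (JSON-RPC): one generic endpoint, the method name travels inside the payload.
rpc_payload = {"jsonrpc": "2.0", "id": 1,
               "method": "job_set_done", "params": {"id": 42, "result": "passed"}}
requests.post(f"{SERVER}/jsonrpc", json=rpc_payload)

# New style (REST-like): the resource and the action live in the URL and the HTTP verb,
# and the body is plain JSON that any web framework or browser understands natively.
requests.post(f"{SERVER}/api/v1/jobs/42/set_done", json={"result": "passed"})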

But who tests the tester? openQA is growing and changing quite fast, so we have continued with the task of creating a good testing infrastructure to test openQA itself and make sure that our changes do not result in breakage. We only have a few tests right now, but we now have a solid foundation to write more and more tests.

Staging and package manipulation

In the last blog post we told you we were investigating a test suite to test the abilities of an osc plugin we are writing. osc is the command line tool for handling the Open Build Service, and this plugin is meant to help with the administration of staging projects. We’ve been thinking about how to move forward with the testing part, as we want to make sure the functionality works as advertised. More importantly, we write tests to make sure that our additions and changes do not break existing functionality. We have started merging functionality from the various scripts we had for handling staging thingies and rings into this plugin. This is partially done, so we can already do basic staging project tasks! We can take a request (be it submit or delete) and put it to test in a staging project. We can also move packages between staging projects, and we have a simple YAML format in the project descriptions to indicate which packages and requests are there.

We're all green!
Coolo has already started using the plugin for some tasks, so you can see pretty titles and metadata in the staging project descriptions. Not impressive enough? Let me give you a good headline:

Thanks to staging projects, not a single regression has been introduced this year in the core of Factory

You can enjoy a more detailed description in this announcement written by coolo to the Factory mailing list and have some visual joy in the included screenshot (genuine pr0n for QA engineers).

Last but not least, we also did some cleanup of the sources in the repo and of course we added more tests (as functionality grows). And there has been work on other parts of the plugin, like taking rings into account.

Result

We already have some useful functionality which we are going to expand and build on. Not that much to show yet, but we are progressing towards our goal. You can follow our progress either in the form of tasks (openQA and staging projects) or just follow our commit history on GitHub (again, for both openQA and the staging plugin).

We are very much looking forward to feedback and thoughts – these changes aim to make Factory development easier and thus we need to hear from all of you!

Some news from the trenches

February 7th, 2014 by

As you might know, we are focusing our development efforts on two fronts, namely openQA and staging projects. As we just started, we don’t have fireworks for you (yet), but we did some solid ground work that we are going to build upon.

Working on openQA

We are organizing our daily work on openQA into highly focused sprints of two weeks. The focus of the first sprint was clear: clean up the current codebase to empower future development and lower the entry barrier for casual contributors, which can be translated as “cleaning our own mess”. We created some tasks in progress.o.o, grouped in a version with a surprising and catchy name: Sprint 01.

Got my mojo workin’

Up to now, the openQA web interface was written using just a bunch of custom CGI scripts and some configuration directives in Apache. We were missing a convenient way to access all the bells and whistles of modern web development and some tools to make the code more readable, reliable and easier to extend and test. In short, we were missing a proper web development framework. We evaluated the most popular Perl-based alternatives and finally decided to go for Mojolicious for several reasons:

  • It provides all the functionality we demand from a web framework while being lightweight and unbloated.
  • It’s stated to be a “real time framework” which, buzzwords apart, means that it is designed from the ground up to fully support Comet (long-polling), EventSource and WebSockets. Very handy technologies for implementing some features in openQA.
  • It really “feels” very close to Sinatra, which makes Ruby on Rails developers feel at home. And we have quite some Rails developers hanging around, don’t we? Just think about OBS, software.o.o, WebYast, progress.o.o, OSEM, the TSP app…
  • The Mojolicious motto is “web development can be fun again”. Who could resist that?

We’ve now reached the end of the sprint and we already have something that looks exactly the same as what we had before, but using Mojolicious internally. We are very happy with the framework and we are pretty confident that future development of openQA will be easier and faster than ever. openQA has mojo!

The database layer in openQA

Another part we worked on during the first sprint was the database layer. The user interface part of openQA uses a SQLite database to store the jobs and workers registered in the system. The connection between the code and the database was expressed directly in SQL using a simple API.

We have replaced this layer with another equivalent that uses an ORM (Object-relational mapping) in Perl (DBIx::Class). Every data model in openQA is now a true object that can be created, copied and moved between the different layers of the application. Quite handy.

To make sure we don’t forget anything, we created a bunch of tests covering the whole functionality of the original code, running this test suite after each step of the migration. In this way we have achieved two goals: we now have a simple way to share and update information through the whole system and we can migrate very easily to a different database engine (something that we plan to do in the future).

What to do with staging

Over time, coolo had accumulated quite some scripts that helped him with Factory. Most of them are actually related to something we are doing right now: the staging projects. So in the end we basically migrated all the relevant ones to GitHub, and one by one we are merging their functionality into the staging plugin. We also experimented with test frameworks that we could use to test the plugin itself, selected a few, and we even have a first test! The final plan is to have the whole plugin functionality covered by a proper test suite, so we will know when something breaks. Currently, there is a lot of mess in our repo and the plugin itself needs a big cleanup, but we are working on it.

Contributions are welcome

If you want to help but wonder where to start, we have identified tasks that are a good way to dive into the topic and named them “easy hacks”: mostly self-contained tasks that we expect to require little effort but that we lack the time to do right now. Just jump over to the list for openQA or staging projects.

To grab the code related to staging projects, you only need to clone the already mentioned repository. The openQA code is spread over several repositories (one, two, three and four), but setting up your own instance to play and hack on is a piece of cake using the packages available in OBS (built automatically for every git push).

If you simply want to see what we are doing in more detail, take a look at progress.o.o; we have both openQA and Staging projects there.

We are having a lot of fun, and we encourage you to join us!

Trying to add some light

February 3rd, 2014 by

Lately there has been some confusion regarding our communication. We at the openSUSE Team@SUSE are deeply aware that our communication needs to be improved. So, in the hope of making everything clear again, here is a summary of what is really going on and what is not happening.

Long story short:

  • There WILL be openSUSE 13.2 in November 2014
  • 13.2 WILL have security and maintenance support provided by SUSE
  • We WILL have coolo as release manager for 13.2
  • SUSE is NOT decreasing manpower put into openSUSE
  • Everybody from the community is welcome and encouraged to get involved with some parts of the release process, or even take them over if they want to, and we will support you the best we can in doing that

Now for the long story.

Our team, and only our team – openSUSE@SUSE – is going to work on improving the ‘tooling’ side of the openSUSE project until August. These changes will benefit openSUSE by making it easier to produce better releases in the future.

Nothing changes for the rest of SUSE. SUSE is not abandoning openSUSE. The rest of SUSE will still do the same things they were doing until now and continue to keep openSUSE awesome. This includes Maintenance, Security, Infrastructure, and many other teams besides the openSUSE Team at SUSE who actively support the openSUSE project.

What is our plan?

Our plan is to make sure that future openSUSE releases are easier for everyone to produce. As we grow we could keep putting in more and more full-time release managers (if we find them somewhere), but this approach is probably unsustainable and, more importantly, goes against our desire to empower the community to do more as part of openSUSE.

Therefore, we decided to improve our tools to ensure that making a release is much more straightforward and reliable and we can reduce and distribute the workload needed for integration and release. To make this happen we need time and everyone from the team to work on adapting the tooling side. We also would welcome volunteers to help us with tools and with the following release(s).

With the release date now set in November (mirroring the roadmap for 13.1), the first milestone should be released in May. That is a perfect opportunity to go to the openSUSE Conference in Croatia, where we can meet up, gather volunteers to help and discuss how to work. Remember that openSUSE Travel Support is in place to sponsor everyone who needs financial help to get to the event.

Hopefully this has cleared things up a little. We are really sorry again for our poor communication – we’re going to work on it.

Your truly confused openSUSE Team

spec-cleaner: hide all your precious cruft!

January 31st, 2014 by

As we have stated in our communication over time, our team’s main focus for the foreseeable future is Factory and how to manage all those contributions. The goal is not to increase the number of SRs coming into Factory, but to make sure we can process more of them and that we see even well-hidden consequences, so that Factory stays “stable” and “usable”.


Not really part of our current sprints, but something that will hopefully help us, is spec-cleaner, which Tomáš Chvátal and Tomáš Čech have been working on lately in their free time/hackweek. What is it trying to address? Currently there are some packaging guidelines, but when you write a spec file for your software, you still have plenty of choices. How do you order all the information in the header? Do you use curly brackets around macros? Do you use macros at all? Which ones do you use and which not? Do you use binaries in dependencies? Package config? Perl symbols? Package names? There is the format_spec_file OBS service, which tries to unify the coding style a little, but it leaves quite a lot up to you. That is not necessarily a bad thing, but if you have to compare changes and review packages that use completely different coding styles, the process becomes harder and slower.

spec-cleaner is format_spec_file taken to another level. It tries to unify the coding style as much as it can. It uses consistent conventions, makes most of the decisions mentioned above for you, and if you decided differently in the past, it will try to convert your spec file to follow the conventions it specifies. It doesn’t enforce anything: it is a standalone script, so you don’t have to worry that your spec file will be out of your control. You can run it, verify the result (actually, you should verify the result, as there might still be some bugs) and commit it to OBS. If we all do it, our packages will all look more alike and it will be easier to read and review them.

How to try it? How to help? Well, the code is on GitHub and packages are in OBS. You may have a version of it in your distribution, but that one is heavily outdated (even the 13.1 version), so add the openSUSE:Tools repo and try the version from there.

zypper ar -f obs://openSUSE:Tools/openSUSE_13.1 openSUSE-Tools
zypper in spec-cleaner

You can then go to some local checkout and see what changes it proposes for your spec file. The easiest way is to just let it do its thing and take a look at the changes afterwards.

spec-cleaner -p -i *.spec
osc diff

If it works, great, we will have more unified spec files. If it doesn’t, file a bug 😉

Understanding the NVIDIA driver process

January 15th, 2014 by

Nvidia pic from Muktware
The NVIDIA drivers for openSUSE 13.1 took a while to appear. Many users have asked why this was and we’d like to explain what happened and what we plan to do to prevent this in the future. This post was written with input from the openSUSE developers who maintain these drivers at SUSE and work with NVIDIA to make them available for our users.

How it should work

Legally, the Linux kernel GPLv2 license leaves proprietary binary drivers in a bit of a tangle. While some claim it should be OK, others say not – and that is what most distributions currently assume: one can not ship a Linux distribution with proprietary, binary drivers. NVIDIA has agreed that our users can grab their drivers from the official NVIDIA servers. They are packaged by SUSE engineers however. They take care of both SLE and openSUSE proprietary driver packaging for NVIDIA hardware, and have contacts at NVIDIA who get the drivers up on their ftp mirrors.
The packages are built on a dedicated system, but the package spec (the skeleton for building) is in OBS – for example, G03 is here. Anybody can use these to build the NVIDIA driver locally (the NVIDIA binary blob is downloaded from NVIDIA during the build). The command sequence for local building can be found in the README.

Once the packages are built, they are sent to NVIDIA, which signs them with its key, generates and signs the repositories, and makes them available to the public.

Note that this is all manual and takes a while on NVIDIA’s side (and ours). This, of course, is part of the reason why we don’t offer NVIDIA packages for Factory, our fast-rolling development repository.

What happened

What occurred for 13.1 is a typical case of “everything went wrong”.

Up until a few weeks before the release, the driver did not build against Linux kernel version 3.11. NVIDIA warned against the use of a patch for this problem created by third parties, so the driver team did not have a working build. Due to a holiday, it took a while to get the packages built and pushed to NVIDIA. Unfortunately, by the time this was done, the holiday period at NVIDIA blocked progress for another week. Once the package was signed, it took the web team at NVIDIA another week to get the repository published. It all added up to almost a month and brought quite some inconvenience for users eager to use their latest NVIDIA cards with the latest openSUSE.

About a week later, the openSUSE 13.1 NVIDIA drivers disappeared again from the nvidia servers for a day or so. We don’t know exactly what happened there, it might have simply been a server issue.

What we will do

The first and most obvious thing to change is to improve coordination. The developers taking care of the NVIDIA packages should be made aware of the release planning as early as possible, and a replacement should be available in case of holidays. We noted this in our 13.1 release report and will make sure that there will be a task in our task tracker for this for the next release. We also need to talk to NVIDIA about this to make sure that there, too, somebody can fill in for SUSE’s contacts.

But there are also thoughts on how to improve further. Some of the current ideas:

  • To get drivers for Factory the process needs more automation as the driver may break on kernel or X changes
  • We could try to open up the process and work with community members to secure it in case the SUSE folks are unavailable
  • It would be nice if we could get NVIDIA to see the benefit of working more on the driver in openSUSE, e.g. by doing some testing
  • We could write some scripts that regularly check whether the repo and packages are still available and in a valid state (metadata matches the available RPMs); a sketch of such a check follows below

We’ll have to see which of these we can implement, when and how. But rest assured that we will do what we can to prevent this in the future!
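As an illustration of the last idea, here is a minimal Python sketch of such an availability check. The repository URL is a made-up placeholder, and a real check would also parse the repository metadata and compare it against the published RPMs.

import sys
import urllib.request

REPO = "https://download.example.com/opensuse/13.1"  # hypothetical repository URL

def url_ok(url):
    """Return True if the URL answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=30) as response:
            return response.status == 200
    except OSError:
        return False

def main():
    # Consider the repository healthy if its metadata index is reachable;
    # a fuller check would verify every RPM listed in the metadata as well.
    if not url_ok(f"{REPO}/repodata/repomd.xml"):
        print(f"repository metadata missing: {REPO}", file=sys.stderr)
        return 1
    print("repository looks alive")
    return 0

if __name__ == "__main__":
    sys.exit(main())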

statistics with Geeko inside

And the stats!

After a bit of a hiatus, we’re back with the numbers. Development has slowed down around new year and isn’t back to speed yet so the 10th spot is shared by quite a big group of people…

Spot Name
1 Stephan Kulow
2 Denisart Benjamin
3 Tomáš Chvátal, Dirk Mueller, Michal Vyskocil
4 Hrvoje Senjan
5 Ciaran Farrell
6 Petr Gajdos
7 Lars Vogdt, Pascal Bleser
8 Jan Engelhardt, Charles Arnold
9 Andreas Stieger, Michal Marek, Niels Abspoel, Kyrill Detinov
10 Dinar Valeev, Bjørn Lie, Ulrich Weigand, Tobias Klausmann, Marcus Meissner, Alexander Graf, Robert Schweikert, Martin Vidner

Discussing the future of openSUSE

December 11th, 2013 by

This week, the openSUSE team blog is written by Agustin, talking about the proposals the team has made for openSUSE development.

A few months ago the openSUSE Team started a journey that reached an important milestone last Tuesday, Nov 26th 2013. We have worked on creating a picture of the relevant areas of the project in 2016, together with some of the actions we think should be taken during the following months to achieve it. Stopping work and raising your head once in a while to analyze what is around you and set a direction is a very good exercise.

The process we followed

The first step was data mining. After many hours of analysis, we identified some clear trends that helped us establish a solid starting point to work from. Once that phase was over (it is, in fact, an ongoing process), we worked for a few weeks/months on trying to define that future picture, interviewing several dozen people. We refined that first attempt through several iterations, including many of those who participated in the original round and others who didn’t. Susanne Oberhauser-Hirschoff was the person who drove that process together with Agustin.

We soon realized that discussing high level ideas in a community used to “getting shit done” would be easier if we complemented them with some more down-to-earth proposals, especially on the technical side. We cannot forget that, after all, openSUSE is a technically focused (and very pragmatic) community.

So, in parallel with the already mentioned refinement of the big picture, we started discussing within the team the actions needed to make the big picture a reality for the openSUSE development version, a.k.a. Enhanced/New Factory. After many hours of (sometimes never-ending) discussions, we agreed on the ideas that are currently being published, together with the motivations behind them.

Another aspect we tried to bring to the discussion was a strong dose of realism, trying to ensure that whatever we came up with was compatible with the nature of the project. We have also put focus on making sure that the initial proposal is achievable. As part of the community, we understand very well that we cannot succeed alone. We need to work with you. So we have just opened up with the community a process analogous to the one we went through within the team. It might be different in form but similar in principles and goals.

What are we going through these days?

These days the proposals are being discussed in different mailing lists. We are collecting feedback, discussing it, summarizing it, adapting the proposal to it … trying to reach agreements before defining what to do next.

What does the proposal look like?

We divided the proposal into a series of smaller proposals that we are publishing on the project mailing list, where the general community topics in openSUSE are discussed, and/or on the factory mailing list, where the more technical discussions take place.

  1. openSUSE 2016: taking a picture of openSUSE today
    This mail summarizes the analysis phase we went through. We have tried to provide a simple picture of openSUSE today so that the following articles can be justified to some extent.
  2. openSUSE 2016 picture
    This text summarizes the proposed picture for the end of 2016 (in three years). The goal is to set a direction for openSUSE.

  3. openSUSE Development Workflow

  4. O Factory – Where art Thou?
    Stephan Kulow summarizes the Action Plan for the first aspect pointed out in the previous picture: the new development process (Factory).

The following articles describe in more detail some relevant new elements pointed out in the previous article, since they either are new or modify the current process significantly. Some of the articles are still in the queue to be published.

  1. One of the options for staging projects
    In this mail Michal Hrusecky provides some details and examples on how the new staging projects might work in the future.
  2. openQA in the new proposal
    This text, written by Ludwig Nussel, explains the principles that should drive the inclusion of openQA in the Factory development process, in accordance with the proposed workflow.
  3. Karma for all
    This mail, written by Ancor González, summarizes our ideas for including a social feature in the process to help achieve Factory’s goals.
  4. Policies, or why it’s good to know how to change things
    The new process needs to be adaptive. Antonio Larrosa proposed a way to do this, taking what other projects do in this regard as a reference.

There might be an eighth article describing some smaller, still relevant, ideas. After publishing the “content”, we will release one last article providing information about how to achieve these ideas, also describing our commitment in terms of effort and pointing out the challenges we perceive in the plan from the execution point of view.

We would like to invite you to the debate if you haven’t raised your opinions yet.