

Archive for February, 2014

About openQA and authentication

February 28th, 2014

As you might have read in our previous post, our most important milestone in improving openQA is to make it possible to run the new version publicly, accessible to everyone. To achieve this we need to add support for some form of authentication. This post is about what we chose to support and why.

So what do we need?

openQA has several components that communicate with each other. The central point is the openQA server. It coordinates execution, stores the results and provides both the web interface for human beings and a REST-like API for the less-human beings. The REST API is needed for the so-called ‘workers’, which run the tests and send the results back to the server, but also for client command line utilities. Since this API can be used to schedule new tests, control job execution or upload results to the server, some kind of authentication is needed. We don’t want to write a whole article about REST authentication, but maybe some overview of the general problem could be useful to explain the chosen solution.

Authentication vs authorization

Let’s first clarify two related concepts: authentication vs authorization. Web authentication is easy when there is no API involved. You simply have a user and a password for accessing the service. You provide both to the server through your web browser so the server knows that you are the legitimate owner of the account, that’s all. That means a (hopefully different) combination of user and password for each service, which is annoying and insecure. This is where openID comes into play. It allows any website to delegate authentication to a third party (an openID provider) so you can log in to that website using your Google, Github or openSUSE account. In any case, it doesn’t matter if you are using a specific user, an openID user or another alternative system – authentication is always about proving to the server that you are the legitimate owner of the account so you can use it for whatever purpose in the website.

But what if you also provide an API meant to be consumed by applications instead of humans using a web browser? That’s where authorization comes into play. The owner of the account issues an authorization that can be used by an application to access a given subset of the information or functionality. This authorization has nothing to do with the system used for the real authentication. The application has no access to the user credentials (username or password) and is not impersonating the user in any way. It is just acting on their behalf for a limited subset of actions. The user can decide to revoke this authorization at any time (authorizations usually simply expire after a certain period anyway). Of course, a new problem arises: we need to verify that every request comes from a legitimately authorized entity.

Technologies we picked

We need a way to authenticate users, a way to authorize client applications (like the workers) to act on their behalf and a way to check the validity of those authorizations. We could rely on the openSUSE login infrastructure (powered by Access Manager), but we want openQA to be usable and secure for everybody willing to install it, whether inside or outside the openSUSE umbrella.

openID

There’s not much to say about user authentication; openID was the obvious choice. Open, highly standardized, widely supported and compatible with both the openSUSE system and popular providers like Google or Github. Even more important, it is a great way to save ourselves the work and the risk of managing user credentials.

Authorization – harder

In terms of popularity and standardization, the equivalent of openID for authorization would be oAuth, but a closer look into it revealed some drawbacks, complexity being the first. oAuth is also not free of controversy, which has resulted in three alternative specifications: 1.0, 1.0a and 2.0. Despite what the numbers suggest, these are not just versions of the same thing, but different approaches to the problem, each with its own supporters and detractors. Using oAuth would mean enabling the openQA server to act as an oAuth provider and the different client applications to act as oAuth consumers. Aside from the complexity and the various versions of the standard, one of the steps of the oAuth authorization flow (used for adding a new script or tool) is redirecting the user to the oAuth provider web page so he or she can explicitly authorize the application. That means a browser is needed, which is not nice at all when the consumer is a command line tool (usually executed remotely). We want to keep it simple and we don’t want to run a browser to configure the workers or any other client tool. Let’s look for less popular alternatives.

There are two sides to the authorization coin: identifying who is sending the request (and hence the privileges) and verifying that the request is legitimate and the server is not being cheated about that identity. The first part is usually achieved with an API key, a useful solution for open APIs in order to keep service volume under control, but not enough for a full authorization system. To achieve the second part, there is a strategy which is both easy to implement and safe enough for most cases: HMAC with a “shared secret”.

In short, every authorization includes two automatically generated keys: the already mentioned API key and an additional private key which is only known to the server and to the application using that authorization. In each request, several headers are added: at least one with the timestamp of the request, another with the API key and a third one which is the result of applying a keyed hash algorithm (such as HMAC-SHA1) to the request itself (including the timestamp and the parameters) using the private key as the shared secret. On the server, requests with old timestamps are discarded. For fresh requests, the API key header is used to look up the associated private key and the same algorithm is applied to the request, checking the result against the corresponding header to verify both the integrity of the request and the legitimacy of the sender.
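This scheme is easy to sketch in a few lines of Perl. Below is a minimal illustration of the client side; the header names and key values are invented for the example and are not necessarily what openQA will end up using:

use strict;
use warnings;
use Digest::SHA qw(hmac_sha1_hex);

# keys issued by the server when the authorization was created (made up here)
my $api_key = '1234567890ABCDEF';
my $secret  = 'FEDCBA0987654321';

my $method    = 'POST';
my $path      = '/jobs/42/cancel';
my $timestamp = time;

# sign the request (method, path and timestamp) with the shared secret
my $hash = hmac_sha1_hex("$method $path $timestamp", $secret);

# the request then carries three extra headers, e.g.:
#   X-API-Key:       $api_key
#   X-API-Timestamp: $timestamp
#   X-API-Hash:      $hash
# The server looks up the secret for X-API-Key, recomputes the HMAC and
# rejects the request if the hash differs or the timestamp is too old.

Note that the secret itself never travels over the wire; only the hash does.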

We decided to go for this as it is both simple and secure enough for our use case. Of course, the private key still needs to be stored on both the server and the client side, but it’s not the user’s password. It’s just a random shared secret with no relation to the real user credentials or identity. It can also be revoked at any point and can only be used for actions like uploading a test result or cancelling a job. Secure enough for a system like openQA, with no personal information stored and no critical functionality exposed.

Conclusion

So, we’ve implemented openID and are working on a lightweight HMAC based authorization system. Next week we’ll update you on the other things we are working on, including our staging projects progress!

Goodbye EC2 Tools, Long Live AWS Tools

February 26th, 2014

For quite some time we had a package named ec2-api-tools in the Cloud:EC2 project, and I suspect many who work with EC2 have found the package and were using the ec2-* commands to manage stuff in EC2. Alongside the ec2-api-tools, Amazon maintained separate ec2-* tool sets for various other services. Keeping up with the armada of Amazon developers is not easy, and thus the other ec2-* tool sets never got packaged.

Now a new integrated set of tools is available, invoked with the "aws" command and provided by the aws-cli package. The package is available from the Cloud:Tools project and a submit request to Factory is pending. The new package does not obsolete the ec2-api-tools package, as there is no issue with having both packages installed. However, I did take the liberty to remove the ec2-api-tools package from the Cloud:EC2 project, as it would no longer receive updates considering that we have a nice new tool that unifies all Amazon services. The documentation for the new command can be found online.
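To give an idea, here are a few typical invocations of the new tool (region and values are arbitrary examples):

# one-time setup of credentials and default region
aws configure
# roughly what ec2-describe-instances used to do
aws ec2 describe-instances --region us-east-1
# other services hang off the same command, e.g. S3
aws s3 ls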

The aws code is hosted on GitHub, so contributing fixes is easy; that is another big plus over the ec2-* tool sets.

Yes, and of course we need to get openSUSE 13.1 into EC2. I am working on that, stay tuned…

First fruits – update on openQA and Staging work

February 19th, 2014

In our previous summary, we talked about some basic research and some ground work to build on. This time we have some first exciting results!

openQA

Last week we rearranged the repository a little bit, creating a new branch called "devel" where all the exciting (and not so exciting) changes are taking place. Our little Factory 😉

The main difference between this branch and master is that, as you could read in the previous post, the devel branch is openQA built on Mojolicious, a nice web development framework. And having a proper web framework is starting to show its benefits: we have openID login available! Unfortunately the current openSUSE openID provider is a little bit weird, so it doesn’t play well with our tool yet, but other providers already work, and openSUSE accounts will be the next step. Having working user accounts is necessary to be able to start defining permissions and to make openQA truly multiuser. And to be able to deploy the new version on a public server!

The other main focus of this week has been internal polishing. We have revamped the database layout and changed the way in which the different openQA components communicate with each other. The openQA functionality is spread out over several parts: the workers are responsible for actually executing the tests in virtual machines, reporting the result after every execution; some client utilities are used to load new ISO images and for similar tasks; and, finally, we have the one openQA server to rule them all. Until now, the communication between the server and the other components was done using JSON-RPC (a lightweight alternative to XML-RPC). We have dropped JSON-RPC in favor of a REST-like API with just JSON over plain HTTP. This change allowed us to implement exactly the same functionality in a way that is simpler, perfectly supported by any web framework, natively spoken by browsers and easier to authenticate (using, for example, plain HTTP authentication or openID). This is also the first step towards future integration with other services (think OBS, as the ultimate goal is to use openQA to test staging projects).
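As a taste of how simple the new communication is, this is roughly what a worker reporting a result could look like using Mojo::UserAgent; the route, port and payload here are made up for illustration and do not reflect the final API:

use strict;
use warnings;
use Mojolicious::Lite;   # not needed by the client itself, shown for context
use Mojo::UserAgent;

my $ua = Mojo::UserAgent->new;

# plain JSON over HTTP: no RPC envelope, just a POST to a resource
my $tx = $ua->post(
    'http://localhost:9526/jobs/42/result',
    json => { result => 'passed', backend => 'qemu' },
);
die "upload failed\n" unless ($tx->res->code // 0) == 200;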

But who tests the tester? openQA is growing and changing quite fast, so we have continued with the task of creating a good testing infrastructure to test openQA itself and make sure that our changes do not result in breakage. We only have a few tests right now, but we now have a solid foundation to write more and more of them.

Staging and package manipulation

In the last blog post we told you we were investigating a test suite to test the abilities of an osc plugin we are writing. osc is the command line tool for handling the Open Build Service, and this plugin is meant to help with the administration of staging projects. We’ve been thinking about how to move forward with the testing part, as we want to make sure the functionality works as advertised. More importantly, we write tests to make sure that our additions and changes do not break existing functionality. We have started merging the functionality of the various scripts we had for handling staging things and rings into this plugin. This is partially done, so we can already perform basic staging project tasks: we can take a request (be it submit or delete) and put it to test in a staging project, we can move packages between staging projects, and we keep a simple YAML document in the project description (see the sketch below) to indicate which packages and requests are there.
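For the curious, the metadata stored in a staging project’s description is a small YAML document along these lines (field names simplified for the example):

requests:
- { id: 221123, package: gcc }
- { id: 221187, package: glibc }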

[Screenshot: we're all green!]
Coolo already started using the plugin for some tasks, so you can see pretty titles and metadata in staging project descriptions. Not impressive enough? Let me provide a good headline:

Thanks to staging projects, not a single regression has been introduced this year in the core of Factory

You can enjoy a more detailed description in this announcement written by coolo on the Factory mailing list, and have some visual joy in the included screenshot (genuine pr0n for QA engineers).

Last but not least, we also did some cleanup of the sources in the repo and of course we added more tests (as functionality grows). And there has been work on other parts of the plugin, like taking rings into account.

Result

We already have some useful functionality which we are going to expand and build on. Not that much to show yet, but we are progressing towards our goal. You can follow our progress either in the form of tasks (openQA and staging projects) or through our commit history on github (again, for both openQA and the staging plugin).

We are very much looking forward to feedback and thoughts – these changes aim to make Factory development easier, and thus we need to hear from all of you!

Code quality and Code guidelines

February 19th, 2014

Today I would like to take some of your precious time to talk about something that gets too little attention in the open source world. I would like to talk a little bit about QA and coding guidelines.
Many readers who work in companies that take care of themselves, or who are involved in some major open source or free software project like KDE, GNOME or the Linux kernel, know that they have coding guidelines. KDE has them here and the kernel has them here. So they have them, and what's the big deal? (more…)

Elfcloud.fi: a small cloud storage that could

February 13th, 2014

I’ve been seeking the cloud storage of my life for a long time now. My needs are not big (though, as I have learned, most of the time they are still too big): just space and, if possible, a Linux/Mac OS X FUSE filesystem for using the service.
Whether the service is open source or not isn’t such a big thing in this case. How the data is stored (encrypted or not) and how I can get it out of there when I need it is what I treasure most.
I have tested SpiderOak, Wuala, Dropbox and box.net, but none of them fits my needs perfectly. As I want to use these services with Linux: all of them have Linux clients and most of them have a FUSE filesystem. Lately I have been using Wuala, but its problem is that the FUSE part is written in Java (it works just fine under openSUSE!). With the GUI it’s clean and, as said, works very well, but if you don’t want to use the GUI you are in a bit of trouble: headless use is supported, but I haven’t got it working. Xvfb comes to the rescue, but that’s not really a solution!
When I heard about Elfcloud.fi I thought: okay, we have a storage provider in Finland, no big deal. Then I popped over to their website and noticed that they are a really open source friendly company. They have a Python library on Github (Apache Version 2.0 licensed) and full API documentation, which is dead simple JSON stuff, and their pricing scheme isn’t bad either. I should mention that this is a hardcore crypto cloud service: if you lose your key you can wave goodbye to your data. Best of all, they are going to have a marvelous FUSE implementation, which I am writing at the moment (they also have a C++ library available on request, Apache Version 2.0 licensed of course).
So if you want to keep your data in Europe, stored in the same country that Google and Microsoft also trust for their datacenters, you can check out elfcloud.fi. It’s not for everyone, for sure, but for those who need a place to store data without hassle, this can be the stuff for you.

You can download the Python API RPM from here: http://download.opensuse.org/repositories/home:/illuusio:/elfcloud/

another way to access a cloud VM’s VNC console

February 8th, 2014

If you have used an OpenStack-based cloud, you will have seen the dashboard, which includes web-based VNC access using noVNC + WebSockets.
However, it was not possible to access this VNC directly (e.g. with my favourite gvncviewer from the gtk-vnc-tools package), because the actual compute nodes are hidden, and accessing them directly would circumvent authentication, too.

I want this as an option to add an OpenStack backend to openQA, my OS autotesting framework, which emulates a user by using a few primitives: grabbing screenshots and typing keys (both can be done through VNC), powering up a machine (= nova boot) and inserting/ejecting an installation medium (= nova volume-attach / nova volume-detach).
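In nova CLI terms, those primitives map to something like the following (image, flavor and instance names are placeholders):

# power up a machine
nova boot --image openSUSE-13.1 --flavor m1.small testvm
# insert / eject an installation medium
nova volume-attach testvm <volume-id> /dev/vdb
nova volume-detach testvm <volume-id>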

To allow for this, I wrote a small Perl script that translates a TCP connection into a WebSocket connection.
It is installed like this:
git clone https://github.com/bmwiedemann/connectionproxy.git
sudo /sbin/OneClickInstallUI http://multiymp.zq1.de/perl-Protocol-WebSocket?base=http://download.opensuse.org/repositories/devel:languages:perl

and is used like this:
nova get-vnc-console $YOURINSTANCE novnc
perl wsconnectionproxy.pl --port 5942 --to http://cloud.example.com:6080/vnc_auto.html?token=73a3e035-cc28-49b4-9013-a9692671788e
gvncviewer localhost:42

Note that VNC display :42 means TCP port 5900+42 = 5942, which is exactly where the proxy listens.

I hope this neat code will be useful for other people and tasks as well and wish you a lot of fun with it.

Some technical details:

  • The code is able to handle multiple connections in a single thread using select (see the sketch after this list).
  • HTTPS is not supported in the code, but likely could be done with stunnel.
  • WebSocket-code was written in 3h.
  • noVNC tokens expire after a few minutes.
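For the curious, a bare-bones version of such a select loop looks roughly like this. It is a simplified sketch; the real script additionally does the WebSocket handshake and framing:

use strict;
use warnings;
use IO::Select;
use IO::Socket::INET;

my $listener = IO::Socket::INET->new(
    LocalPort => 5942,
    Listen    => 5,
    ReuseAddr => 1,
) or die "listen: $!";

my $select = IO::Select->new($listener);
while (1) {
    for my $fh ($select->can_read) {
        if ($fh == $listener) {
            # new client: accept the connection and watch it too
            $select->add($listener->accept);
        } elsif (sysread($fh, my $buf, 4096)) {
            # here the real script translates between raw TCP data
            # and WebSocket frames, forwarding to the peer connection
        } else {
            # EOF or error: stop watching and close
            $select->remove($fh);
            close $fh;
        }
    }
}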

Some news from the trenches

February 7th, 2014

As you might know, we are focusing our development efforts on two fronts, namely openQA and staging projects. As we have just started, we don’t have fireworks for you (yet), but we did some solid ground work that we are going to build upon.

Working on openQA

We are organizing our daily work on openQA into highly focused two-week sprints. The focus of the first sprint was clear: clean up the current codebase to empower future development and lower the entry barrier for casual contributors, which can be translated as “cleaning our own mess”. We created some tasks in Progress (progress.o.o), grouped in a version with a surprising and catchy name: Sprint 01.

Got my mojo workin’

Up to now, the openQA web interface was written using just a bunch of custom CGI scripts and some configuration directives in Apache. We missed a convenient way to access all the bells and whistles of modern web development and some tools to make the code more readable, reliable and easier to extend and test. In short, we missed a proper web development framework. We evaluated the most popular Perl-based alternatives and finally decided to go for Mojolicious for several reasons (a small example follows the list):

  • It provides all the functionality we demand from a web framework while being lightweight and unbloated.
  • It’s billed as a “real time framework” which, buzzwords apart, means that it is designed from the ground up to fully support Comet (long-polling), EventSource and WebSockets. These are very handy technologies for implementing some features in openQA.
  • It really “feels” very close to Sinatra, which makes Ruby on Rails developers feel at home. And we have quite some Rails developers hanging around, don’t we? Just think about OBS, software.o.o, WebYast, progress.o.o, OSEM, the TSP app…
  • Mojolicious’ motto is “web development can be fun again”. Who could resist that?
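To show why we find it lightweight: a complete Mojolicious::Lite application serving JSON fits in a handful of lines. The route below is a made-up example, not openQA’s real API:

use Mojolicious::Lite;

# GET /jobs returns a JSON list of jobs
get '/jobs' => sub {
    my $self = shift;
    $self->render(json => { jobs => [ { id => 42, state => 'running' } ] });
};

app->start;

Save it as app.pl and run it with "morbo app.pl"; the route is then served at http://localhost:3000/jobs.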

We’ve now reached the end of the sprint and we already have something that looks exactly the same as what we had before, but uses Mojolicious internally. We are very happy with the framework and pretty confident that future development of openQA will be easier and faster than ever. openQA has mojo!

The database layer in openQA

Another part that we worked on during the first sprint was the database layer. The user interface part of openQA uses an SQLite database to store the jobs and workers registered in the system. The connection between the code and the database used to be expressed directly in SQL using a simple API.

We have replaced this layer with another equivalent that uses an ORM (Object-relational mapping) in Perl (DBIx::Class). Every data model in openQA is now a true object that can be created, copied and moved between the different layers of the application. Quite handy.
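For readers unfamiliar with DBIx::Class, a result class and its usage look roughly like this; the table and column names are invented for the example and do not match openQA’s real schema:

package OpenQA::Schema::Result::Jobs;
use base 'DBIx::Class::Core';

__PACKAGE__->table('jobs');
__PACKAGE__->add_columns(qw(id name state));
__PACKAGE__->set_primary_key('id');

1;

# elsewhere in the application, rows become plain Perl objects:
#   my $schema = OpenQA::Schema->connect('dbi:SQLite:openqa.db');
#   my $job = $schema->resultset('Jobs')->create(
#       { name => 'x86_64-kde', state => 'scheduled' });
#   $job->update({ state => 'running' });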

To make sure we don’t forget anything, we created a bunch of tests covering the whole functionality of the original code, running this test suite after each step of the migration. In this way we achieved two goals: we now have a simple way to share and update information throughout the whole system, and we can migrate very easily to a different database engine (something we plan to do in the future).
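Conceptually, the tests are ordinary Perl test scripts. A trivial sketch, reusing the hypothetical schema class and columns from the example above rather than the real openQA code:

use strict;
use warnings;
use Test::More tests => 2;

use OpenQA::Schema;    # hypothetical schema class

# use a throwaway in-memory database for the test run
my $schema = OpenQA::Schema->connect('dbi:SQLite::memory:');
$schema->deploy;       # create the tables from the schema definition

my $job = $schema->resultset('Jobs')
                 ->create({ name => 'textmode', state => 'scheduled' });
is($job->state, 'scheduled', 'new job starts out scheduled');

$job->update({ state => 'running' });
is($job->state, 'running', 'job state can be updated');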

What to do with staging

Over time, coolo had accumulated quite some scripts that helped him with Factory. Most of them are actually related to something we are doing right now: the staging projects. So we basically migrated all the relevant ones to github and we are merging their functionality into the staging plugin one by one. We also experimented with test frameworks that we could use to test the plugin itself, selected a few, and we even have a first test! The final plan is to have the whole plugin functionality covered by a proper test suite, so we will know when something breaks. Currently there is a lot of mess in our repo and the plugin itself needs a big cleanup, but we are working on it.

Contributions are welcome

If you want to help but wonder where to start, we have identified tasks that are good for diving into the topic and named them “easy hacks”: mostly self-contained tasks that we expect to require little effort, but that we lack the time to do right now. Just jump over the list for openQA or staging projects.

To grab the code related to the staging projects, you only need to clone the already mentioned repository. The openQA code is spread over several repositories (one, two, three and four), but setting up your own instance to play and hack on is a piece of cake using the packages available in OBS (built automatically on every git push).

If you simply want to see what we are doing in more detail, take a look at progress.o.o; we have both openQA and Staging projects there.

We are having a lot of fun, and we encourage you to join us!

Fosdem 2014 Report & Beta Testing New openSUSE Booth Merchandising Stuff

February 4th, 2014

Fosdem 2014


Again this year, Fosdem was really delightful, if crowded as hell in a number of conference rooms.

But if there’s a constant, it is the awesomeness of the Fosdem staff and its armada of volunteers. To all of you who made this event so great: please receive, in the name of openSUSE’s community, our warmest thanks and congratulations.

I won’t be making a mistake if I predict a big success for the videos of the different talks in the following weeks.

openSUSE merchandising: new collection

After a loooong wait, perceived as a century, the openSUSE booth was furnished with the next generation of merchandising stuff.

At least some part of the complete kit, which should be available in April.

(more…)

Trying to add some light

February 3rd, 2014

Lately there has been some confusion regarding our communication. We at the openSUSE Team@SUSE are deeply aware that our communication needs to be improved. So, in the hope of making everything clear again, here is a summary of what is really going on and what is not happening.

Long story short:

  • There WILL be openSUSE 13.2 in November 2014
  • 13.2 WILL have security and maintenance support provided by SUSE
  • We WILL have coolo as release manager for 13.2
  • SUSE is NOT decreasing manpower put into openSUSE
  • Everybody from the community is welcome and encouraged to be involved with the release process and, if they want, to take over some parts of it; we will support you the best we can in doing that

Now for the long story.

Our team, and only our team – openSUSE@SUSE – is going to work on improving the ‘tooling’ side of the openSUSE project until August. These changes will benefit openSUSE by making it easier to produce better releases in the future.

Nothing changes for the rest of SUSE. SUSE is not abandoning openSUSE. The rest of SUSE will still do the same things they were doing until now and continue to keep openSUSE awesome. This includes Maintenance, Security, Infrastructure, and many other teams besides the openSUSE Team at SUSE who actively support the openSUSE project.

What is our plan?

Our plan is to make sure that future openSUSE releases are easier for everyone to produce. As we grow, we could keep adding more and more full-time release managers (if we could find them somewhere), but this approach is probably unsustainable and, more importantly, goes against our desire to empower the community to do more as part of openSUSE.

Therefore, we decided to improve our tools to ensure that making a release is much more straightforward and reliable, so we can reduce and distribute the workload needed for integration and release. To make this happen we need time and everyone from the team working on adapting the tooling side. We would also welcome volunteers to help us with the tools and with the following release(s).

With the release date now set for November (mirroring the 13.1 roadmap), the first milestone should be released in May. That is a perfect opportunity to go to the openSUSE Conference in Croatia, where we can meet up, gather volunteers to help and discuss how to work. Remember that openSUSE Travel Support is in place to sponsor everyone who needs financial help to get to the event.

Hopefully we have now cleared things up a little. We are really sorry again for our poor communication; we’re going to work on it.

Your truly confused openSUSE Team