
Archive for the ‘Infrastructure’ Category

Using openSUSE as a reverse tunnel site for Windows 7 or 8.1 remote desktop

April 20th, 2015 by

If you can’t open a hole in your office / home firewall, then a reverse tunnel can let you work around the issue.  This blog post uses Cygwin, ssh and autossh to create and maintain a reverse tunnel through your firewall.

You should be aware that if you follow the steps below you will punch a hole through your firewall, so be sure to consider the security issues associated with that hole.  Many organizations require security beyond a simple login and password when connectivity is allowed from outside the firewall.  In some organizations, following these instructions without authorization from your IT security team could be a firing offense.

In theory this functionality is relatively basic, but there are lots of resources on the web that only serve to complicate the matter.  The instructions below were followed in 2015, with a current SSH, to create an actual working reverse tunnel.
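The overall shape of the command this procedure builds up to can be sketched as follows. The hostname, account name and ports are the placeholder values used throughout this post; the sketch only prints the command rather than running it:

```shell
#!/bin/sh
# Shape of the reverse tunnel this post builds up to. ssh.host.name, the
# autossh account, and port 4489 are the example values used throughout;
# substitute your own. The command is printed here rather than executed.
SERVER_USER=autossh
SERVER=ssh.host.name
REMOTE_PORT=4489   # alternate port opened on the openSUSE server
LOCAL_PORT=3389    # Remote Desktop port on the target Windows PC

# -N: no remote shell, tunnel only
# -R: connections to $REMOTE_PORT on the server are forwarded back
#     through the tunnel to localhost:$LOCAL_PORT on the target PC
CMD="ssh -N -R $REMOTE_PORT:localhost:$LOCAL_PORT $SERVER_USER@$SERVER"
echo "$CMD"
```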

The assumed situation is you have:

– Windows 7 or 8.1 PC behind a firewall you want to remote desktop to (the target PC)
– An openSUSE server in the cloud that you are able to ssh into and open appropriate ports and firewall holes
– A client PC from which you want to originate Remote Desktop sessions

The instructions here borrow heavily from the below blog post, but I was unable to get the tunnels to work by following the steps described at that site:

Creating persistent SSH tunnels in Windows using autossh

What has worked for me so far is:

  1. On the openSUSE server
    1. Ensure you have a user account “autossh” (or whatever you want to call it).  This will be used exclusively for reverse tunnels.
    2. Ensure you have a normal user account you can use with scp to copy files from the target PC to the openSUSE server.
  2. On the target (destination) PC:
    1. Ensure you have remote desktop setup.  Machines on the local LAN should be able to remote desktop into the PC prior to starting this procedure.
    2. Download Cygwin (http://www.cygwin.com/)
    3. Install Cygwin, selecting the autossh and openssh packages.
    4. Start the Cygwin shell (Start -> Programs -> Cygwin).
    5. Generate a public/private key pair.
      1. At the command line, run: ssh-keygen
      2. Accept the default file locations
      3. Use an empty passphrase
    6. Copy your newly-created public key to the SSH server.
      1. scp .ssh/id_rsa.pub user_account@ssh.host.name:id_rsa.pub
  3. Add your public key to your list of authorized keys on the server.
    1. Login to your SSH server as your normal user_account
    2.  mv id_rsa.pub  /tmp
    3. su -  # become root
    4. Ensure /home/autossh/.ssh exists
      1. # ls -ld /home/autossh/.ssh
        drwxr-xr-x 2 autossh users 4096 Apr 24 2015 /home/autossh/.ssh
      2. If not: mkdir /home/autossh/.ssh; chown autossh.users /home/autossh/.ssh; chmod 755 /home/autossh/.ssh
    5. cat /tmp/id_rsa.pub >>  /home/autossh/.ssh/authorized_keys
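Condensed, the server-side steps above amount to the following. This is a hedged sketch run against a scratch directory rather than the real /home/autossh, so it is safe to try as-is; the key line is a fake placeholder and the chown to the autossh user is commented out because the sandbox is owned by the current user anyway:

```shell
#!/bin/sh
# Sketch of the server-side key install (steps 3.2-3.5), sandboxed in a
# scratch directory standing in for /home/autossh. The public key below
# is a fake placeholder, not a real key.
HOME_DIR=$(mktemp -d)                # stands in for /home/autossh
PUB=$(mktemp)                        # stands in for /tmp/id_rsa.pub
echo "ssh-rsa AAAAB3placeholder user@targetpc" > "$PUB"

mkdir -p "$HOME_DIR/.ssh"
chmod 755 "$HOME_DIR/.ssh"           # permissions shown in the post
# chown autossh.users "$HOME_DIR/.ssh"   # needed on the real server
cat "$PUB" >> "$HOME_DIR/.ssh/authorized_keys"

grep -c '^ssh-rsa' "$HOME_DIR/.ssh/authorized_keys"   # 1 if the key landed
```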
  4. Tweak the sshd_config on the server
    1. By default openSUSE enables “AllowTcpForwarding” and “TCPKeepAlive”. Verify they are either commented out or set to “yes” in /etc/ssh/sshd_config
    2. Set “GatewayPorts yes” and “ClientAliveInterval 300” in /etc/ssh/sshd_config.  Also make sure they are not commented out.
    3. restart sshd to get the config values to be re-read:    sudo systemctl restart sshd.service
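A quick way to check those four settings before restarting sshd is a couple of greps. The sketch below writes a sample config to a temp file so it is safe to run as-is; point CONFIG at /etc/ssh/sshd_config on the real server instead:

```shell
#!/bin/sh
# Check the sshd_config settings from step 4. A sample config is used
# here; set CONFIG=/etc/ssh/sshd_config on the real server.
CONFIG=$(mktemp)
cat > "$CONFIG" <<'EOF'
GatewayPorts yes
ClientAliveInterval 300
#AllowTcpForwarding yes
#TCPKeepAlive yes
EOF

for opt in GatewayPorts ClientAliveInterval; do
  if grep -q "^$opt" "$CONFIG"; then
    echo "ok: $opt is set"
  else
    echo "MISSING: $opt must be set explicitly"
  fi
done
for opt in AllowTcpForwarding TCPKeepAlive; do
  # commented out (the default is yes) or explicitly "yes" are both fine
  if grep -q "^$opt no" "$CONFIG"; then
    echo "PROBLEM: $opt is disabled"
  fi
done
# then have the config re-read: sudo systemctl restart sshd.service
```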
  5. Test your SSH key.
    1. Logout of your SSH server.
    2. Login to your SSH server again, but as autossh this time. This time, your key will be used for authentication and you won’t be challenged for your login credentials. If you are not logged in automatically, review the previous steps. Or contact your server administrator.
      1. ssh autossh@ssh.host.name
    3. Logout of your SSH server.
    4. Exit the Cygwin shell.
  6. Test your SSH Tunnel capability
    1. Open a cmd prompt on your target PC as administrator
      1. start -> run -> cmd -> right click on “cmd” -> left click “run as administrator”
    2. C:\cygwin\bin\ssh -N -R 4489:localhost:3389 autossh@ssh.host.name
      1. Note that this should open an alternate port (4489) on your openSUSE server in the cloud.  Remote Desktop uses 3389 by default, so you need to use an alternate port on the openSUSE server end.
      2. Any connections to the alternate port should be funneled through the SSH tunnel back to the Windows 7 PC on port 3389
    3. From a 3rd computer open a remote desktop connection to ssh.host.name:4489
      1. Note that remote desktop uses :4489 after the server name to designate an alternate port.
      2. If it acts like you’re not connecting at all, in all likelihood you’re not.  You probably have a firewall in place on the openSUSE server.
        1. Open port 4489 in your openSUSE server firewall
          1. https://doc.opensuse.org/documentation/html/openSUSE_122/opensuse-security/cha.security.firewall.html#sec.security.firewall.SuSE.yast
          2. Or: sudo /sbin/yast -> Security and Users -> Firewall -> Allowed Services -> Advanced -> add 4489 to the list of TCP Ports -> OK -> Next -> Finish -> Quit
      3. retry remote desktop connection
    4. Once it works, from the Windows 7 command prompt kill the ssh connection to your openSUSE server (Ctrl-C)
  7. Test your AutoSSH Tunnel capability
    1. From the CMD prompt running as administrator
      1. C:\cygwin\bin\autossh -M 20000 -N -R 4489:localhost:3389 autossh@ssh.host.name
        1. Note that -M opens a monitoring port (I’m not sure how to leverage that).  The monitoring port is opened on the server (ssh.host.name), so if you have multiple autossh commands pointed at the same server, each should use a unique monitoring port as well as a unique tunnel port (4489 in the above).
    2. From a 3rd computer open a remote desktop connection to ssh.host.name:4489
      1. Make sure you terminate your remote desktop session from the 3rd computer when done testing
    3. If it worked, from the Windows 7 command prompt kill the autossh command (Ctrl-C)
    4. exit out of your cmd window
  • At this point you can manually invoke autossh to setup a semi-persistent tunnel
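If you end up running more than one tunnel against the same server, each autossh invocation needs its own tunnel port and its own -M monitoring port. A sketch, where the second tunnel (ssh on 4422) is a hypothetical example I've added for illustration; only the first command comes from the steps above:

```shell
#!/bin/sh
# Two simultaneous tunnels to the same server: distinct tunnel ports and
# distinct -M monitoring ports. The second tunnel is a hypothetical
# example. autossh also uses the port just above -M for its echo traffic,
# so the monitoring ports are spaced by 2. Commands are printed, not run.
SERVER=autossh@ssh.host.name
T1="autossh -M 20000 -N -R 4489:localhost:3389 $SERVER"   # remote desktop
T2="autossh -M 20002 -N -R 4422:localhost:22 $SERVER"     # hypothetical ssh tunnel
printf '%s\n%s\n' "$T1" "$T2"
```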

Setup the autossh feature as a Windows service


  • The below uses cygrunsrv to install the Windows service.  If you experience problems, cygrunsrv -L, cygrunsrv -LV, and cygrunsrv -R <service> may all be useful for diagnosing them.  The first two commands list the installed services, and -R removes installed services.
  • Logs for cygrunsrv default to C:\cygwin\var\log\AutoSSH.log
  1. Install autossh as a Windows service
    1. Open a cmd prompt on your target PC as administrator
      1. start -> run -> cmd -> right click on “cmd” -> left click “run as administrator”
    2. cd C:\cygwin\bin
    3. cygrunsrv -I AutoSSH -p /bin/autossh -a “-M 20000 -N -R 4489:localhost:3389 autossh@ssh.host.name” -e AUTOSSH_NTSERVICE=yes
      1. If you get an error with this command, manually type the ” marks.  They may not be handled properly with cut&paste.
      2. Be very careful with the above.  A misbehaving service can be hard to remove in Windows.  It may require safe mode if the service won’t accept stop commands.
    4. Tweak Windows service settings.
      1. Open the Services management console (Administrative Tools -> Services).
      2. Edit the properties of the AutoSSH service.
      3. In the “Log On” tab, select the “This account” radio button and set the service to run as your current user.  This is very important to do before starting the service in order for the ssh certificate to be used.
      4. Change the startup mode to “Automatic (Delayed Start)”
      5. Start the service.
  2. Test your tunnel as described in 6.2 above
    1. Be sure to test after rebooting your target Windows 7 or 8.1 PC.
    2. I have had it working for 6 months and used it a lot.  I’ve seen network drops, target PC reboots, and openSUSE server reboots; the tunnel just keeps working.
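When the service does misbehave, the diagnosis boils down to a handful of commands. They are printed here as a crib sheet rather than executed, since cygrunsrv only exists on the Windows/Cygwin side:

```shell
#!/bin/sh
# Crib sheet for diagnosing the AutoSSH service; run these in an elevated
# Cygwin prompt on the target PC. Printed for reference, not executed.
CRIB='cygrunsrv -L                # list installed services
cygrunsrv -LV               # list installed services, verbose
cygrunsrv -Q AutoSSH        # query the service status
cygrunsrv -R AutoSSH        # remove the service if it is broken
tail /var/log/AutoSSH.log   # default log (C:\cygwin\var\log\AutoSSH.log)'
printf '%s\n' "$CRIB"
```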

If all went well, congratulations: you now have a persistent tunnel.

You should be aware you have just punched a hole through your firewall, so be sure to consider the security issues associated with that hole.  Many organizations require security beyond a simple login and password when connectivity is provided from outside the firewall.

OpenStack Infra/QA Meetup

July 23rd, 2014 by

Last week, around 30 people from around the world met in Darmstadt, Germany to discuss various things about OpenStack and its automatic testing mechanisms (CI).
The meeting was well-organized by Marc Koderer from Deutsche Telekom.
We were shown plans of what the Telekom intends to do with virtualization in general and OpenStack in particular and the most interesting one to me was to run clouds in dozens of datacenters across Germany, but have a single API for users to access.
There were some introductory sessions about the use of git review and gerrit, that mostly had things I (and I guess the majority of the others) already learned over the years. It included some new parts such as tracking “specs” – specifications (.rst files) in gerrit with proper review by the core reviewers, so that proper processes could already be applied in the design phase to ensure the project is moving in the right direction.

On the second day we learned that the infra team manages servers with puppet, about jenkins-job-builder (jjb) that creates around 4000 jobs from yaml templates. We learned about nodepool that keeps some VMs ready so that jobs in need will not have to wait for them to boot. 180-800 instances is quite an impressive number.
And then we spent three days on discussing and hacking things, the topics and outcomes of which you can find in the etherpad linked from the wiki page.
I got my first infra patch merged, and a SUSE Cloud CI account set up, so that in the future we can test devstack+tempest on openSUSE and have it comment in Gerrit. And maybe some day we can even have a test to deploy crowbar+openstack from git (including the patch from an open review) to provide useful feedback. But for that we might first want to move crowbar (which consists of dozens of repos, one for each module) to stackforge, the openstack-provided Gerrit hosting.

see also: pleia2’s post

Overall it was a nice experience for me to work together with all these smart people, and we certainly had a lot of fun.

And done…. new images available

June 12th, 2014 by


It took a bit, but I am happy to report that all openSUSE 13.1 images in the Amazon EC2, Google Compute Engine and Microsoft Azure public cloud environments have been refreshed. After the latest round of the GNU-TLS and OpenSSL fixes, the security team was, as usual, extremely efficient in providing fixed packages, and these have been available in all cloud images via zypper up since last Friday. As of today the base images available in the public cloud frameworks contain the fixes by default.

In Amazon the new images are as follows:

  • ap-northeast-1: ami-79296078
  • ap-southeast-1: ami-84a7fbd6
  • ap-southeast-2: ami-41cbae7b
  • eu-west-1: ami-b56aa4c2
  • sa-east-1: ami-bffb54a2
  • us-east-1: ami-5e708d36
  • us-west-1: ami-16f2f553
  • us-west-2: ami-b7097487

In Google compute engine the image name is: opensuse-13-1-v20140609

The old image (opensuse131-v20140417) has been deprecated. To access the image you will need to add --image=opensuse-cloud/global/images/opensuse-13-1-v20140609, as the openSUSE images are not yet fully integrated into the GCE framework. Still working on that part with Google. This image also has upgrades to the google-cloud-sdk package and enables the bq (big-query) command. The gcloud command is still a bit rough around the edges, but the gcutil command should work as expected. Eventually gcutil is going to be deprecated by Google, thus there is work to be done to fix the integration issues with the gcloud command. If anyone has time to work on that, please send a submit request to the google-cloud-sdk package in the Cloud:Tools project in OBS. Unfortunately Google still hasn’t posted the source anywhere for open collaboration 🙁 . They’ll get there eventually. I will try and push any changes upstream.
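For reference, launching an instance from the refreshed image then looks roughly like this. The instance name and zone are made-up examples; only the --image path comes from the text above, and the command is printed rather than executed:

```shell
#!/bin/sh
# Sketch of booting the refreshed GCE image with gcutil. "my-instance"
# and the zone are illustrative placeholders; only the --image path is
# from the post. Printed, not executed.
IMAGE=opensuse-cloud/global/images/opensuse-13-1-v20140609
CMD="gcutil addinstance my-instance --image=$IMAGE --zone=europe-west1-a"
echo "$CMD"
```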

In Azure just search for openSUSE in the Gallery, it’s more of a point and click thing 😉

And that’s a wrap. Not certain we will be able to improve on the speed of such fire drill updates, but we’ll try to keep refreshing images as quickly as time allows when critical vulnerabilities in the core libraries get exposed.

Have a lot of fun….

Have some fun… patch your kernel

May 28th, 2014 by

At this point you should have compiled your own Linux kernel and gotten it up and running with your hardware. But what’s the catch with all of this? Why on earth would I want this much trouble with my operating system when I can write highly popular fiction with DOS and WordStar? (more…)

Have some fun today… try your new kernel

April 30th, 2014 by

The last blog was about how to compile the openSUSE kernel from Git. Now we see how to get it up and running on your system. Again, a word of warning: changing the kernel is always a bit of a hardcore trick! Even if it comes from a trusted and tested binary from openSUSE (sorry, I’m a server admin). If you do it by yourself then you are also on your own if your machine won’t boot anymore! (more…)

Have some fun today… compile kernel

April 15th, 2014 by

Are you bored or seeking something to do? Do you want to do something that your friends will call just a waste of time, but that is so highly nerdy and most cool? Do you want to know what makes openSUSE, or Linux in general, tick? (more…)

Cloudy with a touch of Green

March 19th, 2014 by

Finally there is some news regarding our public cloud presence and openSUSE 13.1. We now have openSUSE 13.1 images published in Amazon EC2, Google Compute Engine, and Windows Azure.

Well, that’s the announcement, but would make for a rather short blog. Thus, let me talk a bit about how this all works and speculate a bit why we’ve not been all that good with getting stuff out into the public cloud.

Let me start with the speculation part, i.e. hindrances in getting openSUSE images published. In general to get anything into a public cloud one has to have an account. This implies that you hand over your credit card number to the cloud provider and they charge you for the resources you use. Resources in the public cloud are anything and everything that has something to do with data. Compute resources, i.e. the size of an instance w.r.t. memory and number of CPUs are priced differently. Sending data across the network to and from your instances incurs network charges and of course storing stuff in the cloud is not free either. Thus, while anyone can put an image into the cloud and publish it, this service costs the person money, granted not necessarily a lot, but it is a monthly recurring out of pocket expense.

Then there always appears to be the “official” apprehension: if person X publishes an openSUSE image from her/his account, what makes it “official”? Well, first we have the problem that the “official” stamp is really just an imaginary hurdle. An image that gets published by me is no more or less “official” than any other image. I am, after all, not the release manager, nor do I have any of my fingers in the openSUSE release in any way. I do have access to the SUSE accounts and can publish from there, and I guess that makes the images “official”. But please do not get any ideas about “official” images; they do not exist.

Last but not least there is a technical hurdle. Building images in OBS is not necessarily for the faint of heart. Additionally there is a bunch of other stuff that goes along with cloud images. Once you have one it still has to get into the cloud of choice, which requires tools etc.

That’s enough speculation as to why or why not it may have taken us a bit longer than others, and just for the record we did have openSUSE 12.1 and openSUSE 12.2 images in Amazon. With that lets talk about what is going on.

We have a project in OBS now, actually it has been there for a while, Cloud:Images that is intended to be used to build openSUSE cloud images. The GCE image that is public and the Amazon image that is public both came from this project. The Azure image that is currently public is one built with SUSE Studio but will at some point also stem from the Cloud:Images OBS project.

Each cloud framework has its own set of tools. The tools are separated into two categories, initialization tools and command line tools. The initialization tools are tools that reside inside the image, and these are generally services that interact with the cloud framework. For example cloud-init is such an initialization tool and it is used in OpenStack images, Amazon images, and Windows Azure images. The command line tools let you interact with the cloud framework to start and stop instances, for example. All these tools get built in the Cloud:Tools project in OBS. From there you can install the command line tools into your system and interact with the cloud framework they support. I am also trying to get all these tools into openSUSE:Factory to make things a bit easier for image building and cloud interaction come 13.2.

With this, let’s take a brief closer look at each framework, in alphabetical order (no favoritism here).

Amazon EC2

An openSUSE 13.1 image is available in all regions, the AMI (Amazon Machine Image) IDs are as follows:

sa-east-1 => ami-2101a23c
ap-northeast-1 => ami-bde999bc
ap-southeast-2 => ami-b165fc8b
ap-southeast-1 => ami-e2e7b6b0
eu-west-1 => ami-7110ec06
us-west-1 => ami-44ae9101
us-west-2 => ami-f0402ec0
us-east-1 => ami-ff0e0696

These images use cloud-init as opposed to the “suse-ami-tools” that was used previously and is no longer available in OBS. The cloud-init package is developed on Launchpad and was started by the Canonical folks. Unfortunately, to contribute you have to sign the Canonical Contributor Agreement (CCA). If you do not want to sign it, or cannot sign it for company reasons, you can still send stuff to the package and I’ll try to get it integrated upstream. For the interaction with Amazon we have the aws-cli package. The “aws” command line client supersedes all the ec2-*-tools and is an integrated package that can interact with all Amazon services, not just EC2. It is well documented, fully open source, and hosted on GitHub. The aws-cli package replaces the previously maintained ec2-api-tools package, which I have removed from OBS.
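As an illustration, booting the us-east-1 image with the aws client looks roughly like this. Only the AMI ID comes from the list above; the instance type and key name are placeholders, and the command is printed rather than executed:

```shell
#!/bin/sh
# Sketch of launching the us-east-1 openSUSE 13.1 AMI with aws-cli.
# Only the AMI ID is from the post; instance type and key name are
# placeholder values. Printed, not executed.
AMI=ami-ff0e0696
CMD="aws ec2 run-instances --region us-east-1 --image-id $AMI --instance-type m1.small --key-name my-keypair"
echo "$CMD"
```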

Google Compute Engine

In GCE things work by name; the openSUSE 13.1 image is named opensuse131-20140227 and is available in all regions. Google images use a number of tools for initialization, google-daemon and google-startup-scripts. All the Google-specific tools are in the Cloud:Tools project. Interaction with GCE is handled with two commands, gcutil and gsutil, both provided by the google-cloud-sdk package. As the name suggests, google-cloud-sdk has the goal of unifying the various Google tools, same basic idea as aws-cli, and Google is working on the unification. Unfortunately they have decided to do this on their own, and there is no public project for google-cloud-sdk, which makes contributing a bit difficult to say the least. The gsutil code is hosted on GitHub, thus at least contributing to gsutil is straightforward. Both utilities, gsutil for storage and gcutil for interacting with GCE, are well documented.

In GCE we also were able to stand up openSUSE mirrors. These have been integrated into our mirrorbrain infrastructure and are already being used quite heavily. The infrastructure team is taking care of the monitoring and maintenance and that deserves a big THANK YOU from my side. The nice thing about hosting the mirrors in GCE is that when you run an openSUSE instance in GCE you will not have to pay for network charges to pull your updated packages and things are really fast as the update server is located in the same data center as your instance.

Windows Azure

As mentioned previously, the current image we have in Azure is based on a build from SUSE Studio. It does not yet contain cloud-init and only has WALinuxAgent integrated. This implies that processing of user data is not possible in the image. User data processing requires cloud-init, and I just put the finishing touches on cloud-init this week. Anyway, the image in Azure works just fine, and I have no timeline for when we might replace it with an image that contains cloud-init in addition to WALinuxAgent.

Interacting with Azure is a bit more cumbersome than with the other cloud frameworks. Well, let me qualify this with: if you want packages. The Azure command line tools are implemented using nodejs and are integrated into the npm nodejs package system. Thus, you can use npm to install everything you need. The nodejs implementation presents a bit of a problem in that we hardly have any nodejs infrastructure in the project. I have started packaging the dependencies, but there is a large number and thus this will take a while. Who would ever implement….. but that’s a different topic.

That’s where we are today. There is plenty of work left to do. For example we should unify the “generic” OpenStack image in Cloud:Images with the HP specific one, the HP cloud is based on OpenStack, and also get an openSUSE image published in the HP cloud. There’s tons of packaging left to do for nodejs modules to support the azure-cli tool. It would be great if we could have openSUSE mirrors in EC2 and Azure to avoid network charges for those using openSUSE images in those clouds. This requires discussions with Amazon and Microsoft, basically we need to be able to run those services for free, which implies that both would become sponsors of our project just like Google has become a sponsor of our project by letting us run the openSUSE mirrors in GCE.

So if you are interested in cloud and public cloud stuff get involved, there is plenty of work and lots of opportunities. If you just want to use the images in the public cloud go ahead, that’s why they are there. If you want to build on the images we have in OBS and customize them in your own project feel free and use them as you see fit.

First fruits – update on openQA and Staging work

February 19th, 2014 by

In our previous summary, we talked about some basic research and some ground work to build on. This time we have some first exciting results!


Last week we rearranged the repository a little bit, creating a new branch called "devel" where all the exciting (and not so exciting) changes are taking place. Our little Factory 😉

The main difference between this branch and master is that, as you could read in the previous blog, the devel branch is openQA built on Mojolicious, a nice web development framework. And having a proper web framework is starting to show its benefits: we have openID login available! Unfortunately the current openSUSE openID provider is a little bit weird, so it doesn’t play well with our tool yet, but some others are working and openSUSE accounts will be the next step. Having working user accounts is necessary to be able to start defining permissions and to make openQA truly multiuser. And to be able to deploy the new version on a public server!

The other main focus of this week has been internal polishing. We have revamped the database layout and changed the way in which the different openQA components communicate with each other. The openQA functionality is spread out over several parts: the workers are responsible for actually executing the tests in virtual machines, reporting the result after every execution; some client utilities are used to load new ISO images and similar tasks; and, finally, we have the one openQA server to rule them all. Until now, the communication between the server and the other components was done using JSON-RPC (a lightweight alternative to XML-RPC). We have dropped JSON-RPC in favor of a REST-like API with just JSON over plain HTTP. This change allowed us to implement exactly the same functionality in a way that is simpler, perfectly supported by any web framework, natively spoken by browsers and easier to authenticate (using, for example, plain HTTP authentication or openID). This is also the first step to future integration with other services (think OBS, as the ultimate goal is to use openQA to test staging projects).
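To give a flavor of the difference: instead of a JSON-RPC envelope, a client now just sends plain JSON to an HTTP endpoint. The /api/jobs endpoint and the payload fields below are hypothetical illustrations, not openQA's actual API; the curl call is printed rather than sent, and the payload is validated locally to show it is ordinary JSON any tool can handle:

```shell
#!/bin/sh
# Flavor of the JSON-over-plain-HTTP style. The endpoint and payload
# fields are hypothetical examples, not openQA's real API.
PAYLOAD='{"iso": "openSUSE-13.1-DVD-x86_64.iso", "arch": "x86_64"}'

# the request a client would make (printed, not executed):
echo "curl -X POST -H 'Content-Type: application/json' -d '$PAYLOAD' http://openqa.example.com/api/jobs"

# plain JSON means any standard tool can parse and validate it:
printf '%s' "$PAYLOAD" | python3 -m json.tool
```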

But, who tests the tester? openQA is growing and changing quite fast, so we have continued with the task of creating a good testing infrastructure to test openQA itself and make sure that all our changes do not result in breakage. We only have a few tests right now, but we now have a solid foundation to write more and more tests.

Staging and package manipulation

In the last blog post we told you we were investigating a test suite to test the abilities of an osc plugin we are writing. osc is the command-line tool for handling the Open Build Service, and this plugin is meant to help with the administration of staging projects. We’ve been thinking about how to move forward with the testing part, as we want to make sure the functionality works as advertised. More importantly, we write tests to make sure that our additions and changes do not break existing functionality. We have started merging functionality from various scripts handling staging thingies and rings we had into this plugin. This is partially done, so we can do basic staging project tasks! We can take a request (be it submit or delete) and put it to test in a staging project. We can also move packages between staging projects, and we have a simple YAML in project descriptions to indicate what packages and what requests are there.

We're all green!
Coolo already started using the plugin for some tasks, so you can see pretty titles and metadata in staging projects descriptions. Not impressive enough? Let me provide you a good headline:

Thanks to staging projects, not a single regression has been introduced this year in the core of Factory

You can enjoy a more detailed description in this announcement written by coolo to the Factory mailing list and have some visual joy in the included screenshot (genuine pr0n for QA engineers).

Last but not least, we also did some cleanup of the sources in the repo and of course we added more tests (as functionality grows). And there has been work on other parts of the plugin, like taking rings into account.


We already have some useful functionality which we are going to expand and build on. Not that much to show yet, but we are progressing towards our goal. You can follow our progress either in the way of tasks (openQA and staging projects) or just follow our commit history on github (again for both openQA and staging plugin).

We are very much looking forward to feedback and thoughts – these changes aim to make Factory development easier, and thus we need to hear from all of you!

The openSUSE TSP application

June 20th, 2013 by

Introduction blog of the TSP
Today, Ancor Gonzalez Sosa writes about the Travel Support Program Application he developed with the openSUSE Team.

Traveling to an event to represent your project, sharing experiences with other people with common interests and showing them what you are passionate about is absolutely awesome – but it can get expensive. This is why openSUSE introduced a Travel Support Program last year.

The openSUSE Travel Support Program

The goal of the Travel Support Program is to support contributors representing openSUSE at events by reimbursing up to 80% of the travel and/or hotel costs. In turn the contributors make a worthy contribution at the event and report back to the openSUSE community about what they did.

We’re not alone in doing this, having drawn inspiration from GNOME’s Conference Travel Subsidy Program, the KDE e.V. Travel Cost Reimbursement initiative and the Travel Policy from The Document Foundation.


entering reimbursement request

The program is sponsored by SUSE, but the Travel Committee independently manages the money and decides who is supported and how. This is a lot of work: decisions involve the event itself, the contributor asking for support, other Geekos in the area, the costs and of course the entire budget. The team also has to plan the priority of events with the marketing team and communicate about the status of the requests and reimbursement.

And the Free Software world was lacking a proper tool to manage all this… until now!

The brand new TSP application

We developed a new web tool to make the life of the TSP team and the community easier and do this in an open and generic way so other projects could benefit as well. We’ve started using it already for the upcoming openSUSE Conference 2013 and you can see it in action here. It even offers a pretty diagram explaining the TSP process! Of course, the complete source code of the project can be found on Github.


For a more detailed explanation of the goals of the project you can refer to the ‘about the TSP application‘ page in our projects management tool. In that page you will find ‘the 6 Ws’ of the new application: who, what, when, where, why and how (yes, we know that ‘how’ does not start with ‘W’, but we didn’t invent the 6 Ws term).

During development we honored the motto “release early, release often” and worked following agile development principles. We began by collecting ideas and requirements from the TSP team and the people handling the payments on the SUSE side. After developing a first prototype, we presented it in a video conference. You can find the minutes of this meeting in the project’s wiki.

Once the feedback was in and new goals were set, the prototype was deployed on a provisional server so the Travel Committee could test it. Using this test-drive installation, the application was improved in an iterative way. Every two weeks (sometimes a bit longer), a new version was installed. Izabel tested the new version, providing very useful feedback used to plan the next milestone in our projects management tool, and so on. This cycle is still in motion: new version, feedback, planning, new version…


bento themed request status

Awesome Rails goodies

While working on the TSP application we have developed some features that can be interesting for other Ruby on Rails programmers working within the openSUSE infrastructure, like the team behind OSEM. The TSP application includes a Devise backend for the openSUSE authentication infrastructure, a Bento theme for Bootstrap (written in pure Less) and integration with openSUSE Connect through its REST API. We plan to release all these features as individual components to allow reuse in other openSUSE developments.

Present and future, sharing with others

We plan on continuing maintenance of the application, and as with most free software projects, it’s hard to predict in which direction the tool is going to evolve. Conference volunteers in charge of the visa invitation letters and the team in charge of merchandising shipping have already made some interesting suggestions, so it will not be a surprise if we end up developing a full event management tool. Not for registration and scheduling of individual conferences (the oSC’13 guys are already doing a great job developing OSEM) but for the administrative tasks and planning behind attending the various events that communities like openSUSE go to.


So, if you are an openSUSE contributor and you might need sponsorship for traveling in the future, bookmark the TSP page! If you are a Ruby on Rails developer, just Fork it on Github™ and meet us at oSC’13 to talk about future collaboration. And if you are in charge of a travel support program for another open source project or are thinking about the possibility of starting one, you can run it yourself and we’d be happy to help you in case of trouble. You can find me (Ancor Gonzalez Sosa) as ancorgs in the openSUSE-Project channel on Freenode.

And always remember: have a lot of fun!

openQA in openSUSE

June 6th, 2013 by

Today, we’ve got for you an introduction to the team’s work on openQA by Alberto Planas Domínguez.

The last 12.3 release was important for the openSUSE team for a number of reasons. One reason is that we wanted to integrate QA (Quality Assurance) into the release process at an early stage. You might remember that this release had UEFI and Secure Boot support coming, and everybody had read the scary reports about badly broken machines that can only be fixed by replacing the firmware. Obviously openSUSE can’t allow such things to happen to our user base, so we wanted to do more testing. (more…)