So last time I was on top of Curl and what problems can come up when you use it across different distributions. After that struggle, UnReal World RPG has been ported to SDL2. SDL2 is used by Steam, so it should be available in every distribution you can dream of. How wrong I can be and how correct I am!
Running the rolling openSUSE Factory has been smooth so far, no problems since the last post.
I have been involved in submitting new packages (ftop, dstat, some Perl modules), patching other people's existing packages (dnsmasq) and of course looking after packages that I maintain (btrfsprogs). With very few exceptions I got everything done that I needed; the exceptions were my own rather silly mistakes. The damage is only the first few seconds when one realizes that the submit request was 'rejected'. Don't get bothered by it, grab a coffee or go fix it later. I should say the rejects are backed by a reason or an explanation of what's wrong and what should be fixed in the next attempt. Learn from that, take notes, read the docs again. Once this becomes common, the number of basic mistakes drops close to zero and the self-checks become a routine. This makes for a happy contributor, and happy distro maintainers too.
I recommend skimming the Factory snapshot announcements; look at the changes or scroll down to the newly added packages. One day you may see your contributions there, go for it.
Before something goes into the Factory distribution, the packages get ready in the devel projects. I've asked for maintainership of the filesystems and benchmark projects and did some fixing in packages I use or at least recognize. The state of the projects is not 'all green', build failures exist, but without some motivation I'm not rushing to fix them.
If you are interested (as a user) in a package from those devel projects, feel free to bug me about it. I can help with fixing build failures or submitting to Factory.
All of the above is routine. A routine of making the distro better on the core side. There's never enough of it and it may become boring (oh, it does) over time. Out of the many research projects and experiments I do, I decided to focus on one that's definitely related to openSUSE, is fun, important, useful and not there yet.
“No way, really? But there’s AppArmor and SELinux enabled and the compile-time hardening options.”
Yeah. I won’t repeat the arguments why AppArmor and SELinux are insufficient, functionally or usability-wise. So what’s left? Grsecurity, of course. Sadly, openSUSE lacks even unofficial grsecurity-patched kernels, unlike Arch, Debian or Gentoo. Sadly², the patched kernels are unofficial and will remain so until grsec is upstream. I don’t dare to predict if/when this will happen.
My hardening efforts got the codename openSUSE-gardening and are hosted in my GitHub repository of the same name. The wiki contains more comprehensive information. It’s still a work in progress and does not cover all topics in detail, but it should be enough to get started.
Quite unexpectedly, spender found the repo and gave it a bit of publicity on Twitter. Thanks, man.
My plan was to update all relevant packages, test the kernels a bit, update the wiki and then post about that here. Nah, I got the right kick to do it now.
The quick start is really simple: a pattern installs all the necessary packages for desktop use:
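A minimal sketch of what that looks like; the repository URL and pattern name here are placeholders, the real ones are listed in the openSUSE-gardening wiki:

# hypothetical repository URL and pattern name, check the wiki for the actual ones
zypper ar -f <gardening-repository-url> gardening
zypper in -t pattern <hardened-desktop-pattern>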
Note: you’ll probably need to run linux-pax-flags before the first reboot. It applies the PaX flag exceptions; otherwise some binaries may crash due to the protections (for example window manager processes and browsers). Once the zypper plugin is properly installed, the flags get updated automatically.
Warning: the patched kernel has not been extensively tested, works for me, might not work for you.
To be continued …
The stable branch of LXQt, the Qt branch of LXDE, is now available for openSUSE:13.1 and openSUSE:Factory.
Following are a few screenshots of LXQt, which will be quite familiar to any of you who dabbled with Razor-qt in the past.
So if you’re looking for something to try out, please, give it a shot.
Please keep in mind, we are considering this a “Beta” quality release, so there are still some rough edges.
Additionally, LXQt is currently un-branded for openSUSE, so I certainly wouldn’t turn down help from folks who are into helping out with that sort of thing.
Stable packages are available for openSUSE:13.1 (i586, x86_64, armv6l and armv7l) and openSUSE:Factory (i586 and x86_64) at:
Unstable packages (latest git pulls) are available for 13.1 and Factory, i586, at:
And if you happen to be running Fedora, i586 and x86_64 packages are available at:
Finally there is some news regarding our public cloud presence and openSUSE 13.1. We now have openSUSE 13.1 images published in Amazon EC2, Google Compute Engine, and Windows Azure.
Well, that’s the announcement, but it would make for a rather short blog post. Thus, let me talk a bit about how this all works and speculate a bit about why we’ve not been all that good at getting stuff out into the public cloud.
Let me start with the speculation part, i.e. hindrances in getting openSUSE images published. In general, to get anything into a public cloud one has to have an account. This implies that you hand over your credit card number to the cloud provider and they charge you for the resources you use. Resources in the public cloud are anything and everything that has something to do with data. Compute resources, i.e. the size of an instance w.r.t. memory and number of CPUs, are priced differently. Sending data across the network to and from your instances incurs network charges, and of course storing stuff in the cloud is not free either. Thus, while anyone can put an image into the cloud and publish it, this service costs the person money, granted not necessarily a lot, but it is a monthly recurring out-of-pocket expense.
Then there always appears to be the “official” apprehension: if person X publishes an openSUSE image from her/his account, what makes it “official”? Well, first we have the problem that the “official” stamp is really just an imaginary hurdle. An image that gets published by me is no more or less “official” than any other image. I am, after all, not the release manager, nor do I have any of my fingers in the openSUSE release in any way. I do have access to the SUSE accounts and can publish from there, and I guess that makes the images “official”. But please do not get any ideas about “official” images; they do not exist.
Last but not least there is a technical hurdle. Building images in OBS is not necessarily for the faint of heart. Additionally, there is a bunch of other stuff that goes along with cloud images. Once you have an image it still has to get into the cloud of choice, which requires tools etc.
That’s enough speculation as to why it may have taken us a bit longer than others, and just for the record, we did have openSUSE 12.1 and openSUSE 12.2 images in Amazon. With that, let’s talk about what is going on.
We now have a project in OBS, Cloud:Images, that is intended to be used to build openSUSE cloud images; actually it has been there for a while. The GCE image that is public and the Amazon image that is public both came from this project. The Azure image that is currently public was built with SUSE Studio, but it will at some point also stem from the Cloud:Images OBS project.
Each cloud framework has its own set of tools. The tools fall into two categories: initialization tools and command line tools. The initialization tools reside inside the image and are generally services that interact with the cloud framework. For example, cloud-init is such an initialization tool and it is used in OpenStack, Amazon, and Windows Azure images. The command line tools let you interact with the cloud framework, to start and stop instances for example. All these tools get built in the Cloud:Tools project in OBS. From there you can install the command line tools onto your system and interact with the cloud framework they support. I am also trying to get all these tools into openSUSE:Factory to make things a bit easier for image building and cloud interaction come 13.2.
With this, let’s take a brief closer look at each framework, in alphabetical order, no favoritism here.
Amazon EC2
An openSUSE 13.1 image is available in all regions; the AMI (Amazon Machine Image) IDs are as follows:
sa-east-1 => ami-2101a23c
ap-northeast-1 => ami-bde999bc
ap-southeast-2 => ami-b165fc8b
ap-southeast-1 => ami-e2e7b6b0
eu-west-1 => ami-7110ec06
us-west-1 => ami-44ae9101
us-west-2 => ami-f0402ec0
us-east-1 => ami-ff0e0696
These images use cloud-init as opposed to the “suse-ami-tools” that was used previously and is no longer available in OBS. The cloud-init package is developed on Launchpad and was started by the Canonical folks. Unfortunately, to contribute you have to sign the Canonical Contributor Agreement (CCA). If you do not want to sign it, or cannot sign it for company reasons, you can still send stuff to the package and I’ll try to get it integrated upstream. For the interaction with Amazon we have the aws-cli package. The “aws” command line client supersedes all the ec2-*-tools and is an integrated package that can interact with all Amazon services, not just EC2. It is well documented, fully open source, and hosted on GitHub. The aws-cli package replaces the previously maintained ec2-api-tools package, which I have removed from OBS.
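Just to illustrate, launching the us-east-1 image listed above with the aws client could look roughly like this; the key pair and security group names are made up, substitute your own:

# launch the openSUSE 13.1 AMI in us-east-1 (key pair and security group are placeholders)
aws ec2 run-instances --region us-east-1 --image-id ami-ff0e0696 \
  --instance-type m1.small --key-name my-key --security-groups my-group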
Google Compute Engine
In GCE things work by name; the openSUSE 13.1 image is named opensuse131-20140227 and is available in all regions. Google images use a number of tools for initialization, google-daemon and google-startup-scripts. All the Google-specific tools are in the Cloud:Tools project. Interaction with GCE is handled with two commands, gcutil and gsutil, both provided by the google-cloud-sdk package. As the name suggests, google-cloud-sdk has the goal of unifying the various Google tools, the same basic idea as aws-cli, and Google is working on the unification. Unfortunately, they have decided to do this on their own and there is no public project for google-cloud-sdk, which makes contributing a bit difficult, to say the least. The gsutil code is hosted on GitHub, so at least contributing to gsutil is straightforward. Both utilities, gsutil for storage and gcutil for interacting with GCE, are well documented.
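To give an idea, starting an instance from that image with gcutil looks roughly like the sketch below; the project name and zone are placeholders and the exact flags may differ between gcutil releases, so check the gcutil documentation:

# start an instance from the openSUSE 13.1 image; project and zone are made up
gcutil --project=my-project addinstance opensuse-test \
  --image=opensuse131-20140227 --zone=us-central1-a --machine_type=n1-standard-1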
In GCE we were also able to stand up openSUSE mirrors. These have been integrated into our MirrorBrain infrastructure and are already being used quite heavily. The infrastructure team is taking care of the monitoring and maintenance, and that deserves a big THANK YOU from my side. The nice thing about hosting the mirrors in GCE is that when you run an openSUSE instance in GCE you do not have to pay network charges to pull your updated packages, and things are really fast as the update server is located in the same data center as your instance.
Windows Azure
As mentioned previously, the current image we have in Azure is based on a build from SUSE Studio. It does not yet contain cloud-init and only has WALinuxAgent integrated. This implies that processing of user data is not possible in the image. User data processing requires cloud-init, and I just put the finishing touches on cloud-init this week. Anyway, the image in Azure works just fine, and I have no timeline for when we might replace it with an image that contains cloud-init in addition to WALinuxAgent.
Interacting with Azure is a bit more cumbersome than with the other cloud frameworks. Well, let me qualify that: it is cumbersome if you want packages. The Azure command line tools are implemented using Node.js and are distributed through the npm package system. Thus, you can use npm to install everything you need. The Node.js implementation presents a bit of a problem in that we hardly have any Node.js infrastructure in the project. I have started packaging the dependencies, but there is a large number of them, so this will take a while. Who would ever implement….. but that’s a different topic.
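If you do not want to wait for the packages, installing the tools straight from npm works today; a rough sketch (the publish settings file is whatever you downloaded from the Azure portal):

# install the Azure cross-platform CLI from npm, import credentials and list your VMs
npm install -g azure-cli
azure account import <publishsettings-file>
azure vm list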
That’s where we are today. There is plenty of work left to do. For example, we should unify the “generic” OpenStack image in Cloud:Images with the HP-specific one (the HP cloud is based on OpenStack) and also get an openSUSE image published in the HP cloud. There’s tons of packaging left to do for Node.js modules to support the azure-cli tool. It would be great if we could have openSUSE mirrors in EC2 and Azure to avoid network charges for those using openSUSE images in those clouds. This requires discussions with Amazon and Microsoft; basically we need to be able to run those services for free, which implies that both would become sponsors of our project, just like Google has become a sponsor by letting us run the openSUSE mirrors in GCE.
So if you are interested in cloud and public cloud stuff, get involved; there is plenty of work and lots of opportunities. If you just want to use the images in the public cloud, go ahead, that’s why they are there. If you want to build on the images we have in OBS and customize them in your own project, feel free to use them as you see fit.
How to build an initrd-virtio on a fully encrypted volume group
If, like me, you care about the data stored on your laptop, you certainly use a fully encrypted (except /boot) configuration based on LVM.
In my case I also like to create, build and fix packages locally with our tool osc. I have plenty of power and a beefy SSD, so I dedicate a logical volume to building packages cleanly with a qemu-kvm configuration, like OBS does.
Prepare the kvm building system
As root, create two LVM volumes with lvcreate: one will be the build root, the other the additional swap.
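For example, matching the device names and sizes used in the configuration below (adjust the volume group names and sizes to your own setup):

# 16 GB build root in vg0 and 4 GB swap in vg1, matching the .oscrc settings below
lvcreate -L 16G -n lvobsbuild vg0
lvcreate -L 4G -n lvobsswap vg1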
In ~/.oscrc I enable the following parameters:
build-type = kvm
build-device = /dev/mapper/vg0-lvobsbuild
build-swap = /dev/mapper/vg1-lvobsswap
build-memory = 4096
build-vmdisk-rootsize = 16000
build-vmdisk-swapsize = 4000
build-vmdisk-filesystem = ext4
You just have to adjust the memory size and the device names to whatever you created for your own environment.
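With that in place, a local build picks up the KVM settings from ~/.oscrc automatically. A rough usage sketch from within a package checkout; the repository name and architecture are just examples:

# build the package in a clean KVM guest using the settings above
osc build openSUSE_Factory x86_64 *.spec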
For quite some time we had a package named ec2-api-tools in the Cloud:EC2 project, and I suspect many who work with EC2 had found the package and were using the ec2-* commands to manage stuff in EC2. Along with the ec2-api-tools, Amazon maintained separate ec2-* tool sets for various services. Keeping up with the armada of Amazon developers is not easy, and thus the other ec2-* tool sets never got packaged.
Now a new integrated set of tools is available, invoked with the “aws” command and provided by the aws-cli package. The package is available from the Cloud:Tools project and a submit request to Factory is pending. The new package does not obsolete the ec2-api-tools package, as there is no issue with having both packages installed. However, I did take the liberty of removing the ec2-api-tools package from the Cloud:EC2 project, as it would no longer receive updates considering that we have a nice new tool that unifies all Amazon services. The documentation for the new command can be found online.
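Until that submit request is accepted, a quick way to try it is to install straight from the devel project; a rough sketch for 13.1, where the repository path is an assumption, so adjust it to your distribution version:

# add the Cloud:Tools devel repository (path assumed), install aws-cli and set up credentials
zypper ar -f obs://Cloud:Tools/openSUSE_13.1 Cloud-Tools
zypper in aws-cli
aws configure
# quick smoke test
aws ec2 describe-regions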
The aws code is hosted on GitHub, and thus contributing fixes is easy; that is another big plus over the ec2-* tool sets.
Yes, and of course we need to get openSUSE 13.1 into EC2, and I am working on that, stay tuned….
As we stated in our communications over time, our team’s main focus for the foreseeable future is Factory and how to manage all those contributions. The goal is not to increase the number of SRs coming to Factory, but to make sure we can process more of them and that we see even well-hidden consequences, so that Factory stays “stable” and “usable”.
Not really part of our current sprints, but something that will hopefully help us, is spec-cleaner, which Tomáš Chvátal and Tomáš Čech have been working on lately in their free time/hackweek. What is it trying to address? Currently there are packaging guidelines, but when you write a spec file for your software, you still have plenty of choices. How do you order all the information in the header? Do you use curly brackets around macros? Do you use macros at all? Which ones do you use and which not? Do you use binaries in dependencies? Package config? Perl symbols? Package names? There is the format_spec_file OBS service that tries to unify the coding style a little, but it leaves quite a lot up to you. Not necessarily a bad thing, but if you have to compare changes and review packages that use completely different coding styles, the process becomes harder and slower.
spec-cleaner is format_spec_file taken to another level. It tries to unify coding style as much as it can. It uses consistent conventions, makes most of the decisions mentioned previously for you, and if you already decided one way in the past, it will try to convert your spec file to follow the conventions it specifies. It’s not enforcing anything; it’s a standalone script, and therefore you don’t have to worry that your spec file will be out of your control. You can run it, verify the result (actually, you should verify the result, as there might still be some bugs) and commit it to OBS. If we all do it, our packages will all look more alike and it will be easier to read and review them.
How to try it? How to help? Well, the code is on GitHub and packages are in OBS. You may have a version of it in your distribution, but that one is heavily outdated (even the 13.1 version), so add the openSUSE:Tools repo and try the version from there.
zypper ar -f obs://openSUSE:Tools/openSUSE_13.1 openSUSE-Tools
zypper in spec-cleaner
You can then go to some local checkout and see what changes it proposes for your spec file. The easiest way is to just let it do its thing and take a look at the changes afterwards.
spec-cleaner -p -i *.spec
osc diff
If it works, great, we will have more unified spec files. If it doesn’t, file a bug.
The NVIDIA drivers for openSUSE 13.1 took a while to appear. Many users have asked why this was and we’d like to explain what happened and what we plan to do to prevent this in the future. This post was written with input from the openSUSE developers who maintain these drivers at SUSE and work with NVIDIA to make them available for our users.
How it should work
Legally, the Linux kernel’s GPLv2 license leaves proprietary binary drivers in a bit of a tangle. While some claim it should be OK, others say not, and that is what most distributions currently assume: one cannot ship a Linux distribution with proprietary, binary drivers. NVIDIA has agreed that our users can grab their drivers from the official NVIDIA servers. They are packaged by SUSE engineers, however, who take care of both SLE and openSUSE proprietary driver packaging for NVIDIA hardware and have contacts at NVIDIA who get the drivers up on their ftp mirrors.
The packages are built on a dedicated system, but the package spec (the skeleton for building) is in OBS, for example G03 is here. Anybody could use these to build the NVIDIA driver locally (the NVIDIA binary driver is grabbed from NVIDIA’s servers during building). The command sequence for local building can be found in the README.
Once the packages are built, they are sent to NVIDIA, who sign the packages with their key, generate and sign the repositories, and make them available to the public.
Note that this is all manual and takes a while on NVIDIA’s side (and ours). This, of course, is part of the reason why we don’t offer NVIDIA packages for Factory, our fast-rolling development repository.
What occurred for 13.1 is a typical case of “everything went wrong”.
Up until a few weeks before the release, the driver did not build against Linux kernel version 3.11. NVIDIA warned against the use of a patch for this problem created by third parties, so the driver team did not have a working build. Due to a holiday, it took a while to get the packages built and pushed to NVIDIA. Unfortunately, by the time this was done, the holiday period at NVIDIA blocked progress for another week. Once the package was signed, it took the web team at NVIDIA another week to get the repository published. It all added up to almost a month and brought quite some inconvenience for users eager to use their latest NVIDIA cards with the latest openSUSE.
About a week later, the openSUSE 13.1 NVIDIA drivers disappeared from the NVIDIA servers again for a day or so. We don’t know exactly what happened there; it might have simply been a server issue.
What we will do
The first and most obvious thing to change is to improve coordination. The developers taking care of the NVIDIA packages should be made aware of the release planning as early as possible, and a replacement should be available in case of holidays. We noted this in our 13.1 release report and will make sure that there is a task in our task tracker for this for the next release. We also need to talk to NVIDIA about this to make sure that there, too, somebody can fill in for SUSE’s contacts.
But there are also thoughts on how to improve further. Some of the current ideas:
- To get drivers for Factory, the process needs more automation, as the driver may break on kernel or X changes
- We could try to open up the process and work with community members to secure it in case the SUSE folks are unavailable
- It would be nice if we could get NVIDIA to see the benefit of working more on the driver in openSUSE, e.g. by doing some testing
- We could write some scripts that check regularly whether the repository and packages are still available and in a valid state (metadata matches the available RPMs); a rough sketch follows below
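Such a check could be as simple as the following sketch; the repository URL is a placeholder since the real location is not part of this post:

# hypothetical health check: verify the repository metadata is reachable
# REPO_URL is a placeholder for the real NVIDIA repository location
REPO_URL="http://example.com/opensuse/13.1"
if curl -sfI "$REPO_URL/repodata/repomd.xml" > /dev/null; then
    echo "repository metadata reachable"
else
    echo "repository check FAILED" >&2
fi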
We’ll have to see which of these we can implement, when and how. But rest assured that we will do what we can to prevent this in the future!
And the stats!
After a bit of a hiatus, we’re back with the numbers. Development slowed down around New Year and isn’t back up to speed yet, so the 10th spot is shared by quite a big group of people…
Spot 3: Tomáš Chvátal, Dirk Mueller, Michal Vyskocil
Spot 7: Lars Vogdt, Pascal Bleser
Spot 8: Jan Engelhardt, Charles Arnold
Spot 9: Andreas Stieger, Michal Marek, Niels Abspoel, Kyrill Detinov
Spot 10: Dinar Valeev, Bjørn Lie, Ulrich Weigand, Tobias Klausmann, Marcus Meissner, Alexander Graf, Robert Schweikert, Martin Vidner
Last time I talked about OBS and how to compile an application that you have developed with GCC. OBS is much more than just a tool for compiling additional openSUSE packages. You can also compile for Debian, Ubuntu, Arch and Fedora (and a couple more), but why on earth would you want to do that? Short answer: because you can! A slightly longer answer: because you can, and freedom is a two-way road. You can’t guess which Linux distribution or OS your user wants to use, but you can make sure that your application is a first-class citizen in that Linux distribution.
When I was a kid the Commodore 64 was a big thing and I played ‘Gateway to Apshai‘ hour after hour. It really hit me. Others liked the Ultimas, but ‘Gateway to Apshai’ was THE thing for me. Years after the C64 was gone with the wind I found the world of Rogue, Omega and Nethack. The sweetest of them was Omega. Omega’s world map was big, you could do whatever you liked and wander around, and you didn’t have to fight all the time. As this was a long, long time ago, none of those games are in active development any more, but Sami Maaranen is still developing a unique northern-hemisphere survival game called UnReal World RPG.
See bigger pictures at IndieDB