

Archive for the ‘Distribution’ Category

Have some fun today… try your new kernel

April 30th, 2014

The last blog post was about how to compile the openSUSE kernel from Git. Now we will see how to get it up and running on your system. Again, a word of warning: changing your kernel is always a bit of a hardcore trick! Even when it comes as a trusted and tested binary from openSUSE (sorry, I’m a server admin). And if you build it yourself, then you are also on your own if your machine won’t boot anymore!

Have some fun today… compile kernel

April 15th, 2014

Are you bored or seeking something to do? Do you want to do something that your friends will call a waste of time, but that is highly nerdy and most cool? Do you want to know what makes openSUSE, or Linux in general, tick?

Cloudy with a touch of Green

March 19th, 2014

Finally there is some news regarding our public cloud presence and openSUSE 13.1. We now have openSUSE 13.1 images published in Amazon EC2, Google Compute Engine, and Windows Azure.

Well, that’s the announcement, but it would make for a rather short blog. Thus, let me talk a bit about how this all works and speculate a bit about why we’ve not been all that good at getting stuff out into the public cloud.

Let me start with the speculation part, i.e. the hindrances to getting openSUSE images published. In general, to get anything into a public cloud one has to have an account. This implies that you hand over your credit card number to the cloud provider and they charge you for the resources you use. Resources in the public cloud are anything and everything that has something to do with data. Compute resources, i.e. the size of an instance w.r.t. memory and number of CPUs, are priced differently. Sending data across the network to and from your instances incurs network charges, and of course storing stuff in the cloud is not free either. Thus, while anyone can put an image into the cloud and publish it, this service costs the person money; granted, not necessarily a lot, but it is a monthly recurring out-of-pocket expense.

Then there always appears to be the “official” apprehension, meaning: if person X publishes an openSUSE image from her/his account, what makes it “official”? Well, first we have the problem that the “official” stamp is really just an imaginary hurdle. An image that gets published by me is no more or less “official” than any other image. I am, after all, not the release manager, nor do I have any of my fingers in the openSUSE release in any way. I do have access to the SUSE accounts and can publish from there, and I guess that makes the images “official”. But please do not get any ideas about “official” images; they do not exist.

Last but not least, there is a technical hurdle. Building images in OBS is not necessarily for the faint of heart. Additionally, there is a bunch of other stuff that goes along with cloud images. Once you have an image, it still has to get into the cloud of your choice, which requires tools etc.

That’s enough speculation as to why it may have taken us a bit longer than others; just for the record, we did have openSUSE 12.1 and openSUSE 12.2 images in Amazon. With that, let’s talk about what is going on.

We have a project in OBS now (actually it has been there for a while), Cloud:Images, that is intended to be used to build openSUSE cloud images. The GCE image and the Amazon image that are public both came from this project. The Azure image that is currently public was built with SUSE Studio, but will at some point also stem from the Cloud:Images OBS project.

Each cloud framework has its own set of tools. The tools fall into two categories: initialization tools and command line tools. The initialization tools reside inside the image and are generally services that interact with the cloud framework. For example, cloud-init is such an initialization tool, and it is used in OpenStack images, Amazon images, and Windows Azure images. The command line tools let you interact with the cloud framework, for example to start and stop instances. All these tools get built in the Cloud:Tools project in OBS. From there you can install the command line tools into your system and interact with the cloud framework they support. I am also trying to get all these tools into openSUSE:Factory to make things a bit easier for image building and cloud interaction come 13.2.
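For example, installing the Amazon command line client from Cloud:Tools could look like this (a sketch; the obs:// repository path is an assumption, so adjust the distribution version to yours):

# add the Cloud:Tools repository and install the aws command line client
zypper ar -f obs://Cloud:Tools/openSUSE_13.1 Cloud-Tools
zypper in aws-cli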

With this, let’s take a brief closer look at each framework, in alphabetical order (no favoritism here).

Amazon EC2

An openSUSE 13.1 image is available in all regions; the AMI (Amazon Machine Image) IDs are as follows:

sa-east-1 => ami-2101a23c
ap-northeast-1 => ami-bde999bc
ap-southeast-2 => ami-b165fc8b
ap-southeast-1 => ami-e2e7b6b0
eu-west-1 => ami-7110ec06
us-west-1 => ami-44ae9101
us-west-2 => ami-f0402ec0
us-east-1 => ami-ff0e0696

These images use cloud-init, as opposed to the “suse-ami-tools” that was used previously and is no longer available in OBS. The cloud-init package is developed on Launchpad and was started by the Canonical folks. Unfortunately, to contribute you have to sign the Canonical Contributor Agreement (CCA). If you do not want to sign it, or cannot sign it for company reasons, you can still send stuff to the package and I’ll try to get it integrated upstream. For the interaction with Amazon we have the aws-cli package. The “aws” command line client supersedes all the ec2-*-tools and is an integrated package that can interact with all Amazon services, not just EC2. It is well documented, fully open source, and hosted on GitHub. The aws-cli package replaces the previously maintained ec2-api-tools package, which I have removed from OBS.
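To sketch what using it looks like (the key pair name and instance type are placeholders; the AMI ID is the us-east-1 entry from the table above):

# launch one openSUSE 13.1 instance in us-east-1
aws ec2 run-instances --region us-east-1 --image-id ami-ff0e0696 \
    --instance-type m1.small --key-name my-keypair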

Google Compute Engine

In GCE things work by name; the openSUSE 13.1 image is named opensuse131-20140227 and is available in all regions. Google images use a number of tools for initialization: google-daemon and google-startup-scripts. All the Google-specific tools are in the Cloud:Tools project. Interaction with GCE is handled with two commands, gcutil and gsutil, both provided by the google-cloud-sdk package. As the name suggests, google-cloud-sdk has the goal of unifying the various Google tools (the same basic idea as aws-cli), and Google is working on the unification. Unfortunately, they have decided to do this on their own, and there is no public project for google-cloud-sdk, which makes contributing a bit difficult, to say the least. The gsutil code is hosted on GitHub, so at least contributing to gsutil is straightforward. Both utilities, gsutil for storage and gcutil for interacting with GCE, are well documented.
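A minimal sketch of starting an instance from the published image with gcutil (the project name, zone, and machine type are placeholders, and the image name may need to be qualified with the project hosting it):

# create an instance named opensuse-test from the openSUSE 13.1 image
gcutil --project=my-project addinstance opensuse-test \
    --image=opensuse131-20140227 --zone=us-central1-a --machine_type=n1-standard-1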

In GCE we were also able to stand up openSUSE mirrors. These have been integrated into our MirrorBrain infrastructure and are already being used quite heavily. The infrastructure team is taking care of the monitoring and maintenance, and that deserves a big THANK YOU from my side. The nice thing about hosting the mirrors in GCE is that when you run an openSUSE instance in GCE, you will not have to pay network charges to pull your updated packages, and things are really fast, as the update server is located in the same data center as your instance.

Windows Azure

As mentioned previously, the current image we have in Azure is based on a build from SUSE Studio. It does not yet contain cloud-init and only has WALinuxAgent integrated. This implies that processing of user data is not possible in the image. User data processing requires cloud-init, and I just put the finishing touches on cloud-init this week. Anyway, the image in Azure works just fine, and I have no timeline for when we might replace it with an image that contains cloud-init in addition to WALinuxAgent.

Interacting with Azure is a bit more cumbersome than with the other cloud frameworks. Well, let me qualify this with: if you want packages. The Azure command line tools are implemented using Node.js and are integrated into the npm package system. Thus, you can use npm to install everything you need. The Node.js implementation poses a bit of a problem in that we hardly have any Node.js infrastructure in the project. I have started packaging the dependencies, but there is a large number of them, so this will take a while. Who would ever implement….. but that’s a different topic.
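Until the packages land, a sketch of the npm route (using the command names shipped by the azure-cli npm package):

# install the Azure command line tools globally via npm
npm install -g azure-cli
# download your account credentials (publish settings)
azure account download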

That’s where we are today. There is plenty of work left to do. For example, we should unify the “generic” OpenStack image in Cloud:Images with the HP-specific one (the HP cloud is based on OpenStack) and also get an openSUSE image published in the HP cloud. There’s tons of packaging left to do for Node.js modules to support the azure-cli tool. It would be great if we could have openSUSE mirrors in EC2 and Azure to avoid network charges for those using openSUSE images in those clouds. This requires discussions with Amazon and Microsoft; basically we need to be able to run those services for free, which implies that both would become sponsors of our project, just like Google has become a sponsor of our project by letting us run the openSUSE mirrors in GCE.

So if you are interested in cloud and public cloud stuff, get involved; there is plenty of work and lots of opportunities. If you just want to use the images in the public cloud, go ahead, that’s why they are there. If you want to build on the images we have in OBS and customize them in your own project, feel free to use them as you see fit.

osc build with kvm on an encrypted volume group

March 15th, 2014

How to build an initrd-virtio on a fully encrypted volume group

If, like me, you care about the data stored on your laptop, you certainly use a fully encrypted (except /boot) configuration based on LVM.

In my case I also like to create, build, and fix packages locally with our tool osc. I have plenty of power and a beefy SSD, so I dedicate a logical volume to building packages cleanly with a qemu-kvm configuration, like OBS does.

Prepare the kvm building system

As root, create two LVM volumes with lvcreate: one will be the build root, the other one the additional swap.
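For example (the volume group names and sizes are placeholders chosen to match the configuration below; adjust them to your setup):

# a 16G build root and 4G of swap on the encrypted volume groups
lvcreate -L 16G -n lvobsbuild vg0
lvcreate -L 4G -n lvobsswap vg1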

In ~/.oscrc I enable the following parameters

build-type = kvm
build-device = /dev/mapper/vg0-lvobsbuild
build-swap = /dev/mapper/vg1-lvobsswap
build-memory = 4096
build-vmdisk-rootsize = 16000
build-vmdisk-swapsize = 4000
build-vmdisk-filesystem = ext4

You just have to adjust the memory quantity and the devices to match what you created for your own environment.
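With that in place, a regular local build transparently runs inside KVM, for example (the repository name and spec file are placeholders):

osc build openSUSE_13.1 x86_64 mypackage.spec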


getting my DVB-T card to work

March 6th, 2014

Today I tried to get a DVB-T card to work with a new antenna on a new 13.1 install.
I know it was working, because I ran it with 12.3 on this machine last year.

hwinfo --tv
showed
Model: “Hauppauge computer works WinTV HVR-1110”
Vendor: pci 0x1131 “Philips Semiconductors”
Device: pci 0x7133 “SAA7131/SAA7133/SAA7135 Video Broadcast Decoder”

So after plugging everything in, I started Kaffeine, which still knew about all local channels but could not tune.
http://www.linuxtv.org/wiki/index.php/Hauppauge_WinTV-HVR-1110 gave the important hint that one needs a firmware file. After that was in place as /lib/firmware/dvb-fe-tda10046.fw, and after a reboot, came the next try: Kaffeine now showed 99% SNR, so a good signal, and even knew what was currently on air; however, the picture remained black.
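For the record, one way to fetch that firmware is the get_dvb_firmware script that ships with the kernel documentation (a sketch, assuming the script is in your PATH):

# extract the tda10046 firmware and install it
get_dvb_firmware tda10046
cp dvb-fe-tda10046.fw /lib/firmware/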
Kaffeine hinted that it needed extra software, but could not find it, even though the Packman repos were available (an annoying bug).

http://opensuse-community.org/ finally helped: I needed the libxine2-codecs package from the Packman repo.
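For reference, a sketch of the fix (the mirror URL is just one of several Packman mirrors; pick whichever is closest to you):

zypper ar -f http://packman.inode.at/suse/openSUSE_13.1/ packman
zypper in libxine2-codecs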
Now everything is working, after less than an hour.

Some news from the trenches

February 7th, 2014

As you might know, we are focusing our development efforts on two fronts, namely openQA and the staging projects. As we have just started, we don’t have fireworks for you (yet), but we did some solid groundwork that we are going to build upon.

Working on openQA

We are organizing our daily work on openQA into highly focused sprints of two weeks. The focus of the first sprint was clear: clean up the current codebase to empower future development and lower the entry barrier for casual contributors, which can be translated as “cleaning up our own mess”. We created some tasks in Progress, grouped in a version with a surprising and catchy name: Sprint 01.

Got my mojo workin’

Up to now, the openQA web interface was written using just a bunch of custom CGI scripts and some configuration directives in Apache. We missed a convenient way to access all the bells and whistles of modern web development and some tools to make the code more readable, reliable, and easier to extend and test. In short, we missed a proper web development framework. We evaluated the most popular Perl-based alternatives and finally decided to go for Mojolicious for several reasons:

  • It provides all the functionality we demand from a web framework while being lightweight and unbloated.
  • It’s stated to be a “real time framework” which, buzzwords apart, means that it is designed from the ground up to fully support Comet (long-polling), EventSource and WebSockets. Very handy technologies for implementing some features in openQA.
  • It really “feels” very close to Sinatra, which makes Ruby on Rails developers feel right at home. And we have quite some Rails developers hanging around, don’t we? Just think about OBS, software.o.o, WebYaST, progress.o.o, OSEM, the TSP app…
  • Mojolicious’ motto is “web development can be fun again”. Who could resist that?

We’ve now reached the end of the sprint, and we already have something that looks exactly the same as what we had before, but uses Mojolicious internally. We are very happy with the framework, and we are pretty confident that future development of openQA will be easier and faster than ever. openQA has mojo!

The database layer in openQA

Another part that we worked on during the first sprint was the database layer. The user interface part of openQA uses an SQLite database to store the jobs and workers registered in the system. The connection between the code and the database was expressed directly in SQL using a simple API.

We have replaced this layer with an equivalent one that uses an ORM (object-relational mapping) in Perl: DBIx::Class. Every data model in openQA is now a true object that can be created, copied, and moved between the different layers of the application. Quite handy.

To make sure we didn’t forget anything, we created a bunch of tests covering the whole functionality of the original code and ran this test suite after each step of the migration. In this way we have achieved two goals: we now have a simple way to share and update information through the whole system, and we can migrate very easily to a different database engine (something that we plan to do in the future).

What to do with staging

Over time, coolo had accumulated quite some scripts that helped him with Factory. Most of them are actually related to something we are doing right now: the staging projects. So in the end we migrated all the relevant ones to GitHub, and one by one we are merging their functionality into the staging plugin. We also experimented with test frameworks that we could use to test the plugin itself, selected a few, and we even have a first test! The final plan is to have the whole plugin functionality covered by a proper test suite, so we will know when something breaks. Currently there is a lot of mess in our repo and the plugin itself needs a big cleanup, but we are working on it.

Contributions are welcome

If you want to help but wonder where to start, we identified tasks that are good for diving into the topic and named them “easy hacks”: mostly self-contained tasks that we expect to require little effort but that we lack the time to do right now. Just jump over to the list for openQA or staging projects.

For grabbing the code related to the staging projects, you only need to clone the already mentioned repository. The openQA code is spread across several repositories (one, two, three and four), but setting up your own instance to play and hack on is a piece of cake using the packages available in OBS (built automatically for every git push).
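For instance, assuming the packages are built in the devel:openQA project (the repository path is an assumption; adjust it to your distribution):

zypper ar -f obs://devel:openQA/openSUSE_13.1 openQA
zypper in openQA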

If you simply want to see what we are doing in more detail, take a look at progress.o.o; we have both the openQA and Staging projects there.

We are having a lot of fun, and we encourage you to join us!

Fosdem 2014 Report & Beta testing new openSUSE booth merchandising Stuff

February 4th, 2014

Fosdem 2014

[photo: Fosdem 2014]

Again this year, Fosdem was really delightful, though crowded as hell in a number of conference rooms.

But if there’s a constant, it is the awesomeness of the Fosdem staff and its armada of volunteers. To all of you who made this event so great: please receive, in the name of openSUSE’s community, our warmest thanks and congratulations.

I will not be making a mistake if I predict a big success for the videos of the different talks in the following weeks.

openSUSE merchandising new collection

After a loooong wait, perceived as a century, the openSUSE booth was furnished with the next generation of merchandising stuff.

At least some part of the complete kit, which should be available in April.


Trying to add some light

February 3rd, 2014

Lately there has been some confusion regarding our communication. We at the openSUSE Team@SUSE are deeply aware that our communication needs to be improved. So, in the hope of making everything clear again, here is a summary of what is really going on and what is not happening.

Long story short:

  • There WILL be openSUSE 13.2 in November 2014
  • 13.2 WILL have security and maintenance support provided by SUSE
  • We WILL have coolo as release manager for 13.2
  • SUSE is NOT decreasing manpower put into openSUSE
  • Everybody from the community is welcome and encouraged to get involved with, and if they want to, take over some parts of the release process, and we will support them the best we can in doing that

Now for the long story.

Our team, and only our team (openSUSE@SUSE), is going to work on improving the ‘tooling’ side of the openSUSE project until August. These changes will benefit openSUSE by making it easier to produce better releases in the future.

Nothing changes for the rest of SUSE. SUSE is not abandoning openSUSE. The rest of SUSE will still do the same things they were doing until now and continue to keep openSUSE awesome. This includes Maintenance, Security, Infrastructure, and many other teams besides the openSUSE Team at SUSE who actively support the openSUSE project.

What is our plan?

Our plan is to make sure that future openSUSE releases are easier for everyone to produce. As we grow, we could keep putting in more and more full-time release managers (if we could find them somewhere), but this approach is probably unsustainable and, more importantly, goes against our desire to empower the community to do more as part of openSUSE.

Therefore, we decided to improve our tools to ensure that making a release is much more straightforward and reliable, and that we can reduce and distribute the workload needed for integration and release. To make this happen we need time and everyone from the team working on adapting the tooling side. We would also welcome volunteers to help us with the tools and with the following release(s).

With the release date now set for November (mirroring the roadmap for 13.1), the first milestone should be released in May. That is a perfect opportunity to go to the openSUSE Conference in Croatia, where we can meet up, gather volunteers to help, and discuss how to work. Remember that openSUSE Travel Support is in place to sponsor everyone who needs financial help to get to the event.

Hopefully we have now cleared things up a little, and we are really sorry again for our poor communication; we’re going to work on it.

Your truly confused openSUSE Team

spec-cleaner: hide all your precious cruft!

January 31st, 2014

As we have stated in our communication over time, our team’s main focus for the foreseeable future is Factory and how to manage all those contributions. The goal is not to increase the number of SRs coming into Factory, but to make sure we can process more of them and that we see even well-hidden consequences, so that Factory stays “stable” and “usable”.


Not really part of our current sprints, but something that will hopefully help us, is spec-cleaner, which Tomáš Chvátal and Tomáš Čech have been working on lately in their free time/Hackweek. What is it trying to address? Currently there are some packaging guidelines, but when you write a spec file for your software, you still have plenty of choices. How do you order all the information in the header? Do you use curly brackets around macros? Do you use macros at all? Which ones do you use and which not? Do you use binaries in dependencies? Package config? Perl symbols? Package names? There is the format_spec_file OBS service, which tries to unify the coding style a little, but it leaves quite a lot up to you. That is not necessarily a bad thing, but if you have to compare changes and review packages that use completely different coding styles, the process becomes harder and slower.

spec-cleaner is format_spec_file taken to another level. It tries to unify the coding style as much as it can. It uses consistent conventions, makes most of the decisions mentioned previously for you, and if you already decided on one way in the past, it will try to convert your spec file to follow the conventions that it specifies. It is not enforcing anything; it is a standalone script, and therefore you don’t have to worry that your spec file will be out of your control. You can run it, verify the result (actually, you should verify the result, as there might still be some bugs) and commit it to OBS. If we all do it, our packages will all look more alike, and it will be easier to read and review them.

How to try it? How to help? Well, the code is on GitHub and the packages are in OBS. You may have a version of it in your distribution, but that one is heavily outdated (even the 13.1 version), so add the openSUSE:Tools repo and try the version from there.

zypper ar -f obs://openSUSE:Tools/openSUSE_13.1 openSUSE-Tools
zypper in spec-cleaner

You can then go to some local checkout and see what changes it proposes for your spec file. The easiest way is to just let it do its thing by calling it and taking a look at the changes afterwards.

spec-cleaner -p -i *.spec
osc diff

If it works, great, we will have more unified spec files. If it doesn’t, file a bug 😉

Network boot live ISO

January 29th, 2014

The x86_64 edition of openSUSE Education’s Li-f-e live DVD supports PXE booting the ISO over the network; here is how to do it:

* Install Li-f-e on a server from which other machines will PXE boot; make sure you have plenty of space assigned to the / partition (about 20G)
* Set up LTSP by following this quick start guide
* Create the /srv/nfs folder and copy the Li-f-e ISO there
* Run the following as root in a terminal:

mount /srv/nfs/openSUSE-Edu-li-f-e.x86_64-42.1.1.iso /mnt
cp /mnt/boot/x86_64/loader/linux /srv/tftpboot/boot/linux-life64
cp /mnt/boot/x86_64/loader/initrd /srv/tftpboot/boot/initrd-life64
echo "/srv/nfs *(ro,no_root_squash,async,no_subtree_check)" >> /etc/exports
cat <<EOF>> /srv/tftpboot/pxelinux.cfg/default
LABEL Li-f-e
kernel boot/linux-life64
append initrd=boot/initrd-life64 isofrom_device=nfs:10.0.0.254:/srv/nfs/ isofrom_system=/openSUSE-Edu-li-f-e.x86_64-42.1.1.iso
IPAPPEND 2
EOF
chkconfig rpcbind on && service rpcbind restart
chkconfig nfsserver on && service nfsserver restart

Now you can PXE boot any machine over the network; select Li-f-e from the end of the boot menu to boot the live DVD ISO instead of a normal LTSP session.

Use “yast2 live-installer” to install Li-f-e on the client machine.