
bmwiedemann

coder, tester, poet

Author Archive

OpenStack Infra/QA Meetup

July 23rd, 2014 by bmwiedemann

Last week, around 30 people from around the world met in Darmstadt, Germany to discuss various things about OpenStack and its automatic testing mechanisms (CI).
The meeting was well-organized by Marc Koderer from Deutsche Telekom.
We were shown plans of what the Telekom intends to do with virtualization in general and OpenStack in particular. The most interesting one to me was to run clouds in dozens of datacenters across Germany, but with a single API for users to access.
There were some introductory sessions about the use of git review and gerrit, which mostly covered things I (and, I guess, the majority of the others) had already learned over the years. They also included some newer parts, such as tracking “specs” – specifications (.rst files) – in gerrit with proper review by the core reviewers, so that proper processes can already be applied in the design phase to ensure the project is moving in the right direction.

On the second day we learned that the infra team manages its servers with puppet, and about jenkins-job-builder (jjb), which creates around 4000 jobs from yaml templates. We also learned about nodepool, which keeps a number of VMs ready so that jobs in need do not have to wait for them to boot. 180-800 instances is quite an impressive number.
And then we spent three days on discussing and hacking things, the topics and outcomes of which you can find in the etherpad linked from the wiki page.
I got my first infra patch merged and a SUSE Cloud CI account set up, so that in the future we can test devstack+tempest on openSUSE and have it comment in Gerrit. And maybe some day we can even have a test that deploys crowbar+openstack from git (including the patch from an open review) to provide useful feedback. But for that we might first want to move crowbar (which consists of dozens of repos – one for each module) to stackforge – the openstack-provided Gerrit hosting.

see also: pleia2's post

Overall, it was a nice experience for me to work together with all these smart people, and we certainly had a lot of fun.

getting my DVB-T card to work

March 6th, 2014 by bmwiedemann

Today I tried to get a DVB-T card to work with a new antenna on a fresh openSUSE 13.1 install.
I knew the card was working, because I had run it with 12.3 on this machine last year.

hwinfo --tv
showed
Model: "Hauppauge computer works WinTV HVR-1110"
Vendor: pci 0x1131 "Philips Semiconductors"
Device: pci 0x7133 "SAA7131/SAA7133/SAA7135 Video Broadcast Decoder"

So after plugging everything in, I started kaffeine, which still knew about all local channels, but could not tune.
http://www.linuxtv.org/wiki/index.php/Hauppauge_WinTV-HVR-1110 gave the important hint that one needs a firmware file. After that was in /lib/firmware and after a reboot came the next try: kaffeine now showed 99% SNR, so a good signal, and even knew what was currently on air – however, the picture remained black.
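For reference, the firmware step boiled down to something like this (the file name is the one the linuxtv wiki gives for this card's TDA10046 demodulator – treat it as an assumption and check the wiki page for your exact model):

# copy the firmware to where the kernel driver looks for it, then reboot
sudo cp dvb-fe-tda10046.fw /lib/firmware/
sudo reboot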
kaffeine hinted that it needed extra software, but could not find it, even though the packman repos were available (an annoying bug).
Installing kde3-kaffeine from packman did not help.
http://opensuse-community.org/ finally helped – I needed some codec packages from the packman repo.
Now everything is working after less than an hour.

another way to access a cloud VM’s VNC console

February 8th, 2014 by bmwiedemann

If you have used a cloud based on OpenStack, you will have seen the dashboard, which includes web-based VNC access using noVNC + WebSockets.
However, it was not possible to access this VNC directly (e.g. with my favourite gvncviewer from the gtk-vnc-tools package), because the actual compute nodes are hidden, and accessing them directly would also circumvent authentication.

I want this in order to add an OpenStack backend to openQA, my OS-autotesting framework, which emulates a user by means of a few primitives: grabbing screenshots and typing keys (both possible through VNC), powering up a machine (=nova boot), and inserting/ejecting an installation medium (=nova volume-attach / volume-detach).
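The cloud-side primitives map directly to plain nova CLI calls; a rough sketch (image, flavor, instance and volume names here are made-up placeholders):

nova boot --image openSUSE-13.1 --flavor m1.small testvm   # power up a machine
nova volume-attach testvm $VOLUMEID /dev/vdb               # insert an installation medium
nova volume-detach testvm $VOLUMEID                        # eject it again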

To allow for this, I wrote a small Perl script that translates a TCP connection into a WebSocket connection.
It is installed like this:
git clone https://github.com/bmwiedemann/connectionproxy.git
sudo /sbin/OneClickInstallUI http://multiymp.zq1.de/perl-Protocol-WebSocket?base=http://download.opensuse.org/repositories/devel:languages:perl

and is used like this:
nova get-vnc-console $YOURINSTANCE novnc
perl wsconnectionproxy.pl --port 5942 --to http://cloud.example.com:6080/vnc_auto.html?token=73a3e035-cc28-49b4-9013-a9692671788e
gvncviewer localhost:42
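Because the token in that URL expires after a few minutes (see below), it can help to feed nova's output straight to the proxy. A sketch, assuming the URL sits in the fourth whitespace-separated column of nova's table output:

URL=$(nova get-vnc-console $YOURINSTANCE novnc | awk '/novnc/ {print $4}')
perl wsconnectionproxy.pl --port 5942 --to "$URL"
gvncviewer localhost:42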

I hope this neat code will be useful for other people and tasks as well and wish you a lot of fun with it.

Some technical details:

  • The code is able to handle multiple connections in a single thread using select.
  • HTTPS is not supported in the code, but likely could be done with stunnel.
  • WebSocket-code was written in 3h.
  • noVNC tokens expire after a few minutes.

Hongkong OpenStack Design Summit

November 13th, 2013 by bmwiedemann

So last week many OpenStack (cloud software) developers met in Hongkong’s world expo halls to discuss future development and show off what has been done already.

Overall, I heard there were 3000 attendees, with around 800 of them being developers. That sounds like a large number of people, but luckily everything felt well-organized and the rooms were always big enough to seat everyone interested.

The design sessions were usually pretty low-level and focused on one component, so it was not easy for me to make useful contributions there. The sessions about read-only API access (e.g. for helpdesk workers and monitoring) and about HA were the most useful to me.

In the breakout rooms there were interesting sessions by many large OpenStack users (CERN, Ebay, Paypal, Dreamhost, Rackspace) giving valuable insights into what people expect from and do with a cloud. Many of them are using custom-built parts, because plain OpenStack is still not complete enough to run a cloud. SUSE Cloud ships with some of these missing parts (e.g. deployment and configuration management), but most organisations seem to run their own at the moment.

Cloudbase was there, talking about their Hyper-V support, which we integrated in SUSE Cloud.
Apart from the 6 SUSE Cloud developers, there were several local (and one Australian) SUSE guys manning the booth.

Overall it was quite some experience to be there (in such an exotic and yet nice place) and listen and talk to so many different people from very different backgrounds.

How I ran openSUSE on a Nexus 7

September 22nd, 2013 by bmwiedemann

The Nexus 7 (2012 version) is a 7 inch tablet by Google+Asus.
The nice thing about it is that it has an unlockable bootloader. It also has an armv7 CPU, and we have been building openSUSE for this CPU for some years. I had one such device with a broken display, so doing some more risky things with it seemed appropriate.
I wanted to run my own software on it. Running openSUSE in a chroot (change-root) environment is usually a lot easier than replacing the whole system, so this is the route I took.

First, I needed two tools. One is “adb” – the Android DeBug tool from the official sdk – and the other is “fastboot”, which was hard to find, so I mirror it here.
I got the stable ROM from http://wiki.cyanogenmod.org/w/Grouper_Info and followed their installation instructions. adb shell only seemed to work while in the bootloader (which you reach by holding Volume-Down+Power during boot).
The hardest part was to re-enable USB-debugging by going into Settings/About tablet and tapping Build-Number seven times.

Also, before zapping everything that was there, I ran cp -a /system/app /sdcard/ in an adb shell, and copied it back later.
So after following all the other installation steps, I had cyanogenmod booting. I attached a bluetooth keyboard so that I could type better. The ROM comes with a terminal app, which I opened. Type su - to become root (after a security popup).
Now I downloaded my latest Raspberry Pi image from http://www.zq1.de/bernhard/linux/opensuse/ to /sdcard/Download, where I unpacked it with xz -d.
Then comes the tricky part. The image has a partition table, but here we just need the root filesystem. With fdisk -lu we can see that it starts at sector 309248, i.e. at a byte offset of 512*309248 = 158334976. One could copy out that part with dd, or use a loop device with an offset like this:
#!/system/xbin/sh
# the loop device node might not exist on Android, so create it
mknod /dev/loop0 b 7 0
losetup -d /dev/loop0 # cleanup of a previous try
# attach the image at the root partition's byte offset (512 * sector 309248)
losetup -o `expr 512 \* 309248` /dev/loop0 rasp*img
mkdir -p mnt
mount -t ext2 /dev/loop0 mnt

Now we have access to the openSUSE files under mnt.
In there I created a chroot.sh:
#!/system/xbin/sh
# make kernel interfaces available inside the chroot
for m in proc sys dev ; do mount -o bind /$m $m ; done
# enter the openSUSE userland with a sane $HOME and $PATH
HOME=/root PATH=/sbin:/usr/sbin:/usr/local/sbin:/root/bin:/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/X11R6/bin /system/xbin/chroot . bin/bash
# clean up the bind mounts after leaving the chroot
for m in proc sys dev ; do umount $m ; done

With that, the only remaining thing to do was to add a nameserver line to /etc/resolv.conf; after that I could use zypper to install software, e.g. zypper install yast2-network yast2-ncurses.
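For example, from outside the chroot (assuming the rootfs is still mounted under mnt; 8.8.8.8 is just one public resolver – use whatever your network provides):

echo "nameserver 8.8.8.8" > mnt/etc/resolv.conf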
Running yast lan on the Nexus 7 is a nice sight.

I guess one could also use the armv7 rootfs to have software built for armv7 instead of the compatible armv6. But for me it does not matter much.
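For completeness, tearing the setup down again is just the reverse; a short sketch (run outside the chroot):

umount mnt
losetup -d /dev/loop0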

New Raspberry Pi Image

September 7th, 2013 by bmwiedemann

update: a new image with kernel 3.6 and a minimal X11/icewm: http://www.zq1.de/~bernhard/linux/opensuse/raspberrypi-opensuse-20130911x.img.xz (103MB)

We got a new armv6 based image for the Raspberry Pi.
This one is only 82MB compressed, so pretty minimalistic.

http://www.zq1.de/~bernhard/linux/opensuse/raspberrypi-opensuse-20130907.img.xz

The exciting new thing is that this image was created using an alternative image-building automation which I wrote from scratch in three hours this morning.
The scripts can be found at

https://build.opensuse.org/package/show/devel:ARM:Factory:Contrib:RaspberryPi/altimagebuild

and are also embedded within the image under /home/abuild/rpmbuild/SOURCES/

This means that everyone can now easily build their own images the way they like, and even branch and create submit requests for changes that are useful for others.
The way to use this is simple.
If you have 6GB RAM, you can speed things up with export OSC_BUILD_ROOT=/dev/shm/arm before you do
osc co devel:ARM:Factory:Contrib:RaspberryPi altimagebuild
cd devel:ARM:Factory:Contrib:RaspberryPi/altimagebuild
bash -x main.sh
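When main.sh is done, the resulting image can be written to an SD card. A hedged sketch – the image file name here is an assumption (use whatever .img mkimage.sh produced, see below), and /dev/sdX must really be your SD card, because dd will happily overwrite anything:

sudo dd if=raspberrypi-opensuse.img of=/dev/sdX bs=4M
sync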

This pseudo-package does not easily build within OBS or with osc alone, because it needs root permissions for some of the steps (chroot, mknod, mount), which could only be worked around with User-Mode-Linux or by patching osc.
The build consists of three steps that can be seen in main.sh:

  1. First, osc build is used to pull in the required packages and set up the armv6 rootfs.
  2. Second, mkrootfs.sh modifies a copy of the rootfs under .root to contain all required configs.
  3. And finally, mkimage.sh takes the .root dir and creates a bootable .img from it.

This can build an image from scratch in three minutes. And my Raspberry Pi booted successfully with it within 55 seconds.

There are some remaining open issues:

  • the repo key is initially untrusted
  • still uses old 3.1 kernel – solved
  • build scripts have no error handling

Compared to the old image, this one has some advantages:

  • It is easier to resize because the root partition is the last one
  • Compressed image is much smaller
  • Reproducible image build, so easy to customize
  • It is armv6 with floating point support, so could be faster
  • We have over 5200 successfully built packages from openSUSE:Factory:ARM
    so for example you can install a minimalistic graphical environment with zypper install xauth twm xorg-x11-server xinit and start it with startx

So if you wanted to play with openSUSE on RPi, you can do so right now and have a lot of fun.

GPIO on Raspberry Pi

March 31st, 2013 by bmwiedemann

We have these working openSUSE Factory images for the Raspberry Pi, which is an ARM-based mini-computer, and since I want to encourage my kid to do more with computers than playing games (even if they are open source), I looked into how the GPIOs work.
For that, you need to find the pin allocation, e.g. in the elinux GPIO description, or in the video at http://www.raspberrypi.org/archives/1417, which explains it at 03:00.

For my test, I wired together pin11 and pin12, which are GPIO17 and GPIO18.
I wanted GPIO17 to receive what is sent by GPIO18.
This is how this looked for me: (photo of GPIO17 and GPIO18 wired together)

Using it directly from the shell is simple:
# make both pins available in sysfs
echo 17 > /sys/class/gpio/export
echo 18 > /sys/class/gpio/export
# GPIO18 is the sender, so switch it to output mode (GPIO17 stays an input)
echo out > /sys/class/gpio/gpio18/direction
# drive GPIO18 high
echo 1 > /sys/class/gpio/gpio18/value
# poll all pin values once per second
watch -n 1 head /sys/class/gpio/gpio*/value

If the wiring and configuration are right, the “watch” will show gpio17/value becoming 1, too.
You can then pull the wire (or insert a physical switch) and see gpio17/value drop back to 0 when it no longer receives current from the other pin.
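When you are done playing, you can hand the pins back to the kernel; the counterpart to the export above:

echo 17 > /sys/class/gpio/unexport
echo 18 > /sys/class/gpio/unexport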

If you managed to get this working, you reached level one of hardware-hackery.

OpenStack on openSUSE

January 11th, 2013 by bmwiedemann

Do you want to play with cloud software on your own machines?
Some people have been working to package the current OpenStack version “Folsom” for openSUSE (tested on 12.2) and to add scripts that configure it into a working state.
You need 2GB of RAM and 3+ GB of free disk space under /var/lib/.
Then you do:

wget https://raw.github.com/SUSE-Cloud/automation/master/scripts/jenkins/qa_openstack.sh
export cloudsource=openstackfolsom ; bash -x qa_openstack.sh

This is a script we use for continuous integration testing, but it is just as useful to set up a simple environment for development, testing or demoing.
Folsom packages are still rather rough and might see some change over the coming weeks.

If you want the older stable version, you can use the above snippet with cloudsource=openstackessex;
however, there are some known bugs in that old version, and backports are really hard.
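For the record, the Essex variant of the snippet is simply:

export cloudsource=openstackessex ; bash -x qa_openstack.sh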

Grizzly packages will follow soon. More is to come…

P.S. To interact with your cloud, you need credentials, which are automatically sourced from /etc/bash.bashrc.local (they are admin:openstack). Then you use commands like
nova list and glance image-list
but there is also a web interface that allows you to do most actions in a browser – even VNC, if you use KVM instead of the default lxc.

Making different openSUSE liveCDs

December 29th, 2012 by bmwiedemann

In my last post I explored the various liveCD creation methods out there, and I really wanted to try one of the others for openSUSE.
Thus I did so today in less than two hours.
I used Debian’s liveCD as a basis and replaced the userspace with that of an openSUSE-11.4-GNOME liveCD (later ones likely do not work, as systemd is not compatible with old 2.6.32 kernels).
And it worked like a charm. If you want to try it yourself, you need openSUSE and an empty directory with 5GB of free space. Then you do as root:

zypper -n in clicfs squashfs cdrkit-cdrtools-compat
wget -O Makefile http://lsmod.de/bootcd/Makefile.aufslive.11.4
make

This will take a while to download the two isos and then at least another 3 minutes for the processing.
If that seems too hard for you, you can just download the finished iso and try it with qemu-kvm -m 1000 -cdrom xxx.iso

Do not let the debian logo in the bootloader confuse you. Just press enter there.
When running in KVM from RAM, this boots up in 18 seconds, while the original iso took 33 (measured from pressing enter in the bootloader to the time the CPU load goes down). With physical media, however, the difference will be less pronounced. Some of the difference comes from the faster gzip decompression. Unfortunately, Debian’s kernel does not support squashfs-xz, so I could not try that.

I hope that in the future we will have aufs patches in our normal openSUSE kernels and can add an aufs-live mode to kiwi. That would help with the problems we hit with clicfs when memory runs out (and cannot be freed by deleting files either).

LiveCDs

December 27th, 2012 by bmwiedemann

As a few of you might know, I made my own SUSE-based LiveCDs a while ago, using (like Knoppix) cloop compression with iso9660 and my own kernel code for file-based overlaying to make it writeable. You might be amazed at how fast it runs in KVM. However, the kernel part has bit-rotted and there are other techniques out there today, so I took a look around at how others do their LiveCDs.

But first, some broader overview. When making a LiveCD, the biggest problem is that CDs are not writeable (and even modern flash devices do not like being written too much). Embedded devices using flash had the same problem, and various approaches have been used in the past to solve it:

  • adapt all software to write into ram-disks e.g. by having symlinks (hard to create and maintain)
  • load all software into RAM (only for small distributions)
  • use file-based overlaying such as unionfs or aufs to have software write into RAM (lsof, pwd, and hardlinks can be tricky)
  • use block-based overlaying (problem: can not easily free disk space again)

Compression is also used to fit more onto a CD. Interestingly, this usually also speeds up booting, because it is faster to read 10MB off a CD and decompress it into the original 30MB than to read 30MB from such a slow medium.

Now, to the distributions.

  • openSUSE has the classic DVD installs that use special installation images and run in RAM; then there are the real LiveCDs, which are created by our kiwi tool and use block-based overlaying and LZMA compression of an ext3 by means of our FUSE-based clicfs.
  • All of the other distributions use squashfs for compression. Mageia employs dracut for the initrd and unionfs for file-based overlaying
  • Debian uses aufs for file-based overlaying
  • Ubuntu uses overlayfs for file-based overlaying
  • Fedora uses an ext4 filesystem image contained in a squashfs with dm-snapshot for block-based overlaying, thus being most similar to openSUSE

I also spent some time benchmarking the various technologies (on my AMD A10-5800K) with a simple script, using Debian’s uncompressed rootfs of 495132 KiB as data; a sketch of such a benchmark follows below the results.
squashfs supports three different compression methods: lzo, gzip and xz (aka LZMA).

  • squashfs-lzo: size:220992 compression:11.1MB/s decompression:134.4MB/s
  • squashfs-gzip: size:203328 compression:15.5MB/s decompression:88.9MB/s
  • squashfs-xz: size:176064 compression:6.5MB/s decompression:22.5MB/s
  • cloop(gzip): size:213348 compression:16.2MB/s decompression:49.6MB/s
  • clicfs(xz): size:185300 compression:16.7MB/s decompression:18.2MB/s
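A minimal sketch of such a benchmark – not the actual script; it assumes an unpacked rootfs tree under ./rootfs and squashfs-tools built with lzo and xz support:

# build one squashfs per compression method, timing each run
for comp in lzo gzip xz ; do
    time mksquashfs rootfs test-$comp.squashfs -comp $comp -noappend
    ls -l test-$comp.squashfs
done
# decompression speed: unpack one of them again
time unsquashfs -f -d /tmp/unsquashed test-xz.squashfs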

This has some surprises: even when using the same compression method, sizes can differ by 5% and speed can differ even more.

If you want to compare numbers on your system, memory throughput is also interesting:
# dd if=/dev/zero of=/dev/null bs=1M count=100000
104857600000 bytes (105 GB) copied, 12.4499 s, 8.4 GB/s

Overall, clicfs performs OK, considering that it already takes care of the overlaying, but for my own LiveCD I would prefer Debian’s method, and I am wondering how well it would work.