
Archive for 2012

Making different openSUSE liveCDs

December 29th, 2012

In my last post I explored the various liveCD creation methods out there, and I really wanted to try one of the others for openSUSE.
Thus I did so today in less than two hours.
I used Debian’s liveCD as a basis and replaced the userspace with that of an openSUSE-11.4-GNOME-liveCD (later versions likely do not work, as systemd is not compatible with old 2.6.32 kernels).
And it worked like a charm. If you want to try it yourself, you need openSUSE and an empty directory with 5GB free space. Then you do as root:

zypper -n in clicfs squashfs cdrkit-cdrtools-compat                # install the needed tools
wget -O Makefile http://lsmod.de/bootcd/Makefile.aufslive.11.4     # fetch the build recipe
make                                                               # downloads the two isos and builds the new liveCD

This will take a while to download the two isos and then at least another 3 minutes for the processing.
If that seems too hard for you, you can just download the finished iso and try it with qemu-kvm -m 1000 -cdrom xxx.iso

Do not let the Debian logo in the bootloader confuse you. Just press enter there.
When running in KVM from RAM, this boots up in 18 seconds, while the original iso took 33 (measured from pressing enter in the bootloader to the time the CPU load goes down). However, with physical media the difference will be less pronounced. Some of the difference comes from the faster gzip decompression. Unfortunately, Debian’s kernel does not support squashfs-xz, so I could not try that.

I hope in the future, we will have aufs patches in our normal openSUSE kernels and add an aufs-live mode to kiwi. That would help with the problems we hit with clicfs when memory runs out (and it can not be freed by deleting files either).

LiveCDs

December 27th, 2012

As a few of you might know, I made my own SUSE-based LiveCDs a while ago, using (like Knoppix) cloop compression with iso9660 and my own kernel code for file-based overlaying to make it writeable. You might be amazed at how fast it runs in KVM. However, the kernel part has bit-rotted and there are other techniques out there today, so I took a look around at how others do their LiveCDs.

But first, a broader overview. To make a LiveCD, the biggest problem is that CDs are not writeable (and even modern flash devices do not tolerate frequent writes). Embedded devices using flash had the same problem. Various approaches have been used in the past to solve this:

  • adapt all software to write into RAM disks, e.g. by having symlinks (hard to create and maintain)
  • load all software into RAM (only feasible for small distributions)
  • use file-based overlaying such as unionfs or aufs to have software write into RAM (lsof, pwd, and hardlinks can be tricky; see the sketch after this list)
  • use block-based overlaying (problem: disk space can not easily be freed again)
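
To give an idea of the file-based approach, here is a minimal aufs sketch (the paths are made up, and the kernel needs aufs support):

  mount -t tmpfs tmpfs /tmp/rw                                      # writable branch in RAM
  mount -t aufs -o br=/tmp/rw=rw:/media/rootfs=ro none /mnt/root    # stack it over the read-only image

Everything written under /mnt/root then lands in the tmpfs branch, while reads fall through to the read-only image.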

Compression is also used to fit more onto a CD. Interestingly, this usually speeds up booting as well, because it is faster to read 10MB off a CD and decompress it into the original 30MB than to read 30MB from such a slow medium.
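
A rough back-of-the-envelope illustration, with assumed (not measured) speeds of 10MB/s for reading from the CD and 90MB/s for gzip decompression:

  reading 30MB uncompressed:       30 / 10           = 3.0s
  reading 10MB + decompressing:    10 / 10 + 30 / 90 ≈ 1.3s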

Now, to the distributions.

  • openSUSE has the classic DVD installs, which use special installation images and run in RAM; then there are the real LiveCDs, which are created by our kiwi tool and use block-based overlaying and LZMA compression of an ext3 image by means of our FUSE-based clicfs.
  • All of the other distributions use squashfs for compression. Mageia employs dracut for the initrd and unionfs for file-based overlaying.
  • Debian uses aufs for file-based overlaying.
  • Ubuntu uses overlayfs for file-based overlaying.
  • Fedora uses an ext4 filesystem image contained in a squashfs with dm-snapshot for block-based overlaying, thus being most similar to openSUSE.

I also spent some time benchmarking the various technologies (on my AMD A10-5800K) with a simple script, using Debian’s uncompressed rootfs of 495132 KiB as data.
squashfs supports three different compression methods: lzo, gzip and xz (aka LZMA).

  • squashfs-lzo: size: 220992 KiB, compression: 11.1 MB/s, decompression: 134.4 MB/s
  • squashfs-gzip: size: 203328 KiB, compression: 15.5 MB/s, decompression: 88.9 MB/s
  • squashfs-xz: size: 176064 KiB, compression: 6.5 MB/s, decompression: 22.5 MB/s
  • cloop(gzip): size: 213348 KiB, compression: 16.2 MB/s, decompression: 49.6 MB/s
  • clicfs(xz): size: 185300 KiB, compression: 16.7 MB/s, decompression: 18.2 MB/s

This holds some surprises: even when using the same compression method, sizes can differ by 5% and speeds can differ even more.
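
To gather similar numbers yourself, here is a minimal sketch using squashfs-tools (the exact options of my original script are not reproduced here, so treat this as an approximation):

  time mksquashfs rootfs/ test.squashfs -comp gzip -noappend    # repeat with lzo and xz
  time unsquashfs -f -d /tmp/unpacked test.squashfs             # measures decompression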

If you want to compare numbers on your system, memory throughput is also interesting:
# dd if=/dev/zero of=/dev/null bs=1M count=100000
104857600000 bytes (105 GB) copied, 12.4499 s, 8.4 GB/s

Overall, clicfs performs OK, considering that it already takes care of the overlaying, but for my own LiveCD I would prefer Debian’s method, and I am wondering how it would work.

new osc buildlog --strip-time option

December 17th, 2012

Hi,

as of late, each line in the buildlog is prefixed with a “timestamp”. If you do not need
this information, just run “osc bl --strip-time …” (this will remove the leading timestamp).
Additionally there’s a new config option to permanently enable the stripping:
osc config general buildlog_strip_time 1
(by default “buildlog_strip_time” is set to False).
The “--strip-time” option is also supported by the “localbuildlog” and “remotebuildlog”
commands.
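
For example, from within a checked-out package working copy (the repository and architecture here are just typical placeholders):

  osc bl --strip-time standard x86_64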

Marcus

AMD/ATI fglrx 9.002 Catalyst 12.10 released

October 28th, 2012


Notice

This release concerns only owners of Radeon HD5xxx or newer GPUs. For older GPUs, fglrx-legacy is still at 12.6:
SDB:AMD_fgrlx_legacy


fgl_glxgears & AMD CCLE

Release note about 12.10

This Catalyst version supports 11.4 up to Tumbleweed (thus also the kernel 3.6.x series). Unfortunately, the latest Factory didn’t upgrade successfully and failed to create its initrd. So no fglrx 12.10 for Factory at the moment, but you are debugging the free radeon driver anyway, aren’t you? 🙂

I’m translating here a note from Sebastian Siebert’s last post:

I have a small request: if you have any problems with the driver, don’t be afraid to report them to me (bug reports in German and English are both gladly accepted). I will try, as far as I am able, to reproduce the bug. Together with the necessary system information, I will go directly to the right place at AMD to get the bug fixed in the next driver release.
Thank you very much, Sebastian.

See below for what to do in case of trouble.

You can also ping him on IRC (freespacer).


osc12: it’s started already

October 20th, 2012

osc12 main entry for Saturday & Sunday

Just a quick note: I’m sharing my osc12 photo album at this address.

Enjoy!

OT: Shimano Alfine 11 Di2 SEIS and backwards compatibility with existing hubs

October 17th, 2012

This post is completely OT for openSUSE, but I don’t have a better place to put it to share this useful snippet of information.

Shimano has just released its electronic shifting system for Alfine internally geared hubs. In mechanical Alfine, the gear cable pull is translated into rotation and gear selection by a detachable unit that sits on the end of the hub.  Alfine Di2 SEIS replaces this with a MU-S705 motor unit controlled by an electronic brake lever.  But it also introduces a new hub (SG-S705).  I wondered whether the motor unit can be retrofitted to existing hubs, as I have the original Alfine 11 (SG-S700) on my Genesis Day One, and I’m not completely happy with the Versa drop bar brake lever integrated shifter*.

So I mailed Paul Lange, Shimano’s German distributor, to ask. The answer I got is that the SG-S700 hub can not be used with the Di2 components, because it has a return spring for upshifts, whereas SG-S705 does not since the gear selection in both directions is actively performed by the motor.  If you put a MU-S705 motor unit on an SG-S700 it would be working against the return spring.

As far as I understood it, there is a spring in the SM-S700 cable end unit – I didn’t know there is also one in the hub itself, but I’ll check next time I have the wheel out.  Until then, my dreams of perfect drop bar shifting are just that, because at 400 quid SRP the hub is a big investment.  Maybe Shimano will take pity on me and make a mechanical STI…

* Mostly because there is no little cam decoupling the upshift lever from the cable spool inside the Versa shifter, so sometimes it shifts up several gears at once.

Snapper for Everyone

October 16th, 2012

With the release of snapper 0.1.0, non-root users are now also able to manage snapshots. On the technical side, this is achieved by splitting snapper into a client and a server that communicate via D-Bus. As a user, you should not notice any difference.

So how can you make use of it? Suppose the subvolume /home/tux is already configured for snapper and you want to allow the user tux to manage the snapshots for her home directory. This is done in two easy steps:

  1. Edit /etc/snapper/configs/home-tux and add ALLOW_USERS="tux". Currently the server snapperd does not reload the configuration, so if it’s running, either kill it or wait for it to terminate by itself.
  2. Give the user permission to read and access the .snapshots directory: chmod a+rx /home/tux/.snapshots

For details consult the snapper man-page.
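
Condensed into commands (run as root; this assumes the home-tux configuration from above):

  echo 'ALLOW_USERS="tux"' >> /etc/snapper/configs/home-tux    # step 1
  killall snapperd                                             # make sure the config is re-read
  chmod a+rx /home/tux/.snapshots                              # step 2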

Now tux can play with snapper:

  tux> alias snapper="snapper -c home-tux"

  tux> snapper create --description test

  tux> snapper list
  Type   | # | Pre # | Date                             | User | Cleanup  | Description | Userdata
  -------+---+-------+----------------------------------+------+----------+-------------+---------
  single | 0 |       |                                  | root |          | current     |         
  single | 1 |       | Tue 16 Oct 2012 12:15:01 PM CEST | root | timeline | timeline    |         
  single | 2 |       | Tue 16 Oct 2012 12:21:38 PM CEST | tux  |          | test        |

Snapper packages are available for various distributions in the filesystems:snapper project.

So long and see you at the openSUSE Conference 2012 in Prague.

xtrabackup for MySQL

October 14th, 2012

If you run data-driven applications like I do, you probably already run some kind of backup and have plans for disaster recovery. I hope you are not still using SQL dumps?

I have been using Percona XtraBackup professionally for MySQL backups for a while now. Especially if your database access is highly transactional you will find it useful that you can get consistent non-blocking, non-purging backups while continuing to serve transactions. Who wants downtime anyway?

Under the hood, the software takes a dirty copy of the InnoDB tablespaces on disk and extracts the transaction logs required to bring all of these to a specific point in time, or rather LSN, using a patched version of the mysqld binary. The preparation / restore step applies those logs to the files, which results in MySQL tablespaces and log files equivalent to what they would have been after a clean MySQL shutdown.

Mixing transactional with non-transactional database engines is possible if you are willing to accept some blocking time while backing them up. If you are using MySQL replication, you can also use this to create a new slave from a master, or to clone a slave from another one, without downtime on either.

The upgrade to the 2.0 series adds, among other things, parallel IO and parallel compression. This requires a new streaming file format, xbstream, in addition to the previous tar format. Think of it as tar with multiple input pipes.

I added the xtrabackup package to openSUSE, it is available in the server:database project (repo, SLE 11) right now and will also be part of the next openSUSE release.
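
For the impatient, a minimal backup/prepare cycle with the innobackupex wrapper looks roughly like this (paths and credentials are placeholders; check the Percona documentation for the details of your version):

  innobackupex --user=backup --password=secret /var/backup/mysql/    # non-blocking copy, creates a timestamped directory
  innobackupex --apply-log /var/backup/mysql/2012-10-14_02-00-00/    # prepare: replay the logs for a consistent datadir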

Remember that these are only tools. Love your data and protect your business. A copy is not a backup. A backup that isn’t monitored for success is not a backup. A backup that is not proven to restore successfully is barely a backup.

Contact me if you need help setting this up.

Going to osc12 – Want a place in the geekiest car?

October 9th, 2012

I will finally be able to attend osc12, and I’m really looking forward to being there. I wouldn’t have missed this 4th conference, the first one outside of Germany.

Your VIP place is waiting for you

I will drive there in the Geeko car, spending some time on the road on Friday, October 19th.

You can find my planned journey on this map.

I still have some room in the car, so if you want to join and are near one of the big places I will pass through, just ping me.

For the return trip, I’m sorry, but I will only leave Prague on Saturday, October 27th, due to the PostgreSQL Europe conference.
PG

You should also be there 🙂

Note: it’s still a small car, so forget about bringing your whole geek wardrobe 🙂

Run X2go thin-client using kiwi-ltsp

September 27th, 2012

Recently, I came across x2go packages maintained by Jan Engelhardt for openSUSE and other distributions on the Open Build Service. As openSUSE Education Li-f-e has great LTSP integration thanks to KIWI-LTSP, I decided to check out how x2go can fit in with this existing thin-client computing solution.

“x2go is an open (GPL/AGPL) source “server based computing” project. Combining the advantages of existing systems it features ease of use, performance and scalability. x2go provides you with access to your desktop – from within your own network and via the internet. x2go is not limited to particular hardware, it supports a variety of devices and architectures.” -from their website.

Some of the features/benefits of x2go that are not available on LTSP are:

* Remote login from within the local LAN and over the internet, from any OS
* Session persistence: you can disconnect a session on one client and continue where you left off on any other client
* Low bandwidth usage
* Session sharing with other users