

Archive for August, 2010

OBS 2.1: ACL Feature and Status

August 15th, 2010 by

One and a half years have now passed since I posted about my work on ARM support in the OBS and on a port of openSUSE to ARM. A lot of related things have happened in the meantime. From my limited view, most notably: Nokia and Intel merged Moblin and Maemo into MeeGo (MeeGo is currently working on a number of Atom and ARM based devices) and chose the OBS as their build system, and last but not least I joined The Linux Foundation (you will not be surprised to hear that I work at the LF on the OBS). In the meantime there has also been a major new OBS release, 1.8/2.0, with a bunch of new features.

Interestingly, we adapted the OBS cross-build setup, first developed for use in Maemo and openSUSE @ ARM, to MeeGo. An improved version is used in the MeeGo OBS system to build all ARM releases of MeeGo, both the standard releases and the weekly snapshots (the cross toolchain will later become part of the MeeGo SDK @ ARM), thanks to Jan-Simon Möller. (On the openSUSE mailing list, the issue of reactivating openSUSE:Factory ARM builds was brought up; backporting Jan-Simon's new solution into openSUSE @ ARM might be a good option for that purpose.) All the MeeGo-related OBS installations will move to OBS 2.1 sooner or later.

But now to the most recent work: Access Control support. A preview shipped with OBS 1.8. Now a dedicated OBS version, 2.1, will introduce this single new feature into the OBS mainline: Access Control (abbreviated ACL, for Access Control Lists). ACL means that users can, on a per-project or per-package basis, protect information, sources and binaries from read access by other users in an OBS system, and hide projects or packages.

What is the intended audience of ACL? ACL is intended for OBS installations that require protection of projects or packages during work. This can be, but is not limited to, commercial or semi-public installations of OBS.

How does ACL work? ACL sits on top of two features introduced with OBS 2.0: role and permission management, and freely definable user groups. ACL uses four specifically defined permissions on a user or group in the role and permission management:

  • ‘source_access’ – read access to sources
  • ‘private_view’ – viewing of package and project information
  • ‘download_binaries’ – read access to binaries
  • ‘access’ – protects and hides everything from read access and viewing

Also, the preexisting roles “maintainer”, “reader” and “downloader” have been given specific predefined permissions (which can be changed dynamically at any time with the role and permission editor). And last but not least, four new flags have been added to the project and package descriptions to signal that some information is only readable by specific users or groups, or that information is hidden:

  • ‘sourceaccess’ – the project/package has read-protected source code
  • ‘binarydownload’ – it has read-protected binary packages
  • ‘privacy’ – information, logfiles and status cannot be read
  • ‘access’ – the project or package is completely hidden and protected in all possible OBS API calls

How do I use ACL? There are four steps to using ACL (some of them are optional, and some can only be performed by the administrator of an OBS instance). Step one is to assign the listed permissions to a role, user or group (this step can be done only by the admin, and is not needed for the predefined roles “maintainer”, “reader” and “downloader”). Step two is to add a group for special users to the projects which are intended to be run with ACL (this operation can only be performed by the admin). Step three is to protect a project with the appropriate protection flags at project creation time, by adding them to the project meta. Step four is to add other users or groups with one of the new predefined roles that carry ACL permissions to the project meta.
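In practice, steps three and four boil down to editing the project meta XML. With the osc command-line client this can be sketched roughly as follows (the project name is made up for illustration):

```shell
# Print the current project meta as XML:
osc meta prj home:example:secret

# Open the project meta in $EDITOR to add the protection flags
# and the user/group role assignments:
osc meta prj -e home:example:secret
```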

What information can be protected by ACL? The protected information falls into four categories. Category 1 (flag ‘sourceaccess’) is source code. Category 2 (flag ‘binarydownload’) is binary packages, logfiles and builds. Category 3 (flag ‘privacy’) is project or package information such as build status. Category 4 (flag ‘access’) is all viewable or accessible information of a project or package (full blocking of all access and information).

Example of a project configuration using ACL:

<user userid="MartinMohring" role="maintainer" />
<!-- grant user full write and read access -->

<group groupid="MeeGo-Reviewer" role="maintainer" />
<!-- grant group full write and read access -->

<group groupid="MeeGo-Developers" role="reader" />
<!-- grant group full source read access -->

<group groupid="MeeGo-BetaTesters" role="downloader" />
<!-- grant group access to packages/images -->

  <sourceaccess>
    <disable/>
  </sourceaccess>
  <!-- disable read access - unless granted explicitly.
          This flag will not accept arch or repository arguments. -->

  <binarydownload>
    <disable/>
  </binarydownload>
  <!-- disable access - unless granted explicitly -
          to packages/images and logfiles -->

  <access>
    <disable/>
  </access>
  <!-- disable access - unless granted explicitly -;
          the project will not be visible or found via search,
          nor will any source, binary or logfile be accessible.
          This flag will not accept arch or repository arguments. -->

  <privacy>
    <enable/>
  </privacy>
  <!-- the project information will not be visible.
          This flag will not accept arch or repository arguments. -->

What is the current status of the ACL implementation? The complete API of the OBS git master has been instrumented with ACL code, critical portions of the API controllers have been code-inspected, and a big portion of these API calls now have test cases in the OBS test suite. Work is ongoing to make ACL as secure as possible. A code drop of the current git master is under test in some bigger OBS systems, most notably the openSUSE build service. You can find snapshots of this codebase as usual in the OBS project openSUSE:Tools:Unstable. Adrian Schröter updates these “alpha snapshots” relatively often, on a 1-2 week basis, and runs the test suite on git master daily. Thanks to Jan-Simon Möller for putting many of the ACL test cases into the test suite. On OBS testing in general, read also Development and Test.

What is next? The code is being tested and debugged against granting unwanted access through concepts inside the OBS that “work against ACL”, such as project or package links, aggregates and kiwi imaging. We will of course inform interested users about beta releases and the official 2.1 release.

Stay tuned.

oSC10 – Conference Update

August 12th, 2010 by

So Stage 1 of the next openSUSE Conference is complete (the submission deadline has passed), and we are moving forward with Stage 2 (scheduling talks). I personally wasn’t privy to last year’s submissions, but this year we have well over 80 submissions covering a huge range of topics, which is brilliant.

One of the nice things this year is that we have submissions from other distributions and projects, which is great 🙂 The submissions from all parties cover a wide variety of topics, from very technical to very fun, and it isn’t going to be easy to select which ones to accept.

Thank you to all who submitted a proposal; we will let you know on 20th August whether you were successful.

Problems installing software in openSUSE? Simple solution!

August 11th, 2010 by

Some time ago I had a little problem with zypper when adding some packages. The same problem extended to YaST and, well, the end of the world.

Possibly there was an integrity problem between the updater and the RPM database, or whatever; the fact is that the error looked like this:

Total download size: 174.1 MiB. After the operation, an additional 565.0 KiB will be used.
Continue? [y/n/?] (y): y
Retrieving package graphviz-2.20.2-45.4.1.i586 (1/153), 868.0 KiB (2.2 MiB unpacked)
Retrieving delta: ./rpm/i586/graphviz-2.20.2-45.3_45.4.1.i586.delta.rpm, 30.0 KiB
Retrieving: graphviz-2.20.2-45.3_45.4.1.i586.delta.rpm [done (19.6 KiB/s)]
Applying delta: ./graphviz-2.20.2-45.3_45.4.1.i586.delta.rpm [done]
Installing graphviz-2.20.2-45.4.1 [error]
Installation of graphviz-2.20.2-45.4.1 failed:
(with --nodeps --force) Error: Subprocess failed. Error: RPM failed: error: db4 error(-30987) from dbcursor->c_get: DB_PAGE_NOTFOUND: Requested page not found
error: error(-30987) getting "" records from Requireversion index
error: db4 error(-30987) from dbcursor->c_get: DB_PAGE_NOTFOUND: Requested page not found
error: error(-30987) getting "" records from Requireversion index
error: db4 error(-30987) from dbcursor->c_get: DB_PAGE_NOTFOUND: Requested page not found

If you get an error like this one, the solution is extremely simple; just run on the command line:

sudo rpm --rebuilddb && sudo zypper clean -a && sudo zypper ref

This will rebuild the RPM database, clean the repository caches and then refresh the repos. 😉

Some things to do after an openSUSE Installation (Part 2)

August 9th, 2010 by

So, continuing my last post about things to do after an openSUSE installation, now it’s time for “Adding Games”. But first, some clarifications about two things:

One: You can add codecs and the other stuff with a 1-click install package, avoiding the use of zypper. I just showed the zypper method because I think it’s a bit shorter :-P, but that’s just a matter of taste. So, if you are running openSUSE 11.3 with KDE use this ymp, and if you are using GNOME use this other one. If you are on an older version of openSUSE, just go to this page and select your version.

Two: The broadcom-wl package is on Packman, so you have to add that repo (via YaST -> Repositories) before you can install that package.

With that out of the way, we can keep going.

Adding Games

So, the question: “Why aren’t a lot of games that are popular in the FOSS community included by default in openSUSE?” For example OpenArena, Battle for Wesnoth, aTanks, BlobWars, Crack Attack, LBreakout2, Torcs, SuperTux,… Well, because those games live in their own repo, called “Games”.

To enable this repo, go to “YaST -> Software Repositories”. This opens the repository management window. Push “Add”, then select “Specify URL”. Enter “Games” as the name and “http://download.opensuse.org/repositories/games/openSUSE_11.3/” as the URL, then “Next”. If it asks you anything, say yes (import the GPG signature and so on). Back in the main window, hit “Accept”.

In the URL, change the version number if needed, and done… You now have a LOT of games available via YaST 😀
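If you prefer the command line, the same repository can be added with zypper; a rough equivalent of the YaST steps above (adjust the version in the URL for your release):

```shell
# Add the Games repository with autorefresh enabled (-f), under the alias "Games":
sudo zypper ar -f http://download.opensuse.org/repositories/games/openSUSE_11.3/ Games

# Refresh it once so its packages show up (accept the GPG key when asked):
sudo zypper ref Games
```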

Next to come: Adding 3D Acceleration.

openSUSE LaunchParty in Mérida, Venezuela

August 8th, 2010 by

Well, finally and happily I was able to hold my launch party for 11.3 here in my city. It was a simple setup: a venue, with the help of a local FOSS academy that let me use its facilities; some media that I burned to give away; one of my laptops and my sister’s laptop set up as two machines for desktop demonstrations; and just the desire to spread the word of the Geeko 😀 …

After the event we had a “PizzaGeek” with some of the participants, to talk about a lot of geeky stuff.

I put up a full photo set on Flickr.

Latex editors and rubber

August 8th, 2010 by

If you are a frequent latex user, and especially if you are just starting off with it, you must have encountered situations where compiling the document correctly gets downright painful. Or found it irritating to google every time, or look up a cheat sheet [pdf], to insert a not-so-common symbol. Or you know about the excellent application kile, but as a GNOME/LXDE/Xfce user you did not want a zillion KDE libraries installed.

I have started maintaining three packages, namely Texmaker, TeXworks and Rubber, in the Publishing repository. These applications make working with and compiling latex documents user-friendly and painless.

Texmaker

This is a frontend for editing latex documents, much like kile (which is distributed with openSUSE 11.3 and prior), with several useful features:

  • integrated pdf viewer
  • user-friendly interface based on qt
  • wizards to generate code
  • integrated error and warning viewer
  • an integrated LaTeX to html conversion tool
  • based on qt with no dependence on kde libraries, which means somebody using a non-KDE desktop can install it without pulling in a big chunk of the KDE base (as a GNOME user, I find this to be a problem with kile), so it integrates well with non-KDE desktops too

Install on openSUSE

openSUSE 11.2 (from my home project, this requires libqt4 >= 4.6.1)

1click-installer for Texmaker

openSUSE 11.3

1click-installer for Texmaker

Factory

1click-installer for Texmaker

TeXworks

Also based on the qt toolkit, TeXworks is a latex frontend with an integrated viewer that supports source/preview synchronisation. This makes it possible to right-click in the embedded preview [pdf, ps, etc.] and jump to the corresponding line/paragraph in the latex source. I think, but am not sure, that TeXworks is currently the only Linux application that offers source/preview synchronisation.

Install on openSUSE

openSUSE 11.2

1click installer for texworks

openSUSE 11.3

1click installer for texworks

Factory

1click installer for texworks

Rubber

Rubber is a command-line application that automates compilation of latex documents: it gets cross-referencing, citations and so on right in one run, while the native texlive commands (latex/pdflatex) can take as many as four runs to do so. Rubber makes the process of compiling a source file into the final document completely automated, including processing bibliographic references and indices, as well as compilation or conversion of figures and several post-processing tasks.
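For illustration, a typical rubber invocation could look like this (the file name is made up; `--pdf` selects pdflatex and `--clean` removes the generated files):

```shell
# One command replaces the manual pdflatex/bibtex/pdflatex/pdflatex cycle;
# rubber reruns the tools until cross-references are stable:
rubber --pdf thesis.tex

# Remove the files generated by the compilation:
rubber --clean --pdf thesis.tex
```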

Install on openSUSE

openSUSE 11.2

1click installer for rubber

openSUSE 11.3

1click installer for rubber

Factory

1click installer for rubber

Here’s hoping latex users (esp. beginners) on openSUSE will find these applications useful.

Have a lot of fun.

Bye.

——-

Update: The command latexmk works similarly to rubber (i.e. it runs latex/pdflatex as many times as necessary to get the cross-referencing right), while kile/okular can be configured for source/document synchronisation similar to texworks, as pointed out in the comments.

Status openFATE Milestone

August 6th, 2010 by

Recently Henne Greenrock sent a status report about the Boosters stand-up meeting in which he said that nothing had happened in the openFATE sprint. Well, that’s only partly true, so it seems to be time to take a closer look and revitalize the openFATE project a bit.

What’s the matter with openFATE?

We got off to a good start with openFATE, involving everybody who is interested in product planning. However, after the screening team was formed, it turned out that some parts of the process are not yet working well. The biggest problem is still that the screening team members cannot move features from state UNCONFIRMED to NEW, which turned out to be crucial for a fluent process. So the Boosters picked up the task, since we think this is a huge blocker to working together effectively as a community.

The openFATE Screening Page lists a few more details about the openFATE screening team and the issues.

How are we going to solve the issues?

The screening team members need additional rights. We will create a user group “openFATE Screener” which gives its members additional rights in openFATE. For the time being, the group will be maintained within openFATE. Once we have connect.opensuse.org in place, we will use it to maintain the group settings. The important bit here is to give the screeners group the ability to maintain itself, i.e. add or remove members.

Being a screener will enable people to change the status of a feature from UNCONFIRMED to NEW. That is a responsible task, because once in state NEW, the feature goes through the whole mill of the process, including product and project management for SLE products. We have to make sure we have high-quality features here.

Furthermore, the screeners will be able to add info providers to features in case they know who can help, which is also a very sensitive task.

Another area to work on is notifications. People should be notified when they get added to a feature. We use Hermes for that (which is already working); the only issue is that people who get added to a feature but are not subscribed to the relevant notifications in Hermes do not receive anything. That is on purpose, as we want to leave the control with the users. The solution is to inform the screeners about the Hermes subscription status of the people added. If somebody is not subscribed, it is up to the screener to talk to them and convince them to join the openFATE game. I don’t think it makes sense to subscribe users silently, because that would take away control over their subscriptions, and the messages would end up ignored as spam.

Last but not least, we have to solve the “I am free, pick me!” problem, which is about features that went through the decision process and would fit nicely into a product, but have no developer implementing them yet. In the company process, a team lead assigns a developer from his team to the feature. In the community we need to change that, so that people can pick up features themselves. That has some implications for the attached internal process, so there are still some open questions. We have to investigate a bit more.

What has happened so far?

I was able to change the keeperproxy, a security-relevant proxy which filters the data that is exposed to the internet. Moreover, I was kung-fuing through the JavaScript in the openFATE web app, so that it is now possible to change the status if one is a screener. I also added a screener attribute to the Person model in the openFATE Rails app. Last but not least, I added the basic API functionality to get and set Hermes subscriptions.

What needs to be done?

Well, everything that is there is still rough and needs testing and polishing. Furthermore, I added the remaining tasks to Retrospectiva; please check there.

As usual we’re happy about your input on this!

Local caching for CIFS network file system – followup

August 5th, 2010 by

Here’s a follow-up to my previous post on Hackweek V: Local caching for CIFS network file system

Since the previous post, I worked on improving the patches that add local caching, fixed a few bugs, addressed review comments from the community and re-posted the patches. I also gave a talk about it at the SUSE Labs Conference 2010, which took place in Prague. The slides can be found here: FS-Cache aware CIFS.

This patchset was merged into the upstream Linux kernel yesterday (yay!), which means this feature will be available starting from kernel version 2.6.35-rc1.

The primary aim of caching data on the client side is to reduce network calls to the CIFS server whenever possible, thereby reducing both the server load and the network load. This indirectly improves the performance and scalability of the CIFS server and increases the possible number of clients per server. This feature could be useful in a number of scenarios:

– Render farms in the entertainment industry, used to distribute textures to individual rendering units
– Read-only multimedia workloads
– Accelerating distributed web servers: web server cluster nodes serve content from the cache
– /usr distributed by a network file system, to avoid spamming servers when there is a power outage
– Caching servers with SSDs re-exporting netfs data, where a persistent cache that remains across reboots is useful

However, be warned that local caching may not be suitable for all workloads, and a few workloads could suffer a slight performance hit (e.g. read-once workloads). So you need to consider your workload/scenario carefully before you start using local disk caching.

When I reposted this patchset, I was asked whether I had done any benchmarking and could share the performance numbers. Here are the results on a 100 Mb/s network:

Environment
————

I’m using my T60p laptop as the CIFS server (running Samba) and one of my test machines as the CIFS client, connected over an ethernet link with a reported speed of 1000 Mb/s. ethtool was used to throttle the speed to 100 Mb/s. The TCP bandwidth as seen by a pair of netcats between the client and the server is about 89.555 Mb/s.

Client has a 2.8 GHz Pentium D CPU with 2GB RAM
Server has a 2.33GHz Core2 CPU (T7600) with 2GB RAM

Test
—–
The benchmark involves pulling a 200 MB file over CIFS to the client, using cat redirected to /dev/zero, under `time'. The reported wall clock time was recorded.

First, the test was run on the server twice and the second result was recorded (noted as Server below, i.e. the time taken by the server when the file is in RAM).
Secondly, the client was rebooted and the test was run with caching disabled (noted as None below).
Next, the client was rebooted, the cache contents (if any) were erased with mkfs.ext3, and the test was run with cachefilesd running (noted as COLD below).
Next, the client was rebooted and the test was run with caching enabled, this time with a populated disk cache (noted as HOT below).
Finally, the test was run again without unmounting or rebooting, to ensure the pagecache remained valid (noted as PGCACHE below).
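A single run of this measurement can be sketched as follows (server, share and file names are made up; `fsc` is the CIFS mount option that enables FS-Cache, and cachefilesd must be running for the on-disk cache to be used):

```shell
# Mount the share with local caching enabled via the 'fsc' option:
sudo mount -t cifs //t60p/testshare /mnt/cifs -o fsc,guest

# Time a sequential read of the 200 MB test file; the data is simply
# discarded (the post redirects to /dev/zero, which has the same effect):
time cat /mnt/cifs/testfile-200mb > /dev/null
```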

The benchmark was repeated twice:

Cache (state)   Run #1     Run #2
=============   ========   ========
Server           0.104 s    0.107 s
None            26.042 s   26.576 s
COLD            26.703 s   26.787 s
HOT              5.115 s    5.147 s
PGCACHE          0.091 s    0.092 s

As can be seen, when the disk cache is hot, the performance is roughly 5x that of reading over the network. Note that the scalability improvement due to reduced network traffic cannot be seen here, as the test involves only a single client and server. Read performance with a larger number of clients would be more interesting, as the cache can positively impact scalability.

Bugs will not get fixed by themselves

August 5th, 2010 by

I received an email from a user who switched from openSUSE to Ubuntu because his wireless card did not work. It worked with openSUSE 11.2 initially, but after an online update it failed. He hoped that openSUSE 11.3 would work, tested it, it failed – and he gave up and wrote a frustrated email.

I was frustrated reading this since we should have been able to help this user if he contacted us in time.

Such a regression is bad, but if nobody reports it, it will not get fixed at all. The openSUSE project takes fixes from upstream projects and also adds fixes ourselves and sends them upstream. Those fixes work on the system of the developer – or the systems of the upstream developers – but nobody has access to every single piece of hardware that a chip driver supports, so regressions might happen. In the past I have seen that regressions reported with a pointer to the exact version that failed are often fixed quite fast.


Software search trick

August 4th, 2010 by

Do you use software.opensuse.org and get the error message “search limit reached” when searching for a generic term like “perl” or “kde”? Here is the solution:
The software search now also supports matching exact package names – just put your search string in double quotes! See for example perl or kde4.