How we run our OpenStack cloud https://lizards.opensuse.org/2017/01/23/how-we-run-our-openstack-cloud/ Mon, 23 Jan 2017 13:12:10 +0000

This post documents how we set up cloud.suse.de, one of our many internal SUSE OpenStack Cloud deployments for use by R&D.

In June 2016 we started the deployment with SOC6 (SUSE OpenStack Cloud 6) on 4 nodes: 1 controller and 3 compute nodes that also served ceph (distributed storage) on their 2nd HDDs. Since the nodes are from 2012, they only have 1 Gbit networking and spinning disks, so ceph only delivers ~50 MB/s, which is sufficient for many use cases.
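(A throughput figure like that is easy to reproduce yourself; one way, not necessarily how we measured it, is rados bench against a scratch pool. The pool name here is just an example:)

rados bench -p scratch 10 write --no-cleanup
rados bench -p scratch 10 seq
rados -p scratch cleanup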

We did not deploy that cloud with HA, even though our product supports it. The two main reasons for that are

  • that it will use up two or three nodes instead of one for controller services, which is significant if you start out with only 4 (and grow to 6)
  • that it increases the complexity of setup, operations and debugging and thus might lead to decreased availability of the cloud service

Then we have a limited supply of vlans: technically they are just numbers between 1 and 4095, but within SUSE they are allocated centrally so that networks can be switched together across locations. So we could not use vlan mode in neutron if we wanted to allow software defined networking (SDN). (We did not allow SDN in old.cloud.suse.de and I did not hear complaints, but now I see a lot of people using it.)
So we went with ovs+vxlan+dvr (Open vSwitch + Virtual eXtensible LAN + Distributed Virtual Router), because that allows VMs to remain reachable even while the controller node reboots.
But then I found that VMs cannot use DNS during that time, because distributed virtual DNS was not yet implemented, and ovs has some annoying bugs that are hard to debug and fix. So I built ugly workarounds that mostly hide^Wsolve the problems from our users’ point of view.
For the next cloud deployment, I will try linuxbridge+vlan or linuxbridge+vxlan mode instead.
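For reference, the ovs+vxlan+dvr combination boils down to only a few neutron settings; a minimal sketch, assuming standard neutron file locations and omitting everything else:

# /etc/neutron/neutron.conf (controller): make new routers distributed
router_distributed = True

# /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
tenant_network_types = vxlan
mechanism_drivers = openvswitch,l2population

# /etc/neutron/l3_agent.ini on network nodes: agent_mode = dvr_snat
# /etc/neutron/l3_agent.ini on compute nodes: agent_mode = dvr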
Uptime is pretty good, but it could be better with proper monitoring.

Because we needed to redeploy multiple times before we got all the details right, and in order to document the setup, we scripted most of the deployment with qa_crowbarsetup (which is part of our CI) plus extra files in https://github.com/SUSE-Cloud/automation/tree/production/scripts/productioncloud. The only parts not in there are the passwords.

We use proper SSL certs from our internal SUSE CA.
For that we needed to install that root CA on all involved nodes.

We use kvm, because it is the most advanced and stable of the supported hypervisors; Xen would be a possible 2nd choice. We use two custom kvm patches to fix nested virt on our G3 Opteron CPUs.
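(On AMD hardware like these Opterons, nested virt itself is toggled via the kvm_amd module parameter; the custom patches mentioned above fix bugs beyond this simple switch:)

cat /sys/module/kvm_amd/parameters/nested    # 1 = enabled
echo "options kvm-amd nested=1" > /etc/modprobe.d/50-kvm-nested.conf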

Overall we use 3 vlans: one each for the admin, public/floating, and sdn/storage networks.
We increased the default /24 IP ranges because we needed more IPs in the fixed and public/floating networks.

For authentication, we use our internal R&D LDAP server, but since it does not have information about users’ groups, I wrote a perl script that pulls that information from the Novell/innerweb LDAP server and exports it as json for use by the hybrid_json assignment backend I wrote.
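Neither the script nor the backend are reproduced here, but the JSON hand-off is conceptually as simple as this (the structure shown is an illustration, not the real schema):

{
  "groups": {
    "cloud-admins": ["alice", "bob"],
    "qa-team": ["carol", "dave"]
  }
}

A periodic job can then refresh the file, and the assignment backend merges it with the user data coming from LDAP.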

In addition, I wrote cloud-stats.sh to email weekly reports about utilization of the cloud, and another script that tells users which instances they still have but might have forgotten about.
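cloud-stats.sh itself is not reproduced here, but the core of such a report script is only a few lines; a minimal sketch (the recipient address and credentials file are made up):

#!/bin/sh
. /root/openrc    # admin credentials for the cloud
total=$(nova list --all-tenants --minimal | grep -c '^| [0-9a-f]')
active=$(nova list --all-tenants --status ACTIVE --minimal | grep -c '^| [0-9a-f]')
printf 'instances: %s total, %s active\n' "$total" "$active" |
  mail -s "weekly cloud report" cloud-admins@example.com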

On the cloud user side, we and other people use one or more of

  • salt-cloud
  • nova boot
  • salt-ssh
  • terraform
  • heat

to script instance setup and administration.
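For example, a scripted boot with the plain nova client typically looks like this (flavor, image, key name and network id are placeholders):

nova boot --flavor m1.small --image openSUSE-Leap-42.1 \
  --key-name mykey --nic net-id=$NETWORK_UUID my-instance
nova floating-ip-associate my-instance 10.160.67.1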

Overall we are now hosting 70 VM instances on 5 compute nodes that together have cost us below 20000€.

Basic Nextcloud installation on openSUSE Leap https://lizards.opensuse.org/2016/10/28/nextcloud-installation-on-opensuse-leap/ Fri, 28 Oct 2016 15:09:09 +0000

The official documentation has full tutorials for RHEL 6/CentOS 6 and RHEL 7/CentOS 7, while the main documentation covers Ubuntu 14.04 LTS.

openSUSE already has the Nextcloud client packaged in Tumbleweed, and the server is in the PHP extra repo! Personally, I prefer to install everything from official repositories, so when an update is available, I get it without a glitch. This tutorial describes how to install Nextcloud using the command line, following the official documentation for the Ubuntu 14.04 LTS installation.

Why choose openSUSE Leap? openSUSE Leap is a brand new way of building openSUSE and a new type of hybrid Linux distribution. Leap uses source from SUSE Linux Enterprise (SLE), which gives Leap a level of stability unmatched by other Linux distributions, and combines that with community developments to give users, developers and sysadmins the best stable Linux experience available. Contributor and enterprise efforts for Leap bridge the gap between the matured packages and the newer packages found in openSUSE’s other distribution, Tumbleweed. You can download openSUSE Leap from https://software.opensuse.org/.

Make sure that ssh (sshd) is enabled, and that the firewall is either disabled or has exceptions for the apache and ssh services. You can also set a static IP (check out how).
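(On Leap 42.x that boils down to something like the following, assuming the stock SuSEfirewall2 service definitions shipped with the packages:)

systemctl enable sshd.service && systemctl start sshd.service
# in /etc/sysconfig/SuSEfirewall2 set:
#   FW_CONFIGURATIONS_EXT="apache2 sshd"
systemctl restart SuSEfirewall2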

First of all, let’s install the required and recommended modules for a typical Nextcloud installation, using Apache and MariaDB, by issuing the following commands in a terminal:

zypper in apache2 mariadb apache2-mod_php5 php5-gd php5-json php5-fpm php5-mysql php5-curl php5-intl php5-mcrypt php5-zip php5-mbstring php5-zlib

Create the database (optional, since the installer can create everything automatically)
Next, create the database. First of all, start the service.

systemctl start mysql.service
systemctl enable mysql.service

The root password is empty by default. That means you can just press enter and use the root user. That’s not safe at all, so set a password using the command:

mysqladmin -u root password newpass

Where newpass is the password you want.

Now that you have set the root password, create the database.

mysql -u root -p
# you'll be asked for your root password
CREATE DATABASE nextcloudb;
GRANT ALL ON nextcloudb.* TO ncuser@localhost IDENTIFIED BY 'dbpass';

Database user: ncuser
Database name: nextcloudb
Database user password: dbpass

You can change the above information accordingly.

PHP changes
Now you should edit the php.ini file.

nano /etc/php5/apache2/php.ini

change the values

post_max_size = 50G
upload_max_filesize = 25G
max_file_uploads = 200
max_input_time = 3600
max_execution_time = 3600
session.gc_maxlifetime = 3600
memory_limit = 512M

and finally make sure the gd and mbstring extensions are enabled. The official guide shows the Windows .dll names; on Linux the corresponding php.ini lines use .so modules (on openSUSE, the php5-gd and php5-mbstring packages normally enable these for you via /etc/php5/conf.d/):

extension=gd.so
extension=mbstring.so

Apache Configuration
You should enable some modules. Some might be already enabled.

a2enmod php5
a2enmod rewrite
a2enmod headers
a2enmod env
a2enmod dir
a2enmod mime

Now start the apache service.

systemctl start apache2.service
systemctl enable apache2.service

Install Nextcloud from source code (option 1, preferred)
Before the installation, create the data folder and give it the right permissions (preferably outside the server directory, for security reasons). I created a directory under /mnt. You can mount a USB disk, add it to fstab (see the example after the commands below) and save your data there. The commands are:

mkdir /mnt/nextcloud_data
chmod -R 0770 /mnt/nextcloud_data
chown wwwrun /mnt/nextcloud_data
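If you go the USB-disk route, the /etc/fstab entry could look like this (the device name and filesystem are assumptions):

/dev/sdb1  /mnt/nextcloud_data  ext4  defaults  0  2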

Now download Nextcloud (find the latest version at https://nextcloud.com/install/). Then unzip and move the folder to the server directory.

wget https://download.nextcloud.com/server/releases/nextcloud-10.0.0.zip
unzip nextcloud-10.0.0.zip
cp -r nextcloud /srv/www/htdocs
chown -R wwwrun /srv/www/htdocs/nextcloud/

Make sure that everything is OK, then delete the nextcloud folder and nextcloud-10.0.0.zip from the directory where you downloaded them.

Now open your browser at http://serverIP/nextcloud (your server’s IP followed by /nextcloud).

Set your administrator username and password.
Your data directory is: /mnt/nextcloud_data
Regarding database, use the following.
Database user: ncuser
Database name: nextcloudb
Database user password: dbpass

Wait until the installation finishes.

Install Nextcloud using the repository (option 2)

If you want automatic updates of your Nextcloud instance when there’s a new version, you can add the repository. There are packages available for openSUSE Leap 42.1, 42.2 and Tumbleweed (we recommend openSUSE Leap 42.1). You need administrator rights to install Nextcloud on your server.

1. Add the Nextcloud repository.
openSUSE_Leap_42.2

zypper ar http://download.opensuse.org/repositories/server:/php:/applications/openSUSE_Leap_42.2/ Nextcloud

openSUSE_Leap_42.1

zypper ar http://download.opensuse.org/repositories/server:/php:/applications/openSUSE_Leap_42.1/ Nextcloud

openSUSE_Tumbleweed

zypper ar http://download.opensuse.org/repositories/server:/php:/applications/openSUSE_Tumbleweed/ Nextcloud

2. Refresh your repositories

zypper refresh

3. Install Nextcloud (be careful: you have to install LAMP first and change the permissions of the files, as described above).

zypper install nextcloud

4. Open http://serverIP/nextcloud to set up your instance (admin user account). Be careful to create a separate folder with the proper permissions for your data (as described above).

5. Log in and use Nextcloud.

For more information about Nextcloud on openSUSE, check the openSUSE wiki.

For any changes, check the github page.

For more configuration, you can follow the official documentation. That was the basic installation on openSUSE Leap.

Run copy.com on your openSUSE Raspberry Pi https://lizards.opensuse.org/2015/05/23/run-copy-com-on-your-opensuse-raspberry-pi/ Sat, 23 May 2015 09:34:10 +0000

A good question is why you would want to sync a folder on your Raspberry Pi with a cloud service. The answer is a little complicated: it’s a subproject that I’m working on right now. I want to upload some data I’ll create on a Raspberry Pi (with a limited-size SD card). The uploaded data will be saved on another computer, and the SD card will be cleared again to create new data.

The cloud service I prefer is always ownCloud.
Here, though, I used http://www.copy.com. It provides 15GB of disk space, but you can increase it.

First of all download the file

$ wget http://copy.com/install/linux/Copy.tgz

Then extract it

$ tar xzvf Copy* copy/armv6h/

This will create a folder called “copy,” and in it there will be three sub-folders: “armv6h,” “x86,” and “x86_64.” The first one contains the Copy client binaries for the Raspberry Pi, the second contains the Copy client for 32-bit Linux on a PC, and the third the same client but for 64-bit Linux PCs.

$ cd copy/armv6h

Now there are 2 ways of using copy: the CopyCmd tool and CopyConsole.

CopyCmd

List of the directories

$ ./CopyCmd Cloud -username=user@gmail.com -password='mypass' ls

Upload all content of local /home/user/directory/ to remote /directory

$ ./CopyCmd Cloud -username=user@gmail.com -password='mypass' put -r /home/user/directory/ /directory

CopyConsole

The CopyConsole tool keeps a folder on your Raspberry Pi synchronized with the data on Copy.com.
The sync app runs in the background and is started like this:

$ ./CopyConsole -daemon -username=user@gmail.com -password='mypass' -root=/home/user/directory

This will sync the local /home/user/directory to copy.com. If you delete something from there, it will be deleted from the local folder as well.

Remember to run this command every time you restart your Pi. It’s better to run it manually, because the command line contains your personal username and password (unless you created an account just for your Raspberry Pi).
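If you decide you do want it started at boot anyway, a small systemd unit does the job. This is only a sketch: the paths and account are assumptions, and it presumes CopyConsole stays in the foreground when started without -daemon:

# /etc/systemd/system/copyconsole.service
[Unit]
Description=copy.com sync daemon
After=network.target

[Service]
# consider a dedicated account; note the password sits in plain text here
User=user
ExecStart=/home/user/copy/armv6h/CopyConsole -username=user@gmail.com -password=mypass -root=/home/user/directory
Restart=on-failure

[Install]
WantedBy=multi-user.target

Enable it with systemctl enable copyconsole.service.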

OpenStack Infra/QA Meetup https://lizards.opensuse.org/2014/07/23/openstack-infraqa-meetup/ Wed, 23 Jul 2014 13:54:38 +0000

Last week, around 30 people from around the world met in Darmstadt, Germany to discuss various things about OpenStack and its automatic testing mechanisms (CI).
The meeting was well-organized by Marc Koderer from Deutsche Telekom.
We were shown plans for what the Telekom intends to do with virtualization in general and OpenStack in particular; the most interesting to me was to run clouds in dozens of datacenters across Germany, but with a single API for users to access them all.
There were some introductory sessions about the use of git review and gerrit that mostly covered things I (and, I guess, the majority of the others) had already learned over the years. They included some newer parts, such as tracking “specs” – specifications (.rst files) – in gerrit with proper review by the core reviewers, so that proper processes can already be applied in the design phase to ensure a project is moving in the right direction.

On the second day we learned that the infra team manages its servers with puppet, and about jenkins-job-builder (jjb), which creates around 4000 jobs from yaml templates. We also learned about nodepool, which keeps some VMs ready so that jobs in need will not have to wait for them to boot; 180-800 ready instances is quite an impressive number.
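For a feeling of what jjb consumes, a trivial job definition is just a few lines of yaml (name and shell step invented for illustration):

- job:
    name: example-check
    builders:
      - shell: "echo run the tests here"

The jenkins-jobs tool then expands such definitions (usually parameterized via templates) into the actual Jenkins job XML.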
And then we spent three days on discussing and hacking things, the topics and outcomes of which you can find in the etherpad linked from the wiki page.
I got my first infra patch merged and a SUSE Cloud CI account set up, so that in the future we can test devstack+tempest on openSUSE and have it comment in Gerrit. And maybe some day we can even have a test that deploys crowbar+openstack from git (including the patch from an open review) to provide useful feedback, but for that we might first want to move crowbar (which consists of dozens of repos – one for each module) to stackforge – the openstack-provided Gerrit hosting.

see also: pleia2’s post

Overall for me it was a nice experience to work together with all these smart people, and we certainly had a lot of fun.

And done…. new images available https://lizards.opensuse.org/2014/06/12/and-done-new-images-available/ Thu, 12 Jun 2014 22:27:08 +0000

Hi,

It took a bit, but I am happy to report that all openSUSE 13.1 images in the Amazon EC2, Google Compute Engine and Microsoft Azure public cloud environments have been refreshed. After the latest round of GnuTLS and OpenSSL fixes, the security team was, as usual, extremely efficient in providing fixed packages, and these have been available in all cloud images via zypper up since last Friday. As of today, the base images available in the public cloud frameworks contain the fixes by default.

In Amazon the new images are as follows:

  • ap-northeast-1: ami-79296078
  • ap-southeast-1: ami-84a7fbd6
  • ap-southeast-2: ami-41cbae7b
  • eu-west-1: ami-b56aa4c2
  • sa-east-1: ami-bffb54a2
  • us-east-1: ami-5e708d36
  • us-west-1: ami-16f2f553
  • us-west-2: ami-b7097487

In Google Compute Engine the image name is: opensuse-13-1-v20140609

The old image (opensuse131-v20140417) has been deprecated. To access the new image you will need to add --image=opensuse-cloud/global/images/opensuse-13-1-v20140609, as the openSUSE images are not yet fully integrated into the GCE framework; I am still working on that part with Google. This image also has upgrades to the google-cloud-sdk package and enables the bq (big-query) command. The gcloud command is still a bit rough around the edges, but the gcutil command should work as expected. Eventually gcutil is going to be deprecated by Google, thus there is work to be done to fix the integration issues with the gcloud command. If anyone has time to work on that, please send a submit request to the google-cloud-sdk package in the Cloud:Tools project in OBS. Unfortunately Google still hasn’t posted the source anywhere for open collaboration 🙁 . They’ll get there eventually. I will try and push any changes upstream.
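Concretely, booting the image that way should look roughly like this (project and zone are placeholders):

gcutil --project=my-project addinstance test-instance \
  --zone=us-central1-a \
  --image=opensuse-cloud/global/images/opensuse-13-1-v20140609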

In Azure just search for openSUSE in the Gallery; it’s more of a point-and-click thing 😉

And that’s a wrap. I am not certain we will be able to improve on the speed of such fire-drill updates, but we’ll try to keep refreshing images as quickly as time allows when critical vulnerabilities in core libraries get exposed.

Have a lot of fun….

Cloudy with a touch of Green https://lizards.opensuse.org/2014/03/19/cloudy-with-a-touch-of-green/ Wed, 19 Mar 2014 19:08:27 +0000

Finally there is some news regarding our public cloud presence and openSUSE 13.1. We now have openSUSE 13.1 images published in Amazon EC2, Google Compute Engine, and Windows Azure.

Well, that’s the announcement, but that would make for a rather short blog post. Thus, let me talk a bit about how this all works, and speculate a bit about why we’ve not been all that good at getting stuff out into the public cloud.

Let me start with the speculation part, i.e. hindrances in getting openSUSE images published. In general to get anything into a public cloud one has to have an account. This implies that you hand over your credit card number to the cloud provider and they charge you for the resources you use. Resources in the public cloud are anything and everything that has something to do with data. Compute resources, i.e. the size of an instance w.r.t. memory and number of CPUs are priced differently. Sending data across the network to and from your instances incurs network charges and of course storing stuff in the cloud is not free either. Thus, while anyone can put an image into the cloud and publish it, this service costs the person money, granted not necessarily a lot, but it is a monthly recurring out of pocket expense.

Then there is always the “official” apprehension: if person X publishes an openSUSE image from her/his account, what makes it “official”? Well, first we have the problem that the “official” stamp is really just an imaginary hurdle. An image that gets published by me is no more or less “official” than any other image. I am, after all, not the release manager, nor do I have any fingers in the openSUSE release in any way. I do have access to the SUSE accounts and can publish from there, and I guess that makes the images “official”. But please do not get any ideas about “official” images; they do not exist.

Last but not least, there is a technical hurdle. Building images in OBS is not necessarily for the faint of heart, and additionally there is a bunch of other stuff that goes along with cloud images. Once you have an image, it still has to get into the cloud of your choice, which requires tools etc.

That’s enough speculation as to why it may have taken us a bit longer than others; just for the record, we did have openSUSE 12.1 and openSUSE 12.2 images in Amazon. With that, let’s talk about what is going on.

We have a project in OBS now (actually it has been there for a while), Cloud:Images, that is intended to be used to build openSUSE cloud images. The GCE image and the Amazon image that are currently public both came from this project. The Azure image that is currently public was built with SUSE Studio, but will at some point also stem from the Cloud:Images OBS project.

Each cloud framework has its own set of tools. The tools are separated into two categories: initialization tools and command line tools. The initialization tools reside inside the image and are generally services that interact with the cloud framework; for example, cloud-init is such an initialization tool and is used in OpenStack images, Amazon images, and Windows Azure images. The command line tools let you interact with the cloud framework, to start and stop instances for example. All these tools get built in the Cloud:Tools project in OBS. From there you can install the command line tools on your system and interact with the cloud framework they support. I am also trying to get all these tools into openSUSE:Factory to make things a bit easier for image building and cloud interaction come 13.2.
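As an aside, the user data that cloud-init processes is typically a small yaml document; a trivial example (hostname and commands invented for illustration):

#cloud-config
hostname: demo
packages:
  - git
runcmd:
  - touch /root/configured-by-cloud-init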

With this, let’s take a brief closer look at each framework, in alphabetical order; no favoritism here.

Amazon EC2

An openSUSE 13.1 image is available in all regions, the AMI (Amazon Machine Image) IDs are as follows:

sa-east-1 => ami-2101a23c
ap-northeast-1 => ami-bde999bc
ap-southeast-2 => ami-b165fc8b
ap-southeast-1 => ami-e2e7b6b0
eu-west-1 => ami-7110ec06
us-west-1 => ami-44ae9101
us-west-2 => ami-f0402ec0
us-east-1 => ami-ff0e0696

These images use cloud-init as opposed to the “suse-ami-tools” that was used previously and is no longer available in OBS. The cloud-init package is developed on Launchpad and was started by the Canonical folks. Unfortunately, to contribute you have to sign the Canonical Contributor Agreement (CCA). If you do not want to sign it, or cannot for company reasons, you can still send stuff to the package and I’ll try to get it integrated upstream. For interaction with Amazon we have the aws-cli package. The “aws” command line client supersedes all the ec2-*-tools and is an integrated package that can interact with all Amazon services, not just EC2. It is well documented, fully open source, and hosted on github. The aws-cli package replaces the previously maintained ec2-api-tools package, which I have removed from OBS.

Google Compute Engine

In GCE things work by name; the openSUSE 13.1 image is named opensuse131-20140227 and is available in all regions. Google images use a number of tools for initialization: google-daemon and google-startup-scripts. All the Google-specific tools are in the Cloud:Tools project. Interaction with GCE is handled with two commands, gcutil and gsutil, both provided by the google-cloud-sdk package. As the name suggests, google-cloud-sdk has the goal of unifying the various Google tools (same basic idea as aws-cli), and Google is working on the unification. Unfortunately they have decided to do this on their own, and there is no public project for google-cloud-sdk, which makes contributing a bit difficult, to say the least. The gsutil code is hosted on github, thus at least contributing to gsutil is straightforward. Both utilities, gsutil for storage and gcutil for interacting with GCE, are well documented.
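Typical gsutil usage, for example, is as simple as this (the bucket name is a placeholder):

gsutil mb gs://my-example-bucket
gsutil cp backup.tar.gz gs://my-example-bucket/
gsutil ls gs://my-example-bucket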

In GCE we were also able to stand up openSUSE mirrors. These have been integrated into our mirrorbrain infrastructure and are already being used quite heavily. The infrastructure team is taking care of the monitoring and maintenance, and that deserves a big THANK YOU from my side. The nice thing about hosting the mirrors in GCE is that when you run an openSUSE instance in GCE you will not have to pay network charges to pull your updated packages, and things are really fast, as the update server is located in the same data center as your instance.

Windows Azure

As mentioned previously, the current image we have in Azure is based on a build from SUSE Studio. It does not yet contain cloud-init and only has WALinuxAgent integrated. This implies that processing of user data is not possible with this image; user data processing requires cloud-init, and I just put the finishing touches on cloud-init this week. Anyway, the image in Azure works just fine, but I have no timeline for when we might replace it with an image that contains cloud-init in addition to WALinuxAgent.

Interacting with Azure is a bit more cumbersome than with the other cloud frameworks. Well, let me qualify that: it is if you want packages. The Azure command line tools are implemented in nodejs and are distributed via the npm nodejs package system; thus, you can use npm to install everything you need. The nodejs implementation presents a bit of a problem in that we hardly have a nodejs infrastructure in the project. I have started packaging the dependencies, but there is a large number of them, so this will take a while. Who would ever implement….. but that’s a different topic.

That’s where we are today, and there is plenty of work left to do. For example, we should unify the “generic” OpenStack image in Cloud:Images with the HP-specific one (the HP cloud is based on OpenStack) and also get an openSUSE image published in the HP cloud. There are tons of nodejs modules left to package to support the azure-cli tool. It would be great if we could have openSUSE mirrors in EC2 and Azure to avoid network charges for those using openSUSE images in those clouds. This requires discussions with Amazon and Microsoft; basically, we need to be able to run those services for free, which implies that both would become sponsors of our project, just like Google has become a sponsor by letting us run the openSUSE mirrors in GCE.

So if you are interested in cloud and public cloud stuff, get involved; there is plenty of work and lots of opportunities. If you just want to use the images in the public cloud, go ahead, that’s why they are there. If you want to build on the images we have in OBS and customize them in your own project, feel free to use them as you see fit.

Goodbye EC2 Tools Long Live AWS Tools https://lizards.opensuse.org/2014/02/26/goodbye-ec2-tools-long-live-aws-tools/ Wed, 26 Feb 2014 20:06:08 +0000

For quite some time we had a package named ec2-api-tools in the Cloud:EC2 project, and I suspect many who work with EC2 had found the package and were using the ec2-* commands to manage stuff in EC2. Along with the ec2-api-tools, Amazon maintained separate ec2-* tool sets for various services. Keeping up with the armada of Amazon developers is not easy, and thus the other ec2-* tool sets never got packaged.

Now a new integrated set of tools is available, invoked with the “aws” command and provided by the aws-cli package. The package is available from the Cloud:Tools project, and a submit request to Factory is pending. The new package does not obsolete the ec2-api-tools package, as there is no issue with having both packages installed. However, I did take the liberty of removing the ec2-api-tools package from the Cloud:EC2 project, as it would no longer receive updates considering that we have a nice new tool that unifies all Amazon services. The documentation for the new command is available online.
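The transition is mostly mechanical; for example, where ec2-describe-instances was used before, one now runs:

aws ec2 describe-instances --region us-east-1
aws ec2 describe-images --owners self --region us-east-1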

The aws code is hosted on github, and thus contributing fixes is easy; that is another big plus over the ec2-* tool sets.

Yes, and of course we need to get openSUSE 13.1 into EC2, and I am working on that, stay tuned….

Elfcloud.fi a small cloud storage that could https://lizards.opensuse.org/2014/02/13/elfcloud-fi-a-small-cloud-storage-that-could/ Thu, 13 Feb 2014 12:08:17 +0000

I’ve been seeking the cloud storage of my life for a long time now. My needs are not much (though most of the time they turn out to be too big, as I have learned): just space and, if possible, a Linux/Mac OS X FUSE client for using the service.
Whether the service is open source or not isn’t such a big thing in this case. How the data is stored (encrypted or not), and how I can get it out of there if I need to, is what I treasure most.
I have tested SpiderOak, Wuala, Dropbox and box.net, but none of them fits my needs perfectly. As I want to use these services with Linux, all of them have Linux clients and most of them have a FUSE filesystem. Lately I have been using Wuala, but its problem is that the FUSE stuff is written in Java (it works just fine under openSUSE!). With the GUI it’s clean and, as said, works very well; but if you don’t want to use the GUI, you are in a little bit of trouble. It’s supported, but I haven’t got it working. Xvfb comes to the rescue, but still, that’s not really a solution!
When I heard about Elfcloud.fi, I thought: okay, we have some storage provider in Finland, no big deal. Then I popped over to their web site and noticed that they are a really open source friendly company. They have a Python library on Github (Apache Version 2.0 licensed) and full API documentation, which is dead-simple JSON stuff; their pricing scheme ain’t that bad either. I have to mention that this is a hardcore crypto cloud service: if you lose your key, you can wave goodbye to your data. Best of all, they are going to have a marvelous FUSE implementation, which I’m writing currently. (They also have a C++ library available on request, Apache Version 2.0 licensed of course.)
So if you want to keep your data in Europe, stored in the same country that Google and Microsoft also trust with their datacenters, you can check out elfcloud.fi. It’s not for everyone, for sure, but for those who need a place to store data without hassle, this can be the stuff for you.

You can download python API RPM from here: http://download.opensuse.org/repositories/home:/illuusio:/elfcloud/

another way to access a cloud VM’s VNC console https://lizards.opensuse.org/2014/02/08/another-way-to-access-a-cloud-vms-vnc-console/ Sat, 08 Feb 2014 08:14:58 +0000

If you have used a cloud based on OpenStack, you will have seen the dashboard, including web-based VNC access using noVNC + WebSockets.
However, it was not possible to access this VNC directly (e.g. with my favourite gvncviewer from the gtk-vnc-tools package), because the actual compute nodes are hidden and accessing them would circumvent authentication, too.

I want this for the option to add an OpenStack backend to openQA, my OS-autotesting framework, which emulates a user by using a few primitives: grabbing screenshots and typing keys (can be done through VNC), powering up a machine (=nova boot), inserting/ejecting an installation medium (=nova volume-attach / volume-detach).

To allow for this, I wrote a small perl script that translates a TCP connection into a WebSocket connection.
It is installed like this:
git clone https://github.com/bmwiedemann/connectionproxy.git
sudo /sbin/OneClickInstallUI http://multiymp.zq1.de/perl-Protocol-WebSocket?base=http://download.opensuse.org/repositories/devel:languages:perl

and is used like this
nova get-vnc-console $YOURINSTANCE novnc
perl wsconnectionproxy.pl --port 5942 --to http://cloud.example.com:6080/vnc_auto.html?token=73a3e035-cc28-49b4-9013-a9692671788e
gvncviewer localhost:42

I hope this neat code will be useful for other people and tasks as well and wish you a lot of fun with it.

Some technical details:

  • The code is able to handle multiple connections in a single thread using select.
  • HTTPS is not supported in the code, but likely could be done with stunnel.
  • WebSocket-code was written in 3h.
  • noVNC tokens expire after a few minutes.
Hongkong OpenStack Design Summit https://lizards.opensuse.org/2013/11/13/hongkong-openstack-design-summit/ Wed, 13 Nov 2013 13:50:23 +0000

So last week many OpenStack (cloud software) developers met in Hong Kong’s world expo halls to discuss future development and show off what is done already.

Overall, I heard there were 3000 attendees, with 800 or so being developers. That sounds like a large number of people, but luckily everything felt well-organized and the rooms were always big enough to have seats for everyone interested.

The design sessions were usually pretty low-level and focused on one component, so it was not easy for me to make useful contributions there. The sessions about read-only API access (e.g. for helpdesk workers and monitoring) and about HA were the most useful to me.

In the breakout rooms there were interesting sessions by many large OpenStack users (CERN, Ebay, Paypal, Dreamhost, Rackspace) giving valuable insights into what people expect from and do with a cloud. Many of them are using custom-built parts, because plain OpenStack is still not complete enough to run a cloud. SUSE Cloud ships with some of those missing parts (e.g. deployment and configuration management), but most organisations seem to run their own at the moment.

Cloudbase was there telling us about their Hyper-V support, which we integrated in SUSE Cloud.
Apart from the 6 SUSE Cloud developers, there were several local (and one Australian) SUSE people manning the booth.

Overall it was quite an experience to be there (in such an exotic and yet nice place) and to listen and talk to so many different people from very different backgrounds.
