Archive for the ‘Ubuntu’ Category

A month in OpenStack Diablo: the diablo-1 milestone

June 2, 2011

Back at the OpenStack Design Summit in Santa Clara, we decided to switch from a 3-month cycle to a 6-month coordinated release cycle, with milestones delivered more frequently along the way.

Lately we have been busy adapting the release processes to match the delivery of the first milestones. Swift 1.4.0 was released last Tuesday, and today sees the release of the diablo-1 milestone for Nova and Glance.

What should you expect from diablo-1, just 4 weeks after the design summit? In this short timeframe, lots of features have been worked on, and the developers managed to land quite a few of them in time for diablo-1.

Glance’s API was improved to support filtering of /images and /images/detail results, as well as limiting and paging of results, which made API versioning support necessary. Glance also grew a new disk format (“iso”) that should ultimately allow booting ISO images directly in Nova.
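
As a rough illustration (assuming a Glance server on its default 9292 port and the new versioned URLs), paging through image listings could look something like this:

$ curl "http://localhost:9292/v1/images?limit=10"
$ curl "http://localhost:9292/v1/images/detail?limit=10&marker=<last-image-id>"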

On Nova’s side, the most notable addition is support for snapshotting and cloning volumes with the EC2 API. The XenServer plugin now supports Open vSwitch, and pause and suspend capabilities have been added to the KVM hypervisor.
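
With euca2ools, that should translate into commands along these lines (the volume and snapshot IDs below are made up, and “nova” is the default availability zone):

$ euca-create-snapshot vol-00000001
$ euca-create-volume --snapshot snap-00000001 -z nova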

Now keep your seatbelt fastened, because diablo-2 is set to release on June 30th.

OpenStack Nova: Main themes for Diablo

May 20, 2011

A few weeks after the OpenStack Design Summit in Santa Clara, we are starting to get a better picture of what should be in the next version of OpenStack Nova, codenamed Diablo, scheduled for release on September 22.

One big priority of this release is to separate the code for the network and volume services, and to refactor the nova-network code around a clear internal API. This will make it possible to plug in separate network and volume service providers, and paves the way for integration with future OpenStack projects like Quantum/Melange/Donabe and LunR. In preparation for this, we’ll push changes to rely more on the queue (and less on the database) to pass information between components. In the same area, we need some more changes to support multiple NICs, and we should also provide a client OpenStack API for interacting directly with volumes.

A second theme of the Diablo release is the new distributed scheduler, which should be able to schedule across zones while taking capabilities into account. This will require changes in the way we reference instances, as well as some changes for EC2 API compatibility.

On the API side, we should finalize OpenStack API 1.1 support, including work on floating IPs and shared IP groups. For administrators, instance migration and account administration actions should be added. We’ll also ensure good AWS API compatibility and validation.

Support for snapshotting, cloning and booting from volumes should land early in this cycle, as well as new ways of communicating configuration data between host and guest. We also need to integrate AuthN/AuthZ with the new common Keystone authentication system. Lots of other features are planned (and more might be added before the end); you can check out the blueprints plan for more detail.

Last but not least, on the QA side, we should have continuous automated testing across a range of reference architectures, and increase our unit test and smoke test coverage, among other efforts to build in quality.

The first milestone for this cycle, diablo-1, should be released on June 2nd.

OpenStack @ Ubuntu Developer Summit

May 18, 2011

Last week I attended the Ubuntu Developer Summit for Oneiric in Budapest. This was the first time I attended UDS as an upstream representative rather than as a Canonical employee. I very much enjoyed it: not being a track lead or a busy technical lead actually gives you desirable flexibility in your agenda 🙂

First of all, a quick comment on the big announcement of the week, which the Twittersphere is not done retweeting yet: “Ubuntu switching from Eucalyptus to OpenStack”. I think it would be more accurate to say that Ubuntu chose to use OpenStack as its default cloud stack for future versions. Comparing Eucalyptus and OpenStack is like comparing apples to apple trees: OpenStack provides several cloud infrastructure pieces (of which only OpenStack Compute -Nova- covers the same space as Eucalyptus). I suspect the wide scope of the project played a role in OpenStack being selected as the default stack for the future. Eucalyptus and OpenStack Nova should both be present as deployment options from 11.10 on.

On the UDS format itself, I’d say that the “one blueprint = one hour” format does not scale that well. The number of hours in the week is fixed, so when the project grows you end up having too many sessions going on at the same time. Lots of blueprints do not require an hour of discussion, but rather a quick presentation of the plan, feedback from interested parties and Q&A. That’s what we do for our own Design Summits, though I’ll admit it makes scheduling a bit more complex. On the good side, having the floor plan inside our UDS badges was a really good idea, especially with confusing room names 🙂

The Launchpad and bzr folks were very present during the week, attentive and responsive to the wishes of upstream projects. They have great improvements and features coming up, including finer-grained bugmail and dramatic speed improvements in bzr’s handling of large repositories.

Last week also saw the rise of creampiesourcing: motivating groups of developers through bets (“if the number of critical bugs for Launchpad goes down to 0 by June 27, I’ll take a cream pie in the face”). It seems to work better than karma points.

Finally, Rackspace Hosting co-sponsored the “meet and greet” event on Monday night, promoting OpenStack. I think offering cool T-shirts, like we did at the previous UDS in Orlando, was more effective at spreading the word and making the project “visible” over time: in Budapest you could see a lot of people wearing the OpenStack T-shirts we handed out back then!

Coming up in OpenStack Cactus…

March 16, 2011

In a bit more than a week, we will hit FeatureFreeze for the OpenStack “Cactus” cycle, so we are starting to have a good idea of which new features will make it. The Cactus cycle’s focus was on stability, so there are fewer new features compared to Bexar, but the developers still achieved a lot in a couple of months…

Swift (OpenStack object storage)

The Swift team really focused on stability and performance improvements this cycle. I will just single out the refactoring of the proxy to make backend requests concurrent, and the improvements to sqlite3 indexing, as good examples of this effort.

Glance (OpenStack image registry and delivery service)

Bexar saw the first release of Glance, and in Cactus it was vastly improved to match the standards we have for the rest of OpenStack: logging, configuration and options parsing, use of paste.deploy, non-static versioning, database migrations… New features include a CLI tool and a new method for clients to verify images. Glance developers might also sneak in authentication middleware and support for HTTPS connections!
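
For a taste of the new CLI tool, listing the registered images and inspecting one should look roughly like this (the exact subcommands may still change before release):

$ glance index
$ glance show 1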

Nova (OpenStack compute)

A lot of the feature work in Nova for Cactus revolved around the OpenStack API 1.1 and exposing features through XenServer (migration, resize, rescue mode, IPv6, file and network injection…). We should also have the long-awaited live migration feature (for KVM), support for LXC containers, VHD images, multiple NICs, dynamically-configured instance flavors and volume storage on HP/LeftHand SANs. XenAPI should get support for the VLAN network manager and network injection. We hope support for the VMware vSphere hypervisor will make it.

The rest of the Nova team concentrated on testing, bugfixing (115 bugfixes already committed to Cactus!) and producing a coherent release, as evidenced by the work on adding the missing IPv6 support to the FlatManager network model. I should also mention that the groundwork for multi-tenant accounting and multiple clusters in a region landed in Cactus as well.

Across the three projects’ branches, we had more than 2500 commits by more than 75 developers last month. Not too bad for a project that is less than a year old… We’ll see the result of this work on Cactus release day, scheduled for April 14.

Categories: Cloud, Openstack, Ubuntu Server

Upstream projects vs. Distributions

February 28, 2011

You can broadly split open source projects into two categories. Upstream projects develop and publish source code for various applications and features. Downstream projects are consumers of this source code. The most common type of downstream project is the distribution, which releases ready-to-use binary packages of upstream applications, makes sure they integrate well with the rest of the system, and releases security and bugfix updates according to its maintenance policy.

The relationship between upstream projects and distributions has always been a bit difficult, because their roles overlap. Since I’m sitting on both sides of the fence, let’s try to find common ground.

Overlapping roles

In an ideal world, everyone would install software through distribution packages, and the roles wouldn’t overlap. In the real world though, upstream projects need to deal with distributions that don’t provide packages for their software, or that provide old, buggy versions with no mechanism for getting fresh ones. That’s why they need to care about manual installation and update mechanisms. On the other hand, in their rush to release fixes, distributions sometimes carry patches without sending them upstream immediately. Both want to provide bugfix updates to stable versions. In all cases, the overlapping roles end up duplicating work and creating unnecessary friction.

Splitting the roles

In my (humble) opinion, upstream projects should encourage the use of packaged software wherever possible, rather than resisting it. They should concentrate on their core competency: working on producing new releases of their code. Dealing with distribution issues, environment specificities or maintaining stable branches is a different type of work, and one that distributions excel in. So the key seems to be in splitting the roles more cleanly.

Upstream projects should release code, together with good documentation on how to deploy it manually: dependencies, startup and upgrade mechanisms, open bug trackers with links to patches… This documentation can be reused by manual deployers and distribution packagers alike. They should stop short of providing installers, auto-updaters or dependency bundles, and should limit point release updates to critical issues (data loss, security…).

Distributions should be responsible for proper packaging (an easy way to install the software and its dependencies, together with startup scripts and other system integration), and for the more general bugfix updates that match their maintenance policies.

With such a split, you obviously will end up with a subpar user experience if you try to manually install the software from the released code. But you facilitate packaging, so you should end up being packaged in more distributions. I think time is better spent contacting distributions to get packaged rather than trying to improve the manual installation to the point where it is actually usable.

Freshness

One case where you end up doing manual installation (even on supported distributions) is to get the latest released code running on an already-released distribution. Due to their stable release policies, distributions will release bugfix updates for the version that was available when they released, but usually won’t provide a whole new version of a package.

The solution is in specific distribution archives that track the latest upstream releases (like PPAs in Ubuntu) and make them available for users of already-released distributions. Those are usually co-maintained between distributions and upstream projects.
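
For example (with a hypothetical PPA name), getting the latest upstream release on an already-released Ubuntu boils down to:

$ sudo add-apt-repository ppa:myproject/release
$ sudo apt-get update
$ sudo apt-get install myproject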

Reference distributions

At this point, it is worth taking collaboration one step further and having developers who are involved in both projects! They can make sure the distribution includes the packages and patches your software needs to run properly, that it is a distribution on which your software stays up-to-date and gets appropriate bugfix updates, and they can maintain the specific distribution archives for the latest upstream releases.

That distribution can then become a reference distribution for the upstream project, one that is tightly integrated with the upstream project and lives in harmony.

Two closing remarks:

  • You can have multiple reference distributions. That said, one way to limit friction and increase freshness is to have somewhat-synchronized release cycles, which may not scale very well.
  • I realize the proposed role split and reference distro scheme might not be generally applicable to all open source upstream projects. In my experience it worked well with server software.

In OpenStack, having a few Ubuntu core developers in the project (and the Ubuntu server team supporting us) allows us to use Ubuntu as a reference distribution. We have packages up for other distributions, but those are not (yet) official distribution packages. Any other distro developers interested in joining?

Agile vs. Open

January 21, 2011

I’ve been asked multiple times why open source project management does not fully adopt Agile methodologies, since they are so great, or what the main differences between the two are.

Agile is good for you

So first of all, I’d like to say that I think Agile methodologies are great. Their primary value, to me, is to allow software development groups to handle their stakeholders’ requirements in a sane way. By moving developers closer to the center of the game, they leverage autonomy (one of the three main intrinsic motivators that Dan Pink mentions in his book Drive) as a way to maximize a development team’s productivity.

Agile vs. Open

That said, applying pure Agile methods doesn’t really work for open source project management. Some great concepts can be reused, like frequent time-based releases, peer review or test-driven development. But most of the group tools assume a local, relatively small team: doing a morning stand-up meeting with a team of 60 spread across widely different timezones is a bit difficult. Agile also assumes that project management has some direct control over the developers (they can pick from the backlog, but not outside of it), while there is no such thing in an open development project.

The goals are also different. The main goal of Agile, in my opinion, is to maximize a development team’s productivity: optimizing the team velocity so that the most can be achieved by a given-size team. The main goal of open source project management is not to maximize productivity; it’s to maximize contributions: produce the most, with the largest group of people possible.

That’s why open source project management is all about setting the framework and the rules of play (what can get in trunk and how), and about trying to keep track of what is being done (to minimize confusion and friction between groups of developers). That’s why our release cycles are slightly longer than Agile sprints, to have a cadence that is more inclusive of development styles, and to enforce time to focus, as a group, on QA before a release.

Agile devs in Open source

It’s difficult for Agile developers to abandon their nice tools and adopt the seemingly-more-confusing open source bazaar ways. But in the end, I think open source is more empowering, because it addresses the two other Dan Pink intrinsic motivators: purpose and mastery. Working on an open source project and contributing to the world’s pool of public knowledge obviously gives an individual a sense of purpose in his work, but even more important is mastery.

Each developer in an open source project actually represents himself. With all proceedings and production being public, his personal name ends up attached to his work. He builds mastery and influence over the project through his own actions, not through the name of the company that pays his bills. Of course his employer has requirements and usually pays him to work on something specific, but the developer acts as the gateway that gets his employer’s requirements into the open source project. That way of handling stakeholder requirements places individual developers at the very center of the game, even more than Agile does. You end up with the largest number of highly-motivated individuals, which in turn leads to lots of stuff getting done.

Agile subteams

Finally, nothing prevents an open source project from having Agile development subgroups contributing to it. These subgroups can have user stories, planning poker, feature backlogs, pair programming and stand-up meetings. There are multiple challenges, though. Aligning Agile sprints with the open source project’s common development schedule is tricky. The Agile work schedule needs to make room for generic open source project tasks like random code reviews or pre-release QA. And some other group may end up implementing a feature from your internal backlog, since communicating the backlog outside the group can be bothersome and challenging.

I’d like to find ways, though. What do you think? Can Agile and Open live in harmony? Should they try?

Categories: Open source, Openstack, Ubuntu

Bleeding edge OpenStack Nova on Maverick (updated x2)

December 2, 2010

Want to test the latest cloud goodness? Thanks to the new Nova trunk PPA, it’s really easy to run the freshest code from OpenStack Compute (Nova) on Ubuntu 10.10. Here is how.

We will install everything on the same machine, one that has VT extensions enabled and can therefore run KVM. My test laptop with 2GB of RAM struggled a bit, but it worked. You should have Ubuntu 10.10 installed on that box.

Dec 9, 2010 UPDATE: rabbitmq-server should be installed before the nova packages, and we should use images with ramdisks at this point.

Feb 25, 2011 UPDATE: Ubuntu cloud images are supported, switch tutorial to using those.


Package installation

First you should enable the PPA:

$ sudo apt-get install python-software-properties
$ sudo add-apt-repository ppa:nova-core/trunk
$ sudo apt-get update

Install RabbitMQ first:

$ sudo apt-get install rabbitmq-server

Then install Nova and dependencies [1]:

$ sudo apt-get install nova-api nova-objectstore nova-compute \
    nova-scheduler nova-network euca2ools unzip

Congratulations, you just created a cloud.


Configuration

You should restart libvirt, especially if you had it installed before, to make sure it realizes that ebtables is now installed:

$ sudo service libvirt-bin restart

Create a specific network for your VMs; the arguments are the fixed IP range, the number of networks to create, and the number of addresses in each network. Here I used 10.0.0.0/8, which is unused on my LAN:

$ sudo nova-manage network create 10.0.0.0/8 1 64

Create a user, a project, download credentials and source them:

$ sudo nova-manage user admin ttx
$ sudo nova-manage project create myproject ttx
$ sudo nova-manage project zipfile myproject ttx
$ unzip nova.zip
$ . novarc


Register an Ubuntu cloud image

Download an Ubuntu cloud image, then use uec-publish-tarball to register it:

$ r="maverick"
$ wget http://uec-images.ubuntu.com/$r/current/$r-server-uec-amd64.tar.gz
$ uec-publish-tarball $r-server-uec-amd64.tar.gz mybucket

It should output 3 references [2]: emi, eri and eki. You need to use the emi value in the next section (I got “ami-lvdliy0”).
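
You can double-check the registered images (and their status) with:

$ euca-describe-images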


Running an instance

First, create a keypair if you don’t already have one:

$ euca-add-keypair mykey > mykey.priv
$ chmod 0600 mykey.priv

Allow connections to port 22 (SSH) of the instance, using the following command:

$ euca-authorize default -P tcp -p 22 -s 0.0.0.0/0

Then start the instance (replace $emi with the value from uec-publish-tarball above):

$ euca-run-instances $emi -k mykey -t m1.tiny

This will return an instance ID (I got “i-1objiev”) and an IP address (I got “10.0.0.3”), and the instance will be scheduled and launched. You can check its status with:

$ euca-describe-instances

The instance should quickly go from “launching” to “running”, and you should then be able to connect as the ubuntu user through SSH (replace $ipaddress with the address you got from euca-describe-instances):

$ ssh -i mykey.priv ubuntu@$ipaddress

When you are done playing, you can tear the instance down using the following command (replace $instanceid with the instance ID from above):

$ euca-terminate-instances $instanceid

Enjoy!


Notes

  1. For this simple tutorial I left nova-volume out, since it requires more setup (like configuring LVM volume groups) before it can be used.
  2. All references are prefixed with “ami-”, while they should be ami, ari and aki. This is a known bug (bug 658234) that will be fixed.

My desktop backup solution

November 29, 2010

I was inspired by a good blog post by Martin Pitt to set up my own desktop backup solution. I liked the idea of not requiring the computer to be on all the time, and of having the backup pushed from the client rather than pulled from the server. However, my needs were slightly different from his, so I adapted the solution.

His solution uses rsnapshot locally, then pushes the resulting directories to a remote server. I didn’t want to use local disk space (SSDs ain’t cheap), but I had a local server with 2TB available. So in my solution, the client rsyncs to the server, then the server triggers rsnapshot locally if the rsync was successful. This is done over SSH, and the server has no rights whatsoever on the client.

Prerequisites

In the examples, the client to back up is called mycli and the server on which the backups will live is named mysrv. As prerequisites, mycli needs rsync and openssh-client installed, while mysrv needs rsnapshot and openssh-server. OpenSSH needs to have public-key authentication enabled.
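
On Ubuntu, installing those prerequisites boils down to:

# on mycli
sudo apt-get install rsync openssh-client
# on mysrv
sudo apt-get install rsnapshot openssh-server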

SSH setup

On the client side, generate a specific passwordless SSH key for the backup connection:

mkdir ~/.backup
ssh-keygen -f ~/.backup/id_backup

On the server side, we’ll assume you want to put backups into /srv/backup. First of all, create an rbackup user that will be used to run the backup server-side:

sudo mkdir /srv/backup
sudo adduser --home /srv/backup --no-create-home --disabled-password rbackup

Next, add your backup public key (the contents of mycli:~/.backup/id_backup.pub) to mysrv:/srv/backup/.ssh/authorized_keys. The trick is to prefix it (same line, one space separator) with the only command you want the rbackup user to be able to perform over that SSH connection:

command="rsync --config /srv/backup/rsyncd-mycli.conf --server
--daemon ." ssh-rsa AAAAB3NzaLwm0ckRdzotb3...5Mbiw== ttx@mycli

Finally, you need to let rbackup read those .ssh files:

sudo chgrp -R rbackup /srv/backup/.ssh
sudo chmod -R g+r /srv/backup/.ssh

rsync setup (server-side)

Now we need to set up the rsync configuration that will be used on those connections:

# /srv/backup/rsyncd-mycli.conf
max connections = 1
lock file = /srv/backup/mycli/rsync.lock
log file = /srv/backup/mycli/rsync.log
use chroot = false
max verbosity = 3
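# let the client write (push backups) but never read the module back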
read only = false
write only = true

[mycli]
 path = /srv/backup/mycli/incoming
 post-xfer exec = /srv/backup/kick-rsnapshot /srv/backup/mycli/rsnapshot.conf

The post-xfer exec command is executed after every successful transfer to /srv/backup/mycli/incoming. In our case, we want rsync to trigger the /srv/backup/kick-rsnapshot script:

#!/bin/bash
if [ "$RSYNC_EXIT_STATUS" == "0" ]; then
   rsnapshot -c "$1" daily
fi

Don’t forget to make that one executable 🙂
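
That is:

sudo chmod +x /srv/backup/kick-rsnapshot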

rsnapshot setup (server-side)

rsnapshot itself is configured in the /srv/backup/mycli/rsnapshot.conf file. This is where you specify how many daily and weekly snapshots you want to keep (read the rsnapshot documentation to understand the interval concept):

# /srv/backup/mycli/rsnapshot.conf
config_version    1.2
snapshot_root    /srv/backup/mycli
cmd_rm      /bin/rm
cmd_rsync   /usr/bin/rsync
cmd_logger  /usr/bin/logger
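# keep 6 daily and 6 weekly snapshots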
interval    daily    6
interval    weekly    6   
verbose     2
loglevel    3
lockfile    /srv/backup/mycli/rsnapshot.pid
rsync_long_args    --delete --numeric-ids --delete-excluded
link_dest   1
backup      /srv/backup/mycli/incoming/    ./

Now you just have to create the backup directory hierarchy with appropriate permissions:

mkdir -p /srv/backup/mycli/incoming
chown -R rbackup:rbackup /srv/backup/mycli

The backup (client-side)

The client will rsync periodically to the server, using the following script:

#!/bin/bash
set -e
TOUCHFILE=$HOME/.backup/last_backup

# Check if last backup was more than a day before
now=`date +%s`
if [ -e $TOUCHFILE ]; then
   age=$(($now - `stat -c %Y $TOUCHFILE`))
else
   unset age
fi
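# 86300s is just under a day, so an hourly cron run won't skip a day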
[ -n "$age" ] && [ $age -lt 86300 ] && exit 0

nice -n 10 rsync -e "ssh -i $HOME/.backup/id_backup" -avzF \
     --delete --safe-links $HOME rbackup@mysrv::mycli
touch $TOUCHFILE

That script ensures that you sync to the server at most once a day. You can run it (as your user) as often as you’d like; I suggest hourly, via cron. On successful syncs, the server will trigger rsnapshot to do its magic backup rotation! Using the same model, you can easily set up multiple directories or multiple clients.
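
For example, assuming you saved the script as ~/.backup/backup.sh (any name and location will do), the crontab entry can be as simple as:

@hourly $HOME/.backup/backup.sh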

Like with Martin’s solution, you should set up various .rsync-filter files to exclude the directories and files you don’t want copied to the backup server.
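
For example, a minimal ~/.rsync-filter excluding caches and other transient data could look like this:

# ~/.rsync-filter
- .cache/
- .local/share/Trash/
- Downloads/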

The drawback of this approach is that the server keeps an extra copy of your backup (in the incoming directory). But in my case, since the server has plenty of space, I can afford it. It also does not work when you are away from your backup server.

I hope you find this setup useful; it has served me well so far.

Categories: Ubuntu

The art of release management

November 8, 2010

Last week I started a new job, working for Rackspace Hosting as the Release Manager for the OpenStack project. I’m still very much working from home on open source software, so that part doesn’t change. However, there are some subtle differences.

First of all, OpenStack is what we call an upstream project. Most of my open source work so far has been distribution work: packaging and delivering various open source software components into a well-integrated, tested and security-maintained distribution. This is hard work, never completely finished or perfect. It is also a necessary part of the open source ecosystem: without distributions, most software would not be easily available for use.

Upstream work, on the other hand, is about developing the software in the first place. It’s more creative work, in a much more controlled environment. The OpenStack project is the new kid on the block of cloud computing software, one that strives to become the open source standard for building cloud infrastructures everywhere. It was announced in July, so it’s relatively young. There are lots of procedures and processes to put in place, an already-large developer group, and an ever-growing community of users and partners. The software itself is positioned to run in high-demand environments: the storage component is in production use at Rackspace, and the compute component is in production use at NASA. OpenStack is planned to fully replace the current Rackspace Cloud software next year, and a number of governments plan to use it to power their local cloud infrastructures. These are exciting times.

What does an open source project Release Manager do? Well first, as it says on the tin, he manages the release process. Every 3 or 6 months, OpenStack will release a new version of its components, and someone has to make sure that happens. That’s OK, but what do I do the other 50 weeks of the year? Well, release managers also manage the release cycle. A cycle goes through four stages: Design, Development, QA and Release. It is the job of the release manager to drive and help the developer community through those stages: following work in progress, making sure everyone knows about the steps and freezes, and granting exceptions when necessary. At the very end, he must weigh the importance of a bug against the risk of regression its fix introduces: it’s better to release with a known bug than with an unknown regression. He is ultimately responsible for the on-time delivery of the complete release cycle. And yes, if you condense everything into 3 or 6 months, this is a full-time job 🙂

My duties also include ensuring that the developers have everything they need to work at their full potential and that the project is transparent. I also have to make sure the developer community is a welcoming environment for prospective new contributors, and to present the project as a technical evangelist at conferences. And if I still have free time, I may even write some code where I need to scratch an itch. All in all, it’s a pretty exciting job, and I’m very happy to meet everyone this week at the OpenStack Design Summit in San Antonio.

Categories: Cloud, Openstack, Ubuntu

Introducing UDS-N Cloud Track

October 23, 2010

On Monday, the Ubuntu Developer Summit for the Natty Narwhal development cycle (which will result in the Ubuntu 11.04 release) starts in sunny Orlando. UDS is divided into tracks: in this post I will introduce the “Cloud” track, which I’m very happy to lead. The schedule might still change for last-minute imperatives, so you should definitely double-check it using the dynamic track schedule.

As you know, Ubuntu Server’s strategy for cloud computing is twofold: be the best operating system to run in the cloud (through our official cloud images), and make available the best open source software to run a cloud (through the Ubuntu Enterprise Cloud product).

On Monday, we’ll start with a quick introduction where all the session leads will present their subjects for the week. Then we’ll dive into the heart of the subject with a session on the future improvements of our cloud images. The afternoon will be dedicated to what will happen with Eucalyptus in Natty Narwhal, followed by a two-hour session to brainstorm an installation service for physical cloud nodes.

Tuesday morning we will continue on the same subject, looking at how Puppet can be used to fully bootstrap and maintain the infrastructure we just installed. Then we’ll discuss how the components of the OpenStack project can be packaged and integrated into Ubuntu Enterprise Cloud (UEC). In the afternoon we’ll discuss the future of awstrial (the open source project behind the cloud 10 try-Ubuntu-Server-for-one-hour-for-free event), followed by a session on EC2 compatibility for UEC, and finally a session on cloud utilities, in particular the image rebundling tools.

Wednesday will start with a session on systems monitoring, followed by a session on cloud outreach campaigns. At 11:00 we should have a session on web scaling technologies (accelerators, NoSQL datastores, message queues, etc.), followed by a session on Hadoop packaging! After lunch we’ll discuss desktop cloud images and networking in the cloud (in particular the Open vSwitch project), and the last session of the day will be an OpenStack gap analysis discussion.

Thursday will start with a session on distributed logging, followed by a session on the Hudson continuous integration system. After the break we will have a session on the UEC web interface improvements in Natty, followed by a session on how to make the most of LXC containers. Containers will still be there after lunch, with a discussion on how to containerize ptrace and kill, running concurrently with a session on UEC QA and testing efforts. The containers sessions continue with a discussion on how to use containers in UEC. The day ends with a session on hypervisor technology (improvements in KVM and libvirt).

Friday, last day… There is some room for additional sessions, should one subject need more discussion. Already scheduled are sessions on improvements in our cloud-init and cloud-config cloud image bootstrapping system, automated server testing using Hudson, and a best practice guide for cloud applications.

All in all, a very interesting and complete schedule! If you want more details, see this list of the sessions, with links to the blueprints (where you can leave comments in the whiteboard section). Remember that UDS attendance is free, so I hope you will be able to participate in these sessions, either locally or remotely! See you there…

Categories: Cloud, Ubuntu, Ubuntu Server