
Archive for the ‘Ubuntu Server’ Category

Features are in: the diablo-4 milestone

August 31, 2011

August was very busy for OpenStack Nova and Glance developers, and the culmination of those efforts is the delivery of the final feature milestone of the Diablo development cycle: diablo-4.

Glance gained final integration with the Keystone common authentication system, support for sharing images between groups of tenants, a new notification system and i18n. Twelve feature blueprints were completed in Nova, including final Keystone integration, the long-awaited ability to boot from volumes, a configuration drive to pass information to instances, integration points for Quantum, KVM block migration support, as well as several improvements to the OpenStack API.

Diablo-4 is mostly feature-complete: a few blueprints for standalone features were granted an exception and will land after diablo-4, such as volume types and virtual storage arrays in Nova, or SSL support in Glance.

Now we race towards the release branch point (September 8th), when the Diablo release branch will start to diverge from the newly-opened Essex development branch. The focus is on testing, bug fixing and consistency… up until September 22, the Diablo release day.

Elite committers vs. Gated trunk

August 12, 2011

How do you control what gets into your open source project’s code? The classic model, inherited from pre-DVCS days, is to have a set of “committers” who are trusted with direct access, while the vast majority of project “contributors” must kindly ask them to sponsor their patches. You can find that model in a lot of projects, including most Linux distributions. This model doesn’t scale that well: even trusted individuals are error-prone, and nobody should escape peer review. But the main issue is the binary nature of committer power: it divides your community (us vs. them) and does not really encourage contribution.

Gated trunk

The solution to this is to implement a gated trunk with a code review system like GitHub pull requests or Launchpad branch merge proposals. Your “committers” become “core developers” who have the deciding vote on whether a proposal should be merged. Everyone goes through the peer review process, and the peer review process is open to everyone: your “contributors” become “developers” who can comment too. You reduce the risk of human error and the community is much healthier, but some issues remain: your core developers can still (wittingly or unwittingly) evade peer review, and the final merge process is still manual and error-prone.

Automation ftw

The solution is to add more automation, and not trust humans with direct repository access anymore. An “automated gated trunk” bot can watch for reviews and, when a set of pre-defined rules is met (human approvals, test suites passed, etc.), trigger the trunk merge automatically. This removes human error from the process, and effectively turns your “core developers” into “reviewers”. This last aspect makes for a very healthy development community: there is no elite group anymore, just a developer subgroup with additional review duties.
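
To make that concrete, here is a minimal sketch of what such a bot could do for each proposed change, assuming a git-based trunk. The count_approvals and run_testsuite helpers are hypothetical placeholders for the review-tracking and test-running pieces, not real tools:

#!/bin/sh
# Hypothetical gate script: merge a proposed change into trunk only when
# the pre-defined rules are met. count_approvals and run_testsuite are
# illustrative placeholders for review tracking and automated testing.
change="$1"
if [ "$(count_approvals "$change")" -ge 2 ] && run_testsuite "$change"; then
    git checkout master
    git merge --no-ff "proposed/$change"    # automated trunk merge
    git push origin master
else
    echo "Not merged: $change lacks approvals or fails tests"
fi

The exact rules (how many approvals, which test jobs must pass) are whatever the project decides; the point is that no human pushes to trunk directly.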

Gerrit

In OpenStack, we used Tarmac in conjunction with Launchpad/bzr code review in our first attempt to implement this. As we considered migrating to git, the lack of support for tracking formal approvals in GitHub code review prevented us from implementing a complex automated gated trunk on top of GitHub, so we deployed Gerrit. I initially resisted adding a new tool to our toolset mix, but the incredible Monty Taylor and Jim Blair did a great integration job, and I realize now that this gives us a lot more flexibility and room for future evolution. For example, I like that some tests can be run when a change is proposed, rather than only after the change is approved (which results in superfluous review roundtrips).

At the end of the day, gated trunk automation helps build a welcoming, non-elitist (and lazy) developer community. I wish more projects, especially distributions, would adopt it.

Summer of OpenStack: the diablo-3 milestone

July 29, 2011

No rest for the OpenStack developers: today saw the release of the July development efforts for Nova and Glance, the diablo-3 milestone.

Glance gained two performance options: API servers can now cache image data on the local filesystem, and a delayed delete feature allows image deletion to happen asynchronously.

With a bit more than 100 trunk commits over the month, Nova gained support for multiple NICs, the FlatDHCP network mode now supports a high-availability option (read more about it here), instances can be migrated, and system usage notifications were added to the notification framework. Network code was also refactored in order to facilitate integration with the new networking projects, and countless fixes were made in OpenStack API 1.1 support.

We have one more milestone left (diablo-4) before the final 2011.3 release… still a lot to do!

June in OpenStack: the diablo-2 milestone

July 4, 2011

About a month ago I commented on the features delivered in the diablo-1 milestone. Last week we released the diablo-2 milestone for your testing and feature evaluation pleasure.

Most of the changes to Glance were made under the hood. In particular, the new WSGI code from Nova was ported to Glance, and image collections can now be sorted by a subset of the image model attributes. Most of the groundwork to support Keystone authentication was done, but that should only become available in diablo-3!

Those same initial Keystone integration steps were also taken for Nova, along with plenty of other features. We now support distributed scheduling for complex deployments, together with a new instance referencing model. Also added during this timeframe were support for floating IPs (in the OpenStack API), a basic mechanism for pushing notifications out to interested parties, global firewall rules, and an instance type extra specs table that can be used by a capabilities-aware scheduler. Less visibly to the user, we completed efforts to standardize error codes and refactored the OpenStack API serialization mechanism.

And there is plenty more coming up in diablo-3… scheduled for release on July 28th.

Time-based releases are good for community

July 1, 2011

There has been a bit of discussion lately on feature-based vs. time-based release schedules in OpenStack, and here are my thoughts on it. In a feature-based release cycle, you release when a given set of features is implemented, while in a time-based release cycle, you release at a predetermined date, with whatever is ready at the time.

Release early, release often

One of the basic principles in open source (and agile) is to release early and release often. This allows fast iterations, which avoid the classic drawbacks of waterfall development. If you push that logic to the extreme, you can release at every commit: that is what continuous deployment is about. Continuous deployment is great for web services, where there is only one deployment of the software and it runs the latest version.

OpenStack projects actually provide builds (and packages) for every commit made to development trunk, but we don’t call them releases. For software that has multiple deployers, having “releases” that combine a reasonable amount of new features and bugfixes is more appropriate. Hence the temptation of doing feature-based releases: release often, whenever the next significant feature is ready.

Frequent feature-based releases

The main argument of supporters of frequent feature-based releases is that time-based cycles are too long, so they increase the time it takes for a given feature to become available to the public. But time-based isn’t about “a long time”. It’s about “a predetermined amount of time”. You can make that “predetermined amount of time” as small as needed…

Supporters of feature-based releases say that time-based releases are good for distributions, since distributions have little control over the release cycles of their individual components. I’d argue that time-based releases are always better, for anyone who wants to do open development in a community.

Time-based releases as a community enabler

If you work with a developer community rather than with a single-company development group, the project doesn’t have full control over its developers, only limited influence. Doing feature-based releases is therefore risky, since you have no idea how long it will take to get a given feature implemented. It’s better to have frequent time-based releases (or milestones) that regularly deliver to a wider audience whatever happens to be implemented at a given, predetermined date.

If you work with an open source community rather than with a single-company product team, you want to help the various stakeholders synchronize. Pre-announced release dates allow everyone (developers, testers, distributions, users, marketers, press…) to be on the same page, following the same cadence and responding to the same rhythm. It might be convenient for developers to release “whenever it makes sense”, but the wider community benefits from having predictable release dates.

It’s no wonder that most large open source development communities switched from feature-based releases to time-based releases: it’s about the only way to “release early, release often” with a large community. And since we want the OpenStack community to be as open and as large as possible, we should definitely continue to do time-based releases, and to announce the schedule as early as we can.

 

Delivery channels for OpenStack on Ubuntu

June 15, 2011

There are multiple delivery channels available for installing OpenStack packages on Ubuntu.

First of all, starting with 11.04 (Natty), packages for OpenStack Nova, Swift and Glance are available directly in Ubuntu’s official universe repository. This contains the latest OpenStack release available at the time of the Ubuntu release: 2011.2 “Cactus” in 11.04. If you don’t use Ubuntu 11.04, or if you want a more recent version, you’ll need to enable one of our specific PPAs; please read on.

OpenStack release PPA

If you want to run 2011.2 on 10.04 LTS (Lucid) or 10.10 (Maverick), you can use the ppa:openstack-release/2011.2 PPA. Enabling it is as simple as running:

$ sudo apt-get install python-software-properties   # provides the add-apt-repository command
$ sudo add-apt-repository ppa:openstack-release/2011.2
$ sudo apt-get update                                # refresh the package index to pick up the PPA
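
Once the PPA is enabled and the package index refreshed, you install the OpenStack packages themselves with apt-get as usual. As a purely illustrative example (the exact package selection depends on the role of the node):

$ sudo apt-get install nova-compute    # compute node services
$ sudo apt-get install glance          # image registry and delivery service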

Latest milestone

Starting with the Diablo cycle, we do a coordinated OpenStack release every 6 months. That said, Swift releases stable versions more often. You can get the latest Swift release through the ppa:swift-core/release PPA. Just replace the PPA name in the above example.

Also starting with the Diablo cycle, Nova and Glance deliver a development milestone every 4 weeks. If you want to test the latest features, you can enable those PPAs: ppa:nova-core/milestone or ppa:glance-core/milestone.

PPAs for testers and developers

Just before we deliver one of these intermediary releases or milestones, we use a specific “milestone-proposed” PPA to do final QA on release candidates. Enabling this one and reporting issues will help us deliver high-quality milestones. Just enable ppa:nova-core/milestone-proposed, ppa:glance-core/milestone-proposed or ppa:swift-core/milestone-proposed.

Finally, for all projects and for every code commit, we generate a package in the corresponding trunk PPA. If you’re a developer, or you like living on the bleeding edge, you should enable one of these: ppa:nova-core/trunk, ppa:glance-core/trunk or ppa:swift-core/trunk.

 

I hope this will help you select the best delivery channel for your use case, depending on whether you’re deploying, evaluating, helping in QA or actively developing. For future reference, the list of PPAs is maintained on the wiki.

A month in OpenStack Diablo: the diablo-1 milestone

June 2, 2011

Back at the OpenStack Design Summit in Santa Clara, we decided to switch from a 3-month cycle to a 6-month coordinated release cycle, with more frequent milestone deliveries along the way.

Lately we have been busy adapting the release processes to match the delivery of the first milestones. Swift 1.4.0 was released last Tuesday, and today sees the release of the diablo-1 milestone for Nova and Glance.

What should you expect from diablo-1, just 4 weeks after the design summit? In this short timeframe, lots of features have been worked on, and the developers managed to land quite a few of them in time for diablo-1.

Glance’s API was improved to support filtering of /images and /images/detail results, as well as limiting and paging of results, which made API versioning support necessary. Glance also grew a new disk format (“iso”) that should ultimately allow booting ISOs directly in Nova.
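
As a purely illustrative example, a filtered and paged listing could look something like the following, assuming a Glance API server listening on its default port (exact parameter names may differ between versions):

$ curl "http://localhost:9292/v1/images?disk_format=iso&limit=10"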

On Nova’s side, the most notable addition is support for snapshotting and cloning volumes with the EC2 API. The XenServer plugin now supports Open vSwitch, and pause and suspend capabilities have been added to the KVM hypervisor.

Now keep your seatbelt fastened, because diablo-2 is set to release on June 30th.

OpenStack Nova: Main themes for Diablo

May 20, 2011

A few weeks after the OpenStack Design Summit in Santa Clara, we are starting to get a better picture of what should be in the next version of OpenStack Nova, codenamed Diablo, scheduled for release on September 22.

One big priority of this release is to separate the code for the network and volume services, and refactor nova-network code to add a clear internal API. This will allow plugging in separate network and volume service providers, and pave the way for integration with future OpenStack projects like Quantum/Melange/Donabe and LunR. In preparation for this, we’ll push changes to rely more on the queue (and less on the database) to pass information between components. In the same area, we need some more changes to support multiple NICs, and we should also provide a client OpenStack API for directly interacting with volumes.

A second theme of the Diablo release is the new distributed scheduler, which should be able to schedule across zones and take capabilities into account. This will need changes in the way we reference instances, as well as some changes for EC2 API compatibility.

On the API side, we should finalize OpenStack API 1.1 support, including work on floating IPs and shared IP groups. For administrators, instance migration and account administration actions should be added. We’ll also ensure good AWS API compatibility and validation.

Support for snapshotting, cloning and booting from volumes should land early in this cycle, as well as new ways of communicating configuration data between host and guest. We also need to integrate AuthN/AuthZ with the new common Keystone authentication system. Lots of other features are planned (and others might be added before the end); you can check out the blueprints for more detail.

Last but not least, on the QA side, we should have continuous automated testing across a range of reference architectures, and increase our unittest and smoketest coverage, among other efforts to build in quality.

The first milestone for this cycle, diablo-1, should be released on June 2nd.

OpenStack @ Ubuntu Developer Summit

May 18, 2011

Last week I attended the Ubuntu Developer Summit for Oneiric in Budapest. This was the first time I attended UDS as an upstream representative rather than as a Canonical employee. I very much enjoyed it: not being a track lead or a busy technical lead actually gives you desirable flexibility in your agenda :)

First of all, a quick comment on the big announcement of the week, which the Twittersphere is not done retweeting yet: “Ubuntu switching from Eucalyptus to OpenStack”. I think it would be more accurate to say that Ubuntu chose to use OpenStack as its default cloud stack for future versions. Comparing Eucalyptus and OpenStack is like comparing apples to apple trees: OpenStack provides several cloud infrastructure pieces (of which only OpenStack Compute, Nova, covers the same space as Eucalyptus). I suspect the wide scope of the project played a role in OpenStack being selected as the default stack for the future. Eucalyptus and OpenStack Nova should both be present as deployment options from 11.10 on.

On the UDS format itself, I’d say that the “one blueprint = one hour” format does not scale that well. The number of hours in the week is fixed, so when the project grows you end up having too many sessions going on at the same time. Lots of blueprints do not require one hour of discussion, but rather a quick presentation of the plan, feedback from interested parties and Q&A. That’s what we do for our own Design Summits, but I’ll admit it makes scheduling a bit more complex. On the good side, having the floor plan inside our UDS badges was a really good idea, especially with confusing room names :)

The Launchpad and bzr guys were very present during the week, attentive and responsive to the wishes of upstream projects. They have great improvements and features coming up, including finer-grained bugmail and dramatic speed improvements in bzr’s handling of large repos.

Last week also saw the rise of creampiesourcing: motivating groups of developers with bets (“if the number of critical bugs for Launchpad goes to 0 by June 27, I’ll take a cream pie in the face”). It seems to work better than karma points.

Finally, Rackspace Hosting co-sponsored the “meet and greet” event on Monday night, promoting OpenStack. I think offering cool T-shirts, like we did at the previous UDS in Orlando, was more effective at spreading the word and making the project “visible” over time: in Budapest you could see a lot of people wearing the OpenStack T-shirts we offered back then!

Coming up in OpenStack Cactus…

March 16, 2011

In a bit more than a week, we will hit FeatureFreeze for the OpenStack “Cactus” cycle, so we are starting to have a good idea of what new features will make it. The Cactus cycle’s focus was on stability, so there are fewer new features compared to Bexar, but the developers still achieved a lot in a couple of months…

Swift (OpenStack object storage)

The Swift team really focused on stability and performance improvements this cycle. I will just single out the refactoring of the proxy to make backend requests concurrent, and the improvements to sqlite3 indexing, as good examples of this effort.

Glance (OpenStack image registry and delivery service)

Bexar saw the first release of Glance, and in Cactus it was vastly improved to match the standards we have for the rest of OpenStack: logging, configuration and option parsing, use of paste.deploy and non-static versioning, database migrations… New features include a CLI tool and a new method for clients to verify images. Glance developers might also sneak in an authentication middleware and support for HTTPS connections!

Nova (OpenStack compute)

A lot of the feature work in Nova for Cactus revolved around the OpenStack API 1.1 and exposing features through XenServer (migration, resize, rescue mode, IPv6, file and network injection…). We should also have the long-awaited live migration feature (for KVM), support for LXC containers, VHD images, multiple NICs, dynamically-configured instance flavors, and volume storage on HP/LeftHand SANs. XenAPI should get support for the VLAN network manager and network injection. We hope support for the VMware/vSphere hypervisor will make it.

The rest of the Nova team concentrated on testing, bugfixing (already 115 bugfixes committed to Cactus!) and producing a coherent release, as evidenced by the work on adding the missing IPv6 support for the FlatManager network model. I should also mention that the groundwork for multi-tenant accounting and multiple clusters in a region also landed in Cactus.

Over the three projects’ branches, last month we had more than 2500 commits by more than 75 developers. Not too bad for a project that is less than one year old… We’ll see the result of this work on Cactus release day, scheduled for April 14.

Categories: Cloud, Openstack, Ubuntu Server