Archive for the ‘Ubuntu’ Category

UDS is dead, long live ODS

March 4, 2013 1 comment

Back from an (almost) entirely-offline week of vacation, a lot of news was waiting for me. A full book was written. OpenStack projects graduated. An Ubuntu rolling release model was considered. But what grabbed my attention was the announcement of UDS moving to a virtual event. And every 3 months. And over two days. And next week.

As someone who attended every UDS but one since Prague in May 2008, first as a Canonical employee and then as an upstream developer, I found that quite a shock. We all have fond memories and anecdotes of things that happened during those Ubuntu developer summits.

What those summits do

For those who never attended one, UDSes (and the OpenStack Design Summits that were modeled after them) achieve a lot of goals for a community of open source developers:

  1. Celebrate the recent release and motivate your whole developer community for the next 6 months
  2. Brainstorm early ideas on complex topics, identify key stakeholders to include in further design discussion
  3. Present an implementation plan for a proposed feature and get feedback from the rest of the community before starting to work on it
  4. Reduce duplication of effort by getting everyone working on the same type of issues in the same room and around the same beers for a few days
  5. Meet people you usually only interact with online in informal settings, to get to know them and reduce friction that can build up after too many heated threads

This all sounds very valuable. So why did Canonical decide to do away with UDSes as we knew them, when they were arguably part of its successful community development model?

Who killed UDS

The reason is that UDS is a very costly event, and it was becoming less and less useful. A lot of Ubuntu development happens within Canonical these days, and UDS sessions gradually shifted from being brainstorming sessions between equal community members to being a formal communication of upcoming features/plans to gather immediate feedback (point [3] above). There were not many brainstorming design sessions anymore (point [2] above, very difficult to do in a virtual setting), with design happening more and more behind Canonical’s curtains. There is also less need to reduce duplication of effort (point [4] above), with fewer non-Canonical people starting to implement new things.

Therefore it makes sense to replace it with a less costly, purely virtual communication exercise that still perfectly fills point [3], with the added benefits of running it more often (updating everyone else on status more often) and improving accessibility for remote participants. If you add a move to rolling releases to the mix, it almost makes perfect sense. The problem is, they also get rid of points [1] and [5]. This will result in an even less motivated developer community, with more tension between Canonical employees and non-Canonical community members.

I’m not convinced that’s the right move. I for one will certainly miss them. But I think I understand the move in light of Canonical’s recent strategy.

What about OpenStack Design Summits?

Some people have been asking me if OpenStack should move to a similar model. My answer is definitely not.

When Rick Clark imported the UDS model from Ubuntu to OpenStack, it was to fulfill one of the 4 Opens we pledged: Open Design. In OpenStack Design Summits, we openly debate how features should be designed, and empower the developers in the room to make those design decisions. Point [2] above is therefore essential. In OpenStack we also have a lot of different development groups working in parallel, and making sure we don’t duplicate effort is key to limiting friction and making the best use of our resources. So we can’t just pass on point [4]. With more than 200 different developers authoring changes every month, the OpenStack development community is way past Dunbar’s number. Thread after thread, some resentment can build up over time between opposed developers. Get them to talk informally in person over a coffee or a beer, and most issues will be settled. Point [5] therefore lets us keep a healthy developer community. And finally, with about 20k changes committed per year, OpenStack developers are pretty busy. Having a week to celebrate and recharge motivation batteries every 6 months doesn’t sound like a bad thing. So we’d like to keep point [1].

So for OpenStack it definitely makes sense to keep our Design Summits the way they are. Running them as a track within the OpenStack Summit allows us to fund them, since there is so much momentum around OpenStack and so many people interested in attending. We need to keep improving the remote participation options to include developers who unfortunately cannot join us. We need to keep holding it in different locations around the world to foster local participation. But meeting in person every 6 months is an integral part of our success, and we’ll keep doing it.

Next stop is in Portland, from April 15 to April 18. Join us!

Categories: Open source, Openstack, Ubuntu

The value of Open Development

October 23, 2012 12 comments

Mark’s recent blogpost on Raring community skunkworks got me thinking. I agree it would be unfair to spin this story as Canonical/Ubuntu switching to closed development. I also agree that (as the damage control messaging was quick to point out) inviting some members of the community to participate in closed development projects is actually a step towards more openness rather than a step backwards.

That said, it certainly makes the “closed development” option more official and organized, which is not a step in the right direction in my opinion. It reinforces it as a perfectly valid option, while I would really like it to remain an exception for corner cases. So at this point, it may be useful to restate the benefits of open development, and why dropping them might not be such a good idea.

Open Development is a transparent way of developing software, where source code, bugs, patches, code reviews, design discussions and meetings all happen in the open and are accessible to everyone. “Open Source” is a prerequisite for open development, but you can certainly do open source without doing open development: that’s what I call the Android model, and what others call the “open behind walls” model. You can go further than open development by also doing “Open Design”: letting an open community of equals discuss and define the future features your project will implement, rather than restricting that privilege to a closed group of “core developers”.

Open Development allows you to “release early, release often” and get the testing, QA and feedback of (all) your users. This is actually a good thing, not a bad thing. That feedback will help you catch corner cases, consider issues that you didn’t predict, and get outside patches. More importantly, Open Development helps lower the barrier to entry for contributors to your project. It blurs the line between consumers and producers of the software (no more “us vs. them” mentality), resulting in a much more engaged community. Inviting select individuals to have early access to features before they are unveiled sounds more like a proprietary-model beta testing program to me. It won’t give you the amount of direct feedback and variety of contributors that open development gives you. Is the trade-off worth it?

As much as I dislike the Android model, I understand that Google’s ability to give select OEMs a head start has some value. Reading Mark’s post though, it seems that the main benefits for Ubuntu are avoiding early exposure of immature code and getting a bigger PR splash at release time. I personally think that short-term, the drop in QA due to reduced feedback will offset those benefits, and that long-term, the resulting drop in community engagement will also make this a bad trade-off.

In OpenStack, we founded the project on the Four Opens: Open Source, Open Development, Open Design and Open Community. This early decision is what made OpenStack so successful as a community, not the “cloud” hype. Open Development made us very friendly to new developers wanting to participate, and once they experienced Open Design (as exemplified in our Design Summits) they were sold, and turned into advocates of our model and our project within the companies that employ them. Open Development was really instrumental to OpenStack’s growth and adoption.

In summary, I think Open Development is good because you end up producing better software with a larger and more engaged community of contributors, and if you want to drop that advantage, you had better have a very good reason.

Categories: Open source, Openstack, Ubuntu

OpenStack Essex: the last mile

April 4, 2012 1 comment

At the time I’m writing this, we have final release candidates published for all the components that make up OpenStack 2012.1, codenamed “Essex”:

  • OpenStack Compute (Nova), at RC3
  • OpenStack Image Service (Glance), at RC3
  • OpenStack Identity (Keystone), at RC2
  • OpenStack Dashboard (Horizon), at RC2
  • OpenStack Storage (Swift), at version 1.4.8

Unless a critical, last-minute regression is found today in these proposed packages, they should make up the official OpenStack 2012.1 release tomorrow! Please give those tarballs a last check, and don’t hesitate to ping us on IRC (#openstack-dev @ Freenode) or file bugs (tagged essex-rc-potential) if you think you can convince us to reroll.

Those six months have been a long ride, with 139 features added and 1650 bugs fixed, but this is the last mile.

FOSDEM 2012 feedback

February 8, 2012 1 comment

I’m back from Brussels, which hosted the coldest FOSDEM ever. It started on Friday night with the traditional beer event. Since the Delirium was a bit small to host those thousands of frozen geeks, the FOSDEM organizers had enlisted the whole block as approved bars!

On Saturday, I spent most of my time in the Cloud and Virtualization devroom, which I escaped only to see Simon Phipps announce the new membership-based OSI, and Paolo Bonzini talk about the KVM ecosystem (in a not-technical-enough way, IMO). My own OpenStack talk was made a bit difficult by the absence of a microphone to cover the 550-seat Chavanne auditorium… but the next talks got one. The highlight of the day was Ryan Lane’s “infrastructure as an open source project” presentation, about how Wikimedia Labs uses Git, Gerrit, Jenkins and OpenStack to handle its infrastructure like a contributor-driven open source project. The day ended with a good and frank discussion between OpenStack developers, upstream projects and downstream distributions.

On Sunday I tried to hop between devrooms, but in a lot of cases the room was full and I couldn’t enter, so I spent more time in the hallway track. I enjoyed Soren’s talk about using more prediction algorithms (instead of simple thresholds) in monitoring systems, introducing his Surveilr project. The highlight of the day was Dan Berrangé’s talk about using libvirt to run sandboxed applications, using virt-sandbox. There are quite a few interesting uses for this, and the performance penalty sounds more than acceptable.

Overall it was a great pleasure for me to attend FOSDEM this year. Congratulations to the organizers again. I’ll be back next year; hopefully it will be warmer 🙂

Categories: Openstack, Ubuntu

About collaboration

February 1, 2012 4 comments

In recent years, as open source has become more ubiquitous, I’ve seen a new breed of participants appear. They push their code to GitHub as if it were a very visible good-behavior marketing badge. They copy code from multiple open source projects and modify it, but don’t contribute their changes back upstream. They seem to consider open source a trendy all-you-can-eat buffet combined with a cool marketing gimmick.

In my opinion, this is not what open source is about. I see open source, and more generally open innovation (which adds open design, open development and open community), as a solution for the future. The world is facing economic and ecological limits: it needs to stop designing for obsolescence, produce smarter, reduce duplication of effort, and fix the rift between consumers and producers. Open innovation encourages synergy and collaboration. It reduces waste. It enables consumers to be producers again. That’s a noble goal, but without convergence, we can’t succeed.

The behavior of these new participants goes against that. I call this the GitHub effect: you encourage access to the code, forking and fragmentation, when you should encourage convergence and collaboration on a key repository. And just as a “packaging made from recyclable materials” label on your product doesn’t make it environment-friendly, merely publishing your own code somewhere under an open source license doesn’t really make it open.

On the extreme fringe of that movement, we also see the line with closed source blurring: building your own closed product on top of open source technology, and/or abusing the word “Open” to imply that everything you do is open source, using the uncertainty to reap easy marketing benefits. I’ve even seen a currently-closed-source project being featured as an open source project to watch in 2012. We probably need to start playing harder, denouncing fake participants and celebrating good ones.

Some people tell me my view goes against making money with open source. That might be true for easy, short-term money. But I don’t think you need to abuse open source to make money out of it. The long-term benefits of open innovation are obvious, and as with green businesses, good behavior and long-term profit go well together. Let’s all make sure we encourage collaboration and promote good behavior, and hopefully we’ll fix this.

Categories: Open source, Openstack, Ubuntu

OpenStack developers meeting at FOSDEM

January 27, 2012 3 comments

Next week, European free and open source software developers will converge on Brussels for FOSDEM. We took this opportunity to apply for an OpenStack developers gathering in the Virtualization and Cloud devroom.

At 6pm on Saturday (last session of the day), in the Chavanne room, we will have a one-hour town hall meeting. If you’re an existing OpenStack contributor, a developer considering joining us, an upstream project developer, a downstream distribution packager, or just curious about OpenStack, you’re welcome to join us! I’ll be there, Stefano Maffulli (our community manager) will be there, and several OpenStack core developers will be there.

We’ll openly discuss issues and solutions around integration with upstream projects, packaging, governance, development processes, community and release cycles. In particular, we’ll have a distribution panel where every OpenStack distribution will be able to explain how they support OpenStack and discuss what we can improve to make things better for them.

And at the end of the session we can informally continue the discussion over fine Belgian beers or the famous local carbonade!

Making more solid OpenStack releases

January 18, 2012 3 comments

As we pass the middle of the Essex development cycle, questions about the solidity of this release start to pop up. After all, the previous releases were far from stellar, and with more people betting their business on OpenStack we can’t really afford another half-baked release.

Common thinking (mostly coming from years of traditional software development experience) is that we shouldn’t release until it’s ready, or good enough, and there are early calls to push back the release dates. This assumes the issue is incidental: that we underestimated the time it would take our finite team of internal developers working on bugs to reach a sufficient level of quality.

OpenStack, being an open source project produced by a large community, works differently. We have a near-infinite supply of developers. The issue is, unfortunately, more structural than incidental. The lack of solidity for a release comes from:

  • Lack of focus on generic bugfixes. Developers should work on fixing bugs. Not just the ones they filed or the ones blocking them in their feature-adding frenzy. Fixing identified, targeted, known issues. The bugtracker is full of them, but they don’t get attention.
  • Not enough automated testing to efficiently catch regressions. Even if everyone was working on bug fixes, if half your fixes end up creating a set of regressions, then there is no end to it.
  • Lack of bug triaging resources. Only a few people work on confirming, triaging and prioritizing the flow of incoming bugs. So the bugs that need the most attention are lost in the noise.

For the Diablo cycle, we had fewer than a handful of people focused on generic bugfixing. The rest of our 150+ authors were busy working on something else. Pushing back the release for a week, a month or a year won’t help OpenStack solidity if the focus doesn’t switch. And if our focus switches, then there will be no need for a costly release delay.

Acting now to make Essex a success

During the Essex cycle, our Project Technical Leads have done their share of the work by setting a very early milestone for their feature freeze. Keystone, Glance and Nova will freeze at Essex-3, giving us 10 weeks for bugfixing work (compared to the 4 weeks we had for Diablo). Now we need to take advantage of that long period and really switch our mindset away from feature development and towards generic bug fixing.

Next week we’ll hit feature freeze, so now is the time to switch. If we could:

  • have some more developers working on increasing our integration and unit test coverage
  • have the rest of the developers really working on generic bug fixing
  • have very active core reviewers that get more anal-retentive as we get closer to release, to avoid introducing regressions that would not be caught by our automated tests

…then I bet that it will lead to a stronger release than any delay of the release could give us. Note that we’ll also have a bug squashing day on February 2 that will hopefully help us get on top of old, deprecated and easy fixes, and give us a clear set of targets for the rest of the cycle.

The quality of future OpenStack releases hinges on our ability to switch our focus. That’s what we’ll be judged on. The world awaits, and the time is now.

Virtualization & Cloud devroom at FOSDEM

January 13, 2012 Leave a comment

The Free and Open source Software Developers’ European Meeting, or FOSDEM, is an institution that happens every year in Brussels: a busy, free and open event that gets a lot of developers together for two days of presentations and cross-pollination. There are typically the FOSDEM main tracks (a set of presentations chosen by the FOSDEM organizers) and a set of devrooms, which are topic-oriented or project-oriented and can organize their own schedules freely.

This year, FOSDEM will host an unusual devroom, the Virtualization and Cloud devroom. It will happen in the Chavanne room, a 550-seat auditorium that was traditionally used for main tracks. And it will last for two whole days, while other devrooms typically last for a day or a half-day.

The Virtualization and Cloud devroom is the result of merging three separate devroom requests: the Virtualization, Xen and OpenStack devrooms. It gives us a larger space and a lot of potential for cross-pollination across projects! We had a lot of talks proposed, and here is an overview of what you’ll be able to see there.

Saturday, February 4

Saturday will be the “cloud” day. We will start with a set of talks about OpenStack: past, present and future. I will do an introduction and retrospective of what happened last year in the project, Soren Hansen will guide new developers to Nova, and Debo Dutta will look into future work on application scheduling and Donabe. Next we’ll have a session on various cloud-related technologies: libguestfs, pacemaker-cloud and OpenNebula. The afternoon will start with a nice session on cloud interoperability, including presentations on the Aeolus, CompatibleOne and Deltacloud efforts. We’ll continue with a session on cloud deployment, with a strong OpenStack focus: Ryan Lane will talk about how Wikimedia maintains infrastructure like an open source project, Mike McClurg will look into Ubuntu+XCP+OpenStack deployments, and Dave Walker will introduce the Orchestra project. The day will end with a town hall meeting for all OpenStack developers, including a panel of distribution packagers: I will blog more about that one in the coming weeks.

Sunday, February 5

Sunday will be more of a “virtualization” day! The day will start early with two presentations by Hans de Goede about Spice and USB redirection over the network. Then we’ll have a session on virtualization management, with Guido Trotter giving more Ganeti news and three talks about oVirt. In the afternoon we’ll have a more technical session around virtualization in development: Antti Kantee will introduce ultralightweight kernel service virtualization with rump kernels, Renzo Davoli will lead a workshop on tracing and virtualization, and Dan Berrangé will show how to build application sandboxes on top of LXC and KVM with libvirt. The day will end with another developers meeting, this time the Xen developers will meet around Ian Campbell and his Xen deployment troubleshooting workshop.

All in all, that’s two days packed with very interesting presentations, in a devroom large enough to accommodate a good crowd, so we hope to see you there!

Ending the year well: OpenStack Essex-2 milestone

December 20, 2011 1 comment

2011 is almost finished, and what a year it has been. We started it with two core projects and one release behind us. During 2011, we got three releases out the door, grew from 60 code contributors to about 200, added three new core projects, and met for two design summits.

The Essex-2 milestone was released last week. Here is our now-regular overview of the work that made it to OpenStack core projects since the previous milestone.

Nova was the busiest project. Apart from my work on a new secure root wrapper (detailed in previous articles on this blog), we added a pair of OpenStack API extensions to support the creation of snapshots and backups of volumes, the metadata service can now run separately from the API node, network limits can now be set using a per-network base and a per-flavor multiplier, and a small usability feature lets you retrieve the last error that occurred using nova-manage. But Essex is not about new features, it’s more about consistency and stability. On the consistency front, the HA network mode was extended to support XenServer, KVM compute nodes now report capabilities to zones like Xen ones do, and the Quantum network manager now supports NAT. Under the hood, VM state transitions have been strengthened, the network data model has been overhauled, internal interfaces now support UUID instance references, and unused callbacks have been removed from the virt driver.

The other projects were all busy starting larger transitions (Keystone’s RBAC, Horizon’s new user experience, and the Glance 2.0 API), leaving less room for Essex-2 features. Glance still saw the addition of a custom directory for data buffering. Keystone introduced global endpoint templates and swauth-like ACL enforcement. Horizon added UI support for downloading RC files, while migrating under the hood from jquery-ui to bootstrap, and adding a versioning scheme for environment/dependencies.

The next milestone is in a bit more than a month: January 26th, 2012. Happy holidays and a happy new year to all!

Improving Nova privilege escalation model, part 3

November 30, 2011 8 comments

In the previous two posts of this series, we explored the deficiencies of the current model and the features of an alternative implementation. In this last post, we’ll discuss the advantages of a Python implementation and open the discussion on how to secure it properly.

Python implementation

The features mentioned in the previous post are quite easy to implement in Python. The main advantage of doing so is that the code can happily live inside the Nova codebase; in particular, the filter definition files can be implemented as Python modules that are loaded if present. That solves the issue of shipping definitions within Nova, as well as separating allowed commands based on the node types deployed locally. The code is simple and easy to review. The trick is to make sure that no malicious code can be injected into the elevated-rights process. This is why I’d like to present a model and open it up for comments from the community.
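
To make this more concrete, here is a minimal sketch of what such a filter definition module could look like. The module path (nova/rootwrap/compute.py) follows the naming used below; the CommandFilter class and the specific commands are purely illustrative assumptions, not the actual Nova implementation.

    # Hypothetical filter definition module, e.g. nova/rootwrap/compute.py
    # (illustrative sketch only, not the actual Nova code)
    import os


    class CommandFilter(object):
        """Accepts a command if its name matches an allowed executable."""

        def __init__(self, exec_path, run_as):
            self.exec_path = exec_path   # trusted full path to execute
            self.run_as = run_as         # user to run the command as

        def match(self, userargs):
            # Only match when the requested command name corresponds to
            # the allowed executable
            return bool(userargs) and \
                os.path.basename(self.exec_path) == userargs[0]


    # Commands a compute node would be allowed to run as root (assumed)
    filters = [
        CommandFilter("/sbin/ip", "root"),
        CommandFilter("/usr/bin/qemu-nbd", "root"),
    ]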

Proposed security model

The idea would be to have Nova code optionally use “sudo nova-rootwrap” instead of “sudo” as the root_helper. A generic sudoers file would allow the nova user to run /usr/bin/nova-rootwrap as root, while stripping environment variables like PYTHONPATH. To load its filter definitions, nova-rootwrap would try to import a set of predefined modules (like nova.rootwrap.compute), but if those aren’t present, it should ignore them. Can this model be abused?
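
To illustrate that flow, here is a rough sketch of what the nova-rootwrap entry point could do under this model, reusing the hypothetical filter modules and CommandFilter class sketched above. The module list beyond nova.rootwrap.compute and the sudoers line shown in the comments are assumptions, not the actual implementation.

    #!/usr/bin/python
    # /usr/bin/nova-rootwrap (sketch), invoked as "sudo nova-rootwrap <command>"
    # A matching sudoers entry could look like:
    #   nova ALL = (root) NOPASSWD: /usr/bin/nova-rootwrap
    # (sudo's default env_reset strips PYTHONPATH and other variables)
    import subprocess
    import sys

    # Predefined filter modules; missing ones are silently skipped, so only
    # the node types actually deployed contribute allowed commands.
    FILTER_MODULES = ["nova.rootwrap.compute",
                      "nova.rootwrap.network",
                      "nova.rootwrap.volume"]


    def load_filters():
        filters = []
        for name in FILTER_MODULES:
            try:
                module = __import__(name, fromlist=["filters"])
                filters.extend(module.filters)
            except ImportError:
                pass  # module not present on this node, ignore it
        return filters


    def main():
        userargs = sys.argv[1:]
        for f in load_filters():
            if f.match(userargs):
                # Replace the command name with the trusted full path
                command = [f.exec_path] + userargs[1:]
                sys.exit(subprocess.call(command))
        sys.stderr.write("Unauthorized command: %s\n" % " ".join(userargs))
        sys.exit(1)


    if __name__ == "__main__":
        main()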

The obvious issue is to make sure sys.path (the set of directories from which Python imports its modules) is secure, so that nobody can insert their own modules into the process. I’ve given some thought to various checks, but actually there is no way around trusting the default sys.path you’re given when you start Python as root from a cleaned environment. If that’s compromised, you’re toast the moment you “import sys” anyway. So using sudo to only allow /usr/bin/nova-rootwrap and cleaning the environment should be enough. Or am I missing something?

Insecure mode ?

One thing we could do is check that everything on sys.path belongs to root, and refuse to run if it doesn’t. That would tell users that their setup is insecure (while potentially letting them bypass the check by running “sudo nova-rootwrap --insecure” as the root_helper). But that’s a convenience to detect insecure setups, not a security addition (the fact that it doesn’t complain doesn’t mean you’re safe, it could mean you’re already compromised).
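
Such a check could look something like the following sketch (purely illustrative; which conditions are actually worth checking is part of the open question):

    # Sketch of an insecure-setup detection (illustrative assumption)
    import os
    import sys


    def sys_path_looks_secure():
        """Return False if any sys.path entry is not root-owned or is
        writable by group/others."""
        for entry in sys.path:
            path = entry or os.getcwd()   # '' means the current directory
            if not os.path.exists(path):
                continue
            st = os.stat(path)
            if st.st_uid != 0 or (st.st_mode & 0o022):
                return False
        return True


    if not sys_path_looks_secure() and "--insecure" not in sys.argv:
        sys.stderr.write("sys.path looks insecure, refusing to run "
                         "(use --insecure to override)\n")
        sys.exit(1)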

Test mode ?

For tests, it’s convenient to allow running code from branches. To allow this (unsafe) mode, you would tweak sudoers to allow running $BRANCH/bin/nova-rootwrap as root, and prepend “..” to sys.path in order to allow modules to be loaded from $BRANCH (maybe requiring --insecure mode for good measure). It sounds harmless, since if you run from /usr/bin/nova-rootwrap you can assume that /usr is safe… Or should that idea be abandoned altogether?
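
A possible (and deliberately unsafe) way to wire that up, again just a sketch built on the assumptions above:

    # Test-mode sketch: when invoked as $BRANCH/bin/nova-rootwrap, allow the
    # filter modules to be loaded from the branch itself (unsafe on purpose)
    import os
    import sys

    if "--insecure" in sys.argv:
        branch_root = os.path.abspath(
            os.path.join(os.path.dirname(sys.argv[0]), ".."))
        sys.path.insert(0, branch_root)   # roughly: prepend ".." to sys.path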

Audit

Nothing beats peer review when it comes to secure design. I call on all Python module-loading experts and security white hats out there: would this work? Are those safe assumptions? What do you think of the insecure and test modes? Would you suggest something else? If you’re one of those who can’t think in words but require code, you can get a glimpse of the work in progress here. It will all be optional (and not used by default), so it can be added to Nova without much damage, but I’d rather do it right from the beginning 🙂 Please comment!