Sunday, November 7, 2010

Why Wayland is good for the future of Ubuntu, Canonical, etc.

History
 
The recent announcement at UDS that the venerable X server and protocol will no longer be the default choice for Unity, and as a consequence for Ubuntu, was a shock for some; for me, it is clearly a relief.

In my previous post about Unity (before the UDS/Wayland announcement), I said: "In this context, Ubuntu/Canonical is more conservative than Apple and Google, which decided to dump X altogether. I'm not sure the X environment is fit for next-millennium challenges: once again, time will tell."

Obviously, the X server and protocol are great tools, very mature and with numerous capabilities. For a long time, evolution was slow (some would say "frozen"), but then X.org did a great job and things started to evolve again.

However, being in the field, deploying X-based thin clients since 2003 and designing a larger-scale, user-friendly open source thin-client solution (LTSP-Cluster), made me wonder whether this protocol is suited for the future.

Why not X?
Without being an X expert, here are some of the issues that I think were critical in the (announced) move from the X server to Wayland:

Hardware support: I strongly recommend running Linux only on hardware with a well-supported video driver; without one, the experience can be daunting, especially on a thin client. X was supposed to be hardware independent, lightweight and to provide great performance.

However, these fights were lost long ago: I've learned the hard way that not all X drivers are equal: open source or not, how many XV channels are supported, what level of 3D support (and which version exactly), etc. In fact, selecting good-quality desktop or thin-client hardware is a service we sell to our customers!

Size matters: a default X.org server on the desktop where I wrote this post uses 64 MB (without cache), or 180 MB (with cache), on Ubuntu 10.10, 64-bit, with regular 3D effects. On a phone with 256 MB of RAM, or on an ARM-based thin client with 64 MB, this is not good. I can imagine it also has an impact on battery life on mobile devices.
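For anyone who wants to reproduce this kind of measurement, here is a minimal sketch (mine, purely illustrative and not the exact method used for the numbers above), assuming a Linux box with the psutil Python package installed and an X server process named "Xorg" or "X":

    # Rough estimate of the X server's resident memory (RSS).
    # RSS does not capture everything (shared mappings, video memory),
    # so it will not match every memory-accounting tool exactly.
    import psutil

    total_rss = 0
    for proc in psutil.process_iter():
        try:
            if proc.name() in ("Xorg", "X"):
                total_rss += proc.memory_info().rss
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue

    print("X server RSS: %.1f MB" % (total_rss / (1024.0 * 1024.0)))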

A toolbox instead of a simple tool: without being "bloated", X has certainly evolved, over the years, to support a huge range of hardware and protocols; it has a long history and carries a load of quirks and "this is not a bug, it is a feature" functionality.

Crispness: X is not very crisp. Without being a usability expert, I can guarantee that, in my experience with iPad, Mac and Android devices, those devices are much more responsive graphically than your average Linux desktop. Why? I'm no X expert, but I will propose the hypothesis that this is because of the legacy of the project.

3D: without even discussing the time it took to obtain 3D support at the X level, remember Compiz/Beryl/Looking Glass and the time it took for them to mature, while most X users were dismissing those efforts? How well is 3D integrated into your desktop environment?

Why Wayland?

Most of the reasons don't have anything to do with Wayland itself; they are consequences of today's competitive IT environment, the current state of X, and business needs. In this context, we will see that Wayland happened to be exactly what was needed: instead of starting from scratch, there was already a start-up project hosted by freedesktop.org, an existing proof of concept and prototype, and a small but real community with the goal of improving the user experience.

My first reason is a practical one: I think that making X evolve to a state suited for Mobile Internet Devices (MID) would require deep technical knowledge of X as well as a long time to convince all the X stakeholders to agree to a change. Will the market wait for this change? I'm sure it will not! Will everyone agree on the need to change? Not certain either. Can a company accept this level of risk for what is its future? According to Wikipedia, X dates back to 1984. Is it possible and wise to compete in this space with a technology that is 26 years old? I really don't think so (and I say this with profound respect for everybody who contributed to the X11 project: the fact that it remained great for 20+ years is, in IT, clearly a tribute to the design and the project itself).

Another way to look at this is through the "disruptive innovation" or "disruptive technology" model. Mobile Internet Devices are changing the IT world... big time. Major players are on the verge of becoming irrelevant in a new market that is growing faster than any other in IT. Think about the position of RIM/BlackBerry, PalmOS, Microsoft, etc. in this space. Others are seizing (creating?) the opportunity and went from losers or non-existent to key players (Apple with the iPod, iPhone and iPad and, of course, Google with Android). In order to compete in this space, legacy has to be left behind: your average MID does not have a lot in common with your desktop, and the UI of every single application has to be redesigned to take into account screen size, touch screens, multi-touch, the absence of a mouse/keyboard, battery consumption, etc.

Wayland's goal is the following (according to Wikipedia): "every frame is perfect, by which I mean that applications will be able to control the rendering enough that we'll never see tearing, lag, redrawing or flicker".

Well, in terms of goals, it really puts the user at the center of things: a display server that provides a great user experience. Ubuntu is popular on the desktop because it really put effort into design and decided to place the user at the center of things.

Conclusion

Well, it is always interesting when a decision you were expecting actually happens. In a sense, I'm pleased and excited, because it is the only way to compete and provide an open source solution in the MID market and, why not, somehow piggyback on the desktop market.

While Android could have been a possible choice, choosing it would have constrained Ubuntu/Canonical to... follow innovation rather than drive it. Also, Android has some issues, especially if you value the open source development model where all the doors are open for the community to join and participate.


All in all, my only reservation is about the timing: if Wayland is ready one year from now, it means a 2-4 year delay compared to, let's say, Google and Android, and even more compared to Apple. In this context, the question is: is it too late?

Evolution and revolution are, and will be, needed for Open Source to remain relevant. This is particularly true in a disruptive market like the MID market. The announcement was certainly a great way to launch one. History will tell whether it will succeed: I really hope it will, and I really encourage the Open Source community to see the big picture out there, regroup and, ultimately, contribute as much as possible to Wayland and Unity in order to make them first-class open source citizens able to run on any MID device and on any desktop.

Wednesday, October 27, 2010

Why Unity is good for the future of Ubuntu, Gnome, Canonical, etc.

Desktop and window managers

A few months ago, I blogged about "the end of the (Linux) desktop as we know it." I will not blatantly repost that entry, but I will draw some conclusions linked to the recent adoption of Unity as the default Ubuntu desktop.

My conclusion was the following: "At a certain level, one can say that the battle is already lost: the current desktop environments cannot really fight this war, as they don't own the key technology: the browser. As a consequence, the risk for them (Gnome, KDE, etc.) is to become a tool that launches a browser. A (relatively) simple tool that can easily be changed with almost no user impact..."

The recent announcement at UDS confirms that this road is the one chosen by Ubuntu. Now the questions are why, and what the benefits are for the key players here: Canonical, Ubuntu, Gnome and, of course, the users.

Mobile Internet Devices (MID) = application delivery

The new hardware platforms (most of them ARM-based and touch-screen based, with small screens and no keyboards) rely extensively on cloud/web-based applications and deploy specific, small-footprint applications on every connected MID.

The "desktop" on those platforms is very different from the one on your regular  "old-school" computer : it consists mainly of giant dock with widgets and your most used apps and a task-bar that informs you about communications (tweet, email, voice mail, etc.) and the MID status (wifi, phone, battery, etc.).

Most regular desktop applications are barely usable on such a device: do you think the OpenOffice user experience will be great on a 640x480 screen with no keyboard? All those applications are now somewhat "legacy" and, given the specific user interaction with the MID and the MID's capabilities (touch screen, multi-touch, accelerometer, low computing power, etc.), cannot deliver a great user experience on those devices. Even the most "cross-platform" software, the browser, is specific to the MID, and every major player rolled out a specific version (lighter, snappier, etc.) for those platforms.

So can we use the legacy applications? Yes. Will they succeed "as is" on those MIDs? I don't think so.

Can a MID platform succeed on the desktop?

Well, this is the interesting question. Ubuntu/Canonical decided to bet on it. In fact, instead of synchronizing your desktop with your MID, why not consider your desktop as... a very large MID?

Apple did it the other way around, and Microsoft as well, but they both come from the old desktop world. Google went from Android for MIDs to a hypothetical Google OS that will certainly be very similar to your Android experience.

Ubuntu/Canonical is betting that users will follow them from the MID market to the desktop market. This is an interesting challenge and a really disruptive choice: break compatibility with the past and embrace this new way of delivering applications.

Will it succeed? Only time, users and the market can tell ;-)

Impact on key players: Gnome, Ubuntu/Debian, Canonical, users

Well, as I announced in my previous post, the legacy of the existing desktop environments is too cumbersome to carry into this new, trendy market. Nokia (Qt, Symbian, MeeGo), Google (Android), Apple and Canonical came to the same conclusion: they cannot build on top of those legacy windowing environments (most of the time because those environments have to serve a community of developers and users that cares only about the "old" desktop model), so they had to start from scratch for the user interface. However, they kept some very useful and precious components like the kernel, the basic OS, etc.

In this context, Ubuntu/Canonical is more conservative than Apple and Google, which decided to dump X altogether. I'm not sure the X environment is fit for next-millennium challenges: once again, time will tell.

Gnome: I think this is great news. Leveraging the existing tools, libraries and applications, a new "shell" can be developed that will be perfectly suited for MIDs. Thus, I can imagine innovation flowing more freely from desktop to MID and vice versa. Will Gnome developers embrace this change? Will it provoke a community meltdown? This is more an ego risk than a technological or business one. FLOSS is famous for its ego wars, and this may be the greatest risk for this key player.

Ubuntu/Debian: well, this really opens another market for the distribution, a strong differentiator compared to Red Hat and Novell. The competition in this market is called Android. Not an easy one to take on...

Canonical: as far as I know, the OEM division will have another great product to sell! Just look at the sales figures for Apple and Android devices. This looks very interesting to me. As the third or fourth player (after Apple, RIM, Google, ...), becoming the leader in this market will be a real challenge. Two very closed platforms, two very open platforms. A huge and very fast-growing market.

Users: choice is always beneficial to some degree. As the default desktop environment changes, we will see the adoption rate, but I expect it to be quite high in the Ubuntu community. This is a well-known tactic (anyone remember Microsoft embedding IE into the desktop?) that has worked well in the past. Also, it will help "convergence", namely the unification of your desktop and MID environments... through the cloud services that will be offered on this platform (music, more content, storage, contacts, preferences, ...).

Conclusion

All in all, I think this is great news for the future of Linux and all the parties involved (Gnome, Ubuntu/Debian, Canonical and Mr. User). My only concerns are a possible ego war within the Gnome community and the fact that this will "only" be a tertiary platform in terms of application and content delivery: RIM seems to be in trouble, Apple is the clear gorilla and Android the strong challenger. Will the great marketing and community/viral effect of Ubuntu be able to change the race results?

Sunday, October 17, 2010

Airline companies should reward on-time passengers and good customers...

Just another upgrade story

I just flew with US Airways and I would like to relate an interesting story that happened to me and that is, I think, shared by most of the airline industry.

I was late for various reasons:
  • Somehow, my Nexus card was flagged as "deactivated" and I spent half an hour at customs instead of my usual 2 minutes when Nexus works as expected (+30 minutes).
  • The airport parking was full (because of work being done on the parking lot) and I had to use the shuttle parking (+10 minutes).
  • Security kept finding a "large metallic item" in my bag, which was emptied and scanned 8 times (IIRC). It looks like my keys caused this (I have used the same backpack and the same keys on more than 50 flights without any problem!) (+30 minutes).

During my security adventure, I could see the departure gate emptying and every passenger getting on board. I was not able to hear the announcements, but when I finally presented myself at the gate, the lady told me "too late, you were supposed to be on board 10 minutes ago, your seat has been re-assigned". Been there, done that, so I remained calm and asked the only meaningful question I could think of: "are you sure the plane is at full capacity?".

The lady recounted the boarding passes (by hand, not using any computer, which I found funny and slightly disturbing) and then visited the plane.
In the end, she came back in a hurry and told me I was lucky: some seats were available... in first class.

Another happy first-class passenger... but wait: is this ethical?

Of course, I was happy with the experience, and I'm writing this post from a very comfortable seat with first-class service, reflecting on this particular fact from a marketing and ethical point of view.

I'm not a bad customer, as I fly quite often, and from a business point of view it makes sense to give me some reward. However, the decision to upgrade me was not at all based on this information, but on the sole fact that I was a bad customer: I was late for the flight.

How come I was rewarded for bad behaviour? Does it encourage me to show up early?

My advice to airline companies

I'm sure that passengers who arrive early and are good customers are the ones you want to reward, not the ones who are randomly late!

Please encourage ethical behaviour and reward
  • your on-time passengers
  • your good customers

In 2010, this should be an easy function to implement in your CRM: as long as it does not delay the flight's departure, moving a couple of deserving passengers up and then assigning the remaining seats to the late passengers seems far more ethical than assigning the remaining (and sometimes excellent) seats to the latest passenger...
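Here is a rough sketch of the kind of rule I have in mind (Python, purely illustrative: the passenger fields and the "good customer" criterion are my own assumptions, not any airline's actual CRM logic):

    from dataclasses import dataclass

    @dataclass
    class Passenger:
        name: str
        checked_in_on_time: bool
        flights_last_year: int  # crude "good customer" proxy

    def pick_upgrades(passengers, free_first_class_seats):
        """Give free first-class seats to on-time, loyal passengers first,
        instead of to whoever happens to show up last at the gate."""
        candidates = [p for p in passengers if p.checked_in_on_time]
        # Reward loyalty: most frequent flyers first.
        candidates.sort(key=lambda p: p.flights_last_year, reverse=True)
        return candidates[:free_first_class_seats]

    if __name__ == "__main__":
        passengers = [
            Passenger("early frequent flyer", True, 60),
            Passenger("early occasional flyer", True, 2),
            Passenger("late frequent flyer", False, 80),
        ]
        for p in pick_upgrades(passengers, 1):
            print("Upgrade:", p.name)  # the early frequent flyer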

Tuesday, October 5, 2010

How to choose the right cloud for your needs.


The cloud: basic definition

The cloud is very trendy these days. It can be seen as a marketing tool to somehow revamp the Software as a Service industry. This of course also evolves from simple "SaaS" into different levels of service, namely Infrastructure as a Service (think Amazon EC2, etc.), Platform as a Service (think Google App Engine, etc.) and of course the good old "Software as a Service" (think SugarCRM, etc.).

Given the broad definition of the cloud, almost any IT company can say "this is what we've been doing since 200_" (insert your favourite digit here!).

When trying to sell these funny things, vendors encounter a major difficulty: many corporations/organizations are used to the so-called "firewall principle" and don't trust anything outside their firewall for critical applications, and also because of serious red tape (our internal policy prevents us from ..., we cannot do this because of ..., etc.).

One can argue that security and IT management are way better at the major cloud providers (Amazon, Google, etc.) than they are for most of us, so-called "small scale" organizations (compared to 1 million+ servers, many organizations are "small scale" by this definition!). Even if this is true from a technical and process standpoint, it does nothing to ease the pain and allow the cloud to be used by these organizations.

Ladies and gentlemen: here comes the private cloud!

In the past, vendors adapted their marketing and sales pitch by creating a new marketing term: the so-called "private cloud", which refers to a re-branded version of the "public cloud" offered by major hosting providers (Rackspace, etc.) and major Web 2.0+ players (Amazon, Google).

In a way, the private cloud is the vendors' answer for somehow respecting the "behind the firewall" rule. It provides access to a cloud platform inside your organization. It also implicitly solves the "privacy/security" issue with a saying that says it all: private is private, after all!

I believe this is highly misleading because it implies that everything on the public cloud is... public. This is clearly not the case. Even if any given "public" cloud provider is so-called "multi-tenant" (meaning that different customers can share the same hardware for a period of time), if you use secure protocols and encryption you can be pretty certain that everything you send is private and seen only by you. The only added risk is that of the virtualization layer itself: are all virtual machines really isolated when sharing the same hardware?

Most of the time, that choice has already been made: most of us have already used virtualized servers in a production environment.

For the "very secure" few that don't allow virtualization for security reasons , this is not the subject of this post ;-) therefor
e, you have two choices : Change your policy or stay away from virtualization and cloud technology...

My proposal is to drop the "public/private" cloud definition in favour of a more precise one that defines where the cloud is located in terms of network topology, whether the cloud is shared or dedicated, and who manages the cloud (you or a cloud provider).


In front of your firewall or not?
A more precise classification asks: is the cloud behind your firewall or in front of it?

  • An internal cloud is behind your firewall
  • An external cloud is "in front" of your firewall, on the Internet
Managed by yourself or outsourced?

If you manage your cloud, let's call this a self-managed cloud.

If you outsource the management of your cloud, let's call this an outsourced cloud.

Is it dedicated or shared?

A dedicated cloud is used only by you. You are the sole user of the resources (servers, network equipment, storage, etc.) that provide the cloud services.

A shared cloud is multi-tenant: several users share resources (CPU, RAM, storage space, etc.).

Oftentimes, dedicated means more expensive: as the sole user of the infrastructure providing the cloud, you have to assume all the costs.

However, given the "pay as you go" principle of external shared clouds, it can be less expensive to run your own cloud for your organization's regular load and to use the public cloud only for specific workloads or peaks.
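Before looking at the three major combinations below, here is a tiny sketch (Python, names entirely mine) of this taxonomy as a data model, just to make the three axes explicit:

    from dataclasses import dataclass
    from enum import Enum

    class Topology(Enum):      # behind or in front of your firewall
        INTERNAL = "internal"
        EXTERNAL = "external"

    class Management(Enum):    # who operates the cloud
        SELF_MANAGED = "self-managed"
        OUTSOURCED = "outsourced"

    class Tenancy(Enum):       # sole user of the resources, or not
        DEDICATED = "dedicated"
        SHARED = "shared"

    @dataclass
    class CloudDeployment:
        topology: Topology
        management: Management
        tenancy: Tenancy

        def describe(self):
            return "%s %s %s cloud" % (
                self.topology.value, self.tenancy.value, self.management.value)

    # The "classical" cloud (Google, Amazon, Rackspace, ...):
    classic = CloudDeployment(Topology.EXTERNAL, Management.OUTSOURCED, Tenancy.SHARED)
    # The easiest sell for a conservative organization:
    conservative = CloudDeployment(Topology.INTERNAL, Management.SELF_MANAGED, Tenancy.DEDICATED)

    print(classic.describe())       # external shared outsourced cloud
    print(conservative.describe())  # internal dedicated self-managed cloud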


3 major types of clouds:

 
Internal dedicated cloud: self-managed or outsourced

This is the closest to virtualization as we know it. Most of the time, you own the data and the servers and need to manage everything: the hardware, the cloud software and the operating system. The main difference is that you use a cloud stack (open source or not...); consequently, you need to standardize your workloads.

The main advantage of doing this is that you can then benefit from an external cloud (self-managed or outsourced) and tap into external resources when you need them. It also allows the IT department to offer a common platform to each internal customer. This is especially interesting for large organizations and can be much more economical than a public cloud.

Instead of a "pay as you go" way of billing, you can fit this type of cloud into a classic budget : capital investment for the initial deployment, fixed price for the management of the cloud. In practice, one can argue that this is not really "the cloud" as you still have to manage "scarcity" (i.e.: a fixed amount of resources) instead of ubiquity (a somehow "infinite" computing power, bandwidth and storage capacity). It is said that constraint generates innovation , therefore this type of constraint is not necessarily bad per say .


If you self-manage your cloud, you are in charge of the whole stack and the regular IT processes: provisioning, capacity planning, etc. You can work with a fixed budget: the capital budget needed plus a fixed operating budget (power, staff, etc.). You also have to manage scarcity (the actual amount of resources you have).

If you outsource the management, you are a user of your own cloud and can decide to scale up based on pre-agreed fees, per server or per storage node.

This model fits nicely into an existing infrastructure and acquisition process: you can really benefit from it, because you are able to leverage the "standardization" of workload management while remaining completely in sync with your organization's acquisition policy and budget.


External dedicated cloud: self-managed or outsourced

External dedicated is an extension of the preceding category: you use a hosting provider to host your own cloud. In this case, the cloud is clearly in front of your firewall. On the other hand, the cloud is not shared: the servers are used by you and no one else. You can even re-sell your spare cycles ;-)
You can self-manage the cloud or outsource its management.
This model requires only minor updates to your security policy. It is very similar to hosting your public services at a data centre, so it can become an easy sell and an easy way to start implementing a cloud strategy.

External shared and outsourced
This is the "classical cloud", the one offered by Google, Amazon, Rackspace, etc. This infrastructure management is outsourced to the cloud provider and the cloud is therefore multi-tenant : various users share various physical resources.

Conclusion
We have defined different ways a cloud can be deployed in an organization, using more precise terms to describe where the cloud is located (topology), who manages it (you or somebody else) and whether the cloud's resources are shared or dedicated.

These definitions clearly overlap with the traditional, trendy "public/private" cloud marketing approach, and this is why the public/private vocabulary seems inappropriate: it lacks precision.

In terms of IT governance and acquisition processes, the internal dedicated self-managed cloud is the easiest to sell and deploy, even for a very conservative organization. The benefits of the cloud can be very significant even in this context, especially if you have developers and a need for development, staging and production environments.

Of course, to really tap into the ubiquity of the cloud and its potentially unlimited (but billed!) resources, one will need to extend one's comfort zone and somehow connect to an external, shared and outsourced cloud. This disruptive innovation is only available to organizations that have completely revamped their development process, their security policy and their acquisition mechanisms.

Start-ups don't have any legacy and can start right away without any capital expenditure. That said, the cloud and its operational costs are directly based on usage: this is not something every organization is ready to deal with.

Other organizations will need to compromise, and can do so by using any of the cloud deployment models mentioned above, which provide various levels of flexibility in terms of:
  • capital vs. operational budget (dedicated -> shared)
  • outsourcing or not (self-managed -> outsourced)
  • firewall position (internal -> external)
Of course, another important parameter is whether the cloud stack is free/open source, neo-proprietary or proprietary. That is for another entry ;-)

Wednesday, September 29, 2010

Neo-proprietary tactic considered harmful to open source

Definitions

The freemium model is a marketing model used by lots of service-based companies. The principle is to offer a product "for free" (supported with or without advertising) and to offer value-added services to "premium" customers who pay a fee. It is very popular among startups and recent technology companies: Skype, LinkedIn, Remember The Milk, etc.

One could even argue that the most successful open source company (Red Hat) is very close to this model: they offer a great product for free (the Red Hat Linux distribution) and monetize services for only a small percentage of their users.

Fauxpen source has several definitions, even if it is not yet a widely used term. Some synonyms are "open core" or "neo-proprietary". Neo-proprietary is the term I will use in this post, as it shares no sound or part with "Open Source".

So, what happens when you marry the two concepts? Well, that is the subject of this post.


The various dimensions of Open Source for neo-proprietary + freemium companies

Open source has several dimensions, not only an economic/marketing one: a healthy project has all of these aspects covered: technical, political, philosophical, economic, social and ethical. My point is that, more and more, companies are using open source only for its economic value: as a cheap and fast way to build a community of users (free) and, more importantly, customers (paying) that allows them to replicate the business model of a traditional software vendor.

In some respects, the market expects this exact behaviour: entrepreneurs are now advised by investors/VCs/lawyers to make sure that everything is covered, from the contributor agreement to the free and paid user agreements.

The (not so) hidden plan is simply to have an exit strategy based on an acquisition, so that any number of closed source/traditional companies can integrate or use the intellectual property (lawyer language here!).

Today, being open source is seen as the fastest track to building a freemium (a.k.a. potential customer) and premium (a.k.a. real customer) "community" (much sexier than "customer base").

I will argue here that this is not really open source, and that it hurts the open source brand as a whole: those startups (and there are a lot of them out there!) do not take the time to develop the following dimensions:
  • Ethical. For contributors: "yeah, yeah, open source is very important for us: please sign this contributor agreement that allows us to re-license all your contributions as we see fit, forever", etc.
  • Political. Most of the time, there is no political structure, only a business structure that dictates the politics based on short-term goals or on the exit strategy (the exit strategy is not communicated to employees or customers, but the whole structure is designed to maximize the value of the product).
  • Technical. Well, most of the time open source is selected for technical excellence. In those companies, an open source technology is selected based on licence flexibility/compatibility, not on technical excellence.
  • Social. For products, the goal is to have paying users, not to build a community of like-minded people. The same goes for employees and developers/contributors. The social aspect is simply a means to maximize the profit when the company is sold: it is a honeypot to attract paying customers.
  • Philosophical. Well, long-term goals do not match very well with VCs and an aggressive exit strategy.

Of course, the economic (and legal!) aspects are very well developed: they are the main (only?) concern of the shareholders (entrepreneurs, management, VCs, etc.) and the only things measured (burn rate, market value, conversion rate from free to premium customer, etc.).

Why is this harmful for Open Source?

Well, let's follow a customer's experience in this context. He knows a bit about open source and finds the model interesting. He likes the idea of a company that can provide value with a service offering or an add-on offering.

He tries the open source/freemium edition and later on decides to purchase the whole package. Then the company is acquired by Orache Corp. and everything is turned upside down: the "product" can become proprietary again, the price can be doubled (or worse), or the terms of the service agreement can be completely changed.

For this customer and for the market, there is no difference between this and any proprietary software: the community is, most of the time, destroyed by the acquisition, and so is the business ecosystem; the long-term strategy of the project can be completely changed and nobody has a say about it: everything has been carefully planned by the buyer.

The customer has been deceived: he trusted a brand (Open Source) but has been manipulated into something else (in this example, becoming the customer of proprietary software). There is no difference between this story and the numerous stories of dissatisfied customers held hostage by the mergers and acquisitions that have occurred in the IT industry since its inception.

In fact, the mistake made by those companies is to reduce open source to technical, economic and legal questions only, and not to consider the complete scope of Open Source, namely the ethical, political, social and philosophical questions.

I learned today at the open source think tank in Paris that 90% of companies get acquired: this is the most frequent exit strategy and there is nothing wrong with that. But being "Open Source" should be more than an exit strategy for a start-up, and labelling those companies/projects "Open Source" is misleading for the whole open source community.
How can we, as the Open Source community, make sure that our brand still carries feel-good and positive values?

I think that properly labelling the "neo-proprietary" companies would be very helpful: even if they respect the open source licence, they clearly don't intend to respect the spirit of open source. In order to do so, proper criteria accepted by the community have to be defined, and the term has to be used extensively by the community.

Sunday, August 1, 2010

Ironman Lake Placid 2010 : I'm now an Ironman ;-)

Effort: a few years before the race


This is a personal overview of a great sporting event. I'm proud of this particular accomplishment. It confirms that effort generates success. I think this is very easy to generalize to any aspect of our lives: business, personal, family, etc. Our TV culture, however, puts a lot of emphasis on results. Results only occur because of effort, but effort is less attractive for prime time. Anyway, skip this section if you are only interested in the race; read on if you want to know more about effort ;-)

I came to triathlon because of (and thanks to) a friend and co-worker (Nando). After having played rugby for years (flanker, number 7), I had some repetitive left-shoulder injuries and had to visit the surgeon. The operation went fine but... no more rugby. I ran a little, liked to bike, and my physiotherapist told me to swim in order to help and speed up my rehab. Then Fernando hooked me and told me: "Well, Ben, you run a little, bike a little and swim a little. You should try a triathlon!" (Fernando had already completed several Ironmans at this time.) Well, I think I weighed 44 pounds/22 kg more then than I do now: starting a company and a family tends to give this kind of result if we don't take care of ourselves...

First triathlon in 2005, lots of great reading (the Triathlete's Training Bible, Going Long and others) and self-coaching for a few years. Each year, I moved up a distance: from sprint to Olympic (2006), then from Olympic to half-iron (2007), then from half-iron to iron distance (2008). In the last two years, I joined the running club, the cycling club and the masters swim club in Sherbrooke in order to improve and train with partners.

My first iron-distance race was the Esprit Triathlon in Montreal in 2008 (11:38). I raced it again in 2009, but with less preparation and a more "last-minute" decision to do it, and it was harder (the weather was very hot: 28 Celsius) and slower (12:12). My training buddy Jocelyn decided to do Lake Placid, and I joined him to volunteer at the race in 2009 (we were at run aid station number one, just before the arrival in town).

Being a volunteer was really an eye-opener: helping others accomplish something and perform is a really cool thing to do. Anyway, I pursued my training with the different clubs and used a training plan from Endurance Nation (http://endurancenation.us). I'm a very active person (CEO of a young company of 30+ people, father of three, travelling quite often) and I was very limited in the number of hours I could dedicate to triathlon. My personal philosophy is also to do (at least ;-) one hour of training per day. On average this gave 7 hours/week during autumn/winter and more during summer (12 hours/week max). This is very similar to what Endurance Nation proposes: Ironman training for people with a busy life ;-)

During the winter I did a lot of running, power-meter work indoors, swimming (three times a week) and strength training (core and plyometrics). When the good days arrived (I live in Quebec: lots of snow!), much more running, much more biking and minimal swimming. No marathon this year: I ran several half-marathons and trained with a focus on speedwork, intensity and bricks.


Stress? A few days before the race

I'm a very active person and being on holiday is always funny: I tend to give myself challenges and "jobs" that I don't have time to do during my regular year. This year I was searching for a fun family recreational vehicle (RV). I found my golden RV a few hours before leaving: I organized the sale from Lake Placid and went back to Quebec for the insurance and technical inspection. I guess this was my way of not being over-stressed by "doing nothing" ;-)

It worked well, as far as I know (I have to check with my kids and wife ;-), and I was stress-free for most of the week prior to the race. We had some good family time: camping under the same tent and being together 24 hours a day is always an interesting experience. The kids love camping and they found all types of activities. We were less than one mile from Lake Placid's main street and we used the free shuttle from time to time.

My "taper" was really a simple one : I swam the half distance (1.9 km) daily until Friday. Two time at race time (7 AM) using my race day stuff. I biked once with Jason (he was on the same campground) and we did maybe 40 km along the run race. Did some interval runs (high intensity for 30''/2') Tuesday and Thursday.

I was not particularly nervous, even when Marc, another buddy of mine doing the race, called me Thursday evening: "Ben, you don't have a bib number, something is wrong with your registration". And so it was: no bib number, not on the "official" athlete list, but on the "confirmed" list nonetheless. Well, there was nothing I could do until Friday anyway. So I asked Marc to contact the organizers (I had no Internet) and he did, coming back with the following advice: "Go to the solutions table".

Friday was a bit more stressful because of this situation: anyway, Jocelyn and I went to pick up our athlete kits. I went straight to the solutions table and explained my problem. Some long seconds that felt like minutes/hours: "Sorry, I cannot find you on the athlete list!". I said: well, I have the active.com email confirmation, so maybe you can double-check and look at your "other lists".

So she did. After a few seconds (minutes/hours ;-): "Oh! I found you, you withdrew from the race last week".
Me: "Well, I'm not aware of this. Can you double-check and show me some proof?". So she did, and showed me an email from someone from Quebec with a very similar first and last name (I'm "Benoit des Ligneris" and he was "Benoit desjxxxx"). I stayed very calm and told the lady, "Well, that's not my name".

She double-checked and seemed as relieved as I was: "Don't worry, we have extra kits and bibs for this kind of problem". So I took a seat at the nearby table and started to fill in the paperwork (disclaimer, etc.). Then she told me, "Well, I have to charge you again because we refunded you". Me: "Well, I checked my Visa transaction record up to last week, and I can assure you that no refund has come into my Visa account ;-)". And indeed, I had been cancelled, but the right guy had been refunded...

Anyway, after filling in the paperwork, I was the proud owner of bib 3018, a ChampionChip and co. All in all, it was very quick: I finished my registration a few minutes before Jocelyn. I was, at this point, a very happy camper ;-).

Saturday: no action for me. We prepared our bags (Jocelyn, Marcus and I), prepared our bikes (did a small 100 m road test) and brought everything to the transition area. I went to bed at 8 PM, no nap, no coffee.

Race day!


My nutrition preparation has been the same for years now: mainly cut and paste from Joe Friel's "Triathlete's Training Bible". I eat 1500-2000 calories 4.5 hours before the race (depending on how my stomach feels and on the temperature). In this particular case, it meant waking up at 2:30 AM in the tent, trying not to wake anyone up.

My meal (in this order):
  • Cereal + soy milk + banana: 200 + 120 + 100 = 420 calories
  • Two small fresh bread buns (from the Lake Placid bakery: great bakery!) with honey (lots of honey ;-): 400 calories
  • Ensure: 3 x 250 = 750 calories
  • Total: 1570 calories (approx.)
As I had prepared everything before going to sleep, I was relatively quiet: I only woke up my poor wife, not the kids.

Wake-up was planned for 5:30 AM, but my eyes were open at 5:05 AM. So I got up and started to pack my stuff. The whole family was awake by then: a long day for everybody, I guess!

We left on foot for the Oval at 5:50 AM and arrived at 6:10. A little late for my taste, but still OK I guess. We met briefly with Pierre, a friend and training buddy of Marc's from Montréal, then we somehow lost Marc. I added a towel to my T1 bag and Jocelyn and I went to drop off our run special needs bags. However, the location for the run special needs bags was quite far from the swim start, and going barefoot is not ideal either, given the large number of people (athletes and spectators alike). Anyway, I asked a spectator who was already carrying a special needs bag and he gladly accepted to deliver our bags: thank you very much.

We then went near the beach to put on our wetsuits. A few minutes later (6:40) we got in line to cross the portal and be officially counted for the upcoming mass swim start. Just in time for the pro start at 6:50. Not very impressive: we did not see much. Some waiting time, during which I thought about my race strategy (more on this later). We waited until 6:55 before going into the water: let some water into the wetsuit and test the goggles.

Swim leg: start & first lap

Photo & Video Sharing by SmugMugWe then went in the exterior side of the starting line : it is about 100m large but 2700 people where on the "starting block" so even if this is large, it creates a very high density of people in the water. I did some open water swims but the largest one has 900 people and it was a beach start so I guess this was the first time I saw such a human density of population in any water. Anyway, we took position on the exterior side of "the line" (this is a yellow cable that run underwater and hold the buoy in line) as advised during the athlete meeting.

Anyway, it looks bad and, somehow, it is bad.



I wanted to start fast, say the first 300 meters, and then find a sweet spot and my own rhythm, ideally drafting some fellow swimmers, mainly to avoid sighting. Also, as advised, I wanted to swim "wide", far away from the line between the buoys. My face got kicked several times, mainly by swimmers coming from the side, and I had to re-seat my goggles 4 times. I guess it felt like a long time, but it was no more than half of the first leg of the swim, so no more than 450 meters. I was pushed to the inside and had to work somewhat hard to go wider.

My swim alternated between what I will call pockets of calm and peace (some space between swimmers, a kind of peaceful hole) and, a few minutes later, a fight zone where several swimmers converged with slightly different speeds and directions.

Anyway, after the first turn, I was able to draft someone properly: my technique is simple, I stay on the right or left side of the swimmer and avoid contact most of the time. I also "protect" my swim buddy from other swimmers by making sure nobody gets between us. I switched a couple of times, as I was able to swim a little faster than my unaware swim buddy. First lap in 34 minutes. A great time for me!

Swim: second lap

Photo & Video Sharing by SmugMugThe second lap was very similar to the first one except that it was less crowded and, as a consequence, I was able to keep my goggles ;-) Almost no contact this time : one or two and I guess it was fast swimmers going thru the pack. Anyway, I went wide and was able to draft well (as a consequence, sight 2 or 3 time for the 900 meters) until the buoy. Intensity was not so high for me but I knew it was going to be a long day so I was OK with it. At the buoy, marking the fact that I already did 3/4 of the swim leg, I decided to speed up a bit. I was then able to reach "the line" and it was convenient to do so. Instead of drafting, I increase my speed and went almost "all in" for the last 400 meters. Great feeling.


As my buddy Jocelyn says about Ironman: you are very happy when the swim leg ends, then getting off the bike is a true blessing, and then, of course, crossing the line and doing nothing feels great!




1:10:45 was what I read on the screen when exiting the water; 1:10:50 is my official time. I was happy with the result: 5 minutes faster than planned. If the day could continue this way... so be it ;-)


Transition 1 (T1)

I used a "peeler" : a volunteer (thank you !) that remove the wetsuit for you. Immediately after going out of the water I removed my wetsuit arms and then, I sit down and a volunteer ("peeler") removed my wetsuit : fast and efficient !

The Oval is quite far from the swim exit and you have to run to get your bike. There is a huge crowd during the transition and it is very difficult to go slowly ;-) All the spectators are cheering for you and everybody is running fast. I reached the Oval a little overwhelmed and not very focused. There was some confusion in the bag handling, maybe because of my non-standard bib number? Anyway, I finally grabbed my bag, walked to the tent and changed.

It was my first Ironman race and I was not used to the "everything in a bag" system. In all the other races I had done, the transition stuff sits next to the bike. I changed completely (not quick, but comfortable): bike bib, bike jersey, etc. Putting on the compression socks was challenging: next year I will use another system (compression sleeves under the wetsuit).

Then I went for my bike. My number was called, but once again I guess 3018 was not where the volunteers expected it. I grabbed it myself and that was the end of T1.

Anyway, a 10-minute transition can certainly be improved. I also forgot to put on sunscreen!

Bike course: lap 1

Happy to be riding, I was careful not to overdo it on the first lap in order to keep energy for the second. I got to a somewhat higher intensity than I had planned, hitting the 150 BPM range on my heart rate monitor more often than not during climbs and even on the flat sections.

I passed by the family: they were located 1.5 km after the town, before the horse grounds. I was very happy to see them and very motivated to see them again ;-)

During the flat section, I ended up almost always with the same group of people, while not drafting. In fact, the lead was quite disputed and everybody had different strengths: flats, ups, downs, climbs and descents. It was sometimes difficult to respect the rules, as so many people were on the road that day. Anyway, going from 896th out of the water to 299th on the bike means that I passed a lot of people, especially during the first loop.

I broke away from most of my "same speed" group during the climb from Jay to Wilmington. I had practised this a lot in training and I was very careful to keep a high cadence.

I had all my solid nutrition with me, two bottles at the start and 5 gels. During the first lap, I ate solids (Clif Bars are yummy!) and bananas (handed out at every aid station). I was quite satisfied with my catching performance: I got most of what I wanted at the aid stations, and I think I will go lighter next year.
My goal was to fuel the Ironman properly and still have energy for the marathon.



We had some showers at times and the weather was sometimes cool (shade + water), generally overcast with some patches of sun here and there. A great day for doing an Ironman ;-)


I had never practised the end of Papa Bear, and it keeps climbing a little before coming back to the transition zone, but cresting Papa Bear was very pleasant and felt like the Tour de France: lots of people playing music and cheering for all the athletes. It's difficult not to rush when everybody is clapping and cheering for you, so I guess I went fast until the start of the second lap.

I was impressed by my speed during the first lap (33.54 km/h or 21 mph on average) and decided not to push it on the second lap. When I did my recon (160 km of the bike leg), I averaged 30.1 km/h (it is true that I wanted to take it slow and easy in order to learn the course, and to be able to run the Lake Placid half-marathon the day after ;-), and that was my goal for race day.

Bike course: lap 2

The uphill out of Lake Placid was OK; I saw the family again and was somewhat rejuvenated. I did not overdo it until the big downhill.

Well, I went a little slower, both in terms of heart rate and perceived effort. I ate solids for maybe one hour and then stopped (besides bananas), as I know that is all my stomach can take.

So I started eating my gels and the ones from the organization. And an accident happened! I'm used to Carb-Boom gels, which are somewhat more "solid" than the PowerGels that were handed out. So the first time I grabbed one and tried to eat it, half of the gel ended up flying through the air and landing on my watch (in the middle of my aerobars) and my bike. I was a little puzzled and took note for the next time. I ate most of what was on my watch and believed that was the end of it...


In the flat section, I felt that something was wrong: I had difficulty adjusting my position on the saddle and some delicate parts of my anatomy started to feel the heat. Anyway, only 60 km to go, and I guess I was busy putting out watts in order to stay with my new speed group ;-)

Ausable Forks and a U-turn, then back to Jay and the climb to Wilmington. Once again, I passed most of my group here. The last 20 km were... more painful than I remembered ;-) Maybe because I went harder, but the series of uphills and downhills with the steady climbing got the better of me.

Also, my delicate parts really started to hurt now, and in order to move on my saddle I somehow had to "unstick" my bib and shift. Once again, a first for me, as I had never had any problem with this in the past. I guess I was too focused to really analyze things: I really wanted to maintain a high cadence and make sure my nutrition was OK.

Then came the bears: Mama, Teddy and Papa. Everybody was tired, me included. Spinning was painful now and hurt more than usual, and I was not tempted to push harder, as I knew that a marathon was waiting for me. So I spun easy and got through this part. Once Papa Bear and all the cheering were behind me, I really took it easy (>90 RPM) to prepare my legs for the run. PowerGel, water, and that's the transition...

Average speed was 20.1 mph or 32.2 km/h for the second lap.

Overall, my average speed was 20.14 mph or 32.2 km/h, so I guess something is wrong with the data: http://ironman.com/events/ironman/lakeplacid/?show=tracker&rid=301&year=2010#axzz0vGlxfHoI

Bike time: 112 mi. in 5:33:43, 20.14 mph, 299th overall and 64th in my age group (30-35). Feeling good ;-)

Nutrition (approximate):
  • 4 Clif Bars = 1000 Cal
  • 8 half-bananas = 8 x 50 = 400 Cal
  • 5 gels = 5 x 90 = 450 Cal
  • 5 bottles of sports drink = 5 x 150 = 750 Cal

Total = 2600 Cal. Over 5.5 hours, that is about 472 Cal/hour. Not too bad, and no stomach problems.

I was more focused for this transition than after the swim leg: I took the time to mentally rehearse the different steps needed. I removed my Garmin Forerunner from the bike and put it in my bike jersey. At this point, my crotch really hurt on the right side from unwanted heat and friction, and I had no clue why. Stay tuned for more...

Transition bike to run: T2

Several volunteers asked us to slow down and I did. Then an official showed us the dismount line: go past it and you are disqualified (DQ). What a joy to get off the bike! Really, this feels so good ;-) Of course, everybody is cheering for you and it really helps, but those first few steps are really enjoyable.

The bike park felt empty (I guess there were "only" 300 bikes compared to 1900 that morning!) and that is a good feeling ;-)

A volunteer grabbed my bike and I could run to get my run bag. I visited the toilet (no queue) and was really relieved afterwards: I had decided to hold it until the transition while on the bike, and it was almost an emergency ;-)

Then to the tent to change. The tent felt empty and everything was much more organized than during T1. I found a spot and a volunteer took care of everything for me: my dirty stuff in the old bag, preparing my shoes, cap, etc. And this time, I did not forget the sunscreen: I would be white for the whole run because of the cream, but... no sunburn ;-)

Run course: lap 1


My secret goal was to run a 3:30 marathon. I had run my other iron-distance marathons in 4:38 the first time and 4:58 the most recent time. I had really improved on the run leg, but I had never really been "comfortable" or even "in control" of my marathon during an iron-distance race. My personal best is 3:20 in a standalone marathon, so I guess 3:30 was an ambitious goal ;-) It means 5:00/km or 8:00/mile. The first six miles were OK, I guess.


The start of the run is a downhill section down to Main Street and I felt OK: I went really slow for the first mile and gradually increased my speed. My stomach felt OK. It was hot by then, but not that much I guess. As I lost my watches (more on this later), I don't have all my race splits. So I will only use the available statistics, but they fully match my feelings and memories ;-)

The first 10 km were OK and I had a good rhythm: 50:20 for the first 10k, which is a little more than 5 min/km, but I was on target. Then, heading back to town, I tried a "stupid" move: grabbing the electrolytes in the back pocket of my shorts. I cramped badly because I had to twist my body to reach for them, and my shoulders and arms were, let's say, somewhat rigid and hard to move. So I had to stretch and proceed carefully... and slowly. I jogged most of the time, I guess, until I passed the family (my youngest was not there anymore, sleeping I guess, and my second one was sleeping in the chair, but Caroline and my wife cheered for me). I drew a lot of energy from this, while another part of me was saying: me too, I want a nap ;-)

The uphill section to Main Street is as hard and painful as I remembered. Passing by the Oval when coming back into town is difficult, and the Lake Drive section is... long: the turnaround seemed infinitely far to me, and coming back to town felt much the same. It is hard to see people who are already finished when... you are only "enjoying" your first lap. Anyway, I was not pushing things and I did not feel great mentally. I guessed the next loop was going to be difficult. My mind started to drift somewhat, and that was the end of loop 1.

Run course: lap 2

Going down Main Street is not so pleasant: people cheer for you and all, but I was really trying to go slowly and relax in order to avoid shocks to my feet, so I tried to stay tall, keep a good leg turnover and relax my arms.

Once again, I was lucky to pass by the kids, and they cheered for me and showed me their wonderful signs ;-) That gave me some heart, and it was the start of my improvement in mental attitude: up to this point I had been jogging, but now I really wanted to race, and it felt good, so I really started to increase the leg turnover when going uphill. No cramps, and no more trying to grab my electrolytes: not worth the price ;-)


A pit stop was needed, and I went to the toilet once again. I guess it lowered my average speed, but it was worth the price ;-) I ate some pretzels instead of my electrolytes and here we are: the end of River Drive and back to town.


At this point, I knew that I would make it, whatever happened: nutrition was OK, only 10 km to go, and I decided to go "all in". I knew that my family would be waiting for me in the Oval and would not be there when I got back to town: that was the plan.

I was really pleased to meet Gervais, a running buddy: he ran a little with me and asked for news about his son (Micael) and Jocelyn. I told him I had seen Jocelyn and that he would arrive in maybe 20-30 minutes, as I had seen him on River Drive. I had not seen Micael, so I was not able to give him any news about him. We chatted for maybe 1 km and then he left me in order to support friends and family. This really fired me up for a strong finish, but I remembered that the turnaround on Lake Drive was very far, so I kept some energy for the last leg.

Benoit des Ligneris (well, the last name sounded more like "Lyyygnureeeess"), you are now an Ironman


The run to the turnaround on Lake Drive did not seem so long this time, and it was now time to go back to the Oval: the exit was waiting for me, as was the finish line of the Ironman. Feelings were all mixed up at this point and the crowd was really wonderful! I was happy and sad and, well, running fast: I still had some energy, it appeared! It somehow confirms the mental side of endurance, I guess.

Someone flew by me, but I did not feel like chasing him: I decided to savour the moment. Carpe diem, in a way. It looked like a dream come true and I was overwhelmed by joy and pride, I guess. That is the post-race analysis; at the time, arriving at the Oval simply felt good. The ambiance was wonderful. I just ran past the line and... I heard the famous "Benoit (I understood that one!) des (OK) something, you are now an Ironman".

My final time is 10:53:42 and my marathon time is 3:52. A bit short of the 3:30 goal, but not by that much. I wanted to go under 11 hours, so I guess this is great, and I'm very pleased with the result and the experience! A volunteer took back the ChampionChip and gave me a medal, a T-shirt and a cap.

I felt somewhat dizzy and a volunteer walked me to the medical tent. So be it, I thought, a small rest and that's it.

Medical Tent

My weight was OK (only 2 pounds less than my usual weight) and they gave me soup and sports drink to bring things back into balance. Once on the bed, I felt better, but my hands and feet were very cold. I put on my new shirt (dry) and new cap (dry as well) and removed my shoes. After a while, my hands and feet felt somewhat numb and were very cold. So I got a wonderful massage with camphor gel, and it brought the heat back into my shoulders, arms and hands, and the same for my legs and feet. The massage therapist mentioned that it could be linked to some thyroid problem, or simply to the fact that I was exhausted ;-) My father has the same type of symptoms and it has already happened to me, especially when swimming in cold water, so this is something I will check.

Anyway, I forgot my cap (2XU, white), my running top (TYR, white+blue, Ironman, no zip, two pockets) and my two watches (Suunto T6 and Garmin Forerunner 305) in the medical tent. Not found yet, so if you happen to have found them, please contact me !

After that I went out of the tent and ran right into my buddy Jocelyn ! We congratulated each other and I then looked for the family. I found them by luck : we had decided to grab our bikes and stuff and go to the campground when I saw them near the Oval. Everybody was reunited and very pleased about it. Hugging my family was a real pleasure, as I knew perfectly well that they had supported me for countless hours while I was training for this madness...

When I grabbed my bike, my seat was very sticky ... on the right side. And I finally understood a brutal truth : part of the gel had fallen on my seat and the stickiness caused the bib short to stick there. Of course, the friction was almost entirely transferred to my crotch and, well, it was painful for several days after that. I waited 5 days before trying my bike again and it was not a pleasant experience.

Lessons learned : for next year and for any athlete !
  • Start wider on the swim.
  • Two swim caps : first cap, then goggles, then the second (official) cap. It should help keep the goggles in place.
  • Put the bike clothes on under the wetsuit.
  • No compression socks : just compression sleeves under the wetsuit.
  • Go harder on the swim : I was mainly following this year; I think I would like to set my own rhythm next year.
  • If your seat feels sticky ... it is sticky : use water to wash the seat !! It can cost you dearly otherwise.
  • Only one bottle is OK to start the bike, if you make sure to grab one or two at the first aid station.
  • Keep your stuff with you, especially watches ;-)
  • Bring some heating ointment next time in case I'm cold !
  • Define precisely where the family will be at the finish line so we can be together sooner.


      Monday, May 24, 2010

      FLOSS for Medium Businesses : challenges and opportunities

      Context

      Last Friday, I gave a presentation to a group of medium businesses. The audience was about 20 IT directors of businesses with between 100 and 500 employees (medium businesses). A first presentation was given by another presenter about Open Source in general : basic principles, licences/freedom, ecosystem, business model. As a consequence, we could say that Free Libre and Open Source Software (FLOSS) had been introduced before my talk.

      Every organization in the room uses FLOSS. 90% of them ran some kind of FLOSS and were aware of it : Asterisk, Apache, a CMS, etc. The other 10% used some kind of appliance or third party (SaaS) with FLOSS inside.


      Initial presentation : Success stories of Open Source deployment

      My presentation was built around different business cases / success stories. In each case, the initial business goal was stated :
      • divide the total cost of ownership by 5 for a desktop project we are doing in China
      • a step-by-step migration to open source, starting with infrastructure/servers and moving higher up the stack, including OpenOffice and then the desktop
      • replace a legacy POS system for a restaurant franchise : distributed Linux thin clients using LTSP-Cluster
      • a manufacturing company that migrated its complete IT system : an Open Source ERP and Linux thin clients in the plant
      The reception from the audience was great and several questions/objections were raised. If you attended the conference, you'll notice that the form has been modified but, hopefully, not the content !

      Challenges for Open Source in the Medium Business market

      • Referral cases / business cases are more difficult to publicize. It is relatively easy to find information about large migrations in large organizations : with an efficient PR service and marketing department, they tend to "brag" about their Linux/FLOSS deployments. Mid-size companies don't, and it is much more difficult to find market evidence of FLOSS migrations. Forecasting firms (Gartner, etc.) do not analyze this market often...
      • An IT director mentioned "We have a very small IT team, it is not possible to learn new technologies". The financial crisis has stretched existing resources quite a bit and FLOSS, as a new technology to learn, is problematic. IMHO, this is more linked to the "exit cost" of any solution than to something specific to FLOSS : solutions in IT don't last forever. Who should pay for the exit cost : the initial technology or the new challenger ? I will certainly blog about this point later on, but I strongly encourage IT directors to integrate the exit cost into their initial technology purchase : this is good practice and will bring agility to your organization.
      • Another question : "How can we find expertise in FLOSS ?". As a matter of fact, the business model of Open Source companies was not clear enough, and even if the two companies that presented (including Revolution Linux) were able to offer consulting, services and third-level support to these businesses, they were not known beforehand. The existing suppliers do not support FLOSS : they are used to selling hardware, licences and services.

      Transition from a vendor market to a buyer market

      I think this was the biggest objection. It can be explained this way : "While I have vendors calling me all day long and pushing new products and services, nobody is marketing open source..."

      I think this is true : established companies with sales channels already open tend to rely heavily on interruption marketing to sell new products and new offerings to their existing customers. This is a well-known marketing fact : selling a new product to an existing customer costs 7 to 10 times less than selling the same product to a new customer.

      In a sense, I think that Open Source companies operate on a much leaner business model than the closed-source ones, and that marketing/sales is not in their DNA and is not their priority. On another level, Open Source companies can be seen as start-ups/immature companies compared to some of the players in the IT market that have existed for 20+ years with established sales, marketing and partner management departments.

      My answer was that Open Source is essentially a buyer market, not a vendor market. If you want to select open source software, you can certainly find between 10 and more than 1000 open source products, depending on what you are looking for (e.g. a Web Server, a CMS, etc.). As a consequence, you have to define your needs carefully, select one or several open source solutions and then evaluate their maturity.

      In a sense, it is a very different experience from buying a proprietary product or, even more easily done, filling out a PO and acquiring a new proprietary solution from your existing supplier...


      Opportunities

      As always, I think that all those problems are great opportunities : how can Open Source companies embrace those problems and provide a better service to the medium business market ? How can we lower the barrier so that FLOSS can become mainstream ?

      Do you have similar experiences with medium businesses and FLOSS ?

      Monday, May 17, 2010

      The end of the (Linux) desktop as we know it ?

      Embedded Linux : common trends

      More and more of the Linux ecosystem (PC hardware vendors, phone hardware vendors, the search engine giant and, more recently, a well-known Linux distro, Ubuntu) uses Linux as an embedded system for the desktop. Some examples to illustrate this trend :
      • Asus Express Gate embeds Linux in the motherboard. You can have, in a few seconds, a browser, Skype, etc.
      • Google Chrome OS : not yet released, but it is defined as the Web OS with a minimalist/zen approach (like an OS based on Chrome, the browser)
      • Mobile platforms : you'll have plenty to choose from. ARM-based : Symbian, MeeGo, Android, etc.
      • Last but not least, Canonical announced "Unity", a minimal/zen OS that will be available to OEMs but can nonetheless be deployed on Ubuntu Lucid and later.

      Major desktop environments (Gnome, KDE) : things of the past ?

      One interesting Linux specificity is the fragmentation of the window manager market. No other "mainstream" operating system has such complexity : elsewhere, the window manager is unique and completely integrated (from kernel to applications) into the operating system. Thanks to the XFree standardization, Linux is more complex : several window managers exist and have to co-exist.

      However, on this level, things are changing rapidly :
      • Collaboration between Gnome, KDE and XFCE (and others) happens through the Freedesktop project. The project's goals are to define common tools (like X.org), sub-systems (D-Bus, etc.) and APIs to ease integration and interoperability of the different window managers.
      • Zen-ification : simple is beautiful. Minimalist systems set the trend. Clunky interfaces tend to disappear and are flagged as bad design. Aesthetics and ergonomics are the two main change drivers. Simplicity is especially important in order to "cross the chasm" and reach the general public.
      • Cloud computing : browsers are the key to the world these days. An interesting point to notice is that none of the desktop environments are relevant in the browser war : they use/integrate a major browser based on user choice (OK, they provide a browser, but ... those browsers do not exist on the larger scale of the Internet!).

      Negative impacts for the (Linux) desktop

      • With the cloud, more and more users will be completely satisfied with a browser. As a consequence, users are more likely to care about their browser and the associated information (bookmarks, sessions, passwords, cookies, etc.) than about their desktop. Major browsers offer a form of cloud synchronization : Mozilla Weave and Chrome Sync (with a Google account ... of course) are leading the pack here. Those tools help free the user from the desktop : everything can reside in the cloud.
      • Even if the desktop is less and less relevant, the migration from 100% local applications (PC) to 100% cloud will take time. During this time, a desktop is still needed, but this is an end-of-life situation for this product line. The fact that major players roll out their own desktop environments is a sign that current desktop environments do not meet the needs of the future. Instead of improving the current ones, major organizations decided to ... create their own : Asus with Express Gate, Google with Android and soon Chrome OS and, more recently, Canonical with Unity.
      • The direct consequence is a form of "commoditization" of the different desktop environments : they all look alike and most "regular" users don't really care. The differentiation factor is small/difficult to point out : only the style and the look and feel are experienced by the user, after all !

      Positive impacts
      • Linux everywhere : after the servers, here comes some form of "desktop". The platform is now very popular and we can see convergence between mobile phones, netbooks, mobile internet devices, etc.
      • Hardware support ... will become better and better on the "desktop". As more and more hardware manufacturers provide an embedded Linux, hardware support will become a non-issue. Most components are standard and this will lead to a well-supported platform for the desktop.

      Conclusion ?

      The value is shifting from desktop environments and desktop applications to the browser and cloud applications. The direct consequence is that the Linux desktop environments should unite and work more closely together in order to address this need.

      This will not be an easy task, as a lot of flame wars and ego wars will have to be resolved : the feud between the different window managers is long-lasting and not really decreasing. Key projects like Freedesktop are very important in this regard.

      Major players created their own "non-desktop environments" to provide a zen-minimal environment that contains a browser and some additional technologies (video-conferencing, etc.). Those players' decisions should be a serious wake-up call for the window managers : a major hardware manufacturer, Google and an open-source-friendly distribution publisher (Canonical, with the recent Unity announcement) decided to create their own "desktop" environments. Those products will be delivered to millions of users...

      At a certain level, one can say that the battle is already lost : the current desktop environments cannot really fight this war, as they don't own the key technology : the browser. As a consequence, the risk for them (Gnome, KDE, etc.) is to become a tool that launches a browser. A (relatively) simple tool that can be easily changed with almost no user impact...

      Sunday, May 9, 2010

      The Cloud : at least an environment that favors open source !

      What is the cloud ... without its marketing tag ?

      During the 2010 think tank in Napa, the cloud was very present. One of the questions was about the "threat" that the cloud represents for Open Source. This is funny because some very well-informed participants mentioned some key statistics : the major players in the cloud field, namely Amazon and Google, run an open-source (Linux-based) solution for their cloud.

      Some unofficial statistics mention that more than 90% of Amazon EC2 instances run ... Linux. To some extent, one could define the cloud as an Open Source OS + Open Source virtualization.


      Why does open source matter ?

      The price of a computing cycle, bandwidth, RAM and storage is going down because of various factors, including Moore's law (processors, memory) and an unprecedented market size increase : developing countries are now entering the digital age and crossing the digital divide.

      An old open-source theory stated that, when the cost of a computer reaches 250$, Open Source software will be in a strong position because it will be very difficult to pay hundreds of dollars for ... software (I was not able to find the reference, but this is not my theory; feel free to comment and I will correct this blog entry).

      I think that Open Source allows the cloud to exist : the cost of servers is getting under 2500$ (let's assume that a server can be used by 10 users; we find the same kind of ratio as in the initial prediction). In this context, when you manage 100,000s of servers, should you pay for ... the operating system ? The virtualization layer ? Etc. In fact, most of the cloud players (Google, Facebook, Amazon, etc.), as start-ups, relied exclusively on open source. Constraints drive innovation and I really think that the present scale of those companies is a direct consequence of their platform choice : using any type of closed-source software would have hurt their control over their platform and nipped any innovation in the bud.

      From an end-user point of view, the number one OS vendor (Microsoft) adapted itself to the cloud in a very reactive mode. In order to use its proprietary system in the cloud, you have to pay an extra 10c/hour/EC2 instance. If you use 100,000s of hours, should you pay this tax ? Should you port your cloud software to ... Linux ? At a certain level, Red Hat and other licence-based commercial distros have the same problem : they have to somehow "rent" you your licence so that you can use them in the cloud. Once again, why pay for this and not use ... a freer distro like Debian or Ubuntu, or even Fedora, CentOS or openSUSE ? As a customer, you will no doubt consider the alternatives and the cost/benefit trade-off...


      What of the dominant players ?


      While there are different players, I will concentrate on Amazon, as Amazon is clearly in the leadership position : they published their API and offer a multi-tenant cloud infrastructure that is very cheap and very ... scalable. Scalability is not the user's problem anymore, at least from a hardware, storage and bandwidth point of view. However, application scalability in the cloud is a very young and immature science ;-)

      The EC2 API is in the process of becoming a de facto standard and most cloud providers build their solutions on top of this API. Others, like Eucalyptus, re-implement the API with a different toolbox (i.e. other virtualization technologies) and as open source : you can have your own private cloud on your own servers.
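
      To make the "de facto standard" point concrete, here is a minimal sketch using the boto Python library (one common way to talk to the EC2 API from Python at the time). This is illustrative only : the credentials and the AMI id are hypothetical placeholders, and the same kind of client code can, in principle, be pointed at an EC2-compatible private cloud such as Eucalyptus by using a custom endpoint.

        # Minimal sketch (not production code) : talking to the EC2 API with boto.
        # The credentials and AMI id below are hypothetical placeholders.
        import boto

        conn = boto.connect_ec2(
            aws_access_key_id="YOUR_ACCESS_KEY",
            aws_secret_access_key="YOUR_SECRET_KEY",
        )

        # Start one instance from a (hypothetical) Linux AMI.
        reservation = conn.run_instances("ami-12345678", instance_type="m1.small")
        instance = reservation.instances[0]
        print("Started instance %s, state : %s" % (instance.id, instance.state))

        # List everything currently running in the account.
        for r in conn.get_all_instances():
            for i in r.instances:
                print("%s %s %s" % (i.id, i.state, i.instance_type))

      The point is less the specific library than the fact that freely available client code like this works against any provider that implements the EC2 API.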




      The Cloud : at least an environment that favors open source !

      If you look at the proprietary/closed-source/traditional licence-based business models, they have to adapt to the cloud and somehow switch from a multi-instance architecture to a multi-tenant one. They also have to switch from a per-user licence to a per-usage model. This involves all kinds of interesting gymnastics, like what Microsoft did : 10c/hour/instance. If you "rent" an instance for one year, your cost for MS Windows in the cloud is 0.10$ * 24 * 365 = 876$. Very expensive IMHO : they don't really believe in the cloud. If you do the math, you are better off buying a bunch of server licences that include 5 VMs each : this will be way cheaper.
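
      As a quick sanity check of the arithmetic above (the per-hour surcharge is the figure quoted above; the server licence price is a purely hypothetical assumption added for illustration) :

        # Back-of-the-envelope cost comparison (illustrative only).
        HOURS_PER_YEAR = 24 * 365                # 8760 hours

        surcharge_per_hour = 0.10                # the 10c/hour/instance figure quoted above
        cloud_cost = surcharge_per_hour * HOURS_PER_YEAR
        print("Windows surcharge, 1 instance, 1 year : %.0f$" % cloud_cost)   # 876$

        # Hypothetical on-premise comparison : one server licence covering 5 VMs.
        # The licence price below is an assumption, not a quoted figure.
        server_licence_price = 1000.0
        print("Per-VM cost with a 5-VM server licence : %.0f$" % (server_licence_price / 5))

        # Open source OS : no per-hour licence surcharge at all.
        print("Linux licence surcharge : 0$")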

      On the other hand, there is no cost involved and no change required to any open-source software if you want to run it in the cloud : just do it ! No licence change, no hassle, almost no difference from a regular deployment in your data center, no supplier to contact to ask "can you run it in the cloud / how can you run it, etc.", no special provisioning (to get your precious licence number for each instance or to connect your licence server to your instances).

      Moreover, open-source companies that develop open-source software already have a business model that is compatible with the Cloud : their software is already freely downloadable and can be run on any computer. When the old licence-based model is used, some adaptation is needed on the business side of things, for instance usage-based vs instance-based pricing. When a service model is used, no change is needed.

      This can be a great opportunity for Open Source : virgin territory without any legacy players, and a business advantage because there is no need to change the existing software to benefit from this new market...

      Wednesday, May 5, 2010

      Switch : When change is hard. A change management book !

      Introduction

      "Another book about change management" could certainly be your reaction when reading this review. The Heath brothers are well known for the bestseller "Made to Stick". In "Made to Stick" (a recommended read ;-), the emphasis is psychological and human-centric : it is a popular psychology book that proposes a model of the human mind : the rational mind and the emotional mind. I think this is important to mention because "Switch" is compatible with this psychological model and the two books work well together.

      Change : if it was easy, it would already be done !

      Change is, by definition, hard. If that was not the case, then ... it would already be done. As a consequence, the second part of the title is somewhat redundant, but it is certainly a good marketing coup ;-) The second part of the title has at least one virtue : it explains clearly what the book is about : "How to change when change is hard". I've read in an unidentified source (I don't remember which!) that the greater the success, the harder the change.

      Friday, April 30, 2010

      How to help large organizations contribute to open source projects ?

      Context

      During the 2010 think tank in Napa, one of the participants asked the audience the following question : "How could we help organizations to contribute to Open Source software ?"

      The problem is the following : most large organizations rely on open source software one way or another. There is not necessarily an official policy about FLOSS usage, but system engineers, IT administrators & developers tend to use and deploy Open Source software.

      In this context, and because, more often than not, this software is critical to one or several business units, it should, in theory, be easy to contribute.

      Hypothesis : IP protection and no open source policy

      The problem is more acute for large organizations because there is, more often than not, a legal department, and some policies are in place to guide workers. Those policies are there to protect industrial secrets and, more generally, the IP of those organizations.

      In those organizations, more often than not, there is no open source policy : open source leaders do not necessarily have corporate support and, as a consequence, those projects tend to be hidden/not widely publicized.

      How to improve the situation : patches & minor contributions ?

      A public Open Source policy (like the one from the government of California) should be published : the goal is not really to have non-open-source users contribute, but simply to help open source projects become more publicized inside the organization. It is a clear sign that Open Source is welcome in town !

      Lawyers should become involved at this point in order to define a "contribution policy" for the organization. The main question to answer is : is the corporation OK with contributing under its own name and copyright on the code, or not (for marketing or business reasons, or for fear of liability) ?

      If the organization is OK with being involved and becoming a more or less official contributor to the projects, then proceed to the next step.

      If the organization is not OK with being involved, the idea is to use a partner : more often than not, an Open Source integrator that will be able to somehow "white label" the contributions and, as a consequence, cut any potential liability between the organization and the contributions (as a matter of fact, we have already done this for several customers : they did not want their name to appear in the patch for various reasons).

      Once this legal issue is settled, the CIO should then publish a "contribution guideline" that officially allows and encourages bugfixes, patches, documentation updates and contributions to be made by internal IT, either directly or indirectly (via a partner). The goal of this guideline is not, once again, to _force_ contribution : it simply shows the path and will make contributing easy and possible for everyone in the organization.

      As contributions become more common, good practice should be encouraged : an internal blog or intranet can be set up to list every contribution in order to reinforce the positive attitude and to create rock stars among their peers : top contributors could be awarded some really cool things like ... attending the developer meeting of their preferred technology, training sessions or technical training, etc.

      Contributing to a community is a good thing and you could then suggest that the marketing & communication department publicize your contributions to the open source community. The HR department will be interested in this as well, as you will be able to attract highly skilled and motivated IT professionals.


      How to improve the situation : major improvements, plugins, new projects ?

      In this case, the situation is a bit more complex. We did several consulting sessions for customers that really wanted to open source either a complete project or certain areas of the code. Most of those projects were initiated by the IT department and, let's say, the relationship with the legal department was very, very conflictual.

      One of the root causes of this conflict is the fact that the legal department was not part of the initial project team : they were seen as a necessary evil and ... they behaved as such ;-)

      Lesson learned : please involve your legal department as soon as possible. Propose that they work with a law firm specialized in open source IP and licensing. Let them have their lawyers' luncheons and let them decide on the fun things like licensing and IP management for the project.



      Initially, I would recommend starting with a small-scale project. Ideally, select an existing add-on, plugin or piece of code. Once you have a small success and contribute one component, you can expect the process to become more and more routine.

      After some successful projects, we could even envision an official "open-source policy" for any project that is not part of the core business/specificity of your organization, but I don't think that this should be a goal : it is a consequence of the normalization of major contributions. The same concepts apply : you should celebrate successes, list contributions and eventually publicize them !


      Before any open-source release, you should have a process in place. The process can be very simple (in our company, it is a very simple form that asks for the open-sourcing of a project) and it should make sure that : you own the code that you want to open source (it seems evident but ... is it ? Did you cut & paste some code snippets, was some of your code developed by sub-contractors, etc.), and that the code is not something you are handing to your competitors. Finally, you can select the appropriate licence based on your own legal department's advice, but also based on the type of licence in use by this specific open source community.
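
      As an illustration only (the form we use internally is not public, so every field and question below is hypothetical), such a pre-release check can be as simple as a short checklist that forces the requester to answer the key questions before anything is published :

        # Hypothetical sketch of a pre-release "open-sourcing request" checklist.
        # The questions mirror the points above ; the field names are made up.

        CHECKLIST = [
            ("own_the_code", "Do we own all the code (no unknown cut & paste snippets) ?"),
            ("subcontractor_rights", "If sub-contractors wrote part of it, do we hold the rights ?"),
            ("no_core_secrets", "Is the code free of core business secrets we do not want competitors to see ?"),
            ("licence_chosen", "Has a licence been chosen with legal advice, matching the target community ?"),
        ]

        def review(answers):
            """Return the unresolved questions ; an empty list means 'OK to release'."""
            return [question for key, question in CHECKLIST if not answers.get(key, False)]

        # Example usage : one question is still unanswered.
        answers = {"own_the_code": True, "subcontractor_rights": True, "licence_chosen": False}
        for question in review(answers):
            print("Blocked on : %s" % question)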



      Conclusion


      I think that those small steps could really help large organizations contribute more to the open source software they are using. Those ideas are grounded, easy to implement and easy to put into practice.
      • The "contribution barrier" will be lowered inside the organization
      • It will reveal a clear path for open source contributors
      • It will reveal a clear path for starting open source projects
      • The legal department will feel respected and part of the solution
      • Internal publicity (blog/intranet) will reinforce the process inside the organization