VMworld 2010

by Sebastien Mirolo on Thu, 9 Sep 2010

VMworld took place last week at the Moscone Center in San Francisco, and it was definitely very crowded. Virtualization is a key technology enabling cloud computing, the quiet IT revolution, and VMware was definitely on top of its game at VMworld 2010.

Trends

There was little about virtualization itself, a lot about clouds and even more about services.

The big deal about virtualization is that it detaches workloads from the computing infrastructure, and that in turn pulls everyone to see clouds in terms of services. It is thus no surprise that data centers are becoming highly automated, highly virtualized environments, made accessible to end-users through web portals and now cloud APIs.

A fundamental change for many businesses starting to rely on clouds is the shift from a fixed to a flexible cost structure for their IT departments, the opportunity to bill each team for its own consumption, and the rapid turnaround to either scale up or (just as importantly) scale down their infrastructure. In short, it is like moving from investing in a company-wide power plant to a per-department monthly bill. In accounting terms, it is a huge deal.

Another trend supported by clouds is the rise of B.Y.O.C. (Bring Your Own Computer) initiatives. Fortylines was founded on the premise that knowledge workers should use their own hardware and favorite systems, and it is good to see we are not alone. It is estimated that 72% of corporate end-points will be user-owned devices by 2014. ThinApp and ThinDirect are the kind of technologies that make it possible to safely bring company services to such devices. Coupled with a company ThinApp store tied to a web front-end and RSS feeds, they will give end-users the opportunity to stay up to date on demand.

Companies that leverage public and independent cloud providers are reaping huge benefits in cost reduction. Companies that leverage private clouds might be in better control of delivering reliable performance and bandwidth to their employees. Most businesses will thus implement a hybrid cloud strategy in practice.

Deployment from development systems (just enough capacity for testing) to production systems (full capacity) has always been a major headache for web-based businesses. The cloud provides a huge opportunity to avoid a harsh cut-over and instead gradually ramp up development systems into production while tearing down the previous production systems.
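To make the idea concrete, here is a minimal Python sketch of a weighted router that sends an increasing fraction of requests to the newly promoted systems; the pool names are made up for illustration.

```python
import random

# Hypothetical server pools; the names are for illustration only.
previous_production = ["prod-vm-1", "prod-vm-2", "prod-vm-3"]
next_production = ["dev-vm-1"]  # ramped-up development systems

def pick_backend(ramp_fraction):
    """Route a request to the new pool with probability ramp_fraction.

    Raising ramp_fraction from 0.0 to 1.0 over time gradually moves
    traffic to the new systems; the old VMs can be torn down once it
    reaches 1.0, with no harsh cut-over.
    """
    pool = next_production if random.random() < ramp_fraction else previous_production
    return random.choice(pool)

# Week 1: 10% of requests hit the new systems; later: all of them.
for fraction in (0.1, 0.5, 1.0):
    print(fraction, pick_backend(fraction))
```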

After virtualizing machines, there is increasing pressure to also virtualize the routers, networks, etc. I must admit that I have not fully grasped the rationale for this yet. Apparently it is an optimization to avoid going through the hypervisor and the physical network, but it seems to only apply to virtual machines co-located in a single box.

Lingua Franca

If you are new to the clouds, want to impress your friends, or just want to avoid getting a CIO's door slammed in your face in the first five minutes, there is some lingua franca to read up on.

Virtual Machine (VM) provisioning is definitely a basic. The EC2 API, vCloud API and appCloud are amongst the APIs (Application Programming Interfaces) you will surely encounter. Typica and Dasein are a cross between framework and API that might also pop up from time to time.
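As a taste of what VM provisioning through such an API looks like, here is a minimal sketch using the boto library against the EC2 API; the region, AMI id and credentials are placeholders to substitute with your own.

```python
# Minimal EC2 provisioning sketch using the boto library.
# The region, AMI id and credentials below are placeholders.
import boto.ec2

conn = boto.ec2.connect_to_region(
    "us-east-1",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY")

# Ask the cloud for one small virtual machine...
reservation = conn.run_instances("ami-12345678", instance_type="m1.small")
instance = reservation.instances[0]

# ...and release it when the workload is done (flexible cost structure!).
conn.terminate_instances(instance_ids=[instance.id])
```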

A virtualization stack is usually built around a Data Center, a Hypervisor, a Virtual Machine (see OVF), an Application Delivery layer (see VDI), a Display Protocol (see RDP) and a Client End-Point (or Access Device). SNMP and role-based access control remain classics for server management. Storage solutions (e.g. Gluster) are also a key talking point in any serious cloud conversation.
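Role-based access control itself is mostly bookkeeping. Here is a toy Python sketch of the kind of check a management portal would perform before every operation; the roles and permissions are made up for illustration.

```python
# Toy role-based access control check; roles and permissions are made up.
ROLE_PERMISSIONS = {
    "operator": {"vm.start", "vm.stop"},
    "auditor": {"log.read"},
    "admin": {"vm.start", "vm.stop", "vm.create", "vm.delete", "log.read"},
}

def is_allowed(user_roles, permission):
    """Return True when any of the user's roles grants the permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in user_roles)

assert is_allowed(["operator"], "vm.stop")
assert not is_allowed(["auditor"], "vm.delete")
```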

Compliance with different laws and regulations is a major part of doing business in many fields, and IT is often charged with implementing the required systems. So PCI DSS, ISO 27005, FISMA and ENISA will most likely be thrown in at some point as well.

Opportunities

Aside from the advantages, a lot of issues and challenges of relying on a cloud infrastructure were presented all week long. That means as many business opportunities for willing technology entrepreneurs, and that was really exciting.

Policy issues

A major hurdle to using public clouds over private clouds is trust in outside services and fear of vendor lock-in. Both are usually addressed through transparent audit processes. Logging all activities (admin changes as well!) to a centralized log server is generally a welcome first step. Laws vary between states and countries, and businesses need to limit their liability. So a second necessary step is being able to track and enforce where a Virtual Machine is physically located at all times.
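Centralized logging does not require exotic machinery either. Here is a minimal sketch using the Python standard library's SysLogHandler; the "loghost" host name is a placeholder for your actual log server.

```python
# Send audit records to a centralized syslog server using only the
# Python standard library. "loghost" is a placeholder host name.
import logging
import logging.handlers

audit = logging.getLogger("audit")
audit.setLevel(logging.INFO)
audit.addHandler(
    logging.handlers.SysLogHandler(address=("loghost", 514)))

# Admin changes are logged as well!
audit.info("admin jdoe changed firewall rule on vm-42")
```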

Entering the cloud, an IT department will suddenly struggle with license management and the proliferation of images. Neither issue existed when workloads were tied to a physical computer. The IT technician's job will have to change accordingly. A car analogy I heard that seems perfectly appropriate is the change from fiddling with the engine by ear in earlier years to finding the diagnostic plug and analyzing the on-screen measurements. With large clouds running thousands of virtual machines, user interface designers are also in for a ride managing complexity.

The cloud provider will need to go to great lengths to ensure that Virtual Machines from one company cannot sniff the traffic of another company, since both are co-located in the same data center, sometimes running on the same processor. This is known as the multi-tenancy problem.

Another problem that is bound to plague clouds if not carefully handled is rogue, pirate and otherwise unauthorized VMs consuming resources.

A last question that seems simple but still requires careful thinking is: "Who is responsible for updating the OS?"

Technical issues

The Internet Protocol and IP addresses were meant to describe both identity and location. It is also common for local networks to be configured with default 192.168.* addresses. Ingenious engineers will be busy solving the shared-address conflicts and performance issues that arise out of those designs in virtualized environments.
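To illustrate the conflict, here is a minimal sketch using Python's ipaddress module, detecting that two tenants who both kept the factory-default 192.168.* addressing will collide once their networks are bridged.

```python
# Two tenants kept the factory-default private addressing; the
# ipaddress module makes the collision easy to detect.
import ipaddress

tenant_a = ipaddress.ip_network("192.168.1.0/24")
tenant_b = ipaddress.ip_network("192.168.1.0/25")

if tenant_a.overlaps(tenant_b):
    print("address conflict: one tenant must be re-addressed or NATed")
```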

At the heart of a hypervisor, virtualized I/O performance is a very difficult problem. As an increasing number of business applications are read-data bound while, for example, Windows was optimized for write-through output, the headaches and fine-tuning will keep going for a long time.

Operating systems assume they own all of the physical memory. Having virtualized the physical memory without the OS's consent, the hypervisor thus only allocates machine memory when necessary. There is a whole range of techniques such as transparent page sharing, hypervisor swapping, etc., but the most mind-blowing cleverness I have seen is "ballooning" as a way to reclaim machine memory.
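Ballooning is easier to grasp with a toy model: a balloon driver inside the guest pins pages the guest OS thinks it owns, so the hypervisor can hand the backing machine memory to another VM. The sketch below is purely conceptual, not how any real hypervisor is implemented.

```python
# Purely conceptual model of memory ballooning, not real hypervisor code.
class Guest:
    def __init__(self, total_pages):
        self.free_pages = total_pages   # pages the guest OS believes it owns
        self.balloon_pages = 0          # pages pinned by the balloon driver

    def inflate_balloon(self, pages):
        """The balloon driver allocates pages inside the guest, forcing
        the guest OS to free memory it can spare; the hypervisor then
        reuses the machine memory backing those pinned pages elsewhere."""
        reclaimed = min(pages, self.free_pages)
        self.free_pages -= reclaimed
        self.balloon_pages += reclaimed
        return reclaimed  # machine pages the hypervisor gets back

guest = Guest(total_pages=1024)
print(guest.inflate_balloon(256), "pages reclaimed for other VMs")
```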

Completely virtualized desktops are wonderful in that they deliver the OS and the application in a single package. That removes a lot of update and dependency headaches. On the other hand, such packages have the side effect of getting native anti-virus software to hang the local machine for a while when checking a downloaded >30MB VM image, and to report all kinds of improper behavior (of course, an OS is bundled in that VM and it will most likely contain "suspicious" kernel code).

Conclusion

Voila, lots of exciting news from VMworld this year! Clouds are making their way into the IT infrastructure of many companies and are bound to tremendously change how businesses organize themselves and deliver value to their customers. I hope you enjoyed it.
