Posts for virtualization

Update time again

by Sebastien Mirolo on Sat, 3 Mar 2012

I have been playing around with condor since last week. It is available through apt-get, so it was somewhat straightforward to experiment with. As it turns out, the .deb package installed version 7.2.4, which is unfortunate because support for transferring directories was only added in version 7.5.4 (with further releases fixing a slew of bugs related to that feature).

This post is not about condor (see my condor notes). It is about two days spent upgrading my development environment to get to the point where I can write code that uses that condor feature. The following paragraphs show typical decisions and consequences that are an intrinsic part of software management.

The question is how do I get a version of condor at 7.5.4 or above installed on my system. The latest stable version of condor is, as of today, 7.6.6, so the closer the better. I am not picky. The sooner I can get the required feature installed and move on with my actual work, the better. We are still figuring out how to leverage condor in our cloud infrastructure. Being picky will come in due time.

Starting an upgrade process

I had used ubuntu 10.04 (lucid) thus far. Since it is almost two years old at this point, my first idea was to upgrade to ubuntu 11.10 (oneiric), the 12.04 LTS still being in beta.

Ubuntu 11.10 provides linux kernel 3.0.0. Almost as expected, the vmware tools did not compile with the new kernel headers. That is a problem because my development setup involves mounting my home directory on the OSX host read-only in the guest virtual machine, then ssh'ing into that machine from Terminal.app. This way I can use reliable copy/paste shortcuts on the terminal session and write code with emacs on OSX. I had used VMware Fusion 3.1 until then. Patching the vmware tools wasn't beyond the realm of possibilities, but I figured that for less than fifty dollars I could save myself a lot of time and trouble down the line by just upgrading to VMware Fusion 4.1. I did. Along the way I also installed the OSX updates to avoid breaking my setup later on when another tool would require them. It is called an update but at 1.4Gb, it might as well be a complete new system. Anyway, after a couple of hours I was running OSX 10.7.3 and VMware Fusion 4.1. Ubuntu 11.10 installed perfectly and the VMware tools worked without a glitch. Nonetheless, the condor version provided by oneiric was still 7.2.4.

In the meantime I also discovered that Fusion 4.1 moved some of the configuration files around. I first looked up the MAC addresses for my VMs and then fixed their IP addresses in the DHCP configuration.

$ find /Library/Virtual\ Machines -name '*.vmx' -exec grep -H 'ethernet0.generatedAddress ' {} \;
$ diff prev /Library/Preferences/VMware\ Fusion/vmnet8/dhcpd.conf
+host armada {
+    hardware ethernet 00:0c:29:a6:1b:41;
+    fixed-address 192.168.144.10;
+    option host-name "armada";
+}

Finally I restarted the network interface by running:

$ /Applications/VMware\ Fusion.app/Contents/Library/vmnet-cli --stop
$ /Applications/VMware\ Fusion.app/Contents/Library/vmnet-cli --start

Going down the rat hole

At that point I could have decided to compile the latest condor sources but instead I figured I would see what version of condor is packaged in Fedora. So I went on to install Fedora 16 on VMware Fusion. After the usual modifications to be able to ssh into the Fedora virtual machine,

$ systemctl start sshd.service
$ systemctl enable sshd.service
$ diff -u prev /etc/sysconfig/iptables
 -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
+-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
 -A INPUT -p icmp -j ACCEPT
$ iptables-restore < /etc/sysconfig/iptables

I was back to installing the vmware tools and trying to get vmhgfs to work. That is where things got ugly. The Fedora 16 iso ships with linux kernel 3.1.0 but a first *yum update* brings in kernel 3.2.7, so *yum install kernel-devel* installs the sources for kernel 3.2.7 in /usr/src/kernels. The choice is thus to compile the VMware Tools against kernel 3.1.0, 3.2.7, or both. The following patch managed to do that but unfortunately, while vmhgfs would load, mounting the host filesystem would fail (see grep vmware-tools /var/log/messages).

$ yum groupinstall "Development Tools"
$ yum install kernel-devel
$ yum install kernel-devel-3.1.0
$ tar zxvf /media/VMware\ Tools/VMwareTools-8.8.1-528969.tar.gz
$ cd vmware-tools-distrib
$ tar xvf lib/modules/source/vmhgfs.tar
# fix error: assignment of read-only member 'i_nlink'
$ diff -u prev vmhgfs-only/fsutil.c
- inode->i_nlink = 1;
+ set_nlink(inode, 1);
# fix error: expected ')' before numeric constant
$ diff -u prev vmhgfs-only/tcp.c
+ #include <linux/moduleparam.h>
$ tar cvf lib/modules/source/vmhgfs.tar vmhgfs-only
$ ./vmware-install.pl

As it turns out, there is an open source alternative to the vmware tools called open-vm-tools. Of course it comes as a package in Ubuntu but does not seem to be part of the Fedora repositories. I downloaded open-vm-tools-8.6.0-425873.tar.gz (stable) and open-vm-tools-2011.12.20-562307.tar.gz (devel). The first one errored out while compiling against the linux kernel 3.2.7 sources. The second one compiled fine and even managed to work. I was able to mount my host home directory on a Fedora 16 vm with:

# Make sure we are running the 3.2.7 kernel.
$ uname -a
$ yum groupinstall "Development Tools"
$ yum install kernel-devel
$ yum install glib2-devel pam-devel libX11-devel libXext-devel libXinerama-devel libXi-devel libXrender-devel libXrandr-devel libXtst-devel libXScrnSaver-devel uriparser-devel libpng-devel gtk2-devel gtkmm24-devel libdnet-devel gcc-c++ libicu-devel
# Not sure the following one helps but I did install it.
$ yum install dkms
$ cd open-vm-tools-2011.12.20-562307
$ ./configure
$ make
$ make install
$ mkdir -p /mnt/hgfs
$ /usr/local/bin/vmtoolsd &
# verify modules are loaded and running
$ lsmod | grep vm
$ ps -e | grep vm
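
The share itself can then be mounted manually. This is the stock vmhgfs mount syntax; I am assuming here that the home directory share is enabled in Fusion's Sharing preferences:

# Mount every share exported by the host under /mnt/hgfs.
$ mount -t vmhgfs .host:/ /mnt/hgfs
$ ls /mnt/hgfs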

I finally had Fedora 16 / VMware Fusion 4.1 / OSX 10.7.3 providing the same features as my previous setup two days earlier. Where was I? Oh yeah, condor...

$ yum install condor
$ condor_version
7.7.3
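
To exercise the directory-transfer feature itself, a minimal submit description along the following lines should be enough. The run.sh executable and the data directory are illustrative names, not something from my actual setup:

cat > job.sub <<EOF
universe = vanilla
executable = run.sh
should_transfer_files = YES
when_to_transfer_output = ON_EXIT
# Naming a directory here relies on the directory transfer support added in 7.5.4.
transfer_input_files = data
output = job.out
error = job.err
log = job.log
queue
EOF
condor_submit job.sub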

Conclusion

VMware Fusion works almost seamlessly with Ubuntu. That is not the case with Fedora. Unfortunately condor is a lot better integrated with Fedora than Ubuntu.

VirtualBox is now an Oracle product. Since Oracle is also famous for "stealing" Redhat, there might be a chance a VirtualBox / Fedora combination works out of the box. I don't know. Maybe I will try when I am forced to chase down another upgrade path.

Virtual Machines Server

by Sebastien Mirolo on Sat, 8 Jan 2011

Ubuntu 10.10 comes with support for Virtual Machines. As I started to port the fortylines testing infrastructure to a new, quieter machine, it is a good time to start playing around with virtual machine provisioning and decommissioning.

Ubuntu's default virtual machine infrastructure is built around KVM. The Ubuntu wiki has some information on booting a virtual image from a UEC Image. A blog post titled "Setting up virtualization on Ubuntu with KVM" also contains a lot of useful information. After browsing around, the tools to get familiar with include vmbuilder, kvm, qemu, cloud-init and eucalyptus.

$ which kvm
/usr/bin/kvm
$ which qemu-system-x86_64
/usr/bin/qemu-system-x86_64

First, the easiest approach seems to be booting from a pre-built image, so I downloaded the current UEC image and looked forward to booting the virtual machine from it. I tried the amd64 image unsuccessfully, and the i386 image goes through the same set of errors: "Could not initialize SDL" (add the -curses option), then "General error mounting filesystems" (replace if=virtio with if=scsi,bus=0,unit=6). I finally got a login prompt by running the following commands.

# download UEC image
wget http://uec-images.ubuntu.com/server/maverick/20101110/maverick-server-uec-i386.tar.gz
mkdir maverick-server-uec-i386
cd maverick-server-uec-i386
tar zxvf maverick-server-uec-i386.tar.gz
chmod 444 maverick-server-uec-i386*

# booting the virtual machine from the UEC image
kvm -drive file=maverick-server-uec-i386.img,if=scsi,bus=0,unit=6,boot=on \
    -kernel "maverick-server-uec-i386-vmlinuz-virtual" \
    -append "root=/dev/sda ec2init=0 ro init=/usr/lib/cloud-init/uncloud-init \
    ds=nocloud ubuntu-pass=ubuntu" -net nic,model=virtio \
    -net "user,hostfwd=tcp::5555-:22" -snapshot -curses

The idea behind the Ubuntu virtual machine support investigation is to run nightly mechanical builds on a virtual machine. The virtual machine is provisioned with a standard UEC image, the build is performed, installing prerequisites as necessary, the generated log is communicated back to the forum server, and the virtual machine is decommissioned.

The two main issues to be solved are starting the automatic build in the virtual machine and communicating the log back to the forum server. A third issue, not directly related to the cloud infrastructure, is to run a sudo command on the virtual instance through a batch script.

The documentation and the kernel command line hint at a "xupdate=" option to the /usr/lib/cloud-init/uncloud-init init process. I thus mounted the disk image and started digging through the uncloud-init script to find clues on how it could be useful for my purpose.

mkdir image
losetup /dev/loop2 maverick-server-uec-i386.img
mount /dev/loop2 image
less image/usr/lib/cloud-init/uncloud-init
...
 if [ -d "${mp}/updates" ]; then
      rsync -av "${mp}/updates/" "/" ||
               { log FAIL "failed rsync updates/ /"; return 1; }
fi
if [ -d "${mp}/updates.tar" ]; then
        tar -C / -xvf "${mp}/updates.tar" ||
                { log FAIL "failed tar -C / -xvf ${mp}/updates.tar"; return 1; }
fi
script="${mp}/updates.script"
if [ -f "${script}" -a -x "${script}" ]; then
        MP_DIR=${mp} "${mp}/updates.script" ||
                { log FAIL "failed to run updates.script"; return 1; }
fi
...

The uncloud-init script is designed to customize a virtual instance before the system fully boots and becomes operational, so it is no surprise that the xupdate mechanism cannot be used to start the build process. It seems we will have to log into the instance and run the build process from there.

For our purpose of a mechanical build system, it is possible to run virtual instances without bringing up an ssh server. Once the build is finished, we could mount the disk image through a loopback device on the host and retrieve the files from the mounted drive. That requires adding an entry like the following to /etc/fstab. Some blogs suggest using autofs instead but I haven't been able to get it to work properly, nor do I understand how it gets rid of the "mount as root" requirement.

/var/images/build-uec-i386.img /mnt/images/build auto ro,user,noauto,loop 0 0
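
As a sketch of how the retrieval could then work once a build has completed (the log path inside the guest image is illustrative):

# Mount the image through the fstab entry above (the "user" option avoids
# needing root), copy the build log out, then unmount.
mount /mnt/images/build
cp -r /mnt/images/build/home/ubuntu/log ~/build-logs/
umount /mnt/images/build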

Once the virtual machines are not provisioned locally but rather spawned in the cloud, that approach does not work anymore. So we might look into using the virtual instance's ssh server to transfer logs around. All that is required is to copy the build master controller's ssh public key into the virtual instance's ubuntu account authorized_keys file, something that can be done by uncloud-init through the xupdate mechanism. So we create a custom update disk as follows.

mkdir -p overlay/updates
# ... set subdirectory structure to match the updated root ...
genisoimage -rock --output updates.iso overlay
qemu-img create -f qcow2 -b maverick-server-uec-i386.img disk.img
# This command works but still prompts for login.
kvm -drive file=disk.img,if=scsi,bus=0,unit=5,boot=on \
  -drive file=updates.iso,if=scsi,bus=1,unit=6 \
  -kernel "maverick-server-uec-i386-vmlinuz-virtual" \
  -append "root=/dev/sda ro init=/usr/lib/cloud-init/uncloud-init \
  ds=nocloud ubuntu-pass=ubuntu xupdate=sdb:mnt" \
  -net nic,model=virtio -net "user,hostfwd=tcp::5555-:22" -nographic
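
For illustration, the overlay could be populated along these lines. The public key path on the build master is an assumption on my part; the point is that whatever sits under overlay/updates/ gets rsync'ed onto the guest root filesystem by uncloud-init:

# Hypothetical layout: overlay/updates/ mirrors the guest root filesystem.
mkdir -p overlay/updates/home/ubuntu/.ssh
cp ~/.ssh/id_rsa.pub overlay/updates/home/ubuntu/.ssh/authorized_keys
chmod 700 overlay/updates/home/ubuntu/.ssh
chmod 600 overlay/updates/home/ubuntu/.ssh/authorized_keys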

The dws script needs to communicate with the source control repository through the Internet. I found out that editing /etc/network/interfaces is unnecessary once you install libvirt. Despite some posts around the web, it seems the virtual bridge is only necessary to access the virtual machine from outside the host.

sudo aptitude install libvirt0

Ubuntu 10.10 had already done that for me as part of the installation, as shown by the ifconfig command.

...
virbr0    Link encap:Ethernet  HWaddr 9e:66:65:fc:97:5b  
          inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
...

Two commands require sudo access, apt-get and shutdown. We use apt-get to install system prerequisites and shutdown to cleanly stop the virtual machine. We thus add the following two lines to the /etc/sudoers file. The batch script can then execute both commands without prompting for a password.

%admin ALL = NOPASSWD: /sbin/shutdown
%admin ALL = NOPASSWD: /usr/bin/apt-get

Once the virtual machine and its ssh server are started, it is possible to execute the build script on the guest, copy the log file back, and shut down the virtual machine in three successive ssh commands.

ssh -p 5555 ubuntu@localhost /home/ubuntu/bin/dkicks
scp -P 5555 -r ubuntu@localhost:/home/ubuntu/log .
ssh -p 5555 ubuntu@localhost /sbin/shutdown -P 0

The following python code can be used to wait until the ssh server responds.

def waitUntilSSHUp(hostname, login=None, port=22, timeout=120):
    '''wait until an ssh connection can be established to *hostname*
    or the attempt timed out after *timeout* seconds.'''
    import subprocess
    import sys
    import time

    up = False
    waited = 0
    sshConnect = hostname
    if login:
        sshConnect = login + '@' + hostname
    while not up and (waited <= timeout):
        time.sleep(30)
        waited = waited + 30
        # Run a no-op command in batch mode so ssh fails instead of prompting.
        cmd = subprocess.Popen(['ssh',
                                '-o', 'BatchMode yes',
                                '-p', str(port),
                                sshConnect,
                                'echo'],
                               stdout=subprocess.PIPE,
                               stderr=subprocess.STDOUT)
        cmd.wait()
        if cmd.returncode == 0:
            up = True
        else:
            sys.stdout.write("waiting 30 more seconds ("
                             + str(waited) + " so far)...\n")
    if waited > timeout:
        # RuntimeError stands in for whatever exception class the build script defines.
        raise RuntimeError("ssh connection attempt to " + hostname + " timed out.")

As it turns out, the build script is running out of space while installing all the prerequisites and compiling the repository. The original disk image (1.4Gb) seems too small for that purpose.

There seem to be three solutions to this problem.

  • Find a base image with a bigger disk
  • Create a new image with a bigger disk
  • Increase the size of the disk on the original disk image

As the next steps in our vm mechanical build project consist of running CentOS disk images, it is a good time to start investigating running EC2 images locally. Apparently, there is a large library of those and we should find a public one that is sized correctly for our purpose. Looking around on the web, there is a lot of documentation on creating and uploading EC2 images but I couldn't find relevant information on downloading a public image and running it locally in kvm. I was looking for something as simple as a URL to a disk image but no luck so far.

To increase the size of the disk image, the most common solution consists of concatenating two raw files together and updating the partition table. The partition update part looks like a lot of complexity to code into a batch system. We are already using an update disk to customize the default disk image and now we would also need to resize it, which seems tricky enough. So I looked into building an image with vm-builder. Apparently that is how the UEC image I used earlier was put together.

$ aptitude search vm-builder
p   python-vm-builder                        - VM builder
p   python-vm-builder-ec2                    - EC2 Ubuntu VM builder
p   ubuntu-vm-builder                        - Ubuntu VM builder

I am not yet certain vmbuilder will also provide a means to create CentOS images or if I will need a different tool for that purpose. Nonetheless, let's start there for now.

sudo vmbuilder kvm ubuntu --rootsize=8192
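
For what it is worth, vmbuilder can also pre-install packages and create the default user at image build time, which would avoid some of the manual steps below. The flags come from vmbuilder's help and I have not verified this exact invocation:

# Hypothetical invocation: pre-install sshd and create the ubuntu account.
sudo vmbuilder kvm ubuntu --rootsize=8192 \
  --addpkg=openssh-server --user=ubuntu --pass=ubuntu --hostname=build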

The ubuntu-kvm directory was created with two files in it: run.sh, a shell script with the kvm invocation command, and tmp78iihO.qcow2, a 389Mb system disk image. Let's launch the image and see what's in it.

cd ubuntu-kvm && ./run.sh

Using the "ubuntu" login and "ubuntu" password, I am able to to get a shell prompt.

$ df -h
Filesystem  Size Used Avail Use% Mounted on
/dev/sda1   7.6G 482M  6.7G   7% /
...  
$ ps aux | grep sshd
$ find /etc -name 'sshd*'

So we have a bootable image with 6.7G of space available. The sshd daemon is neither installed nor running, and most likely the scripts necessary to make a copy of that image unique in the cloud are not there either. Let's add our modifications to run the build script first and see how far it goes.

$ sudo aptitude install openssh-server
$ mkdir -p /home/ubuntu/bin
$ mkdir -p /home/ubuntu/.ssh
$ sudo vi /etc/sudoers
  # Defaults
+ # Preserve environment variables such that we do not get the error message: 
+ # "sorry, you are not allowed to set the following 
+ #  environment variables: DEBIAN_FRONTEND"
+ Defaults        !env_reset

  # Members of the admin group may gain root privileges
  %admin ALL=(ALL) ALL
+ %admin ALL = NOPASSWD: /sbin/shutdown
+ %admin ALL = NOPASSWD: /usr/bin/apt-get
$ sudo shutdown -P 0
> kvm -drive file=tmp78iihO.qcow2 \
  -net nic,model=virtio -net "user,hostfwd=tcp::5555-:22"

The previous command hangs the virtual machine at start-up, while the following command does not allow me to ssh into the virtual machine.

> kvm -drive file=tmp78iihO.qcow2 -net "user,hostfwd=tcp::5555-:22" &
> ssh -v -p 5555 ubuntu@localhost
...
ssh_exchange_identification: Connection closed by remote host

There is apparently more to vmbuilder than the documentation suggests to build an image equivalent to the one I originally used...

Looking through the vmbuilder source repository I found a README.files in automated-ec2-builds that mentioned a uec-resize-image script.

sudo aptitude install bzr
bzr branch lp:~ubuntu-on-ec2/vmbuilder/automated-ec2-builds

I might actually be able to resize my original image with a single command after all.

aptitude search *uec*
bzr branch lp:~ubuntu-on-ec2/ubuntu-on-ec2/uec-tools
ls uec-tools/resize-uec-image
sudo install -m 755 uec-tools/resize-uec-image /usr/local/bin

Let's use the resize script and check the free space on our new image.

> resize-uec-image maverick-server-uec-i386.img 5G
> ls -la maverick-server-uec-i386.img
-rw-r--r--  5368709120 2011-01-03 07:28 maverick-server-uec-i386.img
>kvm -drive file=maverick-server-uec-i386.img,if=scsi,bus=0,unit=6,boot=on \
    -kernel "maverick-server-uec-i386-vmlinuz-virtual" \
    -append "root=/dev/sda ec2init=0 ro \
    init=/usr/lib/cloud-init/uncloud-init ds=nocloud \
    ubuntu-pass=ubuntu" -net nic,model=virtio \
    -net "user,hostfwd=tcp::5555-:22" -snapshot -curses
$ df -h
Filesystem  Size Used Avail Use% Mounted on
/dev/sda1   5.0G 516M  4.2G  11% /
...

Finally, I managed to get a script that starts a virtual machine, runs the mechanical build end-to-end, and copies the build logs back out. It is a good start but there remain a few issues with the current approach. The uncloud-init script re-enables ssh password authentication after our update disk has replaced sshd_config with our own version.

# /usr/lib/cloud-init/uncloud-init
...
pa=PasswordAuthentication
sed -i "s,${pa} no,${pa} yes," /etc/ssh/sshd_config 2>/dev/null &&
        log "enabled passwd auth in ssh" ||
        log "failed to enable passwd ssh"
...

The IP address of our virtual machine is always the same, but uncloud-init will generate different ssh server keys every time our script runs the build on a fresh virtual machine. That creates host identification issues for the ssh client that the "StrictHostKeyChecking no" parameter does not always solve.
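
One workaround worth trying is to skip recording the ephemeral host keys altogether for these throw-away instances. These are standard ssh client options, not something I have wired into the build script yet:

# Ignore and never record the short-lived host keys of throw-away build VMs.
ssh -o "UserKnownHostsFile /dev/null" -o "StrictHostKeyChecking no" \
    -p 5555 ubuntu@localhost /home/ubuntu/bin/dkicks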

I looked quickly through virsh and eucalyptus. It seems each running instance needs to be registered with a global store, i.e. requires sudo access on the host. Those tools do not seem suited to the kind of thirty-minute-lifespan virtual machines (start, build, throw away) I need.

It took way longer than I anticipated to figure the pieces out and it is surely a long and winding road ahead before we have a simple-to-use cloud-based build infrastructure.

VMWorld 2010

by Sebastien Mirolo on Thu, 9 Sep 2010

VMWorld was happening last week in the Moscone center in San Francisco and it was definitely very crowded. Virtualization is a key technology enabling cloud computing, the quiet IT revolution, and VMware was definitely on top of its game at VMworld 2010.

Trends

There was little about virtualization itself, a lot about clouds and even more about services.

The big deal about virtualization is that it detaches workloads from the computer infrastructure, and that in turn leads everyone to see clouds in terms of services. It is thus no surprise that data centers are becoming highly automated, highly virtualized environments, made accessible to end-users through web portals and now cloud APIs.

A fundamental change for many businesses starting to rely on clouds is the shift from a fixed to a flexible cost structure for their IT departments, the opportunity to bill each team for its own consumption, and the rapid turnaround to scale up and (as important) scale down their infrastructure. In short, it is like moving from investing in a company-wide power plant to a per-department monthly bill. In accounting terms, it is a huge deal.

Another trend supported by clouds is the rise of B.Y.O.C. (Bring Your Own Computer) initiatives. Fortylines was founded on the premise that knowledge workers should use their own hardware and favorite systems, and it is good to see we are not alone. It is estimated that 72% of corporate end-points will be user-owned devices by 2014. ThinApp and ThinDirect are the kind of technologies that make it possible to safely bring company services to such devices. Coupled with a company ThinApp store tied to a web front-end and RSS feeds, they will give end-users the opportunity to keep up to date on demand.

Companies that leverage public and independent cloud providers are reaping huge benefits in cost reduction. Companies that leverage private clouds might be in better control of delivering reliable performance and bandwidth to their employees. Most businesses will thus implement a hybrid cloud strategy in practice.

Deployment from development (just enough capacity for testing) to production systems (full capacity) has always been a major headache for web-based businesses. The cloud provides a huge opportunity to avoid a harsh cut-over and instead gradually ramp-up development systems into production while tearing down previous production systems.

After virtualizing machines, there is increasing pressure to also virtualize the routers, network, etc. I must admit that I have not fully grasped the rationale for this yet. Apparently it is an optimization to avoid going through the hypervisor and the physical network, but it seems to only apply to virtual machines within a single box.

Lingua Franca

If you are new to the clouds, want to impress your friends, or just want to avoid getting a CIO's door slammed in your face in the first five minutes, there are a few lingua franca terms to read about.

Virtual Machine (VM) Provisioning is definitely a basic. EC2 API, vCloud API and appCloud are amongst the APIs (Application Programming Interfaces) you will surely encounter. Typica and Dasein, a cross between framework and API, might also pop up from time to time.

A virtualization stack is usually built around a Data Center, a Hypervisor, a Virtual Machine (see OVF), Application Delivery (see VDI), a Display Protocol (see RDP) and a Client End-Point (or Access Device). SNMP and role-based access control remain classics for server management. Storage solutions (e.g. Gluster) are also a key talking point in any serious cloud conversation.

Compliance with different laws and regulations is a major part of doing business in many fields, and IT is often charged with implementing the required systems. So PCI DSS, ISO 27005, FISMA and ENISA will most likely be thrown in at some point as well.

Opportunities

Aside from the advantages, a lot of issues and challenges around relying on a cloud infrastructure were presented all week long. That means as many business opportunities for willing technology entrepreneurs, and that was really exciting.

Policy issues

A major hurdle to using public clouds over private clouds is trust in outside services and fear of vendor lock-in. Both are usually addressed through transparent audit processes. Logging all activities (admin changes as well!) to a centralized log server is generally a welcome first step. Laws vary between states and countries and businesses need to limit their liability. So a second necessary step is to be able to track and enforce where a Virtual Machine is physically located at all times.

Entering the cloud, an IT department will suddenly struggle with license management and the proliferation of images. Neither issue existed when workloads were tied to a physical computer. The IT technician's job will have to change accordingly. A car analogy I heard, which seems perfectly appropriate, is the change from fiddling with the engine by ear in earlier years to finding the diagnostic plug and analyzing the on-screen measurements. With large clouds running thousands of virtual machines, user interface designers are also in for a ride managing complexity.

The cloud provider will need to go to great lengths to ensure Virtual Machines from one company cannot sniff the traffic of another company, since both are co-located in the same data center, sometimes running on the same processor. This is known as the multi-tenancy problem.

Another problem that is bound to plague clouds if not carefully handled is rogue, pirated and unauthorized VMs consuming resources.

A last question that seems simple but still requires careful thinking is "Who is responsible for updating the OS?"

Technical issues

The Internet Protocol and IP addresses were meant to describe both identity and location. It is also usual for local networks to be configured with default 192.168.* addresses. Ingenious engineers will be busy solving the address conflicts and performance issues that arise out of those designs in virtualized environments.

At the heart of a hypervisor, virtualized I/O performance is a very difficult problem. With an increasing number of business applications being read-data bound while, for example, Windows was optimized for write-through output, the headaches and fine tuning will keep going for a long time.

Operating systems assume they own all of the physical memory. Having virtualized physical memory without the OS's consent, the hypervisor thus only allocates machine memory when necessary. There are a whole lot of techniques for transparent page sharing, hypervisor swapping, etc., but the most mind-blowing cleverness I have seen is "ballooning" as a way to reclaim machine memory.

Complete virtualized desktops are wonderful in that they deliver the OS and the App in a single package. It removes a lot of update and dependency headaches. On the other hand, such packages have the side effect of making native anti-virus software hang the local machine for a while when checking a downloaded >30Mb VM image, and report all kinds of improper behavior (of course, an OS is bundled in that VM and will most likely contain "suspicious" kernel code).

Conclusion

Voila, lots of exciting news from VMWorld this year! Clouds are making their way into the IT infrastructure of many companies and are bound to tremendously change how businesses organize themselves and deliver value to their customers. I hope you enjoyed it.

Ubuntu Server and VMware Fusion

by Sebastien Mirolo on Sat, 17 Oct 2009

Anticipating the upcoming release of Ubuntu, I decided to set up an Ubuntu Server Edition sandbox under VMware to test the configuration scripts. I do not know about you, but I feel a little anxious about upgrading the customer-facing Internet server and fixing bugs in real-time.

VMware Fusion's update tool suggested I download and install version 2.0.6. I did that, then created an Ubuntu sandbox virtual machine and installed from the 64-bit Server Edition ISO image (ubuntu-9.04-server-amd64.iso).

It is when I tried to mount an OSX user account folder on the sandbox virtual machine that things started to break down. The "Install VMware Tools" menu did not seem to do anything, and I could not find the VMware Tools .tar.gz package anywhere on the filesystem either. Fortunately, I already had an Ubuntu Desktop Edition virtual machine where the VMware Tools DVD-ROM mounted correctly as /media/cdrom1/VMwareTools-7.9.7-196839.tar.gz. Since I had already set up that Ubuntu Desktop Edition virtual machine with an ssh server, I remote copied the package onto the Ubuntu Server Edition sandbox. After installing the VMware tools and rebooting the virtual machine, I could access the OSX files through /mnt/hgfs/username as usual.

# On the Ubuntu Desktop Edition virtual machine:
ifconfig | grep 'inet addr'
# On the Ubuntu Server Edition virtual machine:
scp ipaddress:/media/cdrom1/VMwareTools-7.9.7-196839.tar.gz .
tar zxvf VMwareTools-7.9.7-196839.tar.gz
cd vmware-tools-distrib
sudo ./vmware-install.pl
sudo shutdown -r 0

I still cannot copy or paste from and to the Ubuntu Server Edition virtual machine and still need to use the ctrl + command key combination, maybe because there is no X11 installed on that virtual machine. In any case, I have a way to move files from the OSX filesystem to the Ubuntu Server Edition sandbox, so debugging the upgrade path for the live server is under way.
