As part of our ongoing research, we started running DryadLINQ applications on a virtual cluster of Windows Server 2008 VMs on top of Xen. Although we saw acceptable performance degradation with para-virtualized Linux guests in our previous research, with Windows guests the degradation was much higher. We also noticed that whenever we ran a DryadLINQ job with a considerable amount of communication, the VM running the DryadLINQ job manager crashed. We had a very similar experience earlier with Windows VMs provided by GoGrid; however, those VMs had only 1 CPU core, whereas the VMs we just ran have 8 CPU cores. (Note: this effect has nothing to do with DryadLINQ itself, but with the handling of I/O operations by the guest and host OSs.)
Overall, the full virtualization approach may not work well for parallel applications, especially those with considerable inter-process communication requirements. The following is a brief analysis I wrote.
Looking at the development of virtualization technologies, we can identify three techniques, roughly in order of appearance:

1. Full virtualization (e.g., VMware Server)
2. Para-virtualization (e.g., Xen, Hyper-V)
3. Hardware-assisted virtualization (e.g., Hyper-V, Virtual Iron, 64-bit VMware Workstation)
So far, para-virtualization seems to be the best approach for achieving good virtualization performance. However, it requires modified guest operating systems. This is the problem for Windows guests, and that is why we cannot run Windows Server 2008 on Xen as a para-virtualized guest. The hardware-assisted approach is reportedly still in its early stages and does not perform well yet, but I think it will catch up soon.
Hyper-V, coming from Microsoft, may provide a better virtualization solution for Windows guests; it currently supports both para-virtualization and hardware-assisted virtualization.
Given these observations and the results from our tests, it currently does not seem feasible to run a single virtualization layer (such as Xen) on a cluster and provide both Windows and Linux VMs for parallel computations. (Here I assume that Linux on Xen performs better than Linux on Hyper-V; we need to verify this.)
However, if we adopt a hybrid approach, using technologies such as xCAT to boot Windows and Linux bare-metal host environments and then providing Linux virtualization with Xen and Windows virtualization with Hyper-V, we may be able to utilize the best technologies from both worlds.
The next things to figure out are the performance of these virtual environments and how we can run Dryad, Hadoop, and MPI on them.
Although we did not see the “mythical” 7% virtualization overhead, the numbers are not too bad either: on private clouds/clusters, at least, we saw 15% to 40% performance degradation.
However, we need to figure out ways of handling large data sets in VMs if we want to use Hadoop and Dryad for a wide variety of problems. The main motivation of MapReduce is “moving computation to data.” In virtualized environments, we currently store only an image of the VM, without data. If the data is large, we have to either move it to the local disks of the virtual resources before we run computations, or attach blobs/block storage devices to the VMs. The first approach does not preserve anything once the virtual resource is rebooted, whereas the second approach adds network links between the data and the computations (moving data). If our problems are not very data intensive, we can ignore these aspects for the moment, but I think this is worth investigating.
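To make the trade-off between the two approaches concrete, here is a back-of-the-envelope sketch. All the throughput and compute numbers below are illustrative assumptions, not measurements from our tests; the point is only that reading input over a network-attached volume stretches the I/O phase of a data-intensive job.

```python
# Illustrative cost model (assumed numbers, not measured values):
# compare a job whose input is already on the VM's local disk against
# the same job reading its input from a network-attached block device.

def job_time_s(data_gb, io_mbps, compute_s_per_gb=10.0):
    """Total job time: read the input at io_mbps, then process it."""
    read_s = data_gb * 1024 / io_mbps  # GB -> MB, divided by MB/s
    return read_s + data_gb * compute_s_per_gb

DATA_GB = 100
local_disk = job_time_s(DATA_GB, io_mbps=80.0)   # assumed local-disk throughput
block_store = job_time_s(DATA_GB, io_mbps=30.0)  # assumed network-volume throughput

print(f"local disk:    {local_disk:.0f} s")
print(f"block storage: {block_store:.0f} s")
```

With these assumed rates, the same 100 GB job spends roughly twice as long when the data sits behind a network link, which is exactly the “moving data” penalty that MapReduce-style data locality tries to avoid.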
Hope to add more results soon.