Virtualization

Overview

Rationale

In addition to the general advantages of virtualization in a computing infrastructure, there are potentially several advantages specific or especially important to the GO.

Consistency

AmiGO is currently developed on one platform and put into production on others (see Monoculture below), so keeping those environments in sync takes effort. Running everything from a common virtual machine image would keep the development and production environments consistent.

This would also make the move from development to production faster and more predictable.

Redundancy

Given that no infrastructure is bullet-proof, the ability to take a running machine and move it to a different facility without any downtime is a significant advantage.
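
As an illustration only: this kind of live migration can be scripted against KVM through libvirt's Python bindings. The connection URIs and the guest name below are placeholders, and the actual GO infrastructure may well use different tooling.

 import libvirt
 
 # Connect to the local KVM/QEMU hypervisor and to the destination host.
 # Both URIs are placeholders for illustration.
 src = libvirt.open('qemu:///system')
 dst = libvirt.open('qemu+ssh://backup-host.example.org/system')
 
 # Look up a running guest by name ('amigo-vm' is a hypothetical name).
 dom = src.lookupByName('amigo-vm')
 
 # Ask libvirt to live-migrate the running guest without shutting it down.
 dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)
 
 src.close()
 dst.close()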

Distribution

Similar to the idea of the experimental GO software ISO images, VM images are potentially a way to distribute GO software.

Also, given its larger size, a VM image could provide a more complete distribution than an ISO image.

Platforms

Ubuntu/Debian

Currently, the preferred platform for future development on AmiGO is Ubuntu 10.04. Ubuntu is a widely used, well-supported, Debian-based distribution.

In line with consistency and speed to production, a Debian-based system would be the natural choice for the virtual machine images as well.

In addition to being the development platform, Ubuntu 10.04 also has direct support for the virtualization and cloud options discussed below.

Eucalyptus

KVM

Amazon API

???
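
For illustration only (the endpoint host, credentials, and image ID below are placeholders, not real GO values): the EC2-style API can be driven from Python with the boto library, whether the backend is Amazon itself or a Eucalyptus cloud exposing its EC2-compatible interface.

 import boto
 from boto.ec2.regioninfo import RegionInfo
 
 # Point boto at an EC2-compatible endpoint; for Amazon proper the
 # region, port, and path arguments can simply be omitted.
 region = RegionInfo(name='eucalyptus', endpoint='cloud.example.org')
 conn = boto.connect_ec2(aws_access_key_id='YOUR-ACCESS-KEY',
                         aws_secret_access_key='YOUR-SECRET-KEY',
                         is_secure=False, port=8773,
                         path='/services/Eucalyptus', region=region)
 
 # List the available machine images.
 for image in conn.get_all_images():
     print(image.id)
 
 # Start one small instance from a hypothetical image ID.
 reservation = conn.run_instances('emi-00000000', instance_type='m1.small')
 print(reservation.instances[0].id)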

Cons

While the list of cons for virtualization seems fairly minimal, there are several points that need to be considered.

Initial infrastructure cost

While most newer machines support hardware-assisted virtualization (which is necessary to make virtualization worthwhile), it is not worthwhile on older hardware or, given KVM's current ties to x86 architectures, on non-x86 systems.
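
As a rough sketch (assuming a Linux host; the vmx/svm CPU flags are standard, but the helper itself is hypothetical), hardware support can be checked from /proc/cpuinfo:

 def has_hw_virt(cpuinfo_path='/proc/cpuinfo'):
     """Return True if the CPU advertises Intel VT-x (vmx) or AMD-V (svm)."""
     flags = set()
     with open(cpuinfo_path) as cpuinfo:
         for line in cpuinfo:
             if line.startswith('flags'):
                 flags.update(line.split(':', 1)[1].split())
     return 'vmx' in flags or 'svm' in flags
 
 if __name__ == '__main__':
     print('hardware virtualization available: %s' % has_hw_virt())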

In addition, moving large images around can be a slow process. Either patience or faster networking hardware (where not already in place) would be necessary to get all of the benefits.

Both of these issues are at least partially addressed by the fact that new infrastructure rollout is necessary anyway as machines are replaced in the normal course of things. If a virtualization infrastructure is planned and rolled out over time, the additional cost could be minimal.

Increased complexity

Even though virtualization gives better management and resource allocation at a high level, it adds at least one layer between software and hardware, and management and resource allocation at a low level become much more complicated, albeit largely hidden from the administrator.

This can make troubleshooting some kinds of problems more difficult and increases the number of ways that things can go wrong (an issue also connected to the choice between para-virtualization and full virtualization).

Some of this is inevitable--computing has always gotten more complex over time as higher-level abstractions are created over old ones. The increased flexibility presented by virtualization will hopefully pay for the increased complexity and problems associated with it.

Inexperience

Related to the above, as with any new way of doing things, there is going to be a learning period where the solutions to new problems are slow and non-optimal. Hopefully, by using a well-supported software infrastructure with a large community around it, this will be kept to a minimum.

Monoculture

Put all your eggs in the one basket and--WATCH THAT BASKET.
— Pudd'nhead Wilson's Calendar

Currently, AmiGO is developed on one platform, put into production on two others, and its software is written with a fairly wide variety of platforms in mind. Just the act of having AmiGO function in all of these different places helps maintain a clean architecture and good coding practices (e.g. bugs that are not apparent on one platform cause crashes on another). To some extent, it becomes a tradeoff between robustness and the time to develop and get into production. Given resource limitations, favoring the latter is probably best at this time.

From a more paranoid angle, monoculture also increases the risk of a bug or security hole on one instance being trivially exploitable on all instances. I'm unaware of any specific attack against AmiGO/GO software (only general attacks against, say, LBL), but the potential is there.