
BLU Discuss list archive



Re: Virtualization preferences



 On Sat, 26 Jul 2008 10:55:20 -0400 
"Matt Shields" <[hidden email]> wrote: 

> On Sat, Jul 26, 2008 at 10:17 AM, Jerry Feldman <[hidden email]> wrote: 
> 
> > On Sat, 26 Jul 2008 08:51:40 -0400 
> > Kent Borg <[hidden email]> wrote: 
> > 
> > > Jerry Feldman wrote: 
> > > > I think the future of computing will be leaning very heavily on 
> > > > virtualization. 
> > > 
> > > And isn't that strange.  Isn't this an indication of major failure in 
> > > operating system design? 
> > > 
> > > I remember when computers were sometimes nearly naked hardware.  The 
> > > idea of having a real OS sounded so good, it would let multiple programs 
> > > run and isolate them from each other.  Time passes, and I am a Linux 
> > > user.  I know a fair amount about it, and it can run tons of different 
> > > programs at once...yet I run Linux guests on top of Linux hosts.  And 
> > > others do that too. 
> > 
> > Back in the 1970s, I ran an IBM data center with VM370 and OS/VS1 as 
> > the guest production OS. In an unintentional benchmark, we had to rerun 
> > payroll. The first night we ran it with OS/VS1 as the native OS, and 
> > the second night we ran it under VM370 alongside online CMS users. (CMS 
> > was a single-user OS used by our developers.) We got better throughput 
> > from OS/VS1 under VM370 than native. I am also aware of a few other 
> > companies that achieved much better throughput running DOS (IBM's 
> > mainframe DOS) under VM. But both of these cases were the result of 
> > bottlenecks in the OS that were alleviated by VM. 
> > 
> > The reason I think virtualization will be the way to go in the data 
> > center is flexibility. Hardware is constantly evolving to become 
> > faster and smaller, with more memory. While there will always be some 
> > legacy systems, the ability of virtualization systems to reconfigure 
> > dynamically across multiple machines lets data center people remove 
> > the physical boundaries, and also bring new hardware into the mix 
> > without shutting down. I think one of the talks planned for the event 
> > will be on this feature. 
> > 
> > -- 
> > Jerry Feldman <[hidden email]> 
> > Boston Linux and Unix 
> > PGP key id: 537C5846 
> > PGP Key fingerprint: 3D1B 8377 A3C0 A5F2 ECBB  CA3B 4607 4319 537C 5846 
> > 
> > 
> I've been thinking along the same lines.  In my current job we have a 
> number of large datacenters and tons of different applications, and we 
> have always ordered specific hardware for specific tasks. This has 
> become quite costly, especially since some of these applications sit 
> dormant some of the time. 
> 
> I evaluated a product called AppLogic (just another VM app) on which 
> you build your virtual infrastructure (firewalls, load balancers, 
> storage, servers); it handles provisioning and management across all 
> your servers.  It also did some cool stuff like distributing your data 
> across your entire grid for redundancy, so you didn't need a SAN. 
> But when I was talking to management, I was trying to get them away 
> from ordering specific hardware for specific projects and thinking 
> more along the lines of a giant grid, like Google or Amazon, where you 
> just provision your applications.  Who cares what your hardware is? 
> Let the VM application worry about properly distributing the load and 
> what the hardware is doing.  Sure, you'll still need to monitor the 
> host machines to see when they need to be replaced, but this makes 
> upgrading hardware so much easier: you put the host machine into 
> maintenance mode, the VMs get migrated off, then you remove the host 
> from the cluster.  So much simpler, and it saves a ton of time; 
> anything that saves me time I'm all for. 
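The maintenance-mode workflow described above can be sketched roughly as follows. This is only an illustrative simulation with hypothetical host and guest names; a real cluster would drive a hypervisor management API (e.g. libvirt) rather than an in-memory dict.

```python
# Minimal sketch of the host-maintenance workflow: put a host into
# maintenance, migrate its guests to the remaining hosts, then drop
# it from the cluster. All names here are made up for illustration;
# the dict operations stand in for real live-migration calls.

def drain_host(cluster, host):
    """Migrate every guest off `host`, then remove it from the cluster."""
    guests = cluster.pop(host)           # maintenance mode: host takes no guests
    targets = sorted(cluster)            # remaining hosts, in a stable order
    for i, vm in enumerate(guests):
        # round-robin placement stands in for the VM app's load balancing
        cluster[targets[i % len(targets)]].append(vm)
    return cluster

cluster = {
    "host1": ["web-vm", "db-vm"],        # host going into maintenance
    "host2": ["cache-vm"],
    "host3": [],
}
drain_host(cluster, "host1")
print(cluster)
```

After the drain, "host1" is gone from the cluster and its guests are running on the surviving hosts, so the physical box can be replaced without any application downtime.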


BLU is a member of BostonUserGroups
We also thank MIT for the use of their facilities.




Boston Linux & Unix / webmaster@blu.org