
BLU Discuss list archive



[Discuss] SSD



On 05/31/2012 12:59 PM, Jack Coats wrote:
> I ran virtual systems on mainframes back when.
>
> One neat thing they did then was to basically NOT do virtual
> memory on the 'client OS'.  The 'client OS' could figure out it was
> being run as a virtual machine, and through special communications
> (called Diag back when) would tell the host OS what it needed done.
> That way the hostOS actually did all the virtual memory management for
> the guest systems.  ... If we turned that off, the overhead on the
> guestOS went WAY up.
>
> Normal cpu overhead back then was about 5% to run as in a guestOS
> rather than directly in the hostOS.  The first time I remember being
> told we were running 'second level', we had been running for a week in
> production and no one had noticed any problems. ...
>
> I wonder how today's virtualized systems' overhead runs?
>
Sounds like you used VM370. With IBM VM370 running an OS/VS1 guest, there
were several interesting options. I found that disabling paging in
OS/VS1 gave me much better performance. Another was print spooling: IBM
OS/VS1 had a terrible spooling algorithm, and VM370's worked much better
(with a few custom patches). This was crazy because it meant double
spooling. Some IBM mainframes had firmware assist for VM370/OS/VS1.
There were other interesting tricks. Our software had been migrated from
Burroughs MCP, which had dynamic drive allocation: when a job needed a
tape drive, it requested one. On IBM's OS/VS1 this was not possible;
tape drives for a job step had to be allocated when the step was
initiated. What we did on VM370 was allocate more logical drives than
physical ones. For instance, if we needed 6 tapes during a job step, we
allocated 6 logical drives, and when a program actually opened a tape,
the operator could simply reassign a physical drive to it.
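
The DIAG mechanism described above is essentially what we now call
paravirtualization: the guest notices it is virtualized and hands work to
the host via a hypercall instead of doing its own paging. A minimal sketch
of that idea, with purely illustrative names (none of these are real VM370
interfaces):

```python
# Sketch of paravirtualized paging: the guest delegates page-in to the
# host via a DIAG-style hypercall rather than running its own paging.
# All class and method names here are hypothetical.

class Host:
    """Host OS: owns real memory and does all paging for its guests."""
    def __init__(self):
        self.backing_store = {}          # (guest id, page) -> contents

    def diag_page_in(self, guest, page):
        # One hypercall replaces the guest's entire paging path.
        return self.backing_store.get((guest, page), b"\0" * 4096)

class Guest:
    def __init__(self, host=None):
        self.host = host                 # non-None means "running virtualized"

    def read_page(self, page):
        if self.host is not None:
            # Paravirtualized path: tell the host what we need done.
            return self.host.diag_page_in(id(self), page)
        # Bare-metal path: the guest would run its own page replacement
        # here, which under a hypervisor means double paging -- the heavy
        # overhead Jack describes when the DIAG path was turned off.
        raise NotImplementedError("native paging not modeled in this sketch")

host = Host()
guest = Guest(host)
data = guest.read_page(7)   # served by the host, no second-level paging
```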
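
The over-allocation trick can be sketched the same way: a job step claims
more logical drives than physical drives exist, and a physical drive is
bound only when a program opens a tape (standing in for the operator's
manual reassignment). Names are hypothetical, not real VM370/OS/VS1
interfaces:

```python
# Sketch of logical-over-physical tape drive allocation. A job step
# allocates logical drives freely; physical drives are bound lazily.

class TapePool:
    def __init__(self, physical_drives):
        self.free = list(physical_drives)   # unbound physical drives
        self.bound = {}                     # logical id -> physical drive

    def allocate_step(self, logical_ids):
        # Job-step allocation is purely logical; no physical drive is tied up.
        for lid in logical_ids:
            self.bound.setdefault(lid, None)

    def open_tape(self, logical_id):
        # Bind a physical drive only at open time ("operator reassigns").
        if self.bound.get(logical_id) is None:
            self.bound[logical_id] = self.free.pop()
        return self.bound[logical_id]

    def close_tape(self, logical_id):
        # Return the physical drive to the pool for the next open.
        drive = self.bound.get(logical_id)
        if drive is not None:
            self.free.append(drive)
            self.bound[logical_id] = None

pool = TapePool(["281", "282", "283"])             # 3 physical drives
pool.allocate_step([f"TAP{i}" for i in range(6)])  # 6 logical drives, fine
d = pool.open_tape("TAP0")                         # physical drive bound here
pool.close_tape("TAP0")
```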

-- 
Jerry Feldman <gaf at blu.org>
Boston Linux and Unix
PGP key id:3BC1EB90 
PGP Key fingerprint: 49E2 C52A FC5A A31F 8D66  C0AF 7CEA 30FC 3BC1 EB90





BLU is a member of BostonUserGroups
We also thank MIT for the use of their facilities.



