
BLU Discuss list archive



Best practice for production servers: To reboot or not to reboot?



I have a friend who, years ago (Linux kernel 0.98 or so), put in several
machines that just ran as network routers (before router prices came down).
He felt crushed when, after 3 years, the customer moved to a new physical
location and he had to bring down his nicely running server.

This was before the days of the advanced garbage collection schemes we see
in systems today.  Java and other pseudo-interpreted languages bring memory
management issues to a head.  (One bank I worked for used a major Java-based
web application that required rebooting two of their major servers twice a
day due to Java memory issues.  Not to pick on Java, but any application
that behaves like that should be tossed, IMHO.  Oh yes, this was running
IBM WebSphere.)

When I did IBM mainframes running VM/XA, they did a 'therapeutic reboot'
(IPL) every month as general system maintenance.  It could go over a year,
but (at that time) it had to be rebooted whenever Daylight Saving Time
changed :) ... Still, IBM's VM/XA was a workhorse for us.  It did 'break'
if more than about 2000 interactive users were on it simultaneously, but
that was due to architectural issues in the virtual memory management, and
each user ran their own separate operating system (normally CMS, but we
also ran Amdahl's version of AT&T UNIX, and MVS, all at the same time of
course :) ).

As for 'raw speed', many gaming desktops today could blow away the then
'big mainframes' computationally, but the mainframes had I/O speed and
separate I/O processors (channels) that made a big difference in I/O
capabilities.

Memories are good. ... I still try to forget the 72-hour mainframe moves
to relocate thousands of users across the country, back when a T1 was
'fast networking', at least over distance.  Even in-house channel-to-channel
communication was limited to 3 megabytes per second per channel (now they
have faster channel processors), and that was the same speed to or from
disk drives or network processors.  ... Memories. ... Now back to our
daily discussion :)
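Those twice-a-day WebSphere reboots were ultimately about free memory
shrinking on a long-running box.  A minimal sketch (assuming a Linux host
with /proc mounted; not anything the bank actually ran) of the sort of
check that tells you whether a long-lived server is really leaking before
you schedule a 'therapeutic' reboot of your own:

```shell
# Report how long the machine has been up, in whole days.
uptime_seconds=$(cut -d. -f1 /proc/uptime)
echo "Up $((uptime_seconds / 86400)) days"

# MemAvailable is the kernel's estimate of memory usable without swapping.
# A slow leak shows up as this number trending down across readings.
awk '/^MemAvailable/ {printf "Available: %d MiB\n", $2/1024}' /proc/meminfo
```

A single reading proves little; logging the MemAvailable figure weekly and
watching the trend is what separates "needs a reboot" from "running fine".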






BLU is a member of BostonUserGroups
We also thank MIT for the use of their facilities.




Boston Linux & Unix / webmaster@blu.org