
BLU Discuss list archive



No subject



For computers with dual power supplies, we plugged one supply into each of
the two separate power strips. For machines with only one power supply, we
found transfer-switch devices that would plug into both strips and 'fail
over' if the primary failed. We made sure the primary loads were balanced
within each rack.
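The balancing step above can be sketched as a simple greedy assignment. This
is only an illustration; the hostnames and wattages are invented, not from
the original post.

```python
# Hypothetical single-supply machines and their draw in watts.
loads = {"web1": 350, "web2": 350, "db1": 600, "backup1": 450, "dns1": 150}

strips = {"A": 0, "B": 0}   # running load on each power strip
assignment = {}

# Greedy heuristic: place the largest loads first, each on the
# currently lighter strip, so the two primaries stay balanced.
for host, watts in sorted(loads.items(), key=lambda kv: -kv[1]):
    strip = min(strips, key=strips.get)
    strips[strip] += watts
    assignment[host] = strip

print(strips)  # both strips end up carrying 950 W here
```

In practice you would balance by measured amperage per strip, but the idea
is the same: never let one strip carry most of the primaries.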

Emergency lighting, backbone networking, etc. all ran off separate
UPS-backed power circuits. Even the HVAC ran off UPS power.

...

When I worked for a small regional bank, they used the idea of separate
UPSes in the base of each rack, typically 1500 VA units or equivalent. We
did monitor the UPSes centrally, and replaced one or two batteries a week
on average. I thought we should have kept spares, but we never did. The air
conditioning was not backed up, so if we had a power outage we still had to
shut down (think Houston, TX in August, with rolling power outages). Bad
design for a data center that was mission-critical for the business, IMHO.

At that bank we ran lots of Dell and IBM Intel-architecture servers, Cisco
routers, a couple of IBM AIX boxes, and the like. We tried to spec
everything to be 120 VAC. Someone found a 'deal' on some old equipment that
required either 220 VAC or 208 VAC to be installed, but we had those wired
separately; they were not on UPSes. We had a larger (but old) UPS, serviced
annually (I don't remember the brand), that mainly powered the IBM System
32 (or whatever their mid-series of PowerPC-type low-end 'mainframes' were
called). Not the big ones. But it was on this separate, fairly large UPS,
along with its direct peripherals.

...

If you need special power or data center design gear (raised flooring,
environmental monitors, etc.), Liebert has been in the business 'forever'.
I think they are now part of Emerson, at emersonnetworkpower.com.

If you want some good information about UPSes, go to the apcc.com web site
and read till your eyes cry for mercy. You will understand the watts vs.
VA difference, at least enough to make practical decisions (not to
engineer, but as a knowledgeable user).
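The watts-vs-VA distinction boils down to power factor: real power (watts)
equals apparent power (VA) times the load's power factor, so a UPS must be
sized on VA, not just watts. A quick sketch (the 800 W load and 0.9 power
factor are illustrative numbers, not from the original post):

```python
def required_va(watts, power_factor):
    """VA rating needed to carry a real-power load of `watts`
    at the given power factor (real W = apparent VA * PF)."""
    return watts / power_factor

# Hypothetical: 800 W of servers with a 0.9 power factor
# needs roughly 889 VA of UPS capacity, before any headroom.
print(round(required_va(800, 0.9)))  # -> 889
```

Note that a UPS also carries its own watt limit (its inverter rating), so
you have to stay under both numbers.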

---
When working for an oil company, we were putting in a new Unix-based data
center. We were just starting to use hardware RAID controllers with 'huge'
8 GB disk drives on them (five per controller). The starting power draw of
those drives was significant, but they had a 'delayed startup' option: if
you turned it on for each drive, the drive would delay spin-up by its SCSI
ID times 10 seconds. This allowed us to recover after a power failure
without killing our data-center-wide UPS. All that to say: understand the
power options on equipment you put in the data center. Without the delay
we would have needed a much larger UPS to handle the startup currents.
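The effect of that ID-times-10-seconds stagger is easy to see with a peak
overlap count. The spin-up duration and surge current below are assumed
illustrative figures, not values from the original post:

```python
SPINUP_SECONDS = 8    # assumed spin-up duration per drive
SPINUP_AMPS = 2.5     # assumed surge current per drive (illustrative)
IDS = [0, 1, 2, 3, 4] # five drives per controller, as in the anecdote

def peak_concurrent(start_times, duration):
    """Maximum number of drives spinning up at the same instant."""
    events = []
    for t in start_times:
        events.append((t, 1))             # spin-up begins
        events.append((t + duration, -1)) # spin-up ends
    peak = cur = 0
    for _, delta in sorted(events):
        cur += delta
        peak = max(peak, cur)
    return peak

no_delay  = peak_concurrent([0] * len(IDS), SPINUP_SECONDS)
staggered = peak_concurrent([i * 10 for i in IDS], SPINUP_SECONDS)
print(no_delay * SPINUP_AMPS, staggered * SPINUP_AMPS)  # 12.5 vs 2.5 A surge
```

With the 10-second stride longer than the spin-up time, at most one drive
per controller is surging at any instant, which is why the UPS could be
sized for steady-state load rather than worst-case inrush.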

---

I was installing a computer on a ship. We had a problem: without a UPS, it
was doing corrupt writes on occasion. It turned out they would happen when
the power sagged whenever someone used the elevator (it was on the same
circuit, even though I was told it was not). My suggestion was to put a UPS
on the computer as a power conditioner. The vendor said it was not
necessary. ... The problem was resolved after I left the project, by
putting a UPS on the computer. ... Life goes on.


Sorry for the LONG response, but I hope it helps in understanding some
real-world data center power issues.



BLU is a member of BostonUserGroups
We also thank MIT for the use of their facilities.




Boston Linux & Unix / webmaster@blu.org