
BLU Discuss list archive



[Discuss] Adventures in N40L Land



On 02/14/2012 12:34 PM, Bill Bogstad wrote:
> On Tue, Feb 14, 2012 at 11:57 AM, Richard Pieri <richard.pieri at gmail.com> wrote:
>> On 2/14/2012 11:32 AM, Bill Bogstad wrote:
>>> Reason?  Also, do you mean actual physical geometry or the lies that
>>> all drives seem to tell now?  (Which, from what I've seen on a random
>>> collection of drives, all seem to be the same anyway.)
>>
>> Unbalanced disks generate unbalanced I/O loads which the RAID system may
>> not be able to handle properly.  This can cause the RAID controller to
>> fault good disks that aren't keeping up with the faster-performing disks
>> in the set.
> With in-disk command queueing, and the fact that most (all?) RAID5
> implementations don't bother to read parity blocks during a read
> (unless an error occurs), I would think the head positions would get
> out of sync even with identical drives due to differences in the
> stream of READ requests.  The result could be different times for an
> operation to the same block location.  And what about the automatic
> bad-block sparing that most drives do now (and hide from the
> controller unless you explicitly use SMART to find out)?  That is
> going to effectively give "identical" block locations completely
> different performance characteristics.  Given both of these, I would
> think that a sufficiently pathological stream of READ requests could
> cause fairly significant differences in performance even with
> "identical" drives (see the layout sketch below the quoted text).
>
> Have you seen failures caused by this?  Can you provide more details
> about the circumstances?
>
> That's not to say that I couldn't see a possible problem with mixing
> 5400/7200 RPM drives or completely different transfer rates.  But if
> all the drives are more or less in the same performance band, is that
> going to be enough of a difference to matter?
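Bill's point about reads never touching parity is easy to see with the
left-symmetric layout that Linux md uses by default. Here is a minimal
sketch; the 4-disk set, the one-block chunk size, and the strided read
pattern are all made up for illustration:

    # Left-symmetric RAID5 layout (the Linux md default), sketched for a
    # hypothetical 4-member set with a toy chunk size of one block.
    NDISKS = 4

    def locate(chunk):
        """Map a logical data chunk to (disk, stripe). Parity rotates
        from the last disk toward the first; data chunks wrap around
        starting just after the parity disk."""
        stripe, offset = divmod(chunk, NDISKS - 1)
        parity_disk = (NDISKS - 1) - (stripe % NDISKS)
        return (parity_disk + 1 + offset) % NDISKS, stripe

    # Tally which member disk services each chunk of a made-up strided
    # read.  Parity is never read while the array is healthy, so the
    # per-disk counts depend entirely on which data chunks the workload
    # happens to touch.
    reads_per_disk = [0] * NDISKS
    for chunk in range(0, 24, 2):
        disk, _ = locate(chunk)
        reads_per_disk[disk] += 1

    print(reads_per_disk)   # [6, 0, 6, 0] -- lopsided, identical drives

With that stride the reads land on only two of the four members, so even
a matched set of drives sees very different loads.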
Mismatched drives could have been a contributing factor when Pegasus
failed last summer. We had a couple of 72GB drives, one of which went
bad. We had purchased a couple of 300GB drives, and JABR added the two
300GB drives into the existing RAID array by partitioning them.
Ultimately we got into a situation where the entire system died.
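If the partitioning ended up putting more than one member of the same
array on a single physical drive, the array would also have lost its
single-failure tolerance. A sketch of that failure mode follows; the
device names and layout are hypothetical, not the actual Pegasus
configuration:

    from collections import Counter

    # Hypothetical member-partition -> physical-drive map.  The names
    # and layout are invented for illustration; Pegasus may have
    # differed.
    members = {
        "sda1": "sda",  # 72GB drive
        "sdb1": "sdb",  # surviving 72GB drive
        "sdc1": "sdc",  # 72GB partition on the first 300GB drive
        "sdc2": "sdc",  # second 72GB partition on the SAME 300GB drive
    }

    # RAID5 tolerates the loss of exactly one member.  A physical drive
    # carrying two or more members turns a single drive failure into a
    # multi-member failure, which kills the array.
    for drive, count in Counter(members.values()).items():
        if count > 1:
            print(f"losing {drive} takes out {count} members: array gone")

Whether or not that was the exact mechanism, stacking members on one
spindle also concentrates I/O on it, which feeds back into the
unbalanced-load concern above.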

-- 
Jerry Feldman <gaf at blu.org>
Boston Linux and Unix
PGP key id: 3BC1EB90
PGP Key fingerprint: 49E2 C52A FC5A A31F 8D66  C0AF 7CEA 30FC 3BC1 EB90




