
BLU Discuss list archive



[Discuss] ZFS vs. Btrfs



Another fundamental difference is how the two handle mirrored data and
metadata.

ZFS's mirroring is built on conventional plexes. In a simple 4-disk
array, pairs of physical devices are bonded into single virtual devices
(vdevs), and these vdevs are joined to form a larger pool. Anything
written to one disk in a pair is also written to the other disk in the
pair. Striping is performed across the vdevs within the pool.
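
As a concrete sketch of that layout (the pool name "tank" and the
device paths are my placeholders, not anything from this thread), the
whole 2x2 arrangement is declared in one command:

    # Two mirror vdevs joined into one pool; ZFS stripes across them.
    zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd

    # Display the pool -> mirror vdev -> disk hierarchy.
    zpool status tank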

Btrfs mirroring is built entirely on file extents. In a simple 4-disk
volume, all four disks are attached to a single volume whose data and
metadata profiles are mirrored. Any extent written to one device is
also written to a second device, chosen by a balancing algorithm inside
the file system driver.
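
A rough equivalent on the Btrfs side (device paths again placeholders);
the -d and -m flags set the data and metadata profiles:

    # raid1 here means "two copies of every extent somewhere in the
    # volume", not a fixed pairing of disks.
    mkfs.btrfs -d raid1 -m raid1 /dev/sda /dev/sdb /dev/sdc /dev/sdd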

This abstract approach lets you do something that seems weird at first:
mirror sets with odd numbers of devices. To illustrate the idea,
imagine a Btrfs volume with three 1TB disks (3TB raw). In raid0
(striped) you have 3TB of usable capacity, and writing 1TB of data
consumes 1TB of it, leaving 2TB. In raid1 (mirrored) you have 1.5TB of
usable capacity, and writing 1TB of data consumes 2TB of raw space
(two copies of every extent), leaving 0.5TB usable. Every extent is
replicated on two physical devices, so the volume is still resilient
to a single disk failure and can still self-repair corrupted data.
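
A minimal sketch of that three-disk case (paths and mount point are
placeholders); "btrfs filesystem usage" reports the profile-adjusted
numbers:

    mkfs.btrfs -d raid1 -m raid1 /dev/sda /dev/sdb /dev/sdc
    mount /dev/sda /mnt

    # "Free (estimated)" should come out near 1.5TB for three 1TB
    # disks, since raid1 stores every extent twice.
    btrfs filesystem usage /mnt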

Btrfs raid10 requires at least 4 devices.
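
If you want striping across the mirrored copies as well, the same tool
covers it (placeholder devices again):

    # raid10 mirrors every extent and stripes across the copies;
    # it refuses to build with fewer than 4 devices.
    mkfs.btrfs -d raid10 -m raid10 /dev/sda /dev/sdb /dev/sdc /dev/sdd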

-- 
Rich P.


