
BLU Discuss list archive



ZFS and block deduplication



On Apr 27, 2011, at 5:00 PM, Edward Ned Harvey wrote:
> 
> It's even more difficult than that ...  Yes, many files span multiple
> blocks, and therefore begin at the beginning of one block and end in the
> middle of another block, but the hashes are calculated on a per-block basis
> up to 128k.  So any files that are smaller than 128k *might* occupy a block
> by themselves, but since a whole bunch of files are probably being written
> at a time, most likely the write aggregation is consolidating many small
> writes into a single block.

This is not difficult.  I just target the httpd executable instead of apachectl.  The apache2 executable on the Debian system that I am looking at right now is 357K, so it has at least two 128K blocks to itself.  The exim4 executable is 680K, at least five blocks to itself.  ntpd is 410K, three blocks.  sshd is 429K, three blocks.  It's not hard to find a system binary to attack this way.
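(To make the arithmetic concrete: with the default 128K recordsize, a file fills floor(size / 128K) whole records, and each record is keyed in the dedup table by its checksum.  A rough Python sketch of that idea, using SHA-256 as a stand-in for the pool's checksum and making no claim about ZFS internals:

import hashlib

RECORD_SIZE = 128 * 1024  # default ZFS recordsize (assumed here)

def record_hashes(path, record_size=RECORD_SIZE):
    # Split a file into fixed-size records and hash each one, roughly
    # the way a block-level dedup table is keyed.
    hashes = []
    with open(path, "rb") as f:
        while True:
            record = f.read(record_size)
            if not record:
                break
            hashes.append(hashlib.sha256(record).hexdigest())
    return hashes

def full_records(size_bytes, record_size=RECORD_SIZE):
    # Number of 128K records the file fills entirely by itself.
    return size_bytes // record_size

for size_kib in (357, 680, 410, 429):
    print(size_kib, "K ->", full_records(size_kib * 1024), "full records")
# 357 K -> 2, 680 K -> 5, 410 K -> 3, 429 K -> 3

An attacker only needs to predict the contents, and therefore the hash, of any one of those whole records.)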


> And of course the countermeasure of all the above is trivial.  Enable
> verification.  ;-)

No, the countermeasure is to keep system and user storage separate from each other, as system admins have been doing for about as long as we've had multi-user systems to admin.
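(For reference, the "verification" in the quoted message is ZFS's dedup "verify" option: when a checksum matches an existing dedup table entry, the new record is compared byte for byte with the stored one before the write is collapsed into a shared reference, so a fabricated hash collision gains the attacker nothing.  A conceptual sketch of that check, not ZFS's actual code:

import hashlib

def dedup_write(new_record, dedup_table, storage):
    # dedup_table maps checksum -> storage address; storage maps address -> bytes.
    key = hashlib.sha256(new_record).digest()
    if key in dedup_table:
        existing = storage[dedup_table[key]]
        if existing == new_record:       # the "verify" step
            return dedup_table[key]      # identical data, safe to share the block
        # Checksum collision with different data: do not share, store separately.
    addr = len(storage)
    storage[addr] = new_record
    dedup_table.setdefault(key, addr)
    return addr

table, store = {}, {}
a = dedup_write(b"x" * 1024, table, store)
b = dedup_write(b"x" * 1024, table, store)   # same data -> same address
assert a == b
)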

--Rich P.






