
BLU Discuss list archive



ZFS and block deduplication



> From: discuss-bounces-mNDKBlG2WHs at public.gmane.org [mailto:discuss-bounces-mNDKBlG2WHs at public.gmane.org] On Behalf
> Of Bill Bogstad
> 
> > The only difficulty is working up an exploit with a matching hash
> > before 6:25 AM tomorrow.

It's even more difficult than that ...  Yes, many files span multiple
blocks, and therefore begin at the start of one block and end in the
middle of another, but the hashes are calculated on a per-block basis,
with blocks up to 128k.  So any file smaller than 128k *might* occupy a
block by itself, but since files are usually written a whole bunch at a
time, most likely the write aggregation is consolidating many small
writes into a single block.
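
To make the per-block hashing concrete, here's a toy Python sketch (not
ZFS source; the names are mine, and 128k is just ZFS's default
recordsize):

    import hashlib

    RECORDSIZE = 128 * 1024  # ZFS default recordsize

    def block_hashes(data, recordsize=RECORDSIZE):
        """One checksum per record-sized block, the way dedup keys its table."""
        for off in range(0, len(data), recordsize):
            yield hashlib.sha256(data[off:off + recordsize]).hexdigest()

    # A 300k file spans three records, so it contributes three independent
    # dedup-table entries rather than one whole-file hash.
    print(len(list(block_hashes(b"\x00" * 300 * 1024))))  # -> 3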

So even if you have a technique for calculating data that generates a
hash collision, you're not quite sure what data you need to collide with,
because that ultimately depends on what activity is taking place on the
target machine at the time the updates are applied...  And of course,
simply generating a collision isn't enough to do anything useful (unless
your goal as an attacker is simply to cause random corruption).  You have
to generate a specific collision that corrupts the data in just the right
way to hand yourself an exploitable flaw...
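
If it helps, here's a toy model of the failure mode (a deliberately weak
one-byte "hash" so collisions are cheap to find; real ZFS uses sha256,
and all the names are mine).  Note the sketch hands the attacker the
exact target block up front, which is precisely the part argued above to
be hard:

    import hashlib

    def weak_hash(block):
        return hashlib.sha256(block).digest()[0]  # one byte: collisions abound

    store = {}  # checksum -> stored block (dedup with no verify)

    def write_block(block):
        store.setdefault(weak_hash(block), block)  # colliding data never lands
        return weak_hash(block)

    legit = b"tomorrow's update block"
    # attacker brute-forces *some* colliding block and writes it first
    evil = next(bytes([i, j]) for i in range(256) for j in range(256)
                if weak_hash(bytes([i, j])) == weak_hash(legit))
    write_block(evil)
    addr = write_block(legit)   # dedup hit: the legit data is discarded
    assert store[addr] == evil  # the pool now serves the attacker's bytes

And note what the assert actually demonstrates: the attacker arranged
*that* a substitution happened, not that the substituted bytes do
anything useful in place of the real update.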

And of course the countermeasure to all of the above is trivial.  Enable
verification.  ;-)
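
On a real pool that's something like "zfs set dedup=sha256,verify <pool>"
(pool name is yours; check zfs(8) on your platform for the exact property
values).  With verify on, a checksum match triggers a byte-for-byte
comparison before the write is deduped, so continuing the toy sketch
above, the collision is caught:

    def write_block_verified(block):
        h = weak_hash(block)
        existing = store.get(h)
        if existing is not None and existing != block:
            # checksum matched but the bytes didn't; real ZFS just writes
            # the block out separately instead of deduping -- raising here
            # only makes the detection visible
            raise ValueError("checksum collision caught by verify")
        store.setdefault(h, block)
        return h

    store.clear()
    write_block_verified(evil)
    try:
        write_block_verified(legit)  # same checksum, different bytes
    except ValueError:
        print("verify refused the dedup; legit data gets its own block")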





