
BLU Discuss list archive



(no subject)



On Sat, Jul 10, 2004 at 05:40:50PM -0400, julesg at newebmail.com wrote:
> 
> First, the statements I have made are truthful and accurate in all respects.
> ================================================
> 
> It is certainly true that this claim is a strong one and that many observers
> think I am either a fraud or mistaken.  But the fact is, while I don't have a
> system (a compressor and decompressor) that will compete with, say, ARJ or
> PKZIP, I do have a system that compresses files and then, at a 'receiver'
> site, decompresses the file.  And this system does meet my basic claim: that
> I can compress a file several times, recompressing the output from the prior
> stage.
> 
> Slowly.
> 
> 
> That's it.  Everything I have just said is true.

Will it always compress a compressed file and make the output at
least one byte smaller than before? Or is there a limit, after
which it stops working no matter how long it runs?
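For comparison, here's a minimal sketch of what usually happens when a
general-purpose compressor is fed its own output. This uses Python's
standard zlib module, not your system, and assumes nothing about your
method; it just shows the first pass removing the redundancy and the
later passes having nothing left to squeeze:

    import zlib

    # Highly redundant input: compresses well on the first pass.
    data = b"the quick brown fox jumps over the lazy dog " * 2000
    for i in range(5):
        data = zlib.compress(data, 9)
        # After pass 1 the data is close to random, so further
        # passes stop shrinking it (and typically add overhead).
        print(f"pass {i + 1}: {len(data)} bytes")

The counting argument says the same thing in general: no lossless
compressor can make every input smaller, so repeated recompression has
to hit a floor somewhere.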

-dsr-



