
BLU Discuss list archive



a few facts



Jules Gilbert wrote, On 08/15/2004 11:00 PM:
> I realized I omitted something important -- sorry.
>  
>  
> This process should not be regarded as lossless; but lossless 
> compressors can readily be constructed using this method.

Well, yes, that's quite important.

When you tell a bunch of people "I have this wonderful 
compression/decompression algorithm that takes a lot of resources, and needs 
very random data, but it works", with no other qualifiers, most people are 
going to assume you mean (a) something that will work with any random data, 
not data that is known to possess any characteristics other than randomness, 
and (b) that it is lossless.  Those are the defaults that will be assumed 
unless you say otherwise, and this is the first time (to my recollection) 
that you have said that your method is lossy.

Most of what Mark has been proposing, to the best of my understanding, 
requires either knowing more about the data beforehand, or having a 
monstrous toolbox of algorithms to apply against the data, in the hope of 
finding segments of the input stream that match an algorithm that can be 
represented in less space than the original segment.  I would have to agree 
with Mark that it is theoretically possible, but it seems to me that it 
would be rather rare.  If the data is truly random, then running into a 
sequence in the stream that could be represented by an equation smaller than 
the sequence itself should be quite unlikely, regardless of computing 
power.  But again, Mark said "It could happen", not "It's feasible or 
practical", so I would have to agree with him there.

As for Jules' algorithm, I thought it was lossy when he explained the part 
about curve fitting.  Making data smaller by throwing away part of it is not 
magic.  I was hoping I was mistaken, though, because he never said it was 
lossy (until a few hours ago).
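
To make that concrete, here is a rough sketch of the kind of thing I mean.  
This is my own toy example, not Jules' actual algorithm; the sample count 
and polynomial degree are made up.  Fitting a curve and keeping only the 
coefficients does make the representation much smaller, but the original 
values cannot be recovered exactly, which is the definition of lossy.

    # Toy "compression by curve fitting": keep 8 polynomial coefficients
    # instead of 256 samples.  Smaller, yes -- but the reconstruction no
    # longer matches the original data, so the scheme is lossy.
    import numpy as np

    rng = np.random.default_rng(0)
    samples = rng.random(256)                 # the "original" data
    x = np.linspace(-1.0, 1.0, len(samples))

    coeffs = np.polyfit(x, samples, deg=7)    # only 8 numbers survive
    reconstructed = np.polyval(coeffs, x)

    max_error = np.max(np.abs(reconstructed - samples))
    print("max reconstruction error:", max_error)   # far from zero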

Jules, the reason I never took you up on your offer is that I didn't feel I 
had a strong enough math background to fully understand your algorithm, so I 
would not have been able to honestly say I saw it work and understood it.  
But now that you have stated that it's a lossy algorithm, it's clear that 
what you say (now) you can do, you can indeed do.  With each iteration you 
throw away more of the original data, so of course you can make it smaller, 
but the "it" is irrevocably changed in the process.

The other reason I didn't accept your offer is that showing a video of a 
computer performing math would not make for a very dynamic, captivating 
meeting, and I have to think of BLU first.







