
BLU Discuss list archive



[Discuss] AMD FX-8120 update



On Tue, Mar 6, 2012 at 11:41 AM, Nilanjan Palit <tollygunj at hotmail.com> wrote:

>> > power.
>>
>> ie. Is very dependent on how low a power state you can bring a system
>> to vs. what are essentially continuous (albeit often very limited)
>> computational needs.  His statement is only ALWAYS true,
>
>
> I did not use the words "ALWAYS true" -- I used the words "In general".
> There are always corner cases, which I did not want to go into for lack of
> time/space.

Which the original poster didn't.  I'm suggesting that your "corner
cases" might be bigger than you think.

> I don't understand what you are trying to say here. What is a "batch job"?
> Can you give examples of other tasks that are wall-clock constrained besides
> multimedia (video, music, VoIP, Skype, ...)?

Not a batch job:

1. MythTV backend commercial scanning.  This can be run after the
entire show is recorded, at which point it becomes a batch job, but
some people like to use it while viewing live TV on a delay, and the
GPU is typically not used for this.  You would say it should be
optimized, but I'm pretty sure it hasn't been yet.
2. Factory process control
3. Monitoring stock market values as they come in.
4. Anything with input that isn't (almost) completely specified before
execution starts, i.e. anything that does significant computation
based on input from the physical world.
5. Most? embedded systems.

These are all jobs that you can NOT run to completion and then just
shut everything off.  Now you suggest that you can "generally" cycle
the system between halt/full power and save energy over running slowly
all the time.  For some limited definition of "generally", you are
going to be right.  Given the wide range of ways in which CPUs (and
Linux) are used, I would suggest that actual testing is probably a
better idea than "rules of thumb" for anything that matters.
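
That kind of testing doesn't have to be elaborate.  Here's a sketch of
what I mean -- it assumes a Linux box with the usual cpufreq sysfs
files, plus a wall-plug meter (a Kill A Watt or similar) for the actual
energy numbers, and the dd command is only a placeholder for whatever
job you actually care about:

#!/usr/bin/env python3
# Crude A/B test: run the same job under two cpufreq governors and time
# it.  Energy comes from a wall-plug meter read by hand before and
# after each run -- that works on any box, FX-8120 included.  Needs
# root to switch governors.
import glob
import subprocess
import time

GOV_FILES = glob.glob("/sys/devices/system/cpu/cpu*/cpufreq/scaling_governor")

def set_governor(name):
    # Write the governor name into every core's cpufreq control file.
    for path in GOV_FILES:
        with open(path, "w") as f:
            f.write(name)

def run_timed(cmd):
    start = time.time()
    subprocess.check_call(cmd)
    return time.time() - start

# Placeholder workload -- substitute whatever job you actually care about.
WORKLOAD = ["sh", "-c", "dd if=/dev/urandom of=/dev/null bs=1M count=256"]

for gov in ("performance", "ondemand"):
    set_governor(gov)
    input("Governor set to %s.  Note the meter's kWh reading, then press Enter: " % gov)
    elapsed = run_timed(WORKLOAD)
    print("%s: %.1f seconds; note the meter reading again." % (gov, elapsed))

Take the kWh delta off the meter for each run and you have a real
number for your workload instead of a rule of thumb.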

Batch job:

Simulating orbital mechanics
Protein Folding simulations
Pretty much anything for which people have traditionally used supercomputers
Any computation where (almost) all input is completely specified
before execution starts
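
If it helps, the structural difference I keep pointing at looks roughly
like this (the function names are made up, purely for illustration):

# Batch: all input is known before you start.  Run flat out, finish,
# and the machine can drop into its deepest sleep state (or be switched
# off entirely).
def batch_job(samples, transform):
    return [transform(s) for s in samples]

# Wall-clock constrained: input arrives from the outside world on its
# own schedule, so the machine has to stay awake enough to respond
# whenever the next item shows up.  It never "completes".
def streaming_job(source, transform, handle):
    while True:
        sample = source.get()    # blocks until the sensor/feed delivers
        handle(transform(sample))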

Bill Bogstad

Off-topic P.S.  I have an allergic reaction to explicit (as well as
implicit) use of the word "always".  IMHO, anyone who uses the word
"always" is almost :-) always wrong.  When I hear statements of that
type, I immediately try to figure out what underlying assumptions make
the speaker feel the word is appropriate.  If I can't find plausible
circumstances that make the statement false, I conclude that either I
don't know that much about the subject or the speaker has come up with
an amazing new insight.  Unfortunately, upon further research and
reflection, I am more often than not disappointed.  That's not to say
that an insight only true in a restricted context is valueless.  Great
results can come from exploring the true context of a problem in order
to produce solutions for actual problems rather than theoretical
models.  But knowing the boundary conditions under which a solution
will fail is important to making correct use of it.


