
BLU Discuss list archive



make -j optimization



On Fri, Oct 01, 2004 at 03:07:10AM +0000, John Chambers wrote:
> David Kramer comments:
> 
> | In GNU make, you can specify a -j <n> option, to run <n> commands
> | simultaneously.  This is good.
> |
> | I'm having an argument with a coworker over whether specifying a value for
> | <n> greater than the number of processors in the system actually does
> | any good or not.  I don't see how it can.  In fact, specifying one fewer
> | than the number of processors should have almost the same performance as
> | specifying the number of processors.
> 
> Well, if this were true, then on a single-processor machine there
> should be no advantage in running several makes in parallel.  But I
> know from lots of experience that this is far from true.  I have a
> number of makefiles for C programs that put nearly every function in
> a different file, so there are lots of .o files to make.  So I've done
> parallelism the brute-force way, by firing up make subprocesses with
> '&' on the end of the line.  This should be a lot less efficient than
> a make that can do the job automagically.  I've found that on most
> machines, this can speed up a build by a factor of 3 or 4.

I'm not surprised. Today's machines have plenty of memory (GUIs are
memory pigs, so systems get sized for them), and compilation and
linking are among the rare tasks that don't need much of it. You can
fit an awful lot of build activity in memory without incurring the
huge expense of paging, which gives you a decent chance of fully
utilizing your CPUs.
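
For anyone following along, the two approaches being compared look
roughly like this. This is only a sketch; the target names are
invented, and a real Makefile would split its objects differently:

    # Let GNU make schedule the parallelism itself, here with up
    # to four jobs running at once:
    make -j4

    # The brute-force version: fire off several sub-makes in the
    # background, one per group of .o files, then wait and do the
    # final link.
    make objs_group1 &
    make objs_group2 &
    make objs_group3 &
    wait
    make myprog

The -j form is generally the safer of the two, since make knows the
dependency graph and won't let two jobs race to build the same target.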

There's a limit to how much you can scale up, of course, and the formula
for finding that limit isn't simple. It'll depend on, among other things:

- How much time each job spends waiting on I/O: extra jobs help only
  because another job can use the CPU while one is blocked on the disk.
  Once you're running enough processes to keep all CPUs constantly
  busy, there's nothing to be gained by running more.

- Cache size: more processes competing for the same caches mean more
  cache misses, which hurts performance.

- Process management overhead: more processes require more work
  from the kernel for management and context-switching, although
  this is probably the least important factor.
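
Given all that, it's usually easier to find a good value empirically
than to compute one. Here's a minimal sketch; it assumes GNU make, a
Linux /proc/cpuinfo, and a shell with a "time" builtin, so adjust to
taste:

    #!/bin/sh
    # Time a clean build at a few different -j settings and compare.
    CPUS=`grep -c '^processor' /proc/cpuinfo`
    for JOBS in 1 $CPUS $((CPUS + 1)) $((CPUS * 2)); do
        make clean > /dev/null
        echo "=== make -j$JOBS ==="
        time make -j$JOBS > /dev/null
    done

On CPU-bound builds the sweet spot tends to land near the number of
processors; builds that spend a lot of time waiting on disk or NFS can
often use a job or two more.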

Nathan



