
BLU Discuss list archive



End of Moore's law?



On 07/10/2010 08:39 PM, Shankar Viswanathan wrote:
> On Sat, Jul 10, 2010 at 8:08 PM, Jerry Natowitz <j.natowitz-KealBaEQdz4 at public.gmane.org> wrote:
>
>> Except that the example DF gave is of a linear change and Moore's
>> Law is about exponential changes. I'm not following the chip world
>> closely enough to know if CMOS manufacturing process size is
>> continuing to shrink at the same rate as it used to - 18 months
>> between process changes that resulted in a halving of cell size -
>> but starting at the 65nm process, gate pitch shrank more slowly than
>> general feature size, and at 45nm many other features started
>> showing reduced scaling. This would explain why clock speeds are
>> increasing more slowly than expected, while transistor count
>> continues to grow (more cores, more cache).
>>
> Moore's law as originally stated is still holding quite well -- the
> number of transistors in a chip of a given area is indeed doubling
> every two years (give or take a few months). Previously those extra
> transistors correlated directly with extra performance, which led
> several people to assume, incorrectly, that the law says
> "performance" doubles every two years. In the microprocessor space,
> these days those extra transistors are being used for other purposes
> besides raw performance: integration of more components and also
> power management. So we are now getting better performance per watt
> of power consumed, but not that much improvement in raw performance.
>
> Also, it is getting harder to extract more single-thread performance
> out of CPU cores, hence the scaling out to multiple cores. But I
> would argue that software is still catching up to the multithreaded
> world (for non-server applications), so we haven't yet been able to
> unlock the full potential of the extra cores in your desktop/laptop.
> Applications are also increasingly memory- or I/O-bound, so most of
> the time the CPU just sits waiting for data.
>
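To put a rough number on the linear-versus-exponential point above, here is a small C sketch of how fast a doubling-every-two-years count pulls away from linear growth. The starting count of 1e9 transistors and the ten-year span are made-up round numbers, not data from any particular chip:

#include <stdio.h>
#include <math.h>

int main(void)
{
    double n0 = 1e9;  /* assumed starting count: 1 billion transistors */
    for (int year = 0; year <= 10; year += 2) {
        /* Moore's law as stated: double every two years */
        double exponential = n0 * pow(2.0, year / 2.0);
        /* Linear alternative: add n0 every two years */
        double linear = n0 * (1.0 + year / 2.0);
        printf("year %2d: exponential %.1e  linear %.1e\n",
               year, exponential, linear);
    }
    return 0;
}

After ten years the exponential curve is at 32x while the linear one is only at 6x, and the gap keeps widening.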
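The limit on what extra cores can deliver is captured by Amdahl's law: if a fraction p of a program can run in parallel, the speedup on n cores is 1 / ((1 - p) + p / n). A minimal C sketch, with hypothetical parallel fractions chosen only for illustration:

#include <stdio.h>

/* Amdahl's law: serial fraction (1 - p) caps the speedup */
static double amdahl(double p, int n)
{
    return 1.0 / ((1.0 - p) + p / n);
}

int main(void)
{
    double fractions[] = { 0.50, 0.90, 0.99 };  /* hypothetical values */
    for (int i = 0; i < 3; i++) {
        double p = fractions[i];
        printf("p=%.2f: 2 cores %.2fx, 8 cores %.2fx, 64 cores %.2fx\n",
               p, amdahl(p, 2), amdahl(p, 8), amdahl(p, 64));
    }
    return 0;
}

Even with 90% of the program parallelized, 64 cores yield under a 9x speedup, which is why serial bottlenecks and memory stalls dominate on the desktop.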
True. My boss just asked me to plan out a new system with a couple of
requirements: (1) it must have an 8-core Nehalem chip, and (2) it must
have 64GB of memory.

I agree 100% with your last paragraph. We are still stuck in the Von
Neumann paradigm. One of the things that slows down both multi-threading
and multi-tasking is the sharing of data. Essentially, there are places
in the code where only a single task may proceed at a time. When
properly coded, these critical regions are kept very small. There have
been a number of different approaches.
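As one concrete example of keeping the critical region small, here is a minimal POSIX threads sketch (thread count, iteration count, and names are illustrative): only the shared-counter update is performed under the lock, so the threads serialize on as little work as possible.

#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        /* Critical region: held only for the shared update itself */
        pthread_mutex_lock(&lock);
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[4];
    for (int i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    printf("counter = %ld\n", counter);  /* expected: 4000000 */
    return 0;
}

Anything that does not touch shared state stays outside the lock; the wider the locked region, the more the program degenerates back toward single-threaded execution.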

-- 
Jerry Feldman <gaf-mNDKBlG2WHs at public.gmane.org>
Boston Linux and Unix
PGP key id: 537C5846
PGP Key fingerprint: 3D1B 8377 A3C0 A5F2 ECBB  CA3B 4607 4319 537C 5846







