
BLU Discuss list archive



[Discuss] memory management



On Mon, Jun 22, 2015 at 03:13:13PM -0400, Matthew Gillen wrote:
> I'll chime in on this one more time just to be clear about what my beef
> with linux is here.  Several people have said, in effect, "Have more
> RAM" or "Have enough RAM for what you need".  Which is obviously true,
> but missing the point.

I don't think it is.  You don't seem to like the answer, but it IS the
answer.  4GB really is not that much these days.  The kernel itself
uses ~0.5-1GB or so...  A browser with a lot of tabs open can
easily have a VSIZE > 1GB.  That's half your RAM gone right there.
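Easy enough to check for yourself, by the way: a process's virtual
size is right there in /proc/PID/status.  A quick Linux-only sketch,
reading the VmSize field documented in proc(5):

    /* Print this process's VmSize (virtual size) from
     * /proc/self/status.  Swap in /proc/<pid>/status to inspect
     * some other process. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        FILE *f = fopen("/proc/self/status", "r");
        char line[256];

        if (!f) {
            perror("fopen");
            return 1;
        }
        while (fgets(line, sizeof line, f))
            if (strncmp(line, "VmSize:", 7) == 0)
                fputs(line, stdout);   /* e.g. "VmSize:  1234567 kB" */
        fclose(f);
        return 0;
    }

Or just look at the VSZ column in ps or top, of course.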

> For my day-to-day, I do have enough RAM.  

If you browse daily, and browsing is causing your problem, then it
seems clear to me that you do not. ;-)  But I use firefox daily, leave
it up for WEEKS at a time, and don't have the problems you are
describing, so I think it must be something about your particular
usage patterns... Like maybe visiting sites that put too much garbage
on a page, or some such.  I certainly have seen THAT--it's usually
sites that have one seemingly endless page of content, with multiple
(or even tens) of Flash ads.  Don't visit those sites. ;-)  Or, maybe
your system is woefully out of date.

> What strikes me as odd and wrong is that the OS doesn't seem to protect
> itself from thrashing.  The system is perfectly happy to render itself
> inoperative in the service of some lone process sucking up memory.

What if the process in question is running the dialysis equipment that
is currently filtering your blood? 

The VM design assumes that the manager of the system has some clue...
If you need more memory, you'll add it; you'll keep your software
updated; or, if need be, you'll use ulimit to limit misbehaving
processes (or, you know, fix them if you have the code).
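The shell's ulimit is just a front end for setrlimit(2), so you can
bake the cap into a small wrapper if you like.  A rough sketch--the
512MB figure and the program name here are made up for illustration:

    /* Cap the address space, then exec the misbehaving program.
     * This is roughly what "ulimit -v" does in the shell. */
    #include <sys/resource.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        struct rlimit rl;
        rl.rlim_cur = 512UL * 1024 * 1024;   /* soft limit: 512MB */
        rl.rlim_max = 512UL * 1024 * 1024;   /* hard limit: 512MB */

        if (setrlimit(RLIMIT_AS, &rl) != 0) {
            perror("setrlimit");
            return 1;
        }
        /* Past this point, malloc()/mmap() start failing at the cap
         * instead of dragging the whole box into swap. */
        execlp("some-memory-hog", "some-memory-hog", (char *)NULL);
        perror("execlp");   /* only reached if the exec failed */
        return 1;
    }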

The VM system's goal is not only to allocate resources but to keep
the system stable.  If it starts killing processes randomly, that's
kind of the opposite of stable.  So it tries really hard not to kill
processes.
Maybe a pending I/O will complete and free up the necessary pages, or
maybe the user will kill the process...  It plays a "wait and see"
game that's hard for it to win because there are too many variables.
But it tries anyway because the consequences are potentially very
severe if it has to kill a process.
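For what it's worth, you can put a thumb on the scale for when it
does finally have to pick a victim.  A sketch using Linux's
oom_score_adj knob (range -1000..1000 per proc(5); positive values
mean "kill me first"):

    /* Volunteer this process as an early OOM-killer victim by
     * raising its oom_score_adj. */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/self/oom_score_adj", "w");

        if (!f) {
            perror("fopen");
            return 1;
        }
        fprintf(f, "500\n");   /* more likely to be chosen than a 0 */
        fclose(f);

        /* ... the rest of the (expendable) program ... */
        return 0;
    }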

In contrast, when you run out of file descriptors, you've run out of
file descriptors.  There's no backup plan, nothing else to be done
but fail.  But the consequences are far less severe--the application
can do a number of things in response:  Depending on the specifics, it
might ignore the problem, or retry later, or tell the user about the
problem and let them decide what to do.
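That is, something along these lines is actually possible when
open(2) hands you EMFILE--a sketch, with the retry policy obviously
made up:

    /* Retry open(2) a few times when the process (EMFILE) or the
     * system (ENFILE) is out of file descriptors. */
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int open_with_retry(const char *path, int flags, int tries)
    {
        int fd;

        while ((fd = open(path, flags)) == -1) {
            if ((errno == EMFILE || errno == ENFILE) && --tries > 0) {
                fprintf(stderr, "out of descriptors, retrying...\n");
                sleep(1);   /* hope something else closes one */
                continue;
            }
            return -1;      /* other failure: let the caller decide */
        }
        return fd;
    }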

You can't do much of anything if you can't get any memory to do it
in... including notifying the user, in many cases, since the code for
doing that is probably going to allocate space for a string, or some
such.
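Which is why careful programs pre-arrange the error path so it needs
no new memory at all: a static message and a raw write(2).  A sketch
(the xmalloc wrapper name is just the usual convention):

    /* An allocator wrapper whose failure path allocates nothing:
     * static buffer, write(2) straight to stderr, then bail. */
    #include <stdlib.h>
    #include <unistd.h>

    static const char oom_msg[] = "fatal: out of memory\n";

    void *xmalloc(size_t n)
    {
        void *p = malloc(n);

        if (!p) {
            write(STDERR_FILENO, oom_msg, sizeof oom_msg - 1);
            _exit(1);   /* no cleanup that might need memory */
        }
        return p;
    }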

-- 
Derek D. Martin    http://www.pizzashack.org/   GPG Key ID: 0xDFBEAD02
-=-=-=-=-
This message is posted from an invalid address.  Replying to it will result in
undeliverable mail due to spam prevention.  Sorry for the inconvenience.



