
List:       kde-devel
Subject:    Re: About memory allocation failures....
From:       Kuba Ober <kuba () mareimbrium ! org>
Date:       2002-02-04 17:04:06

On Monday 28 January 2002 12:45 pm, Thiago Macieira wrote:
> Kuba Ober wrote:
>
> >I agree. I presume the only way to handle memory allocation issues is to
> > warn user when the amount of available (free mem + free swap +
> > (buffers+cache - reasonable minimum)) is below a certain threshold.
> > Reaching that threshold is the only condition which can cause
> > new/memalloc to fail, and it's a system-wide condition. A tiny
> > application sitting there and looking at the system's current memory
> > footprint would do. A message box may then say thing like that:
>
> Here we run into the same problem that was discussed for free disk space.
> How are we going to determine what's low and what's not? The same way we
> had an application dumping half a gig on the disk from one moment to the
> other, we can have an application requesting several megabytes at once.
> VMWare does that, for instance (mine allocates 96MB).

What's low and what's not can be set to nearly any small number that is far 
enough from serious disk thrashing. I'd say 16MB should be fine for machines 
with >=128MB of RAM, and 8MB for anything below that. The particular value 
doesn't matter much in itself.

> And like you have quotas on disk space, you can have a limit on the
> resident set size. So, while the whole system has hundreds of megabytes
> free, a certain given user might have no more than a few pages left.

That's true, although KDE doesn't really work on systems with per-user 
limits. I periodically work on one such system (a FreeBSD student server with 
1GB of RAM, two decent PIII processors, a top-notch motherboard, top-notch 
SCSI drives, and 2GB of swap, with disk-file-to-disk-file dd transfers of 
about 100MB/s). As soon as two student labs (about 15 machines) run KDE (just 
starting the desktop), things get itchy; on top of that each student has 
otherwise-reasonable limits (2 minutes of process runtime is one of them), so 
KDE is useless on such a system. Nobody can help, and the small 
memory-watcher utility won't help either. KDE does a good job on single-user 
workstations without limits in place, or on small servers with at most a few 
users. I don't know what kind of hardware one would need (or whether such 
beasts exist at all) to run a 100-user KDE desktop server (with X terminals 
attached via Ethernet).... :-(

> My opinion is that dealing with low-memory situations is best left to the
> kernel. What we can do is abort with fatal errors and dump something to
> stderr or .xsession-errors. Nevertheless, I try and test for new/malloc
> returning 0 in almost every case and return with error.

1. Are you using an automatic code-generation tool to make sure that *all* 
new/malloc calls are tested, or
2. Are you using an automatic code-screening tool to make sure that *all* 
new/malloc calls are tested?
3. Are you sure that your error is any more meaningful than a post-processed 
core dump?

I'd say that if ~((1 || 2) && 3), or if additionally you're writing a desktop 
app, you're wasting your time and your users' machines' memory. It's just 
plain useless. Virtual memory systems are not supposed to run out of memory; 
that's what virtual memory is for. A different approach was taken by e.g. the 
QNX team, who made sure that things are small and stay small. Unless we want 
to port KDE to QNX (oh boy, it would be sooooo nice), there's no point in 
testing memory allocations, nada. Prove me wrong.

Cheers,
Kuba
 