Re: GLIB for a webserver



Hello!
Sorry again for the delay in replying.

I was thinking of monitoring the server's memory usage so it never reaches
the maximum allowed by the OS, which seems to be the basic idea behind what
you said. Although I think that's better than nothing, the problem is that
other processes on the machine may allocate memory after mine, and then,
when my process tries to allocate even a single byte, it would crash.

The best solution I have thought of so far is to pre-allocate the needed
memory. For instance, if there will be 100 threads running and each one
could use 20k of memory, I will try to pre-allocate 100*20k of memory. Of
course each thread might use a different amount, but I can at least try to
offer something like that.
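
Something along these lines is what I have in mind (the numbers and names
are only illustrative, not real figures from my server):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative numbers only -- 100 workers, ~20k each. */
#define N_WORKERS        100
#define PER_WORKER_BYTES (20 * 1024)

int
main (void)
{
        void *slot[N_WORKERS];
        int i;

        /* Grab the whole envelope before accepting any connections, so a
         * shortage shows up at startup rather than in the middle of a
         * request.
         */
        for (i = 0; i < N_WORKERS; i++) {
                slot[i] = malloc (PER_WORKER_BYTES);
                if (!slot[i]) {
                        fprintf (stderr, "unable to pre-allocate slot %d\n", i);
                        while (i-- > 0)
                                free (slot[i]);
                        return 1;
                }

                /* On Linux with overcommit, pages are only committed when
                 * touched, so write to them if the reservation must be real.
                 */
                memset (slot[i], 0, PER_WORKER_BYTES);
        }

        /* ... hand one slot to each worker thread, run the server ... */

        for (i = 0; i < N_WORKERS; i++)
                free (slot[i]);

        return 0;
}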

Anyway, if everything else fails, there will always be the possibility of
restarting. My job is to avoid that as much as I can.

About malloc failing being as good as being dead, I kind of agree, but I
think this should be up to the user to decide. An exit() in the server is
something I would like to avoid at all costs, because the outside problems
may be temporary. For instance, malloc could fail because another server
running on the same machine is processing 1 million transactions. After
they are processed, the resources are available again and the server will
be able to "resurrect" :D
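
In GLib terms, that decision can be left to the caller with g_try_malloc(),
which returns NULL on failure instead of aborting the process the way
g_malloc() does. A rough sketch (handle_request is just an invented name):

#include <glib.h>

/* Hypothetical per-request handler: refuse the request instead of
 * exiting the whole server when memory is temporarily unavailable.
 */
static gboolean
handle_request (gsize needed)
{
        /* g_malloc() aborts on failure; g_try_malloc() returns NULL
         * instead, so the caller can decide what to do.
         */
        gpointer buf = g_try_malloc (needed);

        if (!buf) {
                g_warning ("out of memory, refusing request");
                return FALSE;   /* e.g. reply 503 and keep serving */
        }

        /* ... process the request with buf ... */

        g_free (buf);
        return TRUE;
}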

Regards,
Marcelo Valle.
 On 11/12/2011 09:12, <jcupitt gmail com> wrote:

Hi Marcelo,

On 10 December 2011 23:05, Marcelo Elias Del Valle <mvallebr gmail com>
wrote:
   Imagine the following scenario: the application running on my web
server uses 20k of memory for each concurrent request. If I have
enough concurrent requests that it will consume more memory than,
let's say, the 2GB of available memory, the expected result would be to
refuse new connections, not to restart the application and lose all the
requests currently being handled.

I have 2p to offer on this question too. I use gobject as the basis
for an image processing library that gets used on servers. It's not
quite the same as your problem (and I'm sure it's much less
complicated!) but I think there are some similarities.

In my opinion, this is part of a general question of resource envelopes.

There are many resources my library has to manage apart from memory:
for example, it has to work with images comprised of many separate
files, sometimes up to 5,000 files to make a single image. Many *nix
systems have a limit of about 1,000 simultaneously open files, so my
library has to keep track of open files and, if a request comes in for
a section of image that is not currently mapped, consider closing some
old files before attempting to open a new one.
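
(A minimal sketch of that idea, not the library's actual code: keep a crude
LRU of descriptors in a GQueue and close the oldest before opening another.
A real version would also need a map from file to descriptor so a closed
file can be reopened on demand.)

#include <glib.h>
#include <fcntl.h>
#include <unistd.h>

#define MAX_OPEN_FILES 1000

static GQueue *open_files = NULL;   /* ints (fds), most recent at head */

static int
tracked_open (const char *path)
{
        int fd;

        if (!open_files)
                open_files = g_queue_new ();

        /* At the limit? Close the least recently opened descriptor first. */
        if (g_queue_get_length (open_files) >= MAX_OPEN_FILES) {
                int old_fd = GPOINTER_TO_INT (g_queue_pop_tail (open_files));
                close (old_fd);
        }

        fd = open (path, O_RDONLY);
        if (fd >= 0)
                g_queue_push_head (open_files, GINT_TO_POINTER (fd));

        return fd;
}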

It's not practical to track all memory allocations but my library does
track pixel buffers. If total pixel buffer usage gets near to a limit
set by the user it will start to free old buffers before creating new
ones. It also tries to limit the number of simultaneous threads that
can be active to a user-specified number. I've not really looked at
GPU usage yet but I imagine there will have to be some logic to
constrain that somehow as well.
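
(For the thread limit, GLib's GThreadPool is one way to get that cap: it
takes a user-specified maximum and queues any extra work. A rough sketch
with invented names; note that older GLib versions also need the thread
system initialised before use.)

#include <glib.h>

static void
process_tile (gpointer data, gpointer user_data)
{
        /* ... do the expensive pixel work for request GPOINTER_TO_INT (data) ... */
        (void) data;
        (void) user_data;
}

int
main (void)
{
        GError *error = NULL;
        int max_threads = 8;   /* the user-chosen limit */
        GThreadPool *pool;
        int i;

        pool = g_thread_pool_new (process_tile, NULL,
                max_threads, FALSE, &error);
        if (!pool) {
                g_printerr ("%s\n", error->message);
                return 1;
        }

        /* Requests beyond max_threads wait in the pool's queue instead of
         * spawning unbounded threads.
         */
        for (i = 0; i < 100; i++)
                g_thread_pool_push (pool, GINT_TO_POINTER (i + 1), NULL);

        /* Wait for queued work to finish, then free the pool. */
        g_thread_pool_free (pool, FALSE, TRUE);

        return 0;
}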

Anyway, in my opinion, something like a web or image server needs to
be told a resource envelope it should try to run within and needs some
internal mechanism to manage requests which might push the system
outside that range. If you've got to the point where malloc() is
failing, your server is already as good as dead, unfortunately.

John



