Re: Time to heat up the new module discussion



Hello,

> AOT is not widely used because it does not offer the same performance of JIT

That is not correct: AOT offers higher performance than JIT, as you are
able to turn the advanced optimizations on in advance.

But AOT does have to generate PIC code, which is indeed slower; in
exchange, the generated code can be shared across multiple applications.
It's about the same trade-off as between static and shared libraries.
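
For reference, trying AOT yourself is just a matter of (hello.exe is
only an example name, any assembly will do):

    mcs hello.cs             # produces hello.exe
    mono --aot hello.exe     # precompiles hello.exe into a native image
    mono hello.exe           # the runtime picks up the AOT image automatically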

> Jits on the desktop are usually bad not just because they do take more 
> memory but also because you need the build system of mono installed 
> which means more bloat.

Considering that Gtk# applications consume less memory than PyGtk
applications, I am puzzled by this blanket statement.

I am not quite sure what you mean by "the build system of Mono", there is
no such thing as "the build system of Mono".   Maybe you mean that you
need to have the "mono" command installed?

> Every compacting GC automatically doubles memory use - you have two 
> managed heaps ergo twice the RAM required. If you copy MS and go for a 
> three generation one then you risk trebling memory use over using a 
> non-compacting one.

The extra memory used by a compacting collector is in the nurseries.  A
compacting GC can allocate memory during the compacting phase for
temporary objects, and release it back to the OS when it's done.

So during a collection you would certainly notice more memory usage, but
it would drop back to the previous level once the collection is over.

See:

	http://www.mono-project.com/Compacting_GC

And:

	http://svn.myrealbox.com/viewcvs/trunk/mono/mono/metadata/sgen-gc.c?rev=61245&view=auto

Look for "get_os_memory" and "free_os_memory"

> (malloc and free do not return memory to the OS on linux and most other 
> systems - the memory is retained for reuse for the app).

Mono's current GC (Boehm) can return memory to the OS, and so can our
new compacting GC (link above).

> Mmap'ping blocks of memory can be returned to the OS but they are at 
> least 5x slower than malloc/free and are only worth using with memory pools.

sbrk(), the call used by malloc to grow its heap, is pretty much mmap()
in the Linux kernel (see mm/mmap.c).

mmap() would indeed be slower if you used it for every 5-byte
allocation (and so would sbrk()).  But luckily people figured out how to
batch these requests, so any "cost" is amortized over a few thousand or
a few million calls, making it indistinguishable from the sbrk() calls
that malloc itself makes to grow its heap.
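
To illustrate the batching point, here is a toy pool allocator (my own
names, nothing from Mono): one mmap() up front serves thousands of small
allocations, and one munmap() gives the whole block back to the OS, so
the syscall cost disappears in the noise:

    #include <stddef.h>
    #include <sys/mman.h>

    typedef struct {
            char  *base;
            size_t used, size;
    } Pool;

    /* One system call sets up room for many small allocations. */
    static int
    pool_init (Pool *p, size_t size)
    {
            p->base = mmap (NULL, size, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            p->used = 0;
            p->size = size;
            return p->base == MAP_FAILED ? -1 : 0;
    }

    /* Bump-pointer allocation: no system call at all. */
    static void *
    pool_alloc (Pool *p, size_t n)
    {
            void *mem;

            n = (n + 7) & ~(size_t) 7;      /* keep 8-byte alignment */
            if (p->used + n > p->size)
                    return NULL;            /* pool exhausted */
            mem = p->base + p->used;
            p->used += n;
            return mem;
    }

    /* One system call returns the whole block to the OS. */
    static void
    pool_destroy (Pool *p)
    {
            munmap (p->base, p->size);
    }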

Miguel.


