Re: The hall of bloat
- From: Mark McLoughlin <markmc redhat com>
- To: William Lovaton <walovaton yahoo com mx>
- Cc: Desktop Devel <desktop-devel-list gnome org>
- Subject: Re: The hall of bloat
- Date: Tue, 10 May 2005 11:23:20 +0100
Hi,
On Mon, 2005-05-09 at 19:39 -0500, William Lovaton wrote:
> The following chart shows the memory consumption by programs as shown by
> gnome-system-monitor. The first value is the virtual memory size and
> the value in parenthesis is the resident size. Both values are in
> megabytes. The first section of the table will show the processes
> started by default when you log into gnome which correspond to the
> "Memory usage after login" field in the previous table.
>
>                  FC4T2 (G2.10)   FC3 (G2.8)
>                  -------------   ----------
> gnome-panel      31.6 (9.9)      22.0 (10.8)
Okay, after some poking around it seems that one big chunk of this
increase in virtual memory size is ORBit's IO thread, which defaults to
a stack size of 10M. That's something we'll need to look into further,
for sure.
Cheers,
Mark.
P.S. - I thought it would be interesting for others if I shared my
little escapade in figuring some of this out.
* First, let's just look at the rss and vsize values on FC3 and FC4
for myself:
[markmc FC4 ~]$ ps -eo user,pid,vsize,rss,args | grep gnome-panel
markmc 2279 32996 10696 gnome-panel
[markmc FC3 ~]$ ps -eo user,pid,vsize,rss,args | grep gnome-panel
markmc 7565 25932 7456 gnome-panel
So, the values are different, sure - but the big jump in total
mapped memory is obvious alright.
* Next, let's look at the memory maps of the FC3 and FC4T2 panels
using "pmap $(pidof gnome-panel)" - the following jumps out on FC4:
b7219000 4K ----- [ anon ]
b721a000 10240K rw--- [ anon ]
b7c1a000 384K rw-s- [ shmid=0x150003 ]
This means there's a 4K anonymous mapping, followed by a
10M (!) chunk, followed by a 384K shared memory segment.
* Okay, let's find out what this shared memory segment is ...
Convert the shmid to decimal:
[markmc FC4 ~]$ python
Python 2.4.1 (#1, Apr 6 2005, 14:01:51)
[GCC 4.0.0 20050405 (Red Hat 4.0.0-0.40)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> 0x150003
1376259
Look up the segment:
[markmc FC4 ~]$ ipcs -m | grep 1376259
key shmid owner perms bytes nattch status
0x00000000 1376259 markmc 600 393216 2 dest
Not terribly useful, but where's it coming from? Let's strace the panel and
see if that sheds any light:
[markmc FC4 ~]$ gnome-session-remove gnome-panel
[markmc FC4 ~]$ GNOME_PANEL_DEBUG=1 strace gnome-panel 2>foo.log
[markmc FC4 ~]$ emacs foo.log
Searching for 1376259 gives this:
open("/usr/lib/libpng12.so.0", O_RDONLY) = 19
read(19, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\240*\250"..., 512) = 512
fstat64(19, {st_mode=S_IFREG|0755, st_size=148140, ...}) = 0
old_mmap(0xa7d000, 145572, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 19, 0) = 0x504000
old_mmap(0x527000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 19, 0x23000) = 0x527000
close(19) = 0
munmap(0xb7c58000, 134780) = 0
_llseek(18, 0, [0], SEEK_SET) = 0
read(18, "\211PNG\r\n\32\n\0\0\0\rIHDR\0\0\0000\0\0\0000\10\6\0\0"..., 4096) = 1438
close(18) = 0
munmap(0xb7c79000, 4096) = 0
shmget(IPC_PRIVATE, 393216, IPC_CREAT|0600) = 1376259
shmat(1376259, 0, 0) = 0xb7c1a000
write(3, "\22\0\v\0\4\0\200\1\371\0\0\0\371\0\0\0 \0\0\0\5\0\0\0"..., 9396) = 9396
read(3, "\34\300\260\0\4\0\200\1\371\0\0\0>z\240\0\0y\240\0\0\0"..., 32) = 32
read(3, "\34\300\261\0\4\0\200\1\365\0\0\0>z\240\0\0z\240\0\0\0"..., 32) = 32
read(3, "\34\300\262\0\4\0\200\1\355\0\0\0>z\240\0\0z\240\0\0\0"..., 32) = 32
read(3, "\1\2\267\0\0\0\0\0\35\0`\2\4\0\0\0\0\0\0\0$B\22\0\0\20"..., 32) = 32
shmctl(1376259, IPC_64|IPC_RMID, 0) = 0
So, we're loading libpng for the first time, reading in a PNG, creating
a shared memory segment, attaching to it, writing some junk to a file
descriptor (wait, that looks like the X connection), reading some junk
back and marking the segment as destroyed.
Hmm. That sounds like GDK's shared memory scratch images. And sure
enough, that's what it is - see gtk+/gdk/gdkimage.c:allocate_scratch_images().
GDK now allocates an XShm image of 384K by default.
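For anyone who hasn't poked at MIT-SHM before, that shmget/shmat/attach/
IPC_RMID dance in the strace is just the standard XShm idiom. Here's a
minimal standalone sketch of it - my own, not GDK's actual
allocate_scratch_images() code, and the image dimensions are only picked
to land near the 384K segment pmap shows. Build with -lX11 -lXext:

/* Minimal XShm sketch - the same idiom that shows up in the strace,
 * not GDK's actual code. */
#include <stdlib.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <X11/Xlib.h>
#include <X11/Xutil.h>
#include <X11/extensions/XShm.h>

int
main (void)
{
        Display         *dpy = XOpenDisplay (NULL);
        XShmSegmentInfo  shminfo;
        XImage          *image;

        if (!dpy || !XShmQueryExtension (dpy))
                return 1;

        /* 256x384 at 32bpp works out to 384K - roughly the segment in
         * pmap; GDK's real scratch image sizes may well differ */
        image = XShmCreateImage (dpy,
                                 DefaultVisual (dpy, DefaultScreen (dpy)),
                                 DefaultDepth (dpy, DefaultScreen (dpy)),
                                 ZPixmap, NULL, &shminfo, 256, 384);

        /* shmget(IPC_PRIVATE, ..., IPC_CREAT|0600), just like the strace */
        shminfo.shmid = shmget (IPC_PRIVATE,
                                image->bytes_per_line * image->height,
                                IPC_CREAT | 0600);
        shminfo.shmaddr = image->data = shmat (shminfo.shmid, NULL, 0);
        shminfo.readOnly = False;

        /* ask the X server to attach too - that's the write()/read()
         * pair on the X connection */
        XShmAttach (dpy, &shminfo);
        XSync (dpy, False);

        /* mark the segment destroyed straight away; it lingers until
         * both the client and the server detach, which is why ipcs
         * shows it with status "dest" and nattch 2 */
        shmctl (shminfo.shmid, IPC_RMID, NULL);

        /* ... draw into image->data, XShmPutImage() it, etc ... */

        XShmDetach (dpy, &shminfo);
        XDestroyImage (image);
        shmdt (shminfo.shmaddr);
        XCloseDisplay (dpy);

        return 0;
}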
* Okay, that's the first mystery solved. Now for that whopping 10M!
If you search (in the strace log) for the base address of the
little 4K mapping just before it, you come across:
write(18, "%\0\1\0\1\0\1\0\33\0/home/markmc/.gtk-book"..., 37) = 37
gettimeofday({1115719077, 815716}, NULL) = 0
pipe([19, 22]) = 0
pipe([23, 24]) = 0
mmap2(NULL, 10489856, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7219000
mprotect(0xb7219000, 4096, PROT_NONE) = 0
clone(child_stack=0xb7c194c4, flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM|CLONE_SETTLS|CLONE_PARENT_SETTID|CLONE_CHILD_CLEARTID|CLONE_DETACHED, parent_tidptr=0xb7c19bf8, {entry_number:6, base_addr:0xb7c19bb0, limit:1048575, seg_32bit:1, contents:0, read_exec_only:0, limit_in_pages:1, seg_not_present:0, useable:1}, child_tidptr=0xb7c19bf8) = 2282
futex(0x8336ac, FUTEX_WAKE, 1) = 1
gettimeofday({1115719077, 818131}, NULL) = 0
gettimeofday({1115719077, 818761}, NULL) = 0
writev(17, [{"GIOP\1\2\1\0\316\2\0\0", 12}, {"`\2\341\277\3\0\0\0\0\0\0\0\34\0\0\0\0\0\0\0\354\242<\0"..., 718}], 2) = 730
futex(0x82db6a4, FUTEX_WAIT, 1, NULL) = 0
futex(0x82db6a0, FUTEX_WAIT, 2, NULL) = -1 EAGAIN (Resource temporarily unavailable)
futex(0x82db6a0, FUTEX_WAKE, 1) = 0
futex(0x82db680, FUTEX_WAKE, 1) = 0
socket(PF_FILE, SOCK_STREAM, 0) = 25
fcntl64(25, F_SETFL, O_RDONLY|O_NONBLOCK) = 0
fcntl64(25, F_SETFD, FD_CLOEXEC) = 0
connect(25, {sa_family=AF_FILE, path="/tmp/orbit-markmc/linc-ca7-0-2eeb42361b775"}, 45) = 0
write(22, "A", 1) = 1
writev(25, [{"GIOP\1\2\1\0\224\1\0\0", 12}, {"\300\3\341\277\3\0\0\0\0\0\0\0\34\0\0\0\0\0\0\0009\10h"..., 404}], 2) = 416
Okay, that looks like: ask FAM to monitor .gtk-bookmarks, create a
couple of pipes, map the 10M segment, change the permissions on the
first 4K of it (a guard page, by the looks of it), create a thread with
clone(), send a CORBA message somewhere, open another CORBA connection,
write a byte to one of those pipes we created and send another CORBA
message down the newly opened CORBA connection?
Wtf?
Hmm, isn't the panel monitoring ~/.gtk-bookmarks these days for its
"Places" menu? Yes, there it is in panel-menu-items.c. Ah, then it
does gnome_vfs_get_volume_monitor(), which does
_gnome_vfs_get_client() which creates two new threaded CORBA POAs
and...
So, that's what's going on. This 10M segment is the stack of ORBit's
IO thread created down in the bowels of linc in
link_exec_set_io_thread().
That's a damn big stack - why on earth is it that big?
[markmc FC4 ~]$ ulimit -s
10240
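ulimit -s reports kilobytes, so that's our 10M right there - NPTL simply
hands any thread created with default attributes a stack the size of
RLIMIT_STACK. If you want to see that for yourself, here's a quick
standalone test program of mine (nothing to do with the panel code);
build with gcc -pthread, it relies on glibc's pthread_getattr_np():

/* Standalone check: a thread created with default attributes gets a
 * stack of (roughly) RLIMIT_STACK bytes. */
#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <sys/resource.h>

static void *
thread_func (void *data)
{
        pthread_attr_t  attr;
        void           *stack_addr;
        size_t          stack_size;

        /* ask glibc what stack this thread actually got */
        pthread_getattr_np (pthread_self (), &attr);
        pthread_attr_getstack (&attr, &stack_addr, &stack_size);
        printf ("thread stack:  %zuK\n", stack_size / 1024);
        pthread_attr_destroy (&attr);

        return NULL;
}

int
main (void)
{
        struct rlimit  rl;
        pthread_t      thread;

        getrlimit (RLIMIT_STACK, &rl);
        printf ("RLIMIT_STACK:  %luK\n",
                (unsigned long) rl.rlim_cur / 1024);

        pthread_create (&thread, NULL, thread_func, NULL);
        pthread_join (thread, NULL);

        return 0;
}

Run it and the two numbers should match whatever "ulimit -s" says.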
Ugh. Let's try setting it to glibc's PTHREAD_STACK_MIN value (16K):
[markmc FC4 ~]$ ulimit -s 16
[markmc FC4 ~]$ GNOME_PANEL_DEBUG=1 gnome-panel
[markmc FC4 ~]$ ps -eo user,pid,vsize,rss,args | grep gnome-panel
markmc 2365 22772 10944 gnome-panel
[markmc FC4 ~]$ pmap 2365
2365: gnome-panel
...
b7ca3000 4K ----- [ anon ]
b7ca4000 12K rw--- [ anon ]
b7ca7000 384K rw-s- [ shmid=0x188003 ]
Well, now. That certainly makes a difference, doesn't it?
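Of course, nobody is going to go dropping their stack rlimit just for
this - the real fix is for linc to pass an explicit, sanely sized stack
when it creates the IO thread. Something along these lines (a sketch of
the general approach only, not what link_exec_set_io_thread() actually
does; io_thread_main, create_io_thread and the 256K figure are all
made-up names/numbers for illustration):

#include <pthread.h>

/* made-up figure - anything comfortably above PTHREAD_STACK_MIN and
 * big enough for the IO loop will do */
#define IO_THREAD_STACK_SIZE (256 * 1024)

static void *
io_thread_main (void *data)
{
        /* ... poll the connections, dispatch incoming requests, etc ... */
        return NULL;
}

static int
create_io_thread (pthread_t *thread)
{
        pthread_attr_t attr;
        int            ret;

        pthread_attr_init (&attr);

        /* don't inherit RLIMIT_STACK - cap the stack explicitly */
        pthread_attr_setstacksize (&attr, IO_THREAD_STACK_SIZE);

        ret = pthread_create (thread, &attr, io_thread_main, NULL);

        pthread_attr_destroy (&attr);

        return ret;
}

int
main (void)
{
        pthread_t thread;

        if (create_io_thread (&thread) != 0)
                return 1;

        pthread_join (thread, NULL);

        return 0;
}

With something like that in place the panel's vsize should come back
down by close to 10M without anyone having to fiddle with their ulimit.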