Re: [Tracker] [systemd-devel] How to use cgroups for Tracker?



On 21/10/14 11:06, Lennart Poettering wrote:
What functionality of cgroups were you precisely using?

This was not set up by the Tracker team but by another team at Nokia. Perhaps Philip or Ivan have some comments and remember it better.

It was a while back now, but from what I remember on the N9 / N900 / etc. (see the sketch after the list):

- Memory restriction so we didn't starve other processes on the phone.
- Disk space restriction for the DB - though I think this was done with
  a separate partition.
- CPU restriction so that phone calls had enough cycles to perform.
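
For context, at the time this meant writing to the cgroup v1 filesystem directly. A hypothetical sketch of the memory part (paths, names and limits here are illustrative, not the actual Nokia configuration):

/* Create a cgroup, cap its memory, and move a process into it.
 * Assumes the v1 memory controller is mounted at the usual path. */
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

static int
write_string (const char *path, const char *value)
{
        FILE *f = fopen (path, "w");

        if (!f) {
                perror (path);
                return -1;
        }
        fputs (value, f);
        fclose (f);
        return 0;
}

int
main (void)
{
        char pid[32];

        mkdir ("/sys/fs/cgroup/memory/tracker", 0755);

        /* ~64 MB hard cap for everything in the group (illustrative). */
        write_string ("/sys/fs/cgroup/memory/tracker/memory.limit_in_bytes",
                      "67108864");

        /* Move this process into the group. */
        snprintf (pid, sizeof (pid), "%d", (int) getpid ());
        write_string ("/sys/fs/cgroup/memory/tracker/cgroup.procs", pid);

        return 0;
}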

Anyway, what precisely are you trying to do?

Even using the kernel APIs, we still get bug reports about crazy CPU use
and/or disk use. Since employing sched_setscheduler(), I think the
situation is much better.

What precisely are you setting with sched_setscheduler() and ioprio_set()?

https://git.gnome.org/browse/tracker/tree/src/libtracker-common/tracker-sched.c#n29

and

https://git.gnome.org/browse/tracker/tree/src/libtracker-common/tracker-ioprio.c#n131
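
Roughly, those two boil down to putting the process into the idle CPU and I/O scheduling classes. A simplified sketch from memory (not the exact Tracker code; the real files also have fallbacks, e.g. a low best-effort I/O priority when the idle class can't be used):

/* Put the calling process in the idle CPU and I/O scheduling classes. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/* ioprio_set() has no glibc wrapper, so it is called via syscall(). */
#define IOPRIO_CLASS_IDLE   3
#define IOPRIO_CLASS_SHIFT  13
#define IOPRIO_WHO_PROCESS  1
#define IOPRIO_PRIO_VALUE(class, data) (((class) << IOPRIO_CLASS_SHIFT) | (data))

static void
set_background_priorities (void)
{
        struct sched_param param;

        /* CPU: only run when nothing else wants the CPU. */
        memset (&param, 0, sizeof (param));
        if (sched_setscheduler (0, SCHED_IDLE, &param) != 0)
                perror ("sched_setscheduler");

        /* I/O: only issue I/O when the disk is otherwise idle. */
        if (syscall (SYS_ioprio_set,
                     IOPRIO_WHO_PROCESS, 0,
                     IOPRIO_PRIO_VALUE (IOPRIO_CLASS_IDLE, 0)) != 0)
                perror ("ioprio_set");
}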

That said, some people really want Tracker to
   a) only perform when they're away from their computer OR
   b) be completely unnoticeable.

Now, we can do a) without cgroups, but I believe b) would be better done
using cgroups.

Well, it will always be noticeable since it generates IO, and hence it
will always show up in top/iotop. If you want it not to interfere
with other things being done on the system, use
ioprio_set()/sched_setscheduler() to mark things as batch jobs.

You're right. When we get reports of 100% CPU use, I am not so concerned if the system is not doing anything else, but I suspect what leads people to notice the 100% CPU situation in the first place is their UI not responding.

If you want to put hard bandwidth limits on things, then I'd really
advise not to, because you effectively just prolong the work you want
to do and keep the hw fully powered on for longer that way. Hard
bandwidth limits are something for accounting models, for implementing
business plans, but I think it would be wrong to use them for "hiding"
things in top/iotop.

I see, you make a fair point.

Why don't the kernel API calls pointed out above suffice?

I think it depends on the API and who you talk to.

For me, the APIs work quite well. However, we still get bug reports. I find
this quite hard to quantify personally because the filesystems, hardware and
version of Tracker all come into play and can make quite a
difference.

What precisely are the bug reports about?

Often 100% CPU use.
Sometimes high memory use (but that's mostly related to tracker-extract).
Occasionally, disk use when someone is indexing a crazy data set.

The latest bug report:
https://bugzilla.gnome.org/show_bug.cgi?id=676713

The one API that doesn't really work for us is setrlimit(), mainly because
we have to guess the memory threshold (which we never get right) and we get
a lot of SIGABRTs that get reported as bugs. I suppose we could catch
SIGABRT and exit gracefully, but lately we've agreed (as a team) that if an
extractor or library we depend on uses 2GB of memory and brings a smaller
system to its knees, it's a bug and we should just fix the
extractor/library, not try to compensate for it. Sadly, there are always
bugs of this sort, and it's precisely why tracker-extract is a separate
process.

What functionality of setrlimit() are you precisely using?

https://git.gnome.org/browse/tracker/tree/src/libtracker-common/tracker-os-dependant-unix.c?h=tracker-1.2#n289
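
In short, that caps the extractor's address space at a guessed fraction of physical memory, so a runaway extraction fails with ENOMEM (and typically aborts) rather than dragging the whole system down. A simplified sketch (the use of RLIMIT_AS and the 50% figure are assumptions for illustration; the exact calculation is in the linked file):

/* Cap the process address space at roughly half of physical memory. */
#include <stdio.h>
#include <sys/resource.h>
#include <unistd.h>

static int
memory_setrlimits (void)
{
        long pages = sysconf (_SC_PHYS_PAGES);
        long page_size = sysconf (_SC_PAGE_SIZE);
        unsigned long long total = (unsigned long long) pages * page_size;
        struct rlimit rl;

        rl.rlim_cur = total / 2;        /* guessed threshold */
        rl.rlim_max = total / 2;

        if (setrlimit (RLIMIT_AS, &rl) != 0) {
                perror ("setrlimit(RLIMIT_AS)");
                return -1;
        }

        return 0;
}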

That limit, coupled with bug reports like this one, where a PDF with no text takes over 200 seconds to extract and uses 2GB of memory:
https://bugs.freedesktop.org/show_bug.cgi?id=85196

That leads to a SIGABRT and a bug report against Tracker. We were not handling it, because really the file itself (or the library used to open it) should be fixed; on top of that, tracker-extract was restarting itself until recently (which is a bug that needs fixing).

I guess we could use RLIMIT_CPU and handle SIGXCPU, but I have no idea what limit (in seconds) to use in the first place.
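
If we did go that way, the mechanics would look something like this (a hypothetical sketch; the numbers and the bail-out flag are invented for illustration, not a recommendation):

/* Deliver SIGXCPU after a given number of CPU seconds so the extractor
 * can bail out of the current file instead of being killed outright. */
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/resource.h>

static volatile sig_atomic_t cpu_limit_hit = 0;

static void
on_sigxcpu (int signum)
{
        (void) signum;
        cpu_limit_hit = 1;      /* checked in the extraction loop */
}

static void
install_cpu_limit (unsigned int seconds)
{
        struct sigaction sa;
        struct rlimit rl;

        memset (&sa, 0, sizeof (sa));
        sa.sa_handler = on_sigxcpu;
        sigemptyset (&sa.sa_mask);
        sigaction (SIGXCPU, &sa, NULL);

        rl.rlim_cur = seconds;          /* soft limit: SIGXCPU is delivered */
        rl.rlim_max = seconds + 5;      /* hard limit: SIGKILL after this */
        if (setrlimit (RLIMIT_CPU, &rl) != 0)
                perror ("setrlimit(RLIMIT_CPU)");
}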

--
Regards,
Martyn

