[glib/glib-2-72: 25/39] gthreadpool: Update unused_threads while we still own the pool lock
- From: Matthias Clasen <matthiasc src gnome org>
- To: commits-list gnome org
- Cc:
- Subject: [glib/glib-2-72: 25/39] gthreadpool: Update unused_threads while we still own the pool lock
- Date: Tue, 20 Sep 2022 19:07:39 +0000 (UTC)
commit ec0cdf638e8dbe82fc97702eb42586a2c86d48ad
Author: Marco Trevisan (Treviño) <mail 3v1n0 net>
Date: Mon Jul 11 18:45:36 2022 +0200
gthreadpool: Update unused_threads while we still own the pool lock
As per the rationale explained in the previous commit, the unused_threads
value could end up disagreeing with what g_thread_pool_get_num_threads()
returns, because an about-to-be-unused thread might not yet be counted as
such even though the pool's thread count has already been decreased.
To avoid that scenario, and to make sure that once all of a pool's threads
are stopped they are also unmarked as unused, increase the unused_threads
value earlier, while we still own the pool lock, so that it always includes
a thread that is no longer used by the pool but is not yet waiting for a
new one.
Given this, we can update the test so that it no longer repeats the
stop-unused call: we are now sure that once the pool has no threads left,
the unused-threads value has been updated accordingly.
Also add a test with multiple pools.
(cherry-picked from commit a275ee66796ab0d6d95ed8647f2170be9b136951)
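For illustration, here is a minimal sketch of the pattern the commit message relies on, built only on the public GThreadPool API. It is not the actual glib test; the task function, thread count and polling loops are assumptions made for the example.

#include <glib.h>

static void
dummy_task (gpointer data, gpointer user_data)
{
  g_usleep (1000);   /* pretend to do some work */
}

int
main (void)
{
  GThreadPool *pool = g_thread_pool_new (dummy_task, NULL, 2, FALSE, NULL);

  g_thread_pool_push (pool, GUINT_TO_POINTER (1), NULL);
  g_thread_pool_push (pool, GUINT_TO_POINTER (2), NULL);

  /* Wait until every thread has left this pool.  With this commit, a thread
   * that has left the pool is already counted in unused_threads by the time
   * the pool's own count drops, so a single stop-unused call is enough. */
  while (g_thread_pool_get_num_threads (pool) != 0)
    g_usleep (100);

  g_thread_pool_stop_unused_threads ();

  /* All formerly-used threads are accounted for, so this drains to zero. */
  while (g_thread_pool_get_num_unused_threads () != 0)
    g_usleep (100);

  g_thread_pool_free (pool, FALSE, TRUE);
  return 0;
}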
glib/gthreadpool.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
---
diff --git a/glib/gthreadpool.c b/glib/gthreadpool.c
index c7d587a566..ae02684437 100644
--- a/glib/gthreadpool.c
+++ b/glib/gthreadpool.c
@@ -165,8 +165,6 @@ g_thread_pool_wait_for_new_pool (void)
local_max_idle_time = g_atomic_int_get (&max_idle_time);
last_wakeup_thread_serial = g_atomic_int_get (&wakeup_thread_serial);
- g_atomic_int_inc (&unused_threads);
-
do
{
if ((guint) g_atomic_int_get (&unused_threads) >= local_max_unused_threads)
@@ -235,8 +233,6 @@ g_thread_pool_wait_for_new_pool (void)
}
while (pool == wakeup_thread_marker);
- g_atomic_int_add (&unused_threads, -1);
-
return pool;
}
@@ -403,12 +399,16 @@ g_thread_pool_thread_proxy (gpointer data)
}
}
+ g_atomic_int_inc (&unused_threads);
g_async_queue_unlock (pool->queue);
if (free_pool)
g_thread_pool_free_internal (pool);
- if ((pool = g_thread_pool_wait_for_new_pool ()) == NULL)
+ pool = g_thread_pool_wait_for_new_pool ();
+ g_atomic_int_add (&unused_threads, -1);
+
+ if (pool == NULL)
break;
g_async_queue_lock (pool->queue);
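Taken together, the hunks move the unused_threads accounting out of g_thread_pool_wait_for_new_pool() and into its caller. The resulting sequence in g_thread_pool_thread_proxy() with the patch applied reads roughly as follows (reconstructed from the diff context above; surrounding code omitted):

      /* Still under the pool's queue lock: the thread has just left the pool,
       * so mark it as unused before the reduced thread count can be observed. */
      g_atomic_int_inc (&unused_threads);
      g_async_queue_unlock (pool->queue);

      if (free_pool)
        g_thread_pool_free_internal (pool);

      /* Wait for another pool to hand over work (or for a request to exit),
       * then drop the unused marker again. */
      pool = g_thread_pool_wait_for_new_pool ();
      g_atomic_int_add (&unused_threads, -1);

      if (pool == NULL)
        break;

      g_async_queue_lock (pool->queue);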