Re: Proposal: giop-thread-invocation-stack



Michael Meeks wrote:

>Hi Frank,
>
>On Mon, 2003-11-10 at 11:22, Frank Rehberger wrote
>  
>
>>2.) I would like to encapsulate this in module GIOP in following 
>>function (similar to "*_foreach" on lists), is it acceptable in terms of 
>>performance?
>>    
>>
>
>	Hmm; I'd prefer not to get too over-heavy with iterators etc. can we
>not put all the functionality we're trying to implement in a few methods
>eg. 'remove_servants' or somesuch, that are just located together and
>maintainable that way ? the current_invocations code really needs to run
>for every invocation so speed is presumably quite important.
>  
>
The GIOP module does not #include PortableObject headers, so it is not
possible to move the iteration/comparison from the POA module into the
GIOP module completely.

> I guess we should really be CC'ing all of this to ORBit-list.



-------------------------------------------------------
------------ preceding conversation -------------------
-------------------------------------------------------

Hi Frank,

On Mon, 2003-11-10 at 11:22, Frank Rehberger wrote:

>>> >	Any chance you could do that re-factor first ?
>>> >
>>    
>>
>> yes, no problem, sounds reasonable :)
>  
>

	Great :-)

>> Short question: What is the difference between states
>> giop_thread_io() and giop_thread_safe()?
>  
>

	thread-safe means we do a load of locking even in the single threaded
case - since we never know when we might do a call from another thread [
default for all of Gnome ]. Thread I/O means we've started eg. a
'THREAD_PER_REQUEST' policy, or we've done a cross-thread call - which
means we fire-up a separate thread to do all the I/O - this is a 1 time
switch, and the grunt happens in 'linc2/src/linc.c' mostly.


>> 1.) it seems to leak if pobj is on stack twice, as it does not "break" 
>> loop  on first match.
>  
>

	Grief - is it possible for pobj to be on the stack twice ? I guess it
might be, if so we need to fix it in stable as well - it may be the
cause of a bug or two I recall.


>> 2.) I would like to encapsulate this in module GIOP in following 
>> function (similar to "*_foreach" on lists), is it acceptable in terms of 
>> performance?
>  
>

	Hmm; I'd prefer not to get too over-heavy with iterators etc. can we
not put all the functionality we're trying to implement in a few methods
eg. 'remove_servants' or somesuch, that are just located together and
maintainable that way ? the current_invocations code really needs to run
for every invocation so speed is presumably quite important.

	I guess we should really be CC'ing all of this to ORBit-list.

	Thanks,

		Michael.

-------------------------------------------------------
------------ preceding conversation -------------------
-------------------------------------------------------



Michael Meeks wrote:

> Hi Frank,
>
> On Fri, 2003-11-07 at 01:45, Frank Rehberger wrote:
>  
>
>> this is first proposal of thread aware handling of invocation stacks.
>> "make check" runs successfully, but it might be wise to test it with 
>> dev-build of gnome-2.6 also.
>>
>> any comments?
>>   
>

>     I think that we need to do a re-factor first to make this more 
> elegant.
> My feeling is that in the non-threaded case; we want to return a
> GIOPThread * from giop_thread_self that is the 'main thread' [ we can
> use a static local variable to store this one-off ].
>
>     The code would then avoid having so many conditionals / so much code
> etc. and this would also be rather useful in several other places for
> smoothing out conditionals / special cases I think.
>
>     Any chance you could do that re-factor first ?
>  
>
no problem, sounds reasonable.

Two questions:
A) What is the difference between giop_thread_io() and giop_thread_safe()?
B) The invocation stack is iterated in poa.c; it seems to leak if pobj
is on the stack twice.

     for (l = poa->orb->current_invocations; l; l = l->next) {
             ORBit_POAObject pobj = l->data;

             if (pobj->servant == servant)
                     objid = ORBit_sequence_CORBA_octet_dup (pobj->object_id);
     }

I would like to encapsulate it in the GIOP module with the following
function; is it acceptable in terms of performance?

/* @return the first stack element (top-down) for which the @equal
 * operation returns TRUE, otherwise NULL */
gpointer
giop_thread_invocation_stack_find_first (GIOPThread *,
                                         gboolean (*equal) (gpointer stack_elem,
                                                            gpointer comp_elem),
                                         gpointer comp_elem);

Best regards, Frank


-------------------------------------------------------
------------ preceding conversation -------------------
-------------------------------------------------------

Hi Frank,

On Fri, 2003-11-07 at 01:45, Frank Rehberger wrote:

>> this is first proposal of thread aware handling of invocation stacks.
>> "make check" runs successfully, but it might be wise to test it with 
>> dev-build of gnome-2.6 also.
>>
>> any comments?
>  
>
Yes - sorry for being so slow though. I think that we need to do a 
re-factor first to make this more elegant. My feeling is that in the 
non-threaded case; we want to return a GIOPThread * from 
giop_thread_self that is the 'main thread' [ we can use a static local 
variable to store this one-off ]. The code would then avoid having so 
many conditionals / so much code etc. and this would also be rather 
useful in several other places for smoothing out conditionals / special 
cases I think. Any chance you could do that re-factor first ? Otherwise, 
it looks just great :-) Thanks, Michael.

-------------------------------------------------------
------------ preceding conversation -------------------
-------------------------------------------------------


Hello,

this is a first proposal for thread-aware handling of invocation stacks.
"make check" runs successfully, but it might be wise to test it with a
dev-build of gnome-2.6 as well.

any comments?

------------------------------------------------------------------------

? src/orb/orb-core/bak.corba-orb.c
? src/orb/orb-core/bak.corba-object.c
? src/orb/poa/data
? test/everything/iorfile
Index: include/orbit/GIOP/giop-types.h
===================================================================
RCS file: /cvs/gnome/ORBit2/include/orbit/GIOP/giop-types.h,v
retrieving revision 1.21
diff -u -r1.21 giop-types.h
--- include/orbit/GIOP/giop-types.h	27 Oct 2003 16:14:12 -0000	1.21
+++ include/orbit/GIOP/giop-types.h	7 Nov 2003 01:39:19 -0000
@@ -33,6 +33,9 @@
 	void        (*request_handler) (gpointer poa_object,
 					gpointer recv_buffer,
 					gpointer dummy);
+
+        /* multi-threaded setup only */
+        GSList       *current_invocations;
 };
 
 #define GIOP_INITIAL_MSG_SIZE_LIMIT 256*1024
Index: include/orbit/GIOP/giop.h
===================================================================
RCS file: /cvs/gnome/ORBit2/include/orbit/GIOP/giop.h,v
retrieving revision 1.21
diff -u -r1.21 giop.h
--- include/orbit/GIOP/giop.h	27 Oct 2003 16:14:12 -0000	1.21
+++ include/orbit/GIOP/giop.h	7 Nov 2003 01:39:19 -0000
@@ -27,6 +27,17 @@
 void        giop_recv_set_limit    (glong limit);
 void        giop_incoming_signal_T (GIOPThread *tdata);
 
+GIOPThread *giop_thread_self_try  (void);
+
+/* must be called from the thread itself, otherwise error */
+GSList     *giop_thread_invocation_stack_as_list (GIOPThread * tdata);
+void        giop_thread_invocation_stack_push (GIOPThread * tdata, 
+					       gpointer elem);
+void        giop_thread_invocation_stack_pop  (GIOPThread * tdata);
+
+gpointer    giop_thread_invocation_stack_peek (GIOPThread * tdata);
+
+
 typedef struct _GIOPQueue GIOPQueue;
 GIOPThread *giop_thread_get_main  (void);
 void        giop_thread_set_main_handler (gpointer    request_handler);
Index: src/orb/GIOP/giop.c
===================================================================
RCS file: /cvs/gnome/ORBit2/src/orb/GIOP/giop.c,v
retrieving revision 1.44
diff -u -r1.44 giop.c
--- src/orb/GIOP/giop.c	31 Oct 2003 14:39:47 -0000	1.44
+++ src/orb/GIOP/giop.c	7 Nov 2003 01:39:20 -0000
@@ -192,6 +192,7 @@
 	g_free (dirname);
 }
 
+
 gboolean
 giop_thread_safe (void)
 {
@@ -282,6 +283,7 @@
 	tdata->keys = NULL;
 	tdata->async_ents = NULL;
 	tdata->request_queue = NULL;
+	tdata->current_invocations = NULL;
 
 	if (giop_main_thread)
 		tdata->request_handler = giop_main_thread->request_handler;
@@ -336,6 +338,8 @@
 		g_warning ("Leaked async ents");
 	if (tdata->request_queue)
 		g_warning ("Leaked request queue");
+	if (tdata->current_invocations)
+		g_warning ("Leaked invocation stack");
 #endif
 	if (tdata->invoke_policies)
 		g_queue_free (tdata->invoke_policies);
@@ -359,6 +363,71 @@
 	}
 
 	return tdata;
+}
+
+GIOPThread *
+giop_thread_self_try (void)
+{
+	if (!giop_thread_safe ())
+		return NULL;
+
+	return g_private_get (giop_tdata_private);
+}
+
+GSList*
+giop_thread_invocation_stack_as_list (GIOPThread* tdata)
+{
+	g_assert (giop_thread_safe ());
+	g_assert (tdata);
+
+	/* only thread itself may modify its stack */ 
+	g_assert (tdata == g_private_get (giop_tdata_private));
+
+	return tdata->current_invocations;
+}
+
+void
+giop_thread_invocation_stack_push (GIOPThread* tdata, gpointer elem)
+{
+	g_assert (giop_thread_safe ());
+	g_assert (tdata);
+
+	/* only thread itself may modify its stack */ 
+	g_assert (tdata == g_private_get (giop_tdata_private));
+
+	tdata->current_invocations = 
+		g_slist_prepend (tdata->current_invocations, elem);
+}
+
+void
+giop_thread_invocation_stack_pop (GIOPThread* tdata)
+{
+	GSList *top = NULL;
+	g_assert (giop_thread_safe ());
+	g_assert (tdata);
+	g_assert (tdata->current_invocations);
+
+	/* only the thread itself may modify its stack */
+	g_assert (tdata == g_private_get (giop_tdata_private));
+	
+	top = tdata->current_invocations;
+	tdata->current_invocations = tdata->current_invocations->next;
+	g_slist_free_1 (top);
+}
+
+gpointer
+giop_thread_invocation_stack_peek (GIOPThread* tdata)
+{
+	g_assert (giop_thread_safe ());
+	g_assert (tdata);
+
+	/* only the thread itself may modify its stack */
+	g_assert (tdata == g_private_get (giop_tdata_private));
+ 
+	if (tdata->current_invocations)
+		return tdata->current_invocations->data;
+	else
+		return NULL;
 }
 
 
Index: src/orb/poa/poa.c
===================================================================
RCS file: /cvs/gnome/ORBit2/src/orb/poa/poa.c,v
retrieving revision 1.113
diff -u -r1.113 poa.c
--- src/orb/poa/poa.c	3 Nov 2003 22:56:56 -0000	1.113
+++ src/orb/poa/poa.c	7 Nov 2003 01:39:21 -0000
@@ -89,21 +89,39 @@
 ORBit_POA_invocation_stack_push (PortableServer_POA  poa,
                                  ORBit_POAObject     pobj)
 {
- 	LINK_MUTEX_LOCK (poa->orb->lock);
-	poa->orb->current_invocations =
-		g_slist_prepend (poa->orb->current_invocations, pobj);
-	LINK_MUTEX_UNLOCK (poa->orb->lock);
+	if (giop_thread_safe ()) {
+		GIOPThread *tdata = giop_thread_self_try ();
+		if (!tdata) 
+			g_error ("invocation not managed by POA container");
+		giop_thread_invocation_stack_push (tdata, pobj);
+	} else {
+		/* lock/unlock might be obsolete? */
+		LINK_MUTEX_LOCK (poa->orb->lock);
+		poa->orb->current_invocations =
+			g_slist_prepend (poa->orb->current_invocations, pobj);
+		LINK_MUTEX_UNLOCK (poa->orb->lock);
+	}
 }
                                                                                 
 static void
 ORBit_POA_invocation_stack_pop (PortableServer_POA  poa,
                                 ORBit_POAObject pobj)
 {
-	/* FIXME, we know it is the top element to be removed */ 
-	LINK_MUTEX_LOCK (poa->orb->lock);
-	poa->orb->current_invocations =
-		g_slist_remove (poa->orb->current_invocations, pobj);
-	LINK_MUTEX_UNLOCK (poa->orb->lock);
+	if (giop_thread_safe ()) {
+		GIOPThread *tdata = giop_thread_self_try ();
+		if (!tdata) 
+			g_error ("invocation not managed by POA container");
+		giop_thread_invocation_stack_pop (tdata);		
+	} else {
+		/* FIXME, we know it is the top element to be removed,
+		 * solution: use g_slist_free_1 (head); */ 
+	
+		/* lock/unlock might be obsolete? */
+		LINK_MUTEX_LOCK (poa->orb->lock);
+		poa->orb->current_invocations =
+			g_slist_remove (poa->orb->current_invocations, pobj);
+		LINK_MUTEX_UNLOCK (poa->orb->lock);
+	}
 }
 
 static  PortableServer_ObjectId*
@@ -113,16 +131,35 @@
 	PortableServer_ObjectId *objid = NULL;
 	GSList *l;
 
-	LINK_MUTEX_LOCK (poa->orb->lock);
-	for (l = poa->orb->current_invocations; l; l = l->next) {
-		ORBit_POAObject pobj = l->data;
+	if (giop_thread_safe ()) {
+		GIOPThread *tdata = giop_thread_self_try ();
+		if (!tdata) 
+			g_error ("invocation not managed by POA container");
+
+		for (l = giop_thread_invocation_stack_as_list (tdata);
+		     l; l = l->next) {
+			ORBit_POAObject pobj = l->data;
+			 
+			/* FIXME, is pobj->object_id shared with other
+			 * threads, do we need lock ? */ 
+			if (pobj->servant == servant)
+				objid = ORBit_sequence_CORBA_octet_dup (pobj->object_id);
+		}
+			
+		return objid;
+	} else {
+		/* lock/unlock might be obsolete? */
+		LINK_MUTEX_LOCK (poa->orb->lock);
+		for (l = poa->orb->current_invocations; l; l = l->next) {
+			ORBit_POAObject pobj = l->data;
+			
+			if (pobj->servant == servant)
+				objid = ORBit_sequence_CORBA_octet_dup (pobj->object_id);
+		}
+		LINK_MUTEX_UNLOCK (poa->orb->lock);
 		
-		if (pobj->servant == servant)
-			objid = ORBit_sequence_CORBA_octet_dup (pobj->object_id);
+		return objid;
 	}
-	LINK_MUTEX_UNLOCK (poa->orb->lock);
-
-	return objid;
 }
 
 static  CORBA_Object
@@ -132,16 +169,35 @@
 	CORBA_Object  result = CORBA_OBJECT_NIL; 
 	GSList *l;
 
-	LINK_MUTEX_LOCK (poa->orb->lock);
-	for (l = poa->orb->current_invocations; l; l = l->next) {
-		ORBit_POAObject pobj = l->data;
+	if (giop_thread_safe ()) {
+		GIOPThread *tdata = giop_thread_self_try ();
+		if (!tdata) 
+			g_error ("invocation not managed by POA container");
+
+		for (l = giop_thread_invocation_stack_as_list (tdata);
+		     l; l = l->next) {
+			ORBit_POAObject pobj = l->data;
+			
+			/* FIXME, is pobj shared with other threads,
+			 * do we need lock ? */ 
+			if (pobj->servant == servant)
+				result = ORBit_POA_obj_to_ref (poa, pobj, NULL, NULL);
+		}
+		
+		return result;
+	} else {
+		/* lock/unlock might be obsolete? */
+		LINK_MUTEX_LOCK (poa->orb->lock);
+		for (l = poa->orb->current_invocations; l; l = l->next) {
+			ORBit_POAObject pobj = l->data;
+			
+			if (pobj->servant == servant)
+				result = ORBit_POA_obj_to_ref (poa, pobj, NULL, NULL);
+		}
+		LINK_MUTEX_UNLOCK (poa->orb->lock);
 		
-		if (pobj->servant == servant)
-			result = ORBit_POA_obj_to_ref (poa, pobj, NULL, NULL);
+		return result;
 	}
-	LINK_MUTEX_UNLOCK (poa->orb->lock);
-
-	return result;
 }
 
 /* FIXME, I would prefer if argument would be of type PortableServer_POA */ 
@@ -150,14 +206,21 @@
 {
 	ORBit_POAObject pobj = CORBA_OBJECT_NIL;
 
-	LINK_MUTEX_LOCK (orb->lock);
-	if (orb->current_invocations == NULL)
-		pobj = CORBA_OBJECT_NIL;
-	else
-		pobj = (ORBit_POAObject) orb->current_invocations->data;
-	LINK_MUTEX_UNLOCK (orb->lock);
-	
-	return pobj;
+	if (giop_thread_safe ()) {
+		GIOPThread *tdata = giop_thread_self_try ();
+		if (!tdata) 
+			g_error ("invocation not managed by POA container");
+		return giop_thread_invocation_stack_peek (tdata); 
+	} else {
+		LINK_MUTEX_LOCK (orb->lock);
+		if (orb->current_invocations == NULL)
+			pobj = CORBA_OBJECT_NIL;
+		else
+			pobj = (ORBit_POAObject) orb->current_invocations->data;
+		LINK_MUTEX_UNLOCK (orb->lock);
+		
+		return pobj;
+	}
 }
 
 /* PortableServer_Current interface */


