Re: [gnet] Problems with GNet and GIOChannels...



On Fri, 2004-08-20 at 10:02, Tim Müller wrote:
> On Friday 20 August 2004 14:23, James Wiggs wrote:
> 
> > > Have you tried GConn instead? Not sure if GConn is still marked as
> > > experimental, but I've made good experiences with it, and the API is
> > > _much_ simpler than GTcpSocket.
> >
> >    I'm all for simpler, believe me, but not at the expense
> > of stability.  I can't emphasize enough how important it is
> > that these processes be as solid as granite.  Can any of the
> > GNet developers chime in with an opinion on how stable GConn
> > is right now?  I chose GTcpSocket specifically because GConn
> > is labelled in the documentation as "experimental."
> 
> GConn is basically just a wrapper around GTcpSocket anyway. I always thought 
> that the 'experimental' was in reference to API stability and not to the 
> stability of the code itself. Don't really know for sure though. However, 
> unless I'm mistaken it's not marked as 'experimental' any longer.

   If this is the case, I will definitely include that
modification in this weekend's rewrite.  Thanks for the
tip on this.

> >    My programming here reflects my past experience writing
> > parallel numerical analysis codes for massively parallel non-
> > shared memory machines.  I'm very comfortable with the concept
> > of processes passing messages to each other, so I chose that
> > methodology for coordinating between processes.  Not saying
> > your solution isn't better, just that it isn't a natural way
> > of modeling the problem for me.  There will be a *lot* of
> > these messages; I figure the overhead of maintaining the AMQ
> > would be less than setting up this idle callback and then
> > inserting it into another thread's context.  There would have
> > to be locking implemented on that thread's context, as well,
> > right?
> 
> well, yes. g_source_attach() will need to lock the context you're inserting 
> into; however, the same goes for g_async_queue_push(), which will need to 
> lock the GAsyncQueue you're pushing data onto. The thing you save with 'my' 
> method is the constant polling of the GAsyncQueue (and the additional locking 
> that g_async_queue_try_pop() entails). Anyway, was just a suggestion.
>
> > How much overhead is really involved in setting
> > up an idle callback and then inserting it into the context of
> > another thread?
> 
> I'd say less than pushing data and then constantly polling the GAsyncQueue, 
> because you only need to lock the context once, and then the data is within 
> the scope of the context, whereas you need to lock the async queue at least 
> once for the push and once for the pull. Just a wild guess though, I haven't 
> looked into this in detail.

   It still feels to me like there must be more overhead
involved in creating a source and attaching a callback to
another thread's context than in just doing a
g_async_queue_push()/pop().  You're talking multiple function
calls to create the source and set its callback, plus a lock
and another function call to attach the source to the other
context, plus the overhead of dispatching the callback in
g_main_context_iteration(), compared to two function calls,
each of which locks implicitly.

   Just to be sure we're on the same page, it *is* clear to
you that the first loop that pops data off the GAsyncQueue
stops when:
(a) it hits the maximum allowed iterations, *OR*
(b) there are no more messages in the queue
whichever happens first, right?  That is, it won't poll
maxiter times if there are (N < maxiter) messages in the
queue; it would only poll N+1 times, then break.  A nice
optimization would be to take the lock on the queue
explicitly before the loop, use
g_async_queue_try_pop_unlocked() inside it, and release
the lock when done.  I should also mention that:

1) No thread ever pushes a message to its own GAsyncQueue
2) No thread ever reads from another thread's GAsyncQueue

   So the organization of the code is somewhat simplified.

> >    I'd love to do this, but the problem is the code is intimately
> > tied to the hardware receiver, so no test program could really show
> > the full context.  I don't know how to get around that problem.  I'm
> > going to be rewriting some parts of the code over the next few days
> > and testing it again on Monday.  Hopefully I will have a better grip
> > on things then.
> 
> You mean if you pipe packets from /dev/random over your TCP connection as fast 
> as possible, then you don't get this effect with your code?

   I have a modified version of the code which reads all
of the logged packets from the day out of a binary file,
and then forwards them through the AMQs and out through
the socket.  It does *not* exhibit the problem that the
"live" version has.  Very frustrating, trying to lock it
down...

> Cheers
>  -Tim
> _______________________________________________
> gnet mailing list
> gnet@gnetlibrary.org
> http://lists.gnetlibrary.org/mailman/listinfo/gnet
> 



