Re: QueryContext 'id' in CluePackets?

On Mon, 2003-07-28 at 14:02, DJ Adams wrote:
> On Mon, Jul 28, 2003 at 01:54:14PM -0400, Alex Graveley wrote:
> > > But this is a different issue (sending clues of a certain type only,
> > > as opposed to minimising the repeated sending of the same clues).
> > 
> > Not really.  If the cpm knows what a backend is interested in, it won't
> > resend the cluepacket unless something of interest has changed, which
> Perhaps I'm not understanding something, or we're talking at cross
> purposes; the issue I was thinking of was related to how many times
> a backend receives the same clue (whether accompanied by others in 
> a single cluepacket or not) - esp. in the case of a simple ('stupid') 
> backend. I'm not sure how you determine whether "something of interest
> has changed" in this context.

If a clue inside a cluepacket is of the cluetype supported by the
backend, and if the content of said clue has not been seen by the
backend before in this round of chaining, then send the packet to the
backend.  Otherwise, discard.

Basically this means that if the bugzilla backend (only interested in
"bugzilla" clue_types) has already processed a bugzilla clue containing
"#93898" as a value, it will never be asked to process it again (unless
another bugzilla clue is added, or the "#93898" is overwritten with
another value) during the rest of the chaining process.  Meaning it
won't go fetch the webpage again, or generate new matches for the
match-cache to discard.
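The dispatch rule above can be sketched roughly as follows. This is a minimal illustration, not dashboard's actual implementation: the `should_dispatch` function, the clue-as-tuple shape, and the per-backend `seen` set are all assumptions made up for this example.

```python
def should_dispatch(cluepacket, backend_cluetypes, seen):
    """Decide whether the cpm should send a cluepacket to a backend.

    cluepacket        -- list of (clue_type, value) pairs (hypothetical shape)
    backend_cluetypes -- set of clue_types this backend supports
    seen              -- per-backend set of (clue_type, value) pairs already
                         processed; reset at the start of each chaining round
    """
    dispatch = False
    for clue_type, value in cluepacket:
        # Ignore clues of types the backend doesn't support.
        if clue_type not in backend_cluetypes:
            continue
        # Only a supported clue with unseen content triggers a dispatch.
        if (clue_type, value) not in seen:
            seen.add((clue_type, value))
            dispatch = True
    return dispatch


# The bugzilla example from above: "#93898" is processed once, then
# discarded for the rest of the chaining round unless the value changes.
seen = set()
types = {"bugzilla"}
assert should_dispatch([("bugzilla", "#93898")], types, seen)       # first time
assert not should_dispatch([("bugzilla", "#93898")], types, seen)   # repeat, discard
assert should_dispatch([("bugzilla", "#12345")], types, seen)       # new value
```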

> > Does jabber have the same concept and usage pattern as dashboard with
> > regard to injecting new information to annotate/specialize the source
> What I was referring to was the common model of having a relatively 
> simpler data feeder and consumers that took all data packets and rejected
> immediately those that it didn't want (at the time) to handle.

Ya, so that makes it a different model from the one used heavily by
dashboard.

Also, on thinking about it more, the idea of an incremented id in a
QueryContext will not work... if you increment every time there is a
change, backends will still be reprocessing every time the cluepacket
changes; if you increment once per chaining process, the backend will
miss any changes made to the cluepacket that might generate new matches.
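Both failure modes can be simulated in a few lines. This is a hypothetical sketch assuming a backend that skips any cluepacket whose QueryContext id it has already seen; the `backend_runs` helper and the id sequences are invented for illustration.

```python
def backend_runs(packet_ids):
    """Count how many times a backend reprocesses a cluepacket, given the
    sequence of QueryContext ids attached to its successive revisions.
    The backend skips any id it has already seen."""
    seen, runs = set(), 0
    for pid in packet_ids:
        if pid not in seen:
            seen.add(pid)
            runs += 1
    return runs


# Scheme 1: increment the id on every change. Four revisions of the
# cluepacket get four ids, so the backend reprocesses all four times,
# even when nothing it cares about changed.
assert backend_runs([1, 2, 3, 4]) == 4

# Scheme 2: increment once per chaining process. All four revisions share
# one id, so the backend runs once and misses the three later revisions
# that might have generated new matches.
assert backend_runs([7, 7, 7, 7]) == 1
```

Either way, the id alone carries too little information: what the backend actually needs to know is whether the *content it supports* changed, which is what the clue-level filtering above provides.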

Trusting my gut seems to have worked for once :-)
