Re: Updates Galore



Hi,

Yeah, that's basically what I was thinking, although also visually
fading out the snippet, to reinforce to the user that the data is
older.

It means that Dashboard needs to keep track of old snippets...
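
Something like the rough sketch below is what I have in mind.  It's
Python and purely illustrative, not Dashboard's real code, and all the
names are made up.  Each new cluepacket pushes its snippets on top and
ages everything already showing; the age drives how faded a snippet is
drawn and, past Skadz's threshold, drops it entirely:

MAX_GENERATIONS = 3   # assumption: how many newer cluepackets before a snippet is dropped

class SnippetStore:
    """Keeps recent snippets around, aging them as new cluepackets arrive."""

    def __init__(self, max_generations=MAX_GENERATIONS):
        self.max_generations = max_generations
        self.entries = []                 # newest first: (age, snippet)

    def add_cluepacket_results(self, snippets):
        # Everything already on screen gets one generation older, and
        # anything past the threshold falls off the bottom.
        self.entries = [(age + 1, s) for age, s in self.entries
                        if age + 1 <= self.max_generations]
        # New snippets go on top at age 0 (drawn fully opaque).
        self.entries = [(0, s) for s in snippets] + self.entries

    def visible(self):
        # A frontend could map age to opacity, e.g. 1.0 down to 0.25.
        return [(s, 1.0 - age / (self.max_generations + 1.0))
                for age, s in self.entries]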

Cheers!

On Fri, 2003-12-12 at 07:01, Ryan P Skadberg wrote:
> The idea which someone (puck?) has mentioned on IRC once or twice is the
> idea of "fading" data.  So, instead of just straight killing things off,
> things die off once a certain amount of data has come through (a certain
> number of queries?).  I could see something like this:
> 
> * Packet comes in and shows data
> * Second packet comes in and puts its data at the top of the window
> * A third comes in and does the same
> * Once a threshold is hit, the data from the bottom starts getting
> erased as no longer relevant.
> 
> I like this idea.  I have already seen when testing that I will test
> something, move to check e-mail and a clue packet will be sent and my
> test data will go away.  It would be nice if it were just moved down
> some.
> 
> Thoughts?
> 
> Skadz
> 
> On Thu, 2003-12-11 at 03:01, Jim McDonald wrote:
> > > On Thu, 2003-12-11 at 00:03, Ryan P Skadberg wrote:
> > >
> > >> The Clue Packet Manager is still crashing here and there.  Large
> > >> Segfaults during HTML and Text Chainers.  Haven't had a chance to look
> > >> at it yet, but these are the biggest crashes.  Also, seeing Null
> > >> Pointer Exceptions in the Text Indexer and RSS backends thus far.
> > >> Looks like we need to be checking for these more closely.
> > >
> > > The crashes occur because of the following sequence of events:
> > >
> > >         * Cluepacket comes in
> > >
> > >         * RunQuery in the CPM kills any outstanding running threads
> > >
> > >         * A new thread is launched for each backend
> > >
> > >         * One of those backends is a chainer.  it creates a new
> > >         cluepacket, and sends it out
> > >
> > >         * the new, chained cluepacket comes in
> > >
> > >         * RunQuery in the CPM kills any outstanding running threads:
> > > INCLUDING ITSELF!
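
For what it's worth, one way to avoid the self-kill would be for
RunQuery to skip whichever thread is delivering the chained cluepacket.
Rough Python sketch only: the names below are made up, not the real CPM
code, and cancellation here is cooperative (each backend would need to
poll its event and bail out):

import threading

class CluePacketManager:
    """Illustrative stand-in for the CPM's query dispatch, not the real thing."""

    def __init__(self, backends):
        self.backends = backends      # callables taking (cluepacket, cancel_event)
        self.running = []             # list of (thread, cancel_event)
        self.lock = threading.Lock()

    def run_query(self, cluepacket):
        me = threading.current_thread()
        with self.lock:
            # Ask outstanding queries to stop, but never the thread that
            # handed us this cluepacket (a chainer), or it cancels itself.
            survivors = []
            for worker, cancel in self.running:
                if worker is me:
                    survivors.append((worker, cancel))
                else:
                    cancel.set()
            self.running = survivors
            # One worker per backend for the new cluepacket.
            for backend in self.backends:
                cancel = threading.Event()
                worker = threading.Thread(target=backend,
                                          args=(cluepacket, cancel))
                self.running.append((worker, cancel))
                worker.start()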
> > 
> > The reason I didn't fix this one is that I'm not sure what the expected
> > behaviour should be.  Does the new cluepacket supersede the old one
> > entirely (i.e. we shut down the outstanding backend requests and treat
> > this as the equivalent of new user input), or does it complement it
> > (i.e. we generate a new set of threads for the clue but keep the old
> > ones around)?
> > 
> >    I suppose this leads to a slightly different question, which is: is
> > today's model, in which the latest clue is all-important and any
> > information from older cluepackets is abandoned/ignored, what we want
> > to happen?  In a situation where you have lots of frontends and a busy
> > user, might this mean that dashboard is just continually throwing up
> > information without it staying on-screen for long enough to be of any
> > use?
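
That worry seems fair.  One possible answer, on top of the fading idea
above, would be to give every snippet a minimum dwell time before newer
cluepacket results may displace it.  Again a purely illustrative
sketch; the names and the ten-second figure are made up:

import time

MIN_DWELL_SECONDS = 10.0      # assumption: minimum time a snippet stays visible

class DwellGuard:
    """Tracks when snippets first appeared so newer results can't bump them too soon."""

    def __init__(self, min_dwell=MIN_DWELL_SECONDS):
        self.min_dwell = min_dwell
        self.shown_at = {}        # snippet id -> when it first appeared

    def note_shown(self, snippet_id):
        self.shown_at.setdefault(snippet_id, time.monotonic())

    def may_evict(self, snippet_id):
        shown = self.shown_at.get(snippet_id)
        return shown is None or time.monotonic() - shown >= self.min_dwell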
> > 
> > > Nat
> > 
> > Cheers,
> > Jim.
-- 
Andrew Ruthven
Senior Systems Engineer, Actrix Networks Ltd   -->   www.actrix.gen.nz
At Actrix puck actrix gen nz
At Home:  andrew etc gen nz




