Re: [BuildStream] Responsive, but not overly verbose UX - not-in-scheduler



On Thu, Apr 18, 2019 at 17:22:02 +0900, Tristan Van Berkom via buildstream-list wrote:
> The fact that we don't have any feedback about the fetching of
> junctions, and also that the user settings (e.g. fetchers) have no
> impact on the junction fetching, is rather buggy indeed (I use a system
> monitor, and I feel rather offended that BuildStream is causing network
> activity to occur without telling me about it or allowing me to police
> it in a way that is consistent with all other fetch operations).

I'd note that fetch is only half the story.  The staging of the junction source
to enable loading of elements from within it is often even worse on WSL than
cloning the junction's repo in the first place.

> This is what I had in mind in order to fix these two problems:
>
>   * Pop up the status bar right away instead of deferring it

By "status bar" do you mean what I think of as the scheduler UI?

>   * Have the Loader trigger a callback mentioning that it requires a
>     fetch operation (or multiple fetch operations) to be performed.
>
>   * Have Stream run the scheduler on its behalf to fetch the junction

If this essentially means we start the scheduler always and treat all longer
running operations semi-equally, then I think it could be an interesting way to
implement the goal of my proposal.  I didn't want to suggest it because the
scheduler seemed so focussed on progressing fully realised elements from one
end of a semi-rigid pipeline to the other.
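For the sake of discussion, the callback idea might look something like this
minimal sketch (all names here are hypothetical stand-ins, not BuildStream's
actual API): the Loader never fetches directly, it hands the sources it needs
back to Stream, which pushes them through the scheduler like any other fetch.

```python
class Source:
    """Stand-in for a source plugin with a cached/uncached state."""

    def __init__(self, name, cached=False):
        self.name = name
        self._cached = cached

    def is_cached(self):
        return self._cached

    def fetch(self):
        self._cached = True


class Junction:
    """Stand-in for a junction element."""

    def __init__(self, sources):
        self.sources = sources

    def stage(self):
        # Staging only succeeds once every source is cached.
        return all(s.is_cached() for s in self.sources)


class Loader:
    """The Loader asks the Stream (via a callback) to schedule any
    fetches it needs, rather than fetching behind the user's back."""

    def __init__(self, fetch_callback):
        self._fetch_callback = fetch_callback

    def load_junction(self, junction):
        missing = [s for s in junction.sources if not s.is_cached()]
        if missing:
            # Blocks until the Stream's scheduler has fetched these,
            # so the usual fetcher limits and status UI apply.
            self._fetch_callback(missing)
        return junction.stage()
```

The point of the shape is that the Stream owns the scheduler, so junction
fetches get the same policing as every other fetch operation.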

> It should be feasible to fetch all of the junctions encountered in the
> first-pass load of one project in parallel, rather than serializing
> those fetches (all while respecting the configuration of how many
> fetches the user has allowed to occur in parallel).

Since we (currently) load elements depth-first, we don't encounter junctions in
any kind of parallel way as far as I can tell.
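That said, if a load pass were changed to collect the junctions it discovers
before resolving them, batching the fetches is straightforward.  A rough
sketch, using a thread pool purely as a stand-in for the real scheduler (names
are hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor


class Junction:
    """Stand-in for a junction element with a fetchable source."""

    def __init__(self, name):
        self.name = name
        self.fetched = False

    def fetch(self):
        self.fetched = True


def fetch_junctions(junctions, fetchers=4):
    # Fetch every junction discovered in one load pass in parallel,
    # capped at the user's configured number of fetchers.
    with ThreadPoolExecutor(max_workers=fetchers) as pool:
        list(pool.map(lambda j: j.fetch(), junctions))
```

The prerequisite is exactly the change you describe: the loader would have to
accumulate junctions per pass instead of descending into each one as it is hit.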

> This way, if we enhance progress indicators in some way in the future,
> it can be applied equally and consistently to all operations which
> support progress reporting (Source plugins could optionally support
> such a reporting API one day, the loader might provide a counter of how
> many elements were loaded so far, etc).

If we allowed a progress stream to come back from jobs toward the scheduler's
UI, then yes, that would be a not-unreasonable way to do this.

> What do you think, is there a reason we need to add an additional
> logging mechanism for early logging that I overlooked, or a reason that
> it would be more desirable?

If you're happy with the idea of breaking the scheduler away from its rigid
focus on only progressing elements through a fixed pipeline then this could be
a viable approach.  There'd need to be some work done to allow for jobs which
run entirely within the parent rather than always farming out to subprocesses,
though it's possible we already have that with the cache jobs (I don't know
either way).
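The distinction I mean is roughly this (a sketch with hypothetical names, not
a claim about how the existing cache jobs are implemented): a job flagged to
run in the parent executes directly, so the state it builds stays visible,
while everything else is farmed out to a child process as now.

```python
import multiprocessing


class Job:
    """A job that can run either in-process (for quick, state-mutating
    work like loading) or in a child process (for heavy, isolated work
    like building)."""

    def __init__(self, func, in_parent=False):
        self._func = func
        self._in_parent = in_parent

    def run(self):
        if self._in_parent:
            # Run directly in the main process: no fork cost, and any
            # state it builds (e.g. loaded elements) stays visible.
            return self._func()
        # Otherwise farm out to a child process, as the scheduler does
        # for builds; the callable and result must be picklable.
        with multiprocessing.Pool(processes=1) as pool:
            return pool.apply(self._func)
```

The awkward part in practice would be results: in-parent jobs can mutate state
directly, whereas subprocess jobs have to serialize everything back.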

The *critical* thing here is that the goal is to *ALWAYS* provide an indication
of progress toward the user's desired outcome within a few seconds, and to
never block output entirely during that time.  Ultimately I don't mind how
that's achieved so long as it's consistent and easy to add anywhere we end up
with a long-running job to do.

D.

-- 
Daniel Silverstone                          https://www.codethink.co.uk/
Solutions Architect               GPG 4096/R Key Id: 3CCE BABE 206C 3B69

