Re: Resource allocation behaviour



Hi All,

Apologies in advance for the essay!

Lee Baylis wrote:
I would also like to extend the mrp-resource data schema with a 'maximum allocatable
units' field definable on each resource.

It turns out this is already there in the code (resource->priv->units), but I can't see any UI routines for users to set the value, or find anywhere it is used (certainly 100 has been hard-coded into planner in many places as a max_usage value). So, I will look at adding these UI routines instead of extending the schema for this one.
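To illustrate the kind of change I have in mind (the names and the fallback here are my own sketch, not the real MrpResource API), replacing the scattered hard-coded 100 with a single accessor might look something like:

```c
#include <assert.h>

/* Hypothetical sketch only: the real MrpResource is a GObject with a
 * private struct, but the fallback logic is the point here. */
#define MRP_RESOURCE_DEFAULT_MAX_UNITS 100

typedef struct {
    int units;              /* 0 = not yet set by the user in the UI */
} MrpResourcePriv;

typedef struct {
    MrpResourcePriv *priv;
} MrpResource;

/* Return the user-configured maximum allocatable units, falling back
 * to the historical default of 100 when nothing has been set. */
static int
mrp_resource_get_max_units (MrpResource *resource)
{
    if (resource->priv->units > 0)
        return resource->priv->units;
    return MRP_RESOURCE_DEFAULT_MAX_UNITS;
}
```

Callers that currently compare against a literal 100 would then all go through the one function, so the UI routines only have to set priv->units.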

Maurice van der Pot wrote:
I have mentioned before on this list that I would like to limit
file/database format changes to as few releases as possible.
A few possible options are then:

- implementing as much as possible of the other features that also need
  changes and only release a new version when all of it has been
  included.

- first spend time on getting a release out with a lot of the
  regular bugfixes/features that don't require format changes.
  Then add your work to SVN and focus on the change-requiring features
  as above.

That leaves only the project-level option for setting overload behaviour, so I will use a flag for now, then look at adding it to the schema when I start introducing algorithms to choose from - at that point I will have a better idea of what this option will need to hold.

Maurice van der Pot wrote:
Do send the simple version you have to the list, because I don't want
you to do a lot of rework if it is reviewed only after you have put in
an awful lot of time.

In fact I would have liked to have had some updates on the approach you
have taken to implement this. We have not been able to suggest issues
with the approach or come to an (informal) spec.

I've still not come to a final spec with John - I broke it down into:

1) Putting in triggers to determine when an action has overloaded a resource

2) Adding one or more algorithms from the discussion on this list and bugzilla to fire when this happens

3) Modifying the UI to graphically handle some of the scenarios which can arise

It is the details of the algorithms which are still under discussion, so for now I have concentrated on the first step, and my thinking was that the best way to demonstrate it would be to start by simply rejecting actions which cause a resource to be overloaded.

On that point, very little of what I have completed so far has been a massive leap or me coding off into the unknown. My main observation was that planner already contains code for detecting overallocation, in the Resource Usage view routines. The steps I have taken so far have been to tidy that up a little, move some of it into library files instead of the planner-usage-row file, and add a few functions to calculate and evaluate resource allocations.

In the interests of submitting smaller steps, I've attached the completed patches from just that piece of work, but note that no new user functionality is introduced by incorporating these patches into planner - they just set the stage for the next steps.

planner-usage-row:

Attachment: planner-usage-row.c.patch
Description: Binary data



- I have taken the Date struct, expanded it slightly, and more formally named it MrpAssignmentEdge, since it essentially contains information about the start and finish edges of an assignment. I have moved it, the date_type and the date_compare function to the mrp-assignment library, and taken the code which initialised the old Date members out of the draw functions and moved it into a function in the assignment library, where the overallocation routines can make use of it too.

- I have introduced a PlannerUsageRowColorScheme struct which simplifies some of the draw functions by moving logic which didn't need to be in them elsewhere. color_schemes are now passed between functions instead of allocation units, and the calculations which the functions used to perform are moved to the mrp-resource library, where overallocation routines can make use of them too.
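To give a flavour of the simplification (the struct and field names here are illustrative, not the committed code), the idea is that the allocation-to-colour decision happens once, outside the draw functions, which then only receive a scheme:

```c
#include <assert.h>
#include <string.h>

/* Illustrative sketch: one colour per allocation state, chosen up
 * front so the draw functions no longer need allocation units. */
typedef struct {
    const char *free_color;      /* no allocation in the interval */
    const char *normal_color;    /* allocated within limits */
    const char *overload_color;  /* allocation exceeds max units */
} PlannerUsageRowColorScheme;

/* Map an allocation level onto the scheme's colour. */
static const char *
usage_row_color_for (const PlannerUsageRowColorScheme *scheme,
                     int allocated_units, int max_units)
{
    if (allocated_units <= 0)
        return scheme->free_color;
    if (allocated_units > max_units)
        return scheme->overload_color;
    return scheme->normal_color;
}
```

The draw code then just paints with whatever colour it is handed, and the same threshold logic is available to the overallocation routines.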

mrp-assignment:

Attachment: mrp-assignment.c.patch
Description: Binary data

Attachment: mrp-assignment.h.patch
Description: Binary data



Aside from moving the MrpAssignmentEdge struct as detailed above, I have introduced several new functions which are useful for manipulating allocation edges, isolating time periods of interest, and collecting running allocation totals across a project.
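As a rough sketch of the edge handling (field and function names are my paraphrase, not the exact patch contents), the edges sort with qsort so that a finish at the same instant as a start does not count as an overlap:

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative version of the edge data described above. */
typedef enum {
    MRP_EDGE_FINISH,   /* sorts first at equal timestamps */
    MRP_EDGE_START
} MrpAssignmentEdgeType;

typedef struct {
    long                  t;      /* mrptime-style timestamp */
    MrpAssignmentEdgeType type;
    int                   units;  /* units the assignment contributes */
} MrpAssignmentEdge;

/* qsort-compatible comparison: order by time, with FINISH edges
 * before START edges at the same instant, so that back-to-back
 * assignments are not treated as overlapping. */
static int
mrp_assignment_edge_compare (const void *a, const void *b)
{
    const MrpAssignmentEdge *ea = a;
    const MrpAssignmentEdge *eb = b;

    if (ea->t != eb->t)
        return (ea->t < eb->t) ? -1 : 1;
    return (int) ea->type - (int) eb->type;
}
```

With the edges in this order, a single left-to-right pass can maintain a running allocation total for a resource.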

mrp-resource:

Attachment: mrp-resource.c.patch
Description: Binary data

Attachment: mrp-resource.h.patch
Description: Binary data



Aside from moving the resource allocation status calculations as detailed above, I have introduced several new functions which are useful for populating allocation edges from a resource, and calculating overallocation conditions.
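The overallocation test itself boils down to a sweep over sorted edges with a running total. A minimal standalone sketch (simplified from the actual patches, with invented names) would be:

```c
#include <assert.h>
#include <stdlib.h>

/* Minimal stand-in for the edge data: real code would build these
 * from a resource's assignments. */
typedef struct {
    long t;       /* timestamp of the edge */
    int  delta;   /* +units at a start edge, -units at a finish edge */
} Edge;

static int
edge_compare (const void *a, const void *b)
{
    const Edge *ea = a, *eb = b;

    if (ea->t != eb->t)
        return (ea->t < eb->t) ? -1 : 1;
    return ea->delta - eb->delta;   /* finishes (-) before starts (+) */
}

/* Sort the edges, sweep left to right keeping a running total of
 * allocated units, and report whether the total ever exceeds the
 * resource's maximum. */
static int
resource_is_overallocated (Edge *edges, int n_edges, int max_units)
{
    int running = 0;

    qsort (edges, n_edges, sizeof edges[0], edge_compare);
    for (int i = 0; i < n_edges; i++) {
        running += edges[i].delta;
        if (running > max_units)
            return 1;
    }
    return 0;
}
```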

***

The slightly trickier part has been working out where to fire these new library routines, and this is what I have been following up on over the last few days, and had hoped to have achieved by now. I'll run through my thinking and observations below.

I realised potentially any action which causes the task manager to initiate a recalculation of the Gantt chart could result in tasks moving around to the point where some resource or other becomes overallocated, so I set out to investigate which actions are cause for concern.

So far I have enumerated those actions which are capable of moving already-allocated tasks in time. Whenever this movement can occur relative to other allocated tasks, resource overallocation is possible. Please let me know if you think I have missed any:

1) Changing a resource's calendar
2) Changing project calendars (assuming different resources are using different calendars)
3) Altering project start date (under same assumption as above)
4) Indenting a task into a parent
5) Un-indenting a task from a parent
6) Deleting a task and associated tree
7) Changing the maximum number of units associatable with a resource (currently not implemented in the UI)
8) Assigning a task to a resource
9) Removing a resource assignment
10) Changing the number of units of an assignment
11) Adding a relation to a task via the dialog or dragging between tasks
12) Changing the relation associated with a task
13) Removing a relation from a task
14) Altering the work of a task via the dialog or clicking on the task
15) Altering the duration of a fixed task
16) Constraining a task

In terms of catching all of these, I had originally been following up on an idea which worked well for the simpler actions, and allowed me to create a build where several items in the list above caused pop-up messages and blocked overloading. However, the idea has not been as straightforward as I would have liked for the more complicated actions, so I would welcome some discussion:

I think by far the most convenient method for determining whether a resource has become overloaded is to allow mrp_task_manager_recalc to run, and then apply a test, followed by taking any necessary corrective action. Any other method of determining whether an action will cause an overload, as far as I can see, will just end up duplicating most of what mrp_task_manager_recalc already performs.

My first idea was that the best way to determine whether any given action has overloaded a resource would be to allow that action to fire function calls all the way down to mrp_task_manager_recalc, then run the overallocation test.

Should the test fail, I had hoped to then be able to trigger some action at the mrp_task_manager_recalc level in order to handle resource overallocation scenarios.

This approach may still be useful for situations where the user has specified at the project level that some fixed algorithm be run any time any resource becomes overloaded - however there are two obvious scenarios for which it is insufficient:

1) If the user has specified at the project level that resource overloading is to be prohibited - in which case, the action and subsequent recalculation resulting in the overload has already run, and needs to be undone

2) Those (hopefully not too ambitious) scenarios where the user has not specified a resource overallocation behaviour at the project level, but rather has asked to be prompted with a choice of behaviours every time overallocation occurs. Again, the recalculation has already been performed at this point, so needs to be undone and then re-performed with whichever behaviour the user has selected to be active.
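In both scenarios the shape is the same: snapshot, let the action and recalculation run, test, and restore on overload. A toy model of that loop (all names invented, and the real state being far richer than an int array):

```c
#include <assert.h>

/* Toy model of "let the action and recalc run, then test and undo". */
typedef struct {
    int allocated[2];   /* units currently allocated per resource */
} ProjectState;

static int
any_overallocated (const ProjectState *s, int max_units)
{
    for (int i = 0; i < 2; i++)
        if (s->allocated[i] > max_units)
            return 1;
    return 0;
}

/* Snapshot the state, apply the action (standing in for the real
 * action plus mrp_task_manager_recalc), test, and roll back on
 * overload. Returns 1 if the action stuck, 0 if it was undone. */
static int
do_action_checked (ProjectState *state, int resource, int delta,
                   int max_units)
{
    ProjectState snapshot = *state;

    state->allocated[resource] += delta;
    if (any_overallocated (state, max_units)) {
        *state = snapshot;
        return 0;
    }
    return 1;
}
```

In planner the "snapshot and restore" step is exactly the part that is expensive to hand-roll per action, which is what leads to the cmd manager discussion below.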

Faced with these scenarios, my first thought, and the one I have been playing with for a while, was that each of the _do functions resulting from an action would need to perform a check for overallocation and then be prepared to undo the actions it took. For simpler actions (assigning a task to a resource, for example), this was not much code and worked well for a while. However:

- undo data for some of these actions can be quite complicated, for example deleting a task tree

- the planner cmd manager already seems equipped to handle reversing actions as a consequence of providing edit->undo

- I found the start of an attempt at cmd transaction support in the cmd manager code, although it has not yet been completed to the point where it deals with custom errors mid-transaction

Having made these observations, I am now inclined to change tactic and try and use the planner cmd manager (probably via transaction support) to perform actions, check for overload, and then roll them back or reapply them specifying a different overallocation behaviour. I wondered if anyone can see any issues with this approach, or has any other ideas.

I have come up with some issues myself:

1) All of the actions I have looked at so far fill out cmds in the cmd manager which invoke the same cmd_undo routine regardless of whether the original cmd_do succeeded. I'm not sure if this is very clever design or an oversight - as far as I can tell there aren't currently any scenarios in planner where this behaviour causes a problem.

However, introducing additional failure criteria (i.e., resource has been overloaded) for some of the do cmds may upset this balance. For example, again, deleting a task tree. At the moment no facility is provided for this action to return failure, and the undo routine involves recreating the entire task tree which was deleted.

Now, if the ability for a task tree deletion to fail is introduced, meaning the cmd can run but not delete the tree, it looks to me like running the undo will create a duplicate tree. I am currently writing some code, using the delete-tree action, to test whether this actually happens. Undos for other actions could meet with similar complications.

I can think of a couple of fixes though:

i) The obvious one is to use the ability for a cmd object to record the failure of its cmd_do actions, to modify each action's undo routine and include clauses depending on whether the original cmd succeeded. A transaction manager could then roll back the failed cmd by applying the more flexible undo.

ii) The other fix feels a bit dirty, and involves finding a way to retroactively persuade the cmd manager that the cmd_do never actually ran in the first place, then optionally trying the cmd again, but specifying a different overallocation behaviour.
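Of the two, fix i) can be sketched like this (a toy stand-in, not the real PlannerCmd machinery): the cmd records whether its do actually ran, and the undo becomes a no-op otherwise:

```c
#include <assert.h>

/* Toy cmd: records whether cmd_do succeeded so that cmd_undo never
 * reverses an action that was never performed. */
typedef struct {
    int did_run;   /* set by cmd_do on success */
    int delta;     /* toy stand-in for the cmd's payload */
} Cmd;

static int
cmd_do (Cmd *cmd, int *state, int max_units)
{
    if (*state + cmd->delta > max_units) {
        cmd->did_run = 0;          /* refused: resource would overload */
        return 0;
    }
    *state += cmd->delta;
    cmd->did_run = 1;
    return 1;
}

static void
cmd_undo (Cmd *cmd, int *state)
{
    if (!cmd->did_run)             /* nothing happened, nothing to reverse */
        return;
    *state -= cmd->delta;
    cmd->did_run = 0;
}
```

This is what prevents the duplicate-tree problem above: the delete-tree undo would only recreate the tree if the delete really happened.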

2) Undo and redo on the edit menu might offer stages that the user wasn't aware of unless transaction support is used, although it is already the case in planner that some actions remain in the undo/redo menu even though they failed (creating circular task dependencies, for example)

If transaction support were the way to go, since it is unfinished, I can see a few ways to finish it:

i) Halting as soon as an error is encountered, rolling back to the start of the transaction, and then freeing the cmds in the transaction so that nothing appears in the redo menu

ii) Entering the end_transaction marker when an error is encountered, then rolling back actions to the start of the transaction - in this case, the transaction does appear in the redo menu, but presumably fails again if someone clicks it and can be configured to roll back again on redo error

iii) Ignoring the error, carrying on with the transaction, naturally writing the end_transaction marker, and then evaluating whether there was an error and rolling back if there was. This method would probably be better if we had any expectation that the error mid-transaction was temporary and that the transaction might succeed on redo.

Personally, I prefer the first, since I don't think most of our transactions are the kind of thing that might turn out to be magically fixed the next time someone clicks redo, and I can't see the point of offering the opportunity to redo something we know is just going to fail again. Also, i) can be modified with the kind of "if error occurs try this instead" behaviour I am looking for.
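For completeness, option i) in miniature (a toy transaction with invented names - the real cmds are of course far richer than integer deltas): abort on the first error, roll everything back, and leave nothing for the redo menu:

```c
#include <assert.h>

/* Toy transaction following option i): on the first failing cmd,
 * roll back every cmd applied since the transaction started and
 * discard them all, so nothing appears in the redo menu. */
#define MAX_CMDS 16

typedef struct {
    int deltas[MAX_CMDS];  /* cmds applied so far in this transaction */
    int n;
    int value;             /* stand-in for project state */
} Transaction;

static int
txn_do (Transaction *t, int delta, int max_units)
{
    if (t->value + delta > max_units) {   /* error mid-transaction */
        while (t->n > 0)                  /* roll back to the start */
            t->value -= t->deltas[--t->n];
        return 0;                         /* cmds dropped: empty redo */
    }
    t->deltas[t->n++] = delta;
    t->value += delta;
    return 1;
}
```

The "try this instead" variant would then re-run the loop with a different overallocation behaviour instead of simply returning 0.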

That's all my thoughts for now - thanks for reading this far!

Thanks,
lee

