Published by Arto Jarvinen on 14 Aug 2015

## The devil in the detail

When writing a quality system manual there is (or at least should be) a need to talk about real-world objects. Let’s call them primary objects for lack of a better word. Examples are a product or a (software) bug (whether a bug is an “object” is of course debatable, but in the abstract world of software it is). Sometimes, on the other hand, we need to refer to some kind of representation of that primary object, like a ticket in a bug tracking system (representing a real-world bug). Let’s call these objects meta-objects, as they contain information about primary objects. Sometimes this causes no problems, as when we have a product and a product specification describing the product. (The product specification is of course also a real-world object, but it is not “primary” in the sense that it has no justification without the primary object it describes.) In this case the objects have different names (“product” and “product specification”) and will not be confused.

Sometimes it is very convenient to use the same name for the primary object and its meta-object, though. When modeling a system in a UML tool, for instance, the modeler may describe real-world, primary use cases with meta-objects of the type use case in the tool. When we refer to “use case” in the quality system manual for such an organization, we don’t know whether we refer to the primary use case or to its representation in the tool, the meta-use case. Many times this ambiguity goes unnoticed. The text may apply reasonably well to both objects. Sometimes we are able to infer from the text which object it refers to. But sometimes the ambiguity is larger. Let’s say that we are developing a medical device and wish to manage the clinical risks potentially caused by the device. Let’s also assume that we have a tool for documenting and tracking individual clinical risks. When we write “find all clinical risks” in the quality system manual, we can’t be sure whether we are referring to the clinical risks already documented as records in the tool or to previously unknown primary clinical risks. The editor of the quality system manual will also invariably go astray from time to time and describe the primary object when a description of the meta-object was called for, and sometimes vice versa. A primary use case is after all a rather different animal than the meta-use case in the tool.

It might be a little clunky to come up with different names for the primary objects and the meta-objects. What would we, for instance, call the use case meta-objects? “Meta-use case” doesn’t roll easily off the tongue. I have therefore adopted a convention according to which all primary objects are denoted with all lower-case letters and the meta-objects with capitalized names.

This may not be a Nobel Prize-winning insight, but I’ve seen confusion about this many times, and I have been confused myself, so I still wanted to share it.

Published by Arto Jarvinen on 07 Jul 2015

## This is not quality

We all know that the market economy is not perfect but that the alternatives are worse. Right now I’m trying to wrap my brain around the fact that there are companies that seem to survive in a market economy despite an apparent disregard for their customers. The example I currently have on my mind is CanalDigital, a Swedish satellite TV provider.

They have a habit of calling their customers to offer additional channels and other products. Each and every time they have called me I have told them to take me off their calling lists, since I hate wasting my time on such calls and I never buy anything during an unsolicited phone call. I had probably asked about five times before I mailed CanalDigital’s customer support with the same request. I told them that I would cancel my subscription if they didn’t stop harassing me over the phone. They promised to take me off the lists. A few weeks later I got a call from – you guessed it – CanalDigital. I might have lost my temper a bit on some poor innocent student trying to make some extra money at a call center.

I wrote to customer support again telling them that they obviously hadn’t taken me off the lists after all and that I now therefore wanted to cancel my subscription. The answer I got was a simple: “We have received your cancellation. We will process it shortly.” No explanations, no excuses.

After a couple of weeks I got a call from – CanalDigital. For a second I thought that maybe a manager had heard of this and wanted to make things right. But no, it was an unfortunate call center temp who wanted to talk about a special offer for a “faithful customer”. Kafka couldn’t have written CanalDigital’s sales manual better, and there is some humor in this.

However hard I try, I can’t understand how a company like CanalDigital can get away with this. I’m also confused about the purpose of harassing customers until they can’t take it anymore. Or am I a very odd customer who doesn’t want unsolicited phone calls at random hours?

I got a partial explanation a bit later. A person from customer support called wanting to confirm my cancellation. I again tried, helpful as I am by nature, to explain the reason for my cancellation. He was not the least bit interested but instead offered me the explanation that the call center staff were not really employed by CanalDigital, meaning that CanalDigital couldn’t really be held responsible for their behavior. Maybe customer support isn’t employed by CanalDigital either.

I confirmed my cancellation.

Edit: There were more calls from CanalDigital. I complained again. They promised to take me off all the lists this time. Yesterday there was a new call. There was even a call to my daughter, who doesn’t have anything to do with this except that I used to own her phone subscription. I tried to take the complaint to an arbitration organization, Telekområdgivarna. It turned out that CanalDigital is not a member. Figures.

Published by Arto Jarvinen on 13 Sep 2014

## The return of StarUML

Way back I used StarUML as a quality system manual (process) modeling tool and then used a home-grown code generator to create web pages for WordPress representing the quality system manual. The diagrams on this page were for instance created in StarUML. StarUML was very easy to use and to customize, and it had excellent documentation too. Unfortunately the open source project was discontinued, which led me to try to create something of my own in Eclipse and later to use EPF.

Eclipse (the frameworks EMF, GMF and GEF) is a very general and versatile platform, which means that one needs to write a ton of boilerplate code to get something to happen, such as synchronizing a tree editor with a diagram editor. It’s also a patchwork with many similar but not identical concepts; there are for instance several different “Editing Domains” and “Command Stacks”, almost, but not quite, doing the same thing.

EPF is OK, but it’s a bit of a committee product and everything is hard-coded, such as which attributes each modeling element has. The attributes may or may not make sense in a given organization. I have ended up stripping away quite a lot of generated HTML to simplify things.

So I’m delighted to notice that StarUML is back! There seems to be a new version 2.0 coming up. Read more about it here.

Published by Arto Jarvinen on 10 Sep 2014

## Ordering the product backlog

Several posts in this blog discuss the order in which new features should be implemented. In this post I try to summarize some of my thinking so far. The following terminology will be used in this post:

• New proposed features are described in “change requests” that are in effect small documents or records in a database describing various aspects of the proposed feature.
• To realize a change request a number of “tasks” need to be completed. Some tasks are related directly to a change request whereas other tasks are more “global” (e.g. system testing).
• Change requests are organized in an ordered list, with the change requests to be completed first placed highest in the list. Change requests are always picked from the top of the list.
• For practical purposes, the tasks (task descriptors) derived from the change requests are also stored in the product backlog.

Where in the product backlog a new change request and its associated tasks should be put (and thus when it is going to be realized) depends on several things:

• The additional expected income we will get from the new feature. This is, among other things, a function of the customer benefit of the new feature and the certainty that we will be able to deliver the feature. We wish to deliver high-value features first, everything else being equal.
• The expected cost of developing the new feature. This depends on a large number of things, such as the novelty of the technology and the skills of the developers. We want to deliver features that are inexpensive to realize first, everything else being equal.
• The level of uncertainty of successful realization or attractiveness of the new feature.
• The dependencies among the features. Several functional features may for instance depend on whether we can achieve sufficient performance on the given hardware platform.

Let’s consider three different development scenarios:

#### “Web site”

We own a web site to which we add features more or less continuously from a potentially long product backlog. We have a dedicated team that implements and releases new features in an ongoing process where new change requests come in regularly and new features are released incrementally as they become available. The new features are mostly independent from each other and carry low uncertainties. A faulty or useless new feature can easily be removed from the web site without affecting other features.

In this scenario, change requests in the product backlog should be ordered strictly based on their estimated income / cost ratio; inexpensive features that bring in a lot of money should be realized first. Since uncertainties are assumed to be low, they can be largely ignored. Also, with low uncertainty, we don’t really need the overhead of a project organization with all the planning, tracking and risk management. A Kanban-style development process is quite sufficient. Since there are no dependencies, it is sufficient to look at each change request and compare its income / cost ratio with that of all the other change requests. See also this post and this post.
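As a minimal, illustrative sketch of this ordering rule (in Python; the field names and numbers are made up and not taken from any specific tool), a backlog sorted purely by income / cost ratio could look like this:

```python
from dataclasses import dataclass

@dataclass
class ChangeRequest:
    name: str
    expected_income: float   # additional income attributed to the feature
    expected_cost: float     # estimated cost of realizing the feature

def order_backlog(backlog):
    """Order change requests by income / cost ratio, highest ratio first."""
    return sorted(backlog, key=lambda cr: cr.expected_income / cr.expected_cost,
                  reverse=True)

backlog = order_backlog([
    ChangeRequest("Search filter", expected_income=50_000, expected_cost=10_000),
    ChangeRequest("Dark mode", expected_income=5_000, expected_cost=4_000),
    ChangeRequest("One-click checkout", expected_income=120_000, expected_cost=30_000),
])
```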

#### “New version of an embedded system”

We develop an embedded system for, say, medical imaging modalities in a series of projects, adding new features in each project. The product has existed for several years and has a long product backlog. A new project is started based on some signal from the market or based on a predetermined schedule. A new project usually has a “theme” that binds together the change requests. The new features are mostly independent, but there may be a few features that are critical to the success of the rest. One example is performance: if we discover that the available hardware resources are not sufficient, then we need to scale back on some of the other features.

This case is similar to the first one except that instead of delivering features in a continuous stream, we deliver them in batches, produced in projects. Uncertainty is assumed to be higher, so we need to consider uncertainty-weighted expected values of the income and the cost of each feature before ordering the features in income / cost order.

Ideally, independent high-uncertainty features should in this scenario be evaluated outside the regular product development stream, in a research project, concept development project or similar, so that the few high-uncertainty features don’t stall the whole project. High-uncertainty features that are necessary for some other, low-uncertainty features, on the other hand, need to be addressed early in the project; there is no use in developing a number of new features if we discover at the end that we can’t get, for instance, real-time video performance when this is a must-have requirement. The project tasks therefore need to be ordered so that we bring down uncertainty as fast (and as inexpensively) as possible. See also this post and this post.

#### “Innovation”

We develop a totally new and innovative product based on new technology or new science in general. The product backlog only covers the features for the first project. There are several make-or-break uncertainties within the selected set of features regarding the technology, the market, or perhaps some other area. This means that there is a significant risk that the project will fail.

In this case we assume that all or most features in the product backlog are required for the product to have any value at all (as this is the first version of the product); all features depend on all other features. Selecting change requests for the project is therefore relatively straightforward in this scenario. Instead we need to focus on the order in which we realize the features within the project so as to minimize the expected project cost. We need to order the project tasks so that tasks that give a large degree of uncertainty reduction per unit of cost come first in the project; if we fail, it is better to fail early than to fail late. See also this post.

Published by Arto Jarvinen on 09 May 2014

## Optimizing the value of a project using Stage-Gate

In an earlier post I wrote about risk-driven development. The idea that I proposed was to address project risks starting with the biggest risk first (to “fail early” if you have to fail). In this post I will try to elaborate on that rather vague statement and prove that a strategy along these lines indeed maximizes the value of the project.

The traditional way to evaluate projects is by using discounted cash flow valuation (DCF). The model assumes that you make a big one-off investment and then get a positive cash flow from that investment. This model works fine if we are investing in, say, a new paper machine, provided that we can foresee the future cash flow attributable to the new machine. The risk of the investment is accounted for by discounting the future cash flow with a discount factor that is a function of the risk level.

In new product development projects that use the Stage-Gate model we get to make incremental decisions and thus make the investment piecewise while learning along the way. (It’s not as easy to see how one can learn from investing in, say, a tenth of a paper machine.) If the market changes or we realize that we can’t overcome a technical challenge, then we can abort the whole project, minimizing our losses before we’ve spent all our money. This possibility to make incremental decisions increases the expected value of the project by decreasing the expected cost. This is the principle behind real options valuation.

Another way to see Stage-Gate is as a series of consecutive go/no-go experiments. Each successful experiment takes us one step closer to the full product. If an experiment fails, then we abort the project. All experiments must succeed in order for the whole project to succeed.

Let’s look more closely at the stages in a Stage-Gate (project) process: we do some work in each stage and, based on the results, we either abort the project with probability $1 - p_1$ or continue the project with probability $p_1$.

We abort the project if a fatal (for the project) risk has been realized during the stage. The probability to abort is therefore here equal to the probability of a fatal risk being realized.

The question discussed in many of the posts on this blog is: in what order should we do the experiments in order to maximize the value of the project? For this we need to introduce the concept of a decision tree and some associated entities.

*Figure: A number of consecutive experiments represented as a decision tree.*

In the decision tree we have a number of “events”, depicted as circles. These represent our experiments. Each experiment has a cost of $c_i$, a probability of succeeding of $p_i$, and a probability of failing of $1 - p_i$. The cost of failing an experiment is $C_i$, with $C_i$ and $c_i > 0$. Also

$$C_i = \sum_{n=1}^{i} c_n$$

which means that the cost of failing the project at the $i$:th experiment (by failing the $i$:th experiment) is the accumulated cost of all experiments up to and including the $i$:th.

We can “unfold” the value of the project step by step. Let’s look at the value $V_1$ of the project before the first experiment. It is simply the probability-weighted average of the values of the two branches.

$$V_1 = (1 - p_1)(-C_1) + p_1(-C_1 + V_2) = -C_1 + p_1 V_2$$

If the first experiment fails we will have a negative value $V = -C_1 = -c_1$ of the first experiment. Otherwise we get whatever comes down the other branch, which is $V_2$ less the cost $C_1$.

$V_2$, $V_3$, and $V_4$ can be written in the same format.

$$V_2 = -C_2 + p_2 V_3$$

$$V_3 = -C_3 + p_3 V_4$$

$$V_4 = -C_4 + p_4 I$$

$V_4$ is where it gets a little more interesting, as it is here that we actually have an opportunity to get some income $I$.

Untangling the recursion we get

$$V = V_1 = -C_1 - p_1 C_2 - p_1 p_2 C_3 - p_1 p_2 p_3 C_4 + p_1 p_2 p_3 p_4 I$$

The income $I$ is multiplied by all the probabilities, so for the income the order of the experiments doesn’t matter. Maximizing the value with respect to the order of the experiments is therefore equivalent to minimizing the cost (remember that all costs in the expressions here have positive values). So we need to minimize

$$C = C_1 + p_1 C_2 + p_1 p_2 C_3 + p_1 p_2 p_3 C_4$$

with respect to the order of the experiments with costs $c_j$ and associated probabilities for success $p_j$. It is also easy to guess from the above what the expression for the cost is with an arbitrary number of experiments. I choose intuition before induction for now though and will not try to prove it.
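To make the generalization concrete, here is a minimal, illustrative sketch (in Python, which this post otherwise doesn’t use) of the cost and value expressions for an arbitrary number of experiments, using the definitions of $c_i$, $p_i$ and $C_i$ above:

```python
from itertools import accumulate
from math import prod

def expected_cost(costs, probs):
    """C = C_1 + p_1*C_2 + p_1*p_2*C_3 + ... for any number of experiments,
    where C_i is the accumulated cost of experiments 1..i and p_i is the
    probability of succeeding with experiment i."""
    total, p_so_far = 0.0, 1.0
    for c_acc, p in zip(accumulate(costs), probs):
        total += p_so_far * c_acc
        p_so_far *= p
    return total

def project_value(costs, probs, income):
    """V = -C + (p_1 * ... * p_n) * I, i.e. the untangled recursion above."""
    return prod(probs) * income - expected_cost(costs, probs)
```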

What we want is a rule or a set of rules for sorting the experiments so as to minimize the expected cost. Let’s first assume that the order of the experiments $E_i$ as shown in the figure above minimizes the total cost $C$. Any permutation of the experiments would therefore increase the cost. From this we can deduce how the $c_i$ and $p_i$ must relate to each other.

Now trade places between the first and the second experiment. This should (by definition) give a higher expected cost. Expanding all $C_i$ into their constituent costs and setting up the inequality we get

$$c_1 + p_1(c_1 + c_2) + p_1 p_2 (c_1 + c_2 + c_3) + p_1 p_2 p_3 (c_1 + c_2 + c_3 + c_4) < c_2 + p_2(c_2 + c_1) + p_2 p_1 (c_2 + c_1 + c_3) + p_2 p_1 p_3 (c_2 + c_1 + c_3 + c_4)$$

After some juggling around we finally get

$$c_1 + p_1(c_1 + c_2) < c_2 + p_2(c_1 + c_2)$$

Switching any two adjacent experiments gives similar (but not entirely the same) inequalities

$$c_2 + p_2(c_1 + c_2 + c_3) < c_3 + p_3(c_1 + c_2 + c_3)$$

and

$$c_3 + p_3(c_1 + c_2 + c_3 + c_4) < c_4 + p_4(c_1 + c_2 + c_3 + c_4)$$

As long as all inequalities above are true, we will increase the cost by reversing the order of two adjacent experiments. I have not managed to prove that the pair-wise inequalities are a sufficient condition for a global minimum. Switching the first and the third experiment would for instance give the inequality

$$c_1 + p_1(c_1 + c_2) + p_1 p_2 (c_1 + c_2 + c_3) < c_3 + p_3(c_2 + c_3) + p_2 p_3 (c_1 + c_2 + c_3)$$

which doesn’t necessarily follow from the pair-wise inequalities above. It also remains to do the math for an arbitrary number of experiments, but that seems like the easier of the two remaining issues.

The expressions in the inequalities are easy enough to put in a spreadsheet to get a simple tool for ordering a number of experiments, though. I did just that, and the spreadsheet simulations show that the conditions above predict a global minimum for the admittedly small number of experiments I have tried. I therefore still dare to postulate that we wish to have a small $c_i$, in some way combined with a small $p_i$, in the early experiments. Remember that $p_i$ is the probability of succeeding with the experiment. A small probability of success means a large probability of failure, which means that we should do the uncertain and cheap experiments first.
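The same idea can be sketched in code instead of a spreadsheet (this is an illustrative Python rendering, not the spreadsheet itself): repeatedly swap adjacent experiments whenever the corresponding pairwise inequality above is violated, until no swap helps. Each swap strictly lowers the expected cost, so the loop terminates; but, as discussed above, this only guarantees a minimum with respect to adjacent swaps, not a proven global minimum.

```python
from itertools import accumulate

def order_by_adjacent_swaps(experiments):
    """experiments: list of (cost, probability_of_success) tuples.

    Swap adjacent experiments that violate the pairwise condition
        c_k + p_k * S  <  c_(k+1) + p_(k+1) * S,
    where S is the accumulated cost up to and including position k+1."""
    exps = list(experiments)
    swapped = True
    while swapped:
        swapped = False
        # Prefix sums of the costs in the current order; note that swapping
        # positions k and k+1 does not change the sum through position k+1.
        cum = list(accumulate(c for c, _ in exps))
        for k in range(len(exps) - 1):
            (c1, p1), (c2, p2) = exps[k], exps[k + 1]
            s = cum[k + 1]
            if c1 + p1 * s > c2 + p2 * s:   # condition violated -> swap
                exps[k], exps[k + 1] = exps[k + 1], exps[k]
                swapped = True
    return exps
```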

The spreadsheet simulation I did gives, for instance, that if we have a series of four experiments with costs 20, 30, 40, and 20 and corresponding probabilities of success of 0.4, 0.6, 0.8, and 0.9, then we should order the experiments in the order 1, 2, 4, 3, whereby we get an expected cost of 80.56. The sum of the costs of all experiments is 110, so by doing the experiments one at a time and aborting on failure we can bring down our expected cost by 27%. With many other random ways to order the experiments we will only decrease our expected cost by a few percent.
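For a set of experiments this small it is also easy to brute-force all orderings. The following illustrative sketch encodes the cost expression above and should reproduce the order 1, 2, 4, 3 and the expected cost 80.56:

```python
from itertools import accumulate, permutations

def expected_cost(costs, probs):
    """C = C_1 + p_1*C_2 + p_1*p_2*C_3 + ...  (accumulated costs, success probabilities)."""
    total, p_so_far = 0.0, 1.0
    for c_acc, p in zip(accumulate(costs), probs):
        total += p_so_far * c_acc
        p_so_far *= p
    return total

# (cost, probability of success) for experiments 1..4 from the example above
experiments = {1: (20, 0.4), 2: (30, 0.6), 3: (40, 0.8), 4: (20, 0.9)}

best_order = min(permutations(experiments),
                 key=lambda order: expected_cost([experiments[i][0] for i in order],
                                                 [experiments[i][1] for i in order]))
best_cost = expected_cost([experiments[i][0] for i in best_order],
                          [experiments[i][1] for i in best_order])
print(best_order, round(best_cost, 2))   # should print (1, 2, 4, 3) 80.56
```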

In conclusion: the riskier the project, the more we will gain (a) by using some kind of Stage-Gate model with a decision to continue or to abort after each experiment (or group of experiments) and (b) by ordering the experiments so that those that give the most uncertainty reduction for the money come first.

When I started this post I was hoping that either the proof would be pretty easy (there is after all no esoteric mathematics involved) or that the problem would fall into a class of well-known problems, such as shortest path or traveling salesman, that already have solutions. But so far, no luck. I will keep on looking and if you, dear reader, have some ideas, please let me know. Until then, I’m going to trust my hunch and my incomplete proof.

Published by Arto Jarvinen on 02 May 2014

## Bring in the just machines please!

As hinted in an earlier post, human beings do not exactly behave in a consistent and measurable way when it comes to acting upon risk. I usually consider evolution to be rational, and therefore people to be rational in some paleolithic sense, but sometimes I wonder. In a book published only (?) on the Internet, Aswath Damodaran summarized a number of interesting facts about our behavior when exposed to risk:

• Individuals are generally risk averse, i.e., they don’t act on expected returns only, and are more so when the stakes are large than when they are small.
• There are big differences in risk aversion across the population and significant differences across sub-groups.
• Risk aversion for a population varies with time.
• Individuals are far more affected by losses than equivalent gains.
• Individuals become more risk averse when they get frequent feedback on the results of their activity.
• The choices that people make (and the risk aversion they manifest) when presented with risky choices or gambles can depend upon how the choice is presented (framing).
• Individuals tend to be much more willing to take risks with what they consider “found money” than with money that they have earned (house money effect).
• There are two scenarios where risk aversion seems to decrease and even be replaced by risk seeking. One is when individuals are offered the chance of making an extremely large sum with a very small probability of success (long shot bias). The other is when individuals who have lost money are presented with choices that allow them to make their money back (break even effect).
• When faced with risky choices, whether in experiments or game shows, individuals often make mistakes in assessing the probabilities of outcomes, overestimating the likelihood of success, and this problem gets worse as the choices become more complex.

The reason I’m reading the book is that it gives an account of real options as a way of reasoning about project investment decisions, the theme of some earlier posts. I will return to real options later.

The book’s author at one point speculates whether it wouldn’t be better to have computers make our investment decisions, given the inconsistencies of human decision makers; as it says in the lyrics of the song I.G.Y. by Donald Fagen:

A just machine to make big decisions
Programmed by fellows with compassion and vision
We’ll be clean when their work is done
We’ll be eternally free yes and eternally young

Published by Arto Jarvinen on 30 Apr 2014

## I wasn’t first – this time either

Having Googled around a little bit more I realize that what I wrote two posts down wasn’t exactly new thinking. Similar ideas were described by Robert C. Cooper in this article. I didn’t read the paper before I wrote my post, I swear 🙂

Even if I didn’t earn the Nobel Prize in management this time either, I’m happy to see my ideas corroborated.

Published by Arto Jarvinen on 21 Apr 2014

## The discovery backlog

I have realized that engineers use words differently from other people. When an engineer says “problem” he or she often doesn’t mean anything negative (except in “Houston, we have a problem”). Problems are engineers’ raison d’être; engineers thrive on solving problems. When the problems get tough, the tough engineers get going.

The same goes for the word “risk”. We have “risk lists” in our projects. We do “risk mitigation”. There are entire companies filled with brilliant engineers doing nothing but “risk management”.

Using the words “problem” and “risk” in some other contexts, like with the sales team, may not always be a good idea though. The lone engineer may come across as a downer, an overly pessimistic person who’s not willing to “see the opportunities instead of the problems” (a popular cliché, at least in Sweden).

So I realize I need a better word than the “risk backlog” I just invented in my previous post. What about “discovery backlog”? We don’t have to call the items “risks”; they are just things that we currently don’t know. Like whether anybody is going to buy our product or whether the quantum drive will really work as intended. We need to discover those things sooner or later. I can’t really wrap my brain around “opportunity backlog”.

Published by Arto Jarvinen on 19 Apr 2014

## Risk-driven development

Several project management models include provisions to manage risk. Risk is here defined as the probability of an adverse event times the quantified consequence of that adverse event. The IBM Rational Unified Process recommends addressing risk while planning the iterations of what RUP calls the Elaboration phase. Barry Boehm’s Spiral Model is guided by risk considerations. So are the various versions of the Stage-Gate model. The Scrum literature, while mentioning risk as one of the prioritization principles for the product backlog, mostly leaves it to the judgment of the product owner to make a good prioritization.

We can intuitively understand that creating something entirely novel, such as a car that runs 10 000 km without refueling, is more risky than developing next year’s model of an existing car with only some cosmetic changes. The risk in new product development is usually not evenly distributed over all the tasks in the development project. Developing the engine of the ultra-long-range car (ULRC) carries far more risk than developing the entertainment system or the suspension.

Risk-driven development means that we want to eliminate as much risk as we can, as fast as possible, in any way possible; we don’t want to end up having invested a large amount of money and reputation in a project that after all that investment still has a high probability of failure. We also have to take into account the opportunity cost, the gain we would have got if we had invested the money in another project.

As an illustration, assume that the component carrying the biggest uncertainty in a project (like the ULRC engine) is left as the last to be developed; we would then end up having invested a lot of money in the project without knowing whether the product will ever work. The cost of the risk being realized would be the opportunity cost plus the total accrued project cost up to the time of the ultimate failure.

We can also look at it from a capital budgeting point of view. When selecting investment targets, we always wish to match return and risk. For a particular level of risk we expect a certain level of (expected) return. Assuming that the income from the project is fixed (as long as it succeeds), the risk level at which we invest our next unit of money in the project should guide our willingness to make that investment; the lower the risk, the more attractive the investment. I will try to elaborate on this in later posts.

When developing an ULRC it is thus probably not wise to start with specifying and designing the entertainment system or the suspension. Neither does a comprehensive and approved requirements specification help much to lower the risk in this particular case. The only novel requirement may be the 10 000 km range, and that’s easy enough to understand and to write down. Instead we should, as already hinted above, focus on designing and building prototypes of the long-range engine and its related parts.

There are of course variations to the risk-driven development theme. In some cases we need to build some low-risk parts first to be able to even start with the high-risk parts. For instance, we may need to build the rest of the powertrain or at least a test bench simulating the rest of the powertrain to be able to carry out tests with the new engine.

One framework for risk-driven development is, as mentioned in the introduction, the Stage-Gate process consisting of phases (stages) and tollgates. The tollgates are decision points at which the future execution of the project is decided based on the project’s risk level so far. If we at a certain tollgate think the risk is too high for a substantial new investment, e.g. for ramping up development or starting an expensive marketing campaign, then we need to find ways to lower the risk further before we make the additional investment. If we can’t find such ways, then we may need to abort the project altogether.

A problem with the Stage-Gate model is that it is often confused with a waterfall development model, which, for example, mandates that the product requirements are developed, and preferably frozen and approved, at the beginning of the project. Indeed, in many quality management systems the tollgate criteria are defined in terms of produced documents, and those criteria are the same for all projects.

The Scrum process doesn’t have formal tollgates. All development in Scrum is made in sprints (similar to iterations). The progress of the project is checked after each sprint and adjustments are made to both the plan and the process as needed. Scrum does not mandate any particular order in which the product should be developed but recommends that potentially shippable product increments are delivered as a result of each sprint. (This usually works for software but maybe not for a car.)

To conclude, here are a couple of ideas that should make the Scrum and the Stage-Gate processes more effective together:

• Rename the risk list that exists in most project models to “risk backlog” and think of it in the same way as the product backlog in Scrum. This implies an order in which the risks shall be addressed and should be used to plan the project (iterations, sprints, whatever). Risk-driven activities include developing functionality, interviewing customers, building prototypes, doing analyses, and so on. (A minimal sketch of such a backlog follows this list.)
• Use the risk backlog as the main input to the tollgate decision criteria in the Stage-Gate model. The tollgate criteria should be allowed to vary from project to project and should be concerned with the biggest remaining risks in the project (including risks such as there being no market for the product we are developing). The fixed lists of documents that are often used as tollgate criteria do not fit every project, since they do not match the risk profile of every project. It is after all risk that we wish to assess at the tollgate, and the risk backlog, including any more detailed material on each risk, is the main indicator of project risk.
• Synchronize any gate decision with the end of a sprint and make sure that whatever is required for the gate decision is produced in the last sprint(s).
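As a rough, illustrative sketch of what such a risk backlog could look like in code (the item fields and numbers are made up, and the ordering assumes, for simplicity, that the associated activity fully resolves each risk), items are ordered so that those giving the most risk reduction per unit of cost come first:

```python
from dataclasses import dataclass

@dataclass
class RiskItem:
    description: str
    probability: float       # probability that the risk is realized (0..1)
    consequence: float       # quantified consequence if the risk is realized
    activity_cost: float     # cost of the activity that addresses the risk

def order_risk_backlog(backlog):
    """Put the items that give the most risk reduction per unit of cost first."""
    return sorted(backlog,
                  key=lambda r: (r.probability * r.consequence) / r.activity_cost,
                  reverse=True)

risk_backlog = order_risk_backlog([
    RiskItem("Engine cannot reach 10 000 km range", 0.5, 2_000_000, 150_000),
    RiskItem("No market for the product", 0.3, 3_000_000, 50_000),
    RiskItem("Entertainment system needs UI rework", 0.2, 100_000, 20_000),
])
```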

I have given up my graphical editor (GMF) project a second time. The reason is that although it is rather simple to get something to work, it’s extremely difficult to get everything to work. The main reason is that the different parts that you need for creating a complete graphical editor seem to be created at different times by different people. They use the same design patterns but different class libraries. The different frameworks have concepts such as Command, Editing Domain, Undo Context, but they are not implemented with the same classes. To be able to get them to work together, a lot of “wrapping” of classes and handling of several instances of almost the same class is necessary and the end result becomes a mess. Too much of a mess to keep in memory if not working with it on a daily basis.