## The devil in the detail

When writing a quality system manual there is (or at least should be) a need to talk about real-world objects. Let’s call them primary objects for lack of a better word. Examples are a product or a (software) bug (whether a bug is an “object” is of course debatable, but in the abstract world of software it is). At other times we need to refer to some kind of representation of such a primary object, like a ticket in a bug tracking system (representing a real-world bug). Let’s call these meta-objects, as they contain information about primary objects. Sometimes this causes no problems, as when we have a product and a product specification describing the product. (The product specification is of course also a real-world object, but it is not “primary” in the sense that it has no justification without the primary object it describes.) In this case the objects have different names (“product” and “product specification”) and will not be confused.

Sometimes it is very convenient to use the same name for the primary object and its meta-object, though. When modeling a system in a UML tool, for instance, the modeler may describe real-world, primary use cases with meta-objects of the type use case in the tool. When we refer to a “use case” in the quality system manual of such an organization, we don’t know whether we mean the primary use case or its representation in the tool, the meta-use case. Many times this ambiguity goes unnoticed: the text may apply reasonably well to both objects, or we can infer from the context which object is meant. But sometimes the ambiguity is larger. Let’s say that we are developing a medical device and wish to manage the clinical risks potentially caused by the device, and that we have a tool for documenting and tracking individual clinical risks. When the quality system manual says “find all clinical risks”, we can’t be sure whether it refers to the clinical risks already documented as records in the tool or to previously unknown primary clinical risks. The editor of the quality system manual will also invariably go astray from time to time and describe the primary object when a description of the meta-object was called for, and sometimes vice versa. A primary use case is after all a rather different animal than the meta-use case in the tool.

It might be a little clunky to come up with different names for the primary objects and the meta-objects. What would we, for instance, call the use case meta-objects? “Meta-use case” doesn’t roll easily off the tongue. I have therefore adopted a convention according to which all primary objects are denoted with all lower-case letters and the meta-objects with capitalized names.
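To make the distinction concrete, here is a small, purely hypothetical sketch in Python (the class and field names are mine, not from any real tool): the capitalized ClinicalRisk is the meta-object, a record about a primary clinical risk.

```python
from dataclasses import dataclass

# Meta-object (capitalized): a record *about* a real-world clinical risk
# in a tracking tool. The primary clinical risk itself exists whether or
# not anyone has documented it.
@dataclass
class ClinicalRisk:
    risk_id: str            # unique identifier of the record
    description: str        # describes the primary clinical risk
    mitigated: bool = False

# "Find all Clinical Risks" (capitalized) unambiguously means querying
# the documented records; "find all clinical risks" could also mean
# hunting for as-yet-unknown hazards, which no query can do.
register = [
    ClinicalRisk("CR-001", "Overdose due to pump malfunction"),
    ClinicalRisk("CR-002", "Infection from a non-sterile part", mitigated=True),
]
open_risks = [r.risk_id for r in register if not r.mitigated]
print(open_risks)  # ['CR-001']
```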

This may not be a Nobel Prize-winning insight, but I’ve seen confusion about this many times, and have been confused myself, so I still wanted to share it.

## Running Eclipse Process Framework in Ubuntu 12.04 LTS 32-bit

I’m using EPF to create the quality system manual of the medical device company I’m working for. While we are using Windows at the company, I also wanted to be able to use Ubuntu when working from home. Getting it to work on Ubuntu was not trivial: plain Eclipse seems to run out of the box, but EPF uses editor components that aren’t installed by default in 12.04, and the packages are also hard to find.

What worked for me was to install xulrunner-1.9.2 from the Mozilla site. According to the EPF docs, not all versions of this library will work.

I installed as instructed on the Mozilla page. Don’t forget to run:

```
sudo ./xulrunner --register-global
```

I then also added the following lines to .bashrc:

```
export MOZILLA_FIVE_HOME=/opt/xulrunner
export LD_LIBRARY_PATH=$MOZILLA_FIVE_HOME
```

and reread the file by running:

```
bash
```

I then started EPF from the terminal thus:

```
./epf -clean
```

I still get error messages about failed assertions, but at least the editors in EPF now seem to work.
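As an alternative to editing .bashrc, the steps above can be collected into a small launch script. This is just a sketch under the same assumptions (xulrunner unpacked to /opt/xulrunner, the script placed in the EPF installation directory next to the epf executable):

```
#!/bin/sh
# Hypothetical launch script for EPF: sets the xulrunner variables for
# this invocation only, so .bashrc does not need to be touched.
MOZILLA_FIVE_HOME=/opt/xulrunner
# Prepend to any existing LD_LIBRARY_PATH instead of overwriting it.
LD_LIBRARY_PATH="$MOZILLA_FIVE_HOME${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
export MOZILLA_FIVE_HOME LD_LIBRARY_PATH
exec ./epf -clean "$@"
```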

I also tried to run EPF on a 64-bit Ubuntu but the application wouldn’t even start, so I’ll settle for running it in a virtual 32-bit machine (hosted on a 64-bit machine). (I need the 32-bit machine anyway for my Internet banking application, which runs neatly on Linux but only on 32-bit machines.)

I upgraded my Eclipse Process Framework Composer application from version 1.5.0 to 1.5.1.1. The upgrade seems quite uneventful so far. The only thing I had to do manually was to copy my tweaked layouts to the new structure. Below is a list of the files I have modified to get the colors, fonts and layouts that I want (mostly for my own reference). I find the original color scheme a little bit daring.

```
...epf-composer/plugins/org.eclipse.epf.publishing_.../xsl/bookmark.xsl
...epf-composer/plugins/org.eclipse.epf.publishing_.../xsl/index.xsl
...epf-composer/plugins/org.eclipse.epf.publishing_.../xsl/PublishedBookmarks.xsl
...epf-composer/plugins/org.eclipse.epf.publishing_.../xsl/topnav.xsl
...epf-composer/plugins/org.eclipse.epf.publish.layout_.../layout/css/default.css
...epf-composer/plugins/org.eclipse.epf.publish.layout_.../layout/xsl/activity_wbs.xsl
...epf-composer/plugins/org.eclipse.epf.publish.layout_.../layout/xsl/overview.xsl
...epf-composer/plugins/org.eclipse.epf.publishing_.../docroot/stylesheets/common.css
```


## How to create and manage a useful operations manual

In the very first post on this blog I talked about operations manuals, i.e. descriptions of the prescribed or recommended way of working in an engineering organization. I fully acknowledge that very few people find this topic particularly exciting. Some knowledge management folks I talk to consider operations manuals a failed concept to start with – it’s all in the heads of the people, they say. Some slide-ware-wielding types, on the other hand, find them utterly boring and way too detailed.

I don’t subscribe to the failed-concept view. There is a need for personalized knowledge and for codified knowledge [1]. Although, according to [1], organizations should focus on one or the other, there are very few organizations that would not benefit from some codified knowledge in the form of an operations manual. Engineering organizations with many quite complex but to some extent repetitive methods would definitely benefit.

The boring part could well be true, but then again, you could say that of many other phenomena in an organization. I don’t get too excited about expense reports or the budget process, but I still realize I need to put on a happy face. And as far as I know there is no clause in ISO 9001 or any other standard stating that the prose of an operations manual shall be dry and dull.

Still, all too many operations manuals are collecting dust (physical or digital). There are reasons for this beyond the lack of faith in the concept and the disdain for dull details. Here are a few common ones:

*Consider using a formal method such as the one implemented in the Eclipse Process Framework for defining the operations manual.*

• The people who create the operations manual don’t know the operations well enough.
• It is not clear what the operations manual shall be used for.
• Nobody is accountable for the operations manual.

I’ll say a few more words about these three reasons below.

#### The people who create the operations manual don’t know the operations well enough

Writing a good operations manual requires a rare set of skills. You need to be good with words, you need to know quality management, which is not a small area of study, and you need deep knowledge of the operations and the business goals of the particular organization. Some humor to spice up the prose and artistic talent for making good-looking illustrations would also come in handy.

In large organizations the operations manuals are often written by people not directly involved in the day-to-day operations of the company and therefore not necessarily familiar with the details of the operations. These people can write good operations manuals provided that they do a lot of “floor-walking” and are willing to learn the nuts and bolts of the operations. Manuals written at too abstract or too generic a level will invariably collect piles of the aforementioned dust.

#### It is not clear what the operations manual shall be used for

I often hear that the operations manual should exist so that “people can consult it to find out how to carry out their tasks”. This undoubtedly sounds right but requires some clarification. Since every project and every department is unique in some ways, one process (perhaps the only one in the operations manual) will not fit all.

My experience is that a layer of adaptation is needed between the operations manual and a project or between the operations manual and a particular organizational unit. A project may for good reasons want to skip a stage in the standard project management process. A department may need a role that is unique to the department or maybe even temporary. For projects a Project Specification usually fills the gap. The Project Specification points out the roles, processes, tools etc used in the project. It may refer to the operations manual or it may define some project-unique stuff. For each organizational unit a similar Department Specification can be created.

If we allow that additional adaptation layer, we can use the operations manual as a rather formal “blueprint” for the organization; a “reference manual” instead of a “user guide”. It may (and should) still contain all the useful checklists, templates, method descriptions and so on, but to find the exact set of such checklists, templates, etc. for a particular project or department, the Project Specification or the Department Specification should be the first source of information – the “user guide”. (Or you could just ask your project manager or line manager.)

#### Nobody is accountable for the operations manual

Very few useful things get done in an organization without accountability. The accountability should lie with somebody who has the necessary skills and resources (sorry for sometimes stating the obvious). When divided among the line managers, accountability for the operations manual often gets spread too thin. Line managers are usually occupied by day-to-day prioritization and problem solving and don’t have the bandwidth to engage in the details of process design, role definitions and the like (just as they usually don’t get deeply involved in detailed product design).

One solution is to create a specific management-level position that focuses solely on operational excellence. This role would be responsible for keeping the engineering organization effective and efficient, which is naturally tied to the operations manual, as it defines how to achieve that effectiveness and efficiency. Of course any existing “quality department” and its manager should be the first choice for this role, as long as it is ensured that the department focuses on operational excellence and has the will, the skills and the resources to work very closely with the people in the trenches.

[1] Hansen, M. T., Nohria, N., and Tierney, T. “What’s Your Strategy for Managing Knowledge?” Harvard Business Review, March–April 1999.

## A look at SPEM and the Eclipse Process Framework

As you might have seen, there are a few pages on this site about operations manuals and process modeling, i.e. about describing the way of working in an organization in a semi-formal way. I have implemented a custom meta-model for process modeling [5] in Rational Rose and used it to describe the whole operations manual for a medical device company. The manual itself was generated in HTML format from the UML model in Rose. Since Rose is much too expensive for private use, I have also played around with StarUML, an open source UML editor, and made a translator that generates contents for a WordPress (blog) site from a process model in StarUML. StarUML is a very powerful and well-documented tool with a good API. Unfortunately it’s a dead project, so continuing that line of development doesn’t seem too attractive.

*The EPF editor and the resulting web site, with tweaked styles.*

In the meantime, a standard meta-model for process modeling, SPEM (Software Process Engineering Meta-Model) [1], has been adopted by the Object Management Group. A variant of the SPEM meta-model has been implemented in the Eclipse Process Framework (EPF) Composer, an open source tool based on Eclipse [2]. I write “variant” because I haven’t had the time to look at the exact mapping between SPEM and the implementation, but it is not 1:1. In contrast to the StarUML project, the Eclipse project is one of the most active open source projects on the net. The basic platform seems very stable and a large number of other adaptations are also available, e.g. for UML2.

The purpose of EPF is “to provide an extensible framework and exemplary tools for software process engineering – method and process authoring, library management, configuring and publishing a process.”

SPEM is based on the more or less formal meta-model used for describing the original Rational Unified Process in its early web version (the Rational heritage is also hinted by the meta-model documentation that is generated from Rose and some of the code in the web site generated from the EPF model).

I have been using SPEM and EPF for a few days now and have compared them to my earlier experiences from tools and meta-models for process modeling. These are some of my tentative conclusions:

• The documentation of both SPEM and EPF is excellent including a good tutorial. Very few open source projects can claim such good user guidance.
• The Eclipse tool itself seems stable and runs equally well on Windows XP and Ubuntu 8.04 (the Ubuntu platform requires some tweaks at installation though, see [4]). I particularly appreciate the Linux compatibility.
• The tool has an attractive user interface.
• I’m happy that the modeling element Outcome has been added to the meta-model. It took me a while to realize that such a modeling element was needed to represent intangible results like an “informed customer”. (It corresponds roughly to the Objective modeling element in my meta-model.)
• The Category concept is very useful, particularly for defining various structures for publishing the model.
• Importing and exporting process models work fine.
• There are provisions for managing the models in ClearCase and CVS. I haven’t investigated to what extent Subclipse can be used with EPF.

There are in general too many modeling elements in SPEM. I prefer my meta-models simple and conceptually lean. That was one of my goals with my own meta-model [5]. More specifically:

• There are numerous different modeling elements for an activity (Step, Task, Process, Capability Pattern, Discipline, Activity, Task Descriptor, Phase) where I have only two (Activity and Workflow) that can be hierarchically structured. It is hard to remember the relationship between for instance a Task and a Task Descriptor (~inheritance) or to understand the conceptual difference between an Activity and a Task Descriptor (hierarchy).
• There is a multitude of classes for various types of guidance. The only advantage that I really see with this is that one doesn’t have to write a title for the guidance since the title is given by the modeling element type.
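To illustrate the leaner alternative mentioned above, here is a minimal sketch in Python (hypothetical names and structure, not the actual meta-model from [5]): just two hierarchically composable activity-like elements instead of SPEM’s many.

```python
from dataclasses import dataclass, field

# A minimal two-element sketch: an Activity is atomic, a Workflow is a
# container that can hold Activities and nested Workflows. All the
# structuring that SPEM spreads over Step/Task/Process/Phase etc. is
# expressed here through one hierarchy.
@dataclass
class Activity:
    name: str

@dataclass
class Workflow:
    name: str
    steps: list = field(default_factory=list)  # Activities or nested Workflows

    def flatten(self):
        """All Activities in this Workflow, depth-first."""
        out = []
        for step in self.steps:
            out.extend(step.flatten() if isinstance(step, Workflow) else [step])
        return out

release = Workflow("Release", [
    Activity("Code review"),
    Workflow("Test", [Activity("Unit test"), Activity("System test")]),
])
print([a.name for a in release.flatten()])
# ['Code review', 'Unit test', 'System test']
```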

Some other observations in the negative territory have to do with the fact that the implementation of SPEM in EPF is rather hard-wired:

• There is no modeling element for Event which I have used quite heavily to describe the events that for instance trigger a change request or a customer support ticket. And I don’t see any way to add it either.
• I would have preferred free-format class diagrams through which to define the relationships between modeling elements, at least as a complement to the form-based entry mode. It is nice that diagrams are created automatically from the data, but still.
• There is no way to add custom attributes to modeling elements. I have often needed this in my models to describe attributes that are there in the real implementation of the artifact (in a document header or in a field in a database object). Some examples are Approved by (the formal approver of the document), Valid from (the date from which a document is valid), ID (the unique identifier of the artifact) and so on.
• There are no true inheritance mechanisms through which new modeling elements (“classes”) can be added to the language. I have often found it useful to design inheritance hierarchies, for instance when specifying all artifact types used in an organization where these artifacts have several different levels of formality (for an example, see [5]). Instead there is a, to my mind somewhat ad-hoc-ish, mechanism for specializing Roles, Tasks and Artifacts from the Method library into the Process descriptions.

Last but not least, a couple of comments on visual appearance:

• Some of the generated views are very cluttered, for instance some that belong to the Phase modeling element.
• Tweaking the style of the published web site is very difficult, as the style definitions are spread across at least five different files. Being able to modify the style is important for industrial users; they will invariably want to use their own graphical profile.

Despite the above critique, I will try out EPF in an industrial setting in the near future. It seems easy enough to grasp and use, and gives a much better framework for structuring the processes than e.g. unstructured Word documents. Maybe I’ll later find the inspiration to implement my own meta-model in the Eclipse Domain Specific Language Toolkit.

## Describing and disseminating know-how

Research on the human brain and behavior strongly suggests that most of the information processing we do in our brains, including a substantial amount of decision making, happens without our being aware of it (in the sense that we can communicate such awareness). Libet’s experiment (see “Libet’s short delay”), for instance, shows that a readiness potential builds up in the brain up to half a second before we become aware of deciding to act.

Other, unrelated research suggests that a lot of the knowledge we use when performing familiar tasks is tacit knowledge, i.e. knowledge in a format that isn’t easily described or communicated. So not only do we make subconscious decisions, we base those decisions on subconscious knowledge when we are really good at what we are doing. The natural development of competence has been described by several authors (the original source seems difficult to find) with the model in the exhibit below.

*The stages of competence development.*

When discussing knowledge a distinction is often made between “knowing that” and “knowing how”. We know that 2 + 2 = 4 and we know how to add numbers. (In my native languages Finnish and Swedish there are actually, in contrast to common English, different words for these two types of knowledge. In Finnish we say “tietää” and “osata”, in Swedish “veta” and “kunna” – probably corresponding to the old English words “wit” and “ken”.)

The whole purpose of an operations manual is to describe and facilitate the build-up of know-how in the organization. The question is: how can we describe and disseminate know-how that in its most evolved form is unconscious? This is another way of asking the questions put forth in an earlier post.

I still don’t have satisfactory answers to those questions, but I’m convinced that we have to take into account how the human brain is wired.

## Is there such a thing as a useful operations manual?

*Now to something completely different.*

I started out my career designing image processing algorithms and hardware. At one point I rather inexplicably diverted into management consulting and took an interest in issues such as operational effectiveness, cycle time reduction, and quality management. A while after my defection, an old colleague of mine, who was at that time working on his PhD in image processing, asked me in an email what I was doing nowadays. Having tried to explain what I was doing, I got a reply with only one word in it – “perverted”.

In some of my darker moments as a management consultant, especially when working with quality management systems, I tend to agree with my former colleague. The quality management system manual, or using my preferred term, operations manual (OM), is too often that Dilbertian “big honking binder” that everybody treats as a “dead raccoon”.

All is not gloom though. There are a number of compelling reasons to create and use an OM and there are success stories out there:

• An OM is a good place to store and make available good practices, a place for storing tips and tricks that have proven to work in each particular organization.
• An OM can be used as a basis for discussions and deliberations about good ways of working within the organization.
• An OM is the basis for training and training material regarding the organization’s way of working.
• An OM may be required as evidence that you have the practices required by various laws and regulations in place. Such specific documented practices are required, for instance, of manufacturers of medical devices and aircraft.
• An OM with common practices facilitates collaboration. If everybody in a global organization agrees on the typical steps in the project development process, then project managers can get a fair picture of the progress of a project and subproject teams can coordinate their work.

The benefits of the right number of good, required, and common practices are hard to dismiss. Try this corporate policy from hell to convince yourself: “It is our policy to throw away and forget past experiences, to ignore the law, and to let every project reinvent their ways of working from scratch.” If you really believe that the above policy is a good one, then you can stop reading here.

Those of you who are still with me may now ask (1) what a really useful OM would look like, (2) what it should contain so as not to be treated as a dead raccoon by the staff, and (3) how its management should be organized so that it continues to reflect the best practices of the staff who actually perform the work. I will attempt to address these questions in later posts.