Published by Arto Jarvinen on 19 Apr 2014

The risk backlog

Several project management models include provisions for managing risk. Risk is here defined as the probability of an adverse event times the quantified consequence of that event; a 10 % probability of, say, a €1,000,000 redesign thus amounts to a risk of €100,000. The IBM Rational Unified Process recommends addressing risk while planning the iterations of what RUP calls the Elaboration phase. Barry Boehm’s Spiral Model is guided by risk considerations, as are the various versions of the Stage-Gate model. The Scrum literature, while mentioning risk as one of the prioritization principles for the product backlog, mostly leaves it to the judgment of the product owner to arrive at a good prioritization.

We can intuitively understand that creating something entirely novel, such as a car that runs 10 000 km without refueling, is riskier than developing next year’s model of an existing car with only some cosmetic changes. The risk in new product development is usually not evenly distributed across all parts of the new product. The engine of the ultra-long-range car (ULRC) carries far more risk than the entertainment system or the suspension.

Risk-driven development basically means that we want to eliminate as much risk as we can, as fast as possible, in any way possible; we don’t want to end up having invested a large amount of money and reputation in a project that, after all that investment, still has a high probability of failure. To illustrate this another way: if the biggest uncertainty in a project (like the ULRC engine) is left as the last component to be developed, then the stakes rise with the accrued project cost. We would end up having invested a lot of money in the project without yet knowing whether the product will ever work.

When developing a ULRC it is thus probably not wise to start by specifying and designing the entertainment system or the suspension. Nor does a comprehensive and approved requirements specification help much to lower the risk in this particular case. The only novel requirement may be the 10 000 km range, and that is easy enough to understand and to write down. Instead we should, as already hinted above, focus on designing and building prototypes of the long-range engine and its related parts.

There are of course variations to the risk-driven development theme. In some cases we need to build some low-risk parts first to be able to even start with the high-risk parts. For instance, we may need to build the rest of the powertrain or at least a test bench simulating the rest of the powertrain to be able to carry out tests with the new engine.

One framework for risk-driven development is, as mentioned in the introduction, the Stage-Gate process, consisting of phases (stages) and tollgates. When combined with the IBM Rational Unified Process, each phase may contain a number of iterations. The tollgates are decision points at which the future execution of the project is decided based on the project’s risk level so far. If we at a certain tollgate judge the risk to be too high for a substantial new investment, e.g. for ramping up development, then we need to find ways to lower the risk further before making that additional investment. If we can’t find such ways, we may need to abort the project altogether.

A problem with the Stage-Gate model is that it is often confused with a waterfall development model, which, for example, mandates that the product requirements are developed, and preferably frozen and approved, at the beginning of the project. Indeed, the tollgate criteria are often defined in terms of produced documents, and those criteria are the same for all projects.

The Scrum process doesn’t have formal tollgates. All development in Scrum is done in sprints (similar to iterations). The progress of the project is checked after each sprint and adjustments are made to both the plan and the process as needed. Scrum does not mandate any particular order in which the product should be developed but recommends that a potentially shippable product increment is delivered as the result of each sprint. (This may work for software but maybe not for a car.)

To conclude, here are a couple of ideas that may make both Scrum and the Stage-Gate processes more effective:

  • Rename the risk list that exists in most project models to risk backlog and think of it in the same way as the product backlog in Scrum. This implies an order in which the risks shall be addressed and should be used to plan the project (iterations, sprints, whatever); see the sketch after this list. Risk-driven activities include developing functionality, interviewing customers, building prototypes, doing analyses, and so on.
  • Use the risk backlog, not a fixed set of documents, as the main artifact when making tollgate decisions in the Stage-Gate model. It is after all risk that we wish to assess at the tollgate and the risk backlog, including the status of every risk, is the main indicator of project risk.
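
To make the ordering concrete, here is a minimal sketch of a risk backlog in Java. It is only an illustration; all names and numbers are invented, and in practice the backlog would live in a planning tool rather than in code. The point is the sort order: the risks with the highest exposure (probability times consequence) float to the top, just like the most valuable items in a product backlog.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// A risk backlog item; exposure = probability times quantified consequence.
record Risk(String description, double probability, double consequenceCost) {
	double exposure() {
		return probability * consequenceCost;
	}
}

public class RiskBacklog {
	public static void main(String[] args) {
		List<Risk> backlog = new ArrayList<>(List.of(
				new Risk("ULRC engine never reaches 10 000 km range", 0.4, 5_000_000),
				new Risk("Entertainment system UI needs rework", 0.2, 100_000),
				new Risk("Supplier of suspension parts is late", 0.1, 300_000)));

		// Address the biggest exposures first; this ordering drives the
		// planning of iterations, sprints, prototypes, analyses and so on.
		backlog.sort(Comparator.comparingDouble(Risk::exposure).reversed());
		backlog.forEach(r -> System.out.printf("%-50s exposure: %,.0f%n",
				r.description(), r.exposure()));
	}
}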


Published by Arto Jarvinen on 16 Feb 2014

Cleaning up

I have given up my graphical editor (GMF) project a second time. Although it is rather simple to get something to work, it’s extremely difficult to get everything to work. The main reason is that the different parts needed for creating a complete graphical editor seem to have been created at different times by different people. They use the same design patterns but different class libraries. The frameworks have concepts such as Command, Editing Domain and Undo Context, but these are not implemented with the same classes. To get the frameworks to work together, a lot of “wrapping” of classes and handling of several instances of almost the same class is necessary, and the end result becomes a mess. Too much of a mess to keep in memory when not working with it on a daily basis.

To clean up this blog I have made all the Eclipse EMF, GMF, and GEF posts private, i.e. invisible for the external reader. If you wish to discuss any aspects of those frameworks or give me a hint as to how to go forward with less intellectual effort, then please drop me an email.

Published by Arto Jarvinen on 31 Jul 2013

Running Eclipse Process Framework in Ubuntu 12.04 LTS 32-bit

I’m using the EPF to create the quality system manual of the medical device company I’m working for. While we are using Windows at the company, I also wanted to be able to use Ubuntu when working from home. Getting it to work on Ubuntu was not trivial. Plain Eclipse seems to run out of the box but EPF uses editor components that aren’t installed by default in 12.04 and the packages are also hard to find.

What worked for me was to install xulrunner-1.9.2 from the Mozilla site. According to the EPF docs, not all versions of this library will work.

I installed as instructed on the Mozilla page. Don’t forget to run:

sudo ./xulrunner --register-global

I then also added the following lines to .bashrc:

export MOZILLA_FIVE_HOME=/opt/xulrunner
export LD_LIBRARY_PATH=$MOZILLA_FIVE_HOME

and reread the file by running:

bash

I then started EPF from the terminal thus:

./epf -clean

I still get error messages about failed assertions, but at least the editors in EPF now seem to work.

I also tried to run EPF on a 64-bit Ubuntu but the application wouldn’t even start, so I’ll settle for running it in a virtual 32-bit machine (that runs on a 64-bit machine). (I need the 32-bit machine anyway for my Internet banking application, which runs neatly on Linux but only on 32-bit machines.)

Published by Arto Jarvinen on 16 Mar 2013

Trust, transparency and Toyota

A recent article in The Economist [1] ascribed some of the economic and social success of the Nordic countries to a high level of trust. During the period of large-scale emigration of Swedes to America, they came to be known as “dumb Swedes” in the new country because of their high level of trust in people. Today the descendants of these dumb Swedes still have higher than average trust in their fellow citizens and tend to live in rather well-run states such as Minnesota.

It is possible to have relatively high taxes (such as in Minnesota or Sweden) if people trust that taxes are used for good purposes. Trust in the Nordic countries emanates from many sources, but transparency is a major one. It is not easy to embezzle public funds when all records are public and subject to the scrutiny of the press and of curious citizens.

I claim that the same goes for corporations. With a high level of trust between employees, departments, country organizations etc, transaction costs can be low. Transaction costs in a corporate setting are typically different types of follow-up and reporting procedures, elaborate internal pricing schemes and, in more extreme cases, turf wars.

Toyota
A car and a process you can trust.

A typical scenario: a particular problem area catches the eye of a manager (or a group of managers) who doesn’t entirely trust the organization’s ability to handle the problem. They then feel the natural, and in this situation perfectly responsible, need to alleviate their uncertainty by starting to make inquiries. If the situation gets more serious, the managers start requiring extra (ad-hoc) reporting on the progress of the problem’s resolution, or feel that they need to put together a “tiger team” to expedite the resolution process.

Despite superficial similarities, the above behavior is the antithesis of the Toyota Production System, where managers likewise come running when there is a problem. But unlike in the scene described above, they don’t come running to expedite the process; they come running to help resolve the root cause of the disturbance.

The extra reporting, the extra phone calls, the extra emails etc are all caused by the lack of trust in the process and add little value to the actual problem resolution process. They in fact make the process less efficient. A “tiger team” furthermore masks any deficiencies in the regular process by effectively bypassing it, preventing the organization from addressing the root cause of the lack of trust.

The explicit goal of building trust has, as far as I know, never been at the top of any process improvement model’s agenda. Many models do result in higher trust when successfully implemented, but I believe more explicit actions can be taken to improve trust faster. Some such actions could be:

  • Make all processes extremely transparent; make it easy for anybody to see the backlogs and progress of every department. This facilitates the Genchi Genbutsu, “go and see”, of the Toyota Production System, an attitude that helps managers to stay informed about what’s going on in the organization on a continuous basis.
  • When the organization is more mature, make metrics about performance visible for everyone.
  • Make decisions and their rationales visible.
  • When communicating about your area of responsibility, make sure that you are well informed. When uncertain, say so and state the reason for the uncertainty.
  • State clearly who’s responsible for what. If nobody steps forward and clearly takes charge of an issue, then uncertainty thrives.

Last but not least: do a good job (and make it known to others that you did a good job)!

Links

[1] The secret of their success.

Published by Arto Jarvinen on 04 Dec 2012

Product, not project – part 2

Something caught my eye yesterday when I helped my son get started with Code::Blocks, a light-weight integrated development environment (IDE): in all the IDEs I’ve worked with lately (Eclipse, Visual Studio and Code::Blocks), the collection of source code and other files is collectively called a “project”. This may seem like an unimportant little observation, but again I believe that using the right term is important for people’s mental models of what’s going on. A “project” is something temporary while a “product” is something rather more persistent. I would have suggested the word “product” here instead. See also my earlier post on this topic.

Published by Arto Jarvinen on 29 Oct 2012

Managing products

In an earlier post I wrote about the difference between a project and a product. This distinction may seem obvious to some, but considering the number of times I’ve found myself discussing its implications, I’ve come to the conclusion that it may not be all that obvious.

Many traditional development process descriptions start with a requirements specification of some kind and then go on describing the creation of the rest of the development artifacts, all the way to a verified, validated and released product. In contrast to such one-off, linear processes, most system development organizations are almost completely occupied with continuously upgrading and correcting existing products based on a steady stream of internal ideas and new requirements and wants from customers, distributors and other external stakeholders. The upgrades are typically indicated by the version number of the product. (I’m for instance writing this on a computer running Ubuntu 12.04 which is an upgrade from Ubuntu 11.10 and so on.)

For such more or less continuous product development, clearly something other than a one-off development process is needed to guide the engineering efforts. We need to plan several (upgrade) projects ahead to secure the necessary resources and to communicate with the market. We also need to continuously decide exactly what new features and corrections to add to the product at each upgrade.

Enter product management (and say hello to the product manager).

A product is according to Wikipedia “anything that can be offered to a market that might satisfy a want or need”. To maximize profits we want to make sure that a product matches the “wants or needs” as well as possible at the right price. Not only do we need to react to the feedback from existing customers and other stakeholders, we also need to pro-actively add innovative (and some not so innovative) new features so as to maximize profits, market share or whatever our goal might be for the moment.

Handling the short-term requests from existing customers, sales etc, and actively managing the product’s features in the medium and long term, is the purpose of product management and the ultimate responsibility of the product manager.

To plan and communicate the overall contents and timing of the major upgrades of the product, the product manager creates and maintains a product plan (aka product road-map) that in turn is based on both ad-hoc input from the market and thorough analyses of the targeted market segments, societal trends, the competition, available and future technology, partners etc (see e.g. [1]). All this makes the product manager one of the most important roles in the company, if not the most important role, and product management perhaps the most important process in the company.

Since we need to manage each product over its entire life-cycle, product management is not (and I’m sorry to keep repeating myself) a one-shot project activity but a recurring line activity. The figure below gives a simplified view of how product management can be integrated into a project-oriented organization.

Product Management
Continuous product management.

Each new suggested feature, whether a new function requested by a customer or a feature suggested internally by the project manager, system architect or somebody else, is described in a change request.

New change requests are regularly evaluated by a change control board (CCB) with respect to cost / benefit and their consistency with the overall product plan. If the benefit exceeds the cost and the suggested new feature is in line with the product plan, then the change request is accepted for implementation and at some point scheduled into a project. Otherwise the change request is rejected.
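
A minimal sketch of that decision rule follows, with hypothetical names and numbers; in reality the cost and benefit figures are estimates and judgment calls, not fields in a record.

// Hypothetical sketch of the CCB decision rule described above.
enum Verdict { ACCEPTED, REJECTED }

record ChangeRequest(String title, double cost, double benefit,
		boolean inLineWithProductPlan) {}

public class ChangeControlBoard {
	static Verdict evaluate(ChangeRequest cr) {
		// Accept only if the benefit exceeds the cost and the change
		// is consistent with the product plan; otherwise reject.
		return (cr.benefit() > cr.cost() && cr.inLineWithProductPlan())
				? Verdict.ACCEPTED : Verdict.REJECTED;
	}

	public static void main(String[] args) {
		ChangeRequest cr = new ChangeRequest("Add CSV export", 40_000, 120_000, true);
		System.out.println(cr.title() + ": " + evaluate(cr)); // Add CSV export: ACCEPTED
	}
}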

The CCB is typically chaired by the product manager and has as members the major stakeholders of the change request, such as the project managers of all ongoing projects and the line managers supplying the resources. The CCB is moderated and administered by a configuration manager.

While I can’t see any real alternatives to the above process and I have implemented it in several organizations, there are several challenges associated with doing so. I will return to these in future posts. A nice thing with the above process is that it plays very well with Scrum and other agile methods. This too may be the topic of future posts.

Links

[1] Michael Porter. Competitive Strategy.

Published by Arto Jarvinen on 25 Aug 2012

The difference between “document-based” and “model-based”

Many of the posts on this blog are about “model-based engineering”, particularly in the context of software and system development. Model-based is often compared to the “old” document-based way of doing things, i.e. when specifications and other descriptions come in the form of documents instead of models.

So what is the difference between “document-based” and “model-based” engineering? Strictly speaking there doesn’t need to be any difference at all, since models can always be serialized (converted into a stream of bytes). Many model editors, for instance, store the created models as (fairly) readable XMI files. These files thus conform to a well-defined metamodel.

When documents are the primary artifacts, i.e. not the output of a model editor, they usually don’t conform to a well-defined metamodel. An example is a design document that contains prose, with or without some predefined headings and subheadings, and informal figures (typically with “boxes” and “lines”).

The following is a list of difficulties that I have discovered with informal or semi-formal (typically conforming to a document template) documents:

  • It is hard to extract exactly the right subset of information needed for each task, e.g. to see a requirement and its associated test cases at the same time when updating the test case(s).
  • The meaning of a piece of text or a graphical notation in a figure is not always clear. It may for instance be hard to determine whether a certain text paragraph represents a formal requirement or an informal description of the system of interest.
  • It is hard to maintain relationships (traces, links) between pieces of information in several semi-structured documents. This makes for instance impact analysis or test coverage analysis difficult.
  • Information is often duplicated in several documents and therefore hard to keep consistent.
  • It may be difficult to reuse information because it is entangled with other information.
  • Concurrent work (on the same document) may be very difficult, especially if it is a Word document that can’t easily be merged with a parallel version of the same document.

The following is my list of some advantages with model-based descriptions, i.e. descriptions adhering to a well-defined metamodel:

  • Different cross-sections of the information can be shown in views or reports; it is easy to see the various aspects of the system architecture. Given the proper metamodel, a traceability matrix between test cases and requirements would for instance be easy to create (see the sketch after this list).
  • Each information item type has a well-defined meaning. It is for instance easy to recognize item types such as requirements, test cases, sensors, and threads (to pick a few random ones from my past consulting assignments).
  • Relationships are created and maintained “automatically”, as an integral part of the method. These relationships facilitate for instance impact analysis (of a change to the system).
  • Information needs to be entered only once to become available in all contexts; it is easier to keep it consistent.
  • Reuse is facilitated by the clearly defined and separated information items (and composites of such information items).
  • Concurrent work is easy thanks to the fine granularity of the information items and concurrent access.
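
To illustrate the first point above, here is a minimal, hypothetical Java sketch. Once requirements, test cases and their trace links are explicit, typed items rather than prose, a coverage report falls out of a few lines of code.

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Explicit, typed information items instead of paragraphs in a document.
record Requirement(String id, String text) {}
record TestCase(String id, String verifies) {} // 'verifies' names a requirement id

public class TraceabilityReport {
	public static void main(String[] args) {
		List<Requirement> reqs = List.of(
				new Requirement("REQ-1", "Range shall be at least 10 000 km"),
				new Requirement("REQ-2", "Refueling time shall be under 5 minutes"));
		List<TestCase> tests = List.of(
				new TestCase("TC-1", "REQ-1"),
				new TestCase("TC-2", "REQ-1"));

		// The trace links are first-class data, so grouping test cases
		// by the requirement they verify is a single expression.
		Map<String, List<TestCase>> coverage =
				tests.stream().collect(Collectors.groupingBy(TestCase::verifies));

		for (Requirement r : reqs) {
			List<TestCase> tcs = coverage.getOrDefault(r.id(), List.of());
			System.out.println(r.id() + " covered by " + (tcs.isEmpty() ? "NOTHING" : tcs));
		}
	}
}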

There are of course disadvantages with model-based approaches and advantages with document-based approaches. I conveniently leave these to a future post.

Published by Arto Jarvinen on 01 Jul 2012

Innovation is a team sport

As a consultant I’ve worked for a large number of companies doing system development. Many of them have had reasonably well-defined processes for development, customer support and so on. For various reasons I recently had reason to try to recollect how these companies did innovation. Somewhat to my surprise, I couldn’t really remember many examples of how innovation actually happened. One company had regular “innovation jams” modeled after IBM’s idea of the same name. Many smaller companies still had their founders working for them, driving a lot of the innovation. One company had all their developers visit several customers a year to get new ideas and feedback on the current products. But I still fail to remember a single company with something like a comprehensive innovation process. (OK, that doesn’t prove there isn’t one, but still.)

Stanford
Plenty of creative collisions going on down there…
Berkeley
… and over here too.

I then came across the book Innovation to the Core by Peter Skarzynski and Rowan Gibson. It presents a systematic approach to innovation, backed up by examples from companies like P&G, Whirlpool and GE. The one idea that appealed to me most was that innovation is typically not done by a single genius having an epiphany; an innovation is instead most often a mash-up of several already existing ideas, brought together by a team of different people in a creative dialog.

Diversity and connected minds

Two prerequisites for innovation emphasized by the authors are thus diversity of thinking and a rich web of connections and conversations between people. On a large scale this is exactly what is happening in Silicon Valley. I remember when I was at a training at the Haas School of Business at Berkeley back in 2009. At our lectures, venture capitalists, industrialists and innovators would just drop in and talk to us, many seemingly out of curiosity. I’m sure they had some sort of other business at the university as well and that some of the seemingly random visits were prearranged, but still. The people who dropped in took the opportunity to meet people with other perspectives, i.e. us (and perhaps to get a free lecture by a prominent researcher).

Another thing about Silicon Valley that comes to mind is its diverse population. There are many accents, nationalities, lifestyles etc represented in the multitude of companies found in the San Francisco Bay Area. Research done by Duke University shows that immigrants helped found more than a quarter of all U.S. engineering and technology companies between 1995 and 2005. According to TechCrunch, a full 52% of the startups in Silicon Valley were founded or co-founded by foreign entrepreneurs.

Research done by Richard Florida of the University of Toronto has shown a positive correlation between the number of same-sex couples and concentrations of high-tech businesses. He states:

A visible LGBT community [signals] openness to new ideas, new business models, and diverse and different thinking kinds of people—precisely the characteristics of a local ecosystem that can attract cutting-edge entrepreneurs and mobilize new companies.

For one good idea you need many bad ideas

As every venture capitalist will testify, to find one or two ideas that really take off, you need one thousand ideas to choose from. The idea selection process needs to be very cost effective to be able to handle a large number of ideas without killing the few good ones.

Also important to remember is that an idea is usually not perfect when first conceived. It needs to be iterated over and over. Another book, Innovation: The Five Disciplines for Creating What Customers Want by Curtis R. Carlson, describes a process for iterating ideas at “watering holes”, diverse groups of people, sometimes up to 50 times before the idea gets really mature (if it ever does).

There is of course much more to be said about this and there is abundant literature on the topic. I do find a little bit of comfort in the fact that research shows that good ideas require teamwork. That explains perhaps why my solitary armchair invention sessions have usually produced frustratingly little.


Published by Arto Jarvinen on 06 May 2012

Seeing the pattern – or not

My late roommate from Stanford, John Vlissides (he passed away much too early), went on to co-author the book Design Patterns: Elements of Reusable Object-Oriented Software, which has had quite an impact on the software development community. According to Wikipedia it was in its 39th printing in 2011.

I read the book a long time ago, probably around the time it came out, and haven’t had a very good reason to revisit it since, mostly because I don’t really do software development anymore except as a hobby. Also, I believe somebody actually “borrowed” my copy. My last project (a media player) was rather straightforward as regards design patterns. There were “input pins”, “output pins” and “filters”. (The tricky parts were the real-time aspect and the multitude of threads.)

This new hobby project, based on Eclipse, is something totally different. It is soaked in design patterns. It uses “adapters”, “decorators”, “factories”, “commands”, pretty much the whole index of the DP book. I just realized that the reason I have such a hard time understanding the code is that I don’t know the design patterns.

Show jumping course pattern
See the pattern? (The rider is my daughter and the horse is my horse Brim Time.)

This is quite similar to another problem of mine from a totally different domain: remembering a show jumping course. I’m learning jumping with my horse. My daughter, who has been riding and jumping for much longer than I have, only needs to look at a course briefly to remember it. She can figure out the order of the fences even without seeing the numbers because she has seen enough courses to know what they usually look like; she knows the patterns that are used in constructing a show jumping course.

The concept of a design pattern can be applied to a wide spectrum of domains. Software design patterns [2] have been compared to architectural design patterns [1]. Other design patterns that I come to think of (now that I’m at it) include:

  • Western democracy and its institutions.
  • A supermarket.
  • Traffic with its rules. The informal patterns vary somewhat from country to country which makes it somewhat perilous to negotiate the first few kilometers from the airport in a foreign country in an unfamiliar car.
  • The weather.
  • The interactions in the imperial court of Ming dynasty China.
  • A matrix organization.

The list of course goes on indefinitely. Patterns are so ubiquitous that it feels almost trivial to talk about them but the fact that we know how to behave in a supermarket without thinking of it highlights how useful and important patterns are.

Some of my earlier posts such as this one and this have in fact been describing process patterns in organizations.

I went ahead and bought a Kindle copy of DP. One advantage of e-books that I hadn’t thought of before is that an e-book is almost impossible to lose by lending it to somebody (I don’t even know how to lend it).

Links

[1] Patterns in architecture.

[2] Patterns in software.

Published by Arto Jarvinen on 29 Apr 2012

Eclipse extensions

At the risk of this becoming a very long post and of me repeating what other people have already written, below I give my own account of what Eclipse extension points and extensions are and how they work. I do this mostly as documentation for my own future reference (if I ever get my graphical editor project finished), but maybe there are others out there coming from the same place (of ignorance) I’m coming from.

So as not to infringe any copyrights, I have replaced the pizza theme in the original example with a slightly more realistic video processing pipeline graphical editor theme (a catchier name will be needed for the commercial version). This example is inspired by a tool that I used quite often while developing applications with the Microsoft DirectShow video processing framework: GraphStudio.

GraphStudio makes it possible to build video processing applications graphically by interconnecting filters and then run the application. For a screenshot of GraphStudio, see below. I think you get the picture.

GraphStudio
It could look something like this.

GMF could be used to create a graphical editor much like GraphStudio. Each filter in the graph and thus in the video processing pipeline could be represented by a plug-in providing the necessary processing for that particular filter. (Don’t ask me about any details, this is just a mock-up.)

Much more on filter graphs and video processing can be found on my other, for the moment somewhat sleeping blog.

The idea

An Eclipse plug-in is a software module that can be developed starting from Eclipse’s plug-in project template (select New -> Plug-in project). See [1] and [2] for details. When completed, the plug-in can be compiled and exported into the plugins directory of the Eclipse installation using the Export command. (A rather confusing detail is that one should not point at the plugins directory when exporting the plug-in but at the eclipse directory one level up. The export wizard adds plugins to the path automatically.)

Each plug-in installed in the plugins directory adds code and data to each running instance of Eclipse on that computer. The smallest (atomic) piece of code and/or data is called an extension. One plug-in may contain many extensions. Each extension may contain many attributes, which can be either static data such as strings or Java classes. An Eclipse instance running on a computer is in fact a combination of potentially hundreds of such plug-ins with an even larger number of extensions, all dynamically linked together, a bit like COM objects in Windows. To see which plug-ins are installed with your copy of Eclipse, go to Help -> About Eclipse -> Installation Details and choose the Plug-ins tab.

The attributes of the extension must match those of a corresponding extension point declaration in the piece of software that will be using the extension (the declaration and the use don’t strictly speaking need to be connected, but they often are). An often-used metaphor is that of a socket (extension point) and a plug (extension). The “pins” of the plug must fit into the holes of the socket.

Examples of common extensions are menu items (the example below also adds a menu and a tool to the running instance of Eclipse, in addition to a “filter” extension), editors, parts of a GMF-generated editor (see some of my earlier posts), and documentation.

Declaring an extension point

The existence of an extension point (the “socket”) is in this example declared in the plugin.xml file of the plug-in project com.ostrogothia.filtergraph. The declaration goes like:

<extension-point id="com.ostrogothia.filter" name="filter" schema="schema/filter.exsd"/>

The rest of the declaration is done in the referenced .exsd schema file. The definition in this file includes a boilerplate declaration of the extension construct itself plus the structure of the (more interesting) extension point attributes. The latter part may look like:

<element name="filter">
   <complexType>
      <attribute name="name" type="string">
         <annotation>
            <documentation>
                  Human readable name of the filter.
            </documentation>
         </annotation>
      </attribute>
      <attribute name="filter" type="string">
         <annotation>
            <documentation>
                  The implementation of the filter.
            </documentation>
            <appinfo>
               <meta.attribute kind="java" basedOn="com.ostrogothia.filtergraph.IFilter"/>
            </appinfo>
         </annotation>
      </attribute>
   </complexType>
</element>

Both the plugin.xml file and the .exsd schema file are best edited with the editors built into the Eclipse software development environment. The above snippets show the results of such editing in the resulting XML files.

The above declaration says that we can expect the extension to provide two things: a name string and a filter class. The class implements the interface IFilter (which is defined in the same plug-in that defines the extension point). IFilter in turn defines a method that returns the supported input video formats. This method can be called by the user of the extension once a reference to the filter class has been obtained (see below).
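
The IFilter interface itself is not shown in this post; a minimal version consistent with the rest of the example might look like:

package com.ostrogothia.filtergraph;

// Contract that every filter extension must fulfill. It is defined in the
// plug-in that declares the extension point, so the host code can call
// extensions without knowing their concrete classes.
public interface IFilter {
	// Returns the supported input video formats as a comma-separated string.
	String getInputFormats();
}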

Providing extensions to the extension point

In this example the extensions we wish to provide are filters that can be inserted into a filter graph to assemble a video processing pipeline. Real filters would of course have to implement methods for receiving upstream input data and making processed output data available to downstream filters. We bother with none of that now. The only method a filter implements in this example is a getter for the filter’s compatible input video formats. The formats are provided as a string.

We thus define a video decoder filter as a new plug-in, com.ostrogothia.filtergraph.filter. Its plugin.xml looks like:

<plugin>
   <extension
          point="com.ostrogothia.filter" id="1" name="Video decoder filter">
         <filter name="Video decoder" filter="com.ostrogothia.filtergraph.filter.VideoDecoder"/>
   </extension>
</plugin>

It defines the two attributes required by the extension point: the name of the filter (Video decoder) and a class implementing the filter (VideoDecoder).

The plug-in also provides the actual Java implementation of the VideoDecoder:

package com.ostrogothia.filtergraph.filter;

import com.ostrogothia.filtergraph.IFilter;

// The class named by the "filter" attribute of the extension. Eclipse
// instantiates it on demand via createExecutableExtension (see below).
public class VideoDecoder implements IFilter {

	public String getInputFormats() {
		// The only behavior in this mock-up: report the compatible input formats.
		return "video/x-h263, video/x-jpeg";
	}
}

That’s as exciting as it gets.

Running the plug-in

When Eclipse starts it creates an internal data structure with essential data about each extension in each plug-in it finds in the plugins directory (it loads the full extension only when actually needed). The code that uses the extension can access all aspects of the extension through the internal data structure. The full code for reading the extensions of type filter and getting the input formats looks like:

public void run(IAction action) {
	StringBuffer buffer = new StringBuffer();
	// The extension registry holds metadata about all installed extensions.
	IExtensionRegistry reg = Platform.getExtensionRegistry();
	// Fetch every extension contributed to our extension point.
	IConfigurationElement[] extensions = reg.getConfigurationElementsFor("com.ostrogothia.filter");
	for (int i = 0; i < extensions.length; i++) {
		IConfigurationElement element = extensions[i];
		buffer.append(element.getAttribute("name"));
		buffer.append(' ');
		buffer.append('(');
		String inputFormats = null;
		try {
			// Instantiate the class named by the "filter" attribute; this is
			// the point where the extension's code is actually loaded.
			IFilter filter = (IFilter) element.createExecutableExtension("filter");
			inputFormats = String.valueOf(filter.getInputFormats());
		} catch (Exception e) {
			inputFormats = e.toString();
		}
		buffer.append(inputFormats);
		buffer.append(')');
		buffer.append('\n');
	}
	MessageDialog.openInformation(window.getShell(),
				"Installed filters", buffer.toString());
}

Note how all extensions matching the extension point com.ostrogothia.filter are read into the variable extensions, which is then looped through to find the individual extensions and their attributes. The element.createExecutableExtension("filter") call creates an instance of the class whose name is the value of the attribute filter. The method getInputFormats() of that instance can then be called to get the input video formats (here representing a video decoder filter).

This is what you get when you hit the FilterGraph button in Eclipse when both plug-ins in this example are installed:

Filters
Filters.

Now I realize that I also need to understand EMF to understand GMF. Maybe a topic for yet another over-sized post.

Links

[1] Getting started with Eclipse plug-ins: understanding extension points.

[2] Getting started with Eclipse plug-ins: creating extension points.

[3] Working XML: Define and load extension points.

[4] The example source code.
