Archive for the 'Modeling' Category

Published by Arto Jarvinen on 04 Dec 2012

Product, not project – part 2

Something caught my eye yesterday when I helped my son get started with Code::Blocks, a light-weight integrated software development environment (IDE): in all IDEs that I’ve worked with lately (Eclipse, Visual Studio and Code::Blocks), the collection of source code and other files is collectively called a “project”. This may seem like an unimportant little observation, but again I believe that using the right term is important for people’s mental models of what’s going on. A “project” is something temporary while a “product” is something rather more persistent. I would have suggested the word “product” here instead. See also my earlier post on this topic.

Published by Arto Jarvinen on 25 Aug 2012

The difference between “document-based” and “model-based”

Many of the posts on this blog are about “model-based engineering”, particularly in the context of software and system development. Model-based is often compared to the “old” document-based way of doing things, i.e. when specifications and other descriptions come in the form of documents instead of models.

So what is the difference between “document-based” and “model-based” engineering? Strictly speaking there doesn’t need to be any difference at all, since models can always be serialized (converted into a stream of bytes). Many model editors, for instance, store the created models as (fairly) readable XMI files. These files thus conform to a well-defined metamodel.

When the documents are the primary artifacts, i.e. not the output of a model editor, they usually don’t conform to a well-defined metamodel. An example is a design document that contains prose, with or without some predefined headings and subheadings, and informal figures (typically with “boxes” and “lines”).

The following is a list of difficulties that I have discovered with informal or semi-formal (typically conforming to a document template) documents:

  • It is hard to extract exactly the right subset of information needed for each task, e.g. to see a requirement and its associated test cases at the same time when updating the test case(s).
  • The meaning of a piece of text or of a graphical notation in a figure is not always clear. It may for instance be hard to determine whether a certain paragraph represents a formal requirement or is just an informal description of the system of interest.
  • It is hard to maintain relationships (traces, links) between pieces of information in several semi-structured documents. This makes for instance impact analysis or test coverage analysis difficult.
  • Information is often duplicated in several documents and therefore hard to keep consistent.
  • It may be difficult to reuse information because it is entangled with other information.
  • Concurrent work (on the same document) may be very difficult, especially if it is a Word document that can’t easily be merged with a parallel version of the same document.

The following is my list of some advantages with model-based descriptions, i.e. descriptions adhering to a well-defined metamodel:

  • Different cross-sections of the information can be shown in views or reports; it is easy to see the various aspects of the system architecture. Given the proper metamodel, a traceability matrix between test cases and requirements would for instance be easy to create (see the sketch after this list).
  • Each information item type has a well-defined meaning. It is for instance easy to recognize item types such as requirements, test cases, sensors, and threads (to pick a few random ones from my past consulting assignments).
  • Relationships are created and maintained “automatically”, as an integral part of the method. These relationships facilitate for instance impact analysis (of a change to the system).
  • Information needs to be entered only once to become available in all contexts; it is easier to keep it consistent.
  • Reuse is facilitated by the clearly defined and separated information items (and composites of such information items).
  • Concurrent work is easier thanks to the fine granularity of the information and concurrent access.
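
To make the traceability example above concrete, below is a minimal sketch (all names are hypothetical and not taken from any particular tool) of how an explicit metamodel, with requirements, test cases and a “verifies” relationship, turns a traceability matrix into little more than a loop over the model:

import java.util.Arrays;
import java.util.List;

// Hypothetical mini-metamodel: Requirement and TestCase are explicit item
// types and "verifies" is an explicit relationship between them.
public class TraceabilityMatrix {

	static class Requirement {
		final String id;
		Requirement(String id) { this.id = id; }
	}

	static class TestCase {
		final String id;
		final List<String> verifies; // ids of the requirements this test case verifies
		TestCase(String id, String... verifies) {
			this.id = id;
			this.verifies = Arrays.asList(verifies);
		}
	}

	public static void main(String[] args) {
		List<Requirement> requirements = Arrays.asList(
				new Requirement("REQ-1"), new Requirement("REQ-2"));
		List<TestCase> testCases = Arrays.asList(
				new TestCase("TC-1", "REQ-1"),
				new TestCase("TC-2", "REQ-1", "REQ-2"));

		// One row per requirement, one column per test case.
		for (Requirement req : requirements) {
			StringBuilder row = new StringBuilder(req.id);
			for (TestCase tc : testCases) {
				row.append('\t').append(tc.verifies.contains(req.id) ? "X" : "-");
			}
			System.out.println(row);
		}
	}
}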

There are of course disadvantages with model-based approaches and advantages with document-based approaches. I conveniently leave these to a future post.

Published by Arto Jarvinen on 06 May 2012

Seeing the pattern – or not

My late roommate from Stanford, John Vlissides (he passed away much too early), went on to co-author the book Design Patterns: Elements of Reusable Object-Oriented Software, which has had quite an impact on the software development community. According to Wikipedia it was in its 39th printing in 2011.

I read the book a long time ago, probably around the time it came out, and haven’t had a very good reason to revisit it since, mostly because I don’t really do software development anymore except as a hobby. Also, I believe somebody actually “borrowed” my copy. My last project (a media player) was rather straightforward regarding design patterns. There were “input pins”, “output pins” and “filters”. (The tricky parts were the real-time aspect and the multitude of threads.)

This new hobby project, based on Eclipse, is something totally different. It is soaked in design patterns. It uses “adapters”, “decorators”, “factories”, “commands” – pretty much the whole index of the DP book. I just realized that the reason I have such a hard time understanding the code is that I don’t know the design patterns.

Show jumping course pattern
See the pattern? (The rider is my daughter and the horse is my horse Brim Time.)

This is quite similar to another problem of mine from a totally different domain: remembering a show jumping course. I’m learning jumping with my horse. My daughter, who has been riding and jumping for much longer than I have, only needs to look at a course briefly to remember it. She can figure out the order of the fences even without seeing the numbers, because she has seen enough courses to know what they usually look like; she knows the patterns that are used in constructing a show jumping course.

The concept of a design pattern can be applied to a wide spectrum of domains. Software design patterns [2] have been compared to architectural design patterns [1]. Other design patterns that come to mind (now that I’m at it) include:

  • Western democracy and its institutions.
  • A supermarket.
  • Traffic with its rules. The informal patterns vary from country to country, which makes it somewhat perilous to negotiate the first few kilometers from the airport in a foreign country in an unfamiliar car.
  • The weather.
  • The interactions in the imperial court of Ming dynasty China.
  • A matrix organization.

The list of course goes on indefinitely. Patterns are so ubiquitous that it feels almost trivial to talk about them, but the fact that we know how to behave in a supermarket without thinking about it highlights how useful and important patterns are.

Some of my earlier posts, such as this one and this, have in fact described process patterns in organizations.

I went ahead and bought a Kindle copy of DP. One advantage of e-books that I hadn’t thought of before is that an e-book is almost impossible to lose by lending it to somebody (I don’t even know how to lend it).

Links

[1] Patterns in architecture.

[2] Patterns in software.

Published by Arto Jarvinen on 29 Apr 2012

Eclipse extensions

At the risk of this becoming a very long post, and of repeating what other people have already written, below I give my own account of what Eclipse extension points and extensions are and how they work. I do this mostly as documentation for my own future reference (if I ever get my graphical editor project finished), but maybe there are others out there coming from the same place (of ignorance) that I’m coming from.

So as not to infringe on any copyrights, I have replaced the pizza theme of the original example with a slightly more realistic video processing pipeline graphical editor theme (a catchier name will be needed for the commercial version). This example is inspired by a tool that I used quite often while developing applications with the Microsoft DirectShow video processing framework: GraphStudio.

GraphStudio makes it possible to build video processing applications graphically by interconnecting filters and then run the application. For a screenshot of GraphStudio, see below. I think you get the picture.

GraphStudio
It could look something like this.

GMF could be used to create a graphical editor much like GraphStudio. Each filter in the graph and thus in the video processing pipeline could be represented by a plug-in providing the necessary processing for that particular filter. (Don’t ask me about any details, this is just a mock-up.)

Much more on filter graphs and video processing can be found on my other, for the moment somewhat dormant, blog.

The idea

An Eclipse plug-in is a software module that can be developed starting from Eclipse’s plug-in project template (select New -> Plug-in project). See [1] and [2] for details. When completed, the plug-in can be compiled and exported into the plugins directory of the Eclipse installation using the Export command. (A rather confusing detail is that one should not point at the plugins directory when exporting the plug-in, but at the eclipse directory one level up. The export wizard appends plugins to the path automatically.)

Each plug-in installed in the plugins directory adds code and data to each running instance of Eclipse on that computer. The smallest (atomic) piece of code and/or data is called an extension. One plug-in may contain many extensions. Each extension may contain many attributes, which can be either static data such as strings or Java classes. An Eclipse instance running on a computer is in fact a combination of potentially hundreds of such plug-ins with an even larger number of extensions, all dynamically linked together, a bit like COM objects in Windows. To see which plug-ins are installed with your copy of Eclipse, go to Help -> About Eclipse -> Installation Details and choose the Plug-ins tab.

The attributes of the extension must match those of a corresponding extension point declaration in the piece of software that will be using the extension (the declaration and the use don’t strictly speaking need to be connected, but they usually are). An often used metaphor is that of a socket (extension point) and a plug (extension). The “pins” of the plug must fit into the holes of the socket.

Examples of common extensions are menu items (the example below also adds a menu and a tool to the running instance of Eclipse, in addition to a “filter” extension), editors, parts of a GMF-generated editor (see some of my earlier posts), and documentation.

Declaring an extension point

The existence of an extension point (the “socket”) is in this example declared in the plugin.xml file of the plug-in project com.ostrogothia.filtergraph. The declaration looks like this:

<extension-point id="com.ostrogothia.filter" name="filter" schema="schema/filter.exsd"/>

The rest of the declaration is done in the referenced .exsd schema file. The definition in this file includes a boilerplate declaration of the extension construct itself plus the structure of the extension point’s attributes. The latter, more interesting part may look like:

<element name="filter">
   <complexType>
      <attribute name="name" type="string">
         <annotation>
            <documentation>
                  Human readable name of the filter.
            </documentation>
         </annotation>
      </attribute>
      <attribute name="filter" type="string">
         <annotation>
            <documentation>
                  The implementation of the filter.
            </documentation>
            <appinfo>
               <meta.attribute kind="java" basedOn="com.ostrogothia.filtergraph.IFilter"/>
            </appinfo>
         </annotation>
      </attribute>
   </complexType>
</element>

Both the plugin.xml file and the .exsd schema file are best edited with the editors built into the Eclipse software development environment. The snippets above show the results of such editing in the resulting XML files.

The above declaration says that we can expect the extension to provide two things: a name string and a filter class. The class implements the interface IFilter (which is defined in the same plug-in that defines the extension point). IFilter in turn defines a method that returns the supported input video formats. This method can be called by the user of the extension once a reference to the filter class has been obtained (see below).
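
For reference, the IFilter interface mentioned above is defined in the com.ostrogothia.filtergraph plug-in and, reconstructed from the VideoDecoder implementation further down, presumably looks something like this:

package com.ostrogothia.filtergraph;

// The extension point's Java contract: the only method the example relies on
// is the getter for the filter's supported input video formats.
public interface IFilter {

	String getInputFormats();
}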

Providing extensions to the extension point

In this example the extensions we wish to provide are filters that can be inserted into a filter graph to assemble a video processing pipeline. The filters must of course implement methods for receiving upstream input data and making processed output data available to downstream filters; we bother with none of that here. The only method a filter implements in this example is a getter for the filter’s compatible input video formats. The formats are provided as a string.

We thus define a video decoder filter as a new plug-in, com.ostrogothia.filtergraph.filter. Its plugin.xml looks like:

<plugin>
   <extension
          point="com.ostrogothia.filter" id="1" name="Video decoder filter">
         <filter name="Video decoder" filter="com.ostrogothia.filtergraph.filter.VideoDecoder"/>
   </extension>
</plugin>

It defines the two attributes required by the extension point: the name of the filter (Video decoder) and a class implementing the filter (VideoDecoder).

The plug-in also provides the actual Java implementation of the VideoDecoder:

package com.ostrogothia.filtergraph.filter;
 
import com.ostrogothia.filtergraph.IFilter;
 
public class VideoDecoder implements IFilter {
 
	public String getInputFormats() {
		return "video/x-h263, video/x-jpeg";
	}
}

That’s as exciting as it gets.

Running the plug-in

When Eclipse starts it creates an internal data structure with essential data about each extension in each plug-in it finds in the plugins directory (it loads the full extension only when actually needed). The code that uses the extension can access all aspects of the extension through the internal data structure. The full code for reading the extensions of type filter and getting the input formats looks like:

public void run(IAction action) {
	StringBuffer buffer = new StringBuffer();
	// Ask the platform's extension registry for all extensions contributed
	// to the com.ostrogothia.filter extension point.
	IExtensionRegistry reg = Platform.getExtensionRegistry();
	IConfigurationElement[] extensions = reg.getConfigurationElementsFor("com.ostrogothia.filter");
	for (int i = 0; i < extensions.length; i++) {
		IConfigurationElement element = extensions[i];
		buffer.append(element.getAttribute("name"));
		buffer.append(' ');
		buffer.append('(');
		String inputFormats = null;
		try {
			// Instantiate the class named by the "filter" attribute; this is
			// when the contributing plug-in's code is actually loaded.
			IFilter filter = (IFilter) element.createExecutableExtension("filter");
			inputFormats = String.valueOf(filter.getInputFormats());
		} catch (Exception e) {
			inputFormats = e.toString();
		}
		buffer.append(inputFormats);
		buffer.append(')');
		buffer.append('\n');
	}
	MessageDialog.openInformation(window.getShell(),
				"Installed filters", buffer.toString());
}

Note how all extensions matching the extension point com.ostrogothia.filter are read into the variable extensions which is then looped through to find all individual extensions and their attributes. The element.createExecutableExtension("filter") method creates an instance of the class whose name is the value of the attribute filter. The method getInputFormats() of that instance can then be called to get the input video formats from the instance (here representing a video decoder filter).

This is what you get when you hit the FilterGraph button in Eclipse when both plug-ins in this example are installed:

Filters
Filters.

Now I realize that I also need to understand EMF to understand GMF. Maybe a topic for yet another over-sized post.

Links

[1] Getting started with Eclipse plug-ins: understanding extension points.

[2] Getting started with Eclipse plug-ins: creating extension points.

[3] Working XML: Define and load extension points.

[4] The example source code.

Published by Arto Jarvinen on 17 Mar 2012

Meticulously matching metamodels

Many commonly used tools assume a very specific conceptual model of the world. The tools might be geared to manage classes, operations, attributes, and relations (UML editors), fields, projects, screens, and roles (Jira), inputs, outputs, controls, and mechanisms (IDEF0 editors), or filters, pins, and connectors (DirectShow GraphEdit). The chosen concepts are represented in the tool’s metamodel (whether this is explicit or not).

Since people have attended different schools, work or have worked at different companies, read different books, come from different cultures, or are just genetically wired in different ways, every person holds his or her own mental models of the world. Just try to create a commonly accepted definition of concepts such as “freedom” and “democracy” and you’ll see what I mean. An even worse scenario is a metamodel that lacks the concepts of freedom and democracy altogether, like the Orwellian Newspeak.

Just like Newspeak, an improper metamodel prevents us from reasoning about certain things in an organization. If there is no concept of a “project” in a tool, then the tool is probably not appropriate in a project-oriented organization. Likewise, some tools may have a “project” concept in their metamodel but lack a “product” concept; such tools are probably not appropriate in a very product-oriented organization. Seen the other way around, some organizations run all their development activities in the line organization and would find the “project” concept useless or confusing. Likewise, some organizations don’t see product development as a continuous activity where features are added to an existing product year after year, but as one single project followed by “maintenance”. In this kind of organization the concept “product” may be synonymous with “project”, and one of the concepts would be superfluous.

The conclusion of the above is that when choosing a tool to support a process in an organization, one should start by explicitly matching the metamodel of the tool to the actual concepts used in the organization. If the match is poor or contrived, the tool will be hard to use and to explain.

Unfortunately it is sometimes very difficult to find the metamodel of a tool. Not many user guides start with a drawing of the tool’s metamodel.

Another way would be to start with the conceptual model of the “things” to manage in the organization and then create a tool with a metamodel that matches these concepts exactly. This is the aim of, for instance, the Eclipse EMF and GMF projects. Unfortunately the threshold for adopting the otherwise promising GMF technology is still very high, which is an impediment to using it in a commercial setting. Hopefully this will change.

Published by Arto Jarvinen on 22 Mar 2011

Do the things we model exist?

Tree
Does it exist?

When describing a metamodel it is often difficult to keep the description (model) of something and the thing itself apart. If I want to describe a metaclass representing a system function, for instance, I find it easy to slip and start talking about the real-world function when the intention was to talk about the description of the real-world function.

I have noticed that it is easier to slip with some metaclasses than others. The slip seems more likely to happen with a description of something concrete than with a description of something abstract. A system function (a mapping of some input values to some output values) for instance feels concrete enough for the slip to happen whereas a non-functional requirement is a bit harder to imagine in the concrete world.

The existence of some things can be determined with our senses, but we need some sort of measuring device for other “things”; some things seem to exist more than others. We can for instance determine the (approximate) weight of a permanent magnet with our own senses, but would need a magnetometer to measure the existence and strength of the magnetic field. Our model of the magnet can just as easily describe the weight as the magnetic field. We can likewise rather easily determine whether a function exists in a system by applying input stimuli and observing what happens. It is slightly harder to observe the existence of non-functional characteristics such as electromagnetic emission; you need special-purpose instruments and probably a special-purpose measurement chamber. But both “things” exist in this sense, and it may make sense to describe both in our model.

The answer? I believe that everything we model must exist in the sense that it must be possible to determine if the model is a useful description of the real world or not. But we need to be clear about whether we talk about the real world “thing” (phenomenon) or the description. There can be a huge difference.

Published by Arto Jarvinen on 26 Feb 2011

Define model!

There are many definitions of “model-based development” or “model-driven development”. “Model”, like “quality” or “justice”, is an elusive word. In system development we often think of a graphical view of a UML model or a SysML model when we talk about a system model. We also perhaps think about a specific tool for creating such models.

In this post I’d like to elaborate on a previous post in which I claimed that most (everything?) of what we do is based on models of the reality we’re dealing with. Here are some mundane situations where we use more or less explicit models of reality, models that help us predict how reality will react to our actions and help us choose actions that take us toward our goals:

  • Keeping track of social relationships. Some scholars in fact claim that the brain has evolved as a response to a pressure to manage social relationships in a large group.
  • Navigating roads or terrain using either memory or an explicit map.
  • Writing a piece of open source software for my own pleasure and use (see my other blog) keeping the specs in my head, as comments in the code, as a user guide, or as sketches on a piece of paper.
  • Human walking or almost any kind of physical activity.

Whether an activity, be it system design or a road trip, is model-based is therefore, I claim, not a very interesting question. My answer is “yes, to some degree”. It is more useful to ask more detailed questions about the characteristics of the model:

  • What is the abstract syntax of the model? (For a toy illustration of this and the following questions, see the sketch after this list.)
  • What is the concrete syntax of the model?
  • What are the semantics of the metaclasses of the model? This question can be rephrased to:
    • What is the purpose of the model and each of the metaclasses?
    • What questions does each metaclass or cluster of metaclasses answer? Examples could be “what are the functional requirements on this system?” and “how is safety goal X satisfied?”.
    • What transformation rules apply to each of the metaclasses?
  • How useful is the model for its purpose?
  • What is the ratio of the amount of information in the model to the amount in the modeled piece of reality? Rephrased:
    • How efficient is the model?
    • How much information is built into the semantics of the metaclasses?
  • How accurately can we answer the above questions?
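
As a toy illustration of the first three questions (everything below is made up for the purpose of the example): in the sketch, the Java class structure is the abstract syntax (a requirement has an id and a text), the render method is one possible textual concrete syntax, and the semantics are given by whatever we then do with the element, for instance reviewing it or generating a test skeleton from it.

// Toy example: abstract syntax versus concrete syntax for a single metaclass.
public class RequirementExample {

	// Abstract syntax: the structure every Requirement instance must have.
	static class Requirement {
		final String id;
		final String text;

		Requirement(String id, String text) {
			this.id = id;
			this.text = text;
		}
	}

	// One possible concrete (textual) syntax for the same information.
	static String render(Requirement r) {
		return r.id + ": The system shall " + r.text + ".";
	}

	public static void main(String[] args) {
		Requirement r = new Requirement("REQ-1", "log every user action");
		System.out.println(render(r)); // REQ-1: The system shall log every user action.
	}
}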

I might try to analyze some existing models according to the above questions in future posts. In this post I will just say a few words about “model-based” versus “document-based” development. There is of course no fundamental difference between a “document” (for instance a Word document) and a “model” (typically created with a modeling tool such as Enterprise Architect or Rational Rose) since many modeling languages have textual concrete syntaxes and can thus be expressed as “documents”. We could very well describe a well-defined system model in Word.

It is difficult to answer the above questions accurately for, say, a free-format design document. The concrete and abstract syntaxes are typically not well defined, although we may have a template that gives guidelines as to the type of information that goes into the document. We may draw informal diagrams where a “box” may represent several different things; our metaclasses are not explicitly identified. We use regular prose to describe different aspects of the design. We may use words such as “task”, “process” and “module” without defining exactly what we mean. The purpose of the document may also be unclear: it can in principle be used as a basis for future development or as documentation of the as-built system, and if we are not clear about the purpose we may fail to keep the design document updated when, for instance, the software is changed. Since the document is only readable by humans, its purpose is probably to communicate and perhaps to deliberate on the design. Whether it is useful and efficient for this purpose depends to a large degree on the skills of the designer who creates the document.

Published by Arto Jarvinen on 30 Dec 2010

Test driving an organization

Reorganizations are probably as old as humanity. The paleolithic hunting parties must have been subject to reorganizations from time to time. It might have been a bit more physical than the garden variety reorganization today, but the objective was probably similar: to make the organization more effective and efficient or to reflect a new power structure.

Reorganizations are not always easy. I have a couple of times used a rather simple verification method to better prepare for a new organization. The method can be used either to design the new organization and the new processes, or to “test drive” (verify) the new organization and processes after they’ve been designed but before they are launched. The method is inspired by the ideas of Ivar Jacobson described in [1], in particular the concepts of business use case [2] and sequence diagram [3], as applied to organizations rather than IT systems.

The verification method suggested is based on two assumptions:

  • The organization exists to provide services or products to external parties. Providing such a product or service requires interactions between the organization and the external parties. These interactions can be described as business use cases, i.e. short “stories”. The business use cases are in turn realized by activities inside the organization (together forming a process), performed by a number of roles. (See [2] and the example below for further clarification.)
  • The most important aspect of getting something done is that somebody is responsible for doing it and that the “baton is passed” between the different roles participating in the process, so that nothing falls through the cracks. The exact method each role uses to get the job done is of course also important, but not as important as having somebody there to do the work in the first place. People usually find ways when they know it’s their responsibility, even if there isn’t any prescribed method, and methods can be designed later if needed.

The method is centered around the creation of one or several sequence diagram(s) representing the process(es) and the roles realizing the associated business use case. (See below for an example.)

The sequence diagram should be created interactively in a workshop, led by a workshop leader and with all line managers likely to provide resources to the process present. Once all resource providers agree on the sequence diagram, and thus the process it represents, the process and the organization are considered verified.

The following steps can be used as a guideline:

  1. Select one or a few business use case(s) to analyze. In the example below the business use case is “get support”.
  2. Identify initiating event and role (“business actor” in [2]).
  3. Identify involved roles.
  4. Draw the sequence diagram together. Add and remove roles and tasks until a reasonable workflow is found.
  5. Identify risks for delays, bottlenecks or other potential problems with the drafted sequence diagram, e.g. by asking the following questions:
    • Does the task add value?
    • Is the role receiving a task request likely to have the right skills to perform the task?
    • Are the right incentives in place to perform the task?
    • Are there enough man-hours available to perform the task?
    • Is the task a natural part of a role’s responsibilities or a “hack”?
    • Is the required information for performing the task readily available?
  6. (Optional step) Suggest allocation of roles to departments.

Depending on the degree of consensus going into the workshop and the complexity of the processes, a two-hour workshop might only suffice to cover a single business use case realization (sequence diagram).

The sequence diagram below represents a (simplified) realization of the business use case “get support” and is thus a (simplified) example output from a verification workshop.

Customer support process

A business use case realization.

By creating the sequence diagram we have shown that we have a reasonable organization and process in place to solve this task. Among other things we have shown that:

  • There is somebody to receive the support requests and that the customer will get a much wanted acknowledgement of the reception of the support request.
  • The first line support will not get stuck in hard-to-solve problems but can give fast answers when such answers are available.
  • There is a path for solving product issues (“bugs”) with a developer on call (and that such a developer on call must be allocated).
Other typical business use cases that could benefit from this type of reasoning include tender creation, a request for a new product feature, scheduling of a new project, and creation of the annual business plan and budget. Some interesting questions will undoubtedly surface when working through these business use cases.

Links

[1] Ivar Jacobson. The Object Advantage.

[2] Business Use-Case Model Guideline

[3] UML basics: The sequence diagram

Published by Arto Jarvinen on 28 Oct 2010

Everything is model-based

Today I visited a company called Sörman (yes, they’ve kept the umlauts in their name, which I applaud) and listened to a seminar. One of the speakers gave me the following insight:

In traditional development methods information is stored in documents. But every time the information is processed by humans, it is actually transformed into a model – a mental model. We don’t think in terms of linear text when we reason about technical solutions. Instead we create some sort of model of the system in our heads and reason in terms of that model. It may well happen, of course, that different people build different mental models from the same linear text, as there may be several ways to transform it into a mental model.

Since model-based development is closer to our mental models to start with, there should be fewer transformation errors going back and forth between the (external) system model and the mental model. Not all models are graphical, though. There are some mental metamodels that are largely linear and suitable for a textual concrete syntax. We like stories, for instance, that in turn create images in our heads, which is why use cases might still be a good way to describe system behavior.

Published by Arto Jarvinen on 04 Oct 2010

What is the meaning of semantics?

“Semantics” is a word often used by people that I work with and call friends (should I be worried?). The word has always struck me as vague. I have usually conveniently ignored that and happily joined any interesting discussion or debate.

I’m currently involved in the design of a couple of domain-specific languages (DSLs). One of them is the language that I’m trying to “implement” using GMF and that is described elsewhere on this blog. The other is a language used for describing electronic systems in the automotive industry. It seems that the S-word pops up so often now that it is perhaps time to reflect upon and understand what it – yes – means.

“Semantics” is usually translated to “meaning”. But what is the meaning of “meaning”? To describe “meaning” one obviously has to use other, more “meaningful” words, words that convey some meaning to the listener or reader. A DSL typically takes as its elements concepts from the domain whose aspects it is used to represent. In my case I use modeling elements such as Role and Workflow. In the automotive example we have elements such as (software) Task and Sensor. These have some meaning in the real world, so we intuitively associate them with their real-world counterparts. Modeling the real world in this way is interesting – but is it useful? (I owe that very good question to professor Isaacs at Berkeley.)

I of course have a use in mind when I create my models of reality. So while the Role element of my modeling language has a counterpart in reality, its semantics are more usefully described by the use made of what is created from my description of a Role. A Role in my models of reality is eventually translated into a web page, which is linked to other web pages and contains text that its readers use for various purposes, such as understanding their job or writing job ads. That little explanation is at least the start of a useful description of the semantics of Role in my case. Correspondingly, the semantics of a model of a software Task is often best described by the actual software (source code or executable) that is created (manually or automatically) based on that model.

It is somewhat ironic that while you wish to create an abstract model of reality, the only way to truly understand that abstract model is to understand the concrete details of what the model is used for. This concrete use is by real nerds called the operational semantics of the modeling language.
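
To make that a bit more tangible, here is an entirely hypothetical sketch (names and all made up) of what such operational semantics could amount to in my Role case: a small model-to-text transformation from a Role element to the web page generated from it.

// Hypothetical sketch: the operational semantics of a Role element is here
// simply the web page that is generated from it and that people actually read.
public class RolePageGenerator {

	static class Role {
		final String name;
		final String description;

		Role(String name, String description) {
			this.name = name;
			this.description = description;
		}
	}

	// Model-to-text transformation: Role -> HTML page.
	static String toHtml(Role role) {
		return "<html><body><h1>" + role.name + "</h1><p>"
				+ role.description + "</p></body></html>";
	}

	public static void main(String[] args) {
		Role tester = new Role("Tester",
				"Plans and executes the test cases for a release.");
		System.out.println(toHtml(tester));
	}
}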
