Friday, October 23, 2009

William Cook on Industry and Academia

You should really read William Cook's ECOOP 2009 Banquet Speech on Industry and Academia. Very good! Here's an excerpt:

The way I see it is that industry generally has more problems than they do solutions, but academia often has more solutions than problems. As an academic, I think everyone here will realize the value of a good problem. So if nothing else, we should revise the technology transfer story to include flows in both directions. Industry could transfer problems to academia, and academia could provide solutions to industry. I want to emphasize that both these flows are of high value.

Exactly how I see it, too!

A few thoughts on code generation with MPS

Here is a typical scenario that I use when working with classical external DSLs: I describe certain structures in the model. From the model I generate all kinds of code, infrastructure, glue code, and also an API (e.g. superclasses) against which I then implement the manually-written business logic.

Here is an important thing: if I want to enforce certain constraints, for example visibilities or dependencies, then it is not enough to enforce these constraints on the model level. I must make sure, through generation of suitable code structures, that the manually written parts cannot violate these constraints. For example, I need to prevent the creation of dependencies that have been forbidden on the DSL level. Consequently, I have to put a lot of thought into the structure of the generated code, and sometimes I need to use "tricks" to enforce these constraints on the code level. Or I need to use static analysis tools.
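To make the idea concrete, here is a minimal sketch in Java of such a "trick" (all class and method names are invented for illustration, not from any real generator): the generated superclass only hands the manually written subclass the collaborators the model declares, so hand-written code has no way to reach a forbidden dependency.

```java
// A service the model *does* allow this component to use.
class AllowedService {
    String ping() { return "pong"; }
}

// --- generated code ---
// The superclass only exposes the dependencies declared in the model.
abstract class GeneratedComponentA {
    private final AllowedService allowed; // the only declared dependency
    GeneratedComponentA(AllowedService allowed) { this.allowed = allowed; }
    AllowedService allowed() { return allowed; } // manual code can only reach this
    abstract String run();
}

// --- manually written business logic ---
// It can use allowed(), but holds no reference to anything else.
class ComponentAImpl extends GeneratedComponentA {
    ComponentAImpl(AllowedService s) { super(s); }
    @Override String run() { return allowed().ping(); }
}
```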

Contrast this with the approach in MPS: here you code *everything* in the model, because you can use Java (or other languages we're working on) directly on the model level. You don't implement code against some kind of generated API. So you can enforce the constraints mentioned above entirely in the model! The generated code is really not important here (sure, it should be readable and debuggable, but it's not something against which you program). This simplifies the design of the generated code, and hence the code generator, quite a bit.
Thursday, October 22, 2009

Training on DSLs, Product Lines and Language Workbenches

On February 3 and 4 2010 I will run a course on Domain Specific Languages (for Product Lines) at Sioux Embedded Systems in Eindhoven, The Netherlands. In addition to concepts, I will cover Xtext as well as MPS.

I invite you to join, even if you already know one of the tools - maybe especially if you already know one of the tools, so you can learn another approach. This will certainly deepen your understanding of DSLs and the technologies used to implement them.

If we get a nice group, this could be a very interesting, productive and fun training.

Practical Product Lines 2009

I just returned from the PPL 2009 conference in Amsterdam. Although small, the conference was very good: nice program, competent participants, nice venue, good food. I will definitely go back next year. It is also better value for money than SPLC if you're interested mainly in practical aspects of PLE.

To get an impression of what the conference was like, take a look at the Twitter #ppl2009 stream.
Sunday, October 18, 2009

Slides uploaded to Slideshare

I have just uploaded a couple of my currently relevant presentations to Slideshare. All of them were already available on my website, too, but I decided I wanted to participate in the Slideshare community.
Sunday, October 11, 2009

Type Systems for Xtext DSLs

I recently implemented a type system for an Xtext DSL. I wanted to give you a couple of pointers on how to do it, in case you have to do the same.

What is a type system? A type system is essentially a set of sophisticated constraints, i.e. a way of determining whether a model is correct beyond its structure. We all know type systems from programming languages: if you try to add an int and a String, you'll get a type error.

So, in principle, you could simply implement a type system as a set of Check constraints. However, for non-trivial type systems and languages, this can grow complex. A more principled approach is recommended; it consists of three ingredients.

The type meta model: the structure of types and their relationships often warrants its own meta model. For example, an array type isn't trivial: it has to express that it is an array, it has to contain the size, and it has to point to the type of the elements in the array. The array type itself is structured. Hence, it is a good idea to create a meta model, or language, to capture the types. In the context of Xtext, you can either do this as a separate EMF meta model that you reference from the Xtext DSL, or you simply add additional rules to the language that represent the types once they are transformed to Ecore. Make sure that the grammar structure does not allow you to actually "write down" the types in models - you want to work with them behind the scenes.
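As an illustration, such a type meta model could be sketched in plain Java roughly like this (in Xtext it would be an Ecore model or extra grammar rules; all names here are my own, not Xtext API):

```java
// A minimal type meta model, sketched in plain Java.
abstract class Type {}

class IntType extends Type {
    @Override public boolean equals(Object o) { return o instanceof IntType; }
    @Override public int hashCode() { return 1; }
}

class BoolType extends Type {
    @Override public boolean equals(Object o) { return o instanceof BoolType; }
    @Override public int hashCode() { return 2; }
}

// An array type is structured: it knows its size and its element type.
class ArrayType extends Type {
    final int size;
    final Type elementType;
    ArrayType(int size, Type elementType) {
        this.size = size;
        this.elementType = elementType;
    }
}
```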

So how do you associate the type of an element with the element itself? A good way is simply to create a set of Xtend functions. I usually call them typeof(...) and define them polymorphically for all the language elements for which I want to have a type. These functions typically do one of two things: for atomic elements (e.g. an integer literal or an array declaration), the typeof(...) extension simply returns the type object, i.e. an instance of the type meta model (or language) defined above. For non-atomic elements (say, a comparison operation that compares the results of a constant and a plus expression), the typeof(...) function contains a type derivation rule. For example, for a plus expression, it returns the type of one of the operands (the types of the two operands have to be the same, see below). For a comparison operation, it simply returns boolean.

Finally, once you have these two ingredients in place, you can implement the actual type constraints. For example, a type constraint might say that for a plus expression, the typeof(...) of the left operand and the typeof(...) of the right operand need to be the same. Note that by calling the typeof(...) function for both operands, the resulting type is calculated correctly even if the two operands are, for example, multiplication operations themselves.
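A minimal sketch of how the typeof(...) rules and the constraint compose, written in plain Java instead of Xtend (all class and method names are invented for illustration; types are plain strings to keep the sketch small):

```java
// Tiny expression AST, invented for illustration.
abstract class Expr {}
class IntLiteral extends Expr { final int value; IntLiteral(int v) { value = v; } }
class Plus extends Expr {
    final Expr left, right;
    Plus(Expr l, Expr r) { left = l; right = r; }
}
class Comparison extends Expr {
    final Expr left, right;
    Comparison(Expr l, Expr r) { left = l; right = r; }
}

class TypeSystem {
    static String typeof(Expr e) {
        if (e instanceof IntLiteral) return "int";     // atomic: fixed type
        if (e instanceof Plus) {
            // derivation rule: a plus has the type of its operands
            return typeof(((Plus) e).left);
        }
        if (e instanceof Comparison) return "boolean"; // always boolean
        throw new IllegalArgumentException("untyped element");
    }

    // The actual constraint: both operands of a plus must have the same type.
    // Because it calls typeof(...) recursively, it also works for nested expressions.
    static boolean check(Plus p) {
        return typeof(p.left).equals(typeof(p.right));
    }
}
```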

One last comment: usually, types don't have to be identical for a constraint to hold; they have to be "compatible", where compatibility is defined specifically for each pair of types. To address this, simply create an isCompatible(t1, t2) Xtend function, which you polymorphically override for all relevant type combinations.
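In plain Java, such pairwise compatibility rules might look like this (string-based types and the int-to-long widening rule are illustrative assumptions, not from the original post):

```java
// Sketch of pairwise type compatibility checks.
class Compatibility {
    static boolean isCompatible(String t1, String t2) {
        if (t1.equals(t2)) return true;             // identical types always match
        if (t1.equals("long") && t2.equals("int"))  // an int value fits into a long
            return true;
        return false;                               // everything else is incompatible
    }
}
```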

That's it! Using this approach gives you a scalable and maintainable type system implementation.

PS: MPS uses the same approach. It even comes with a separate DSL for defining type systems, including a set of operators for type inference and type compatibility. Very nice!


This is Markus Voelter's Blog. It is not intended as a replacement for my regular web site, but rather as a companion that contains ideas, thoughts and loose ends.


You can get an atom feed for this blog.