PLE Track at OOP 2010
At this year's OOP conference there will be a two-day track on product line engineering, organized by Christa Schwanninger of Siemens. The two days are packed with a number of really interesting and relevant talks, going beyond the purely technical aspects (such as variant management). So if you're interested in PLE, take a look at
the talks and consider joining :-)
The Role of Domain Experts in *designing* DSLs
One of the advantages of using DSLs is - so people say - that communication between domain experts and software developers improves. However, a good "business" DSL is one where the business or domain folks can do the coding themselves and then run the generator to build the executable software. Where in this picture is the communication between domain expert and developer that the DSL is supposed to improve? In the ideal scenario, business people, after creating the models, press a button. No need to communicate with a developer.
However, there's of course another take on this problem: before domain users can use the DSL to write down their domain-specific programs or models, the DSL needs to be created! Creating the DSL, developing the generators, and so on is of course something a developer does! So the communication between domain experts and developers happens during language creation and evolution!
So, where does this leave us? We need to improve the domain expert/developer collaboration during DSL creation! How can DSL tools help with that? After all, they are designed for developers, right? Are we back to the same old "domain experts write it down in Word, throw it over the fence, and then the devs build the DSL" kind of scenario - something that DSLs aim to avoid in the first place?
Here are some ideas and examples of how domain experts can play a role in language development.
When building a DSL with Xtext, language definition is so quick and straightforward that the domain expert can provide input, the developer builds the grammar, and the domain expert can then use the resulting language/editor to try out whether the language can represent his ideas. Because Xtext is so lightweight, such a round trip can happen in a couple of minutes. Consequently, it is absolutely feasible to put a domain expert and a developer in front of a machine to develop a language together. There's no real need to write a "language specification".
Intentional has emphasized this idea for a while now. They encourage language developers to start new domains/languages by first just writing down domain concepts (essentially, just words at this point) and their relationships. At this stage, without defining sophisticated custom projections, models can be edited using a default projection. After an hour or two of pair language development, the dev can spend the rest of the day alone implementing the fancy custom projections. But at least a first rough cut of the new language, usable for defining models, is available immediately.
With MetaEdit+, the graphical shapes representing the concepts in a diagram can be defined with a nice graphical editor. Again, this shape definition is something that can involve domain experts.
So what do we take from this? Quick turnaround during DSL development is essential for including the domain expert in this crucial phase. When selecting a DSL tool, you should take this concern into account - it hasn't been discussed much AFAIK, and most DSL tools don't make it a priority. Maybe this should change.
Learn about DSLs and Schwäbisch at the same time: MID Insight 09 Keynote
Two weeks ago I gave a keynote at the
MID Insight 09 conference. I talked about "Programming and Modeling - Two Worlds?" The folks at MID filmed the keynote and I uploaded it to Vimeo. So
here is the video and
the slides are here.
By the way: While the slides are in English, the talk and the video are in German (Schwäbisch).
Tutorial @ SE 2010: Software Engineering mit Domänenspezifischen Sprachen und Language Workbenches
I was just notified that my (and Peter Friese's) tutorial for
SE 2010 has been accepted: Software Engineering with DSLs and Language Workbenches. This is a German conference and a German tutorial; here's the abstract (translated from the German):
As software developers, we are used to expressing ourselves with existing programming languages, i.e. describing the domain logic of the system to be built using the object-oriented, functional, or other means of expression of the implementation language. In recent years, however, tools have emerged that make it possible to build new languages or extend existing ones. In this tutorial we discuss how software development changes when you have the option of adapting the language to the domain, rather than being forced to stay within the confines of existing programming languages. In the first part of the tutorial we explain the fundamentals of domain-specific languages and language extensions and discuss some important use cases for these technologies. In the second part we demonstrate two tools that make language engineering feasible with reasonable effort (Eclipse Xtext, JetBrains MPS); this part consists largely of hands-on exercises. By the way: it does not overlap with the
PIK 2010 workshop, so you can actually come and join us for both :-)
Thoughts on Migrating Model-Driven Solutions between Tools
This week I was at a customer who started MDD a couple of years ago. They started with oAW 3.x and at some point migrated to oAW 4.x, using the Classic mode. This means that the meta model was implemented as Java classes. Because they decided to generate from UML, the meta classes were subclasses of the UML 1.x meta model implementation in Java that shipped with oAW Classic.
The model defines the core data structures of a large scientific instrument, and it serves as a data exchange layer between systems implemented in several programming languages. Consequently, they have code generators for seven different target languages, i.e. they have put quite a bit of effort into the generators. The meta model structure is reasonably simple, but the behaviour - or derived attributes, if you will - implemented in Java is quite non-trivial.
Now they have decided to migrate to the most current oAW, i.e. using EMF for the meta model and models, and Xtend for the derived behaviour. Consequently, all the meta classes implemented as subclasses of UML 1.x classes don't work anymore. In general, the behaviour implemented in Java is obsolete and cannot be directly reused. A manual migration of the behaviour from Java to Xtend (or to whatever other platform) is required. This is a lot of work - almost prohibitively so!
So what can we do to avoid situations like these?
There is no simple solution. Dependencies on tools, no matter which kind of tools, are always a problem. If you wanted to migrate your system from one database/middleware/UI framework to another, that would also be a lot of work. What can we do specifically in MDD tools to tackle this problem? I think there are a couple of things that are a good starting point:
Instead of writing the code generators directly against the UML meta model, you should introduce an intermediate domain-specific layer. Code generators are written against this domain-specific meta model, and a model-to-model transformation transforms the profiled UML model into the domain-specific meta model. This has several benefits: generators become simpler because they are written against a simpler meta model, and simpler generators are easier to migrate. The complexity of handling UML (they used a couple of non-trivial conventions) is encapsulated in the one M2M transformation instead of being repeated in seven code generators. Also, a migration from UML 1 to 2 (or to a real DSL, throwing away UML) would be simpler, because the differences would all be handled in one single place - no change to the code generators required.
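To make this concrete, here is a minimal sketch in Java; the type names (UmlClass, DataRecord, and friends) are hypothetical stand-ins, not the customer's actual meta models. The point is structural: the seven generators only ever see the intermediate DataRecord type, and all UML conventions are interpreted in a single transformation class.

```java
import java.util.List;

// Hypothetical stand-ins for the profiled UML input model.
record UmlProperty(String name, String typeName) {}
record UmlClass(String name, String stereotype, List<UmlProperty> properties) {}

// The intermediate domain-specific meta model; all seven code
// generators are written against these two types only.
record Field(String name, String type) {}
record DataRecord(String name, List<Field> fields) {}

class Uml2Domain {
    // The single model-to-model transformation: every UML convention
    // (stereotypes, tagged values, naming rules) is interpreted here,
    // in one place, instead of in each of the seven generators.
    DataRecord transform(UmlClass umlClass) {
        List<Field> fields = umlClass.properties().stream()
                .map(p -> new Field(p.name(), p.typeName()))
                .toList();
        return new DataRecord(umlClass.name(), fields);
    }
}
```

A migration from UML 1 to 2 (or away from UML entirely) would then only touch Uml2Domain, never the generators.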
As a second point, you should use declarative approaches wherever possible. Things like validations, scopes, type systems etc. should not be implemented in "procedural" code. Turing-complete, procedural code is hard to "understand" with a tool (other than a compiler) and hence complex to transform into another representation when you want to migrate. The more declarative, restricted and structured such descriptions are, the easier it is to work with them. This is of course the whole point of models in the first place! So maybe I should say it differently: make sure that as many aspects as possible of meta models, validations, scopes, generators and transformations are themselves described as models! In oAW 3.x/Classic this is clearly not the case; it is the case to a larger degree (though not enough!) with more current versions of Eclipse Modeling and oAW.
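As a toy illustration of the declarative idea - with made-up names, not any actual oAW or Xtext API - consider a validation captured as plain data. A migration tool can enumerate and mechanically translate such rules; the equivalent hand-written procedural check is opaque to it.

```java
import java.util.List;

// A validation rule as data: target concept, checked property, a named
// condition, and an error message. Because the rule is a model, a
// migration tool can read it and regenerate it for a new platform.
record ValidationRule(String targetConcept, String property,
                      String condition, String message) {}

class Validations {
    static final List<ValidationRule> RULES = List.of(
            new ValidationRule("DataRecord", "name", "nonEmpty",
                    "every record needs a name"),
            new ValidationRule("Field", "type", "resolvable",
                    "field types must resolve to a known type"));
}
```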
Finally, assuming you use models for all these things, make sure they are models based on the same tools/formalisms as your application models. This makes them "just another model" which you can (relatively) easily process in a migration with the tools you are working with anyway. In tools like MPS and Intentional, all aspects of DSLs are defined with models expressed in other MPS/Intentional DSLs. It is turtles all the way down :-)
So what do I take away from this? First, migrations are always hard, also in the case of MDD tools. They are especially hard if the effort is put into the generators (for multiple platforms) and not so much into the models (note that we actually added an intermediate domain-specific meta model now; introducing this is just two or three days of work). The second takeaway is that we need to enhance tools such as Xtext
(or EMF in general) to describe more aspects of DSLs as models. Validations, scopes and type systems, as well as model navigation and queries ("give me all types of all public attributes of all classes in this file"), are important candidates. There will of course always remain some aspects that need to be described with Turing-complete implementation code. But by keeping this to a minimum, the tool lock-in and migration effort is minimized.
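To illustrate what such a query looks like when hand-written today, here is the quoted example against a hypothetical meta model in plain Java. It is exactly this kind of Turing-complete navigation code that is hard to migrate, and that a declarative query model would replace.

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Hypothetical meta model stand-ins, just for the example query.
record Type(String name) {}
record Attribute(String name, boolean isPublic, Type type) {}
record Clazz(String name, List<Attribute> attributes) {}
record ModelFile(List<Clazz> classes) {}

class Queries {
    // "Give me all types of all public attributes of all classes in this file."
    static Set<Type> publicAttributeTypes(ModelFile file) {
        return file.classes().stream()
                .flatMap(c -> c.attributes().stream())
                .filter(Attribute::isPublic)
                .map(Attribute::type)
                .collect(Collectors.toSet());
    }
}
```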
Eclipse Demo Camp Stuttgart Location Change
If you are planning to join, please take a look
at this page to see the updated location at the airport.
Oslo becomes SQL Server Modeling - my 2 cents
Here are my two cents on why I am disappointed by Microsoft's decision to change Oslo to SQL Server Modeling. In
this eWeek article, Doug Purdy is quoted as follows:
"The great irony to all these (negativ, ed.) comments is that all we did was change the name from 'Oslo' to ... SQL Server Modeling and now we get the #fail tag. If we had called it Windows Modeling or .NET Modeling would it have been #success?" When I saw Oslo and M, I was always disturbed by the focus on SQL Server integration. I always wondered, why, during talks and demos, the SQL Server aspect was emphasized. Sure, a scalable repository is good and useful. But I always considered it only
one option for storing models, basically a "persistence backend" for the nice and cool DSL facilities. I had hoped MS would keep the DSL infrastructure independent of SQL Server.
This most recent move changes the picture. Now Oslo is basically an API for SQL Server. SQL Server is not merely one of many persistence backends but rather the reason why M & Co exist. And consequently, future decisions in M & Co will be driven by the data community and not by the needs of the DSL community.
So yes, I am somewhat disappointed. Oslo will probably still be a cool and useful tool for data modeling and programming on SQL Server. But it has lost its appeal to me as general-purpose DSL tooling (at least until I am convinced otherwise :-))
PIK 2010: Produktlinien im Kontext
Here is some information about an interesting workshop at the
SE 2010 conference in Paderborn.
Product lines are now established in many areas of the software industry, from embedded systems to business information systems. They enable higher productivity, increase quality, and strengthen companies' strategic positions. At the same time, product lines are still a relatively young technology that holds significant challenges and risks for many companies. The PIK 2010 workshop examines current experiences with software product lines and fosters the dialogue between industrial practice and application-oriented research. Its focus is the interplay between technical questions and business, organizational, and process aspects. In addition, new technological developments are presented and discussed. If you consider joining, please read the
Call for Papers.
Eclipse Modeling: Models Getting Bigger
If you take a look at the more recent activities in Eclipse Modeling, you can clearly see a focus on scalability, teamwork, and big (or many) models.
CDO, the database-based persistence and collaboration layer for EMF models, has been extremely popular at this year's Eclipse Summit. I heard about several projects that consider using CDO as the backend for an enterprise-wide modeling infrastructure.
What's currently still missing in EMF (and to some extent also in CDO) is a flexible and scalable query facility. A query language is needed, and EMF resources need to provide an API against which queries can be executed. The
model query project, spearheaded by SAP, aims at providing this.
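Until such a facility exists, the status quo is to walk a resource's content tree by hand. A minimal sketch using the standard EMF API (Resource.getAllContents()) could look like the following - note that this is not the model query project's API, just the brute-force approach it aims to replace:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

import org.eclipse.emf.common.util.TreeIterator;
import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.resource.Resource;

class NaiveModelQuery {
    // Linearly scans every object contained in the resource -- exactly the
    // kind of traversal that does not scale to big models and that a real
    // query facility (with indexes and a query language) would avoid.
    static List<EObject> select(Resource resource, Predicate<EObject> criterion) {
        List<EObject> matches = new ArrayList<>();
        TreeIterator<EObject> contents = resource.getAllContents();
        while (contents.hasNext()) {
            EObject candidate = contents.next();
            if (criterion.test(candidate)) {
                matches.add(candidate);
            }
        }
        return matches;
    }
}
```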
In file/text-based environments, scalability is also an issue. Modern IDEs, for example, build an index of symbols and the resources in which they're defined to support efficient linking and lookup. The
EMF index project, led by the itemis folks in Kiel, provides this capability, specifically for use by Xtext.
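The core idea of such an index, reduced to a sketch (a hypothetical class, not the EMF index project's actual API): map each qualified symbol name to the URIs of the resources that define it, so that linking can resolve a name without loading and scanning every model file in the workspace.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

import org.eclipse.emf.common.util.URI;

// Hypothetical symbol index: qualified name -> resources that define it.
class SymbolIndex {
    private final Map<String, Set<URI>> definitions = new HashMap<>();

    // Called while indexing a resource: record where a symbol is defined.
    void record(String qualifiedName, URI resource) {
        definitions.computeIfAbsent(qualifiedName, k -> new HashSet<>())
                   .add(resource);
    }

    // Called during linking: find candidate resources without loading models.
    Set<URI> lookup(String qualifiedName) {
        return definitions.getOrDefault(qualifiedName, Collections.emptySet());
    }
}
```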
Finally, the
ARTOP project has created a number of facilities to work with EMF/XMI models that are several hundred megabytes in size (the tools are not publicly available right now).
All of these projects and developments are a clear signal that Eclipse Modeling is picking up momentum in "real" environments. It is a good sign when a technology leaves behind the "it works" phase and starts addressing "how to use it at large scale".
Programming, Modeling, DSLs and Language Workbenches
More and more I am getting to the point where I think that there should be no difference between modeling and programming. What we really want is to program ... at different levels of abstraction ... from different viewpoints ... all viewpoints integrated ... with different degrees of domain-specificity ... with suitable notations ... with a suitable level of expressiveness ... and always precise and tool-processable.
I am talking about this idea quite a bit. Some slides towards this point are at
slideshare and
as PDFs on my website.
I will blog about some of the thoughts from these slides in the next couple of weeks.