Thursday, November 26, 2009
 

The Role of Domain Experts in *designing* DSLs

One of the advantages of using DSLs is - so people say - that the communication between domain experts and software developers is improved. However, a good "business" DSL is one where the business or domain folks can do the coding themselves and then run the generator to build the executable software. Where in this picture is the communication between domain expert and developer that the DSL is supposed to improve? In the ideal scenario, business people create the models and press a button. There is no need to communicate with a developer at all.

However, there's of course another take on this: before domain users can use the DSL to write down their domain-specific programs or models, the DSL has to be created in the first place! This language creation, together with the development of the generators etc., is of course something a developer does! So the communication between domain experts and developers happens during language creation and evolution!

So, where does this leave us? We need to improve the collaboration between domain experts and developers during DSL creation! How can DSL tools help with that? After all, they are designed for developers, right? Are we back to the same old "domain experts write it down in Word, throw it over the fence, and the devs build the DSL" kind of scenario - exactly the kind of thing DSLs aim to avoid in the first place?

Here are some ideas and examples of how domain experts can play a role in language development.

When building a DSL with Xtext, language definition is so quick and straightforward that the domain expert can provide input, the developer builds the grammar, and the domain expert can then use the resulting language/editor to try out whether the language can express his ideas. Because Xtext is so lightweight, such a round trip can happen in a couple of minutes. Consequently, it is absolutely feasible to put a domain expert and a developer in front of a machine and have them develop a language together. There's no real need to write a "language specification".
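To give a sense of how lightweight this is, here is a minimal, purely hypothetical Xtext grammar sketch - the "entity"/"feature" concepts and the DSL name are invented for illustration, not taken from any real project. A developer can type something like this in a few minutes and immediately hand the generated editor to the domain expert:

    // hypothetical mini-grammar; all concept names are invented for illustration
    grammar org.example.MyDsl with org.eclipse.xtext.common.Terminals

    generate myDsl "http://www.example.org/MyDsl"

    // a model is simply a list of entities
    Model:
        entities+=Entity*;

    // e.g. "entity Customer extends Party { ... }"
    Entity:
        'entity' name=ID ('extends' superType=[Entity])? '{'
            features+=Feature*
        '}';

    // e.g. "address : Address" - the type is a cross-reference to another entity
    Feature:
        name=ID ':' type=[Entity];

From this grammar alone, Xtext derives a parser, an EMF metamodel and an Eclipse editor with syntax highlighting, code completion for the cross-references and basic validation - which is exactly the kind of artifact a domain expert can play with right away.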

Intentional has emphasized this idea for a while now. They encourage language developers to start a new domain/language by first simply writing down the domain concepts (essentially just words at this stage) and their relationships. At this point, without defining sophisticated custom projections, models can already be edited using a default projection. After an hour or two of pair language development, the developer can spend the rest of the day alone implementing the fancy custom projections. But at least a first rough cut at defining models with the new language is available immediately.

With MetaEdit+, the graphical shapes representing the concepts in a diagram can be defined with a nice graphical editor. Again, this shape definition is something that can involve domain experts.

So what do we take from this? Quick turnaround during DSL development is essential if we want to include the domain expert in this crucial phase. When selecting a DSL tool, you should take this concern into account - it hasn't been discussed much AFAIK, and most DSL tools don't make it a priority. Maybe this should change.
 
Comments:
By 'designing DSLs' you mean designing the _syntax_ of the language, right? That is, or should be, easy. But prototyping the syntax of a language is not the same as domain analysis, nor is it the same as DSL implementation, both of which are typically harder.
 
Not just syntax. Whatever is necessary to "play" with the language and to find out whether it fits the domain. In addition to syntax, this may also include important constraints, scope definitions or type system rules.

But generators and stuff are not included.
 
For years we have been stuck with hierarchical file systems and document structures. We were so used to shoehorning everything into nested folders that we didn't even notice it anymore. By now we have almost overcome this twisted habit and think and design in "clouds of connected objects". Wikis, for example, are often just collections of pages that cross-reference each other freely, even though many wikis have introduced page nesting in the meantime. Sadly. The WWW is not hierarchical.

The grammars we typically build with Xtext (and in fact most grammars I know) are compositions of hierarchical elements with cross-references. EMF models are hierarchical and, being the underlying model of Xtext, tend to (mis-)lead DSL grammar developers into approaching new domains with a "hierarchical mindset". However, the real world is mostly non-hierarchical.
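To make that distinction concrete, here is a small hypothetical Xtext fragment (same invented entity/feature example as above): the '+=' assignments build up the containment hierarchy, while the square brackets declare a non-containment cross-reference - the part that breaks out of the pure tree:

    // containment: features are physically nested inside their entity
    Entity:
        'entity' name=ID '{'
            features+=Feature*
        '}';

    // cross-reference: the feature's type may point to any entity in the model,
    // turning the tree into a graph of connected objects
    Feature:
        name=ID ':' type=[Entity];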

May I propose another step in the collaboration between developer and domain expert before they devise the actual grammar:

Draw a simple UML class diagram with no operations, or even better: one of Chen's good old entity-relationship diagrams (and I mean an entity-relationship diagram, not just a relational DBMS table diagram). Let's call this the domain-type model. Add cardinalities and constraints. Then make several object diagrams for this type model that represent real-life domain-object constellations our model should be able to capture.

Once the domain expert has gained some confidence in this domain-type model, developer and domain expert design a language that suits the purpose.

So, I argue that you should "play" with a type model (actually with "instances" of this model!) before you play with a grammar.
 
Oliver, I agree with the idea of playing with the entities. The idea of being able to play with these things as a means of feedback for the domain expert is central to what I posted.

However, in my experience, Xtext grammars are a good approach for actually playing with this stuff. I am not opposed to "ER-style" diagrams, but over the last couple of years I have become convinced that a simple textual language is a much more "agile" tool for this kind of playing and exploration - not least because it is trivial to evolve existing "prototype" models as the language changes.
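As a hypothetical illustration of that agility (keyword and concept names invented): suppose the domain expert decides during such a session that "entity" is the wrong word. The grammar rule changes in one place, and the existing prototype models follow with a plain text search-and-replace:

    // before: Entity: 'entity' name=ID '{' features+=Feature* '}';
    // after the domain expert's feedback:
    Entity:
        'businessobject' name=ID '{' features+=Feature* '}';
    // existing prototype models are migrated by replacing the keyword 'entity'
    // with 'businessobject' in the text files - no repository migration or
    // model transformation is needed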

But again: I am not saying there couldn't be other approaches to achieve the same goals (see also my mention of Intentional and MetaEdit+). In the same spirit, I agree that the approach you propose will certainly work, too. It's just that my personal experience is with different tooling.
 
Among our customers, developers sometimes use the term "pair metamodeling" to describe the process and the communication between domain expert and developer. It means that a developer implements the language (metamodel, graphical notation, constraints) while one or more domain experts comment on and test the language. In our experience, such immediate feedback on language ideas, and being able to test them easily, is key to defining good languages: languages that fit their purpose and solve the domain experts' needs.

It is worth emphasizing that in our case, based on the MetaEdit+ tool, the domain expert can use the language AT THE SAME TIME as the language developer modifies it. It is also possible that the developer implements the constraints and generators while the domain expert draws the graphical symbols, i.e. the concrete syntax, of the language. Changes made to the language are even propagated automatically to existing models, and old models always open in the changed language. I can't see how language development could become more agile...
 
I agree. MetaEdit+ (as I suggested in my post) is really good in this respect. So is Xtext, albeit in a different way.

But compare this to UML and profiles, or Eclipse GMF, ... just to name two bad examples :-)
 
Yes, tools seem to make a big difference, and perhaps it is because of using the wrong kind of tools that people believe that developing a language and getting tool support for an in-house DSL is difficult and time-consuming. Obviously it must look that way if we consider how much time and effort the creation of languages like UML has taken! This way of thinking, IMHO, totally ignores the DS part of these languages. Public reports from companies like Panasonic and Polar say that the creation of domain-specific languages, code generators, and tooling takes 1-3 man-weeks. Some other cases are mentioned at http://www.metacase.com/blogs/jpt/blogView?showComments=true&entry=3416224823, showing data from cases I personally know in more detail. So, I agree that tools make a big difference. At least I can't find any other explanation for the differences in the effort put into language development. It is therefore of vital importance for domain experts and language developers to try several tools for their language development project.
 
Interesting article!

I agree with you that we need to involve domain experts in DSL design. For this we can use the concepts of Domain-Driven Design. See also my article on DSL design recommendations based on Domain-Driven Design.

I prefer to use graphical metamodels of the abstract syntax of a DSL when "knowledge crunching" the domain with domain experts.
 
Just wanted to emphasize Oliver's point about letting domain experts play with instances - models rather than metamodels. All domain experts are capable of giving feedback on the language as seen through a model.

Some domain experts can also cope with working on the metamodel level, at least with tools like MetaEdit+ that don't require the metamodeler to program.
 