EclipseCon Sessions Accepted
Together with a number of other folks (Bernd Kolb, Arno Haase, Artem Tikhomirov, and Jean Bezivin) our Long Talk on Model-2-Model Transformations has been accepted. Our tutorial on MDSD from frontend to code has also been accepted. It is a major success for the openArchitectureWare team to be involved in two modeling talks at EclipseCon. Very Cool :-)
Labels: eclipse
M2M, UML and the Defense Industry
Today I had a chance to look inside a large defense contractor. Of course I got involved with that company because of my MDSD experience, so I got to see how they use MDSD to develop naval battle management software for frigates. They are completely committed to UML-based modeling because it is essential that they are "standards-based" with everything they do ("in the worst case we'll go to the OMG and have our approach standardized" - you can only say this sentence if you're really big :-)). They use a very formal development process with various models at various levels of abstraction. Most of the transformations between the models are manual. They also have a number of automated transformations towards the end of the process. Now, what was really interesting is that they keep all their intermediate models (the results of transformations) in the UML tool as well. The tool was grinding heavily under the load. When I asked them what they actually do with these "intermediate" models, their answer was: "Nothing." They don't look at them much and they don't modify them.
So I asked myself: why persist these models at all? It just results in huge amounts of "stuff" in the UML tool that you don't really care about. All in all, this really put me off and confirmed the approach I always use and advocate: intermediate models in a transformation chain are useful so you can test the transformation steps, or as documentation, but you don't really work with them. You should consider them basically a data structure that connects the various transformation stages. If you need to "annotate" these intermediate models to prepare them for the next transformation stage, you should use aspect models and have the transformer read those alongside the original input.
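To make the idea concrete, here is a minimal sketch in Java of a two-stage transformation chain where the intermediate model is just an in-memory data structure (never persisted, never hand-edited), and where a separate aspect model supplies the "annotations" for the next stage. All the names (BusinessModel, ComponentModel, DeploymentAspects, and so on) are invented for illustration; this is not any particular tool's API.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class TransformationChain {

    // Input model: the only model developers actually edit.
    record Entity(String name) {}
    record BusinessModel(List<Entity> entities) {}

    // Intermediate model: a plain data structure connecting the stages.
    record Component(String name, String transport) {}
    record ComponentModel(List<Component> components) {}

    // Aspect model: kept separately, instead of annotating the
    // intermediate model by hand inside a UML tool.
    record DeploymentAspects(Map<String, String> transportByEntity) {}

    // Stage 1: business model -> intermediate component model,
    // weaving in the aspect model as a second input.
    static ComponentModel toComponents(BusinessModel in, DeploymentAspects aspects) {
        List<Component> cs = in.entities().stream()
            .map(e -> new Component(e.name() + "Component",
                    aspects.transportByEntity().getOrDefault(e.name(), "local")))
            .collect(Collectors.toList());
        return new ComponentModel(cs);
    }

    // Stage 2: intermediate model -> code (here just a string).
    static String generate(ComponentModel m) {
        return m.components().stream()
            .map(c -> "class " + c.name() + " { /* transport: " + c.transport() + " */ }")
            .collect(Collectors.joining("\n"));
    }

    public static void main(String[] args) {
        BusinessModel in = new BusinessModel(List.of(new Entity("Track"), new Entity("Sensor")));
        DeploymentAspects aspects = new DeploymentAspects(Map.of("Track", "CORBA"));
        // The intermediate model exists only here: you can unit-test it,
        // but there is nothing to store in the UML tool.
        ComponentModel intermediate = toComponents(in, aspects);
        System.out.println(generate(intermediate));
    }
}
```

The point of the sketch is the shape, not the content: each stage is a testable function, and the aspect model flows in as an explicit second input rather than as manual edits to the intermediate model.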
It was also quite obvious that the UML-only approach (profiles or not) isn't sustainable. UML was useful, in their case, as the modeling language for the models people really worked with, but it was completely unsuitable for the intermediate models, which were very domain- (and technology-) specific. Custom metamodels should be used there. This results in simpler models as well as simpler transformations. And remember: since you don't actually look at or modify these intermediate models, it doesn't matter that they are not "standard" UML. After all, they are (indirect) instances of MOF, which is also a standard....
It was also interesting from another perspective. The last time I was involved with the defense industry was during my internship at EADS in Ulm in 1997. Back then, Ada was still an important language and Java was completely unimportant. Today, Ada is considered legacy, and mission-critical applications (e.g. in the naval battle management domain) are built using Java. It was also interesting to see that they use an iterative development process with iterations of approximately four weeks and an intermediate release every three iterations. It is really good to see that the waterfall (and the V-Model) seems to be giving way to somewhat more agile development.
Embedded vs. Enterprise
Traditionally, the enterprise software development community has considered the embedded folks to be a bit conservative and somewhat boring. "Bit counters" they were often called :-). And indeed, traditionally the embedded field hasn't been very eager to adopt new technologies; language adoption in particular was very slow. Per-unit cost was often more important than initial software development cost, so complex one-off systems had to be built. As a consequence of (often premature) optimization for small devices, there was often no room for "nice design".
However, over the last couple of years, things have been changing dramatically. As a consequence of the fast-growing complexity and size of (distributed) embedded systems, the requirements for embedded software development are changing. The effort spent on developing the system is becoming more important relative to the per-unit cost of the device. Other things push in the same direction: product lines and shorter time to market.
So the embedded software development community is forced to consider more explicitly things such as software architecture, variability management, reuse and similar topics. And since frameworks and "classic" OO technology often aren't a good fit (for reasons of performance and/or resource scarcity in general), and because the complexities involved are getting so big that classes aren't enough of an abstraction anyway, the embedded world is rapidly embracing model-driven development.
Using MDD, embedded developers can build meaningful abstractions to manage the system complexity, handle variabilities on the model level and in the generators and even simulate aspects of the system. And from these models, they can generate their tried and trusted C code ... that runs on their tried and trusted infrastructures (RTOSs).
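As a small illustration of that last point, here is a hedged sketch (in Java, since that is what the generator infrastructure might be written in) of generating plain C from a tiny state-machine model. The model shape, the "blinker" example and the generated C API are all invented for illustration; real embedded MDD tools are of course far richer.

```java
import java.util.List;
import java.util.stream.Collectors;

public class StateMachineGenerator {

    // A deliberately tiny state-machine metamodel (invented for this sketch).
    record Transition(String from, String event, String to) {}
    record StateMachine(String name, List<String> states, List<Transition> transitions) {}

    // Emit a switch-based C implementation: no dynamic memory, no OO
    // runtime, just the "tried and trusted" C the target expects.
    // (Event constants like EVT_TOGGLE are assumed to be defined
    // elsewhere in the generated or hand-written C.)
    static String generate(StateMachine sm) {
        StringBuilder c = new StringBuilder();
        c.append("typedef enum { ");
        c.append(sm.states().stream().map(s -> "STATE_" + s)
                  .collect(Collectors.joining(", ")));
        c.append(" } ").append(sm.name()).append("_state;\n\n");
        c.append(sm.name()).append("_state ").append(sm.name())
         .append("_step(").append(sm.name()).append("_state s, int event) {\n");
        c.append("  switch (s) {\n");
        for (String state : sm.states()) {
            c.append("  case STATE_").append(state).append(":\n");
            for (Transition t : sm.transitions()) {
                if (t.from().equals(state)) {
                    c.append("    if (event == EVT_").append(t.event())
                     .append(") return STATE_").append(t.to()).append(";\n");
                }
            }
            c.append("    break;\n");
        }
        c.append("  }\n  return s;\n}\n");
        return c.toString();
    }

    public static void main(String[] args) {
        StateMachine sm = new StateMachine("blinker",
            List.of("OFF", "ON"),
            List.of(new Transition("OFF", "TOGGLE", "ON"),
                    new Transition("ON", "TOGGLE", "OFF")));
        System.out.println(generate(sm));
    }
}
```

The abstraction lives in the model (states and transitions); the generated artifact is exactly the kind of static, allocation-free C that runs happily on an RTOS.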
Note that MDD has been used in the embedded world for ages (maybe not under that name), as exemplified by tools such as ASCET or Matlab/Simulink. However, these tools mainly addressed the algorithmic aspects of the target system. Using models to describe the system as a whole, including processors, networks, distribution, deployment and quality of service, is a more recent development.
In my talk at OOP 2007, I will elaborate on these topics ... the slides aren't finished yet, so let me know at voelter@acm.org what you want me to talk about in the session :-)
News from the MDSD Book
The German edition of the MDSD book is in for a major overhaul! The second edition will contain more or less all the contents of the English version, plus a lot of new material, among other things some restructuring and a new, updated set of examples based on current tools.
In order to really get some fresh air into the book (and to spread the work :-)), two well-known additional members of the MDSD community will join the team of authors: Arno Haase and Sven Efftinge. Since both of them are involved with oAW, the book will keep (and even extend) its use of oAW in the examples.
So what I'm saying is: you should look forward to this new edition of the book, and you need to buy it even if you already own the first edition and/or the English version :-)