I am a huge fan of modeling as a way of developing software. Model the problem, whether business process or technology, using hierarchical layers of abstraction, independent of any specific technology implementation; then use automated completeness and consistency checking to expose errors in your understanding of the problem. Fix these in the model, while they are still cheap to fix, and devote resources to solving your actual, high-level business problems rather than to fighting the technology at a low (coding) level. To me, this reflects good IT governance: use the resources given to IT efficiently, without the waste associated with building a system that doesn't do quite what the business needs and then fixing it. Modeling is an enabler for good governance.
The devil, of course, is in the detail of how you turn the model into a working system in the real world. Here you have roughly four options: use the model as requirements documentation and develop the code independently; generate code by transforming a business-friendly, more-or-less proprietary model into standards-compliant code (Java, say); interpret the business-friendly, more-or-less proprietary model at run time; or execute the model, written in a standards-based systems-engineering modeling language such as UML, more or less directly. All these approaches are used; they often overlap; each has advantages and disadvantages - different risks to manage - and there is really no one right choice.
- I don't really like the fully independent, stand-alone requirements-model approach, unless you have a mature systems-engineering culture. Without discipline, at best some of the investment in the model is wasted, as programmers duplicate the analysis; at worst, the built system diverges from the model, which is then not just wasted effort but actively misleading.
- Nevertheless, if you can get this approach to work in your culture, it probably maximises flexibility and minimises lock-in, although it doesn't offer many productivity gains (beyond the not-insignificant gain of only needing to build the code once, from the validated model).
- Generating code from a business-friendly model is very attractive: if implemented properly, you get the productivity gains possible from generating a lot of code from a simple, high-level business description. Moreover, once you have the code, you should be free from lock-in to your model vendor (although if you are used to developing at an abstracted business level, you might find developing and maintaining actual code, if you walk away from your vendor, harder than you thought), and code is generally portable between platforms. There is, however, a need for a mature culture and a modeling tool that delivers a clean architecture and code that you never need to modify except through the model (the place for custom code is before build and deploy).
- If people are allowed to change the code without updating the model, then the model will become misleading and untrusted and you are lost.
- Be wary of "round trip engineering" - where you can change the model and regenerate the code, or change the code and regenerate the model - as a panacea for this. It is too easy to change the code so that the model is changed in some fundamental way that breaks its business validity and by the time you discover this, the code is in production. At best, maintaining at the code level and generating the model is likely to produce a more complex, less comprehensible model.
- This isn't to say that code generation is a bad idea, just that you need to find out about how your tool vendor manages the generated code, so that you can use it to give you independence from the tool vendor and any limitations of the tool, without sacrificing the benefits of a high-productivity, model-driven, approach.
- You also need to check that you can regenerate code quickly, in response to changes to the model; that the quality and performance of the generated code are (at least on average) as good as what you can code manually (not that hard, once you look at all the code that you write, not just the code that finally makes it into production after "bedding in"); that you can generate code for all the environments/languages that you need; and that you generate fully standards-compliant and readable code.
- All of that is achievable these days - but you do have to check - and you do need a mature culture that can work at a highly productive, abstracted model level and only modify the generated code as a last resort (and with good reason).
- An increasingly popular alternative is to work with a business-oriented model that is interpreted at run-time - this is a quick and responsive development process; although you need to check that the performance of the system produced is also OK. Obviously, unless the model is fully open-standards compliant, with published metadata formats and structures, there is the risk of lock-in to your tool vendor (no standard specifies everything and non-standard extensions can be an issue), although this can be managed.
- If this approach results in development at the business level, by business users, with less mismatch between the business and the technology, and is also highly productive, then any potential lock-in risk may be worth it - just check out the vendor's stability and resilience before you commit; and at least think through the business-continuity risks if, say, the vendor is acquired and closed down.
- There is also a risk that the model isn't flexible or powerful enough to do everything you want, as fast as you want, or that it forces you into a particular style of operation (that's true of the code generation approach too, of course); but that is less of a problem these days than it was (technology is more powerful and more flexible, with better interoperability standards).
- In any case, any product worth looking at will let you embed or link to low-level code (preferably something standards-based like Java) if you need to. Just check that doing so won't adversely impact the model-driven culture you have built up; and remember that debugging code can be hard if you are used to thinking at the business process level. Basically, custom code should be implemented as reusable components, and should not be modified to change behaviour outside of the model.
- Model interpretation is probably particularly suited to the "citizen developer", as model and performance validation can be in real time and it encourages working at a pure business level - but make sure that the risk of lock-in to a particular vendor's platform has been explicitly assessed.
- Finally, perhaps more esoteric (or academic), is the possibility of a fully standards-based approach in which everything is written in a formal systems engineering notation such as UML, and then the UML simply becomes another high-level language like, for instance, Smalltalk. This is a good engineering approach, but UML at the level of detail where it specifies a system completely is not particularly easy to read or write - look up Object Constraint Language (see the slides here) if you don't believe this. Still, to some extent, this is an implementation issue; there is no need to write the UML manually, it could be generated from a visual model; and the SysML dialect of UML shows promise.
- There are standards for executable UML - see fUML (Foundational UML) and ALF (the action language for fUML) from the OMG - there is a 2011 tutorial here, which gives a good idea of the approach; see also here.
- There is also an executable model vs. code generation discussion here - you could "execute" your model by generating standards-compliant code, which is automatically kept in sync with the model (with no manual intervention). Then, perhaps, you get the best of both worlds: highly productive, business-level, model-based development; with no vendor lock-in and all the portability implied by having low-level code to compile on different platforms.
- There are certainly organisations in the "systems engineering" domain that are successfully generating code for robust (even safety-critical) systems, from UML and other standards-based modeling tools.
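To make the code-generation approach discussed above concrete, here is a minimal, purely illustrative sketch in Python. The "business model" is just a dictionary (real tools use much richer metamodels and target languages like Java); the point is the workflow the text describes: automated consistency checking first, so errors are caught while they are cheap to fix, then transformation of the validated model into plain, readable source code. All names here are invented for illustration.

```python
# Sketch of model-driven code generation (illustrative only).
# A declarative "business model" is validated, then transformed into
# ordinary source code - the model, not the code, is what you maintain.

VALID_TYPES = {"str", "int", "float", "bool"}

order_model = {  # hypothetical business entity model
    "entity": "Order",
    "fields": [
        {"name": "order_id", "type": "int"},
        {"name": "customer", "type": "str"},
        {"name": "total", "type": "float"},
    ],
}

def check_model(model):
    """Automated consistency checking: fail early, in the model."""
    names = [f["name"] for f in model["fields"]]
    assert len(names) == len(set(names)), "duplicate field names"
    for f in model["fields"]:
        assert f["type"] in VALID_TYPES, f"unknown type: {f['type']}"

def generate_class(model):
    """Transform the validated model into plain, readable code."""
    check_model(model)
    lines = [f"class {model['entity']}:"]
    args = ", ".join(f"{f['name']}: {f['type']}" for f in model["fields"])
    lines.append(f"    def __init__(self, {args}):")
    for f in model["fields"]:
        lines.append(f"        self.{f['name']} = {f['name']}")
    return "\n".join(lines)

print(generate_class(order_model))
```

Note that the generated code is ordinary, standalone source with no dependency on the generator - which is exactly the property to check for when assessing a real tool's lock-in claims.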
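By contrast, the model-interpretation approach can be sketched like this (again purely illustrative, with invented names): no code is generated at all; instead a small generic engine walks a declarative model at run time, so a change to the model takes effect immediately, without a rebuild - which is why this style suits quick, responsive, business-level development, and why engine performance and metadata-format openness are the things to check.

```python
# Sketch of run-time model interpretation (illustrative only): a generic
# engine evaluates a declarative rule model directly - change the model
# and behaviour changes, with no code generation or recompilation.

approval_model = {  # hypothetical business-rule model
    "process": "ExpenseApproval",
    "rules": [
        {"if_field": "amount", "over": 1000, "then": "manager_review"},
        {"if_field": "amount", "over": 0, "then": "auto_approve"},
    ],
}

def interpret(model, request):
    """Return the outcome of the first matching rule.

    The engine, not the model, is the only hand-written code here.
    """
    for rule in model["rules"]:
        if request[rule["if_field"]] > rule["over"]:
            return rule["then"]
    return "reject"

print(interpret(approval_model, {"amount": 2500}))  # manager_review
print(interpret(approval_model, {"amount": 40}))    # auto_approve
```

The lock-in question from the text shows up clearly here: the model's meaning lives in the engine's interpretation of it, so unless the metadata format and its semantics are published, the model is only portable in principle.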
So, as I said, there is no one right approach, just different issues to consider - and don't take a religious approach, but look at how well different tools implement the various approaches - a clean, transparent and maintainable architecture is an important consideration. All of these approaches can work, but all of them can be compromised by a bad implementation (code generation won't free you from lock-in, for example, if the generated code is heavily dependent on proprietary extensions to the published code standard or on proprietary libraries). Despite being CTO of Mendix, Johan den Haan gives a neutral overview of the benefits of both code generation and model interpretation, here - but do read all the discussion comments too.
Fundamentally, if a product makes a claim such as, for example, that vendor lock-in isn't a problem because its modeling is, say, 100% standards-compliant and portable - or because it generates standards-based code - and this claim is important to you, you do have to validate it. If you actually try to walk away from a vendor, you may well discover overlooked dependencies on, say, security, performance or scalability capabilities in systems owned by the vendor. You might even go so far as to run a little proof of concept showing that you can actually move to a different vendor without disrupting the business too much - the same goes, of course, for any claim that is key to the acceptance/purchase of a particular product.
If you want to read further, this seems an interesting starting point, although it is a few years old now. Basically, since then, things have just got better for the model-driven approach - although be aware that many cultures are still very resistant to the idea of building any models (apparently not realising that code is itself just another, rather opaque, model of the business process). Services orientation encourages separation of business logic and technology; standards are better and more complete; and the power and speed of the tools have improved. Possibly the power, productivity and flexibility of alternative development approaches have improved too. Freedom in Development is about having choices. And perhaps the "correct" choice depends on the environment you are modeling for - sometimes, sadly, code itself might be the only model that is acceptable to the culture of the organisation employing you...