In Part 3, I argued that programming languages need to be flexible, and support powerful abstractions. But there is a concern: Many developers and managers alike fear that more flexible, powerful languages are dangerous. They're right, but there is a solution: testing and transparency.
To continue with the Enterprise Hammer example: One approach to making the Enterprise Hammer safer is to limit its power and generality. If the hammers can only strike with a certain amount of force, then the hammer cannot accidentally break strong structures. Moreover, if the linkage between the hammers and the frame is deliberately complex, nobody will attach other implements that might damage something.
This safety comes at an extremely high price. The problem with reduced power becomes obvious as soon as you need more power. The problem with reduced generality is more subtle, and therefore even more damaging. When solutions cannot be generalized, work must instead be duplicated. Poor generalization is the leading cause of "code bloat". And the damage from code bloat is not linear. Poor generalization makes software geometrically more expensive to build and maintain (think "spaghetti code"). Even worse, tools that inhibit generalization inhibit genius--those cross-disciplinary leaps of imagination that move the industry forward.
The "safe language" argument appeals to fear, while the "flexible language" argument appeals to a sense of opportunity and adventure. Both are powerful motivations, so for a long time this argument has been a stalemate. Happily, that period is coming to an end. Two new factors have come into play: automated testing and transparency. Over the next five years they will tip the balance decisively in favor of more flexible languages.
Automated testing turns software on itself. By using code to verify that other code functions correctly, we can reach a much higher level of assurance than was ever achieved by the stopgap measure of weakening our tools. Instead of building small, soft hammers, and placing them in a low-powered frame, we build exactly the parts we need. We then test each of the parts individually (unit testing), test aggregates of parts working together (functional testing), and test the entire system (acceptance testing). If we decide to modify the frame, engine, or hammers, we can quickly re-execute our tests and verify that the entire system still works as before.
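To make the unit-testing layer concrete, here is a minimal sketch in Python using the standard library's unittest module. The Hammer class and its parameters are invented for illustration; the point is the workflow: build exactly the part you need, then pin down its behavior with tests you can re-run after any change.

```python
import unittest

# Hypothetical "hammer" component -- a stand-in for any part we build.
class Hammer:
    def __init__(self, force):
        self.force = force

    def strike(self, material_strength):
        """Return True if the strike breaks the material."""
        return self.force > material_strength

# Unit tests: exercise the part in isolation, one behavior per test.
class HammerTest(unittest.TestCase):
    def test_breaks_weak_material(self):
        self.assertTrue(Hammer(force=100).strike(material_strength=50))

    def test_spares_strong_material(self):
        self.assertFalse(Hammer(force=100).strike(material_strength=500))

if __name__ == "__main__":
    unittest.main()
```

If we later rewrite Hammer's internals, re-running this suite instantly tells us whether the old behavior still holds. Functional and acceptance tests follow the same pattern, just at the level of assemblies and the whole system.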
Transparency comes from open source. Other things being equal, developers who depend on open source will outperform developers who do not. To return to the Enterprise Hammer: Imagine now that frame and engine are covered with an opaque outer shell, to prevent hammer developers from seeing (or tinkering with) the internals of the Enterprise Hammer. This is the closed source world. If the frame and engine are performing perfectly, then the people building and using hammers may not care much. But when problems or questions arise, the shell needs to be removed. There are, of course, halfway measures such as documentation and technical support. But those are never as good as the code itself.
Are testing and transparency recent discoveries? No. What is new is their widespread adoption and acceptance by developers. Something else is new too: developers today understand the basics of OO. When James Gosling designed Java, the language gave developers plenty of new material to absorb. Most developers were new to OO, and had yet to learn inheritance, polymorphism, etc. Developers now understand OO, and understand automated testing. In other words, developers are ready to learn something new, and they have a great safety net (testing) to use along the way.
For evidence that testing and transparency foster good code, take a look at Spring. Spring has succeeded wildly with only a fraction of the investment (time and $) that has gone into closed-source J2EE stacks. But doesn't Spring's success argue in favor of Java, not more flexible languages? Stay tuned for the next installment.