Issue: January/February 2000

10 Best Influences on Software Engineering

I wanted to get some perspective on the best and worst influences we’ve seen during software engineering’s first 50 years. After drafting an initial list of influences, I turned to our advisory boards. IEEE Software has active editorial and industrial advisory boards composed of leading experts from around the world. What follows is our discussion of the best influences on software engineering. As you will see, we had some significant differences of opinion!

Any list like this is bound to contain significant subjectivity. As Martyn Thomas, our board member from Praxis Critical Systems, pointed out, this discussion might not tell us much about software engineering’s prospects, but it does say a lot about our attitudes at the beginning of the 21st century.

Reviews and Inspections

One of the great breakthroughs in software engineering was Gerald Weinberg’s concept of “egoless programming”—the idea that no matter how smart a programmer is, reviews will be beneficial. Weinberg’s ideas were formalized by Michael Fagan into a well-defined review technique called Fagan inspections. The data in support of the quality, cost, and schedule impact of inspections is overwhelming. They are an indispensable part of engineering high-quality software. I proposed Fagan inspections as one of the 10 best influences.

Christof Ebert: Inspections are surely a key topic, and with the right instrumentation and training they are one of the most powerful techniques for defect detection. They are both effective and efficient, especially for upfront activities. In addition to large-scale applications, we are applying them to smaller applications, in extreme programming contexts, where daily builds also help.

Terry Bollinger: I do think inspections merit inclusion in this list. They work, they help foster broader understanding and learning, and for the most part they do lead to better code. They can also be abused, for instance in cases where people become indifferent to the skill set of the review team (“we reviewed it, so it must be right”), or when they don’t bother with testing because they are so sure of their inspection process.

Robert Cochran: I would go more basic than this. Reviews of all types are a major positive influence. Yes, Fagan Inspection is one of the most useful members of this class, but I would put the class of inspections/reviews in the list rather than a specific example.

Information Hiding

David Parnas’s 25-year-old concept of information hiding is one of the seminal ideas in software engineering—the idea that good design consists of identifying “design secrets” that a program’s classes, modules, functions, or even variables and named constants should hide from other parts of the program. While other design approaches focus on notations for expressing design ideas, information hiding provides insight into how to come up with the good design ideas in the first place. Information hiding is at the foundation of both structured design and object oriented design. In an age when buzzword methodologies often occupy center stage, information hiding is a technique with real value.
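As a small concrete illustration (my own hypothetical sketch, not an example drawn from Parnas), the class below hides one design secret—how pending work is stored and ordered—behind a two-method interface. Callers cannot tell whether the implementation uses a list, a heap, or a database table, so the representation can change without breaking any calling code.

```python
class TaskQueue:
    """Hides its design secret: how pending tasks are stored and ordered."""

    def __init__(self):
        self._pending = []                    # hidden representation detail

    def add(self, task, priority=0):
        self._pending.append((priority, task))

    def next_task(self):
        """Return the highest-priority (lowest number) task, or None."""
        if not self._pending:
            return None
        self._pending.sort(key=lambda entry: entry[0])
        return self._pending.pop(0)[1]


# Callers program against the interface, never against the hidden list.
queue = TaskQueue()
queue.add("write spec", priority=2)
queue.add("fix build", priority=1)
print(queue.next_task())   # prints "fix build"
```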

Terry Bollinger: Hear, hear! To me this technique has done more for real, bottom-line software quality than almost anything else in software engineering. I find it worrisome that folks sometimes confuse it with “abstraction” and other concepts that don’t enforce the rigor of real information hiding. I find it even more worrisome that some process-oriented methodologies focus so much on bug counting that they are oblivious to why a program with good information hiding is enormously easier to debug—and has fewer bugs to begin with—than one that lacks pervasive use of this concept.

Christof Ebert: Surely agreed, and far too often missing in current designs.

Incremental Development

The software engineering literature of the 1970s was full of horror stories of software meltdowns during the integration phase. Components were brought together for the first time during “system integration.” So many mistaken or misunderstood interface assumptions were exposed at the same time that debugging a nest of intertwined assumptions became all but impossible. Incremental development and integration approaches have virtually eliminated code-level integration problems on modern software projects. Of these incremental approaches, the daily build is the best example of a real-world approach that works. It minimizes integration risk, provides steady evidence of progress to project stakeholders, keeps quality levels high, and helps team morale because everyone can see that the software works.
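To make the mechanics concrete, a daily build is little more than a scheduled script that fetches the latest sources, rebuilds everything from scratch, runs a quick smoke test, and raises an alarm if anything breaks. The sketch below is a hypothetical minimal version; the checkout, build, and smoke-test commands are placeholders for whatever a real project would use.

```python
#!/usr/bin/env python
"""Minimal daily-build sketch. The commands below are placeholders, not a
prescription; substitute your project's real checkout, build, and test steps."""
import subprocess
import sys
from datetime import date

STEPS = [
    ("checkout", ["cvs", "update", "-d"]),            # get the latest sources
    ("build",    ["make", "clean", "all"]),           # full rebuild from scratch
    ("smoke",    ["python", "run_smoke_tests.py"]),   # hypothetical quick sanity suite
]

def daily_build():
    for name, command in STEPS:
        print(f"[{date.today()}] {name}: {' '.join(command)}")
        if subprocess.run(command).returncode != 0:
            # A broken build stops everything: fixing it takes priority
            # over any new check-ins.
            print(f"BUILD BROKEN at step '{name}'")
            return 1
    print("Build is good; today's increment is ready for testers.")
    return 0

if __name__ == "__main__":
    sys.exit(daily_build())
```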

Dave Card: I wouldn’t include “daily builds” as one of the 10 best influences. I see that as a “brute force” method of integration. It’s kind of like having a housing contractor confirm that each window is the right size by holding it up to a hole in the wall, and then chiseling or filling the hole as necessary to make the window fit.

Terry Bollinger: I disagree. Incremental integration, including daily builds, represents a retreat from the earlier, ultimately naive view that everything could be defined with mathematical precision “up front,” and all the goodies would flow from that. Such philosophies overlooked the inherent limitations of the human brain at foreseeing all the implications of a complex set of requirements.

If you really want to see the daily build approach in ferocious action, take a look at the open source community. When things get busy there, releases may even be hours apart. Such fast releases also reduce the human “set-up time” for returning to the context of each problem.

Steve McConnell: At a level of abstraction higher than daily builds, Barry Boehm’s spiral lifecycle model is an incremental approach that applies to the whole project. The model’s focus on regular risk reduction is appealing, but the model is so complicated that it can only be used by experts. I’d give it the prize for most complicated and least understood diagram of the 20th century. Unlike daily builds, it seems like a great incremental development approach that has had little impact in practice.

Terry Bollinger: I disagree. The daily build you touted above really is closer to the spiral model than to “traditional” phased design. And where do you think interface prototyping goes? It surely is not in the original phased development models of the 70s and 80s. Barry Boehm noticed that reality was closer to the daily build model than to the rigidly defined phased models of that era.

Christof Ebert: I don’t agree with Steve’s conclusion. The spiral model diagram was too complex because it tried to put everything into one picture. But the combination of risk management and increments that the spiral model proposes is highly successful when the two are blended together.

Robert L. Glass: In my experience, the spiral model is the approach that most good software people have always used; they just didn’t call it that. Good programmers never did straight waterfall; they may have pretended they did to appease management, but in fact there was always a mix of design-code-test-a-little thrown in along the way.

User Involvement

We’ve seen tremendous developments in the past several years in techniques that bring users more into the software product design process. Techniques such as JAD sessions, user interface prototyping, and use cases engage users with product concepts in ways that paper specifications simply cannot. Requirements problems are usually listed as the #1 cause of software project failure; these techniques go a long way toward eliminating such problems.

Wolfgang Strigel: I think user interface prototyping is especially important since it addresses the fundamental engineering concept of building models. No architect would build a bridge without a model to see if those who pay for it will like it. It’s really simple: if you plan on ignoring the customer, don’t prototype.

Karl Wiegers: Use cases, sensibly applied, provide a powerful mechanism for understanding user requirements and building systems that satisfy them.

Steve Mellor: Whatever techniques are applied, we need to be clear about context. When we talk about use cases, for example, they can be very helpful in a context where there is little communication about the problem. Use cases put the software folks face-to-face with the clients and users, and put them in a position to learn the vocabulary of the problem. But use cases are appalling at helping to create an appropriate set of abstractions. Should you use use cases? Depends on your context.

Terry Bollinger: I think these techniques tie back to the value of incremental development. And this includes all forms of prototyping—communication protocols, subsystem interfaces, and even the method interfaces to an object class. There’s a lot of merit to trying out different approaches to discover what you missed, misunderstood, or got out of proportion.

Automated Revision Control

Automated revision control takes care of mountains of housekeeping details associated with team programming projects. In The Mythical Man-Month (1975), Fred Brooks’ “surgical team” made use of a librarian. Today, that person’s function is handled by software. The efficiencies achieved by today’s programming teams would be inconceivable without automated revision control.

Terry Bollinger: The promise is high, and the fundamental concept is solid. I don’t think our associated methods for organizing large numbers of programmers efficiently are good enough yet to make the value of this technology really shine. For example, the really interesting code still tends to get written by relatively small teams, such as many of the open source efforts, for whom the benefits of automated revision control are much less conspicuous.

Perhaps the most important software engineering innovations of the next century will come not from more intensive methods for making people more organized, but rather from learning to automate the overall software process in ways that we do not currently understand very well, ways that revolve around a computer’s ability to remember, communicate, and organize data far better than people can.

I think this kind of innovation will be closely related to methods such as automated revision control. Our current planning tools, which too often just try to automate paper methods, certainly don’t seem to be there yet.

Internet Development

What we’ve seen with open source development is just the beginning of collaborative efforts made possible via the Internet. The potential this creates for effective, geographically distributed computing is truly mind-boggling.

Terry Bollinger: The Internet allows interesting sorts of interactions to happen, and not just with source code. I think this one is important simply because it isn’t very easy to predict—e.g., it may lead to approaches that are neither proprietary nor open source, but which are effective in ways we don’t fully foresee or understand yet.

Wolfgang Strigel: Yes, although it stands on the foundation of best practices in software design and development of the 20th century, it brings new aspects which will change the face of software development as we know it.

Programming Languages Hall of Fame: Fortran, Cobol, Turbo Pascal, Visual Basic

A few specific technologies have had significant influence on software development in the past 30 years. As the first widely used third-generation language, Fortran was influential, at least symbolically. Cobol was arguably at least as influential in practice. Originally developed under U.S. Department of Defense sponsorship for business use, Cobol has military origins that most practitioners have long forgotten. Cobol has recently come under attack as being responsible for Y2K problems, but let’s get serious. Y2K is only a Cobol problem because so much code has been written in Cobol. Does anyone really think programmers would have used 4-digit years instead of 2-digit years if they had been programming in Fortran instead of Cobol?

Christof Ebert: The real added value of Cobol was to make large IT systems feasible and maintainable, and many of those are still running. The “problems” with Cobol weren’t really about Cobol per se. The problems were with the programmers who happened to be using Cobol. Today, many of these same programmers are using or abusing Java or C++, and the legacy-problems discussion will surely come back someday with regard to those languages.

(*Editor’s Note: IEEE Software’s March/April 2000 issue will focus on recent developments in Cobol and future directions.)

Turbo Pascal, the first integrated editor, compiler, and debugger, forever changed computer programming. Prior to Turbo Pascal, compilation was done on disk, separate from editing. To check his work, a programmer had to exit the editing environment, compile the program, and check the results. Turbo Pascal put the editor, compiler, and debugger all in memory at the same time. By making compile-link-execute cycles almost instantaneous, it shortened the feedback loop between writing code and executing it. Programmers became able to experiment in code more effectively than they could in older, more batch-oriented environments. Quick turnaround cycles paved the way for effective code-focused development approaches, such as those in use at Microsoft and touted by Extreme Programming advocates.

Terry Bollinger: I think the real message here is not “Turbo Pascal” per se, but rather the inception of truly integrated programming environments. With that broadening (integrated environments, with Turbo Pascal as the first commercial example), I would agree that this merits inclusion in the top ten.

Christof Ebert: Aside from better coding, such programming environments also made feasible the incremental approaches that we discussed earlier.

Academics and researchers talked about components and reuse for decades, and nothing happened. Within 18 months of Visual Basic’s release, a thriving pre-built components market had sprung from nothing. The direct-manipulation, drag-and-drop, point-and-click programming interface was a revolutionary advance.

Christof Ebert: Even more relevant is the sheer speed of VB’s market penetration. It is by far one of the fastest-growing “languages” and still going strong. Only Java has recently surpassed it in speed of growth.

Terry Bollinger: Visual Basic provided the benefits promised (but for the most part never really delivered) by object-oriented programming, mainly by keeping the “visual” part of the object paradigm and largely dispensing with the complex conversion of objects into linguistic constructions. Visual Basic is limited, but within those limits it has had a profound impact on software development and has made meaningful development possible for a much larger group of people than would have been possible without it.

Kudos to Microsoft for this one—and that’s coming from a guy who co-edited the Linux issue!

Capability Maturity Model for Software (SW-CMM)

The Software Engineering Institute’s SW-CMM is one of the few branded methodologies that have had any effect on typical software organizations. More than 1000 organizations and 5000 projects have undergone SW-CMM assessment, and dozens of organizations have produced mountains of compelling data on the effectiveness of process improvement programs based on the SW-CMM model.

Dave Card: I would consider the software CMM to be both a best influence and a dead end. Steve has explained the case for a best influence. However, I have also seen a lot of gaming and misuse of the CMM that leads to wasted and counterproductive activity. Even worse, it has spawned a growth industry in CMMs for all kinds of things—apparently based on the premise that if your model has five levels it must be right, regardless of whether you have any real knowledge of the subject matter. It reminds me of the novel, Hitch-Hiker’s Guide to the Galaxy, where the answer was 42—except that now it’s 5.

One of the unanticipated effects of the CMM has been to make the “effective” body of software engineering knowledge much shallower. My expectation was that an organization that was found to be weak in risk management, for example, would read one of the many good books on risk management or get help from an expert in the field. Instead, what many (if not most) organizations do is take the four pages in the CMM that describe risk management and repackage them as 4-6 pages of their local process documentation.

Overall, I believe the CMM has helped a lot. However, I think it does have some harmful side effects that can be mitigated if the community is willing to recognize them. Moreover, there are limits to how far it can take us into the 21st century.

Nancy Mead: I don’t think the CMM per se is what is significant—it’s the recognition of a need for standardized software processes, whatever they may be.

Robert Cochran: Standardization is key. Another example is recent efforts to put a good structure on RAD using the DSDM approach. DSDM is not well known in the US, but interest is rapidly growing on this side of the Atlantic.

And don’t forget the many thousands who have used ISO 9000 as their SPI framework. I would say that SPI using any of the recognized frameworks (or indeed a mixture of them) is where benefit is gained. They give a structure and coherence to the SPI activity.

Terry Bollinger: I do think that some concept of “process improvement” probably belongs in this list, including both ISO 9000-3 and maybe the first three levels of the CMM. But I disagree that the CMM taken as a whole has been one of the most positive influences of the twentieth century.

Wolfgang Strigel: This one can really stir the emotions. Despite its simplicity, the main problem with the CMM is its rigid and ignorant interpretation. The concept itself is sort of a truism. I believe the biggest accomplishment of the CMM is the Software Engineering Institute’s success in marketing the concept. It certainly has raised awareness within thousands of companies about “good software engineering practices.” Universities were woefully slow in recognizing software engineering as a topic, and the SEI jumped into the fray, educating not only new grads but old professionals as well. Its problems are aggravated by people reading too much into it. The grading is questionable and not the most relevant part of the CMM. I disagree that ISO 9000-3 would merit a similar place in the hall of fame. After all, with ISO 9000-3 you can diligently do all the wrong things, and you will be compliant as long as you do them diligently.

It may be time to throw away the old CMM model and come up with an update that takes into account the experiences from the last 10 years. I don’t think we need a new model or more models; just taking the cobwebs out of the current one will carry us for the next 10 years.

Christof Ebert: Despite the argument about ISO 9000, the CMM is definitely the standard that set the pace. At present, it is the framework that allows benchmarking and that contributes most to solving the software engineering crisis—just by defining standard terminology. This is my personal favorite “best influence.”

Object Oriented Programming

Object oriented programming offered great improvements in “natural” design and programming. After the initial hype faded, practitioners were sometimes left with programming technologies that increased complexity, provided only marginal productivity gains, produced unmaintainable code, and could only be used by experts. In the final analysis, the real benefit of object oriented programming is probably not objects, per se, but the ability to aggregate programming concepts into larger chunks than subroutines or functions.

Terry Bollinger: Whatever real benefits people were getting from OO, they were mostly coming from the use of traditional Parnas-style information hiding, and not from all the other accoutrements of OO.

OO is a badly mixed metaphor. It tries to play on the visual abilities of the human brain to organize objects in a space, but then almost immediately converts that over into an awful set of syntax constructions that don’t map well at all into the linguistic parts of our brains. The result, surprise surprise, is an ugly mix that is usually neither particularly intuitive nor as powerful as it was originally touted to be. (Has anyone done any really deep inheritance hierarchies lately? And got them to actually work without ending up putting in more effort than it would have taken to simply do the whole thing over again?)

Christof Ebert: Agreed, because people haven’t taken the time to understand object orientation completely before they design with it. In a short time, it has done more harm to systems than so-called “bad Cobol programming.”

Takaya Ishida: I think it is questionable whether Cobol and Object Oriented Programming should be on the 10 best list or the 10 worst list. It seems to me that Cobol and other popular high-level procedural languages should be blamed for most of the current problems in programming. It is so easy to program in such languages that even novice programmers can write large programs. But the result is often not well structured. With object-oriented languages such as Smalltalk, programmers take pains to learn how to program, and the result is well-structured programs. Having programming approaches that can be used only by experts may be a good idea. Our past attitude of “easy going” programming is a major cause of the present software crisis, including the Y2K issue.

Terry Bollinger: I really like Smalltalk-style OO, and I’m optimistic about Java. The real killer was C++, which mixed metaphors in a truly horrible fashion and positively encouraged old C programmers to create classes with a few hundred “methods” in them, to make a system “object oriented.” Argh! So maybe the culprit here is more C++, rather than OO per se, and there’s still hope for object oriented programming.

Component Based Development

Component-based development has held out much promise, but aside from a few limited successes, it seems to be shaping up to be another idea that works better in the laboratory than in the real world. Component-version incompatibilities have given rise to massive setup-program headaches, unpredictable interactions among programs, de-installation problems, and a need for utilities that restore all the components on a person’s computer to the “last known good state.” This might be one of the 10 best for the twenty-first century, but probably not for the twentieth.

Robert L. Glass: The reuse (component-based) movement might have trouble getting off the ground, but some related developments have shown great promise. Design patterns are a powerful medium for capturing, expressing, and packaging design concepts/artifacts—”design components” if you will. The patterns movement looks to be the successful end-run around the reuse movement. Similarly, we’ve been very successful in domain generalization. Compiler building was perhaps the first such success, many decades ago.

Steve McConnell: Yes, 30 years ago a person could get a Ph.D. just for writing a compiler, but today domain generalization in compiler building has been such a success that compiler building is usually taken for granted.

Robert L. Glass: There have been too few successes other than compiler building. I would assert that the Enterprise Resource Planning (ERP) system, such as SAP’s, is the most recent and most successful example of building general-purpose software to handle a whole domain—in this case, back-office business systems.

Terry Bollinger: The components area has indeed been a major headache, but it’s sort of an “ongoing” problem, not one that has been fully recognized as a dead end. Visual Basic is an example of how component-based design can work if the environment is sufficiently validated ahead of time.

The fully generic “anything from anybody” model is much more problematic, and is almost certainly beyond the real state of the art—versus the “marketing state of the art”—which generally is a few years to a few centuries more advanced than the real state of the art.

Christof Ebert: Work on components helped point the way toward frameworks, reuse, and pattern languages. So it is a useful intermediate step. Thinking in components is still a good design principle and entirely in line with the point made earlier about information hiding.

Metrics and Measurement

Metrics and measurement have the potential to revolutionize software engineering. In the few instances in which they have been used effectively (NASA’s Software Engineering Lab and a few other organizations), the insights that well-defined numbers can provide have been amazingly useful. Powerful as they can be, software process measurements aren’t the end; they’re the means to the end. The metrics community seems to forget this lesson again every year.
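As a deliberately oversimplified illustration of measurement as a means rather than an end, the snippet below turns raw (and here entirely invented) inspection and test counts into two numbers a project can actually act on: inspection yield and defect density.

```python
# Illustrative only: the counts are invented; a real program would pull them
# from inspection records and the defect-tracking system.
inspection_defects = 46     # defects found during reviews and inspections
test_defects       = 18     # defects that leaked through and were found in test
size_kloc          = 12.5   # thousands of lines of new and changed code

total_defects    = inspection_defects + test_defects
inspection_yield = inspection_defects / total_defects   # fraction caught early
defect_density   = total_defects / size_kloc            # defects per KLOC

print(f"Inspection yield: {inspection_yield:.0%}")          # 72%
print(f"Defect density:   {defect_density:.1f} per KLOC")   # 5.1

# The value is in the question the numbers prompt ("why did 18 defects leak
# past inspection?"), not in the numbers themselves.
```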

Dave Card: I would include “metrics” as a dead end, but I also would include “measurement” in the list of top prospects for the twenty-first century. The word “metrics” doesn’t appear in most dictionaries as a noun—and that is a good indication of how poorly measurement is understood and why it is consequently misused in our discipline. The ability to express things meaningfully in numbers is an essential component of any engineering discipline, so I don’t see how we can have “software engineering” in the 21st century without “measurement.”

Deependra Moitra: The problems are not with metrics but with the metrics community. We cannot blame an idea, concept, or approach when people have messed it up. I can’t imagine performing my job without metrics, especially in a geographically distributed development mode.

Terry Bollinger: Metrics are a problem. Our foundations are off badly in this area, because we keep trying to apply variations of the metrics that worked well for manufacturing to the highly creative design problems of software. It’s a lot like trying to teach people how to write like Shakespeare by constantly checking their words for spelling errors.

Christof Ebert: Entirely true, but still, metrics as such are of great value. We should see metrics as a tool (or means). Metrics usage has matured dramatically over the past ten years. A metrics discipline in software engineering is as necessary as theoretical foundations are in other engineering disciplines to avoid software development approaches that are built on nothing more than gut feeling.

Wolfgang Strigel: Regardless of terminology (metrics or measurements), I believe that this should become another top contender for the next century. How can we claim to be professionals if nobody can measure some basic facts about the development activity? How can we manage activities that cannot be measured? Measurement failed in this century not because it is a bad concept, but because our profession is still in its infancy. There is no other engineering discipline that would not measure product characteristics. And there is no other manufacturing activity that would not try to measure output over input (= productivity). It is not the concept of measurement that is at fault. The problem is consumers who have become too tolerant of faulty software and a red-hot marketplace in which inefficiencies do not cause the failure of companies. If there were an oversupply of software developers, we would see a lot more measurement applied to optimize the process.

Summary

Maarten Boasson: The greatest problem in the software engineering community is the tendency to think that a new idea will solve all problems. This tendency is so strong that previously solved problems are forgotten as soon as a new idea gains some support, and consequently problems are re-solved in the new style. This is a perfect recipe for preventing progress!

This behavior suggests (to me, at least) that the field of software engineering, and even computer science, is very immature. This is not anybody’s fault, but the simple result of the very short time that computing has been around. Compared to other fields, we really are hardly out of the nursing stage. What is collectively our fault, however, is the religious fanaticism with which the current belief (or one’s personal belief) is protected against “attacks” from people doing things differently. Such behavior probably also existed during the early years of other disciplines, but why don’t we learn from their example?

The problems in developing today’s and tomorrow’s systems are overwhelming; they require many different types of problems to be solved. No other scientific or engineering discipline relies on a single technique for addressing problems—why are we, so-called professional engineers (and computer scientists), stupid enough to think that our field is fundamentally different in that respect?

So, what do we need to do? First, industrial management has to understand that SE is not an engineering discipline like so many others (yet) and that standards, methods, and tools are all likely to be wrong (once we really understand what developing software means). This implies that software development is still much more of an art than management believes it to be and that, therefore, only the best and the brightest should be allowed to develop systems of more than trivial complexity. A side effect of such an approach would be a significant reduction in the effort needed for their development.

Second, real experiments with novel and alternative approaches to software development should be carried out, possibly as a collaboration between industry and academia. These should not be simplified, typical academic problems, but real-life systems. Such experiments will have to be performed in parallel with development of the systems in the traditional way; indeed, no manager will accept the risk involved in this type of experiment (although my guess is that the risk is generally smaller than with the traditional approach). Results—whichever way—should be analyzed and widely published.

Third, the metric for academic success needs to be reconsidered. The “publish or perish” paradigm guarantees that large amounts of garbage will be published. Also, the really interesting results in multidisciplinary work are practically unpublishable, due to the strong segmentation of the field, with super-experts in epsilon-width fields.

Fourth, we (IEEE Software, for example) should continuously stress that the problems have not yet been solved, that innovation is essential, and that all claims to having invented THE solution are simply wrong.

Sidebar: 10 Best Influences on Software Engineering in the 20th Century

  1. Reviews and Inspections
  2. Information Hiding
  3. Incremental Development
  4. User Involvement
  5. Automated Revision Control
  6. Programming Languages Hall of Fame: Fortran, Cobol, Turbo Pascal, Visual Basic
  7. Software Capability Maturity Model (SW-CMM)
  8. Object Oriented Programming
  9. Component-Based Development
  10. Metrics and Measurement

Editor: Steve McConnell, Construx Software