A Mission and An Adventure: The Cultural Differences Between Micro and Mainframe Programmers
Software Practitioner
Issue: November/December 1992
The number of people writing programs for microcomputers has exploded in the last five years. The U.S. Department of Labor reports fewer than 1.5 million persons working in various programming jobs in all programming environments (DOL 1990). But figures from the microcomputer industry suggest that many more people than that are programming microcomputers–Borland International, for instance, has sold more than 2 million copies of Turbo Pascal for the IBM PC (Kahn 1989), and that’s only one language product from one company.
In spite of the amount of activity in microcomputers, microcomputer software development has largely been ignored by the software-engineering community. A quick glance through recent articles in IEEE Software and Communications of the ACM shows articles on embedded systems, real-time systems, parallel processing, Ada, LISP, and Unix, but few articles on C, Basic, Microsoft Windows, the Apple Macintosh, or other topics of special interest to programmers who work on micros.
Authors who have been writing about software engineering for 10 or 20 years have so far ignored the unique aspects of microcomputer development, variously believing that microcomputer development is not of sufficient value to warrant learning about, is inferior to mainframe development, or is merely immature and will gradually grow into a set of mainframe-like practices. Each of these perceptions is at least partially mistaken. Micros aren’t just another, weaker platform. They represent a different programming culture altogether, one that’s weaker in some ways and stronger in others than the one it’s gradually replacing. Anyone who wants to fully understand current computing practices needs to come to an understanding of microcomputer software-development approaches.
Characteristics of a Typical Microcomputer Development Effort
The stereotypical development process on a micro consists of a lone developer hacking at his kitchen table into the wee hours of the morning. His only companions are a few cans of caffeine-rich cola and an endless supply of golden sponge cake.
In the stereotype, the programmer has a few design notes scribbled on a paper napkin, but he carries most of the design in his head. He hasn’t done any formal studies to determine whether his program will be marketable; his inspiration for the program’s functionality and user interface comes from private programming muses. When the product is complete, he does not go through any formal quality-assurance activity, but releases it immediately to the software-buying public. The result is a hastily conceived, haphazardly written, error-prone product.
This stereotype might have been accurate 10 years ago and somewhat accurate 5 years ago, but microcomputer environments have become sufficiently complex that the lone programmer hacking out a product by himself is now virtually obsolete. The more typical development project involves a team of programmers, and that is the subject of this discussion. Let’s examine the real differences between microcomputer and mainframe development approaches.
Differences in Development Approaches
The greatest differences between mainframe and microcomputer development approaches are as follows:
- The microcomputer programmer has a machine entirely dedicated to his use.
- The microcomputer programmer would rather make mistakes and correct them than go to a great effort to avoid making mistakes in the first place.
- Decision-making power is pushed down to the level of the individual programmer.
- Correctness is not the most important product characteristic; timeliness is.
- The typical microcomputer programmer is different from his mainframe counterpart.
The following subsections discuss each of these differences in more detail. My viewpoint stems from spending about 80 percent of my time during the last ten years working in microcomputer environments and about 20 percent working in mainframe environments.
Dedicated Machine
Having a machine dedicated to a single programmer changes several aspects of how a programmer works.
When a machine is dedicated to an individual programmer, the level of programmer support can be greatly improved. The environment can include a compiler, dedicated debuggers, automated test equipment, and comprehensive online help for the programming language and other tools. Microcomputer development environments are highly interactive; for example, in even the worst environments you can edit, compile, locate errors, and correct errors in a program without leaving the program editor. This level of support is far less common on mainframe computers.
Microcomputer compilers tend to have much faster turnaround times. In a mainframe environment, a 15-minute turnaround is relatively fast, so it is more time-effective for a programmer to check for syntax errors or minor logical errors by hand than to wait for the compiler to do so. On a microcomputer with nearly instantaneous turnaround, it is often more time-effective to have the compiler check a program for syntax errors, undeclared variables, and the ordering of arguments in subroutine calls than it is to check the same information manually.
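As a minimal sketch of that last point, the fragment below uses a hypothetical print_score function. Because the function is declared with a prototype, a C compiler can verify the number, types, and order of its arguments at every call, the kind of clerical checking that costs almost nothing when compile turnaround is measured in seconds.

    /* prototype.c -- minimal sketch; print_score is a made-up example.
       The prototype lets the compiler check every call against the
       declared parameter list instead of leaving that to manual review. */
    #include <stdio.h>

    /* prototype: name first, then score */
    void print_score(const char *name, int score);

    int main(void)
    {
        print_score("Ann", 90);     /* checked against the prototype */
        /* print_score(90, "Ann"); would be rejected at compile time
           because the arguments are in the wrong order for the types */
        return 0;
    }

    void print_score(const char *name, int score)
    {
        printf("%s: %d\n", name, score);
    }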
Tools on micros tend to be used differently. Debuggers are used to find errors just as they are in mainframe environments. But because they are highly interactive and highly symbolic, they are also commonly used as automated code-inspection tools. After a developer compiles his code, he steps through the code line by line in the debugger to verify that it does what he designed it to do. If you think your peers can be objective about errors during a code inspection, try your computer! At the extreme of the mainframe attitude, Harlan Mills says that “an interactive debugger is an outstanding example of what is not needed” (Mills 1986). He does not seem to be aware of how effective uses like this can be. I do not mean to single out Mills; his sentiments have been echoed by many of the most eminent people in software engineering, and I am generally one of his biggest fans. But it is not at all clear to me that the software engineers who criticize certain aspects of microcomputer development practices are fully aware of what they are criticizing.
Another effect of the increased amount of hardware support given a microcomputer programmer is that development activities are not as neatly divided as they are on many mainframe projects. For example, mainframe development projects often create a distinction between “coding” and “unit testing.” The coding milestone is considered to be complete when the programmer has finished typing the code into the computer–before it is even compiled. A programmer at that point hands the code to a tester for independent compilation, testing, and integration. In most micro shops, the idea of turning code over for integration without first compiling, exercising, and thoroughly testing it would get an extremely frosty reception. Each programmer has a broader set of responsibilities for code that he creates.
An overarching difference between the two environments is the extent to which they support learning about programming. The simple fact that a programmer can compile a program, run it, and see the results within seconds rather than hours makes a great difference both in how and in how quickly a programmer learns about programming. People learn through feedback, and micros improve the quality and quantity of the feedback that programmers receive about their programs.
Difference Between Avoiding and Correcting Mistakes
Some of the differences between the two cultures arise because one culture advocates avoiding mistakes and the other advocates correcting them. Years of experience have taught mainframe programmers that it is better to invest time to prevent mistakes than to waste time fixing them later. Harlan Mills argues that rigorous development approaches are required because otherwise programmers can spend 50 percent or more of their time debugging (Mills 1983). Mills makes this clear statement of the traditional objection to the fix-it-later approach:
Debugging not only shows a lack of concentration and design integrity, but is also a big drag on productivity. If somebody writes a hundred lines of code in one day and then takes two weeks to get it debugged, they have really only written ten lines a day (Mills 1983).
The typical microcomputer viewpoint is different. Experience on micros has taught many programmers that it is better to take time to fix mistakes than it is to waste time in up-front overhead that often does not pay off in the long run. The desire to avoid time-consuming debugging on mainframes arises from valid reasoning in that environment, but it depends on the existence of bugs that take days to find. With today’s highly interactive debuggers, it is a rare bug that eludes an experienced programmer for more than an hour. The notion that the penalty for poor work is spending 50 percent of a project debugging rarely applies to micros. When you realize that the penalty is closer to 10 percent, the scales tip from getting it right the first time toward getting it done with minimum fixed overhead and a modest amount of debugging.
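Put in arithmetic terms, the difference between the two penalties looks like this. The 50-LOC-per-day raw rate below is a round number chosen for illustration, not a measurement, and the program simply restates Mills’s example and the 50-percent-versus-10-percent figures from the text:

    /* debugcost.c -- illustrative arithmetic only; the raw rate and the
       debugging penalties are the round numbers used in the text, not
       data from any project. */
    #include <stdio.h>

    /* if debug_fraction of total time goes to debugging, only
       (1 - debug_fraction) of each day produces new code */
    static double effective_rate(double raw_loc_per_day, double debug_fraction)
    {
        return raw_loc_per_day * (1.0 - debug_fraction);
    }

    int main(void)
    {
        /* Mills's example: 100 LOC coded in 1 day, then about 10 working
           days (two weeks) of debugging -- roughly 9 LOC per day overall,
           which Mills rounds to ten */
        printf("Mills's example: %.0f LOC/day\n", 100.0 / (1.0 + 10.0));

        printf("50%% debugging penalty: %.0f LOC/day\n", effective_rate(50.0, 0.5));
        printf("10%% debugging penalty: %.0f LOC/day\n", effective_rate(50.0, 0.1));
        return 0;
    }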
None of this implies that the upstream activities of requirements analysis and architectural design are less important on micros than on mainframes. It does imply that the payoff from low-level quality-assurance activities such as detailed design reviews and code reviews is not as big in the microcomputer environment as in the mainframe environment.
Individual Empowerment
Microcomputers are more distributed than mainframes, and microcomputer shops are more likely to distribute decision-making power too. They push it down to the people who have to live with and implement the decisions. (The tradition of comparing computer-system organization to management organization goes back a few years–at least to Yourdon and Constantine’s Structured Design (1979). It’s interesting to observe that object-oriented programming–in which control is relatively decentralized–has come of age at the same time as decentralized computers and decentralized human management structures.)
Perhaps the best illustration of the difference between micro and mainframe development approaches–from the microcomputer perspective–comes from the relationship between Microsoft and IBM. IBM represents the epitome of the mainframe approach, Microsoft the micro approach. The two companies had a significant clash of corporate personalities in their joint development of OS/2. After the companies parted ways, this story circulated at Microsoft:
Microsoft and IBM were in a boat race with nine people in each boat. Microsoft had eight people rowing and one person steering. IBM had one person rowing and eight people steering. After losing the race by a wide margin, IBM commissioned a three-month, two million dollar study to determine how they could win the next race. After extending the study to six months and five million dollars, they finally came to a conclusion: the one person doing the rowing needs to row harder.
Although IBM programmers probably would not agree with the way this story characterizes their development efforts, the story gives insight into how microcomputer programmers perceive mainframe development practices. My own guess is that many mainframe software engineers would benefit from asking themselves periodically whether the procedures-and-standards medicine they use might be worse than the disease it is intended to cure.
The difference in centralization applies also to the software’s customers. Many mainframe programs are written by an MIS department for an internal customer. The MIS department tries to retain as much control of the system as it can: programming services, source code, user data, and the hardware itself. The MIS department is a sole-source provider for a single captive audience. Microcomputer programs tend to be written for external customers. Rather than retaining complete control of a system, the microcomputer shop forgoes control of programming services, user data, and the hardware itself. If a customer doesn’t like a microcomputer product, it buys a competing product instead. The result is that microcomputer developers are forced to be closer to their customers than mainframe developers are, and being closer to the customer means that development takes on a different set of priorities than is often assumed in a mainframe environment.
Correctness and Timeliness
Many PC programs are not life-critical, and many users would rather have the software with bugs–soon–than not have the software at all. The primary development problem thus shifts from “How do we develop highly reliable software in a reasonable amount of time?” to “How do we quickly develop software that is reasonably reliable?”
How good are mainframe practices at answering the second question? We can make a rough assessment. A reasonable lower limit on the size of a small software project is probably about 16,000 lines of code (LOC). According to Barry Boehm’s COCOMO model, a nominal 16,000-LOC organic-mode project could be produced at about 350 LOC per programmer-month (Boehm 1981). Capers Jones reported in 1977 that for projects of at least 16,000 lines of code, productivity ranges upward from 125 LOC per programmer-month (Jones 1977). The best mainframe productivity figure I have seen is for a cleanroom project that averaged 740 LOC per month with 3.3 defects per KLOC during certification testing (Linger and Mills 1988).
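To see where a figure like 350 LOC per programmer-month comes from, the sketch below applies Boehm’s basic COCOMO effort equation for an organic-mode project, Effort = 2.4 x KLOC^1.05 person-months, to a 16-KLOC project. Treating every cost driver as nominal is a simplifying assumption here; the point is only the order of magnitude.

    /* cocomo.c -- basic COCOMO, organic mode, all cost drivers nominal
       (a simplification): Effort = 2.4 * KLOC^1.05 person-months (Boehm 1981) */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double kloc = 16.0;                              /* project size in KLOC */
        double effort = 2.4 * pow(kloc, 1.05);           /* about 44 person-months */
        double productivity = (kloc * 1000.0) / effort;  /* LOC per programmer-month */

        printf("Effort:       %.0f person-months\n", effort);
        printf("Productivity: %.0f LOC per programmer-month\n", productivity);
        return 0;   /* roughly 360 LOC per programmer-month, in the
                       neighborhood of the 350 figure cited above */
    }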
Successful, timely product releases in the microcomputer world need productivity not on the order of 125 to 740 LOC per month, however, but on the order of 1000 to 2000 LOC per month. The traditional mainframe approach is no answer to the question of rapid development on micros. Productivity practices that produce 1500 lines of code per month are a competitive necessity. If that level of productivity also implies 10 to 20 errors per KLOC during system testing, so be it. In the microcomputer world, that is a small price to pay to meet the more important goal of providing customers with the functionality they need at the time they need it.
Typical Programmers
The typical microcomputer programmer has a different profile than his mainframe counterpart. Statistically, he’s younger, has a different marital status, and has fewer years of experience. At Microsoft Corporation, for example, about half the programmers are between the ages of 20 and 29, single, and in a full-time work environment for the first time. Each of these factors makes a difference between micro and mainframe programmers.
The gap in age alone can have a significant effect. Frank O’Grady wrote a sensitive article called “A Rude Awakening” in which he identified the difference in age as one of the three biggest barriers to changing from a mainframe to a micro job (O’Grady 1990). He says,
This experience is humiliating enough when you are 22, being intimidated by older hot-shot programmers. “What do you mean, you still don’t get it? How many times do I have to explain it to you?” It is quite a different matter when you are 42, and you are being shamed and humiliated by 25-year-olds.
The differences in age and marital status are linked to the high entrepreneurial spirit of many microcomputer shops. O’Grady points out that the first thing he noticed after switching to a microcomputer shop was that nobody ever went home. That is easier to do if you are 25 and single than if you are 45 and married with children. Software development for many microcomputer programmers is a mission and an adventure. For mainframe programmers it is rarely more than a job, even if it is a good job.
The stereotype of microcomputer programmers as hackers also contains more than a grain of truth. Because the employees are younger and entrepreneurial spirit is high, programmers are less interested in formal procedures and more interested in results. That translates into riskier, more aggressive development practices. Because the CEOs of companies such as Microsoft, Borland, Aldus, and (formerly) Ashton-Tate have been programmers themselves, they better understand the thrill of seat-of-the-pants programming; thus hacking has support at the highest levels of the microcomputer industry.
Lessons Learned
Many of the differences between the PC and mainframe development styles are incidental to the two platforms rather than fundamental to either of them. Because the differences are incidental, programmers in each environment have opportunities to learn from programmers in the other.
The main fault of programmers working on micros is that they are repeating all the mistakes of their mainframe predecessors. Microcomputer programmers are often unaware of practices that have been used successfully in mainframe environments for 20 years. In 1988 I attended an informal discussion that included programmers working on the third release of Aldus PageMaker. For the third release, they had hired a programming-practices consultant who recommended an approach that was novel to them: try to define their requirements before coding and testing. The programmers were very excited about this new idea.
Clearly, programmers who do not know about requirements definition have something to learn about software development. But such a lack of knowledge is hardly unusual in the microcomputer world, and many a large microcomputer project has been conducted without the benefits of source-code control, a good design process, a quality-assurance strategy, or even a formal schedule. But before you dismiss Aldus’s more casual approach, realize that it did get them to the third release of a very successful product, and that is better than most companies do. The rest will come with time, as needed, the same way it did in the mainframe world.
Mainframe programmers can also learn a few things about programming practices. They can learn to be more critical of bureaucratic overhead–sometimes deliberate, formal procedures are not the best way to meet a project’s objectives. They can learn about new programming techniques–using debuggers and other tools effectively. They can insist that their hardware and software vendors provide development environments that are as powerful as those available on microcomputers.
Most of all, mainframe programmers can reexamine the heavy crust of assumptions that has become barnacled onto their habits and practices during the last 20 years. Many of the assumptions are still legitimate, but many have been invalidated by changing technology and improved development techniques. Perhaps the success of unorthodox microcomputer-programming practices will be the spur that prods such a reexamination.
In the final analysis, it does not matter who is ahead and who is behind at present. The extent to which programmers from both environments learn from each other will determine who moves ahead with the state of the art in the future and who is relegated to the same scrap heap as flow-chart templates, punch cards, and coding pads.
References
Boehm 1981. Boehm, Barry W. Software Engineering Economics. Englewood Cliffs, New Jersey: Prentice-Hall, Inc., 1981.
DOL 1990. U. S. Department of Labor. “The 1990-91 Job Outlook in Brief,” Occupational Outlook Quarterly, Spring 1990, Washington, D.C.: U.S. Government Printing Office.
Jones 1977. Jones, T. Capers. “Program Quality and Programmer Productivity,” IBM Technical Report TR 02.764, January 1977, pp. 42-78.
Kahn 1989. Kahn, Philippe. “Software Craftsmanship into the ’90s,” Programmer’s Update, vol. 7, no. 4A (July 1989), pp. 39-44.
Linger and Mills 1988. Linger, Richard C. and Harlan D. Mills. “A Case Study in Cleanroom Software Engineering: The IBM COBOL Structuring Facility,” Proceedings of the 12th Annual Computer Software and Applications Conference, Los Alamitos, Calif.: IEEE CS Press, 1988.
Mills 1983. Mills, Harlan D. Software Productivity. Boston, Mass: Little, Brown, 1983.
Mills 1986. Mills, Harlan D. “Structured Programming: Retrospect and Prospect,” IEEE Software, November 1986, pp. 58-66.
O’Grady 1990. O’Grady, Frank. “A Rude Awakening: A Generation of Programmers Wants to Know–Is There Life After COBOL,” American Programmer, vol. 3, nos. 7-8 (July/August 1990), pp. 44-49.
Yourdon and Constantine 1979. Yourdon, Edward and Larry L. Constantine. Structured Design: Fundamentals of a Discipline of Computer Program and Systems Design, Englewood Cliffs, New Jersey: Yourdon Press, 1979.
Author: Steve McConnell, Construx Software