May 13, 2018
This is part of the list of books I've read, which I maintain. See all of them. Here you'll find only a blob of notes from the book itself and some thoughts on them. More often than not, the actual book is more useful than what you can see here.
Title: Code Complete
Author: Steve McConnell
The irony of the shift in focus away from construction is that construction is the only activity that’s guaranteed to be done.
When art critics get together they talk about Form and Structure and Meaning. When artists get together they talk about where you can buy cheap turpentine. — Pablo Picasso
If this book were a dog, it would nuzzle up to construction, wag its tail at design and testing, and bark at the other development activities.
The ideal software project goes through careful requirements development and architectural design before construction begins. The ideal project undergoes comprehensive, statistically controlled system testing after construction. Imperfect, real-world projects, however, often skip requirements and design to jump into construction. They drop testing because they have too many errors to fix and they’ve run out of time. But no matter how rushed or poorly planned a project is, you can’t drop construction; it’s where the rubber meets the road. Improving construction is thus a way of improving any software-development effort, no matter how abbreviated.
Computer science has some of the most colorful language of any field. In what other field can you walk into a sterile room, carefully controlled at 68°F, and find viruses, Trojan horses, worms, bugs, bombs, crashes, flames, twisted sex changers, and fatal errors?
The comparison between the wave theories of sound and light was so productive that scientists spent a great deal of effort looking for a medium that would propagate light the way air propagates sound. They even gave it a name —“ether”—but they never found the medium.
The value of metaphors should not be underestimated. Metaphors have the virtue of an expected behavior that is understood by all. Unnecessary communication and misunderstandings are reduced. Learning and education are quicker. In effect, metaphors are a way of internalizing and abstracting concepts, allowing one’s thinking to be on a higher plane and low-level mistakes to be avoided. —Fernando J. Corbató
A heuristic is an algorithm in a clown suit. It’s less predictable, it’s more fun, and it comes without a 30-day, money-back guarantee.
Thus, knowing how to approach problems in general is at least as valuable as knowing specific solutions for specific problems.
A confusing abundance of metaphors has grown up around software development. David Gries says writing software is a science (1981). Donald Knuth says it’s an art (1998). Watts Humphrey says it’s a process (1989). P. J. Plauger and Kent Beck say it’s like driving a car, although they draw nearly opposite conclusions (Plauger 1993, Beck 2000). Alistair Cockburn says it’s a game (2002). Eric Raymond says it’s like a bazaar (2000). Andy Hunt and Dave Thomas say it’s like gardening. Paul Heckel says it’s like filming Snow White and the Seven Dwarfs (1994). Fred Brooks says that it’s like farming, hunting werewolves, or drowning with dinosaurs in a tar pit (1995). Which are the best metaphors?
Jon Bentley says you should be able to sit down by the fire with a glass of brandy, a good cigar, and your favorite hunting dog to enjoy a “literate program” the way you would a good novel.
In writing, a high premium is placed on originality. In software construction, trying to create truly original work is often less effective than focusing on the reuse of design ideas, code, and test cases from previous projects.
“Accretion,” in case you don’t have a dictionary handy, means any growth or increase in size by a gradual external addition or inclusion.
You just want to be sure that you plan enough so that lack of planning doesn’t create major problems later.
making structural changes in a program costs more than adding or deleting peripheral features
They build in margins of safety; it’s better to pay 10 percent more for stronger material than to have a skyscraper fall over.
When the Empire State Building was built, each delivery truck had a 15-minute margin in which to make its delivery. If a truck wasn’t in place at the right time, the whole project was delayed.
People who are effective at developing high-quality software have spent years accumulating dozens of techniques, tricks, and magic incantations.
Use whatever metaphor or combination of metaphors stimulates your own thinking or communicates well with others on your team.
Doing the most expensive part of the project twice is as bad an idea in software as it is in any other line of work.
Code put into well-factored classes can be reused in other programs more easily than the same code embedded in one larger class.
the projects that used functional design were able to take about 35 percent of their code from previous projects. Projects that used an object-oriented approach were able to take more than 70 percent of their code from previous projects. If you can avoid writing 70 percent of your code by planning ahead, do it!
Notably, the core of NASA’s approach to creating reusable classes does not involve “designing for reuse.” NASA identifies reuse candidates at the ends of their projects. They then perform the work needed to make the classes reusable as a special project at the end of the main project or as the first step in a new project. This approach helps prevent “gold-plating”—creation of functionality that isn’t required and that unnecessarily adds complexity.
Avoid creating omniscient classes that are all-knowing and all-powerful.
Aside from the computer itself, the routine is the single greatest invention in computer science.
The routine makes modern programming possible.
routines were good because the avoidance of duplication made a program easier to develop, debug, document, and maintain. Period.
Without the abstractive power of routines, complex programs would be impossible to manage intellectually
For routines, cohesion refers to how closely the operations in a routine are related. Some programmers prefer the term “strength”: how strongly related are the operations in a routine? A function like Cosine() is perfectly cohesive because the whole routine is dedicated to performing one function. A function like CosineAndTan() has lower cohesion because it tries to do more than one thing. The goal is to have each routine do one thing well and not do anything else.
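The contrast between a cohesive and a less cohesive routine can be sketched in a couple of hypothetical functions (the names mirror the book's `Cosine()` / `CosineAndTan()` example):

```python
import math

# Cohesive: the whole routine is dedicated to doing one thing.
def cosine(angle_radians: float) -> float:
    return math.cos(angle_radians)

# Less cohesive: two unrelated results are bundled together,
# so every caller is forced to deal with both of them.
def cosine_and_tan(angle_radians: float) -> tuple:
    return math.cos(angle_radians), math.tan(angle_radians)
```

Splitting `cosine_and_tan` into two single-purpose routines lets each be named, tested, and reused independently.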
Sometimes the only problem with a routine is that its name is wishy-washy; the routine itself might actually be well designed. If HandleOutput() is replaced with FormatAndPrintOutput(), you have a pretty good idea of what the routine does.
Never order a routine's parameters randomly or alphabetically; that always leads to problems.
Use every parameter you pass to a procedure/function, and remove a parameter promptly once you no longer need it.
Seven is a magic number for people’s comprehension. Psychological research has found that people generally cannot keep track of more than about seven chunks of information at once. This discovery has been applied to an enormous number of disciplines, and it seems safe to conjecture that most people can’t keep track of more than about seven routine parameters at once.
To combine the call and the test into one line of code increases the density of the statement and, correspondingly, its complexity.
Safety-critical applications tend to favor correctness over robustness. It is better to return no result than to return a wrong result.
Consider whether your program really needs to handle exceptions, period. As Bjarne Stroustrup points out, sometimes the best response to a serious run-time error is to release all acquired resources and abort. Let the user rerun the program with proper input.
Data is sterilized before it’s allowed to enter the operating room. Anything that’s in the operating room is assumed to be safe. The key design decision is deciding what to put in the operating room, what to keep out, and where to put the doors—which routines are considered to be inside the safety zone, which are outside, and which sanitize the data. The easiest way to do this is usually by sanitizing external data as it arrives, but data often needs to be sanitized at more than one level, so multiple levels of sterilization are sometimes required.
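A minimal sketch of that "operating room" idea, with hypothetical routine names: one barricade function sanitizes external data at the boundary, and everything inside it is allowed to assume its inputs are safe.

```python
def sanitize_username(raw: str) -> str:
    """Barricade routine: external data is cleaned here, once,
    before it is allowed into the 'operating room'."""
    cleaned = raw.strip()
    if not cleaned.isalnum():
        raise ValueError(f"invalid username: {raw!r}")
    return cleaned.lower()

# Inside the barricade, routines may assume their inputs are safe
# and skip re-validating them.
def greet(username: str) -> str:
    return f"Hello, {username}!"
```

The key design decision is where the barricade sits, not how each individual check is written.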
One program I worked on made extensive use of a quadruply linked list. The linked-list code was error prone, and the linked list tended to get corrupted. I added a menu option to check the integrity of the linked list.
In debug mode, Microsoft Word contains code in the idle loop that checks the integrity of the Document object every few seconds. This helps to detect data corruption quickly, and it makes for easier error diagnosis.
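The same idea can be sketched for a simple doubly linked list (this is an illustrative structure, not Word's actual code): an integrity-check routine walks the list and asserts that every back-link mirrors its forward link, so corruption is detected close to where it happens.

```python
class Node:
    """A node in a doubly linked list."""
    def __init__(self, value):
        self.value = value
        self.prev = None
        self.next = None

def check_list_integrity(head):
    """Verify that each node's back-link mirrors the forward link.

    Called periodically (e.g. from an idle loop in debug builds),
    this catches corruption early instead of at some distant crash.
    """
    node = head
    while node is not None and node.next is not None:
        assert node.next.prev is node, "corrupted back-link"
        node = node.next
```

Such checks are typically compiled or switched into debug builds only, so production performance is unaffected.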
A vague, wishy-washy name is like a politician on the campaign trail. It sounds as if it’s saying something, but when you take a hard look, you can’t figure out what it means.
In the vast majority of systems, efficiency isn’t critical. In such a case, see that the routine’s interface is well abstracted and its code is readable so that you can improve it later if you need to. If you have good encapsulation, you can replace a slow, resource-hogging, high-level language implementation with a better algorithm or a fast, lean, low-level language implementation, and you won’t affect any other routines.
It’s usually a waste of effort to work on efficiency at the level of individual routines. The big optimizations come from refining the high-level design, not the individual routines.
You generally use micro-optimizations only when the high-level design turns out not to support the system’s performance goals, and you won’t know that until the whole program is done.
Once you start coding, you get emotionally involved with your code and it becomes harder to throw away a bad design and start over.
One of the biggest differences between hobbyists and professional programmers is the difference that grows out of moving from superstition into understanding.
If you often find yourself suspecting that the compiler or the hardware made an error, you’re still in the realm of superstition.
The “Just One More Compile” syndrome leads to hasty, error-prone changes that take more time in the long run.
Hacks usually indicate incomplete understanding and guarantee errors both now and later.
Few things are more satisfying than rewriting a problematic routine and never finding another error in it.
Improper data initialization is one of the most fertile sources of error in computer programming. Developing effective techniques for avoiding initialization problems can save a lot of debugging time.
Sometimes you forget that a re-initialization is needed, and the program runs on wrong assumptions, which results in vague, fragile code that fails in very unpredictable ways.
Scope answers the question of how famous a variable is. Reducing a variable's celebrity is a good thing, and the point applies to variables just as it does to people. Simplicity is the key.
The more information you can hide, the less you have to keep in mind at any one time. The less you have to keep in mind, the smaller the chance that you’ll make an error because you forgot one of the many details you needed to remember.
You rarely, if ever, need to use naked global data.
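One alternative the book recommends is wrapping what would have been a bare global behind access routines. A minimal sketch, with hypothetical names:

```python
class Config:
    """Hide the data behind access routines instead of a bare global.

    Every read and write goes through a routine, giving one place
    to add logging, validation, or locking later.
    """
    _debug_enabled = False

    @classmethod
    def debug_enabled(cls) -> bool:
        return cls._debug_enabled

    @classmethod
    def set_debug(cls, enabled: bool) -> None:
        cls._debug_enabled = bool(enabled)
```

Callers use `Config.debug_enabled()` rather than touching the variable directly, so the representation can change without touching every call site.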
You can’t give a variable a name the way you give a dog a name—because it’s cute or it has a good sound.
Names should be as specific as possible. Names like x, temp, and i that are general enough to be used for more than one purpose are not as informative as they could be and are usually bad names.
It’s OK to figure out murder mysteries, but you shouldn’t need to figure out code. You should be able to read it.
The key is that any convention at all is often better than no convention. The convention may be arbitrary. The power of naming conventions doesn’t come from the specific convention chosen but from the fact that a convention exists, adding structure to the code and giving you fewer things to worry about.
The use of named constants has been shown to greatly aid program maintenance. As a general rule, any technique that centralizes control over things that might change is a good technique for reducing maintenance efforts.
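A small illustration of the point, with a hypothetical constant name: the limit lives in exactly one place, so a policy change touches one line instead of every literal scattered through the code.

```python
# Named constant: the single point of control for this value.
MAX_LOGIN_ATTEMPTS = 5

def attempts_remaining(attempts_used: int) -> int:
    # Using the constant here (never a bare 5) keeps every
    # occurrence in sync when the limit changes.
    return MAX_LOGIN_ATTEMPTS - attempts_used

def is_locked_out(attempts_used: int) -> bool:
    return attempts_used >= MAX_LOGIN_ATTEMPTS
```

Mixing the constant in one routine with a literal `5` in another is exactly the hazard the next note warns about.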
But the way the code is written shouldn't leave any shadow of a doubt about its purpose.
It’s dangerous to use a named constant in one place and a literal in another to represent the same entity.
Some programming practices beg for errors.
Some of the brightest people in computer science have suggested that arrays never be accessed randomly, but only sequentially (Mills and Linger 1986). Their argument is that random accesses in arrays are similar to random gotos in a program: such accesses tend to be undisciplined, error prone, and hard to prove correct. They suggest using sets, stacks, and queues, whose elements are accessed sequentially, rather than using arrays.
Programmer-defined data types are one of the most powerful capabilities a language can give you to clarify your understanding of a program.
Creating your own types makes your programs easier to modify and more self-documenting, if your language supports that capability. When you create a simple type using typedef or its equivalent, consider whether you should be creating a new class instead.
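In Python, `typing.NewType` plays roughly the role of a typedef. A sketch with hypothetical names:

```python
from typing import NewType

# A distinct type for a domain concept: the name documents intent,
# and a static type checker flags accidental mixing with plain ints.
CustomerId = NewType("CustomerId", int)

def lookup_customer(customer_id: CustomerId) -> str:
    # Placeholder lookup; a real version would hit a database.
    return f"customer-{customer_id}"
```

If the concept later grows behavior of its own (validation, formatting), that is the cue to promote it from a type alias to a full class, as the note suggests.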
Pointer usage is one of the most error-prone areas of modern programming, to such an extent that modern languages like Java, C#, and Visual Basic don’t provide a pointer data type. Using pointers is inherently complicated, and using them correctly requires that you have an excellent understanding of your compiler’s memory-management scheme. Many common security problems, especially buffer overruns, can be traced back to erroneous use of pointers (Howard and LeBlanc 2003). A liberal dose of defensive programming practices will help even further.