C Tips and Tricks

Table of Contents

Introduction

Prototypes

Avoid Vendor-Specific Make Tools

Avoid Vendor-Specific Assemblers

Volatile Declarations

Keep It Simple, Silly (KISS)

Profile

Use More Than One Compiler

Dump, Dump, Dump

Use a Stand-alone Editor

Use Version Control

Write Module Tests

Head and Tail Pointers

Memory Management

Use goto to Handle Errors

Portability

Separate Input, Processing, and Output

Always Initialize Pointers to NULL

Code for Changes

Computers Get Faster!

Properties or Attributes

Win32 Compilers

Keep Sources and Objects Separate when Building

Be Patient Towards Inexperienced Developers

Read Microsoft's Development Books

Guard Each of Your Functions

Error Handling

Stay Awake

DISCLAIMER

History

Version 1.00 (2007.02.25)

Version 1.01 (2008.??.??)

Version 1.02 (2008.??.??)

Version 1.03 (2008.12.30)

Version 1.04 (2008.12.31)

Version 1.05 (2009.01.01)

Version 1.06 (2009.02.03)

Version 1.07 (2009.04.15)

Version 1.08 (2009.04.30)

Version 1.09 (2009.05.01)

Version 1.10 (2009.06.02)

Version 1.11 (2009.06.12)

Version 1.12 (2010.01.25)

Version 1.13 (2010.02.06)

Version 1.14 (2010.09.01)

Version 1.15 (2010.10.18)

Version 1.16 (2012.06.25)

Version 1.17 (2012.09.24)

Version 1.18 (2013.04.30)

Version 1.19 (2015.02.07)

Introduction

There are plenty of books about expert programming in C, C++, and the whole horde of C derivatives. Most of them, though, don't give detailed, specific hints on how to do things. In this article I'll try to present my most important C/C++ tips and tricks. I expect the article to grow over time as my memory permits. This is not a chest of gold nuggets, simply some very basic things that I have learned over a decade as a professional software developer, working in about 10 different companies. If you don't understand something in this article, drop me an email (see the Contact page for more information). Please keep in mind that this is only a "random" collection of advice; as I remember more tips and tricks, I'll add them to this document.

Prototypes

A nifty trick that I have learned over the years is to make prototypes of complicated, difficult code in C# (a free C# compiler (csc.exe) is included in each release of Microsoft .NET). This generally takes about 1/20th of the time of making the prototype in C/C++, and it gives you real hands-on experience with the problem domain before you start coding it up in C/C++. The key is to find a language in which complex problems are easy to express using dictionaries, lists, and objects. So, if the problem is difficult, such as a complex data format or a completely new file format, put together a prototype in C# and get to know your problem domain before you code it in C or C++. Both C and C++ are so difficult to code in (due to all the house-keeping chores you have to do when using them) that it is hard to focus on the problem at hand. C#, on the other hand, handles all the house-keeping for you, so you can focus on understanding the problem domain. In many ways you can think of C# as executable pseudo-code: In a matter of hours you can have something usable up and running, experiment with it, get to know the problem domain completely, and then start the hard job of making a C implementation.

This suggestion may sound stupid, but it works really well. To create a really great solution, that expresses your complete and profound understanding of the problem at hand, you need to code twice: Once to throw away and once to use. And, yes, you eventually throw away the prototype and keep the C/C++ implementation.

Obviously, you wouldn't make a prototype for something you have done many times before. Here is an example of where I have successfully used a prototype: I was given the task of writing a PDF417 barcode generator module. I had no clue what PDF417 was; in fact, I had never worked with barcodes prior to making the PDF417 generator - and PDF417 is probably one of the world's most advanced barcode formats. So I coded a prototype in Python, played around with it, got to understand the problem at hand really well, and then coded it up in C. The result compared favorably with a public domain implementation out there and worked really well. So, if you are confronted with a problem from a completely unknown problem domain, go prototype!

Whatever language you pick for prototyping, use that same language when you write add-on tools such as in-house build system tools and so on. That way you limit the number of languages a developer needs to know to two: the implementation language and the scripting language. My personal combo is C++ as the implementation language and C# as the scripting language. That combo is rock-solid and can be ported to a wide array of platforms thanks to the Mono .NET framework for Un*x machines.

Avoid Vendor-Specific Make Tools

Always steer well clear of vendor-specific make tools. Use an open source, freeware tool such as CMake instead. This will help you enormously the day you decide to switch compilers, extend your build system to handle yet another compiler, or port to another platform. Alternatively, code up an ad-hoc build system in C#. It takes only a few days and gives you full control of your build system. An example of a two-hour make utility can be found in the Nutbox C#/.NET console toolbox that I occasionally maintain. Nutbox is open source, freeware, and public domain, so you may copy, paste, and reuse it as you see fit.

Avoid Vendor-Specific Assemblers

The same goes for vendor-specific assemblers: use an open source, freeware assembler instead, such as YASM. Search Google for "open source assembler" and find something you can use. Vendor-specific assemblers come back to bite you when you want to switch compilers or try out another vendor's compiler.

Volatile Declarations

If you can't make your code run except with all optimizations disabled, then you have almost certainly forgotten a volatile keyword in front of a variable declaration somewhere. Make sure that every global variable which is accessed asynchronously - by multiple threads, or by an interrupt handler and other code - is declared volatile. But don't go adding volatile to all global variables. Volatile slows down access to the variable a bit, because it tells the compiler not to cache the variable in a register - which, incidentally, is exactly what happens when you disable all optimizations, except that it then happens for all variables in your application.
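
As a minimal sketch (all names here are purely illustrative), consider the classic case of a flag that an interrupt handler or a second thread sets and that the main loop polls; without volatile, an optimizing compiler may keep the flag in a register and the loop never sees the change:

/* Flag shared between an interrupt handler (or another thread) and the
   main loop.  Without volatile, the compiler may cache Done in a register
   and WaitForCompletion() can spin forever once optimizations are on. */
static volatile int Done = 0;

void CompletionHandler(void)    /* hypothetical ISR or thread callback */
{
    Done = 1;
}

void WaitForCompletion(void)
{
    while (!Done)
        ;    /* volatile forces a fresh read of Done on every iteration */
}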

Keep It Simple, Silly (KISS)

If you find yourself writing complex Boolean expressions, with more than one relational or logical operator, then you have forgotten to simplify your models. You need to simplify your models and code until you can express all conditions as simple Boolean expressions. The trick is to weed out conditions as you go. If you ever find yourself adding or subtracting one here and there to make it work, then STOP! You no longer understand what you are doing and need to roll back, cut out code, and simplify the model until you have a clear view and understanding of what you are doing.

Don't be ashamed of having done this, though. Pretty much every programmer at some (early) stage of his career goes through a phase of adding and subtracting ones to make things work. As the years go by, each programmer builds up enough experience to quickly identify cases of lost overview and learns to go back to the drawing board and draw up a simpler solution. One example: if you write a buffered I/O module, you'll almost inevitably run into the plus-one/minus-one problem until you realize that input must be kept separate from output, also in terms of state variables; then you'll end up with a very nice and clean solution.

The most important thing is to learn to identify different sub-models. The reason your code gets complicated and overloaded with convoluted conditions, nested if-statements, nested loops, and so on is that you have not realized that you are working on a number of different sub-models in the same code. For instance, if you try to make a buffered I/O module without understanding the two sub-models of buffered input and buffered output, you inevitably end up with very complicated code (as seen in most C compilers' run-time libraries). If, on the other hand, you approach the task as a question of making a buffered input module and a buffered output module in one and the same source file, you will quickly find that you have got your models straight and that all the code becomes immensely simple and efficient.

Poor programmers view simple code as the mark of a poor programmer. In reality, it is the other way around: the more skilled and crafty the programmer, the simpler the code. I have again and again managed to write code that was so simple that people almost thought me stupid. And I am happy about that, because it shows that I fully understood the problem at hand and managed to boil it down to exceedingly simple and efficient code. So don't be ashamed of making simple code. Go for it; spend the rest of your professional career trying to write the simplest possible code. Simple code is easy to optimize and alter, whereas complex code is hard to optimize and alter. That way you can harvest enormous performance gains while your code is so trivial and simple that other guys wonder how you manage to do it.

The truth is that complicated code reveals a lack of understanding of the problem domain, while simple code shows a great grasp of it. This is one of the things I like most about object-oriented programming: you can often make code that is so simple that people literally wonder how it can be doing anything at all. (Ain't that funny? The way that two lines of C++ code can be doing all sorts of interesting things that 30 lines of C code can't?)

Profile

You may have heard of profilers but never tried one. Give yourself a day or two off from your ordinary work, sit down, add the compiler options that need to be added to make profiling work (if any), and try out your compiler's profiler. A profiler tells you how much time each source line and function uses, which helps you identify bottlenecks in seconds - bottlenecks you would never have discovered on your own, no matter how much of the code you designed and implemented yourself. A profiler is your friend.

Once you get acquainted with your profiler, you learn to write code that works, with little regard to performance, and then later run your application through the profiler to see what needs extra attention. This way you can spend 3 months coding quickly at medium quality, a few days profiling, and another month optimizing the application, for a total of about four months, and get a much faster application than if you had spent 15 months hand-coding everything for performance. Profiling saves both development time (once you get used to using a profiler) and execution time.

The key to getting good readings from a profiler is to use a gigantic real-life test case. Ideally, you'll have gathered some really nifty test cases in the form of bug reports from customers. Such test cases are about the best you can ever get your hands on, as they are pure gold: real-life examples of how your product is being used. Therefore, establish good relations with the tech support department and have them pass on copies of any customer files they receive. Ideally, you'd gather these test cases in a read-only directory on a company-wide server so that all developers can get their hands on them (when you think of customers' test cases, think of geese laying golden eggs!).

When you have boring, idle days, you can always sit down, profile some legacy project, and add a speed-up of a few hundred percent. Most bosses like that! It gives the customers a feeling that the project is alive and cared for, even though it is otherwise only touched when the most grave and critical errors are found.

Use More Than One Compiler

Make your project support more than one compiler (for instance OpenWatcom and GNU C) right from the start. This gives you access to a horde of development tools, profilers in various versions, each with its own strengths and weaknesses, and, best of all, warning and error messages from several compilers. Each compiler has its own bugs and weaknesses. Some compilers don't warn properly about incorrect C++ constructs (which then fail to compile with other compilers). Some compilers don't warn properly about errors such as assigning a string to a floating-point variable, and so on. Each compiler covers perhaps 30 percent of the possible set of warnings and errors, so if you use three or four compilers, you get far better coverage. This is sort of an advanced way of using Lint without using Lint, and it makes your code portable to multiple compilers. Lint also has its bugs and deficiencies, but feel free to add a Lint pass if you want to. It can't harm, and at best it helps to make your code even better.

This tip also applies to .NET development: remember to build regularly with both the Microsoft .NET compiler (csc.exe) and the Mono Project's compiler.

Dump, Dump, Dump

Whenever you start working with a new data format, such as a file format or a client/server protocol, write a dumper for that data format from the reference manuals and test it against plenty of real-life test cases. This gives you full exposure to the data format and ensures that you learn the entire format early on. That, in turn, protects you against the "Oh, I didn't know that..." experiences that come from studying a data format loosely, designing an application that uses it, and then gradually realizing that you don't know the format well enough. Therefore, always start new projects by writing dumpers for the data formats (.OBJ, .JPG, .PNG, .MP3, etc.) that your application natively reads or writes. A dumper is an invaluable tool that can be used to find and detect dozens and hundreds of errors. If you have worked for years on a data format that you don't have a dumper for, then sit down and write the dumper today! A dumper is basically a console application that takes as its input a file in the data format you want to support and writes as its output a text file containing a human-readable presentation of that data. Obviously, dumpers are only really useful with binary data formats. For text data formats, and also for binary data formats, a checker (a syntax checker, a validator) is also very useful.
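
To make this concrete, here is a minimal sketch of a dumper for a made-up binary record format: a 2-byte big-endian type tag, a 2-byte big-endian length, and that many payload bytes. The format and all names are purely illustrative; a real dumper would decode every field the reference manual defines:

#include <stdio.h>

int main(int argc, char **argv)
{
    FILE *pFile;
    int TypeHi, TypeLo;

    if (argc != 2)
    {
        fprintf(stderr, "usage: dump file\n");
        return 1;
    }

    pFile = fopen(argv[1], "rb");
    if (pFile == NULL)
    {
        fprintf(stderr, "dump: cannot open '%s'\n", argv[1]);
        return 1;
    }

    /* walk the file record by record and print each one in human-readable form */
    while ((TypeHi = fgetc(pFile)) != EOF && (TypeLo = fgetc(pFile)) != EOF)
    {
        int LengthHi = fgetc(pFile);
        int LengthLo = fgetc(pFile);
        int Length   = (LengthHi << 8) | LengthLo;

        printf("record: type=0x%04X length=%d\n", (TypeHi << 8) | TypeLo, Length);

        for (int i = 0; i < Length; i++)
        {
            int Byte = fgetc(pFile);
            if (Byte == EOF)
                break;
            printf("%s%02X", (i % 16 == 0) ? "    " : " ", Byte);
            if (i % 16 == 15)
                printf("\n");
        }
        printf("\n");
    }

    fclose(pFile);
    return 0;
}

The exact output format doesn't matter; what matters is that every field of every record passes through code you wrote, so nothing in the format remains a mystery.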

When you have gotten used to writing dumpers, you also get used to making all your solutions so that you can save a copy of their data formats on disk and later dump them to see what the heck is going on.

Dumpers are especially useful for newcomers to the project. Using a dumper they can get to know the data format in a matter of weeks or months, rather than working with it for years without having a mental picture of what the data format consists of.

Use a Stand-alone Editor

Instead of relying on the IDE (Integrated Development Environment) of one vendor or another, get used to using your own stand-alone editor. This prepares you for changing compilers, makes it easier to use multiple programming languages (as you pick up yet another language along the road), and separates you from vendor-specific routines and practices. An excellent stand-alone editor is SlickEdit, available from www.slickedit.com. The price is rather steep, at $300, but think about it: next to your keyboard, your editor is your most frequently used tool. So don't go about saving bucks on your keyboard or your editor.

Use Version Control

Put your source files under version control, using a system such as Git. Initially, it may seem like extra work that does not pay off. But when you have a 50+ file project and you change 5 or 10 files to add a new feature and you introduce a bug, you'll really wish you had put your stuff under version control. Version control lets you see which changes you, and every other developer, have made, making it possible to get a precise idea of what has changed with a single command. Git is free, open source, and a great version control system.

Tip: You can use www.github.com as your repository for freeware or closed-source projects; the latter costs money, though.

Write Module Tests

Test each module independently with a separate program named foo_test if it tests foo. Make the test program quiet and have it return zero (the Unix console program convention) if everything runs well. Only make it write to the screen if it fails, and then make it return a non-zero error code. This way you can build the test program from the makefile and execute it from the same makefile, and make will stop if the test fails. Module tests serve two purposes:

  1. To test the module thoroughly, exercising it in as many different ways as possible.
  2. To document the use of the module with a real-life test example.

The second purpose means that newcomers can use the module test as a real-life example of how the module is intended to be used. Make sure you test every function, every class, every method, and so on.

Try to integrate your module tests into your makefiles so that they are run whenever you build the related components, because newcomers won't know what to run when. And by coding it into the makefiles, you leave a documented trail that anybody can follow and use.
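
As a minimal sketch, a module test following these conventions could look like the program below; the foo module, its header, and its functions (FooCreate(), FooAdd(), FooDelete()) are hypothetical stand-ins for whatever your module exposes:

/* foo_test.c - module test for the hypothetical foo module.  It is quiet
   on success, returns zero on success and non-zero on failure, so the
   makefile can build and run it as part of the normal build. */
#include <stdio.h>

#include "foo.h"    /* hypothetical header declaring Foo, FooCreate(), ... */

static int Check(int Condition, const char *pMessage)
{
    if (!Condition)
        fprintf(stderr, "foo_test: FAILED: %s\n", pMessage);
    return !Condition;
}

int main(void)
{
    int Failures = 0;
    Foo *pFoo = NULL;

    Failures += Check(FooCreate(&pFoo) == 0, "FooCreate");
    Failures += Check(FooAdd(pFoo, 42) == 0, "FooAdd");
    Failures += Check(FooDelete(&pFoo) == 0, "FooDelete");

    return (Failures != 0);    /* zero means everything ran well */
}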

Head and Tail Pointers

When you need to work on something that is a fixed size, you'll probably think of writing it like this:

char   *pValue;
size_t   Length;

pValue = pParameter;
Length = Size;

Then you'll use it as follows:

for (char *pTemp = pValue; pTemp < pValue + Length; pTemp++)
    ...;

A better method, which allows you to write blazingly fast code, is to instead keep track of the head and tail (end) pointers:

char *pHead;
char *pTail;    /* points to char AFTER end of pHead */

pHead = pParameter;
pTail = pParameter + Size;

Then you'll use it as follows:

for (char *pTemp = pHead; pTemp < pTail; pTemp++)
    ...;

Remember, I said blazingly fast code. The second method saves an ADD instruction whenever you want to make a loop. Furthermore, it gives you a very comfortable model to think in when you code. Instead of adding Length here and there, you always make sure you fulfill the condition that pHead <= x < pTail.

Imagine you wrote a stream I/O module that reads the source file into memory as one huge block, to speed up I/O and avoid buffering overhead. It has a member function called GetChar(), which returns the next character. If you use the head/tail approach, you can code the class as follows:

char *pHead;    /* points to start of buffer */
char *pTail;    /* points to one item AFTER end of buffer */
char *pNext;    /* points to next item to return */

GetChar() then becomes:

int cStream::GetChar(void)
{
    return (pNext < pTail) ? *pNext++ : EOF;
}

If you had used a separate length field, the code would look like this:

int cStream::GetChar(void)
{
    return (pNext < pHead + Length) ? *pNext++ : EOF;
}

Now, which one do you think produces a blazingly fast result? But the primary reason that I recommend using pHead/pTail pointers, instead of pHead/Length, is that it tremendously simplifies the implementation. Try it out a few times and you'll discover what I mean.

Memory Management

Always define your own wrappers around malloc() and free() (such as MemoryCreate() and MemoryDelete()); these wrappers can, at any later point in time, be used to enhance memory allocation tracking or to implement a sophisticated memory debugging module. Furthermore, you can make MemoryCreate() fill the newly allocated memory with a pattern, such as 0xCC on Intel platforms (the INT 3 breakpoint instruction), which makes it easy to spot uninitialized fields in debuggers and in everyday execution:

int MemoryCreate(void **ppThis, size_t Size)
{
    int result = 0;
    void *pThis = NULL;

    pThis = malloc(Size);
    if (pThis == NULL)
    {
        result = ERROR_MEMORY_CREATE;
        goto failed;
    }

#ifndef NDEBUG
    memset(pThis, 0xCC, Size);
#endif

failed:
    *ppThis = pThis;
    return result;
}
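
A matching MemoryDelete() wrapper might look like the sketch below. The exact signature (including the Size parameter, which is only needed for the debug fill) and the 0xDD fill pattern are my assumptions rather than anything prescribed above, and the usual stdlib.h/string.h includes are taken for granted as in the MemoryCreate() example:

int MemoryDelete(void **ppThis, size_t Size)
{
    int result = 0;

    if (*ppThis != NULL)
    {
#ifndef NDEBUG
        /* fill the memory with a recognizable pattern before freeing it,
           so reads through dangling pointers stand out in the debugger */
        memset(*ppThis, 0xDD, Size);
#endif
        free(*ppThis);
        *ppThis = NULL;
    }

    return result;
}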

Use goto to Handle Errors

I know that many books, and many people with little hands-on experience of handling errors in C, say "Don't ever use goto!" But the fact is that goto is a marvelous construct for handling errors and for writing state machines - and for nothing else. The function MemoryCreate(), shown above, nicely illustrates how goto can be used to handle errors. The advantage of using goto rather than nested if-else statements is that your code remains "flat": only one level deep. Compare this code:

FILE *pFile = fopen("foo.txt", "rt");
if (pFile != NULL)
{
    char Buffer[1024];
    if (fread(Buffer, 1, 1024, pFile) == 1024)
    {
        Node *pNode = (Node *) malloc(sizeof(Node));
        if (pNode != NULL)
        {
            pNode->pNext = NULL;
            strcpy(pNode->Text, Buffer);
            ...
        }
        else
        {
            fclose(pFile);
            return -1;
        }
    }
    else
    {
        fclose(pFile);
        return -1;
    }
}

To this code:

#define fail(code)  { result = (code); goto failed; }

    int result = 0;
    FILE *pFile = NULL;
    char Buffer[1024];
    Node *pNode = NULL;

    pFile = fopen("foo.txt", "rt");
    if (pFile == NULL)
        fail(ERROR_FILE_CREATE);

    if (fread(Buffer, 1, 1024, pFile) != 1024)
        fail(ERROR_FILE_READ);

    pNode = (Node *) malloc(sizeof(Node));
    if (pNode == NULL)
        fail(ERROR_MEMORY_CREATE);

    pNode->pNext = NULL;
    strcpy(pNode->Text, Buffer);

    /* insert node into global list */
    ...
    pNode = NULL;    /* pass ownership of pNode memory to global list */

    ...

failed:
    free(pNode);
    pNode = NULL;

    if (pFile != NULL)
    {
        fclose(pFile);
        pFile = NULL;
    }

    return result;

Can you see the difference? The first version is very cumbersome, and it is very hard to ensure that it cleans up properly after itself. With wisely chosen gotos, you get very fast and very clean code. It is trivially easy to write, trivially easy to maintain, and trivially easy to debug. So, say after me: "Master C programmers love gotos for error handling and state machines!" One more time. Yes, that's it.

You might argue that a goto is expensive on contemporary platforms, due to CPU stalls and all that, but the fact is that an if statement includes at least one goto - the jump that is taken when the condition is false. So whether you use an explicit goto or an implicit one, you still have a goto.

If you want to be advanced, you define this macro (as already used in the example above):

#define fail(code)  { result = (code); goto failed; }

This requires you to define the integer return variable result and the label failed in each function, but you'll quickly get used to that. My recommendation is to keep it at that level, to keep the code readable for newcomers. I once developed a tremendously advanced C error handling mechanism that I used in some projects but eventually discontinued in the name of simplicity for newcomers. You can go extremely far when developing your own C error handling mechanism, but I recommend that you stick to something along the lines of what has been given above. Introducing 50 macros that simplify the error handling is really not a simplification but rather a "complexification". The fail() macro can be understood by anyone in a matter of minutes - and that's about the time you have to introduce newcomers to what you are doing. So keep it simple and stick to the fail() macro as the most advanced error handling construct in your C code. Obviously, you'd use exception handling in C++ code. Exception handling is a much, much better mechanism for handling errors than the paradigm I present above, but some people are so unfortunate that they must stay in C - and they are the audience of this article, which is why I explain the above.

Portability

You never know when you will need to port to a new platform. It almost always comes as a complete surprise to those developing the software that they suddenly need to support a new platform. For this reason, isolate all your platform-specific code behind clean abstractions in separate modules. For instance, if you use the Win32 CreateFile() function, make a File class that encapsulates the Win32 calls - and don't allow a single type, class, struct, function, or anything else from the native API to slip out through your abstraction. So don't go about making a File class that lets the user query the Win32 handle and stuff like that.
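
A minimal sketch of such an abstraction in C is shown below. All names are hypothetical; the point is that the header exposes no Win32 or POSIX types at all, so a file_win32.c can implement it with CreateFile()/ReadFile() and a file_posix.c with open()/read() without any caller noticing the difference:

/* file.h - portable file interface; no platform-specific type leaks out */
#include <stddef.h>

typedef struct File File;    /* opaque type - the native handle stays hidden */

int FileCreate(File **ppThis, const char *pName, const char *pMode);
int FileRead(File *pThis, void *pBuffer, size_t Size, size_t *pRead);
int FileWrite(File *pThis, const void *pBuffer, size_t Size);
int FileDelete(File **ppThis);

All of these functions follow the error code protocol described later in this article: they return zero on success and a non-zero error code on failure.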

The best way to handle portability issues on the GUI side is to use a GUI toolkit such as wxWidgets, FOX, or the very expensive, commercial Qt toolkit. FOX is a freeware, light-weight, bare-bones GUI toolkit. Qt is amazingly expensive, but also very, very feature-complete. With Qt, you can create advanced, portable applications in a matter of days or weeks (once you know the basics of it). wxWidgets, which is freeware, places itself somewhere between FOX and Qt. In my opinion, it is simply not worth it to make your own portable GUI toolkit. Even if Qt costs 2999 euro per seat, you'll quickly save that in development time: if you can save just one month of development time per developer, the price of Qt has been paid - and you can easily do that. So start researching Qt and then prepare a written report that explains all of the benefits of Qt to your boss. If you are coding in C++, that is.

Separate Input, Processing, and Output

Often you see conversion code that reads its input, converts it, and writes its output all mixed together in the same lines of source:

ch = getb();
if (ch == 0x12)
    ch = 0xA4;
putb(ch);
word = getw();
switch (word)
{
    case ... :
    case ... :
    case ... :
}
putw(word);

Always separate this into the three logical steps: input, conversion, and output. As shown below:

struct foo {
    byte ch;
    word wo;
};
struct foo bar;

/* input record */
bar.ch = getb();
bar.wo = getw();

/* conversion */
if (bar.ch == 0x12)
    bar.ch = 0xA4;
...

/* output */
putb(bar.ch);
putw(bar.wo);

This makes the code tremendously easier for newcomers to read and maintain! There's no end to how much faster code maintenance is when the code has been structured nicely and separated into its logical steps. This structure also lends itself naturally to writing reader and writer methods that read and write records using the defined structure.

Always Initialize Pointers to NULL

Always initialize all pointers to NULL. This ensures you don't accidentally free() an invalid pointer. It also prevents the scenario where somebody debugging later sees a garbage, non-NULL value in the pointer, assumes it points to allocated memory, and adds a fatal call to free() (which might corrupt and ruin the heap).

So, use:

FILE *pFile = NULL;
...
pFile = fopen("foo.txt", "rt");

Instead of:

FILE *pFile;
...
pFile = fopen("foo.txt", "rt");

You might argue that the first form wastes an instruction, but contemporary compilers can easily figure out that there is no need to NULL the pointer in the above code. And if somebody later adds new code, so that you have:

FILE *pFile = NULL;
char *pBuffer = NULL;

pBuffer = new char[1024];
...

pFile = fopen("foo.txt", "rt");

Then everything works fine and you still don't have an uninitialized pointer in your code.

Code for Changes

Try to always code so that your code is as easy to maintain (change) as at all possible! The best way to learn this is to spend 3-5 years maintaining other people's code, fixing bugs and the like, before you begin writing your own code. That way you learn first-hand what is annoying to experience as a maintainer. Over time, you'll come to understand that it is vastly more important that your code is easy to maintain than that you have squeezed every clock cycle out of every line everywhere. Occasionally, you might bump into some very demanding module that simply has to be coded in a nasty way to get the performance you need, but then you only have one or two of these modules in your entire application, rather than hundreds of them.

Computers Get Faster!

Always keep in mind that by the time you have finished developing your project, computers will have become twice as fast as they are today. This means there's no point in chasing clock cycles here and there. Code it as easily, as simply, and as reliably as possible. If the profiler later reveals that it is too slow in spots here and there, then you can recode those few spots.

Obviously, this hint was written back when PCs roughly doubled their single-core speed every year or two. That is no longer the case, but instead the number of cores keeps growing.

Properties or Attributes

Take a week off to try coding with properties in C#. This gives you an understanding of properties and attributes and teaches you how to write access methods in C and C++:

C++:

class Writer
{
public:
    size_t SizeGet(void);
    void SizeSet(size_t Value);
};

C:

size_t WriterSizeGet(cWriter *pThis);
void WriterSizeSet(cWriter *pThis, size_t Value);
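
A minimal sketch of the corresponding C implementation might look as follows; the cWriter struct and its Size field are hypothetical, and the typedef is assumed:

#include <stddef.h>

typedef struct cWriter
{
    size_t Size;
    /* ... other fields, private to the module ... */
} cWriter;

size_t WriterSizeGet(cWriter *pThis)
{
    return pThis->Size;
}

void WriterSizeSet(cWriter *pThis, size_t Value)
{
    pThis->Size = Value;
}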

Properties/attributes are a marvellous abstraction mechanism that forces you to think logically about your models and to make them coherent and beautiful. Once you've become accustomed to thinking in properties, you'll find yourself doing it in any programming language you use: assembly, C, and so on. It is a brilliant model that greatly simplifies code while keeping the implementation trivial in structure and content.

Win32 Compilers

GNU CC is good for small code and OpenWatcom is good for fast code; in my experience, OpenWatcom generates code about 10 percent faster than the Microsoft Visual C++ compiler. Microsoft Visual C++ is good for, erhm, something. Oh, there it is: features and completeness of the supported Win32 API. Also a nice IDE, but you don't use those, remember?

Keep Sources and Objects Separate when Building

When you build your project, arrange things so that object files (and generated executables, libraries, and dynamic-link libraries) are kept separate from your source files. This way your development sandbox isn't littered with junk object files and all the other output a compiler can produce. This is straightforward if you use CMake.

Be Patient Towards Inexperienced Developers

When you encounter a headstrong, insistent, inexperienced developer who doesn't know what he or she is talking about, be patient and kind. Spend the hours needed to make him or her see your point of view by transferring all your knowledge about the problem at hand to the inexperienced developer - preferably by typing up a nifty and neat article that presents everything you know about the issue to the developer and anyone else interested in the matter. If nothing else, your supervisor can read the article, become convinced, and help settle the matter in favor of your view. Don't go about wanting to have the inexperienced employee fired and stuff like that. He might be a fool, but what were you 10 years ago? See. We all start out as fools and gradually advance to experienced fools - and that's all that separates us: experience. And experience takes time, which is why you should give the inexperienced developer the time to learn these things.

Read Microsoft's Development Books

Read Code Complete, Writing Solid Code, and Debugging the Development Process, all from Microsoft Press. These are about the three best books ever written about software development (to the best of my knowledge). I am by no means a Microsoft buff - in fact, I consider Microsoft mostly an elaborate scheme to extract money from humanity (see also their constant updates of GUIs without adding features worth mentioning) - but I acknowledge that they got some very skilled people to write a number of very good books under their name. Once you've read these books, you might realize that most of my advice can be found in them. Why do you think I recommend these books? Because they and I agree on how to do things.

Guard Each of Your Functions

Get used to using the assert() macro from assert.h to check each and every parameter of each and every function. Be strict in your requirements - an assertion can always be loosened later, but it cannot be made stricter (as that might break existing code). This will save you a lot of debugging effort when you suddenly receive parameters you didn't expect. The assertions express what you, as the developer, expect to receive as input, and the client of your module can read them to see precisely what you expected to be passed. Ideally, a tool could extract prototypes and assertions so that you got a report like this:

int foo(const char *pValue, int *pResult):
    assert(pValue != NULL && strlen(pValue) < 1024);
    assert(pResult != NULL);

int bar(int Value, int Base, int Size, char *pResult):
    assert(Value >= 0);
    assert(Base >= 2 && Base <= 36);
    assert(Size >= 1 && Size <= 512);
    assert(pResult != NULL && *pResult == '\0');
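
In the source itself, these guards simply sit at the top of the function body. A minimal sketch for the bar() prototype above (the actual conversion work is left out):

#include <assert.h>

int bar(int Value, int Base, int Size, char *pResult)
{
    /* one assertion per parameter makes a failed assertion easy to pinpoint */
    assert(Value >= 0);
    assert(Base >= 2 && Base <= 36);
    assert(Size >= 1 && Size <= 512);
    assert(pResult != NULL && *pResult == '\0');

    /* ... actual work ... */
    return 0;
}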

Remember to use a single assertion per parameter so that you can easily locate the exact cause if an assertion fails. Also remember to define the macro NDEBUG (which disables assertions so that they don't generate any code) in the RELEASE/SHIP build of the product.

Error Handling

Error handling is one of the most difficult and treacherous elements of software development. Many strategies have been developed and tried out. The one I have had the most success with, in C, is to use this simple and efficient paradigm:

Always make all functions return an integer that indicates success (0) or failure (nonzero).

Ideally, you define a number of error codes for each module:

#define ERROR_STRING__BASE          0x1000 /* globally unique */
#define ERROR_STRING_CREATE         (ERROR_STRING__BASE + 1)
#define ERROR_STRING_DELETE         (ERROR_STRING__BASE + 2)
#define ERROR_STRING_UPCASE         (ERROR_STRING__BASE + 3)
#define ERROR_STRING_LOCASE         (ERROR_STRING__BASE + 4)
#define ERROR_STRING_INTEGER_PARSE  (ERROR_STRING__BASE + 5)
#define ERROR_STRING_FLOAT_PARSE    (ERROR_STRING__BASE + 6)

Etc., etc. Then you add this simple macro:

#define fail(code)  { result = (code); goto failed; }

Then you can code like this:

int StringIntegerParse(const char *pValue, int *pResult)
{
    register int result = 0;
    char *pThat = NULL;

    /* malloc() call not really needed, only given as example of something that can fail */
    pThat = (char *) malloc(1024);
    if (pThat == NULL)
        fail(ERROR_STRING_CREATE);

    memset(pThat, '\0', 1024);

    ...
    if (this or that)
        fail(ERROR_STRING_INTEGER_PARSE);

failed:
    /* clean up local variables prior to exiting */
    free(pThat);
    pThat = NULL;

    return result;
}

But the key point here is to establish the protocol that all functions return an integer which indicates success (0) or failure (non-zero). You'd wrap all system calls in portable wrappers that adhere to this protocol. This way the user of a function always knows what the return type is and always knows that if the function returns zero, everything is okay. By doing so, you avoid the mess of return types seen, for example, in the Win32 API and in POSIX, where you often have to waste your time decoding return values rather than coding your own solution:

int MyCreateThisOrThat(...)
{
    int result = 0;
    BOOL wrc;    /* Windows Return Code */

    wrc = CreateThisOrThat("foo", "bar", FLAG_THIS_OR_THAT);
    if (wrc == 0)
    {
        DWORD code = GetLastError();

        /* map the last error code (in 'code') to one of our own error codes */
        /* why not just make all APIs return the error code directly? */
        fail(ERROR_THIS_OR_THAT_CREATE);    /* hypothetical module error code */
    }

    ...

failed:
    return result;
}

The wrapper is then called using the standard error code protocol:

result = MyCreateThisOrThat("foo", "bar", FLAG_THIS_OR_THAT);
if (result != 0)
    fail(result);
...

Stay Awake

Avoid getting bogged down by the thousands of tips and tricks that exist for crude languages such as C and C++. Gather yourself an array of the 50 most useful tips and tricks, and let go of the rest. After all, you're paid to produce high-quality, simple code, not to display your mastery of zillions of arcane and advanced C and C++ tricks. As usual, try to keep the code as simple as at all possible. Initially, you'll perhaps feel dumb for doing so, but over time you'll get used to it and it will pay off very handsomely over the years as you master the art of making simple and yet advanced code.

DISCLAIMER

The advice is given as is without any inherent warranty or guarantee of suitability. I believe all of the advice to be pearls from a sage with 30+ years of experience in software development, but others might vehemently disagree. In that case, the others are wrong, and you should not listen to them.

I am in no way affiliated with any company mentioned in this document.

History

Version 1.00 (2007.02.25)

  1. Initial Version.

Version 1.01 (2008.??.??)

  1. Minor modifications to document header.

Version 1.02 (2008.??.??)

  1. Minor modifications to document header.

Version 1.03 (2008.12.30)

  1. Major revamp of document header, indentation, and code snippets so as to retain formatting when output as HTML file. The document was also revised to enhance readability.

Version 1.04 (2008.12.31)

  1. Fixed minor typos, minor spelling errors, and elaborated a bit on the use of the fail() macro.

Version 1.05 (2009.01.01)

  1. Added Table of Contents and added document title.

Version 1.06 (2009.02.03)

  1. Replaced references to Object Pascal and Python with references to Boo and C#.

Version 1.07 (2009.04.15)

  1. Changed the ugly Times New Roman font into the Calibri font.

Version 1.08 (2009.04.30)

  1. Polished the language a bit; removed some of the more rough and unappealing phrases.
  2. Fixed a Pythonese typo in the first assertion in the document ("and" became "&&").

Version 1.09 (2009.05.01)

  1. Changed the font size from 12 points to 11 points in the author header field.

Version 1.10 (2009.06.02)

  1. Made all links open up a new browser window (important in the HTML version).
  2. Added a link to the OpenWatcom compiler.

Version 1.11 (2009.06.12)

  1. Added standard page header and page footer.

Version 1.12 (2010.01.25)

  1. Fixed: The version history so that it includes the actual document versions.
  2. Added: Missing instance pointers in C example for attributes/properties.
  3. Added: Version number to document author line.

Version 1.13 (2010.02.06)

  1. Removed all references to Boo as C# seems the logical choice for C/C++ developers.

Version 1.14 (2010.09.01)

  1. Published on my personal page, which meant rewriting the entire document in the ScrewTurn Wiki Markup language.
  2. Fixed: Embarrassing 'pFoo == NULL' typos in samples; obviously, it should have been pFoo = NULL.

Version 1.15 (2010.10.18)

  1. Added: Color coding of C examples using the ScrewTurn Wiki SyntaxHighlighter plugin.

Version 1.16 (2012.06.25)

  1. Replaced GNU Make with CMake.
  2. Replaced CVS with Git.
  3. Deleted the "Donate Frequently" paragraph as it did not belong here.

Version 1.17 (2012.09.24)

  1. Fixed: Minor adjustments really not worth mentioning.
  2. Fixed: Rewrote the document from ScrewTurn Markup Language to MeWeb Markup Language.

Version 1.18 (2013.04.30)

  1. Fixed: A sample used retval instead of result, fixed thanks to Christian Smith.
  2. Fixed: A few remnants from the old Wiki: chapter headings including the {c:x} macro.

Version 1.19 (2015.02.07)

  1. Added: Published on the Symbiosis Software website as part of the library.
  2. Fixed: Claim that I am not affiliated with any company.