# C++ Then and Now

I’ve been working with C++ in large-scale software development for a long time and feel very comfortable with its concepts, syntax and constructs. The much-awaited C++0x standard was finally published in 2011 and formally named C++11. I glanced over the changes and used some of the new features as and when needed. I never realized the extent of the changes until I had to debug a third-party math library. There were constructs I didn’t understand, and I was clueless about much of the new syntax; it felt like an unfamiliar language.

> What do you think of C++11?
> It may be the most frequently asked question. Surprisingly, C++11 feels like a new language: The pieces just fit together better than they used to and I find a “higher-level style of programming” more natural than before and as efficient as ever. If you timidly approach C++ as just a better C or as an object-oriented language, you are going to miss the point. The abstractions are simply more flexible and affordable than before.
>
> – Bjarne Stroustrup

Below is a contrived code sample which uses many of the new language and library features. If you can understand the code, that’s good. If you know how to use it, that’s even better. But if you can justify when and why it should be used, you’ve probably crossed the Modern C++ barrier.

```cpp
#include <algorithm>
#include <array>
#include <iostream>
#include <optional>
#include <string>
#include <unordered_map>

using UnorderedMap = std::unordered_map<std::string, std::string>;

auto modernCpp1() -> void
{
    UnorderedMap umap{ {u8"Linux", "path/to/dir"}, {"Windows", R"(path\to\dir)"} };
    for (auto&& [first, second] : umap)
        std::cout << second << "\t" << first << std::endl;
}

[[nodiscard]] auto modernCpp2() -> std::optional<int>
{
    constexpr int len = 2*3;
    std::array<int, len> a = { 0, 1'000, 2'000, 3'000, 4'000, 5'000 };

    auto sum{ 0 };
    std::for_each( std::begin(a), std::end(a), [&sum](int val)->void { sum += val; } );

    return sum > 0 ? std::optional<int>(sum) : std::nullopt;
}

int main()
{
    modernCpp1();

    if (auto ret = modernCpp2(); ret.has_value())
        std::cout << "sum = " << ret.value() << std::endl;

    return 0;
}
```


For the uninitiated, observe the following.

• using keyword, instead of typedef
• Trailing return type
• unordered_map, a new container type
• String literal enhancements with u8 and R prefix
• Initialization with braces
• RValue reference
• auto type inference
• Structured binding
• Range based for loops
• Attributes
• constexpr
• array, a new container type
• Digit separator
• Lambda function
• optional data structure
• Selection statement with initializer

To have the right context, these are the published C++ standards.

| Standard | Type of update |
|----------|------------------------|
| C++98 | Major update |
| C++03 | Minor update |
| C++11 | Major update |
| C++14 | Minor update |
| C++17 | Minor update |
| C++20 | Likely a Major update |
• The differences between C++98 and C++03 are so few and so technical that they ought not concern users.
• Modern C++ means C++11 and later, including the draft C++20.
• References to C++11 typically include C++14 and C++17 as well.

The following are some of the guiding principles of the C++ committee in the development of the new standard. For more details, see the general and specific design goals of C++11.

• Preferring standard library additions over changes to the language
• Improving abstraction mechanisms rather than solving narrow use cases
• Increasing type safety
• Improving performance
• Maintaining the zero overhead principle, which means no overhead from unused features
• Maintaining backwards compatibility

Modern C++ does feel like a new language. The changes are not all incremental; some of the new concepts require a ground-up understanding of the basics. It adds much-needed functionality and addresses many of the “shortcomings” perceived by users of other programming languages.

“Then” was abstraction and performance; “Now” is higher-level abstraction and better performance. There are plenty of features that make the code safer and developers more productive. It provides full backward compatibility, so there is no need to panic; adopt it incrementally. Even if you choose not to actively adapt your code base, you may still have to read third-party code. As time goes by, new developers will prefer the modern techniques. Like the transition from C to C++, it will now be Classic C++ to Modern C++. Just don’t get left behind!

Here’s a valid line of code in Modern C++.

```cpp
[](){}();
```

See Modern C++ features for a comprehensive list of all the new language and library changes.

# A brute force method of reducing superfluous header includes in a project

Header files are part and parcel of the C/C++ programming language. However, the number of header file includes goes out of control very quickly. In most C/C++ based projects, maintaining minimal or optimal header includes is a challenge. Sooner or later, you will find many unnecessary header includes in the source files.

This causes a few problems.

• Every unnecessary include adds to compilation time, multiplied across every translation unit that includes it.
• In an incremental build, every unnecessary include potentially increases the number of files that get recompiled.
• Refactoring and reorganizing code becomes difficult.

There are free tools to identify dependencies, but reducing the superfluous dependencies is a painful manual task. There are also some expensive heavy-duty tools. This, however, is a simple and free alternative; not perfect, but quite effective.

It’s a brute force method which leverages the compiler to identify true dependencies. For each file, it comments out an include and builds the project. If the build succeeds, it is assumed that the header is not required. If the build fails, the include is uncommented (and built again as a sanity check). It is recommended to run it on all the .h files first and then on the .cxx files.

The script files can be accessed from GitHub at https://github.com/cognitivewaves/misc/tree/master/check-header-includes.

As mentioned earlier, it’s not perfect, as it does not identify changes in behavior due to the order of includes. E.g. there may be subtle changes in behavior if some macros are redefined differently depending on #ifdefs from a previous header identified as “not required”. However, it is nice to have a tool which gets rid of the “obvious” and “silly” superfluous dependencies. So it is best to review the identified unnecessary headers before committing the change.

Currently, the script works only on Windows using Visual Studio projects. But it is easy enough to replicate it on Linux and other compilers.

# Why learn Vi?

If you ever have to work on a Linux system, it is well advised to have a basic knowledge of using the Vi editor. This is not to be confused with the editor war. Here I’m only highlighting the practical benefits of being familiar with Vi.

• It is installed by default (seen as a standard system utility) and is available on all Linux distributions since it is part of the POSIX standard as well as the Single UNIX Specification. All other editors (including nano, emacs) are optional or additional installations.
• It is a lightweight application and can work in stripped down versions of Linux.
• It is a console based text editor which works without a Graphical User Interface. This comes in handy especially when logging into a machine remotely, which is quite common on Linux.
• Vi-style key bindings are used by default in common tools like man and less, and commands like git and crontab invoke vi by default for editing.

As much as new users find it painful, some users get along fine with vi in small doses. For those coming from a Windows background, learning vi/Vim by comparison with a typical GUI text editor is recommended.

Note that Vi and Vim (Vi IMproved) are not the same. Vim is based on the Vi editor and is an extended version with many additional features. Vim has nevertheless been described as “very much compatible with Vi”. When possible, install Vim, which is an additional package; it is more “user friendly” than standard Vi.

# Shared Responsibility

You buy “stuff” every day. Some of it is essential, but much of it you probably don’t need. In any case, imagine two scenarios.

Scenario 1

You go to a store to buy, and like the look and feel of it. You pick it up and head out of the store. No payment, no receipt, no credit card swipe. Then one day, when you have used it enough and feel that it was worth it, you pay for it, and you pay what you think it was worth. No price tag, no time limits, no collection calls, just your moral obligation.

Scenario 2

You are enticed, cajoled, convinced or fooled into buying it. Pay for it upfront with limited warranty on the product, no guarantee of satisfaction and very few options of getting your money back.

Which one would you choose? Obviously scenario 1, isn’t it?
Not just because it is free until you decide to pay for it, but also because YOU are always in control.

Does it sound too idealistic? Are there even such products and services?
Yes, and many that you are likely using quite regularly too, but may not even be aware of it.

Open source software is modeled exactly on the first scenario. Furthermore, many of these are ad-free. Do you rely on Wikipedia, or use Mozilla Firefox or prefer Linux (more accurately GNU/Linux) or any of the thousands of “free” software out there?

Most people, including myself, agree to this idea of our shared responsibility towards the systems and software that are made available to us for “free”. We all understand that there is cost (monetary, manpower, administration, etc.) and hence it is not free in the true sense. Someone, somewhere, is paying for it. Someone has taken up the burden of our missing contribution, however minuscule it may be.

Yet, when it comes to acting on it, we defer, procrastinate and finally pass on it, expecting and hoping that someone else will sustain it. I was no exception. I would go places and spend on food and drinks that were more expensive than they were worth, but didn’t make the much-needed contribution. The monetary contribution doesn’t have to be much, and yet we don’t make it. This is bystander apathy, a very regressive attitude for a society.

Finally, in November 2013, I committed myself to making a contribution, as little as USD 10, to a few of the software projects that I use regularly. I did not go bankrupt (obviously) and life is better now that I have fulfilled my shared responsibility. Having taken that first step, I am now committed to contributing every year.

> If everyone reading this chipped in $3, we would be supported for another year. – Mozilla Firefox

> If all our past donors simply gave again today, we wouldn’t have to worry about fundraising for the rest of the year. – Jimmy Wales, Wikipedia

Such ecosystems can only exist and sustain with voluntary collective contributions. There are many ways to participate but financial contribution is important. The ball is in your court. Participate in any way possible and fulfill your shared responsibility. I promise you, take that first step and make that contribution. It will give you a sense of satisfaction.

# Cognitive Traps

## A very interesting perspective into our typical thought process.

From “The Ascent of Money” by Niall Ferguson

Availability bias, which causes us to base decisions on information that is more readily available in our memories, rather than the data we really need.

Hindsight bias, which causes us to attach higher probabilities to events after they have happened (ex post) than we did before they happened (ex ante).

The problem of induction, which leads us to formulate general rules on the basis of insufficient information.

The fallacy of conjunction (or disjunction), which means we tend to overestimate the probability that seven events of 90 percent probability will all occur, while underestimating the probability that at least one of seven events of 10 percent probability will occur.

Confirmation bias, which inclines us to look for confirming evidence of an initial hypothesis, rather than falsifying evidence that would disprove it.

Contamination effects, whereby we allow irrelevant but proximate information to influence a decision.

The affect heuristic, whereby preconceived value judgements interfere with our assessment of costs and benefits.

Scope neglect, which prevents us from proportionately adjusting what we should be willing to sacrifice to avoid harms of different orders of magnitude.

Overconfidence in calibration, which leads us to underestimate the confidence intervals within which our estimates will be robust (e.g. to conflate the ‘best case’ scenario with the ‘most probable’).

Bystander apathy, which inclines us to abdicate individual responsibility when in a crowd.