Header files are part and parcel of the C/C++ programming language. However, the number of header file includes can go out of control very quickly. In most C/C++ based projects, maintaining minimal or optimal header includes is a challenge. Sooner or later, you will find many unnecessary header includes in the source files.
This causes a few problems.
There are free tools to identify dependencies, but reducing the superfluous dependencies is a painful manual task. Then there are expensive, heavy-duty tools. This, however, is a simple and free alternative: not perfect, but quite effective.
It’s a brute force method which leverages the compiler to identify true dependencies. For each file, the script comments out an include and builds the project. If the build succeeds, it is assumed that the header is not required. If the build fails, the include is uncommented (and built again as a sanity check). It is recommended to run it on all the .h files first and then on the source (.cpp) files.
The script files can be accessed from GitHub at https://github.com/cognitivewaves/misc/tree/master/check-header-includes.
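To make the idea concrete, here is a minimal C++ sketch of the same brute-force loop for a single file. This is not the script from the repository; the build command is a placeholder that must be replaced with the project’s actual build invocation (an msbuild call for Visual Studio projects, make on Linux, etc.).

```cpp
#include <cstdlib>
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

// Placeholder build command -- replace with the project's real build
// invocation, e.g. "msbuild Project.vcxproj" on Windows or "make" on Linux.
static bool buildSucceeds() {
    return std::system("make > build.log 2>&1") == 0;
}

int main(int argc, char* argv[]) {
    if (argc < 2) {
        std::cerr << "usage: check-includes <source-file>\n";
        return 1;
    }
    const std::string path = argv[1];

    // Read the file into memory, one entry per line.
    std::vector<std::string> lines;
    {
        std::ifstream in(path);
        for (std::string line; std::getline(in, line);)
            lines.push_back(line);
    }

    auto writeFile = [&path](const std::vector<std::string>& content) {
        std::ofstream out(path, std::ios::trunc);
        for (const std::string& l : content)
            out << l << '\n';
    };

    for (std::size_t i = 0; i < lines.size(); ++i) {
        if (lines[i].find("#include") == std::string::npos)
            continue;

        std::vector<std::string> modified = lines;
        modified[i] = "// " + lines[i];          // comment out this include
        writeFile(modified);

        if (buildSucceeds()) {
            std::cout << "possibly unnecessary: " << lines[i] << '\n';
            lines = modified;                    // keep it commented out
        } else {
            writeFile(lines);                    // restore the include
            if (!buildSucceeds())                // sanity check the restored state
                std::cerr << "warning: build still fails after restoring\n";
        }
    }
    writeFile(lines);
    return 0;
}
```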
As mentioned earlier, it’s not perfect, as it does not identify changes in behavior due to the order of an “unnecessary” header. For example, there may be subtle changes in behavior if some macros are redefined differently depending on #ifdefs from a previous header identified as “not required”. However, it is nice to have a tool which gets rid of the “obvious” and “silly” superfluous dependencies. So it is best to review the identified unnecessary headers before committing the changes.
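A contrived illustration of that subtlety (the file and macro names are made up):

```cpp
// feature.h (hypothetical) -- enables the large-buffer configuration.
#define USE_LARGE_BUFFERS

// config.h (hypothetical) -- picks a value depending on what was defined before it.
#ifdef USE_LARGE_BUFFERS
  #define BUFFER_SIZE 4096
#else
  #define BUFFER_SIZE 256
#endif

// consumer.cpp -- still compiles if feature.h is flagged as "not required"
// and removed, but BUFFER_SIZE silently changes from 4096 to 256.
#include "feature.h"
#include "config.h"

char buffer[BUFFER_SIZE];
```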
Currently, the script works only on Windows using Visual Studio projects. But it is easy enough to replicate it on Linux and other compilers.
If you ever have to work on a Linux system, you are well advised to have a basic knowledge of the vi editor. This is not to be confused with the editor war. Here I’m only highlighting the practical benefits of being familiar with vi.
As much as new users find it painful, some get along fine with vi in small doses. For those coming from a Windows background, learning vi/Vim by comparison with a typical GUI text editor is recommended. Note that vi and Vim are not the same. When possible, install Vim (Vi IMproved), which is an additional package and more “user friendly” than standard vi.
Lately, you’ve probably heard a lot about WebGL and how it is transforming graphics rendered in a web browser. The consortium definition is quoted below. As highlighted in the quotation, here I will show the similarities between desktop OpenGL and browser WebGL.
You can access the code at https://github.com/cognitivewaves/OpenGL-Render.
Desktop OpenGL:
Browser WebGL:
Most people, including myself, agree to the idea of our shared responsibility towards the systems and software that are made available to us for “free”. We all understand that there is cost (monetary, manpower, administration, etc.) and hence it is not free in the true sense. Someone, somewhere is paying for it. Someone has taken up the burden of our missing contribution, however minuscule it may be.
Yet, when it comes to acting on it, we defer, procrastinate and finally pass on it, expecting and hoping that someone else will sustain it. I was no exception. I would go places and spend on food and drinks that cost more than they were worth, but didn’t make the much needed contribution. It is not that the monetary contribution has to be large, and yet we don’t make it. This is bystander apathy, a very regressive attitude for a society.
Finally, in November 2013, I committed myself to making a contribution, as little as USD 5, to a few of the software products that I use regularly. I did not go bankrupt (obviously) and life is better now that I have fulfilled my shared responsibility. Having taken that first step, I am now committed to contributing every year.
If everyone reading this chipped in $3, we would be supported for another year – Mozilla Firefox
If all our past donors simply gave again today, we wouldn’t have to worry about fundraising for the rest of the year – Jimmy Wales, Wikipedia
I promise you, take that first step and make that contribution. It will give you a sense of satisfaction.
I had spent a fair amount of time on OpenGL about 10 years back, though I wouldn’t call myself an expert. Over these 10 years, I noticed OpenGL evolving and kept pace with it from the outside. Then came WebGL, and I wanted to get my hands dirty. That’s when I realized that I was way out of touch. As they say, the devil is in the details. All the terminology and jargon just wasn’t adding up. So I went back to basics.
Here is an attempt to summarize the evolution and status of OpenGL. It’s not meant to be an introduction to OpenGL but more for those who want to go from “then” to “now” in one page. For more details, see OpenGL – VBO, Shader, VAO.
Traditionally, all graphics processing was done in the CPU, which generated a bitmap (pixel image) in the frame buffer (a portion of RAM) and pushed it to the video display. The Graphics Processing Unit (GPU) changed that paradigm: specialized hardware to do the heavy graphics computations. The GPU provided a set of “fixed” functions to do some standard operations on the graphics data, referred to as the Fixed Function Pipeline (FFP). Though the Fixed Function Pipeline was fast and efficient, it lacked flexibility. So GPUs introduced the Programmable Pipeline, the programmable alternative to the “hard coded” approach.
OpenGL 1.0 (Classic OpenGL) provided libraries to compute on the CPU and interfaced with the Fixed Function Pipeline. OpenGL 2.0 (and higher) adds the Programmable Pipeline API.
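For reference, a classic fixed function (immediate mode) snippet looks roughly like this, assuming a GL context has already been created by a windowing toolkit such as GLUT:

```cpp
// Classic OpenGL 1.x immediate mode -- the application hands vertices to the
// driver one call at a time; the fixed pipeline transforms and rasterizes them.
#include <GL/gl.h>

void drawTriangle() {
    glMatrixMode(GL_MODELVIEW);      // fixed-function transformation state
    glLoadIdentity();

    glBegin(GL_TRIANGLES);           // immediate mode: one vertex per call
    glColor3f(1.0f, 0.0f, 0.0f);
    glVertex3f(-0.5f, -0.5f, 0.0f);
    glColor3f(0.0f, 1.0f, 0.0f);
    glVertex3f(0.5f, -0.5f, 0.0f);
    glColor3f(0.0f, 0.0f, 1.0f);
    glVertex3f(0.0f, 0.5f, 0.0f);
    glEnd();
}
```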
OpenGL ES is OpenGL for Embedded Systems for mobile phones, PDAs, and video game consoles, basically for devices with limited computation capability. It consists of well-defined subsets of desktop OpenGL. Desktop graphics card drivers typically did not support the OpenGL-ES API directly. However, as of 2010 graphics card manufacturers introduced ES support in their desktop drivers and this makes the ES term in the specification confusing. OpenGL ES 2.0 is based on OpenGL 2.0 with the fixed function API removed.
The Programmable Pipeline requires a Program which is “equivalent” to the functions provided by the Fixed Function Pipeline. These programs are called Shaders. The programming language for shaders used to be assembly language, but as the complexity increased, high-level languages for GPU programming emerged, one of which is the OpenGL Shading Language (GLSL). Like any program, the Shader program needs to be compiled and linked. However, the Shader code is loaded to the GPU, compiled and linked at runtime using APIs provided by OpenGL.
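A minimal sketch of that runtime compile-and-link sequence with the OpenGL 2.0+ API (error checks via glGetShaderiv/glGetProgramiv are omitted for brevity, and the GLSL source strings are assumed to be already loaded into memory):

```cpp
// Compile and link a GLSL shader program at runtime.
GLuint buildProgram(const char* vertexSrc, const char* fragmentSrc) {
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vertexSrc, nullptr);   // hand the source to the driver
    glCompileShader(vs);                          // compiled by the driver at runtime

    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &fragmentSrc, nullptr);
    glCompileShader(fs);

    GLuint program = glCreateProgram();
    glAttachShader(program, vs);
    glAttachShader(program, fs);
    glLinkProgram(program);                       // linked into a single GPU program

    glDeleteShader(vs);                           // no longer needed once linked
    glDeleteShader(fs);
    return program;
}

// Usage: call glUseProgram(buildProgram(vsSource, fsSource)) before issuing draw calls.
```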
So, modern OpenGL is great, except it makes learning graphics programming harder (much harder). It is generally easier to teach new graphics programmers using the Fixed Function Pipeline. The ideas behind Shaders are pretty complicated, and the minimum required knowledge of basic 3D programming (vertex creation, transformation matrices, lighting, etc.) is substantial. There is a lot more code to write and many more places to screw up. A classic fixed function OpenGL programmer was oblivious to most of these nasty details.
“Then” was Fixed Function Pipeline and “Now” is Programmable Pipeline. Much of what was learned then must be abandoned now. Programmability wipes out almost all of the fixed function pipeline, so the knowledge does not transfer well. To make matters worse, OpenGL has started to deprecate fixed functionality. In OpenGL 3.2, the Core Profile lacks these fixed-function concepts. The compatibility profile keeps them around.
The transition in terms of code and philosophy is detailed in OpenGL – VBO, Shader, VAO.
You buy “stuff” every day. Some of it is essential, but much of it you probably don’t need. In any case, imagine two scenarios.
Scenario 1: You go to a store, like the look and feel of something, pick it up, and head out of the store. No payment, no receipt, no credit card swipe. Then one day, when you have used it enough and feel that it was worth it, you pay for it, and you pay what you think it was worth. No price tag, no time limits, no collection calls, just your moral obligation.
Scenario 2: You are enticed, cajoled, convinced or fooled into buying something. You pay for it upfront, with a limited warranty on the product, no guarantee of satisfaction, and very few options for getting your money back.
Which one would you choose? Obviously scenario 1, isn’t it?
Not just because it is free until you decide to pay for it, but also because YOU are always in control.
Does it sound too idealistic? Are there even such products and services?
Yes, and many that you are likely using quite regularly too but may not even be aware of it.
Open source software is modeled exactly on the first scenario. Furthermore, many of these are ad-free. Do you rely on Wikipedia, or use Mozilla Firefox or prefer Linux (more accurately GNU/Linux) or any of the thousands of “free” software out there?
Such ecosystems can only exist and be sustained through voluntary collective contributions. There are many ways to participate, but financial contribution is important. The ball is in your court. Participate in any way possible and fulfill your shared responsibility.
The question ‘should I add a null pointer check?’ gets a very simple and obvious ‘YES’ from the majority of software developers. Their reasoning is equally simple.
These are valid statements, but the answer is not that simple. Though a crash indicates poor quality software, the absence of one is no guarantee of good quality. The primary goal of any software is to provide functionality in a reliable and efficient manner. Not crashing, though good, is useless (and often detrimental in engineering applications) if the behaviour is incorrect.
This perspective comes from my experience building software for engineers. It simulates assembling and analyzing complex designs with hundreds or thousands of parts and assemblies. Component sizes range from large parts to small hidden nuts and bolts, and it is visually impossible to confirm the accuracy of the model. There is no room for ‘possibly unknown’ errors, as these components will eventually be manufactured and assembled. The cost of a manufacturing error (because of inaccurate and unreliable software, even though it never crashed) is far greater than the cost of a software crash and reworking the model.
Defensive programming (to prevent a crash) can easily lead to bad software development.
Some alternatives are generally suggested as it is very hard to accept a fatal error.
NULL checks out of paranoia should be avoided. However, there are some legitimate uses for them.
For the faint-hearted who feel this approach is too radical, there is a sort of middle ground.
My recommendation is not to do defensive programming without a reason (and such reasons are rare). Keep in mind that every line of code is supposed to be hit at some point; otherwise it is dead code. The bottom line: don’t fear the crash, leverage it.
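As a contrived illustration of the difference (the function and type names here are hypothetical, not from any real product): the defensive version quietly produces a wrong total, while the fail-fast version makes the caller’s bug impossible to miss.

```cpp
#include <cassert>

struct Part { double mass; };

// Defensive version: the null check hides the caller's bug and quietly
// returns a wrong total -- the model looks fine but the numbers are not.
double totalMassDefensive(const Part* const* parts, int count) {
    double total = 0.0;
    for (int i = 0; i < count; ++i) {
        if (parts[i] == nullptr)      // paranoia: "just in case"
            continue;                 // silently skips a part
        total += parts[i]->mass;
    }
    return total;
}

// Fail-fast version: a null part is a programming error in the caller,
// so make it loud -- crash early rather than mis-compute.
double totalMass(const Part* const* parts, int count) {
    double total = 0.0;
    for (int i = 0; i < count; ++i) {
        assert(parts[i] != nullptr && "caller must not pass null parts");
        total += parts[i]->mass;
    }
    return total;
}
```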
If you develop software only on Windows using Visual Studio, it’s a luxury. Enjoy it while it lasts. Sooner or later, you will come across Makefiles, maybe while exploring some software on Linux, or through the misfortune of having a build system that uses Cygwin on Windows.
Now you figure out that Makefiles are text files and open one in an editor, hoping to get some insight into its workings. But what do you see? Lots of cryptic, hard-to-understand syntax and expressions.
So, where do you start? Internet searches for Makefiles provide a lot of information, but under the assumption that you come from a non-IDE Unix/Linux development environment. Pampered Visual Studio developers are never the target audience.
Here I will try to relate Makefiles to the Visual Studio build system, which will hopefully make them easier to understand. The goal is not to provide yet another tutorial on makefiles (there are plenty available on the internet) but to instill the concept by comparison.
See Makefiles and Visual Studio for a Visual Studio friendly introduction to the Make utility.
From “The Ascent of Money” by Niall Ferguson
Availability bias, which causes us to base decisions on information that is more readily available in our memories, rather than the data we really need.
Hindsight bias, which causes us to attach higher probabilities to events after they have happened (ex post) than we did before they happened (ex ante).
The problem of induction, which leads us to formulate general rules on the basis of insufficient information.
The fallacy of conjunction (or disjunction), which means we tend to overestimate the probability that seven events, each of 90 percent probability, will all occur, while underestimating the probability that at least one of seven events, each of 10 percent probability, will occur.
Confirmation bias, which inclines us to look for confirming evidence of an initial hypothesis, rather than falsifying evidence that would disprove it.
Contamination effects, whereby we allow irrelevant but proximate information to influence a decision.
The affect heuristic, whereby preconceived value judgements interfere with our assessment of costs and benefits.
Scope neglect, which prevents us from proportionately adjusting what we should be willing to sacrifice to avoid harms of different orders of magnitude.
Overconfidence in calibration, which leads us to underestimate the confidence intervals within which our estimates will be robust (e.g. to conflate the ‘best case’ scenario with the ‘most probable’).
Bystander apathy, which inclines us to abdicate individual responsibility when in a crowd.