Seriously, I’d like to know, because this is a bit ridiculous.
For all the heavily encouraged coding styles out there, nearly all of the open source software packages I’ve had to compile for Linux From Scratch have either:
- Insanely chatty defaults for compilation; that is, GCC provides ‘notices’ about seemingly minor points, or
- A large number of warnings when compiling – unused variables, overloaded virtual functions, and deprecated features soon to disappear.
In the worst case, some of these warnings point at genuine potential problems in the source. Leaving potentially uninitialized variables around seems like a great way to run into runtime crashes if someone decides to use them. Overloading a virtual function with a different method signature silently hides the base-class version instead of overriding it, with much the same potential for trouble. And comparing signed and unsigned numbers is just a recipe for a crash or unpredictable behaviour down the line.
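To make that concrete, here’s a contrived snippet of my own (not from any of the packages I was building) that shows the sort of thing GCC complains about when compiled with something like `g++ -Wall -Wextra -O2`:

```cpp
// Made-up example: an uninitialized variable that only gets set on some paths.
#include <cstdio>

int scale(int mode) {
    int factor;              // never given an initial value
    if (mode == 1)
        factor = 10;
    else if (mode == 2)
        factor = 100;
    // mode == 3 slips through and 'factor' still holds garbage; with
    // optimization enabled GCC typically warns that 'factor' may be used
    // uninitialized in this function
    return factor * 2;
}

int main() {
    std::printf("%d\n", scale(3));   // unpredictable result at runtime
    return 0;
}
```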
I just don’t get it. In my previous development experience, compiler notifications were always something to pay attention to. Usually, when first developing an application, they were indicative of a typo, a forgotten variable or just general stupidity with the language. If a warning was absolutely unavoidable, it was specifically ignored in the build scripts with a clear explanation as to why.
So what’s the case with your programs? Have you noticed any stupid or insightful compiler messages scrolling past?
C and C++ aren’t strongly typed languages, so anything the compiler suspects might be undesirable to the programmer is surfaced as a warning rather than an error. For some of the issues you’ve outlined, it’s likely a combination of bad programming and bad legacy code; others are perfectly explainable.
Uninitialized variables actually aren’t that big a deal, assuming you set them (eventually) before using them. It’s the (pre-C99) C spec, which forced declarations to the top of a block, that’s more of an issue here. Additionally, some of the warnings you’re seeing are a result of how the compiler builds its internal compilation tree as it walks over the source; these ‘errors’ might never actually affect anything in the real world. A good example of this is any sort of plugin system where you don’t know at design time what’s actually going to be (or not be) on the other side.
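For example (a contrived sketch of my own, not code from any real package), the old declare-at-the-top style leaves a variable briefly uninitialized even though every path assigns it before use; more convoluted versions of this pattern are what trip up the compiler’s flow analysis:

```cpp
// Contrived illustration of pre-C99 style: declarations at the top of the
// block, values filled in later.  Harmless, because 'mode' is assigned on
// every path before it is read.
#include <cstdio>

int parse_mode(const char *arg) {
    int mode;                /* uninitialized here... */

    if (arg[0] == 'r')
        mode = 0;
    else
        mode = 1;            /* ...but always set before use */

    return mode;
}

int main() {
    std::printf("%d\n", parse_mode("rw"));
    return 0;
}
```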
As for overloading virtual functions incorrectly, it’s true that this can have disastrous consequences. At the same time, however, the very power that lets you do something crazy like this is also one of the reasons that C and C++ are still widely used today.
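Here’s a rough sketch (my own example, not lifted from anything real) of the pattern GCC’s -Woverloaded-virtual complains about:

```cpp
// The derived class declares a function with the same name but a different
// signature, so it hides the base-class virtual instead of overriding it.
#include <iostream>

struct Codec {
    virtual void write(int value) { std::cout << "base: " << value << "\n"; }
    virtual ~Codec() {}
};

struct HexCodec : Codec {
    // Different signature (long vs int): this hides Codec::write(int) rather
    // than overriding it, and GCC flags it with -Woverloaded-virtual.
    // Marking it 'override' (C++11) would turn the silent bug into a hard error.
    virtual void write(long value) { std::cout << "hex: " << std::hex << value << "\n"; }
};

int main() {
    HexCodec h;
    Codec &c = h;
    c.write(42);   // still calls Codec::write(int), probably not what was intended
    return 0;
}
```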
Another thing you mentioned is signed vs. unsigned integer comparisons. Keep in mind that comparing a signed and an unsigned integer is completely valid in every case except when one of the two (or both) numbers has the highest-order bit set. Likely the code you are compiling is interfacing with another library and the programmer was too lazy to fix an issue that would never arise in practice (i.e. as long as the values stay small enough that the high bit is never set). It could also just be that the compiler got smarter in newer versions and is displaying warnings that were previously missed. The fact that a signed vs. unsigned issue was just found in an open source implementation of Blowfish for the first time, 13 years after its release, is just another reason I’d rather have a compiler display a warning than not.
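For a contrived example (again my own, not from any real project) of how that comparison goes wrong once the values stop cooperating, this is what GCC’s -Wsign-compare (enabled by -Wall in C++) is warning about:

```cpp
// The signed operand is converted to unsigned before the comparison, so a
// negative value suddenly looks enormous.
#include <iostream>

int main() {
    int balance = -5;            // signed and negative
    unsigned int limit = 100;    // unsigned

    // -Wsign-compare fires here: 'balance' is converted to unsigned, so -5
    // becomes 4294967291 on a 32-bit int and the test goes the wrong way.
    if (balance < limit)
        std::cout << "under the limit\n";   // what the author expected
    else
        std::cout << "over the limit\n";    // what actually runs

    return 0;
}
```

As long as both values are non-negative and small enough, the comparison behaves exactly as you’d expect, which is why code like this can sit around unnoticed for years.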
@Tyler B
The worst part is that most of these packages also add -Werror, which turns every warning into an error and halts the build entirely.
@Tyler B
Yeah, I guess my point was not so much that I don’t want to see these warnings (which are admittedly pretty nasty), but that the developers should be fixing them if there are so many. Even someone like me, who’s more of a sysadmin than a real programmer, could fix a signed vs. unsigned warning or remove unused variables.
@Dave L
In that case they might be using an older compiler version that was more relaxed?
Drives me nuts too. The other problem is when you’re using a header-only library and its warnings become your warnings.