Using RTTI in C++

RTTI in C++ is great when you need it, but there are a few gotchas – like the one I fell into recently:

I have a vector of pointers and I want to be able to display the name of the actual class of the instance pointed to. (Yes, this is for programmers’ use; no one else needs this information.)

Unfortunately I declared the vector like this:

class Foo;  // forward declaration; the full definition lives in Foo.h

std::vector<Foo*> myList;

Now, iterating through the list works OK, but (and here’s the clue I missed) calling member functions won’t compile unless the compiler sees the full definition of Foo.

Now, I was able to call member functions, because those calls were made in another module (which included the full definition of Foo), but in the same header file this inline function didn’t work:

std::string getName(Foo* foo){ return typeid(*foo).name(); }  // only the forward declaration of Foo is visible here

It always returned the name “Foo” and not the name of the actual derived class (which might be “Bar”).

All I had to do was move the implementation of this function into the module file, making sure I had:

#include "Foo.h"

near the top.

This meant that the compiler could see the full definition of Foo, and in particular that Foo is polymorphic, so typeid(*foo) performs its run-time lookup and returns the correct type_info structure for the actual derived class.
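For reference, here’s a minimal sketch of the arrangement that works (the class names are hypothetical, and note that name() actually returns an implementation-defined string, which may be mangled):

// Foo.h
#include <string>

class Foo {
public:
    virtual ~Foo() {}  // Foo must be polymorphic for typeid to see the dynamic type
};

class Bar : public Foo {};

std::string getName(Foo* foo);

// Foo.cpp
#include <typeinfo>
#include "Foo.h"  // full definition visible here

std::string getName(Foo* foo) {
    return typeid(*foo).name();  // run-time lookup: reports Bar for a Bar instance
}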


Migration from Windows Live Spaces

Hmm, not going too well so far.

I’ve got a new WordPress blog, but no migration of contents from Windows Live Spaces :-(

Edit: Woohoo (_8-o) I tried migration again and it worked! See:

https://quamrana.wordpress.com/2010/02/16/a-ramble-about-markdown/

https://quamrana.wordpress.com/2009/09/25/dont-use-using-namespace/

https://quamrana.wordpress.com/2008/09/11/refactoring-vs-ocp/

https://quamrana.wordpress.com/2008/09/01/code-divisions/

https://quamrana.wordpress.com/2005/01/10/to-document-or-not-to-document/


A Ramble about Markdown

Ok, so this is a ramble, so it won’t necessarily make sense.
I’m trying to re-implement markdownsharp: http://code.google.com/p/markdownsharp/ , but it’s quite difficult.
I want to do it as a state machine, but I’ve already had to restart development because some parts (typically headers) are tricky: you have to rewrite previously parsed input.
The other tricky part is keeping the existing tests passing when they run through my implementation. I’ve refactored the existing NUnit tests so that exactly the same test conditions can be applied to either implementation. I’ve actually split the tests into two: tests just for the existing implementation and tests just for the new implementation. Plus there is a file of tests that both inherit, which I call Common tests. This has helped me ‘write’ just one failing test at a time. I just move an existing test for the existing implementation into Common tests; then it’s applied to both implementations and, surprise surprise, my implementation fails. So I write some implementation and all tests pass again.
Except that my reasonably regular implementation produces reasonably regular output, which is not what the existing tests expect, so I have to keep tweaking my implementation to be just as irregular.
So, in the end, I hope to factor out the irregular parts so that the irregularity is programmable; removing it should then produce more regular, and possibly smaller, output.
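For what it’s worth, the Common-tests arrangement boils down to something like this (a minimal sketch in C++ rather than C#, with hypothetical names standing in for the real classes):

#include <cassert>
#include <string>

struct MarkdownConverter {
    virtual ~MarkdownConverter() {}
    virtual std::string transform(const std::string& input) = 0;
};

struct ExistingConverter : MarkdownConverter {
    std::string transform(const std::string& input) {
        return "<p>" + input + "</p>\n";  // stand-in for the existing implementation
    }
};

struct StateMachineConverter : MarkdownConverter {
    std::string transform(const std::string& input) {
        return "<p>" + input + "</p>\n";  // stand-in for the new implementation
    }
};

// The Common tests: run unchanged against whichever implementation is passed in.
void runCommonTests(MarkdownConverter& converter) {
    assert(converter.transform("hello") == "<p>hello</p>\n");
}

int main() {
    ExistingConverter existing;
    StateMachineConverter stateMachine;
    runCommonTests(existing);      // tests for the existing implementation
    runCommonTests(stateMachine);  // exactly the same tests for the new one
    return 0;
}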

DON’T USE ‘USING NAMESPACE’

Sorry, was I shouting?

Just don’t do it!

What I mean is: when you are writing a C++ header, never, ever use ‘using namespace std;’ in the header.
It does mean that you have to write things like ‘std::string’ and ‘std::cout’, but that’s a small price to pay compared with mixing names from a namespace into the global namespace and ending up with collisions.
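A minimal sketch of a header written this way (the names are made up for illustration):

// logger.h
#include <iostream>
#include <string>

// Fully qualified names throughout: no using-directive required,
// and nothing leaks into the global namespace of every includer.
void log(const std::string& message);

inline void logTo(std::ostream& out, const std::string& message) {
    out << message << '\n';
}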

Although you can use it in a module file, you still shouldn’t (even though I’ve done it myself), because you may start writing a class definition, local to the module, which in time migrates to a header file of its own. It may erroneously take the ‘using namespace std;’ with it, or you find that you need to edit the new header to put in the missing ‘std::’ prefixes on the symbols you’ve used.

Just don’t do it!

Refactoring vs OCP?

I gave a presentation on the Open/Closed Principle to my colleagues recently, and in my preparation I wanted to explain when you do actually change the source code.
 
However, I had already done some slides where I took some examples of code that were open/closed and refactored them so that adding a new class brought in some new functionality that was itself open/closed.
 
This was the point I was stumbling on.  I wanted to show how you sometimes have to change the code, ie not being open/closed for a short while.  But then you’re open/closed again and then you can add new code for a while without changing any existing code.
 
But I was just refactoring a little.
 
Then it hit me that refactoring, by definition, IS changing code, but not changing functionality, and OCP, by definition, is NOT changing existing code, but still introducing new functionality.
 
Isn’t that weird? Two things that are opposites, but they go together just great!
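To make the OCP half concrete, here’s a minimal sketch (the shape classes are hypothetical):

#include <cstddef>
#include <vector>

struct Shape {
    virtual ~Shape() {}
    virtual double area() const = 0;
};

struct Square : Shape {
    double side;
    explicit Square(double s) : side(s) {}
    double area() const { return side * side; }
};

// Existing code: closed for modification, open for extension.
double totalArea(const std::vector<Shape*>& shapes) {
    double total = 0.0;
    for (std::size_t i = 0; i < shapes.size(); ++i)
        total += shapes[i]->area();
    return total;
}

// New functionality arrives as a new class; nothing above changes.
struct Circle : Shape {
    double radius;
    explicit Circle(double r) : radius(r) {}
    double area() const { return 3.14159265358979 * radius * radius; }
};

int main() {
    Square square(2.0);
    Circle circle(1.0);
    std::vector<Shape*> shapes;
    shapes.push_back(&square);
    shapes.push_back(&circle);
    return totalArea(shapes) > 0.0 ? 0 : 1;
}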
 

Code Divisions

I want to talk about code coverage – that is, line coverage in the context of running unit tests. There’s a lot said about how there’s either not much coverage of software or, when there is, how misleading the figures can be.
Just to be clear, I’m only going to talk about line coverage – whether a line was ever executed during a single test run. My experience doesn’t yet extend to branch coverage.
I’ve been using TDD for several years to write software, but I’ve not had the tools to check my coverage. The theory of TDD, at least, seems to indicate that very high coverage should be possible, so when I got my hands on a coverage tool I was eager to see. The first piece of code I used it on was a DLL written in C++. This straight away had the advantage that unit tests should be able to reach all parts of the code, since it was a library: there was no program start-up code in main() that might be missed. Well, I can report that even though the code made network calls, I could get 100% line coverage.
Now, for convenience’s sake, I split the code into at least two divisions: code that made network calls and code that didn’t. My main effort was to develop the library, through TDD, without having to wait for network round-trips, so I started off by mocking out the networking and testing everything else. Once I wanted to plug in network calls, I made an additional unit test app that made network calls through the library. That ran slowly, but doing it last meant that I didn’t have to run it very often.
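The division boils down to a seam like this (a minimal sketch; the interface and class names are hypothetical):

#include <cassert>
#include <string>

// The seam: everything that touches the network sits behind this interface.
struct Transport {
    virtual ~Transport() {}
    virtual std::string send(const std::string& request) = 0;
};

// Business logic depends only on the interface, so it can be driven to
// 100% line coverage without a single network round-trip.
class Client {
public:
    explicit Client(Transport& transport) : transport_(transport) {}
    std::string fetchGreeting(const std::string& name) {
        return transport_.send("GREET " + name);
    }
private:
    Transport& transport_;
};

// In the unit tests, a fake stands in for the real socket code.
struct FakeTransport : Transport {
    std::string send(const std::string& request) {
        return "HELLO " + request.substr(6);  // canned response
    }
};

int main() {
    FakeTransport fake;
    Client client(fake);
    assert(client.fetchGreeting("bob") == "HELLO bob");
    return 0;
}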
Eventually I ended up with 100% line coverage and I didn’t have to try very hard because TDD gave me a big boost.
Well, that’s all well and good for greenfield development where there is only a small DLL to write, but what about all the legacy code that we have, especially the bits that call out to time-consuming libraries: networking, database, file system and so on?
Well, I want to propose a general scheme of divisions. That is, dividing the code up into primary parts that have 100% line coverage – things like business logic and algorithms and anything else that can have its difficult bits mocked out – and everything else.
I propose that development proceeds by having unit tests maintained for the production code where each module or file records either 100% or 0% line coverage.  Then the idea is to work out ways of increasing the unit test coverage and move code from modules of 0% coverage into modules of 100% coverage.
It then becomes easy to see if you violate the divisions, because either a module moves off 0% and becomes partially tested, or drops from 100% to partially tested.  You can then reflect on what happened.
More recently I’ve used eclEmma for Java and pyCoverage for Python to try this out, but I need more practice.  I’ll have to see what happens.

To document or not to document

I’ve been told that we need to document the software that we write, because when another programmer needs to pick up the program in time to come, he will need to understand what it is all about in order to do any work. This will be an improvement in the quality of the software that we write. However, we cannot afford the time to do this right now.

If not now then when?

Well, Agile Development has an alternative view: it’s not the ‘not knowing how it works because it’s not documented’ that matters. It’s the not knowing if it works which is more important.

Having an automated test suite has many advantages. It’s automated, so it can be run at a moment’s notice. It’s already written, so it will have improved the design (otherwise the code could not be so easily tested). It’s easily run, to support refactoring that further improves the design. And it’s easily extended: to test for an alleged bug, or to write the next test before adding functionality.

Finally, the tests themselves provide an aspect of documentation. The test scenarios allow a new programmer to see the current uses of the software.
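For instance, a test like this (a made-up example, with a tiny function defined inline so it stands alone) records the current behaviour more reliably than a stale comment would:

#include <cassert>
#include <string>

// Code under test: trims leading and trailing spaces.
std::string trim(const std::string& s) {
    const std::string::size_type first = s.find_first_not_of(' ');
    if (first == std::string::npos) return "";
    const std::string::size_type last = s.find_last_not_of(' ');
    return s.substr(first, last - first + 1);
}

// Each assertion documents a current use of the software.
int main() {
    assert(trim("  hello  ") == "hello");  // surrounding spaces removed
    assert(trim("hello") == "hello");      // already-trimmed input untouched
    assert(trim("   ") == "");             // all-space input becomes empty
    return 0;
}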

Now is better than later.

 
