I'm working on a class whose simplified version looks like this:
class Http_server {
public:
    void start(int port)
    {
        start_server();
        std::string content_type = extract_content_type(get_request());
    }

private:
    void start_server()
    {
        ...
    }

    std::string get_request()
    {
        ...
    }

    std::string extract_content_type(const std::string& request) const
    {
        ...
    }
};
Now I want to write a test case for extract_content_type. The problem is: it's private, so I cannot call it from the outside. The only function I can test is start, but that one would actually start the server (start_server) and wait for a request (get_request).
The way I see it, I've got three options:

1. Make extract_content_type public
2. Move extract_content_type into a utility class or namespace
3. Make start_server and get_request virtual and create a mock object that overrides them

I don't want to make anything public, or move the function into a utility namespace that's used just once by a single class, so the least evil is option 3.
I've seen at least one example of this in the V8 code base: http://code.google.com/p/v8/source/browse/trunk/test/cctest/test-date.cc
Still, I'm not sure if it's a good idea. virtual isn't the default in C++ for two reasons: virtual dispatch has a runtime cost, and it signals that the class is designed to be inherited from, which this one isn't.
What would you do? Live with the useless virtual? Or rather not test the function at all? I'm not into TDD, nor do I want to be, but it's just easier to develop functions like extract_content_type against a test.
Yes, it is very bad practice: you're letting your tools make design decisions for you. I think the main problem here is that you're trying to treat each individual method as a unit. This is generally the cause of all unit test woes.
I think you may have another option:
Make the unit test class a friend of the class under test:
class Foo {
public:
#ifdef UNITTEST
    friend class FooTest;
#endif
    ...

protected:
    ...

private:
    ...
};
And here's the reference: http://praveen.kumar.in/2008/01/02/how-to-unit-test-c-private-and-protected-member-functions/
The answer is that you don't test private functions. Ideally, you don't even write them: you create them by refactoring (although I admit that this is very hard in practice).
Your private functions should be tested implicitly when your public/protected functions are tested. If the functionality of a private function is not fully asserted that way, then the function does things that have no visible effect outside of the class.
This is not just a TDD issue. Since private functions are an implementation detail, I usually assume that I can refactor them without breaking anything. If there were a test for such a function and I decided to change its signature, that assumption would no longer hold, which would be very confusing.