I am using a protocol that is essentially a request/response protocol over TCP, similar to other line-based protocols (SMTP, HTTP, etc.).
The protocol has about 130 different request methods (e.g. login, user add, user update, log get, file info, files info, ...). These methods do not map well onto the small set of broad verbs used in HTTP (GET, POST, PUT, ...); forcing them into such broad verbs would distort their actual meaning.
But the protocol methods can be grouped by type (e.g. user management, file management, session management, ...).
The current server-side implementation uses a class `Worker` with the methods `ReadRequest()` (reads a request, consisting of a method name plus a parameter list), `HandleRequest()` (see below) and `WriteResponse()` (writes the response code and the actual response data). `HandleRequest()` calls a function for the actual request method, using a hash map from method name to member-function pointer of the actual handler. Each handler is a plain member function, one per protocol method: it validates its input parameters, does whatever it has to do, and sets the response code (success yes/no) and the response data.
Example code:
```
class Worker {
    typedef bool (Worker::*CommandHandler)();
    typedef std::map<UTF8String, CommandHandler> CommandHandlerMap;

    // handlers will be initialized once,
    // e.g. m_CommandHandlers["login"] = &Worker::Handle_LOGIN;
    static CommandHandlerMap m_CommandHandlers;

    bool HandleRequest()
    {
        CommandHandlerMap::const_iterator ihandler;
        if( (ihandler = m_CommandHandlers.find(m_CurRequest.instruction)) != m_CommandHandlers.end() )
        {   // call actual handler
            return (this->*(ihandler->second))();
        }
        // error case:
        m_CurResponse.success = false;
        m_CurResponse.info    = "unknown or invalid instruction";
        return true;
    }

    //...

    bool Handle_LOGIN()
    {
        const UTF8String username = m_CurRequest.parameters["username"];
        const UTF8String password = m_CurRequest.parameters["password"];
        // ....
        if( success )
        {
            // initialize some state...
            m_Session.Init(...);
            m_LogHandle.Init(...);
            m_AuthHandle.Init(...);
            // set response data
            m_CurResponse.success = true;
            m_CurResponse.Write( "last_login", ... );
            m_CurResponse.Write( "whatever", ... );
        }
        else
        {
            m_CurResponse.success = false;
            m_CurResponse.Write( "error", "failed, because ..." );
        }
        return true;
    }
};
```
So. The problem is: my worker class now has about 130 of these "command handler methods", and each one needs access to the current request, the response, and the per-thread context objects (session, database handle, and so on).
What is a good strategy for a better structuring of those command handler methods?
One idea was to have one class per command handler, initialized with references to the request, response objects, etc. But the overhead is IMHO not acceptable: it would add an indirection for every single access to everything the handler needs (request, response, session objects, ...). It could be acceptable if it provided an actual advantage. However, this doesn't look particularly reasonable:
```
class HandlerBase {
protected:
    Request   &request;
    Response  &response;
    Session   &session;
    DBHandle  &db;
    FooHandle &foo;
    // ...
public:
    HandlerBase( Request &req, Response &rsp, Session &s, ... )
        : request(req), response(rsp), session(s), ...
    {}
    //...
    virtual bool Handle() = 0;
};

class LoginHandler : public HandlerBase {
public:
    LoginHandler( Request &req, Response &rsp, Session &s, ... )
        : HandlerBase(req, rsp, s, ...)
    {}
    //...
    virtual bool Handle()
    {
        // actual code for handling "login" request
        ...
    }
};
```
Okay, the HandlerBase could instead take a reference (or pointer) to the worker object itself, rather than references to the request, response, etc. But that would also add another indirection (`this->worker->session` instead of `this->session`). That indirection would be fine if it bought some advantage after all.
Some info about the overall architecture
The worker object represents a single worker thread for an actual TCP connection to some client. Each thread (so, each worker) needs its own database handle, authorization handle etc. These "handles" are per-thread-objects that allow access to some sub-system of the server.
This whole architecture is based on some kind of dependency injection: e.g. to create a session object, one has to provide a "database handle" to the session constructor. The session object then uses this database handle to access the database. It will never call global code or use singletons. So, each thread can run undisturbed on its own.
But the cost is that, instead of just calling out to singleton objects, the worker and its command handlers must access any data or other code of the system through such thread-specific handles. Those handles define their execution context.
Summary & Clarification: My actual question
I am searching for an elegant alternative to the current solution (a worker object with a huge list of handler methods). It should be maintainable and low-overhead, and should not require writing too much glue code. Additionally, it MUST still give each single method control over very different aspects of its execution (meaning: if a method "super flurry foo" wants to fail whenever there is a full moon, then it must be possible for that implementation to do so). It also means that I do not want any kind of entity abstraction (create/read/update/delete XFoo-type) at this architectural layer of my code (that exists at different layers in my code). This architectural layer is pure protocol, nothing else.
In the end, it will surely be a compromise, but I am interested in any ideas!
The AAA bonus: a solution with interchangeable protocol implementations (instead of just the current class `Worker`, which is responsible for parsing requests and writing responses). There could perhaps be an interchangeable class `ProtocolSyntax` that handles those protocol syntax details but still uses our new shiny structured command handlers.
You've already got most of the right ideas, here's how I would proceed.
Let's start with your second question: interchangeable protocols. If you have generic request and response objects, you can have an interface that reads requests and writes responses:
```
class Protocol {
public:
    virtual ~Protocol() {}
    virtual Request *readRequest() = 0;
    virtual void writeResponse(Response *response) = 0;
};
```
and you could have an implementation called `HttpProtocol`, for example.
As for your command handlers, "one class per command handler" is the right approach:
```
class Command {
public:
    virtual ~Command() {}
    virtual void execute(Request *request, Response *response, Session *session) = 0;
};
```
Note that I rolled up all the common session handles (DB, Foo etc.) into a single object instead of passing around a whole bunch of parameters. Also making these method parameters instead of constructor arguments means you only need one instance of each command.
Next, you would have a `CommandFactory` which contains the map of command names to command objects:
```
class CommandFactory {
    std::map<UTF8String, Command *> handlers;
public:
    Command *getCommand(const UTF8String &name) {
        // use find() rather than operator[], which would silently
        // insert a null entry for every unknown command name
        std::map<UTF8String, Command *>::const_iterator it = handlers.find(name);
        return it != handlers.end() ? it->second : nullptr;
    }
};
```
If you've done all this, the `Worker` becomes extremely thin and simply coordinates everything:
```
class Worker {
    Protocol *protocol;
    CommandFactory *commandFactory;
    Session *session;

    void handleRequest() {
        Request *request = protocol->readRequest();
        Response response;
        Command *command = commandFactory->getCommand(request->getCommandName());
        if (command)
            command->execute(request, &response, session);
        protocol->writeResponse(&response);
        delete request;  // assuming readRequest() hands ownership to the caller
    }
};
```