I'm in the process of designing a system that will let me represent broad-scope tasks as workflows, which expose their workitems via an IEnumerable method. The intention is to use C#'s 'yield' mechanism so I can write pseudo-procedural code that the workflow execution system can execute as it sees fit.
For example, say I have a workflow that includes running a query on the database and sending an email alert if the query returns a certain result. This might be the workflow:
public override IEnumerable<WorkItem> Workflow() {
    // These would probably be injected from elsewhere
    var db = new DB();
    var emailServer = new EmailServer();

    // other workitems here

    var ci = new FindLowInventoryItems(db);
    yield return ci;

    if (ci.LowInventoryItems.Any()) {
        var email = new SendEmailToWarehouse(emailServer, "Inventory is low.", ci.LowInventoryItems);
        yield return email;
    }

    // other workitems here
}
FindLowInventoryItems and SendEmailToWarehouse are classes deriving from WorkItem, which has an abstract Execute() method that the subclasses implement, encapsulating the behavior for those actions. The Execute() method gets called by the workflow framework: I have a WorkflowRunner class which enumerates the Workflow(), wraps pre- and post-events around each workitem, and calls Execute() in between. This lets the consuming application do whatever it needs before or after each workitem, including canceling it, changing its properties, etc.
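Here's a stripped-down sketch of the runner to make that concrete. The WorkItemEventArgs type and its Cancel flag are simplified stand-ins for illustration; the real runner carries more state:

using System;
using System.Collections.Generic;

public abstract class WorkItem {
    public abstract void Execute();
}

public class WorkItemEventArgs : EventArgs {
    public WorkItem WorkItem { get; }
    public bool Cancel { get; set; } // consumers set this to skip the item
    public WorkItemEventArgs(WorkItem workItem) { WorkItem = workItem; }
}

public class WorkflowRunner {
    public event EventHandler<WorkItemEventArgs> BeforeWorkItem;
    public event EventHandler<WorkItemEventArgs> AfterWorkItem;

    public void Run(IEnumerable<WorkItem> workflow) {
        foreach (var item in workflow) {
            var args = new WorkItemEventArgs(item);
            BeforeWorkItem?.Invoke(this, args);
            if (args.Cancel) continue; // the consumer vetoed this step
            item.Execute();
            AfterWorkItem?.Invoke(this, args);
        }
    }
}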
The benefit of all this, I think, is that I can express the core logic of a task in terms of the workitems responsible for getting the work done, and I can do it in a fairly straightforward, almost procedural way. Also, because I'm using IEnumerable and the C# syntactic sugar that supports it, I can compose these workflows: higher-level workflows can consume and manipulate sub-workflows. For example, I wrote a simple workflow that just interleaves two child workflows together (sketched below).
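For instance, the interleaving workflow can be written as an ordinary iterator over the two children, something like this simplified sketch:

using System.Collections.Generic;

public static class WorkflowComposition {
    // Alternate items from two child workflows until both are exhausted.
    public static IEnumerable<WorkItem> Interleave(IEnumerable<WorkItem> first, IEnumerable<WorkItem> second) {
        using (var a = first.GetEnumerator())
        using (var b = second.GetEnumerator()) {
            bool moreA = true, moreB = true;
            while (moreA || moreB) {
                if (moreA && (moreA = a.MoveNext())) yield return a.Current;
                if (moreB && (moreB = b.MoveNext())) yield return b.Current;
            }
        }
    }
}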
My question is this: does this sort of architecture seem reasonable, especially from a maintainability perspective? It seems to achieve several goals for me: self-documenting code (the workflow reads procedurally, so I know what will be executed and in what order), separation of concerns (finding low inventory items does not depend on sending email to the warehouse), and so on. Also, are there any potential problems with this sort of architecture that I'm not seeing? Finally, has this been tried before, or am I just rediscovering something?
Personally, this would be a "buy before build" decision for me. I'd buy something before I'd write it.
I work for a company that's rather large and can be foolish with its money, so if you're writing this for yourself and can't buy something I'll retract the comment.
Here are a few random ideas:
I'd externalize the workflow into a configuration that I could read in on startup, maybe from a file or a database.
It'd look something like a finite state machine with states, transitions, events, and actions.
I'd want to be able to plug in different actions so I could customize different flows on the fly.
I'd want to be able to register different subscribers who would want to be notified when a particular event happened.
I wouldn't expect to see anything as hard-coded as that e-mail server. I'd rather encapsulate it in an EmailNotifier that I could plug into events that demanded it. What about a beeper notification? Or a cell phone? A BlackBerry? Same architecture, different notifier (see the sketch after this list).
Do you want to include a handler for human interaction? All the workflows that I deal with are a mix of human and automated processing.
Do you anticipate wanting to connect to other systems, like databases, other apps, web services?
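To make the notifier idea concrete, here's a rough sketch; every name in it is invented for illustration:

using System.Collections.Generic;

public interface INotifier {
    void Notify(string message);
}

public class EmailNotifier : INotifier {
    public void Notify(string message) { /* wrap the e-mail server here */ }
}

public class PagerNotifier : INotifier {
    public void Notify(string message) { /* beeper, cell phone, BlackBerry... */ }
}

// Subscribers register per event name; the workflow just raises events
// and never knows which notifiers are listening.
public class NotificationHub {
    private readonly Dictionary<string, List<INotifier>> subscribers = new Dictionary<string, List<INotifier>>();

    public void Subscribe(string eventName, INotifier notifier) {
        if (!subscribers.TryGetValue(eventName, out var list))
            subscribers[eventName] = list = new List<INotifier>();
        list.Add(notifier);
    }

    public void Raise(string eventName, string message) {
        if (subscribers.TryGetValue(eventName, out var list))
            foreach (var n in list) n.Notify(message);
    }
}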
It's a tough problem. Good luck.