
Preferred methods for interacting with a rules engine

I am about to dive into a rules-oriented project (using ILOG's Rules for .NET, now owned by IBM), and I have read a couple of different perspectives on how to set up rules processing and how to interact with the rule engine.

The two main approaches I have seen are (1) to centralize the rule engine into its own farm of servers and program against the farm via a web service API (or, in ILOG's case, via WCF), and (2) to run an instance of the rule engine on each of your app servers and interact with it locally, with each instance having its own copy of the rules.

The upside to centralization is the ease of deploying rules to one central location. The rules scale as they need to, rather than scaling every time you expand your application server configuration, which reduces waste from a purchased-license perspective. The downside to this setup is the added overhead of making service calls, network latency, etc.

The upsides and downsides of running the rule engine locally are the exact opposite of the centralized configuration's: no slow service calls (just fast API calls), no network issues, and each app server relies only on itself. However, managing deployment of rules becomes more complex, and each time you add a node to your app cloud you will need more rule engine licenses.
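To make the trade-off concrete, here is a rough sketch of what the two styles can look like from the application code's point of view. The IRuleService contract, OrderFacts type, and LocalRuleEngine wrapper are hypothetical stand-ins for illustration, not ILOG's actual API:

```csharp
using System.Runtime.Serialization;
using System.ServiceModel;

// Hypothetical facts passed to the rules engine; not ILOG's actual types.
[DataContract]
public class OrderFacts
{
    [DataMember] public decimal Total { get; set; }
    [DataMember] public string CustomerTier { get; set; }
}

// Option 1: a centralized rule farm exposed over WCF.
// The service contract is an assumption, purely for illustration.
[ServiceContract]
public interface IRuleService
{
    [OperationContract]
    decimal CalculateDiscount(OrderFacts facts);
}

public class RemoteRuleClient
{
    private readonly ChannelFactory<IRuleService> _factory =
        new ChannelFactory<IRuleService>("RuleServiceEndpoint"); // endpoint name from config

    public decimal CalculateDiscount(OrderFacts facts)
    {
        IRuleService channel = _factory.CreateChannel();
        return channel.CalculateDiscount(facts); // one network hop per decision
    }
}

// Option 2: an engine instance hosted in-process on each app server.
// LocalRuleEngine stands in for whatever the vendor's embedded API looks like.
public class LocalRuleEngine
{
    public decimal CalculateDiscount(OrderFacts facts)
    {
        // Rules are evaluated in-process: no network latency,
        // but each server must hold a copy of the rule set (and a license).
        return facts.CustomerTier == "Gold" && facts.Total > 100m
            ? facts.Total * 0.1m
            : 0m;
    }
}
```

In the remote flavor every decision costs a service call; in the local flavor that cost disappears, but it resurfaces as rule deployment overhead and per-server licensing.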

In reading white papers, I see that Amazon runs a rule engine per app server. They appear to do a slow rollout of rules and accept that the lag in rule publishing is "acceptable", even though business logic is out of sync for a given period of time.

Question: In your experience, what is the best way to start integrating rules into a .NET-based web app for a shop that has not yet spent much time working in a rules-driven world?

asked Jul 27 '09 by Andrew Siemer

1 Answer

I never liked the centralization argument. It means that everything is coupled into the rules engine, which becomes a dumping ground for all the rules in the system. Pretty soon you can't change anything for fear of the unknown: "What will we break?"

I much prefer following Amazon's idea of services as isolated, autonomous components. I interpret that to mean that services own their data and their rules.

This has the added benefit of partitioning the rules space. A rule set becomes harder to maintain as it grows; better to keep rule sets to a manageable size.

If parts of the rule set are shared, I'd prefer a data-driven, DI approach where a service can have its own instance of a rules engine and load the common rules from a database on startup. This might not be feasible if your ILOG license makes multiple instances cost-prohibitive. That would be a case where the product that's supposed to be helping is actually dictating architectural choices that will bring grief. It would be a good argument for a less expensive alternative (e.g., JBoss Rules in Java-land).
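As a rough sketch of that data-driven, DI shape (the IRulesEngine abstraction, the Rules table schema, and the PricingService are assumptions for illustration, not any specific product's API):

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;

// Hypothetical abstraction over the vendor's engine.
public interface IRulesEngine
{
    void LoadRules(IEnumerable<string> ruleDefinitions);
    T Evaluate<T>(object facts);
}

public class SharedRuleLoader
{
    private readonly string _connectionString;

    public SharedRuleLoader(string connectionString)
    {
        _connectionString = connectionString;
    }

    // Pull the shared rule definitions from a central table so each service
    // can hydrate its own engine instance at startup.
    public IEnumerable<string> LoadCommonRules(string ruleSetName)
    {
        var rules = new List<string>();
        using (var conn = new SqlConnection(_connectionString))
        using (var cmd = new SqlCommand(
            "SELECT RuleBody FROM Rules WHERE RuleSet = @ruleSet", conn))
        {
            cmd.Parameters.AddWithValue("@ruleSet", ruleSetName);
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    rules.Add(reader.GetString(0));
            }
        }
        return rules;
    }
}

// The service receives its own engine via constructor injection and
// loads the shared rules once, at startup.
public class PricingService
{
    private readonly IRulesEngine _engine;

    public PricingService(IRulesEngine engine, SharedRuleLoader loader)
    {
        _engine = engine;
        _engine.LoadRules(loader.LoadCommonRules("PricingCommon"));
    }
}
```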

What about a data-driven decision tree approach? Is a Rete rules engine really necessary, or is the "enterprise tool" decision driving your choice?
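For comparison, a data-driven decision tree can be very small. A minimal sketch, assuming a hypothetical DecisionNode structure that could just as easily be hydrated from database rows as hard-coded:

```csharp
using System;
using System.Collections.Generic;

// A node either tests a named fact or holds a result.
public class DecisionNode
{
    public string FactName { get; set; }                 // null for leaf nodes
    public Func<object, bool> Test { get; set; }
    public DecisionNode WhenTrue { get; set; }
    public DecisionNode WhenFalse { get; set; }
    public string Outcome { get; set; }                  // set only on leaves

    public string Decide(IDictionary<string, object> facts)
    {
        if (Outcome != null)
            return Outcome;
        object value = facts[FactName];
        return Test(value) ? WhenTrue.Decide(facts) : WhenFalse.Decide(facts);
    }
}

public static class Demo
{
    public static void Main()
    {
        // Building the tree from rows in a database table instead of
        // hard-coding it is what makes the approach "data-driven".
        var tree = new DecisionNode
        {
            FactName = "OrderTotal",
            Test = v => (decimal)v > 100m,
            WhenTrue = new DecisionNode { Outcome = "FreeShipping" },
            WhenFalse = new DecisionNode { Outcome = "StandardShipping" }
        };

        var facts = new Dictionary<string, object> { { "OrderTotal", 150m } };
        Console.WriteLine(tree.Decide(facts)); // FreeShipping
    }
}
```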

I'd try to set up the rules engine so it is as decoupled from the rest of the enterprise as possible. I wouldn't have it calling out to databases or services if I could avoid it. Better to make that the responsibility of the objects asking for a decision: let them call the necessary web services and databases to assemble the data, pass it to the rules engine, and let it do its thing. Coupling is your enemy: try to design your system to minimize it. Keeping the rules engine isolated is a good way to do that.
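A rough sketch of that shape, with hypothetical repository, inventory, and rules interfaces: the workflow gathers the data, builds a plain fact object, and the engine never reaches out to anything itself.

```csharp
// Hypothetical collaborators; the point is that only the caller touches
// external systems, while the rules engine sees nothing but plain facts.
public interface ICustomerRepository { CustomerData GetCustomer(int customerId); }
public interface IInventoryService { int GetStockLevel(string sku); }
public interface IDiscountRules { decimal Evaluate(DiscountFacts facts); }

public class CustomerData { public string Tier { get; set; } }

public class DiscountFacts
{
    public string CustomerTier { get; set; }
    public int StockLevel { get; set; }
    public decimal OrderTotal { get; set; }
}

public class CheckoutWorkflow
{
    private readonly ICustomerRepository _customers;
    private readonly IInventoryService _inventory;
    private readonly IDiscountRules _rules;

    public CheckoutWorkflow(ICustomerRepository customers,
                            IInventoryService inventory,
                            IDiscountRules rules)
    {
        _customers = customers;
        _inventory = inventory;
        _rules = rules;
    }

    public decimal PriceOrder(int customerId, string sku, decimal orderTotal)
    {
        // The workflow assembles the facts from its own services and data...
        var facts = new DiscountFacts
        {
            CustomerTier = _customers.GetCustomer(customerId).Tier,
            StockLevel = _inventory.GetStockLevel(sku),
            OrderTotal = orderTotal
        };

        // ...and the rules engine only ever sees this plain object.
        decimal discount = _rules.Evaluate(facts);
        return orderTotal - discount;
    }
}
```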

answered Nov 09 '22 by duffymo