Thursday, January 31, 2008

Why evolve a RETE?


I mean, really... What's the point?

Well, apart from "why not?", here are some reasons why I think it'll be useful.


Built in memory


Inference engines describe the interaction between a set of logical rules and a memory store. So right off the bat you're evolving an entity that has a memory. In classical genetic programming, a memory store must be added explicitly. Also, the memory store in an inference engine is, by design, indexed, so that changes to the memory contents are automatically monitored.
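To make the indexing point concrete, here's a minimal Python sketch of a working memory in the alpha-network spirit of RETE: facts are indexed by type, and interested rules are notified only when a matching fact is asserted. All class and method names here are illustrative, not from any particular engine, and this is nowhere near a full RETE.

```python
from collections import defaultdict

class WorkingMemory:
    """Toy indexed fact store: watchers fire only on matching fact types."""
    def __init__(self):
        self.by_type = defaultdict(set)    # index: fact type -> facts
        self.watchers = defaultdict(list)  # fact type -> callbacks

    def watch(self, fact_type, callback):
        self.watchers[fact_type].append(callback)

    def assert_fact(self, fact):
        fact_type = fact[0]
        self.by_type[fact_type].add(fact)
        for cb in self.watchers[fact_type]:  # only interested rules are notified
            cb(fact)

wm = WorkingMemory()
seen = []
wm.watch("temperature", seen.append)
wm.assert_fact(("temperature", 21))
wm.assert_fact(("humidity", 40))  # no temperature watcher fires for this
# seen == [("temperature", 21)]
```

The key property is that a change to memory automatically reaches exactly the rules that care about it, which is the monitoring-for-free behavior described above.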

Built in modules / functions


Each rule in an inference engine can be thought of as a function (the rule's right-hand side) that is executed when a certain logical condition occurs (the rule's left-hand side). So reusable functions are built in by design. (Again, something that must be added to classical GP.) In addition, modularity is inherent in inference engines: all rules that share a given logical condition can be considered part of the same "module", as they are only potentially active when that logical state holds. This is used explicitly when creating goal-driven rules, i.e. rules with the following format:

if
    Goal(name="foo")
    ...other conditions...
then
    insert another Goal to activate another set of rules, or remove the matched
    Goal above to deactivate the current set of rules, etc.
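Here's a small Python sketch of that goal-driven pattern, using a naive forward-chaining loop rather than a real RETE. The rule format, fact tuples, and goal names are all made up for illustration; the point is just that inserting and removing Goal facts switches which "module" of rules can fire.

```python
def run(facts, rules, max_cycles=100):
    """Fire the first matching rule each cycle until quiescence."""
    for _ in range(max_cycles):
        for condition, action in rules:
            if condition(facts):
                action(facts)
                break
        else:
            return facts  # no rule matched: stop
    return facts

# Rules are (condition, action) pairs; a "module" is the set of rules
# guarded by the same Goal fact.
rules = [
    # Active only while the goal "greet" is in working memory.
    (lambda f: ("goal", "greet") in f and ("said", "hello") not in f,
     lambda f: f.add(("said", "hello"))),
    # Swap goals: deactivates the rule above, activates the one below.
    (lambda f: ("goal", "greet") in f and ("said", "hello") in f,
     lambda f: (f.remove(("goal", "greet")), f.add(("goal", "farewell")))),
    # Active only while the goal "farewell" is in working memory.
    (lambda f: ("goal", "farewell") in f and ("said", "bye") not in f,
     lambda f: f.add(("said", "bye"))),
]

facts = run({("goal", "greet")}, rules)
```

After running, the greet goal is gone and both "hello" and "bye" facts are present: each group of rules only ran while its goal fact existed.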


Built in recursion, loops, and decision structures


Functions (rules) can easily be executed in a recursive fashion. As long as the right logical conditions persist (and are refreshed according to the conflict resolution strategy of the inference engine), the same rule can execute multiple times. This could lead to recursion and loop structures evolving naturally. (Totally untested and unproven in practice.) And decision structures fall out naturally, as rules are essentially if-then statements.
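A sketch of the refiring idea, again with a toy forward-chaining loop (not a real engine, and real conflict resolution would govern refiring): a single countdown rule keeps firing as long as its condition holds, giving loop-like behavior with no explicit loop construct in the ruleset.

```python
def run(facts, rules, max_cycles=100):
    """Fire the first matching rule each cycle until quiescence."""
    for _ in range(max_cycles):
        for condition, action in rules:
            if condition(facts):
                action(facts)
                break
        else:
            break  # no rule matched
    return facts

def countdown_cond(f):
    return f["counter"] > 0

def countdown_act(f):
    f.setdefault("out", []).append(f["counter"])
    f["counter"] -= 1  # the changed fact keeps the rule eligible to refire

facts = run({"counter": 3}, [(countdown_cond, countdown_act)])
# facts["out"] == [3, 2, 1]; the rule fired three times, then quiesced
```

The "loop body" is just a rule whose own action keeps (and eventually stops) satisfying its condition.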

Potential bloat solutions



CPU use can be limited by restricting individual inference agents to a certain number of recognize-act cycles through the agenda. In fact, this will probably be necessary as there is no guarantee that evolution would produce rulesets with valid ending conditions. :)
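A sketch of that cycle cap, with the same toy engine style as above (illustrative names, not a real agenda implementation): even a pathological evolved ruleset that never quiesces halts at the cap.

```python
def run_capped(facts, rules, max_cycles):
    """Recognize-act loop with a hard cap on cycles."""
    cycles = 0
    while cycles < max_cycles:
        fired = False
        for condition, action in rules:
            if condition(facts):
                action(facts)
                fired = True
                break
        if not fired:
            break  # quiescence reached before hitting the cap
        cycles += 1
    return facts, cycles

# A pathological rule with no ending condition: always fires.
runaway = [(lambda f: True,
            lambda f: f.__setitem__("ticks", f.get("ticks", 0) + 1))]

facts, cycles = run_capped({}, runaway, max_cycles=50)
# cycles == 50: the cap, not the ruleset, terminated execution
```

The cap turns "may never halt" into "costs at most N cycles", which is what makes evolving arbitrary rulesets feasible.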

Memory bloat may be handled by combining the RETEs from separate agents into one collective RETE. That way, substantially similar RETEs could be merged, and the total size would be dictated not by the total number of agents, but by the degree of difference between the agents. (i.e. two substantially similar agents would be combined, and together use only slightly more memory than one alone.)


