Wednesday, May 12, 2010

Somebody got a(6) of A006069.

It's so fuckin' on.

Thursday, March 27, 2008

Why am I obsessed with Gray codes on the n-cube?

http://www.research.att.com/~njas/sequences/A003042
http://www.research.att.com/~njas/sequences/A091299
http://www.research.att.com/~njas/sequences/A003043
http://www.research.att.com/~njas/sequences/A066037

And check out who contributed a(5) on this one:
http://www.research.att.com/~njas/sequences/A006070

Yes, for years this has kept coming back to me: how many Gray codes are there on the n-cube? (Otherwise known as Hamiltonian paths through the n-cube.) This is another one of those problems that grips me every few years. I am gripped again. My C version runs pretty fast, but I know a(6) will take a looooong time to run. I've written a Java version that serializes intermediate subproblems (opening up the possibility of a distributed computing approach), but I think I've reached the end of the brute-force road.

Here's some code:

***BEGIN***

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdbool.h>
#include <unistd.h>
#include <time.h>

#define ALL_DESTINATIONS 255
#define STARTER_DEPTH 3

typedef unsigned char byte;

static byte size;
static byte destination = ALL_DESTINATIONS;
static unsigned int length;
static byte terminalPosition;
static byte* nodes;
static bool* nodeIndex;
static unsigned long long* sequenceCount;
static byte* edgeMasks;
static unsigned int forkDepth = 0;

void generateCodes(byte pos) {

byte node = nodes[pos];

pos++;

byte i = 0;

for(i = 0; i < size; i++) {

byte next = (node ^ edgeMasks[i]);

if(nodeIndex[next] == 0) {

if(pos == terminalPosition) {
if(destination == ALL_DESTINATIONS || destination == next) {
nodes[pos] = next;
sequenceCount[next]++;
}
} else {
nodeIndex[next] = 1;
nodes[pos] = next;
generateCodes(pos);
nodeIndex[next] = 0;
}
}
}
}

int makeFork(unsigned int pos, char** fileName) {

if(pos == forkDepth) {

int pid = fork();

unsigned int j = 0;

if(pid == 0) {
/* calloc zero-fills, so strcat starts from an empty string;
   each node is at most "255-" = 4 chars */
*fileName = (char*)calloc((pos + 1) * 4 + 1, sizeof(char));
char nodeString[4];

for(;j <= pos; j++) {
sprintf(nodeString, "%d", nodes[j]);
strcat(*fileName, nodeString);
strcat(*fileName, "-");
}
return 1;
} else {
printf("Forked %d:\t", pid);
for(;j <= pos; j++) {
printf("%d\t", nodes[j]);
}
printf("\n");
return 0;
}
} else {

byte node = nodes[pos];

pos++;

byte i = 0;

unsigned int stop = 0;

for(; !stop && i < size; i++) {

byte next = (node ^ edgeMasks[i]);

if(nodeIndex[next] == 0) {

if(destination != next) {
nodeIndex[next] = 1;
nodes[pos] = next;
stop = makeFork(pos, fileName);

if(!stop)
nodeIndex[next] = 0;
}
}
}
return stop;
}
}

int main(int argc, char* argv[]) {

if(argc >= 2) {
size = atoi(argv[1]);
} else {
size = 4;
}

forkDepth = STARTER_DEPTH;

if(argc >= 3) {
forkDepth += atoi(argv[2]);
}

if(argc >= 4) {
destination = atoi(argv[3]);
}

length = 1 << size;

nodes = (byte*)calloc(length, sizeof(byte));
nodeIndex = (bool*)calloc(length, sizeof(bool));
sequenceCount = (unsigned long long*)calloc(length, sizeof(unsigned long long));
edgeMasks = (byte*)calloc(size, sizeof(byte));

unsigned int i = 0;

for(; i < length; i++) {
nodes[i] = 0;
nodeIndex[i] = 0;
sequenceCount[i] = 0L;
}

byte mask = 1;

for(i = 0; i < size; i++) {
edgeMasks[i] = mask;
mask <<= 1;
}

unsigned int index = 0;
unsigned int indexMask = 1;
for(i = 0; i < STARTER_DEPTH; i++) {
nodeIndex[index] = 1;
nodes[i] = index;
index ^= indexMask<<i;
}


terminalPosition = length - 1;

char* fileName = 0;

makeFork(STARTER_DEPTH-1, &fileName);
if(fileName)
printf("%s\n", fileName);

if(fileName) {

struct timespec start, stop;
clock_gettime(CLOCK_REALTIME,&start);
generateCodes(forkDepth);
clock_gettime(CLOCK_REALTIME,&stop);
double run_time = (stop.tv_sec - start.tv_sec) + (double)(stop.tv_nsec - start.tv_nsec) / 1000000000.0;

FILE* outFile = fopen(fileName, "w");
fprintf(outFile, "Runtime: %f\n", run_time);

unsigned long long total = 0;
unsigned long long cyclical = 0;
unsigned long long acyclical = 0;
for(i = 0; i < length; i++) {
unsigned long long count = sequenceCount[i];
if(count != 0) {
fprintf(outFile, "%d: %llu\n", i, count);
total += count;
}
}

for(i = 0; i < size; i++) {
cyclical += sequenceCount[1<<i];
sequenceCount[1<<i] = 0;
}

for(i = 0; i < length; i++) {
acyclical += sequenceCount[i];
}

unsigned long long actual_total = total * size * (size - 1) * (1ULL << size);
unsigned long long actual_cyclical = cyclical * size * (size - 1) * (1ULL << size);
unsigned long long actual_acyclical = acyclical * size * (size - 1) * (1ULL << size);

fprintf(outFile, "cyclical:\t%llu\t%llu\n", cyclical, actual_cyclical);
fprintf(outFile, "acyclical:\t%llu\t%llu\n", acyclical, actual_acyclical);
fprintf(outFile, "total:\t%llu\t%llu\n", total, actual_total);
fprintf(outFile, "\n");

fflush(outFile);
fclose(outFile);

free(fileName);

}

free(nodes);
free(nodeIndex);
free(sequenceCount);
free(edgeMasks);

return 0;
}


***END***

Just looking over the results for n=4 shows lots of symmetry and redundancy that can no doubt be exploited to avoid a lot of processing. This round of obsession I'm going to ferret that out and conquer n=6....dammit.

Thursday, January 31, 2008

The preceding few posts are a mirror of the content that can be found at evolvablerules.org, a site I've set up to track the work I'm doing to apply evolutionary computation concepts to inference engines.

Cool stuff I want to do with evolvable RETEs


Yes, Jenny, the cart is before the horse. :)

The basics aren't nearly done, but these are the interesting things I'd like to eventually do with evolvable RETEs.

Evolving Guilds



Early on I wasn't sure how the RHS of rules would be evolved. NEAT is useful for creating the RETE, which is the indexing behind the rule LHSs. (The LHS of each rule is derived from the logical conditions upstream in the RETE from the position of a rule execution node.) The necessary contents of working memory also fall out from the LHS conditions. The question then is this: what do you do once rule conditions are met, and how can that be determined by evolution?

The first solution that came to me was to create the RHS code via grammatical evolution. The population would consist of RHSs that all share a common LHS, no matter which RETE they came from. In other words, RHSs from different RETEs would evolve together, independently of the mechanism that evolves their LHSs.

So, for example, say many agents had the following rule:

if
foo == 12
bar == "a_value"
bas1 >= bas2
then
...do something...

All RETEs that share this rule would join together and evolve the RHS of that rule via GE. The closest analogy I could find was a Guild: a group of artisans who want to do something, and collectively discover the tool necessary to do it. So, in a way, it could be a model for simulating technological evolution via cooperation.

For example, say all Guild members had a rule like this:

if
exists nail(inBoard == yes, protrusion == 4")
then
...some method for pounding the nail...

Possible RHS operations might be "hit it with a tack hammer," "hit it with a sledge hammer," "hit it with a regular hammer," or "leave it alone." Maybe there are more complex actions like "hit it once with a regular hammer, then tap it 5x with the tack hammer." Discovering the right action to take (RHS) given a certain situation (LHS) would be the job of the Guild. Essentially, Guild members would get together and "discuss" how they do their job. The better a member does its "job" (overall objective function performance), the greater its "prestige," and thus its influence in the Guild's evolutionary process. (A form of indirect fitness.)

Recently I've decided that it would be easier to have rule RHSs with only one operation, easily encoded and evolved along with the LHS. This has its limitations, and will reduce the richness of the evolutionary process, but I want to make the basics work before stepping off into la la land. (i.e. simultaneous evolution methods with mixed interdependent populations. Oi!) It's an idea I'd like to revisit later, though. It has other parallels with symbiotic / co-evolutionary processes like meme development (cultural evolution), parasite/host codevelopment, and symbiotic relationships.

How to evolve a RETE...


So... Combining NEAT and a RETE... How is that done?

Prep


First off, some links to read:

Encoding


NEAT evolves the structure and weights of a neural network simultaneously. It does this by 1) encoding both the network connections and weight information in a single genome, and 2) tagging new network structures with unique identifiers as they are created in each generation.

So, the first challenge in adapting NEAT to RETEs is defining the genome representation. In a NEAT-evolved neural network there is only one node structure: a node takes multiple weighted inputs (real-valued activation levels) and propagates the processed value to multiple other nodes of the same type. A RETE uses different node types, basically one- and two-input nodes. The values propagated are tokens which represent knowledge of the current state of objects in working memory. A genome representation of a RETE must have the following properties:

  • Encode only valid RETEs. Encoding must specify the following:

    • Alpha node expressions

    • Alpha node connections. (Alpha nodes can only connect downstream to Alpha Nodes, Rule Nodes, or Beta/Gamma nodes. The Alpha node network is a tree.)

    • Beta/Gamma node expressions

    • Beta/Gamma node connections. (Beta/Gamma nodes must have two inputs, and connect downstream to Rule nodes or Beta/Gamma nodes.)

    • Rule nodes (RHS code, probably a single operation per node, with multiple nodes to describe more complex RHS functionality)

  • Node graph must be acyclic

  • Be variable length, to enable continual complexification

  • Two parallel encodings should be used: alpha nodes (encoded top down) and beta/gamma nodes. (encoded bottom up)


Working memory object structure can be specified by analyzing the LHS conditions in Alpha/Beta/Gamma nodes, so their structure does not need to be separately encoded.

The main difficulty I foresee is not in encoding the network structure, but in encoding the node expressions. In NEAT the analog is encoding the node connection weights. The logical expressions in RETE nodes are far more complex. Also, the structure is resident in the nodes (as opposed to the network edges). One possible solution is to encode expressions in a way similar to Grammatical Evolution. In NEAT the weight of a connection produced in crossover is calculated from the average of the matching genes in the crossover parents. This would be possible with a GE-type encoding for node expressions, as the distance between two arbitrary numerical strings can be easily calculated. However, averaging two GE genomes would introduce far more nonlinearity than averaging two neural network connection weights.

Evolution


I plan on incorporating the major innovations of NEAT: historical marking, continual complexification, and speciation.

Why evolve a RETE?


I mean, really... What's the point?

Well, apart from "why not?" here are some reasons why I think it'll be useful.


Built in memory


Inference engines describe the interaction between a set of logical rules and a memory store. So right off the bat you're evolving an entity that has a memory. In classical genetic programming a memory store must be explicitly added. Also, the memory store in an inference engine is, by design, indexed so that changes in the memory contents are automatically monitored.

Built in modules / functions


Each rule in an inference engine can be thought of as a function (rule right hand side) that is executed when a certain logical condition occurs. (rule left hand side) So reusable functions are built in by design. (Again, something that must be added to classical GP.) In addition, modularity is also inherent in inference engines: all rules that share a given logical condition can be considered in the same "module" as they are all only potentially active when that logical state is obtained. This is explicitly used when creating goal driven rules: i.e. rules with the following format:

if
Goal(name="foo")
...other conditions...
then
insert another Goal to activate another set of rules, or remove the matched Goal above to
deactivate the current set of rules, etc...


Built in recursion, loops, and decision structures


Functions (rules) can easily be executed in a recursive fashion. As long as the right logical conditions persist (and are refreshed according to the conflict resolution style of the inference engine) the same rule can execute multiple times. This could lead to recursion and loop structures evolving naturally. (Totally untested and unproven in practice.) And decision structures fall out naturally, as rules are just if-then-else statements.

Potential bloat solution



CPU use can be limited by restricting individual inference agents to a certain number of recognize-act cycles through the agenda. In fact, this will probably be necessary as there is no guarantee that evolution would produce rulesets with valid ending conditions. :)

Memory use bloat may be handled by combining RETEs from separate agents into one collective RETE. That way, substantially similar RETEs could be combined, and the total size would be dictated not by the total number of agents, but by the degree of difference between the agents. (i.e. two substantially similar agents would be combined, and use only slightly more memory than one.)



This idea has been floating around my head since about 1999: Wouldn't it be cool if techniques
used in evolutionary computation were applied to an inference engine?

Yes, ten years: An embarrassingly long time to have an idea incubate. But as I've wandered through the valley of AI and computer science I've slowly picked up the pieces to accomplish this project. I think I'm finally ready to get it done.

From my first experience with inference engines I thought that evolution could be applied to them. It seemed to me that, given the existence of genetic programming, which could be used to evolve general purpose computation trees, a similar technique could be used to evolve sets of inferencing rules. Soon I realized that focusing on the rules was the wrong approach, and it would be better to evolve the structure of the RETE used to optimize their execution. The method to do this eluded me, though.

Then, in 2002 I attended the GECCO conference in NYC. There I discovered Ken Stanley's technique for evolving neural networks, NEAT. It was immediately apparent to me that if NEAT could be used to evolve one kind of network, it could be used to evolve any kind. Thus began my years long quest to procrastinate and do very little to develop the idea. :)

In 2004 I started a sourceforge project, REAT, as a public repository for code related to the effort. After an initial flurry of activity I wasn't able to contribute to it again until recently.

Sunday, October 01, 2006

Is this a cute fuckin' cat, or what?