The codes (at least those which call qsort) can produce different results on different machines. I believe that this is because ccompare() returns 0 to qsort when (permvec[a->color] == permvec[b->color]), and so the order of vertices with the same colour is not well defined. Unfortunately qsort is implemented differently in different versions of the standard C library, so a more portable comparator would break such ties by returning (a->vertex - b->vertex). I'm not sure whether this would affect the results adversely; possibly it might if the vertices had some sort of meaningful ordering.

I implemented dynamically allocated structures to replace "adjacencytype graph[GRAPHSIZE]" and all arrays of length MAXVERTEX. I'm not sure how important this would be for your experiments, but for mine (where I sometimes use very small graphs and small numbers of iterations) the lines "bzero(graph,GRAPHSIZE)" were killing the efficiency.

I also implemented a new read_graph function for sparse graphs. For large sparse graphs the DIMACS binary format is clearly inefficient, so one would like to store the instances in a sparse format. My method for storing them (used by most of the public-domain graph partitioning codes) has a header followed by N lines; line i (1 <= i <= N) lists the neighbours of the i-th vertex. This takes up about the same amount of space as the DIMACS ascii format but is much more efficient to read in, even for dense graphs. For example, on our DEC Alpha machines the C4000.5 instance of the DIMACS benchmark set takes 41 seconds to read in and set up using the DIMACS ascii format, but only 4.7 seconds using the adjacency-list format.

Beyond that, I did have a number of problems with the tabu algorithm when used (in extremis) on the very small and/or very dense graphs that are produced at the top level of the multilevel process.
Although I appreciate that most users of the code, and you yourself, will probably never use examples of this nature, you might be interested to hear of my experiences. One problem occurred with large and extremely dense graphs, in particular one example with about 1000 vertices of which 750 were adjacent to every other vertex (i.e. of degree N-1). This tended to mean that the algorithm took hours to run rather than seconds (because of the brute-force searches). It was easy to alleviate by filtering out all the vertices of degree N-1 prior to calling tabu and colouring them separately, but you might wish to put something similar into your code (or just warn the user).

More difficult were the very small graphs (e.g. fewer than about 20 vertices). If you pick a tabu list size of 7 then sometimes (and always for graphs of 7 or fewer vertices) the code can get stuck in an infinite loop. Eventually I fixed this by ignoring graphs of size 7 or less and setting tabusize = minimum(7, N/3) for the others, but there's still no guarantee that this will work (especially for long searches) and I couldn't come up with a good formula to do so.

Having said all that, I should stress that I am very grateful to have been able to use your code; I found it easy to understand and use, and the observations above are just that, observations.

With best regards,

Chris Walshaw

================================================================================
Dr. C. H. Walshaw,                       email: C.Walshaw@gre.ac.uk
Computing & Mathematical Sciences,       URL:   http://www.gre.ac.uk/~c.walshaw
University of Greenwich,                 Tel:   +44 20 8331 8142
Maritime Greenwich Campus,               FAX:   +44 20 8331 8665
Old Royal Naval College,
Park Row, Greenwich, London, SE10 9LS, United Kingdom
================================================================================