My publications:

PhD Thesis

The Analysis and Design of Concurrent Learning Algorithms for Cooperative Multiagent Systems
     George Mason University [2006]

Refereed Journal Articles

Theoretical Advantages of Lenient Learners: An Evolutionary Game Theoretic Perspective
     In Journal of Machine Learning Research [2008] (with Karl Tuyls and Sean Luke)
Biasing Coevolutionary Search for Optimal Multiagent Behaviors
     In IEEE Transactions on Evolutionary Computation [2006] (with Sean Luke and R. Paul Wiegand)
A Comparison of Bloat Control Methods for Genetic Programming
     In Evolutionary Computation [2006] (with Sean Luke)
Cooperative Multi-Agent Learning: The State of the Art
     In Autonomous Agents and Multi-Agent Systems [2005] (with Sean Luke)
MASON: A Multiagent Simulation Environment
     In Simulation [2005] (with Sean Luke, Claudio Cioffi-Revilla, Keith Sullivan, and Gabriel Balan)

Papers in Refereed Conference Proceedings

Theoretical Advantages of Lenient Q-learners: An Evolutionary Game Theoretic Perspective
     In Proceedings of AAMAS [2007] (with Karl Tuyls)
Archive-based Cooperative Coevolutionary Algorithms
     In Proceedings of GECCO [2006] (with Sean Luke and Joseph Harrison)
Selecting Informative Actions Improves Cooperative Multiagent Learning
     In Proceedings of AAMAS [2006] (with Sean Luke)
Lenience Towards Teammates Helps in Cooperative Multiagent Learning
     In Proceedings of AAMAS [2006] (with Keith Sullivan and Sean Luke)
Can Good Learners Always Compensate for Bad Learners?
     In Proceedings of AAMAS [2006] (with Keith Sullivan, Gabriel Balan, and Sean Luke)
Tunably Decentralized Algorithms for Cooperative Target Observation
     In Proceedings of AAMAS [2005] (with Sean Luke, Keith Sullivan, and Gabriel Balan)
A Visual Demonstration of Convergence Properties of Cooperative Coevolution
     In Proceedings of PPSN [2004] (with Sean Luke and R. Paul Wiegand)
Ant Foraging Revisited
     In Proceedings of ALIFE-IX [2004] (with Sean Luke)
Learning Ant Foraging Behaviors
     In Proceedings of ALIFE-IX [2004] (with Sean Luke)
A Sensitivity Analysis of a Cooperative Coevolutionary Algorithm Biased for Optimization
     In Proceedings of GECCO [2004] (with Sean Luke and R. Paul Wiegand)
Alternative Bloat Control Methods
     In Proceedings of GECCO [2004] (with Sean Luke)
A Pheromone-Based Utility Model for Collaborative Foraging
     In Proceedings of AAMAS [2004] (with Sean Luke)
Population Implosion in Genetic Programming
     In Proceedings of GECCO [2003] (with Sean Luke and Gabriel Balan)
Methods for Evolving Robust Programs
     In Proceedings of GECCO [2003] (with Sean Luke)
Improving Coevolutionary Search for Optimal Multiagent Behaviors
     In Proceedings of IJCAI [2003] (with Sean Luke and R. Paul Wiegand)
Fighting Bloat With Nonparametric Parsimony Pressure
     In Proceedings of PPSN [2002] (with Sean Luke)
Lexicographic Parsimony Pressure
     In Proceedings of GECCO [2002] (with Sean Luke)
A Comparison of Two Competitive Fitness Functions
     In Proceedings of GECCO [2002] (with Sean Luke)
Is the Perfect the Enemy of the Good?
     In Proceedings of GECCO [2002] (with Sean Luke)
A Survey and Comparison of Tree Generation Algorithms
     In Proceedings of GECCO [2001] (with Sean Luke)
The Development of the AQ20 Learning System and Initial Experiments
     In Proceedings of IIS [2001] (with Guido Cervone and Ryszard Michalski)
   

Refereed Workshop Papers

Learning Team Behaviors with Adaptive Heterogeneity
     In Proceedings of AAMAS [2005]
Time-dependent Collaboration Schemes for Cooperative Coevolutionary Algorithms
     In Proceedings of AAAI Fall Symposium on Coevolutionary and Coadaptive Systems [2005] (with Sean Luke)
An Application of Evolutionary Algorithms to Study the Extent of SLHF Anomaly Associated with Coastal Earthquakes
     In Late-Breaking Papers of GECCO [2004] (with Guido Cervone, Ramesh Singh, and Sean Luke)
Learnable Representation for Real World Planning
     In Proceedings of the AAAI Workshop on Representational Issues for Real-World Planning Systems [2000]
     (with Mihai Boicu, Gheorghe Tecuci, Bogdan Stanescu, and Cristina Cascaval)
LEM: A New Multistrategy Approach that Speeds up Evolutionary Computation
     In Proceedings of the Fifth Workshop on Multistrategy Learning [2000] (with Guido Cervone, Kenneth Kaufman, and Ryszard Michalski)

 
 

PhD Thesis

The Analysis and Design of Concurrent Learning Algorithms for Cooperative Multiagent Systems
Author:
Liviu Panait
Formats:
PDF
Citation:
Liviu Panait. (2006). The Analysis and Design of Concurrent Learning Algorithms for Cooperative Multiagent Systems. George Mason University.
Abstract:
     Concurrent learning is the application of several machine learning algorithms in parallel to automatically discover behaviors for teams of agents. As machine learning techniques tend to find better and better solutions if they are allowed additional time and resources, the same would be expected from concurrent learning algorithms. Surprisingly, previous empirical and theoretical analysis has shown this not to be the case. Instead, concurrent learning algorithms often drift towards suboptimal solutions due to the learners' coadaptation to one another. This phenomenon is a significant obstacle to using these techniques to discover optimal team behaviors.
     This thesis presents theoretical and empirical research on what causes the aforementioned drift, as well as on how to minimize or even eliminate it. I present evidence that the drift often occurs because learners have poor estimates for the quality of their possible behaviors. Interestingly, improving a learner's quality estimate does not require more sophisticated sensing capabilities; rather, it can be achieved simply by having the learner ignore certain reward information it received for performing actions. I provide formal proofs that concurrent learning algorithms will converge to the globally optimal solution, provided that each learner has sufficiently accurate estimates.
     This theoretical analysis provides the foundation for the design of novel concurrent learning algorithms that benefit from accurate quality estimates. First, the estimates of learners employing the biased cooperative coevolutionary algorithm are greatly improved based on reward information that was received in the past. Second, the informative cooperative coevolutionary algorithm provides learners with simpler, functionally-equivalent estimates at a reduced computational cost. Finally, I describe the lenient multiagent Q-learning algorithm, which benefits from more accurate estimates when tackling challenging coordination tasks in stochastic domains.
Bibtex entry:
@PhDThesis{panait06analysis,
    author = "Liviu Panait",
    title = "The Analysis and Design of Concurrent Learning Algorithms for Cooperative Multiagent Systems",
    school = "George Mason University",
    address = "Fairfax, Virginia",
    year = "2006"
    }
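
The lenience idea described in the abstract above is easy to make concrete. The sketch below is purely illustrative (the payoff table, names, and sample count are invented, not taken from the thesis): it shows how estimating an action by the best of several observed rewards, rather than the average, ignores rewards that are low only because a teammate happened to act badly.

    import random

    # Hypothetical two-action coordination game: action "a" is jointly
    # optimal (payoff 11) but punished hard (-30) on a teammate mismatch.
    PAYOFF = {("a", "a"): 11, ("a", "b"): -30,
              ("b", "a"): 0,  ("b", "b"): 7}

    def mean_estimate(action, samples=10):
        """Naive estimate: average reward over random teammate actions."""
        rewards = [PAYOFF[action, random.choice("ab")] for _ in range(samples)]
        return sum(rewards) / len(rewards)

    def lenient_estimate(action, samples=10):
        """Lenient estimate: keep only the best observed reward."""
        return max(PAYOFF[action, random.choice("ab")] for _ in range(samples))

    # In expectation, mean_estimate scores "a" at -9.5 and "b" at 3.5,
    # steering the learner toward the suboptimal joint action; the lenient
    # estimate scores "a" near 11 and "b" near 7, consistent with the thesis.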
 

Refereed Journal Articles

Theoretical Advantages of Lenient Learners: An Evolutionary Game Theoretic Perspective
Authors:
Liviu Panait, Karl Tuyls, and Sean Luke
Formats:
PDF
Citation:
Liviu Panait, Karl Tuyls, and Sean Luke. 2008. Theoretical Advantages of Lenient Learners: An Evolutionary Game Theoretic Perspective. In Journal of Machine Learning Research. 9(Mar): 423-457
Abstract:
     This paper presents the dynamics of multiple learning agents from an evolutionary game theoretic perspective. We provide replicator dynamics models for cooperative coevolutionary algorithms and for traditional multiagent Q-learning, and we extend these differential equations to account for lenient learners: agents that forgive possible mismatched teammate actions that resulted in low rewards. We use these extended formal models to study the convergence guarantees for these algorithms, and also to visualize the basins of attraction to optimal and suboptimal solutions in two benchmark coordination problems. The paper demonstrates that lenience provides learners with more accurate information about the benefits of performing their actions, resulting in higher likelihood of convergence to the globally optimal solution. In addition, the analysis indicates that the choice of learning algorithm has an insignificant impact on the overall performance of multiagent learning algorithms; rather, the performance of these algorithms depends primarily on the level of lenience that the agents exhibit to one another. Finally, the research herein supports the strength and generality of evolutionary game theory as a backbone for multiagent learning.
Bibtex entry:
@Article{panait08theoretical,
    author = "Liviu Panait and Karl Tuyls and Sean Luke",
    title = "Theoretical Advantages of Lenient Learners: An Evolutionary Game Theoretic Perspective",
    journal = "Journal of Machine Learning Research",
    volume = "9",
    number = "Mar",
    pages = "423--457",
    year = "2008"
    }
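
For readers who want to reproduce this style of analysis, here is a minimal numerical sketch of the standard (non-lenient) two-population replicator dynamics for a common-payoff game; the paper's lenient extension replaces the expected-payoff term with the expected maximum over several pairings. The climbing-game payoff matrix is a standard coordination benchmark and is assumed here for illustration.

    import numpy as np

    # The "climbing game": joint action (0, 0) is optimal but risky.
    A = np.array([[ 11.0, -30.0,  0.0],
                  [-30.0,   7.0,  6.0],
                  [  0.0,   0.0,  5.0]])

    def replicator_step(x, y, dt=0.001):
        """One Euler step of dx_i/dt = x_i ((A y)_i - x.A.y), plus the
        symmetric update for the second population using A transposed."""
        avg = x @ A @ y
        x_new = x + dt * x * (A @ y - avg)
        y_new = y + dt * y * (A.T @ x - avg)
        return x_new / x_new.sum(), y_new / y_new.sum()

    x = y = np.ones(3) / 3.0
    for _ in range(100_000):
        x, y = replicator_step(x, y)
    # From a uniform start these non-lenient dynamics drift toward the safe,
    # suboptimal equilibrium; lenience reshapes the basins of attraction.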

Biasing Coevolutionary Search for Optimal Multiagent Behaviors
Authors:
Liviu Panait, Sean Luke, and R. Paul Wiegand
Citation:
Liviu Panait, Sean Luke, and R. Paul Wiegand. 2006. Biasing Coevolutionary Search for Optimal Multiagent Behaviors. In IEEE Transactions on Evolutionary Computation. 10(6): 629-645
Abstract:
     Cooperative coevolutionary algorithms offer great potential for concurrent multiagent learning domains and are of special utility to domains involving teams of multiple agents. Unfortunately, they also exhibit pathologies resulting from their game-theoretic nature, and these pathologies interfere with finding solutions that correspond to optimal collaborations of interacting agents. We address this problem by biasing a cooperative coevolutionary algorithm in such a way that the fitness of an individual is based partly on the result of interactions with other individuals (as is usual), and partly on an estimate of the best possible reward for that individual if partnered with its optimal collaborator. We justify this idea using existing theoretical models of a relevant subclass of coevolutionary algorithms, demonstrate how to apply biasing in a way that is robust with respect to parameterization, and provide some experimental evidence to validate the biasing approach. We show that it is possible to bias coevolutionary methods to better search for optimal multiagent behaviors.
Bibtex entry:
@Article{panait06biasing,
    author = "Liviu Panait and Sean Luke and R. Paul Wiegand",
    title = "Biasing Coevolutionary Search for Optimal Multiagent Behaviors",
    journal = "IEEE Transactions on Evolutionary Computation",
    volume = "10",
    number = "6",
    pages = "629--645",
    year = "2006"
    }
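
A rough sketch of the biasing idea from the abstract above. The mixing scheme and all names are one plausible reading rather than the paper's exact formulation: an individual's fitness blends its observed payoff with collaborators and a running estimate of its payoff with the ideal collaborator.

    def biased_fitness(ind, collaborators, payoff, best_seen, delta=0.5):
        """Blend the observed collaboration payoff with an estimate of the
        reward attainable alongside the optimal collaborator (approximated
        here, purely for illustration, by the best payoff seen so far)."""
        observed = max(payoff(ind, c) for c in collaborators)
        best_seen[ind] = max(best_seen.get(ind, observed), observed)
        return (1 - delta) * observed + delta * best_seen[ind]

    # delta = 0 recovers the usual CCEA fitness; delta = 1 trusts only the
    # ideal-collaboration estimate; the paper examines how to set the bias
    # robustly in between.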

A Comparison of Bloat Control Methods for Genetic Programming
Authors:
Sean Luke and Liviu Panait
Citation:
Sean Luke and Liviu Panait. 2006. A Comparison of Bloat Control Methods for Genetic Programming. In Evolutionary Computation. 14(3): 309-344.
Abstract:
     Genetic programming has highlighted the problem of bloat, the uncontrolled growth of the average size of an individual in the population. The most common approach to dealing with bloat in tree-based genetic programming individuals is to limit their maximal allowed depth. An alternative to depth limiting is to punish individuals in some way based on excess size, and our experiments have shown that the combination of depth limiting with such a punitive method is generally more effective than either alone. Which such combinations are most effective at reducing bloat? In this article we augment depth limiting with nine bloat control methods and compare them with one another. These methods are chosen from past literature and from techniques of our own devising. Testing with four genetic programming problems, we identify where each bloat control method performs well on a per-problem basis, and under what settings various methods are effective independent of problem. We report on the results of these tests, and discover an unexpected winner in the cross-problem category.
Bibtex entry:
@Article{luke05comparison,
    author = "Sean Luke and Liviu Panait",
    title = "A Comparison of Bloat Control Methods for Genetic Programming",
    journal = "Evolutionary Computation",
    volumne = "14",
    number = "3",
    pages = "309--344",
    year = "2006"
    }

Cooperative Multi-Agent Learning: The State of the Art
Authors:
Liviu Panait and Sean Luke
Formats:
PDF
Citation:
Liviu Panait and Sean Luke. 2005. Cooperative Multi-Agent Learning: The State of the Art. In Autonomous Agents and Multi-Agent Systems. 11(3): 387-434. Springer.
Abstract:
     Cooperative multi-agent systems are ones in which several agents attempt, through their interaction, to jointly solve tasks or to maximize utility. Due to the interactions among the agents, multi-agent problem complexity can rise rapidly with the number of agents or their behavioral sophistication. The challenge this presents to the task of programming solutions to multi-agent systems problems has spawned increasing interest in machine learning techniques to automate the search and optimization process.
     We provide a broad survey of the cooperative multi-agent learning literature. Previous surveys of this area have largely focused on issues common to specific subareas (for example, reinforcement learning or robotics). In this survey we attempt to draw from multi-agent learning work in a spectrum of areas, including reinforcement learning, evolutionary computation, game theory, complex systems, agent modeling, and robotics.
     We find that this broad view leads to a division of the work into two categories, each with its own special issues: applying a single learner to discover joint solutions to multi-agent problems (team learning), or using multiple simultaneous learners, often one per agent (concurrent learning). Additionally, we discuss direct and indirect communication in connection with learning, plus open issues in task decomposition, scalability, and adaptive dynamics. We conclude with a presentation of multi-agent learning problem domains, and a list of multi-agent learning resources.
Bibtex entry:
@Article{panait05cmasl,
    author = "Liviu Panait and Sean Luke",
    title = "Cooperative Multi-Agent Learning: The State of the Art",
    journal = "Autonomous Agents and Multi-Agent Systems",
    publisher = "Springer",
    volume = "11",
    number = "3",
    pages = "387--434",
    year = "2005"
    }

MASON: A Multiagent Simulation Environment
Authors:
Sean Luke, Claudio Cioffi-Revilla, Liviu Panait, Keith Sullivan, and Gabriel Balan
Citation:
Sean Luke, Claudio Cioffi-Revilla, Liviu Panait, Keith Sullivan, and Gabriel Balan. 2005. MASON: A Multiagent Simulation Environment. In Simulation. Sage Publications. (to appear).
Abstract:
     MASON is a fast, easily extensible, discrete-event multi-agent simulation toolkit in Java, designed to serve as the basis for a wide range of multi-agent simulation tasks ranging from swarm robotics to machine learning to social complexity environments. MASON carefully delineates between model and visualization, allowing models to be dynamically detached from or attached to visualizers, and to change platforms mid-run. This paper describes the MASON system, its motivation, and its basic architectural design. It then compares MASON to related multi-agent libraries in the public domain, and discusses six applications of the system built over the past year which suggest its breadth of utility.
Bibtex entry:
@Article{luke05mason,
    author = "Sean Luke and Claudio Cioffi-Revilla and Liviu Panait and Keith Sullivan and Gabriel Balan",
    title = "MASON: A Multiagent Simulation Environment",
    journal = "Simulation",
    publisher = "Sage Publications",
    volume = "(to appear)",
    year = "2005",
    }
 

Papers in Refereed Conference Proceedings

Theoretical Advantages of Lenient Q-learners: An Evolutionary Game Theoretic Perspective
Authors:
Liviu Panait and Karl Tuyls
Citation:
Liviu Panait and Karl Tuyls. 2007. Theoretical Advantages of Lenient Q-learners: An Evolutionary Game Theoretic Perspective. In Proceedings of the Sixth International Joint Conference on Autonomous Agents and Multi Agent Systems (AAMAS-2007). ACM.
Bibtex entry:
@InProceedings{panait07theoretical,
    author = "Liviu Panait and Karl Tuyls",
    title = "Theoretical Advantages of Lenient Q-learners: An Evolutionary Game Theoretic Perspective",
    booktitle = "Proceedings of the Sixth International Joint Conference on Autonomous Agents and Multi Agent Systems -- {AAMAS}-2007",
    publisher = "ACM",
    year = "2007",
    }

Archive-based Cooperative Coevolutionary Algorithms
Authors:
Liviu Panait, Sean Luke, and Joseph Harrison
Citation:
Liviu Panait, Sean Luke, and Joseph Harrison. 2006. Archive-based Cooperative Coevolutionary Algorithms. In Proceedings of the Genetic and Evolutionary Computation Conference -- GECCO-2006. ACM.
Bibtex entry:
@InProceedings{panait06archive,
    author = "Liviu Panait and Sean Luke and Joseph Harrison",
    title = "Archive-based Cooperative Coevolutionary Algorithms",
    booktitle = "Proceedings of the Genetic and Evolutionary Computation Conference -- {GECCO}-2006",
    publisher = "ACM",
    pages = "345--352",
    year = "2006",
    }

Selecting Informative Actions Improves Cooperative Multiagent Learning
Authors:
Liviu Panait and Sean Luke
Citation:
Liviu Panait and Sean Luke. 2006. Selecting Informative Actions Improves Cooperative Multiagent Learning. In Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multi Agent Systems (AAMAS-2006). ACM.
Bibtex entry:
@InProceedings{panait06selecting,
    author = "Liviu Panait and Sean Luke",
    title = "Selecting Informative Actions Improves Cooperative Multiagent Learning",
    booktitle = "Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multi Agent Systems -- {AAMAS}-2006",
    publisher = "ACM",
    year = "2006",
    }

Lenience Towards Teammates Helps in Cooperative Multiagent Learning
Authors:
Liviu Panait, Keith Sullivan, and Sean Luke
Citation:
Liviu Panait, Keith Sullivan, and Sean Luke. 2006. Lenience Towards Teammates Helps in Cooperative Multiagent Learning. In Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multi Agent Systems (AAMAS-2006). ACM.
Bibtex entry:
@InProceedings{panait06lenience,
    author = "Liviu Panait and Keith Sullivan and Sean Luke",
    title = "Lenience Towards teammates Helps in Cooperative Multiagent Learning",
    booktitle = "Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multi Agent Systems -- {AAMAS}-2006",
    publisher = "ACM",
    year = "2006",
    }

Can Good Learners Always Compensate for Bad Learners?
Authors:
Keith Sullivan, Liviu Panait, Gabriel Balan, and Sean Luke
Citation:
Keith Sullivan, Liviu Panait, Gabriel Balan, and Sean Luke. 2006. Can Good Learners Always Compensate for Bad Learners? In Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multi Agent Systems (AAMAS-2006). ACM.
Bibtex entry:
@InProceedings{panait06can,
    author = "Keith Sullivan and Liviu Panait and Gabriel Balan and Sean Luke",
    title = "Can Good Learners Always Compensate for Bad Learners?",
    booktitle = "Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multi Agent Systems -- {AAMAS}-2006",
    publisher = "ACM",
    year = "2006",
    }

Tunably Decentralized Algorithms for Cooperative Target Observation
Authors:
Sean Luke, Keith Sullivan, Liviu Panait, and Gabriel Balan
Formats:
PDF
Citation:
Sean Luke, Keith Sullivan, Liviu Panait, and Gabriel Balan. 2005. Tunably Decentralized Algorithms for Cooperative Target Observation. In Proceedings of the Fourth International Joint Conference on Autonomous Agents and Multi Agent Systems. ACM.
Abstract:
     Multi-agent problem domains may require distributed algorithms for a variety of reasons: local sensors, limitations of communication, and availability of distributed computational resources. In the absence of these constraints, centralized algorithms are often more efficient, simply because they are able to take advantage of more information. We introduce a variant of the cooperative target observation domain which is free of such constraints. We propose two algorithms, inspired by K-means clustering and hill-climbing respectively, which are scalable in degree of decentralization. Neither algorithm consistently outperforms the other across all problem domain settings. Surprisingly, we find that hill-climbing is sensitive to degree of decentralization, while K-means is not. We also experiment with a combination of the two algorithms which draws strength from each.
Bibtex entry:
@InProceedings{luke05tunably,
    author = "Sean Luke and Keith Sullivan and Liviu Panait and Gabriel Balan",
    title = "Tunably Decentralized Algorithms for Cooperative Target Observation",
    booktitle = "Proceedings of the Fourth International Joint Conference on Autonomous Agents and Multi Agent Systems",
    publisher = "ACM",
    year = "2005",
    }

A Visual Demonstration of Convergence Properties of Cooperative Coevolution
Authors:
Liviu Panait, R. Paul Wiegand, and Sean Luke
Formats:
PDF
Citation:
Liviu Panait, R. Paul Wiegand, and Sean Luke. 2004. A Visual Demonstration of Convergence Properties of Cooperative Coevolution. In Parallel Problem Solving from Nature -- PPSN-2004. Springer. 892-901.
Abstract:
     We introduce a model for cooperative coevolutionary algorithms (CCEAs) using partial mixing, which allows us to compute the expected long-run convergence of such algorithms when individuals' fitness is based on the maximum payoff over N evaluations with partners chosen at random from the other population. Using this model, we devise novel visualization mechanisms to attempt to qualitatively explain a difficult-to-conceptualize pathology in CCEAs: the tendency for them to converge to suboptimal Nash equilibria. We further demonstrate visually how increasing N, or biasing the fitness to include an ideal-collaboration factor, improves the likelihood of optimal convergence, and under which initial population configurations each is of little help.
Bibtex entry:
@InProceedings{panait04visual,
    author = "Liviu Panait and R. Paul Wiegand and Sean Luke",
    title = "A Visual Demonstration of Convergence Properties of Cooperative Coevolution",
    booktitle = "Parallel Problem Solving from Nature -- PPSN-2004",
    publisher = "Springer",
    pages = "892--901",
    year = "2004",
    }
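
The fitness rule modeled above is easy to state in code. A minimal sketch, assuming a payoff function defined over pairs of individuals (all names are illustrative):

    import random

    def fitness_max_of_n(ind, other_population, payoff, n=3):
        """Partial-mixing fitness: the maximum payoff over n evaluations
        with partners drawn at random from the other population."""
        partners = random.sample(other_population, n)
        return max(payoff(ind, p) for p in partners)

    # The paper's model predicts that a larger n, or an added
    # ideal-collaboration bias term, makes optimal convergence more likely.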

Ant Foraging Revisited
Authors:
Liviu Panait and Sean Luke
Formats:
PDF
Citation:
Liviu Panait and Sean Luke. 2004. Ant Foraging Revisited. In Proceedings of the Ninth International Conference on the Simulation and Synthesis of Living Systems (ALIFE-IX).
Abstract:
     Most previous artificial ant foraging algorithms have to date relied to some degree on a priori knowledge of the environment, in the form of explicit gradients generated by the nest, by hard-coding the nest location in an easily-discoverable place, or by imbuing the artificial ants with the knowledge of the nest direction. In contrast, the work presented solves ant foraging problems using two pheromones, one applied when searching for food and the other when returning food items to the nest. This replaces the need to use complicated nest-discovery devices with simpler mechanisms based on pheromone information, which in turn reduces the ant system complexity. The resulting algorithm is orthogonal and simple, yet ants are able to establish increasingly efficient trails from the nest to the food in the presence of obstacles. The algorithm replaces the blind addition of new amounts of pheromones with an adjustment mechanism that resembles dynamic programming.
Bibtex entry:
@InProceedings{panait04ant,
    author = "Liviu Panait and Sean Luke",
    title = "Ant Foraging Revisited",
    booktitle = "Proceedings of the Ninth International Conference on the Simulation and Synthesis of Living Systems ({ALIFE-IX})",
    year = "2004",
    }
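
The abstract's closing remark, that blind pheromone addition is replaced by an adjustment resembling dynamic programming, can be sketched as a value-iteration-style backup on a grid. The discount factor, neighborhood, and names below are assumptions made for illustration, not the paper's exact update.

    GAMMA = 0.9  # assumed discount factor

    def neighbors(x, y, w, h):
        """8-connected grid neighborhood, clipped to the grid bounds."""
        return [(x + dx, y + dy)
                for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if (dx or dy) and 0 <= x + dx < w and 0 <= y + dy < h]

    def pheromone_backup(pher, x, y, reward):
        """Rather than blindly depositing pheromone, raise a cell's level
        toward the local reward plus the discounted best neighboring level,
        much like a dynamic-programming backup."""
        w, h = len(pher), len(pher[0])
        best = max(pher[nx][ny] for nx, ny in neighbors(x, y, w, h))
        pher[x][y] = max(pher[x][y], reward + GAMMA * best)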

Learning Ant Foraging Behaviors
Authors:
Liviu Panait and Sean Luke
Formats:
PDF
Citation:
Liviu Panait and Sean Luke. 2004. Learning Ant Foraging Behaviors. In Proceedings of the Ninth International Conference on the Simulation and Synthesis of Living Systems (ALIFE-IX).
Abstract:
     Insects are good at cooperatively solving many complex tasks. For example, foraging for food far away from a nest can be solved through relatively simple behaviors in combination with pheromones. As task complexity increases, however, it may become difficult to find individual agent rules which yield a desired emergent cooperative behavior, or to know if any such rules exist at all. For such tasks, machine learning techniques like evolutionary computation (EC) may prove a valuable approach to searching the space of possible rule combinations. This paper presents an application of genetic programming to search for foraging behaviors. The learned foraging behaviors use only pheromone information to find the path to the nest and to the food source.
Bibtex entry:
@InProceedings{panait04learning,
    author = "Liviu Panait and Sean Luke",
    title = "Learning Ant Foraging Behaviors",
    booktitle = "Proceedings of the Ninth International Conference on the Simulation and Synthesis of Living Systems ({ALIFE-IX})",
    year = "2004",
    }

A Sensitivity Analysis of a Cooperative Coevolutionary Algorithm Biased for Optimization
Authors:
Liviu Panait, R. Paul Wiegand, and Sean Luke
Formats:
PDF
Citation:
Liviu Panait, R. Paul Wiegand, and Sean Luke. 2004. A Sensitivity Analysis of a Cooperative Coevolutionary Algorithm Biased for Optimization. In Proceedings of the Genetic and Evolutionary Computation Conference -- GECCO-2004.
Abstract:
     Recent theoretical work helped explain certain optimization-related pathologies in cooperative coevolutionary algorithms (CCEAs). Such explanations have led to adopting specific and constructive strategies for improving CCEA optimization performance by biasing the algorithm toward ideal collaboration. This paper investigates how sensitivity to the degree of bias (set in advance) is affected by certain algorithmic and problem properties. We discover that the previous static biasing approach is quite sensitive to a number of problem properties, and we propose a stochastic alternative which alleviates this problem. We believe that finding appropriate biasing rates is more feasible with this new biasing technique.
Bibtex entry:
@InProceedings{panait04sensitivity,
    author = "Liviu Panait and R. Paul Wiegand and Sean Luke",
    title = "A Sensitivity Analisys of a Cooperative Coevolutionary Algorithm Biased for Optimization",
    booktitle = "Proceedings of the Genetic and Evolutionary Computation Conference -- {GECCO}-2004",
    year = "2004",
    }

Alternative Bloat Control Methods
Authors:
Liviu Panait and Sean Luke
Formats:
PDF
Citation:
Liviu Panait and Sean Luke. 2004. Alternative Bloat Control Methods. In Proceedings of the Genetic and Evolutionary Computation Conference -- GECCO-2004.
Abstract:
     Bloat control is an important aspect of evolutionary computation methods, such as genetic programming, which must deal with genomes of arbitrary size. We introduce three new methods for bloat control: Biased Multi-Objective Parsimony Pressure (BMOPP), the Waiting Room, and Death by Size. These methods are unusual approaches to bloat control, and are not only useful in various circumstances, but two of them suggest novel approaches to attack the problem. BMOPP is a more traditional parsimony-pressure style bloat control method, while the other two methods do not consider parsimony as part of the selection process at all, but instead penalize for parsimony at other stages in the evolutionary process. We find parameter settings for BMOPP and the Waiting Room which are effective across all tested problem domains. Death by Size does not appear to have this consistency, but we find it a useful tool as it has particular applicability to steady-state evolution.
Bibtex entry:
@InProceedings{panait04alternative,
    author = "Liviu Panait and Sean Luke",
    title = "Alternative Bloat Control Methods",
    booktitle = "Proceedings of the Genetic and Evolutionary Computation Conference -- {GECCO}-2004",
    year = "2004",
    }

A Pheromone-Based Utility Model for Collaborative Foraging
Authors:
Liviu Panait and Sean Luke
Formats:
PDF
Citation:
Liviu Panait and Sean Luke. 2004. A Pheromone-Based Utility Model for Collaborative Foraging. In Proceedings of the Third International Joint Conference on Autonomous Agents and Multi Agent Systems (AAMAS-2004).
Abstract:
     Multi-agent research often borrows from biology, where remarkable examples of collective intelligence may be found. One interesting example is ant colonies' use of pheromones as a joint communication mechanism. In this paper we propose two pheromone-based algorithms for artificial agent foraging, trail-creation, and other tasks. Whereas practically all previous work in this area has focused on biologically-plausible but ad-hoc single pheromone models, we have developed a formalism which uses multiple pheromones to guide cooperative tasks. This model bears some similarity to reinforcement learning. However, our model takes advantage of symmetries common to foraging environments which enables it to achieve much faster reward propagation than reinforcement learning does. Using this approach we demonstrate cooperative behaviors well beyond the previous ant-foraging work, including the ability to create optimal foraging paths in the presence of obstacles, to cope with dynamic environments, and to follow tours with multiple waypoints. We believe that this model may be used for more complex problems still.
Bibtex entry:
@InProceedings{panait04pheromone,
    author = "Liviu Panait and Sean Luke",
    title = "A Pheromone-Based Utility Model for Collaborative Foraging",
    booktitle = "Proceedings of the Third International Joint Conference on Autonomous Agents and Multi Agent Systems -- {AAMAS}-2004",
    year = "2004",
    }

Population Implosion in Genetic Programming
Authors:
Sean Luke, Gabriel Balan, and Liviu Panait
Formats:
PDF
Citation:
Sean Luke, Gabriel Balan, and Liviu Panait. 2003. Population Implosion in Genetic Programming. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2003). E. Cantu-Paz et al, eds. Springer-Verlag. 1729-1739.
Abstract:
     With the exception of a small body of adaptive-parameter literature, evolutionary computation has traditionally favored keeping the population size constant through the course of the run. Unfortunately, genetic programming has an aging problem: for various reasons, late in the run the technique becomes less effective at optimization. Given a fixed number of evaluations, allocating many of them late in the run may thus not be a good strategy. In this paper we experiment with gradually decreasing the population size throughout a genetic programming run, in order to reallocate more evaluations to early generations. Our results show that over four problem domains and three different numbers of evaluations, decreasing the population size is always as good as, and frequently better than, various fixed-sized population strategies.
Note:
This paper was nominated for the Best Paper of Conference award for the Genetic Programming track.
Bibtex entry:
@InProceedings{luke03population,
    author = "Sean Luke and Gabriel Balan and Liviu Panait",
    title = "Population Implosion in Genetic Programming",
    booktitle = "Proceedings of the Genetic and Evolutionary Computation Conference ({GECCO}-2003)",
    editor = "E. Cantu-Paz et al",
    publisher = "Springer-Verlag",
    pages = "1729--1739"
    year = "2003",
    }
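
As a concrete illustration of reallocating a fixed evaluation budget toward early generations, here is one possible shrinking-population schedule. The geometric decay rule is invented for illustration; the paper compares several decreasing and fixed-size strategies.

    def imploding_schedule(budget, initial_size, decay=0.9, floor=8):
        """Return per-generation population sizes that shrink geometrically
        until the evaluation budget is exhausted."""
        sizes, size = [], float(initial_size)
        while budget >= max(int(size), floor):
            current = max(int(size), floor)
            sizes.append(current)
            budget -= current
            size *= decay
        return sizes

    # e.g. imploding_schedule(10_000, 1_000) front-loads most evaluations
    # into the early, large generations, where the paper's results suggest
    # genetic programming optimizes most effectively.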

Methods for Evolving Robust Programs
Authors:
Liviu Panait and Sean Luke
Formats:
PDF
Citation:
Liviu Panait and Sean Luke. 2003. Methods for Evolving Robust Programs. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2003). E. Cantu-Paz et al, eds. Springer-Verlag. 1740-1751.
Abstract:
     Many evolutionary computation search spaces require fitness assessment through the sampling of and generalization over a large set of possible cases as input. Such spaces seem particularly apropos to Genetic Programming, which notionally searches for computer algorithms and functions. Most existing research in this area uses ad-hoc approaches to the sampling task, guided more by intuition than understanding. In this initial investigation, we compare six approaches to sampling large training case sets in the context of genetic programming representations. These approaches include fixed and random samples, and adaptive methods such as coevolution or fitness sharing. Our results suggest that certain domain features may lead to the preference of one approach to generalization over others. In particular, coevolution methods are strongly domain-dependent. We conclude the paper with suggestions for further investigations to shed more light onto how one might adjust fitness assessment to make various methods more effective.
Note:
This paper was nominated for the Best Paper of Conference award for the Genetic Programming track.
Bibtex entry:
@InProceedings{panait03methods,
    author = "Liviu Panait and Sean Luke",
    title = "Methods for Evolving Robust Programs",
    booktitle = "Proceedings of the Genetic and Evolutionary Computation Conference ({GECCO}-2003)",
    editor = "E. Cantu-Paz et al",
    publisher = "Springer-Verlag",
    pages = "1740--1751"
    year = "2003",
    }

Improving Coevolutionary Search for Optimal Multiagent Behaviors
Authors:
Liviu Panait, R. Paul Wiegand, and Sean Luke
Formats:
PDF
Citation:
Liviu Panait, R. Paul Wiegand, and Sean Luke. 2003. Improving Coevolutionary Search for Optimal Multiagent Behaviors. In Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence (IJCAI-03). Georg Gottlob and Toby Walsh, eds. 653-658.
Abstract:
     Evolutionary computation is a useful technique for learning behaviors in multiagent systems. Among the several types of evolutionary computation, one natural and popular method is to coevolve multiagent behaviors in multiple, cooperating populations. Recent research has suggested that coevolutionary systems may favor stability rather than performance in some domains. In order to improve upon existing methods, this paper examines the idea of modifying traditional coevolution, biasing it to search for maximal rewards. We introduce a theoretical justification of the improved method and present experiments in three problem domains. We conclude that biasing can help coevolution find better results in some multiagent problem domains.
Bibtex entry:
@InProceedings{panait03improving,
    author = "Liviu Panait and R. Paul Wiegand and Sean Luke",
    title = "Improving Coevolutionary Search for Optimal Multiagent Behaviors",
    booktitle = "Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence ({IJCAI}-03)",
    editor = "Georg Gottlob and Toby Walsh",
    pages = "653--658"
    year = "2003",
    }

Fighting Bloat With Nonparametric Parsimony Pressure
Authors:
Sean Luke and Liviu Panait
Formats:
Gzipped PostScript (.ps.gz)
PDF
Citation:
Sean Luke and Liviu Panait. 2002. Fighting Bloat With Nonparametric Parsimony Pressure. In Parallel Problem Solving from Nature - PPSN VII (LNCS 2439). Juan Julian Merelo Guervos et al, eds. Springer-Verlag. 411-421.
Abstract:
     Many forms of parsimony pressure are parametric; that is, final fitness is a parametric model of the actual size and raw fitness values. The problem with parametric techniques is that they are hard to tune to prevent size from dominating fitness late in the evolutionary run, or to compensate for problem-dependent nonlinearities in the raw fitness function. In this paper we briefly discuss existing bloat-control techniques, then introduce two new kinds of non-parametric parsimony pressure, Direct and Proportional Tournament. As their names suggest, these techniques are based on simple modifications of tournament selection to consider both size and fitness, but not together as a combined parametric equation. We compare the techniques against, and in combination with, the most popular genetic programming bloat-control technique, Koza-style depth limiting, and show that they are effective in limiting size while still maintaining good best-fitness-of-run results.
Bibtex entry:
@InProceedings{luke02fighting,
    author = "Sean Luke and Liviu Panait",
    title = "Fighting Bloat With Nonparametric Parsimony Pressure",
    booktitle = "Parallel Problem Solving from Nature - PPSN VII (LNCS 2439)",
    editor = "Juan Julian Merelo Guervos et al",
    publisher = "Springer Verlag",
    pages = "411--421"
    year = "2002",
    }

A Comparison of Two Competitive Fitness Functions
Authors:
Liviu Panait and Sean Luke
Formats:
Gzipped PostScript (.ps.gz)
PDF
Citation:
Liviu Panait and Sean Luke. 2002. A Comparison of Two Competitive Fitness Functions. In GECCO-2002: Proceedings of the Genetic and Evolutionary Computation Conference. W. B. Langdon et al, eds. Morgan Kaufmann. 503-511.
Abstract:
     Competitive fitness is the assessment of an individual's fitness in the context of competition with other individuals in the evolutionary system. This commonly takes one of two forms: one-population competitive fitness, where competition is solely between individuals in the same population; and N-population competitive fitness, often termed competitive coevolution. In this paper we discuss common topologies for one-population competitive fitness functions, then test the performance of two such topologies, Single-Elimination Tournament and K-Random Opponents, on four problem domains. We show that neither of the extremes of K-Random Opponents (Round Robin and Random-Pairing) gives the best results when using limited computational resources. We also show that while Single-Elimination Tournament usually outperforms variations of K-Random Opponents in noise-free problems, it can suffer from premature convergence in noisy domains.
Note:
This paper was nominated for the Best Paper of Conference award for the Genetic Algorithms track.
Bibtex entry:
@InProceedings{panait02comparison,
    author = "Liviu Panait and Sean Luke",
    title = "A Comparison of Two Competitive Fitness Functions",
    booktitle = "Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2002)",
    editor = "W. B. Langdon et al",
    publisher = "Morgan Kaufmann",
    pages = "503--511"
    year = "2002",
    }
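
Of the two topologies compared above, K-Random Opponents is simple enough to sketch directly (Single-Elimination Tournament requires a bracket). A minimal version, assuming a play function that returns the first player's score:

    import random

    def k_random_opponents(population, play, k=10):
        """One-population competitive fitness: each individual's fitness is
        its average score against k opponents sampled from the population."""
        fitness = {}
        for i, ind in enumerate(population):
            opponents = random.sample(population[:i] + population[i + 1:], k)
            fitness[i] = sum(play(ind, opp) for opp in opponents) / k
        return fitness

    # k = 1 gives Random-Pairing and k = len(population) - 1 gives Round
    # Robin -- the two extremes the paper finds wanting under a limited
    # computational budget.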

Is the Perfect the Enemy of the Good?
Authors:
Sean Luke and Liviu Panait
Formats:
Gzipped PostScript (.ps.gz)
PDF
Citation:
Sean Luke and Liviu Panait. 2002. Is the Perfect the Enemy of the Good? In GECCO-2002: Proceedings of the Genetic and Evolutionary Computation Conference. W. B. Langdon et al, eds. Morgan Kaufmann. 820-828.
Abstract:
     Much of the genetic programming literature compares techniques using counts of ideal solutions found. These counts in turn form common comparison measures such as Koza's Computational Effort or cumulative Probability of Success. The use of these measures continues despite past warnings that they are not statistically valid. In this paper we too criticize the measures for serious statistical problems, and also argue that their motivational justification is faulty. We then present evidence suggesting that ideal solution counts are not necessarily positively related to best-fitness-of-run statistics: in fact they are often inversely correlated. Thus claims based on ideal solution counts can mislead readers into thinking techniques will provide superior final results, when in fact the opposite is true.
Note:
This paper won the Best Paper of Conference award for the Genetic Programming track.
Bibtex entry:
@InProceedings{luke02ideal,
    author = "Sean Luke and Liviu Panait",
    title = "Is the Perfect the Enemy of the Good?",
    booktitle = "Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2002)",
    editor = "W. B. Langdon et al",
    publisher = "Morgan Kaufmann",
    pages = "820--828"
    year = "2002",
    }

Lexicographic Parsimony Pressure
Authors:
Sean Luke and Liviu Panait
Formats:
Gzipped PostScript (.ps.gz)
PDF
Citation:
Sean Luke and Liviu Panait. 2002. Lexicographic Parsimony Pressure. In GECCO-2002: Proceedings of the Genetic and Evolutionary Computation Conference. W. B. Langdon et al, eds. Morgan Kaufmann. 829-836.
Abstract:
     We introduce a technique called lexicographic parsimony pressure, for controlling the significant growth of genetic programming trees during the course of an evolutionary computation run. Lexicographic parsimony pressure modifies selection to prefer smaller trees only when fitnesses are equal (or equal in rank). This technique is simple to implement and is not affected by specific differences in fitness values, but only by their relative ranking. In two experiments we show that lexicographic parsimony pressure reduces tree size while maintaining good fitness values, particularly when coupled with Koza-style maximum tree depth limits.
Bibtex entry:
@InProceedings{luke02lexicographic,
    author = "Sean Luke and Liviu Panait",
    title = "Lexicographic Parsimony Pressure",
    booktitle = "Proceedings of the Genetic and Evolutionary Computation Conference ({GECCO}-2002)",
    editor = "W. B. Langdon et al",
    publisher = "Morgan Kaufmann",
    pages = "829--836"
    year = "2002",
    }
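
The selection rule is simple enough to state exactly: prefer higher fitness, and fall back on smaller size only to break fitness ties. A minimal tournament-selection sketch (function names assumed):

    import random

    def lexicographic_tournament(population, fitness, size, k=7):
        """Tournament selection with lexicographic parsimony pressure:
        higher fitness wins; exact fitness ties go to the smaller tree."""
        entrants = random.sample(population, k)
        return max(entrants, key=lambda ind: (fitness(ind), -size(ind)))

Because only the ranking of fitness values matters here, the technique is unaffected by the scale of fitness differences, which is the property the abstract emphasizes.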

A Survey and Comparison of Tree Generation Algorithms
Authors:
Sean Luke and Liviu Panait
Formats:
Gzipped PostScript (.ps.gz)
PDF
Citation:
Sean Luke and Liviu Panait. 2001. A Survey and Comparison of Tree Generation Algorithms. In GECCO-2001: Proceedings of the Genetic and Evolutionary Computation Conference. Lee Spector et al, eds. Morgan Kaufmann. 81-88.
Abstract:
     This paper discusses and compares five major tree-generation algorithms for genetic programming, and their effects on fitness: RAMPED HALF-AND-HALF, PTC1, PTC2, RANDOM-BRANCH, and UNIFORM. The paper compares the performance of these algorithms on three genetic programming problems (11-Boolean Multiplexer, Artificial Ant, and Symbolic Regression), and discovers that the algorithms do not have a significant impact on fitness. Additional experimentation shows that tree size does have an important impact on fitness, and further that the ideal initial tree size is very different from that used in traditional GP.
Bibtex entry:
@InProceedings{luke01survey,
    author = "Sean Luke and Liviu Panait",
    title = "A survey and comparison of tree generation algorithms",
    booktitle = "Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2001)",
    editor = "Lee Spector et al",
    publisher = "Morgan Kaufmann",
    pages = "81--88"
    year = "2001",
    }

The Development of the AQ20 Learning System and Initial Experiments
Authors:
Guido Cervone, Liviu Panait, and Ryszard Michalski
Formats:
PDF
Citation:
Guido Cervone, Liviu Panait, and Ryszard Michalski. 2001. The Development of the AQ20 Learning System and Initial Experiments. In Proceedings of the International Conference on Intelligent Information Systems (IIS-2001).
Abstract:
     Research on a new system implementing the AQ learning methodology, called AQ20, is briefly described and illustrated by initial results from an experimental version. Like its predecessors, AQ20 is a multi-purpose learning system for inducing general concept descriptions from concept examples and counter-examples. AQ20 is viewed as a natural induction system because it aims at producing descriptions that are not only accurate but also easy to understand and interpret. This feature is achieved by representing descriptions in the form of attributional rulesets that have a higher representation power than decision trees or conventional decision rules. Among the new features implemented in AQ20 are the ability to handle continuous variables without prior discretization, to control the degree of generality of rules by a continuous parameter, and to generate more than one rule from a star. Initial experimental results from applying AQ20 to selected problems in the UCI repository demonstrate the high utility of the new system.
Bibtex entry:
@InProceedings{cervone01development,
    author = "Guido Cervone and Liviu Panait and Ryszard Michalski",
    title = "The Development of the AQ20 Learning System and Initial Experiments",
    booktitle = "Proceedings of the International Conference on Intelligent Information Systems (IIS-2001)",
    year = "2001",
    }
 

Refereed Workshop Papers

Learning Team Behaviors with Adaptive Heterogeneity
Authors:
Liviu Panait
Formats:
PDF
Citation:
Liviu Panait. 2005. Learning Team Behaviors with Adaptive Heterogeneity. In Proceedings of the Fourth International Joint Conference on Autonomous Agents and Multi Agent Systems (AAMAS-2005). ACM.
Bibtex entry:
@InProceedings{panait05learning,
    author = "Liviu Panait",
    title = "Learning Team Behaviors with Adaptive Heterogeneity",
    booktitle = "Proceedings of the Fourth International Joint Conference on Autonomous Agents and Multi Agent Systems -- {AAMAS}-2005",
    publisher = "ACM",
    year = "2005",
    }

Time-dependent Collaboration Schemes for Cooperative Coevolutionary Algorithms
Authors:
Liviu Panait and Sean Luke
Formats:
PDF
Citation:
Liviu Panait and Sean Luke. 2005. Time-dependent Collaboration Schemes for Cooperative Coevolutionary Algorithms. In Proceedings of the AAAI 2005 Fall Symposium on Coevolutionary and Coadaptive Systems. AAAI Press.
Abstract:
     Cooperative coevolutionary algorithms represent a popular approach to learning via problem decomposition. Since they were proposed more than a decade ago, their properties have been studied both formally and empirically. One important aspect of cooperative coevolutionary algorithms concerns how to select collaborators for computing the fitness of individuals in different populations. We argue that using a fixed number of collaborators during the entire search may be suboptimal. We experiment with a simple ad-hoc collaboration scheme that varies the number of collaborators over time. Empirical comparisons in a series of problem domains indicate that decreasing the number of collaborators over time fares better than fixed collaboration schemes. We conclude with a brief discussion of our findings and suggest directions for future research.
Bibtex entry:
@InProceedings{panait05time,
    author = "Liviu Panait and Sean Luke",
    title = "Time-dependent Collaboration Schemes for Cooperative Coevolutionary Algorithms",
    booktitle = "Proceedings of the AAAI 2005 Fall Symposium on Coevolutionary and Coadaptive Systems",
    publisher = "AAAI Press",
    year = "2005",
    }
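
One way to realize a decreasing collaboration scheme like the one reported above is a simple linear schedule; the exact shape below is illustrative, not necessarily the paper's.

    def collaborators_at(generation, max_generations, start=10, end=1):
        """Linearly decrease the number of collaborators used for fitness
        evaluation from `start` (first generation) to `end` (last)."""
        frac = generation / float(max_generations - 1)
        return max(end, round(start + frac * (end - start)))

    # One intuition: early generations benefit from many collaborators
    # (more accurate fitness estimates), while later, more converged
    # populations can get by with few.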

An Application of Evolutionary Algorithms to Study the Extent of SLHF Anomaly Associated with Coastal Earthquakes
Authors:
Guido Cervone, Liviu Panait, Ramesh Singh, and Sean Luke
Formats:
PDF
Citation:
Guido Cervone, Liviu Panait, Ramesh Singh, and Sean Luke. 2004. An Application of Evolutionary Algorithms to Study the Extent of SLHF Anomaly Associated with Coastal Earthquakes. In Late Breaking Papers of the Genetic and Evolutionary Computation Conference -- GECCO-2004. Springer.
Abstract:
     Multi-sensor remote sensing provides real-time, high-resolution data that can be used to study anomalous changes on land, in the ocean, and in the atmosphere associated with an impending earthquake. Anomalous behavior in Surface Latent Heat Flux (SLHF) prior to large coastal earthquakes has recently been found. However, an SLHF time series usually contains several sharp peaks that may be associated either with earthquakes or with atmospheric perturbations. In this paper we have used evolutionary algorithms to perform a search in a large space bounded by longitude, latitude and time, to distinguish between signals associated with earthquakes and those associated with atmospheric phenomena. The algorithm finds paths which delimit the extent of the detected anomalies by optimizing an objective function that takes into consideration several aspects, such as spatial and time continuity, the magnitude of the anomalies, and the distance to the continental boundary. This search strategy is crucial for the development of a fully automated early warning system for providing information about impending earthquakes in a seismically active coastal region.
     Experiments have been performed over a 2000 km2 area comprising part of the continental boundary between the African and Eurasian plates, roughly corresponding to Italy and Greece, one of the most seismically active regions. Using a 365-day-long time series, we identified three signals associated with seismic events. Additionally, it was possible to establish that the extent of the signal does not propagate further than 600 km from the epicenter of the earthquake.
Bibtex entry:
@InProceedings{cervone04application,
    author = "Guido Cervone and Liviu Panait and Ramesh Singh, and Sean Luke",
    title = "An Application of Evolutionary Algorithms to Study the Extent of SLHF Anomaly Associated with Coastal Earthquakes",
    booktitle = "Late Breaking Papers of the Genetic and Evolutionary Computation Conference -- GECCO-2004",
    publisher = "Springer",
    year = "2004",
    }

Learnable Representation for Real World Planning
Authors:
Mihai Boicu, Gheorghe Tecuci, Bogdan Stanescu, Liviu Panait, and Cristina Cascaval
Formats:
PDF
Citation:
Mihai Boicu, Gheorghe Tecuci, Bogdan Stanescu, Liviu Panait, and Cristina Cascaval. 2000. Learnable Representation for Real World Planning. In Proceedings of the AAAI Workshop on Representational Issues for Real-World Planning Systems.
Abstract:
     This paper presents a learnable representation for real-world planning systems. This representation is a significant extension of the ones used in the most recent systems from the Disciple family, the Disciple-Workaround system for plan generation, and the Disciple-COA system for plan critiquing. This representation is defined to support an integration of domain modeling, knowledge acquisition, learning and planning, in a mixed-initiative framework. It also helps to remove the current distinction between the development phase of a planning system and its maintenance phase. It provides an elegant solution to the knowledge expressiveness / knowledge efficiency trade-off, and allows reasoning with incomplete or partially incorrect knowledge. These qualities of the representation are supported by several experimental results.
Bibtex entry:
@InProceedings{boicu00learnable,
    author = "Mihai Boicu and Gheorghe Tecuci and Bogdan Stanescu and Liviu Panait and Cristina Cascaval",
    title = "Learnable Representation for Real World Planning",
    booktitle = "Proceedings of the AAAI Workshop on Representational Issues for Real-World Planning Systems",
    year = "2000",
    }

LEM: A New Multistrategy Approach that Speeds up Evolutionary Computation
Authors:
Guido Cervone, Kenneth Kaufman, Ryszard Michalski, and Liviu Panait
Citation:
Guido Cervone, Kenneth Kaufman, Ryszard Michalski, and Liviu Panait. 2000. LEM: A New Multistrategy Approach that Speeds up Evolutionary Computation. In Proceedings of the Fifth Workshop on Multistrategy Learning.
Abstract:
     The Learnable Evolution Model (LEM), first presented at the Fourth International Workshop on Multistrategy Learning, employs machine learning to guide evolutionary computation. Specifically, LEM integrates two modes of operation: Machine Learning mode, which employs a machine learning algorithm, and Darwinian Evolution mode, which employs a conventional evolutionary algorithm. The central new idea of LEM is that in machine learning mode, new individuals are ``genetically engineered'' by a repeated process of hypothesis formation and instantiation, rather than created by random operators of mutation and/or recombination, as in Darwinian-type evolutionary algorithms. At each stage of evolution, hypotheses are induced by a machine learning system from examples of high and low performance individuals. New individuals are created by instantiating the hypotheses in different ways. In recent experiments concerned with complex function optimization problems, LEM has significantly outperformed selected evolutionary computation algorithms, sometimes achieving speed-ups of the evolutionary process by two or more orders of magnitude (in terms of the number of generations). In another recent application involving a problem of optimizing heat exchangers, LEM produced designs equal or superior to best expert designs. The recent results have confirmed earlier findings that LEM is able to significantly speed-up evolutionary processes (in terms of the number of generations) for certain problems. Further research is needed to determine the classes of problems for which LEM is most advantageous.
Bibtex entry:
@InProceedings{cervone00lem,
    author = "Guido Cervone and Kenneth Kaufman and Ryszard Michalski and Liviu Panait",
    title = "{LEM}: A New Multistrategy Approach that Speeds up Evolutionary Computation",
    booktitle = "Proceedings of the Fifth Workshop on Multistrategy Learning",
    year = "2000",
    }
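
A schematic of LEM's Machine Learning mode as described above, with every name and the selection fraction invented for illustration; learn_rules stands in for an AQ-style rule learner in the actual system.

    def lem_ml_step(population, fitness, learn_rules, instantiate, frac=0.3):
        """One Machine Learning-mode step of a LEM-style loop: induce a
        hypothesis separating high-performing from low-performing
        individuals, then instantiate it to create the next population."""
        ranked = sorted(population, key=fitness, reverse=True)
        cut = max(1, int(len(population) * frac))
        high, low = ranked[:cut], ranked[-cut:]
        hypothesis = learn_rules(high, low)
        return [instantiate(hypothesis) for _ in population]

    # The full LEM alternates this step with a conventional Darwinian-mode
    # step (selection plus mutation/recombination); the mode-switching
    # criterion is not shown here.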