Pippa Avery online!

Research

Here you can find information on my previous research.


Independent Research

Recently I have been performing independent research in conjunction with Julian Togelius and Garry Greenwood. One of my personal hobbies is Tower Defence games, and with Julian and some of his students I published a paper covering the current state of research together with a proof of concept, which you can view here:

Computational Intelligence and Tower Defence Games
Computational Intelligence and Games (CIG), 2012 IEEE Conference on

I have also been working with Garry on creating a Tower Defence competition for the Computational Intelligence research community. More recently we have been working in conjunction with Daniel Ashlock on the creation of a Divide the Dollar game competition. I also acted as editor and second author on one of Garry’s recent papers, “Update rules, reciprocity and weak selection in evolutionary spatial games”, which was published at the IEEE 2012 Computational Intelligence and Games (CIG) conference.


University of Nevada, Reno

Working with the ECSL lab, I developed a technique for evolving strategic maneuvering in a Real Time Strategy (RTS) capture the flag game.

The research involved building a simple RTS capture the flag game with a Python front end and a C++ server. I coevolved Influence Maps (IMs) to generate coordinated team tactics: each entity in the team was assigned its own IM, allowing it to act independently, while team coordination was achieved by evolving all team entities’ IM parameters together as a single chromosome. This technique showed the potential of IMs for coevolving team tactics.
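
To make the encoding concrete, here is a minimal sketch of the idea, assuming a distance-based falloff and made-up parameter names (it is not the actual ECSL code):

    import random

    IM_PARAMS_PER_ENTITY = 3   # e.g. friendly weight, enemy weight, distance decay
    TEAM_SIZE = 5

    def random_chromosome():
        # One flat chromosome holds the IM parameters of every entity in the team,
        # so a single evolved individual encodes a coordinated team strategy.
        return [random.uniform(-1.0, 1.0)
                for _ in range(TEAM_SIZE * IM_PARAMS_PER_ENTITY)]

    def entity_params(chromosome, entity_index):
        # Slice out the IM parameters belonging to one entity.
        start = entity_index * IM_PARAMS_PER_ENTITY
        return chromosome[start:start + IM_PARAMS_PER_ENTITY]

    def cell_value(cell, friends, enemies, params):
        # IM value of one map cell for one entity: every unit projects its weight,
        # attenuated with distance, and the contributions are summed.
        friend_w, enemy_w, decay = params
        value = 0.0
        for units, weight in ((friends, friend_w), (enemies, enemy_w)):
            for ux, uy in units:
                dist = abs(cell[0] - ux) + abs(cell[1] - uy)
                value += weight / (1.0 + abs(decay) * dist)
        return value

During coevolution, a whole-team chromosome is evaluated by playing games against chromosomes drawn from an opposing population.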

Please feel free to download my published papers for more information:

Coevolving team tactics for a real-time strategy game
Evolutionary Computation (CEC), 2010 IEEE Congress on

Using Co-evolved RTS Opponents to Teach Spatial Tactics
Computational Intelligence and Games (CIG), 2010 IEEE Symposium on

Evolving Spatial Tactics using Influence Maps
Proceedings of the 5th international conference on Computational Intelligence and Games

Evolving coordinated spatial tactics for autonomous entities using influence maps
Computational Intelligence and Games, 2009. CIG 2009. IEEE Symposium on

I’ve uploaded a zip file with demo code showing example coevolved unit strategies, and I will upload the coevolution code very soon. The code was developed in conjunction with Ben Avery, who helped with the OpenGL and physics coding.

This was developed on Ubuntu, but if you can set up the dependencies it can also be run on Cygwin. Send me an email (eos2102atgmaildotcom) if you need a hand getting it up and running.

  1. Make sure you have the following installed:
    • Python
    • wxPython
    • OpenGL
    • python-dev
  2. Download and unzip the code.
  3. Run it with python boatviz.py
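
If it helps, the setup on the Ubuntu releases of that time was roughly the following; the package names are my best guess and may differ on your distribution:

    sudo apt-get install python python-wxgtk2.8 python-opengl python-dev
    python boatviz.py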

This should display an auto-run of the game. If you select one of the boats and check the Influence Map box, you will be able to see the Influence Map (IM) the boat is using to determine the best A* path. The IM was produced by the evolutionary process and updates depending on where the enemy boats are; it is also used to maneuver around land masses and to set goal points (cells) in the game map.
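
For the curious, here is a minimal sketch of how an IM can steer A*. It uses my own simplifying assumptions (a 4-connected grid, a Manhattan heuristic, and an invented im_cost penalty) and is not the boatviz implementation:

    import heapq

    def im_cost(im_value, penalty_scale=10.0):
        # Turn an IM value into a non-negative extra movement cost: cells under
        # enemy influence get penalised, friendly/neutral cells cost nothing extra.
        return max(0.0, im_value) * penalty_scale

    def a_star(start, goal, grid, im):
        # grid[y][x] is True for passable water, False for land; im[y][x] is the IM value.
        def h(cell):
            return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])  # admissible heuristic

        open_set = [(h(start), 0.0, start, None)]
        came_from, best_g = {}, {start: 0.0}
        while open_set:
            _, g, cell, parent = heapq.heappop(open_set)
            if cell in came_from:
                continue
            came_from[cell] = parent
            if cell == goal:                       # rebuild and return the path
                path = [cell]
                while came_from[path[-1]] is not None:
                    path.append(came_from[path[-1]])
                return list(reversed(path))
            x, y = cell
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and grid[ny][nx]:
                    ng = g + 1.0 + im_cost(im[ny][nx])
                    if ng < best_g.get((nx, ny), float("inf")):
                        best_g[(nx, ny)] = ng
                        heapq.heappush(open_set, (ng + h((nx, ny)), ng, (nx, ny), cell))
        return None                                # no path to the goal

With a cost function like this, the returned paths bend away from cells dominated by enemy influence while still routing around impassable land cells.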


University of Adelaide

For my PhD research I studied under Zbigniew Michalewicz, continuing research on the game of Tempo. Tempo is a game developed by the Department of Defence for training personnel in the task of resource allocation. It is a zero-sum game played between two opposing parties allocating resources in a Cold War-style simulation. The goal of the game is to acquire more offensive utilities than the opposition before war breaks out. The decision-making process requires allocating the yearly budget across the following:

  1. Operating existing forces.
  2. Acquiring additional forces.
  3. Intelligence and counter intelligence.
  4. Research and development.

My research involved developing a computer player to play against a human player. The computer player adapted its strategy to the human player, encouraging a more positive learning experience. It was developed by coevolving a fuzzy logic rule base, evolving against other computer players and/or a model of the current human player.
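
As a rough illustration of the player representation, here is a minimal sketch under my own assumptions (invented fuzzy sets and weights; not the actual Tempo code): the evolved genome is a small fuzzy rule base that splits the yearly budget across the four categories above, and a player’s fitness comes from using those allocations in games against another evolved player or a model of the human.

    import random

    CATEGORIES = ["operating forces", "acquiring forces", "intelligence", "research"]

    def triangular(x, a, b, c):
        # Standard triangular fuzzy membership function.
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    class FuzzyPlayer:
        # One rule per fuzzy set: IF the perceived chance of war is <low/medium/high>
        # THEN spend the associated weight vector. The weights form the genome
        # that coevolution adjusts.
        SETS = {"low": (-0.5, 0.0, 0.5), "medium": (0.0, 0.5, 1.0), "high": (0.5, 1.0, 1.5)}

        def __init__(self, genome=None):
            self.genome = genome or [[random.random() for _ in CATEGORIES]
                                     for _ in self.SETS]

        def allocate(self, budget, p_war):
            # Weight each rule's spending vector by its firing strength, then
            # scale the result so the whole yearly budget is spent.
            totals = [0.0] * len(CATEGORIES)
            for weights, (a, b, c) in zip(self.genome, self.SETS.values()):
                fire = triangular(p_war, a, b, c)
                for i, w in enumerate(weights):
                    totals[i] += fire * w
            scale = budget / (sum(totals) or 1.0)
            return dict(zip(CATEGORIES, (t * scale for t in totals)))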

You can view a copy of my PhD thesis here:
Coevolving a Computer Player for Resource Allocation Games – Using the game of TEMPO as a test space.
Doctor of Philosophy thesis, School of Computer Science, University of Adelaide

A zip file of the code is available here:
Tempo.zip

Some of the code was originally written by Martin Schmidt, but over the 3.5 years I worked on it much of it was rewritten or extended. I haven’t had time to clean up the code or write installation and running instructions, but I plan to do so soon. In the meantime, please keep that in mind while viewing :)

Additionally, here is a list of my publications on this research:

Adapting to Human Gamers Using Coevolution
Advances in Machine Learning II, Studies in Computational Intelligence Volume 263, 2010

Coevolving strategic intelligence
Evolutionary Computation, 2008. CEC 2008. IEEE Congress on

Adapting to human game play
Computational Intelligence and Games, 2008. CIG ’08. IEEE Symposium on

Short and long term memory in coevolution
International Journal of Information Technology & Intelligent Computing, 2008; 3(1):1-30

Static experts and dynamic enemies in coevolutionary games
Evolutionary Computation, 2007. CEC 2007. IEEE Congress on

A Historical Population in a Coevolutionary System
Computational Intelligence and Games, 2007. CIG 2007. IEEE Symposium on