Kostas Kostiadis, PhD
Electronic Trading
Books
  1. Kostiadis K.
    VDM Verlag Dr. Müller Aktiengesellschaft & Co. KG, Germany, 2008.
    ISBN-10: 3639102592.
    ISBN-13: 978-3639102598.
    You can buy a copy from Amazon UK, Amazon US, or your preferred retailer.
    The source code for the book is available upon request.

Thesis
  1. Kostiadis K. PhD Thesis, University of Essex, Wivenhoe Park, Colchester CO4 3SQ, United Kingdom, 2002.
    Abstract
    In recent years, two major areas of computer science have started converging. Artificial intelligence research is moving towards realistic domains requiring real-time responses, and real-time systems are moving towards more complex applications requiring intelligent behaviour. To that end, this thesis contributes techniques for building agents in a particularly complex class of real-world domains. More specifically, this thesis addresses the question of whether agents can learn to become individually skilled and also learn to co-operate in the presence of both teammates and adversaries in a complex, real-time, noisy environment with no communication.

    The first step towards this investigation is an analysis of the functional requirements for agents wishing to operate in such environments. Following this, an agent architecture is proposed that enables agents to combine reactive real-time responses with deliberative intelligent decisions. A multi-threaded framework is also presented, which allows agents to operate with bounded response times.

    Once the core components within the agent have been described, the focus turns to the agent's behaviour. Starting from its perception, this thesis demonstrates how a sparse distributed memory model can be used as a generalisation component for tasks that involve large state spaces. In addition, a simple task is used to demonstrate how reinforcement learning can lead to co-operative problem solving via intelligent action selection.

    Finally, this thesis demonstrates how the proposed generalisation and action selection methods can be combined in order to produce a decision-making module suitable for noisy, real-time, collaborative and adversarial domains that involve large state spaces. A complex learning task is used to demonstrate that agents can learn to co-operate via their action selection mechanisms even when they have no knowledge of the concept of co-operation. Experimental results demonstrate how the learned policies outperform fixed, hand-coded ones.
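
    The generalisation component referred to in this abstract is based on Kanerva's sparse distributed memory. As a rough illustration (the prototype count, activation radius, and continuous state encoding below are assumptions for the sketch, not details taken from the thesis), a Kanerva-style coder maps a raw state onto a sparse binary feature vector by activating every stored prototype that lies within a fixed distance of it:

      // Illustrative Kanerva-style coder: random prototypes in [0,1]^dim,
      // a state activates every prototype within `radius` of it.
      #include <cmath>
      #include <cstdlib>
      #include <vector>

      struct KanervaCoder {
          std::vector<std::vector<double>> prototypes;
          double radius;

          KanervaCoder(int n_prototypes, int dim, double r) : radius(r) {
              for (int i = 0; i < n_prototypes; ++i) {
                  std::vector<double> p(dim);
                  for (double& x : p) x = std::rand() / (double)RAND_MAX;
                  prototypes.push_back(p);
              }
          }

          // Binary feature vector: feature i is on iff the state lies
          // within `radius` of prototype i (Euclidean distance here).
          std::vector<int> features(const std::vector<double>& state) const {
              std::vector<int> f(prototypes.size(), 0);
              for (std::size_t i = 0; i < prototypes.size(); ++i) {
                  double d2 = 0;
                  for (std::size_t j = 0; j < state.size(); ++j) {
                      double diff = state[j] - prototypes[i][j];
                      d2 += diff * diff;
                  }
                  if (std::sqrt(d2) <= radius) f[i] = 1;
              }
              return f;
          }
      };

    Nearby states activate largely overlapping prototype sets, which is what allows experience gathered in a limited subset of the state space to generalise to similar, unvisited states.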

Book Chapters
  1. Hu H., Kostiadis K., Hunter M., and Kalyviotis N. In Birk A., Coradeschi S., and Tadokoro S., editors, RoboCup-01: Robot Soccer World Cup V. Springer-Verlag, Berlin, 2002.
    Abstract
    This article presents an overview of the Essex Wizards 2001 team that participated in the RoboCup 2001 simulator league. Four major issues are addressed: a generalised approach to position selection; strategic planning and encoded communication; reinforcement learning (RL) and Kanerva-based generalisation; and the agent architecture and agent behaviours.
  2. Hu H., Kostiadis K., Hunter M., and Kalyviotis N. In Stone P., Balch T., and Kraetzschmar G., editors, RoboCup-00: Robot Soccer World Cup IV. Springer-Verlag, Berlin, 2001.
    Abstract
    This article gives an overview of the Essex Wizards 2000 team that participated in the RoboCup 2000 simulator league. The agent architecture for the team is briefly described, and both low-level and high-level behaviours are presented. The design issues regarding fixed planning and reinforcement learning are briefly outlined.
  3. Kostiadis K. and Hu H. In Veloso M., Pagello E., and Kitano H., editors, RoboCup-99: Robot Soccer World Cup III. Springer-Verlag, Berlin, 2000.
    Abstract
    To meet the timing requirements set by the RoboCup soccer server simulator, this paper proposes a multi-threaded approach to building simulated soccer agents for the RoboCup competition. At the highest level, each agent operates in three distinct phases: sensing, thinking, and acting. Instead of the traditional single-threaded approach, POSIX threads are used to decompose these phases and execute them concurrently. The paper describes how this parallel implementation significantly improves the agent's responsiveness and overall performance. Implementation results show that the multi-threaded approach clearly outperforms a single-threaded one in terms of efficiency, responsiveness, and scalability, and is particularly well suited to multi-processor systems. (A minimal sketch of this threading scheme appears after this list.)
  4. Hu H., Kostiadis K., Hunter M., and Seabrook M. In Veloso M., Pagello E., and Kitano H., editors, RoboCup-99: Robot Soccer World Cup III. Springer-Verlag, Berlin, 2000.
    Abstract
    This paper describes the Essex Wizards team that participated in the RoboCup'99 simulator league. It concentrates mainly on a multi-threaded implementation of simulated soccer agents to achieve real-time performance. Simulated robot agents operate in three distinct phases: sensing, thinking, and acting; POSIX threads are adopted to implement these concurrently. The issues of decision-making and co-operation are also addressed.
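
    Chapters 3 and 4 above decompose each agent into sensing, thinking, and acting phases running as concurrent POSIX threads. The following is a minimal C++ sketch of that decomposition, not the published Essex Wizards code; the one-slot mailboxes and the trivial placeholder policy are assumptions made for the illustration:

      // Minimal sketch of a three-phase sense-think-act agent pipeline
      // built on POSIX threads.
      #include <pthread.h>
      #include <unistd.h>
      #include <cstdio>

      // One-slot mailbox: a thread publishes a value, the next thread
      // in the pipeline blocks until a fresh value is available.
      struct Mailbox {
          pthread_mutex_t lock;
          pthread_cond_t  fresh;
          int  value;
          bool has_value;

          Mailbox() : value(0), has_value(false) {
              pthread_mutex_init(&lock, nullptr);
              pthread_cond_init(&fresh, nullptr);
          }
          void put(int v) {
              pthread_mutex_lock(&lock);
              value = v;
              has_value = true;
              pthread_cond_signal(&fresh);
              pthread_mutex_unlock(&lock);
          }
          int take() {
              pthread_mutex_lock(&lock);
              while (!has_value) pthread_cond_wait(&fresh, &lock);
              has_value = false;
              int v = value;
              pthread_mutex_unlock(&lock);
              return v;
          }
      };

      static Mailbox perceptions, commands;

      void* sense(void*) {             // phase 1: parse sensor input
          for (int t = 0; ; ++t) {
              usleep(100000);          // real agent: block on the server socket
              perceptions.put(t);      // publish the latest world state
          }
      }
      void* think(void*) {             // phase 2: decide on an action
          for (;;) commands.put(perceptions.take() * 2);  // placeholder policy
      }
      void* act(void*) {               // phase 3: send the chosen command
          for (;;) std::printf("command %d\n", commands.take());
      }

      int main() {
          pthread_t s, t, a;
          pthread_create(&s, nullptr, sense, nullptr);
          pthread_create(&t, nullptr, think, nullptr);
          pthread_create(&a, nullptr, act, nullptr);
          pthread_join(s, nullptr);    // runs until the process is killed
          pthread_join(t, nullptr);
          pthread_join(a, nullptr);
      }

    Decoupling the phases means a slow deliberation step cannot block perception: the sensing thread keeps the world model fresh while thinking and acting proceed at their own rates, which is the property the papers exploit to bound response times.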

Refereed Conference Papers
  1. Kostiadis K. and Hu H. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots & Systems (IROS 2001), Hawaii, October 2001.
    Abstract
    The complexity of most modern systems prohibits a hand-coded approach to decision making. In addition, many problems have continuous or large discrete state spaces, and some have large or continuous action spaces. The problem of learning in large spaces is tackled through generalisation techniques, which allow compact representation of learned information and transfer of knowledge between similar states and actions. In large smooth state spaces, it is expected that similar states will have similar values and similar optimal actions. Therefore, it should be possible to use experience gathered through interaction with a limited subset of the state space to produce a good approximation over a much larger subset.

    In this paper, Kanerva coding and reinforcement learning are combined to produce the K-RL multi-stage decision-making module. The purpose of K-RL is twofold: firstly, Kanerva coding is used as a generalisation method to produce a feature vector from the raw sensory input; secondly, the reinforcement learning component receives this feature vector and learns to choose a desired action. The efficiency of K-RL is tested on the "3 versus 2 possession football" challenge, a sub-problem of the RoboCup domain. The results demonstrate that the learning approach outperforms all hand-coded policies. (A minimal sketch of this combination appears after this list.)
  2. Kostiadis K., Hunter M., and Hu H. Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC2000), Tennessee, October 2000.
    Abstract
    Developers of AI software routinely face design challenges involving robustness, efficiency, and extensibility. At a higher level, most of these challenges are independent of application-specific requirements. Although design patterns have been successfully adopted to tackle these issues, they are rarely documented; consequently, this knowledge remains hidden in the minds of developers or buried within complex system source code.

    The primary contribution of this paper is an abstract design methodology that can be applied to many single- or multi-agent systems. The paper illustrates how design patterns can ease the development and increase the efficiency of such systems. As an example, the paper presents the Essex Wizards multi-agent system, which won third prize in the RoboCup'99 simulator league competition.
  3. Hu H., Kostiadis K., and Liu Z. Proceedings of the IASTED Robotics and Applications Conference, California, October 1999.
    Abstract
    Research on the co-operation of multiple mobile robots must address three main problems: (i) how to appropriately divide the functionality of the system among multiple robots, (ii) how to manage the dynamic configuration of the system in order to realise co-operative behaviours, and (iii) how to achieve coordination and learning for a team of mobile robots. This paper addresses these issues using a behaviour-based approach. More specifically, the aim of this research is to develop a team of mobile robots with coordination and learning capabilities for the robot soccer competition. The methodology used to implement co-operative behaviours and the learning strategy are presented. The construction of the Essex Wizards soccer robots and initial simulation results are introduced to demonstrate the feasibility of the approach.
  4. Kostiadis K. and Hu H. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots & Systems (IROS 1999), Korea, October 1999.
    Abstract
    The complexity of most multi-agent systems prohibits a hand-coded approach to decision making. Moreover, a complex, dynamic, adversarial environment such as a football game makes decision-making and co-operation even more difficult. This paper addresses these problems using machine learning techniques and agent technology. By gathering useful experience during earlier stages, an agent can significantly improve its performance. The method requires no prior knowledge of the environment. Since co-operation in adversarial domains is a very challenging task, the proposed learning algorithm assigns each agent a role to play in achieving a given goal. By distributing responsibilities among the agents and linking their goals, an efficient form of co-operation emerges.
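
    Paper 1 above feeds Kanerva features into a reinforcement learning component. A minimal C++ sketch of that combination follows; the learning parameters and the one-step Q-learning update with linear function approximation are standard choices assumed for the illustration, not necessarily those of K-RL. The binary feature vector would come from a Kanerva coder like the one sketched under the thesis abstract:

      // Minimal sketch of Q-learning with linear function approximation
      // over binary Kanerva features (parameters are illustrative).
      #include <cstdlib>
      #include <vector>

      struct KanervaQ {
          int n_actions;
          double alpha = 0.1, gamma = 0.9, epsilon = 0.05;
          std::vector<std::vector<double>> w;  // one weight vector per action

          KanervaQ(int n_features, int actions)
              : n_actions(actions),
                w(actions, std::vector<double>(n_features, 0.0)) {}

          // Q(s,a) = sum of the weights over the active features of s.
          double q(const std::vector<int>& f, int a) const {
              double sum = 0;
              for (std::size_t i = 0; i < f.size(); ++i)
                  if (f[i]) sum += w[a][i];
              return sum;
          }

          int greedy(const std::vector<int>& f) const {
              int best = 0;
              for (int a = 1; a < n_actions; ++a)
                  if (q(f, a) > q(f, best)) best = a;
              return best;
          }

          // Epsilon-greedy action selection from the feature vector.
          int select(const std::vector<int>& f) const {
              if (std::rand() / (double)RAND_MAX < epsilon)
                  return std::rand() % n_actions;
              return greedy(f);
          }

          // One-step Q-learning update after observing reward r and the
          // feature vector f2 of the next state.
          void update(const std::vector<int>& f, int a, double r,
                      const std::vector<int>& f2) {
              double delta = r + gamma * q(f2, greedy(f2)) - q(f, a);
              for (std::size_t i = 0; i < f.size(); ++i)
                  if (f[i]) w[a][i] += alpha * delta;
          }
      };

    Because only the weights of active features are read or updated, each decision and each learning step costs time proportional to the number of active prototypes rather than the size of the underlying state space.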

Refereed Workshop Papers
  1. Hunter M., Kostiadis K., and Hu H. Proceedings of the European RoboCup Workshop, Amsterdam, May 2000.
    Abstract
    Selecting an optimal position for each soccer robot to move to in a robotic football game is a challenging and complex task, since the environment and robot motion are highly dynamic and unpredictable. This paper provides an overview of the behaviour-based position selection schemes used by the Essex Wizards'99 simulated soccer team, the third-place finisher in the RoboCup'99 simulator league in Stockholm. The focus is on how each position selection behaviour is chosen for individual robot agents. The factors that need to be considered and the architecture used to implement position selection are also described. Finally, the team's performance at RoboCup'99 is examined and future extensions are proposed.
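
    As a rough illustration of behaviour-based position selection (the scoring factors and weights below are assumptions for the sketch, not the Essex Wizards' actual scheme), candidate positions can be scored against weighted criteria and the agent sent to the best-scoring one:

      // Illustrative position selection: score candidates against weighted
      // criteria and move to the best one.
      #include <cmath>
      #include <vector>

      struct Vec2 { double x, y; };

      static double dist(Vec2 a, Vec2 b) {
          return std::hypot(a.x - b.x, a.y - b.y);
      }

      // Higher is better: anchor to the formation role, stay relevant
      // to the ball, and spread out from teammates.
      static double score(Vec2 cand, Vec2 home, Vec2 ball,
                          const std::vector<Vec2>& teammates) {
          double s = -1.0 * dist(cand, home) - 0.5 * dist(cand, ball);
          for (const Vec2& t : teammates) s += 0.3 * dist(cand, t);
          return s;
      }

      // Assumes `candidates` is non-empty.
      Vec2 select_position(const std::vector<Vec2>& candidates, Vec2 home,
                           Vec2 ball, const std::vector<Vec2>& teammates) {
          Vec2 best = candidates.front();
          double best_s = score(best, home, ball, teammates);
          for (const Vec2& c : candidates) {
              double s = score(c, home, ball, teammates);
              if (s > best_s) { best_s = s; best = c; }
          }
          return best;
      }

    In a behaviour-based scheme of the kind the paper describes, each behaviour could supply its own score function, with the agent arbitrating between behaviours before committing to a move.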