Tutorials will take place on September 12, 2010.

Author(s) and tutorial title

Xin Yao
A Rigorous Theoretical Framework for Measuring Generalisation of Co-evolutionary Learning

Frank Neumann
Tobias Friedrich
Foundations of Evolutionary Multi-Objective Optimization

Günther Raidl
Hybrid Optimization Approaches

Tony Brabazon
Michael O’Neill
Natural Computing and Finance

Fatos Xhafa
Heuristic and Meta-heuristic Approaches for Scheduling in Large Scale Distributed Computing Environments

Tadeusz Burczyński
Michał Bereta
Wacław Kuś
Artificial Immune Systems in Optimization and Classification Problems with Engineering and Biomedical Applications

Piotr Faliszewski
The Complexity of Elections: New Domain for Heuristic Computations

Simon M. Lucas
Learning to Play Games

A Rigorous Theoretical Framework for Measuring Generalisation of Co-evolutionary Learning

Co-evolutionary learning offers an attractive learning paradigm in which how well a learner (individual) does depends on a dynamic environment that includes other learners (individuals) in the same or co-evolving populations. As is the case for other types of learning, generalisation is a key issue in co-evolutionary learning [1]. Although there have been experimental studies of the robustness of co-evolved learners [2], which is closely related to our notion of generalisation, no rigorous theoretical framework exists for measuring generalisation quantitatively in co-evolutionary learning. This talk [1] will introduce such a rigorous theoretical framework for measuring generalisation, based on statistical machine learning theories, which is applicable to any type of co-evolutionary learning. Different definitions of generalisation are discussed. Specific case studies, using iterated prisoner's dilemma games as examples, will be presented. Under this rigorous theoretical framework, we are able to estimate generalisation in co-evolutionary learning within certain bounds. Such estimation represents a major step forward in measuring generalisation for co-evolutionary learning in practice [3].

Selected References:
[1] S. Y. Chong, P. Tino and X. Yao, "Measuring Generalization Performance in Co-evolutionary Learning", IEEE Transactions on Evolutionary Computation, 12(4):479-505, August 2008.
[2] P. Darwen and X. Yao, "On evolving robust strategies for iterated prisoner's dilemma", In Progress in Evolutionary Computation, Lecture Notes in Artificial Intelligence, Vol. 956, Springer-Verlag, Heidelberg, Germany, pp.276-292, 1995.
[3] S. Y. Chong, P. Tino and X. Yao, "Relationship between generalization and diversity in coevolutionary learning", IEEE Transactions on Computational Intelligence and AI in Games, 1(3):214-232, September 2009.
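The flavour of such an estimate can be conveyed with a small sketch (the function names and the toy game below are our own illustration, not code from [1]): generalisation is approximated by the mean score against a random sample of test opponents, and a Hoeffding bound supplies the kind of quantitative guarantee the framework provides.

```python
import math
import random

def estimate_generalisation(strategy, play, sample_opponent, n=1000, delta=0.05):
    """Monte Carlo estimate of generalisation performance: the mean score of
    `strategy` against opponents drawn from a chosen test distribution.
    `play(s, o)` must return a score in [0, 1]; `sample_opponent()` draws one
    random test opponent. A Hoeffding bound gives a (1 - delta)-confidence
    radius around the estimate."""
    scores = [play(strategy, sample_opponent()) for _ in range(n)]
    mean = sum(scores) / n
    radius = math.sqrt(math.log(2.0 / delta) / (2 * n))  # Hoeffding, [0, 1] scores
    return mean, radius

# Toy usage (invented game): strategies are cooperation probabilities
# in a one-shot game where mutual cooperation pays best overall.
rng = random.Random(0)

def play(p, q):
    a, b = rng.random() < p, rng.random() < q          # True = cooperate
    return {(True, True): 1.0, (True, False): 0.0,
            (False, True): 0.75, (False, False): 0.25}[(a, b)]

mean, radius = estimate_generalisation(0.8, play, rng.random)
```

With n = 1000 samples and delta = 0.05, the confidence radius is about 0.043, i.e. the true generalisation performance lies within that distance of the estimate with probability at least 95%.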

About the author

Xin Yao is a Professor (Chair) of Computer Science at the University of Birmingham, UK.

He is the Director of CERCIA (the Centre of Excellence for Research in Computational Intelligence and Applications, http://www.cercia.ac.uk) at the University of Birmingham, which specialises in applied research and knowledge transfer. He is an IEEE Fellow and a Distinguished Lecturer of the IEEE Computational Intelligence Society. He won the 2001 IEEE Donald G. Fink Prize Paper Award and several other best paper awards. He was a Cheung Kong Scholar (Changjiang Chair Professor) of the Ministry of Education of China, a Distinguished Visiting Professor (Grand Master Chair Professorship) of USTC in Hefei, and a Distinguished Visiting Professor of Yuan Ze University, Taiwan. In his spare time, he served as Editor-in-Chief (2003-08) of the IEEE Transactions on Evolutionary Computation; he is an associate editor or editorial board member of 12 international journals and the editor of the World Scientific book series on "Advances in Natural Computation". He has been invited to give 60 keynote/plenary speeches at international conferences. His major research interests include evolutionary computation and neural network ensembles. He has more than 300 refereed publications in journals and conferences. His work has been funded by AWM, EPSRC, the EU, the Royal Society, the Chinese Academy of Sciences, NSFC, Honda, Marconi, BT, Thales and Severn Trent Water.

More information can be found at http://www.cs.bham.ac.uk/~xin/

Foundations of Evolutionary Multi-Objective Optimization

Evolutionary algorithms have been shown to be especially successful when dealing with multi-objective problems. The area of evolutionary multi-objective optimization has grown rapidly in recent years. In this tutorial, we give an overview of the results that have been achieved on the theoretical aspects of evolutionary multi-objective optimization.

There has been a lot of progress on the foundational side of evolutionary multi-objective optimization. The first part of the tutorial will cover convergence and runtime analysis of evolutionary multi-objective optimization. We will also discuss how multi-objective models of single-objective optimization problems can provably lead to faster evolutionary algorithms for different combinatorial optimization problems.

How to achieve a good spread over the Pareto front is one of the big questions in evolutionary multi-objective optimization. Therefore the second part is dedicated to several diversity mechanisms, particularly the hypervolume indicator. This indicator plays an important role and has found many applications. We present recent results on complexity and approximability of the hypervolume and discuss what they imply for applications.
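To make the indicator concrete, the following sketch (our own illustration, not tutorial code) computes the exact hypervolume of a two-objective front by a sorting sweep; it is precisely this kind of computation whose cost in higher dimensions the complexity results address.

```python
def hypervolume_2d(front, ref):
    """Hypervolume dominated by a two-objective Pareto front, assuming both
    objectives are maximised and every point strictly dominates the reference
    point `ref`. Sorting by the first objective reduces the computation to a
    sum of rectangular slices; this O(n log n) sweep only works in two
    dimensions."""
    pts = sorted(front, key=lambda p: p[0], reverse=True)  # descending f1
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:       # ascending f2 along a mutually non-dominated front
        hv += (f1 - ref[0]) * (f2 - prev_f2)
        prev_f2 = f2
    return hv

# Three mutually non-dominated points with reference point (0, 0):
print(hypervolume_2d([(3, 1), (2, 2), (1, 3)], (0, 0)))  # 6.0
```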

Expected enrollment
The tutorial is for people interested in evolutionary multi-objective optimization as well as for people interested in theoretical aspects of evolutionary computation in general. We expect approximately 50 participants.
About the authors
Frank Neumann

Frank Neumann studied Technical Computer Science in Dortmund and Kiel, Germany. He received his diploma and Ph.D. from the Christian-Albrechts-University of Kiel in 2002 and 2006, respectively. Currently, he is the coordinator of the research area “Bio-inspired Computation” within the Algorithms and Complexity Group at the Max Planck Institute for Informatics in Saarbruecken, Germany. In his work, he considers theoretical aspects of evolutionary computation methods, in particular for combinatorial and multi-objective optimization problems, and has received six best paper awards at leading international conferences such as GECCO and PPSN. Frank co-chaired the track “Formal Theory” at GECCO 2007 and chaired the track “Evolutionary Combinatorial Optimization” at GECCO 2008. He has given tutorials on “Computational Complexity and Evolutionary Computation” (together with Thomas Jansen) at GECCO (2007-2010) and a tutorial on “Computational Complexity of Evolutionary Computation in Combinatorial Optimization” at PPSN 2008 (together with Carsten Witt).

More information can be found at http://www.mpi-inf.mpg.de/~fne

Tobias Friedrich

Tobias Friedrich received his MSc in computer science from the University of Sheffield, UK, in 2002 and his diploma in mathematics from the University of Jena, Germany, in 2005. In 2007 he obtained a PhD in theoretical computer science from Saarland University, Germany. Currently, he is a member of the Algorithms Group at the International Computer Science Institute in Berkeley, USA. The central topic of his work is randomized algorithms (both classical and evolutionary) and randomized methods in mathematics and computer science in general. In the last two years, Tobias has won two best paper awards, in the tracks “Genetic Algorithms” (GECCO 2008) and “Evolutionary Multiobjective Optimization” (GECCO 2009).

More information can be found at http://www.mpi-inf.mpg.de/~tfried

Hybrid Optimization Approaches

It is well known that for many practical optimization problems, especially from the domain of combinatorial optimization, "pure" evolutionary algorithms (EAs) are not as effective as more sophisticated hybrid approaches combining concepts of evolutionary computation with problem-specific procedures, other metaheuristics, exact optimization techniques, or methods from the fields of artificial intelligence or operations research. The general idea is to "exploit the best of different worlds" and to benefit from the synergy of such combinations. Numerous publications in recent years and dedicated scientific workshops document the significance of this trend and of this specific research area.

This tutorial starts with an overview of the most commonly used hybridization techniques, such as decoder-based EAs, embedding local search procedures in EAs, and exploiting concepts of simulated annealing, tabu search, path relinking, very large neighborhood search, and other metaheuristics. A systematic classification of hybrid approaches will then be discussed, leading to further approaches, especially collaborative combinations in which different search strategies cooperate in a looser form.

In the remainder, we will focus in particular on combinations of EAs with branch-and-bound and (integer) linear programming methods. Discussed topics include solution merging, strategic guidance of evolutionary algorithms by means of linear programming and Lagrangian relaxations, and exploiting EAs and other metaheuristics within exact branch-and-bound frameworks. Less common but highly promising ideas, such as applying metaheuristics for cut and column generation, are also addressed.
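As a minimal, purely illustrative example of the simplest hybrid mentioned above, embedding local search in an EA, the following sketch (our own; all parameter choices are invented) improves every offspring of a tiny permutation GA with 2-opt on a TSP instance.

```python
import math
import random

def memetic_tsp(cities, pop_size=8, gens=40, seed=0):
    """Tiny memetic-algorithm sketch for the TSP: a permutation GA whose
    offspring are improved by 2-opt local search before joining the
    population. Purely illustrative parameter choices."""
    rng = random.Random(seed)
    n = len(cities)
    dist = lambda a, b: math.dist(cities[a], cities[b])
    tour_len = lambda t: sum(dist(t[i], t[(i + 1) % n]) for i in range(n))

    def two_opt(t):
        # first-improvement 2-opt: reverse a segment while any swap shortens
        improved = True
        while improved:
            improved = False
            for i in range(n - 1):
                for j in range(i + 2, n):
                    a, b = t[i], t[i + 1]
                    c, d = t[j], t[(j + 1) % n]
                    if dist(a, c) + dist(b, d) < dist(a, b) + dist(c, d) - 1e-12:
                        t[i + 1:j + 1] = reversed(t[i + 1:j + 1])
                        improved = True
        return t

    pop = [two_opt(rng.sample(range(n), n)) for _ in range(pop_size)]
    for _ in range(gens):
        p1, p2 = rng.sample(pop, 2)
        cut = rng.randrange(1, n)            # simple one-point order crossover
        child = p1[:cut] + [c for c in p2 if c not in p1[:cut]]
        child = two_opt(child)               # the "memetic" step
        pop.sort(key=tour_len)
        if tour_len(child) < tour_len(pop[-1]):
            pop[-1] = child                  # replace the worst individual
    return min(pop, key=tour_len)

# Example: eight cities on a circle; the optimum is the circular order.
cities = [(math.cos(2 * math.pi * k / 8), math.sin(2 * math.pi * k / 8))
          for k in range(8)]
tour = memetic_tsp(cities)
```

On this convex instance 2-opt alone already finds the optimal (circular) tour; on harder instances the evolutionary layer supplies the global search that local search lacks.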

About the author
Prof. Raidl

Prof. Günther Raidl heads the Algorithms and Data Structures Group of the Institute of Computer Graphics and Algorithms at the Vienna University of Technology, where he was appointed professor of combinatorial optimization in September 2005. Before that, he was a research assistant, lecturer, and assistant professor at the same university. He received his master's degree in Computer Science in 1992 and his PhD in 1994, and completed his habilitation in Practical Computer Science in 2003, all at the Vienna University of Technology.

His research interests include algorithms and data structures in general, combinatorial and continuous optimization, mathematical programming techniques, metaheuristics, hybrid optimization approaches, evolutionary computation, network design, cutting and packing, transport optimization, warehouse logistics, scheduling, and bioinformatics.

Günther Raidl is associate editor for the International Journal of Metaheuristics and the Evolutionary Computation Journal. Furthermore he is co-founder and steering committee member of the European Conference on Evolutionary Computation in Combinatorial Optimization (EvoCOP) and has been editor-in-chief of the 2009 Genetic and Evolutionary Computation Conference (GECCO). He has (co-)edited 12 books and authored over 100 reviewed articles in journals, books, and conference proceedings.

More information can be found at http://www.ads.tuwien.ac.at/raidl/

Natural Computing and Finance

Recent years have seen the application of multiple Natural Computing (NC) algorithms for the purposes of financial modelling. Particular features of financial markets, including their dynamic and interconnected characteristics, bear parallels with processes in the natural world and, prima facie, this makes NC methods 'interesting' for financial modelling applications. Another feature of both natural and financial environments is the phenomenon of emergence, or the activities of multiple individual agents combining to co-evolve their own environment. The scale of NC applications in finance is illustrated by Chen & Kuo, who list nearly 400 papers that had been published by 2001 on the use of evolutionary computation alone in computational economics and finance. Since then, several hundred additional papers have been published, underscoring the continued growth of this application area.

Some of the major areas of financial application of NC methods are forecasting, algorithmic trading, portfolio management, risk management, derivatives modelling and market modelling. In this tutorial, we describe the utility of NC methods for a range of topics in finance and highlight interesting open areas of research.

About the authors
Prof. Anthony Brabazon

Professor Anthony Brabazon is currently Head of Research in the School of Business at University College Dublin. His primary research interests concern the development of natural computing theory and the application of natural computing algorithms to real-world problems, particularly in the domain of finance. He is co-founder and co-director (with Dr. Michael O'Neill, School of Computer Science and Informatics, UCD) of the Natural Computing Research and Applications Group at UCD (see http://ncra.ucd.ie). The NCRA engages in both basic and applied research in natural computing methodologies, and is the only dedicated natural computing research facility in Ireland. Prof. Brabazon is also Director of FMC2, a large research group in financial mathematics and computation (www.fmc-cluster.org), funded by Science Foundation Ireland and industrial collaborators.

Anthony has an established publication record in the field of evolutionary computation / natural computing with in excess of 120 peer-reviewed publications. He has authored / edited several books including 'Biologically Inspired Algorithms for Financial Modelling'  and 'Natural Computing in Computational Finance' (Volumes I, II and III), all published by Springer.  As well as sitting on the editorial boards of several journals, Anthony is a regular reviewer for a wide range of journals in natural computing, operations research and finance. He is a regular member of programme committees at major conferences in the field of natural computing, including GECCO (Genetic and Evolutionary Computation Conference), IEEE Congress on Evolutionary Computation, and EuroGP (European Conference on Genetic Programming).  He is  founder and co-Chair of EvoFin, the only dedicated European research workshop on the application of evolutionary algorithms to computational finance and economics and is also a member of the IEEE Computational Finance and Economics Technical Committee.

More information can be found at http://www.ucd.ie/research/people/business/dranthonybrabazon/

Dr. Michael O'Neill

Dr. O'Neill is a founder, with Prof. Anthony Brabazon, of the UCD Natural Computing Research and Applications Group, and is a Senior Lecturer in the UCD School of Computer Science & Informatics. He is the lead author of the seminal book on Grammatical Evolution and is independently ranked as one of the top 10 researchers in Genetic Programming.

Dr. O'Neill has published in excess of 170 peer-reviewed publications, including 3 monographs. In 2009 he published the second book on Grammatical Evolution, "Foundations in Grammatical Evolution for Dynamic Environments", with Ian Dempsey and Anthony Brabazon, and in 2006 he published the book "Biologically Inspired Algorithms for Financial Modelling" with Prof. Brabazon. He has over 1000 citations and an H-index of 17. Dr. O'Neill has co-authored a number of successful funding applications with a total value of over €7 million.

Michael is currently a Chair of the Genetic Programming Track of the ACM Genetic and Evolutionary Computation Conference 2010, and Co-Chair and Founder of the European Workshop on Evolutionary Computation in Finance & Economics (EvoFIN 2009).  Dr. O'Neill is on the Editorial Boards of the Journal of Artificial Evolution and Applications, and the International Journal of Innovative Computing and Applications. He has acted as External Examiner for a number of PhDs in the field of Evolutionary Computation.

More information can be found at http://www.ucd.ie/research/people/computerscienceinformatics/drmichaeloneill

Heuristic and Meta-heuristic Approaches for Scheduling in Large Scale Distributed Computing Environments

Scheduling is a key issue in distributed computing and has been extensively investigated in the past for traditional distributed computing environments, including clusters and supercomputers. With the fast development of the Internet and emerging new distributed computing paradigms such as Grid computing, P2P computing, and Cloud computing, scheduling has again attracted the attention of many researchers and developers from academia and industry. For such new computing paradigms, the design of efficient schedulers is a must in order to leverage the enormous computing capacity offered by large scale distributed systems such as Grid and P2P systems.

Scheduling problems, in their most general formulations, are known to be NP-hard. The versions of scheduling for large scale distributed environments inherit many of the characteristics of scheduling in traditional distributed environments. However, the problem becomes even more challenging due to the new characteristics and requirements of large scale distributed environments, among which we can distinguish the dynamics of the system, the high degree of heterogeneity of resources and tasks/applications, the high heterogeneity of interconnection networks, scalability and efficiency requirements, multi-objectivity, the presence of local schedulers as well as local policies on resources that span different administrative domains, security issues, etc. In view of the complexity of the problem, heuristic and meta-heuristic methods are the de facto approach for dealing with it in practice.

In this tutorial we will review the state of the art in heuristic/meta-heuristic methods for scheduling problems in large scale distributed environments and will focus on the following:

  • Introduce large scale distributed environments with an emphasis on Grid systems.
  • Introduce scheduling problems, their requirements and formal definitions as multi-objective optimization problems.
  • Present the application of ad hoc methods to scheduling problems.
  • Present the application of local search based methods to scheduling problems with special emphasis to Tabu Search.
  • Present the application of population based methods (Genetic Algorithms and Memetic Algorithms — both unstructured and structured) to scheduling problems.
  • Study the performance of heuristic and meta-heuristic based schedulers using a Grid simulator.
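To make the multi-objective formulation concrete, the following sketch (our own illustration; the ETC model is standard in this literature, but the function and its names are ours) evaluates two classic objectives, makespan and flowtime, for a fixed assignment of independent tasks to heterogeneous machines.

```python
def makespan_and_flowtime(etc, assignment, m):
    """Two common scheduling objectives under the ETC (Expected Time to
    Compute) model: etc[t][j] is the estimated run time of task t on
    machine j, and assignment[t] is the machine chosen for task t.
    Tasks run on each machine in the order given. Returns
    (makespan, flowtime); both objectives are to be minimised."""
    completion = [0.0] * m       # current finishing time of each machine
    flowtime = 0.0
    for t, j in enumerate(assignment):
        completion[j] += etc[t][j]
        flowtime += completion[j]    # finishing time of task t
    return max(completion), flowtime

# Three tasks, two machines: tasks 0 and 2 on machine 0, task 1 on machine 1.
etc = [[1.0, 2.0], [2.0, 1.0], [3.0, 3.0]]
makespan, flowtime = makespan_and_flowtime(etc, [0, 1, 0], m=2)
print(makespan, flowtime)  # 4.0 6.0
```

A heuristic or meta-heuristic scheduler then searches the space of assignments for good trade-offs between such objectives.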
About the author
Fatos Xhafa

Fatos Xhafa received his PhD in Computer Science from the Technical University of Catalonia (UPC, Barcelona, Spain) in 1998. He is currently Associate Professor at the Department of Languages and Informatic Systems (LSI) of the UPC and a member of the ALBCOM Research Group of the LSI Department. His current research interests include parallel algorithms, combinatorial optimization, approximation and meta-heuristics, distributed programming, and Grid and P2P computing. His research is supported by research projects from Spain, the European Union and NSF/USA. He has published in leading international journals and conferences and has served on the Organizing Committees of several conferences and workshops. Recently, he served as General Co-Chair of INCOS-2009, PC Co-Chair of AINA-2010 and General Co-Chair of CISIS-2010. He is also a member of the editorial boards of several international journals.

More information can be found at http://www.lsi.upc.es/~fatos/

Artificial Immune Systems in Optimization and Classification Problems with Engineering and Biomedical Applications

Nature inspired computing has proved to be useful in various application areas. Evolutionary methods, neural networks, swarm intelligence and many other approaches have been applied to technical and engineering problems such as optimization, learning, data analysis, and knowledge engineering. Some of these methods perform better in certain application areas, and others work better elsewhere. Indeed, there is hardly a problem domain in which nature inspired techniques are not employed, at least as part of a proposed solution. After decades of development, biologically inspired methods are well established and appreciated tools. At the same time, many novel approaches arise to solve problems in innovative and, hopefully, more effective ways. One such novel area is the field of Artificial Immune Systems (AIS). The question arises as to what AIS can offer as problem solving techniques and whether the paradigms proposed by AIS researchers are in fact novel. The aim of the tutorial is to provide a set of carefully selected problems connected with the current research directions of Immune Computing.

The immune system, especially that of vertebrates, is a very complicated system of interacting cells, organs and mechanisms, whose purpose is to protect the host body against any danger, whether external or internal. To achieve that goal the immune system has to decide not only what is not part of the host but also what can cause damage. This is a very difficult task, as not everything that comes from outside is dangerous. On the other hand, autoimmune diseases are examples of internal threats. Protecting the host against all such dangers is not easy, especially in a changing environment. The immune system clearly has to develop some sense of self, i.e., a sense of what is part of the host. How the immune system achieves this is hard to explain, as the host body changes its functioning and structure over time. Nonetheless, the immune system is able to perform its task effectively. To deal with such a difficult task the immune system needs the ability to learn new threats, to remember previous experiences and to develop specialized responses to different pathogens. Taking a closer look at all these features, one can state that the immune system can be considered a cognitive system. For that reason the immune system has attracted the interest of the computational sciences.

Given all the complexity of the functioning of the immune system, it is necessary to extract higher level paradigms which can serve as the basis for constructing computational models and algorithmic solutions. The most important paradigms in the field of Artificial Immune Systems are Clonal Selection (CS), Immune Network Theory (INT), Negative Selection (NS) and the recently emerged Danger Theory (DT).

The question is whether AIS can offer anything really new and/or useful. Clonal Selection can seem to be just another exemplification of the evolutionary approach to problem solving. The very existence of immune networks in biological immune systems is questioned by biologists. Negative Selection appears to be a truly novel approach, but one could ask whether that alone justifies investing time and resources in developing AIS. In the first examinations of immune ideas, researchers developed several algorithmic solutions based on immune paradigms, often separately. Clonal Selection and Immune Network Theory have been applied to optimization, data analysis and clustering; Negative Selection has been applied to computer security and to anomaly or fault detection. Many of the proposed algorithms have successfully dealt with the tasks appointed to them. However, their usefulness, in terms of robustness and scalability, has been disputed when they are compared to other well established computational methods.

The broad application areas and the new ideas proposed show the vitality of the still young research field of Immune Computing within intelligent problem solving techniques. Engineering applications of AIS make it possible to optimize physical systems, e.g. to obtain better properties of such systems, and to identify material and structural properties nondestructively (e.g. the properties of human bone). The classification of biomedical signals is another important area of application of AIS: AIS-based classifiers can be helpful in medical diagnosis.

During the tutorial we are going to present the following AIS algorithms and applications:

  • AIS for optimization and identification problems,
  • AIS for classification problems,
  • applications of AIS in engineering problems,
  • applications of AIS in biomedical problems.
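Of the paradigms above, Clonal Selection is the one most directly applicable to optimization. As a purely illustrative sketch (our own, with invented parameter choices, not an algorithm taken from the tutorial), a CLONALG-style minimiser looks like this: better-ranked antibodies receive more clones and smaller hypermutations, and an elitist selection keeps the best antibodies.

```python
import random

def clonal_selection(fitness, dim, pop_size=20, n_clones=5,
                     gens=100, lo=-5.0, hi=5.0, seed=0):
    """Minimal clonal-selection sketch for real-valued minimisation.
    Better-ranked antibodies get more clones and smaller mutations; a
    decaying mutation scale is added here for stability. All parameter
    choices are illustrative."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for g in range(gens):
        pop.sort(key=fitness)                        # best (lowest) first
        decay = 0.97 ** g
        offspring = []
        for rank, ab in enumerate(pop):
            clones = max(1, n_clones - rank // 4)    # more clones for better ranks
            sigma = 0.5 * (rank + 1) / pop_size * (hi - lo) * decay
            for _ in range(clones):
                offspring.append([min(hi, max(lo, x + rng.gauss(0.0, sigma)))
                                  for x in ab])
        # elitist replacement: keep the best pop_size of parents + clones
        pop = sorted(pop + offspring, key=fitness)[:pop_size]
    return pop[0]

# Example: minimise the sphere function in three dimensions.
best = clonal_selection(lambda x: sum(v * v for v in x), dim=3)
```

The rank-dependent clone count and mutation scale are what distinguish this scheme from a plain evolution strategy, although, as the tutorial asks, how novel that distinction really is remains debatable.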
About the authors
Tadeusz Burczyński

Tadeusz Burczyński is a Professor at Cracow University of Technology, where he is Director of the Institute of Computer Science, and a Professor at the Silesian University of Technology, where he is Head of the Department for Strength of Materials and Computational Mechanics. He is a Corresponding Member of the Polish Academy of Sciences. He has made significant contributions to the development and application of computational intelligent systems based on biologically-inspired techniques in the computational sciences, and to computational methodology for multi-scale modelling, optimization and identification problems. He is the author or co-author of more than 500 published works, including 14 books, and guest editor of several special issues of international journals.

More information can be found at http://dydaktyka.polsl.pl/KWMIMKM/TBurczynski.aspx

The Complexity of Elections: New Domain for Heuristic Computations

Computational social choice is a new research field spanning computer science, artificial intelligence, economics, and operations research; its main goal is to provide a computational study of various collective decision-making mechanisms. During the last decade, one of the most prominent topics studied within computational social choice has been the computational complexity of elections. Researchers have been interested both in the hardness of determining winners and in the complexity of manipulating the results of elections. Unfortunately, almost all results regarding the complexity of elections are theoretical. It is, thus, crucial that theoretical findings be verified in practice, and one of the purposes of this talk is to encourage more applied researchers to enter the field.

The talk will start with a very brief presentation of the most commonly studied election model and of several interesting voting systems. In particular, we will be looking at plurality, scoring protocols, and the voting system of Dodgson (the author of Alice’s Adventures in Wonderland, better known by his pen name Lewis Carroll). Dodgson’s voting will be our showcase of an election system which is, in some sense, very desirable but for which even computing the winner is remarkably difficult.
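For contrast with Dodgson's rule, winner determination for plurality and other scoring protocols is computationally trivial. The following sketch (our own illustration, not from the talk) computes a scoring-protocol winner in time linear in the total number of ranked candidates.

```python
from collections import Counter

def scoring_winner(votes, weights):
    """Winner under a positional scoring protocol: each vote ranks all m
    candidates (best first) and the candidate in position i earns weights[i]
    points. weights = [1, 0, ..., 0] gives plurality; [m-1, m-2, ..., 0]
    gives the Borda count. Ties are broken arbitrarily."""
    scores = Counter()
    for ranking in votes:
        for position, candidate in enumerate(ranking):
            scores[candidate] += weights[position]
    return max(scores, key=scores.get)

votes = [["a", "b", "c"], ["a", "c", "b"], ["b", "c", "a"]]
print(scoring_winner(votes, [1, 0, 0]))  # plurality winner: a
print(scoring_winner(votes, [2, 1, 0]))  # Borda winner: a
```

Different weight vectors can elect different candidates from the same votes, which is part of what makes the choice of voting system, and its complexity, interesting.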

Further on, we will consider several other voting problems, such as the complexity of determining if a given candidate still has a chance of winning given partial information regarding the votes, and the issues of manipulating elections. For each of these problems we will discuss the strengths and limitations of currently known results and provide some intuition as to what further results would be desirable.

About the author
Dr Faliszewski

Dr Piotr Faliszewski is an assistant professor (adiunkt) at the Department of Computer Science of the AGH University of Science and Technology, which he joined in the spring of 2009. Before that, he was a graduate student at the University of Rochester and worked as a lecturer at the Rochester Institute of Technology. He defended his PhD thesis, on the computational complexity of manipulating elections, in 2008 at the University of Rochester.

His interests include widely understood algorithms and computational complexity theory, in particular as applied to computational social choice, a new research field spanning computer science, artificial intelligence, economics, and operations research. The goal of computational social choice is to provide a computational perspective on collective decision making (whether by people or software agents). Dr Faliszewski is also interested in algorithmic game theory, focusing on cooperative games and their solution concepts.

Piotr Faliszewski is on the program committee of the 9th International Conference on Autonomous Agents and Multiagent Systems (AAMAS-2010) and on the program committee of the 3rd International Workshop on Computational Social Choice (COMSOC-2010). He has coauthored over 20 peer-reviewed articles in journals, conference proceedings, and books.

More information can be found at http://home.agh.edu.pl/~faliszew/

Learning to Play Games

This tutorial provides a practical introduction to game learning with function approximation architectures. The tutorial will cover the two main approaches to learning in games: evolution (including co-evolution), and temporal difference learning (TDL).

It will be shown that the choice of function approximation architecture has a critical impact on what is learned. In addition to standard MLPs, attention is also given to N-Tuple systems and interpolated table-based approximators, as these have recently shown great potential to learn quickly and effectively. Examples will be used to illustrate how the function approximator must be appropriately matched with the learning algorithm in order to get best results.

Performance of these methods will be shown on a range of benchmark reinforcement learning problems and games. Each method will be demonstrated with reference to some simple fragments of software, illustrating how the learning algorithm is connected with the game and with the function approximation architecture.
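As a hint of what such software fragments look like, here is a hedged sketch (our own illustration; all names are invented, and this is not code from the tutorial) of TD(0) value learning with a linear function approximator, which with one-hot features reduces to the table-based case.

```python
import random

def td0_learn(start_states, step, features, n_features,
              episodes=200, alpha=0.1, gamma=1.0, seed=0):
    """TD(0) value learning with a linear function approximator, the kind
    of setup the tutorial contrasts and combines with (co-)evolution.
    `step(s)` returns (next_state, reward, done); `features(s)` returns a
    feature vector of length n_features."""
    rng = random.Random(seed)
    w = [0.0] * n_features
    value = lambda s: sum(wi * xi for wi, xi in zip(w, features(s)))
    for _ in range(episodes):
        s, done = rng.choice(start_states), False
        while not done:
            s2, reward, done = step(s)
            target = reward if done else reward + gamma * value(s2)
            delta = target - value(s)                        # TD error
            w = [wi + alpha * delta * xi
                 for wi, xi in zip(w, features(s))]          # gradient step
            s = s2
    return w

# Toy 3-state corridor: 0 -> 1 -> 2 -> terminal, reward 1 at the end,
# so the true value of every state is 1 under gamma = 1.
step = lambda s: (s + 1, 1.0 if s == 2 else 0.0, s == 2)
one_hot = lambda s: [1.0 if i == s else 0.0 for i in range(3)]
w = td0_learn([0], step, one_hot, 3)
```

Swapping `one_hot` for overlapping or interpolated features changes what can be represented, which is the "matching the approximator to the learner" issue the tutorial emphasises.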

About the author
Prof. Simon M. Lucas

Prof. Simon M. Lucas (SMIEEE) is a professor of computer science at the University of Essex (UK), where he leads the Game Intelligence Group. His main research interests are games, evolutionary computation, and machine learning, and he has published widely in these fields, with over 130 peer-reviewed papers, mostly in leading international conferences and journals. He was chair of IAPR Technical Committee 5 on Benchmarking and Software (2002-2006) and is the inventor of the scanning n-tuple classifier, a fast and accurate OCR method. He was appointed inaugural chair of the IEEE CIS Games Technical Committee in July 2006, has been competitions chair for many international conferences, and co-chaired the first IEEE Symposium on Computational Intelligence and Games in 2005. He was program chair for IEEE CEC 2006 and program co-chair for IEEE CIG 2007 and PPSN 2008. He is an associate editor of the IEEE Transactions on Evolutionary Computation and the Journal of Memetic Computing. He has given invited keynote talks and tutorials at many conferences, including IEEE CEC, IEEE CIG, and PPSN. Professor Lucas was recently appointed founding Editor-in-Chief of the IEEE Transactions on Computational Intelligence and AI in Games.

More information can be found at http://dces.essex.ac.uk/staff/lucas/