Department of Computer Science
University of Hertfordshire
Hatfield AL10 9AB
President of the RoboCup Federation
Editor of Journal of Autonomous Agents and Multi-Agent Systems
Associate Editor of Advances in Complex Systems
Editor of Paladyn. Journal of Behavioral Robotics
Led by the University of Bremen, the project investigates a generic cognitive framework that allows robots to interact with humans in a "biologically plausible" way. Within it, UH uses principles from Information Theory for intelligent information processing to create a self-organizing, informational, anticipatory architecture. CORBYS is funded by the European Commission under the 7th Framework Programme, Grant Agreement No. 270219. (See also UH Press Release)
UH has participated in the RoboCup Soccer competition for over a decade: until 2004 in the 2D simulation league, since 2004 in the 3D humanoid simulation league, and since 2013 also in the KidSize league.
Are there general principles underlying intelligent information processing in living beings which we can exploit without having to resort to specialized solutions that vary from task to task?
Can understanding of mechanisms for "life" teach us something about how to achieve intelligence in artificial systems?
More seriously, information theory is one of the most universal concepts, with applications in computer science, mathematics, physics, biology, chemistry and other fields. It allows a lucid and transparent analysis of many systems and provides a framework for studying and comparing seemingly different systems using the same language and notions.
Recent research on learning models, e.g. neural networks, emphasizes the universal role of information. Can information theory open a universal approach to the development of Artificial Intelligence and the study of Artificial Life?
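The shared "language" alluded to above is, concretely, that of entropies and mutual informations, which are computed in exactly the same way whatever the underlying system. As a purely illustrative sketch (the joint distributions below are hypothetical, not tied to any system discussed here):

```python
import numpy as np

def mutual_information(joint):
    """I(X;Y) in bits, computed from a joint probability table p(x, y)."""
    joint = np.asarray(joint, dtype=float)
    px = joint.sum(axis=1, keepdims=True)   # marginal p(x)
    py = joint.sum(axis=0, keepdims=True)   # marginal p(y)
    with np.errstate(divide="ignore", invalid="ignore"):
        # 0 * log 0 is treated as 0, as usual in information theory
        terms = np.where(joint > 0, joint * np.log2(joint / (px * py)), 0.0)
    return float(terms.sum())

# Perfectly correlated binary variables share 1 bit ...
print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))   # -> 1.0
# ... while independent ones share none.
print(mutual_information([[0.25, 0.25], [0.25, 0.25]]))  # -> 0.0
```

The same function applies unchanged whether X and Y are sensor readings, genetic sequences or physical states, which is precisely the universality being claimed.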
Here is a question I asked myself (or, to be precise, Usenet) a couple of years ago about Information Theory and Stochastic Control.
There was no reply on Usenet; however, a few results relevant to that question have appeared (or we have become aware of them) in the meantime:
We will occasionally update this list when we come across a paper that seems particularly close to addressing the above question or one related to it.
Some further useful links to resources (people/papers) on Biology and Information can be found at the
To start reading about empowerment, the following papers provide a good entry point:
Klyubin, A. S., Polani, D., and Nehaniv, C. L., (2005). All Else Being Equal Be Empowered. Advances in Artificial Life, European Conference on Artificial Life (ECAL 2005), vol. 3630 of LNAI, 393-402. Springer.
Klyubin, A. S., Polani, D., and Nehaniv, C. L., (2008). Keep Your Options Open: An Information-Based Driving Principle for Sensorimotor Systems. PLoS ONE, 3(12):e4018. http://dx.doi.org/10.1371/journal.pone.0004018, Dec 2008.
Recently, a very closely related principle for intelligent behaviour generation has been suggested: Causal Entropic Forcing, which maximizes the entropic volume of future trajectories. Its authors, Wissner-Gross and Freer, aim to derive it from physical rather than biological considerations; for details see
Characteristic of the biological motivation of empowerment is that it considers everything with respect to the agent: the potential reach is determined by which action sequences the agent may choose, and the reachable states must be distinguishable by the agent's sensors to count as separate states.
Also, while in many contexts empowerment and Causal Entropic Forcing will produce similar drives, there is one difference: empowerment considers what part of the agent's action information actually reappears in the later (sensorically distinguishable) state. So, if the dynamics exhibit (uncontrolled) noise in parts of the state space, empowerment and Causal Entropic Forcing can be expected to give different results: empowerment will drive the agent (and its action "tentacles") away from the noise, as noise indicates a less controllable (and thus undesirable) region of the environment. Causal Entropic Forcing does not have an agent at the heart of its concept that sees its actions reflected in the state; we therefore expect it to be indifferent to environmental noise, or possibly even attracted to it, as noise may enhance the richness of the available trajectories (but this conjecture requires precise analysis, as it may depend on the nature of the dynamics and of the noise in the environment).
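The contrast can be made concrete: empowerment is the Shannon channel capacity of the channel from the agent's actions to its subsequent sensor states, so a fully noisy region contributes nothing to it. Below is a minimal illustrative sketch (not code from the papers cited above) that computes this capacity with the standard Blahut-Arimoto algorithm; the two toy channels `det` and `noisy` are hypothetical examples:

```python
import numpy as np

def empowerment(p_s_given_a, iters=200, tol=1e-10):
    """Channel capacity max_{p(a)} I(A; S') in bits, via Blahut-Arimoto.

    p_s_given_a: array of shape (n_actions, n_states); each row is the
    distribution over next sensor states given that action, summing to 1.
    """
    n_a, _ = p_s_given_a.shape
    p_a = np.full(n_a, 1.0 / n_a)  # start from a uniform action distribution
    for _ in range(iters):
        p_s = p_a @ p_s_given_a  # marginal over next states
        with np.errstate(divide="ignore", invalid="ignore"):
            log_ratio = np.where(p_s_given_a > 0,
                                 np.log2(p_s_given_a / p_s), 0.0)
        # per-action KL divergence D( p(s'|a) || p(s') )
        d = (p_s_given_a * log_ratio).sum(axis=1)
        new_p_a = p_a * np.exp2(d)
        new_p_a /= new_p_a.sum()
        if np.abs(new_p_a - p_a).max() < tol:
            p_a = new_p_a
            break
        p_a = new_p_a
    p_s = p_a @ p_s_given_a
    with np.errstate(divide="ignore", invalid="ignore"):
        log_ratio = np.where(p_s_given_a > 0, np.log2(p_s_given_a / p_s), 0.0)
    return float((p_a[:, None] * p_s_given_a * log_ratio).sum())

# Deterministic region: 4 actions reliably reach 4 distinguishable states.
det = np.eye(4)
# Noisy region: the outcome is uniform regardless of the chosen action.
noisy = np.full((4, 4), 0.25)
print(empowerment(det))    # -> 2.0 bits
print(empowerment(noisy))  # -> 0.0 bits
```

A trajectory-counting measure in the spirit of Causal Entropic Forcing would see equally many (indeed, more varied) futures in the noisy region, whereas empowerment assigns it zero value because none of the action information survives into the sensed state.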
How is this selection process attained in nature? How can we copy it in artificial systems? How can new channels of environmental information be tapped and exploited by a system that originally accessed a different set of information channels? Can we make use of this in artefacts?
Successful multiagent studies have been carried out e.g. in the framework of
Meeting schedule of the SEPIA unit
Martin Biehl: The Structure of Agents
Tom Anthony: Empowerment and its Dynamics
Andrés Burgos: Information-Theoretic Models for Biological Systems, PhD viva passed February 2017
Malte Harder: Information-Driven Self-Organization of Agents and Agent Collectives, PhD viva passed October 2013
Sander van Dijk: Informational Constraints and Organization of Behaviour, PhD viva passed October 2013
Christoph Salge: Informational Models of Social Interaction
Philippe Capdepuy: Emergence of Cooperation in Agent Collectives through a Potential Entropy Maximization Principle, PhD awarded December 2010
Dorothee Francois: Facilitating Children's Play with Robots: a Developmental Robotics Approach, PhD awarded February 2009
Tobias Jung: Fast Reinforcement Learning with Kernel Machines (remote supervision), PhD awarded February 2008
Alexander Klyubin: Organization of Information Flow through the Perception-Action Loop, PhD awarded May 2007
Robert Lowe: Multiagent Systems / Evolution of Expression, PhD awarded February 2007