Department of Computer Science
School of Computer Science
University of Hertfordshire
Hatfield AL10 9AB
President of the RoboCup Federation
Editor of Journal of Autonomous Agents and Multi-Agent Systems
Associate Editor of Advances in Complex Systems
Editor of Paladyn. Journal of Behavioral Robotics
The origins of intelligence constitute some of the most intriguing open issues. Even assuming the existence of life as we know it, where does intelligence come from? Why does intelligence consistently emerge in disparate lines of descent (mammals, birds, and even octopuses)?
As opposed to human engineering, evolution does not have the luxury of "deliberating" on architectures and designs, so intelligence must emerge through spontaneous variation and selection. However, its reappearance in different guises and under different conditions, while retaining very characteristic hallmarks across its occurrences (octopuses!), makes it unlikely that evolution reinvents the wheel with every occurrence of intelligent behaviour.
Therefore, rather than assuming that evolution reinvents the wheel, we assume that there are principles that consistently favour the re-emergence of intelligent cognition, namely implicit advantages of "appropriate" information processing; we interpret appropriateness in terms such as informational parsimony or informationally well-matched perception-action loops.
Using information theory to model these requirements, one can derive variational principles for the trade-off between information and utility in agents, for the "fit" of an agent into its informational niche as an intrinsic motivation (empowerment), and for many more aspects.
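As a concrete illustration of such a variational principle, the following minimal sketch (function names are hypothetical; discrete states and actions are assumed) computes a policy that trades expected utility against the decision information I(S;A) the agent has to process, i.e. it maximizes E[U(s,a)] - (1/beta) I(S;A) via the standard self-consistent, rate-distortion-style iteration:

```python
import numpy as np

def info_utility_policy(p_s, U, beta, n_iter=200):
    """Sketch of an information-parsimony trade-off:
    maximize E[U(s,a)] - (1/beta) * I(S;A) over policies p(a|s).
    Self-consistent iteration:
        p(a|s) ∝ p(a) * exp(beta * U(s,a)),   p(a) = sum_s p(s) p(a|s).
    p_s: state distribution, shape (n_s,); U: utilities, shape (n_s, n_a).
    Returns the policy p(a|s) as an array of shape (n_s, n_a).
    """
    n_s, n_a = U.shape
    p_a = np.full(n_a, 1.0 / n_a)            # initial action marginal
    for _ in range(n_iter):
        w = p_a * np.exp(beta * U)           # soften greedy choice by the prior
        policy = w / w.sum(axis=1, keepdims=True)
        p_a = p_s @ policy                   # update the action marginal
    return policy

def mutual_information(p_s, policy):
    """I(S;A) in bits for a given state distribution and policy."""
    p_a = p_s @ policy
    ratio = np.where(policy > 0, policy / p_a, 1.0)
    return float((p_s[:, None] * policy * np.log2(ratio)).sum())
```

For small beta the resulting policy is nearly uniform and informationally parsimonious (I(S;A) close to 0); for large beta it approaches the deterministic utility-maximizing policy, paying the full informational cost.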
One advantage is that these principles are fully general: if they indeed turn out to be evolutionarily relevant as hypothesised, they would apply across the scales, to the very simplest organisms as well as to the most complex ones. This permits finding generalities in the principles of cognitive operation detached from the particular details of how they are implemented in the cognitive "hardware" of an organism. In addition, such principles would also apply to artificial agents.
Thus, they can inform the construction of AI systems which are not merely hand-tuned to particular behaviours foreseen and planned by human engineers, but which take the informational structure of the world we live in into account as a natural (and biologically plausible) way of generating appropriate cognitive responses to it.
RoboCup is a large worldwide robotics competition series. The complexity of its challenges, which include football (soccer), rescue, work, logistics, home robotics and more, is a powerful driver towards a better understanding of what is required to make robots more flexible, more capable and able to operate in real-world environments without careful calibration by a human engineer. Many of the research questions above find a directly corresponding challenge in the RoboCup setup.
In addition to fostering science, RoboCup is a prime environment for educating students at all levels, not only with respect to technical skills, but also organization, teamwork, management, and professional, focused work under high pressure. Not least, the successful activities of the RoboCup project include highly effective channels for technology transfer, spin-offs and industrially relevant challenges, as well as RoboCup Junior: the latter aims to demonstrate the fascination of STEM topics to high-school students and to attract them to embrace the challenges invariably created by technological and scientific issues.
UH has participated in the RoboCup Soccer competitions since 2003: until 2004 in the 2D simulation league, since 2004 in 3D humanoid simulation, and since 2013 also in the KidSize league. Daniel Polani is currently the President of the RoboCup Federation (2017-2019).
Led by UKE, the project explores sensorimotor contingencies in social settings (robot-robot, robot-human, human-human).
Led by ISME and the University of Lecce, the project develops swarms of underwater vehicles for the sonar mapping of submarine features.
Led by the University of Bremen, the project is interested in a generic cognitive framework that allows robots to interact with humans in a "biologically plausible" way. For this, UH uses principles based on information theory for intelligent information processing to create a self-organizing, informational, anticipatory architecture. CORBYS is funded by the European Commission under the 7th Framework Programme, Grant agreement No. 270219. (See also UH Press Release)
Are there general principles underlying intelligent information processing in living beings which we can exploit without having to resort to specialized solutions that vary from task to task?
Can understanding of mechanisms for "life" teach us something about how to achieve intelligence in artificial systems?
More seriously, information theory is one of the most universal concepts with applications in computer science, mathematics, physics, biology, chemistry and other fields. It allows a lucid and transparent analysis of many systems and provides a framework to study and compare seemingly different systems using the same language and notions.
Recent research in learning models, e.g. Neural Networks, emphasizes the universal role of information. Can information theory open a universal approach towards the development of Artificial Intelligence and the studies of Artificial Life?
Here is a question about Information Theory and Stochastic Control that I asked myself (or, to be precise, asked Usenet) a couple of years ago.
There was no reply on Usenet; however, a few results relevant to that question have come up (or we have become aware of them) in the meantime:
We will occasionally update this list when we encounter a paper that seems particularly close to addressing the above question or one related to it.
Some further useful links to resources (people/papers) on Biology and Information can be found at the
To start reading about empowerment, the following papers provide a good entry point:
Klyubin, A. S., Polani, D., and Nehaniv, C. L., (2005). All Else Being Equal Be Empowered. Advances in Artificial Life, European Conference on Artificial Life (ECAL 2005), vol. 3630 of LNAI, 393-402. Springer.
Klyubin, A. S., Polani, D., and Nehaniv, C. L., (2008). Keep Your Options Open: An Information-Based Driving Principle for Sensorimotor Systems. PLoS ONE, 3(12):e4018. http://dx.doi.org/10.1371/journal.pone.0004018, Dec 2008.
More recently, a very closely related principle for intelligent behaviour generation has been suggested: Causal Entropic Forcing, which maximizes the entropic volume of future trajectories. Its authors, Wissner-Gross and Freer, aim to derive it from physical rather than biological considerations; for details see
Characteristic of the biological motivation of empowerment is that the latter considers everything with respect to the agent: the potential reach is determined by which action sequences the agent may choose, and the reachable states must be distinguishable by the agent's sensors to count as separate states.
Also, while in many contexts empowerment and Causal Entropic Forcing will produce similar drives, there is one difference: empowerment considers what part of the agent's action information actually reappears in the later (sensorically distinguishable) state. So, if the dynamics exhibit (uncontrolled) noise in parts of the state space, empowerment and Causal Entropic Forcing can be expected to give different results: empowerment will drive the agent (and its action "tentacles") away from the noise, as noise indicates a less controllable (and thus undesirable) region of the environment. Causal Entropic Forcing does not have at its heart an agent who sees its actions reflected in the state; we therefore expect it to be indifferent to environmental noise, or possibly even attracted to it, as noise may enhance the richness of the available trajectories (though this conjecture requires precise analysis, as it may depend on the nature of the dynamics and of the noise in the environment).
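Formally, empowerment is the channel capacity of the channel from the agent's actions to its subsequent sensor states, i.e. the maximum of I(A;S') over action distributions p(a). The following minimal sketch (function name hypothetical; a discrete action-to-sensor channel given as a matrix is assumed) computes it with the classic Blahut-Arimoto algorithm:

```python
import numpy as np

def empowerment(p_s_given_a, tol=1e-8, max_iter=1000):
    """Empowerment of a discrete action->sensor channel, computed as its
    channel capacity C = max_{p(a)} I(A; S') via Blahut-Arimoto.
    p_s_given_a: array of shape (n_actions, n_states), rows sum to 1.
    Returns the capacity in bits.
    """
    n_a, _ = p_s_given_a.shape
    p_a = np.full(n_a, 1.0 / n_a)            # start from uniform actions

    def kl_per_action(p_a):
        # D(a) = KL( p(s'|a) || p(s') ) in bits, with p(s') = sum_a p(a) p(s'|a)
        p_s = p_a @ p_s_given_a
        with np.errstate(divide="ignore", invalid="ignore"):
            log_ratio = np.where(p_s_given_a > 0,
                                 np.log2(p_s_given_a / p_s), 0.0)
        return (p_s_given_a * log_ratio).sum(axis=1), log_ratio

    for _ in range(max_iter):
        d, _ = kl_per_action(p_a)
        new_p_a = p_a * np.exp2(d)           # Blahut-Arimoto reweighting
        new_p_a /= new_p_a.sum()
        if np.abs(new_p_a - p_a).max() < tol:
            p_a = new_p_a
            break
        p_a = new_p_a

    _, log_ratio = kl_per_action(p_a)        # I(A;S') under the final p(a)
    return float((p_a[:, None] * p_s_given_a * log_ratio).sum())
```

For a noiseless channel with n fully distinguishable outcomes (an identity matrix) this yields log2(n) bits, while a channel whose rows are all identical, i.e. whose outcome the agent cannot influence at all, yields zero empowerment; this is exactly the sensitivity to uncontrollable noise discussed above.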
How is this selection process attained in nature? How can we copy it in artificial systems? How can new channels of environmental information be tapped and exploited by a system that originally accessed a different set of information channels? Can we make use of this in artefacts?
Successful multiagent studies have been carried out e.g. in the framework of
Andrei Robu: Clocks and Time
Martin Biehl: The Structure of Agents
Tom Anthony: Empowerment and its Dynamics
Andrés Burgos: Information-Theoretic Models for Biological Systems, PhD viva passed February 2017
Malte Harder: Information-Driven Self-Organization of Agents and Agent Collectives, PhD viva passed October 2013
Sander van Dijk: Informational Constraints and Organization of Behaviour, PhD viva passed October 2013
Christoph Salge: Informational models of Social Interaction
Philippe Capdepuy: Emergence of Cooperation in Agent Collectives through a Potential Entropy Maximization Principle, PhD awarded December 2010
Dorothee Francois: Facilitating children's play with robots: a developmental robotics approach, PhD awarded February 2009
Tobias Jung: Fast Reinforcement Learning with Kernel Machines (remote supervision), PhD awarded February 2008
Alexander Klyubin: Organization of Information Flow through the Perception-Action Loop, PhD awarded May 2007