Special Issue of
Philosophical Transactions of the Royal Society A
World models, A(G)I, and the Hard problem(s) of life-mind continuity:
Towards a unified understanding of natural and artificial intelligence
Paper submissions: January 15th, 2024
Please note: given the number of contributors expected to be interested, we will likely release multiple installments, spaced approximately six months apart; inclusion in the 1st, 2nd, or possibly 3rd installment will depend on the desired timelines of individual authors, as well as the specific topics covered in their articles.
A hybrid (in-person and virtual) workshop is also planned for spring 2025, where interested participants will be invited to engage in structured discussions on the topic of world models and AI capabilities.
The surprising power of foundation models has taken the world by storm, prompting us to re-evaluate some of the most fundamental questions about our minds and our relationships to technology and to each other, and even to ask whether existential threats from advanced artificial intelligence may be our undoing [1]–[7]. Among the major open questions raised by these technologies are the extent of the world-knowledge that can be learned through training on vast quantities of text, and the ways in which those capacities might be augmented with different forms of multimodality. Proposals for realizing advanced artificial intelligence increasingly center on autonomous agents governed by world models [8]–[10]. The predictive (or compressive) abilities afforded by world models may be one of the main reasons biological systems are so intelligent, able to achieve goals across a broad range of task environments and to learn above and beyond core priors [11]–[13]. Predictive generative models that capture the structure of the world and its likely state-transitions through causal dependencies provide powerful means for systems to respond flexibly to varied situations.
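To make this notion concrete, below is a minimal, purely illustrative sketch in the spirit of model-based agents such as [8], [9], not an implementation of any cited architecture: a world model as a learned transition function, trained on prediction error and then used to select actions via imagined rollouts. All names (TransitionModel, train_step, plan, reward_fn) and the random-shooting planner are our own assumptions for illustration.

```python
import torch
import torch.nn as nn

class TransitionModel(nn.Module):
    """Learned dynamics: predict the next state from the current state and action."""
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, action], dim=-1))

def train_step(model, optimizer, states, actions, next_states):
    """One gradient step on the predictive (compressive) objective."""
    loss = nn.functional.mse_loss(model(states, actions), next_states)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def plan(model, state, action_sequences, reward_fn, horizon: int = 5):
    """Random-shooting planner: imagine rollouts inside the model and return
    the first action of the sequence with the highest predicted return."""
    best_action, best_return = None, float("-inf")
    for seq in action_sequences:           # seq: list of (action_dim,) tensors
        s, total = state, 0.0
        for a in seq[:horizon]:
            s = model(s, a)                # imagined state transition
            total += float(reward_fn(s))   # reward evaluated on imagined states
        if total > best_return:
            best_return, best_action = total, seq[0]
    return best_action
```

Richer variants replace this deterministic network with latent-variable or recurrent models, but the division of labor is the same: learn to predict the world, then plan in imagination.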
World modeling may be key to robustly “learning to learn,” such that biological intelligences may be the only clearly identifiable examples of (significantly) general intelligence. It is increasingly suggested that predictive models of the world may form a basis for the kinds of cognitive capacities we wish to engineer for “System 2 AI” [14]–[18]: systems capable of the sophisticated (potentially self-reflexive and deliberate) complex, multi-step reasoning processes associated with seemingly uniquely human forms of intelligence. It has even been proposed that the integrated modeling of spatiotemporally and causally coherent system-world configurations may provide not only a basis for expanding machine autonomy, but also a means of understanding phenomenal consciousness as a stream of experience [19]–[21].
In these ways, one of the most important questions we will ever ask (and hopefully answer) may be: which kinds of world-modeling capacities are sufficient for realizing which ranges of functional properties, in which varieties of systems, under which circumstances? Towards this end, in addition to exploring mutually illuminating parallels between natural and artificial intelligence, this special issue will also consider perspectives that emphasize the situatedness of biological intelligences as value-driven, self-making cybernetic systems.
Could world modeling provide a basis for understanding not only consciousness (in multiple senses), but perhaps life itself as a potentially distinct phenomenon in nature? That is, can we identify necessary and sufficient conditions for defining living phenomena in terms of the sentience afforded by capacities for minimal predictive system-world modeling? Might common principles of computation and complexity underlie not just the major transitions we associate with various conscious phenomena, but potentially the origins of life itself [22]? This is not to say that all living systems possess phenomenal consciousness in the same sense (e.g., as iterative system-world estimation informing, and informed by, action-perception cycles on the timescales of their formation), or that they exhibit the same range of intelligent functions. However, given the potential for these computational models of mind and life to illuminate core aspects of cognition, and considering the biomedical applications that could follow from a deeper understanding of biological intelligence, this is an issue worthy of further exploration.
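As one way of unpacking the parenthetical above, the following toy sketch, again purely illustrative and not an implementation of any cited theory, shows an action-perception cycle in the spirit of active inference accounts [19]–[21]: perception reduces prediction error by updating a belief about a hidden world state, while action reduces it by moving the world toward preferred observations. The scalar setup, the Kalman-style update, and the preferred_obs target are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a hidden scalar "world state" the agent can nudge with its actions,
# observed through noise. The preferred observation encodes the agent's goal.
true_state = 5.0
obs_noise = 0.5
preferred_obs = 0.0

belief_mean, belief_var = 0.0, 10.0      # agent's estimate of the world state

for t in range(20):
    # Perception: update the belief from a noisy observation (scalar Kalman step)
    obs = true_state + rng.normal(0.0, obs_noise)
    gain = belief_var / (belief_var + obs_noise**2)
    belief_mean += gain * (obs - belief_mean)
    belief_var *= (1.0 - gain)

    # Action: change the world to shrink the gap between predicted and preferred
    # observations (the complementary half of the action-perception cycle)
    action = 0.5 * (preferred_obs - belief_mean)
    true_state += action
    belief_mean += action                # the model anticipates its action's effect

print(f"final state ~ {true_state:.2f}, belief ~ {belief_mean:.2f}")
```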
Given the uncertainty around the capacities of LLMs, and the potentially far-reaching geopolitical consequences that could flow from different interpretations of their functioning, the importance of pursuing this line of inquiry is difficult to overstate. Towards these ends, this special issue invites contributions focused on the following questions (with additional topics also welcome if relevant):
What range of world models might we associate with which kinds of systems?
What are the functional properties of different kinds of world models with respect to inference and learning?
How can models of systems equipped with underlying world models shed light on the physics of complex systems?
Which kinds of world models might be associated with which kinds of conscious phenomena?
Which kinds of world models might be associated with LLMs, and to what extent might this change with proposed further technological developments (e.g., attempts at incorporating multimodality of different varieties)?
Which kinds of world models are characteristic of all life, ranging from animals to plants and fungi, and even individual cells?
To the extent that we can decipher how these world-modeling capacities are implemented within different kinds of living systems, what degrees of prediction and control might this afford?
In posing these questions, we recognize that world-modeling has taken on a broad range of connotations, each of which might be valuable in different contexts. However, this wide range of meanings could also lead to confusion if we fail to keep track of their context-sensitivity. In this way, the expression ‘world models’ runs the risk of becoming semantically vacuous, or even misleading, perhaps akin to the way in which ‘consciousness’ became something of a “suitcase word” in cognitive science, and a place for hiding our ignorance with respect to phenomena that challenge our understanding. Yet, what if these similarities in the promise and peril of the semantic ambiguities associated with world models and consciousness are due to the fact that they are referring to fundamentally similar phenomena?
Could present advances in AI help to illuminate not only the Hard problem of consciousness (i.e., why should there be anything it feels like to be some kinds of systems?), but also the Hard problem of life (i.e., how are living systems so unusually adaptive/intelligent that the descendants of a single cell could come to dominate a planet for billions of years?)? And perhaps most importantly of all, could these be the key questions we will need to answer in assessing the promise and peril of present foundation models in AI? Could answering them help guide our ongoing and forthcoming attempts to build new kinds of thinking machines, which may end up being the single most important thing we ever do as a species?
Editors
We are deeply grateful to Professor Kahneman for helping to inspire this special issue, and for providing his warm support during the planning process.
We hope to honor his memory with the work to come.
References
[1] M. Tegmark, Life 3.0: Being Human in the Age of Artificial Intelligence. Knopf Doubleday Publishing Group, 2017.
[2] G. Marcus and E. Davis, Rebooting AI: Building Artificial Intelligence We Can Trust. Knopf Doubleday Publishing Group, 2019.
[3] M. Mitchell, Artificial Intelligence: A Guide for Thinking Humans. Farrar, Straus and Giroux, 2019.
[4] S. Russell, Human Compatible: Artificial Intelligence and the Problem of Control. Penguin, 2019.
[5] N. Bostrom, Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014.
[6] A. Safron, Z. Sheikhbahaee, N. Hay, J. Orchard, and J. Hoey, “Value Cores for Inner and Outer Alignment: Simulating Personality Formation via Iterated Policy Selection and Preference Learning with Self-World Modeling Active Inference Agents,” in Active Inference, C. L. Buckley, D. Cialfi, P. Lanillos, M. Ramstead, N. Sajid, H. Shimazaki, and T. Verbelen, Eds., in Communications in Computer and Information Science. Cham: Springer Nature Switzerland, 2023, pp. 343–354. doi: 10.1007/978-3-031-28719-0_24.
[7] R. Kurzweil, The Age of Spiritual Machines: When Computers Exceed Human Intelligence. Penguin, 2000.
[8] D. Ha and J. Schmidhuber, “World Models,” arXiv:1803.10122 [cs, stat], Mar. 2018, doi: 10.5281/zenodo.1207631.
[9] D. Hafner, T. Lillicrap, J. Ba, and M. Norouzi, “Dream to Control: Learning Behaviors by Latent Imagination,” arXiv:1912.01603 [cs], Mar. 2020, Accessed: Apr. 04, 2020. [Online]. Available: http://arxiv.org/abs/1912.01603
[10] A. Safron and Z. Sheikhbahaee, “Dream to explore: 5-HT2a as adaptive temperature parameter for sophisticated affective inference.” PsyArXiv, Jul. 13, 2021. doi: 10.31234/osf.io/zmpaq.
[11] S. Legg and M. Hutter, “A collection of definitions of intelligence,” Frontiers in Artificial Intelligence and Applications, vol. 157, p. 17, 2007.
[12] A. Safron, “AIXI, FEP-AI, and Integrated World Models: Towards a Unified Understanding of Intelligence and Consciousness,” in Active Inference, C. L. Buckley, D. Cialfi, P. Lanillos, M. Ramstead, N. Sajid, H. Shimazaki, and T. Verbelen, Eds., in Communications in Computer and Information Science. Cham: Springer Nature Switzerland, 2023, pp. 251–273. doi: 10.1007/978-3-031-28719-0_18.
[13] F. Chollet, “On the Measure of Intelligence,” arXiv, arXiv:1911.01547, Nov. 2019. doi: 10.48550/arXiv.1911.01547.
[14] Y. Bengio, “The Consciousness Prior,” arXiv:1709.08568 [cs, stat], Sep. 2017, Accessed: Jun. 11, 2019. [Online]. Available: http://arxiv.org/abs/1709.08568
[15] V. Thomas et al., “Independently controllable factors,” arXiv:1708.01289, 2017.
[16] Y. Bengio, T. Deleu, E. J. Hu, S. Lahlou, M. Tiwari, and E. Bengio, “GFlowNet Foundations.” arXiv, Apr. 07, 2022. doi: 10.48550/arXiv.2111.09266.
[17] A. Safron, O. Çatal, and T. Verbelen, “Generalized Simultaneous Localization and Mapping (G-SLAM) as unification framework for natural and artificial intelligences: towards reverse engineering the hippocampal/entorhinal system and principles of high-level cognition.” PsyArXiv, Oct. 01, 2021. doi: 10.31234/osf.io/tdw82.
[18] M. Assran et al., “Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture.” arXiv, Apr. 13, 2023. doi: 10.48550/arXiv.2301.08243.
[19] A. Safron, “Integrated World Modeling Theory (IWMT) Expanded: Implications for Theories of Consciousness and Artificial Intelligence.” PsyArXiv, Jun. 21, 2021. doi: 10.31234/osf.io/rm5b2.
[20] A. Safron, “Integrated World Modeling Theory (IWMT) Implemented: Towards Reverse Engineering Consciousness with the Free Energy Principle and Active Inference,” PsyArXiv, Aug. 2020. doi: 10.31234/osf.io/paz5j.
[21] A. Safron, “The Radically Embodied Conscious Cybernetic Bayesian Brain: From Free Energy to Free Will and Back Again,” Entropy, vol. 23, no. 6, Art. no. 6, Jun. 2021, doi: 10.3390/e23060783.
[22] A. Safron, D. A. R. Sakthivadivel, Z. Sheikhbahaee, M. Bein, A. Razi, and M. Levin, “Making and breaking symmetries in mind and life,” Interface Focus, vol. 13, no. 3, p. 20230015, Apr. 2023, doi: 10.1098/rsfs.2023.0015.