Abstract
Intelligent agents are characterized primarily by their far-sighted expedient behavior. We present a working prototype of an intelligent agent (ADAM) based on a novel hierarchical neuro-symbolic architecture (Deep Control) for deep reinforcement learning with a potentially unlimited planning horizon. The control parameters form a hierarchy of formal languages, where higher-level alphabets contain the semantic meanings of lower-level vocabularies.
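
The idea that higher-level alphabets carry the semantic meanings of lower-level vocabularies can be illustrated with a toy two-level controller. The sketch below is a minimal, assumption-laden illustration, not the authors' ADAM/Deep Control implementation: the environment (a 1-D grid), the word table, and all function names are invented for this example. Each level-1 "word" is defined by, and executed as, a sequence of level-0 primitive actions, so planning can proceed in the coarser higher-level language while control bottoms out in primitives.

# Minimal illustrative sketch (NOT the authors' ADAM implementation): a two-level
# controller in which each higher-level "word" is defined as a sequence of
# lower-level tokens, so the higher-level alphabet encodes the semantics of the
# lower-level vocabulary. The toy grid environment and all names are assumptions.

# Level-0 alphabet: primitive actions of a toy 1-D grid agent.
PRIMITIVES = {"L": -1, "R": +1}

# Level-1 alphabet: each symbol ("word") is defined by a string over level 0,
# i.e. its "semantic meaning" is a lower-level action sequence.
WORDS = {
    "step_left":  "L",
    "step_right": "R",
    "jump_right": "RR",
    "jump_left":  "LL",
}

def expand(word: str) -> list:
    """Unfold a level-1 word into its level-0 action sequence."""
    return list(WORDS[word])

def act(position: int, word: str) -> int:
    """Execute a level-1 word by running its primitive actions in order."""
    for token in expand(word):
        position += PRIMITIVES[token]
    return position

def high_level_policy(position: int, goal: int) -> str:
    """Greedy level-1 policy: pick the word whose expansion lands closest to the goal."""
    def landing(word: str) -> int:
        return position + sum(PRIMITIVES[t] for t in expand(word))
    return min(WORDS, key=lambda w: abs(goal - landing(w)))

if __name__ == "__main__":
    pos, goal = 0, 5
    trajectory = [pos]
    while pos != goal:
        word = high_level_policy(pos, goal)   # plan in the higher-level language
        pos = act(pos, word)                  # execute via lower-level primitives
        trajectory.append(pos)
    print("visited:", trajectory)             # e.g. visited: [0, 2, 4, 5]

In a learned system the word table would not be hand-written: higher-level symbols would be discovered from frequently co-occurring lower-level sequences, and further levels could be stacked in the same way to extend the planning horizon.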
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Shumsky, S., Baskov, O. (2023). ADAM: A Prototype of Hierarchical Neuro-Symbolic AGI. In: Hammer, P., Alirezaie, M., Strannegård, C. (eds.) Artificial General Intelligence. AGI 2023. Lecture Notes in Computer Science, vol. 13921. Springer, Cham. https://doi.org/10.1007/978-3-031-33469-6_26
DOI: https://doi.org/10.1007/978-3-031-33469-6_26
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-33468-9
Online ISBN: 978-3-031-33469-6
eBook Packages: Computer Science, Computer Science (R0)