This article addresses the question of whether machine understanding requires consciousness. Some researchers in the field of machine understanding have argued that it is not necessary for computers to be conscious as long as they can match or exceed human performance in certain tasks. But despite the remarkable recent success of machine learning systems in areas such as natural language processing and image classification, important questions remain about their limited performance and about whether their cognitive abilities entail genuine understanding or are the product of spurious correlations. Here I draw a distinction between natural, artificial, and machine understanding. I analyse some concrete examples of natural understanding and show that although it shares properties with the artificial understanding implemented in current machine learning systems, it also has some essential differences, the main one being that natural understanding in humans entails consciousness. Moreover, evidence from psychology and neurobiology suggests that it is this capacity for consciousness that, at least in part, explains the superior performance of humans in some cognitive tasks and may also account for the authenticity of semantic processing that seems to be the hallmark of natural understanding. I propose a hypothesis that might help to explain why consciousness is important to understanding. In closing, I suggest that progress toward implementing human-like understanding in machines (machine understanding) may benefit from a naturalistic approach in which natural processes are modelled as closely as possible in mechanical substrates.

During the learning process, a child develops a mental representation of the task he or she is learning. A machine learning algorithm likewise develops a latent representation of the task it learns. We investigate the development of an artificial agent's knowledge construction through the analysis of its behavior, i.e., its sequences of moves, while it learns to perform the Tower of Hanoi (TOH) task. The TOH is a well-known task in experimental contexts for studying problem-solving processes, one of the fundamental processes in children's construction of knowledge about their world. We position ourselves in the field of explainable reinforcement learning for developmental robotics, at the crossroads of cognitive modeling and explainable AI.

Our main contribution proposes a three-step methodology, named Implicit Knowledge Extraction with eXplainable Artificial Intelligence (IKE-XAI), to extract the implicit knowledge, in the form of an automaton, encoded by an artificial agent during its learning. To extract the agent's acquired knowledge at different stages of its training, the approach combines: first, a Q-learning agent that learns to perform the TOH task; second, a trained recurrent neural network that encodes an implicit representation of the TOH task; and third, an XAI process using a post-hoc implicit rule extraction algorithm to extract finite-state automata. We showcase this technique to solve and explain the TOH task when researchers have access only to moves that represent observational behavior, as in human-machine interaction. We propose using graph representations as visual and explicit explanations of the Q-learning agent's behavior. Our experiments show that the IKE-XAI approach helps in understanding the development of the Q-learning agent's behavior by providing a global explanation of its knowledge evolution during learning. IKE-XAI also allows researchers to identify the agent's Aha! moment by determining from what moment the knowledge representation stabilizes and the agent no longer learns.

Insight occurs when a person suddenly reinterprets a stimulus, situation, or event to produce a nonobvious, nondominant interpretation. This can take the form of a solution to a problem (an "aha moment"), comprehension of a joke or metaphor, or recognition of an ambiguous percept. Insight research began a century ago, but neuroimaging and electrophysiological techniques have been applied to its study only during the past decade. Recent work has revealed insight-related coarse semantic coding in the right hemisphere and internally focused attention preceding and during problem solving.
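To make the IKE-XAI pipeline more concrete, the sketch below illustrates its first ingredient: a tabular Q-learning agent learning the 3-disk TOH, together with a simple "Aha! moment" probe that reports the last episode at which the greedy policy changed, i.e. when the learned representation stabilizes. This is our own minimal illustration, not the authors' code: the state encoding (disk → peg tuple), hyperparameters, and the policy-stabilization probe are all assumptions made for the example.

```python
# Minimal sketch (illustrative, not the paper's implementation) of a
# Q-learning agent on the 3-disk Tower of Hanoi, with a crude "Aha!" probe.
import itertools
import random

PEGS, DISKS = 3, 3
GOAL = tuple(2 for _ in range(DISKS))           # all disks on the last peg
ACTIONS = [(a, b) for a in range(PEGS) for b in range(PEGS) if a != b]
# A state maps disk index (0 = smallest) to the peg it sits on.
STATES = list(itertools.product(range(PEGS), repeat=DISKS))

def legal(state, move):
    """A move (src, dst) is legal if src has a top disk smaller than dst's."""
    src, dst = move
    src_disks = [d for d in range(DISKS) if state[d] == src]
    if not src_disks:
        return False
    top = min(src_disks)                        # smallest disk is on top
    dst_disks = [d for d in range(DISKS) if state[d] == dst]
    return not dst_disks or top < min(dst_disks)

def step(state, move):
    src, dst = move
    top = min(d for d in range(DISKS) if state[d] == src)
    nxt = list(state)
    nxt[top] = dst
    return tuple(nxt)

def greedy_policy(Q):
    return {s: max((a for a in ACTIONS if legal(s, a)),
                   key=lambda a: Q[(s, a)]) for s in STATES}

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2               # assumed hyperparameters
aha_episode, last_policy = None, None
random.seed(0)

for episode in range(500):
    state = tuple(0 for _ in range(DISKS))      # all disks start on peg 0
    for _ in range(200):
        moves = [a for a in ACTIONS if legal(state, a)]
        move = (random.choice(moves) if random.random() < eps
                else max(moves, key=lambda a: Q[(state, a)]))
        nxt = step(state, move)
        reward = 100.0 if nxt == GOAL else -1.0
        best_next = (0.0 if nxt == GOAL else
                     max(Q[(nxt, a)] for a in ACTIONS if legal(nxt, a)))
        Q[(state, move)] += alpha * (reward + gamma * best_next - Q[(state, move)])
        state = nxt
        if state == GOAL:
            break
    # "Aha!" probe: remember the last episode at which the greedy policy moved.
    policy = greedy_policy(Q)
    if policy != last_policy:
        aha_episode, last_policy = episode, policy

print("greedy policy last changed at episode", aha_episode)
```

In the full method, the move sequences produced by such an agent at different training stages would feed the recurrent network, from which the finite-state automaton is then extracted; here the stabilization probe only hints at how an Aha! moment can be read off a representation that stops changing.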