Artificial Cognitive Systems (ACS) are under development based on mathematical models of higher cognitive functions, providing human-like capabilities such as vision, audition, inference, and behaviour. Although the final goal is human-like decision making and behaviour, we are currently focusing on hierarchical knowledge development and on the recognition of both explicit and implicit human intention. In real-world applications these are the core components of robust situation awareness, which is directly related to decision making and behaviour. We propose to utilize multimodal cognitive neuroscience data such as fMRI, EEG, eye gaze, and GSR.
Recognition of human intention is critical for awareness of situations involving people. Although current human-machine interfaces have been developed to utilize explicitly represented human intention such as keystrokes, gestures, and speech, the actual hidden intention may differ from the explicit one. Also, people may not want to go through tedious processes to present their intentions explicitly, especially for routine sequential tasks and/or sensitive personal situations. Therefore, it is desirable to understand the hidden or unrepresented intention, i.e., ‘implicit’ intention, for the next-generation intelligent human-oriented user interface. We measured multimodal signals, i.e., EEG, ECG, GSR, video, and eye gaze, while subjects were asked both obvious and non-obvious questions. The latter included sensitive personal questions which may incur differences between the explicit and implicit intentions. Subjects answered each question with a spoken ‘Yes’ or ‘No’. The signals measured for the obvious questions are regarded as references, which are used to interpret the non-obvious cases. This demonstrated the possibility of understanding human implicit intention, i.e., classifying it into categories, from brain signals, which may be utilized in a next-generation human-machine interface; a sketch of this reference-based classification is given below.
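The following is a minimal sketch of the reference-based idea described above: fit a classifier on trials from the obvious questions, where the spoken answer is assumed to match the implicit intention, then apply it to the non-obvious trials. The feature dimensions, trial counts, synthetic data, and use of an SVM pipeline are all illustrative assumptions, not the authors' actual pipeline.

```python
# Sketch: classify implicit Yes/No intention from multimodal features,
# training only on the "obvious question" reference trials.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Assumed stand-ins for per-trial features (e.g., EEG band powers,
# GSR level, gaze statistics): 40 reference trials, 10 non-obvious trials.
n_features = 32
X_reference = rng.normal(size=(40, n_features))
y_reference = rng.integers(0, 2, size=40)           # 0 = 'No', 1 = 'Yes'
X_nonobvious = rng.normal(size=(10, n_features))

# Fit on the reference trials, where the explicit answer is trusted,
# then predict the hidden intention on the sensitive, non-obvious trials.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X_reference, y_reference)

implicit_intent = clf.predict(X_nonobvious)
confidence = clf.predict_proba(X_nonobvious).max(axis=1)
for i, (label, p) in enumerate(zip(implicit_intent, confidence)):
    print(f"trial {i}: predicted intention = {'Yes' if label else 'No'} (p={p:.2f})")
```

In the real study the features would come from the recorded EEG, ECG, GSR, video, and eye gaze signals rather than synthetic data; the sketch only shows how the obvious-question trials serve as labelled references.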
For robust situation awareness in unknown environments, the knowledge must be improved continuously. Humans use active learning to develop their knowledge efficiently, and a new active learning model is presented and tested on automatic speech recognition. Humans can self-evaluate their knowledge to identify weak or uncertain topics, and seek answers by asking proper questions of experts (or teachers) or by searching books (or the web). The new knowledge is then incorporated into the existing knowledge system. The active learning algorithm has been applied to both a simple toy problem and a real-world speech recognition task. We introduced an uncertainty measure for each unlabelled datum, calculated from the current classifier; a sketch of the resulting query loop is given below. The developed algorithm achieves better recognition performance with fewer labelled data for classifier training. In the future we will also incorporate a smooth transition in the selection strategy based on the exploitation-exploration trade-off: at the early stage of learning exploitation is utilized, while exploration is applied at the later stage.
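Below is a minimal sketch of an uncertainty-based active learning loop of the kind described above: the current classifier scores each unlabelled example, the most uncertain one is sent to the teacher (oracle) for a label, and the classifier is retrained. The toy data, the query budget, the logistic-regression learner, and the probability-margin uncertainty measure are all illustrative assumptions; the authors' actual uncertainty measure and speech task are not specified here.

```python
# Sketch: uncertainty-sampling active learning on a toy two-class problem.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Toy pool standing in for unlabelled speech features; y_pool plays the
# role of the teacher/oracle and is only consulted when queried.
X_pool = rng.normal(size=(500, 10))
true_w = rng.normal(size=10)
y_pool = (X_pool @ true_w > 0).astype(int)

# Seed with a few labelled examples from each class so training can start.
init_pos = rng.choice(np.where(y_pool == 1)[0], size=5, replace=False)
init_neg = rng.choice(np.where(y_pool == 0)[0], size=5, replace=False)
labelled = list(init_pos) + list(init_neg)
unlabelled = [i for i in range(len(X_pool)) if i not in labelled]

clf = LogisticRegression(max_iter=1000)
for _ in range(20):                       # query budget (assumed)
    clf.fit(X_pool[labelled], y_pool[labelled])
    # Uncertainty measure from the current classifier: distance of the
    # predicted probability from 0.5 (smaller = more uncertain).
    proba = clf.predict_proba(X_pool[unlabelled])[:, 1]
    idx = unlabelled[int(np.argmin(np.abs(proba - 0.5)))]
    # Ask the teacher for the label and fold it into the knowledge.
    labelled.append(idx)
    unlabelled.remove(idx)

clf.fit(X_pool[labelled], y_pool[labelled])
print("accuracy after active queries:", clf.score(X_pool, y_pool))
```

This sketch uses pure exploitation (always querying the most uncertain example); the smooth exploitation-to-exploration transition mentioned above could be added by occasionally sampling queries at random, with the mixing rate scheduled over the course of learning.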
Seminar: Artificial Cognitive Systems with Implicit Intention Understanding and Active Learning
Wednesday, May 22, 2013 - 14:00
Suh-Yeon Dong, Byeong Yeol Kim, CheongAn Lee, Hyunah Song, and Soo-Young Lee, Department of Electrical Engineering and Brain Science Research Center, Korea Advanced Institute of Science and Technology, 373-1 Guseong-dong, Yuseong-gu, Daejeon 305-701, Korea