Assisting the aged population or people with disabilities is a critical problem in today's world. Power-assist wearable robots have been proposed to compensate for their declined motor function. In some cases, however, the cognitive function of these populations may have declined as well, and compensating only for their motor function deficiency may be insufficient. Perception-assist wearable robots, which perceive environmental information using visual sensors attached to them, have been proposed to address this problem. This study addresses the problem of identifying the motion intentions of the user of an upper-limb power-assist wearable robot while the user engages in desired interactions with others. Both interacting parties must be considered in order to accurately predict the proper interaction. Therefore, this paper presents an interaction recognition methodology that combines the motion intentions of the user and of the other party with environmental information. A fuzzy reasoning model is proposed to semantically combine the motion intentions of both parties with environmental information. In this method, the motion intentions of the user and the other party are simultaneously estimated from kinematic information and visual information, respectively, and are then employed to predict the interactions between the two parties. The effectiveness of the proposed approach is experimentally evaluated.
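To make the idea of semantically combining two intention estimates with environmental information concrete, the sketch below implements a minimal Mamdani-style fuzzy inference step. It is an illustration only, not the paper's actual model: the variable names (`user_intent`, `other_intent`, `env_clear`), the triangular membership functions, and the four-rule base are all hypothetical assumptions chosen for the example.

```python
# Illustrative Mamdani-style fuzzy reasoning sketch (names and rules are
# hypothetical, not the paper's model): two intention-confidence estimates
# are combined with an environmental cue to score a candidate interaction.

def tri(x, a, b, c):
    """Triangular membership function peaking at b over [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Linguistic terms over [0, 1] (tails extended so the ends reach degree 1).
def low(x):  return tri(x, -0.5, 0.0, 0.5)
def mid(x):  return tri(x, 0.0, 0.5, 1.0)
def high(x): return tri(x, 0.5, 1.0, 1.5)

def interaction_likelihood(user_intent, other_intent, env_clear):
    """All inputs in [0, 1]; returns a defuzzified likelihood in [0, 1]."""
    # Rule base (AND = min, OR = max); each rule clips an output term.
    rules = [
        (min(high(user_intent), high(other_intent), high(env_clear)), high),
        (min(high(user_intent), mid(other_intent)), mid),
        (min(mid(user_intent), high(other_intent)), mid),
        (max(low(user_intent), low(other_intent), low(env_clear)), low),
    ]
    # Aggregate clipped output sets (max) and defuzzify by centroid.
    grid = [i / 100.0 for i in range(101)]
    agg = [max(min(w, out(y)) for w, out in rules) for y in grid]
    total = sum(agg)
    return sum(y * m for y, m in zip(grid, agg)) / total if total else 0.0
```

Under these assumed rules, matched high intentions in a clear environment yield a high likelihood, while a mismatch (e.g., the other party showing no intent) pulls the score down, mirroring the abstract's point that both parties must be considered.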
All Science Journal Classification (ASJC) codes
- Experimental and Cognitive Psychology
- Artificial Intelligence
- Cognitive Neuroscience