@phdthesis{oai:sucra.repo.nii.ac.jp:00010432, author = {Rashed, Md. Golam}, month = {}, note = {xvi, 130 p., Traditionally, humans have viewed robots as “mechanical machines” designed to perform a variety of industrial tasks. Within the last few decades, however, the reality of robots has diverged from this traditional view, enabling the development of social robots that support humans in their daily activities. The concept of the social robot is rapidly emerging, and such robots are gradually being introduced into human society, where interaction between humans and social robots is important for providing mental, communicational, and physical support. As a consequence, many social robots have already been deployed in social spaces, where they provide reactive services: the robot waits until a human proactively seeks assistance. We are now moving toward introducing social robots into social spaces with the ability to proactively offer services, estimating human intentions and offering help only to those who need it. To achieve such capabilities, social robots should be able to observe human behaviors so that they can identify humans who are in need. Observing human behaviors, however, is a challenging task for social robots. This dissertation deals with making human-robot interaction systems capable of observing human behaviors so that social robots can understand people's intentions, interests, and preferences concerning their surrounding environments. Our findings will help social robots proactively offer services to those humans who may want to be served. In this dissertation, a real-life museum guide robot scenario is considered as a testbed for this proactive social robotics research.
The first part of the work develops a guide robot system that observes people's interests and intentions toward paintings in museum scenarios and proactively offers guidance, if needed. Multiple USB video camera sensors are utilized to support the guide robot in detecting and tracking people's visual focus of attention (VFOA) toward paintings. Each person's head orientation, profile information, and computed importance values are considered as local behavior to identify a target-person who may be interested in a particular painting. After identifying the target-person, the guide robot moves autonomously along an appropriate motion path from the so-called public distance to his/her social distance to explain details about the painting in which s/he is interested. The viability of the proposed guide robot system is demonstrated by experimenting with the Robovie-R3 as a museum guide robot, and the system is tested to validate its effectiveness. To further improve the recognition of people's interests, intentions, and preferences concerning paintings in the museum, a network-enabled sensing system is designed and implemented by combining different sensing modalities, with sensors distributed in the environment, as opposed to conventional sensing systems that are usually on-board the robot. This network-enabled sensing system may assist the guide robot in recognizing human intentions before proactively approaching people who may want guidance or commentary about the paintings. To this end, observational experiments are first conducted in a museum with participants. From these experiments, three main kinds of walking trajectory patterns are found, which characterize global behavior; in addition, visual attentional information is found, which indicates the local behavior of the people.
These behaviors ultimately indicate whether certain people are interested in the exhibits and could benefit from the guide robot system providing additional details about the paintings. Based on the findings, a network-enabled Human-Robot Interaction (HRI) system is designed and implemented for the museum. The viability of the proposed HRI system is demonstrated by experimenting with a set of Desktop Robots as guide robots. Experiments reveal that the proposed HRI system enables the network-enabled Desktop Robots to proactively provide guidance. To detect and track all the people inside real public social spaces, in order to read individuals' interests and intentions and to extract knowledge about their actual expectations of their surroundings, a social robot needs robust human sensing systems. Most state-of-the-art human sensing systems fail to keep tracking an initially detected person, especially in crowded, large-scale social spaces where partial and full occlusions between persons and/or objects frequently occur. To address this issue in observing people's behaviors for social robots, the final part of this dissertation introduces a new method that uses LIDAR to identify humans and track their positions, body orientations, and movement trajectories in public spaces so as to read their behavioral responses to the surroundings. We install a network of LIDAR poles at the shoulder level of typical adults to reduce potential occlusion between persons and/or objects, even in large-scale social environments. With this arrangement, a simple but effective human tracking method is proposed that combines data from multiple sensors so that large-scale areas can be covered. How valuable information related to people's behaviors can be autonomously collected and analyzed using this method is also described.
Additionally, a solution to visualize people's movement patterns and preferences with respect to a social space is presented. The effectiveness of the proposed human detection and tracking method is then evaluated in an art gallery of a real museum. The results reveal good human tracking performance and provide valuable behavioral information about the art gallery, which will be important for deploying museum guide robot systems in the future.
Contents: Dedication; Acknowledgement; Abstract; Contents; List of Figures; List of Tables; 1 Introduction (1.1 Motivation; 1.2 Objectives; 1.3 Research Contribution; 1.4 Organization of Sections); 2 Interdisciplinary Background (2.1 Definitions of Social Robots (2.1.0.1 Socially Interactive Robots; 2.1.0.2 Sociable Robots; 2.1.0.3 Design-Centered Social Robots; 2.1.1 Towards a Definition of Social Robots); 2.2 Potential Applications of Social Robots (2.2.1 Guidance Services; 2.2.2 Informational Services; 2.2.3 Assistance; 2.2.4 Entertainment Services and Companionship; 2.2.5 Autism Therapy; 2.2.6 Peer, Tool, Tutorship in Education); 2.3 Human Robot Interaction (2.3.1 Human Detection and Tracking in Spaces (2.3.1.1 Vision Based System; 2.3.1.2 Laser Based System; 2.3.1.3 3-D Range Based System; 2.3.1.4 Ubiquitous Sensor Based System; 2.3.1.5 Different Sensing Modalities in Combination; 2.3.1.6 Occlusion Problems and Handling in Human Detection and Tracking); 2.3.2 Human Intention Recognition in HRI; 2.3.3 Designing the Social Robot's Behaviors; 2.3.4 Interaction Between Humans and Social Robots); 2.4 Tracking Human Behaviors in the Museum; 2.5 Museum Guide Robot; 2.6 Overall Summary); 3 A Vision Based Guide Robot System: Initiating Proactive Social Human Robot Interaction in Museum Scenarios (3.1 Introduction; 3.2 Proposed Guide Robot System (3.2.1 People Detection and Tracking Framework (3.2.1.1 Target-Person Selection Procedure; 3.2.1.2 Recognition of Target Person's VFOA); 3.2.2 Guide Robot's Motion Path Planning); 3.3 System Evaluation (3.3.1 Experiment Design; 3.3.2 Experimental Cases; 3.3.3 Measurements (3.3.3.1 People's Impression; 3.3.3.2 Success Rate); 3.3.4 Results (3.3.4.1 People's Impression; 3.3.4.2 Success Rate)); 3.4 Chapter Summary (3.4.1 Limitations)); 4 Network Guide Robot System Proactively Initiating Interaction with Humans Based on Their Local and Global Behaviors (4.1 Introduction; 4.2 Observational Experiments (4.2.1 Findings of Conducted Observation Experiments); 4.3 Proposed HRI System (4.3.1 Server Sub-System (SSS) (4.3.1.1 Global Behavior Tracking Unit (GBTU)); 4.3.2 Client Sub-System (CSS) (4.3.2.1 Local Behavior Tracking Unit (LBTU); 4.3.2.2 Robot Control Unit (RCU))); 4.4 Experiments (4.4.1 Demonstration using Guide Robots (4.4.1.1 Case-1; 4.4.1.2 Case-2)); 4.5 Chapter Summary (4.5.1 Limitations)); 5 Robustly Tracking People with LIDARs in a Crowded Museum for Behavioral Analysis (5.1 Introduction (5.1.1 Importance of Tracking Museum Visitors); 5.2 Drawbacks of a Human Tracking Method; 5.3 Extended Human Tracking System: Proposed Approach (5.3.1 Likelihood Computing Model; 5.3.2 Reassigning Unique-ID to a Temporarily Lost Person); 5.4 Art Gallery Installation (5.4.1 Tracking System Setup; 5.4.2 Tracking Accuracy Evaluation (5.4.2.1 Visualization of Visitors' Movement Patterns and Preferences to Exhibits); 5.4.3 Application of the proposed System for the MPs: Statistical Analysis; 5.4.4 Discussion); 5.5 Chapter Summary (5.5.1 Limitations)); 6 Conclusions (6.1 Methodological Contributions; 6.2 Theoretical Contributions; 6.3 Technical Contributions; 6.4 Future Work; 6.5 Closing Remarks); A Data Collection Techniques; References.
Principal supervisor: Yoshinori Kuno (久野義徳)}, school = {Saitama University (埼玉大学)}, title = {Observing People's Behaviors in Public Spaces for Initiating Proactive Human-Robot Interaction by Social Robots}, year = {2016}, yomi = {エムディ, ゴラム ラシェド} }