Human-Robot Interaction System Based on Natural Calling Gestures
https://doi.org/10.24561/00019022
File: GD0001147.pdf (2.5 MB) (https://sucra.repo.nii.ac.jp/record/19053/files/GD0001147.pdf)
| Item type | 学位論文 / Thesis or Dissertation(1) |
|---|---|
| Publication date | 2020-07-21 |
| Title (en) | Human-Robot Interaction System Based on Natural Calling Gestures |
| Language | eng |
| Resource type identifier | http://purl.org/coar/resource_type/c_db06 |
| Resource type | doctoral thesis |
| ID registration (JaLC) | 10.24561/00019022 |
| Access rights | open access |
| Access rights URI | http://purl.org/coar/access_right/c_abf2 |
| Alternative title (ja) | 自然な手招きジェスチャによる人間とロボットのインタラクションシステム |
| Creator | AYE, SU PHYO (アエ, ス ピヨ) |
| Author affiliation | 埼玉大学大学院理工学研究科(博士後期課程)理工学専攻 |
| Author affiliation (alternative language) | Graduate School of Science and Engineering, Saitama University |
| Bibliographic record | 博士論文(埼玉大学大学院理工学研究科(博士後期課程)) |
| Issue date | 2019 |
| Publisher | 埼玉大学大学院理工学研究科 |
| Publisher (alternative language) | Graduate School of Science and Engineering, Saitama University |
| Extent | x, 55 p. |
| Degree number | 甲第1141号 |
| Degree conferral date | 2019-09-20 |
| Degree name | 博士(学術) |
| Degree grantor identifier (kakenhi) | 12401 |
| Degree grantor | 埼玉大学 (Saitama University) |
Abstract:

The steadily expanding population of elderly persons in Japan and other industrialized countries has posed a vexing problem for the health care systems that serve aging citizens. The field of robotics, in particular the development of service robots that can provide assisted care, has made many gains over the last decade. For actual care tasks, communication technology is needed that allows users to easily request services from assisted-care robots. Thus, human-robot interaction (HRI) becomes one of the most important aspects of service robot development. Interacting with service robots via nonverbal cues allows for natural and efficient communication with humans.

A human-robot interaction system must be designed and implemented so that age-related challenges in functional ability, such as perceptual, cognitive and motor functions, are taken into account. Interfaces for communicating with service robots, such as touch panels and voice control, are increasingly popular. Compared with these interfaces, gesture interfaces, in which users employ movements of the hands, fingers, head, face and other parts of the body, have the advantage of simplicity: they require less learning time. For older users, who may operate other interfaces with limited speed and accuracy, gesture interfaces can be attractive and make interactions more flexible. Gesture interfaces may make interaction with robots more attractive and friendly to older users because they are natural and intuitive, require minimal learning time and lead to a high degree of user satisfaction.

The main objective of this research is to develop a human-robot interaction system that takes into account gestures performed by the human in a flexible, fast and natural way, by means of an intentional control architecture that enables the robot to react quickly to the user's stimuli. To achieve this objective, we propose a natural hand-calling gesture recognition algorithm for human-robot interaction that uses skeleton features in crowded environments. As the gesture interface for communicating with the robot, this work mainly focuses on natural calling hand gestures. Hand gesture recognition is a challenging problem in computer vision and a topic of active research.

In real situations, the user may perform gestures in various positions, and the environment may contain many other people making hand motions. We observe that a person who has no intention of calling the robot is unlikely to move an arm against gravity, and that when a person calls someone, it is natural to direct the open hand towards the target person. However, there are still challenges in vision-based hand gesture recognition, such as illumination changes and the background-foreground problem, where objects in the scene may even contain skin-like colors. Another issue is the presence of crowds moving around with many hand motions; in crowded environments, conventional methods might erroneously recognize unrelated hand movements as calling gestures.

Based on these observations of people's daily activities and the challenges above, a service robot was developed and used to interact with elderly people, helping with their daily activities in a natural way. We developed a hand-calling gesture recognition method that can recognize, in real time, natural gestures not specified in advance. In this research program, the following challenges have been identified: (1) illumination changes and the background-foreground problem, where objects in the scene may even contain skin-like colors; (2) the presence of crowds moving around with many hand motions and randomly moving objects; (3) the caller's position, which may vary and may lie not in front of the camera but anywhere within its view; (4) natural gestures, so there is no need to remember predefined gestures; and (5) less learning time and more satisfaction for elderly people, because the gestures are those used since childhood.

In our approach, only people who gaze towards the robot with defined wrist positions are selected from the scene. This comes from our observation that people typically gesture to call others while gazing towards the target person. Based on the overall body poses of these people, we determine candidates who might be calling the robot. We then zoom into each candidate's hand-wrist region to extract finer details of the hand pose, and use the key-points of the fingertips to make the final decision on whether a movement is a calling gesture. In essence, we process the combination of overall body pose and local hand pose in stages to determine whether someone is calling the robot. This cascade of calling-gesture detection stages allows for efficient recognition in crowded settings.

The major goal of this research is to recognize natural calling gestures from people in an interaction scenario where the robot continuously observes human behavior. In our approach, the robot first moves among people. When a person calls the robot with a hand gesture, the robot detects the caller from among the crowd. While approaching the potential caller, the robot observes whether the person is actually calling it. We tested the proposed system at a real elderly care center and validated our findings using an experimental setup composed of a humanoid robot (Aldebaran's NAO) and an i-Cart mini (T-frog) that carries the NAO humanoid and a webcam.

This thesis proposes a service robot system that provides assisted care to the elderly. The system recognizes natural calling gestures in an interaction scenario where the robot visually observes human behavior. To this end, an algorithm for natural calling gesture recognition in crowded environments for human-robot interaction is introduced. To detect users, this study uses the key-points from the OpenPose real-time detector. Using these key-points, gaze detection and location of the hand-wrist positions are performed. If the algorithm finds the gaze and a defined hand-wrist position, it zooms into the hand-wrist region and then finds the key-points of the hand's fingertips. From these key-points, the algorithm recognizes whether the user is calling by a simple but effective rule-based classification, developed from basic observations of how people perform calling gestures in real settings. After detecting the calling gesture, the robot moves towards the caller and, while approaching, observes whether the user is actually calling. As a result, the interaction between humans and the robot becomes more effective.
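The staged cascade described in the abstract (a gaze check and a raised-wrist check on the whole-body pose, then an open-palm test on fingertip key-points) can be sketched as a rule-based classifier. The following is a minimal illustrative sketch, not the thesis's actual implementation: the key-point layout, the frontal-gaze heuristic, and every threshold here are assumptions. In the actual system the key-points come from the OpenPose detector and the rules were derived from the author's observations.

```python
# Illustrative sketch of a staged calling-gesture cascade.
# Key-point layout and all thresholds are hypothetical, not the thesis's rules.
from dataclasses import dataclass
from typing import Dict, Tuple

Point = Tuple[float, float]  # (x, y) in image coordinates; y grows downward


@dataclass
class BodyKeypoints:
    nose: Point
    left_eye: Point
    right_eye: Point
    left_shoulder: Point
    right_shoulder: Point
    left_wrist: Point
    right_wrist: Point


def gazing_at_camera(kp: BodyKeypoints, tol: float = 0.25) -> bool:
    """Stage 1a: treat a roughly frontal face (nose centered between the eyes)
    as gaze towards the camera."""
    left_x, right_x = kp.left_eye[0], kp.right_eye[0]
    eye_width = abs(right_x - left_x) or 1e-6  # avoid division by zero
    return abs(kp.nose[0] - (left_x + right_x) / 2.0) / eye_width < tol


def wrist_raised(kp: BodyKeypoints) -> bool:
    """Stage 1b: a calling hand is lifted against gravity, i.e. some wrist
    sits above the shoulder line (smaller y = higher in the image)."""
    shoulder_y = min(kp.left_shoulder[1], kp.right_shoulder[1])
    return kp.left_wrist[1] < shoulder_y or kp.right_wrist[1] < shoulder_y


def open_palm(fingertips: Dict[str, Point], wrist: Point,
              min_spread: float = 40.0) -> bool:
    """Stage 2: after zooming into the wrist region, an open hand shows
    several fingertips spread away from the wrist (pixel threshold assumed)."""
    if len(fingertips) < 4:  # too few detected fingertips: closed or occluded
        return False
    dists = [((x - wrist[0]) ** 2 + (y - wrist[1]) ** 2) ** 0.5
             for x, y in fingertips.values()]
    return sum(dists) / len(dists) > min_spread


def is_calling(kp: BodyKeypoints, fingertips: Dict[str, Point],
               wrist: Point) -> bool:
    """Cascade: cheap whole-body checks first; the finer hand check runs
    only for candidates that survive them."""
    return gazing_at_camera(kp) and wrist_raised(kp) and open_palm(fingertips, wrist)
```

Evaluating the stages in this order mirrors the cascade in the abstract: most people in a crowd fail the cheap gaze or wrist tests, so the more expensive zoom-and-fingertip stage runs on only a few candidates.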
Table of contents:

Contents v
List of Figures vii
List of Tables ix
List of Equations x
1 Introduction 1
1.1 Motivation 2
1.2 Contributions 3
1.3 Organization 4
1.4 Concluding Remarks 5
2 Background and Literature Review 6
2.1 Background 6
2.1.1 Service Robots 7
2.1.2 Human-Robot Interaction for Elderly Care 8
2.1.3 Gestures and Hand Gestures Recognition 10
2.1.3.1 Gestures Recognition 10
2.1.3.2 Hand Gestures Recognition 12
2.2 Conventional Hand Gestures Recognition Approaches 14
2.2.1 Sensors used for Gesture Interfaces 15
2.2.2 Gesture Recognition Methodology 17
2.3 Literature Review in Human-Robot Interaction System based on Gestures 19
2.4 Concluding Remarks 22
3 Natural Calling Gesture Recognition 23
3.1 Algorithm Overview 23
3.2 Body Key-points Feature Acquisition 25
3.3 Detect Gaze and Find Hand-Wrist Position 25
3.4 Zoom into the Wrist Part and Hand Fingertip Key-points Features Acquisition 26
3.5 Recognized Calling Gestures Using Rule-Based Classification 28
3.6 Concluding Remarks 29
4 Human-Robot Interaction System 31
4.1 Service Robot Setup 31
4.2 Process Model of Human-Robot Interaction Based on Gesture 31
4.3 Gesture Recognition 34
4.4 Measuring Distance and Angle 34
4.5 Concluding Remarks 35
5 Experimental Results and Analysis 36
5.1 Experiment Analysis of Natural Hand Calling Gesture Recognition Method 36
5.1.1 Experiment 1 36
5.1.2 Experiment 2 39
5.2 Experiment Analysis of Human-Robot Interaction System 42
5.3 Concluding Remarks 43
6 Conclusions and Future Work 45
6.1 Conclusions 45
6.2 Future Work 46
Bibliography 48
| Note | 主指導教員 (Principal advisor): 久野義徳 |
| Edition | [出版社版] (publisher's version) |
| Author version flag | VoR (http://purl.org/coar/version/c_970fb48d4fbd8a85) |
| Resource type | text |
| Format | application/pdf |
| Date created | 2020-07-21 |
| Item ID | GD0001147 |