Niels Ole Bernsen - Books
Showing all books by the author Niels Ole Bernsen. Shop with free shipping and fast delivery.
4 products
1 049 kr
Ships within 10-15 weekdays
Table of contents (excerpt):

References

Part II: Annotation and Analysis of Multimodal Data: Speech and Gesture

Chapter 4: FORM (Craig H. Martell)
1. Introduction · 2. Structure of FORM · 3. Annotation Graphs · 4. Annotation Example · 5. Preliminary Inter-Annotator Agreement Results · 6. Conclusion: Applications to HLT and HCI? · Appendix: Other Tools, Schemes and Methods of Gesture Analysis · References

Chapter 5: On the Relationships among Speech, Gestures, and Object Manipulation in Virtual Environments: Initial Evidence (Andrea Corradini and Philip R. Cohen)
1. Introduction · 2. Study · 3. Data Analysis · 4. Results · 5. Discussion · 6. Related Work · 7. Future Work · 8. Conclusions · Appendix: Questionnaire MYST III - EXILE · References

Chapter 6: Analysing Multimodal Communication (Patrick G. T. Healey, Marcus Colman and Mike Thirlwell)
1. Introduction · 2. Breakdown and Repair · 3. Analysing Communicative Co-ordination · 4. Discussion · References

Chapter 7: Do Oral Messages Help Visual Search? (Noëlle Carbonell and Suzanne Kieffer)
1. Context and Motivation · 2. Methodology and Experimental Set-Up · 3. Results: Presentation and Discussion · 4. Conclusion · References

Chapter 8: Geometric and Statistical Approaches to Audiovisual Segmentation (Trevor Darrell, John W. Fisher III, Kevin W. Wilson, and Michael R. Siracusa)
1. Introduction · 2. Related Work · 3. Multimodal Multisensor Domain · 4. Results · 5. Single Multimodal Sensor Domain · 6.
1 640 kr
Ships within 10-15 weekdays
This preface tells the story of how Multimodal Usability responds to a special challenge. Chapter 1 describes the goals and structure of this book.

The idea of describing how to make multimodal computer systems usable arose in the European Network of Excellence SIMILAR – "Taskforce for creating human-machine interfaces SIMILAR to human-human communication", 2003–2007, www.similar.cc. SIMILAR brought together people from multimodal signal processing and usability with the aim of creating enabling technologies for new kinds of multimodal systems and demonstrating results in research prototypes. Most of our colleagues in the network were, in fact, busy extracting features and figuring out how to demonstrate progress in working interactive systems, while claiming not to have too much of a notion of usability in system development and evaluation. It was proposed that the authors support the usability of the many multimodal prototypes underway by researching and presenting a methodology for building usable multimodal systems.

We accepted the challenge, first and foremost, no doubt, because the formidable team spirit in SIMILAR could make people accept outrageous things. Second, having worked for nearly two decades on making multimodal systems usable, we were curious – curious at the opportunity to try to understand what happens to traditional usability work, that is, work in human–computer interaction centred around traditional graphical user interfaces (GUIs), when systems become as multimodal and as advanced in other ways as those we build in research today.