Book: Advanced Man-Machine Interaction
Production Engineering and Mechanical Design Forum
Admin
Forum Director

Posts: 18996
Rating: 35494
Joined: 01/07/2009
Country: Egypt
Occupation: Director of the Production Engineering and Mechanical Design Forum

Subject: Book: Advanced Man-Machine Interaction    Monday 14 September 2020, 12:44 am

My brothers in faith,
I have brought you the book
Advanced Man-Machine Interaction
Karl-Friedrich Kraiss (Ed.)
Fundamentals and Implementation
With 280 Figures  

[Book cover image]
The contents are as follows:


Table of Contents
1 Introduction 1
2 Non-Intrusive Acquisition of Human Action . 7
2.1 Hand Gesture Commands 7
2.1.1 Image Acquisition and Input Data . 8
2.1.1.1 Vocabulary . 8
2.1.1.2 Recording Conditions 9
2.1.1.3 Image Representation 10
2.1.1.4 Example . 11
2.1.2 Feature Extraction 12
2.1.2.1 Hand Localization . 14
2.1.2.2 Region Description 20
2.1.2.3 Geometric Features 22
2.1.2.4 Example . 27
2.1.3 Feature Classification . 29
2.1.3.1 Classification Concepts . 29
2.1.3.2 Classification Algorithms . 31
2.1.3.3 Feature Selection 32
2.1.3.4 Rule-based Classification . 33
2.1.3.5 Maximum Likelihood Classification 34
2.1.3.6 Classification Using Hidden Markov Models . 37
2.1.3.7 Example . 47
2.1.4 Static Gesture Recognition Application 48
2.1.5 Dynamic Gesture Recognition Application . 50
2.1.6 Troubleshooting 56
2.2 Facial Expression Commands . 56
2.2.1 Image Acquisition 59
2.2.2 Image Preprocessing 60
2.2.2.1 Face Localization 61
2.2.2.2 Face Tracking . 67
2.2.3 Feature Extraction with Active Appearance Models 68
2.2.3.1 Appearance Model . 73
2.2.3.2 AAM Search 75
2.2.4 Feature Classification . 78
2.2.4.1 Head Pose Estimation 78
2.2.4.2 Determination of Line of Sight . 83
2.2.4.3 Circle Hough Transformation 83
2.2.4.4 Determination of Lip Outline 87
2.2.4.5 Lip modeling . 88
2.2.5 Facial Feature Recognition – Eye Localization Application 90
2.2.6 Facial Feature Recognition – Mouth Localization Application . 90
References . 92
3 Sign Language Recognition . 95
3.1 Recognition of Isolated Signs in Real-World Scenarios 99
3.1.1 Image Acquisition and Input Data . 101
3.1.2 Image Preprocessing 102
3.1.2.1 Background Modeling 102
3.1.3 Feature Extraction 105
3.1.3.1 Overlap Resolution 106
3.1.3.2 Hand Tracking 107
3.1.4 Feature Normalization 109
3.1.5 Feature Classification . 110
3.1.6 Test and Training Samples . 110
3.2 Sign Recognition Using Nonmanual Features 111
3.2.1 Nonmanual Parameters 111
3.2.2 Extraction of Nonmanual Features 113
3.3 Recognition of Continuous Sign Language using Subunits . 116
3.3.1 Subunit Models for Signs 116
3.3.2 Transcription of Sign Language . 118
3.3.2.1 Linguistics-orientated Transcription of Sign Language . 118
3.3.2.2 Visually-orientated Transcription of Sign Language . 121
3.3.3 Sequential and Parallel Breakdown of Signs 122
3.3.4 Modification of HMMs to Parallel Hidden Markov Models 122
3.3.4.1 Modeling Sign Language by means of PaHMMs 124
3.3.5 Classification 125
3.3.5.1 Classification of Single Signs by Means of Subunits and PaHMMs . 125
3.3.5.2 Classification of Continuous Sign Language by Means of Subunits and PaHMMs . 126
3.3.5.3 Stochastic Language Modeling 127
3.3.6 Training 128
3.3.6.1 Initial Transcription 129
3.3.6.2 Estimation of Model Parameters for Subunits 131
3.3.6.3 Classification of Single Signs 132
3.3.7 Concatenation of Subunit Models to Word Models for Signs . 133
3.3.8 Enlargement of Vocabulary Size by New Signs 133
3.4 Performance Evaluation 134
3.4.1 Video-based Isolated Sign Recognition 135
3.4.2 Subunit Based Recognition of Signs and Continuous Sign Language . 136
3.4.3 Discussion 137
References . 137
4 Speech Communication and Multimodal Interfaces . 141
4.1 Speech Recognition . 141
4.1.1 Fundamentals of Hidden Markov Model-based Speech Recognition . 142
4.1.2 Training of Speech Recognition Systems . 144
4.1.3 Recognition Phase for HMM-based ASR Systems . 145
4.1.4 Information Theory Interpretation of Automatic Speech Recognition . 147
4.1.5 Summary of the Automatic Speech Recognition Procedure 149
4.1.6 Speech Recognition Technology 150
4.1.7 Applications of ASR Systems 151
4.2 Speech Dialogs . 153
4.2.1 Introduction . 153
4.2.2 Initiative Strategies . 155
4.2.3 Models of Dialog . 156
4.2.3.1 Finite State Model . 156
4.2.3.2 Slot Filling . 157
4.2.3.3 Stochastic Model 158
4.2.3.4 Goal Directed Processing . 159
4.2.3.5 Rational Conversational Agents 160
4.2.4 Dialog Design . 162
4.2.5 Scripting and Tagging . 163
4.3 Multimodal Interaction 164
4.3.1 In- and Output Channels . 165
4.3.2 Basics of Multimodal Interaction 166
4.3.2.1 Advantages . 167
4.3.2.2 Taxonomy 167
4.3.3 Multimodal Fusion . 168
4.3.3.1 Integration Methods 169
4.3.4 Errors in Multimodal Systems 172
4.3.4.1 Error Classification 172
4.3.4.2 User Specific Errors 172
4.3.4.3 System Specific Errors . 173
4.3.4.4 Error Avoidance . 173
4.3.4.5 Error Resolution . 174
4.4 Emotions from Speech and Facial Expressions . 175
4.4.1 Background . 175
4.4.1.1 Application Scenarios 175
4.4.1.2 Modalities 176
4.4.1.3 Emotion Model . 176
4.4.1.4 Emotional Databases . 177
4.4.2 Acoustic Information . 177
4.4.2.1 Feature Extraction . 178
4.4.2.2 Feature Selection 179
4.4.2.3 Classification Methods . 180
4.4.3 Linguistic Information 180
4.4.3.1 N-Grams . 181
4.4.3.2 Bag-of-Words . 182
4.4.3.3 Phrase Spotting . 182
4.4.4 Visual Information 183
4.4.4.1 Prerequisites 184
4.4.4.2 Holistic Approaches . 184
4.4.4.3 Analytic Approaches . 186
4.4.5 Information Fusion . 186
4.4.6 Discussion 187
References . 187
5 Person Recognition and Tracking 191
5.1 Face Recognition . 193
5.1.1 Challenges in Automatic Face Recognition . 193
5.1.2 Structure of Face Recognition Systems . 196
5.1.3 Categorization of Face Recognition Algorithms . 200
5.1.4 Global Face Recognition using Eigenfaces 201
5.1.5 Local Face Recognition based on Face Components or Template Matching . 206
5.1.6 Face Databases for Development and Evaluation 212
5.1.7 Exercises . 214
5.1.7.1 Provided Images and Image Sequences 214
5.1.7.2 Face Recognition Using the Eigenface Approach . 215
5.1.7.3 Face Recognition Using Face Components . 221
5.2 Full-body Person Recognition . 224
5.2.1 State of the Art in Full-body Person Recognition 224
5.2.2 Color Features for Person Recognition . 226
5.2.2.1 Color Histograms 226
5.2.2.2 Color Structure Descriptor 227
5.2.3 Texture Features for Person Recognition . 228
5.2.3.1 Oriented Gaussian Derivatives . 228
5.2.3.2 Homogeneous Texture Descriptor 229
5.2.4 Experimental Results . 231
5.2.4.1 Experimental Setup 231
5.2.4.2 Feature Performance . 231
5.3 Camera-based People Tracking 232
5.3.1 Segmentation 234
5.3.1.1 Foreground Segmentation by Background Subtraction . 234
5.3.1.2 Morphological Operations 236
5.3.2 Tracking 237
5.3.2.1 Tracking Following Detection . 238
5.3.2.2 Combined Tracking and Detection 243
5.3.3 Occlusion Handling . 248
5.3.3.1 Occlusion Handling without Separation . 248
5.3.3.2 Separation by Shape . 248
5.3.3.3 Separation by Color 249
5.3.3.4 Separation by 3D Information . 250
5.3.4 Localization and Tracking on the Ground Plane . 250
5.3.5 Example 253
References . 259
6 Interacting in Virtual Reality 263
6.1 Visual Interfaces 265
6.1.1 Spatial Vision 265
6.1.2 Immersive Visual Displays . 266
6.1.3 Viewer Centered Projection 268
6.2 Acoustic Interfaces 272
6.2.1 Spatial Hearing 272
6.2.2 Auditory Displays 273
6.2.3 Wave Field Synthesis . 274
6.2.4 Binaural Synthesis 275
6.2.5 Cross Talk Cancellation . 276
6.3 Haptic Interfaces 279
6.3.1 Rendering a Wall . 281
6.3.2 Rendering Solid Objects . 283
6.4 Modeling the Behavior of Virtual Objects . 286
6.4.1 Particle Dynamics 286
6.4.2 Rigid Body Dynamics . 288
6.5 Interacting with Rigid Objects 292
6.5.1 Guiding Sleeves 294
6.5.2 Sensitive Polygons . 296
6.5.3 Virtual Magnetism 296
6.5.4 Snap-In . 297
6.6 Implementing Virtual Environments . 298
6.6.1 Scene Graphs 298
6.6.2 Toolkits . 301
6.6.3 Cluster Rendering in Virtual Environments . 303
6.7 Examples 305
6.7.1 A Simple Scene Graph Example 307
6.7.2 More Complex Examples: SolarSystem and Robot 308
6.7.3 An Interactive Environment, Tetris3D 310
6.8 Summary 310
6.9 Acknowledgements . 312
References . 312
7 Interactive and Cooperative Robot Assistants . 315
7.1 Mobile and Humanoid Robots 316
7.2 Interaction with Robot Assistants 321
7.2.1 Classification of Human-Robot Interaction Activities . 323
7.2.2 Performative Actions . 323
7.2.3 Commanding and Commenting Actions 325
7.3 Learning and Teaching Robot Assistants 325
7.3.1 Classification of Learning through Demonstration . 326
7.3.2 Skill Learning . 326
7.3.3 Task Learning . 327
7.3.4 Interactive Learning 328
7.3.5 Programming Robots via Observation: An Overview . 329
7.4 Interactive Task Learning from Human Demonstration 330
7.4.1 Classification of Task Learning . 331
7.4.2 The PbD Process . 333
7.4.3 System Structure . 335
7.4.4 Sensors for Learning through Demonstration 336
7.5 Models and Representation of Manipulation Tasks . 338
7.5.1 Definitions 338
7.5.2 Task Classes of Manipulations 338
7.5.3 Hierarchical Representation of Tasks 340
7.6 Subgoal Extraction from Human Demonstration . 342
7.6.1 Signal Processing 343
7.6.2 Segmentation Step 343
7.6.3 Grasp Classification 344
7.6.3.1 Static Grasp Classification 344
7.6.3.2 Dynamic Grasp Classification . 346
7.7 Task Mapping and Execution . 348
7.7.1 Event-Driven Architecture . 348
7.7.2 Mapping Tasks onto Robot Manipulation Systems . 350
7.7.3 Human Comments and Advice . 354
7.8 Examples of Interactive Programming 354
7.9 Telepresence and Telerobotics 356
7.9.1 The Concept of Telepresence and Telerobotic Systems 356
7.9.2 Components of a Telerobotic System 359
7.9.3 Telemanipulation for Robot Assistants in Human-Centered Environments 360
7.9.4 Exemplary Teleoperation Applications . 361
7.9.4.1 Using Teleoperated Robots as Remote Sensor Systems 361
7.9.4.2 Controlling and Instructing Robot Assistants via Teleoperation . 362
References . 364
8 Assisted Man-Machine Interaction . 369
8.1 The Concept of User Assistance . 370
8.2 Assistance in Manual Control 372
8.2.1 Needs for Assistance in Manual Control 373
8.2.2 Context of Use in Manual Control . 374
8.2.3 Support of Manual Control Tasks . 379
8.3 Assistance in Man-Machine Dialogs . 386
8.3.1 Needs for Assistance in Dialogs 386
8.3.2 Context of Dialogs . 387
8.3.3 Support of Dialog Tasks 390
8.4 Summary 394
References . 395
A LTI-LIB — A C++ Open Source Computer Vision Library 399
A.1 Installation and Requirements . 400
A.2 Overview 402
A.2.1 Duplication and Replication 402
A.2.2 Serialization . 403
A.2.3 Encapsulation 403
A.2.4 Examples . 403
A.3 Data Structures . 404
A.3.1 Points and Pixels . 405
A.3.2 Vectors and Matrices 406
A.3.3 Image Types . 408
A.4 Functional Objects 409
A.4.1 Functor . 410
A.4.2 Classifier 411
A.5 Handling Images 412
A.5.1 Convolution . 412
A.5.2 Image IO 413
A.5.3 Color Spaces 414
A.5.4 Drawing and Viewing . 416
A.6 Where to Go Next . 418
References . 421
B IMPRESARIO — A GUI for Rapid Prototyping of Image Processing Systems 423
B.1 Requirements, Installation, and Deinstallation . 424
B.2 Basic Components and Operations . 424
B.2.1 Document and Processing Graph 426
B.2.2 Macros . 426
B.2.3 Processing Mode . 429
B.2.4 Viewer 430
B.2.5 Summary and further Reading 431
B.3 Adding Functionality to a Document . 432
B.3.1 Using the Macro Browser 432
B.3.2 Rearranging and Deleting Macros . 434
B.4 Defining the Data Flow with Links . 434
B.4.1 Creating a Link 434
B.4.2 Removing a Link . 436
B.5 Configuring Macros . 436
B.5.1 Image Sequence 437
B.5.2 Video Capture Device . 440
B.5.3 Video Stream 442
B.5.4 Standard Macro Property Window 444
B.6 Advanced Features and Internals of IMPRESARIO 446
B.6.1 Customizing the Graphical User Interface 446
B.6.2 Macros and Dynamic Link Libraries . 448
B.6.3 Settings Dialog and Directories . 449
B.7 IMPRESARIO Examples in this Book . 450
B.8 Extending IMPRESARIO with New Macros 450
B.8.1 Requirements 450
B.8.2 API Installation and further Reading . 451
B.8.3 IMPRESARIO Updates . 451
References . 451
Index
4-neighborhood, 20
8-neighborhood, 20
AAC, 393
accommodation, 266, 271
acoustic alert, 382
acoustic interfaces, 272
Active Appearance Model, 68, 186
active shape model, 87, 186
active steering, 383, 385
active stereo, 268
AdaBoost, 64, 183
adaptive filtering, 384
adjacency
pixel, 20
agent, 349
ALBERT, 363
alerts and commands, 381
analytic approaches, 186
application knowledge, 386
architecture
event-driven, 348
ARMAR, 322
articulatory features, 178
assistance in manual control, 372
assisted interaction, 369
auditory display, 273
augmented reality, 266, 392
auto completion, 394
automatic speech recognition, 141, 154
automation, 384
autonomous systems, 358
background
Gaussian mixture model, 104
median model, 103
modeling/subtraction, 102
background subtraction, 234
shadow reduction, 255
bag-of-words, 182
Bakis topology, 39
Bayes’ rule, 148
Bayesian Belief Networks, 388, 389
Bayesian nets, 182
beliefs, desires, and intentions model, 159
binaural synthesis, 274, 275
black box models, 376
border
computation, 21
length, 23
points, 20
braking augmentation, 383
camera model, 251
cascaded classifier, 66
CAVE, 266
cheremes, 119
circle Hough transformation, 83
class-based n-grams, 181
classification, 29, 110
class, 29
concepts, 29
garbage class, 30
Hidden Markov Models, 37
Markov chain, 37
maximum likelihood, 34
overfitting, 31
phases, 29
rule-based, 33
supervised, 31
training samples, 29
unsupervised, 31
color histogram, 226, 241
color space
RGB, 11
color structure descriptor (CSD), 227
command and control, 153
commanding actions, 325
commenting actions, 325
compactness, 25
conformity with expectations, 391
context of use, 5, 370
control input limiting, 384
control input management, 383
control input modifications, 383
controllability, 391
convergence, 266, 271
conversational maxims, 162
coordinate transformation, 251
cross talk cancellation, 274, 276
crossover model, 373, 376
DAMSL, 163
data entry management, 372, 391, 393
data glove, 336, 363
decoding, 147
Dempster Shafer theory, 388
design goals, 2
dialog context, 391
dialog design, 162
dialog management, 154
dialog system state, 390
dialog systems, 1
dilation, 237
discrete Fourier transform (DFT), 210
display codes and formats, 392
distance sensors, 378
do what I mean, 394
driver modeling, 375
driver state, 374
dual hand manipulation, 338
dynamic grasp classification, 346
dynamic systems, 1
early feature fusion, 186
ECAM, 392
eigenfaces, 183, 185, 201
eigenspace, 78
emotion model, 176
emotion recognition, 175
emotional databases, 177
ensemble construction, 180
erosion, 237
error
active handling, 172
avoidance, 173
identification, 393
isolation, 393
knowledge-based, 173
management, 393
multimodal, 172
passive handling, 172
processing, 173
recognition, 173
rule-based, 173
skill-based, 172
system-specific, 173
technical, 173
tolerance, 391
user-specific, 172
Euclidean distance, 78
event-driven architecture, 348
eye blinks, 375
face localization, 61
face recognition
affecting factors, 194
component-based, 206
geometry based, 200
global/holistic, 200
hybrid, 200
local feature based, 200
face tracking, 67
facial action coding system, 186
facial expression commands, 56
facial expression recognition, 184
facial expressions, 183
feature
area, 23
border length, 23
center of gravity, 23
classification, 29, 110
compactness, 25
derivative, 27
eccentricity, 24
extraction, 12, 105
geometric, 22
moments, 23
normalization, 26, 109
orientation, 25
selection, 32
stable, 22
vector, 8
feature extraction, 178
feature groups, 122, 124
feature selection, 179
filter-based selection, 179
fine manipulation, 338
finite state model, 156
force execution gain, 383
formant, 178
frame-based dialog, 157
freedom of error, 2
function allocation, 385
functional link networks, 376
functionality, 390
functionality shaping, 392
fusion
early signal, 168
late semantic, 169
multimodal integration/fusion, 168
fuzzy logic, 388
Gabor Wavelet Transformation, 83
garbage class, 30
Gabor-wavelet coefficients, 185
Gaussian mixture models, 241
genlocking, 268, 303
geometric feature, see feature, geometric
gesture recognition, 7
goal directed processing, 159
goal extraction, 342
grasp, 324
classification, 344
dynamic, 324, 344
segmentation, 343
static, 324, 343
taxonomy, 348
taxonomy of Cutkosky, 345
gray value image, 11
guiding sleeves, 294
Haar-features, 62
hand gesture commands, 7
hand localization, 14
handover strategies, 385
haptic alerts, 381
haptic interfaces, 279
head pose, 375
head pose estimation, 78
head related impulse response, 272
head related transfer function, 272
head-mounted display, 266, 361
head-up display, 380, 382
Hidden Markov Models, 37, 142
Bakis topology, 39
multidimensional observations, 46
parameter estimation, 44
high inertia systems, 380
histogram
object/background color, 15
skin/non-skin color, 18
holistic approaches, 184
homogeneous coordinates, 251
homogeneous texture descriptor (HTD), 229
human activity, 322
decomposition, 323
direction, 322
operator selection, 323
human advice, 354
human comments, 354
human transfer function, 374
human-computer interaction, 164
humanoid robot, 316
image
gray value, 11
intensity, 11
mask, 11
object probability, 16
preprocessing, 102
representation, 10
skin probability, 19
immersion, 264
immersive display, 266
immersive interface, 361, 362
Impresario, 6
information management, 379, 391
input shaping, 384
integral image, 62
intensity panning, 273
intent recognition, 392
intentions, 370
inter-/intra-gesture variance, 22
interaction, 164
active, 322
passive, 322
robot assistants, 321
interaction activity
classification, 323
commanding actions, 323
commenting actions, 323
performative actions, 323
interaction knowledge, 386
interactive learning, 328
task learning, 330
interactive programming
example, 354
interaural time difference, 272
intervention, 372
intrusiveness, 374
Java3D, 6
kernel density estimation, 242
keyword spotting, 180
kNN classifier, 205
language model, 148, 181
late semantic fusion, 186
latency, 263
LAURON, 361
learning
interactive, 328
learning automaton, 350
learning through demonstration
classification, 326
sensors, 336
skill, 326
task, 327
level of detail rendering, 300
lexical predictions, 393
Lidstone coefficient, 182
limits of maneuverability, 381
line-of-sight, 375
linear regression tracking, 245
linguistic decoder, 148
local regression, 377
magic wand, 363
magnetic tracker, 336
man-machine interaction, 1
man-machine dialogs, 386
man-machine task allocation, 385
manipulation tasks, 338
classes, 338
definition, 338
device handling, 339
tool handling, 339
transport operations, 339
manual control assistance, 379
mapping tasks, 350
movements, 353
coupling fingers, 351
grasps, 351
power grasps, 352
precision grasps, 352
Markov chain, 37
Markov decision process, 158
maximum a-posteriori, 182
maximum likelihood classification, 34, 85
maximum likelihood estimation, 181
mean shift algorithm, 244
mental model, 386
menu hierarchy, 386
menu options, 386
MFCC, 178
mobile robot, 316
modality, 166
morphological operations, 236
motion parallax, 266
motor function, 165
moving platform vibrations, 384
multimodal interaction, 393
multimodality, 4, 166
multiple hypotheses tracking, 107
multimodal interaction, 164
n-grams, 181
natural frequency, 381
natural language generation, 155
natural language understanding, 154
nearest neighbor classifier, 205
noise canceling, 384
normalization
features, 26, 109
notation system, 118
movement-hold-model, 120
object probability image, 16
observation, 29
oculomotor factors, 266
ODETE, 363
on/off automation, 385
operator fatigue, 375
oriented Gaussian derivatives (OGD), 228
overfitting, 31
overlap resolution, 106
Parallel Hidden Markov Models, 122
channel, 122
confluent states, 123
parametric models, 376
particle dynamics, 286
passive stereo, 268
PbD process, 333
abstraction, 333
execution, 334
interpretation, 333
mapping, 334
observation, 333
segmentation, 333
simulation, 334
system structure, 335
perceptibility augmentation, 380
perception, 165
performative actions, 323
person recognition
authentication, 191
identification, 191
verification, 191
phrase spotting, 182
physically based modeling, 286
physiological data, 371
pinhole camera model, 251
pitch, 178
pixel adjacency, 20
plan based classification, 389
plan library, 388, 391
plan recognition, 388
predictive information, 380
predictive scanning, 394
predictive text, 394
predictor display, 381
predictor transfer function, 380
preferences, 370
principal component analysis (PCA), 179, 204
prioritizing, 392
programming
explicit, 358
implicit, 358
programming by demonstration, 330
programming robots via observation, 329
prompting, 392
prosodic features, 178
question and answer, 153
Radial Basis Function (RBF) network, 345
rational conversational agents, 160
recognition
faces, 193
gestures, 7
person, 224
sign language, 99
recording conditions
laboratory, 9
real world, 9
redundant coding, 382
region
description, 20
geometric features, 22
responsive workbench, 271
RGB color space, 11
rigid body dynamics, 288
robot assistants, 315
interaction, 321
learning and teaching, 325
room-mounted display, 267
safety of use, 2
scene graph, 298
scripting, 163
segmentation, 16, 234
threshold, 16
self-descriptiveness, 391
sense, 165
sensitive polygons, 296
sensors for demonstration, 336
data gloves, 336
force sensors, 337
magnetic tracker, 336
motion capturing systems, 336
training center, 337
sequential floating search method, 179
shape model, 186
shutter glasses, 267
sign language recognition, 5, 99
sign-lexicon, 117
signal
transfer latency, 358
signal processing, 343
simplex, 59
single wheel braking, 385
skill
elementary, 331
high-level, 331
innate, 327
learning, 326
skin probability image, 19
skin/non-skin color histogram, 18
slot filling, 157
snap-in mechanism, 297
soft decision fusion, 169, 187
sparse data handling, 158
spatial vision, 265
speech commands, 382
speech communication, 141
speech dialogs, 153
speech recognition, 4
speech synthesis, 155
spoken language dialog systems, 153
static grasp classification, 344
steering gain, 383
stemming, 181
stereopsis, 265
stick shakers, 381
stochastic dialog modeling, 158
stochastic language modeling, 127
bigram models, 127
sign-groups, 128
stopping, 180
subgoal extraction, 342
subunits, 116, 117
classification, 125
enlargement of vocabulary size, 133
estimation of model parameters, 131
training, 128
suitability for individualization, 391
supervised classification, 31
supervised training, 376
supervisory control, 1
support of dialog tasks, 390
Support Vector Machines (SVM), 346
tagging, 163
takeover by automation, 372
task
abstraction level, 331
example methodology, 332
goal, 328
hierarchical representation, 340
internal representation, 332
learning, 327
mapping, 333
mapping and execution, 348
model, 328
subgoal, 328
task learning
classification, 331
taxonomy
multimodal, 167
telepresence, 356
telerobotics, 356
concept, 356
controlling robot assistants, 362
error handling, 360
exception handling, 360
expert programming device, 360
input devices, 359
local reaction, 359
output devices, 359
planning, 359
prediction, 360
system components, 359
template matching, 209
temporal texture templates, 240
texture features
homogeneous texture descriptor (HTD), 229
oriented Gaussian derivatives (OGD), 228
threshold
automatic computation, 17
segmentation, 16
tiled display, 271
timing, 392
tracking, 238
color-based tracking, 246
combined tracking detection, 243
hand, 107
hypotheses, 107
multiple hypotheses, 107
shape-based tracking, 244
state space, 107
tracking following detection, 238
tracking on the ground plane, 250
tracking system, 268
traction control, 384
traffic telematics, 378
training, 144
training center, 337
training phase, 29
training samples, 29
trajectory segmentation, 343
transcription, 117, 118
initial transcription, 129
linguistic-orientated, 118
visually-orientated, 121
transparency, 391
trellis diagram, 42, 146
tremor, 384
tunnel vision, 373
tutoring, 392
unsupervised classification, 31
usability, 2, 163, 390
user activities, 390
user assistance, 5, 370
user expectations, 371
user identity, 370
user observation, 371
user prompting, 371
user state, 370
user state identification, 388
user whereabouts, 390
variance
inter-/intra-gesture, 22
vehicle state, 377
viewer centered projection, 268
virtual magnetism, 296
virtual reality, 4, 263
vision systems, 378
visual commands, 382
visual interfaces, 265
visualization robot intention, 362
Viterbi algorithm, 42, 146
voice quality features, 178
VoiceXML, 163
VR toolkits, 301
warning, 393
wave field synthesis, 274
wearable computing, 390, 394
whereabouts, 370
window function, 178
working memory, 386
workload assessment, 371
wrapper-based selection, 179


The Unzip Password : books-world.net
I hope you benefit from the content of this topic and that you like it.

Link from the World of Books site to download the book Advanced Man-Machine Interaction
Direct link to download the book Advanced Man-Machine Interaction