Computational Models and Applications
Liming Zhang is a Professor of Electronics at Fudan University, where she leads the Image and Intelligence Laboratory. Since the 1980s she has been engaged in biological modeling and its application to engineering, including artificial neural network models, visual models and brain-like robot models. She has published three books in Chinese on artificial neural networks, image coding and intelligent image processing, as well as over 120 papers in the area. Since 2003 she has been studying problems in modeling visual attention and applying it to computer vision, robot vision, object tracking, remote sensing and image quality assessment. She has served as a Senior Visiting Scholar at the University of Notre Dame and the Technical University of Munich.

Weisi Lin is an Associate Professor in the division of computer communications at Nanyang Technological University's School of Computer Engineering. He also serves as Lab Head, Visual Processing, and Acting Department Manager, Media Processing, in the Institute for Infocomm Research. Lin has also participated in research at Shantou University (China), Bath University (UK), the National University of Singapore, the Institute of Microelectronics (Singapore) and the Centre for Signal Processing (Singapore). His research interests include image processing, perceptual modeling, video compression, multimedia communication and computer vision. He holds 10 patents, has written 4 book chapters, and has published over 130 refereed papers in international journals and conferences. He is a Chartered Engineer and a Fellow of the IET. Lin graduated from Zhongshan University, China, with a B.Sc. in Electronics and an M.Sc. in Digital Signal Processing, and from King's College, London University, UK, with a Ph.D. in Computer Vision.
Preface xi

PART I BASIC CONCEPTS AND THEORY 1

1 Introduction to Visual Attention 3
  1.1 The Concept of Visual Attention 3
    1.1.1 Selective Visual Attention 3
    1.1.2 What Areas in a Scene Can Attract Human Attention? 4
    1.1.3 Selective Attention in Visual Processing 5
  1.2 Types of Selective Visual Attention 7
    1.2.1 Pre-attention and Attention 7
    1.2.2 Bottom-up Attention and Top-down Attention 8
    1.2.3 Parallel and Serial Processing 10
    1.2.4 Overt and Covert Attention 11
  1.3 Change Blindness and Inhibition of Return 11
    1.3.1 Change Blindness 11
    1.3.2 Inhibition of Return 12
  1.4 Visual Attention Model Development 12
    1.4.1 First Phase: Biological Studies 13
    1.4.2 Second Phase: Computational Models 15
    1.4.3 Third Phase: Visual Attention Applications 17
  1.5 Scope of This Book 18
  References 19

2 Background of Visual Attention - Theory and Experiments 25
  2.1 Human Visual System (HVS) 25
    2.1.1 Information Separation 26
    2.1.2 Eye Movement and Involved Brain Regions 28
    2.1.3 Visual Attention Processing in the Brain 29
  2.2 Feature Integration Theory (FIT) of Visual Attention 29
    2.2.1 Feature Integration Hypothesis 30
    2.2.2 Confirmation by Visual Search Experiments 31
  2.3 Guided Search Theory 39
    2.3.1 Experiments: Parallel Process Guides Serial Search 40
    2.3.2 Guided Search Model (GS1) 42
    2.3.3 Revised Guided Search Model (GS2) 43
    2.3.4 Other Modified Versions (GS3, GS4) 46
  2.4 Binding Theory Based on Oscillatory Synchrony 47
    2.4.1 Models Based on Oscillatory Synchrony 49
    2.4.2 Visual Attention of Neuronal Oscillatory Model 54
  2.5 Competition, Normalization and Whitening 56
    2.5.1 Competition and Visual Attention 56
    2.5.2 Normalization in Primary Visual Cortex 57
    2.5.3 Whitening in Retina Processing 59
  2.6 Statistical Signal Processing 60
    2.6.1 A Signal Detection Approach for Visual Attention 61
    2.6.2 Estimation Theory and Visual Attention 62
    2.6.3 Information Theory for Visual Attention 63
  References 67

PART II COMPUTATIONAL ATTENTION MODELS 73

3 Computational Models in the Spatial Domain 75
  3.1 Baseline Saliency Model for Images 75
    3.1.1 Image Feature Pyramids 76
    3.1.2 Centre-Surround Differences 79
    3.1.3 Across-scale and Across-feature Combination 80
  3.2 Modelling for Videos 81
    3.2.1 Extension of BS Model for Video 81
    3.2.2 Motion Feature Detection 81
    3.2.3 Integration for Various Features 83
  3.3 Variations and More Details of BS Model 84
    3.3.1 Review of the Models with Variations 85
    3.3.2 WTA and IoR Processing 87
    3.3.3 Further Discussion 90
  3.4 Graph-based Visual Saliency 91
    3.4.1 Computation of the Activation Map 92
    3.4.2 Normalization of the Activation Map 94
  3.5 Attention Modelling Based on Information Maximizing 95
    3.5.1 The Core of the AIM Model 96
    3.5.2 Computation and Illustration of Model 97
  3.6 Discriminant Saliency Based on Centre-Surround 101
    3.6.1 Discriminant Criterion Defined on Centre-Surround 102
    3.6.2 Mutual Information Estimation 103
    3.6.3 Algorithm and Block Diagram of Bottom-up DISC Model 106
  3.7 Saliency Using More Comprehensive Statistics 107
    3.7.1 The Saliency in Bayesian Framework 108
    3.7.2 Algorithm of SUN Model 110
  3.8 Saliency Based on Bayesian Surprise 113
    3.8.1 Bayesian Surprise 113
    3.8.2 Saliency Computation Based on Surprise Theory 114
  3.9 Summary 116
  References 117

4 Fast Bottom-up Computational Models in the Spectral Domain 119
  4.1 Frequency Spectrum of Images 120
    4.1.1 Fourier Transform of Images 120
    4.1.2 Properties of Amplitude Spectrum 121
    4.1.3 Properties of the Phase Spectrum 123
  4.2 Spectral Residual Approach 123
    4.2.1 Idea of the Spectral Residual Model 124
    4.2.2 Realization of Spectral Residual Model 125
    4.2.3 Performance of SR Approach 126
  4.3 Phase Fourier Transform Approach 127
    4.3.1 Introduction to the Phase Fourier Transform 127
    4.3.2 Phase Fourier Transform Approach 128
    4.3.3 Results and Discussion 129
  4.4 Phase Sp