Selective Visual Attention
Computational Models and Applications
Part of the IEEE Press series
1 371 kr
Temporarily out of stock
Description
Product information
- Publication date: 2013-05-17
- Dimensions: 163 x 241 x 23 mm
- Weight: 658 g
- Format: Hardcover
- Language: English
- Series: IEEE Press
- Pages: 352
- Publisher: John Wiley & Sons Inc
- ISBN: 9780470828120
More about the authors
Liming Zhang is a Professor of Electronics at Fudan University, where she leads the Image and Intelligence Laboratory. Since the 1980s she has been engaged in biological modeling and its application to engineering, including artificial neural network models, visual models and brain-like robot models. She has published three books in Chinese on artificial neural networks, image coding and intelligent image processing, as well as over 120 papers in the area. Since 2003 she has studied the modeling of visual attention and its application in computer vision, robot vision, object tracking, remote sensing and image quality assessment. She has served as a Senior Visiting Scholar at the University of Notre Dame and the Technical University of Munich.

Weisi Lin is an Associate Professor in the Division of Computer Communications at Nanyang Technological University's School of Computer Engineering. He also serves as Lab Head, Visual Processing, and Acting Department Manager, Media Processing, at the Institute for Infocomm Research. Lin has also participated in research at Shantou University (China), Bath University (UK), the National University of Singapore, the Institute of Microelectronics (Singapore) and the Centre for Signal Processing (Singapore). His research interests include image processing, perceptual modeling, video compression, multimedia communication and computer vision. He holds 10 patents, has written 4 book chapters, and has published over 130 refereed papers in international journals and conferences. He is a Chartered Engineer and a Fellow of the IET. Lin graduated from Zhongshan University, China, with a B.Sc in Electronics and an M.Sc in Digital Signal Processing, and from King's College, University of London, UK, with a Ph.D in Computer Vision.
Table of contents
- Preface
- PART I BASIC CONCEPTS AND THEORY
- 1 Introduction to Visual Attention
- 1.1 The Concept of Visual Attention
- 1.1.1 Selective Visual Attention
- 1.1.2 What Areas in a Scene Can Attract Human Attention?
- 1.1.3 Selective Attention in Visual Processing
- 1.2 Types of Selective Visual Attention
- 1.2.1 Pre-attention and Attention
- 1.2.2 Bottom-up Attention and Top-down Attention
- 1.2.3 Parallel and Serial Processing
- 1.2.4 Overt and Covert Attention
- 1.3 Change Blindness and Inhibition of Return
- 1.3.1 Change Blindness
- 1.3.2 Inhibition of Return
- 1.4 Visual Attention Model Development
- 1.4.1 First Phase: Biological Studies
- 1.4.2 Second Phase: Computational Models
- 1.4.3 Third Phase: Visual Attention Applications
- 1.5 Scope of This Book
- References
- 2 Background of Visual Attention – Theory and Experiments
- 2.1 Human Visual System (HVS)
- 2.1.1 Information Separation
- 2.1.2 Eye Movement and Involved Brain Regions
- 2.1.3 Visual Attention Processing in the Brain
- 2.2 Feature Integration Theory (FIT) of Visual Attention
- 2.2.1 Feature Integration Hypothesis
- 2.2.2 Confirmation by Visual Search Experiments
- 2.3 Guided Search Theory
- 2.3.1 Experiments: Parallel Process Guides Serial Search
- 2.3.2 Guided Search Model (GS1)
- 2.3.3 Revised Guided Search Model (GS2)
- 2.3.4 Other Modified Versions (GS3, GS4)
- 2.4 Binding Theory Based on Oscillatory Synchrony
- 2.4.1 Models Based on Oscillatory Synchrony
- 2.4.2 Visual Attention of Neuronal Oscillatory Model
- 2.5 Competition, Normalization and Whitening
- 2.5.1 Competition and Visual Attention
- 2.5.2 Normalization in Primary Visual Cortex
- 2.5.3 Whitening in Retina Processing
- 2.6 Statistical Signal Processing
- 2.6.1 A Signal Detection Approach for Visual Attention
- 2.6.2 Estimation Theory and Visual Attention
- 2.6.3 Information Theory for Visual Attention
- References
- PART II COMPUTATIONAL ATTENTION MODELS
- 3 Computational Models in the Spatial Domain
- 3.1 Baseline Saliency Model for Images
- 3.1.1 Image Feature Pyramids
- 3.1.2 Centre–Surround Differences
- 3.1.3 Across-scale and Across-feature Combination
- 3.2 Modelling for Videos
- 3.2.1 Extension of BS Model for Video
- 3.2.2 Motion Feature Detection
- 3.2.3 Integration for Various Features
- 3.3 Variations and More Details of BS Model
- 3.3.1 Review of the Models with Variations
- 3.3.2 WTA and IoR Processing
- 3.3.3 Further Discussion
- 3.4 Graph-based Visual Saliency
- 3.4.1 Computation of the Activation Map
- 3.4.2 Normalization of the Activation Map
- 3.5 Attention Modelling Based on Information Maximizing
- 3.5.1 The Core of the AIM Model
- 3.5.2 Computation and Illustration of Model
- 3.6 Discriminant Saliency Based on Centre–Surround
- 3.6.1 Discriminant Criterion Defined on Centre–Surround
- 3.6.2 Mutual Information Estimation
- 3.6.3 Algorithm and Block Diagram of Bottom-up DISC Model
- 3.7 Saliency Using More Comprehensive Statistics
- 3.7.1 The Saliency in Bayesian Framework
- 3.7.2 Algorithm of SUN Model
- 3.8 Saliency Based on Bayesian Surprise
- 3.8.1 Bayesian Surprise
- 3.8.2 Saliency Computation Based on Surprise Theory
- 3.9 Summary
- References
- 4 Fast Bottom-up Computational Models in the Spectral Domain
- 4.1 Frequency Spectrum of Images
- 4.1.1 Fourier Transform of Images
- 4.1.2 Properties of Amplitude Spectrum
- 4.1.3 Properties of the Phase Spectrum
- 4.2 Spectral Residual Approach
- 4.2.1 Idea of the Spectral Residual Model
- 4.2.2 Realization of Spectral Residual Model
- 4.2.3 Performance of SR Approach
- 4.3 Phase Fourier Transform Approach
- 4.3.1 Introduction to the Phase Fourier Transform
- 4.3.2 Phase Fourier Transform Approach
- 4.3.3 Results and Discussion
- 4.4 Phase Spectrum of the Quaternion Fourier Transform Approach
- 4.4.1 Biological Plausibility for Multichannel Representation
- 4.4.2 Quaternion and Its Properties
- 4.4.3 Phase Spectrum of Quaternion Fourier Transform (PQFT)
- 4.4.4 Results Comparison
- 4.4.5 Dynamic Saliency Detection of PQFT
- 4.5 Pulsed Discrete Cosine Transform Approach
- 4.5.1 Approach of Pulsed Principal Components Analysis
- 4.5.2 Approach of the Pulsed Discrete Cosine Transform
- 4.5.3 Multichannel PCT Model
- 4.6 Divisive Normalization Model in the Frequency Domain
- 4.6.1 Equivalent Processes with a Spatial Model in the Frequency Domain
- 4.6.2 FDN Algorithm
- 4.6.3 Patch FDN
- 4.7 Amplitude Spectrum of Quaternion Fourier Transform (AQFT) Approach
- 4.7.1 Saliency Value for Each Image Patch
- 4.7.2 The Amplitude Spectrum for Each Image Patch
- 4.7.3 Differences between Image Patches and their Weighting to Saliency Value
- 4.7.4 Patch Size and Scale for Final Saliency Value
- 4.8 Modelling from a Bit-stream
- 4.8.1 Feature Extraction from a JPEG Bit-stream
- 4.8.2 Saliency Detection in the Compressed Domain
- 4.9 Further Discussions of Frequency Domain Approach
- References
- 5 Computational Models for Top-down Visual Attention
- 5.1 Attention of Population-based Inference
- 5.1.1 Features in Population Codes
- 5.1.2 Initial Conspicuity Values
- 5.1.3 Updating and Transformation of Conspicuity Values
- 5.2 Hierarchical Object Search with Top-down Instructions
- 5.2.1 Perceptual Grouping
- 5.2.2 Grouping-based Salience from Bottom-up Information
- 5.2.3 Top-down Instructions and Integrated Competition
- 5.2.4 Hierarchical Selection from Top-down Instruction
- 5.3 Computational Model under Top-down Influence
- 5.3.1 Bottom-up Low-level Feature Computation
- 5.3.2 Representation of Prior Knowledge
- 5.3.3 Saliency Map Computation using Object Representation
- 5.3.4 Using Attention for Object Recognition
- 5.3.5 Implementation
- 5.3.6 Optimizing the Selection of Top-down Bias
- 5.4 Attention with Memory of Learning and Amnesic Function
- 5.4.1 Visual Memory: Amnesic IHDR Tree
- 5.4.2 Competition Neural Network Under the Guidance of Amnesic IHDR
- 5.5 Top-down Computation in the Visual Attention System: VOCUS
- 5.5.1 Bottom-up Features and Bottom-up Saliency Map
- 5.5.2 Top-down Weights and Top-down Saliency Map
- 5.5.3 Global Saliency Map
- 5.6 Hybrid Model of Bottom-up Saliency with Top-down Attention Process
- 5.6.1 Computation of the Bottom-up Saliency Map
- 5.6.2 Learning of Fuzzy ART Networks and Top-down Decision
- 5.7 Top-down Modelling in the Bayesian Framework
- 5.7.1 Review of Basic Framework
- 5.7.2 The Estimation of Conditional Probability Density
- 5.8 Summary
- References
- 6 Validation and Evaluation for Visual Attention Models
- 6.1 Simple Man-made Visual Patterns
- 6.2 Human-labelled Images
- 6.3 Eye-tracking Data
- 6.4 Quantitative Evaluation
- 6.4.1 Some Basic Measures
- 6.4.2 ROC Curve and AUC Score
- 6.4.3 Inter-subject ROC Area
- 6.5 Quantifying the Performance of a Saliency Model to Human Eye Movement in Static and Dynamic Scenes
- 6.6 Spearman's Rank Order Correlation with Visual Conspicuity
- References
- PART III APPLICATIONS OF ATTENTION SELECTION MODELS
- 7 Applications in Computer Vision, Image Retrieval and Robotics
- 7.1 Object Detection and Recognition in Computer Vision
- 7.1.1 Basic Concepts
- 7.1.2 Feature Extraction
- 7.1.3 Object Detection and Classification
- 7.2 Attention Based Object Detection and Recognition in a Natural Scene
- 7.2.1 Object Detection Combined with Bottom-up Model
- 7.2.2 Object Detection based on Attention Elicitation
- 7.2.3 Object Detection with a Training Set
- 7.2.4 Object Recognition Combined with Bottom-up Attention
- 7.3 Object Detection and Recognition in Satellite Imagery
- 7.3.1 Ship Detection based on Visual Attention
- 7.3.2 Airport Detection in a Land Region
- 7.3.3 Saliency and Gist Feature for Target Detection
- 7.4 Image Retrieval via Visual Attention
- 7.4.1 Elements of General Image Retrieval
- 7.4.2 Attention Based Image Retrieval
- 7.5 Applications of Visual Attention in Robots
- 7.5.1 Robot Self-localization
- 7.5.2 Visual SLAM System with Attention
- 7.5.3 Moving Object Detection using Visual Attention
- 7.6 Summary
- References
- 8 Application of Attention Models in Image Processing
- 8.1 Attention-modulated Just Noticeable Difference
- 8.1.1 JND Modelling
- 8.1.2 Modulation via Non-linear Mapping
- 8.1.3 Modulation via Foveation
- 8.2 Use of Visual Attention in Quality Assessment
- 8.2.1 Image/Video Quality Assessment
- 8.2.2 Weighted Quality Assessment by Salient Values
- 8.2.3 Weighting through Attention-modulated JND Map
- 8.2.4 Weighting through Fixation
- 8.2.5 Weighting through Quality Distribution
- 8.3 Applications in Image/Video Coding
- 8.3.1 Image and Video Coding
- 8.3.2 Attention-modulated JND based Coding
- 8.3.3 Visual Attention Map based Coding
- 8.4 Visual Attention for Image Retargeting
- 8.4.1 Literature Review for Image Retargeting
- 8.4.2 Saliency-based Image Retargeting in the Compressed Domain
- 8.5 Application in Compressive Sampling
- 8.5.1 Compressive Sampling
- 8.5.2 Compressive Sampling via Visual Attention
- 8.6 Summary
- References
- PART IV SUMMARY
- 9 Summary, Further Discussions and Conclusions
- 9.1 Summary
- 9.1.1 Research Results from Physiology and Anatomy
- 9.1.2 Research from Psychology and Neuroscience
- 9.1.3 Theory of Statistical Signal Processing
- 9.1.4 Computational Visual Attention Modelling
- 9.1.5 Applications of Visual Attention Models
- 9.2 Further Discussions
- 9.2.1 Interaction between Top-down Control and Bottom-up Processing in Visual Search
- 9.2.2 How to Deploy Visual Attention in the Brain?
- 9.2.3 Role of Memory in Visual Attention
- 9.2.4 Mechanism of Visual Attention in the Brain
- 9.2.5 Covert Visual Attention
- 9.2.6 Saliency of Large Smooth Objects
- 9.2.7 Invariable Feature Extraction
- 9.2.8 Role of Visual Attention Models in Applications
- 9.3 Conclusions
- References
- Index
More from the same author
Educational Innovation Through Technology
Qingtang Liu, Jing Lei, Liming Zhang, Yantao Wei
880 kr
More from the same series
Grid Converters for Photovoltaic and Wind Power Systems
Remus Teodorescu, Marco Liserre, Pedro Rodriguez
1 471 kr