Nonnegative Matrix and Tensor Factorizations (hardback)
Format
Hardback
Language
English
Number of pages
504
Publication date
2009-09-11
Edition
1
Publisher
John Wiley & Sons Inc
Contributors
Zdunek, Rafal / Phan, Anh Huy
Illustrations
Illustrations
Dimensions
249 x 175 x 30 mm
Weight
1226 g
Number of components
1
ISBN
9780470746660

Nonnegative Matrix and Tensor Factorizations

Applications to Exploratory Multi-way Data Analysis and Blind Source Separation

Hardback, English, 2009-09-11
1762 kr
This book provides a broad survey of models and efficient algorithms for Nonnegative Matrix Factorization (NMF), including NMF's various extensions and modifications, especially Nonnegative Tensor Factorizations (NTF) and Nonnegative Tucker Decompositions (NTD). NMF/NTF and their extensions are increasingly used as tools in signal and image processing and in data analysis, having garnered interest for their ability to provide new insights and relevant information about the complex latent relationships in experimental data sets. NMF can often provide meaningful components with physical interpretations; in bioinformatics, for example, NMF and its extensions have been successfully applied to gene expression, sequence analysis, the functional characterization of genes, clustering, and text mining. The authors therefore focus on the algorithms that are most useful in practice: the fastest, the most robust, and those best suited to large-scale models.
Key features:
  • Acts as a single-source reference guide to NMF, collating information that is widely dispersed in the current literature, including the authors' own recently developed techniques in the subject area.
  • Uses generalized cost functions such as Bregman, Alpha and Beta divergences to present practical implementations of several types of robust algorithms, in particular Multiplicative, Alternating Least Squares, Projected Gradient and Quasi-Newton algorithms.
  • Provides a comparative analysis of the different methods in order to identify approximation error and complexity.
  • Includes pseudo-code and optimized MATLAB source code for almost all algorithms presented in the book.
The increasing interest in nonnegative matrix and tensor factorizations, as well as decompositions and sparse representation of data, will ensure that this book is essential reading for engineers, scientists, researchers, industry practitioners and graduate students across signal and image processing; neuroscience; data mining and data analysis; computer science; bioinformatics; speech processing; biomedical engineering; and multimedia.
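The basic NMF model the blurb refers to factorizes a nonnegative data matrix Y into two nonnegative factors, Y ≈ AX. As a minimal illustration of the multiplicative-update family of algorithms the book covers (this is a hedged sketch using the classic Lee-Seung updates for the squared Euclidean distance, not the authors' optimized MATLAB code; the function name and parameters are illustrative):

```python
import numpy as np

def nmf_multiplicative(Y, rank, n_iter=200, eps=1e-9, seed=0):
    """Approximate Y (nonnegative, m x n) as A @ X with A (m x rank)
    and X (rank x n) nonnegative, via Lee-Seung multiplicative updates
    minimizing the squared Euclidean distance ||Y - A X||_F^2."""
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    A = rng.random((m, rank)) + eps   # random positive initialization
    X = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        # update X with A fixed; eps guards against division by zero
        X *= (A.T @ Y) / (A.T @ A @ X + eps)
        # update A with X fixed
        A *= (Y @ X.T) / (A @ (X @ X.T) + eps)
    return A, X

# demo: factor a small random nonnegative matrix
Y = np.random.default_rng(1).random((20, 15))
A, X = nmf_multiplicative(Y, rank=4)
err = np.linalg.norm(Y - A @ X) / np.linalg.norm(Y)
```

Because the updates only multiply by nonnegative ratios, the factors stay nonnegative throughout, which is the defining property of this algorithm family.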



Reviews in the media

"[A] focus on the algorithms that are most useful in practice and aim to derive and implement, in MATLAB, efficient and simple iterative algorithms that work with real-world data." (Book News, December 2009)

Other information

Andrzej Cichocki, Laboratory for Advanced Brain Signal Processing, Riken Brain Science Institute, Japan. Professor Cichocki is head of the Laboratory for Advanced Brain Signal Processing. He has co-authored more than one hundred technical papers and is the author of three previous books, two of which are published by Wiley. His most recent book is Adaptive Blind Signal and Image Processing, written with Professor Shun-ichi Amari (Wiley, 2002). He is Editor-in-Chief of the International Journal of Computational Intelligence and Neuroscience and an Associate Editor of IEEE Transactions on Neural Networks.

Shun-ichi Amari, Laboratory for Mathematical Neuroscience, Riken Brain Science Institute, Japan. Professor Amari is head of the Laboratory for Mathematical Neuroscience, as well as vice-president of the Riken Brain Science Institute. He serves on the editorial boards of numerous journals, including Applied Intelligence, Journal of Mathematical Systems and Control, and Annals of the Institute of Statistical Mathematics. He is the co-author of three books and more than three hundred technical papers.

Rafal Zdunek, Institute of Telecommunications, Teleinformatics and Acoustics, Wroclaw University of Technology, Poland. Associate Professor Zdunek is currently a lecturer at the Wroclaw University of Technology and until recently was a visiting research scientist at the Riken Brain Science Institute. He is a member of the IEEE Signal Processing and Communications Societies, and a member of the Society of Polish Electrical Engineers. Dr Zdunek has guest co-edited, with Professor Cichocki among others, a special issue on Advances in Non-negative Matrix and Tensor Factorization in the journal Computational Intelligence and Neuroscience (published May 2008).

Anh Huy Phan, Laboratory for Advanced Brain Signal Processing, Riken Brain Science Institute, Japan. Anh Huy Phan is a researcher at the Laboratory for Advanced Brain Signal Processing at the Riken Brain Science Institute.

Table of contents

Preface.
Acknowledgments.
Glossary of Symbols and Abbreviations.

1 Introduction: Problem Statements and Models.
1.1 Blind Source Separation and Linear Generalized Component Analysis.
1.2 Matrix Factorization Models with Nonnegativity and Sparsity Constraints.
1.2.1 Why Nonnegativity and Sparsity Constraints?
1.2.2 Basic NMF Model.
1.2.3 Symmetric NMF.
1.2.4 Semi-Orthogonal NMF.
1.2.5 Semi-NMF and Nonnegative Factorization of Arbitrary Matrix.
1.2.6 Three-factor NMF.
1.2.7 NMF with Offset (Affine NMF).
1.2.8 Multi-layer NMF.
1.2.9 Simultaneous NMF.
1.2.10 Projective and Convex NMF.
1.2.11 Kernel NMF.
1.2.12 Convolutive NMF.
1.2.13 Overlapping NMF.
1.3 Basic Approaches to Estimate Parameters of Standard NMF.
1.3.1 Large-scale NMF.
1.3.2 Non-uniqueness of NMF and Techniques to Alleviate the Ambiguity Problem.
1.3.3 Initialization of NMF.
1.3.4 Stopping Criteria.
1.4 Tensor Properties and Basis of Tensor Algebra.
1.4.1 Tensors (Multi-way Arrays): Preliminaries.
1.4.2 Subarrays, Tubes and Slices.
1.4.3 Unfolding (Matricization).
1.4.4 Vectorization.
1.4.5 Outer, Kronecker, Khatri-Rao and Hadamard Products.
1.4.6 Mode-n Multiplication of Tensor by Matrix and Tensor by Vector, Contracted Tensor Product.
1.4.7 Special Forms of Tensors.
1.5 Tensor Decompositions and Factorizations.
1.5.1 Why Multi-way Array Decompositions and Factorizations?
1.5.2 PARAFAC and Nonnegative Tensor Factorization.
1.5.3 NTF1 Model.
1.5.4 NTF2 Model.
1.5.5 Individual Differences in Scaling (INDSCAL) and Implicit Slice Canonical Decomposition Model (IMCAND).
1.5.6 Shifted PARAFAC and Convolutive NTF.
1.5.7 Nonnegative Tucker Decompositions.
1.5.8 Block Component Decompositions.
1.5.9 Block-Oriented Decompositions.
1.5.10 PARATUCK2 and DEDICOM Models.
1.5.11 Hierarchical Tensor Decomposition.
1.6 Discussion and Conclusions.

2 Similarity Measures and Generalized Divergences.
2.1 Error-induced Distance and Robust Regression Techniques.
2.2 Robust Estimation.
2.3 Csiszár Divergences.
2.4 Bregman Divergence.
2.4.1 Bregman Matrix Divergences.
2.5 Alpha-Divergences.
2.5.1 Asymmetric Alpha-Divergences.
2.5.2 Symmetric Alpha-Divergences.
2.6 Beta-Divergences.
2.7 Gamma-Divergences.
2.8 Divergences Derived from Tsallis and Rényi Entropy.
2.8.1 Concluding Remarks.

3 Multiplicative Iterative Algorithms for NMF with Sparsity Constraints.
3.1 Extended ISRA and EMML Algorithms: Regularization and Sparsity.
3.1.1 Multiplicative NMF Algorithms Based on the Squared Euclidean Distance.
3.1.2 Multiplicative NMF Algorithms Based on Kullback-Leibler I-Divergence.
3.2 Multiplicative Algorithms Based on Alpha-Divergence.
3.2.1 Multiplicative Alpha NMF Algorithm.
3.2.2 Generalized Multiplicative Alpha NMF Algorithms.
3.3 Alternating SMART: Simultaneous Multiplicative Algebraic Reconstruction Technique.
3.3.1 Alpha SMART Algorithm.
3.3.2 Generalized SMART Algorithms.
3.4 Multiplicative NMF Algorithms Based on Beta-Divergence.
3.4.1 Multiplicative Beta NMF Algorithm.
3.4.2 Multiplicative Algorithm Based on the Itakura-Saito Distance.
3.4.3 Generalized Multiplicative Beta Algorithm for NMF.
3.5 Algorithms for Semi-orthogonal NMF and Orthogonal Three-Factor NMF.
3.6 Multiplicative Algorithms for Affine NMF.
3.7 Multiplicative Algorithms for Convolutive NMF.
3.7.1 Multiplicative Algorithm for Convolutive NMF Based on Alpha-Divergence.
3.7.2 Multiplicative Algorithm for Convolutive NMF Based on Beta-Divergence.
3.7.3 Efficient Implementation of CNMF Algorithm.
3.8 Simulation Examples for Standard NMF.
3.9 Examples for Affine NMF.
3.10 Music Analysis and Decomposition Using Convolutive NMF.
3.11 Discussion and Conclusions.

4 Alternating Least Squares and Related Algorithms for NMF and SCA Problems.
4.1 Standard ALS Algorithm.
4.1.1 Multiple Linear Regression: Vectorized Version of ALS Update Formulas.
4.1.2 Weig