Jorma Rissanen - Books
Showing all books by the author Jorma Rissanen. Shop with free shipping and fast delivery.
6 products
536 kr
Ships within 10-15 business days
No statistical model is "true" or "false," "right" or "wrong"; models simply have varying performance, which can be assessed. The main theme of this book is to teach modeling based on the principle that the objective is to extract from data the information that can be learned with suggested classes of probability models. The intuitive and fundamental concepts of complexity, learnable information, and noise are formalized, which provides a firm information-theoretic foundation for statistical modeling. Inspired by Kolmogorov's structure function in the algorithmic theory of complexity, this is accomplished by finding the shortest code length, called the stochastic complexity, with which the data can be encoded when advantage is taken of the models in a suggested class, which amounts to the MDL (Minimum Description Length) principle. The complexity, in turn, breaks up into the shortest code length for the optimal model in a set of models that can be optimally distinguished from the given data, and the rest, which defines "noise" as the incompressible part of the data without useful information. Such a view of the modeling problem permits a unified treatment of any type of parameters, their number, and even their structure. Since only optimally distinguished models are worthy of testing, we get a logically sound and straightforward treatment of hypothesis testing, in which for the first time the confidence in the test result can be assessed. Although the prerequisites include only basic probability calculus and statistics, a moderate level of mathematical proficiency is beneficial. This different and logically unassailable view of statistical modeling should provide excellent grounds for further research and suggest topics for graduate students in all fields of modern engineering, including signal and image processing, bioinformatics, pattern recognition, and machine learning.
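The MDL idea in the blurb can be sketched with the standard asymptotic two-part code length for regression, (n/2) log(RSS/n) + (k/2) log n, a well-known approximation to the stochastic complexity; the shortest total code length selects the model order. The polynomial setup, candidate degrees, and noise level below are hypothetical, chosen only for illustration:

```python
import numpy as np

# Minimal MDL sketch: choose a polynomial degree by minimizing an
# asymptotic two-part code length (in nats),
#   L(data, model) ~= (n/2) * log(RSS/n) + (k/2) * log(n),
# where k is the number of fitted parameters and RSS the residual sum of squares.

rng = np.random.default_rng(0)
n = 100
x = np.linspace(-1, 1, n)
# Hypothetical data: a quadratic signal plus Gaussian noise.
y = 1.0 - 2.0 * x + 0.5 * x**2 + rng.normal(scale=0.1, size=n)

def code_length(degree):
    coeffs = np.polyfit(x, y, degree)
    rss = np.sum((np.polyval(coeffs, x) - y) ** 2)
    k = degree + 1
    return 0.5 * n * np.log(rss / n) + 0.5 * k * np.log(n)

# Underfitting inflates the data term; overfitting inflates the parameter term.
best = min(range(8), key=code_length)
print(best)  # typically recovers the true degree 2 at this noise level
```

Note how models with different numbers of parameters are compared on a single scale, bits (here nats) of total description length, rather than by raw fit alone.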
Volume 107 - IMA Volumes in Mathematics and its Applications
Mathematics of Information Coding, Extraction and Distribution
Hardcover, English, 1998
1 096 kr
Ships within 10-15 business days
High performance computing consumes and generates vast amounts of data, and the storage, retrieval, and transmission of this data are major obstacles to effective use of computing power. Challenges inherent in all of these operations are security, speed, reliability, authentication and reproducibility. This workshop focused on a wide variety of technical results aimed at meeting these challenges. Topics ranging from the mathematics of coding theory to the practicalities of copyright preservation for Internet resources drew spirited discussion and interaction among experts in diverse but related fields. We hope this volume contributes to continuing this dialogue.
1 694 kr
Ships within 7-10 business days
This book presents a comprehensive and consistent theory of estimation. The framework described leads naturally to a generalized maximum capacity estimator. This approach allows the optimal estimation of real-valued parameters, their number and intervals, as well as providing common ground for explaining the power of these estimators. Beginning with a review of coding and the key properties of information, the author goes on to discuss the techniques of estimation and develops the generalized maximum capacity estimator, based on a new form of Shannon's mutual information and channel capacity. Applications of this powerful technique in hypothesis testing and denoising are described in detail. Offering an original and thought-provoking perspective on estimation theory, Jorma Rissanen's book is of interest to graduate students and researchers in the fields of information theory, probability and statistics, econometrics and finance.
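As background for the channel-capacity theme of this book (the sketch below is classical Shannon capacity, not Rissanen's generalized maximum capacity estimator itself): for a binary symmetric channel, the capacity 1 - H2(p) is the maximum of the mutual information over input distributions, which can be checked numerically. The crossover probability p = 0.1 is an arbitrary choice for illustration:

```python
import numpy as np

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def mutual_information(q, p):
    """I(X; Y) for input P(X=1) = q over a binary symmetric channel
    with crossover probability p."""
    p_y1 = q * (1 - p) + (1 - q) * p   # output distribution P(Y=1)
    return h2(p_y1) - h2(p)            # I = H(Y) - H(Y|X)

p = 0.1
capacity_closed_form = 1 - h2(p)
# Capacity = max over input distributions of the mutual information;
# for this symmetric channel the maximum sits at the uniform input q = 1/2.
capacity_numeric = max(mutual_information(q, p) for q in np.linspace(0, 1, 1001))
print(capacity_closed_form, capacity_numeric)
```

The grid search and the closed form agree, confirming that the mutual-information surface peaks at the uniform input for this channel.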
536 kr
Ships within 10-15 business days
Volume 107 - IMA Volumes in Mathematics and its Applications
Mathematics of Information Coding, Extraction and Distribution
Paperback, English, 2012
1 064 kr
Ships within 10-15 business days
Volume 15 - World Scientific Series In Computer Science
Stochastic Complexity In Statistical Inquiry
Paperback, English, 1989
618 kr
Temporarily out of stock
This book describes how model selection and statistical inference can be founded on the shortest code length for the observed data, called the stochastic complexity. This generalization of the algorithmic complexity not only offers an objective view of statistics, where no prejudiced assumptions of "true" data-generating distributions are needed, but it also in one stroke leads to calculable expressions in a range of situations of practical interest and links very closely with mainstream statistical theory. The search for the smallest stochastic complexity extends the classical maximum likelihood technique to a new global one, in which models can be compared regardless of their numbers of parameters. The result is a natural and far-reaching extension of the traditional theory of estimation, where the Fisher information is replaced by the stochastic complexity and the Cramér-Rao inequality by an extension of the Shannon-Kullback inequality. Ideas are illustrated with applications from parametric and non-parametric regression, density and spectrum estimation, time series, hypothesis testing, contingency tables, and data compression.
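The claim that models can be compared regardless of their numbers of parameters can be sketched with the standard asymptotic (k/2) log2 n parameter cost in bits, so that a zero-parameter and a one-parameter model for a binary string compete on the same code-length scale. The sample strings below are made up for illustration:

```python
import math

# Hedged illustration (not taken from the book): compare two models for a
# binary string by total code length in bits. The (1/2) * log2(n) parameter
# cost is the standard asymptotic approximation to the penalty in the
# stochastic complexity.

def code_length_fair(bits):
    # Fair-coin model: no parameters, exactly 1 bit per symbol.
    return float(len(bits))

def code_length_bernoulli(bits):
    # Maximum-likelihood Bernoulli model plus its parameter cost.
    n, k = len(bits), sum(bits)
    theta = k / n
    if theta in (0.0, 1.0):
        data_bits = 0.0
    else:
        data_bits = -(k * math.log2(theta) + (n - k) * math.log2(1 - theta))
    return data_bits + 0.5 * math.log2(n)

biased = [1] * 80 + [0] * 20   # clearly biased sample
balanced = [1, 0] * 50         # roughly fair sample

print(code_length_bernoulli(biased), code_length_fair(biased))
print(code_length_bernoulli(balanced), code_length_fair(balanced))
```

On the biased string the one-parameter model pays its parameter cost and still compresses better; on the balanced string the extra parameter buys nothing and the zero-parameter model wins, which is the code-length analogue of preferring the simpler hypothesis.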