James E. Gentle – author
Showing all books by the author James E. Gentle. Shop with free shipping and fast delivery.
15 products
1 395 kr
Ships within 10-15 business days
Monte Carlo simulation has become one of the most important tools in all fields of science. Simulation methodology relies on a good source of numbers that appear to be random. These "pseudorandom" numbers must pass statistical tests just as random samples would. Methods for producing pseudorandom numbers and transforming those numbers to simulate samples from various distributions are among the most important topics in statistical computing. This book surveys techniques of random number generation and the use of random numbers in Monte Carlo simulation. The book covers basic principles, as well as newer methods such as parallel random number generation, nonlinear congruential generators, quasi-Monte Carlo methods, and Markov chain Monte Carlo. The best methods for generating random variates from the standard distributions are presented, and general techniques useful in more complicated models and in novel settings are also described. The emphasis throughout the book is on practical methods that work well in current computing environments. The book includes exercises and can be used as a text or supplementary text for various courses in modern statistics. It could serve as the primary text for a specialized course in statistical computing, or as a supplementary text for a course in computational statistics and other areas of modern statistics that rely on simulation. The book, which covers recent developments in the field, could also serve as a useful reference for practitioners. Although some familiarity with probability and statistics is assumed, the book is accessible to a broad audience. The second edition is approximately 50 per cent longer than the first edition. It includes advances in methods for parallel random number generation, universal methods for generation of nonuniform variates, perfect sampling, and software for random number generation.
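The recipe this description outlines, a pseudorandom uniform generator combined with a transformation to a target distribution, can be sketched in a few lines. The sketch below uses the classic Park-Miller "minimal standard" linear congruential generator and inverse-transform sampling for the exponential distribution; it is a generic illustration, not code from the book:

```python
import math

# Park-Miller "minimal standard" LCG parameters: x_{k+1} = A * x_k mod M.
M = 2**31 - 1
A = 16807

def lcg(seed, n):
    """Yield n pseudorandom uniforms in (0, 1) from a linear congruential generator."""
    x = seed
    for _ in range(n):
        x = (A * x) % M
        yield x / M

def exponential_variates(seed, n, rate=1.0):
    """Inverse-transform sampling: if U ~ Uniform(0,1), then -log(1-U)/rate ~ Exp(rate)."""
    return [-math.log(1.0 - u) / rate for u in lcg(seed, n)]

sample = exponential_variates(seed=12345, n=100_000, rate=2.0)
mean = sum(sample) / len(sample)  # should be close to 1/rate = 0.5
```

Any generator used in earnest would first have to pass the statistical test batteries the book discusses; the LCG here is only the simplest illustration of the idea.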
1 609 kr
Ships within 10-15 business days
In recent years developments in statistics have to a great extent gone hand in hand with developments in computing. Indeed, many of the recent advances in statistics have been dependent on advances in computer science and technology. Many of the currently interesting statistical methods are computationally intensive, either because they require very large numbers of numerical computations or because they depend on visualization of many projections of the data. The class of statistical methods characterized by computational intensity and the supporting theory for such methods constitute a discipline called “computational statistics”. (Here, I am following Wegman, 1988, and distinguishing “computational statistics” from “statistical computing”, which we take to mean “computational methods, including numerical analysis, for statisticians”.) The computationally intensive methods of modern statistics rely heavily on the developments in statistical computing and numerical analysis generally. Computational statistics shares two hallmarks with other “computational” sciences, such as computational physics, computational biology, and so on. One is a characteristic of the methodology: it is computationally intensive. The other is the nature of the tools of discovery. Tools of the scientific method have generally been logical deduction (theory) and observation (experimentation). The computer, used to explore large numbers of scenarios, constitutes a new type of tool. Use of the computer to simulate alternatives and to present the research worker with information about these alternatives is a characteristic of the computational sciences. In some ways this usage is akin to experimentation. The observations, however, are generated from an assumed model, and those simulated data are used to evaluate and study the model.
1 395 kr
Ships within 10-15 business days
Computational inference has taken its place alongside asymptotic inference and exact techniques in the standard collection of statistical methods. Computational inference is based on an approach to statistical methods that uses modern computational power to simulate distributional properties of estimators and test statistics. This book describes computationally intensive statistical methods in a unified presentation, emphasizing techniques, such as the PDF decomposition, that arise in a wide range of methods. The book assumes an intermediate background in mathematics, computing, and applied and theoretical statistics. The first part of the book, consisting of a single long chapter, reviews this background material while introducing computationally intensive exploratory data analysis and computational inference. The six chapters in the second part of the book are on statistical computing. This part describes arithmetic in digital computers and how the nature of digital computations affects algorithms used in statistical methods. Building on the first chapters on numerical computations and algorithm design, the following chapters cover the main areas of statistical numerical analysis, that is, approximation of functions, numerical quadrature, numerical linear algebra, solution of nonlinear equations, optimization, and random number generation. The third and fourth parts of the book cover methods of computational statistics, including Monte Carlo methods, randomization and cross validation, the bootstrap, probability density estimation, and statistical learning. The book includes a large number of exercises with some solutions provided in an appendix.
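A minimal illustration of computational inference in the sense described above is the bootstrap, which simulates the sampling distribution of an estimator by resampling the observed data with replacement. The data set and estimator below are arbitrary toy choices for the sketch, not examples from the book:

```python
import random
import statistics

def bootstrap_se(data, estimator, n_boot=2000, seed=0):
    """Approximate the standard error of an estimator by resampling with replacement."""
    rng = random.Random(seed)
    n = len(data)
    replicates = [
        estimator([data[rng.randrange(n)] for _ in range(n)])
        for _ in range(n_boot)
    ]
    # The spread of the bootstrap replicates stands in for the (unknown)
    # sampling variability of the estimator.
    return statistics.stdev(replicates)

data = [2.1, 3.5, 3.7, 4.0, 4.4, 5.2, 5.9, 6.3, 7.1, 8.0]
se_median = bootstrap_se(data, statistics.median)
```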
541 kr
Ships within 10-15 business days
Numerical linear algebra is one of the most important subjects in the field of statistical computing. Statistical methods in many areas of application require computations with vectors and matrices. This book describes accurate and efficient computer algorithms for factoring matrices, solving linear systems of equations, and extracting eigenvalues and eigenvectors. Although the book is not tied to any particular software system, it describes and gives examples of the use of modern computer software for numerical linear algebra. An understanding of numerical linear algebra requires basic knowledge both of linear algebra and of how numerical data are stored and manipulated in the computer. The book begins with a discussion of the basics of numerical computations, and then describes the relevant properties of matrix inverses, matrix factorizations, matrix and vector norms, and other topics in linear algebra; hence, the book is essentially self-contained. The topics addressed in this book constitute the most important material for an introductory course in statistical computing, and should be covered in every such course. The book includes exercises and can be used as a text for a first course in statistical computing or as a supplementary text for various courses that emphasize computations. James Gentle is University Professor of Computational Statistics at George Mason University. During a thirteen-year hiatus from academic work before joining George Mason, he was director of research and design at the world's largest independent producer of Fortran and C general-purpose scientific software libraries. These libraries implement many algorithms for numerical linear algebra. He is a Fellow of the American Statistical Association and a member of the International Statistical Institute. He has held several national
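As a rough illustration of the kind of algorithm such a book covers, here is Gaussian elimination with partial pivoting in pure Python. Production statistical software would call tuned library routines (LAPACK via numpy/scipy); this sketch is a generic textbook algorithm, not code from the book:

```python
def solve_linear_system(a, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(a)
    # Work on a copy of the augmented matrix [A | b].
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for col in range(n):
        # Partial pivoting: bring the row with the largest pivot to the top.
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        # Eliminate the entries below the pivot.
        for r in range(col + 1, n):
            factor = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= factor * m[col][c]
    # Back substitution on the resulting upper-triangular system.
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

a = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = solve_linear_system(a, b)  # exact solution: [1/11, 7/11]
```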
1 930 kr
Ships within 10-15 business days
"In this book the authors have assembled the best techniques from a great variety of sources, establishing a benchmark for the field of statistical computing." – Mathematics of Computation. "The text is highly readable and well illustrated with examples. The reader who intends to take a hand in designing his own regression and multivariate packages will find a storehouse of information and a valuable resource in the field of statistical computing."
968 kr
Ships within 10-15 business days
Monte Carlo simulation has become one of the most important tools in all fields of science. Simulation methodology relies on a good source of numbers that appear to be random. These "pseudorandom" numbers must pass statistical tests just as random samples would. Methods for producing pseudorandom numbers and transforming those numbers to simulate samples from various distributions are among the most important topics in statistical computing. This book surveys techniques of random number generation and the use of random numbers in Monte Carlo simulation. The book covers basic principles, as well as newer methods such as parallel random number generation, nonlinear congruential generators, quasi-Monte Carlo methods, and Markov chain Monte Carlo. The best methods for generating random variates from the standard distributions are presented, and general techniques useful in more complicated models and in novel settings are also described. The emphasis throughout the book is on practical methods that work well in current computing environments. The book includes exercises and can be used as a text or supplementary text for various courses in modern statistics. It could serve as the primary text for a specialized course in statistical computing, or as a supplementary text for a course in computational statistics and other areas of modern statistics that rely on simulation. The book, which covers recent developments in the field, could also serve as a useful reference for practitioners. Although some familiarity with probability and statistics is assumed, the book is accessible to a broad audience. The second edition is approximately 50% longer than the first edition. It includes advances in methods for parallel random number generation, universal methods for generation of nonuniform variates, perfect sampling, and software for random number generation.
1 182 kr
Ships within 10-15 business days
In recent years developments in statistics have to a great extent gone hand in hand with developments in computing. Indeed, many of the recent advances in statistics have been dependent on advances in computer science and technology. Many of the currently interesting statistical methods are computationally intensive, either because they require very large numbers of numerical computations or because they depend on visualization of many projections of the data. The class of statistical methods characterized by computational intensity and the supporting theory for such methods constitute a discipline called “computational statistics”. (Here, I am following Wegman, 1988, and distinguishing “computational statistics” from “statistical computing”, which we take to mean “computational methods, including numerical analysis, for statisticians”.) The computationally intensive methods of modern statistics rely heavily on the developments in statistical computing and numerical analysis generally. Computational statistics shares two hallmarks with other “computational” sciences, such as computational physics, computational biology, and so on. One is a characteristic of the methodology: it is computationally intensive. The other is the nature of the tools of discovery. Tools of the scientific method have generally been logical deduction (theory) and observation (experimentation). The computer, used to explore large numbers of scenarios, constitutes a new type of tool. Use of the computer to simulate alternatives and to present the research worker with information about these alternatives is a characteristic of the computational sciences. In some ways this usage is akin to experimentation. The observations, however, are generated from an assumed model, and those simulated data are used to evaluate and study the model.
541 kr
Ships within 10-15 business days
Numerical linear algebra is one of the most important subjects in the field of statistical computing. Statistical methods in many areas of application require computations with vectors and matrices. This book describes accurate and efficient computer algorithms for factoring matrices, solving linear systems of equations, and extracting eigenvalues and eigenvectors. Although the book is not tied to any particular software system, it describes and gives examples of the use of modern computer software for numerical linear algebra. An understanding of numerical linear algebra requires basic knowledge both of linear algebra and of how numerical data are stored and manipulated in the computer. The book begins with a discussion of the basics of numerical computations, and then describes the relevant properties of matrix inverses, matrix factorizations, matrix and vector norms, and other topics in linear algebra; hence, the book is essentially self-contained. The topics addressed in this book constitute the most important material for an introductory course in statistical computing, and should be covered in every such course. The book includes exercises and can be used as a text for a first course in statistical computing or as a supplementary text for various courses that emphasize computations. James Gentle is University Professor of Computational Statistics at George Mason University. During a thirteen-year hiatus from academic work before joining George Mason, he was director of research and design at the world's largest independent producer of Fortran and C general-purpose scientific software libraries. These libraries implement many algorithms for numerical linear algebra. He is a Fellow of the American Statistical Association and a member of the International Statistical Institute. He has held several national
968 kr
Ships within 10-15 business days
Computational inference uses modern computational power to simulate distributional properties of estimators and test statistics. This book describes computationally intensive statistical methods in a unified presentation.
914 kr
Ships within 11-20 business days
This book presents the theory of matrix algebra for statistical applications, explores various types of matrices encountered in statistics, and covers numerical linear algebra.
910 kr
Forthcoming
This book presents the theory of matrix algebra for statistical applications, explores various types of matrices encountered in statistics, and covers numerical linear algebra. Matrix algebra is one of the most important areas of mathematics in data science and in statistical theory, and previous editions provided essential updates and comprehensive coverage of critical topics in mathematics. This 3rd edition offers a self-contained description of relevant aspects of matrix algebra for applications in statistics. It begins with fundamental concepts of vectors and vector spaces; covers basic algebraic properties of matrices and analytic properties of vectors and matrices in multivariate calculus; and concludes with a discussion of operations on matrices, in solutions of linear systems and in eigenanalysis. It also includes discussions of the R software package, with numerous examples and exercises. Matrix Algebra considers various types of matrices encountered in statistics, such as projection matrices and positive definite matrices, and describes special properties of those matrices, as well as various applications of matrix theory in statistics, including linear models, multivariate analysis, and stochastic processes. It begins with a discussion of the basics of numerical computations and goes on to describe accurate and efficient algorithms for factoring matrices, solving linear systems of equations, and extracting eigenvalues and eigenvectors. It covers numerical linear algebra, one of the most important subjects in the field of statistical computing. The content includes a greater emphasis on R and extensive coverage of statistical linear models. Matrix Algebra is ideal for graduate and advanced undergraduate students, or as a supplementary text for courses in linear models or multivariate statistics. It is also ideal for use in a course in statistical computing, or as a supplementary text for various courses that emphasize computations.
2 143 kr
Ships within 10-15 business days
Any financial asset that is openly traded has a market price. Except in extreme market conditions, the market price may be more or less than a "fair" value. Fair value is likely to be some complicated function of the current intrinsic value of tangible or intangible assets underlying the claim and our assessment of the characteristics of the underlying assets with respect to the expected rate of growth, future dividends, volatility, and other relevant market factors. Some of these factors that affect the price can be measured at the time of a transaction with reasonably high accuracy. Most factors, however, relate to expectations about the future and to subjective issues, such as current management, corporate policies and market environment, that could affect the future financial performance of the underlying assets. Models are thus needed to describe the stochastic factors and environment, and their implementations inevitably require computational finance tools.
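One standard computational tool for pricing under a stochastic model is Monte Carlo simulation of the underlying asset. The sketch below prices a European call under geometric Brownian motion (the conventional Black-Scholes assumptions); the model and parameter values are generic textbook choices, not taken from the book:

```python
import math
import random

def mc_european_call(s0, strike, rate, sigma, maturity, n_paths=200_000, seed=42):
    """Monte Carlo fair value of a European call under geometric Brownian
    motion: S_T = S_0 * exp((r - sigma^2/2) * T + sigma * sqrt(T) * Z)."""
    rng = random.Random(seed)
    drift = (rate - 0.5 * sigma**2) * maturity
    vol = sigma * math.sqrt(maturity)
    payoff_sum = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        s_t = s0 * math.exp(drift + vol * z)
        payoff_sum += max(s_t - strike, 0.0)
    # Discount the average payoff back to today.
    return math.exp(-rate * maturity) * payoff_sum / n_paths

price = mc_european_call(s0=100, strike=100, rate=0.05, sigma=0.2, maturity=1.0)
```

With these parameters the estimate should land near the Black-Scholes closed-form value of roughly 10.45, with Monte Carlo error shrinking as the number of paths grows.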
3 786 kr
Ships within 10-15 business days
The Handbook of Computational Statistics - Concepts and Methods (second edition) is a revision of the first edition published in 2004, and contains additional comments and updated information on the existing chapters, as well as three new chapters addressing recent work in the field of computational statistics.
2 143 kr
Ships within 10-15 business days
Fair value is likely to be some complicated function of the current intrinsic value of tangible or intangible assets underlying the claim and our assessment of the characteristics of the underlying assets with respect to the expected rate of growth, future dividends, volatility, and other relevant market factors.
3 786 kr
Ships within 10-15 business days
The Handbook of Computational Statistics - Concepts and Methods (second edition) is a revision of the first edition published in 2004, and contains additional comments and updated information on the existing chapters, as well as three new chapters addressing recent work in the field of computational statistics. This new edition is divided into four parts in the same way as the first edition. It begins with "How Computational Statistics became the backbone of modern data science" (Ch. 1): an overview of the field of Computational Statistics, how it emerged as a separate discipline, and how its own development mirrored that of hardware and software, including a discussion of current active research. The second part (Chs. 2-15) presents several topics in the supporting field of statistical computing. Emphasis is placed on the need for fast and accurate numerical algorithms, and some of the basic methodologies for transformation, database handling, high-dimensional data and graphics treatment are discussed. The third part (Chs. 16-33) focuses on statistical methodology. Special attention is given to smoothing, iterative procedures, simulation and visualization of multivariate data. Lastly, a set of selected applications (Chs. 34-38) such as Bioinformatics, Medical Imaging, Finance, Econometrics and Network Intrusion Detection highlight the usefulness of computational statistics in real-world applications.