Nicholas T. Longford - Books
Showing all books by the author Nicholas T. Longford. Shop with free shipping and fast delivery.
11 products
1 313 kr
Ships within 3-6 business days
The principal aim of the book is an exposition of methods for the analysis of clustered observations; the secondary one is to present variation among units as a quantity of substantive interest: a measure of uncertainty, quality or equity, or, more generally, a summary of differences among experimental or observational units. Another goal is to make a balanced presentation of the advantages and limitations of these methods. The examples used to illustrate the methods are not drawn exclusively from the social sciences. Although the models are motivated mainly by social science problems, they are applicable in a variety of situations involving (imperfect) replication, such as repeated measurements, repeated experiments, longitudinal analysis, and analysis of covariance structures in general.
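The class of models described above is what is now commonly fitted as a multilevel or mixed-effects model. As a minimal sketch in R (simulated data; the lme4 package and all variable names are our illustrative choices, not the book's), a random-intercept, random-slope model for clustered observations can be fitted like this:

    library(lme4)

    set.seed(1)
    d <- data.frame(cluster = rep(1:30, each = 10),   # 30 clusters of 10 units
                    x       = rnorm(300))
    # cluster-specific intercepts and slopes induce within-cluster correlation
    b0 <- rnorm(30, sd = 0.8)[d$cluster]
    b1 <- rnorm(30, sd = 0.3)[d$cluster]
    d$score <- 1 + b0 + (0.5 + b1) * d$x + rnorm(300)

    fit <- lmer(score ~ x + (1 + x | cluster), data = d)  # random intercept and slope
    summary(fit)   # variance components summarise the differences among clusters

The estimated variance components play exactly the role the blurb assigns to variation: a summary of differences among the observational units.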
1 752 kr
Ships within 10-15 business days
Making decisions is a ubiquitous mental activity in our private, professional and public lives. It entails choosing one course of action from an available shortlist of options. Statistics for Making Decisions places decision making at the centre of statistical inference, proposing its theory as a new paradigm for statistical practice. The analysis in this paradigm takes seriously both prior information and the consequences of the various kinds of errors that may be committed. Its conclusion is a course of action tailored to the perspective of the specific client or sponsor of the analysis. The author's intention is a wholesale replacement of hypothesis testing, indicted with the argument that it has no means of incorporating the consequences of errors that self-evidently matter to the client.

The volume appeals to the analyst who deals with the simplest statistical problems: comparing two samples (which one has the greater mean or variance), or deciding whether a parameter is positive or negative. It combines highlighting the deficiencies of hypothesis testing with promoting a principled solution based on the idea of a currency for error, of which we want to spend as little as possible. This is implemented by selecting the option for which the expected loss is smallest (the Bayes rule). The price to pay is the need for a more detailed description of the options, and for eliciting and quantifying the consequences (ramifications) of the errors. This is what our clients do informally, and often inexpertly, after receiving the outputs of an analysis in an established format, such as the verdict of a hypothesis test or an estimate and its standard error. As a scientific discipline and profession, statistics has the potential to do this much better and deliver to the client a more complete and more relevant product.

Nicholas T. Longford is a senior statistician at Imperial College, London, specialising in statistical methods for neonatal medicine. His interests include causal analysis of observational studies, decision theory, and the contest of modelling and design in data analysis. His longer-term appointments in the past include Educational Testing Service, Princeton, NJ, USA; De Montfort University, Leicester, England; and the directorship of SNTL, a statistics research and consulting company. He is the author of over 100 journal articles and six other monographs on a variety of topics in applied statistics.
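As a toy illustration of the expected-loss calculus the description refers to (all numbers invented; this is a sketch of the general Bayes-rule idea, not code from the book): given posterior draws for an effect theta and elicited losses for the two kinds of error, the selected option is the one with the smaller expected loss.

    set.seed(2)
    theta <- rnorm(10000, mean = 0.3, sd = 0.25)   # stand-in posterior draws

    # assumed 'currency for error': acting when theta < 0 costs 10 per unit of
    # theta; failing to act when theta > 0 costs 1 per unit
    loss_act    <- mean(10 * pmax(-theta, 0))
    loss_refuse <- mean( 1 * pmax( theta, 0))

    c(act = loss_act, refuse = loss_refuse)
    if (loss_act < loss_refuse) "act" else "refuse"   # the Bayes rule

Note that no significance level appears anywhere: the asymmetry of the two losses, not a conventional 5%, drives the decision.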
685 kr
Ships within 10-15 business days
Statistics for Making Decisions (another edition): same description as the listing above.
536 kr
Ships within 10-15 business days
Studying Human Populations is a textbook for graduate students and research workers in social statistics and related subject areas. It follows a novel curriculum developed around the basic statistical activities of sampling, measurement and inference. Statistics is defined broadly as making decisions in the presence of uncertainty that arises as a consequence of limited resources available for collecting information. A connecting link of the presented methods is the perspective of missing information, catering for a diverse class of problems that include nonresponse, imperfect measurement and causal inference. In principle, any problem too complex for our limited analytical toolkit could be converted to a tractable problem if some additional information were available. Ingenuity is called for in declaring such (missing) information constructively, but the universe of problems that we can address is wide open, not limited by a discrete set of procedures.

The monograph aims to prepare the reader for the career of an independent social statistician and to serve as a reference for methods, ideas and ways of studying human populations: formulation of the inferential goals, design of studies, search for the sources of relevant information, and analysis and presentation of results. Elementary linear algebra and calculus are prerequisites, although the exposition is quite forgiving, especially in the first few chapters. Familiarity with statistical software at the outset is an advantage, but it can be developed concurrently with studying the text.
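One small, hypothetical illustration of the missing-information perspective described above (invented data and an assumed response model; not an example from the book): when nonresponse is related to the survey variable, the complete-case mean is biased, and declaring the missing information through response propensities leads to a weighted estimator.

    set.seed(3)
    n <- 1000
    income <- rlnorm(n, meanlog = 10, sdlog = 0.6)
    # assumed response propensities, decreasing with income
    z <- as.vector(scale(log(income)))
    p_resp <- plogis(1.5 - 0.7 * z)
    observed <- runif(n) < p_resp

    mean(income[observed])                      # complete-case mean (biased)
    weighted.mean(income[observed],
                  w = 1 / p_resp[observed])     # inverse-propensity weighting
    mean(income)                                # full-data target, for reference

The weighting step is where the "additional information" of the blurb enters: here the propensities are assumed known, which in practice they never are.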
Statistical Studies of Income, Poverty and Inequality in Europe
Computing and Graphics in R using EU-SILC
Paperback, English, 2023
691 kr
Ships within 10-15 business days
There is no shortage of incentives to study and reduce poverty in our societies. Poverty is studied in economics and the political sciences, and population surveys are an important source of information about it. The design and analysis of such surveys is principally a statistical subject matter, and the computer is essential for their data compilation and processing.

Focusing on the European Union Statistics on Income and Living Conditions (EU-SILC), a program of annual national surveys that collect data related to poverty and social exclusion, Statistical Studies of Income, Poverty and Inequality in Europe: Computing and Graphics in R presents a set of statistical analyses pertinent to the general goals of EU-SILC. The contents of the volume are biased toward computing and statistics, with reduced attention to economics, political and other social sciences. The emphasis is on methods and procedures rather than results, because the data from annual surveys released since publication, and in the near future, will erode the novelty of the data used and the results derived in this volume.

The aim of this volume is not to propose specific methods of analysis, but to open up the analytical agenda and address the aspects of the key definitions in the subject of poverty assessment that entail nontrivial elements of arbitrariness. The presented methods do not exhaust the range of analyses suitable for EU-SILC, but will stimulate the search for new methods and the adaptation of established methods that cater to the identified purposes.
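To make the flavour of such analyses concrete, here is a sketch in R (a made-up data frame standing in for EU-SILC records; the variable names are ours) of the at-risk-of-poverty rate, conventionally defined in EU-SILC as the share of persons with equivalised disposable income below 60% of the weighted median:

    set.seed(4)
    silc <- data.frame(eq_income = rlnorm(5000, meanlog = 9.6, sdlog = 0.5),
                       weight    = runif(5000, 0.5, 2))   # survey weights

    # weighted median via the weighted empirical distribution function
    o   <- order(silc$eq_income)
    cdf <- cumsum(silc$weight[o]) / sum(silc$weight)
    med <- silc$eq_income[o][which(cdf >= 0.5)[1]]

    threshold <- 0.6 * med                       # the conventional cut-off
    arpr <- weighted.mean(silc$eq_income < threshold, silc$weight)
    round(100 * arpr, 1)                         # at-risk-of-poverty rate, percent

The arbitrariness the blurb mentions is visible even here: the 60% multiplier and the equivalisation scale are conventions, not statistical necessities.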
552 kr
Ships within 10-15 business days
Studying Human Populations (another edition): same description as the listing above.
521 kr
Ships within 10-15 business days
A theme running through this book is that of making inference about sources of variation or uncertainty, and the author shows how information about these sources can be used for improved estimation of certain elementary quantities. Amongst the topics covered are: essay rating, summarizing item-level properties, equating of tests, small-area estimation, and incomplete longitudinal studies. Throughout, examples are given using real data sets which exemplify these applications.
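A stylised sketch of the idea common to several of these topics, improved estimation by borrowing strength across units (all numbers invented; the composite estimator shown is a generic shrinkage version, not the book's specific method):

    direct   <- c(52, 47, 61, 39)   # direct estimates for four units
    se2      <- c(25, 4, 36, 9)     # their sampling variances
    sigma2_b <- 16                  # assumed between-unit variance
    overall  <- mean(direct)

    b <- sigma2_b / (sigma2_b + se2)        # shrinkage coefficients in [0, 1]
    composite <- b * direct + (1 - b) * overall
    round(composite, 1)   # noisier direct estimates are pulled further to the mean

Information about the sources of variation (here sigma2_b and the sampling variances) is precisely what determines how far each elementary estimate is improved by shrinkage.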
Statistical Studies of Income, Poverty and Inequality in Europe
Computing and Graphics in R using EU-SILC
Hardcover, English, 2014
1 036 kr
Temporarily out of stock
Same description as the paperback edition listed above.
Missing Data and Small-Area Estimation
Modern Analytical Equipment for the Survey Statistician
Paperback, English, 2013
1 096 kr
Ships within 10-15 business days
This book develops methods for two key problems in the analysis of large-scale surveys: dealing with incomplete data and making inferences about sparsely represented subdomains. The presentation is committed to two particular methods, multiple imputation for missing data and multivariate composition for small-area estimation. The methods are presented as developments of established approaches, arrived at by attending to their deficiencies. Thus the change to more efficient methods can be gradual, sensitive to the management priorities in large research organisations and multidisciplinary teams, and to other reasons for inertia. The typical setting of each problem is addressed first, and then the constituency of the applications is widened to reinforce the view that the general method is essential for modern survey analysis. The general tone of the book is not "from theory to practice" but "from current practice to better practice." The third part of the book, a single chapter, presents a method for efficient estimation under model uncertainty. It is inspired by the solution for small-area estimation and is an example of "from good practice to better theory."

A strength of the presentation is its chapters of case studies, one for each problem. Whenever possible, examples and illustrations are preferred to theoretical argument. The book is suitable for graduate students and researchers who are acquainted with the fundamentals of sampling theory and have a good grounding in statistical computing, or who read it in conjunction with an intensive period of learning and establishing their own modern computing and graphical environment that will serve them for most of their future analytical work. While some analysts might regard data imperfections and deficiencies, such as nonresponse and limited sample size, as someone else's failure that bars effective and valid analysis, this book presents them as respectable analytical and inferential challenges: opportunities to harness computing power in the service of high-quality, socially relevant statistics. Overriding in this approach is the general principle: to do the best, for the consumer of statistical information, that can be done with what is available. The reputation of government statistics as a rigid, procedure-based and operation-centred activity, distant from the mainstream of statistical theory and practice, is refuted most resolutely.

After leaving De Montfort University in 2004, where he was a Senior Research Fellow in Statistics, Nick Longford founded the statistical research and consulting company SNTL in Leicester, England. He was awarded the first Campion Fellowship (2000–02) for methodological research in United Kingdom government statistics. He has served as Associate Editor of the Journal of the Royal Statistical Society, Series A, and the Journal of Educational and Behavioral Statistics, and as an Editor of the Journal of Multivariate Analysis. He is a member of the Editorial Board of the British Journal of Mathematical and Statistical Psychology. He is the author of two other monographs, Random Coefficient Models (Oxford University Press, 1993) and Models for Uncertainty in Educational Testing (Springer-Verlag, 1995).

From the reviews:

"Ultimately, this book serves as an excellent reference source to guide and improve statistical practice in survey settings exhibiting these problems." Psychometrika

"I am convinced this book will be useful to practitioners... [and a] valuable resource for future research in this field." Jan Kordos in Statistics in Transition, Vol. 7, No. 5, June 2006

"To sum up, I think this is an excellent book and it thoroughly covers methods to deal with incomplete data problems and small-area estimation. It is a useful and suitable book for survey statisticians, as well as for researchers and graduate students interested in sampling designs." Ramon Cleries Soler in Statistics and Operations Research Transactions, Vol. 30, No. 1, January-June 2006
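For a flavour of the first of the two methods, here is a brief multiple-imputation sketch using the standard mice package and made-up data (an illustration of the general technique, not the book's own code):

    library(mice)

    set.seed(5)
    d <- data.frame(x = rnorm(200))
    d$y <- 2 + 0.7 * d$x + rnorm(200)
    d$y[sample(200, 50)] <- NA                 # 25% of the outcomes go missing

    imp  <- mice(d, m = 5, printFlag = FALSE)  # five completed data sets
    fits <- with(imp, lm(y ~ x))               # analyse each completed data set
    summary(pool(fits))                        # combine by Rubin's rules

The pooled standard errors reflect both the within-imputation and the between-imputation variation, so the uncertainty due to the missing data is not swept under the carpet.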
Missing Data and Small-Area Estimation
Modern Analytical Equipment for the Survey Statistician
Hardcover, English, 2005
1 064 kr
Ships within 10-15 business days
Same description as the paperback edition listed above.
536 kr
Ships within 10-15 business days
This monograph presents a radical rethinking of how elementary inferences should be made in statistics, implementing a comprehensive alternative to hypothesis testing in which the control of the probabilities of the errors is replaced by selecting the course of action (one of the available options) associated with the smallest expected loss.

Its strength is that the inferences are responsive to the elicited or declared consequences of the erroneous decisions, and so they can be closely tailored to the client's perspective, priorities, value judgments and other prior information, together with the uncertainty about them.
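In symbols (a standard statement of the rule described, not a quotation from the monograph): with options d in a set D, a loss function L and a posterior density p(theta | data), the selected course of action is

    d^* = \arg\min_{d \in \mathcal{D}} \mathrm{E}\left[ L(d, \theta) \mid \mathrm{data} \right]
        = \arg\min_{d \in \mathcal{D}} \int L(d, \theta)\, p(\theta \mid \mathrm{data})\, \mathrm{d}\theta,

so the probabilities of the errors are never controlled directly; they matter only through their contribution to the expected loss.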