This book turned my hatred of stats and SPSS into love.
I never thought I would find a statistics textbook amusing, but somehow our text pulls it off. I also appreciated the online supplementary tools provided by the publisher. They provide a good synthesis of each of the chapters and some easy options for review.
I really, really love the book; it's the main reason why I'm not curled up in bed with my cats sobbing in fear at the moment. Speaking of cats, I gotta say the correcting cat/misconception mutt framing is very cute, and it almost broke my heart finding out the origin of that orange spiritual feline. I'm having a blast reading about stats; who would've thunk it?
I am enjoying the book, which I would never have imagined! I am not afraid of statistics anymore.
Andy Field is Professor of Quantitative Methods at the University of Sussex. He has published widely (100+ research papers, 29 book chapters, and 17 books in various editions) in the areas of child anxiety and psychological methods and statistics. His current research interests focus on barriers to learning mathematics and statistics.
He is internationally known as a statistics educator. He has written several widely used statistics textbooks including Discovering Statistics Using IBM SPSS Statistics (winner of the 2007 British Psychological Society book award), Discovering Statistics Using R, and An Adventure in Statistics (shortlisted for the British Psychological Society book award, 2017; British Book Design and Production Awards, primary, secondary and tertiary education category, 2016; and the Association of Learned & Professional Society Publishers Award for innovation in publishing, 2016), which teaches statistics through a fictional narrative and uses graphic novel elements. He has also written the adventr and discovr packages for the statistics software R that teach statistics and R through interactive tutorials.
His uncontrollable enthusiasm for teaching statistics to psychologists has led to teaching awards from the University of Sussex (2001, 2015, 2016, 2018, 2019), the British Psychological Society (2006) and a prestigious UK National Teaching fellowship (2010).
He's done the usual academic things: had grants, been on editorial boards, done lots of admin/service, but he finds it tedious trying to remember this stuff. None of it matters anyway because, in the unlikely event that you've ever heard of him, it'll be as the 'Stats book guy'. In his spare time, he plays the drums very noisily in a heavy metal band and walks his cocker spaniel, both of which he finds therapeutic.
Chapter 1: Why is my evil lecturer forcing me to learn statistics?
What the hell am I doing here? I don't belong here; The research process; Initial observation: finding something that needs explaining; Generating and testing theories and hypotheses; Collecting data: measurement; Collecting data: research design; Reporting data

Chapter 2: The SPINE of statistics
What is the SPINE of statistics?; Statistical models; Populations and samples; P is for parameters; E is for estimating parameters; S is for standard error; I is for (confidence) interval; N is for null hypothesis significance testing, NHST; Reporting significance tests

Chapter 3: The phoenix of statistics
Problems with NHST; NHST as part of wider problems with science; A phoenix from the EMBERS; Sense, and how to use it; Preregistering research and open science; Effect sizes; Bayesian approaches; Reporting effect sizes and Bayes factors

Chapter 4: The IBM SPSS Statistics environment
Versions of IBM SPSS Statistics; Windows, MacOS and Linux; Getting started; The Data Editor; Entering data into IBM SPSS Statistics; Importing data; The SPSS Viewer; Exporting SPSS output; The Syntax Editor; Saving files; Opening files; Extending IBM SPSS Statistics

Chapter 5: Exploring data with graphs
The art of presenting data; The SPSS Chart Builder; Histograms; Boxplots (box-whisker diagrams); Graphing means: bar charts and error bars; Line charts; Graphing relationships: the scatterplot; Editing graphs

Chapter 6: The beast of bias
What is bias?; Outliers; Overview of assumptions; Additivity and linearity; Normally distributed something or other; Homoscedasticity/homogeneity of variance; Independence; Spotting outliers; Spotting normality; Spotting linearity and heteroscedasticity/heterogeneity of variance; Reducing bias

Chapter 7: Non-parametric models
When to use non-parametric tests; General procedure of non-parametric tests in SPSS; Comparing two independent conditions: the Wilcoxon rank-sum test and Mann-Whitney test; Comparing two related conditions: the Wilcoxon signed-rank test; Differences between several independent groups: the Kruskal-Wallis test; Differences between several related groups: Friedman's ANOVA

Chapter 8: Correlation
Modelling relationships; Data entry for correlation analysis; Bivariate correlation; Partial and semi-partial correlation; Comparing correlations; Calculating the effect size; How to report correlation coefficients

Chapter 9: The linear model (regression)
An introduction to the linear model (regression); Bias in linear models?; Generalizing the model; Sample size in regression; Fitting linear models: the general procedure; Using SPSS Statistics to fit a linear model with one predictor; Interpreting a linear model with one predictor; The linear model with two or more predictors (multiple regression); Using SPSS Statistics to fit a linear model with several predictors; Interpreting a linear model with several predictors; Robust regression; Bayesian regression; Reporting linear models

Chapter 10: Comparing two means
Looking at differences; An example: are invisible people mischievous?; Categorical predictors in the linear model; The t-test; Assumptions of the t-test; Comparing two means: general procedure; Comparing two independent means using SPSS Statistics; Comparing two related means using SPSS Statistics; Reporting comparisons between two means; Between groups or repeated measures?

Chapter 11: Moderation, mediation and multicategory predictors
The PROCESS tool; Moderation: interactions in the linear model; Mediation; Categorical predictors in regression

Chapter 12: GLM 1: Comparing several independent means
Using a linear model to compare several means; Assumptions when comparing means; Planned contrasts (contrast coding); Post hoc procedures; Comparing several means using SPSS Statistics; Output from one-way independent ANOVA; Robust comparison