Wordtrade.com
Mathematics

 

Review Essays of Academic, Professional & Technical Books in the Humanities & Sciences

 

Statistics

Statistics for Evidence-Based Practice and Evaluation, 2nd edition by Allen Rubin (Brooks / Cole) Easy to read and practical, this social-work text provides you with a step-by-step guide that will help you succeed in statistics. The author's friendly, approachable style makes the subject of statistics highly accessible. Studying is made easy with practice illustrations, examples, exercises, and a book companion website that contains frequently asked questions, tutorial quizzes, and links to online resources. Practical examples provide you with the opportunity to see how and when data analysis and statistics are used in practice.

"This text is the best tool we can use to teach MSW students statistics without overwhelming them with too much statistics and too few examples. This textbook is easy to follow, and the author explains statistics in a way that even students who are most afraid of numbers will find statistics an interesting subject...I look forward to adopting the text upon publication."

"When I teach statistics to my students, I often rely on chapters from multiple books to convey the material...Rubin's book, I believe, will fill a tremendous gap in terms of statistical needs for master's level students...Rubin's text provides, in one source, a comprehensive, well-written and organized guide to students through the maze of learning statistics without using ?over the top? or confusing examples..."

Elementary Statistics in Social Research (11th Edition) by Jack A. Levin, James Alan Fox, and David R. Forde (MySocKit Series: Allyn and Bacon) The Eleventh Edition of Elementary Statistics in Social Research provides an introduction to statistics for students in sociology and related fields, including political science, criminal justice, and social work. This book is not intended to be a comprehensive reference for statistical methods. On the contrary, our first and foremost objective has always been to provide an accessible introduction for a broad range of students, particularly those who may not have a strong background in mathematics.

Elementary Statistics Using Excel (Second Edition) (with CD-ROM) by Mario F. Triola (Addison Wesley, Pearson) The use of Excel by statistics professors has recently experienced remarkable growth. A major reason for this growth is the extensive use of Excel in corporate America. Excel has become the premier program for working with spreadsheets. Motivated by a desire to serve their students by preparing them for their professional careers, many professors now include Excel as the medium of technology throughout the statistics course. However, the union of statistics and Excel is not without its pitfalls. Consequently, statistics professors and students using Excel require a guide that is effective in identifying Excel’s weaknesses, as well as providing alternatives that successfully overcome those weaknesses. Elementary Statistics Using Excel describes the many good statistics features in Excel, and it also identifies its weaknesses, while providing suitable alternatives.

Elementary Statistics Using Excel, by Mario F. Triola, Professor Emeritus of Mathematics at Dutchess Community College, is designed to be an introduction to basic statistics. Instead of being a manual of computer instructions, this book places strong emphasis on understanding concepts of statistics, with Excel included throughout as the key tool. Topics are presented with illustrative examples, identification of required assumptions, and underlying theory. Excel instructions are provided along with typical displays of results. In some cases, such as examples involving formulas and graphs, detailed instructions are presented so that Excel can be used more effectively in all applications, instead of those relating only to statistics.

The Excel instructions and displays are based on Excel 2002, but apply to earlier versions as well.

Excel lacks some important features, such as the ability to generate confidence intervals or to conduct hypothesis tests involving proportions. Elementary Statistics Using Excel includes tools that provide these important features. Other features include:

  • Chapter-opening features: A list of chapter sections previews the chapter for the student; a chapter-opening problem, using real data, then motivates the chapter material; and the first section is a chapter overview that provides a statement of the chapter’s objectives.
  • End-of-chapter features: A Chapter Review summarizes the key concepts and topics of the chapter; Review Exercises offer practice on the chapter concepts and procedures; Cumulative Review Exercises reinforce earlier material.
  • From Data to Decision: Critical Thinking is a capstone problem that requires critical thinking and a writing component; Cooperative Group Activities encourage active learning in groups; Excel Projects are for use with Excel; Internet Projects involve students with Internet data sets and, in some cases, applets.
  • Margin Essays: The text includes 120 margin essays, which illustrate uses and abuses of statistics in practical and interesting applications. Topics include “Do Boys or Girls Run in the Family?,” “Accuracy of Vote Counts,” “Test of Touch Therapy,” and “Picking Lottery Numbers.”
  • Flowcharts: These appear throughout the text to simplify and clarify more complex concepts and procedures.
  • Over 1500 exercises: In response to requests by users of the previous edition, there are now more of the simpler exercises that are based on small data sets. Many more of the exercises require interpretation of results. Because exercises are of such critical importance to any statistics book, great care has been taken to ensure their usefulness, relevance, and accuracy.
  • Real Data Sets: These are used extensively throughout the entire book. The data sets include such varied topics as ages of Queen Mary stowaways, alcohol and tobacco use in animated children’s movies, eruptions of the Old Faithful geyser, diamond prices and characteristics, and movie financial and rating data.
  • Interviews: Every chapter of the text includes author-conducted interviews with professional men and women in a variety of fields who use statistics in their day-to-day work.
  • Quick-Reference Endpapers: Tables A-2 and A-3 (the normal and t distributions) are reproduced on the front and back inside cover pages. A symbol table is included at the back of the book for quick and easy reference to key symbols.
  • Detachable Formula/Table Card: This insert, organized by chapter, gives students a quick reference for studying, or for use when taking tests (if allowed by the instructor).
  • CD-ROM: The CD-ROM, prepared by Mario F. Triola and packaged with every new copy of Elementary Statistics Using Excel, includes the data sets (except for Data Set 4) from Appendix B in the textbook and the DataDeskXL software add-in.

Also available for qualified teacher-adopters are the Instructor’s Solutions Manual, containing solutions to all the text exercises and sample course syllabi; MyMathLab.com, a complete online course that integrates interactive multimedia; the Testing System; and the PowerPoint® Lecture Presentation CD. For the Student/New Purchaser, there are MathXL for Statistics, the Web site that provides students with online homework, testing, and tutorial help; videos designed to supplement many sections in the book, with some topics presented by the author; and the Addison-Wesley Tutor Center.

Elementary Statistics Using Excel is written for students majoring in any field. Although the use of algebra is minimal, students should have completed at least an elementary algebra course. In many cases, underlying theory is included, but this book does not stress the mathematical rigor more suitable for mathematics majors. Because the many examples and exercises cover a wide variety of different and interesting statistical applications, the text is appropriate for students pursuing careers in disciplines ranging from the social sciences of psychology and sociology to areas such as education, the allied health fields, business, economics, engineering, the humanities, the physical sciences, journalism, communications, and liberal arts.

Choosing and Using Statistics: A Biologist's Guide by Calvin Dytham (Blackwell) explains how to select the appropriate method for processing data with a statistical software package, and how to extract information from the output produced. The biology textbook focuses on the actual use of the most popular statistics packages--SPSS, MINITAB, and Excel--rather than how they work. The second edition adds coverage of the G-test and logistic regression.

A practical primer for ecological and evolutionary researchers. Dytham gives clear direction for choosing and using different common statistical applications. It is practical for the most common types of applications biologists are likely to encounter. Of course, more advanced questions need a general statistical textbook to accompany this work, but for most questions this guide will save time and disappointment.

Basic Statistical Methods and Models for the Sciences by Judah Rosenblatt (Chapman & Hall, CRC) Builds a practical foundation in the use of statistical tools and imparts a clear understanding of their underlying assumptions and limitations. Focuses on applications and the models appropriate to each problem while emphasizing Monte Carlo methods, confidence intervals, and power functions.

The use of statistics in biology, medicine, engineering, and the sciences has grown dramatically in recent years, and having a basic background in the subject has become a near necessity for students and researchers in these fields. Although many introductory statistics books already exist, too often their focus leans towards theory and few help readers gain effective experience in using a standard statistical software package. Designed to be used in a first course for graduate or upper-level undergraduate students, Basic Statistical Methods and Models builds a practical foundation in the use of statistical tools and imparts a clear understanding of their underlying assumptions and limitations. Without getting bogged down in proofs and derivations, thorough discussions help readers understand why the stated methods and results are reasonable. The use of the statistical software Minitab is integrated throughout the book, giving readers valuable experience with computer simulation and problem-solving techniques. The author focuses on applications and the models appropriate to each problem while emphasizing Monte Carlo methods, the Central Limit Theorem, confidence intervals, and power functions. The text assumes that readers have some degree of maturity in mathematics, but it does not require the use of calculus. This, along with its very clear explanations, generous number of exercises, and demonstrations of the extensive uses of statistics in diverse application areas, makes Basic Statistical Methods and Models highly accessible to students in a wide range of disciplines.
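
For readers wondering what the kind of computer simulation the book stresses actually looks like, here is a minimal sketch of a Monte Carlo experiment illustrating the Central Limit Theorem. It is written in Python with NumPy rather than Minitab, and none of it is taken from the text; the sample sizes and the population distribution are simply illustrative assumptions.

    # Minimal Monte Carlo sketch (not from the book): draw many samples from a
    # skewed population and watch the sample means pile up in a normal shape.
    import numpy as np

    rng = np.random.default_rng(seed=1)
    n_samples, sample_size = 10_000, 30   # hypothetical settings

    # Each row is one simulated sample from an exponential population (mean = 1).
    samples = rng.exponential(scale=1.0, size=(n_samples, sample_size))
    sample_means = samples.mean(axis=1)

    print("mean of the sample means:", round(sample_means.mean(), 3))      # near 1.0
    print("sd of the sample means:  ", round(sample_means.std(ddof=1), 3)) # near 1/sqrt(30), about 0.18

Repeating a simple random experiment thousands of times and summarizing the results is the basic move behind the Monte Carlo treatment of sampling distributions, confidence intervals, and power functions that Rosenblatt emphasizes.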

Contents: Introduction; Scientific Method; The Aims of Medicine, Science, and Engineering; The Roles of Models and Data; Deterministic and Statistical Models; Probability Theory and Computer Simulation; Definition: Monte Carlo Simulation

Analysis of Failure and Survival Data by Peter J. Smith (Chapman & Hall/CRC) is an essential textbook for graduate-level students of survival analysis and reliability and a valuable reference for practitioners. It emphasizes the importance of performing diagnostic checks before survival-model fitting. It focuses on the many techniques that appear in popular software packages, including plotting product-limit survival curves, hazard plots, and probability plots in the context of censored data. The author integrates S-Plus and Minitab output throughout the text, along with a variety of real data sets so readers can see how the theory and methods are applied. He also incorporates exercises in each chapter that provide valuable problem-solving experience. In addition to all of this, the book also brings to light the most recent linear regression techniques. Most importantly, it includes a definitive account of the Buckley-James method for censored linear regression, found to be the best-performing method when a Cox proportional hazards method is not appropriate. Applying the theories of survival analysis and reliability requires more background and experience than students typically receive at the undergraduate level. Mastering the contents of this book will help prepare students to begin performing research in survival analysis and reliability and provide seasoned practitioners with a deeper understanding of the field.

The style of the text has an emphasis on understanding the ideas behind the methodology. Certainly, it is a contemporary understanding, with software output from S-PLUS and MINITAB making a direct appearance. In the final chapter, S-PLUS code is given for the simple calculation of Buckley-James estimators. However, the emphasis is not on the particularity of which software is used, but rather on what may be achieved and understood by its use: which diagnostics are important to act on when using software, any software, for particular analyses (for example, QQ-plots for both censored and uncensored data).

Sometimes the style is formal. Always, for easy reference, key terms are highlighted in definitions, key results are formalised as theorems, and key explanations are given in simple outline as proofs. It is instructive to see familiar classical data sets insightfully analysed (for example, the Stanford Heart Transplant data) to shed light on particular techniques. All data sources are referenced to the literature. Contextual examples are nested within each chapter. A suite of exercises is positioned at the end of each chapter.

Fooled by Randomness: The Hidden Role of Chance in the Markets and in Life by Nassim Nicholas Taleb (Texere) Luck in trading, business, and life. This book is about luck, that single most important factor in everything. It tells how we perceive and deal with luck and how we filter the mass of information that is thrown at us daily, to understand what is important and what is the result of pure chance. Fooled by Randomness delves into the reality of the lucky fool being in the "right place at the right time," and is set around the greatest forum for investigating the misconception of chance perceived as skill—the world of trading and derivatives. How often have you heard about the brilliant trader, with the gift of second sight, suddenly wiped out by a supposedly rare—or random—event? And how common is the business leader who accepts full praise for leadership qualities when stock prices rise, but none when they collapse? Writing in an accessible and entertaining manner, Taleb combines personal trading experiences with details and examples from a multidisciplinary array of topics: ancient history, classical literature, philosophy, mathematics, and science.

The Lady Tasting Tea: How Statistics Revolutionized Science in the Twentieth Century by David Salsburg (Freeman) In the 1840s, astronomers used Newton’s mathematical laws to predict the existence of another planet. And Neptune was discovered--right where it was supposed to be. For the first time, the universe seemed to make sense. It ran like a clock and scientists understood the mechanics perfectly. But errors began to mount, and by the end of the nineteenth century the clockwork universe had broken down. Gradually, a new paradigm arose in its place—the statistical model of reality. The development of statistical modeling in primary research is the underreported paradigm shift in the foundation of science. The lady of the title's claim that she could detect a difference between milk-into-tea vs. tea-into-milk infusions sets up the social history of a theory that has changed the culture of science as thoroughly as relativity did (the lady's palate is analogous to quantum physics' famous cat-subject), making possible the construction of meaningful scientific experiments. Statistical modeling is the child of applied mathematics and the 19th-century scientific revolution. So Salsburg begins his history at the beginning (with field agronomists in the U.K. in the 1920s trying to test the usefulness of early artificial fertilizer) and creates an important, near-complete chapter in the social history of science. His modest style sometimes labors to keep the lid on the Wonderland of statistical reality, especially under the "This Book Contains No Equations!" marketing rule for trade science books. He does his best to make a lively story of mostly British scientists' lives and work under this stricture, right through chaos theory. The products of their advancements include more reliable pharmaceuticals, better beer, econometrics, quality control manufacturing, diagnostic tests and social policy. It is unfortunate that this introduction to new statistical descriptions of reality tries so hard to appease mathophobia. Someone should do hypothesis testing of the relationship between equations in texts and sales in popular science markets; it would make a fine example of the use of statistics.
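
To give one concrete sense of the experiment behind the title, the classic design R. A. Fisher described has eight cups, four prepared milk-first and four tea-first, with the taster asked to identify the four milk-first cups. A minimal sketch of the chance calculation, in Python and not drawn from Salsburg's book, shows why a perfect score would be hard to dismiss as luck.

    # Fisher's lady-tasting-tea calculation: with 8 cups (4 milk-first, 4 tea-first),
    # what is the chance of identifying all 4 milk-first cups correctly by guessing?
    from math import comb

    total_cups, milk_first = 8, 4
    ways = comb(total_cups, milk_first)   # 70 equally likely guesses under the null
    p_all_correct = 1 / ways              # only one guess matches the true arrangement
    print(f"P(all correct by chance) = 1/{ways} = {p_all_correct:.4f}")   # prints 0.0143

A probability of roughly 1 in 70 is the sort of small p-value on which a meaningful experiment can be built, which is exactly the shift in scientific practice Salsburg traces.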

In The Lady Tasting Tea, David Salsburg tells the fascinating story of how statistics has revolutionized science in the twentieth century. Leading the reader through a maze of randomness and probability, the author clearly explains the nature of statistical models, where they came from, how they are applied to scientific problems, and whether they are true descriptions of reality. Salsburg also discusses the flaws inherent in a statistical model and the serious problems they’ve created for scientists as we enter the twenty-first century.

Written for the layperson, The Lady Tasting Tea contains no mathematical formulas. It does contain short, easily digestible chapters, each one built around one of the men or women who participated in the statistical revolution. While readers will not learn enough to engage in statistical analysis (that requires several years of graduate study), they will come away with some understanding of the basic philosophy behind the statistical view of science.

The Design Inference: Eliminating Chance Through Small Probabilities by William A. Dembski (Cambridge Studies in Probability, Induction and Decision Theory: Cambridge University Press) attempts to infer the probability of creation through measurement.
How can we identify events due to intelligent causes and distinguish them from events due to undirected natural causes? If we lack a causal theory, how can we determine whether an intelligent cause acted? The Design Inference presents a reliable method for detecting intelligent causes: the design inference. The design inference uncovers intelligent causes by isolating the key trademark of intelligent causes: specified events of small probability. Just about anything that happens is highly improbable, but when a highly improbable event is also specified (that is, conforms to an independently given pattern), undirected natural causes lose their explanatory power. Design inferences can be found in a range of scientific pursuits, from forensic science to research into the origins of life to the search for extraterrestrial intelligence. This challenging and provocative book shows how incomplete undirected causes are for science and breathes new life into classical design arguments. Philosophers of science and religion, other philosophers concerned with epistemology and logic, probability and complexity theorists, and statisticians will read The Design Inference with particular interest.

Analyzing Multivariate Data by Jim Lattin, Doug Carroll, and Paul E. Green (Brooks/Cole) Offering the latest teaching and practice of applied multivariate statistics, this text is perfect for students who need an applied introduction to the subject. Lattin, Green, and Carroll have created a text that speaks to the needs of applied students who have advanced beyond the beginning level, but are not yet advanced statistics majors. Their text accomplishes this through a three-part structure. First, the authors begin each major topic by developing students' statistical intuition through geometric presentation. Then they provide illustrative examples for support. Finally, for those courses where it will be valuable, they describe relevant mathematical underpinnings with matrix algebra.

Excerpt: Once upon a time, over two decades ago now, two gentlemen (Paul Green and Doug Carroll) collaborated on a textbook titled Analyzing Multivariate Data. Their objective was to produce a book with a pragmatic orientation, "a book for the data analyzer." Quoting from the preface of that book:

Most users of multivariate statistical techniques are not professional statisticians. They are applications-oriented researchers (psychologists, sociologists, marketing researchers, management scientists, and so on) who, from time to time, need the techniques to help them in their work. This text has been written for them and for students of these disciplines.... As implied by the title, emphasis on data analysis and the objectives of people who do data analysis has shaped the character of the whole enterprise.

Many people adopted the book, including a young professor (Jim Lattin) who was teaching a course on multivariate data analysis for the very first time. The level of the text seemed quite appropriate for the mix of graduate students taking the course (mainly first- and some second-year graduate students from different parts of the university). It was not too difficult (i.e., it did not rely too heavily on mathematics beyond the preparation of the typical student) and not too simplistic (i.e., it was not a "cookbook"). Because the book presented a variety of applications, it appealed to a relatively broad cross-section of students (not only students in marketing, organizational behavior, and accounting from the Graduate School of Business, but also students in engineering, education, economics, food research, psychology, sociology, and statistics).

But perhaps the best feature of the book (in the opinion of the young professor) was the way the authors used the geometry underlying the mathematics to show how the techniques really worked. Even a student with only a tentative grasp of matrix algebra can see what is happening when he or she understands that each matrix operation corresponds to a stretching (or shrinking) and rotation of the data. After the original text went out of print, the young professor continued to teach the course from the notes he had developed. Many things about the course changed (e.g., topics were added, dropped, and rearranged; new examples and larger data sets were included to keep pace with the increased computational capabilities of today's software packages), but the underlying pedagogy remained the same.
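
That geometric intuition can be made concrete with the singular value decomposition: applying a matrix to the data is equivalent to a rotation, a stretching along perpendicular axes, and another rotation. The short Python sketch below is an illustration of that general fact, not code from the textbook or its workbooks, and the matrix and data point are arbitrary.

    # Any 2x2 matrix A factors as rotation * stretch * rotation via the SVD,
    # so applying A to data amounts to rotating and stretching it.
    import numpy as np

    A = np.array([[2.0, 1.0],
                  [0.5, 1.5]])
    U, s, Vt = np.linalg.svd(A)            # A = U @ diag(s) @ Vt

    print("stretch factors (singular values):", s)

    x = np.array([1.0, 1.0])               # an arbitrary data point
    print("A @ x               :", A @ x)
    print("U @ diag(s) @ Vt @ x:", U @ np.diag(s) @ Vt @ x)   # same result

Seeing a technique such as principal components as little more than a rotation of the data toward its directions of greatest stretch is precisely the kind of mental picture the original book cultivated.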

This new Analyzing Multivariate Data is the result of the collaboration between the now not-so-young professor and the two authors of the original text. It is not so much a revision as it is a rebirth: a fresh look at multivariate techniques more than 20 years later, with new examples, new data, and some new methods, but grounded in the same pedagogical approach (applications-oriented, intuitively motivated using the underlying geometry of the method) that guided the creation of the original.

Analyzing Multivariate Data is organized into three parts. By way of introduction, Part I (Chapters 1 through 3) provides a general overview of multivariate methods, some helpful background on vectors and matrices and their geometrical interpretation, and a review of multiple regression analysis. Part II (Chapters 4 through 8) focuses on the analysis of interdependence, both among variables (principal components, factor analysis) and among objects (multidimensional scaling, cluster analysis). Part III (Chapters 9 through 13) covers canonical correlation and methods used in the analysis of dependence, including structural equation models with latent variables, logit choice models, and special cases of the general linear model (analysis of variance, discriminant analysis).

Our objective is to make students intelligent users of these multivariate techniques and good critics of multivariate analyses performed by others. If students are to be intelligent users and good critics of the techniques discussed in this book, they must have some grasp of theory, application, and interpretation. In other words, they must

  • Have some intuition as to how the technique works. To this end, we use a geometric interpretation to provide the students with a mental picture of how each method works. We use mathematics to support the underlying intuition (rather than as a substitute for it).

  • Be able to apply the technique. We take a hands-on approach, providing illustrative examples in each chapter based on real-world data. To facilitate the application of these methods, we have developed student workbooks specific to particular statistical packages (e.g., SAS and SPSS). These workbooks explain how the concepts in the text are linked to the application software and show the student how to perform the analyses presented in each chapter. The program templates provided in the workbooks enable students to run their own analyses of the more than 100 data sets (most taken from real applications in the published literature) contained on the CD-ROM that accompanies the text.

  • Be able to interpret the results of the analysis. In each chapter, we raise the important issues and problems that tend to come up with the application of each method. We place special emphasis on assessing the generalizability of the results of an analysis, and suggest ways in which students can test the validity of their findings.

Analyzing Multivariate Data shares a number of similarities with its predecessor:

  • Practical orientation. This book is still for the data analyzer. It continues to have a pragmatic orientation designed to appeal to applications-oriented researchers. Each chapter offers at least one real-world application as well as a discussion of the issues related to the proper interpretation of the results.

  • Intuitive approach. The goal still is to have students understand how these methods work (rather than to present them as a "black box"). We seek to build students' intuition with a combination of geometrical reasoning (lots of pictures) and limited mathematics (i.e., some matrix algebra to support the intuition). The writing style is still informal and the tendency is still toward concrete numerical demonstration rather than mathematical proof and/or abstract argument.

  • Interdisciplinary. The book is not written with a single audience in mind. The illustrations and sample problems are drawn from a wide range of areas, including marketing research, sociology, psychology, and economics.

  • Presentation format. Each of the chapters still follows a fairly standard format. We begin by discussing the objectives of each technique and some areas of potential application. We then explain how each method works with words and pictures (followed by a more mathematical exposition). An example (or two) helps to make clear the application of the technique and the interpretation of the results. We also provide a discussion of the problems and questions that can arise when doing this type of multivariate analysis.

Analyzing Multivariate Data also differs from its predecessor in several respects:

  • Organization of topics. This book now begins with analysis of interdependence (i.e., factor analysis, multidimensional scaling, cluster analysis) before moving on to the analysis of dependence. We find that an early discussion of data reduction techniques and measurement models is helpful before discussing canonical correlation and structural equation models. Also, when discussing techniques for the analysis of dependence, this book now considers both single-dependent-variable and multiple-dependent-variable versions of the technique together in the same chapter.

  • New topics, including logit choice models and structural equation models with latent variables.

  • Expanded coverage of techniques for the analysis of interdependence, especially scaling methods and cluster analysis.

  • Discussion of cross-validation. Overfitting is a serious problem that accompanies any exploratory analysis of multivariate data (particularly with the techniques used to perform analysis of dependence). In each chapter, we present approaches that can be used to assess the statistical significance and the generalizability of the results of a given analysis.

  • Software independent. Students from different disciplines studying different substantive problems have a tendency to adopt different statistical packages. For that reason, this textbook is designed to be "software independent"; that is, not written from the perspective of any one particular application. Instead, we have developed student workbooks specific to particular software packages (e.g., SAS and SPSS) to accompany the textbook.

  • Broader variety of sample problems and exercises. Instead of a single data set (the Alpha TV Commercial Study from the original), we have chosen to include a wide variety of data sets to show students how multivariate methods can be used to provide insights into different types of problems. More than 100 data sets are included on the CD-ROM that accompanies the text. An Instructor's Manual, with solutions to the exercises at the end of each chapter, is also available.

  • Selected readings. In addition to the more comprehensive bibliography at the end of the book, we also provide a set of selected readings at the end of each chapter. These readings are not intended to be exhaustive but to give the student some idea of the origins of each method and some general resources pertaining to issues of importance related to each method.

  • Some sacrifices have been made to keep the scope of the book manageable. Topics no longer covered include automatic interaction detection (AID) and monotonic analysis of variance (MONANOVA). Conjoint analysis, which was not covered in the original book, is also not covered here.

As far as prerequisites go, the book assumes some familiarity with basic statistics. Most students coming to a course that uses this text will have seen regression analysis in some shape or form (although some will have a less than satisfactory grasp of the intuition underlying regression, unfortunately). The book does make use of matrix algebra, but students should not have to derive the equations to be able to understand the concepts and methods presented herein. To the extent possible, we have tried to modularize the mathematics (i.e., confine them to relatively self-contained sections) so as not to deter the interested but less mathematically minded student. The instructor has the option of covering the material in Chapters 2 and 3 in class or of assigning the material as background reading.