Chapter 3

Things to think about before reading this chapter

  • Hartmann, Pelzel, and Abbott begin this chapter by asserting “the dependence of research technology on the theoretically based questions motivating empirical investigations.” How is this illustrated in their review of design, measurement, and analysis in developmental research?
  • What are the design, measurement, and analytic challenges specific to developmental science?
  • Note research results presented in subsequent chapters that either conform—or perhaps fail to conform—to the interpretive concerns raised by Hartmann, Pelzel, and Abbott’s discussion of directional causal bias in developmental research.
  • How do cohort, age, and time of assessment distinguish common developmental research designs?
  • What are the methodological and practical challenges faced when measuring change over time?


Chapter Outline

DESIGN, MEASUREMENT, AND ANALYSIS IN DEVELOPMENTAL RESEARCH

Introduction

Design

Validity Threats

Statistical conclusion validity

Internal validity

Construct validity

External validity

Seminal Design Issues for Developmental Investigators

Intractable variables

Change

Limited availability of self-reports

Complexity of causal networks

Directional causal biases

Attrition

Design Variations

True experiments

Quasi-experiments

Nonexperimental designs

Developmental Design

Sample Size for Power and Accuracy

Additional Concerns and References

Measurement

Types of Scores

Difference scores

Q-sorts

Age- (grade-) adjusted scores

Criteria for Evaluating Scores

Standardization

Reliability

Measurement validity

Other criteria

Data Facets

Suggested Readings

Agresti, A. (2007). An introduction to categorical data analysis (2nd ed.). Hoboken, NJ: Wiley.

Alasuutari, P., Bickman, L., & Brannen, J. (Eds.). (2008). The Sage handbook of social research methods. London: Sage.

Bakeman, R., & Gottman, J. M. (1997). Observing interaction: An introduction to sequential analysis (2nd ed.). New York: Cambridge University Press.

Baltes, P. B., Reese, H. W., & Nesselroade, J. (1988). Life-span developmental psychology: Introduction to research methods. Hillsdale, NJ: Lawrence Erlbaum Associates.

Camic, P. M., Rhodes, J. E., & Yardley, L. (2003). Naming the stars: Integrating qualitative methods into psychological research. In P. M. Camic, J. E. Rhodes, & L. Yardley (Eds.), Qualitative research in psychology: Expanding perspectives in methodology and design (pp. 3–15). Washington, DC: American Psychological Association.

Cohen, J. (1990). Things I have learned (so far). American Psychologist, 45, 1304–1312.

Cohen, J. (1992). A power primer. Psychological Bulletin, 112, 155–159.

Lerner, R. M. (1998). The life course and human development. In W. Damon (Series Ed.) and R. M. Lerner (Vol. Ed.), Handbook of child psychology. Volume 1: Theoretical models of human development (5th ed., pp. 1–24). New York: Wiley.

Nickerson, R. S. (2000). Null hypothesis significance testing: A review of an old and continuing controversy. Psychological Methods, 5, 241–301.

Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory (3rd ed.). New York: McGraw-Hill.

Rindskopf, D. (2004). Trends in categorical data analysis: New, semi-new, and recycled ideas. In D. Kaplan (Ed.), The Sage handbook of quantitative methodology for the social sciences (pp. 137–149). Thousand Oaks, CA: Sage.

Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin.

Smithson, M. (2003). Confidence intervals. Thousand Oaks, CA: Sage.

Teti, D. (Ed.). (2005). Handbook of research methods in developmental science. Malden, MA: Blackwell.

Westfall, P. H., Tobias, R. D., Rom, D., Wolfinger, R. D., & Hochberg, Y. (1999). Multiple comparisons and multiple tests using SAS. Cary, NC: SAS.

Glossary

Sampling Method: The rules and procedures for selecting a subgroup of units (e.g., people, organizations) from a well-defined population for the purpose of estimating characteristics of the whole population.

Probability Sampling: Sampling methods in which every member of the population has a known non-zero (but not necessarily equal) probability of being included in the sample, and the sample is drawn using a random or chance process consistent with these probabilities. Examples of probability sampling methods include simple random sampling, systematic sampling, stratified random sampling, and cluster sampling. Most probability sampling methods enable the standard error of the estimator and the confidence limits for the true population value to be computed from the sample data.
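The computation referred to above can be illustrated concretely. The short Python sketch below is only illustrative: it simulates a hypothetical population, draws a simple random sample with the standard-library random module, and computes the estimated standard error and approximate 95% confidence limits for the population mean from the sample data alone (the 1.96 multiplier assumes a roughly normal sampling distribution).

```python
import random
import statistics

# Hypothetical, simulated population: e.g., scores for 10,000 children.
random.seed(42)
population = [random.gauss(60, 12) for _ in range(10_000)]

# Simple random sampling: every member has the same known probability
# (n / N) of being selected.
n = 100
sample = random.sample(population, n)

# Point estimate of the population mean.
mean = statistics.mean(sample)

# Estimated standard error of the mean under simple random sampling
# (the finite-population correction is negligible here and is omitted).
se = statistics.stdev(sample) / n ** 0.5

# Approximate 95% confidence limits for the true population mean.
lower, upper = mean - 1.96 * se, mean + 1.96 * se
print(f"Estimate: {mean:.1f}  95% CI: [{lower:.1f}, {upper:.1f}]")
print(f"True population mean: {statistics.mean(population):.1f}")
```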

Non-probability Sampling: Sampling methods in which members of the population are selected on the basis of subjective judgment, availability, or convenience rather than random selection. Since members of the population are chosen arbitrarily, it is not possible to determine the probability of any one member being included in the sample, and no assurance is given that each member has a chance of being included in the sample. There is no firm method of evaluating the reliability or validity of the population estimators computed from the sample data.
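By contrast, no comparable machinery exists for a non-probability sample. The sketch below is again purely illustrative, using a simulated population in which the outcome happens to be related to how easy a unit is to recruit: a convenience sample of the easiest-to-recruit units yields a systematically distorted estimate of the mean, and nothing in the sample itself reveals the size of that distortion, whereas a simple random sample of the same size does not share this problem.

```python
import random
import statistics

random.seed(1)

# Hypothetical population of 10,000 units. "access" indexes how easy a unit
# is to recruit; the outcome score is (by construction) related to access.
population = []
for _ in range(10_000):
    access = random.random()
    score = 50 + 20 * access + random.gauss(0, 5)
    population.append((access, score))

true_mean = statistics.mean(score for _, score in population)

# Convenience sample: the 100 easiest-to-recruit units (non-probability sampling).
convenience = [score for _, score in sorted(population, reverse=True)[:100]]

# Simple random sample of the same size (probability sampling), for contrast.
srs = [score for _, score in random.sample(population, 100)]

print(f"True population mean:    {true_mean:.1f}")
print(f"Convenience-sample mean: {statistics.mean(convenience):.1f}")  # systematically too high
print(f"Random-sample mean:      {statistics.mean(srs):.1f}")
```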