A case study is an intensive analysis of one individual over a long period of time. Thomas (2011) describes case studies as “Analyses of persons, events, decisions, periods, projects, policies, institutions, or other systems that are studied holistically by one or more methods.”
They produce rich, in-depth qualitative data that is analysed by the researcher. They are often criticised for being unscientific and too subjective, since the researcher may interpret the data wrongly. Freud carried out a case study in 1909 on ‘Little Hans’, who had a phobia of horses. Freud concluded that Hans was using horses as a symbol for his father and used his findings to support his theory of the Oedipus complex. But would Freud just be looking for evidence to support his theories? Did he simply see what he wanted to see and misinterpret the phobia? Brown (1965) examined the case study in detail and also provided evidence for Freud’s theory. Case studies can be very useful for localising function in people who have suffered brain injuries or memory loss, e.g. H.M., Clive Wearing and Phineas Gage. Without cases like these we would not be able to examine how memory is affected when different parts of the brain are damaged, or how our personalities can alter with damage to specific areas. Yin (2009) states that ‘single-subject research provides the statistical framework for making inferences from quantitative case-study data.’
However, the results obtained from case studies come from a single individual, so they cannot be generalised: everyone behaves and reacts differently, which limits the usefulness of the results. But they do have some major advantages, in that they produce a wealth of highly detailed personal data from the subject and their close family and friends. The researcher can then gain insight into why the person is behaving as they do, and perhaps even predict future behaviour or problems.
Single-subject designs are sensitive to individual differences because each participant serves as their own control. The findings are then used to establish a cause-and-effect relationship. There are three phases in a single-subject design, beginning with ‘baseline’, where there is no intervention and the researcher simply collects data on the dependent variable. The next phase is ‘intervention’, where the researcher introduces an independent variable and then collects data on the dependent variable again. The last phase is ‘reversal’, where the IV is removed and the researcher collects data on the DV for a third time. Tripodi (1998) states that “Single-subject designs produce or approximate three levels of knowledge: (1) descriptive, (2) correlational, and (3) causal.” By using these three phases the researcher is able to establish a cause-and-effect relationship between the intervention and the change in behaviour. This makes single-subject designs more scientific and reliable than case studies. Both methods are good for gaining a lot of data about one person, but generalisation is very difficult.
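The three phases can be sketched with hypothetical data. The session scores below and the simple comparison of phase means are invented purely for illustration; a real single-subject analysis would typically use visual inspection of the full data plot rather than means alone.

```python
import statistics

# Hypothetical frequency counts of a target behaviour per session,
# recorded across the three phases of an A-B-A (reversal) design.
baseline     = [9, 8, 10, 9, 11]   # A: no intervention, record the DV
intervention = [5, 4, 4, 3, 4]     # B: IV introduced, record the DV again
reversal     = [8, 9, 10, 9, 8]    # A: IV removed, record the DV a third time

for phase, scores in [("baseline", baseline),
                      ("intervention", intervention),
                      ("reversal", reversal)]:
    print(f"{phase:12s} mean = {statistics.mean(scores):.1f}")

# If the behaviour changes when the IV is introduced AND returns towards
# baseline when it is withdrawn, that pattern supports a causal claim.
```

Here the behaviour drops during the intervention phase and recovers towards baseline on reversal, which is the pattern the design needs before inferring cause and effect.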
Thomas, G. (2011). A typology for the case study in social science following a review of definition, discourse and structure. Qualitative Inquiry, 17(6), 511-521.
Tripodi, T. (1998). A Primer on Single-Subject Design for Clinical Social Workers. Washington, DC: NASW Press.
Yin, R. K. (2009). Case Study Research: Design and Methods (4th ed.). Thousand Oaks, CA: SAGE Publications.
Ronald Fisher coined the term ‘test of significance’. He wrote that “when such tests are available we may discover whether a second sample is or is not significantly different from the first.” In statistics, a result is considered significant if it is unlikely to be due to chance alone, i.e. if it would be very unlikely to occur when the null hypothesis is true (no treatment effect). This is determined by the chosen alpha level. It is essential that the level of significance is kept small to avoid making a Type I error. Usually the alpha level is set at 0.05, but it can be as low as 0.01, for example in clinical trials, where there is less room for error. An alpha of 0.05 means there is a 5% or smaller chance of obtaining your results by chance when the null hypothesis is true. If your test statistic falls beyond the critical value, you reject your null hypothesis and accept your H1.
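The logic of the alpha level can be shown with a small simulation: if we run many experiments in which the null hypothesis is true by construction, roughly 5% of them will still come out ‘significant’ at the .05 level. This is a minimal sketch using only the Python standard library and an approximate critical value (|t| ≈ 2.0 for this sample size), not a substitute for a proper statistics package.

```python
import random
import statistics

random.seed(42)

def two_sample_t(a, b):
    """Welch's t statistic for two independent samples."""
    se = (statistics.variance(a) / len(a) +
          statistics.variance(b) / len(b)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

# Simulate many experiments where the null hypothesis is TRUE:
# both groups are drawn from the same population, so any
# "significant" result is by definition a Type I error.
trials = 2000
false_alarms = 0
for _ in range(trials):
    a = [random.gauss(0, 1) for _ in range(30)]
    b = [random.gauss(0, 1) for _ in range(30)]
    if abs(two_sample_t(a, b)) > 2.0:   # roughly p < .05 for n = 30 per group
        false_alarms += 1

print(f"Type I error rate ~ {false_alarms / trials:.3f}")
```

The printed rate lands close to 0.05: the alpha level is exactly the Type I error rate you have agreed to tolerate.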
However, just because your test shows significance does not necessarily mean there is a real effect, at least not in the population. Samples always carry incomplete information about their population and may not be representative: the sample may be too small, or there may be bias within it, especially if the participants were not chosen at random. Type I and Type II errors can also occur when unusual or extreme samples lead the researcher to misinterpret the data. By concentrating so hard on avoiding Type I errors and relying so heavily on significance, we may in fact be making more Type II errors (Nakagawa). Sometimes ‘significant’ effects can be very small and practically meaningless. This is why effect size should be used, in order to estimate the strength of the relationship between the two variables.
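One common effect size measure is Cohen's d, the standardised difference between two group means. The groups and scores below are made up for the example; by Cohen's conventional benchmarks, d of roughly 0.2 is small, 0.5 medium and 0.8 large.

```python
import statistics

def cohens_d(a, b):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * statistics.variance(a) +
                  (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / pooled_var ** 0.5

# Hypothetical scores for two small groups (illustration only).
treatment = [5.1, 4.8, 5.5, 5.0, 4.9, 5.3]
control   = [4.6, 4.4, 4.9, 4.5, 4.7, 4.8]

print(f"d = {cohens_d(treatment, control):.2f}")
```

Unlike a p-value, d describes how big the difference is in standard-deviation units, which is what actually matters when judging whether a ‘significant’ effect is worth caring about.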
The issue of ecological validity is also raised by psychological experiments that take place in the lab. The lab is a simulated, unnatural environment, so how can we generalise a ‘significant’ result from a lab experiment to real-life events? Would people behave and react in the same way in a real-life situation? Is your result truly significant in the real world? Questions like these highlight that such findings may not be very useful in practice.
Significance is also easy to manipulate: simply by increasing your sample size you can increase your chances of getting a significant result. This is one reason effect size is being used more and more in journals, because it does not depend on the sample size. It seems wrong that data with a p-value of .049 would be much more interesting to other researchers, and be taken more seriously, than data with a p-value of .051, when the difference between the two is so small. Yet researchers who stick rigidly to the .05 level of significance may in fact disregard important data purely on the basis of its p-value.
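This point is easy to demonstrate. If we hold the group difference and the spread of scores fixed but replicate the data to inflate n, the t statistic (and hence ‘significance’) grows even though the effect itself has not changed at all. The numbers below are invented for illustration.

```python
import math
import statistics

def t_statistic(a, b):
    """Welch's t statistic for two independent samples."""
    se = math.sqrt(statistics.variance(a) / len(a) +
                   statistics.variance(b) / len(b))
    return (statistics.mean(a) - statistics.mean(b)) / se

# Two groups with a small, fixed mean difference of 0.1.
base_a = [0.1, 0.3, -0.2, 0.2, 0.0, 0.2]   # mean 0.1
base_b = [0.0, 0.2, -0.3, 0.1, -0.1, 0.1]  # mean 0.0

# Replicating the same scores leaves the effect unchanged but inflates n;
# t grows roughly with the square root of the sample size.
for k in (1, 25, 100):
    t = t_statistic(base_a * k, base_b * k)
    print(f"n = {6 * k:4d} per group, t = {t:.2f}")
```

With 6 participants per group the difference is nowhere near significant; with 600 per group the identical difference produces a huge t. The effect size, by contrast, is the same in every row.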
For these reasons I do not believe that ‘statistically significant’ data necessarily reflects a real effect. Effect size should always be considered and reported alongside the results. Just because data does not fall within the ‘statistical boundaries’ of a hypothesis test does not make it irrelevant or useless.
Fisher, R. A. (1925). Statistical Methods for Research Workers. Edinburgh: Oliver and Boyd, p. 43.
Robert Rosenthal coined the term ‘file drawer problem’ in 1979. It refers to bias in the literature due to selective publication: put simply, people prefer to publish positive, significant results rather than negative or non-confirmatory ones. The ‘file drawer effect’ describes how researchers file away studies that fail to support their hypothesis and put forward only those that agree with it. The extreme view of the “file drawer problem” is that journals are filled with the 5% of studies that show Type I errors, while the file drawers are filled with the 95% of studies that show nonsignificant results (Rosenthal, 1979). Publishing papers that support the null hypothesis is often harder than it seems: research that rocks the boat receives more attention than research that tells us nothing new or interesting. Therefore, if you were to survey all the published research on any given question, you would be likely to get a result skewed away from the null hypothesis, when the real answer may in fact be closer to the null than a meta-analysis would suggest. It has been recognised for years that this “file drawer problem” distorts the research literature, creating an impression that positive results are far more robust than they really are (Rosenthal, 1979). Confirming the null hypothesis does not mean you have discovered nothing; you only discover nothing when you do nothing at all. When you have looked for something that could have been there and it wasn’t, that should still be part of the scientific record, because leaving it out confounds meta-analysis. Doctors need negative data in order to know what to prescribe, and we as psychologists need negative results to know which ideas have failed and what to study next. How can we move forward if we keep studying the same concepts because nobody will publish their results?
People often respond that publishing all nonsignificant papers would make the literature boring. But I’m not suggesting any old rubbish gets published: it must have strong methods and a good reason for doing the study in the first place. The medical profession is now largely aware of this problem, and it is becoming common practice to register clinical trials before a study begins so that the results can be published regardless of the outcome. Advance registration isn’t really feasible for most areas of psychology, but thanks to online journals there is no longer competition for page space, and many negative results can now be published. So empty your file drawers, people, and get your findings out there!
Rosenthal, R. (1979). The file drawer problem and tolerance for null results. Psychological Bulletin, 86(3), 638-641. doi:10.1037/0033-2909.86.3.638