26. Errors and bias in epidemiological studies

  • Chance, bias, and confounding should be ruled out in order to talk about a valid statistical association
    • A valid statistical association does not imply causation!
    • For example, the incidence of prostate cancer has increased in recent years, as have sales of flat-screen TVs, but one obviously does not cause the other
  • The two types of errors in epidemiology
    • Type 1 error – when we conclude that there is a difference when in reality there is no difference
      • To avoid type 1 errors, we require statistical significance before concluding that a difference exists, e.g. a p-value < 0.05 or a confidence interval which excludes the null value
    • Type 2 error – when we conclude that there is no difference when in reality there is one
      • To avoid type 2 errors, we must increase the statistical power of the study by using a large sample size and accurate measurements (see the sketch below)
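
A minimal Python sketch of these two error types (an illustration with assumed numbers, not part of the original notes): the significance threshold alpha caps how often we commit a type 1 error when there is no real difference, while a larger sample size reduces the type 2 error rate, i.e. increases power, when a real difference exists.

 import numpy as np
 from scipy.stats import ttest_ind

 rng = np.random.default_rng(0)
 ALPHA = 0.05       # significance threshold; caps the type 1 error rate
 N_TRIALS = 2000    # number of simulated studies

 def rejection_rate(sample_size, true_difference):
     """Fraction of simulated studies in which the null hypothesis is rejected."""
     rejections = 0
     for _ in range(N_TRIALS):
         control = rng.normal(0.0, 1.0, sample_size)
         treated = rng.normal(true_difference, 1.0, sample_size)
         _, p = ttest_ind(control, treated)
         rejections += p < ALPHA
     return rejections / N_TRIALS

 # No true difference: every rejection is a type 1 error, expected near 5 %
 print("type 1 error rate:", rejection_rate(sample_size=30, true_difference=0.0))

 # True difference of 0.5 SD: every non-rejection is a type 2 error,
 # and the type 2 error rate shrinks (power rises) with a larger sample
 print("power with n = 30: ", rejection_rate(sample_size=30, true_difference=0.5))
 print("power with n = 100:", rejection_rate(sample_size=100, true_difference=0.5))
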
  • Bias
    • A bias is a systematic error which leads us to conclusions which are systematically different from the truth
    • Unlike random error, bias typically does not affect the compared groups equally, and it cannot be reduced simply by increasing the sample size
  • Selection bias
    • Occurs when the sample group is not representative of the population from which it is drawn
    • Examples
      • Healthy worker effect – the working population is healthier than the general population, so a sample of working people does not represent the general population
      • Volunteer bias – people who volunteer to join a study have different characteristics than the general population
        • Volunteers are generally healthier, have lower mortality, and are more likely to comply with doctor’s orders
    • Prevented by randomizing people into the groups rather than selecting them, and by making sure the sample is representative of the population (see the sketch below)
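
A small Python sketch of selection bias using the healthy worker effect (all risks here are made-up, illustrative numbers): estimating mortality from a sample of workers alone underestimates the mortality of the general population, whereas a random sample of the whole population stays representative.

 import numpy as np

 rng = np.random.default_rng(1)
 N = 100_000

 employed = rng.random(N) < 0.6                  # assume 60 % of people work
 # assumed one-year mortality risks: the working population is healthier
 mortality_risk = np.where(employed, 0.01, 0.03)
 died = rng.random(N) < mortality_risk

 print("true population mortality:  ", died.mean())
 print("estimate from workers only: ", died[employed].mean())    # biased downwards

 # a random sample drawn from the whole population remains representative
 sample = rng.choice(N, size=5_000, replace=False)
 print("estimate from random sample:", died[sample].mean())
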
  • Information bias
    • Occurs during collection, analysis, and interpretation of data
    • Examples
      • Recall bias – people who are diseased may recall their exposure to risk factors better than those who are healthy
      • Interviewer bias – different interviewing approaches towards different groups prompt different responses
      • Publication bias – when the outcome of a study influences the decision to publish the study
        • Studies which find no association are often not published, despite these results being as important as studies which find associations
  • Misclassification – assigning a subject to the wrong group (e.g. classifying an exposed person as unexposed); a form of information bias
  • Bias in screening
    • Volunteer bias
    • Length bias
      • Screening selectively identifies patients with a long preclinical and clinical phase and less frequently identifies patients with shorter phases
      • These patients, whose disease progresses slowly, would have had a better prognosis regardless of the screening program, which makes screening appear more beneficial than it really is
    • Lead-time bias
      • Screening causes cases to be diagnosed earlier in the natural history of the disease
      • This makes it seem like screened patients live longer, but the apparent gain in survival is due only to the earlier diagnosis, not because the earlier treatment actually prolongs life (see the sketch below)
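
A toy Python sketch of lead-time bias (the years are invented for illustration): moving the diagnosis earlier increases the measured survival time even though the patient dies at exactly the same point in the natural history of the disease.

 death = 10                  # year the patient dies; unchanged by screening in this example
 clinical_diagnosis = 7      # year of diagnosis when symptoms appear
 screening_diagnosis = 4     # year of diagnosis with screening (3 years of lead time)

 survival_without_screening = death - clinical_diagnosis     # 3 years
 survival_with_screening = death - screening_diagnosis       # 6 years

 print("survival after clinical diagnosis: ", survival_without_screening, "years")
 print("survival after screening diagnosis:", survival_with_screening, "years")
 print("apparent gain (pure lead time):    ", survival_with_screening - survival_without_screening, "years")
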
  • Atomistic fallacy
    • Observations at the individual level are not necessarily true at the population level
    • Examples
      • Infant mortality is associated with low birthweight at the individual level, but not at the population level (illustrated below)
      • The same applies to CHD and income, and to suicide and income
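
A toy worked example of the atomistic fallacy in Python (all counts are invented): within each country, low-birthweight infants have the higher mortality risk, yet the country with fewer low-birthweight infants has the higher overall infant mortality, so the individual-level association does not appear at the population level.

 countries = {
     # name: (low-birthweight infants, their deaths, normal-birthweight infants, their deaths)
     "Country A": (1_000, 40, 9_000, 90),    # higher birthweights, but other causes of infant death are common
     "Country B": (3_000, 60, 7_000, 35),
 }

 for name, (low_n, low_d, norm_n, norm_d) in countries.items():
     print(name)
     print("  mortality risk, low birthweight:   ", low_d / low_n)
     print("  mortality risk, normal birthweight:", norm_d / norm_n)
     print("  overall infant mortality:          ", (low_d + norm_d) / (low_n + norm_n))
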