Learning to Analyze the Data

Chris Kahlenborn
(Chapter Five)
Breast Cancer: Its Link to Abortion
and the Birth Control Pill
Reproduced with Permission

A basic understanding of this challenging chapter will greatly help one comprehend the data and arguments in the remaining chapters. The more technical sections are at the end of the chapter.

Often when a large new research study is performed, the public reads or hears about it, and the "experts" frequently comment on the findings in public. In order to assess the credibility of these experts and their conclusions, it is important to understand what to look for when analyzing a study. Once the following basic principles are understood, one will be able to judge for oneself the validity of a particular "expert's" commentary.

Q-5A. What are the basic items that one should understand regarding the analysis of medical research studies? The critical items are the following:

The reverse "file-drawer" problem:

The "file-drawer" problem has been described as the tendency to file away negative data1 but its converse, that is the failure to present positive data (ie, the "reverse file-drawer" problem), may be a more serious problem. It is discouraging to think that a researcher, after conducting a research project, might be so biased or may be under so much financial or political pressure that he or she might choose to discard or "file away" those results which do not fit his or her agenda. Although this problem is difficult to prove, it is hard to be confident that all researchers present their more controversial findings because of the "medically correct" atmosphere which currently exists. It is not difficult to believe that pressure from drug companies and organizations that benefit from the sale of contraception or abortion, can influence a researcher enough to cause him or her to "water down" important findings or place controversial results in the "file-drawer."

One example comes from Dr. Brind, who analyzed the work of an Australian researcher, Rohan [2]. In 1988, Rohan et al [3] published a study in the American Journal of Epidemiology regarding dietary risk factors for breast cancer; it contained no information concerning induced abortion and breast cancer. In 1995, however, a study by Andrieu et al was found to contain data from the Australian study showing that it had actually found a 160% increased risk of breast cancer associated with induced abortion. Dr. Brind noted: "The obvious question, of course, is why these findings concerning an eminently avoidable risk factor were suppressed for seven years." [2, p.7]

An example of a slightly different type of "file drawer" problem is a remarkable admission from Mitchell Creinin, a man noted for his advocacy of chemical abortion involving methotrexate and misoprostol (brand name Cytotec®). He stated: "The entire field of early abortion is changing. In the seventies physicians tried to perform them, but we didn't have sensitive pregnancy tests so we sometimes did them (ie, abortions) on women who weren't pregnant" [4, p.26]. This stunning admission ought to shock the medical world, but instead the information was "filed away," so that few in the medical field or among the laity even realize this deception occurred. Beyond its ethical violations (among other things, it admits that some of today's women underwent an "abortion procedure" when they were not pregnant), it should be noted that "controls" who said they had an early abortion may never have had an actual abortion, because they may never have been pregnant. By "filing away" this type of information, this segment of the medical community may have artificially decreased the relative risks that today's papers are able to detect in regard to the risks of early abortion.

The funding conflict:

Money has the ability to corrupt almost every known field and organization, and the research arena is no exception. Unless a researcher is independently wealthy, he or she must rely on financial grants to fund his or her lab and ongoing area of study. Most grants come from agencies of the Federal Government, which include, in descending hierarchical order, the Department of Health and Human Services, the Public Health Service, the National Institutes of Health (NIH), the National Cancer Institute (NCI), and the National Institute of Child Health and Human Development (NICHD), among others. Grants from the government may at first appear to be "clean grants," but one must remember that the U.S. Government spends millions of dollars on hormonal contraception in "foreign aid" (eg, via USAID), as well as giving millions annually to the largest single abortion provider in the U.S. (ie, Planned Parenthood). These examples of government funding certainly create ample opportunity for conflicts of interest where honest research on abortion, hormonal contraceptives, and their link to breast cancer is concerned.

Grants may also come from trusts, foundations, or drug companies. Obviously, if the grantor of the funds has a particular bias, the grantee (ie, the researcher) will have a difficult time publishing results that conflict with the grantor's interest or philosophy. A good medical example concerns the chickenpox vaccine produced by Merck, a huge drug company (see Appendix 3 for a detailed account of how the funding conflict may influence the medical literature in regard to the chickenpox vaccine). The Journal of Pediatrics published a highly favorable article, written by Daniel Huse, on the benefits and costs of the chickenpox vaccine [5]; the article, however, was funded by Merck. This hardly leaves room for an unbiased analysis. This type of event is not atypical, and it occurs in the realm of breast cancer research as well. For example, many authors have accepted grant money for their studies on oral contraceptives from pharmaceutical companies that produce oral contraceptives. Rosenberg [6, 7, 8], Miller [9], Palmer [8], Jick [10], La Vecchia and Parazzini et al [11], and Kay [12] are among some of the top researchers who have accepted funds from drug companies in their research efforts.

Another type of funding conflict occurs when the grantor has demonstrated a particular bias which would leave any receiver of funds from that source in a questionable position. For example, the World Health Organization (WHO) has openly funded research on a vaccine which works by causing an early abortion [13]. It also funded an experiment on human subjects with a controversial injectable contraceptive which at the time even the FDA had rejected because of its propensity to cause breast tumors in beagles [14]. The International Planned Parenthood Federation (IPPF), the Family Planning International Assistance (FPIA), and the U.S. Agency for International Development (USAID) are organizations which spend millions of U.S. taxpayer dollars on contraceptives in third world countries. For example, Population Reports noted that between 1978 and 1981 alone, these three agencies donated a total of over 38 million prescriptions of oral contraceptive pills (OCPs), which translates to about 10 million prescriptions per year. U.S. taxpayers have literally been donating hundreds of millions of dollars to foreign countries -- money which was given to drug companies and family planning agencies. In spite of this conflict of interest, researchers studying oral contraceptives have consistently accepted funds for their work from these controversial sources. Some examples include: Paul et al [15], Thomas et al [16], Lee et al [17], Lindefors-Harris and Meirik [18, 19], and Ellery [20].

The size of the study:

Many studies claim to show a cause/effect relationship but are often so small that the results mean very little. Because breast cancer has so many risk factors, it is usually necessary to study a fairly large group of "cases." One must be careful to note that although a study's authors may claim that their study is large, the actual number of "cases" that might be affected by a particular risk factor may be small. For example, one study may interview 1,000 women and another 10,000 women. One might assume that the second study is larger, but this may be deceptive. If the first study's authors interviewed women under the age of 45 and the second group interviewed women aged 40 to 90 years old, the first group might actually contain more women under the age of 45. Because these are the women who are more likely to have been exposed to having an abortion at a young age and/or early OCP use, the first study may actually be more powerful than the second. (Note: Women who were under the age of 45 as of 1990 would have been more likely to have been exposed to early OCP use than women over the age of 45 as of 1990, because OCPs began to be used more often at an earlier age in the 1970s and 1980s [21, p.9S].)
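This point can be illustrated with a short back-of-envelope sketch (all counts are hypothetical; Python is used here only for the arithmetic). It computes an odds ratio and its 95% confidence interval by Woolf's method for two invented studies: a smaller one rich in exposure-eligible young women, and a larger one with very few.

    import math

    def odds_ratio_ci(a, b, c, d):
        """95% confidence interval for an odds ratio (Woolf's method).
        a = exposed cases, b = unexposed cases,
        c = exposed controls, d = unexposed controls."""
        or_ = (a * d) / (b * c)
        se = math.sqrt(1/a + 1/b + 1/c + 1/d)
        lo = math.exp(math.log(or_) - 1.96 * se)
        hi = math.exp(math.log(or_) + 1.96 * se)
        return or_, lo, hi

    # Hypothetical study A: 1,000 women, all under 45, so many were exposed.
    print(odds_ratio_ci(150, 350, 100, 400))    # ~1.7 (1.3-2.3): informative
    # Hypothetical study B: 10,000 women aged 40-90, few exposure-eligible.
    print(odds_ratio_ci(15, 4985, 10, 4990))    # ~1.5 (0.7-3.3): inconclusive

Despite being ten times larger, the second study's confidence interval spans 1.0, so it cannot distinguish a 50% increased risk from no risk at all.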

A proper latent period:

The time between the influence of a breast cancer risk factor such as radiation, having an abortion at a young age, or early OCP use, and the subsequent clinical manifestation of a breast cancer growth, is called the latent period. Many studies concerning both abortion at a young age and early oral contraceptive pill use claim to show little increase in the rate of breast cancer, but these studies often fail to allow a proper latent period to pass between cause and effect. Why is this important? We noted earlier that a cancerous growth starts when one breast cancer cell starts growing abnormally and that it often takes 15 to 25 years or more to discover that certain factors are indeed true risk factors for breast cancer or other cancers. For example, most people are well aware that it often takes decades for "the cause" of cigarette smoking to result in "the effect" of lung cancer.

In the realm of breast cancer, it was noted that "very young women, survivors of the bombs of Hiroshima and Nagasaki, experienced a radiation dose-related increase in breast cancer incidence, but not until 15 years after exposure." [22, p.985] Another example is that of DES (diethylstilbestrol) -- a synthetic estrogen which was given to pregnant women from the late 1940s through the 1960s to prevent miscarriage and/or premature labor. It took more than 25 years before researchers discovered that DES results in a 35% increased risk of breast cancer, a risk noted to affect even women older than 60 years of age, when the prevalence rates of breast cancer are especially high [23]. Dr. McPherson, a researcher from England, noted both of these effects in his research paper in 1987 [24].

Unfortunately, many researchers have claimed to "find no effect" from either induced abortion in young women or early OCP use, in spite of the fact that they often allowed fewer than 10 or 15 years to pass between the suspected cause and the effect (breast cancer). An editorial in the British journal The Lancet commented directly upon this: "If long-term OCP use in early life does affect the risk, it might show itself in terms of excess cases 10 or 20 years after exposure." [22, p.985] A Swedish researcher, Dr. Olsson, wrote: "If latency time is required, it will take another 15 to 20 years to know if the presently used pills affect the risk (of breast cancer)" [25, p.269]. Finally, Dr. Hulka, commenting on latency, wrote: "More commonly, 15 to 20 years are required, and for a few carcinogens, it may require 30 years to demonstrate their peak effects." [26, p.1624] Thus, studies published from the 1960s until even the late 1970s had little possibility of picking up the effects of induced abortion and/or early OCP use on the risk of developing breast cancer, and authors who claimed to "show no effect" in that era often failed to take the latent period into account. An example of this occurred in the Melbye study [27], in which women who had had abortions were followed for fewer than 10 years, yet the study was rashly proclaimed as "definitely" showing that abortion does not cause breast cancer.

Even a large meta-analysis may fail to give an accurate result if it takes data from studies that were too old (ie, studies from the 1960s and 1970s). For example, a famous meta-analysis performed in Oxford, regarding the risk of OCPs and breast cancer, took more than 50% of its data from women who developed breast cancer before 1985 [21, Table 1]. But researchers Malone and Daling pointed out that "Studies conducted in the latter half of the 1980s may be the first studies conducted in women born recently enough to have used oral contraceptives at a young age and for a long duration followed by a sufficient amount of time to be consistent with the possible induction period for breast carcinogenesis suggested by studies of other risk factors" [28, p.93]. As we shall see shortly, it is indeed the studies of the late 1980s and 1990s that show the largest effects from induced abortion/OCP use and it may well be that, as occurred with DES, the studies showing the highest risks will be those that have allowed a latent period of at least 20 years to pass.

Bias:

(This topic is so important that it merits a separate appendix. For those who wish to gain a more complete understanding, see Appendix 2).

In general, there are two major types of bias. The first concerns the opinion(s) of the author, which he or she may at times fail to separate from true scientific analysis, thereby influencing the data and results. The second concerns the patient's own bias, that is, his or her truthfulness in answering a questionnaire or interview question; this has been referred to as "recall bias." Some researchers have hypothesized that women who have breast cancer answer the question "Did you ever have an induced abortion?" more honestly than women who do not have breast cancer.

The main study which claims to show evidence of recall bias concerning the issue of having an abortion at a young age is a 1991 study, funded by Family Health International and conducted by the researchers Britt-Marie Lindefors-Harris and Olav Meirik from Sweden. Here are their thoughts on recall bias: "We hypothesized that a woman who had recently been given a diagnosis of a malignant disease, contemplating causes of her illness, would remember and report an induced abortion more consistently than would a healthy control" [19, p.1003]. The immediate question which may enter the mind is: Why was this the working hypothesis instead of its direct counterpart? That is, why did these authors not originally hypothesize that a woman who has breast cancer might be less candid about her recall of abortion? After all, "denial" is one of the first reactions that patients have. When a woman is told that she has breast cancer, it is not uncommon for her to deny to herself that she really has it. It would seem just as logical to think that such women would be more likely to deny factors that may have contributed to the breast cancer, such as abortion and/or early oral contraceptive use.

Q-5B: What did Lindefors-Harris' study show?

The study claimed to show that women who had breast cancer were about 50% more likely to report that they had had an abortion, if they indeed had had one in the past, than women who did not have breast cancer. The study, however, has been criticized by Daling, a prominent epidemiologist, who noted that it actually showed only a 16% "recall bias" when analyzed properly. In this Swedish study, among the women who had breast cancer and stated that they had had an induced abortion, the government's national registry recorded 27% as never having had an abortion. Few people would believe that 27% of a group of women would lie and state that they had an abortion when in fact they never had one. Because of this, the study's credibility was called into question in separate publications [1, 29] by two different researchers (Daling, Brind).

In addition, if Lindefors-Harris' hypothesis were correct, it would mean that thousands of other studies in medicine might now be deemed "questionable." Whenever a disease or "effect" is linked to a controversial risk factor (ie, one of the causes), the study might be considered invalid because of "recall bias." Studies on "liver cancer and a history of alcoholism," or "cervical cancer and the number of sexual partners a woman has had," or "the diagnosis of AIDS and the number of homosexual encounters a man has had," are all examples of an effect that is associated with a controversial cause. Accepting the Lindefors-Harris hypothesis implies that all these studies, and thousands of others, are possibly compromised, because they all could suffer from recall bias due to a controversial risk factor.

Q-5C: Is there a way to adjust for recall bias, if it exists?

Yes. There is a fairly direct way to adjust for it, and that is to measure it. Researchers have already done this in the oral contraceptive and breast cancer debate, in which some researchers claimed that women with breast cancer would be more honest about their history of oral contraceptive use. A number of studies refuted this claim by going back to a woman's medical records and comparing her interview response to the written record. All three of the studies that did this found less than a 2% difference between the "case" and "control" responses [30, 31]. The same technique can be used in the studies involving abortion and breast cancer. Most good obstetricians and gynecologists obtain a thorough medical history of their patients, especially on the initial visit, and a standard question is to ask a woman how many miscarriages and/or induced abortions she has had. If one wished to measure the degree of "recall bias" between "cases" and "controls," one would simply have to compare their oral responses given in a study's interview to the written medical record -- any degree of bias could be recorded and accounted for.
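A minimal sketch of that measurement, assuming purely hypothetical interview/record pairs, might look as follows: for each group, count how often the interview answer disagrees with the written record, and take the difference between groups as the recall bias to adjust for.

    # Hypothetical data: one (interview_answer, record_answer) pair per woman.
    def discrepancy_rate(pairs):
        """Fraction of women whose interview answer about induced
        abortion disagrees with the written medical record."""
        disagreements = sum(1 for said, recorded in pairs if said != recorded)
        return disagreements / len(pairs)

    cases    = [(True, True)] * 90 + [(False, True)] * 10  # 10% under-report
    controls = [(True, True)] * 85 + [(False, True)] * 15  # 15% under-report

    bias = discrepancy_rate(controls) - discrepancy_rate(cases)
    print(f"measured recall bias: {bias:.0%}")  # 5%, to be accounted for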

The age factor:

The "age factor" is the most critical factor to be aware of in the studies concerning breast cancer, so Appendix 1 has been set aside for the reader who wishes to gain a deeper understanding of this important factor. Even the following abbreviated explanation is a bit challenging, but it is important to understand so please be patient and try to work through it.

When one performs a retrospective study, it is crucial that the "case group" and the "control group" be matched for both average age and distribution of age: not only should both groups have the same overall average age, they should also have the same spread of ages. Obviously, if two groups have very different average ages, the younger group may well have been exposed to different risk factors than the older group. For example, if one compared a group of 65-year-old women to a group of 40-year-old women in the year 1998, one might expect the younger group to have experienced more abortions and early oral contraceptive use, simply because these two risk factors were not generally available to the older women.

McPherson et al noted this in 1987 when they wrote that OCP (oral contraceptive pill) use before a first full-term pregnancy (FFTP) was markedly different in women over the age of 45 compared to those under 45: "Among the older group, barely 3% had any OCP use before first term pregnancy, while in the younger group around 25% reported such use" [24]. It is obvious that when comparing the two groups, namely women with and without breast cancer, the latter (ie, the "controls") may have had significantly more early OCP use and/or abortions if they are even 1 or 2 years younger than the "case group." This effect could easily be responsible for the Bostonian researcher Lynn Rosenberg's "statistically borderline" finding of an increased relative risk of 40% for developing breast cancer from having an induced abortion at a young age [RR=1.4 (1.0-1.9)]: in her study [8], the mean age of the "cases" was 52 years, whereas that of the "controls" was 40! In another example, this time concerning OCPs and breast cancer, Paul's New Zealand study suffered from a large age difference, with the "cases" being almost 4 years older than the "controls" [15]. This study may well have shown a larger relative risk, which might have been statistically significant, had she matched the "cases" and the "controls" properly. Unfortunately, the researcher who claims to find "no difference" between two groups while failing to match subjects for age at the beginning of the study may well be masking the real cause/effect relationship.
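The size of this distortion is easy to demonstrate with a toy simulation (all numbers assumed; the 25%/3% exposure rates merely echo McPherson's figures above). Even when the exposure carries no true risk, giving the "cases" a mean age of 52 and the "controls" a mean age of 40, as in Rosenberg's study, produces a crude odds ratio far from 1.0:

    import random
    random.seed(0)

    def exposure_prob(age):
        # Early OCP use is far more common in the younger birth cohort.
        return 0.25 if age < 45 else 0.03

    cases    = [random.gauss(52, 8) for _ in range(1000)]  # mean age 52
    controls = [random.gauss(40, 8) for _ in range(1000)]  # mean age 40

    exp_cases    = sum(random.random() < exposure_prob(a) for a in cases)
    exp_controls = sum(random.random() < exposure_prob(a) for a in controls)

    odds_cases    = exp_cases / (1000 - exp_cases)
    odds_controls = exp_controls / (1000 - exp_controls)
    # Roughly 0.3-0.4 despite a true odds ratio of 1.0: the age mismatch
    # alone deflates (here, even reverses) the apparent risk.
    print(odds_cases / odds_controls)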

The Stack Effect: This section is technically challenging but important to understand.

As concerns the "distribution of age," two groups of women may have the same average age, such as age 40 as noted in Figure 5A, but one group may have many more women around age 40 (the "cases"), whereas the other group may have a number of women who are very young and very old, and fewer women who are near age 40 (the "controls"). If the "controls" had many women in their 20s as well as their 50s, then the two groups would have totally different distributions of ages. Another way of saying this is that the "control group" is "stacked"; that is, the researcher has "weighted" or "stacked" women at both the young and old ends of the age scale. When two groups have the same average age but a different distribution of age, they will almost always have been exposed to different breast cancer risk factors (eg, having an abortion at a young age and/or early OCP use), and this may well reduce the credibility of an entire study. In many research studies the "control group" has a stacked distribution of its women. Because this group has disproportionately more younger women, it has the effect of reducing the relative risks that a study finds, because younger women usually have greater participation in early OCP use and abortion than older women. Appendix 1 examines this critical effect in detail.


Figure 5A: The Stack Effect


Researchers often study a limited number of younger women with breast cancer (eg, "cases" under the age of 35), because of the low prevalence of this cancer in younger women. Often they "overmatch" or "oversample" (ie, put more women in one group than in another) these younger groups with an excess of young "controls" of the same age in an attempt to "increase the statistical stability" of any findings in these lower age groups. Dr. Newcomb noted this in her 1996 study on the risks of OCPs in the U.S.: "The controls were selected at random to have an age distribution similar to that of the cases, but were oversampled in younger age strata in the New England states to increase statistical power" [32, p.526]. Unfortunately, this attempt to "oversample" or "overmatch" young "cases" among the "controls" has led to one of the largest and most unacknowledged flaws in many major research studies, and it has been recognized by only a few discerning researchers, such as Pike and Bernstein [33, 34], and Olsson [35].

What effect does "stacking the data" (ie, the stack effect) have on the relative risks of a particular risk factor such as having an abortion at a young age and/or early OCP use? When researchers "stack" a study by oversampling the "controls" in the younger age groups, they end up underestimating the relative risk for early OCP use and/or having an abortion at a young age. Why? Women in the late 1970s and the 1980s used OCPs far earlier and longer, and had more abortions at young ages, than women did in the late 1960s and early 1970s [21, p.9S]. Thus, if the younger "controls" (eg, women under the age of 35) are oversampled compared to the younger "cases," the "control group" will be much more likely to contain women who had early OCP use and/or an abortion at a young age, and this will artificially inflate that group's apparent risk. But artificially inflating the "control group's" risk is another way of saying that one has artificially deflated the "case group's" risk. In other words, when the control group is stacked, one ends up underestimating the risks of developing breast cancer from abortion performed at a young age and/or early OCP use.
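The arithmetic of this deflation can be shown in a few lines (the distributions below are hypothetical). Both groups average exactly age 40, but the "controls" are stacked at the extremes; since early OCP use is concentrated in the youngest bracket, the crude odds ratio falls below 1.0 even when the true effect is null:

    # Hypothetical exposure rate (early OCP use) by age bracket:
    exposure      = {25: 0.30, 40: 0.10, 55: 0.02}
    cases_dist    = {25: 0.10, 40: 0.80, 55: 0.10}  # concentrated near 40
    controls_dist = {25: 0.30, 40: 0.40, 55: 0.30}  # "stacked" at both ends

    def mean_age(dist):
        return sum(age * weight for age, weight in dist.items())

    def exposed_fraction(dist):
        return sum(exposure[age] * weight for age, weight in dist.items())

    # Same average age (40) in both groups...
    assert abs(mean_age(cases_dist) - mean_age(controls_dist)) < 1e-9

    # ...yet stacking alone drags the crude odds ratio to ~0.80:
    p_case = exposed_fraction(cases_dist)
    p_ctrl = exposed_fraction(controls_dist)
    print((p_case / (1 - p_case)) / (p_ctrl / (1 - p_ctrl)))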

Obviously the "stack effect" is critical if the "control group" has a larger percentage of subjects in the lowest age brackets, as noted again in Figure 5A. A difference of 1% or 2% between two groups' age distributions in the younger brackets can alter the outcome of an entire study. It means that a study whose results claimed to show "no real risk" may in fact reflect a real risk (this is referred to as a "false negative"), and studies which did show a real risk may well have shown a greater one had the stack effect been compensated for when the study was designed. In other words, when the "control group" is stacked, it generally means that the risks of early OCP use and/or having an abortion at a young age are even greater than those stated in the paper. Unfortunately, some of the best known studies in both the breast cancer/abortion and breast cancer/early OCP use fields are riddled with this effect.

In regard to the abortion/breast cancer studies, we find this effect in Janet Daling's 1994 study, in which a larger percentage of women were concentrated in the younger age bracket in the "control" group than in the "case" group (17% "controls" vs. 8% "cases" in the 21-30 year-old bracket), and again in the 1996 study (19.7% "controls" vs. 14.6% "cases" in the 20-34 year-old bracket). Both show a large stack effect. In spite of the positive findings for abortion as a risk factor in these studies, the results may well have shown abortion at a young age to be even riskier, and with stronger statistical significance, had this effect been eliminated by proper age-distribution matching from the onset.

The stack effect is even more prevalent in the OCP/breast cancer literature. Some of the most prominent studies show it: the CASH study (Cancer and Steroid Hormone Study) [36] noted that 3.9% of the "controls" versus 0.5% of the "cases" were in the youngest age bracket of 20 to 24 year-olds, and 24.8% of "controls" versus 16.2% of "cases" were less than 35 years old; the WHO (World Health Organization) study by Thomas [37] showed that 35.8% of the "controls" were under the age of 35 whereas only 14.2% of the "cases" were; Paul's New Zealand study [15] had 21.7% of the "controls" in the youngest age bracket of 25 to 34 year-olds, versus only 7.2% of the "cases"; and Emily White's large study [38] in the 1994 Journal of the National Cancer Institute showed that 17% of the "controls" versus 9% of the "cases" were in the 21 to 30 year-old age bracket.

Fortunately, as noted earlier, some researchers have commented on this effect, especially in regard to the CASH study. For example, Pike and Bernstein wrote: "But in the CASH study, where the age distributions of cases and controls differ so much, adjustments are important. . . In neither of the CASH papers are the adjustments for age or the rationale for lack of adjustment for region described in detail, but it does not seem that the required finely stratified age-adjusted analyses were made." [34, p.615] In a later article, in the July 15, 1989 issue of The Lancet, Pike and Bernstein again took issue with the stack effect in CASH: "In the footnote to their Table 1, however, they (ie, the authors of CASH) state that, in a logistic regression analysis, age (and hence cohort) is adjusted for as a continuous variable. This is inadequate; as we stated previously, 'in data on OCP use where there are striking changes by birth cohort and by single years of age, adjustment needs to be made in single years for both birth cohort and age.'" [33, p.158] Last, Olsson summarized the stack effect most concisely in his response to the CASH study (commenting on the work of Dr. Stadel, who was involved with that study): "Sir, -- In view of the high relative risks found in our study of oral contraceptives (OC) and breast cancer, we were surprised by the negative findings of Dr. Stadel and colleagues. The seemingly contradictory results between the two studies may be explicable by a flaw in Stadel's statistical analysis. As your Nov. 2 editorial notes, there is a strong time trend in the exposure to OC in young ages. The Stadel study design does not seem to take this effect into account. This could imply that the young OC starters among the controls represent women born recently and thus having short latency times. The relative risk would be biased downward because of interaction bias from the shorter latency times for this young group. Adjusting for age alone will (not) thus eliminate such a bias." (emphasis added) [35, p.1181]

In short, the widespread presence of the stack effect in many prominent studies certainly has the effect of underestimating the real risk of early OCP use and/or having an abortion at a young age. Hopefully, future studies will avoid this subtle but critical flaw.

Journal bias:

Is there really a "medical correctness" in the medical literature? It would appear that many of the prominent journals, especially those published in the U.S., have become more ideological than idealistic. They seem to have embraced a certain "medical correctness" instead of letting the data speak for themselves. For example, in 1992 the New England Journal of Medicine [39] published a review on breast cancer that never mentioned abortion as a possible risk factor. In addition, the authors stated that "The use of oral contraceptives appears to increase the risk of breast cancer by 50%, but the excess risk drops rapidly after the drug is stopped, suggesting a late-stage tumor promoting effect." [39, p.322] As Chapters 9, 10, 11, and 12 will show, their statements and/or omissions are imprecise at best.

Unfortunately, three of the most prominent journals published in the U.S. are closely tied to major medical groups: the AMA (American Medical Association), which publishes JAMA; the ACOG (American College of Obstetricians and Gynecologists), which publishes Obstetrics and Gynecology; and the AAP (American Academy of Pediatrics), which publishes the journal Pediatrics. All three have endorsed early contraceptive use as well as induced abortion [40], with the ACOG going so far as to oppose a federal ban on partial birth abortion (October 9, 1997; The New York Times). It is therefore no surprise that certain American authors (eg, Brind et al [1] and Howe et al [41, 13]) have ended up publishing their findings in the British literature; apparently, the British have more "tolerance" for an open presentation of controversial findings. The irony is that, although women in the U.S. are at higher risk for breast cancer due to their higher rates of abortion at a young age and early OCP use, their own country's medical establishment appears to be muzzling an open discussion of these very factors.

A measurable risk factor:

As noted previously, breast cancer is a complicated cancer that is multifactorial in origin. There are at least ten different factors that influence a woman's risk of developing breast cancer. In order to pick up the influence of a particular risk factor in a significant way, it must have a certain prevalence in the group of women that is being studied or it could be missed. What does this mean, and why is this important? It means that if a factor is a true risk factor for a disease, one may miss this fact if one were to study a population in which that factor was very uncommon.

It is known that men with AIDS develop non-Hodgkin's lymphoma (NHL) almost one hundred times more frequently than the general population [42]. If one studies people who have lymphoma in two different populations -- one in which AIDS is common and one in which it is not -- one may very well miss the fact that AIDS is a real risk factor for lymphoma in the latter. For example, if one studied 1,000 men who had lymphoma in San Francisco, one would expect that a number would have AIDS, and it would be easy to see that AIDS predisposed men to the development of non-Hodgkin's lymphoma. But what if one studied a group of people in whom AIDS is far less common? If one performed the same study in 1,000 people with lymphoma in Kansas -- where the rate of AIDS is far lower than in San Francisco -- one might find that one cannot measure the risk of AIDS simply because so few of the population have it.
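A two-line sketch (with prevalences assumed purely for illustration) makes the problem concrete: the expected number of exposed subjects among 1,000 "cases" is simply the product of the group size and the exposure's prevalence in that population.

    def expected_exposed(n_cases, prevalence):
        """Expected number of exposed subjects in a group of cases."""
        return n_cases * prevalence

    print(expected_exposed(1000, 0.05))    # high-prevalence city: 50 exposed
    print(expected_exposed(1000, 0.0005))  # low-prevalence region: 0.5 exposed

With an expected half of one exposed case, no relative risk, however large, can be estimated with any statistical stability.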

This phenomenon is quite important when one wishes to determine if a factor such as having an abortion at a young age and/or contraceptive use is a significant risk. Why? Because the rates of early contraceptive use and/or abortion vary widely depending on which group of women one is studying, as well as the country and period of time in history one studies. For example, women in the U.S. and other countries around the world are using OCPs more frequently at younger ages today than they did in the 1960s and 1970s [28, p.81; 43, p.709; 21, Table 14]. This means that if one were to compare women from the late 1960s and early 1970s to those of the 1980s and 1990s, the latter would have used OCPs far more frequently before a first full-term pregnancy (FFTP) than the former. Studies which were conducted in the 1960s or 1970s could have failed to pick up the risk of early OCP use, because it was less prevalent in those decades. The converse is that future studies may demonstrate more accurately how big a risk factor early OCP use actually is.

Another important consideration in regard to the prevalence of a factor is the place in which a study is conducted. The U.S. has one of the highest rates of women who have had an abortion at a young age and have used OCPs. Tietze pointed out that although every other country in the world had an abortion rate of less than 30 per 1,000 in women under the age of 19 in the early 1980s, the U.S. had a rate of 44.4 per 1,000 [44, p.46]. In addition, the U.S. has the "distinction" of having more than three times the rate of abortions of any other country in the world among young women aged 13 or 14 years old [45, p.252]. The U.S. also has the highest cumulative amount of OCP use when measured in total number of prescriptions used in a given country [46]. Even in the early 1980s, more than 84% of women born after 1950 had taken OCPs (CASH study [21, p.36S]). Because the U.S. unfortunately leads the world in the prevalence of these risk factors -- especially in young women -- studies of its population carry more weight than those of almost any other country.

Another example of this phenomenon is pointed out in a study from Sweden [47]. Olsson et al noted that although their study had only 174 premenopausal women with breast cancer -- compared to the CASH study, which had 2,089 women under the age of 45, or the New Zealand study, which had 388 women under the age of 45 -- the Swedish study may have had far more statistical power, because Sweden has had a high rate of early OCP use for many years: "Thus, one possible reason for the discrepancies between our results and those of the CASH study and the New Zealand study is the higher rate of OCP use among women at an early age in southern Sweden. Use of OCPs before the age of 21, in combination with a minimum of 15 years latency after the age of 25, was 12 times more common (6% vs. 0.5%) among controls in this study than those in the CASH study" [47, p.1003].
