Necessary Groundwork

Chris Kahlenborn
(chapter four)
Breast Cancer: Its Link to Abortion and
the Birth Control Pill
Reproduced with Permission

Before starting this chapter, the lay reader is again encouraged to be patient because it is slightly technical. The following groundwork will help one understand why certain studies are more important and/or better designed than others. By gaining an understanding of how these studies are constructed and analyzed, one will be able to decide what the data really show and mean. To do this, some research vocabulary must be reviewed.

Q-4A: What is a retrospective study?

In the medical and/or scientific realm, researchers attempt to discover if a particular cause and effect are related. Does having an early induced abortion (the suspected cause) increase the risk of breast cancer (the effect)? The process is not simple, because there are many factors which increase or decrease the risk of breast cancer as noted in Chapter One. Furthermore, even if a particular factor is shown to be associated with a particular effect, one must be careful not to assume that the factor in question caused the effect. If one finds that women with breast cancer have a higher incidence of having induced abortions at a young age, does this prove that having an abortion causes breast cancer? No, although this certainly would make one suspicious that a "cause and effect" relationship could exist. Why is this so? Because having an early induced abortion may be associated with another factor which is the "real culprit." For example, suppose that one found that women who had an early abortion also tended to have children at a later age, or perhaps had used oral contraceptive pills (OCPs) more frequently than women who did not have an abortion at a young age. It then might be the case that these "associated factors" (ie, the late age at first birth or the early OCP use) were actually increasing the rate of breast cancer in women who had an abortion at a young age. Should this be the case, then having an abortion at a young age would simply be a factor which is associated with the "real causal factor(s)" and thus would not necessarily be a cause in itself.

In order to deal with this problem, statisticians and researchers employ a type of study called a retrospective study. Such a study design is ideal for the situation in which there are many causes for a particular effect (ie, breast cancer). In this type of study, one works "backwards" or "retrospectively" by looking at the effect and going back to try to find the cause or causes of that effect. For example, in this book we will examine studies that looked at groups of women who had or have breast cancer (ie, women who experienced the effect) and went back to try to determine whether a particular factor is a cause (eg, having an abortion at a young age, or using OCPs at an early age). The retrospective study has almost always been the preferred type of study when one wishes to measure the cause of an effect, such as breast cancer, that has a fairly small (overall) incidence in the general population. (The reason for this will be explained in the following section under prospective study.)

In a retrospective study researchers compare two groups of women -- for example, women with breast cancer and women without breast cancer. The women with breast cancer are generally referred to as the "case group" or "cases,"* and those without the cancer are called the "control group" or "controls."* (* I write "cases" and "controls" with quotations to remind myself to avoid losing sight of the fact that behind the technical terms are real women who have real problems, eg, breast cancer.) The researcher(s) will interview a large group of women ("cases" and "controls") -- let us say 1,000 of each -- and attempt to match them for age so that both groups have participants with the same overall average age and distribution of age. The researcher can then ask the participants about any risk factor which he or she wishes to study. As a rule, a good researcher will ask about every possible factor that could cause breast cancer. Failure to do so could invalidate a study, because one might fail to account for a major risk factor. A typical retrospective study on breast cancer will ask the "cases" and "controls" such questions as: Do you have any family history of breast cancer? Do you have a history of any particular type of fibrocystic disease? What was your age at menarche (ie, the onset of a woman's monthly cycles) and/or menopause? Did you ever use oral contraceptive pills (OCPs)? At what age did you give birth to your first child (if you have children)? Did you ever breastfeed? Did you ever have an induced abortion? ...

After obtaining responses from both groups, the researcher can then compare them and look for any differences in trends between the two groups. For example, almost every study should find that the "case" group (ie, those women with breast cancer) had more women who answered affirmatively to the question "Do you have any family history of breast cancer?" than the "control" group. If 200 out of 1,000 women in the "case" group had answered "yes," versus only 100 out of 1,000 in the "control" group, one would think that, according to this study, there is roughly a 2-fold risk of developing breast cancer if one has a positive family history.
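
For readers who like to check such arithmetic with a short computer program, here is a minimal Python sketch of this hypothetical family-history example (the counts are the invented ones above, not data from any actual study):

    # Hypothetical counts: 200 of 1,000 "cases" and 100 of 1,000 "controls"
    # answered "yes" to having a family history of breast cancer.
    cases_yes, cases_total = 200, 1000
    controls_yes, controls_total = 100, 1000

    p_cases = cases_yes / cases_total           # 0.20
    p_controls = controls_yes / controls_total  # 0.10

    # The ratio of the two proportions is the "roughly 2-fold risk" in the text.
    print(f"Proportion of 'cases' with a family history:    {p_cases:.2f}")
    print(f"Proportion of 'controls' with a family history: {p_controls:.2f}")
    print(f"Ratio of the two proportions: {p_cases / p_controls:.1f}")

(The odds-ratio form of this calculation, introduced in Q-4D below, would give 2.25 for the same counts.)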

After collecting the data, the researcher(s) will enter it into a computer with a particular "statistical package" -- that is, a statistics software program that analyzes the data. Modern-day statistics programs are so sophisticated that most researchers have little idea of the detailed manner in which they process the data, and researchers usually work with a professional statistician to make sense of the results. The statistical program will compute what is called an "adjusted relative risk" for a particular risk factor. An adjusted relative risk is the risk of a particular factor after the statistics program has "adjusted for" or "factored out" the influence of the various other possible causes.

For example, suppose that in a particular study the "case group" had double the incidence of having an abortion at a young age compared to the "control group," but the "case group" also tended to have a higher incidence of a positive family history of breast cancer. The computer would adjust for the influence of family history and might show that, instead of conferring a 2-fold risk of breast cancer, having an abortion at a young age actually confers only a 1.6-fold increased risk.

The classical paper that describes the advantages and background of the retrospective study was written by Mantel and Haenszel1 and is consistently referred to by almost all researchers. Retrospective studies have many advantages over prospective studies when studying a subject such as "the causes of breast cancer." In general they are less expensive, require far fewer subjects, and take much less time.

Q-4B: What is a prospective study?

As their name implies, prospective studies attempt to find the "cause(s)" in a "cause and effect" relationship by going forward or "prospectively" through time. How would one perform a prospective study in order to see if early oral contraceptive pill (OCP) use increased a woman's risk of breast cancer? One would start with two large groups, namely, women who used OCPs at an early age (called the "cohort group") and women who did not (called the "control group"). These groups should be carefully matched for age as well as for other risk factors (eg, same country of birth). Both the "cohort" and "control" groups would then be followed over many years, and one would note whether there was any overall difference in the rate of development of breast cancer between the two groups over time.

There are many difficulties with a prospective study, especially as concerns breast cancer. First, it requires very large study groups. Because only a small percentage of young women will develop breast cancer over 20 years, one usually needs to start out with a study group that contains thousands, if not tens of thousands, of women in order to notice any appreciable difference between the "cohort" and "control" groups over time. This is extremely expensive and often requires a "national effort" and the coordination of dozens of researchers. Even then, the results may not have much "statistical power" (defined shortly) because there may be very few women out of the original group who develop breast cancer.

Second, there is the "drop-out factor." Often prospective studies rely on women to complete an annual survey. Many subjects simply drop out with time for various reasons. It would not be uncommon to start out with a group of about 50,000 women, only to see it dwindle to 5,000 before 12 years had elapsed, as occurred in the Lindefors-Harris study2. This could obviously influence the results of the study.

Third, one can easily "fail to adjust" for a particular factor, which certainly weakens a study's validity. For example, if researchers enrolled women during the 1970s or early 1980s, most would not have asked them about their history of induced abortion, because this factor was not widely discussed until the early 1990s. Unfortunately, one would then have a situation in which a major risk factor had never been inquired about in the original surveys, obviously limiting the usefulness of the study. An example of this occurred in a very large Danish prospective study conducted by Melbye et al3. Because the study was prospective in nature and obtained its information from governmental data banks, certain key variables were not adjusted for, including: 1) a family history of breast cancer; 2) a history of oral contraceptive use; 3) a history of alcohol use; 4) age of menarche; and 5) age of menopause. This weakness was even acknowledged by Patricia Hartge, who wrote an editorial in the New England Journal of Medicine that was otherwise complimentary to Melbye et al. Thus, although prospective studies offer certain advantages, the problems just discussed can severely limit their usefulness.

A fourth concern is the length of the study. It may take 20 or 30 years for a risk factor (eg, smoking) to show up as a cause of a particular cancer (ie, lung cancer). Obviously this presents problems, because so many variables may change over 20 to 30 years. Unfortunately, some studies, such as that of Melbye et al3, claimed to be "definitive" but allowed less than 10 years of average follow-up time in their cohort groups.

Q-4C: What is a "relative risk" (RR)?

An author may conclude that women who had an abortion at a young age had a "1.5 relative risk" of getting breast cancer compared to women who did not have an abortion. What does one mean by a "1.5 relative risk"? It means that women who had an abortion at a young age have a 1.5-fold increased risk, or stated another way, they have a 50% increased risk of developing breast cancer compared to women who did not have an abortion at a young age. (A RR of 2.0 would mean a 100% increased risk, a RR of 3.3 a 230% increase, etc.)
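
For those who want to verify the conversion themselves, the rule is simply (RR - 1) x 100; a few lines of Python reproduce the figures just quoted:

    # Convert a relative risk (RR) into a percent increase in risk.
    for rr in (1.5, 2.0, 3.3):
        print(f"RR = {rr}  ->  {round((rr - 1) * 100)}% increased risk")
    # Prints 50%, 100%, and 230%, matching the examples above.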

Q-4D: Can you demonstrate how to calculate a relative risk with an example?

Sure. Let us take a hypothetical example in which 100 women have breast cancer (the "cases") and 1,000 women do not have breast cancer (the "controls"), where 10 of the "cases" had an abortion at a young age and 69 of the "controls" did. The data and estimated relative risk are shown in Table 4A.


Table 4A:
A Sample Study and Relative Risk

                                                     Women with Breast      Women without Breast
                                                     Cancer ("Cases")       Cancer ("Controls")
Total Number in Group                                       100                   1,000
Women who had an abortion prior to
their first full-term pregnancy                           10 = A                  69 = C
Women who did not have an abortion prior to
their first full-term pregnancy                           90 = B                 931 = D

Estimated Relative Risk: A/C divided by B/D = (A x D) / (B x C) = 1.50


The formula (A x D)/(B x C) = 1.50 is actually the definition of what is called the odds ratio (OR). Of note, the true definition of the relative risk is [A/(A+C) divided by B/(B+D)], which here equals 1.44; but in diseases with a low incidence, where (A + B) is small compared to (C + D), the relative risk may be estimated as (A x D)/(B x C).4 Here the odds ratio is 1.50 and, because the stated conditions are fulfilled, it serves as an estimate of the relative risk. In general, this means that women who had an abortion at a young age have a 1.50-fold increased risk (ie, a 50% increase) of developing breast cancer compared to women who did not. How did we get this?

First, we took women who had an abortion at a young age and noted that 10 (A) of them had breast cancer and 69 (C) did not, for a ratio of 10/69 = 0.145.

Then we took women who did not have an abortion at a young age and noted that 90 (B) had breast cancer and 931 (D) did not, for a ratio of 90/931 = 0.0967.

Finally we divide 0.145 by 0.0967 to obtain an odds ratio or estimated RR of 1.50.

Now the question is, how much more likely are women who had an abortion at a young age, compared to those who did not, to develop breast cancer?

In layman's language, women who had an abortion at a young age in this example had about a 50% increased breast cancer rate compared to those who did not. Why is it so important to understand this term? Because, if you, the reader, ever browse through a paper and wish to "check the writer's conclusions," it is easy to calculate the odds ratio or estimated relative risk and compare your result to the authors'. Another advantage of understanding the term "relative risk" and being able to calculate it is that often an author will present the raw data in a table but make no statement as to what the relative risk is. If in Table 4A the author had simply said that women who have had an abortion at a young age are at "slightly increased risk," without telling the reader what the actual number was, the reader could calculate that "slightly increased risk" means a 1.50 estimated relative risk, that is, a 50% increased risk.
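
For readers who prefer to let a short program do the arithmetic, here is a minimal Python sketch that reproduces the Table 4A calculation (the counts are the hypothetical ones from the table):

    # Hypothetical counts from Table 4A.
    A = 10    # "cases" (breast cancer) who had an abortion before their first full-term pregnancy
    B = 90    # "cases" who did not
    C = 69    # "controls" (no breast cancer) who had such an abortion
    D = 931   # "controls" who did not

    odds_ratio = (A * D) / (B * C)                      # (A/C) divided by (B/D)
    true_relative_risk = (A / (A + C)) / (B / (B + D))  # [A/(A+C)] divided by [B/(B+D)]

    print(f"Odds ratio (estimated relative risk): {odds_ratio:.2f}")    # about 1.50
    print(f"True relative risk:                   {true_relative_risk:.2f}")  # about 1.44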

Q-4E: What is an "adjusted relative risk?"

An adjusted relative risk is simply a relative risk that has been adjusted for the various other factors which may influence breast cancer. For instance, in the example above, if one entered the data from a study into the statistics program, one might find that the adjusted RR (relative risk) was 1.40 instead of 1.50; the statistics program "adjusted for," or took into account, the other breast cancer risk factors. In most studies the "case group" and the "control group" are carefully matched, so the adjusted RR is often very close to the RR. The reader can always go back and verify the RR if the author has provided the data, but because only the researchers have access to their specific statistics program, the reader will not, in general, be able to verify an adjusted RR.
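
To make the idea of "adjusting" concrete, the sketch below works through one classical adjustment method, the Mantel-Haenszel pooled odds ratio described in reference 1, on invented counts stratified by family history. The numbers are purely illustrative and are not taken from any study discussed in this book; real studies typically adjust for many factors at once using regression methods (see Q-4H).

    # Invented counts, stratified by family history of breast cancer.
    # Each stratum is (A, B, C, D): exposed "cases", unexposed "cases",
    # exposed "controls", unexposed "controls".
    strata = {
        "family history":    (155, 145, 40, 60),
        "no family history": (106, 594, 90, 810),
    }

    # Crude odds ratio: collapse both strata into a single 2x2 table.
    A = sum(s[0] for s in strata.values())
    B = sum(s[1] for s in strata.values())
    C = sum(s[2] for s in strata.values())
    D = sum(s[3] for s in strata.values())
    crude_or = (A * D) / (B * C)

    # Mantel-Haenszel pooled (adjusted) odds ratio:
    #   OR_MH = sum(A_i * D_i / n_i) / sum(B_i * C_i / n_i)
    numerator = sum(a * d / (a + b + c + d) for a, b, c, d in strata.values())
    denominator = sum(b * c / (a + b + c + d) for a, b, c, d in strata.values())
    mh_or = numerator / denominator

    print(f"Crude odds ratio:            {crude_or:.2f}")  # about 2.4 in these invented data
    print(f"Mantel-Haenszel adjusted OR: {mh_or:.2f}")      # about 1.6, matching the within-stratum ratios

In these invented data the exposure looks like a 2.4-fold risk until family history is taken into account, after which it is only about a 1.6-fold risk -- the same kind of shift described in the example above.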

Q-4F: What is a "confidence interval" and what do researchers mean when they say a study's results are "statistically significant"? (This section is important to understand.)

The estimated relative risk of 1.50 calculated in Table 4A is an estimate of what the real-life relative risk is. If one entered data from the example cited above into the computer statistics program, it would "look at the data" and tell the researcher "how reliable" the 1.50 relative risk (RR) was, that is, how close the RR from the study is likely to be to the real or actual relative risk. For example, if the study is small, you may find a RR of 1.50 or more, but it may be difficult to tell whether the relative risk is "real" or whether it is simply a "fluke" or a "chance finding." That is, is having an abortion at a young age really linked to an increased risk of breast cancer, or was the study's statistical power so small that the results could be due to chance? In general, the larger the number of subjects in a study, the more likely that a RR accurately estimates the "real" relative risk and is less likely to be a "fluke."

Let us take the previous hypothetical example in which we found that having an abortion at a young age conferred a relative risk of 1.50 for breast cancer, that is, women who had an abortion at a young age sustained a 50% increased risk of developing breast cancer. In this example we might obtain the following results:

Relative Risk (RR) for having an abortion at a young age in the hypothetical example is:

RR = 1.50 (1.11-1.90), 95% CI

The numbers in the parentheses are called the 95% "confidence intervals" (CI).

When the statistical program analyzes the data, it is able to make some statistical calculations based on the "variability of the data." In other words, the computer can look at the data and tell the reader how likely it is that the results were due to chance versus a real effect. A 95% confidence interval of 1.11-1.90 means that the computer is telling the reader that, based on the data from the study, there is a 95% chance that the real relative risk is between 1.11 and 1.90. (Conversely, this means that 5% of the time the real relative risk will be less than 1.11 or greater than 1.90. The 5% figure is called the "p value." A p value of less than 0.05 (5%) is another way of stating that a researcher's relative risk lies within the 95% confidence intervals.) The main thing a researcher looks for is whether the lower number of the confidence interval is greater than 1.0. If it is not, then the relative risk cannot be said to be statistically different from 1.0. For example, if we had noted that the RR was 1.50 with 95% confidence intervals of (0.9-2.2), a researcher would not call this "statistically significant" because he or she could not say that 95% of the time he or she would expect the real RR to be greater than 1.0. A calculated relative risk is only statistically significant if the lower number of the 95% confidence interval is greater than 1.0.
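
One standard textbook way to attach a 95% confidence interval to an odds ratio is Woolf's large-sample approximation on the logarithmic scale. The Python sketch below shows the mechanics on invented counts; it is only one of several possible methods, and the intervals quoted in this chapter's hypothetical examples were not necessarily computed this way.

    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        """Odds ratio and 95% confidence interval for a 2x2 table
        (Woolf's large-sample approximation on the log scale)."""
        or_ = (a * d) / (b * c)
        se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
        lower = math.exp(math.log(or_) - z * se_log_or)
        upper = math.exp(math.log(or_) + z * se_log_or)
        return or_, lower, upper

    # Invented counts: 150 exposed and 850 unexposed "cases",
    # 100 exposed and 900 unexposed "controls".
    or_, lo, hi = odds_ratio_ci(150, 850, 100, 900)
    print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
    print("Statistically significant" if lo > 1.0 else "Not statistically significant")

Because the lower limit (about 1.21 here) is above 1.0, this invented result would be called statistically significant.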

Q-4G: What is "statistical power"?

This term refers to the strength of the statistical conclusions resulting from a particular study. In general, the larger a retrospective study is, the more "powerful" the results. For example, a study of 100 women with breast cancer compared to 100 "controls" might show that an abortion at a young age carries a relative risk of 1.5 with confidence intervals of 0.7-1.9 (95% CI). If the researchers increased the study size to 1,000 "cases" versus 1,000 "controls," the new results might show a RR of 1.5 (1.2-1.7; 95% CI).

Note that by increasing the size of the study, the second result has become statistically significant whereas the first was not. A larger study will almost always have more statistical power than a smaller one.
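
The effect of study size can be seen directly by scaling the same invented 2x2 table up by a factor of ten and recomputing the interval (again using Woolf's approximation; the counts are invented, so the exact figures differ from the illustrative ones quoted above):

    import math

    def woolf_ci(a, b, c, d, z=1.96):
        """Odds ratio and Woolf 95% confidence interval for a 2x2 table."""
        or_ = (a * d) / (b * c)
        se = math.sqrt(1/a + 1/b + 1/c + 1/d)
        return or_, math.exp(math.log(or_) - z * se), math.exp(math.log(or_) + z * se)

    small = (15, 85, 10, 90)              # invented study: 100 "cases" and 100 "controls"
    large = tuple(10 * x for x in small)  # the same proportions with ten times as many women

    for label, counts in (("100 vs 100", small), ("1,000 vs 1,000", large)):
        or_, lo, hi = woolf_ci(*counts)
        verdict = "statistically significant" if lo > 1.0 else "not statistically significant"
        print(f"{label}: OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f} ({verdict})")

The odds ratio is the same in both runs, but only the larger study produces an interval whose lower limit stays above 1.0.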

Q-4H: What is "regression analysis?"

This is simply a fancy term that tells the reader that the statistics program in the computer has adjusted for the various other factors which are known to influence breast cancer risk. As noted previously, a good study will ask about all of the possible risk factors, such as early first birth, parity (ie, the number of children a woman has), family history of breast cancer, early contraceptive use, induced abortion, etc. The computer program uses regression analysis to "factor out" these variables. The process of adjusting for all of the other factors when calculating the adjusted relative risk is called regression analysis.
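
As an illustration of the idea only (not of any particular study's analysis), the Python sketch below simulates data in which a confounding factor -- family history -- raises both the chance of the exposure being studied and the chance of disease, and then uses logistic regression (one common form of regression analysis for this kind of data, here via the statsmodels package) to "factor out" the confounder. Every name and number in it is invented.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 50_000

    # Invented confounder: family history raises both the chance of the
    # exposure under study and the chance of disease.
    family_history = rng.binomial(1, 0.15, n)
    exposure = rng.binomial(1, 0.10 + 0.10 * family_history)
    log_odds = -4.0 + 1.0 * family_history + 0.4 * exposure
    disease = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))

    # Crude odds ratio for the exposure, ignoring family history.
    a = np.sum((disease == 1) & (exposure == 1))
    b = np.sum((disease == 1) & (exposure == 0))
    c = np.sum((disease == 0) & (exposure == 1))
    d = np.sum((disease == 0) & (exposure == 0))
    print(f"Crude OR:    {(a * d) / (b * c):.2f}")

    # Adjusted odds ratio: logistic regression with both factors in the model.
    X = sm.add_constant(np.column_stack([exposure, family_history]))
    fit = sm.Logit(disease, X).fit(disp=0)
    print(f"Adjusted OR: {np.exp(fit.params[1]):.2f}")  # exposure term; roughly the exp(0.4) ~ 1.5 effect built in

In this simulation the exposure looks riskier than it really is until family history is included in the model; the regression recovers something close to the 1.5-fold effect that was actually built in.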

Addendum:

A simple overview of a medical research paper is presented here, which may aid the reader in analyzing a study more easily.

What is one to believe when he or she hears that "the most recent study says that X causes Y or that A prevents B"? From the body of this book, the reader will see that not all study results are of equal value. Often, the results of a particular study will be commented upon in national newspapers and magazines, but those accounts may distort, amplify or diminish the actual findings of the original research paper. One way of avoiding this problem is for the interested reader to go to the nearest medical library, or even the public library, and speak with the librarian about how to obtain a copy of the journal article in which the study was published. The reader can then decide for himself or herself how valid a study is and what its results really showed. To do this, an understanding of what to look for in a basic medical research paper is helpful.

What are the basic parts of a medical research paper?

The Title Page:

In addition to the title, the title page lists the authors of the study. The first author is the one who ultimately takes responsibility for the study's findings, although in practice it is often the second, third or other authors who have done a great deal of the hands-on work behind the study. Either the title page or the final page of an article often lists two other key pieces of information. First, it tells where the study was done and usually gives an address where the reader may write to find out more about the study and to correspond with the author. Second, it often tells who funded the study. Sometimes the first page of the bibliography will state this. Obviously, this is a vital piece of information, because a conflict of interest arises if the "payor" has a vital interest in a "certain outcome" of a study. An example is when a drug company which manufactures an oral contraceptive pill sponsors a study of its own contraceptive to see if it causes breast cancer.

Abstract or Summary:

The abstract gives the reader a summary of the entire article including the purpose, the main results, and the author's interpretation of them. For those with little time, this summary provides a rapid way to get the main points out of a long research paper. Often, however, the abstract may fail to mention important findings contained within the paper if those findings are "too controversial."

Material and Methods:

This section gives the details on how the study was actually conducted. A thorough material and methods section will list such things as when the study was done, the characteristics of the women who were studied, how the data were collected, who interviewed the subjects, how many subjects dropped out of the study, and especially, the various risk factors that were discussed in the questionnaire or interview. It will also usually tell which type of statistical testing was used and which factors were adjusted for in the regression analysis.

Results:

As its name implies, this section presents the results of the study. Perhaps the best and quickest way to understand this section is to simply look at the graphs, charts and tables. Many authors spend pages discussing the results when, in fact, they are commenting on the data contained within the graphs and tables. It is important to note that often under each table a comment will state which factors the authors have adjusted for.

Discussion:

In this section the authors discuss the relevance and implications of their various findings. Unfortunately, when an author discovers "medically incorrect" findings, there is often a paucity of discussion on those specific results.

References or Bibliography:

This is the last section of a typical research paper and lists the references to all of the footnotes of the general text. Often the reference section is preceded by a list of participants or organizations who helped take part in the study (eg, those who aided financially or allowed their services to be used).


References:

1 Mantel N, Haenszel W. Statistical aspects of the analysis of data from retrospective studies of disease. J Natl Cancer Inst. 1959; 22: 719-748.

2 Lindefors-Harris BM, Edlund G, et al. Risk of cancer of the breast after legal abortion during the first trimester: a Swedish register study. Br Med J. 1989; 299: 1430-1432.

3 Melbye M, Wohlfahrt J, et al. Induced abortion and the risk of breast cancer. N Engl J Med. 1997; 336: 81-85.
