Breast Cancer and Abortion (cont'd, p. 2)




Q–6B: What has transpired since Dr. Brind's meta-analysis was published?

A number of studies have been published since Dr. Brind's study, so that by January 1999 a U.S. Congressman familiar with Dr. Brind's work noted that: "Fully 25 out of 31 epidemiological studies worldwide and 11 out of 12 studies in the U.S. show that women who elect to have even one induced abortion show an elevated risk of subsequent breast cancer." [17, 18]. What were some of the recent studies?

First, Dr. Janet Daling came out with a "repeat study" designed similarly to the one she wrote in 1994. The 1996 study did not show abortion to be as great a risk factor as her previous study, but several weaknesses appear in this later study, as noted in Addendum 6A at the end of this chapter. In addition, Daling's second study would have done little to change the results of Brind's meta-analysis, because it would have been just one more study averaged into many.

The second study, by Rookus et al, came out in the Journal of the National Cancer Institute and reported on bias. The authors concluded that "Reporting bias is a real problem in case-control studies of induced abortion and breast cancer risk if these studies are based on information from study subjects only." [16, p.1759]. (Reporting bias is another name for recall bias, which was discussed in Chapter 5. Remember that researchers who cite recall bias as a factor are hypothesizing that women who have breast cancer will be more truthful about their abortion history, if asked about it in an interview, than women who do not have breast cancer). The Rookus study is discussed at length in Appendix 2. The main problem with the "bias argument" is that reporting bias can easily be measured and accounted for, in the same way that the studies involving oral contraception and breast cancer accounted for its effect (or lack thereof). As noted previously in Chapter 5, researchers such as Rookus and others simply need to obtain permission from both the "case" and the "control" patients to review their current medical charts from their current or previous gynecologist or obstetrician. Well-trained obstetricians and/or gynecologists ask their young patients (ie, usually those under the age of 45) whether they have had any induced or spontaneous abortions (miscarriages) when conducting a complete history and physical examination.
It is critical to note that the gynecologist's records would thus give the information concerning a woman's abortion history before she developed breast cancer. Thus, one has an excellent tool to measure what the "case" and "control" patients' responses were, both before and after the development of breast cancer, and one can easily compare the degree of "reporting bias" between the "case" and "control" groups. Any author who wishes to measure the degree of reporting bias in past or future studies can perform this exercise by taking the time to obtain permission and access to the patients' medical records. In short, there is no longer any reason to "speculate" about reporting bias. It can be measured.
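As a sketch of what such a chart-review check might look like in practice, the fragment below compares each woman's interview answer with her pre-diagnosis medical record and measures under-reporting separately for "cases" and "controls". The data and function names are hypothetical illustrations, not drawn from any actual study.

```python
# Hypothetical sketch: measuring "reporting bias" by comparing interview
# answers against pre-diagnosis chart records. All records are invented.

def underreporting_rate(records):
    """Fraction of women whose chart documents a prior abortion but whose
    interview answer was 'no' (the under-reporting the bias argument posits)."""
    documented = [r for r in records if r["chart_abortion"]]
    if not documented:
        return 0.0
    denied = [r for r in documented if not r["interview_abortion"]]
    return len(denied) / len(documented)

cases = [  # women with breast cancer (invented)
    {"chart_abortion": True, "interview_abortion": True},
    {"chart_abortion": True, "interview_abortion": True},
    {"chart_abortion": False, "interview_abortion": False},
]
controls = [  # women without breast cancer (invented)
    {"chart_abortion": True, "interview_abortion": False},
    {"chart_abortion": True, "interview_abortion": True},
    {"chart_abortion": False, "interview_abortion": False},
]

print(underreporting_rate(cases))     # 0.0
print(underreporting_rate(controls))  # 0.5
```

If the two rates differ, the size of the difference quantifies the reporting bias directly, rather than leaving it to speculation.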

A third study received little attention. It originated from an Australian researcher named Rohan [19]. His study was published in the American Journal of Epidemiology in 1988, but no information was presented concerning induced abortion and its risk for breast cancer. In 1995, however, Andrieu et al noted that data from the Australian study had actually shown a 160% increased risk associated with induced abortion. Dr. Brind noted: "The obvious question, of course, is why these findings concerning an eminently avoidable risk factor were suppressed for seven years." [20, p.7]

The fourth major event since Dr. Brind's publication concerns the study from Denmark by Dr. Mads Melbye et al. This prospective study, one of the largest ever done, looked at a "cohort" of 1.5 million women, 10,246 of whom had breast cancer. The study had a number of serious flaws, however, and was meticulously critiqued in the New England Journal of Medicine [21]. A complete analysis of this study is provided at the end of this chapter for the interested reader, but the most egregious flaws must be pointed out here.

The study claimed that "induced abortions have no overall effect on the risk of breast cancer." [22, p.81]. This is not what the authors' data showed. The authors noted that 1,338 "cases" (A) of breast cancer developed in the group of women who did have abortions (2,697,000 person-years) (B), and that 8,908 "cases" (C) of breast cancer developed in the group who did not have abortions (25,850,000 person-years) (D). This simple calculation (ie, A/B divided by C/D) yields a relative risk of 1.44, or a 44% increased risk for those women who had an abortion. The question of how and/or why the New England Journal of Medicine could allow a study to claim that abortion carried no risk of breast cancer, when the study's own figures give evidence that such a risk does exist, is an interesting one.
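For the interested reader, the arithmetic above can be spelled out in a short sketch (written in Python; the function name is ours, and the inputs are the case and person-year figures quoted above):

```python
# Crude relative risk from a cohort study: the ratio of the incidence
# rate in the exposed group to the rate in the unexposed group.

def relative_risk(cases_exposed, py_exposed, cases_unexposed, py_unexposed):
    """Return (A/B) / (C/D), where A, C are case counts and
    B, D are person-years of follow-up."""
    rate_exposed = cases_exposed / py_exposed        # A / B
    rate_unexposed = cases_unexposed / py_unexposed  # C / D
    return rate_exposed / rate_unexposed

# Figures quoted from the Melbye study's own tables:
rr = relative_risk(1338, 2_697_000, 8908, 25_850_000)
print(f"Crude relative risk: {rr:.2f}")  # Crude relative risk: 1.44
```

This is the unadjusted ("crude") ratio; the study's published overall figure rests on further adjustments, which is precisely the point of contention.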

Second, Melbye's study suffered from a marked "follow-up differential" (see Appendix 1 for details). The authors allowed less than a 10-year average follow-up time for the "cohort population" (ie, women who had abortions), whereas the "control group" had a follow-up period of over 20 years. In addition to the huge difference between the two groups, allowing less than 10 years of follow-up after an induced abortion does not yield an adequate latent period in which to observe an increased risk of breast cancer.

The third problem is that Melbye et al noted that the breast cancer risk for later-term abortions was significant. But this received far less attention than Dr. Melbye's rash statement recorded in the Wall Street Journal: "I think this settles it. Definitely — there is no overall risk of breast cancer for the average woman who has had an abortion." [23]. Most of the public never heard that Melbye found that women who had an abortion after the 12th week of pregnancy sustained a 38% increased risk, and women who had late-term abortions past 18 weeks sustained an 89% statistically significant increased risk [1.89 (1.11–3.22)]. It should be noted that in the U.S. in the 1970s and early 1980s, more than 100,000 women annually had abortions after the 12th week of pregnancy [24, p.10].

In conclusion, the beginning of this chapter asked the central question: Does an induced abortion before a woman's first full-term pregnancy (FFTP) increase her risk of developing breast cancer? The most honest answer that this author can give, based on the research known to date and presented in this book, is the following:

All of the evidence to date points to at least a 50% increased risk of developing breast cancer in parous women who have had an induced abortion prior to their first full-term pregnancy.

It is now time to let the reader decide what to think.



Addendum 6A:

Daling et al came out with a "repeat study" in 1996 [25] which revealed less dramatic risks than her original study in 1994 [9]. What were this study's weaknesses and why did it show less risk?

[The answer to this question is slightly technical and can be skipped without a loss of continuity].

Daling et al [9] published a landmark study in the November 4, 1994 issue of the Journal of the National Cancer Institute; it received intense publicity and scrutiny. Surprisingly, two years later she published a similar study, in conjunction with another researcher named Louise Brinton, in the American Journal of Epidemiology, which produced less dramatic results. For example, the second study reported only a 10% increased risk for abortion prior to a FFTP, whereas the first study had found a 40% increased risk. The second study also found only a 20% increased risk for abortion in general, whereas the first study had found a 50% increased risk. Why were the results so different?

Both of Daling's studies looked only at white women, but in Daling's first study about 5% of the base population was black, whereas in the second study about 15% of the base population consisted of black women who were excluded. By excluding a significant percentage of its study population, the second study left out a particularly high-risk group, namely young black women; this weakens the strength and impact of the study.

Another possible weakness is that the second study had a far higher percentage of "cases" and "controls" in the 40 to 44 year-old age group than the first study (55% vs. 23%). Because the second study had a higher percentage of women in an older age group, its reliability could be reduced, because these women would be less likely to have had an abortion prior to a FFTP. (This type of phenomenon, mixing "relatively" older women into the database, was discussed in Chapter 5 and may have the effect of artificially reducing the relative risk for having an abortion at a young age or for OCP use).

A third phenomenon concerns the political climate in which the 1996 study was published. It is no secret that Daling received widespread criticism after her initial study was published. Her own study was severely critiqued in the same issue of the Journal of the National Cancer Institute in which it appeared [26]. The environment at the National Cancer Institute (NCI) and among the editors of its journal (ie, the Journal of the National Cancer Institute) grew so controversial that Douglas Weed, editor of the journal, felt the need to defend himself in the lay press against accusations of bias [27] after he was criticized in the Wall Street Journal [28] for showing partiality. Could the intense political pressure have affected the tone of the second article? That is a question that Dr. Weed might wish to answer.

Despite the fact that her second study found less dramatic results than her first, there is little doubt that it would have had only an insignificant effect on Dr. Brind's meta-analysis, because it would have been one more study added to the existing "pool of several studies."



Addendum 6B: Analysis of weaknesses of studies which looked at risk of abortion prior to first full–term pregnancy (FFTP):

It was stated that 4 out of the 6 studies which were used in Dr. Brind's meta-analysis were analyzed in a "conservative manner." In other words, if there was a possibility of looking at the data or results of a study in two different ways, Dr. Brind always chose to analyze the data in a way that would yield the lowest increased risk of breast cancer, that is, the most conservative estimate.

There are a number of factors in many of the studies that would probably have yielded significantly higher risks of breast cancer from having an abortion at a young age had they been accounted for. What are those factors, and in which of the quoted six studies can they be found?

A) Brinton's study [8] suffers from: 1) a short latent period. Subjects were interviewed from 1973 to 1977. This is inadequate, considering that the latent period for certain risk factors is more than 20 years; 2) an analysis restricted to white women only, which would serve to underestimate any risk, because black women are known to be at especially high risk (see Chapter 11); and 3) the "death effect" (4.3% of "cases" vs. 2.4% of "controls" died). All three weaknesses serve to underestimate the study's risk of breast cancer due to having an abortion at a young age.

B) Daling et al's study [9] suffers from: 1) the "stack effect," because a total of 17% of "controls" versus 8% of "cases" were in the 21 to 30 year-old age bracket; and 2) the "death factor": 5.7% of "cases" died before they could be interviewed. Although Daling acknowledged the possibility of the "death factor" playing a role, both weaknesses serve to underestimate the relative risks. This study did have the longest latent period, with women being diagnosed from 1983–1990.

C) Rosenberg et al's study [12] suffers from multiple weaknesses: 1) a huge age mismatch: "cases" had a mean age of 52, whereas "controls" had a mean age of 40; 2) a short latent period. Interviews were held from 1978 to 1982 (ie, not even 10 years after the Roe v. Wade decision); and 3) the study was partially funded by Hoechst, a pharmaceutical company that is responsible for producing RU–486, the drug that causes a chemical abortion.

D) The Lindefors-Harris et al study [10] was funded by Family Health International. In addition, although the study started out with 49,000 subjects, after 11 years fewer than 5,000 remained. Only 10.2% of the original group of women stayed in the study, and even these women would have had a latent period that may not have been long enough to detect the full impact of having an abortion at a young age (a latent period of only 12–14 years). Lastly, as the investigators of the study admit, they made no adjustment for such basic variables as family history of breast cancer and OCP use [10, p.1432].

In conclusion, four out of six of the studies which Brind et al used in their meta–analysis would most likely have resulted in significantly higher risks of breast cancer had those studies accounted for the above–mentioned variables. In addition, the three studies that had the longest latent periods (ie, Daling [9], Lipworth [11] and Rookus [16]) all showed higher risk than the earlier studies [8, 12, 10] which had shorter latent periods. This would again imply that the risks of breast cancer due to an abortion at a young age are actually higher than the conservative 50% increased risk estimate.



Addendum 6C: Critique of the Danish Study (researchers: Mads Melbye et al) [22]:

The Danish study was published in the January 9, 1997, edition of The New England Journal of Medicine. This prospective study relied on Denmark's National Cancer Registry as well as its National Board of Health to obtain information on each patient's cancer and abortion history, both of which are reported to government agencies in that country. The study's main assets include its size and its freedom from "reporting bias." The study claimed to show a number of findings: 1) the risk from having an abortion before a woman's first full-term pregnancy (FFTP) was negligible, namely an 8% trend: [RR=1.08 (0.82–1.44)]; 2) the overall relative risk from an abortion after a woman's FFTP was 1.0; and 3) the risk of breast cancer was increased if one had a late-term abortion. For example, a woman who had an abortion after the 18th week of pregnancy sustained an 89% increased risk of breast cancer.
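The bracketed figures are 95% confidence intervals; a relative risk is called statistically significant when its interval excludes 1.0. A minimal sketch of this reading is given below (in Python; the function names and the illustrative log-scale formula are ours, not the study's, and the example counts passed to the interval function are invented):

```python
import math

def rr_confidence_interval(rr, cases_exposed, cases_unexposed, z=1.96):
    """Approximate 95% CI for a rate ratio, using the standard log-scale
    standard error sqrt(1/a + 1/c) for Poisson-distributed case counts."""
    se = math.sqrt(1 / cases_exposed + 1 / cases_unexposed)
    return rr * math.exp(-z * se), rr * math.exp(z * se)

def is_significant(lower, upper):
    """A relative risk is significant at the 5% level when its
    95% confidence interval excludes 1.0."""
    return lower > 1.0 or upper < 1.0

# The quoted late-term result, RR = 1.89 with CI (1.11, 3.22), excludes 1.0:
print(is_significant(1.11, 3.22))   # True
# The quoted pre-FFTP result, RR = 1.08 with CI (0.82, 1.44), includes 1.0:
print(is_significant(0.82, 1.44))   # False

# Illustration only (invented case counts): the interval brackets the RR.
low, high = rr_confidence_interval(1.89, 40, 120)
print(round(low, 2), round(high, 2))
```

This is why the 18th-week finding counts as statistically significant while the pre-FFTP "8% trend" does not.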

The study's size was impressive, although the data which it presented (and also failed to present) actually supported the hypothesis that abortion causes breast cancer. Two central criticisms have already been mentioned but are repeated here, along with others.

Failing to report the results accurately:

The study claims that "induced abortions have no overall effect on the risk of breast cancer." [22, p.81]. However, this is not what their data show. The authors reported that 1,338 (A) cases of breast cancer developed in the group of women who did have abortions (2,697,000 person-years) (B), and that 8,908 (C) cases of breast cancer developed in the group who did not have abortions (25,850,000 person-years) (D). This simple calculation (ie, A/B divided by C/D) yields a relative risk of 1.44, or a 44% increased risk for those women who had an abortion.

A shorter follow-up time for the "cohorts" than for the "controls":

Brind and Chinchilli noted that Danish women who had abortions were followed for shorter periods of time than were the "controls." They stated: "Since the [Melbye] study encompasses such a wide range, women who had induced abortion are concentrated in the younger end of the total cohort, resulting in considerably less average follow-up time for them than for women without induced abortions (9.6 versus 20.7 years)" [23]. Melbye et al responded in the same journal to Brind's allegation. Melbye wrote: "They [Brind and Chinchilli] claim that a selection bias is introduced because the average follow-up time for women with induced abortion is shorter than that for women without induced abortions. Such an objection can stem only from lack of insight into the design and analysis of a cohort study. For each woman entering the cohort, we calculated the follow-up time (person-years) and allocated this follow-up time according to the abortion history. The calculation of breast-cancer rates (cases per person-years) thus takes into account differences in follow-up time for women with abortions and women without abortions."
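The person-year allocation Melbye describes is a standard cohort technique, and can be illustrated with a toy sketch (in Python; the women below are invented, and this is our illustration of the general method, not the study's actual code). Note how a woman who has an abortion contributes "unexposed" time before it and "exposed" time after it:

```python
from dataclasses import dataclass
from typing import Optional

# Toy sketch of person-year allocation in a cohort study: each woman's
# follow-up time is split by exposure status, with the years after an
# induced abortion counted as "exposed" and the years before it (or all
# of her years, if she had no abortion) counted as "unexposed".

@dataclass
class Woman:
    years_followed: float
    years_until_abortion: Optional[float] = None  # None = no abortion

def allocate_person_years(cohort):
    exposed = unexposed = 0.0
    for w in cohort:
        if w.years_until_abortion is None:
            unexposed += w.years_followed
        else:
            unexposed += w.years_until_abortion
            exposed += w.years_followed - w.years_until_abortion
    return exposed, unexposed

cohort = [Woman(20.0), Woman(25.0), Woman(12.0, years_until_abortion=2.0)]
exposed, unexposed = allocate_person_years(cohort)
print(exposed, unexposed)  # 10.0 47.0
```

The sketch also makes the critics' point visible: because exposed time can only accumulate after the abortion, the exposed person-years are systematically younger and shorter, which is the follow-up (and latency) differential at issue.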

Melbye's response appears to offer a sharp rebuttal to Brind and Chinchilli's remarks. But has Melbye failed to address a basic problem? It would appear that he has. He noted that his study had taken into account the difference in follow-up times by dividing the breast cancer "cases" by "person-years." However, Melbye et al failed to address the larger question, namely: What about the effect of an inadequate latent period? Specifically, it may take 15, 20 or even 26 years before the full effect of an abortion at a young age shows up where breast cancer is concerned. By following the women who had induced abortions for an average of fewer than 10 years, Melbye et al hardly allowed an adequate latent period to pass. This is crucial if one is to determine the effect of an abortion performed early in a woman's life.

Failure to include basic variables:

Because the study was prospective in nature and obtained its information from government data banks, certain key variables were not adjusted for, including: 1) a family history of breast cancer; 2) a history of oral contraceptive use; 3) a history of alcohol use; 4) age at menarche; and 5) age at menopause. This weakness was admitted even by Patricia Hartge, the woman who wrote an editorial in the New England Journal of Medicine that was otherwise "more than complimentary" to Melbye et al. An additional problem is that "more than 30,000 women in the study cohort who had abortions were misclassified as having had no abortions." [23, p.1834] Thus, Melbye et al classified 30,000 women who had abortions as women who did not have abortions!

The funding question:

This observation is more of a curiosity than a direct criticism; however, this author cannot fail to be puzzled as to why a Danish medical study performed by Danish researchers was partially funded by the U.S. Department of Defense. The reader will note that Dr. Melbye's research article ends with the disclaimer that "The views expressed in this paper do not necessarily reflect the position or the policy of the U.S. government." Why a U.S. government agency funded a Danish study and then felt compelled to publish a disclaimer at the end of it strikes this author as exceptionally odd. Perhaps the U.S. Department of Defense could offer an explanation to the public as to why U.S. tax dollars that are earmarked for maintaining our defensive forces have gone to a Scandinavian country to fund a study on breast cancer.

Failure to stress the risk of abortions after the 12th week of pregnancy:

Melbye et al did note that the risk of breast cancer increased in women who had later-term abortions, but this received little attention compared to Dr. Melbye's better-known statement, recorded in the Wall Street Journal, in which he said, "I think this settles it. Definitely — there is no overall risk of breast cancer for the average woman who has had an abortion." [23]. Most of the public never heard that Melbye found that women who had an abortion after the 12th week of pregnancy sustained a 38% increased risk, and women who had late-term abortions past 18 weeks sustained an 89% statistically significant increased risk [1.89 (1.11–3.22)]. Of note, more than 100,000 women in the U.S. had abortions after the 12th week of pregnancy annually in the 1970s and early 1980s [24, p.10].




