Breast cancer is one of the most common malignancies diagnosed in women; in the United States, one in eight women who live to the age of 95 will be diagnosed with it. Despite this high rate of diagnosis, it remains among the most treatable cancers, thanks to early screening and improved detection methods. Mammography is the standard screening and diagnostic procedure for breast cancer, and its refinements over the years, particularly higher-resolution, faster, lower-dose screen-film combinations, have contributed to earlier cancer detection in women.

Dr. Wilhelm Conrad Roentgen discovered x-rays while working with a Crookes tube in his laboratory on November 8, 1895. Eighteen years later, mammography had its rudimentary beginnings in these ionizing x-rays: in 1913, Albert Salomon, a German surgeon in Berlin, was among the first to show that breast cancer could be radiographed. The first published radiograph of a living person's breast, taken by Otto Kleinschmidt, appeared in a 1927 medical textbook. Although mammography thus appeared early on, it was not popularized until the late 1950s, by Robert Egan in the United States and Professor Charles M. Gros in France, who began using it for the diagnosis and evaluation of breast cancer.

With this popularity came vast improvements in technology. Before 1969, most machines were not designed exclusively for imaging breast tissue. Earlier imaging units used tungsten targets, which were intended for anatomy requiring relatively higher radiation doses, and large focal spots, which reduce image detail. Neither was ideal for imaging something as minute as a breast calcification. In the 1960s, direct-exposure x-ray film was the film of choice.
Direct-exposure film often required a long exposure time, which meant a higher radiation dose to the patient and more motion blur. Some units used substandard compression paddles that did not distribute pressure evenly, producing radiographs with uneven contrast. The result was poor diagnostic film.

In 1969, dedicated mammography units were introduced with low-kilovoltage x-ray tubes and molybdenum targets, making them more efficient at x-ray production. These dedicated units allowed more latitude in positioning and caused less discomfort for the patient, incorporated smaller focal spots for imaging small objects with greater detail, and came equipped with their own compression cones. Industrial-grade, high-detail film also became available that year. Xeromammography, popularized by John Wolfe and Ruzicka in the 1960s, greatly reduced the radiation dose compared with the earlier direct film and was easier to read and evaluate.

1972 was a turning point for mammography, when DuPont announced higher-resolution, faster-speed x-ray films used in conjunction with intensifying screens. These screens contained calcium tungstate phosphors that convert x-rays into light, allowing the film to be exposed with less radiation and thereby reducing the dose to the patient. Rare-earth phosphors, which are faster and more efficient, began replacing calcium tungstate in 1976, making that screen-film combination the most efficient available until the early 2000s. By 1990, a number of further advances were in use, including grid techniques, an emphasis on compression, high-frequency generators, and automatic exposure controls.

In the early 2000s, digital technology was incorporated into mammography, replacing the screen-film system with a charge-coupled device (CCD). The CCD converts visible-light photons into electrons.
The electrons are then sent to a computer, where they are converted into a digital format and displayed as an image.
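The conversion chain described above (light photons collected as electrons, then digitized) can be illustrated as a simple quantization step. This is a minimal sketch, not vendor code; the full-well capacity and bit depth used here are assumed illustrative values, not specifications of any actual mammography system.

```python
def digitize(electron_counts, full_well=50_000, bit_depth=14):
    """Map CCD electron counts to digital pixel values via an idealized ADC.

    electron_counts: electrons collected per pixel.
    full_well: assumed sensor full-well capacity, in electrons.
    bit_depth: assumed ADC bit depth (illustrative value).
    """
    max_dn = (1 << bit_depth) - 1  # largest representable digital number
    gain = max_dn / full_well      # digital numbers per electron
    # Quantize each pixel, clipping at the saturation (full-well) limit.
    return [min(max_dn, int(e * gain)) for e in electron_counts]

pixels = digitize([0, 25_000, 50_000, 60_000])
# An empty pixel maps to 0, a full well maps to the maximum digital
# number, and anything beyond full well clips at that maximum.
```

The key point the sketch captures is that the computer stores each pixel as a bounded integer, so the analog electron count is both scaled and clipped before it becomes part of the digital image.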