
NATIONAL CANCER INSTITUTE
IMAGING SCIENCES WORKING GROUP
TECHNOLOGY EVALUATION COMMITTEE
William H. Hendee, Ph.D., Chair

FINAL REPORT

December 16, 1997

MISSION OF COMMITTEE
To examine the concepts and methods of technology assessment and determine how they can best be applied to the evaluation of diagnostic imaging technologies, especially as they pertain to cancer diagnosis, staging, treatment guidance, and follow-up.
DEFINITION OF TECHNOLOGY
Technology includes equipment, devices, drugs, procedures, techniques, and the organizational structures that support their applications in clinical settings.
APPLICATIONS OF ASSESSMENT
Technology assessment should be used to evaluate oncologic imaging in four clinical applications: (1) detection and diagnosis; (2) staging; (3) treatment guidance; and (4) treatment monitoring and follow-up.
LEVELS OF ASSESSMENT
Technology assessment can occur at different levels, from simple measurements of the physical properties of the technology to evaluations of its impact on patients' quality of life (Institute of Medicine, Quality of Life and Technology Assessment, National Academy Press, Washington, DC, 1989). A hierarchical model of levels of assessment has been proposed by Thornbury and Fryback (Eur. J. Radiol. 1992; 14: 147-156). A modification of this model is shown here. A particular technology does not have to progress through the assessment levels in linear fashion. For example, after satisfying Levels 1 and 2, teleradiology is now being evaluated at Level 6, and imaging systems for patient monitoring during therapy have moved directly from Level 1 to Level 4.
MODEL OF TECHNOLOGY ASSESSMENT (levels and example measures):
LEVEL 1:

TECHNICAL MEASURES (examples)

Line pair resolution
Modulation transfer function
Contrast/detail studies
Noise level (Wiener spectra)

LEVEL 2:

DIAGNOSTIC ACCURACY MEASURES (examples)

Abnormal/normal diagnoses in case series
% Correct diagnoses in case series
Sensitivity/specificity under controlled conditions
Area under the Receiver Operating Characteristic (ROC) curve (see the computational sketch following this list)

LEVEL 3:

DIAGNOSTIC THINKING MEASURES (examples)

% Cases in series where image "helpful" in making diagnosis
Changes in probability distribution of diagnosis
Changes in likelihood of correct diagnosis
Changes in clinician's confidence in diagnosis

LEVEL 4:

THERAPEUTIC MEASURES (examples)

% Cases in series where image helpful in treatment planning
Frequency with which procedure avoided by image
% Cases in series where image changes treatment plan/decision
% Cases in series where treatment options altered by image

LEVEL 5:

PATIENT OUTCOMES (examples)

% Patients improved with imaging compared to without imaging
Morbidity avoided with image
Change in Quality-Adjusted Life Years (QALYs)
Patient utility assessment measures

LEVEL 6:

SOCIETAL BENEFITS (examples)

Average cost/QALY saved with image
Societal benefit/cost analysis
Societal (cost with)/(cost without) analysis
Societal cost-effectiveness analysis
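
As a concrete illustration of the Level 2 measures above, the following sketch computes sensitivity, specificity, and the area under the ROC curve for a hypothetical case series. All truth labels and reader scores are invented for illustration, and the AUC is estimated with the standard Mann-Whitney (rank-sum) identity.

    # Hedged sketch: Level 2 (diagnostic accuracy) measures for a
    # hypothetical case series. Truth labels and reader confidence
    # scores are invented for illustration.

    def sensitivity_specificity(truth, scores, threshold):
        """Sensitivity/specificity when scores >= threshold are called positive."""
        tp = sum(1 for t, s in zip(truth, scores) if t == 1 and s >= threshold)
        fn = sum(1 for t, s in zip(truth, scores) if t == 1 and s < threshold)
        tn = sum(1 for t, s in zip(truth, scores) if t == 0 and s < threshold)
        fp = sum(1 for t, s in zip(truth, scores) if t == 0 and s >= threshold)
        return tp / (tp + fn), tn / (tn + fp)

    def auc(truth, scores):
        """Area under the ROC curve via the Mann-Whitney identity: the
        probability that a random abnormal case outscores a random
        normal case, with ties counting one-half."""
        pos = [s for t, s in zip(truth, scores) if t == 1]
        neg = [s for t, s in zip(truth, scores) if t == 0]
        wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
                   for p in pos for n in neg)
        return wins / (len(pos) * len(neg))

    # Invented 10-case series: truth 1 = cancer present; scores on a 1-5
    # confidence scale as a reader might assign.
    truth  = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
    scores = [5, 4, 4, 2, 3, 2, 1, 1, 2, 1]
    sens, spec = sensitivity_specificity(truth, scores, threshold=3)
    print(f"sensitivity={sens:.2f}  specificity={spec:.2f}  AUC={auc(truth, scores):.2f}")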

QUANDARY
Technology assessment works well with therapies such as drugs and treatment procedures, and with some diagnostic devices that have specific clinical applications. In general, medical imaging technologies are more challenging to evaluate because they usually have multiple clinical applications, the target population of patients is more heterogeneous, the need for imaging is uncertain for some patients, and the threshold for interpreting effectiveness is difficult to define. Also, at each level from 1 to 6, the universe of individuals involved in the assessment increases; at Level 6, and frequently earlier, that universe extends beyond medicine. Finally, the gap between efficacy (the benefits of a technology under ideal conditions) and effectiveness (its benefits under real-world conditions) may often be greater for imaging technologies than for pharmaceuticals, because the applications are more varied and humans are more intimately involved in them.
CATEGORIES OF TECHNOLOGY EVOLUTION
The appropriate level of technology assessment varies with the phase of evolution of the technology. A new technology proposed to replace an existing technology is often deemed effective if it obviously satisfies Levels 1 and 2 (e.g., accuracy is markedly improved at little or no increase in cost). If the replacement technology is only marginally better at Levels 1 and 2, assessment at higher levels may be necessary. New uses for an old technology may require assessment only at lower levels because the technology is already disseminated. A new technology for new uses needs assessment at Levels 1 and 2 and also at Level 6. In many cases the assessment will occur at all six hierarchical levels, beginning with Levels 1 and 2, followed by Levels 3 and 4, then Levels 5 and 6, with each stage requiring greater dissemination of the technology. Older technologies should be evaluated at Levels 5 and 6 to ensure that they continue to offer meaningful contributions. It is just as important to eliminate existing technologies that are no longer useful or cost-effective as it is to assess the value of new technologies before they become widely disseminated in the clinical arena. The variability in the technology assessment process is outlined below; however, the evaluation of any specific technology at any particular time requires its own individual pathway through the assessment hierarchy.
STAGE OF TECHNOLOGY DEVELOPMENT

Level of Technology Assessment | New Technology for Established Uses | Old Technology for New Uses | New Technology for New Uses | Old Technology for Established Uses
Level 1 | *** | *** | *** |
Level 2 | *** | *** | *** |
Level 3 |     | *** | *** |
Level 4 |     | *** | *** |
Level 5 | *   |     | *** | ***
Level 6 | *   |     | *   | **

Comments:
New Technology for Established Uses: the new technology is compared to an accepted technology, and often can be accepted once greater sensitivity/specificity, reduced cost, or decreased morbidity is demonstrated.
Old Technology for New Uses: since the technology is already distributed, its acceptance may require demonstration through Levels 3 or 4.
New Technology for New Uses: although all six levels must be demonstrated, these can be phased over several years, with dissemination increasing over time.
Old Technology for Established Uses: older technologies should be evaluated at Levels 5 and 6 to ensure that they continue to offer meaningful contributions.

Key: *** very important to evaluate at this level early in the development of the technology
** important to evaluate at this level after introduction of the technology
* important to evaluate over the long-term use of the technology

PHELPS MODEL

One recommended approach to evaluating diagnostic technologies is the model developed by Phelps and Mushlin (Med. Decision Making 1988; 8: 270-289). In this model, the clinical result for any patient with a new technology is compared to the result that would be achieved in the absence of the new technology, and an Expected Value of Diagnostic Information (EVDI) is computed. The method is then applied across the eligible population of patients to determine whether the global EVDI for the technology justifies its cost of deployment. In the initial evaluation, published data are used and the technology is assumed to be 100% accurate; clinical studies are not required. If the technology is not cost-effective under this assumption, it is not pursued further. If the technology passes this test, clinical studies are initiated to determine the actual diagnostic accuracy, and the question of global deployment of the technology is reconsidered from the perspective of global EVDI and cost in the real world.
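
A minimal sketch of an EVDI calculation, reduced to a two-state (disease present/absent), two-action (treat/withhold) decision model, is given below. The prevalence and utility values are illustrative assumptions, not figures from Phelps and Mushlin.

    # Hedged sketch of an Expected Value of Diagnostic Information (EVDI)
    # calculation in the spirit of Phelps and Mushlin, reduced to a
    # two-state (disease present/absent), two-action (treat/withhold)
    # decision model. All numbers are illustrative assumptions.

    def eu_without_test(p, u_tp, u_fp, u_fn, u_tn):
        """Best expected utility without the test: treat everyone or no one."""
        treat_all  = p * u_tp + (1 - p) * u_fp    # everyone treated
        treat_none = p * u_fn + (1 - p) * u_tn    # no one treated
        return max(treat_all, treat_none)

    def eu_with_test(p, sens, spec, u_tp, u_fp, u_fn, u_tn):
        """Expected utility when treatment follows the test result."""
        return (p * sens * u_tp                   # true positive, treated
                + (1 - p) * (1 - spec) * u_fp     # false positive, treated
                + p * (1 - sens) * u_fn           # false negative, untreated
                + (1 - p) * spec * u_tn)          # true negative, untreated

    def evdi(p, sens, spec, u_tp, u_fp, u_fn, u_tn):
        return (eu_with_test(p, sens, spec, u_tp, u_fp, u_fn, u_tn)
                - eu_without_test(p, u_tp, u_fp, u_fn, u_tn))

    # First-pass screen: assume a perfect test (sens = spec = 1.0), as the
    # model prescribes before any clinical study is undertaken.
    print(evdi(p=0.10, sens=1.0, spec=1.0,
               u_tp=0.80, u_fp=0.60, u_fn=0.00, u_tn=1.00))

Under these illustrative numbers the perfect-test EVDI is 0.08 utility units per patient; multiplying by the size of the eligible population and comparing the product to deployment cost implements the first-pass screen described above.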

The Phelps model can be used to define a "challenge region" for a new technology; i.e., what improvement in sensitivity and specificity would be required for the new technology to be considered cost-effective and worthy of deployment? This challenge region can be depicted as a target zone in ROC space when a new technology is considered as a replacement for an existing one. This information could be invaluable for designing trials to verify the efficacy of existing and new technologies.

In such an analysis, the solid ROC curve depicts the sensitivity and specificity of the technology to be replaced. A dotted ROC curve above the solid curve depicts the minimum sensitivity and specificity that the new technology would have to achieve to be marginally cost-effective. The region above the dotted curve is the challenge region the new technology must reach to justify its widespread deployment.
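
The challenge region itself can be sketched computationally: scan candidate operating points in ROC space and keep those whose incremental EVDI over the existing technology covers the incremental cost. The prevalence, utilities, accuracy of the existing technology, and cost figure below are all illustrative assumptions.

    # Hedged sketch of mapping the "challenge region" in ROC space: the
    # set of (sensitivity, specificity) operating points at which a new
    # technology would be at least marginally cost-effective relative to
    # an existing one. All numbers are illustrative assumptions.
    import numpy as np

    def evdi(p, sens, spec, u_tp=0.80, u_fp=0.60, u_fn=0.00, u_tn=1.00):
        # Same toy two-action decision model as the EVDI sketch above.
        with_test = (p * sens * u_tp + (1 - p) * (1 - spec) * u_fp
                     + p * (1 - sens) * u_fn + (1 - p) * spec * u_tn)
        no_test = max(p * u_tp + (1 - p) * u_fp,    # treat everyone
                      p * u_fn + (1 - p) * u_tn)    # treat no one
        return with_test - no_test

    P = 0.10                                 # assumed disease prevalence
    OLD = evdi(P, sens=0.75, spec=0.85)      # assumed existing technology
    EXTRA_COST = 0.02                        # assumed added cost per exam (utility units)

    # Scan a grid of operating points; keep those whose incremental value
    # over the existing technology covers the incremental cost.
    grid = np.linspace(0.50, 1.00, 51)
    challenge = [(s, c) for s in grid for c in grid
                 if evdi(P, s, c) - OLD >= EXTRA_COST]
    print(f"{len(challenge)} of {grid.size ** 2} operating points lie in the challenge region")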

OBSERVER VARIABILITY

In assessing diagnostic technologies, it is often assumed that interpretive variability among observers is low or nonexistent (i.e., that images are interpreted by "ideal observers"). In actuality, even well-trained observers vary greatly in their sensitivity and specificity to signs in images. This complicates clinical studies of technologies, because a large cadre of interpreters is required to reduce the influence of interpreter variability. Because of this variability, studies of efficacy should be conducted before studies of effectiveness, since a technology that cannot demonstrate efficacy assuredly cannot demonstrate effectiveness. For new technologies with proven efficacy, effectiveness should also be demonstrated before the technologies become widely disseminated.
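
The practical effect can be illustrated by simulation: the sketch below has several hypothetical readers score the same case series and reports the spread in their individual ROC areas. The reader skill parameters and case mix are invented for illustration.

    # Hedged sketch of interpreter variability: several simulated readers
    # score the same case series, and the spread of their individual ROC
    # areas is reported. Skill parameters and case mix are invented.
    import random

    def auc(truth, scores):
        """Area under the ROC curve via the Mann-Whitney identity."""
        pos = [s for t, s in zip(truth, scores) if t == 1]
        neg = [s for t, s in zip(truth, scores) if t == 0]
        wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
                   for p in pos for n in neg)
        return wins / (len(pos) * len(neg))

    random.seed(1)
    truth = [1] * 50 + [0] * 150             # 25% of cases abnormal

    # Each reader sees the same cases, but with a different personal
    # separation between abnormal and normal score distributions.
    aucs = []
    for skill in [0.5, 1.0, 1.5, 2.0, 2.5]:  # invented per-reader skill
        scores = [random.gauss(skill if t else 0.0, 1.0) for t in truth]
        aucs.append(auc(truth, scores))

    print("per-reader AUCs:", [f"{a:.2f}" for a in aucs])
    print(f"spread across readers: {max(aucs) - min(aucs):.2f}")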

DISEASE PREVALENCE

In any consideration of technology cost-effectiveness, the incidence and prevalence of diseases that the technology addresses are important variables. Low-prevalence diseases not only present greater challenges to cost-effectiveness, but also increase the difficulty and cost of identifying appropriate study and control populations and performing clinical studies with them. Accurate incidence and prevalence data are needed for most diseases, including various types of cancer. In addition, these data change with time, and vary as well with stage of the disease and for subsets of the population.
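
The effect of prevalence can be made concrete with Bayes' rule: even a test with fixed, seemingly high sensitivity and specificity yields a poor positive predictive value when disease is rare. The 90%/90% accuracy figures below are assumptions for illustration.

    # Hedged sketch of the prevalence effect via Bayes' rule: a test with
    # fixed 90% sensitivity and 90% specificity (assumed figures) yields
    # a poor positive predictive value when the disease is rare.

    def ppv(prevalence, sens=0.90, spec=0.90):
        """Positive predictive value: P(disease | positive test)."""
        tp = sens * prevalence                 # true positive probability mass
        fp = (1 - spec) * (1 - prevalence)     # false positive probability mass
        return tp / (tp + fp)

    for prev in [0.001, 0.01, 0.10, 0.30]:
        print(f"prevalence {prev:>6.1%}: PPV = {ppv(prev):.1%}")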

WHAT SHOULD BE DONE NEXT, AND WHO SHOULD DO IT?
  1. Obtain more accurate prevalence and trend data for cancer, including cancer in population subsets [National Cancer Institute].
  2. Identify challenges in disease (cancer) diagnosis and treatment where improved imaging could help (technology "pull") [Imaging Sciences Working Group].
  3. Develop models of how imaging is quantifiably helpful along management pathways for patients with specific types of cancer [Imaging Sciences Working Group].
  4. Provide examples of how specific imaging technologies have been evaluated by the six levels of technology assessment [Technology Evaluation Committee].
  5. Recognize that one important aspect of technology assessment is the elimination of established but unproductive procedures [Imaging Sciences Working Group].
  6. Recommend continued work on methods and models for technology assessment; for example: (a) establish protocols for demonstrating equivalence of technologies; (b) delineate advantages, limitations, and possible replacements for ROC analysis; and (c) refine the Phelps-Mushlin model [Technology Evaluation Committee].
  7. Summarize the evidence base for assessing existing technologies using research synthesis methods such as meta-analysis and consensus conferences [Technology Evaluation Committee].
  8. Use the data from (7) and the Phelps-Mushlin model to establish "working challenge regions" for Level 2 studies [Technology Evaluation Committee].
  9. Expand the Phelps-Mushlin model to accommodate greater breadth in the spectrum and dynamics of cancer [Technology Evaluation Committee].
  10. Encourage development of RFAs to address the issues in (7)-(9) [National Cancer Institute].
  11. Solicit ideas for technology assessment from individual scientists/analysts as well as through large multi-institutional trials [National Cancer Institute].
  12. Acknowledge that the scientific assessment of medical technologies at various stages in their development is essential to the evolution of the technologies into the clinical arena [National Cancer Institute].

NCI Technology Evaluation Committee:
Dennis Fryback, Ph.D.
Steven E. Seltzer, M.D.
James H. Thrall, M.D.
Craig A. Beam, Ph.D.
James Benson
Walter H. Berninger, Ph.D.
Bruce J. Hillman, M.D.
William C. Black, M.D.
James MacFall, Ph.D.
David G. Bragg, M.D.
Robert E. Wittes, M.D.
David Gur, Sc.D.
John Silva, M.D.

William Hendee, Ph.D., Chair
11/24/97