By Ralph J. Kiernan, Ph.D.
[May 30, 2008]
"The patient may succeed with a test even if his mental capacity is impaired, because some of the results can be achieved in a roundabout way. If we consider only the result, we may be deceived, because the pathology consists of the patient's inability to perform in a particular way" -- Kurt Goldstein
Neuropsychologists are constantly faced with a complex clinical task that requires them to draw upon a broad empirical data base in relating test scores to brain dysfunction. We use deductive thinking to make inferences about the brain directly from the test scores according to standardization tables. Our inferences about specific disabilities or about the brain, however, also depend upon our subjective impressions about the nature of impaired performance processes. We inductively draw upon our observations and interactions during the evaluation to guide our interpretations regarding the disabilities. The interweaving of our two basic kinds of thinking represents the essence of high quality clinical diagnostic work.
Deductive reasoning alone can be misleading because test scores often fail to identify the specific performance characteristics that are important diagnostically. Many patients are able to receive full credit on intelligence test items for marginally adequate responses that require a laborious and inefficient struggle. The score is blind to how points are obtained. We can compensate for this problem to some degree by closely monitoring test performances and actively intervening as needed to explore the underlying cognitive processes and supplement our score-based knowledge.
We lose the much needed balance between our two kinds of thinking when we rely too heavily on test scores in an attempt to be objective and scientific. We devalue subjective sources of knowledge, shy away from observations and interventions as we conduct the evaluation, and distrust our inductive reasoning. We act as if we were deductive clinicians who interpret test scores concretely without regard for how those scores were obtained. If we modify the scoring of certain tests, however, we can think deductively and still obtain an adequate sense of the cognitive processes involved.
All of the tests we use in neuropsychology are sensitive to the cognitive processes that reflect brain dysfunction, but many tests were originally designed for other purposes. The Wechsler Adult Intelligence Scale, Third Edition (WAIS-III) subtests were designed for estimating intellectual ability, but they have proven useful in assessing specific cognitive disabilities. They evaluate each area across a broad range of skills extending from far below average to well above average in order to estimate IQ scores within the general population. Their scores reflect an average of successes on relatively easy items and failures on more demanding ones.
The WAIS-III scoring system is thus only sluggishly responsive to brain dysfunction. Only the intermediate items evaluate individuals within the limits of their capabilities. These mid-range test items are particularly challenging for patients who struggle to overcome mild to moderate disabilities. Performance on these items, however, is blended into the overall score in a way that obscures the salient performance features that reflect cognitive dysfunction. The Vocabulary subtest of the WAIS-III is a good example of a highly sensitive test that is nonetheless considered a "hold" test, thought to be resistant to the impact of brain dysfunction. The score is resistant, but the subtest itself is highly sensitive to the significant word finding problems a brain-injured client may demonstrate. A wordy, tangential and slowly given response may receive the maximum two points of credit, but the verbatim response still reveals the disability.
If we behave exclusively as deductive clinicians, we will miss a great deal of what is going on that's not reflected directly in the score. We can, however, often replace the original, single test score with scores that reflect the cognitive disability more directly. Scores that more closely reflect the impaired cognitive processes have a higher likelihood of correlating with both subjective symptoms and brain pathology.
The intermediate items within the vocabulary subtest of the WAIS-III reveal word finding and verbal expression problems that are of particular interest to the neuropsychologist. These items constitute a range of inefficiency (ROI) that is much broader for those with brain-based disabilities than for the general population. A similar intermediate range can be defined for other Wechsler subtests, beginning at the first sign of difficulty and continuing through the last partial success. Within this range, the struggles, occasional successes and frequent failures take place that reveal the impaired cognitive processes that neuropsychologists rely on in their inductive thinking about brain functions. This range of test items is largely comprised of those questions and/or problems that clients could have responded to adequately prior to their injuries. These test items have become more difficult and confusing as a direct result of the brain injury. As clients struggle to overcome their deficits and to regain the mastery they once enjoyed, they demonstrate their deficiencies most clearly.
Consider the following alteration to the scoring of three of the WAIS-III verbal subtests: Information, Similarities, and Vocabulary. Subtract the item number of the first response that is less than fully adequate (less than the maximum score) from the item number of the last response that receives at least partial credit, and add 1 to the difference. Mr. X receives two points for each of the first 8 vocabulary words, then only one point on item 9; he receives one point on item 27 and no additional points after that. His range of inefficiency (ROI) is 27 minus 9 plus 1, which equals 19. Then divide the total score he obtained on those 19 items by the maximum possible score of 38. This efficiency ratio reflects his struggle far more accurately than does the Scaled Score for the overall test.
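The arithmetic above can be sketched in code. The function name is my own, and the per-item scores for items 10 through 26 below are invented purely to make the example runnable; only items 1-8, 9, and 27 are specified in the text.

```python
def efficiency_ratio(item_scores, max_per_item=2):
    """Compute the range of inefficiency (ROI) and efficiency ratio.

    item_scores: per-item scores in administration order
    (index 0 = item 1), each between 0 and max_per_item.
    Returns (roi, ratio); ratio is None when the ROI is under 10,
    since short ranges are treated as unreliable.
    """
    # First item earning less than full credit.
    first = next(i for i, s in enumerate(item_scores) if s < max_per_item)
    # Last item earning at least partial credit.
    last = max(i for i, s in enumerate(item_scores) if s > 0)
    roi = last - first + 1
    obtained = sum(item_scores[first:last + 1])
    ratio = obtained / (roi * max_per_item) if roi >= 10 else None
    return roi, ratio

# Mr. X from the text: full credit on items 1-8, one point on item 9,
# one point on item 27, nothing after. Scores for items 10-26 are
# hypothetical placeholders.
scores = [2] * 8 + [1] + [2, 1, 2, 0, 1, 2, 0, 1, 0, 1, 0, 2, 0, 1, 0, 0, 0] + [1]
roi, ratio = efficiency_ratio(scores)  # roi == 19
```

The ratio is simply points earned within the ROI divided by the points available there, so a patient who limps through the mid-range items with partial credit is penalized even when the overall Scaled Score looks intact.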
The above scoring only makes sense when the ROI is at least 10. If the range is less than 10, the individual has minimal scatter and the ratio is unreliable. As the ROI grows longer, the efficiency ratio more accurately gauges the degree of word finding difficulty. Anyone can miss an early test item for any number of reasons, but the ability to complete items within one's capability range remains high. Efficiency ratios typically run above 80%. Ratios between 70% and 80% fall in a borderline range, while those below 70% reflect significant word finding difficulty. It is not unusual for a premorbidly above-average individual with mild word finding problems to have average Vocabulary, Similarities and Information subtest scores and very low (below 60%) efficiency ratios.
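These interpretive cutoffs can be collected into a small helper. The function name and band labels are my own; the 10-item minimum and the 70%/80% cutoffs come from the discussion above.

```python
def interpret_ratio(roi, ratio):
    """Map an efficiency ratio onto interpretive bands
    (cutoffs from the text; labels are illustrative)."""
    if roi < 10:
        return "unreliable (minimal scatter)"
    if ratio >= 0.80:
        return "typical"
    if ratio >= 0.70:
        return "borderline"
    return "significant word finding difficulty"
```

Keeping the ROI check first matters: a short range means the ratio rests on too few items to support any of the other labels.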
This new scoring, consistent with the process approach advocated by Edith Kaplan, directs attention to what is going on during the evaluation by translating clinically useful information into scores that can be used deductively. Additional scores alone, however, won't solve the problem of the deductive clinician. I recently reviewed the report of a neuropsychologist who used over 70 scores and indicators to try to establish the diagnosis of frontal lobe damage while ignoring the serious depression that was primarily responsible for his client's slow and inefficient performances. We remain vulnerable to becoming lost in an extended array of scores and indices without the grounding that comes from a clinical understanding of the impaired performance processes.