Wednesday, 3 December 2014
Conference notes - a work in progress
Thanks to everyone who attended my talk and who gave such generous applause. If you'd like to send me some feedback on what was useful or what else you'd like to learn from my presentation, please feel free to contact me using the contact form on the side of this blog.
If you're interested in reading the detailed blog of my experiences as a cancer patient, feel free to go to www.neuroboob.blogspot.com
It was started as a way to keep family and friends informed of my progress through treatment for early breast cancer. I had no idea I was going to develop multifocal grade IV brain tumours in September last year. In hindsight, and reading the blog, I can see that the neurological symptoms started in August 2013, but I'd been through so much by then, and had so many different symptoms, I thought it was just related to the breast cancer treatment.
It's great to be alive and improving every day, 15 months later, though I'm told the fatigue and long-term muscle weakness could last for a few years. Be warned, the neuroboob blog is very experiential and has lots of self-disclosure. If you'd prefer to wait for my notes from the conference presentation, I'll endeavour to get them up before Christmas.
Best wishes
Wednesday, 3 September 2014
Empathy and patient-centred care - videos and articles
https://www.google.com.au/
There are quite a few related resources below the video on the link above.
The second one brought tears to my eyes.
https://www.youtube.com/watch?v=1e1JxPCDme4
This article, summarising a patient experience and innovation summit, has some great ideas and insights:
http://www.beckershospitalreview.com/quality/innovating-the-patient-experience-with-empathy-a-recap-of-cleveland-clinic-s-summit.html
And the two links within this article are also great
http://www.communicatewithheart.org/Newsroom.aspx
as is the video on the Cleveland Clinic Experience.
This is a powerful and touching video.
https://www.google.com.au/
I used to think I empathised with patients and practiced patient-centred care before I became a patient myself. The perspective is completely different on the other side, as many of you know from your own experiences. When you're very unwell, or in pain, or in shock, or worried about your known or unknown diagnosis, or simply exhausted, it's very hard to take in all the information you're given by well-meaning clinicians. I discovered that my familiarity with the conditions I saw as a neuropsychologist possibly made me less aware of how unfamiliar, frightening, and overwhelming the experience of being a patient was for my patients. I maintained what I hoped was a supportive yet cheerful disposition, not knowing that I would one day be on the receiving end of similar approaches, and find that it made it harder for me to express my feelings and concerns, and to ask questions. I didn't want to be a difficult patient. I found the cheerful, competent clinical facade difficult to penetrate as a patient. I still do, and I'm reasonably well-informed and assertive. Imagine how it is for people with poor health literacy, cognitive impairment, ESL, acute illness, pain…
I don't think there's one answer to the problem - perhaps one solution would be to sit beside the bed-bound patient, rather than looking down at them, and ask them how they really feel, what they're experiencing, and whether they have any questions or concerns that haven't been addressed yet. And to do the same for all our patients. Our efficiency and expertise and need to remain dispassionate and objective with our patients can inadvertently create distance between us and the patients we are trying to help. Maybe we need to remember to break down that barrier with every patient we see, to create a bridge that allows them to convey their fears, hopes, and questions, so that we can inform and reassure them from their space as a patient, not ours as clinicians who have seen it many times before. Many of you are probably doing this already - I thought I was, but being a patient was a big wake-up call that made me realise all the things I could have done better.
Tuesday, 26 August 2014
How we learn
http://www.npr.org/2014/08/23/342219405/studying-take-a-break-and-embrace-your-distractions
The book, How We Learn, looks interesting too.
Thursday, 19 June 2014
Wise words on evaluating information, beliefs, opinions, and knowledge. The Kalama Sutta
Kalama Sutta
Do not simply believe what you hear just because you have heard it for a long time
Do not follow tradition blindly merely because it has been practiced in that way for many generations
Do not be quick to listen to rumours
Do not confirm anything just because it agrees with your scriptures
Do not foolishly make assumptions
Do not abruptly draw conclusions by what you see and hear
Do not be fooled by outward appearances
Do not hold on tightly to any view or idea just because you are comfortable with it
Do not accept as fact anything that you yourself find to be logical
Do not be convinced of anything out of respect and deference to your spiritual teachers
You should go beyond opinion and belief. You can rightly reject anything which when accepted, practiced and perfected leads to more aversion, more craving and more delusion. They are not beneficial and are to be avoided.
Conversely, you can rightly accept anything which when accepted and practiced leads to unconditional love, contentment and wisdom. These things allow you time and space to develop a happy and peaceful mind.
This should be your criteria on what is and what is not the truth; on what should be and what should not be the spiritual practice.
THE BUDDHA.
Another summary of it from Wikipedia:
The Kalama Sutta states:
Do not go upon what has been acquired by repeated hearing,
nor upon tradition,
nor upon rumor,
nor upon what is in a scripture,
nor upon surmise,
nor upon an axiom,
nor upon specious reasoning,
nor upon a bias towards a notion that has been pondered over,
nor upon another's seeming ability,
nor upon the consideration,
"The monk is our teacher." [emphasis added]
Kalamas, when you yourselves know: "These things are good; these things are not blamable; these things are praised by the wise; undertaken and observed, these things lead to benefit and happiness," enter on and abide in them.'
Thus, the Buddha named ten specific sources from which knowledge should not be immediately viewed as truthful without further investigation, to avoid fallacies:
Oral history
Tradition
News sources
Scriptures or other official texts
Suppositional reasoning
Philosophical dogmatism
Common sense
One's own opinions
Experts
Authorities or one's own teacher
Instead, the Buddha says, only when one personally knows that a certain teaching is skillful, blameless, praiseworthy, and conducive to happiness, and that it is praised by the wise, should one then accept it as true and practice it. Thus, as stated by Soma Thera, the Kalama Sutta is just that: the Buddha's charter of free inquiry:
“ The instruction of the Kalamas (Kalama Sutta) is justly famous for its encouragement of free inquiry; the spirit of the sutta signifies a teaching that is exempt from fanaticism, bigotry, dogmatism, and intolerance.[4] ”
However, as stated by Bhikkhu Bodhi, this teaching is not intended as an endorsement of either radical skepticism or the creation of unreasonable personal truth:
“ On the basis of a single passage, quoted out of context, the Buddha has been made out to be a pragmatic empiricist who dismisses all doctrine and faith, and whose Dhamma is simply a freethinker's kit to truth which invites each one to accept and reject whatever he likes.[5] ”
Thursday, 5 June 2014
Neuropsychology and what patients want. Personal perspectives of a neuropsychologist who has been a patient with cancer and multiple complications for 17 months. (UPDATED with practice survey data 18 June 2014)
From my experience as a patient, I've been willing to wait, patiently, to see various professionals and to have tests done, trusting that this is necessary to diagnose and better treat my condition. I have appreciated the students, nurses, and doctors who have taken the time to take a thorough history from me and to answer my questions and concerns about the relative benefits of colonoscopy, gastroscopy, and laparoscopy in the last four weeks. I have appreciated their candidness in discussing the pros and cons of various scans and procedures. I have been willing to submit to invasive procedures that have been necessary in the diagnostic process, and I am glad we decided against pursuing a laparoscopy when none of the surgeons could see how it would be helpful, even though the cause of my symptoms was still unknown. When the symptoms recurred, I had a laparotomy to correct the twisted bowel which was only apparent on the first X-ray in early May. I understand that the doctors and nurses did not want to put me through unnecessary suffering or procedures. The care and compassion shown by the clinicians makes unpleasant, invasive procedures, and the waiting times, bearable.
I was fasted from Tuesday to Friday for two consecutive weeks waiting for procedures, first a colonoscopy, then a gastroscopy. The nausea and IV fluids overrode any hunger; I was just grateful that I was being cared for. I wanted answers, and was willing to deal with the waiting and uncertainty in order to get them. Pain relief also helped. Developing a topical allergic reaction to morphine was annoying, as it took away a quick and effective form of pain relief. And the pain was bad, worse than being in labour. Strangely, the most painful procedures I've experienced have been at the hands of anxious or apologetic nurses or medical students. It's better if they just get on with it, or get someone who is competent and experienced in the procedure to do it.
Neuropsychologists' assumptions about how patients perceive assessments
My recent experience has made me reevaluate a recurrent topic of debate in Australian neuropsychology. I started my training at Melbourne Uni in 1990, and the duration of neuropsychological assessments has been a persisting area of discussion and often heated disagreement within the field since then. Many clinicians favouring short assessments worry that "long" assessments may cause patients fatigue, frustration, or distress, and some have argued that "long" and comprehensive assessments (4-6 hours) were a waste of time for both patients and clinicians. This seemed to be based on the belief that an experienced and competent clinician should know exactly what tests to give each patient, so that the assessment is over in the least possible time (no more than 3 hours). It was argued that assessments should be done as quickly as possible to avoid subjecting patients to the "adverse experience" of testing, and to improve our clinical efficiency so we can see more patients in less time. Some people have told me my preference for comprehensive diagnostic assessments is obsessional, rigid, anal-retentive, cruel, inhumane, overly anxious, excessive, or a sign of questionable competence. And that's what people have said to my face.
This issue isn't unique to Australia. Take this quote from the preface to the North American Prediction in Forensic and Neuropsychology (2003).
I suspect that neuropsychologists have been over-sensitised to the risks of causing distress or harm to patients through research ethics applications that always ask if the research could be distressing to the patient, and in a desire to avoid the bad old days of bilateral ECTs, frontal lobotomies, and the 'deep sleep therapy' investigated by the Chelmsford Royal Commission. We've never been involved in those kinds of procedures, and our tests are standardised on healthy controls, where the Wechsler Memory and Intelligence scales are given on a single day for co-norming purposes. This shows that healthy adults can complete the tests in a single day without adverse effects, apart, perhaps, from the fatigue which often surprises people when the testing is over. Patients, even the elderly, often seem to find it an interesting and enjoyable experience, albeit challenging and confronting at times.
There are some patients who find assessment distressing for other reasons. Firstly, there are some patients who have insight into their cognitive difficulties, and who find it confronting to not perform as well as they expected on formal testing. I've seen a few patients who came wanting to know that everything was okay, despite their history of significant neurological injury or disease. Being presented with evidence of weaknesses with things like memory or concentration was sometimes hard for them to tolerate. In hindsight, the referrals weren't clinically necessary; they were more to satisfy the patients' own curiosity or to allay their anxieties. Two patients were dissatisfied with results that confirmed their fears of cognitive weaknesses; they had convinced themselves they were okay, and they'd hoped the assessment would prove others' concerns were unfounded. This outcome might have been circumvented by being clear on their reason for the assessment, and talking through the possible outcomes of an assessment before commencing it - how would they feel if the results were different from or worse than they expected? Would they be willing to proceed knowing that the assessment might reveal hitherto unknown difficulties? Of course, if an assessment is required for return-to-work purposes, or capacity assessments, the results of our testing are often not welcomed by the patient, but the patient is often not the client in these third-party cases, which creates a totally different dynamic to the curious self-funded or public patient who essentially wants to hear that there is nothing wrong with their memory or cognition despite a history of something like alcoholism, ABI, or MS. Accepting referrals without a clear clinical indication puts the clinician at risk of having a dissatisfied patient who is angry with the clinician for their test results. Not fun for either party.
For public patients, assessments ranged from 1-30 hours in length, with "short" assessments taking an average of 5.36 hours (sd = 2.56, range 1-12); "medium" assessments taking, on average, 7.89 hours (sd = 3.07, range 1.5-16); and "long" assessments taking an average of 11.77 hours (sd = 4.31, range 2-30). So there is a lot of variability in assessment times in Australia, and differing opinions on what constitutes short and long assessments. It seemed that the categorisation of assessments into short, medium, and long varied according to the average assessment time of each responding clinician.
For self-funded patients seen in private practice, the mean assessment duration was 8.8 hours (sd = 3.51, range 3.5-24, median = 10, mode = 10, N = 83 respondents). For WorkCover patients, assessments took an average of 9.57 hours (sd = 3.67, range 0-24, median = 10, mode = 10, N = 82). For medico-legal clients, assessments took an average of 12.67 hours (sd = 4.8, range 4-24, median = 12, mode = 10, N = 63).
These data indicate that while some Australian neuropsychologists manage to get through all contact and non-contact activities in a very short period of time, the modal estimated total assessment duration for all private patients was 10 hours. At St Vincent's in Melbourne, where we routinely did an assessment of 4-5 hours using the WAIS-III and WMS-III, the BDI-II, the STAI, the Boston Naming Test and verbal fluency (FAS and animal naming), the total time spent on one full assessment with interview, feedback, report and individualised recommendations included was between 7 and 10 hours per patient (inclusion of the MMPI-2 increased time on interpretation, feedback and reporting, but not contact hours, because the patient can be left alone to complete the inventory, with periodic supervision). And we were criticised by some for taking a lengthy approach to assessment. The survey data suggests that comprehensive testing may be more common than people believe, or that people who do a limited number of tests may be spending a greater amount of time in non-contact activities.
The Wechsler Intelligence and Memory scales, since the publication of the WMS-R, were meant to be given together, and this is done routinely in the US, where a full battery of neuropsychological tests can take from 8-10 hours, often administered by a trained psychometrician. I can't recall seeing any concern about this assessment time in my years lurking on North American neuropsychology lists.
In the 20-odd years that I have been giving neuropsychological assessments, firstly on clinical placement, then for my PhD and postdoc research, and then in mental health, neuroscience, general medicine, and rehabilitation settings, I can only recall a handful of patients who did not tolerate a comprehensive assessment with the Wechsler memory and intelligence scales and a number of additional tests of verbal fluency, confrontational naming, mood, anxiety, and premorbid abilities. The ones that I remember best were the young lady with borderline personality disorder who nearly pushed me over when I asked her to accompany me to the office for an assessment, and the agitated young man who hated maths, storming out of the office when I asked him the first arithmetic question of the WAIS-III. There were more patients where I terminated the session, because they were too unwell, depressed, delirious, or even stuporous for a valid assessment to be obtained. I wrote a report in each case detailing why the assessment had not been completed, and offering to see them again once their clinical state had improved.
With all due respect to my colleagues who are concerned that lengthy assessments might subject our clients to distressing experiences, I think we need to compare what we do to other more common health procedures.
I suspect that patients are willing to submit to invasive, undignified, or painful procedures because they trust the clinician will not subject them to unnecessary tests and, crucially, will not omit necessary ones either. Omitting necessary tests in the interests of time efficiency makes a mockery of the time the patient has spent waiting to see the clinician to obtain answers to their questions and solutions to their concerns and problems.
We seem to forget that the word patient has two meanings - one is a noun used synonymously with client. The other is an adjective which describes how a client is willing to behave on the assumption that their interests are being looked after, and that they will receive the answers they desire from the clinicians they go to see. After waiting to see a clinician, clients want the clinician to reward their patience with an assessment that gets to the bottom of the condition affecting their health, and they don't want clinicians to do an incomplete assessment which wastes the time spent waiting in hope of answers. They want us to get it right the first time, even if it takes a number of hours or sessions to do it. They trust us to get it right, and not waste our time on tests that are outdated or insensitive, and therefore more likely to get it wrong.
We've probably all had the experience of knowing that something is wrong with our car, and taking it to the mechanic who checks it over, changes the oil, and says everything is okay, only to drive it home and have it break down on the way, or within the next week. This experience seems to make many people furious. They take it to a different mechanic who runs more sophisticated or thorough diagnostic tests, and discovers that the clutch plates are worn out, the engine is dropping two cylinders, or that the battery is almost dead. Things that should have been discovered if a thorough assessment had been done in the first place. The motorist is understandably angry at the inconvenience, the wasted time and money, and vows to never return to the first mechanic again.
Clients are like car-owners - they are willing to invest time and effort into diagnostic testing when there is a problem, on the implicit understanding that the diagnostician will do everything necessary to get it right the first time. They understand that sometimes things can be missed in even a thorough assessment, and are forgiving of that. It sometimes takes a while for symptoms to develop to the point that the diagnosis becomes apparent. But motorists are less forgiving if a mechanic cuts corners and doesn't fully assess for the underlying causes of the presenting problem, perhaps because of internal or organisational pressures to constrain costs, perhaps because of a lackadaisical attitude, perhaps because of a well-intentioned concern about not charging for unnecessary tests. The motorist sees this as a waste of their time and money. They paid the mechanic to get it right the first time. We clinicians owe our clients the same. They have patiently waited to see us, they are willing to submit to our tests, trusting that we know and use the best tests available. Compared to the painful and invasive procedures that they have experienced as a result of their illness or injury, spending 5 or more hours sitting with a friendly psychologist who provides cups of tea, regular breaks, and is willing to listen to their experiences as a patient, is seen as a welcome change by patients and their families.
Suggestions for improving our client-centred care.
So how can we improve what we do for our patient clients? Firstly, we can ask them what they most want to learn or gain from their interaction with us, so that we can assess their needs and desires, and sometimes quickly give them what they want without embarking on a full assessment.
How else can neuropsychologists improve what they do?
- Respond to new referrals as soon as possible for both inpatients and outpatients. Get a brief idea of what the patient wants to learn, don't assume the referral question is correct, clarify it with the referring clinician, and do a brief screen for untreated mood or anxiety issues that may add unnecessary caveats to interpretation of your assessment. Unless there is an urgent clinical need, do not assess a clinically depressed or anxious person who has not had treatment. Delay it until treatment has had a chance to be effective. Explain the reason for this to the referrer.
- Educate the patient about what to expect from the assessment by attaching a brochure about neuropsychological assessment to your appointment letter, or run through the brochure with an inpatient.
- Be brave and evaluate what we do at present, and see if it is necessary for every patient. If a patient and their family have a clear and stable diagnosis with no questions about ability to return to work or study, or concerns about symptom progression, perhaps they just need counselling or education on their symptoms and condition, rather than automatically progressing to a full assessment.
- If they have a condition that may change over time, they probably need a baseline assessment that will provide a good point of comparison for improvement or deterioration in future functioning. Such an assessment would use measures with high test-retest stability, which is best obtained by using the composite Index scores of the Wechsler Memory and Intelligence Scales. Individual subtests of the Wechsler Scales rarely have the same stability over time in healthy people as the index scores do, meaning that it's harder to detect true change on subtest scores alone (the 95% confidence interval for test-retest stability spans +/- 2 scaled-score points or even more for subtests). So if you're seeing someone with an acquired brain injury who may want to return to study or work, a pre-surgical patient where post-surgical review will be required, or someone with a progressive neurological disorder who may lose the capacity to function independently, it is going to be more helpful for them if you assess them with the full Wechsler scales so that you can determine the degree of change over time. You just can't measure change as well with subtest scores as with index scores. So assess them thoroughly, presuming they aren't so severely impaired or clinically unstable at baseline that formal assessment is impossible, or their condition and prognosis so poor that assessment won't improve management of their clinical condition. Don't assess them unless they are functioning relatively well and others are making premature decisions for them on the basis of how impaired they may be one day. What the insightful patient wants right now is important, irrespective of their diagnosis. Just because I have had two GBMs removed doesn't mean that I'm happy to ignore the breast lump identified on mammography and MRI earlier this year.
Statistically, it's likely the GBMs will get me in the end, but I've survived for 9 months so far, the tumours are shrinking, and I'm planning to beat them. I don't want to be foiled by unoperated breast cancer just because other people decided I've been through too much already and shouldn't spend more time away from my family. One week in hospital for breast surgery is nothing compared to metastatic breast cancer. It's not a decision for others to make, no matter how compassionate they think they're being.
- Unless you try to formally assess someone, you can't presume that they are too impaired to be assessed. I have seen patients who were severely impaired on baseline testing who improved over time, even on brief screens like the Addenbrooke's or MMSE. Even people with significant sensory, motor, or expressive problems, including blindness and deafness, can get subtest and index scores ranging from extremely low to superior. If we don't assess, we will never know. But we need to justify the reason and need for assessment in each case, not just blindly accept the referral question.
- Don't jump to conclusions on the basis of the diagnosis, history, or other people's opinions. Allow patients to surprise you, to demonstrate the remarkable variability in people. Did you think it would be possible for someone who'd had two brain tumours to write like I do? We're all guilty of jumping to conclusions about patients' abilities based on their diagnoses. I give thanks every day that I am still able to read, think, and communicate, and come up with corny analogies.
- Of course, if someone is severely ill, in considerable pain, confused, or still in the recovery phase of an acquired brain injury or acute illness, the test results will not be as valid or reliable as when they are clinically stable. It's okay to delay an assessment until the patient is well enough to do it to the best of their abilities.
- Recognise when a patient pushes certain buttons in yourself that might compromise your objectivity in doing the assessment, and seek support or refer on to another clinician to see the patient. We're all human, and occasionally find certain conditions or stories too close to home for us to be able to help the best that we can.
- We can stop being embarrassed and apologising for what we do. We have the best tests available to test cognition, memory, mood, and behaviour, and we are the experts in understanding the impact of brain disorders on the whole person. People come to us for answers and assistance; we need to acknowledge where we can and cannot help, and that we can do it well.
- We need to regularly evaluate the tests that we use according to our ethics code and code of conduct, and to ensure that every test we use is measuring what it is supposed to be measuring. We need to be experts in the clinical applications and importance of reliability, validity, sensitivity, and specificity. Practically speaking, it's like checking that your grandmother's kitchen scales are as accurate as a modern digital scale, and keeping them for decoration purposes only if they are not. Or, if her scales are accurate but scaled in outmoded imperial measures, having ready access to a conversion chart from imperial to metric so that you don't miscalculate the amount of sugar you use in your macaroons. In baking, accurate measurements are vitally important, as they are in science and neuropsychology.
- Give up on tests that you feel you can interpret accurately based solely on years of experience and your clinical intuition. This approach can result in widely divergent impressions both between and within clinicians. It's a little like rapidly measuring a pinch of salt with your hands vs standard measures, or guestimating one cup of flour. Too much salt, or sugar, will ruin your macaroons. Too much flour will make your sponge cake dry. How can we bake to consistent results without consistent recipes and standardised measures? Our assessments require precise measures, just like in baking sponges or macaroons. They're not casseroles and curries that can be seasoned to taste and adapted to available ingredients.
- We should consign unreliable, invalid, old and outdated tests to the bookcase or shredder rather than continue to use tests that provide results with a standard error of measurement big enough to drive a truck through. Anything with a reliability of less than 0.7 is considered unacceptable. From memory, this means that we should dispense with Trails A & B, the RAVLT, the L'hermitte Board, the Colour Form Sort, and the Wisconsin Card Sorting Test. We have better tests for psychomotor speed, verbal and visual memory, and fluid intelligence. Surely it's worse to make a patient spend time on completing inaccurate and uninterpretable tests than to make them spend a few hours on a battery of highly reliable and well-standardised ones that were designed to be used together?
- Purchase and use tests like the BRIEF or BRIEF-A, or the FrSBe, to get patient and informant reports of a range of frontal-system behaviours. These inventories reveal if there are problems with behaviours like impulsivity, disinhibition, inappropriateness, planning, and organisation, by comparing scores with age- and gender-matched norms. They can show you if there has been a change from premorbid levels; they are much more revealing than relying on the WCST and Trails to measure a wide range of frontal functions; they save time on the clinical interview; and they can help reveal lack of insight, over-reporting, protectiveness, or under-reporting in patients and informants.
- Be honest in obtaining and reporting test results. If the patient comes to you in so much fatigue, depression, anxiety or pain that you think it will invalidate the test results, do not proceed with the assessment. Reschedule it for when they feel better, or see if patient-administered pain relief or a warm drink can alleviate their discomfort and start the testing a little later. If the testing is affected by the development of fatigue, reductions in concentration, or increased anxiety, discontinue the assessment so you can get the patient's best performance at another time, and note the details on the record forms and in the report.
- If a comprehensive assessment is not obtainable, detail the reasons in the report, and any caveats that this may place on the interpretation.
- Don't be embarrassed or apologetic if you're not sure what to make of the assessment results. Describe what you found, and list the possible interpretations. Human problems are highly variable and complicated, and it's not always possible to be sure of the answers. Better to acknowledge this rather than to put on a cloak of false confidence in your conclusions. Better to lay your decision-making cards on the table - "it could be one of these things, but I'm not sure. It seems more likely to be a, b or c, and less likely to be z. On an outside chance, it could be xyz".
- Don't assume primary responsibility for arriving at a diagnosis - your effort is just a part of a multidisciplinary assessment. As important as it is, it's just one cog in the wheel. By putting your differentials in the report, it allows others to consider your hypotheses and may help clarify their own. I once speculated that a woman with an isolated memory impairment might have paraneoplastic limbic encephalitis - she was found to have advanced breast cancer.
- Remember your first duty of care is to the patient. Ask what they want to learn or achieve from seeing you, and try to give that to them if it is possible, or reformulate their questions and desires into a form that is possible for you to address.
- Don't let bureaucrats dictate how long your assessment should take based on the "time is money" premise, which is offensive and demeaning to patients, and disrespectful of clinicians who are the experts in their field. Each patient will need a different amount of time to be assessed. Some will skim through everything in the minimal time, others will take longer, due to a multitude of factors including personality, impulsivity, fatigue, concentration, motivation, or other things. Each patient deserves to get the best assessment for their individual circumstances, even if it takes a little more time. We wouldn't accept half a brain scan because it was taking too long, would we? If the patient was allergic to the contrast medium, then we might not get an MRI with contrast, but the reason would be documented and the absence of the data taken into account when interpreting the non-contrast scans.
- Remember to give and elicit feedback on the assessment process once it's over, and keep records of the feedback you receive, so that you can continually improve on the service that you provide.
- Make time to meet with the primary carer individually, with the patient's consent, or arrange for one of your team to meet them simultaneously. Family, carers, and patients often find it hard to express their needs and concerns in front of each other, and often don't know what to ask for in terms of education and supports.
- Refer on to a multidisciplinary allied health team if the patient isn't already linked to one, so that the patient can benefit from the range of professional services that are available.
In terms of testing, despite my preference for comprehensive assessments using the best tests available, I don't believe in over-assessing anyone, just in doing the best assessment possible for the individual. I'm not completely rigid in my approach, which has developed through extensive reading of neuropsychological assessment and measurement research over the past 30 years. What I have written above basically summarises my approach to neuropsychological assessment, developed over 24 years of reading scholarly articles, research, practice, case-conferences, and supervision. It's an approach based on doing the best assessment possible, using the best tests, for each patient, and not taking short-cuts unless it is clinically necessary or justifiable. I don't see it as "testing until the cows come home," and my patients and students haven't seen it that way either, though I can understand that it is a different approach to that described in Kevin Walsh's books of the late 1980s. The student comments were reassuring for me, because I trained at a time when every supervisor had a different set of favourite adult tests which wasn't necessarily made explicit (we were told to choose them ourselves, and that we'd learn "through experience").
On placement, in trying to assess what I thought was important (and also take a history) in less than two hours, I often found I'd omitted at least one test that my supervisor thought was vital, like digit span, digit symbol, arithmetic, vocabulary, similarities, comprehension, or information. On the one hand, I was embarrassed at leaving "vital" tests out; on the other, I was discouraged from wanting to do a thorough, standardised assessment rather than a brief screen based on a hypothesis-testing approach using the WMS, RAVLT, and selected WAIS-R subtests. The former was seen as a waste of time, even though it affected my confidence in my interpretation of the test results. Then I went to the RCH, where my supervisors encouraged me to do comprehensive assessments, both for the benefit of the children and their families, and as a learning experience for me as a student. I learnt that the WISC-R and other intelligence scales available at the time gave similar results from slightly different perspectives, and that it was probably over-servicing to do two or three different cognitive batteries with the one child (like the WISC-R, K-ABC and Stanford-Binet), no matter how sweet and compliant she was. Sometimes our tests don't give us definitive answers on subtle problems. And yes, "intelligence" tests are suitable for neuropsychologists to use, because they test a wide range of cognitive domains and allow us to see how individuals compare to the standardisation sample. To say that our patients need different tests because their brains are different, or that we should test people with certain conditions with tests standardised on non-normal populations, seems absurd to me. Having tests standardised on normal populations allows us to see if the patient differs significantly from others, and allows us to seek evidence to reject the null hypothesis that the patient is normal. This provides clinically and diagnostically useful information.
I don't see how the effort involved in getting normative data for various clinical groups would be worth the time and expense. Our existing tests give us a wealth of data, we just have to know how to interpret it.
After my clinical placements, I decided that adults deserved the same degree of thorough assessment given to children, particularly with work, study, and independence to consider, and that it was more consistent to administer the tests of the Wechsler scales in the standardised order and procedure than to pick and choose based on clinical intuition, or to invalidate the norms by "testing the limits" or otherwise violating standardised procedures on an ad-hoc basis for each patient.
Embracing the well-normed, stable, and reliable WMS-R, WMS-III and WMS-IV indices saved me wasting patients' time on poorly normed, less reliable and less stable tests like the WMS, RAVLT, or the norm-free L'hermitte board and colour-form sorting test. Better still, by using the WAIS-R and WMS-R in combination, there were tables to look up to see if memory was in the range expected from intellectual functioning, rather than the guesses and clinical intuition available to users of the WMS. This has improved further with the co-normed third and fourth editions of the Wechsler scales, and the co-norming with the WTAR.
Using this approach of testing for evidence to reject the null hypothesis (that the patient is normal), I found I could settle into a well-rehearsed process of assessment that is more time-efficient and standardised than pausing in an assessment wondering which test to give next. It allows the clinician to act like a well-oiled conduit for the tests on one level, while being simultaneously aware of the patient's mood, concentration, and other qualitative features of performance on the other. I couldn't justify sacrificing robust tests for flimsy ones. That is why I prefer composite scores to subtest scores; and to get composite scores, you need to do more testing.
The principle of aggregation shows that the reliability and stability of composite or index scores is greater than that of the individual subtests that contribute to the index. Using composite scores allows you to test if any of the results suggest the patient is significantly different from normal, which might allow you to reject the null hypothesis and conclude that there is evidence of a score, or set of scores, that is unusual in the normal population. I liked to use a base rate of 5% as my criterion for clinical significance. Statistical significance seemed irrelevant when a difference between index or scaled scores can be statistically significant (have a probability of occurring less than 5% of the time), yet be found to occur in more than 10% of the standardisation sample.
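The aggregation principle above can be illustrated with the classical Spearman-Brown prophecy formula, which predicts the reliability of a composite of parallel subtests. The sketch below is purely illustrative: the reliability value of .80 and the count of four subtests are assumed for the example, not drawn from any particular test manual.

```python
def spearman_brown(subtest_reliability: float, n_subtests: int) -> float:
    """Predicted reliability of a composite of n parallel subtests,
    each with the given reliability (Spearman-Brown prophecy formula)."""
    r, k = subtest_reliability, n_subtests
    return (k * r) / (1 + (k - 1) * r)

# Illustrative values only: four parallel subtests, each with reliability .80,
# yield a composite reliability of about .94 - comfortably higher than any
# single contributing subtest.
print(round(spearman_brown(0.80, 4), 3))  # 0.941
```

Under these assumed values, the composite is appreciably more reliable than its parts, which is the psychometric case for preferring index scores over single subtests.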
Most of the 2000 patients seen from 1994-2009 directly by me, my students, and colleagues at St Vincent's, using a fairly consistent comprehensive approach, were appreciative of the time we spent with them in interview, assessment, and feedback (average range 4.5-6 hours), and if any became fatigued or distressed, we gave them the opportunity to take a break, or to complete the assessment another time. Some accepted, but others preferred to complete the testing that day, as they'd often already waited for months to see us. In fact, we probably found the assessments more tiring than the clients, due to the need to maintain our concentration and adherence to standardised administration while recording observational data and attending to the client's needs.
An anonymous person commented in the feedback on the 2012 CCN conference that I was pushing an agenda of "testing until the cows come home." This was in response to my comments on a student case presentation, where I expressed concern about the apparently common practice of omitting core subtests of the WMS-IV and assuming that giving only logical memory or verbal paired associates provides a reliable measure of verbal memory. I felt bad about getting sidetracked on that issue in that session, and apologised immediately afterwards to the presenting student, and to the two students who subsequently had less time to present their cases. I think we parted on good terms, and I would like to apologise to anyone present who was offended that I was distracted by my passion for high-quality data. I meant no offence.
We need to regularly inform ourselves of international practice standards by having a good look at the international literature on neuropsychology, particularly from the USA, where involvement in medico-legal work has resulted in vigorous debate about standards for neuropsychological assessment. We need to remind ourselves of the tortuous four years of undergraduate study in the science and philosophy of psychology, particularly measurement theory, test construction, standardisation, and psychometrics. They are not irrelevant vestiges of the past; they are core to what we do. We are not here to be palm-readers (though it can be an amusing hobby out of work). Our excellent tests are what set us apart from all other health disciplines; we need to use them to the benefit of our patients and our profession. Of course there is a role for intuition in our interactions with patients and families, but guided by professional knowledge and experience, including knowledge of the errors commonly made by clinicians and the steps needed to avoid them. Psychology and neuropsychology are professional and scientific disciplines with strong research and ethical foundations. If we want to avoid potential professional misconduct allegations and deregistration, we need to keep firmly within the bounds of ethical, evidence-based psychological practice. We can't make it up as we go along, unless you fancy being hauled over the coals for professional misconduct, or embarrassed during cross-examination as a witness in court. It's also about doing the right thing, the ethically defensible and professionally accountable thing, not about doing what is easy.
Having gone through all of this has been a transformative experience. I'm more aware of the important things in life, more exasperated by, and less interested in the trivial, and more appreciative of the preciousness of health, happiness, and existence. I'm acutely aware of my own mortality (which possibly underlies my need to share my thoughts) and hope I still have much to share with people, and much time in which to do it.
I intend to keep improving until I'm fully recovered, and to live another 40 years (at least) so I can see my children grow up and enjoy cuddling their children.