Thursday, 19 February 2015

A personal perspective on brain training apps.

I saw a neurosurgeon for a second opinion today. I took a neuropsychologist friend along with me, which was great, because she was able to take notes and discuss what happened with me afterwards. My scans on February 17th were almost identical to those taken on January 2nd, prior to my surgery on January 6th. It's possible the tumour was removed and has regrown. His advice was to remove it with a right temporal lobectomy. I'm feeling relieved and comfortable with that, as I intend to live for many more years so that I can keep learning about neuropsychology and sharing my knowledge, thoughts, and experiences with others, as well as being around for my children as they continue to grow up.

My friend and I also had a very animated discussion in the waiting room about the benefits of brain training programs. I could sense the other patients and companions taking an interest in our discussion. I showed her some games on Lumosity, and told her I thought they had poor ecological validity and were frustrating when I couldn't improve with repeated practice, although they may provide people like me with a baseline and the ability to track deterioration over time. I told her how joyful I felt when I was able to complete a crossword puzzle recently, and how I'd prefer to spend my time with friends, either in person or talking on the telephone, or doing crosswords, or writing, rather than playing games derived from neuropsychological research. These games are intended to measure cognitive function, not to be fun, and many of them aren't. Some of the Lumosity games are fun initially (like Penguin Pursuit, Pet Detective, the barista game, and other tests of flexibility and problem solving), but I find them discouraging when I can't improve, even with repeated practice. My friend reassured me that cognitive training works for working memory function, which I believe (think of all the evidence that learning musical instruments improves attention, memory, and other aspects of cognition in children and adults). She reminded me that brain training probably isn't right for me at the moment, given my recent brain surgery.

Using Lumosity for a few weeks means that I now understand exactly what patients mean when they say our tests are meaningless to them. I used to reply that the tests were normed on healthy people of different ages and backgrounds, so that they allow us to tell if a person is performing as would be expected for their age, education, gender, and ethnic background. It doesn't matter how interesting the tests are to clients, though I empathise with patients who dislike the tests, because it must be hard to give one's best effort when a test is irritating and seems pointless and unrelated to the concern that has brought you to see the psychologist.

Must sleep now; I have a busy day tomorrow, and surgery is scheduled for February 28th. I might have time to look at Norman Doidge's new book on how the brain heals itself before then. However, I'd rather keep myself fit and healthy, and write as much as I can before I have surgery again.

Wednesday, 18 February 2015

I now have recurrent GBM

I'm writing briefly for those of you who may still be waiting for the notes from my presentation at the 2014 CCN conference on patient-centred care. I still have the PowerPoint slides on file, but it doesn't seem possible to upload ppt files to Blogger, so my plan was to cut and paste text and images from the files into a blog post, and to elaborate on each slide so that a written version of my presentation would be available.
I'm still hoping to do that some time, but that will depend on the results of my most recent brain scan, and my visit to a new neurosurgeon for a second opinion tomorrow.

I was found to have a new tumour (in the right temporal lobe) on January 2nd, and it was removed four days later. We were told the post-surgical MRI showed no residual tumour or perioperative infarct (stroke). The histopathology was "the same as before"; presumably that meant GBM. The neurosurgeon said that the Gliolan hadn't helped identify any tumour that he wouldn't have seen with the naked eye or stereotactic imaging.

I saw the neurosurgeon for a review on February 3rd, and he indicated that he had debulked most, but not all, of the tumour because it was growing in the lateral ventricle, and he didn't resect absolutely all of it because he didn't want to cause me any impairment, or to adversely affect my quality of life. I wish we'd spoken about the neuropsychology of the right temporal lobe, because I supervised several (as yet unpublished) postgraduate research projects that looked at neuropsychological outcomes after epilepsy surgery for patients with hippocampal sclerosis. The data consistently showed very little neuropsychological impairment for right temporal lobectomies, with a minimum period of 6 months between surgery and reassessment. People undergoing left temporal lobectomies tended to have more problems, with decreases in verbal memory function and confrontational naming abilities. So, on the basis of at least 3 students' analyses of pre- and postoperative neuropsychological data, I would have told the surgeon not to worry about my right temporal lobe, and that I could cope with possibly developing impairment in visual memory after surgery. This would affect my quality of life far less than living with the knowledge that there may still be residual tumour left in my brain after the most recent craniotomy, and that it can't be treated with temozolomide (a form of chemotherapy) because I had a severe and prolonged pancytopenia after receiving that drug for 4 weeks in 2013. Also, it is possible that the region of the most recent tumour may not be able to be irradiated, because I've already received radiation in close proximity to it, and because the tumour was close to the medulla, which apparently can only receive a certain level of radiation.

The question of radiotherapy is a technical one, which hasn't yet been answered. The current treatment plan is for me to have Avastin, an anti-angiogenic agent that stops tumours growing their own blood supplies. It also impairs healing of wounds, so I'll have to postpone the prophylactic mastectomy that I'd hoped to have early this year. (My remaining breast was normal on two MRIs last year, but I wanted it removed, just to be safe. Now it's the least of my worries in the scheme of things.)

I have no idea what tomorrow will bring. I haven't heard anything about the results of yesterday's MRI, and I don't want to see the images until I'm sitting with the neurosurgeon who can interpret them for me and give me advice on what to do next.

I hope to be able to start sharing my presentation notes and ideas soon; it will be good to have a project to work on, other than the one of trying to improve my health.

Updates on my thoughts and experiences will continue to be posted at www.neuroboob.blogspot.com

Wednesday, 3 December 2014

Conference notes - a work in progress

I gave a talk about my experiences as a patient at a conference last week, and promised to post my notes here. This is just a quick post to say that I'm not sure when I'll get around to doing it, as I'm still recovering from the conference and a virus that we picked up. I'll probably be able to get something up in the next week or two.
Thanks to everyone who attended my talk and who gave such generous applause. If you'd like to send me some feedback on what was useful or what else you'd like to learn from my presentation, please feel free to contact me using the contact form on the side of this blog.

If you're interested in reading the detailed blog of my experiences as a cancer patient, feel free to go to www.neuroboob.blogspot.com

It was started as a way to keep family and friends informed of my progress through treatment for early breast cancer. I had no idea I was going to develop multifocal grade IV brain tumours in September last year. In hindsight, and reading the blog, I can see that the neurological symptoms started in August 2013, but I'd been through so much by then, and had so many different symptoms, I thought it was just related to the breast cancer treatment.

It's great to be alive and improving every day, 15 months later, though I'm told the fatigue and muscle weakness could last for a few years. Be warned, the neuroboob blog is very experiential and has lots of self-disclosure. If you'd prefer to wait for my notes from the conference presentation, I'll endeavour to get them up before Christmas.
Best wishes

Wednesday, 3 September 2014

Empathy and patient-centred care - videos and articles

My husband sent me this link today. Our hospital is moving towards patient-centred care, and is distributing this video on empathy to all staff:

https://www.google.com.au/search?site=&source=hp&ei=E6gHVNS5HIWD8gWKwIGoDg&q=cleveland+clinic+patient+experience+video&oq=cleveland+clibic+patient+&gs_l=mobile-gws-hp.1.1.0i13l5.3527.14361.0.18141.26.26.0.4.4.1.913.13444.3-12j4j4j6.26.0....0...1c.1.52.mobile-gws-hp..7.19.6284.0.8g3BwbRVp_c

There are quite a few related resources below the video on the link above.

The second one brought tears to my eyes:
https://www.youtube.com/watch?v=1e1JxPCDme4

This article, summarising a patient experience and innovation summit, has some great ideas and insights:

http://www.beckershospitalreview.com/quality/innovating-the-patient-experience-with-empathy-a-recap-of-cleveland-clinic-s-summit.html

And the two links within this article are also great:
http://www.communicatewithheart.org/Newsroom.aspx
as is the video on the Cleveland Clinic Experience.

I used to think I empathised with patients and practised patient-centred care before I became a patient myself. The perspective is completely different on the other side, as many of you know from your own experiences. When you're very unwell, or in pain, or in shock, or worried about your known or unknown diagnosis, or simply exhausted, it's very hard to take in all the information you're given by well-meaning clinicians. I discovered that my familiarity with the conditions I saw as a neuropsychologist possibly made me less aware of how unfamiliar, frightening, and overwhelming the experience of being a patient was for my patients. I maintained what I hoped was a supportive yet cheerful disposition, not knowing that I would experience being on the receiving end of similar approaches, and find that it made it harder for me to express my feelings and concerns, and to ask questions. I didn't want to be a difficult patient. I found the cheerful, competent clinical facade difficult to penetrate as a patient. I still do, and I'm reasonably well-informed and assertive. Imagine how it is for people with poor health literacy, cognitive impairment, ESL, acute illness, pain…

I don't think there's one answer to the problem. Perhaps one solution would be to sit beside the bed-bound patient, rather than looking down at them, and to ask them how they really feel, what they're experiencing, and whether they have any questions or concerns that haven't been addressed yet, and to do the same for all our patients. Our efficiency and expertise and need to remain dispassionate and objective with our patients can inadvertently create distance between us and the patients we are trying to help. Maybe we need to remember to break down that barrier with every patient we see, to create a bridge that allows them to convey their fears, hopes, and questions, so that we can inform and reassure them from their space as a patient, not ours as clinicians who have seen it many times before. Many of you are probably doing this already. I thought I was, but being a patient was a big wake-up call to realise all the things I could have done better.


Tuesday, 26 August 2014

How we learn

I wish I'd read this article when I was a student - it would have made me feel less guilty about the way I studied, which wasn't the way we were told to study (BORING!). I hated sitting in the same place each day, and had to take lots of breaks and change where I was studying (outside, different rooms, cafes…). But I must have been doing something right: I got good marks and finished my PhD in the end…

http://www.npr.org/2014/08/23/342219405/studying-take-a-break-and-embrace-your-distractions

The book looks interesting too: How We Learn: The Surprising Truth About When, Where, and Why It Happens (hardcover, 256 pages).

Thursday, 19 June 2014

Wise words on evaluating information, beliefs, opinions, and knowledge: the Kalama Sutta

Thanks to my old friend Debbie Ling for sharing these wise words with me. A lot of this resonates with the literature on clinical decision-making, and with what philosophers say about logical thinking and the scientific method. I get excited when similar recommendations come from diverse sources - it suggests, to me at least, that they reveal basic truths about our world.

Kalama Sutta


Do not simply believe what you hear just because you have heard it for a long time


Do not follow tradition blindly merely because it has been practiced in that way for many generations


Do not be quick to listen to rumours


Do not confirm anything just because it agrees with your scriptures


Do not foolishly make assumptions


Do not abruptly draw conclusions by what you see and hear


Do not be fooled by outward appearances


Do not hold on tightly to any view or idea just because you are comfortable with it


Do not accept as fact anything that you yourself find to be logical


Do not be convinced of anything out of respect and deference to your spiritual teachers


You should go beyond opinion and belief.  You can rightly reject anything which when accepted, practiced and perfected leads to more aversion, more craving and more delusion.  They are not beneficial and are to be avoided.


Conversely, you can rightly accept anything which when accepted and practiced leads to unconditional love, contentment and wisdom.  These things allow you time and space to develop a happy and peaceful mind.


This should be your criteria on what is and what is not the truth; on what should be and what should not be the spiritual practice.


THE BUDDHA.



Another summary of it from Wikipedia:


The Kalama Sutta states:


   Do not go upon what has been acquired by repeated hearing,

   nor upon tradition,

   nor upon rumor,

   nor upon what is in a scripture,

   nor upon surmise,

   nor upon an axiom,

   nor upon specious reasoning,

   nor upon a bias towards a notion that has been pondered over,

   nor upon another's seeming ability,

   nor upon the consideration, 


"The monk is our teacher." [emphasis added]

   Kalamas, when you yourselves know: "These things are good; these things are not blamable; these things are praised by the wise; undertaken and observed, these things lead to benefit and happiness," enter on and abide in them.'


Thus, the Buddha named ten specific sources which knowledge should not be immediately viewed as truthful without further investigation to avoid fallacies:


   Oral history

   Traditional

   News sources

   Scriptures or other official texts

   Suppositional reasoning

   Philosophical dogmatism

   Common sense

   One's own opinions

   Experts

   Authorities or one's own teacher


Instead, the Buddha says, only when one personally knows that a certain teaching is skillful, blameless, praiseworthy, and conducive to happiness, and that it is praised by the wise, should one then accept it as true and practice it. Thus, as stated by Soma Thera, the Kalama Sutta is just that; the Buddha's charter of free inquiry:

     The instruction of the Kalamas (Kalama Sutta) is justly famous for its encouragement of free inquiry; the spirit of the sutta signifies a teaching that is exempt from fanaticism, bigotry, dogmatism, and intolerance.[4]     


However, as stated by Bhikkhu Bodhi, this teaching is not intended as an endorsement for either radical skepticism or as for the creation of unreasonable personal truth:

     On the basis of a single passage, quoted out of context, the Buddha has been made out to be a pragmatic empiricist who dismisses all doctrine and faith, and whose Dhamma is simply a freethinker's kit to truth which invites each one to accept and reject whatever he likes.[5]


Thursday, 5 June 2014

Neuropsychology and what patients want. Personal perspectives of a neuropsychologist who has been a patient with cancer and multiple complications for 17 months. (UPDATED with practice survey data 18 June 2014)

This last week in hospital, the third week-long admission for what turned out to be a twisted bowel, has given me another perspective on what patients want from their healthcare experience, particularly in terms of timeliness and time spent by clinicians.

From my experience as a patient, I've been willing to wait, patiently, to see various professionals and to have tests done, trusting that this is necessary to diagnose and better treat my condition. I have appreciated the students, nurses, and doctors who have taken the time to take a thorough history from me and to answer my questions and concerns about the relative benefits of colonoscopy, gastroscopy, and laparoscopy in the last four weeks. I have appreciated their candidness in discussing the pros and cons of various scans and procedures. I have been willing to submit to invasive procedures that have been necessary in the diagnostic process, and I am glad we decided not to pursue a laparoscopy when none of the surgeons could see how it would be helpful, even though the cause of my symptoms was still unknown. When my symptoms recurred, I had a laparotomy to correct the twisted bowel, which had only been apparent on the first X-ray in early May. I understand that the doctors and nurses did not want to put me through unnecessary suffering or procedures. The care and compassion shown by the clinicians makes unpleasant, invasive procedures, and the waiting times, bearable.


I was fasted from Tuesday to Friday for two consecutive weeks waiting for procedures, first a colonoscopy, then a gastroscopy. The nausea and IV fluids overrode any hunger; I was just grateful that I was being cared for. I wanted answers, and was willing to deal with the waiting and uncertainty in order to get them. Pain relief also helped. Developing a topical allergic reaction to morphine was annoying, as it took away a quick and effective form of pain relief. And the pain was bad, worse than being in labour. Strangely, the most painful procedures I've experienced have been at the hands of anxious or apologetic nurses or medical students. It's better if they just get on with it, or get someone who is competent and experienced in the procedure to do it.

Neuropsychologists' assumptions about how patients perceive assessments
My recent experience has made me reevaluate a recurrent topic of debate in Australian neuropsychology. I started my training at Melbourne Uni in 1990, and the duration of neuropsychological assessments has been a persisting area of discussion and often heated disagreement within the field since then. Many clinicians favouring short assessments worry that "long" assessments may cause patients fatigue, frustration, or distress, and some have argued that "long" and comprehensive assessments (4-6 hours) were a waste of time for both patients and clinicians. This seemed to be based on the belief that an experienced and competent clinician should know exactly what tests to give each patient, so that the assessment is over in the least possible time (no more than 3 hours). It was argued that assessments should be done as quickly as possible to avoid subjecting patients to the "adverse experience" of testing, and to improve our clinical efficiency so we can see more patients in less time. Some people have told me my preference for comprehensive diagnostic assessments is obsessional, rigid, anal-retentive, cruel, inhumane, overly anxious, excessive or a sign of questionable competence. And that's what people have said to my face.


This issue isn't unique to Australia. Take this quote from the preface to the North American text Prediction in Forensic and Neuropsychology (2003):


"Considerable controversy exists between and within factions of neuropsychologists who hold any number of circumscribed views that they often attribute to the superiority of one training model over another. Differences include the number of tests to administer, which among the many available tests should be administered, and how administered tests should be interpreted. Some argue for a purely quantitative analysis where test scores are compared against established standards. Others demand inclusion of qualitative measures, such as a patient's approach to the test, or the way a patient constructs a drawing. Both approaches demonstrate merit. Both present limitations" (p. vii)



I don't advocate a purely quantitative approach, and I'm not aware of anyone who does. I believe that it is possible to take a sophisticated approach to quantitative analysis of test results that also incorporates qualitative observations. I'm less of a fan of "testing the limits" or modifying standard procedures on tests like Block Design, because it violates the standardised administration procedures, and thus invalidates the scores. Even if such procedures are done after the subtest is completed, they would still have an effect on retesting. I'd prefer it if base rate data were available to help interpret the qualitative data. This comes from my experience of having tested over 50 normal controls on a range of neuropsychological measures for my PhD. As a tutor teaching test administration, I saw very intelligent clinical psychology masters students who could not master the WCST or Austin Maze, and who made unusual copies of the RCFT. I have seen a huge range of approaches to our tests, some of which would have been classed as abnormal on qualitative analyses. Without knowing how frequently certain qualitative features of performance occur in healthy and impaired populations, it is difficult to know the sensitivity and specificity of such information. Clinical lore about the diagnostic significance of different approaches to our tests should be backed up by scientifically collected data. I haven't had the chance to look for it lately, but if it's out there, we should use it. I had the privilege of sitting in Edith Kaplan's classes and eating Chinese food with her during my time in Boston. She was a firm believer in people collecting data to support or disprove her qualitative Boston Process Approach to neuropsychological assessment. She didn't think that qualitative approaches were incompatible with collecting and using quantitative data, and openly invited people to do so.

Being a medical patient vs being a neuropsychologist's client

As a patient who has had more than 15 admissions and over 150 days in hospital, I can tell you that, for the medically stable patient who is not troubled by pain, nausea, or fatigue, having an assessment under the care of a responsive and flexible neuropsychologist would only be perceived as an invasive or adverse experience by the rarest of patients. No blood is collected. No veins are punctured, often repeatedly, in various places, in attempts to take blood or insert cannulas. No disgusting contrast liquid needs to be ingested. There is no risk of life-threatening complications from the assessment. It doesn't cause physical pain. Neuropsychological assessments do not cause nausea, vomiting, or diarrhoea, or fear of dying during yet another surgery. 


I suspect that neuropsychologists have been over-sensitised to the risks of causing distress or harm to patients through research ethics applications that always ask if the research could be distressing to the patient, and through a desire to avoid the bad old days of bilateral ECTs, frontal lobotomies, and the 'deep sleep therapy' investigated by the Chelmsford Royal Commission. We've never been involved in those kinds of procedures, and our tests are standardised on healthy controls, with the Wechsler Memory and Intelligence scales given on a single day for co-norming purposes. This shows that healthy adults can complete the tests in a single day without adverse effects, apart, perhaps, from the fatigue which often surprises people when the testing is over. Patients, even the elderly, often seem to find it an interesting and enjoyable experience, albeit challenging and confronting at times.


There are some patients who find assessment distressing for other reasons. Firstly, there are some patients who have insight into their cognitive difficulties, and who find it confronting to not perform as well as they expected on formal testing. I've seen a few patients who came wanting to know that everything was okay, despite their history of significant neurological injury or disease. Being presented with evidence of weaknesses in things like memory or concentration was sometimes hard for them to tolerate. In hindsight, the referrals weren't clinically necessary; they were more to satisfy the patients' own curiosity or to allay their anxieties. Two patients were dissatisfied with results that confirmed their fears of cognitive weaknesses; they had convinced themselves they were okay, and they'd hoped the assessment would prove others' concerns were unfounded. This outcome might have been circumvented by being clear on their reason for the assessment, and talking through the possible outcomes of an assessment before commencing it - how would they feel if the results were different from or worse than they expected? Would they be willing to proceed knowing that the assessment might reveal hitherto unknown difficulties? Of course, if an assessment is required for return-to-work purposes, or capacity assessments, the results of our testing are often not welcomed by the patient, but the patient is often not the client in these third-party cases, which creates a totally different dynamic to the curious self-funded or public patient who essentially wants to hear that there is nothing wrong with their memory or cognition despite a history of something like alcoholism, ABI, or MS. Accepting referrals without a clear clinical indication puts the clinician at risk of having a dissatisfied patient who is angry with the clinician for their test results. Not fun for either party.

Secondly, some patients who get distressed have struggled with cognitive issues for some time, and find relief and validation through having an assessment. I recall an elderly woman with epilepsy who burst into tears after struggling with the WMS. She said she wished her husband had been there to see how hard she had found it, because he didn't believe her memory was as bad as she thought it was. For her, the experience of being tested and finding it difficult validated her everyday memory problems, and her distress was from years of being harangued and berated by her husband for not trying hard enough. She was grateful to finally have independent evidence of memory impairment, and to receive education and advice about the causes and strategies to deal with it.

Finally, there are people who have had brain conditions who may be distressed by our assessments because they make them more aware of the difficulties they have been having. Sometimes these assessments are necessary; other times they are not. I'd put myself in this category. I know my concentration isn't as sharp as before, and I don't want to know how much function I've lost. There is no clinical need for me to have an assessment at the moment. I am on extended sick leave from my job, and my oncologist writes reports for my income protection insurance every three months to say I'm not yet ready to return to work. He doesn't know when I will be. Having a neuropsych assessment would only make me anxious beforehand, and possibly depressed afterwards. I'm well aware of the difficulties I've been experiencing, and know they'd affect my ability to work effectively. I won't detail them here.


How long is the average neuropsychological assessment in Australia vs the US?

Our 2012 CCN survey of Australian neuropsychologists looked at average assessment times for WorkCover, medicolegal, and public patients. "Assessment" included contact and non-contact activities, including interview, assessment, scoring, informant interview, records review, interpretation, and report-writing. The data is limited by the survey methodology, which asked neuropsychologists to estimate the total assessment time for different categories of patients, and it is unknown how accurate and reliable those estimates were.

For public patients, assessments ranged from 1 to 30 hours in length, with "short" assessments taking an average of 5.36 hours (SD = 2.56, range 1-12); "medium" assessments taking, on average, 7.89 hours (SD = 3.07, range 1.5-16 hours); and "long" assessments taking an average of 11.77 hours (SD = 4.31, range 2-30). So there is a lot of variability in the range of assessment times in Australia, and differing opinions on what constitutes short and long assessments. It seemed that the categorisation of assessments into short, medium, and long varied according to the average assessment time of each responding clinician.

For self-funded patients seen in private practice, the mean assessment duration was 8.8 hours (SD = 3.51, range 3.5-24, median = 10, mode = 10, N = 83 respondents). For WorkCover patients, assessments took an average of 9.57 hours (SD = 3.67, range 0-24, median = 10, mode = 10, N = 82). For medicolegal clients, assessments took an average of 12.67 hours (SD = 4.8, range 4-24, median = 12, mode = 10, N = 63).

This data indicates that while some Australian neuropsychologists manage to get through all contact and non-contact activities in a very short period of time, the modal estimated total assessment duration for all private patients was 10 hours. At St Vincent's in Melbourne, where we routinely did an assessment of 4-5 hours using the WAIS-III and WMS-III, the BDI-II, the STAI, the Boston Naming Test, and verbal fluency (FAS and animal naming), the total time spent on one full assessment, with interview, feedback, report, and individualised recommendations included, was between 7 and 10 hours per patient (inclusion of the MMPI-2 increased time on interpretation, feedback, and reporting, but not contact hours, because the patient can be left alone to complete the inventory, with periodic supervision). And we were criticised by some for taking a lengthy approach to assessment. The survey data suggests that comprehensive testing may be more common than people believe, or that people who do a limited number of tests may be spending a greater amount of time on non-contact activities.

The Wechsler Intelligence and Memory scales, since the publication of the WMS-R, were meant to be given together, and this is done routinely in the US, where a full battery of neuropsychological tests can take from 8-10 hours, often administered by a trained psychometrician. I can't recall seeing any concern about this assessment time in my years lurking on North American neuropsychology lists. 

In the 20-odd years that I have been giving neuropsychological assessments, firstly on clinical placement, then for my PhD and postdoc research, and then in mental health, neuroscience, general medicine, and rehabilitation settings, I can only recall a handful of patients who did not tolerate a comprehensive assessment with the Wechsler memory and intelligence scales and a number of additional tests of verbal fluency, confrontational naming, mood, anxiety, and premorbid abilities. The ones that I remember best were the young lady with borderline personality disorder who nearly pushed me over when I asked her to accompany me to the office for an assessment, and the agitated young man who hated maths and stormed out of the office when I asked him the first arithmetic question of the WAIS-III. There were more patients where I terminated the session because they were too unwell, depressed, delirious, or even stuporous for a valid assessment to be obtained. I wrote a report in each case detailing why the assessment had not been completed, and offering to see them again once their clinical state had improved.

With all due respect to my colleagues who are concerned that lengthy assessments might subject our clients to distressing experiences, I think we need to compare what we do to other, more common health procedures.



A dental exam, scale and clean is uncomfortable and sometimes painful. A Pap smear or rectal exam is undignified and not the kind of thing we discuss in polite company.  Having bloods collected or cannulas inserted is painful, and can provoke more anxiety the more often you have them, especially if your veins are scarred from being accessed multiple times, or when they start collapsing. I've started to get anxious and panicky at the sound of doctors trying to access other patients' veins (the gentle repeated thwacking of the skin... Need to breathe, slowly, deeply).

Granted, all these procedures take less time than a neuropsychological assessment, but competent neuropsychological assessment is not invasive, painful, or undignified. People often enjoy being tested. Patients don't wince and close their eyes and imagine a happy place when we do them. We don't rehearse phrases like "just a little sting", "just a short sharp jab," "you can unclench your hand now", "wriggle your feet", "would you like a local anaesthetic?", or "try to unclench your muscles." With the exception of a few timed tests and tests of memory, it's always possible to stop midway through an individual subtest. If any of our tests caused significant distress or lasting damage, they would not be published. 

Interestingly, a classmate of mine was asked to review a patient who had been assessed the year before by another student on placement. On seeing the Austin Maze being pulled from the box, the patient burst into tears, and begged my friend not to give it to her. Review of the previous assessment showed that the patient had been made to do the maze dozens and dozens of times in pursuit of the learning criterion of two error-free trials. The finding by Bowden et al in the early 1990s of a highly significant correlation between the total errors on the first ten trials of the test and the number of trials to learning criterion obviated the need to push patients to the learning criterion on the Maze, but this development hadn't translated into clinical practice, where the ability to achieve error-free trials was seen as evidence of the ability to inhibit perseverative errors. It was lucky for patients that the development of the WMS-III and WMS-IV made the maze redundant as a test of visuospatial memory.



Spending time with a neuropsychologist would be like going to a day spa when compared with the often abrupt and starkly clinical efficiency of other medical procedures, where patients and families can feel swept up on an impersonal conveyor belt. 

I suspect that patients are willing to submit to invasive, undignified, or painful procedures because they trust the clinician will not subject them to unnecessary tests and, crucially, will not omit necessary ones either. Omitting necessary tests in the interests of time efficiency makes a mockery of the time the patient has spent waiting to see the clinician to obtain answers to their questions and solutions to their concerns and problems.

We seem to forget that the word patient has two meanings - one is a noun used synonymously with client. The other is an adjective which describes how a client is willing to behave on the assumption that their interests are being looked after, and that they will receive the answers they desire from the clinicians they go to see. After waiting to see a clinician, clients want the clinician to reward their patience with an assessment that gets to the bottom of the condition affecting their health, and they don't want clinicians to do an incomplete assessment which wastes the time spent waiting in hope of answers. They want us to get it right the first time, even if it takes a number of hours or sessions to do it. They trust us to get it right, and not waste our time on tests that are outdated or insensitive, and therefore more likely to get it wrong.


We've probably all had the experience of knowing that something is wrong with our car, and taking it to the mechanic who checks it over, changes the oil, and says everything is okay, only to drive it home and have it break down on the way, or within the next week. This experience seems to make many people furious. They take it to a different mechanic who runs more sophisticated or thorough diagnostic tests, and discovers that the clutch plates are worn out, the engine is dropping two cylinders, or that the battery is almost dead. Things that should have been discovered if a thorough assessment had been done in the first place. The motorist is understandably angry at the inconvenience, the wasted time and money, and vows to never return to the first mechanic again.


Clients are like car-owners - they are willing to invest time and effort into diagnostic testing when there is a problem, on the implicit understanding that the diagnostician will do everything necessary to get it right the first time. They understand that sometimes things can be missed in even a thorough assessment, and are forgiving of that. It sometimes takes a while for symptoms to develop to the point that the diagnosis becomes apparent. But motorists are less forgiving if a mechanic cuts corners and doesn't fully assess for the underlying causes of the presenting problem, perhaps because of internal or organisational pressures to constrain costs, perhaps because of a lackadaisical attitude, perhaps because of a well-intentioned concern about not charging for unnecessary tests. The motorist sees this as a waste of their time and money. They paid the mechanic to get it right the first time. We clinicians owe our clients the same. They have patiently waited to see us, they are willing to submit to our tests, trusting that we know and use the best tests available. Compared to the painful and invasive procedures that they have experienced as a result of their illness or injury, spending 5 or more hours sitting with a friendly psychologist who provides cups of tea, regular breaks, and is willing to listen to their experiences as a patient, is seen as a welcome change by patients and their families. 


Unlike motorists, patients are in a very vulnerable position, and often don't feel empowered to ask questions or give feedback to clinicians. If we don't ask for their feedback, how can we improve? One way to get honest and unbiased feedback is to provide a standard feedback form and reply-paid envelope for patients and family to complete anonymously after the assessment and feedback is over.

When I asked to see the dietician prior to my discharge from hospital last week, all I wanted was some advice on how to reintroduce food to an irritated stomach that had been fasted for nearly two weeks. (I wasn't sure what to eat; I was hungry, yet afraid to eat anything lest it cause more pain.) She sat down on my bed, and before I could tell her what I wanted, she said that she didn't have time to see me, but would arrange an outpatient appointment for me in a couple of weeks' time. Then she was gone. My nurse rang the dietetics department and told the receptionist what I had wanted to know; the receptionist asked the dieticians, and advised the nurse to look up low-fibre diets. If the dietician had simply let me speak, she could have told me the same thing in less than a minute. While I understand the pressure on their time with multiple referrals from around the hospital, it was clinically inefficient to come and tell me she couldn't see me without first clarifying the reason for the referral. She could have answered it in a minute, and saved wasting the receptionist's time putting me down for an outpatient appointment. It turns out that the written referral was completely wrong, and said I'd wanted advice on managing constipation!

Suggestions for improving our client-centred care.

So how can we improve what we do for our patient clients? Firstly, we can ask them what they most want to learn or gain from their interaction with us, so that we can assess their needs and desires, and sometimes quickly give them what they want without embarking on a full assessment.

How else can neuropsychologists improve what they do?
  • Respond to new referrals as soon as possible for both inpatients and outpatients. Get a brief idea of what the patient wants to learn, don't assume the referral question is correct, clarify it with the referring clinician, and do a brief screen for untreated mood or anxiety issues that may add unnecessary caveats to interpretation of your assessment. Unless there is an urgent clinical need, do not assess a clinically depressed or anxious person who has not had treatment. Delay it until treatment has had a chance to be effective. Explain the reason for this to the referrer.
  • Educate the patient about what to expect from the assessment by attaching a brochure about neuropsychological assessment to your appointment letter, or run through the brochure with an inpatient.
  • Be brave and evaluate what we do at present, and see if it is necessary for every patient. If a patient and their family have a clear and stable diagnosis with no questions about ability to return to work or study, or concerns about symptom progression,  perhaps they just need counselling or education on their symptoms and condition, rather than automatically progressing to a full assessment. 
  • If they have a condition that may change over time, they probably need a baseline assessment that will provide a good point of comparison for improvement or deterioration in future functioning. Such an assessment would use measures with high test-retest stability, which is best obtained by using the composite scores provided by the Wechsler Memory and Intelligence Scale indices. Individual subtests of the Wechsler scales rarely have the same stability over time in healthy people as the index scores do, meaning that it's harder to detect true change on subtest scores alone (95% confidence intervals for retest change can be +/- 2 points or more for individual subtests); a rough numerical sketch of this appears after this list. So if you're seeing someone with an acquired brain injury who may want to return to study or work, a presurgical patient for whom postsurgical review will be required, or someone with a progressive neurological disorder who may lose the capacity to function independently, it is going to be more helpful for them if you assess them with the full Wechsler scales so that you can determine the degree of change over time. You just can't measure change as well with subtest scores as with index scores. So assess them thoroughly, presuming they aren't so severely impaired or clinically unstable at baseline that formal assessment is impossible, and that their condition and prognosis aren't so poor that assessment won't improve management of their clinical condition. Don't assess them unless they are functioning relatively well and others are making premature decisions for them on the basis of how impaired they may be one day. What the insightful patient wants right now is important, irrespective of their diagnosis. Just because I have had two GBMs removed doesn't mean that I'm happy to ignore the breast lump identified on mammography and MRI earlier this year. Statistically, it's likely the GBMs will get me in the end, but I've survived for 9 months so far, the tumours are shrinking, and I'm planning to beat them. I don't want to be foiled by unoperated breast cancer just because other people decided I've been through too much already and shouldn't spend more time away from my family. One week in hospital for breast surgery is nothing compared to metastatic breast cancer. It's not a decision for others to make, no matter how compassionate they think they're being.
  • Unless you try to formally assess someone, you can't presume that they are too impaired to be assessed. I have seen patients who were severely impaired on baseline testing who improved over time, even on brief screens like the Addenbrooke's or MMSE. Even people with significant sensory, motor, or expressive problems, including blindness and deafness, can get subtest and index scores ranging from extremely low to superior. If we don't assess, we will never know. But we need to justify the reason and need for assessment in each case, not just blindly accept the referral question.
  • Don't jump to conclusions on the basis of the diagnosis, history, or other people's opinions. Allow patients to surprise you, to demonstrate the remarkable variability in people. Did you think it would be possible for someone who'd had two brain tumours to write like I do? We're all guilty of jumping to conclusions about patients' abilities based on their diagnoses. I give thanks every day that I am still able to read, think, and communicate, and come up with corny analogies.
  • Of course, if someone is severely ill, in considerable pain, confused, or still in the recovery phase of an acquired brain injury or acute illness, the test results will not be as valid or reliable as when they are clinically stable. It's okay to delay an assessment until the patient is well enough to do it to the best of their abilities.
  • Recognise when a patient pushes certain buttons in yourself that might compromise your objectivity in doing the assessment, and seek support or refer on to another clinician to see the patient. We're all human, and occasionally find certain conditions or stories too close to home for us to be able to help the best that we can.
  • We can stop being embarrassed and apologising for what we do. We have the best tests available to test cognition, memory, mood, and behaviour, and we are the experts in understanding the impact of brain disorders on the whole person. People come to us for answers and assistance; we need to acknowledge where we can and cannot help, and that we can do it well.
  • We need to regularly evaluate the tests that we use according to our ethics code and code of conduct, and to ensure that every test we use is measuring what it is supposed to be measuring. We need to be experts in the clinical applications and importance of reliability, validity, sensitivity, and specificity. Practically speaking, it's like checking that your grandmother's kitchen scales are as accurate as a modern digital scale, and keeping them for decoration purposes only if they are not. Or, if her scales are accurate but scaled in outmoded imperial measures, having ready access to a conversion chart from imperial to metric so that you don't miscalculate the amount of sugar you use in your macaroons. In baking, accurate measurements are vitally important, as they are in science and neuropsychology.
  • Give up on tests that you feel you can interpret accurately based solely on years of experience and your clinical intuition. This approach can result in widely divergent impressions both between and within clinicians. It's a little like rapidly measuring a pinch of salt with your hands versus standard measures, or guesstimating one cup of flour. Too much salt, or sugar, will ruin your macaroons. Too much flour will make your sponge cake dry. How can we bake to consistent results without consistent recipes and standardised measures? Our assessments require precise measures, just like baking sponges or macaroons. They're not casseroles and curries that can be seasoned to taste and adapted to available ingredients.
  • We should consign unreliable, invalid, old, and outdated tests to the bookcase or shredder rather than continue to use tests that provide results with a standard error of measurement big enough to drive a truck through. Anything with a reliability of less than 0.7 is generally considered unacceptable. From memory, this means that we should dispense with Trails A & B, the RAVLT, the L'hermitte board, colour-form sorting, and the Wisconsin Card Sorting Test. We have better tests for psychomotor speed, verbal and visual memory, and fluid intelligence. Surely it's worse to make a patient spend time completing inaccurate and uninterpretable tests than to make them spend a few hours on a battery of highly reliable and well-standardised ones that were designed to be used together?
  • Purchase and use tests like the BRIEF or BRIEF-A, or the FrSBe, to get patient and informant reports of a range of frontal-system behaviours. These inventories reveal whether there are problems with behaviours like impulsivity, disinhibition, inappropriateness, planning, and organisation, by comparing scores with age- and gender-matched norms. They can show you if there has been a change from premorbid levels; they are much more revealing than relying on the WCST and Trails to measure a wide range of frontal functions; they save time on the clinical interview; and they can help reveal lack of insight, over-reporting, protectiveness, or under-reporting in patients and informants.
  • Be honest in obtaining and reporting test results. If the patient comes to you in so much fatigue, depression, anxiety or pain that you think it will invalidate the test results, do not proceed with the assessment. Reschedule it for when they feel better, or see if patient-administered pain relief or a warm drink can alleviate their discomfort and start the testing a little later. If the testing is affected by the development of fatigue, reductions in concentration, or increased anxiety, discontinue the assessment so you can get the patient's best performance at another time, and note the details on the record forms and in the report.
  • If a comprehensive assessment is not obtainable, detail the reasons in the report, and any caveats that this may place on the interpretation.
  • Don't be embarrassed or apologetic if you're not sure what to make of the assessment results. Describe what you found, and list the possible interpretations. Human problems are highly variable and complicated, and it's not always possible to be sure of the answers. Better to acknowledge this than to put on a cloak of false confidence in your conclusions. Better to lay your decision-making cards on the table - "it could be one of these things, but I'm not sure. It seems more likely to be a, b, or c, and less likely to be z. On an outside chance, it could be xyz".
  • Don't assume primary responsibility for arriving at a diagnosis - your effort is just a part of a multidisciplinary assessment. As important as it is, it's just one cog in the wheel. By putting your differentials in the report, it allows others to consider your hypotheses and may help clarify their own. I once speculated that a woman with an isolated memory impairment might have paraneoplastic limbic encephalitis - she was found to have advanced breast cancer.
  • Remember your first duty of care is to the patient. Ask what they want to learn or achieve from seeing you, and try to give that to them if it is possible, or reformulate their questions and desires into a form that is possible for you to address.
  • Don't let bureaucrats dictate how long your assessment should take based on the "time is money" premise, which is offensive and demeaning to patients, and disrespectful of clinicians who are the experts in their field. Each patient will need a different amount of time to be assessed. Some will skim through everything in minimal time; others will take longer, due to a multitude of factors including personality, impulsivity, fatigue, concentration, and motivation. Each patient deserves to get the best assessment for their individual circumstances, even if it takes a little more time. We wouldn't accept half a brain scan because it was taking too long, would we? If the patient was allergic to the contrast medium, then we might not get an MRI with contrast, but the reason would be documented and the absence of the data taken into account when interpreting the non-contrast scans.
  • Remember to give and elicit feedback on the assessment process once it's over, and keep records of the feedback you receive, so that you can continually improve on the service that you provide.
  • Make time to meet with the primary carer individually, with the patient's consent, or arrange for one of your team to meet them simultaneously. Family, carers, and patients often find it hard to express their needs and concerns in front of each other, and often don't know what to ask for in terms of education and supports.
  • Refer on to a multidisciplinary allied health team if the patient isn't already linked to one, so that the patient can benefit from the range of professional services that are available.
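To make the point about index versus subtest stability concrete, here is a rough numerical sketch in Python. The reliability values are ones I've picked purely for illustration (they are not figures from any test manual); the formulas are the standard ones: SEM = SD x sqrt(1 - reliability), and a 95% reliable-change interval of +/- 1.96 x sqrt(2) x SEM.

```python
import math

def sem(sd, reliability):
    """Standard error of measurement: SEM = SD * sqrt(1 - r)."""
    return sd * math.sqrt(1 - reliability)

def reliable_change_interval(sd, reliability, z=1.96):
    """Approximate 95% interval for retest change expected from measurement
    error alone: +/- z * sqrt(2) * SEM."""
    return z * math.sqrt(2) * sem(sd, reliability)

# Illustrative reliabilities only (not taken from a manual):
# an index score (mean 100, SD 15) with high stability, versus a single
# subtest scaled score (mean 10, SD 3) with more modest stability.
index_interval = reliable_change_interval(sd=15, reliability=0.95)
subtest_interval = reliable_change_interval(sd=3, reliability=0.75)

print(f"Index score:   retest change within +/- {index_interval:.1f} points "
      "(SD-15 scale) could be measurement error alone")
print(f"Subtest score: retest change within +/- {subtest_interval:.1f} points "
      "(SD-3 scale) could be measurement error alone")
# With these assumed values, the subtest interval (about +/- 4 scaled-score
# points, well over one SD) is proportionally much wider than the index
# interval (about +/- 9 points, roughly 0.6 SD), which is why genuine change
# is easier to detect with index scores than with individual subtests.
```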
My personal model of best practice
In terms of testing, despite my preference for comprehensive assessments using the best tests available, I don't believe in over-assessing anyone, just in doing the best assessment possible for the individual. I'm not completely rigid in my approach, which has developed from extensive reading of neuropsychological assessment and measurement research from the past 30 years. What I have written above basically summarises my approach to neuropsychological assessment, developed over 24 years of reading scholarly articles, research, practice, case conferences, and supervision. It's an approach based on doing the best assessment possible, using the best tests, for each patient, and not taking short-cuts unless it is clinically necessary or justifiable. I don't see it as "testing until the cows come home," and my patients and students haven't seen it that way either, though I can understand that it is a different approach to that described in Kevin Walsh's books of the late 1980s. The student comments were reassuring for me, because I trained at a time when every supervisor had a different set of favourite adult tests, which wasn't necessarily made explicit (we were told to choose them ourselves, and that we'd learn "through experience").

On placement, in trying to assess what I thought was important (and also take a history) in less than two hours, I often found I'd omitted at least one test that my supervisor thought was vital, like digit span, digit symbol, arithmetic, vocabulary, similarities, comprehension, or information. On the one hand, I was embarrassed at leaving "vital" tests out; on the other, I was discouraged from wanting to do a thorough, standardised assessment rather than a brief screen based on a hypothesis-testing approach using the WMS, RAVLT, and selected WAIS-R subtests. The former was seen as a waste of time, even though the briefer approach affected my confidence in my interpretation of the test results. Then I went to the RCH, where my supervisors encouraged me to do comprehensive assessments, both for the benefit of the children and their families, and as a learning experience for me as a student. I learnt that the WISC-R and other intelligence scales available at the time gave similar results from slightly different perspectives, and that it was probably over-servicing to do two or three different cognitive batteries with the one child (like the WISC-R, K-ABC, and Stanford-Binet), no matter how sweet and compliant she was. Sometimes our tests don't give us definitive answers on subtle problems. And yes, "intelligence" tests are suitable for neuropsychologists to use, because they test a wide range of cognitive domains and allow us to see how individuals compare to the standardisation sample. To say that our patients need different tests because their brains are different, or that we should test people with certain conditions with tests standardised on non-normal populations, seems absurd to me. Having tests standardised on normal populations allows us to see if the patient differs significantly from others, and allows us to seek evidence to reject the null hypothesis that the patient is normal. This provides clinically and diagnostically useful information. I don't see how the effort involved in getting normative data for various clinical groups would be worth the time and expense. Our existing tests give us a wealth of data; we just have to know how to interpret it.

After my clinical placements, I decided that adults deserved the same degree of thorough assessment given to children, particularly with work, study, and independence to consider, and that it was more consistent to administer the tests of the Wechsler scales in the standardised order and procedure than to pick and choose based on clinical intuition, or to invalidate the norms by "testing the limits" or otherwise violating standardised procedures on an ad-hoc basis for each patient.


Embracing the well-normed, stable, and reliable WMS-R, WMS-III and WMS-IV indices saved me from wasting patients' time on poorly normed, less reliable and less stable tests like the WMS, RAVLT, or the norm-free L'hermitte board and colour-form sorting test. Better still, by using the WAIS-R and WMS-R in combination, there were tables to look up to see whether memory was in the range expected from intellectual functioning, rather than the guesses and clinical intuition available to users of the WMS. This has only improved with the co-normed third and fourth editions of the Wechsler scales, and the co-norming with the WTAR.
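For readers unfamiliar with how such comparisons work, here is a minimal sketch of the general logic behind a regression-based predicted-difference method. The IQ-memory correlation, scores, and function names below are illustrative assumptions for exposition only, not values or procedures from the actual WAIS-R/WMS-R tables.

```python
import math

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

MEAN, SD = 100, 15     # index-score metric
r_iq_mem = 0.60        # assumed IQ-memory correlation (placeholder, not a published value)

def predicted_memory(iq):
    """Memory index predicted from IQ, regressing towards the mean."""
    return MEAN + r_iq_mem * (iq - MEAN)

def discrepancy(iq, observed_memory):
    """Observed minus predicted memory, plus the SD of that discrepancy."""
    see = SD * math.sqrt(1 - r_iq_mem ** 2)   # standard error of estimate
    return observed_memory - predicted_memory(iq), see

# Example: an IQ of 120 predicts a memory index of 112, so an observed
# score of 95 sits 17 points below expectation.
diff, see = discrepancy(120, 95)
print(diff, round(see, 1))          # -17.0, 12.0
print(round(phi(diff / see), 3))    # ~0.08: few people with that IQ score this low or lower
```

The tables effectively package this kind of calculation (with empirically derived values) so the clinician can see whether a memory score falls below what would be expected from intellectual functioning, rather than guessing.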

Using this approach of testing for evidence to reject the null hypothesis (that the patient is normal), I found I could settle into a well-rehearsed process of assessment that is more time-efficient and standardised than pausing in an assessment wondering which test to give next. It allows the clinician to act like a well-oiled conduit for the tests on one level, while remaining simultaneously aware of the patient's mood, concentration, and other qualitative features of performance on the other. I couldn't justify sacrificing robust tests for flimsy ones, which is why I prefer composite scores to subtest scores; and to get composite scores, you need to do more testing.

The principle of aggregation shows that the reliability and stability of composite or index scores are greater than those of the individual subtests that contribute to the index. Using composite scores allows you to test whether any of the results suggest the patient is significantly different from normal, which might allow you to reject the null hypothesis and conclude that there is evidence of a score, or set of scores, that is unusual in the normal population. I liked to use a base rate of 5% as my criterion for clinical significance. Statistical significance seemed irrelevant when a difference between index or scaled scores can be statistically significant (that is, too large to be plausibly explained by measurement error alone) yet still occur in more than 10% of the standardisation sample.
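To make those two points concrete, here is a minimal sketch using the Spearman-Brown formula for the reliability of a composite of parallel subtests, and a normal-distribution base-rate calculation for a difference score. The reliabilities and the inter-index correlation are assumed, illustrative values, not figures from any test manual.

```python
import math

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def composite_reliability(r_subtest, k):
    """Spearman-Brown: reliability of a composite of k parallel subtests."""
    return k * r_subtest / (1 + (k - 1) * r_subtest)

SD = 15  # index-score metric: mean 100, SD 15

# Principle of aggregation: three subtests of reliability .80 combine
# into a noticeably more reliable index score.
print(composite_reliability(0.80, 1))               # 0.80
print(round(composite_reliability(0.80, 3), 2))     # ~0.92

# Reliable (statistically significant) difference between two indices,
# each with reliability .90: critical value at p < .05, two-tailed.
r1, r2 = 0.90, 0.90
se_diff = SD * math.sqrt(2 - r1 - r2)     # ~6.7 points
critical_diff = 1.96 * se_diff            # ~13 points
print(round(critical_diff, 1))

# Base rate: how often a difference at least that large occurs in the
# normative sample, assuming the two indices correlate .60.
r12 = 0.60
sd_diff = SD * math.sqrt(2 - 2 * r12)     # ~13.4 points
base_rate = 2 * (1 - phi(critical_diff / sd_diff))
print(round(base_rate, 2))                # ~0.33: about a third of healthy people
```

The last two numbers illustrate the point above: under these assumptions, a 13-point difference is reliable (unlikely to be measurement error alone) yet occurs in roughly a third of the healthy standardisation sample, which is why a base-rate criterion matters for clinical significance.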

Most of the 2000 patients seen from 1994 to 2009 by me, my students, and colleagues at St Vincent's, using a fairly consistent comprehensive approach, were appreciative of the time we spent with them in interview, assessment, and feedback (averaging 4.5 to 6 hours), and if any became fatigued or distressed, we gave them the opportunity to take a break, or to complete the assessment another time. Some accepted, but others preferred to complete the testing that day, as they'd often already waited months to see us. In fact, we probably found the assessments more tiring than the clients did, given the need to maintain concentration and adherence to standardised administration while recording observational data and attending to the client's needs.

I am concerned about arriving at a false-positive or false-negative diagnosis, and am not motivated by a desire to complete an assessment with the fewest tests possible in the shortest available time, nor by the misguided assumption that more testing is better: we have to recognise the overlap, and associated redundancy, of some of our measures.


My patients and their families have appreciated the time I've taken with them, and the students I've supervised have said that taking a routine and standardised approach to assessments with each patient allows them to get a better idea of the variability of test performances between and within patients, and to spend their time concentrating on the patient and the qualitative aspects of performance, rather than worrying about missing out on crucial tests.


Misperceptions in our field
An anonymous person commented in the feedback to the 2012 CCN conference that I was pushing an agenda of "testing until the cows come home." This was in response to my comments on a student case presentation, where I expressed concern about the apparently common practice of omitting core subtests of the WMS-IV and assuming that giving only Logical Memory or Verbal Paired Associates provides a reliable measure of verbal memory. I felt bad about getting sidetracked on that issue in that session, and apologised immediately afterwards to the presenting student, and to the two students who subsequently had less time to present their cases. I think we parted on good terms, and I would like to apologise to anyone present who was offended that I was distracted by my passion for high-quality data. I meant no offence.

I have no idea who called my approach "testing until the cows come home." I find it somewhat amusing, but rather offensive to our patients. On a concrete level of interpretation, I guess it means the writer thinks I test for an inordinately long time, longer than most people would, beyond the point of collecting useful information. A more abstract interpretation, probably not the one intended, is that I promote testing until all the necessary evidence is collected, in a reasonable amount of time, acknowledging that some people take longer than others. Since this anonymous person likened me to a dairy farmer, I'm proud to say that I'm willing to take the time to bring all the cows home by dusk, rather than rushing it and leaving old Daisy, or the calves, out in the river paddock on a stormy night because I'd rather be inside, dry and warm by the fire. It is the farmer's responsibility to be patient and to make sure each and every cow is home safely at night. It is the clinician's responsibility to fulfil their duty of care to their patients. We may all choose to do it in different ways, but we all need to sleep comfortably at night, knowing we are doing our best, practising ethically, and working within the accepted guidelines for our field.


In the early days, I may have been a little over-inclusive in applying the same approach to every patient, especially when they needed education and information rather than a diagnostic assessment. But I learnt that being clear on the objective of the assessment and the patient's needs allowed me to refine and tailor a consistent but individualised and responsive approach to each person. I feel more comfortable with that than trusting that a brief assessment will tell me everything I need to know to get a clear and reliable picture of the situation.

Having tried to conduct comprehensive assessments in the inpatient setting, I understand the multifactorial difficulties and the need for brief assessments in acute hospital settings, or in screening or triage assessments for cases of possible dementia or early head injury: if someone fails a brief screen like the MMSE or Addenbrooke's, there's clearly a problem, but depending on the case, it may still be necessary to do a more comprehensive evaluation, and I would never interpret a normal score on a brief battery as showing no evidence of impairment. Rather, I would say that there was no evidence of impairment on the brief and incomplete assessment conducted, and that further testing would be recommended if a more complete or detailed cognitive profile were desired.

We need to keep ourselves regularly informed of international practice standards by reading the international literature on neuropsychology, particularly from the USA, where involvement in medico-legal work has produced vigorous debate about standards for neuropsychological assessment. We need to remind ourselves of the tortuous four years of undergraduate study in the science and philosophy of psychology, particularly measurement theory, test construction, standardisation, and psychometrics. These are not irrelevant vestiges of the past; they are core to what we do. We are not here to be palm-readers (though it can be an amusing hobby out of work). Our excellent tests are what set us apart from all other health disciplines; we need to use them to the benefit of our patients and our profession. Of course there is a role for intuition in our interactions with patients and families, but it must be guided by professional knowledge and experience, including knowledge of the errors commonly made by clinicians and the steps needed to avoid them. Psychology and neuropsychology are professional and scientific disciplines with strong research and ethical foundations. If we want to avoid professional misconduct allegations and deregistration, we need to keep firmly within the bounds of ethical, evidence-based psychological practice. We can't make it up as we go along, unless we fancy being hauled over the coals for professional misconduct, or embarrassed under cross-examination as a witness in court. It's also about doing the right thing, the ethically defensible and professionally accountable thing, not about doing what is easy.


PS. I'm feeling almost normal at present. I'm not trying to be brave or inspiring or anything by sharing my health experiences, though I hope it helps to give people a different perspective on what we do. I just want to get healthy again, so that I can enjoy being with my family and friends, and participate in society in a meaningful way. I'm incredibly grateful to have survived for 8 months after having two grade IV gliomas removed (right parietal, left occipital). They are shrinking on each successive MRI and showing no sign of recurrence or pseudo-progression.

Going through all of this has been a transformative experience. I'm more aware of the important things in life, more exasperated by, and less interested in, the trivial, and more appreciative of the preciousness of health, happiness, and existence. I'm acutely aware of my own mortality (which possibly underlies my need to share my thoughts) and hope I still have much to share with people, and much time in which to do it.

I intend to keep improving until I'm fully recovered, and to live another 40 years (at least)  so I can see my children grow up and enjoy cuddling their children. 

As for neuropsychology, I don't know if I'll be able to return to clinical work. The cancer-related fatigue is taking a while to go away, and it will probably be months before I have the energy to achieve a consistent amount each day. At the moment, I crash in bed after each good and productive day. Writing, advocacy, and talking about my experiences seem like achievable and rewarding goals in the short term. Maybe I can help more people through writing and talking about what clinicians can learn from patients than I ever could as a clinician.

Kind regards and thanks to all of you, my colleagues, friends, supervisors, students, and vehement detractors, for shaping my views and perspectives over the years. I'm sorry for the times I may have neglected subtlety, delicacy, and tact in my excitement for talking about ideas, especially in my very passionate and exhilarated early years as a fledgling neuropsychologist. It's such a wonderful profession and field of study. We need to work together to make it stand strong against incursions from other professions that do brief cognitive assessments without the scientific, theoretical, and practical backgrounds to understand the limitations and applications of their approaches (aren't psychological tests limited to psychologists in this country? Hmm, something to consider in relation to other professionals using cognitive tests).

Time to shut up and get some sleep
Fiona