SomaSimple Discussion Lists  


CHOICES: Perspectives on the Future of PT In this forum you will read in-depth interviews with various figures who are making an impact on the profession. NOTE: The forum is fully moderated. All posts need moderator approval before becoming visible.

21-10-2006, 11:06 AM   #1
Luke Rickards (Arbiter)

SomaSimple interview with Nicholas Lucas

SomaSimple is very pleased to have Nicholas Lucas as our second guest in our new Choices program.

Nicholas is a registered osteopath in Sydney, Australia. He graduated with a Bachelor of Science and a Master of Health Science (Osteopathy) from Victoria University (Melbourne), followed by a Graduate Diploma of Clinical Epidemiology and a Master of Pain Medicine from the University of Newcastle. He is currently working on a PhD at the University of Sydney, examining the evidence for the validity and reliability of clinical diagnostic examinations in physical medicine. Nicholas has also undertaken postgraduate study in the McKenzie system of Mechanical Diagnosis and Therapy, and has taken courses with Shirley Sahrmann and David Butler.

Nicholas is a highly respected lecturer at the University of Western Sydney, where he teaches pain neurobiology, pain medicine, and clinical diagnostic examination, and co-ordinates and supervises research projects in the Master of Osteopathy degree. He is co-editor of The International Journal of Osteopathic Medicine (IJOM) and has authored several peer-reviewed papers.

In addition to his academic activities, Nicholas is a partner at Sydney Osteopathic Medicine, a private practice providing management of neuromusculoskeletal disorders through patient education, therapeutic exercise and osteopathic manipulative treatment.

21-10-2006, 11:17 AM   #2
Luke Rickards (Arbiter)

SomaSimple:
Thank you Nicholas Lucas for joining us in this program on SomaSimple.

Even as a student, and often in direct defiance of your instructors, you were outspoken on the importance of the application of scientific evidence to the practice of manual therapy. In a letter to the editor of the Journal of Manipulative and Physiological Therapeutics in 2000 you wrote, “In my opinion, the emphasis in education should begin to shift from what is (overwhelmingly) inherited authoritative wisdom to the application of the evidence, which is increasing and readily available.”

How did your interest in the significance of evidence-based practice develop within the contrasting culture that you referred to? Why do you feel it is essential to apply the evidence to the practice of manual therapy for neuromusculoskeletal pain?


Nicholas:
Firstly, thank you for the opportunity to respond to your questions on a topic about which I am passionate. The real answer to this question is convoluted, yet can be summed up by saying we are who we are because of how we came to be. The brief version is that I am one of those osteopaths who came to osteopathy after being ‘cured’ by an osteopath, when other approaches appeared to have failed. I was therefore biased toward a belief in osteopathy before I even began to learn about it. Prior to gaining entry into an osteopathy program, I studied biomedical science and was privileged to be instructed by active researchers in a dynamic biomedical science faculty. Anyway, advertisements for the osteopathy programs promised a profession based on scientific principles – a blend of science, art and philosophy – and so it was with enthusiasm that I enrolled in the program I was accepted into.

I soon discovered that the program was strong on a philosophy of health care (the ‘art’ part I still can’t quite work out), and the basic and clinical sciences were strong, but there seemed to be some discrepancies between what was taught in the ‘science’ units and what was taught in the ‘osteopathy’ units. Interestingly, most people were not fazed by this, including the osteopathy lecturers. For a number of reasons, I became frustrated, and I applied for and was accepted into a different postgraduate osteopathy program that included a research component; it was there that my questioning was appreciated and supported by key staff. I felt comfortable questioning the basic premises and ideas in osteopathy and manual therapy, although, as you allude to, this was still met with disapproval by some instructors. Nevertheless, I was prepared to keep on questioning, even though I was a novice at it.

I then attended a conference on an evidence-based approach to low back pain, and this is where I first came across the language of evidence-based medicine. There were a number of key speakers at the conference who rocked my world. They challenged the very core of almost everything I believed about osteopathy at that stage, and physical medicine in general. If you’re a U2 fan, you might recall that in their video clip for ‘Desire’, a message flashes up which says, “Everything you know is wrong”, and that’s what it felt like. Cognitive dissonance can be a very uncomfortable feeling. However, I couldn’t ignore what was presented. This is when my interest in research and evidence was given direction and my learning began to accelerate. Again, this was supported, rather than stifled, by senior staff at my institution. I was also very fortunate to have a group of peers, and one friend in particular, who were also set on fire. It was an exciting time for me. We could not get enough journal articles. We’d go to the library every day in anticipation of the arrival of a new issue of one of the relevant journals. We would sit up the back of lectures reading research articles that had just come out. I think we became an irritation because we’d always be putting our hands up in tutorials saying “but the latest article by such and such says…” – and there’s nothing like the truth to ruin a good idea.

Why is it essential to apply the evidence? Because evidence is what can enable you to make an accurate diagnosis. Because evidence can relieve you of making those diagnoses which are invalid. Because the evidence can simplify your interventions and help maximize positive patient outcomes. Because evidence can help you provide a prognosis for your patient. Because evidence can guide you to appropriate referrals. Because evidence can enable you to rescue someone from unethical and corrupt health care practices that are potentially damaging to the patient. Because we are expected to. Because we claim to be health care professionals informed by scientific principles.

However, it’s more than evidence – it’s the attitude and critical thinking behind the desire to apply the evidence that is important. When I first began to read the research literature, or textbooks that were based on literature, I had an uncritical allegiance to osteopathy and manual therapy. I believed it worked. I believed the concepts. They sounded true, logical, reasonable, insightful, and useful. However, as I continued to learn, I was challenged to change my allegiance, because I kept coming across research that challenged what I believed; that challenged some of those concepts. I had come to a crossroads. I either had to ignore the evidence and continue to accept the validity of the concepts on ‘faith’, or I had to accept the evidence and let go of some of those concepts. I chose the evidence. However, rather than blindly accepting the evidence, I chose to enroll in a graduate diploma of clinical epidemiology in order to advance my knowledge and skills in critical appraisal and evidence-based medicine, and to specifically learn to identify which evidence was valid, and which evidence wasn’t worth reading – and there’s lots of that about.

21-10-2006, 11:20 AM   #3
Luke Rickards (Arbiter)

SomaSimple:
“…finding the primary dysfunction with any accuracy relies upon the existence of a method for detection that has proven reliability and validity. […] Using the logic of clinical reasoning, a treatment should be recommended on the basis of a valid diagnosis, and the predictive validity of a diagnosis provides a rationale for treatment because it addresses the pathophysiology described by the diagnosis.” - (Lucas & Moran, IJOM editorial: June 2005)

You have a particular interest in the concepts of validity and reliability. Could you expand on the various aspects of these concepts and explain their relevance to manual therapy practices?


Nicholas:
I continue to be surprised by the widespread misunderstanding people have of evidence-based medicine and the distrust with which they view it. Personally, I found it liberating. I coined the phrase ‘evidence-only medicine’ to describe the most common misunderstanding I came across. What many people hear when evidence-based medicine is mentioned is ‘evidence-only medicine’. They think evidence based medicine means only doing those things for which there is level A evidence. Clearly, the word ‘based’ means that evidence plays a role, but it is not the final word.

Reliability and validity are at the very center of evidence-based medicine as they relate to diagnosis. The other aspect of evidence-based medicine is efficacy – do treatments work? However, back to reliability and validity; let me use an example most people can relate to. None of us like having to pay for our cars to be repaired. We take it to the mechanic with fear and trepidation about the cost. Many of us have no idea about what’s going on under the bonnet (hood). Imagine if you left your car at the garage and went to work. Later that night you return to find the bill is $5,000. How happy are you to pay that bill? What if you discover that in order to find out what was wrong with the car, the mechanic used a test that was unreliable; that if you’d taken the car to a different mechanic, and he or she had used the same test, they’d more than likely have come up with a different reason for your car problems? Let’s face it, if you knew this beforehand, you wouldn’t take your car to that mechanic. The simple reason is that you don’t want to risk spending $5,000 on fixing up a problem that was identified with an unreliable test.

Staying with the car analogy, what if the test was reliable? More than two mechanics could agree, and when they used the test it kept giving the same answer. Great – now you’re feeling a little more confident. Let’s just say the test is computer based, and the result of the test is that the car is low on oil and that this damaged certain parts of the engine, which costs $5,000 to repair. Bob and Jane (the mechanics) both concur that the computer reports that the car is low on oil and that the engine is damaged. You ask for a little more information. How does the computer ‘know’ that the car is low on oil and that the engine is damaged? Bob and Jane aren’t quite sure. They say, “it’s what it says in the instruction manual and it’s what the computer says”. You’re not satisfied and are now becoming suspicious. You think to yourself, “I filled up with oil the other day”. You have a look at the instruction manual and find that the computer doesn’t actually sample the oil levels, nor does it test the engine for damage. Are you happy to pay the $5,000 now? This is one aspect of validity: does the test actually measure what it is supposed to measure?
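
To put a number on this kind of agreement between examiners, a common reliability statistic is Cohen’s kappa, which measures agreement beyond chance. Below is a minimal Python sketch; the two raters and their 20 ratings are invented for illustration only.

Code:
# Inter-examiner reliability via Cohen's kappa (hypothetical data).
from collections import Counter

rater_a = ["pos", "pos", "neg", "neg", "pos", "neg", "neg", "pos", "neg", "neg",
           "pos", "neg", "pos", "neg", "neg", "pos", "neg", "neg", "pos", "neg"]
rater_b = ["pos", "neg", "neg", "neg", "pos", "neg", "pos", "pos", "neg", "neg",
           "pos", "neg", "neg", "neg", "neg", "pos", "neg", "neg", "pos", "neg"]

def cohens_kappa(a, b):
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n             # raw agreement
    ca, cb = Counter(a), Counter(b)
    expected = sum((ca[l] / n) * (cb[l] / n) for l in set(a) | set(b))  # chance
    return (observed - expected) / (1 - expected)

print(f"kappa = {cohens_kappa(rater_a, rater_b):.2f}")   # ~0.68 for these data

A kappa near 0 means the examiners agree no more often than chance would predict; values above roughly 0.6 are conventionally read as substantial agreement.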

Given that I don’t think many health professionals would be comfortable paying $5,000 to repair their car on the basis of a test that lacks either reliability or validity, I don’t think those same people should view the diagnoses they give their human patients any differently. In many cases, we simply don’t know the reliability or validity of our tests – in which case we can only use them cautiously. But for those tests that have been shown to be unreliable or invalid, it is our responsibility to acknowledge their lack of utility. I have an anonymous quote posted on the reception desk in my practice. It reads, “when a man who is honestly mistaken hears the truth, he either ceases to be mistaken or he ceases to be honest”. I like it because it keeps me honest.
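
The validity side can be quantified in the same way once a reference standard exists. A minimal sketch, with an invented 2x2 table for 100 patients, computing the usual diagnostic accuracy statistics:

Code:
# Criterion validity of a clinical test against a reference standard
# (hypothetical counts, for illustration only).
tp, fp, fn, tn = 30, 10, 5, 55

sensitivity = tp / (tp + fn)              # true cases the test detects
specificity = tn / (tn + fp)              # non-cases the test clears
ppv = tp / (tp + fp)                      # positive predictive value
lr_pos = sensitivity / (1 - specificity)  # positive likelihood ratio

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} "
      f"PPV={ppv:.2f} LR+={lr_pos:.2f}")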

21-10-2006, 11:39 AM   #4
Luke Rickards (Arbiter)

SomaSimple:
In a recent post on the NOI group forums, you wrote –

“I have three questions (that) apply to all manual therapies.
1. Is the treatment group different to the control group?
2. What is the 95% confidence interval?
3. And most importantly, what is the effect size?
Even great theories based on valid concepts are ho hum in the absence of this information.”

What is the significance of these three questions when determining the evidence for a manual therapy approach?

These three questions refer specifically to outcomes research. How do you see the relative roles of outcomes research and other types of research, such as basic science research, reliability studies, etc?


Nicholas:

The significance of these three questions relates to the issue of attribution: to what do we attribute the outcomes we observe in practice? Can we attribute the outcome specifically to the treatment – or not? Firstly, there will usually be a difference between two groups that you compare. It is the magnitude of this difference, and the likelihood that it arose by chance, that are important (questions 3, and 2 respectively). Let’s highlight the significance of attribution by hitting most people where it hurts – their wallet. If you found out that therapy x was very effective at relieving pain, but no more effective than someone pretending to do therapy x – would you pay $5,000.00 to learn therapy x? If therapy x took 2 years to learn, but learning to pretend to do therapy x took 2 weekends, would you give up 2 years of your life, and income, to learn therapy x? Remember, they both have the same outcome. My answer is no, I would not pay to learn therapy x with either my money or my time. Further, how much better does therapy x have to be compared to the pretend version (effect size) before you’d pay for it and take 2 years to learn it? Does a 5% better outcome warrant your ‘hard earned’ cash and your ‘never to be repeated’ time? What would be your threshold? Would a 50% better outcome do it for you?
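
The three questions map directly onto standard statistics. Here is a minimal Python sketch with invented pain scores for a therapy-x group and a sham group; the means, spreads and sample sizes are assumptions chosen purely for illustration:

Code:
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
therapy_x = rng.normal(loc=35, scale=15, size=40)  # pain scores after therapy x
sham      = rng.normal(loc=45, scale=15, size=40)  # pain scores after pretend x

# 1. Is the treatment group different to the control group?
t, p = stats.ttest_ind(therapy_x, sham)

# 2. What is the 95% confidence interval for the difference in means?
diff = therapy_x.mean() - sham.mean()
se = np.sqrt(therapy_x.var(ddof=1) / len(therapy_x) + sham.var(ddof=1) / len(sham))
ci = (diff - 1.96 * se, diff + 1.96 * se)          # normal approximation

# 3. And most importantly, what is the effect size (Cohen's d)?
pooled_sd = np.sqrt((therapy_x.var(ddof=1) + sham.var(ddof=1)) / 2)
d = diff / pooled_sd

print(f"p={p:.3f}  diff={diff:.1f}  95% CI=({ci[0]:.1f}, {ci[1]:.1f})  d={d:.2f}")

If the confidence interval for the difference hugs zero, or d is trivially small, the wallet test above says the two years and $5,000 are not worth it, however attractive the theory.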

If there is no evidence for a particular therapy in a particular condition, then we are only able to judge a therapy on anecdotal reports and concept validity; that is, it sounds like it could work, and some people claim that it works. However, I have now been around long enough to hear multiple theories for ‘why’ things should work, but no evidence that they do work. I have also been around long enough to see numerous good ideas become invalidated ideas, and numerous therapies become ineffective therapies after being subjected to experimental investigation. My excitement for new ideas has waned in the absence of evidence that they actually work. A good idea is not enough, even though a great therapy starts as a good idea.

Basic science research can be fascinating. Basic science informs the validity of the conceptual basis of most therapies. However, basic science is not enough. A detailed and logical concept of diagnosis or treatment based solely on basic science is not proof of the reliability, validity or efficacy of that test or therapy, respectively. It comes back to the fact that a good idea is not enough. For example, a detailed knowledge of the anatomy of the cranial sutures, the ventricles and the physiology of cerebrospinal fluid production does not in any way validate osteopathy in the cranial field.

In relation to studies of diagnostic accuracy or reliability, this research has lagged behind outcome studies. However, members of the Cochrane Collaboration have recognized this and there is now a Diagnostic Test Accuracy Reviews Working Group, which includes those members who were involved in developing the STARD Initiative and QUADAS tools for reporting and critiquing studies of diagnostic accuracy.

21-10-2006, 12:02 PM   #5
Luke Rickards (Arbiter)

SomaSimple:
Reading and interpreting research effectively is not as straightforward as many clinicians suppose. What should clinicians look for when examining research to determine if it is useful in their practice, and what common mistakes should they avoid?


Nicholas:
There are a number of ways to read research. One is to form a specific clinical question and search specifically for literature on this topic. Another is to browse the literature – which can lead you on all manner of exciting digressions and journeys. In the old days, you had to browse individual journals in the library. Now, however, you can subscribe to a publisher’s journal content ‘email alert’ function. You select the journals you are interested in, and you are emailed the contents of each journal as it is published. Often you can click and view the abstract directly, and full text is often available if a subscription has been paid. On ScienceDirect you can also subscribe to specific areas of interest (e.g. pain medicine, or biomechanics), and each week you are sent a list of every article published in that area by Elsevier across their entire portfolio of journals. BioMed Central provides a similar service, although, unlike ScienceDirect, BioMed Central provides free access to the majority of their online journals.
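
The first approach, searching on a specific clinical question, can also be scripted. A hedged sketch using NCBI’s public E-utilities interface to PubMed; the search term is invented for illustration, and the ‘requests’ package is assumed to be installed:

Code:
import requests

term = "spinal manipulation AND low back pain AND randomized controlled trial"
resp = requests.get(
    "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
    params={"db": "pubmed", "term": term, "retmode": "json", "retmax": 5},
    timeout=30,
)
result = resp.json()["esearchresult"]
print(f"{result['count']} matching articles; first PMIDs: {result['idlist']}")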

The other main aspect of your question relates to critical appraisal. There are many good resources available for people who want to improve their skills in this area. The Journal of the American Medical Association ran a whole series of "Users' Guides to the Medical Literature" articles, covering such areas as diagnosis, treatment and prognosis. These are freely available from the JAMA website. The BMJ has also published numerous articles and books on how to read papers, and Trisha Greenhalgh’s book “How to Read a Paper” is a great place to start (Greenhalgh T. How to Read a Paper: The Basics of Evidence-Based Medicine. London: BMJ Publishing Group, 1997). However, even having read these resources myself, there’s nothing quite like taking a formal course of study in this area, and online education makes this option more viable for busy practitioners.

I really can’t begin to answer your question about what clinicians should look out for when reading research because it’s different for each type of research design, and it is pointless providing a superficial answer that really doesn’t offer anything useful. For a simple approach, I suggest readers go to the CONSORT website where they can look at the CONSORT statement for what should be included in randomized controlled trials, and under the Further Initiatives link, they can also look at the STARD statement for what should be included in studies of diagnostic accuracy. There are additional statements for other research designs.

21-10-2006, 12:05 PM   #6
Luke Rickards (Arbiter)

SomaSimple:
Most clinicians feel that research is a world away from their clinical practices. Should research be left to the academics and their postgraduate students? What recommendations do you make for clinicians interested in contributing to the literature?


Nicholas:
The evidence that most people would like in support of what they do in practice will not come from a few studies. It must come from a cohesive body of evidence of varying types that consistently supports a given approach. An important part of this cohesive body of evidence needs to, and can, come from clinicians. Let me re-word your question. Should research be left to experienced researchers or inexperienced researchers? The answer is that it should be left to experienced researchers, who should also help inexperienced researchers become experienced. So, I don’t recommend that a clinician rush off to do research without first consulting with someone experienced. However, before embarking on this path, I would encourage people to examine their motives. If you believe that a certain phenomenon exists, then you must be prepared to find out that it doesn’t exist, and report that it doesn’t exist. I have witnessed a number of people claim to be interested in research for their therapy, yet when provided with an opportunity, decline to become involved in case the research doesn’t support the therapy.

For those who are keen and who don’t have specific skills in research design and analysis, I would highly recommend you enroll in a research methodology course. You don’t have to complete an entire program of study (like some suckers for punishment ;-D). There are courses of study you can complete online as a distance student, which means that if you have access to the internet, you are only a click (or so) away from a critical reasoning course, or an introduction to evidence-based medicine.

One of the simplest types of research clinicians can become involved in is the so-called ‘single system research design’. Essentially, these are prospective case studies that are designed to control certain extraneous variables and provide multiple measures during the study in order to examine the change in outcome measures. A typical single system design will include a baseline period during which no treatment is provided, with numerous outcomes measured over a predefined time period (e.g. three weeks). This phase would then be followed by a treatment phase, again, during which numerous outcomes are measured. Once the treatment phase is completed, a further phase of either no treatment or home exercise is undertaken, again during which the same outcome measures are taken. The point is to qualitatively determine whether the initiation of the treatment phase coincides with an improvement over the baseline measures obtained during the initial no-treatment phase, and then to prospectively follow this through the treatment and home exercise phases.
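
In code, the logic of such a design is just repeated measures grouped by phase. A minimal sketch with invented weekly pain ratings (0-10) across the three phases described above:

Code:
# Single system (A-B-A') design: baseline, treatment, follow-up phases.
baseline  = [7.5, 7.0, 7.5]        # phase A: three weeks, no treatment
treatment = [6.5, 5.0, 4.0, 3.5]   # phase B: four weeks of treatment
follow_up = [3.5, 3.0, 3.5]        # phase A': home exercise only

def mean(xs):
    return sum(xs) / len(xs)

for name, phase in [("baseline", baseline), ("treatment", treatment),
                    ("follow-up", follow_up)]:
    print(f"{name:10s} mean={mean(phase):.1f}  change={phase[-1] - phase[0]:+.1f}")

# A stable baseline followed by improvement that begins with the treatment
# phase, and holds at follow-up, is the qualitative pattern the design looks for.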

A lot has been written about the role that active research should play in clinical practice – and there are valid arguments both for and against.

I think that at the very least, clinicians should become active consumers of research. They need to learn how to search for literature and confidently appraise the quality of it.

23-10-2006, 09:22 AM   #7
bernard (Admin, Moderator...)

Hi All,

Here is the PDF file of the interview.

Attached: Nicholas_Lucas.pdf (119.3 KB)

04-04-2012, 04:03 PM   #8
Diane (Human Primate Social Groomer and Neuroelastician)

Here is a very tidy deconstruction by Nic Lucas on the pharmacologic science around stress and anxiety.

Slideshow, "From Anxious to Happy", about 43 minutes.