The reliability of card-based and tablet-based left/right judgment measurements
James Goranson, SPT, Tarkenton Kingsbury, SPT, Taylor Kuhn, SPT, and Sarah Morgan, SPT
17 Comments on “The reliability of card-based and tablet-based left/right judgment measurements”
Great poster! In your limitations section you listed that not having a practice trial limited your research. What was your reasoning for not including a practice trial for either LRJ measurement? Your results show that participants scored around 1-2% higher in left-hand accuracy during the majority of trials. Why do you think this is? Also, did you see any differences between genders or across different age ranges?
A practice trial was not included because we felt the task was relatively straightforward, and also because we were using a limited number of pictures (n=40) and didn’t want the participants to become familiar with the cards before the trials. As for why participants scored better with the left hand, it is hard to know, but each card accounts for 5% of each hand’s score, so a 1-2% difference is largely insignificant. One hypothesis is that all but one or two of our participants were right-handed and would model the image with their left hand, which may have helped them correctly select left-hand cards; still, the difference in percent correct is relatively small.
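To make that arithmetic concrete, here is a minimal sketch (the function name and scores are hypothetical, not from the study): with 40 cards split evenly between left- and right-hand images, each card is worth 1/20 = 5% of that hand's accuracy score, so a 1-2% difference is smaller than the weight of a single card.

```python
# Hypothetical sketch of the per-card accuracy weighting described above.
# 40 cards total, split evenly: 20 left-hand and 20 right-hand images.
CARDS_PER_HAND = 20

def hand_accuracy(correct: int) -> float:
    """Percent accuracy for one hand, given the number of cards judged correctly."""
    return 100.0 * correct / CARDS_PER_HAND

# One extra correct card moves a hand's score by a full 5 percentage points,
# so a 1-2% left/right difference is below the resolution of a single card.
print(hand_accuracy(19) - hand_accuracy(18))  # 5.0
```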
Both the card and tablet based instruments look like things that could be easily implemented in the clinic. Are these systems sent home with patients for them to work on outside of therapy? Also, do you have a hypothesis as to why individuals were more accurate with the CB versus the TB?
Natalie, the systems are largely designed to be taken home, much like a home exercise program, as chronic pain is a very difficult condition to overcome. When looking at accuracy for the two systems, it’s important to remember that we only had 40 cards total, so each card counted for 5% of the total accuracy for the left or right side. An initial hypothesis for why the card-based system was more accurate is that the app imposed a 5-second time limit, which some people hit, causing the card to be automatically marked incorrect. That could then lead to a cascade of incorrect choices as the person tried to select an option for the card that had already timed out.
Since the card-based measurement is more accurate than the tablet-based measurement, would you prefer to test your clients using the card-based measurement?
There are many variables to consider, such as whether the person has an electronic device they can use (the app runs on a phone, while the cards need to be available in the clinic) and what the patient prefers if both are an option. I personally believe the app has more benefits: patients can perform the exercise anywhere they can use their phone, and it gives immediate results, whereas the cards need to be set down and counted by hand. As for the accuracy difference, as stated above, it is not as big a factor as the research may lead you to believe, since each card accounts for 5% of each hand’s score.
Great poster! I was wondering what the reasoning was for completing only card-based trials at the 3-day follow-up rather than both versions during the follow-up session. Also, I noticed that in your clinical relevance you mentioned using caution if using both card-based and computer-based laterality training. What findings in your research led you to that conclusion?
Great question, Megan. In our review of the literature we found a published study that established the reliability of the app for hands; however, we were unable to find research establishing the reliability of the flashcard system. With the reliability of the app known, we wanted to determine whether the flashcards were reliable both when compared to the app and in a test-retest situation. This is why we only completed the 3-day follow-up with the flashcards. As for why we say to use caution when using both, it is due to the poor correlation we found between card and tablet accuracy. This poor correlation indicates that it is better to keep the testing format the same when taking repeated measures.
I think this research has some meaningful benefit to our patients and presents a unique treatment route. In your literature review did you find any studies reporting negative effects from laterality training? For example, too much exposure increasing their pain.
Good question Katie, I have not seen any reports of negative effects from laterality training in the studies I have looked at up to this point. I don’t know if James or Tark have seen anything or have any input they’d like to add!
Furthering Taylor’s reply, this is actually not as unique an approach as you may think within the CRPS patient population. In my current clinical rotation this has been used as a treatment and has produced some positive results.
This research presentation was very interesting to look through, especially after learning about laterality in one of our classes. I think what you have found through this research has good implications for clinical use. Regarding the card-based and tablet-based measurements, was there a specific reason that you chose the Recognise brand? And are there differences between brands that might affect their use?
Good question, Beth! The Recognise brand was developed by the Neuro Orthopaedic Institute, an organization out of Australia that specializes in chronic pain and graded motor imagery, and their products are primarily used for laterality training. I did do some looking, though, and actually didn’t find many other options available. There is a phone app called Orientate, which is free and offers upgraded image packages for purchase. I think it would be a decent free option, but it does not have as many of the nice features and variations that the Recognise brand offers. It also doesn’t calculate your time per card or accuracy for R vs L; it just measures overall time and overall accuracy. So it might be an okay option for someone to train with, but I wouldn’t recommend it for testing in the clinic. I hope that answers your question!
I found this study super interesting and clinically relevant. Was there a standardized protocol of instructions for each participant? If not, do you think that this may have skewed your results due to participants’ differing understandings of the procedure?
Great work on the poster. Being that we just went over this material recently, it is nice to see LRJ being applied in your research. In class, I personally preferred the tablet being that it was effective in recording reaction time and accuracy. How do you believe the results for reaction time and accuracy would change if the subjects were all patients with persistent pain?
Will, the idea behind the research is that the homunculus in chronic pain patients is smudged, so the person would take longer or be less accurate when responding. We would expect the patient to be slower and less accurate initially, but as the patient continues to practice and integrates movement and pain neuroscience education along with LRJ and GMI, the results would better match those of controls.
Nicole, there was no written script for what to say during the standard protocol, which may have skewed the results for how people performed the testing. However, we kept the words we said consistent, provided demonstrations, and answered questions as needed. We did not enforce a specific protocol for participant instruction in order to better simulate a clinical setting.