research and the quality of evidence

puunui

Senior Master
I always appreciate posters who make a clear distinction between opinion, personal experience, and factual information. As a researcher in other fields, I have learned the value of assessing the quality of evidence. From highest to lowest, evidence can be categorized by quality:

1) meta-analyses (a report on a group of randomized controlled trial studies that both summarizes overall findings and which critiques the quality of the studies analyzed in the meta-analysis),
2) individual randomized controlled trial studies published in peer reviewed journals,
3) non-randomized but otherwise well-designed studies published in peer reviewed journals,
4) well designed non-experimental studies published in peer reviewed journals,
5) opinions from recognized expert institutions and individuals that are based on clinical evidence, descriptive studies, or committee reports, and
6) the opinions of colleagues and peers.

Most of what is posted in the discussion forums of Martial Talk falls into category 6, the lowest quality of evidence. That doesn't mean that sharing opinions with each other is a worthless endeavor. On the contrary, I learn a lot from reading what people have to say here, but what I learn is often more about the varied ways that people experience being a martial artist than about the art itself. Of course, not all issues in Taekwondo or other martial arts have been researched thoroughly or even at all. Hence, some issues simply do not have high quality evidence available from which we can learn. But, when it is possible here on Martial Talk, I particularly value information that is presented with references that allow me to conduct my own research into the issue. And, I particularly value information that is as close to the source as possible given the state of evidence in this field.

What would be examples of numbers 1 through 4? I'm trying to read those but they don't make much sense to me.
 
When it comes to things like martial arts, I have grown to accept that most of the stuff posted on boards such as these is "personal opinion". There are 'facts', but they are rarely debated because they are facts. If all someone wanted was fact, there would be no need to come here; it would be far easier to simply google the question and get the answer. In martial arts, though, everything is not always that clear cut, everything isn't always black or white, and there are shades of grey. That's what makes places like these so great: you can get perspectives from people from different countries, with different instructors, with different reasons for doing MA in the first place, perspectives from beginners, perspectives from advanced people, etc. Most of what is discussed here falls into that grey area, and as long as you keep in mind that people are posting their "opinions" and not necessarily fact, it's a great place to get information. It's only when someone pushes opinion as fact that it gets out of hand.
 
About thirty years ago I saw a piece involving biomechanical analysis. I think it was done at the Olympic training center in Colorado. The subjects were an Olympic discus thrower and Frank Shorter, a runner. This is where they put little markers on the limbs and torso, film and digitize the motion, and then use a computer program to figure out ways to make the motion more efficient. It helped the discus thrower. (We now see this on some TV shows, with the difference being that while they digitize the motion, they don't seem to provide suggestions for improvement.)

They noted that Frank Shorter's motions were uneven and were going to have him even them out, but detailed measurements found he had a difference in leg length and the uneven motion was compensating for the difference. At that time I wrote them a letter about analyzing martial arts techniques. The response was that, if funded, it could be done. Some time later a gifted USTF guy was asked to participate in a study with two other MA guys vis-a-vis punching speed. To their surprise, his punch landed in the shortest elapsed time.

Now, here's the problem. If one guy is faster or stronger, how do you allow for individual differences? Is his method better, or is he just better? (We would also need to define "better". Would it be: most power on impact? Shortest elapsed time? Fastest speed of the attacking surface on impact? Least amount of time from perception of motion to impact?)

Perhaps computer models would provide an answer.

Now, you might suggest that everyone try the other person's method. I submit that if someone does something a certain way thousands of times over 10 years he would not be as proficient with a new method.
 
At that time I wrote them a letter about analyzing martial arts techniques. The response was that, if funded, it could be done. Some time later a gifted USTF guy was asked to participate in a study with two other MA guys vis-a-vis punching speed. To their surprise, his punch landed in the shortest elapsed time.

Master Weiss, why were the other people surprised at the result?

Also, was the USTF gentleman GM Winegar, by any chance?

Pax,

Chris
 
Cynthia,
Not to take anything away from your hierarchy, but

Not all of the things that are measurable matter.
Not all of the things that matter are measurable.

I think a core value of the dedicated martial artist (at the skill level) is testing and refining for oneself.
At the level of ideas, I think we test more by how something new matches our experience and values.
I read studies with the same filter I use reading posts here: Is there a useful idea I can take away?
 
Master Weiss, why were the other people surprised at the result?

Also, was the USTF gentleman GM Winegar, by any chance?

Pax,

Chris

The surprise was likely based on pride in their respective arts, as well as a feeling that the slight retraction before extension would result in a longer elapsed time.

Your guess is correct.
 
Cynthia,
Not to take anything away from your hierarchy, but

Not all of the things that are measurable matter.
Not all of the things that matter are measurable.

I think a core value of the dedicated martial artist (at the skill level) is testing and refining for oneself.
At the level of ideas, I think we test more by how something new matches our experience and values.
I read studies with the same filter I use reading posts here: Is there a useful idea I can take away?

The hierarchy I described (and variations of it) is used in many fields where empirical evidence is essential for development, diagnosis, treatment, policy making, and so on.

I completely agree with your comments about not always being able to measure what matters and that not everything that we can measure matters. But, I'll have to write more later. Right now, I'm using a touch screen device which is too slow and liable to typos for an adequate reply to your comment or to Glenn's question.

Cynthia
 
1) Meta-analysis: combining several studies of the same thing into one large analysis to check whether the data holds up. For example, several places all doing the same study and pooling all of the individual data.
2) Individual randomized controlled trial, etc.: a study with a randomly selected test population, collecting data on something, that is then submitted to a professional publication so that others in the field can look at it and see whether it was "done right" (e.g., taking people from all over the country).
3) Non-randomized, etc.: same as #2, but your test population was not random. It could be a select test population, or a population limited by geography (e.g., taking people only within a certain radius).
4) Well-designed non-experimental: you have an idea and you test it out for yourself, but without standards or a test population that are as specific.
5) Opinion of experts: the person is an expert in the topic and uses that expertise to form an informed opinion. Kind of like an expert witness in court: they can give an opinion and it is given weight due to their training and experience.

I realize that I have REALLY oversimplified the categories, but was trying to make them quick and easy to understand. Each one represents a series of checks and balances to ensure that the data collected is as "pure" as can be, so that the conclusions drawn about the experiment's hypothesis are more trustworthy.
 
What would be examples of numbers 1 through 4? I'm trying to read those but they don't make much sense to me.

Good question! So, here are the four highest quality categories of evidence that I listed: 1) meta-analyses (a report on a group of randomized controlled trial studies that both summarizes overall findings and which critiques the quality of the studies analyzed in the meta-analysis), 2) individual randomized controlled trial studies published in peer reviewed journals, 3) non-randomized but otherwise well-designed studies published in peer reviewed journals, and 4) well designed non-experimental studies published in peer reviewed journals.

Let's start with defining an experimental study. In an experimental study, research basically begins with a hypothesis related to dependent and independent variables. What happens with the dependent variable(s) depends on what happens with the independent variable(s). For example, someone could hypothesize that, all other factors being controlled for, a new method (independent variable) of executing a particular Taekwondo technique produces more power (dependent variable) than the currently used method of executing that technique. Or a new method (independent variable) of training Taekwondo athletes produces a greater number (dependent variable) of Olympic gold medalists than the current method. Well, it's all very well and good to have a hypothesis, but until it's actually tested, it's just a guess. If that hypothesis is stated as fact, it still isn't fact. It's just an opinion. In a well-designed experimental study, extraneous factors are controlled for so they don't confound the results. For example, you wouldn't make your experimental group (the one using the new method for the technique or the one undergoing the new method of training) be the top 30 Taekwondo competitors in a nation and have your control group (the one using the current method of executing the technique or the one undergoing the current method of training) be a group of 30 white belts. You would make sure that the *only* difference, on average, between the experimental group and the control group is the experimental intervention.
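
To make the experimental/control comparison concrete, here is a minimal Python sketch of how the collected data might be compared once the study is run. The group sizes, the power readings (in newtons), and the choice of a Welch's t-test are illustrative assumptions, not a prescription for how such a study must be analyzed.

```python
# Hypothetical comparison of strike power (in newtons) between a control group
# using the current method and an experimental group using the new method.
# All numbers are invented; a real study would also check its assumptions
# (e.g., normality) and pre-register its analysis plan.
from statistics import mean, stdev
from scipy.stats import ttest_ind  # Welch's t-test when equal_var=False

control_power = [820, 790, 805, 840, 815, 798, 830, 810]       # current method
experimental_power = [845, 860, 838, 872, 851, 866, 840, 858]  # new method

t_stat, p_value = ttest_ind(experimental_power, control_power, equal_var=False)

print(f"control mean:      {mean(control_power):.1f} +/- {stdev(control_power):.1f}")
print(f"experimental mean: {mean(experimental_power):.1f} +/- {stdev(experimental_power):.1f}")
print(f"Welch's t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value says the observed difference would be unlikely if the two
# methods truly produced the same power; it does not, on its own, prove the
# new method is better in any practical sense.
```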

Non-experimental studies don't have variables that the researchers are manipulating (e.g., method of technique execution or method of training). They could be, for example, descriptive studies (e.g., a study that uses a survey to assess how many Taekwondo practitioners study in a Kukkiwon school, an ITF school, or an independent school; a study that uses a survey to assess how many Taekwondo practitioners are explicitly interested in self-defense). Such studies generally collect demographic data (e.g., gender, age, years of practice, style of Taekwondo practiced, socio-economic status, race/ethnicity/cultural heritage/nationality, level of education, history of being the victim of assault) so that patterns can be analyzed and described (hence, the term "descriptive study"). For example, maybe a researcher is interested in whether there is a trend related to gender or age (e.g., are more men than women interested in self defense or is it the reverse or are they interested in equal numbers?).
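
As a toy illustration of the kind of pattern analysis a descriptive study involves, here is a short Python sketch that cross-tabulates hypothetical survey responses; the field names and records are invented for the example.

```python
# Hypothetical survey records for a descriptive study; the fields and values
# are invented purely to show how demographic patterns might be tabulated.
from collections import Counter

responses = [
    {"gender": "female", "interested_in_self_defense": True},
    {"gender": "male",   "interested_in_self_defense": False},
    {"gender": "female", "interested_in_self_defense": True},
    {"gender": "male",   "interested_in_self_defense": True},
    {"gender": "female", "interested_in_self_defense": False},
]

# Cross-tabulate interest in self-defense by gender.
table = Counter((r["gender"], r["interested_in_self_defense"]) for r in responses)
for (gender, interested), count in sorted(table.items()):
    print(f"{gender:6s}  interested={interested!s:5s}  n={count}")
```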

Non-experimental studies can also be qualitative (e.g., they're not about numbers). Qualitative research includes anthropological, ethnographic, and naturalistic research. For example, carefully designed, detailed, and systematic interviews (with open-ended questions that yield direct quotations recorded verbatim) could be conducted with the remaining leaders of the Kwans that worked to unify and develop Taekwondo to explore their original intentions which are often debated today (e.g., the purpose of poomsae, the purpose of sparring, and so on). The resulting quotations would be analyzed word by word using any number of qualitative research analysis techniques to discover concepts and patterns related to the researchers’ interest. This kind of qualitative research is extremely labor intensive and time consuming and yields results that are completely different from (i.e., of a significantly higher reliability and validity than) those of a casual or journalistic interview.

So, within the category of experimental studies we have subcategories of randomized and non-randomized studies. In randomized studies, first, the largest pool of participants appropriate to the hypothesis to be tested is identified (e.g., all Kukkiwon-certified dojang owners in the US [to test, say, a hypothesis of whether post-certification annual revenue is higher than pre-certification annual revenue among dojang owners in the US] or all child students in all schools that are part of a particular system of Taekwondo unique to an area in Germany [to study whether a new curriculum used by that system of Taekwondo results in higher rates of retention of child students than a prior curriculum]). Then, participants are selected at random from within that pool.
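
For example, a minimal Python sketch of drawing a simple random sample from such a pool might look like the following; the roster and sample size here are hypothetical.

```python
# A minimal sketch of drawing a simple random sample from a participant pool.
# The roster below is hypothetical; in the example above it would stand in for
# the full list of Kukkiwon-certified dojang owners in the US.
import random

pool = [f"dojang_owner_{i:04d}" for i in range(1, 2501)]  # hypothetical pool of 2,500

random.seed(42)                      # fixed seed only so the draw is reproducible here
sample = random.sample(pool, k=300)  # 300 participants drawn without replacement

print(len(sample), sample[:5])
```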

Because of the challenges (e.g., labor, cost) that true random selection poses, convenience samples are more often used (e.g., Kukkiwon-certified dojang owners in the US who happen to participate in Martial Talk and are willing to participate in the research because they know the researcher who also participates in Martial Talk; the child students in the “future black belts club” of a school that is part of a particular system of Taekwondo unique to an area in Germany who are willing to participate). Non-random convenience samples open up the very real possibility that the research results may not be fully generalizable to the full population of interest; they could apply only to the non-random sample used because it has unique features that the larger population of interest does not consistently have.

Meta-analyses are analyses of analyses. So, in a meta-analysis of a group of randomized controlled trial studies, researchers would first conduct a literature review to identify all studies that have been conducted related to the hypothesis of interest based on a set of clearly defined inclusion criteria. Studies that were of poor quality or otherwise did not adequately match the inclusion criteria would be excluded. Studies that were of high quality and fully matched the inclusion criteria would be included. So, let’s say there were many studies related to whether a new method (independent variable) of executing a particular Taekwondo technique produces more power (dependent variable) than the currently used method of executing that technique. The researchers conducting a meta-analysis would find and examine all of those studies, determine the flaws of each, and exclude individual studies that failed to meet the clearly defined inclusion criteria that were created based on the hypothesis and the principles of scientific inquiry. Then, they would use specialized research techniques to look for and analyze any trends that consistently appear across studies. The results of a meta-analysis can range from finding strong trends that clearly support or refute the hypothesis, to finding no consistent trends at all (some of the studies support the hypothesis while some of the studies refute the hypothesis), to determining that not enough research, or not enough appropriately designed research, has yet been conducted to determine whether the hypothesis is supported or not.
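
As a rough illustration of the pooling step only, here is a toy fixed-effect (inverse-variance weighted) meta-analysis sketch in Python. The study effects and standard errors are invented, and a real meta-analysis would involve much more (heterogeneity assessment, quality appraisal, publication-bias checks, and so on).

```python
# A toy fixed-effect meta-analysis using inverse-variance weighting.
# Each tuple is (effect estimate, standard error) from one hypothetical study
# of the power difference between the new and current methods of executing a
# technique. The numbers are invented; a real meta-analysis would also assess
# heterogeneity (e.g., with a random-effects model) before trusting the pool.
import math

studies = [
    (12.0, 4.0),   # study A: mean power difference (newtons), standard error
    (8.5, 3.0),    # study B
    (15.2, 6.5),   # study C
    (5.1, 2.5),    # study D
]

weights = [1.0 / se ** 2 for _, se in studies]    # inverse-variance weights
pooled = sum(w * eff for (eff, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

print(f"pooled effect = {pooled:.2f}, 95% CI = "
      f"({pooled - 1.96 * pooled_se:.2f}, {pooled + 1.96 * pooled_se:.2f})")
```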

All of this said, many researchers (including myself) would argue that well-conducted qualitative research should rank higher on the hierarchy of evidence and that the only reason it doesn’t is because of a Western bias for quantification. Some issues simply cannot be researched quantitatively, although they can be researched qualitatively. There is no reason (other than bias) to consider qualitative research on issues that cannot be quantified as inherently inferior.

Also, people can seek to learn from external authority (e.g., the opinions of peers and colleagues, the opinions of leaders or institutions, the results of research of various levels of quality), from internal authority (personal experiential learning), or a synthesis of both. Which routes of learning we seek most tends to vary with where we are in our development and what sources are most valued in our culture.

Taekwondo is ripe for research of all kinds. But research requires money--often lots of money. Societies fund what they value. I hope that anyone in a position to fund and/or conduct research in the field of Taekwondo will do so. There is so much of value waiting to be discovered.

Cynthia
 
This sounds more like it would fit a scientific study rather than a martial arts study, at least for the topics that we are discussing on message boards like this one. I especially don't think the concept of published peer-reviewed articles works well in the Korean martial arts field, since we don't have that. Some point to JAMA, which to me publishes a lot of erroneous information, at least with respect to the subjects that I am familiar with. I do think there are good articles in there every once in a while, but I think that is more dependent on the author than the forum. I guess the bottom line is that I don't have a deep science background, so the scientific method is not my go-to method for research. For one thing, I don't like to start with a hypothesis. Instead I prefer to keep it open and see where I get taken. I am more of an investigative reporter type of guy.
 
This sounds more like it would fit a scientific study rather than a martial arts study, at least for the topics that we are discussing on message boards like this one. I especially don't think the concept of published peer-reviewed articles works well in the Korean martial arts field, since we don't have that. Some point to JAMA, which to me publishes a lot of erroneous information, at least with respect to the subjects that I am familiar with. I do think there are good articles in there every once in a while, but I think that is more dependent on the author than the forum. I guess the bottom line is that I don't have a deep science background, so the scientific method is not my go-to method for research. For one thing, I don't like to start with a hypothesis. Instead I prefer to keep it open and see where I get taken. I am more of an investigative reporter type of guy.

Qualitative research can begin with a hypothesis or with a goal of exploration. When an issue hasn't even been studied yet, exploratory qualitative research is a very useful first step, in terms of just increasing understanding as well as in generating enough information about something to be able to develop a hypothesis about some aspect of the issue. There are lots of tools in the research tool box. Each one lends itself well to some applications but poorly or not at all to others. So picking the right research tool is very important.

And, yes, research should always be read with a critical eye. Just because a study gets published, even in a peer reviewed journal, doesn't make it flawless.

Cynthia
 
This sounds more like it would fit a scientific study rather than a martial arts study, at least for the topics that we are discussing on message boards like this one.

I would love to have access to actual studies (not to mention meta-analyses) related to: 1) what curriculum/teaching methods increase black belt retention, 2) what approaches (e.g., teaching methods, methods of technique execution, warm ups) help older Taekwondo practitioners minimize injury, 3) what methods of teaching self defense are truly effective for the general public, and much more. These are topics that come up on message boards like Martial Talk and they are certainly possible to research.

Cynthia
 
I would love to have access to actual studies (not to mention meta-analyses) related to: 1) what curriculum/teaching methods increase black belt retention, 2) what approaches (e.g., teaching methods, methods of technique execution, warm ups) help older Taekwondo practitioners minimize injury, 3) what methods of teaching self defense are truly effective for the general public, and much more. These are topics that come up on message boards like Martial Talk and they are certainly possible to research.

I think there are enough people who can answer what they do for these areas, which may be level 5.
 
I would love to have access to actual studies (not to mention meta-analyses) related to: 1) what curriculum/teaching methods increase black belt retention, 2) what approaches (e.g., teaching methods, methods of technique execution, warm ups) help older Taekwondo practitioners minimize injury, 3) what methods of teaching self defense are truly effective for the general public, and much more. These are topics that come up on message boards like Martial Talk and they are certainly possible to research.

Cynthia
It sure would be great to have access to studies like that. You can't really gain that sort of information just from talking to people. The main two instructors I've trained under have taught tkd full time for over 30 years; they have seen literally thousands of students come and go and have tried hundreds of training techniques and ideas. The funny thing is, though, that if I talk extensively to either of them regarding the types of things you've mentioned, I would get very different answers on certain topics because they have different experiences, students, etc. I spend a lot of time talking to very experienced instructors (not just from my club) to try and improve my knowledge, and on certain subjects there seem to be "constants" while others vary greatly. It would be great if somehow someone did a completely independent survey with millions of tkdists from different countries and organisations to see what the results are. Unfortunately it probably isn't really possible to do it properly.
 
I think there are enough people who can answer what they do for these areas, which may be level 5.

Yes, that would be level 5. And, that is the knowledge source I seek at this point on many issues both because I find it useful to consult those with more experience than me and because there is no evidence of higher quality available yet directly answering these questions--at least not that I have found!

Cynthia
 
I would get very different answers on certain topics because they have different experiences, students, etc.

This is exactly the issue that well-designed research studies (and the meta-analyses that analyze them) attempt to address. But, to my knowledge, there is a dearth of research on many issues in Taekwondo (and the martial arts, in general). Honestly, I have no idea from where funding would come to change the situation. So, the best I can do at this point when trying to learn about many issues is to seek information from as close to the source as I can and from individuals and institutions that are widely recognized as credible all while comparing it with my own prior experience and understanding, testing things out experientially in my individual practice and in my dojang with my students, staying open to the possibility that there are flaws in the information I'm discovering, and staying open to the possibility that I have my own unrecognized misunderstandings and biases and to the possibility that information that at first appears to me to be incorrect might not be. Learning is no small task!

Cynthia
 
Yes, that would be level 5. And, that is the knowledge source I seek at this point on many issues both because I find it useful to consult those with more experience than me and because there is no evidence of higher quality available yet directly answering these questions--at least not that I have found!

Cynthia

As I read over this, wow, I think it would take many qualified people and several multi-million dollar grants to accomplish. Feels like #5 is pretty much it; of course, maybe I am just not seeing it. I have seen some studies, I have some studies, but put to this test these studies fall short.

Add in that Taekwondo is in over 200 nations with about 80 million practitioners.
 
As I read over this, wow, I think it would take many qualified people and several multi-million dollar grants to accomplish. Feels like #5 is pretty much it; of course, maybe I am just not seeing it. I have seen some studies, I have some studies, but put to this test these studies fall short.

Add in that Taekwondo is in over 200 nations with about 80 million practitioners.

Yes, that's our dilemma! On the one hand, there's so much room for learning and discovery in Taekwondo--both personally and in terms of research. And, that's exciting! On the other hand, the dearth of quality research often leaves us with nothing but opinion. That's fine for some issues in Taekwondo, but not so fine for others.

Cynthia
 
It would be great if somehow someone did a completely independent survey with millions of tkdists from different countries and organisations to see what the results are. Unfortunately it probably isn't really possible to do it properly.

This is what random sampling is for. If the population relevant to a hypothesis is very large but can be randomly sampled (and the resulting subpopulation is of the correct size required for the statistical methods that need to be used to test the hypothesis), then the hypothesis can be tested with a much smaller number of participants. That is much more possible, manageable, and affordable than including each member of an entire population!
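
As a back-of-the-envelope sketch of why this works, here is a short Python example using the standard formula for estimating a single proportion at 95% confidence, with a finite-population correction; the population sizes and margin of error are illustrative assumptions.

```python
# Required sample size for estimating a single proportion to within a margin
# of error at 95% confidence, with a finite-population correction. Population
# sizes and the margin of error are illustrative assumptions only.
import math

def sample_size(population, margin_of_error=0.03, p=0.5, z=1.96):
    """Sample size needed to estimate proportion p to within +/- margin_of_error."""
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2   # infinite-population formula
    n = n0 / (1 + (n0 - 1) / population)                 # finite-population correction
    return math.ceil(n)

for population in (10_000, 1_000_000, 80_000_000):       # e.g., "about 80 million practitioners"
    print(f"population {population:>11,d} -> sample of about {sample_size(population):,d}")
# The required sample barely grows once the population is large, which is why a
# properly drawn random sample can stand in for millions of practitioners.
```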

Cynthia
 
Yes, that's our dilemma! On the one hand, there's so much room for learning and discovery in Taekwondo--both personally and in terms of research. And, that's exciting! On the other hand, the dearth of quality research often leaves us with nothing but opinion. That's fine for some issues in Taekwondo, but not so fine for others.

What about the search for facts? For example, the date the KTA or WTF were founded. Where would that type of inquiry fall? Those aren't opinions.
 