Some years ago, I had a talk with a well-known expert in the field of stuttering research. I hoped he would be interested in my theoretical ideas about the causes of stuttering, but he told me that he was not interested in theories at all. I was surprised and didn’t know how to reply.
I had believed he was searching for the causes of stuttering (and I still think he is searching for those causes), but apparently he was not aware that causes are always theoretical. Causation is a purely theoretical concept; there is no direct empirical evidence that causation exists at all. This doesn’t mean that causation is not real. It only means that empirical research can never observe causes.
We are deeply convinced that causality exists. We are convinced that there are relationships between causes and effects. When I throw a stone against a windowpane and the pane breaks, I’m convinced that my action caused the pane to break. Our belief in causation arose from the subjective experience that our actions have effects. Asking about causes has become the essential way we understand things and events in the world.
Why can empirical research not provide evidence of causation? In the above example, an objective observer can only find a temporal correlation between two events: the pane goes to pieces each time a sufficiently big stone is thrown against it with sufficient power. The conclusion that throwing the stone causes the pane to break is a very plausible but nevertheless purely theoretical conclusion, since the causation itself is not observable.
When we observe a repeated temporal succession of two events A and B or find a statistical correlation between two variables A and B, we can, strictly speaking, never know if there is a causal connection between them. The reason is: there may be an unknown cause C that caused both A and B and the correlation between them, without there being any causality between A and B.
An unknown underlying cause C is extremely implausible in the above example of the broken windowpane. But it cannot be excluded if, for example, a positive correlation is found between the extent of a brain-structure anomaly and stuttering severity. There is not only the chicken-or-egg question: is the structural anomaly a cause or a consequence of stuttering? From the data alone, we can’t even infer whether there is any causal connection at all between the two variables.
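The point about the hidden common cause C can be made concrete with a small simulation. The following sketch (a hypothetical illustration, not taken from any stuttering data) generates two variables A and B that do not influence each other at all; both merely depend on a third, unobserved variable C. An observer who measures only A and B nevertheless finds a strong correlation between them:

```python
import random

random.seed(0)

# Hidden common cause C; A and B each depend only on C, never on each other.
n = 10_000
C = [random.gauss(0, 1) for _ in range(n)]
A = [c + random.gauss(0, 0.5) for c in C]  # A = C + noise
B = [c + random.gauss(0, 0.5) for c in C]  # B = C + noise

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length lists."""
    m = len(x)
    mx, my = sum(x) / m, sum(y) / m
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

print(f"correlation between A and B: {pearson(A, B):.2f}")
# Strong positive correlation (around 0.8), although by construction
# there is no causal link between A and B in either direction.
```

The expected correlation here is Var(C) / (Var(C) + Var(noise)) = 1 / 1.25 = 0.8, so the data alone cannot distinguish this scenario from one in which A causes B.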
It is therefore impossible to logically derive a causal theory from empirical data alone. Theories are rather the product of speculative thinking about how things may work. A theory arises from an idea, and sometimes perhaps from daydreaming (chemist August Kekulé reported he first saw the annular shape of the benzene molecule in a dream). But the way a theory has come into being is irrelevant to the question whether it is correct or not.
To be valid, that is, possibly correct, a theory has to be consistent with all relevant empirical data. From this it follows: the more data are available, the greater the likelihood that any given theory is inconsistent with at least one of them; that is, the more data we have, the smaller the number of valid theories. At best, only one valid theory remains; then we have good reason to consider it correct. In the case of a newly discovered phenomenon about which little data is available, further studies may be needed to test a theory, that is, to compare predictions derived from the theory with the data obtained from those studies. However, agreement with a prediction means no more than that the theory is consistent with those data. In this regard, there is no difference between data obtained in the past and data obtained in the future: a valid theory has to be consistent with all of them.
Developmental stuttering is not a newly discovered phenomenon; we have a wealth of data about it. Of course, the author of a theory cannot check his theory against all these data, but it should be easy for critics to support their objections with data and, in this way, to falsify an incorrect theory. So the value of data is not that theories can be derived from them, but that they can be used to show which theories are wrong. In this way, empirical data serve the development of theories and progress in science.
Consistency with all relevant data is not the only criterion for evaluating theories. A critic can also show that the theory is incoherent or inconsistent in itself (if that is the case) or that the theory does not provide the answers we expect from a good theory. Here is a list of questions a proper theory of stuttering should answer.
Given the vast amount of data about stuttering available today, from behavioral research and from brain research, it is probably not for lack of data that we still don’t understand the causes of the disorder. I rather think we lack theoretical research and discussion. Empirical data are like pieces of a puzzle, and what we have to do is put the puzzle together. This has been my endeavor for more than ten years.