April 12, 2012
I recently came across a site with some promise – Radical Philosophy magazine. The current issue has an interview with Noam Chomsky discussing, among other things, Free Will. I found that I do not agree with, and to some extent do not understand, his position. This serves as an excuse to present my theory of Free Will.
I will grant at the outset that I am not well read on the subject. I have never read, for example, Descartes’s theories on this issue. My theory is very simple but, I think, quite serviceable in dealing with the limited issue of Free Will. It does not address the matter of consciousness, which I think is more complex.
My understanding is that:
Free Will is that part of a decision-making or action-generation process that is inexplicable to the observer.
A few notes:
- This definition holds both for one’s own Free Will and for that of others.
- ‘Inexplicable’ is a rather strict condition, because explicability doesn’t require the existence of a well-defined theory or description of a process. Any notion that an action or decision is the product of a process that may be analyzed could serve as an explanation, making the invocation of Free Will unnecessary.
- The source of inexplicability is unspecified and is immaterial to the notion of Free Will. Inexplicability is also not assumed to be an objective property. What is inexplicable to one person may be explicable to another.
- [added 04/20/12] An observer is unable to distinguish – based on external observation alone – a creature to which it attributes free will from a creature whose non-free-will components are identical and whose free-will component is replaced by a random process. This is true since any predictable characteristic of the creature’s behavior (i.e., any behavior which is not apparently random) is not part of its free-will behavior.
- Free Will is attributed to humans because they behave in ways that humans (including the actor) cannot describe.
- Systems are usually not described as having Free Will because their behavior is usually described in terms of their components. If those components are human, then the Free Will is ascribed to the components.
- When a human is considered as a system – of cells, say – then ascribing the Free Will to the components is usually considered unreasonable, since the components are considered simple enough that they, in turn, can be analyzed. This is the source of the “Problem of Free Will”, which pushes people to speculate that something unusual is happening.
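The indistinguishability claim in the note above can be illustrated with a toy simulation. This is my own sketch, not from the text: the agents, the “convoluted rule” standing in for the inexplicable component, and all names are hypothetical. One agent’s behavior is an analyzable part plus an opaque deterministic part; the other has the same analyzable part with the opaque part replaced by a coin flip. An observer who can only model the analyzable part sees the same thing in both cases.

```python
import random

def explicable(state):
    """The component of behavior the observer CAN analyze and predict."""
    return state % 3

def opaque(state):
    """A deterministic but (to this observer) unanalyzable component --
    a stand-in for the 'free will' part. The rule here is an arbitrary
    hypothetical choice."""
    return (state * 2654435761 >> 7) & 1

def agent_a(state):
    # creature with an 'inexplicable' free-will component
    return explicable(state) + opaque(state)

def agent_b(state, rng):
    # identical explicable part; free-will component replaced by randomness
    return explicable(state) + rng.randint(0, 1)

rng = random.Random(0)
states = range(100)

# The observer subtracts everything it can predict; what remains is the
# 'free will' residue for A and literal noise for B.
residual_a = [agent_a(s) - explicable(s) for s in states]
residual_b = [agent_b(s, rng) - explicable(s) for s in states]

# Externally, both residuals are just 0/1 sequences: any predictable
# behavior is, by the definition above, not free-will behavior, so nothing
# the observer can test separates the two creatures.
print(set(residual_a), set(residual_b))
```

The circularity here is the point of the note: once the predictable part is modeled away, “free will” and “randomness” present the same external face.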
Chomsky’s discussion of Free Will and associated questions is rather vague, I think. (This may be a result of his referring to a body of knowledge that I am not familiar with, but I don’t see any evidence that this is the case.) He seemingly implies that Free Will is a phenomenon qualitatively different from “regular” physical phenomena, at least in the sense that it cannot be explained today and may not be explicable at all by humans. Yet “regular” physical phenomena are inexplicable in similar ways. Many physical systems are built of simple components whose behavior in isolation is well understood, but which in agglomeration present behaviors that are too complex to be deeply understood. This could be exactly the case with human Free Will.
In fact, whether or not “thinking” is explicable in terms of “physical” laws of nature seems completely beside the point. The source of inexplicability may be “physical” or not, but the effects and implications would be the same. Chomsky says:
the people who argue that freedom of the will doesn’t exist are themselves acting extremely irrationally. Why give us what they take to be reasons? They believe it and are giving these reasons because it’s determined; we don’t believe it because it’s determined. So what’s to…
But why would having a physical source of inexplicability – which is presumably what “Free Will doesn’t exist” means – make reasoning any less “rational” than a different source of inexplicability? If one day a machine were built that could scan a human brain and predict with good accuracy the behavior of a person over a short period – much as a weather-forecast system does – would that make any difference in how we perceive the world, or in what behavior is rational or irrational?
[Some interesting scenarios would be created, such as the one described by Newcomb's paradox.]
A related issue is that Chomsky is also rather unclear about what ‘explicable’ (or ‘intelligible’) means. He implies that explicability is an objective term. I am not sure what he means by the distinction between intelligibility of the world and intelligibility of theories about the world, and the following passage seems meaningless to me:
The classical modern scientists, Galileo through Newton, they were looking for a conception of the world that we could comprehend – not just the theory, but also its object. That was the point of the mechanical philosophy. We can understand gears and levers and things pushing each other, and so on, but we can’t comprehend what Newton and his contemporaries regarded as mystical forces.
We may accept them, we may develop a theoretical approach for dealing with them, but that doesn’t mean that they become intelligible. And in fact they don’t. They’re just as inexplicable to us as they were to Newton. We have just modified our conception of intelligibility, so that we now say that they’re intelligible, but they aren’t, by their standards, though theories about them – and about matters even more remote from common-sense understanding – may be intelligible.
As I see things, intelligibility is always relative. One thing is ‘explained’ in terms of other things. Explanations connect notions together and allow certain manipulations. There is no absolute notion of ‘comprehend’. Something is incomprehensible when it stands in isolation from other notions.