In its original form, the Church-Turing thesis concerned computation as Alan Turing and Alonzo Church used the term in 1936: human computation.
It is generally accepted that, in the cognitive and neural sciences, there are both computational and mechanistic explanations. We ask how computational explanations can be integrated into the mechanistic hierarchy. The problem stems from the fact that the implementation relation and the mechanistic relation have different forms. The implementation relation, from the states of an abstract computational system (e.g., an automaton) to the physical, implementing states, is a homomorphism. The mechanistic relation, however, is that of part to whole: the explaining features in a mechanistic explanation are the components of the explanandum phenomenon and their causal organization. Moreover, each component at one level of mechanism is constituted and explained by components of an underlying level of mechanism. Hence, it seems, computational variables and functions cannot be mechanistically explained by the medium-dependent states and properties that implement them. How, then, do the computational and the implementational integrate to create the mechanistic hierarchy? After explicating the general problem (Sect. 2), we further demonstrate it through a concrete example from the cognitive and neural sciences: reinforcement learning (Sects. 3 and 4). We then examine two possible solutions (Sect. 5). On one solution, the mechanistic hierarchy embeds computational and implementational properties at the same levels. This picture fits with the view that computational explanations are mechanistic sketches. On the other solution, there are two separate hierarchies, one computational and one implementational, related by the implementation relation. This picture fits with the view that computational explanations are functional and autonomous explanations. It is less clear how these solutions fit with the view that computational explanations are full-fledged mechanistic explanations.
Finally, we argue that both pictures are consistent with the reinforcement learning example, but that scientific practice does not align with the view that computational models are merely mechanistic sketches (Sect. 6).
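The implementation relation described above can be made concrete with a toy sketch. The following is purely illustrative (the automaton, the mapping `f`, and the "physical dynamics" are all invented for the example): it shows the homomorphism condition that a mapping from physical states to abstract automaton states must satisfy, namely that stepping the physical system and then mapping gives the same result as mapping and then applying the abstract transition function.

```python
# Illustrative sketch of the implementation relation as a homomorphism.
# A two-state abstract automaton is "implemented" by a toy physical
# system whose states are voltage readings. All details are assumptions
# made up for this example, not drawn from the source text.

def delta(s):
    """Abstract transition function of a two-state automaton."""
    return "S1" if s == "S0" else "S0"

def f(v):
    """Implementation mapping from physical states (voltages) to
    abstract states: low readings count as S0, high readings as S1."""
    return "S0" if v < 0.5 else "S1"

def physical_step(v):
    """Toy physical dynamics of the implementing medium."""
    return 1.0 - v

# Homomorphism condition: mapping then transitioning equals
# stepping physically then mapping, for every physical state.
for v in [0.1, 0.3, 0.7, 0.9]:
    assert f(physical_step(v)) == delta(f(v))
```

The point of the sketch is that `f` is a many-to-one mapping between state spaces, not a decomposition of the automaton's states into physical parts, which is why the implementation relation differs in form from the part/whole mechanistic relation.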
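The reinforcement-learning example can likewise be gestured at with a minimal computational-level description. The snippet below is a standard TD(0) value update, written as a hedged illustration (the state names, learning rate, and reward are arbitrary assumptions); the integration question concerns how such medium-independent variables as the prediction error relate to their neural implementation.

```python
# Minimal sketch of a temporal-difference (TD) value update, the kind of
# computational-level description used in reinforcement-learning models.
# All names and numbers are illustrative assumptions.

alpha = 0.1   # learning rate (a medium-independent parameter)
gamma = 0.9   # discount factor

V = {"s0": 0.0, "s1": 0.0}   # value estimates over abstract states

def td_update(s, r, s_next):
    """One TD(0) update; delta is the reward-prediction error."""
    delta = r + gamma * V[s_next] - V[s]
    V[s] += alpha * delta
    return delta

# One learning step: a reward of 1.0 on the transition s0 -> s1
err = td_update("s0", 1.0, "s1")   # err == 1.0; V["s0"] becomes 0.1
```

Variables such as `V` and `delta` are defined over abstract states and rewards, which is what makes it non-obvious how they are to be explained by the medium-dependent properties that implement them.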
The mechanistic view of computation contends that computational explanations are mechanistic explanations. Mechanists, however, disagree about the precise role that the environment – or the so-called "contextual level" – plays in computational (mechanistic) explanations. We advance two claims: (i) Contextual factors essentially determine the computational identity of a computing system (computational externalism); this means that specifying the "intrinsic" mechanism is not sufficient to fix the computational identity of the system. (ii) It is not necessary to specify the causal-mechanistic interaction between the system and its context in order to offer a complete and adequate computational explanation. While the first claim has been discussed before, the second has been practically ignored. After supporting these claims, we discuss the implications of our contextualist view for the mechanistic view of computational explanation. Our aim is to show that some versions of the mechanistic view are consistent with the contextualist view, whilst others are not.