Publications by Type: Journal Articles


It is generally accepted that, in the cognitive and neural sciences, there are both computational and mechanistic explanations. We ask how computational explanations can be integrated into the mechanistic hierarchy. The problem stems from the fact that the implementation and mechanistic relations have different forms. The implementation relation, from the states of an abstract computational system (e.g., an automaton) to the physical states that implement them, is a homomorphic mapping. The mechanistic relation, however, is that of part to whole: the explaining features in a mechanistic explanation are the components of the explanandum phenomenon and their causal organization. Moreover, each component at one level of mechanism is constituted and explained by components of an underlying level of mechanism. Hence, it seems, computational variables and functions cannot be mechanistically explained by the medium-dependent states and properties that implement them. How, then, do the computational and the implementational integrate to form the mechanistic hierarchy? After explicating the general problem (section 2), we further demonstrate it through a concrete example from the cognitive and neural sciences, reinforcement learning (sections 3 and 4). We then examine two possible solutions (section 5). On one solution, the mechanistic hierarchy embeds computational and implementational properties at the same levels. This picture fits the view that computational explanations are mechanistic sketches. On the other solution, there are two separate hierarchies, one computational and one implementational, which are related by the implementation relation. This picture fits the view that computational explanations are functional and autonomous explanations. It is less clear how either solution fits the view that computational explanations are full-fledged mechanistic explanations.
Finally, we argue that both pictures are consistent with the reinforcement learning example, but that scientific practice does not align with the view that computational models are merely mechanistic sketches (section 6).

Oron Shagrir. 2020. “In Defense of the Semantic View of Computation.” Synthese, 197, Pp. 4083-4108.

The semantic view of computation is the claim that semantic properties play an essential role in the individuation of physical computing systems such as laptops and brains. The main argument for the semantic view (“the master argument”) rests on the fact that some physical systems simultaneously implement different automata at the same time, in the same space, and even in the very same physical properties (“simultaneous implementation”). Recently, several authors have challenged this argument (Piccinini 2008, 2015; Coelho Mollo 2018; Dewhurst 2018). They accept the premise of simultaneous implementation but reject the semantic conclusion. In this paper, I aim to explicate the semantic view and to address these objections. I first characterize the semantic view and distinguish it from other, closely related views. Then, I contend that the master argument for the semantic view survives the counter-arguments against it. One counter-argument is that computational individuation is not forced to choose between the implemented automata but rather always picks out a more basic computational structure. My response is that this move might undermine the notion of computational equivalence. Another counter-argument is that while computational individuation is forced to rely on extrinsic features, these features need not be semantic. My reply is that the semantic view better accounts for these extrinsic features than the proposed non-semantic alternatives.

Jack Copeland and Oron Shagrir. 2020. “Physical Computability Theses.” In Quantum Physics, Probability and Logic: Itamar Pitowsky’s Work and Influence, edited by Meir Hemmo and Orly Shenker, Pp. 217-232. Springer.
Jack Copeland and Oron Shagrir. 2019. “The Church-Turing thesis: logical limit or breachable barrier?” Communications of the ACM, 62, 1, Pp. 66-74.

The Church-Turing thesis (CTT) underlies tantalizing open questions concerning the fundamental place of computing in the physical universe. For example, is every physical system computable? Is the universe essentially computational in nature? What are the implications for computer science of recent speculation about physical uncomputability? Does CTT place a fundamental logical limit on what can be computed, a computational "barrier" that cannot be broken, no matter how far and in what multitude of ways computers develop? Or could new types of hardware, based perhaps on quantum or relativistic phenomena, lead to radically new computing paradigms that do breach the Church-Turing barrier, in which the uncomputable becomes computable, in an upgraded sense of "computable"? Before addressing these questions, we first look back to the 1930s to consider how Alonzo Church and Alan Turing formulated, and sought to justify, their versions of CTT. With this necessary history under our belts, we then turn to today's dramatically more powerful versions of CTT.

Jens Harbecke and Oron Shagrir. 2019. “The Role of the Environment in Computational Explanations.” European Journal for Philosophy of Science, 9, 3, Pp. 37.

The mechanistic view of computation contends that computational explanations are mechanistic explanations. Mechanists, however, disagree about the precise role that the environment – or the so-called “contextual level” – plays for computational (mechanistic) explanations. We advance here two claims: (i) Contextual factors essentially determine the computational identity of a computing system (computational externalism); this means that specifying the “intrinsic” mechanism is not sufficient to fix the computational identity of the system. (ii) It is not necessary to specify the causal-mechanistic interaction between the system and its context in order to offer a complete and adequate computational explanation. While the first claim has been discussed before, the second has been practically ignored. After supporting these claims, we discuss the implications of our contextualist view for the mechanistic view of computational explanation. Our aim is to show that some versions of the mechanistic view are consistent with the contextualist view, whilst others are not.

Oron Shagrir. 2018. “The Brain as an Input-Output Model of the World.” Minds and Machines, 28, Pp. 53-75.

An underlying assumption in computational approaches in the cognitive and brain sciences is that the nervous system is an input–output model of the world: its input–output functions mirror certain relations in the target domains. I argue that the input–output modelling assumption plays distinct methodological and explanatory roles. Methodologically, input–output modelling serves to discover the computed function from environmental cues. Explanatorily, input–output modelling serves to account for the appropriateness of the computed function to the explanandum information-processing task. I briefly compare the modelling explanation with mechanistic and optimality explanations, noting that in both cases the explanations can be seen as complementary rather than contrastive or competing.

Jack Copeland, Eli Dresner, Diane Proudfoot, and Oron Shagrir. 2016. “Time to Re-inspect the Foundations?” Communications of the ACM, 59, Pp. 34-36.


Questioning if computer science is outgrowing its traditional foundations.


William Bechtel and Oron Shagrir. 2015. “The Non-Redundant Contributions of Marr's Three Levels of Analysis for Explaining Information Processing Mechanisms.” Topics in Cognitive Science (TopiCS), 7, Pp. 312-322.

Are all three of Marr's levels needed? Should they be kept distinct? We argue for the distinct contributions and methodologies of each level of analysis. It is important to maintain them because they provide three different perspectives required to understand mechanisms, especially information-processing mechanisms. The computational perspective provides an understanding of how a mechanism functions in broader environments that determine the computations it needs to perform (and may fail to perform). The representational and algorithmic perspective offers an understanding of how information about the environment is encoded within the mechanism and what patterns of organization enable the parts of the mechanism to produce the phenomenon. The implementation perspective yields an understanding of the neural details of the mechanism and how they constrain function and algorithms. Once we adequately characterize the distinct role of each level of analysis, it is fairly straightforward to see how they relate.

Oron Shagrir. 2014. “Review of Marcin Milkowski, Explaining the Computational Mind (MIT Press).” Notre Dame Philosophical Reviews.
Gualtiero Piccinini and Oron Shagrir. 2014. “Foundations of Computational Neuroscience.” Current Opinion in Neurobiology, 25, Pp. 25-30.

Most computational neuroscientists assume that nervous systems compute and process information. We discuss foundational issues such as what we mean by ‘computation’ and ‘information processing’ in nervous systems; whether computation and information processing are matters of objective fact or of conventional, observer-dependent description; and how computational descriptions and explanations are related to other levels of analysis and organization.

Oron Shagrir. 2013. “Concepts of Supervenience Revisited.” Erkenntnis, 78, Pp. 469-485.

Over the last three decades a vast literature has been dedicated to supervenience. Much of it has focused on the analysis of different concepts of supervenience and their philosophical consequences. This paper has two objectives. One is to provide a short, up-to-date guide to the formal relations between the different concepts of supervenience. The other is to reassess the extent to which these concepts can establish metaphysical theses, especially about dependence. The conclusion is that strong global supervenience is the most advantageous notion of supervenience that we have.

Oron Shagrir. 2012. “Can a Brain Possess Two Minds?” Journal of Cognitive Science, 13, Pp. 145-165.

In “A Computational Foundation for the Study of Cognition” David Chalmers articulates, justifies and defends the computational sufficiency thesis (CST). Chalmers advances a revised theory of computational implementation, and argues that implementing the right sort of computational structure is sufficient for the possession of a mind, and for the possession of a wide variety of mental properties. I argue that Chalmers's theory of implementation is consistent with the nomological possibility of physical systems that possess different entire minds. I further argue that this brain-possessing-two-minds result challenges CST in three ways. It implicates CST in a host of epistemological problems; it undermines the underlying assumption that the mental supervenes on the physical; and it calls into question the claim that CST provides conceptual foundations for the computational science of the mind.

Oron Shagrir. 2012. “Computation, Implementation, Cognition.” Minds and Machines, 22, Pp. 137–148.

Putnam (Representations and Reality, MIT Press, Cambridge, 1988) and Searle (The Rediscovery of the Mind, MIT Press, Cambridge, 1992) famously argue that almost every physical system implements every finite computation. This universal implementation claim, if correct, puts certain functional and computational views of the mind at risk of triviality. Several authors have offered theories of implementation that allegedly avoid the pitfalls of universal implementation. My aim in this paper is to suggest that these theories are still consistent with a weaker result, which is the nomological possibility of systems that simultaneously implement different complex automata. Elsewhere (Shagrir in J Cogn Sci, 2012) I argue that this simultaneous implementation result challenges the computational sufficiency thesis (articulated by Chalmers in J Cogn Sci, 2012). My focus here is on theories of implementation. After presenting the basic simultaneous implementation construction, I argue that these theories do not avoid the simultaneous implementation result. The conclusion is that the idea that the implementation of the right kind of automaton suffices for the possession of a mind is dubious.

Oron Shagrir. 2012. “The Imitation Game.” Odyssey: A special issue on Alan Turing, 14, Pp. 18-21 (in Hebrew).

Oron Shagrir. 2012. “Structural Representations and the Brain.” The British Journal for the Philosophy of Science, 63, Pp. 519-545.

In Representation Reconsidered, William Ramsey suggests that the notion of structural representation is posited by classical theories of cognition, but not by the "newer accounts" (e.g., connectionist modeling). I challenge the assertion about the newer accounts. I argue that the newer accounts also posit structural representations; in fact, the notion plays a key theoretical role in the current computational approaches in cognitive neuroscience. The argument rests on a close examination of computational work on the oculomotor system.

Oron Shagrir. 2012. “Supertasks Do Not Increase Computational Power.” Natural Computing, 11, Pp. 51-58.

It is generally assumed that supertasks increase computational power. It is argued, for example, that supertask machines can compute beyond the Turing limit, e.g., compute the halting function. We challenge this assumption. We do not deny, however, that supertask machines can compute beyond the Turing limit. Our claim is that the (hyper) computational power of these machines is not related to supertasks, but to the “right kind” of computational structure.

Jack Copeland and Oron Shagrir. 2011. “Do Accelerating Turing Machines Compute the Uncomputable?” Minds and Machines, 21, Pp. 221-239.

Accelerating Turing machines have attracted much attention in the last decade or so. They have been described as “the work-horse of hypercomputation” (Potgieter and Rosinger 2009). But do they really compute beyond the “Turing limit” — e.g., compute the halting function? We argue that the answer depends on what you mean by an accelerating Turing machine, on what you mean by computation, and even on what you mean by a Turing machine. We show first that in the current literature the term “accelerating Turing machine” is used to refer to two very different species of accelerating machine, which we call end-stage-in and end-stage-out machines, respectively. We argue that end-stage-in accelerating machines are not Turing machines at all. We then present two differing conceptions of computation, the internal and the external, and introduce the notion of the epistemic embedding of a computation. We argue that no accelerating Turing machine computes the halting function in the internal sense. Finally, we distinguish between two very different conceptions of the Turing machine, the purist conception and the realist conception; and we argue that Turing himself was no subscriber to the purist conception. We conclude that under the realist conception, but not under the purist conception, an accelerating Turing machine is able to compute the halting function in the external sense. We adopt a relatively informal approach throughout, since we take the key issues to be philosophical rather than mathematical.

Oron Shagrir. 2011. “Supervenience and Anomalism are Compatible.” Dialectica, 65, Pp. 241-266.

I explore a Davidsonian proposal for the reconciliation of two theses. One is the supervenience of the mental on the physical; the other is the anomalism of the mental. The gist of the proposal is that supervenience and anomalism are theses about interpretation. Starting with supervenience, the claim is that it should not be understood in terms of deeper metaphysical relations, but as a constraint on the relations between the applications of physical and mental predicates. Regarding anomalism, the claim is that laws have to satisfy certain counterfactual cases, in which an interpreter evaluates her past attributions in the light of new pieces of evidence. The proposed reconciliation is that supervenience entails that an interpreter will always attribute the same mental predicates to two individuals with the same physical states. However, supervenience does not imply that the interpreter cannot revise her past attributions to the two individuals.

Oron Shagrir. 2010. “Brains as Analog-Model Computers.” Studies in History and Philosophy of Science, 41, Pp. 271–279.

Computational neuroscientists not only employ computer models and simulations in studying brain functions. They also view the modeled nervous system itself as computing. What does it mean to say that the brain computes? And what is the utility of the ‘brain-as-computer’ assumption in studying brain functions? In previous work, I have argued that a structural conception of computation is not adequate to address these questions. Here I outline an alternative conception of computation, which I call the analog-model conception. The term ‘analog-model’ does not mean continuous, non-discrete, or non-digital. It means that the functional performance of the system simulates mathematical relations in some other system, namely, relations between what is being represented. The brain-as-computer view is invoked to demonstrate that the internal cellular activity is appropriate for the pertinent information-processing (often cognitive) task.
