Publications

Submitted
Oron Shagrir and Vera Hoffmann-Kolss. Submitted. Supervenience. doi:10.4135/9781452276052.n369.
B. Jack Copeland, Oron Shagrir, and Mark Sprevak. Submitted. Zuse's Thesis, Gandy's Thesis, and Penrose's Thesis. In , pp. 39–59. Cambridge University Press. doi:10.1017/9781316759745.003.
2020
Oron Shagrir. 2020. In Defense of the Semantic View of Computation. Synthese, 197, 9, pp. 4083–4108. doi:10.1007/s11229-018-01921-z. Abstract:
The semantic view of computation is the claim that semantic properties play an essential role in the individuation of physical computing systems such as laptops and brains. The main argument for the semantic view (“the master argument”) rests on the fact that some physical systems simultaneously implement different automata at the same time, in the same space, and even in the very same physical properties (“simultaneous implementation”). Recently, several authors have challenged this argument (Piccinini in Philos Stud 137:205–241, 2008, Piccinini in Physical computation: a mechanistic account, Oxford University Press, Oxford, 2015; Coelho Mollo in Synthese 195:3477–3497, 2018; Dewhurst in Br J Philos Sci 69:103–116, 2018). They accept the premise of simultaneous implementation but reject the semantic conclusion. In this paper, I aim to explicate the semantic view and to address these objections. I first characterize the semantic view and distinguish it from other, closely related views. Then, I contend that the master argument for the semantic view survives the counter-arguments against it. One counter-argument is that computational individuation is not forced to choose between the implemented automata but rather always picks out a more basic computational structure. My response is that this move might undermine the notion of computational equivalence. Another counter-argument is that while computational individuation is forced to rely on extrinsic features, these features need not be semantic. My reply is that the semantic view better accounts for these extrinsic features than the proposed non-semantic alternatives.
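The "simultaneous implementation" premise can be made concrete with a small sketch. The AND/OR dual-labelling gate below is a standard illustration in this debate rather than an example taken from the paper, and the identifiers (physical_gate, labeling_A, labeling_B) are invented for the sketch: described purely in terms of voltages, one and the same device counts as an AND gate under one mapping of voltage levels to bits and as an OR gate under the inverse mapping.

```python
from itertools import product

# A single physical gate, described purely in terms of voltages:
# it outputs HIGH exactly when both inputs are HIGH.
def physical_gate(v1, v2):
    return "HIGH" if v1 == "HIGH" and v2 == "HIGH" else "LOW"

# Two equally legitimate mappings from voltage levels to logical values.
labeling_A = {"LOW": 0, "HIGH": 1}   # reads the device as computing AND
labeling_B = {"LOW": 1, "HIGH": 0}   # reads the very same device as computing OR

def logical_table(labeling):
    """Truth table of the gate under a given voltage-to-bit labeling."""
    table = {}
    for v1, v2 in product(("LOW", "HIGH"), repeat=2):
        table[(labeling[v1], labeling[v2])] = labeling[physical_gate(v1, v2)]
    return table

AND = {(a, b): a & b for a, b in product((0, 1), repeat=2)}
OR = {(a, b): a | b for a, b in product((0, 1), repeat=2)}

assert logical_table(labeling_A) == AND
assert logical_table(labeling_B) == OR
print("Same physical behaviour, two computational identities: AND and OR.")
```

The intrinsic physical behaviour of the device does not by itself decide between the two computational identities, which is the fact the master argument exploits.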
2019
B. Jack Copeland and Oron Shagrir. 2019. The Church-Turing Thesis: Logical Limit or Breachable Barrier? Communications of the ACM, 62, 1, pp. 66–74. doi:10.1145/3198448. Abstract:
In its original form, the Church-Turing thesis concerned computation as Alan Turing and Alonzo Church used the term in 1936, namely human computation.
Lotem Elber-Dorozko and Oron Shagrir. 2019. Computation and Levels in the Cognitive and Neural Sciences. In Routledge Handbook of the Computational Mind, pp. 205–222.
Lotem Elber-Dorozko and Oron Shagrir. 2019. Integrating Computation into the Mechanistic Hierarchy in the Cognitive and Neural Sciences. Synthese. doi:10.1007/s11229-019-02230-9. Abstract:
It is generally accepted that, in the cognitive and neural sciences, there are both computational and mechanistic explanations. We ask how computational explanations can integrate into the mechanistic hierarchy. The problem stems from the fact that implementation and mechanistic relations have different forms. The implementation relation, from the states of an abstract computational system (e.g., an automaton) to the physical, implementing states, is a homomorphic mapping relation. The mechanistic relation, however, is that of part/whole; the explaining features in a mechanistic explanation are the components of the explanandum phenomenon and their causal organization. Moreover, each component in one level of mechanism is constituted and explained by components of an underlying level of mechanism. Hence, it seems, computational variables and functions cannot be mechanistically explained by the medium-dependent states and properties that implement them. How, then, do the computational and the implementational integrate to create the mechanistic hierarchy? After explicating the general problem (Sect. 2), we further demonstrate it through a concrete example, of reinforcement learning, in the cognitive and neural sciences (Sects. 3 and 4). We then examine two possible solutions (Sect. 5). On one solution, the mechanistic hierarchy embeds computational and implementational properties at the same levels. This picture fits with the view that computational explanations are mechanistic sketches. On the other solution, there are two separate hierarchies, one computational and another implementational, which are related by the implementation relation. This picture fits with the view that computational explanations are functional and autonomous explanations. It is less clear how these solutions fit with the view that computational explanations are full-fledged mechanistic explanations. Finally, we argue that both pictures are consistent with the reinforcement learning example, but that scientific practice does not align with the view that computational models are merely mechanistic sketches (Sect. 6).
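The difference between the two relations discussed in this abstract can be illustrated with a toy check (a minimal sketch with invented states, not an example from the paper). A mapping from physical micro-states to automaton states implements the automaton when it commutes with the two transition functions, i.e., when it is a homomorphism; nothing in that condition requires the physical states to be parts, or components, of the computational ones.

```python
from itertools import product

# Abstract automaton: states S0/S1, inputs 'a'/'b'.
auto_step = {
    ("S0", "a"): "S1", ("S0", "b"): "S0",
    ("S1", "a"): "S0", ("S1", "b"): "S1",
}

# A toy physical system with four (hypothetical) micro-states and its dynamics.
phys_step = {
    ("v_low_1", "a"): "v_high_1", ("v_low_1", "b"): "v_low_2",
    ("v_low_2", "a"): "v_high_2", ("v_low_2", "b"): "v_low_1",
    ("v_high_1", "a"): "v_low_1", ("v_high_1", "b"): "v_high_2",
    ("v_high_2", "a"): "v_low_2", ("v_high_2", "b"): "v_high_1",
}

# Candidate implementation mapping from micro-states to automaton states.
impl = {"v_low_1": "S0", "v_low_2": "S0", "v_high_1": "S1", "v_high_2": "S1"}

# Homomorphism check: mapping then stepping equals stepping then mapping.
for p, i in product(impl, ("a", "b")):
    assert impl[phys_step[(p, i)]] == auto_step[(impl[p], i)]
print("impl preserves the transition structure; it is not a part/whole relation.")
```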
Jens Harbecke and Oron Shagrir. 2019. The Role of the Environment in Computational Explanations. European Journal for Philosophy of Science, 9, 3. doi:10.1007/s13194-019-0263-7. Abstract:
The mechanistic view of computation contends that computational explanations are mechanistic explanations. Mechanists, however, disagree about the precise role that the environment – or the so-called “contextual level” – plays for computational (mechanistic) explanations. We advance here two claims: (i) Contextual factors essentially determine the computational identity of a computing system (computational externalism); this means that specifying the “intrinsic” mechanism is not sufficient to fix the computational identity of the system. (ii) It is not necessary to specify the causal-mechanistic interaction between the system and its context in order to offer a complete and adequate computational explanation. While the first claim has been discussed before, the second has been practically ignored. After supporting these claims, we discuss the implications of our contextualist view for the mechanistic view of computational explanation. Our aim is to show that some versions of the mechanistic view are consistent with the contextualist view, whilst others are not.
2018
Oron Shagrir. 2018. The Brain as an Input–Output Model of the World. Minds and Machines, 28, 1, pp. 53–75. doi:10.1007/s11023-017-9443-4. Abstract:
An underlying assumption in computational approaches in cognitive and brain sciences is that the nervous system is an input–output model of the world: Its input–output functions mirror certain relations in the target domains. I argue that the input–output modelling assumption plays distinct methodological and explanatory roles. Methodologically, input–output modelling serves to discover the computed function from environmental cues. Explanatorily, input–output modelling serves to account for the appropriateness of the computed function to the explanandum information-processing task. I compare very briefly the modelling explanation to mechanistic and optimality explanations, noting that in both cases the explanations can be seen as complementary rather than contrastive or competing.
Oron Shagrir and William Bechtel. 2018. Marr's Computational Level and Delineating Phenomena. In Explanation and Integration in Mind and Brain Science, pp. 190–214. Oxford University Press.
2016
Jack Copeland, Eli Dresner, Diane Proudfoot, and Oron Shagrir. 2016. Viewpoint: Time to Reinspect the Foundations? Communications of the ACM, 59, 11, pp. 34–36. doi:10.1145/2908733. Abstract:
The theoretical and philosophical work carried out in the 1930s laid the foundations for the computer revolution, and this revolution in turn fueled the fantastic expansion of scientific knowledge in the late 20th and early 21st centuries. Ideas devised at that time have become cornerstones of current science and technology. Much work has been devoted in recent years to analysis of the foundations and theoretical bounds of computing. However, the results of this diverse work, carried out by computer scientists, mathematicians, and philosophers, do not so far form a unified and coherent picture. It is time for the reexamination of the logico-mathematical foundations of computing to move center stage.
2015
William Bechtel and Oron Shagrir. 2015. The Non-Redundant Contributions of Marr's Three Levels of Analysis for Explaining Information-Processing Mechanisms. Topics in Cognitive Science, 7, 2, pp. 312–322. doi:10.1111/tops.12141. Abstract:
Are all three of Marr's levels needed? Should they be kept distinct? We argue for the distinct contributions and methodologies of each level of analysis. It is important to maintain them because they provide three different perspectives required to understand mechanisms, especially information-processing mechanisms. The computational perspective provides an understanding of how a mechanism functions in the broader environments that determine the computations it needs to perform (and may fail to perform). The representation and algorithmic perspective offers an understanding of how information about the environment is encoded within the mechanism and what patterns of organization enable the parts of the mechanism to produce the phenomenon. The implementation perspective yields an understanding of the neural details of the mechanism and how they constrain function and algorithms. Once we adequately characterize the distinct role of each level of analysis, it is fairly straightforward to see how they relate.
2014
My aim here is to show that an underlying assumption in computational approaches in cognitive and brain sciences is that the brain is a model of the world in the sense that it mirrors certain mathematical relations in the surrounding environment. I will give three examples here. One is from David Marr's computational-level theory of edge-detection. The second is the computational work on the oculomotor system. And the third is a Bayesian model of causal reasoning. One might wonder why this brain-as-a-model-of-the-world assumption is so prevalent in computational cognitive science and neuroscience. My proposed answer (for which I will not argue here) is that in these fields computation just means a dynamical process that models another domain. Thus saying that the brain computes just means that its processes model certain mathematical, or other high-order, relations in another domain, often the surrounding world.
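The oculomotor example mentioned here can be sketched in a few lines (a toy discrete-time sketch with made-up numbers, not a model from any of the cited work): the so-called neural integrator's input-output function, a running sum of eye-velocity commands, mirrors the mathematical relation in the target domain, namely that eye position is the time integral of eye velocity.

```python
import math

dt = 0.001                                    # time step in seconds (arbitrary)
ts = [k * dt for k in range(1000)]            # one second of simulated time
velocity = [20.0 * math.sin(2 * math.pi * t) for t in ts]   # made-up velocity trace (deg/s)

# Input-output function of the toy integrator: accumulate velocity * dt.
position, acc = [], 0.0
for v in velocity:
    acc += v * dt
    position.append(acc)

# The target-domain relation it mirrors: eye position is the integral of
# eye velocity. Compare against the analytic integral of the chosen signal.
analytic = [(20.0 / (2 * math.pi)) * (1 - math.cos(2 * math.pi * t)) for t in ts]
max_err = max(abs(p - a) for p, a in zip(position, analytic))
assert max_err < 0.2                          # only numerical discretization error
print(f"integrator output tracks the analytic integral (max error {max_err:.3f} deg)")
```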
Roy T. Cook. 2014. Computability: Turing, Gödel, Church, and Beyond. Philosophia Mathematica, 22, 3, pp. 412–413. doi:10.1093/philmat/nku016.
Gualtiero Piccinini and Oron Shagrir. 2014. Foundations of Computational Neuroscience. Current Opinion in Neurobiology, 25, pp. 25–30. doi:10.1016/j.conb.2013.10.005. Abstract:
Most computational neuroscientists assume that nervous systems compute and process information. We discuss foundational issues such as what we mean by 'computation' and 'information processing' in nervous systems; whether computation and information processing are matters of objective fact or of conventional, observer-dependent description; and how computational descriptions and explanations are related to other levels of analysis and organization.
Oron Shagrir. 2014. Kripke's Infinity Argument. In Naming, Necessity, and More, pp. 169–190. Palgrave Macmillan. doi:10.1057/9781137400932.
2013
B. Jack Copeland, Carl J. Posy, and Oron Shagrir (eds.). 2013. Computability: Turing, Gödel, Church, and Beyond. The MIT Press. Abstract:
In the 1930s a series of seminal works published by Alan Turing, Kurt Gödel, Alonzo Church, and others established the theoretical basis for computability. This work, advancing precise characterizations of effective, algorithmic computability, was the culmination of intensive investigations into the foundations of mathematics. In the decades since, the theory of computability has moved to the center of discussions in philosophy, computer science, and cognitive science. In this volume, distinguished computer scientists, mathematicians, logicians, and philosophers consider the conceptual foundations of computability in light of our modern understanding. Some chapters focus on the pioneering work by Turing, Gödel, and Church, including the Church-Turing thesis and Gödel's response to Church's and Turing's proposals. Other chapters cover more recent technical developments, including computability over the reals, Gödel's influence on mathematical logic and on recursion theory, and the impact of work by Turing and Emil Post on our theoretical understanding of online and interactive computing; and others relate computability and complexity to issues in the philosophy of mind, the philosophy of science, and the philosophy of mathematics. Contributors: Scott Aaronson, Dorit Aharonov, B. Jack Copeland, Martin Davis, Solomon Feferman, Saul Kripke, Carl J. Posy, Hilary Putnam, Oron Shagrir, Stewart Shapiro, Wilfried Sieg, Robert I. Soare, Umesh V. Vazirani.
Oron Shagrir. 2013. Concepts of Supervenience Revisited. Erkenntnis, 78, 2, pp. 469–485. doi:10.1007/s10670-012-9410-7. Abstract:
Over the last three decades a vast literature has been dedicated to supervenience. Much of it has focused on the analysis of different concepts of supervenience and their philosophical consequences. This paper has two objectives. One is to provide a short, up-to-date guide to the formal relations between the different concepts of supervenience. The other is to reassess the extent to which these concepts can establish metaphysical theses, especially about dependence. The conclusion is that strong global supervenience is the most advantageous notion of supervenience that we have.
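For reference, one standard formulation of strong global supervenience, the notion singled out in the conclusion, runs roughly as follows; this is a gloss on the weak/intermediate/strong taxonomy in the supervenience literature, not a quotation from the paper. Call a bijection between the domains of two worlds a B-isomorphism when it preserves all B-properties (and an A-isomorphism when it preserves all A-properties). Then:

```latex
\[
A \text{ strongly globally supervenes on } B
\;\iff\;
\forall w_1 \,\forall w_2 \,\forall f
\bigl( f \text{ is a } B\text{-isomorphism from } w_1 \text{ to } w_2
\;\rightarrow\; f \text{ is an } A\text{-isomorphism from } w_1 \text{ to } w_2 \bigr).
\]
```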