Publications | Oron Shagrir


2013
B. Jack Copeland and Oron Shagrir. 2013. Turing versus Gödel on Computability and the Mind. In Computability, pp. 1–34. The MIT Press.
2012
Oron Shagrir. 2012. Can a Brain Possess Two Minds? Journal of Cognitive Science, 13, 2, pp. 145–165. doi:10.17791/jcs.2012.13.2.145.
Oron Shagrir. 2012. Computation, Implementation, Cognition. Minds and Machines, 22, 2, pp. 137–148. doi:10.1007/s11023-012-9280-4. Abstract
Putnam (Representations and Reality. MIT Press, Cambridge, 1988) and Searle (The Rediscovery of the Mind. MIT Press, Cambridge, 1992) famously argue that almost every physical system implements every finite computation. This universal implementation claim, if correct, threatens to trivialize certain functional and computational views of the mind. Several authors have offered theories of implementation that allegedly avoid the pitfalls of universal implementation. My aim in this paper is to suggest that these theories are still consistent with a weaker result: the nomological possibility of systems that simultaneously implement different complex automata. Elsewhere (Shagrir in J Cogn Sci, 2012) I argue that this simultaneous implementation result challenges a computational sufficiency thesis (articulated by Chalmers in J Cogn Sci, 2012). My focus here is on theories of implementation. After presenting the basic simultaneous implementation construction, I argue that these theories do not avoid the simultaneous implementation result. The conclusion is that the idea that the implementation of the right kind of automaton suffices for the possession of a mind is dubious.
Oron Shagrir. 2012. Structural Representations and the Brain. British Journal for the Philosophy of Science, 63, 3, pp. 519–545. doi:10.1093/bjps/axr038. Abstract
In Representation Reconsidered, William Ramsey suggests that the notion of structural representation is posited by classical theories of cognition, but not by the 'newer accounts' (e.g. connectionist modeling). I challenge the assertion about the newer accounts. I argue that the newer accounts also posit structural representations; in fact, the notion plays a key theoretical role in the current computational approaches in cognitive neuroscience. The argument rests on a close examination of computational work on the oculomotor system.
Oron Shagrir. 2012. Supertasks Do Not Increase Computational Power. Natural Computing, 11, 1, pp. 51–58. doi:10.1007/s11047-011-9280-y. Abstract
It is generally assumed that supertasks increase computational power. It is argued, for example, that supertask machines can compute beyond the Turing limit, e.g., compute the halting function. We challenge this assumption. We do not deny that supertask machines can compute beyond the Turing limit. Our claim, rather, is that the (hyper) computational power of these machines is not related to supertasks, but to the "right kind" of computational structure.
Oron Shagrir. 2012. The Imitation Game. Odyssey (אודיסאה), 14, pp. 18–21. Abstract
Computational logic, cryptography, and artificial intelligence were the main fields in which Alan Turing worked. Marking the centenary of his birth, Prof. Oron Shagrir describes how Turing paved the way for the computing revolution in all three areas. (From the article)
2011
B. Jack Copeland and Oron Shagrir. 2011. Do Accelerating Turing Machines Compute the Uncomputable? Minds and Machines, 21, 2, pp. 221–239. doi:10.1007/s11023-011-9238-y. Abstract
Accelerating Turing machines have attracted much attention in the last decade or so. They have been described as "the work-horse of hypercomputation" (Potgieter and Rosinger 2010: 853). But do they really compute beyond the "Turing limit", e.g., compute the halting function? We argue that the answer depends on what you mean by an accelerating Turing machine, on what you mean by computation, and even on what you mean by a Turing machine. We show first that in the current literature the term "accelerating Turing machine" is used to refer to two very different species of accelerating machine, which we call end-stage-in and end-stage-out machines, respectively. We argue that end-stage-in accelerating machines are not Turing machines at all. We then present two differing conceptions of computation, the internal and the external, and introduce the notion of an epistemic embedding of a computation. We argue that no accelerating Turing machine computes the halting function in the internal sense. Finally, we distinguish between two very different conceptions of the Turing machine, the purist conception and the realist conception, and we argue that Turing himself was no subscriber to the purist conception. We conclude that under the realist conception, but not under the purist conception, an accelerating Turing machine is able to compute the halting function in the external sense. We adopt a relatively informal approach throughout, since we take the key issues to be philosophical rather than mathematical.
Oron Shagrir. 2011. Supervenience and Anomalism Are Compatible. Dialectica, 65, 2, pp. 241–266. doi:10.1111/j.1746-8361.2011.01268.x. Abstract
I explore a Davidsonian proposal for the reconciliation of two theses. One is the supervenience of the mental on the physical, the other is the anomalism of the mental. The gist of the proposal is that supervenience and anomalism are theses about interpretation. Starting with supervenience, the claim is that it should not be understood in terms of deeper metaphysical relations, but as a constraint on the relations between the applications of physical and mental predicates. Regarding anomalism, the claim is that psychophysical laws have to satisfy certain counterfactual cases, in which an interpreter evaluates her past attributions in the light of new pieces of evidence. The proposed reconciliation is that supervenience entails that an interpreter will always attribute the same mental predicates to two individuals with the same physical states. However, supervenience does not imply that an interpreter cannot revise her past attributions to the two individuals.
2010
Oron Shagrir. 2010. Brains as Analog-Model Computers. Studies in History and Philosophy of Science Part A, 41, 3, pp. 271–279. doi:10.1016/j.shpsa.2010.07.007. Abstract
Computational neuroscientists not only employ computer models and simulations in studying brain functions. They also view the modeled nervous system itself as computing. What does it mean to say that the brain computes? And what is the utility of the 'brain-as-computer' assumption in studying brain functions? In previous work, I have argued that a structural conception of computation is not adequate to address these questions. Here I outline an alternative conception of computation, which I call the analog-model. The term 'analog-model' does not mean continuous, non-discrete or non-digital. It means that the functional performance of the system simulates mathematical relations in some other system, between what is being represented. The brain-as-computer view is invoked to demonstrate that the internal cellular activity is appropriate for the pertinent information-processing (often cognitive) task.
Oron Shagrir. 2010. Computation, San Diego Style. Philosophy of Science, 77, 5, pp. 862–874. doi:10.1086/656553. Abstract
What does it mean to say that a physical system computes or, specifically, to say that the nervous system computes? One answer, endorsed here, is that computing is a sort of modeling. I trace this line of answer in the conceptual and philosophical work conducted over the last 3 decades by researchers associated with the University of California, San Diego (UCSD). The linkage between their work and the modeling notion is no coincidence: the modeling notion aims to account for the computational approach in neuroscience, and UCSD has been home to central studies in neurophilosophy, connectionism, and computational neuroscience.
Oron Shagrir. 2010. Marr on Computational-Level Theories. Philosophy of Science, 77, 4, pp. 477–500. doi:10.1086/656005. Abstract
According to Marr, a computational-level theory consists of two elements, the what and the why. This article highlights the distinct role of the Why element in the computational analysis of vision. Three theses are advanced: (a) that the Why element plays an explanatory role in computational-level theories, (b) that its goal is to explain why the computed function (specified by the What element) is appropriate for a given visual task, and (c) that the explanation consists in showing that the functional relations between the representing cells are similar to the "external" mathematical relations between the entities that these cells represent.
2009
Oron Shagrir. 2009. Anomalism and Supervenience: A Critical Survey. Canadian Journal of Philosophy, 39, 2, pp. 237–272. doi:10.1353/cjp.0.0047.
Oron Shagrir. 2009. Strong Global Supervenience Is Valuable. Erkenntnis, 71, 3, pp. 417–423. doi:10.1007/s10670-009-9187-5. Abstract
It is generally assumed that everything that can be said about dependence with the notion of strong global supervenience can also be said with the notion of strong supervenience. It is argued here, however, that strong global supervenience has a metaphysically distinctive role to play. It is shown that when the relevant sets include relations, strong global supervenience and strong supervenience are distinct. It is then concluded that there are claims about dependence of relations that can be made with the global notion of strong supervenience but not with the "local" (individual) one.
2007
B. Jack Copeland and Oron Shagrir. 2007. Physical Computation: How General Are Gandy's Principles for Mechanisms? Minds and Machines, 17, 2, pp. 217–231. doi:10.1007/s11023-007-9058-2. Abstract
What are the limits of physical computation? In his 'Church's Thesis and Principles for Mechanisms', Turing's student Robin Gandy proved that any machine satisfying four idealised physical 'principles' is equivalent to some Turing machine. Gandy's four principles in effect define a class of computing machines ('Gandy machines'). Our question is: What is the relationship of this class to the class of all (ideal) physical computing machines? Gandy himself suggests that the relationship is identity. We do not share this view. We will point to interesting examples of (ideal) physical machines that fall outside the class of Gandy machines and compute functions that are not Turing-machine computable.
2006
Oron Shagrir. 2006. Why We View the Brain as a Computer. Synthese, 153, 3, pp. 393–416. doi:10.1007/s11229-006-9099-8. Abstract
The view that the brain is a sort of computer has functioned as a theoretical guideline both in cognitive science and, more recently, in neuroscience. But since we can view every physical system as a computer, it has been less than clear what this view amounts to. By considering in some detail a seminal study in computational neuroscience, I first suggest that neuroscientists invoke the computational outlook to explain regularities that are formulated in terms of the information content of electrical signals. I then indicate why computational theories have explanatory force with respect to these regularities: in a nutshell, they underscore correspondence relations between formal/mathematical properties of the electrical signals and formal/mathematical properties of the represented objects. I finally link my proposal to the philosophical thesis that content plays an essential role in computational taxonomy.
2005
Oron Shagrir. 2005. The Rise and Fall of Computational Functionalism. In Hilary Putnam, pp. 220–250. Cambridge University Press. doi:10.1017/CBO9780511614187.009. Abstract
Hilary Putnam is the father of computational functionalism, a doctrine he developed in a series of papers beginning with “Minds and Machines” (1960) and culminating in “The Nature of Mental States” (1967b). Enormously influential ever since, it became the received view of the nature of mental states. In recent years, however, there has been growing dissatisfaction with computational functionalism. Putnam himself, having advanced powerful arguments against the very doctrine he had previously championed, is largely responsible for its demise. Today, he has little patience for either computational functionalism or its underlying philosophical agenda. Echoing despair of naturalism, he dismisses computational functionalism as a utopian enterprise. My aim in this essay is to present both Putnam's arguments for computational functionalism and his later critique of the position. In section 2, I examine the rise of computational functionalism. In section 3, I offer an account of its demise, arguing that it can be attributed to recognition of the gap between the computational-functional aspects of mentality and its intentional character. This recognition can be traced to two of Putnam's results: the familiar Twin-Earth argument, and the less familiar theorem that every ordinary physical system implements every finite automaton. I close with implications for cognitive science. Computational functionalism is the view that mental states and events - pains, beliefs, desires, thoughts and so forth - are computational states of the brain, and so are defined in terms of “computational parameters plus relations to biologically characterized inputs and outputs” (1988:7).
2004
Oron Shagrir. 2004. Super-Tasks, Accelerating Turing Machines and Uncomputability. Theoretical Computer Science, 317, 1-3, pp. 105–114. doi:10.1016/j.tcs.2003.12.007. Abstract
Accelerating Turing machines are devices with the same computational structure as Turing machines (TM), but able to perform super-tasks. We ask whether performing super-tasks alone produces more computational power; for example, whether accelerating TM can solve the halting problem. We conclude that this is not the case: no accelerating TM solves the halting problem. The argument rests on an analysis of the reasoning that leads to Thomson's paradox. The key point is that the paradox rests on a conflation of different perspectives on accelerating processes, and we argue that the same conflation underlies the claim that accelerating TM can solve the halting problem.
2003
Oron Shagrir and Itamar Pitowsky. 2003. Physical Hypercomputation and the Church-Turing Thesis. Minds and Machines, 13, 1, pp. 87–101. doi:10.1023/A:1021365222692. Abstract
We describe a possible physical device that computes a function that cannot be computed by a Turing machine. The device is physical in the sense that it is compatible with General Relativity. We discuss some objections, focusing on those which deny that the device is either a computer or computes a function that is not Turing computable. Finally, we argue that the existence of the device does not refute the Church-Turing thesis, but nevertheless may be a counterexample to Gandy's thesis.
2002
Oron Shagrir. 2002. Effective Computation by Humans and Machines. Minds and Machines, 12, 2, pp. 221–240. doi:10.1023/A:1015694932257. Abstract
There is an intensive discussion nowadays about the meaning of effective computability, with implications for the status and provability of the Church-Turing Thesis (CTT). I begin by reviewing what has become the dominant account of the way Turing and Church viewed, in 1936, effective computability. According to this account, to which I refer as the Gandy-Sieg account, Turing and Church aimed to characterize the functions that can be computed by a human computer. In addition, Turing provided a highly convincing argument for CTT by analyzing the processes carried out by a human computer. I then contend that if the Gandy-Sieg account is correct, then the notion of effective computability has changed after 1936. Today computer scientists view effective computability in terms of finite machine computation. My contention is supported by the current formulations of CTT, which always refer to machine computation, and by the current argumentation for CTT, which is different from the main arguments advanced by Turing and Church. I finally turn to discuss Robin Gandy's characterization of machine computation. I suggest that there is an ambiguity regarding the types of machines Gandy was postulating. I offer three interpretations, which differ in their scope and limitations, and conclude that none provides the basis for claiming that Gandy characterized finite machine computation.