SotA Anthology 2021-22 | Page 162

collection of Chinese symbols and a book of rules on how to manipulate those symbols. Questions (in Chinese) are passed into the room and appropriate responses are sent out. So, although the person does not understand Chinese, the responses that they are able to send out by following the instructions make it appear as if they do understand Chinese (Searle, 1980). This thought experiment is an argument against Strong AI, as it demonstrates that machines are not like human minds because they do not possess intentionality: they can process information, but they do not understand what they are processing. This inability to understand suggests that they have no consciousness; and without consciousness there is no ability to think and understand. Searle himself said, "Such intentionality as computers appear to have is solely in the minds of those who interpret the output" (1980, p. 422). Searle's point is valid: if machines lack intentionality, they cannot be conscious, and so they cannot think.
A reply to Searle and the Chinese Room thought experiment is 'The Systems Reply'. This objection posits that although the person in the room may not understand Chinese, they are only one part of the system, and the system as a whole does in fact understand Chinese (Searle, 1980). In other words, whole objects can do things that their parts cannot. For example, I understand English, but my little finger does not. The room as a whole (of which Searle is just a part) understands Chinese. I find this argument unsatisfying because it is quite counterintuitive. I would respond to this objection by arguing that, in order to understand something in this way, consciousness is necessary.
Searle himself had a response to the Systems Reply. He asks us to internalise all the elements of the system: suppose he memorises all the rules. Now he is the whole system, yet he still does not understand Chinese (Mandik, 2014). I agree with this response, as it shows that a machine is still not capable of thinking. However, an opponent could argue that the claim that the whole system is not intelligent is convincing only if we interpret it as the claim that there is no conscious understanding of Chinese; perhaps the room is nonetheless intelligent. Although I agree with what Searle was arguing, I find this objection to have some significance. I would concede that, on the definition I have been using, the room is intelligent, because it produces an appropriate response to the situation. But the all-important distinction is that the room is not conscious and therefore cannot think, despite being intelligent.
In essence, no matter how advanced machines become, they will never be able to truly 'think'. Even if a computer passes the Turing Test, being functionally equivalent to a human with respect to linguistic behaviour is not enough to be conscious and therefore to have the ability to think. In this essay I have argued that there is no doubt that machines are intelligent on the definition used throughout; however, intelligence is separate from consciousness and does not entail it.
BIBLIOGRAPHY
Hauser, L., n.d. Artificial Intelligence | Internet Encyclopedia of Philosophy. [online] iep.utm.edu. Available at: https://iep.utm.edu/art-inte/ [Accessed 5 November 2021].
Mandik, P., 2014. This Is Philosophy of Mind (Chichester: Wiley Blackwell), pp. 93-106.
Searle, J., 1980. Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), pp. 417-424.