
5

Cheeks wrote

That's some deep thought shit! There is the ethical dilemma, of sorts, with others not knowing it's a simulation. I would imagine that early philosophy has touched on this. Personally, I'm with u/dumai on this, overclock it and get the suffering over with asap! If it is a simulation, then is it actual suffering?

10

yaaqov wrote

If it is a simulation, then is it actual suffering?

Absolutely, yes.

3

Dumai wrote (edited )

debatable

the human mind is semantic, whereas computers are syntactic engines -- they can manipulate and organise signs and symbols but they don't assign conscious meaning to them

i seriously doubt artificial consciousness is possible with computer technology

3

surreal wrote

ah the old Soul that travels to the Aeons out of Space and Time.

what about quantum computer technology? machine-learn that machine to know its own existence and deduce stuff from this 'realization'. aren't humans biological computers that just realize that?

whether quantum phenomena affect our biological PCs is debatable. but the piece of software that is consciousness can run on any hardware, whether it is human, mechanical, or an alien made out of gas.

2

Dumai wrote

aren't humans biological computers

i mean... no, for reasons i just explained

but the piece of software that is consciousness can run on any hardware, whether it is human, mechanical, or an alien made out of gas.

can you explain to me how this isn't mind/body dualism because it sure as hell sounds like mind/body dualism

1

surreal wrote (edited )

i mean... no, for reasons i just explained

where?

edit: if you mean the article, i will get on it

dualism doesn't mean that the one can exist without the other

do animals have consciousness?

2

Dumai wrote

no, i mean, the important difference between a human brain and a computer is that a computer is entirely syntactic, which doesn't change no matter how complex it gets or how much input you give it

4

ziq wrote

What evidence do you have that we're not operating entirely based on rigid predetermined code?

2

Dumai wrote

if we're machines, then we're patently different machines to computers. this isn't an argument about the possibility of artificial intelligence outright, this is an argument about whether artificial consciousness could exist within a computer simulation. on the first question: science doesn't know. on the second question: probably not.

2

surreal wrote

even so, why does this prevent the existence of consciousness. in eastern philosophy even rocks have it. do you believe that animals have consciousness?

2

Dumai wrote

even so, why does this prevent the existence of consciousness.

because it means a computer has no actual experience of any sign it encounters, it just has some pre-set rules for organising symbols

do you believe that animals have consciousness

yes

1

ziq wrote (edited )

it just has some pre-set rules for organising symbols

That's basically what humans do. Our behaviours are completely predictable, as if our responses to stimuli are hard-coded into us. Experience X + Stimulus A = Reaction XA.
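
A minimal sketch of the hard-coded picture described here (the experiences, stimuli, and reactions are invented for illustration): behaviour as a fixed lookup from prior experience plus stimulus to reaction, with no step that interprets anything.

```python
# Illustrative sketch of the "Experience X + Stimulus A = Reaction XA" picture.
# All entries are hypothetical; the point is only that the mapping is fixed.
REACTIONS = {
    ("burned by a stove", "sees a glowing burner"): "pulls hand away",
    ("bitten by a dog", "hears barking"): "crosses the street",
}

def react(experience: str, stimulus: str) -> str:
    # Deterministic: the same experience and stimulus always yield the same reaction.
    return REACTIONS.get((experience, stimulus), "no learned reaction")

print(react("burned by a stove", "sees a glowing burner"))  # pulls hand away
```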

2

Dumai wrote

i mean hard determinism is kind of not a very respectable position but that's not the point

the point is a computer can only organise and manipulate linguistic symbols but it has no semantic awareness of these symbols

1

ziq wrote

not a very respectable position

Really dumai? I'm not some philosophy grad, do you have to be so elitist? Idk wtf hard determinism is but it sounds like insular thinking along the lines of "a bunch of distinguished gentleman philosophers I've studied would scoff at your thoughts so they're not worth my time". You often use very condescending language to communicate and it almost feels classist.

1

Dumai wrote (edited )

i'm not a philosophy grad either

i just know that the position you described is called "hard determinism" and that i don't think it's very well-considered

i'm not speaking for anybody but myself here

2

Zzzxxxyyy wrote

This is very arbitrary and human-centric. What about animals? What if a machine exhibited all the outward signs of consciousness?

Why not reserve consciousness to only yourself and everyone else just has a syntactic engine in their skull? You’re the only one living your own version of consciousness after all.

I think it’s clear, and has been demonstrated time and again, that the whole can be greater than the sum. Human consciousness emerges from a “syntactic” network, even if its signaling is somewhat noisy and wet. Any other conclusion stems from deeply held religious beliefs, and those arguments should begin with the soul, not logic.

1

Dumai wrote

animals have consciousness

What if a machine exhibited all the outward signs of consciousness?

then that wouldn't prove that it was conscious necessarily, but computers don't do that so i'm not sure what the point is here

Why not reserve consciousness to only yourself and everyone else just has a syntactic engine in their skull?

human brains are not purely syntactic machines so if i did that it'd just be factually wrong

1

Zzzxxxyyy wrote

“computers don't do that so i'm not sure what the point is here”

We’re literally in the middle of a conversation about whether human consciousness can emerge from a simulation and whether we live in such a simulation...

Likewise, can you prove that you are conscious? If you won’t accept experimental evidence then how do you know anything is true?

1

Dumai wrote (edited )

We’re literally in the middle of a conversation about whether human consciousness can emerge from a simulation and whether we live in such a simulation...

we're also in a conversation about the physical processes of a computer, which are actually observable. so the reason i can confidently say this wouldn't be possible is because of what we know about these processes.

If you won’t accept experimental evidence then how do you know anything is true?

well what i was referring to was issues with the methodology of the turing test, assuming what you meant by "outward signs of consciousness" was something like, "the appearance of seeming and acting conscious". but obviously there are observations we can make about the inner workings of a computer outside of that, i wasn't literally saying evidential knowledge is impossible, lol

edit: i might have been a bit unclear here, so if i could clarify: there are aspects of human intelligence that would be extraordinarily hard for a computer to simulate, perhaps even functionally impossible, and even if it could simulate human intelligence completely, that wouldn't prove that it possesses consciousness.

1

Zzzxxxyyy wrote

“we're also in a conversation about the physical processes of a computer”

We’re also in a conversation about the physical processes of biological nervous systems.

I just don’t see how you make such a strong argument about “syntactic engines” being incapable of semantic evaluation. Do you actually understand how human consciousness emerges from physical processes? I don’t, but I’m confident there’s no magic between the syntactic localized behavior of neurons and the aggregate semantic behavior. So human consciousness must emerge from syntactic complexity.

One could argue that AI won’t have “consciousness”, but it will likely be more aware and more self-directed than humans.

Current best-effort AI is able to defeat humans at Go, using heuristics for board configurations, without simulating all possible configurations. It’s not brute force or rules-based. No one programmed in a set of rules for valuation.

1

Dumai wrote (edited )

Do you actually understand how human consciousness emerges from physical processes?

human science doesn't, lol.

I don’t, but I’m confident there’s no magic between the syntactic localized behavior of neurons and the aggregate semantic behavior. So human consciousness must emerge from syntactic complexity.

the reason we have to talk about language when discussing the possibility of computerised intelligence is because computers are linguistic engines; every form of input a computer can process will eventually boil down to numerical binary symbols. humans, and other animals, are capable of processing other forms of input, so it doesn't make sense to say that human consciousness arises from purely syntactic processes. but in order to prove that computers can be conscious, we'd have to prove they have a similar semantic capability to human beings (and possibly some other apes).

and obviously, the linear narrative of "syntactic language --> semantic language" doesn't make sense when describing biological animals. nor do i really know what you mean by "aggregate semantic behaviour".
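
To illustrate just the narrow point above about computer input (a sketch, not a claim about minds): whatever a program is given, the text of this thread included, reaches the machine as strings of binary digits that get manipulated with no accompanying experience of what they stand for.

```python
# Sketch: any input a computer handles is, at bottom, a sequence of bits.
text = "is it actual suffering?"
bits = " ".join(f"{byte:08b}" for byte in text.encode("utf-8"))
print(bits)  # to the machine these are just symbols to shuffle, e.g. 01101001 01110011 ...
```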

1

yaaqov wrote (edited )

Can you expand on this? Specifically, I don't think I understand what you mean by "conscious" meaning; the expressions operated over by computers have a semantics. The operations themselves have a semantics.

1

Dumai wrote

if you want this explained in more detail than i could give you i'd read this article (sorry to cite searle in 2018, i know he's a piece of shit but on this particular subject i do agree with him)

1

yaaqov wrote (edited )

Ooh well that makes sense that we'd differ here; I've never been convinced by the Chinese room argument. It's my position that the room (taken as a whole, including the operator inside the room, but not the operator individually) does (or can, at least) know language.

1

Dumai wrote (edited )

the room can't know anything, unless you think this exercise has somehow given it the ability to perceive and comprehend language which... is a claim that would require some extraordinary evidence, lol.

the only part of the room that can do that is the operator, but they don't understand chinese. they are, however, still capable of taking part in a process to translate input into chinese according to pre-set syntactic rules. you see the point?
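
A toy version of the rule-following setup described above (the rulebook entries are invented for illustration): the code pairs inputs with Chinese output purely by matching symbols against pre-set rules, and nothing in the process understands either language.

```python
# Toy "Chinese room": a made-up rulebook mapping English input to Chinese output.
# Like Searle's operator, the lookup only matches symbols; it understands neither language.
RULEBOOK = {
    "how are you?": "你好吗？",
    "thank you": "谢谢",
}

def operate(message: str) -> str:
    # Apply the pre-set syntactic rule; emit a fixed token when no rule matches.
    return RULEBOOK.get(message.lower().strip(), "听不懂")

print(operate("Thank you"))  # 谢谢
```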

1

yaaqov wrote

What evidence is there that we perceive and comprehend language?

Taking another tack, syntactic rules (not exactly "pre-set", but acquired) are precisely what make up any speaker's syntactic competence. Of course a room can know a language, even if its parts don't! No individual subpart of my brain knows English, but I do.

1

Dumai wrote

did you really just ask me to provide evidence that humans are capable of perceiving language?

i'm gonna be real here i'm not sure how to begin with that one

how would you even be able to read my evidence if you don't think humans can perceive/comprehend linguistic information????

1

yaaqov wrote (edited )

Well, wouldn't the type of evidence that leads us to believe that humans have linguistic capacity be of the same type that leads us to believe that a non-human has linguistic capacity? It seems that Searle holds his language room to a different standard than he would a human speaker.

In fact, doesn't his view require a type of mind/body dualism in and of itself? Doesn't Searle believe that philosophical zombies (which I understand to be something that extensionally acts exactly like a human but does not have consciousness) could logically exist? Isn't that itself dualistic?

I don't intend these questions to be rhetorical. I'm a total beginner in this territory.