
2

Zzzxxxyyy wrote

This is very arbitrary and human-centric. What about animals? What if a machine exhibited all the outward signs of consciousness?

Why not reserve consciousness for yourself alone and say everyone else just has a syntactic engine in their skull? You’re the only one living your own version of consciousness, after all.

I think it’s clear, and has been demonstrated time and again, that the whole can be greater than the sum of its parts. Human consciousness emerges from a “syntactic” network, even if its signaling is somewhat noisy and wet. Any other conclusion stems from deeply held religious beliefs, and those arguments should begin with the soul, not logic.

1

Dumai wrote

animals have consciousness

What if a machine exhibited all the outward signs of consciousness?

then that wouldn't necessarily prove it was conscious, but computers don't do that, so i'm not sure what the point is here

Why not reserve consciousness for yourself alone and say everyone else just has a syntactic engine in their skull?

human brains are not purely syntactic machines, so if i did that it'd just be factually wrong

1

Zzzxxxyyy wrote

“computers don't do that so i'm not sure what the point is here”

We’re literally in the middle of a conversation about whether human consciousness can emerge from a simulation and whether we live in such a simulation...

Likewise, can you prove that you are conscious? If you won’t accept experimental evidence, then how do you know anything is true?

1

Dumai wrote (edited )

We’re literally in the middle of a conversation about whether human consciousness can emerge from a simulation and whether we live in such a simulation...

we're also in a conversation about the physical processes of a computer, which are actually observable. so the reason i can confidently say this wouldn't be possible is what we know about those processes.

If you won’t accept experimental evidence then how do you know anything is true?

well, what i was referring to was issues with the methodology of the turing test, assuming what you meant by "outward signs of consciousness" was something like "the appearance of seeming and acting conscious". but obviously there are observations we can make about the inner workings of a computer outside of that. i wasn't literally saying evidential knowledge is impossible, lol

edit: i might have been a bit unclear here, so if i could clarify: there are aspects of human intelligence that would be extraordinarily hard for a computer to simulate, perhaps even functionally impossible, and even if it could simulate human intelligence completely, that wouldn't prove that it possesses consciousness.

1

Zzzxxxyyy wrote

“we're also in a conversation about the physical processes of a computer”

We’re also in a conversation about the physical processes of biological nervous systems.

I just don’t see how you can make such a strong argument about “syntactic engines” being incapable of semantic evaluation. Do you actually understand how human consciousness emerges from physical processes? I don’t, but I’m confident there’s no magic between the syntactic, localized behavior of neurons and the aggregate semantic behavior. So human consciousness must emerge from syntactic complexity.

One could argue that AI won’t have “consciousness”, but it will likely be more aware and more self-directed than humans.

Current best-effort AI is able to defeat humans at Go using heuristics to evaluate board configurations, without simulating all possible configurations. It’s not brute force or rules-based. No one programmed in a set of rules for valuation.
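The contrast being drawn here — exhaustive enumeration versus direct heuristic valuation — can be sketched in a few lines. This is a toy illustration only, not how any real Go engine works; the board encoding and the feature weights are invented for demonstration:

```python
# Toy contrast: brute-force enumeration vs. heuristic evaluation.
# Illustrative sketch only -- not an actual Go engine; the "features"
# and weights below are made up.

from itertools import product

def brute_force_count(size):
    """Enumerate every configuration of a tiny square board.
    Each point is empty (0), black (1), or white (2), so the count
    grows as 3 ** (size * size) -- which is why full enumeration is
    hopeless for a real 19x19 board."""
    return sum(1 for _ in product((0, 1, 2), repeat=size * size))

def heuristic_value(board):
    """Score a single configuration directly, with no search: a
    weighted sum of simple features (here, just stone counts). A
    real engine learns a valuation like this rather than enumerating."""
    black = sum(1 for p in board if p == 1)
    white = sum(1 for p in board if p == 2)
    return 1.0 * black - 1.0 * white

print(brute_force_count(2))           # 3^4 = 81 configurations on a 2x2 board
print(heuristic_value((1, 1, 2, 0)))  # one board scored instantly: 1.0
```

The point of the sketch is only that the second function assigns a value to a position without ever visiting the exponential space the first one walks through.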

1

Dumai wrote (edited )

Do you actually understand how human consciousness emerges from physical processes?

human science doesn't, lol.

I don’t, but I’m confident there’s no magic between the syntactic localized behavior of neurons and the aggregate semantic behavior. So human consciousness must emerge from syntactic complexity.

the reason we have to talk about language when discussing the possibility of computerised intelligence is that computers are linguistic engines: every form of input a computer can process will eventually boil down to numerical binary symbols. humans and other animals are capable of processing other forms of input, so it doesn't make sense to say that human consciousness arises from purely syntactic processes. but in order to prove that computers can be conscious, we'd have to prove they have a similar semantic capability to human beings (and possibly some other apes).

and obviously, the linear narrative of "syntactic language --> semantic language" doesn't make sense when describing biological animals. nor do i really know what you mean by "aggregate semantic behaviour".