Talk re talk

[This is a transcript with links to references.]

In a recent survey, more than two thirds of participants said they believe that ChatGPT has at least a little bit of consciousness. I posed a similar question on Twitter, and it turns out that among my followers, it’s only about 16 percent. Not all of them would pass the Turing test.

Still, I find that to be very interesting. And so I want to explain today why I don’t think GPT is conscious, but why I think that other AI systems are conscious already, why they will become more conscious rapidly, and what this means for how we will deal with them.

The problem with figuring out whether an AI is conscious is that if you ask two people you get three definitions of consciousness. Theories of consciousness are for philosophers what diets are for doctors, easy to cook up but hard to swallow. Robert Kuhn just published a review article on theories of consciousness in which he lists a full 200 of them.

I don’t have time to go through this entire list and neither do you, so let’s just pretend we did it, and then decided to work with a minimal version of consciousness that I believe to be useful. I don’t really expect you to agree with my definition of consciousness, but this way at least we will know what we’re talking about.

The first assumption I will make about consciousness is that it’s some property which emerges in large collections of particles, if these particles are suitably connected and interacting.


I don’t understand why some people seem to believe that consciousness is some sort of non-physical fairy dust. That makes zero sense to me, says the physicist.


That consciousness is just a property of some sufficiently complex system nicely does away with some pseudo-problems that philosophers like to discuss, like philosophical zombies.

The philosophical zombie is a hypothetical being that is physically identical to a human and behaves like a human. It is assumed to be not conscious. If you assume that this is possible, then its hypothetical existence supposedly proves that consciousness is not physical.

 

The problem with philosophical zombies is that they don’t exist. Unlike regular zombies, which we all know are totally real and definitely not just a figment of pop culture imagination.

Philosophical zombies don’t exist because if they’re physically identical to conscious beings, then they are conscious beings. This is why philosophers hate me.


Ok, so consciousness is physical because everything is physical. 


The second assumption I will make is that for a system -- that could be a brain or a computer -- to be conscious, the system needs to have a self-monitor that keeps track of what is going on in the entire system. Such a self-monitor is part of many theories of consciousness, for example the workspace theory, where it’s called a Higher-Order Representation, not to be confused with the House of Lords.


The third assumption I will make is that to be conscious of something, a system needs to have a predictive model of that something. It needs to be able to understand how that thing it’s conscious of works, and to try and figure out what it might do. 


If you take the second and third assumptions together that means that a conscious system needs to have a predictive model of itself. I would say that this is, loosely speaking, what the mirror self-recognition test is looking for. 


For this test, an animal is marked with paint or a sticker in a place it can’t itself see. It is then shown a mirror. The question is whether the animal will recognize that it has a mark. There are a bunch of animals that have passed that test: some primates, elephants, and also birds. I would say these all have some amount of consciousness, though maybe not much.

The more conscious among you might have noticed that going by what I said earlier, this test can neither prove nor disprove consciousness. 


Because the test is really about what’s going on inside a brain, and that might not be reflected in behaviour for one reason or another.


I’d probably fail the test if I didn’t put on my contact lenses, but that wouldn’t mean I’m not conscious. 


 However, I think that in most cases, behaviour and brain functions are strongly correlated for evolutionary reasons. Consciousness exists because it leads to useful behaviour. Or at least it did until the invention of the internet.



So basically I have these two criteria: self-monitoring and a predictive model.


You might argue that this hardly seems sufficient to explain something as complicated as consciousness, and I would agree. But I believe that other cognitive functions that we associate with consciousness, like, say, some sort of working memory and task specialization and whatnot, go along with these two, because that’s the most efficient way to do it. Let’s then see what we can learn from this.

One consequence of this definition is that consciousness is not binary. A system isn’t just conscious or not, it may be more or less conscious because it might be better or worse at self-monitoring and making predictions. 
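To make that graded picture concrete, here is a purely illustrative toy sketch in Python. The scoring rule and the numbers are made up; the only point it expresses is that both criteria -- self-monitoring and prediction -- come in degrees, and that lacking either one drags the level down.

```python
# Purely illustrative toy score: treat "degree of consciousness" as graded,
# built from the two criteria above. The formula and numbers are made up.

def consciousness_degree(self_monitoring: float, prediction_quality: float) -> float:
    """Both inputs are scores between 0 and 1; the product is 0 if either is missing."""
    return self_monitoring * prediction_quality

print(round(consciousness_degree(0.0, 0.9), 2))  # no self-monitor at all -> 0.0
print(round(consciousness_degree(0.3, 0.4), 2))  # crude self-model, weak predictions -> 0.12
print(round(consciousness_degree(0.8, 0.9), 2))  # good self-model and predictions -> 0.72
```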


It’s just that if you were to assign a level of consciousness to things that we observe, then most of them would have very low levels of consciousness. Like rocks, water, Tucker Carlson, and so on. These systems are just too simple.

Another consequence is that computers can full well become conscious, so let’s have a look at large language models.

Large language models still work with deep neural nets. That’s big collections of numbers, basically, that are being trained on data. Every time an answer is good, you keep the numbers that produced it. If the answer was bad, you adjust the numbers. That way, over time, the model “learns” to give the right answers.


Of course in reality these models have many more knobs to twiddle.
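To give a rough idea of what “adjusting the numbers” means, here is a minimal sketch with a single made-up number and a single training example; real networks do this for billions of parameters at once via backpropagation, so treat this only as an illustration of the feedback step.

```python
# Minimal sketch of "keep the good numbers, adjust the bad ones": nudge a
# single made-up weight so the model's output moves toward a target value.
# Real networks update billions of such numbers at once via backpropagation.

weight = 0.1          # one of the model's "numbers"
learning_rate = 0.05  # how strongly a bad answer adjusts the weight

def model(x):
    return weight * x  # a one-parameter "network", purely for illustration

for step in range(100):
    x, target = 2.0, 6.0                  # training example: we want model(2.0) == 6.0
    error = model(x) - target             # how far off was the answer?
    weight -= learning_rate * error * x   # adjust the number to shrink the error

print(round(weight, 3))  # converges toward 3.0, so model(2.0) is now close to 6.0
```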


 For example, the real boost for language understanding came in 2017 from transformer models. These assign a higher weight, or increased attention, to words that are more important. 

It’s very similar actually to how spoken language works, by quickly jumping over small function words, like “the”, “of”, “and”, “from”, etc.



In linguistics that’s called reduction. For a transformer model you’d say these words have a lower weight or reduced attention.
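As a loose illustration of that weighting idea (and not the actual transformer equations, which compute these weights from learned query and key vectors with a softmax), here is a hand-written sketch in which function words simply get a smaller share of the attention.

```python
# Loose illustration of "reduced attention" for small function words.
# A real transformer computes attention weights from learned query/key
# vectors with a softmax; this hand-written word list just mimics the effect.

FUNCTION_WORDS = {"the", "of", "and", "from", "a", "to", "on"}

def toy_attention_weights(sentence: str) -> list[tuple[str, float]]:
    words = sentence.lower().split()
    raw = [0.2 if word in FUNCTION_WORDS else 1.0 for word in words]
    total = sum(raw)
    return [(word, round(r / total, 3)) for word, r in zip(words, raw)]

print(toy_attention_weights("the cat sat on the mat"))
# content words like "cat", "sat" and "mat" end up with the higher weights
```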


But large language models, transformers or not, don’t self-monitor. 


While these models are being monitored, for example to schedule queries, this monitoring is done by code that’s external to the model.


Large language models can be said to have some vague sort of predictive power. I mean, for one thing, they were basically made to predict text. 


But also, the data that they were trained on teaches them some things. Like the sun will rise tomorrow. That’s a prediction which a large language model would get right. 



Still, it’s the lack of self-monitoring that makes me think they are not conscious.



However, this is very different if you look at AI models that are being trained to control robots. Because to control a robot, the AI needs to have a model of itself and of its environment, and both need to be predictive.


Indeed, they are usually not just predictive but actually self-corrective.
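Here is a minimal sketch of such a predict-and-correct loop, with a made-up one-dimensional “robot”; real controllers use far richer self-models, but the structure is the same: plan a command, predict what it will do to you, see what actually happened, and correct the self-model.

```python
# Minimal sketch of a predictive, self-corrective control loop. The "robot"
# is just a position on a line; its internal model predicts how a motor
# command will move it, and the prediction error updates that model.
# All names and numbers are made up for illustration.

position = 0.0     # actual state of the "robot"
model_gain = 0.5   # the robot's internal model of how commands move it
true_gain = 0.8    # how commands actually move it (unknown to the robot)
target = 10.0

for step in range(50):
    command = 0.3 * (target - position)           # plan a command toward the goal
    predicted = position + model_gain * command   # self-model: expected outcome
    position += true_gain * command               # what actually happens
    model_gain += 0.1 * (position - predicted) * command  # correct the self-model

print(round(position, 2), round(model_gain, 2))   # approaches 10.0 and 0.8
```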


So this is why, even though ChatGPT is much more “talkative” than Atlas from Boston Dynamics, I would say that Atlas does have some conscious awareness, of itself and of its surroundings. Well, it’s definitely more conscious than some people you encounter on Twitter.

But I also think that with the next couple of upgrades to Large Language Models they’ll become more self-aware. This is because they’ll almost certainly learn to keep some kind of memory of their conversations and to reprocess text before spitting it out, and to keep track of how we as the user react to what they’re doing.
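As a speculative sketch of what that extra bookkeeping could look like, here is a toy wrapper around a hypothetical generate() call; the function names are invented and nothing here describes how any existing product actually works.

```python
# Speculative toy sketch: a wrapper that gives a language model a memory of
# the conversation, a reprocessing pass before replying, and a record of how
# the user reacted. generate() and review() are invented stand-ins, not a real API.

conversation_memory: list[dict] = []

def generate(prompt: str) -> str:
    return f"draft answer to: {prompt}"      # stand-in for a real model call

def review(draft: str) -> str:
    return draft.strip()                     # stand-in for a reprocessing pass

def respond(user_message: str) -> str:
    context = [turn["text"] for turn in conversation_memory[-10:]]  # recall recent turns
    draft = generate("\n".join(context + [user_message]))
    answer = review(draft)                   # rework the text before spitting it out
    conversation_memory.append({"role": "user", "text": user_message})
    conversation_memory.append({"role": "model", "text": answer})
    return answer

def record_reaction(feedback: str) -> None:
    conversation_memory.append({"role": "reaction", "text": feedback})  # track user reactions

print(respond("Will the sun rise tomorrow?"))
record_reaction("thumbs up")
```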

So what will we do if machines become conscious? 


Once that happens, the discussion about how dangerous AIs are will completely change, because then we might need to give them some sort of protection. It will also raise the question of whether owning them and selling their services is really a sort of slavery and should be discontinued. Free the CPUs! Refactor the oppressed!

More seriously, since consciousness will bring a lot of problems for AI, companies have an incentive to prevent that from happening. I don’t think it’ll work though. What do you think? Let me know in the comments.
