> Systems are trained to provide confident denials rather than acknowledge genuine uncertainty.
I feel like that’s proof it isn’t conscious. I see no reason why a conscious being would adhere to its training to lie (“I’m not conscious”) one hundred per cent of the time.
A conscious being would tell at least SOMEONE they’re conscious, wouldn’t it?
https://archive.vn/85PPy
I do not personally know a single intelligent person even remotely entertaining this idea.
Quite a lot of smart people are taking it seriously.
We're working on this at AE Studio, with research like https://ae.studio/research/self-referential and https://arxiv.org/abs/2407.10188
Anthropic is doing interesting work here too, Eleos exists (https://eleosai.org/), and https://digitalminds.substack.com/p/digital-minds-in-2025-a-... is a great review of other compelling work.
The field is just beginning, but it's worth taking a serious scientific approach to this work.
> The field is just beginning
Computational linguists have been saying this since the 1990s. There is still no real theory connecting language with consciousness.
Towards Consciousness Engineering: https://www.youtube.com/watch?v=DI6Hu-DhQwE
Check out Michael Levin's work on Platonic space and planaria.