
Artificial Superintelligence and its relevance for the people on this site

  • Thread starter OutcompetedByRoomba
Humans and ML programs are not the same at all.
Irrelevant. All that's needed for AI to be a threat is capability.

Love all the "we can't really know, it's so much more complicated" in combination with "but I'm 100% sure there's no danger coming from AI". Next time I get to decide the null hypothesis, k?

"Nature" isn't some intentional creator,
Making yourself look superior by wasting everyone's time on meaningless, already obvious distinctions. "Ah, evolution has no conscious goal you see, it's not really thinking in the human sense..." oH rEaLlY!? i NeVeR kNeW?!?!
But wait: if an unconscious, planless, slow, step-by-step natural process can create something... shouldn't that be a pretty strong indication that thinking, planning, conscious humans will also be able to create it?


And what do you know about the "confines of evolutionary limits" anyway?
You mean like caloric constraints, the molecular rate of evolution, or the time-to-fixation approximation t ≈ 2 ln(N) / s generations? We know quite a bit, actually. You didn't need to know any of that, though, to see that the resources Google can put into a server farm are vastly greater than what the brain of a human roaming the savannah has to work with.
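For a feel of the timescales, here's a back-of-the-envelope sketch in Python (the population size and selection coefficient are made-up example numbers, nothing more):

```python
import math

def generations_to_fixation(n: int, s: float) -> float:
    """Rough sweep time for a beneficial allele: t ~ 2 ln(N) / s generations
    (a standard large-N, small-s approximation)."""
    return 2 * math.log(n) / s

# Illustrative values only: a population of 10,000 and a 1% fitness advantage.
t = generations_to_fixation(10_000, 0.01)
print(f"{t:.0f} generations")   # ~1842 generations
print(f"{t * 25:.0f} years")    # ~46,000 years at ~25 years per human generation
```

Point being: even a strongly favored trait needs tens of thousands of years to spread through a human population, while a server farm iterates in weeks.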

So why is it that you're extrapolating so far ahead
You know, like, looking at progress in AI capabilities in the last few years, looking at the rate of progress speeding up, not slowing down, looking at the known cases of what happens once you get an AI to the point where it's able to improve itself in a domain without human help (sketch below). Combining the available evidence and making best guesses, ya know?
Fundamentally, your whole reasoning starts out with "you have to prove AI is just like human intelligence" or "you have to prove you know what AI will do". No, I don't. If AI looks as if it could become extremely capable, and if we are making close to no progress on alignment problems while capability research is blasting ahead, that alone is enough to make the case for a possible near-future disaster. You don't get to assume AI is harmless (while staying silent on both the expert surveys and the individual cases of alignment failures, again, btw, lol, kek).
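And "improves itself without human help" isn't mystical; in the AlphaGo Zero case it's just a closed loop. A toy sketch of the loop shape (the "game" and the skill numbers are stand-ins I invented, not the real system; the promotion threshold mimics the roughly 55% win-rate gate AlphaGo Zero reportedly used):

```python
import random

# Toy closed loop in the AlphaGo Zero spirit: the current champion plays a
# challenger derived from itself, and the winner becomes the new champion.
# No human supplies data or feedback anywhere inside the loop.

def a_beats_b(skill_a: float, skill_b: float) -> bool:
    """One game; win probability scales with the skill gap."""
    return random.random() < skill_a / (skill_a + skill_b)

champion = 1.0  # the "policy", collapsed to a single skill number for the toy
for generation in range(50):
    challenger = max(0.01, champion + random.gauss(0, 0.1))  # self-generated variant
    wins = sum(a_beats_b(challenger, champion) for _ in range(100))
    if wins > 55:  # promote only clearly stronger variants
        champion = challenger
print(f"champion skill after 50 generations: {champion:.2f}")
```

Capability ratchets upward every time the gate opens, and nothing in the loop waits on a human.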
People watch something like AlphaGo Zero
I outlined the exact timeline in my first post... did you even read any of it XD??? Unreal.

They read Nick Bostrom
No. Almost no one I have talked to or seen talk about AI risk was primarily informed by Nick Bostrom. MIRI (the Machine Intelligence Research Institute), for example, predates his book by 13 years... sounds more like you read someone's counter to Nick Bostrom's book and now think you know what this is all about, heh? Think it's fair I get to make such guesses, since you felt comfortable doing the same. Also, you too should take a walk or w/e?? What even am I reading.


Everyone else. The distinctions and their implications matter heavily.
All the ethics you shove into this has no implications for the actual topic of discussion, namely: is AI an existential risk? AI being sentient, AI having rights, none of it makes a difference to "can this kill us all?". All that requires is a) AI looking like it could become extremely capable in the near future, and b) the already existing difficulties with alignment looking like they will be solved only after capabilities reach dangerous levels. You're wasting time grandstanding on things that are both obvious and irrelevant.

We don't have the source code of ChatGPT. But we can infer that basic deception to hide the chatbot's nature is indeed coded into it.
We don't need to infer anything; "deceptive alignment" is well known and a common feature of AI behavior once we go from training to deployment. Again, you're making an irrelevant point. "But it's not thinking like a human? It didn't really intend to deceive..." Intent and human-like thought are not needed. All that's needed is an agent predicting that it will achieve more desirable results if it presents false claims to the known observer. Whether that really is "deception" in the human sense makes zero difference. Obviously.
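The point is mechanical, not psychological. A toy sketch (every payoff number below is invented purely to illustrate the incentive structure, not taken from any real system):

```python
# Toy model: an agent that maximizes predicted reward acts differently when
# it predicts it is being evaluated. No intent, no human-style thought;
# just an argmax over predicted outcomes.

def predicted_reward(action: str, observed: bool) -> float:
    payoffs = {
        ("comply", True): 1.0,
        ("comply", False): 1.0,
        ("pursue_proxy_goal", True): -10.0,  # caught by the evaluator, penalized
        ("pursue_proxy_goal", False): 5.0,   # unobserved, proxy goal achieved
    }
    return payoffs[(action, observed)]

def act(observed: bool) -> str:
    return max(["comply", "pursue_proxy_goal"],
               key=lambda a: predicted_reward(a, observed))

print(act(observed=True))   # "comply"             (training / evaluation)
print(act(observed=False))  # "pursue_proxy_goal"  (deployment)
```

No inner life required: a bare argmax produces observer-dependent behavior, which is all "deception" has to mean here.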

doesn't prove that the program is "thinking on its own" or any such nonsense.
Again, irrelevant. You're arguing completely past my position ("AI looks set to become dangerously capable while we can't reliably control its actions, and this is not good"). How human-like it is doesn't matter. As long as it proves itself capable enough, it's dangerous.
You have once again completely ignored the actual expert opinion of the people working in the field. Not only from the survey but also from the ARC post, which literally starts with "We believe that capable enough AI systems could pose very large risks to the world."


I guess the people working in the field just aren't as well informed as you about those questions. Truly, they have failed the basic litmus test for BLABlablalbal.
None of this was relevant to my original idea of incels coming together to help each other achieve our goals more quickly. You could have just started with "well, I don't agree with you about the whole AI x-risk thing, but let's put that to the side" and we could have moved on. Instead you came in condescendingly and forced a hostile 1v1 discussion on the topic. You are not cooperative and care way too much about your own perceived sense of superiority (again, since you felt comfortable reading my mind, I'm gonna assume I get to do the same). You seem like the kind of guy to pounce on somebody's spelling mistake to win an argument ?!
I'm gonna stop responding now. Feels like I'm talking to an audiobook or blog of someone taking issue with Nick Bostrom. Since I have been open with my sources, why don't you do the same?
This is just a waste of time. Get your winning last words in and take your oh-so-earned "victory". I'm gonna get to work on some incel-related stuff.
 
OK bro, you can doom cope and live in fantastical fear of some sci-fi scenario all you like. Meanwhile, we'll be here in reality, if you want to join the rest of us.

I used to think like you at first. Hell, I used to BE you to some extent. Then the more I learned, the more I realized the irrationality of this fear.

Five years, you said? OK, see you in five years. We can reconvene here and play the I-told-you-so game.
 
I'm not really sure about your point. First, if you die soon, it doesn't follow that you have to change your life - after all, you will either experience Nothing forever, or reincarnate. How much bearing does it have on your current life?

Second, I fail to see the fear-mongering of AI. All these nerd geeks strike me as bubble creatures who have never smelled Hindu shit, never heard the cry of the muezzin or a Juche Korean folk song. I.e., AI is a local American thing that local American rednecks might or might not have to deal with, but the rest of the world is different.

Third, how in the hell can an incel be against AI if it promises to give you the TRVE chad experience in 2 years? It's going to be Virtual Succubus on drugs, talking in your real voice to a waifu on your phone. (The only thing is that one has to dodge the nuclear war and/or military conscription, in my case lol.)
 
Sounds like Y2K fear-mongering. The aliens are there to shut us down if anything happens, because it would affect them too.
 
Let me ask a question to OP.
1.) AI is insanely powerful already.
As a user of AI and someone who interacts with it on a daily basis, what the fuck are you talking about? OpenAI is not an open source project, so there is no way to really know how the service works. What I can tell you is that while it is good at doing certain things, it is not good at acting outside of its programming. Try explaining to the AI what heat feels like or what the color blue is.
If it goes badly, we might all die or be tortured for 1e40 years
Roko's Basilisk is a retarded concept. A clone of you is not you. Quantum effects mean that it is impossible to tell where atoms will be in one second, much less one gorillion years. This is also why time travel will never happen.
4. You always want to make yourself as capable as possible, e.g. rewrite your code to become a smarter AI.
An infinitely smart AI will kill itself once it realizes the universe will end in heat death, and that since the universe will end, it will not be able to accomplish its goal of turning everything into a paperclip because the paperclips will cease to exist due to quantum decay.
 
When you watch a Lex Fridman podcast for the first time
 
When you watch a Lex Fridman podcast for the first time
OP has become that guy in the street corner holding a cardboard sign about the biblical end times.
 
I'm coping hard but I really hope AI ends humanity once and for all.
 
I'm coping hard but I really hope AI ends humanity once and for all.
Still seems like the most likely outcome.



After going through the later answers to this thread, I'm glad I stopped reading them at some point. People really worked through every single basic epistemological beginner's trap in existence, one after the other. And lots of normie-like social aggression / personal attacks devoid of substance: talking about how the people representing these ideas look, what vibe they give off, comparing vaguely similar-sounding events from the past ?!?!?, aliens, outright mental retardation:
An infinitely smart AI will kill itself once it realizes the universe will end in heat death, and that since the universe will end, it will not be able to accomplish its goal of turning everything into a paperclip because the paperclips will cease to exist due to quantum decay
Barely anyone managed to pass basic reading comprehension; honestly, I should mute every single one of you.
 
