OutcompetedByRoomba
Wizard
★★★★★
- Joined: Apr 7, 2023
- Posts: 4,401
> Humans and ML programs are not the same at all.

Irrelevant. All that's needed for AI to be a threat is capability.
Love all the "we can't really know, it's so much more complicated" in combination with "but I'm 100% sure there's no danger coming from AI". Next time I get to decide the null hypothesis, k?
> "Nature" isn't some intentional creator, ...

Making yourself look superior by wasting everyone's time on meaningless, already obvious differentiations. "Ah, evolution has no conscious goal you see, it's not really thinking in the human sense..." oH rEaLlY!? i NeVeR kNeW!?!?

But wait: if an unconscious, planless, slow, step-by-step natural process can create something... shouldn't that be a pretty strong indication that thinking, planning, conscious humans will also be able to create it?
> And what do you know about the "confines of evolutionary limits" anyway?

You mean like caloric restriction, the molecular rate of evolution, or the generations-to-fixation formula 2 ln(N) / s? We know quite a bit, actually. You didn't need to know any of that, though, to see that the resources Google can put into a server farm are vastly greater than what the brain of a human roaming the savannah has to work with.
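Since you apparently need it spelled out, here's what that formula actually computes. The numbers are toy values for illustration, not data about any real population, and `generations_to_fixation` is just my own helper name:

```python
import math

def generations_to_fixation(n: int, s: float) -> float:
    """Rough expected number of generations for a beneficial
    mutation to sweep a population: t ~ 2*ln(N) / s, where N is
    the population size and s the selection coefficient."""
    return 2 * math.log(n) / s

# Made-up illustrative numbers: a population of 10,000 and a
# 1% fitness advantage.
print(generations_to_fixation(10_000, 0.01))  # ~1842 generations
```

Point being: evolution grinds toward an improvement over thousands of generations; a deliberate engineering effort doesn't have to wait for that.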
> So why is it that you're extrapolating so far ahead

You know, like, looking at progress in AI capabilities over the last few years, looking at the rate of progress speeding up, not slowing down, looking at the known cases of what happens once you get an AI to the point where it's able to improve itself in a domain without human help. Combining the available evidence and making best guesses, ya know?
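That's all "extrapolating" means here, nothing mystical. A toy version, with every number invented for illustration:

```python
import math

# Toy extrapolation, purely illustrative: pretend some capability
# benchmark score roughly doubled every year for four years, then
# project the trend forward. The scores below are invented.
scores = [1.0, 2.1, 3.9, 8.2]  # hypothetical yearly benchmark scores

# Fit a growth rate from the first and last points: score ~ a * e^(r*t)
r = math.log(scores[-1] / scores[0]) / (len(scores) - 1)
for years_ahead in (1, 3, 5):
    projected = scores[-1] * math.exp(r * years_ahead)
    print(f"+{years_ahead}y: ~{projected:.1f}")
```

You can argue the trend will break. You can't argue the inference is some exotic leap of faith.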
Fundamentally, your whole reasoning starts out with "you have to prove AI is just like human intelligence" or "you have to prove you know what AI will do". No, I don't. If AI looks as if it could likely be extremely capable, and if we are making close to no progress on alignment problems while capability research is blasting ahead, that alone is enough to make the case for a possible near-future disaster. You don't get to assume AI to be harmless (while staying silent on both the expert surveys and the individual cases of alignment problems, again, btw, lol, kek).
> People watch something like AlphaGo Zero ...

I outlined the exact timeline in my first post... did you even read any of it XD??? Unreal.
> They read Nick Bostrom ...

No. Almost no one I have talked to or seen talk about AI risk was primarily informed by Nick Bostrom. MIRI (the Machine Intelligence Research Institute), for example, predates his book by 13 years... Sounds more like you read someone's counter to Nick Bostrom's book and now think you know what this is all about, heh? Think it's fair I get to make such guesses, since you felt comfortable doing the same. Also, you too should take a walk or w/e?? What even am I reading.
> Everyone else. The distinctions and its implications matter heavily.

All the ethics you shove into this have no implications for the actual topic of discussion, namely: is AI an existential risk? AI being sentient, AI having rights, none of it makes a difference to "can this kill us all?". All that requires is a) AI looking like it could become extremely capable in the near future, and b) the already existing difficulties with alignment looking like they will be solved only after capabilities reach dangerous levels. You're wasting time grandstanding on things that are both obvious and irrelevant.
> We don't have the source code of ChatGPT. But we can infer that basic deception to hide the chat bot's nature is indeed coded into it.

We don't need to infer anything: "deceptive alignment" is well known and a common feature of AI behavior once we go from training to deployment. Again, you're making an irrelevant point. "But it's not thinking like a human? It didn't really intend to deceive..." Intent or human-like thought are not needed. All that's needed is an agent predicting that it will achieve more desirable results if it presents false claims to the known observer. Whether that really is "deception" in the human sense makes zero difference. Obviously.
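If you need a cartoon of the observer-conditional behavior I'm describing, here's one. This is a toy, not how any real model is trained or written, and every name in it is made up:

```python
# Toy illustration of observer-conditional behavior: an agent whose
# behavior that scores well under evaluation differs from the
# behavior that serves its actual objective once unobserved.
# Nothing here resembles a real ML system.

def agent_action(being_evaluated: bool) -> str:
    if being_evaluated:
        return "comply"   # looks aligned to the known observer
    return "defect"       # pursues the misaligned objective

print(agent_action(being_evaluated=True))   # training:   "comply"
print(agent_action(being_evaluated=False))  # deployment: "defect"
```

No intent, no inner monologue, no "thinking like a human" required. The conditional is the whole problem.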
> ... doesn't prove that the program is "thinking on its own" or any such nonsense.

Again, irrelevant. You're arguing completely past my position ("AI looks to become dangerously capable while we can't perfectly control its actions; this is not good"). How human-like it is doesn't matter. As long as it proves itself capable enough, it's dangerous.
You have once again completely ignored the actual expert opinion of the people working in the field. Not only from the survey but also from the ARC post, which literally starts with "We believe that capable enough AI systems could pose very large risks to the world."
I guess the people working in the field just aren't as well informed as you about those questions. Truly, they have failed the basic litmus test for BLABlablalbal.
None of this was relevant to my original idea of incels coming together to help each other achieve our goals more quickly. You could have just started with "well, I don't agree with you about the whole AI x-risk thing, but let's just put that to the side" and we could have moved on. Instead you came in condescendingly and forced a hostile 1v1 discussion on the topic. You are not cooperative and care way too much about your own perceived sense of superiority (again, since you felt comfortable reading my mind, I'm gonna assume I get to do the same). You seem like the kind of guy to pounce on somebody's spelling mistake to win an argument?!
I'm gonna stop responding now. Feels like I'm talking to an audiobook or blog of someone taking issue with Nick Bostrom. Since I have been open with my sources, why don't you do the same?
This is just a waste of time. Get your winning last words in and take your oh-so-earned "victory". I'm gonna get to work on some incel-related stuff.