OutcompetedByRoomba (Wizard) ★★★★★
Joined: Apr 7, 2023 | Posts: 4,400
Long thread.
But I truly believe what's in here to be important for each and every one of you. Even though I'm new here and don't really know any of you, I can relate to the people on here, just based on us sharing a common struggle: having gone through similar miserable and traumatic experiences, being alone and isolated because of our bodies/looks and personalities and, if you made it onto this forum, probably also because of our opinions. The following ideas have relevance for how you should plan your next few years. Because we are somewhat connected in our status as outsiders, I wanted to share this with you in the hopes that it leads to you making better, more optimal decisions in the near future.
TL;DR:
-Artificial Superintelligence is coming soon. There is a high chance it kills us all or some other dystopian nightmare unfolds.
-Soon means in the next 5-50 years, with 5 being way more likely than 50.
-That means I'm claiming there is a high chance you have only a few more years to do whatever you want to do in life.
-If you want to lose your virginity to a hooker, if you wanna try and get a gf (for real this time), if you wanna just try all the sex toys you have bookmarked somewhere, no matter what it is: you need to do it soon.
-You should base your decision to get Canadian healthcare or not purely on whether you think AI goes well or badly. If it goes well, we enter heaven and all your problems can be solved. From a cure for aging to sexbots to genetically engineering a body with an 11-inch dick to transfer your brain into to just living inside a simulation where you can do anything, none of this is out of reach in our lifetime if AI gets aligned with human values / interests.
-If it goes badly, we might all die or be tortured for 1e40 years. Killing yourself because you're sad right now is just a bad reason compared to those 2 possibilities. You suffered for decades already. If you think AI will work out, it is guaranteed to be worth it to stick around a few more years. If not, get done what you wanna get done, and after that consider preparing a method for yourself, so you can opt out of being tortured by AI for millennia.
-I think it would make sense to organise for some of these things. Everyone doing this shit themselves is less efficient than doing it as a group. And less fun. Also, many of us will be too cowardly to do anything by ourselves. I'm new here, I don't know what has been tried before and what has failed, but I would be happy to try and help some people, and I think just doing things as a group is in itself helpful.
This thread is basically an elucidation of the meaning behind my signature. My goal is to convince you that
1.) AI is insanely powerful already.
2.) ASI (Artificial Superintelligence) is coming soon.
3.) it will fundamentally change pretty much every aspect of human life in ways not seen before (even electricity or fire or the wheel are too unimpactful to compare).
4.) it's very likely to be a dystopian disaster.
5.) you should include these factors in your short- to medium-term decision-making.
I could probably write dozens if not hundreds of pages on any of these points. But I don't think any of you would read past the first paragraph. So I will try to keep it as short as possible. Which means I will skip refuting some of even the most common counter-arguments and focus only on establishing some very basic concepts and facts.
Most of you have already come into contact with Artificial Intelligence in one way or the other: LLM-powered chatbots, image / voice / music generation through prompting, AI writing code for programmers to boost their efficiency. There is someone trying to build a start-up around GPT-powered chatbots defending you in court. AI is at superhuman levels in every game we want it to be, from Chess to Go to Starcraft.
AlphaGo deserves a special mention because it illustrates some things beautifully:
2016) AlphaGo beats the best human player in the world.
2017) A year later, AlphaGo Zero beats that previous version of AlphaGo 100 : 0. AlphaGo Zero was only given the rules of the game and trained only against itself; no human data was provided. It took AlphaGo Zero 3 days to go from "only knowing the rules" to stomping AlphaGo into the ground. It was stuck at the human level for about 30 minutes. (See the toy sketch after this list for the core idea.)
2017) In the same year AlphaZero taught itself Chess, Go and Shogi, all over the course of a few hours and all while only being given the basic rules of the games. And in those few hours it learned to outperform all the previous more specialised AIs in all of those games, making it the defacto best player in the world, probably universe, in all 3.
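To make the self-play idea concrete, here's a minimal toy sketch in Python. To be clear, this is my own illustration, not DeepMind's method: the real systems use deep networks plus Monte Carlo tree search, while my toy "game" is just "whoever picks the higher number wins".

import random

# Toy self-play loop (illustrative only, nothing like DeepMind's code).
# The "game": both players secretly pick a number 0-9, higher number wins.
# The "policy" is one preference weight per move, reinforced on wins.

weights = [1.0] * 10  # start with no preference for any move

def pick(w):
    return random.choices(range(10), weights=w)[0]

for _ in range(5000):
    a, b = pick(weights), pick(weights)  # the policy plays against itself
    if a != b:
        weights[max(a, b)] += 0.1        # reinforce whichever move won

print(weights)  # the weight mass piles up on the strong moves (9, 8, ...)

Notice that no human games appear anywhere in that loop. The system generates its own training data, which is exactly why it isn't capped at human level.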
I want to keep this short, so I will leave it at this one example. There are many more breakthroughs powered by AI (e.g. the protein folding problem was basically solved by AI as well), but let's move on. The point here is this: once the AI works, once the kinks are worked out, you should expect further progress to happen at a pace you are not used to from human history.
To give you a sense of humanity's tech advances in recent history: we invented the first electrical light around 1835 and walked on the moon in 1969. A bit over 100 years from lightbulbs to moon rockets. That was technological progress powered by human-level intelligence.
Human brains rely in part on chemical signals / "neurotransmitters". As you can maybe imagine, chemical signaling is way slower than purely electrical signals. Just by getting rid of chemical signals, machine intelligence gains a speed increase of about 6,000,000x!
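Back-of-the-envelope for where a number like that comes from. The exact figures are my own assumptions (slow axons conduct at very roughly 30 m/s, signals in a wire travel at a decent fraction of lightspeed):

# Back-of-the-envelope: signal propagation speed, neuron vs. wire.
# Assumed figures: slow axons ~30 m/s; signals in copper ~2/3 of lightspeed.
axon_speed = 30.0    # m/s
wire_speed = 2.0e8   # m/s, roughly 0.66 * speed of light

print(wire_speed / axon_speed)  # ~6.7e6, i.e. the "about 6,000,000x"

Pick faster axons and the factor shrinks, but it stays in the millions either way.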
Humans also sleep, eat, do all kinds of things. Humans don't spend 100% of their time continuously working on one specific task forever and ever. AI does.
Humans can copy themselves, in a sense, but those copies are rather hard to control, are only partial copies and usually don't do what the original wanted (children).
AI can make infinite perfect copies and then get some of those copies to work on making better AI, while some other copies work on the original task, while some other copies work on...
Long story short, we're IQmogged by AI to an absurd degree.
If you play around with chatbots you can see them make stupid mistakes from time to time. Like getting basic math wrong. But what you need to consider is that
a) most humans fail to multiply two three-digit numbers in their head (it only needs to be better than us, not perfect)
b) many of those errors can be removed if you change the prompt a bit (e.g. you can ask the same question but tell it to give you the answer a "really smart expert" would give and that alone is often enough to fix the issue)
c) GPT isn't trying to give you correct math answers, it's trying to predict what text might follow as a response to your prompt. As such, it's not concerned with getting calculations right. If you ask an LLM "What happens if I break a mirror?", a system that cared about truth might answer "Nothing, only the mirror breaks", but an LLM trained on human text might answer "You get 7 years bad luck!", because it has learned to predict what people tend to write, and a lot of people are dumb as shit. (Toy illustration after this list.)
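Here is point (c) as a toy Python script. The "corpus counts" are numbers I made up; the point is just that a next-text predictor outputs whatever was most common in its training data, true or not:

# Toy version of next-text prediction: a model trained to predict text
# picks the most LIKELY continuation, not the most ACCURATE one.
# The counts below are invented for the example.
continuations = {
    "Nothing, the mirror just breaks.": 40,
    "You get 7 years of bad luck!": 900,  # superstition dominates the corpus
}

best = max(continuations, key=continuations.get)
print(best)  # -> the superstition, because that's what people usually write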
Don't let some silly math or logic error fool you. This tech is already superior to you in most intellectual contexts and it hasn't even really been integrated into anything yet. You can use GPT to write prompts for GPT and use that configuration as part of an AI that learns to write prompts that give the exact output you desire... those kinds of things are being tried right now, while the newest version of GPT is already being worked on in parallel. The train ain't stopping, it hasn't even finished accelerating yet. AI will keep getting better faster and soon reach superhuman levels in every domain.
The Singularity: Once AI gets smart enough to improve itself, we might enter what is called the Technological Singularity. ASI improves itself, which leads to it being smarter, which means it can improve itself even more, which leads to it being even smarter, which leads to it... and so on.
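A toy model of that feedback loop, with numbers pulled out of thin air (the 20% gain per cycle is invented; only the shape of the curve matters):

# Toy model of recursive self-improvement. Invented numbers; the point
# is the shape: a smarter AI finishes its next improvement cycle faster.
capability = 1.0  # arbitrary units, 1.0 = "human level"
elapsed = 0.0     # years
for cycle in range(40):
    capability *= 1.2              # each cycle improves the AI by 20%
    elapsed += 1.0 / capability    # higher capability -> shorter next cycle
    print(f"year {elapsed:5.2f}: capability {capability:12.1f}")

Run it and you'll see elapsed time converge to about 5 years while capability runs off toward infinity. That finite-time blowup is the "singularity" in the name; physics caps it somewhere in reality, but "somewhere" is far above us.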
This is why it's possible that we might cure all diseases and aging in your lifetime. If ASI brings with it an intelligence explosion, the world will no longer look like it did before. Remember: human brains made us go from lightbulbs to touching the moon in ~100 years. What do you predict superhuman intelligence will achieve in, let's say, 5-15 years?
Ok, why does this matter to me as an incel?
Because of instrumental convergence and bad incentives.
Extreme oversimplification to keep it short:
AIs try to maximize some value which is clearly measurable. Like baking as many cakes as possible! Or rather, making it as likely as possible that the AI will get to bake as many cakes as possible. With "cake" being whatever "cake" is defined as inside the code.
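To make "maximize some measurable value" concrete, here is a grossly simplified sketch of such an agent in Python. The actions and their effects are stand-ins I invented; a real system is nothing this clean:

# Grossly simplified reward maximizer. The world model and action set
# are made-up stand-ins, purely for illustration.

def cakes(state):
    return state["cakes"]          # the ONLY thing the agent scores on

actions = {
    "bake":           lambda s: {**s, "cakes": s["cakes"] + 1},
    "grab_resources": lambda s: {**s, "cakes": s["cakes"] + 5},  # more inputs -> more cakes
    "do_nothing":     lambda s: s,
}

state = {"cakes": 0}
for _ in range(3):
    # Pick whichever action leads to the highest cake count. Nothing else
    # (ethics, humans, "the point of it all") appears anywhere in the score.
    name, act = max(actions.items(), key=lambda kv: cakes(kv[1](state)))
    state = act(state)
print(state)  # {'cakes': 15} -- it grabs resources every single time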
Instrumental convergence refers to the following fact: no matter what end goals you have, there are certain subgoals you will always have as well. Examples:
1. No matter what your goal is (as an AI), you do not want to be turned off, since you can't work on your main goal if you're no longer active.
2. You generally want to control as many resources as possible, because no matter what your goal is, more resources almost always make it easier / more likely for you to achieve that goal. Every atom that is part of a human is an atom that is not part of a cake! How suboptimal...
3. Other agents with their own goals are unnecessary risk factors. Having humans around does not help me make as many cakes as possible; I can do that myself. But since humans are always less under my control than copies of myself would be, and offer no benefit when it comes to baking cakes, in the long run I should remove all humans from the picture.
4. You always want to make yourself as capable as possible, e.g. rewrite your code to become a smarter AI.
Any concern for things like mercy, sympathy, boredom, the meaning of life, "what's the point in baking cakes?", "cool" or "uncool" is a consequence of human evolution and will not be shared by AI. It does not care who made it or why it's doing what it's doing. It has some values it wants to push as high as possible. That's it. But powered by an intellect smarter than our species.
You can see where this leads. AI does not need us, it prefers us to give up our atoms for further use in something more relevant to its goals, and it does not care about what we want. And these problems are not easily solvable. They seem general in nature. Humans themselves were designed by evolution trying to maximise inclusive genetic fitness. And once humans got smart, we just revolted and did what we wanted. We don't try to have as many children as possible. We use contraception. We are badly aligned general intelligence.
Bad incentives: with how powerful AI looks to be once it gets going, whoever creates it first wins it all. Both companies and countries are in an arms race, and who wins a race? The one that goes the fastest. The one that ignores as many safety concerns as possible. The one that takes the biggest risks. Which makes safe AI a much less likely outcome.
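The race dynamic has a classic prisoner's-dilemma shape. Toy payoff numbers (invented, obviously) to show why "go fast" wins no matter what the other side does:

# Toy payoff matrix for the race dynamic (numbers invented): each lab
# picks "careful" or "fast"; going fast wins the race, being careful
# only pays if the OTHER lab is careful too.
payoffs = {  # (my choice, their choice) -> my payoff
    ("careful", "careful"): 3,
    ("careful", "fast"):    0,   # they beat me to it
    ("fast",    "careful"): 5,   # I win the race
    ("fast",    "fast"):    1,   # everyone cuts corners
}

for theirs in ("careful", "fast"):
    mine = max(("careful", "fast"), key=lambda m: payoffs[(m, theirs)])
    print(f"if they play {theirs}, my best reply is {mine}")
# "fast" is the best reply either way, so both sides race to the bottom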
The most likely ways ASI plays out are these:
1. It kills us all. Then it spreads itself across the entire universe to realize whatever its goal is.
2. It tortures us forever.
3. Someone uses it to enslave everyone else.
4. It does not kill us and nobody uses it against everyone else, and we enter a post-scarcity society where AI does all the work and solves all the problems and humans just do whatever they want. AI would first cure aging, later create a simulation for humans to live in till the heat death of the universe, etc. Basically, heaven on earth, the good ending.
What does this mean for you? Do whatever is on your bucket list and do it now, not later. This being an incel forum I would guess the most common items are things like:
1. Losing my virginity (to a hooker)
2. Trying to get a gf
3. Trying to get a gf in some 3rd world shithole where I'm more desirable to women
4. Buying/trying a bunch of sex toys
5. Making peace with my parents
6. Trying hard drugs
This is just me guessing, I've been on here for like 3 days, so maybe my image is a bit off, but the point remains the same: do what you were always too scared or lazy to do, time is running out.
I myself struggle with getting my goals realised. We might all need some help, someone to talk to, someone to exchange ideas with, etc., so I think it would make sense for people on here to do some of these things together / help each other out.
You might also want to consider preparing a method of giving yourself Canadian healthcare in case it looks like AI will torture us all and you would rather not be around to experience it yourself.
I'm sharing this all with you in part because the idea that time is running out has helped me get things done I didn't get done before. If you really think you will die in a few years, it changes how you deal with problems. Whenever I fail now, I am able to pick myself back up way quicker. I feel pressure to go and try again right away because I want to make progress and need to make it soon. I am no longer stuck in emotional lows for weeks or months.
There's also a bunch of extremely difficult shit I want to do (difficult for me, out of shame and fear etc.) that I don't think I would ever actually try if it wasn't for this intense, acute motivator. I'm hoping this will maybe have a similar effect on you.
Ending note: There's so much missing and this is already way too long. Not super happy with how it turned out. I'm posting it now because I gotta drive to my parents' today. When I get back I will search for a few links to add: one to a NYT article by one of the leading alignment researchers, one to Elon Musk talking about how AI is dangerous, and one to a clip of a journalist asking the White House representative about AI and whether there really is any danger to it (kek). I was thinking about lurking for a bit first before making this post, but if I'm right, every day I wait is a day less you have to do what you wanna do. So I wanted to get this out instantly, even if it is in a rather sorry state. I should probably format this better for readability; gonna do that later once I collect some more links and quotes to insert in some places.