Welcome to Incels.is - Involuntary Celibate Forum

[Venting] I am losing faith in the transhumanist/singularity copes

Ihatereddit
Femhorroid Respecter ★★★★★
Joined: Dec 2, 2017 · Posts: 433
For many years, they were the only reason I refused to rope. But as time passed I have started to seriously doubt their validity.
It's 2018 already, AI is still extremely rudimentary, nobody knows what "consciousness" is, nobody knows what "intelligence" is. The best AIs are really good at one task, and absolute shit at everything else. AGI is still a very distant dream, and most people who claim otherwise seem to be copers like me, futurologists, and some random IFLS tier guys on reddit (also copers tbh).

It's over for technologycels.
 
I think we were born one or two generations too early to ever achieve it/experience it. With my luck this will all happen right after I die lmao.
 
It is indeed still a distant dream.
 
I don't care about the AI singularity; it sounds like a dystopian future to me ...

My main technological cope is rather the hope of one day experiencing space travel, and living the dream on some isolated Earth-like planet where there's nature everywhere, no humans, and no other sentient species (thus nobody will ever annoy me until my death).

I am not even an autist, I'm just fed up with what life has to offer in our modern world.
 
transhumanism is bullshit man
 
Instead of the singularity, this is what we'll get

It's over for wagecels.
 
> It's 2018 already, AI is still extremely rudimentary, nobody knows what "consciousness" is, nobody knows what "intelligence" is. The best AIs are really good at one task, and absolute shit at everything else. AGI is still a very distant dream, and most people who claim otherwise seem to be copers like me, futurologists, and some random IFLS tier guys on reddit (also copers tbh).
>
> It's over for technologycels.
I know that feeling. What helps me keep faith is telling myself that as long as human intelligence is defined by the rules of physics, and does not incorporate any supernatural element, we will crack it eventually. It's just a matter of man-hours.

As for the actual state of AI progress right now, there are reasons for cautious optimism. While current approaches are unlikely to lead to AGI, what has been achieved through evolutionary algorithms and stochastic optimization is impressive enough to keep some hope alive.

The date of AGI's arrival will essentially depend on whether it can be reached through evolutionary algorithms and raw computing power (Ben Goertzel and Elon Musk seem to think so, but that seems too "easy" to me) or whether a complete understanding and simulation of animal brains will be required, which would take far longer, given that we are still at the stage of understanding the mouse brain.
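The "evolutionary algorithms and stochastic optimization" mentioned above can be illustrated with a toy example. This is a minimal (1+1) evolution strategy; the fitness function, step size, and iteration count are all made up for the sketch and have nothing to do with any real AGI system:

```python
import random

def fitness(x):
    # Maximize a simple one-dimensional function with its optimum at x = 3.
    return -(x - 3.0) ** 2

def evolve(generations=2000, sigma=0.5, seed=0):
    rng = random.Random(seed)
    parent = rng.uniform(-10, 10)              # random starting "genome"
    for _ in range(generations):
        child = parent + rng.gauss(0, sigma)   # mutate
        if fitness(child) >= fitness(parent):  # keep the fitter of the two
            parent = child
    return parent

print(evolve())  # converges near the optimum at 3.0
```

Nothing here "understands" the function being optimized; the point of the analogy is that blind mutation plus selection still finds good solutions, which is why some people bet on it scaling up.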
 
> For many years, they were the only reason I refused to rope.
> It's 2018 already, AI is still extremely rudimentary, nobody knows what "consciousness" is, nobody knows what "intelligence" is. The best AIs are really good at one task, and absolute shit at everything else. AGI is still a very distant dream, and most people who claim otherwise seem to be copers like me, futurologists, and some random IFLS tier guys on reddit (also copers tbh).
>
> It's over for technologycels.

There is a bigger chance that the astral projection afterlife is real than AI ever becoming "strong AI", and due to the same reason: the enigma of consciousness.

But that might actually be a very good thing.
 
If it makes you feel any better, the first round of AI and transhumanism (which we will miss out on) will probably follow in the footsteps of current corporatist platforms by locking you in and somehow monetizing you for your continued existence.
 
> I know that feeling. What helps me keep faith is telling myself that as long as human intelligence is defined by the rules of physics, and does not incorporate any supernatural element, we will crack it eventually. It's just a matter of man-hours.
>
> As to the actual state of AI progress right now, there are reasons for cautious optimism. While current approaches are unlikely to lead to AGI, what has been achieved through evolutionary algorithms and stochastic optimization is sufficiently impressive to warrant a degree of optimism.
>
> The date of AGI discovery will essentially depend on whether it can be reached through evolutionary algorithms and raw computing power (Ben Goertzel and Elon Musk seem to think so, but it seems too "easy" to me) or if complete understanding and simulation of animal brains will be required, which will take far more time knowing that we are still at the step of understanding the mouse brain.
That always seemed overly simplistic to me: how would massive computational power somehow translate into intelligence? I really want this to be true, but it doesn't seem so. Are there any actual CS people who believe this?

There is some hope though, the technological singularity may be far away, but the biological singularity is definitely close.

Add Health (National Longitudinal Study of Adolescent to Adult Health) and HRS (Health in Retirement Study) are two longitudinal cohorts under study by social scientists. Horizontal axis is polygenic score (computed from DNA alone). It appears that individuals with top quintile polygenic scores are about 5 times more likely to complete college than bottom quintile individuals. (IIUC, HRS cohort grew up in an earlier era when college attendance rates were lower; Add Health participants are younger.)

Consider the following hypothetical:
You are an IVF physician advising parents who have exactly 2 viable embryos, ready for implantation. The parents want to implant only one embryo.
All genetic and morphological information about the embryos suggest that they are both viable, healthy, and free of elevated disease risk.

However, embryo A has polygenic score (as in figure above) in the lowest quintile (elevated risk of struggling in school) while embryo B has polygenic score in the highest quintile (less than average risk of struggling in school). We could sharpen the question by assuming, e.g., that embryo A has score in the bottom 1% while embryo B is in the top 1%.

You have no other statistical or medical information to differentiate between the two embryos.

What do you tell the parents? Do you inform them about the polygenic score difference between the embryos?
Note: in the very near future this question will no longer be hypothetical.
Source: Infoproc (EXCELLENT blog on genomics, AI, and machine learning, author is BGI-Shenzhen advisor, won't let me post link)
The field of genomic prediction is exploding right now, and soon we'll be able to boost average IQ by 10-15 points if not more. This is not a lot for an individual, but it is absolutely massive for an entire population. If we ever get to AGI, it will probably be made by IVF w/ PGD children who were selected for intelligence.
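The population-level claim above can be sanity-checked with back-of-envelope arithmetic. Assuming the conventional IQ scale (mean 100, SD 15) and a simple normal model, a 15-point shift in the mean changes the high tail far more than the average:

```python
from statistics import NormalDist

# Conventional IQ scale: mean 100, SD 15. A 15-point boost is one SD.
baseline = NormalDist(mu=100, sigma=15)
shifted  = NormalDist(mu=115, sigma=15)

frac_over_130_before = 1 - baseline.cdf(130)  # ~2.3% of the population
frac_over_130_after  = 1 - shifted.cdf(130)   # ~15.9% of the population

print(frac_over_130_before, frac_over_130_after)
```

Under these assumptions the 130+ group grows roughly sevenfold, which is the sense in which a modest average gain is "absolutely massive for an entire population".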
 
Give it about 15 years if you wanna see REAL progress. You'll likely see some amazing progress in your lifetime assuming you're young.

I follow the topic religiously.
 
READ INDUSTRIAL SOCIETY AND ITS FUTURE
 
I am also very interested in transhumanism, but these things need time. Lots and lots of time.
 
I'm more looking forward to the world becoming more cyberpunk.

> tfw not a console cowboy with a razor girl gf in Chiba City
 
> I'm more looking forward to the world becoming more cyberpunk.
>
> > tfw not a console cowboy with a razor girl gf in Chiba City

JFL the world already is Cyberpunk but only in the really shitty ways.
 
Do you think it will be possible to unify every consciousness into a single entity? Do you imagine what could happen if incel experiences, beliefs and lifeforms become one with those of the non-incel as part of the omnipotent machine? Would it malfunction and explode due to the high amount of assimilation and work that such a dangerous mix would imply?
 
> JFL the world already is Cyberpunk but only in the really shitty ways.

My favorite part is all the weeb stuff everywhere like neon kanji on every sign.

I want more of that.
 
> That always seemed overly simplistic to me, how would massive computational power somehow translate into intelligence? I really want this to be true, but it doesn't seem so. Are there any actual CS people who believe this?
There is a theory that human intelligence is simply an emergent property of a sufficiently powerful neural network.
> There is some hope though, the technological singularity may be far away, but the biological singularity is definitely close.
> Source: Infoproc (EXCELLENT blog on genomics, AI, and machine learning, author is BGI-Shenzhen advisor, won't let me post link)
> The field of genomic prediction is exploding right now, and soon we'll be able to boost average IQ by 10-15 points if not more. This is not a lot for an individual, but it is absolutely massive for an entire population. If we ever get to AGI, it will probably be made by IVF w/ PGD children who were selected for intelligence.
The biological singularity will take decades more than the tech singularity. Biology is enormously more complex than computer science. If you study DNA, you'll quickly understand. Most genes code for several traits and interactions at the same time. It's an incredibly massive tangle.
> READ INDUSTRIAL SOCIETY AND ITS FUTURE
Why do ugly males, aka genetic trash meant for misery and elimination, insist on glorifying Nature? Stockholm Syndrome, nothing else.
 
There will be no "singularity".
 
> There is a theory that human intelligence is simply an emergent property of a sufficiently powerful neural network.

Obviously it's false. What is consciousness? At minimum, it's the ability to sense sensations, the existence of sensations itself. Until the AI guys come up with a "sensation machine" rather than just a thinking machine, they will have no chance of creating "human intelligence", or even worm-level intelligence for that matter.
 
> Obviously it's false. What is consciousness? At least it's the ability to sense sensations/the existence of sensations itself. Until the AI guys come up with a "sensation machine" rather than just a thinking machine, they will have no chance of creating "human intelligence", or even worm level intelligence for that matter.
Why do you think "sensation" is required to produce an AI singularity? An AI doesn't need emotions to have drives.
 
Unless you are an AI researcher, you are not qualified to produce such definitive statements.

I got taken in by the singularity hype around 2006-2007. I read The Singularity is Near and some other book, thought about it for a few weeks, then realized what an illusion it was.
> Why do you think "sensation" is required to produce an AI singularity? An AI doesn't need emotions to have drives.

To have "strong AI" you have to have a conscious machine. To be conscious, something needs to have sensations for that is what consciousness is. Everything else is BS.

Intelligence is not consciousness and the representation of sensation (like a flag in a computer program) is not the sensation itself.
 
> I got taken in by the singularity hype around 2006-2007. I read The Singularity is Near and some other book, thought about it for a few weeks, then realized what an illusion it was.
What do you dispute exactly? So far, your arguments haven't been convincing.
> To have "strong AI" you have to have a conscious machine. To be conscious, something needs to have sensations for that is what consciousness is. Everything else is BS.
>
> Intelligence is not consciousness and the representation of sensation (like a flag in a computer program) is not the sensation itself.
Oh yeah, good ole' Chinese room argument.

Who cares if the computer is conscious or not if it displays intelligent behavior and achieves its goals?
 
> What do you dispute exactly? So far, your arguments haven't been convincing.

> Oh yeah, good ole' Chinese room argument.

The Chinese Room argument is completely true and valid. Why haven't you paid attention to it?

> Who cares if the computer is conscious or not if it displays intelligent behavior and achieves its goals?

I believe that not being conscious means the machine can never attain truly independent judgement. In other words, it won't displace the human being at the pinnacle of existence. It will remain a tool in the hands of humans or other sentient life, and consequently there will be no "singularity" either.
 
> The Chinese Room argument is completely true and valid. Why haven't you paid attention to it?
I have paid attention to it. I don't find it convincing. It's basically an attempt at exaggerating the special nature of the human brain. There is no evidence that consciousness is required to display intelligent behavior, or to have independent volition.

The Chinese room argument is closely related to the "not real AI" argument, which consists in systematically downplaying an AI achievement as "not true intelligence" once it is achieved (though before the achievement, it was seen as requiring true intelligence).

I find both arguments to be essentially unwarranted pessimism and nitpicking.
> I believe not being conscious means the machine can never attain truly independent judgement.
That already happened, man... Many of the choices made by DeepMind's AlphaGo were not pre-written or encouraged by its makers. In fact, its makers initially did not understand its course of action, nor large blocks of code produced by it. I call that independent judgment.
> It will remain a tool in the hands of humans or other sentient life, and consequently neither will there be a "singularity".
A singularity is fully compatible with AI being an unconscious tool in the hands of humans. You obviously have a lack of knowledge in the matter. How many books on the subject have you read fully (from beginning to end, not just the back cover)?
 
> There is a theory that human intelligence is simply an emergent property of a sufficiently powerful neural network.
>
> The biological singularity will take decades more than the tech singularity. Biology is enormously more complex than computer science. If you study DNA, you'll quickly understand. Most genes code for several traits and interactions at the same time. It's an incredibly massive tangle.
>
> Why do ugly males, aka genetic trash meant for misery and elimination, insist on glorifying Nature? Stockholm Syndrome, nothing else.

I guess you are right, you wouldn't call it a singularity, but it is still a world-changing technology, and unlike AGI, it's not simply expected but almost GUARANTEED to come in 10-15 years.
Imagine how productive and innovative a 100 IQ nation is; now bump that to 115, then 130 the next generation, and so on.
I am not talking about genetic engineering here, which is of course tremendously complex; this is just embryo selection. It will not produce sci-fi superhumans right off the bat, but let's just say that, as with the Industrial Revolution, any nation that is left out of it has no chance to succeed.
More information https://www.gwern.net/Embryo-selection
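For a rough sense of what embryo selection buys, here is a toy Monte Carlo sketch: each embryo gets a true trait value and a polygenic predictor correlated r with it, and the embryo with the best predicted score is chosen. The values of n, r, and the trial count are illustrative assumptions, not figures from the thread or the linked page:

```python
import random

def selection_gain(n_embryos=5, r=0.3, trials=100_000, seed=1):
    """Average true trait value (in SDs) of the embryo with the best
    predicted score, when the predictor correlates r with the trait."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        best_pred, best_true = None, None
        for _ in range(n_embryos):
            true = rng.gauss(0, 1)
            # pred = r*true + noise, scaled so corr(pred, true) = r
            pred = r * true + (1 - r**2) ** 0.5 * rng.gauss(0, 1)
            if best_pred is None or pred > best_pred:
                best_pred, best_true = pred, true
        total += best_true
    return total / trials

print(selection_gain())
```

With these made-up numbers (5 embryos, a predictor of r = 0.3), the expected gain for the selected embryo comes out to roughly a third of a standard deviation: modest per child, but compounding across a population and across generations, which is the argument being made above.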
> I have paid attention to it. I don't find it convincing. It's basically an attempt at exaggerating the special nature of the human brain. There is no evidence that consciousness is required to display intelligent behavior, or to have independent volition.
>
> The Chinese room argument is closely associated to the not AI argument, which consists in systematically downplaying an AI achievement as "not true intelligence" once it is achieved (though before achievement, it was seen as requiring true intelligence).
>
> I find both arguments to be essentially unwarranted pessimism and nitpicking.

This is a good argument. Back in the 90s people found the idea of a Go champion robot ludicrous; they knew you couldn't brute-force it because of how immensely complex the game is, you would need something more. Then in the 2010s we get AlphaGo and the reaction is "meh, we knew this was coming anyway, not TRUE intelligence".
I wouldn't be surprised if we reached AGI without even understanding it ourselves, like in some sci-fi horror novel.

A question: What do you think of Alphabet (Google)?
 
> I guess you are right, you wouldn't call it a singularity, but it still is a world-changing technology, and unlike AGI, it's not simply expected, but almost GUARANTEED to come in 10-15 years.
> Imagine how productive and innovative a 100 IQ nation is, now bump that to 115, then 130 the next generation and so on.
> I am not talking about genetic engineering here, which is of course tremendously complex, this is just embryo selection, it will not produce sci-fi superhumans right off the bat but let's just say that like the Industrial Revolution, any nation that is left out of it has no chance to succeed.
> More information: https://www.gwern.net/Embryo-selection
I'm not convinced IQ is as important as it is made to be in some spheres. For starters, hard work, values and mental health matter far more than intelligence when it comes to human achievement. The millions of depressed slackers and NEETs in the West should be a clue.
> This is a good argument, back in the 90s people found the idea of a Go Champion robot to be ludicrous, they knew that you couldn't bruteforce because of how immensely complex it is, you would need something more. Then in 2010s we get AlphaGo and the reaction is "meh, we knew this was coming anyway, not TRUE intelligence".
Yeah, I have no idea where this pattern of thinking comes from, but it's systematic. Every time AI reaches a milestone, suddenly it's not true intelligence anymore. At some point, it will be hard to deny true intelligence has been reached when Skynet knocks at your door.
> I wouldn't be surprised if we reached AGI without even understanding it ourselves, like in some sci-fi horror novel.
I think this is what will happen. Some of the code produced by evolutionary algorithms and neural nets isn't understood, already. That's what triggered Elon Musk's infamous warnings. In most of OpenAI's published research, they admit they don't understand some of the results.

I think it's Ben Goertzel who used the analogy of the airplane. We didn't have to reverse-engineer the bird to devise flying machines. Perhaps it will be the same for intelligence: we could create it through sheer computing power and tinkering, without a biological understanding.
> A question: What do you think of Alphabet (Google)?
They are a major actor in AI research right now, but not the only one.
 
This is interesting. I guess it makes me think how little we know about our brains, despite us using them from the day we're born until the day we die. I'd kill to be able to ask my brain just one question. I wish we held the key to understanding ourselves. The universe would be our oyster.
 
> I'm not convinced IQ is as important as it is made to be in some spheres. For starters, hard work, values and mental health matter far more than intelligence when it comes to human achievement. The millions of depressed slackers and NEETs in the West should be a clue.
>
> Yeah, I have no idea where this pattern of thinking comes from, but it's systematic. Every time AI reaches a milestone, suddenly it's not true intelligence anymore. At some point, it will be hard to deny true intelligence has been reached when Skynet knocks at your door.
>
> I think this is what will happen. Some of the code produced by evolutionary algorithms and neural nets isn't understood, already. That's what triggered Elon Musk's infamous warnings. In most of OpenAI's published research, they admit they don't understand some of the results.
>
> I think it's Ben Goertzel who used the analogy of the airplane. We didn't have to retro-engineer the bird to devise flying machines. Perhaps it will be the same of intelligence: we could create it through sheer computing power and tinkering without a biological understanding.
>
> They are a major actor in AI research right now, but not the only one.

You are right, IQ isn't the end-all be-all of intelligence. It was designed as a way to identify mental retardation, after all. A man with an IQ of 150 is almost certainly visibly very intelligent, but not necessarily high-achieving. An accomplished astrophysicist with an IQ of 140 is generally preferable to a farmer with an IQ of 150.
That being said China is already addressing this problem. They are researching the genes of geniuses.
https://www.nature.com/news/chinese-project-probes-the-genetics-of-genius-1.12985

When IVF with advanced genomic prediction becomes commercial, IQ will likely not be the main thing selected for, but rather the genes that highly correlate with academic success, health, etc. Of course, IQ will correlate strongly with achievement, but not 1:1.


As for Google, I meant morally speaking.
 
> As for Google, I meant morally speaking.
Google has accelerated my intellectual development by ~25 years. Without it, I would be like the middle-aged incel I know who realized only recently, after 20 years of "therapy", that his life sucked because he was an ugly dwarf and not because of his own mistakes.

I consider Google as a tremendous source for good in the world.
 
> A singularity is fully compatible with AI being an unconscious tool in the hands of humans. You obviously have a lack of knowledge in the matter. How many books on the subject have you read fully (from beginning to end, not just the back cover)?

I've read two books on the so-called "singularity": The Singularity is Near by Ray Kurzweil and The Spike by Damien Broderick. Still have them. But yeah, it was over ten years ago. Nevertheless, I seem to remember Kurzweil making the case, at least implicitly, for conscious AI as something not only inevitable as a simple extension of increasing processing power (which it isn't), but also required in order to achieve the runaway effect the term implies.
 
> I've read two books on the so-called "Singularity": The Singularity is Near by Ray Kurzweil and The Spike by Damien Broderick. Still have them. But yeah, it was over ten years ago. Nevertheless, I seem to remember Kurzweil making the case, at least implicitly, for conscious AI as something not only inevitable as a simple extension of increasing processing power (which it isn't), but also required in order to achieve the runaway effect known as the Singularity.
>
> I was taken in by the hype at first, but given time to reflect on it I came to a different conclusion.
Fair enough. I think you would do well to update your readings, for instance with How To Create A Mind by Kurzweil, or by reading the blogs of OpenAI and DeepMind for actual, practical updates on AGI progress. At some point, it's natural to doubt Kurzweil's rose-tinted optimism, but I think he was still right on the essentials.
 
> Give it about 15 years if you wanna see REAL progress. You'll likely see some amazing progress in your lifetime assuming you're young.

Yeah, 15 years, that's what they said 30 years ago. Not going to hold my breath.
 
> Yeah, 15 years, that's what they said 30 years ago. Not going to hold my breath.
At the current rate of technological growth, it's closer than it's ever been. With the technology that has come out in the past couple of years and the things currently in development, we're getting there.
 
Physically they already look pretty good, though the fabric could improve. Movement-wise I think we could see a lot of rapid improvement, but mentally/personality-wise, yeah, I think it is going to take quite a while to get something that can convincingly pass for an actual human (like in Ex Machina).
 
> Fair enough. I think you would do well to update your readings, for instance with How To Create A Mind by Kurzweil, or by reading the blogs of OpenAI and DeepMind for actual, practical updates on AGI progress. At some point, it's natural to doubt Kurzweil's "rose-tinted glasses" optimism but I think he was still right on the essential.

Actually, I might. Not that I think I will agree much with Kurzweil; I've ended up in the opposite camp, being more impressed by people like Sir Roger Penrose and Stuart Hameroff when it comes to the matter of consciousness. But you're right that one needs to keep an open mind and an open eye on current developments in this area, at least if you want to have anything worthwhile to add to the discussion.
 
> For many years, they were the only reason I refused to rope.
> It's 2018 already, AI is still extremely rudimentary, nobody knows what "consciousness" is, nobody knows what "intelligence" is. The best AIs are really good at one task, and absolute shit at everything else. AGI is still a very distant dream, and most people who claim otherwise seem to be copers like me, futurologists, and some random IFLS tier guys on reddit (also copers tbh).
>
> It's over for technologycels.
you should watch this and rethink your singularity scenario

 
