Welcome to Incels.is - Involuntary Celibate Forum


Discussion: Now that the dust has settled, can we agree the industrial revolution and its consequences have been a disaster for the human race?

  • Thread starter: Deleted member 33464 ("Revelationcel")
  • Joined: Mar 8, 2021
  • Posts: 900

Reading ISAIF (Industrial Society and Its Future) was the biggest blackpill; he pretty much solved philosophy. After reading this, I feel nothing but spite for people who are genuinely happy to be a part of the system, people who are blindly careermaxxing, and people who get obsessed with science and technology. Life in a primitive society wouldn't be perfect, but it'd be multiple times better than it is now
 
I feel nothing but spite for people who are genuinely happy to be a part of the system, people who are blindly careermaxxing, and people who get obsessed with science and technology.
 
people have been poisoning themselves as a relief since forever. not exclusive to our times.
 
never been a fan of jew meds here
 
I think cultural marxism and its ramifications were what fucked up society the most, but of course technology boosted it.
 
While Uncle Ted makes a lot of great points, his ultimate premise is short-sighted. Technology is only disastrous when it isn't tempered and guided by ethics and a bit of worst-case imagination.
 
"return to monke" was never just a meme for me
 
While Uncle Ted makes a lot of great points, his ultimate premise is short-sighted. Technology is only disastrous when it isn't tempered and guided by ethics and a bit of worst-case imagination.
Did you read his works? A big part of his argument is that technology has effects independent of our intended usage of it. "Guidance" is therefore irreconcilable with this and he wrote prolifically about how reformation would not fix it, only revolution
 
Did you read his works? A big part of his argument is that technology has effects independent of our intended usage of it. "Guidance" is therefore irreconcilable with this and he wrote prolifically about how reformation would not fix it, only revolution
Yes, I did. And you misunderstand. The idea isn't to apply ethics and philosophical guidance after the fact, when the technology is already created and irrevocably integrated into society (like computers), but to warn technologists of long-lasting potential harm before certain technologies are developed and proliferated in the world. What generally happens in the world is that scientists have tunnel vision with their work. They fail to see and understand the broader implications of their work, and the potential harm to society and the world at large. Oppenheimer was one example who did see and understand, but he knew that even were he to destroy all of his work and burn everything to the ground, it wouldn't stop what was to come.

Technology and its potential effects (both good and bad) are treated reactively. Even now, with all of the huff and puff about the dangers of AI, for example, politicians aren't seriously considering potential changes to society (thus necessitating new laws) once the technology has become mainstream. When something new is introduced into society policy makers react to it usually only when some kind of harm is done and/or to increase control (and only when it benefits them somehow).

For example, one technology that must not be researched and developed is gene editing and genetic engineering in humans. I'll leave the why as an exercise for you.
 
It depends if you don't mind being dead before you're 40.
 
No, it's not industrial revolution, it's rights and empowerment of toilets, society was still based until like 60's
 
no for instance, if there was no industrial revolution and i was born i would die very fast with my body, and also no copes, and also no future VR nor genetic engineering. i read kaczynski and the only point he has is against genetic engineering which is indeed RETARDED
 
Yes, I did. And you misunderstand. The idea isn't to apply ethics and philosophical guidance after the fact, when the technology is already created and irrevocably integrated into society (like computers), but to warn technologists of long-lasting potential harm before certain technologies are developed and proliferated in the world. What generally happens in the world is that scientists have tunnel vision with their work. They fail to see and understand the broader implications of their work, and the potential harm to society and the world at large. Oppenheimer was one example who did see and understand, but he knew that even were he to destroy all of his work and burn everything to the ground, it wouldn't stop what was to come.

Technology and its potential effects (both good and bad) are treated reactively. Even now, with all of the huff and puff about the dangers of AI, for example, politicians aren't seriously considering potential changes to society (thus necessitating new laws) once the technology has become mainstream. When something new is introduced into society policy makers react to it usually only when some kind of harm is done and/or to increase control (and only when it benefits them somehow).

For example, one technology that must not be researched and developed is gene editing and genetic engineering in humans. I'll leave the why as an exercise for you.
low iq and here is why:
1. You are playing into the fallacy of futurology. We can predict what is going to happen in the future based off present trends, but what this doesn't take into account are "miracles", or radical breaks that can only be explained after the fact. It is a contradiction to say you can treat technology "reactively", but before it is integrated into society.
2. Who is making the decisions, what criteria is used to determine if effects of technology are "good and bad", and how can I trust them when there is a historical precedent for literally every government to make shit decisions? To expand more upon the second point, do we take a deontological approach, or a consequentialist approach? No matter what gets picked, what ends up happening is somebody is enforcing their ethics on other people, imposing a control on their behavior, which is the main concern Ted has about technology, restriction of human freedom.
3. The good can't be removed from the bad, and you make the fallacy of linear causality. The system exists as a web of relationships, rather than one thing causes another. You can't have modern medicine, for example, without factories, chemists, computer scientists, etc. which all have their dependencies. And this says nothing of the inherent side effects that exists in technologies itself.
4. If we were to somehow treat technology reactively to prevent its negative effects, even assuming we could somehow predict all of them perfectly and assuming it were possible to stop the development of technology, it would necessarily imply that we stop the development of all technology and destroy all systems-based technology, because it is the nature of technology to have effects independent of our intended usage for it and it is the nature of the system to restrict our freedoms due to the social contract. No matter what kind of system you have, a technoindustrial system necessarily implies making people live and behave in ways remote from natural patterns of human behavior, and it necessarily implies our fate depends on decisions we cannot make.
 
low iq and here is why:
1. You are playing into the fallacy of futurology. We can predict what is going to happen in the future based off present trends, but what this doesn't take into account are "miracles", or radical breaks that can only be explained after the fact.
You either didn't understand or didn't read properly. I didn't say anything about using present trends to predict the future. I'm talking about creating well thought out contingency plans and preparing for possible eventualities of some as-of-yet-undeveloped technology ahead of time, before Pandora's box is opened.

It is a contradiction to say you can treat technology "reactively", but before it is integrated into society.
OK, so you didn't read properly. I didn't say we should treat technology reactively. I said it IS being treated reactively (that's the current paradigm in civilization's relationship with new technology), and that we should instead be more proactive in anticipating certain technologies and their potentially devastating (and irreversible) harms.

2. Who is making the decisions, what criteria is used to determine if effects of technology are "good and bad", and how can I trust them when there is a historical precedent for literally every government to make shit decisions?
I don't care who or what, but an effort has to be made to make this standard procedure. Scientists and researchers don't police themselves. They blindly do the work, with little care or concern for the consequences, for their own benefit (or for ideals like "for the sake of human knowledge"). Someone needs to police it. There are researchers, both government and private, right now as we're talking, doing research that is potentially globally catastrophic. And I don't just mean creating novel WMDs.

Cryptography, for example, is based on a simple and fundamental fact in number theory: there is no known efficient method of factoring the product of two large primes back into those primes. Mathematicians are working diligently in the service of govt and big tech without a care for the destruction that breaking such a simple thing could bring about. All encryption built on that fact would be rendered useless overnight. All banking information, private correspondence, and all sorts of private information would be open to the world. Anybody with a handful of free operating brain cells can do the math on how bad that would be.
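The asymmetry here can be sketched in a few lines of Python (a toy illustration only; real RSA moduli are hundreds of digits, and the primes below are tiny ones chosen for speed): multiplying two primes is one instruction, while undoing the multiplication by naive trial division already means searching up to the square root of the product.

```python
def trial_factor(n):
    """Recover the smallest prime factor of an odd number n
    by naive trial division (exponential in the digit count)."""
    f = 3
    while f * f <= n:
        if n % f == 0:
            return f
        f += 2
    return n  # n itself is prime

# Multiplying the primes is trivial...
p, q = 10007, 10009
n = p * q  # 100160063, the "public" part
# ...but undoing it requires search. For primes hundreds of
# digits long, this loop would outlast the universe.
found = trial_factor(n)
print(found, n // found)  # 10007 10009
```

Note that this illustrates only factoring-based schemes such as RSA; symmetric ciphers do not rest on this particular problem.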

To expand more upon the second point, do we take a deontological approach, or a consequentialist approach?
A hybrid approach. A binary, either/or approach is restrictive anyway, as is visible to anyone with a rudimentary understanding of each approach. This should be less of a prescriptive ethic and more of an enforcement of rules.

No matter what gets picked, what ends up happening is somebody is enforcing their ethics on other people, imposing a control on their behavior, which is the main concern Ted has about technology, restriction of human freedom.
What is with people and this knee-jerk, almost childish reaction to having rules imposed upon them? It's already happening to you. They just call them "laws," instead.

This is one of those places where it's too obviously question begging. It's nonsensical to say that you can't restrict freedom of technology because technology restricts freedom. Uncle Ted's insistence on the abject stopping of technology and imposing this on all of society is, ironically, restricting human freedom. His arguments are sound and his foresight is almost scary, but he failed here.

Though I suppose that's the ultimate lesson here. Some freedoms must be restricted, or I'd argue, controlled in the case of technology. There needs to be something like an FDA but for tech (putting aside the obvious corruption with Big Pharma and how an analogous situation would arise with Big Tech).

3. The good can't be removed from the bad, and you make the fallacy of linear causality. The system exists as a web of relationships, rather than one thing causes another. You can't have modern medicine, for example, without factories, chemists, computer scientists, etc. which all have their dependencies. And this says nothing of the inherent side effects that exists in technologies itself.
This is inaccurate and also a red herring.

Technology is designed with a clear purpose and intent. For example, the purpose of creating vaccines (technology, medical health) is arguably good, while the purpose of creating chemical weapons (technology, military weapon) is arguably bad. One saves lives, the other takes them. You could split hairs and argue things like, "but now you can directly inject poisonous compounds into someone and kill them," or, "but now you can easily win battles with minimal infrastructural damage, while saving so many lives of your countrymen," to illustrate that technology can be misused, or that intent of use can be different from intent of creation (I'm sure people who created chemical weapons didn't do so with the intent to preserve buildings or keep more of their troops alive, but to brutally and efficiently kill the enemy).

The good is clearly separable from the bad, whether by intent, consequence, or both, and this is clear regardless of your ethical system. Ethical systems are best used as guides and references, rather than arbiters and adjudicators of decisions. The former is more rational, because in practice you will always tend to have exceptions and edge cases, whereas the latter is almost dogmatically restrictive.

4. If we were to somehow treat technology reactively to prevent its negative effects, even assuming we could somehow predict all of them perfectly and assuming it were possible to stop the development of technology, it would necessarily imply that we stop the development of all technology and destroy all systems-based technology, because it is the nature of technology to have effects independent of our intended usage for it and it is the nature of the system to restrict our freedoms due to the social contract.
This point is moot. I argued for proactivity with respect to novel technological development, not reactivity.

"Unintended effect" of technology, as a phenomenon, is not inherent to technology itself. What you're describing is the propensity for humans to misuse technology and bring it far away from its intended purpose.

Take internet social networking. The technology behind it was already present (large-scale computer networking, aka "the internet"). The creation of something like Facebook, for instance, didn't really add anything new (the mathematics was already known in graph theory and algorithm design). It was an idea (stolen, or "reappropriated," but still just an idea) that became a big business and helped contribute to the scaling up of the internet.
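The graph-theory point can be illustrated with a toy sketch (the names and the scoring rule are hypothetical, not any real site's algorithm): a social network is just an adjacency map, and a "people you may know" feature is little more than counting common neighbors.

```python
# A toy friend graph as an adjacency map (names hypothetical).
friends = {
    "alice": {"bob", "carol"},
    "bob": {"alice", "dave"},
    "carol": {"alice", "dave"},
    "dave": {"bob", "carol"},
}

def suggestions(graph, user):
    """Rank non-friends of `user` by number of mutual friends."""
    scores = {}
    for friend in graph[user]:
        for candidate in graph[friend]:
            if candidate != user and candidate not in graph[user]:
                scores[candidate] = scores.get(candidate, 0) + 1
    return sorted(scores, key=scores.get, reverse=True)

print(suggestions(friends, "alice"))  # ['dave'] (two mutual friends)
```

The underlying math (common-neighbor counting, link prediction) long predates any social-media company, which is the point being made above.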

But what soon happened is that the US government (and others) quickly saw the spying utility for this (or they already knew and were waiting for the convenient opportunity). So now you have the internet, which was never created for spying, nor had spying built into its creation conceptually, being used to spy on literally billions of people with Facebook as means to do it for them.

No matter what kind of system you have, a technoindustrial system necessarily implies making people live and behave in ways remote from natural patterns of human behavior, and it necessarily implies our fate depends on decisions we cannot make.
I think you're missing the metaphysical point of technology.

On a fundamental level, technology is created with the purpose of assisting humanity. The whole point of it is to make certain tasks easier for humans to do in order to help increase their survivability or extend their functionality somehow beyond their physical body. There's nothing fundamentally special or unique about the technological process. Other animals use technology too, limited only by their brain, body, and environment. They, too, use tools to help them improve their survival.

Humans don't have special, predetermined natural patterns. We don't hibernate like bears, we don't migrate seasonally away from our habitats like birds, we don't leave our infant young to fend for themselves (well, a small number of human males do KEK). We eat, shit, and fuck (chads.co). Everything else we do revolves around how to eat, shit and fuck easier and better. We shape our own habitats and we construct our own patterns, which we use as an equilibrium point to stay balanced (read: comfortable).

If a small handful of people create technology and another small handful use it to shape and control our environments, it would be terribly erroneous to attribute any lack of human freedom to technology when it's a human problem at its core. Our fate always depends on decisions we cannot make. That's the fault of other humans imposing their will on you and using technology others created to do it. Uncle Ted took a near perfect shot, but his manifesto was aimed at the wrong target. He should have been aiming at the human condition. He threw the baby out with the bathwater.

Change human nature for the better and you will change how we develop and use technology for the better. But foregoing technology will not change human nature for the better. We'll still be killing and dominating each other.

I noticed you didn't even bother with the thought exercise. JFL.
 
You either didn't understand or didn't read properly. I didn't say anything about using present trends to predict the future. I'm talking about creating well thought out contingency plans and preparing for possible eventualities of some as-of-yet-undeveloped technology ahead of time, before Pandora's box is opened.
And how do you develop these well thought out contingency plans? There are four kinds of knowledge: known knowns, unknown knowns, known unknowns, and unknown unknowns. At the very best, you can make plans and preparations based on 3 of them, but you can't plan for what you don't know that you don't know.
I don't care who or what, but an effort has to be made to make this standard procedure. Scientists and researchers don't police themselves. They blindly do the work, with little care or concern for the consequences, for their own benefit (or for ideals like "for the sake of human knowledge"). Someone needs to police it. There are researchers, both government and private, right now as we're talking, doing research that is potentially globally catastrophic. And I don't just mean creating novel WMDs.
Didn't answer the question.
What is with people and this knee-jerk, almost childish reaction to having rules imposed upon them? It's already happening to you. They just call them "laws," instead.
Not an argument.
This is one of those places where it's too obviously question begging. It's nonsensical to say that you can't restrict freedom of technology because technology restricts freedom. Uncle Ted's insistence on the abject stopping of technology and imposing this on all of society is, ironically, restricting human freedom. His arguments are sound and his foresight is almost scary, but he failed here.
You are confusing the action by which someone does harm (nocet) to another by asserting his freedom, when he cannot avoid being subjugated and made a slave, with the action whereby he does another wrong (laedit) by restricting his freedom. It was only an accident (casus) that his freedom harmed the person trying to subjugate him; it was not a free act (in a juristic sense). For to demand of another that he should sit back and take it when he is being taken advantage of would be a claim opposed to all lawfulness. One has a right to be able to act freely, but the outcome of one's actions is not freely decided.
Some freedoms must be restricted, or I'd argue, controlled in the case of technology.
So you admit the good can't be separated from the bad, that technology necessarily implies restriction of human freedom.
"Unintended effect" of technology, as a phenomenon, is not inherent to technology itself. What you're describing is the propensity for humans to misuse technology and bring it far away from its intended purpose.

Take internet social networking. The technology behind it was already present (large-scale computer networking, aka "the internet"). The creation of something like Facebook, for instance, didn't really add anything new (the mathematics was already known in graph theory and algorithm design). It was an idea (stolen, or "reappropriated," but still just an idea) that became a big business and helped contribute to the scaling up of the internet.
No, what I am describing are the effects technology necessarily has on humans and human society. The unknown unknowns. Not "misuse", but effects that happen regardless of how we use it. Like in physics, every action has an equal and opposite reaction. Technology exerts change on us as we exert change on it. If we were to apply these concepts to your internet social networking example, a better way to put it would be "the medium is the message", as McLuhan put it, and he was an influence on Ted. On the surface, social media like kikebook is intended to connect people with friends. The independent effect that it is having on us is a sort of cultural globalization, or the creation of what McLuhan called a "global village", where all the world's individuality and individual culture slowly dwindles away and is replaced by one overarching superculture, which everybody hates because usually this is dominated by the US and western thought. Another effect internet social networking has is the elimination of the obstacle of social distance in the communication process. Look what's happening now with that, how we gave them an inch and they took a mile and force us to isolate and do work and school and shit remotely. In this way, something you thought was trivial has shaped the actions and associations among humans.
I think you're missing the metaphysical point of technology.

On a fundamental level, technology is created with the purpose of assisting humanity. The whole point of it is to make certain tasks easier for humans to do in order to help increase their survivability or extend their functionality somehow beyond their physical body. There's nothing fundamentally special or unique about the technological process. Other animals use technology too, limited only by their brain, body, and environment. They, too, use tools to help them improve their survival.

Humans don't have special, predetermined natural patterns. We don't hibernate like bears, we don't migrate seasonally away from our habitats like birds, we don't leave our infant young to fend for themselves (well, a small number of human males do KEK). We eat, shit, and fuck (chads.co). Everything else we do revolves around how to eat, shit and fuck easier and better. We shape our own habitats and we construct our own patterns, which we use as an equilibrium point to stay balanced (read: comfortable).

If a small handful of people create technology and another small handful use it to shape and control our environments, it would be terribly erroneous to attribute any lack of human freedom to technology when it's a human problem at its core. Our fate always depends on decisions we cannot make. That's the fault of other humans imposing their will on you and using technology others created to do it. Uncle Ted took a near perfect shot, but his manifesto was aimed at the wrong target. He should have been aiming at the human condition. He threw the baby out with the bathwater.

Change human nature for the better and you will change how we develop and use technology for the better. But foregoing technology will not change human nature for the better. We'll still be killing and dominating each other.
Point is moot because we are talking about systems-based technologies, as Ted and I have specifically mentioned several times. I feel it is necessary to add that McLuhan characterizes technology as an extension of ourselves. Humans make both social and information networks. If technology shapes us, then we are malleable machines that relay information to each other. And if we shape technology, then technology becomes the social network of communication between us. Therefore, the concept of social and information networks becomes permeable in nature. Technology creates and destroys relationships just as people send and do not send information.
 
Where can I read his stuff
 
Uncle Ted is one of the greatest Americans to ever live, and he gets proven right more and more everyday.

Hi-Tech truly is enslaving. In the ancient world, slavery was personal. In the modern world, the slavery is far more pervasive than it has ever been, but because the slavery is impersonal rather than personal, this gives people the illusion of "freedom."

I remember reading his manifesto some years ago, and he mentions how technology makes man dependent- once a technological innovation is introduced, after some time it becomes a "necessity" and people feel that they can never again live without it. Look at smart phones for example; they went from being a niche, luxury item to being an essential device for everyday work and leisure in the span of a decade.

Hi-Tech is also responsible in large measure for inceldom, by creating an atomized society, and also by increasing female hypergamy to never-before-seen levels: due to mechanization, women no longer need to (directly) rely on men for provision/protection, which is the main reason they ever felt compelled to pair up with men in the first place.

BTW, uncle Ted hair mogs me to Jupiter and back.
 
I finished reading ISAIF for the first time today and it fucked me up even more. I already knew before reading it that humans were meant to live like our paleolithic ancestors, that our modern societies were unnatural and the reason why there are so many people with psychological problems and unable to have fulfilling lives and stuff, but I never realized how severe our situation was up until now. I mean, lack of pussy and social life is already fucked up enough, but when you see the extent to which our freedom has been restricted, how the system has taken away all of our chances to experience the human condition the way we are supposed to, how truly powerless and humiliated we are... It's frustrating. It's all a big joke man. All that's left now is just cope and cope until death. And good luck if you're lowiq like me JFL no matter how hard you try to immerse yourself in a surrogate activity, sooner or later you feel like a fucking idiot and you just can't ignore it. At least ted can afford to be narcissistic, I can't even spend a single day without feeling like an inferior piece of shit subhuman. It never fucking began.
 
And how do you develop these well thought out contingency plans?
Do the exercise I gave you earlier and you'll get an idea.

There are four kinds of knowledge: known knowns, unknown knowns, known unknowns, and unknown unknowns. At the very best, you can make plans and preparations based on 3 of them, but you can't plan for what you don't know that you don't know.
If Donald Rumsfeld became a philosopher. KEK. Those aren't legitimate epistemological classifications. I can't believe you fell for that meme response to a journalist and thought that those are actual categories of knowledge. Well, I sort of can believe it. He fooled a lot of people with that nonsense.

This line of reasoning is ridiculous. I'll leave that as another exercise for you to see why.

While we're on the subject, the four categories of knowledge are: declarative (facts), conceptual (self explanatory), procedural (algorithms and processes), and metacognitive knowledge (self awareness).

These categories are further generalized into a priori (without experience, pure reason) and a posteriori (with experience, empirical evidence).

Didn't answer the question.
I did. I said it doesn't really matter who does it. The political and tribal monkey games don't interest me. You can't make everyone happy, so any set of rules will always have some subset of people not liking the rules. The only thing that matters is that the process takes place and gets the serious treatment and attention it needs.

Not an argument.
It was commentary in reply to your comment about how there will always be somebody imposing their ethics. My comment was that this kind of imposition upon you already occurs in law, albeit with force behind it. The argument was right beneath it.

You erroneously used Ted's argument for technology restricting freedom as an argument for why some prescriptive ethic shouldn't be applied, since that is also freedom restriction. But this objection is a false equivalence. The freedom restriction that Ted argues for is not the same kind of freedom restriction mandated by a prescribed ethic.

You are confusing the action by which someone does harm (nocet) to another by asserting his freedom, when he cannot avoid being subjugated and made a slave, with the action whereby he does another wrong (laedit) by restricting his freedom. It was only an accident (casus) that his freedom harmed the person trying to subjugate him; it was not a free act (in a juristic sense). For to demand of another that he should sit back and take it when he is being taken advantage of would be a claim opposed to all lawfulness. One has a right to be able to act freely, but the outcome of one's actions is not freely decided.
What the hell is this? Is this supposed to be slam poetry? Wait... Oh, OK. This is apologetics for Ted's actions. Sure, wax poetic all you like, as long as you don't pretend it's an explicit argument.

What's your counter argument here? That rules imposed upon someone which restrict their freedoms somehow is forced subjugation and slavery, and that their act of asserting their freedom is somehow a justification for any harm done towards that outcome (freedom)? If I didn't get that right, please be less esoteric in your elaboration. It adds spice and flair, but not substance.

So you admit the good can't be separated from the bad, that technology necessarily implies restriction of human freedom.
No. Restricting a freedom (like laws against murder that restrict your freedom to commit mortal violence, resulting in the maintenance of order and the prevention of absolute chaos) does not imply (at all) that good cannot be separated from bad. The act of freedom restriction itself doesn't imply this in any logical and rational sense.

And how does that give you "technology necessarily implies restriction of human freedom"? There are huge gaps in logic there. I need you to help me find the missing chains of reasoning and link them together. How do you go from, "the good and the bad can't be separated from technology," to, "therefore technology restricts freedom"? How does the supposed inability to separate good from bad in some thing mean that having, allowing, or using the thing restricts your freedom?

No, what I am describing are the effects technology necessarily has on humans and human society. The unknown unknowns. Not "misuse", but effects that happen regardless of how we use it. Like in physics, every action has an equal and opposite reaction. Technology exerts change on us as we exert change on it. If we were to apply these concepts to your internet social networking example, a better way to put it would be "the medium is the message", as McLuhan put it, and he was an influence on Ted. On the surface, social media like kikebook is intended to connect people with friends. The independent effect that it is having on us is a sort of cultural globalization, or the creation of what McLuhan called a "global village", where all the world's individuality and individual culture slowly dwindles away and is replaced by one overarching superculture, which everybody hates because usually this is dominated by the US and western thought. Another effect internet social networking has is the elimination of the obstacle of social distance in the communication process. Look what's happening now with that, how we gave them an inch and they took a mile and force us to isolate and do work and school and shit remotely. In this way, something you thought was trivial has shaped the actions and associations among humans.
Technology isn't like a natural force exerting itself on society. Contrary to popular perceptions of technology and beliefs about it, technology is in fact mostly a constant, despite constantly changing. At first glance, this appears to be contradictory, but I'll explain shortly.

Technology progresses in two ways: the increasing demands of society (see: computation, and recently automation), and novel experimentation. In the first case societal forces motivate the ad hoc research and development of technology. When this happens society's - now higher - demands are met and society reaches homeostasis at a new equilibrium point. In the second case novel (sometimes appropriately called "disruptive") technology is discovered or developed and eventually introduced into society. When this occurs society is out of homeostasis and must adapt and readjust to a new equilibrium point.

In the first case technology improves without society needing to adjust, but in the second case it's the reverse. That is, society needs to adjust to technological improvements. In both cases technology remains static and constant, with brief bursts of improvement, only to go back to being static. An analogy would be a theoretical car on a theoretical freeway that has constant velocity, with brief bursts of acceleration to a new velocity, where it's constant again. The car keeps going faster and faster over a long period of time, but it doesn't slow down. Bringing the analogy back, what would slow down the car for us would be some major catastrophe or war. Nukes and EMP bombs would slow down the car to the point where you could open the door and roll out.

Point is moot because we are talking about systems-based technologies, as Ted and I have specifically mentioned several times.
Everything I said in that quote applies broadly to all levels of technology: it's there to assist humanity.

I feel it is necessary to add that McLuhan characterizes technology as an extension of ourselves. Humans make both social and information networks. If technology shapes us, then we are malleable machines that relay information to each other. And if we shape technology, then technology becomes the social network of communication between us. The concept of social and information networks therefore becomes permeable in nature. Technology creates and destroys relationships just as people send and do not send information.
We've always had information networks independent of any technology, by virtue of being a tribalist, social species, though. Technology has helped vastly improve this process, as you've described. Now Darlene can spread that juicy gossip about Betsy's affair with one single Facebook post, instead of the old fashioned way of spreading the good word face to face one or two acquaintances at a time. You're seriously going to blame the internet and Facebook for that? If you do, it's the same logic as blaming the gun in a shooting.

We can't place the blame on the technology for ruining relationships any more than we can place the blame on the messengers who delivered bad news to kings from opposing lands and lost their heads for it (you can bet they wish they had email).
 
yes it was.
btw you should watch "Unabomber: In His Own Words", it's a good documentary
 
Utter nonsense. Men build machines to improve people's lives, and that has been achieved
But the unintended consequences are the real problem
 
If Donald Rumsfeld became a philosopher. KEK. Those aren't legitimate epistemological classifications. I can't believe you fell for that meme response to a journalist and thought that those are actual categories of knowledge. Well, I sort of can believe it. He fooled a lot of people with that nonsense.

This line of reasoning is ridiculous. I'll leave that as another exercise for you to see why.

While we're on the subject, the four categories of knowledge are: declarative (facts), conceptual (self explanatory), procedural (algorithms and processes), and metacognitive knowledge (self awareness).

These categories are further generalized into a priori (without experience, pure reason) and a posteriori (with experience, empirical evidence).
It's black swan theory. Not an argument
I did. I said it doesn't really matter who does it.
Low iq cope. It seems like your whole system of political philosophy is DUDE, WHAT IF I COULD TRUST THE GUYS MAKING DECISIONS, AND THEY DID EVERYTHING I WANT THEM TO.
No. Restricting a freedom (like laws against murder that restrict your freedom to commit mortal violence, resulting in the maintenance of order and the prevention of absolute chaos) does not imply (at all) that good cannot be separated from bad. The act of freedom restriction itself doesn't imply this in any logical and rational sense.
Sovereignty over oneself and one's decisions is an inalienable right. By restricting my freedom and controlling my behavior without my consent you are taking from me and making me a slave. Which I think we can all agree is bad.
And it looks like you didn't read ISAIF, because things like laws against murder aren't what he's talking about. One example he gives is surveillance, which you explicitly complain about when you talk about kikebook.
What's your counter argument here? That rules imposed upon someone which restrict their freedoms are somehow forced subjugation and slavery, and that their act of asserting their freedom is somehow a justification for any harm done in pursuit of that outcome (freedom)? If I didn't get that right, please be less esoteric in your elaboration. It adds spice and flair, but not substance.
Everybody has a right to be free. If my freedom somehow hurts you when you are trying to impinge upon my freedom, it is a conflict between the oppressor and this inalienable right. For me, it is only an accident which I cannot avoid.
And how does that give you "technology necessarily implies restriction of human freedom"? There are huge gaps in logic there. I need you to help me find the missing chains of reasoning and link them together. How do you go from, "the good and the bad can't be separated from technology," to, "therefore technology restricts freedom"? How does the supposed inability to separate good from bad in some thing mean that having, allowing, or using the thing restricts your freedom?
I am just going by what you admitted to me, when you say
Some freedoms must be restricted, or I'd argue, controlled in the case of technology.


Technology isn't like a natural force exerting itself on society. Contrary to popular perceptions of technology and beliefs about it, technology is in fact mostly a constant, despite constantly changing. At first glance, this appears to be contradictory, but I'll explain shortly.

Technology progresses in two ways: the increasing demands of society (see: computation, and recently automation), and novel experimentation. In the first case societal forces motivate the ad hoc research and development of technology. When this happens society's - now higher - demands are met and society reaches homeostasis at a new equilibrium point. In the second case novel (sometimes appropriately called "disruptive") technology is discovered or developed and eventually introduced into society. When this occurs society is out of homeostasis and must adapt and readjust to a new equilibrium point.

In the first case technology improves without society needing to adjust, but in the second case it's the reverse. That is, society needs to adjust to technological improvements. In both cases technology remains static and constant, with brief bursts of improvement, only to go back to being static. An analogy would be a theoretical car on a theoretical freeway that has constant velocity, with brief bursts of acceleration to a new velocity, where it's constant again. The car keeps going faster and faster over a long period of time, but it doesn't slow down. Bringing the analogy back, what would slow down the car for us would be some major catastrophe or war. Nukes and EMP bombs would slow down the car to the point where you could open the door and roll out.
None of this is relevant to what you were replying to, and admitting technology acts as a revolutionizing agent only agrees with my point. Face it: you got btfo'd by the internet social networking example.
Everything I said in that quote applies broadly to all levels of technology: it's there to assist humanity.
And as you said it changes humanity too.
We've always had information networks independent of any technology, by virtue of being a tribalist, social species, though. Technology has helped vastly improve this process, as you've described. Now Darlene can spread that juicy gossip about Betsy's affair with one single Facebook post, instead of the old fashioned way of spreading the good word face to face one or two acquaintances at a time. You're seriously going to blame the internet and Facebook for that? If you do, it's the same logic as blaming the gun in a shooting.

We can't place the blame on the technology for ruining relationships any more than we can place the blame on the messengers who delivered bad news to kings from opposing lands and lost their heads for it (you can bet they wish they had email).
Here you are either blatantly ignoring what I am saying or still don't understand, so I'll give you another example. Technology serves as an extension of man, and media serves as an extension of the senses. The nature of technology determines the "order" of sensory preferences in a culture. You talk about information networks independent of technology, but media technology has changed us drastically by reordering our senses since the time we actually lived like that. In the tribal world, every one of our senses was well developed for practical purposes, with the ear being the sense for communication (speech). The ear is all-inclusive, sensitive, and cannot be focused, as opposed to the analytic and linear eye, which contributed to the harmony and interdependence of tribal kinship. With the invention of the phonetic alphabet, this equilibrium was thrown into permanent disruption by installing sight as the dominant sense. In the words of McLuhan:
Literacy propelled man from the tribe, gave him an eye for an ear and replaced his integral in-depth communal interplay with visual linear values and fragmented consciousness. As an intensification and amplification of the visual function, the phonetic alphabet diminished the role of the senses of hearing and touch and taste and smell, permeating the discontinuous culture of tribal man and translating its organic harmony and complex synaesthesia into the uniform, connected and visual mode that we still consider the norm of “rational” existence. The whole man became fragmented man; the alphabet shattered the charmed circle and resonating magic of the tribal world, exploding man into an agglomeration of specialized and psychically impoverished “individuals,” or units, functioning in a world of linear time and Euclidean space... When tribal man becomes phonetically literate, he may have an improved abstract intellectual grasp of the world, but most of the deeply emotional corporate family feeling is excised from his relationship with his social milieu. This division of sight and sound and meaning causes deep psychological effects, and he suffers a corresponding separation and impoverishment of his imaginative, emotional and sensory life. He begins reasoning in a sequential linear fashion; he begins categorizing and classifying data. As knowledge is extended in alphabetic form, it is localized and fragmented into specialties, creating division of function, of social classes, of nations and of knowledge–and in the process, the rich interplay of all the senses that characterized the tribal society is sacrificed.
The invention of the phonetic alphabet has thus imbued in man a radically different concept of time-space relationships. It further affected interpersonal relationships by replacing speech as the main means of communication, which destroyed tribal relationships and senses of community. So yes, I can place the blame on technology. In your example of the messengers and the kings, the focus should not be on the content of the message but on the nature of messengers as a means of communication. And with facebook, we should not focus on Darlene gossiping about Betsy's affair, but on how facebook as a means of communication has had an effect on social spaces. I don't blame the gun in the shooting, but the nature of the gun as opposed to some other instrument of death should be analyzed when considering the nature of violence. "The medium is the message".

Anyway, I got you to admit technology acts as a revolutionizing agent and that it necessarily restricts human freedom, so it seems like this is pretty much wrapped up and this will probably be my last reply.
 
It's black swan theory. Not an argument
What you described (Donald Rumsfeld epistemology) is not black swan theory. The theory is simply about identifying highly improbable events that were completely unpredictable and are rationalized as predictable after the fact. If you want to label this as "unknown unknowns" for convenience's sake, that's fine, but it doesn't categorize knowledge in the way you've claimed. Also, black swan theory is not an argument either, because you still have to make the argument for how it applies.

Low iq cope. It seems like your whole system of political philosophy is DUDE, WHAT IF I COULD TRUST THE GUYS MAKING DECISIONS, AND THEY DID EVERYTHING I WANT THEM TO.
How many times do I have to say it? I don't care about political philosophy. I don't have a political "system." You can argue and bicker amongst yourselves along your tribal affiliations to decide what rules you should have in place regarding this and which dear leader you should listen to. That isn't a problem with my proposal that there needs to be some kind of authoritative body that controls technological development and advancement, instead of letting it run rampant and then dealing with the fallout after the fact. That's, as they say, a you problem.

You seem to be ironically unaware that forcing others to abandon technology, like Ted was trying to do with his mail campaign, is doing precisely what you're speaking out against: forcing others to follow your laws and dictums. I get it, you have problems with authoritative restrictions on your freedom (even if that includes restricting your freedom to develop technology that would destroy us all). That's not my argument (the restriction on your personal freedom), so stop arguing something I'm not saying.

Sovereignty over oneself and one's decisions is an inalienable right. By restricting my freedom and controlling my behavior without my consent you are taking from me and making me a slave. Which I think we can all agree is bad.
And it looks like you didn't read ISAIF, because things like laws against murder isn't what he's talking about. One example he gives is surveillance, which you explicitly complain about when you talk about kikebook.
You're arguing a false equivalence again. You are taking an argument against the restriction of your personal sovereignty, e.g. by a government with its laws, and then using THAT argument against the restriction of free technological development via some as-of-yet undefined and unestablished authority (because "authority bad"), as a reason why we should abandon technology altogether. Your reasoning is grossly erroneous, or at least - to be charitable - incomplete (see the previous gaps which you haven't addressed).

You also haven't made a rebuttal against the counter argument that restrictions on freedom do not imply that good cannot be separated from bad. You've just been picking and choosing what to respond to and ignoring challenges. That's not good faith argumentation.

Everybody has a right to be free. If my freedom somehow hurts you when you are trying to impinge upon my freedom, it is a conflict between the oppressor and this inalienable right. For me, it is only an accident which I cannot avoid.
OK, great. Whatever. This seems to be a problem you have against government and authority. That's not our focus here. Once again, I don't care about the political arguments surrounding restrictions on your personal sovereignty, because that's a red herring.

I am just going by what you admitted to me, when you say
No, you've got it backwards.

"Technology necessarily implies restriction of human freedom." This is your argument.

"Some freedoms must be restricted, or I'd argue, controlled in the case of technology." And this is mine.

I'll ask again. How does restricting and controlling the development of technology (to prevent it from reaching systems level and having the devastating effects on society that are so worrisome) mean that technology necessarily implies restriction of human freedom?

Remember how I said earlier that it's question begging (circular)? It's patently nonsensical to say that we must stop technology because it limits our freedoms and then when I say that we should control technology, you turn it right back around and point out that this control of technology is putting restrictions on our freedom. To do what, create technology? Hmmm. Yes, that's the whole point. Restrict tech, before it restricts you (societally).

You're trying to use my counter point as an answer to a follow up point. It neither answers my question, nor strengthens your argument.

None of this is relevant to what you were replying to, and admitting technology acts as a revolutionizing agent only agrees with my point. Face it: you got btfo'd by the internet social networking example.
Your claim was that technology "exerts change on us as we exert change on it." In doing this you've categorized technology as some sort of monolithic force, acting independently from us and on us (very similar to gravity, from the way you've described it). I instead argued that it isn't anything of the sort: technology is mostly constant, and there are two ways in which it becomes momentarily variable i.e., changes (improves, really, because technology, by intent, never regresses - nobody is going to try and design a more efficient musket to use in war, for example).

Facebook falls under the second case. The technology was already there; Facebook just put together some existing lego pieces to make something new. Calling it "revolutionary" is like calling Amazon "revolutionary" for changing online shopping, but that's a matter of opinion. When you boil it down, Amazon really just centralized online shopping, and Facebook centralized digital social communication. The "revolution" is in how society adjusted to these changes. Entire businesses and industries ("social media marketing" JFL) were built around it. Actual technological revolutions (the kind of thing I suspect you're talking about, if I'm being charitable) are things like the inventions of the transistor and the microchip.

These "unintended" changes are what you like to refer to as the "unknown unknowns," I suppose. Call it whatever suits your fancy. The fact is that technology is just a tool, even at the systems level. It's inanimate and lifeless; we give it purpose (both in its intent of creation and in creative new ways) and utilize it how we see fit according to our needs. The internet isn't "alive" and we don't have a symbiotic relationship with it, where both sides exert some pressure on each other. The internet is not some thing that exists independently, separate from us. We use it to communicate and conduct our business. And it certainly isn't some natural force to be feared.

And as you said it changes humanity too.
Not in the way you're construing it.

Here you are either blatantly ignoring what I am saying or still don't understand, so I'll give you another example. Technology serves as an extension of man, and media serves as an extension of the senses. The nature of technology determines the "order" of sensory preferences in a culture. You talk about information networks independent of technology, but media technology has changed us drastically since we actually lived like that by reordering our senses. In the tribal world, every one of our senses was well developed for practical purposes, with the ear being the sense for communication (speech). The ear is all inclusive, sensitive, and cannot be focused as opposed to the analytic and linear eye, which contributed to the harmony and interdependence of tribal kinship. With the invention of the phonetic alphabet, this equilibrium was thrown into permanent disruption by installing sight as the dominant sense. In the words of McLuhan:
But those information networks still ARE independent of the technology that restructured the priority of senses in culture. You and I could still be having this discussion and back and forth in a tribalist society. The differences, though, are obvious. The tempo, pace, and structure of our words (in a written, digital format vs a live, oral one demanding only the aural sense in real time) would certainly be different. The quality of our discussion, too, would be hindered (you wouldn't be able to quote McLuhan, unless he was present right there with us sitting around the fire, for example). Before the internet, philosophers and thinkers would write letters back and forth and do more or less what you and I are doing here (at much higher quality than us, of course), but instead of taking a day or two like we're doing, it would take them months to reply to each other.

Of course, this changes the way we live. IT'S SUPPOSED TO, BY DESIGN. Teleology is intrinsic to the metaphysic of technology. In plain English: technology does not exist in reality without design and intended purpose, aka it's not a naturally occurring phenomenon. Civilizational existence augmented with technology is far superior to a tribal existence - orders of magnitude superior. It's self-evident, and it's frankly surprising that somebody as brilliant as Ted Kaczynski would be so wrong about this. I suppose it is true after all that when a genius makes a mistake, it's catastrophically big.

This goes back to criticism of Ted's claim that technology disrupts man's natural state, and comes full circle to what I said earlier. Man does not have some natural state that is intrinsically human. I suppose you could argue that the capacity to reason and use language is in fact man's natural state, since this is a uniquely human trait (I'm sure somebody, somewhere, sometime made this argument long before I existed). To argue that technology takes us far away from our natural state is assuming the truth of a questionable premise.

The invention of the phonetic alphabet has thus imbued in man a radically different concept of time space relationships.
This is a function of language, not technology (see: Sapir-Whorf hypothesis).

It further affected interpersonal relationships by replacing speech as the main means of communication, which destroyed tribal relationships and senses of community. So yes, I can place the blame on technology.
The implication that communities can only exist within tribal structures is circular. Communities are tribal structures. The alphabet reorganized communities and expanded their size and scope. It only disrupted the structure of tribes that relied solely on speech, much like how the automobile disrupted the structure of businesses that relied solely on the horse.

Your blaming technology for disrupting your potential state of minimalist, tribalist community and its attendant comforts is you blaming guns for shootings that disrupt your workflow comfort, because now you have to go through security screening procedures and metal detectors every time you go to work or leave on a lunch break and come back (and also have to waste productive time going through active shooter response training). I trust that you're sufficiently reasonable to see the irrationality of that position.

In your example of the messengers and the kings, the focus should not be on the content of the message but the nature of messengers as means of communication.
The messenger IS the "technology." The king taking his anger out on the messenger is precisely him blaming the technology for the misfortune of whatever message was brought to his kingdom.

Yes, the focus should be on the nature of the technology. In this case trusted messengers and envoys were the best possible technology available at the time (you can't always trust a pigeon with important and sensitive information). This is why we look for technological innovation and improvements whenever we have tangible problems that need solving.

And with facebook, we should not focus on Darlene gossiping about Betsy's affair, but how facebook as a mean of communication has had an effect on social spaces.
No, the medium is not the message. The messenger is not the message he's delivering to the king. McLuhan's catch phrase is as silly as the old adage, "what doesn't kill you makes you stronger." Sounds cool and wise on the face of it, but upon close inspection it's absurd. Paralysis doesn't kill me either. Guess I'm stronger now. Lemme get out of my wheelchair and break my deadlift PR.

Of course, Facebook has had an effect on social spaces. Who's denying that? Why are you concerned about Facebook facilitating gossip when you should be concerned that people gossip in the first place?

Technology doesn't fix problems of the human condition, nor does it make them worse. Technology makes the existing problems more visible. Technology allows good humans to be better humans, and it allows bad humans to be worse. (This isn't alluding to the inseparability problem of separating the good of technology from the bad that you were claiming earlier. I'm talking specifically about the human condition here.)

Technology will always be misused, because that is ultimately a human problem. This is why I'm surprised that Ted focused his efforts on stopping the tools, rather than stopping the misuse of those tools.

I don't blame the gun in the shooting, but the nature of the gun as opposed to some other instrument of death should be analyzed when considering the nature of violence. "The medium is the message".
FAN-FUCKING-TASTIC.

You don't blame technology then. Good news: You're not as irrational as I thought.

You can certainly analyze the nature of the technology in all aspects. That happens all of the time in academia, for example, and recently it's happening in the public sphere with all of this hoopla surrounding AI and the alarmist proclamations of its existential threat to us by public intellectuals and business figures. (If only we had some way of controlling the development of this technology. Hm, oh well.) However, it's a huge leap of logic to go from the analysis of technology and its effects on society to the conclusion that technology is to blame.

Anyway, I got you to admit technology acts as a revolutionizing agent and that it necessarily restricts human freedom, so it seems like this is pretty much wrapped up and this will probably be my last reply.
No, you didn't, as I already explained in this post and in previous posts.

But feel free to close your eyes and ears and walk away. The technology you're using to read this certainly isn't restricting your freedom to do so. KEK
 
Last edited:
it started with the big bang
 
"n-no you didnt!" lmao jfl
Now you're being retarded by resorting to sophistry, by taking quotes out of context and slapping them together to try and look witty and clever, because you have no legitimate rebuttals to respond with.

You know that doesn't work, right? You're just making an ass out of yourself. Don't let me stop you, though. It can be entertaining.

Look, I get it. You're a Ted Kaczynski fanboy, and you dogmatically adhere to his anti-technology beliefs, which makes you immune to any rational critique of his thesis and arguments. If he didn't send out mail bombs and decided to retire to his cabin instead, there would probably be a cult following around him (against his wishes, no doubt) with groups of people showing up at his cabin in the woods holding candles and standing outside like something out of a fucking M Night Shyamalan movie. Then he'd probably give each of you all letters to personally deliver that would be intended to prematurely blow up in your faces, just so you could all leave him alone. :feelskek:
 
