View: https://www.youtube.com/watch?v=hEUO6pjwFOo
The video is about AI safety, but it describes 3 concepts really well.
1. The is-ought problem. - https://en.wikipedia.org/wiki/Is–ought_problem :
-> There seems to be a significant difference between positive statements (about what is) and prescriptive or normative statements (about what ought to be), and that it is not obvious how one can coherently move from descriptive statements to prescriptive ones.
-> Basically: you cannot derive ANY "ought" statement from any number of "is" statements without at least 1 "ought" statement among the premises.
How this relates to the blackpill:
The blackpill is a collection of "is" statements. It has no "ought" statements. There is also no way of deriving an "ought" statement from the blackpill without a personal "ought" statement. The blackpill CANNOT change your fundamental views or tell you what you "ought" to do, because it has no "ought" statements.
2. Intelligence.
-> Intelligence for an AGI (Artificial General Intelligence) is easily and accurately described as follows: the AGI has an accurate internal model of reality and can predict the consequences of its actions; the more accurately the AGI can do this, the more intelligent it is.
-> Intelligence directly corresponds to the validity of "is" statements, and is, once again, independent of "ought" statements.
How this relates to the blackpill:
The blackpill provides people with an accurate model of reality. Predictions made by the blackpill are generally more accurate than predictions made by the bluepill. As a model of reality, it is therefore superior to the bluepill.
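The "intelligence as predictive accuracy" definition above can be sketched in a few lines of Python (a toy illustration, not anything from the video; the model names and data are made up):

```python
# Toy sketch of "intelligence as predictive accuracy": score two candidate
# world models by how closely their predictions match observed outcomes.
# Under the definition above, the model with the lower prediction error
# counts as the more intelligent one.

def prediction_error(model, situations, outcomes):
    """Mean squared error of a model's predictions against reality."""
    errors = [(model(s) - o) ** 2 for s, o in zip(situations, outcomes)]
    return sum(errors) / len(errors)

# Hypothetical observations: reality here roughly follows y = 2x.
situations = [1.0, 2.0, 3.0, 4.0]
outcomes = [2.1, 4.0, 6.2, 7.9]

accurate_model = lambda x: 2.0 * x  # tracks how reality actually behaves
wishful_model = lambda x: 5.0       # predicts the same outcome regardless

# The accurate model predicts better, so by this definition it is "more
# intelligent" -- and note that no "ought" enters the comparison anywhere.
assert prediction_error(accurate_model, situations, outcomes) \
       < prediction_error(wishful_model, situations, outcomes)
```

The comparison only ever checks "is" statements (predictions against observations), which is exactly why the intelligence ranking stays independent of goals.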
3. The orthogonality thesis. https://wiki.lesswrong.com/wiki/Orthogonality_thesis
-> The orthogonality thesis states that an artificial intelligence can have any combination of intelligence level and goal. This is in contrast to the belief that, because of their intelligence, AIs will all converge to a common goal.
-> In the more relevant case: a human with any level of intelligence or ability can have simple or complex goals. The ability (intelligence / correctness of "is" statements) matters for how well the person can accomplish the goal, but the two are independent.
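The independence claim can be sketched as a toy enumeration (illustrative only; the capability levels and goals below are made-up placeholders, not from the video or the LessWrong wiki):

```python
# Toy sketch of the orthogonality thesis: capability and terminal goal are
# independent axes, so any pairing of the two describes a coherent agent.
# Nothing about a high capability level forces convergence to one goal.
from itertools import product

capabilities = ["low", "average", "high"]
goals = ["maximize money", "find a partner", "collect stamps"]

# Every (capability, goal) combination is a possible agent.
agents = list(product(capabilities, goals))
assert len(agents) == len(capabilities) * len(goals)  # 3 * 3 = 9 agents
```

Capability only changes how effectively a goal gets pursued, never which goal appears in the pairing.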
The point of this post: In the end, all users here can have wildly different goals and wildly different abilities. The only thing we have in common is the "blackpill", which we have observed and believe to be an accurate model of reality. All of us can have wildly different terminal goals but still find common ground, because we are not delusional. We can have stormcels and blackcels, Facecels and Heightcels, Moralfags and Hebephiles all interacting on the same forum because of the understanding that people have different fundamental goals. Bluepillers, on the other hand, will irrationally try to censor you for having different terminal goals, or for discussing "is" statements which differ from their personal "ought" statements.