
SuicideFuel Math thread problem (official)

sorry but I don't follow
I was using this thing: wikipedia.org/wiki/Fundamental_theorem_of_Galois_theory

It basically describes the symmetries (field automorphisms) that permute the roots of a polynomial in a field extension. sqrt(p) for any prime p corresponds to the 2-element group because its minimal polynomial has degree 2. In general the polynomial (x^2 - p_1)(x^2 - p_2) ... (x^2 - p_n) has Galois group (C_2)^n.
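
Not a proof of the general claim, but a quick sympy sanity check of the degree-4 / (C_2)^2 picture for Q(sqrt(2), sqrt(3)) (a sketch; assumes sympy is installed):

# Sanity check: [Q(sqrt(2), sqrt(3)) : Q] = 4, matching |(C_2)^2| = 4.
# sqrt(2) + sqrt(3) is a primitive element of Q(sqrt(2), sqrt(3)).
from sympy import sqrt, Symbol, minimal_polynomial, degree

x = Symbol('x')
p = minimal_polynomial(sqrt(2) + sqrt(3), x)
print(p)             # x**4 - 10*x**2 + 1
print(degree(p, x))  # 4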
 
Yeah sure, I got that. But a) proving that the Galois group of Q(sqrt(p_1), ... , sqrt(p_m)) is (C_2)^m uses that sqrt(p_1) through sqrt(p_m) are linearly independent over Q to begin with, which is part of what you're trying to prove, so induction is necessary. And b) how does this yield that "none of these subfields contain Q(sqrt(p_(m+1)))"?
 
I will edit this post with the inductive step later. As for part b), it basically amounts to proving that Q(sqrt(p_(m+1))) is not a subfield of any of those subfields, and this comes down to the base case Q(sqrt(p_i)) =/= Q(sqrt(p_j)).
 
Again, I don't think it's that simple. Q(√2) ≠ Q(√6) and Q(√3) ≠ Q(√6) yet Q(√6) is a subfield of Q(√2,√3). Also, what do you even need Galois groups for then?
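
For concreteness, the counterexample is easy to verify with sympy (a sketch; assumes sympy):

# Q(sqrt(6)) sits inside Q(sqrt(2), sqrt(3)) even though it equals neither Q(sqrt(2)) nor Q(sqrt(3)):
# sqrt(6) = sqrt(2)*sqrt(3), and it has degree 2 over Q while [Q(sqrt(2), sqrt(3)) : Q] = 4.
from sympy import sqrt, Symbol, minimal_polynomial, simplify

x = Symbol('x')
print(simplify(sqrt(2)*sqrt(3) - sqrt(6)))       # 0, so sqrt(6) lies in Q(sqrt(2), sqrt(3))
print(minimal_polynomial(sqrt(6), x))            # x**2 - 6   (degree 2)
print(minimal_polynomial(sqrt(2) + sqrt(3), x))  # x**4 - 10*x**2 + 1   (degree 4)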
 
In any linear combination of square roots of square-free numbers involving p_1 through p_(m+1) which equals 0, the value sqrt(p_(m+1)) can be factored out and written as a ratio of linear combinations of square roots of square-free numbers involving only the other primes.

Example: a * sqrt(2) + b * sqrt(3) + c * sqrt(6) = 0 => sqrt(3) = -a * sqrt(2) / (b + c * sqrt(2))
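
The algebra in the example can be checked mechanically (a sketch; assumes sympy, with the symbol s3 standing in for sqrt(3)):

# Solve a*sqrt(2) + b*sqrt(3) + c*sqrt(6) = 0 for sqrt(3), treating sqrt(3) as an unknown s3
# and using sqrt(6) = sqrt(2)*sqrt(3) = sqrt(2)*s3.
from sympy import sqrt, symbols, solve

a, b, c, s3 = symbols('a b c s3')
expr = a*sqrt(2) + b*s3 + c*sqrt(2)*s3
print(solve(expr, s3))  # [-sqrt(2)*a/(b + sqrt(2)*c)], matching the expression above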
 
Proof of (x^2 - p_1)...(x^2 - p_m) having Galois group (C_2)^m:

The automorphism corresponding to each p_i is tau_i : sqrt(p_i) -> -sqrt(p_i), fixing the other square roots. Each tau_i generates a C_2, and combining all of these gives (C_2)^m.

I haven't studied this stuff formally, so sorry if this is not rigorous enough.
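
Again, not a substitute for a rigorous argument, but the sign-flip picture can be sanity-checked (a sketch; assumes sympy): each of the 2^3 sign patterns sends sqrt(2) + sqrt(3) + sqrt(5) to another root of its degree-8 minimal polynomial.

# The sign flips tau_i: sqrt(p_i) -> -sqrt(p_i) produce 2^3 = 8 conjugates of
# sqrt(2) + sqrt(3) + sqrt(5); all of them are roots of the same minimal polynomial.
from itertools import product
from sympy import sqrt, Symbol, minimal_polynomial, degree, expand

x = Symbol('x')
alpha = sqrt(2) + sqrt(3) + sqrt(5)
p = minimal_polynomial(alpha, x)
print(degree(p, x))  # 8 = 2^3
for s1, s2, s3 in product([1, -1], repeat=3):
    conjugate = s1*sqrt(2) + s2*sqrt(3) + s3*sqrt(5)
    assert expand(p.subs(x, conjugate)) == 0
print("all 8 sign patterns give roots")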
 
Let f : R^2 → R be continuous and recall that the graph of f is given by the subspace Γ_f := {(x, f(x)) | x ∈ R^2} ⊂ R^3. Prove that Γ_f ⊂ R^3 is closed.
 
In any linear combination of square roots of square-free numbers involving p_1 through p_(m+1) which equals 0, the value sqrt(p_(m+1)) can be factored out and written as a ratio of linear combinations of square roots of square-free numbers involving only the other primes.

Example: a * sqrt(2) + b * sqrt(3) + c * sqrt(6) = 0 => sqrt(3) = -a * sqrt(2) / (b + c * sqrt(2))
Sure, but again, why does this help you? I don't think this is helpful. I could be wrong but as yet I'm not convinced.
 
It would imply that sqrt(3) is in Q(sqrt(2)), or more generally that sqrt(p_(m+1)) is in Q(sqrt(p_1), ... , sqrt(p_m)).
 
While I don't much love your approach, I reconsidered and it's workable, so I retract my previous statement of it not being helpful. You technically need to split off a case to avoid dividing by 0, but other than that your approach can work if applied inductively. What you now have to prove is that √p_(m+1) ∉ Q(√p_1, ..., √p_m). I really don't think Galois theory is the answer.
 
Ok now prove it's path-connected
Let p & q be two points on the graph. Then p = (x, f(x)) and q = (y, f(y)) for some x, y ∈ R^2. Now consider the function
g : [0, 1] ∋ t ↦ ( tx + (1 − t)y, f(tx + (1 − t)y) ).
Since f is continuous, so is g. Since g(0) = p & g(1) = q we have our continuous path.
 
Let X be a probability distribution of bounded vectors in R^d, let X_1, X_2, and X_3 be three independent random vectors drawn from X, and let r be an arbitrary nonnegative real. Prove that 2 * probability(norm(X_1 + X_2) >= r) >= (probability(norm(X_3) >= r))^2.
  • If r = 0, then the LHS equals 2 while the RHS is at most 1, so there's nothing to prove.
    If r > 0, then we can rescale the probability distribution so that we may WLOG assume that r = 1.
  • If P( |X3| ⩾ 1 ) = 0, then the RHS = 0, so we'll assume that P( |X3| ⩾ 1 ) > 0. The LHS can now be bounded from below:
    2 * P( |X1 + X2| ⩾ 1 ) ⩾ 2 * P( |X1 + X2| ⩾ 1 | |X1| ⩾ 1, |X2| ⩾ 1 ) * P( |X1| ⩾ 1, |X2| ⩾ 1 ).
    Since P( |X1| ⩾ 1, |X2| ⩾ 1 ) = P( |X3| ⩾ 1 ) ^ 2 > 0 by independence, all that needs to be proven is that
    P( |X1 + X2| ⩾ 1 | |X1| ⩾ 1, |X2| ⩾ 1 ) ⩾ ½ for all probability distributions X on R^d.
    To that end, we will minimize P( |X1 + X2| ⩾ 1 | |X1| ⩾ 1, |X2| ⩾ 1 ) w.r.t. X.
  • |X1 + X2| ⩾ 1 iff
    1 ⩽ |X1 + X2|^2 = |X1|^2 + |X2|^2 + 2<X1, X2> = |X1|^2 + |X2|^2 + 2 * |X1| * |X2| * cos(angle) iff
    cos(angle) ⩾ ( 1 − |X1|^2 − |X2|^2 ) / ( 2 * |X1| * |X2| ).
    Since we are conditioning on the event that |X1| ⩾ 1 and |X2| ⩾ 1, we avoid division by 0.
    If we want to minimize
    P( |X1 + X2| ⩾ 1 | |X1| ⩾ 1, |X2| ⩾ 1 ) = P( cos(angle) ⩾ ( 1 − |X1|^2 − |X2|^2 ) / ( 2 * |X1| * |X2| ) | |X1| ⩾ 1, |X2| ⩾ 1 )
    then we want ( 1 − |X1|^2 − |X2|^2 ) / ( 2 * |X1| * |X2| ) to be as large as possible subject to |X1| ⩾ 1 and |X2| ⩾ 1.
    This is because norms and "angles" are "independent".
    I'll skip the proof, but ( 1 − |X1|^2 − |X2|^2 ) / ( 2 * |X1| * |X2| ) is maximal with value −½ when |X1| = |X2| = 1.
    As such, we need only minimize P( <Z1, Z2> ⩾ −½) for Z1, Z2 ~ Z, where Z is any probability distribution on S^(d−1).
    This is because cos(angle) = <Z1, Z2> when Z1 and Z2 lie on S^(d−1).
That was the easy part. Now onto the hard part -- minimizing P( <Z1, Z2> ⩾ −½) w.r.t. Z.
  • If d = 1, then S^(d−1) = S^0 = {±1}.
    It's easy to verify that the unique minimizer is the Rademacher distribution, which achieves a probability of ½ as expected.
  • If d = 2, I don't know how to proceed.
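
A quick Monte Carlo sanity check of the original inequality and of the conditional ⩾ ½ bound the argument above needs (a sketch; assumes numpy, and X uniform on [-1, 1]^d is just one arbitrary bounded test distribution):

# Empirically check 2*P(|X1+X2| >= r) >= P(|X3| >= r)^2 and, for this test distribution,
# the conditional bound P(|X1+X2| >= r | |X1| >= r, |X2| >= r) >= 1/2.
import numpy as np

rng = np.random.default_rng(0)
d, N, r = 3, 200_000, 1.0
X1, X2, X3 = (rng.uniform(-1, 1, size=(N, d)) for _ in range(3))
n1, n2, n12, n3 = (np.linalg.norm(a, axis=1) for a in (X1, X2, X1 + X2, X3))

lhs = 2 * np.mean(n12 >= r)
rhs = np.mean(n3 >= r) ** 2
cond = np.mean(n12[(n1 >= r) & (n2 >= r)] >= r)
print(lhs, rhs, lhs >= rhs)  # the inequality should hold comfortably here
print(cond)                  # should come out well above 0.5 for this distribution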
 
Let p & q be two points on the graph. Then p = (x, f(x)) and q = (y, f(y)) for some x, y ∈ R^2. Now consider the function
g : [0, 1] ∋ t ↦ ( tx + (1 − t)y, f(tx + (1 − t)y) ).
Since f is continuous, so is g. Since g(0) = p & g(1) = q we have our continuous path.
g(0) here is q while g(1) = p. Also, how does this show that R^2 x {0} is path-connected?
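
A quick numerical check of the endpoints (a sketch; assumes numpy, and f(x, y) = x^2 + y^2 plus the two sample points are arbitrary stand-ins):

# For g(t) = (t*x + (1-t)*y, f(t*x + (1-t)*y)):
# g(0) lands on q = (y, f(y)) and g(1) lands on p = (x, f(x)).
import numpy as np

def f(v):  # stand-in continuous function R^2 -> R
    return v[0]**2 + v[1]**2

x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])
p = np.append(x, f(x))  # the point (x, f(x)) in R^3
q = np.append(y, f(y))  # the point (y, f(y)) in R^3

def g(t):
    w = t*x + (1 - t)*y
    return np.append(w, f(w))

print(np.allclose(g(0.0), q))  # True
print(np.allclose(g(1.0), p))  # True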
 
While I don't much love your approach, I reconsidered and it's workable, so I retract my previous statement of it not being helpful. You technically need to split off a case to avoid dividing by 0, but other than that your approach can work if applied inductively. What you now have to prove is that √p_(m+1) ∉ Q(√p_1, ..., √p_m). I really don't think Galois theory is the answer.
What is your method? When I saw the problem it immediately reminded me of Galois theory.

  • If r = 0, then the LHS equals 2 while the RHS is at most 1, so there's nothing to prove.
    If r > 0, then we can rescale the probability distribution so that we may WLOG assume that r = 1.
  • If P( |X3| ⩾ 1 ) = 0, then the RHS = 0, so we'll assume that P( |X3| ⩾ 1 ) > 0. The LHS can now be bounded from below:
    2 * P( |X1 + X2| ⩾ 1 ) ⩾ 2 * P( |X1 + X2| ⩾ 1 | |X1| ⩾ 1, |X2| ⩾ 1 ) * P( |X1| ⩾ 1, |X2| ⩾ 1 ).
    Since P( |X1| ⩾ 1, |X2| ⩾ 1 ) = P( |X3| ⩾ 1 ) ^ 2 > 0 by independence, all that needs to be proven is that
    P( |X1 + X2| ⩾ 1 | |X1| ⩾ 1, |X2| ⩾ 1 ) ⩾ ½ for all probability distributions X on R^d.
    To that end, we will minimize P( |X1 + X2| ⩾ 1 | |X1| ⩾ 1, |X2| ⩾ 1 ) w.r.t. X.
  • |X1 + X2| ⩾ 1 iff
    1 ⩽ |X1 + X2|^2 = |X1|^2 + |X2|^2 + 2<X1, X2> = |X1|^2 + |X2|^2 + 2 * |X1| * |X2| * cos(angle) iff
    cos(angle) ⩾ ( 1 − |X1|^2 − |X2|^2 ) / ( 2 * |X1| * |X2| ).
    Since we are conditioning on the event that |X1| ⩾ 1 and |X2| ⩾ 1, we avoid division by 0.
    If we want to minimize
    P( |X1 + X2| ⩾ 1 | |X1| ⩾ 1, |X2| ⩾ 1 ) = P( cos(angle) ⩾ ( 1 − |X1|^2 − |X2|^2 ) / ( 2 * |X1| * |X2| ) | |X1| ⩾ 1, |X2| ⩾ 1 )
    then we want ( 1 − |X1|^2 − |X2|^2 ) / ( 2 * |X1| * |X2| ) to be as large as possible subject to |X1| ⩾ 1 and |X2| ⩾ 1.
    This is because norms and "angles" are "independent".
    I'll skip the proof, but ( 1 − |X1|^2 − |X2|^2 ) / ( 2 * |X1| * |X2| ) is maximal with value −½ when |X1| = |X2| = 1.
    As such, we need only minimize P( <Z1, Z2> ⩾ −½) for Z1, Z2 ~ Z, where Z is any probability distribution on S^(d−1).
    This is because cos(angle) = <Z1, Z2> when Z1 and Z2 lie on S^(d−1).
That was the easy part. Now onto the hard part -- minimizing P( <Z1, Z2> ⩾ −½) w.r.t. Z.
  • If d = 1, then S^(d−1) = S^0 = {±1}.
    It's easy to verify that the unique minimizer is the Rademacher distribution, which achieves a probability of ½ as expected.
  • If d = 2, I don't know how to proceed.
I will write up a thing in LaTeX for this soon.
 
The point P = (x_1, ... , x_n) is uniformly distributed on the portion of the hyperplane x_1 + ... + x_n = 1 where x_i >= 0 for 1 <= i <= n, with n >= 1. Find E(max(x_i)) and E(min(x_i)) in terms of n. More generally, find the expected value of the k-th largest (or smallest) coordinate.
 
I tried for a bit, but no dice. Is this what you had in mind?
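
In case it helps, here is a Monte Carlo sketch to get numerical targets (assumes numpy; uniform on that simplex is the Dirichlet(1, ..., 1) distribution). The estimates look consistent with E(max) = (1 + 1/2 + ... + 1/n)/n and E(min) = 1/n^2, which would match the classical uniform-spacings formulas, but that part still needs a proof.

# Monte Carlo estimates of E[max coordinate] and E[min coordinate] for a point
# uniform on the simplex x_1 + ... + x_n = 1, x_i >= 0 (i.e. Dirichlet(1, ..., 1)).
import numpy as np

rng = np.random.default_rng(0)
for n in (2, 3, 5, 10):
    P = rng.dirichlet(np.ones(n), size=200_000)
    est_max, est_min = P.max(axis=1).mean(), P.min(axis=1).mean()
    harmonic = sum(1/i for i in range(1, n + 1))
    print(n, round(est_max, 4), round(harmonic/n, 4), round(est_min, 5), round(1/n**2, 5))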
 
Let (a,b,c) be a Pythagorean triple. Prove that 60 divides abc.

Bonus (difficult): prove that the only primitive Pythagorean triple for which rad(abc) = 30 is (a,b,c) = (3,4,5). Here rad(n) denotes the radical of the positive integer n -- i.e., the product of the distinct prime divisors of n (with rad(1) = 1).
 

All primitive triples can be represented as (m^2 - n^2, 2mn, m^2 + n^2) by https://en.wikipedia.org/wiki/Pythagorean_triple#Proof_of_Euclid's_formula, with m and n relatively prime, and one of them being even.

Then abc = 2mn(m - n)(m + n)(m^2 + n^2), so it suffices to show that mn(m - n)(m + n)(m^2 + n^2) is divisible by 30. One of m, n is even, so the factor of 2 is taken care of. For every pair of residues of m and n mod 3, at least one of m, n, m - n, m + n is 0 mod 3. Mod 5, the only residue pairs for which none of m, n, m - n, m + n is 0 mod 5 are (1, 2), (1, 3), (2, 4), (3, 4) up to order, and each of these makes m^2 + n^2 equal to 0 mod 5.
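
A brute-force check of the divisibility claim over Euclid's formula (a sketch; plain Python, with an arbitrary small search bound):

# Check 60 | abc for every primitive triple (m^2 - n^2, 2mn, m^2 + n^2)
# with m > n >= 1, gcd(m, n) = 1 and exactly one of m, n even.
from math import gcd

for m in range(2, 200):
    for n in range(1, m):
        if gcd(m, n) == 1 and (m - n) % 2 == 1:
            a, b, c = m*m - n*n, 2*m*n, m*m + n*n
            assert (a * b * c) % 60 == 0
print("60 | abc for all primitive triples with m < 200")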

---------------------------------------------------------------------------------------------------------------------------------------------------

Bonus

m and n are relatively prime, and in order to satisfy the conditions, both of them can only have prime factors in {2, 3, 5}. By the Euclidean algorithm, {m, n, m - n, m + n} are pairwise relatively prime (using the condition that exactly 1 of {m, n} is even). In order to distribute the prime factors 2, 3, 5 among these, either n = 1, or m - n = 1, or both, since 3 prime factors cannot be distributed among 4 pairwise relatively prime numbers that are all greater than 1.

If n = m - n = 1, then m = 2, creating the triple 3, 4, 5.

If n = 1, then the set {m - 1, m, m + 1} must contain 1 power of 2, 1 power of 3, and 1 power of 5 (the three numbers are pairwise relatively prime, each greater than 1, and each has prime factors only in {2, 3, 5}, so each is a power of a different one of these primes). m has to be the power of 2. Since m^2 + 1 is represented as 4^k + 1 for some k, it cannot be divisible by 2 or 3, so it must be a power of 5. However, since m^2 - 1 = (m - 1)(m + 1) is divisible by 5, m^2 + 1 is 2 mod 5, which is a contradiction.

If m - n = 1, then the set {m, n, m + n} must contain 1 power of 2, 1 power of 3, and 1 power of 5. This set can be rewritten as {m - 1, m, 2m - 1}. The value m^2 + n^2 can be rewritten as 2m^2 - 2m + 1. Since 2m^2 - 2m + 1 is 1 mod m - 1, 1 mod m, and m mod 2m - 1, it is not divisible by any of 2, 3, 5, so it would have to be divisible by some other prime, contradicting rad(abc) = 30.
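
And a brute-force search for the bonus, for whatever reassurance it's worth (a sketch; plain Python, bound arbitrary):

# Search primitive triples for rad(abc) = 30; only (3, 4, 5) should show up.
from math import gcd

def rad_is_30(n):
    # rad(n) == 30 iff 2, 3 and 5 all divide n and no other prime does
    if n % 2 or n % 3 or n % 5:
        return False
    for p in (2, 3, 5):
        while n % p == 0:
            n //= p
    return n == 1

hits = []
for m in range(2, 300):
    for n in range(1, m):
        if gcd(m, n) == 1 and (m - n) % 2 == 1:
            a, b, c = m*m - n*n, 2*m*n, m*m + n*n
            if rad_is_30(a * b * c):
                hits.append(tuple(sorted((a, b, c))))
print(hits)  # [(3, 4, 5)]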
 
All primitive triples can be represented as (m^2 - n^2, 2mn, m^2 + n^2) by https://en.wikipedia.org/wiki/Pythagorean_triple#Proof_of_Euclid's_formula, with m and n relatively prime, and one of them being even.
Overkill, but yes, that works.
m and n are relatively prime, and in order to satisfy the conditions, both of them can only have prime factors in {2, 3, 5}. By the Euclidean algorithm, {m, n, m - n, m + n} are pairwise relatively prime (using the condition that exactly 1 of {m, n} is even). In order to distribute the prime factors 2, 3, 5 among these, either n = 1, or m - n = 1, or both, since 3 prime factors cannot be distributed among 4 pairwise relatively prime numbers that are all greater than 1.

If n = m - n = 1, then m = 2, creating the triple 3, 4, 5.

If n = 1, then the set {m - 1, m, m + 1} must contain 1 power of 2, 1 power of 3, and 1 power of 5 (the three numbers are pairwise relatively prime, each greater than 1, and each has prime factors only in {2, 3, 5}, so each is a power of a different one of these primes). m has to be the power of 2. Since m^2 + 1 is represented as 4^k + 1 for some k, it cannot be divisible by 2 or 3, so it must be a power of 5. However, since m^2 - 1 = (m - 1)(m + 1) is divisible by 5, m^2 + 1 is 2 mod 5, which is a contradiction.

If m - n = 1, then the set {m, n, m + n} must contain 1 power of 2, 1 power of 3, and 1 power of 5. This set can be rewritten as {m - 1, m, 2m - 1}. The value m^2 + n^2 can be rewritten as 2m^2 - 2m + 1. Since 2m^2 - 2m + 1 is 1 mod m - 1, 1 mod m, and m mod 2m - 1, it is not divisible by any of 2, 3, 5, so it would have to be divisible by some other prime, contradicting rad(abc) = 30.
Very nice. Very different from what I had in mind, but probably cleaner.
 
Prove that (abc)^2 + a^2 + b^2 + c^2 + 2 >= 2(ab + bc + ca) for real numbers a, b, c
 
Restrict a, b, c to positive reals: replacing each of them by its absolute value leaves the LHS unchanged and can only increase the RHS, and if any of them is 0 the inequality is immediate.

AM-GM on (abc)^2 + 1 + 1 shows that it suffices to prove the stronger inequality:

a^2 + b^2 + c^2 + 3 (abc)^(2/3) >= 2(ab + bc + ca)

Since this is homogeneous of degree 2 in (a, b, c), we may rescale so that abc = 1. Then the inequality becomes:

3 >= 2/a + 2/b + 2/c - a^2 - b^2 - c^2

Let a = e^x, b = e^y, c = e^z with x+y+z=0

The function f(x) = 2e^(-x) - e^(2x) has an inflection point at -ln(2) / 3

Spamming Jensen's inequality to maximize f(x) + f(y) + f(z) subject to x + y + z = 0 gives the maximum at x = y = z = 0, where the sum is 3.
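
A quick random sanity check of the original inequality (a sketch; assumes numpy, sampling range arbitrary):

# Numerically check (abc)^2 + a^2 + b^2 + c^2 + 2 >= 2(ab + bc + ca) on random reals.
import numpy as np

rng = np.random.default_rng(0)
a, b, c = rng.uniform(-5, 5, size=(3, 1_000_000))
gap = (a*b*c)**2 + a**2 + b**2 + c**2 + 2 - 2*(a*b + b*c + c*a)
print(gap.min())  # should be >= 0; equality holds at a = b = c = 1 (and at a = b = c = -1)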
 
Doesn't Jensen only tell you that

f(x) + f(y) + f(z) <= 3 * f( (x + y + z) / 3 ) = 3 * f(0) = 3

holds so long as x & y & z are all greater than the inflection point?
Should have written out the steps.

WLOG x <= y <= z

If x <= y <= inflection point, z is kept constant, y is shifted up towards the inflection point, and x is shifted down by the same amount (keeping x + y + z = 0)

If inflection point <= y <= z, then x is constant and y, z are shifted to (y + z) / 2

Let m = (y + z) / 2 and x = -2m.

Then maximize 2f(m) + f(-2m) over m >= 0. Writing it out, 2f(m) + f(-2m) = 4e^(-m) - e^(-4m), which is decreasing for m >= 0, so the maximum is at m = 0, where it equals 3.
 
Because z ⩾ y ⩾ x implies that m ⩾ −2m, which in turn implies that m ⩾ 0, yes.
If x <= y <= inflection point, z is kept constant, y is shifted up towards the inflection point, and x is shifted down by the same amount (keeping x + y + z = 0)
I don't think it's trivial that this increases f(x) + f(y) but yes.
It suffices to show that f(x − p) + f(y + p) is increasing in p while y + p < inflection point. Differentiating w.r.t. p yields the expression g(x − p) − g(y + p) where g(w) = 2*exp(2w) + 2*exp(−w). It now suffices to show that g(w) is decreasing while w < inflection point, which can again be obtained via differentiation.
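
A small numeric cross-check of the two calculus facts used here (a sketch; standard library only):

# f(w) = 2*exp(-w) - exp(2*w):  f'' changes sign at w0 = -ln(2)/3,
# and g(w) = 2*exp(2*w) + 2*exp(-w) is decreasing for w < w0.
from math import exp, log

w0 = -log(2) / 3
fpp = lambda w: 2*exp(-w) - 4*exp(2*w)  # f''(w)
gp = lambda w: 4*exp(2*w) - 2*exp(-w)   # g'(w)

print(fpp(w0 - 0.1) > 0, fpp(w0 + 0.1) < 0)             # True True: inflection at w0
print(all(gp(w0 - 0.01*k) < 0 for k in range(1, 500)))  # True: g' < 0 left of w0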
 
Integrate ( x*ln(x) − x + 1 ) / ( sqrt(x)*(x − 1)^2 ) from x = 0 to infinity.
 
A physics problem:

Given a = 7 m/s^3 (variable acceleration) and t = 2 s, find the final velocity starting from rest.
 
Assuming a constant jolt (or jerk) of 7 m/s^3 and that the initial acceleration and velocity are both 0, the acceleration over time is a(t) = \int_0^t 7 ds = 7t, and the velocity over time is v(t) = \int_0^t a(s) ds = \int_0^t 7s ds = 7t^2/2. Plugging in t = 2 gives a final velocity of 14 m/s.
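
The same computation in sympy, for what it's worth (a sketch; assumes sympy):

# Constant jerk of 7 m/s^3, starting from rest with zero initial acceleration:
# a(t) = integral of the jerk, v(t) = integral of a(t); evaluate at t = 2 s.
from sympy import symbols, integrate

t, s = symbols('t s', nonnegative=True)
a = integrate(7, (s, 0, t))             # 7*t
v = integrate(a.subs(t, s), (s, 0, t))  # 7*t**2/2
print(a, v, v.subs(t, 2))               # 7*t  7*t**2/2  14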
 
2+2=
Post all math-related problems and solutions here. Don't cheat.

First problem: If the coefficients of x^3 and x^4 in the expansion of (1 + ax + bx^2)(1 − 2x)^18 in powers of x are both zero, then (a, b) is equal to?
 
 
welcome to the forum btw
Hi!

I must admit that I am a complete fool when it comes to math. I do not even understand the questions of these math problems, let alone the solutions.

But I have read about physics in online forums and blogs and found some interesting things there which could also be interesting to you.

Here is another physics problem:

Given a = 7 m/s^2 (non-variable, i.e. constant, acceleration) and t = 2 s, find the final velocity starting from rest.
 
v(2) = \int_0^2 7 ds = 14 m/s

Now another physics problem:

How can a body accelerated "to the cube" (m/s^3) give the same answer as a body accelerated "to the square" (m/s^2)? Don't you agree that a body which accelerates at a rising rate should be moving faster after 2 seconds?
 
not necessarily, because its acceleration starts out lower
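
Tabulating the two velocity curves makes this concrete (a sketch; plain Python): both happen to reach 14 m/s exactly at t = 2 s, but the constant-jerk body lags the whole way up to that instant.

# Constant acceleration a = 7 m/s^2:  v1(t) = 7*t
# Constant jerk j = 7 m/s^3:          v2(t) = 7*t**2/2
for t in (0.5, 1.0, 1.5, 2.0):
    v1, v2 = 7*t, 7*t**2/2
    print(f"t = {t} s   constant acceleration: {v1:5.2f} m/s   constant jerk: {v2:5.2f} m/s")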
 
fuck you your thread sucks
 
this thread is for nerdcels
 
