**Without using a calculator**, how would you determine if terms of the form \(\sum b_i\sqrt{a_i} \) are positive? I am mainly interested in the case where \( a_i, b_i \) are integers.

When there are 5 or fewer terms involved, we can split the terms across the two sides and square, to reduce the number of surds involved. For example, to determine if \[\sqrt{2} - \sqrt{3} - \sqrt{5} + \sqrt{7} > 0, \] we can square both sides of \(\sqrt{2} + \sqrt{7} > \sqrt{3}+\sqrt{5} \) to obtain \[9 + 2 \sqrt{14} > 8 + 2 \sqrt{15}.\]

Repeated squaring eventually resolves this question, as the number of surds is reduced at each stage. This is a foolproof method, regardless of the numbers that are involved.
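The repeated-squaring comparison can be carried out entirely in exact integer arithmetic. Here is a minimal sketch (my own helper, not from the discussion) that decides the sign of \(\sqrt{a}+\sqrt{b}-\sqrt{c}-\sqrt{d}\) for nonnegative integers:

```python
def sign(x):
    """Sign of an integer: -1, 0, or 1."""
    return (x > 0) - (x < 0)

def surd_cmp(a, b, c, d):
    """Sign of (sqrt(a)+sqrt(b)) - (sqrt(c)+sqrt(d)) for nonnegative
    integers a, b, c, d, using only exact integer arithmetic
    (the repeated-squaring idea: each step squares away surds)."""
    # Squaring once: compare (a+b) + 2*sqrt(ab) with (c+d) + 2*sqrt(cd),
    # i.e. u + 2*sqrt(ab) with 2*sqrt(cd), where u = (a+b) - (c+d).
    u = (a + b) - (c + d)
    ab, cd = a * b, c * d
    if u >= 0:
        # Left side is nonnegative, so we may square again:
        # u^2 + 4ab + 4u*sqrt(ab)  vs  4cd.
        v = u * u + 4 * ab - 4 * cd
        if v > 0:
            return 1
        if v == 0:
            return 1 if u * u * ab > 0 else 0
        # v < 0: compare 4u*sqrt(ab) with -v; both nonnegative, square.
        return sign(16 * u * u * ab - v * v)
    # u < 0: compare 2*sqrt(ab) with 2*sqrt(cd) - u; right side positive.
    w = 4 * ab - 4 * cd - u * u
    if w < 0:
        return -1
    if w == 0:
        return -1 if u * u * cd > 0 else 0
    # w > 0: compare w with -4u*sqrt(cd); both positive, square once more.
    return sign(w * w - 16 * u * u * cd)

# sqrt(2)+sqrt(7) > sqrt(3)+sqrt(5), so the original 4-term sum is positive.
print(surd_cmp(2, 7, 3, 5))  # 1
```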

However, when six or more terms are involved, repeated squaring need not reduce the number of surds.

For example, how would you determine whether

\[\sqrt{2} - \sqrt{3} + \sqrt{5} - \sqrt{7} - \sqrt{11} + \sqrt{13} < 0 \, ?\]

I can think of several approaches:

1. There are special cases which allow us to apply Jensen's inequality. However, this gives a somewhat restrictive condition on the set of values.

2. Show that \[\sqrt{2} + \sqrt{5} + \sqrt{13} < 7.26 < \sqrt{3} + \sqrt{7} + \sqrt{11}. \] However, it might not be feasible to guess the middle number unless you already had a calculator. Taylor approximations can provide the estimates, though the bound could be very tight (and hence require many terms).

Do you have any other suggestions?



## Comments

Whether there's an efficient method for this in general is a well-known open problem in computational complexity; see http://en.wikipedia.org/wiki/Sum_of_radicals (in particular, I believe it's not even known whether this problem is in NP, let alone P).

There are, of course, a bunch of inefficient ways to figure out whether a sum like \(S = \sum_{i=1}^{n} b_i\sqrt{a_i} \) is positive or negative. One way to do this is to estimate the size of \(S\) by first computing the product of all of its conjugates,

\[P = \left|\prod_{\epsilon \in \{\pm 1\}^n} \sum_{i=1}^{n} \epsilon_i b_i\sqrt{a_i}\right|, \]

and then applying the estimate

\[|S| \geq \dfrac{P}{(\sum |b_i\sqrt{a_i}|)^{2^{n}-1}}\]

Since \(P\) is invariant under every sign flip \(\sqrt{a_i} \mapsto -\sqrt{a_i}\), it is a rational integer, so \(P \geq 1\) whenever \(S \neq 0\). Now, to determine whether \(S\) is positive or negative, it suffices to approximate \(S\) to within this lower bound for \(|S|\), which you can do via Taylor series or any other convergent expansion.
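As a quick illustration of this conjugate-product bound (a sketch only: it uses floating point to watch the integer emerge, whereas a rigorous version would need exact or interval arithmetic), take \(S=\sqrt{2}+\sqrt{3}-\sqrt{5}\):

```python
import itertools
import math

roots = [2, 3, 5]
signs_of_S = [1, 1, -1]             # S = sqrt(2) + sqrt(3) - sqrt(5)

# Product over all 2^n sign patterns of sum(+-sqrt(a_i)).
P = 1.0
for eps in itertools.product([1, -1], repeat=len(roots)):
    P *= sum(e * math.sqrt(r) for e, r in zip(eps, roots))
P = abs(P)                          # numerically, this is the integer 576

# Each conjugate is at most sum(sqrt(a_i)) in absolute value, so
# |S| >= P / (sum sqrt(a_i))^(2^n - 1).
bound = P / sum(math.sqrt(r) for r in roots) ** (2 ** len(roots) - 1)

S = sum(s * math.sqrt(r) for s, r in zip(signs_of_S, roots))
print(round(P), abs(S) >= bound)    # 576 True
```

Any approximation of \(S\) to within `bound` then pins down its sign.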


Great that you know about this, and have related it to computational complexity theory.

So, if we don't know of an efficient method that applies to all situations, can we find efficient methods that apply under certain conditions? In math, we often develop tools that work under particular conditions, and then break the problem into cases.

For example, taking the idea of Jensen's, is \( \sum \operatorname{sgn}(b_i)\, b_i^2 a_i = 0 \) a sufficient condition for us to apply Jensen's to obtain information about the sign?


That condition isn't enough; one easy way to see this: if you flip all the signs of the \(b_i\), that sum stays equal to 0, whereas the original expression switches sign.

The inequality you probably want is a generalization of Jensen's inequality called Karamata's majorization inequality (http://en.wikipedia.org/wiki/Karamata's_inequality). In addition to your condition above, you also need the positive terms to 'majorize' the negative terms (see the wikipedia link for more information). Of course, this only applies in a very limited range of situations.


Yes, the majorization inequality is a sufficient condition, and can be interpreted as Jensen's applied termwise. Essentially, you work up from the smallest value, similar to the inequality chain that I presented in my reply to Ivan.


For the given example, conjugate surds do the trick. The sum equals \( \frac { (\sqrt{2} + \sqrt{5} + \sqrt{13})^{2} -(\sqrt{3} + \sqrt{7} + \sqrt{11})^{2}} {A}\), where \(A = \sqrt{2} + \sqrt{3} + \sqrt{5} + \sqrt{7} + \sqrt{11} + \sqrt{13}\) is a positive real.

Expanding the numerator, I get \( -1 +2 \big( (\sqrt{10}-\sqrt{21})+ (\sqrt{26}-\sqrt{33}) + (\sqrt{65}-\sqrt{77}) \big) \), which is obviously negative since each bracketed difference is negative.
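A quick floating-point sanity check of this factorization (numerics only, not a proof):

```python
import math

sq = math.sqrt
S = sq(2) - sq(3) + sq(5) - sq(7) - sq(11) + sq(13)
A = sq(2) + sq(3) + sq(5) + sq(7) + sq(11) + sq(13)   # sum of all six surds
numerator = -1 + 2 * ((sq(10) - sq(21)) + (sq(26) - sq(33)) + (sq(65) - sq(77)))

# S * A = (sqrt2+sqrt5+sqrt13)^2 - (sqrt3+sqrt7+sqrt11)^2 = numerator
print(abs(S * A - numerator) < 1e-9, numerator < 0)  # True True
```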


Wait, so how is this different from squaring both sides in Calvin's example?


Admittedly, both methods yield the same computations :p. The reasoning is different, though (algebraic manipulation vs. increasing functions).


I'm not quite familiar with "Jensen's inequality". Could anyone please elaborate a bit further? I would be most grateful!


Jensen's is a statement of convexity / concavity.

A function \(f(x) \) is concave on an interval \([a,b] \) if, for any \(x\) and \(y\) in the interval and \( 0 \leq t \leq 1 \), \[ f(tx+(1-t)y)\geq t f(x)+(1-t)f(y). \]

If the function is also twice differentiable, we can simply check that \( f''(x) \leq 0 \).

If \(f(x) \) is a concave function (over the entire real line), then the maximum value of \( \sum_{i=1}^n f(x_i ) \) subject to \( \sum_{i=1}^n x_i = K\) (for some constant \(K\) ) will be attained when all \(x_i \) values are the same, namely \( \frac{K}{n} \).

As an example, since \( \sqrt{x} \) is a concave function on \( [0, \infty) \) (Show this using both ways!), thus we know that

\[ \sqrt{ 3 } + \sqrt{5} \leq 2 \sqrt{\frac{8}{2} } = 4 .\]

If you evaluate the LHS, you get \( \sqrt{3} + \sqrt{5} \approx 3.968 \). Of course, the condition that \( \sum x_i = K \) is extremely restrictive.

As an extension, to apply it to the inequality that I stated, you can show that

\[ \sqrt{3} + \sqrt{7} + \sqrt{11} \geq \sqrt{2} + \sqrt{7} + \sqrt{12} \geq \sqrt{2} + \sqrt{5} + \sqrt{14} \geq \sqrt{2} + \sqrt{5} + \sqrt{ 13 } \]
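A numeric sanity check of this smoothing chain (the first two steps keep the sum fixed at 21 while spreading the values apart, so the concave \(\sqrt{x}\) sum decreases; the last step just uses \(\sqrt{14} \geq \sqrt{13}\)):

```python
import math

def f(xs):
    """Sum of square roots of the entries."""
    return sum(math.sqrt(x) for x in xs)

chain = [(3, 7, 11), (2, 7, 12), (2, 5, 14), (2, 5, 13)]
values = [f(t) for t in chain]
print(all(values[i] >= values[i + 1] for i in range(len(values) - 1)))  # True
```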


And Jensen's Inequality says that if a function is concave, then for any nonnegative \(\lambda_1,\lambda_2,\dots,\lambda_n\) with \(\lambda_1+\lambda_2+\lambda_3+\dots+\lambda_n=1\),

\[f(\lambda_1x_1+\lambda_2x_2+\lambda_3x_3+\dots+\lambda_nx_n)\ge\lambda_1f(x_1)+\lambda_2f(x_2)+\lambda_3f(x_3)+\dots+\lambda_nf(x_n)\]

You can use Jensen's Inequality to prove AM-GM by using \(f(x)=\log x\) and \(\lambda_1=\lambda_2=\lambda_3=\dots=\lambda_n=\frac{1}{n}\). We know that \(\log x\) is concave (we can check the second derivative), and we also know \(\lambda_1+\lambda_2+\lambda_3+\dots+\lambda_n=n\cdot\frac{1}{n}=1\), so

\[\log\left(\tfrac{1}{n}a_1+\tfrac{1}{n}a_2+\tfrac{1}{n}a_3+\dots+\tfrac{1}{n}a_n\right)\ge\frac{1}{n}\log a_1+\frac{1}{n}\log a_2+\frac{1}{n}\log a_3+\dots+\frac{1}{n}\log a_n=\log\sqrt[n]{a_1a_2a_3\dots a_n},\]

and exponentiating gives

\[\frac{a_1+a_2+a_3+\dots+a_n}{n}\ge\sqrt[n]{a_1a_2a_3\dots a_n}.\]

Plugging in our definitions, we get

\[f(\lambda_1x_1+\lambda_2x_2+\lambda_3x_3+\dots+\lambda_nx_n)\ge\lambda_1f(x_1)+\lambda_2f(x_2)+\lambda_3f(x_3)+\dots+\lambda_nf(x_n)\]

which is true by Jensen's Inequality.
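A small numeric check of the AM-GM consequence (illustration only, on arbitrary sample values):

```python
import math

a = [2.0, 3.0, 7.0, 10.0]
am = sum(a) / len(a)                  # arithmetic mean
gm = math.prod(a) ** (1.0 / len(a))   # geometric mean
print(am >= gm)  # True
```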


As an explicit example, consider \(f(x) = - x^2 \). Take the points \( x_1 = 1\) and \( x_2 = 2 \). How do we express \(0\) as a weighted combination of these points? We want \( 0 = \lambda \cdot 1 + (1-\lambda) \cdot 2 \), which gives us \( \lambda = 2 \). Then, your claim would be

\[0 = f(0) = f( 2 \times 1 + (-1) \times 2) \geq 2 f(1) + (-1) f(2) = 2, \]

which is false. So the weights must lie in \([0,1]\).


Thank you very much guys. :)


http://prntscr.com/1lxjuk

I have this comparison, but it would be tedious work especially with more terms


That's a good idea, choosing to bound the difference of terms by one another.

However, this gets very complicated very quickly. There is no guarantee that there are equal numbers of positive and negative terms, nor is it clear how we should bound one difference by another. It is possible that the total sum has absolute value less than \(0.1\), while each of the differences has absolute value more than \(1\). In that case, such an approach would not work.


haha thanks :P I can only do this because I haven't yet learned all sorts of fancy things from my country's education system. This is all I could come up with :D


I'm slightly surprised that no one mentioned approximating the square roots by rational numbers using the convergents (and semi-convergents) of their continued fractions, especially since there is a question, Close to the root, which uses this method.

This gives us a quick way to approximate the surd using rational numbers, which are much easier to add and multiply.


Maclaurin series or something? I'm not very familiar with all these exquisite formulas :P


No, the approaches presented in the solutions generally require knowledge of continued fractions and convergents/semi-convergents.

I added a solution using Farey sequences to justify the answer. However, this is not ideal because it doesn't explain where to find the approximations (apart from slowly searching).


Are you sure about five surds?

\(- \sqrt{2} + \sqrt{3} + \sqrt{5} + \sqrt{7} - \sqrt{17} > 0\)

\(\sqrt{3} + \sqrt{5} + \sqrt{7} > \sqrt{2} + \sqrt{17}\)

\(15 + 2\sqrt{15} + 2\sqrt{21} + 2\sqrt{35} > 19 + 2\sqrt{34}\)

\(2\sqrt{15} + 2\sqrt{21} + 2\sqrt{35} > 4 + 2\sqrt{34}\)

I don't think the number of surds is decreased. We still have five surds (although one is simply \(\sqrt{16} = 4\) and the rest have a factor of \(2\) pulled out). I can't see a way to continue; how would you do it?


I'm more concerned with looking at the sum of many terms. With few terms, various tricks could help us slightly reduce the number. In that sense, 5/6 was chosen somewhat arbitrarily.

It is also not immediately clear that 6 terms cannot be resolved: there could be some algebraic manipulation that reduces the number of surds.

