## Cubed once again

Weird fact of the day: suppose $x_1, ..., x_k \geq 0$. Then:

$\lim_{n\rightarrow\infty}(\sum_{j=1}^{k} x_{j}^{n} )^{1/n}=\max \lbrace x_1, ..., x_k \rbrace$

What does it really mean? It means: if you take some nonnegative numbers $x_1, ..., x_k$, raise each of them to the power of n, add the results together, and take the n-th root of the sum, then as you increase n the result approaches the greatest of the numbers you started with.
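Don’t just take my word for it. Here is a quick numerical sketch (illustrative only; the helper name and sample values are mine):

```python
# Illustrative sketch: the n-th root of the sum of n-th powers creeps
# toward the maximum as n grows. Sample values chosen arbitrarily.
def root_of_power_sum(xs, n):
    m = max(xs)
    # Factor out the maximum so large powers don't overflow.
    return m * sum((x / m) ** n for x in xs) ** (1.0 / n)

xs = [0.3, 1.7, 2.5, 2.49]
for n in [1, 10, 100, 1000]:
    print(n, root_of_power_sum(xs, n))
# The printed values approach max(xs) = 2.5.
```

Note that even the runner-up 2.49 gets shaken off eventually, just very slowly.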

Why is that weird? Because the procedure is symmetric with respect to all the numbers $x_1, ..., x_k$, yet the result depends on only one of them, the greatest one (let’s call it $x_\ast$). All the other k-1 numbers are completely irrelevant: you can set them all to zero and it will not change the outcome, because all the work is done by $x_\ast$, and by $x_\ast$ alone. How does the procedure “determine” which number is the greatest?

How do we prove this amazing fact? Like this:

We already defined $x_\ast:=\max \lbrace x_1, ..., x_k \rbrace$. Now:

$(x_\ast^{n} )^{1/n} \leq (\sum_{j=1}^{k} x_{j}^{n} )^{1/n} \leq (k \cdot x_\ast^{n} )^{1/n}$ (the first inequality holds because $x_\ast^{n}$ is one of the summands, the second because every summand is at most $x_\ast^{n}$) and

$\lim_{n\rightarrow\infty}(x_\ast^{n})^{1/n}=\lim_{n\rightarrow\infty}(k\cdot x_\ast^{n} )^{1/n}=x_\ast$, therefore, by the squeeze, $\lim_{n\rightarrow\infty}(\sum_{j=1}^{k} x_{j}^{n} )^{1/n}=x_\ast$, the end.
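If you want to watch the squeeze happen, here is a small illustrative sketch (my own toy code, arbitrary values):

```python
# Squeeze in action: x* <= (sum x_j^n)^(1/n) <= k^(1/n) * x* for every n,
# and the upper bound k^(1/n) * x* sinks to x* as n grows.
xs = [0.5, 1.0, 2.0]
k, x_star = len(xs), max(xs)
for n in [1, 5, 50]:
    s = sum(x ** n for x in xs) ** (1.0 / n)
    assert x_star <= s <= k ** (1.0 / n) * x_star
    print(n, x_star, s, k ** (1.0 / n) * x_star)
```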

Does it work with other families of functions $\varphi_n(x)$, or only with powers and roots? The proof only uses some monotonicity and the fact that $\lim_{n\rightarrow\infty} \varphi_n^{-1}(k\cdot \varphi_n(x))=x$, so it should also work, for example, when $\varphi_n(x)=n^x$, meaning that:

$\lim_{n\rightarrow\infty}(\log_n\sum_{j=1}^{k} n^{x_j} )=\max \lbrace x_1, ..., x_k \rbrace$
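A quick illustrative check of this variant (convergence is slow here, since the error term behaves like $\log_n k$; the helper name is mine):

```python
import math

# log_n(sum of n^(x_j)) sinks toward max(x_j); factoring out n^max
# keeps the sum numerically tame for huge n.
def log_base_n_of_sum(xs, n):
    m = max(xs)
    return m + math.log(sum(n ** (x - m) for x in xs), n)

xs = [0.2, 1.0, 3.0]
for n in [2, 100, 10**6]:
    print(n, log_base_n_of_sum(xs, n))
# Prints roughly 3.47..., then 3.00002..., then 3.0 to within 1e-12:
# the values sink toward max(xs) = 3.
```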

I can’t think of a good way to completely characterize the class of functions for which the proof holds. Can you?

What does this mean geometrically? It means that as n increases, the level surfaces of the function $f_n(x_1, ..., x_k)=(\sum_{j=1}^{k} |x_j|^{n} )^{1/n}$ defined on a k-dimensional space start to resemble the surface of a k-dimensional cube, because that surface is a level surface of $f_\infty(x_1, ..., x_k)=\max \lbrace |x_1|, ..., |x_k| \rbrace$. You might remember that we already spoke about this in my previous post titled “On Norms”.
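One way to watch the corners grow, even without an animation (an illustrative sketch): solve $f_n(t, ..., t)=1$ for $t$ along the main diagonal.

```python
# On the diagonal, f_n(t, ..., t) = t * k^(1/n), so the level surface
# f_n = 1 crosses the diagonal at t = k^(-1/n). The cube's corner sits
# at t = 1, and the gap closes as n grows.
k = 3
for n in [2, 10, 1000]:
    t = k ** (-1.0 / n)
    print(n, t)
# t climbs from ~0.577 (the Euclidean ball in 3D) toward 1 (the cube).
```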

Have you made a cool-looking but completely pointless animation to demonstrate this phenomenon? I thought you’d never ask:

(you may or may not need to click on it)

Does this fact have any useful applications? Not that I know of, which proves that this is some high quality math, and not some applied rubbish. (Just kidding, applied math can be beautiful too. Once in a thousand years.)


### 6 Responses to Cubed once again

1. christopherolah says:

> I can’t think of a good way to completely characterize the class of functions, for which the proof holds. Can you?

Besides something like “locally monotone”, we want $(\forall \epsilon > 0)(\exists n)(\phi_n^{-1}(n\phi_n(x)) -x < \epsilon)$.

Obviously, a $\phi$ that is, in some sense yet to be determined, changing fast would result in a smaller change in $\phi^{-1}(n\phi(x))$ (think IFT).

But using the normal idea of differentiation entangles us in discussions about the size of $\phi(x)$ and such. It would be much nicer to use the geometric derivative.

In fact, it is fairly clear that we want $\lim _{n\to\infty} D^*\phi_n(x) = \infty$… something needs to be said about the rate of convergence, but I'm too tired to work it out.

Or my jet lag-addled mind has gone crazy. There's some evidence for that in the "it won't let me sleep despite being tired" thing…

• George G. says:

Let me formalize what I think you’re thinking. Let us assume that $\varphi_n$ is monotonically increasing, and so is $\varphi_n'$. We want to prove that:
$\varphi^{-1}_n(k \varphi_n(x))\rightarrow x$, when $n \rightarrow \infty$.
Let us rewrite that as:
$\varphi^{-1}_n(k \varphi_n(x))\rightarrow \varphi^{-1}_n(\varphi_n(x))$
which is equivalent to:
$\varphi^{-1}_n(k \varphi_n(x)) -\varphi^{-1}_n(\varphi_n(x))\rightarrow 0$
We assumed that $\varphi_n'$ is monotonically increasing, which makes $\varphi_n^{-1}$ concave and gives us the bound:
$\varphi^{-1}_n(k \varphi_n(x)) -\varphi^{-1}_n(\varphi_n(x)) \leq (k-1)\varphi_n(x)\cdot \frac{d\varphi^{-1}_n(y)}{dy}\big\vert_{y=\varphi_n(x)}$
but, by the formula for the derivative of the inverse function:
$(k-1)\varphi_n(x)\cdot \frac{d\varphi^{-1}_n(y)}{dy}\big\vert_{y=\varphi_n(x)} = \frac{\varphi_n(x)}{\varphi_n'(x)}(k-1)$
This implies that $\varphi^{-1}_n(k \varphi_n(x))\rightarrow x$, if $\forall x\geq 0: \frac{\varphi_n(x)}{\varphi_n'(x)} \rightarrow 0$, so we have a sufficient condition.
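Here is a quick numerical check of this sufficient condition on the two families from the post (an illustrative sketch; variable names and sample values are mine):

```python
import math

# phi_n(x) = x^n gives phi_n(x)/phi_n'(x) = x^n / (n x^(n-1)) = x/n, and
# phi_n(x) = n^x gives phi_n(x)/phi_n'(x) = n^x / (n^x ln n) = 1/ln(n);
# both vanish, and correspondingly phi_n^{-1}(k * phi_n(x)) -> x.
x, k = 2.0, 5
for n in [10, 100, 10000]:
    inv_powers = x * k ** (1.0 / n)     # (k * x^n)^(1/n), simplified
    inv_exps = math.log(k * n ** x, n)  # log_n(k * n^x) = x + log_n(k)
    print(n, x / n, 1 / math.log(n), inv_powers, inv_exps)
# The ratio columns sink toward 0 and the inverse columns toward x = 2.0
# (slowly for the n^x family).
```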

But $\forall x\geq 0: \frac{\varphi_n(x)}{\varphi_n'(x)} \rightarrow 0$ is equivalent to $\forall x\geq 0: \varphi_n^\ast(x)\rightarrow \infty$, because $\varphi_n^\ast(x)=\exp(\frac{\varphi_n'(x)}{\varphi_n(x)})$, where $\varphi_n^\ast$ is the geometric derivative.

Oh, and I think you made a typo here: $(\forall \epsilon > 0)(\exists n)(\phi_n^{-1}(n\phi_n(x)) -x < \epsilon)$ should be $(\forall \epsilon > 0)(\exists n)(\phi_n^{-1}(k\phi_n(x)) -x < \epsilon)$, where k is some constant, otherwise you get a completely different problem.

• christopherolah says:

Yeah, the n was a mistake. I blame us using the same variable for the number of dimensions and the term in the sequence. 😛 I did think it was odd that I kept having issues making the rate exceed n.

In any case, your proof is fairly straight-forward, but I can’t help but feel like it defeats the point of using the geometric derivative…

Again, we have $\varphi^{-1}_n(k \varphi_n(x))\rightarrow \varphi^{-1}_n(\varphi_n(x)) = x$. Consider that, by local approximation with geometric derivatives, $\varphi^{-1}_n(k \varphi_n(x))$ is approximately $x+\log_{f^*(x)}(k)$. Let the geometric derivative be monotone and this is an upper bound (if I weren’t lazy, I’d use some sort of weaker condition — I really just need to rule out the case of a negative second geometric derivative that keeps increasing and pushing it farther and farther back).

So we want to show that $\varphi^{-1}_n(k \varphi_n(x))\rightarrow x$.

Now, as $f^*(x) \to \infty$, $\log_{f^*(x)}(k) \to 0$, so one ends up with
$\varphi^{-1}_n(k \varphi_n(x))\rightarrow x$, the desired result.

• George G. says:

I took the liberty of tweaking and merging two of your comments for the sake of neatness. I hope you don’t mind.

“In any case, your proof is fairly straight-forward, but I can’t help but feel like it defeats the point of using the geometric derivative…”

Oh, it absolutely does. I am a geometric derivatives skeptic. I think that everything that can be proven using them can also be proven without them, without complicating the proof much. But I may be mistaken about it.

Anyway. It seems, judging by your remarks, that you are trying to construct the same proof I came up with, but using geometric derivatives. Alright. But it seems that your proof requires $f^\ast$ to be monotonically increasing, and I only wanted $f'$ to be monotonically increasing. Your condition is stronger (consider $f(x)=x^2$: $f'(x)=2x$ is increasing, but $f^\ast (x)=\exp(2/x)$ is decreasing).
Also, I don’t quite understand why $\varphi^{-1}_n(k \varphi_n(x))$ is approximately $x+\log_{f^*(x)}(k)$. It may or may not be correct: I don’t claim to understand geometric calculus perfectly, but it just doesn’t look right to me. Please clarify. (f is $\varphi_n$, I assume.)

2. christopherolah says:

>I am a geometric derivatives skeptic.

I know. I got quite a bit of amusement out of resolving this using them, just because of that.

> I think that everything that can be proven using them can be proven without them without complicating the proof much.

It’s a matter of elegance. Perhaps it will be clearer in my solution to:

> Also, I don’t quite understand why $\varphi^{-1}_n(k \varphi_n(x))$ is approximately $x+\log_{f^*(x)}(k)$.

Actually, I’m going to step back in the proof a bit because I can make things a bit neater.

To prove $\varphi^{-1}_n(k \varphi_n(x))\rightarrow \varphi^{-1}_n(\varphi_n(x)) = x$, let’s solve for $\epsilon$ in $\varphi^{-1}_n(k \varphi_n(x)) = x + \epsilon$.

Well, consider $k\varphi_n(x) = \varphi_n(x + \epsilon)$. We expand to the first term in the geometric approximation: $k\varphi_n(x) = \varphi_n(x)\cdot\varphi^*_n(x)^\epsilon$. Some simple algebraic manipulation results in $\epsilon = \log_{\varphi^*_n(x)}(k)$.
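For the family $\varphi_n(x) = n^x$ this estimate happens to be exact, which makes a handy sanity check (an illustrative sketch; helper names are mine):

```python
import math

# For phi_n(x) = n^x the geometric derivative is phi* = exp(ln n) = n,
# so the estimate predicts epsilon = log_n(k); the exact value
# phi_n^{-1}(k * n^x) - x = log_n(k) agrees on the nose.
def exact_epsilon(x, k, n):
    return math.log(k * n ** x, n) - x

def estimated_epsilon(k, n):
    return math.log(k, n)  # log base phi*(x) = n of k

x, k = 1.5, 3
for n in [10, 100, 1000]:
    print(n, exact_epsilon(x, k, n), estimated_epsilon(k, n))
# Both columns agree and sink toward 0, so phi^{-1}(k phi(x)) -> x.
```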

All this is a formalization of the fact that a high geometric derivative of a function $f$ means that the difference between $f(x)$ and $kf(x)$ is in some sense small. So as the geometric derivative grows, $f(x)$ and $kf(x)$ go to the same point (in terms of the value of x that would be needed to achieve the same result…).

The reason I like this a lot more than your approach is that it is conceptually smaller: it’s a result of the basic idea of the geometric derivative.

• George G. says:

Ah, I think I see it now! It’s nice, because it allows us to avoid using the formula for the derivative of the inverse function.