Channel: Proving equivalence of two definitions of a convex-type Hamming distance - MathOverflow

Proving equivalence of two definitions of a convex-type Hamming distance


Update: If somebody can answer my question there, then I will be able to fully answer my question here.

Consider $n\in\mathbb N$ and a non-empty set $M\subset\{0,1\}^n$. I have the following conjecture:

Conjecture. It is true that $$\sup_{\alpha\in[0,1]^n, \lVert \alpha\rVert_2=1}\min_{m\in M} \langle \alpha, m\rangle = \min_{\beta\in[0,1]^M, \sum_{m\in M} \beta_m = 1} \left\lVert\sum_{m\in M}\beta_m m\right\rVert_2.$$

Here, $\beta_m m$ is just the scalar multiplication of the number $\beta_m$ with $m\in M\subset\{0,1\}^n$. Also, $\lVert \cdot\rVert_2$ is the usual Euclidean norm and $\langle\cdot,\cdot\rangle$ is the usual Euclidean inner product. (And note, of course, that $[0,1]^M$ is the set of all functions $\beta\colon M\to[0,1]$, where I write $\beta_m$ for $\beta(m)$.)
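Before looking at examples, here is a hedged numerical sanity check of the conjecture (my own sketch, not part of the question): the helper name `both_sides` is mine, the left-hand side is computed in epigraph form (maximize $t$ subject to $\langle\alpha,m\rangle\ge t$ for all $m\in M$), and both optimizations use scipy's SLSQP solver, assuming scipy is available.

```python
import numpy as np
from scipy.optimize import minimize

def both_sides(M):
    """Numerically evaluate both sides of the conjecture.

    M is a (k, n) float array whose rows are the elements of the set M.
    Hypothetical helper for illustration only; SLSQP is a local solver,
    so this is a sanity check, not a proof.
    """
    k, n = M.shape
    # LHS in epigraph form: maximize t s.t. M @ alpha >= t,
    # alpha in [0,1]^n, ||alpha||_2 = 1.  Variable vector x = (alpha, t).
    res = minimize(
        lambda x: -x[-1],
        np.concatenate([np.full(n, 1 / np.sqrt(n)), [0.0]]),  # feasible start
        method="SLSQP",
        bounds=[(0, 1)] * n + [(None, None)],
        constraints=[
            {"type": "ineq", "fun": lambda x: M @ x[:n] - x[-1]},
            {"type": "eq", "fun": lambda x: x[:n] @ x[:n] - 1.0},
        ],
    )
    lhs = res.x[-1]
    # RHS: minimize ||beta @ M||_2 over the probability simplex (convex).
    res = minimize(
        lambda b: np.linalg.norm(b @ M),
        np.full(k, 1 / k),
        method="SLSQP",
        bounds=[(0, 1)] * k,
        constraints=[{"type": "eq", "fun": lambda b: b.sum() - 1.0}],
    )
    return lhs, res.fun

# Symmetric test set M = {(1,1,0), (0,1,1), (1,0,1)}: by symmetry both
# sides should come out close to 2/sqrt(3).
lhs, rhs = both_sides(np.array([[1., 1, 0], [0, 1, 1], [1, 0, 1]]))
print(lhs, rhs)
```

For this symmetric instance one can also check by hand that both sides equal $2/\sqrt 3$, which agrees with the solver output.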


For instance, it is true if $M=\{m\}$, i.e. if $M$ only contains one element. In that case, the left-hand side equals $$\sup_{\alpha\in[0,1]^n, \lVert \alpha\rVert_2 =1}\langle \alpha,m\rangle.$$

By Cauchy-Schwarz, we know that $\langle\alpha, m\rangle\le\lVert \alpha\rVert_2\lVert m\rVert_2=\lVert m\rVert_2$ and we have equality if and only if $\alpha=\frac{m}{\lVert m\rVert_2}$. Hence the left-hand side equals $\lVert m\rVert_2$.

The right-hand side also equals $\lVert m\rVert_2$, since we must have $\beta_m=1$.


If $M=\{m_1, m_2\}$, then we would have to prove$$\sup_{\alpha\in[0,1]^n, \lVert \alpha\rVert_2 = 1} \min(\langle \alpha, m_1\rangle, \langle\alpha, m_2\rangle) = \min_{\beta\in[0,1]} \lVert \beta\, m_1+(1-\beta)\, m_2\rVert_2.$$

This is already not obvious to me. However, for example with $M=\{(1,0),(0,1)\}$, both sides can be computed to equal $\frac1{\sqrt 2}$.
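The $M=\{(1,0),(0,1)\}$ computation can be double-checked numerically with a plain grid search (my own sketch, not from the question): parametrize $\alpha=(\cos t,\sin t)$ with $t\in[0,\pi/2]$ for the left-hand side and sweep $\beta\in[0,1]$ for the right-hand side.

```python
import numpy as np

# Check: for M = {(1,0), (0,1)} both sides should equal 1/sqrt(2).
m1, m2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# LHS: alpha on the unit circle restricted to [0,1]^2, i.e.
# alpha = (cos t, sin t) with t in [0, pi/2].
ts = np.linspace(0.0, np.pi / 2, 10001)
alphas = np.stack([np.cos(ts), np.sin(ts)], axis=1)
lhs = np.max(np.minimum(alphas @ m1, alphas @ m2))

# RHS: convex combinations beta*m1 + (1-beta)*m2 over a grid of beta.
betas = np.linspace(0.0, 1.0, 10001)
mixes = betas[:, None] * m1 + (1 - betas)[:, None] * m2
rhs = np.min(np.linalg.norm(mixes, axis=1))

print(lhs, rhs)  # both should be close to 1/sqrt(2) ≈ 0.7071
```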


Note: This conjecture is a Lemma that I would need to prove the equivalence of different definitions of convex distance that I found in the context of Talagrand's concentration inequality.


Another example: Consider $n=4$ and (with a slight abuse of notation) $M=\{m_1,m_2,m_3\}=\{(1,1,0,0),(0,1,1,0),(0,1,1,1)\}$.

The right-hand side is $$\min_{(\beta_1,\beta_2,\beta_3)\in[0,1]^3, \beta_1+\beta_2+\beta_3=1} \lVert (\beta_1,\beta_1+\beta_2+\beta_3,\beta_2+\beta_3,\beta_3)\rVert_2.$$

Since $\beta_1+\beta_2+\beta_3=1$, the squared norm equals $\beta_1^2+1+(1-\beta_1)^2+\beta_3^2$, which is minimized at $\beta_3=0$ and $\beta_1=\frac12$, i.e. at $\beta=(1/2,1/2,0)$. For this $\beta$ we have $$\lVert (\beta_1,\beta_1+\beta_2+\beta_3,\beta_2+\beta_3,\beta_3)\rVert_2=\lVert (1/2,1,1/2,0)\rVert_2=\sqrt{\frac32}.$$

The left-hand side is $$\sup_{(\alpha_1,\alpha_2,\alpha_3,\alpha_4)\in[0,1]^4, \alpha_1^2+\alpha_2^2+\alpha_3^2+\alpha_4^2=1} \min(\alpha_1+\alpha_2,\alpha_2+\alpha_3,\alpha_2+\alpha_3+\alpha_4)=\sup_{(\alpha_1,\alpha_2,\alpha_3,\alpha_4)\in[0,1]^4, \alpha_1^2+\alpha_2^2+\alpha_3^2+\alpha_4^2=1}\min(\alpha_1+\alpha_2,\alpha_2+\alpha_3),$$ where the last term can be dropped because $\alpha_4\ge 0$.

At the supremum we must have $\alpha_4=0$ and $\alpha_1+\alpha_2=\alpha_2+\alpha_3$; optimizing under these constraints yields $\alpha=\left(\sqrt{\frac16},\sqrt{\frac23},\sqrt{\frac16},0\right)$.

For that $\alpha$, we have $$\alpha_1+\alpha_2=\alpha_2+\alpha_3=\sqrt{\frac32}$$ and so we indeed have equality of both sides.
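The arithmetic in this example can be verified by plugging the claimed optimizers into both sides directly (a plain numpy check of my own, not part of the question; note that plugging in $\alpha$ only certifies a lower bound for the supremum, while the value itself was derived above).

```python
import numpy as np

m1 = np.array([1.0, 1.0, 0.0, 0.0])
m2 = np.array([0.0, 1.0, 1.0, 0.0])
m3 = np.array([0.0, 1.0, 1.0, 1.0])

# Claimed RHS minimizer beta = (1/2, 1/2, 0).
beta = np.array([0.5, 0.5, 0.0])
rhs = np.linalg.norm(beta[0] * m1 + beta[1] * m2 + beta[2] * m3)

# Claimed LHS maximizer alpha = (sqrt(1/6), sqrt(2/3), sqrt(1/6), 0).
alpha = np.array([np.sqrt(1 / 6), np.sqrt(2 / 3), np.sqrt(1 / 6), 0.0])
assert np.isclose(np.linalg.norm(alpha), 1.0)  # alpha is feasible
lhs = min(alpha @ m1, alpha @ m2, alpha @ m3)

print(lhs, rhs, np.sqrt(3 / 2))  # all three agree
```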

