Required conditions for eigenvalues $\lambda_{\min}(A) > \lambda_{\min}(B)$ and $\lambda_{\max}(A) > \lambda_{\max}(B)$, etc.?

Given $\operatorname{tr}(X^TAX) > \operatorname{tr}(X^TBX)$, where $A$ and $B$ are p.s.d., under what conditions is $\lambda_{\max}(A)>\lambda_{\max}(B)$ guaranteed? What conditions are required for $\lambda_{\min}(A)>\lambda_{\min}(B)$? All matrix entries are real valued and $X$ is a rectangular matrix. Thirdly, what are the conditions for the second smallest eigenvalue (algebraic connectivity) of $A$ to be greater than that of $B$? Fourthly, what are the conditions for the eigenvalues of $A$ to majorize the eigenvalues of $B$ from above, from below, and so forth? By majorization I mean this mathematical property: https://en.wikipedia.org/wiki/Majorization. And finally, most important of all for me: what are the conditions on $A(X)$ and $B(X)$ for $\operatorname{tr}(X^TA(X)X) > \operatorname{tr}(X^TB(X)X)$ to hold if $A(X)$ and $B(X)$ are matrix-valued functions of $X$?

=================

1 Answer

=================

If $X$ is fixed there is little that could be said in general. If we have $\operatorname{tr}(X^TAX)>\operatorname{tr}(X^TBX)$ for all $X$, then $\lambda_k(A)>\lambda_k(B)$ for all $k$. This follows from

$$\lambda_k(A)=\min_{\dim K=k}\max\{x^TAx:\ x\in K,\ x^Tx=1\},$$

the Courant-Fischer min-max characterization (eigenvalues listed in increasing order).
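As a quick numerical sanity check of this claim, here is a small sketch of my own (not part of the original answer); it uses numpy and the fact that the trace inequality holding for all nonzero $X$ is the same as $A-B$ being positive definite.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Build a random p.s.d. B and a random positive definite "gap" D, so that
# A = B + D satisfies x^T A x > x^T B x for every nonzero x, i.e.
# tr(X^T A X) > tr(X^T B X) for every nonzero X.
M = rng.standard_normal((n, n))
B = M @ M.T                       # p.s.d.
N = rng.standard_normal((n, n))
D = N @ N.T + 1e-3 * np.eye(n)    # positive definite
A = B + D

# Eigenvalues in increasing order, matching the min-max formula above.
eig_A = np.sort(np.linalg.eigvalsh(A))
eig_B = np.sort(np.linalg.eigvalsh(B))

# Every eigenvalue of A should strictly dominate the matching one of B.
print(eig_A > eig_B)              # expected: all True
```

For a single fixed $X$, by contrast, the implication fails: e.g. $A=\operatorname{diag}(2,0)$, $B=\operatorname{diag}(1,3)$ and $X=(1,0)^T$ give $\operatorname{tr}(X^TAX)=2>1=\operatorname{tr}(X^TBX)$, yet $\lambda_{\max}(A)=2<3=\lambda_{\max}(B)$. Returning to the proof of the "for all $X$" claim: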

For a fixed subspace $L$, we have $\max\{x^TAx:\ x\in L,\ x^Tx=1\}>\max\{x^TBx:\ x\in L,\ x^Tx=1\}$, since $x^TAx=\operatorname{tr}(X^TAX)>\operatorname{tr}(X^TBX)=x^TBx$, where $X$ is the matrix with $x$ in the first column and zeroes elsewhere. Then

$$\lambda_k(B)=\min_{\dim K=k}\max\{x^TBx:\ x\in K,\ x^Tx=1\}
\le \max\{x^TBx:\ x\in L,\ x^Tx=1\}
< \max\{x^TAx:\ x\in L,\ x^Tx=1\}.$$

But we can do this for any $L$ with $\dim L=k$, in particular for the $L$ that attains the minimum defining $\lambda_k(A)$, and so $\lambda_k(B)<\lambda_k(A)$.

Regarding majorization: let $P(X)$ denote the projection onto the diagonal (or "pinching"), i.e. $P(X)$ is the matrix with diagonal $X_{11},\ldots,X_{nn}$ and zeroes elsewhere. Then, thanks to the Schur-Horn theorem, the following statements are equivalent:

(i) $\lambda(B)\prec\lambda(A)$;

(ii) there exist unitaries $U,V$ such that $B=VP(UAU^*)V^*$.

Regarding your last question, "functions of $X$" is extremely vague, so I don't think that any conclusion can be drawn.
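Returning to the majorization point above, here is a small numerical sketch of my own (using numpy, with real orthogonal matrices standing in for the unitaries $U,V$, since all entries here are real): it forms $B=VP(UAU^T)V^T$ from a random symmetric $A$ and checks that $\lambda(B)\prec\lambda(A)$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6

# Random real symmetric A and random orthogonal U, V (real analogue of the
# unitaries in the Schur-Horn statement above).
S = rng.standard_normal((n, n))
A = (S + S.T) / 2
U = np.linalg.qr(rng.standard_normal((n, n)))[0]
V = np.linalg.qr(rng.standard_normal((n, n)))[0]

def pinch(M):
    """P(.): keep the diagonal of M, zero out all off-diagonal entries."""
    return np.diag(np.diag(M))

B = V @ pinch(U @ A @ U.T) @ V.T

# Check lambda(B) majorized by lambda(A): with eigenvalues sorted in
# decreasing order, every partial sum for B is at most the one for A,
# and the total sums agree (the pinching preserves the trace).
a = np.sort(np.linalg.eigvalsh(A))[::-1]
b = np.sort(np.linalg.eigvalsh(B))[::-1]
print(np.all(np.cumsum(b) <= np.cumsum(a) + 1e-10))   # partial sums
print(np.isclose(a.sum(), b.sum()))                   # equal totals
```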