# How to extract the matrix that produces given symbolic linear combinations?

Suppose I have a column matrix α, consisting of some symbols:

```mathematica
a[0,0]
a[0,1]
a[0,2]
a[0,3]
a[1,0]
a[1,1]
a[1,2]
a[1,3]
a[2,0]
a[2,1]
a[2,2]
a[2,3]
a[3,0]
a[3,1]
a[3,2]
a[3,3]
```

and a column matrix x, consisting of linear combinations of the symbols above:

```mathematica
a[0,0]
a[0,0]+a[1,0]+a[2,0]+a[3,0]
a[0,0]+a[0,1]+a[0,2]+a[0,3]
a[0,0]+a[0,1]+a[0,2]+a[0,3]+a[1,0]+a[1,1]+a[1,2]+a[1,3]+a[2,0]+a[2,1]+a[2,2]+a[2,3]+a[3,0]+a[3,1]+a[3,2]+a[3,3]
a[1,0]
a[1,0]+2 a[2,0]+3 a[3,0]
a[1,0]+a[1,1]+a[1,2]+a[1,3]
a[1,0]+a[1,1]+a[1,2]+a[1,3]+2 a[2,0]+2 a[2,1]+2 a[2,2]+2 a[2,3]+3 a[3,0]+3 a[3,1]+3 a[3,2]+3 a[3,3]
a[0,1]
a[0,1]+a[1,1]+a[2,1]+a[3,1]
a[0,1]+2 a[0,2]+3 a[0,3]
a[0,1]+2 a[0,2]+3 a[0,3]+a[1,1]+2 a[1,2]+3 a[1,3]+a[2,1]+2 a[2,2]+3 a[2,3]+a[3,1]+2 a[3,2]+3 a[3,3]
a[1,1]
a[1,1]+2 a[2,1]+3 a[3,1]
a[1,1]+2 a[1,2]+3 a[1,3]
a[1,1]+2 a[1,2]+3 a[1,3]+2 a[2,1]+4 a[2,2]+6 a[2,3]+3 a[3,1]+6 a[3,2]+9 a[3,3]
```

then how may I extract the matrix A of coefficients so that

A α = x

i.e., I need to know all the coefficients in the linear combinations x.

Later I also wish to compute the inverse of that matrix.

The answer should not rely on the pattern of the a[i,j]; the symbols may be arbitrary, for example 16 symbols a, b, c, d, ….

=================

You can use `CoefficientArrays`:

```mathematica
xx = {a[0, 0], a[0, 0] + a[1, 0] + a[2, 0] + a[3, 0], a[0, 0] + a[0, 1] + a[0, 2] + a[0, 3],
  a[0, 0] + a[0, 1] + a[0, 2] + a[0, 3] + a[1, 0] + a[1, 1] +
   a[1, 2] + a[1, 3] + a[2, 0] + a[2, 1] + a[2, 2] + a[2, 3] +
   a[3, 0] + a[3, 1] + a[3, 2] + a[3, 3], a[1, 0],
  a[1, 0] + 2 a[2, 0] + 3 a[3, 0], a[1, 0] + a[1, 1] + a[1, 2] + a[1, 3],
  a[1, 0] + a[1, 1] + a[1, 2] + a[1, 3] + 2 a[2, 0] + 2 a[2, 1] +
   2 a[2, 2] + 2 a[2, 3] + 3 a[3, 0] + 3 a[3, 1] + 3 a[3, 2] +
   3 a[3, 3], a[0, 1], a[0, 1] + a[1, 1] + a[2, 1] + a[3, 1],
  a[0, 1] + 2 a[0, 2] + 3 a[0, 3],
  a[0, 1] + 2 a[0, 2] + 3 a[0, 3] + a[1, 1] + 2 a[1, 2] + 3 a[1, 3] +
   a[2, 1] + 2 a[2, 2] + 3 a[2, 3] + a[3, 1] + 2 a[3, 2] +
   3 a[3, 3], a[1, 1], a[1, 1] + 2 a[2, 1] + 3 a[3, 1],
  a[1, 1] + 2 a[1, 2] + 3 a[1, 3],
  a[1, 1] + 2 a[1, 2] + 3 a[1, 3] + 2 a[2, 1] + 4 a[2, 2] +
   6 a[2, 3] + 3 a[3, 1] + 6 a[3, 2] + 9 a[3, 3]};
```

```mathematica
alpha = {a[0, 0], a[0, 1], a[0, 2], a[0, 3], a[1, 0], a[1, 1],
   a[1, 2], a[1, 3], a[2, 0], a[2, 1], a[2, 2], a[2, 3], a[3, 0],
   a[3, 1], a[3, 2], a[3, 3]};
```

```mathematica
aA = Normal[CoefficientArrays[xx, alpha]][[2]]
```

Verify that `aA.alpha` reproduces `xx`:

```mathematica
aA.alpha == xx
(* True *)
```

Use `Inverse[aA]` to get the inverse:

```mathematica
Row[MatrixForm /@ {aA, Inverse[aA]}, Spacer[5]]
```
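Since the question asks for an approach that does not depend on the `a[i,j]` naming pattern, here is a minimal sketch with arbitrary symbols (the names `a, b, c, d` and the two sample combinations are made up for illustration):

```mathematica
(* coefficient matrix of two linear combinations in four arbitrary symbols *)
Normal[CoefficientArrays[{a + 2 b, 3 c + d}, {a, b, c, d}]][[2]]
(* {{1, 2, 0, 0}, {0, 0, 3, 1}} *)
```

Note that this matrix is 2×4, so it has no inverse; `Inverse` applies only when the matrix is square and nonsingular, as in the 16×16 case above.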

… The inverse matrix does not match the inverse matrix in the picture posted by the OP.
– kglr
Nov 19 ’14 at 0:47

This is probably because of the different ordering of the coefficients.
– Suzan Cioc
Nov 19 ’14 at 9:46

You can also use a more “mathy” approach:

Assuming `xx` and `alpha` are defined as in kglr’s answer,

```mathematica
A = D[xx, {alpha}]
```

which produces identical output. I like thinking of it this way because the same idiom is useful for computing Hessians (in a slightly different context).
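For instance, the same derivative idiom yields a Hessian directly by taking the second derivative with respect to a list of variables; this small sketch uses a made-up function `f` for illustration:

```mathematica
f = x^2 y + y^3;
D[f, {{x, y}, 2}]
(* {{2 y, 2 x}, {2 x, 6 y}} *)
```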

+1 wish I could think of this approach :)
– kglr
Nov 19 ’14 at 19:29

:) I should say that I think `D` takes a little bit longer for this example. I haven’t put in the work to figure out whether it’s a constant factor or whether the performance difference scales.
– evanb
Nov 19 ’14 at 19:38