Originally published at 狗和留美者不得入内.

$k$ linearly independent vectors in $\mathbb{R}^n$ result in a $k$-dimensional parallelotope (a generalization of the parallelogram or parallelepiped to higher dimensions). We wish to determine its generalized volume with orientation in $n$ dimensions. The determinant of an $n \times n$ matrix gives the (signed) volume of the parallelotope induced by the column vectors, the order of which affects the sign. (The proof of this is rather straightforward and will be left to the reader.) Independent of coordinates, a linear transformation $T$ between two $n$-dimensional vector spaces $V$ and $W$, the positively oriented orthonormal bases of which are respectively $(v_1, \ldots, v_n)$ and $(w_1, \ldots, w_n)$, assigns each element of $(v_1, \ldots, v_n)$ to a linear combination of the elements of $(w_1, \ldots, w_n)$, and the determinant of $T$ is the signed volume of the parallelotope generated by $(Tv_1, \ldots, Tv_n)$.
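As a quick numerical check of the sign behavior, here is a minimal sketch (the helper `det2` and the vectors are illustrative, not from the text): the determinant of a $2 \times 2$ matrix gives the signed area of the parallelogram spanned by its columns, and swapping the columns flips the orientation.

```python
def det2(a, b):
    """Signed area of the parallelogram spanned by 2-D column vectors a and b."""
    return a[0] * b[1] - a[1] * b[0]

a, b = (2.0, 0.0), (1.0, 3.0)
print(det2(a, b))  # 6.0: base 2 times height 3
print(det2(b, a))  # -6.0: swapping the columns reverses the orientation
```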

Let $a_1, \ldots, a_k$ be the vectors for our $k$-dimensional parallelotope in $\mathbb{R}^n$. Together they define a map $A^T : \mathbb{R}^n \to \mathbb{R}^k$, $x \mapsto (\langle a_1, x \rangle, \ldots, \langle a_k, x \rangle)$, that by the rank–nullity theorem is onto, with $\dim \ker A^T = n - k$. Let $(u_1, \ldots, u_k)$ be a positively oriented orthonormal basis of $W$, the subspace of $\mathbb{R}^n$ perpendicular to the kernel of $A^T$, with $\dim W = k$. We then define the same map on a restricted domain as

$$\tilde{A} = A^T|_W : W \to \mathbb{R}^k.$$

We then have

$$\operatorname{vol}_k(a_1, \ldots, a_k) = |\det \tilde{A}|,$$

the determinant being taken with respect to the basis $(u_1, \ldots, u_k)$ of $W$ and the standard basis of $\mathbb{R}^k$.
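A small numerical sketch of this construction, under the reading that the map sends $x$ to $(\langle a_1, x \rangle, \langle a_2, x \rangle)$ and is then restricted to $W = \operatorname{span}(a_1, a_2)$ (the vectors and helper functions here are illustrative assumptions):

```python
from math import sqrt, isclose

def dot(x, y):
    return sum(p * q for p, q in zip(x, y))

# Two vectors spanning a 2-dimensional parallelotope in R^3.
a1, a2 = (1.0, 1.0, 0.0), (0.0, 1.0, 1.0)

# Orthonormal basis (u1, u2) of W = span(a1, a2) via Gram-Schmidt;
# W is the orthogonal complement of the kernel of the map.
u1 = tuple(p / sqrt(dot(a1, a1)) for p in a1)
v2 = tuple(p - dot(a2, u1) * q for p, q in zip(a2, u1))
u2 = tuple(p / sqrt(dot(v2, v2)) for p in v2)

# Matrix of the restricted map, with entries <a_i, u_j>.
m = [[dot(a, u) for u in (u1, u2)] for a in (a1, a2)]
det_m = m[0][0] * m[1][1] - m[0][1] * m[1][0]

# Gramian determinant: the squared volume, for comparison.
gram_det = dot(a1, a1) * dot(a2, a2) - dot(a1, a2) ** 2

print(isclose(det_m ** 2, gram_det))  # True: det of restricted map squares to Gram det
```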

**The coordinate-free definition of the adjoint operator (or transpose) and its determinant**

If $T$ is a linear map from $V$ to $W$, and inner products are defined on the $n$-dimensional vector spaces $V$ and $W$ with respect to their bases $(v_1, \ldots, v_n)$ and $(w_1, \ldots, w_n)$ such that

$$\langle v_i, v_j \rangle_V = \delta_{ij}, \qquad \langle w_i, w_j \rangle_W = \delta_{ij},$$

then the adjoint of $T$, which we denote as $T^*$, is the map from $W$ to $V$ such that

$$\langle Tv, w \rangle_W = \langle v, T^* w \rangle_V \quad \text{for all } v \in V, \; w \in W.$$

In the language of transposes, we have

$$\langle Tv, w \rangle_W = (Tv)^T w = v^T (T^T w) = \langle v, T^T w \rangle_V,$$

so that with respect to orthonormal bases the adjoint is represented by the transpose matrix.

Note how the left-hand side of this relation gives for every element of $(v_1, \ldots, v_n)$ an ordered set of vectors $(Tv_1, \ldots, Tv_n)$ in $W$ with respect to an orthonormal basis of $W$, and the right-hand side then prescribes an ordered set of vectors $(T^* w_1, \ldots, T^* w_n)$ in $V$ the coordinates of the $i$th of which, with respect to an orthonormal basis of $V$, are the $i$th coordinates of the elements of $(Tv_1, \ldots, Tv_n)$, and vice versa.

We’ve essentially defined an $n \times n$ matrix the elements of which are $T_{ij} = \langle Tv_j, w_i \rangle$, and then the elements of the transpose matrix satisfy $(T^T)_{ij} = T_{ji}$. Applying the permutation-based determinant formula gives us

$$\det T^T = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{i=1}^n T_{\sigma(i)\,i} = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{i=1}^n T_{i\,\sigma(i)} = \det T,$$

where the middle equality comes from reindexing each term by $\sigma^{-1}$, which has the same sign as $\sigma$.
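The permutation formula and the reindexing argument can be checked directly; a sketch in plain Python (exponential time, for illustration only, on an arbitrary example matrix):

```python
from itertools import permutations
from math import prod

def perm_sign(p):
    """Sign of permutation p, computed from its inversion count."""
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def det(m):
    """Leibniz formula: sum over sigma of sgn(sigma) * prod_i m[i][sigma(i)]."""
    n = len(m)
    return sum(perm_sign(p) * prod(m[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

t = [[1, 2, 0],
     [3, 1, 4],
     [0, 2, 5]]
tt = [list(col) for col in zip(*t)]  # the transpose

print(det(t), det(tt))  # -33 -33: reindexing by the inverse permutation preserves each term
```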

**Computing the volume of the parallelotope**

The matrix formed by our $k$ linearly independent column vectors in $\mathbb{R}^n$, $A = (a_1 \; \cdots \; a_k)$, corresponds directly to a linear isomorphism from $\mathbb{R}^k$ onto its image, which is of course a $k$-dimensional subspace of $\mathbb{R}^n$. We showed in the previous section that the adjoint of this isomorphism, namely $A^T$ restricted to that subspace, has the same determinant. Since the determinant is a multiplicative function, $\det(A^T A)$ thus gives us the square of the volume of the parallelotope.

Per the rule of matrix multiplication, given a linear isomorphism $T$ which takes the elements $e_1, \ldots, e_k$ of the orthonormal basis of its domain to vectors $a_1, \ldots, a_k$, regardless of basis or coordinates in the range, its composition with its adjoint is represented with respect to the aforementioned basis by

$$(T^* T)_{ij} = \langle a_i, a_j \rangle,$$

wherein the invoked inner product is, of course, per the properties of the inner product, invariant with respect to coordinate transformations, which means $T^* T$ is well-defined. This corresponds to the matrix formed from our vectors in $\mathbb{R}^n$,

$$A = \begin{pmatrix} a_1 & a_2 & \cdots & a_k \end{pmatrix},$$

multiplied by its transpose on the left, which gives us the result prescribed above, wherein the inner product is the Euclidean inner product in $\mathbb{R}^n$ restricted to a $k$-dimensional subspace within it. The resulting Gramian matrix

$$G = A^T A = \big( \langle a_i, a_j \rangle \big)_{1 \le i, j \le k}$$

is, as we’ve already explained, such that

$$\det G = \det(A^T A) = \operatorname{vol}_k(a_1, \ldots, a_k)^2,$$

or in words the square of the volume of the parallelotope generated by $a_1, \ldots, a_k$.
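For $k = 2$ in $\mathbb{R}^3$ the Gramian recipe can be cross-checked against the cross product, whose squared norm is the parallelogram's squared area (a sketch; the vectors are arbitrary examples, not from the text):

```python
from math import isclose

def dot(x, y):
    return sum(p * q for p, q in zip(x, y))

def cross(x, y):
    return (x[1] * y[2] - x[2] * y[1],
            x[2] * y[0] - x[0] * y[2],
            x[0] * y[1] - x[1] * y[0])

a1, a2 = (1.0, 2.0, 2.0), (2.0, 0.0, 1.0)

# Gramian G = A^T A with entries <a_i, a_j>; det G is the squared area.
g = [[dot(x, y) for y in (a1, a2)] for x in (a1, a2)]
vol_sq = g[0][0] * g[1][1] - g[0][1] * g[1][0]

# The cross product's squared norm gives the same squared parallelogram area.
print(vol_sq, dot(cross(a1, a2), cross(a1, a2)))  # 29.0 29.0
```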

**Decomposing a $k$-volume element into orthogonal components in $k$-form space**

There is also an intimate connection here with differential forms, exterior products, and the Hodge dual. In [1], we defined an inner product on the $k$th exterior product space with the Gramian determinant invoked such that

$$\langle u_1 \wedge \cdots \wedge u_k, \; w_1 \wedge \cdots \wedge w_k \rangle = \det \big( \langle u_i, w_j \rangle \big)_{1 \le i, j \le k}.$$

Let $(e_1, \ldots, e_n)$ be an orthonormal positively oriented basis for our vector space, which by definition results in the equivalence of the basis $k$-forms

$$e_{i_1} \wedge e_{i_2} \wedge \cdots \wedge e_{i_k}, \qquad i_1 < i_2 < \cdots < i_k,$$

as far as $k$-dimensional volume is concerned, with which each of our $a_i$s decomposes to $a_i = \sum_{j=1}^n a_{ji} e_j$. Substitution of this decomposition into $a_1 \wedge a_2 \wedge \cdots \wedge a_k$ yields for the coefficient of $e_{i_1} \wedge \cdots \wedge e_{i_k}$ the determinant of the $k \times k$ matrix assembled from rows $i_1, \ldots, i_k$ of the matrix of our column vectors, with an appropriate sign adjustment. This coefficient is necessarily also an anti-symmetric $k$th-rank tensor.

Geometrically this is the oriented $k$-dimensional volume obtained if only the components corresponding to indices $i_1, \ldots, i_k$ are considered. The result in [1], which was calculated via the Hodge star in a way that equates to the Gramian matrix definition of the inner product, yields for the value of the inner product the sum of squares of the coefficients with respect to our basis elements $e_{i_1} \wedge \cdots \wedge e_{i_k}$. Essentially there is a basis of $\binom{n}{k}$ $k$-dimensional volume elements of equivalent volume in the space represented by $k$-forms, which are mutually orthogonal with respect to the inner product we defined on the space of $k$-forms, and we have projected our arbitrary $k$-dimensional volume element onto each of them. In this sense it is natural that the square of its norm or size would be the sum of the squares of its components.
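This decomposition is exactly the Cauchy–Binet identity: $\det(A^T A)$ equals the sum of the squares of all $k \times k$ minors of $A$. A sketch with two illustrative column vectors in $\mathbb{R}^4$ (the example data is an assumption):

```python
from itertools import combinations

# Columns of A: two illustrative vectors in R^4.
a = [[1, 2],
     [0, 1],
     [2, 0],
     [1, 1]]

def col_dot(i, j):
    """Entry <a_i, a_j> of the Gramian A^T A."""
    return sum(row[i] * row[j] for row in a)

# Squared 2-volume via the Gramian determinant.
gram_det = col_dot(0, 0) * col_dot(1, 1) - col_dot(0, 1) ** 2

# Sum of squares of all 2x2 minors: the squared projections onto each e_i ^ e_j.
minor_sq_sum = sum((a[i][0] * a[j][1] - a[i][1] * a[j][0]) ** 2
                   for i, j in combinations(range(4), 2))

print(gram_det, minor_sq_sum)  # 27 27, per Cauchy-Binet
```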

I am dedicating this article to Seki Takakazu (1642–1708), who based on no more than 13th-century Chinese algebra and arithmetic obtained results regarding determinants and resultants decades before the West, and who discovered the Bernoulli numbers (or Takakazu numbers) in connection with the closed formula for the sum of the first $n$ $k$th powers around the same time as did Jacob Bernoulli.

**References**