## May 24th, 2021

### Using the Riemann integral to illustrate convergence of a net

Originally published at 狗和留美者不得入内. You can comment here or there.

Definition 1 Take an interval $[a, b]$. A partition of it is a finite sequence of real numbers $a = x_0 < x_1 < \cdots < x_n = b$.

Definition 2 A partition $Q$ is a refinement of a partition $P$ if it contains all the points of $P$. Given any two partitions $P_1, P_2$, one can always find their common refinement, which consists of all their points in increasing order.

Definition 3 The norm or mesh of the partition $a = x_0 < x_1 < \cdots < x_n = b$ is equal to

$$\max_{1 \le i \le n} (x_i - x_{i-1}).$$

Proposition 1 The collection of partitions of any closed interval is a directed set.

Proof: Under refinement (which is set inclusion of the partitions' points), it is a preorder, since reflexivity and transitivity are satisfied. The common refinement of any finite collection of partitions is always an upper bound of it.
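Concretely, the common refinement that witnesses the upper bound is just the sorted union of the partitions' points. A minimal Python sketch (the helper name `common_refinement` is mine, not from the post):

```python
def common_refinement(*partitions):
    """Sorted union of the points of finitely many partitions of the same interval."""
    points = set()
    for p in partitions:
        points.update(p)
    return sorted(points)

p1 = [0.0, 0.5, 1.0]
p2 = [0.0, 0.25, 0.75, 1.0]
r = common_refinement(p1, p2)
# r contains every point of p1 and of p2, so it refines both.
```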

Definition 4 A tagged partition of $[a, b]$ consists of a partition $a = x_0 < x_1 < \cdots < x_n = b$ together with tags $t_i \in [x_{i-1}, x_i]$ for $i = 1, \ldots, n$. A refinement of it is the same as the refinement of a partition, with the additional requirement that the set of tags inside the intervals of the refinement contains the $t_i$.

Proposition 2 The collection of tagged partitions of any closed interval is a directed set.

Proof: Similar to that of Proposition 1.

Definition 5 The Riemann sum of a function $f$ corresponding to a tagged partition of $[a, b]$, $a = x_0 < x_1 < \cdots < x_n = b$, $t_i \in [x_{i-1}, x_i]$, is given by

$$\sum_{i=1}^{n} f(t_i)(x_i - x_{i-1}).$$

Proposition 3 The collection of Riemann sums of a function $f$ is a net.

Proof: Proposition 2 tells us that the collection of tagged partitions of $[a, b]$ is a directed set, and the Riemann sum of $f$ is a function from this directed set to the reals.

Definition 6 We say that a function $f$ is Riemann integrable if the net of Riemann sums associated with it converges to some real value $A$. Equivalently, for all $\varepsilon > 0$, there exists a tagged partition such that for any refinement of it, the distance between the corresponding Riemann sum and $A$ is less than $\varepsilon$.
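To see the convergence numerically, here is a short Python sketch (function names are mine; uniform left-endpoint taggings are just one convenient family of tagged partitions):

```python
def riemann_sum(f, tagged_partition):
    """Sum f(t_i) * (x_i - x_{i-1}) over a list of (x_{i-1}, x_i, t_i) triples."""
    return sum(f(t) * (hi - lo) for lo, hi, t in tagged_partition)

def uniform_tagged_partition(a, b, n):
    """n equal subintervals of [a, b], each tagged at its left endpoint."""
    h = (b - a) / n
    return [(a + i * h, a + (i + 1) * h, a + i * h) for i in range(n)]

# f(x) = x^2 on [0, 1]: the Riemann sums approach the integral 1/3.
f = lambda x: x * x
approximations = [riemann_sum(f, uniform_tagged_partition(0.0, 1.0, n))
                  for n in (10, 100, 1000)]
```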

### How to solve integer matrix equations


Take an arbitrary $m \times n$ matrix of integer values $A$ and an $m \times 1$ column vector of integers $b$. We wish to find all $x \in \mathbb{Z}^n$ such that $Ax = b$. In order to do so, we prove the following proposition.

Theorem for diagonalization of integer matrices For any $m \times n$ integer matrix $A$, there exist square integer matrices $S$ ($m \times m$) and $T$ ($n \times n$), invertible over the integers, such that $SAT$ is of the form

$$\operatorname{diag}(d_1, d_2, \ldots, d_r, 0, \ldots, 0),$$

wherein $d_1 \mid d_2 \mid \cdots \mid d_r$.

Proof: Invertible square integer matrices are generated by

1. Those which are diagonal and such that the diagonal elements are all units, in this case all $\pm 1$.
2. Those which swap rows or columns.
3. Those which add an integer multiple of one row to another row (or one column to another column).
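For concreteness, the three generator types can be written down as explicit matrices. A small Python sketch (helper names are mine), in which left multiplication performs the corresponding row operation:

```python
def identity(n):
    return [[int(i == j) for j in range(n)] for i in range(n)]

def swap_rows(n, i, j):
    """Type 2: permutation matrix swapping rows i and j."""
    m = identity(n)
    m[i], m[j] = m[j], m[i]
    return m

def add_multiple(n, i, j, k):
    """Type 3: left multiplication adds k times row j to row i."""
    m = identity(n)
    m[i][j] = k
    return m

def negate_row(n, i):
    """Type 1: diagonal matrix of units, here -1 in position i."""
    m = identity(n)
    m[i][i] = -1
    return m

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]
```

Each of these has determinant $\pm 1$, so its inverse is again an integer matrix of the same kind.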

We shall prove this theorem by induction on the dimensions $(m, n)$ of the matrix, under the componentwise partial order. We will also assume that the matrix is not all zeros, since for the zero matrix the theorem trivially holds.

Base case:

In the base case where $m = 1$ or $n = 1$, one uses the Euclidean algorithm repeatedly on two arbitrary non-zero elements (done per 3) until all but one of the elements are zero. Finally, one swaps the remaining non-zero element into the first position (per 2).
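The base case can be sketched in Python for a $1 \times n$ matrix (the function name and loop organization are mine): each step is a type-3 column operation followed by a type-2 swap, and a final type-1 sign change leaves $(\gcd, 0, \ldots, 0)$.

```python
def reduce_row(row):
    """Reduce a 1 x n integer matrix to (g, 0, ..., 0) with g = gcd >= 0."""
    row = list(row)
    for j in range(1, len(row)):
        # Euclidean algorithm on entries 0 and j via column operations.
        while row[j] != 0:
            q = row[0] // row[j]                 # integer quotient
            row[0] -= q * row[j]                 # type 3: col 0 -= q * col j
            row[0], row[j] = row[j], row[0]      # type 2: swap cols 0 and j
    if row[0] < 0:
        row[0] = -row[0]                         # type 1: scale col 0 by -1
    return row
```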

Case 1:

In the case where $m, n \ge 2$ and WLOG $a_{11} \ne 0$ with $a_{11} \mid a_{i1}$ for all $i$ and $a_{11} \mid a_{1j}$ for all $j$, we first make the following sequence of transformations, where $q_i$ is the unique integer quotient equal to $a_{i1}/a_{11}$ (and likewise $a_{1j}/a_{11}$ for the columns): subtract $q_i$ times the first row from the $i$th row for each $i > 1$, and then apply the analogous column operations, so that every entry of the first row and first column other than $a_{11}$ becomes zero.

Now, it suffices to prove that every integer matrix which can be expressed in the block form $\begin{pmatrix} d & 0 \\ 0 & A' \end{pmatrix}$, where $d$ is a non-zero integer, the $0$'s denote zero blocks, and $A'$ is an $(m-1) \times (n-1)$ integer matrix, can be reduced to the form in the theorem. To do this, we first observe that if there is an element in the first column that divides every element in the first column in addition to dividing all elements in its row, then one can realize this block reduction by swapping its row with the first row and then adding an integer multiple of the first row to every other row to null out all of the other entries of the first column. One can of course do the same to null out all but the first entry of the first row as well. The analogous statement of course also holds when there is an element in the first row that divides every element in the first row in addition to dividing all elements in its column.

Case 2:

When $a_{11}$ is not a factor of some element in its row or column, one can apply operations corresponding to an iteration of the Euclidean algorithm on those two elements to decrease the value of $|a_{11}|$. We notice that if $|a_{11}| = 1$, the matrix is necessarily one of Case 1. Since, when the matrix is not in Case 1, $|a_{11}|$ must strictly decrease after an iteration of the procedure described above, repeated iteration of that procedure must eventually result in a matrix of Case 1.

Case 3:

The matrix is diagonal and there is a diagonal entry, which we shall call $d$, that divides every other diagonal entry. We note that if one of the diagonal entries is $\pm 1$, Case 3 is trivially satisfied. We take the matrix obtained by removing the row and column of $d$. By our inductive hypothesis, we can apply the desired reduction to that matrix. We have that multiplication of a matrix in Case 3 by any invertible integer matrix cannot decrease the greatest common divisor of all the entries in it, of which $d$ is necessarily a factor, which proves this case.

Case 4:

By our inductive hypothesis, we can assume that the matrix can be reduced to diagonal form. After such a reduction, if the matrix is in Case 3, then we are finished. If not, there must exist two row indices and two column indices whose corresponding $2 \times 2$ submatrix consists of two diagonal entries, neither of which divides the other, and so falls under the preceding cases. We then perform the reduction of the preceding cases on those two rows and two columns, necessarily decreasing the absolute value of one of the matrix elements in the process. Either the algorithm will terminate in a Case 3 matrix without a unit on the diagonal, or it will terminate with a unit on the diagonal, in which case the matrix is also in Case 3. This completes our inductive proof.

Now to solve $Ax = b$, we diagonalize $A$ as $D = SAT$ and set $y = T^{-1}x$ and $c = Sb$, which gives $Dy = c$. In solving for $y$, the entries $y_i$ corresponding to a non-zero diagonal entry $d_i$ exist and are unique iff the divisibility $d_i \mid c_i$ holds in each case, and the entries corresponding to a zero diagonal entry can be arbitrary (though a solution exists only if $c_i = 0$ whenever the $i$th row of $D$ is zero). Multiplying this solution set by $T$ then gives the solution set for $x$.
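The whole procedure can be sketched in Python. This is my own compact version, not the author's code: it tracks $S$ and $T$ while diagonalizing, but does not enforce the divisibility chain $d_1 \mid d_2 \mid \cdots$, since solving $Ax = b$ only needs the diagonal form.

```python
def identity(n):
    return [[int(i == j) for j in range(n)] for i in range(n)]

def matvec(M, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

def diagonalize(A):
    """Return (S, D, T) with D = S A T diagonal and S, T invertible over Z."""
    m, n = len(A), len(A[0])
    D = [row[:] for row in A]
    S, T = identity(m), identity(n)

    def row_sub(i, j, q):        # row i -= q * row j (applied to D and S)
        for M in (D, S):
            M[i] = [a - q * b for a, b in zip(M[i], M[j])]

    def col_sub(i, j, q):        # col i -= q * col j (applied to D and T)
        for M in (D, T):
            for r in range(len(M)):
                M[r][i] -= q * M[r][j]

    def row_swap(i, j):
        for M in (D, S):
            M[i], M[j] = M[j], M[i]

    def col_swap(i, j):
        for M in (D, T):
            for r in range(len(M)):
                M[r][i], M[r][j] = M[r][j], M[r][i]

    for k in range(min(m, n)):
        # Move some non-zero entry of the remaining block to position (k, k).
        pos = next(((i, j) for i in range(k, m) for j in range(k, n) if D[i][j]), None)
        if pos is None:
            break
        row_swap(k, pos[0]); col_swap(k, pos[1])
        while True:              # Euclidean steps until row k and column k are clear
            clean = True
            for i in range(k + 1, m):
                if D[i][k]:
                    row_sub(i, k, D[i][k] // D[k][k])
                    if D[i][k]:            # non-zero remainder: smaller pivot
                        row_swap(i, k); clean = False
            for j in range(k + 1, n):
                if D[k][j]:
                    col_sub(j, k, D[k][j] // D[k][k])
                    if D[k][j]:
                        col_swap(j, k); clean = False
            if clean:
                break
    return S, D, T

def solve(A, b):
    """All integer solutions of A x = b as (particular, kernel_basis), or None."""
    S, D, T = diagonalize(A)
    m, n = len(A), len(A[0])
    c = matvec(S, b)                       # D y = c, where y = T^{-1} x
    y = [0] * n
    for i in range(m):
        d = D[i][i] if i < n else 0
        if d == 0:
            if c[i] != 0:
                return None                # inconsistent zero row
        elif c[i] % d != 0:
            return None                    # divisibility fails: no integer y_i
        else:
            y[i] = c[i] // d
    free = [j for j in range(n) if j >= m or D[j][j] == 0]
    x = matvec(T, y)                       # particular solution
    basis = [[T[i][j] for i in range(n)] for j in free]
    return x, basis
```

`solve` returns a particular solution together with a basis of the integer kernel, so the full solution set is $x_0 + \sum_j t_j v_j$ with $t_j \in \mathbb{Z}$.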

Propositions regarding quotients, remainders, divisibility, and greatest common divisors in the Euclidean domain of integers were proven in full rigor in [1]. A reader not comfortable taking them for granted is of course welcome to read [1]. I believe that it was also in large part the process of writing up [1] that made this theorem more or less trivial for me to prove.

References