One of the reasons the Julia language is gaining so much momentum in scientific computing is its excellent linear algebra support, available out of the box.

While basic vector and matrix algebra support is available by default, we'll want to extend our available operations by including the `LinearAlgebra` package, which is part of the standard library, as well as the `RowEchelon` package, which is available through the `Pkg` system.

`using LinearAlgebra, RowEchelon`

Suppose we have two vectors \(\mathbf{u}\) and \(\mathbf{v}\) in \(\mathbb{R}^2\).

```
u = [3, 4]
v = [2, 5]
```

We can perform basic operations such as addition, scalar multiplication, and inner products.

```
@show u + v
> u + v = [5, 9]
@show u - v
> u - v = [1, -1]
@show 3u + 2v
> 3u + 2v = [13, 22]
@show dot(u, v)
> dot(u, v) = 26
```

We can determine the length of a vector using the `norm` function.

```
@show norm(u)
> norm(u) = 5.0
```
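
As a quick sanity check, the norm here is the Euclidean norm, which is just the square root of a vector's dot product with itself, \(\lVert \mathbf{u}\rVert = \sqrt{\mathbf{u}\cdot\mathbf{u}}\):

```
u = [3, 4]
@show sqrt(dot(u, u)) == norm(u)
```

Both expressions evaluate to `5.0`, so the comparison returns `true`.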

We can apply the dot product formula,

\[ \mathbf{u}\cdot\mathbf{v} = \lVert \mathbf{u}\rVert \lVert \mathbf{v}\rVert \cos \theta\,,\]

to determine the angle between vectors.

```
@show cosθ = dot(u, v) / (norm(u)*norm(v))
> cosθ = dot(u, v) / (norm(u) * norm(v)) = 0.9656157585206697
@show θ = acos(cosθ)
> θ = acos(cosθ) = 0.26299473168091936
@show rad2deg(θ)
> rad2deg(θ) = 15.068488159492201
```

Suppose we have another vector, \(\mathbf{w}\), and we want to know if \(\mathbf{w}\) is in the span of \(\mathbf{u}\) and \(\mathbf{v}\), \(\text{span}(\mathbf{u}, \mathbf{v})\). In plain English, we want to know if there are some coefficients, \(\alpha\) and \(\beta\), which satisfy the equation \(\alpha \mathbf{u} + \beta \mathbf{v} = \mathbf{w}\).

```
w = [5, 2]
[ u v w ]
> 2×3 Matrix{Int64}:
> 3 2 5
> 4 5 2
rref([u v w])
> 2×3 Matrix{Float64}:
> 1.0 0.0 3.0
> 0.0 1.0 -2.0
(α, β) = rref([u v w])[:, 3]
@show α*u + β*v == w
> α * u + β * v == w = true
```

We used array slicing to extract all rows of the 3rd column with `[:, 3]`, and populated the \(\alpha\) and \(\beta\) variables through destructuring.

Therefore, we know that \(\mathbf{w}\) is in the span of \(\mathbf{u}\) and \(\mathbf{v}\).

It is easy to build matrices from the ground up or from a collection of vectors. We can solve the equation \(A \mathbf{x} = \mathbf{b}\) via row reduction or by inverses.
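
As a brief sketch of matrix construction, we can either write a matrix out literally or concatenate existing column vectors (the bracket syntax `[u v]` is equivalent to an explicit `hcat`):

```
u = [3, 4]
v = [2, 5]
A = [ 1 2 ; 3 4 ]   # literal construction, row by row
B = [u v]           # columns built from existing vectors
@show B == hcat(u, v)
```

Here `B` is the `2×2` matrix whose columns are `u` and `v`, and the comparison returns `true`.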

Suppose we have a matrix, \(A\), and vector, \(\mathbf{b}\), defined as,

\[A = \begin{bmatrix*}[r] 1 & 2 \\ 0 & -1 \end{bmatrix*}\,, \ \mathbf{b} = \begin{bmatrix}2 \\ 4 \end{bmatrix}\,. \]

```
A = [ 1 2 ; 0 -1 ]
> 2×2 Matrix{Int64}:
> 1 2
> 0 -1
b = [ 2 ; 4 ]
> 2-element Vector{Int64}:
> 2
> 4
```

We can apply row reduction,

\[\begin{bmatrix*}[r] 1 & 2 & 2 \\ 0 & -1 & 4 \end{bmatrix*} \sim \begin{bmatrix*}[r] 1 & 0 & 10 \\ 0 & 1 & -4 \end{bmatrix*} \]

```
rref([ A b ])
> 2×3 Matrix{Float64}:
> 1.0 0.0 10.0
> 0.0 1.0 -4.0
```

or we can multiply \(\mathbf{b}\) by \(A^{-1}\) to solve the equation,

\[ \mathbf{x} = A^{-1}\mathbf{b} \,.\]

```
inv(A)*b
> 2-element Vector{Float64}:
> 10.0
> -4.0
```
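
Worth noting: Julia's left-division operator `\` solves the same system by factorization, without ever forming the inverse explicitly, and is generally the preferred approach numerically:

```
A = [ 1 2 ; 0 -1 ]
b = [ 2 ; 4 ]
@show A \ b
```

This returns the same solution, `[10.0, -4.0]`.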

For a matrix, \(A\), we are often interested in the eigenvectors and eigenvalues which satisfy the equation,

\[ A \mathbf{u} = \lambda \mathbf{u}\,. \]

We can quickly compute the eigenvectors and eigenvalues of a matrix, \(A\), using the `eigen` function.

```
((λ1, λ2), K) = eigen(A)
> Eigen{Float64, Float64, Matrix{Float64}, Vector{Float64}}
> values:
> 2-element Vector{Float64}:
> -1.0
> 1.0
> vectors:
> 2×2 Matrix{Float64}:
> -0.707107 1.0
> 0.707107 0.0
```

We can verify that the eigenvectors and eigenvalues work.

```
@show A * K[:, 1] == λ1 * K[:, 1]
> A * K[:, 1] == λ1 * K[:, 1] = true
@show A * K[:, 2] == λ2 * K[:, 2]
> A * K[:, 2] == λ2 * K[:, 2] = true
```
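
We can also verify the whole decomposition at once: with \(\Lambda\) the diagonal matrix of eigenvalues, \(A = K \Lambda K^{-1}\) (up to floating-point error, hence `≈` rather than `==`):

```
A = [ 1 2 ; 0 -1 ]
((λ1, λ2), K) = eigen(A)
@show K * Diagonal([λ1, λ2]) * inv(K) ≈ A
```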

The rank of a matrix is defined as the dimension of the vector space spanned by its columns. We can easily compute the rank of a matrix using the `rank` function.

```
@show rank(A)
> rank(A) = 2
```
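
For contrast, if the columns are linearly dependent the rank drops below the column count; in this made-up example the second column is twice the first:

```
B = [ 1 2 ; 2 4 ]   # second column is 2 × the first
@show rank(B)
```

Here `rank(B)` is 1, not 2.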

The trace of a matrix is defined as the sum of its diagonal elements, and we can easily compute it using the `tr` function.

```
@show tr(A)
> tr(A) = 0
```
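
The trace also equals the sum of the eigenvalues, which is consistent with the values \(-1\) and \(1\) computed earlier:

```
A = [ 1 2 ; 0 -1 ]
@show tr(A) ≈ sum(eigvals(A))
```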

The determinant is a function which yields a single, scalar value when applied to a square matrix, telling us how areas (or volumes, in higher dimensions) scale when the matrix transformation is applied. We can compute it for any square matrix in \(\mathbb{R}^{n \times n}\) using the `det` function.

```
@show det(A)
> det(A) = -1.0
```
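
Similarly, the determinant equals the product of the eigenvalues:

```
A = [ 1 2 ; 0 -1 ]
@show det(A) ≈ prod(eigvals(A))
```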

The transpose of a real matrix can be quickly generated by appending a single quote (`'`) to it.

```
A = [ 8 -4 ; 2 -2]
> 2×2 Matrix{Int64}:
> 8 -4
> 2 -2
A'
> 2×2 adjoint(::Matrix{Int64}) with eltype Int64:
> 8 2
> -4 -2
```
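
Strictly speaking, `'` is the adjoint (conjugate transpose); for real matrices this coincides with the transpose, but for complex matrices `transpose` and `'` differ, as this small illustrative example shows:

```
A = [ 8 -4 ; 2 -2 ]
@show A' == transpose(A)   # true for real matrices
C = [ 1+2im 3 ; 0 4im ]
@show C' == transpose(C)   # false: the adjoint also conjugates the entries
```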

Finding the inverse of a matrix can be a long process for matrices larger than \(3 \times 3\).

Recall that a matrix is only invertible if it's full rank and square. *Full rank* means that the column vectors are all linearly independent.

For a \(2 \times 2\) matrix, we can quickly compute the inverse. If we have a matrix, \(A\), such that,

\[ A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}\,,\]

then the inverse is given by,

\[ A^{-1} = \frac{1}{\det(A)}\begin{bmatrix*}[r] d & -b \\ -c & a \end{bmatrix*}\,.\]

In Julia, we can use the `inv` function to quickly compute the inverse of any (non-singular) \(n \times n\) matrix.

```
A = [ 8 -4 ; 2 -2]
> 2×2 Matrix{Int64}:
> 8 -4
> 2 -2
inv(A)
> 2×2 Matrix{Float64}:
> 0.25 -0.5
> 0.25 -1.0
```
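
We can confirm the \(2 \times 2\) formula above by applying it by hand and comparing against `inv`; the names `a`, `b`, `c`, `d` below follow the formula's layout:

```
A = [ 8 -4 ; 2 -2 ]
(a, b, c, d) = (A[1, 1], A[1, 2], A[2, 1], A[2, 2])
@show (1 / det(A)) * [ d -b ; -c a ] ≈ inv(A)
```

With \(\det(A) = -8\), the formula gives exactly the matrix `inv` returned above.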

The other sure-fire way to find the inverse of a matrix, \(A\), is to concatenate an identity matrix to \(A\) and then row reduce until you get an identity matrix on the left and the inverse appears on the right. Unless there's an identifiable shortcut, for any matrix larger than \(2 \times 2\) where I need to find the inverse by hand, this is the method I use. It always works (assuming the matrix is non-singular) and is dumb simple to apply.

We can use `I(n)` to create an \(n \times n\) identity matrix.

```
A = [ 8 -4 ; 2 -2]
rref([ A I(2) ])
> 2×4 Matrix{Float64}:
> 1.0 0.0 0.25 -0.5
> 0.0 1.0 0.25 -1.0
```
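
The right-hand block really is \(A^{-1}\); multiplying it back by \(A\) recovers the identity:

```
A = [ 8 -4 ; 2 -2 ]
Ainv = [ 0.25 -0.5 ; 0.25 -1.0 ]   # right-hand block of the row reduction
@show A * Ainv ≈ I(2)
```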

© Daniel Marvin. Last modified: July 19, 2021.