A Quick Review of Linear Algebra

The most fundamental part of ML is mathematics. But the list goes on: algebra, statistics, calculus, geometry and so on, and it is easy to get confused. When we approach experts at this stage, most of them will suggest starting with linear algebra.

But the problem does not stop there. We get stuck on the deep derivations and end up confused again. So here we will take an overview of the main topics in linear algebra and the motivation behind them.

Why Do We Learn Linear Algebra?

When we are working with data, the first step is to arrange it in rows and columns, which looks like a matrix; this is where linear algebra comes into play.

For 2 dimensions, we call it a matrix. For 1 dimension we call it a vector, and for n dimensions we call it a tensor.
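As a rough sketch (the values and variable names here are purely illustrative), the ndim attribute of a NumPy array tells us which of these we are looking at:

# Number of dimensions: 1-D -> vector, 2-D -> matrix, n-D -> tensor
import numpy as np

vector = np.array([1, 2, 3])                     # 1-D
matrix = np.array([[1, 2, 3], [4, 5, 6]])        # 2-D
tensor = np.array([[[1, 2], [3, 4]],
                   [[5, 6], [7, 8]]])            # 3-D

print(vector.ndim, matrix.ndim, tensor.ndim)     # 1 2 3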

All of these can be built with the NumPy package, which we discussed in the previous blog. Before we go further into the implementation, let us take a quick look at the reasons NOT to study all of linear algebra first:

1. Studying the entire field of linear algebra can take months or even years, which delays our goal of working on real ML problems.

2. Not all topics in linear algebra are relevant to applied machine learning; many of them only matter for the theoretical side.

Implementing Matrices in Python

A 2-dimensional NumPy array is used to represent a matrix in Python. A NumPy array built from a list of lists gives us a matrix; a 2 x 3 example is shown below.
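A minimal sketch (the values and the name A are just for illustration) of a 2 x 3 matrix built from a list of lists:

# A 2 x 3 matrix: 2 rows, 3 columns
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])

print(A)
print(A.shape)   # (2, 3)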

Matrix – Matrix Multiplication

Matrix multiplication is also called the matrix dot product, and it is a little more involved than the previous operations.

The basic rule for this dot product is that the number of columns in the first matrix must equal the number of rows in the second matrix.

If A is of shape M x N and B is of shape N x P, then the product C = A . B is of shape M x P.
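Here is a small sketch of that rule (the matrices A and B are just example values): A is 2 x 3 and B is 3 x 2, so the product is 2 x 2.

# Matrix dot product: (2 x 3) . (3 x 2) -> (2 x 2)
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])          # shape (2, 3)
B = np.array([[7, 8],
              [9, 10],
              [11, 12]])           # shape (3, 2)

C = A.dot(B)                       # equivalently: A @ B
print(C)
print(C.shape)                     # (2, 2)

NumPy also ships helpers for special matrices. The identity matrix, for example, can be created directly: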

# Identity matrix: 1s on the diagonal, 0s everywhere else
from numpy import identity
I = identity(3)
print(I)
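One thing worth trying (a small sketch, reusing the illustrative 2 x 3 matrix from earlier, redefined so the snippet runs on its own): multiplying a matrix by a suitably sized identity matrix gives the matrix back, which is why it is called the identity.

# A . I = A (the result comes back as floats, since identity() is float64)
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])
I = np.identity(3)

print(A.dot(I))          # same values as A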

Have fun making changes to the code above, and do let me know if you find something interesting 🙂
