Sixth lecture on mathematics for ML.

Today we will talk about linear transformations and complete the linear algebra topic.

Let me know which topics I should start next.

#MathsForML #MachineLearning
👇🧵
1) A linear transformation is simple to define. If V and W are two vector spaces, then T: V -> W is called a linear transformation (LT) if T(au + bv) = aT(u) + bT(v) for all vectors u, v and scalars a, b.

2) If T(v) = 0 for every v, then T is the zero transformation.

3) If T(v) = v for every v, then T is the identity transformation.
4) For a matrix A we generally write T_A(x), which equals Ax.
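
A quick NumPy sketch (the matrix and vectors are my own illustration, not from the lecture): a matrix A defines T_A(x) = Ax, and we can check the linearity property numerically.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0],
              [1.0, -1.0]])   # T_A maps R^2 -> R^3

def T(x):
    return A @ x

u, v = np.array([1.0, 2.0]), np.array([-3.0, 0.5])
a, b = 2.0, -1.5

lhs = T(a * u + b * v)
rhs = a * T(u) + b * T(v)
print(np.allclose(lhs, rhs))  # True: T(au + bv) = aT(u) + bT(v)
```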

5) So we can define the range as the set of all T(x), and the null space as the set of all x such that T(x) = 0.

6) R(T) is actually a subspace: for T_A(x) = Ax it is the linear span of the columns of A, and its dimension is called the rank.
7) The rank-nullity theorem can also be seen here: the sum of the dimensions of the null space and the range equals n, the dimension of the domain (the number of columns of A).
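
A small check of rank + nullity = n (the matrix is my own example; assuming SciPy is available for null_space):

```python
import numpy as np
from scipy.linalg import null_space  # used only to get a basis of the null space

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])      # rank 1, n = 3 columns

rank = np.linalg.matrix_rank(A)      # dim of the range (column space)
nullity = null_space(A).shape[1]     # dim of the null space
print(rank, nullity, rank + nullity == A.shape[1])  # 1 2 True
```
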
8) We can define an inner product space now. <u,v> is a valid inner product if <au+bv, w> = a<u,w> + b<v,w>, <u,v> = complex conjugate of <v,u>, and <u,u> >= 0 with <u,u> = 0 if and only if u = 0.

9) For real column vectors, <u,v> in terms of matrix multiplication is the transpose of u multiplied by v, i.e. <u,v> = u^T v (the dot product).
10) The norm of a vector u can be defined as ||u|| = sqrt(<u,u>).
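
A sketch of points 8-10 with real column vectors (the example vectors are mine):

```python
import numpy as np

u = np.array([1.0, 2.0, 2.0])
v = np.array([3.0, 0.0, 4.0])

inner = u.T @ v            # <u, v> = u^T v, same as np.dot(u, v)
norm_u = np.sqrt(u @ u)    # ||u|| = sqrt(<u, u>)
print(inner, norm_u, np.isclose(norm_u, np.linalg.norm(u)))  # 11.0 3.0 True
```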

11) The Cauchy-Schwarz inequality is nothing but |<u,v>| <= ||u|| * ||v||.

12) We can also define an angle between vectors using the norm and the inner product: <u,v> / (||u|| * ||v||).
13) So cos(theta) equals what we defined in the last point, and since by Cauchy-Schwarz that value lies between -1 and 1, the angle is always well defined.
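
A sketch for points 11-13 (the vectors are my own example): Cauchy-Schwarz and the angle between two vectors.

```python
import numpy as np

u = np.array([1.0, 0.0, 1.0])
v = np.array([1.0, 1.0, 0.0])

lhs = abs(u @ v)
rhs = np.linalg.norm(u) * np.linalg.norm(v)
print(lhs <= rhs)                          # True: |<u,v>| <= ||u|| ||v||

cos_theta = (u @ v) / rhs                  # always in [-1, 1] by Cauchy-Schwarz
theta = np.degrees(np.arccos(cos_theta))
print(cos_theta, theta)                    # 0.5 60.0
```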

14) Vectors are orthogonal if theta is 90 degrees, or in other words <u,v> = 0.
15) If all vectors in a set are mutually orthogonal and each has norm 1, then it is an orthonormal set.

16) If such a set is also a basis of the vector space, it is called an orthonormal basis.
17) One can turn a set of vectors into an orthonormal set using the Gram-Schmidt orthogonalization process.
https://en.wikipedia.org/wiki/Gram%E2%80%93Schmidt_process

18) It only works for linearly independent sets.

19) The general formula is w_n = u_n - <u_n,v_1>v_1 - <u_n,v_2>v_2 - ... - <u_n,v_{n-1}>v_{n-1}, where v_1, ..., v_{n-1} are the previously computed orthonormal vectors; then normalize: v_n = w_n / ||w_n||.

20) It is a recursive formula: each new vector is orthogonalized against all the previous ones and then normalized.
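
A minimal NumPy sketch of points 17-20 (my own implementation, not code from the lecture): orthonormalize a linearly independent list of vectors and check the result.

```python
import numpy as np

def gram_schmidt(vectors):
    ortho = []
    for u in vectors:
        w = u.astype(float)
        for v in ortho:                 # subtract projections onto previous v's:
            w -= (u @ v) * v            # w_n = u_n - <u_n,v_1>v_1 - ...
        ortho.append(w / np.linalg.norm(w))   # normalize to get v_n
    return np.array(ortho)

U = [np.array([1.0, 1.0, 0.0]),
     np.array([1.0, 0.0, 1.0]),
     np.array([0.0, 1.0, 1.0])]
Q = gram_schmidt(U)
print(np.allclose(Q @ Q.T, np.eye(3)))  # True: the rows form an orthonormal set
```
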
21) There are plenty of applications of what we have learned so far, especially in matrix decompositions, which are required in many machine learning models.
22) The other two important definitions that you should learn are eigenvalues and eigenvectors. If Ax = lambda*x for a nonzero vector x, then lambda is an eigenvalue and x is an eigenvector of A.

23) Two important results: the product of the eigenvalues equals the determinant, and their sum equals the trace.
There are lots of other applications of eigenvalues and eigenvectors that you should check out.
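
A quick numerical check of points 22-23 (the matrix is my own example):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigvals, eigvecs = np.linalg.eig(A)
x, lam = eigvecs[:, 0], eigvals[0]
print(np.allclose(A @ x, lam * x))                     # True: A x = lambda x
print(np.isclose(np.prod(eigvals), np.linalg.det(A)))  # True: product = determinant
print(np.isclose(np.sum(eigvals), np.trace(A)))        # True: sum = trace
```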

With this, I am ending the topic of linear algebra. Probability will be next. I am thinking of starting interview puzzles after that.
Sorry for the delay in publishing this lecture. If you have missed my previous lectures in this series,
check out this thread: https://twitter.com/PythonLover9/status/1386956228358664193?s=20
You can follow @PythonLover9.