🧮 Math Basics Lab

Master the mathematical foundations for Machine Learning and AI

๐Ÿ“ Vector Mathematics

Core Vector Equations

1. Vector Representation

$$\vec{v} = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix} = [v_1, v_2, \dots, v_n]$$

A vector is an ordered list of numbers representing magnitude and direction in n-dimensional space.

Current Vectors:

vโ‚ = [3, 2]
vโ‚‚ = [1, 4]

2. Dot Product (Inner Product)

$$\vec{a} \cdot \vec{b} = \sum_{i=1}^{n} a_i b_i = a_1 b_1 + a_2 b_2 + \dots + a_n b_n$$

Measures how much two vectors point in the same direction. Result is a scalar (single number).

Calculation:

vโ‚ ยท vโ‚‚ = (3)(1) + (2)(4) = 11.00

3. Vector Magnitude (Length)

$$||\vec{v}|| = \sqrt{\sum_{i=1}^{n} v_i^2} = \sqrt{v_1^2 + v_2^2 + \dots + v_n^2}$$

The length or size of a vector, calculated using the Pythagorean theorem.

Calculations:

||vโ‚|| = โˆš(3ยฒ + 2ยฒ) = 3.61
||vโ‚‚|| = โˆš(1ยฒ + 4ยฒ) = 4.12
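
The same magnitudes fall out of NumPy's Euclidean (L2) norm, a sketch assuming NumPy:

```python
import numpy as np

v1 = np.array([3.0, 2.0])
v2 = np.array([1.0, 4.0])

# Magnitude via the Euclidean (L2) norm
mag1 = np.linalg.norm(v1)   # sqrt(3^2 + 2^2) = sqrt(13)
mag2 = np.linalg.norm(v2)   # sqrt(1^2 + 4^2) = sqrt(17)
print(round(mag1, 2), round(mag2, 2))
```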

4. Cosine Similarity

cosโก(ฮธ)=aโƒ—โ‹…bโƒ—โˆฃโˆฃaโƒ—โˆฃโˆฃโ‹…โˆฃโˆฃbโƒ—โˆฃโˆฃ\cos(\theta) = \frac{\vec{a} \cdot \vec{b}}{||\vec{a}|| \cdot ||\vec{b}||}

Measures the angle between vectors. Range: [-1, 1]. Value of 1 means same direction, -1 means opposite, 0 means perpendicular.

Result:

cos(θ) = 11.00 / (3.61 × 4.12) = 0.740
Angle θ = 42.3°
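
Putting the dot product and the two magnitudes together recovers both numbers above; a sketch assuming NumPy:

```python
import numpy as np

v1 = np.array([3.0, 2.0])
v2 = np.array([1.0, 4.0])

# cos(theta) = (a . b) / (||a|| * ||b||)
cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))

# Recover the angle itself via the inverse cosine
theta_deg = np.degrees(np.arccos(cos_theta))
print(cos_theta, theta_deg)   # ~0.740 and ~42.3 degrees
```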

5. Unit Vector (Normalization)

$$\hat{v} = \frac{\vec{v}}{||\vec{v}||}$$

A vector with magnitude 1, pointing in the same direction as the original. Used for direction-only comparisons.
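
Normalization is just division by the magnitude; a sketch assuming NumPy:

```python
import numpy as np

v1 = np.array([3.0, 2.0])

# Divide by the magnitude to get a unit vector in the same direction
v1_hat = v1 / np.linalg.norm(v1)

print(v1_hat)                   # same direction as v1
print(np.linalg.norm(v1_hat))   # 1.0 (up to floating-point error)
```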

6. Vector Addition & Scalar Multiplication

$$\vec{a} + \vec{b} = [a_1 + b_1, a_2 + b_2, \dots, a_n + b_n]$$
$$c \cdot \vec{v} = [c \cdot v_1, c \cdot v_2, \dots, c \cdot v_n]$$

Addition combines vectors element-wise. Scalar multiplication scales the vector's magnitude.
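
Both element-wise operations map directly onto NumPy's array arithmetic; a sketch assuming NumPy:

```python
import numpy as np

a = np.array([3.0, 2.0])
b = np.array([1.0, 4.0])

# Element-wise addition
s = a + b          # [4.0, 6.0]

# Scalar multiplication scales every component
scaled = 2.5 * a   # [7.5, 5.0]
print(s, scaled)
```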

🎯 Applications in Machine Learning Algorithms

1. K-Nearest Neighbors (KNN)

Uses vector magnitude to calculate Euclidean distance: $d = ||\vec{x}_1 - \vec{x}_2||$

Finds the k closest training points and classifies a new sample by majority vote among them
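
A minimal KNN classifier built on the distance formula above, using a hypothetical toy dataset (the points and labels are made up for illustration):

```python
import numpy as np

# Toy dataset: 2-D points with class labels (hypothetical values)
X = np.array([[1.0, 1.0], [1.5, 2.0], [5.0, 4.0], [6.0, 5.0]])
y = np.array([0, 0, 1, 1])

def knn_predict(x_new, X, y, k=3):
    # Euclidean distance d = ||x1 - x2|| to every training point
    dists = np.linalg.norm(X - x_new, axis=1)
    nearest = np.argsort(dists)[:k]     # indices of the k closest points
    votes = y[nearest]
    return np.bincount(votes).argmax()  # majority vote

print(knn_predict(np.array([5.5, 4.5]), X, y))
```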

2. Support Vector Machines (SVM)

Decision boundary: $\vec{w} \cdot \vec{x} + b = 0$ (a dot product)

Maximizes margin between classes using vector operations
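
Once a linear SVM is trained, classification is only a dot product and a sign check. A sketch with hypothetical (not trained) weight and bias values:

```python
import numpy as np

# Hypothetical learned parameters of a linear SVM
w = np.array([1.0, -1.0])   # weight vector (normal to the decision boundary)
b = -0.5                    # bias

def svm_classify(x):
    # The sign of w . x + b decides which side of the hyperplane x falls on
    return 1 if np.dot(w, x) + b >= 0 else -1

print(svm_classify(np.array([3.0, 1.0])))   # w.x + b = 1.5  -> class +1
print(svm_classify(np.array([1.0, 3.0])))   # w.x + b = -2.5 -> class -1
```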

3. Neural Networks

Layer computation: $\vec{y} = \sigma(W\vec{x} + \vec{b})$ (matrix-vector multiplication)

Each neuron's incoming weights form a vector (rows of W), and each layer's inputs and activations are vectors
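
One dense layer is exactly the formula above: a matrix-vector product, a bias addition, and an element-wise activation. A sketch with illustrative (not trained) weights, using the sigmoid as $\sigma$:

```python
import numpy as np

# Illustrative layer parameters (2 inputs -> 2 outputs)
W = np.array([[0.5, -0.2],
              [0.1,  0.8]])   # weight matrix
b = np.array([0.0, 0.1])      # bias vector

def sigmoid(z):
    # Element-wise sigmoid activation
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([3.0, 2.0])      # input vector
y = sigmoid(W @ x + b)        # matrix-vector product, then activation
print(y)
```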

4. Cosine Similarity (Text/Recommendation)

Document similarity: $\text{sim}(d_1, d_2) = \frac{\vec{d}_1 \cdot \vec{d}_2}{||\vec{d}_1|| \cdot ||\vec{d}_2||}$

Used in recommendation systems and NLP

5. Principal Component Analysis (PCA)

Finds principal components (eigenvectors) using vector projections

Dimensionality reduction through vector transformations
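
The eigenvector machinery reduces to a few NumPy calls: center the data, form the covariance matrix, take its top eigenvector, and project. A sketch on a small hypothetical dataset (the sample values are made up for illustration):

```python
import numpy as np

# Toy data: 5 samples, 2 features (hypothetical values)
X = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2], [3.1, 3.0]])

# Center the data, then eigendecompose the covariance matrix
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order

# First principal component = eigenvector with the largest eigenvalue
pc1 = eigvecs[:, -1]

# Project each sample onto pc1: 2-D data reduced to 1-D
X_reduced = Xc @ pc1
print(X_reduced.shape)
```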

6. Gradient Descent

Update rule: $\vec{\theta}_{\text{new}} = \vec{\theta}_{\text{old}} - \alpha \nabla J(\vec{\theta})$

The gradient is a vector pointing in the direction of steepest ascent, so the update subtracts it to move downhill
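
The update rule can be sketched on a deliberately simple convex cost, $J(\vec{\theta}) = ||\vec{\theta}||^2$, whose gradient is $2\vec{\theta}$ (the cost function and starting point are chosen for illustration):

```python
import numpy as np

def grad_J(theta):
    # Gradient of J(theta) = ||theta||^2
    return 2.0 * theta

theta = np.array([3.0, 2.0])   # starting point
alpha = 0.1                    # learning rate

for _ in range(100):
    theta = theta - alpha * grad_J(theta)   # step against the gradient

print(theta)   # approaches the minimum at [0, 0]
```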

🎨 Interactive Vector Visualization

Adjust the vectors to see how dot product, magnitude, and angle change in real-time.

Adjust Vectors (slider values): v₁ = [3, 2], v₂ = [1, 4]

Computed Results:

Dot Product (v₁ · v₂): 11.00

Magnitude ||v₁||: 3.61

Magnitude ||v₂||: 4.12

Cosine Similarity: 0.740

Angle between vectors: 42.3°

Vector Visualization

[Interactive plot: v₁ and v₂ drawn from the origin; a yellow arc marks the 42.3° angle between them]

📊 Matrix Operations

Matrix-Vector Multiplication

$$A\vec{x} = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} a_{11}x_1 + a_{12}x_2 \\ a_{21}x_1 + a_{22}x_2 \end{bmatrix}$$

Used in neural networks for layer transformations, linear regression for predictions, and image transformations.
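
Each output component is the dot product of one row of A with x, which NumPy's `@` operator computes directly; a sketch with illustrative values:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])   # a11=1, a12=2, a21=3, a22=4
x = np.array([5.0, 6.0])

# Row-by-row dot products: [1*5 + 2*6, 3*5 + 4*6]
y = A @ x
print(y)   # [17.0, 39.0]
```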