Educational · AI

How AI Actually Works

AI isn't magic. It's math. Here's the actual math behind neural networks, training, and language models.


🧠 What is a Neural Network?

A neural network is a bunch of "neurons" (nodes) connected in layers. Each connection has a weight — a number that says how much that input matters. Drag the slider and watch how fast the connections multiply!

[Interactive diagram: a network with 4 input nodes (x1–x4), 3 hidden nodes (h1–h3), and 1 output node (y). Connections per layer = neurons_in × neurons_out, so the total here is 4×3 + 3×1 = 15. Try changing the hidden-layer slider to see the count grow.]

GPT-4 reportedly has roughly 1.8 trillion parameters (weights) — that's 1,800 billion connections — and its training run is estimated to have cost ~$100M.
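The connection-counting formula above is easy to sketch in code. Here's a minimal Python version (function and variable names are my own, not from the page):

```python
# Count the connections (weights) in a fully connected network.
# Each adjacent pair of layers contributes n_in * n_out connections.
def count_connections(layer_sizes):
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

# The example network: 4 inputs -> 3 hidden -> 1 output
print(count_connections([4, 3, 1]))  # 4*3 + 3*1 = 15
```

Bump the hidden layer from 3 to 4 neurons and the count jumps to 4×4 + 4×1 = 20 — connections grow multiplicatively, which is why big models have so many weights.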

🏋️ What is Training?

Training is when the network sees thousands of examples and adjusts its weights (via gradient descent) to reduce its error. Each full pass through the data is called an "epoch." Watch accuracy climb toward ~95% and loss fall toward 0 as the model learns!

[Interactive simulation: over 50 epochs, accuracy ≈ 1 − e^(−0.1 × epoch) climbs from 0% toward 100%, while loss = 1 − accuracy falls toward 0.]
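The learning curve in the simulation is a toy model, not a real training run, but it's easy to reproduce. A minimal sketch using the same formulas:

```python
import math

# Toy learning curve: accuracy approaches 100% exponentially with each
# epoch, and loss is defined as simply 1 - accuracy.
def accuracy(epoch, rate=0.1):
    return 1 - math.exp(-rate * epoch)

for epoch in (0, 10, 30, 50):
    acc = accuracy(epoch)
    print(f"epoch {epoch:2d}: accuracy {acc:.1%}, loss {1 - acc:.3f}")
```

Note the diminishing returns: accuracy crosses ~95% around epoch 30, and the last 20 epochs only squeeze out a few more points. Real training curves are noisier, but the shape — fast early gains, then a long slow tail — is typical.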

🔤 What are Tokens?

AI models don't read words or letters — they read tokens. A token is roughly a word or word-piece. The GPT-4 API charges per token. Type something and see it split!

[Interactive tokenizer: "The quick brown fox jumps over the lazy dog" splits into 9 tokens across 43 characters, or ≈4.8 chars/token (≈4 is typical for English).]
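For simple English text, a whitespace split is a decent stand-in for a tokenizer — real BPE tokenizers (like the one GPT-4 uses) split rarer words into multiple pieces, but common words usually map to one token each. A rough sketch:

```python
# Crude approximation: treat each whitespace-separated word as one token.
# Real tokenizers (byte-pair encoding) split rare words into sub-pieces.
sentence = "The quick brown fox jumps over the lazy dog"
tokens = sentence.split()

print(len(tokens))                            # 9 tokens
print(len(sentence))                          # 43 characters
print(round(len(sentence) / len(tokens), 1))  # 4.8 chars/token
```

This is why the ~4 chars/token rule of thumb works for English: average word length plus a space lands right around there.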

GPT-4 has a context window of ~128,000 tokens. That's roughly a 300-page book at once.

⚙️ The Math Behind It: Perceptron

A perceptron is the simplest neural network: multiply each input by its weight, add a bias, then squash the result through a sigmoid to get a probability between 0 and 1. This is the same basic operation repeated billions of times inside GPT-4.

[Interactive perceptron: weights w1 = 1.0, w2 = −0.5, w3 = 0.8, bias = 0.2. Weighted sum: 0.9900; with bias: 1.1900. Sigmoid output: 76.7% → prediction: YES (class 1).]
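The whole perceptron fits in a few lines of Python. The widget doesn't show its input values, so the inputs below are my own, chosen to reproduce the weighted sum of 0.99 from the example:

```python
import math

def sigmoid(z):
    # Squashes any real number into the (0, 1) range.
    return 1 / (1 + math.exp(-z))

def perceptron(inputs, weights, bias):
    # Weighted sum of inputs, plus bias, through the sigmoid.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return sigmoid(z)

weights = [1.0, -0.5, 0.8]
bias = 0.2
# Illustrative inputs giving the example's weighted sum of 0.99:
# 0.55*1.0 + (-0.3)*(-0.5) + 0.3625*0.8 = 0.99; plus bias -> 1.19
p = perceptron([0.55, -0.3, 0.3625], weights, bias)
print(f"{p:.1%}")  # sigmoid(1.19) ≈ 76.7% -> class 1
```

Since 76.7% > 50%, the perceptron predicts class 1. A full network is just many of these units stacked in layers, with the sigmoid usually swapped for other activation functions.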