
Epoch

An epoch is one complete pass through an entire training dataset during machine learning model development.

Also known as: Training epoch, Learning cycle

What is an Epoch?

In artificial intelligence and machine learning, an epoch represents one full cycle through your entire training dataset. When a machine learning model trains, it learns by examining data repeatedly – and each complete pass through all that data counts as one epoch.

Think of it like studying for an exam: reading through all your notes once = one epoch. If you read through them three times, that's three epochs.

How Epochs Work in AI Training

During each epoch, the model:

1. Receives batches of training data
2. Makes predictions based on current knowledge
3. Compares predictions to actual results
4. Adjusts its internal parameters to improve accuracy
5. Repeats until it's processed every data sample

Once all data has been processed, the epoch ends, and the next epoch begins – using the same data again, but with an improved model.
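The loop described above can be sketched in a few lines. This is a deliberately minimal illustration (a toy linear model fitted with plain gradient descent, not a real framework) in which the function name, data, and learning rate are all invented for the example:

```python
# Minimal sketch of the epoch loop: predict, compare, adjust, repeat.
# Illustrative only -- real models use frameworks like PyTorch or TensorFlow.

def train(data, epochs, lr=0.01):
    """Fit y = w * x with gradient descent, one sample per 'batch'."""
    w = 0.0
    for epoch in range(epochs):      # one epoch = one full pass over the data
        for x, y in data:            # each batch (here, one sample) in turn
            pred = w * x             # make a prediction with current parameters
            error = pred - y         # compare prediction to the actual result
            w -= lr * error * x      # adjust the parameter to reduce the error
    return w

# Data generated from y = 2x; with each epoch, w moves closer to 2.
data = [(x, 2 * x) for x in range(1, 6)]
print(round(train(data, epochs=20), 2))
```

Notice that each epoch reuses the same five samples; only the parameter `w` changes between passes, which is exactly why repeated epochs improve the model.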

Why Epochs Matter in Marketing AI

In advertising and media buying, epochs directly impact model performance. For instance, if you're training an AI model to predict which audience segments will convert:

  • Too few epochs: Your model hasn't learned the patterns in your data. You'll get poor predictions.
  • Too many epochs: Your model might "memorise" your training data instead of learning generalisable patterns (called overfitting). It performs well on training data but fails with new, real-world data.
  • Just right: Your model captures genuine patterns and performs well on unseen data.

Epochs vs. Iterations vs. Batches

These terms are often confused:

  • Batch: A subset of your training data (e.g., 32 samples from 10,000 total)
  • Iteration: One update to the model using one batch
  • Epoch: One complete pass using ALL batches

If you have 10,000 samples and batches of 32, you'll need 313 iterations to complete one epoch.
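The arithmetic is a ceiling division, since the final partial batch still counts as an iteration. A one-line check (the function name is made up for this example):

```python
import math

def iterations_per_epoch(n_samples, batch_size):
    # The last, partial batch still counts as one iteration, hence ceil:
    # 10,000 / 32 = 312.5, which rounds up to 313.
    return math.ceil(n_samples / batch_size)

print(iterations_per_epoch(10_000, 32))  # → 313
```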

Practical Example for Media Buyers

Imagine you're optimising an AI model to predict ad performance across different placements. Your dataset has 100,000 historical ad impressions.

  • Epoch 1: Model sees all 100,000 impressions, learns initial patterns
  • Epoch 2: Model sees the same 100,000 impressions again, refines understanding
  • Epoch 3-5: Further refinement with each pass

Monitoring your model's accuracy across epochs helps you spot overfitting (when performance on test data starts declining while training performance keeps improving).

Choosing the Right Number of Epochs

There's no universal "correct" number – it depends on:

  • Your dataset size
  • Model complexity
  • Available computing power
  • Your problem type

Most practitioners use techniques like "early stopping," which halts training when validation performance plateaus, preventing wasted computation and overfitting.
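Early stopping can be sketched as a simple rule over per-epoch validation scores: stop once the score hasn't improved for a set number of epochs (the "patience"). The function and scores below are illustrative, not from any particular library:

```python
def early_stopping_epoch(val_scores, patience=3):
    """Return the epoch (0-based) at which training would halt:
    stop once the validation score has not improved for `patience` epochs."""
    best, best_epoch = float("-inf"), 0
    for epoch, score in enumerate(val_scores):
        if score > best:
            best, best_epoch = score, epoch       # new best -- keep training
        elif epoch - best_epoch >= patience:
            return epoch                          # plateau exceeded -- stop
    return len(val_scores) - 1                    # trained to the end

# Validation accuracy peaks at epoch 3, then declines: stop at epoch 6.
scores = [0.60, 0.70, 0.75, 0.76, 0.75, 0.74, 0.73]
print(early_stopping_epoch(scores, patience=3))
```

In practice you would also restore the parameters saved at the best epoch (epoch 3 here), rather than keeping the final, overfitted ones.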

Key Takeaway

Epochs are fundamental to understanding how AI models learn. In media buying and advertising, getting the epoch count right ensures your predictive models – whether for audience targeting, bid optimisation, or creative performance – generalise well to real-world scenarios rather than just memorising historical data.

Frequently Asked Questions

What is an epoch?
An epoch is one complete pass through an entire training dataset during machine learning model development. It's a fundamental unit of training cycles.
Why do epochs matter in advertising AI?
Epochs directly affect model performance. Too few and models don't learn patterns; too many and they overfit. The right number ensures accurate predictions on new data.
How many epochs should I use?
There's no universal answer – it depends on dataset size, model complexity, and compute resources. Use early stopping techniques to find the optimal point automatically.
What's the difference between an epoch and an iteration?
An iteration is one model update using one batch of data. An epoch is one complete pass through all batches. One epoch contains multiple iterations.
Can you have too many epochs?
Yes. Excessive epochs lead to overfitting – the model memorises training data rather than learning generalisable patterns, causing poor performance on new data.
