# Expectation-Maximization (EM) Algorithm


The **expectation-maximization (EM) algorithm** is an iterative method for finding maximum-likelihood estimates of model parameters when the data is incomplete, has missing values, or depends on unobserved (hidden) latent variables. While maximum likelihood estimation can find the “best-fit” model for a fully observed data set, it does not work well when some of the data is missing. The EM algorithm handles this case by alternating between two steps: an expectation (E) step, which uses the current parameter estimates to compute the expected values of the missing or latent data, and a maximization (M) step, which uses those expected values to re-estimate the parameters. Each iteration produces a better estimate than the last, and the process repeats until the algorithm converges to a fixed point.
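As a concrete illustration of the alternating E and M steps, here is a minimal sketch of EM fitting a two-component one-dimensional Gaussian mixture. The latent variable for each point is which component generated it; the function name `em_gmm_1d` and the fixed iteration count are choices made for this example, not part of any standard API.

```python
import math
import random

def em_gmm_1d(data, n_iter=50):
    """EM for a two-component 1D Gaussian mixture (illustrative sketch)."""
    # Initial guesses: means at the data extremes, unit variances, equal weights.
    mu = [min(data), max(data)]
    var = [1.0, 1.0]
    pi = 0.5  # mixing weight of component 0

    def pdf(x, m, v):
        # Normal density with mean m and variance v.
        return math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)

    for _ in range(n_iter):
        # E-step: responsibility of component 0 for each data point,
        # i.e. the expected value of the latent assignment variable.
        r = []
        for x in data:
            p0 = pi * pdf(x, mu[0], var[0])
            p1 = (1 - pi) * pdf(x, mu[1], var[1])
            r.append(p0 / (p0 + p1))

        # M-step: re-estimate parameters from the responsibilities.
        n0 = sum(r)
        n1 = len(data) - n0
        mu[0] = sum(ri * x for ri, x in zip(r, data)) / n0
        mu[1] = sum((1 - ri) * x for ri, x in zip(r, data)) / n1
        var[0] = sum(ri * (x - mu[0]) ** 2 for ri, x in zip(r, data)) / n0
        var[1] = sum((1 - ri) * (x - mu[1]) ** 2 for ri, x in zip(r, data)) / n1
        pi = n0 / len(data)

    return mu, var, pi

# Synthetic data: two clusters centered at 0 and 5.
random.seed(0)
data = [random.gauss(0, 1) for _ in range(200)] + \
       [random.gauss(5, 1) for _ in range(200)]
mu, var, pi = em_gmm_1d(data)
```

After running, the recovered means should land near the true cluster centers (0 and 5) and the mixing weight near 0.5, even though the component assignments were never observed.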

**Cite as:** Expectation-Maximization (EM) Algorithm.

*Brilliant.org*. Retrieved from https://brilliant.org/wiki/expectation-maximization-algorithm/