HiddenMarkovModel Class (Accord.NET Framework)
Discrete-density Hidden Markov Model.
Inheritance Hierarchy

System.Object
  Accord.Statistics.Models.Markov.BaseHiddenMarkovModel
    Accord.Statistics.Models.Markov.HiddenMarkovModel

Namespace: Accord.Statistics.Models.Markov
Assembly: Accord.Statistics (in Accord.Statistics.dll)

public class HiddenMarkovModel : BaseHiddenMarkovModel, 
	IHiddenMarkovModel, ICloneable

Hidden Markov Models (HMM) are stochastic methods to model temporal and sequence data. They are especially known for their application in temporal pattern recognition such as speech, handwriting, gesture recognition, part-of-speech tagging, musical score following, partial discharges and bioinformatics.

This page refers to the discrete-density version of the model. For arbitrary density (probability distribution) definitions, please see HiddenMarkovModel&lt;TDistribution&gt;.

A dynamical system of discrete nature, assumed to be governed by a Markov chain, emits a sequence of observable outputs. Under the Markov assumption, it is also assumed that the latest output depends only on the current state of the system. Such states are often not known by the observer when only the output values are observable.

Assuming the Markov property, the probability of any sequence of observations x = (x1, ..., xT) occurring when following a given sequence of states y = (y1, ..., yT) can be stated as

    p(x, y) = p(y1) p(x1|y1) ∏_{t=2..T} p(yt|yt-1) p(xt|yt)

in which the probabilities p(yt|yt-1) can be read as the probability of being currently in state yt given we were in the state yt-1 at the previous instant t-1, and the probability p(xt|yt) can be understood as the probability of observing xt at instant t given we are currently in the state yt. To compute those probabilities, we simply use two matrices A and B. The matrix A is the matrix of state transition probabilities: it gives the probabilities p(yt|yt-1) of jumping from one state to another. The matrix B is the matrix of observation probabilities, which gives the distribution density p(xt|yt) associated with a given state yt. In the discrete case, B is really a matrix. In the continuous case, B is a vector of probability distributions. The overall model definition can then be stated by the tuple

    λ = (n, A, B, π)

in which n is an integer representing the total number of states in the system, A is a matrix of transition probabilities, B is either a matrix of observation probabilities (in the discrete case) or a vector of probability distributions (in the general case), and π is a vector of initial state probabilities determining the probability of starting in each of the possible states in the model.
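As a concrete illustration of the joint probability formula above, the probability of one particular state path and observation sequence can be multiplied out directly from A, B and π. The matrices here match those used in the code example further below; the chosen path y = (0, 0) and observations x = (0, 1) are arbitrary, picked just for this sketch:

```csharp
// Transition (A), emission (B) and initial (pi) probabilities,
// matching the example further below:
double[,] A  = { { 0.7, 0.3 }, { 0.4, 0.6 } };
double[,] B  = { { 0.1, 0.4, 0.5 }, { 0.6, 0.3, 0.1 } };
double[] pi  = { 0.6, 0.4 };

// p(x, y) = p(y1) p(x1|y1) p(y2|y1) p(x2|y2)
//         = pi[0] * B[0,0] * A[0,0] * B[0,1]
double p = pi[0] * B[0, 0] * A[0, 0] * B[0, 1];
// p = 0.6 * 0.1 * 0.7 * 0.4 = 0.0168
```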

Hidden Markov Models attempt to model such systems and allow, among other things,

  1. Infer the most likely sequence of states that produced a given output sequence,
  2. Infer which will be the most likely next state (and thus predict the next output),
  3. Calculate the probability that a given sequence of outputs originated from the system (allowing the use of hidden Markov models for sequence classification).
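For instance, task 2 can be sketched as follows. This is a minimal sketch assuming the Predict method available on discrete models in Accord.NET (the exact overloads vary between framework versions); the model parameters are the same ones used in the example further below:

```csharp
using Accord.Statistics.Models.Markov;

// The same model parameters as in the example further below:
double[,] A  = { { 0.7, 0.3 }, { 0.4, 0.6 } };
double[,] B  = { { 0.1, 0.4, 0.5 }, { 0.6, 0.3, 0.1 } };
double[] pi  = { 0.6, 0.4 };

var hmm = new HiddenMarkovModel(A, B, pi);

// Given the symbols observed so far, ask the model for the
// most likely next symbol (item 2 in the list above):
int[] observed = { 0, 1 };
int next = hmm.Predict(observed);
```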

The “hidden” in Hidden Markov Models comes from the fact that the observer does not know which state the system is in, but has only probabilistic insight into where it should be.

To learn a Markov model, you can find a list of both supervised and unsupervised learning algorithms in the Accord.Statistics.Models.Markov.Learning namespace.
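As a minimal sketch, unsupervised learning with BaumWelchLearning might look like the following. This assumes the classic teacher API (construct the teacher over a model, set Tolerance and Iterations, then call Run with the observation sequences); newer framework releases expose a Learn method instead, and the sample sequences here are made up for illustration:

```csharp
using Accord.Statistics.Models.Markov;
using Accord.Statistics.Models.Markov.Learning;

// A set of short observation sequences; each symbol is in { 0, 1, 2 }
// (values invented for this sketch):
int[][] sequences =
{
    new int[] { 0, 1, 2, 2 },
    new int[] { 0, 0, 1, 2 },
    new int[] { 0, 1, 1, 2 },
};

// A discrete model with 2 hidden states and 3 observation symbols:
var hmm = new HiddenMarkovModel(states: 2, symbols: 3);

// Create the Baum-Welch teacher for the model:
var teacher = new BaumWelchLearning(hmm)
{
    Tolerance = 0.0001, // stop when the log-likelihood change is small
    Iterations = 0      // zero means: no cap on the number of iterations
};

// Run the learning algorithm, obtaining the log-likelihood of the data:
double logLikelihood = teacher.Run(sequences);
```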


  • Wikipedia contributors. "Hidden Markov model." Wikipedia, The Free Encyclopedia. Available at:
  • Nikolai Shokhirev, Hidden Markov Models. Personal website. Available at:
  • X. Huang, A. Acero, H. Hon. "Spoken Language Processing." pp 396-397. Prentice Hall, 2001.
  • Dawei Shen. Some mathematics for HMMs, 2008. Available at:


The example below reproduces the example given in the Wikipedia entry for the Viterbi algorithm. In this example, the model's parameters are initialized manually; however, it is possible to learn them automatically using BaumWelchLearning.

// Create the transition matrix A 
double[,] transition = 
{
    { 0.7, 0.3 },
    { 0.4, 0.6 }
};

// Create the emission matrix B 
double[,] emission = 
{
    { 0.1, 0.4, 0.5 },
    { 0.6, 0.3, 0.1 }
};

// Create the initial probabilities pi 
double[] initial =
{
    0.6, 0.4
};

// Create a new hidden Markov model
HiddenMarkovModel hmm = new HiddenMarkovModel(transition, emission, initial);

// After that, one could, for example, query the probability 
// of a sequence occurring. We will consider the sequence 
int[] sequence = new int[] { 0, 1, 2 };

// And now we will evaluate its likelihood 
double logLikelihood = hmm.Evaluate(sequence); 

// At this point, the log-likelihood of the sequence 
// occurring within the model is -3.3928721329161653. 

// We can also get the Viterbi path of the sequence 
int[] path = hmm.Decode(sequence, out logLikelihood); 

// At this point, the state path will be 1-0-0 and the 
// log-likelihood will be -4.3095199438871337
See Also