The Cracked Bassoon

Signal detection theory

Filed under cognition, python.

Signal detection theory (SDT), or sometimes just detection theory, is a way of understanding how an observer—usually a human in a psychological experiment—discriminates between different categories of information. SDT plays a particularly important role in the subfield of psychology known as psychophysics, which is concerned with the relationships between physical stimuli and how they are perceived. Nowadays, SDT is one of the most widely accepted and extensively used theories in all of psychology and neuroscience.

This post provides a somewhat anachronistic introduction to SDT. I start by describing a simple experiment. Next I describe the most common SDT model used to analyze data from such an experiment. I next explain how the free parameters from this model are used to generate predictions of trial outcomes. Finally, I rearrange those predictions so that parameter estimates can be obtained from observed data.

This post eschews some of the concepts that appear in other treatments of SDT, such as receiver-operating characteristic curves, likelihood ratios, and optimal decision-making. This is because I don’t believe that they are essential for typical use cases. If you are looking for a complete formulation of SDT, the foundational textbook by Green and Swets (1988) is the way to go. Another commonly cited SDT reference is Macmillan and Creelman (2005). Both of these books are quite dense and could be difficult to follow for beginners, so for a gentler introduction, you might want to try either McNicol (2005) or Wickens (2002). For a detailed history of SDT, I recommend the recent article by Wixted (2020).

Yes/no experiment

Imagine a participant, or observer, in a psychophysical experiment. The experiment comprises \(n\) trials. During each trial, the observer hears a sound. The sound is either noise, or noise plus a signal. In SDT parlance, “noise” means any unwanted or uninteresting information; in this case, the “noise” is literally Gaussian white noise. By contrast, “signal” means interesting information; in this case, the signal is a pure tone.

Two example waveforms. The left-hand waveform is Gaussian noise. The right-hand waveform is Gaussian noise mixed with a pure tone—you can see the effect of mixing the noise and signal together in its not-perfectly regular peaks.

Whether the observer hears noise or noise plus signal on a particular trial is random. Let \(X\) denote a random variable that represents whether a particular trial contained a signal. If \(X=0\), the sound was noise. If \(X=1\), the sound was noise plus signal. Often both trial types are equally likely, but this does not have to be the case. Importantly, however, the probabilities of \(X=0\) and \(X=1\)—denoted by \(\mathrm{P}\left(X=0\right)\) and \(\mathrm{P}\left(X=1\right)\), respectively—are set by the experimenter and are therefore always known.
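As a quick sketch of how such a trial sequence might be generated (the 50/50 split and the 40-trial count are illustrative assumptions, not requirements of the design):

```python
import numpy as np

rng = np.random.default_rng(0)

# The experimenter fixes P(X = 1); here, an assumed 50/50 split over 40 trials.
p_signal = 0.5
n_trials = 40

# X[i] = 1 if trial i contains a signal, 0 if it is noise alone.
X = rng.binomial(1, p_signal, size=n_trials)
```

Because the experimenter writes this code, \(\mathrm{P}\left(X=1\right)\) is known exactly, which is what the analysis below relies on.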

After hearing a sound on a given trial, the observer answers the question, “Did you hear a tone?” The observer must respond on each trial. Let \(Y\) denote a random variable that represents the observer’s response. If \(Y=0\), they responded “no.” If \(Y=1\), they responded “yes.”

This kind of experiment is called a yes/no (YN) experiment. The name is quite misleading, in my opinion, because the defining features of YN experiments aren’t the instructions or response options. One could imagine an experiment where the instructions and responses are worded quite differently but the perceptual and decision-making processes required to complete the task are exactly the same. YN experiments are actually defined by presenting one stimulus per trial and having the observer judge to which of two classes the stimulus belongs (noise or noise plus signal). Unfortunately, the names of other experiment designs under SDT are just as bad, as we shall see in future posts.

Equal-variance Gaussian model

The simplest and best-known SDT model for YN experiments is the equal-variance Gaussian (EVG) model. This model, like all SDT models, has a perceptual component and a response component. This separation of components is very helpful when trying to understand SDT models (DeCarlo, 2010). The perceptual component has to do with making observations, and the response component has to do with applying decision rules.

Perceptual component

According to the perceptual component of the model, on each trial in a YN experiment, the observer generates a single observation. Observations are continuous random variables. All SDT models make assumptions about the statistical properties of observations—that is, the shape of the probability distributions they are drawn from—but they are generally agnostic about their physiological implementation. (It is possible to augment SDT models by connecting observations to physiology, such as the activity of neurons. This work is really cool, but beyond the scope of this post.)

Let \(\Psi\) denote a continuous random variable to represent the observation on a trial. Depending on the trial, \(\Psi\) is drawn from one of two distributions: the noise distribution when \(X=0\) and the noise-plus-signal distribution when \(X=1\). The EVG model assumes that these distributions are both normal (or Gaussian) and have equal variances. The noise distribution is considered to be standard normal; that is, it has zero mean and unit variance. The noise-plus-signal distribution has an unknown mean, denoted by \(d\), and unit variance.

Illustration of the perceptual component of the EVG model.

One way to write out the perceptual component of the EVG model is

\[\begin{equation} \Psi =dX + Z\\ Z \sim\mathrm{Normal}\left(0, 1\right) \end{equation}\]

but another more useful way is

\[\begin{equation} \Psi\mid{}X=0\sim\textrm{Normal}\left(0, 1\right)\\ \Psi\mid{}X=1\sim\textrm{Normal}\left(d, 1\right) \end{equation}\]

The probability (strictly speaking, the probability density) that \(\Psi\) equals a particular value, \(\psi\), on a given trial is

\[\begin{equation} \textrm{P}\left(\Psi=\psi\right)=\varphi\left(\psi-dX\right) \end{equation}\]

or again more usefully in terms of conditional probabilities,

\[\begin{equation}\textrm{P}\left(\Psi=\psi\mid{}X=0\right)=\varphi\left(\psi\right)\tag{1a}\label{eq:1a}\end{equation}\] \[\begin{equation}\textrm{P}\left(\Psi=\psi\mid{}X=1\right)=\varphi\left(\psi - d\right)\tag{1b}\label{eq:1b} \end{equation}\]

where \(\varphi\) denotes the probability density function of the standard normal distribution,

\[\begin{equation} \varphi\left(t\right)=\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}t^2} \end{equation}\]
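If you want to check this density against a library implementation, writing \(\varphi\) out explicitly should agree with SciPy’s `norm.pdf`:

```python
import numpy as np
from scipy.stats import norm

def phi(t):
    """Standard normal density, written out as in the equation above."""
    return np.exp(-0.5 * t ** 2) / np.sqrt(2 * np.pi)

# A grid of test points spanning most of the density's mass.
ts = np.linspace(-3.0, 3.0, 7)
```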

The unknown mean of the noise-plus-signal distribution, \(d\), is a free parameter in the EVG model. This parameter represents the distance between the means of the two conditional probability distributions of \(\Psi\). If \(d\) is large, the probability that one distribution generated a particular value \(\psi\) is high, while the probability that the other distribution generated \(\psi\) is low. If \(d\) is small, these two probabilities are similar. The parameter has a clear psychological interpretation, which we will discuss later.
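The perceptual component can also be simulated directly. This sketch draws observations from the two conditional distributions; the value \(d = 1.5\) is chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 1.5  # assumed noise-plus-signal mean, purely illustrative

# Half noise trials, half signal trials, in random order.
X = rng.binomial(1, 0.5, size=100_000)

# Psi | X=0 ~ Normal(0, 1);  Psi | X=1 ~ Normal(d, 1)
psi = d * X + rng.normal(size=X.size)
```

The sample means of `psi` on noise and signal trials should sit near \(0\) and \(d\), respectively.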

Response component

Under the response component of the EVG model, the decision rule is very simple: the observer responds “yes” if \(\psi\) exceeds a certain value, denoted by \(k\), and “no” otherwise. This decision rule can be written as

\[\begin{equation} Y=0\textrm{ if }\Psi\le{}k\\ Y=1\textrm{ if }\Psi>k \end{equation}\]

Illustration of the response component of the EVG model.

(There is another formulation of the decision rule of the EVG model, in terms of likelihood ratios. Indeed, this is how canonical sources define it (Green & Swets, 1988; Macmillan & Creelman, 2005). The validity of the likelihood-ratio rule has been debated over the years and as mentioned in the introduction, understanding this rule is not essential for using the EVG model under most typical circumstances. I may return to it in a future post.)

The unknown value \(k\) is the second and last free parameter of the EVG model. If \(k\) is small, the observer will be more likely to respond “yes” (\(Y=1\)) than “no” (\(Y=0\)), all else being equal. Conversely, if \(k\) is large, the observer will be more likely to respond “no” than “yes.” There is one point, \(k=d/2\), at which “yes” and “no” responses are equally likely, all else being equal. We will discuss the psychological interpretation of this parameter later.
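The decision rule itself is a single comparison; here is a minimal sketch (the criterion value `k = 0.5` below is an assumed value, not anything estimated from data):

```python
import numpy as np

def respond(psi, k):
    """Return Y for each observation: 1 ("yes") if psi exceeds k, else 0 ("no")."""
    return (psi > k).astype(int)

# An observation below k yields "no"; one above it yields "yes".
y = respond(np.array([-0.2, 1.3]), 0.5)  # array([0, 1])
```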

Prediction of trial outcomes

Because values of \(\Psi\) are unknowable, it is impossible to predict with certainty how the observer will respond on a given trial. However, we can use the EVG model to calculate the probabilities of the different outcomes.

There are four possible trial outcomes. The observer could make a correct rejection, responding “no” on a noise trial; they could make a miss, responding “no” on a signal trial; they could make a false alarm, responding “yes” on a noise trial; or they could make a hit, responding “yes” on a signal trial. To summarize,

         \(X=0\)             \(X=1\)
\(Y=0\)  correct rejections  misses
\(Y=1\)  false alarms        hits

We only need to concern ourselves with two of these outcomes. By convention, we choose false alarms and hits.

False-alarm rate

From the decision rule of the EVG model, it follows that

\[\begin{equation} \textrm{P}\left(Y=1\mid{}X=0\right)=\textrm{P}\left(\Psi > k\mid{}X=0\right)\end{equation}\]

This is the conditional probability of a false alarm, sometimes called the false-alarm rate, denoted by \(f\).

Shaded area is the false-alarm rate.

The false-alarm rate is not the same as the unconditional, or marginal, probability of a false alarm, which is actually the joint probability of \(X=0\) and \(Y=1\). The latter probability can be found by applying the axioms of probability (Kolmogorov et al., 2018), which tell us that the joint probability of two events, \(A\) and \(B\), is equal to the conditional probability of \(A\) given \(B\) multiplied by the marginal probability of \(B\),

\[\begin{equation}\mathrm{P}\left(A \cap B\right) = \mathrm{P}\left(A \mid B\right)\mathrm{P}\left(B\right)\end{equation}\]
so that, in this case,

\[\begin{equation}\mathrm{P}\left(X=0 \cap Y=1\right) =\\ \mathrm{P}\left(Y=1 \mid X=0\right)\mathrm{P}\left(X=0\right)\end{equation}\]

where, as mentioned previously, \(\mathrm{P}\left(X=0\right)\) is defined by the rules of the experiment and therefore always known.
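To make the conditional/joint distinction concrete, here is a toy calculation with made-up numbers:

```python
p_x0 = 0.5  # P(X = 0), fixed by the experimenter and therefore known
f = 0.2     # hypothetical false-alarm rate, P(Y = 1 | X = 0)

# Joint probability of a noise trial AND a "yes" response on any given trial:
p_joint = f * p_x0  # 0.2 * 0.5 = 0.1
```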

From Equation \(\eqref{eq:1a}\), it follows that

\[\begin{equation} f=\textrm{P}\left(\Psi > k\mid{}X=0\right)\\ =\int_{k}^{\infty}\varphi\left(\psi\right) \mathrm{d}\psi=\Phi\left(-k\right)\tag{2a}\label{eq:2a} \end{equation}\]

where \(\Phi\) is the cumulative distribution function of the standard normal distribution. Thus, it is possible to obtain an observer’s \(f\) from their value of \(k\).
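Equation \(\eqref{eq:2a}\) is easy to verify numerically. This sketch compares \(\Phi\left(-k\right)\) with a Monte Carlo estimate on simulated noise trials (the value `k = 0.5` is an arbitrary choice):

```python
import numpy as np
from scipy.stats import norm

k = 0.5
f = norm.cdf(-k)  # false-alarm rate implied by the criterion value k

# Monte Carlo check: on noise trials, Psi is standard normal.
rng = np.random.default_rng(2)
psi = rng.normal(size=200_000)
f_mc = (psi > k).mean()  # close to norm.cdf(-k)
```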

Hit rate

From the decision rule and Equation \(\eqref{eq:1b}\), the conditional probability of a hit or hit rate, denoted by \(h\), is

\[\begin{equation} h=\textrm{P}\left(Y=1\mid{}X=1\right)\\ =\textrm{P}\left(\Psi> k\mid{}X=1\right)\\ =\int_{k-d}^{\infty}\varphi\left(\psi\right) \mathrm{d}\psi=\Phi\left(d-k\right)\tag{2b}\label{eq:2b}\end{equation}\]

Shaded area is the hit rate.
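The same numerical check works for Equation \(\eqref{eq:2b}\), again with assumed parameter values (`d = 1.5`, `k = 0.5`) chosen only for illustration:

```python
import numpy as np
from scipy.stats import norm

d, k = 1.5, 0.5
h = norm.cdf(d - k)  # hit rate from Equation (2b)

# Monte Carlo check: on signal trials, Psi ~ Normal(d, 1).
rng = np.random.default_rng(3)
psi = d + rng.normal(size=200_000)
h_mc = (psi > k).mean()  # close to norm.cdf(d - k)
```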

Sensitivity and criterion

By combining and rearranging Equations \(\eqref{eq:2a}\) and \(\eqref{eq:2b}\), we can find equations for \(k\) and \(d\) in terms of \(f\) and \(h\). From Equation \(\eqref{eq:2a}\),

\[\begin{equation} -k=\Phi^{-1}\left(f\right)\\ k=-\Phi^{-1}\left(f\right) \end{equation}\]

where \(\Phi^{-1}\) is the inverse of the cumulative distribution function of the standard normal distribution. From Equation \(\eqref{eq:2b}\),

\[\begin{equation} d-k=\Phi^{-1}\left(h\right)\\ d=\Phi^{-1}\left(h\right)+k\\ d=\Phi^{-1}\left(h\right)-\Phi^{-1}\left(f\right) \end{equation}\]

The EVG model is more usually parameterized in terms of sensitivity, denoted by \(d^\prime\), and criterion, denoted by \(c\). Sensitivity is defined as the standardized distance between the means of the noise and noise-plus-signal distributions. Here, “standardized” means that the standard deviations of the two distributions are pooled. Because both distributions have unit variance under the EVG model, the standardized difference is the same as the raw difference, so

\[\begin{equation}d^\prime=d\\ =\Phi^{-1}\left(h\right)-\Phi^{-1}\left(f\right)\tag{3a}\label{eq:3a}\end{equation}\]

Note that this is not the case for other SDT models, where the standard deviations of the two distributions are not necessarily the same.

Criterion is defined as the distance of \(k\) from the point where \(\Psi\) is equally likely under both distributions. Thus,

\[\begin{equation} c=k-\frac{d}{2}\\ =-\frac{1}{2}\left[\Phi^{-1}\left(h\right)+\Phi^{-1}\left(f\right)\right]\tag{3b}\label{eq:3b} \end{equation}\]
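Equations \(\eqref{eq:3a}\) and \(\eqref{eq:3b}\) translate directly into code. As a sanity check, this sketch recovers the parameters of a hypothetical observer with \(d = 1.5\) and \(k = 0.5\):

```python
from scipy.stats import norm

def sensitivity_criterion(f, h):
    """Equations (3a) and (3b): d' and c from the false-alarm and hit rates."""
    dprime = norm.ppf(h) - norm.ppf(f)
    c = -0.5 * (norm.ppf(h) + norm.ppf(f))
    return dprime, c

# An observer with d = 1.5 and k = 0.5 has f = Phi(-k) and h = Phi(d - k);
# the formulas should give back d' = 1.5 and c = k - d/2 = -0.25.
dprime, c = sensitivity_criterion(norm.cdf(-0.5), norm.cdf(1.5 - 0.5))
```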

Maximum likelihood estimation

Let \(N\) and \(S\) denote the respective counts of noise trials and noise-plus-signal trials completed by an observer in an experiment. Also let \(F\) denote the number of observed false alarms and \(H\) denote the number of hits. The maximum likelihood estimate (MLE; Myung, 2003) of the observer’s false-alarm rate, denoted by \(\hat{f}\), is

\[\begin{equation} \hat{f}=\frac{F}{N} \end{equation}\]

and the MLE of their hit rate, denoted by \(\hat{h}\), is

\[\begin{equation} \hat{h}=\frac{H}{S} \end{equation}\]

By swapping out \(f\) for \(\hat{f}\) and \(h\) for \(\hat{h}\) in the right-hand sides of Equations \(\eqref{eq:3a}\) and \(\eqref{eq:3b}\), we obtain MLEs for sensitivity and criterion, denoted by \(\hat{d^\prime}\) and \(\hat{c}\), respectively.
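For example, plugging in the counts from the example output shown later in this post (20 trials of each type, 3 false alarms, 18 hits):

```python
from scipy.stats import norm

N, S = 20, 20  # noise and noise-plus-signal trial counts
F, H = 3, 18   # observed false alarms and hits

f_hat, h_hat = F / N, H / S
dprime_hat = norm.ppf(h_hat) - norm.ppf(f_hat)      # about 2.32
c_hat = -0.5 * (norm.ppf(h_hat) + norm.ppf(f_hat))  # about -0.12
```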

Application to real data

I’ve written a Python script to run an experiment like the one described above. The experiment contains 40 trials and should only take a minute or two to complete. Please check that you have installed all the dependencies into your active Python environment, and adjust your volume settings as instructed. Here is the code:

"""Script to perform a simple at-home yes/no experiment and analyze the resulting data
using signal detection theory.

"""
import sys

import numpy as np
from scipy.stats import norm
import prettytable as pt
import sounddevice as sd


def trial(signal, n=None):
    """Performs a trial in the experiment.

    Args:
        signal (bool): Should the trial contain a tone?
        n (:obj:`int`, optional): Trial number. If omitted, a "practice" trial is
            performed which will allow the observer an opportunity to change the volume
            settings on their computer.

    Returns:
        rsp (bool): On practice trials, this indicates whether the real experiment
            should begin. On real trials, it indicates whether the observer responded
            "yes."

    """
    t = np.arange(0, 0.1, 1 / 44100)
    tone = 1e-5 * 10 ** (50 / 20) * np.sin(2 * np.pi * 1000 * t)
    noise = np.random.normal(size=len(t)) * tone.std() / np.sqrt(2)
    sd.play(noise + tone if signal else noise, 44100)
    responses = {"n": False, "y": True}
    if isinstance(n, int):
        instr = f"Trial {n}: Did you hear a tone? ([y] or [n]) "
    else:
        instr = "Adjust your volume settings until the noise is barely audible."
        instr += "\n([y] to adjust and hear again; [n] to continue) "
    while 1:
        try:
            return responses[input(instr).lower()]
        except KeyError:
            pass  # any other keypress: ask again


def experiment():
    """Performs a series of trials and tallies the four trial outcomes."""
    adj = True
    while adj:
        adj = trial(False)  # practice trials until the observer is ready
    X = [False, True] * 20
    np.random.shuffle(X)  # randomize the order of noise and signal trials
    Y = [trial(x, i) for i, x in enumerate(X, 1)]
    c = sum(1 for x, y in zip(X, Y) if not x and not y)  # correct rejections
    f = sum(1 for x, y in zip(X, Y) if not x and y)  # false alarms
    m = sum(1 for x, y in zip(X, Y) if x and not y)  # misses
    h = sum(1 for x, y in zip(X, Y) if x and y)  # hits
    return c, f, m, h


def sdt_yn(c, f, m, h):
    """Calculate SDT statistics (sensitivity and criterion) from outcome counts."""
    n = f + c
    s = m + h
    sens = norm.ppf(h / s) - norm.ppf(f / n)
    crit = -0.5 * (norm.ppf(h / s) + norm.ppf(f / n))
    return sens, crit


if __name__ == "__main__":
    print("""
████████╗██╗  ██╗███████╗     ██████╗██████╗  █████╗  ██████╗██╗  ██╗███████╗██████╗ 
╚══██╔══╝██║  ██║██╔════╝    ██╔════╝██╔══██╗██╔══██╗██╔════╝██║ ██╔╝██╔════╝██╔══██╗
   ██║   ███████║█████╗      ██║     ██████╔╝███████║██║     █████╔╝ █████╗  ██║  ██║
   ██║   ██╔══██║██╔══╝      ██║     ██╔══██╗██╔══██║██║     ██╔═██╗ ██╔══╝  ██║  ██║
   ██║   ██║  ██║███████╗    ╚██████╗██║  ██║██║  ██║╚██████╗██║  ██╗███████╗██████╔╝
   ╚═╝   ╚═╝  ╚═╝╚══════╝     ╚═════╝╚═╝  ╚═╝╚═╝  ╚═╝ ╚═════╝╚═╝  ╚═╝╚══════╝╚═════╝ 
██████╗  █████╗ ███████╗███████╗ ██████╗  ██████╗ ███╗   ██╗    ██╗██╗██╗██╗██╗██╗██╗
██╔══██╗██╔══██╗██╔════╝██╔════╝██╔═══██╗██╔═══██╗████╗  ██║    ██║██║██║██║██║██║██║
██████╔╝███████║███████╗███████╗██║   ██║██║   ██║██╔██╗ ██║    ██║██║██║██║██║██║██║
██╔══██╗██╔══██║╚════██║╚════██║██║   ██║██║   ██║██║╚██╗██║    ╚═╝╚═╝╚═╝╚═╝╚═╝╚═╝╚═╝
██████╔╝██║  ██║███████║███████║╚██████╔╝╚██████╔╝██║ ╚████║    ██╗██╗██╗██╗██╗██╗██╗
╚═════╝ ╚═╝  ╚═╝╚══════╝╚══════╝ ╚═════╝  ╚═════╝ ╚═╝  ╚═══╝    ╚═╝╚═╝╚═╝╚═╝╚═╝╚═╝╚═╝

Welcome! This script performs a simple experiment and analyzes the data using signal
detection theory (SDT).""")
    c, f, m, h = experiment()
    print("Experiment done!")
    table = pt.PrettyTable()
    table.field_names = ["", "x = 0", "x = 1"]
    table.add_row(["y = 0", c, m])
    table.add_row(["y = 1", f, h])
    print("Here is your contingency table:")
    print(table)
    if any(x == 0 for x in (c, f, m, h)):
        sys.exit("""\
Unfortunately, one or more of the cells has a value of 0. SDT statistics can't be
calculated without applying some form of correction. Exiting now.""")
    print("Calculating SDT statistics ...")
    sens, crit = sdt_yn(c, f, m, h)
    print("sensitivity (d') = %.2f" % sens)
    print("criterion (c) = %.2f" % crit)

Once the experiment is complete, you will see your results summarized in a so-called contingency table, along with estimates of sensitivity and criterion. When I ran this on myself, I got the following output:

Experiment done!
Here is your contingency table:
+-------+-------+-------+
|       | x = 0 | x = 1 |
+-------+-------+-------+
| y = 0 |   17  |   2   |
| y = 1 |   3   |   18  |
+-------+-------+-------+
Calculating SDT statistics ...
sensitivity (d') = 2.32
criterion (c) = -0.12

Obviously, if you run the experiment yourself, you’ll likely get different values. The script prints a warning message and goes no further if any zeros appear in the contingency table—I’ll write more about what happens under these circumstances in a later post. However, if all cells are non-zero, you’ll see something similar to what is shown above.

References

DeCarlo, L. T. (2010). On the statistical and theoretical basis of signal detection theory and extensions: Unequal variance, random coefficient, and mixture models. Journal of Mathematical Psychology, 54(3), 304–313. 10.1016/

Green, D. M., & Swets, J. A. (1988). Signal detection theory and psychophysics (reprint ed.). Peninsula Publishing.

Kolmogorov, A. N., Bharucha-Reid, A. T., & Morrison, N. (2018). Foundations of the theory of probability (Dover ed.). Dover Publications, Inc.

Macmillan, N. A., & Creelman, C. D. (2005). Detection theory: A user's guide (2nd ed.). Lawrence Erlbaum Associates.

McNicol, D. (2005). A primer of signal detection theory. Psychology Press.

Myung, I. J. (2003). Tutorial on maximum likelihood estimation. Journal of Mathematical Psychology, 47(1), 90–100. 10.1016/S0022-2496(02)00028-7

Wickens, T. D. (2002). Elementary signal detection theory. Oxford University Press.

Wixted, J. T. (2020). The forgotten history of signal detection theory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(2), 201–233. 10.1037/xlm0000732
