Creative Fatigue Detection in Digital Advertising

CreativeDynamics Library v0.9.8.1

Creative fatigue is the degradation of advertising effectiveness over time caused by repeated audience exposure. The library employs path signature analysis from rough path theory to detect creative performance degradation in digital advertising campaigns.

Mathematical formulation:

For performance metric time series Y = {y_1, …, y_T}, the library detects fatigue by:

  1. Computing path signatures S_t for sliding windows of size w
  2. Calculating signature distances d_t = \lVert S_t - S_{t-1} \rVert_2
  3. Identifying change points where d_t > μ_d + k·σ_d
  4. Classifying post-change-point trends to identify declining performance

Captures non-linear pattern changes preceding visible performance drops, enabling proactive campaign optimisation.

For consistency with the rest of this documentation, we can also write the thresholding rule as

d_t > \mu_d + k\,\sigma_d.

This section follows the notation in the accompanying paper (see arXiv-2509.09758v3/main.tex).

Let \{y_t\}_{t=1}^{T} be a performance metric time series observed at regular time steps. We embed the time series as a two-dimensional path X in \mathbb{R}^2 by

X_t := (t, y_t).

In practice we work with a normalised, piecewise-linear interpolation over a window [a,b]: time is scaled to [0,1] and metric values are min–max scaled to [0,1]. This ensures signatures are comparable across windows and metrics.
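The embedding and normalisation step can be sketched as follows (a minimal illustration using NumPy; `embed_window` is a hypothetical name, not part of the library's API):

```python
import numpy as np

def embed_window(y):
    """Embed a metric window as a normalised 2-D path (t, y_t).

    Time is scaled to [0, 1]; metric values are min-max scaled to [0, 1]
    so that signatures are comparable across windows and metrics.
    """
    y = np.asarray(y, dtype=float)
    t = np.linspace(0.0, 1.0, len(y))        # time scaled to [0, 1]
    y_range = y.max() - y.min()
    if y_range > 0:
        y_scaled = (y - y.min()) / y_range   # min-max scaling
    else:
        y_scaled = np.zeros_like(y)          # constant window -> flat path
    return np.column_stack([t, y_scaled])    # shape (w, 2)
```

A constant window maps to a flat path rather than dividing by zero, which keeps downstream signature distances well defined.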

For a path X of bounded variation on [a,b], its (tensor) signature is the sequence of iterated integrals

S(X)_{a,b} := \left(1,\; S^1(X)_{a,b},\; S^2(X)_{a,b},\; \dots\right),

where, for k \ge 1,

S^k(X)_{a,b} = \int_{a < t_1 < \cdots < t_k < b} dX_{t_1} \otimes \cdots \otimes dX_{t_k}.

We compute a truncated representation up to depth d. The implementation uses log-signatures (Lie increments) for numerical stability and efficiency, but conceptually the detector compares adjacent window signatures.
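For intuition, a depth-2 truncated signature of a piecewise-linear path can be computed directly from the iterated-integral definition via Chen's relation (an illustrative sketch only; the library itself truncates at a higher depth and works with log-signatures):

```python
import numpy as np

def signature_depth2(path):
    """Depth-2 signature of a piecewise-linear path in R^d.

    Level 1 is the total increment; level 2 collects the iterated
    integrals, accumulated segment by segment via Chen's relation.
    """
    increments = np.diff(path, axis=0)   # segment increments, shape (n-1, d)
    d = path.shape[1]
    level1 = np.zeros(d)
    level2 = np.zeros((d, d))
    for dx in increments:
        # Chen's relation for appending a linear segment:
        # S2_new = S2 + S1 (x) dx + 1/2 dx (x) dx
        level2 += np.outer(level1, dx) + 0.5 * np.outer(dx, dx)
        level1 += dx
    return np.concatenate([level1, level2.ravel()])
```

Subdividing a straight segment leaves the signature unchanged, a basic sanity check on the accumulation rule.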

Sliding windows and a signature-distance statistic


Fix a window size w. Let S_t denote the (truncated) signature for the window ending at time index t. We define the signature-distance statistic

d_t := \lVert S_t - S_{t-1} \rVert_2.

Large values of d_t indicate a change in the geometry of the recent trajectory (trend, volatility, oscillation), often before the raw metric exhibits a clear mean shift.

Let \mu_d and \sigma_d be the mean and standard deviation of \{d_t\} for a given item/metric. A change point is flagged when

d_t > \mu_d + k\,\sigma_d,

where k is a sensitivity multiplier.
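The sliding-window statistic and thresholding rule can be sketched as follows (illustrative only; `detect_change_points` is a hypothetical name, and any map from a window to a flat signature vector can be supplied as `sig_fn`):

```python
import numpy as np

def detect_change_points(y, sig_fn, w=7, k=1.5):
    """Flag indices t where d_t = ||S_t - S_{t-1}||_2 > mu_d + k * sigma_d.

    `sig_fn` maps a window of w observations to a flat signature vector;
    windows slide one step at a time.
    """
    y = np.asarray(y, dtype=float)
    sigs = [sig_fn(y[i:i + w]) for i in range(len(y) - w + 1)]
    d = np.array([np.linalg.norm(sigs[i] - sigs[i - 1])
                  for i in range(1, len(sigs))])
    threshold = d.mean() + k * d.std()
    # flagged indices are reported at the end of the later window of each pair
    return [i + w for i, dist in enumerate(d) if dist > threshold]
```

With a level shift in the input series, the flagged indices cluster around the windows that first straddle the shift, before and after which the statistic is flat.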

The dual-metric approach analyses both Click-Through Rate (CTR) and Cost Per Click (CPC) for finer-grained fatigue detection.

CTR (Click-Through Rate):

  • Measures audience engagement (clicks/impressions)
  • Reflects creative relevance and appeal
  • Early indicator of audience saturation

CPC (Cost Per Click):

  • Measures cost efficiency (spend/clicks)
  • Reflects competitive dynamics and quality score
  • Indicates platform algorithm adjustments

Why analyse both metrics:

  • Different response patterns: CTR and CPC exhibit distinct temporal dynamics under fatigue
  • Detailed detection: Pattern changes may appear in one metric before the other
  • Correlation context: Correlation analysis highlights when CTR and CPC move inversely; metrics are not combined
  • Strategic insights: Different metrics align with different campaign objectives (engagement vs. efficiency)

CTR: Measures audience engagement over time. Declining CTR suggests diminishing interest or relevance.

CPC: Indicates cost efficiency per engagement. Rising CPC signals decreasing efficiency from declining relevance or increasing competition.
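Both metrics follow directly from raw daily totals; a minimal sketch (the helper name and input layout are assumptions):

```python
def derive_metrics(impressions, clicks, spend):
    """Compute the two fatigue metrics from daily campaign totals.

    CTR = clicks / impressions (engagement); CPC = spend / clicks (efficiency).
    Division guards return None where a ratio is undefined.
    """
    ctr = [c / i if i else None for c, i in zip(clicks, impressions)]
    cpc = [s / c if c else None for s, c in zip(spend, clicks)]
    return ctr, cpc
```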

Four-phase detection process for each metric:

Phase 1: Change point detection

  • Sliding window size w (default=7) captures weekly patterns
  • Signature depth d=4 balances detail vs. computational cost
  • Threshold multiplier k=1.5 provides precision≈0.7, recall≈0.6

Phase 2: Trend classification

  • Stable: |slope| < threshold
  • Improving: slope > threshold (CTR↑ or CPC↓)
  • Declining: slope < -threshold (CTR↓ or CPC↑)
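Phase 2 can be sketched as an ordinary least-squares slope fit followed by thresholding (illustrative; the sign inversion for CPC reflects the convention above that a falling CPC is an improvement):

```python
import numpy as np

def classify_trend(values, slope_threshold=0.01, metric="ctr"):
    """Classify a segment as stable / improving / declining.

    The slope comes from an ordinary least-squares line fit. For CPC the
    sign is inverted, since a falling CPC counts as an improvement.
    """
    t = np.arange(len(values))
    slope = np.polyfit(t, np.asarray(values, dtype=float), 1)[0]
    if metric == "cpc":
        slope = -slope                  # CPC down == performance up
    if abs(slope) < slope_threshold:
        return "stable"
    return "improving" if slope > 0 else "declining"
```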

Phase 3: Benchmark establishment

  • Identifies longest stable/improving segment
  • Computes average performance as benchmark
  • Validates benchmark reliability (minimum 3 data points)
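Phase 3 can be sketched as a scan for the longest non-declining run (illustrative; `establish_benchmark` and the per-observation label representation are assumptions):

```python
def establish_benchmark(values, labels, min_points=3):
    """Average performance over the longest stable/improving run.

    `labels` holds one trend label per observation; the benchmark is the
    mean of the longest contiguous non-declining run, provided it has at
    least `min_points` observations (otherwise None).
    """
    best_start, best_len = 0, 0
    start, length = 0, 0
    for i, label in enumerate(labels):
        if label in ("stable", "improving"):
            if length == 0:
                start = i
            length += 1
            if length > best_len:
                best_start, best_len = start, length
        else:
            length = 0
    if best_len < min_points:
        return None                      # no reliable benchmark
    segment = values[best_start:best_start + best_len]
    return sum(segment) / len(segment)
```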

Phase 4: Impact quantification

  • Measures deviation from benchmark during declining periods
  • Quantifies operational impact (engagement_gap_clicks) and financial inefficiency (actual_overspend_gbp)
  • Provides correlation risk context; metrics are reported separately and not combined

Empirical validation across multiple advertising datasets:

  • Early detection: Identifies fatigue 3-5 days before traditional methods
  • False positive rate: ~30% (controlled by threshold parameter k)
  • Computational efficiency: O(T·d²) complexity enables real-time analysis
  • Robustness: Handles missing data and outliers through normalisation

Translates detected fatigue into separate operational and financial impact metrics:

Financial (actual_overspend_gbp):

\mathrm{actual\_overspend\_gbp} = \sum_{t \in T_{\mathrm{decline}}} \max\bigl(0, \mathrm{CPC}_t - \mathrm{CPC}_{\mathrm{bench}}\bigr)\,\mathrm{Clicks}_t.

Actual overspend due to increased cost-per-click during fatigue periods.

Operational (engagement_gap_clicks):

\mathrm{engagement\_gap\_clicks} = \sum_{t \in T_{\mathrm{decline}}} \max\bigl(0, \mathrm{CTR}_{\mathrm{bench}} - \mathrm{CTR}_t\bigr)\,\mathrm{Impressions}_t.

Lost clicks due to decreased engagement rates. A GBP reference value may be shown separately as:

\mathrm{engagement\_gap\_gbp\_reference} = \mathrm{engagement\_gap\_clicks}\,\mathrm{CPC}_{\mathrm{bench}}.
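The impact formulas above can be sketched together (illustrative; the function name and argument layout are assumptions, and the outputs are deliberately kept separate):

```python
def quantify_impact(decline_idx, ctr, cpc, impressions, clicks,
                    ctr_bench, cpc_bench):
    """Operational and financial impact over declining periods.

    Returns the two metrics separately, mirroring the convention that
    they are reported side by side and never combined.
    """
    overspend_gbp = sum(
        max(0.0, cpc[t] - cpc_bench) * clicks[t] for t in decline_idx)
    gap_clicks = sum(
        max(0.0, ctr_bench - ctr[t]) * impressions[t] for t in decline_idx)
    return {
        "actual_overspend_gbp": overspend_gbp,
        "engagement_gap_clicks": gap_clicks,
        # GBP reference value for the click gap, shown separately
        "engagement_gap_gbp_reference": gap_clicks * cpc_bench,
    }
```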

Correlation risk context (metrics not combined)


Correlation analysis provides context when interpreting CTR and CPC together:

Correlation coefficient:

\rho(\mathrm{CPC}, \mathrm{CTR}) = \frac{\mathrm{Cov}(\mathrm{CPC}, \mathrm{CTR})}{\sigma_{\mathrm{CPC}}\,\sigma_{\mathrm{CTR}}}.

Risk classification:

  • Low Risk (ρ > -0.2): Independent or weak correlation
  • Medium Risk (-0.5 < ρ ≤ -0.2): Moderate negative correlation
  • High Risk (ρ ≤ -0.5): Strong negative correlation

Operational and financial metrics are reported separately and are not added regardless of risk level.
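The correlation coefficient and risk bands can be sketched as follows (illustrative; `correlation_risk` is a hypothetical name):

```python
import numpy as np

def correlation_risk(cpc_series, ctr_series):
    """Pearson correlation between CPC and CTR, mapped to a risk band.

    The band is context only: operational and financial metrics stay
    separate regardless of the risk level.
    """
    rho = float(np.corrcoef(cpc_series, ctr_series)[0, 1])
    if rho > -0.2:
        level = "low"        # independent or weak correlation
    elif rho > -0.5:
        level = "medium"     # moderate negative correlation
    else:
        level = "high"       # strong negative correlation
    return rho, level
```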

For detailed methodology and visualisation interpretation, see the example application: quantifying impact.