Example application: quantifying impact using library outputs

Introduction: building on library outputs for domain-specific metrics

The library provides core functionalities for time-series analysis: identifying change points and classifying trends in data segments (see Methodology). These outputs serve as foundational elements for domain-specific calculations.

This document presents the framework for calculating two non-additive impact measures in performance marketing, building on temporal segmentation and trend analysis:

  • actual_overspend_gbp (financial): quantifies overspend due to elevated CPC vs benchmark
  • engagement_gap_clicks (operational): quantifies lost clicks vs benchmark

These metrics are reported separately and must not be added together. A GBP reference value for engagement_gap_clicks may be shown but is clearly labelled as a reference, not a financial loss.

Statistical fallacies of naive impact models

Superficially intuitive approaches to impact calculation often prove inadequate when confronted with non-stationary, complex real-world advertising performance data. Two naive models are common.

The first presupposes a single point in time, $t_{\mathrm{fatigue}}$, after which performance irreversibly degrades. Impact is then calculated relative to a pre-fatigue baseline.

We can express the naive model more cleanly as:

$$\sum_{t > t_{\mathrm{fatigue}}} \bigl(\mathrm{CPC}_t - \mathrm{CPC}_{\mathrm{baseline}}\bigr)\,\mathrm{Clicks}_t.$$

Critique:

Real-world performance rarely exhibits a single, monotonic decline. Time series often contain multiple change points, recovery periods, and varying rates of change. Assuming a single $t_{\mathrm{fatigue}}$ ignores this complexity and can lead to significant under- or over-estimation of the true impact.
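For concreteness, the naive fatigue-point model can be sketched in a few lines (the data, fatigue index, and function name here are purely illustrative, not part of the library):

```python
# Illustrative sketch of the naive fatigue-point model (hypothetical data and
# names; this is not the library's API).

def naive_fatigue_impact(cpc, clicks, t_fatigue, cpc_baseline):
    """Sum (CPC_t - CPC_baseline) * Clicks_t over all t after the fatigue point."""
    return sum(
        (c - cpc_baseline) * k
        for t, (c, k) in enumerate(zip(cpc, clicks))
        if t > t_fatigue
    )

cpc = [0.50, 0.52, 0.70, 0.80]   # hypothetical daily CPC
clicks = [100, 100, 100, 100]    # hypothetical daily clicks
print(naive_fatigue_impact(cpc, clicks, t_fatigue=1, cpc_baseline=0.50))
```

Note how the result hinges entirely on the chosen $t_{\mathrm{fatigue}}$ and baseline; shifting either changes the estimate, which is precisely the fragility criticised above.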

The second compares observed CPC against an external or arbitrarily chosen benchmark.

Critique:

This approach lacks context-specificity: it fails to account for the unique performance potential and lifecycle dynamics of each individual creative asset.

Implemented framework: benchmarking against sustained optimal performance

The library implements an impact calculation framework that quantifies:

  • actual_overspend_gbp (financial inefficiency) during periods where CPC exceeds the benchmark
  • engagement_gap_clicks (operational impact) during periods where CTR falls below the benchmark

These are benchmarked against the creative’s own optimal performance period and are reported separately (not added together).

Step 1: define the benchmark CPC (CPC_benchmark)

Let $\mathcal{P} = \{S_0, S_1, \dots, S_n\}$ be the set of temporal segments identified by change point analysis. Define the “good” segments

$$\mathcal{P}_{\mathrm{opt}} := \{\, S_i \in \mathcal{P} \mid \mathrm{Trend}(S_i) \in \{\mathrm{Stable}, \mathrm{Improving}\} \,\}.$$

Let $S^*$ be the longest segment in $\mathcal{P}_{\mathrm{opt}}$. Define the benchmark CPC as the average CPC on that segment:

$$\mathrm{CPC}_{\mathrm{bench}} := \frac{1}{|S^*|} \sum_{t \in S^*} \mathrm{CPC}_t.$$
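Step 1 can be sketched as follows, assuming a hypothetical segment representation of (start, end, trend) tuples with an exclusive end index; this is an illustration, not the library's actual API:

```python
# Sketch of Step 1 under an assumed segment representation: each segment is a
# (start, end, trend) tuple with an exclusive end index. Illustrative names,
# not the library's actual API.

def benchmark_cpc(segments, cpc):
    """Average CPC over the longest Stable/Improving segment."""
    good = [s for s in segments if s[2] in ("Stable", "Improving")]
    if not good:
        raise ValueError("no Stable/Improving segment to benchmark against")
    start, end, _ = max(good, key=lambda s: s[1] - s[0])  # longest segment S*
    window = cpc[start:end]
    return sum(window) / len(window)

segments = [(0, 5, "Stable"), (5, 9, "Declining")]
cpc = [0.40, 0.42, 0.41, 0.39, 0.43, 0.55, 0.60, 0.58, 0.62]
print(round(benchmark_cpc(segments, cpc), 2))  # benchmark from the first five days
```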

Step 2: identify declining periods (T_decline)

Define $T_{\mathrm{decline}}$ as the set of all time points within segments classified as $\mathrm{Declining}$.

Step 3: calculate point-in-time actual overspend (actual_overspend_gbp_t)

For each $t \in T_{\mathrm{decline}}$, define

$$\mathrm{actual\_overspend\_gbp}_t := \max\bigl(0,\; \mathrm{CPC}_t - \mathrm{CPC}_{\mathrm{bench}}\bigr)\,\mathrm{Clicks}_t.$$

Step 4: calculate total actual overspend (actual_overspend_gbp)

$$\mathrm{actual\_overspend\_gbp} := \sum_{t \in T_{\mathrm{decline}}} \mathrm{actual\_overspend\_gbp}_t.$$
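Steps 2–4 can be sketched together, reusing the same hypothetical (start, end, trend) segment representation (again an illustration, not the library's API):

```python
# Sketch of Steps 2-4: gather time points in Declining segments, then sum the
# clipped per-point overspend. Illustrative only.

def actual_overspend_gbp(segments, cpc, clicks, cpc_bench):
    t_decline = [
        t
        for start, end, trend in segments
        if trend == "Declining"
        for t in range(start, end)
    ]
    return sum(max(0.0, cpc[t] - cpc_bench) * clicks[t] for t in t_decline)

segments = [(0, 5, "Stable"), (5, 9, "Declining")]
cpc = [0.40, 0.42, 0.41, 0.39, 0.43, 0.55, 0.60, 0.58, 0.62]
clicks = [120, 110, 115, 118, 117, 90, 80, 85, 75]
print(round(actual_overspend_gbp(segments, cpc, clicks, cpc_bench=0.41), 2))
```

The `max(0, ·)` clipping means days within a declining segment where CPC happens to dip below the benchmark contribute nothing, matching the point-in-time formula.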

Operational impact: engagement gap (clicks)

Step 1: define the benchmark CTR (CTR_benchmark)

Using the same segmentation approach as for CPC, define

$$\mathrm{CTR}_{\mathrm{bench}} := \frac{1}{|S^*|} \sum_{t \in S^*} \mathrm{CTR}_t.$$

Step 2: identify declining periods (T_decline)

Define $T_{\mathrm{decline}}$ as the set of all time points within segments classified as $\mathrm{Declining}$.

Step 3: calculate point-in-time engagement gap (engagement_gap_clicks_t)

For each $t \in T_{\mathrm{decline}}$, define

$$\mathrm{engagement\_gap\_clicks}_t := \max\bigl(0,\; \mathrm{CTR}_{\mathrm{bench}} - \mathrm{CTR}_t\bigr)\,\mathrm{Impressions}_t.$$

Step 4: calculate total engagement gap (engagement_gap_clicks)

$$\mathrm{engagement\_gap\_clicks} := \sum_{t \in T_{\mathrm{decline}}} \mathrm{engagement\_gap\_clicks}_t.$$

Optionally, a GBP reference value can be derived as:

$$\mathrm{engagement\_gap\_gbp\_reference} := \mathrm{engagement\_gap\_clicks} \times \mathrm{CPC}_{\mathrm{bench}}.$$
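The four engagement-gap steps can be sketched in one function, under the same hypothetical (start, end, trend) segment representation used for CPC (illustrative names, not the library's API):

```python
# Sketch of the engagement-gap steps: benchmark CTR over the longest
# Stable/Improving segment, clipped per-point gap over Declining segments, and
# the optional GBP reference. Illustrative representation, not the library API.

def engagement_gap(segments, ctr, impressions, cpc_bench):
    start, end, _ = max(
        (s for s in segments if s[2] in ("Stable", "Improving")),
        key=lambda s: s[1] - s[0],
    )
    ctr_bench = sum(ctr[start:end]) / (end - start)
    gap_clicks = sum(
        max(0.0, ctr_bench - ctr[t]) * impressions[t]
        for s, e, trend in segments
        if trend == "Declining"
        for t in range(s, e)
    )
    return gap_clicks, gap_clicks * cpc_bench  # (clicks, GBP reference)

segments = [(0, 4, "Stable"), (4, 6, "Declining")]
ctr = [0.030, 0.028, 0.032, 0.030, 0.020, 0.015]
impressions = [10_000] * 6
gap, gbp_ref = engagement_gap(segments, ctr, impressions, cpc_bench=0.41)
print(round(gap, 1), round(gbp_ref, 2))
```

The second return value is only a labelled reference figure; per the framework it must not be added to actual_overspend_gbp.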

Prior to impact calculations, the library performs critical data validation and normalisation steps:

The library automatically detects and normalises CTR values stored as percentages rather than fractions. For example, if a CTR of 0.37% is stored as 0.37 (percentage form) rather than 0.0037 (fractional form), the system normalises it by dividing by 100. This prevents impossibly high CTR benchmarks and ensures accurate impact calculations.
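One plausible way to implement such a check is sketched below; the ceiling value and the rule itself are assumptions for illustration, not necessarily the library's exact logic:

```python
# A plausible normalisation heuristic (an assumption; the library's exact rule
# may differ): if any CTR value exceeds a plausibility ceiling for fractions,
# treat the series as percentages and divide by 100.

def normalise_ctr(ctr, fraction_ceiling=0.15):
    """Return CTR values as fractions, dividing by 100 if they look like percentages."""
    if ctr and max(ctr) > fraction_ceiling:  # e.g. 0.37 meaning 0.37%
        return [v / 100.0 for v in ctr]
    return list(ctr)

print(normalise_ctr([0.37, 0.41, 0.35]))  # percentage-like values -> fractions
print(normalise_ctr([0.0037, 0.0041]))    # already fractions -> unchanged
```

Any such rule is inherently ambiguous: a campaign genuinely achieving a fractional CTR above the ceiling would be misclassified, so the ceiling is a judgement call.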

The library generates sophisticated visualisations to help interpret impact analysis results. This section explains how to read and interpret these plots.

The library generates enhanced two-panel plots for each metric (CPC or CTR) with impact analysis overlays.

CTR Analysis with Impact Visualisation

Top panel: time series with impact context

The top panel displays the primary metric over time with key analytical overlays:

Time series line (blue)

  • Shows the actual CTR values over the analysis period
  • Connected points indicate daily performance measurements
  • Fluctuations reveal natural variation and trend changes

Benchmark line (green dashed)

  • Horizontal line representing the calculated benchmark value (2.60% in this example)
  • Derived from the period of optimal stable performance
  • Serves as the reference point for impact calculations

Benchmark period highlighting (light blue shaded area)

  • Indicates time period used to establish benchmark
  • Typically represents longest stable or improving performance segment
  • In this example, covers first 7 days when CTR was consistently high

Segment trend labels

  • Text annotations showing identified trend classifications
  • “STABLE” indicates periods of consistent performance
  • “DECLINING” marks periods where performance deteriorates
  • Based on signature-based change point detection

Underperformance areas (red shaded regions)

  • Highlight periods where actual performance falls below benchmark
  • Only shown for declining periods after benchmark period ends
  • Area size reflects magnitude of impact (e.g., engagement gap)
  • Vertical red lines mark boundaries of impact periods

Change points (red vertical lines)

  • Mark significant pattern changes detected by signature analysis
  • Indicate transitions between different performance phases
  • Based on mathematical signature distance calculations

Bottom panel: signature distance analysis

Shows the mathematical analysis underlying change point detection:

Signature distance line (green with dots)

  • Each point represents mathematical “distance” between consecutive time windows
  • Higher values indicate greater changes in underlying pattern
  • Based on rough path theory and signature calculations

Detection threshold (red horizontal line)

  • Threshold value (1.708 in this example) for anomaly detection
  • When signature distances exceed this threshold, significant changes are flagged
  • Automatically calculated based on data characteristics

Anomaly markers (red X)

  • Mark time points where signature distances exceeded threshold
  • Correspond to change points shown in top panel
  • Indicate statistically significant pattern shifts
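The bottom-panel computation can be sketched as follows, under stated assumptions: level-2 truncated signatures of the time-augmented path (computed via Chen's identity), a zero basepoint so that level shifts are visible, and Euclidean distance between the signatures of the windows on either side of each point. The library's actual signature depth, windowing, and threshold rule (such as the 1.708 cut-off in the example plot) may differ.

```python
import numpy as np

# Windowed signature-distance sketch (assumptions as stated above; not the
# library's exact implementation).

def level2_signature(path):
    """Level-1 and level-2 signature terms of a piecewise-linear path (Chen's identity)."""
    d = path.shape[1]
    s1, s2 = np.zeros(d), np.zeros((d, d))
    for delta in np.diff(path, axis=0):
        s2 += np.outer(s1, delta) + 0.5 * np.outer(delta, delta)
        s1 += delta
    return np.concatenate([s1, s2.ravel()])

def signature_distances(values, window=5):
    """Distance between signatures of the windows on either side of each point."""
    t = np.linspace(0.0, 1.0, window)
    base = np.array([[0.0, 0.0]])  # zero basepoint: makes level shifts detectable
    dists = []
    for i in range(window, len(values) - window + 1):
        left = np.vstack([base, np.column_stack([t, values[i - window:i]])])
        right = np.vstack([base, np.column_stack([t, values[i:i + window]])])
        dists.append(np.linalg.norm(level2_signature(left) - level2_signature(right)))
    return np.array(dists)

series = np.r_[np.full(10, 0.030), np.full(10, 0.015)]  # step change at t = 10
d = signature_distances(series)
print(d.argmax() + 5)  # time index where the pattern shift is largest
```

Signatures are translation invariant, hence the explicit basepoint; without it, two flat windows at different levels would be indistinguishable. A data-driven anomaly threshold would then flag points whose distance exceeds it.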

For detailed analysis, the library generates a four-panel dashboard summarising operational (clicks) and financial (GBP) impact separately.

Impact Analysis Example

Panel 1: CPC analysis

Mirrors the individual CPC analysis with:

  • CPC time series with benchmark line and impact highlighting
  • Segment trend classifications
  • Change point markers
  • Underperformance area visualisation

Panel 2: CTR analysis

Mirrors the individual CTR analysis with:

  • CTR time series with benchmark line and impact highlighting
  • Segment trend classifications
  • Change point markers
  • Underperformance area visualisation

Panel 3: primary and reference indicators

  • engagement_gap_clicks (primary): Lost clicks relative to benchmark (operational)
  • actual_overspend_gbp (secondary): Actual overspend relative to benchmark (financial)
  • Do not add or combine these metrics; they are conceptually distinct

Panel 4: summary and risk assessment (bottom right)

Detailed text summary including:

Impact metrics

  • engagement_gap_clicks (primary)
  • actual_overspend_gbp (secondary, with any engagement-gap GBP figure clearly labelled as a reference value, not a financial loss)
  • Lost clicks due to CTR underperformance (as engagement gap)

Benchmark information

  • CPC and CTR benchmark values with calculation periods
  • Provides context for impact calculations

Calculation status

  • Indicates whether impact calculations were successful
  • “Success” confirms reliable benchmark establishment
  • “Insufficient Data” or “No Declining Periods” indicate limitations

Correlation risk indicator

  • Colour-coded severity levels:
    • Low Risk (Green): Weak or positive correlation (r > -0.2)
    • Medium Risk (Orange): Moderate negative correlation (-0.5 ≤ r ≤ -0.2)
    • High Risk (Red): Strong negative correlation (r < -0.5)
    • Unknown Risk (Grey): Insufficient data for correlation analysis
  • Metrics are not combined; interpret engagement_gap_clicks and actual_overspend_gbp separately regardless of risk level

Significant impact indicators:

  • Large red shaded areas in time series plots
  • Substantial gaps between actual performance and benchmark lines
  • Large engagement_gap_clicks and/or notable actual_overspend_gbp magnitudes

Minimal impact indicators:

  • Small or absent red shaded areas
  • Performance lines close to benchmark levels
  • Low engagement_gap_clicks and minimal actual_overspend_gbp in the summary panel

High risk scenarios:

  • Strong negative correlation between CPC and CTR (r < -0.5)
  • When CTR declines, CPC increases proportionally
  • Do not compute any combined total; interpret engagement_gap_clicks and actual_overspend_gbp separately

Low risk scenarios:

  • Weak or positive correlation between CPC and CTR (r > -0.2)
  • Independent variation in both metrics
  • Use metrics independently; do not add

Performance optimisation:

  • Focus intervention efforts on periods with large underperformance areas
  • Address the metric contributing most to operational or financial impact
  • Consider creative refresh at detected change points

Budget planning:

  • Use actual_overspend_gbp for financial inefficiency assessment
  • Use engagement_gap_clicks for operational impact assessment
  • Do not compute or use any combined total

Campaign management:

  • Monitor signature distances for early warning of performance changes
  • Set alerts when distances approach detection thresholds
  • Use benchmark periods as performance targets for optimisation

This framework provides a data-driven measure of operational and financial impact using temporal segmentation from signature analysis. The detailed visualisation suite enables both technical analysis and business decision-making by presenting complex mathematical insights in a clear, accessible format without conflating distinct concepts.