
Onboarding attribution: which tour actually drove the conversion?
Your onboarding flow has five tours. A user completes three of them, skips two, then upgrades to paid on day nine. Which tour gets credit?
If you said "the last one they saw," you're using last-touch attribution, and you're probably wrong about what's working. If you said "all of them equally," you're using linear attribution, and you're definitely wrong. The truth depends on your product, your trial length, and how many onboarding touchpoints you actually track.
As of April 2026, 95% of SaaS companies misattribute revenue by relying on single-touch models (House of Martech). And that stat is about marketing channels. Inside the product, where tours, checklists, and tooltips compete for credit, attribution is practically nonexistent.
This guide breaks down how to attribute conversions to individual product tours (not marketing channels) using event-driven analytics and Tour Kit's plugin architecture.
Why tour-level attribution matters
Product teams spend weeks building onboarding flows. Five tours, a checklist, maybe some contextual tooltips. Then they measure success with a single number: "onboarding completion rate." That tells you nothing about which pieces work.
We measured tour-by-tour conversion impact across several onboarding flows and found something consistent: removing a single underperforming tour often had zero effect on conversion, while removing the one high-impact tour dropped trial-to-paid rates by 30-50%. Without tour-level attribution, you can't tell which is which.
Personalized onboarding paths increase completion rates by 35% (Gleap). But personalization without attribution is guesswork. You're rearranging tours based on hunches instead of data. Tour-level attribution gives you the signal to personalize with confidence: show users the tours that actually correlate with conversion, skip the ones that don't.
npm install @tourkit/core @tourkit/react @tourkit/analytics
What is tour-level attribution?
Tour-level attribution is an analytics pattern that assigns conversion credit to individual in-app experiences (product tours, onboarding checklists, feature hints) rather than to marketing channels like ads or email campaigns. It answers a specific question: of the three tours this user completed before upgrading, which one actually moved them toward activation? Standard marketing attribution tools (Mixpanel, Amplitude, PostHog) track channel-level touchpoints by default, but none of them natively model tour-level credit assignment.
The distinction matters. Marketing attribution tells you whether a blog post or a Google ad brought the user in. Tour-level attribution tells you whether Tour A (the welcome walkthrough) or Tour B (the "create your first project" guide) correlated with paid conversion. One improves acquisition spend. The other improves the product experience itself.
B2B SaaS deals average approximately 266 touchpoints before closing (House of Martech). Most teams track fewer than ten of those. Inside the product, the number of tracked in-app guidance touchpoints is typically zero.
Why single-touch attribution fails for onboarding
Most product teams that track tour performance at all use one of two approaches: last-touch (the tour before conversion gets 100% credit) or first-touch (the welcome tour gets 100% credit). Both are wrong for onboarding flows, and the math shows why.
Consider a five-tour onboarding sequence where 60% of users who complete all five tours convert to paid. With last-touch, the fifth tour gets full credit, even if Tour 2 (the "aha moment" tour) is where users actually decided to stay. Remove Tour 2, and conversion drops to 15%. But your attribution model would never tell you that.
Companies switching from single-touch to multi-touch attribution see a 20-30% improvement in marketing ROI (House of Martech). The same principle applies inside the product. When you know which tours contribute to conversion, not just which one happened last, you stop investing in the wrong steps.
As of 2026, 80% of marketers report dissatisfaction with reconciling results across different attribution tools (MMA 2024, via Usermaven). Tour-level attribution doesn't exist in those tools at all. You have to build it.
The six attribution models, applied to product tours
Attribution models aren't new. What's new is applying them inside the product rather than across marketing channels. Here's how each model maps to an onboarding flow with three tours: a welcome walkthrough (Tour A), a feature discovery guide (Tour B), and a "create your first X" tutorial (Tour C).
| Model | Credit split | Tour A (welcome) | Tour B (feature) | Tour C (create) | Best for |
|---|---|---|---|---|---|
| First-touch | 100% to first | 100% | 0% | 0% | Understanding what starts activation |
| Last-touch | 100% to last | 0% | 0% | 100% | Understanding what closes activation |
| Linear | Equal across all | 33% | 33% | 33% | Balanced view when no clear winner |
| Time-decay | More to recent | 15% | 30% | 55% | Short trial cycles (7-day free trial) |
| U-shaped | 40 / 20 / 40 | 40% | 20% | 40% | B2B SaaS with long activation arcs |
| Data-driven | ML-weighted | Varies | Varies | Varies | Teams with 1,000+ conversions/month |
The U-shaped model (also called position-based) deserves special attention for onboarding. It gives 40% credit to the first touchpoint, 40% to the last, and distributes the remaining 20% across everything in between (Usermaven). For onboarding flows, this maps to: "Which tour got the user started?" and "Which tour sealed the deal?" — the two questions product teams actually care about.
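As a quick sanity check on the math, here's a minimal standalone sketch (not a Tour Kit API) that computes the U-shaped credit vector for a sequence of n completed tours:

```typescript
// U-shaped (position-based) credit: 40% to the first and last completed
// tours, with the remaining 20% split evenly across the middle ones.
// With one or two tours the split degenerates to 100% or 50/50.
function uShapedCredits(n: number): number[] {
  if (n === 1) return [1];
  if (n === 2) return [0.5, 0.5];
  const middle = 0.2 / (n - 2);
  return Array.from({ length: n }, (_, i) =>
    i === 0 || i === n - 1 ? 0.4 : middle
  );
}

uShapedCredits(5);
// → first and last tours each get 0.4; the three middle tours share 0.2
```

For a five-tour flow, that works out to 40% / ~6.7% / ~6.7% / ~6.7% / 40%, always summing to 1.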
How to instrument tour-level attribution with Tour Kit
Tour Kit's @tourkit/analytics package emits structured events that any analytics platform can consume. The key is emitting the right events with enough metadata to run attribution after the fact.
Step 1: Define your conversion event
Before you can attribute anything, you need a clear conversion event. This isn't "completed onboarding." It's the business event: trial_converted, feature_adopted, plan_upgraded.
// src/analytics/events.ts
export const CONVERSION_EVENTS = {
TRIAL_CONVERTED: 'trial_converted',
FEATURE_ADOPTED: 'feature_adopted',
PLAN_UPGRADED: 'plan_upgraded',
} as const;
export type ConversionEvent = typeof CONVERSION_EVENTS[keyof typeof CONVERSION_EVENTS];
Step 2: Emit tour events with attribution metadata
Every tour interaction needs a tourId, userId, sessionId, and timestamp. Tour Kit's analytics plugin handles the first three. You add the conversion window.
// src/analytics/tour-attribution-plugin.ts
import type { AnalyticsPlugin } from '@tourkit/analytics';
interface TourEvent {
tourId: string;
userId: string;
sessionId: string;
timestamp: number;
stepIndex: number;
totalSteps: number;
}
export function createAttributionPlugin(
track: (event: string, properties: TourEvent) => void
): AnalyticsPlugin {
return {
name: 'tour-attribution',
onTourStart(context) {
track('tour_started', {
tourId: context.tourId,
userId: context.userId,
sessionId: context.sessionId,
timestamp: Date.now(),
stepIndex: 0,
totalSteps: context.steps.length,
});
},
onTourComplete(context) {
track('tour_completed', {
tourId: context.tourId,
userId: context.userId,
sessionId: context.sessionId,
timestamp: Date.now(),
stepIndex: context.steps.length - 1,
totalSteps: context.steps.length,
});
},
onStepChange(context) {
track('tour_step_viewed', {
tourId: context.tourId,
userId: context.userId,
sessionId: context.sessionId,
timestamp: Date.now(),
stepIndex: context.currentStepIndex,
totalSteps: context.steps.length,
});
},
};
}
Step 3: Build the attribution calculator
This runs server-side or in a background job. Pull tour events and conversion events, then apply your chosen model.
// src/analytics/attribution.ts
interface TouchPoint {
tourId: string;
timestamp: number;
type: 'tour_started' | 'tour_completed';
}
interface AttributionResult {
tourId: string;
credit: number; // 0-1
model: string;
}
export function attributeConversion(
touchpoints: TouchPoint[],
model: 'first-touch' | 'last-touch' | 'linear' | 'u-shaped'
): AttributionResult[] {
const completed = touchpoints
.filter((tp) => tp.type === 'tour_completed')
.sort((a, b) => a.timestamp - b.timestamp);
if (completed.length === 0) return [];
switch (model) {
case 'first-touch':
return completed.map((tp, i) => ({
tourId: tp.tourId,
credit: i === 0 ? 1 : 0,
model,
}));
case 'last-touch':
return completed.map((tp, i) => ({
tourId: tp.tourId,
credit: i === completed.length - 1 ? 1 : 0,
model,
}));
case 'linear':
return completed.map((tp) => ({
tourId: tp.tourId,
credit: 1 / completed.length,
model,
}));
case 'u-shaped': {
// With one or two completed tours the 40/20/40 split breaks down
// (credit would sum to 0.4 or 0.8), so fall back to full or even credit.
if (completed.length === 1) {
return [{ tourId: completed[0].tourId, credit: 1, model }];
}
if (completed.length === 2) {
return completed.map((tp) => ({ tourId: tp.tourId, credit: 0.5, model }));
}
const middleCredit = 0.2 / (completed.length - 2);
return completed.map((tp, i) => ({
tourId: tp.tourId,
credit:
i === 0 ? 0.4
: i === completed.length - 1 ? 0.4
: middleCredit,
model,
}));
}
}
}
Step 4: Connect to your analytics platform
PostHog, Mixpanel, and Amplitude all support custom events. The attribution calculation happens on your side; the analytics tool just stores the raw events and displays the results.
// src/analytics/posthog-integration.ts
import posthog from 'posthog-js';
import { createAttributionPlugin } from './tour-attribution-plugin';
const attributionPlugin = createAttributionPlugin(
(event, properties) => {
posthog.capture(event, {
...properties,
// Person properties: record the most recent tour on the user profile.
// Incrementing a tours_completed_count is best done server-side, since
// posthog-js has no atomic increment for person properties.
$set: {
last_tour_completed: properties.tourId,
},
});
}
);
PostHog has a dedicated tutorial on first/last-touch attribution setup (PostHog), but it covers marketing channels, not in-app tours. The approach above fills that gap.
The holdout group: proving tours matter at all
Attribution tells you which tour gets credit. But it doesn't tell you whether tours matter in the first place. For that, you need a holdout group.
Set aside 10-20% of new users who never see any onboarding tours. Compare their trial-to-paid conversion rate against the group that receives tours. If the holdout group converts at nearly the same rate, your tours aren't driving conversion. They're just present during it. That's a crucial distinction.
// src/experiments/holdout.ts
function shouldShowTours(userId: string): boolean {
// Deterministic assignment based on userId hash
// Ensures same user always gets same experience
const hash = simpleHash(userId);
const bucket = hash % 100;
// 15% holdout: no tours
return bucket >= 15;
}
function simpleHash(str: string): number {
let hash = 0;
for (let i = 0; i < str.length; i++) {
const char = str.charCodeAt(i);
hash = ((hash << 5) - hash) + char;
hash |= 0; // Convert to 32-bit integer
}
return Math.abs(hash);
}
Run this for a full trial cycle (14 days minimum for most SaaS products) before drawing conclusions. Short experiments produce misleading results because users who would have converted anyway do so regardless of tours.
The holdout group should show a measurable conversion gap. If it doesn't, the problem isn't attribution. It's the tours themselves.
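To make the comparison concrete, here's a minimal sketch with hypothetical numbers (the group sizes, conversion counts, and function name are illustrative, not Tour Kit APIs):

```typescript
// Compare trial-to-paid conversion between the holdout (no tours)
// and the treatment (tours shown) groups.
interface GroupStats {
  users: number;
  conversions: number;
}

function conversionLift(holdout: GroupStats, treatment: GroupStats) {
  const holdoutRate = holdout.conversions / holdout.users;
  const treatmentRate = treatment.conversions / treatment.users;
  return {
    holdoutRate,
    treatmentRate,
    // Relative lift: how much better the tour group converts.
    relativeLift: (treatmentRate - holdoutRate) / holdoutRate,
  };
}

const result = conversionLift(
  { users: 300, conversions: 24 },   // hypothetical: 8% convert without tours
  { users: 1700, conversions: 204 }  // hypothetical: 12% convert with tours
);
// relativeLift ≈ 0.5 → tours correlate with a ~50% relative improvement
```

If relativeLift hovers near zero after a full trial cycle, the tours are present during conversion, not driving it.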
Common attribution mistakes in onboarding
Confusing tour completion with conversion
Tour completion rate is a vanity metric. A 90% completion rate means nothing if those users don't activate. Track the event that follows the tour, the "aha moment," not the tour endpoint itself.
We tracked tour analytics across multiple onboarding flows and found that tour completion correlates with activation only when the final tour step includes a user action (creating something, inviting a teammate, connecting an integration). Passive tours where the user just clicks "Next" five times show near-zero correlation between completion and conversion.
Using the same model for every tour
Your welcome walkthrough and your advanced feature tour serve different purposes. The welcome tour is a first-touch candidate: it starts the activation arc. The feature tour is a last-touch candidate: it drives the specific "aha moment" before conversion. Don't apply the same attribution model to both.
Ignoring the attribution window
If a user completes a tour on day one and converts on day 30, should that tour get credit? Probably not. Something else happened in between. Set an attribution window (7 days for short trials, 14-21 days for longer ones) and only count touchpoints within that window.
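A sketch of that filter, reusing the TouchPoint shape from Step 3 (the helper name and its placement are mine, not Tour Kit's):

```typescript
// Drop touchpoints outside the attribution window, so a tour completed
// on day 1 doesn't get credit for a day-30 conversion.
interface TouchPoint {
  tourId: string;
  timestamp: number;
  type: 'tour_started' | 'tour_completed';
}

const DAY_MS = 24 * 60 * 60 * 1000;

function withinWindow(
  touchpoints: TouchPoint[],
  conversionTimestamp: number,
  windowDays: number
): TouchPoint[] {
  const cutoff = conversionTimestamp - windowDays * DAY_MS;
  return touchpoints.filter(
    (tp) => tp.timestamp >= cutoff && tp.timestamp <= conversionTimestamp
  );
}
```

Run this before the attribution calculation so out-of-window tours never enter the credit split.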
Not validating with holdout groups
As the team at Flagsmith noted: "We have clear attribution and have been able to track results. Demos are having a tangible impact for us" (Howdygo). But "clear attribution" without a holdout group is just correlation. The holdout proves causation.
Which attribution model should you start with?
If you have fewer than 500 conversions per month, start with U-shaped attribution. It captures the two most important signals (what started the user's journey and what sealed the deal) without requiring the data volume that algorithmic models need.
Data-driven (ML-based) attribution sounds appealing but requires at least 1,000 conversions per month to produce statistically valid results. Most early-stage SaaS products don't hit that threshold. Start simple, accumulate data, then graduate to algorithmic models when the volume justifies it.
Time-decay works well for products with short trial periods (7 days or less). If users convert fast, the most recent touchpoints carry the most weight, and time-decay captures that naturally.
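The Step 3 calculator doesn't implement time-decay, but a common formulation uses an exponential half-life. A sketch of the idea (the halfLifeDays knob is my assumption, not a Tour Kit setting):

```typescript
// Exponential time-decay: a touchpoint's raw weight halves for every
// halfLifeDays between it and the conversion, then weights are
// normalized so credit sums to 1.
function timeDecayCredits(
  completionTimestamps: number[], // ms since epoch, one per completed tour
  conversionTimestamp: number,
  halfLifeDays = 3
): number[] {
  const halfLifeMs = halfLifeDays * 24 * 60 * 60 * 1000;
  const raw = completionTimestamps.map((ts) =>
    Math.pow(0.5, (conversionTimestamp - ts) / halfLifeMs)
  );
  const total = raw.reduce((a, b) => a + b, 0);
  return raw.map((w) => w / total);
}
```

With a 3-day half-life, a tour completed three days before conversion carries half the raw weight of one completed at conversion, so a two-tour journey splits roughly 1/3 versus 2/3.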
For teams just starting out: implement first-touch and last-touch side by side. Compare them. When they disagree about which tour is most valuable, that's where multi-touch models add clarity.
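A standalone sketch of that side-by-side comparison (in the full setup you'd call attributeConversion from Step 3 with 'first-touch' and 'last-touch'; the helper and tour IDs here are illustrative):

```typescript
// Which tour does each single-touch model credit for the same journey?
interface Touch {
  tourId: string;
  timestamp: number;
}

function singleTouchWinner(touches: Touch[], model: 'first' | 'last'): string {
  const sorted = [...touches].sort((a, b) => a.timestamp - b.timestamp);
  const pick = model === 'first' ? sorted[0] : sorted[sorted.length - 1];
  return pick.tourId;
}

const journey: Touch[] = [
  { tourId: 'welcome', timestamp: 1 },
  { tourId: 'feature-discovery', timestamp: 2 },
  { tourId: 'create-project', timestamp: 3 },
];

singleTouchWinner(journey, 'first'); // 'welcome'
singleTouchWinner(journey, 'last');  // 'create-project'
```

When the two models routinely disagree like this across journeys, that disagreement is the signal to move to a multi-touch model.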
Tour Kit doesn't include a built-in visual builder or no-code editor. It requires React developers to implement and instrument. But that's the point for attribution: code-owned event instrumentation gives you control over exactly which events fire, with exactly which metadata, flowing to exactly which analytics platform. Black-box SaaS onboarding tools make attribution decisions for you. Code-owned onboarding lets you make them yourself.
Get started with Tour Kit: usertourkit.com
npm install @tourkit/core @tourkit/react @tourkit/analytics
FAQ
What is the best attribution model for product tour analytics?
The U-shaped (position-based) onboarding attribution model works best for most SaaS onboarding flows. It assigns 40% credit to the first tour touchpoint, 40% to the last before conversion, and distributes 20% across middle steps. This captures both activation triggers and closing moments without requiring the 1,000+ monthly conversions that data-driven models need.
How do you track which product tour drove a conversion?
Tour-level attribution requires three things: structured tour events (start, complete, skip) with tourId metadata, a defined conversion event (trial upgrade, feature adoption), and an attribution window (typically 7-14 days). Tour Kit's @tourkit/analytics plugin emits these events automatically. Connect them to PostHog, Mixpanel, or Amplitude and run the attribution calculation server-side.
What is the difference between tour attribution and marketing attribution?
Marketing attribution assigns credit to channels that brought users to your product (ads, email campaigns, organic search). Tour attribution assigns credit to in-app experiences that moved users toward activation after they arrived. As of April 2026, standard analytics tools support marketing attribution natively but require custom instrumentation for tour-level attribution.
How many conversions do you need for data-driven attribution?
Data-driven (algorithmic) attribution requires at least 1,000 conversions per month to produce statistically valid results. Below that threshold, U-shaped or linear models provide more reliable insights. Most early-stage SaaS products should start with rule-based onboarding attribution models and graduate to ML-weighted models as conversion volume grows.
Should you use a holdout group when measuring tour effectiveness?
Yes. A holdout group (10-20% of new users who never see onboarding tours) is the only way to prove tours cause conversions rather than merely coincide with them. Run the holdout for a full trial cycle, at least 14 days, before drawing conclusions. If the holdout converts at nearly the same rate, the tours need redesigning.
Related articles:
- How to A/B test product tours (complete guide with metrics)
- Cohort analysis for product tours: finding what works
- Mixpanel + Tour Kit: building a product tour funnel
- Amplitude + Tour Kit: measuring onboarding impact on retention
- The aha moment framework: mapping tours to activation events