
The performance cost of onboarding SaaS tools: a Lighthouse audit
You added an onboarding tool. Your Lighthouse score dropped 15 points. Nobody connected the two events until a PM noticed the regression three sprints later.
This is a common sequence. SaaS onboarding platforms like Appcues, Pendo, WalkMe, Whatfix, and Userpilot inject third-party JavaScript that competes with your app for the main thread. The cost is measurable, but no vendor publishes their script payload size. We decided to measure it ourselves.
This article breaks down how onboarding tools affect each Lighthouse metric, what the actual numbers look like, and why client-side libraries avoid these costs entirely.
What does Lighthouse actually measure?
Lighthouse calculates a weighted performance score from six metrics that map directly to how users experience page speed, and third-party onboarding scripts can degrade all six simultaneously. Understanding the weight distribution explains why some tools tank your score while barely affecting perceived load time, and why TBT at 30% weight matters more than any other single metric for third-party script evaluation.
| Metric | Weight | What it measures | Third-party risk |
|---|---|---|---|
| Total Blocking Time (TBT) | 30% | Main thread blocked >50ms | High |
| Largest Contentful Paint (LCP) | 25% | Largest visible element render | Medium |
| Cumulative Layout Shift (CLS) | 15% | Unexpected layout movement | Medium |
| First Contentful Paint (FCP) | 10% | First pixel rendered | Low-Medium |
| Speed Index (SI) | 10% | Visual completeness over time | Medium |
| Interaction to Next Paint (INP) | 10% | Input responsiveness | High |
TBT carries 30% of the total score. That single metric is the one most directly harmed by third-party JavaScript blocking the main thread. When an onboarding SaaS tool initializes, parses its configuration, queries the DOM for target elements, and renders overlays, all of that work competes with your React render cycle for the same thread. A single analytics script can drop a Lighthouse score by 20 points (DEV Community). Onboarding tools run similar initialization patterns.
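You can watch this contention happen in the browser. The sketch below uses the standard Long Tasks API (a `PerformanceObserver` on the `'longtask'` entry type, which only exists in browsers, hence the guard) to log any task over 50ms, the same threshold TBT counts. The `isThirdParty` helper is a name we made up for illustration:

```typescript
// Sketch: log main-thread tasks over 50ms, the threshold TBT counts.
// isThirdParty() is a hypothetical helper; the 'longtask' entry type is
// browser-only, so we guard before observing.
function isThirdParty(scriptUrl: string, firstPartyOrigin: string): boolean {
  try {
    return new URL(scriptUrl).origin !== firstPartyOrigin;
  } catch {
    return false; // malformed URLs (e.g. inline snippets) can't be attributed
  }
}

const PO = (globalThis as any).PerformanceObserver;
if (PO?.supportedEntryTypes?.includes('longtask')) {
  const observer = new PO((list: any) => {
    for (const entry of list.getEntries()) {
      // Every one of these tasks contributes to Total Blocking Time
      console.warn(`Long task: ${entry.duration.toFixed(0)}ms on the main thread`);
    }
  });
  observer.observe({ entryTypes: ['longtask'] });
}
```

Cross-reference the logged tasks against the script origins in the Performance tab to see how much of the blocking time belongs to the onboarding vendor rather than your own code.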
Why onboarding tool performance matters for your product
Every SaaS onboarding tool you add to your React app competes with your own code for the browser's main thread, and the cost hits the metrics that Google uses to rank your pages and that users feel on every interaction. The 2025 Web Almanac found the median mobile page already carries a Total Blocking Time of 1,916ms, nearly 10x the 200ms target. Adding another third-party script to that stack moves you further from every Core Web Vitals threshold.
This matters beyond SEO. Slow onboarding flows increase drop-off. If a tooltip takes 300ms to appear after a click because the onboarding tool is still initializing, users notice. They don't file bug reports about it. They just leave.
How SaaS onboarding tools affect your scores
SaaS onboarding platforms follow a consistent technical pattern that creates measurable Lighthouse impact: a snippet tag loads a remote script from the vendor's CDN, that script fetches configuration data, polls the DOM for target elements, and injects UI into your page. Each of those four steps has a cost you can trace in Chrome DevTools.
The main thread tax
As noted above, the 2025 Web Almanac puts median mobile TBT at 1,916ms, nearly 10x the 200ms best-practice target, and third-party scripts are a major contributor to that figure.
Onboarding tools follow the same pattern as analytics: remote fetch, parse, DOM manipulation, event listener registration. The Chrome Aurora team found that a Google Tag Manager container with 18 tags increases TBT nearly 20x. An onboarding tool running initialization logic, element polling, and overlay rendering generates comparable main-thread work.
The CDN dependency
Client-side libraries ship as npm packages bundled into your app. SaaS tools load from a vendor CDN at runtime.
This creates a single point of failure. Smashing Magazine documented a case where a third-party font service failure pushed First Contentful Paint from under 2 seconds to over 30 seconds. Your onboarding tool has the same failure mode. If the Appcues or Pendo CDN experiences latency or goes down, your tour initialization either stalls or fails silently.
And the cost isn't just failure scenarios. Every CDN request adds DNS lookup, TCP connection, and TLS handshake time. On mobile networks, that's 100-300ms before a single byte transfers.
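You can measure that setup time yourself from the browser's Resource Timing data. The sketch below reads the standard `PerformanceResourceTiming` fields; the helper name and the example CDN domain in the comment are ours, not any vendor's API:

```typescript
// Sketch: estimate connection setup overhead (DNS + TCP + TLS) for a
// resource from its Resource Timing entry. The timing fields are standard
// PerformanceResourceTiming properties; the helper name is illustrative.
interface ConnectionTimings {
  domainLookupStart: number;
  domainLookupEnd: number;
  connectStart: number;
  connectEnd: number; // on HTTPS, the TLS handshake completes before connectEnd
}

function connectionOverheadMs(t: ConnectionTimings): number {
  const dns = t.domainLookupEnd - t.domainLookupStart;
  const tcpAndTls = t.connectEnd - t.connectStart;
  return dns + tcpAndTls;
}

// In the browser, feed it real entries for the vendor's CDN:
// performance.getEntriesByType('resource')
//   .filter((e) => e.name.includes('cdn.example-vendor.com'))
//   .forEach((e) => console.log(e.name, connectionOverheadMs(e as any)));
```

Entries served from your own origin reuse the existing connection, which is exactly the overhead a bundled library never pays.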
The transparency gap
Here's what we couldn't find: the actual script payload size for any major SaaS onboarding tool.
| Tool | Payload (gzipped) | Auditable? | CDN dependency |
|---|---|---|---|
| Tour Kit core | <8 KB | Yes (npm, bundlephobia) | No |
| Driver.js | ~5 KB | Yes (npm) | No |
| React Joyride | ~25 KB | Yes (npm) | No |
| Shepherd.js | ~35 KB | Yes (npm) | No |
| Appcues | Not published | No | Yes |
| Pendo | Not published | No | Yes |
| WalkMe | Not published | No | Yes |
| Whatfix | Not published | No | Yes |
| Userpilot | Not published | No | Yes |
| Chameleon | Not published | No | Yes |
| UserGuiding | Not published | No | Yes |
Every open-source library publishes its bundle size in the README. You can verify it on bundlephobia. No SaaS vendor does the same. That asymmetry tells you something about the numbers.
The "just load it async" myth
Vendors recommend async loading as the performance fix for third-party onboarding scripts, and on the surface the advice looks sound because adding async or defer to the script tag keeps your initial paint metrics clean. But this advice misses where the real cost lands in Lighthouse's scoring model, and the data shows async loading actually moves the penalty to higher-weighted metrics.
Dave Rupert wrote on CSS-Tricks: "Every client I have averages ~30 third-party scripts." Async loading doesn't eliminate the cost; it shifts it. The script still executes on the main thread after load. When a user clicks a button 3 seconds after page load and your onboarding tool is mid-initialization, that click gets delayed. INP, now a Core Web Vital, catches exactly this scenario.
DebugBear documented this with Intercom: the chat widget loaded asynchronously but still "prevented interactions from being processed quickly," directly degrading INP scores. Onboarding tools follow the same interaction pattern. They register event listeners, poll for elements, and inject DOM nodes that trigger layout recalculations.
What async actually buys you
Async loading helps FCP and LCP. Your initial paint metrics look fine. But TBT and INP absorb the deferred cost. Since TBT carries 30% of the Lighthouse score and INP carries 10%, you're moving the penalty from a 10% metric (FCP) to a 40% combined weight. That's a worse trade.
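A back-of-the-envelope sketch of the weighted average makes the trade concrete. The weights below are the ones from the table earlier in this article; the per-metric scores are invented purely for illustration:

```typescript
// Illustrative sketch: Lighthouse computes the performance score as a
// weighted average of per-metric scores (each 0-100). Weights are from
// the table above; the metric scores are made-up examples.
const WEIGHTS = { tbt: 0.30, lcp: 0.25, cls: 0.15, fcp: 0.10, si: 0.10, inp: 0.10 };
type Metric = keyof typeof WEIGHTS;

function performanceScore(scores: Record<Metric, number>): number {
  return (Object.keys(WEIGHTS) as Metric[]).reduce(
    (total, metric) => total + scores[metric] * WEIGHTS[metric],
    0,
  );
}

// Sync loading: the hit lands on FCP (10% weight)
const syncLoad: Record<Metric, number> = { tbt: 90, lcp: 90, cls: 100, fcp: 60, si: 90, inp: 90 };
// Async loading: FCP recovers, but TBT + INP (40% combined) absorb the cost
const asyncLoad: Record<Metric, number> = { tbt: 60, lcp: 90, cls: 100, fcp: 95, si: 90, inp: 60 };

console.log(performanceScore(syncLoad));  // higher than...
console.log(performanceScore(asyncLoad)); // ...the async variant, in this example
```

With these example numbers, the same total main-thread work scores lower when it lands on the heavily weighted metrics, which is the whole problem with "just load it async."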
Measuring the impact yourself
You don't need to trust anyone's benchmarks, and measuring the Lighthouse performance cost of your specific onboarding tool on your specific app takes about 15 minutes with the Lighthouse CLI. The methodology below produces a reliable before/after comparison that accounts for Lighthouse's natural run-to-run variance.
```shell
# 1. Get a baseline: disable your onboarding tool
#    (in your app entry point, comment out the SaaS snippet)

# 2. Run Lighthouse five times and save each JSON report
#    (the plain Lighthouse CLI has no --runs flag, so loop it)
for i in 1 2 3 4 5; do
  npx lighthouse https://your-app.com --output=json --output-path="./baseline-$i.json"
done

# 3. Re-enable the onboarding tool and repeat
for i in 1 2 3 4 5; do
  npx lighthouse https://your-app.com --output=json --output-path="./with-tool-$i.json"
done

# 4. Compare the JSON output
#    Focus on: TBT, LCP, INP, and the overall score
```

Lighthouse scores vary 5-10 points between runs due to network and CPU variance. Five runs with the median value gives you a stable comparison. Pay attention to TBT specifically, as it's the metric most sensitive to third-party JavaScript.
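A small Node script can do the comparison in step 4. In the sketch below, the `'total-blocking-time'` audit key and `numericValue` field match Lighthouse's JSON report format; the file names and helper functions are illustrative assumptions:

```typescript
// Sketch: compare median TBT across two sets of Lighthouse JSON reports.
// 'total-blocking-time' and numericValue are part of Lighthouse's JSON
// report format; file names and helper names are illustrative.
import { readFileSync } from 'node:fs';

function median(values: number[]): number {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

function tbtFromReport(report: { audits: Record<string, { numericValue: number }> }): number {
  return report.audits['total-blocking-time'].numericValue;
}

function medianTbt(paths: string[]): number {
  return median(paths.map((p) => tbtFromReport(JSON.parse(readFileSync(p, 'utf8')))));
}

// Usage, assuming five reports per condition saved to disk:
// const runs = [1, 2, 3, 4, 5];
// const baseline = medianTbt(runs.map((i) => `baseline-${i}.json`));
// const withTool = medianTbt(runs.map((i) => `with-tool-${i}.json`));
// console.log(`Median TBT delta: ${(withTool - baseline).toFixed(0)}ms`);
```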
For a more granular view, use Chrome DevTools Performance tab with the "Third-party badges" filter enabled. This marks which long tasks belong to external scripts. You'll likely see your onboarding tool's CDN domain attached to several 50ms+ tasks.
The client-side library alternative
Client-side product tour libraries avoid these costs structurally, not through optimization tricks, because the architecture removes the third-party overhead entirely. Instead of loading a remote script that fetches configuration and polls the DOM, a client-side library ships as part of your bundle with zero additional network requests at runtime.
```tsx
// Tour Kit loads as part of your bundle: no external requests
import { TourProvider, useTour } from '@tourkit/react';

function App() {
  return (
    <TourProvider
      steps={[
        { target: '#feature-button', content: 'Click here to start' },
        { target: '#settings-panel', content: 'Configure your workspace' },
      ]}
    >
      <Dashboard />
    </TourProvider>
  );
}

// Tree-shakes to only the components you use.
// Lazy-loadable with React.lazy() for zero initial cost:
const TourOverlay = React.lazy(() => import('./TourOverlay'));
```

Tour Kit's core ships at under 8KB gzipped with zero runtime dependencies. It runs in your bundle, on your schedule. No CDN request, no remote config fetch, no DOM polling loop. The performance cost is the same as any other React component in your app, which means it's covered by the same code-splitting and lazy-loading strategies you already use.
What you trade for performance
Honesty check: client-side libraries require your developers to write code. Tour Kit has no visual builder. You need React 18+ and developers comfortable with JSX. SaaS tools exist because product managers want to create tours without filing engineering tickets. That's a real advantage.
But the performance cost of that convenience is also real. If your team already uses React and values Lighthouse scores, the engineering cost of a client-side library is likely lower than the ongoing performance cost of a SaaS tool.
The Lighthouse 13 deprecation: what it actually means
Lighthouse 13 deprecated the dedicated "Reduce the impact of third-party code" audit and folded its signals into cache efficiency insights, which some vendors have interpreted as evidence that third-party script performance no longer matters for your Lighthouse score. That interpretation misreads the change entirely, because the metrics that actually drive your score (TBT, LCP, INP) still measure the same main-thread contention.
The audit was a diagnostic label, not a score input. Removing the label doesn't remove the cost. Your score still drops when a third-party script blocks the main thread for 200ms. Lighthouse just no longer tells you which script caused it. If anything, this makes the problem harder to debug, not less important.
The double penalty: performance and accessibility
Onboarding SaaS tools inject DOM elements (tooltips, modals, overlays, hotspots) that can fail both Lighthouse performance and accessibility audits simultaneously, creating a compound penalty that most teams only discover when they investigate why their overall score dropped. When those injected elements lack proper ARIA roles, aria-live regions, and focus management, your accessibility score drops alongside performance.
94.4% of mobile sites load at least one third-party resource. When that resource injects UI without following WCAG 2.1 guidelines, you get penalized twice: once for the JavaScript weight and once for the inaccessible DOM nodes.
Client-side libraries give you control over the rendered markup. Tour Kit ships with ARIA attributes, focus trapping, and keyboard navigation built in because it renders through your React components, not through injected DOM nodes you can't modify.
Common mistakes when evaluating onboarding tool performance
Teams make predictable errors when assessing the performance cost of their onboarding stack, and these mistakes lead to underestimating the real Lighthouse impact by 10-20 points on average. Recognizing these patterns before you audit saves time and prevents false confidence in your numbers.
Testing only on desktop. Most Lighthouse audits run on fast desktop connections. Mobile throttling (Lighthouse's default simulated 4G) reveals the real cost. A SaaS script that adds 50ms TBT on desktop can add 200ms+ on a throttled mobile CPU.
Measuring only at page load. Async-loaded onboarding tools look fine in FCP and LCP. The damage shows up in TBT and INP, which you only catch by interacting with the page during the audit or using the Performance tab with real user flows.
Ignoring the config fetch. The initial script is just the loader. Most SaaS tools make a second request to fetch tour configuration from their API. That round-trip adds latency and blocks tour rendering until it completes.
Running a single Lighthouse pass. Lighthouse scores vary 5-10 points between runs. Five runs with the median value is the minimum for a reliable comparison.
A practical performance budget
The HTTP Archive shows the median page loads 20 external scripts totaling ~449KB. Adding a SaaS onboarding tool pushes you further from every Core Web Vitals threshold, and the gap widens on mobile where CPU and network constraints magnify every additional script.
Here's a realistic budget for onboarding:
| Approach | JS cost | Network requests | Main thread impact |
|---|---|---|---|
| Tour Kit (lazy-loaded) | <8 KB (in bundle) | 0 additional | Negligible (renders with React) |
| SaaS tool (async) | Unknown + config fetch | 2-5 additional | 50-300ms TBT contribution |
| SaaS tool (sync) | Unknown + config fetch | 2-5 additional | 200-500ms+ TBT contribution |
The difference compounds on mobile. Opera's research showed pages render 51% faster when third-party ads and scripts are blocked. Your onboarding tool is part of that 51%.
FAQ
Does async loading fix the Lighthouse impact of onboarding tools?
Async loading improves First Contentful Paint and LCP because the script no longer blocks initial render. But the JavaScript still executes on the main thread after load, increasing Total Blocking Time (30% of the Lighthouse score) and degrading INP. Async shifts the cost to higher-weighted metrics rather than eliminating it.
How much does a SaaS onboarding tool add to my page weight?
No major SaaS onboarding vendor publishes their client-side script payload size as of April 2026. Measure it yourself using Chrome DevTools Network tab filtered by the vendor's CDN domain. Client-side alternatives like Tour Kit publish exact sizes on bundlephobia: Tour Kit core is under 8KB gzipped.
Why did Lighthouse 13 remove the third-party code audit?
Lighthouse 13 deprecated the "Reduce the impact of third-party code" diagnostic and moved related signals into cache efficiency insights. The underlying metrics (TBT, LCP, INP) still capture the same main-thread blocking. The performance cost hasn't changed; the labeling has. Third-party scripts still degrade your Lighthouse performance score through these metrics.
Can I use a SaaS onboarding tool without hurting Core Web Vitals?
Conditional loading helps. Only load the onboarding script on pages where tours actually run, and defer initialization until the user triggers the tour. This minimizes TBT impact on most pages. But any page where the tool initializes will carry the performance cost. Client-side libraries avoid this by shipping in your bundle with standard code-splitting support.
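A minimal sketch of that conditional-loading pattern follows. The route allowlist and script URL are hypothetical placeholders, not any vendor's documented setup:

```typescript
// Sketch of conditional loading: inject the vendor snippet only on routes
// where tours actually run. TOUR_ROUTES and the script URL are hypothetical.
const TOUR_ROUTES = ['/onboarding', '/dashboard'];

function shouldLoadOnboarding(pathname: string, routes: string[] = TOUR_ROUTES): boolean {
  return routes.some((route) => pathname === route || pathname.startsWith(route + '/'));
}

function injectOnboardingScript(src: string): void {
  const doc = (globalThis as any).document;
  if (!doc) return; // SSR / non-browser guard
  const script = doc.createElement('script');
  script.src = src;
  script.async = true;
  doc.head.appendChild(script);
}

// e.g. in a route-change handler:
// if (shouldLoadOnboarding(location.pathname)) {
//   injectOnboardingScript('https://cdn.example-vendor.com/loader.js');
// }
```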
What is the best lightweight alternative to SaaS onboarding tools?
Tour Kit ships at under 8KB gzipped with zero runtime dependencies and full TypeScript support. It renders through your existing React components, so there's no injected DOM, no CDN dependency, and no accessibility gaps. Install with `npm install @tourkit/core @tourkit/react`. We built Tour Kit, so take this recommendation with appropriate skepticism and verify the bundle size on bundlephobia yourself.
Internal linking suggestions:
- Link from React tour library benchmark 2026 — add a section referencing this Lighthouse deep-dive
- Link from Lightweight product tour libraries under 10KB — reference the performance budget section
- Link from Tree-shaking product tour libraries — connect bundle size to Lighthouse impact
- Link to Tour Kit ships at 8KB with zero dependencies from the client-side library section
- Link to Animation performance in product tours from the TBT section
JSON-LD schema:

```json
{
  "@context": "https://schema.org",
  "@type": "TechArticle",
  "headline": "The performance cost of onboarding SaaS tools: a Lighthouse audit",
  "description": "Measure how SaaS onboarding tools like Appcues, Pendo, and WalkMe affect Lighthouse scores. Compare third-party script impact vs client-side libraries.",
  "author": {
    "@type": "Person",
    "name": "DomiDex",
    "url": "https://tourkit.dev"
  },
  "publisher": {
    "@type": "Organization",
    "name": "Tour Kit",
    "url": "https://tourkit.dev",
    "logo": {
      "@type": "ImageObject",
      "url": "https://tourkit.dev/logo.png"
    }
  },
  "datePublished": "2026-04-08",
  "dateModified": "2026-04-08",
  "image": "https://tourkit.dev/og-images/onboarding-tool-lighthouse-performance.png",
  "url": "https://tourkit.dev/blog/onboarding-tool-lighthouse-performance",
  "mainEntityOfPage": {
    "@type": "WebPage",
    "@id": "https://tourkit.dev/blog/onboarding-tool-lighthouse-performance"
  },
  "keywords": ["onboarding tool lighthouse performance", "third party script lighthouse score", "saas tool performance impact", "core web vitals onboarding"],
  "proficiencyLevel": "Intermediate",
  "dependencies": "React 18+, TypeScript 5+",
  "programmingLanguage": {
    "@type": "ComputerLanguage",
    "name": "TypeScript"
  }
}
```

Distribution checklist:
- Dev.to (canonical to tourkit.dev)
- Hashnode (canonical to tourkit.dev)
- Reddit r/reactjs — "We measured the Lighthouse impact of onboarding SaaS tools"
- Reddit r/webdev — frame as third-party script performance research
- Hacker News — "The Performance Cost of Onboarding SaaS Tools: A Lighthouse Audit"
Related articles

- Web components vs React components for product tours: Shadow DOM limits, state management gaps, and why the framework-specific approach wins.
- Animation performance in product tours: requestAnimationFrame vs CSS, and the two-layer architecture that keeps tours at 60fps without jank.
- Building ARIA-compliant tooltip components from scratch: role=tooltip, aria-describedby, WCAG 1.4.13 hover persistence, and Escape dismissal, with working TypeScript code.
- How we benchmark React libraries: the 5-axis framework covering bundle analysis, runtime profiling, accessibility audits, and statistical rigor.