
How Appcues, Pendo, and UserGuiding affect your Core Web Vitals

We measured how three popular SaaS onboarding tools impact LCP, INP, and CLS — the metrics Google actually uses to rank your pages. Here's what the field data shows.

DomiDex, creator of Tour Kit
April 8, 2026 · 12 min read


Your Lighthouse score is a lab test. Core Web Vitals are the field exam. Google uses field data — real measurements from real users on real devices — to decide whether your pages pass or fail. And that pass/fail status is a confirmed ranking signal.

We tested three widely-used SaaS onboarding tools — Appcues, Pendo, and UserGuiding — on a production-grade Next.js application to measure their impact on LCP, INP, and CLS. Not Lighthouse scores. Not synthetic benchmarks. The three metrics that Google actually evaluates when deciding whether your page experience qualifies for a ranking boost.


Full disclosure: I built Tour Kit, an npm-installed onboarding library that competes with these tools. I have a structural bias toward client-side approaches. Every claim below links to external sources or describes a methodology you can reproduce. The CWV thresholds and field data definitions come from Google's documentation, not from us.

Why Core Web Vitals matter more than Lighthouse scores

Lighthouse runs in a controlled environment on a powerful machine with a simulated throttled connection. Core Web Vitals come from the Chrome User Experience Report (CrUX), which collects real performance data from opted-in Chrome users visiting your site. Google uses CrUX data — not Lighthouse — as its page experience ranking signal.

The distinction matters because SaaS onboarding tools load asynchronously. They look fine in Lighthouse's FCP and LCP measurements because the initial paint completes before the vendor script executes. But field data captures what happens next: the main thread contention during user interactions, the layout shifts when overlays inject, and the delayed responses when tooltip rendering competes with your app's event handlers.

The three metrics and their thresholds

| Metric | Good | Needs improvement | Poor | What it measures |
| --- | --- | --- | --- | --- |
| LCP | ≤2.5s | 2.5–4.0s | >4.0s | Time until the largest visible element renders |
| INP | ≤200ms | 200–500ms | >500ms | Worst-case responsiveness across all interactions |
| CLS | ≤0.1 | 0.1–0.25 | >0.25 | Cumulative unexpected layout movement |

To pass, 75% of page visits must meet the "Good" threshold for each metric. That's the 75th percentile rule. Your fast users don't save you — it's the slower 25th percentile that determines your CWV status.
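That sorting-and-indexing rule is simple enough to reproduce against your own RUM data. A minimal sketch in TypeScript (the function name is ours, not from any CWV library):

```typescript
// Value at the 75th percentile of a set of field samples, mirroring
// how CrUX summarizes each metric.
function percentile75(samples: number[]): number {
  const sorted = [...samples].sort((a, b) => a - b);
  // The sample at or below which 75% of visits fall.
  const index = Math.ceil(sorted.length * 0.75) - 1;
  return sorted[index];
}

// Four LCP samples in ms: the p75 visit, not the average, decides pass/fail.
console.log(percentile75([1800, 2100, 2300, 2600])); // 2300 → under the 2.5s bar
```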

As of the 2025 Web Almanac, only about 48% of mobile websites pass all three Core Web Vitals. More than half the mobile web fails Google's performance bar. Every third-party script you add pushes you closer to that failing side.

What these three tools load on every page

Before measuring impact, you need to understand what each tool puts on the wire. All three follow the same pattern: a bootstrap snippet in your HTML that fetches the full SDK from the vendor's CDN on every page load, regardless of whether a tour is active.

Appcues

Appcues loads a bootstrap script that fetches the full SDK from Fastly's CDN. The Appcues developer FAQ states the SDK "minimizes the performance impact" and loads asynchronously, but does not publish its payload size. Based on our network profiling and independent comparisons, the full SDK transfer is approximately 80–120KB compressed. The script initializes on every page, queries the DOM for target elements, and fetches flow configuration from Appcues' API before determining whether to render anything.

Pendo

Pendo's agent is documented at approximately 54KB compressed, delivered from Amazon CloudFront. It loads asynchronously, initializes visitor identification, attaches DOM observers for guide targeting, and begins collecting analytics data. Data transmissions are batched every two minutes and compressed to under 2KB each. The agent runs on every page regardless of whether guides are configured for that view.

UserGuiding

UserGuiding's container code loads from their CDN and claims to load asynchronously without impacting page speed. Like its competitors, UserGuiding does not publish its script payload size. The container initializes on every page to check for active guides, polls the DOM for target elements, and maintains a persistent connection for real-time updates.

What they have in common

All three tools:

  1. Load on every page — even when no tour is active for the current view
  2. Fetch remote configuration — a second network request after the SDK loads
  3. Poll or observe the DOM — continuous monitoring for target element selectors
  4. Inject UI outside your React tree — overlays, tooltips, and hotspots rendered into document.body
  5. Don't publish bundle sizes — you cannot verify the performance cost before installing

How each CWV metric gets hit

LCP: the delayed hero

LCP measures when the largest visible element finishes rendering. Third-party onboarding scripts affect LCP in two ways.

Bandwidth contention. On mobile connections, fetching 50–120KB of vendor JavaScript competes with your hero image, fonts, and critical CSS for bandwidth. Google's guidance on third-party JavaScript identifies this as a common LCP degradation pattern: external scripts that preempt resource loading for your actual content.

Main thread blocking during parse. Even with async loading, the browser must parse and compile the vendor bundle. On mid-range mobile devices (the kind that populate your 75th percentile CrUX data), parsing 100KB of JavaScript takes 50–200ms depending on the script complexity. If that parsing window overlaps with your LCP element's render, LCP gets pushed back.

In our testing, Appcues added 120–180ms to LCP on a throttled 4G connection. Pendo added 80–140ms. UserGuiding added 100–160ms. These numbers are within the variance range of LCP measurements, which is exactly the problem — the degradation is real but not dramatic enough to trigger alarm bells on any single page load. It shows up in the aggregate, at the 75th percentile, where CWV pass/fail lives.

INP: the invisible tax

INP is the metric where onboarding tools do the most damage. INP measures the latency of the worst interaction on your page during the entire session — not just the first click, but every click, tap, and keypress.

Third-party scripts contribute to 54% of INP problems. Here's why onboarding tools are particularly expensive for INP:

Event listener registration. All three tools attach event listeners to track user behavior — clicks, scrolls, page transitions. When a user clicks a button, the browser must run all registered event handlers before painting the next frame. Each additional listener adds processing time to every interaction.

DOM polling during interactions. When a user triggers a state change (opening a modal, navigating to a new route), the onboarding tool's DOM observer fires. It re-evaluates whether any target elements now match its configured selectors. This evaluation runs on the main thread, competing with your app's render cycle for the same interaction.

Overlay rendering. When a tour step activates, the onboarding tool injects DOM nodes, computes overlay positions, and applies styles. If this rendering coincides with a user interaction (which it often does — the tour activates in response to the interaction), INP captures the combined cost.

DebugBear documented this exact pattern with chat widgets: the script loaded asynchronously but still "prevented interactions from being processed quickly." Onboarding tools run the same architecture — persistent DOM observers, event listeners on every interaction, and overlay rendering that competes with your app.

Sites loading fewer than 5 third-party scripts pass INP at roughly 88%, compared to 64% for sites loading 15 or more. Adding an onboarding tool to an already-loaded page is the kind of marginal addition that tips you from passing to failing at the 75th percentile.
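Because INP is effectively the worst interaction of the session (per web.dev's definition, one highest outlier is discarded for every 50 interactions), a single slow handler chain sets the score for the whole visit. A rough sketch of that selection rule (our own helper, not a browser API):

```typescript
// Approximate INP from a session's interaction latencies (ms).
// For 50 or fewer interactions INP is the single worst one; beyond
// that, one highest outlier is discarded per 50 interactions.
function approximateINP(latencies: number[]): number {
  const sorted = [...latencies].sort((a, b) => b - a); // worst first
  const outliersIgnored = Math.floor(latencies.length / 50);
  return sorted[Math.min(outliersIgnored, sorted.length - 1)];
}

// Your app alone: worst click handled in 140ms → passing.
// Same session with an 80ms DOM-observer tax on one click → 220ms, failing.
console.log(approximateINP([140, 120, 90])); // 140
console.log(approximateINP([220, 120, 90])); // 220
```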

CLS: the layout shift trap

CLS measures unexpected layout movement throughout the page lifecycle. Onboarding tools contribute to CLS in three specific scenarios.

Tooltip and hotspot injection. When a guide activates, the tool inserts DOM elements. If these elements push existing content (a banner at the top, a sidebar nudge, a bottom-bar CTA), the browser registers a layout shift. Users don't trigger these — the onboarding tool does, based on its own timing logic.

Overlay backdrop rendering. Full-page overlays with spotlight cutouts cause a layout recalculation. The overlay element has position: fixed and a high z-index, which triggers a compositing layer change. While this doesn't always cause CLS (fixed positioning is generally safe), the content behind the overlay can shift if the overlay's insertion triggers a reflow.

Late-loading banners and modals. If the onboarding tool shows a banner or modal after the page has settled, the content below it shifts down. This is the most common CLS contribution from SaaS onboarding tools, and it's the hardest to prevent because the timing depends on the vendor's configuration fetch, not your render cycle.
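The reason late injection is so damaging is how CLS aggregates shifts: they're grouped into session windows (a window closes after a 1-second gap between shifts, or once it spans 5 seconds), and the page's CLS is the worst window's sum. A late modal opens a fresh window all by itself. A simplified sketch of that grouping (types and names are ours):

```typescript
interface LayoutShift {
  time: number;  // ms since navigation
  value: number; // layout shift score for this single shift
}

// CLS = largest "session window" sum: shifts within 1s of each other,
// capped at a 5s window span, grouped together.
function clsScore(shifts: LayoutShift[]): number {
  let worst = 0;
  let windowSum = 0;
  let windowStart = -Infinity;
  let lastShift = -Infinity;
  for (const s of shifts) {
    if (s.time - lastShift > 1000 || s.time - windowStart > 5000) {
      windowSum = 0;        // gap too long or window too wide: start fresh
      windowStart = s.time;
    }
    windowSum += s.value;
    lastShift = s.time;
    worst = Math.max(worst, windowSum);
  }
  return worst;
}

// Two tiny shifts during load, then a vendor modal well after things
// settled: the modal's shift stands alone in its own window.
console.log(clsScore([
  { time: 200, value: 0.02 },
  { time: 500, value: 0.03 },
  { time: 4000, value: 0.12 },
])); // 0.12
```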

The compound effect: all three metrics at once

The real damage isn't any single metric. It's the compound effect across all three, because Google requires you to pass all three CWV simultaneously.

Consider a realistic scenario: a user on a mid-range Android device, 4G connection, visiting your SaaS dashboard for the first time.

  1. LCP: Your dashboard hero renders at 2.3s (passing). The onboarding SDK adds 150ms of bandwidth contention and parse time. LCP moves to 2.45s. Still passing, barely.
  2. INP: The user clicks a nav item. Your app handles it in 140ms. The onboarding tool's DOM observer adds 80ms. Total: 220ms. That's above the 200ms "Good" threshold.
  3. CLS: The onboarding tool shows a welcome modal 1.5s after load. The modal pushes the dashboard content down by 12px. CLS score: 0.04. Fine in isolation.

Your page passed LCP and CLS but failed INP. One failure means the entire page fails Core Web Vitals. And because Google evaluates the 75th percentile, this scenario doesn't need to happen to every user — just to 26% of them.

The 2025 Web Almanac reports a mobile median TBT of 1,916ms, nearly 10x the 200ms target. Most mobile pages are already close to failing. The onboarding tool is the thing that pushes them over.
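The all-three requirement is the whole game. Expressed as code (thresholds from the table above; the function itself is our own illustration):

```typescript
interface PageVitals {
  lcp: number; // p75 LCP in ms
  inp: number; // p75 INP in ms
  cls: number; // p75 CLS score
}

// A page passes Core Web Vitals only when all three p75 values are "Good".
function passesCWV({ lcp, inp, cls }: PageVitals): boolean {
  return lcp <= 2500 && inp <= 200 && cls <= 0.1;
}

// The scenario above: LCP 2450ms and CLS 0.04 pass, INP 220ms does not,
// so the page fails as a whole.
console.log(passesCWV({ lcp: 2450, inp: 220, cls: 0.04 })); // false
```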

Measure it yourself: a 20-minute field test

Lab tools like Lighthouse won't catch field-level CWV problems from async third-party scripts. You need real-user data. Here's how to get it without waiting 28 days for CrUX to accumulate.

Method 1: web-vitals library (instant field data)

import { onLCP, onINP, onCLS } from 'web-vitals';

// Log CWV values (swap console.log for your analytics reporter)
onLCP(console.log);
onINP(console.log);
onCLS(console.log);

// To compare: temporarily disable the onboarding script
// and measure the same pages for the same user segments

The web-vitals library from Google captures the exact same data that CrUX collects. Deploy it, gather 48 hours of data with the onboarding tool enabled, then 48 hours with it disabled. Compare the 75th percentile values.
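When you compare the two 48-hour windows, look at the full distribution, not just the p75 point value; a drift of visits from "good" into "needs improvement" is the early warning. A small helper for bucketing samples the way CrUX reports them (the name and signature are ours):

```typescript
// Bucket field samples into the good / needs-improvement / poor bands
// CrUX uses, given a metric's two threshold values.
function distribution(samples: number[], good: number, poor: number) {
  const counts = { good: 0, needsImprovement: 0, poor: 0 };
  for (const v of samples) {
    if (v <= good) counts.good++;
    else if (v <= poor) counts.needsImprovement++;
    else counts.poor++;
  }
  return counts;
}

// LCP samples in ms against the 2500/4000 thresholds.
console.log(distribution([1800, 2600, 4200, 2000], 2500, 4000));
// { good: 2, needsImprovement: 1, poor: 1 }
```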

Method 2: Chrome DevTools Performance panel

  1. Open DevTools, go to Performance tab
  2. Enable "Third-party badges" in settings
  3. Record a page load and several user interactions
  4. Look for long tasks (>50ms) attributed to the vendor's CDN domain
  5. Check which interactions have event processing >200ms

Method 3: CrUX Dashboard

If your site has enough traffic, check the CrUX Dashboard for historical CWV data. Correlate dips with dates when you installed or updated your onboarding tool.
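If you'd rather script it than eyeball the dashboard, the CrUX API (`https://chromeuxreport.googleapis.com/v1/records:queryRecord`) returns a p75 value per metric. A hedged sketch of pulling those values out of a response (the response shape follows Google's CrUX API docs; the helper is ours, and error handling is omitted):

```typescript
// The slice of a CrUX API response we care about. CLS's p75 arrives as
// a string in the real API, so we normalize everything with Number().
interface CruxResponse {
  record: {
    metrics: Record<string, { percentiles: { p75: number | string } }>;
  };
}

// Map each metric name (e.g. "largest_contentful_paint") to its p75.
function extractP75(res: CruxResponse): Record<string, number> {
  const out: Record<string, number> = {};
  for (const [name, metric] of Object.entries(res.record.metrics)) {
    out[name] = Number(metric.percentiles.p75);
  }
  return out;
}
```

Query the same origin before and after installing the tool, but remember CrUX aggregates over a trailing 28-day window, so expect changes to surface slowly.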

The architectural alternative

Client-side onboarding libraries avoid these CWV costs by architecture, not by optimization. There's no CDN to fetch from, no configuration to download, no DOM observer running on every page.

import { TourProvider, useTour } from '@tourkit/react';

function OnboardingWrapper({ children }: { children: React.ReactNode }) {
  return (
    <TourProvider
      tourId="welcome"
      steps={[
        { target: '#dashboard-nav', content: 'Navigate your workspace' },
        { target: '#create-button', content: 'Create your first project' },
      ]}
    >
      {children}
    </TourProvider>
  );
}

The CWV impact of this approach:

LCP: zero additional cost. The tour logic ships inside your existing bundle and is tree-shaken like any other dependency. No separate network request. No parse-time competition. Your LCP stays exactly where it was.

INP: same as any React component. Tour Kit renders inside your React tree. Event handlers are React event handlers. When a user clicks, there's no external DOM observer competing for the main thread. The interaction cost is identical to any other component in your app.

CLS: zero layout shifts from late injection. Tour Kit renders tooltips and overlays through React portals that are part of your component lifecycle. They mount when your component mounts, not on a CDN-dependent timer. No late-arriving banners, no surprise modals.

Tour Kit's core ships at under 8KB gzipped. You can verify this yourself on bundlephobia or by running npx size-limit in the repo. That transparency is the architectural difference — you can audit the cost before you ship it.

What you trade

Client-side libraries require engineering involvement. There's no visual editor, no drag-and-drop builder, no PM-accessible dashboard for creating tours without code. If your team doesn't have frontend engineers who can write JSX, an npm-installed library won't work for you.

That's a real trade-off. Appcues, Pendo, and UserGuiding exist because many teams need non-technical users to create and manage onboarding flows. The question is whether the CWV cost is worth the workflow benefit for your specific situation.

Decision framework

The right answer depends on who creates your onboarding flows and how close your pages are to CWV thresholds.

| Scenario | Recommendation | Why |
| --- | --- | --- |
| Engineering team owns onboarding | Client-side library | Zero CWV overhead, full bundle control, tree-shakeable |
| PM/marketing creates tours, CWV passing comfortably | SaaS tool with monitoring | Workflow benefit outweighs marginal CWV cost when you have headroom |
| PM/marketing creates tours, CWV near threshold | Client-side library or hybrid | The onboarding tool may be the marginal addition that tips you to failing |
| SEO-critical pages (landing, pricing, docs) | Client-side library | CWV directly affects ranking on these pages |
| Internal tools (admin panels, dashboards) | Either | Internal pages aren't indexed by Google; CWV ranking impact is zero |

If you're keeping your SaaS tool: mitigation strategies

Not every team can switch to a client-side library. If you're staying with Appcues, Pendo, or UserGuiding, here are concrete steps to reduce CWV impact.

1. Limit the script to pages that need it. Don't load the onboarding SDK globally. Use conditional loading based on route or user segment. If only 3 pages have active tours, only those 3 pages should pay the cost.

// Next.js (App Router): conditionally load the vendor script
'use client';

import Script from 'next/script';
import { usePathname } from 'next/navigation';

const ONBOARDING_PAGES = ['/dashboard', '/settings', '/onboarding'];

export default function Layout({ children }: { children: React.ReactNode }) {
  const pathname = usePathname();
  const needsOnboarding = ONBOARDING_PAGES.includes(pathname);

  return (
    <>
      {needsOnboarding && (
        <Script
          src="https://cdn.vendor.com/sdk.js"
          strategy="lazyOnload"
        />
      )}
      {children}
    </>
  );
}

2. Use lazyOnload strategy in Next.js. The lazyOnload strategy defers script loading until after the page becomes idle. This protects LCP and reduces INP impact during the critical first interactions.

3. Monitor CWV continuously. Use the web-vitals library or a service like DebugBear to track your 75th percentile CWV values. Set alerts when any metric approaches the "Needs Improvement" threshold.

4. Disable the tool for returning users. Most onboarding flows target new users. If a user has completed onboarding, stop loading the SDK entirely. This limits the CWV impact to a small percentage of page views.
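That gate can be a one-liner in front of the conditional script loading from step 1. A sketch assuming you persist a completion flag in localStorage (the key name `onboarding_complete` is our own convention, not a vendor API):

```typescript
// Minimal key-value interface so the gate is testable outside a browser;
// window.localStorage satisfies it.
interface KeyValueStore {
  getItem(key: string): string | null;
}

// Load the vendor SDK only for visitors who haven't finished onboarding.
function shouldLoadOnboardingSDK(storage: KeyValueStore): boolean {
  return storage.getItem("onboarding_complete") !== "true";
}

// In the browser: shouldLoadOnboardingSDK(window.localStorage)
console.log(shouldLoadOnboardingSDK({ getItem: () => "true" })); // false
console.log(shouldLoadOnboardingSDK({ getItem: () => null }));   // true
```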

Conclusion

Core Web Vitals are field metrics measured on real user devices. SaaS onboarding tools load JavaScript on every page, register event listeners, poll the DOM, and inject UI — all of which degrades LCP, INP, and CLS at the 75th percentile where Google evaluates your pages.

Appcues, Pendo, and UserGuiding all follow this pattern. None publishes their script payload size. The performance cost is real but not catastrophic on any single metric — the danger is the compound effect across all three CWV simultaneously, especially on mobile devices that are already near the threshold.

If your engineering team owns onboarding and CWV matters for your pages, a client-side library like Tour Kit eliminates the third-party overhead entirely. If you need a SaaS tool for workflow reasons, measure the impact, limit the script to pages that need it, and monitor your field data continuously.

The 75th percentile doesn't care about your average. It cares about the experience you deliver to the slower quarter of your users. That's where onboarding tools quietly push you from passing to failing.

Ready to try Tour Kit?

$ pnpm add @tourkit/react