Puppeteer Extra Plugin Stealth: Bypass Bot Detection 2026

You know the pattern. The script works on your laptop. It takes clean screenshots, clicks the cookie banner, maybe even logs in. Then you deploy it and the target site serves a CAPTCHA, a blank page, or a half-rendered layout with an “access denied” interstitial.

That failure usually isn’t random. Vanilla Puppeteer leaves a trail, and bot detectors know where to look.

For years, puppeteer extra plugin stealth has been the default answer when developers hit that wall. It’s popular for a reason. It can patch the obvious fingerprints and get a lot of jobs unstuck. But it also introduces a second problem that tutorials rarely spend enough time on. Once you rely on it in production, you inherit a maintenance job. You’re no longer just taking screenshots. You’re maintaining a stealth stack.

The Unseen Battle Against Bot Detection

The first painful lesson in browser automation is that a browser isn’t just a browser. To a target site, your session is a collection of hints. User agent. Navigator properties. WebGL output. Permissions behavior. Missing plugins. Timing. Interaction patterns. Session continuity.

Vanilla Puppeteer leaks enough of those hints that many sites can classify it quickly.

That’s where puppeteer-extra-plugin-stealth came from. It was first released on GitHub by berstend in 2018 and has grown into one of the most widely used stealth layers for Puppeteer, with over 5,000 stars and 400 forks on GitHub. The plugin bundles 11 standalone evasion modules that patch browser fingerprints before site scripts run, and the project reports 90 to 95 percent success on public bot detection tests where vanilla Puppeteer fails, according to the puppeteer-extra repository.

Why default Puppeteer gets flagged

A stock headless browser often looks “off” in ways that are easy to test:

  • navigator.webdriver leaks automation: This is one of the oldest and still most common checks.
  • User agent inconsistencies stand out: A headless signature or mismatched browser details can trip simple detectors.
  • Browser APIs don’t look human enough: Missing window.chrome details, fake-looking permissions behavior, and odd plugin lists all add up.
  • Graphics fingerprints differ: WebGL quirks and rendering differences can expose headless Chromium.

For a developer doing screenshot automation, this shows up in annoying ways. You don’t always get blocked hard. Sometimes you get softer failures that are worse to debug. A consent modal overlays the entire page. Search results are rearranged. Product cards don’t render. A geolocation wall appears. The screenshot succeeds, but the output is unusable.

The cat-and-mouse game is the work

A stealth plugin helps because it patches the browser before the page’s detection scripts run. That’s powerful. It’s also only one layer.

Public bot test pages are useful, but they’re the warm-up, not the match.

Sites that care about abuse prevention don’t stop at static fingerprint checks. They combine browser signals with network reputation, request patterns, and behavior. That’s why a script can pass a detector page and still fail in production.

If your automation work includes search results, pricing pages, or retail listings, it’s worth studying workflows built for structured extraction too. This Google Shopping scraper guide is useful because it shows how messy search environments become once anti-bot friction, dynamic content, and pagination pile up.

Stealth matters because default Puppeteer is easy to spot. But stealth isn’t the finish line. It’s the admission price for getting into the game.

Getting Started with Stealth: Installation and Configuration

The basic setup is simple enough that you can get a useful result in a few minutes. That simplicity is part of why the plugin spread so widely.

The short version is this. Install puppeteer-extra and puppeteer-extra-plugin-stealth, apply the plugin with puppeteer.use(StealthPlugin()), then launch as usual. That setup is reported to pass about 95 percent of 20+ fingerprint tests on pages like bot.sannysoft.com, while vanilla Puppeteer lands around 10 to 20 percent success on the same checks, according to Scrapfly’s Puppeteer Stealth guide.

Install the packages

npm i puppeteer-extra puppeteer-extra-plugin-stealth

Use puppeteer-extra, not plain puppeteer, for the instance that should load plugins.

Minimal working script

This is the cleanest copy-paste baseline I recommend for testing:

const puppeteer = require('puppeteer-extra');
const StealthPlugin = require('puppeteer-extra-plugin-stealth');

puppeteer.use(StealthPlugin());

(async () => {
  const browser = await puppeteer.launch({
    headless: true,
    args: ['--no-sandbox']
  });

  const page = await browser.newPage();
  await page.setViewport({ width: 1440, height: 900 });

  await page.goto('https://bot.sannysoft.com/', {
    waitUntil: 'networkidle2'
  });

  await page.screenshot({
    path: 'stealth-check.png',
    fullPage: true
  });

  await browser.close();
})();

What this actually does

A lot of examples stop at “it works.” The useful part is understanding the order of operations.

  1. puppeteer-extra wraps Puppeteer: This gives you plugin support without changing the way you write most browser code.

  2. puppeteer.use(StealthPlugin()) registers the evasions: The plugin injects patches before page scripts execute.

  3. page.evaluateOnNewDocument() does the heavy lifting: That’s the mechanism the stealth modules use to alter browser-exposed properties before detection code runs.

  4. Your browser launches normally: After the plugin is attached, your script still looks like standard Puppeteer code.
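
To make step 3 concrete, here is an illustrative sketch of the kind of patch a stealth module injects. This is not the plugin’s actual code, just the timing idea: the property is overridden before any page script can read it.

```javascript
// Illustrative only: a patch in the spirit of what the stealth modules inject.
// The real navigator.webdriver evasion is more thorough than this sketch.
function hideWebdriver(win) {
  // Override the getter on the prototype so page scripts see `undefined`
  Object.defineProperty(Object.getPrototypeOf(win.navigator), 'webdriver', {
    get: () => undefined,
    configurable: true,
  });
}

// In Puppeteer, a patch like this would be installed with:
// await page.evaluateOnNewDocument(hideWebdriver);
```

Because evaluateOnNewDocument runs before the page’s own scripts, the detection code never sees the original value.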

A more practical screenshot example

For screenshot work, I usually test on a real page after the detector page. A detector confirms patches loaded. A real page confirms your output is usable.

const puppeteer = require('puppeteer-extra');
const StealthPlugin = require('puppeteer-extra-plugin-stealth');

puppeteer.use(StealthPlugin());

async function capturePage() {
  const browser = await puppeteer.launch({
    headless: true,
    args: ['--no-sandbox']
  });

  const page = await browser.newPage();
  await page.setViewport({ width: 1600, height: 1200 });

  await page.goto('https://example.com', {
    waitUntil: 'networkidle2'
  });

  await page.screenshot({
    path: 'example-homepage.png',
    fullPage: true
  });

  await browser.close();
}

capturePage().catch(console.error);

If your end goal is long-page rendering, this guide on how to take full page screenshot is worth reading because full-page capture has its own failure modes beyond stealth, especially around lazy loading, sticky elements, and late script execution.
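
One common lazy-loading fix is to scroll through the page before capturing, so deferred content gets a chance to load. A minimal sketch: the step math is plain JS, and the Puppeteer call in the comment is the standard page.evaluate pattern.

```javascript
// Compute the scroll offsets needed to trigger lazy-loaded content before a
// full-page capture. Pure calculation, so it is easy to verify in isolation.
function scrollSteps(pageHeight, viewportHeight) {
  const steps = [];
  for (let y = 0; y < pageHeight; y += viewportHeight) {
    // Clamp so the last step lands on the bottom of the page, never past it
    steps.push(Math.max(0, Math.min(y, pageHeight - viewportHeight)));
  }
  return steps;
}

// usage before the screenshot:
// for (const y of scrollSteps(pageHeight, viewportHeight)) {
//   await page.evaluate((offset) => window.scrollTo(0, offset), y);
// }
```

Pausing briefly at each offset gives lazy-load observers time to fire before you capture.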

Don’t treat a green detector page as final proof

A successful detector run only tells you the basic fingerprint patches are doing something. It doesn’t tell you whether the target site will tolerate your IP reputation, your timing, or your lack of session history.

Practical rule: First verify on a detector page. Then immediately verify on the exact site type you care about, such as retail search, auth flows, or JS-heavy landing pages.

That second test catches most false confidence.

Common setup mistakes

The plugin is easy to add, but there are a few mistakes that waste hours:

  • Using the wrong import: If you launch plain puppeteer, the plugin never runs.
  • Registering after launch: Call .use() before launch().
  • Testing only local runs: A script that passes locally can still fail once deployed behind a different network profile.
  • Assuming screenshots equal success: The page might render a block page cleanly. That’s still a failed capture.
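
That last failure mode is cheap to guard against with a crude content check before you trust a capture. A heuristic sketch — the marker strings are assumptions you should tune to the block pages your targets actually serve:

```javascript
// Assumed marker strings; extend these to match the interstitials you see
const BLOCK_MARKERS = ['access denied', 'verify you are human', 'captcha', 'unusual traffic'];

// Returns true when the captured HTML looks like a block page, not real content
function looksBlocked(html) {
  const text = html.toLowerCase();
  return BLOCK_MARKERS.some((marker) => text.includes(marker));
}

// usage after navigation, before trusting the screenshot:
// const html = await page.content();
// if (looksBlocked(html)) { /* retry, rotate proxy, or flag the capture */ }
```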

The first win with puppeteer extra plugin stealth is fast. The hard part starts when you need that win every day, across many sites, under production load.

Beyond the Basics: Advanced Stealth Techniques

Most blog posts stop at puppeteer.use(StealthPlugin()). That’s fine for learning. It’s not enough for a screenshot pipeline that runs all day.

The production question isn’t “does stealth work?” It’s “which evasions do I need for this target, and what do they cost me?”

According to the plugin readme, enabling all evasions can add 20 to 50ms per page, and selectively disabling non-essential modules can reduce memory usage by 15 to 30 percent for high-volume screenshot jobs.

Selective evasions beat default cargo-culting

The default bundle is convenient. It’s also easy to overuse.

If you’re rendering screenshots of public marketing pages, you may not need every evasion module on every request. If a target only checks a narrow set of signals, loading the full bundle can mean more page overhead with no practical upside.

This is the basic pattern for a trimmed configuration:

const puppeteer = require('puppeteer-extra');
const StealthPlugin = require('puppeteer-extra-plugin-stealth');

puppeteer.use(
  StealthPlugin({
    enabledEvasions: new Set([
      'navigator.webdriver',
      'user-agent-override'
    ])
  })
);

That’s not a universal recommendation. It’s a starting point for controlled testing.

When minimal stealth makes sense

A minimal set can work well when:

  • The job is non-interactive: Static page capture, landing page archiving, or visual snapshots.
  • The site isn’t aggressively instrumented: Public content pages often don’t need the full stack.
  • You care about throughput: Small per-page overhead becomes real under sustained load.
  • You want fewer moving parts: Fewer patches can mean fewer odd edge cases.

When the full bundle is safer

Use the default bundle when:

  • You’re still diagnosing detection
  • The site runs lots of client-side checks
  • You don’t yet know which signal is causing the block
  • The target is known to inspect multiple browser surfaces

There’s no trophy for using fewer evasions. The point is to use the smallest setup that stays reliable.
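
One way to keep that discipline is to define named evasion profiles per target class and test them deliberately. A sketch: the profile names and module lists here are assumptions, though the module names follow the plugin’s evasion naming convention.

```javascript
// Hypothetical per-target profiles; module names follow the plugin's convention
const PROFILES = {
  minimal: ['navigator.webdriver', 'user-agent-override'],
  full: null, // null = let StealthPlugin use its default bundle
};

// Returns an enabledEvasions Set for a profile, or undefined for plugin defaults
function evasionsFor(profile) {
  const modules = PROFILES[profile];
  return modules ? new Set(modules) : undefined;
}

// usage:
// puppeteer.use(StealthPlugin({ enabledEvasions: evasionsFor('minimal') }));
```

Keeping the profiles in one place makes it obvious which targets run trimmed and which run the full bundle.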

Proxies are not optional on harder targets

Stealth patches the browser environment. It does not solve IP reputation.

That distinction matters. A target can dislike your browser fingerprint, your network identity, or both. If your browser looks human but your traffic arrives from an obviously suspicious route, you’ll still get blocked.

The practical pattern is to pair stealth with proxy rotation:

const browser = await puppeteer.launch({
  headless: true,
  args: [
    '--no-sandbox',
    '--proxy-server=http://proxy-host:proxy-port'
  ]
});

Then authenticate if needed:

await page.authenticate({
  username: 'proxy-username',
  password: 'proxy-password'
});
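
Rotation itself doesn’t need to be fancy. A round-robin pool chosen per browser launch is a reasonable baseline; a minimal sketch, assuming you supply your own proxy URLs:

```javascript
// Round-robin proxy pool; swap in weighted or health-checked selection later
function makeProxyPool(proxies) {
  let index = 0;
  return {
    next() {
      const proxy = proxies[index % proxies.length];
      index += 1;
      return proxy;
    },
  };
}

// usage per launch:
// const pool = makeProxyPool(['http://proxy-a:8080', 'http://proxy-b:8080']);
// args: ['--no-sandbox', `--proxy-server=${pool.next()}`]
```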

Good operators also keep browser coherence in mind. Language, timezone, geolocation, and browsing behavior should make sense together. If they don’t, you’ve just replaced one obvious fingerprint with a more subtle mismatch.

Human behavior matters on interactive flows

Stealth hides certain machine fingerprints. It doesn’t make your session behave like a person.

That gap becomes obvious on login flows, checkout paths, conferencing apps, and sites that watch for scroll depth, pointer movement, or focus changes. If your browser opens, clicks instantly, never scrolls, and closes with identical timing every run, you’ve created a behavioral signature.

Simple improvements include:

  • Randomized waits: Avoid perfectly repeatable delays.
  • Mouse movement libraries: Useful when a target watches cursor behavior.
  • Typing delays: Immediate full-field input looks robotic.
  • Session continuity: Reusing context can matter more than one more stealth patch.
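
The first two items above mostly reduce to adding jitter. A minimal sketch of randomized pauses; the base and spread values are arbitrary starting points, not tuned numbers:

```javascript
// Returns a randomized delay in [baseMs, baseMs + spreadMs)
function jitter(baseMs, spreadMs) {
  return baseMs + Math.floor(Math.random() * spreadMs);
}

// Await a human-ish pause between actions
function humanPause(baseMs = 400, spreadMs = 600) {
  return new Promise((resolve) => setTimeout(resolve, jitter(baseMs, spreadMs)));
}

// usage between interactions:
// await humanPause();
// await page.type('#email', address, { delay: jitter(40, 80) }); // per-keystroke delay
```

Puppeteer’s page.type already accepts a per-keystroke delay option, so jitter slots in without extra machinery.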

For broader operational hygiene, this post on web scraping best practices is useful because it treats anti-detection as one part of a system, not as a magic browser flag.

The teams that stay stable longest usually aren’t the ones with the biggest stealth bundle. They’re the ones that keep browser, network, and behavior coherent.

CAPTCHA and anti-bot layering

On tougher targets, stealth often becomes one layer in a stack rather than the whole strategy. Developers commonly add CAPTCHA handling, request throttling, session reuse, and retry logic around it.
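
Retry logic in particular is worth writing once and reusing. A sketch of exponential backoff around a capture attempt; the attempt count and delays are placeholders to tune:

```javascript
// Retry an async operation with exponential backoff; rethrows the last error
async function withRetry(fn, { attempts = 3, baseDelayMs = 500 } = {}) {
  let lastError;
  for (let attempt = 0; attempt < attempts; attempt += 1) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // 500ms, 1000ms, 2000ms, ... between attempts
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}

// usage:
// await withRetry(() => page.goto(url, { waitUntil: 'networkidle2' }));
```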

That works, but each added layer increases the maintenance burden:

| Layer | What it helps with | What it costs |
| --- | --- | --- |
| Stealth plugin | Browser fingerprint leaks | Extra debugging and performance overhead |
| Proxies | IP reputation and rate limits | Provider management and rotation logic |
| Behavioral simulation | Interaction analysis | More code and slower runs |
| CAPTCHA solving | Challenge pages | Added complexity and ethical review |

This is the part many tutorials miss. Advanced stealth isn’t one trick. It’s a stack of compromises. Every new defense you add can increase reliability on one site while making another site harder to debug.

Troubleshooting Common Stealth Detection Issues

Sooner or later, you’ll hit a site where stealth is loaded correctly and you still lose. That’s normal.

The mistake is to keep swapping flags and plugins without isolating the failure mode. Debugging stealth works better when you treat it like incident response.

Screenshot from https://bot.sannysoft.com/

A useful reality check comes from real-world issue reports. Stealth can still be detected on platforms like Google Meet because of behavioral analysis, and while it performs well on e-commerce targets, its efficacy can drop below 70 percent on interactive web apps. Friendly Captcha also notes that browser vendors in 2026 have improved headless fingerprints, reducing stealth efficacy by 25 percent on JS-heavy SPAs without extra defenses, according to its overview of puppeteer-extra-plugin-stealth. That future-dated note should be treated as a projection in the source, but the practical lesson is already visible today. Interactive apps are harder.

Start with a controlled baseline

Before touching the target site again, verify these basics:

  1. Confirm the plugin is active: Use puppeteer-extra, register the plugin before launch, and test on a detector page.

  2. Compare local and deployed runs: If local passes and prod fails, suspect environment differences before blaming the plugin.

  3. Log what the page is serving: Save HTML, screenshot the interstitial, and capture console output.

  4. Check whether the page is blocked or just incomplete: A half-rendered app can be a rendering timing issue, not bot detection.

Read detector pages correctly

bot.sannysoft.com is useful because it gives fast visual feedback. But don’t stare only at the green and red boxes. Treat it as a map of browser surfaces.

If webdriver checks are clean but the target still blocks you, look elsewhere:

  • Behavioral checks: No movement, no scrolling, deterministic actions
  • Session issues: Brand-new profile every run
  • Network profile: IP reputation or geography mismatch
  • Application flow checks: Auth state, challenge scripts, hidden validation requests

A practical debugging loop

When a target starts failing, use a repeatable sequence:

  • Snapshot the failing state: Save screenshot, HTML, and response metadata.
  • Reduce variables: Turn off custom patches you added beyond stealth.
  • Retest with a persistent context: Some sites dislike fresh profiles every time.
  • Slow the flow down: Add realistic waits around navigation and interaction.
  • Inspect blocked resources: Challenge scripts, analytics, or anti-bot endpoints often reveal where the decision happens.

What failure often looks like by site type

Different targets fail differently. That pattern helps narrow your search.

| Site type | Typical failure signal | Most likely issue |
| --- | --- | --- |
| Public e-commerce pages | Interstitial, soft block, product grid missing | Fingerprint plus IP reputation |
| Login and account pages | Challenge loop, forced verification | Session risk and behavior |
| Interactive apps | Works briefly, then gets kicked out | Behavioral analysis |
| JS-heavy SPAs | Blank shell, partial hydration, script errors | Runtime checks or incomplete rendering |

If a site fails after a few minutes instead of at first paint, stop obsessing over navigator.webdriver. The site is probably scoring behavior, not just fingerprint.

Don’t ignore your own code

Some “stealth failures” are self-inflicted.

A few common examples:

  • Over-aggressive resource blocking: You block a script that the app needs to render.
  • Conflicting launch flags: A suspicious launch profile can undo plugin benefits.
  • Viewport weirdness: Unusual dimensions can trigger responsive edge cases and suspicion.
  • Broken waiting strategy: Screenshotting too early makes a healthy page look blocked.
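
For the waiting-strategy problem, Puppeteer’s page.waitForSelector covers the common case; a generic poll helper is useful when readiness isn’t a single selector. A sketch — the selector in the usage comment is a placeholder:

```javascript
// Poll an async condition until it returns true or the deadline passes
async function settled(check, { timeoutMs = 5000, intervalMs = 100 } = {}) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (await check()) return true;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  return false;
}

// usage before a screenshot:
// const ready = await settled(async () => Boolean(await page.$('.product-card')));
// if (!ready) { /* treat as a failed capture, not a blocked one */ }
```

Returning a boolean instead of throwing lets the caller distinguish “page never settled” from a hard navigation error.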

The fastest way to improve at this is to keep evidence from every failure. Save the screenshot. Save the DOM. Save console logs. Save the request sequence. Once you do that consistently, patterns show up fast.

The Build vs Buy Dilemma: Your Time vs a Reliable API

There’s a point where stealth work stops being an engineering advantage and starts becoming operational drag.

For a side project or an internal tool, that trade can be fine. You learn a lot by building your own stack. For a production workflow that needs clean screenshots on demand, the economics change. You’re not just running Puppeteer. You’re maintaining browsers, launch flags, evasion settings, proxy behavior, retries, failure handling, and all the weird site-specific exceptions nobody puts in the README.

The hidden cost of DIY isn’t the code

The first version of a stealth pipeline is usually satisfying. A few packages, a script, some detector tests, done.

The expensive part comes later:

  • A target changes behavior: You now have a production incident tied to one website’s anti-bot rollout.

  • A previously stable render starts capturing cookie walls: Your screenshot pipeline didn’t technically fail, but the output is useless.

  • One geography works and another doesn’t: You end up debugging proxies, headers, locale coherence, and target-specific delivery.

  • The team wants PDFs or scrolling video too: What started as “take a screenshot” becomes a rendering platform.

That’s why the build-vs-buy decision should be made around reliability requirements, not developer pride.

Build makes sense when control matters most

DIY still has a place.

Use your own Puppeteer stack when:

  • You need custom browser behavior
  • The target set is small and well understood
  • You’re experimenting or learning
  • You need deep page instrumentation, not just outputs

If you’re comparing browser automation stacks more broadly, this overview of Playwright vs Puppeteer is a useful sanity check before you commit more engineering time to one branch of the ecosystem.

Buy makes sense when screenshots are the product

A managed screenshot API becomes the obvious choice when your team cares about output, not the browser internals.

That usually means:

| Requirement | DIY Puppeteer stack | Managed screenshot API |
| --- | --- | --- |
| Clean screenshots | Possible, but fragile | Core product behavior |
| PDFs and scrolling video | More custom work | Usually built in |
| Maintenance burden | Yours | Provider’s |
| Site-specific rendering quirks | You debug them | Provider abstracts them |
| Onboarding new developers | Slower | Faster |

ScreenshotEngine fits the practical need well. It gives teams a clean API for screenshots, scrolling video, and PDF output without forcing them to own every browser-level failure mode. That matters if your job is visual regression, SERP monitoring, compliance archiving, or preview generation. In those contexts, the business need is “give me a clean, consistent render,” not “let me spend another day tuning headless evasions.”

The switch usually happens after the same set of pain points

Teams don’t switch because they suddenly dislike Puppeteer. They switch because of recurring friction:

  • The screenshot job has become business-critical
  • Failures now affect customers or internal reporting
  • The team keeps firefighting broken renders
  • Browser maintenance is stealing time from product work

If you’re evaluating the broader scraping and extraction market at the same time, this comparison of a ScraperAPI alternative is helpful because it frames a similar decision. Once anti-bot infrastructure becomes the hard part, service quality matters more than raw feature lists.

Build when browser control is the point. Buy when the output is the point.

That’s the dividing line I’ve seen hold up best.

A lot of developers stay in DIY mode longer than they should because the first prototype works. Production reliability is a different standard. If your team needs dependable screenshots, PDFs, or scrolling captures every day, “we can probably patch this” stops being a satisfying answer.

Conclusion: When to Use Puppeteer Stealth and When to Scale

puppeteer extra plugin stealth is still one of the most useful tools in the Puppeteer ecosystem. It solves a real problem. Default Puppeteer is easy to fingerprint, and stealth gives you a practical way to patch the obvious leaks.

For learning, internal automation, targeted screenshot jobs, and small production workloads, it’s often the right tool. You keep control. You can inspect what’s happening. You can tune the browser to your needs.

But control comes with ownership.

Once your workflow depends on stable renders across many sites, stealth becomes only one piece of a larger system. You start dealing with proxy quality, behavioral signals, session continuity, cookie banners, challenge pages, and site-specific regressions. At that point, the browser is no longer just a tool in your stack. It is the stack.

The clearest decision framework is simple:

  • Use Puppeteer Stealth when you need flexibility, debugging access, and hands-on control.
  • Keep the setup lean, test against your real targets, and don’t assume detector-page success means production safety.
  • Switch to a managed API when reliability matters more than browser tinkering.

The mistake isn’t using stealth. The mistake is pretending stealth removes the maintenance burden. It doesn’t. It shifts that burden onto your team.

If that trade still makes sense, keep building. If it doesn’t, scale with something designed to deliver clean output rather than endless browser surgery.

Frequently Asked Questions about Puppeteer Stealth

Is puppeteer extra plugin stealth still worth using

Yes, for the right jobs.

It’s still a strong baseline when you need to reduce obvious browser fingerprint leaks in Puppeteer. It’s especially useful for public pages, internal tooling, QA tasks, and controlled automation environments. It becomes less reliable as targets lean harder on behavior, session history, and network-level risk scoring.

Can it make Puppeteer undetectable

No. That’s the wrong goal.

A stealth plugin can reduce detection. It can’t guarantee invisibility against layered anti-bot systems. If a site combines fingerprinting, traffic reputation, challenge flows, and interaction analysis, the plugin is only one part of the puzzle.

Should I enable every evasion by default

Not always.

The default bundle is a solid starting point. But for high-volume screenshot work, selective evasions can be the better choice if testing shows you don’t need the full set. Fewer enabled patches can mean less overhead and fewer odd compatibility issues.

What’s the biggest mistake teams make with stealth

They stop at browser patches.

A lot of teams treat stealth as a silver bullet, then get confused when a target still blocks them. In practice, the hardest issues often involve behavior, session continuity, or network identity rather than one missing browser property.

Is Playwright better than Puppeteer for stealth

Sometimes, but not automatically.

The browser library matters less than many people think. Operational quality usually comes down to the full system around it: launch profile, session reuse, proxy strategy, behavior, rendering consistency, and debugging discipline. Changing libraries can help. It won’t remove the underlying anti-bot problem by itself.

Is using Puppeteer Stealth legal

That depends on your jurisdiction, your use case, the target site’s terms, and the data or content involved.

For legitimate use cases such as testing, compliance capture, internal QA, archival, and approved data workflows, teams still need legal review and a clear policy. Don’t let a technical workaround substitute for compliance judgment.

When should I stop debugging and use a service instead

Use a service when browser maintenance is taking more time than the output is worth.

That usually happens when your screenshots become part of a product, a customer-facing workflow, scheduled reporting, or a team-wide system. If engineers are spending more time fixing bot issues than shipping features, you’ve crossed the line where managed infrastructure starts paying for itself.


If you need clean website screenshots without maintaining your own stealth stack, ScreenshotEngine is the practical shortcut. It gives you a fast screenshot API for images, scrolling video, and PDF output through a simple interface, so your team can ship reliable captures instead of babysitting headless browsers.