 

Managing puppeteer for memory and performance

I'm using Puppeteer to scrape some pages, but I'm curious about how to manage this in production for a Node app. I'll be scraping up to 500,000 pages in a day, but these scrape jobs happen at random intervals, so it's not a single queue that I can plow through.

What I'm wondering is: is it better to open a browser, go to the page, then close the browser between each job? I assume that would be a lot slower, but maybe it handles memory better?

Or do I open one global browser when the app boots, go to the page, and have some way to dump that page when I'm done with it (e.g. closing all tabs in Chrome without closing Chrome itself), then just open a new page when I need it? This way seems like it would be faster, but could potentially eat up lots of memory.
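To make the two options concrete, here's roughly what I mean in plain Puppeteer (just a sketch; the function names are made up):

const puppeteer = require('puppeteer');

// Option 1: launch and close a browser for every job (slower, but memory is
// reclaimed after each job)
async function scrapeWithFreshBrowser(url) {
    const browser = await puppeteer.launch();
    try {
        const page = await browser.newPage();
        await page.goto(url);
        // ... extract data here ...
    } finally {
        await browser.close();
    }
}

// Option 2: keep one global browser open and only open/close pages (faster,
// but the Chromium process keeps running and can accumulate memory)
let sharedBrowserPromise = null;
async function scrapeWithSharedBrowser(url) {
    if (!sharedBrowserPromise) {
        sharedBrowserPromise = puppeteer.launch();
    }
    const browser = await sharedBrowserPromise;
    const page = await browser.newPage();
    try {
        await page.goto(url);
        // ... extract data here ...
    } finally {
        await page.close();
    }
}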

I've never worked with this library, especially in a production environment, so I'm not sure if there are things I should watch out for.

jeremywoertink asked Aug 22 '18


1 Answer

You probably want to create a pool of multiple Chromium instances with independent browsers. The advantage of that is that when one browser crashes, all the other jobs can keep running. The advantage of a single browser (with multiple pages) is slightly lower memory and CPU usage, and cookies are shared between your pages.
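To illustrate the resilience point (this is only a sketch, not how any particular library implements it; the function name is made up): with independent browsers you can relaunch a crashed Chromium instance for one worker without touching the others.

const puppeteer = require('puppeteer');

// Illustrative only: keep one independent browser per worker and relaunch it
// whenever Chromium dies, so the other workers keep running.
function createWorkerBrowser() {
    let browserPromise = launch();

    async function launch() {
        const browser = await puppeteer.launch();
        browser.on('disconnected', () => {
            // Chromium exited (crash, OOM kill, ...): start a replacement.
            browserPromise = launch();
        });
        return browser;
    }

    // Callers always await the current (possibly relaunched) instance.
    return () => browserPromise;
}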

Pool of puppeteer instances

The library puppeteer-cluster (disclaimer: I'm the author) creates a pool of browsers or pages for you. It takes care of creation, error handling, browser restarting, etc., so you can simply queue jobs/URLs and the library handles everything else.

Code sample

const { Cluster } = require('puppeteer-cluster');

(async () => {
    const cluster = await Cluster.launch({
        concurrency: Cluster.CONCURRENCY_BROWSER, // use one browser per worker
        maxConcurrency: 4, // cluster with four workers
    });

    // Define a task to be executed for your data (put your "crawling code" in here)
    await cluster.task(async ({ page, data: url }) => {
        await page.goto(url);
        // ...
    });

    // Queue URLs when the cluster is created
    cluster.queue('http://www.google.com/');
    cluster.queue('http://www.wikipedia.org/');

    // Or queue URLs anytime later
    setTimeout(() => {
        cluster.queue('http://...');
    }, 1000);
})();

You can also queue functions directly in case you have different tasks to do. Normally you would close the cluster after you are finished via cluster.close(), but you are free to just let it stay open. You can find another example in the repository of a cluster that fetches data when a request comes in.
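For example, a one-off job with its own task function, followed by a shutdown, looks roughly like this (continuing the snippet above, where cluster is in scope; cluster.idle() is my assumption of how to wait for the queue to drain, so check the library's docs):

// Queue a one-off task function instead of a URL
cluster.queue(async ({ page }) => {
    await page.goto('http://www.wikipedia.org/');
    // ... different crawling logic for this job ...
});

// When you are done, wait for the queued jobs and shut everything down.
// (cluster.idle() is assumed here; see the puppeteer-cluster docs.)
await cluster.idle();
await cluster.close();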

Thomas Dondorf answered Sep 19 '22