Large site crawling stops without reason

We are currently trying to crawl a website with around 11k pages, and the crawl stops after a couple of thousand pages. The site runs on a dedicated managed server at Hetzner. Unfortunately no error appears in the logs; the crawling simply stops. We then have to delete all jobs, re-add them manually, and call the cron hook wp2static_process_queue by hand. Any ideas what the issue could be?
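If the server offers system cron, the manual step of firing wp2static_process_queue can be scripted with WP-CLI instead of triggering it from the admin. A sketch, assuming WP-CLI is installed and the WordPress root is /var/www/html (both are placeholders for your setup):

```
# Hypothetical crontab entry: run the wp2static_process_queue hook every 5 minutes
# (assumes WP-CLI is available on the PATH and the site lives in /var/www/html)
*/5 * * * * cd /var/www/html && wp cron event run wp2static_process_queue >/dev/null 2>&1
```

Because this runs on the server itself, it also sidesteps any HTTP-level password protection in front of /wp-cron.php.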

Edit: max_execution_time is Unlimited on the Diagnostics page. All other parameters also look fine. The PHP memory limit is 4096 MB but could be increased if that would help.

There also seems to be an issue executing the crons in an htaccess password-protected environment: the cron never starts when /wp-cron.php is called, so we have to trigger it manually via the plugin WP Control.
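That is expected: WordPress triggers wp-cron.php via an HTTP loopback request to the site itself, which hits the same basic-auth wall as any visitor. One common workaround is to exempt that single file from authentication. A sketch for Apache 2.4 syntax, assuming the auth directives live in the site's .htaccess (paths and realm name are placeholders):

```apache
# Let requests for wp-cron.php through without basic auth (Apache 2.4)
SetEnvIf Request_URI "/wp-cron\.php$" noauth

AuthType Basic
AuthName "Restricted"
AuthUserFile /path/to/.htpasswd

# Multiple Require directives at the same level are satisfied if ANY matches
Require valid-user
Require env noauth
```

With this in place, loopback requests to /wp-cron.php succeed while the rest of the site stays password protected.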

(originally posted by @mediaworker on forum.wp2static.com during forum migration)

For very long-running exports with WP2Static's current version, you'll also need to ensure your webserver and PHP-FPM have very long connection/idle timeouts, i.e. how long they wait for activity from the client or another process before giving up.
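As a sketch of the kind of settings involved, for Nginx proxying to PHP-FPM that could look like the following (the socket path and timeout values are examples, not recommendations, and must be adjusted to your stack):

```nginx
# nginx: allow long-running PHP requests up to an hour (example values)
location ~ \.php$ {
    fastcgi_pass unix:/run/php/php-fpm.sock;  # placeholder socket path
    fastcgi_read_timeout 3600s;  # wait up to 1h for output from PHP-FPM
    fastcgi_send_timeout 3600s;  # wait up to 1h when sending to PHP-FPM
}
```

On the PHP-FPM side, the pool's request_terminate_timeout (and PHP's max_execution_time, already Unlimited here) would need to allow the same duration.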

If you don’t have luck adjusting those settings, please let me know which webserver and PHP-FPM setup you’re using, and I’ll try to provide settings that won’t time out for an hour-long task.

Cheers,

Leon