Scrapy is a popular framework for crawling website data. Sometimes a spider stops working, and this can have several causes. Below is a common case and how to address it:
1. Problem background
A user ran into a problem while crawling with Scrapy 0.20.0 (the version shown in the log below): the spider stops working after running for a while, but after a restart it runs again for a while before stopping once more.
Here is the relevant log output the user saw when the problem occurred:
scrapy crawl basketsp17
2013-11-22 03:07:15+0200 [scrapy] INFO: Scrapy 0.20.0 started (bot: basketbase)
2013-11-22 03:07:15+0200 [scrapy] DEBUG: Optional features available: ssl, http11, boto, django
2013-11-22 03:07:15+0200 [scrapy] DEBUG: Overridden settings: {'NEWSPIDER_MODULE': 'basketbase.spiders', 'SPIDER_MODULES': ['basketbase.spiders'], 'BOT_NAME': 'basketbase'}
2013-11-22 03:07:16+0200 [scrapy] DEBUG: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2013-11-22 03:07:16+0200 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2013-11-22 03:07:16+0200 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2013-11-22 03:07:16+0200 [scrapy] DEBUG: Enabled item pipelines:
2013-11-22 03:07:16+0200 [basketsp17] INFO: Spider opened
2013-11-22 03:07:16+0200 [basketsp17] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2013-11-22 03:07:16+0200 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2013-11-22 03:07:16+0200 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2013-11-22 03:07:16+0200 [basketsp17] DEBUG: Redirecting (301) to <GET http://www.euroleague.net/main/results/by-date> from <GET http://www.euroleague.net/main/results/by-date/>
2013-11-22 03:07:16+0200 [basketsp17] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date> (referer: None)
2013-11-22 03:07:16+0200 [basketsp17] INFO: Closing spider (finished)
2013-11-22 03:07:16+0200 [basketsp17] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 489,
'downloader/request_count': 2,
'downloader/request_method_count/GET': 2,
'downloader/response_bytes': 12181,
'downloader/response_count': 2,
'downloader/response_status_count/200': 1,
'downloader/response_status_count/301': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2013, 11, 22, 1, 7, 16, 471690),
'log_count/DEBUG': 8,
'log_count/INFO': 3,
'response_received_count': 1,
'scheduler/dequeued': 2,
'scheduler/dequeued/memory': 2,
'scheduler/enqueued': 2,
'scheduler/enqueued/memory': 2,
'start_time': datetime.datetime(2013, 11, 22, 1, 7, 16, 172756)}
2013-11-22 03:07:16+0200 [basketsp17] INFO: Spider closed (finished)
2. Solution
After analysis, the problem likely lies in one of the following areas:
- The website has anti-scraping measures, so the crawler gets blocked after running for a while.
- Scrapy hits an error while handling an HTTP response, which keeps the spider from working properly.
- The spider code itself has a bug that makes the crawl stop under certain conditions.
Given these possible causes, the user can try the following solutions (see the settings sketch after this list):
- Change the crawler's user agent or IP address to get around the site's anti-scraping measures.
- Add a retry mechanism so that requests are retried when HTTP errors occur.
- Review the spider code for bugs and fix them accordingly.
After these steps, the problem will likely be resolved.
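As a minimal sketch of the first two suggestions, the user agent, retry behavior, and request rate can all be adjusted in the project's settings.py. The concrete values below are illustrative assumptions, not settings taken from the original project:

# settings.py -- illustrative values only
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'   # present a realistic browser UA
RETRY_ENABLED = True
RETRY_TIMES = 3                                            # retry each failed request a few times
RETRY_HTTP_CODES = [408, 429, 500, 502, 503, 504]          # status codes worth retrying
DOWNLOAD_DELAY = 1.0                                       # slow down to reduce the chance of a ban
CONCURRENT_REQUESTS_PER_DOMAIN = 4
AUTOTHROTTLE_ENABLED = True                                 # let Scrapy adapt the delay to server load

Changing the IP address usually means routing requests through a proxy, for example via the http_proxy environment variable or the proxy request meta key; which approach fits depends on the user's infrastructure.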
Example spider code
Here is a simple example of a Scrapy spider that follows links on a page:
import scrapy
from scrapy.crawler import CrawlerProcess

class MySpider(scrapy.Spider):
    name = 'myspider'
    start_urls = ['http://example.com']

    def parse(self, response):
        # Log every response so stalls are easy to spot in the output
        self.logger.info(f'Got response from {response.url}')
        # Follow every link on the page and parse it with this same callback
        for link in response.css('a::attr(href)').getall():
            if link:  # skip empty hrefs
                yield response.follow(link, self.parse)

if __name__ == "__main__":
    process = CrawlerProcess(settings={
        "LOG_LEVEL": "DEBUG",
    })
    process.crawl(MySpider)
    process.start()
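If the crawl keeps stopping without an obvious error, attaching an errback to each request makes download failures show up explicitly in the log. The sketch below assumes a hypothetical on_error handler; it is not part of the original spider:

import scrapy

class DiagnosticSpider(scrapy.Spider):
    name = 'diagnostic'
    start_urls = ['http://example.com']

    def start_requests(self):
        for url in self.start_urls:
            # errback fires on DNS errors, timeouts, and responses filtered by HttpErrorMiddleware
            yield scrapy.Request(url, callback=self.parse, errback=self.on_error)

    def parse(self, response):
        self.logger.info(f'OK {response.status} {response.url}')

    def on_error(self, failure):
        # 'failure' is a twisted Failure describing what went wrong with the request
        self.logger.error(f'Request failed: {failure!r}')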
By checking the network connection, proxy settings, spider code, Scrapy configuration, and log output, you can track down why the crawler stopped and fix it accordingly. If the problem still is not solved, ask for help in the Scrapy community or forums.