python - How to solve the scrapy-redis idle-running problem?
Problem description
In the scrapy-redis framework, the requests stored in Redis under xxx:requests have all been crawled, but the program keeps running. How can the program be stopped automatically instead of idling forever?
2017-07-03 09:17:06 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-07-03 09:18:06 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
The program can be stopped by calling engine.close_spider(spider, 'reason'). For example, the scheduler's next_request can close the spider when the Redis queue comes back empty:
def next_request(self):
    block_pop_timeout = self.idle_before_close
    request = self.queue.pop(block_pop_timeout)
    if request and self.stats:
        self.stats.inc_value('scheduler/dequeued/redis', spider=self.spider)
    if request is None:
        # The Redis queue returned nothing, so ask the engine to close the spider.
        self.spider.crawler.engine.close_spider(self.spider, 'queue is empty')
    return request
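Rather than patching the installed scrapy_redis package, the same override can live in the project as a scheduler subclass wired in through settings. A minimal sketch, assuming an illustrative module path myproject.scheduler and class name IdleClosingScheduler (neither is part of scrapy-redis):

from scrapy_redis.scheduler import Scheduler

class IdleClosingScheduler(Scheduler):
    """Scheduler that closes the spider once the Redis requests queue comes back empty."""

    def next_request(self):
        block_pop_timeout = self.idle_before_close
        request = self.queue.pop(block_pop_timeout)
        if request and self.stats:
            self.stats.inc_value('scheduler/dequeued/redis', spider=self.spider)
        if request is None:
            # Nothing left in xxx:requests -- ask the engine to stop instead of idling.
            self.spider.crawler.engine.close_spider(self.spider, 'queue is empty')
        return request

# settings.py -- point scrapy-redis at the subclass instead of the stock scheduler:
# SCHEDULER = 'myproject.scheduler.IdleClosingScheduler'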
One thing is still unclear: when the spider is closed via engine.close_spider(spider, 'reason'), a few errors appear before it actually shuts down.
# Normal shutdown
2017-07-03 18:02:38 [scrapy.core.engine] INFO: Closing spider (queue is empty)
2017-07-03 18:02:38 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'finish_reason': 'queue is empty',
 'finish_time': datetime.datetime(2017, 7, 3, 10, 2, 38, 616021),
 'log_count/INFO': 8,
 'start_time': datetime.datetime(2017, 7, 3, 10, 2, 38, 600382)}
2017-07-03 18:02:38 [scrapy.core.engine] INFO: Spider closed (queue is empty)

# A few errors like the one below still appear before the spider closes. Does the spider
# start several threads when it launches, so that after one of them closes the spider the
# others can no longer find it and raise this error?
Unhandled Error
Traceback (most recent call last):
  File "D:/papp/project/launch.py", line 37, in <module>
    process.start()
  File "D:\Program Files\python3\lib\site-packages\scrapy\crawler.py", line 285, in start
    reactor.run(installSignalHandlers=False)  # blocking call
  File "D:\Program Files\python3\lib\site-packages\twisted\internet\base.py", line 1243, in run
    self.mainLoop()
  File "D:\Program Files\python3\lib\site-packages\twisted\internet\base.py", line 1252, in mainLoop
    self.runUntilCurrent()
--- <exception caught here> ---
  File "D:\Program Files\python3\lib\site-packages\twisted\internet\base.py", line 878, in runUntilCurrent
    call.func(*call.args, **call.kw)
  File "D:\Program Files\python3\lib\site-packages\scrapy\utils\reactor.py", line 41, in __call__
    return self._func(*self._a, **self._kw)
  File "D:\Program Files\python3\lib\site-packages\scrapy\core\engine.py", line 137, in _next_request
    if self.spider_is_idle(spider) and slot.close_if_idle:
  File "D:\Program Files\python3\lib\site-packages\scrapy\core\engine.py", line 189, in spider_is_idle
    if self.slot.start_requests is not None:
builtins.AttributeError: 'NoneType' object has no attribute 'start_requests'
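As an aside, the AttributeError appears to fire because a _next_request call scheduled earlier runs after close_spider has already torn down the engine's slot, so self.slot is None by the time it executes; it is not a sign of several spiders or threads. One way to avoid closing from inside the scheduler at all is to hook the engine's own spider_idle signal and close after the spider has been idle a few times. This is only a sketch: IdleClosingExtension, the module path and the MAX_IDLE_COUNT setting are made-up names, not part of Scrapy or scrapy-redis.

from scrapy import signals
from scrapy.exceptions import DontCloseSpider

class IdleClosingExtension:
    """Close the spider after the engine has reported it idle several times in a row."""

    def __init__(self, crawler, max_idle_count):
        self.crawler = crawler
        self.max_idle_count = max_idle_count
        self.idle_count = 0
        crawler.signals.connect(self.spider_idle, signal=signals.spider_idle)

    @classmethod
    def from_crawler(cls, crawler):
        # MAX_IDLE_COUNT is an assumed custom setting with an arbitrary default.
        return cls(crawler, crawler.settings.getint('MAX_IDLE_COUNT', 5))

    def spider_idle(self, spider):
        self.idle_count += 1
        if self.idle_count < self.max_idle_count:
            # Redis may still receive new requests, so keep the spider alive for now.
            raise DontCloseSpider
        self.crawler.engine.close_spider(spider, 'queue is empty')

# settings.py (illustrative):
# EXTENSIONS = {'myproject.extensions.IdleClosingExtension': 500}
# MAX_IDLE_COUNT = 5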
Answers
Answer 1: How do you know that the queued requests have all been crawled? That is something you need to define yourself. If the condition is simple, you can use the built-in extension to shut the spider down:
scrapy.contrib.closespider.CloseSpider
CLOSESPIDER_TIMEOUT
CLOSESPIDER_ITEMCOUNT
CLOSESPIDER_PAGECOUNT
CLOSESPIDER_ERRORCOUNT
http://scrapy-chs.readthedocs...
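These are ordinary Scrapy settings for the built-in CloseSpider extension. A minimal settings.py sketch, with placeholder numbers rather than recommendations:

# settings.py -- each threshold is disabled when left unset or set to 0
CLOSESPIDER_TIMEOUT = 3600      # close the spider after it has been open this many seconds
CLOSESPIDER_ITEMCOUNT = 10000   # close after this many items have been scraped
CLOSESPIDER_PAGECOUNT = 5000    # close after this many responses have been crawled
CLOSESPIDER_ERRORCOUNT = 10     # close after this many errors have been raised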
