python - Asking for help with a Scrapy pipeline error
Problem description
I don't really understand how data gets passed around in Scrapy, and I've been stuck on this for nearly half a month. I've gone through a lot of material and still don't get it; my fundamentals are weak, so I'm here asking for help! Sticking to Scrapy's defaults with no customization: what format should the thing a spider returns have? A single dict like {a: 1, b: 2, ...}, or a list of dicts like [{a: 1, aa: 11}, {b: 2, bb: 22}, ...]? And where does the returned object go? Is it the item parameter in the code below?
class pipeline:
    def process_item(self, item, spider):
        ...
I know I'm a beginner, but I really want to learn and would appreciate any help! My code is below; please point out its flaws.
spider:
# -*- coding: utf-8 -*-
import re

import scrapy

from pm25.items import Pm25Item


class InfospSpider(scrapy.Spider):
    name = 'infosp'
    allowed_domains = ['pm25.com']
    start_urls = [
        'http://www.pm25.com/rank/1day.html',
    ]

    def parse(self, response):
        item = Pm25Item()
        re_time = re.compile(r'\d+-\d+-\d+')
        # extract the date on its own
        date = response.xpath('/html/body/p[4]/p/p/p[2]/span').extract()[0]
        # items = []
        # narrow down the part of the response to parse
        selector = response.selector.xpath('/html/body/p[5]/p/p[3]/ul[2]/li')
        for subselector in selector:  # parse row by row within that scope
            try:  # guard against IndexError from [0]
                rank = subselector.xpath('span[1]/text()').extract()[0]
                quality = subselector.xpath('span/em/text()')[0].extract()
                city = subselector.xpath('a/text()').extract()[0]
                province = subselector.xpath('span[3]/text()').extract()[0]
                aqi = subselector.xpath('span[4]/text()').extract()[0]
                pm25 = subselector.xpath('span[5]/text()').extract()[0]
            except IndexError:
                print(rank, quality, city, province, aqi, pm25)
            item['date'] = re_time.findall(date)[0]
            item['rank'] = rank
            item['quality'] = quality
            item['province'] = city
            item['city'] = province
            item['aqi'] = aqi
            item['pm25'] = pm25
            # items.append(item)
            yield item  # I don't understand how to use this or what format
                        # comes out; some tutorials `return items` instead,
                        # so pointers here would be appreciated
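As a side note on the yield question in the comments above, here is a minimal pure-Python sketch (no Scrapy involved, all names made up for illustration) of the difference between yielding dicts one at a time and returning a whole list. Either way, whoever iterates the result sees the same sequence of single dicts, which is how Scrapy consumes a spider's output:

```python
def parse_with_yield(rows):
    """Generator: hands back one dict per row, like `yield item`."""
    for rank, city in rows:
        yield {'rank': rank, 'city': city}


def parse_with_list(rows):
    """Builds the whole list first, like the tutorials that `return items`."""
    items = []
    for rank, city in rows:
        items.append({'rank': rank, 'city': city})
    return items


rows = [('1', 'Lhasa'), ('2', 'Lijiang')]

# Iterating either result produces the same sequence of single dicts.
assert list(parse_with_yield(rows)) == parse_with_list(rows)

for item in parse_with_yield(rows):
    print(item)  # one dict at a time
```

The generator version never materializes the whole list, which is why `yield item` is the usual style in Scrapy spiders.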
pipeline:
import time


class Pm25Pipeline(object):
    def process_item(self, item, spider):
        today = time.strftime('%y%m%d', time.localtime())
        fname = str(today) + '.txt'
        with open(fname, 'a') as f:
            for tmp in item:  # not sure this is right; my understanding is
                # that the spider yields the items as a list of dicts:
                # [{a: 1, aa: 11}, {b: 2, bb: 22}, ...]
                f.write(tmp['date'] + '\t' +
                        tmp['rank'] + '\t' +
                        tmp['quality'] + '\t' +
                        tmp['province'] + '\t' +
                        tmp['city'] + '\t' +
                        tmp['aqi'] + '\t' +
                        tmp['pm25'] + '\n')
            f.close()
        return item
items:
import scrapy


class Pm25Item(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    date = scrapy.Field()
    rank = scrapy.Field()
    quality = scrapy.Field()
    province = scrapy.Field()
    city = scrapy.Field()
    aqi = scrapy.Field()
    pm25 = scrapy.Field()
Partial error output:
Traceback (most recent call last):
  File "d:\python35\lib\site-packages\twisted\internet\defer.py", line 653, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "D:\pypro\pm25\pm25\pipelines.py", line 23, in process_item
    tmp['pm25'] + '\n'
TypeError: string indices must be integers
2017-04-03 10:23:14 [scrapy.core.scraper] ERROR: Error processing {'aqi': '30', 'city': '新疆', 'date': '2017-04-02', 'pm25': '13 ', 'province': '伊犁哈薩克州', 'quality': '優(yōu)', 'rank': '357'}

(the identical traceback repeats for every scraped item, e.g. ranks 358 through 363, for 363 errors in total)

2017-04-03 10:23:14 [scrapy.core.engine] INFO: Closing spider (finished)
2017-04-03 10:23:14 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 328,
 'downloader/request_count': 1,
 'downloader/request_method_count/GET': 1,
 'downloader/response_bytes': 38229,
 'downloader/response_count': 1,
 'downloader/response_status_count/200': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2017, 4, 3, 2, 23, 14, 972356),
 'log_count/DEBUG': 2,
 'log_count/ERROR': 363,
 'log_count/INFO': 7,
 'response_received_count': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2017, 4, 3, 2, 23, 13, 226730)}
2017-04-03 10:23:14 [scrapy.core.engine] INFO: Spider closed (finished)
Hoping for some help from you all. Thanks again!
Answers
Answer 1: Just write the item out directly, no loop needed. Items are processed one at a time; they are not the list you imagined:
import time


class Pm25Pipeline(object):
    def process_item(self, item, spider):
        today = time.strftime('%y%m%d', time.localtime())
        fname = str(today) + '.txt'
        with open(fname, 'a') as f:
            f.write(item['date'] + '\t' +
                    item['rank'] + '\t' +
                    item['quality'] + '\t' +
                    item['province'] + '\t' +
                    item['city'] + '\t' +
                    item['aqi'] + '\t' +
                    item['pm25'] + '\n')
        return item

Answer 2:
Search for "TypeError: string indices must be integers", figure out what the problem is, locate the offending line number, and fix it.
Answer 3: Scrapy's Item is similar to a Python dict, just extended with some extra functionality.
By design, Scrapy hands each Item to the pipeline as soon as it is generated. Your `for tmp in item` loop iterates over the item's keys, and the keys are strings; indexing a string with the `__getitem__` syntax then raises the error telling you the index isn't an integer.
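The behavior these answers describe can be reproduced with a plain dict standing in for the Item (made-up sample data):

```python
# A plain dict standing in for the scraped Item.
item = {'date': '2017-04-02', 'rank': '357', 'pm25': '13'}

# Iterating a mapping yields its KEYS (strings), not nested dicts.
keys = [tmp for tmp in item]
assert all(isinstance(k, str) for k in keys)

# Indexing one of those string keys with another string fails
# exactly like the pipeline in the question did.
raised = False
try:
    keys[0]['pm25']  # 'date'['pm25'] -- string indexed by a string
except TypeError:
    raised = True
assert raised
```

So the `tmp` in the question's loop was never a row dict; it was a field name like `'date'`.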
Answer 4: You can think of an item as a dictionary; in practice it is a dict-like derived class. When you iterate over the item directly in the pipeline, each `tmp` you get is actually one of the dict's keys, which is a string, so an operation like tmp['pm25'] raises TypeError: string indices must be integers.
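Building on these answers, here is a sketch of an alternative pipeline that writes each item as one tab-separated row using the csv module instead of hand-joining strings. This is only an illustration, assuming the field names of the Pm25Item above; it is not part of the original thread:

```python
import csv
import time

# Field order assumed from the Pm25Item definition in the question.
FIELDS = ['date', 'rank', 'quality', 'province', 'city', 'aqi', 'pm25']


class Pm25Pipeline(object):
    def process_item(self, item, spider):
        fname = time.strftime('%y%m%d') + '.txt'
        # Append mode: each item becomes one tab-separated row.
        with open(fname, 'a', newline='', encoding='utf-8') as f:
            writer = csv.writer(f, delimiter='\t')
            writer.writerow([item[k] for k in FIELDS])
        return item  # pass the item on to any later pipelines
```

The csv writer handles quoting and line endings, so values containing tabs or newlines won't corrupt the output the way manual concatenation can.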
