python - Scrapy pipeline error, asking for help

Views: 98    Date: 2022-08-09 08:55:51

Problem description

I don't really understand how data is passed around, and I've been stuck on this Scrapy question for almost half a month. I've read a lot of material and still don't get it; my fundamentals are weak, so I'm asking the experts here for help. Without any customization, taking Scrapy's default setup as the example: what format should the spider return? A dict like {a: 1, b: 2, ...}, or a list like [{a: 1, aa: 11}, {b: 2, bb: 22}, ...]? And where does the returned value go? Is it the item in the code below?

class pipeline:
    def process_item(self, item, spider):
        ...

I'm really a beginner, but I genuinely want to learn, so I hope you can help me! My code is below; please point out its problems.

spider:

# -*- coding: utf-8 -*-
import scrapy
from pm25.items import Pm25Item
import re


class InfospSpider(scrapy.Spider):
    name = 'infosp'
    allowed_domains = ['pm25.com']
    start_urls = [
        'http://www.pm25.com/rank/1day.html',
    ]

    def parse(self, response):
        item = Pm25Item()
        re_time = re.compile(r'\d+-\d+-\d+')
        # extract the date separately
        date = response.xpath('/html/body/p[4]/p/p/p[2]/span').extract()[0]
        # items = []
        # narrow the parsing scope from the response
        selector = response.selector.xpath('/html/body/p[5]/p/p[3]/ul[2]/li')
        for subselector in selector:  # parse entry by entry within that scope
            try:  # guard against IndexError from [0]
                rank = subselector.xpath('span[1]/text()').extract()[0]
                quality = subselector.xpath('span/em/text()')[0].extract()
                city = subselector.xpath('a/text()').extract()[0]
                province = subselector.xpath('span[3]/text()').extract()[0]
                aqi = subselector.xpath('span[4]/text()').extract()[0]
                pm25 = subselector.xpath('span[5]/text()').extract()[0]
            except IndexError:
                print(rank, quality, city, province, aqi, pm25)
            item['date'] = re_time.findall(date)[0]
            item['rank'] = rank
            item['quality'] = quality
            item['province'] = city
            item['city'] = province
            item['aqi'] = aqi
            item['pm25'] = pm25
            # items.append(item)
            yield item  # I don't understand how to use this or what format comes out;
                        # some tutorials return items instead, so I'd appreciate guidance here

pipeline:

import time


class Pm25Pipeline(object):
    def process_item(self, item, spider):
        today = time.strftime('%y%m%d', time.localtime())
        fname = str(today) + '.txt'
        with open(fname, 'a') as f:
            for tmp in item:  # not sure whether this is right; my understanding is that
                              # the spider yields dicts like [{a:1,aa:11},{b:2,bb:22},...]
                f.write(tmp['date'] + '\t' +
                        tmp['rank'] + '\t' +
                        tmp['quality'] + '\t' +
                        tmp['province'] + '\t' +
                        tmp['city'] + '\t' +
                        tmp['aqi'] + '\t' +
                        tmp['pm25'] + '\n')
            f.close()
        return item

items:

import scrapy


class Pm25Item(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    date = scrapy.Field()
    rank = scrapy.Field()
    quality = scrapy.Field()
    province = scrapy.Field()
    city = scrapy.Field()
    aqi = scrapy.Field()
    pm25 = scrapy.Field()

Part of the error output from the run:

Traceback (most recent call last):
  File "d:\python35\lib\site-packages\twisted\internet\defer.py", line 653, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "D:\pypro\pm25\pm25\pipelines.py", line 23, in process_item
    tmp['pm25'] + '\n'
TypeError: string indices must be integers
2017-04-03 10:23:14 [scrapy.core.scraper] ERROR: Error processing {'aqi': '30', 'city': '新疆', 'date': '2017-04-02', 'pm25': '13 ', 'province': '伊犁哈薩克州', 'quality': '優(yōu)', 'rank': '357'}

(The identical traceback and ERROR line repeat for the remaining items shown, ranks 358 through 363.)

2017-04-03 10:23:14 [scrapy.core.engine] INFO: Closing spider (finished)
2017-04-03 10:23:14 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 328,
 'downloader/request_count': 1,
 'downloader/request_method_count/GET': 1,
 'downloader/response_bytes': 38229,
 'downloader/response_count': 1,
 'downloader/response_status_count/200': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2017, 4, 3, 2, 23, 14, 972356),
 'log_count/DEBUG': 2,
 'log_count/ERROR': 363,
 'log_count/INFO': 7,
 'response_received_count': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2017, 4, 3, 2, 23, 13, 226730)}
2017-04-03 10:23:14 [scrapy.core.engine] INFO: Spider closed (finished)

I hope to get your help. Thanks again!

Answers

Answer 1:

Just write the item directly; there is no need for a loop. Each item is processed individually, not as the list you were imagining:

import time


class Pm25Pipeline(object):
    def process_item(self, item, spider):
        today = time.strftime('%y%m%d', time.localtime())
        fname = str(today) + '.txt'
        with open(fname, 'a') as f:
            f.write(item['date'] + '\t' +
                    item['rank'] + '\t' +
                    item['quality'] + '\t' +
                    item['province'] + '\t' +
                    item['city'] + '\t' +
                    item['aqi'] + '\t' +
                    item['pm25'] + '\n')
        return item
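A follow-up on the design: reopening the output file for every single item works, but Scrapy pipelines also provide open_spider and close_spider hooks, so the file can be opened once per crawl instead. A minimal sketch of that variant (my own addition, assuming the same Pm25Item string fields as in the question):

import time


class Pm25Pipeline(object):
    def open_spider(self, spider):
        # called once when the crawl starts: open the output file for the whole run
        fname = time.strftime('%y%m%d', time.localtime()) + '.txt'
        self.file = open(fname, 'a')

    def close_spider(self, spider):
        # called once when the crawl finishes
        self.file.close()

    def process_item(self, item, spider):
        # called once per yielded item: write one tab-separated line
        fields = ['date', 'rank', 'quality', 'province', 'city', 'aqi', 'pm25']
        self.file.write('\t'.join(item[f] for f in fields) + '\n')
        return item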

Answer 2:

Search for "TypeError: string indices must be integers", figure out what the problem is and which line it occurs on, then fix it.

Answer 3:

Scrapy's Item is like a Python dict, just extended with a few extra features.

By Scrapy's design, every Item that is generated is passed to the pipeline and processed on its own. The for tmp in item loop you wrote iterates over the item's keys; the keys are strings, so when you then use the __getitem__ subscript syntax on them, Python tells you the index you used is not an integer.
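To see concretely what this means, here is a minimal sketch using a plain dict; iterating a Scrapy Item behaves the same way, since iteration goes over the field names:

# iterating a dict (or a scrapy.Item) yields its keys, which are strings
item = {'date': '2017-04-02', 'rank': '357', 'pm25': '13'}

for tmp in item:
    print(type(tmp), tmp)   # prints <class 'str'> date, <class 'str'> rank, ...: tmp is a key, not a dict
    # tmp['pm25']           # uncommenting this raises: TypeError: string indices must be integers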

Answer 4:

You can treat an item as a dict; in fact it is a subclass of dict. When you iterate over the item directly in the pipeline, each tmp you get is actually one of the dict's keys, i.e. a string, so tmp['pm25'] raises TypeError: string indices must be integers.
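If you did want to keep a loop inside process_item, iterating item.items(), which gives (field name, value) pairs, instead of the item itself would avoid the error, although the non-loop version in Answer 1 is simpler. A rough sketch (my addition, not part of this answer; note the field order is not guaranteed here):

import time


class Pm25Pipeline(object):
    def process_item(self, item, spider):
        fname = time.strftime('%y%m%d', time.localtime()) + '.txt'
        with open(fname, 'a') as f:
            # item.items() yields (field name, value) pairs, so each value is the
            # string to write, unlike bare 'for tmp in item' which yields only keys
            row = [value for key, value in item.items()]
            f.write('\t'.join(row) + '\n')
        return item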

Tags: Python, Programming