繼續(xù)老套路,這兩天我爬取了豬八戒上的一些數(shù)據(jù) 網(wǎng)址是:http://task.zbj.com/t-ppsj/p1s5.html,可能是由于爬取的數(shù)據(jù)量有點(diǎn)多吧,結(jié)果我的IP被封了,需要自己手動(dòng)來(lái)驗(yàn)證解封ip,但這顯然阻止了我爬取更多的數(shù)據(jù)了。

Python爬取大量數(shù)據(jù)時(shí),如何防止IP被封 !這點(diǎn)非常重要

 


Below is the Zhubajie scraper I wrote, the code that got my IP banned:

# coding=utf-8
import requests
from lxml import etree


def getUrl():
    for i in range(33):
        url = 'http://task.zbj.com/t-ppsj/p{}s5.html'.format(i + 1)
        spiderPage(url)


def spiderPage(url):
    if url is None:
        return None
    htmlText = requests.get(url).text
    selector = etree.HTML(htmlText)
    tds = selector.xpath('//*[@class="tab-switch tab-progress"]/table/tr')
    try:
        for td in tds:
            price = td.xpath('./td/p/em/text()')
            href = td.xpath('./td/p/a/@href')
            title = td.xpath('./td/p/a/text()')
            subTitle = td.xpath('./td/p/text()')
            deadline = td.xpath('./td/span/text()')
            # Python conditional expression: value_if_true if condition else value_if_false
            price = price[0] if len(price) > 0 else ''
            title = title[0] if len(title) > 0 else ''
            href = href[0] if len(href) > 0 else ''
            subTitle = subTitle[0] if len(subTitle) > 0 else ''
            deadline = deadline[0] if len(deadline) > 0 else ''
            print(price, title, href, subTitle, deadline)
            print('-' * 90)
            spiderDetail(href)
    except Exception as e:
        print('Error:', e)


def spiderDetail(url):
    if url is None:
        return None
    try:
        htmlText = requests.get(url).text
        selector = etree.HTML(htmlText)
        aboutHref = selector.xpath('//*[@id="utopia_widget_10"]/div[1]/div/div/div/p[1]/a/@href')
        price = selector.xpath('//*[@id="utopia_widget_10"]/div[1]/div/div/div/p[1]/text()')
        title = selector.xpath('//*[@id="utopia_widget_10"]/div[1]/div/div/h2/text()')
        contentDetail = selector.xpath('//*[@id="utopia_widget_10"]/div[2]/div/div[1]/div[1]/text()')
        publishDate = selector.xpath('//*[@id="utopia_widget_10"]/div[2]/div/div[1]/p/text()')
        aboutHref = aboutHref[0] if len(aboutHref) > 0 else ''
        price = price[0] if len(price) > 0 else ''
        title = title[0] if len(title) > 0 else ''
        contentDetail = contentDetail[0] if len(contentDetail) > 0 else ''
        publishDate = publishDate[0] if len(publishDate) > 0 else ''
        print(aboutHref, price, title, contentDetail, publishDate)
    except Exception as e:
        print('Error:', e)


if __name__ == '__main__':
    getUrl()

我發(fā)現(xiàn)代碼運(yùn)行完后,后面有幾頁(yè)數(shù)據(jù)沒(méi)有被爬取,我再也沒(méi)有辦法去訪問(wèn)豬八戒網(wǎng)站了,等過(guò)了一段時(shí)間才能去訪問(wèn)他們的網(wǎng)站,這就很尷尬了,我得防止被封IP

So how do you keep a site from banning your IP while you scrape it? I looked up a few tricks.

1. Modify the request headers

My earlier scraper didn't send any headers at all. Here I add a User-Agent header so the request looks like it comes from a browser:

user_agent = 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.104 Safari/537.36 Core/1.53.4295.400'
headers = {'User-Agent': user_agent}
htmlText = requests.get(url, headers=headers).text
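
A further step along the same lines (my own addition, not from the original post) is to keep a small pool of User-Agent strings and pick one at random for every request, so the traffic doesn't all carry the same fingerprint. A minimal sketch:

import random
import requests

# A hypothetical pool of User-Agent strings; any real browser UAs would do.
USER_AGENTS = [
    'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.104 Safari/537.36',
    'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.116 Safari/537.36',
]

def fetch(url):
    # Pick a different User-Agent for each request.
    headers = {'User-Agent': random.choice(USER_AGENTS)}
    return requests.get(url, headers=headers, timeout=10).text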

2. Use proxy IPs

當(dāng)自己的ip被網(wǎng)站封了之后,只能采用代理ip的方式進(jìn)行爬取,所以每次爬取的時(shí)候盡量用代理ip來(lái)爬取,封了代理還有代理。

這里我引用了這個(gè)博客的一段代碼來(lái)生成ip地址:http://blog.csdn.net/lammonpeter/article/details/52917264

It generates proxy IPs; feel free to take this code and use it as is.

# coding=utf-8
# Proxy IPs are taken from a domestic high-anonymity proxy site: http://www.xicidaili.com/nn/
# Scraping just the first page of IPs is enough for everyday use
from bs4 import BeautifulSoup
import requests
import random


def get_ip_list(url, headers):
    web_data = requests.get(url, headers=headers)
    soup = BeautifulSoup(web_data.text, 'lxml')
    ips = soup.find_all('tr')
    ip_list = []
    for i in range(1, len(ips)):
        ip_info = ips[i]
        tds = ip_info.find_all('td')
        ip_list.append(tds[1].text + ':' + tds[2].text)
    return ip_list


def get_random_ip(ip_list):
    proxy_list = []
    for ip in ip_list:
        proxy_list.append('http://' + ip)
    proxy_ip = random.choice(proxy_list)
    proxies = {'http': proxy_ip}
    return proxies


if __name__ == '__main__':
    url = 'http://www.xicidaili.com/nn/'
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36'
    }
    ip_list = get_ip_list(url, headers=headers)
    proxies = get_random_ip(ip_list)
    print(proxies)

With the code above I generated a batch of proxy IPs (some of them may well be dead, but as long as my own IP doesn't get banned, I'm happy). Now I can attach a proxy to each of my requests.

** 給我們的請(qǐng)求添加代理ip**

proxies = {
    'http': 'http://124.72.109.183:8118',
    # 'http': 'http://49.85.1.79:31666',  # a dict keeps only one value per key, so swap proxies by rebuilding the dict per request
}
user_agent = 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.104 Safari/537.36 Core/1.53.4295.400'
headers = {'User-Agent': user_agent}
htmlText = requests.get(url, headers=headers, timeout=3, proxies=proxies).text
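
Rather than hard-coding one or two proxies, the ip_list produced by the generator above could feed get_random_ip() so that every page request goes out through a freshly chosen proxy, with a fallback when a proxy turns out to be dead. This is only a sketch of the idea (get_random_ip() is the helper from the earlier snippet, and the retry count is arbitrary):

import requests

def fetch_with_random_proxy(url, headers, ip_list, retries=3):
    # Free proxies die all the time, so try a few different ones.
    for _ in range(retries):
        proxies = get_random_ip(ip_list)  # e.g. {'http': 'http://124.72.109.183:8118'}
        try:
            return requests.get(url, headers=headers, proxies=proxies, timeout=3).text
        except requests.RequestException:
            continue  # this proxy failed; pick another
    # Last resort: go direct rather than give up on the page entirely.
    return requests.get(url, headers=headers, timeout=3).text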

Those are the tricks I know of so far.

最后完整代碼如下:

# coding=utf-8
import requests
import time
from lxml import etree


def getUrl():
    for i in range(33):
        url = 'http://task.zbj.com/t-ppsj/p{}s5.html'.format(i + 1)
        spiderPage(url)


def spiderPage(url):
    if url is None:
        return None
    try:
        proxies = {
            'http': 'http://221.202.248.52:80',
        }
        user_agent = 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.104 Safari/537.36 Core/1.53.4295.400'
        headers = {'User-Agent': user_agent}
        htmlText = requests.get(url, headers=headers, proxies=proxies).text
        selector = etree.HTML(htmlText)
        tds = selector.xpath('//*[@class="tab-switch tab-progress"]/table/tr')
        for td in tds:
            price = td.xpath('./td/p/em/text()')
            href = td.xpath('./td/p/a/@href')
            title = td.xpath('./td/p/a/text()')
            subTitle = td.xpath('./td/p/text()')
            deadline = td.xpath('./td/span/text()')
            # Python conditional expression: value_if_true if condition else value_if_false
            price = price[0] if len(price) > 0 else ''
            title = title[0] if len(title) > 0 else ''
            href = href[0] if len(href) > 0 else ''
            subTitle = subTitle[0] if len(subTitle) > 0 else ''
            deadline = deadline[0] if len(deadline) > 0 else ''
            print(price, title, href, subTitle, deadline)
            print('-' * 90)
            spiderDetail(href)
    except Exception as e:
        print('Error:', e)


def spiderDetail(url):
    if url is None:
        return None
    try:
        htmlText = requests.get(url).text
        selector = etree.HTML(htmlText)
        aboutHref = selector.xpath('//*[@id="utopia_widget_10"]/div[1]/div/div/div/p[1]/a/@href')
        price = selector.xpath('//*[@id="utopia_widget_10"]/div[1]/div/div/div/p[1]/text()')
        title = selector.xpath('//*[@id="utopia_widget_10"]/div[1]/div/div/h2/text()')
        contentDetail = selector.xpath('//*[@id="utopia_widget_10"]/div[2]/div/div[1]/div[1]/text()')
        publishDate = selector.xpath('//*[@id="utopia_widget_10"]/div[2]/div/div[1]/p/text()')
        aboutHref = aboutHref[0] if len(aboutHref) > 0 else ''
        price = price[0] if len(price) > 0 else ''
        title = title[0] if len(title) > 0 else ''
        contentDetail = contentDetail[0] if len(contentDetail) > 0 else ''
        publishDate = publishDate[0] if len(publishDate) > 0 else ''
        print(aboutHref, price, title, contentDetail, publishDate)
    except Exception as e:
        print('Error:', e)


if __name__ == '__main__':
    getUrl()
Python爬取大量數(shù)據(jù)時(shí),如何防止IP被封 !這點(diǎn)非常重要

 

數(shù)據(jù)全部爬取出來(lái)了,且我的IP也沒(méi)有被封。當(dāng)然防止被封IP肯定不止這些了,這還需要進(jìn)一步探索!

最后

雖然數(shù)據(jù)我是已經(jīng)抓取過(guò)來(lái)了,但是我的數(shù)據(jù)都沒(méi)有完美呈現(xiàn)出來(lái),只是呈現(xiàn)在我的控制臺(tái)上,這并不完美,我應(yīng)該寫入execl文件或者數(shù)據(jù)庫(kù)中啊,這樣才能方便采用。所以接下來(lái)我準(zhǔn)備了使用Python操作execl
