Python 3: Quickly Scraping Housing Listings and Saving Them to MySQL (Detailed Walkthrough)


I want to build a fun little project, so let me first organize my thoughts: how do we quickly scrape the key information, and how do we implement automatic pagination?
After some thought, I settled on the most conventional toolkit: requests plus re regular expressions, with BeautifulSoup for batch extraction.

import requests
import re
from bs4 import BeautifulSoup
import pymysql

Next, set up the URLs. Note that the site has an anti-scraping mechanism: the first page must be requested as https://tianjin.anjuke.com/sale/, and every later page as 'https://tianjin.anjuke.com/sale/p%d/#filtersort' % page; otherwise the requests are detected as coming from a crawler and the scrape fails. This is also where pagination is implemented.

page = 1
while page < 11:
    # brower.get("https://tianjin.anjuke.com/sale/p%d/#filtersort" % page)
    # time.sleep(1)
    print("Fetching page " + str(page))
    # proxy = requests.get(pool_url).text
    # proxies = {
    #     'http': 'http://' + proxy
    # }
    if page == 1:
        url = 'https://tianjin.anjuke.com/sale/'
        headers = {
            'referer': 'https://tianjin.anjuke.com/sale/',
            'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36',
        }
    else:
        url = 'https://tianjin.anjuke.com/sale/p%d/#filtersort' % page
        headers = {
            'referer': 'https://tianjin.anjuke.com/sale/p%d/#filtersort' % page,
            'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36',
        }
    # html = requests.get(url, allow_redirects=False, headers=headers, proxies=proxies)
    html = requests.get(url, headers=headers)
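Since the site can flag crawlers, it is worth checking each response before parsing. Here is a small addition I would make inside the loop (my own suggestion, not in the original script); the allow_redirects=False flag comes from the author's commented-out call, on the assumption that a redirect signals the anti-bot page:

import time  # only needed for the polite delay below

html = requests.get(url, headers=headers, allow_redirects=False)
# Anything other than a plain 200 (e.g. a 302 to a verification page)
# suggests the anti-crawler mechanism fired; skip parsing in that case.
if html.status_code != 200:
    print("Possible anti-bot response on page %d: HTTP %d" % (page, html.status_code))
time.sleep(1)  # pause between pages to keep the request rate polite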

The second step, naturally, is to analyze the page and work out how to paginate automatically. First, find the listing images.

 

[Screenshot: the listing page's HTML source around an <img> tag]

 

Time for regular expressions!

# image URL
myjpg = r'<img src="(.*?)" width="180" height="135" />'

jpg = re.findall(myjpg, html.text)
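As a quick sanity check, the pattern can be run against a made-up fragment of listing HTML (the URL here is invented for illustration):

import re

# Hypothetical fragment in the shape the pattern expects.
sample = '<img src="https://pic1.example.com/house.jpg" width="180" height="135" />'
myjpg = r'<img src="(.*?)" width="180" height="135" />'
print(re.findall(myjpg, sample))
# ['https://pic1.example.com/house.jpg']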

The photo URLs are scraped; now follow the same recipe to quickly grab the other fields.

# description
mytail = r'<a data-from="" data-company=""  title="(.*?)" href'
tail = re.findall(mytail, html.text)
# total price
totalprice = r'<span class="price-det"><strong>(.*?)</strong>'
mytotal = re.findall(totalprice, html.text)
# unit price
simpleprice = r'<span class="unit-price">(.*?)</span> '
simple = re.findall(simpleprice, html.text)
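Note that the database step later pairs these lists purely by position: element i of jpg, tail, mytotal, and simple must all describe the same listing, which holds only if every pattern matched once per listing. A tiny sketch of that assumption (values invented):

# Hypothetical parallel results from the four patterns on one page.
jpg = ['https://pic1.example.com/a.jpg']
tail = ['Bright two-bedroom near the subway']
mytotal = ['120']
simple = ['15000元/m²']

# zip pairs the i-th element of each list into one listing record.
for jpgs, scripts, total, oneprice in zip(jpg, tail, mytotal, simple):
    print(jpgs, scripts, total, oneprice)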

Next, use BeautifulSoup to pull values out of the relevant tags. I use the lxml parser here because it is faster; html.parser works as well.

soup = BeautifulSoup(html.content, 'lxml')

 

Looking at the screenshot, the markup uses lots of line breaks and the span tags carry no class of their own, so it is time to bring in our guest, bs4.

 

[Screenshot: the .details-item markup, full of line breaks and unnamed span tags]
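To see what the selector returns before running it on the live page, here is the same call against a made-up fragment shaped like the screenshot (class name from the page, values invented):

from bs4 import BeautifulSoup

# Hypothetical markup mirroring the listing structure described above.
fragment = '''
<div class="details-item">
  <span>2室1廳</span><span>80m²</span>
</div>
'''
soup = BeautifulSoup(fragment, 'lxml')
# The spans have no class, so select them through the parent's class.
for span in soup.select(".details-item span"):
    print(span.get_text())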

 

A loop does the collection here: one pass yields 300 span entries, but a page has only 60 listings (one image each), so the spans are split into groups of five. The re.sub call replaces whitespace characters with nothing so the value can be stored cleanly in the database.

# house details
itemdetail = soup.select(".details-item span")
# print(len(itemdetail))
you = []
my = []
for i in itemdetail:
    # print(i.get_text())
    you.append(i.get_text())
k = 0
while k < 60:
    my.append([you[5 * k], you[5 * k + 1], you[5 * k + 2], you[5 * k + 3],
               re.sub(r'\s', "", you[5 * k + 4])])
    k = k + 1
# print(my)
# print(len(my))
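As a side note, a slicing comprehension says "groups of five" more directly than manual index arithmetic; a drop-in alternative for the loop above (sample data invented, and assuming len(you) is a multiple of five as on a full page):

import re

# Hypothetical flat list of span texts, five per listing.
you = ['3室2廳', '100m²', '高層', '2010年', ' 天津 南開 ',
       '2室1廳', '80m²', '低層', '2005年', ' 天津 河西 ']

# Slice the flat list into five-element records, stripping whitespace
# from the fifth field just as the original loop does with re.sub.
my = [
    you[i:i + 4] + [re.sub(r'\s', '', you[i + 4])]
    for i in range(0, len(you), 5)
]
print(my)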

Next, store everything in the database!

db = pymysql.connect(host="localhost", user="root", password="", database="anjuke")
conn = db.cursor()
print(len(jpg))
for i in range(0, len(tail)):
    jpgs = jpg[i]
    scripts = tail[i]
    localroom = my[i][0]
    localarea = my[i][1]
    localhigh = my[i][2]
    localtimes = my[i][3]
    local = my[i][4]
    total = mytotal[i]
    oneprice = simple[i]
    sql = "insert into shanghai_admin values('%s','%s','%s','%s','%s','%s','%s','%s','%s')" % \
          (jpgs, scripts, local, total, oneprice, localroom, localarea, localhigh, localtimes)
    conn.execute(sql)
    db.commit()
db.close()
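One caveat: building the SQL with % breaks as soon as a title contains a quote, and it invites SQL injection. PyMySQL can do the quoting itself via parameterized queries, so a safer version of the three lines inside the loop would be:

# Let the driver quote the values instead of formatting the SQL by hand.
sql = ("insert into shanghai_admin "
       "values(%s,%s,%s,%s,%s,%s,%s,%s,%s)")
conn.execute(sql, (jpgs, scripts, local, total, oneprice,
                   localroom, localarea, localhigh, localtimes))
db.commit()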

All done! Let's look at the result!

 

[Screenshot: the scraped rows stored in the MySQL table]

 

The complete code follows:

# from selenium import webdriver
import requests
import re
from bs4 import BeautifulSoup
import pymysql
# import time
# chrome_driver = r"C:\Users\秦QQ\AppData\Local\Programs\Python\Python38-32\Lib\site-packages\selenium-3.141.0-py3.8.egg\selenium\webdriver\chrome\chromedriver.exe"
# brower = webdriver.Chrome(executable_path=chrome_driver)
# pool_url = 'http://localhost:5555/random'
page = 1
while page < 11:
    # brower.get("https://tianjin.anjuke.com/sale/p%d/#filtersort" % page)
    # time.sleep(1)
    print("Fetching page " + str(page))
    # proxy = requests.get(pool_url).text
    # proxies = {
    #     'http': 'http://' + proxy
    # }
    if page == 1:
        url = 'https://tianjin.anjuke.com/sale/'
        headers = {
            'referer': 'https://tianjin.anjuke.com/sale/',
            'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36',
        }
    else:
        url = 'https://tianjin.anjuke.com/sale/p%d/#filtersort' % page
        headers = {
            'referer': 'https://tianjin.anjuke.com/sale/p%d/#filtersort' % page,
            'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36',
        }
    # html = requests.get(url, allow_redirects=False, headers=headers, proxies=proxies)
    html = requests.get(url, headers=headers)
    soup = BeautifulSoup(html.content, 'lxml')
    # image URL
    myjpg = r'<img src="(.*?)" width="180" height="135" />'
    jpg = re.findall(myjpg, html.text)
    # description
    mytail = r'<a data-from="" data-company=""  title="(.*?)" href'
    tail = re.findall(mytail, html.text)
    # house details
    itemdetail = soup.select(".details-item span")
    # print(len(itemdetail))
    you = []
    my = []
    for i in itemdetail:
        # print(i.get_text())
        you.append(i.get_text())
    k = 0
    while k < 60:
        my.append([you[5 * k], you[5 * k + 1], you[5 * k + 2], you[5 * k + 3],
                   re.sub(r'\s', "", you[5 * k + 4])])
        k = k + 1
    # print(my)
    # print(len(my))
    # total price
    totalprice = r'<span class="price-det"><strong>(.*?)</strong>'
    mytotal = re.findall(totalprice, html.text)
    # unit price
    simpleprice = r'<span class="unit-price">(.*?)</span> '
    simple = re.findall(simpleprice, html.text)
    db = pymysql.connect(host="localhost", user="root", password="", database="anjuke")
    conn = db.cursor()
    print(len(jpg))
    for i in range(0, len(tail)):
        jpgs = jpg[i]
        scripts = tail[i]
        localroom = my[i][0]
        localarea = my[i][1]
        localhigh = my[i][2]
        localtimes = my[i][3]
        local = my[i][4]
        total = mytotal[i]
        oneprice = simple[i]
        sql = "insert into shanghai_admin values('%s','%s','%s','%s','%s','%s','%s','%s','%s')" % \
              (jpgs, scripts, local, total, oneprice, localroom, localarea, localhigh, localtimes)
        conn.execute(sql)
        db.commit()
    db.close()
    # button = brower.find_element_by_class_name('aNxt')
    # button.click()
    # time.sleep(1)
    page = page + 1
# brower.close()

 
