
Python multi thread crawling weather website pictures and saving

2022-01-30 20:34:57 Xiaosheng Fanyi

This is day 1 of my participation in the November Gengwen Challenge. Check out the event details: 2021 Last Gengwen Challenge.

1.1 Task

Pick a website and crawl all of the images on it, for example the China Weather Network (www.weather.com.cn), using a single-threaded and a multi-threaded approach respectively. (Limit the number of images crawled to the last 3 digits of your student ID.)

Output: print each downloaded URL to the console, store the downloaded images in an images subfolder, and provide a screenshot.

1.2 Ideas

1.2.1 Send a request

  • Construct the request headers

```python
import re
import urllib.request

import requests

headers = {
    'Connection': 'keep-alive',
    'Cache-Control': 'max-age=0',
    'Upgrade-Insecure-Requests': '1',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.61 Safari/537.36',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
    'Accept-Language': 'zh-CN,zh;q=0.9',
}

url = "http://www.weather.com.cn/"
request = urllib.request.Request(url, headers=headers)
```
  • Send the request

```python
request = urllib.request.Request(url, headers=headers)
r = urllib.request.urlopen(request)
```

1.2.2 Parse web pages

Decode the page and strip the newlines, so that the regular expressions used later can match image tags that span multiple lines.

```python
html = r.read().decode().replace('\n', '')
```

*(screenshot: image-20211027110330755)*

1.2.3 Get node

Use regular expressions: first match every `<a>` tag on the home page, then crawl all the images found on each linked page.

```python
urlList = re.findall('<a href="(.*?)" ', html, re.S)
```
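The matched `href` values can include relative paths and `javascript:` pseudo-links, so it can help to normalize and filter them before requesting each one. A minimal sketch of such a pass (this filtering step is my addition, not part of the original code):

```python
from urllib.parse import urljoin, urlparse

base = "http://www.weather.com.cn/"
links = [
    "http://www.weather.com.cn/weather/101010100.shtml",
    "/forecast/index.shtml",
    "javascript:void(0)",
]

cleaned = []
for href in links:
    full = urljoin(base, href)  # resolve relative paths against the site root
    if urlparse(full).scheme in ("http", "https"):  # drop javascript:/mailto: links
        cleaned.append(full)

print(cleaned)
```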

Then collect every image URL from the linked pages:

```python
allImageList = []
for k in urlList:
    try:
        request = urllib.request.Request(k, headers=headers)
        r = urllib.request.urlopen(request)
        html = r.read().decode().replace('\n', '')
        imgList = re.findall(r'<img.*?src="(.*?)"', html, re.S)
        allImageList += imgList
    except Exception:
        # skip pages that fail to download or decode
        continue
```
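The same logo and icon files appear on many pages, so `allImageList` usually contains duplicates. An optional order-preserving deduplication pass (the `dedupe` helper is my own, not from the original) keeps the download count meaningful:

```python
def dedupe(urls):
    """Remove duplicate URLs while preserving first-seen order."""
    seen = set()
    result = []
    for u in urls:
        if u not in seen:
            seen.add(u)
            result.append(u)
    return result

print(dedupe(["a.png", "b.png", "a.png"]))  # ['a.png', 'b.png']
```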

The per-page requests above could themselves be parallelized with multithreading; that refinement is left for a follow-up.

1.2.4 Save the data (single thread)

```python
import os

os.makedirs('./images', exist_ok=True)  # make sure the images subfolder exists
for i, img in enumerate(allImageList[:102]):
    print(f"Saving image {i + 1}, URL: {img}")
    resp = requests.get(img)
    with open(f'./images/{img.split("/")[-1]}', 'wb') as f:  # save under the images folder
        f.write(resp.content)
```
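Naming each file by the last URL segment can collide when different pages serve images with the same name (e.g. `logo.png`), so a later download silently overwrites an earlier one. One defensive option is a helper that appends a numeric suffix on collision; the `unique_name` helper below is hypothetical, not part of the original code:

```python
import os

def unique_name(folder, filename, existing=None):
    """Return filename, or filename with a numeric suffix if it is already taken."""
    taken = set(os.listdir(folder)) if existing is None else set(existing)
    if filename not in taken:
        return filename
    stem, ext = os.path.splitext(filename)
    n = 1
    while f"{stem}_{n}{ext}" in taken:
        n += 1
    return f"{stem}_{n}{ext}"

print(unique_name(".", "logo.png", existing=["logo.png", "logo_1.png"]))  # logo_2.png
```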

*(screenshot: image-20211027110644257)*

1.2.5 Save the data (multithreading)

  • Import the threading module

```python
import threading

# Multithreading: start one thread per image
def download_imgs(imgList, limit):
    threads = [
        threading.Thread(target=download, args=(url, i))
        for i, url in enumerate(imgList[:limit])  # download at most `limit` images
    ]
    for t in threads:
        t.start()
    return threads
```
  • Write the download function

```python
def download(img_url, name):
    try:
        resp = requests.get(img_url)
        with open(f'./images/{name}.jpg', 'wb') as f:
            f.write(resp.content)
    except Exception as e:
        print(f"Download failed: {name} {img_url} -> {e}")
    else:
        print(f"Download complete: {name} {img_url}")
```
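`download_imgs` returns the thread list without waiting on it, so the caller should join the threads before the program exits. A minimal self-contained driver of the same start/join pattern (the `download` here is a stand-in that skips the network request, so the sketch runs offline):

```python
import threading

def download(img_url, name):
    # stand-in for the real download function, so this sketch runs offline
    print(f"Download complete: {name} {img_url}")

def download_imgs(imgList, limit):
    threads = [
        threading.Thread(target=download, args=(url, i))
        for i, url in enumerate(imgList[:limit])
    ]
    for t in threads:
        t.start()
    return threads

# start the workers, then block until every thread has finished
threads = download_imgs(["http://a.example/1.jpg", "http://a.example/2.jpg"], limit=2)
for t in threads:
    t.join()
print("all done")
```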

The completion messages print in a random-looking order, because the threads finish nondeterministically rather than in the order they were started.

*(screenshot: image-20211027110955637)*

Copyright notice
Author: Xiaosheng Fanyi. Please include a link to the original when reprinting, thank you.
https://en.pythonmana.com/2022/01/202201302034565200.html
