
Detailed explanation of Python + Fiddler 4 web crawlers (the most complete on CSDN)

2022-02-02 09:01:20 E iosers


1、 Packet capture
(1) <>: HTML
(2) JSON: JSON data; it may also be an API response
(3) CSS: CSS resources
(4) JS: JS resources
2、 Stop capturing: File -> Capture Traffic toggles capture on and off
3、 Click a request -> on the right select Inspectors
(1) Upper right: the HTTP request
①Raw: full details of the request headers
②WebForms: the request parameters; query_string, formdata.
(2) Lower right: the HTTP response (if the body is compressed, you should decode it first by clicking the yellow bar)
①Click the yellow bar to decode;
②Raw: all of the information in the response;
③Headers: the response headers;
④JSON: what the interface returns (the response body)
(3) Lower-left command box (for operating the program quickly)
①clear: clear all requests;
②select + …: quickly select the matching requests;
③?+ content (com/du…): quickly search for requests that match the content;

Summary: Fiddler is a professional packet-capture tool. Compared with Google's web developer tools, Fiddler does not overwrite the information captured from the previous page; it keeps the data captured for earlier pages.

Fiddler 4 interface display

Python-urllib library

1、 Purpose: a library that simulates a browser sending requests; it ships with Python
2、 Python 3 integrates two modules: urllib.request and urllib.parse
3、 Related functions
(6) read(): reads the response content; the content is of type bytes
(7) geturl(): gets the requested URL
(8) getheaders(): gets the response header information (a list of tuples)
(9) getcode(): gets the status code

Anatomy of a complete URL (the domain itself was stripped when the post was published; the remaining parts are):
domain name: the host part of the URL
index.html?: the file being requested
name=goudan&password=123: the parameters carried by a GET request
#lala: the anchor
:80/: the port
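The breakdown above can be checked with urllib.parse.urlparse. Since the original domain was stripped from the post, example.com stands in for it here as a hypothetical host:

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical URL assembled from the parts listed above
# (example.com is a stand-in for the stripped domain).
url = 'http://www.example.com:80/index.html?name=goudan&password=123#lala'

parts = urlparse(url)
print(parts.netloc)    # 'www.example.com:80' -> domain + port
print(parts.port)      # 80
print(parts.path)      # '/index.html' -> the file
print(parts.query)     # 'name=goudan&password=123' -> GET parameters
print(parts.fragment)  # 'lala' -> the anchor
print(parse_qs(parts.query))
```

parse_qs further splits the query string into a dictionary of parameter names to value lists.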

Using the packet-capture tool with urlopen

import urllib.request

url = ''  # the URL was stripped when the post was published; the notes below suggest it was Baidu's homepage
response = urllib.request.urlopen(url=url)  # send the request
# print(response)                     # shows that response is an object
# print(response.read())              # get the content of the object (bytes)
# print(response.getheaders())        # get the headers, as a list of tuples
# print(dict(response.getheaders()))  # show the getheaders() result as a dictionary
# print(response.getcode())           # get the status code
# print(response.readlines())         # read line by line; returns a list, every item is bytes
'''
read() returns the content in binary (bytes) format, so the binary
content has to be converted to a string:
1. encode(): string -> bytes
2. decode(): bytes  -> string
With no argument in the parentheses the codec defaults to utf-8; pass
'gbk' explicitly if the page uses it. Before converting the content of
the object, check the page's declared encoding.
'''
with open('baidu.html', 'w', encoding='utf-8') as fp:
    fp.write(response.read().decode('utf-8'))
# The fetched content is now saved in 'baidu.html'; the format is HTML,
# and opening the file shows the Baidu homepage.

# A response body can only be read once, so re-open the URL before
# saving a second copy:
response = urllib.request.urlopen(url=url)
with open('baidu1.html', 'wb') as fp:  # 'wb': write the raw bytes directly
    fp.write(response.read())
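A minimal sketch of the str <-> bytes round trip described in the comment above (the sample string is my own, not from the original post):

```python
# encode(): string -> bytes; decode(): bytes -> string.
text = '百度一下'               # a Chinese string, like the Baidu page content
data = text.encode()            # str -> bytes, utf-8 by default
back = data.decode('utf-8')     # bytes -> str
gbk_data = text.encode('gbk')   # some pages use GBK; check the page's charset first

print(type(data), type(back))   # bytes vs str
print(data == gbk_data)         # the two encodings produce different bytes
```

The same character string yields different byte sequences under utf-8 and gbk, which is why decoding with the wrong codec garbles the page.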

urllib.request / urllib.parse: building the request object

First, copy the image's address

image_url = ''  # the image URL was stripped when the post was published
response = urllib.request.urlopen(image_url)
# images can only be written locally in binary format
with open('qing.jpg', 'wb') as fp:
    fp.write(response.read())

With urlopen(url) you can save the image locally in this way.

The second method:

image_url = ''

With urllib.request.urlretrieve(url, image_path) the download is written directly to the local file.
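A sketch of the urlretrieve call. Since the original image URL was stripped from the post, this demo stays offline by retrieving a file:// URL of a local stand-in file; with a real image URL the call is identical:

```python
import os
import tempfile
import urllib.request
from pathlib import Path

# Create a local stand-in "image" (fake bytes, for demonstration only).
src = os.path.join(tempfile.gettempdir(), 'demo_src.jpg')
with open(src, 'wb') as fp:
    fp.write(b'\xff\xd8fake-jpeg-bytes')

# urlretrieve(url, image_path): download the URL straight into the file.
dst = os.path.join(tempfile.gettempdir(), 'qing.jpg')
urllib.request.urlretrieve(Path(src).as_uri(), dst)

with open(dst, 'rb') as fp:
    saved = fp.read()
print(len(saved), 'bytes written')
```

Compared with the first method, there is no explicit open/read/write: urlretrieve handles the binary copy itself.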


import urllib.parse
image_url = ',10000&q=a80&n=0&g=0n&fmt=jpeg?sec=1638521018&t=9ae6d69f2f0ed50aba91c06a1685e596'

Using the encoding function parse.quote() and the decoding function parse.unquote()

A URL may only be composed of specific characters: letters, digits and underscores.
Anything else, such as $, spaces or Chinese characters, is an illegal format for a URL and needs to be encoded; this is done with parse.quote.

url = '狗蛋&pwd=12345'           # contains Chinese ("goudan"), so it is an illegal URL format
ret = urllib.parse.quote(url)    # encoding function
res = urllib.parse.unquote(ret)  # decoding function (decode the encoded string, not the original)

You can also use an online encoding tool for this (a Baidu search turns up several):
quote: URL-encoding function; converts Chinese characters into %XX sequences
unquote: URL-decoding function; converts %XX sequences back into the original characters
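A small runnable sketch of the quote/unquote round trip, using the same hypothetical query string as above (safe='=&' keeps the separators readable; by default quote only leaves '/' unescaped):

```python
from urllib.parse import quote, unquote

raw = '狗蛋&pwd=12345'           # Chinese characters are not URL-legal
encoded = quote(raw, safe='=&')  # percent-encode everything except = and &
decoded = unquote(encoded)       # reverse the encoding

print(encoded)  # '%E7%8B%97%E8%9B%8B&pwd=12345'
print(decoded)  # '狗蛋&pwd=12345'
```

Each Chinese character becomes three %XX groups because utf-8 encodes it as three bytes.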


import urllib.parse

url = ''  # the URL was stripped when the post was published
# This url needs parameters when the request is sent: name, age, sex and
# height, so they have to be spliced onto it when writing the code.
name = 'goudan'
age = 18
sex = 'boy'
height = 180

# Splice by way of the contents of a dictionary:
data = {
    'name': 'goudan',
    'age': 18,
    'sex': 'boy',
    'height': 180,
}
# Traverse the dictionary and join the pairs by hand:
it = []
for k, v in data.items():
    it.append(k + '=' + str(v))
query_string = '&'.join(it)
url = url + '?' + query_string

# However, urllib already provides a ready-made function for this
# (thanks to Python's library developers):
query_string = urllib.parse.urlencode(data)

urlencode() takes a dictionary and produces exactly the query_string built by hand above; and since urlencode() already percent-encodes illegal characters, there is no need to worry about illegal-character problems when using it.
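For comparison, a self-contained sketch of urlencode with the same hypothetical parameters, including a Chinese value to show the built-in percent-encoding:

```python
from urllib.parse import urlencode

data = {'name': 'goudan', 'age': 18, 'sex': 'boy', 'height': 180}
query_string = urlencode(data)          # dict -> query string
print(query_string)                      # 'name=goudan&age=18&sex=boy&height=180'

chinese = urlencode({'wd': '狗蛋'})      # illegal characters are encoded automatically
print(chinese)                           # 'wd=%E7%8B%97%E8%9B%8B'
```

Note that urlencode also converts non-string values (like the integers here) to strings on the fly.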

Copyright notice
Author [E iosers]. Please include the original link when reprinting, thank you.
