In real-world applications, a web crawler fetches page data in the following steps:
(1) Simulate a browser and send a request
(2) Receive the response (fetch the page): i.e. obtain HTML, CSS, JSON, images, audio, video, and other content
(3) Parse the content (extract information): regular expressions, or third-party parsing libraries (BeautifulSoup, pyquery, etc.)
(4) Save the data: to a database (MySQL, MongoDB, Redis, etc.) or to files in txt, csv, json, xml… formats
Note: this article mainly covers steps (1) and (2) above.
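To make the whole pipeline concrete, here is a minimal sketch of steps (3) and (4), run against a hard-coded HTML snippet standing in for a real response body; the snippet, field names, and use of JSON output are illustrative assumptions, not part of the original text:

```python
import json
import re

# Hypothetical HTML, standing in for the body fetched in steps (1)-(2)
html = ('<html><head><title>Example Page</title></head>'
        '<body><a href="/a">A</a><a href="/b">B</a></body></html>')

# Step (3): extract information with regular expressions
title = re.search(r'<title>(.*?)</title>', html).group(1)
links = re.findall(r'href="(.*?)"', html)

# Step (4): serialize the extracted data as one JSON record
record = json.dumps({"title": title, "links": links}, ensure_ascii=False)
print(record)
```

In practice step (3) is usually done with BeautifulSoup or pyquery rather than raw regular expressions, and step (4) would write to a file or database instead of printing.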
Method 1: requests.get (fetch only, no parser)
# python3.6
import requests

url = ""
headers = {"user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/99.100.4811.0 Safari/537.36"}
res = requests.get(url, headers=headers)
str_content = res.text
byte_content = res.content
print(type(str_content), type(byte_content))  # <class 'str'> <class 'bytes'>
Method 2: requests.get plus a parser (BeautifulSoup)
Note: requests.get itself takes no parser argument. In a call like requests.get(url, 'lxml', headers=headers), the string 'lxml' is silently treated as the params argument and appended to the query string. The parser belongs to BeautifulSoup, which is handed the fetched text:

# python3.6
import requests
from bs4 import BeautifulSoup

url = ""
headers = {"user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/99.100.4811.0 Safari/537.36"}
res = requests.get(url, headers=headers)
str_content = res.text
byte_content = res.content
soup = BeautifulSoup(str_content, 'lxml')  # the parser is specified here, not in requests.get
print(str_content)
print(type(str_content), type(byte_content))  # <class 'str'> <class 'bytes'>
Method 3: urllib.request.urlopen
# python3.6
from urllib import request

url = ""
headers = {"user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/99.100.4811.0 Safari/537.36"}
res = request.urlopen(request.Request(url, headers=headers))
# Use ONE of the next two lines: read() consumes the response stream, so a second read() returns b''
bytes_content = res.read()
str_content = res.read().decode('utf-8')  # read() returns bytes, which need utf-8 decoding
print(type(bytes_content), type(str_content))  # <class 'bytes'> <class 'str'>
Method 4: urllib3
# python3.6
import urllib3

url = ""
headers = {"user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/99.100.4811.0 Safari/537.36"}
http = urllib3.PoolManager()
res = http.request('GET', url, headers=headers)
bytes_content = res.data
str_content = res.data.decode('utf-8')  # res.data is bytes and needs decoding
print(bytes_content)
print(type(bytes_content), type(str_content))  # <class 'bytes'> <class 'str'>
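In practice, the urllib3 call above is usually given explicit timeout and retry defaults so a slow or flaky server does not hang the crawler; a minimal sketch, where the specific numbers are arbitrary examples and not recommendations from this article:

```python
import urllib3

# A pool manager with default timeout and retry policies; every request
# made through it inherits these settings.
http = urllib3.PoolManager(
    timeout=urllib3.Timeout(connect=3.0, read=10.0),   # seconds before giving up
    retries=urllib3.Retry(total=3, backoff_factor=0.5),  # retry transient failures
)
# Then fetch exactly as before:
# res = http.request('GET', url, headers=headers)
```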