First, a screenshot of the result.
Imports:

import re
import requests
import os

The crawler needs to make network requests, so these packages are required. re and os ship with the Python standard library; requests is the only one you may need to install, and it can be installed directly with pip (pip install requests). If you don't even know what pip is, you should first spend some time on Python basics.
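If you want a quick sanity check that the one third-party dependency is available, a minimal sketch like the following works; it is not part of the original script, and the exact message is up to you.

# Minimal sketch: make sure the one third-party dependency is installed.
# re and os ship with Python, so only requests can be missing.
try:
    import requests
except ImportError:
    raise SystemExit('requests is missing; install it with "pip install requests"')

import re
import os

print('requests version:', requests.__version__)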
Request headers:

headers = {'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.125 Safari/537.36'}

The User-Agent simply makes the request look like it comes from an ordinary browser.
The full request:
url = 'https://image.baidu.com/search/flip?tn=baiduimage&ie=utf-8&word='+name+'+&pn='+str(i*30)
result = requests.get(url,headers=headers)
dowmloadPic(result.content.decode(), name, i)
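As an aside, here is a small sketch of how the same URL can be assembled with the keyword explicitly percent-encoded; build_page_url is an illustrative helper, not from the original post, and it assumes the flip endpoint pages in steps of 30 results via the pn parameter (which is how the loop above uses it).

# Sketch: build the search URL for page i with an explicitly encoded keyword.
from urllib.parse import quote

def build_page_url(keyword, page_index):
    # quote() percent-encodes the (possibly Chinese) keyword for the query string
    return ('https://image.baidu.com/search/flip?tn=baiduimage&ie=utf-8'
            '&word=' + quote(keyword) + '&pn=' + str(page_index * 30))

print(build_page_url('风景', 0))  # the Chinese keyword comes out percent-encoded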
The regular expression:
pic_url = re.findall('"objURL":"(.*?)",',html,re.S)
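To make the pattern concrete, here is a self-contained sketch run against an invented fragment of the page source (sample_html is made up purely for illustration); the non-greedy group grabs everything between "objURL":" and the next ",.

# Sketch: what the objURL pattern captures from a made-up page fragment.
import re

sample_html = (
    '"thumbURL":"https://img.example.com/thumb/1.jpg",'
    '"objURL":"https://img.example.com/full/1.jpg",'
    '"objURL":"https://img.example.com/full/2.jpg",'
)

pic_url = re.findall('"objURL":"(.*?)",', sample_html, re.S)
print(pic_url)
# ['https://img.example.com/full/1.jpg', 'https://img.example.com/full/2.jpg']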
Downloading the image:
fp = open(dir, 'wb')
fp.write(pic.content)
fp.close()
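The same write step can be wrapped a little more defensively; this is only a sketch (save_image is an illustrative name, not from the original code): it checks the HTTP status, creates the folder if needed, and uses a with block so the file is closed even if the write fails.

# Sketch: a slightly more defensive version of the save step.
import os

def save_image(pic, path):
    # pic is the requests.Response returned by requests.get(each, timeout=10)
    if pic.status_code != 200:
        print('skipped, HTTP status ' + str(pic.status_code))
        return
    folder = os.path.dirname(path)
    if folder and not os.path.exists(folder):
        os.makedirs(folder)
    with open(path, 'wb') as fp:  # closed automatically, even on error
        fp.write(pic.content)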
The complete code:
#!/usr/bin/python
# -*- coding: UTF-8 -*-
import re
import requests
import os


def dowmloadPic(html, keyword, i):
    # pull every full-size image URL (objURL) out of the page source
    pic_url = re.findall('"objURL":"(.*?)",', html, re.S)
    abc = i * 60  # running counter used in the saved file name
    print('找到关键词:' + keyword + '的图片,现在开始下载图片...')
    for each in pic_url:
        print('正在下载第' + str(abc) + '张图片,图片地址:' + str(each))
        try:
            pic = requests.get(each, timeout=10)
        except requests.exceptions.ConnectionError:
            print('【错误】当前图片无法下载')
            continue
        # save as D:\image\i<keyword>_<counter>.jpg
        dir = r'D:\image\i' + keyword + '_' + str(abc) + '.jpg'
        if not os.path.exists(r'D:\image'):
            os.makedirs(r'D:\image')
        fp = open(dir, 'wb')
        fp.write(pic.content)
        fp.close()
        abc += 1


if __name__ == '__main__':
    # word = input("Input key word: ")
    headers = {'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.125 Safari/537.36'}
    name = "清纯妹子私房照"
    num = 0
    x = 1
    for i in range(int(x)):
        url = 'https://image.baidu.com/search/flip?tn=baiduimage&ie=utf-8&word=' + name + '+&pn=' + str(i * 30)
        print(url)
        result = requests.get(url, headers=headers)
        # decode() so re.findall gets a str, not bytes;
        # pass the page index so file names do not collide across pages
        dowmloadPic(result.content.decode(), name, i)
    print("下载完成")
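The commented-out input() line suggests the keyword was meant to be typed in at run time. Below is a sketch of an alternative __main__ block for the script above that asks for the keyword and the number of pages instead of hard-coding them; it reuses dowmloadPic, the imports, and the same headers, and assumes about 30 results per pn step.

# Sketch: alternative entry point that asks for the keyword and page count.
if __name__ == '__main__':
    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
                             '(KHTML, like Gecko) Chrome/84.0.4147.125 Safari/537.36'}
    name = input('Input key word: ')
    pages = int(input('How many pages (about 30 images per page): '))
    for i in range(pages):
        url = ('https://image.baidu.com/search/flip?tn=baiduimage&ie=utf-8'
               '&word=' + name + '&pn=' + str(i * 30))
        result = requests.get(url, headers=headers)
        dowmloadPic(result.content.decode(), name, i)
    print('Done')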
I have never craved knowledge this much before; it's the first time I've truly felt the power of knowledge!!!