
A Simple Python Crawler: Three Ways to Fetch a Web Page

The Python 2 script below fetches the same page in three ways with urllib2: a plain urlopen call, a Request carrying a custom User-Agent header, and an opener installed with cookie handling from cookielib.

# coding: utf-8
# Three ways to fetch a page with the Python 2 urllib2 module.
import urllib2
import cookielib

url = "http://www.baidu.com"

# Method 1: fetch the page with a plain urlopen call
print 'Method 1'
response1 = urllib2.urlopen(url)
print response1.getcode()      # HTTP status code
print len(response1.read())    # length of the page body

# Method 2: build a Request and send a custom User-Agent header
print 'Method 2'
request = urllib2.Request(url)
request.add_header("User-Agent", "Mozilla/5.0")
response2 = urllib2.urlopen(request)
print response2.getcode()
print len(response2.read())

# Method 3: install an opener that keeps cookies in a CookieJar
print 'Method 3'
cj = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
urllib2.install_opener(opener)
response3 = urllib2.urlopen(url)
print response3.getcode()
print cj                       # cookies the site set during the request
print response3.read()

Output:

Method 1
200
118090
Method 2
200
118069
Method 3
200
<CookieJar[<Cookie BAIDUID=6BEEEF7E1E24A2D831C6EBE1842863C2:FG=1 for .baidu.com/>, <Cookie BIDUPSID=6BEEEF7E1E24A2D831C6EBE1842863C2 for .baidu.com/>, <Cookie H_PS_PSSID= for .baidu.com/>, <Cookie PSTM=1533609482 for .baidu.com/>, <Cookie BDSVRTM=0 for www.baidu.com/>, <Cookie BD_HOME=0 for www.baidu.com/>, <Cookie delPer=0 for www.baidu.com/>]>
<!DOCTYPE html>
<!--STATUS OK-->
... (page content omitted) ...

</body>
</html>
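
urllib2 and cookielib exist only in Python 2. Under Python 3 the same three approaches use urllib.request and http.cookiejar instead. Below is a minimal sketch of that port (same URL and variable names as the script above), offered as a reference rather than a fourth method.

# Minimal Python 3 sketch of the same three approaches, using
# urllib.request and http.cookiejar (the Python 3 replacements
# for urllib2 and cookielib).
import urllib.request
import http.cookiejar

url = "http://www.baidu.com"

# Method 1: plain urlopen
print('Method 1')
response1 = urllib.request.urlopen(url)
print(response1.getcode())
print(len(response1.read()))

# Method 2: Request with a User-Agent header
print('Method 2')
request = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
response2 = urllib.request.urlopen(request)
print(response2.getcode())
print(len(response2.read()))

# Method 3: opener that stores cookies in a CookieJar
print('Method 3')
cj = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cj))
urllib.request.install_opener(opener)
response3 = urllib.request.urlopen(url)
print(response3.getcode())
print(cj)
# Response bodies are bytes in Python 3; decode assuming the page is UTF-8.
print(response3.read().decode('utf-8', 'ignore'))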