Since the previous post, Z Story: Using Django with GAE Python to crawl the full text of pages from multiple sites in the background, progress has been roughly as follows:
1. Added a cron job: it wakes a task every 30 minutes, which visits the specified blogs and crawls their latest updates.
2. Used Google's Datastore to store what each crawl brings back, keeping only new content. As mentioned last time, this gives a big performance boost: previously the crawler was only woken up by an incoming request, so it took about 17 seconds for results to travel from the back end to the front end; now it takes under 2 seconds (the front-end read is sketched just after this list).
3. Optimized the crawler.
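For context on that speed-up: the front-end view now just reads rows back out of the Datastore instead of crawling on every request. A minimal sketch, assuming the PostsDB model introduced in section 2 below (the view name, import path, and ordering are my assumptions, not the original code):

from django.http import HttpResponse
from models import PostsDB

def showPosts(request):
    # Serve straight from the Datastore cache; nothing is crawled per request.
    posts = PostsDB.gql("ORDER BY date DESC").fetch(50)
    lines = [u"%s %s" % (p.date, p.title) for p in posts]
    return HttpResponse(u"<br/>".join(lines))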
1. cron.yaml: scheduling when each task wakes up
After combing through the docs and asking around, I finally figured out how Google's cron actually works: at the interval you specify, Google simply makes a virtual visit to a URL that you designate.
Under Django, therefore, there is no need to write a standalone pure-Python script, and in particular you must not write:
if __name__=="__main__":
All you need is to configure a URL and put the handler in views.py:
def updatePostsDB(request):
    # deleteAll()
    SiteInfos = []

    SiteInfo = {}
    SiteInfo['PostSite'] = "L2ZStory"
    SiteInfo['feedurl'] = "feed://l2zstory.wordpress.com/feed/"
    SiteInfo['blog_type'] = "wordpress"
    SiteInfos.append(SiteInfo)

    SiteInfo = {}
    SiteInfo['PostSite'] = "YukiLife"
    SiteInfo['feedurl'] = "feed://blog.sina.com.cn/rss/1583902832.xml"
    SiteInfo['blog_type'] = "sina"
    SiteInfos.append(SiteInfo)

    SiteInfo = {}
    SiteInfo['PostSite'] = "ZLife"
    SiteInfo['feedurl'] = "feed://ireallife.wordpress.com/feed/"
    SiteInfo['blog_type'] = "wordpress"
    SiteInfos.append(SiteInfo)

    SiteInfo = {}
    SiteInfo['PostSite'] = "ZLife_Sina"
    SiteInfo['feedurl'] = "feed://blog.sina.com.cn/rss/1650910587.xml"
    SiteInfo['blog_type'] = "sina"
    SiteInfos.append(SiteInfo)

    try:
        for site in SiteInfos:
            feedurl = site['feedurl']
            blog_type = site['blog_type']
            PostSite = site['PostSite']
            PostInfos = getPostInfosFromWeb(feedurl, blog_type)
            recordToDB(PostSite, PostInfos)
        Msg = "Cron Job Done..."
    except Exception, e:
        Msg = str(e)
    return HttpResponse(Msg)
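getPostInfosFromWeb and recordToDB are my own helpers; their source is not reproduced in this post. As a rough idea of the feed half, here is a minimal sketch using the feedparser library (the feed:// rewriting and the field names are my assumptions, not the original code):

import feedparser

def getPostInfosFromWeb(feedurl, blog_type):
    # urlfetch only speaks http, so rewrite the feed:// scheme first
    feedurl = feedurl.replace('feed://', 'http://')
    # blog_type would switch between wordpress- and sina-specific handling;
    # that branching is omitted in this sketch
    PostInfos = []
    parsed = feedparser.parse(feedurl)
    for entry in parsed.entries:
        info = {}
        info['link'] = entry.get('link', '')
        info['title'] = entry.get('title', '')
        info['date'] = entry.get('updated', '')
        info['desc'] = entry.get('description', '')
        PostInfos.append(info)
    return PostInfos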
cron.yaml must be placed at the same level as app.yaml:
cron:
- description: retrieve newest posts
url: /task_updatePosts/
schedule: every 30 minutes
In urls.py, all you have to do is point this URL (/task_updatePosts/) at the updatePostsDB view.
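A minimal urls.py entry might look like this; the myapp.views module path is a placeholder for wherever the view actually lives:

from django.conf.urls.defaults import patterns

urlpatterns = patterns('',
    (r'^task_updatePosts/$', 'myapp.views.updatePostsDB'),
)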
Debugging this cron was nothing short of brutal. Stack Overflow is full of people asking why their cron won't run; at first I was just as stuck, sweating and getting nowhere. I got it working in the end, and though the steps sound generic, they are solid:
First, make absolutely sure the program has no syntax errors. Then try visiting the task URL manually; if cron is configured correctly, the job should run at that moment. If all else fails, read the logs.
2. Configuring and Using the Datastore with Django
My requirements here are simple: no joins, so I went straight for the bare-bones django-helper.
This models.py is the key piece:
from appengine_django.models import BaseModel
from google.appengine.ext import db

class PostsDB(BaseModel):
    link = db.LinkProperty()
    # the remaining properties (reconstructed) mirror the fields
    # the crawler stores for each post
    title = db.StringProperty()
    date = db.StringProperty()
    description = db.TextProperty()
    postSite = db.StringProperty()
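recordToDB, called from the cron view above, is what enforces the "store only new content" rule. Its source is not shown here either; a minimal sketch against the PostsDB model, keyed on the post link (my assumption of the logic):

def recordToDB(PostSite, PostInfos):
    for info in PostInfos:
        # skip any post whose link is already in the Datastore
        if PostsDB.gql("WHERE link = :1", info['link']).get():
            continue
        post = PostsDB(link=info['link'],
                       title=info['title'],
                       date=info['date'],
                       description=info['desc'],
                       postSite=PostSite)
        post.put()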
3. Optimizing the crawler

The Sina-side crawler, runnable on its own for testing, looks like this:

import urllib
# from BeautifulSoup import BeautifulSoup
from pyquery import PyQuery as pq

def getArticleList(url):
    lstArticles = []
    url_prefix = url[:-6]
    Cnt = 1
    response = urllib.urlopen(url)
    html = response.read()
    d = pq(html)
    try:
        pageCnt = d("ul.SG_pages").find('span')
        pageCnt = int(d(pageCnt).text()[1:-1])
    except:
        pageCnt = 1
    for i in range(1, pageCnt + 1):
        url = url_prefix + str(i) + ".html"
        # print url
        response = urllib.urlopen(url)
        html = response.read()
        d = pq(html)
        title_spans = d(".atc_title").find('a')
        date_spans = d('.atc_tm')
        for j in range(0, len(title_spans)):
            titleObj = title_spans[j]
            dateObj = date_spans[j]
            article = {}
            article['link'] = d(titleObj).attr('href')
            article['title'] = d(titleObj).text()
            article['date'] = d(dateObj).text()
            article['desc'] = getPageContent(article['link'])
            lstArticles.append(article)
    return lstArticles

def getPageContent(url):
    # get Page Content
    response = urllib.urlopen(url)
    html = response.read()
    d = pq(html)
    pageContent = d("div.articalContent").text()
    # print pageContent
    return pageContent

def main():
    # the last assignment wins; the earlier lines are kept as switchable targets
    url = 'http://blog.sina.com.cn/s/articlelist_1191258123_0_1.html'  # Han Han
    url = "http://blog.sina.com.cn/s/articlelist_1225833283_0_1.html"  # Gu Du Chuan Ling
    url = "http://blog.sina.com.cn/s/articlelist_1650910587_0_1.html"  # Feng
    url = "http://blog.sina.com.cn/s/articlelist_1583902832_0_1.html"  # Yuki
    lstArticles = getArticleList(url)
    for article in lstArticles:
        f = open("blogs/" + article['date'] + "_" + article['title'] + ".txt", 'w')
        f.write(article['desc'].encode('utf-8'))  # note: Chinese text must be encoded explicitly
        f.close()
        # print article['desc']

if __name__ == '__main__':
    main()