need to write a simple web crawler

Hi,

I am a student and I need to write a simple web crawler using Python, and I need some guidance on how to start. I need to crawl web pages using both BFS and DFS, one using a stack and the other using a queue.

I will try it on obsolete web pages only, so that I can learn how it is done. I have taken a course called Search Engines and need some help with this.

Help of any kind would be appreciated.

Thank you
Sep 16 '06 #1
kudos
It's quite easy, actually. You need a way to parse an HTML page (which is found in the Python standard library), and, as you pointed out in your post, breadth-first search (BFS) and depth-first search (DFS). You also need some kind of structure to record whether you have visited a certain page before (a set or a dict works well).

Let's use a Python list and start on a certain page (www.thescripts.com? :). Note that popping from the end of the list makes it a stack, which gives you DFS; for BFS you would pop from the front instead.

visited = set()                          # records pages we have already seen
stack = ["http://www.thescripts.com"]

while len(stack) > 0:
    currpage = stack.pop()               # pop from the end: LIFO, so this is DFS
    visited.add(currpage)                # mark the page as visited
    links = findlinks(currpage)          # assumed helper that returns all links on the page
    # here you can do whatever you want with the page, like finding some text,
    # downloading some images, etc.
    # push all the unvisited links onto the stack
    for l in links:
        if l not in visited:
            stack.append(l)

This is still close to pseudocode, since I haven't got a Python interpreter here (findlinks is left for you to write). If you still need it, I could write you a simple crawler.
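
For the BFS version, here is a minimal sketch using collections.deque so that taking pages from the front of the queue is cheap (findlinks is the same assumed helper as above):

from collections import deque

visited = set()
queue = deque(["http://www.thescripts.com"])

while queue:
    currpage = queue.popleft()           # pop from the front: FIFO, so this is BFS
    visited.add(currpage)
    for l in findlinks(currpage):        # assumed helper, as in the DFS sketch
        if l not in visited:
            visited.add(l)               # mark on enqueue so each page is queued only once
            queue.append(l)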

-kudos



Sep 17 '06 #2
squzer
Hi friend, I am also involved in developing a crawler. Please share the ideas you got.
Jun 18 '07 #3
kudos
Hi friend, I am also involved in developing a crawler. Please share the ideas you got.
Hi, what do you want to get from your crawl?

-kudos
Jun 18 '07 #4
I am looking for one that will read from a list of URLs, crawl them for certain text words, and then list the results.
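
A minimal sketch of that idea, assuming a plain-text file urls.txt with one URL per line and a hard-coded keyword list (both are placeholders):

import urllib2

keywords = ["python", "crawler"]          # placeholder search terms

for line in open("urls.txt"):             # assumed input file: one URL per line
    url = line.strip()
    if not url:
        continue
    try:
        html = urllib2.urlopen(url).read().lower()
    except IOError:
        print "could not fetch " + url
        continue
    found = [w for w in keywords if w in html]
    if found:
        print url + ": " + ", ".join(found)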
Aug 6 '07 #5
I am also trying to do that, but my crawler takes a hell of a lot of time to crawl. I have done it in Python. Can you folks give me some clue?
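
Most of a crawler's time goes into waiting on the network, so fetching several pages at once usually helps. A rough sketch using threads and a shared queue (the URLs and the per-page work are just placeholders):

import threading, urllib2, Queue

urls = Queue.Queue()
for u in ["http://bytes.com/", "http://www.python.org/"]:   # placeholder URLs
    urls.put(u)

def worker():
    while True:
        try:
            url = urls.get_nowait()
        except Queue.Empty:
            return                        # queue drained, this thread is done
        try:
            html = urllib2.urlopen(url).read()
            print url + " fetched, " + str(len(html)) + " bytes"
        except IOError:
            print "failed: " + url

threads = [threading.Thread(target=worker) for i in range(4)]   # four fetchers in parallel
for t in threads:
    t.start()
for t in threads:
    t.join()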
Nov 12 '07 #6
dazzler
I have also written a crawler that parses URLs from HTML. I think Python's HTML parser modules only work with clean and valid HTML code... and the net is full of dirty HTML! So get ready to write your own HTML parser =)
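
An alternative to hand-rolling a parser is a lenient third-party one; BeautifulSoup, for instance, is built to tolerate broken markup. A small sketch extracting links (the URL is a placeholder):

from BeautifulSoup import BeautifulSoup   # third-party package, not in the stdlib
import urllib2

html = urllib2.urlopen("http://bytes.com/").read()   # placeholder URL
soup = BeautifulSoup(html)                 # copes with dirty/invalid markup
for a in soup.findAll("a"):
    href = a.get("href")
    if href:
        print href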
Nov 12 '07 #7
heiro
I'm very interested in how a web crawler works. Would you mind if I ask for some sample code, so that I could study it and later make my own?
Nov 24 '07 #8
Hi, I am trying to make a crawler and find the most frequent keywords across the pages of one site... any ideas?
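
One rough way to do that: strip the tags, split the remaining text into words, and count them. A sketch under those assumptions (the URL and the tag-stripping regex are simplistic placeholders):

import urllib2, re

html = urllib2.urlopen("http://bytes.com/").read()   # placeholder URL
text = re.sub(r"<[^>]+>", " ", html)                 # crude tag stripping
words = re.findall(r"[a-z]+", text.lower())

freq = {}
for w in words:
    freq[w] = freq.get(w, 0) + 1

# print the ten most frequent words
for w, n in sorted(freq.items(), key=lambda item: item[1], reverse=True)[:10]:
    print w, n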
Mar 29 '08 #9
urgent
Hi, I need to write a simple crawler too. It must be able to capture webpages from a certain site, for example www.CNN.com, and it must also parse those HTML webpages. I need some sample code, please, urgently, to help me with my project.
Apr 4 '08 #10
Here is a simple HTML parser that looks for thumbnail tags and prints the thumbnail information:

import urllib2, sgmllib


class ImageScraper(sgmllib.SGMLParser):

    def __init__(self):
        sgmllib.SGMLParser.__init__(self)
        self.href = ''

    def start_a(self, attrs):
        # remember the href of the link we are currently inside
        for tag, value in attrs:
            if tag == 'href':
                self.href = value

    def end_a(self):
        self.href = ''

    def start_img(self, attrs):
        # an <img> inside an <a> is treated as a thumbnail
        if self.href:
            print "#####################################"
            print "IMAGE URL: " + self.href
            for tag, value in attrs:
                if tag == 'src':
                    print "THUMBNAIL SRC: " + value
                elif tag == "width":
                    print "THUMBNAIL WIDTH: " + value
                elif tag == "height":
                    print "THUMBNAIL HEIGHT: " + value
                elif tag == "alt":
                    print "THUMBNAIL NAME: " + value
                elif tag == "border":
                    print "THUMBNAIL BORDER: " + value
            print "#####################################\n"


url = "http://bytes.com/"

sock = urllib2.urlopen(url)
page = sock.read()
sock.close()

parser = ImageScraper()    # instantiate the class defined above
parser.feed(page)
parser.close()
Apr 4 '08 #11
Hi, what do you want to get from your crawl?

-kudos
hi kudos,

I want to write a crawler which will fetch data like company name, turnover, and the products they work on, and store it into my database.

Actually, I have to submit a project. I have made a simple HTML-tag-based crawler, but I want to make a dynamic simple web crawler.

your help is required!!!

Thanks in advance!!!

Varun
Jul 1 '08 #12
kudos
OK, with webcrawlers there are usually a lot of 'ifs', but I have sketched out a very simple webcrawler that illustrates the idea (with comments!)

# webcrawler
# this is basically a shell, illustrating the breadth-first type of webcrawler
# you have to add the code for extracting the actual info from each page yourself
# all it currently does is print the url of each page and the number of candidates to visit

import urllib

page = "http://bytes.com"   # start page
queue = [page]              # we pop from the front of this list, so it is a FIFO queue (BFS)
visit = {page: 1}           # keeps track of pages that we visited, to avoid loops
stopvar = 5                 # exit after visiting this many pages; obviously we do not
                            # want to visit all pages of the internet :)

while queue and stopvar >= 0:
    stopvar -= 1
    cpage = queue.pop(0)    # take the oldest candidate first: breadth-first
    f = urllib.urlopen(cpage)
    html = f.read()
    f.close()
    sp = "a href=\""

    # you would extract things from the html code (such as images, text, etc.) around here
    # the rest just extracts hyperlinks and puts them into the queue, so we can
    # continue to visit pages

    start = 0
    while True:
        pos = html.find(sp, start)
        if pos == -1:
            break                    # no more links on this page
        pos += len(sp)
        end = html.find("\"", pos)
        if end == -1:
            break                    # unterminated attribute, give up on this page
        url = html[pos:end]
        start = end + 1
        # is our link a local link, or a global link? i leave local links as an exercise :)
        if url[0:4] == "http" and url not in visit:
            queue.append(url)
            visit[url] = 1
    print str(len(queue)) + " " + cpage
-kudos
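
For the database side Varun asked about, here is a minimal sqlite3 sketch (the file name, table, and fields are placeholders; the values would come from whatever your crawler extracts):

import sqlite3

conn = sqlite3.connect("companies.db")            # placeholder database file
conn.execute("""CREATE TABLE IF NOT EXISTS company
                (name TEXT, turnover TEXT, product TEXT)""")

def save(name, turnover, product):
    # insert one crawled record into the table
    conn.execute("INSERT INTO company VALUES (?, ?, ?)",
                 (name, turnover, product))
    conn.commit()

save("Example Corp", "unknown", "widgets")        # placeholder row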
Jul 19 '08 #13
Try Scrapy, a very powerful (and well documented) framework for writing web crawlers (and screen scrapers) in Python.
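
For reference, a minimal spider sketch in the style of more recent Scrapy releases, collecting every link it sees (the spider name, start URL, and output field are placeholders):

import scrapy

class LinkSpider(scrapy.Spider):
    name = "links"                         # placeholder spider name
    start_urls = ["http://bytes.com/"]     # placeholder start page

    def parse(self, response):
        # yield each link on the page as a result item
        for href in response.css("a::attr(href)").extract():
            yield {"url": response.urljoin(href)}

You can run it without a full project via: scrapy runspider spider.py -o links.json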
Nov 21 '09 #14
