* [http://infolab.stanford.edu/~backrub/google.html Sergey Brin and Lawrence Page : The Anatomy of a Large-Scale Hypertextual Web Search Engine]

JA: [[User:Jon Awbrey|Jon Awbrey]] 09:08, 27 January 2009 (PST)
{| align="center" width="90%"
|
<p>'''2.1.2. Intuitive Justification'''</p>
<p>PageRank can be thought of as a model of user behavior. We assume there is a "random surfer" who is given a web page at random and keeps clicking on links, never hitting "back" but eventually gets bored and starts on another random page. The probability that the random surfer visits a page is its PageRank. And, the d damping factor is the probability at each page the "random surfer" will get bored and request another random page. One important variation is to only add the damping factor d to a single page, or a group of pages. This allows for personalization and can make it nearly impossible to deliberately mislead the system in order to get a higher ranking. We have several other extensions to PageRank, again see [Page 98].</p>
<p>Another intuitive justification is that a page can have a high PageRank if there are many pages that point to it, or if there are some pages that point to it and have a high PageRank. Intuitively, pages that are well cited from many places around the web are worth looking at. Also, pages that have perhaps only one citation from something like the Yahoo! homepage are also generally worth looking at. If a page was not high quality, or was a broken link, it is quite likely that Yahoo's homepage would not link to it. PageRank handles both these cases and everything in between by recursively propagating weights through the link structure of the web.</p>
|}
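
For readers who want to see the mechanics behind the quoted description, the paper (Section 2.1.1) gives the rank of a page <math>A</math> as <math>PR(A) = (1-d) + d\left(\frac{PR(T_1)}{C(T_1)} + \cdots + \frac{PR(T_n)}{C(T_n)}\right)</math>, where <math>T_1, \ldots, T_n</math> are the pages that link to <math>A</math>, <math>C(T)</math> is the number of links going out of <math>T</math>, and <math>d</math> is the damping factor. The sketch below is a rough illustration of computing these ranks by simple iteration; the example graph, function name, tolerance, and the lack of special handling for pages with no outgoing links are assumptions made here for illustration, not code from the paper.

<pre>
# Rough sketch of PageRank by iteration, assuming the formulation
#   PR(A) = (1 - d) + d * sum(PR(T) / C(T)) over pages T linking to A.
# The example graph, tolerance, and iteration cap are illustrative only;
# pages with no outgoing links are not given special treatment here.

def pagerank(links, d=0.85, tol=1e-6, max_iter=100):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    pr = {p: 1.0 for p in pages}  # start every page with rank 1
    for _ in range(max_iter):
        new_pr = {}
        for p in pages:
            # contribution from each page q that links to p,
            # divided by q's outgoing-link count C(q)
            incoming = sum(pr[q] / len(links[q])
                           for q in pages if p in links[q])
            new_pr[p] = (1 - d) + d * incoming
        if max(abs(new_pr[p] - pr[p]) for p in pages) < tol:
            return new_pr
        pr = new_pr
    return pr

# Tiny three-page example: B and C both link to A, A links back to B.
example_links = {"A": ["B"], "B": ["A"], "C": ["A"]}
print(pagerank(example_links))
</pre>

In this toy example the page with the most incoming links ("A") ends up with the highest rank, while the page nothing links to ("C") settles at roughly <math>1-d</math>, matching the intuition in the quoted passage that well-cited pages propagate weight to the pages they point to.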