Google Patents Search Algorithm 367
blastedtokyo writes "Google gets the first web search patent. According to this News.com.com article, Google was able to patent how they crawl and rank web pages. They claim "an improved search engine that refines a document's relevance score based on interconnectivity of the document within a set of relevant documents.""
Mis-title (Score:4, Informative)
They basically measure Web pages as either 1) portals, or 2) authorities.
Sites like Kuro5hin [kuro5hin.org] and *nix [starnix.org] have a lot of "Google juice" (i.e. weight in their ranking system) because they have so many links to other sites, while also garnering a slew of links to their main page.
Oh Please - Eugene Garfield did this in 1961 (Score:5, Informative)
And if you try to fool.. (Score:4, Informative)
Software patents (Score:4, Informative)
Kids, software patents are bad, mm-kay... [mit.edu]
Re:Mis-title (Score:5, Informative)
PageRank scores are calculated completely independently of the search query. You are probably thinking of Kleinberg's HITS (or Hubs and Authorities) algorithm, which uses an initial search query to prune the search space and then identifies hubs and authorities in the resulting subgraph. In contrast to PageRank, which propagates scores along forward links only, HITS uses both forward and "backward" links to figure out its ratings. Furthermore, unlike PageRank, HITS produces different scores for different queries.
The above tells us that Kuro5hin and Slashdot have high PageRanks not because of their large numbers of outlinks, but because many people link to their front pages. These high PageRanks in turn mean that the pages Slashdot or Kuro5hin point to get higher scores as well.
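To make the distinction concrete, here is a minimal Python sketch of the HITS iteration (my own illustration, not Google's or Kleinberg's code); the `links` dictionary and the normalization choice are assumptions for the example:

```python
def hits(links, iterations=50):
    """Kleinberg's HITS sketch. links: dict mapping page -> list of pages it links to.

    Hubs point to good authorities; authorities are pointed to by good hubs.
    """
    pages = set(links) | {p for targets in links.values() for p in targets}
    hub = {p: 1.0 for p in pages}
    auth = {p: 1.0 for p in pages}
    for _ in range(iterations):
        # Authority score: sum of hub scores of pages linking in.
        auth = {p: sum(hub[q] for q in links if p in links[q]) for p in pages}
        # Hub score: sum of authority scores of pages linked out to.
        hub = {p: sum(auth[q] for q in links.get(p, [])) for p in pages}
        # Normalize so the scores stay bounded.
        for scores in (auth, hub):
            norm = sum(v * v for v in scores.values()) ** 0.5 or 1.0
            for p in scores:
                scores[p] /= norm
    return hub, auth
```

In the real algorithm the graph would first be restricted to pages matching the query plus their neighborhood, which is exactly why HITS scores are query-dependent while PageRank's are not.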
Not necessarily... (Score:5, Informative)
Think fuel injectors [autofieldguide.com], for example, which are made by several suppliers, but have a patent holder who gets license revenue.
PATENTS AREN'T BAD - POORLY AWARDED PATENTS ARE (Score:5, Informative)
The patents we all scream about are those that are comparable to the "fragrance" - patenting the concept of the shopping cart or the concept of transferring multimedia streams over the Internet. The Patent Office violated their own rules when awarding patents like that.
Google didn't patent the concept of a ranking system, they patented the way they do it. And that is a good patent. It patents the formula and not the fragrance.
If someone can figure out how to achieve the same result with a different formula, more power to 'em!
Patent Obsession: Today's UF Topic (Score:2, Informative)
Re:Novel and innovative? (Score:3, Informative)
Some have been talking about similar techniques since before this patent was filed:
http://www.carnet.hr/cuc/cuc2000/radovi/prezent
http://citeseer.nj.nec.com/context/856618/0
Re:Oh Please - Eugene Garfield did this in 1961 (Score:5, Informative)
Of course, it's still a significant contribution to see the application of the Jacobi method to ranking web pages, and I assume that they have done some clever (and many more dirty) tricks to get more realistic results, weed out duplicate pages, etc., which may or may not be part of the patent.
In any case, the basic PageRank algorithm is quite intuitive to anyone who has worked with iterative numerical methods, and is in fact a very nice illustration of the power of such methods.
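As a toy illustration of that iterative intuition (my own sketch, not taken from the patent or from Google's code), each pass redistributes every page's score along its outlinks and repeats until the scores settle; the damping factor and the dangling-page handling below are standard textbook choices, not claims about Google's implementation:

```python
def pagerank(links, damping=0.85, iterations=100):
    """Basic iterative PageRank sketch. links: dict mapping page -> list of outlinks."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with a uniform distribution
    for _ in range(iterations):
        # Every page gets a baseline "teleport" share...
        new = {p: (1.0 - damping) / n for p in pages}
        # ...plus a share of the rank of every page linking to it.
        for page in pages:
            targets = links.get(page, [])
            if targets:
                share = damping * rank[page] / len(targets)
                for t in targets:
                    new[t] += share
            else:
                # Dangling page: spread its rank evenly over all pages.
                for p in pages:
                    new[p] += damping * rank[page] / n
        rank = new
    return rank
```

Note that the scores depend only on the link graph, never on a query, which is the point made above about PageRank versus HITS.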
How I Got Ranked Highly at Google (Score:3, Informative)
I didn't pay a search engine optimization service to make this happen. I didn't use any tricks like "doors" either. It cost me no money, but it did take time and hard work to achieve it.
I explain everything I did in How To Promote Your Business On the Internet [goingware.com].
What's my secret? No secret at all:
Other pages I have that you may find helpful are:
and finally, from my K5 diary, A Webmaster's Strange But True Tale [kuro5hin.org].
Thank you for your attention.
Patent # 6,526,440 (Score:5, Informative)
How to remove your support for Google (Score:3, Informative)
TWW
Re:OMG MORE PATENTS!!! (Score:5, Informative)
More worrying is that software patents are sometimes granted using such general language that the entity getting the patent *doesn't* really have to disclose anything, enabling them to get protection while keeping their invention secret, which is exactly the opposite of what patents were intended for -- to get duplicable knowledge into the public domain after a period of protection for the original inventor.
Why not? (Score:4, Informative)
A) The algorithm is highly useful.
B) It required a significant amount of risk and technical effort to make it worthwhile.
C) The scope of the patent really just covers what it is that they've added, i.e., the ideas that they are supposedly deriving from are not being locked up.
What more do you really need to know? Regardless of how you wish to frame the claims (that they've merely made a "context shift," or what have you), it is a worthwhile effort, and it is the kind of effort that requires the potential for substantial profit to sustain. People don't take risks without at least the potential to profit, and the greater the potential reward, the greater the risks people are willing to take. Are you really going to argue that the idea was obvious or easy? If so, then explain why no one did it before, when billions of dollars and many years were (and are) being spent on such Internet technology. There was a considerable lag between the appreciation of the need for a good search engine (and the resources to develop one) and Google's appearance. What's more, keep in mind that:
a) Google's core methodology is no secret now
b) The patent's life is limited.
c) The ideas that they presumably derived from are still as open as they were prior to this patent.
d) This country produces far more than any other country, despite the fact that we arguably "share our toys" less than most countries, even ones with much larger populations (including technically educated ones)....
Now I agree that there are dangers in allowing people to patent any and everything, e.g., well known sorting algorithms and other fundamental building blocks, but this clearly is not happening here.
They already do (Score:2, Informative)
The patent notice contains a U.S. patent number. When entered into the USPTO search engine [uspto.gov], a patent number calls forth a complete description of how to implement an invention.
Re:OMG MORE PATENTS!!! (Score:3, Informative)
But since the USPTO considers "take a commonly known algorithm and patent a way to do it with computers" a valid patenting method, they probably would not consider it prior art.
Re:Why not? (Score:3, Informative)
d) This country produces far more than any other country, despite the fact that we arguably "share our toys" less than most countries, even ones with much larger populations (including technically educated ones)....
How is this even relevant? Anyway, which countries did you have in mind?
Re:Good for them... (Score:4, Informative)
See J.M. Kleinberg, "Authoritative Sources in a Hyperlinked Environment", Proceedings of the 9th ACM-SIAM Symposium on Discrete Algorithms, ACM Press, New York and SIAM Press, Philadelphia, 1998, pp.668-677
That paper discusses the HITS algorithm, which is at the core of PageRank (a simplified variant of HITS). Sergey Brin and Lawrence Page in fact developed Google [1] from HITS [2].
References:
[1] S. Brin and L. Page, "The Anatomy of a Large-Scale Hypertextual Web Search Engine", Proceedings of the 7th World Wide Web Conference, Elsevier Science, Amsterdam, 1998, pp. 107-117
[2] Chakrabarti, Dom, et al., "Mining the Web's Link Structure", Computer, August 1999, pp. 60-66
Re:Mis-title (Score:2, Informative)
Google, when it's 'reading' a page, is having a bot spider it. If Google is spidering a page and comes across a link to a page it has not 'read', then it follows the link, spiders the page, and includes it in the index.
As for returning results to the 'unread page' based on the context of the link, what do you mean by 'returned results to the page'? Do you mean that Google is now capable of displaying that page in its result set for a specific query?
You *might* mean that a 'freshbot', which is googlebot's little bro, can go and pick up a new page and temporarily add it to the Google index for the month, without calculating its true PageRank (it waits until after the next update, so it can compare the new page to everything else in context).
In this sense an 'unread' page could mean a page that is not properly indexed yet, but is a new addition.
It's true that Google can return a page in its results that doesn't have the search phrase on it, if that search phrase has been used in links pointing to the page in question, but that doesn't mean Google hasn't 'read' the page; it has.
But Google does not return pages in its result set that it hasn't spidered and has only seen links to. If Google sees a link, it goes and indexes the page.