In recent days the news has been abuzz with headlines like "Search for secret millions + Google" and "Google kauft Suchalgorithmus von israelischem Studenten" (Google buys search algorithm from Israeli student).
Most of them more or less recited the original press release (dated Sept. 2005), stated the fact that the inventor, Ori Allon, now works for Google, and repeated the rumours that Microsoft and Yahoo were also interested.
The media focused especially on the following two passages of the press release:
“The results to the query are displayed immediately in the form of expanded text extracts, giving you the relevant information without having to go to the website – although you still have that option if you wish,”
“By displaying results to other associated key words directly related to your search topic, you gain additional pertinent information that you might not have originally conceived, thus offering an expert search without having an expert’s knowledge.”
So let’s have a look at these two claims.
Inline display of text extracts
This first claim clearly gets the media going, screaming "IPR violation, IPR violation" all over the place, especially when it is reinforced by quotes from Ori Allon like:
I don’t envision that Orion will completely eliminate the need for going to actual web pages. (Sydney Morning Herald interview)
Just in case nobody has noticed: Google already displays text extracts as part of its search results. And IMHO there is a good reason they do not display longer passages of the result pages, namely IPR issues.
Looking at some self-proclaimed Orion look-alikes like Qtsaver, one can easily see that something like this can be done via a frontend mashup using the Google API.
So if that claim was the reason Google bought the algorithm (and hence the patent), then only for one purpose: to save themselves from legal hassles, definitely not for the technical merit of that invention.
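To make the mashup point concrete, here is a minimal sketch of how such a frontend could stitch search snippets into one inline text view. The `search_api` function is a hypothetical stub standing in for a real search web service (such as the Google API of the time); it is not how Qtsaver or Orion actually work.

```python
def search_api(query):
    """Hypothetical stub for a search web service returning URL + snippet.
    A real mashup would issue an HTTP request to the search API here."""
    return [
        {"url": "http://example.org/a",
         "snippet": "Orion finds pages strongly related to the query ..."},
        {"url": "http://example.org/b",
         "snippet": "Expanded text extracts are displayed inline ..."},
    ]

def inline_extracts(query, max_results=10):
    """Render search results as one block of text extracts, so the reader
    gets the relevant passages without visiting each page."""
    blocks = []
    for hit in search_api(query)[:max_results]:
        blocks.append(f"{hit['snippet']}\n    (source: {hit['url']})")
    return "\n\n".join(blocks)

print(inline_extracts("orion search algorithm"))
```

Nothing here goes beyond gluing API output together, which is exactly why the first claim carries little technical weight on its own.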
Displaying results to other associated key words directly related to your search topic
The second claim is the one where it could get interesting. Funnily enough, it is also the one that got far less media attention. What is claimed typically falls under the research problems labeled query expansion, thesaurus generation, concept learning, etc.
If Mr. Allon has found a well-working algorithm for one of these problems that is scalable and performant (and that means Google-like scale and performance), this algorithm definitely should draw the interest of Google and the other search giants.
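For readers unfamiliar with query expansion, here is a toy illustration of the classic co-occurrence baseline: terms that frequently appear in the same document as the query term are suggested as associated keywords. This is a textbook technique with made-up example data, emphatically not Allon's (unpublished) Orion algorithm, and the hard part he would have solved is making something like this work at web scale.

```python
from collections import Counter

# Tiny made-up corpus for illustration only.
DOCS = [
    "google search engine ranking algorithm",
    "search engine query expansion thesaurus",
    "ranking algorithm pagerank google",
    "query expansion improves search recall",
]

def expand(term, k=3):
    """Suggest up to k associated keywords for `term` by counting
    co-occurrences within the same document."""
    counts = Counter()
    for doc in DOCS:
        words = doc.split()
        if term in words:
            counts.update(w for w in words if w != term)
    return [w for w, _ in counts.most_common(k)]

print(expand("search"))
```

A query for "search" yields terms like "engine" and "query" that co-occur with it, which is the kind of "associated key words" behaviour the press release describes, albeit in the most naive possible form.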
Query expansion and related problems are normally covered by the research discipline of artificial intelligence. Fittingly, Mr. Allon is, even after Google hired him, still a Ph.D. student of Eric Martin, who works in that field.
Mr. Martin's homepage cites the following research interests:
My main interests are in the logical foundations of Artificial intelligence. The theoretical part of my research is mainly devoted to developing a unified framework, Parametric logic, that investigates the relationships between:
- a notion of logical complexity, that accounts for various kinds of logical inferences, encompassing deductive, inductive and nonmonotonic inferences;
- a notion of complexity from the perspective of Formal learning theory, encompassing learnability in the limit, with or without (ordinal) mind change bounds;
- a notion of syntactic complexity, for formulas in infinitary modal languages;
- a notion of topological complexity.
I am also involved in projects on knowledge acquisition based on ripple down rules, as well as projects on query answering systems, logic programming, and discovery from the web.
Since I also worked in the field of AI and logics (description logics, not parametric logic), I would love to learn more about this algorithm.