Interwoven Patents Some Aspects Of Image Search
prostoalex writes "Interwoven has patented locating and identifying image content via shapes, texture, color, or resemblance to another image. No official word yet on whether the company thinks there are any infringers."
Not a threat to either company. (Score:5, Informative)
Google doesn't look at the image, just the filename, alt tag and surrounding context. Likewise with Ditto. I fail to see how that involves "shapes, texture, color or resemblance to another image". There are other companies out there that should be worried, but the ones you mention are about as far from that patent as you can get and still search on images.
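For comparison, metadata-only image "search" of the kind described above boils down to something like this minimal sketch (the parser class, helper, and sample HTML are made up for illustration; this is not Google's actual pipeline):

    # Index images by filename and alt text only; the pixels are never read.
    from html.parser import HTMLParser

    class ImgMetadataIndexer(HTMLParser):
        """Collects (src, alt) pairs from <img> tags."""
        def __init__(self):
            super().__init__()
            self.images = []

        def handle_starttag(self, tag, attrs):
            if tag == "img":
                attrs = dict(attrs)
                self.images.append((attrs.get("src", ""), attrs.get("alt", "")))

    def matches(query, src, alt):
        # Naive keyword match against filename and alt text.
        q = query.lower()
        return q in src.lower() or q in alt.lower()

    indexer = ImgMetadataIndexer()
    indexer.feed('<p>Our cat <img src="tabby_cat.jpg" alt="a sleeping tabby"></p>')
    print([img for img in indexer.images if matches("cat", *img)])

Nothing in that touches shapes, texture, or color, which is the point.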
These guys [princeton.edu] are a closer match, but since they are doing 3D CAD/CAM models, perhaps they are safe too.
On the other hand... these guys (eVe Image Search Toolkit) [searchtools.com] could be in trouble if they are not the patent holders themselves.
This patent seems more applicable to finding images that have similar color properties and gross image shape. That could be really useful when looking for images that go well together for compositing, but not for finding pictures of a specific thing (unless you already have an example image that is very similar to the object you seek).
So for the foreseeable future, metadata will be far more successful at finding images. Computer vision is still incredibly primitive: more so than computer speech recognition was ten years ago.
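To make the "similar color properties" idea concrete, here is a minimal sketch that compares nothing but coarse RGB histograms; the bin count and the histogram-intersection measure are illustrative choices of mine, not anything taken from the patent:

    # Compare images by coarse color distribution only.
    # Assumes images are numpy arrays of shape (H, W, 3) with values 0-255.
    import numpy as np

    def color_histogram(img, bins=8):
        """Coarse, normalized per-channel histogram: a crude color 'signature'."""
        hists = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
                 for c in range(3)]
        h = np.concatenate(hists).astype(float)
        return h / h.sum()

    def similarity(a, b):
        # Histogram intersection: 1.0 means identical color distribution.
        return np.minimum(color_histogram(a), color_histogram(b)).sum()

    rng = np.random.default_rng(0)
    sunset_a = rng.integers(0, 256, (64, 64, 3))   # stand-ins for real images
    sunset_b = sunset_a.copy()
    forest   = rng.integers(0, 128, (64, 64, 3))
    print(similarity(sunset_a, sunset_b), similarity(sunset_a, forest))

Two different sunsets would score as "similar" here; a sunset and a tomato might too, which is why this helps with compositing but not with finding a specific thing.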
Re:Not a threat to either company. (Score:3, Interesting)
Re:Not a threat to either company. (Score:2)
And with this patent, it's bound to stay so for the next twenty years at least. Patents kill innovation.
So you're saying... (Score:2)
by FooAtWFU (699187) on Friday September 17, @03:01PM (#10279235)
(http://fennec.homedns.org/)
that for the next twenty years no one will be able to search for an image based on those sorts of similarities without money going to these people? TANJ.
--
You keep using that word. I do not think that it means what you think it means.
Not a surprise (Score:2)
by HotNeedleOfInquiry (598897) on Friday September 17, @03:02PM (#10279246)
About a year ago it occurred to me tha
Re:Not a surprise (Score:2)
To my knowledge Google only uses textual metadata (Score:5, Interesting)
IMatch has been doing this for years (Score:3, Informative)
Violence (Score:4, Funny)
Working Implementation? (Score:4, Insightful)
However, if all they are patenting/developing is the searching, they're douchebags. I say this because after you have the feature vectors, the next step is a Nearest Neighbor Search, and there are already a number of algorithms for determining nearest neighbors. Unless their method somehow gets around the "curse of dimensionality", or provides other major improvements, I will be unimpressed.
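For illustration, here is a minimal brute-force version of that nearest-neighbor step; the 128-dimensional feature vectors are random placeholders, and a real system would need something smarter than an O(N) scan per query to dodge the curse of dimensionality:

    # Brute-force k-nearest-neighbor search over precomputed image feature vectors.
    import numpy as np

    def nearest_neighbors(query_vec, feature_matrix, k=5):
        """Return indices of the k feature vectors closest to query_vec (L2 distance)."""
        dists = np.linalg.norm(feature_matrix - query_vec, axis=1)
        return np.argsort(dists)[:k]

    rng = np.random.default_rng(1)
    features = rng.random((10_000, 128))   # 10k images, 128-d feature vectors each
    query = rng.random(128)
    print(nearest_neighbors(query, features, k=3))

Anything beyond this (tree structures, hashing, dimensionality reduction) is where the actual hard, and already well-studied, work lives.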
And after reading the patent.. (Score:3, Interesting)
There's definitely prior art... (Score:4, Insightful)
Re:There's definitely prior art... (Score:2)
I am not a fan of software patents, as they tend to be too closely related to patenting mathematics or business models (this one is particularly close to mathematics), but for the time being, they are allowable in the US, and at least this patent takes a st
Usage for Content Filtering? (Score:1)
Come on... (Score:1)
OK, if you had patented a certain algorithm and someone else was using it, I would agree you could sue them. But just because you made a way to search for images, and I later found another way that gets the same results, doesn't mean I copied you.
This way of patenting sounds like me, back in the Stone Age, patenting "a vehicle" (which covers basically anything) that doesn't depend on animal power.
If this syst
I don't think this is new. (Score:3, Interesting)
GE generated an X:Y location and degree-of-match for an image against each of a set of simple image filters (rings of various radii and angular slits at 2-degree separation). This data was then cross-correlated (some experiments used early neural nets, IIRC). They were successful at finding different types of features in aerial photography, such as farm, urban, water, grass, and forest.
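To give a rough feel for that ring-filter/cross-correlation idea, here is a small sketch; the kernel size, radii, and normalization are my own guesses for illustration, not the GE system's actual parameters (needs numpy and scipy):

    # Cross-correlate an image against a tiny bank of ring filters and report,
    # for each filter, the best-matching location and a degree-of-match score.
    import numpy as np
    from scipy.signal import correlate2d

    def ring_filter(radius, size=15, thickness=1.0):
        """A simple binary ring kernel of the given radius."""
        y, x = np.mgrid[:size, :size] - size // 2
        r = np.hypot(x, y)
        return (np.abs(r - radius) <= thickness).astype(float)

    def best_match(image, kernel):
        """Cross-correlate and return ((x, y) location, degree of match)."""
        response = correlate2d(image, kernel, mode="same")
        iy, ix = np.unravel_index(np.argmax(response), response.shape)
        return (ix, iy), response[iy, ix] / kernel.sum()

    rng = np.random.default_rng(2)
    image = rng.random((128, 128))
    image[40:55, 60:75] += ring_filter(5)     # bury a faint ring in the noise
    for radius in (3, 5, 7):                  # tiny "filter bank"
        print(radius, best_match(image, ring_filter(radius)))

Feeding the per-filter locations and scores into a classifier (or an early neural net) is then the cross-correlation stage the parent describes.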
The Audre Entity Recognition system used, among other things, input from a convolution/correlation system and a variety of other feature-extraction methods, and used various means to build feature models from scanned engineering drawings, contour maps, and other large-format images. The system could produce a complete 3D terrain model from a simple contour map. The Visual Understanding Lab [cmu.edu] at CMU, with which we collaborated, also worked on color features, more than Audre did. We even explored X-ray images, but scanning hardware of the time didn't have sufficient reliable gray-scale capability.
A company in Denver or thereabouts was building systems using fractal decomposition of images as the fundamental data model for both display and recognition. They used a hexagonal cell model rather than the common rectangular one.
The patent is written in "patentese", so it'll take some study before I can be sure.
[Easter Egg: Check these movies (1 [cmu.edu], 2 [cmu.edu]) and animated gif [cmu.edu] of ray tracing at 0.99c, by R. Thibadeau [cmu.edu] at CMU.]
Prepatenting (Score:1)