Semantic Web Under Suspicion 79

Dr Occult writes "Much of the talk at the 2006 World Wide Web conference has been about the technologies behind the so-called semantic web. The idea is to make the web intelligent by storing data such that it can be analyzed better by our machines, instead of the user having to sort and analyze the data from search engines. From the article: 'Big business, whose motto has always been time is money, is looking forward to the day when multiple sources of financial information can be cross-referenced to show market patterns almost instantly.' However, concern is also growing about the misuses of this intelligent web as an affront to privacy and security."
This discussion has been archived. No new comments can be posted.

Semantic Web Under Suspicion

Comments Filter:
  • It's cool! (Score:3, Funny)

    by crazyjeremy ( 857410 ) * on Thursday May 25, 2006 @08:21AM (#15400862) Homepage Journal
    "Semantic web" might make it easier for HackerBillyBob to find a potential identity fraud victim's information. So basically, HackerBillyBob can get dumber and dumber but do more and more damage. Fortunately the good side of this is PhisherBillyBob can decrease his R&D time because SemantiGoog will give him thousands of ACTIVE email addresses EACH AND EVERY MORNING.
  • All Talk (Score:5, Informative)

    by eldavojohn ( 898314 ) * <eldavojohnNO@SPAMgmail.com> on Thursday May 25, 2006 @08:23AM (#15400879) Journal
    So I know a lot of people that get all excited when they read articles on the "semantic web."

    I think that we are all missing some very important aspects of what it takes to make something capable of what they speak of. In all the projects I have worked on, to create something geared toward this sort of solution, you need two things: training data & a robust taxonomy.

    First things first, how would we define or even agree on a taxonomy? By taxonomy, I mean something with breadth, depth, and rigor that has been used and verified. By breadth I mean that it must be capable of normalization (pharmaceutical concoctions, drugs & pills are all the same concept) and stemming (go & went are the same action, dog & dogs are the same concept); also important is how many tokens wide a concept can be. By depth I mean that we must be able to define specificity and use it to our advantage (a site about 747s is scored higher than a site about airline jets, which is scored higher than a site about planes). By rigor I mean that it must be tried and true ... you start with a corpus of documents to "seed" it and have experts (or web surfers) contribute little by little until it is accurate. Oh, it must also be able to adapt quickly and stay current.
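    As a rough sketch of the stemming/normalization piece (this assumes NLTK's WordNet lemmatizer, which is just one convenient tool, not something the taxonomy would have to be built on):

    from nltk.stem import WordNetLemmatizer   # pip install nltk; nltk.download("wordnet") once

    lemmatizer = WordNetLemmatizer()
    print(lemmatizer.lemmatize("dogs"))           # -> "dog"  (dog & dogs: same concept)
    print(lemmatizer.lemmatize("went", pos="v"))  # -> "go"   (go & went: same action)

    Even that trivial step leans on a hand-built lexical resource; scaling it to multi-token concepts and domain jargon is where the real work is.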

    Without a taxonomy, how will we index sites and be able to tell the difference between "water tanks" and "panzer tanks"? I think that this is one of the great things that Google is missing to really improve its searching abilities. If you suggest an ontology to replace it, the problems encountered in developing it only multiply.

    Where is the training data? Well, one may argue that the web content out there will suffice as training data but I think that more importantly, they need collections of traffic for these sites and user behavioral patterns to quickly and adequately deduce what the surfer is in need of.

    I feel that these two aspects are missing and the taxonomy may be impossible to achieve.

    Why are we even concerned with security if we can't even lay the foundations for the semantic web? I would argue that once we plan it out and determine it's viable, then we can concern ourselves with everyone's rights.
    • Re:All Talk (Score:3, Informative)

      by RobotWisdom ( 25776 )
      I agree. My own (universally ignored) proposal for the taxonomy problem starts with person, place, and thing as 'elements' and builds complex ideas as compounds of these: [faq] [robotwisdom.com]
      • Hey, man, you're not universally ignored, it's just that there isn't much agreement about what the root of the ontology should be. For at least a decade I've been asking anybody I thought might have an answer worth hearing, "What are the properties of the uber-object?" Never gotten the same answer twice. This isn't really surprising, as how people view and categorize the world is greatly affected by their experiences and knowledge.

        For what it's worth, I can think of two reasons you feel universally ignored. Fi
    • I think a taxonomy could be culled from WordNet, or some other similar semantic project.

      http://wordnet.princeton.edu/ [princeton.edu]
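      As a minimal sketch of what WordNet already provides (assuming NLTK's interface to it; the hypernym chain is exactly the breadth/depth structure the grandparent asks for):

      from nltk.corpus import wordnet as wn   # pip install nltk; nltk.download("wordnet") once

      for synset in wn.synsets("tank"):
          # each sense has its own hypernym path, so the "water tank" and
          # "panzer tank" senses land in different branches of the hierarchy
          path = synset.hypernym_paths()[0]
          print(synset.name(), "-", synset.definition())
          print("    " + " > ".join(s.name() for s in path))

      Whether that general-purpose hierarchy is current and specific enough for web-scale search is the open question.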

    • Re:All Talk (Score:5, Interesting)

      by $RANDOMLUSER ( 804576 ) on Thursday May 25, 2006 @08:48AM (#15401066)
      I've always thought that the Table of Contents [notredame.ac.jp] for Roget's Thesaurus was one of the greatest works of mankind. I don't think many people realize just how difficult the problem really is, and how long it's going to take.
    • Without a taxonomy, how will we index sites and be able to tell the difference between "water tanks" and "panzer tanks"? I think that this is one of the great things that Google is missing to really improve its searching abilities. If you suggest an ontology to replace it, the problems encountered in developing it only multiply.

      ISO 2788 and a few other specifications talk about how to do a multilingual thesaurus. I'm working on one, mainly for geography and such. Computational Linguistics is the fie
    • I would have to agree that, although the idea is fascinating, implementing it would be a gargantuan effort. And it's unclear how difficult it would be to maintain.

      I think it might be easier to approach the problem from another direction. Once a semantic A.I. like Cyc [cyc.com] has reached a level at which it can begin categorizing and "understanding" the information on the Web, it could do the enormous chore of creating a semantic web for us.
    • Re:All Talk (Score:2, Informative)

      by Temposs ( 787432 )
      As another reply mentions, WordNet [princeton.edu] is a promising avenue of success for creating a taxonomy and an ontology for the web (just read a paper on ontologizing semantic relations using WordNet, actually). In fact, it already is a taxonomy of sorts (and a multi-dimensional one at that), although a generalized one. And there are numerous other projects building off of and paralleling WordNet.

      There's VerbNet [upenn.edu], FrameNet [berkeley.edu], Arabic WordNet [globalwordnet.org], and probably others I don't know about.

      WordNet has become a standa

    • The idea, as I understand it, is that you can transfer data formatted as XML within a certain grammar, such as RSS or some others defined within technology and industry niches, or make up your own. Then you don't need to base your own systems on this grammar, because they have provided transformation tools like XSL to transform a known grammar to the grammar your system requires. XML provides the taxonomy, which can be defined however the data source decides. I believe they are hoping for, and beginning to
    • Couldn't statistical analysis be used effectively? I've been playing with crm114 lately (a "Markov based" filter) as a backup to SpamAssassin on my servers. Once trained, its ability to pick out spam is almost uncanny at times.

      The context of a word seems to me (obviously not a math or CS geek) to be a good, and relatively easy to calculate, indicator of the word's relation to other terms. By context I mean the "physical" proximity to other terms on a page, rather than the normal written language cont

    • The "Semantic Web" is basically the intersection of RDF+OWL, that is to say, it is entirely about taxonomy. The whole idea is that you have a certain nomenclature that you assert against known values, someone else has a different nomenclature that they assert against the same values. You can now cross-reference with a high degree of confidence. For example, using the Dublin Core. [dublincore.org]

      I get people all the time dismissing the whole idea because "man, you'd have to agree on definitions" or "how does 'it' know?" Rig
    • You're being too negative about things; we don't have to define an ontological representation for everything in the entire world for the ontology to have use.

      If we can help to define standards for some part of knowledge then we have helped the world a little bit, which leaves it a better place than where we started.

      As for how we do it, well, there is lots of experience around the world at doing this. Check out the Dewey Decimal system, or the Library of Congress classification. If you want something big, then SNOME
  • Smarter Machines (Score:5, Interesting)

    by jekewa ( 751500 ) on Thursday May 25, 2006 @08:24AM (#15400888) Homepage Journal
    I personally fear the day that a machine or algorithm can better determine the purpose for my keyword-based search than I can. Sure, there's a lot of improvement that can be done to make searches more precise, but certainly in the end it'll be my decision what's important and what isn't.

    What I really want to see is the search engine reduce the duplicated content to single entries (try Googling for a Java classname and you'll see how many Google-searched websites have the API on them), or order them by recurrence of the word or phrase, giving the context more value than the popularity of the page.

    • "I personally fear the day that a machine or algorithm can better determine the purpose for my keyword-based search than I can."

      I wonder how this will influence our language and communication in general. Can language itself (not only its use) be assimilated by marketing?

      I shudder at the thought of 'marketspeak'...
    • by Irish_Samurai ( 224931 ) on Thursday May 25, 2006 @08:41AM (#15401001)
      What I really want to see is the search engine reduce the duplicated content to single entries (try Googling for a Java classname and you'll see how many Google-searched websites have the API on them), or order them by recurrence of the word or phrase, giving the context more value than the popularity of the page.

      There is a huge problem with this, and it goes back to the days of people jamming 1000 instances of their keywords at the bottom of their pages in the same font color as the background. Also, your desire to rate the pages on context requires an ontology-type algo, which is NOT easy. Google has been working on this for a little while now, but it is a big hill to climb. They are using popularity as a substitute for this. It is not the most effective, but it is a pretty decent second option.

      There is another issue with the approach you suggest. If Google decides that javapage.htm is the end all be all of JAVA knowledge, and removes all other listings from their database - then everyone and their grandmother will be fed information from this one source. That will ultimately reduce the effectiveness of Google to return valid responses to people who do not use search like a robot.

      There is a human element at play here that Google is attempting to cater to through sheer numbers. Not everyone knows how to use search properly; hell, most people have no idea. Keyword order, Booleans, quotes - these will all affect the results given back, but very few people use them right off the bat. If you reduce the number of returned listings for a single-word search to one area that was determined to be the authority, you have just made your search engine less effective in the eyes of the less skilled. I would be willing to bet that this less skilled group composes most of Google's userbase.

      If you don't cater to these people, then you lose marketshare, and then you lose revenue from advertisers, and then you go out of business.
      • Not to mention the fact that the 'one true page' could be wrong. Or incomplete. I personally often read 3 or 4 of the results if I actually care about what I'm searching for.
      • I don't think the solution to the many-API-pages problem is simply not listing the copies. I think it would be more like the current limitation on how many hits from the same site Google will normally display. Just have a link "Show more sites with the same content." Not similar content, identical content. Although, determining that is difficult because formatting is different and sites may have their own navbars or headers/footers.
      • There is another issue with the approach you suggest. If Google decides that javapage.htm is the end all be all of JAVA knowledge, and removes all other listings from their database - then everyone and their grandmother will be fed information from this one source. That will ultimately reduce the effectiveness of Google to return valid responses to people who do not use search like a robot.

        Unfortunately that is exactly what is happening today. [wikipedia.org]
      • There is another issue with the approach you suggest. If Google decides that javapage.htm is the end all be all of JAVA knowledge, and removes all other listings from their database - then everyone and their grandmother will be fed information from this one source. That will ultimately reduce the effectiveness of Google to return valid responses to people who do not use search like a robot.

        You could just thread the result. If you did a search for a certain java class, and it turned out a whack of pages w
    • Didn't Turing say "A sonnet written by a machine is best appreciated by another machine"?
    • by suv4x4 ( 956391 )
      I personally fear the day that a machine or algorithm can better determine the purpose for my keyword-based search than I can.

      I fear the day when typing on an electronic device will produce better-looking text and typography than me painstakingly painting every letter to produce one book a year.
    • I personally fear the day that a machine or algorithm can better determine the purpose for my keyword-based search than I can. [...] in the end it'll be my decision what's important and what isn't.

      So you'd prefer Google just return all pages in its index with your keywords in them, in a random order, and let you go through the 3 million results and look for the important ones by hand?

      The semantic web is all about allowing you to more precisely specify your keywords. More precise search results then follow
  • I can think of at least a dozen chain drug stores that have a retail store every 10 blocks in every major city in the country. Their sales are recorded centrally.

    Hypothetically, if all of them decided it would be for the good of humanity to allow someone to examine their sales in real time as a whole to identify flu outbreaks early - then the process of doing that would not be too difficult.

    UPS and FedEx track their packages in real time, know who sent them and who is receiving them, and how much they weigh.

    Da
    • I mention this:

      UPS and FedEx track their packages in real time, know who sent them and who is receiving them, and how much they weigh.

      Because in and of itself, how much my package weighs doesn't amount to a hill of beans. However, if I know a natural disaster recently struck an area, and found some more "harmless" data to add to my filter, I can tell how much stuff people are replacing on-line via insurance claims, and some other very interesting things.

      I just thought I'd clarify :)
  • Say what? [counterpunch.org] Sounds similar to what the Bush administration, via the NSA, is doing, only on a public level. For those who want to ramble on about privacy and abuse, take note that just about every other week some company has lost its records, or someone has infiltrated its networks and gained access to records. If that's not enough to make you throw in the towel when it comes to protecting your privacy, never ever apply for a credit card, never sign a document, never reveal anything about yourself to anyone. W
  • by gravyface ( 592485 ) on Thursday May 25, 2006 @08:38AM (#15400984)

    ...and growing and evolving.

    Take a look at the "blogosphere" and the tagging/classification initiative that's happening there.

    Sure, it seems crude and unrefined but it's working, like most grass-roots initiatives do when compared with grandiose "industry standards" and the big, bulky workgroups that try to define them.

  • by Jon Luckey ( 7563 ) on Thursday May 25, 2006 @08:39AM (#15400996)
    Obligatory Skynet joke in

    5...4...3...2...1
    • I sent a terminator unit back in time to kill all the Slashdotters who were about to make an obligatory Skynet joke.
    • The thing is, SKYNET was a military based computer and it gave us "Judgement Day".

      I dare hypothesize that if a truly intelligent web ever arose, it would have a strong porn background.

      I shudder to think of what its version of Judgement Day would be...
  • Biz School (Score:3, Insightful)

    by Doc Ruby ( 173196 ) on Thursday May 25, 2006 @08:50AM (#15401073) Homepage Journal
    "Big business, whose motto has always been time is money"

    That motto is really "anything for a buck". Even if business has to wait or waste time to get money, it will wait until the cows come home - then sell them.
  • Semantic Web != evil (Score:5, Informative)

    by tbriggs6 ( 816403 ) on Thursday May 25, 2006 @08:51AM (#15401078) Homepage
    The article does a pretty bad job of explaining the situation. The idea behind the Semantic Web is simply to provide a framework for information to be marked up for machines rather than human eyes. The idea is that, using an agreed-upon frame of reference for the symbols contained in the page (an ontology), agents are able to make use of the information contained there. Further, an agent can collect data from several different ontologies and (hopefully) perform basic reasoning tasks over that data, and (even better) complete some advanced tasks for the agent's user.

    The article would have us believe that this is going to expose everyone to massive amounts of privacy invasion. This is not necessarily the case. It is already the case that there are privacy mechanisms to protect information in the SW (e.g. requiring agents to authenticate to a site to retrieve restricted information). Beyond simple mechanisms, there is a lot of research being conducted on the idea of trust in the semantic web - e.g. how does my agent know to trust a Slashdot article as absolute truth and a Wikipedia article as outright fabrication (or vice versa)?

    As for making the content of the internet widely available, some researchers feel this will never happen. As another commenter noted, it is essential that there is agreement on the definition of concepts (ontologies) to enable the SW to work (if my agent believes the symbol "apple" refers to the concept Computer, and your agent believes it refers to "garbage", we may have some interesting but less than useful results). I am researching ontology generation using information extraction / NLP techniques, and it is certainly a difficult problem, and one that isn't likely to have a trivial solution (in some respects, this goes back to the origins of AI in the 1950s, and we're still hacking at it today).
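    To make the "apple" collision concrete, here is a toy rdflib sketch (the namespaces are invented for illustration, not taken from my research code):

    from rdflib import Graph, Namespace
    from rdflib.namespace import RDF

    COMPUTING = Namespace("http://example.org/computing#")
    REFUSE = Namespace("http://example.org/refuse#")

    g = Graph()
    g.add((COMPUTING.apple, RDF.type, COMPUTING.Computer))
    g.add((REFUSE.apple, RDF.type, REFUSE.Garbage))
    # the two "apple" symbols are distinct URIs, so nothing clashes outright,
    # but until someone asserts a mapping (e.g. owl:sameAs or a shared upper
    # ontology), an agent has no principled way to reconcile or combine them
    print(g.serialize(format="turtle"))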

    For some good references on the Semantic Web (beyond Wikipedia), check out some of these links

    • I'm glad someone finally portrayed an accurate image of what the semantic web really is.
    • Thank you very much for saving me a few minutes of typing. This is a very accurate description of what the semantic web is intended to be. I thought for a minute that I was actually gonna have to dig up that buried knowledge from college, when I actually took a class based on the principles of the semantic web. Our final project was a website (using our own personally created ontology) that searched a database of Magic: The Gathering cards and could infer from even the most rudimentary search query whic
    • I have done a little bit of casual research into ontology and have a question maybe you could answer.

      Is it possible to have a markup structure that could handle this issue by searching for a "secondary key" bit of information to qualify the identifier? Using your example above of "apple":

      <Item>
        <PrimaryId>Apple</PrimaryId>
        <SupplementaryId>Computer</SupplementaryId>
        <SupplementaryId>Power Book</SupplementaryId>
      </Item>

      • Ontologies for the Semantic Web are based on description logics (OWL-DL) or first-order logics (OWL-Full). We define classes and their relationships (T-Box definitions), and we define instance assertions (A-Box definitions).

        For example, we could define the Apple domain as :

        Classes: Computer, Garbage, ComputerMfg
        Roles: makesComputer, computerMadeBy

        We can assign the domain of makesComputer to be a ComputerMfg, and the range to be a Computer (the inverse would be flipped).

        Class rdf:ID="Computer"

        Class rdf:ID
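        The markup above is cut off, but the same T-Box/A-Box definitions could be sketched with rdflib roughly as follows (a hypothetical reconstruction, not the original fragment):

        from rdflib import Graph, Namespace
        from rdflib.namespace import RDF, RDFS, OWL

        EX = Namespace("http://example.org/apple#")
        g = Graph()

        # T-Box: classes and roles, with makesComputer running
        # from ComputerMfg (domain) to Computer (range)
        g.add((EX.Computer, RDF.type, OWL.Class))
        g.add((EX.ComputerMfg, RDF.type, OWL.Class))
        g.add((EX.makesComputer, RDF.type, OWL.ObjectProperty))
        g.add((EX.makesComputer, RDFS.domain, EX.ComputerMfg))
        g.add((EX.makesComputer, RDFS.range, EX.Computer))
        g.add((EX.computerMadeBy, RDF.type, OWL.ObjectProperty))
        g.add((EX.computerMadeBy, OWL.inverseOf, EX.makesComputer))

        # A-Box: instance assertions
        g.add((EX.Apple, RDF.type, EX.ComputerMfg))
        g.add((EX.PowerBook, RDF.type, EX.Computer))
        g.add((EX.Apple, EX.makesComputer, EX.PowerBook))

        print(g.serialize(format="pretty-xml"))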
        • Thanks for the information. A quick search for the W3C references on OWL fleshed out a lot of what you were saying, and let me know where I was getting off track.
  • by SmallFurryCreature ( 593017 ) on Thursday May 25, 2006 @08:54AM (#15401119) Journal
    Let's use the holiday example given in the article. So I've got a hotel that is 54 dollars per night. That means I am not going to be included in the below-50-dollar search. Hmmm, I don't want that. I want maximum exposure. So I lower my price to 49 dollars + 10 dollars in extra fees that are a surprise when you receive the bill (what's that you say? 49+10 > 54? Of course, you idiot, any price cut must be offset by higher charges elsewhere).

    You could already do this semantic web nonsense if people would just stick to a standard and be honest with what they publish.

    Nobody wants to do that, however. Mobile phone companies always try to make their offering sound as attractive as possible by highlighting the good points and hiding the bad ones. Phone stores try to cut through this by making their own charts for comparing phone companies, but in turn try to hide the fact that they get a bigger cut from some companies than others.

    It wouldn't be at all hard to set up a standard that would make it very easy to tell which cell phone subscription is best for you. Getting the companies involved to participate is impossible, however.

    This is the real problem with searching the web right now. It wouldn't be at all hard to use Google today if everyone was honest with their site content. For instance, removing the word "review" from a product page if no review is available.

    Do you think this is going to happen any day soon? No? Then the semantic web will not be with us any day soon either.

    • Semantic web and growing databases will just cause people to elect to disappear behind a cloud of confusion. Many Google searches are already useless because of dishonest advertisers; Gresham's Law suggests that eventually the web will become useless because bad information will flood out good information. Meanwhile, the technically literate will continue to hide themselves with anonymity and usage patterns that are deliberately inconsistent and disruptive. Eventually the only people the government and the adver
      • Although I'm not particularly paranoid, I have no loyalty cards, only one credit card, frequently pay with cash and never borrow.

        It's because of people like you that we're getting identity cards. Would it have killed you to join Tesco's loyalty programme? ;-)

        renting to avoid local government records

        How does renting help? You still need to be on the electoral register and you need to pay council tax - in both cases it doesn't matter if you're renting or owning. And what about TV Licencing? Not having a TV Li
  • Who Web? (Score:3, Funny)

    by geobeck ( 924637 ) on Thursday May 25, 2006 @09:05AM (#15401210) Homepage

    How many people read this and thought "Okay, what have they done with Norton now?"

  • The term "semantic web" generally refers to Tim Berners-Lee's notion of "semantic web" -- he coined the term. TBL's vision is simply a model of describing information. One expression of it is RDF-XML, another is data in N3 notation, but the core of it is the idea that you can express most information as a simple triplet of data: subject-predicate-value (e.g.: the sky - is - blue). That's basically it. It doesn't even have to have anything to do with "the Web" in the sense of the Internet.

    The idea, however,
  • Healthcare? (Score:1, Informative)

    by Anonymous Coward
    The guy claims that health records are public data? Well, that's a BBC site, but in the U.S. they decidedly are not, since HIPAA was passed.

    But all this semantic web stuff makes me giggle when they start talking about healthcare, anyway. I worked in that industry up until a couple years ago. Semantic web people want to move everybody away from EDI...while the healthcare people are struggling to upgrade to EDI. In 2003 I was setting up imports of fixed-length mainframe records. By the time healthcare is ex
    • I was interested that you posted about the healthcare industry, because I work in it today, and also went to a university which has done quite a bit of research into the area of health & bio informatics. From the research, it is clear that the semantic web and healthcare are actually a great match for each other, particularly when it comes to things like concepts & ontologies (for example, check out MeSH [nih.gov] if you haven't seen it before).

      Another example of how semantics make sense for healthcare i

  • The next great leap in searching the web won't be due to the semantic web. It'll be natural language processing. Soon the day will come when you will be able to type in a "real" question and truly get the best answers back. We all know keyword searching doesn't cut it. But a complete question can be translated into a logical query. It'll require no change to current web pages. Just a much smarter search engine.
  • by saddino ( 183491 ) on Thursday May 25, 2006 @09:26AM (#15401384)
    All the hoopla around the Semantic Web reminds me exactly of the days "XML" became the latest high-flying meme touted by "tech" writers en masse. Witness:

    The semantic search engine would then cross-reference all of the information about hotels in Majorca, including checking whether the rooms are available, and then bring back the results which match your query.

    And here in all its glory is the 1999 version:
    The software would then use XML to cross-reference all of the information about hotels in Majorca, including checking whether the rooms are available, and then bring back the results which match your query.

    Of course, the problem with this fantasy of XML was that no standardization of schemas led to an infinite mix of tagging and thus, the layperson's idea that "this XML document can be read and understood by any software" was pure bunk.

    Granted, the semantic web addresses many of these problems, but IMHO the underlying problem remains: layers of context on top of content still need to be parsed and understood.

    So the question remains: will the Semantic Web be implemented in a useful fashion before someone develops a Contextual Web Mining system that understands web content to a degree that it fulfills the promise of the Semantic Web without additional context?

    Disclaimer: I work on contextual web content extraction software [q-phrase.com], so yes, I may be biased towards this solution, but I really think the Semantic Web has an insanely high hurdle (proper implementation in millions of web pages) before we can tell how successful it is.
    • The semantic web is a step up from XML. In an XML document, there is a great deal of information implicitly stored in the structure of the document. A human is (often) able to guess what the implied relationship is between the parent element and the child element, but machines are still poor at guessing. By making the relationship explicit (using RDF) a machine has a better chance of identifying the nature of the relationship. Of course, you still need standard tags, but it's easier to talk about named
    • XML documents can be read everywhere. But two things need to happen: they need to have the doctype (DTD, whatever it is) available to the software, and some styling (CSS for example) that enables the document to be displayed nicely. If everyone did this (assuming that the XML is well formed) it would be wonderful. XHTML is XML and can be read everywhere, the DTD is available freely, and CSS is either included with each document or web browsers have a default.
  • Well (Score:3, Informative)

    by aftk2 ( 556992 ) on Thursday May 25, 2006 @09:55AM (#15401666) Homepage Journal
    The semantic web would have to be feasible [shirky.com] before it posed some sort of threat, so I wouldn't get too up in arms about this.
  • At first glance I thought this was about the Semitic Web...
    Now THAT would be something.
  • this is anti-semantic. does the ADL know about this ?
  • Oh good. They are finally going to include anti-virus on the web.
  • Glass Houses (Score:5, Insightful)

    by Baavgai ( 598847 ) on Thursday May 25, 2006 @10:16AM (#15401850) Homepage

    "All of this data is public data already," said Mr Glaser. "The problem comes when it is processed."

    The privacy and security concerns are bizarre. They're saying that there is currently an implicit "security through obscurity" and that's OK. However, if someone were to make already-public data more easily found, then it would be less secure?

    Here's a radical thought: don't make any data public that you don't want someone to see. Blaming Google because you put your home address on your blog and "bad people" found you is absurd. If data is sensitive, it shouldn't be there now.

    You can't really bitch about peeping Toms if you built the glass house.

    • The Semantic Web isn't about data, it's about metadata. Metadata is often automatically extracted from the data itself and made available without any interaction from the user. So it's not the fact that your data is published on the web that makes it a security risk, it's the fact that a bunch of automatically generated metadata was added to it, possibly without your knowledge. Think of all the trouble that has come from Word document metadata being put on the web.

      The problem is that no one is willing to man

      • a bunch of automatically generated metadata was added to it, possibly without your knowledge. Think of all the trouble that has come from Word document metadata being put on the web.

        I'm not sure that this is the gist of the article I read, but it is an interesting thought.

        That's more under the heading of the cute, insecure ideas that just won't die. Why does a web server tell me its name and build number? Why does web publishing software include the registered user's name in the metadata? It's val

  • by MarkWatson ( 189759 ) on Thursday May 25, 2006 @10:38AM (#15402036) Homepage
    I am both a sceptic and a fan of the SW. I dislike XML serialization of RDF (and RDFS and OWL) - to me the SW is a knowledge engineering task, and frankly 20-year-old Lisp systems seem to offer a friendlier notation and a much better working environment. If you are a Lisp-er, check out the (now) open source Loom system, which supplies a description logic reasoner and other goodies.

    The Protege project provides a good (and free) editor for working on ontologies - you might want to grab a copy and work through a tutorial.

    I think that the SW will take off, but its success will be a grass-roots type of effort: simple ontologies will be used in an ad hoc sort of way and the popular ones might become de facto standards. I don't think that a top-down standards approach is going to work.

    I added a chapter on the SW to my current Ruby book project, but it just has a few simple examples because I wanted to only use standard Ruby libraries -- no dependencies makes it easier to play with.

    I had a SW business idea a few years ago, hacked some Lisp code, but never went anywhere with it (I tend to stop working on my own projects when I get large consulting jobs): define a simple ontology for representing news stories and write an intelligent scraper that could create instances, given certain types of news stories. Anyway, I have always intended to get back to this idea someday.
  • The ideal Semantic Web is a beautiful dream, nothing more. Sure it'd be nice to have it around to start worrying about the consequences regarding security, privacy and everything else, but the ideal conceptualization seems to be impossible to implement.

    The Semantic Web I see coming is one where many different, limited domains are identified and semantically annotated, allowing some kind of agents to perform well-defined activities for us (i.e., book tickets, make appointments, search info, etc.). This sound
  • All this cross-referencing of data for the purpose of data mining (this appears to be simply a refining of data mining) is why I lie my butt off to telemarketers. Only by filling their collective coffers with garbage can we hope to undo their efforts. Currently, I'm a white male computer engineer when doing politically correct things, I'm a Hispanic female for others, and I have many other identities. I pick identities at random for activities that I don't want traced, and I keep them active by doing str
  • I hope you people all understand that books, magazines, and newspapers are just as dangerous - anyone can publish private information using these technologies!
    Come on, this is absurd. If anything this article underscores the need for privacy laws - but the privacy implications of the semantic web are hardly any more significant than any other publishing method.
  • "Big business, whose motto has always been time is money, is looking forward to the day when multiple sources of financial information can be cross-referenced to show market patterns almost instantly"

    That's the Bloomberg service in a nutshell. Yes, same company founded by current NYC mayor, Michael Bloomberg. As an example, I was able to simultaneously examine various financial ratios of about 1,200 companies along with their current market values. Depending upon where certain ratios went, I flagged them
  • by SamSim ( 630795 ) on Thursday May 25, 2006 @01:54PM (#15403912) Homepage Journal
    The huge, glaring issue with the Semantic Web idea that I see is: how do you force the creators of web content to put the right semantic tags on their content? What's to stop there being thousands of sites full of nothing but semantic tags so that even Swoogling for "747" brings up porn first? The clear answer is that the tags will have to be out of the control of the creators of the web content. That means somebody or someTHING else - namely, your Semantic Web search engine of choice - has to figure out your site's tags for you. And the ONLY way to accurately judge, classify and rank a web page is by its actual, real content. This is just another way of looking at the same problem. I'm waiting to be impressed.

  • There's still people out there doing that stuff? That's too much! Good luck, semantic web dudes!

    N.B. The above is a flippant, snide, and unhelpful comment. However, in my defence, I submit that that is _exactly_ the sort of comment that any remaining semantic web diehards should be most used to hearing.
