OpenDocument Alliance to Fight Digital Dark Age

OSS_ilation writes "A consortium of vendors and academic institutions -- including IBM, Sun Microsystems and the American Library Association -- announced today that they are forming the OpenDocument Alliance as part of an effort to promote open file standards worldwide. The group will support the one truly open standard file format, OpenDocument, an XML-based file format used for saving and exchanging editable office documents such as text documents, spreadsheets and presentations. Sun's Simon Phipps said he believed ODF would allow future generations to view all of today's digital docs and prevent a digital Dark Age from occurring."


  • I believe they call it "text/plain". Oh, you wanted formatting? Then try "text/html".

    There is more than one "truly open format", so using the word "the" is a bit pretentious.

    • I somewhat agree with you, although even text/plain format does not guarantee that people will be able to access it 200 years from now.

      You see, even these plain text files are based on some conventions; that is, we all "know" that when your machine reads one of those files, a 65 means the character A, 67 the character B, and so on. Thus, the generation that wants the information must be aware of those things.

      Now, with digital information it is a bit more difficult than with printed paper, as in 200 years people may
      • You see, even these plain text files are based on some conventions; that is, we all "know" that when your machine reads one of those files, a 65 means the character A, 67 the character B, and so on. Thus, the generation that wants the information must be aware of those things.

        Assuming for a minute that all knowledge of ASCII is lost: given that during WW2 we were able to crack the Enigma, I seriously doubt a simple substitution cipher will be a huge deal for our descendants. People do more difficult puzzles with a p

      • "...a 65 means the character A, 67 character B and so on..."

        <nitpick>
        65 + 1 = 66
        </nitpick>

        I think it would take more than a couple hundred years for a language to die, though. It will probably take more like a couple thousand - and an almost complete collapse of civilization as we know it. Not entirely out of the realm of possibility, I suppose.

        Still... like Linus once famously said, "Backups are for wimps. Real men upload their data to an FTP site and have everyone else mirror it." So as long as t
      • If they are too stupid to crack ASCII, or even UTF-8 or UTF-16, to hell with them. They would not be able to get the bits off the drive if they are *that* stupid. Things like NTFS, which is fully documented only at Microsoft (*if* it is fully documented at all), might pose more of a problem. Or encrypted drives, etc.

        But I don't think that a hard drive will hold its data over thousands of years anyway. Flash maybe...
      • The concept of a digital dark age assumes that only proprietary document formats and their corresponding applications are lost, while public knowledge (like W3C specs, encoding specifications, internet protocols) is preserved.

        Suppose that a very important document is formatted in Billy's proprietary document format v1.21, but there are no more copies of Billy's word processor, which was discontinued 250 years ago, so the format has to be reverse engineered.

        Now what happens if Billy's word processor instead us
      • I think in hundreds of years, we may have to worry not just about the medium but also about how the language will have changed. This text is from the 1300s:

        Lyte Lowys my sone, I aperceyve wel by certeyne evydences thyn abilite to lerne sciences touching nombres and proporciouns; and as wel considre I thy besy praier in special to lerne the tretys of the Astrelabie. Than for as mochel as a philosofre saith, "he wrappith him in his frend, that condescendith to the rightfulle praiers of his frend,"

        You could argue th

      • I have a solution. We build a mechanical computer entirely of stone.

        Water can provide the mechanical energy needed to run it.

        There are 2 buttons. One for "0" and one for "1" (or whatever). After a button is pressed, the water pops back up. After the 8th button has been pressed, it stays down, the wheels turn, and then a stone tablet with a letter carved into it is raised. After a moment, the letter goes down, and the input buttons rise again for the user to input the next 8 bits.

        As long as the users of th
      • In 200 years, people may not remember that ASCII used 7 bits (stored as 8), or that UTF-8 uses from one to four bytes depending on the character. But if they know how our computers worked (including the fact that we use an 8-bit byte), the documents can be recovered. That is assuming that they can read our language, but we have plenty of translations out there to help with that.

        We probably shouldn't worry about writing down on paper how our computers work for now. There are lots of texts out there already printed that

    • by albalbo ( 33890 ) on Friday March 03, 2006 @12:10PM (#14842894) Homepage
      There are a few people pointing out other "open" file formats. OpenDocument is part of that heritage, though:

      - it invents very little. The container format (jar, i.e. a zip archive) is well known, XML is well known, it builds on HTML for semantic structure, and it uses other standards (XLink, XForms, SVG, etc.) where it makes sense to. It is, in effect, a "common subset" of standards which are all useful in creating documents (e.g., HTML is great, but you can't store images easily in an HTML file [though obviously, yes, it is possible...]). This is in stark contrast to, e.g., MS XML.

      - it has been well-designed from the start to actually improve the current state of the art, not replicate it. E.g., the metadata system is good and getting better.

      - unlike text/html, competing implementations are actually interoperable: vendors are working through OASIS, which has standardised it from the start, and are making sure things work. HTML standards came a little late in the game, and there are still text/html pages out there which my poor Firefox can't handle.

      There are a ton of reasons why OpenDocument isn't just "another format", but actually a significant step forward.
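
      Since an OpenDocument file is just a zip archive full of XML, anyone with a scripting language can check the "it invents very little" claim directly. A minimal sketch in Python (the file name is hypothetical; 'mimetype' and 'content.xml' are standard ODF entries):

        import zipfile
        import xml.dom.minidom

        def peek_odf(path):
            # An ODF document is an ordinary ZIP archive; 'mimetype' names
            # the document type and 'content.xml' holds the body as XML.
            with zipfile.ZipFile(path) as odf:
                print(odf.read("mimetype").decode("ascii"))
                body = xml.dom.minidom.parseString(odf.read("content.xml"))
                print(body.toprettyxml()[:1000])  # just a peek at the markup

        peek_odf("example.odt")  # hypothetical file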
    • HTML is certainly not very standard and some implementations of it aren't even very open. HTML is basically a bastardization of SGML, while XML is a well-defined, strict subset of SGML. You can write HTML in XML if you want to make it open and standard, but you'd still be writing in XML.

      OpenDocument Format, on the other hand, is a strict XML format, so it is both open and standard.

      Sure, plain text is open and standard too, but most applications require a more structured document than you can get with plai

    • text/plain isn't a format. text/plain plus an encoding is a format. Which encoding would you like? Latin-1? That's defined in ISO/IEC 8859-1:1998 [www.iso.ch], which costs ~50USD to buy. UTF-8? That's defined in ISO/IEC 10646:2003 [www.iso.ch], which costs ~90USD to buy. Want text/html? On top of the character encoding issues, you'll also have to refer to the ISO 8879:1986 [www.iso.ch] standard, which costs ~170USD to buy.

      What you claim are "truly open standards" are built on top of for-pay standards. You can't implement them fully
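
      To illustrate the point that "plain text" means nothing without an agreed encoding, here is a minimal Python sketch: the same five bytes decode into different "plain text" depending on which standard you assume.

        data = b"caf\xc3\xa9"            # five bytes on disk
        print(data.decode("utf-8"))      # 'café'  -- C3 A9 is one character
        print(data.decode("latin-1"))    # 'cafÃ©' -- C3 and A9 are two characters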

    • Does HTML count as a format, considering nobody checks for compliance when creating these documents? It's a really crappy format, to say the least.
  • But there are already such formats, e.g. LaTeX. Unfortunately, the only usable WYSIWYG editor is LyX, which runs out of the box only on Linux.
    • by 16384 ( 21672 ) on Friday March 03, 2006 @11:47AM (#14842743)
      Not anymore; LyX now supports Windows. See http://wiki.lyx.org/Windows/Windows [lyx.org]
    • There is a Windows port of LyX [lyx.org] which runs perfectly well on Windows XP; however, most power users would prefer to run something like TeXnicCenter [toolscenter.org] on Windows - or even just Notepad and MiKTeX.
    • At first I thought you were talking about media formats, and I wondered how one was supposed to record a bit on rubber, but I figured once you did, it probably would hold forever..
    • LyX is not the only usable LaTeX editor.

      I have found TeXmacs pretty good too.

      http://www.texmacs.org [texmacs.org]

      They also have a (beta) version that runs on Windows (I tested it. Works fine.)

      There are also other commercial editors.
    • by rxmd ( 205533 ) on Friday March 03, 2006 @01:03PM (#14843284) Homepage
      But there are already such formats, e.g. LaTeX.
      LaTeX is in no way an open document format suitable for storing, let alone archiving, data, for a variety of reasons:

      • Firstly, LaTeX gets its usefulness and power from packages. Unless you want to standardise on a given reference set of packages, it can't be used sensibly for archival purposes, because you'll have to store all possible packages in all versions along with your data. If you're willing to do that, you could run Word in an emulator, too.
      • There is no universal method for package versioning, for resolving package dependencies, or for maintaining backward and forward compatibility between package versions. This creates lots of problems when you use older documents on a newer TeX system. An example was the rather popular geometry package for easier page geometry setup, where version 3 of the package broke compatibility with older versions. The author added a simple switch to make the new version behave like the old ones, but you had to add the switch to the \usepackage declaration to make your documents compile. If you have to modify your documents to keep them usable, you're missing the point of a document archive.
      • There is no consistent way of using Unicode in TeX documents. Basically, with the existing solutions such as Lambda/Omega, UTF-8 inputenc, ucs.sty and proprietary packages, it's "choose two out of: compatibility with most LaTeX packages, compatibility with the Unicode standard, large character repertoire". It's somewhat usable, but not really well enough to be called universal.
      • LaTeX documents are really difficult to parse on a computer, making them even more ill-suited for archival storage on a large scale. Try talking to the developers of the TeX->LyX conversion scripts one day. Someone stated that the only good TeX parser is TeX itself. A good archival document format should be parseable using third-party tools.


      LaTeX is a typesetting system. It's designed for getting a nicely formatted PDF or PostScript file out of a source file that you can alter and modify on the spot. Typesetting is what it does really well. If you try to shove and bend it into other roles, it starts to get kludgy, especially when it concerns data exchange between large numbers of users with inconsistent package versions, automated processing of LaTeX documents with third-party tools or heavy use of international character sets.
      • Funny, I still can't see how the data that is in a LaTeX file may be lost. Maybe some formatting, but not the tables, pictures, formulas or text. Also, you can always infer the function of a package from its name if you don't have it, or read any manual from that time; there are always plenty of them. And it is not so hard to keep all combinations of packages; CTAN does exactly that. It is in no way worse than any other open format out there.

        About it being difficult to parse (?!). Well, I think that it is the p

      • And what about nonstandard stylesheets? Are authors to be restricted to "standard" XML, or do they have to provide their own stylesheets? In the latter case, they could just as easily provide their own LaTeX style files. Also, LaTeX is better supported than MathML, so LaTeX is better for math (at least right now).

        Also, you speak of maintaining compatibility. Will XML always be compatible? Will nothing in the various XML languages ever be deprecated?

        As for the difficulty of parsing TeX/LaTeX, is that due to th
  • Open formats are definitely the standard for which to strive.

    It appears Microsoft claims an open format, from the (fine) article:

    OpenXML will be the default format for saving documents instead of Microsoft's proprietary formats, said Alan Yates of the company's Office division

    Can anyone clear up exactly what OpenXML is? When I google it, I get vague references leading me to believe OpenXML is more of a container, and not Microsoft's specific document format. So, this sounds like another canard from Microsoft, with the claim of "open" obfuscating what probably is not.

    Any /.'ers have more info about Microsoft's format?

    On the other hand, the consortium (if you will) proposing a universal open document standard sounds more open and the proof will be in the implementation. Still, I'd like to know more specifically what that standard proposal is in detail.

    • Canards (Score:5, Informative)

      by Tony ( 765 ) on Friday March 03, 2006 @11:50AM (#14842764) Journal
      Any /.'ers have more info about Microsoft's format?

      Get thee to Groklaw [groklaw.net], my curious friend. The debate, along with the fine technical details, can be found there.

      On the other hand, the consortium (if you will) proposing a universal open document standard sounds more open and the proof will be in the implementation. Still, I'd like to know more specifically what that standard proposal is in detail.

      The implementation is here. It's called "ODF," the "OpenDocument Format." It is the default file format of the OpenOffice suite of applications; KOffice is also moving (or *has* moved, I'm too lazy to look) to that format as well. IBM's office suite will implement ODF.

      Again, Groklaw has a lot of information, including pointers to the official specification.
    • I don't know, it looks like it might - might - be the real deal this time. The Office 12 format has been submitted to the ECMA, and the revised licensing terms are actually very favorable. There is a decent example of the new Word XML here:
      http://blogs.msdn.com/brian_jones/archive/2006/02/02/523469.aspx [msdn.com]

      Additionally, it appears that they have adopted a covenant not to sue:

      Here are a few more specific and detailed questions and answers about Microsoft's 'Covenant Not to Sue' approach:

      There is no long

      • One complaint was that any future extensions MS makes wouldn't necessarily be open as well, so MS can 'extend' and break it. Was this addressed too?
      • by owlstead ( 636356 ) on Friday March 03, 2006 @12:45PM (#14843148)
        Ah, supersets. Great. So, does Microsoft also state that it won't use supersets by default? And if anyone else defines supersets, will there be any way to get them accepted into the standard? Or will MS just create a worse superset themselves and push that as the default? Sun does not accept supersets of Java (called Java) for compatibility reasons, since supersets can (and will) mess up the standard. See the HTML extensions added to MS IE for a reference.
    • by 99BottlesOfBeerInMyF ( 813746 ) on Friday March 03, 2006 @12:25PM (#14843009)

      Can anyone clear up exactly what OpenXML is? When I google it, I get vague references leading me to believe OpenXML is more of a container, and not Microsoft's specific document format. So, this sounds like another canard from Microsoft, with the claim of "open" obfuscating what probably is not.

      MS is offering to license this format to people under particular terms, and parts of this format are indeed binaries embedded in XML. It is also patent encumbered. The main objections to the licensing include restrictions making it unimplementable by GPLed programs, and the fact that licensing for old versions expires as soon as MS releases a new version, thus providing no guarantee that future generations will be able to legally read the files. Basically it is MS trying to confuse the issue and claim their format is just as open as OpenDocument, even though in reality it does not confer the benefits OpenDocument does.

  • by saskboy ( 600063 ) on Friday March 03, 2006 @11:37AM (#14842671) Homepage Journal
    The dark age has already happened several times. There are oodles of media formats from the '70s onward that are no longer readable today on a standard computer. Heck, new computers don't even come with drives for 3.5" floppies. I hope they have a strategy to tackle media problems along with file format compatibility, because the medium is the message.
    • I know, all of my data stored on my Commodore 128 can't be ported to my Linux machine since it doesn't have drivers for the 1571.
      • by dylan_- ( 1661 ) on Friday March 03, 2006 @12:01PM (#14842832) Homepage
        I know, all of my data stored on my Commodore 128 can't be ported to my Linux machine since it doesn't have drivers for the 1571.
        Actually, this page [shuttle.de] has exactly that.
      • The Catweasel controller board can read basically any floppy disk format, because it returns the timing between transitions. Whether you can find drivers to decode the data is another matter, but it is possible to write your own. For standard FM and MFM disks, cw2dmk works very well. The main problem with reading old Commodore disks is the crazy copy protection tricks they used, some of which even required 6502 code to be downloaded to the floppy drive.
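
        The decode step mentioned above is simple enough to sketch. This is not cw2dmk's actual code, just a hedged illustration of the first stage, assuming a double-density disk with ~2 microsecond raw MFM cells; a real decoder must also lock onto sync marks before it can split clock bits from data bits.

          def flux_to_mfm_bits(gaps_us, cell_us=2.0):
              # In MFM, legal gaps between flux transitions span 2, 3 or 4 raw
              # cells, so each gap contributes a 1 followed by one to three 0s.
              bits = []
              for gap in gaps_us:
                  cells = min(max(round(gap / cell_us), 2), 4)  # clamp timing noise
                  bits += [1] + [0] * (cells - 1)
              return bits

          def mfm_data_bits(mfm_bits, phase=1):
              # Every other MFM bit is a clock bit; the right 'phase' has to be
              # found by locating a sync mark first (hand-waved here).
              return mfm_bits[phase::2]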
    • Only one answer then. We'll establish a foundation on the edge of the Galaxy to preserve these documents for the next thousand years...
    • I recently moved, and came across a stash of floppies from 10 years ago. Curious as to what was on them, I tried to read them. Not a single one was readable. I even tried a sector recovery tool on a couple. No dice.

      I would hope that anything of value has already been transferred to another archive format. Of course, CD-Rs would not be a good format to use, because their longevity is estimated to be only 2-5 years [technologyreview.com].

      How much does CD-ROM pressing equipment cost?
    • Solved, already (Score:3, Insightful)

      by DogDude ( 805747 )
      It's already been solved. It's called "paper". It's been used for thousands of years, and if you take care of it properly, it can last a LONG time, always be readable, and is more open source than any of the FUD the OSS camp spews out. Paper. Written records. Hasn't been beat yet. Kinda like all of the people thinking that they were re-inventing the wheel with e-books. We've all seen how well that has gone.
      • In fact, if you bothered to check, paper [wikipedia.org] has been in use for less than 2000 years, and for less than 1000 years in the West.

        Perhaps you haven't noticed that it's kind of laborious to back up data that exists only on paper. This is why so many texts from antiquity have been lost. The survival of texts in manuscript form is the exception, not the norm. People have never, until the last hundred years, devoted a very large or widespread effort to technology for preserving data: the lack and the need

    • It doesn't matter. The data is digital, we can make perfect copies to current media formats. If anyone at all cares about this data, they can use one of the remaining computers that read them and copy it onto the hard drive of a web server, whereupon everyone in the world will be able to read it. This will always be the case.
      • Being digital doesn't mean squat if you can't read/write the format it's saved in. I'm sure you know that Motorola's organization of bits isn't the same as Intel's? The tools to read and write it into INFORMATION can't be neglected and allowed to fall into disuse before the information is converted to fresh or new media. That's why OpenDocument is so good, since the specs for reading and writing it are available to anyone, but also it should be a good enough format so that it's used as a stand
        • Well, I was only talking about the physical medium. File formats are more important, yes. That said, in the case of text, no matter what the format is, unless there's encryption or something it will always be possible to at least extract the raw words while flushing the rich text/formatting data. For example, in the case of Microsoft Word there is a little free application called "antiword" that does just this. So again, although it would be convenient for file formats to remain open and stable, no hist
  • Cool name! (Score:5, Funny)

    by Rob T Firefly ( 844560 ) on Friday March 03, 2006 @11:37AM (#14842673) Homepage Journal
    Naming your interest group "The [anything] Alliance" gives it that hardcore "We'll form Voltron and smite you if you look at us wrong" street cred.
  • Digital Dark Age? (Score:3, Interesting)

    by tgd ( 2822 ) on Friday March 03, 2006 @11:41AM (#14842698)
    Not being able to read the damn file format isn't the problem. The fact that there is no possible way to store even a tiny fraction of the data being produced for the long term is what will cause a digital dark age.

    I mean, hell. I've got 1.25 terabytes of online storage at home and probably 250 CDs burned over the last ten years, and I can't reliably ensure I'll still have access to them in ten years. Half those CDs are probably unreadable now -- from recent experience, at least 10% aren't.

    If they want to solve the digital dark age problem, they need to figure out how gigabytes or terabytes of PERSONAL information will be saved for future generations, not just what filters down into government or commercial archives. File formats just aren't that big of a deal. Worst case, someone has to reverse engineer one in a hundred years -- if you actually HAVE the data in a hundred years.
    • by brunes69 ( 86786 ) <`gro.daetsriek' `ta' `todhsals'> on Friday March 03, 2006 @11:46AM (#14842738)
      The problem is not that there is no long-term storage. The problem is that we produce more useless data than ever before.

      Really, who gives a f*ck about your 1.25 TB of crap? Or mine? We're just two ants in the anthill. You really think you can look up any substantial amount of information on someone who lived 200 years ago? Hell, try *50* years ago. Aside from public records like tax information and housing details, and maybe some family photos, you are likely to come up with bupkis, unless that person was famous.

      It's going to be no different 200 years from now, and frankly I don't see the problem with that. Only in the past decade has everyone gotten this weird urge to try and archive and record every unimportant detail of their daily lives (see MySpace.com, blogging, etc). What they don't realize is no one really gives a crap today, and they sure as hell won't give a crap in 100 years.

      Historians want to know about culture as a whole, not in bite-sized chunks. Aside from the major movers and shakers (politicians, *some* celebrities), historians won't be any more interested in people's musings on shit like Paris Hilton than I am.

      • Of course the true fun is the fact that we are making it much more difficult for them as well with our constant musings. The events that led up to Gulf War II will probably be important to historians, while the crazy nutcase theories concerning those events that flood the internet will only confuse history. You can hardly separate the truth from the fiction NOW; imagine trying to do so 100 years from now.
          The events that led up to Gulf War II will probably be important to historians, while the crazy nutcase theories concerning those events that flood the internet will only confuse history. You can hardly separate the truth from the fiction NOW; imagine trying to do so 100 years from now.

          Right. Because in the 1770s, absolutely no one in the colonies wrote anything stating that Washington et al. were just a bunch of wankers who didn't feel like paying extra for tea, the greedy bastards. And no one ever wrote home a
          • Good point; my point wasn't the newness of dissenting opinions, simply their vast worldwide spread. It's fairly new that a lower-class individual can obtain a worldwide audience for their views, especially when they aren't even a world-class writer.
            ...my point wasn't the newness of dissenting opinions, simply their vast worldwide spread.

              That's true, but there is also a larger amount of widely spread information in general, true and false. In fact, I would venture to say that the ratio of false information to true information is much lower now than it has been historically, simply because the average educational level of the world has increased dramatically in the last 100 years. So even though more bad information is out there, even more goo
              • Yes, but it's a LOT more information to have to sift through; future historians will definitely have an interesting job. Of course we are currently working on writing the history as it happens, but any good historian will want to try to recreate it to really get at the truth, and not just the "official" truth.
      • by Random_Goblin ( 781985 ) on Friday March 03, 2006 @12:07PM (#14842872)
        Really, who gives a f*ck about your 1.25 TB of crap? Or mine? We're just two ants in the anthill.

        Actually, judging by what modern archaeologists find really interesting, it is exactly what future archaeologists will be interested in.

        The little bits of detritus that make up individuals' lives are far more interesting than the "big picture" history, which is usually heavily loaded with spin; it is therefore a bit of a chore to work out what was actually happening, as opposed to what people wanted you to believe was happening.

        The fact that people ARE musing on shit like Paris Hilton IS going to be interesting to future historians, because it gives an insight into how people were living their lives and what was important to them at the beginning of the 21st century.

        All of those pictures from our camera phones and whining LiveJournals may not paint a terribly flattering picture of our lives, but from an archaeological point of view, it's exactly the sort of evidence you want.
        • I'm sure they'd love to find a blog or two, but if they saw all of Blogger.com they'd just ignore us as a pointless moment in time.

          And if they saw slashdot browsing below 5, they'd write us off completely

      • by tgd ( 2822 ) on Friday March 03, 2006 @12:08PM (#14842884)
        Actually things like personal papers and photographs from random people are immensely valuable to historians, and priceless to family.

        I have nearly a hundred glass negatives of photos from my family a hundred years ago.

        A hundred years from now, it's unlikely any of the 500 gigs of digital photos or DV videos I have will be available to anyone. Hell, I'm worried that a couple of bad failures in quick succession could mean the same for me or my future children in a lot less time!

        You clearly are not a historian or a history buff... because I don't know any who would make such a blatantly ridiculous statement as that they are not interested in personal writings and other forms of media.
      • You are sort of right. Historians do find the lives of common people interesting. The new problem is that gathering information is so easy that we have a ton of it. Your great-grandkids may really be interested in knowing what books you loved, what movies you liked, where you and your spouse met. Maybe even where you went on vacation. They may really not care about the copies of Futurama you have stored.
        DRM could be the real cause of the Digital Dark Ages. The pirates may be the only thing that will let th
      • Hey... with 1.25 TB you can store 1/100 of AT&T's Daytona database.
      • Only in the past decade has everyone gotten this weird urge to try and archive and record every unimportant detail of their daily lives (see MySpace.com, blogging, etc). What they don't realize is no one really gives a crap today, and they sure as hell won't give a crap in 100 years.
        I blame Samuel Pepys. Narcissistic bastard.
      • Considering how archaeologists will dig through a fossilized dung heap to try to figure out what an ancient civilization ate, I think you're dead wrong on whether archaeologists in 100 years will want to read someone's archived MySpace blog contents.

        Right now, the oldest known culture on earth is that of the ancient Egyptians. They were nice enough to leave lots of written records, but it took us hundreds of years to figure out what their writing meant - and we had to cheat to do it (by the lucky discovery of the Roset
    • I don't think anyone cares unless.. Wait, it makes more sense like this:
      "...figure out how gigabytes or terabytes of PR0N information will be saved for future generations..."
      Now we have a problem that needs to be solved. Get to it!
  • by Bullfish ( 858648 ) on Friday March 03, 2006 @11:44AM (#14842724)
    I really do wish them luck. The thing is, the "document" and "content" companies are going to fight like hell to expand proprietary formats, as they ultimately look to the MS Word format and the sheer number of copies of MS Office sold, and see the dollar signs available by controlling the format and making everyone dance to their tune. Anyone who remembers the fiasco that occurred when MS Office 97 wasn't very compatible with the previous version will also remember that companies simply shelled out for converters etc. until MS issued a patch. They had no choice.

    While packages like OpenOffice have existed for a while, they are perceived as "not being ready for prime time" by most businesses. The only advantage many see is the ability to save as PDF (another proprietary format). For ODF to take hold, governments and some very large publishing concerns are going to have to adopt it. Else, not much will change, and the march towards increasingly proprietary formats will continue.
    • The only advantage many see is the ability to save as PDF (another proprietary format)

      Just out of curiosity, why is PDF as bad as...say...a Word document? How else can something like xpdf, or any number of PDF converters and/or readers exist?

      And to be quite honest, if this 'open document' initiative falls through, I don't see why there can't be a concentrated effort to reverse engineer the Word document format to a satisfactory standard. It's stayed almost identical since Word 97.

      • Adobe owns the PDF standard outright, and the thing about proprietary formats is the originator can change the spec anytime they want without any input from anyone. Ultimately, what makes ODF attractive is that changes won't (or shouldn't) occur that way.
        • Adobe owns the PDF standard outright, and the thing about proprietary formats is the originator can change the spec anytime they want without any input from anyone.

          But that doesn't prevent Free apps from specifying that they read and write a fixed PDF version. On the other hand, do you think Free specs don't change? Look at the rewrite-from-scratch that is the XHTML 1.1 to XHTML 2.0 transition.

  • ...for it to be truly open and future-accessible, it would have to specify everything from the bottom hardware level up - from how bits are encoded on whatever the storage medium is, through to file system layout, and only then starting to talk about the contents of the data itself. From there you then have to decide formats for all types of data (text, formatted text, images, audio, etc.) and specify how they're encoded and what their structure is, and how to interpret their data (e.g. images composed of p
    • for it to be truly open and future-accessible, it would have to specify everything from the bottom hardware level up - from how bits are encoded on whatever the storage medium is, through to file system layout, and only then starting to talk about the contents of the data itself.

      The "Compact Disc Recordable" and "ISO 9660" formats are already fairly strictly specified, and abridged versions of the specs could be etched onto a durable substrate and included in the time capsule. Trouble is that these spec

  • by A beautiful mind ( 821714 ) on Friday March 03, 2006 @12:07PM (#14842873)
    Three Standards for the iMac-kings under the sky,
    Seven for the HURD-lords in their halls of stone,
    Nine for Microsoft Men doomed to die,
    One for the Big Blue on his SPARC throne,
    In the Land of Sun where the Shadows lie,
    One Standard to rule them all, One Standard to find them,
    One Standard to bring them all and in the darkness unite them
    In the Land of Sun, where the Shadows lie,

    The One Truly Open Standard.
  • Am I the only one who finds this number a little low? They could probably bump it up to 99% and still be right. Assuming they aren't including other types of files Office doesn't do (like cartography files, as an example), I would think .doc, .xls, .ppt and the like account for almost all document types. I know if I have a document in OpenOffice or something similar, converting it to .doc format usually isn't far off, since otherwise it seems like no one else can really use it.
    • In the last year I have been getting more and more documents in both sx* and OpenDocument formats (actually, more the first one) -- not because the sender knew I had OO.org, not because we talked about formats, just like that. Of the commercial entities I know about, ~80% use OO.org on at least 50% of their machines. In some cases it is installed alongside MS Office, but a lot of companies just get one or two MS Office licenses for gateway purposes, in case they get a document that OO.org cannot handle. It somewhat surprises even me (
  • Dark Information (Score:5, Insightful)

    by GodWasAnAlien ( 206300 ) on Friday March 03, 2006 @12:55PM (#14843211)
    Historically, the problem of the disappearance of "important" information has always existed, but some do not see the possible connection to a modern, digital world.

    Some pieces of information really did exist long ago, but we only have references to the information, not the information itself. This could be from a lack of copies, or from suppression by religion or government.

    In our digital world the same could happen with information, including software, books, music, and movies.

    In an effort to absolutely control the information, the various information industries attempt to control the media, using secrets, encryption, and government control. These industries intend to profit from this information control as long as possible. It is assumed, and effectively mandated, that this control will never end.

    The problem is that at some point in the future the information could become non-valuable to these information industries. But currently, no mechanism exists such that these industries would be required or motivated to reveal the secrets or encryption mechanisms that would make the information useful. One reason could be that other information uses similar encryption or secrets, and the profit potential of that information might be jeopardized.

    The result is that unprofitable information may silently disappear, as whatever backups of the original expire.

    Some examples would be:

    A software company writes software, selling binaries only to the public. The copyright for the software is 100 years. Long before the end of those 100 years (perhaps after 10), the original source is no longer kept by the company. So in the future, looking back at the state of software in the year 2000, perhaps there may be some pictures of "Windows XP", but it may be unclear what it did, as no source exists and it's not really worth reverse engineering. Meanwhile, things called Linux and BSD did exist, and the complete information/source about these would still be available. History can really focus only on the known, not the hidden.

    Similarly, assume that the recording and music industry comes up with the "perfect/unbreakable" encryption. They spend much of their resources hiding anything close to raw digital information from the consumer. But these DRMed songs eventually become unpopular. Obviously the DRM mechanism could still not be revealed, as they still use it for other songs. They have essentially subverted any copyright limits, imposing an infinite limit. After the point of disinterest, the DRMed songs/movies may just fade away. I suppose Creative Commons music/movies of the time may survive instead. Obviously these may not represent what was seen at the time.

    • We've already lost the source code for some of the more popular TRS-80 software because the authors didn't think to preserve it when they had the chance.

      Ira Goldklang has collected thousands of TRS-80 programs at trs-80.com [trs-80.com], but when the authors of some of these programs were tracked down, they admitted that their original source code had been lost - thrown away, or the media unreadable.

  • I have news for you: it doesn't matter what format the documents are in. If one format is unreadable, they all are. For example, if I can't read hello from 68 65 6C 6C 6F, then how in the hell would I understand hello from 3C 74 65 78 74 3E 68 65 6C 6C 6F 3C 2F 74 65 78 74 3E?

    No way: either they'll be able to read it or they won't; it doesn't make any difference if we tag the text. I personally think sticking to ASCII would at least yield some possibility they could get the text back, because at least t
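
    For anyone who wants to check the hex above, Python happily confirms both sequences are plain ASCII:

      plain  = bytes.fromhex("68 65 6C 6C 6F")
      tagged = bytes.fromhex("3C 74 65 78 74 3E 68 65 6C 6C 6F 3C 2F 74 65 78 74 3E")
      print(plain.decode("ascii"))   # hello
      print(tagged.decode("ascii"))  # <text>hello</text>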
    • Note that sticking to ASCII also ensures that they will have no way to properly read documents from any country where English is not the main language, as those countries would have no way of saving their text properly. If you want compatibility and simplicity, standardizing on UTF-32 would probably be the best way. UTF-8 would probably work as well - if they have the Unicode table, they probably also have the UTF-8 specs.
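
      A quick Python sketch of the trade-off being described: UTF-32 spends exactly four bytes on every character, while UTF-8 is variable-length but leaves ASCII bytes untouched.

        s = "naïve"
        print(s.encode("utf-8"))      # b'na\xc3\xafve' -- 6 bytes, ASCII unchanged
        print(s.encode("utf-32-be"))  # 20 bytes -- exactly 4 per character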
  • Until they said: "This is not a partisan, anti-Microsoft group," said Simon Phipps of Sun.

    If they ain't agin MicroSoft, they ain't with me ;-)

  • by AeroIllini ( 726211 ) <aeroillini@NOSpam.gmail.com> on Friday March 03, 2006 @04:11PM (#14845219)
    All this talk about the One True Format(tm) is nice, and I'm heartily in favor of using OpenDocument over proprietary formats, but not to prevent a Digital Dark Age.

    The Digital Dark Age people talk about is not about file formats. Mostly, it's about data storage and retention. Most of what historians/archeologists know about entire civilizations and time periods comes not from the official documents, but from the personal, off-the-cuff type stuff. Historians love reading journals, diaries and personal letters, and archeologists glean the most information from household and personal items. These are the things that give you insight into the *people* who lived in that age, and how the political events of the times (which are generally well preserved) were perceived.

    However, most of our personal letters are now emails, which regularly get deleted, lost, blown away in a reformat, or simply forgotten about and tossed with the computer when we upgrade. Our journals and diaries are now blogs, which are subject to the same problems. In 2500 years, when some archaeologist digs up your laptop, he must first decipher the machine to find where the data is stored, then extract the data, then decode it and translate it into his own language, before he can even start working on the meaning and significance of your emails, all of which contain complicated headers and multiple encodings (text, HTML, etc.). Contrast this with his finding a paper letter... the machine deciphering and data extraction are already done. All he has to do is decode the symbols and translate the language.

    Data about our society will exist, but most of it will be in a digital form, and this places lots of extra burden on the person trying to understand the data. As a result, there will be many more gaps in our history, because the data is much harder to decipher.

    Keeping our data in open formats is not really the issue; they still rely on conventions such as ASCII, XML, and PNG, that may or may not be lost. The truth is that the data only exists as 1s and 0s, and whether the data is in Microsoft Word format or OpenDocument format, it will still need to be deciphered and decoded. If all knowledge of ASCII/Unicode mapping and 32-bit RGBA color encoding is lost, does it matter if the XML schema of the format is documented somewhere in some different string of 1s and 0s?

    What the OpenDocument format solves is the problem of near-term data access. Over relatively short time spans, say 100 years or so, OpenDocument files will still be readable long after all proprietary formats have been abandoned. For this reason, OpenDocument should be used to keep documents available long after the company that provided the creation software has gone under. This is a noble and very valid goal, but let's not confuse it with the larger issue of the "Digital Dark Age."
    • ASCII, XML, PNG et al. are standards. These standards are written down, often in electronic form, but there are certainly published books documenting PNG and XML. ASCII is a trivial thing to reverse engineer: noticing that all or most (perhaps with some bit rot) of the files you discover are a multiple of 8 bits is pretty easy. After that, if you know the character frequency distribution of English/Latin/Hawaiian, you should be well on your way to recovering all of Project Gutenberg's texts.
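
      As a hedged sketch of that last step -- assuming the recovered files really are 8-bit English text -- a first-pass frequency attack fits in a few lines of Python (real attacks refine the guess with digraph statistics; the file name is hypothetical):

        from collections import Counter

        # Space first: in typical English text the space is the most common byte.
        ENGLISH_BY_FREQUENCY = " etaoinshrdlcumwfgypbvkjxqz"

        def guess_byte_to_char(blob: bytes) -> dict:
            # Pair the most common bytes with the most common English letters.
            ranked = [byte for byte, _ in Counter(blob).most_common()]
            return dict(zip(ranked, ENGLISH_BY_FREQUENCY))

        # Usage: mapping = guess_byte_to_char(open("mystery.bin", "rb").read())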
  • From the article: "The OpenXML format is supported by Intel, Apple Computer, Toshiba, BP and the British Library, among others, Yates said."

    OK, so do we like or hate Apple today? They're obviously fellating MS in order to continue to have versions of Office created for them, no?
