UK University Researchers Must Make Data Available 352

Sara Chan writes "In a landmark ruling, the UK's Information Commissioner's Office has decided that researchers at a university must make all their data available to the public. The decision follows a three-year battle by mathematician Douglas J. Keenan, who wants the data in order to do his own analysis of it. The university researchers have had the data for many years, and have published several papers using it, but had refused to make it available. The data in this case pertains to global warming, but the decision is believed to apply to any field: scientists at universities, which are all public in the UK, can no longer claim data from publicly-funded research as their private property." There's more at the BBC, at Nature Climate Feedback, and at Keenan's site.
This discussion has been archived. No new comments can be posted.

  • The public pays for gathering the data, the public should have access to that data. Kinda hard to find fault with that.

    • The public pays for gathering the data, the public should have access to that data. Kinda hard to find fault with that.

      I'm sure people with a dogmatic axe to grind will prove an annoying if minor fault. Creationists regularly mangle papers, taking quotes out of context and all. I can't imagine them being pacified by the messy data.

      Oil companies and people who are dead set against thinking we -might- be changing the atmosphere will undoubtedly cherry pick out from the data, take things out of context from studies supporting climate change as a theory, and those people whose support of climate change is based more off of re

      • by NeutronCowboy ( 896098 ) on Wednesday April 21, 2010 @05:52PM (#31932858)

        Creationists regularly mangle papers, taking quotes out of context and all.

        Get ready for an onslaught of mangled data analysis, with data being taken out of context, the results published to some blog, and people making policy decisions based on those blog postings.

        the media will focus on the new controversies this will spawn

        That's a guarantee. While in theory I welcome this development, I suspect that in practice it will lead to more chaos than before. Not because the data is shoddy, but because some meteorologist will think that running a data set through an Excel curve-fitting algorithm is science.
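The failure mode being mocked here is easy to demonstrate: least-squares curve fitting will happily "explain" pure noise if you give it enough degrees of freedom. A minimal sketch (synthetic data, numpy only; nothing here comes from any real climate record):

```python
# Fit polynomials to pure noise and watch the in-sample fit "improve"
# as the degree grows. The data is synthetic random noise by design.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 30)
y = rng.normal(size=30)          # no trend at all, just noise

def r_squared(y, y_hat):
    """Fraction of variance 'explained' by the fitted values."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

line = np.polyval(np.polyfit(x, y, 1), x)      # degree 1: finds ~nothing
wiggle = np.polyval(np.polyfit(x, y, 15), x)   # degree 15: "fits" the noise

print(r_squared(y, line))    # near zero: a line finds no trend in noise
print(r_squared(y, wiggle))  # much higher, despite there being no signal
```

The high-degree fit always scores better in-sample, which is exactly why a good curve through the data, on its own, is not science.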

        • by Anonymous Coward on Wednesday April 21, 2010 @06:17PM (#31933156)

          some meteorologist will think that running a data set through an Excel curve-fitting algorithm is science.

          Nope -- it's only science if you adjust and filter the data first to make it match your truth. Resist releasing your data, though, and others may adjust and filter it in other ways to make it match their truth. Such is science in a world of research driven by political agendas and egotistical arrogance.

          Disclose, when in doubt disclose more. Anything less in scientific arenas where others can't repeat your experiments is just a symptom of fear, insecurity, and lack of confidence that your conclusions will stand up to the view and study of many brains (some better than yours, some worse).

          Same argument for why FOSS is better - many eyes reviewing (in theory) and rapid fixes.

          • by interkin3tic ( 1469267 ) on Wednesday April 21, 2010 @06:25PM (#31933268)

            some meteorologist will think that running a data set through an Excel curve-fitting algorithm is science.

            Nope -- it's only science if you adjust and filter the data first to make it match your truth.

            I don't think that's what he was saying. He's saying this will lend itself to overly simplistic interpretations. That's a good prediction in climatology, considering what people got out of "climategate."

          • Re: (Score:3, Funny)

            by mkiwi ( 585287 )

            A mathematician, an engineer, and a computer scientist are the final candidates for the top tech spot at a major corporation. They are summoned one by one to be interviewed.

            The mathematician goes to the interview. The person interviewing him is the CEO of the company. Only one question is asked: "What is 1+1?"
            The mathematician pulls out a pen and paper, makes a few scribbles, and says "This is proof that 1+1=2!"

            The engineer goes to the interview next. The CEO asks him the same question, "What is 1+1?"
            Th

        • Re: (Score:3, Informative)

          by maxume ( 22995 )

          Even worse, some hack might shove the data through some perl code:

          http://www.timesonline.co.uk/tol/news/uk/article7028418.ece [timesonline.co.uk]

        • by interkin3tic ( 1469267 ) on Wednesday April 21, 2010 @07:59PM (#31934226)

          Creationists regularly mangle papers, taking quotes out of context and all.

          Get ready for an onslaught of mangled data analysis, with data being taken out of context, the results published to some blog, and people making policy decisions based on those blog postings.

          Hmm... I think you've brought up another valid point: some researchers might take the data, rehash it, and publish it as their own, getting credit for it, much as you have taken my point, restated it with minor additions, and got all the mod points for it.

          Which is to say, I see what you did there ;)

        • Re: (Score:3, Insightful)

          by T Murphy ( 1054674 )
          People drawing bad conclusions from good data is better than drawing any conclusions from bad data, or from none at all. Good data at least gives the proper scientists a chance to use logic and reason to correct people. We can't change the minds of creationists because we are not drawing our conclusions from the same 'data'. People believed in global warming because of data, and now deny it because of doubt in the data. They may be impulsive and believe whoever speaks the loudest, but it does imply we can bring them
        • by Xest ( 935314 ) on Thursday April 22, 2010 @04:05AM (#31936680)

          The issue with the FoIA in the UK is a clause under which bodies need only comply with a request if the cost of fulfilling it is no more than around £450.

          I've seen first hand local government abuse this by claiming that collation of the data would take 18 hours and that their FoI officer is paid £25 an hour, and hence that the cost of providing the data is too high. Quite why it takes someone paid £50k a year to collate basic data they should already have collated anyway, I've no idea; but still, they use this excuse, and the Information Commissioner allows such abuse of it.

          So although, as you say, it's a great theoretical win, I believe it'll make no difference in practice either way, due to the ease with which public bodies are able to sidestep FoI requests.
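The 18-hours-at-£25 arithmetic above is worth making explicit: it comes to exactly £450, so under a strict "more than the limit" reading the body would still have to comply, and every extra claimed hour tips the request into refusable territory. A hypothetical sketch (the figures are the comment's own; the real statutory fee rules use fixed rates, so treat the numbers as illustrative):

```python
# Sketch of the FoI cost-limit dodge described above. The £450 limit
# and £25/hour rate are the comment's figures, not the statutory ones.
APPROPRIATE_LIMIT_GBP = 450

def foi_request_refusable(hours_claimed: float, hourly_rate_gbp: float) -> bool:
    """True if the claimed collation cost exceeds the limit,
    letting the body refuse the request on cost grounds."""
    return hours_claimed * hourly_rate_gbp > APPROPRIATE_LIMIT_GBP

print(foi_request_refusable(18, 25))  # 18 h x £25 = £450: exactly at the limit
print(foi_request_refusable(19, 25))  # one more claimed hour: refusable
```

The incentive problem is visible in the function signature: `hours_claimed` is self-reported by the body being asked for the data.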

    • Re: (Score:3, Insightful)

      by c++0xFF ( 1758032 )

      Opening the data will encourage further research. The data will be available for others to use, instead of forcing constant duplication.

      "Standing on the shoulders of giants" means to build on what has been done before. Hiding the source data shows just how "little" you are.

      • by jacksonj04 ( 800021 ) <nick@nickjackson.me> on Wednesday April 21, 2010 @06:09PM (#31933056) Homepage

        You know the multi-billion dollar LHC? Guess what they did their first physics on. Not finding new exotic particles, but proving that what we think we know so far still stands up. Duplicating data is exactly how things get proven and disproven. If Group A and Group B use exactly the same source data there's no possibility of Group B proving Group A's research wrong.

        • by thepike ( 1781582 ) on Wednesday April 21, 2010 @06:27PM (#31933286)

          I totally agree. If people just start looking at each other's data instead of verifying it, a lot of mistakes (or fraudulent data [wikipedia.org]) will never be caught.

          Also, I have to wonder what the timeline for releasing data is. My research is funded with government money (NIH and NSF), but it can take years to get enough data to make a worthwhile paper. If I have to release my data before then, it will hurt my ability to publish papers without getting scooped. You could end up with a whole cottage industry of people just mining the data others have had to disclose. And, here's the main catch: if you don't have to release results you haven't yet reported on, the problem isn't solved at all, because I could just choose to "not yet publish" any results that don't agree with what I want to say. Nothing says I ever have to publish results I get, so why wouldn't I just sit on them?

          Not that sitting on data just because it doesn't agree is a good thing, but it happens. And plenty of good data goes unpublished (experiments fail, uninteresting results happen, journals don't publish negative results very often etc) so what about that data? Overall this law isn't going to help anything, and will just cause issues.

          • Re: (Score:3, Insightful)

            by the_womble ( 580291 )

            I totally agree. If people just start looking at each other's data instead of verifying it, a lot of mistakes (or fraudulent data) will never be caught.

            On the other hand, a lot of errors in interpretation and statistical analysis will be caught.

        • by Obfuscant ( 592200 ) on Wednesday April 21, 2010 @06:39PM (#31933428)
          If Group A and Group B use exactly the same source data there's no possibility of Group B proving Group A's research wrong.

          Wrong. If Group B cannot duplicate Group A's analysis of the data, that proves that Group A did something wrong and probably came to the wrong conclusion.

          If Group B cannot duplicate the experiment and get the same data (and knowing that means being able to compare both sets) that calls the experiment as a whole into question.

          There is more to science than simply applying equation A to data B and getting number C.

          This hubbub all came about because of the difficulty in prying the source data out of the hands of the guy who produced the "hockey stick" figures. It's covered in the book "Broken Consensus" I think it's called. The "hockey stick" is not the "source data", the source data is all of the individual readings from all the instruments, prior to corrections for sampling errors or known issues. One cannot verify the quality of the "hockey stick" result without having the source data and being able to verify the processing steps that were done to it.

          The downside to free and open access to all data is that research groups get grants to collect AND process the data to come up with results. Opening the data up for free access means that other groups, who have more interest in scooping than being right, have more ability to do that scooping. That leaves the people who did the work in the cold. There is good reason to delay opening the data until the group being paid to collect it has a chance to use it.

          • Re: (Score:3, Interesting)

            The downside to free and open access to all data is that research groups get grants to collect AND process the data to come up with results. Opening the data up for free access means that other groups, who have more interest in scooping than being right, have more ability to do that scooping. That leaves the people who did the work in the cold. There is good reason to delay opening the data until the group being paid to collect it has a chance to use it.

            Why do you think tha

          • by Roger W Moore ( 538166 ) on Wednesday April 21, 2010 @09:50PM (#31935030) Journal

            Opening the data up for free access means that other groups, who have more interest in scooping than being right, have more ability to do that scooping. That leaves the people who did the work in the cold.

            That is not hard to achieve: someone has to make an FoI request, the cost to prepare the data has to be estimated, someone has to get hired to collect and format the data and then the data is released. That can take a considerable amount of time.....but that's not the only issue. In my field of particle physics raw data is generally useless unless you understand how it was collected and how to analyse it.

            Even assuming that you had several petabytes of disk/tape available to store it, raw data from ATLAS would be completely useless to you unless you really understand the detector "warts and all". Trying to understand this data without access to the detector itself and the ability to test and cross-check ideas looking at (and sometimes carefully tweaking) the hardware is literally impossible....and that is before you get into the thorny international issues about who did what and so whether it falls under any one country's laws.

            These issues were discussed on a previous experiment I worked on in the US and the conclusion was that it did not serve the public to have data released in just about any form: the raw data was useless and even the processed data still had considerable "quirks" which required understanding (e.g. acceptance drops at detector boundaries etc.). This was aptly demonstrated by a pilot project which resulted in no interest at all from the public but which worryingly attracted a few nutters who were more interested in proving their pet theory than in doing science.

            So while I am very sympathetic to the "the public paid for it the public should be able to access it" argument I do not think that the public's interest is best served by releasing raw data in all (most?) cases. The best way to serve the public interest is to ensure that results and ideas arising from that research are freely available to all and allow the public to build on that.

          • by TapeCutter ( 624760 ) * on Thursday April 22, 2010 @02:47AM (#31936392) Journal
            "This hubbub all came about because of the difficulty in prying the source data out of the hands of the guy who produced the "hockey stick" figures. It's covered in the book "Broken Consensus" I think it's called. The "hockey stick" is not the "source data", the source data is all of the individual readings from all the instruments, prior to corrections for sampling errors or known issues. One cannot verify the quality of the "hockey stick" result without having the source data and being able to verify the processing steps that were done to it."

            I threw away some mod points because it irks me how unskeptical the garden-variety climate skeptic actually is when it comes to accepting that the hockey stick has been discredited. Here are a few points you should consider with your skeptic's hat on...

            1. Mann's original hockey stick was published in the journal Nature, which is not well known for publishing shoddy work.

            2. A senate inquisition was held on Mann's paper, in which the National Academies of Science were called in to give expert testimony [nationalacademies.org] on the veracity of Mann's paper. As you will no doubt learn when reading the testimony, the NAS came down firmly in favour of Mann, although they did highlight some minor technical problems.

            3. Given that the NAS were able to agree with Mann's conclusions under oath at a hostile inquisition, how did they do so without access to the data?

            4. The journal Science is also not well known for publishing shoddy work. So why did the NAS then publish a follow-up study by Mann in their journal Science, if they were not satisfied that he had not only addressed the minor technical problems in the original but had also greatly increased the robustness of the findings?

            5. Why can't I find a listing for a book called "broken consensus" which you cite as a source? Shouldn't you at least adhere to your own standards of evidence?

            6. How do you explain the links to the data and methods found in an article called Dummies guide to the hockey stick [realclimate.org] on Mann's website?

            7. Why do people believe that some difficult-to-obtain (ie: time-consuming) data from a few nations means that the other 99.99999% of the raw data [realclimate.org] available on the web is insufficient to recreate the hockey stick?

            8. Why is McIntyre only interested in "auditing" climate science that disagrees with his opinion? Could this be because his own paper did not stand up to the traditional auditing method called "the test of time"?

            If the above points do not at least cause you to question your sources, then I can only conclude your skeptic's hat must have slipped down over your eyes...
        • Re: (Score:3, Interesting)

          by budgenator ( 254554 )

          What if Group B notices that a temperature station reports -12.4°C and then, 10 minutes later, +12.4°C? On 2010-Apr-21 22:10, drifting buoy 48534 [sailwx.info] did just that, and that's an automated report; imagine the fun and games when human error gets added in! There are a lot of bad data points in the records, and the records were never intended for the purpose they are now being used for, so quality control is even more critical. We really need a large number of huma
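A glitch like that sign flip is exactly what a basic quality-control pass catches. A minimal sketch, assuming a plain list of (minutes, °C) readings; the jump threshold is an illustrative guess, not any operational QC standard:

```python
# Flag physically implausible jumps between consecutive temperature
# readings, like the -12.4 C -> +12.4 C flip described above.
# The 10-degree threshold is an illustrative assumption.
MAX_JUMP_C = 10.0

def flag_jumps(readings):
    """readings: list of (minutes_since_start, temp_c) in time order.
    Returns indices of readings that jump too far from the previous one."""
    flagged = []
    for i in range(1, len(readings)):
        _, prev = readings[i - 1]
        _, cur = readings[i]
        if abs(cur - prev) > MAX_JUMP_C:
            flagged.append(i)
    return flagged

# A buoy-like series with the sign-flip glitch in the middle:
buoy = [(0, -12.6), (10, -12.4), (20, 12.4), (30, -12.1)]
print(flag_jumps(buoy))  # flags the +12.4 reading and the drop back after it
```

An automated check like this only flags suspects; deciding whether a flagged reading is a sensor fault, a transcription error, or a real event still takes a human who understands the instrument.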

    • So the question is: is it possible to request data on ANY publicly funded research going on in the UK? What about research on SILEX (http://en.wikipedia.org/wiki/Silex_Process), a laser-based uranium enrichment process that is much more efficient than other enrichment processes (currently very, very classified), or military research?
      • No. Classified data is still protected by law, even if funded publicly and researched at a public university. Courts are extremely unlikely to ever decide that classified data should be blindly released for any reason, and the public nature of the funding behind it would not be grounds for release.

    • There is no question that having the data released eventually should be the rule. It shouldn't even be considered proven science until it can be thoroughly recreated. However, the tricky bit is mandating exactly when it must be released. If a lab has spent a long time, let's say 10 years, accumulating some hard-fought data, they should be allowed the benefit of a few publications before releasing all the data, so that better (likely privately) funded labs don't get to do the easy rapid analysis and 24/7 postdoc
      • by Rogerborg ( 306625 ) on Wednesday April 21, 2010 @07:17PM (#31933850) Homepage

        If a lab has spent a long time, let's say 10 years, accumulating some hard fought data

        If a lab has been spending my tax money for 10 years, I want my employees to give me my data right Goddamn now.

        The "reward" for doing publicly funded research is that you keep getting funded. I don't care one whit what you think you're entitled to: if you're taking my money, you work for me.

        • Re: (Score:3, Interesting)

          by the gnat ( 153162 )

          If a lab has been spending my tax money for 10 years, I want my employees to give me my data right Goddamn now.

          Okay, but does that mean you should get to see the data before they're done analyzing it, before they can write a paper on their results? If we instituted such a rule, there would be nothing to stop scientists from bombarding their competitors with FOIA requests, and using the released data to scoop them. At the very least we'd need embargo rules, but even that won't entirely prevent abuses of th

          • by HungryHobo ( 1314109 ) on Wednesday April 21, 2010 @08:20PM (#31934426)

            not really.
            Your problems with these possible situations are based on the deeply flawed system we have in place now.

            Give academics the respect and credit they deserve for collecting vast quantities of high quality data rather than merely for the 2 page paper they write about some interesting statistical anomalies they found in said data and this ceases to be a problem.

            The way papers are written, reviewed and published today, and the way academics are given credit, is based on a system hundreds of years old, from when it was costly to print hundreds of pages of boring figures.

            Now data is cheap beyond words. Publishing a few hundred words or a gigabyte is little different when your audience is fairly small, and the way academics publish should reflect that, but the system is too hidebound and dogmatic to do so.

            A professor who does nothing but produce a high quality and hard to acquire dataset deserves credit even if he comes to no conclusions at all.

            The problem is with the system and with the way academics think.
            Not with this possible change.

            Fix your system.

            • by the gnat ( 153162 ) on Wednesday April 21, 2010 @08:35PM (#31934556)

              Give academics the respect and credit they deserve for collecting vast quantities of high quality data rather than merely for the 2 page paper they write about some interesting statistical anomalies they found in said data and this ceases to be a problem.

              The problem is that interpreting raw scientific data is enormously time-consuming, because there's so much information available that we can't possibly assimilate it all. I have a PhD in biochemistry and advanced training in crystallography, but I couldn't look at a ribosome structure and easily figure out what it meant, because I don't know very much about ribosomes. The people solving the structure, on the other hand, have exactly the background necessary to perform detailed analyses, and they will undoubtedly notice things that completely escape me. And I think you're understating the value of the scientific literature. A 2 page paper on statistical anomalies won't get you a faculty position at a major university, but a well-written 10 page paper on the meaning of a crystal structure certainly can. This is even more the case if they took additional time to perform non-crystallographic experiments to verify new hypotheses.

              I don't deny that there are issues with our system, but you're completely missing the point of writing papers. Simply generating massive amounts of data isn't considered science - figuring out what it means is. I say this as someone who is very good at generating data quickly, but not particularly good at interpreting it. Now I write data analysis software instead, and leave the question-asking to more suitable minds.

              • Re: (Score:3, Insightful)

                by HungryHobo ( 1314109 )

                The original objection was that if the data is hard to come by then it's unfair to academics who wouldn't get the credit after gathering the data.

                Of course simply generating massive amounts of data isn't science but it is a very very very important part of science.

                Is an academic who can write that well-written 10 page paper on the meaning of a crystal structure any less mentally capable because he didn't have the funds or facilities to gather the data he's looking at?

                If you open up the data then someone wil

              • by xtracto ( 837672 ) on Thursday April 22, 2010 @12:14AM (#31935810) Journal

                Simply generating massive amounts of data isn't considered science - figuring out what it means is. I say this as someone who is very good at generating data quickly, but not particularly good at interpreting it.

                Spot on. I have a PhD in Comp. Sci. (Multi-Agent Systems / Market Based Control). One of the things you learn (maybe in your university degree courses, or at your first paper presentation) is that data does not mean *anything*; what matters is the interpretation of that data.

                Nevertheless, I am of the opinion that the programs used for the generation/manipulation of such data should also be free/scrutinizable, especially those developed during the research, as they are also being paid for by taxpayers' money.

                In the field I am working in now (agent-based computational economics), a lot of people do these so-called agent-based simulations, then write a nice paper about what their simulations showed and try to publish it. The problem is that they keep their code! In that respect they are definitely withholding a good chunk of the "methods" part of their research. It is absolutely impossible to duplicate that work without the code.


        • by Vornzog ( 409419 ) on Wednesday April 21, 2010 @09:28PM (#31934916)

          I work for a government lab that produces DNA sequences. We are obligated to release our data into a public database as soon as it has been verified for any samples that come from the US, and we release most of our foreign data, too, unless the other country involved gets pissy.

          Nothing good comes of that speed. We get crackpots thinking they've made major discoveries (not one real one yet), we get scooped for major papers (think Science), sometimes by our own collaborators using only our data and none of theirs, and we generally spend a lot of time, effort and *more money* on media spin control. There is such a thing as releasing the raw data too fast.

          We get a *ton* of FoI requests, too - people think we are withholding the good data, or being stubborn by not providing them composite statistics in exactly the format they want to see. The truth is, up until I got involved, the data management technology was so far behind the current bog-standard capabilities of the rest of the world, we couldn't actually answer the questions that were being asked, barring Herculean effort.

          Don't get me wrong, I think we *should* be releasing all of this data - delayed by just a bit. That way the people who generate it would have a better shot to get recognition/credit for their work, the crackpots would have less ammo for their rants, the press would be more likely to get the facts right the first time, and the scientific integrity of the whole process would be upheld, as everyone would get the raw data to review. It'd probably save a ton of money.

          The "reward" for doing publicly funded research is that you keep getting funded.

          Collecting good data is hard work, and the payoff is big publications, which you need if you want to continue getting funded. Once you've got that big publication in your pocket, though, you'd better be coughing up that data set. Otherwise, everything you say is suspect. Kudos to the UK for getting this half-way right, but they'd better set some reasonable constraints on the timing of these required data releases, or face any number of frivolous lawsuits from conspiracy theorists and 'data analysis specialists' who don't want to do any of the hard work themselves...

          I don't care one whit what you think you're entitled to: if you're taking my money, you work for me.

          I don't care if you are a ditch digger or a particle physicist. Doing all the hard work and getting none of the credit sucks regardless of what we are discussing or who is paying the bills. So put up or shut up. Would you be willing to do all of the grunt work in your job, but take none of the recognition? Most people wouldn't - those are the kinds of jobs that make people go 'Postal'. If you aren't doing it (and even if you are), do you really expect anyone else to?

        • If a lab has been spending my tax money for 10 years, I want my employees to give me my data right Goddamn now. ... if you're taking my money, you work for me.

          Just stop and think for a second about exactly what it is that us scientists are being paid to do. We are NOT being paid to collect data; we are being paid to figure out how the world works and how to apply that knowledge for the betterment of mankind. The data is a means to that end.

          Now, do you REALLY want us to spend a serious fraction of our time and money preparing and making available the raw data in a form which will probably be useless to you instead of analysing and coming up with results w

      • by the gnat ( 153162 ) on Wednesday April 21, 2010 @07:17PM (#31933852)

        my experience of this situation comes from protein crystallography and deposition of the hard won data there

        Ah, a fellow crystallographer. Welcome, brother!

        I was about to post a similar comment. However, I only agree with you up to a point. Once you publish a paper reporting the structure, all of the raw data should be made publicly available (including diffraction images - although deposition of those isn't quite feasible yet). I would apply the same standard to any other field: you shouldn't publish until you are comfortable releasing the underlying data. I don't care if you're still working on some super-secret follow-up paper, as far as I'm concerned your publication is useless if I can't go to the PDB and download the coordinates. And if you're using public resources to solve your structure (like NIH funding, or one of the DOE's synchrotrons), your results are public property.

        There was once intense resistance to even mandating coordinate deposition (long before I got started in the field), which just sounds insane now. Some of the people doing the most complaining were in fact some of the best funded. A decade later, the field went through the same bullshit whining with regard to reflection data. Now most journals require both coordinates and reflections, and not only has the field not suffered in the slightest, many more studies are now possible and the majority of structures can be solved without experimental phasing. If we'd left things the way the naysayers wanted it, every group attempting to study, say, ribosome structure would have to either plead with more senior groups for coordinates in order to solve their structures (and, almost certainly, further bloat the author lists and potentially cede some control over their project - which, I imagine, would have suited the senior faculty just fine), or waste half a decade making heavy metal derivatives. It is difficult to convey to non-crystallographers how huge a waste of time and money - most of it coming from tax dollars - this scenario would be.

        Now, where it gets messy is situations where you have to release data ASAP, instead of waiting until publication. American structural genomics groups do this (it may be a requirement of the NIH), but PDB deposition is more of an endpoint in itself for them, and no one is going to bother trying to scoop them on most of those proteins. Genomics centers also do this. A grad school classmate of mine worked on a sequencing project where much of the gruntwork was performed by the DOE, and they had extremely strict release rules. She complained that other groups (of bioinformaticists) could start analyzing the data before she'd had a chance to complete her own studies, because the outsiders didn't have to spend a lot of time thoroughly annotating the genome before publishing. (I don't think it held her back in the end - she graduated with several papers in Science.) In many situations like this, to obtain the data you need to agree to an embargo on publications, to prevent that sort of underhanded behavior. I saw an article retraction recently where the scientific content was undisputed, but the investigators had (unintentionally, it appeared) broken an embargo by submitting the paper when they did.

        In general, I think the scientific community - especially the part funded by the public - should err on the side of maximum disclosure of data, and I don't have much sympathy for the researchers in this story (and I'm not particularly sympathetic towards "climate skeptics" either). I do worry that rules will be used to harass researchers in supposedly controversial fields (Richard Lenski's adventures with Conservapedia are a particularly nauseating example), but as a scientist, I also think the benefits of making massive amounts of data available to anyone are far too important to let these risks bother us, and the drawbacks of keeping such data private are much worse than having to fight off the occasional knuckle-dragging lunatic.

    • The public pays for gathering the data, the public should have access to that data. Kinda hard to find fault with that.

      No, it isn't. The fault is that the data may contain sensitive information. The Army collects data about enemies; should that be freely accessible to the public? Nope. (I'm not arguing against making university data public, but your logic is flawed.)

    • Do you think grad students were collecting data in the field on iPads in the 1980s?

      Most of the data is probably in the form of moldy old penciled notebooks, core samples, B&W photo negatives and microscope slides. I hate to break it to you, but except maybe in physics or electrical engineering, experimental data wasn't systematically recorded digitally until 15-20 years ago.

      They collated, analyzed their data at the time, published their results in peer reviewed journals, and that was

      • Re: (Score:3, Insightful)

        by Ctrl-Alt-Del ( 64339 )

        Unfortunately, Climategate proved that, at least in the field of climate research, "peer review" is worthless; Mann et al. were actively conspiring to ensure that only "friendly" eyes carried out the reviews; anyone thought to be showing signs of scepticism was blacklisted, whether individuals or publications.

        To add to that, Glaciergate proved that much of what was claimed to be peer-reviewed was actually just regurgitated propaganda, often based on anecdotal evidence (reminisces of mountaineers published i

    • Except the public who paid for the data isn't the same as the public who are paying the researchers.

      Large amounts of the data under discussion are from _foreign_ governments. Additionally, researchers frequently have to sign confidentiality agreements in order to gain access to health records and other data. If that needs to then be public, they won't have access to it.

      Stupid judge, stupid finding.

      • by martin-boundary ( 547041 ) on Wednesday April 21, 2010 @07:16PM (#31933842)
        Who cares? Are you arguing for science, or for little confidentiality fiefdoms?

        There is literally no point in doing Science (with a capital S) if the data isn't available for scrutiny by everyone. Without scrutiny, it's all he said/she said, rumours and bullshit.

        As to signing confidentiality agreements etc, there comes a time when a researcher has to decide: does he want to contribute to human knowledge (=> don't sign) or does he just want to wank around with secret data (=> sign it)?

        It sucks to be unable to use purportedly available data, just because it can't be divulged, but it's better that way in the long run.

        Unsupported data is worse than useless, it's a cancer that grows every time someone else quotes the unsupported result, until it gets to the level of unchallenged folk wisdom within the community.

    • Re: (Score:3, Insightful)

      by SETIGuy ( 33768 )
      Sure, I'll give you the data. But I wasn't funded to put the data in a format that's easy to understand. I've also got a job, and I don't get paid to support a competitor's data analysis attempts. Good luck.
      • Sure, I'll give you the data. But I wasn't funded to put the data in a format that's easy to understand. I've also got a job, and I don't get paid to support a competitor's data analysis attempts. Good luck.

        Your so-called competitors will be sure to mention your viewpoints when your funding runs out and you apply for more. Not only is your research not easy to understand and you don't let others analyze the data to attempt to reproduce your conclusions, but you think that other members in the scientific community are competitors and you feel a need to sabotage their efforts by making it difficult for them to use taxpayer-funded data to advance science. If science is such a business to you, then how about you fund it all yourself from the profits you make?

    • by michaelwv ( 1371157 ) on Wednesday April 21, 2010 @07:01PM (#31933662)
      Absolutely. The public should have access to the data. Public grants then also need to pay for curating the data. Libraries aren't free, archives aren't free, and packaging data in an actually useful form takes time, which is a scientist's most precious resource. Having data in a form that is useful to the 25 people in your research group is very different from providing data that can be used by thousands of people. It's analogous to the difference between the quick bash script that backs up your movies to your external hard drive and something you're willing to distribute to 1000 people and support.
  • Publicly funded (Score:2, Offtopic)

    by sconeu ( 64226 )

    Why doesn't this apply to the BBC?

    making publicly funded non-military research available has nothing to do with privacy. Public money is spent for the public good, and there is no justifiable reason to keep the results hidden from the public... especially if they're meant for the betterment of society.

    if you want your data to be private, get your own private funding
    • Re:yro my ass (Score:5, Insightful)

      by geekoid ( 135745 ) <dadinportland@y[ ]o.com ['aho' in gap]> on Wednesday April 21, 2010 @06:21PM (#31933206) Homepage Journal

      errr... not always.

      Putting data into the hands of people who aren't experts often leads to bad things. See every non-expert who believed the Wakefield study because they didn't understand how to interpret the data. In that case kids died, and kids are still dying.

      In principle I agree with you, but we live in an era where everyone thinks they are a qualified expert in anything. That simply isn't true, and no good will come out of this.

      The data won't show a flaw in the study, because it wasn't used in the study, but he will inevitably cherry-pick data to 'prove' the study is wrong. And people like Hannah Devlin are always happy to publish claims without proper study. So no good can come from this, and people need to understand that.

      It's a hard problem to solve.

  • by Improv ( 2467 ) <pgunn01@gmail.com> on Wednesday April 21, 2010 @05:23PM (#31932482) Homepage Journal

    Science journals have long fought this, because their profit model is strongest when they own copyright and are the exclusive publishers of a paper. Peer review and scientific principles don't mesh well with peer review though, and many academes have either "published" their papers on their own websites or found other ways to try to work around the journals.

    Ridding peer review and science of copyright would be a great improvement.

    • no, peer review is good. It helps to point out mistakes or inconsistencies. Getting rid of scientific journals is quasi-good (less profit motive in science, but also less chance to get work out there).

    • This has nothing to do with journals. The data was not available anywhere - not in a for-pay journal, not on a website, not on request. It was the researchers who refused to release the raw data - the publishers have no motivation to suppress its release, because it is the published paper that earns them money, not the raw data.

    • Peer review and scientific principles don't mesh well with peer review though,

      Peer review doesn't mesh well with peer review?? What?

    • by geekoid ( 135745 )

      No they haven't. They make money from published papers and reputation. Not raw data.

  • Awful summary (Score:5, Insightful)

    by Protoslo ( 752870 ) on Wednesday April 21, 2010 @05:34PM (#31932630)
    It turns out that "the data" are measurements of petrified tree rings, which were collected in the course of (presumably) a government grant-funded study. Now Queen's University researchers must compile the data for release because of the (UK) Freedom of Information Act. The scientists quoted in TFA apparently did not use the ring data for anything relating to climate studies, but Keenan has that purpose in mind.

    Phil Willis, a Liberal Democrat MP and chairman of the Science and Technology Select Committee, said that scientists now needed to work on the presumption that if research is publicly funded, the data ought to be made publicly available.

    That doesn't seem unreasonable to me. Appendices with raw data are often included already in the online editions of journals. Of course, if the ruling applies to all data generated in the course of a study, whether it is used in publications or not, it could be onerous indeed.

    • Re: (Score:3, Interesting)

      by blair1q ( 305137 )

      Now Queen's University researchers must compile the data for release because of the (UK) Freedom of Information Act.

      Seems unreasonable. They should charge the requester for any effort needed to "compile" or transmit the data. No reason the public should foot the bill for any particular formatting or delivery.

    • Re:Awful summary (Score:4, Informative)

      by pkphilip ( 6861 ) on Wednesday April 21, 2010 @08:40PM (#31934612)

      Michael Mann used the same tree ring data as temperature proxies for his studies and has published papers on this. But now the very same scientists who collected the tree ring data claim that the data cannot be used as a temperature proxy - even though they haven't said a word about how this would invalidate Michael Mann's work.

      http://climateaudit.org/2010/04/21/mann-of-oak/#more-10811 [climateaudit.org]

  • NSF (Score:3, Interesting)

    by martas ( 1439879 ) on Wednesday April 21, 2010 @05:38PM (#31932676)
    does anyone know if the NSF has similar requirements?
    • Re:NSF (Score:4, Interesting)

      by imidan ( 559239 ) on Wednesday April 21, 2010 @05:51PM (#31932836)

      The NSF has recently taken more of an interest in research data management. They're definitely starting to make it a requirement of grant funding that the research data be digitally stored, backed up, and, after a cooling-off period to allow the principal researchers to publish, made available to the public. I'm working on a research data management group at my university, and the researchers generally seem open to the idea, though they're loath to put in any extra effort to make it work.

    • Re: (Score:3, Informative)

      by guruevi ( 827432 )

      Yes it does, kinda. Thanks to our publishing overlords, however, these 'making available' requirements are more difficult to satisfy than just publishing the data online. The data cannot be made available as long as a publishing house has copyright on it, and the publishing house usually takes copyright for all work for years, including data that is not directly published by them, especially when the work is or becomes popular. However NSF/NIH grants usually have the requirement to release all data to the public a couple of yea

  • by Anonymous Coward

    We can agree that the whole scientific process does not make much sense if we have to believe in the interpretations without seeing the actual data. From this perspective it is crucial for all scientific data to be open.

    The other perspective comes from the individual scientist. It might take years to put together a complete data set of a particular phenomenon via experiments, literature review, digging in the ground or looking at the stars. So after looking for something special you finally discover somethi

  • Teaching? (Score:2, Interesting)

    by jfw ( 2291 )
    OK, why does this argument not also apply to teaching? I am paid to teach and do research from the public purse. My teaching is available to any one who meets certain standards and pays a user fee. Access to data should be the same.
  • So a non expert (Score:3, Interesting)

    by geekoid ( 135745 ) <dadinportland@y[ ]o.com ['aho' in gap]> on Wednesday April 21, 2010 @06:15PM (#31933136) Homepage Journal

    wants to use data that wasn't used for the climate change studies and models in order to prove that the studies that didn't use it are flawed.

    Add to that a reporter who continually overstates anything the climate change denialists say, and I'm sure it will confuse even more people.
    This should be fun.

  • by OrwellianLurker ( 1739950 ) on Wednesday April 21, 2010 @07:38PM (#31934074)
    I am a pretty big cynic, and I remain unconvinced that AGW is a significant problem. It doesn't help that the raw data isn't disclosed. I wish scientists would go back to doing science and quit trying to be policy makers.
  • More details (Score:3, Informative)

    by Sara Chan ( 138144 ) on Thursday April 22, 2010 @02:57AM (#31936424)
    I am the story's submitter. My original submission included a link to the mathematician's web page about this [informath.org]; the page has many more details. There have also been other news stories, e.g. at the BBC [bbc.co.uk].

    The UK Freedom of Information Act [ico.gov.uk] has exemptions for data that has not yet been used in publications, vexatious requests, etc.
