
How To Fix HealthCare.gov? Go Open-Source!

McGruber writes "Over at Bloomberg Businessweek, Paul Ford explains that the debacle known as HealthCare.gov makes clear that it is time for the government to change the way it ships code: namely, by embracing the open-source approach to software development that has revolutionized the technology industry." That seems like the only way to return maximum value to the taxpayers, too.
This discussion has been archived. No new comments can be posted.

  • by HornWumpus ( 783565 ) on Sunday October 20, 2013 @05:06PM (#45183013)

    When your system doesn't work and you are way behind schedule.

    Make a radical change and fire someone. That's sure to fix things.

    • by cold fjord ( 826450 ) on Sunday October 20, 2013 @05:13PM (#45183051)

      And add staff, lots more staff. That will make it better, and get the job done faster. (Wasn't that one of the conclusions of The Mythical Man-Month []?)

    • by Charliemopps ( 1157495 ) on Sunday October 20, 2013 @06:13PM (#45183489)

      Actually, I've been involved in projects where exactly that was required. That site isn't that complicated and there's nothing new and innovative on it. If they brought in the right people and busted ass for a few weeks they could have an open source alternative built and tested. The problem here is it's government and there's no way to just make the kind of executive decisions that would be required to pull it off.

      • by Bite The Pillow ( 3087109 ) on Sunday October 20, 2013 @08:52PM (#45184583)

        Which site isn't that complicated? The one that confirms eligibility based on data from multiple agencies which probably were not built to work together? The one that was supposed to magically work day 1 to support the kind of load that every other website grew to support organically?

        There are too many things going on here to say it isn't that complicated. Nothing innovative, that's true, but the work to integrate a goodly number of systems still needs to be done, and it isn't that simple.

        I have done post-merger integrations in the Fortune 100 space, and existing systems that only need to be tied together can be a convoluted mess. This is orders of magnitude more complicated. It isn't going to be a simple web service or database call to get some info, call another one, and display something on the page. It should be, but there's no way it "just works" like that.

        • The one that was supposed to magically work day 1 to support the kind of load that every other website grew to support organically?

          Every other site that grew organically didn't have the backing of a few hundred million dollars behind it. This is a federally mandated and funded thing; I don't think they have the same concerns Facebook or Twitter had during their early days.

      • by rtb61 ( 674572 ) on Sunday October 20, 2013 @10:07PM (#45184971) Homepage

        The problem here is that it is corporate, and the lobbyists have worked for decades to purposefully create a system that the government is forced to work within, so that various major corporations can regularly rip off the treasury. The US government is trapped in a cycle of lobbyist corruption that purposefully runs down government services in order to privatise work via contracts that deliver virtually nothing but cost a fortune, and that purposefully makes the government look bad so they can privatise even more, etc.

      • by raftpeople ( 844215 ) on Sunday October 20, 2013 @11:10PM (#45185265)

        That site isn't that complicated and there's nothing new and innovative on it. If they brought in the right people and busted ass for a few weeks they could have an open source alternative built and tested.

        Oh, OK, a few weeks? My largest project was orders of magnitude smaller than this one, and you couldn't even complete testing in a "few weeks". I don't think you have any clue about the complexity of this project or the time required for large, complex projects.

      • by zippthorne ( 748122 ) on Sunday October 20, 2013 @11:12PM (#45185269) Journal

        The real question is why it isn't open source right now. As a taxpayer, I paid for (a part of) this thing. I want to see the source code for my health care exchange software.

        Maybe we can FOIA it?

    • If they sold it on Silk Road and accepted BTC as payment, then it would have a chance.
  • by Anonymous Coward on Sunday October 20, 2013 @05:07PM (#45183023)

    Prototype and test. Go open source if you want to do so, but it isn't a silver bullet. If you don't test your software under simulated load conditions, you won't know if it will work. And for a work in progress, open source may have a delayed benefit time of several months before you get the feedback you need. People scratch their own itches in open source. They don't necessarily look at the entire system integrity. Only testing will do that.

    • by Sir_Sri ( 199544 ) on Sunday October 20, 2013 @05:58PM (#45183393)

      And there's no data like real world data for load. At some point you sit down in a room and guess how people are going to use your software. You put it out there, and find out you were wrong.

      That is, after all, why they did this with a couple of months to spare.

      Are there going to be 10 million people over Christmas all trying to buy health insurance? Probably, and that's going to cause no end of grief, but there isn't some mystical open-source fairy that can tell you how to correctly predict load for a system like this and make all the infrastructure work the way you want it to. Particularly with health care and open source, you'd have to deal with thousands of tea party programmers trying to fuck it up, too.

    • "simulated load conditions"

      From what I have learned about the software (from /.), this pig wasn't going to work under any load.
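The grandparent's point about simulated load conditions can be demonstrated in a few lines. This is a toy sketch, not any real system: a stub "server" with a fixed concurrency limit looks fine at its rated load and falls over past it. All capacities and timings here are invented for illustration.

```python
# Toy load test (all numbers invented for illustration): a stub "server" that
# can only work on CAPACITY requests at once, hit by more clients than it can
# hold. The timeouts only appear under load -- which is exactly what untested
# systems don't show you before launch day.
import asyncio

CAPACITY = 50        # concurrent requests the stub server can process
SERVICE_TIME = 0.01  # seconds of simulated "work" per request
TIMEOUT = 0.05       # client gives up after this long

async def run(n_users: int) -> dict:
    sem = asyncio.Semaphore(CAPACITY)

    async def handle_request() -> str:
        async with sem:                  # wait for a free server slot
            await asyncio.sleep(SERVICE_TIME)
            return "ok"

    async def client() -> str:
        try:
            return await asyncio.wait_for(handle_request(), TIMEOUT)
        except asyncio.TimeoutError:
            return "timeout"

    results = await asyncio.gather(*(client() for _ in range(n_users)))
    return {"ok": results.count("ok"), "timeout": results.count("timeout")}

light = asyncio.run(run(50))    # at rated capacity: everyone gets served
heavy = asyncio.run(run(2000))  # 40x capacity: most clients time out
print(light)
print(heavy)
```

At 50 users everything succeeds; at 2000 the same code sheds most of its traffic as timeouts, which is the failure mode you only discover by testing above the rated load.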

  • I'm all for it (Score:2, Informative)

    by cold fjord ( 826450 )

    I'm all for it just as long as the mandates are delayed until the infrastructure is really done this time.* When does the RFP go out?

    * And maybe a "few" other kinks [] in the law ironed out.

    • Re: (Score:2, Insightful)

      by Sarten-X ( 1102295 )

      Of course, let's do nothing unless it's perfect. If a few million people suffer or die in the meantime, at least we can look on and say we weren't flawed.

      • Re: (Score:2, Insightful)

        by sumdumass ( 711423 )

        I would rather do nothing while people suffer and die than do something that causes people to suffer and die. You see, the difference is that in one, the results are the actions of fate and flaws; in the other, the results are the actions of me or you or whoever did whatever.

        But more specifically, in this case anyways, if nothing is done, all that will happen is the status quo remains with the added bonus of a tax penalty digging into the pockets of Americans. If the penalty is removed, the same is true wit

      • Re:I'm all for it (Score:5, Insightful)

        by Anonymous Coward on Sunday October 20, 2013 @06:36PM (#45183651)

        That is a load of bull. There is a difference between insurance and healthcare. The main problem the ACA was supposedly trying to solve was a 15% uninsured rate. There were many reasons why that 15% didn't have coverage, including no small part of it that could afford insurance, but didn't want to pay for it. So the Democratic party grabbed hold of 100% of the market, not the 15% that was the problem, and started rearranging things with a hasty, thrown together plan that was scraped up from whatever they thought they could pass in short order with very unusual parliamentary maneuvering (remember "deemed to pass"?) and written in part by a "progressive" think tank [].

        The result still won't cover 100%, not even close, has raised rates for many people, has ended up costing many people both their insurance and income since their work hours were cut back, caused economic contraction due to businesses pulling headcounts under the limits, and plenty of other problems. And the best part is, they expect it to ultimately fail so that they can force single payer on everyone!

        How does that figure into your BS comment "let's do nothing unless it's perfect. If a few million people suffer or die in the meantime"? How about this - why don't we do something deliberately, in a planned fashion, that has wide support in society? How about we just don't throw crap to say we did something? Are you planning to take responsibility for the people that die without insurance now that this bad piece of law, this planned failure has passed? One of the principal precepts of medical ethics is, "first, do no harm". The idea should be to do something that is both useful and productive, not just "something" that is already expected to be destructive and fail. The cure is going to be worse than the disease in this case. But at least it will be hideously expensive.

        For some reason I doubt you read much in the way of criticism of the law, but your conscience will be clear because "something" was done.

        • Re:I'm all for it (Score:4, Insightful)

          by Bananenrepublik ( 49759 ) on Monday October 21, 2013 @01:51AM (#45185825)

          There were many reasons why that 15% didn't have coverage, including no small part of it that could afford insurance, but didn't want to pay for it. So the Democratic party grabbed hold of 100% of the market, not the 15% that was the problem

          If you don't see the connection between the two, then you have spent no time actually thinking about the law. People who could buy insurance but don't are usually healthy. Take them out of the risk pool, and insurance becomes more expensive for everyone, increasing the incentive not to get insurance for everyone who can do without. But even though they are healthy and don't want insurance, you know that at some point, maybe 10 years down the line, maybe 20, they will also need health care.

  • How interesting. (Score:5, Informative)

    by Anonymous Coward on Sunday October 20, 2013 @05:13PM (#45183049)

    "As it turns out, the Government website uses the open-source software DataTables, which is a plug-in for the jQuery JavaScript library.

    While using open-source software is fine, the makers of HealthCare.gov decided to blatantly remove all references to its owners or the original copyright license."
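Stripped attribution of the kind reported here is the sort of thing a trivial audit script can catch. Below is a toy sketch (the filenames and header text are invented, and this is not how any actual audit was done) that flags source files lacking a recognizable license notice of the kind MIT-style licenses, like DataTables', require you to preserve:

```python
# Toy attribution check (illustrative only): scan source file contents for a
# copyright/license header. Filenames and contents below are made up.
import re

LICENSE_PATTERN = re.compile(r"copyright|@license|licensed under", re.IGNORECASE)

def missing_attribution(sources: dict) -> list:
    """Return names of source files with no recognizable license header.
    `sources` maps filename -> file contents (read however you like)."""
    return [name for name, text in sources.items()
            if not LICENSE_PATTERN.search(text)]

files = {
    "jquery.dataTables.js": "/*! Copyright 2008-2013 SpryMedia Ltd - datatables.net/license */",
    "stripped.dataTables.js": "/* minified */ var DataTable=function(){};",
}
print(missing_attribution(files))  # ['stripped.dataTables.js']
```

A real check would also compare headers against the upstream releases, since minifiers can legitimately mangle comments unless told to keep `/*! ... */` banners.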

  • by Pinhedd ( 1661735 ) on Sunday October 20, 2013 @05:17PM (#45183075)

    HealthCare.gov didn't fail because the designers didn't use open-source software at every point in the chain - if the rumors are to be believed, an audit found open-source code in there that had simply had its license removed - it failed because it was designed by the lowest bidder and was not subject to the rigorous testing regime demanded by a national service.

    FOSS is great for reining in costs, but it is not a patch for unskilled developers or a crutch for incompetent project managers who are unable to keep the project on track and within scope.

    • by cirby ( 2599 )

      Yes, it was the lowest, but there apparently weren't any other bidders, or at least none that anyone can find or name.

      You see, they didn't actually put it out to open bidding, and instead awarded the contract to someone with political connections.

      They used something called "task orders," which allows bureaucrats to completely bypass open bids. Basically, if you win one government contract somewhere along the way (even for a completely different project), it's possible for the government to award you future

    • by moschner ( 3003611 ) on Sunday October 20, 2013 @06:08PM (#45183455)

      Management is the reason why HealthCare.gov has been such a disaster. Open source or not, it wouldn't have mattered. They didn't even get to start coding until this spring, because the government was so slow in issuing specifications for the site. Then, as if the tight deadlines were not enough, administrators kept issuing changes to the site up until the last few weeks of September (despite an October 1st launch date). It wasn't a little change here or there, either.

      One of the last big overhauls was making it so people had to register before they could browse the plans. This was apparently because they wanted people to see what the price would be with the subsidy. The idea was that for many people, the price before the credit would scare them away from buying in.

      There is also more info on this at The New York Times []

    • Re: (Score:3, Interesting)

      by Anonymous Coward

      it failed because it was designed by the lowest bidder


      It failed because it was a crony capitalism project, gifted by the Obama administration to a former campaign worker. ... Teal Media was selected as the lead visual design team on the redesign of HealthCare.gov. Check out The Atlantic article about the redesign. ... Jessica Teal - Principal ... Jessica founded Teal Media following her successful stint as Design Manager for the 2008 Obama presidential campaign.

    • by cold fjord ( 826450 ) on Sunday October 20, 2013 @06:44PM (#45183699)

      It failed because they went with the lowest bidder

      It didn't fail because they went with the lowest bidder. This was apparently a "sole source" contract. They just added another task onto an existing contract.

      Meet CGI Federal, the company behind the botched launch of HealthCare.gov []

      CGI's business model depends on embedding itself deeply within an institution.

      "The ultimate aim is to establish relations so intimate with the client that decoupling becomes almost impossible," read one profile of the company. ...

      CGI Federal's winning bid stretches back to 2007, when it was one of 16 companies to get certified on a $4 billion "indefinite delivery, indefinite quantity" contract for upgrading Medicare and Medicaid's systems. Government-Wide Acquisition Contracts — GWACs, as they're affectionately known — allow agencies to issue task orders to pre-vetted companies without going through the full procurement process, but also tend to lock out companies that didn't get on the bandwagon originally. According to USASpending.gov, CGI Federal got a total of $678 million for various services under the contract — including the $93.7 million HealthCare.gov job, which CGI Federal won over three other companies in late 2011.

      It's also true that CGI Federal began lobbying as it started winning government work; it has spent $800,000 since 2006 lobbying on several different tax and appropriations bills.

      Feds reviewed only one bid for Obamacare website design []

      Rather than open the contracting process to a competitive public solicitation with multiple bidders, officials in the Department of Health and Human Services' Centers for Medicare and Medicaid accepted a sole bidder, CGI Federal, the U.S. subsidiary of a Canadian company with an uneven record of IT pricing and contract performance.

      CMS officials are tight-lipped about why CGI was chosen or how it happened. They also refuse to say if other firms competed with CGI, or if there was ever a public solicitation for building HealthCare.gov, the backbone of Obamacare's problem-plagued web portal....

      There is no evidence CMS issued any public solicitation for the Obamacare website contract. The Examiner asked both CMS and CGI for copies of any public solicitation notice for the task orders. Neither CMS nor CGI furnished any such public notice.....

      The ID/IQ system provides a fast-track contract approval process, but it is much less likely than competitive bidding to secure high quality at a reasonable cost.

      “Whenever you have limited competition, but certainly with a sole source or a one-bid offer, the government has to question whether it is going to get the best product at the best price,” said Scott Amey, general counsel for the Project on Government Oversight, a nonprofit watchdog organization that monitors federal contracting.

      Both USASpending.gov, which tracks federal spending, and the FFATA Subaward Reporting System, which specifically tracks contracts, refer to CGI as the lone bidder for the Obamacare website design award.

      Each site describes the CGI contract award as the product of “full and open competition,” but CGI is the only bidder listed.

      I can't find the link at the moment, but apparently this company is "favored" within the administration.... for some reason.

    • Was it truly lowest bidder? Anyone really check the procurement details? Or was the contract really let on the golf course? As a survivor of federal IT, I can speak to the wonders of golf course procurement by vendor-friendly management and what a mess it inevitably leads to for the poor schmos in IT who have to use the product and make it work.
      • supposedly it was a golf course contract of sorts. I don't know all the details, but I do know that it was not handled in a professional manner.

    • It failed (like many government programs) because nobody really cares or even has to care. Sure, the government employees want to do a good job, but their livelihood doesn't depend on it. The government contractors themselves don't care either, because they'll still keep getting new contracts that they can screw up the same way as the old ones. And even if they were prevented from bidding again, they'd just change their name and go again.

    • by tyrione ( 134248 )
      Most people have no background in engineering economics, which has long shown that the best-of-breed solution has the highest up-front cost but the lowest long-term costs, due to having the lowest MAINTENANCE costs. Roadways are a great example of how low-bid contracting doesn't improve the infrastructure. Best of breed is the only solution.
  • routine IT work (Score:5, Insightful)

    by globaljustin ( 574257 ) on Sunday October 20, 2013 @05:18PM (#45183079) Journal

    TFA, and virtually everything I've seen on 'Obamacare', are not helpful.

    First, I've yet to see *one* legitimate (or even fringe) news organization film themselves, without editing, sitting down at a computer and **attempting to enroll in the ACA**... if this exists, please post a link

    2nd, a major problem of this article (and again virtually every news or analysis on ACA I've seen) is the lack of necessary information distinguishing **STATE EXCHANGES** and the **NATIONAL EXCHANGE**

    I hear it mentioned that the two exist, and are different, and I see a map that shows which states have their own ACA program and which do not, therefore defaulting to the Federal system....however I absolutely have not heard any distinction made when any blog/news report/etc mentions 'Obamacare's failures'

    3rd, the problems of "Obamacare" are myriad, to be sure, but in the coverage of the "rollout of the website", no IT worker or anyone with any expertise actually explains what the problems are...

    We can easily understand (if you visit the website) & read a few news reports that the website's "failure" is a timeout when people try to sign up. Again, we don't know if this is the *state* or *federal* exchange, but the point is that the website breaks b/c of too many hits.

    Server over capacity.

    A few news articles explain this much, but no more.

    what does /. call 'server over capacity' type problems???


    I used to have my CCNA, it has lapsed. I'm not pretending rolling out a functional site like ACA is easy, but it's **well known** how to make a system like that, from a web coding perspective, work.

    It's routine IT work done daily all over the world.

    So, the real analysis is that the ACA needs more servers.

    It's that simple....note 'simple' does not in any way mean "easy"....but the concept is well understood by many IT engineers.

    "an open-source approach" is usually helpful in any system experiencing major problems...but this is routine IT work....not in any way a massive failure

    if you want to assign blame: blame the contractor that got the $1 billion to develop the ACA site

    • Re:routine IT work (Score:5, Informative)

      by complete loony ( 663508 ) <> on Sunday October 20, 2013 @05:44PM (#45183261)

      Building a web site for a hobby is very different from the approaches you need for massive scale. If you look at how the largest web sites scale, they all do things slightly differently, and they all had to fix their scalability problems gradually as their popularity increased. Throwing more CPU and RAM at the problem may not fix anything if the current software design doesn't allow for it.

      • by Sir_Sri ( 199544 )

        While that's certainly true, designing a system for 5 million users versus 10 million users is not radically different - but if 10 million users all try and hit your server designed for 5 million you're going to have problems.

        There's no way HealthCare.gov was done as a 'hobby' type site - they guessed wrong on the load they were going to have, and there seem to be some issues with how the insurance companies hook up to the system - neither of which is encouraging, but it's not like they intended 100 users and

        • But even when designing for 5 million users, it's very difficult to test for that load without having 5 million actual users. There will inevitably be a bottleneck you didn't think of that fails under real loads, that you didn't or couldn't test for. A bottleneck that can't simply be fixed with more servers.
          • Re:routine IT work (Score:5, Insightful)

            by adri ( 173121 ) on Sunday October 20, 2013 @06:53PM (#45183743) Homepage Journal

            Why do people keep saying that over and over again?

            It's easy. You write a test suite that pretends to be a real user. You script it so there's some actions that aren't just "do A do B do C." You make them make errors. You have them put in garbage details. You have them fill out the forms incorrectly or incompletely. You have them skip pages or press "back".

            Then you add a "pretend I'm the internet!" layer in between that simulates latency, so you make sure that your servers can handle the number of concurrent requests going on. A lot of not-so-seasoned web developers still fall for the "it worked on the LAN to 100,000 users, why not on the internet?" latency fallacy. Increased latency (due to RTT, packet drops, TCP retransmits, etc) leads to having more and more sessions going concurrently. That ties up resources at the server end.

            Then you add a "pretend shit breaks!" layer. I.e., the user's internet connection breaks. They forget and come back after a while, and hit the restart page. The connection dies halfway through the transaction.

            Then, once you've written that, you create 5 million instances of that. 100,000 per box sounds about right.

            This isn't 1995. Computers are really god damned fast.
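The recipe above (random actions, garbage input, back buttons, dropped connections) is straightforward to sketch. This is a toy in-process version with invented probabilities and page counts; a real harness would drive HTTP against a staging system rather than a stub:

```python
# Toy version of the synthetic user described above (probabilities and page
# counts are invented; a real harness would drive HTTP against a staging site).
import random

ACTIONS = ["fill_ok", "fill_garbage", "leave_blank", "go_back", "abandon"]
WEIGHTS = [0.55, 0.15, 0.15, 0.10, 0.05]

def simulate_user(pages: int, rng: random.Random) -> dict:
    """Walk one fake user through a multi-page signup form."""
    stats = {"requests": 0, "errors": 0, "completed": False}
    page = 0
    while page < pages:
        stats["requests"] += 1
        action = rng.choices(ACTIONS, weights=WEIGHTS)[0]
        if action == "fill_ok":
            page += 1                 # valid input: advance to the next page
        elif action in ("fill_garbage", "leave_blank"):
            stats["errors"] += 1      # server has to re-render with an error
        elif action == "go_back":
            page = max(0, page - 1)   # back button: the page gets re-fetched
        else:                         # abandon: the connection simply dies
            return stats
    stats["completed"] = True
    return stats

rng = random.Random(42)  # seeded so a test run is reproducible
runs = [simulate_user(pages=6, rng=rng) for _ in range(10_000)]
completed = sum(r["completed"] for r in runs)
total_requests = sum(r["requests"] for r in runs)
print(f"{completed}/10000 completed; {total_requests} total requests")
```

Multiplying instances of this across many worker boxes, plus a latency-injection layer in front, is the shape of the test suite being described.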


            • Yup. Do it all the time in the lab. No-brainer. Which lab, I'm not sayin'.
            • Re:routine IT work (Score:4, Interesting)

              by complete loony ( 663508 ) <> on Sunday October 20, 2013 @07:20PM (#45183939)

              And then you add integration with all of the 3rd-party legacy systems on the back end... That level of scalability testing is possible, but unlikely. It demands a test system that perfectly mirrors the production system. And I mean perfectly, down to the wiring, switches and routers. Then you have to model your user behaviour accurately, which is difficult for a system that doesn't have any real users yet.

              If you can't build such a test system, or your tests don't reflect the typical actions of your users, something could slip through the cracks. If your tests pass with 100,000 users on one server, that doesn't mean that production will work with 5 million users on 50 servers.

            • Re:routine IT work (Score:4, Insightful)

              by Sir_Sri ( 199544 ) on Sunday October 20, 2013 @10:47PM (#45185153)

              We keep saying it over and over because:

              It's easy.

              Isn't true, at all. Watch 1000 volunteers all try to test your system, then try to simulate that behaviour to model however many users you think you'll have - and you still get surprised.

              Yes, definitely you should have testing for all the cases of what a user can do. But you don't know how people are really going to use a system until they're using it for real. As it turns out, real use is different from testing, and an early tester sample is not really a good sample.

              Let me give you an example of how synthetic tests will go badly. You get some fake numbers from your partners, so they all have either completely random prices or exactly the same price. So when your testing users click around, no problem, right? They can select the one they want, etc.

              Then you get real data, in the real world - and one company posts a price 3% lower than the other guy's, but it's for a slightly different product. So where users previously clicked the same thing once, now a bunch of them are clicking back and forth, loading the page multiple times as they do so, and sending links to their friends to compare, etc.

              It then takes your assumption of 100,000 users per box down to maybe 70, 80, or 90 thousand - some number not quite enough when you scale up to 5 million.
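The arithmetic in the comment above can be written out explicitly. All figures here are invented for illustration; the point is only that per-box capacity is a budget of page loads, so a small change in real user behaviour silently shrinks users-per-box:

```python
# Back-of-the-envelope for the parent's point (all figures invented): per-box
# capacity is really a budget of page loads, so when real users start
# comparison-shopping, users-per-box drops even though the hardware didn't
# change.

PAGE_LOADS_PER_BOX = 300_000   # page loads one box can serve (per the tests)
LOADS_PER_SYNTHETIC_USER = 3   # test users clicked straight through
LOADS_PER_REAL_USER = 4        # real users flip back and forth comparing plans

def users_per_box(loads_per_user: int) -> int:
    return PAGE_LOADS_PER_BOX // loads_per_user

def boxes_needed(total_users: int, loads_per_user: int) -> int:
    per_box = users_per_box(loads_per_user)
    return -(-total_users // per_box)  # ceiling division

print(users_per_box(LOADS_PER_SYNTHETIC_USER))       # 100000: the tested plan
print(users_per_box(LOADS_PER_REAL_USER))            # 75000: what reality gives
print(boxes_needed(5_000_000, LOADS_PER_REAL_USER))  # 67 boxes, not 50
```

One extra page load per user drops the tested 100,000-users-per-box figure to 75,000, turning a 50-box plan into a 67-box requirement.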

      • You mean to tell me that a centralized database that millions of people are trying to access can't be fixed by adding a few CPUs and a couple of sticks of RAM? I guarantee there is a single server handling all the queries for some critical part of this mess; they could probably figure out which server just by fan noise.
    • It's not possible to film it unedited. It took me more than 3 hours, and my application was still f'd up to the point that I can't buy insurance.
    • Testing was scheduled to begin the week before the site went live []. Do you really think it's just a problem of adding more servers?
      • Do you really think it's just a problem of adding more servers?

        as some very helpful individual elucidated in a different post on this thread: "it's not like building a hobby site"

        if you notice, my original post assumed a level of IT knowledge on behalf of the reader (you)

        I'm saying I haven't heard any problems reported professionally whatsoever, and the reports that **do** exist mention **only** problems that, at the core, are routine IT engineering problems

        I'm demanding better journalism (re: TFA) and sm

        • Beginning testing a week before going live may be unfortunately common, but it's also a recipe for disaster.
          • so you're agreeing that the reports of 'Obamacare Failure' are based on faulty journalism, and that, further, what has been reported are simple IT Engineering solutions?

            that's what I'm hearing...b/c you say this:

            Beginning testing a week before going live may be unfortunately common, but it's also a recipe for disaster.

            So you're saying that the contractor who was paid $1 billion to "roll out" the site made a "common" mistake that most /. readers would catch?

            glad you agree?

  • by Twillerror ( 536681 ) on Sunday October 20, 2013 @05:25PM (#45183119) Homepage Journal

    Am I the only one who thinks things have gotten a bit hyperbolic? I hear a lot of non-technical people talking about how "bad" the architecture is.

    This is a new product, and it has more users a few weeks in than most of the big boys had in over a year.

    We are not selling an iPhone or a plane ticket here. This is a complex infrastructure with lots of back-end interactions. The front end is fairly modern. They haven't gotten around to minifying and consolidating the JS files, but that will come, I'm sure.

    I've gotten through the sign-up process; they added some stuff to do some ad-hoc shopping. I've seen much more dragging of feet by supposed enterprise players. What are we, 20 days into this enormous platform? Most of the people complaining don't even need the damn thing because they already have insurance.

    At the end of the day, the exchanges are not even selling insurance. Insurance companies are doing that. It's like using Google's shopping feature. Ultimately, the insurance company is the seller. If you need the insurance, you'll go directly to the person selling it. Hell, we probably should have started with the exchanges being nothing more than a fancy Craigslist.

    People who need insurance because they are sick or scared will get it. They will get the subsidies, etc. The vast majority of these so-called "healthy young" are just declining insurance through their employers. They just have to fill out a bunch of paperwork with their HR department.

    At the end of the day, HealthCare.gov is something to help people get insurance. The subsidies and the new rules are what will get it for them.

    • HealthCare.gov does everything it is supposed to do. It works exceptionally well.
    • by jsepeta ( 412566 )

      But it fails for me in the sign-up phase. That means that it's not even touching the supposed complex infrastructure. I can't even get a fucking account started.

      • You might not want to - there was a news story the other day about "no privacy expected" being the <standard_disclaimer> on the site. This might get straightened out sooner or later, but for now beware of disclosing medical or financial information to such a site.

  • by kervin ( 64171 ) on Sunday October 20, 2013 @05:30PM (#45183153) Homepage

    I understand the political grandstanders on both sides using this in their latest talking points but I really expected a bit more from Slashdot. Crashing Websites, Grumbling Users: Obamacare's Debut Is a Typical Tech Launch [] is the most balanced and informed article I've seen written on this topic.

    Basically the website has been out for only 2-3 weeks now. It's a national rollout. And it's all 1.0 code. Of course there will be issues. Network design is done using estimates, but scaling is done using metrics. Load-testing with a 100K concurrent-user target will not help you when 200K users show up at your door.

    This is all business as usual at the start of the sign-up period. Where users can also call in their applications and also fill them out in person. I'd be surprised if they couldn't mail in their applications as well.

    • You do know that shooting towards the middle doesn't automatically make something balanced, right?

      If one side is right and the other side is complete spin, then the middle is just less spin than complete spin. Given this, under your definition of "balanced" the spinners can arbitrarily control where the "balanced" point is by ramping up or ramping down the amount of spin as necessary.

      The fact is that many people are required by law to use this thing within the next 71 days, but it continues to not work.
  • Not only did they not want to go open source they offered the contract to ONE company. There was no open bid.

    It gets worse. They also didn't want the programmers available for congressional subpoena. The whole thing was done as secretively and opaquely as possible.

    This isn't a failure of system design. It's idiots destroying things by trying to do everything in the most sneaky and underhanded manner possible.

    Answer this... if we knew everything about Obamacare at the time of voting that we know now... would i

    • by guruevi ( 827432 )

      And do you think it would've made any difference depending on how you voted? No. Romney would've implemented RomneyCare, and the IRS, NSA, and ATF would still be corrupt, because those institutions transcend administrations and politics. The government in the US has transcended any politics and choices; it is simply there: large, slow, and corrupt, belonging to no one really, but typically favoring the things that are equally large, slow, and corrupt and that feed it.

    • were made by Karmashock! Bravo!!!!!!!
    • by quantaman ( 517394 ) on Sunday October 20, 2013 @08:42PM (#45184527)

      Answer this... if we knew everything about Obamacare at the time of voting that we know now... would it have passed?


      Actually yes. Every poll done on the ACA has shown that people approve of all the individual components and are a lot more approving of the whole when it's explained to them.

      Which is why they don't tell us anything. They don't respect your vote. You don't get to decide. Your opinion is worthless. They will do what they want to do. And if you want something else they will lie to your face.

      So I assume you disapprove of the standoff by John Boehner and the congressional Republicans, where a minority of congressmen (i.e. the majority of the majority) from a party that received less than 50% of the congressional vote used the threat of an economic collapse to try to overrule the President and the Senate.

      Was the IRS attacking political opponents of the president on purpose? Of course not. Until it was proven that they were.

      Until it was proven that they weren't and there was no political bias to the IRS audits []

    • Answer this... if we knew everything about Obamacare at the time of voting that we know now... would it have passed?

      Absolutely - it was passed on a purely partisan vote in a budget reconciliation bill, with its lead sponsor being 100% up front with the fact that the bill's contents were secret from the Congress. The votes were from people who wanted a "more European" system in America and the details didn't matter - this was their big chance.

      That's why there's going to be continual showdowns on this issue,

  • by rockmuelle ( 575982 ) on Sunday October 20, 2013 @05:34PM (#45183187)

    Look, I use open source all the time and have contributed to many projects and ran a few. I love open source just as much as the next slashdotter.

    BUT, broad statements like "open source will fix HealthCare.gov" don't add anything to the conversation. What if it was built on open source and it failed? Would we be making the same claims about commercial software? "If only they had used WebSphere and DB2!! Everything would have been wonderful!".

    No. No. And. No.

    As many people have already pointed out, the problems with HealthCare.gov are mostly the same ones that plague many large-scale IT projects: insufficient testing, complex interactions between many existing complex systems (which are hard to get right), consultants that get paid for code delivered, working or not, and so on.

    Now, TFA actually makes the argument that HealthCare.gov as an _open platform_ would be a good idea. It goes on to point out that that's one thing that makes some of the bigger web apps successful: they are platforms for building apps rather than apps themselves. How much of that is true is open for debate (is Google really a beautiful platform, or is it a bunch of hacks held together by duct tape? only Google engineering knows for sure...), but as a goal, HealthCare.gov as a platform isn't a bad idea.

    However, platforms don't just materialize from thin air. In fact, building a platform before you have apps is a recipe for failure. It's usually only after the third or fourth app that the patterns emerge that make a platform possible. It takes time for good platforms to evolve.

    Given that, designing HealthCare.gov from the beginning as a platform would probably have failed, too. The developers would have created a wonderful platform for some vague requirements that likely didn't actually meet the needs of an insurance exchange at all.

    From a pure software engineering perspective, what's happening right now isn't that bad. Version 1.0 launched, it had problems. Let's get working on Version 2.0 and maybe try out some new ideas. Then for Version 3.0 and 4.0, we can start thinking of a platform. The other important point here is that you have to plan for multiple versions and long-term maintenance/evolution for software. The suggestion that HealthCare.gov should have been run as a startup in the government rather than outsourced is probably the best idea for fixing the problem.


  • by sootman ( 158191 ) on Sunday October 20, 2013 @05:36PM (#45183201) Homepage Journal

    It's a ridiculously complicated system. [] (Scroll down to the graphic.) Figure out a way to release it in stages. Step 1, you can create an account and log in and read what the system will someday be. Step 2, make sure it's getting all the right info from all the right places. Etc...

  • Make it open source and easily customizable enough for other countries to use. Then put in bunch of hard to find security bugs and let the NSA exploit those bugs to spy on other countries. Two birds, one stone, everyone is happy. You're welcome.
  • by EmperorOfCanada ( 1332175 ) on Sunday October 20, 2013 @05:48PM (#45183305)
    It has become painfully obvious to me that government IT contracts exist solely to give well-connected contracting companies billions of dollars of taxpayers' money, since these same large, creatively dead companies can't actually come up with products that real consumers would want. My guess is that if you contracted these companies to build an iPad, it would be 1 inch thick, have a 640 x 400 resolution, and have an owner's manual that came in a set of binders.

    So these companies are going to fight open source as hard as they can, seeing that it destroys all kinds of things they had going for them. Open source means that other companies can come in and scoop their contracts. Open source means that people like Slashdotters and the DailyWTF will go through the code highlighting crap that came from 3rd-rate 3rd-world outsourced coders. Open source could even mean, horror upon horrors, that if good code is generated, other governments will copy it and simply modify it to their own needs.

    But the worst horror is that if they charge 50 billion dollars for a few thousand lines of modifications to an existing system people like us will be willing to testify at the fraud trial.

    Actually there is one worse horror: that people like us contribute free functionality, upgrades, and fixes.
    • by kervin ( 64171 ) on Sunday October 20, 2013 @07:13PM (#45183881) Homepage

      Reuters puts money spent so far at $200M, and the total project at $300M. Source: As Obamacare tech woes mounted, contractor payments soared []

      You can make your point without resorting to embellishments you know.

      • I wasn't referring to the Obamacare project specifically. I was thinking more about many government (specifically medical) software projects that have gone into over-budget orbit. A great one is the British NHS IT disaster that hit £11 Billion before it was cancelled. That is over 17 Billion USD for nothing. I have a sneaking suspicion that if the source code and other project documents got out that we would all be shaking with anger over the terrible code and development/management process that went
  • If you give the people any chance to participate in government, besides paying taxes and voting for the carefully-groomed, reliable idiots, then they are likely to develop some misplaced sense of ownership.
    That is absolutely NOT how this plantation is run.
  • Oh my non-existing-god!
    Open source? America is definitely committed to its descent into communism!
  • ....perhaps Jessica Teal and Teal Media and a bunch of 20-somethings with limited experience actually had something to do with the situation?
  • by PPH ( 736903 ) on Sunday October 20, 2013 @07:00PM (#45183795)

    ... it's incredibly difficult to map rules described in English to software requirements. And in this case, it isn't English, it's legalese.

    I've done this for engineering applications. In fact I've worked with automated systems that do a pretty good job of code generation. But in either the manual or automated case, it takes numerous iterations through the requirements definition phase to capture the inputs (what the customer wants), map these to requirements, and discover holes (where a specific case might not be addressed) or conflicts. The solution in these cases is to go back to the customer and get more information. Or in some cases, tell them that 'it just won't work like that'.

    Writing legislation, passing a bill and then building a web site doesn't work this way. What do you expect the developers to do? Go back to Congress and ask them to re-write the law if a problem is discovered? I don't think so.

    Compare this to tax law. That has had decades to evolve, as a manual system before TurboTax came on the scene. And many of the discrepancies were actually encountered and solved. Just not in software. So when it came time to write code, the regulations (requirements) were well understood and complete.

  • "ISO-9001 certified quality frameworks that result in CGI's on-time, on-budget delivery track record"
  • From the Forbes investigation, the issue is that you cannot browse the plans without entering all of your personal information for verification first. The system then needs to cross-check all of the info to calculate your government subsidies. This causes a major bottleneck which greatly slows down the system. Most would balk at the prices without the subsidies.

    Quote from the article: So, by analyzing your income first, if you qualify for heavy subsidies, the website can advertise those subsidies to you instead
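    The bottleneck described above boils down to an ordering decision in the request flow. Below is a hypothetical sketch contrasting the two designs; the function names, plan data, and subsidy rule are all invented for illustration and are not from the actual HealthCare.gov code.

```python
# Illustrative plan catalog and stub services (all invented).
PLANS = [{"name": "Bronze", "premium": 300}, {"name": "Silver", "premium": 400}]

def verify_identity(user):
    # Stand-in for the expensive cross-agency identity check.
    return {"income": user["income"]}

def compute_subsidy(identity):
    # Toy subsidy rule: flat $150/month below an income threshold.
    return 150 if identity["income"] < 40_000 else 0

def subsidy_first_flow(user):
    # The reported design: every visitor pays the cost of verification
    # and subsidy calculation before seeing a single plan.
    identity = verify_identity(user)
    subsidy = compute_subsidy(identity)
    return [{**p, "premium": p["premium"] - subsidy} for p in PLANS]

def browse_first_flow(user):
    # Alternative: show sticker prices immediately and defer the
    # expensive checks until the user actually applies.
    return PLANS
```

    The trade-off is exactly what the comment describes: the browse-first flow removes the bottleneck, but it shows unsubsidized sticker prices that most would balk at.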

"You can have my Unix system when you pry it from my cold, dead fingers." -- Cal Keegan