Cal.com Is Going Closed Source Because of AI
Cal is moving its flagship scheduling software from open source to a proprietary license, arguing that AI coding tools now make it much easier for attackers to scan public codebases for vulnerabilities. "Open source security always relied on people to find and fix any problems," said Peer Richelsen, co-founder of Cal. "Now AI attackers are flaunting that transparency." CEO Bailey Pumfleet added: "Open-source code is basically like handing out the blueprint to a bank vault. And now there are 100x more hackers studying the blueprint." The company says it still supports open source and is releasing a separate Cal.diy version for hobbyists, but doesn't want to risk customer booking data in its commercial product. ZDNet reports: When Cal was founded in 2022, Bailey Pumfleet, the CEO and co-founder, wrote, "Cal.com would be an open-source project [because] limitations of existing scheduling products could only be solved by open source." Since Cal was successful and now claims to be the largest Next.js project, he was on to something. Today, however, Pumfleet tells me that AI programs such as "Claude Opus can scour the code to find vulnerabilities," so the company is moving the project from the GNU Affero General Public License (AGPL) to a proprietary license to defend the program's security.
[...] Cal also quoted Huzaifa Ahmad, CEO of Hex Security, "Open-source applications are 5-10x easier to exploit than closed-source ones. The result, where Cal sits, is a fundamental shift in the software economy. Companies with open code will be forced to risk customer data or close public access to their code." "We are committed to protecting sensitive data," Pumfleet said. "We want to be a scheduling company, not a cybersecurity company." He added, "Cal.com handles sensitive booking data for our users. We won't risk that for our love of open source."
While its commercial program is no longer open source, Cal has released Cal.diy. This is a fully open-source version of its platform for hobbyists. The open project will enable experimentation outside the closed application that handles high-stakes data. Pumfleet concluded, "This decision is entirely around the vulnerability that open source introduces. We still firmly love open source, and if the situation were to change, we'd open source again. It's just that right now, we can't risk the customer data."
AI can also FIX t (Score:4, Insightful)
Instead of fearing AI, use it to secure software and make it better.
We have nothing to fear but fear itself.
Re:AI can also FIX t (Score:5, Insightful)
Every time you want to do a shitty thing in the world now you just say AI made me do it.
Re: AI can also FIX t (Score:5, Insightful)
If attackers can now so easily scan for vulnerabilities... so can the defenders. They have access to the same tools. Not to mention that these new approaches don't even really need access to the source.
He says they don't want to be a cybersecurity company, just quietly focus on handling the sensitive data of their customers. But you can't do one without the other.
If you don't want to build up the security know-how and processes in-house, that's fair. Outsource it to someone who specialises in it. But a company just trying to avoid a breach by flying under the radar and cheaping out on security has no business handling sensitive data in the first place.
Re: (Score:2)
It is a sad and troubling thought, but open-source may now be a threat.
Re: (Score:3)
But if it's open source, once one person has fixed it, it's fixed. Closed source means everyone has to fix it individually.
Re: (Score:2)
If one person fixes it, how many Arch users out there run Arch but don't touch the code? Probably around 80%. So given that maybe 20% of Arch users touch code, take that number and spread it out over how many packages there are in Arch and you'll quickly understand that eyes on code != coders working on code.
Re: (Score:2)
Re: (Score:2)
No, but I am assuming that their different implementations probably have about the same number of vulnerabilities to start with. You're right that with closed source my fix probably wouldn't help you directly, but it's still the case that you and I each have a vulnerability to fix, instead of you being able to take my fix and use it without effort.
Re: (Score:2)
Please don't get me wrong, I like open source. I use plenty of open things. I explicitly prefer it in most cases, because I am cheap. I greatly appreciate the hard and largely unpaid work that goes into it. But I had dou
Re: (Score:2)
You could be right, but my take was that it was made by a manager who had no idea. Of course, they could both be true.
Re: AI can also FIX t (Score:3)
I'm pretty sure they found more bugs than they wanted to fix when they checked the code.
Re:AI can also FIX t (Score:4, Insightful)
GenAI is a bit nicer for offense than defense.
If you are an attacker, the time lost to and consequences of GenAI mistakes can be more easily ignored. Whoops, an attack that didn't work, but you weren't going to succeed anyway. If it screws up the target in a way you didn't actually want, you may pay an opportunity cost because you wanted that data or to ransom it, but you didn't care *that* much about it. It's actually a pretty unambiguous 'win' for malicious users since the usual downsides don't matter.
If doing defense, the consequences of GenAI mistakes are more costly. An erroneous security fix actually becomes a hole. A change that loses data is data you actually care about.
All that said, I'm not sure closed sourcing and maintaining an open fork would realistically do anything. I doubt the proprietary fork would be sufficiently different to protect them from hypothetical security issues in their codebase.
Re: (Score:2)
The thing is, offence is defence if your devs are competent. One thing I've always stressed is we should be attacking our own products all the time. Using security linters and static analysis, fuzzing and all the other techniques to kick holes in our own systems so we can identify and patch them. These AI tools are no different.
And I'd argue for an attacker the stakes are just as high. If you screw up, you might just expose who you are, and while your target risks losing his money, you risk losing your fre
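The "attack your own product" idea in the comment above can be sketched as a toy static-analysis pass using Python's standard `ast` module (a hypothetical example: the deny-list and the `find_risky_calls` helper are illustrative, not any real linter's rules):

```python
import ast

# Illustrative deny-list of calls a security linter might flag.
RISKY_CALLS = {"eval", "exec", "pickle.loads"}

def find_risky_calls(source: str):
    """Return (line, name) pairs for calls on the deny-list."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            # Resolve plain names (eval) and one-level dotted names (pickle.loads).
            if isinstance(node.func, ast.Name):
                name = node.func.id
            elif isinstance(node.func, ast.Attribute) and isinstance(node.func.value, ast.Name):
                name = f"{node.func.value.id}.{node.func.attr}"
            else:
                continue
            if name in RISKY_CALLS:
                findings.append((node.lineno, name))
    return findings

sample = "import pickle\ndata = pickle.loads(blob)\nresult = eval(user_input)\n"
print(find_risky_calls(sample))
```

Real tools (and the AI scanners under discussion) go far beyond name matching, but the workflow is the same: run the pass over your own codebase before an attacker does.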
Re: (Score:2)
And I'd argue for an attacker the stakes are just as high. If you screw up, you might just expose who you are, and while your target risks losing his money, you risk losing your freedom..... or worse, if you pick a gnarly enough target.
The stakes for the attacker go back to zero if they're in a jurisdiction where there's no chance of prosecution (Russia, Iran, North Korea, to give a non-inclusive list). They just need to not be stupid enough to hack an entity with local connections. Something like Cal.com doesn't make that list.
Re: (Score:2)
Security isn't convenient.
Security isn't easy, but it isn't hard either.
Assume you're a target (because you are) and make it so that you're hardened. Don't be the easy target. Criminals are lazy.
Re: (Score:2)
Don't forget it also supercharges the general asymmetry between attackers and defenders. That is, attackers are there for as long as they want to be; defenders have to be there all the time.
Every time a newer, bigger, better-trained model (or a new set of tooling, feedback, etc. workflow) drops, defenders have to buy into it, stand it up, and evaluate a whole new wave of potential vulnerabilities that are now identified, and they have to do it before a threat actor does.
Until we reach a point of st
Re: (Score:2)
What a myopic view. If AI can scan the codebase to find a vulnerability to attack, it can scan the codebase to find a vulnerability to *fix*. You seem to misunderstand the premise here.
Also, you say "GenAI", but there's nothing "generative" about this-- it's AI's ability to interpret the code and find mistakes that's under fire by the CEO of "cal".
Re:AI can also FIX t (Score:4, Informative)
They're pissing on 'you' and telling you it's raining, if the summary is correct.
It usually follows the business model collapsing, and precedes a fork with the original going into maintenance-only support.
Re: (Score:2)
They're pissing on 'you' and telling you it's raining, if the summary is correct.
indeed. the whole premise is totally contradictory if not hypocritical: "Cal.com would be an open-source project [because] limitations of existing scheduling products could only be solved by open source" (a jarring claim in itself), and now that they have a working product "open source has become too dangerous" and "we don't want to become a cybersecurity business", which is just nonsense because 1. security by obscurity is the last thing you want, this is security 101, 2. if malicious llm agents can find v
Re: (Score:2)
Bingo. AI is an arms race. I'd rather have tools that can scan my codebase and find any security issues, than keep it private. The bad guys are going to disassemble it with AI anyway, so why hamstring the white-hats?
Re: AI can also FIX t (Score:2)
I thought the argument was that Open Source was more secure because everyone can find and fix vulnerabilities?
AI coding tools now make it much easier for attackers to scan public codebases for vulnerabilities
I bet the real issue is that with their actual source code in the public domain, someone could 'vibe code' a competing product with the same features...
Re: (Score:2)
I thought the argument was that Open Source was more secure because everyone can find and fix vulnerabilities?
Up until now that has been a really questionable argument because, when you include all the modules pulled in from elsewhere, many open source codebases were really large with lots of different people and policies involved over all of the different parts of the software. You only got the real benefit for some codebases, like the Linux kernel itself, where enough people really cared.
Now, with LLMs that can pick up bugs with better chance and explanation than old static code analysers it's completely possible
Cool Story Bro (Score:3)
Re:Cool Story Bro (Score:4, Informative)
Never put off till tomorrow the poor decisions you can make today.
Also, I've never heard of Cal.com. I suspect nothing of value is being lost here.
Re: Cool Story Bro (Score:2)
Also, I've never heard of Cal.com. I suspect nothing of value is being lost here.
Perhaps this "dramatic shift to closed source" is more of a marketing move, to get their company name 'out there'?
Re: Cool Story Bro (Score:4, Informative)
Re: (Score:2)
Yeah, it feels like classic enshittification, going the same way a few supposed open source projects run by corporations have gone (see also Zimbra for instance.)
Blame AI, AI is unpopular anyway, so it's easy to blame and have people who don't have critical thinking skills (ironically, that'd be more biased towards the handful of people who are pro-genAI) assume it's right.
(Of course, genAI is ultimately going to be just as good at analyzing machine code as it does source code, so the excuse makes little se
Re: (Score:2)
That's assuming that you even publish the code at all. Looks like they would keep their codebase internal.
Re: (Score:2)
"binary analysis/conversion"
Re: (Score:2)
Re: (Score:2)
fair enough, but deployed server executables aren't usually referenced as "publishing code" or "internal codebase" so i assumed there was some misunderstanding there.
they do have phone and desktop clients although that's likely javascript, and seem open to on premises service. anyway ai binary analysis seems a moot point here because the whole "security" angle is clearly just a pretext for closing the source; the real reason is anybody's guess but i'm betting on a good laugh if (or when) they get pwned.
Re: Cool Story Bro (Score:2)
I'm pretty sure I already saw a story about a model reverse engineering from a binary. It's ultimately just assembler.
Re: Cool Story Bro (Score:2)
It's ultimately just assembler.
I think you mean 'disassembler' - an assembler is used to create a binary executable, a disassembler turns a binary executable into a form of source code.
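As a toy illustration of the point, Python's standard `dis` module already does this at the bytecode level: it turns a compiled function back into named instructions (a sketch only; disassembling native x86/ARM binaries is a much harder job, handled by tools like objdump or Ghidra):

```python
import dis

def add_fee(balance, fee):
    # A trivial function; its compiled bytecode still names its locals.
    return balance - fee

# dis recovers instruction names and argument names from the code object,
# turning the "binary" back into something human- (or model-) readable.
for ins in dis.Bytecode(add_fee):
    print(ins.opname, ins.argrepr)
```

The exact opcodes vary by CPython version, but the recovered listing always exposes the variable names and the operation, which is exactly the kind of signal a model could work from.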
Re: (Score:2)
Re: (Score:2)
Binary analysis/conversion means nothing if the AI doesn't understand how the CPU processes code, and currently no AI actually understands how things work.
Given you have x86, ARM and RISC-V, binary analysis/conversion is absolutely worthless. A modern CPU (since the Pentium era) supports out-of-order execution of instructions, so exactly what would an AI be doing with that, since it doesn't understand how the processor actually reads machine code and what the results will be on an unknown architecture (you d
Not "flaunting" (Score:3)
But flauting, surely?
Re: Not "flaunting" (Score:3)
Flouting god damn it
Re: (Score:2)
Open-source code is basically like handing out... (Score:5, Insightful)
... the blueprint to a bank vault.
Hmmm... arguing for security by obscurity.
Security researchers answered that question long ago.
Re: (Score:2)
Yup, that caught my eye too.
Security isn't "my blueprints are secret."
Security comes with: "Here are the blueprints, here is the research behind the blueprints, here are copies of the safe to practice on, here are conference papers discussing the known exploits of the safe, here are the reviews done by experts in the field, and here is the list of implementations used by governments around the world, if you discover exploits you'll make global news and companies everywhere will want your brains."
Re: Open-source code is basically like handing out (Score:2)
Fact is, non-software engineering projects have been doing this for nearly a century. Software engineering is still struggling to do it, and we all pay the price for it with data breaches.
Re: (Score:2)
Re: (Score:2)
To be fair, this is a different kind of security than the researchers studied.
When it is said that "security by obscurity" is no security at all, it's referring to security *locks* that are meant to bar entry to those who are unauthenticated and/or unauthorized, but allow entry to those with the right credentials.
Security by obscurity arguably *does* work better when it comes to vulnerability scans. If AI can't read the source code, it will have a harder time discovering vulnerabilities caused by bad coding
Re: (Score:2)
No, they argue for defense in depth which is a valid strategy.
Obscuring the attack surface could delay detection of exploits for the "outside" people while the "insiders" have it easier. In theory.
Would it really matter in this case? I doubt it would make a practical difference.
Bullcrap. we already lived this before (Score:4, Informative)
In the very late '90s and most of the '00s, automated fuzzing tools were invented. That led to a massive increase in vulnerability discovery and reports, significantly increasing the workload of maintainers. Also, bad actors started to use said tools to discover vulns before the maintainers could discover and patch them.
If you search tech websites of the era (including Slashdot) you will see the same set and tone of articles. Maintainers complaining of the increased workload. The sky is falling. Security-pocalypse...
In the end, the big corpos stepped up, giving tooling and compute capacity for free to run the new tools against the existing codebases, both for projects important to their infrastructure and for projects that would earn them good PR points.
Also, the maintainers were able to adapt their procedures, tooling and community to the "new normal" increased workload, and the software world kept turning without the sky falling off.
This shall also pass.
Yes, not all projects will survive, and of those which survive, not all will get through unscathed, but stresses like this help separate the grain from the chaff.
Re: (Score:2)
"...and the software world kept turning without the sky falling off."
The teensy piece of the "software world" impacted anyway. Embedded dominates the "software world", you know all that notorious software vulnerable to "Automated Fuzzing tools". 98% of processors are used in embedded products, but sure, those "maintainers" are all that matters.
And what sky "falls off"?
If the tools are so good (Score:5, Insightful)
If the tools are so good that you are afraid they will be used to expose your security flaws... maybe you should use the tools to find the security flaws yourself, and then fix them, rather than resorting to security through obscurity.
This is a fig leaf over the desire to back out of the open source community now that the product has reached profitability.
Hopefully someone cares enough to fork the latest open source version and run them out of business with a better product that remains open.
Re: If the tools are so good (Score:2)
They probably did run the tools.
Then saw more work than they wanted to do.
Re: If the tools are so good (Score:2)
Hopefully someone cares enough to fork the latest open source version and run them out of business with a better product that remains open.
Yes, someone should definitely clone their product, keep their clone 'open source' and convince people to use the 'open source version' by creating enhanced features and providing free support, because they have nothing better to do than to go after a company that no one has ever heard of that is taking a product no one uses and making it closed source.
I don't think this was a community-built open source project, like, say, the Linux kernel, it was a proprietary product developed by paid developers who (for
Re: (Score:2)
Maybe. The thing is, the hackers are running hundreds of different tools using a wide variety of techniques. A software maker can't necessarily afford to purchase and run the same number of "white hat" tools. These things aren't generally free. And even if they are, they still take time and effort to weed through all the false positives. The hackers, on the other hand, only need to find *one* vulnerability that wasn't found by the software maker's scanners.
AI can analyze machine code (Score:5, Insightful)
Re: (Score:2)
"Do they really think that AI can't understand machine code and find vulns that way?"
They likely haven't thought of it, but why would you just assume that it can? At the very minimum, machine instructions would overwhelm systems sized for source-level analysis, forcing revisions and making inference far more expensive. More likely, machine code would be preprocessed first, you know, like has been done for decades. Not an AI thing.
Would be fun watching AI digest an executable and emit full source code fo
Re: AI can analyze machine code (Score:2)
Re: (Score:2)
Umnnh...but I think the programs that do that are different programs than the ones that understand source code. (Well, not *that* much different, but trained on a massively different data set.)
FWIW, I think understanding binary code is probably an easier problem for an AI than understanding source code, but it *does* require different training.
Re: (Score:3)
AGPL + CLA (Score:2)
The restrictiveness of the AGPL, when combined with license assignment or a CLA, achieves the opposite of the intended result. It's meant to keep contributions from the community from being taken by corporations who don't give back, but instead it's used to take community contributions until the point when they are ready to rug pull, knowing that an open community fork won't survive with such a restrictive license.
At least here, they have the courtesy to pretend it's not the same old rug pull, with
We need our security through obscurity! (Score:2)
n/t
Local Man Abandons Honest Living (Score:4, Insightful)
Hargrove added that conventional security measures have become ineffective. "The only truly secure option left is to go fully analog," he said. "No digital trail, no logs, no AI predicting my every move. Just me, a dark street, and whoever walks by with cash in their pocket." He said he intends to launch his new career tomorrow night.
Lying Sack Of Shit (Score:2)
Closed source for security against AI, but here's an "insecure" open source DIY version if you want to contribute ideas to my closed source platform?
Ah well, it doesn't really matter anyway. Thanks to the wonders of AI anyone can vibe code their own Calendly ripoff, just like this fuck stick is doing. It's not an app that is at all complicated.
LLMs can read machine code (Score:2)
Moving to closed source alone doesn't help, because LLMs understand machine code as if it were just another programming language. You need to combine it with powerful obfuscation, and even that will only work until someone figures out how to teach an LLM to use a debugger.
Unclear on the concept. (Score:2)
The software is either secure or it's insecure.
If it's secure, they have no concern that anyone knows how it works.
If it's insecure, hiding the source does not secure it.
Plus, they have access to the many AI software code analysis tools, just as bad actors do.
Ignorance is bliss (Score:2)
I always thought open source was supposed to be all about security through transparency, rather than security through obscurity. AI exposure is no different than a sudden surge in the size of project community (developers, users, and yes - abusers too).
Re: (Score:2)
Exploits are faster than fixes. Attackers can automate the entire process to launch thousands of attacks every minute after their AI finds a flaw. How long does it take to get a patch deployed to every endpoint?
Re: (Score:2)
Re: (Score:2)
AI is different because it can find and exploit vulnerabilities so much faster than fixes can be deployed. An attacker doesn't have to care nearly as much about reviewing and debugging their AI generated code. Defenders must, or they're making the problem worse.
Say both sides are using vulnerability hunting AI, and they both find a critical privilege escalato
forked (Score:2)
Re: (Score:2)
Are they not forgotten now? I never heard of them before this story.
Re: forked (Score:2)
Exactly, this is just a way to generate some 'heat' (interest) in their company...
They failed to gain market share by being 'open source' so now they want some free press on going 'closed source because AI'...
Is it too expensive to use all the LLMs? (Score:2)
bull shit he got wrote on the cheap (Score:1)
AI is just a way to close-source it. Too late: AI already knows all the code and can duplicate it.
Re: (Score:2)
Code is not for anyone/anything (Score:1)
"I make the code proprietary for AI" what a pathetic excuse.
Will they have read the AGPL?
Do they know that they are not at all obliged to give the code to pigs and dogs, without asking anyone for anything?
They could easily put the code in a repo 'watched by human personnel'.
A false analogy (Score:2)
Re: (Score:2)
I fear that, at least for the time being, what were the security advantages of open-source have become a major threat to the entire ecosystem.
Re: (Score:2)
If a person isn't going to follow the rules of not exploiting vulnerabilities, they also aren't going to follow the rule of not breaking into closed code to find vulnerabilities.
Re: (Score:2)
There's also the old cliche about "too many chefs".
When I check bug reports on open-source projects, no matter how many people are reporting things, there are still only a handful of people actually working on them. I suspect that the idea of there being so many people out there checking open-source code is a myth. I'm betting that e
Pathetic (Score:1)
Sure (Score:3)
Sure, AI is to blame. Totally believable.
Maybe . . . (Score:3)
Users of Open Source should aggressively test security using AI tools themselves?
This seems like a twist on "tragedy of the commons" economics. If users of free commons resources don't commit to help keep the shared resource clean (defend it by helping secure the software) then everybody loses when the resource gets trashed.
Hopefully this is just a latency period because not enough open source contributors yet exist who've become skilled in AI tools.
Re: (Score:2)
If you have an open-source project and a malicious AI user finds a vulnerability in it, they can hit every single user of your project in one night, while it would take you at least a day to publish a fix, which would still have to be deployed.
It's a very real threat. Concern is warranted.
It sounds like offence has the advantage. (Score:2)
When all the eyes reviewing open-source code were human
flaunting (Score:2)
Flouting his ignorance
Breathtaking stupidity (Score:2)
AI won't care that it needs to decompile binaries, but humans will.
What they're doing will have the opposite effect of what they apparently want.
Fishy story (Score:2)
The entire story sounds fishy in its arguments and code is still around as cal.diy under MIT.
https://www.cal.diy/ [cal.diy]
Basically, these arguments are all non sequiturs.