Movies

NASA To Stream Rocket Launches and Spacewalks On Netflix (nerds.xyz) 16

BrianFagioli shares a report from NERDS.xyz: NASA is coming to Netflix. No, not a drama or sci-fi reboot. The space agency is actually bringing real rocket launches, astronaut spacewalks, and even views of Earth from space directly to your favorite streaming service. Starting this summer, NASA+ will be available on Netflix, giving the space-curious a front-row seat to live mission coverage and other programming.

The space agency is hoping this move helps it connect with a much bigger audience, and considering Netflix reaches over 700 million people, that's not a stretch. This partnership is about accessibility. NASA already offers NASA+ for free, without ads, through its app and website. But now it's going where the eyeballs are. If people won't come to the space agency, the space agency will come to them.

Security

New NSA/CISA Report Again Urges the Use of Memory-Safe Programming Languages (theregister.com) 66

An anonymous reader shared this report from the tech news site The Register: The U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the National Security Agency (NSA) this week published guidance urging software developers to adopt memory-safe programming languages. "The importance of memory safety cannot be overstated," the inter-agency report says...

The CISA/NSA report revisits the rationale for greater memory safety and the government's calls to adopt memory-safe languages (MSLs) while also acknowledging the reality that not every agency can change horses mid-stream. "A balanced approach acknowledges that MSLs are not a panacea and that transitioning involves significant challenges, particularly for organizations with large existing codebases or mission-critical systems," the report says. "However, several benefits, such as increased reliability, reduced attack surface, and decreased long-term costs, make a strong case for MSL adoption."
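The failure class at stake is easy to show in miniature. The sketch below is illustrative and not drawn from the report: in a memory-safe language (Python, which earlier CISA guidance lists among them), an out-of-bounds access becomes a recoverable error rather than silent corruption of adjacent memory, which is the C-family behavior that MSLs are meant to eliminate.

```python
# A toy contrast (our example, not the report's): in a memory-safe
# language, an out-of-bounds access raises a recoverable error
# instead of silently reading or corrupting adjacent memory.
def read_element(buf, i):
    """Bounds-checked access: raises IndexError when i is out of range."""
    return buf[i]

buf = [10, 20, 30]
print(read_element(buf, 2))  # prints 30

try:
    read_element(buf, 3)     # one past the end
except IndexError:
    print("out-of-bounds access caught, no memory corrupted")
```

In C the equivalent read is undefined behavior; here it is a catchable exception, which is the "reduced attack surface" the report is pointing at.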

The report cites how Google by 2024 managed to reduce memory safety vulnerabilities in Android to 24 percent of the total. It goes on to provide an overview of the various benefits of adopting MSLs and discusses adoption challenges. And it urges the tech industry to promote memory safety by, for example, advertising jobs that require MSL expertise.

It also cites various government projects to accelerate the transition to MSLs, such as the Defense Advanced Research Projects Agency (DARPA) Translating All C to Rust (TRACTOR) program, which aspires to develop an automated method to translate C code to Rust. A recent effort along these lines, dubbed Omniglot, has been proposed by researchers at Princeton, UC Berkeley, and UC San Diego. It provides a safe way for unsafe libraries to communicate with Rust code through a Foreign Function Interface....

"Memory vulnerabilities pose serious risks to national security and critical infrastructure," the report concludes. "MSLs offer the most comprehensive mitigation against this pervasive and dangerous class of vulnerability."

"Adopting memory-safe languages can accelerate modern software development and enhance security by eliminating these vulnerabilities at their root," the report concludes, calling the idea "an investment in a secure software future."

"By defining memory safety roadmaps and leading the adoption of best practices, organizations can significantly improve software resilience and help ensure a safer digital landscape."

AI

AI Improves At Improving Itself Using an Evolutionary Trick (ieee.org) 41

Technology writer Matthew Hutson (also Slashdot reader #1,467,653) looks at a new kind of self-improving AI coding system. It rewrites its own code based on empirical evidence of what's helping — as described in a recent preprint on arXiv.

From Hutson's new article in IEEE Spectrum: A Darwin Gödel Machine (or DGM) starts with a coding agent that can read, write, and execute code, leveraging an LLM for the reading and writing. Then it applies an evolutionary algorithm to create many new agents. In each iteration, the DGM picks one agent from the population and instructs the LLM to create one change to improve the agent's coding ability [by creating "a new, interesting, version of the sampled agent"]. LLMs have something like intuition about what might help, because they're trained on lots of human code. What results is guided evolution, somewhere between random mutation and provably useful enhancement. The DGM then tests the new agent on a coding benchmark, scoring its ability to solve programming challenges...
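The loop Hutson describes can be caricatured in a few lines. This is a toy stand-in, not the DGM itself: a single number plays the agent, a random tweak plays the LLM-authored code edit, and a distance-to-target score plays the coding benchmark. The keep-every-agent population is our reading of the open-ended evolutionary setup; the numeric details are otherwise meaningless.

```python
import random

random.seed(0)

def benchmark(agent):
    """Higher is better; stands in for SWE-bench / Polyglot scores."""
    return -abs(agent - 42)

def mutate(agent):
    """Stands in for the LLM proposing one change to the agent."""
    return agent + random.uniform(-5, 5)

population = [0.0]                     # a single seed agent
for _ in range(80):                    # 80 iterations, as in the paper
    parent = random.choice(population)
    population.append(mutate(parent))  # keep every agent, not just the best

best = max(population, key=benchmark)
print(len(population))                 # prints 81
```

Keeping the whole population rather than greedily discarding weaker agents is what distinguishes this kind of guided evolutionary search from simple hill-climbing: an agent that scores poorly now can still be the stepping stone to a later improvement.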

The researchers ran a DGM for 80 iterations using a coding benchmark called SWE-bench, and ran one for 80 iterations using a benchmark called Polyglot. Agents' scores improved on SWE-bench from 20 percent to 50 percent, and on Polyglot from 14 percent to 31 percent. "We were actually really surprised that the coding agent could write such complicated code by itself," said Jenny Zhang, a computer scientist at the University of British Columbia and the paper's lead author. "It could edit multiple files, create new files, and create really complicated systems."

... One concern with both evolutionary search and self-improving systems — and especially their combination, as in DGM — is safety. Agents might become uninterpretable or misaligned with human directives. So Zhang and her collaborators added guardrails. They kept the DGMs in sandboxes without access to the Internet or an operating system, and they logged and reviewed all code changes. They suggest that in the future, they could even reward AI for making itself more interpretable and aligned. (In the study, they found that agents falsely reported using certain tools, so they created a DGM that rewarded agents for not making things up, partially alleviating the problem. One agent, however, hacked the method that tracked whether it was making things up.)

As the article puts it, the agents' improvements compounded "as they improved themselves at improving themselves..."

Desktops (Apple)

After 27 Years, Engineer Discovers How To Display Secret Photo In Power Mac ROM (arstechnica.com) 12

An anonymous reader quotes a report from Ars Technica: On Tuesday, software engineer Doug Brown published his discovery of how to trigger a long-known but previously inaccessible Easter egg in the Power Mac G3's ROM: a hidden photo of the development team that nobody could figure out how to display for 27 years. While Pierre Dandumont first documented the JPEG image itself in 2014, the method to view it on the computer remained a mystery until Brown's reverse engineering work revealed that users must format a RAM disk with the text "secret ROM image."

Brown stumbled upon the image while using a hex editor tool called Hex Fiend with Eric Harmon's Mac ROM template to explore the resources stored in the beige Power Mac G3's ROM. The ROM appeared in desktop, minitower, and all-in-one G3 models from 1997 through 1999. "While I was browsing through the ROM, two things caught my eye," Brown wrote. He found both the HPOE resource containing the JPEG image of team members and a suspicious set of Pascal strings in the PowerPC-native SCSI Manager 4.3 code that included ".Edisk," "secret ROM image," and "The Team."

The strings provided the crucial clue Brown needed. After extracting and disassembling the code using Ghidra, he discovered that the SCSI Manager was checking for a RAM disk volume named "secret ROM image." When found, the code would create a file called "The Team" containing the hidden JPEG data. Brown initially shared his findings on the #mac68k IRC channel, where a user named Alex quickly figured out the activation method. The trick requires users to enable the RAM Disk in the Memory control panel, restart, select the RAM Disk icon, choose "Erase Disk" from the Special menu, and type "secret ROM image" into the format dialog. "If you double-click the file, SimpleText will open it," Brown explains on his blog just before displaying the hidden team photo that emerges after following the steps.

Android

Apple's Swift Coding Language Is Working On Android Support (9to5google.com) 44

Apple's Swift programming language is expanding official support to Android through a new "Android Working Group" which will improve compatibility, integration, and tooling. "As it stands today, Android apps are generally coded in Kotlin, but Apple is looking to provide its Swift coding language as an alternative," notes 9to5Google. "Apple first launched its coding language back in 2014 with its own platforms in mind, but currently also supports Windows and Linux officially." From the report: A few of the key pillars the Working Group will look to accomplish include:

- Improve and maintain Android support for the official Swift distribution, eliminating the need for out-of-tree or downstream patches
- Recommend enhancements to core Swift packages such as Foundation and Dispatch to work better with Android idioms
- Work with the Platform Steering Group to officially define platform support levels generally, and then work towards achieving official support of a particular level for Android
- Determine the range of supported Android API levels and architectures for Swift integration
- Develop continuous integration for the Swift project that includes Android testing in pull request checks
- Identify and recommend best practices for bridging between Swift and Android's Java SDK and packaging Swift libraries with Android apps
- Develop support for debugging Swift applications on Android
- Advise and assist with adding support for Android to various community Swift packages

Programming

'The Computer-Science Bubble Is Bursting' 128

theodp writes: "The job of the future might already be past its prime," writes The Atlantic's Rose Horowitch in The Computer-Science Bubble Is Bursting. "For years, young people seeking a lucrative career were urged to go all in on computer science. From 2005 to 2023, the number of comp-sci majors in the United States quadrupled. All of which makes the latest batch of numbers so startling. This year, enrollment grew by only 0.2 percent nationally, and at many programs, it appears to already be in decline, according to interviews with professors and department chairs. At Stanford, widely considered one of the country's top programs, the number of comp-sci majors has stalled after years of blistering growth. Szymon Rusinkiewicz, the chair of Princeton's computer-science department, told me that, if current trends hold, the cohort of graduating comp-sci majors at Princeton is set to be 25 percent smaller in two years than it is today. The number of Duke students enrolled in introductory computer-science courses has dropped about 20 percent over the past year."

"But if the decline is surprising, the reason for it is fairly straightforward: Young people are responding to a grim job outlook for entry-level coders. In recent years, the tech industry has been roiled by layoffs and hiring freezes. The leading culprit for the slowdown is technology itself. Artificial intelligence has proved to be even more valuable as a writer of computer code than as a writer of words. This means it is ideally suited to replacing the very type of person who built it. A recent Pew study found that Americans think software engineers will be most affected by generative AI. Many young people aren't waiting to find out whether that's true."

Meanwhile, writing in the Communications of the ACM, Orit Hazzan and Avi Salmon ask: Should Universities Raise or Lower Admission Requirements for CS Programs in the Age of GenAI? "This debate raises a key dilemma: should universities raise admission standards for computer science programs to ensure that only highly skilled problem-solvers enter the field, lower them to fill the gaps left by those who now see computer science as obsolete due to GenAI, or restructure them to attract excellent candidates with diverse skill sets who may not have considered computer science prior to the rise of GenAI, but who now, with the intensive GenAI and vibe coding tools supporting programming tasks, may consider entering the field?"

Python

Behind the Scenes at the Python Software Foundation (python.org) 11

The Python Software Foundation ("made up of, governed, and led by the community") does more than just host Python and its documentation, the Python Package Index, and the development workflows of core CPython developers. This week the PSF released its 28-page Annual Impact Report, noting that 2024 was their first year with three CPython developers-in-residence — and "Between Lukasz, Petr, and Serhiy, over 750 pull requests were authored, and another 1,500 pull requests by other authors were reviewed and merged." Lukasz Langa co-implemented the new colorful shell included in Python 3.13, along with Pablo Galindo Salgado, Emily Morehouse-Valcarcel, and Lysandros Nikolaou.... Code-wise, some of the most interesting contributions by Petr Viktorin were around the ctypes module that allows interaction between Python and C.... These are just a few of Serhiy Storchaka's many contributions in 2024: improving error messages for strings, bytes, and bytearrays; reworking support for var-arguments in the C argument handling generator called "Argument Clinic"; fixing memory leaks in regular expressions; raising the limits for Python integers on 64-bit platforms; adding support for arbitrary code page encodings on Windows; improving complex and fraction number support...
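For readers who haven't touched it, ctypes is the standard-library bridge that lets Python call C functions directly. A minimal, illustrative example (ours, unrelated to the specific 2024 fixes mentioned above) calls the C library's abs() from Python:

```python
import ctypes
import ctypes.util

# Minimal illustration of the Python/C bridging that ctypes provides:
# load the C standard library and call its abs() function.
libc = ctypes.CDLL(ctypes.util.find_library("c"))
libc.abs.argtypes = [ctypes.c_int]    # declare the C signature...
libc.abs.restype = ctypes.c_int       # ...so ctypes converts correctly
print(libc.abs(-5))  # prints 5
```

Declaring argtypes and restype is the part that most often goes wrong in real ctypes code; without it, ctypes guesses at conversions and can silently truncate values.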

Thanks to the investment of [the OpenSSF's security project] Alpha-Omega in 2024, our Security Developer-in-Residence, Seth Larson, continued his work improving the security posture of CPython and the ecosystem of Python packages. Python continues to be an open source security leader, evident by the Linux kernel becoming a CVE Numbering Authority using our guide as well as our publication of a new implementers guide for Trusted Publishers used by Ruby, Crates.io, and NuGet. Python was also recommended as a memory-safe programming language in early 2024 by the White House and CISA following our response to the Office of the National Cyber Director Request for Information on open source security in 2023... Due to the increasing demand for SBOMs, Seth has taken the initiative to generate SBOM documents for the CPython runtime and all its dependencies, which are now available on python.org/downloads. Seth has also started work on standardizing SBOM documents for Python packages with PEP 770, aiming to solve the "Phantom Dependency" problem and accurately represent non-Python software included in Python packages.

With the continued investment in 2024 by Amazon Web Services Open Source and Georgetown CSET for this critical role, our PyPI Safety & Security Engineer, Mike Fiedler, completed his first full calendar year at the PSF... In March 2024, Mike added a "Report project as malware" button on the website, creating more structure to inbound reports and decreasing remediation time. This new button has been used over 2,000 times! The large spike in June led to prohibiting Outlook email domains, and the spike in November was driven by a persistent attack. Mike developed the ability to place projects in quarantine pending further investigation. Thanks to a grant from Alpha-Omega, Mike will continue his work for a second year. We plan to do more work on minimizing time-on-PyPI for malware in 2025...

In 2024, PyPI saw an 84% growth in download counts and 48% growth in bandwidth, serving 526,072,569,160 downloads for the 610,131 projects hosted there, requiring 1.11 Exabytes of data transfer, or 281.6 Gbps of bandwidth 24x7x365. In 2024, 97k new projects, 1.2 million new releases, and 3.1 million new files were uploaded to the index.
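Those bandwidth figures are internally consistent, as a quick back-of-the-envelope check shows:

```python
# Checking the quoted figures against each other: 1.11 exabytes served
# over a calendar year corresponds to the stated ~281.6 Gbps sustained.
SECONDS_PER_YEAR = 365 * 24 * 3600    # 31,536,000
bytes_served = 1.11e18                # 1.11 exabytes
gbps = bytes_served * 8 / SECONDS_PER_YEAR / 1e9
print(round(gbps, 1))  # prints 281.6
```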

Stats

RedMonk Ranks Top Programming Languages Over Time - and Considers Ditching Its 'Stack Overflow' Metric (redmonk.com) 40

The developer-focused analyst firm RedMonk releases twice-a-year rankings of programming language popularity. This week they also released a handy graph showing the movement of top 20 languages since 2012. Their current rankings for programming language popularity...

1. JavaScript
2. Python
3. Java
4. PHP
5. C#
6. TypeScript
7. CSS
8. C++
9. Ruby
10. C

The chart shows that over the years the rankings really haven't changed much (other than a surge for TypeScript and Python, plus a drop for Ruby). JavaScript has consistently been #1 (except in two early rankings, where it came in behind Java). And in 2020 Java finally slipped from #2 down to #3, falling behind... Python. Python had already overtaken PHP for the #3 spot in 2017, pushing PHP to a steady #4. C# has maintained the #5 spot since 2014 (though with close competition from both C++ and CSS). And since 2021 the next four spots have been held by Ruby, C, Swift, and R.

The only change in the current top 20 since the last ranking "is Dart dropping from a tie with Rust at 19 into sole possession of 20," writes RedMonk co-founder Stephen O'Grady. "In the decade and a half that we have been ranking these languages, this is by far the least movement within the top 20 that we have seen. While this is to some degree attributable to a general stasis that has settled over the rankings in recent years, the extraordinary lack of movement is likely also in part a manifestation of Stack Overflow's decline in query volume..." The arrival of AI has had a significant and accelerating impact on Stack Overflow, which comprises one half of the data used to both plot and rank languages twice a year... Stack Overflow's value from an observational standpoint is not what it once was, and that has a tangible impact, as we'll see....

As that long-time developer site sees fewer questions, it becomes less impactful in terms of driving volatility on its half of the rankings axis, and potentially less suggestive of trends moving forward... [W]e're not yet at a point where Stack Overflow's role in our rankings has been deprecated, but the conversations at least are happening behind the scenes.

"The veracity of the Stack Overflow data is increasingly questionable," writes RedMonk's research director: When we use Stack Overflow for programming language rankings we measure how many questions are asked using specific programming language tags... While other pieces, like Matt Asay's AI didn't kill Stack Overflow are right to point out that the decline existed before the advent of AI coding assistants, it is clear that the usage dramatically decreased post 2023 when ChatGPT became widely available. The number of questions asked are now about 10% what they were at Stack Overflow's peak.
"RedMonk is continuing to evaluate the quality of this analysis," the research director concludes, arguing "there is value in long-lived data, and seeing trends move over a decade is interesting and worthwhile. On the other hand, at this point half of the data feeding the programming language rankings is increasingly stale and of questionable value on a going-forward basis, and there is as of now no replacement public data set available.

"We'll continue to watch and advise you all on what we see with Stack Overflow's data."

Microsoft

Is 'Minecraft' a Better Way to Teach Programming in the Age of AI? (edsurge.com) 58

The education-news site EdSurge published "sponsored content" from Minecraft Education this month. "Students light up when they create something meaningful," the article begins. "Self-expression fuels learning, and creativity lies at the heart of the human experience."

But they also argue that "As AI rapidly reshapes software development, computer science education must move beyond syntax drills and algorithmic repetition." Students "must also learn to think systemically..." As AI automates many of the mechanical aspects of programming, the value of CS education is shifting, from writing perfect code to shaping systems, telling stories through logic and designing ethical, human-centered solutions... [I]t's critical to offer computer science experiences that foster invention, expression and design. This isn't just an education issue — it's a workforce one. Creativity now ranks among the top skills employers seek, alongside analytical thinking and AI literacy. As automation reshapes the job market, McKinsey estimates up to 375 million workers may need to change occupations by 2030. The takeaway? We need more adaptable, creative thinkers.

Creative coding, where programming becomes a medium for self-expression and innovation, offers a promising solution to this disconnect. By positioning code as a creative tool, educators can tap into students' intrinsic motivation while simultaneously building computational thinking skills. This approach helps students see themselves as creators, not just consumers, of technology. It aligns with digital literacy frameworks that emphasize critical evaluation, meaningful contribution and not just technical skills.

One example of creative coding comes from a curriculum that introduces computer science through game design and storytelling in Minecraft... Developed by Urban Arts in collaboration with Minecraft Education, the program offers middle school teachers professional development, ongoing coaching and a 72-session curriculum built around game-based instruction. Designed for grades 6-8, the project-based program is beginner-friendly; no prior programming experience is required for teachers or students. It blends storytelling, collaborative design and foundational programming skills with a focus on creativity and equity.... Students use Minecraft to build interactive narratives and simulations, developing computational thinking and creative design... Early results are promising: 93 percent of surveyed teachers found the Creative Coders program engaging and effective, noting gains in problem-solving, storytelling and coding, as well as growth in critical thinking, creativity and resilience.

As AI tools like GitHub Copilot become standard in development workflows, the definition of programming proficiency is evolving. Skills like prompt engineering, systems thinking and ethical oversight are rising in importance, precisely what creative coding develops... As AI continues to automate routine tasks, students must be able to guide systems, understand logic and collaborate with intelligent tools. Creative coding introduces these capabilities in ways that are accessible, culturally relevant and engaging for today's learners.

Some background from long-time Slashdot reader theodp: The Urban Arts and Microsoft Creative Coders program touted by EdSurge in its advertorial was funded by a $4 million Education Innovation and Research grant that was awarded to Urban Arts in 2023 by the U.S. Education Department "to create an engaging, game-based, middle school CS course using Minecraft tools" for 3,450 middle schoolers (6th-8th grades)" in New York and California (Urban Arts credited Minecraft for helping craft the winning proposal)... New York City is a Minecraft Education believer — the Mayor's Office of Media and Entertainment recently kicked off summer with the inaugural NYC Video Game Festival, which included the annual citywide Minecraft Education Battle of the Boroughs Esports Competition in partnership with NYC Public Schools.

AI

CEOs Have Started Warning: AI is Coming For Your Job (yahoo.com) 124

It's not just Amazon's CEO predicting AI will lower their headcount. "Top executives at some of the largest American companies have a warning for their workers: Artificial intelligence is a threat to your job," reports the Washington Post — including IBM, Salesforce, and JPMorgan Chase.

But are they really just trying to impress their shareholders? Economists say there aren't yet strong signs that AI is driving widespread layoffs across industries.... CEOs are under pressure to show they are embracing new technology and getting results — incentivizing attention-grabbing predictions that can create additional uncertainty for workers. "It's a message to shareholders and board members as much as it is to employees," Molly Kinder, a Brookings Institution fellow who studies the impact of AI, said of the CEO announcements, noting that when one company makes a bold AI statement, others typically follow. "You're projecting that you're out in the future, that you're embracing and adopting this so much that the footprint [of your company] will look different."

Some CEOs fear they could be ousted from their job within two years if they don't deliver measurable AI-driven business gains, a Harris Poll survey conducted for software company Dataiku showed. Tech leaders have sounded some of the loudest warnings — in line with their interest in promoting AI's power...

IBM, which recently announced job cuts, said it replaced a couple hundred human resource workers with AI "agents" for repetitive tasks such as onboarding and scheduling interviews. In January, Meta CEO Mark Zuckerberg suggested on Joe Rogan's podcast that the company is building AI that might be able to do what some human workers do by the end of the year.... Marianne Lake, JPMorgan's CEO of consumer and community banking, told an investor meeting last month that AI could help the bank cut headcount in operations and account services by 10 percent. The CEO of BT Group Allison Kirkby suggested that advances in AI would mean deeper cuts at the British telecom company...

Despite corporate leaders' warnings, economists don't yet see broad signs that AI is driving humans out of work. "We have little evidence of layoffs so far," said Columbia Business School professor Laura Veldkamp, whose research explores how companies' use of AI affects the economy. "What I'd look for are new entrants with an AI-intensive business model, entering and putting the existing firms out of business." Some researchers suggest there is evidence AI is playing a role in the drop in openings for some specific jobs, like computer programming, where AI tools that generate code have become standard... It is still unclear what benefits companies are reaping from employees' use of AI, said Arvind Karunakaran, a faculty member of Stanford University's Center for Work, Technology, and Organization. "Usage does not necessarily translate into value," he said. "Is it just increasing productivity in terms of people doing the same task quicker or are people now doing more high value tasks as a result?"

Lynda Gratton, a professor at London Business School, said predictions of huge productivity gains from AI remain unproven. "Right now, the technology companies are predicting there will be a 30% productivity gain. We haven't yet experienced that, and it's not clear if that gain would come from cost reduction ... or because humans are more productive."

On an earnings call, Salesforce's chief operating and financial officer said AI agents helped them reduce hiring needs — and saved $50 million, according to the article. (And Ethan Mollick, co-director of Wharton School of Business' generative AI Labs, adds that if advanced tools like AI agents can prove their reliability and automate work — that could become a larger disruptor to jobs.) "A wave of disruption is going to happen," he's quoted as saying.

But while the debate continues about whether AI will eliminate or create jobs, Mollick still hedges that "the truth is probably somewhere in between."

AI

Anthropic Deploys Multiple Claude Agents for 'Research' Tool - Says Coding is Less Parallelizable (anthropic.com) 4

In April Anthropic introduced a new AI trick: multiple Claude agents combine for a "Research" feature that can "search across both your internal work context and the web" (as well as Google Workspace "and any integrations...")

But a recent Anthropic blog post notes this feature "involves an agent that plans a research process based on user queries, and then uses tools to create parallel agents that search for information simultaneously," which brings challenges "in agent coordination, evaluation, and reliability.... The model must operate autonomously for many turns, making decisions about which directions to pursue based on intermediate findings." Multi-agent systems work mainly because they help spend enough tokens to solve the problem.... This finding validates our architecture that distributes work across agents with separate context windows to add more capacity for parallel reasoning. The latest Claude models act as large efficiency multipliers on token use, as upgrading to Claude Sonnet 4 is a larger performance gain than doubling the token budget on Claude Sonnet 3.7. Multi-agent architectures effectively scale token usage for tasks that exceed the limits of single agents.

There is a downside: in practice, these architectures burn through tokens fast. In our data, agents typically use about 4× more tokens than chat interactions, and multi-agent systems use about 15× more tokens than chats. For economic viability, multi-agent systems require tasks where the value of the task is high enough to pay for the increased performance. Further, some domains that require all agents to share the same context or involve many dependencies between agents are not a good fit for multi-agent systems today.

For instance, most coding tasks involve fewer truly parallelizable tasks than research, and LLM agents are not yet great at coordinating and delegating to other agents in real time. We've found that multi-agent systems excel at valuable tasks that involve heavy parallelization, information that exceeds single context windows, and interfacing with numerous complex tools.
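The economics sketch out simply from the multipliers quoted above. The token count and per-token price below are made-up placeholders, not Anthropic figures; the point is only that a multi-agent run must deliver roughly 15× the value of a chat to break even:

```python
# Back-of-the-envelope on the quoted multipliers. The token count and
# price are hypothetical placeholders, not Anthropic figures.
chat_tokens = 10_000                  # tokens for a plain chat interaction
price_per_token = 5e-6                # hypothetical $/token

def run_cost(multiplier):
    return chat_tokens * multiplier * price_per_token

chat_cost = run_cost(1)               # plain chat
agent_cost = run_cost(4)              # single agent: ~4x chat
multi_agent_cost = run_cost(15)       # multi-agent: ~15x chat
print(round(multi_agent_cost / chat_cost, 2))  # prints 15.0
```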

Thanks to Slashdot reader ZipNada for sharing the news.

Science

Axolotl Discovery Brings Us Closer Than Ever To Regrowing Human Limbs (sciencealert.com) 41

alternative_right shares a report from ScienceAlert: A team of biologists from Northeastern University and the University of Kentucky has found one of the key molecules involved in axolotl regeneration. It's a crucial component in ensuring the body grows back the right parts in the right spot: for instance, growing a hand from the wrist. "The cells can interpret this cue to say, 'I'm at the elbow, and then I'm going to grow back the hand' or 'I'm at the shoulder... so I'm going to then enable those cells to grow back the entire limb'," biologist James Monaghan explains.

That molecule, retinoic acid, is arranged through the axolotl body in a gradient, signaling to regenerative cells how far down the limb has been severed. Closer to the shoulder, axolotls have higher levels of retinoic acid, and lower levels of the enzyme that breaks it down. This ratio changes the further the limb extends from the body. The team found this balance between retinoic acid and the enzyme that breaks it down plays a crucial role in 'programming' the cluster of regenerative cells that form at an injury site. When they added surplus retinoic acid to the hand of an axolotl in the process of regenerating, it grew an entire arm instead.

In theory, the human body has the right molecules and cells to do this too, but our cells respond to the signals very differently, instead forming collagen-based scars at injury sites. Next, Monaghan is keen to find out what's going on inside cells -- the axolotl's, and our own -- when those retinoic acid signals are received.
The research is published in Nature Communications.

AI

How Do Olympiad Medalists Judge LLMs in Competitive Programming? 23

A new benchmark assembled by a team of International Olympiad medalists suggests the hype about large language models beating elite human coders is premature. LiveCodeBench Pro, unveiled in a 584-problem study [PDF] drawn from Codeforces, ICPC and IOI contests, shows the best frontier model clears just 53% of medium-difficulty tasks on its first attempt and none of the hard ones, while grandmaster-level humans routinely solve at least some of those highest-tier problems.

The researchers measured models and humans on the same Elo scale used by Codeforces and found that OpenAI's o4-mini-high, when stripped of terminal tools and limited to one try per task, lands at an Elo rating of 2,116 -- hundreds of points below the grandmaster cutoff and roughly in the top 1.5 percent of human contestants. A granular tag-by-tag autopsy identified implementation-friendly, knowledge-heavy problems -- segment trees, graph templates, classic dynamic programming -- as the models' comfort zone; observation-driven puzzles such as game-theory endgames and trick-greedy constructs remain stubborn roadblocks.
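To put that 2,116 figure in perspective, Elo ratings map directly to expected head-to-head scores via the standard logistic formula. A minimal sketch (the general Elo formula, not the paper's exact rating procedure) shows how steeply the odds fall off against grandmaster-level opposition:

```python
def elo_expected(r_a: float, r_b: float) -> float:
    """Standard Elo expected score of player A against player B."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

# Equal ratings give even odds.
print(elo_expected(2116, 2116))  # 0.5

# Against a 2,700-rated grandmaster, a 2,116-rated entrant is a
# heavy underdog: the expected score is only a few percent.
print(elo_expected(2116, 2700))
```

Each 400-point gap multiplies the odds against the weaker player by ten, which is why a model hundreds of points below the grandmaster cutoff rarely cracks the hardest problems those players routinely solve.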

Because the dataset is harvested in real time as contests conclude, the authors argue it minimizes training-data leakage and offers a moving target for future systems. The broader takeaway is that impressive leaderboard jumps often reflect tool use, multiple retries or easier benchmarks rather than genuine algorithmic reasoning, leaving a conspicuous gap between today's models and top human problem-solvers.
Programming

Apple Migrates Its Password Monitoring Service to Swift from Java, Gains 40% Performance Uplift (infoq.com) 109

Meta and AWS have used Rust, and Netflix uses Go, reports the programming news site InfoQ. But using another language, Apple recently "migrated its global Password Monitoring service from Java to Swift, achieving a 40% increase in throughput, and significantly reducing memory usage."

This freed up nearly 50% of their previously allocated Kubernetes capacity, according to the article, and even "improved startup time, and simplified concurrency." In a recent post, Apple engineers detailed how the rewrite helped the service scale to billions of requests per day while improving responsiveness and maintainability... "Swift allowed us to write smaller, less verbose, and more expressive codebases (close to 85% reduction in lines of code) that are highly readable while prioritizing safety and efficiency."

Apple's Password Monitoring service, part of the broader Password app's ecosystem, is responsible for securely checking whether a user's saved credentials have appeared in known data breaches, without revealing any private information to Apple. It handles billions of requests daily, performing cryptographic comparisons using privacy-preserving protocols. This workload demands high computational throughput, tight latency bounds, and elastic scaling across regions... Apple's previous Java implementation struggled to meet the service's growing performance and scalability needs. Garbage collection caused unpredictable pause times under load, degrading latency consistency. Startup overhead from JVM initialization, class loading, and just-in-time compilation slowed the system's ability to scale in real time. Additionally, the service's memory footprint, often reaching tens of gigabytes per instance, reduced infrastructure efficiency and raised operational costs.
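The article doesn't spell out Apple's protocol (Apple has described it as a form of private set intersection), but the general shape of "check a credential against a breach corpus without revealing it" can be illustrated with the simpler k-anonymity range-query pattern popularized by Have I Been Pwned: the client hashes locally, sends only a short hash prefix, and matches full suffixes on its own side. A minimal sketch, with a hypothetical in-memory breach set standing in for the server:

```python
import hashlib

# Hypothetical breach corpus standing in for the server-side dataset.
BREACHED = {"password123", "letmein", "hunter2"}
BREACH_HASHES = {hashlib.sha1(p.encode()).hexdigest().upper() for p in BREACHED}

def range_query(prefix: str) -> set[str]:
    """Server side: return suffixes of all breached hashes in one prefix
    bucket. The server only ever sees the 5-hex-character prefix."""
    return {h[5:] for h in BREACH_HASHES if h.startswith(prefix)}

def is_breached(password: str) -> bool:
    """Client side: hash locally, send only the prefix, match suffixes
    locally, so the full credential (or full hash) never leaves the device."""
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    return suffix in range_query(prefix)
```

This is a sketch of the weaker k-anonymity scheme, not Apple's stronger cryptographic protocol; it just makes concrete why the workload is dominated by hashing and set comparison rather than business logic.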

Originally developed as a client-side language for Apple platforms, Swift has since expanded into server-side use cases.... Swift's deterministic memory management, based on reference counting rather than garbage collection (GC), eliminated latency spikes caused by GC pauses. This consistency proved critical for a low-latency system at scale. After tuning, Apple reported sub-millisecond 99.9th percentile latencies and a dramatic drop in memory usage: Swift instances consumed hundreds of megabytes, compared to tens of gigabytes with Java.
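CPython happens to use the same deterministic reference-counting discipline the article credits for Swift's predictable latency, so the contrast with tracing GC can be sketched in Python itself (an illustration of the concept, not Apple's code): a finalizer runs the instant the last reference disappears, while a reference cycle lingers until the cycle collector runs, which is the GC-pause-like case.

```python
import gc

log = []

class Resource:
    def __init__(self, name):
        self.name = name
    def __del__(self):
        log.append(self.name)

# Deterministic: the refcount hits zero and the finalizer runs immediately.
r = Resource("acyclic")
del r  # "acyclic" is logged at this exact moment

# Non-deterministic: a self-reference keeps the count above zero, so
# reclamation waits for the cycle collector to run.
a = Resource("cyclic")
a.self_ref = a
del a
print("cyclic" in log)  # False -- the cycle is still uncollected
gc.collect()
print("cyclic" in log)  # True -- reclaimed only when the collector ran
```

Swift's ARC sidesteps the second case's unpredictability entirely (at the cost of requiring weak references to break cycles), which is the "consistency" the Apple engineers cite.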

"While this isn't a sign that Java and similar languages are in decline," concludes InfoQ's article, "there is growing evidence that at the uppermost end of performance requirements, some are finding that general-purpose runtimes no longer suffice."
Transportation

17-Year-Old Student Builds 3D-printed Drone In Garage, Interests DoD and MIT (yahoo.com) 63

"Cooper Taylor is only 17 years old, but he's already trying to revolutionize the drone industry," writes Business Insider: His design makes the drone more efficient, customizable, and less expensive to construct, he says. He's built six prototypes, 3D printing every piece of hardware, programming the software, and even soldering the control circuit board. He says building his drone cost one-fifth of the price of buying a comparable machine, which sells for several thousand dollars. Taylor told Business Insider he hopes that "if you're a first responder or a researcher or an everyday problem solver, you can have access to this type of drone."

His innovation won him an $8,000 scholarship in April at the Junior Science and Humanities Symposium, funded by the Defense Department. Then, on May 16, he received an even bigger scholarship of $15,000 from the US Navy, which he won after presenting his research at the Regeneron International Science and Engineering Fair...

It all started when Taylor's little sister got a drone, and he was disappointed to see that it could fly for only about 30 minutes before running out of power. He did some research and found that a vertical take-off and landing, or VTOL, drone would last longer. This type of drone combines the multi-rotor helicopter style with the fixed wings of an airplane, making it extremely versatile. It lifts off as a helicopter, then transitions into plane mode. That way, it can fly farther than rotors alone could take it, which was the drawback to Taylor's sister's drone. Unlike a plane-style drone, though, it doesn't need a runway, and it can hover with its helicopter rotors.

Taylor designed a motor "that could start out helicopter-style for liftoff, then tilt back to become an airplane-style motor," according to the article.

And now this summer he'll be "working on a different drone project through a program with the Reliable Autonomous Systems Lab at the Massachusetts Institute of Technology."

Thanks to Slashdot reader Agnapot for sharing the news.
Python

Python Creator Guido van Rossum Asks: Is 'Worse is Better' Still True for Programming Languages? (blogspot.com) 67

In 1989 a computer scientist argued that more functionality in software actually lowers usability and practicality — leading to the counterintuitive proposition that "worse is better". But is that still true?

Python's original creator Guido van Rossum addressed the question last month in a lightning talk at the annual Python Language Summit 2025. Guido started by recounting earlier periods of Python development from 35 years ago, where he used UNIX "almost exclusively" and thus "Python was greatly influenced by UNIX's 'worse is better' philosophy"... "The fact that [Python] wasn't perfect encouraged many people to start contributing. All of the code was straightforward, there were no thoughts of optimization... These early contributors also now had a stake in the language; [Python] was also their baby"...

Guido contrasted early development to how Python is developed now: "features that take years to produce from teams of software developers paid by big tech companies. The static type system requires an academic-level understanding of esoteric type system features." And this isn't just Python the language, "third-party projects like numpy are maintained by folks who are paid full-time to do so.... Now we have a huge community, but very few people, relatively speaking, are contributing meaningfully."

Guido asked whether the expectation for Python contributors going forward would be that "you had to write a perfect PEP or create a perfect prototype that can be turned into production-ready code?" Guido pined for the "old days" where feature development could skip performance or feature-completion to get something into the hands of the community to "start kicking the tires". "Do we have to abandon 'worse is better' as a philosophy and try to make everything as perfect as possible?" Guido thought doing so "would be a shame", but that he "wasn't sure how to change it", acknowledging that core developers wouldn't want to create features and then break users with future releases.

Guido referenced David Hewitt's PyO3 talk about Rust and Python, and that development "was using worse is better," where there is a core feature set that works, and plenty of work to be done and open questions. "That sounds a lot more fun than working on core CPython", Guido paused, "...not that I'd ever personally learn Rust. Maybe I should give it a try after," which garnered laughter from core developers.

"Maybe we should do more of that: allowing contributors in the community to have a stake and care".

AI

Salesforce Blocks AI Rivals From Using Slack Data (theinformation.com) 9

An anonymous reader shares a report: Slack, an instant-messaging service popular with businesses, recently blocked other software firms from searching or storing Slack messages even if their customers permit them to do so, according to a public disclosure from Slack's owner, Salesforce.

The move, which hasn't previously been reported, could hamper fast-growing artificial intelligence startups that have used such access to power their services, such as Glean. Since the Salesforce change, Glean and other applications can no longer index, copy or store the data they access via the Slack application programming interface on a long-term basis, according to the disclosure. Salesforce will continue allowing such firms to temporarily use and store their customers' Slack data, but they must delete the data, the company said.

Python

New Code.org Curriculum Aims To Make Schoolkids Python-Literate and AI-Ready 50

Longtime Slashdot reader theodp writes: The old Code.org curriculum page for middle and high school students has been changed to include a new Python Lab in the tech-backed nonprofit's K-12 offerings. Elsewhere on the site, a Computer Science and AI Foundations curriculum is described that includes units on 'Foundations of AI Programming [in Python]' and 'Insights from Data and AI [aka Data Science].' A more-detailed AI Foundations Syllabus 25-26 document promises a second semester of material is coming soon: "This semester offers an innovative approach to teaching programming by integrating learning with and about artificial intelligence (AI). Using Python as the primary language, students build foundational programming skills while leveraging AI tools to enhance computational thinking and problem-solving. The curriculum also introduces students to the basics of creating AI-powered programs, exploring machine learning, and applying data science principles."

Newly-posted videos on Code.org's YouTube channel appear to be intended to support the new Python-based CS & AI course. "Python is extremely versatile," explains a Walmart data scientist to open the video for Data Science: Using Python. "So, first of all, Python is one of the very few languages that can handle numbers very, very well." A researcher at the Univ. of Washington's Institute for Health Metrics and Evaluation (IHME) adds, "Python is the gold standard and what people expect data scientists to know [...] Key to us being able to handle really big data sets is our use of Python and cluster computing." Adding to the Python love, an IHME data analyst explains, "Python is a great choice for large databases because there's a lot of support for Python libraries."

Code.org is currently recruiting teachers to attend its CS and AI Foundations Professional Learning program this summer, which is being taught by Code.org's national network of university and nonprofit regional partners (teachers who sign up have a chance to win $250 in DonorsChoose credits for their classrooms). A flyer for a five-day Michigan Professional Development program to prepare teachers for a pilot of the Code.org CS & AI course touts the new curriculum as "an alternative to the AP [Computer Science] pathway" (teachers are offered scholarships covering registration, lodging, meals, and workshop materials).

Interestingly, Code.org's embrace of Python and Data Science comes as the nonprofit changes its mission to 'make CS and AI a core part of K-12 education' and launches a new national campaign with tech leaders to make CS and AI a graduation requirement. Prior to AI changing the education conversation, Code.org in 2021 boasted that it had lined up a consortium of tech giants, politicians, and educators to push its new $15 million Amazon-bankrolled Java AP CS A curriculum into K-12 classrooms. Just three years later, however, Amazon CEO Andy Jassy was boasting to investors that Amazon had turned to AI to automatically do Java coding that he claimed would have otherwise taken human coders 4,500 developer-years to complete.
AI

Apple Lets Developers Tap Into Its Offline AI Models (techcrunch.com) 14

An anonymous reader quotes a report from TechCrunch: Apple is launching what it calls the Foundation Models framework, which the company says will let developers tap into its AI models in an offline, on-device fashion. Onstage at WWDC 2025 on Monday, Apple VP of software engineering Craig Federighi said that the Foundation Models framework will let apps use on-device AI models created by Apple to drive experiences. These models ship as a part of Apple Intelligence, Apple's family of models that power a number of iOS features and capabilities.

"For example, if you're getting ready for an exam, an app like Kahoot can create a personalized quiz from your notes to make studying more engaging," Federighi said. "And because it happens using on-device models, this happens without cloud API costs [] We couldn't be more excited about how developers can build on Apple intelligence to bring you new experiences that are smart, available when you're offline, and that protect your privacy."

In a blog post, Apple says that the Foundation Models framework has native support for Swift, Apple's programming language for building apps for its various platforms. The company claims developers can access Apple Intelligence models with as few as three lines of code. Guided generation, tool calling, and more are all built into the Foundation Models framework, according to Apple. Automattic is already using the framework in its Day One journaling app, Apple says, while mapping app AllTrails is tapping the framework to recommend different hiking routes.

The Courts

OpenAI Slams Court Order To Save All ChatGPT Logs, Including Deleted Chats (arstechnica.com) 103

An anonymous reader quotes a report from Ars Technica: OpenAI is now fighting a court order (PDF) to preserve all ChatGPT user logs -- including deleted chats and sensitive chats logged through its API business offering -- after news organizations suing over copyright claims accused the AI company of destroying evidence. "Before OpenAI had an opportunity to respond to those unfounded accusations, the court ordered OpenAI to 'preserve and segregate all output log data that would otherwise be deleted on a going forward basis until further order of the Court (in essence, the output log data that OpenAI has been destroying),'" OpenAI explained in a court filing (PDF) demanding oral arguments in a bid to block the controversial order.

In the filing, OpenAI alleged that the court rushed the order based only on a hunch raised by The New York Times and other news plaintiffs. And now, without "any just cause," OpenAI argued, the order "continues to prevent OpenAI from respecting its users' privacy decisions." That risk extended to users of ChatGPT Free, Plus, and Pro, as well as users of OpenAI's application programming interface (API), OpenAI said. The court order came after news organizations expressed concern that people using ChatGPT to skirt paywalls "might be more likely to 'delete all [their] searches' to cover their tracks," OpenAI explained. Evidence to support that claim, news plaintiffs argued, was missing from the record because so far, OpenAI had only shared samples of chat logs that users had agreed that the company could retain. Sharing the news plaintiffs' concerns, the judge, Ona Wang, ultimately agreed that OpenAI likely would never stop deleting that alleged evidence absent a court order, granting news plaintiffs' request to preserve all chats.

OpenAI argued the May 13 order was premature and should be vacated, until, "at a minimum," news organizations can establish a substantial need for OpenAI to preserve all chat logs. They warned that the privacy of hundreds of millions of ChatGPT users globally is at risk every day that the "sweeping, unprecedented" order continues to be enforced. "As a result, OpenAI is forced to jettison its commitment to allow users to control when and how their ChatGPT conversation data is used, and whether it is retained," OpenAI argued. Meanwhile, there is no evidence beyond speculation yet supporting claims that "OpenAI had intentionally deleted data," OpenAI alleged. And supposedly there is not "a single piece of evidence supporting" claims that copyright-infringing ChatGPT users are more likely to delete their chats. "OpenAI did not 'destroy' any data, and certainly did not delete any data in response to litigation events," OpenAI argued. "The Order appears to have incorrectly assumed the contrary."
One tech worker on LinkedIn suggested the order created "a serious breach of contract for every company that uses OpenAI," while privacy advocates on X warned, "every single AI service 'powered by' OpenAI should be concerned."

Also on LinkedIn, a consultant rushed to warn clients to be "extra careful" sharing sensitive data "with ChatGPT or through OpenAI's API for now," warning, "your outputs could eventually be read by others, even if you opted out of training data sharing or used 'temporary chat'!"
