Books

Bill Gates Began the Altair BASIC Code in His Head While Hiking as a Teenager (msn.com) 134

Friday Bill Gates shared an excerpt from his upcoming memoir Source Code: My Beginnings. Published in the Wall Street Journal, the excerpt includes pictures of young Bill Gates when he was 12 (dressed for a hike) and 14 (studying a teletype machine).

Gates remembers forming "a sort of splinter group" from the Boy Scouts when he was 13 with a group of boys who "wanted more freedom and more risk" and took long hikes around Seattle, travelling hundreds of miles together on hikes as long as "seven days or more." (His favorite breakfast dish was Oscar Mayer Smokie Links.) But he also remembers another group of friends — Kent, Rick, and... Paul — who connected to a mainframe computer from a phone line at their private school. Both hiking and programming "felt like an adventure... exploring new worlds, traveling to places even most adults couldn't reach."

Like hiking, programming fit me because it allowed me to define my own measure of success, and it seemed limitless, not determined by how fast I could run or how far I could throw. The logic, focus and stamina needed to write long, complicated programs came naturally to me. Unlike in hiking, among that group of friends, I was the leader.
When Gates' school got a (DEC) PDP-8 — which cost $8,500 — "For a challenge, I decided I would try to write a version of the Basic programming language for the new computer..." And Gates remembers a long hike where "I silently honed my code" for its formula evaluator: I slimmed it down more, like whittling little pieces off a stick to sharpen the point. What I made seemed efficient and pleasingly simple. It was by far the best code I had ever written...

By the time school started again in the fall, whoever had lent us the PDP-8 had reclaimed it. I never finished my Basic project. But the code I wrote on that hike, my formula evaluator — and its beauty — stayed with me. Three and a half years later, I was a sophomore in college not sure of my path in life when Paul Allen, one of my Lakeside friends, burst into my dorm room with news of a groundbreaking computer. I knew we could write a Basic language for it; we had a head start.

Gates typed his code from that hike, "and with that planted the seed of what would become one of the world's largest companies and the beginning of a new industry."
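The "formula evaluator" Gates describes is the heart of any BASIC interpreter: reducing an arithmetic expression like 2+3*4 to a value with the right operator precedence. As an illustration only (this is modern Python, not Gates's actual code), the core idea fits in a short recursive-descent parser:

```python
import re

def tokenize(expr):
    # Numbers, parentheses, and the four arithmetic operators.
    return re.findall(r"\d+\.?\d*|[()+\-*/]", expr)

def evaluate(expr):
    """Evaluate an arithmetic expression with correct precedence."""
    tokens = tokenize(expr)
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def parse_expr():          # handles + and - (lowest precedence)
        nonlocal pos
        value = parse_term()
        while peek() in ("+", "-"):
            op = tokens[pos]; pos += 1
            rhs = parse_term()
            value = value + rhs if op == "+" else value - rhs
        return value

    def parse_term():          # handles * and / (higher precedence)
        nonlocal pos
        value = parse_factor()
        while peek() in ("*", "/"):
            op = tokens[pos]; pos += 1
            rhs = parse_factor()
            value = value * rhs if op == "*" else value / rhs
        return value

    def parse_factor():        # numbers and parenthesized sub-expressions
        nonlocal pos
        tok = tokens[pos]; pos += 1
        if tok == "(":
            value = parse_expr()
            pos += 1           # skip the closing ")"
            return value
        return float(tok)

    return parse_expr()
```

The whittling Gates describes is exactly this kind of structure: each precedence level is one small function, and nothing else is needed.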

Gates cites Richard Feynman's description of the excitement and pleasure of "finding the thing out" — the reward for "all of the disciplined thinking and hard work." And he remembers his teenaged years as "intensely driven by the love of what I was learning, accruing expertise just when it was needed: at the dawn of the personal computer."
AI

Cutting-Edge Chinese 'Reasoning' Model Rivals OpenAI o1 55

An anonymous reader quotes a report from Ars Technica: On Monday, Chinese AI lab DeepSeek released its new R1 model family under an open MIT license, with its largest version containing 671 billion parameters. The company claims the model performs at levels comparable to OpenAI's o1 simulated reasoning (SR) model on several math and coding benchmarks. Alongside the release of the main DeepSeek-R1-Zero and DeepSeek-R1 models, DeepSeek published six smaller "DeepSeek-R1-Distill" versions ranging from 1.5 billion to 70 billion parameters. These distilled models are based on existing open source architectures like Qwen and Llama, trained using data generated from the full R1 model. The smallest version can run on a laptop, while the full model requires far more substantial computing resources.

The releases immediately caught the attention of the AI community because most existing open-weights models -- which can often be run and fine-tuned on local hardware -- have lagged behind proprietary models like OpenAI's o1 in so-called reasoning benchmarks. Having these capabilities available in an MIT-licensed model that anyone can study, modify, or use commercially potentially marks a shift in what's possible with publicly available AI models. "They are SO much fun to run, watching them think is hilarious," independent AI researcher Simon Willison told Ars in a text message. Willison tested one of the smaller models and described his experience in a post on his blog: "Each response starts with a ... pseudo-XML tag containing the chain of thought used to help generate the response," noting that even for simple prompts, the model produces extensive internal reasoning before output.
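The "pseudo-XML tag" Willison mentions brackets the model's reasoning before its final answer; the released R1 models emit a `<think>...</think>` block (the tag name here is taken from DeepSeek's published chat format — verify against the model you run). Separating the chain of thought from the answer is a small parsing job:

```python
import re

def split_reasoning(response: str):
    """Split an R1-style response into (reasoning, answer).

    Assumes the reasoning is wrapped in a <think>...</think> block;
    if no block is present, the whole response is the answer.
    """
    match = re.search(r"<think>(.*?)</think>", response, re.DOTALL)
    if not match:
        return "", response.strip()
    reasoning = match.group(1).strip()
    answer = response[match.end():].strip()
    return reasoning, answer
```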
Although the benchmarks have yet to be independently verified, DeepSeek reports that R1 outperformed OpenAI's o1 on AIME (a mathematical reasoning test), MATH-500 (a collection of word problems), and SWE-bench Verified (a programming assessment tool).

TechCrunch notes that three Chinese labs -- DeepSeek, Alibaba, and Moonshot AI's Kimi -- have released models that match o1's capabilities.

Linux

Linux 6.13 Released (phoronix.com) 25

"Nothing horrible or unexpected happened last week," Linus Torvalds posted tonight on the Linux kernel mailing list, "so I've tagged and pushed out the final 6.13 release."

Phoronix says the release has "plenty of fine features": Linux 6.13 comes with the introduction of the AMD 3D V-Cache Optimizer driver for benefiting multi-CCD Ryzen X3D processors. The new AMD EPYC 9005 "Turin" server processors will now default to AMD P-State rather than ACPI CPUFreq for better power efficiency....

Linux 6.13 also brings more Rust programming language infrastructure and more.

Phoronix notes that Linux 6.13 also brings "the start of Intel Xe3 graphics bring-up, support for many older (pre-M1) Apple devices like numerous iPads and iPhones, NVMe 2.1 specification support, and AutoFDO and Propeller optimization support when compiling the Linux kernel with the LLVM Clang compiler."

And some lucky Linux kernel developers will also be getting a guitar pedal soldered by Linus Torvalds himself, thanks to a generous offer he announced a week ago: For _me_ a traditional holiday activity tends to be a LEGO build or two, since that's often part of the presents... But in addition to the LEGO builds, this year I also ended up doing a number of guitar pedal kit builds ("LEGO for grown-ups with a soldering iron"). Not because I play guitar, but because I enjoy the tinkering, and the guitar pedals actually do something and are the right kind of "not very complex, but not some 5-minute 555 LED blinking thing"...

[S]ince I don't actually have any _use_ for the resulting pedals (I've already foisted off a few on unsuspecting victims^Hfriends), I decided that I'm going to see if some hapless kernel developer would want one.... as an admittedly pretty weak excuse to keep buying and building kits...

"It may be worth noting that while I've had good success so far, I'm a software person with a soldering iron. You have been warned... [Y]ou should set your expectations along the lines of 'quality kit built by a SW person who doesn't know one end of a guitar from the other.'"
Programming

Node.js 'Type Stripping' for TypeScript Now Enabled by Default (hashnode.dev) 63

The JavaScript runtime Node.js can execute TypeScript (Microsoft's JavaScript-derived language with static typing).

But now it can do it even better, explains Marco Ippolito of the Node.js steering committee: In August 2024 Node.js introduced a new experimental feature, Type Stripping, aimed at addressing a longstanding challenge in the Node.js ecosystem: running TypeScript with no configuration. Enabled by default in Node.js v23.6.0, this feature is on its way to becoming stable.

TypeScript has reached incredible levels of popularity and has been the most requested feature in all the latest Node.js surveys. Unlike other alternatives such as CoffeeScript or Flow, which never gained similar traction, TypeScript has become a cornerstone of modern development. While it has been supported in Node.js for some time through loaders, they relied heavily on configuration and user libraries. This reliance led to inconsistencies between different loaders, making them difficult to use interchangeably. The developer experience suffered due to these inconsistencies and the extra setup required... The goal is to make development faster and simpler, eliminating the overhead of configuration while maintaining the flexibility that developers expect...

TypeScript is not just a language, it also relies on a toolchain to implement its features. The primary tool for this purpose is tsc, the TypeScript compiler CLI... Type checking is tightly coupled to the implementation of tsc, as there is no formal specification for how the language's type system should behave. This lack of a specification means that the behavior of tsc is effectively the definition of TypeScript's type system. tsc does not follow semantic versioning, so even minor updates can introduce changes to type checking that may break existing code. Transpilation, on the other hand, is a more stable process. It involves converting TypeScript code into JavaScript by removing types, transforming certain syntax constructs, and optionally "downleveling" the JavaScript to allow modern syntax to execute on older JavaScript engines. Unlike type checking, transpilation is less likely to change in breaking ways across versions of tsc. The likelihood of breaking changes is further reduced when we only consider the minimum transpilation needed to make the TypeScript code executable — and exclude downleveling of new JavaScript features not yet available in the JavaScript engine but available in TypeScript...

Node.js, before enabling it by default, introduced --experimental-strip-types. This mode allows running TypeScript files by simply stripping inline types without performing type checking or any other code transformation. This minimal technique is known as Type Stripping. By excluding type checking and traditional transpilation, the more unstable aspects of TypeScript, Node.js reduces the risk of instability and mostly sidesteps the need to track minor TypeScript updates. Moreover, this solution does not require any configuration in order to execute code... Node.js eliminates the need for source maps by replacing the removed syntax with blank spaces, ensuring that the original locations of the code and structure remain intact. It is transparent — the code that runs is the code the author wrote, minus the types...
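The blank-space trick is easy to picture: every stripped annotation is replaced by whitespace of the same length, so every surviving token keeps its original line and column and no source map is needed. Node's real implementation uses a proper TypeScript parser; this toy regex handles only a bare `: Type` annotation, purely to illustrate the position-preserving idea:

```python
import re

def strip_types(line: str) -> str:
    """Replace a simple inline ': Type' annotation with spaces of the
    same length, preserving every other character's column position.
    (Illustrative only -- real type stripping parses the full syntax.)"""
    return re.sub(r":\s*\w+", lambda m: " " * len(m.group(0)), line)
```

Running it on `const n: number = 1;` leaves every remaining token exactly where the author wrote it, which is the whole point of the technique.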

"As this experimental feature evolves, the Node.js team will continue collaborating with the TypeScript team and the community to refine its behavior and reduce friction. You can check the roadmap for practical next steps..."
Google

Google Upgrades Open Source Vulnerability Scanning Tool with SCA Scanning Library (googleblog.com) 2

In 2022 Google released a tool to easily scan for vulnerabilities in dependencies named OSV-Scanner. "Together with the open source community, we've continued to build this tool, adding remediation features," according to Google's security blog, "as well as expanding ecosystem support to 11 programming languages and 20 package manager formats... Users looking for an out-of-the-box vulnerability scanning CLI tool should check out OSV-Scanner, which already provides comprehensive language package scanning capabilities..."

Thursday they also announced an extensible library for "software composition analysis" scanning (as well as file-system scanning) named OSV-SCALIBR (Open Source Vulnerability — Software Composition Analysis LIBRary). The new library "combines Google's internal vulnerability management expertise into one scanning library with significant new capabilities such as:
  • Software composition analysis for installed packages, standalone binaries, as well as source code
  • OS package scanning on Linux (COS, Debian, Ubuntu, RHEL, and much more), Windows, and Mac
  • Artifact and lockfile scanning in major language ecosystems (Go, Java, Javascript, Python, Ruby, and much more)
  • Vulnerability scanning tools such as weak credential detectors for Linux, Windows, and Mac
  • Software Bill of Materials (SBOM) generation in SPDX and CycloneDX, the two most popular document formats
  • Optimization for on-host scanning of resource constrained environments where performance and low resource consumption is critical

"OSV-SCALIBR is now the primary software composition analysis engine used within Google for live hosts, code repos, and containers. It's been used and tested extensively across many different products and internal tools to help generate SBOMs, find vulnerabilities, and help protect our users' data at Google scale. We offer OSV-SCALIBR primarily as an open source Go library today, and we're working on adding its new capabilities into OSV-Scanner as the primary CLI interface."


AI

World's First AI Chatbot, ELIZA, Resurrected After 60 Years (livescience.com) 37

"Scientists have just resurrected 'ELIZA,' the world's first chatbot, from long-lost computer code," reports LiveScience, "and it still works extremely well." (Click in the vintage black-and-green rectangle for a blinking-cursor prompt...) Using dusty printouts from MIT archives, these "software archaeologists" discovered defunct code that had been lost for 60 years and brought it back to life. ELIZA was developed in the 1960s by MIT professor Joseph Weizenbaum and named for Eliza Doolittle, the protagonist of the play "Pygmalion," who was taught how to speak like an aristocratic British woman.

As a language model that the user could interact with, ELIZA had a significant impact on today's artificial intelligence (AI), the researchers wrote in a paper posted to the preprint database arXiv Sunday (Jan. 12). The "DOCTOR" script written for ELIZA was programmed to respond to questions as a psychotherapist would. For example, ELIZA would say, "Please tell me your problem." If the user input "Men are all alike," the program would respond, "In what way."
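DOCTOR's trick was simple keyword matching plus canned, sometimes slot-filling, responses. A few-rule Python sketch in the same spirit — these rules are illustrative stand-ins, not Weizenbaum's originals:

```python
import re

# Keyword -> response rules in the spirit of the DOCTOR script.
# Rules are checked in order; "{0}" splices in captured text.
RULES = [
    (r"\ball alike\b", "In what way"),
    (r"\bmother\b", "Tell me more about your family"),
    (r"\bI am (.*)", "How long have you been {0}"),
]

def eliza_reply(text: str) -> str:
    """Return the first matching rule's response, or the default prompt."""
    for pattern, template in RULES:
        m = re.search(pattern, text, re.IGNORECASE)
        if m:
            return template.format(*m.groups())
    return "Please tell me your problem"
```

Even this skeleton reproduces the exchange quoted above, which is part of why ELIZA's "chatbotness" was so startling in 1966.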

Weizenbaum wrote ELIZA in a now-defunct programming language he invented, called Michigan Algorithm Decoder Symmetric List Processor (MAD-SLIP), but it was almost immediately copied into the language Lisp. With the advent of the early internet, the Lisp version of ELIZA went viral, and the original version became obsolete. Experts thought the original 420-line ELIZA code was lost until 2021, when study co-author Jeff Shrager, a cognitive scientist at Stanford University, and Myles Crowley, an MIT archivist, found it among Weizenbaum's papers. "I have a particular interest in how early AI pioneers thought," Shrager told Live Science in an email. "Having computer scientists' code is as close to having a record of their thoughts, and as ELIZA was — and remains, for better or for worse — a touchstone of early AI, I want to know what was in his mind...."

Even though it was intended to be a research platform for human-computer communication, "ELIZA was such a novelty at the time that its 'chatbotness' overwhelmed its research purposes," Shrager said.

I just remember that time 23 years ago when someone connected a Perl version of ELIZA to "an AOL Instant Messenger account that has a high rate of 'random' people trying to start conversations" to "put ELIZA in touch with the real world..."

Thanks to long-time Slashdot reader MattSparkes for sharing the news.
AI

Google Reports Halving Code Migration Time With AI Help 12

Google computer scientists have been using LLMs to streamline internal code migrations, achieving significant time savings of up to 89% in some cases. The findings appear in a pre-print paper titled "How is Google using AI for internal code migrations?" The Register reports: Their focus is on bespoke AI tools developed for specific product areas, such as Ads, Search, Workspace and YouTube, instead of generic AI tools that provide broadly applicable services like code completion, code review, and question answering. Google's code migrations involved: changing 32-bit IDs in the 500-plus-million-line codebase for Google Ads to 64-bit IDs; converting its old JUnit3 testing library to JUnit4; and replacing the Joda time library with Java's standard java.time package. The int32 to int64 migration, the Googlers explain, was not trivial as the IDs were often generically defined (int32_t in C++ or Integer in Java) and were not easily searchable. They existed in tens of thousands of code locations across thousands of files. Changes had to be tracked across multiple teams and changes to class interfaces had to be considered across multiple files. "The full effort, if done manually, was expected to require hundreds of software engineering years and complex cross-team coordination," the authors explain.

For their LLM-based workflow, Google's software engineers implemented the following process. An engineer from Ads would identify an ID in need of migration using a combination of code search, Kythe, and custom scripts. Then an LLM-based migration toolkit, triggered by someone knowledgeable in the art, was run to generate verified changes containing code that passed unit tests. Those changes would be manually checked by the same engineer and potentially corrected. Thereafter, the code changes would be sent to multiple reviewers who are responsible for the portion of the codebase affected by the changes. The result was that 80 percent of the code modifications in the change lists (CLs) were purely the product of AI; the remainder were either human-authored or human-edited AI suggestions.

"We discovered that in most cases, the human needed to revert at least some changes the model made that were either incorrect or not necessary," the authors observe. "Given the complexity and sensitive nature of the modified code, effort has to be spent in carefully rolling out each change to users." Based on this, Google undertook further work on LLM-driven verification to reduce the need for detailed review. Even with the need to double-check the LLM's work, the authors estimate that the time required to complete the migration was reduced by 50 percent. With LLM assistance, it took just three months to migrate 5,359 files and modify 149,000 lines of code to complete the JUnit3-JUnit4 transition. Approximately 87 percent of the code generated by AI ended up being committed with no changes. As for the Joda-Java time framework switch, the authors estimate a time saving of 89 percent compared to the projected manual change time, though no specifics were provided to support that assertion.
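To get a feel for what "migration" means in the JUnit3-to-JUnit4 case: JUnit3 discovered tests by the `test` name prefix, while JUnit4 uses an `@Test` annotation. A deliberately naive text transform (Google's toolkit is LLM- and AST-based, not a regex; this is only to convey the mechanical flavor):

```python
import re

def migrate_junit3_method(java_src: str) -> str:
    """Toy transform: prefix JUnit3-style 'public void testXxx()'
    declarations with JUnit4's @Test annotation, keeping indentation.
    Real migrations need AST-aware tooling to handle imports,
    superclasses, setUp/tearDown, and edge cases."""
    return re.sub(
        r"(^\s*)(public void test\w+\(\))",
        r"\1@Test\n\1\2",
        java_src,
        flags=re.MULTILINE,
    )
```

The hard 13 percent in Google's numbers is everything this sketch ignores: assertions with changed semantics, inheritance from `TestCase`, and code that only looked like a test.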
Google

Google Begins Requiring JavaScript For Google Search (techcrunch.com) 91

Google says it has begun requiring users to turn on JavaScript, the widely used programming language that makes web pages interactive, in order to use Google Search. From a report: In an email to TechCrunch, a company spokesperson claimed that the change is intended to "better protect" Google Search against malicious activity, such as bots and spam, and to improve the overall Google Search experience for users. The spokesperson noted that, without JavaScript, many Google Search features won't work properly, and that the quality of search results tends to be degraded.
Transportation

Toyota Unit Hino Motors Reaches $1.6 Billion US Diesel Emissions Settlement (msn.com) 8

An anonymous reader quotes a report from Reuters: Toyota Motor unit Hino Motors has agreed to a $1.6 billion settlement with U.S. agencies and will plead guilty over excess diesel engine emissions in more than 105,000 U.S. vehicles, the company and U.S. government said on Wednesday. The Japanese truck and engine manufacturer was charged with fraud in U.S. District Court in Detroit for unlawfully selling 105,000 heavy-duty diesel engines in the United States from 2010 through 2022 that did not meet emissions standards. The settlement, which still must be approved by a U.S. judge, includes a criminal penalty of $521.76 million, $442.5 million in civil penalties to U.S. authorities and $236.5 million to California.

A company-commissioned panel said in a 2022 report that Hino had falsified emissions data on some engines going back to at least 2003. Hino agreed to plead guilty to engaging in a multi-year criminal conspiracy and serve a five-year term of probation, during which it will be barred from importing any diesel engines it has manufactured into the U.S., and carry out a comprehensive compliance and ethics program, the Justice Department and Environmental Protection Agency said. [...] The settlement includes a mitigation program, valued at $155 million, to offset excess air emissions from the violations by replacing marine and locomotive engines, and a recall program, valued at $144.2 million, to fix engines in 2017-2019 heavy-duty trucks.

The EPA said Hino admitted that between 2010 and 2019, it submitted false applications for engine certification approvals and altered emission test data, conducted tests improperly and fabricated data without conducting any underlying tests. Hino President Satoshi Ogiso said the company had improved its internal culture, oversight and compliance practices. "This resolution is a significant milestone toward resolving legacy issues that we have worked hard to ensure are no longer a part of Hino's operations or culture," he said in a statement.
Toyota's Hino Motors isn't the only automaker to admit to selling vehicles with excess diesel emissions. Volkswagen had to pay billions in fines after it admitted in 2015 to cheating emissions tests by installing "defeat devices" and sophisticated software in nearly 11 million vehicles worldwide. Daimler (Mercedes-Benz), BMW, Opel/Vauxhall (General Motors), and Fiat Chrysler have been implicated in similar practices.
AI

AI Slashes Google's Code Migration Time By Half (theregister.com) 74

Google has cut code migration time in half by deploying AI tools to assist with large-scale software updates, according to a new research paper from the company's engineers. The tech giant used large language models to help convert 32-bit IDs to 64-bit across its 500-million-line codebase, upgrade testing libraries, and replace time-handling frameworks. While 80% of code changes were AI-generated, human engineers still needed to verify and sometimes correct the AI's output. In one project, the system helped migrate 5,359 files and modify 149,000 lines of code in three months.
Programming

Replit CEO on AI Breakthroughs: 'We Don't Care About Professional Coders Anymore' (semafor.com) 168

Replit, an AI coding startup, has made a dramatic pivot away from professional programmers in a fundamental shift in how software may be created in the future. "We don't care about professional coders anymore," CEO Amjad Masad told Semafor, as the company refocuses on helping non-developers build software using AI.

The strategic shift follows the September launch of Replit's "Agent" tool, which can create working applications from simple text commands. The tool, powered by Anthropic's Claude 3.5 Sonnet AI model, has driven a five-fold revenue increase in six months. The move marks a significant departure for Replit, which built its business providing online coding tools for software developers. The company is now betting that AI will make traditional programming skills less crucial, allowing non-technical users to create software through natural language instructions.
Oracle

Oracle Won't Withdraw 'JavaScript' Trademark, Says Deno. Legal Skirmish Continues (infoworld.com) 68

"Oracle has informed us they won't voluntarily withdraw their trademark on 'JavaScript'." That's the word coming from the company behind Deno, the alternative JavaScript/TypeScript/WebAssembly runtime, which is pursuing a formal cancellation with the U.S. Patent and Trademark Office.

So what happens next? Oracle "will file their Answer, and we'll start discovery to show how 'JavaScript' is widely recognized as a generic term and not controlled by Oracle." Deno's social media posts show a schedule of various court dates that extend through July of 2026, so "The dispute between Oracle and Deno Land could go on for quite a while," reports InfoWorld: Deno Land co-founder Ryan Dahl, creator of both the Deno and Node.js runtimes, said a formal answer from Oracle is expected before February 3, unless Oracle extends the deadline again. "After that, we will begin the process of discovery, which is where the real legal work begins. It will be interesting to see how Oracle argues against our claims — genericide, fraud on the USPTO, and non-use of the mark."

The legal process begins with a discovery conference by March 5, with discovery closing by September 1, followed by pretrial disclosure from October 16 to December 15. An optional request for an oral hearing is due by July 8, 2026.

Oracle took ownership of JavaScript's trademark in 2009 when it purchased Sun Microsystems, InfoWorld notes.

But "Oracle does not control (and has never controlled) any aspect of the specification or how the phrase 'JavaScript' can be used by others," argues an official petition filed by Deno Land Inc. with the United States Patent and Trademark Office: Today, millions of companies, universities, academics, and programmers, including Petitioner, use "JavaScript" daily without any involvement with Oracle. The phrase "JavaScript" does not belong to one corporation. It belongs to the public. JavaScript is the generic name for one of the bedrock languages of modern programming, and, therefore, the Registered Mark must be canceled.

An open letter to Oracle discussing the genericness of the phrase "JavaScript," published at https://javascript.tm/, was signed by 14,000+ individuals at the time of this Petition to Cancel, including notable figures such as Brendan Eich, the creator of JavaScript, and the current editors of the JavaScript specification, Michael Ficarra and Shu-yu Guo. There is broad industry and public consensus that the term "JavaScript" is generic.

The seven-page petition goes into great detail, reports InfoWorld. "Deno Land also accused Oracle of committing fraud in its trademark renewal efforts in 2019 by submitting screen captures of the website of JavaScript runtime Node.js, even though Node.js was not affiliated with Oracle."
Programming

Ask Slashdot: What's the Best Way to Transfer Legacy PHP Code to a Modern Framework? 112

Slashdot reader rzack writes: Since 1999, I've written a huge amount of PHP code, for dozens of applications and websites. Most of it has been continually updated, and remains active and in-production, in one form or another.

Here's the thing. It's all hand-written using vi, even to this day.

Is there any benefit to migrating this codebase to a more modern PHP framework, like Laravel? And is there an easy and minimally intrusive way this can be done en-masse, across dozens of applications and websites?

Or at this point should I just stick with vi?

Share your thoughts and suggestions in the comments.

AI

Foreign Cybercriminals Bypassed Microsoft's AI Guardrails, Lawsuit Alleges (arstechnica.com) 3

"Microsoft's Digital Crimes Unit is taking legal action to ensure the safety and integrity of our AI services," according to a Friday blog post by the unit's assistant general counsel. Microsoft blames "a foreign-based threat-actor group" for developing "tools specifically designed to bypass the safety guardrails of generative AI services, including Microsoft's, to create offensive and harmful content."

Microsoft "is accusing three individuals of running a 'hacking-as-a-service' scheme," reports Ars Technica, "that was designed to allow the creation of harmful and illicit content using the company's platform for AI-generated content" after bypassing Microsoft's AI guardrails: They then compromised the legitimate accounts of paying customers. They combined those two things to create a fee-based platform people could use. Microsoft is also suing seven individuals it says were customers of the service. All 10 defendants were named John Doe because Microsoft doesn't know their identity.... The three people who ran the service allegedly compromised the accounts of legitimate Microsoft customers and sold access to the accounts through a now-shuttered site... The service, which ran from last July to September when Microsoft took action to shut it down, included "detailed instructions on how to use these custom tools to generate harmful and illicit content."

The service contained a proxy server that relayed traffic between its customers and the servers providing Microsoft's AI services, the suit alleged. Among other things, the proxy service used undocumented Microsoft network application programming interfaces (APIs) to communicate with the company's Azure computers. The resulting requests were designed to mimic legitimate Azure OpenAI Service API requests and used compromised API keys to authenticate them. Microsoft didn't say how the legitimate customer accounts were compromised but said hackers have been known to create tools to search code repositories for API keys developers inadvertently included in the apps they create. Microsoft and others have long counseled developers to remove credentials and other sensitive data from code they publish, but the practice is regularly ignored. The company also raised the possibility that the credentials were stolen by people who gained unauthorized access to the networks where they were stored...
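The repository-scanning tools the report alludes to boil down to pattern matching over source text. A minimal sketch with illustrative patterns only (real scanners such as gitleaks or trufflehog ship hundreds of rules plus entropy heuristics):

```python
import re

# Illustrative secret patterns -- not an exhaustive or production rule set.
KEY_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
    re.compile(r"(?i)api[_-]?key\s*=\s*['\"][A-Za-z0-9]{20,}['\"]"),
]

def find_leaked_keys(source: str):
    """Return (line number, line) pairs that look like embedded secrets."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), 1):
        for pat in KEY_PATTERNS:
            if pat.search(line):
                hits.append((lineno, line.strip()))
    return hits
```

The asymmetry is what makes the advice to keep credentials out of published code so important: a key only needs to be matched once by anyone scanning the repository.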

The lawsuit alleges the defendants' service violated the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, the Lanham Act, and the Racketeer Influenced and Corrupt Organizations Act and constitutes wire fraud, access device fraud, common law trespass, and tortious interference.

Programming

StackOverflow Usage Plummets as AI Chatbots Rise (devclass.com) 66

Developer Q&A platform StackOverflow appears to be facing an existential crisis as the volume of new questions on the site has plunged 75% from the 2017 peak and 60% year-on-year in December 2024, according to StackExchange Data Explorer figures.

The decline accelerated after ChatGPT's launch in November 2022, with questions falling 76% since then. Despite banning AI-generated answers two years ago, StackOverflow has embraced AI partnerships, striking deals with Google, OpenAI and GitHub.
Programming

Should First-Year Programming Students Be Taught With Python and Java? (huntnewsnu.com) 175

Long-time Slashdot reader theodp writes: In an Op-ed for The Huntington News, fourth-year Northeastern University CS student Derek Kaplan argues that real pedagogical merit is what should count when deciding which language to use to teach CS fundamentals (aka 'Fundies'). He makes the case for Northeastern to reconsider its decision to move from Racket to Python and Java later this year in an overhaul of its first-year curriculum.

"Students will get extensive training in Python, which is currently the most requested language by co-op employers," Northeastern explains (some two decades after a Slashdot commenter made the same Hot Languages = Jobs observation in a spirited 2001 debate on Java as a CS introductory language)...

"I have often heard computer science students complain that Fundies 1 teaches Racket instead of a 'useful language' like Python," Kaplan writes. "But the point of Fundies is not to teach Racket — it is to teach program design skills that can be applied using any programming language. Racket is just the tool it uses to do so. A student who does well in Fundies will have no difficulty applying the same skills to Python or any other language. And with how fast the tech industry changes, is it really worth having a course that teaches just Python when tomorrow, some other language might dominate the industry? Our current curriculum focuses on timeless principles rather than fleeting trends."

Also expressing concerns about the selection of suitable languages for novice programming is King's College CS Prof Michael Kölling, who explains, "One of the drivers is the perceived usefulness of the language in a real-world context. Students (and their parents) often have opinions which language is 'better' to learn. In forming these opinions, the definition of 'better' can often be vague and driven by limited insight. One strong aspect commonly cited is the perceived usefulness of a language in the 'real world.' If a language is widely used in industry, it is more likely to be seen as a useful language to learn." Kölling's recommendation? "We need a new language for teaching novices at secondary school and introductory university level," Kölling concludes. "This language should be designed explicitly for teaching [...] Maintenance and adaptation of this language should be driven by pedagogical considerations, not by industry needs."

While noble in intent, one suspects Kaplan and Kölling may be on a quixotic quest in a money-wins world, outgunned by the demands, resources, and influence of tech giants like Amazon — the top employer of Northeastern MSCS program grads — which pushed back against NSF advice to deemphasize Java in high school CS and dropped $15 million to have tech-backed nonprofit Code.org develop and push a new Java-based, powered-by-AWS CS curriculum into high schools with the support of a consortium of politicians, educators, and tech companies. Echoing Northeastern, an Amazon press release argued the new Java-based curriculum "best prepares students for the next step in their education and careers."

Programming

New System Auto-Converts C To Memory-Safe Rust, But There's a Catch 75

Researchers from Inria and Microsoft have developed a system to automatically convert specific types of C programming code into memory-safe Rust code, addressing growing cybersecurity concerns about memory vulnerabilities in software systems.

The technique, detailed in a new paper, requires programmers to use a restricted version of C called "Mini-C" that excludes features like pointer arithmetic. The researchers successfully tested their conversion system on two major code libraries, including the 80,000-line HACL* cryptographic library. Parts of the converted code have already been integrated into Mozilla's NSS and OpenSSH security systems, according to the researchers. Memory safety errors accounted for 76% of Android vulnerabilities in 2019.
Programming

'International Obfuscated C Code Contest' Will Relaunch, Celebrating 40th Anniversary (fosstodon.org) 23

After a four-year hiatus, 2025 will see the return of the International Obfuscated C Code Contest. Started in 1984 (and inspired partly by a bug in the classic Bourne shell), it's "the Internet's oldest contest," according to their official social media account on Mastodon.

The contest enters its "pending" state today at 2024-12-29 23:58 UTC — meaning an opening date for submissions has been officially scheduled (for January 31st) as well as a closing date roughly eight weeks later on April 1st, 2025. That's according to the newly-released (proposed and tentative) rules and guidelines, listing contest goals like "show the importance of programming style, in an ironic way" and "stress C compilers with unusual code." And the contest's home page adds an additional goal: "to have fun with C!"

Excerpts from the official rules: Rule 0
Just as C starts at 0, so the IOCCC starts at rule 0. :-)

Rule 1
Your submission must be a complete program....

Rule 5
Your submission MUST not modify the content or filename of any part of your original submission including, but not limited to prog.c, the Makefile (that we create from your how to build instructions), as well as any data files you submit....

Rule 6
I am not a rule, I am a free(void *human);
while (!(ioccc(rule(you(are(number(6)))))) {
    ha_ha_ha();
}

Rule 6 is clearly a reference to The Prisoner... (Some other rules are even sillier...) And the guidelines include their own jokes: You are in a maze of twisty guidelines, all different.

There are at least zero judges who think that Fideism has little or nothing to do with the IOCCC judging process....

We suggest that you avoid trying for the 'smallest self-replicating' source. The smallest, a zero-byte entry, won in 1994.

And this weekend there was also a second announcement: After a 4 year effort by a number of people, with over 6,168 commits, the Great Fork Merge has been completed and the Official IOCCC web site has been updated! A significant number of improvements have been made to the IOCCC winning entries. A number of fixes and improvements involve the ability of reasonably modern Unix/Linux systems to compile and even run them.
Thanks to long-time Slashdot reader — and C programmer — achowe for sharing the news.
Python

Python in 2024: Faster, More Powerful, and More Popular Than Ever (infoworld.com) 45

"Over the course of 2024, Python has proven again and again why it's one of the most popular, useful, and promising programming languages out there," writes InfoWorld: The latest version of the language pushes the envelope further for speed and power, sheds many of Python's most decrepit elements, and broadens its appeal with developers worldwide. Here's a look back at the year in Python.

In the biggest news of the year, the core Python development team took a major step toward overcoming one of Python's longstanding drawbacks: the Global Interpreter Lock or "GIL," a mechanism for managing interpreter state. The GIL prevents data corruption across threads in Python programs, but it comes at the cost of making threads nearly useless for CPU-bound work. Over the years, various attempts to remove the GIL ended in tears, as they made single-threaded Python programs drastically slower. But the most recent no-GIL project goes a long way toward fixing that issue — enough that it's been made available for regular users to try out.
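As a minimal sketch of the problem the GIL creates (the function and thread counts here are illustrative, not from the article): the four threads below all finish, but on a standard build only one of them can execute Python bytecode at any instant, so CPU-bound work gains nothing from threading.

```python
import threading

# Illustrative CPU-bound work: pure Python bytecode with no I/O, which
# the GIL serializes on a standard interpreter build.
def count_squares(n: int) -> int:
    total = 0
    for i in range(n):
        total += i * i
    return total

results = []
threads = [
    threading.Thread(target=lambda: results.append(count_squares(200_000)))
    for _ in range(4)
]
for t in threads:
    t.start()
for t in threads:
    t.join()

# On a GIL build these ran effectively one at a time; a free-threaded
# (no-GIL) 3.13 build can spread them across cores.
print(len(results))  # 4
```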

The no-GIL or "free-threaded" builds are still considered experimental, so they shouldn't be deployed in production yet. The Python team wants to alleviate as much of the single-threaded performance impact as possible, along with any other concerns, before giving the no-GIL builds the full green light. It's also entirely possible these builds may never make it to full-blown production-ready status, but the early signs are encouraging.
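For readers who want to try the experimental builds, here is a hedged sketch of how one might check what kind of interpreter is running. It assumes Python 3.13's `sys._is_gil_enabled()` (a private, version-specific API) and the `Py_GIL_DISABLED` build flag; older interpreters have neither, so the sketch falls back to reporting a plain GIL build.

```python
import sys
import sysconfig

def gil_status():
    """Return (free_threaded_build, gil_currently_enabled).

    Py_GIL_DISABLED appears in the build config of free-threaded
    CPython 3.13 builds; sys._is_gil_enabled() reports whether the
    GIL is active at runtime. Pre-3.13 interpreters lack both, so
    we report a standard GIL build.
    """
    free_threaded = bool(sysconfig.get_config_var("Py_GIL_DISABLED"))
    gil_on = sys._is_gil_enabled() if hasattr(sys, "_is_gil_enabled") else True
    return free_threaded, gil_on

ft, gil = gil_status()
print(f"free-threaded build: {ft}, GIL enabled: {gil}")
```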

Another forward-looking feature introduced in Python 3.13 is the experimental just-in-time compiler or JIT. It expands on previous efforts to speed up the interpreter by generating machine code for certain operations at runtime. Right now, the speedup doesn't amount to much (maybe 5% for most programs), but future versions of Python will expand the JIT's functionality where it yields real-world payoffs.
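A rough sketch of how such a speedup could be measured (the loop body and repetition counts are arbitrary choices for illustration): run the same tight loop under `timeit` on a default build and on an experimental JIT-enabled build, which CPython's docs describe as toggled with the `PYTHON_JIT` environment variable.

```python
import timeit

# A tight, CPU-bound loop of the kind the experimental JIT targets.
def hot_loop(n: int) -> int:
    total = 0
    for i in range(n):
        total += i * i
    return total

# timeit yields a repeatable number for comparing interpreters, e.g. a
# default 3.13 build versus one started with PYTHON_JIT=1.
elapsed = timeit.timeit(lambda: hot_loop(100_000), number=20)
print(f"20 runs took {elapsed:.4f}s")
```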

Python is now more widely used than JavaScript on GitHub (thanks partly to its role in AI and data science code).
Programming

Bret Taylor Urges Rethink of Software Development as AI Reshapes Industry 111

Software development is entering an "autopilot era" with AI coding assistants, but the industry needs to prepare for full autonomy, argues former Salesforce co-CEO Bret Taylor. Drawing parallels with self-driving cars, he suggests the role of software engineers will evolve from code authors to operators of code-generating machines. Taylor, a board member of OpenAI who once rewrote Google Maps over a weekend, calls for new programming systems, languages, and verification methods to ensure AI-generated code remains robust and secure. From his post: In the Autonomous Era of software engineering, the role of a software engineer will likely transform from being the author of computer code to being the operator of a code generating machine. What is a computer programming system built natively for that workflow?

If generating code is no longer a limiting factor, what types of programming languages should we build?

If a computer is generating most code, how do we make it easy for a software engineer to verify it does what they intend? What is the role of programming language design (e.g., what Rust did for memory safety)? What is the role of formal verification? What is the role of tests, CI/CD, and development workflows?

Today, a software engineer's primary desktop is their editor. What is the Mission Control for a software engineer in the era of autonomous development?

Slashdot Top Deals