GNU is Not Unix

FSF: Meta's License for Its Llama 3.1 AI Model 'is Not a Free Software License' (fsf.org) 35

July saw the news that Meta had launched a powerful open-source AI model, Llama 3.1.

But the Free Software Foundation evaluated Llama 3.1's license agreement, and announced this week that "this is not a free software license and you should not use it, nor any software released under it." Not only does it deny users their freedom, but it also purports to hand over powers to the licensors that should only be exercised through lawmaking by democratically-elected governments.

Moreover, it has been applied by Meta to a machine-learning (ML) application, even though the license completely fails to address software freedom challenges inherent in such applications....

We decided to review the Llama license because it is being applied to an ML application and model, while at the same time being presented by Meta as if it grants users a degree of software freedom. This is certainly not the case, and we want the free software community to have clarity on this.

In other news, the FSF also announced the winner of the logo contest for their big upcoming 40th anniversary celebration.
Social Networks

Oracle and US Investors (Including Microsoft) Discuss Taking Control of TikTok in the US (npr.org) 53

A plan to keep TikTok available in the U.S. "involves tapping software company Oracle and a group of outside investors," reports NPR, "to effectively take control of the app's global operations, according to two people with direct knowledge of the talks..."

"[P]otential investors who are engaged in the talks include Microsoft." Under the deal now being negotiated by the White House, TikTok's China-based owner ByteDance would retain a minority stake in the company, but the app's algorithm, data collection and software updates would be overseen by Oracle, which already provides the foundation of TikTok's web infrastructure... "The goal is for Oracle to effectively monitor and provide oversight with what is going on with TikTok," said the person directly involved in the talks, who was not authorized to speak publicly about the deliberations. "ByteDance wouldn't completely go away, but it would minimize Chinese ownership...." Officials from Oracle and the White House held a meeting on Friday about a potential deal, and another meeting has been scheduled for next week, according to the source involved in the discussions, who said Oracle is interested in a TikTok stake "in the tens of billions," but the rest of the deal is in flux...

Under a law passed by Congress and upheld by the Supreme Court, TikTok must execute what is known as "qualified divestiture" from ByteDance in order to stay in business in the U.S... A congressional staffer involved in talks about TikTok's future, who was not authorized to speak publicly, said binding legal agreements from the White House ensuring ByteDance cannot covertly manipulate the app will prove critical in winning lawmakers' approval. "A key part is showing there is no operational relationship with ByteDance, that they do not have control," the Congressional staffer said. "There needs to be no backdoors where China can potentially gain access...."

Chinese regulators, who have for years opposed a sale of TikTok, recently signaled that they would not stand in the way of a TikTok ownership change, saying acquisitions "should be independently decided by the enterprises and based on market principles." At first glance the statement does not seem to say much, but negotiators in the White House believe it indicates that Beijing is not planning to block a deal that gives American investors a majority stake in the company.

"Meanwhile, Apple and Google still have not returned TikTok to app stores..."
Power

Could New Linux Code Cut Data Center Energy Use By 30%? (datacenterdynamics.com) 65

Two computer scientists at the University of Waterloo in Canada believe changing 30 lines of code in Linux "could cut energy use at some data centers by up to 30 percent," according to the site Data Centre Dynamics.

It's the code that processes packets of network traffic, and Linux "is the most widely used OS for data center servers," according to the article: The team tested their solution's effectiveness and submitted it to Linux for consideration, and the code was published this month as part of Linux's newest kernel, release version 6.13. "All these big companies — Amazon, Google, Meta — use Linux in some capacity, but they're very picky about how they decide to use it," said Martin Karsten [a professor of computer science in the University of Waterloo's Faculty of Mathematics]. "If they choose to 'switch on' our method in their data centers, it could save gigawatt hours of energy worldwide. Almost every single service request that happens on the Internet could be positively affected by this."

The University of Waterloo is building a green computer server room as part of its new mathematics building, and Karsten believes sustainability research must be a priority for computer scientists. "We all have a part to play in building a greener future," he said. The Linux Foundation, which oversees the development of the Linux OS, is a founding member of the Green Software Foundation, an organization set up to look at ways of developing "green software" — code that reduces energy consumption.

Karsten "teamed up with Joe Damato, distinguished engineer at Fastly" to develop the 30 lines of code, according to an announcement from the university. "The Linux kernel code addition developed by Karsten and Damato was based on research published in ACM SIGMETRICS Performance Evaluation Review" (by Karsten and grad student Peter Cai).

Their paper "reviews the performance characteristics of network stack processing for communication-heavy server applications," devising an "indirect methodology" to "identify and quantify the direct and indirect costs of asynchronous hardware interrupt requests (IRQ) as a major source of overhead...

"Based on these findings, a small modification of a vanilla Linux system is devised that improves the efficiency and performance of traditional kernel-based networking significantly, resulting in up to 45% increased throughput..."
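
The intuition behind the researchers' change can be illustrated with a toy cost model: under heavy load, taking a hardware interrupt per packet burns far more CPU than letting the application poll and waking it only once per batch. All numbers below are illustrative, not figures from the paper.

```python
# Toy model of why deferring hardware interrupts helps a busy server.
# Costs are illustrative; real figures are workload- and hardware-dependent.

IRQ_OVERHEAD_US = 8.0   # per-interrupt cost: context switch, cache pollution
PER_PACKET_US = 2.0     # unavoidable per-packet processing cost

def interrupt_driven(packets: int) -> float:
    """One hardware interrupt per packet (worst case under load)."""
    return packets * (IRQ_OVERHEAD_US + PER_PACKET_US)

def irq_suspended(packets: int, batch: int) -> float:
    """IRQs stay masked while the application polls; one wakeup per batch."""
    wakeups = -(-packets // batch)  # ceiling division
    return wakeups * IRQ_OVERHEAD_US + packets * PER_PACKET_US

busy = 10_000  # packets in a busy interval
naive = interrupt_driven(busy)
batched = irq_suspended(busy, batch=64)
print(f"interrupt-driven: {naive:,.0f} us")   # 100,000 us
print(f"irq-suspended:    {batched:,.0f} us") # 21,256 us
print(f"overhead saved:   {1 - batched / naive:.0%}")
```

The real mechanism merged into Linux 6.13 is more subtle (it adaptively suspends IRQs only while the application keeps up), but the batching arithmetic is where the savings come from.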
AI

'Copilot' Price Hike for Microsoft 365 Called 'Total Disaster' with Overwhelmingly Negative Response (zdnet.com) 129

ZDNET's senior editor sees an "overwhelmingly negative" response to Microsoft's surprise price hike for the 84 million paying subscribers to its Microsoft 365 software suite. Attempting its first price hike in more than 12 years, "they made it a 30% price increase" — going from $10 a month to $13 a month — "and blamed it all on artificial intelligence." Bad idea. Why? Because...

No one wants to pay for AI...

If you ask Copilot in Word to write something for you, the results will be about what you'd expect from an enthusiastic summer intern. You might fare better if you ask Copilot to turn a folder full of photos into a PowerPoint presentation. But is that task really such a challenge...?

The announcement was bungled, too... I learned about the new price thanks to a pop-up message on my Android phone... It could be worse, I suppose. Just ask the French and Spanish subscribers who got a similar pop-up message telling them their price had gone from €10 a month to €13,000. (Those pesky decimals.) Oh, and I've lost count of the number of people who were baffled and angry that Microsoft had forcibly installed the Copilot app on their devices. It was just a rebranding of the old Microsoft 365 app with the new name and logo, but in my case it was days later before I received yet another pop-up message telling me about the change...

[T]hey turned the feature on for everyone and gave Word users a well-hidden checkbox that reads Enable Copilot. The feature is on by default, so you have to clear the checkbox to make it go away. As for the other Office apps? "Uh, we'll get around to giving you a button to turn it off next month. Maybe." Seriously, the support page that explains where you can find that box in Word says, "We're working on adding the Enable Copilot checkbox to Excel, OneNote, and PowerPoint on Windows devices and to Excel and PowerPoint on Mac devices. That is tentatively scheduled to happen in February 2025." Until the Enable Copilot button is available, you can't disable Copilot.

ZDNET's senior editor concludes it's a naked grab for cash, adding "I could plug the numbers into Excel and tell you about it, but let's have Copilot explain instead."

Prompt: If I have 84 million subscribers who pay me $10 a month, and I increase their monthly fee by $3 a month each, how much extra revenue will I make each year?

Copilot describes the calculation, concluding with "You would make an additional $3.024 billion per year from this fee increase." Copilot then posts two emojis — a bag of money, and a stock chart with the line going up.
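
Copilot's arithmetic checks out; the calculation it describes takes one line to verify:

```python
# Sanity-check the Microsoft 365 price-increase math quoted above.
subscribers = 84_000_000
increase_per_month = 3  # dollars: $10 -> $13
extra_per_year = subscribers * increase_per_month * 12
print(f"${extra_per_year / 1e9:.3f} billion per year")  # $3.024 billion per year
```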
AI

Apple Enlists Veteran Software Executive To Help Fix AI and Siri (yahoo.com) 30

An anonymous reader quotes a report from Bloomberg: Apple executive Kim Vorrath, a company veteran known for fixing troubled products and bringing major projects to market, has a new job: whipping artificial intelligence and Siri into shape. Vorrath, a vice president in charge of program management, was moved to Apple's artificial intelligence and machine learning division this week, according to people with knowledge of the matter. She'll be a top deputy to AI chief John Giannandrea, said the people, who asked not to be identified because the change hasn't been announced publicly. The move helps bolster a team that's racing to make Apple a leader in AI -- an area where it's fallen behind technology peers. [...]

Vorrath, who has spent 36 years at Apple, is known for managing the development of tough software projects. She's also put procedures in place that can catch and fix bugs. Vorrath joins the new team from Apple's hardware engineering division, where she helped launch the Vision Pro headset. Over the years, Vorrath has had a hand in several of Apple's biggest endeavors. In the mid-2000s, she was chosen to lead project management for the original iPhone software group and get the iconic device ready for consumers. Until 2019, she oversaw project management for the iPhone, iPad and Mac operating systems, before taking on the Vision Pro software. Haley Allen will replace Vorrath overseeing program management for visionOS, the headset's operating system, according to the people.

Prior to joining Giannandrea's organization, Vorrath had spent several weeks advising Kelsey Peterson, the group's previous head of program management. Peterson will now report to Vorrath -- as will two other AI executives, Cindy Lin and Marc Schonbrun. Giannandrea, who joined Apple from Google in 2018, disclosed the changes in a memo sent to staffers. The move signals that AI is now more important than the Vision Pro, which launched in February 2024, and is seen as the biggest challenge within the company, according to a longtime Apple executive who asked not to be identified. Vorrath has a knack for organizing engineering groups and creating an effective workflow with new processes, the executive said. It has been clear for some time now that Giannandrea needs additional help managing an AI group with growing prominence, according to the executive. Vorrath is poised to bring Apple's product development culture to the AI work, the person said.

Security

FBI: North Korean IT Workers Steal Source Code To Extort Employers (bleepingcomputer.com) 27

The FBI warned this week that North Korean IT workers are abusing their access to steal source code and extort U.S. companies that have been tricked into hiring them. From a report: The security service alerted public and private sector organizations in the United States and worldwide that North Korea's IT army is facilitating cyber-criminal activity, demanding ransoms in exchange for not leaking sensitive data exfiltrated from their employers' networks. "North Korean IT workers have copied company code repositories, such as GitHub, to their own user profiles and personal cloud accounts. While not uncommon among software developers, this activity represents a large-scale risk of theft of company code," the FBI said.

"North Korean IT workers could attempt to harvest sensitive company credentials and session cookies to initiate work sessions from non-company devices and for further compromise opportunities." To mitigate these risks, the FBI advised companies to apply the principle of least privilege by disabling local administrator accounts and limiting permissions for remote desktop applications. Organizations should also monitor for unusual network traffic, especially remote connections, since North Korean IT personnel often log into the same account from various IP addresses over a short period of time.

Microsoft

Linux 6.14 Adds Support For The Microsoft Copilot Key Found On New Laptops (phoronix.com) 35

The Linux 6.14 kernel adds support for Microsoft's "Copilot" key "so that user-space software can determine the behavior for handling that key's action on the Linux desktop," writes Phoronix's Michael Larabel. From the report: A change made to the atkbd keyboard driver on Linux now maps the F23 key to support the default copilot shortcut action. The patch, authored by Lenovo engineer Mark Pearson, explains [...]. It's now up to the Linux desktop environments to determine what to do when the new Copilot key is pressed. The patch was part of the input updates merged for the Linux 6.14 kernel.
AI

OpenAI Unveils AI Agent To Automate Web Browsing Tasks (openai.com) 41

The rumors are true: OpenAI today launched Operator, an AI agent capable of performing web-based tasks through its own browser, as a research preview for U.S. subscribers of its $200 monthly ChatGPT Pro tier. The agent uses GPT-4's vision capabilities and reinforcement learning to interact with websites through mouse and keyboard actions without requiring API integration, OpenAI said in a blog post.

Operator can self-correct and defer to users for sensitive information though there are some limitations with complex interfaces. OpenAI said it's partnering with DoorDash, Instacart, OpenTable and others to develop real-world applications, with plans to expand access to Plus, Team and Enterprise users.

AI

OpenAI's Stargate Deal Heralds Shift Away From Microsoft 38

Microsoft's absence from OpenAI's Stargate announcement follows months of tension between the companies and signals a new era in which the longtime partners will be less reliant on each other. From a report: At a White House press conference, the ChatGPT maker announced Stargate, a venture with Oracle and tech investor SoftBank. The new company plans to spend up to $500 billion building new data centers in the U.S. to help power OpenAI's development.

The assembled leaders -- OpenAI's Sam Altman, Oracle's Larry Ellison, SoftBank's Masayoshi Son and President Trump -- discussed how AI could create jobs and even cure cancer. Microsoft CEO Satya Nadella was thousands of miles away, at the World Economic Forum in Davos, Switzerland. The developments show how the OpenAI-Microsoft partnership that helped trigger the generative-AI boom is drifting apart as each company focuses on its own evolving needs.

In the months leading up to the announcement, the two sides had been haggling over what to do about OpenAI's seemingly insatiable appetite for computing power and its contention that Microsoft couldn't fulfill it, even though their agreement didn't allow OpenAI to easily switch to other providers, said people familiar with the discussions. OpenAI is almost entirely reliant on Microsoft to provide it with the data centers it needs to build and operate its sophisticated AI software. That has been part of their agreement since Microsoft first invested in 2019. With the success of ChatGPT, OpenAI's need for computing power surged. Its executives have said ending the exclusive cloud contract could be crucial to compete with rival AI developers that don't have the same constraints.
AI

macOS Sequoia 15.3 and iOS 18.3 Enable Apple Intelligence Automatically 55

Apple's upcoming updates -- macOS Sequoia 15.3, iOS 18.3, and iPadOS 18.3 -- will enable Apple Intelligence by default on compatible devices, requiring users to manually disable it if undesired. From Apple's developer release notes: "For users new or upgrading to iOS 18.3, Apple Intelligence will be enabled automatically during iPhone onboarding. Users will have access to Apple Intelligence features after setting up their devices. To disable Apple Intelligence, users will need to navigate to the Apple Intelligence & Siri Settings pane and turn off the Apple Intelligence toggle. This will disable Apple Intelligence features on their device." MacRumors reports: With macOS Sequoia 15.1, macOS Sequoia 15.2, iOS 18.1, and iOS 18.2, Apple Intelligence was opt-in rather than opt-out, and users who wanted the feature needed to turn it on in the Settings app. Going forward, it will be enabled by default, and Mac, iPhone, and iPad users who do not want to use the feature will need to turn it off. The report notes that macOS Sequoia 15.3 introduces Genmoji, allowing Mac users to create custom emoji characters, and enhances Notification summaries with clearer indicators for AI-generated information.

Public releases of this and other software updates are expected next week, following today's release candidate versions.
Games

EA's Origin App For PC Gaming Will Shut Down In April 17

EA's Origin PC client will be shut down on April 17, 2025, as Microsoft ends support for 32-bit software. "Anyone still using Origin will need to swap over to the EA app before that date," adds Engadget. From the report: For those PC players who have not migrated over to the EA app, the company has an FAQ explaining the latest system requirements. The EA app runs on 64-bit architecture, and requires a machine using Windows 10 or Windows 11. [...] If you're simply downloading the EA app on a current machine, you won't need to re-download your games. And if you have cloud saves enabled, all of your data should transfer without any additional steps.

However, it's always a good idea to have physical backups with this type of transition, especially since not all games support cloud saves, and those titles will need to have saved game data manually transferred. Mods also may not automatically make the switch, and EA recommends players check with mod creators about transferring to the EA app.
AI

AI Boom Gives Rise To 'GPU-as-a-Service' 35

An anonymous reader quotes a report from IEEE Spectrum: The surge of interest in AI is creating a massive demand for computing power. Around the world, companies are trying to keep up with the vast number of GPUs needed to power more and more advanced AI models. While GPUs are not the only option for running an AI model, they have become the hardware of choice due to their ability to efficiently handle multiple operations simultaneously -- a critical feature when developing deep learning models. But not every AI startup has the capital to invest in the huge numbers of GPUs now required to run a cutting-edge model. For some, it's a better deal to outsource it. This has led to the rise of a new business: GPU-as-a-Service (GPUaaS). In recent years, companies like Hyperbolic, Kinesis, Runpod, and Vast.ai have sprouted up to remotely offer their clients the needed processing power.

[...] Studies have shown that more than half of the existing GPUs are not in use at any given time. Whether we're talking personal computers or colossal server farms, a lot of processing capacity is under-utilized. What Kinesis does is identify idle compute -- both for GPUs and CPUs -- in servers worldwide and compile them into a single computing source for companies to use. Kinesis partners with universities, data centers, companies, and individuals who are willing to sell their unused computing power. Through special software installed on their servers, Kinesis detects idle processing units, preps them, and offers them to its clients for temporary use. [...] The biggest advantage of GPUaaS is economic. By removing the need to purchase and maintain the physical infrastructure, it allows companies to avoid investing in servers and IT management, and to instead put their resources toward improving their own deep learning, large language, and large vision models. It also lets customers pay for the exact amount of GPUs they use, saving the costs of the inevitable idle compute that would come with their own servers.

The report notes that GPUaaS is growing in profitability. "In 2023, the industry's market size was valued at US $3.23 billion; in 2024, it grew to $4.31 billion," reports IEEE. "It's expected to rise to $49.84 billion by 2032."
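
Taken at face value, IEEE's projection implies a very steep compound growth rate. A quick back-of-the-envelope check using only the figures quoted above:

```python
# Implied growth rates of the GPUaaS market, from the IEEE Spectrum figures.
size_2023, size_2024, size_2032 = 3.23, 4.31, 49.84  # $ billions

growth_2024 = size_2024 / size_2023 - 1                  # one-year growth
cagr_2024_2032 = (size_2032 / size_2024) ** (1 / 8) - 1  # 8-year CAGR

print(f"2023 -> 2024 growth:    {growth_2024:.1%}")      # ~33.4%
print(f"implied 2024-2032 CAGR: {cagr_2024_2032:.1%}")   # ~35.8%
```

In other words, the forecast assumes the market keeps growing at roughly a third per year for eight straight years.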
Security

Employees of Failed Startups Are at Special Risk of Stolen Personal Data Through Old Google Logins (techcrunch.com) 7

Hackers could steal sensitive personal data from former startup employees by exploiting abandoned company domains and Google login systems, security researcher Dylan Ayrey revealed at the ShmooCon security conference. The vulnerability particularly affects startups that relied on "Sign in with Google" features for their business software.

Ayrey, CEO of Truffle Security, demonstrated the flaw by purchasing one failed startup's domain and accessing ChatGPT, Slack, Notion, Zoom and an HR system containing Social Security numbers. His research found 116,000 website domains from failed tech startups currently available for sale. While Google offers preventive measures through its OAuth "sub-identifier" system, some providers avoid it due to reliability concerns, which Google disputes. The company initially dismissed Ayrey's finding as a fraud issue before reversing course and awarding him a $1,337 bounty. Google has since updated its documentation but hasn't implemented a technical fix, TechCrunch reports.
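
The account-matching pitfall is easy to demonstrate. A Google ID token carries a stable `sub` identifier alongside the `email` claim; a service that keys its accounts on email alone will hand a former employee's data to whoever controls the revived domain. A minimal sketch, with simplified payloads standing in for real tokens:

```python
# Two simplified Google ID token payloads with the same email but different
# `sub` claims: the first from a real employee, the second minted after an
# attacker re-registered the defunct startup's domain and recreated the
# mailbox. Values are illustrative, not real tokens.
original = {"sub": "110169484474386276334", "email": "alice@deadstartup.example"}
attacker = {"sub": "215551212999888777666", "email": "alice@deadstartup.example"}

accounts_by_email = {original["email"]: "alice's HR records"}
accounts_by_sub = {original["sub"]: "alice's HR records"}

# Keying on email: the attacker's token matches the old account.
assert accounts_by_email.get(attacker["email"]) == "alice's HR records"

# Keying on the stable `sub` claim: the attacker's token matches nothing.
assert accounts_by_sub.get(attacker["sub"]) is None
```

The reliability concern Ayrey's sources cite is that some providers have observed `sub` values changing in rare cases, which is why several fall back to email matching despite this risk.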
United States

The Pentagon Says AI is Speeding Up Its 'Kill Chain' 34

An anonymous reader shares a report: Leading AI developers, such as OpenAI and Anthropic, are threading a delicate needle to sell software to the United States military: make the Pentagon more efficient, without letting their AI kill people. Today, their tools are not being used as weapons, but AI is giving the Department of Defense a "significant advantage" in identifying, tracking, and assessing threats, the Pentagon's Chief Digital and AI Officer, Dr. Radha Plumb, told TechCrunch in a phone interview.

"We obviously are increasing the ways in which we can speed up the execution of kill chain so that our commanders can respond in the right time to protect our forces," said Plumb. The "kill chain" refers to the military's process of identifying, tracking, and eliminating threats, involving a complex system of sensors, platforms, and weapons. Generative AI is proving helpful during the planning and strategizing phases of the kill chain, according to Plumb. The relationship between the Pentagon and AI developers is a relatively new one. OpenAI, Anthropic, and Meta walked back their usage policies in 2024 to let U.S. intelligence and defense agencies use their AI systems. However, they still don't allow their AI to harm humans. "We've been really clear on what we will and won't use their technologies for," Plumb said, when asked how the Pentagon works with AI model providers.
Linux

Linux 6.13 Released (phoronix.com) 25

"Nothing horrible or unexpected happened last week," Linus Torvalds posted tonight on the Linux kernel mailing list, "so I've tagged and pushed out the final 6.13 release."

Phoronix says the release has "plenty of fine features": Linux 6.13 comes with the introduction of the AMD 3D V-Cache Optimizer driver for benefiting multi-CCD Ryzen X3D processors. The new AMD EPYC 9005 "Turin" server processors will now default to AMD P-State rather than ACPI CPUFreq for better power efficiency....

Linux 6.13 also brings more Rust programming language infrastructure and more.

Phoronix notes that Linux 6.13 also brings "the start of Intel Xe3 graphics bring-up, support for many older (pre-M1) Apple devices like numerous iPads and iPhones, NVMe 2.1 specification support, and AutoFDO and Propeller optimization support when compiling the Linux kernel with the LLVM Clang compiler."

And some lucky Linux kernel developers will also be getting a guitar pedal soldered by Linus Torvalds himself, thanks to a generous offer he announced a week ago: For _me_ a traditional holiday activity tends to be a LEGO build or two, since that's often part of the presents... But in addition to the LEGO builds, this year I also ended up doing a number of guitar pedal kit builds ("LEGO for grown-ups with a soldering iron"). Not because I play guitar, but because I enjoy the tinkering, and the guitar pedals actually do something and are the right kind of "not very complex, but not some 5-minute 555 LED blinking thing"...

[S]ince I don't actually have any _use_ for the resulting pedals (I've already foisted off a few on unsuspecting victims^Hfriends), I decided that I'm going to see if some hapless kernel developer would want one.... as an admittedly pretty weak excuse to keep buying and building kits...

"It may be worth noting that while I've had good success so far, I'm a software person with a soldering iron. You have been warned... [Y]ou should set your expectations along the lines of 'quality kit built by a SW person who doesn't know one end of a guitar from the other.'"
Google

Google Upgrades Open Source Vulnerability Scanning Tool with SCA Scanning Library (googleblog.com) 2

In 2022 Google released a tool to easily scan for vulnerabilities in dependencies named OSV-Scanner. "Together with the open source community, we've continued to build this tool, adding remediation features," according to Google's security blog, "as well as expanding ecosystem support to 11 programming languages and 20 package manager formats... Users looking for an out-of-the-box vulnerability scanning CLI tool should check out OSV-Scanner, which already provides comprehensive language package scanning capabilities..."

Thursday they also announced an extensible library for "software composition analysis" scanning (as well as file-system scanning) named OSV-SCALIBR (Open Source Vulnerability — Software Composition Analysis LIBRary). The new library "combines Google's internal vulnerability management expertise into one scanning library with significant new capabilities such as:
  • Software composition analysis for installed packages, standalone binaries, as well as source code
  • OS package scanning on Linux (COS, Debian, Ubuntu, RHEL, and much more), Windows, and Mac
  • Artifact and lockfile scanning in major language ecosystems (Go, Java, JavaScript, Python, Ruby, and much more)
  • Vulnerability scanning tools such as weak credential detectors for Linux, Windows, and Mac
  • Software Bill of Materials (SBOM) generation in SPDX and CycloneDX, the two most popular document formats
  • Optimization for on-host scanning of resource-constrained environments where performance and low resource consumption are critical

"OSV-SCALIBR is now the primary software composition analysis engine used within Google for live hosts, code repos, and containers. It's been used and tested extensively across many different products and internal tools to help generate SBOMs, find vulnerabilities, and help protect our users' data at Google scale. We offer OSV-SCALIBR primarily as an open source Go library today, and we're working on adding its new capabilities into OSV-Scanner as the primary CLI interface."


Printer

Proposed New York Law Could Require Background Checks Before Buying 3D Printers (news10.com) 225

A new law is being considered by New York's state legislature, reports a local news outlet, which "if passed, will require anyone buying a 3D printer to pass a background check. If you can't legally own a firearm, you won't be able to buy one of these printers..." It is illegal to print most gun parts in New York. Attorney Greg Rinckey believes the proposal is an overreach. "I think this is also gonna face some constitutional problems. I mean, it really comes down to a legal parsing of what are you printing and at what point is it technically a firearm...?"

[Ascent Fabrication owner Joe] Fairley thinks lawmakers should shift their focus to the partial gun kits that produce the metal firing components. Another possibility is to require printer manufacturers to install software that prevents gun parts from being printed. "They would need to agree on some algorithm to look at the part and say nope, that is a gun component, you're not allowed to print that part somehow," said Fairley. "But I feel like it would be extremely difficult to get to that point."

AI

Arrested by AI: When Police Ignored Standards After AI Facial-Recognition Matches (msn.com) 55

A county transit police detective fed a poor-quality image to an AI-powered facial recognition program, recounts the Washington Post, leading to the arrest of "Christopher Gatlin, a 29-year-old father of four who had no apparent ties to the crime scene nor a history of violent offenses." He was unable to post the required $75,000 cash bond. "Jailed for a crime he says he didn't commit, it would take Gatlin more than two years to clear his name." A Washington Post investigation into police use of facial recognition software found that law enforcement agencies across the nation are using the artificial intelligence tools in a way they were never intended to be used: as a shortcut to finding and arresting suspects without other evidence... The Post reviewed documents from 23 police departments where detailed records about facial recognition use are available and found that 15 departments spanning 12 states arrested suspects identified through AI matches without any independent evidence connecting them to the crime — in most cases contradicting their own internal policies requiring officers to corroborate all leads found through AI. Some law enforcement officers using the technology appeared to abandon traditional policing standards and treat software suggestions as facts, The Post found. One police report referred to an uncorroborated AI result as a "100% match." Another said police used the software to "immediately and unquestionably" identify a suspected thief.

Gatlin is one of at least eight people wrongfully arrested in the United States after being identified through facial recognition... All of the cases were eventually dismissed. Police probably could have eliminated most of the people as suspects before their arrest through basic police work, such as checking alibis, comparing tattoos, or, in one case, following DNA and fingerprint evidence left at the scene.

Some statistics from the article about the eight wrongfully-arrested people:
  • In six cases police failed to check alibis
  • In two cases police ignored evidence that contradicted their theory
  • In five cases police failed to collect key pieces of evidence
  • In three cases police ignored suspects' physical characteristics
  • In six cases police relied on problematic witness statements

The article provides two examples of police departments forced to pay $300,000 settlements after wrongful arrests caused by AI mismatches. But "In interviews with The Post, all eight people known to have been wrongly arrested said the experience had left permanent scars: lost jobs, damaged relationships, missed payments on car and home loans. Some said they had to send their children to counseling to work through the trauma of watching their mother or father get arrested on the front lawn.

"Most said they also developed a fear of police."


AI

World's First AI Chatbot, ELIZA, Resurrected After 60 Years (livescience.com) 37

"Scientists have just resurrected 'ELIZA,' the world's first chatbot, from long-lost computer code," reports LiveScience, "and it still works extremely well." (Click in the vintage black-and-green rectangle for a blinking-cursor prompt...)

Using dusty printouts from MIT archives, these "software archaeologists" discovered defunct code that had been lost for 60 years and brought it back to life. ELIZA was developed in the 1960s by MIT professor Joseph Weizenbaum and named for Eliza Doolittle, the protagonist of the play "Pygmalion," who was taught how to speak like an aristocratic British woman.

As a language model that the user could interact with, ELIZA had a significant impact on today's artificial intelligence (AI), the researchers wrote in a paper posted to the preprint database arXiv Sunday (Jan. 12). The "DOCTOR" script written for ELIZA was programmed to respond to questions as a psychotherapist would. For example, ELIZA would say, "Please tell me your problem." If the user input "Men are all alike," the program would respond, "In what way."
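The DOCTOR script worked by matching keywords in the user's input and splicing captured fragments into canned response templates. A minimal sketch of that idea in Java (the rules and class name here are illustrative, not Weizenbaum's original script):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class MiniEliza {
    // A few DOCTOR-style rules: a keyword pattern mapped to a response
    // template. "$1" splices in captured user text, loosely mimicking
    // ELIZA's keyword-and-transform mechanism.
    private static final Map<Pattern, String> RULES = new LinkedHashMap<>();
    static {
        RULES.put(Pattern.compile("(?i).*\\ball alike\\b.*"), "In what way");
        RULES.put(Pattern.compile("(?i).*\\bi am (.*)"), "How long have you been $1?");
        RULES.put(Pattern.compile("(?i).*\\bmy (mother|father)\\b.*"), "Tell me more about your $1.");
    }

    public static String respond(String input) {
        for (Map.Entry<Pattern, String> rule : RULES.entrySet()) {
            Matcher m = rule.getKey().matcher(input.trim());
            if (m.matches()) {
                // The pattern matches the whole input, so replaceAll swaps
                // the entire line for the template, filling in $1 groups.
                return m.replaceAll(rule.getValue());
            }
        }
        // Default opener when no rule fires.
        return "Please tell me your problem.";
    }

    public static void main(String[] args) {
        System.out.println(respond("Men are all alike"));
    }
}
```

Run on the input quoted above, `respond("Men are all alike")` yields "In what way", mirroring the exchange described in the paper.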

Weizenbaum wrote ELIZA in a now-defunct programming language he invented, called Michigan Algorithm Decoder Symmetric List Processor (MAD-SLIP), but it was almost immediately copied into the language Lisp. With the advent of the early internet, the Lisp version of ELIZA went viral, and the original version became obsolete. Experts thought the original 420-line ELIZA code was lost until 2021, when study co-author Jeff Shrager, a cognitive scientist at Stanford University, and Myles Crowley, an MIT archivist, found it among Weizenbaum's papers. "I have a particular interest in how early AI pioneers thought," Shrager told Live Science in an email. "Having computer scientists' code is as close to having a record of their thoughts, and as ELIZA was — and remains, for better or for worse — a touchstone of early AI, I want to know what was in his mind...."

Even though it was intended to be a research platform for human-computer communication, "ELIZA was such a novelty at the time that its 'chatbotness' overwhelmed its research purposes," Shrager said.

I just remember that time 23 years ago when someone connected a Perl version of ELIZA to "an AOL Instant Messenger account that has a high rate of 'random' people trying to start conversations" to "put ELIZA in touch with the real world..."

Thanks to long-time Slashdot reader MattSparkes for sharing the news.
AI

Google Reports Halving Code Migration Time With AI Help 12

Google computer scientists have been using LLMs to streamline internal code migrations, achieving significant time savings of up to 89% in some cases. The findings appear in a pre-print paper titled "How is Google using AI for internal code migrations?" The Register reports:

Their focus is on bespoke AI tools developed for specific product areas, such as Ads, Search, Workspace and YouTube, instead of generic AI tools that provide broadly applicable services like code completion, code review, and question answering. Google's code migrations involved: changing 32-bit IDs in the 500-plus-million-line codebase for Google Ads to 64-bit IDs; converting its old JUnit3 testing library to JUnit4; and replacing the Joda time library with Java's standard java.time package. The int32 to int64 migration, the Googlers explain, was not trivial as the IDs were often generically defined (int32_t in C++ or Integer in Java) and were not easily searchable. They existed in tens of thousands of code locations across thousands of files. Changes had to be tracked across multiple teams and changes to class interfaces had to be considered across multiple files. "The full effort, if done manually, was expected to require hundreds of software engineering years and complex cross-team coordination," the authors explain.
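To make the shape of that change concrete, here is a hypothetical Java before/after (the class and field names are invented for illustration, not Google's code): widening a single ID field from Integer to Long forces matching changes in every constructor, getter, serialized form, and caller that handles it, which is why the effort fans out across teams.

```java
// Before: the ad ID is a generic 32-bit Integer, indistinguishable from
// any other int by type alone -- which is why these uses were hard to find.
class AdRecordV1 {
    private final Integer adId;
    AdRecordV1(Integer adId) { this.adId = adId; }
    Integer getAdId() { return adId; }
}

// After: the ID is widened to 64 bits. Every interface that carried the
// Integer must change in step, across files and team boundaries.
class AdRecordV2 {
    private final Long adId;
    AdRecordV2(Long adId) { this.adId = adId; }
    Long getAdId() { return adId; }
}

public class IdMigrationDemo {
    public static void main(String[] args) {
        // A value above Integer.MAX_VALUE (2,147,483,647) only fits
        // after the migration to 64-bit IDs.
        long wideId = 3_000_000_000L;
        AdRecordV2 rec = new AdRecordV2(wideId);
        System.out.println(rec.getAdId());
    }
}
```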

For their LLM-based workflow, Google's software engineers implemented the following process. An engineer from Ads would identify an ID in need of migration using a combination of code search, Kythe, and custom scripts. Then an LLM-based migration toolkit, triggered by someone knowledgeable in the art, was run to generate verified changes containing code that passed unit tests. Those changes would be manually checked by the same engineer and potentially corrected. Thereafter, the code changes would be sent to multiple reviewers who are responsible for the portion of the codebase affected by the changes. The result was that 80 percent of the code modifications in the change lists (CLs) were purely the product of AI; the remainder were either human-authored or human-edited AI suggestions.

"We discovered that in most cases, the human needed to revert at least some changes the model made that were either incorrect or not necessary," the authors observe. "Given the complexity and sensitive nature of the modified code, effort has to be spent in carefully rolling out each change to users." Based on this, Google undertook further work on LLM-driven verification to reduce the need for detailed review. Even with the need to double-check the LLM's work, the authors estimate that the time required to complete the migration was reduced by 50 percent. With LLM assistance, it took just three months to migrate 5,359 files and modify 149,000 lines of code to complete the JUnit3-JUnit4 transition. Approximately 87 percent of the code generated by AI ended up being committed with no changes. As for the Joda-Java time framework switch, the authors estimate a time saving of 89 percent compared to the projected manual change time, though no specifics were provided to support that assertion.
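As a rough illustration of why the Joda-to-java.time switch lends itself to this kind of largely mechanical rewriting, here is a hypothetical before/after (the method and dates are invented for illustration; the Joda version appears only in a comment, since that dependency is the one being removed):

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class TimeMigrationDemo {
    // Before (Joda-Time, shown as a comment):
    //   org.joda.time.LocalDate due =
    //       org.joda.time.LocalDate.parse(start).plusDays(30);
    //
    // After (standard java.time) -- the API was deliberately modeled on
    // Joda-Time, so most call sites translate almost one-for-one.
    static String dueDate(String start) {
        LocalDate due = LocalDate.parse(start).plusDays(30);
        return due.format(DateTimeFormatter.ISO_LOCAL_DATE);
    }

    public static void main(String[] args) {
        System.out.println(dueDate("2025-01-15"));  // 2025-02-14
    }
}
```

Translations this regular are exactly the kind of change an LLM can propose in bulk, with unit tests and human review catching the call sites where Joda and java.time semantics differ.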
