Crime

Global Scam Industry Evolving at 'Unprecedented Scale' Despite Recent Crackdown (cnn.com) 13

Online scam operations across Southeast Asia are rapidly adapting to recent crackdowns, adopting AI and expanding globally despite the release of 7,000 trafficking victims from compounds along the Myanmar-Thailand border, experts say. These releases represent just a fraction of an estimated 100,000 people trapped in facilities run by criminal syndicates that rake in billions through investment schemes and romance scams targeting victims worldwide, CNN reports.

"Billions of dollars are being invested in these kinds of businesses," said Kannavee Suebsang, a Thai lawmaker leading efforts to free those held in scam centers. "They will not stop." Crime groups are exploiting AI to write scamming scripts and using deepfakes to create personas, while networks have expanded to Africa, South Asia, and the Pacific region, according to the United Nations Office on Drugs and Crime. "This is a situation the region has never faced before," said John Wojcik, a UN organized crime analyst. "The evolving situation is trending towards something far more dangerous than scams alone."
Microsoft

Microsoft Urges Businesses To Abandon Office Perpetual Licenses 95

Microsoft is pushing businesses to shift away from perpetual Office licenses to Microsoft 365 subscriptions, citing collaboration limitations and rising IT costs associated with standalone software. "You may have started noticing limitations," Microsoft says in a post. "Your apps are stuck on your desktop, limiting productivity anytime you're away from your office. You can't easily access your files or collaborate when working remotely."

In its pitch, the Windows-maker says Microsoft 365 includes Office applications as well as security features, AI tools, and cloud storage. The post cites a Microsoft-commissioned Forrester study that claims the subscription model delivers "223% ROI over three years, with a payback period of less than six months" and "over $500,000 in benefits over three years."
AI

AI Masters Minecraft: DeepMind Program Finds Diamonds Without Being Taught 9

An AI system has for the first time figured out how to collect diamonds in the hugely popular video game Minecraft -- a difficult task requiring multiple steps -- without being shown how to play. Its creators say the system, called Dreamer, is a step towards machines that can generalize knowledge learned in one domain to new situations, a major goal of AI. From a report: "Dreamer marks a significant step towards general AI systems," says Danijar Hafner, a computer scientist at Google DeepMind in San Francisco, California. "It allows AI to understand its physical environment and also to self-improve over time, without a human having to tell it exactly what to do." Hafner and his colleagues describe Dreamer in a study in Nature published on 2 April.

In Minecraft, players explore a virtual 3D world containing a variety of terrains, including forests, mountains, deserts and swamps. Players use the world's resources to create objects, such as chests, fences and swords -- and collect items, among the most prized of which are diamonds. Importantly, says Hafner, no two experiences are the same. "Every time you play Minecraft, it's a new, randomly generated world," he says. This makes it useful for challenging an AI system that researchers want to be able to generalize from one situation to the next. "You have to really understand what's in front of you; you can't just memorize a specific strategy," he says.
AI

95% of Code Will Be AI-Generated Within Five Years, Microsoft CTO Says 130

Microsoft Chief Technology Officer Kevin Scott has predicted that AI will generate 95% of code within five years. Speaking on the 20VC podcast, Scott said AI would not replace software engineers but transform their role. "It doesn't mean that the AI is doing the software engineering job.... authorship is still going to be human," Scott said.

According to Scott, developers will shift from writing code directly to guiding AI through prompts and instructions. "We go from being an input master (programming languages) to a prompt master (AI orchestrator)," he said. Scott said the current AI systems have significant memory limitations, making them "awfully transactional," but predicted improvements within the next year.
AI

OpenAI Accused of Training GPT-4o on Unlicensed O'Reilly Books (techcrunch.com) 49

A new paper [PDF] from the AI Disclosures Project claims OpenAI likely trained its GPT-4o model on paywalled O'Reilly Media books without a licensing agreement. The nonprofit organization, co-founded by O'Reilly Media CEO Tim O'Reilly himself, used a method called DE-COP to detect copyrighted content in language model training data.

Researchers analyzed 13,962 paragraph excerpts from 34 O'Reilly books, finding that GPT-4o "recognized" significantly more paywalled content than older models like GPT-3.5 Turbo. The technique, also known as a "membership inference attack," tests whether a model can reliably distinguish human-authored texts from paraphrased versions.
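DE-COP's core idea is a multiple-choice quiz: show the model a verbatim passage alongside paraphrases and check whether it picks out the verbatim text far more often than chance would allow. The real method scores options with a language model's probabilities; the pure-Python sketch below (toy scoring function, hypothetical names) only illustrates that quiz-and-count logic:

```python
import random

def decop_quiz(score_fn, original, paraphrases, trials=100, seed=0):
    """DE-COP-style multiple-choice test: shuffle the verbatim passage
    in with paraphrases and count how often the model's scoring
    function ranks the verbatim text highest.  A hit rate well above
    chance (1 / number of options) suggests the passage was seen in
    training."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        options = [original] + list(paraphrases)
        rng.shuffle(options)
        hits += max(options, key=score_fn) == original
    return hits / trials, 1 / (1 + len(paraphrases))

# Toy stand-in for a model's log-likelihood: a "model" that has
# memorized the original scores it higher than any paraphrase.
memorized = "Exact sentence from a paywalled book."
toy_score = lambda text: 1.0 if text == memorized else 0.0

rate, chance = decop_quiz(toy_score, memorized,
                          ["A paraphrase of that sentence.",
                           "Another reworded version."])
# Here rate is 1.0 against a chance level of 1/3.
```

In the paper's setting the gap between hit rate and chance is measured per book, which is how GPT-4o's "recognition" of paywalled O'Reilly content was compared against older models.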

"GPT-4o [likely] recognizes, and so has prior knowledge of, many non-public O'Reilly books published prior to its training cutoff date," wrote the co-authors, which include O'Reilly, economist Ilan Strauss, and AI researcher Sruly Rosenblat.
Mozilla

Mozilla To Launch 'Thunderbird Pro' Paid Services (techspot.com) 36

Mozilla plans to introduce a suite of paid professional services for its open-source Thunderbird email client, transforming the application into a comprehensive communication platform. Dubbed "Thunderbird Pro," the package aims to compete with established ecosystems like Gmail and Office 365 while maintaining Mozilla's commitment to open-source software.

The Pro tier will include four core services: Thunderbird Appointment for streamlined scheduling, Thunderbird Send for file sharing (reviving the discontinued Firefox Send), Thunderbird Assist offering AI capabilities powered by Flower AI, and Thundermail, a new email service built on Stalwart's open-source stack. Initially, Thunderbird Pro will be available free to "consistent community contributors," with paid access for other users.

Mozilla Managing Director Ryan Sipes indicated the company may consider limited free tiers once the service establishes a sustainable user base. This initiative follows Mozilla's 2023 announcement about "remaking" Thunderbird's architecture to modernize its aging codebase, addressing user losses to more feature-rich competitors.
AI

MCP: the New 'USB-C For AI' That's Bringing Fierce Rivals Together (arstechnica.com) 30

An anonymous reader quotes a report from Ars Technica: What does it take to get OpenAI and Anthropic -- two competitors in the AI assistant market -- to get along? Despite a fundamental difference in direction that led Anthropic's founders to quit OpenAI in 2020 and later create the Claude AI assistant, a shared technical hurdle has now brought them together: How to easily connect their AI models to external data sources. The solution comes from Anthropic, which developed and released an open specification called Model Context Protocol (MCP) in November 2024. MCP establishes a royalty-free protocol that allows AI models to connect with outside data sources and services without requiring unique integrations for each service.

"Think of MCP as a USB-C port for AI applications," wrote Anthropic in MCP's documentation. The analogy is imperfect, but it represents the idea that, similar to how USB-C unified various cables and ports (with admittedly a debatable level of success), MCP aims to standardize how AI models connect to the infoscape around them. So far, MCP has also garnered interest from multiple tech companies in a rare show of cross-platform collaboration. For example, Microsoft has integrated MCP into its Azure OpenAI service, and as we mentioned above, Anthropic competitor OpenAI is on board. Last week, OpenAI acknowledged MCP in its Agents API documentation, with vocal support from the boss upstairs. "People love MCP and we are excited to add support across our products," wrote OpenAI CEO Sam Altman on X last Wednesday.

MCP has also rapidly begun to gain community support in recent months. For example, just browsing this list of over 300 open source servers shared on GitHub reveals growing interest in standardizing AI-to-tool connections. The collection spans diverse domains, including database connectors like PostgreSQL, MySQL, and vector databases; development tools that integrate with Git repositories and code editors; file system access for various storage platforms; knowledge retrieval systems for documents and websites; and specialized tools for finance, health care, and creative applications. Other notable examples include servers that connect AI models to home automation systems, real-time weather data, e-commerce platforms, and music streaming services. Some implementations allow AI assistants to interact with gaming engines, 3D modeling software, and IoT devices.
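Concretely, MCP servers expose tools over a JSON-RPC 2.0 exchange in which clients first list the available tools and then invoke them by name. The sketch below is a minimal illustration of that request/response shape, not a conforming implementation -- the `tools/list` and `tools/call` method names follow the spec, but the `get_weather` tool is invented, and real servers are built on the official SDKs:

```python
import json

# Minimal, illustrative MCP-style dispatcher.  Real servers speak
# JSON-RPC 2.0 over stdio or HTTP; the get_weather tool here is a
# made-up example for this sketch.
TOOLS = {
    "get_weather": {
        "description": "Return a canned weather report for a city.",
        "handler": lambda args: f"Sunny in {args['city']}",
    }
}

def handle(request_json):
    """Answer a single JSON-RPC request against the tools above."""
    req = json.loads(request_json)
    if req["method"] == "tools/list":
        result = {"tools": [{"name": name, "description": tool["description"]}
                            for name, tool in TOOLS.items()]}
    elif req["method"] == "tools/call":
        tool = TOOLS[req["params"]["name"]]
        text = tool["handler"](req["params"]["arguments"])
        result = {"content": [{"type": "text", "text": text}]}
    else:
        return json.dumps({"jsonrpc": "2.0", "id": req["id"],
                           "error": {"code": -32601, "message": "method not found"}})
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})
```

Any client that speaks the same protocol can discover the tool with a `tools/list` request and invoke it with `tools/call` -- which is the point of the standard: one integration per tool, rather than one per AI vendor.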

AI

DeepMind is Holding Back Release of AI Research To Give Google an Edge (arstechnica.com) 31

Google's AI arm DeepMind has been holding back the release of its world-renowned research, as it seeks to retain a competitive edge in the race to dominate the burgeoning AI industry. From a report: The group, led by Nobel Prize-winner Sir Demis Hassabis, has introduced a tougher vetting process and more bureaucracy that made it harder to publish studies about its work on AI, according to seven current and former research scientists at Google DeepMind. Three former researchers said the group was most reluctant to share papers that reveal innovations that could be exploited by competitors, or cast Google's own Gemini AI model in a negative light compared with others.

The changes represent a significant shift for DeepMind, which has long prided itself on its reputation for releasing groundbreaking papers and as a home for the best scientists building AI. Huge breakthroughs by Google researchers -- such as its 2017 "transformers" paper that provided the architecture behind large language models -- played a central role in creating today's boom in generative AI. Since then, DeepMind has become a central part of its parent company's drive to cash in on the cutting-edge technology, as investors expressed concern that the Big Tech group had ceded its early lead to the likes of ChatGPT maker OpenAI.

"I cannot imagine us putting out the transformer papers for general use now," said one current researcher. Among the changes in the company's publication policies is a six-month embargo before "strategic" papers related to generative AI are released. Researchers also often need to convince several staff members of the merits of publication, said two people with knowledge of the matter.

AI

Alan Turing Institute Plans Revamp in Face of Criticism and Technological Change 29

Britain's flagship AI agency will slash the number of projects it backs and prioritize work on defense, environment and health as it seeks to respond to technological advances and criticism of its record. From a report: The Alan Turing Institute -- named after the pioneering British computer scientist -- will shut or offload almost a quarter of its 101 current initiatives and is considering job cuts as part of a change programme that led scores of staff to write a letter expressing their loss of confidence in the leadership in December.

Jean Innes, appointed chief executive in July 2023, argued that huge advances in AI meant the Turing needed to modernise after being founded as a national data science institute by David Cameron's government a decade ago this month. "The Turing has chalked up some really great achievements," Innes said in an interview. "[But we need] a big strategic shift to a much more focused agenda on a small number of problems that have an impact in the real world." A review last year by UK Research and Innovation, the government funding body, found "a clear need for the governance and leadership structure of the Institute to evolve." It called for a move away from the dominance of universities to a structure more representative of AI in the UK.
AI

Anthropic Announces Updates On Security Safeguards For Its AI Models (cnbc.com) 39

Anthropic announced updates to the "responsible scaling" policy for its AI models, including defining which model capability levels are powerful enough to require additional security safeguards. In an earlier version of the policy, Anthropic said it would start sweeping physical offices for hidden devices as part of a ramped-up security effort as the AI race intensifies. From a report: The company, backed by Amazon and Google, published safety and security updates in a blog post on Monday, and said it also plans to establish an executive risk council and build an in-house security team. Anthropic closed its latest funding round earlier this month at a $61.5 billion valuation, making it one of the highest-valued AI startups.

In addition to high-growth startups, tech giants including Google, Amazon and Microsoft are racing to announce new products and features. Competition is also coming from China, a risk that became more evident earlier this year when DeepSeek's AI model went viral in the U.S. Anthropic said in the post that it will introduce "physical" safety processes, such as technical surveillance countermeasures -- or the process of finding and identifying surveillance devices that are used to spy on organizations. The sweeps will be conducted "using advanced detection equipment and techniques" and will look for "intruders."

CNBC corrected that story to note that it had written about previous security safeguards Anthropic shared in October 2024. On Monday, Anthropic defined model capabilities that would require additional deployment and security safeguards beyond AI Safety Level (ASL) 3.
Programming

'There is No Vibe Engineering' 121

Software engineer Sergey Tselovalnikov weighs in on the new hype: The term caught on and Twitter quickly flooded with posts about how AI has radically transformed coding and will soon replace all software engineers. While AI undeniably impacts the way we write code, it hasn't fundamentally changed our role as engineers. Allow me to explain.

[...] Vibe coding is interacting with the codebase via prompts. As the implementation is hidden from the "vibe coder", all the engineering concerns will inevitably get ignored. Many of the concerns are hard to express in a prompt, and many of them are hard to verify by only inspecting the final artifact. Historically, all engineering practices have tried to shift all those concerns left -- to the earlier stages of development when they're cheap to address. Yet with vibe coding, they're shifted very far to the right -- when addressing them is expensive.

The question of whether an AI system can perform the complete engineering cycle and build and evolve software the same way a human can remains open. However, there are no signs of it being able to do so at this point, and if it one day happens, it won't have anything to do with vibe coding -- at least the way it's defined today.

[...] It is possible that there'll be a future where software is built from vibe-coded blocks, but the work of designing software able to evolve and scale doesn't go away. That's not vibe engineering -- that's just engineering, even if the coding part of it will look a bit different.
Intel

Intel CEO Lip-Bu Tan Says Company Will Spin Off Non-Core Units (msn.com) 41

Intel Chief Executive Officer Lip-Bu Tan said the chipmaker will spin off assets that aren't central to its mission and create new products including custom semiconductors to try to better align itself with customers. From a report: Intel needs to replace the engineering talent it has lost, improve its balance sheet and better attune manufacturing processes to meet the needs of potential customers, Tan said. Speaking at his first public appearance as CEO, at the Intel Vision conference Monday in Las Vegas, Tan didn't specify what parts of Intel he believes are no longer central to its future.

"We have a lot of hard work ahead," Tan said, addressing the company's customers in the audience. "There are areas where we've fallen short of your expectations." The veteran semiconductor executive is trying to restore the fortunes of a company that dominated an industry for decades, but now finds itself chasing rivals in most of the areas that define success in the field. A key question confronting its leadership is whether a turnaround is best served by the company remaining whole or splitting up its key product and manufacturing operations. Tan gave no indication that he will seek to divest either part of Intel. Instead, he highlighted the problems he needs to fix to get both units performing more successfully. Intel's chips for data center and AI-related work in particular are not good enough, he said. "We fell behind on innovation," the CEO said. "We have been too slow to adapt and meet your needs."

AI

OpenAI Plans To Release a New 'Open' AI Language Model In the Coming Months 6

OpenAI plans to release a new open-weight language model -- its first since GPT-2 -- in the coming months and is seeking community feedback to shape its development. "That's according to a feedback form the company published on its website Monday," reports TechCrunch. "The form, which OpenAI is inviting 'developers, researchers, and [members of] the broader community' to fill out, includes questions like 'What would you like to see in an open-weight model from OpenAI?' and 'What open models have you used in the past?'" From the report: "We're excited to collaborate with developers, researchers, and the broader community to gather inputs and make this model as useful as possible," OpenAI wrote on its website. "If you're interested in joining a feedback session with the OpenAI team, please let us know [in the form] below." OpenAI plans to host developer events to gather feedback and, in the future, demo prototypes of the model. The first will take place in San Francisco within a few weeks, followed by sessions in Europe and Asia-Pacific regions.

OpenAI is facing increasing pressure from rivals such as Chinese AI lab DeepSeek, which have adopted an "open" approach to launching models. In contrast to OpenAI's strategy, these "open" competitors make their models available to the AI community for experimentation and, in some cases, commercialization.
AI

ChatGPT 'Added One Million Users In the Last Hour' 30

OpenAI is having another viral moment after releasing Images for ChatGPT last week, with millions of people creating Studio Ghibli-inspired AI art. In a post on X today, CEO Sam Altman said the company has "added one million users in the last hour" alone. A few days prior he begged users to stop generating images because he said "our GPUs are melting."
IT

Micron Hikes Memory Prices Amid Surging AI Demand (tomshardware.com) 15

Micron will raise prices for DRAM and NAND flash memory chips through 2026 as AI and data center demand strains supply chains, the U.S. chipmaker confirmed Monday. The move follows a market rebound from previous oversupply, with memory prices steadily climbing as producers cut output while AI and high-performance computing workloads grow.

Rivals Samsung Electronics and SK Hynix are expected to implement similar increases. Micron cited "un-forecasted demand across various business segments" in communications to channel partners. The price hikes will impact sectors ranging from consumer electronics to enterprise data centers.
China

Microsoft Shutters AI Lab in Shanghai, Signalling a Broader Pullback From China (scmp.com) 9

An anonymous reader shares a report: Microsoft has closed its IoT & AI Insider Lab in Shanghai's Zhangjiang hi-tech zone, marking the latest sign of the US tech giant's retreat from China amid rising geopolitical tensions.

The Shanghai lab, meant to help with domestic development of the Internet of Things (IoT) and artificial intelligence (AI) technologies, closed earlier this year, according to people who work in the Zhangjiang AI Island area. Opened in May 2019, Microsoft's IoT & AI Insider Lab was touted as a flagship collaboration between the global tech giant and Zhangjiang, the innovation hub of Shanghai's Pudong district, where numerous domestic and international semiconductor and AI companies have set up shop. The lab covered roughly 2,800 square meters (30,000 square feet).

Programming

'No Longer Think You Should Learn To Code,' Says CEO of AI Coding Startup (x.com) 108

Learning to code has sort of become pointless as AI increasingly dominates programming tasks, said Replit founder and chief executive Amjad Masad. "I no longer think you should learn to code," Masad wrote on X.

The statement comes as major tech executives report significant AI inroads into software development. Google CEO Sundar Pichai recently revealed that 25% of new code at the tech giant is AI-generated, though still reviewed by engineers. Furthermore, Anthropic CEO Dario Amodei predicted AI could generate up to 90% of all code within six months.

Masad called this shift a "bittersweet realization" after spending years popularizing coding through open-source work, Codecademy, and Replit -- a platform that now uses AI to help users build apps and websites. Instead of syntax-focused programming skills, Masad recommends learning "how to think, how to break down problems... how to communicate clearly, with humans and with machines."
AI

Copilot Can't Beat a 2013 'TouchDevelop' Code Generation Demo for Windows Phone 18

What happens when you ask Copilot to "write a program that can be run on an iPhone 16 to select 15 random photos from the phone, tint them to random colors, and display the photos on the phone"?

That's what TouchDevelop did for the long-discontinued Windows Phone in a 2013 Microsoft Research 'SmartSynth' natural language code generation demo. ("Write scripts by tapping on the screen.")

Long-time Slashdot reader theodp reports on what happens when, a dozen years later, you pose the same question to Copilot: "You'll get lots of code and caveats from Copilot, but nothing that you can execute as is. (Compare that to the functioning 10 lines of code TouchDevelop program.) It's a good reminder that just because GenAI can generate code, it doesn't necessarily mean it will generate the least amount of code, the most understandable or appropriate code for the requestor, or code that runs unchanged and produces the desired results."

theodp also reminds us that TouchDevelop "was (like BASIC) abandoned by Microsoft..." Interestingly, a Microsoft Research video from CS Education Week 2011 shows enthusiastic Washington high school students participating in an hour-long TouchDevelop coding lesson and demonstrating the apps they created that tapped into music, photos, the Internet, and yes, even their phone's functionality. It also underscores how far iPhone and Android still lag today when it comes to easy programmability for the masses. (When asked, Copilot replied that Apple's Shortcuts app wasn't up to the task.)
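For a sense of how small the requested program can be, here is a platform-neutral Python sketch of the core task -- pick photos at random and tint each one a random color -- operating on in-memory RGB pixel lists rather than a real phone photo library (the function and data names are hypothetical):

```python
import random

def tint_random_photos(photos, n=15, seed=None):
    """Pick n photos at random and blend every pixel 50/50 with a
    random tint color.  Each "photo" is a list of (r, g, b) tuples."""
    rng = random.Random(seed)
    tinted = []
    for photo in rng.sample(photos, n):
        tint = tuple(rng.randrange(256) for _ in range(3))
        tinted.append([tuple((c + t) // 2 for c, t in zip(px, tint))
                       for px in photo])
    return tinted

# Twenty single-pixel grayscale "photos"; select and tint 15 of them.
photos = [[(i, i, i)] for i in range(20)]
result = tint_random_photos(photos, seed=1)
```

The logic fits in roughly the ten lines TouchDevelop needed; what a modern assistant struggles with is wiring that logic, unchanged, into a phone's actual photo APIs and display.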
Robotics

China is Already Testing AI-Powered Humanoid Robots in Factories (msn.com) 71

The U.S. and China "are racing to build a truly useful humanoid worker," the Wall Street Journal wrote Saturday, adding that "Whoever wins could gain a huge edge in countless industries."

"The time has come for robots," Nvidia's chief executive said at a conference in March, adding "This could very well be the largest industry of all." China's government has said it wants the country to be a world leader in humanoid robots by 2027. "Embodied" AI is listed as a priority of a new $138 billion state venture investment fund, encouraging private-sector investors and companies to pile into the business. It looks like the beginning of a familiar tale. Chinese companies make most of the world's EVs, ships and solar panels — in each case, propelled by government subsidies and friendly regulations. "They have more companies developing humanoids and more government support than anyone else. So, right now, they may have an edge," said Jeff Burnstein [president of the Association for Advancing Automation, a trade group in Ann Arbor, Michigan]....

Humanoid robots need three-dimensional data to understand physics, and much of it has to be created from scratch. That is where China has a distinct edge: The country is home to an immense number of factories where humanoid robots can absorb data about the world while performing tasks. "The reason why China is making rapid progress today is because we are combining it with actual applications and iterating and improving rapidly in real scenarios," said Cheng Yuhang, a sales director with Deep Robotics, one of China's robot startups. "This is something the U.S. can't match." UBTech, the startup that is training humanoid robots to sort and carry auto parts, has partnerships with top Chinese automakers including Geely... "A problem can be solved in a month in the lab, but it may only take days in a real environment," said a manager at UBTech...

With China's manufacturing prowess, a locally built robot could eventually cost less than half as much as one built elsewhere, said Ming Hsun Lee, a Bank of America analyst. He said he based his estimates on China's electric-vehicle industry, which has grown rapidly to account for roughly 70% of global EV production. "I think humanoid robots will be another EV industry for China," he said. The UBTech robot system, called Walker S, currently costs hundreds of thousands of dollars including software, according to people close to the company. UBTech plans to deliver 500 to 1,000 of its Walker S robots to clients this year, including the Apple supplier Foxconn. It hopes to increase deliveries to more than 10,000 in 2027.

Few companies outside China have started selling AI-powered humanoid robots. Industry insiders expect the competition to play out over decades, as the robots tackle more-complicated environments, such as private homes.

The article notes "several" U.S. humanoid robot producers, including the startup Figure. And robots from Amazon's Agility Robotics have been tested in Amazon warehouses since 2023. "The U.S. still has advantages in semiconductors, software and some precision components," the article points out.

But "Some lawmakers have urged the White House to ban Chinese humanoids from the U.S. and further restrict Chinese robot makers' access to American technology, citing national-security concerns..."
AI

Bloomberg's AI-Generated News Summaries Had At Least 36 Errors Since January (nytimes.com) 25

The giant financial news site Bloomberg "has been experimenting with using AI to help produce its journalism," reports the New York Times. But "It hasn't always gone smoothly."

While Bloomberg announced on January 15 that it would add three AI-generated bullet points at the top of articles as a summary, "The news outlet has had to correct at least three dozen A.I.-generated summaries of articles published this year." (This Wednesday they published a "hallucinated" date for the start of U.S. auto tariffs, and earlier in March claimed president Trump had imposed tariffs on Canada in 2024, while other errors have included incorrect figures and incorrect attribution.) Bloomberg is not alone in trying A.I. — many news outlets are figuring out how best to embrace the new technology and use it in their reporting and editing. The newspaper chain Gannett uses similar A.I.-generated summaries on its articles, and The Washington Post has a tool called "Ask the Post" that generates answers to questions from published Post articles. And problems have popped up elsewhere. Earlier this month, The Los Angeles Times removed its A.I. tool from an opinion article after the technology described the Ku Klux Klan as something other than a racist organization.

Bloomberg News said in a statement that it publishes thousands of articles each day, and "currently 99 percent of A.I. summaries meet our editorial standards...." The A.I. summaries are "meant to complement our journalism, not replace it," the statement added....

John Micklethwait, Bloomberg's editor in chief, laid out the thinking about the A.I. summaries in a January 10 essay, which was an excerpt from a lecture he had given at City St. George's, University of London. "Customers like it — they can quickly see what any story is about. Journalists are more suspicious," he wrote. "Reporters worry that people will just read the summary rather than their story." But, he acknowledged, "an A.I. summary is only as good as the story it is based on. And getting the stories is where the humans still matter."

A Bloomberg spokeswoman told the Times that the feedback they'd received to the summaries had generally been positive — "and we continue to refine the experience."

Slashdot Top Deals