AI

AI Crawlers Haven't Learned To Play Nice With Websites (theregister.com) 57

SourceHut, an open-source-friendly git-hosting service, says web crawlers for AI companies are slowing down services through their excessive demands for data. From a report: "SourceHut continues to face disruptions due to aggressive LLM crawlers," the biz reported Monday on its status page. "We are continuously working to deploy mitigations. We have deployed a number of mitigations which are keeping the problem contained for now. However, some of our mitigations may impact end-users."

SourceHut said it had deployed Nepenthes, a tar pit to catch web crawlers that scrape data primarily for training large language models, and noted that doing so might degrade access to some web pages for users. "We have unilaterally blocked several cloud providers, including GCP [Google Cloud] and [Microsoft] Azure, for the high volumes of bot traffic originating from their networks," the biz said, advising administrators of services that integrate with SourceHut to get in touch to arrange an exception to the blocking.
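
For context on how mitigations of this kind typically work, here is a minimal, hypothetical sketch of a WSGI middleware that refuses requests from known LLM-crawler user agents or from blocked network ranges. The user-agent strings and the CIDR block are illustrative placeholders, not SourceHut's actual rules or infrastructure.

```python
# Hypothetical sketch of the kind of crawler mitigation described above:
# reject requests whose user agent matches a known LLM crawler, or whose
# source address falls inside a blocked provider's network range.
# All strings and ranges here are illustrative placeholders.
import ipaddress

BLOCKED_AGENTS = ("GPTBot", "ClaudeBot", "CCBot")              # example crawler user agents
BLOCKED_NETWORKS = [ipaddress.ip_network("203.0.113.0/24")]    # example range (TEST-NET-3)

def crawler_block_middleware(app):
    def middleware(environ, start_response):
        agent = environ.get("HTTP_USER_AGENT", "")
        addr = ipaddress.ip_address(environ.get("REMOTE_ADDR", "0.0.0.0"))
        if any(ua in agent for ua in BLOCKED_AGENTS) or any(addr in net for net in BLOCKED_NETWORKS):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Automated crawling is not permitted.\n"]
        return app(environ, start_response)
    return middleware
```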

Robotics

Nvidia Says 'the Age of Generalist Robotics Is Here' (theverge.com) 125

During the company's GTC 2025 keynote today, Nvidia founder and CEO Jensen Huang announced Isaac GR00T N1 -- the company's first open-source, pre-trained yet customizable foundation model designed to accelerate the development and capabilities of humanoid robots. "The age of generalist robotics is here," said Huang. "With Nvidia Isaac GR00T N1 and new data-generation and robot-learning frameworks, robotics developers everywhere will open the next frontier in the age of AI." The Verge reports: Huang demonstrated 1X's NEO Gamma humanoid robot performing autonomous tidying jobs using a post-trained policy built on the GR00T N1 model. [...] Other companies developing humanoid robots that have had early access to the GR00T N1 model include Boston Dynamics, the creators of Atlas; Agility Robotics; Mentee Robotics; and Neura Robotics. Originally announced as Project GR00T a year ago, the GR00T N1 foundation model utilizes a dual-system architecture inspired by human cognition.

System 1, as Nvidia calls it, is described as a "fast-thinking action model" that behaves similarly to human reflexes and intuition. It was trained on data collected through human demonstrations and synthetic data generated by Nvidia's Omniverse platform. System 2, which is powered by a vision language model, is a "slow-thinking model" that "reasons about its environment and the instructions it has received to plan actions." Those plans are passed along to System 1, which translates them into "precise, continuous robot movements" that include grasping, moving objects with one or two arms, as well as more complex multistep tasks that involve combinations of basic skills.
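
To make the dual-system idea concrete, here is a minimal sketch of how a slow vision-language planner ("System 2") might hand plans to a fast low-level action policy ("System 1"). All class and method names are hypothetical illustrations, not Nvidia's actual GR00T N1 API.

```python
# Hypothetical sketch of a System 2 / System 1 control loop as described above.
# Names, interfaces, and the stub robot/camera are invented for illustration.
from dataclasses import dataclass
from typing import List

@dataclass
class Plan:
    steps: List[str]                      # e.g. ["approach object", "grasp", "place"]

class SlowPlanner:                        # "System 2": reasons about scene and instruction
    def plan(self, image, instruction: str) -> Plan:
        # A real system would query a vision language model here.
        return Plan(steps=[f"approach target for: {instruction}", "grasp", "place"])

class FastPolicy:                         # "System 1": reflex-like action model
    def act(self, step: str, joint_state: List[float]) -> List[float]:
        # A real policy would emit continuous, high-rate motor commands.
        return [0.0] * len(joint_state)   # placeholder joint targets

class StubCamera:
    def read(self):
        return None                       # placeholder image

class StubRobot:
    def state(self) -> List[float]:
        return [0.0] * 7

    def apply(self, command: List[float]) -> None:
        pass

def control_loop(planner, policy, camera, robot, instruction: str) -> None:
    plan = planner.plan(camera.read(), instruction)        # slow, runs occasionally
    for step in plan.steps:
        for _ in range(100):                               # fast inner loop per step
            robot.apply(policy.act(step, robot.state()))

if __name__ == "__main__":
    control_loop(SlowPlanner(), FastPolicy(), StubCamera(), StubRobot(), "tidy the table")
```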

While the GR00T N1 foundation model is pretrained with generalized humanoid reasoning and skills, developers can customize its behavior and capabilities for specific needs by post-training it with data gathered from human demonstrations or simulations. Nvidia has made GR00T N1 training data and task evaluation scenarios available for download through Hugging Face and GitHub.

AI

Italian Newspaper Says It Has Published World's First AI-Generated Edition (theguardian.com) 25

Italian newspaper Il Foglio claims to have published the world's first entirely AI-generated edition as part of a month-long experiment to explore AI's impact on journalism. The special four-page supplement, available in print and online, features AI-written articles, headlines, and reader letters. The human journalists provided only the prompts. The Guardian reports: The front page of the first edition of Il Foglio AI carries a story referring to the US president, Donald Trump, describing the "paradox of Italian Trumpians" and how they rail against "cancel culture" yet either turn a blind eye, or worse, "celebrate" when "their idol in the US behaves like the despot of a banana republic." The front page also features a column headlined "Putin, the 10 betrayals," with the article highlighting "20 years of broken promises, torn-up agreements and words betrayed" by Vladimir Putin, the Russian president.

In a rare upbeat story about the Italian economy, another article points to the latest report from Istat, the national statistics agency, on the redistribution of income, which shows the country "is changing, and not for the worse" with salary increases for about 750,000 workers being among the positive effects of income tax reforms. On page 2 is a story about "situationships" and how young Europeans are fleeing steady relationships. The articles were structured, straightforward and clear, with no obvious grammatical errors. However, none of the articles published in the news pages directly quote any human beings.

The final page runs AI-generated letters from readers to the editor, with one asking whether AI will render humans "useless" in the future. "AI is a great innovation, but it doesn't yet know how to order a coffee without getting the sugar wrong," reads the AI-generated response.

United States

FTC Removes Posts Critical of Amazon, Microsoft, and AI Companies (wired.com) 71

The Federal Trade Commission has removed over 300 business guidance blog posts published during former President Biden's term, including consumer protection information on AI and privacy lawsuits against Amazon and Microsoft, WIRED reported Tuesday, citing current and former FTC employees.

Deleted posts included guidance about Amazon's alleged use of Ring camera data to train algorithms, Microsoft's $20 million settlement over Xbox children's data collection, and compliance standards for AI chatbots. New FTC Chair Andrew Ferguson has pledged to pursue tech companies but with a focus on alleged conservative censorship rather than data collection practices.

Transportation

GM Taps Nvidia To Boost Its Self-Driving Projects 11

General Motors is partnering with Nvidia to enhance its self-driving and manufacturing capabilities by leveraging Nvidia's AI chips, software, and simulation tools. "GM says it will apply several of Nvidia's products to its business, such as the Omniverse 3D graphics platform which will run simulations on virtual assembly lines with an eye on reducing downtime and improving efficiency," reports The Verge. "The automaker also plans to equip its next-generation vehicles with Nvidia's 'AI brain' for advanced driver assistance and autonomous driving. And it will employ the chipmaker's AI training software to make its vehicle assembly line robots better at certain tasks, like precision welding and material handling." From the report: GM already uses Nvidia's GPUs to train its AI software for simulation and validation. Today's announcement was about expanding those use cases into improving its manufacturing operations and autonomous vehicles, GM CEO Mary Barra said in a statement. (Dave Richardson, GM's senior VP of Software and Services Engineering, will be joining Nvidia's Norm Marks for a fireside chat at the conference.) "AI not only optimizes manufacturing processes and accelerates virtual testing but also helps us build smarter vehicles while empowering our workforce to focus on craftsmanship," Barra said. "By merging technology with human ingenuity, we unlock new levels of innovation in vehicle manufacturing and beyond."

GM will adopt Nvidia's in-car software products to build next-gen vehicles with autonomous driving capabilities. That includes the company's Drive AGX system-on-a-chip (SoC), similar to Tesla's Full Self-Driving chip or Intel's Mobileye EyeQ. The SoC runs the "safety-certified" DriveOS operating system, built on the Blackwell GPU architecture, which is capable of delivering 1,000 trillion operations per second (TOPS) of high-performance compute, the company says. [...] In a briefing with reporters, Ali Kani, Nvidia's vice president and general manager of automotive, described the chipmaking company's automotive business as still in its "infancy," with the expectation that it will only bring in $5 billion this year. (Nvidia reported over $130 billion in revenue in 2024 for all its divisions.)

Nvidia's chips are in less than 1 percent of the billions of cars on the road today, he added. But the future looks promising. The company is also announcing deals with Tier 1 auto supplier Magna, which helped build Sony's Afeela concept, to use Drive AGX in the company's next-generation advanced driver assist software. "We believe automotive is a trillion dollar opportunity for Nvidia," Kani said.

AI

Nvidia Reveals Next-Gen AI Chips, Roadmap Through 2028 (cnbc.com) 9

Nvidia unveiled its next wave of AI processors at GTC on Tuesday, announcing Blackwell Ultra chips that will ship in the second half of 2025, followed by the Vera Rubin architecture in 2026. CEO Jensen Huang also revealed that its 2028 chips will be named after physicist Richard Feynman.

The Blackwell Ultra maintains the same 20 petaflops of AI performance as standard Blackwell chips but increases memory from 192GB to 288GB of HBM3e. Nvidia claims these chips can process 1,000 tokens per second -- ten times faster than its 2022 hardware -- enabling AI reasoning tasks like running DeepSeek-R1 models with 10-second response times versus 1.5 minutes on H100 chips.
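
As a back-of-the-envelope check using only the figures quoted here, the claimed response times are consistent with roughly an order-of-magnitude speedup:

\[
\frac{90\ \text{s (H100, 1.5 min)}}{10\ \text{s (Blackwell Ultra)}} = 9 \approx 10\times,
\qquad
1{,}000\ \tfrac{\text{tokens}}{\text{s}} \times 10\ \text{s} \approx 10{,}000\ \text{tokens per response}.
\]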

Vera Rubin will deliver a substantial leap to 50 petaflops in 2026, featuring Nvidia's first custom Arm-based CPU design called Olympus. Nvidia is also changing how it counts GPUs -- Rubin itself contains two dies working as one chip. The annual release cadence represents a strategic shift for Nvidia, which previously introduced new architectures every two years before the AI boom transformed its business.

The Courts

US Appeals Court Rejects Copyrights For AI-Generated Art (yahoo.com) 47

An anonymous reader quotes a report from Reuters: A federal appeals court in Washington, D.C., on Tuesday affirmed that a work of art generated by artificial intelligence without human input cannot be copyrighted under U.S. law. The U.S. Court of Appeals for the District of Columbia Circuit agreed with the U.S. Copyright Office that an image created by Stephen Thaler's AI system "DABUS" was not entitled to copyright protection, and that only works with human authors can be copyrighted.

Tuesday's decision marks the latest attempt by U.S. officials to grapple with the copyright implications of the fast-growing generative AI industry. The Copyright Office has separately rejected artists' bids for copyrights on images generated by the AI system Midjourney. The artists argued they were entitled to copyrights for images they created with AI assistance -- unlike Thaler, who said that his "sentient" system created the image in his case independently. [...]

U.S. Circuit Judge Patricia Millett wrote for a unanimous three-judge panel on Tuesday that U.S. copyright law "requires all work to be authored in the first instance by a human being." "Because many of the Copyright Act's provisions make sense only if an author is a human being, the best reading of the Copyright Act is that human authorship is required for registration," the appeals court said.

United States

Vance Slams Globalization For Hampering American Innovation (thehill.com) 246

U.S. Vice President J.D. Vance denounced decades of globalization for hampering American innovation in a speech to entrepreneurs and venture capitalists on Tuesday, arguing that offshoring has eroded U.S. technological leadership. "Our workers have been failed by the government of the last 40 years," Vance told the American Dynamism Summit, criticizing two "conceits" of globalization: that nations manufacturing products wouldn't eventually design them too, and that cheap foreign labor benefits innovation.

"As they got better at the low end of the value chain, they also started catching up on the higher end. We were squeezed from both ends," Vance said, adding that "cheap labor is fundamentally a crutch" that inhibits technological advancement. The Trump administration recently rolled back Biden-era AI regulations, with Vance emphasizing their goal to "incentivize investment in our own borders, in our own businesses, our own workers and our own innovation." Vance, a former venture capitalist, dismissed fears about AI eliminating jobs, comparing it to ATMs which ultimately created more financial sector roles.

IT

The First New Pebble Smartwatches Are Coming Later This Year (theverge.com) 20

Eric Migicovsky, founder of Pebble, will release two new smartwatches running the newly open-sourced Pebble operating system through his company Core Devices. The Core 2 Duo, priced at $149 and shipping in July, utilizes unused Pebble 2 frames with the same black-and-white e-paper display.

The device features a 30-day battery life -- quadruple its predecessor's -- and incorporates a speaker for AI assistant interaction. Approximately 10,000 units will be available. The Core Time 2, arriving in December at $225, adds touchscreen functionality to the classic Pebble design while maintaining physical buttons and month-long battery life.

Both devices face iPhone integration challenges. Migicovsky cautioned potential tariff increases would be passed to consumers, stating, "We're going to charge more if it costs more." "I'm not building a company to sell millions of these," Migicovsky said. "The goal is to make something I really want."

Facebook

Meta's Llama AI Models Hit 1 Billion Downloads, Zuckerberg Says (techcrunch.com) 17

Meta's open AI model family Llama has reached 1 billion downloads, CEO Mark Zuckerberg announced on Tuesday, marking a 53% increase from the 650 million reported in early December. Llama, which powers Meta's AI assistant across Facebook, Instagram and WhatsApp, operates under a proprietary license that some developers consider commercially restrictive despite its free availability. Major corporations including Spotify, AT&T and DoorDash currently deploy Llama models in production environments.

Programming

'Vibe Coding' is Letting 10 Engineers Do the Work of a Team of 50 To 100, Says YC CEO (businessinsider.com) 159

Y Combinator CEO Garry Tan said startups are reaching $1 million to $10 million in annual revenue with fewer than 10 employees due to "vibe coding," a term coined by OpenAI cofounder Andrej Karpathy in February.

"You can just talk to the large language models and they will code entire apps," Tan told CNBC (video). "You don't have to hire someone to do it, you just talk directly to the large language model that wrote it and it'll fix it for you." What would've once taken "50 or 100" engineers to build, he believes can now be accomplished by a team of 10, "when they are fully vibe coders." He adds: "When they are actually really, really good at using the cutting edge tools for code gen today, like Cursor or Windsurf, they will literally do the work of 10 or 100 engineers in the course of a single day."

According to Tan, 81% of Y Combinator's current startup batch consists of AI companies, with 25% having 95% of their code written by large language models. Despite limitations in debugging capabilities, Tan said the technology enables small teams to perform work previously requiring dozens of engineers and makes previously overlooked niche markets viable for software businesses.

AI

Hollywood Urges Trump To Not Let AI Companies 'Exploit' Copyrighted Works (variety.com) 105

An anonymous reader quotes a report from Variety: More than 400 Hollywood creative leaders signed an open letter to the Trump White House's Office of Science and Technology Policy, urging the administration to not roll back copyright protections at the behest of AI companies. The filmmakers, writers, actors, musicians and others -- which included Ben Stiller, Mark Ruffalo, Cynthia Erivo, Cate Blanchett, Cord Jefferson, Paul McCartney, Ron Howard and Taika Waititi -- were submitting comments for the Trump administration's U.S. AI Action Plan. The letter specifically was penned in response to recent submissions to the Office of Science and Technology Policy from OpenAI and Google, which asserted that U.S. copyright law allows (or should allow) AI companies to train their systems on copyrighted works without obtaining permission from (or compensating) rights holders.

"We firmly believe that America's global AI leadership must not come at the expense of our essential creative industries," the letter says in part. The letter claims that "AI companies are asking to undermine this economic and cultural strength by weakening copyright protections for the films, television series, artworks, writing, music and voices used to train AI models at the core of multibillion-dollar corporate valuations." [...] The letter says Google and OpenAI "are arguing for a special government exemption so they can freely exploit America's creative and knowledge industries, despite their substantial revenues and available funds. There is no reason to weaken or eliminate the copyright protections that have helped America flourish."
You can read the full statement and list of signatories here.

The letter was issued in response to recent submissions from OpenAI (PDF) and Google (PDF) claiming that U.S. law allows, or should allow, AI companies to train their programs on copyrighted works under the fair use legal doctrine.

Google

People Are Using Google's New AI Model To Remove Watermarks From Images (techcrunch.com) 14

An anonymous reader quotes a report from TechCrunch: Last week, Google expanded access to its Gemini 2.0 Flash model's image generation feature, which lets the model natively generate and edit image content. It's a powerful capability, by all accounts. But it also appears to have few guardrails. Gemini 2.0 Flash will uncomplainingly create images depicting celebrities and copyrighted characters, and -- as alluded to earlier -- remove watermarks from existing photos.

As several X and Reddit users noted, Gemini 2.0 Flash won't just remove watermarks, but will also attempt to fill in any gaps created by a watermark's deletion. Other AI-powered tools do this, too, but Gemini 2.0 Flash seems to be exceptionally skilled at it -- and free to use. To be clear, Gemini 2.0 Flash's image generation feature is labeled as "experimental" and "not for production use" at the moment, and is only available in Google's developer-facing tools like AI Studio. The model also isn't a perfect watermark remover. Gemini 2.0 Flash appears to struggle with certain semi-transparent watermarks and watermarks that canvas large portions of images.

Windows

Huawei To Pivot To Linux, HarmonyOS as Microsoft Windows License Expires 37

Huawei will no longer be able to produce or sell Windows-based PCs as Microsoft's supply license to the Chinese tech company expires this month, according to Chinese tech site MyDrivers. The restriction comes as Huawei remains on the U.S. Department of Commerce's Entity List, requiring American companies to obtain special export licenses to conduct business with the firm.

Richard Yu, executive director of Huawei's consumer business unit, said the company is preparing to pivot to alternative operating systems. Huawei had previously announced plans to abandon Windows for future PC generations. The Chinese tech giant will introduce a new "AI PC" laptop in April running its own Kunpeng CPU and HarmonyOS, alongside a MateBook D16 Linux Edition, its first Linux-based laptop.

Social Networks

Bluesky Proposes 'New Standard' When Scraping Data for AI Training (techcrunch.com) 52

An anonymous reader shared this article from TechCrunch: Social network Bluesky recently published a proposal on GitHub outlining new options it could give users to indicate whether they want their posts and data to be scraped for things like generative AI training and public archiving.

CEO Jay Graber discussed the proposal earlier this week, while on-stage at South by Southwest, but it attracted fresh attention on Friday night, after she posted about it on Bluesky. Some users reacted with alarm to the company's plans, which they saw as a reversal of Bluesky's previous insistence that it won't sell user data to advertisers and won't train AI on user posts.... Graber replied that generative AI companies are "already scraping public data from across the web," including from Bluesky, since "everything on Bluesky is public like a website is public." So she said Bluesky is trying to create a "new standard" to govern that scraping, similar to the robots.txt file that websites use to communicate their permissions to web crawlers...

If a user indicates that they don't want their data used to train generative AI, the proposal says, "Companies and research teams building AI training sets are expected to respect this intent when they see it, either when scraping websites, or doing bulk transfers using the protocol itself."
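
To make the proposal's intent concrete, here is a hypothetical sketch of how a well-behaved scraper might honor such a declared preference. The record shape and field names below are invented for illustration and are not taken from Bluesky's actual GitHub proposal.

```python
# Hypothetical example of a training-data scraper honoring per-user intent,
# analogous in spirit to robots.txt. The "allow_ai_training" field is invented
# for illustration; the real Bluesky proposal defines its own schema.
posts = [
    {"author": "alice.example", "text": "hello world"},
    {"author": "bob.example", "text": "please don't train on this"},
]

declared_intents = {
    "alice.example": {"allow_ai_training": True},
    "bob.example": {"allow_ai_training": False},
}

def allowed_for_training(author: str) -> bool:
    # Conservative default: exclude posts when no preference has been declared.
    return declared_intents.get(author, {}).get("allow_ai_training", False)

training_set = [p for p in posts if allowed_for_training(p["author"])]
print(training_set)   # only alice.example's post is kept
```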

Over on Threads someone had a different wish for our AI-enabled future. "I want to be able to conversationally chat to my feed algorithm. To be able to explain to it the types of content I want to see, and what I don't want to see. I want this to be an ongoing conversation as it refines what it shows me, or my interests change."

"Yeah I want this too," posted top Instagram/Threads executive Adam Mosseri, who said he'd talked about the idea with VC Sam Lessin. "There's a ways to go before we can do this at scale, but I think it'll happen eventually."

AI

Google's AI 'Co-Scientist' Solved a 10-Year Superbug Problem in Two Days (livescience.com) 48

Google collaborated with Imperial College London and its "Fleming Initiative" partnership with Imperial NHS, giving their scientists "access to a powerful new AI" built with Gemini 2.0 and "designed to make research faster and more efficient," according to an announcement from the school. And the results were surprising...

"José Penadés and his colleagues at Imperial College London spent 10 years figuring out how some superbugs gain resistance to antibiotics," writes LiveScience. "But when the team gave Google's 'co-scientist'' — an AI tool designed to collaborate with researchers — this question in a short prompt, the AI's response produced the same answer as their then-unpublished findings in just two days." Astonished, Penadés emailed Google to check if they had access to his research. The company responded that it didn't. The researchers published their findings [about working with Google's AI] Feb. 19 on the preprint server bioRxiv...

"What our findings show is that AI has the potential to synthesise all the available evidence and direct us to the most important questions and experimental designs," co-author Tiago Dias da Costa, a lecturer in bacterial pathogenesis at Imperial College London, said in a statement. "If the system works as well as we hope it could, this could be game-changing; ruling out 'dead ends' and effectively enabling us to progress at an extraordinary pace...."

After two days, the AI returned suggestions, one being what they knew to be the correct answer. "This effectively meant that the algorithm was able to look at the available evidence, analyse the possibilities, ask questions, design experiments and propose the very same hypothesis that we arrived at through years of painstaking scientific research, but in a fraction of the time," Penadés, a professor of microbiology at Imperial College London, said in the statement. The researchers noted that using the AI from the start wouldn't have removed the need to conduct experiments but that it would have helped them come up with the hypothesis much sooner, thus saving them years of work.

Despite these promising findings and others, the use of AI in science remains controversial. A growing body of AI-assisted research, for example, has been shown to be irreproducible or even outright fraudulent.

Google has also published the first test results of its AI 'co-scientist' system, according to Imperial's announcement, which adds that academics from a handful of top universities "asked a question to help them make progress in their field of biomedical research... Google's AI co-scientist system does not aim to completely automate the scientific process with AI. Instead, it is purpose-built for collaboration to help experts who can converse with the tool in simple natural language, and provide feedback in a variety of ways, including directly supplying their own hypotheses to be tested experimentally by the scientists."

Google describes its system as "intended to uncover new, original knowledge and to formulate demonstrably novel research hypotheses and proposals, building upon prior evidence and tailored to specific research objectives..."

"We look forward to responsible exploration of the potential of the AI co-scientist as an assistive tool for scientists," Google adds, saying the project "illustrates how collaborative and human-centred AI systems might be able to augment human ingenuity and accelerate scientific discovery."

Intel

Intel's Stock Jumps 18.8% - But What's In Its Future? (msn.com) 47

Intel's stock jumped nearly 19% this week. "However, in the past year through Wednesday's close, Intel stock had fallen 53%," notes Investor's Business Daily: The appointment of Lip-Bu Tan as CEO is a "good start" but Intel has significant challenges, Morgan Stanley analyst Joseph Moore said in a client note. Those challenges include delays in its server chip product line, a very competitive PC chip market, lack of a compelling AI chip offering, and over $10 billion in losses in its foundry business over the past 12 months. There is "no quick fix" for those issues, he said.
"There are things you can do," a Columbia business school associate professor tells the Wall Street Journal in a video interview, "but it's going to be incremental, and it's going to be extremely risky... They will try to be competitive in the foundry manufacturing space," but "It takes very aggressive investments."

Meanwhile, TSMC is exploring a joint venture where they'd operate Intel's factories, even pitching the idea to AMD, Nvidia, Broadcom, and Qualcomm, according to Reuters. (They add that Intel "reported a 2024 net loss of $18.8 billion, its first since 1986," and talked to multiple sources "familiar with" talks about Intel's future). Multiple companies have expressed interest in buying parts of Intel, but two of the four sources said the U.S. company has rejected discussions about selling its chip design house separately from the foundry division. Qualcomm has exited earlier discussions to buy all or part of Intel, according to those people and a separate source. Intel board members have backed a deal and held negotiations with TSMC, while some executives are firmly opposed, according to two sources.

"They say Lip-Bu Tan is the best hope to fix Intel — if Intel can be fixed at all," writes the Wall Street Journal: He brings two decades of semiconductor industry experience, relationships across the sector, a startup mindset and an obsession with AI...and basketball. He also comes with tricky China business relationships, underscoring Silicon Valley's inability to sever itself from one of America's top adversaries... [Intel's] stock has lost two-thirds of its value in four short years as Intel sat out the AI boom...

Manufacturing chips is an enormous expense that Intel can't currently sustain, say industry leaders and analysts. Former board members have called for a split-up. But a deal to sell all or part of Intel to competitors seems to be off the table for the immediate future, according to bankers. A variety of early-stage discussions with Broadcom, Qualcomm, GlobalFoundries and TSMC in recent months have failed to go anywhere, and so far seem unlikely to progress. The company has already hinted at a more likely outcome: bringing in outside financial backers, including customers who want a stake in the manufacturing business...

Tan has likely no more than a year to turn the company around, said people close to the company. His decades of investing in startups and running companies — he founded a multinational venture firm and was CEO of chip design company Cadence Design Systems for 13 years — provide indications of how Tan will tackle this task in the early days: by cutting expenses, moving quickly and trying to turn Intel back into an engineering-first company. "In areas where we are behind the competition, we need to take calculated risks to disrupt and leapfrog," Tan said in a note to Intel employees on Wednesday. "And in areas where our progress has been slower than expected, we need to find new ways to pick up the pace...."

Many take this culture reset to also mean significant cuts at Intel, which already shed about 15,000 jobs last year. "He is brave enough to adjust the workforce to the size needed for the business today," said Reed Hundt, a former Intel board member who has known Tan since the 1990s.

AI

'There's a Good Chance Your Kid Uses AI To Cheat' (msn.com) 98

Long-time Slashdot reader theodp writes: Wall Street Journal K-12 education reporter Matt Barnum has a heads-up for parents: There's a Good Chance Your Kid Uses AI to Cheat. Barnum writes:

"A high-school senior from New Jersey doesn't want the world to know that she cheated her way through English, math and history classes last year. Yet her experience, which the 17-year-old told The Wall Street Journal with her parent's permission, shows how generative AI has rooted in America's education system, allowing a generation of students to outsource their schoolwork to software with access to the world's knowledge. [...] The New Jersey student told the Journal why she used AI for dozens of assignments last year: Work was boring or difficult. She wanted a better grade. A few times, she procrastinated and ran out of time to complete assignments. The student turned to OpenAI's ChatGPT and Google's Gemini, to help spawn ideas and review concepts, which many teachers allow. More often, though, AI completed her work. Gemini solved math homework problems, she said, and aced a take-home test. ChatGPT did calculations for a science lab. It produced a tricky section of a history term paper, which she rewrote to avoid detection. The student was caught only once."

Not surprisingly, AI companies play up the idea that AI will radically improve learning, while educators are more skeptical. "This is a gigantic public experiment that no one has asked for," said Marc Watkins, assistant director of academic innovation at the University of Mississippi.

Facebook

After Meta Blocks Whistleblower's Book Promotion, It Becomes an Amazon Bestseller (thetimes.com) 39

After Meta convinced an arbitrator to temporarily prevent a whistleblower from promoting her book about the company (titled Careless People), the book climbed to the top of Amazon's best-seller list. And the book's publisher Macmillan released a defiant statement that "The arbitration order has no impact on Macmillan... We will absolutely continue to support and promote it." (They added that they were "appalled by Meta's tactics to silence our author through the use of a non-disparagement clause in a severance agreement.")

Saturday the controversy was even covered by Rolling Stone: [Whistleblower Sarah] Wynn-Williams is a diplomat, policy expert, and international lawyer, with previous roles including serving as the Chief Negotiator for the United Nations on biosafety liability, according to her bio on the World Economic Forum...

Since the book's announcement, Meta has forcefully responded to the book's allegations in a statement... "Eight years ago, Sarah Wynn-Williams was fired for poor performance and toxic behavior, and an investigation at the time determined she made misleading and unfounded allegations of harassment. Since then, she has been paid by anti-Facebook activists and this is simply a continuation of that work. Whistleblower status protects communications to the government, not disgruntled activists trying to sell books."

But the negative coverage continues, with the Observer Sunday highlighting it as their Book of the Week. "This account of working life at Mark Zuckerberg's tech giant organisation describes a 'diabolical cult' able to swing elections and profit at the expense of the world's vulnerable..."

Though ironically, Wynn-Williams started her career at the company with optimism about Facebook's role in the internet.org app. "Upon witnessing how the nascent Facebook kept Kiwis connected in the aftermath of the 2011 Christchurch earthquake, she believed that Mark Zuckerberg's company could make a difference — but in a good way — to social bonds, and that she could be part of that utopian project...

What internet.org involves for countries that adopt it is a Facebook-controlled monopoly of access to the internet, whereby to get online at all you have to log in to a Facebook account. When the scales fall from Wynn-Williams's eyes she realises there is nothing morally worthwhile in Zuckerberg's initiative, nothing empowering to the most deprived of global citizens, but rather his tool involves "delivering a crap version of the internet to two-thirds of the world". But Facebook's impact in the developing world proves worse than crap. In Myanmar, as Wynn-Williams recounts at the end of the book, Facebook facilitated the military junta to post hate speech, thereby fomenting sexual violence and attempted genocide of the country's Muslim minority. "Myanmar," she writes with a lapsed believer's rue, "would have been a better place if Facebook had not arrived." And what is true of Myanmar, you can't help but reflect, applies globally...

"Myanmar is where Wynn-Williams thinks the 'carelessness' of Facebook is most egregious," writes the Sunday Times: In 2018, UN human rights experts said Facebook had helped spread hate speech against Rohingya Muslims, about 25,000 of whom were slaughtered by the Burmese military and nationalists. Facebook is so ubiquitous in Myanmar, Wynn-Williams points out, that people think it is the entire internet. "It's no surprise that the worst outcome happened in the place that had the most extreme take-up of Facebook." Meta admits it was "too slow to act" on abuse in its Myanmar services....

After Wynn-Williams left Facebook, she worked on an international AI initiative, and says she wants the world to learn from the mistakes we made with social media, so that we fare better in the next technological revolution. "AI is being integrated into weapons," she explains. "We can't just blindly wander into this next era. You think social media has turned out with some issues? This is on another level."

Open Source

Startup Claims Its Upcoming (RISC-V ISA) Zeus GPU is 10X Faster Than Nvidia's RTX 5090 (tomshardware.com) 69

"The number of discrete GPU developers from the U.S. and Western Europe shrank to three companies in 2025," notes Tom's Hardware, "from around 10 in 2000." (Nvidia, AMD, and Intel...) No company in the recent years — at least outside of China — was bold enough to engage into competition against these three contenders, so the very emergence of Bolt Graphics seems like a breakthrough. However, the major focuses of Bolt's Zeus are high-quality rendering for movie and scientific industries as well as high-performance supercomputer simulations. If Zeus delivers on its promises, it could establish itself as a serious alternative for scientific computing, path tracing, and offline rendering. But without strong software support, it risks struggling against dominant market leaders.
This week the Sunnyvale, California-based startup introduced its Zeus GPU platform designed for gaming, rendering, and supercomputer simulations, according to the article. "The company says that its Zeus GPU not only supports features like upgradeable memory and built-in Ethernet interfaces, but it can also beat Nvidia's GeForce RTX 5090 by around 10 times in path tracing workloads, according to a slide published by technology news site ServeTheHome." There is one catch: Zeus can only beat the RTX 5090 GPU in path tracing and FP64 compute workloads. It's not clear how well it will handle traditional rendering techniques, as that was less of a focus. According to Bolt Graphics, the card does support rasterization, but there was less emphasis on that aspect of the GPU, and it may struggle to compete with the best graphics cards when it comes to gaming. And when it comes to data center options like Nvidia's Blackwell B200, it's an entirely different matter.

Unlike GPUs from AMD, Intel, and Nvidia that rely on proprietary instruction set architectures, Bolt's Zeus relies on the open-source RISC-V ISA, according to the published slides. The Zeus core relies on an open-source out-of-order general-purpose RVA23 scalar core mated with FP64 ALUs and the RVV 1.0 (RISC-V Vector Extension Version 1.0) that can handle 8-bit, 16-bit, 32-bit, and 64-bit data types as well as Bolt's additional proprietary extensions designed for acceleration of scientific workloads... Like many processors these days, Zeus relies on a multi-chiplet design... Unlike high-end GPUs that prioritize bandwidth, Bolt is evidently focusing on greater memory size to handle larger datasets for rendering and simulations. Also, built-in 400GbE and 800GbE ports, intended to enable faster data transfer across networked GPUs, indicate the data center focus of Zeus.

High-quality rendering, real-time path tracing, and compute are key focus areas for Zeus. As a result, even the entry-level Zeus 1c26-32 offers significantly higher FP64 compute performance than Nvidia's GeForce RTX 5090 — up to 5 TFLOPS vs. 1.6 TFLOPS — and considerably higher path tracing performance: 77 Gigarays vs. 32 Gigarays. Zeus also features a larger on-chip cache than Nvidia's flagship — up to 128MB vs. 96MB — and lower power consumption of 120W vs. 575W, making it more efficient for simulations, path tracing, and offline rendering. However, the RTX 5090 dominates in AI workloads with its 105 FP16 TFLOPS and 1,637 INT8 TFLOPS compared to the 10 FP16 TFLOPS and 614 INT8 TFLOPS offered by a single-chiplet Zeus...
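
Using only the figures quoted above, the efficiency claim can be restated as FP64 throughput per watt; note this is a paper-spec calculation, not a measured result:

\[
\text{Zeus: } \frac{5\ \text{TFLOPS}}{120\ \text{W}} \approx 42\ \tfrac{\text{GFLOPS}}{\text{W}},
\qquad
\text{RTX 5090: } \frac{1.6\ \text{TFLOPS}}{575\ \text{W}} \approx 2.8\ \tfrac{\text{GFLOPS}}{\text{W}},
\]

roughly a 15x gap in FP64 efficiency on paper.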

The article emphasizes that Zeus "is only running in simulation right now... Bolt Graphics says that the first developer kits will be available in late 2025, with full production set for late 2026."

Thanks to long-time Slashdot reader arvn for sharing the news.
