Technology

Racks of AI Chips Are Too Damn Heavy (theverge.com) 48

The weight of AI server racks has reached a point where legacy data centers cannot accommodate them even with significant retrofitting efforts, The Verge reports. Chris Brown, chief technical officer at Uptime Institute, said most retrofitting attempts would require "bulldozing the building and starting over from scratch."

AI racks are projected to reach 5,000 pounds, compared with the 400 to 600 pounds that racks weighed three decades ago. The dramatic increase stems from the hundreds of GPUs (up to 1,000) packed densely into each rack, alongside memory chips and liquid-cooling systems that can add substantial weight. AI workloads now consume up to 350 kilowatts per rack, 35 times the 10 kilowatts that traditional computing workloads averaged a decade ago. Legacy data centers with raised floors typically max out at around 1,250 pounds per square foot for static loads.

Chris McLean, president of Critical Facility Group, said that rack heights have grown from 6 feet to 9 feet over nearly two decades, creating problems with doorframes and freight elevators in older buildings.
Technology

Tech Giants Can't Agree On What To Call Their AI-Powered Glasses (theverge.com) 39

The glasses-shaped face computers that tech companies have been building for years now face an identity crisis, and their makers can't agree on what to call them. Meta has asked a journalist to refer to its Ray-Ban glasses as "AI glasses" to distinguish them from Google Glass. Google, whose Project Aura is a collaboration with Xreal, calls the product "wired XR glasses" because the company views it as more aligned with headsets in a glasses form factor.

Xreal's CEO Chi Xu laughed when asked about Aura's category and said the company will call all its products "AR glasses." Research firms aren't aligned either. Gartner defines smart glasses as camera- and display-free devices with Bluetooth and AI. Counterpoint Research said smart glasses without see-through displays drive volumes in the smart eyewear category. IDC uses a broader definition that includes anything glasses-shaped.
Education

The Entry-Level Hiring Process Is Breaking Down (theatlantic.com) 113

The traditional signals that employers used to evaluate entry-level job candidates -- college GPAs, cover letters, and interview performance -- have lost much of their value as grade inflation and widespread AI use render these metrics nearly meaningless, writes The Atlantic.

The recent-graduate unemployment rate now sits slightly higher than the overall workforce's, a reversal from historical norms where new college graduates were more likely to be employed than the average worker. Job postings on Handshake, a career-services platform for students and recent graduates, have fallen by more than 16 percent in the past year. At Harvard, 60% of undergraduate grades are now A's, up from less than a quarter two decades ago. Seven years ago, 70% of new graduates' resumes were screened by GPA; that figure has dropped to 40%.

Two working papers examining Freelancer.com found that cover-letter quality once strongly predicted who would get hired and how well they would perform -- until ChatGPT became available. "We basically find the collapse of this entire signaling mechanism," researcher Jesse Silbert said. The average number of applications per open job has increased by 26% in the past year. Students at UC Berkeley are now applying to 150 internships just to land one or two interviews.
Mozilla

Mozilla's New CEO Bets Firefox's Future on AI 114

Mozilla has named Anthony Enzor-DeMeo as its new chief executive, promoting the executive who has spent the past year leading the Firefox browser team and who now plans to make AI central to the company's future.

Enzor-DeMeo announced on Tuesday that an "AI Mode" is coming to Firefox next year. The feature will let users choose from multiple AI models rather than being locked into a single provider. Some options will be open-source models, others will be private "Mozilla-hosted cloud options," and the company also plans to integrate models from major AI companies. Mozilla itself will not train its own large language model.

"We're not incentivized to push one model or the other," Enzor-DeMeo told The Verge. Firefox currently has about 200 million monthly users, a fraction of Chrome's roughly 4 billion, though Enzor-DeMeo insists mobile usage is growing at a decent clip.

He takes over from interim CEO Laura Chambers, who led the company through a major antitrust case and what Mozilla describes as "double-digit mobile growth" in Firefox. Chambers is returning to the Mozilla board of directors. The new CEO has outlined three priorities: ensuring all products give users control over AI features including the ability to turn them off, building a business model around transparent monetization, and expanding Firefox into a broader ecosystem of trusted software. Mozilla VPN integration is planned for the browser next year.
Google

Google Search Homepage Adds a 'Plus' Menu (9to5google.com) 21

After introducing an AI Mode shortcut earlier this year, Google has now added a new "plus" menu to its Search homepage, highlighting options for image and file uploads. 9to5Google reports: On google.com, the Search bar now has a plus icon at the far left that replaces the magnifying glass. Clicking lets you "Upload image" or "Upload file." It very much matches the AI Mode experience. Those two capabilities aren't new, but this plus menu does help emphasize that you can use Google to accomplish tasks, and not just find information. Additionally, it helps indicate that they can be used with AI Mode and AI Overviews. This is just available on desktop web (not mobile) and is live on all the devices we checked today, including across signed-out Incognito sessions.
The Internet

Merriam-Webster's 2025 Word of the Year Is 'Slop' 26

Merriam-Webster crowned "slop" its 2025 Word of the Year, reflecting growing public awareness of and fatigue with low-quality, AI-generated content flooding the internet. "It's such an illustrative word," said Greg Barlow, Merriam-Webster's president. "It's part of a transformative technology, AI, and it's something that people have found fascinating, annoying and a little bit ridiculous." The Associated Press reports: "Slop" was first used in the 1700s to mean soft mud, but it evolved more generally to mean something of little value. The definition has since expanded to mean "digital content of low quality that is produced usually in quantity by means of artificial intelligence." In other words, "you know, absurd videos, weird advertising images, cheesy propaganda, fake news that looks real, junky AI-written digital books," Barlow said. "Words like 'ubiquitous,' 'paradigm,' 'albeit,' 'irregardless,' these are always top lookups because they're words that are on the edge of our lexicon," Barlow said. "'Irregardless' is a word in the dictionary for one reason: It's used. It's been used for decades to mean 'regardless.'"
The Internet

Cloudflare Reveals How Bots and Governments Reshaped the Internet in 2025 (nerds.xyz) 23

Cloudflare's sixth annual Year in Review report describes an internet increasingly shaped by two forces: automated traffic and government intervention, as global connectivity grew 19% year over year in 2025.

Google's web crawler now dominates automated traffic, dwarfing other AI and indexing bots to become the single largest source of bot activity on the web. Nearly half of all major internet disruptions globally were linked to government actions, and civil society and non-profit organizations became the most attacked sector for the first time.

Post-quantum encryption crossed a significant threshold, now protecting 52% of human internet traffic observed by Cloudflare. The company also recorded more than 25 record-breaking DDoS attacks throughout the year.
Television

LG's Software Update Forces Microsoft Copilot Onto Smart TVs (tomshardware.com) 57

LG smart TV owners discovered over the weekend that a recent webOS software update had quietly installed Microsoft Copilot on their devices, and the app cannot be uninstalled. Affected users report the feature appears automatically after installing the latest webOS update on certain models, sitting alongside streaming apps like Netflix and YouTube.

LG's support documentation confirms that certain preinstalled or system apps can only be hidden, not deleted. At CES 2025, LG announced plans to integrate Copilot into webOS as part of its "AI TV" strategy, describing it as an extension of its AI Search experience. The current implementation appears to function as a shortcut to a web-based Copilot interface rather than a native application. Samsung TVs include Google's Gemini in a similar fashion. Users wanting to avoid the feature entirely are left with one option: disconnecting their TV from the internet.
AI

Are Warnings of Superintelligence 'Inevitability' Masking a Grab for Power? (noemamag.com) 183

Superintelligence has become "a quasi-political forecast" with "very little to do with any scientific consensus, emerging instead from particular corridors of power." That's the warning from James O'Sullivan, a lecturer in digital humanities from University College Cork. In a refreshing 5,600-word essay in Noema magazine, he notes the suspicious coincidence that "The loudest prophets of superintelligence are those building the very systems they warn against..."

"When we accept that AGI is inevitable, we stop asking whether it should be built, and in the furor, we miss that we seem to have conceded that a small group of technologists should determine our future." (For example, OpenAI CEO Sam Altman "seems determined to position OpenAI as humanity's champion, bearing the terrible burden of creating God-like intelligence so that it might be restrained.") The superintelligence discourse functions as a sophisticated apparatus of power, transforming immediate questions about corporate accountability, worker displacement, algorithmic bias and democratic governance into abstract philosophical puzzles about consciousness and control... Media amplification plays a crucial role in this process, as every incremental improvement in large language models gets framed as a step towards AGI. ChatGPT writes poetry; surely consciousness is imminent... Such accounts, often sourced from the very companies building these systems, create a sense of momentum that becomes self-fulfilling. Investors invest because AGI seems near, researchers join companies because that's where the future is being built and governments defer regulation because they don't want to handicap their domestic champions...

We must recognize this process as political, not technical. The inevitability of superintelligence is manufactured through specific choices about funding, attention and legitimacy, and different choices would produce different futures. The fundamental question isn't whether AGI is coming, but who benefits from making us believe it is... We do not yet understand what kind of systems we are building, or what mix of breakthroughs and failures they will produce, and that uncertainty makes it reckless to funnel public money and attention into a single speculative trajectory.

Some key points:
  • "The machines are coming for us, or so we're told. Not today, but soon enough that we must seemingly reorganize civilization around their arrival..."
  • "When we debate whether a future artificial general intelligence might eliminate humanity, we're not discussing the Amazon warehouse worker whose movements are dictated by algorithmic surveillance or the Palestinian whose neighborhood is targeted by automated weapons systems. These present realities dissolve into background noise against the rhetoric of existential risk..."
  • "Seen clearly, the prophecy of superintelligence is less a warning about machines than a strategy for power, and that strategy needs to be recognized for what it is..."
  • "Superintelligence discourse isn't spreading because experts broadly agree it is our most urgent problem; it spreads because a well-resourced movement has given it money and access to power..."
  • "Academic institutions, which are meant to resist such logics, have been conscripted into this manufacture of inevitability... reinforcing industry narratives, producing papers on AGI timelines and alignment strategies, lending scholarly authority to speculative fiction..."
  • "The prophecy becomes self-fulfilling through material concentration — as resources flow towards AGI development, alternative approaches to AI starve..."
  • "The dominance of superintelligence narratives obscures the fact that many other ways of doing AI exist, grounded in present social needs rather than hypothetical machine gods..." [He lists data sovereignty movements "that treat data as a collective resource subject to collective consent," as well as organizations like Canada's First Nations Information Governance Centre and New Zealand's Te Mana Raraunga, plus "Global South initiatives that use modest, locally governed AI systems to support healthcare, agriculture or education under tight resource constraints."] "Such examples... demonstrate how AI can be organized without defaulting to the superintelligence paradigm that demands everyone else be sacrificed because a few tech bros can see the greater good that everyone else has missed..."
  • "These alternatives also illuminate the democratic deficit at the heart of the superintelligence narrative. Treating AI at once as an arcane technical problem that ordinary people cannot understand and as an unquestionable engine of social progress allows authority to consolidate in the hands of those who own and build the systems..."

He's ultimately warning us about "politics masked as predictions..."

"The real political question is not whether some artificial superintelligence will emerge, but who gets to decide what kinds of intelligence we build and sustain. And the answer cannot be left to the corporate prophets of artificial transcendence because the future of AI is a political field — it should be open to contestation.

"It belongs not to those who warn most loudly of gods or monsters, but to publics that should have the moral right to democratically govern the technologies that shape their lives."


AI

CEOs Plan to Spend More on AI in 2026 - Despite Spotty Returns (msn.com) 41

The Wall Street Journal reports that 68% of CEOs "plan to spend even more on AI in 2026, according to an annual survey of more than 350 public-company CEOs from advisory firm Teneo." And yet "less than half of current AI projects had generated more in returns than they had cost, respondents said." They reported the most success using AI in marketing and customer service and challenges using it in higher-risk areas such as security, legal and human resources.

Teneo also surveyed about 400 institutional investors, 53% of whom expect AI initiatives to begin delivering returns on investment within six months. That compares to the 84% of CEOs of large companies — those with revenue of $10 billion or more — who believe it will take more than six months.

Surprisingly, 67% of CEOs believe AI will increase their entry-level head count, while 58% believe AI will increase senior leadership head count.

All the surveyed CEOs were from public companies with revenue over $1 billion...
AI

Podcast Industry Under Siege as AI Bots Flood Airwaves with Thousands of Programs (yahoo.com) 42

An anonymous reader shared this report from the Los Angeles Times: Popular podcast host Steven Bartlett has used an AI clone to launch a new kind of content aimed at the 13 million followers of his podcast "Diary of a CEO." On YouTube, his clone narrates "100 CEOs With Steven Bartlett," which adds AI-generated animation to Bartlett's cloned voice to tell the life stories of entrepreneurs such as Steve Jobs and Richard Branson. Erica Mandy, the Redondo Beach-based host of the daily news podcast called "The Newsworthy," let an AI voice fill in for her earlier this year after she lost her voice from laryngitis and her backup host bailed out...

In podcasting, many listeners feel strong bonds to hosts they listen to regularly. The slow encroachment of AI voices for one-off episodes, canned ad reads, sentence replacement in postproduction or translation into multiple languages has sparked anger as well as curiosity from both creators and consumers of the content. Augmenting or replacing host reads with AI is perceived by many as a breach of trust and as trivializing the human connection listeners have with hosts, said Megan Lazovick, vice president of Edison Research, a podcast research company... Still, platforms such as YouTube and Spotify have introduced features for creators to clone their voice and translate their content into multiple languages to increase reach and revenue. A new generation of voice cloning companies, many with operations in California, offers better emotion, tone, pacing and overall voice quality...

Some are using the tech to carpet-bomb the market with content. Los Angeles podcasting studio Inception Point AI has produced 200,000 podcast episodes, in some weeks accounting for 1% of all podcasts published that week on the internet, according to CEO Jeanine Wright. The podcasts are so cheap to make that they can focus on tiny topics, like local weather, small sports teams, gardening and other niche subjects. Instead of a studio searching for a specific "hit" podcast idea, an episode costs just $1 to produce, so it can be profitable with just 25 people listening... One of its popular synthetic hosts is Vivian Steele, an AI celebrity gossip columnist with a sassy voice and a sharp tongue... Inception Point has built a roster of more than 100 AI personalities whose characteristics, voices and likenesses are crafted for podcast audiences. Its AI hosts include Clare Delish, a cooking guidance expert, and garden enthusiast Nigel Thistledown...

Across Apple and Spotify, Inception Point podcasts have now garnered 400,000 subscribers.

AI

Entry-Level Tech Workers Confront an AI-Fueled Jobpocalypse (restofworld.org) 78

AI "has gutted entry-level roles in the tech industry," reports Rest of World.

One student at a high-ranking engineering college in India tells them that among his 400 classmates, "fewer than 25% have secured job offers... there's a sense of panic on the campus." Students at engineering colleges in India, China, Dubai, and Kenya are facing a "jobpocalypse" as artificial intelligence replaces humans in entry-level roles. Tasks once assigned to fresh graduates, such as debugging, testing, and routine software maintenance, are now increasingly automated. Over the last three years, the number of fresh graduates hired by big tech companies globally has declined by more than 50%, according to a report published by SignalFire, a San Francisco-based venture capital firm. Even though hiring rebounded slightly in 2024, only 7% of new hires were recent graduates. As many as 37% of managers said they'd rather use AI than hire a Gen Z employee...

Indian IT services companies have reduced entry-level roles by 20%-25% thanks to automation and AI, consulting firm EY said in a report last month. Job platforms like LinkedIn, Indeed, and Eures noted a 35% decline in junior tech positions across major EU countries during 2024...

"Five years ago, there was a real war for [coders and developers]. There was bidding to hire," and 90% of the hires were for off-the-shelf technical roles, or positions that utilize ready-made technology products rather than requiring in-house development, said Vahid Haghzare, director at IT hiring firm Silicon Valley Associates Recruitment in Dubai. Since the rise of AI, "it has dropped dramatically," he said. "I don't even think it's touching 5%. It's almost completely vanished." The company headhunts workers from multiple countries including China, Singapore, and the U.K... The current system, where a student commits three to five years to learn computer science and then looks for a job, is "not sustainable," Haghzare said. Students are "falling down a hole, and they don't know how to get out of it."

Education

Purdue University Approves New AI Requirement For All Undergrads (forbes.com) 26

Nonprofit Code.org released its 2025 State of AI & Computer Science Education report this week with a state-by-state analysis of school policies complaining that "0 out of 50 states require AI+CS for graduation."

But meanwhile, at the college level, "Purdue University will begin requiring that all of its undergraduate students demonstrate basic competency in AI," writes former college president Michael Nietzel, "starting with freshmen who enter the university in 2026." The new "AI working competency" graduation requirement was approved by the university's Board of Trustees at its meeting on December 12... The requirement will be embedded into every undergraduate program at Purdue, but it won't be done in a "one-size-fits-all" manner. Instead, the Board is delegating authority to the provost, who will work with the deans of all the academic colleges to develop discipline-specific criteria and proficiency standards for the new campus-wide requirement. [Purdue president] Chiang said students will have to demonstrate a working competence through projects that are tailored to the goals of individual programs. The intent is to not require students to take more credit hours, but to integrate the new AI expectation into existing academic requirements...

While the news release claimed that Purdue may be the first school to establish such a requirement, at least one other university has introduced its own institution-wide expectation that all its graduates acquire basic AI skills. Earlier this year, The Ohio State University launched an AI Fluency initiative, infusing basic AI education into core undergraduate requirements and majors, with the goal of helping students understand and use AI tools — no matter their major.

Purdue wants its new initiative to help graduates:

— Understand and use the latest AI tools effectively in their chosen fields, including being able to identify the key strengths and limits of AI technologies;

— Recognize and communicate clearly about AI, including developing and defending decisions informed by AI, as well as recognizing the influence and consequences of AI in decision-making;

— Adapt to and work with future AI developments effectively.

AI

Time Magazine's 'Person of the Year': the Architects of AI (time.com) 54

Time magazine used its 98th annual "Person of the Year" cover to "recognize a force that has dominated the year's headlines, for better or for worse. For delivering the age of thinking machines, for wowing and worrying humanity, for transforming the present and transcending the possible, the Architects of AI are TIME's 2025 Person of the Year."

One cover illustration shows eight AI executives sitting precariously on a beam high above the city, while Time's 6,700-word article promises "the story of how AI changed our world in 2025, in new and exciting and sometimes frightening ways. It is the story of how [Nvidia CEO] Huang and other tech titans grabbed the wheel of history, developing technology and making decisions that are reshaping the information landscape, the climate, and our livelihoods."

Time describes them betting on "one of the biggest physical infrastructure projects of all time," mentioning all the usual worries — datacenters' energy consumption, chatbot psychosis, predictions of "wiping out huge numbers of jobs" and the possibility of an AI stock market bubble. (Although "The drumbeat of warning that advanced AI could kill us all has mostly quieted"). But it also notes AI's potential to jumpstart innovation (and economic productivity): This year, the debate about how to wield AI responsibly gave way to a sprint to deploy it as fast as possible. "Every industry needs it, every company uses it, and every nation needs to build it," Huang tells TIME in a 75-minute interview in November, two days after announcing that Nvidia, the world's first $5 trillion company, had once again smashed Wall Street's earnings expectations. "This is the single most impactful technology of our time..."

The risk-averse are no longer in the driver's seat. Thanks to Huang, Son, Altman, and other AI titans, humanity is now flying down the highway, all gas no brakes, toward a highly automated and highly uncertain future. Perhaps Trump said it best, speaking directly to Huang with a jovial laugh in the U.K. in September: "I don't know what you're doing here. I hope you're right."

GNOME

New Rule Forbids GNOME Shell Extensions Made Using AI-Generated Code (phoronix.com) 70

An anonymous reader shared this report from Phoronix: Due to the growing number of AI-generated GNOME Shell extensions submitted to extensions.gnome.org, such submissions are now prohibited. The new rule in the guidelines notes that AI-generated code will be explicitly rejected:

"Extensions must not be AI-generated

While it is not prohibited to use AI as a learning aid or a development tool (i.e. code completions), extension developers should be able to justify and explain the code they submit, within reason.

Submissions with large amounts of unnecessary code, inconsistent code style, imaginary API usage, comments serving as LLM prompts, or other indications of AI-generated output will be rejected."

In a blog post, GNOME developer Javad Rahmatzadeh explains that "Some devs are using AI without understanding the code..."
AI

Startup Successfully Uses AI to Find New Geothermal Energy Reservoirs (cnn.com) 50

A Utah-based startup announced last week it used AI to locate a 250-degree Fahrenheit geothermal reservoir, reports CNN. It'll start producing electricity in three to five years, the company estimates — and at least one geologist believes AI could be an exciting "gamechanger" for the geothermal industry. [Startup Zanskar Geothermal & Minerals] named it "Big Blind," because this kind of site — which has no visual indication of its existence, no hot springs or geysers above ground, and no history of geothermal exploration — is known as a "blind" system. It's the first industry-discovered blind site in more than three decades, said Carl Hoiland, co-founder and CEO of Zanskar. "The idea that geothermal is tapped out has been the narrative for decades," but that's far from the case, he told CNN. He believes there are many more hidden sites across the Western U.S.

Geothermal energy is a potential gamechanger. It offers the tantalizing prospect of a huge source of clean energy to meet burgeoning demand. It's near limitless, produces scarcely any climate pollution, and is constantly available, unlike wind and solar, which are cheap but rely on the sun shining and the wind blowing. The problem, however, has been how to find and scale it. It requires a specific geology: underground reservoirs of hot water or steam, along with porous rocks that allow the water to move through them, heat up, and be brought to the surface where it can power turbines... The AI models Zanskar uses are fed information on where blind systems already exist. This data is plentiful as, over the last century and more, humans have accidentally stumbled on many around the world while drilling for other resources such as oil and gas.

The models then scour huge amounts of data — everything from rock composition to magnetic fields — to find patterns that point to the existence of geothermal reserves. AI models have "gotten really good over the last 10 years at being able to pull those types of signals out of noise," Hoiland said...

Zanskar's discovery "is very significant," said James Faulds, a professor of geosciences at Nevada Bureau of Mines and Geology.... Estimates suggest over three-quarters of US geothermal resources are blind, Faulds told CNN. "Refining methods to find such systems has the potential to unleash many tens and perhaps hundreds of gigawatts in the western US alone," he said... Big Blind is the company's first blind site discovery, but it's the third site it has drilled and hit commercial resources. "We expect dozens, to eventually hundreds, of new sites to be coming to market," Hoiland said.... Hoiland says Zanskar's work shows conventional geothermal still has huge untapped potential.

Thanks to long-time Slashdot reader schwit1 for sharing the article.
Firefox

Firefox Survey Finds Only 16% Feel In Control of Their Privacy Choices Online (mozilla.org) 33

Choosing your browser "is one of the most important digital decisions you can make, shaping how you experience the web, protect your data, and express yourself online," says the Firefox blog. They've urged readers to "take a stand for independence and control in your digital life."

But they also recently polled 8,000 adults in France, Germany, the UK and the U.S. on "how they navigate choice and control both online and offline" (attending in-person events in Chicago, Berlin, LA, Munich, San Diego, and Stuttgart): The survey, conducted by research agency YouGov, showcases a tension between people's desire to have control over their data and digital privacy, and the reality of the internet today — a reality defined by Big Tech platforms that make it difficult for people to exercise meaningful choice online:


— Only 16% feel in control of their privacy choices (highest in Germany at 21%)

— 24% feel it's "too late" because Big Tech already has too much control or knows too much about them. And 36% said the feeling of Big Tech companies knowing too much about them is frustrating — highest among respondents in the U.S. (43%) and the UK (40%)

— Practices respondents said frustrated them included Big Tech using their data to train AI without their permission (38%) and tracking their data without asking (47%; highest in the U.S. at 55% and lowest in France at 39%)


And from our existing research on browser choice, we know how hard-to-change defaults and confusing settings can bury alternatives, limiting people's ability to choose for themselves — the real problem that fuels these dynamics.

Taken together, our new and existing insights could also explain why, when asked which actions feel like the strongest expressions of their independence online, choosing not to share their data (44%) was among the top three responses in each country (46% in the UK; 45% in the U.S.; 44% in France; 39% in Germany)... We also see a powerful signal in how people think about choosing the communities and platforms they join — for 29% of respondents, this was one of their top three expressions of independence online.

"For Firefox, community has always been at the heart of what we do," says their VP of Global Marketing, "and we'll keep fighting to put real choice and control back in people's hands so the web once again feels like it belongs to the communities that shape it."

At TwitchCon in San Diego, Firefox even launched a satirical new online card game with a privacy theme called Data War.
United States

Arizona City Rejects Data Center After Lobbying Push 43

Chandler, Arizona unanimously rejected a proposed AI data center despite heavy lobbying from Big Tech interests and former Sen. Kyrsten Sinema. Politico reports: The Chandler City Council last night voted down a request by a New York developer to rezone land to build a data center and business complex. The local battle escalated in October after Sinema showed up at a planning commission meeting to offer public comment warning officials in her home state that federal authority may soon stomp on local regulations. "Chandler right now has the opportunity to determine how and when these new, innovative AI data centers will be built," she told local officials. "When federal preemption comes, we'll no longer have that privilege."

Explaining her no vote, Chandler Vice Mayor Christine Ellis said that she had long framed her decision around the local benefits rather than the national push to build AI. She recalled a meeting with Sinema where she asked point-blank, "what's in it for Chandler?" "If you can't show me what's in it for Chandler, then we are not having a conversation," Ellis said before voting against the project. [...]

The project, along with Sinema's involvement, attracted significant community opposition, with speakers raising concerns about whether the project would use too much water or raise power prices. Residents packed the council chambers, with many holding up signs reading "No More Data Centers." According to the city's planning office, more than 200 comments were filed against the proposal compared to just eight in favor.
Businesses

Doom Studio id Software Forms 'Wall-To-Wall' Union (engadget.com) 32

id Software employees voted to form a wall-to-wall union with the CWA, covering all roles at the Doom studio. "The vote wasn't unanimous, though a majority did vote in favor of the union," notes Engadget. From the report: The union will work in conjunction with the Communications Workers of America (CWA), which is the same organization involved with parent company ZeniMax's recent unionization efforts. Microsoft, which owns ZeniMax, has already recognized this new effort, according to a statement by the CWA. It agreed to a labor neutrality agreement with the CWA and ZeniMax workers last year, paving the way for this sort of thing.

From the outset, this union will look to protect remote work for id Software employees. "Remote work isn't a perk. It's a necessity for our health, our families, and our access needs. RTO policies should not be handed down from executives with no consideration for accessibility or our well-being," said id Software Lead Services Programmer Chris Hays. He also said he looks forward to getting worker protections regarding the "responsible use of AI."

AI

US To Mandate AI Vendors Measure Political Bias For Federal Sales (reuters.com) 63

An anonymous reader quotes a report from Reuters: The U.S. government will require artificial intelligence vendors to measure political "bias" to sell their chatbots to federal agencies, according to a Trump administration statement (PDF) released on Thursday. The requirement will apply to all large language models bought by federal agencies, with the exception of national security systems, according to the statement.

President Donald Trump ordered federal agencies in July to avoid buying large language models that he labeled as "woke." Thursday's statement gives more detail to that directive, saying that developers should not "intentionally encode partisan or ideological judgments" into a chatbot's outputs.
Further reading: Trump Signs Executive Order For Single National AI Regulation Framework, Limiting Power of States