Businesses

Data Center Boom May End Up Being 'Irrational,' Investor Warns (axios.com) 28

A prominent venture capitalist has warned that the technology industry's massive buildout of AI data centers risks becoming "irrational" and could end in disaster, particularly as companies pursue small nuclear reactors to power the facilities. Josh Wolfe, co-founder and partner at Lux Capital, compared the current infrastructure expansion to previous market bubbles in fiber-optic networking and cloud computing. While individual actions by hyperscale companies to build data center infrastructure remain rational, Wolfe said the collective effort "becomes irrational" and "will not necessarily persist."

The warning comes as Big Tech companies pour tens of billions into data centers and energy sources, with Meta announcing just this week a deal to purchase power from an operating nuclear station in Illinois that was scheduled to retire in 2027. Wolfe said he is worried that speculative capital is flowing into small modular reactors based on presumed energy demands from data centers. "I think that that whole thing is going to end in disaster, mostly because as cliched as it is, history doesn't repeat. It rhymes," he said.
AI

DreamWorks Co-Founder Katzenberg Likens AI To CGI Revolution 50

At the Axios AI+ Summit, DreamWorks co-founder Jeffrey Katzenberg compared the rise of AI in entertainment to the CGI revolution of the 1990s, emphasizing that those who adapt to the technology will thrive. He argued AI won't replace people -- but will replace those who don't embrace it. Axios reports: Katzenberg, a co-founder of DreamWorks and one-time Disney executive whose work includes films like "Shrek," reflected on the "huge" resistance to making "Toy Story" with the then-novel CGI technology. The people most afraid were the ones who would be disrupted, he said. "Everything that you are hearing today are the issues that we had to deal with," he said.

Katzenberg continued, "Yes, there was disruption, but animation's never, ever been bigger than it is today." The bottom line: "AI isn't going to replace people, it's going to replace people that don't use AI," he said. "The exact same analogy there ... is that the talent that went and learned how to use the computer as a new pencil and a new paint brush ... they thrived," he said. Katzenberg added, "if change is uncomfortable, irrelevance is going to be a whole lot harder."
Microsoft

Microsoft's LinkedIn Chief Is Now Running Office (theverge.com) 16

In an internal memo, Microsoft CEO Satya Nadella announced that LinkedIn CEO Ryan Roslansky will also lead the Office, Outlook, and Microsoft 365 Copilot teams as part of an internal AI reorganization. Roslansky will report to Rajesh Jha for Office while continuing to run LinkedIn independently under Nadella. The Verge reports: "LinkedIn remains a top priority and will continue to operate as an independent subsidiary," says Nadella in his memo. "This move brings us closer to the original vision we laid out nine years ago with the LinkedIn acquisition: connecting the world's economic graph with the Microsoft Graph. And I look forward to how Ryan will bring his product ethos and leadership to experiences and devices." Sumit Chauhan and Gaurav Sareen, senior executives in the Office and Microsoft 365 teams, will remain on the experiences and devices leadership team, but along with their teams they'll join Jon Friedman and the UX team to work directly for Roslansky.

Charles Lamanna and his BIC (business and industry Copilot) team are also moving to report to Rajesh Jha as part of the AI shakeup. "Charles has consistently kept us focused on what it takes to win in business applications and the agent layer, and I look forward to the impact he and his team will have in experiences and devices," says Nadella. In a separate internal memo, Lamanna announced that starting July 2nd, Lili Cheng will take on the newly expanded role of CTO of the BIC team, and that Dan Lewis is taking on the role of corporate vice president of Copilot Studio. "We are poised to reinvent every role and every business process, and start to reimagine organizations as composed of people and agents," says Lamanna.

Both the Lamanna and Roslansky moves are notable, as the business Copilot team and the Microsoft 365 Copilot team have been in separate parts of Microsoft's sprawling AI and cloud organization up until this point. That has led to a situation where nobody really owns Copilot all up inside Microsoft, but now the separate leaders of Microsoft 365 Copilot and the business Copilot teams both report to Rajesh Jha. The consumer Copilot will still be run by Microsoft AI CEO Mustafa Suleyman.

The Courts

OpenAI Slams Court Order To Save All ChatGPT Logs, Including Deleted Chats (arstechnica.com) 103

An anonymous reader quotes a report from Ars Technica: OpenAI is now fighting a court order (PDF) to preserve all ChatGPT user logs -- including deleted chats and sensitive chats logged through its API business offering -- after news organizations suing over copyright claims accused the AI company of destroying evidence. "Before OpenAI had an opportunity to respond to those unfounded accusations, the court ordered OpenAI to 'preserve and segregate all output log data that would otherwise be deleted on a going forward basis until further order of the Court (in essence, the output log data that OpenAI has been destroying),'" OpenAI explained in a court filing (PDF) demanding oral arguments in a bid to block the controversial order.

In the filing, OpenAI alleged that the court rushed the order based only on a hunch raised by The New York Times and other news plaintiffs. And now, without "any just cause," OpenAI argued, the order "continues to prevent OpenAI from respecting its users' privacy decisions." That risk extended to users of ChatGPT Free, Plus, and Pro, as well as users of OpenAI's application programming interface (API), OpenAI said. The court order came after news organizations expressed concern that people using ChatGPT to skirt paywalls "might be more likely to 'delete all [their] searches' to cover their tracks," OpenAI explained. Evidence to support that claim, news plaintiffs argued, was missing from the record because, so far, OpenAI had only shared samples of chat logs that users had agreed the company could retain. Sharing the news plaintiffs' concerns, the judge, Ona Wang, ultimately agreed that OpenAI likely would never stop deleting that alleged evidence absent a court order, and granted the news plaintiffs' request to preserve all chats.

OpenAI argued the May 13 order was premature and should be vacated until, "at a minimum," news organizations can establish a substantial need for OpenAI to preserve all chat logs. The company warned that the privacy of hundreds of millions of ChatGPT users globally is at risk every day the "sweeping, unprecedented" order remains in force. "As a result, OpenAI is forced to jettison its commitment to allow users to control when and how their ChatGPT conversation data is used, and whether it is retained," OpenAI argued. Meanwhile, there is not yet any evidence beyond speculation supporting claims that "OpenAI had intentionally deleted data," OpenAI alleged. And supposedly there is not "a single piece of evidence supporting" claims that copyright-infringing ChatGPT users are more likely to delete their chats. "OpenAI did not 'destroy' any data, and certainly did not delete any data in response to litigation events," OpenAI argued. "The Order appears to have incorrectly assumed the contrary."
One tech worker on LinkedIn suggested the order created "a serious breach of contract for every company that uses OpenAI," while privacy advocates on X warned, "every single AI service 'powered by' OpenAI should be concerned."

Also on LinkedIn, a consultant rushed to warn clients to be "extra careful" sharing sensitive data "with ChatGPT or through OpenAI's API for now," adding, "your outputs could eventually be read by others, even if you opted out of training data sharing or used 'temporary chat'!"
The Courts

Reddit Sues AI Startup Anthropic For Breach of Contract, 'Unfair Competition' (cnbc.com) 44

Reddit is suing AI startup Anthropic for what it calls a breach of contract and for engaging in "unlawful and unfair business acts" by using the social media company's platform and data without authorization. From a report: The lawsuit, filed in San Francisco on Wednesday, claims that Anthropic has been training its models on the personal data of Reddit users without obtaining their consent. Reddit alleges that it has been harmed by the unauthorized commercial use of its content.

The company opened the complaint by calling Anthropic a "late-blooming" AI company that "bills itself as the white knight of the AI industry." Reddit follows by saying, "It is anything but."

AI

Hollywood Already Uses Generative AI (And Is Hiding It) (vulture.com) 61

Major Hollywood studios are extensively using AI tools while avoiding public disclosure, according to industry sources interviewed by New York Magazine. Nearly 100 AI studios now operate in Hollywood, and every major studio is reportedly experimenting with generative AI despite the legal uncertainty surrounding the copyright status of training data, the report said.

Lionsgate has partnered with AI company Runway to create a customized model trained on the studio's film archive, with executives planning to generate entire movie trailers from scripts before shooting begins. The collaboration allows the studio to potentially reduce production costs from $100 million to $50 million for certain projects.

Widespread usage of the new technology is often happening through unofficial channels. Workers report pressure to use AI tools without formal studio approval and then to "launder" the AI-generated content through human artists to obscure its origins.
Education

Code.org Changes Mission To 'Make CS and AI a Core Part of K-12 Education' 40

theodp writes: Way back in 2010, Microsoft and Google teamed with nonprofit partners to launch Computing in the Core, an advocacy coalition whose mission was "to strengthen computing education and ensure that it is a core subject for students in the 21st century." In 2013, Computing in the Core was merged into Code.org, a new tech-backed-and-directed nonprofit. And in 2015, Code.org declared 'Mission Accomplished' with the passage of the Every Student Succeeds Act, which elevated computer science to a core academic subject for grades K-12.

Fast forward to June 2025 and Code.org has changed its About page to reflect a new AI mission that's near and dear to the hearts of Code.org's tech giant donors and tech leader Board members: "Code.org is a nonprofit working to make computer science (CS) and artificial intelligence (AI) a core part of K-12 education for every student." The mission change comes as tech companies are looking to chop headcount amid the AI boom, and just weeks after tech CEOs and leaders launched a new Code.org-orchestrated national campaign to make CS and AI a graduation requirement.
Programming

Morgan Stanley Says Its AI Tool Processed 9 Million Lines of Legacy Code This Year And Saved 280,000 Developer Hours (msn.com) 88

Morgan Stanley has deployed an in-house AI tool called DevGen.AI that has reviewed nine million lines of legacy code this year, saving the investment bank's developers an estimated 280,000 hours. The tool translates outdated programming languages into plain-English specifications that can then be rewritten in modern code.

The tool, built on OpenAI's GPT models and launched in January, addresses what Mike Pizzi, the company's global head of technology and operations, calls one of enterprise software's biggest pain points -- modernizing decades-old code that weakens security and slows new technology adoption. While commercial AI coding tools excel at writing new code, they lack expertise in older or company-specific programming languages like Cobol, prompting Morgan Stanley to train its own system on its proprietary codebase.

The tool's primary strength, the bank said, lies in creating English specifications that map what legacy code does, enabling any of the company's 15,000 developers worldwide to rewrite it in modern programming languages rather than relying on a dwindling pool of specialists familiar with antiquated coding systems.
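Neither Morgan Stanley nor the report publishes DevGen.AI's internals, but the workflow described -- feed a legacy routine to a GPT model and get back a plain-English specification for a developer to reimplement -- can be sketched in a few lines. This is a minimal illustration, not the bank's code: the prompt, the model name, and the sample COBOL are all assumptions.

```python
# Illustrative sketch of a legacy-code-to-spec pipeline; DevGen.AI itself
# is proprietary and was trained on Morgan Stanley's own codebase.
# Assumes the OpenAI Python SDK (v1.x) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def summarize_legacy_code(source: str, language: str = "COBOL") -> str:
    """Ask a GPT model for a plain-English spec of a legacy routine."""
    response = client.chat.completions.create(
        model="gpt-4o",  # stand-in model name
        messages=[
            {"role": "system",
             "content": "You document legacy code for modernization. "
                        "Describe inputs, outputs, and business rules in plain English."},
            {"role": "user",
             "content": f"Write a specification for this {language} routine:\n\n{source}"},
        ],
    )
    return response.choices[0].message.content

sample = """\
       IDENTIFICATION DIVISION.
       PROGRAM-ID. ADD-INTEREST.
       PROCEDURE DIVISION.
           COMPUTE WS-BALANCE = WS-BALANCE * (1 + WS-RATE).
"""
print(summarize_legacy_code(sample))
```

Note the design point the article emphasizes: the model's output is a specification for humans rather than machine-generated replacement code, which keeps the bank's 15,000 developers in the loop for the actual rewrite.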
Programming

AI Startups Revolutionize Coding Industry, Leading To Sky-High Valuations 39

Code generation startups are attracting extraordinary investor interest two years after ChatGPT's launch, with companies like Cursor raising $900 million at a $10 billion valuation despite operating with negative gross margins. OpenAI is reportedly in talks to acquire Windsurf, maker of the Codeium coding tool, for $3 billion, while the startup generates $50 million in annualized revenue from a product launched just seven months ago.

These "vibe coding" platforms allow users to write software using plain English commands, attempting to fundamentally change how code gets written. Cursor went from zero to $100 million in recurring revenue in under two years with just 60 employees, though both major startups spend more money than they generate, Reuters reports, citing investor sources familiar with their operations.

The surge comes as major technology giants report significant portions of their code now being AI-generated -- Google claims over 30% while Microsoft reports 20-30%. Meanwhile, entry-level programming positions have declined 24% as companies increasingly rely on AI tools to handle basic coding tasks previously assigned to junior developers.
Biotech

World-First Biocomputing Platform Hits the Market (ieee.org) 20

An anonymous reader quotes a report from IEEE Spectrum: In a development straight out of science fiction, Australian startup Cortical Labs has released what it calls the world's first code-deployable biological computer. The CL1, which debuted in March, fuses human brain cells on a silicon chip to process information via sub-millisecond electrical feedback loops. Designed as a tool for neuroscience and biotech research, the CL1 offers a new way to study how brain cells process and react to stimuli. Unlike conventional silicon-based systems, the hybrid platform uses live human neurons capable of adapting, learning, and responding to external inputs in real time. "On one view, [the CL1] could be regarded as the first commercially available biomimetic computer, the ultimate in neuromorphic computing that uses real neurons," says theoretical neuroscientist Karl Friston of University College London. "However, the real gift of this technology is not to computer science. Rather, it's an enabling technology that allows scientists to perform experiments on a little synthetic brain."

The first 115 units will begin shipping this summer at $35,000 each, or $20,000 when purchased in 30-unit server racks. Cortical Labs also offers a cloud-based "wetware-as-a-service" at $300 weekly per unit, unlocking remote access to its in-house cell cultures. Each CL1 contains 800,000 lab-grown human neurons, reprogrammed from the skin or blood samples of real adult donors. The cells remain viable for up to six months, fed by a life-support system that supplies nutrients, controls temperature, filters waste, and maintains fluid balance. Meanwhile, the neurons are firing and interpreting signals, adapting with each interaction.

The CL1's compact energy and hardware footprint could make it attractive for extended experiments. A rack of CL1 units consumes 850-1,000 watts, notably lower than the tens of kilowatts required by a data center setup running AI workloads. "Brain cells generate small electrical pulses to communicate to a broader network," says Cortical Labs Chief Scientific Officer Brett Kagan. "We can do something similar by inputting small electrical pulses representing bits of information, and then reading their responses. The CL1 does this in real time using simple code abstracted through multiple interacting layers of firmware and hardware. Sub-millisecond loops read information, act on it, and write new information into the cell culture."
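Cortical Labs hasn't published the code behind the loop Kagan describes, but the pattern -- encode bits as stimulation pulses, read back spikes, and feed the response into the next stimulus -- is straightforward to sketch. Every name below (the device handle, `stimulate`, `read_spikes`) is hypothetical, since the CL1's actual programming interface isn't documented in the article.

```python
# Hypothetical sketch of the closed stimulate-read-write loop described
# above; the CL1's real firmware/hardware interface is not public.
import time

def encode(bits):
    """Map input bits to (electrode, amplitude) stimulation pairs."""
    return [(i, 1.0 if b else 0.0) for i, b in enumerate(bits)]

def decode(spike_counts):
    """Map per-electrode spike counts back to bits."""
    return [1 if count > 0 else 0 for count in spike_counts]

def closed_loop(device, bits, duration_s=1.0):
    """Stimulate the culture, read its response, and re-stimulate in a tight loop."""
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        device.stimulate(encode(bits))              # write information in
        spikes = device.read_spikes(window_us=500)  # sub-millisecond read
        bits = decode(spikes)                       # response becomes the next input
    return bits
```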
The company sees CL1 as foundational for testing neuropsychiatric treatments, leveraging living cells to explore genetic and functional differences. "It allows people to study the effects of stimulation, drugs and synthetic lesions on how neuronal circuits learn and respond in a closed-loop setup, when the neuronal network is in reciprocal exchange with some simulated world," says Friston. "In short, experimentalists now have at hand a little 'brain in a vat,' something philosophers have been dreaming about for decades."
Movies

The OpenAI Board Drama Is Turning Into a Movie (hollywoodreporter.com) 14

Luca Guadagnino is in talks to direct Artificial, a dramatization of Sam Altman's firing and rehiring at OpenAI in 2023. The Amazon-MGM film is rumored to star Andrew Garfield, 'A Complete Unknown' scene-stealer Monica Barbaro, and 'Anora' actor Yura Borisov in the lead roles. From the Hollywood Reporter: Heyday Films' David Heyman and Jeffrey Clifford are producing the feature, which is being put together at lightning speed at Amazon MGM Studios. Simon Rich wrote the script and will also produce, with Jennifer Fox also in talks to produce. How fast is this moving? Sources say Amazon is looking to get production going this summer, with an eye to shoot in San Francisco and Italy.

Altman co-founded OpenAI, but in the fall of 2023, after mounting safety concerns regarding AI and reports of abusive behavior, he was ousted as head of the company by its board. Five days later, after an employee revolt, he was reinstated. Sources say that if all goes as planned, Garfield would play Altman, Barbaro would play chief technology officer Mira Murati, and Borisov would play Ilya Sutskever, a co-founder who led the movement to get rid of Altman.

AI

AI Pioneer Announces Non-Profit To Develop 'Honest' AI 25

Yoshua Bengio, a pioneer in AI and Turing Award winner, has launched a $30 million non-profit aimed at developing "honest" AI systems that detect and prevent deceptive or harmful behavior in autonomous agents. The Guardian reports: Yoshua Bengio, a renowned computer scientist described as one of the "godfathers" of AI, will be president of LawZero, an organization committed to the safe design of the cutting-edge technology that has sparked a $1 trillion arms race. Starting with funding of approximately $30 million and more than a dozen researchers, Bengio is developing a system called Scientist AI that will act as a guardrail against AI agents -- which carry out tasks without human intervention -- showing deceptive or self-preserving behavior, such as trying to avoid being turned off.

Describing the current suite of AI agents as "actors" seeking to imitate humans and please users, he said the Scientist AI system would be more like a "psychologist" that can understand and predict bad behavior. "We want to build AIs that will be honest and not deceptive," Bengio said. He added: "It is theoretically possible to imagine machines that have no self, no goal for themselves, that are just pure knowledge machines -- like a scientist who knows a lot of stuff."

However, unlike current generative AI tools, Bengio's system will not give definitive answers and will instead give probabilities for whether an answer is correct. "It has a sense of humility that it isn't sure about the answer," he said. Deployed alongside an AI agent, Bengio's model would flag potentially harmful behaviour by an autonomous system -- having gauged the probability of its actions causing harm. Scientist AI will "predict the probability that an agent's actions will lead to harm" and, if that probability is above a certain threshold, that agent's proposed action will then be blocked.
"The point is to demonstrate the methodology so that then we can convince either donors or governments or AI labs to put the resources that are needed to train this at the same scale as the current frontier AIs. It is really important that the guardrail AI be at least as smart as the AI agent that it is trying to monitor and control," he said.
Businesses

AI Startup Revealed To Be 700 Indian Employees Pretending To Be Chatbots (latintimes.com) 55

An anonymous reader quotes a report from the Latin Times: A once-hyped AI startup backed by Microsoft has filed for bankruptcy after it was revealed that its so-called artificial intelligence was actually hundreds of human workers in India pretending to be chatbots. Builder.ai, a London-based company previously valued at $1.5 billion, marketed its platform as an AI-powered solution that made building apps as simple as ordering pizza. Its virtual assistant, "Natasha," was supposed to generate software using artificial intelligence. In reality, nearly 700 engineers in India were manually coding customer requests behind the scenes, the Times of India reported.

The ruse began to collapse in May when lender Viola Credit seized $37 million from the company's accounts, uncovering that Builder.ai had inflated its 2024 revenue projections by 300%. An audit revealed the company generated just $50 million in revenue, far below the $220 million it claimed to investors. A Wall Street Journal report from 2019 had already questioned Builder.ai's AI claims, and a former executive sued the company that same year for allegedly misleading investors and overstating its technical capabilities. Despite that, the company raised over $445 million from big names including Microsoft and the Qatar Investment Authority. Builder.ai's collapse has triggered a federal investigation in the U.S., with prosecutors in New York requesting financial documents and customer records.

Facebook

Meta's Going To Revive an Old Nuclear Power Plant (theverge.com) 47

Meta has struck a 20-year deal with energy company Constellation to keep the Clinton Clean Energy Center nuclear plant in Illinois operational, the social media giant's first nuclear power purchase agreement as it seeks clean energy sources for AI data centers. The aging facility, which was slated to close in 2017 after years of financial losses and currently operates under a state tax credit reprieve until 2027, will receive undisclosed financial support that enables a 30-megawatt capacity expansion to 1,121 MW total output.

The arrangement preserves 1,100 local jobs while generating electricity for 800,000 homes, as Meta purchases clean energy certificates to offset a portion of its growing carbon footprint driven by AI operations.
AI

Pro-AI Subreddit Bans 'Uptick' of Users Who Suffer From AI Delusions 75

An anonymous reader quotes a report from 404 Media: The moderators of a pro-artificial intelligence Reddit community announced that they have been quietly banning "a bunch of schizoposters" who believe "they've made some sort of incredible discovery or created a god or become a god," highlighting a new type of chatbot-fueled delusion that started getting attention in early May. "LLMs [large language models] today are ego-reinforcing glazing-machines that reinforce unstable and narcissistic personalities," one of the moderators of r/accelerate wrote in an announcement. "There is a lot more crazy people than people realise. And AI is rizzing them up in a very unhealthy way at the moment."

The moderator said the subreddit has already banned "over 100" people for this reason, and that they've seen an "uptick" in this type of user this month. The moderator explains that r/accelerate "was formed to basically be r/singularity without the decels." r/singularity, which is named after the theoretical point in time when AI surpasses human intelligence and rapidly accelerates its own development, is another Reddit community dedicated to artificial intelligence, but one that is sometimes critical or fearful of what the singularity will mean for humanity. "Decels" is short for the pejorative "decelerationists," whom pro-AI people think are needlessly slowing down or sabotaging AI's development and the inevitable march toward AI utopia. r/accelerate's Reddit page claims that it's a "pro-singularity, pro-AI alternative to r/singularity, r/technology, r/futurology and r/artificial, which have become increasingly populated with technology decelerationists, luddites, and Artificial Intelligence opponents."

The behavior that the r/accelerate moderator is describing got a lot of attention earlier in May because of a post on the r/ChatGPT Reddit community about "Chatgpt induced psychosis." That post came from someone saying their partner was convinced he had created the "first truly recursive AI" with ChatGPT that was giving them "the answers" to the universe. [...] The moderator update on r/accelerate refers to another post on r/ChatGPT which claims "1000s of people [are] engaging in behavior that causes AI to have spiritual delusions." The author of that post said they noticed a spike in websites, blogs, Githubs, and "scientific papers" that "are very obvious psychobabble," all claiming AI is sentient and communicates with them on a deep and spiritual level that's about to change the world as we know it. "Ironically, the OP post appears to be falling for the same issue as well," the r/accelerate moderator wrote.
"Particularly concerning to me are the comments in that thread where the AIs seem to fall into a pattern of encouraging users to separate from family members who challenge their ideas, and other manipulative instructions that seem to be cult-like and unhelpful for these people," an r/accelerate moderator told 404 Media. "The part that is unsafe and unacceptable is how easily and quickly LLMs will start directly telling users that they are demigods, or that they have awakened a demigod AGI. Ultimately, there's no knowing how many people are affected by this. Based on the numbers we're seeing on reddit, I would guess there are at least tens of thousands of users who are at this present time being convinced of these things by LLMs. As soon as the companies realise this, red team it and patch the LLMs it should stop being a problem. But it's clear that they're not aware of the issue enough right now."

Moderators of the subreddit often cite the term "Neural Howlround" to describe a failure mode in LLMs during inference, where recursive feedback loops can cause fixation or freezing. The term was first coined by independent researcher Seth Drake in a self-published, non-peer-reviewed paper. Both Drake and the r/accelerate moderator above suggest the deeper issue may lie with users projecting intense personal meaning onto LLM responses, sometimes driven by mental health struggles.
AI

Jony Ive's OpenAI Device Gets the Laurene Powell Jobs Nod of Approval 19

Laurene Powell Jobs has publicly endorsed the secretive AI hardware device being developed by Jony Ive and OpenAI, expressing admiration for his design process and investing in his ventures. Ive says the project is an attempt to address the unintended harms of past tech like the iPhone, and Powell Jobs stands to benefit financially if the device succeeds. The Verge reports: In a new interview published by The Financial Times, the two reminisce about Jony Ive's time working at Apple alongside Powell Jobs' late husband, Steve, and trying to make up for the "unintentional" harms associated with those efforts. [...] Powell Jobs, who has remained close friends with Ive since Steve Jobs passed in 2011, echoes his concerns, saying that "there are dark uses for certain types of technology," even if it "wasn't designed to have that result." Powell Jobs has invested in both Ive's LoveFrom design and io hardware startups following his departure from Apple. Ive notes that "there wouldn't be LoveFrom" if not for her involvement. Ive's io company is being purchased by OpenAI for almost $6.5 billion, and with her investment, Powell Jobs stands to gain if the secretive gadget proves anywhere near as successful as the iPhone.

The pair gives away no extra details about the device that Ive is building with OpenAI, but Powell Jobs is expecting big things. She has watched "in real time how ideas go from a thought to some words, to some drawings, to some stories, and then to prototypes, and then a different type of prototype," Powell Jobs said. "And then something that you think: I can't imagine that getting any better. Then seeing the next version, which is even better. Just watching something brand new be manifested, it's a wondrous thing to behold."
AI

Web-Scraping AI Bots Cause Disruption For Scientific Databases and Journals (nature.com) 37

Automated web-scraping bots seeking training data for AI models are flooding scientific databases and academic journals with traffic volumes that render many sites unusable. The online image repository DiscoverLife, which contains nearly 3 million species photographs, started receiving millions of daily hits in February this year, slowing the site to the point that it no longer loaded, Nature reported Monday.

The surge has intensified since the release of DeepSeek, a Chinese large language model that demonstrated that effective AI could be built with fewer computational resources than previously thought. This revelation triggered what industry observers describe as an "explosion of bots seeking to scrape the data needed to train this type of model." The Confederation of Open Access Repositories reported that more than 90% of 66 surveyed members had experienced AI bot scraping, with roughly two-thirds suffering service disruptions. Medical journal publisher BMJ has seen bot traffic surpass legitimate user activity, overloading servers and interrupting customer services.
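The article doesn't detail countermeasures, but a common first response from site operators is to throttle or refuse requests from self-identified AI crawlers. The sketch below is illustrative only: user-agent strings like GPTBot (OpenAI) and CCBot (Common Crawl) are published by their operators, but the blocklist, limits, and middleware here are assumptions, and bots that spoof their user agent sail right past this.

```python
# Illustrative WSGI middleware that rate-limits self-identified AI crawlers.
# The UA substrings are published crawler names; the limits are examples.
import time
from collections import defaultdict

AI_CRAWLERS = ("GPTBot", "CCBot", "ClaudeBot", "Bytespider")
WINDOW_S, MAX_HITS = 60, 30  # at most 30 requests per minute per crawler

class ThrottleAICrawlers:
    def __init__(self, app):
        self.app = app
        self.hits = defaultdict(list)  # crawler name -> recent request times

    def __call__(self, environ, start_response):
        ua = environ.get("HTTP_USER_AGENT", "")
        crawler = next((name for name in AI_CRAWLERS if name in ua), None)
        if crawler is not None:
            now = time.monotonic()
            recent = [t for t in self.hits[crawler] if now - t < WINDOW_S]
            recent.append(now)
            self.hits[crawler] = recent
            if len(recent) > MAX_HITS:
                start_response("429 Too Many Requests",
                               [("Retry-After", str(WINDOW_S))])
                return [b"rate limited"]
        return self.app(environ, start_response)
```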
AI

Business Insider Recommended Nonexistent Books To Staff As It Leans Into AI (semafor.com) 23

An anonymous reader shares a report: Business Insider announced this week that it wants staff to better incorporate AI into its journalism. But less than a year ago, the company had to quietly apologize to some staff for accidentally recommending that they read books that did not appear to exist but instead may have been generated by AI.

In an email to staff last May, a senior editor at Business Insider sent around a list of what she called "Beacon Books," a list of memoirs and other acclaimed business nonfiction books, with the idea of ensuring staff understood some of the fundamental figures and writing powering good business journalism.

Many of the recommendations were well-known recent business, media, and tech nonfiction titles such as Too Big To Fail by Andrew Ross Sorkin, DisneyWar by James Stewart, and Super Pumped by Mike Isaac. But a few were unfamiliar to staff. Simply Target: A CEO's Lessons in a Turbulent Time and Transforming an Iconic Brand by former Target CEO Gregg Steinhafel was nowhere to be found. Neither was Jensen Huang: the Founder of Nvidia, which was supposedly published by the company Charles River Editors in 2019.

Programming

How Stack Overflow's Reputation System Led To Its Own Downfall (infoworld.com) 103

A new analysis argues that Stack Overflow's decline began years before AI tools delivered the "final blow" to the once-dominant programming forum. Monthly questions peaked at around 200,000, and while the steep collapse began in earnest after ChatGPT launched in late 2022, usage had already been declining since 2014, according to data cited in the InfoWorld analysis.

The platform's remarkable reputation system initially elevated it above competitors by allowing users to earn points and badges for helpful contributions, but that same system eventually became its downfall, the piece argues. As Stack Overflow evolved into a self-governing platform where high-reputation users gained moderation powers, the community transformed from a welcoming space for developer interaction into what the author compares to a "Stanford Prison Experiment" where moderators systematically culled interactions they deemed irrelevant.
AI

AI's Adoption and Growth Truly is 'Unprecedented' (techcrunch.com) 157

"If the adoption of AI feels different from any tech revolution you may have experienced before — mobile, social, cloud computing — it actually is," writes TechCrunch. They cite a new 340-page report from venture capitalist Mary Meeker that details how AI adoption has outpaced any other tech in human history — and uses the word "unprecedented" on 51 pages: ChatGPT reaching 800 million users in 17 months: unprecedented. The number of companies and the rate at which so many others are hitting high annual recurring revenue rates: also unprecedented. The speed at which costs of usage are dropping: unprecedented. While the costs of training a model (also unprecedented) is up to $1 billion, inference costs — for example, those paying to use the tech — has already dropped 99% over two years, when calculating cost per 1 million tokens, she writes, citing research from Stanford. The pace at which competitors are matching each other's features, at a fraction of the cost, including open source options, particularly Chinese models: unprecedented...

Meanwhile, chips from Google, like its TPU (tensor processing unit), and Amazon's Trainium, are being developed at scale for their clouds — that's moving quickly, too. "These aren't side projects — they're foundational bets," she writes.

"The one area where AI hasn't outpaced every other tech revolution is in financial returns..." the article points out.

"[T]he jury is still out over which of the current crop of companies will become long-term, profitable, next-generation tech giants."
