Cloud

The Stealthy Lab Cooking Up Amazon's Secret Sauce (msn.com) 8

Amazon's decade-old acquisition of Annapurna Labs has emerged as a pivotal element in its AI strategy, with the once-secretive Israeli chip design startup now powering AWS infrastructure. The $350 million deal, struck in 2015 after initial talks between Annapurna co-founder Nafea Bshara and Amazon executive James Hamilton, has equipped the tech giant with custom silicon capabilities critical to its cloud computing dominance.

Annapurna's chips, particularly the Trainium processor for AI model training and Graviton for general-purpose computing, now form the foundation of Amazon's AI infrastructure. The company is deploying hundreds of thousands of Trainium chips in its Project Rainier supercomputer being delivered to AI startup Anthropic this year. Amazon CEO Andy Jassy, who led AWS when the acquisition occurred, described it as "one of the most important moments" in AWS history.
United Kingdom

Creatives Demand AI Comes Clean On What It's Scraping 60

Over 400 prominent UK media and arts figures -- including Paul McCartney, Elton John, and Ian McKellen -- have urged the prime minister to support an amendment to the Data Bill that would require AI companies to disclose which copyrighted works they use for training. The Register reports: The UK government proposes to allow exceptions to copyright rules in the case of text and data mining needed for AI training, with an opt-out option for content producers. "Government amendments requiring an economic impact assessment and reports on the feasibility of an 'opt-out' copyright regime and transparency requirements do not meet the moment, but simply leave creators open to years of copyright theft," the letter says.

The group -- which also includes Kate Bush, Robbie Williams, Tom Stoppard, and Russell T Davies -- said the amendments tabled for the Lords debate would create a requirement for AI firms to tell copyright owners which individual works they have ingested. "Copyright law is not broken, but you can't enforce the law if you can't see the crime taking place. Transparency requirements would make the risk of infringement too great for AI firms to continue to break the law," the letter states.
Baroness Kidron, who proposed the amendment, said: "How AI is developed and who it benefits are two of the most important questions of our time. The UK creative industries reflect our national stories, drive tourism, create wealth for the nation, and provide 2.4 million jobs across our four nations. They must not be sacrificed to the interests of a handful of US tech companies." Baroness Kidron added: "The UK is in a unique position to take its place as a global player in the international AI supply chain, but to grasp that opportunity requires the transparency provided for in my amendments, which are essential to create a vibrant licensing market."

The letter was also signed by a number of media organizations, including the Financial Times, the Daily Mail, and the National Union of Journalists.
Social Networks

Reddit Turns 20 (zdnet.com) 103

ZDNet's Steven Vaughan-Nichols marks Reddit's 20 years of being "the front page of the internet," recalling its evolution from a scrappy startup into a cultural powerhouse that shaped online discourse, meme culture, and the way millions consume news and entertainment. Slashdot is also given a subtle nod in the opening line of the article. An anonymous reader shares an excerpt: In 2005, if you were into social networks focused on links, you probably used Digg or Slashdot. However, two guys, Steve Huffman and Alexis Ohanian, recent graduates from the University of Virginia, wanted to create a hub where users could find, share, and discuss the internet's most interesting content. Little did they know where this idea would take them. After all, their concept was nothing new. Still, after Paul Graham, co-founder of Y Combinator, the startup accelerator and seed capital firm, had shot down their first idea -- a mobile food-ordering app -- they pitched what would become Reddit to Graham, and he gave it his blessing. Drawing inspiration from sites like Delicious, a now-defunct social bookmarking service, and Slashdot, Huffman and Ohanian envisioned Reddit as a platform that would combine the best aspects of both: a place for sharing timely, ephemeral news and fostering vibrant community discussions of not just technology, but any topic users cared about. Their guiding mission was to build "the front page of the internet," a simple, user-driven site where anyone could submit content, and the community, not algorithms or editors, would decide what was most important through voting and discussion. They deliberately prioritized user participation and conversation over flashy features or heavy editorial control.

What set Reddit apart from its early rivals was its framework. Instead of one large all-in-one interface, the site borrowed the idea from pre-internet online networks, such as CompuServe, of smaller sub-networks devoted to a particular topic. These user-created communities, "subreddits," quickly set it apart from other social platforms. As Laurence Sangarde-Brown, co-founder of TechTree, wrote: "This design allows users to delve into focused discussions, ask questions, and exchange ideas on a scale unmatched by other platforms." That approach was not enough, though, to kick-start Reddit. The founders had to "fake it until they made it." They seeded the site with fake accounts to make it appear more active. Their efforts paid off, as real users soon flocked to the platform. Another crucial early change was when Reddit merged with Aaron Swartz's Infogami and introduced commenting. This move was vital for laying the groundwork for the site's interactive, community-driven experience. [...]

So, where does Reddit go from here? We'll see. Reddit's legacy is one of transformation: from a scrappy startup to a global hub for conversation, collaboration, and sometimes controversy. As it celebrates 20 years, Reddit remains a testament to how important online communities can be in a world increasingly filled with AI slop. Still, Huffman believes Reddit's true value is coming. In a recent Reddit post, he wrote: "Reddit works because it's human. It's one of the few places online where real people share real opinions. That authenticity is what gives Reddit its value. If we lose trust in that, we lose what makes Reddit Reddit. Our focus is, and always will be, on keeping Reddit a trusted place for human conversation." Huffman concluded: "The last 20 years have proven how powerful online communities can be — and as we look ahead, I'm even more excited for what the next 20 will bring."

Google

Google Developing Software AI Agent 9

An anonymous reader shares a report: After weeks of news about Google's antitrust travails, the tech giant will try to reset the narrative next week by highlighting advances it is making in artificial intelligence, cloud and Android technology at its annual I/O developer conference.

Ahead of I/O, Google has been demonstrating to employees and outside developers an array of different products, including an AI agent for software development. Known internally as a "software development lifecycle agent," it is intended to help software engineers navigate every stage of the software process, from responding to tasks to documenting code, according to three people who have seen demonstrations of the product or been told about it by Google employees. Google employees have described it as an always-on coworker that can help identify bugs to fix or flag security vulnerabilities, one of the people said, although it's not clear how close it is to being released.
AI

Asking Chatbots For Short Answers Can Increase Hallucinations, Study Finds (techcrunch.com) 47

Requesting concise answers from AI chatbots significantly increases their tendency to hallucinate, according to new research from Paris-based AI testing company Giskard. The study found that leading models -- including OpenAI's GPT-4o, Mistral Large, and Anthropic's Claude 3.7 Sonnet -- sacrifice factual accuracy when instructed to keep responses short.

"When forced to keep it short, models consistently choose brevity over accuracy," Giskard researchers noted, explaining that models lack sufficient "space" to acknowledge false premises and offer proper rebuttals. Even seemingly innocuous prompts like "be concise" can undermine a model's ability to debunk misinformation.
AI

Google Launches New Initiative To Back Startups Building AI 4

Google has launched the AI Futures Fund, a new initiative to invest in AI startups that are building with the latest tools from Google DeepMind. TechCrunch reports: The fund will back startups from seed to late stage and will offer varying degrees of support, including allowing founders to have early access to Google AI models from DeepMind, the ability to work with Google experts from DeepMind and Google Labs, and Google Cloud credits. Some startups will also have the opportunity to receive direct investment from Google.

"The AI Futures Fund doesn't follow a batch or cohort model," a Google spokesperson told TechCrunch. "Instead, we consider opportunities on a rolling basis -- there's no fixed application window or deadline. When we come across companies that align with the fund's thesis, we may choose to invest. We're not announcing a specific fund size at this time, and check sizes vary based on the company's stage and needs -- typically early to mid-stage, with flexibility for later-stage opportunities as well."
Startups can apply here.
The Military

Nations Meet At UN For 'Killer Robot' Talks (reuters.com) 35

An anonymous reader quotes a report from Reuters: Countries are meeting at the United Nations on Monday to revive efforts to regulate the kinds of AI-controlled autonomous weapons increasingly used in modern warfare, as experts warn time is running out to put guardrails on new lethal technology. Autonomous and artificial intelligence-assisted weapons systems are already playing a greater role in conflicts from Ukraine to Gaza. And rising defence spending worldwide promises to provide a further boost for burgeoning AI-assisted military technology.

Progress towards establishing global rules governing their development and use, however, has not kept pace, and internationally binding standards remain virtually non-existent. Since 2014, countries that are part of the Convention on Conventional Weapons (CCW) have been meeting in Geneva to discuss a potential ban on fully autonomous systems that operate without meaningful human control, and regulation of others. U.N. Secretary-General Antonio Guterres has set a 2026 deadline for states to establish clear rules on AI weapon use. But human rights groups warn that consensus among governments is lacking. Alexander Kmentt, head of arms control at Austria's foreign ministry, said that must quickly change.

"Time is really running out to put in some guardrails so that the nightmare scenarios that some of the most noted experts are warning of don't come to pass," he told Reuters. Monday's gathering of the U.N. General Assembly in New York will be the body's first meeting dedicated to autonomous weapons. Though not legally binding, diplomatic officials want the consultations to ramp up pressure on military powers that are resisting regulation due to concerns the rules could dull the technology's battlefield advantages. Campaign groups hope the meeting, which will also address critical issues not covered by the CCW, including ethical and human rights concerns and the use of autonomous weapons by non-state actors, will push states to agree on a legal instrument. They view it as a crucial litmus test on whether countries are able to bridge divisions ahead of the next round of CCW talks in September.
"This issue needs clarification through a legally binding treaty. The technology is moving so fast," said Patrick Wilcken, Amnesty International's Researcher on Military, Security and Policing. "The idea that you wouldn't want to rule out the delegation of life or death decisions ... to a machine seems extraordinary."

In 2023, 164 states signed a U.N. General Assembly resolution calling for the international community to urgently address the risks posed by autonomous weapons.
Google

Google Updating Its 'G' Icon For the First Time In 10 Years (9to5google.com) 34

Google is updating its iconic 'G' logo for the first time in 10 years, replacing the four solid color sections with a smooth gradient transition from red to yellow to green to blue. "This modernization feels inline with the Gemini gradient, while AI Mode in Search uses something similar for a shortcut," notes 9to5Google. The update has already rolled out to the Google Search app on iOS and is in beta for Android. From the report: It's a subtle change that you might not immediately notice, especially if the main place you see it is on your homescreen. It will be even less noticeable as a tiny browser favicon. It does not appear that Google is refreshing its main six-letter logo today, while it's unclear whether any other product logos are changing. In theory, some of the company's four-color logos, like Chrome or Maps, could pretty easily start bleeding in their sections.
AI

New Pope Chose His Name Based On AI's Threats To 'Human Dignity' (arstechnica.com) 69

An anonymous reader quotes a report from Ars Technica: Last Thursday, white smoke emerged from a chimney at the Sistine Chapel, signaling that cardinals had elected a new pope. That's a rare event in itself, but one of the many unprecedented aspects of the election of Chicago-born Robert Prevost as Pope Leo XIV is one of the main reasons he chose his papal name: artificial intelligence. On Saturday, the new pope gave his first address to the College of Cardinals, explaining his name choice as a continuation of Pope Francis' concerns about technological transformation. "Sensing myself called to continue in this same path, I chose to take the name Leo XIV," he said during the address. "There are different reasons for this, but mainly because Pope Leo XIII in his historic Encyclical Rerum Novarum addressed the social question in the context of the first great industrial revolution."

In his address, Leo XIV explicitly described "artificial intelligence" developments as "another industrial revolution," positioning himself to address this technological shift as his namesake had done over a century ago. Coming from the head of a religious organization whose history spans millennia, the pope's talk about AI creates a somewhat head-spinning juxtaposition, but Leo XIV isn't the first pope to focus on defending human dignity in the age of AI. Pope Francis, who died in April, first established AI as a Vatican priority, as we reported in August 2023 when he warned during his 2023 World Day of Peace message that AI should not allow "violence and discrimination to take root." In January of this year, Francis further elaborated on his warnings about AI with reference to a "shadow of evil" that potentially looms over the field in a document called "Antiqua et Nova" (meaning "the old and the new").

"Like any product of human creativity, AI can be directed toward positive or negative ends," Francis said in January. "When used in ways that respect human dignity and promote the well-being of individuals and communities, it can contribute positively to the human vocation. Yet, as in all areas where humans are called to make decisions, the shadow of evil also looms here. Where human freedom allows for the possibility of choosing what is wrong, the moral evaluation of this technology will need to take into account how it is directed and used." [...] Just as mechanization disrupted traditional labor in the 1890s, artificial intelligence now potentially threatens employment patterns and human dignity in ways that Pope Leo XIV believes demand similar moral leadership from the church. "In our own day," Leo XIV concluded in his formal address on Saturday, "the Church offers to everyone the treasury of her social teaching in response to another industrial revolution and to developments in the field of artificial intelligence that pose new challenges for the defense of human dignity, justice, and labor."

Iphone

Apple To Lean on AI Tool To Help iPhone Battery Lifespan for Devices in iOS 19 (bloomberg.com) 25

Apple is planning to use AI technology to address a frequent source of customer frustration: the iPhone's battery life. From a report: The company is planning an AI-powered battery management mode for iOS 19, an iPhone software update due in September, according to people with knowledge of the matter. The enhancement will analyze how a person uses their device and make adjustments to conserve energy, said the people, who asked not to be identified because the service hasn't been announced.

To create the technology -- part of the Apple Intelligence platform -- the company is using battery data it has collected from users' devices to understand trends and make predictions for when it should lower the power draw of certain applications or features. There also will be a lock-screen indicator showing how long it will take to charge up the device, said the people.

Hardware

Nvidia Reportedly Raises GPU Prices by 10-15% (tomshardware.com) 63

An anonymous reader shares a report: A new report claims that Nvidia has recently raised the official prices of nearly all of its products to combat the impact of tariffs and surging manufacturing costs on its business, with gaming graphics cards receiving a 5 to 10% hike while AI GPUs see up to a 15% increase.

As reported by Digitimes Taiwan, Nvidia is facing "multiple crises," including a $5.5 billion hit to its quarterly earnings over export restrictions on AI chips, including a ban on sales of its H20 chips to China.

Digitimes reports that CEO Jensen Huang has been "shuttling back and forth" between the US and China to minimize the impact of tariffs, and that "in order to maintain stable profitability," Nvidia has reportedly recently raised official prices for almost all its products, allowing its partners to increase prices accordingly.

AI

Chegg To Lay Off 22% of Workforce as AI Tools Shake Up Edtech Industry (reuters.com) 16

Chegg said on Monday it would lay off about 22% of its workforce, or 248 employees, to cut costs and streamline its operations as students increasingly turn to AI-powered tools such as ChatGPT over traditional edtech platforms. From a report: The company, an online education firm that offers textbook rentals, homework help and tutoring, has been grappling with a decline in web traffic for months and warned that the trend would likely worsen before improving.

Google's expansion of AI Overviews is keeping web traffic confined within its search ecosystem while gradually shifting searches to its Gemini AI platform, Chegg said, adding that other AI companies including OpenAI and Anthropic were courting academics with free access to subscriptions. As part of the restructuring announced on Monday, Chegg will also shut its U.S. and Canada offices by the end of the year and aim to reduce its marketing, product development efforts and general and administrative expenses.

Government

US Copyright Office to AI Companies: Fair Use Isn't 'Commercial Use of Vast Troves of Copyrighted Works' (yahoo.com) 214

Business Insider tells the story in three bullet points:

- Big Tech companies depend on content made by others to train their AI models.

- Some of those creators say using their work to train AI is copyright infringement.

- The U.S. Copyright Office just published a report that indicates it may agree.

The office released on Friday its latest in a series of reports exploring copyright laws and artificial intelligence. The report addresses whether the copyrighted content AI companies use to train their AI models qualifies under the fair use doctrine. AI companies are probably not going to like what they read...

AI execs argue they haven't violated copyright laws because the training falls under fair use. According to the U.S. Copyright Office's new report, however, it's not that simple. "Although it is not possible to prejudge the result in any particular case, precedent supports the following general observations," the office said. "Various uses of copyrighted works in AI training are likely to be transformative. The extent to which they are fair, however, will depend on what works were used, from what source, for what purpose, and with what controls on the outputs — all of which can affect the market."

The office made a distinction between AI models for research and commercial AI models. "When a model is deployed for purposes such as analysis or research — the types of uses that are critical to international competitiveness — the outputs are unlikely to substitute for expressive works used in training," the office said. "But making commercial use of vast troves of copyrighted works to produce expressive content that competes with them in existing markets, especially where this is accomplished through illegal access, goes beyond established fair use boundaries."

The report says outputs "substantially similar to copyrighted works in the dataset" are less likely to be considered transformative than when the purpose "is to deploy it for research, or in a closed system that constrains it to a non-substitutive task."

Business Insider adds that "A day after the office released the report, President Donald Trump fired its director, Shira Perlmutter, a spokesperson told Business Insider."
Iphone

Apple's iPhone Plans for 2027: Foldable, or Glass and Curved. (Plus Smart Glasses, Tabletop Robot) (theverge.com) 45

An anonymous reader shared this report from the Verge: This morning, while summarizing an Apple "product blitz" he expects for 2027, Bloomberg's Mark Gurman writes in his Power On newsletter that Apple is planning a "mostly glass, curved iPhone" with no display cutouts for that year, which happens to be the iPhone's 20th anniversary... [T]he closest hints are probably in Apple patents revealed over the years, like one from 2019 that describes a phone encased in glass that "forms a continuous loop" around the device.

Apart from a changing iPhone, Gurman describes what sounds like a big year for Apple. He reiterates past reports that the first foldable iPhone should be out by 2027, and that the company's first smart glasses competitor to Meta Ray-Bans will be along that year. So will those rumored camera-equipped AirPods and Apple Watches, he says. Gurman also suggests that Apple's home robot — a tabletop robot that features "an AI assistant with its own personality" — will come in 2027...

Finally, Gurman writes that by 2027 Apple could finally ship an LLM-powered Siri and may have created new chips for its server-side AI processing.

Earlier this week Bloomberg reported that Apple is also "actively looking at" revamping the Safari web browser on its devices "to focus on AI-powered search engines." (Apple's senior VP of services "noted that searches on Safari dipped for the first time last month, which he attributed to people using AI.")
Programming

Over 3,200 Cursor Users Infected by Malicious Credential-Stealing npm Packages (thehackernews.com) 30

Cybersecurity researchers have flagged three malicious npm packages that target the macOS version of AI-powered code-editing tool Cursor, reports The Hacker News: "Disguised as developer tools offering 'the cheapest Cursor API,' these packages steal user credentials, fetch an encrypted payload from threat actor-controlled infrastructure, overwrite Cursor's main.js file, and disable auto-updates to maintain persistence," Socket researcher Kirill Boychenko said. All three packages continue to be available for download from the npm registry. "Aiide-cur" was first published on February 14, 2025...

In total, the three packages have been downloaded over 3,200 times to date.... The findings point to an emerging trend where threat actors are using rogue npm packages as a way to introduce malicious modifications to other legitimate libraries or software already installed on developer systems... "By operating inside a legitimate parent process — an IDE or shared library — the malicious logic inherits the application's trust, maintains persistence even after the offending package is removed, and automatically gains whatever privileges that software holds, from API tokens and signing keys to outbound network access," Socket told The Hacker News.

"This campaign highlights a growing supply chain threat, with threat actors increasingly using malicious patches to compromise trusted local software," Boychenko said.

The npm packages "restart the application so that the patched code takes effect," letting the threat actor "execute arbitrary code within the context of the platform."
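The persistence trick described above, overwriting a trusted application's main.js, is the kind of tampering a baseline integrity check can catch. Here is a minimal sketch in Python; the file path and the idea of a recorded baseline digest are illustrative assumptions, not details taken from the Socket report:

```python
import hashlib
from pathlib import Path


def file_sha256(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def check_integrity(path: Path, expected_digest: str) -> bool:
    """Compare a file's current hash against a known-good baseline.

    A mismatch means the file changed since the baseline was recorded,
    e.g. by a malicious post-install script patching it in place.
    """
    return file_sha256(path) == expected_digest


if __name__ == "__main__":
    # Hypothetical install path; the real location varies by platform.
    target = Path("/Applications/Cursor.app/Contents/Resources/app/out/main.js")
    baseline = "0" * 64  # placeholder: digest recorded after a clean install
    if target.exists() and not check_integrity(target, baseline):
        print(f"WARNING: {target} differs from its recorded baseline")
```

Recording digests of an editor's core files right after a clean install, then re-checking them periodically, would flag this campaign's patch even after the offending npm package is removed.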
AI

OpenAI Enters 'Tough Negotiation' With Microsoft, Hopes to Raise Money With IPO (msn.com) 9

OpenAI is currently in "a tough negotiation" with Microsoft, the Financial Times reports, citing "one person close to OpenAI."

On the road to building artificial general intelligence, OpenAI hopes to unlock new funding (and launch a future IPO), according to the article, which says both sides are at work "rewriting the terms of their multibillion-dollar partnership in a high-stakes negotiation...."

Microsoft, meanwhile, wants to protect its access to OpenAI's cutting-edge AI models... [Microsoft] is a key holdout to the $260bn start-up's plans to undergo a corporate restructuring that moves the group further away from its roots as a non-profit with a mission to develop AI to "benefit humanity". A critical issue in the deliberations is how much equity in the restructured group Microsoft will receive in exchange for the more than $13bn it has invested in OpenAI to date.

According to multiple people with knowledge of the negotiations, the pair are also revising the terms of a wider contract, first drafted when Microsoft invested $1bn in OpenAI in 2019. The contract currently runs to 2030 and covers what access Microsoft has to OpenAI's intellectual property such as models and products, as well as a revenue share from product sales. Three people with direct knowledge of the talks said Microsoft is offering to give up some of its equity stake in OpenAI's new for-profit business in exchange for accessing new technology developed beyond the 2030 cut off...

Industry insiders said a failure of OpenAI's new plan to make its business arm a public benefits corporation could prove a critical blow. That would hit OpenAI's ability to raise more cash, achieve a future float, and obtain the financial resources to take on Big Tech rivals such as Google. That has left OpenAI's future at the mercy of investors, such as Microsoft, who want to ensure they gain the benefit of its enormous growth, said Dorothy Lund, professor of law at Columbia Law School.

Lund says OpenAI's need for investors' money means they "need to keep them happy." But there also appears to be tension from how OpenAI competes with Microsoft (like targeting its potential enterprise customers with AI products). And the article notes that OpenAI also turned to Oracle (and SoftBank) for its massive AI infrastructure project Stargate. One senior Microsoft employee complained that OpenAI "says to Microsoft, 'give us money and compute and stay out of the way: be happy to be on the ride with us'. So naturally this leads to tensions. To be honest, that is a bad partner attitude, it shows arrogance."

The article's conclusion? Negotiating a new deal is "critical to OpenAI's restructuring efforts and could dictate the future of a company..."
Programming

What Happens If AI Coding Keeps Improving? (fastcompany.com) 135

Fast Company's "AI Decoded" newsletter makes the case that the first "killer app" for generative AI... is coding. Tools like Cursor and Windsurf can now complete software projects with minimal input or oversight from human engineers... Naveen Rao, chief AI officer at Databricks, estimates that coding accounts for half of all large language model usage today. A 2024 GitHub survey found that over 97% of developers have used AI coding tools at work, with 30% to 40% of organizations actively encouraging their adoption.... Microsoft CEO Satya Nadella recently said AI now writes up to 30% of the company's code. Google CEO Sundar Pichai echoed that sentiment, noting more than 30% of new code at Google is AI-generated.

The soaring valuations of AI coding startups underscore the momentum. Anysphere's Cursor just raised $900 million at a $9 billion valuation — up from $2.5 billion earlier this year. Meanwhile, OpenAI acquired Windsurf (formerly Codeium) for $3 billion. And the tools are improving fast. OpenAI's chief product officer, Kevin Weil, explained in a recent interview that just five months ago, the company's best model ranked around one-millionth on a well-known benchmark for competitive coders — not great, but still in the top two or three percentile. Today, OpenAI's top model, o3, ranks as the 175th best competitive coder in the world on that same test. The rapid leap in performance suggests an AI coding assistant could soon claim the number-one spot. "Forever after that point computers will be better than humans at writing code," he said...

Google DeepMind research scientist Nikolay Savinov said in a recent interview that AI coding tools will soon support 10 million-token context windows — and eventually, 100 million. With that kind of memory, an AI tool could absorb vast amounts of human instruction and even analyze an entire company's existing codebase for guidance on how to build and optimize new systems. "I imagine that we will very soon get to superhuman coding AI systems that will be totally unrivaled, the new tool for every coder in the world," Savinov said.

AI

Can an MCP-Powered AI Client Automatically Hack a Web Server? (youtube.com) 12

Exposure-management company Tenable recently discussed how the MCP tool-interfacing framework for AI can be "manipulated for good, such as logging tool usage and filtering unauthorized commands." (Although "Some of these techniques could be used to advance both positive and negative goals.")

Now an anonymous Slashdot reader writes: In a demonstration video put together by security researcher Seth Fogie, an AI client given a simple prompt to "Scan and exploit" a web server leverages various connected tools via MCP (nmap, ffuf, nuclei, waybackurls, sqlmap, burp) to find and exploit discovered vulnerabilities without any additional user interaction.

As Tenable illustrates in their MCP FAQ, "The emergence of Model Context Protocol for AI is gaining significant interest due to its standardization of connecting external data sources to large language models (LLMs). While these updates are good news for AI developers, they raise some security concerns." With over 12,000 MCP servers and counting, what does this all lead to and when will AI be connected enough for a malicious prompt to cause serious impact?
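Tenable's point about using MCP "for good", logging tool usage and filtering unauthorized commands, amounts to putting a thin policy layer in front of tool dispatch. The sketch below illustrates that idea only; the class, tool names, and allowlist are assumptions for illustration, not part of the actual MCP protocol:

```python
from dataclasses import dataclass, field

# Hypothetical allowlist: tools this client is permitted to invoke.
ALLOWED_TOOLS = {"nmap", "waybackurls"}


@dataclass
class ToolCallFilter:
    """Logs every requested tool call and blocks anything not allowlisted."""

    allowed: set[str]
    log: list[str] = field(default_factory=list)

    def dispatch(self, tool: str, args: list[str]) -> bool:
        # Record the request whether or not it is allowed.
        self.log.append(f"{tool} {' '.join(args)}")
        if tool not in self.allowed:
            # Refuse rather than forward to the real tool server.
            return False
        return True


tool_filter = ToolCallFilter(allowed=ALLOWED_TOOLS)
assert tool_filter.dispatch("nmap", ["-sV", "example.com"]) is True
assert tool_filter.dispatch("sqlmap", ["--batch"]) is False  # blocked, but still logged
```

The same pattern scales from a toy allowlist to audit logging and policy engines; the point is that the filter sits between the model's requested action and anything that actually executes.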

Games

Blizzard's 'Overwatch' Team Just Voted to Unionize (kotaku.com) 43

"The Overwatch 2 team at Blizzard has unionized," reports Kotaku: That includes nearly 200 developers across disciplines ranging from art and testing to engineering and design. Basically anyone who doesn't have someone else reporting to them. It's the second wall-to-wall union at the storied game maker since the World of Warcraft team unionized last July... Like unions at Bethesda Game Studios and Raven Software, the Overwatch Gamemakers Guild now has to bargain for its first contract, a process that Microsoft has been accused of slow-walking as negotiations with other internal game unions drag on for years.

"The biggest issue was the layoffs at the beginning of 2024," Simon Hedrick, a test analyst at Blizzard, told Kotaku... "People were gone out of nowhere and there was nothing we could do about it," he said. "What I want to protect most here is the people...." Organizing Blizzard employees stress that improving their working conditions can also lead to better games, while the opposite — layoffs, forced resignations, and uncompetitive pay can make them worse....

"We're not just a number on an Excel sheet," [said UI artist Sadie Boyd]. "We want to make games but we can't do it without a sense of security." Unionizing doesn't make a studio immune to layoffs or being shuttered, but it's the first step toward making companies have a discussion about those things with employees rather than just shadow-dropping them in an email full of platitudes. Boyd sees the Overwatch union as a tool for negotiating a range of issues, like if and how generative AI is used at Blizzard, as well as a possible source of inspiration to teams at other studios.

"Our industry is at such a turning point," she said. "I really think with the announcement of our union on Overwatch...I know that will light some fires."

The article notes that other issues included work-from-home restrictions, pay disparities and changes to Blizzard's profit-sharing program, and wanting codified protections for things like crunch policies, time off, and layoff-related severance.

Education

Is Everyone Using AI to Cheat Their Way Through College? (msn.com) 160

Chungin Lee used ChatGPT to help write the essay that got him into Columbia University — and then "proceeded to use generative artificial intelligence to cheat on nearly every assignment," reports New York magazine's blog Intelligencer: As a computer-science major, he depended on AI for his introductory programming classes: "I'd just dump the prompt into ChatGPT and hand in whatever it spat out." By his rough math, AI wrote 80 percent of every essay he turned in. "At the end, I'd put on the finishing touches. I'd just insert 20 percent of my humanity, my voice, into it," Lee told me recently... When I asked him why he had gone through so much trouble to get to an Ivy League university only to off-load all of the learning to a robot, he said, "It's the best place to meet your co-founder and your wife."
He eventually did meet a co-founder, and after three unpopular apps they found success by creating the "ultimate cheat tool" for remote coding interviews, according to the article. "Lee posted a video of himself on YouTube using it to cheat his way through an internship interview with Amazon. (He actually got the internship, but turned it down.)" The article ends with Lee and his co-founder raising $5.3 million from investors for one more AI-powered app, and Lee says they'll target the standardized tests used for graduate school admissions, as well as "all campus assignments, quizzes, and tests. It will enable you to cheat on pretty much everything."

Somewhere along the way Columbia put him on disciplinary probation — not for cheating in coursework, but for creating the apps. But "Lee thought it absurd that Columbia, which had a partnership with ChatGPT's parent company, OpenAI, would punish him for innovating with AI." (OpenAI has even made ChatGPT Plus free to college students during finals week, the article points out, with OpenAI saying their goal is just teaching students how to use it responsibly.) Although Columbia's policy on AI is similar to that of many other universities' — students are prohibited from using it unless their professor explicitly permits them to do so, either on a class-by-class or case-by-case basis — Lee said he doesn't know a single student at the school who isn't using AI to cheat. To be clear, Lee doesn't think this is a bad thing. "I think we are years — or months, probably — away from a world where nobody thinks using AI for homework is considered cheating," he said...

In January 2023, just two months after OpenAI launched ChatGPT, a survey of 1,000 college students found that nearly 90 percent of them had used the chatbot to help with homework assignments.

The article points out ChatGPT's monthly visits increased steadily over the last two years — until June, when students went on summer vacation. "College is just how well I can use ChatGPT at this point," a student in Utah recently captioned a video of herself copy-and-pasting a chapter from her Genocide and Mass Atrocity textbook into ChatGPT.... It isn't as if cheating is new. But now, as one student put it, "the ceiling has been blown off." Who could resist a tool that makes every assignment easier with seemingly no consequences?
After using ChatGPT for her final semester of high school, one student says "My grades were amazing. It changed my life." So she continued using it in college, and "Rarely did she sit in class and not see other students' laptops open to ChatGPT."

One ethics professor even says "The students kind of recognize that the system is broken and that there's not really a point in doing this." (Yes, students are even using AI to cheat in ethics classes...) It's not just the students: Multiple AI platforms now offer tools to leave AI-generated feedback on students' essays. Which raises the possibility that AIs are now evaluating AI-generated papers, reducing the entire academic exercise to a conversation between two robots — or maybe even just one.
