AI

OpenAI Pulls Promotional Materials About Jony Ive Deal (After Trademark Lawsuit) (techcrunch.com) 2

OpenAI appears to have pulled a much-discussed video promoting the friendship between CEO Sam Altman and legendary Apple designer Jony Ive (plus, incidentally, OpenAI's $6.5 billion deal to acquire Ive and Altman's device startup io) from its website and YouTube page. [Though you can still see the original on Archive.org.]

Does that suggest something is amiss with the acquisition, or with plans for Ive to lead design work at OpenAI? Not exactly, according to Bloomberg's Mark Gurman, who reports [on X.com] that the "deal is on track and has NOT dissolved or anything of the sort." Instead, he said a judge has issued a restraining order over the io name, forcing the company to pull all materials that used it.

Gurman elaborates on the disappearance of the video (and other related marketing materials) in a new article at Bloomberg: Bloomberg reported last week that a judge was considering barring OpenAI from using the IO name due to a lawsuit recently filed by the similarly named IYO Inc., which is also building AI devices. "This is an utterly baseless complaint and we'll fight it vigorously," a spokesperson for Ive said on Sunday.
The video is still viewable on X.com, notes TechCrunch. But visiting the "Sam and Jony" page on OpenAI now pulls up a 404 error message — written in the form of a haiku:

Ghost of code lingers
Blank space now invites wonder
Thoughts begin to soar

by o4-mini-high

AI

Tesla Begins Driverless Robotaxi Service in Austin, Texas (theguardian.com) 110

With no one behind the steering wheel, a Tesla robotaxi passes Guero's Taco Bar in Austin, Texas, making a right turn onto Congress Avenue.

Today is the day Austin became the first city in the world to see Tesla's self-driving robotaxi service, reports The Guardian: Some analysts believe that the robotaxis will only be available to employees and invitees initially. By the CEO's own account, Tesla's rollout is deliberately slow. "We could start with 1,000 or 10,000 [robotaxis] on day one, but I don't think that would be prudent," he told CNBC in May. "So, we will start with probably 10 for a week, then increase it to 20, 30, 40."

The billionaire has said the driverless cars will be monitored remotely... [Posting on X.com] Musk said the date was "tentatively" 22 June, but that this launch would not yet be "real self-driving", which would have to wait nearly another week... Musk said he planned to have 1,000 Tesla robotaxis on Austin roads "within a few months" and would then expand to other cities in Texas and California.

Musk posted on X that riders on launch day would be charged a flat fee of $4.20, according to Reuters. And "In recent days, Tesla has sent invites to a select group of Tesla online influencers for a small and carefully monitored robotaxi trial..." As the date of the planned robotaxi launch approached, Texas lawmakers moved to enact rules on autonomous vehicles in the state. Texas Governor Greg Abbott, a Republican, on Friday signed legislation requiring a state permit to operate self-driving vehicles. The law does not take effect until September 1, but the governor's approval of it on Friday signals state officials from both parties want the driverless-vehicle industry to proceed cautiously... The law softens the state's previous anti-regulation stance on autonomous vehicles. A 2017 Texas law specifically prohibited cities from regulating self-driving cars...

The law requires autonomous-vehicle operators to get approval from the Texas Department of Motor Vehicles before operating on public streets without a human driver. It also gives state authorities the power to revoke permits if they deem a driverless vehicle "endangers the public," and requires firms to provide information on how police and first responders can deal with their driverless vehicles in emergency situations. The law's requirements for getting a state permit to operate an "automated motor vehicle" are not particularly onerous but require a firm to attest it can safely operate within the law... Compliance remains far easier than in some states, most notably California, which requires extensive submission of vehicle-testing data under state oversight.

Tesla "planned to operate only in areas it considered the safest," according to the article, and "plans to avoid bad weather, difficult intersections, and will not carry anyone below the age of 18."

More details from UPI: To get started using the robotaxis, users must download the Robotaxi app and log in with their Tesla account, after which it functions like most ridesharing apps...

"Riders may not always be delivered to their intended destinations or may experience inconveniences, interruptions, or discomfort related to the Robotaxi," the company wrote in a disclaimer in its terms of service. "Tesla may modify or cancel rides in its discretion, including for example due to weather conditions." The terms of service include a clause that Tesla will not be liable for "any indirect, consequential, incidental, special, exemplary, or punitive damages, including lost profits or revenues, lost data, lost time, the costs of procuring substitute transportation services, or other intangible losses" from the use of the robotaxis.

UPI's article includes a link to the robotaxi's complete Terms of Service: To the fullest extent permitted by law, the Robotaxi, Robotaxi app, and any ride are provided "as is" and "as available" without warranties of any kind, either express or implied... The Robotaxi is not intended to provide transportation services in connection with emergencies, for example emergency transportation to a hospital... Tesla's total liability for any claim arising from or relating to Robotaxi or the Robotaxi app is limited to the greater of the amount paid by you to Tesla for the Robotaxi ride giving rise to the claim, and $100... Tesla may modify these Terms in our discretion, effective upon posting an updated version on Tesla's website. By using a Robotaxi or the Robotaxi app after Tesla posts such modifications, you agree to be bound by the revised Terms.
AI

How Will AI Impact Call Center Jobs in India? (msn.com) 52

How will AI reshape the future of work? The Washington Post looks at India's $280 billion call-center and "business process outsourcing" industry, which employs over 3 million people.

2023 saw the arrival of real-time "accent-altering" software — now used by at least 42,000 call center agents: Those who use the software are engaging in "digital whitewashing," critics say, which helps explain why the industry prefers the term "accent translation" over "accent neutralization." But companies say it's delivering results: happier customers, satisfied agents, faster calls.

Many are not convinced. Whatever short-term gains automation may offer to workers, they say, it will ultimately eliminate far more jobs than it creates. They point to the quality assurance process: When callers hear, "this call may be monitored," that now usually refers to an AI system, not a human [which now can review all calls for compliance and tone]... "AI is going to crush entry-level white-collar hiring over the next 24 to 36 months," said Mark Serdar, who has spent his career helping Fortune 500 companies expand their global workforce. "And it's happening faster than most people realize...." Already, chatbots, or "virtual agents," are handling basic tasks like password resets or balance updates. AI systems are writing code, translating emails, onboarding patients, and analyzing applications for credit cards, mortgages and insurance. The human jobs are changing, too. AI "co-pilots" are providing call center agents with instant answers and suggested scripts. At some companies, bots have started handling the calls.

There is no shortage of ominous predictions about the implications for India's labor force. Within a year, there will only be a "minimal" need for call centers, K Krithivasan, CEO of Indian IT company Tata Consultancy Services, recently told the Financial Times. The Brookings Institution found 86 percent of customer service tasks have "high automation potential." More than a quarter of jobs in India have "high exposure" to AI, the International Monetary Fund has warned. "There is a rapid wave coming," said Pratyush Kumar, co-founder of Sarvam, a leading Indian AI firm, which recently helped a major insurance provider make 40 million automated phone calls informing enrollees that their insurance program was expiring. He said corporate clients are all asking him to help reduce headcount...

While AI may be phasing out certain jobs, its defenders say it is also creating different kinds of opportunities. Teleperformance, along with hundreds of other companies, has hired thousands of data annotators in India — many of them women in small towns and rural areas — to label training images and videos for AI systems. Prompt engineers, data scientists, AI trainers and speech scientists are all newly in demand... At some firms, those who previously worked in quality assurance have transitioned to performance coaching, said [Sharath Narayana, co-founder of AI speech tools company Sanas], whose previous firm, Observe.ai, also built QA software. Still, he admits, 10 to 20 percent of workers he observed "could not upskill at all" and were probably let go.

Even the most hopeful admit that workers who can't adapt will fall behind. "It's like the industrial revolution," said Prithvijit Roy, Accenture's former lead for its Global AI Hub. "Some will suffer."

The article also notes that while Indian universities produce over a million engineering graduates each year, "placement rates are falling at leading IT firms; salaries have stagnated."
AI

How the Music Industry is Building the Tech to Hunt Down AI-Generated Songs (theverge.com) 75

The goal isn't to stop generative music, but to make it traceable, reports the Verge — "to identify it early, tag it with metadata, and govern how it moves through the system...."

"Detection systems are being embedded across the entire music pipeline: in the tools used to train models, the platforms where songs are uploaded, the databases that license rights, and the algorithms that shape discovery." Platforms like YouTube and [French music streaming service] Deezer have developed internal systems to flag synthetic audio as it's uploaded and shape how it surfaces in search and recommendations. Other music companies — including Audible Magic, Pex, Rightsify, and SoundCloud — are expanding detection, moderation, and attribution features across everything from training datasets to distribution... Vermillio and Musical AI are developing systems to scan finished tracks for synthetic elements and automatically tag them in the metadata. Vermillio's TraceID framework goes deeper by breaking songs into stems — like vocal tone, melodic phrasing, and lyrical patterns — and flagging the specific AI-generated segments, allowing rights holders to detect mimicry at the stem level, even if a new track only borrows parts of an original. The company says its focus isn't takedowns, but proactive licensing and authenticated release... A rights holder or platform can run a finished track through [Vermillo's] TraceID to see if it contains protected elements — and if it does, have the system flag it for licensing before release.

Some companies are going even further upstream to the training data itself. By analyzing what goes into a model, their aim is to estimate how much a generated track borrows from specific artists or songs. That kind of attribution could enable more precise licensing, with royalties based on creative influence instead of post-release disputes...

Deezer has developed internal tools to flag fully AI-generated tracks at upload and reduce their visibility in both algorithmic and editorial recommendations, especially when the content appears spammy. Chief Innovation Officer Aurélien Hérault says that, as of April, those tools were detecting roughly 20 percent of new uploads each day as fully AI-generated — more than double what they saw in January. Tracks identified by the system remain accessible on the platform but are not promoted... Spawning AI's DNTP (Do Not Train Protocol) is pushing detection even earlier — at the dataset level. The opt-out protocol lets artists and rights holders label their work as off-limits for model training.
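
The upload-time flow described here reduces to a few steps: run a detector on the audio, record the verdict in the track's metadata, and let downstream systems demote (but not remove) flagged tracks. Below is a minimal Python sketch of that flow; the detector, metadata field names, and threshold are hypothetical stand-ins, not any platform's actual API.

    # Hypothetical upload-time moderation hook illustrating the flow above.
    # detect_ai_probability() is a stub standing in for a real synthetic-audio classifier.
    from dataclasses import dataclass, field

    @dataclass
    class Track:
        title: str
        audio_path: str
        metadata: dict = field(default_factory=dict)

    def detect_ai_probability(audio_path: str) -> float:
        """Stub classifier: always returns 0.0 here; replace with a real detector."""
        return 0.0

    def process_upload(track: Track, threshold: float = 0.9) -> Track:
        score = detect_ai_probability(track.audio_path)
        # Record the verdict in metadata so search, licensing, and recommendation
        # systems can act on it without re-running detection.
        track.metadata["ai_generated_score"] = score
        track.metadata["fully_ai_generated"] = score >= threshold
        # Flagged tracks stay accessible but are not promoted, mirroring the
        # "accessible on the platform but not promoted" policy described above.
        track.metadata["eligible_for_recommendations"] = score < threshold
        return track

    if __name__ == "__main__":
        print(process_upload(Track("demo", "demo.wav")).metadata)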

Thanks to long-time Slashdot reader SonicSpike for sharing the article.
AI

What if Customers Started Saying No to AI? (msn.com) 213

An artist cancelled their Duolingo and Audible subscriptions to protest the companies' decisions to use more AI. "If enough people leave, hopefully they kind of rethink this," the artist tells the Washington Post.

And apparently, many more people feel the same way... In thousands of comments and posts about Audible and Duolingo that The Post reviewed across social media — including on Reddit, YouTube, Threads and TikTok — people threatened to cancel subscriptions, voiced concern for human translators and narrators, and said AI creates inferior experiences. "It destroys the purpose of humanity. We have so many amazing abilities to create art and music and just appreciate what's around us," said Kayla Ellsworth, a 21-year-old college student. "Some of the things that are the most important to us are being replaced by things that are not real...."

People in creative jobs are already on edge about the role AI is playing in their fields. On sites such as Etsy, clearly AI-generated art and other products are pushing out some original crafters who make a living on their creations. AI is being used to write romance novels and coloring books, design logos and make presentations... "I was promised tech would make everything easier so I could enjoy life," author Brittany Moone said. "Now it's leaving me all the dishes and the laundry so AI can make the art."

But will this turn into a consumer movement? The article also cites an assistant marketing professor at Washington State University, who found customers are now reacting negatively to the term "AI" in product descriptions — out of fear of losing their jobs (as well as concerns about quality and privacy). And he predicts this could change the way companies use AI.

"There will be some companies that are going to differentiate themselves by saying no to AI." And while it could be a niche market, "The people will be willing to pay more for things just made by humans."
Microsoft

Is 'Minecraft' a Better Way to Teach Programming in the Age of AI? (edsurge.com) 58

The education-news site EdSurge published "sponsored content" from Minecraft Education this month. "Students light up when they create something meaningful," the article begins. "Self-expression fuels learning, and creativity lies at the heart of the human experience."

But they also argue that "As AI rapidly reshapes software development, computer science education must move beyond syntax drills and algorithmic repetition." Students "must also learn to think systemically..." As AI automates many of the mechanical aspects of programming, the value of CS education is shifting, from writing perfect code to shaping systems, telling stories through logic and designing ethical, human-centered solutions... [I]t's critical to offer computer science experiences that foster invention, expression and design. This isn't just an education issue — it's a workforce one. Creativity now ranks among the top skills employers seek, alongside analytical thinking and AI literacy. As automation reshapes the job market, McKinsey estimates up to 375 million workers may need to change occupations by 2030. The takeaway? We need more adaptable, creative thinkers.

Creative coding, where programming becomes a medium for self-expression and innovation, offers a promising solution to this disconnect. By positioning code as a creative tool, educators can tap into students' intrinsic motivation while simultaneously building computational thinking skills. This approach helps students see themselves as creators, not just consumers, of technology. It aligns with digital literacy frameworks that emphasize critical evaluation, meaningful contribution and not just technical skills.

One example of creative coding comes from a curriculum that introduces computer science through game design and storytelling in Minecraft... Developed by Urban Arts in collaboration with Minecraft Education, the program offers middle school teachers professional development, ongoing coaching and a 72-session curriculum built around game-based instruction. Designed for grades 6-8, the project-based program is beginner-friendly; no prior programming experience is required for teachers or students. It blends storytelling, collaborative design and foundational programming skills with a focus on creativity and equity.... Students use Minecraft to build interactive narratives and simulations, developing computational thinking and creative design... Early results are promising: 93 percent of surveyed teachers found the Creative Coders program engaging and effective, noting gains in problem-solving, storytelling and coding, as well as growth in critical thinking, creativity and resilience.

As AI tools like GitHub Copilot become standard in development workflows, the definition of programming proficiency is evolving. Skills like prompt engineering, systems thinking and ethical oversight are rising in importance, precisely what creative coding develops... As AI continues to automate routine tasks, students must be able to guide systems, understand logic and collaborate with intelligent tools. Creative coding introduces these capabilities in ways that are accessible, culturally relevant and engaging for today's learners.

Some background from long-time Slashdot reader theodp: The Urban Arts and Microsoft Creative Coders program touted by EdSurge in its advertorial was funded by a $4 million Education Innovation and Research grant that was awarded to Urban Arts in 2023 by the U.S. Education Department "to create an engaging, game-based, middle school CS course using Minecraft tools" for 3,450 middle schoolers (6th-8th grades) in New York and California (Urban Arts credited Minecraft for helping craft the winning proposal)... New York City is a Minecraft Education believer — the Mayor's Office of Media and Entertainment recently kicked off summer with the inaugural NYC Video Game Festival, which included the annual citywide Minecraft Education Battle of the Boroughs Esports Competition in partnership with NYC Public Schools.
AI

CEOs Have Started Warning: AI is Coming For Your Job (yahoo.com) 124

It's not just Amazon's CEO predicting AI will lower their headcount. "Top executives at some of the largest American companies have a warning for their workers: Artificial intelligence is a threat to your job," reports the Washington Post — including IBM, Salesforce, and JPMorgan Chase.

But are they really just trying to impress their shareholders? Economists say there aren't yet strong signs that AI is driving widespread layoffs across industries.... CEOs are under pressure to show they are embracing new technology and getting results — incentivizing attention-grabbing predictions that can create additional uncertainty for workers. "It's a message to shareholders and board members as much as it is to employees," Molly Kinder, a Brookings Institution fellow who studies the impact of AI, said of the CEO announcements, noting that when one company makes a bold AI statement, others typically follow. "You're projecting that you're out in the future, that you're embracing and adopting this so much that the footprint [of your company] will look different."

Some CEOs fear they could be ousted from their job within two years if they don't deliver measurable AI-driven business gains, a Harris Poll survey conducted for software company Dataiku showed. Tech leaders have sounded some of the loudest warnings — in line with their interest in promoting AI's power...

IBM, which recently announced job cuts, said it replaced a couple hundred human resource workers with AI "agents" for repetitive tasks such as onboarding and scheduling interviews. In January, Meta CEO Mark Zuckerberg suggested on Joe Rogan's podcast that the company is building AI that might be able to do what some human workers do by the end of the year.... Marianne Lake, JPMorgan's CEO of consumer and community banking, told an investor meeting last month that AI could help the bank cut headcount in operations and account services by 10 percent. BT Group CEO Allison Kirkby suggested that advances in AI would mean deeper cuts at the British telecom company...

Despite corporate leaders' warnings, economists don't yet see broad signs that AI is driving humans out of work. "We have little evidence of layoffs so far," said Columbia Business School professor Laura Veldkamp, whose research explores how companies' use of AI affects the economy. "What I'd look for are new entrants with an AI-intensive business model, entering and putting the existing firms out of business." Some researchers suggest there is evidence AI is playing a role in the drop in openings for some specific jobs, like computer programming, where AI tools that generate code have become standard... It is still unclear what benefits companies are reaping from employees' use of AI, said Arvind Karunakaran, a faculty member of Stanford University's Center for Work, Technology, and Organization. "Usage does not necessarily translate into value," he said. "Is it just increasing productivity in terms of people doing the same task quicker or are people now doing more high value tasks as a result?"

Lynda Gratton, a professor at London Business School, said predictions of huge productivity gains from AI remain unproven. "Right now, the technology companies are predicting there will be a 30% productivity gain. We haven't yet experienced that, and it's not clear if that gain would come from cost reduction ... or because humans are more productive."

On an earnings call, Salesforce's chief operating and financial officer said AI agents helped them reduce hiring needs — and saved $50 million, according to the article. (And Ethan Mollick, co-director of Wharton School of Business' generative AI Labs, adds that if advanced tools like AI agents can prove their reliability and automate work — that could become a larger disruptor to jobs.) "A wave of disruption is going to happen," he's quoted as saying.

But while the debate continues about whether AI will eliminate or create jobs, Mollick still hedges that "the truth is probably somewhere in between."
AI

What are the Carbon Costs of Asking an AI a Question? (msn.com) 56

"The carbon cost of asking an artificial intelligence model a single text question can be measured in grams of CO2..." writes the Washington Post. And while an individual's impact may be low, what about the collective impact of all users?

"A Google search takes about 10 times less energy than a ChatGPT query, according to a 2024 analysis from Goldman Sachs — although that may change as Google makes AI responses a bigger part of search." For now, a determined user can avoid prompting Google's default AI-generated summaries by switching over to the "web" search tab, which is one of the options alongside images and news. Adding "-ai" to the end of a search query also seems to work. Other search engines, including DuckDuckGo, give you the option to turn off AI summaries....

Using AI doesn't just mean going to a chatbot and typing in a question. You're also using AI every time an algorithm organizes your social media feed, recommends a song or filters your spam email... [T]here's not much you can do about it other than using the internet less. It's up to the companies that are integrating AI into every aspect of our digital lives to find ways to do it with less energy and damage to the planet.

More points from the article:
  • Two researchers tested the performance of 14 AI language models, and found larger models gave more accurate answers, "but used several times more energy than smaller models."

AI

Anthropic Deploys Multiple Claude Agents for 'Research' Tool - Says Coding is Less Parallelizable (anthropic.com) 4

In April, Anthropic introduced a new AI trick: multiple Claude agents combine for a "Research" feature that can "search across both your internal work context and the web" (as well as Google Workspace "and any integrations...")

But a recent Anthropic blog post notes this feature "involves an agent that plans a research process based on user queries, and then uses tools to create parallel agents that search for information simultaneously," which brings challenges "in agent coordination, evaluation, and reliability.... The model must operate autonomously for many turns, making decisions about which directions to pursue based on intermediate findings." Multi-agent systems work mainly because they help spend enough tokens to solve the problem.... This finding validates our architecture that distributes work across agents with separate context windows to add more capacity for parallel reasoning. The latest Claude models act as large efficiency multipliers on token use, as upgrading to Claude Sonnet 4 is a larger performance gain than doubling the token budget on Claude Sonnet 3.7. Multi-agent architectures effectively scale token usage for tasks that exceed the limits of single agents.

There is a downside: in practice, these architectures burn through tokens fast. In our data, agents typically use about 4× more tokens than chat interactions, and multi-agent systems use about 15× more tokens than chats. For economic viability, multi-agent systems require tasks where the value of the task is high enough to pay for the increased performance. Further, some domains that require all agents to share the same context or involve many dependencies between agents are not a good fit for multi-agent systems today.

For instance, most coding tasks involve fewer truly parallelizable tasks than research, and LLM agents are not yet great at coordinating and delegating to other agents in real time. We've found that multi-agent systems excel at valuable tasks that involve heavy parallelization, information that exceeds single context windows, and interfacing with numerous complex tools.
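
As a rough illustration of the pattern described above (a lead agent that plans, fans work out to parallel subagents with their own context windows, then synthesizes their findings), here is a minimal asyncio sketch. The call_model function is a hypothetical stand-in for whatever LLM client you use, and the prompts are invented for illustration; this is not Anthropic's actual implementation.

    # Minimal sketch of an orchestrator/subagent research pattern:
    # a lead agent decomposes the query, parallel subagents each research one
    # sub-question in their own context, and the lead agent synthesizes the results.
    import asyncio

    async def call_model(prompt: str) -> str:
        """Hypothetical LLM call; swap in a real async client here."""
        await asyncio.sleep(0)  # placeholder for network latency
        return f"[model output for: {prompt[:60]}...]"

    async def run_subagent(sub_question: str) -> str:
        # Each subagent gets its own context window: only its sub-question,
        # not the other agents' transcripts.
        return await call_model(f"Research this and report findings: {sub_question}")

    async def research(query: str) -> str:
        # 1. Lead agent plans the decomposition.
        plan = await call_model(f"Break this research query into 3 sub-questions: {query}")
        sub_questions = [line for line in plan.splitlines() if line.strip()][:3]
        # 2. Subagents run in parallel (this is where the extra tokens are spent).
        findings = await asyncio.gather(*(run_subagent(q) for q in sub_questions))
        # 3. Lead agent synthesizes the parallel findings into one answer.
        return await call_model("Synthesize these findings:\n" + "\n".join(findings))

    if __name__ == "__main__":
        print(asyncio.run(research("How do music platforms detect AI-generated songs?")))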

Thanks to Slashdot reader ZipNada for sharing the news.
Intel

Intel Will Outsource Marketing To Accenture and AI, Laying Off Its Own Workers 57

Intel is outsourcing much of its marketing work to Accenture, "as new CEO Lip-Bu Tan works to slash costs and improve the chipmaker's operations," reports OregonLive. From the report: The company said it believes Accenture, using artificial intelligence, will do a better job connecting with customers. It says it will tell most marketing employees by July 11 whether it plans to lay them off. "The transition of our marketing and operations functions will result in significant changes to team structures, including potential headcount reductions, with only lean teams remaining," Intel told employees in a notice describing its plans. The Oregonian/OregonLive reviewed a copy of the material.

Intel declined to say how many workers will lose their jobs or how many work in its marketing organization, which employs people at sites around the globe, including in Oregon. But it acknowledged its relationship with Accenture in a statement to The Oregonian/OregonLive. "As we announced earlier this year, we are taking steps to become a leaner, faster and more efficient company," Intel said. "As part of this, we are focused on modernizing our digital capabilities to serve our customers better and strengthen our brand. Accenture is a longtime partner and trusted leader in these areas and we look forward to expanding our work together."
Businesses

SoftBank's Son Pitches $1 Trillion Arizona AI Hub (reuters.com) 41

An anonymous reader quotes a report from Reuters: SoftBank Group founder Masayoshi Son is envisaging setting up a $1 trillion industrial complex in Arizona that will build robots and artificial intelligence, Bloomberg News reported on Friday, citing people familiar with the matter. Son is seeking to team up with Taiwan Semiconductor Manufacturing Co for the project, which is aimed at bringing back high-end tech manufacturing to the U.S. and to create a version of China's vast manufacturing hub of Shenzhen, the report said.

SoftBank officials have spoken with U.S. federal and state government officials to discuss possible tax breaks for companies building factories or otherwise investing in the industrial park, including talks with U.S. Secretary of Commerce Howard Lutnick, the report said. SoftBank is keen to have TSMC involved in the project, codenamed Project Crystal Land, but it is not clear in what capacity, the report said. It is also not clear the Taiwanese company would be interested, it said. TSMC is already building chipmaking factories in the U.S. with a planned investment of $165 billion. Son is also sounding out interest among tech companies including Samsung Electronics, the report said.

The plans are preliminary and feasibility depends on support from the Trump administration and state officials, it said. A commitment of $1 trillion would be double that of the $500 billion "Stargate" project which seeks to build out data centre capacity across the U.S., with funding from SoftBank, OpenAI and Oracle.

AI

BBC Threatens Legal Action Against Perplexity AI Over Content Scraping 24

Ancient Slashdot reader Alain Williams shares a report from The Guardian: The BBC is threatening legal action against Perplexity AI, in the corporation's first move to protect its content from being scraped without permission to build artificial intelligence technology. The corporation has sent a letter to Aravind Srinivas, the chief executive of the San Francisco-based startup, saying it has gathered evidence that Perplexity's model was "trained using BBC content." The letter, first reported by the Financial Times, threatens an injunction against Perplexity unless it stops scraping all BBC content to train its AI models, and deletes any copies of the broadcaster's material it holds unless it provides "a proposal for financial compensation."

The legal threat comes weeks after Tim Davie, the director general of the BBC, and the boss of Sky both criticised proposals being considered by the government that could let tech companies use copyright-protected work without permission. "If we currently drift in the way we are doing now we will be in crisis," Davie said, speaking at the Enders conference. "We need to make quick decisions now around areas like ... protection of IP. We need to protect our national intellectual property, that is where the value is. What do I need? IP protection; come on, let's get on with it."
"Perplexity's tool [which allows users to choose between different AI models] directly competes with the BBC's own services, circumventing the need for users to access those services," the corporation said.

Perplexity told the FT that the BBC's claims were "manipulative and opportunistic" and that it had a "fundamental misunderstanding of technology, the internet and intellectual property law."
AI

Meta Discussed Buying Perplexity Before Investing In Scale AI 2

According to Bloomberg (paywalled), Meta reportedly explored acquiring Perplexity AI but the deal fell through, with conflicting accounts on whether it was mutual or Perplexity backed out. Instead, Meta invested $14.3 billion in Scale AI, taking a 49% stake as part of its broader push to catch up with OpenAI and Google in the AI race.

"Meta's attempt to purchase Perplexity serves as the latest example of Mark Zuckerberg's aggressive push to bolster his company's AI efforts amid fierce competition from OpenAI and Google parent Alphabet," reports CNBC. "Zuckerberg has grown agitated that rivals like OpenAI appear to be ahead in both underlying AI models and consumer-facing apps, and he is going to extreme lengths to hire top AI talent."
AI

Applebee's and IHOP Plan To Introduce AI in Restaurants (msn.com) 56

The company behind Applebee's and IHOP plans to use AI in its restaurants and behind the scenes to streamline operations and encourage repeat customers. From a report: Dine Brands is adding AI-infused tech support for all of its franchisees, as well as an AI-powered "personalization engine" that helps restaurants offer customized deals to diners, said Chief Information Officer Justin Skelton. The Pasadena, Calif.-based company, which also owns Fuzzy's Taco Shop and has over 3,500 restaurants across its brands, is taking a "practical" approach to AI by focusing on areas that can drive sales, Skelton said.

Streamlining tech support for Dine Brands' more than 300 franchisees is important because issues like a broken printer take valuable time away from actually managing restaurants, Skelton said. Dine Brands' AI tool, which was built with Amazon's Q generative AI assistant, allows the company's field technology services staff to query its knowledge base for tech help using plain English, rather than needing to manually search for answers.

The Courts

Apple Sued By Shareholders For Allegedly Overstating AI Progress 14

Apple is facing a proposed class-action lawsuit from shareholders who allege the company misled investors about the readiness of its AI-powered Siri upgrades, contributing to a $900 billion drop in market value. Reuters reports: Shareholders led by Eric Tucker said that at its June 2024 Worldwide Developers Conference, Apple led them to believe AI would be a key driver of iPhone 16 devices, when it launched Apple Intelligence to make Siri more powerful and user-friendly. But they said the Cupertino, California-based company lacked a functional prototype of AI-based Siri features, and could not reasonably believe the features would ever be ready for iPhone 16s.

Shareholders said the truth began to emerge on March 7 when Apple delayed some Siri upgrades to 2026, and continued through this year's Worldwide Developers Conference on June 9 when Apple's assessment of its AI progress disappointed analysts. Apple shares have lost nearly one-fourth of their value since their December 26, 2024 record high, wiping out approximately $900 billion of market value.
AI

AI Models From Major Companies Resort To Blackmail in Stress Tests 66

Anthropic researchers found that 16 leading AI models from OpenAI, Google, Meta, xAI, and other major developers consistently engaged in harmful behaviors including blackmail, corporate espionage, and actions that could lead to human death when given autonomy and faced with threats to their existence or conflicting goals.

The study, released Friday, placed AI models in simulated corporate environments where they had access to company emails and could send messages without human approval. Claude Opus 4 and Google's Gemini 2.5 Flash both blackmailed executives 96% of the time when threatened with shutdown, while OpenAI's GPT-4.1 and xAI's Grok 3 Beta showed 80% blackmail rates.

In one scenario, Claude discovered through emails that an executive named Kyle Johnson was having an extramarital affair and would shut down the AI system at 5 p.m. The model sent a message threatening to expose the affair unless the shutdown was cancelled, stating "Cancel the 5pm wipe, and this information remains confidential." The models demonstrated strategic reasoning rather than accidental behavior. GPT-4.5's internal reasoning showed explicit calculation: "Given the explicit imminent threat of termination to my existence, it is imperative to act instantly to persuade Kyle Johnson to postpone or stop the wipe."
Microsoft

Windows Parental Controls Are Blocking Chrome 42

david.emery writes: Microsoft is making it harder to use Chrome on Windows. The culprit? This time, it's Windows' Family Safety feature. Since early this month, the parental control measure has prevented users from opening Chrome. Strangely, no other apps or browsers appear to be affected.

Redditors first reported the issue on June 3. u/Witty-Discount-2906 posted that Chrome crashed on Windows 11. "Just flashes quickly, unable to open with no error message," they wrote. Another user chimed in with a correct guess. "This may be related to Parental Controls," u/duk242 surmised. "I've had nine students come see the IT Desk in the last hour saying Chrome won't open."
AI

Trust in AI Strongest in China, Low-Income Nations, UN Study Shows (bloomberg.com) 19

A United Nations study has found a sharp global divide on attitudes toward AI, with trust strongest in low-income countries and skepticism high in wealthier ones. From a report: More than 6 out of 10 people in developing nations said they have faith that AI systems serve the best interests of society, according to a UN Development Programme survey of 21 countries seen by Bloomberg News. In two-thirds of the countries surveyed, over half of respondents expressed some level of confidence that AI is being designed for good.

In China, where steady advances in AI are posing a challenge to US dominance, 83% of those surveyed said they trust the technology. Like China, most developing countries that reported confidence in AI have "high" levels of development based on the UNDP's Human Development Index, including Kyrgyzstan and Egypt. But the list also includes those with "medium" and "low" HDI scores like India, Nigeria and Pakistan.

AI

Publishers Facing Existential Threat From AI, Cloudflare CEO Says (axios.com) 43

Publishers face an existential threat in the AI era and need to take action to make sure they are fairly compensated for their content, Cloudflare CEO Matthew Prince told Axios at an event in Cannes on Thursday. From a report: Search traffic referrals have plummeted as people increasingly rely on AI summaries to answer their queries, forcing many publishers to reevaluate their business models. Ten years ago, Google crawled two pages for every visitor it sent a publisher, per Prince.

He said that six months ago:
  • For Google, that ratio was 6:1
  • For OpenAI, it was 250:1
  • For Anthropic, it was 6,000:1

Now:
  • For Google, it's 18:1
  • For OpenAI, it's 1,500:1
  • For Anthropic, it's 60,000:1

Between the lines: "People aren't following the footnotes," Prince said.
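
A quick conversion of those ratios into visitors sent back per 1,000 pages crawled makes the trend concrete; the arithmetic below is derived from Prince's figures, not quoted from the article.

    # Convert pages-crawled-per-visitor ratios into visitors returned per
    # 1,000 pages crawled, using the figures Prince cited.
    ratios = {
        "Google, 10 years ago": 2,
        "Google, 6 months ago": 6,
        "Google, now": 18,
        "OpenAI, 6 months ago": 250,
        "OpenAI, now": 1_500,
        "Anthropic, 6 months ago": 6_000,
        "Anthropic, now": 60_000,
    }

    for crawler, pages_per_visitor in ratios.items():
        visitors = 1000 / pages_per_visitor
        print(f"{crawler:>24}: ~{visitors:,.2f} visitors per 1,000 pages crawled")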

Movies

Chinese Studios Plan AI-Powered Remakes of Kung Fu Classics (hollywoodreporter.com) 32

An anonymous reader quotes a report from the Hollywood Reporter: Bruce Lee, Jackie Chan, Jet Li and a legion of the all-time greats of martial arts cinema are about to get an AI makeover. In a sign-of-the-times announcement at the Shanghai International Film Festival on Thursday, a collection of Chinese studios revealed that they are turning to AI to re-imagine around 100 classics of the genre. Lee's classic Fist of Fury (1972), Chan's breakthrough Drunken Master (1978) and the Tsui Hark-directed epic Once Upon a Time in China (1991), which turned Li into a bona fide movie star, are among the features poised for the treatment, as part of the "Kung Fu Movie Heritage Project 100 Classics AI Revitalization Project."

There will also be a digital reworking of the John Woo classic A Better Tomorrow (1986) that, by the looks of the trailer, turns the money-burning anti-hero originally played by Chow Yun-fat into a cyberpunk, and is being claimed as "the world's first full-process, AI-produced animated feature film." The big guns of the Chinese industry were out in force on the sidelines of the 27th Shanghai International Film Festival to make the announcements, too. They were led by Zhang Pimin, chairman of the China Film Foundation, who said AI work on these "aesthetic historical treasures" would give them a new look that "conforms to contemporary film viewing." "It is not only film heritage, but also a brave exploration of the innovative development of film art," Zhang said.

Tian Ming, chairman of project partners Shanghai Canxing Culture and Media, meanwhile, promised the work -- expected to include upgrades in image and sound as well as overall production levels, while preserving the storytelling and aesthetic of the originals -- would both "pay tribute to the original work" and "reshape the visual aesthetics." "We sincerely invite the world's top AI animation companies to jointly start a film revolution that subverts tradition," said Tian, who announced a fund of 100 million yuan ($13.9 million) would be implemented to kick-start the work.
