Microsoft

Salesforce CEO Marc Benioff Says Microsoft Copilot Has Disappointed Many Customers (theverge.com) 52

Marc Benioff said Microsoft's Copilot AI hasn't lived up to the hype. The Salesforce CEO said on the company's second-quarter earnings call that its own AI is nothing like Copilot, which he said was unimpressive. From a report: "So many customers are so disappointed in what they bought from Microsoft Copilot because they're not getting the accuracy and the response that they want," Benioff said. "Microsoft has disappointed so many customers with AI."

Microsoft Copilot integrates OpenAI's ChatGPT tech into the company's existing suite of business software like Word, Excel, and PowerPoint that comes with Microsoft 365. Launched last year, Copilot is meant to help companies boost productivity by responding to employee prompts and helping them with daily tasks like scheduling meetings, writing up product announcements, and creating presentations. In response to Benioff's comments, Jared Spataro, Microsoft's corporate vice president for AI at work, said in a statement to Fortune that the company was "hearing something quite different" from its customers.

China

US Proposes Ban on Smart Cars With Chinese and Russian Tech (cnn.com) 94

The US Commerce Department on Monday will propose a ban on the sale or import of smart vehicles that use specific Chinese or Russian technology because of national security concerns, according to US officials. From a report: A US government investigation that began in February found a range of national security risks from embedded software and hardware from China and Russia in US vehicles, including the possibility of remote sabotage by hacking and the collection of personal data on drivers, Secretary of Commerce Gina Raimondo told reporters Sunday in a conference call.

"In extreme situations, a foreign adversary could shut down or take control of all their vehicles operating in the United States, all at the same time, causing crashes (or) blocking roads," she said. The rule would not apply to cars already on the road in the US that already have Chinese software installed, a senior administration official told CNN. The software ban would take effect for vehicles for "model year" 2027 and the hardware ban for "model year" 2030, according to the Commerce Department. The proposed regulatory action is part of a much broader struggle between the United States and China, the world's two biggest economies, to secure the supply chains of the key computing technology of the future, from semiconductors to AI software. China, in particular, has invested heavily in the connected car market, and inroads made by Chinese manufacturers in Europe have worried US officials.

AI

'Forget ChatGPT: Why Researchers Now Run Small AIs On Their Laptops' (nature.com) 48

Nature published an introduction to running an LLM locally, starting with the example of a bioinformatician who's using AI to generate readable summaries for his database of immune-system protein structures. "But he doesn't use ChatGPT, or any other web-based LLM." He just runs the AI on his Mac... Two more recent trends have blossomed. First, organizations are making 'open weights' versions of LLMs, in which the weights and biases used to train a model are publicly available, so that users can download and run them locally, if they have the computing power. Second, technology firms are making scaled-down versions that can be run on consumer hardware — and that rival the performance of older, larger models. Researchers might use such tools to save money, protect the confidentiality of patients or corporations, or ensure reproducibility... As computers get faster and models become more efficient, people will increasingly have AIs running on their laptops or mobile devices for all but the most intensive needs. Scientists will finally have AI assistants at their fingertips — but the actual algorithms, not just remote access to them.
The article's list of small open-weights models includes Meta's Llama, Google DeepMind's Gemma, Alibaba's Qwen, Apple's DCLM, Mistral's NeMo, and OLMo from the Allen Institute for AI. And then there's Microsoft: Although the California tech firm OpenAI hasn't open-weighted its current GPT models, its partner Microsoft in Redmond, Washington, has been on a spree, releasing the small language models Phi-1, Phi-1.5 and Phi-2 in 2023, then four versions of Phi-3 and three versions of Phi-3.5 this year. The Phi-3 and Phi-3.5 models have between 3.8 billion and 14 billion active parameters, and two models (Phi-3-vision and Phi-3.5-vision) handle images. By some benchmarks, even the smallest Phi model outperforms OpenAI's GPT-3.5 Turbo from 2023, rumoured to have 20 billion parameters... Microsoft used LLMs to write millions of short stories and textbooks in which one thing builds on another. The result of training on this text, says Sébastien Bubeck, Microsoft's vice-president for generative AI, is a model that fits on a mobile phone but has the power of the initial 2022 version of ChatGPT. "If you are able to craft a data set that is very rich in those reasoning tokens, then the signal will be much richer," he says...

Sharon Machlis, a former editor at the website InfoWorld, who lives in Framingham, Massachusetts, wrote a guide to using LLMs locally, covering a dozen options.

The bioinformatician shares another benefit: you don't have to worry about the company updating their models (leading to different outputs). "In most of science, you want things that are reproducible. And it's always a worry if you're not in control of the reproducibility of what you're generating."

And finally, the article reminds readers that "Researchers can build on these tools to create custom applications..." Whichever approach you choose, local LLMs should soon be good enough for most applications, says Stephen Hood, who heads open-source AI at the tech firm Mozilla in San Francisco. "The rate of progress on those over the past year has been astounding," he says. As for what those applications might be, that's for users to decide. "Don't be afraid to get your hands dirty," Zakka says. "You might be pleasantly surprised by the results."
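To make the idea concrete, here is a minimal sketch of what "running an LLM locally" can look like with the Hugging Face transformers library. The checkpoint name, prompt, and generation settings below are illustrative assumptions rather than anything from the article; any small open-weights model you have downloaded will work, hardware permitting.

```python
# Minimal local-inference sketch using Hugging Face transformers.
# The model ID below is an assumed small open-weights checkpoint;
# substitute whichever model you have the hardware (and license) to run.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"  # illustrative choice, not prescriptive
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Summarize what an MHC class I protein does, in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=120)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Everything here runs on the local machine, so no prompt or output ever leaves it — which is the confidentiality and reproducibility point the article is making.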
Advertising

Amazon Ads Launches a New AI Video Generator (aboutamazon.com) 24

Long-time Slashdot reader theodp writes: On Thursday, Amazon Ads announced Video Generator and Live Image, "our first generative AI-powered technology designed to remove creative barriers and enable brands to produce lifestyle imagery that enhances ad performance."

Amazon's blog post calls it "a new feature that uses generative AI technology to make it easier for advertisers to create more interesting and relevant video ads for customers. The new feature, Video generator, creates visually rich video content in a matter of minutes and at no additional cost. Using a single product image, Video generator curates custom AI-generated videos tailored to a product's distinct selling proposition and features, leveraging Amazon's unique insights to vividly bring a product story to life."

An accompanying video demonstrates how Amazon's AI-powered tech can be used to animate still images, making it appear that steam is rising from a coffee mug, flowers are being blown in the wind, the night sky is changing breathtakingly behind a telescope, and that waves are breaking behind a smart speaker at the beach.

Government

AI Smackdown: How a New FTC Rule Also Fights Fake Product Reviews (salon.com) 29

Salon takes a closer look at a new $51,744-per-violation AI regulation officially approved one month ago by America's FTC — calling it a financial blow "if you're a digital media company whose revenue comes from publishing AI-generated articles and fake product reviews."

But Salon points out the rule also bans "product review suppression." Per the rule, that means it's a violation for "anyone to use an unfounded or groundless legal threat, a physical threat, intimidation, or a public false accusation in response to a consumer review... to (1) prevent a review or any portion thereof from being written or created, or (2) cause a review or any portion thereof to be removed, whether or not that review or a portion thereof is replaced with other content."

Finally... The rule makes it a violation for a business to "provide compensation or other incentives in exchange for, or conditioned expressly or by implication on, the writing or creation of consumer reviews expressing a particular sentiment, whether positive or negative, regarding the product, service or business...." [T]he new rule also prevents secretly advertising for yourself while pretending to be an independent outlet or company. It bars "the creation or operation of websites, organizations or entities that purportedly provide independent reviews or opinions of products or services but are, in fact, created and controlled by the companies offering the products or services."

In an earlier statement, FTC Consumer Protection Bureau head Sam Levine said the new rule "should help level the playing field for honest companies. We're using all available means to attack deceptive advertising in the digital age."

Thanks to long-time Slashdot reader mspohr for sharing the article.
Transportation

California Drivers May Soon Get Mandatory In-Car Speed Warnings Like the EU (caranddriver.com) 207

UPDATE (9/28): California's governor vetoed the bill.

Below is Slashdot's original story...

"Exceed the speed limit in one of the 27 European Union countries, and you may get some pushback from your vehicle," reports Car and Driver. "As of July, new cars sold in the EU must include a speed-warning device that alerts drivers if they exceed the posted limit."

The warnings can be either acoustic or haptic, "though the European Commission gives automakers the latitude to supplant those passive measures with either an active accelerator pedal that applies counterpressure against the driver's foot or a governor that restricts the vehicle's speed to the legal limit." Drivers can override or deactivate these admonishments, but the devices must default to their active state at startup.

Now California is looking to emulate the EU with legislation that would mandate in-car speed-warning devices [for driving more than 10 miles per hour over the speed limit — in "just about every 2030 model-year vehicle equipped with either GPS or a front-facing camera"].
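For illustration only, the passive-warning behavior the bill describes boils down to a simple threshold check. The sketch below is a toy version of that logic under the 10 mph margin cited above — not any automaker's or regulator's implementation.

```python
# Toy sketch of a passive speed-warning check: alert when the vehicle's
# GPS- or camera-derived speed exceeds the posted limit by more than 10 mph,
# the margin cited in the California bill. Names and structure are illustrative.
WARNING_MARGIN_MPH = 10

def should_warn(current_speed_mph: float, posted_limit_mph: float) -> bool:
    """Return True when the driver is more than the margin over the limit."""
    return current_speed_mph > posted_limit_mph + WARNING_MARGIN_MPH

print(should_warn(48, 35))  # True: 13 mph over a 35 mph limit
print(should_warn(42, 35))  # False: only 7 mph over
```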

The article cites statistics showing that 18% of drivers involved in fatal crashes were speeding.

The projects director at the European Transport Safety Council acknowledges, though, that the systems may struggle to identify speed limits from passing signs — and that their testing shows the warnings generally irritate drivers, who often deactivate them...

Thanks to long-time Slashdot reader sinij for sharing the article.
Intel

Qualcomm Approached Intel About a Takeover (msn.com) 35

Friday the Wall Street Journal reported Qualcomm recently "made a takeover approach" to Intel, which has a market value of roughly $90 billion ("according to people familiar with the matter...") A deal is far from certain, the people cautioned. Even if Intel is receptive, a deal of that size is all but certain to attract antitrust scrutiny, though it is also possible it could be seen as an opportunity to strengthen the U.S.'s competitive edge in chips... Both Intel and Qualcomm have become U.S. national champions of sorts as chip-making gets increasingly politicized. Intel is in line to get up to $8.5 billion of potential grants for factories in the U.S. as Chief Executive Pat Gelsinger tries to build up a business making chips on contract for outsiders...
Both Intel and Qualcomm have been "overshadowed" by Nvidia's success in powering the AI boom, the article points out.

But "To get the deal done, Qualcomm could intend to sell assets or parts of Intel to other buyers... A deal would significantly broaden Qualcomm's horizons, complementing its mobile-phone chip business with chips from Intel that are ubiquitous in personal computers and servers..." Qualcomm's approach follows a more than three-year turnaround effort at Intel under Gelsinger that has yet to bear significant fruit. For years, Intel was the biggest semiconductor company in the world by market value, but it now lags behind rivals including Qualcomm, Broadcom, Texas Instruments and AMD. In August, following a dismal quarterly report, Intel said it planned to lay off thousands of employees and pause dividend payments as part of a broad cost-saving drive. Gelsinger last month laid out a roadmap to slash costs by more than $10 billion in 2025, as the company reported a loss of $1.6 billion for the second quarter, compared with a $1.5 billion profit a year earlier...

Intel earlier this year began to report separate financial results of its manufacturing operations, which many on Wall Street saw as a prelude to a possible split of the company. Some analysts have argued Intel should be split into two, mirroring a shift in the industry toward specializing in either chip design or chip manufacturing. Splitting up immediately might not be possible, however, Bernstein Research analyst Stacy Rasgon said in a recent note. Intel's manufacturing arm is money-losing and hasn't gained strong traction with customers other than Intel itself since Gelsinger opened the factories to outside chip designers three years ago. Gelsinger has been doubling down on the company's factory ambitions, outlining spending of hundreds of billions of dollars building new plants in the U.S., Europe and Israel in recent years.

Given Intel's market value, a successful takeover of the entire company would rank as the all-time largest technology M&A deal, topping Microsoft's $69 billion acquisition of Activision Blizzard.

Intel's stock "had its biggest one-day drop in over 50 years in August after the company reported disappointing earnings," reports CNBC. Partly because of that one-day, 26% drop, Intel's shares "are down 53% this year as investors express doubts about the company's costly plans to manufacture and design chips."

But the Register remains skeptical about Qualcomm taking over Intel: Chipzilla may not be worth much to Qualcomm unless it can renegotiate the x86/x86-64 cross-licensing patent agreement between Intel and AMD, which dates back to 2009. That agreement is terminated if a change in control happens at either Intel or AMD.

While a number of the patents expired in 2021, it's our understanding that agreement is still in force and Qualcomm would be subject to change of control rules. In other words, Qualcomm wouldn't be able to produce Intel-designed x86-64 chips unless AMD gave the green light. It's also likely one of the reasons why no one bought AMD when it was in dire straits; whoever took it over would have had to deal with Intel.

Robotics

Do Self-Service Kiosks Actually Increase Employment at Fast-Food Restaurants? (cnn.com) 78

Instead of eliminating jobs, self-service kiosks at McDonald's and other fast-food chains "have added extra work for kitchen staff," reports CNN — and as a bonus, "pushed customers to order more food than they do at the cash register..." Kiosks "guarantee that the upsell opportunities" like a milkshake or fries are suggested to customers when they order, Shake Shack CEO Robert Lynch said on an earnings call last month. "Sometimes that is not always a priority for employees when you've got 40 people in line. You're trying to get through it as quick as possible." Kiosks also shift employees from behind the cash register to maintaining the dining area, delivering food to customers or working in the kitchen, he said. [Although a study from Temple University researchers found long lines at a kiosk stress customers — making them order less.]

Some McDonald's franchisees — which own and operate 95% of McDonald's in the United States — are now rolling out kiosks that can take cash and accept change. But even in these locations, McDonald's is reassigning cashiers to other roles, including new "guest experience lead" jobs that help customers use the kiosks and assist with any issues. "In theory, kiosks should help save on labor, but in reality, restaurants have added complexity due to mobile ordering and delivery, and the labor saved from kiosks is often reallocated for these efforts," said RJ Hottovy, an analyst who covers the restaurant and retail industries at data analytics firm Placer.ai....

Christopher Andrews, a sociologist at Drew University who studies the effects of technology on work, said the impacts of kiosks were similar to other self-service technology such as ATMs and self-checkout machines in supermarkets. Both technologies were predicted to cause job losses. "The introduction of ATMs did not result in massive technological unemployment for bank tellers," he said. "Instead, it freed them up from low-value tasks such as depositing and cashing checks to perform other tasks that created value." Self-checkout also has not caused retail job losses. In some cases, self-checkout backfired for chains because self-checkout leads to higher merchandise losses from customer errors and more intentional shoplifting than when human cashiers are ringing up customers.

Fast-food chains and retailers need to do a better job communicating what the potential benefits of kiosks and self-checkout are to consumers and employees, Andrews said. "What I think will be central for customers is that they see how this technology is providing them with more or better service rather than more unpaid busywork," he said. "Otherwise, the public is just likely to view it as yet another attempt to reduce labor costs via automation and self-service."

This article ends up taking both sides of the issue. For example, some befuddled kiosk users can take longer to order, the article points out — and of course, kiosks can also break down.

Restaurant analyst Hottovy told CNN "If kiosks really improved speed of service, order accuracy, and upsell, they'd be rolled out more extensively across the industry than they are today."
AI

Tech Giants Push To Dilute Europe's AI Act (reuters.com) 38

The world's biggest technology companies have embarked on a final push to persuade the European Union to take a light-touch approach to regulating AI as they seek to fend off the risk of billions of dollars in fines. From a report: EU lawmakers in May agreed the AI Act, the world's first comprehensive set of rules governing the technology, following months of intense negotiations between different political groups. But until the law's accompanying codes of practice have been finalised, it remains unclear how strictly rules around "general purpose" AI (GPAI) systems, such as OpenAI's ChatGPT, will be enforced and how many copyright lawsuits and multi-billion dollar fines companies may face.

The EU has invited companies, academics, and others to help draft the code of practice, receiving nearly 1,000 applications, an unusually high number according to a source familiar with the matter who requested anonymity because they were not authorised to speak publicly. The AI code of practice will not be legally binding when it takes effect late next year, but it will provide firms with a checklist they can use to demonstrate their compliance. A company claiming to follow the law while ignoring the code could face a legal challenge.

AI

Project Analyzing Human Language Usage Shuts Down Because 'Generative AI Has Polluted the Data' (404media.co) 93

The creator of an open source project that scraped the internet to determine the ever-changing popularity of different words in human language usage says that they are sunsetting the project because generative AI spam has poisoned the internet to a level where the project no longer has any utility. 404 Media: Wordfreq is a program that tracked the ever-changing ways people used more than 40 different languages by analyzing millions of sources across Wikipedia, movie and TV subtitles, news articles, books, websites, Twitter, and Reddit. The system could be used to analyze changing language habits as slang and popular culture changed and language evolved, and was a resource for academics who study such things. In a note on the project's GitHub, creator Robyn Speer wrote that the project "will not be updated anymore."

"Generative AI has polluted the data," she wrote. "I don't think anyone has reliable information about post-2021 language usage by humans." She said that open web scraping was an important part of the project's data sources and "now the web at large is full of slop generated by large language models, written by no one to communicate nothing. Including this slop in the data skews the word frequencies." While there has always been spam on the internet and in the datasets that Wordfreq used, "it was manageable and often identifiable. Large language models generate text that masquerades as real language with intention behind it, even though there is none, and their output crops up everywhere," she wrote.

Microsoft

Microsoft Taps Three Mile Island Nuclear Plant To Power AI (yahoo.com) 125

The data centers that train all the large language models behind AI consume unimaginable amounts of energy, and the stakes are high for big tech companies to ensure they have enough power to run those plants. That's why Microsoft is now throwing its weight behind nuclear power. From a report: The tech giant on Friday signed a major deal with nuclear plant owner Constellation Energy to restart its closed Three Mile Island plant by 2028 to power its data centers. The Constellation plant, infamous for melting down in 1979, closed in 2019 after failing to garner enough demand for its energy amid competition with cheaper alternatives like natural gas, and solar and wind power. Constellation said it plans to spend $1.6 billion to revive its reactor, pending regulatory approval. The financial terms of the deal were not disclosed. Microsoft agreed to purchase all of the power from the reactor over the next 20 years, a Constellation spokesperson told TechCrunch. Once restored, the reactor promises a capacity of 835 megawatts.
AI

Indian Filmmaker Ditches Human Musicians for AI (techcrunch.com) 72

Indian filmmaker Ram Gopal Varma is ditching human musicians for artificial intelligence, saying he'll use only AI-generated tunes in future projects, a move that underscores AI's growing reach in creative industries. From a report: The filmmaker and screenwriter, known for popular Bollywood movies including Company, Rangeela, Sarkar, and Satya has launched a venture, called RGV Den Music, that will only feature music generated from AI apps including Suno and Udio, he told TechCrunch. Varma said he will use the AI-generated music in all his projects, including movies. The entire background score on his new feature movie, called Saree, is also AI-generated, he said. In an interview, Varma urged artists to embrace AI rather than resist it. "Eventually, the music comes from your thoughts. You need to have clarity on what you want the app to produce. It's the taste that will matter," he said.
The Courts

Creator of Kamala Harris Parody Video Sues California Over Election 'Deepfake' Ban (politico.com) 337

Longtime Slashdot reader SonicSpike shares a report from Politico: The creator of a video that used artificial intelligence to imitate Kamala Harris is suing the state of California after Gov. Gavin Newsom signed laws restricting the use of digitally altered political "deepfakes," alleging First and 14th Amendment violations. Christopher Kohls, who goes by the name "Mr Reagan" on X, has been at the center of a debate over the use of AI-generated material in elections since he posted the video in July, calling it a parody of a Harris campaign ad. It features AI-generated clips mimicking Harris' voice and saying she's the "ultimate diversity hire." The video was shared by X owner Elon Musk without calling it parody and attracted the ire of Newsom, who vowed to ban such content.

The suit (PDF), filed Tuesday in federal court, seeks permanent injunctions against the laws. One of the laws in question, the Defending Democracy from Deepfake Deception Act, specifies that it does not apply to satire or parody content. It requires large online platforms to remove or label deceptive, digitally altered media during certain periods before or after an election. Newsom spokesperson Izzy Gardon said in a statement that Kohls had already labeled the post as a parody on X. "Requiring them to use the word 'parody' on the actual video avoids further misleading the public as the video is shared across the platform," Gardon said. "It's unclear why this conservative activist is suing California. This new disclosure law for election misinformation isn't any more onerous than laws already passed in other states, including Alabama."

Nintendo

Palworld Developer Has No Idea Why Nintendo's Suing Over Its Pokemon-like Game 69

An anonymous reader shares a report: Pocketpair has responded to the lawsuit filed against it by Nintendo and The Pokemon Company. The studio that developed Palworld, the game at the heart of the suit, issued a statement early this morning saying it doesn't know what patents it violated. "At this moment, we are unaware of the specific patents we are accused of infringing upon, and we have not been notified of such details," the statement read.

According to Nintendo's press release, the reason for the lawsuit has to do with Pocketpair allegedly infringing on multiple as yet undisclosed patents. The details of the lawsuit have not yet been made public, so we do not yet know which patents, and according to Pocketpair's statement, it doesn't know, either.
AI

'Dead Internet Theory' Comes To Life With New AI-Powered Social Media App 66

A conspiracy theory known as "Dead Internet Theory" has gained traction in recent years, positing that most online social activity is artificial and designed to manipulate users. This theory has grown alongside the rise of large language models like ChatGPT. On Monday, software developer Michael Sayman launched SocialAI, an app that seems to embody aspects of this theory. ArsTechnica: SocialAI's 28-year-old creator, Michael Sayman, previously served as a product lead at Google, and he also bounced between Facebook, Roblox, and Twitter over the years. In an announcement post on X, Sayman wrote about how he had dreamed of creating the service for years, but the tech was not yet ready. He sees it as a tool that can help lonely or rejected people.

"SocialAI is designed to help people feel heard, and to give them a space for reflection, support, and feedback that acts like a close-knit community," wrote Sayman. "It's a response to all those times I've felt isolated, or like I needed a sounding board but didn't have one. I know this app won't solve all of life's problems, but I hope it can be a small tool for others to reflect, to grow, and to feel seen." As The Verge reports in an excellent rundown of the example interactions, SocialAI lets users choose the types of AI followers they want, including categories like "supporters," "nerds," and "skeptics." These AI chatbots then respond to user posts with brief comments and reactions on almost any topic, including nonsensical "Lorem ipsum" text.
Businesses

Tech Jobs Have Dried Up - and Aren't Coming Back Soon (msn.com) 178

The once-booming tech job market has contracted sharply, with software development postings down over 30% since February 2020, according to Indeed.com. Tech companies have shed around 137,000 jobs since January, Layoffs.fyi reports.

This downturn, following years of aggressive hiring, marks a significant shift in the industry's labor dynamics. Companies are pivoting from growth-at-all-costs strategies to revenue-focused approaches, cutting entry-level positions and redirecting resources towards AI development. The release of ChatGPT in late 2022 sparked an AI investment frenzy, creating high demand for specialized AI talent.
AI

Snapchat Reserves the Right To Use AI-Generated Images of Your Face In Ads 29

Snapchat's terms of service for its "My Selfie" tool reserve the right to use users' AI-generated images in ads. While users can opt out by disabling the "See My Selfie in Ads" feature, it is enabled by default. 404 Media's Emanuel Maiberg reports: A support page on the Snapchat website titled "What is My Selfie?" explains further: "You'll take selfies with your Snap camera or select images from your camera roll. These images will be used to understand what you look like to enable you, Snap and your friends to generate novel images of you. If you're uploading images from the camera roll, only add images of yourself," Snapchat's site says. "After you've successfully onboarded, you may have access to some features powered by My Selfie, like Cameos stickers and AI Snaps. We are constantly adding features and functionality so stay tuned for more My Selfie features."

After seeing the popup, I searched for instances of people getting ads featuring their own face on Snapchat, and found this thread on the r/Privacy Reddit community where a user claimed exactly this happened to them. In an email to 404 Media, Snapchat said that it couldn't confirm or deny whether this user was served an ad featuring their face, but if they did, the ad was not using My Selfie images. Snapchat also said that it investigated the claim in the Reddit thread and that the advertiser, yourdreamdegree.com, has a history of advertising on Snapchat and that Snapchat believes the ad in question does not violate any of its policies. "The photo that was used in the advertisement is clearly AI, however, it is very clearly me," the Reddit user said. "It has my face, my hair, the clothing I wear, and even has my lamp & part of a painting on my wall in the background. I have no idea how they got photos of me to be able to generate this ad."
Snapchat confirmed the news but emphasized that advertisers do not have access to Snapchat users' generative AI data. "You are correct that our terms do reserve the right, in the future, to offer advertising based on My Selfies in which a Snapchatter can see themselves in a generated image delivered to them," a Snapchat spokesperson said. "As explained in the onboarding modal, Snapchatters have full control over this, and can turn this on and off in My Selfie Settings at any time."
The Almighty Buck

Microsoft and Abu Dhabi's MGX To Back $30 Billion BlackRock AI Infrastructure 12

An anonymous reader quotes a report from Data Center Dynamics: BlackRock plans to launch a new $30 billion artificial intelligence (AI) investment fund focused on data centers and energy projects. Microsoft and Abu Dhabi-backed investment company MGX are general partners of the fund. GPU giant Nvidia will also advise. Run through BlackRock's Global Infrastructure Partners fund, which it acquired for $12.5 billion earlier this year, the 'Global AI Investment Partnership' plans to raise up to $30 billion in equity investments. Another $70 billion could come via leveraged debt financing. "Mobilizing private capital to build AI infrastructure like data centers and power will unlock a multi-trillion-dollar long-term investment opportunity," said Larry Fink, chairman and CEO of BlackRock. "Data centers are the bedrock of the digital economy, and these investments will help power economic growth, create jobs, and drive AI technology innovation."

Brad Smith, Microsoft's president, added: "The capital spending needed for AI infrastructure and the new energy to power it goes beyond what any single company or government can finance. This financial partnership will not only help advance technology, but enhance national competitiveness, security, and economic prosperity."

Bayo Ogunlesi, CEO of Global Infrastructure Partners, said: "There is a clear need to mobilize significant amounts of private capital to fund investments in essential infrastructure. One manifestation of this is the capital required to support the development of AI. We are highly confident that the combined capabilities of our partnership will help accelerate the pace of investments in AI-related infrastructure."
AI

OpenAI Threatens To Ban Users Who Probe Its 'Strawberry' AI Models (wired.com) 50

OpenAI truly does not want you to know what its latest AI model is "thinking." From a report: Since the company launched its "Strawberry" AI model family last week, touting so-called reasoning abilities with o1-preview and o1-mini, OpenAI has been sending out warning emails and threats of bans to any user who tries to probe how the model works.

Unlike previous AI models from OpenAI, such as GPT-4o, the company trained o1 specifically to work through a step-by-step problem-solving process before generating an answer. When users ask an "o1" model a question in ChatGPT, users have the option of seeing this chain-of-thought process written out in the ChatGPT interface. However, by design, OpenAI hides the raw chain of thought from users, instead presenting a filtered interpretation created by a second AI model. Nothing is more enticing to enthusiasts than information obscured, so the race has been on among hackers and red-teamers to try to uncover o1's raw chain of thought using jailbreaking or prompt injection techniques that attempt to trick the model into spilling its secrets.

Android

Apple and Google Diverge on Photography Philosophy (theverge.com) 41

Apple's VP of camera software engineering Jon McCormack has affirmed the company's commitment to traditional photography in an interview, contrasting with Google's "memories" approach for Pixel cameras. (A Google executive said last month of the AI usage in the pictures Pixel smartphone owners take: "What some of these edits do is help you create the moment that is the way you remember it, that's authentic to your memory and to the greater context, but maybe isn't authentic to a particular millisecond.") The Verge: I asked Apple's VP of camera software engineering Jon McCormack about Google's view that the Pixel camera now captures "memories" instead of photos, and he told me that Apple has a strong point of view about what a photograph is -- that it's something that actually happened. It was a long and thoughtful answer, so I'm just going to print the whole thing:

"Here's our view of what a photograph is. The way we like to think of it is that it's a personal celebration of something that really, actually happened.

"Whether that's a simple thing like a fancy cup of coffee that's got some cool design on it, all the way through to my kid's first steps, or my parents' last breath, It's something that really happened. It's something that is a marker in my life, and it's something that deserves to be celebrated.

"And that is why when we think about evolving in the camera, we also rooted it very heavily in tradition. Photography is not a new thing. It's been around for 198 years. People seem to like it. There's a lot to learn from that. There's a lot to rely on from that.

"Think about stylization, the first example of stylization that we can find is Roger Fenton in 1854 -- that's 170 years ago. It's a durable, long-term, lasting thing. We stand proudly on the shoulders of photographic history."
Further reading: 'There is No Such Thing as a Real Picture,' Says Samsung Exec.
