AI

Microsoft Investigated by UK Over Ex-Inflection Staff Hires (bloomberg.com) 3

Microsoft's investment in Inflection AI will get a full-blown UK antitrust probe, after the watchdog said it needed to take a closer look at the hiring of former employees from the artificial intelligence startup. From a report: The Competition and Markets Authority said Tuesday it was opening the formal phase one merger probe into the partnership, setting a Sept. 11 deadline to decide whether to escalate it to an in-depth investigation. The agency has been swift to act against big tech's AI startup investments after finding a pattern of large tech firms piling money into startups.
AI

Senate Introduces Bill To Set Up Legal Framework For Ethical AI Development (techspot.com) 48

Last week, the U.S. Senate introduced a new bill to outlaw the unethical use of AI-generated content and deepfake technology. Called the Content Origin Protection and Integrity from Edited and Deepfaked Media Act (COPIED Act), the bill would "set new federal transparency guidelines for marking, authenticating and detecting AI-generated content, protect journalists, actors and artists against AI-driven theft, and hold violators accountable for abuses." TechSpot reports: Proposed and sponsored by Democrats Maria Cantwell of Washington and Martin Heinrich of New Mexico, along with Republican Marsha Blackburn of Tennessee, the bill aims to establish enforceable transparency standards in AI development [such as through watermarking]. The legislation also wants to curb unauthorized data use in training models. The senators intend to task the National Institute of Standards and Technology with developing sensible transparency guidelines should the bill pass. [...] The senators feel that clarifying and defining what is okay and what is not regarding AI development is vital in protecting citizens, artists, and public figures from the harm that misuse of the technology could cause, particularly in creating deepfakes. The text of the bill can be read here.
AI

Microsoft Unveils a Large Language Model That Excels At Encoding Spreadsheets 38

Microsoft has quietly announced the first details of its new "SpreadsheetLLM," claiming it has the "potential to transform spreadsheet data management and analysis, paving the way for more intelligent and efficient user interactions." You can read more details about the model in a pre-print paper available here. Jasper Hamill reports via The Stack: One of the problems with using LLMs in spreadsheets is that they get bogged down by too many tokens (basic units of information the model processes). To tackle this, Microsoft developed SheetCompressor, an "innovative encoding framework that compresses spreadsheets effectively for LLMs." "It significantly improves performance in spreadsheet table detection tasks, outperforming the vanilla approach by 25.6% in GPT4's in-context learning setting," Microsoft added. The model is made of three modules: structural-anchor-based compression, inverse index translation, and data-format-aware aggregation.

The first of these modules involves placing "structural anchors" throughout the spreadsheet to help the LLM better understand what's going on. It then removes "distant, homogeneous rows and columns" to produce a condensed "skeleton" version of the table. Inverse index translation addresses the challenge caused by spreadsheets with numerous empty cells and repetitive values, which use up too many tokens. "To improve efficiency, we depart from traditional row-by-row and column-by-column serialization and employ a lossless inverted index translation in JSON format," Microsoft wrote. "This method creates a dictionary that indexes non-empty cell texts and merges addresses with identical text, optimizing token usage while preserving data integrity." [...]
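To make the encoding idea concrete, here is a minimal Python sketch of a lossless inverted-index translation in the same spirit. It is illustrative only, not Microsoft's implementation, and it keeps plain address lists rather than the merged address ranges the paper describes:

import json

def inverted_index(cells):
    # Map each distinct non-empty value to the addresses holding it,
    # so empty cells cost nothing and repeated values are stored once.
    index = {}
    for address, value in cells.items():
        if value in ("", None):
            continue  # empty cells are simply omitted
        index.setdefault(str(value), []).append(address)
    return json.dumps(index, ensure_ascii=False)

# Toy sheet with many empty cells and a repeated value
sheet = {"A1": "Region", "B1": "Sales", "A2": "EMEA", "B2": 100,
         "A3": "EMEA", "B3": "", "A4": None}
print(inverted_index(sheet))
# -> {"Region": ["A1"], "Sales": ["B1"], "EMEA": ["A2", "A3"], "100": ["B2"]}

Because the mapping from values to addresses is preserved, the original sparse grid can be reconstructed exactly, which is what makes the compression lossless.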

After conducting a "comprehensive evaluation of our method on a variety of LLMs," Microsoft found that SheetCompressor significantly reduces token usage for spreadsheet encoding by 96%. Moreover, SpreadsheetLLM shows "exceptional performance in spreadsheet table detection," which is the "foundational task of spreadsheet understanding." The new LLM builds on the Chain of Thought methodology to introduce a framework called "Chain of Spreadsheet" (CoS), which can "decompose" spreadsheet reasoning into a table detection-match-reasoning pipeline.
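As a rough illustration of that detect-then-reason decomposition, such a pipeline could be wired up along these lines; ask_llm is a hypothetical stand-in for whatever chat-completion call is available, and the prompts are simplified guesses rather than the ones used in the paper:

def chain_of_spreadsheet(query, compressed_sheet, ask_llm):
    # Stage 1: table detection -- ask which region of the compressed
    # sheet is relevant to the question.
    region = ask_llm(
        "Spreadsheet (compressed): " + compressed_sheet + "\n"
        "Question: " + query + "\n"
        "Return only the cell range of the table needed to answer it."
    )
    # Stage 2: reasoning -- answer the question using only that region.
    return ask_llm(
        "Relevant region " + region + " of spreadsheet: " + compressed_sheet + "\n"
        "Question: " + query + "\n"
        "Answer using only the cells in that region."
    )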
AI

Microsoft CTO Kevin Scott Thinks LLM 'Scaling Laws' Will Hold Despite Criticism 18

An anonymous reader quotes a report from Ars Technica: During an interview with Sequoia Capital's Training Data podcast published last Tuesday, Microsoft CTO Kevin Scott doubled down on his belief that so-called large language model (LLM) "scaling laws" will continue to drive AI progress, despite some skepticism in the field that progress has leveled out. Scott played a key role in forging a $13 billion technology-sharing deal between Microsoft and OpenAI. "Despite what other people think, we're not at diminishing marginal returns on scale-up," Scott said. "And I try to help people understand there is an exponential here, and the unfortunate thing is you only get to sample it every couple of years because it just takes a while to build supercomputers and then train models on top of them."

LLM scaling laws refer to patterns explored by OpenAI researchers in 2020 showing that the performance of language models tends to improve predictably as the models get larger (more parameters), are trained on more data, and have access to more computational power (compute). The laws suggest that simply scaling up model size and training data can lead to significant improvements in AI capabilities without necessarily requiring fundamental algorithmic breakthroughs. Since then, other researchers have challenged the idea of persisting scaling laws over time, but the concept is still a cornerstone of OpenAI's AI development philosophy.
Scott's comments can be found around the 46-minute mark.
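For reference, the 2020 paper (Kaplan et al., "Scaling Laws for Neural Language Models") expressed the pattern as power laws in which test loss falls smoothly with parameter count N and dataset size D. One commonly cited form, with constants quoted approximately from that paper rather than from Scott's remarks, is

L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}

with fitted exponents of roughly \alpha_N \approx 0.076 and \alpha_D \approx 0.095, so each order-of-magnitude increase in parameters or training data buys a predictable, if shrinking, reduction in loss.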
AI

Gemini AI Platform Accused of Scanning Google Drive Files Without User Permission (techradar.com) 23

Last week, Kevin Bankston, Senior Advisor on AI Governance at the Center for Democracy & Technology, took to X to report that Google's Gemini AI was caught summarizing his private tax return on Google Drive without his permission. "Despite attempts to disable the feature, Bankston found that Gemini continued to operate in Google Drive, raising questions about Google's handling of user data and privacy settings," writes TechRadar's Craig Hale. From the report: After failing to find the right controls to disable Gemini's integration, Bankston asked Google's ChatGPT-rivalling AI chatbot on two occasions to pinpoint the settings. A second, more detailed response still brought no joy: "Gemini is *not* in Apps and services on my dashboard (1st option), and I didn't have a profile pic in the upper right of the Gemini page (2nd)."

With help from another X user, Bankston found the control, which was already disabled, suggesting either that the control was malfunctioning or that further settings are hidden elsewhere. However, previous Google documentation has confirmed that the company will not use Google Workspace data to train or improve its generative AI services or to feed targeted ads. Bankston theorizes that his previous participation in Google Workspace Labs might have influenced Gemini's behavior. The Gemini side panel in Google Drive for PDFs can be closed if a user no longer wishes to access generative AI summaries.

Facebook

Facebook Ads For Windows Desktop Themes Push Info-Stealing Malware (bleepingcomputer.com) 28

Cybercriminals are using Facebook business pages and advertisements to promote fake Windows themes that infect unsuspecting users with the SYS01 password-stealing malware. From a report: Trustwave researchers who observed the campaigns said the threat actors also promote fake downloads for pirated games and software, Sora AI, 3D image creator, and One Click Active. While using Facebook advertisements to push information-stealing malware is not new, the social media platform's massive reach makes these campaigns a significant threat.

The threat actors take out advertisements that promote Windows themes, free game downloads, and software activation cracks for popular applications, like Photoshop, Microsoft Office, and Windows. These advertisements are promoted through newly created Facebook business pages or by hijacking existing ones. When using hijacked Facebook pages, the threat actors rename them to suit the theme of their advertisement and to promote the downloads to the existing page members.

AI

AI Stocks Balloon Even As Earnings Lag, Jefferies Warns (indiadispatch.com) 57

An anonymous reader shares a report: A basket of 27 large-cap AI stocks created by wealth manager and brokerage house Jefferies has surged 127% in value since ChatGPT's launch in late 2022, adding about $10 trillion in market cap. However, 2025 earnings forecasts for these companies have increased only 25% over the same period, Jefferies warned in a note to clients.

This disconnect has pushed the incremental price-to-earnings ratio for AI stocks to 73 times, suggesting investors are pricing in extremely optimistic growth expectations across the sector.
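Read literally, and assuming the incremental multiple is defined as the added market value divided by the added earnings now being forecast (an interpretation not spelled out in the note), the figures imply

\text{implied added annual earnings} \approx \frac{\$10\ \text{trillion}}{73} \approx \$137\ \text{billion}

which is a small fraction of the value the market has already assigned to those expected gains.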

Nvidia has seen the largest gains, with its stock price up 656% since late 2022. Despite signs of overvaluation, Jefferies believes the AI bubble could keep expanding in the near term, citing strong capital expenditure plans through 2025 and ample cash reserves at major cloud providers.

AI

'Eno' Documentary: Different at Every Screening, to Explore Randomness and 'Generative' Film-making (theverge.com) 62

From The New York Times: The key to "Eno" comes near the beginning of the film — at least, the beginning of the first version I saw. The musician Brian Eno, the documentary's subject, notes that the fun of the kind of art he makes is that it's a two-way street. "The audience's brain does the cooking and keeps seeing relationships," he says.

Most movies are made up of juxtapositions of scenes, carefully selected and designed by the editor. But "Eno," directed by Gary Hustwit, turns that convention on its head. Writ large, it's a meditation on creativity. But every version of the movie you see is different, generated by a set of rules that dictate some things about the film, while leaving others to chance. (I've seen it twice, and maybe half the same material appeared across both films.)

Eno, one of the most innovative and celebrated musicians and producers of his generation, has fiddled with randomness in his musical practice for decades, often propelled along by new technologies. He agreed to participate in "Eno" only if it, too, could be an example of what he and others have long called generative art... "Brain One", programmed by the artist Brendan Dawes, generates a new version of the film on the fly every time the algorithm is run. Dawes's system selects from a database of 30 hours of new interviews with Eno and 500 hours of film from his personal archive and, following a system of rules set down by the filmmakers in code, creates a new film. According to the filmmakers, there are 52 quintillion (that is, 52 billion billion) possible combinations, which means the chances of Brain One generating two exact copies of "Eno" are so small as to be functionally zero.
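As a purely illustrative sketch of what rule-constrained, randomized scene selection can look like (this is not Dawes's code, and the clip data, rule, and runtime are made up), a toy version in Python might be:

import random

def generate_cut(clips, allowed, target_minutes, seed=None):
    # Repeatedly pick a random clip that the rules allow, until the
    # target runtime is reached or no candidate remains.
    rng = random.Random(seed)
    cut, elapsed = [], 0
    while elapsed < target_minutes:
        candidates = [c for c in clips if allowed(c, cut)]
        if not candidates:
            break
        clip = rng.choice(candidates)
        cut.append(clip)
        elapsed += clip["minutes"]
    return cut

# Toy rules: never repeat a clip, never play two archive clips back to back
def allowed(clip, cut):
    if clip in cut:
        return False
    return not (cut and cut[-1]["kind"] == "archive" == clip["kind"])

clips = [{"id": i, "kind": k, "minutes": 2}
         for i, k in enumerate(["interview", "archive"] * 5)]
print([c["id"] for c in generate_cut(clips, allowed, target_minutes=10, seed=1)])

Different seeds generally yield different cuts, which is the same basic property Brain One exploits at vastly larger scale.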

"But the ambitions of Eno are greater than the film itself," writes the Verge, with director Hustwit hoping for a cinematic future exploring generative filmmaking with their software and hardware package. "We have a patent pending on the system, and we just launched a startup called Anamorph that is basically exploring this idea further with other filmmakers and studios and streamers."

In an interview with the Verge, Hustwit points out that Brian Eno did the soundtrack for his previous film. "I was having these thoughts about, well, why can't showing a film be more performative? Why does it have to be this static thing every time?"


The film just began a two-week run at Greenwich Village's nonprofit theatre Film Forum, and in the U.K. is appearing this week at 17 Picturehouse Cinemas across England and Scotland. Check this online schedule for upcoming dates this week in Nashville (Thursday), Austin (Friday), Dallas (Saturday) — with later dates this month including Toronto, San Francisco, and Los Angeles, and more cities in August.
AI

How Will AI Transform the Future of Work? (theguardian.com) 121

An anonymous reader shared this report from the Guardian: In March, after analysing 22,000 tasks in the UK economy, covering every type of job, a model created by the Institute for Public Policy Research predicted that 59% of tasks currently done by humans — particularly those done by women and young people — could be affected by AI in the next three to five years. In the worst-case scenario, this would trigger a "jobs apocalypse" in which eight million people lose their jobs in the UK alone.... Darrell West, author of The Future of Work: AI, Robots and Automation, says that just as policy innovations were needed in Thomas Paine's time to help people transition from an agrarian to an industrial economy, they are needed today, as we transition to an AI economy. "There's a risk that AI is going to take a lot of jobs," he says. "A basic income could help navigate that situation."

AI's impact will be far-reaching, he predicts, affecting blue- and white-collar jobs. "It's not just going to be entry-level people who are affected. And so we need to think about what this means for the economy, what it means for society as a whole. What are people going to do if robots and AI take a lot of the jobs?"

Nell Watson, a futurist who focuses on AI ethics, has a more pessimistic view. She believes we are witnessing the dawn of an age of "AI companies": corporate environments where very few — if any — humans are employed at all. Instead, at these companies, lots of different AI sub-personalities will work independently on different tasks, occasionally hiring humans for "bits and pieces of work". These AI companies have the potential to be "enormously more efficient than human businesses", driving almost everyone else out of business, "apart from a small selection of traditional old businesses that somehow stick in there because their traditional methods are appreciated"... As a result, she thinks it could be AI companies, not governments, that end up paying people a basic income.

AI companies, meanwhile, will have no salaries to pay. "Because there are no human beings in the loop, the profits and dividends of this company could be given to the needy. This could be a way of generating support income in a way that doesn't need the state welfare. It's fully compatible with capitalism. It's just that the AI is doing it."

AI

OpenAI Working On New Reasoning Technology Under Code Name 'Strawberry' (reuters.com) 83

OpenAI is close to a breakthrough with a new project called "Strawberry," which aims to enhance its AI models with advanced reasoning abilities. Reuters reports: Teams inside OpenAI are working on Strawberry, according to a copy of a recent internal OpenAI document seen by Reuters in May. Reuters could not ascertain the precise date of the document, which details a plan for how OpenAI intends to use Strawberry to perform research. The source described the plan to Reuters as a work in progress. The news agency could not establish how close Strawberry is to being publicly available. How Strawberry works is a tightly kept secret even within OpenAI, the person said.

The document describes a project that uses Strawberry models with the aim of enabling the company's AI to not just generate answers to queries but to plan ahead enough to navigate the internet autonomously and reliably to perform what OpenAI terms "deep research," according to the source. This is something that has eluded AI models to date, according to interviews with more than a dozen AI researchers. Asked about Strawberry and the details reported in this story, an OpenAI company spokesperson said in a statement: "We want our AI models to see and understand the world more like we do. Continuous research into new AI capabilities is a common practice in the industry, with a shared belief that these systems will improve in reasoning over time."

On Tuesday at an internal all-hands meeting, OpenAI showed a demo of a research project that it claimed had new human-like reasoning skills, according to Bloomberg. An OpenAI spokesperson confirmed the meeting but declined to give details of the contents. Reuters could not determine if the project demonstrated was Strawberry. OpenAI hopes the innovation will improve its AI models' reasoning capabilities dramatically, the person familiar with it said, adding that Strawberry involves a specialized way of processing an AI model after it has been pre-trained on very large datasets. Researchers Reuters interviewed say that reasoning is key to AI achieving human or super-human-level intelligence.

AI

Amazon's AI Chatbot Rufus Is Now Live For All US Customers 20

Amazon's AI chatbot Rufus is now live for all U.S. customers. Engadget's Lawrence Bonk reports: So what does it do? It's an Amazon chatbot so it helps with shopping. You can ask for lists of recommended products and ask what specific products do and stuff like that. I've tooled around with it a bit this morning and it seems fine, though a bit boring. I will say that I cross-referenced some of the recommended products with the web version and Rufus does not automatically list promoted items, at least for now.

It spit out a seemingly random list of well-reviewed products on several occasions. That's fine by me, though I'm not about to buy something based on the word of a one-day-old chatbot. You can also ask specific questions about products, but the answers seem to be pulled directly from the descriptions. As any regular Amazon customer knows, some of these descriptions are accurate and others aren't. The chatbot is tied to your personal account, so it can answer questions about upcoming deliveries and the like.

Amazon says that the bot has been trained on its product catalog, along with customer reviews, community Q&As and public information found throughout the web. However, it hasn't disclosed what websites it pulled that public information from and to what end. It didn't even confirm that these were retail-adjacent websites.
You can try Rufus by updating to the latest version of the Amazon Shopping app. It'll be available in the bottom navigation bar with a typical AI icon consisting of bubbles and sparkles/stars.
The Internet

iLounge and the Unofficial Apple Weblog Are Back As Unethical AI Content Farms 11

An anonymous reader quotes a report from Ars Technica, written by Samuel Axon: In one of the most egregiously unethical uses of AI we've seen, a web advertising company has re-created some defunct, classic tech blogs like The Unofficial Apple Weblog (TUAW) and iLounge by mimicking the bylines of the websites' former writers and publishing AI-generated content under their names. The Verge reported on the fiasco in detail, including speaking to Christina Warren, a former writer for TUAW who now works at GitHub. Warren took to the social media platform Threads yesterday to point out that someone had re-launched TUAW at its original domain and populated it with fake content allegedly written by her and other past TUAW staff. Some of the content simply reworded articles that originally appeared on TUAW, while other articles tied real writers' names to new, AI-generated articles about current events.

TUAW was shut down in 2015, but its intellectual property and domain name continued to be owned by Yahoo. A Hong Kong-based web advertising firm named Web Orange Limited claims to have purchased the domain and brand name but not the content. The domain name still carries some value in terms of Google ranking, so Web Orange Limited seems to have relaunched the site and then used AI summarization tools to reword the original content and publish it under the original authors' names. (It did the same with another classic Apple blog, iLounge.) The site also includes author bios, which are generic and may have been generated, and they are accompanied by author photos that don't look anything like the real writers. The Verge found that some of these same photos have appeared in other places, like web display ads for iPhone cases and dating websites. They may have been AI-generated, though the company has also been caught reusing photos of real people without permission in other contexts.

At first, some of Web Orange Limited's websites named Haider Ali Khan, an Australian currently residing in Dubai, as the owner of the company. Khan's own website identified him as "an independent cyber security analyst" and "long-time advocate for web security" who also runs a web hosting company, and who "started investing in several technology reporting websites" and "manages and runs several news blogs such as the well-known Apple tech-news blog iLounge." However, mentions of his name were removed from the websites today, and the details on his personal website have apparently been taken offline. Warren emailed the company, threatening legal action. After she did that, the byline was changed to what we can only assume is a made-up name -- "Mary Brown." The same goes for many of the other author names on Web Orange Limited's websites.

The company likely tried to use the original authors' names as part of an SEO play; Google tracks the names of authors and gives them authority rankings on specific topics as another layer on top of a website's own authority. That way, Google can try to respond to user queries with results written by people who have built strong reputations in the users' areas of interest. It also helps Google surface authors who are experts on a topic but who write for multiple websites, which is common among freelance writers. The websites are still operational, even though arguably the most egregious breach of ethics -- the false use of real people's names -- has been addressed in many cases.
Businesses

Taiwan's TSMC Crosses $1 Trillion Market Cap Amid AI Frenzy (reuters.com) 28

An anonymous reader quotes a report from Reuters: Taiwan's TSMC scaled a record high on Thursday after posting strong second-quarter revenue on booming demand for AI applications, cementing its position as Asia's most valuable company. TSMC also topped a trillion-dollar market value this week. The AI frenzy has sparked a rally in chipmaker stocks across the globe. Taiwan Semiconductor Manufacturing Co (TSMC), the world's largest contract chipmaker, whose customers include AI poster child Nvidia, has especially benefited from the soaring demand for AI-capable chips.

Foreign investors have poured $4.8 billion so far this year into Taiwan's stock market, which is dominated by TSMC. Asian funds, however, according to HSBC, still remain underweight on Taiwan, suggesting there could be room for further inflow. Shares of TSMC, whose customers also include Apple, have jumped nearly 80% this year, widely outperforming the benchmark Taiwan SE Weighted Index, which is up 35%. On Thursday, TSMC's Taipei-listed shares rose more than 2% to a record T$1,080, taking the company's market value to T$28 trillion ($861 billion) and making it Asia's most valuable publicly listed company.

AI

AI Investment Soars but Profitable Use Remains Elusive for Many Firms, Goldman Sachs Says 46

Despite soaring investment in AI hardware, most companies are struggling to turn the technology into profitable ventures, Goldman Sachs' latest AI adoption tracker reveals. Equity markets project a $330 billion boost to annual revenues for AI enablers by 2025, up from $250 billion forecast just last quarter, yet only 5% of US firms currently use AI in their production processes.

The disconnect between sky-high investment and tepid adoption underscores the significant hurdles businesses face in implementing AI effectively. Industry surveys by Goldman indicate that while many small businesses are experimenting with the technology, most have yet to define clear use cases or establish comprehensive employee training programs. Data compatibility and privacy concerns remain substantial roadblocks, with many firms reporting their existing tech platforms are ill-equipped to support AI applications.

The lack of in-house expertise and resources further compounds these challenges, leaving many companies unable to bridge the gap between AI's theoretical potential and practical implementation. Even among those organizations actively deploying AI, only 35% have a clearly defined vision for creating business value from the technology. This strategic uncertainty is particularly acute in consumer and retail sectors, where just 30% of executives believe they have adequately prioritized generative AI. The barriers to profitable AI use are not limited to technical and strategic issues. Legal and compliance risks loom large, with 64% of businesses expressing concerns about cybersecurity risks and roughly half worried about misinformation and reputational damage stemming from AI use.

Despite these challenges, investment continues to pour into AI hardware, particularly in the semiconductor and cloud computing sectors. Markets anticipate 50% revenue growth for semiconductor companies by the end of 2025. However, this enthusiasm has yet to translate into widespread job displacement, with AI-related layoffs remaining muted and unemployment rates for AI-exposed jobs tracking closely with broader labor market trends.
Japan

Tokyo Residents Seek To Block Building of Massive Data Centre (usnews.com) 22

A group of residents in Tokyo said on Wednesday they were aiming to block construction of a massive logistics and data centre planned by Singaporean developer GLP, in a worrying sign for businesses looking to Japan to meet growing demand. From a report: The petition by more than 220 residents of Akishima city in western Tokyo follows a successful bid in December in Nagareyama city to quash a similar data-centre plan. The Akishima residents were concerned the centre would threaten wildlife, cause pollution and a spike in electricity usage, and drain the city's water supply, which comes solely from groundwater. They filed a petition to audit the urban planning procedure that approved GLP's 3.63-million-megawatt data centre, which GLP estimated would likely emit about 1.8 million tons of carbon dioxide a year. "One company will be responsible for ruining Akishima. That's what this development is," Yuji Ohtake, a representative of the residents' group, told a press conference. Global tech firms such as Microsoft, Amazon and Oracle also have plans to build data centres in Japan. The residents estimated that 3,000 of 4,800 trees on the site would have to be cut down, threatening the area's Eurasian goshawk birds and badgers.
Businesses

AMD Plans To Acquire Silo AI In $665 Million Deal (reuters.com) 6

AMD shares are up following the announcement that it plans to acquire Finnish artificial intelligence company Silo AI for about $665 million. Reuters reports: Acquiring Silo AI will help AMD improve the development and deployment of AMD-powered AI models and help potential customers build complex AI models with the company's chips, AMD said. Silo AI will also strengthen AMD's software development capabilities. While the deal will not impact AMD's financial performance, it "unlocks a significant amount of business moving forward," AMD Senior Vice President of AI Vamsi Boppana said in an interview. AMD declined to discuss how much business the acquisition would generate over time.

Helsinki, Finland-based Silo AI specializes in end-to-end AI-driven solutions that help customers integrate the tech into their products and services. With operations in Europe and North America, the startup counts companies, including Philips, Rolls-Royce, and Unilever, among its customers. Silo AI's CEO and co-founder Peter Sarlin will continue to lead the unit as part of the AMD Artificial Intelligence Group, AMD said. The deal is expected to close in the second half of 2024.

AI

AWS App Studio Promises To Generate Enterprise Apps From a Written Prompt (techcrunch.com) 36

Amazon Web Services is the latest entrant to the generative AI game with the announcement of App Studio, a groundbreaking tool capable of building complex software applications from simple written prompts. TechCrunch's Ron Miller reports: "App Studio is for technical folks who have technical expertise but are not professional developers, and we're enabling them to build enterprise-grade apps," Sriram Devanathan, GM of Amazon Q Apps and AWS App Studio, told TechCrunch. Amazon defines enterprise apps as having multiple UI pages with the ability to pull from multiple data sources, perform complex operations like joins and filters, and embed business logic in them. It is aimed at IT professionals, data engineers and enterprise architects, even product managers who might lack coding skills but have the requisite company knowledge to understand what kinds of internal software applications they might need. The company is hoping to enable these employees to build applications by describing the application they need and the data sources they wish to use.

Examples of the types of applications include an inventory-tracking system or claims approval process. The user starts by entering the name of an application, calling the data sources and then describing the application they want to build. The system comes with some sample prompts to help, but users can enter an ad hoc description if they wish. It then builds a list of requirements for the application and what it will do, based on the description. The user can refine these requirements by interacting with the generative AI. In that way, it's not unlike a lot of no-code tools that preceded it, but Devanathan says it is different. [...] Once the application is complete, it goes through a mini DevOps pipeline where it can be tested before going into production. In terms of identity, security and governance, and other requirements any enterprise would have for applications being deployed, the administrator can link to existing systems when setting up the App Studio. When it gets deployed, AWS handles all of that on the back end for the customer, based on the information entered by the admin.

AI

Galaxy Z Fold & Z Flip 6, Watch Ultra, and New Ring Are Samsung's AI Carriers (arstechnica.com) 11

At its Galaxy Unpacked event today, Samsung unveiled a slew of new devices ushering in the "Next Frontier of Mobile AI." With "cross-device intelligence," each device has its own set of AI features that Samsung said will be personalized for users, good for humanity, and empowering for creators. Ars Technica's Kevin Purdy reports: Aiming to put its Galaxy AI onto your wrist and fingers, Samsung announced a seventh version of its Galaxy Watch, a rugged and larger Galaxy Watch Ultra, and the first version of a Galaxy Ring. [...] The Galaxy Watch 7 and Watch Ultra are strikingly similar to their inspirations: the Apple Watch Ultra and the previous Galaxy Watch, respectively. [...] The Galaxy Z Fold 6 ($1,900) and Z Flip 6 ($1,100) have the kinds of boosts from their prior models you might expect. There's a Snapdragon 8 Gen 3 chip inside. The folding glass on both is supposedly stronger and now rated for IP48, which means dust resistance went from "X" (good luck) to "4" (1 mm and greater particles), which is still unfortunate at these price points, but that's life on the folding edge.

The outward-facing screen on the Z Fold 6 got a smidge bigger (6.2 to 6.3 inches), though it has the same inner display. Its cameras are much the same (50 megapixel main, 10 megapixel telephoto, 12 megapixel ultrawide), though the ultrawide claims better low-light performance. The Z Flip 6's most notable upgrades are its 4,000 mAh battery and a vapor cooling chamber inside. The base model gets 12GB of RAM instead of 8GB and 512GB of storage instead of 256GB.

There are other products not mentioned here announced by Samsung today, including its Galaxy Buds3 and Buds3 Pro, which are wireless earbuds that will remind you of certain other very popular wireless earbuds. What Samsung really had to pitch today was how its own Galaxy AI was the connective tissue between all of them. The screens on the Fold and Flip models are ideal for circling things to search them. The cameras can auto-zoom, the notes can be summarized, and translations, in particular, are everywhere. The watches and rings can track your health and suggest ways to make it better in all kinds of ways that merit a lot of disclosure about where all that data is going. Rick Osterloh, Google's devices and services chief, showed up to give a kind of Gemini blessing to Samsung's efforts.

Microsoft

Microsoft, Apple Drop OpenAI Board Plans as Scrutiny Grows (bloomberg.com) 9

Microsoft and Apple dropped plans to take board roles at OpenAI in a surprise decision that underscores growing regulatory scrutiny of Big Tech's influence over artificial intelligence. From a report: Microsoft, which invested $13 billion in the ChatGPT creator, will withdraw from its observer role on the board, the company said in a letter to OpenAI on Tuesday, which was seen by Bloomberg News. Apple was due to take up a similar role, but an OpenAI spokesperson said the startup won't have board observers after Microsoft's departure. Regulators in the US and Europe had expressed concerns about Microsoft's sway over OpenAI, applying pressure on one of the world's most valuable companies to show that it's keeping the relationship at arm's length. Microsoft has integrated OpenAI's services into its Windows and Copilot AI platforms and, like other big US tech companies, is banking on the new technology to help drive growth.
