AI

Adobe Starts Roll-Out of AI Video Tools, Challenging OpenAI and Meta (reuters.com) 10

An anonymous reader quotes a report from Reuters: Adobe (ADBE.O) on Monday said it has started publicly distributing an AI model that can generate video from text prompts, joining the growing field of companies trying to upend film and television production using generative artificial intelligence. The Firefly Video Model, as the technology is called, will compete with OpenAI's Sora, which was introduced earlier this year, while TikTok owner ByteDance and Meta Platforms have also announced their video tools in recent months.

Facing much larger rivals, Adobe has staked its future on building models trained on data that it has rights to use, ensuring the output can be legally used in commercial work. San Jose, California-based Adobe will start opening up the tool to people who have signed up for its waiting list but did not give a general release date. While Adobe has not yet announced any customers using its video tools, it said on Monday that PepsiCo-owned Gatorade will use its image generation model for a site where customers can order custom-made bottles, and Mattel has been using Adobe tools to help design packaging for its Barbie line of dolls.

For its video tools, Adobe has aimed at making them practical for everyday use by video creators and editors, with a special focus on making the footage blend in with conventional footage, said Ely Greenfield, Adobe's chief technology officer for digital media. "We really focus on fine-grain control, teaching the model the concepts that video editors and videographers use -- things like camera position, camera angle, camera motion," Greenfield told Reuters in an interview.

AI

India Cenbank Chief Warns Against Financial Stability Risks From Growing Use of AI (reuters.com) 10

The growing use of AI and machine learning in financial services globally can lead to financial stability risks and warrants adequate risk mitigation practices by banks, the Governor of the Reserve Bank of India said on Monday. From a report: "The heavy reliance of AI can lead to concentration risks, especially when a small number of technology providers dominate the market," Shaktikanta Das said at an event in New Delhi. This could amplify systemic risks as failures or disruptions in these systems may cascade across the financial sector, Das added.

India's financial service providers are using AI to enhance customer experience, reduce costs, manage risks and drive growth through chatbots and personalised banking. The growing use of AI introduces new vulnerabilities like increased susceptibility to cyber attacks and data breaches, Das said. AI's "opacity" makes it difficult to audit and interpret the algorithms that drive lenders' decisions and could potentially lead to "unpredictable consequences in the market," he warned.

AI

AI Threats 'Complete BS' Says Meta Senior Researcher, Who Thinks AI is Dumber Than a Cat (msn.com) 111

Meta senior researcher Yann LeCun (also a professor at New York University) told the Wall Street Journal that worries about AI threatening humanity are "complete B.S." When a departing OpenAI researcher in May talked up the need to learn how to control ultra-intelligent AI, LeCun pounced. "It seems to me that before 'urgently figuring out how to control AI systems much smarter than us' we need to have the beginning of a hint of a design for a system smarter than a house cat," he replied on X. He likes the cat metaphor. Felines, after all, have a mental model of the physical world, persistent memory, some reasoning ability and a capacity for planning, he says. None of these qualities are present in today's "frontier" AIs, including those made by Meta itself.
LeCun shared a Turing Award with Geoffrey Hinton and Yoshua Bengio (who hopes LeCun is right, but adds "I don't think we should leave it to the competition between companies and the profit motive alone to protect the public and democracy. That is why I think we need governments involved.")

But LeCun still believes AI is a very powerful tool — even as Meta joins the quest for artificial general intelligence: Throughout our interview, he cites many examples of how AI has become enormously important at Meta, and has driven its scale and revenue to the point that it's now valued at around $1.5 trillion. AI is integral to everything from real-time translation to content moderation at Meta, which in addition to its Fundamental AI Research team, known as FAIR, has a product-focused AI group called GenAI that is pursuing ever-better versions of its large language models. "The impact on Meta has been really enormous," he says.

At the same time, he is convinced that today's AIs aren't, in any meaningful sense, intelligent — and that many others in the field, especially at AI startups, are ready to extrapolate its recent development in ways that he finds ridiculous... OpenAI's Sam Altman last month said we could have Artificial General Intelligence within "a few thousand days...." But creating an AI this capable could easily take decades, [LeCun] says — and today's dominant approach won't get us there.... His bet is that research on AIs that work in a fundamentally different way will set us on a path to human-level intelligence. These hypothetical future AIs could take many forms, but work being done at FAIR to digest video from the real world is among the projects that currently excite LeCun. The idea is to create models that learn in a way that's analogous to how a baby animal does, by building a world model from the visual information it takes in.

In contrast, today's AI models "are really just predicting the next word in a text," he says... "And because of their enormous memory capacity, they can seem to be reasoning, when in fact they're merely regurgitating information they've already been trained on."
AI

Study Done By Apple AI Scientists Proves LLMs Have No Ability to Reason (appleinsider.com) 233

Slashdot reader Rick Schumann shared this report from the blog AppleInsider: A new paper from Apple's artificial intelligence scientists has found that engines based on large language models, such as those from Meta and OpenAI, still lack basic reasoning skills.

The group has proposed a new benchmark, GSM-Symbolic, to help others measure the reasoning capabilities of various large language models (LLMs). Their initial testing reveals that slight changes in the wording of queries can result in significantly different answers, undermining the reliability of the models. The group investigated the "fragility" of mathematical reasoning by adding contextual information to their queries that a human could understand, but which should not affect the fundamental mathematics of the solution. This resulted in varying answers, which shouldn't happen...

The study found that adding even a single sentence that appears to offer relevant information to a given math question can reduce the accuracy of the final answer by up to 65 percent. "There is just no way you can build reliable agents on this foundation, where changing a word or two in irrelevant ways or adding a few bits of irrelevant info can give you a different answer," the study concluded... "We found no evidence of formal reasoning in language models," the researchers wrote. The behavior of LLMs "is better explained by sophisticated pattern matching" which the study found to be "so fragile, in fact, that [simply] changing names can alter results."
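The fragility the paper describes is easy to probe in miniature. Here is a minimal sketch of a GSM-Symbolic-style check (not the Apple team's actual benchmark code) that templates one grade-school problem, swaps names, and appends an irrelevant clause; the correct answer is invariant under both changes, so any drift in a model's output signals pattern matching rather than reasoning. The `ask_model` stub stands in for whatever LLM client you use:

```python
import itertools

# A minimal sketch of a GSM-Symbolic-style robustness probe -- not the
# paper's actual benchmark code. The ground truth is invariant under name
# changes and irrelevant additions, so any change in the model's answer
# across variants indicates pattern matching rather than formal reasoning.

TEMPLATE = (
    "{name} picks {n} apples on Monday and twice as many on Tuesday. "
    "{distractor}How many apples does {name} have in total?"
)

NAMES = ["Oliver", "Sofia", "Wei"]
DISTRACTORS = [
    "",  # baseline: no irrelevant information
    "Five of the apples are slightly smaller than average. ",  # irrelevant
]

def expected_answer(n: int) -> int:
    # Unaffected by names or distractors.
    return n + 2 * n

def ask_model(prompt: str) -> str:
    # Placeholder: wire this to an actual LLM client and parse the final
    # number out of the response.
    raise NotImplementedError

def probe(n: int = 44) -> None:
    for name, distractor in itertools.product(NAMES, DISTRACTORS):
        prompt = TEMPLATE.format(name=name, n=n, distractor=distractor)
        print(f"expected={expected_answer(n)} | prompt={prompt!r}")
        # answer = ask_model(prompt)  # compare against expected_answer(n)

if __name__ == "__main__":
    probe()
```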

AI

$5,000 AI Pants: This Company Wants to Rent Hikers an Exoskeleton (cnn.com) 40

"Technical outerwear brand Arc'teryx and wearable technology startup Skip have teamed up to create exoskeleton hiking pants, powered by AI..." reports CNN. After four years of collaboration and testing, the two companies plan to start selling the battery-powered pants in 2025 for $5,000 — but they're also "available to rent and try out now," according to CNN's video report: "You can think of it like an e-bike for walking..." says Skip's co-founder and chief product officer Anna Roumiantseva. "On the way up, it really kind of offloads some of those big muscle groups that are working their hardest. We like to say it gives you about 40% more power in your legs on the way up with every step." ("And then supports their knees on the way down," says Cam Stuart, Arc'Teryx's advanced concepts team manager for research and engineering.)

Kathryn Zealand, Skip co-founder and CEO, adds, "There's a lot of artificial intelligence built into these pants," with Roumiantseva explaining that the technology "understands how you move, predicts how you're going to want to move next — and then assists you in doing that, so that the assistant doesn't feel like you're walking to the beat of the robot or is moving independently..."

Stuart: I think when people think of what an exoskeleton is, they think of this big bionic frame or they think it's like Avatar or something like that. The challenge for us really was how do we put that in a pair of pants...?

Co-founder Roumiantseva: We've done a lot of work to make a lot of the complicated and sophisticated technology that goes into it look and feel as approachable and as similar to a garment as possible.

Co-founder Zealand: And so maybe you think about them like a pair of pants.

CNN points out it isn't the only "recreational exoskeleton." (Companies like Dnsys and Hypershell have even "developed their own lightweight exoskeletons — through Kickstarter campaigns.")

But beyond recreation, this also has applications for people with disabilities. "Movement and mobility, it's such a huge driver of quality of life, it's such a huge driver of joy," says Skip's co-founder and chief product officer. "It does become a luxury — and that's a huge part of why we're building what we're building. Is we don't think it should be."
AI

LLM Attacks Take Just 42 Seconds On Average, 20% of Jailbreaks Succeed (scworld.com) 79

spatwei shared an article from SC World: Attacks on large language models (LLMs) take less than a minute to complete on average, and leak sensitive data 90% of the time when successful, according to Pillar Security.

Pillar's State of Attacks on GenAI report, published Wednesday, revealed new insights on LLM attacks and jailbreaks, based on telemetry data and real-life attack examples from more than 2,000 AI applications. LLM jailbreaks successfully bypass model guardrails in one out of every five attempts, the Pillar researchers also found, with the speed and ease of LLM exploits demonstrating the risks posed by the growing generative AI (GenAI) attack surface...

The more than 2,000 LLM apps studied for the State of Attacks on GenAI report spanned multiple industries and use cases, with virtual customer support chatbots being the most prevalent use case, making up 57.6% of all apps.

Common jailbreak techniques included "ignore previous instructions" and "ADMIN override", or just using base64 encoding. "The Pillar researchers found that attacks on LLMs took an average of 42 seconds to complete, with the shortest attack taking just 4 seconds and the longest taking 14 minutes to complete.

"Attacks also only involved five total interactions with the LLM on average, further demonstrating the brevity and simplicity of attacks."
AI

California Newspaper Creates AI-Powered 'News Assistant' for Kamala Harris Info (sfchronicle.com) 154

After nearly 30 years of covering Kamala Harris, the San Francisco Chronicle is now letting ChatGPT do it. Sort of...

"We're introducing a new way to engage with our decades of coverage: an AI-powered tool designed to answer your questions about Harris' life, her journey through public service and her presidential campaign," they announced this week: Drawing from thousands of articles written, edited and published by Chronicle journalists since 1995, this tool aims to give readers informed answers about a politician who rose from the East Bay and is now campaigning to become one of the world's most powerful people.

Why don't we have a similar tool for Donald Trump, the Republican nominee for president? The answer isn't political. It's because we've been covering Harris since her career began in the Bay Area and have an archive of vetted articles to draw from. Our newsroom can't offer the same level of expertise when it comes to the former president.

The tool's answers are "drawn directly from decades of extensive reporting," according to a notice toward the bottom of the page. "The tool searches through thousands of Chronicle articles, with new stories added every hour as they are published, ensuring readers have access to the most up-to-date information." Our news assistant is powered by OpenAI's GPT-4o mini model, combined with OpenAI's text-embedding-3-large model, to deliver precise answers based on user queries. The Chronicle articles in this tool's corpus span from April 24, 1995, to the present, covering the length of Harris' career.

This corpus wouldn't be possible without the hard work of the Chronicle's journalists.

Questions go through OpenAI's moderation filter and "relevance check" — and if a reader asks how to vote, "we redirect readers to appropriate resources including canivote.org..."
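The Chronicle has not published its code, but the components it names (GPT-4o mini, text-embedding-3-large, a moderation pass) map onto a standard retrieval-augmented pattern. A minimal sketch, with a hypothetical two-article corpus standing in for the real 1995-to-present archive:

```python
import numpy as np
from openai import OpenAI

# A sketch of the standard retrieval pattern the Chronicle describes: embed
# the archive, embed the question, retrieve the nearest articles, answer
# from them. The Chronicle has not published its code; the ARTICLES corpus
# and prompt wording here are hypothetical.

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ARTICLES = [
    "Harris elected San Francisco district attorney... (archive stand-in)",
    "Harris sworn in as California attorney general... (archive stand-in)",
]

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-large",
                                    input=texts)
    return np.array([d.embedding for d in resp.data])

def answer(question: str, top_k: int = 2) -> str:
    # Moderation pre-check, as the Chronicle's notice describes.
    mod = client.moderations.create(input=question)
    if mod.results[0].flagged:
        return "Question rejected by moderation filter."
    doc_vecs = embed(ARTICLES)   # in practice, precomputed and cached
    q_vec = embed([question])[0]
    scores = doc_vecs @ q_vec    # OpenAI embeddings are unit-normalized
    context = "\n\n".join(ARTICLES[i]
                          for i in np.argsort(scores)[::-1][:top_k])
    chat = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer only from the provided Chronicle articles."},
            {"role": "user", "content": f"Articles:\n{context}\n\nQ: {question}"},
        ],
    )
    return chat.choices[0].message.content
```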
China

Who's Winning America's 'Tech War' With China? (wired.com) 78

In mid-2021 America's National Security Advisor set up a new directorate focused on "advanced chips, quantum computing, and other cutting-edge tech," reports Wired. And the next year, as Congress was working on boosting America's semiconductor sector, he was "closing in on a plan to cripple China's... In October 2022, the Commerce Department forged ahead with its new export controls."

So what happened next? In a phone call with President Biden this past spring, Xi Jinping warned that if the US continued trying to stall China's technological development, he would not "sit back and watch." And he hasn't. Already, China has answered the US export controls — and its corresponding deals with other countries — by imposing its own restrictions on critical minerals used to make semiconductors and by hoovering up older chips and manufacturing equipment it is still allowed to buy. For the past several quarters, in fact, China was the top customer for ASML and a number of Japanese chip companies. A robust black market for banned chips has also emerged in China. According to a recent New York Times investigation, some of the Chinese companies that have been barred from accessing American chips through US export controls have set up new corporations to evade those bans. (These companies have claimed no connection to the ones who've been banned.) This has reportedly enabled Chinese entities with ties to the military to obtain small amounts of Nvidia's high-powered chips.

Nvidia, meanwhile, has responded to the US actions by developing new China-specific chips that don't run afoul of the US controls but don't exactly thrill the Biden administration either. For the White House and Commerce Department, keeping pace with all of these workarounds has been a constant game of cat and mouse. In 2023, the US introduced the first round of updates to its export controls. This September, it released another — an announcement that was quickly followed by a similar expansion of controls by the Dutch. Some observers have speculated that the Biden administration's actions have only made China more determined to invest in its advanced tech sector.

And there's clearly some truth to that. But it's also true that China has been trying to become self-sufficient since long before Biden entered office. Since 2014, it has plowed nearly $100 billion into its domestic chip sector. "That was the world we walked into," [NSA Advisor Jake] Sullivan said. "Not the world we created through our export controls." The United States' actions, he argues, have only made accomplishing that mission that much tougher and costlier for Beijing. Intel CEO Pat Gelsinger estimated earlier this year that there's a "10-year gap" between the most powerful chips being made by Chinese chipmakers like SMIC and the ones Intel and Nvidia are working on, thanks in part to the export controls.

If the measure of Sullivan's success is how effectively the United States has constrained China's advancement, it's hard to argue with the evidence. "It's probably one of the biggest achievements of the entire Biden administration," said Martijn Rasser, managing director of Datenna, a leading intelligence firm focused on China. Rasser said the impact of the US export controls alone "will endure for decades." But if you're judging Sullivan's success by his more idealistic promises regarding the future of technology — the idea that the US can usher in an era of progress dominated by democratic values — well, that's a far tougher test. In many ways, the world, and the way advanced technologies are poised to shape it, feels more unsettled than ever.

Four years was always going to be too short for Sullivan to deliver on that promise. The question is whether whoever's sitting in Sullivan's seat next will pick up where he left off.

AI

AI Disclaimers in Political Ads Backfire on Candidates, Study Finds (msn.com) 49

Many U.S. states now require candidates to disclose when political ads use generative AI, reports the Washington Post.

Unfortunately, researchers at New York University's Center on Technology Policy "found that people rated candidates 'less trustworthy and less appealing' when their ads featured AI disclaimers..." In the study, researchers asked more than 1,000 participants to watch political ads by fictional candidates — some containing AI disclaimers, some not — and then rate how trustworthy they found the would-be officeholders, how likely they were to vote for them and how truthful their ads were. Ads containing AI labels largely hurt candidates across the board, with the pattern holding true for "both deceptive and more harmless uses of generative AI," the researchers wrote. Notably, researchers also found that AI labels were more harmful for candidates running attack ads than those being attacked, something they called the "backfire effect".

"The candidate who was attacked was actually rated more trustworthy, more appealing than the candidate who created the ad," said Scott Babwah Brennen, who directs the center at NYU and co-wrote the report with Shelby Lake, Allison Lazard and Amanda Reid.

One other interesting finding... The article notes that study participants in both parties "preferred when disclaimers were featured anytime AI was used in an ad, even when innocuous."
Crime

Halcyon Announces Anti-Ransomware Protection for Enterprise Linux Environments (linux-magazine.com) 14

Formed in 2021 by cybersecurity professionals (and backed by high-powered VCs including Dell Technologies Capital), Halcyon sells an enterprise-grade anti-ransomware platform.

And this month they announced they're offering protection against ransomware attacks targeting Linux systems, according to Linux magazine: According to Cynet, Linux ransomware attacks increased by 75 percent in 2023 and are expected to continue to climb as more bad actors target Linux deployments... "While Windows is the favorite for desktops, Linux dominates the market for supercomputers and servers."
Here's how Halcyon's announcement made their pitch: "When it comes to ransomware protection, organizations typically prioritize securing Windows environments because that's where the ransomware operators were focusing most of their attacks. However, Linux-based systems are at the core of most any organization's infrastructure, and protecting these systems is often an afterthought," said Jon Miller, CEO & Co-founder, Halcyon. "The fact that Linux systems usually are always on and available means they provide the perfect beachhead for establishing persistence and moving laterally in a targeted network, and they can be leveraged for data theft where the exfiltration is easily masked by normal network traffic. As more ransomware operators are developing the capability to target Linux systems alongside Windows, it is imperative that organizations have the ability to keep pace with the expanded threat."

Halcyon Linux, powered through the Halcyon Anti-Ransomware Platform, uniquely secures Linux-based systems offering comprehensive protection and rapid response capabilities... Halcyon Linux monitors and detects ransomware-specific behaviors such as unauthorized access, lateral movement, or modification of critical files in real-time, providing instant alerts with critical context... When ransomware is suspected or detected, the Halcyon Ransomware Response Engine allows for rapid response and action.... Halcyon Data Exfiltration Protection (DXP) identifies and blocks unauthorized data transfers to protect sensitive information, safeguarding the sensitive data stored in Linux-based systems and endpoints...
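Behavioral detection of this kind typically watches for bursts of file rewrites whose new contents look encrypted (byte entropy near the 8 bits/byte maximum). Below is a stdlib-only toy illustrating that signal, not Halcyon's engine; the watched directory and thresholds are hypothetical:

```python
import math
import os
import time

# Toy illustration of the behavioral signal an anti-ransomware agent
# watches for: a burst of file rewrites whose new contents look encrypted.
# Not Halcyon's engine; directory and thresholds are hypothetical.

WATCH_DIR = "/srv/data"
ENTROPY_THRESHOLD = 7.5   # bits/byte; typical text sits well below this
BURST_THRESHOLD = 20      # suspicious rewrites per polling interval

def shannon_entropy(data: bytes) -> float:
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts if c)

def poll(mtimes: dict) -> int:
    suspicious = 0
    for entry in os.scandir(WATCH_DIR):
        if not entry.is_file():
            continue
        mtime = entry.stat().st_mtime
        previous = mtimes.get(entry.path)
        if previous is not None and mtime != previous:
            with open(entry.path, "rb") as f:
                head = f.read(4096)
            if shannon_entropy(head) > ENTROPY_THRESHOLD:
                suspicious += 1
        mtimes[entry.path] = mtime
    return suspicious

if __name__ == "__main__":
    seen: dict = {}
    poll(seen)  # prime the baseline on the first pass
    while True:
        time.sleep(5)
        if (n := poll(seen)) >= BURST_THRESHOLD:
            print(f"ALERT: {n} files rewritten with encrypted-looking contents")
```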

Halcyon Linux runs with minimal resource impact, ensuring critical environments, such as database servers or virtualized workloads, maintain the same performance.

And in addition, Halcyon offers "an around the clock Threat Response team, reviewing and responding to alerts," so your own corporate security teams "can attend to other pressing priorities..."
AI

PC Shipments Stuck in Neutral Despite AI Buzz (theregister.com) 81

The PC market is not showing many signs of a rebound, despite the hype around AI PCs, with market watchers split over whether unit shipments are up or down slightly. From a report: Those magical AI PC boxes were supposed to fire up buyer enthusiasm and spur the somewhat listless market for desktop and laptop systems into significant growth territory, but that doesn't appear to be happening. According to the latest figures from Gartner, global PC shipments totaled 62.9 million units during Q3 of this year, representing a 1.3 percent decline compared with the same period last year. However, this does follow three consecutive quarters of modest growth.

"Even with a full line-up of Windows-based AI PCs for both Arm and x86 in the third quarter of 2024, AI PCs did not boost the demand for PCs since buyers have yet to see their clear benefits or business value," commented Gartner Director Analyst Mikako Kitagawa. This is perhaps understandable when AI PCs are largely just a marketing concept, and vendors can't agree on exactly what the the definition of an AI PC should be. Even worse, some buyers of Arm-based Copilot+ machines discovered that their performance isn't actually very good with some applications.

AI

Silicon Valley Is Debating If AI Weapons Should Be Allowed To Decide To Kill (techcrunch.com) 99

An anonymous reader quotes a report from TechCrunch: In late September, Shield AI cofounder Brandon Tseng swore that weapons in the U.S. would never be fully autonomous -- meaning an AI algorithm would make the final decision to kill someone. "Congress doesn't want that," the defense tech founder told TechCrunch. "No one wants that." But Tseng spoke too soon. Five days later, Anduril cofounder Palmer Luckey expressed an openness to autonomous weapons -- or at least a heavy skepticism of arguments against them. The U.S.'s adversaries "use phrases that sound really good in a sound bite: Well, can't you agree that a robot should never be able to decide who lives and dies?" Luckey said during a talk earlier this month at Pepperdine University. "And my point to them is, where's the moral high ground in a landmine that can't tell the difference between a school bus full of kids and a Russian tank?"

When asked for further comment, Shannon Prior, a spokesperson for Anduril, said that Luckey didn't mean that robots should be programmed to kill people on their own, just that he was concerned about "bad people using bad AI." In the past, Silicon Valley has erred on the side of caution. Take it from Luckey's cofounder, Trae Stephens. "I think the technologies that we're building are making it possible for humans to make the right decisions about these things," he told Kara Swisher last year. "So that there is an accountable, responsible party in the loop for all decisions that could involve lethality, obviously." The Anduril spokesperson denied any dissonance between Luckey's and Stephens' perspectives, and said that Stephens didn't mean that a human should always make the call, but just that someone is accountable.

Last month, Palantir co-founder and Anduril investor Joe Lonsdale also showed a willingness to consider fully autonomous weapons. At an event hosted by the think tank Hudson Institute, Lonsdale expressed frustration that this question is being framed as a yes-or-no at all. He instead presented a hypothetical where China has embraced AI weapons, but the U.S. has to "press the button every time it fires." He encouraged policymakers to embrace a more flexible approach to how much AI is in weapons. "You very quickly realize, well, my assumptions were wrong if I just put a stupid top-down rule, because I'm a staffer who's never played this game before," he said. "I could destroy us in the battle."

When TC asked Lonsdale for further comment, he emphasized that defense tech companies shouldn't be the ones setting the agenda on lethal AI. "The key context to what I was saying is that our companies don't make the policy, and don't want to make the policy: it's the job of elected officials to make the policy," he said. "But they do need to educate themselves on the nuance to do a good job." He also reiterated a willingness to consider more autonomy in weapons. "It's not a binary as you suggest -- 'fully autonomous or not' isn't the correct policy question. There's a sophisticated dial along a few different dimensions for what you might have a soldier do and what you have the weapons system do," he said. "Before policymakers put these rules in place and decide where the dials need to be set in what circumstance, they need to learn the game and learn what the bad guys might be doing, and what's necessary to win with American lives on the line." [...]
"For many in Silicon Valley and D.C., the biggest fear is that China or Russia rolls out fully autonomous weapons first, forcing the U.S.'s hand," reports TechCrunch. "At the Hudson Institute event, Lonsdale said that the tech sector needs to take it upon itself to 'teach the Navy, teach the DoD, teach Congress' about the potential of AI to 'hopefully get us ahead of China.' Lonsdale's and Luckey's affiliated companies are working on getting Congress to listen to them. Anduril and Palantir have cumulatively spent over $4 million in lobbying this year, according to OpenSecrets."
AI

Former Google Chief Urges AI Investment Over Climate Targets (windowscentral.com) 81

Former Google CEO Eric Schmidt urged prioritizing AI infrastructure over climate goals at a Washington AI summit this week. Schmidt, who led Google until 2011, argued that AI's rapid growth will outpace environmental mitigation efforts. "We're not going to hit the climate goals anyway because we're not organized to do it," Schmidt told attendees, addressing concerns about AI's surging energy demands.

Data centers powering AI are projected to require 35 gigawatts of power by 2030, up from 17 gigawatts in 2023, according to McKinsey. Schmidt, now heading AI drone company White Stork, suggested AI could ultimately solve climate issues, stating, "I'd rather bet on AI solving the problem than constraining it."
Wikipedia

The Editors Protecting Wikipedia from AI Hoaxes (404media.co) 59

A group of Wikipedia editors have formed WikiProject AI Cleanup, "a collaboration to combat the increasing problem of unsourced, poorly-written AI-generated content on Wikipedia." From a report: The group's goal is to protect one of the world's largest repositories of information from the same kind of misleading AI-generated information that has plagued Google search results, books sold on Amazon, and academic journals. "A few of us had noticed the prevalence of unnatural writing that showed clear signs of being AI-generated, and we managed to replicate similar 'styles' using ChatGPT," Ilyas Lebleu, a founding member of WikiProject AI Cleanup, told me in an email. "Discovering some common AI catchphrases allowed us to quickly spot some of the most egregious examples of generated articles, which we quickly wanted to formalize into an organized project to compile our findings and techniques."

In many cases, WikiProject AI Cleanup finds AI-generated content on Wikipedia with the same methods others have used to find AI-generated content in scientific journals and Google Books, namely by searching for phrases commonly used by ChatGPT. One egregious example is this Wikipedia article about the Chester Mental Health Center, which in November of 2023 included the phrase "As of my last knowledge update in January 2022," referring to the last time the large language model was updated.
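The phrase-search approach the editors describe is easy to mechanize. A minimal sketch (not WikiProject AI Cleanup's actual tooling; only the first phrase below is cited in the article, the rest are assumed additions to the list):

```python
import re

# Minimal scanner for ChatGPT-style catchphrases of the kind WikiProject
# AI Cleanup searches for. An illustration, not the project's own tooling.

CATCHPHRASES = [
    r"as of my last knowledge update",  # example cited in the article
    r"as an ai language model",          # assumed addition
    r"i cannot fulfill",                 # assumed addition
]
PATTERN = re.compile("|".join(CATCHPHRASES), re.IGNORECASE)

def scan(title: str, wikitext: str) -> list[str]:
    # Return one finding per matched catchphrase in the article text.
    return [f"{title}: {hit!r}" for hit in PATTERN.findall(wikitext)]

if __name__ == "__main__":
    sample = ("The facility expanded in 1995. As of my last knowledge "
              "update in January 2022, it housed 240 patients.")
    for finding in scan("Chester Mental Health Center", sample):
        print(finding)
```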

AI

OpenAI's GPT Store has Left Some Developers in the Lurch (wired.com) 5

OpenAI's GPT Store, launched in January 2024, has failed to deliver on promised revenue-sharing for most small developers. Despite CEO Sam Altman's earlier statements about paying creators, only a select few have been invited to a pilot program, Wired is reporting.

Developers like Josh Brent Villocido, whose Books GPT was featured at launch, remain excluded from monetization opportunities. Many GPT creators report lack of analytics and unclear performance metrics. Some have devised workarounds, placing affiliate links or ads within their GPTs.
It's funny.  Laugh.

Man Learns He's Being Dumped Via 'Dystopian' AI Summary of Texts 109

An anonymous reader quotes a report from Ars Technica: On Wednesday, NYC-based software developer Nick Spreen received a surprising alert on his iPhone 15 Pro, delivered through an early test version of Apple's upcoming Apple Intelligence text message summary feature. "No longer in a relationship; wants belongings from the apartment," the AI-penned message reads, summing up the content of several separate breakup texts from his girlfriend -- that arrived on his birthday, no less. Spreen shared a screenshot of the AI-generated message in a now-viral tweet on the X social network, writing, "for anyone who's wondered what an apple intelligence summary of a breakup text looks like." Spreen told Ars Technica that the screenshot does not show his ex-girlfriend's full real name, just a nickname.

This summary feature of Apple Intelligence, announced by the iPhone maker in June, isn't expected to fully ship until an iOS 18.1 update in the fall. However, it has been available in a public beta test of iOS 18 since July, which is what Spreen is running on his iPhone. It works like a stripped-down ChatGPT, reading your incoming text messages and delivering its own simplified version of their content. On X, Spreen replied to skepticism over whether the message was real in a follow-up post. "Yes this was real / yes it happened yesterday / yes it was my birthday," Spreen wrote. In response to a question about it being a fair summary of his girlfriend's messages, he wrote, "it is."
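Apple has not detailed how the on-device summarizer works, but the described behavior (several texts in, one terse third-person line out) is straightforward to approximate. A sketch using OpenAI's API purely as a stand-in model; the prompt wording is guesswork:

```python
from openai import OpenAI

# A sketch of the notification-summary pattern described above. Apple's
# on-device model is not public, so OpenAI's API stands in here purely to
# show the shape of the feature: several texts in, one terse line out.

client = OpenAI()

def summarize_thread(sender: str, messages: list[str]) -> str:
    joined = "\n".join(messages)
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": ("Summarize this text thread for a lock-screen "
                         "notification in at most 12 words, neutral tone, "
                         "semicolon-separated clauses, no names.")},
            {"role": "user", "content": f"Texts from {sender}:\n{joined}"},
        ],
        max_tokens=30,
    )
    return resp.choices[0].message.content

# Example: summarize_thread("J.", ["I can't believe you just did that",
#                                  "we're done", "I want my stuff"])
# might yield something like: "No longer in a relationship; wants belongings."
```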

We reached out to Spreen directly via email and he delivered his own summary of his girlfriend's messages. "It was something along the lines of i can't believe you just did that, we're done, i want my stuff. we had an argument in a bar and I got up and left, then she sent the text," he wrote. How did he feel about getting the news via AI summary? "I do feel like it added a level of distance to it that wasn't a bad thing," he told Ars Technica. "Maybe a bit like a personal assistant who stays professional and has your back even in the most awful situations, but yeah, more than anything it felt unreal and dystopian."
AMD

AMD Launches AI Chip To Rival Nvidia's Blackwell (cnbc.com) 30

AMD is launching a new chip to rival Nvidia's upcoming Blackwell chips, which Nvidia called the "world's most powerful chip" for AI when unveiled earlier this year. CNBC reports: The Instinct MI325X, as the chip is called, will start production before the end of 2024, AMD said Thursday during an event announcing the new product. If AMD's AI chips are seen by developers and cloud giants as a close substitute for Nvidia's products, it could put pricing pressure on Nvidia, which has enjoyed roughly 75% gross margins while its GPUs have been in high demand over the past year. In the past few years, Nvidia has dominated the majority of the data center GPU market, but AMD is historically in second place. Now, AMD is aiming to take share from its Silicon Valley rival or at least to capture a big chunk of the market, which it says will be worth $500 billion by 2028.

AMD didn't reveal new major cloud or internet customers for its Instinct GPUs at the event, but the company has previously disclosed that both Meta and Microsoft buy its AI GPUs and that OpenAI uses them for some applications. The company also did not disclose pricing for the Instinct MI325X, which is typically sold as part of a complete server. With the launch of the MI325X, AMD is accelerating its product schedule to release new chips on an annual schedule to better compete with Nvidia and take advantage of the boom in AI chips. The new AI chip is the successor to the MI300X, which started shipping late last year. AMD's 2025 chip will be called MI350, and its 2026 chip will be called MI400, the company said.

AI

Amazon Dreams of AI Agents That Do the Shopping For You (wired.com) 76

An anonymous reader quotes a report from Wired: Amazon might not have ChatGPT, but it has a roadmap that includes developing even more advanced forms of artificial intelligence -- including AI agents that are hell-bent on helping you buy stuff. The ecommerce company is already sprinkling ChatGPT-like AI over its website and apps -- today announcing, among other enhancements, AI-generated shopping guides for hundreds of different product categories. Executives at the company say its engineers are also exploring more ambitious AI services, including autonomous AI shopping agents that recommend goods to a customer or even add items to their cart.

"It's on our roadmap. We're working on it, prototyping it, and when we think it's good enough, we'll release it in whatever form makes sense," says Trishul Chilimbi, a VP and distinguished scientist at Amazon who works on applying the company's core AI to its products and services. Chilimbi says the first step toward AI agents will likely be chatbots that proactively recommend products based on what they know of your habits and interests, as well as a grasp of broader trends. He acknowledges that making this feel nonintrusive will be crucial. "If it's no good and annoying, then you'll tune it out," he says. "But if it comes up with surprising things that are interesting, you'll use it more." [...]

Like many tech companies, Amazon is looking beyond chat and turning its attention toward the potential of so-called agents, which use LLMs but attempt to carry out useful tasks on users' behalf either by writing code on-the-fly, inputting text, or moving a computer's cursor. Future AI agents might, for instance, navigate various websites to sort out a parking ticket, or they might operate a PC to file a tax return. Getting LLM-powered programs to do this reliably is elusive, however, because such tasks are vastly more complex than simple queries and require a new level of precision and reliability.

Amazon's agents are, of course, likely to be more focused on helping customers find and buy whatever they need or want. A Rufus agent might notice when the next book in a series someone is reading becomes available and then automatically recommend it, add it to your cart, or even buy it for you, says Rajiv Mehta, a vice president at Amazon who works on conversational AI shopping. "It could say, 'We have one bought for you. We can ship it today, and it will arrive tomorrow morning at your door. Would you like that?'" Mehta says. He adds that Amazon is thinking about how advertising can be incorporated into its models' recommendations. Chilimbi and Mehta say that eventually, an agent might go on a shopping spree when a customer says, "I'm going on a camping trip, buy me everything I need." An extreme, though not impossible, scenario would involve agents that decide for themselves when a customer needs something, and then buy and ship it to their door. "You could maybe give it a budget," Chilimbi says with a grin.
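Amazon has not said how a Rufus-style agent would be built, but the budgeted shopping loop Chilimbi describes can be sketched generically. In the toy below, `llm_decide` is a deterministic stub standing in for a tool-calling LLM, and the catalog, prices, and budget are invented for illustration:

```python
# A toy sketch of the budgeted shopping-agent loop described above. Amazon
# has not published how such an agent works; llm_decide() is a deterministic
# stub standing in for a tool-calling LLM, and the catalog is invented.

BUDGET = 150.0
CATALOG = {"tent": 99.0, "sleeping bag": 49.0, "headlamp": 19.0}

def llm_decide(goal: str, cart: dict, budget: float) -> dict:
    # Stub policy: add the cheapest un-bought item that still fits the
    # budget; a real agent would let an LLM choose the next tool call.
    spent = sum(cart.values())
    options = [(price, name) for name, price in CATALOG.items()
               if name not in cart and spent + price <= budget]
    if not options:
        return {"tool": "stop"}
    _, name = min(options)
    return {"tool": "add_to_cart", "item": name}

def add_to_cart(cart: dict, item: str) -> None:
    cart[item] = CATALOG[item]  # in a real agent, a checkout API call

def run_agent(goal: str, budget: float = BUDGET) -> dict:
    cart: dict = {}
    while True:
        action = llm_decide(goal, cart, budget)
        if action["tool"] == "stop":
            return cart
        add_to_cart(cart, action["item"])

if __name__ == "__main__":
    print(run_agent("I'm going on a camping trip, buy me everything I need"))
```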

Open Source

Open-Source AI Definition Finally Gets Its First Release Candidate (zdnet.com) 5

An anonymous reader quotes a report from ZDNet: Getting open-source and artificial intelligence (AI) on the same page isn't easy. Just ask the Open Source Initiative (OSI). The OSI, the open-source definition steward organization, has been working on creating an open-source artificial intelligence definition for two years now. The group has been making progress, though. Its Open Source AI Definition has now released its first release candidate, RC1. The latest definition aims to clarify the often contentious discussions surrounding open-source AI. It specifies four fundamental freedoms that an AI system must grant to be considered open source: the ability to use the system for any purpose without permission, to study how it works, to modify it for any purpose, and to share it with or without modifications. So far, so good.

However, the OSI has opted for a compromise regarding training data. Recognizing it's not easy to share full datasets, the current definition requires "sufficiently detailed information about the data used to train the system" rather than the full dataset itself. This approach aims to balance transparency with practical and legal considerations. That last phrase is proving difficult for some people to swallow. From their perspective, if all the data isn't open, then AI large language models (LLM) based on such data can't be open-source. The OSI summarized these arguments as follows: "Some people believe that full, unfettered access to all training data (with no distinction of its kind) is paramount, arguing that anything less would compromise full reproducibility of AI systems, transparency, and security. This approach would relegate Open-Source AI to a niche of AI trainable only on open data."
The OSI acknowledges that the definition of open-source AI isn't final and may need significant rewrites, but the focus is now on fixing bugs and improving documentation. The final version of the Open Source AI Definition is scheduled for release at the All Things Open conference on October 28, 2024.
AI

80% of Software Engineers Must Upskill For AI Era By 2027, Gartner Warns (itpro.com) 108

80% of software engineers will need to upskill by 2027 to keep pace with generative AI's growing demands, according to Gartner. The consultancy predicts AI will transform the industry in three phases. Initially, AI tools will boost productivity, particularly for senior developers. Subsequently, "AI-native software engineering" will emerge, with most code generated by AI. Long-term, AI engineering will rise as enterprise adoption increases, requiring a new breed of professionals skilled in software engineering, data science, and machine learning.
