Google

Google is Developing AI That Can Hear If You're Sick (qz.com) 29

A new AI model being developed by Google could make diagnosing tuberculosis and other respiratory ailments as easy as recording a voice note. From a report: Google is training one of its foundational AI models to listen for signs of disease using sound signals, like coughing, sneezing, and sniffling. This tech, which would work using people's smartphone microphones, could revolutionize diagnoses for communities where advanced diagnostic tools are difficult to come by.

The tech giant is collaborating with Salcit Technologies, an Indian respiratory health care AI startup. The tech, introduced earlier this year as Health Acoustic Representations, or HeAR, is what's known as a bioacoustic foundation model. HeAR was trained on 300 million pieces of audio data, including 100 million cough sounds, to learn to pick out patterns in the sounds. Salcit is now using the model in combination with its own product Swaasa, which uses AI to analyze cough sounds and assess lung health, to research and improve early detection of TB based solely on cough sounds.
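The report doesn't go into technical detail, but the two-stage pattern it describes -- a general-purpose audio encoder feeding a small disease-specific classifier -- is a common one. Below is a minimal, hypothetical Python sketch of that pattern; the embed() function and the toy data are placeholders standing in for HeAR's embeddings, not Google's or Salcit's actual code.

import numpy as np
from sklearn.linear_model import LogisticRegression

def embed(audio: np.ndarray, frame: int = 512) -> np.ndarray:
    """Placeholder 'foundation model' embedding: mean-pooled log spectrum."""
    frames = [audio[i:i + frame] for i in range(0, len(audio) - frame, frame)]
    spec = np.abs(np.fft.rfft(np.stack(frames), axis=1))
    return np.log1p(spec).mean(axis=0)  # one fixed-size vector per clip

rng = np.random.default_rng(0)
clips = [rng.standard_normal(16000) for _ in range(40)]  # toy stand-ins for cough recordings
labels = rng.integers(0, 2, size=40)                     # toy TB-positive/negative labels

X = np.stack([embed(c) for c in clips])
clf = LogisticRegression(max_iter=1000).fit(X, labels)   # small downstream "head"

new_clip = rng.standard_normal(16000)
print("TB probability:", clf.predict_proba(embed(new_clip)[None])[0, 1])

The point of such a design is that the expensive part -- the encoder trained on 300 million clips -- is reused across tasks, while each new screening problem only needs a small labeled dataset to train the final classifier.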

Data Storage

Asia's Richest Man Says He Will Give Everyone 100 GB of Free Cloud Storage (techcrunch.com) 43

Mukesh Ambani, Asia's richest man and the chairman of Reliance Industries, said this week that his telecom firm will offer users 100 GB of free cloud storage. Oil-to-retail giant Reliance, which is India's most valuable firm by market cap, has upended the telecom market in India by offering free voice calls and dirt-cheap internet access.

Jio, Reliance's telecom subsidiary, serves 490 million subscribers, more than any rival in India. Jio offers subscribers at least 2 GB of data per day for 14 days for a total of $2.30. TechCrunch adds: Reliance plans to offer Jio users up to 100 GB of free cloud storage through its Jio AI Cloud service, set to launch around Diwali in October, Ambani said.

Encryption

Feds Bust Alaska Man With 10,000+ CSAM Images Despite His Many Encrypted Apps (arstechnica.com) 209

A recent indictment (PDF) of an Alaska man stands out for the defendant's sophisticated use of multiple encrypted communication tools, privacy-focused apps, and dark web technology. "I've never seen anyone who, when arrested, had three Samsung Galaxy phones filled with 'tens of thousands of videos and images' depicting CSAM, all of it hidden behind a secrecy-focused, password-protected app called 'Calculator Photo Vault,'" writes Ars Technica's Nate Anderson. "Nor have I seen anyone arrested for CSAM having used all of the following: [Potato Chat, Enigma, nandbox, Telegram, TOR, Mega NZ, and web-based generative AI tools/chatbots]." An anonymous reader shares the report: According to the government, Seth Herrera not only used all of these tools to store and download CSAM, but he also created his own -- and in two disturbing varieties. First, he allegedly recorded nude minor children himself and later "zoomed in on and enhanced those images using AI-powered technology." Second, he took the imagery he had created and "turned to AI chatbots to ensure these minor victims would be depicted as if they had engaged in the type of sexual contact he wanted to see." In other words, he created fake AI CSAM -- but using imagery of real kids.

The material was allegedly stored behind password protection on his phone(s) but also on Mega and on Telegram, where Herrera is said to have "created his own public Telegram group to store his CSAM." He also joined "multiple CSAM-related Enigma groups" and frequented dark websites with taglines like "The Only Child Porn Site you need!" Despite all the precautions, Herrera's home was searched and his phones were seized by Homeland Security Investigations; he was eventually arrested on August 23. In a court filing that day, a government attorney noted that Herrera "was arrested this morning with another smartphone -- the same make and model as one of his previously seized devices."

The government is cagey about how, exactly, this criminal activity was unearthed, noting only that Herrera "tried to access a link containing apparent CSAM." Presumably, this "apparent" CSAM was a government honeypot file or web-based redirect that logged the IP address and any other relevant information of anyone who clicked on it. In the end, given that fatal click, none of the "I'll hide it behind an encrypted app that looks like a calculator!" technical sophistication accomplished much. Forensic reviews of Herrera's three phones now form the primary basis for the charges against him, and Herrera himself allegedly "admitted to seeing CSAM online for the past year and a half" in an interview with the feds.
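To make that speculation concrete, here is a hypothetical sketch of such a logging link in Python: any client that follows the URL has its IP address, timestamp, and User-Agent recorded before being redirected. This is ordinary web-server logging; nothing in the filing describes the government's actual implementation.

from http.server import BaseHTTPRequestHandler, HTTPServer
from datetime import datetime, timezone

class LoggingRedirect(BaseHTTPRequestHandler):
    def do_GET(self):
        # Record who clicked before sending them onward.
        print(datetime.now(timezone.utc).isoformat(),
              self.client_address[0],               # requester's IP address
              self.headers.get("User-Agent", "-"),
              self.path)
        self.send_response(302)                     # plain HTTP redirect
        self.send_header("Location", "https://example.org/")
        self.end_headers()

HTTPServer(("0.0.0.0", 8080), LoggingRedirect).serve_forever()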

Government

California Passes Bill Requiring Easier Data Sharing Opt Outs (therecord.media) 22

Most of the attention today has been focused on California's controversial "kill switch" AI safety bill, which passed the California State Assembly by a 45-11 vote. However, California legislators passed another tech bill this week which requires internet browsers and mobile operating systems to offer a simple tool for consumers to easily opt out of data sharing and selling for targeted advertising. Slashdot reader awwshit shares a report from The Record: The state's Senate passed the landmark legislation after the Assembly approved it late Wednesday. The Senate then added amendments to the bill, which now goes back to the Assembly for final sign-off before it is sent to the governor's desk, a process Matt Schwartz, a policy analyst at Consumer Reports, called a "formality." California, long a bellwether for privacy regulation, now sets an example for other states, which could offer the same protections and, in doing so, dramatically disrupt the online advertising ecosystem, according to Schwartz.

"If folks use it, [the new tool] could severely impact businesses that make their revenue from monetizing consumers' data," Schwartz said in an interview with Recorded Future News. "You could go from relatively small numbers of individuals taking advantage of this right now to potentially millions and that's going to have a big impact." As it stands, many Californians don't know they have the right to opt out because the option is invisible on their browsers, a fact which Schwartz said has "artificially suppressed" the existing regulation's intended effects. "It shouldn't be that hard to send the universal opt out signal," Schwartz added. "This will require [browsers and mobile operating systems] to make that setting easy to use and find."

Earth

Who Wins From Nature's Genetic Bounty? (theguardian.com) 23

Scientists are harvesting genetic data from microorganisms in a North Yorkshire quarry, fueling a global debate over ownership and profit-sharing of natural genetic resources. Researchers from London-based startup Basecamp Research are collecting samples and digitizing genetic codes for sale to AI companies. This practice of trading digital sequencing information (DSI) has become central to biotechnology research and development. The issue will be a focal point at October's COP16 biodiversity summit in Cali, Colombia, The Guardian reports.

Developing nations, home to much of the world's biodiversity, are pushing for a global system requiring companies to pay for genetic data use. Past discoveries underscore the potential value: heat-resistant bacteria crucial for COVID-19 testing and marine organisms used in cancer treatments have generated significant profits. Critics accuse companies of "biopiracy" for commercializing genetic information without compensating source countries. Proposed solutions include a global fund for equitable benefit-sharing, though implementation details remain contentious.

AI

Midjourney Says It's 'Getting Into Hardware' (techcrunch.com) 4

Midjourney, the AI image-generating platform, announced on Wednesday that it's "officially getting into hardware." TechCrunch reports: As for what hardware Midjourney, which has a team of fewer than 100 people, might pursue, there may be a clue in its hiring of Ahmad Abbas in February. Abbas, an ex-Neuralink staffer, helped engineer the Apple Vision Pro, Apple's mixed reality headset. Midjourney CEO David Holz is also no stranger to hardware: he co-founded Leap Motion, which built motion-tracking peripherals. (Abbas, in fact, worked with Holz at Leap.)

Even as lawsuits over its AI training practices work their way through the courts, Midjourney says it's continuing to develop AI models for video and 3D generation. The hardware could be related to those efforts as well.

AI

California Legislature Passes Controversial 'Kill Switch' AI Safety Bill (arstechnica.com) 56

An anonymous reader quotes a report from Ars Technica: A controversial bill aimed at enforcing safety standards for large artificial intelligence models has now passed the California State Assembly by a 45-11 vote. Following a 32-1 state Senate vote in May, SB-1047 now faces just one more procedural state Senate vote before heading to Governor Gavin Newsom's desk. As we've previously explored in depth, SB-1047 asks AI model creators to implement a "kill switch" that can be activated if that model starts introducing "novel threats to public safety and security," especially if it's acting "with limited human oversight, intervention, or supervision." Some have criticized the bill for focusing on outlandish risks from an imagined future AI rather than the real, present-day harms of AI use cases like deepfakes and misinformation. [...]

If the Senate confirms the Assembly version as expected, Newsom will have until September 30 to decide whether to sign the bill into law. If he vetoes it, the legislature could override with a two-thirds vote in each chamber (a strong possibility given the overwhelming votes in favor of the bill). At a UC Berkeley Symposium in May, Newsom said he worried that "if we over-regulate, if we overindulge, if we chase a shiny object, we could put ourselves in a perilous position." At the same time, Newsom said those over-regulation worries were balanced against concerns he was hearing from leaders in the AI industry. "When you have the inventors of this technology, the godmothers and fathers, saying, 'Help, you need to regulate us,' that's a very different environment," he said at the symposium. "When they're rushing to educate people, and they're basically saying, 'We don't know, really, what we've done, but you've got to do something about it,' that's an interesting environment."

Supporters of the bill include state senator Scott Wiener, who introduced it, and AI pioneers Geoffrey Hinton and Yoshua Bengio. Bengio backs it as a necessary step for consumer protection, insisting that AI, like pharmaceuticals and aerospace, should not be left to corporate self-regulation.

Stanford professor Fei-Fei Li opposes the bill, arguing that it could have harmful effects on the AI ecosystem by discouraging open-source collaboration and limiting academic research due to the liability placed on developers of modified models. A group of business leaders also sent an open letter Wednesday urging Newsom to veto the bill, calling it "fundamentally flawed."

Apple

Apple Is in Talks To Invest in OpenAI, WSJ Says (wsj.com) 13

Apple is in talks to invest in OpenAI, a move that would cement ties to a partner integral to its efforts to gain ground in the artificial-intelligence race. WSJ: The investment would be part of a new OpenAI fundraising round that would value the ChatGPT maker above $100 billion, people familiar with the situation said. The Wall Street Journal reported Wednesday that venture-capital firm Thrive Capital is leading the round, which will total several billion dollars, and Apple rival Microsoft is also expected to participate.

It couldn't be learned how much Apple or Microsoft will invest in OpenAI in this round. To date, Microsoft has been the primary strategic investor in OpenAI; it owns a 49% share of the AI startup's profits after investing $13 billion since 2019. Apple in June announced OpenAI as the first official partner for Apple Intelligence, its system for infusing AI features throughout its operating system. The new features include an improved Siri voice assistant, text proofreading and custom emoji creation.

AI

AI Giants Pledge To Share New Models With Feds 14

OpenAI and Anthropic will give a U.S. government agency early access to major new model releases under agreements announced on Thursday. From a report: Governments around the world have been pushing for measures -- both legislative and otherwise -- to evaluate the risks of powerful new AI algorithms. Anthropic and OpenAI have each signed a memorandum of understanding to allow formal collaboration with the U.S. Artificial Intelligence Safety Institute, a part of the Commerce Department's National Institute of Standards and Technology. In addition to early access to models, the agreements pave the way for collaborative research around how to evaluate models and their safety as well as methods for mitigating risk. The U.S. AI Safety Institute was set up as part of President Biden's AI executive order.

The Internet

South Korea Faces Deepfake Porn 'Emergency' (bbc.com) 54

An anonymous reader quotes a report from the BBC: South Korea's president has urged authorities to do more to "eradicate" the country's digital sex crime epidemic, amid a flood of deepfake pornography targeting young women. Authorities, journalists and social media users recently identified a large number of chat groups where members were creating and sharing sexually explicit "deepfake" images -- including some of underage girls. Deepfakes are generated using artificial intelligence, and often combine the face of a real person with a fake body. South Korea's media regulator is holding an emergency meeting in the wake of the discoveries.

The spate of chat groups, linked to individual schools and universities across the country, was discovered on the social media app Telegram over the past week. Users, mainly teenage students, would upload photos of people they knew -- both classmates and teachers -- and other users would then turn them into sexually explicit deepfake images. The discoveries follow the arrest of the Russian-born founder of Telegram, Pavel Durov, on Saturday, after it was alleged that child pornography, drug trafficking and fraud were taking place on the encrypted messaging app.

South Korean President Yoon Suk Yeol on Tuesday instructed authorities to "thoroughly investigate and address these digital sex crimes to eradicate them."

"Recently, deepfake videos targeting an unspecified number of people have been circulating rapidly on social media," President Yoon said at a cabinet meeting. "The victims are often minors and the perpetrators are mostly teenagers." To build a "healthy media culture," President Yoon said young men needed to be better educated. "Although it is often dismissed as 'just a prank,' it is clearly a criminal act that exploits technology to hide behind the shield of anonymity," he said.

The Guardian notes that making sexually explicit deepfakes with the intention of distributing them is punishable by five years in prison or a fine of $37,500.

Further reading: 1 in 10 Minors Say Their Friends Use AI to Generate Nudes of Other Kids, Survey Finds (Source: 404 Media)

Google

Google To Relaunch Tool For Creating AI-Generated Images of People (cnbc.com) 35

Google announced that it will reintroduce AI image generation capabilities through its Gemini tool, with early access to the new Imagen 3 generator available for select users in the coming days. The company pulled the feature shortly after it launched in February, after users discovered it produced historically inaccurate images and questionable responses. CNBC reports: "We've worked to make technical improvements to the product, as well as improved evaluation sets, red-teaming exercises and clear product principles," [wrote Dave Citron, a senior director of product on Gemini, in a blog post]. Red-teaming refers to a practice companies use to test products for vulnerabilities.

Citron said Imagen 3 doesn't support generating photorealistic images of identifiable individuals, depictions of minors, or excessively gory, violent or sexual scenes. "Of course, as with any generative AI tool, not every image Gemini creates will be perfect, but we'll continue to listen to feedback from early users as we keep improving," Citron wrote. "We'll gradually roll this out, aiming to bring it to more users and languages soon."

AI

The World's Call Center Capital Is Gripped by AI Fever - and Fear (bloomberg.com) 61

The Philippines' $38 billion outsourcing industry faces a seismic shift as AI tools threaten to displace hundreds of thousands of jobs. Major players are rapidly deploying AI "copilots" to handle tasks like summarizing customer interactions and providing real-time assistance to human agents, Bloomberg reports. Industry experts estimate up to 300,000 business process outsourcing (BPO) jobs could be lost to AI in the next five years, according to outsourcing advisory firm Avasant.

However, the firm also projects AI could create 100,000 new roles in areas like algorithm training and data curation. The BPO sector is crucial to the Philippine economy as the largest source of private-sector employment. The government has established an AI research center and launched training initiatives to boost workers' skills.
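Mechanically, the "copilot" tasks described above usually amount to prompting a hosted LLM with the live call transcript. Here is a rough sketch of the summarization step, using the OpenAI Python SDK as an illustrative backend; the model choice and prompt are assumptions, not details from the report.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

transcript = """Agent: Thanks for calling, how can I help?
Customer: My router drops the connection every evening.
Agent: Let's check the line status together."""

# One call per finished interaction: condense the conversation into
# the wrap-up notes an agent would otherwise type up by hand.
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": "Summarize this support call in three "
                                      "bullets: issue, steps taken, next steps."},
        {"role": "user", "content": transcript},
    ],
)
print(resp.choices[0].message.content)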

AI

Anthropic Publishes the 'System Prompts' That Make Claude Tick (techcrunch.com) 10

An anonymous reader quotes a report from TechCrunch: [...] Anthropic, in its continued effort to paint itself as a more ethical, transparent AI vendor, has published the system prompts for its latest models (Claude 3 Opus, Claude 3.5 Sonnet and Claude 3 Haiku) in the Claude iOS and Android apps and on the web. Alex Albert, head of Anthropic's developer relations, said in a post on X that Anthropic plans to make this sort of disclosure a regular thing as it updates and fine-tunes its system prompts. The latest prompts, dated July 12, outline very clearly what the Claude models can't do -- e.g. "Claude cannot open URLs, links, or videos." Facial recognition is a big no-no; the system prompt for Claude Opus tells the model to "always respond as if it is completely face blind" and to "avoid identifying or naming any humans in [images]." But the prompts also describe certain personality traits and characteristics -- traits and characteristics that Anthropic would have the Claude models exemplify.

The prompt for Claude 3 Opus, for instance, says that Claude is to appear as if it "[is] very smart and intellectually curious," and "enjoys hearing what humans think on an issue and engaging in discussion on a wide variety of topics." It also instructs Claude to treat controversial topics with impartiality and objectivity, providing "careful thoughts" and "clear information" -- and never to begin responses with the words "certainly" or "absolutely." It's all a bit strange to this human, these system prompts, which are written like an actor in a stage play might write a character analysis sheet. The prompt for Opus ends with "Claude is now being connected with a human," which gives the impression that Claude is some sort of consciousness on the other end of the screen whose only purpose is to fulfill the whims of its human conversation partners. But of course that's an illusion.
"If the prompts for Claude tell us anything, it's that without human guidance and hand-holding, these models are frighteningly blank slates," concludes TechCrunch's Kyle Wiggers. "With these new system prompt changelogs -- the first of their kind from a major AI vendor -- Anthropic is exerting pressure on competitors to publish the same. We'll have to see if the gambit works."

Open Source

How Do You Define 'Open Source AI'? (arstechnica.com) 37

An anonymous reader quotes a report from Ars Technica: The Open Source Initiative (OSI) recently unveiled its latest draft definition for "open source AI," aiming to clarify the ambiguous use of the term in the fast-moving field. The move comes as some companies like Meta release trained AI language model weights and code with usage restrictions while using the "open source" label. This has sparked intense debates among free-software advocates about what truly constitutes "open source" in the context of AI. For instance, Meta's Llama 3 model, while freely available, doesn't meet the traditional open source criteria as defined by the OSI for software because it imposes license restrictions based on company size or on the type of content produced with the model. The AI image generator Flux is another "open" model that is not truly open source. Because of this type of ambiguity, we've typically described AI models whose code or weights come with restrictions, or that lack accompanying training data, with alternative terms like "open-weights" or "source-available."

To address the issue formally, the OSI -- which is well-known for its advocacy for open software standards -- has assembled a group of about 70 participants, including researchers, lawyers, policymakers, and activists. Representatives from major tech companies like Meta, Google, and Amazon also joined the effort. The group's current draft (version 0.0.9) definition of open source AI emphasizes "four fundamental freedoms" reminiscent of those defining free software: users of the AI system must be free to use it for any purpose without having to ask for permission, to study how it works, to modify it for any purpose, and to share it with or without modifications. [...] OSI's project timeline indicates that a stable version of the "open source AI" definition is expected to be announced in October at the All Things Open 2024 event in Raleigh, North Carolina.

AI

Gannett is Shuttering Site Accused of Publishing AI Product Reviews (theverge.com) 12

An anonymous reader shares a report: Newspaper giant Gannett is shutting down Reviewed, its product reviews site, effective November 1st, according to sources familiar with the decision. The site offers recommendations for products ranging from shoes to home appliances and employs journalists to test and review items -- but has also been at the center of questions around whether its work is actually produced by humans.

"After careful consideration and evaluation of our Reviewed business, we have decided to close the operation. We extend our sincere gratitude to our employees who have provided consumers with trusted product reviews," Reviewed spokesperson Lark-Marie Anton told The Verge in an email. But the site more recently has been the subject of scrutiny, at times by its own unionized employees. Last October, Reviewed staff publicly accused Gannett of publishing AI-generated product reviews on the site. The articles in question were written in a strange, stilted manner, and staff found that the authors the articles were attributed to didn't seem to exist on LinkedIn and other platforms. Some questioned whether they were real at all. In response to questions, Gannett said the articles were produced by a third-party marketing company called AdVon Commerce and that the original reviews didn't include proper disclosure. But Gannett denied that AI was involved.

AI

Klarna Aims To Halve Workforce With AI-Driven Gains (ft.com) 49

Klarna aims to extend AI-driven cuts to its workforce with plans to axe almost half of its staff [non-paywalled source], as the lossmaking Swedish buy now, pay later company gears up for a stock market flotation. FT: Chief executive Sebastian Siemiatkowski heralded the benefits of AI in Klarna's second-quarter results on Tuesday, which showed a significant narrowing of its net loss from SKr854mn ($84mn) a year earlier to SKr10mn. The Swedish fintech has already cut its workforce from 5,000 to 3,800 in the past year. Siemiatkowski told the Financial Times that Klarna could employ as few as 2,000 employees in the coming years as it uses AI in tasks such as customer service and marketing.

"Not only can we do more with less, but we can do much more with less. Internally, we speak directionally about 2,000 [employees]. We don't want to put a specific deadline on that," he added. Klarna has imposed a hiring freeze on workers apart from engineers and is using natural attrition rather than lay-offs to shrink its workforce. Siemiatkowski has become one of the most outspoken European tech bosses about the benefits of AI, even if it leads to lower employment, arguing that is an issue for governments to worry about. The Stockholm-based group is lining up financial advisers for its long-anticipated initial public offering -- due as early as the first half of next year -- with Morgan Stanley, JPMorgan Chase and Goldman Sachs in lead positions to secure top roles, people familiar with the matter have previously told the FT.

Google

Ex-Googlers Discover That Startups Are Hard (theinformation.com) 61

Dozens of former AI researchers from Google who struck out on their own are learning that startups are tricky. The Information: The latest example is French AI agent developer H, which lost three of its five cofounders (four of whom are ex-Googlers) just months after announcing they had raised $220 million from investors in a "seed" round, as our colleagues reported Friday. The founders had "operational and business disagreements," one of them told us.

The drama at H follows the quasi-acquisitions of Inflection, Adept and Character, which were each less than three years old and founded mostly by ex-Google AI researchers. Reka, another AI developer in this category, was in talks to be acquired by Snowflake earlier this year. Those talks, which could have valued the company at $1 billion, fell apart in May. AI image developer Ideogram, also cofounded by four ex-Googlers, has spoken with at least one later-stage tech startup about potential sale opportunities, though the talks didn't seem to go anywhere, according to someone involved in the discussions.

Cohere, whose CEO co-authored a seminal Google research paper about transformers with Noam Shazeer, the ex-CEO of Character, has also faced growing questions about its relatively meager revenue compared to its rivals. For now, though, it has a lot of money in the bank. Has someone put a curse on startups founded by ex-Google AI researchers?

AI

Hobbyists Discover How To Insert Custom Fonts Into AI-Generated Images (arstechnica.com) 33

An anonymous reader quotes a report from Ars Technica: Last week, a hobbyist experimenting with the new Flux AI image synthesis model discovered that it's unexpectedly good at rendering custom-trained reproductions of typefaces. While far more efficient methods of displaying computer fonts have existed for decades, the new technique is useful for AI image hobbyists because Flux is capable of rendering accurate text, and users can now directly insert words rendered in custom fonts into AI image generations. [...] Since Flux is an open model available for download and fine-tuning, this past month has been the first time training a typeface LoRA might make sense. That's exactly what an AI enthusiast named Vadim Fedenko (who did not respond to a request for an interview by press time) discovered recently. "I'm really impressed by how this turned out," Fedenko wrote in a Reddit post. "Flux picks up how letters look in a particular style/font, making it possible to train Loras with specific Fonts, Typefaces, etc. Going to train more of those soon."

For his first experiment, Fedenko chose a bubbly "Y2K" style font reminiscent of those popular in the late 1990s and early 2000s, publishing the resulting model on the Civitai platform on August 20. Two days later, a Civitai user named "AggravatingScree7189" posted a second typeface LoRA that reproduces a font similar to one found in the Cyberpunk 2077 video game. "Text was so bad before it never occurred to me that you could do this," wrote a Reddit user named eggs-benedryl when reacting to Fedenko's post on the Y2K font. Another Redditor wrote, "I didn't know the Y2K journal was fake until I zoomed it." It's true that using a deeply trained image synthesis neural network to render a plain old font on a simple background is probably overkill. You probably wouldn't want to use this method to replace Adobe Illustrator while designing a document. "This looks good but it's kinda funny how we're reinventing the idea of fonts as 300MB LoRAs," wrote one Reddit commenter on a thread about the Cyberpunk 2077 font.
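For readers curious what "using" one of these typeface LoRAs actually looks like, here is a rough sketch with Hugging Face's diffusers library. The LoRA filename and trigger word are placeholders (each Civitai model documents its own), and running FLUX.1-dev locally requires accepting its license and a large GPU.

import torch
from diffusers import FluxPipeline

# Load the base Flux model, then layer the typeface LoRA on top of it.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("y2k_font_lora.safetensors")  # placeholder filename

image = pipe(
    'A glossy sticker that says "HELLO" in y2k_font style',  # trigger word assumed
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("custom_font.png")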

Television

Samsung TVs Will Get 7 Years of Free Tizen OS Upgrades (businesskorea.co.kr) 95

Samsung Electronics said it will provide Tizen OS updates for its newer TVs for at least seven years, starting with models released in March this year and some 2023 models. Business Korea reports: [Yoon Seok-woo, President of Samsung Electronics' Visual Display Business Division] emphasized that the seven-year free upgrade for Tizen applied to AI TVs would help Samsung widen the market share gap with Chinese competitors. Tizen, an in-house developed OS, has been applied to over 270 million Samsung smart TVs as of last year, making it the world's largest smart TV platform and a key player in leading the Internet of Things (IoT) era. "AI TV will act as the hub of the AI home, connecting other AI appliances like refrigerators and air conditioners," Yoon explained. "We will expand the AI home era by enabling users to monitor and control peripheral devices through the TV even when it is off or when the user is away." This connectivity is a key differentiator from Chinese competitors, according to Yoon.

In the first half of this year, Samsung Electronics maintained the top spot in the global TV market with a 28.8% market share by revenue. However, the combined market share of Chinese companies TCL and Hisense has reached 22.1%, indicating fierce competition.

AI

Wolfram Thinks We Need Philosophers Working on Big Questions Around AI (techcrunch.com) 82

Stephen Wolfram, renowned mathematician and computer scientist, is calling for philosophers to engage with critical questions surrounding AI as the technology's advancement raises complex ethical and societal issues. Wolfram, creator of Mathematica and Wolfram Alpha, argues that the tech industry's approach to AI development often lacks philosophical rigor. "Sometimes in the tech industry, when people talk about how we should set up this or that thing with AI, some may say, 'Well, let's just get AI to do the right thing.' And that leads to, 'Well, what is the right thing?'"

He sees parallels between current AI challenges and foundational questions in philosophy, citing discussions on AI guardrails and the potential for AI to significantly impact society as examples where philosophical inquiry is crucial. The scientist, who earned his doctorate at 20, suggests that philosophers may be better equipped than scientists to tackle the paradigm shifts AI presents. Wolfram's call comes as AI's growing influence raises ethical concerns across industries, urging an interdisciplinary approach to address these emerging challenges.
