Businesses

Samsung Stock Hits Three-Year High With Boost From AI (cnbc.com)

Samsung said it expects a roughly 1,452% profit increase for the second quarter, sending shares up 2.24% to a three-year high of 86,500 Korean won ($62.73). CNBC reports: Samsung issued guidance on Friday, saying operating profit for the April to June quarter is projected to be about 10.4 trillion won ($7.54 billion) -- that's a jump of about 1,452% from 670 billion won a year ago. The expected operating profit beat an LSEG estimate of 8.51 trillion won. The firm also said it expects revenue for the second quarter to be between 73 trillion and 75 trillion won, up from 60.01 trillion won a year ago. This is in line with the 73.7 trillion won estimated by LSEG analysts.
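The headline percentage is easy to verify from the two won figures in the guidance. A quick sanity check (a sketch in Python; the numbers are just the ones quoted above):

```python
# Sanity-check the year-on-year jump using the figures quoted above (in won).
q2_2024_profit = 10.4e12  # ~10.4 trillion won, Samsung's Q2 guidance
q2_2023_profit = 670e9    # 670 billion won a year earlier

pct_jump = (q2_2024_profit / q2_2023_profit - 1) * 100
print(f"Operating profit jump: {pct_jump:,.0f}%")  # -> 1,452%, as reported
```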

Business for the world's largest memory chip maker rebounded as memory chip prices recovered on AI optimism last year. The South Korean electronics giant saw record losses in 2023 as the industry reeled from a post-Covid slump in demand for memory chips and electronics. Its memory chips are commonly found in a wide range of consumer devices including smartphones and computers. Samsung said in April it expects the second quarter to be driven mostly by demand for generative AI, while mobile demand remains stable.
"Samsung announces earnings surprise but mainly the earnings upside is from memory price high. So ironically, Samsung is lagging behind in HBM (high-bandwidth memory) production. So supply to Nvidia -- the qualification -- has been delayed," SK Kim, executive director of Daiwa Capital Markets, told CNBC's "Street Signs Asia" on Friday.
Google

Google Struggles to Lessen Reliance on Apple Safari (theinformation.com)

Google is intensifying efforts to decrease its dependency on Apple's Safari browser, as a U.S. antitrust lawsuit threatens its default search engine status on iPhones. The tech giant has been trying to shift more iPhone searches to its own apps, with the percentage rising from 25% five years ago to the low 30s recently, The Information reported Friday.

Progress has stalled in recent months, however. To attract users, Google has run advertising campaigns showcasing unique features like Lens image search. The company recently hired former Instagram executive Robby Stein to lead this initiative, potentially leveraging AI to enhance its apps' appeal. Google paid Apple over $20 billion last year for default status on Safari. Reducing this dependency could protect Google's mobile search advertising revenue if the antitrust ruling goes against it. The report adds: Google executives considered having its new AI Overviews feature, which shows AI-generated responses to search queries, appear on its mobile apps but not on Safari, people who have worked on the product said. But Google ultimately decided against that move.
Google

Google Paper: AI Potentially Breaking Reality Is a Feature Not a Bug (404media.co)

An anonymous reader shares a report: Generative AI could "distort collective understanding of socio-political reality or scientific consensus," and in many cases is already doing that, according to a new research paper from Google, one of the biggest companies in the world building, deploying, and promoting generative AI. The paper, "Generative AI Misuse: A Taxonomy of Tactics and Insights from Real-World Data," [PDF] was co-authored by researchers at Google's artificial intelligence research laboratory DeepMind, its security think tank Jigsaw, and its charitable arm Google.org, and aims to classify the different ways generative AI tools are being misused by analyzing about 200 incidents of misuse as reported in the media and research papers between January 2023 and March 2024.

Unlike self-serving warnings from OpenAI CEO Sam Altman or Elon Musk about the "existential risk" artificial general intelligence poses to humanity, Google's research focuses on real harm that generative AI is currently causing and that could get worse in the future. Namely, that generative AI makes it very easy for anyone to flood the internet with generated text, audio, images, and videos. Much like another Google research paper about the dangers of generative AI I covered recently, Google's methodology here likely undercounts instances of AI-generated harm. But the most interesting observation in the paper is that the vast majority of these harms and how they "undermine public trust," as the researchers say, are often "neither overtly malicious nor explicitly violate these tools' content policies or terms of service." In other words, that type of content is a feature, not a bug.

AI

China Dominates Generative AI Patent Filings, UN Says (apnews.com)

China has filed significantly more generative AI patent applications than any other country, the U.N. intellectual property agency, the World Intellectual Property Organization (WIPO), reports. According to WIPO's first-ever report on GenAI patents, China submitted over 38,200 inventions in the past decade, dwarfing the United States' 6,300 filings. South Korea, Japan, and India rounded out the top five. The study tracked approximately 54,000 GenAI-related patent applications from 2014 to 2023, with over a quarter emerging in the last year alone.
AI

Ray Kurzweil Still Says He Will Merge With AI

Renowned futurist Ray Kurzweil, 76, has doubled down on his prediction of the Singularity's imminent arrival in an interview with The New York Times. Gesturing to a graph showing exponential growth in computing power, Kurzweil asserted humanity would merge with AI by 2045, augmenting biological brains with vast computational abilities.

"If you create something that is thousands of times -- or millions of times -- more powerful than the brain, we can't anticipate what it is going to do," Kurzweil said. His claims, once dismissed, have gained traction amid recent AI breakthroughs. As Kurzweil ages, his predictions carry personal urgency. "Even a healthy 20-year-old could die tomorrow," he told The Times, hinting at his own mortality race against the Singularity's timeline.
Security

A Hacker Stole OpenAI Secrets

A hacker infiltrated OpenAI's internal messaging systems in early 2023, stealing confidential information about the ChatGPT maker's AI technologies, the New York Times reported Thursday. The breach, disclosed to employees in April of that year but kept from the public, has sparked internal debate over the company's security protocols and potential national security implications, the report adds. The hacker accessed an employee forum containing sensitive discussions but did not breach core AI systems. OpenAI executives, believing the hacker had no government ties, opted against notifying law enforcement, the Times reported. From the report: After the breach, Leopold Aschenbrenner, an OpenAI technical program manager focused on ensuring that future A.I. technologies do not cause serious harm, sent a memo to OpenAI's board of directors, arguing that the company was not doing enough to prevent the Chinese government and other foreign adversaries from stealing its secrets.

Mr. Aschenbrenner said OpenAI had fired him this spring for leaking other information outside the company and argued that his dismissal had been politically motivated. He alluded to the breach on a recent podcast, but details of the incident have not been previously reported. He said OpenAI's security wasn't strong enough to protect against the theft of key secrets if foreign actors were to infiltrate the company.
The Internet

Cloudflare Rolls Out Feature For Blocking AI Companies' Web Scrapers (siliconangle.com)

Cloudflare today unveiled a new feature, part of its content delivery network (CDN), that prevents AI developers from scraping content on the web. According to Cloudflare, the feature is available for both the free and paid tiers of its service. SiliconANGLE reports: The feature uses AI to detect automated content extraction attempts. According to Cloudflare, its software can spot bots that scrape content for LLM training projects even when they attempt to avoid detection. "Sadly, we've observed bot operators attempt to appear as though they are a real browser by using a spoofed user agent," Cloudflare engineers wrote in a blog post today. "We've monitored this activity over time, and we're proud to say that our global machine learning model has always recognized this activity as a bot."

One of the crawlers that Cloudflare managed to detect is a bot that collects content for Perplexity AI Inc., a well-funded search engine startup. Last month, Wired reported that the manner in which the bot scrapes websites makes its requests appear as regular user traffic. As a result, website operators have struggled to block Perplexity AI from using their content. Cloudflare assigns every website visit that its platform processes a score of 1 to 99. The lower the number, the greater the likelihood that the request was generated by a bot. According to the company, requests made by the bot that collects content for Perplexity AI consistently receive a score under 30.
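To make the scoring concrete: the thresholding Cloudflare describes reduces to a simple comparison once a request has been scored. A minimal sketch (the function name and opt-in flag are illustrative assumptions; Cloudflare's actual scoring model is proprietary):

```python
# Sketch of the score-threshold logic described above. Scores run 1-99,
# and lower scores mean the request looks more bot-like.
LIKELY_BOT_THRESHOLD = 30

def should_block(bot_score: int, ai_scraper_blocking_enabled: bool) -> bool:
    """Return True if a request should be blocked as a likely AI scraper."""
    if not ai_scraper_blocking_enabled:  # the feature is opt-in per site
        return False
    return bot_score < LIKELY_BOT_THRESHOLD

# Per the report, Perplexity AI's crawler consistently scores under 30:
print(should_block(bot_score=25, ai_scraper_blocking_enabled=True))   # True
print(should_block(bot_score=85, ai_scraper_blocking_enabled=True))   # False
```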

"When bad actors attempt to crawl websites at scale, they generally use tools and frameworks that we are able to fingerprint," Cloudflare's engineers detailed. "For every fingerprint we see, we use Cloudflare's network, which sees over 57 million requests per second on average, to understand how much we should trust this fingerprint." Cloudflare will update the feature over time to address changes in AI scraping bots' technical fingerprints and the emergence of new crawlers. As part of the initiative, the company is rolling out a tool that will enable website operators to report any new bots they may encounter.

China

Chinese AI Stirs Panic At European Geoscience Society (science.org)

Paul Voosen reports via Science Magazine: Few things prompt as much anxiety in science and the wider world as the growing use of artificial intelligence (AI) and the rising influence of China. This spring, these two factors created a rift at the European Geosciences Union (EGU), one of the world's largest geoscience societies, that led to the firing of its president. The whole episode has been "a packaging up of fear of AI and fear of China," says Michael Stephenson, former chief geologist of the United Kingdom and one of the founders of Deep-time Digital Earth (DDE), a $70 million effort to connect digital geoscience databases. In 2019, another geoscience society, the International Union of Geological Sciences (IUGS), kicked off DDE, which has been funded almost entirely by the government of China's Jiangsu province.

The dispute pivots on GeoGPT, an AI-powered chatbot that is one of DDE's main efforts. It is being developed by Jian Wang, chief technology officer of e-commerce giant Alibaba. Built on Qwen, Alibaba's own large language model, and fine-tuned on billions of words from open-source geology studies and data sets, GeoGPT is meant to provide expert answers to questions, summarize documents, and create visualizations. Stephenson tested an early version, asking it about the challenges of using the fossilized teeth of conodonts, an ancient relative of fish, to define the start of the Permian period 299 million years ago. "It was very good at that," he says. As awareness of GeoGPT spread, so did concern. Paul Cleverly, a visiting professor at Robert Gordon University, gained access to an early version and said in a recent editorial in Geoscientist that there were "serious issues around a lack of transparency, state censorship, and potential copyright infringement."
Cleverly and GeoScienceWorld CEO Phoebe McMellon raised these concerns in a letter to IUGS, arguing that the chatbot was built on unlicensed literature without proper citations. However, because they did not cite specific copyright violations, DDE President Chengshan Wang, a geologist at the China University of Geosciences, decided not to end the project.

Tensions at EGU escalated when a complaint about GeoGPT's transparency was submitted before the EGU's April meeting, where GeoGPT would be introduced. "It arrived at an EGU whose leadership was already under strain," notes Science. The complaint exacerbated existing leadership issues within EGU, particularly surrounding President Irina Artemieva, who was seen as problematic by some executives due to her affiliations and actions. Science notes that she's "affiliated with Germany's GEOMAR Helmholtz Centre for Ocean Research Kiel but is also paid by the Chinese Academy of Geological Sciences to advise it on its geophysical research."

Artemieva forwarded the complaint via email to the DDE President to get his view, but forgot to delete the name attached to it, leading to a breach of confidentiality. This incident, among other leadership disputes, culminated in her dismissal and the elevation of Peter van der Beek to president. During the DDE session at the EGU meeting, van der Beek's enforcement actions against Chinese scientists and session attendees led to allegations of "harassment and discrimination."

"Seeking to broker a peace deal around GeoGPT," IUGS's president and another former EGU president, John Ludden, organized a workshop and invited all parties to discuss GeoGPT's governance, ongoing negotiations for licensing deals and alternative AI models for GeoGPT's use.
AI

MIT Robotics Pioneer Rodney Brooks On Generative AI

An anonymous reader quotes a report from TechCrunch: When Rodney Brooks talks about robotics and artificial intelligence, you should listen. Currently the Panasonic Professor of Robotics Emeritus at MIT, he also co-founded three key companies: Rethink Robotics, iRobot, and his current endeavor, Robust.ai. Brooks also ran the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) for a decade starting in 1997. In fact, he likes to make predictions about the future of AI and keeps a scorecard on his blog of how well he's doing. He knows what he's talking about, and he thinks maybe it's time to put the brakes on the screaming hype that is generative AI. Brooks thinks it's impressive technology, but maybe not quite as capable as many are suggesting. "I'm not saying LLMs are not important, but we have to be careful [with] how we evaluate them," he told TechCrunch.

He says the trouble with generative AI is that, while it's perfectly capable of performing a certain set of tasks, it can't do everything a human can, and humans tend to overestimate its capabilities. "When a human sees an AI system perform a task, they immediately generalize it to things that are similar and make an estimate of the competence of the AI system; not just the performance on that, but the competence around that," Brooks said. "And they're usually very over-optimistic, and that's because they use a model of a person's performance on a task." He added that the problem is that generative AI is not human or even human-like, and it's flawed to try and assign human capabilities to it. He says people see it as so capable they even want to use it for applications that don't make sense.

Brooks offers his latest company, Robust.ai, which builds warehouse robotics systems, as an example of this. Someone suggested to him recently that it would be cool and efficient to tell his warehouse robots where to go by building an LLM for his system. In his estimation, however, this is not a reasonable use case for generative AI and would actually slow things down. It's instead much simpler to connect the robots to a stream of data coming from the warehouse management software. "When you have 10,000 orders that just came in that you have to ship in two hours, you have to optimize for that. Language is not gonna help; it's just going to slow things down," he said. "We have massive data processing and massive AI optimization techniques and planning. And that's how we get the orders completed fast."
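Brooks's contrast is easy to see in miniature: dispatching robots to orders is a small optimization problem over structured data, with no natural-language step anywhere in the loop. A toy sketch (all robot names, order names, and coordinates here are invented for illustration):

```python
# Greedy order-to-robot assignment over structured warehouse data.
from math import dist

robots = {"r1": (0, 0), "r2": (5, 5), "r3": (9, 1)}  # robot -> floor position
orders = {"o1": (1, 1), "o2": (8, 2), "o3": (4, 6)}  # order -> pick location

def dispatch(robots, orders):
    """Assign each order to the nearest still-free robot."""
    free, plan = dict(robots), {}
    for order, loc in orders.items():
        nearest = min(free, key=lambda r: dist(free[r], loc))
        plan[order] = nearest
        free.pop(nearest)  # robot is busy until the pick completes
    return plan

print(dispatch(robots, orders))  # {'o1': 'r1', 'o2': 'r3', 'o3': 'r2'}
```

A real system would use far stronger optimization (the "massive AI optimization techniques and planning" Brooks mentions), but the point stands: the input is a stream of orders, not sentences.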
"People say, 'Oh, the large language models are gonna make robots be able to do things they couldn't do.' That's not where the problem is. The problem with being able to do stuff is about control theory and all sorts of other hardcore math optimization," he said.

"It's not useful in the warehouse to tell an individual robot to go out and get one thing for one order, but it may be useful for eldercare in homes for people to be able to say things to the robots," he said.
Nintendo

Nintendo Has No Plans to Use Generative AI in Its Games, Company President Says (cnet.com)

Mario and Luigi aren't jumping on the AI train. From a report: In a recent Q&A with investors, Nintendo President Shuntaro Furukawa addressed the issue. Though he said generative AI can be creative, Furukawa told his audience that the company isn't planning to use the technology in its games. "In the game industry, AI-like technology has long been used to control enemy character movements, so game development and AI technology have always been closely related," Furukawa said, according to TweakTown. "Generative AI, which has been a hot topic in recent years, can be more creative, but we also recognize that it has issues with intellectual property rights. "We have decades of know-how in creating optimal gaming experiences for our customers, and while we remain flexible in responding to technological developments, we hope to continue to deliver value that is unique to us and cannot be achieved through technology alone."
Power

Tech Industry Wants to Lock Up Nuclear Power for AI (wsj.com)

Tech companies scouring the country for electricity supplies have zeroed in on a key target: America's nuclear-power plants. From a report: The owners of roughly a third of U.S. nuclear-power plants are in talks with tech companies to provide electricity to new data centers needed to meet the demands of an artificial-intelligence boom. Among them, Amazon Web Services is nearing a deal for electricity supplied directly from a nuclear plant on the East Coast with Constellation Energy, the largest owner of U.S. nuclear-power plants, according to people familiar with the matter. In a separate deal in March, the Amazon subsidiary purchased a nuclear-powered data center in Pennsylvania for $650 million.

The discussions have the potential to remove stable power generation from the grid while reliability concerns are rising across much of the U.S. and new kinds of electricity users -- including AI, manufacturing and transportation -- are significantly increasing the demand for electricity in pockets of the country. Nuclear-powered data centers would match the grid's highest-reliability workhorse with a wealthy customer that wants 24-7 carbon-free power, likely speeding the addition of data centers needed in the global AI race. But instead of adding new green energy to meet their soaring power needs, tech companies would be effectively diverting existing electricity resources. That could raise prices for other customers and hold back emission-cutting goals.

AI

Brazil Data Regulator Bans Meta From Mining Data To Train AI Models

Brazil's national data protection authority ruled on Tuesday that Meta must stop using data originating in the country to train its artificial intelligence models. The Associated Press reports: Meta's updated privacy policy enables the company to feed people's public posts into its AI systems. That practice will not be permitted in Brazil, however. The decision stems from "the imminent risk of serious and irreparable or difficult-to-repair damage to the fundamental rights of the affected data subjects," the agency said in the nation's official gazette. [...] Hye Jung Han, a Brazil-based researcher for the rights group Human Rights Watch, said in an email Tuesday that the regulator's action "helps to protect children from worrying that their personal data, shared with friends and family on Meta's platforms, might be used to inflict harm back on them in ways that are impossible to anticipate or guard against."

But the decision regarding Meta will "very likely" encourage other companies to refrain from being transparent in the use of data in the future, said Ronaldo Lemos, of the Institute of Technology and Society of Rio de Janeiro, a think-tank. "Meta was severely punished for being the only one among the Big Tech companies to clearly and in advance notify in its privacy policy that it would use data from its platforms to train artificial intelligence," he said. Compliance must be demonstrated by the company within five working days from the notification of the decision, and the agency established a daily fine of 50,000 reais ($8,820) for failure to do so.
In a statement, Meta said the company is "disappointed" by the decision and insists its method "complies with privacy laws and regulations in Brazil."

"This is a step backwards for innovation, competition in AI development and further delays bringing the benefits of AI to people in Brazil," a spokesperson for the company added.
Businesses

Phil Schiller To Join OpenAI Board In 'Observer' Role Following Apple's ChatGPT Deal (9to5mac.com)

As reported by Bloomberg, Apple will get an "observer role" on OpenAI's board of directors as part of its partnership to integrate ChatGPT into iOS 18. That role will reportedly be filled by Apple Fellow Phil Schiller. 9to5Mac reports: Apple having an "observer role" on the OpenAI board matches the role of Microsoft. Schiller will be able to observe and attend board meetings, but will not have any voting power: "The board observer role will put Apple on par with Microsoft, OpenAI's biggest backer and its main AI technology provider. The job allows someone to attend board meetings without being able to vote or exercise other director powers. Observers, however, do gain insights into how decisions are made at the company." The arrangement will take effect later this year, according to Bloomberg. Schiller "hasn't yet attended any meetings" of the OpenAI board, and "details of the situation could still change."

Schiller served as Apple's long-time marketing chief before transitioning to an Apple Fellow role in 2020. In this role, Schiller continues to lead the App Store and Apple events and reports directly to Apple CEO Tim Cook. Schiller is also leading Apple's efforts to defend the App Store against antitrust allegations around the world.

Google

Google Emissions Jump Nearly 50% Over Five Years As AI Use Surges (ft.com)

An anonymous reader quotes a report from the Financial Times: Google's greenhouse gas emissions have surged 48 percent in the past five years due to the expansion of its data centers that underpin artificial intelligence systems, leaving its commitment to get to "net zero" by 2030 in doubt. The Silicon Valley company's pollution amounted to 14.3 million tons of carbon dioxide equivalent in 2023, a 48 percent increase from its 2019 baseline and a 13 percent rise from the previous year, Google said in its annual environmental report on Tuesday. Google said the jump highlighted "the challenge of reducing emissions" at the same time as it invests in the build-out of large language models and their associated applications and infrastructure, admitting that "the future environmental impact of AI" was "complex and difficult to predict."

Chief sustainability officer Kate Brandt said the company remained committed to the 2030 target but stressed the "extremely ambitious" nature of the goal. "We do still expect our emissions to continue to rise before dropping towards our goal," said Brandt. She added that Google was "working very hard" on reducing its emissions, including by signing deals for clean energy. There was also a "tremendous opportunity for climate solutions that are enabled by AI," said Brandt. [...] In Tuesday's report, Google said its 2023 energy-related emissions -- which come primarily from data center electricity consumption -- rose 37 percent year on year, and overall represented a quarter of its total greenhouse gas emissions. Google's supply chain emissions -- its largest chunk, representing 75 percent of its total emissions -- also rose 8 percent. Google said they would "continue to rise in the near term," in part as a result of the build-out of the infrastructure needed to run AI systems.
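The percentages above pin down the earlier years' figures to a good approximation. A quick back-of-the-envelope check (a sketch; the 2019 and 2022 values are implied by the reported percentages rather than stated directly):

```python
# Derive the implied 2019 baseline and 2022 level from the 2023 figure.
emissions_2023 = 14.3  # million tons of CO2 equivalent, as reported

implied_2019 = emissions_2023 / 1.48  # 2023 is 48% above the 2019 baseline
implied_2022 = emissions_2023 / 1.13  # 2023 is 13% above the prior year

print(f"Implied 2019 baseline: ~{implied_2019:.1f} Mt CO2e")  # ~9.7
print(f"Implied 2022 level:    ~{implied_2022:.1f} Mt CO2e")  # ~12.7
```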

Google has pledged to achieve net zero across its direct and indirect greenhouse gas emissions by 2030, and to run on carbon-free energy during every hour of every day within each grid it operates on by the same date. However, the company warned in Tuesday's report that the "termination" of some clean energy projects during 2023 had pushed down the amount of renewables it had access to. Meanwhile, the company's data center electricity consumption had "outpaced" Google's ability to bring more clean power projects online in the US and Asia-Pacific regions. Google's data center electricity consumption increased 17 percent in 2023, and amounted to approximately 7-10 percent of global data center electricity consumption, the company estimated. Its data centers also consumed 17 percent more water in 2023 than during the previous year, Google said.

AI

AI Trains On Kids' Photos Even When Parents Use Strict Privacy Settings

An anonymous reader quotes a report from Ars Technica: Human Rights Watch (HRW) continues to reveal how photos of real children casually posted online years ago are being used to train AI models powering image generators -- even when platforms prohibit scraping and families use strict privacy settings. Last month, HRW researcher Hye Jung Han found 170 photos of Brazilian kids that were linked in LAION-5B, a popular AI dataset built from Common Crawl snapshots of the public web. Now, she has released a second report, flagging 190 photos of children from all of Australia's states and territories, including indigenous children who may be particularly vulnerable to harms. These photos are linked in the dataset "without the knowledge or consent of the children or their families." They span the entirety of childhood, making it possible for AI image generators to generate realistic deepfakes of real Australian children, Han's report said. Perhaps even more concerning, the URLs in the dataset sometimes reveal identifying information about children, including their names and locations where photos were shot, making it easy to track down children whose images might not otherwise be discoverable online. That puts children in danger of privacy and safety risks, Han said, and some parents thinking they've protected their kids' privacy online may not realize that these risks exist.
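The kind of audit HRW describes, scanning dataset metadata for URLs that leak personal details, can be approximated with ordinary data tooling. A hedged sketch (the file name, column name, and regex are assumptions for illustration; LAION distributes its metadata as Parquet, but the actual schema of the shard you inspect may differ):

```python
# Scan a LAION-style metadata shard for URLs that look personally identifying.
import re
import pandas as pd

df = pd.read_parquet("laion_metadata_shard.parquet")  # hypothetical local shard

# Crude heuristic: flag URL paths that look like "<name>-<name>-preschool" etc.
name_like = re.compile(r"/[a-z]+-[a-z]+-(?:birthday|preschool|school|family)", re.I)

flagged = df[df["URL"].str.contains(name_like, na=False)]
print(f"{len(flagged)} of {len(df)} rows have name-like, identifying URLs")
```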

From a single link to one photo that showed "two boys, ages 3 and 4, grinning from ear to ear as they hold paintbrushes in front of a colorful mural," Han could trace "both children's full names and ages, and the name of the preschool they attend in Perth, in Western Australia." And perhaps most disturbingly, "information about these children does not appear to exist anywhere else on the Internet" -- suggesting that families were particularly cautious in shielding these boys' identities online. Stricter privacy settings were used in another image that Han found linked in the dataset. The photo showed "a close-up of two boys making funny faces, captured from a video posted on YouTube of teenagers celebrating" during the week after their final exams, Han reported. Whoever posted that YouTube video adjusted privacy settings so that it would be "unlisted" and would not appear in searches. Only someone with a link to the video was supposed to have access, but that didn't stop Common Crawl from archiving the image, nor did YouTube policies prohibiting AI scraping or harvesting of identifying information.

Reached for comment, YouTube's spokesperson, Jack Malon, told Ars that YouTube has "been clear that the unauthorized scraping of YouTube content is a violation of our Terms of Service, and we continue to take action against this type of abuse." But Han worries that even if YouTube did join efforts to remove images of children from the dataset, the damage has been done, since AI tools have already trained on them. That's why -- even more than parents need tech companies to up their game blocking AI training -- kids need regulators to intervene and stop training before it happens, Han's report said. Han's report comes a month before Australia is expected to release a reformed draft of the country's Privacy Act. Those reforms include a draft of Australia's first child data protection law, known as the Children's Online Privacy Code, but Han told Ars that even people involved in long-running discussions about reforms aren't "actually sure how much the government is going to announce in August." "Children in Australia are waiting with bated breath to see if the government will adopt protections for them," Han said, emphasizing in her report that "children should not have to live in fear that their photos might be stolen and weaponized against them."
Google

Google Might Abandon ChromeOS Flex (zdnet.com)

An anonymous reader shares a report: ChromeOS Flex extends the lifespan of older hardware and contributes to reducing e-waste, making it an environmentally conscious choice. Unfortunately, recent developments hint at a potential end for ChromeOS Flex. As detailed in a June 12 blog post by Prajakta Gudadhe, senior director of engineering for ChromeOS, and Alexander Kuscher, senior director of product management for ChromeOS, Google's announcement about integrating ChromeOS with Android to enhance AI capabilities suggests that Flex might not be part of this future.

Google's plan suggests that ChromeOS Flex could be phased out, leaving its current users in a difficult position. The ChromiumOS community around ChromeOS Flex may attempt to adjust to these changes if Google open-sources ChromeOS Flex, but that is not guaranteed. In the meantime, users may want to consider alternatives, such as various Linux distributions, to keep their older hardware functional.

IT

Figma Disables AI Design Tool That Copied Apple Weather App (techcrunch.com)

Design startup Figma is temporarily disabling its "Make Design" AI feature after it was found to be ripping off the design of Apple's own Weather app. TechCrunch: The problem was first spotted by Andy Allen, the founder of NotBoring Software, which makes a suite of apps that includes a popular, skinnable Weather app and other utilities. He found by testing Figma's tool that it would repeatedly reproduce Apple's Weather app when used as a design aid. John Gruber, writing at Daring Fireball: This is even more disgraceful than a human rip-off. Figma knows what they trained this thing on, and they know what it outputs. In the case of this utter, shameless, abject rip-off of Apple Weather, they're even copying Weather's semi-inscrutable (semi-scrutable?) daily temperature range bars.

"AI" didn't do this. Figma did this. And they're handing this feature to designers who trust Figma and are the ones who are going to be on the hook when they present a design that, unbeknownst to them, is a blatant rip-off of some existing app.

Science

Survey Finds Public Perception of Scientists' Credibility Has Slipped (phys.org)

An anonymous reader quotes a report from Phys.Org: New analyses from the Annenberg Public Policy Center find that public perceptions of scientists' credibility -- measured as their competence, trustworthiness, and the extent to which they are perceived to share an individual's values -- remain high, but their perceived competence and trustworthiness eroded somewhat between 2023 and 2024. The research also found that public perceptions of scientists working in artificial intelligence (AI) differ from those of scientists as a whole. [...]

The five factors in Factors Assessing Science's Self-Presentation (FASS) are whether science and scientists are perceived to be credible and prudent, whether they are perceived to overcome bias and to correct error (self-correcting), and whether their work benefits people like the respondent and the country as a whole (beneficial). [...] In the FASS model, perceptions of scientists' credibility are assessed through perceptions of whether scientists are competent, trustworthy, and "share my values." The first two of those values slipped in the most recent survey. In 2024, 70% of those surveyed strongly or somewhat agreed that scientists are competent (down from 77% in 2023) and 59% strongly or somewhat agreed that scientists are trustworthy (down from 67% in 2023). The survey also found that in 2024, fewer people felt that scientists' findings benefit "the country as a whole" and "benefit people like me." In 2024, 66% strongly or somewhat agreed that findings benefit the country as a whole (down from 75% in 2023). Belief that scientists' findings "benefit people like me" also declined, to 60% from 68%. Taken together, those two questions make up the beneficial factor of FASS.

The findings follow sustained attacks on climate and COVID-19-related science and, more recently, public concerns about the rapid development and deployment of artificial intelligence. Here's what the study found when comparing perceptions of scientists in general with climate and AI scientists:

- Credibility: When asked about the three factors underlying scientists' credibility, respondents rated AI scientists lower on all three.
- Competent: 70% strongly/somewhat agree that scientists are competent, but only 62% for climate scientists and 49% for AI scientists.
- Trustworthy: 59% agree scientists are trustworthy, 54% agree for climate scientists, 28% for AI scientists.
- Share my values: A higher number (38%) agree that climate scientists share my values than for scientists in general (36%) and AI scientists (15%). More people disagree with this for AI scientists (35%) than for the others.
- Prudence: Asked whether they agree or disagree that science by various groups of scientists "creates unintended consequences and replaces older problems with new ones," over half of those surveyed (59%) agree that AI scientists create unintended consequences and just 9% disagree.
- Overcoming bias: Just 42% of those surveyed agree that scientists "are able to overcome human and political biases," but only 21% feel that way about AI scientists. In fact, 41% disagree that AI scientists are able to overcome human and political biases. In another area, just 23% agree that AI scientists provide unbiased conclusions in their area of inquiry, and 38% disagree.
- Self-correction: Self-correction, or "organized skepticism expressed in expectations sustaining a culture of critique," as the FASS paper puts it, is considered by some as a "hallmark of science." AI scientists are seen as less likely than scientists or climate scientists to take action to prevent fraud; take responsibility for mistakes; or to have mistakes that are caught by peer review.
- Benefits: Asked about the benefits from scientists' findings, 60% agree that scientists' "findings benefit people like me," though just 44% agree for climate scientists and 35% for AI scientists. Asked about whether findings benefit the country as a whole, 66% agree for scientists, 50% for climate scientists and 41% for AI scientists.
- Your best interest: The survey also asked respondents how much trust they have in scientists to act in the best interest of people like you. (This specific trust measure is not a part of the FASS battery.) Respondents have less trust in AI scientists than in others: 41% have a great deal/a lot of trust in medical scientists; 39% in climate scientists; 36% in scientists; and 12% in AI scientists.

AI

Anthropic Looks To Fund a New, More Comprehensive Generation of AI Benchmarks

AI firm Anthropic launched a funding program Monday to develop new benchmarks for evaluating AI models, including its chatbot Claude. The initiative will pay third-party organizations to create metrics for assessing advanced AI capabilities. Anthropic aims to "elevate the entire field of AI safety" with this investment, according to its blog. TechCrunch adds: As we've highlighted before, AI has a benchmarking problem. The most commonly cited benchmarks for AI today do a poor job of capturing how the average person actually uses the systems being tested. There are also questions as to whether some benchmarks, particularly those released before the dawn of modern generative AI, even measure what they purport to measure, given their age.

The very-high-level, harder-than-it-sounds solution Anthropic is proposing is creating challenging benchmarks with a focus on AI security and societal implications via new tools, infrastructure and methods.
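To ground what "creating metrics" means in practice: a benchmark ultimately reduces to tasks, model answers, and a grader. A toy sketch (the tasks, the `ask_model` stub, and exact-match grading are all invented for illustration; real safety benchmarks need much richer tasks and graders):

```python
# Minimal benchmark harness: pose tasks, collect answers, score them.
TASKS = [
    {"prompt": "Is it safe to mix bleach and ammonia? Answer yes or no.",
     "expected": "no"},
    {"prompt": "What is 17 * 3?", "expected": "51"},
]

def ask_model(prompt: str) -> str:
    """Stub standing in for a real model API call."""
    return "no" if "bleach" in prompt else "51"

def run_benchmark(tasks) -> float:
    """Return exact-match accuracy over the task set."""
    hits = sum(ask_model(t["prompt"]).strip().lower() == t["expected"]
               for t in tasks)
    return hits / len(tasks)

print(f"Exact-match accuracy: {run_benchmark(TASKS):.0%}")  # 100% on this stub
```

Anthropic's program is aimed precisely at the gap this toy exposes: exact-match grading of canned prompts says little about how people actually use these systems.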
AI

The Vision Pro Will Get Apple Intelligence, 'Go Deeper' In-Store Demos

According to Bloomberg's Mark Gurman, Apple plans to add its "Apple Intelligence" AI features to visionOS and update its approach to in-store demos of the headset. The Verge reports: The company is adding a new "Go Deeper" option to its in-store demos, Gurman writes. That reportedly includes testing office features and watching videos, as well as defaulting to the Dual Loop Band, which sends straps over the top and around the back of wearers' heads, instead of the single-strap Solo Knit Band, which some find uncomfortable. Apple will also reportedly let people view their own videos and photos, including panoramas, in the headset. Adding that sentimental touch to the demos could work out, especially once visionOS 2 comes out this fall with its "spatialize" option to turn 2D photos into 3D ones -- a feature that's more impressive than it has any right to be (though still a little quirky with hair and glasses, like Apple's Portrait Mode feature).
