Businesses

Has Online Shopping Left Warehouse Workers Without Political Power? (msn.com) 81

A writer for the New York Times editorial board argues we don't yet fully understand the impact of warehouses. "Thanks to the rise of online shopping and the proximity to so many American doorsteps, warehouses have become a major source of blue-collar employment," both in Bethlehem, Pennsylvania and beyond. "In Pennsylvania's Lehigh Valley, more than 19,000 people work in the warehouses that prepare our packages. Thousands more drive the trucks that deliver them."

But while the total number of warehouse-related jobs almost replaces the jobs lost from the closure of a major steel plant, "the political power that blue-collar workers once wielded has not been replaced." Despite their large numbers, their importance to the economy, and their presence in Northampton — a swing county in a crucial battleground state — warehouse workers don't form an influential voting bloc in the way that steelworkers did... It turns out that making stuff isn't the same as distributing it. Working in a steel mill is a communal act that lends itself to the pursuit of political power in a way that warehouse jobs do not. Steelworkers toiled alongside one another, forming lifelong bonds, bowling leagues and unions that delivered a reliable voting bloc. Back when thousands of workers streamed out of the gates of Bethlehem Steel at quitting time, "politicians would come out to shake our hands," Jerry Green, retired president of United Steelworkers Local 2599, told me.

Factories were so good at political mobilization, in fact, that some credit them for democracy itself. Women and working-class men won the right to vote in the United States, Western Europe and much of East Asia after about a quarter of those populations were employed in factories, according to recent research by Sam van Noort, a lecturer at Princeton. Warehouses, by contrast, have no such mystique. Nobody campaigns outside the Walmart distribution centers here. Workers tend to be hired by staffing agencies and many stay for only a few months. They work on their own and rarely socialize. They are notoriously difficult to organize. Alec MacGillis, author of "Fulfillment: America in the Shadow of Amazon," told me that the biggest challenge for labor organizers at Amazon warehouses was getting workers to stay on the job long enough to feel a sense of solidarity.

Malenie Tapia, who moved to Bethlehem from Queens, N.Y., five years ago and took a job as a "picker" in a Zara warehouse, explained why. For eight hours a day, she grabbed items off numbered shelves and delivered them to packers who packed them into boxes. Talking to co-workers was forbidden, she said, except during a brief lunch break. "Sometimes I would go to the section in the back, where there would be less eyes on you, and sneak in a little moment of conversation," she said.

Here's what happened when the reporter asked a pair of Latino workers about their political opinions: Most of all, they fretted about being replaced by machines. They spoke with dread about a fully automated McDonald's and a robot that unloads container ships. They didn't seem to see themselves as part of a working class that could band together to demand protections for their jobs.

The hot political issue around warehouses isn't the workers at all; it's the traffic and loss of green space associated with them. Both the Democratic and Republican candidates in the race for a state representative seat in Northampton have vowed to stop the proliferation of warehouses, which some citizens' groups say destroys their rural way of life. If warehouse workers had a political voice, they might push back. But they don't, so they won't. Warehouses have been an economic boon. But politically, for workers, they are a loss.

Medicine

Human Sense of Smell Is Faster Than Previously Thought, New Study Suggests 26

A new study reveals that the human sense of smell is far more sensitive than previously thought, capable of distinguishing odors and their sequences within just 60 milliseconds. CNN reports: In a single sniff, the human sense of smell can distinguish odors within a fraction of a second, working at a level of sensitivity that is "on par" with how our brains perceive color, "refuting the widely held belief that olfaction is our slow sense," a new study finds. Humans also can discern between various sequences of odors -- distinguishing a sequence of "A" before "B" from sequence "B" before "A" -- when the interval between odorant A and odorant B is merely 60 milliseconds, according to the study, published Monday in the journal Nature Human Behaviour. [...]

The new findings challenge previous research in which the time it took to discriminate between odor sequences was around 1,200 milliseconds, Dr. Dmitry Rinberg, a professor in the Department of Neuroscience and Physiology at NYU Langone Health in New York, wrote in an editorial accompanying the study in Nature Human Behaviour. "The timing of individual notes in music is essential for conveying meaning and beauty in a melody, and the human ear is very sensitive to this. However, temporal sensitivity is not limited to hearing: our sense of smell can also perceive small temporal changes in odor presentations," he wrote. "Similar to how timing affects the perception of notes in a melody, the timing of individual components in a complex odor mixture that reaches the nose may be crucial for our perception of the olfactory world."

The ability to tell apart odors within a single sniff might be an important way in which animals detect both what a smell is and where it might be in space, said Dr. Sandeep Robert Datta, a professor in the Department of Neurobiology at Harvard Medical School, who was not involved in the new study. "The demonstration that humans can tell apart smells as they change within a sniff is a powerful demonstration that timing is important for smell across species, and therefore is a general principle underlying olfactory function. In addition, this study sheds important light on the mysterious mechanisms that support human odor perception," Datta wrote in an email. "The study of human olfaction has historically lagged that of vision and hearing, because as humans we think of ourselves as visual creatures that largely use speech to communicate," he said, adding that the new study helps "fill a critical gap in our understanding of how we as humans smell."
Facebook

Science Editors Raise New Doubts on Meta's Claims It Isn't Polarizing (msn.com) 16

Meta Platforms' claims that Facebook doesn't polarize Americans came under new doubt as the journal Science raised questions about a prominent research paper the tech giant has cited to support its position. WSJ: In an editorial Thursday, Science said that Meta's emergency efforts to calm its platforms in the wake of the 2020 election may have swayed the conclusions of the paper, which the journal published in July 2023. The editorial, titled "Context matters in social media," was prompted by a letter that Science also published presenting new criticism of the paper. Because the study of Facebook's algorithms relied on data provided by Meta when it was undertaking extraordinary efforts to restrain incendiary political content, the letter's authors argue that the paper may have overstated the case that social media algorithms didn't contribute to political polarization.

Such criticisms of peer-reviewed research often appear below papers in academic journals, but Science's editors felt their editorial was needed to more prominently caveat this original paper's conclusions, said Holden Thorp, Science's editor in chief. "It was incumbent on us to come up with a way somehow that people who would come to the paper would know of these concerns," Thorp said in an interview. While no correction was warranted, he said, "There's an election coming up, and we care about people citing this paper." Meta said it had been transparent with researchers about its actions during the time of the study, and the company and its research partners say it had no control over the Science paper's conclusions. Meta characterized debates of the sort aired on Thursday as part of the research process.

United States

'The IRS Says There's Always Next Year' (msn.com) 131

The tax agency again delays a vital software upgrade, at the cost of billions. WSJ's Editorial Board: Taxpayers endure drudgery to file on time each year, but the tax collectors seem less concerned with deadlines. A new Internal Revenue Service database, more than a decade in the making, will be delayed another year. And its cost is billions of dollars and climbing. The IRS told the press this week that it won't replace its Individual Master File until the 2026 tax year, at the earliest. That falls short of Commissioner Danny Werfel's goal of launching a new system in time for 2025 taxes, and the delay could mean another year of grief for countless taxpayers. The file is the digital silo in which more than 154 million tax files are held, and keeping it up-to-date helps to enable speedy, accurate refunds.

The code that powers the database was written in the 1960s by IBM engineers at the same time their colleagues worked on the Apollo program. The system runs on a nearly extinct computer language known as Cobol, and though it retains its basic functionality, maintaining it requires bespoke service. By 2018 the IRS had only 17 remaining developers considered to be experts on the system. The agency has sought and failed to overhaul or replace the database since the 1980s. It spent $4 billion over 14 years to devise upgrades, but it canceled that effort in 2000 "without receiving expected benefits," according to the Government Accountability Office.

The costs continue to mount. IRS spending on operating and maintaining its IT systems has risen 35% in the past four years, to $2.7 billion last year from $2 billion in 2019. These costs will "likely continue to increase until a majority of legacy systems are decommissioned," according to a report last month by the agency's inspector general. Each year major upgrades are pushed back adds a larger sum to the final tab. The IRS usually pleads poverty as an excuse for failing to stay up-to-date. Yet Congress gave the agency billions of extra dollars through the Inflation Reduction Act to fund a speedy database overhaul. Since 2022 it has spent $1.3 billion beyond its ordinary budget to modernize its business systems. Taxpayers will have to wait at least another year to see if that investment has paid off.

The Courts

Appeals Court Questions TikTok's Section 230 Shield for Algorithm (reuters.com) 92

A U.S. appeals court has revived a lawsuit against TikTok over a child's death, potentially limiting tech companies' legal shield under Section 230. The 3rd U.S. Circuit Court of Appeals ruled that the law does not protect TikTok from claims that its algorithm recommended a deadly "blackout challenge" to a 10-year-old girl.

Judge Patty Shwartz wrote that Section 230 only immunizes third-party content, not recommendations made by TikTok's own algorithm. The decision marks a departure from previous rulings, citing a recent Supreme Court opinion that platform algorithms reflect "editorial judgments." This interpretation could significantly impact how courts apply Section 230 to social media companies' content curation practices.
Movies

Rotten Tomatoes Introduces a New Audience Rating For People Who Actually Bought a Ticket (indiewire.com) 48

Rotten Tomatoes and Fandango are rolling out a new "Verified Hot" rating for users who actually bought a ticket to the movie being reviewed. "The designation is only given to theatrical movies that have reached an audience score above 90 percent among user ratings," adds IndieWire. From the report: Movie ticketing app Fandango is the parent company to Rotten Tomatoes, so if you bought your ticket through Fandango and then rated a movie using that same user info on Rotten Tomatoes, RT is able to confirm you bought a ticket and can filter out anyone else who may just be rating things blindly. A rep for RT tells IndieWire the goal is to work with other partners so that other people who don't use Fandango can still be considered verified.

Rotten Tomatoes also expanded its Popcornmeter designations. Any movie with an audience score of at least 60 percent -- meaning at least 60 percent of users rated it 3.5 stars or higher -- will be labeled "Hot," and movies below that 60 percent threshold are now "Stale." The "Certified Fresh" badge for movies that achieve a strong enough critics score has been around for a while, but in 2020 RT introduced a "Top Critics" feature so users could filter out the dozens or hundreds of aggregated critics from unreliable sources who could be skewing a film's score. Any audience member can rate movies on Rotten Tomatoes, but ratings from those not considered "verified" can now also be filtered out.

Rotten Tomatoes made some other tweaks too under the hood: Both the Popcornmeter and Tomatometer need to meet a new minimum number of reviews published for a score to appear. Not everything gets reviewed widely, so the threshold varies depending on a film's total projected domestic box office forecast.
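The threshold logic described above can be sketched in a few lines. This is a hypothetical illustration, not Rotten Tomatoes' actual implementation: the function name, the fixed `min_reviews` cutoff, and the assumption that the audience score is simply the share of ratings at 3.5 stars or higher are all inferred from the article's description.

```python
def popcornmeter_label(ratings, verified=False, min_reviews=50):
    """Assign a Popcornmeter label from user star ratings (0-5).

    Sketch of the article's thresholds: score = percentage of ratings
    at 3.5 stars or higher; >= 90% with ticket-verified ratings earns
    "Verified Hot", >= 60% is "Hot", anything below is "Stale".
    The min_reviews cutoff stands in for RT's varying minimum.
    """
    if len(ratings) < min_reviews:
        return None  # too few reviews for a score to appear
    score = 100 * sum(r >= 3.5 for r in ratings) / len(ratings)
    if verified and score >= 90:
        return "Verified Hot"
    return "Hot" if score >= 60 else "Stale"
```

For example, 95 ratings of 4 stars and 5 ratings of 2 stars from verified ticket buyers would score 95 percent and earn "Verified Hot."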
A full list of "Verified Hot" films can be found here.
Sci-Fi

2024's Hugo Award Winners Announced (thehugoawards.org) 69

Slashdot reader Dave Knott writes: After once again being plagued by controversy, this time due to a thwarted ballot-stuffing campaign, the 2024 Hugo Awards have been awarded at the 2024 World Science Fiction Convention.

This year's winners are:

* Best Novel: Some Desperate Glory, by Emily Tesh
* Best Novella: Thornhedge, by T. Kingfisher
* Best Novelette: "The Year Without Sunshine", by Naomi Kritzer
* Best Short Story: "Better Living Through Algorithms", by Naomi Kritzer
* Best Series: Imperial Radch, by Ann Leckie
* Best Graphic Story or Comic: Saga, Vol. 11, written by Brian K. Vaughan, art by Fiona Staples
* Best Related Work: A City on Mars: Can We Settle Space, Should We Settle Space, and Have We Really Thought This Through?, by Kelly Weinersmith and Zach Weinersmith
* Best Dramatic Presentation, Long Form: Dungeons & Dragons: Honor Among Thieves
* Best Dramatic Presentation, Short Form: The Last of Us: "Long, Long Time", written by Craig Mazin and Neil Druckmann, directed by Peter Hoar
* Best Game or Interactive Work: Baldur's Gate 3, produced by Larian Studios
* Best Editor Short Form: Neil Clarke
* Best Editor Long Form: Ruoxi Chen
* Best Professional Artist: Rovina Cai
* Best Semiprozine: Strange Horizons, by the Strange Horizons Editorial Collective
* Best Fanzine: Nerds of a Feather, Flock Together, editors Roseanna Pendlebury, Arturo Serrano, Paul Weimer; senior editors Joe Sherry, Adri Joy, G. Brown, Vance Kotrla
* Best Fancast: Octothorpe, by John Coxon, Alison Scott, and Liz Batty
* Best Fan Writer: Paul Weimer
* Best Fan Artist: Laya Rose
* Lodestar Award for Best YA Book: To Shape a Dragon's Breath by Moniquill Blackgoose
* Astounding Award for Best New Writer: Xiran Jay Zhao

Space

NASA Citizen Scientists Spot Object Moving 1 Million Miles Per Hour (nasa.gov) 58

Citizen scientists from NASA's Backyard Worlds: Planet 9 project discovered a hypervelocity object, CWISE J1249, moving fast enough to escape the Milky Way. "This hypervelocity object is the first such object found with a mass similar to or less than that of a small star," reports NASA's Science Editorial Team, suggesting the object may have originated from a binary star system or a globular cluster. From the report: A few years ago, longtime Backyard Worlds citizen scientists Martin Kabatnik, Thomas P. Bickle, and Dan Caselden spotted a faint, fast-moving object called CWISE J124909.08+362116.0, marching across their screens in the WISE images. Follow-up observations with several ground-based telescopes helped scientists confirm the discovery and characterize the object. These citizen scientists are now co-authors on the team's study about this discovery published in the Astrophysical Journal Letters (a pre-print version is available here). CWISE J1249 is zooming out of the Milky Way at about 1 million miles per hour. But it also stands out for its low mass, which makes it difficult to classify as a celestial object. It could be a low-mass star, or if it doesn't steadily fuse hydrogen in its core, it would be considered a brown dwarf, putting it somewhere between a gas giant planet and a star.

Ordinary brown dwarfs are not that rare. Backyard Worlds: Planet 9 volunteers have discovered more than 4,000 of them! But none of the others are known to be on their way out of the galaxy. This new object has yet another unique property. Data obtained with the W. M. Keck Observatory in Maunakea, Hawaii, show that it has much less iron and other metals than other stars and brown dwarfs. This unusual composition suggests that CWISE J1249 is quite old, likely from one of the first generations of stars in our galaxy. Why does this object move at such high speed? One hypothesis is that CWISE J1249 originally came from a binary system with a white dwarf, which exploded as a supernova when it pulled off too much material from its companion. Another possibility is that it came from a tightly bound cluster of stars called a globular cluster, and a chance meeting with a pair of black holes sent it soaring away.
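To put the headline figure in the units astronomers usually quote, a quick conversion shows "1 million miles per hour" is roughly 447 km/s (the only assumption here is the standard miles-to-kilometers factor):

```python
# Convert the article's "1 million miles per hour" to km/s.
MILES_TO_KM = 1.609344  # exact international mile

speed_mph = 1_000_000
speed_kms = speed_mph * MILES_TO_KM / 3600  # km/h -> km/s

print(f"{speed_kms:.1f} km/s")  # ~447 km/s
```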

Social Networks

Flipboard Users Can Now Follow Anyone In the Fediverse (techcrunch.com) 8

Starting today, users of the social magazine app Flipboard can follow any federated accounts, "meaning those that participate in the social network of interconnected servers known as the fediverse," writes TechCrunch's Sarah Perez. "This now includes Threads accounts in addition to Mastodon accounts and others." From the report: With the update, which deepens Flipboard's connection with the ActivityPub social graph, any Flipboard user can follow user profiles from any other federated service. If their Flipboard account is also federated, they can interact with those users' posts and participate in conversations, as well. Flipboard's user base, however, is currently undisclosed. [...] The Flipboard app supports full fediverse integration, but the company hasn't yet allowed all users to turn on federation as it's a phased rollout. We're told the goal is to make federation a setting users can select later this year, similar to how Threads added a "fediverse sharing" option in June. When federation is enabled, people will be able to not only share to the fediverse but also see and engage with conversations around their Flipboard posts that are taking place in the fediverse.

With Tuesday's update on Flipboard, people can find and follow others in the fediverse across three areas of its app: Search, Explore and Community. In search results, Flipboard will surface federated accounts and profile results in a new section, "Fediverse Accounts." Editorial recommendations can also be found in the app's "Explore" tab under "Fediverse," and every week a new selection of accounts will be featured in the Community section. Activity from the fediverse will also be displayed in the Flipboard notifications panel, allowing people to engage and follow others in the fediverse directly from their notifications. For Flipboard users, that means they can now follow user profiles from Threads and Mastodon in the Flipboard app, including high-profile users like President Joe Biden (POTUS) and former President Barack Obama on Threads, as well as various creators, like Marques Brownlee, and journalists, like Kara Swisher.
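Under the hood, following an account "anywhere in the fediverse" starts with a WebFinger lookup (RFC 7033): a handle like `user@example.social` is resolved to an ActivityPub actor URL, which the follower's server then subscribes to. Below is a minimal sketch of that discovery step; it is an illustration of the open protocol Flipboard federates over, not Flipboard's own code, and `example.social` is a made-up domain.

```python
import json
from urllib.parse import quote


def webfinger_url(handle):
    """Build the WebFinger lookup URL for a fediverse handle
    like 'alice@example.social' (RFC 7033)."""
    user, _, domain = handle.partition("@")
    resource = quote(f"acct:{user}@{domain}")
    return f"https://{domain}/.well-known/webfinger?resource={resource}"


def actor_url(webfinger_json):
    """Extract the ActivityPub actor URL from a WebFinger response:
    the 'self' link typed 'application/activity+json'."""
    doc = json.loads(webfinger_json)
    for link in doc.get("links", []):
        if (link.get("rel") == "self"
                and link.get("type") == "application/activity+json"):
            return link.get("href")
    return None
```

A server fetches `webfinger_url(handle)` over HTTPS, feeds the JSON body to `actor_url`, and then sends a Follow activity to the actor's inbox.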

AI

Journalists at 'The Atlantic' Demand Assurances Their Jobs Will Be Protected From OpenAI (msn.com) 57

"As media bosses scramble to decide if and how they should partner with AI companies, workers are increasingly concerned that the technology could imperil their jobs or degrade their work..." reports the Washington Post.

The latest example? "Two months after the Atlantic reached a licensing deal with OpenAI, staffers at the storied magazine are demanding the company ensure their jobs and work are protected." (Nearly 60 journalists have now signed a letter demanding the company "stop prioritizing its bottom line and champion the Atlantic's journalism.") The unionized staffers want the Atlantic bosses to include AI protections in the union contract, which the two sides have been negotiating since 2022. "Our editorial leaders say that The Atlantic is a magazine made by humans, for humans," the letter says. "We could not agree more..."

The Atlantic's new deal with OpenAI grants the tech firm access to the magazine's archives to train its AI tools. While the Atlantic in return will have special access to experiment with these AI tools, the magazine says it is not using AI to create journalism. But some journalists and media observers have raised concerns about whether AI tools are accurately and fairly manipulating the human-written text they work with. The Atlantic staffers' letter noted a pattern by ChatGPT of generating gibberish web addresses instead of the links intended to attribute the reporting it has borrowed, as well as sending readers to sites that have summarized Atlantic stories rather than the original work...

Atlantic spokeswoman Anna Bross said company leaders "agree with the general principles" expressed by the union. For that reason, she said, they recently proposed a commitment not to use AI to publish content "without human review and editorial oversight." Representatives from the Atlantic Union bargaining committee told The Washington Post that "the fact remains that the company has flatly refused to commit to not replacing employees with AI."

The article also notes that last month the union representing Lifehacker, Mashable and PCMag journalists "ratified a contract that protects union members from being laid off because AI has impacted their roles and requires the company to discuss any such plans to implement AI tools ahead of time."
China

Chinese AI Stirs Panic At European Geoscience Society (science.org) 32

Paul Voosen reports via Science Magazine: Few things prompt as much anxiety in science and the wider world as the growing use of artificial intelligence (AI) and the rising influence of China. This spring, these two factors created a rift at the European Geosciences Union (EGU), one of the world's largest geoscience societies, that led to the firing of its president. The whole episode has been "a packaging up of fear of AI and fear of China," says Michael Stephenson, former chief geologist of the United Kingdom and one of the founders of Deep-time Digital Earth (DDE), a $70 million effort to connect digital geoscience databases. In 2019, another geoscience society, the International Union of Geological Sciences (IUGS), kicked off DDE, which has been funded almost entirely by the government of China's Jiangsu province.

The dispute pivots on GeoGPT, an AI-powered chatbot that is one of DDE's main efforts. It is being developed by Jian Wang, chief technology officer of e-commerce giant Alibaba. Built on Qwen, Alibaba's own chatbot, and fine-tuned on billions of words from open-source geology studies and data sets, GeoGPT is meant to provide expert answers to questions, summarize documents, and create visualizations. Stephenson tested an early version, asking it about the challenges of using the fossilized teeth of conodonts, an ancient relative of fish, to define the start of the Permian period 299 million years ago. "It was very good at that," he says. As awareness of GeoGPT spread, so did concern. Paul Cleverly, a visiting professor at Robert Gordon University, gained access to an early version and said in a recent editorial in Geoscientist there were "serious issues around a lack of transparency, state censorship, and potential copyright infringement."
Paul Cleverly and GeoScienceWorld CEO Phoebe McMellon raised these concerns in a letter to IUGS, arguing that the chatbot was built using unlicensed literature without proper citations. However, they did not cite specific copyright violations, so DDE President Chengshan Wang, a geologist at the China University of Geosciences, decided not to end the project.

Tensions at EGU escalated when a complaint about GeoGPT's transparency was submitted before the EGU's April meeting, where GeoGPT would be introduced. "It arrived at an EGU whose leadership was already under strain," notes Science. The complaint exacerbated existing leadership issues within EGU, particularly surrounding President Irina Artemieva, who was seen as problematic by some executives due to her affiliations and actions. Science notes that she's "affiliated with Germany's GEOMAR Helmholtz Centre for Ocean Research Kiel but is also paid by the Chinese Academy of Geological Sciences to advise it on its geophysical research."

Artemieva forwarded the complaint via email to the DDE President to get his view, but forgot to delete the name attached to it, leading to a breach of confidentiality. This incident, among other leadership disputes, culminated in her dismissal and the elevation of Peter van der Beek to president. During the DDE session at the EGU meeting, van der Beek's enforcement actions against Chinese scientists and session attendees led to allegations of "harassment and discrimination."

"Seeking to broker a peace deal around GeoGPT," IUGS's president and another former EGU president, John Ludden, organized a workshop and invited all parties to discuss GeoGPT's governance, ongoing negotiations for licensing deals and alternative AI models for GeoGPT's use.
AI

Journalists 'Deeply Troubled' By OpenAI's Content Deals With Vox, The Atlantic (arstechnica.com) 100

Benj Edwards and Ashley Belanger report via Ars Technica: On Wednesday, Axios broke the news that OpenAI had signed deals with The Atlantic and Vox Media that will allow the ChatGPT maker to license their editorial content to further train its language models. But some of the publications' writers -- and the unions that represent them -- were surprised by the announcements and aren't happy about it. Already, two unions have released statements expressing "alarm" and "concern." "The unionized members of The Atlantic Editorial and Business and Technology units are deeply troubled by the opaque agreement The Atlantic has made with OpenAI," reads a statement from the Atlantic union. "And especially by management's complete lack of transparency about what the agreement entails and how it will affect our work."

The Vox Union -- which represents The Verge, SB Nation, and Vulture, among other publications -- reacted in similar fashion, writing in a statement, "Today, members of the Vox Media Union ... were informed without warning that Vox Media entered into a 'strategic content and product partnership' with OpenAI. As both journalists and workers, we have serious concerns about this partnership, which we believe could adversely impact members of our union, not to mention the well-documented ethical and environmental concerns surrounding the use of generative AI." [...] News of the deals took both journalists and unions by surprise. On X, Vox reporter Kelsey Piper, who recently penned an expose about OpenAI's restrictive non-disclosure agreements that prompted a change in policy from the company, wrote, "I'm very frustrated they announced this without consulting their writers, but I have very strong assurances in writing from our editor in chief that they want more coverage like the last two weeks and will never interfere in it. If that's false I'll quit."

Journalists also reacted to news of the deals through the publications themselves. On Wednesday, The Atlantic Senior Editor Damon Beres wrote a piece titled "A Devil's Bargain With OpenAI," in which he expressed skepticism about the partnership, likening it to making a deal with the devil that may backfire. He highlighted concerns about AI's use of copyrighted material without permission and its potential to spread disinformation at a time when publications have seen a recent string of layoffs. He drew parallels to the pursuit of audiences on social media leading to clickbait and SEO tactics that degraded media quality. While acknowledging the financial benefits and potential reach, Beres cautioned against relying on inaccurate, opaque AI models and questioned the implications of journalism companies being complicit in potentially destroying the internet as we know it, even as they try to be part of the solution by partnering with OpenAI.

Similarly, over at Vox, Editorial Director Bryan Walsh penned a piece titled, "This article is OpenAI training data," in which he expresses apprehension about the licensing deal, drawing parallels between the relentless pursuit of data by AI companies and the classic AI thought experiment of Bostrom's "paperclip maximizer," cautioning that the single-minded focus on market share and profits could ultimately destroy the ecosystem AI companies rely on for training data. He worries that the growth of AI chatbots and generative AI search products might lead to a significant decline in search engine traffic to publishers, potentially threatening the livelihoods of content creators and the richness of the Internet itself.

Medicine

'Russia Might Have Caused Havana Syndrome' (washingtonpost.com) 188

An anonymous reader quotes an opinion piece from the Washington Post, published by the Editorial Board: A just-published investigation by Russian, American and German journalists has unearthed startling new information about the so-called Havana syndrome, or "Anomalous Health Incidents," as the government calls the unexplained bouts of painful disorientation that U.S. diplomats and intelligence officers have suffered in recent years. The new information suggests but does not prove that Russia's military intelligence agency is responsible. Earlier, agencies in the U.S. intelligence community had concluded that "it is very unlikely a foreign adversary is responsible." They need to look again. [...]

[T]he new investigation by the Insider, a Russian investigative news outlet, in collaboration with CBS's "60 Minutes" and Germany's Der Spiegel, paints a different picture. It identifies the possible culprit as Unit 29155, a "notorious assassination and sabotage squad" of the GRU, Moscow's military intelligence service. Senior members of the unit received "awards and political promotions for work related to the development of 'non-lethal acoustic weapons'" -- a term used in the Russian military-scientific literature to describe both sound- and radiofrequency-based directed energy devices. The investigation found documentary evidence that Unit 29155 "has been experimenting with exactly the kind of weaponized technology" experts suggest is a plausible cause. Moreover, the Insider reported, geolocation data shows that operators attached to Unit 29155, traveling undercover, were present in places where Havana syndrome struck, just before the incidents took place.

Even more concerning, the investigation found that a commonality among the Americans targeted was their work history on Russia issues. This included CIA officers who were helping Ukraine build up its intelligence capabilities in the years before Russia's full-scale invasion in 2022. One veteran of the CIA Kyiv station was named the new chief of station in Vietnam and was hit there. A second veteran of the CIA in Ukraine was hit in his apartment in Tashkent, Uzbekistan. Both these intelligence officers had to be medevaced and were treated at Walter Reed National Military Medical Center. The wife of a third CIA officer who had served in Kyiv was hit in London. "Of all the cases" examined by the news organizations, they said, "the most well-documented involve U.S. intelligence and diplomatic personnel with subject matter expertise in Russia or operational experience in countries such as Georgia and Ukraine," both of which were the scene of popular pro-Western uprisings in the past two decades. The news organizations point out that Russian President Vladimir Putin has often blamed these "color revolutions" on the CIA and the State Department. They conclude, "Putin would have every interest in neutralizing scores of U.S. intelligence officers he deemed responsible for his loss of the former satellites."
The Editorial Board is advocating for a thorough and aggressive investigation by the U.S. intelligence community that "takes into account all aspects of the incidents."

"If the incidents are a deliberate attack, the perpetrator must be identified and held to account. Along with sending a message to those who might harm American personnel, the United States needs to show all those who might join the diplomatic and intelligence services that the government will protect them abroad and at home from foreign adversaries, no matter what."
AI

BBC Will Stop Using AI For 'Doctor Who' Promotion After Receiving Complaints 79

The BBC says it has stopped using AI to promote Doctor Who after receiving complaints from viewers. Deadline reports: The BBC's marketing teams used the tech "as part of a small trial" to help draft text for two promotional emails and mobile notifications intended to highlight Doctor Who programming, according to its complaints website. But the corporation received complaints over reports that it was using generative AI, it added. "We followed all BBC editorial compliance processes and the final text was verified and signed-off by a member of the marketing team before it was sent," the BBC said. "We have no plans to do this again to promote Doctor Who."

The decision to stop promoting via generative AI represents a U-turn for the BBC, which said when announcing the trial that "generative AI offers a great opportunity to speed up making the extra assets to get more experiments live for more content that we are trying to promote." At the time, the BBC didn't mention that this would be the only time it used the technology for Doctor Who promotion. Doctor Who will launch in May on the BBC and, for the first time, Disney+. A new trailer was unveiled last week.
AI

AI-Generated Science 32

Published scientific papers include language that appears to have been generated by AI tools like ChatGPT, showing how pervasive the technology has become, and highlighting longstanding issues with some peer-reviewed journals. From a report: Searching for the phrase "As of my last knowledge update" on Google Scholar, a free search tool that indexes articles published in academic journals, returns 115 results. The phrase is often used by OpenAI's ChatGPT to indicate the cutoff date of the data behind the answer it gives users, and the specific months and years found in these academic papers correspond to previous ChatGPT "knowledge updates."

"As of my last knowledge update in September 2021, there is no widely accepted scientific correlation between quantum entanglement and longitudinal scalar waves," reads a paper titled "Quantum Entanglement: Examining its Nature and Implications" published in the "Journal of Material Sciences & Manfacturing [sic] Research," a publication that claims it's peer-reviewed. Over the weekend, a tweet showing the same AI-generated phrase appearing in several scientific papers went viral.

Most of the scientific papers I looked at that included this phrase are small, not well known, and appear to be "paper mills": journals with low editorial standards that will publish almost anything quickly. One publication where I found the AI-generated phrase, the Open Access Research Journal of Engineering and Technology, advertises "low publication charges" and an "e-certificate" of publication, and currently has a call for papers promising acceptance within 48 hours and publication within four days.
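The detection technique described above, searching published text for telltale ChatGPT boilerplate, can be approximated locally with a few lines of Python. This is a minimal sketch; the phrase list and function name are illustrative, not taken from the report:

```python
import re

# Boilerplate phrases ChatGPT commonly emits; finding one verbatim in a
# published paper is a strong hint the text was pasted from the model unedited.
TELLTALE_PHRASES = [
    r"as of my last knowledge update",
    r"as an ai language model",
    r"regenerate response",
]
PATTERN = re.compile("|".join(TELLTALE_PHRASES), re.IGNORECASE)

def flag_ai_boilerplate(text: str) -> list[str]:
    """Return every telltale phrase found in `text` (case-insensitive)."""
    return [m.group(0) for m in PATTERN.finditer(text)]

sample = ("As of my last knowledge update in September 2021, there is no "
          "widely accepted scientific correlation between quantum entanglement "
          "and longitudinal scalar waves.")
print(flag_ai_boilerplate(sample))  # ['As of my last knowledge update']
```

A scanner like this only catches the laziest cases, where model output was pasted verbatim, which is exactly what the Google Scholar search in the report exploits.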
Businesses

Outdoor Voices To Close All Stores This Week (nytimes.com) 54

Outdoor Voices, an athletic apparel company, is closing all its stores on Sunday, The New York Times reported this week, citing four employees at four different stores. From the report: In an internal Slack message reviewed by The New York Times, some employees were notified on Wednesday that "Outdoor Voices is embarking on a new chapter as we transition to an exclusively online business." Products in stores are going to be discounted 50 percent, according to the Slack message. The news came as a surprise, two of the employees said, adding that they were not offered severance.

Outdoor Voices, which lists 16 retail locations on its website, did not immediately respond to a request for comment. Founded in 2014 by Ty Haney, the brand became popular for its muted tones and highly Instagrammable aesthetics. Think matching crop tops and leggings in pale shades of earthy tones. Its hashtag and company mantra, #DoingThings, became popular on social media, where brand loyalists would regularly share images of themselves participating in athletic activities like running, hiking or spinning. The company often hosted events, like group exercise classes, and even built an editorial platform called The Recreationalist. Many Outdoor Voices customers weren't just shoppers; they were devotees. The company was a chic athleisure brand perfectly positioned to attract millennials, but it was also selling a lifestyle, one that helped the brand raise millions in funding.

AI

AI-Generated Articles Prompt Wikipedia To Downgrade CNET's Reliability Rating (arstechnica.com) 54

Wikipedia has downgraded tech website CNET's reliability rating following extensive discussions among its editors regarding the impact of AI-generated content on the site's trustworthiness. "The decision reflects concerns over the reliability of articles found on the tech news outlet after it began publishing AI-generated stories in 2022," adds Ars Technica. Futurism first reported the news. From the report: Wikipedia maintains a page called "Reliable sources/Perennial sources" that includes a chart featuring news publications and their reliability ratings as viewed from Wikipedia's perspective. Shortly after the CNET news broke in January 2023, Wikipedia editors began a discussion thread on the Reliable Sources project page about the publication. "CNET, usually regarded as an ordinary tech RS [reliable source], has started experimentally running AI-generated articles, which are riddled with errors," wrote a Wikipedia editor named David Gerard. "So far the experiment is not going down well, as it shouldn't. I haven't found any yet, but any of these articles that make it into a Wikipedia article need to be removed." After other editors agreed in the discussion, they began the process of downgrading CNET's reliability rating.

As of this writing, Wikipedia's Perennial Sources list currently features three entries for CNET broken into three time periods: (1) before October 2020, when Wikipedia considered CNET a "generally reliable" source; (2) between October 2020 and October 2022, when Wikipedia notes that the site was acquired by Red Ventures in October 2020, "leading to a deterioration in editorial standards" and saying there is no consensus about reliability; and (3) between November 2022 and present, when Wikipedia considers CNET "generally unreliable" because the site began using an AI tool "to rapidly generate articles riddled with factual inaccuracies and affiliate links."

Futurism reports that the issue with CNET's AI-generated content also sparked a broader debate within the Wikipedia community about the reliability of sources owned by Red Ventures, such as Bankrate and CreditCards.com. Those sites published AI-generated content around the same period of time as CNET. The editors also criticized Red Ventures for not being forthcoming about where and how AI was being implemented, further eroding trust in the company's publications. This lack of transparency was a key factor in the decision to downgrade CNET's reliability rating.
A CNET spokesperson said in a statement: "CNET is the world's largest provider of unbiased tech-focused news and advice. We have been trusted for nearly 30 years because of our rigorous editorial and product review standards. It is important to clarify that CNET is not actively using AI to create new content. While we have no specific plans to restart, any future initiatives would follow our public AI policy."
Social Networks

Supreme Court Hears Landmark Cases That Could Upend What We See on Social Media (cnn.com) 282

The US Supreme Court is hearing oral arguments Monday in two cases that could dramatically reshape social media, weighing whether states such as Texas and Florida should have the power to control what posts platforms can remove from their services. From a report: The high-stakes battle gives the nation's highest court an enormous say in how millions of Americans get their news and information, as well as whether sites such as Facebook, Instagram, YouTube and TikTok should be able to make their own decisions about how to moderate spam, hate speech and election misinformation. At issue are laws passed by the two states that prohibit online platforms from removing or demoting user content that expresses viewpoints -- legislation both states say is necessary to prevent censorship of conservative users.

More than a dozen Republican attorneys general have argued to the court that social media should be treated like traditional utilities such as the landline telephone network. The tech industry, meanwhile, argues that social media companies have First Amendment rights to make editorial decisions about what to show. That makes them more akin to newspapers or cable companies, opponents of the states say. The case could lead to a significant rethinking of First Amendment principles, according to legal experts. A ruling in favor of the states could weaken or reverse decades of precedent against "compelled speech," which protects private individuals from government speech mandates, and have far-reaching consequences beyond social media. A defeat for social media companies seems unlikely, but it would instantly transform their business models, according to Blair Levin, an industry analyst at the market research firm New Street Research.

AI

Scientific Journal Publishes AI-Generated Rat With Gigantic Penis (vice.com) 72

Jordan Pearson reports via Motherboard: A peer-reviewed science journal published a paper this week filled with nonsensical AI-generated images, which featured garbled text and a wildly incorrect diagram of a rat penis. The episode is the latest example of how generative AI is making its way into academia with concerning effects. The paper, titled "Cellular functions of spermatogonial stem cells in relation to JAK/STAT signaling pathway," was published on Wednesday in the open access journal Frontiers in Cell and Developmental Biology by researchers from Hong Hui Hospital and Jiaotong University in China. The paper itself is unlikely to be interesting to most people without a specific interest in the stem cells of small mammals, but the figures published with the article are another story entirely. [...]

It's unclear how this all got through the editing, peer review, and publishing process. Motherboard contacted the paper's U.S.-based reviewer, Jingbo Dai of Northwestern University, who said that it was not his responsibility to vet the obviously incorrect images. (The second reviewer is based in India.) "As a biomedical researcher, I only review the paper based on its scientific aspects. For the AI-generated figures, since the author cited Midjourney, it's the publisher's responsibility to make the decision," Dai said. "You should contact Frontiers about their policy of AI-generated figures." Frontiers' policies for authors state that generative AI is allowed, but that it must be disclosed -- which the paper's authors did -- and the outputs must be checked for factual accuracy. "Specifically, the author is responsible for checking the factual accuracy of any content created by the generative AI technology," Frontiers' policy states. "This includes, but is not limited to, any quotes, citations or references. Figures produced by or edited using a generative AI technology must be checked to ensure they accurately reflect the data presented in the manuscript."

On Thursday afternoon, after the article and its AI-generated figures circulated on social media, Frontiers appended a notice to the paper saying that it had corrected the article and that a new version would appear later. It did not specify what exactly was corrected.
UPDATE: Frontiers retracted the article and issued the following statement: "Following publication, concerns were raised regarding the nature of its AI-generated figures. The article does not meet the standards of editorial and scientific rigor for Frontiers in Cell and Developmental Biology; therefore, the article has been retracted. This retraction was approved by the Chief Executive Editor of Frontiers. Frontiers would like to thank the concerned readers who contacted us regarding the published article."
Science

Firms Churning Out Fake Papers Are Now Bribing Journal Editors (science.org) 32

Nicholas Wise is a fluid dynamics researcher who moonlights as a scientific fraud buster, reports Science magazine. And last June he "was digging around on shady Facebook groups when he came across something he had never seen before." Wise was all too familiar with offers to sell or buy author slots and reviews on scientific papers — the signs of a busy paper mill. Exploiting the growing pressure on scientists worldwide to amass publications even if they lack resources to undertake quality research, these furtive intermediaries by some accounts pump out tens or even hundreds of thousands of articles every year. Many contain made-up data; others are plagiarized or of low quality. Regardless, authors pay to have their names on them, and the mills can make tidy profits.

But what Wise was seeing this time was new. Rather than targeting potential authors and reviewers, someone who called himself Jack Ben, of a firm whose Chinese name translates to Olive Academic, was going for journal editors — offering large sums of cash to these gatekeepers in return for accepting papers for publication. "Sure you will make money from us," Ben promised prospective collaborators in a document linked from the Facebook posts, along with screenshots showing transfers of up to $20,000 or more. In several cases, the recipient's name could be made out through sloppy blurring, as could the titles of two papers. More than 50 journal editors had already signed on, he wrote. There was even an online form for interested editors to fill out...

Publishers and journals, recognizing the threat, have beefed up their research integrity teams and retracted papers, sometimes by the hundreds. They are investing in ways to better spot third-party involvement, such as screening tools meant to flag bogus papers. So cash-rich paper mills have evidently adopted a new tactic: bribing editors and planting their own agents on editorial boards to ensure publication of their manuscripts. An investigation by Science and Retraction Watch, in partnership with Wise and other industry experts, identified several paper mills and more than 30 editors of reputable journals who appear to be involved in this type of activity. Many were guest editors of special issues, which have been flagged in the past as particularly vulnerable to abuse because they are edited separately from the regular journal. But several were regular editors or members of journal editorial boards. And this is likely just the tip of the iceberg.

A spokesperson for one journal publisher tells Science that its editors receive bribe offers every week.

Thanks to long-time Slashdot reader schwit1 for sharing the article.
