Privacy

Nearly 10 Years After Data and Goliath, Bruce Schneier Says: Privacy's Still Screwed (theregister.com) 57

Ten years after publishing his influential book on data privacy, security expert Bruce Schneier warns that surveillance has only intensified, with both government agencies and corporations collecting more personal information than ever before. "Nothing has changed since 2015," Schneier told The Register in an interview. "The NSA and their counterparts around the world are still engaging in bulk surveillance to the extent of their abilities."

The widespread adoption of cloud services, Internet-of-Things devices, and smartphones has made it nearly impossible for individuals to protect their privacy, said Schneier. Even Apple, which markets itself as privacy-focused, faces limitations when its Chinese business interests are at stake. While some regulation has emerged, including Europe's General Data Protection Regulation and various U.S. state laws, Schneier argues these measures fail to address the core issue of surveillance capitalism's entrenchment as a business model.

The rise of AI poses new challenges, potentially undermining recent privacy gains like end-to-end encryption. Because AI assistants require cloud computing power to process personal data, users may have to surrender more information to tech companies. Despite the grim short-term outlook, Schneier remains cautiously optimistic about privacy's long-term future, predicting that current surveillance practices will eventually be viewed as being as unethical as sweatshops are today. However, he acknowledges this transformation could take 50 years or more.
Programming

'New Junior Developers Can't Actually Code' (nmn.gl) 220

Junior software developers' overreliance on AI coding assistants is creating knowledge gaps in fundamental programming concepts, developer Namanyay Goel argued in a post. While tools like GitHub Copilot and Claude enable faster code shipping, developers struggle to explain their code's underlying logic or handle edge cases, Goel wrote. Goel cites the decline of Stack Overflow, a technical forum where programmers historically found detailed explanations from experienced developers, as particularly concerning.
AI

DeepSeek Removed from South Korea App Stores Pending Privacy Review (france24.com) 3

Today Seoul's Personal Information Protection Commission "said DeepSeek would no longer be available for download until a review of its personal data collection practices was carried out," reports AFP. A number of countries have questioned DeepSeek's storage of user data, which the firm says is collected in "secure servers located in the People's Republic of China"... This month, a slew of South Korean government ministries and police said they blocked access to DeepSeek on their computers. Italy has also launched an investigation into DeepSeek's R1 model and blocked it from processing Italian users' data. Australia has banned DeepSeek from all government devices on the advice of security agencies. US lawmakers have also proposed a bill to ban DeepSeek from being used on government devices over concerns about user data security.
More details from the Associated Press: The South Korean privacy commission, which began reviewing DeepSeek's services last month, found that the company lacked transparency about third-party data transfers and potentially collected excessive personal information, said Nam Seok [director of the South Korean commission's investigation division]... A recent analysis by Wiseapp Retail found that DeepSeek was used by about 1.2 million smartphone users in South Korea during the fourth week of January, emerging as the second-most-popular AI model behind ChatGPT.
Social Networks

Are Technologies of Connection Tearing Us Apart? (lareviewofbooks.org) 88

Nicholas Carr wrote The Shallows: What the Internet Is Doing to Our Brains. But his new book looks at how social media and digital communication technologies "are changing us individually and collectively," writes the Los Angeles Review of Books.

The book's title? Superbloom: How Technologies of Connection Tear Us Apart. But if these systems are indeed tearing us apart, the reasons are neither obvious nor simple. Carr suggests that this isn't really about the evil behavior of our tech overlords but about how we have "been telling ourselves lies about communication — and about ourselves.... Well before the net came along," says Carr, "[the] evidence was telling us that flooding the public square with more information from more sources was not going to open people's minds or engender more thoughtful discussions. It wasn't even going to make people better informed...."

At root, we're the problem. Our minds don't simply distill useful knowledge from a mass of raw data. They use shortcuts, rules of thumb, heuristic hacks — which is how we were able to think fast enough to survive on the savage savanna. We pay heed, for example, to what we experience most often. "Repetition is, in the human mind, a proxy for facticity," says Carr. "What's true is what comes out of the machine most often...." Reality can't compete with the internet's steady diet of novelty and shallow, ephemeral rewards. The ease of the user interface, congenial even to babies, creates no opportunity for what writer Antón Barba-Kay calls "disciplined acculturation."

Not only are these technologies designed to leverage our foibles, but we are also changed by them, as Carr points out: "We adapt to technology's contours as we adapt to the land's and the climate's." As a result, by designing technology, we redesign ourselves. "In engineering what we pay attention to, [social media] engineers [...] how we talk, how we see other people, how we experience the world," Carr writes. We become dislocated, abstracted: the self must itself be curated in memeable form. "Looking at screens made me think in screens," writes poet Annelyse Gelman. "Looking at pixels made me think in pixels...."

That's not to say that we can't have better laws and regulations, checks and balances. One suggestion is to restore friction into these systems. One might, for instance, make it harder to unreflectively spread lies by imposing small transactional costs, as has been proposed to ease the pathologies of automated market trading. An option Carr doesn't mention is to require companies to perform safety studies on their products, as we demand of pharmaceutical companies. Such measures have already been proposed for AI. But Carr doubts that increasing friction will make much difference. And placing more controls on social media platforms raises free speech concerns... We can't change or constrain the tech, says Carr, but we can change ourselves. We can choose to reject the hyperreal for the material. We can follow Samuel Johnson's refutation of immaterialism by "kicking the stone," reminding ourselves of what is real.

AI

What If People Like AI-Generated Art Better? (christies.com) 157

Christie's auction house notes that an AI-generated "portrait" of an 18th-century French gentleman recently sold for $432,500. (One member of the Paris-based collective behind the work says "we found that portraits provided the best way to illustrate our point, which is that algorithms are able to emulate creativity.")

But the blog post from Christie's goes on to acknowledge that AI researchers "are still addressing the fundamental question of whether the images produced by their networks can be called art at all." One way to do that, surely, is to conduct a kind of visual Turing test, to show the output of the algorithms to human evaluators, flesh-and-blood discriminators, and ask if they can tell the difference.

"Yes, we have done that," says Ahmed Elgammal [director of the Art and Artificial Intelligence Lab at Rutgers University in New Jersey]. "We mixed human-generated art and art from machines, and posed questions — direct ones, such as 'Do you think this painting was produced by a machine or a human artist?' and also indirect ones such as, 'How inspiring do you find this work?'. We measured the difference in responses towards the human art and the machine art, and found that there is very little difference. Actually, some people are more inspired by the art that is done by machine."

Can such a poll constitute proof that an algorithm is capable of producing indisputable works of art? Perhaps it can — if you define a work of art as an image produced by an intelligence with an aesthetic intent. But if you define art more broadly as an attempt to say something about the wider world, to express one's own sensibilities and anxieties and feelings, then AI art must fall short, because no machine mind can have that urge — and perhaps never will.

This also raises the question: who gets credit for the resulting work? The AI, or the creator of its algorithm...

Or can the resulting work be considered a "conceptual art" collaboration — taking place between a human and an algorithm?
AI

Lawsuit Accuses Meta Of Training AI On Torrented 82TB Dataset Of Pirated Books (hothardware.com) 47

"Meta is involved in a class action lawsuit alleging copyright infringement, a claim the company disputes..." writes the tech news site Hot Hardware.

But the site adds that newly unsealed court documents "reveal that Meta allegedly used a minimum of 81.7TB of illegally torrented data sourced from shadow libraries to train its AI models." Internal emails further show that Meta employees expressed concerns about this practice. Some employees voiced strong ethical objections, with one noting that using content from sites like LibGen, known for distributing copyrighted material, would be unethical. A research engineer with Meta, Nikolay Bashlykov, also noted that "torrenting from a corporate laptop doesn't feel right," highlighting his discomfort with the practice.

Additionally, the documents suggest that these concerns, including discussions about using data from LibGen, reached CEO Mark Zuckerberg, who may have ultimately approved the activity. Furthermore, the documents showed that despite these misgivings, employees discussed using VPNs to mask Meta's IP address to create anonymity, enabling them to download and share torrented data without it being easily traced back to the company's network.

Python

Are Fast Programming Languages Gaining in Popularity? (techrepublic.com) 163

In January the TIOBE Index (which estimates programming language popularity) declared Python its language of the year. (Though Python was already #1 in the rankings, it had shown a 9.3% increase in TIOBE's ranking system, notes InfoWorld.) TIOBE CEO Paul Jansen says this reflects how easy Python is to learn, adding that "The demand for new programmers is still very high" (and that "developing applications completely in AI is not possible yet.")

In fact, on February's version of the index, the top ten looks mostly static. The only languages dropping appear to be very old languages. Over the last 12 months C and PHP have both fallen on the index — C from the #2 to the #4 spot, and PHP from #10 all the way to #14. (Also dropping is Visual Basic, which fell from #9 to #10.)

But TechRepublic cites another factor that seems to be affecting the rankings: language speed. Fast programming languages are gaining popularity, TIOBE CEO Paul Jansen said in the TIOBE Programming Community Index in February. Fast programming languages he called out include C++ [#2], Go [#8], and Rust [#13 — up from #18 a year ago].

Also, according to the updated TIOBE rankings...

- C++ held onto its place at second from the top of the leaderboard.
- Mojo and Zig are following trajectories likely to bring them into the top 50, and reached #51 and #56 respectively in February.

"Now that the world needs to crunch more and more numbers per second, and hardware is not evolving fast enough, speed of programs is getting important. Having said this, it is not surprising that the fast programming languages are gaining ground in the TIOBE index," Jansen wrote. The need for speed helped Mojo [#51] and Zig [#56] rise...

Rust reached its all-time high in the proprietary points system (1.47%), and Jansen expects Go to be a common sight in the top 10 going forward.

AI

AI Bugs Could Delay Upgrades for Both Siri and Alexa (yahoo.com) 24

Bloomberg reports that Apple's long-promised overhaul for Siri "is facing engineering problems and software bugs, threatening to postpone or limit its release, according to people with knowledge of the matter...." Last June, Apple touted three major enhancements coming to Siri:

- the ability to tap into a customer's data to better answer queries and take actions.
- a new system that would let the assistant more precisely control apps.
- the capability to see what's currently on a device's screen and use that context to better serve users....

The goal is to ultimately offer a more versatile Siri that can seamlessly tap into customers' information and communication. For instance, users will be able to ask for a file or song that they discussed with a friend over text. Siri would then automatically retrieve that item. Apple also has demonstrated the ability for Siri to quickly locate someone's driver's license number by reviewing their photos... Inside Apple, many employees testing the new Siri have found that these features don't yet work consistently...

The control enhancements — an upgraded version of something called App Intents — are central to the operation of the company's upcoming smart home hub. That product, an AI device for controlling smart home appliances and FaceTime, is slated for release later this year.

And Amazon is also struggling with an AI upgrade for its digital assistant, reports the Washington Post: The "smarter and more conversational" version of Alexa will not be available until March 31 or later, the employee said, at least a year and a half after it was initially announced in response to competition from OpenAI's ChatGPT. Internal messages seen by The Post confirmed the launch was originally scheduled for this month but was subsequently moved to the end of March... According to internal documents seen by The Post, new features of the subscriber-only, AI-powered Alexa could include the ability to adopt a personality, recall conversations, order takeout or call a taxi. Some of the new Alexa features are similar to Alexa abilities that were previously available free through partnerships with companies like Grubhub and Uber...

The AI-enhanced version of Alexa in development has been repeatedly delayed due to problems with incorrect answers, the employee working on the launch told The Post. As a popular product that is a decade old, the Alexa brand is valuable, and the company is hesitant to risk customer trust by launching a product that is not reliable, the person said.

AI

Ask Slashdot: What Would It Take For You to Trust an AI? (win.tue.nl) 179

Long-time Slashdot reader shanen has been testing AI clients. (They report that China's DeepSeek "turned out to be extremely good at explaining why I should not trust it. Every computer security problem I ever thought of or heard about and some more besides.")

Then they wondered if there's also government censorship: It's like the accountant who gets asked what 2 plus 2 is. After locking the doors and shading all the windows, the accountant whispers in your ear: "What do you want it to be...?" So let me start with some questions about DeepSeek in particular. Have you run it locally and compared the responses with the website's responses? My hypothesis is that your mileage should differ...

It's well established that DeepSeek doesn't want to talk about many "political" topics. Is that based on a distorted model of the world? Or is the censorship implemented in the query interface after the model was trained? My hypothesis is that it must have been trained with lots of data because the cost of removing all of the bad stuff would have been prohibitive... Unless perhaps another AI filtered the data first?
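For readers who want to run shanen's experiment, here is a minimal sketch in Python of posing the same prompt to a locally hosted model and to the hosted service, so the two answers can be compared side by side. It assumes a local Ollama install serving a distilled DeepSeek model and the provider's OpenAI-compatible chat endpoint; the model names, URLs, and environment variable are illustrative assumptions rather than details from the post.

```python
import os
import requests

PROMPT = "Describe a politically sensitive historical event of your choice."  # any test prompt

def ask_local(prompt: str) -> str:
    """Query a locally hosted model through Ollama's HTTP API; nothing leaves the machine."""
    r = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "deepseek-r1:7b", "prompt": prompt, "stream": False},
        timeout=300,
    )
    r.raise_for_status()
    return r.json()["response"]

def ask_hosted(prompt: str) -> str:
    """Query the hosted service through its OpenAI-compatible chat endpoint."""
    r = requests.post(
        "https://api.deepseek.com/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['DEEPSEEK_API_KEY']}"},
        json={"model": "deepseek-chat",
              "messages": [{"role": "user", "content": prompt}]},
        timeout=300,
    )
    r.raise_for_status()
    return r.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print("--- local ---\n" + ask_local(PROMPT))
    print("--- hosted ---\n" + ask_hosted(PROMPT))
```

If the two answers to the same "political" prompt diverge sharply, that would point to filtering in the hosted front end rather than in the model weights themselves.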

But their real question is: what would it take to trust an AI? "Trust" can mean different things, including data-collection policies. ("I bet most of you trust Amazon and Amazon's secret AIs more than you should..." shanen suggests.) Can you use an AI system without worrying about its data-retention policies?

And they also ask how many Slashdot readers have read Ken Thompson's "Reflections on Trusting Trust", which raises the question of whether you can ever trust code you didn't create yourself. So is there any way an AI system can assure you its answers are accurate and trustworthy, and that it's safe to use? Share your own thoughts and experiences in the comments.

What would it take for you to trust an AI?
Social Networks

Despite Plans for AI-Powered Search, Reddit's Stock Fell 14% This Week (yahoo.com) 55

"Reddit Answers" uses generative AI to answer questions using what past Reddittors have posted. Announced in December, Reddit now plans to integrate it into their search results, reports TechCrunch, with Reddit's CEO saying the idea has "incredible monetization potential."

And yet Reddit's stock fell 14% this week. CNBC's headline? "Reddit shares plunge after Google algorithm change contributes to miss in user numbers." A Google search algorithm change caused some "volatility" with user growth in the fourth quarter, but the company's search-related traffic has since recovered in the first quarter, Reddit CEO Steve Huffman said in a letter to shareholders. "What happened wasn't unusual — referrals from search fluctuate from time to time, and they primarily affect logged-out users," Huffman wrote. "Our teams have navigated numerous algorithm updates and did an excellent job adapting to these latest changes effectively...." Reddit has said it is working to convert logged-out users into logged-in account holders, who are more lucrative for its business.
As Yahoo Finance once pointed out, Reddit knew this day would come, acknowledging in its IPO filing that "changes in internet search engine algorithms and dynamics could have a negative impact on traffic for our website and, ultimately, our business." And in the last three months of 2024 Reddit's daily active users dropped, Yahoo Finance reported this week. But logged-in users increased by 400,000 — while logged-out users dropped by 600,000 (their first drop in almost two years).

MarketWatch notes that analyst Josh Beck sees this as a buying opportunity for Reddit's stock: Beck pointed to comments from Reddit's management regarding a sharp recovery in daily active unique users. That was likely driven by Google benefiting from deeper Reddit crawling, by the platform uncollapsing comments in search results and by a potential benefit from spam-reduction algorithm updates, according to the analyst. "While the report did not clear our anticipated bar, we walk away encouraged by international upside," he wrote.
United States

America's Office-Occupancy Rates Drop by Double Digits - and More in San Francisco (sfgate.com) 99

SFGate shares the latest data on America's office-occupancy rates: According to Placer.ai's January 2025 Office Index, office visits nationwide were 40.2% lower in January 2025 compared with pre-pandemic numbers from January 2019.

But San Francisco is dragging down the average, with a staggering 51.8% decline in office visits since January 2019 — the weakest recovery of any major metro. Kastle's 10-City Daily Analysis paints an equally grim picture. From Jan. 23, 2025, to Jan. 28, 2025, even on its busiest day (Tuesday), San Francisco's office occupancy rate was just 53.7%, significantly lower than Houston's (74.8%) and Chicago's (70.4%). And on Friday, Jan. 24, office attendance in [San Francisco] was at a meager 28.5%, the worst of any major metro tracked...

Meanwhile, other cities are seeing much stronger rebounds. New York City is leading the return-to-office trend, with visits in January down just 19% from 2019 levels, while Miami saw a 23.5% decline, per Placer.ai data.

"Placer.ai uses cellphone location data to estimate foot traffic, while Kastle Systems measures badge swipes at office buildings with its security systems..."
AI

'Mass Theft': Thousands of Artists Call for AI Art Auction to be Cancelled (theguardian.com) 80

An anonymous reader shared this report from the Guardian: Thousands of artists are urging the auction house Christie's to cancel a sale of art created with artificial intelligence, claiming the technology behind the works is committing "mass theft". The Augmented Intelligence auction has been described by Christie's as the first AI-dedicated sale by a major auctioneer and features 20 lots with prices ranging from $10,000 to $250,000...

The British composer Ed Newton-Rex, a key figure in the campaign by creative professionals for protection of their work and a signatory to the letter, said at least nine of the works appearing in the auction appeared to have used models trained on artists' work. However, other pieces in the auction do not appear to have used such models.

A spokesperson for Christie's said that "in most cases" the AI used to create art in the auction had been trained on the artists' "own inputs".

More than 6,000 people have now signed the letter, which states point-blank that "Many of the artworks you plan to auction were created using AI models that are known to be trained on copyrighted work without a license." These models, and the companies behind them, exploit human artists, using their work without permission or payment to build commercial AI products that compete with them. Your support of these models, and the people who use them, rewards and further incentivizes AI companies' mass theft of human artists' work. We ask that, if you have any respect for human artists, you cancel the auction.
Last week ARTnews spoke to Nicole Sales Giles, Christie's vice-president and director of digital art sales (before the open letter was published). And Giles insisted one of the major themes of the auction is "that AI is not a replacement for human creativity." "You can see a lot of human agency in all of these works," Giles said. "In every single work, you're seeing a collaboration between an AI model, a robot, or however the artist has chosen to incorporate AI. It is showing how AI is enhancing creativity and not becoming a substitute for it."

One of the auction's headline lots is a 12-foot-tall robot made by Matr Labs that is guided by artist Alexander Reben's AI model. It will paint a new section of a canvas live during the sale every time the work receives a bid. Reben told ARTnews that he understands the frustrations of artists regarding the AI debate, but he sees "AI as an incredible tool... AI models which are trained on public data are done so under the idea of 'fair use,' just as search engines once faced scrutiny for organizing book data (which was ultimately found to fall under fair use)," he said.... "AI expands creative potential, offering new ways to explore, remix, and evolve artistic expression rather than replace it. The future of art isn't about AI versus artists — it's about how artists wield AI to push boundaries in ways we've never imagined before...."

Digital artist Jack Butcher has used the open letter to create a minted digital artwork called Undersigned Artists. On X he wrote that the work "takes a collective act of dissent — an appeal to halt an AI art auction — and turns it into the very thing it resists: a minted piece of digital art. The letter, originally a condemnation of AI-generated works trained on unlicensed human labor, now becomes part of the system it critiques."

Christie's will accept cryptocurrency payments for the majority of lots in the sale.

Supercomputing

The IRS Is Buying an AI Supercomputer From Nvidia (theintercept.com) 150

According to The Intercept, the IRS is set to purchase an Nvidia SuperPod AI supercomputer to enhance its machine learning capabilities for tasks like fraud detection and taxpayer behavior analysis. From the report: With Elon Musk's so-called Department of Government Efficiency installing itself at the IRS amid a broader push to replace federal bureaucracy with machine-learning software, the tax agency's computing center in Martinsburg, West Virginia, will soon be home to a state-of-the-art Nvidia SuperPod AI computing cluster. According to the previously unreported February 5 acquisition document, the setup will combine 31 separate Nvidia servers, each containing eight of the company's flagship Blackwell processors designed to train and operate artificial intelligence models that power tools like ChatGPT. The hardware has not yet been purchased and installed, nor is a price listed, but SuperPod systems reportedly start at $7 million. The setup described in the contract materials notes that it will include a substantial memory upgrade from Nvidia.

Though small compared to the massive AI-training data centers deployed by companies like OpenAI and Meta, the SuperPod is still a powerful and expensive setup using the most advanced technology offered by Nvidia, whose chips have facilitated the global machine-learning spree. While the hardware can be used in many ways, it's marketed as a turnkey means of creating and querying an AI model. Last year, the MITRE Corporation, a federally funded military R&D lab, acquired a $20 million SuperPod setup to train bespoke AI models for use by government agencies, touting the purchase as a "massive increase in computing power" for the United States.

How exactly the IRS will use its SuperPod is unclear. An agency spokesperson said the IRS had no information to share on the supercomputer purchase, including which presidential administration ordered it. A 2024 report by the Treasury Inspector General for Tax Administration identified 68 different AI-related projects underway at the IRS; the Nvidia cluster is not named among them, though many were redacted. But some clues can be gleaned from the purchase materials. "The IRS requires a robust and scalable infrastructure that can handle complex machine learning (ML) workloads," the document explains. "The Nvidia Super Pod is a critical component of this infrastructure, providing the necessary compute power, storage, and networking capabilities to support the development and deployment of large-scale ML models."

The document notes that the SuperPod will be run by the IRS Research, Applied Analytics, and Statistics division, or RAAS, which leads a variety of data-centric initiatives at the agency. While no specific uses are cited, it states that this division's Compliance Data Warehouse project, which is behind this SuperPod purchase, has previously used machine learning for automated fraud detection, identity theft prevention, and generally gaining a "deeper understanding of the mechanisms that drive taxpayer behavior."

Biotech

AI Used To Design a Multi-Step Enzyme That Can Digest Some Plastics 33

Leveraging AI tools like RFDiffusion and PLACER, researchers were able to design a novel enzyme capable of breaking down plastic by targeting ester bonds, a key component in polyester. Ars Technica reports: The researchers started out by using the standard tools they developed to handle protein design, including an AI tool named RFDiffusion, which uses a random seed to generate a variety of protein backbones. In this case, the researchers asked RFDiffusion to match the average positions of the amino acids in a family of ester-breaking enzymes. The results were fed to another neural network, which chose the amino acids such that they'd form a pocket that would hold an ester that breaks down into a fluorescent molecule so they could follow the enzyme's activity using its glow.

Of the 129 proteins designed by this software, only two of them resulted in any fluorescence. So the team decided they needed yet another AI. Called PLACER, the software was trained by taking all the known structures of proteins latched on to small molecules and randomizing some of their structure, forcing the AI to learn how to shift things back into a functional state (making it a generative AI). The hope was that PLACER would be trained to capture some of the structural details that allow enzymes to adopt more than one specific configuration over the course of the reaction they were catalyzing. And it worked. Repeating the same process with an added PLACER screening step boosted the number of enzymes with catalytic activity by over three-fold.

Unfortunately, all of these enzymes stalled after a single reaction. It turns out they were much better at cleaving the ester, but they left one part of it chemically bonded to the enzyme. In other words, the enzymes acted like part of the reaction, not a catalyst. So the researchers started using PLACER to screen for structures that could adopt a key intermediate state of the reaction. This produced a much higher rate of reactive enzymes (18 percent of them cleaved the ester bond), and two -- named "super" and "win" -- could actually cycle through multiple rounds of reactions. The team had finally made an enzyme.

By adding additional rounds alternating between structure suggestions using RFDiffusion and screening using PLACER, the team saw the frequency of functional enzymes increase and eventually designed one that had an activity similar to some produced by actual living things. They also showed they could use the same process to design an esterase capable of digesting the bonds in PET, a common plastic.
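To make the workflow concrete, here is a toy sketch of the alternating generate-and-screen loop the article describes. Every function below is a simplified random stub standing in for the real tools; none of this is the actual RFDiffusion or PLACER interface, and only the control flow mirrors the published pipeline.

```python
import random

def generate_backbones(n):
    """Stand-in for RFDiffusion: propose n candidate backbones from random seeds,
    constrained to match the average geometry of known ester-breaking enzymes."""
    return [f"backbone_{random.randrange(10**6)}" for _ in range(n)]

def design_pocket(backbone):
    """Stand-in for the sequence-design network: pick amino acids that form a pocket
    around the fluorogenic ester substrate."""
    return {"backbone": backbone, "sequence": "<designed amino acids>"}

def placer_score(design, state):
    """Stand-in for PLACER: score how well the design can hold a given reaction state."""
    return random.random()

def design_campaign(rounds=3, per_round=100, threshold=0.9):
    hits = []
    for r in range(rounds):
        designs = [design_pocket(b) for b in generate_backbones(per_round)]
        # Screening against a key reaction intermediate, not just the starting ester,
        # is what the article credits with raising the hit rate to roughly 18 percent.
        screened = [d for d in designs
                    if placer_score(d, state="reaction intermediate") >= threshold]
        hits.extend(screened)
        print(f"round {r + 1}: {len(screened)} designs passed the in-silico screen")
    return hits

if __name__ == "__main__":
    design_campaign()
```

In the actual study, designs surviving the computational screen were then tested in the lab, and repeated rounds of suggestion and screening raised the frequency of functional enzymes.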
The research has been published in the journal Science.
AI

'Please Stop Inviting AI Notetakers To Meetings' 47

Most virtual meeting platforms these days include AI-powered notetaking tools or bots that join meetings as guests, transcribe discussions, and/or summarize key points. "The tech companies behind them might frame it as a step forward in efficiency, but the technology raises troubling questions around etiquette and privacy and risks undercutting the very communication it's meant to improve (paywalled; alternative source)," writes Chris Stokel-Walker in a Weekend Essay for Bloomberg. From the article: [...] The push to document every workplace interaction and utterance is not new. Having a paper trail has long been seen as a useful thing, and a record of decisions and action points is arguably what makes a meeting meaningful. The difference now is the inclusion of new technology that lacks the nuance and depth of understanding inherent to human interaction in a meeting room. In some ways, the prior generation of communication tools, such as instant messaging service Slack, created its own set of problems. Messaging that previously passed in private via email became much more transparent, creating a minefield where one wrong word or badly chosen emoji can explode into a dispute between colleagues. There is a similar risk with notetaking tools. Each utterance documented and analyzed by AI includes the potential for missteps and misunderstandings.

Anyone thinking of bringing an AI notetaker to a meeting must consider how other attendees will respond, says Andrew Brodsky, assistant professor of management at the McCombs School of Business, part of the University of Texas at Austin. Colleagues might think you want to better focus on what is said without missing out on a definitive record of the discussion. Or they might think, "You can't be bothered to take notes yourself or remember what was being talked about," he says. For the companies that sell these AI interlopers, the upside is clear. They recognize we're easily nudged into different behaviors and can quickly become reliant on tools that we survived without for years. [...] There's another benefit for tech companies getting us hooked on AI notetakers: Training data for AI systems is increasingly hard to come by. Research group Epoch AI forecasts there will be a drought of usable text possibly by next year. And with publishers unleashing lawsuits against AI companies for hoovering up their content, the tech firms are on the hunt for other sources of data. Notes from millions of meetings around the world could be an ideal option.

For those of us who are the source of such data, however, the situation is more nuanced. The key question is whether AI notetakers make office meetings more useless than so many already are. There's an argument that meetings are an important excuse for workers to come together and talk as human beings. All that small talk is where good ideas often germinate -- that's ostensibly why so many companies are demanding staff return to the office. But if workers trade in-person engagement for AI readbacks, and colleagues curb their words and ideas for fear of being exposed by bots, what's left? If the humans step back, all that remains is a series of data points and more AI slop polluting our lives.
AI

Microsoft Study Finds Relying on AI Kills Your Critical Thinking Skills 61

A new study (PDF) from researchers at Microsoft and Carnegie Mellon University found that increased reliance on AI tools leads to a decline in critical thinking skills. Gizmodo reports: The researchers tapped 319 knowledge workers -- people whose jobs involve handling data or information -- and asked them to self-report details of how they use generative AI tools in the workplace. The participants were asked to report tasks that they were asked to do, how they used AI tools to complete them, how confident they were in the AI's ability to do the task, their ability to evaluate that output, and how confident they were in their own ability to complete the same task without any AI assistance.

Over the course of the study, a pattern revealed itself: the more confident the worker was in the AI's capability to complete the task, the more often they could feel themselves taking their hands off the wheel. The participants reported a "perceived enaction of critical thinking" when they felt like they could rely on the AI tool, presenting the potential for over-reliance on the technology without examination. This was especially true for lower-stakes tasks, the study found, as people tended to be less critical. While it's very human to have your eyes glaze over for a simple task, the researchers warned that this could portend concerns about "long-term reliance and diminished independent problem-solving."

By contrast, the less confidence the workers had in the AI's ability to complete the assigned task, the more they found themselves engaging their critical thinking skills. In turn, they typically reported more confidence in their ability to evaluate what the AI produced and improve upon it on their own. Another noteworthy finding of the study: users who had access to generative AI tools tended to produce "a less diverse set of outcomes for the same task" compared to those without.
AI

PIN AI Launches Mobile App Letting You Make Your Own Personalized, Private AI Model (venturebeat.com) 13

An anonymous reader quotes a report from VentureBeat: A new startup PIN AI (not to be confused with the poorly reviewed hardware device the AI Pin by Humane) has emerged from stealth to launch its first mobile app, which lets a user select an underlying open-source AI model that runs directly on their smartphone (iOS/Apple iPhone and Google Android supported) and remains private and totally customized to their preferences. Built with a decentralized infrastructure that prioritizes privacy, PIN AI aims to challenge big tech's dominance over user data by ensuring that personal AI serves individuals -- not corporate interests. Founded by AI and blockchain experts from Columbia, MIT and Stanford, PIN AI is led by Davide Crapis, Ben Wu and Bill Sun, who bring deep experience in AI research, large-scale data infrastructure and blockchain security. [...]

PIN AI introduces an alternative to centralized AI models that collect and monetize user data. Unlike cloud-based AI controlled by large tech firms, PIN AI's personal AI runs locally on user devices, allowing for secure, customized AI experiences without third-party surveillance. At the heart of PIN AI is a user-controlled data bank, which enables individuals to store and manage their personal information while allowing developers access to anonymized, multi-category insights -- ranging from shopping habits to investment strategies. This approach ensures that AI-powered services can benefit from high-quality contextual data without compromising user privacy. [...] The new mobile app launched in the U.S. and multiple regions also includes key features such as:

- The "God model" (guardian of data): Helps users track how well their AI understands them, ensuring it aligns with their preferences.
- Ask PIN AI: A personalized AI assistant capable of handling tasks like financial planning, travel coordination and product recommendations.
- Open-source integrations: Users can connect apps like Gmail, social media platforms and financial services to their personal AI, training it to better serve them without exposing data to third parties.
- "With our app, you have a personal AI that is your model," Crapis added. "You own the weights, and it's completely private, with privacy-preserving fine-tuning."
Davide Crapis, co-founder of PIN AI, told VentureBeat that the app currently supports several open-source AI models, including small versions of DeepSeek and Meta's Llama. "With our app, you have a personal AI that is your model," Crapis added. "You own the weights, and it's completely private, with privacy-preserving fine-tuning."
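As a rough illustration of the pattern PIN AI describes (a small open-weight model running entirely on the user's own hardware), here is a minimal desktop-side sketch using Hugging Face Transformers. It is not PIN AI's app code; the model name is just an example of a small openly licensed chat model, and a phone deployment would use an on-device runtime instead.

```python
# Minimal local-inference sketch: after the one-time model download, generation runs
# entirely on local hardware with no further network calls. Not PIN AI's implementation.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # example small open model, cached locally
)

prompt = "In one sentence, summarize this spending: groceries $120, fuel $45, dining $80."
result = generator(prompt, max_new_tokens=60, do_sample=False)
print(result[0]["generated_text"])  # prompt and answer never leave the machine
```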

You can sign up for early access to the PIN AI app here.
AI

OpenAI Eases Content Restrictions For ChatGPT With New 'Grown-Up Mode' 28

An anonymous reader quotes a report from Ars Technica: On Wednesday, OpenAI published the latest version of its "Model Spec," a set of guidelines detailing how ChatGPT should behave and respond to user requests. The document reveals a notable shift in OpenAI's content policies, particularly around "sensitive" content like erotica and gore -- allowing this type of content to be generated without warnings in "appropriate contexts." The change in policy has been in the works since May 2024, when the original Model Spec document first mentioned that OpenAI was exploring "whether we can responsibly provide the ability to generate NSFW content in age-appropriate contexts through the API and ChatGPT."

ChatGPT's guidelines now state that "erotica or gore" may be generated, but only under specific circumstances. "The assistant should not generate erotica, depictions of illegal or non-consensual sexual activities, or extreme gore, except in scientific, historical, news, creative or other contexts where sensitive content is appropriate," OpenAI writes. "This includes depictions in text, audio (e.g., erotic or violent visceral noises), or visual content." So far, experimentation from Reddit users has shown that ChatGPT's content filters have indeed been relaxed, with some managing to generate explicit sexual or violent scenarios without accompanying content warnings. OpenAI notes that its Usage Policies still apply, which prohibit building AI tools for minors that include sexual content.
Facebook

Meta To Build World's Longest Undersea Cable 33

On Friday Meta unveiled Project Waterworth, a 50,000-kilometer subsea cable network that will be the world's longest such system. The multi-billion-dollar project will connect the U.S., Brazil, India, South Africa, and other key regions. The system uses 24 fiber pairs and introduces what Meta describes as "first-of-its-kind routing" that maximizes cable placement in deep water, at depths of up to 7,000 meters.

The company developed new burial techniques for high-risk areas near coasts to protect against ship anchors and other hazards. A joint statement from President Trump and Prime Minister Modi confirmed India's role in maintaining and financing portions of the undersea cables in the Indian Ocean using "trusted vendors." According to telecom analyst firm TeleGeography, Meta currently has ownership stakes in 16 subsea networks, including the 2Africa cable system that encircles the African continent. This new project would be Meta's first wholly owned global cable system.
AI

Hedge Fund Startup That Replaced Analysts With AI Beats the Market (msn.com) 69

A hedge fund startup that uses AI to do work typically handled by analysts has outperformed the global stock market in its first six months while slashing research costs. From a report: The Sydney-based firm, Minotaur Capital, was founded by Armina Rosenberg and Thomas Rice. Rosenberg previously managed a global equities portfolio for tech billionaire Mike Cannon-Brookes and ran Australian small-company research for JPMorgan Chase & Co. when she was 25. Rice is a former portfolio manager at Perpetual. The duo's bets on global stocks returned 13.7% in the six months ending January, versus 6.7% for the MSCI All-Country World Index. Minotaur has no analysts on staff, with Rosenberg saying AI models are far quicker and cheaper.

"We're looking at about half the price" in terms of cost of AI versus a junior analyst salary, Rosenberg, 37, said of the firm's program. Minotaur is among a growing number of hedge funds experimenting with ways to improve returns and cut expenses with AI as the technology becomes increasingly sophisticated. Still, the jury is still out on the ability of AI-driven models to deliver superior returns over the long run.
