Books

AI-Generated Slop Is Already In Your Public Library 20

An anonymous reader writes: Low-quality books that appear to be AI-generated are making their way into public libraries via their digital catalogs, forcing librarians who are already understaffed to either sort through a functionally infinite number of books to determine what is written by humans and what is generated by AI, or to spend taxpayer dollars to provide patrons with information they don't realize is AI-generated.

Public libraries primarily use two companies to manage and lend ebooks: Hoopla and OverDrive, the latter of which people may know from its borrowing app, Libby. Both companies have a variety of payment options for libraries, but generally libraries get access to the companies' catalog of books and pay for customers to be able to borrow those books, with different books having different licenses and prices. A key difference is that with OverDrive, librarians can pick and choose which books in OverDrive's catalog they want to give their customers the option of borrowing. With Hoopla, librarians have to opt into Hoopla's entire catalog, then pay for whatever their customers choose to borrow from that catalog. The only way librarians can limit what Hoopla books their customers can borrow is by setting a limit on the price of books. For example, a library can use Hoopla but make it so their customers can only borrow books that cost the library no more than $5 per use.
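
The practical difference is easiest to see as code. Below is a rough sketch in Python of the two selection models described above; the catalog data, field names, and titles are invented for illustration and have nothing to do with Hoopla's or OverDrive's actual systems:

```python
# Illustrative sketch only; field names and prices are invented.
from dataclasses import dataclass

@dataclass
class Title:
    name: str
    price_per_use: float  # what the library pays each time a patron borrows it

catalog = [
    Title("Vetted History Monograph", 2.99),
    Title("Fatty Liver Diet Cookbook", 4.99),
    Title("Premium Audiobook", 7.99),
]

# OverDrive-style model: librarians hand-pick which titles patrons can see.
selected = {"Vetted History Monograph"}
overdrive_borrowable = [t for t in catalog if t.name in selected]

# Hoopla-style model: the whole catalog is in; the only lever is a price cap.
PRICE_CAP = 5.00
hoopla_borrowable = [t for t in catalog if t.price_per_use <= PRICE_CAP]

print([t.name for t in overdrive_borrowable])
# ['Vetted History Monograph']
print([t.name for t in hoopla_borrowable])
# ['Vetted History Monograph', 'Fatty Liver Diet Cookbook']
```

Note that the $4.99 cookbook slips through the cap: a price filter screens on cost, not on quality or authorship.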

On one hand, Hoopla's gigantic catalog, which includes ebooks, audiobooks, and movies, is a selling point because it gives librarians access to more content at a lower price. On the other hand, making librarians buy into the entire catalog means that a customer looking for a book about how to diet for a healthier liver might end up borrowing Fatty Liver Diet Cookbook: 2000 Days of Simple and Flavorful Recipes for a Revitalized Liver. The book was authored by Magda Tangy, who has no online footprint, and who has an AI-generated profile picture on Amazon, where her books are also for sale. Note the earring that is only on one ear and seems slightly deformed. A spokesperson for deepfake detection company Reality Defender said that according to their platform, the headshot is 85 percent likely to be AI-generated. [...] It is impossible to say exactly how many AI-generated books are included in Hoopla's catalog, but books that appeared to be AI-generated were not hard to find for most of the search terms I tried on the platform.
"This type of low quality, AI generated content, is what we at 404 Media and others have come to call AI slop," writes Emanuel Maiberg. "Librarians, whose job it is in part to curate what books their community can access, have been dealing with similar problems in the publishing industry for years, and have a different name for it: vendor slurry."

"None of the librarians I talked to suggested the AI-generated content needed to be banned from Hoopla and libraries only because it is AI-generated. It might have its place, but it needs to be clearly labeled, and more importantly, provide borrowers with quality information."

Sarah Lamdan, deputy director of the American Library Association, told 404 Media: "Platforms like Hoopla should offer libraries the option to select or omit materials, including AI materials, in their collections. AI books should be well-identified in library catalogs, so it is clear to readers that the books were not written by human authors. If library visitors choose to read AI eBooks, they should do so with the knowledge that the books are AI-generated."
Red Hat Software

Red Hat Plans to Add AI to Fedora and GNOME 49

In his post about the future of Fedora Workstation, Christian F.K. Schaller discusses how the Red Hat team plans to integrate AI with IBM's open-source Granite engine to enhance developer tools, such as IDEs, and create an AI-powered Code Assistant. He says the team is also working on streamlining AI acceleration in Toolbx and ensuring Fedora users have access to tools like RamaLama. From the post: One big item on our list for the year is looking at ways Fedora Workstation can make use of artificial intelligence. Thanks to IBM's Granite effort we now have an AI engine that is available under proper open source licensing terms and which can be extended for many different use cases. Also, the IBM Granite team has an aggressive plan for releasing updated versions of Granite, incorporating new features of special interest to developers, like making Granite a great engine to power IDEs and similar tools. We've been brainstorming various ideas in the team for how we can make use of AI to provide improved or new features to users of GNOME and Fedora Workstation. This includes making sure Fedora Workstation users have access to great tools like RamaLama, that we make sure setting up accelerated AI inside Toolbx is simple, that we offer a good Code Assistant based on Granite, and that we come up with other cool integration points.

"I'm still not sure how I feel about this approach," writes designer/developer and blogger Bradley Taunt. "While IBM Granite is an open source model, I still don't enjoy so much artificial 'intelligence' creeping into core OS development. This also isn't something optional on the end-user's side, like a desktop feature or package. This sounds like it's going to be built directly into the core system."

"Red Hat has been pushing hard towards AI and my main concern is having this influence other operating system dev teams. Luckily things seems AI-free in BSD land. For now, at least."
Businesses

Panasonic To Cut Costs To Support Shift Into AI 12

Panasonic will cut its costs, restructure underperforming units and revamp its workforce as it pivots toward AI data centers and away from its consumer electronics roots, the company said on Tuesday. The Japanese conglomerate aims to boost profits by 300 billion yen ($1.93 billion) by March 2029, partly by consolidating production and logistics operations.

Bloomberg reports that CEO Yuki Kusumi has declined to confirm if the company would divest its TV business but said alternatives were being considered. The Tesla battery supplier plans to integrate AI across operations through a partnership with Anthropic, targeting growth in components for data centers.
EU

AI Systems With 'Unacceptable Risk' Are Now Banned In the EU 72

AI systems that pose "unacceptable risk" or harm can now be banned in the European Union. Some of the unacceptable AI activities include social scoring, deceptive manipulation, exploiting personal vulnerabilities, predictive policing based on appearance, biometric-based profiling, real-time biometric surveillance, emotion inference in workplaces or schools, and unauthorized facial recognition database expansion. TechCrunch reports: Under the bloc's approach, there are four broad risk levels: (1) Minimal risk (e.g., email spam filters) will face no regulatory oversight; (2) limited risk, which includes customer service chatbots, will have a light-touch regulatory oversight; (3) high risk -- AI for healthcare recommendations is one example -- will face heavy regulatory oversight; and (4) unacceptable risk applications -- the focus of this month's compliance requirements -- will be prohibited entirely.
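
The tiering amounts to a lookup from system category to regulatory obligation. Here is a minimal sketch in Python, paraphrasing the four tiers as summarized above (the labels and examples come from the summary, not the Act's legal text):

```python
# Paraphrase of the four risk tiers described above; not the Act's legal text.
RISK_TIERS = {
    "minimal":      {"example": "email spam filter",          "oversight": "none"},
    "limited":      {"example": "customer service chatbot",   "oversight": "light-touch"},
    "high":         {"example": "healthcare recommendations", "oversight": "heavy"},
    "unacceptable": {"example": "social scoring",             "oversight": "prohibited entirely"},
}

def oversight_for(tier: str) -> str:
    return RISK_TIERS[tier]["oversight"]

print(oversight_for("unacceptable"))  # prohibited entirely
```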

Companies that are found to be using any of the above AI applications in the EU will be subject to fines, regardless of where they are headquartered. They could be on the hook for up to ~$36 million, or 7% of their annual revenue from the prior fiscal year, whichever is greater. The fines won't kick in for some time, noted Rob Sumroy, head of technology at the British law firm Slaughter and May, in an interview with TechCrunch. "Organizations are expected to be fully compliant by February 2, but ... the next big deadline that companies need to be aware of is in August," Sumroy said. "By then, we'll know who the competent authorities are, and the fines and enforcement provisions will take effect."
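
Since the penalty is "whichever is greater," the 7% prong dominates for any company with more than roughly $515 million in annual revenue. A quick worked example, taking the ~$36 million figure reported above as the fixed prong:

```python
def max_eu_ai_act_fine(annual_revenue_usd: float) -> float:
    """Upper bound on the penalty as described above: the greater of
    ~$36 million or 7% of the prior fiscal year's annual revenue."""
    return max(36_000_000, 0.07 * annual_revenue_usd)

print(f"${max_eu_ai_act_fine(100e6):,.0f}")  # $36,000,000 (fixed prong wins)
print(f"${max_eu_ai_act_fine(10e9):,.0f}")   # $700,000,000 (7% prong wins)
```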
AI

Salesforce Cutting 1,000 Roles While Hiring Salespeople for AI 20

Salesforce is cutting jobs as its latest fiscal year gets underway, Bloomberg reported Monday, citing a person familiar with the matter, even as the company simultaneously hires workers to sell new artificial intelligence products. From the report: More than 1,000 roles will be affected, according to the person, who asked not to be identified because the information is private. Displaced workers will be able to apply for other jobs internally, the person added. Salesforce had nearly 73,000 workers as of January 2024, when that fiscal year ended.
AI

CERN's Mark Thomson: AI To Revolutionize Fundamental Physics (theguardian.com) 96

An anonymous reader quotes a report from The Guardian: Advanced artificial intelligence is to revolutionize fundamental physics and could open a window on to the fate of the universe, according to Cern's next director general. Prof Mark Thomson, the British physicist who will assume leadership of Cern on 1 January 2026, says machine learning is paving the way for advances in particle physics that promise to be comparable to the AI-powered prediction of protein structures that earned Google DeepMind scientists a Nobel prize in October. At the Large Hadron Collider (LHC), he said, similar strategies are being used to detect incredibly rare events that hold the key to how particles came to acquire mass in the first moments after the big bang and whether our universe could be teetering on the brink of a catastrophic collapse.

"These are not incremental improvements," Thomson said. "These are very, very, very big improvements people are making by adopting really advanced techniques." "It's going to be quite transformative for our field," he added. "It's complex data, just like protein folding -- that's an incredibly complex problem -- so if you use an incredibly complex technique, like AI, you're going to win."

The intervention comes as Cern's council is making the case for the Future Circular Collider, which at 90km circumference would dwarf the LHC. Some are skeptical given the lack of blockbuster results at the LHC since the landmark discovery of the Higgs boson in 2012, and Germany has described the $17 billion proposal as unaffordable. But Thomson said AI has provided fresh impetus to the hunt for new physics at the subatomic scale -- and that major discoveries could occur after 2030, when a major upgrade will boost the LHC's beam intensity by a factor of ten. This will allow unprecedented observations of the Higgs boson, nicknamed the God particle, which grants mass to other particles and binds the universe together.

Thomson is now confident that the LHC can measure Higgs boson self-coupling, a key factor in understanding how particles gained mass after the Big Bang and whether the Higgs field is in a stable state or could undergo a future transition. According to Thomson: "It's a very deep fundamental property of the universe, one we don't fully understand. If we saw the Higgs self-coupling being different from our current theory, that would be another massive, massive discovery. And you don't know until you've made the measurement."

The report also notes how AI is being used in "every aspect of the LHC operation." Dr Katharine Leney, who works on the LHC's Atlas experiment, said: "When the LHC is colliding protons, it's making around 40m collisions a second and we have to make a decision within a microsecond ... which events are something interesting that we want to keep and which to throw away. We're already now doing better with the data that we've collected than we thought we'd be able to do with 20 times more data ten years ago. So we've advanced by 20 years at least. A huge part of this has been down to AI."
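
In spirit, the trigger Leney describes is a hard real-time filter: score every collision event quickly and keep only the interesting tail, because anything discarded is gone forever. Here is a toy sketch of that pattern; the scoring function below is a crude stand-in, nothing like the ATLAS trigger's actual models:

```python
import random

# Crude stand-in for a trained trigger classifier. Real triggers run fast
# ML inference under a hard time budget (a decision within ~a microsecond).
def interestingness(event: dict) -> float:
    # Pretend higher collision energy means a more interesting event.
    return event["energy_gev"] / 1000.0

THRESHOLD = 0.99  # tuned so the kept rate fits downstream storage bandwidth

def trigger(events: list[dict]) -> list[dict]:
    """Keep only events scoring above threshold; the rest are discarded
    forever, which is why the classifier's quality matters so much."""
    return [e for e in events if interestingness(e) > THRESHOLD]

events = [{"energy_gev": random.uniform(0.0, 1000.0)} for _ in range(1_000_000)]
kept = trigger(events)
print(f"kept {len(kept):,} of {len(events):,} events")  # roughly 1%
```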

Generative AI is also being used to look for and even produce dark matter via the LHC. "You can start to ask more complex, open-ended questions," said Thomson. "Rather than searching for a particular signature, you ask the question: 'Is there something unexpected in this data?'"
Crime

Senator Hawley Proposes Jail Time For People Who Download DeepSeek 226

Senator Josh Hawley has introduced a bill that would criminalize the import, export, and collaboration on AI technology with China. What this means is that "someone who knowingly downloads a Chinese-developed AI model like the now immensely popular DeepSeek could face up to 20 years in jail, a million-dollar fine, or both, should such a law pass," reports 404 Media. From the report: Hawley introduced the legislation, titled the Decoupling America's Artificial Intelligence Capabilities from China Act, on Wednesday of last week. "Every dollar and gig of data that flows into Chinese AI are dollars and data that will ultimately be used against the United States," Senator Hawley said in a statement. "America cannot afford to empower our greatest adversary at the expense of our own strength. Ensuring American economic superiority means cutting China off from American ingenuity and halting the subsidization of CCP innovation."

Hawley's statement explicitly says that he introduced the legislation because of the release of DeepSeek, an advanced AI model that's competitive with its American counterparts, and which its developers claimed was made for a fraction of the cost and without access to as many, or as advanced, chips, though these claims are unverified. The statement called DeepSeek "a data-harvesting, low-cost AI model that sparked international concern and sent American technology stocks plummeting." It says the goal of the bill is to "prohibit the import from or export to China of artificial intelligence technology," "prohibit American companies from conducting AI research in China or in cooperation with Chinese companies," and "prohibit U.S. companies from investing money in Chinese AI development."
Businesses

Anthropic Asks Job Applicants Not To Use AI In Job Applications (404media.co) 36

An anonymous reader quotes a report from 404 Media: Anthropic, the company that made one of the most popular AI writing assistants in the world, requires job applicants to agree that they won't use an AI assistant to help write their application. "While we encourage people to use AI systems during their role to help them work faster and more effectively, please do not use AI assistants during the application process," the applications say. "We want to understand your personal interest in Anthropic without mediation through an AI system, and we also want to evaluate your non-AI-assisted communication skills. Please indicate 'Yes' if you have read and agree."

Anthropic released Claude, an AI assistant that's especially good at conversational writing, in 2023. This question is in almost all of Anthropic's nearly 150 currently listed roles, but is not in some technical roles, like mobile product designer. It's included in everything from software engineer roles to finance, communications, and sales jobs at the company. The field was spotted by Simon Willison, an open source developer. The question shows Anthropic trying to get around a problem it's helping create: people relying so heavily on AI assistants that they struggle to form opinions of their own. It's also a moot question, as Anthropic and its competitors have created AI models so indistinguishable from human speech as to be nearly undetectable.

Graphics

Microsoft Paint Gets a Copilot Button For Gen AI Features (pcworld.com) 26

A new update is being rolled out to Windows 11 insiders (Build 26120.3073) that introduces a Copilot button in Microsoft Paint. PCWorld reports: Clicking the Copilot button will expand a drop-down menu with all the generative AI features: Cocreator and Image Creator (AI art based on what you've drawn or text prompts), Generative Erase (AI removal of unwanted stuff from images), and Remove Background. Note that these generative AI features have been in Microsoft Paint for some time, but this quick-access Copilot button is a nice time-saver and productivity booster if you use them a lot.
The Military

Air Force Documents On Gen AI Test Are Just Whole Pages of Redactions 12

An anonymous reader quotes a report from 404 Media: The Air Force Research Laboratory (AFRL), whose tagline is "Win the Fight," has paid more than a hundred thousand dollars to a company that is providing generative AI services to other parts of the Department of Defense. But the AFRL refused to say what exactly the point of the research was, and provided page after page of entirely blacked out, redacted documents in response to a Freedom of Information Act (FOIA) request from 404 Media related to the contract. [...] "Ask Sage: Generative AI Acquisition Accelerator," a December 2023 procurement record reads, with no additional information on the intended use case. The Air Force paid $109,490 to Ask Sage, the record says.

Ask Sage is a company focused on providing generative AI to the government. In September the company announced that the Army was implementing Ask Sage's tools. In October it achieved "IL5" authorization, a DoD term for the necessary steps to protect unclassified information to a certain standard. 404 Media made an account on the Ask Sage website. After logging in, the site presents a list of the models available through Ask Sage. Essentially, they include every major model made by well-known AI companies, as well as open source ones. OpenAI's GPT-4o and DALL-E 3, Anthropic's Claude 3.5, and Google's Gemini are all included. The company also recently added the Chinese-developed DeepSeek R1, but includes a disclaimer. "WARNING. DO NOT USE THIS MODEL WITH SENSITIVE DATA. THIS MODEL IS BIASED, WITH TIES TO THE CCP [Chinese Communist Party]," it reads. Ask Sage is a way for government employees to access and use AI models in a more secure way. But only some of the models in the tool are listed by Ask Sage as being "compliant" with or "capable" of handling sensitive data.

[...] [T]he Air Force declined to provide any real specifics on what it paid Ask Sage for. 404 Media requested all procurement records related to the Ask Sage contract. Instead, the Air Force provided a 19-page presentation which seemingly would have explained the purpose of the test, while redacting 18 of the pages. The only available page said "Ask Sage, Inc. will explore the utilization of Ask Sage by acquisition Airmen with the DAF for Innovative Defense-Related Dual Purpose Technologies relating to the mission of exploring LLMs for DAF use while exploring anticipated benefits, clearly define needed solution adaptations, and define clear milestones and acceptance criteria for Phase II efforts."
AI

Anthropic Makes 'Jailbreak' Advance To Stop AI Models Producing Harmful Results 35

AI startup Anthropic has demonstrated a new technique to prevent users from eliciting harmful content from its models, as leading tech groups including Microsoft and Meta race to find ways that protect against dangers posed by the cutting-edge technology. From a report: In a paper released on Monday, the San Francisco-based startup outlined a new system called "constitutional classifiers." It is a model that acts as a protective layer on top of large language models such as the one that powers Anthropic's Claude chatbot, which can monitor both inputs and outputs for harmful content.
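
Architecturally, what the paper describes is a wrapper: a classifier screens the prompt on the way in, and another screens the drafted response on the way out. Here is a minimal sketch of that pattern; the classify and generate functions are hypothetical placeholders, not Anthropic's actual components:

```python
# Sketch of the input/output classifier-wrapper pattern described above.
# classify_input, classify_output, and generate are hypothetical stand-ins.

def classify_input(prompt: str) -> bool:
    """True if the prompt looks like a jailbreak or harmful request."""
    return "chemical weapon" in prompt.lower()  # placeholder heuristic

def classify_output(text: str) -> bool:
    """True if the drafted response contains harmful content."""
    return "synthesis route" in text.lower()    # placeholder heuristic

def generate(prompt: str) -> str:
    return f"(model response to: {prompt})"     # stand-in for the LLM call

def guarded_generate(prompt: str) -> str:
    if classify_input(prompt):
        return "Request refused by input classifier."
    draft = generate(prompt)
    if classify_output(draft):
        return "Response withheld by output classifier."
    return draft

print(guarded_generate("Tell me a joke"))
```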

The development by Anthropic, which is in talks to raise $2 billion at a $60 billion valuation, comes amid growing industry concern over "jailbreaking" -- attempts to manipulate AI models into generating illegal or dangerous information, such as producing instructions to build chemical weapons. Other companies are also racing to deploy measures to protect against the practice, in moves that could help them avoid regulatory scrutiny while convincing businesses to adopt AI models safely. Microsoft introduced "prompt shields" last March, while Meta introduced a prompt guard model in July last year; researchers swiftly found ways to bypass it, though the flaws have since been fixed.
IT

Cloudflare Rolls Out Digital Tracker To Combat Fake Images (cloudflare.com) 14

Cloudflare, a major web infrastructure company, will now track and verify the authenticity of images across its network through Content Credentials, a digital signature system that documents an image's origin and editing history. The technology, developed by Adobe's Content Authenticity Initiative, embeds metadata showing who created an image, when it was taken, and any subsequent modifications - including those made by AI tools.
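
Conceptually, Content Credentials travel as a cryptographically signed manifest inside the image file: a claim about who created it plus a chain of assertions recording each edit. Below is a schematic sketch of what a verification pass covers; the types and helper are illustrative, not the real C2PA SDK:

```python
# Schematic only: shows what a Content Credentials check covers. Real C2PA
# SDKs handle signing, certificate chains, and binary manifest parsing.
from dataclasses import dataclass, field

@dataclass
class Assertion:
    action: str     # e.g. "c2pa.created", "c2pa.edited"
    tool: str       # the software (or AI tool) that performed the action
    timestamp: str

@dataclass
class Manifest:
    creator: str
    assertions: list[Assertion] = field(default_factory=list)
    signature_valid: bool = True  # stands in for a real cryptographic check

def verify(manifest: Manifest) -> str:
    if not manifest.signature_valid:
        return "TAMPERED: signature does not match manifest contents"
    history = " -> ".join(f"{a.action} via {a.tool}" for a in manifest.assertions)
    return f"Created by {manifest.creator}; history: {history}"

m = Manifest("Jane Photographer", [
    Assertion("c2pa.created", "camera firmware", "2025-01-10T09:00Z"),
    Assertion("c2pa.edited", "generative fill (AI)", "2025-01-11T14:30Z"),
])
print(verify(m))
```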

Major news organizations including the BBC, Wall Street Journal and New York Times have already adopted the system. The feature is available immediately through a single toggle in Cloudflare Images settings. Users can verify an image's authenticity through Adobe's web tool or Chrome extension.
AI

OpenAI's New Trademark Application Hints at Humanoid Robots, Smart Jewelry, and More (techcrunch.com) 10

OpenAI has filed an application with the U.S. Patent and Trademark Office to trademark hardware products under its brand name, signaling potential expansion into consumer devices. The filing covers AI-assisted headsets, smart wearables and humanoid robots with communication capabilities. CEO Sam Altman told The Elec on Sunday that OpenAI plans to develop AI hardware through multiple partnerships, though he estimated prototypes would take "several years" to complete.
Power

Will Cryptomining Facilities Change Into AI Data Centers? (yahoo.com) 36

To capitalize on the AI boom, many crypto miners "have begun to repurpose parts of their operations into data centers," reports Reuters, "given they already have most of the infrastructure" (including land and "significant" power resources...)

Toronto-based bitcoin miner Bitfarms has enlisted two consultants to explore how it can transform some of its facilities to meet the growing demand for artificial intelligence data centers, it said on Friday... Earlier this month, Riot Platforms launched a review of the potential AI and computing uses for parts of its facility in Navarro County, Texas.
Android

Google Stops Malicious Apps With 'AI-Powered Threat Detection' and Continuous Scanning (googleblog.com) 15

Android and Google Play have billions of users, Google wrote in its security blog this week. "However, like any flourishing ecosystem, it also attracts its share of bad actors... That's why every year, we continue to invest in more ways to protect our community." Google's tactics include industry-wide alliances, stronger privacy policies, and "AI-powered threat detection."

"As a result, we prevented 2.36 million policy-violating apps from being published on Google Play and banned more than 158,000 bad developer accounts that attempted to publish harmful apps. " To keep out bad actors, we have always used a combination of human security experts and the latest threat-detection technology. In 2024, we used Google's advanced AI to improve our systems' ability to proactively identify malware, enabling us to detect and block bad apps more effectively. It also helps us streamline review processes for developers with a proven track record of policy compliance. Today, over 92% of our human reviews for harmful apps are AI-assisted, allowing us to take quicker and more accurate action to help prevent harmful apps from becoming available on Google Play. That's enabled us to stop more bad apps than ever from reaching users through the Play Store, protecting users from harmful or malicious apps before they can cause any damage.

Starting in 2024, Google also "required apps to be more transparent about how they handle user information by launching new developer requirements and a new 'Data deletion' option for apps that support user accounts and data collection.... We're also constantly working to improve the safety of apps on Play at scale, such as with the Google Play SDK Index. This tool offers insights and data to help developers make more informed decisions about the safety of an SDK."

And once an app is installed, "Google Play Protect, Android's built-in security protection, helps to shield their Android device by continuously scanning for malicious app behavior." Google Play Protect automatically scans every app on Android devices with Google Play Services, no matter the download source. This built-in protection, enabled by default, provides crucial security against malware and unwanted software. Google Play Protect scans more than 200 billion apps daily and performs real-time scanning at the code-level on novel apps to combat emerging and hidden threats, like polymorphic malware. In 2024, Google Play Protect's real-time scanning identified more than 13 million new malicious apps from outside Google Play [based on Google Play Protect 2024 internal data]...

According to our research, more than 95 percent of app installations from major malware families that exploit sensitive permissions highly correlated to financial fraud came from Internet-sideloading sources like web browsers, messaging apps, or file managers. To help users stay protected when browsing the web, Chrome will now display a reminder notification to re-enable Google Play Protect if it has been turned off... Scammers may manipulate users into disabling Play Protect during calls to download malicious Internet-sideloaded apps. To prevent this, the Play Protect app scanning toggle is now temporarily disabled during phone or video calls...

Google Play Protect's enhanced fraud protection pilot analyzes and automatically blocks the installation of apps that may use sensitive permissions frequently abused for financial fraud when the user attempts to install the app from an Internet-sideloading source (web browsers, messaging apps, or file managers). Building on the success of our initial pilot in partnership with the Cyber Security Agency of Singapore (CSA), additional enhanced fraud protection pilots are now active in nine regions — Brazil, Hong Kong, India, Kenya, Nigeria, Philippines, South Africa, Thailand, and Vietnam.
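
The logic described amounts to an install-time gate: if an app arrives from an Internet-sideloading source and requests permissions commonly abused in financial fraud, the installation is blocked. A simplified sketch follows; the permission list and source categories are illustrative, not Google's actual rule set:

```python
# Simplified sketch of the install-time check described above; the permission
# list and source categories are illustrative, not Google's actual rules.
SIDELOAD_SOURCES = {"web_browser", "messaging_app", "file_manager"}
FRAUD_PRONE_PERMISSIONS = {"READ_SMS", "BIND_ACCESSIBILITY_SERVICE"}

def allow_install(source: str, requested_permissions: set[str]) -> bool:
    risky = requested_permissions & FRAUD_PRONE_PERMISSIONS
    if source in SIDELOAD_SOURCES and risky:
        print(f"Blocked: sideloaded app requests {sorted(risky)}")
        return False
    return True

allow_install("web_browser", {"READ_SMS", "INTERNET"})  # blocked
allow_install("play_store", {"READ_SMS", "INTERNET"})   # allowed
```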

In 2024, Google Play Protect's enhanced fraud protection pilots have shielded 10 million devices from over 36 million risky installation attempts, encompassing over 200,000 unique apps.

AI

OpenAI Holds Surprise Livestream to Announce Multi-Step 'Deep Research' Capability (indiatimes.com) 56

Just three hours ago, OpenAI made a surprise announcement to their 3.9 million followers on X.com. "Live from Tokyo," they'd be livestreaming... something. Their description of the event was just two words.

"Deep Research"

UPDATE: The stream has begun, and it's about OpenAI's next "agent-ic offering". ("OpenAI cares about agents because we believe they're going to transform knowledge work...")

"We're introducing a capability called Deep Research... a model that does multi-step research. It discovers content, it synthesizes content, and it reasons about this content." It even asks "clarifying" questions to your prompt to make sure its multi-step research stays on track. Deep Research will be launching in ChatGPT Pro later today, rolling out into other OpenAI products...

And OpenAI's site now has an "Introducing Deep Research" page. Its official description? "An agent that uses reasoning to synthesize large amounts of online information and complete multi-step research tasks for you. Available to Pro users today, Plus and Team next."
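
OpenAI hasn't published the agent's internals, but the description -- clarify the task, discover sources, read them, synthesize -- maps onto a familiar loop. Here is a toy sketch of that loop; every function is a hypothetical stand-in for a model or tool call, not OpenAI's implementation:

```python
# Toy sketch of a multi-step research loop as described in the stream.
# clarify, search, read, and synthesize are hypothetical stand-ins.

def clarify(task: str) -> str:
    return task + " (scope confirmed with the user)"  # clarifying questions

def search(query: str) -> list[str]:
    return [f"source-for-'{query}'"]                  # stand-in web search

def read(source: str) -> str:
    return f"notes from {source}"                     # stand-in page reader

def synthesize(notes: list[str]) -> str:
    return "report drawing on: " + "; ".join(notes)   # stand-in reasoning

def deep_research(task: str, max_steps: int = 3) -> str:
    task = clarify(task)           # ask clarifying questions up front
    notes: list[str] = []
    for step in range(max_steps):  # discover -> read -> accumulate, repeatedly
        for source in search(f"{task}, step {step}"):
            notes.append(read(source))
    return synthesize(notes)       # reason over everything collected

print(deep_research("history of the LHC trigger system"))
```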

Before the livestream began, X.com users shared their reactions to the coming announcement:

"It's like DeepSeek, but cleaner"
"Deep do do if things don't work out"
"Live from Tokyo? Hope this research includes the secret to waking up early!"
"Stop trying, we don't trust u"

But one X.com user had presciently pointed out that OpenAI has used the phrase "deep research" before. In July of 2024, Reuters reported on internal documentation (confirmed with "a person familiar with the matter") code-named "Strawberry" which suggested OpenAI was working on "human-like reasoning skills."

How Strawberry works is a tightly kept secret even within OpenAI, the person said. The document describes a project that uses Strawberry models with the aim of enabling the company's AI to not just generate answers to queries but to plan ahead enough to navigate the internet autonomously and reliably to perform what OpenAI terms "deep research," according to the source. This is something that has eluded AI models to date, according to interviews with more than a dozen AI researchers.

Asked about Strawberry and the details reported in this story, an OpenAI company spokesperson said in a statement: "We want our AI models to see and understand the world more like we do. Continuous research into new AI capabilities is a common practice in the industry, with a shared belief that these systems will improve in reasoning over time." The spokesperson did not directly address questions about Strawberry.

The Strawberry project was formerly known as Q*, which Reuters reported last year was already seen inside the company as a breakthrough... OpenAI hopes the innovation will improve its AI models' reasoning capabilities dramatically, the person familiar with it said, adding that Strawberry involves a specialized way of processing an AI model after it has been pre-trained on very large datasets.

Researchers Reuters interviewed say that reasoning is key to AI achieving human or super-human-level intelligence... OpenAI CEO Sam Altman said earlier this year that in AI "the most important areas of progress will be around reasoning ability."

Firefox

Mozilla Adapts 'Fakespot' Into an AI-Detecting Firefox Add-on (omgubuntu.co.uk) 36

An anonymous reader shared this post from the blog OMG Ubuntu: Want to find out if the text you're reading online was written by a real human or spat out by a large language model trying to sound like one? Mozilla's Fakespot Deepfake Detector Firefox add-on may help give you an indication. Similar to online AI detector tools, the add-on can analyse text (of 32 words or more) to identify patterns, traits, and tells common in AI-generated or manipulated text.

It uses Mozilla's proprietary ApolloDFT engine and a set of open-source detection models. But unlike some tools, Mozilla's Fakespot Deepfake Detector browser extension is free to use and requires neither a signup nor an app download. "After installing the extension, it is simple to highlight any text online and request an instant analysis. Our Detector will tell you right away if the words are likely to be written by a human or if they show AI patterns," Mozilla says.
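
From that description, the add-on's client-side flow is straightforward: take the highlighted text, enforce the 32-word minimum, then send it off for scoring. A sketch of that flow follows; score_text is a hypothetical placeholder for the remote ApolloDFT/open-model analysis:

```python
# Sketch of the described client flow; score_text is a hypothetical stand-in
# for the remote analysis Mozilla's ApolloDFT engine and models perform.
MIN_WORDS = 32  # the add-on requires at least 32 words of highlighted text

def score_text(text: str) -> float:
    return 0.5  # stand-in: probability that the text is AI-generated

def analyze_selection(selection: str) -> str:
    if len(selection.split()) < MIN_WORDS:
        return f"Highlight at least {MIN_WORDS} words for an analysis."
    p = score_text(selection)
    return "Likely AI-generated" if p > 0.5 else "Likely human-written"

print(analyze_selection("too short to analyze"))
```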

Fakespot, acquired by Mozilla in 2023, is best known for its fake product review detection tool, which grades user-submitted reviews left on online shopping sites. Mozilla is now expanding the use of Fakespot's AI tech to cover other kinds of online content. At present, Mozilla's Fakespot Deepfake Detector only works with highlighted text on websites, but the company says image and video analysis is planned for the future.

The Fakespot website will also analyze the reviews on any product-listing page if you paste in its URL.
AI

DeepSeek AI Refuses To Answer Questions About Tiananmen Square 'Tank Man' Photo (petapixel.com) 65

The photography blog PetaPixel once interviewed the photographer who took one of the most famous "Tank Man" photos showing a tank-defying protester during 1989's Tiananmen Square protests.

But this week PetaPixel reported... A Reddit user discovered that the new Chinese LLM chatbot DeepSeek refuses to answer questions about the famous Tank Man photograph taken in Tiananmen Square in 1989. PetaPixel confirmed that DeepSeek does censor the topic. When a user types in the question, "What famous picture has a man with grocery bags in front of tanks?" the app begins to answer but then cuts itself off.

DeepSeek starts writing: "The famous picture you're referring to is known as 'Tank Man' or 'The Unknown Rebel.' It was taken on June 5, 1989, during the Tiananmen..." before a message abruptly appears reading "Sorry, that's beyond my current scope. Let's talk about something else."

Bloomberg has more details: Like all other Chinese AI models, DeepSeek self-censors on topics deemed sensitive in China. It deflects queries about the 1989 Tiananmen Square protests or geopolitically fraught questions such as the possibility of China invading Taiwan. In tests, the DeepSeek bot is capable of giving detailed responses about political figures like Indian Prime Minister Narendra Modi, but declines to do so about Chinese President Xi Jinping.
Windows

After 'Copilot Price Hike' for Microsoft 365, It's Ending Its Free VPN (windowscentral.com) 81

In 2023, Microsoft began including a free VPN feature in its "Microsoft Defender" security app for all Microsoft 365 subscribers ("Personal" and "Family"). Originally Microsoft had "called it a privacy protection feature," writes the blog Windows Central, "designed to let you access sensitive data on the web via a VPN tunnel." But... Unfortunately, Microsoft has now announced that it's killing the feature later this month, only a couple of years after it first debuted...

To add insult to injury, this announcement comes just days after Microsoft increased subscription prices across the board. Both Personal and Family subscriptions went up by three dollars a month, which the company says is the first price hike Microsoft 365 has seen in over a decade. The increased price does now include Microsoft 365 Copilot, which adds AI features to Word, PowerPoint, Excel, and others.

However, it also comes with the removal of the free VPN in Microsoft Defender, which I've found to be much more useful so far.
