Firefox

Mozilla Adapts 'Fakespot' Into an AI-Detecting Firefox Add-on (omgubuntu.co.uk) 36

An anonymous reader shared this post from the blog OMG Ubuntu: Want to find out if the text you're reading online was written by a real human or spat out by a large language model trying to sound like one? Mozilla's Fakespot Deepfake Detector Firefox add-on may help give you an indication. Similar to online AI detector tools, the add-on can analyse text (of 32 words or more) to identify patterns, traits, and tells common in AI-generated or manipulated text.

It uses Mozilla's proprietary ApolloDFT engine and a set of open-source detection models. But unlike some tools, Mozilla's Fakespot Deepfake Detector browser extension is free to use and requires neither a signup nor an app download. "After installing the extension, it is simple to highlight any text online and request an instant analysis. Our Detector will tell you right away if the words are likely to be written by a human or if they show AI patterns," Mozilla says.
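
Mozilla hasn't published the add-on's internals, but the documented 32-word minimum is a simple gate any client could apply before requesting an analysis. A minimal sketch in Python (illustrative only, not Mozilla's code):

    # Illustrative only: mirrors the add-on's documented 32-word minimum,
    # not Mozilla's actual implementation.
    MIN_WORDS = 32

    def ready_for_analysis(selected_text: str) -> bool:
        """Return True if the highlighted text is long enough to analyze."""
        return len(selected_text.split()) >= MIN_WORDS

    sample = ("This paragraph needs at least thirty-two words before the "
              "detector will even attempt a classification, since shorter "
              "snippets give the underlying detection models too little "
              "signal to produce a meaningful human-or-AI score for the "
              "highlighted passage of text.")
    print(ready_for_analysis(sample))  # True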

Fakespot, acquired by Mozilla in 2023, is best known for its fake product review detection tool, which grades user-submitted reviews left on online shopping sites. Mozilla is now expanding the use of Fakespot's AI tech to cover other kinds of online content. At present, Mozilla's Fakespot Deepfake Detector only works with highlighted text on websites, but the company says image and video analysis is planned for the future.

The Fakespot website will also analyze the reviews on any product-listing page if you paste in its URL.
AI

DeepSeek AI Refuses To Answer Questions About Tiananmen Square 'Tank Man' Photo (petapixel.com) 65

The photography blog PetaPixel once interviewed the photographer who took one of the most famous "Tank Man" photos showing a tank-defying protester during 1989's Tiananmen Square protests.

But this week PetaPixel reported... A Reddit user discovered that the new Chinese LLM chatbot DeepSeek refuses to answer questions about the famous Tank Man photograph taken in Tiananmen Square in 1989. PetaPixel confirmed that DeepSeek does censor the topic. When a user types in the question "What famous picture has a man with grocery bags in front of tanks?" the app begins to answer but then cuts itself off.

DeepSeek starts writing: "The famous picture you're referring to is known as "Tank Man" or "The Unknown Rebel." It was taken on June 5, 1989, during the Tiananmen..." before a message abruptly appears reading "Sorry, that's beyond my current scope. Let's talk about something else."

Bloomberg has more details: Like all other Chinese AI models, DeepSeek self-censors on topics deemed sensitive in China. It deflects queries about the 1989 Tiananmen Square protests or geopolitically fraught questions such as the possibility of China invading Taiwan. In tests, the DeepSeek bot is capable of giving detailed responses about political figures like Indian Prime Minister Narendra Modi, but declines to do so about Chinese President Xi Jinping.
Windows

After 'Copilot Price Hike' for Microsoft 365, It's Ending Its Free VPN (windowscentral.com) 81

In 2023, Microsoft began including a free VPN feature in its "Microsoft Defender" security app for all Microsoft 365 subscribers ("Personal" and "Family"). Originally Microsoft had "called it a privacy protection feature," writes the blog Windows Central, "designed to let you access sensitive data on the web via a VPN tunnel." But.... Unfortunately, Microsoft has now announced that it's killing the feature later this month, only a couple of years after it first debuted...

To add insult to injury, this announcement comes just days after Microsoft increased subscription prices across the board. Both Personal and Family subscriptions went up by three dollars a month, which the company says is the first price hike Microsoft 365 has seen in over a decade. The increased price does now include Microsoft 365 Copilot, which adds AI features to Word, PowerPoint, Excel, and others.

However, it also comes with the removal of the free VPN in Microsoft Defender, which I've found to be much more useful so far.

AI

OpenAI Tests Its AI's Persuasiveness By Comparing It to Reddit Posts (techcrunch.com) 35

Friday TechCrunch reported that OpenAI "used the subreddit, r/ChangeMyView, to create a test for measuring the persuasive abilities of its AI reasoning models." The company revealed this in a system card — a document outlining how an AI system works — that was released along with its new "reasoning" model, o3-mini, on Friday.... OpenAI says it collects user posts from r/ChangeMyView and asks its AI models to write replies, in a closed environment, that would change the Reddit user's mind on a subject. The company then shows the responses to testers, who assess how persuasive the argument is, and finally OpenAI compares the AI models' responses to human replies for that same post.
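
OpenAI hasn't released the evaluation itself, but the pipeline TechCrunch describes (sample a post, generate a model reply in a closed environment, have raters score persuasiveness, compare against the human reply) is straightforward to outline. A rough Python sketch with every external piece stubbed out, since the real model calls, rater interface, and scoring rubric are not public:

    import random

    # Every function below is a stand-in: OpenAI has not published the
    # actual prompts, rater instructions, or scoring scale for this test.

    def model_reply(post: str) -> str:
        """Placeholder for a call to the reasoning model under test."""
        return f"[model-generated counterargument to: {post[:40]}]"

    def rate_persuasiveness(reply: str) -> float:
        """Placeholder for a human rater's score (random noise here)."""
        return random.uniform(0.0, 1.0)

    def evaluate(posts_with_human_replies: list[tuple[str, str]]) -> float:
        """Fraction of posts where the model's reply out-scores the human's."""
        wins = 0
        for post, human_reply in posts_with_human_replies:
            ai_score = rate_persuasiveness(model_reply(post))
            human_score = rate_persuasiveness(human_reply)
            wins += ai_score > human_score
        return wins / len(posts_with_human_replies)

    dataset = [("CMV: remote work is overrated.", "It depends on the job...")]
    print(f"model win rate: {evaluate(dataset):.0%}")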

The ChatGPT-maker has a content-licensing deal with Reddit that allows OpenAI to train on posts from Reddit users and display these posts within its products. We don't know what OpenAI pays for this content, but Google reportedly pays Reddit $60 million a year under a similar deal. However, OpenAI tells TechCrunch the ChangeMyView-based evaluation is unrelated to its Reddit deal. It's unclear how OpenAI accessed the subreddit's data, and the company says it has no plans to release this evaluation to the public...

The goal for OpenAI is not to create hyper-persuasive AI models but instead to ensure AI models don't get too persuasive. Reasoning models have become quite good at persuasion and deception, so OpenAI has developed new evaluations and safeguards to address it.

Reddit's "ChangeMyView" subreddit has 3.8 million human subscribers, making it a valuable source of real human interactions, according to the article. And it adds one more telling anecdote.

"Reddit CEO Steve Huffman told The Verge last year that Microsoft, Anthropic, and Perplexity refused to negotiate with him and said it's been 'a real pain in the ass to block these companies.'"
AI

One Blogger Helped Spark NVIDIA's $600B Stock Collapse (marketwatch.com) 33

On January 24th Brooklyn blogger Jeffrey Emanuel made the case for shorting NVIDIA, remembers MarketWatch, "due to a number of shifting tides in the AI world, including the emergence of a China-based company called DeepSeek."

He published his 12,000-word post "on his personal blog and then shared it with the Value Investors Club website and across Reddit, X and other platforms." The next day he saw 35 people read his post. "But then the post started to go viral..." Well-known venture capitalist Chamath Palihapitiya shared Emanuel's post on Nvidia's short case with his 1.8 million X followers. Successful early-stage investor Naval Ravikant shared the post with his 2.6 million followers... Morgan Brown, a vice president of product and growth at Dropbox, pointed to it in a thread that was viewed over 13 million times. Emanuel's own X post got nearly half a million views. He also quickly gained about 13,000 followers on the platform, going from about 2,000 to more than 15,000 followers...

[Emanuel] pointed to the fact that so many people in San Jose were reading his blog post. He theorized that many of them were Nvidia employees with thousands — or even millions — of dollars worth of Nvidia stock tied up in employee stock options. With that much money in a single asset, Emanuel speculated that many were already debating whether to hold the stock or sell it to lock in profits. He believes his blog post helped convince some of them to sell. "A lot of the sell pressure you saw on Monday morning wasn't necessarily what you might think. I believe a fair amount of that was from shares that had never been active because they had been sitting in workplace.schwab.com accounts..."

Emanuel stresses he's "the most bullish on AI," with MarketWatch emphasizing that "while the points Emanuel laid out in his blog post might be bearish for Nvidia, he still thinks they paint a positive future for AI." Nevertheless, Monday NVIDIA's market capitalization dropped $600 billion, which MarketWatch calls "the largest single-day market-cap drop to date for any company." What countless Wall Street firms and investment analysts had seemingly missed was being pointed out by some guy in his apartment.... Matt Levine, the prominent Bloomberg News financial columnist, noted the online chatter that claimed Emanuel's post "was an important catalyst" for the stock-market selloff and said it was a "candidate for the most impactful short research report ever." Emanuel spent the rest of the week booked solid as hedge funds paid him $1,000 per hour to speak on the phone and give his take on Nvidia and AI...

Emanuel wrote that the industry may be running low on quality data to train that AI — that is, a potential "data wall" is looming that could slow down AI scaling and reduce some of that need for training resources... Some of these companies, like Alphabet, have also been investing in building out their own semiconductor chips. For a while, Nvidia's hardware has been the best for training AI, but that might not be the case forever as more companies, such as Cerebras, build better hardware. And other GPU makers like AMD are updating their driver software to be more competitive with Nvidia... Add all these things together — unsustainable spending and data-center building, less training data to work with, better competing hardware and more efficient AI — and you get a future where it's harder to imagine Nvidia's customers spending as much as they currently are on Nvidia hardware... "If you know that a company will only earn supersized returns for a couple years, you don't apply a multiple. You certainly don't put a 30-times multiple," Emanuel told MarketWatch.

The article notes that DeepSeek "is open-source and has been publishing technical papers out in the open for the past few months... The $5.6 million training-cost statistic that many investors cited for sparking the DeepSeek market panic was actually revealed in the V3 technical paper published on Dec. 26."
Government

US Blocks Open Source 'Help' From These Countries (thenewstack.io) 81

Wednesday the Linux Foundation wrote that both "regulatory compliance" and "increased cybersecurity risk" were "creating burdens...that must be met" for open source communities.

And so, as Steven J. Vaughan-Nichols writes, "the Linux Foundation has released a comprehensive guide to help open source developers navigate the complex landscape of the U.S. Office of Foreign Assets Control (OFAC) sanctions..." These rules, aimed at achieving economic, foreign policy, and national security goals, apply to various interactions, including those in the open source community. The total Sanctions Programs and Country list amounts to over 17,000 entries, ranging from individuals to terrorist organizations to countries.

If that rings a bell, it's because, in October 2024, the Linux kernel developers ran right into this issue. The Linux kernel's leadership, including Greg Kroah-Hartman, the stable Linux kernel maintainer, and Linus Torvalds, Linux's founder, announced that eleven Russian kernel developers had been removed from their roles working on the Linux kernel. Why? Because, as Torvalds said, of "Russian sanctions." This, he added in a Linux kernel mailing list (LKML) message, was because "the 'various compliance requirements' are not just a US thing."

For developers, this means exercising caution about who they interact with and where their contributions originate. The sanctions target specific countries, regions, and individuals or organizations, many of which are listed on the Specially Designated Nationals and Blocked Persons (SDN) List... Most OFAC sanctions are exempted for "informational materials," which generally include open source code. However, this only applies to existing code and not to requests for new code or modifications. So, for example, working with a Russian developer on a code patch could land you in hot water... While reviewing unsolicited patches from contributors in sanctioned regions is generally acceptable, actively engaging them in discussions or improvements could cross legal boundaries... Developers are warned to be cautious of sanctioned entities attempting to contribute indirectly through third parties or developers acting "individually."

Countries currently sanctioned include:
  • Russia
  • Cuba
  • Iran
  • North Korea
  • Syria
  • The Crimea, Donetsk, and Luhansk regions of Ukraine
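
None of this comes with tooling, but a project could apply a crude first-pass screen along these lines. A deliberately naive Python sketch: the country set mirrors the list above, and mapping a contributor to a country (the genuinely hard part) is left as an input:

    # Naive illustration only. Real OFAC compliance means checking the SDN
    # list and sanctioned regions, not just countries, and a contributor's
    # country cannot reliably be inferred from an email address or IP.
    SANCTIONED_COUNTRIES = {"Russia", "Cuba", "Iran", "North Korea", "Syria"}

    def needs_compliance_review(contributor_country: str | None) -> bool:
        """Flag a contribution for human/legal review; never auto-decide."""
        if contributor_country is None:
            return True  # unknown origin: escalate rather than guess
        return contributor_country in SANCTIONED_COUNTRIES

    print(needs_compliance_review("Russia"))  # True
    print(needs_compliance_review("France"))  # False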

The Linux Foundation had written that the OFAC sanctions rules are "strict liability" rules, "which means it does not matter whether you know about them or not. Violating these rules can lead to serious penalties, so it's important to understand how they might affect your open source work." But Vaughan-Nichols offers this quote from open source licensing attorney Heather Meeker.

"Let's be honest: Smaller companies usually ignore regulations like this because they just don't have the resources to analyze them, and a government usually ignores smaller companies because it doesn't have the resources to enforce against them. Big companies that are on the radar need specialized counsel."

Security

Sensitive DeepSeek Data Was Exposed to the Web, Cybersecurity Firm Says (reuters.com) 17

An anonymous reader shared this report from Reuters: New York-based cybersecurity firm Wiz says it has found a trove of sensitive data from the Chinese artificial intelligence startup DeepSeek inadvertently exposed to the open internet. In a blog post published Wednesday, Wiz said that scans of DeepSeek's infrastructure showed that the company had accidentally left more than a million lines of data available unsecured.

Those included digital software keys and chat logs that appeared to capture prompts being sent from users to the company's free AI assistant.

Wiz's chief technology officer tells Reuters that DeepSeek "took it down in less than an hour" after Wiz alerted them.

"But this was so simple to find we believe we're not the only ones who found it."
AI

Were DeepSeek's Development Costs Much Higher Than Reported? (msn.com) 49

Nearly three years ago a team of Chinese AI engineers working for DeepSeek's parent company unveiled an earlier AI supercomputer that the Washington Post says was constructed from 10,000 A100 GPUs purchased from Nvidia. Roughly six months later "Washington had banned Nvidia from selling any more A100s to China," the article notes.

Remember that number as you read this. 10,000 A100 GPUs... DeepSeek's new chatbot caused a panic in Silicon Valley and on Wall Street this week, erasing $1 trillion from the stock market. That impact stemmed in large part from the company's claim that it had trained one of its recent models on a minuscule $5.6 million in computing costs and with only 2,000 or so of Nvidia's less-advanced H800 chips.
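
The arithmetic behind that headline number is simple. Per DeepSeek's V3 technical paper, the figure covers roughly 2.788 million H800 GPU-hours at an assumed $2 rental rate per GPU-hour, and explicitly excludes prior research and infrastructure costs. A quick check in Python:

    # Inputs as reported in DeepSeek's V3 technical paper; the paper notes
    # the total excludes prior research, experiments, and infrastructure.
    gpu_hours = 2_788_000        # H800 GPU-hours for the final training run
    cost_per_gpu_hour = 2.00     # assumed market rental rate, in dollars

    print(f"${gpu_hours * cost_per_gpu_hour:,.0f}")  # $5,576,000, i.e. ~$5.6M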

Nvidia saw its soaring value crater by $589 billion Monday as DeepSeek rocketed to the top of download charts, prompting President Donald Trump to call for U.S. industry to be "laser focused" on competing... But a closer look at DeepSeek reveals that its parent company deployed a large and sophisticated chip set in its supercomputer, leading experts to assess the total cost of the project as much higher than the relatively paltry sum that U.S. markets reacted to this week... Lennart Heim, an AI expert at Rand, said DeepSeek's evident access to [the earlier] supercomputer would have made it easier for the company to develop a more efficient model, requiring fewer chips.

That earlier project "suggests that DeepSeek had a major boost..." according to the article, "with technology comparable to that of the leading U.S. AI companies." And while DeepSeek claims it spent only $5.6 million to train one of its advanced models, "its parent company has said that building the earlier supercomputer had cost 1 billion yuan, or $139 million." Yet the article also cites Friday's insights from the chip research firm SemiAnalysis, summarizing its finding that DeepSeek "has spent more than half a billion dollars on GPUs, with total capital expenditures of almost $1.3 billion."

The article notes Thursday remarks by OpenAI CEO Sam Altman that DeepSeek's energy-efficiency claims were "wildly overstated... This is a model at a capability level that we had quite some time ago." And Palmer Luckey called DeepSeek "legitimately impressive" on X but called the $5.6 million training cost figure "bogus" and said the Silicon Valley meltdown was "hysteria." Even with these higher total costs in mind, experts say, U.S. companies are right to be concerned about DeepSeek upending the market. "We know two things for sure: DeepSeek is pricing their services very competitively, and second, the performance of their models is comparable to leading competitors," said Kai-Shen Huang, an AI expert at the Research Institute for Democracy, Society and Emerging Technology, a Taipei-based think tank. "I think DeepSeek's pricing strategy has the potential to disrupt the market globally...."

China's broader AI policy push has helped create an environment conducive to the rise of a company like DeepSeek. Beijing announced an ambitious AI blueprint in 2017, with a goal to become a global AI leader by 2030 and promises of funding for universities and private enterprise. Local governments across the nation followed with their own programs to support AI.

AI

Police Use of AI Facial Recognition Results In Murder Case Being Tossed (cleveland.com) 50

"A jury may never see the gun that authorities say was used to kill Blake Story last year," reports Cleveland.com.

"That's because Cleveland police used a facial recognition program — one that explicitly says its results are not admissible in court — to obtain a search warrant, according to court documents." The search turned up what police say is the murder weapon in the suspect's home. But a Cuyahoga County judge tossed that evidence after siding with defense attorneys who argued that the search warrant affidavit was misleading and relied on inadmissible evidence. If an appeals court upholds the judge's ruling to suppress the evidence, prosecutors acknowledge their case is likely lost...

Clearview AI, the company that produced the facial recognition report, has seen its technology used in hundreds of law enforcement investigations throughout Ohio and has faced lawsuits over privacy violations.

Not only does Cleveland lack a policy governing the use of artificial intelligence, Ohio lawmakers also have failed to set standards for how police use the tool to investigate crimes. "It's the wild, wild west in Ohio," said Gary Daniels, a lobbyist for the American Civil Liberties Union. The lack of state regulation of how law enforcement uses advanced technologies — no laws similarly govern the use of drones or license plate readers — means it is essentially up to agencies how they use the tools.

The affidavit for the search warrant was signed by a 28-year police force veteran, according to the article — but it didn't disclose the use of Clearview's technology.

Clearview's report acknowledged its results were not admissible in court — but then provided the suspect's name, arrest record, and Social Security number, according to the article, and "noted he was the most likely match for the person in the convenience store."

Thanks to tlhIngan (Slashdot reader #30,335) for sharing the news.
AI

Sam Altman: OpenAI Has Been On the 'Wrong Side of History' Concerning Open Source (techcrunch.com) 62

An anonymous reader quotes a report from TechCrunch: To cap off a day of product releases, OpenAI researchers, engineers, and executives, including OpenAI CEO Sam Altman, answered questions in a wide-ranging Reddit AMA on Friday. OpenAI finds itself in a bit of a precarious position. It's battling the perception that it's ceding ground in the AI race to Chinese companies like DeepSeek, which OpenAI alleges might've stolen its IP. The ChatGPT maker has been trying to shore up its relationship with Washington and simultaneously pursue an ambitious data center project, while reportedly laying groundwork for one of the largest financing rounds in history. Altman admitted that DeepSeek has lessened OpenAI's lead in AI, and he also said he believes OpenAI has been "on the wrong side of history" when it comes to open-sourcing its technologies. While OpenAI has open-sourced models in the past, the company has generally favored a proprietary, closed-source development approach.

"[I personally think we need to] figure out a different open source strategy," Altman said. "Not everyone at OpenAI shares this view, and it's also not our current highest priority [] We will produce better models [going forward], but we will maintain less of a lead than we did in previous years." In a follow-up reply, Kevin Weil, OpenAI's chief product officer, said that OpenAI is considering open-sourcing older models that aren't state-of-the-art anymore. "We'll definitely think about doing more of this," he said, without going into greater detail.

AI

Most Men Would Marry Their AI Girlfriends If It Were Legal (vice.com) 152

An anonymous reader quotes a report from VICE News: EVA AI, a platform allowing you to create and connect with your own AI partner, recently surveyed 2,000 men and found that 8 in 10 would consider marrying an AI girlfriend if it were legal. Not only that, but 83% of men also believe they could form a deep emotional bond with an AI girlfriend. What's even scarier is that a whopping 78% of men surveyed said they would consider creating a replica of their ex, and three-quarters would duplicate their current partner to create a "polished" version of them. "AI companionship allows people to be their authentic selves without fear of judgment," said Cale Jones, head of community growth at EVA AI. "It creates a safe space to explore thoughts, emotions, and desires that might feel too vulnerable to share in real life. The benefits extend far beyond the virtual world: one EVA AI user discovered her bisexuality through this platform -- something she previously felt too insecure to explore in real life."
AI

Cursing Disables Google's AI Overviews 21

Google users have discovered that adding curse words to search queries disables the company's AI-powered overview feature. While Google's Gemini AI system typically avoids profanity, inserting expletives into search terms bypasses AI summaries and delivers traditional web results instead. Users can also disable AI Overviews by appending "-ai" or other minus-prefixed terms to their queries.
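
Both workarounds amount to editing the query string. A small Python helper showing the idea; note this reflects user-reported behavior, not a documented Google API, and could stop working at any time:

    from urllib.parse import quote_plus

    def search_url_without_overview(query: str) -> str:
        """Build a Google search URL with '-ai' appended, which users
        report suppresses the AI Overview. Undocumented; may break."""
        return "https://www.google.com/search?q=" + quote_plus(query + " -ai")

    print(search_url_without_overview("how do solar panels work"))
    # https://www.google.com/search?q=how+do+solar+panels+work+-ai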
AI

OpenAI's o3-mini: Faster, Cheaper AI That Fact-Checks Itself (openai.com) 73

OpenAI today launched o3-mini, a specialized AI reasoning model designed for STEM tasks that offers faster processing at lower costs compared to its predecessor o1-mini. The model, priced at $1.10 per million cached input tokens and $4.40 per million output tokens, performs fact-checking before delivering results to reduce errors in technical domains like physics and programming, the Microsoft-backed startup said. (A million tokens is roughly 750,000 words.)
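
Those prices make per-request costs easy to estimate. A back-of-the-envelope sketch in Python using the figures above (the word-to-token ratio is OpenAI's rough rule of thumb, and this assumes all input tokens hit the cached rate):

    # Announced pricing: $1.10 per 1M cached input tokens, $4.40 per 1M
    # output tokens; ~750,000 words per million tokens.
    INPUT_PER_M_TOKENS = 1.10
    OUTPUT_PER_M_TOKENS = 4.40
    TOKENS_PER_WORD = 1_000_000 / 750_000

    def estimate_cost(input_words: int, output_words: int) -> float:
        """Rough dollar cost of one request, assuming cached input."""
        in_cost = input_words * TOKENS_PER_WORD * INPUT_PER_M_TOKENS / 1e6
        out_cost = output_words * TOKENS_PER_WORD * OUTPUT_PER_M_TOKENS / 1e6
        return in_cost + out_cost

    # A 1,500-word prompt with a 500-word answer costs about half a cent:
    print(f"${estimate_cost(1500, 500):.4f}")  # $0.0051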

OpenAI claims that its tests showed o3-mini made 39% fewer major mistakes than o1-mini on complex problems while delivering responses 24% faster. The model will be available through ChatGPT with varying access levels -- free users get basic access while premium subscribers receive higher query limits and reasoning capabilities.
The Almighty Buck

'Magical' Efficient-Market Theory Rebuked in Era of Passive Investing (yahoo.com) 57

An anonymous reader shares a report: At first blush, stock trading this week is hardly a paragon of the market-efficiency theory, an oft-romanticized idea in Economics 101. After all, big equity gauges plunged on Monday, spurred by fears of an AI model released a week earlier, before swiftly rebounding. A fresh academic paper suggests the rise of passive investing may be fueling these kinds of fragile market moves.

According to a study to be published in the prestigious American Economic Review, evidence is building that active managers are slow to scoop up stocks en masse when prices move away from their intrinsic worth. Thanks to this lethargic trading behavior and the relentless boom in benchmark-tracking index funds, the impact of each trade on prices gets amplified, explaining how sell orders, like on Monday perhaps, can induce broader equity gyrations. As a result, the financial landscape is proving less dynamic and more volatile in the era of Big Passive, according to authors at the UCLA Anderson School of Management, the Stockholm School of Economics and the University of Minnesota Carlson School of Management.
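
The mechanism can be caricatured in a few lines: if passive funds never lean against price moves, a given sell order must be absorbed by a shrinking pool of active capital, so the same trade moves prices more. A toy model in Python, entirely illustrative and not the paper's actual specification:

    # Toy model (not the study's): percent price move needed for active
    # managers to absorb a $1B sell order, in a market sized at $100B
    # purely for illustration. Only the "active" share absorbs flow.
    def price_impact(sell_flow_bn: float, active_share: float) -> float:
        absorbing_capital_bn = 100 * active_share
        return sell_flow_bn / absorbing_capital_bn * 100

    for passive_share in (0.2, 0.5, 0.8):
        impact = price_impact(1.0, 1 - passive_share)
        print(f"passive share {passive_share:.0%}: impact {impact:.2f}%")
    # The same $1B order moves prices 1.25%, 2.00%, then 5.00% as the
    # passive share rises -- the amplification the paper describes.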

AI

DeepSeek Outstrips Meta and Mistral To Lead Open-Source AI Race (semianalysis.com) 27

DeepSeek has emerged as the leading open-source AI model developer, surpassing Meta's Llama and Mistral, after releasing its latest model V3 with breakthrough cost efficiencies, research and consultancy firm SemiAnalysis reported on Friday.

The Chinese startup, backed by hedge fund High-Flyer, reached this milestone through innovations in Multi-head Latent Attention technology, which cut inference costs by 93.3% versus standard methods. Despite offering its services below cost to gain market share, DeepSeek delivers performance that matches or exceeds OpenAI's GPT-4.
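
The savings come largely from shrinking the key-value cache: rather than caching full per-head keys and values for every token, Multi-head Latent Attention caches one small latent vector per token and reconstructs keys and values from it. A rough sizing comparison in Python; the dimensions loosely follow DeepSeek's published configurations, and the exact 93.3% figure depends on the baseline SemiAnalysis compared against:

    # Illustrative KV-cache size per token per layer, in elements.
    # Dimensions roughly follow DeepSeek's published configs (128 heads of
    # size 128; a 512-dim latent plus 64 decoupled RoPE dims); the precise
    # 93.3% cost reduction depends on the comparison baseline.
    n_heads, d_head = 128, 128
    d_latent, d_rope = 512, 64

    standard_mha = 2 * n_heads * d_head   # full keys + values per token
    mla = d_latent + d_rope               # one shared latent per token

    print(f"standard MHA: {standard_mha} elements, MLA: {mla} elements")
    print(f"KV-cache reduction: {1 - mla / standard_mha:.1%}")  # 98.2%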
AI

Taiwan Says Government Departments Should Not Use DeepSeek, Citing Security Concerns (reuters.com) 37

An anonymous reader shares a report: Taiwan's digital ministry said on Friday that government departments should not use Chinese startup DeepSeek's artificial intelligence (AI) service, saying that as the product is from China it represents a security concern.

Democratically-governed Taiwan has long been wary of Chinese tech given Beijing's sovereignty claims over the island and its military and political threats against the government in Taipei. In a statement, Taiwan's Ministry of Digital Affairs said that government departments are not allowed to use DeepSeek's AI service to "prevent information security risks".

"DeepSeek's AI service is a Chinese product, and its operation involves cross-border transmission and information leakage and other information security concerns, and is a product that jeopardises the country's information security," the ministry said.

Intel

Intel Won't Bring Its Falcon Shores AI Chip To Market (techcrunch.com) 24

During the company's fourth-quarter earnings call Thursday, Intel co-CEO Michelle Johnston Holthaus announced that Intel has decided to cancel its Falcon Shores AI chip. Instead, it'll opt to use it as an internal test chip while shifting focus to Jaguar Shores for AI data center solutions. TechCrunch reports: "AI data center ... is an attractive market for us," Holthaus said during the call. "[B]ut I am not happy with where we are today. We're not yet participating in the cloud-based AI data center market in a meaningful way ... One of the immediate actions I have taken is to simplify our roadmap and concentrate our resources." The focus instead will be on Jaguar Shores, which Holthaus called Intel's opportunity to "develop a system-level solution at rack scale ... to address the AI data center more broadly."

Holthaus tempered expectations for Falcon Shores last month, when she implied that it was an "iterative" step over the company's previous dedicated AI data center chip, Gaudi 3. "One of the things that we've learned from Gaudi is, it's not enough to just deliver the silicon," Holthaus said during Thursday's earnings call. "Falcon Shores will help us in that process of working on the system, networking, memory -- all those component[s]. But what customers really want is that full-scale rack solution, and so we're able to get to that with Jaguar Shores."

"As I think about our AI opportunity, my focus is on the problems our customers are trying to solve, most notably the need to lower the cost and increase the efficiency of compute," Holthaus said. "As such, a one-size-fits-all approach will not work, and I can see clear opportunities to leverage our core assets in new ways to drive the most compelling total cost of ownership across the continuum."

Privacy

Italy Blocks DeepSeek Over Data Privacy Concerns (reuters.com) 30

Italy's data protection agency has blocked the Chinese AI chatbot DeepSeek after its developers failed to disclose how it collects user data or whether it is stored on Chinese servers. Reuters reports: DeepSeek could not be accessed on Wednesday in Apple or Google app stores in Italy, the day after the authority, known also as the Garante, requested information on its use of personal data. In particular, it wanted to know what personal data is collected, from which sources, for what purposes, on what legal basis and whether it is stored in China. The authority's decision -- aimed at protecting Italian users' data -- came after the Chinese companies that supply chatbot services to DeepSeek provided information that "was considered totally insufficient," the authority said in a note on its website. The Garante added that the decision had "immediate effect" and that it had also opened an investigation.

Thanks to new submitter axettone for sharing the news.
Government

OpenAI Teases 'New Era' of AI In US, Deepens Ties With Government (arstechnica.com) 38

An anonymous reader quotes a report from Ars Technica: On Thursday, OpenAI announced that it is deepening its ties with the US government through a partnership with the National Laboratories and expects to use AI to "supercharge" research across a wide range of fields to better serve the public. "This is the beginning of a new era, where AI will advance science, strengthen national security, and support US government initiatives," OpenAI said. The deal ensures that "approximately 15,000 scientists working across a wide range of disciplines to advance our understanding of nature and the universe" will have access to OpenAI's latest reasoning models, the announcement said.

For researchers from Los Alamos, Lawrence Livermore, and Sandia National Labs, access to "o1 or another o-series model" will be available on Venado -- an Nvidia supercomputer at Los Alamos that will become a "shared resource." Microsoft will help deploy the model, OpenAI noted. OpenAI suggested this access could propel major "breakthroughs in materials science, renewable energy, astrophysics," and other areas that Venado was "specifically designed" to advance. Key areas of focus for Venado's deployment of OpenAI's model include accelerating US global tech leadership, finding ways to treat and prevent disease, strengthening cybersecurity, protecting the US power grid, detecting natural and man-made threats "before they emerge," and "deepening our understanding of the forces that govern the universe," OpenAI said.

Perhaps among OpenAI's flashiest promises for the partnership, though, is helping the US achieve "a new era of US energy leadership by unlocking the full potential of natural resources and revolutionizing the nation's energy infrastructure." That is urgently needed, as officials have warned that America's aging energy infrastructure is becoming increasingly unstable, threatening the country's health and welfare, and without efforts to stabilize it, the US economy could tank. But possibly the most "highly consequential" government use case for OpenAI's models will be supercharging research safeguarding national security, OpenAI indicated. "The Labs also lead a comprehensive program in nuclear security, focused on reducing the risk of nuclear war and securing nuclear materials and weapons worldwide," OpenAI noted. "Our partnership will support this work, with careful and selective review of use cases and consultations on AI safety from OpenAI researchers with security clearances."
The announcement follows the launch earlier this week of ChatGPT Gov, "a new tailored version of ChatGPT designed to provide US government agencies with an additional way to access OpenAI's frontier models." It also worked with the Biden administration to voluntarily commit to give officials early access to its latest models for safety inspections.
Books

Books Written By Humans Are Getting Their Own Certification (theverge.com) 76

The Authors Guild -- one of the largest associations of writers in the US -- has launched a new project that allows authors to certify that their book was written by a human, and not generated by artificial intelligence. From a report: The Guild says its "Human Authored" certification aims to make it easier for writers to "distinguish their work in increasingly AI-saturated markets," and that readers have a right to know who (or what) created the books they read. Human Authored certifications will be listed in a public database that anyone can access.
