Australia

Q-CTRL Unveils Jam-Proof Positioning System That's 50x More Accurate Than GPS (interestingengineering.com) 101

schwit1 shares a report from Interesting Engineering: Australia's Q-CTRL has developed a new system called "Ironstone Opal," which uses quantum sensors to navigate without GPS. It's passive (meaning it doesn't emit signals that could be detected or jammed) and highly accurate. Instead of relying on satellites, Q-CTRL's system reads the Earth's magnetic field, which varies slightly depending on location (like a magnetic fingerprint or map). The system determines where you are by measuring these variations with magnetometers. This is made possible by the company's proprietary quantum sensors, which are incredibly sensitive and stable. The system also comes with AI-based software that filters out interference like vibrations or electromagnetic noise (what the company calls "software ruggedization"). The system is compact and could, in theory, be installed in drones, cars, and, of course, aircraft.
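To make the "magnetic fingerprint" idea concrete, here is a toy sketch of magnetic map matching: a vehicle logs a short profile of noisy magnetometer readings, and we search a reference anomaly map for the track that best explains that profile. The synthetic map, the straight east-west track, and the brute-force least-squares search are all illustrative assumptions; Q-CTRL has not published its actual sensor-fusion or "software ruggedization" pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic magnetic-anomaly map: deviation from the background field, in nanotesla.
mag_map = rng.normal(0.0, 200.0, size=(200, 200))

# A vehicle travels east along row 120 starting at column 40 and logs
# 30 noisy magnetometer readings, one per map cell it crosses.
true_row, true_col = 120, 40
track = mag_map[true_row, true_col:true_col + 30] + rng.normal(0.0, 5.0, size=30)

# Map matching: slide the measured profile over every candidate track on the
# map and keep the position whose stored profile fits best (least squares).
best_pos, best_err = None, np.inf
for r in range(mag_map.shape[0]):
    for c in range(mag_map.shape[1] - track.size):
        err = np.sum((mag_map[r, c:c + track.size] - track) ** 2)
        if err < best_err:
            best_pos, best_err = (r, c), err

print("true start:", (true_row, true_col), "estimated start:", best_pos)
```

Fielded map-aided navigation systems typically fuse matches like this with an inertial navigation system rather than relying on a single profile, so the position estimate keeps updating between matches.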

Q-CTRL ran live tests on the ground and in the air to validate the technology. As anticipated, they found that it could operate completely independently of GPS. Moreover, the company reports that its quantum GPS was 50 times more accurate than traditional GPS backup systems (such as inertial navigation systems, or INS). The system also delivered navigation precision on par with hitting a bullseye from 1,000 yards. Even when the equipment was mounted inside a plane, where interference is much worse, it outperformed existing systems by at least 11x. This is the first time quantum technology has been shown to outperform existing tech in a real-world commercial or military application, a milestone referred to as achieving "quantum advantage."

AI

Police Using AI Personas to Infiltrate Online Activist Spaces, Records Reveal (wired.com) 77

samleecole shares a report from 404 Media and Wired: American police departments near the United States-Mexico border are paying hundreds of thousands of dollars for an unproven and secretive technology that uses AI-generated online personas designed to interact with and collect intelligence on "college protesters," "radicalized" political activists, and suspected drug and human traffickers, according to internal documents, contracts, and communications 404 Media obtained via public records requests. Massive Blue, the New York-based company that is selling police departments this technology, calls its product Overwatch, which it markets as an "AI-powered force multiplier for public safety" that "deploys lifelike virtual agents, which infiltrate and engage criminal networks across various channels." According to a presentation obtained by 404 Media, Massive Blue is offering cops these virtual personas that can be deployed across the internet with the express purpose of interacting with suspects over text messages and social media. [...]

While the documents don't describe every technical aspect of how Overwatch works, they do give a high-level overview of what it is. The company describes a tool that uses AI-generated images and text to create social media profiles that can interact with suspected drug traffickers, human traffickers, and gun traffickers. After Overwatch scans open social media channels for potential suspects, these AI personas can also communicate with suspects over text, Discord, and other messaging services. The documents we obtained don't explain how Massive Blue determines who is a potential suspect based on their social media activity. Salzwedel, of Pinal County, said "Massive Blue's solutions crawl multiple areas of the Internet, and social media outlets are just one component. We cannot disclose any further information to preserve the integrity of our investigations." [...] Besides scanning social media and engaging suspects with AI personas, the presentation says that Overwatch can use generative AI to create "proof of life" images of a person holding a sign with a username and date written on it in pen.

AI

Microsoft Researchers Develop Hyper-Efficient AI Model That Can Run On CPUs 59

Microsoft has introduced BitNet b1.58 2B4T, the largest-scale 1-bit AI model to date with 2 billion parameters and the ability to run efficiently on CPUs. It's openly available under an MIT license. TechCrunch reports: The Microsoft researchers say that BitNet b1.58 2B4T is the first bitnet with 2 billion parameters, "parameters" being largely synonymous with "weights." Trained on a dataset of 4 trillion tokens -- equivalent to about 33 million books, by one estimate -- BitNet b1.58 2B4T outperforms traditional models of similar sizes, the researchers claim.

BitNet b1.58 2B4T doesn't sweep the floor with rival 2 billion-parameter models, to be clear, but it seemingly holds its own. According to the researchers' testing, the model surpasses Meta's Llama 3.2 1B, Google's Gemma 3 1B, and Alibaba's Qwen 2.5 1.5B on benchmarks including GSM8K (a collection of grade-school-level math problems) and PIQA (which tests physical commonsense reasoning skills). Perhaps more impressively, BitNet b1.58 2B4T is speedier than other models of its size -- in some cases, twice the speed -- while using a fraction of the memory.

There is a catch, however. Achieving that performance requires using Microsoft's custom framework, bitnet.cpp, which only works with certain hardware at the moment. Absent from the list of supported chips are GPUs, which dominate the AI infrastructure landscape.
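For context, the "1-bit" label really means ternary weights: each weight is stored as -1, 0, or +1, about log2(3) ≈ 1.58 bits, hence "b1.58." Below is a minimal NumPy sketch of that idea, loosely following the absmean quantization recipe described in the BitNet papers; the real model also quantizes activations, and bitnet.cpp supplies optimized CPU kernels rather than anything this naive.

```python
import numpy as np

def ternary_quantize(W):
    """Quantize a weight matrix to {-1, 0, +1} with one absmean scale factor."""
    scale = np.mean(np.abs(W)) + 1e-8            # absmean scaling
    Wq = np.clip(np.round(W / scale), -1, 1)     # ternary weights
    return Wq.astype(np.int8), scale

def ternary_matmul(x, Wq, scale):
    """Matrix multiply with ternary weights; per weight this needs only an
    add, a subtract, or nothing, plus a single rescale at the end."""
    return (x @ Wq) * scale

# Toy check of how much accuracy the quantization costs on random data.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4)).astype(np.float32)
x = rng.normal(size=(2, 8)).astype(np.float32)
Wq, s = ternary_quantize(W)
print(np.abs(ternary_matmul(x, Wq, s) - x @ W).max())
```

Because every weight is -1, 0, or +1, the dense matrix multiply collapses into additions and subtractions, and each weight needs only about two bits of storage, which is where the CPU speed and memory savings come from.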
Education

Google Is Gifting Gemini Advanced To US College Students 30

Google is offering all U.S. college students a free year of its Gemini Advanced AI tools through its Google One AI Premium plan, as part of a push to expand Gemini's user base and compete with ChatGPT. It includes access to the company's Pro models, Veo 2 video generation, NotebookLM, Gemini Live and 2TB of Drive storage. Ars Technica reports: Google has a new landing page for the deal, allowing eligible students to sign up for their free Google One AI Premium plan. The offer is valid from now until June 30. Anyone who takes Google up on it will enjoy the free plan through spring 2026. The company hasn't specified an end date, but we would wager it will be June of next year. Google's intention is to give students an entire school year of Gemini Advanced from now through finals next year. At the end of the term, you can bet Google will try to convert students to paying subscribers.

As for who qualifies as a "student" in this promotion, Google isn't bothering with a particularly narrow definition. As long as you have a valid .edu email address, you can sign up for the offer. That's something that plenty of people who are not actively taking classes still have. You probably won't even be taking undue advantage of Google if you pretend to be a student -- the company really, really wants people to use Gemini, and it's willing to lose money in the short term to make that happen.
Privacy

ChatGPT Models Are Surprisingly Good At Geoguessing (techcrunch.com) 15

An anonymous reader quotes a report from TechCrunch: There's a somewhat concerning new trend going viral: People are using ChatGPT to figure out the location shown in pictures. This week, OpenAI released its newest AI models, o3 and o4-mini, both of which can uniquely "reason" through uploaded images. In practice, the models can crop, rotate, and zoom in on photos -- even blurry and distorted ones -- to thoroughly analyze them. These image-analyzing capabilities, paired with the models' ability to search the web, make for a potent location-finding tool. Users on X quickly discovered that o3, in particular, is quite good at deducing cities, landmarks, and even restaurants and bars from subtle visual clues.

In many cases, the models don't appear to be drawing on "memories" of past ChatGPT conversations, or on EXIF data, the metadata attached to photos that reveals details such as where the photo was taken. X is filled with examples of users giving ChatGPT restaurant menus, neighborhood snaps, facades, and self-portraits, and instructing o3 to imagine it's playing "GeoGuessr," an online game that challenges players to guess locations from Google Street View images. It's an obvious potential privacy issue. There's nothing preventing a bad actor from screenshotting, say, a person's Instagram Story and using ChatGPT to try to doxx them.
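The EXIF point is easy to make concrete: when location metadata is present, any standard image library can read GPS coordinates straight out of the file, no AI required. A minimal Pillow sketch (the file name is hypothetical and error handling is omitted):

```python
from PIL import Image

GPS_IFD = 0x8825  # EXIF pointer to the GPS information block

def gps_from_photo(path):
    """Return (lat, lon) in decimal degrees if the photo carries GPS EXIF tags."""
    exif = Image.open(path).getexif()
    gps = exif.get_ifd(GPS_IFD)        # requires a reasonably recent Pillow
    if not gps:
        return None                    # no location metadata embedded
    def to_degrees(dms, ref):
        deg = float(dms[0]) + float(dms[1]) / 60 + float(dms[2]) / 3600
        return -deg if ref in ("S", "W") else deg
    # GPS sub-tags per the EXIF spec: 1=LatRef, 2=Latitude, 3=LonRef, 4=Longitude.
    return to_degrees(gps[2], gps[1]), to_degrees(gps[4], gps[3])

print(gps_from_photo("vacation.jpg"))  # e.g. (48.8584, 2.2945), or None
```

What makes o3's geoguessing notable is the opposite case: it can often localize a photo even after metadata like this has been stripped.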

AI

Bot Students Siphon Millions in Financial Aid from US Community Colleges (voiceofsandiego.org) 47

Fraud rings using fake "bot" students have infiltrated America's community colleges, stealing over $11 million from California's system alone in 2024. The nationwide scheme, which began in 2021, targets open-admission institutions where scammers enroll fictitious students in online courses to collect financial aid disbursements.

"We didn't used to have to decide if our students were human," said Eric Maag, who has taught at Southwestern College for 21 years. Faculty now spend hours vetting suspicious enrollees and analyzing AI-generated assignments. At Southwestern in Chula Vista, professor Elizabeth Smith discovered 89 of her 104 enrolled students were fraudulent. The California Community College system estimates 25% of all applicants statewide are bots. Community college administrators describe fighting an evolving technological battle against increasingly sophisticated fraud tactics. The fraud crisis has particularly impacted asynchronous online courses, crowding real students out of classes and fundamentally altering faculty roles.
Facebook

Meta Blocks Apple Intelligence in iOS Apps (9to5mac.com) 22

Meta has disabled Apple Intelligence features across its iOS applications, including Facebook, WhatsApp, and Threads, according to Brazilian tech blog Sorcererhat Tech. The block affects Writing Tools, which enable text creation and editing via Apple's AI, as well as Genmoji generation. Users cannot access these features via the standard text field interface in Meta apps. Instagram Stories have also lost previously available keyboard stickers and Memoji functionality.

While Meta hasn't explained the decision, it likely aims to drive users toward Meta AI, its own artificial intelligence service that offers similar text and image generation capabilities. The move follows failed negotiations between Apple and Meta regarding Llama integration into Apple Intelligence, which reportedly collapsed over privacy disagreements. The companies also maintain ongoing disputes regarding App Store policies.
Television

LG TVs' Integrated Ads Get More Personal With Tech That Analyzes Viewer Emotions (arstechnica.com) 122

LG is partnering with Zenapse to integrate AI-driven emotional intelligence into its smart TVs, enabling hyper-targeted ads based on viewers' psychological traits, emotions, and behaviors. Ars Technica reports: The upcoming advertising approach comes via a multi-year licensing deal with Zenapse, a company describing itself as a software-as-a-service marketing platform that can drive advertiser sales "with AI-powered emotional intelligence." LG will use Zenapse's technology to divide webOS users into hyper-specific market segments that are supposed to be more informative to advertisers. LG Ad Solutions, LG's advertising business, announced the partnership on Tuesday.

The technology will be used to inform ads shown on LG smart TVs' homescreens, free ad-supported TV (FAST) channels, and elsewhere throughout webOS, per StreamTV Insider. LG will also use Zenapse's tech to "expand new software development and go-to-market products," it said. LG didn't specify the duration of its licensing deal with Zenapse. Zenapse's platform for connected TVs (CTVs), ZenVision, is supposed to be able to interpret the types of emotions shown in the content someone is watching on TV, partially by using publicly available information about the show's or movie's script and plot, StreamTV Insider reported. ZenVision also analyzes viewer behavior, grouping viewers based on their consumption patterns, the publication noted. Under the new partnership, ZenVision can use data that LG has gathered from the automatic content recognition software in LG TVs.

With all this information, ZenVision will group LG TV viewers into highly specified market segments, such as "goal-driven achievers," "social connectors," or "emotionally engaged planners," an LG spokesperson told StreamTV Insider. Zenapse's website for ZenVision points to other potential market segments, including "digital adopters," "wellness seekers," "positive impact & environment," and "money matters." Companies paying to advertise on LG TVs can then target viewers based on the ZenVision-specified market segments and deliver an "emotionally intelligent ad," as Zenapse's website puts it.

Businesses

OpenAI In Talks To Buy Windsurf For About $3 Billion (reuters.com) 5

According to Bloomberg (paywalled), OpenAI is in talks to buy AI-assisted coding tool Windsurf for about $3 billion. "The deal would be OpenAI's largest to date, the terms of which have not yet been finalized," notes Reuters. From a report: Windsurf was in talks with investors such as Kleiner Perkins and General Catalyst to raise funding at a $3 billion valuation, the report added. It closed a $150 million funding round led by General Catalyst last year, valuing it at $1.25 billion.
Google

Google Used AI To Suspend Over 39 Million Ad Accounts Suspected of Fraud (techcrunch.com) 25

An anonymous reader quotes a report from TechCrunch: Google on Wednesday said it suspended 39.2 million advertiser accounts on its platform in 2024 -- more than triple the number from the previous year -- in its latest crackdown on ad fraud. By leveraging large language models (LLMs) and using signals such as business impersonation and illegitimate payment details, the search giant said it could suspend a "vast majority" of ad accounts before they ever served an ad.

Last year, Google launched over 50 LLM enhancements to improve its safety enforcement mechanisms across all its platforms. "While these AI models are very, very important to us and have delivered a series of impressive improvements, we still have humans involved throughout the process," said Alex Rodriguez, a general manager for Ads Safety at Google, in a virtual media roundtable. The executive told reporters that a team of over 100 experts was assembled across Google, including members of the Ads Safety team and the Trust and Safety division, as well as researchers from DeepMind.
"In total, Google said it blocked 5.1 billion ads last year and removed 1.3 billion pages," adds TechCrunch. "In comparison, it blocked over 5.5 billion ads and took action against 2.1 billion publisher pages in 2023. The company also restricted 9.1 billion ads last year, it said."
AI

OpenAI Debuts Codex CLI, an Open Source Coding Tool For Terminals (techcrunch.com) 9

OpenAI has released Codex CLI, an open-source coding agent that runs locally in users' terminal software. Announced alongside the company's new o3 and o4-mini models, Codex CLI directly connects OpenAI's AI systems with local code and computing tasks, enabling them to write and manipulate code on users' machines.

The lightweight tool allows developers to leverage multimodal reasoning capabilities by passing screenshots or sketches to the model while providing access to local code repositories. Unlike more ambitious future plans for an "agentic software engineer" that could potentially build entire applications from descriptions, Codex CLI focuses specifically on integrating AI models with command-line interfaces.

To accelerate adoption, OpenAI is distributing $1 million in API credits through a grant program, offering $25,000 blocks to selected projects. While the tool expands AI's role in programming workflows, it comes with inherent risks -- studies show AI coding models frequently fail to fix security vulnerabilities and sometimes introduce new bugs, which is particularly concerning when they are given system-level access.
AI

OpenAI Unveils o3 and o4-mini Models (openai.com) 2

OpenAI has released two new AI models that can "think with images" during their reasoning process. The o3 and o4-mini models represent a significant advancement in visual perception, enabling them to manipulate images -- cropping, zooming, and rotating -- as part of their analytical process.
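For developers, these capabilities are reached through the same multimodal request shape as earlier vision-capable models. A minimal sketch using the OpenAI Python SDK follows; the file name is made up, and the exact model id and parameter support should be checked against OpenAI's current API documentation.

```python
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical local photo to analyze.
with open("streetcorner.jpg", "rb") as f:
    b64 = base64.b64encode(f.read()).decode()

resp = client.chat.completions.create(
    model="o3",  # model id assumed from the announcement
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What can you tell me about where this photo was taken?"},
            {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
        ],
    }],
)
print(resp.choices[0].message.content)
```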

Unlike previous models, o3 and o4-mini can agentically use all of ChatGPT's tools, including web search, Python code execution, and image generation. This allows them to tackle multi-faceted problems by selecting appropriate tools based on the task at hand.

The models have set new state-of-the-art performance benchmarks across multiple domains. On visual tasks, o3 achieved 86.8% accuracy on MathVista and 78.6% on CharXiv-Reasoning, while o4-mini scored 91.6% on the AIME 2024 math competition. In expert evaluations, o3 made 20% fewer major errors than its predecessor on complex real-world tasks. ChatGPT Plus, Pro, and Team users will see o3, o4-mini, and o4-mini-high in the model selector starting today, replacing o1, o3-mini, and o3-mini-high.
AI

AI-generated Music Accounts For 18% of All Tracks Uploaded To Deezer (reuters.com) 85

About 18% of songs uploaded to Deezer are fully generated by AI, the French streaming platform said on Wednesday, underscoring the technology's growing use amid copyright risks and concerns about fair payouts to artists. From a report: Deezer said more than 20,000 AI-generated tracks are uploaded on its platform each day, which is nearly twice the number reported four months ago. "AI-generated content continues to flood streaming platforms like Deezer and we see no sign of it slowing down," said Aurelien Herault, the company's innovation chief.
United States

Immigrant Founders Are the Norm in Key US AI Firms: Study (axios.com) 146

More than half of the top privately held AI companies based in the U.S. have at least one immigrant founder, according to an analysis from the Institute for Progress. From the report: The IFP analysis of the top AI-related startups in the Forbes AI 2025 list found that 25 -- or 60% -- of the 42 companies based in the U.S. were founded or co-founded by immigrants. The founders of those companies "hail from 25 countries, with India leading (nine founders), followed by China (eight founders) and then France (three founders). Australia, the U.K., Canada, Israel, Romania, and Chile all have two founders each."

Among them are OpenAI -- whose co-founders include Elon Musk, born in South Africa, and Ilya Sutskever, born in Russia -- and Databricks, whose co-founders were born in Iran, Romania, and China. The analysis echoes previous findings about the key role foreign-born scientists and engineers have played in the U.S. tech industry and the broader economy.

Programming

Figma Sent a Cease-and-Desist Letter To Lovable Over the Term 'Dev Mode' (techcrunch.com) 73

An anonymous reader quotes a report from TechCrunch: Figma has sent a cease-and-desist letter to popular no-code AI startup Lovable, Figma confirmed to TechCrunch. The letter tells Lovable to stop using the term "Dev Mode" for a new product feature. Figma, which also has a feature called Dev Mode, successfully trademarked that term last year, according to the U.S. Patent and Trademark Office. What's wild is that "dev mode" is a common term used in many products that cater to software programmers. It's like an edit mode. Products from giant companies (Apple's iOS, Google's Chrome, Microsoft's Xbox) have features formally called "developer mode" that then get nicknamed "dev mode" in reference materials.

Even "dev mode" itself is commonly used. For instance Atlassian used it in products that pre-date Figma's copyright by years. And it's a common feature name in countless open source software projects. Figma tells TechCrunch that its trademark refers only to the shortcut "Dev Mode" -- not the full term "developer mode." Still, it's a bit like trademarking the term "bug" to refer to "debugging." Since Figma wants to own the term, it has little choice but send cease-and-desist letters. (The letter, as many on X pointed out, was very polite, too.) If Figma doesn't defend the term, it could be absorbed as a generic term and the trademarked becomes unenforceable.

AI

Uber Cofounder Kalanick Says AI Means Some Consultants Are in 'Big Trouble' (businessinsider.com) 27

Uber cofounder Travis Kalanick thinks AI is about to shake up consulting -- and for "traditional" professionals, not in a good way. From a report: The former Uber CEO said consultants who mostly follow instructions or do repetitive tasks are at risk of being replaced by AI. "If you're a traditional consultant and you're just doing the thing, you're executing the thing, you're probably in some big trouble," he said. He joked about what that future of consultancy might look like: "Push a button. Get a consultant."

However, Kalanick said the professionals who would come out ahead would be the ones who build tools rather than just use them. "If you are the consultant that puts the things together that replaces the consultant, maybe you got some stuff," he said. "You're going to profitable companies with competitive moats, making that moat bigger," he explained. "Making their profit bigger is probably pretty interesting from a financial point of view."

Programming

You Should Still Learn To Code, Says GitHub CEO (businessinsider.com) 45

You should still learn to code, says GitHub's CEO. And you should start as soon as possible. From a report: "I strongly believe that every kid, every child, should learn coding," Thomas Dohmke said in a recent podcast interview with EO. "We should actually teach them coding in school, in the same way that we teach them physics and geography and literacy and math and what-not." Coding, he added, is one such fundamental skill -- and the only reason it's not part of the curriculum is that it took "us too long to actually realize that."

Dohmke, who's been a programmer since the 90s, said he's never seen "anything more exciting" than the current moment in engineering -- the advent of AI, he believes, has made the field that much easier to break into, and is poised to make software more ubiquitous than ever. "It's so much easier to get into software development. You can just write a prompt into Copilot or ChatGPT or similar tools, and it will likely write you a basic webpage, or a small application, a game in Python," Dohmke said. "And so, AI makes software development so much more accessible for anyone who wants to learn coding."

AI, Dohmke said, helps to "realize the dream" of bringing an idea to life, meaning that fewer projects will end up dead in the water, and smaller teams of developers will be enabled to tackle larger-scale projects. Dohmke said he believes it makes the overall process of creation more efficient. "You see some of the early signs of that, where very small startups -- sometimes five developers and some of them actually only one developer -- believe they can become million, if not billion dollar businesses by leveraging all the AI agents that are available to them," he added.

AI

Google DeepMind Is Hiring a 'Post-AGI' Research Scientist (404media.co) 61

An anonymous reader shares a report: None of the frontier AI research labs have presented any evidence that they are on the brink of achieving artificial general intelligence, no matter how they define that goal, but Google is already planning for a "Post-AGI" world by hiring a scientist for its DeepMind AI lab to research the "profound impact" that technology will have on society.

"Spearhead research projects exploring the influence of AGI on domains such as economics, law, health/wellbeing, AGI to ASI [artificial superintelligence], machine consciousness, and education," Google says in the first item on a list of key responsibilities for the job. Artificial superintelligence refers to a hypothetical form of AI that is smarter than the smartest human in all domains. This is self explanatory, but just to be clear, when Google refers to "machine consciousness" it's referring to the science fiction idea of a sentient machine.

OpenAI CEO Sam Altman, DeepMind CEO Demis Hassabis, Elon Musk, and other major and minor players in the AI industry are all working on AGI and have previously talked about the likelihood of humanity achieving AGI, when that might happen, and what the consequences might be, but the Google job listing shows that companies are now taking concrete steps for what comes after, or at least are continuing to signal that they believe it can be achieved.

Social Networks

OpenAI is Building a Social Network (theverge.com) 30

An anonymous reader shares a report: OpenAI is working on its own X-like social network, according to multiple sources familiar with the matter. While the project is still in early stages, we're told there's an internal prototype focused on ChatGPT's image generation that has a social feed. CEO Sam Altman has been privately asking outsiders for feedback about the project, our sources say. It's unclear if OpenAI's plan is to release the social network as a separate app or integrate it into ChatGPT, which became the most downloaded app globally last month.

Launching a social network in or around ChatGPT would likely increase Altman's already-bitter rivalry with Elon Musk. In February, after Musk made an unsolicited offer to purchase OpenAI for $97.4 billion, Altman responded: "no thank you but we will buy twitter for $9.74 billion if you want." Entering the social media market also puts OpenAI on more of a collision course with Meta, which we're told is planning to add a social feed to its coming standalone app for its AI assistant. When reports of Meta building a rival to the ChatGPT app first surfaced a couple of months ago, Altman shot back on X again by saying, "ok fine maybe we'll do a social app."

Communications

FCC Chairman Tells Europe To Choose Between US or Chinese Communications Tech (ft.com) 146

FCC Chairman Brendan Carr has issued a stark ultimatum to European allies, telling them to choose between US and Chinese communications technology. In an interview with Financial Times, Carr urged "allied western democracies" to "focus on the real long-term bogey: the rise of the Chinese Communist party." The warning comes as European governments question Starlink's reliability after Washington threatened to switch off its services in Ukraine.

UK telecoms BT and Virgin Media O2 are currently trialing Starlink's satellite internet technology but haven't signed full agreements. "If you're concerned about Starlink, just wait for the CCP's version, then you'll be really worried," said Carr. Carr claimed Europe is "caught" between Washington and Beijing, with a "great divide" emerging between "CCP-aligned countries and others" in AI and satellite technology. He also accused the European Commission of "protectionism" and an "anti-American" attitude while suggesting Nokia and Ericsson should relocate manufacturing to the US to avoid Trump's import tariffs.
