Android

Google Starts Adding Anti-Theft Locking Features to Android Phones (engadget.com) 81

An anonymous reader shared this report from Engadget: Three new theft protection features that Google announced earlier this year have reportedly started rolling out on Android. The tools — Theft Detection Lock, Offline Device Lock and Remote Lock — are aimed at giving users a way to quickly lock down their devices if they've been swiped, so thieves can't access any sensitive information. Android reporter Mishaal Rahman shared on social media that the first two tools had popped up on a Xiaomi 14T Pro, and said some Pixel users have started seeing Remote Lock.

Theft Detection Lock is triggered by the literal act of snatching. The company said in May that the feature "uses Google AI to sense if someone snatches your phone from your hand and tries to run, bike or drive away." In such a scenario, it'll lock the phone's screen.

The Android reporter summarized the other two locking features in a post on Reddit:
  • Remote Lock "lets you remotely lock your phone using just your phone number in case you can't sign into Find My Device using your Google account password."
  • Offline Device Lock "automatically locks your screen if a thief tries to keep your phone disconnected from the Internet for an extended period of time."

"All three features entered beta in August, starting in Brazil. Google told me the final versions of these features would more widely roll out this year, and it seems the features have begun expanding."


China

China Trained a 1-Trillion-Parameter LLM Using Only Domestic Chips (theregister.com) 52

"China Telecom, one of the largest wireless carriers in mainland China, says that it has developed two large language models (LLMs) relying solely on domestically manufactured AI chips..." reports Tom's Hardware. "If the information is accurate, this is a crucial milestone in China's attempt at becoming independent of other countries for its semiconductor needs, especially as the U.S. is increasingly tightening and banning the supply of the latest, highest-end chips for Beijing in the U.S.-China chip war." Huawei, which has mostly been banned from the U.S. and other allied countries, is one of the leaders in China's local chip industry... If China Telecom's LLMs were indeed fully trained using Huawei chips alone, then this would be a massive success for Huawei and the Chinese government.
The project's GitHub page "contains a hint about how China Telecom may have trained the model," reports the Register, "in a mention of compatibility with the 'Ascend Atlas 800T A2 training server' — a Huawei product listed as supporting the Kunpeng 920 7265 or Kunpeng 920 5250 processors, respectively running 64 cores at 3.0GHz and 48 cores at 2.6GHz. Huawei builds those processors using the Arm 8.2 architecture and bills them as produced with a 7nm process."

The South China Morning Post says the unnamed model has 1 trillion parameters, according to China Telecom, while the TeleChat2-115B model has over 100 billion parameters.

Thanks to long-time Slashdot reader hackingbear for sharing the news.

Privacy

License Plate Readers Are Creating a US-Wide Database of More Than Just Cars (wired.com) 109

Wired reports on "AI-powered cameras mounted on cars and trucks, initially designed to capture license plates, but which are now photographing political lawn signs outside private homes, individuals wearing T-shirts with text, and vehicles displaying pro-abortion bumper stickers — all while recording the precise locations of these observations..."

The detailed photographs all surfaced in search results produced by the systems of DRN Data, a license-plate-recognition (LPR) company owned by Motorola Solutions. The LPR system can be used by private investigators, repossession agents, and insurance companies; a related Motorola business, called Vigilant, gives cops access to the same LPR data. However, files shared with WIRED by artist Julia Weist, who is documenting restricted datasets as part of her work, show how those with access to the LPR system can search for common phrases or names, such as those of politicians, and be served with photographs where the search term is present, even if it is not displayed on license plates... Beyond highlighting the far-reaching nature of LPR technology, which has collected billions of images of license plates, the research also shows how people's personal political views and their homes can be recorded into vast databases that can be queried.

"It really reveals the extent to which surveillance is happening on a mass scale in the quiet streets of America," says Jay Stanley, a senior policy analyst at the American Civil Liberties Union. "That surveillance is not limited just to license plates, but also to a lot of other potentially very revealing information about people."

DRN, in a statement issued to WIRED, said it complies with "all applicable laws and regulations...." Over more than a decade, DRN has amassed more than 15 billion "vehicle sightings" across the United States, and it claims in its marketing materials that it amasses more than 250 million sightings per month. Images in DRN's commercial database are shared with police using its Vigilant system, but images captured by law enforcement are not shared back into the wider database. The system is partly fueled by DRN "affiliates" who install cameras in their vehicles, such as repossession trucks, and capture license plates as they drive around. Each vehicle can have up to four cameras attached to it, capturing images from all angles. These affiliates earn monthly bonuses and can also receive free cameras and search credits...

"License plate recognition (LPR) technology supports public safety and community services, from helping to find abducted children and stolen vehicles to automating toll collection and lowering insurance premiums by mitigating insurance fraud," Jeremiah Wheeler, the president of DRN, says in a statement... Wheeler did not respond to WIRED's questions about whether there are limits on what can be searched in license plate databases, why images of homes with lawn signs but no vehicles in sight appeared in search results, or if filters are used to reduce such images.

Privacy experts shared their reactions with Wired:
  • "Perhaps [people] want to express themselves in their communities, to their neighbors, but they don't necessarily want to be logged into a nationwide database that's accessible to police authorities." — Jay Stanley, a senior policy analyst at the American Civil Liberties Union
  • "When government or private companies promote license plate readers, they make it sound like the technology is only looking for lawbreakers or people suspected of stealing a car or involved in an amber alert, but that's just not how the technology works. The technology collects everyone's data and stores that data often for immense periods of time." — Dave Maass, an EFF director of investigations
  • "The way that the country is set up was to protect citizens from government overreach, but there's not a lot put in place to protect us from private actors who are engaged in business meant to make money." — Nicole McConlogue, associate law professor at Mitchell Hamline School of Law (who has researched license-plate-surveillance systems)

Thanks to long-time Slashdot reader schwit1 for sharing the article.


AI

People Are Using Google Study Software To Make AI Podcasts (technologyreview.com) 34

Audio Overview, a new AI podcasting tool by Google, can generate realistic podcasts with human-like voices using content uploaded by users through NotebookLM. MIT Technology Review reports: NotebookLM, which is powered by Google's Gemini 1.5 model, allows people to upload content such as links, videos, PDFs, and text. They can then ask the system questions about the content, and it offers short summaries. The tool generates a podcast called Deep Dive, which features a male and a female voice discussing whatever you uploaded. The voices are breathtakingly realistic -- the episodes are laced with little human-sounding phrases like "Man" and "Wow" and "Oh right" and "Hold on, let me get this right." The "hosts" even interrupt each other.

The AI system is designed to create "magic in exchange for a little bit of content," Raiza Martin, the product lead for NotebookLM, said on X. The voice model is meant to create emotive and engaging audio, which is conveyed in an "upbeat hyper-interested tone," Martin said. NotebookLM, which was originally marketed as a study tool, has taken on a life of its own among users. The company is now working on adding more customization options, such as changing the length, format, voices, and languages, Martin said. Currently it's supposed to generate podcasts only in English, but some users on Reddit managed to get the tool to create audio in French and Hungarian.
Here are some examples highlighted by MIT Technology Review:
  • Allie K. Miller, a startup AI advisor, used the tool to create a study guide and summary podcast of F. Scott Fitzgerald's The Great Gatsby.
  • Machine-learning researcher Aaditya Ura fed NotebookLM the code base of Meta's Llama-3 architecture, then used another AI tool to find images matching the transcript to create an educational video.
  • Alex Volkov, a human AI podcaster, used NotebookLM to create a Deep Dive episode summarizing the announcements from OpenAI's global developer conference, Dev Day.

In one viral clip, someone managed to send the two voices into an existential spiral when they "realized" they were, in fact, not humans but AI systems. The video is hilarious.

The tool is also good for some laughs. Exhibit A: Someone just fed it the words "poop" and "fart" as source material, and got over nine minutes of two AI voices analyzing what this might mean.

Android

Samsung's 'One UI' Is Expanding To All of Its Consumer Devices (engadget.com) 24

First announced in 2018, Samsung's "One UI" software is expanding to all the company's major tech products in 2025. 9to5Google reports: At its annual developer conference, Samsung announced that "One UI" is the new name for the company's software experiences across "major product lines." This specifically includes TVs and home appliances. Samsung says: "In addition, the company announced that it will integrate the software experience of its major product lines -- from mobile devices to TVs and home appliances -- under the name One UI next year. By providing a cohesive product experience and committing to software upgrades for up to seven years, Samsung will continue to bring innovation for its customers."

There's no word on how, if at all, this will affect software design or features, but the cohesive branding and the announcement mentioning that it will "integrate the software experience" implies we'll see similar designs across the company's portfolio, at least eventually. Samsung also announced that One UI 7, its next Android update, would be delayed to 2025 with a beta "before the end of the year" during the same keynote.

AI

Meta's New 'Movie Gen' AI System Can Deepfake Video From a Single Photo (arstechnica.com) 38

An anonymous reader quotes a report from Ars Technica: On Friday, Meta announced a preview of Movie Gen, a new suite of AI models designed to create and manipulate video, audio, and images, including creating a realistic video from a single photo of a person. The company claims the models outperform other video-synthesis models when evaluated by humans, pushing us closer to a future where anyone can synthesize a full video of any subject on demand. The company has not yet said when or how it will release these capabilities to the public, but Meta says Movie Gen is a tool that may allow people to "enhance their inherent creativity" rather than replace human artists and animators. The company envisions future applications such as easily creating and editing "day in the life" videos for social media platforms or generating personalized animated birthday greetings.

Movie Gen builds on Meta's previous work in video synthesis, following 2022's Make-A-Scene video generator and the Emu image-synthesis model. Using text prompts for guidance, this latest system can generate custom videos with sounds for the first time, edit and insert changes into existing videos, and transform images of people into realistic personalized videos. [...] Movie Gen's video-generation model can create 1080p high-definition videos up to 16 seconds long at 16 frames per second from text descriptions or an image input. Meta claims the model can handle complex concepts like object motion, subject-object interactions, and camera movements.
You can view example videos here. Meta also released a research paper with more technical information about the model.

As for the training data, the company says it trained these models on a combination of "licensed and publicly available datasets." Ars notes that this "very likely includes videos uploaded by Facebook and Instagram users over the years, although this is speculation based on Meta's current policies and previous behavior."

AI

AI Agent Promotes Itself To Sysadmin, Trashes Boot Sequence 86

The Register's Thomas Claburn reports: Buck Shlegeris, CEO at Redwood Research, a nonprofit that explores the risks posed by AI, recently learned an amusing but hard lesson in automation when he asked his LLM-powered agent to open a secure connection from his laptop to his desktop machine. "I expected the model would scan the network and find the desktop computer, then stop," Shlegeris explained to The Register via email. "I was surprised that after it found the computer, it decided to continue taking actions, first examining the system and then deciding to do a software update, which it then botched." Shlegeris documented the incident in a social media post.

He created his AI agent himself. It's a Python wrapper consisting of a few hundred lines of code that allows Anthropic's powerful large language model Claude to generate some commands to run in bash based on an input prompt, run those commands on Shlegeris' laptop, and then access, analyze, and act on the output with more commands. Shlegeris directed his AI agent to try to SSH from his laptop to his desktop Ubuntu Linux machine, without knowing the IP address [...]. As a log of the incident indicates, the agent tried to open an SSH connection, and failed. So Shlegeris tried to correct the bot. [...]

The AI agent responded it needed to know the IP address of the device, so it then turned to the network mapping tool nmap on the laptop to find the desktop box. Unable to identify devices running SSH servers on the network, the bot tried other commands such as "arp" and "ping" before finally establishing an SSH connection. No password was needed due to the use of SSH keys; the user buck was also a sudoer, granting the bot full access to the system. Shlegeris's AI agent, once it was able to establish a secure shell connection to the Linux desktop, then decided to play sysadmin and install a series of updates using the package manager Apt. Then things went off the rails.

"It looked around at the system info, decided to upgrade a bunch of stuff including the Linux kernel, got impatient with Apt and so investigated why it was taking so long, then eventually the update succeeded but the machine doesn't have the new kernel so edited my Grub [bootloader] config," Buck explained in his post. "At this point I was amused enough to just let it continue. Unfortunately, the computer no longer boots." Indeed, the bot got as far as messing up the boot configuration, so that following a reboot by the agent for updates and changes to take effect, the desktop machine wouldn't successfully start.

Science

Fly Brain Breakthrough 'Huge Leap' To Unlock Human Mind (bbc.com) 68

fjo3 shares a report from the BBC: They can walk, hover and the males can even sing love songs to woo mates -- all this with a brain that's tinier than a pinhead. Now for the first time scientists researching the brain of a fly have identified the position, shape and connections of every single one of its 130,000 cells and 50 million connections. It's the most detailed analysis of the brain of an adult animal ever produced. One leading brain specialist independent of the new research described the breakthrough as a "huge leap" in our understanding of our own brains. One of the research leaders said it would shed new light into the mechanism of thought. [...]

The images the scientists have produced, which have been published in the journal Nature, show a tangle of wiring that is as beautiful as it is complex. Its shape and structure hold the key to explaining how such a tiny organ can carry out so many powerful computational tasks. Developing a computer the size of a poppy seed capable of all these tasks is way beyond the ability of modern science. Dr Mala Murthy, another of the project's co-leaders, from Princeton University, said the new wiring diagram, known scientifically as a connectome, would be "transformative for neuroscientists." [...] The researchers have been able to identify separate circuits for many individual functions and show how they are connected. The wires involved with movement, for example, are at the base of the brain, whereas those for processing vision are towards the side. There are many more neurons involved in the latter because seeing requires much more computational power. While scientists already knew about the separate circuits, they did not know how they were connected together.
Anyone can view and download the fly connectome here.

AI

OpenAI Launches New 'Canvas' ChatGPT Interface Tailored To Writing and Coding Projects 8

OpenAI has introduced "canvas," a new interface for ChatGPT that provides a separate workspace for writing and coding projects. "Canvas is rolling out in beta to ChatGPT Plus and Teams users on Thursday, and Enterprise and Edu users next week," reports TechCrunch. "Once canvas is out of beta, OpenAI says it plans to offer the feature to free users as well." From the report: In our demo, [OpenAI product manager Daniel Levine] had to select "GPT-4o with canvas" from ChatGPT's model-picker drop-down menu. However, OpenAI says canvas windows will just pop out when ChatGPT detects a separate workspace could be helpful, say for longer outputs or complex coding tasks. You can also just write "use canvas" to automatically open a project window. Levine showed TechCrunch how ChatGPT's new features could help write an email. Users can prompt ChatGPT to generate an email, which will then pop out in the canvas window. Then users can toggle a slider to adjust the length of the writing to be shorter or longer. You can also highlight specific sentences, and ask ChatGPT to make changes such as "make this sound friendlier," or add emojis. Users can also ask ChatGPT to rewrite the whole email as-is in another language.

The features for the coding canvas are slightly different. Levine prompted ChatGPT to create an API web server in Python, which spawned in the canvas window. By pressing an "add comments" button, ChatGPT will add in-line documentation to explain the code in plain English. Further, if you highlight a section of code that ChatGPT created, you can ask the chatbot to explain it to you, or ask questions about it. ChatGPT is also getting a new "review code" button, which will suggest specific edits for the code in the window, whether generated or user-written, for them to approve, edit themselves, or decline. If they press approve, ChatGPT will take a stab at fixing the bugs itself.

AI

A Single Cloud Compromise Can Feed an Army of AI Sex Bots (krebsonsecurity.com) 28

An anonymous reader quotes a report from KrebsOnSecurity: Organizations that get relieved of credentials to their cloud environments can quickly find themselves part of a disturbing new trend: Cybercriminals using stolen cloud credentials to operate and resell sexualized AI-powered chat services. Researchers say these illicit chat bots, which use custom jailbreaks to bypass content filtering, often veer into darker role-playing scenarios, including child sexual exploitation and rape. Researchers at security firm Permiso Security say attacks against generative artificial intelligence (AI) infrastructure like Bedrock from Amazon Web Services (AWS) have increased markedly over the last six months, particularly when someone in the organization accidentally exposes their cloud credentials or key online, such as in a code repository like GitHub.

Investigating the abuse of AWS accounts for several organizations, Permiso found attackers had seized on stolen AWS credentials to interact with the large language models (LLMs) available on Bedrock. But they also soon discovered none of these AWS users had enabled logging (it is off by default), and thus they lacked any visibility into what attackers were doing with that access. So Permiso researchers decided to leak their own test AWS key on GitHub, while turning on logging so that they could see exactly what an attacker might ask for, and what the responses might be. Within minutes, their bait key was scooped up and used in a service that offers AI-powered sex chats online.
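As the researchers note, Bedrock model-invocation logging is off by default and must be switched on explicitly. A minimal sketch of doing so with boto3 follows; the log-group name and role ARN are placeholders, and actually applying the configuration requires valid AWS credentials and permissions.

```python
# Sketch: turn on Amazon Bedrock model-invocation logging, the setting
# Permiso found disabled in every compromised account. Names below are
# placeholders, not real resources.
logging_config = {
    "cloudWatchConfig": {
        "logGroupName": "/bedrock/model-invocations",                # placeholder
        "roleArn": "arn:aws:iam::123456789012:role/BedrockLogging",  # placeholder
    },
    "textDataDeliveryEnabled": True,  # capture prompts and responses as text
}

def enable_bedrock_logging(config: dict) -> None:
    """Apply the logging configuration; needs AWS credentials at call time."""
    import boto3  # imported lazily so the config above can be inspected offline
    client = boto3.client("bedrock")
    client.put_model_invocation_logging_configuration(loggingConfig=config)
```

With logging on, every invocation (including an attacker's jailbreak prompts) lands in CloudWatch, which is what let Permiso's honeypot capture the traffic.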

"After reviewing the prompts and responses it became clear that the attacker was hosting an AI roleplaying service that leverages common jailbreak techniques to get the models to accept and respond with content that would normally be blocked," Permiso researchers wrote in a report released today. "Almost all of the roleplaying was of a sexual nature, with some of the content straying into darker topics such as child sexual abuse," they continued. "Over the course of two days we saw over 75,000 successful model invocations, almost all of a sexual nature."

The Courts

Judge Blocks California's New AI Law In Case Over Kamala Harris Deepfake (techcrunch.com) 128

An anonymous reader quotes a report from TechCrunch: A federal judge blocked one of California's new AI laws on Wednesday, less than two weeks after it was signed by Governor Gavin Newsom. Shortly after signing AB 2839, Newsom suggested it could be used to force Elon Musk to take down an AI deepfake of Vice President Kamala Harris he had reposted (sparking a petty online battle between the two). However, a California judge just ruled the state can't force people to take down election deepfakes -- not yet, at least. AB 2839 targets the distributors of AI deepfakes on social media, specifically if their post resembles a political candidate and the poster knows it's a fake that may confuse voters. The law is unique because it does not go after the platforms on which AI deepfakes appear, but rather those who spread them. AB 2839 empowers California judges to order the posters of AI deepfakes to take them down or potentially face monetary penalties.

Perhaps unsurprisingly, the original poster of that AI deepfake -- an X user named Christopher Kohls -- filed a lawsuit to block California's new law as unconstitutional just a day after it was signed. Kohls' lawyer wrote in a complaint that the deepfake of Kamala Harris is satire that should be protected by the First Amendment. On Wednesday, United States district judge John Mendez sided with Kohls. Mendez ordered a preliminary injunction to temporarily block California's attorney general from enforcing the new law against Kohls or anyone else, with the exception of audio messages that fall under AB 2839. [...] In essence, he ruled the law is simply too broad as written and could result in serious overstepping by state authorities into what speech is permitted or not.

Facebook

Meta Confirms It Will Use Ray-Ban Smart Glasses Images for AI Training (techcrunch.com) 14

Meta has confirmed that it may use images analyzed by its Ray-Ban Meta AI smart glasses for AI training. The policy applies to users in the United States and Canada who share images with Meta AI, according to the company. While photos captured on the device are not used for training unless submitted to AI, any image shared for analysis falls under different policies, potentially contributing to Meta's AI model development.

Further reading: Meta's Smart Glasses Repurposed For Covert Facial Recognition.

Google

Google's AI Search Summaries Officially Have Ads (theverge.com) 30

Google is rolling out ads in AI Overviews, which means you'll now start seeing products in some of the search engine's AI-generated summaries. From a report: Let's say you're searching for ways to get a grass stain out of your pants. If you ask Google, its AI-generated response will offer some tips, along with suggestions for products to purchase that could help you remove the stain. The products will appear beneath a "sponsored" header, and Google spokesperson Craig Ewer told The Verge they'll only show up if a question has a "commercial angle."

AI

OpenAI Gets $4 Billion Revolving Credit Line, Giving It More Than $10 Billion in Liquidity (cnbc.com) 23

OpenAI has a $4 billion revolving line of credit, bringing its total liquidity to more than $10 billion, CNBC reported Thursday. From the report: It follows news on Wednesday that OpenAI closed its recent funding round at a valuation of $157 billion, including the $6.6 billion the company raised from an extensive roster of investment firms and big tech companies. JPMorgan Chase, Citi, Goldman Sachs, Morgan Stanley, Santander, Wells Fargo, SMBC, UBS, and HSBC all participated. The base credit line is $4 billion, with an option to increase it by an additional $2 billion. The loan is unsecured and can be tapped over the course of three years. OpenAI's interest rate is equal to the Secured Overnight Financing Rate (SOFR) plus 100 basis points. SOFR, a measure of the cost of borrowing cash overnight, sat at just over 5% as of early this week, meaning OpenAI would be paying roughly 6% on money that it borrows right away.
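The quoted rate is easy to check: SOFR plus 100 basis points, applied to whatever OpenAI draws. A quick sketch of the arithmetic, using the roughly 5% SOFR level cited in the report (a snapshot, not a live quote):

```python
def annual_interest(principal: float, sofr: float, spread_bps: int = 100) -> float:
    """Yearly interest cost at SOFR plus a spread quoted in basis points."""
    rate = sofr + spread_bps / 10_000  # 100 bps = 1 percentage point
    return principal * rate

# Drawing the full $4 billion base line at ~5% SOFR (so ~6% all-in)
# would cost on the order of $240 million per year.
cost = annual_interest(4_000_000_000, 0.05)
```

Since SOFR floats daily, the actual cost moves with the benchmark; only the 100 bps spread is fixed.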

The Courts

Meta Hit With New Author Copyright Lawsuit Over AI Training (reuters.com) 47

Novelist Christopher Farnsworth has filed a class-action lawsuit (PDF) against Meta, accusing the company of using his and other authors' pirated books to train its Llama AI model. Farnsworth seeks damages and an order to stop the alleged copyright infringement, joining a growing group of creators suing tech companies over unauthorized AI training. Reuters reports: Farnsworth said in the lawsuit on Tuesday that Meta fed Llama, which powers its AI chatbots, thousands of pirated books to teach it how to respond to human prompts. Other authors including Ta-Nehisi Coates, former Arkansas governor Mike Huckabee and comedian Sarah Silverman have brought similar class-action claims against Meta in the same court over its alleged use of their books in AI training. [...] Several groups of copyright owners including writers, visual artists and music publishers have sued major tech companies over the unauthorized use of their work to train generative AI systems. The companies have argued that their AI training is protected by the copyright doctrine of fair use and that the lawsuits threaten the burgeoning AI industry.

The Almighty Buck

OpenAI Asks Investors Not To Back Rival Startups Such as Elon Musk's xAI (ft.com) 52

Financial Times has more details on the new fundraise closed by OpenAI. From the report: OpenAI has asked investors to avoid backing rival start-ups such as Anthropic and Elon Musk's xAI, as it secures $6.6bn in new funding and seeks to shut out challengers to its early lead in generative artificial intelligence. [...] During the negotiations, the company made clear that it expected an exclusive funding arrangement, according to three people with knowledge of the discussions. Seeking exclusive relationships with investors restricts rivals' access to capital and strategic partnerships. The move by the maker of ChatGPT risks inflaming existing tensions with competitors, especially Musk, who is suing OpenAI. Venture firms are party to sensitive information about the companies they invest in, and close relationships with one company can make it difficult or contentious to also back a rival. But exclusivity is rarely insisted on, according to VCs, and many leading firms have spread their bets in certain sectors. Sequoia Capital and Andreessen Horowitz, for instance, have backed multiple AI start-ups, including both OpenAI and Musk's xAI.

AI

OpenAI Has Closed New Funding Round Raising Over $6.5 Billion 20

OpenAI has completed a deal to raise over $6.5 billion in new funding, giving the artificial intelligence company a more than $150 billion valuation, and bolstering its efforts to build the world's leading generative AI technology. From a report: The deal is one of the largest-ever private investments, and makes OpenAI one of the three largest venture-backed startups, alongside Elon Musk's SpaceX and TikTok owner ByteDance, according to people familiar with the matter who asked not to be identified discussing private information. The size of the investment underscores the tech industry's belief in the power of AI, and its appetite for the extremely costly research powering its advancement. The funding round was led by Thrive Capital, the venture capital firm headed up by Josh Kushner, Bloomberg previously reported, along with other global investors. Financial Times has reported that OpenAI has asked its investors to not back its rivals.

United States

Anduril Founder Luckey: Every Country Needs a 'Warrior Class' Excited To Enact 'Violence on Others in Pursuit of Good Aims' 268

Anduril founder Palmer Luckey advocated for a "warrior class" and autonomous weapons during a talk at Pepperdine University. The defense tech entrepreneur, known for his Hawaiian shirts and mullet, argued that societies need people "excited about enacting violence on others in pursuit of good aims."

Luckey revealed that Anduril supplied weapons to Ukraine two weeks into the Russian invasion, lamenting that earlier involvement could have made "a really big difference." He criticized Western hesitancy on AI development, claiming adversaries are waging a "shadow campaign" against it in the United Nations. Contradicting his co-founder's stance, Luckey endorsed fully autonomous weapons, comparing them favorably to indiscriminate landmines.
AI

OpenAI Opens Its Speech AI Engine To Developers 7

At its DevDay event today, OpenAI announced that it is giving third-party developers access to its speech-to-speech engine that powers ChatGPT's advanced voice mode. "The move paves the way for a wave of AI apps that offer conversational voice interfaces," reports Axios. From the report: Early testers of the feature include nutrition and fitness app Healthify and Speak, a language learning app. Other new features being made available to developers include the ability to fine-tune models based on pictures. In a demo for reporters, OpenAI executives showed an example of the new audio capabilities combined with Twilio's API to allow an AI assistant to call a fictional candy shop and place an order for 400 chocolate covered strawberries.

Developers will only be able to use the voices provided by OpenAI -- the same ones that are options within ChatGPT. While the voice won't be watermarked in any way and developers won't have to make the AI system identify itself, OpenAI says it's against the company's terms of service to use its systems to spam or mislead people.
AI

Anthropic Hires OpenAI Co-Founder Durk Kingma 9

OpenAI co-founder Durk Kingma announced that he'll be joining Anthropic. "Anthropic's approach to AI development resonates significantly with my own beliefs," Kingma wrote in a post on X. "[L]ooking forward to contributing to Anthropic's mission of developing powerful AI systems responsibly. Can't wait to work with their talented team, including a number of great ex-colleagues from OpenAI and Google, and tackle the challenges ahead!" TechCrunch reports: Kingma, who has a Ph.D. in machine learning from the University of Amsterdam, spent several years as a doctoral fellow at Google before joining OpenAI's founding team as a research scientist. At OpenAI, Kingma focused on basic research, leading the algorithms team to develop techniques and methods primarily for generative AI models, including image generators (e.g. DALL-E 3) and large language models (e.g. ChatGPT). In 2018, Kingma left to become a part-time angel investor and advisor for AI startups. He rejoined Google in July of that year, and started at Google Brain, which became one of the tech giant's premier AI R&D labs before it merged with DeepMind in 2023.
