Google

Google is Combining Its Android and Hardware Teams (theverge.com) 12

Google CEO Sundar Pichai announced substantial internal reorganizations on Thursday, including the creation of a new team called "Platforms and Devices" that will oversee all of Google's Pixel products, all of Android, Chrome, ChromeOS, Photos, and more. From a report: The team will be run by Rick Osterloh, who was previously the SVP of devices and services, overseeing all of Google's hardware efforts. Hiroshi Lockheimer, the longtime head of Android, Chrome, and ChromeOS, will be taking on other projects inside of Google and Alphabet. This is a huge change for Google, and it likely won't be the last one. There's only one reason for all of it, Osterloh says: AI. "This is not a secret, right?" he says.

Consolidating teams "helps us to be able to do full-stack innovation when that's necessary," Osterloh says. He uses the example of the Pixel camera: "You had to have deep knowledge of the hardware systems, from the sensors to the ISPs, to all layers of the software stack. And, at the time, all the early HDR and ML models that were doing camera processing... and I think that hardware / software / AI integration really showed how AI could totally transform a user experience. That was important. And it's even more true today."

United States

US Air Force Confirms First Successful AI Dogfight (theverge.com) 69

The US Air Force is putting AI in the pilot's seat. In an update on Thursday, the Defense Advanced Research Projects Agency (DARPA) revealed that an AI-controlled jet successfully faced a human pilot during an in-air dogfight test carried out last year. From a report: DARPA began experimenting with AI applications in December 2022 as part of its Air Combat Evolution (ACE) program. It worked to develop an AI system capable of autonomously flying a fighter jet, while also adhering to the Air Force's safety protocols. After carrying out dogfighting simulations using the AI pilot, DARPA put its work to the test by installing the AI system inside its experimental X-62A aircraft. That allowed it to get the AI-controlled craft into the air at Edwards Air Force Base in California, where it says it carried out its first successful dogfight test against a human in September 2023.

Robotics

Boston Dynamics' New Atlas Robot Is a Swiveling, Shape-Shifting Nightmare (theverge.com) 57

Jess Weatherbed reports via The Verge: It's alive! A day after announcing it was retiring Atlas, its hydraulic robot, Boston Dynamics has introduced a new, all-electric version of its humanoid machine. The next-generation Atlas robot is designed to offer a far greater range of movement than its predecessor. Boston Dynamics wanted the new version to show that Atlas can keep a humanoid form without limiting "how a bipedal robot can move." The new version has been redesigned with swiveling joints that the company claims make it "uniquely capable of tackling dull, dirty, and dangerous tasks."

The teaser showcasing the new robot's capabilities is as unnerving as it is theatrical. The video starts with Atlas lying in a cadaver-like fashion on the floor before it swiftly folds its legs backward over its body and rises to a standing position in a manner befitting some kind of Cronenberg body-horror flick. Its curved, illuminated head does add some Pixar lamp-like charm, but the way Atlas then spins at the waist and marches toward the camera really feels rather jarring. The design itself is also a little more humanoid. Similar to bipedal robots like Tesla's Optimus, the new Atlas now has longer limbs, a straighter back, and a distinct "head" that can swivel around as needed. There are no cables in sight, and its "face" includes a built-in ring light. It is a marked improvement on its predecessor and now features a bunch of Boston Dynamics' new AI and machine learning tools. [...] Boston Dynamics said the new Atlas will be tested with a small group of customers "over the next few years," starting with Hyundai.

AI

Feds Appoint 'AI Doomer' To Run US AI Safety Institute 37

An anonymous reader quotes a report from Ars Technica: The US AI Safety Institute -- part of the National Institute of Standards and Technology (NIST) -- has finally announced its leadership team after much speculation. Appointed as head of AI safety is Paul Christiano, a former OpenAI researcher who pioneered a foundational AI safety technique called reinforcement learning from human feedback (RLHF), but who is also known for predicting that "there's a 50 percent chance AI development could end in 'doom.'" While Christiano's research background is impressive, some fear that by appointing a so-called "AI doomer," NIST may risk encouraging non-scientific thinking that many critics view as sheer speculation.

There have been rumors that NIST staffers oppose the hiring. A controversial VentureBeat report last month cited two anonymous sources claiming that, seemingly because of Christiano's so-called "AI doomer" views, NIST staffers were "revolting." Some staff members and scientists allegedly threatened to resign, VentureBeat reported, fearing "that Christiano's association" with effective altruism and "longtermism could compromise the institute's objectivity and integrity." NIST's mission is rooted in advancing science by working to "promote US innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security and improve our quality of life." Effective altruists believe in "using evidence and reason to figure out how to benefit others as much as possible," and longtermists believe "we should be doing much more to protect future generations" -- both stances that are more subjective and opinion-based. On the Bankless podcast last year, Christiano shared his opinion that "there's something like a 10-20 percent chance of AI takeover" that results in humans dying, and "overall, maybe you're getting more up to a 50-50 chance of doom shortly after you have AI systems that are human level." "The most likely way we die involves -- not AI comes out of the blue and kills everyone -- but involves we have deployed a lot of AI everywhere... [And] if for some reason, God forbid, all these AI systems were trying to kill us, they would definitely kill us," Christiano said.

As head of AI safety, Christiano will seemingly have to monitor for current and potential risks. He will "design and conduct tests of frontier AI models, focusing on model evaluations for capabilities of national security concern," steer processes for evaluations, and implement "risk mitigations to enhance frontier model safety and security," the Department of Commerce's press release said. Christiano has experience mitigating AI risks. He left OpenAI to found the Alignment Research Center (ARC), which the Commerce Department described as "a nonprofit research organization that seeks to align future machine learning systems with human interests by furthering theoretical research." Part of ARC's mission is to test if AI systems are evolving to manipulate or deceive humans, ARC's website said. ARC also conducts research to help AI systems scale "gracefully."
"In addition to Christiano, the safety institute's leadership team will include Mara Quintero Campbell, a Commerce Department official who led projects on COVID response and CHIPS Act implementation, as acting chief operating officer and chief of staff," reports Ars. "Adam Russell, an expert focused on human-AI teaming, forecasting, and collective intelligence, will serve as chief vision officer. Rob Reich, a human-centered AI expert on leave from Stanford University, will be a senior advisor. And Mark Latonero, a former White House global AI policy expert who helped draft Biden's AI executive order, will be head of international engagement."

Gina Raimondo, US Secretary of Commerce, said in the press release: "To safeguard our global leadership on responsible AI and ensure we're equipped to fulfill our mission to mitigate the risks of AI and harness its benefits, we need the top talent our nation has to offer. That is precisely why we've selected these individuals, who are the best in their fields, to join the US AI Safety Institute executive leadership team."

Security

Hackers Voice Cloned the CEO of LastPass For Attack (futurism.com) 15

An anonymous reader quotes a report from Futurism: In a new blog post, LastPass -- the password management firm used by countless personal and corporate clients to help protect their login information -- explains that someone used AI voice-cloning tech to spoof the voice of its CEO in an attempt to trick one of its employees. As the company writes in the post, one of its employees earlier this week received several WhatsApp communications -- including calls, texts, and a voice message -- from someone claiming to be its CEO, Karim Toubba. Luckily, the LastPass worker didn't fall for it because the whole thing set off so many red flags. "As the attempted communication was outside of normal business communication channels and due to the employee's suspicion regarding the presence of many of the hallmarks of a social engineering attempt (such as forced urgency)," the post reads, "our employee rightly ignored the messages and reported the incident to our internal security team so that we could take steps to both mitigate the threat and raise awareness of the tactic both internally and externally."

While this LastPass scam attempt failed, those who follow these sorts of things may recall that the company has been subject to successful hacks before. In August 2022, as a timeline of the event compiled by the Cybersecurity Dive blog detailed, a hacker compromised a LastPass engineer's laptop and used it to steal source code and company secrets, eventually getting access to its customer database -- including encrypted passwords and unencrypted user data like email addresses. According to that timeline, the clearly resourceful bad actor remained active in the company's servers for months, and it took more than two months for LastPass to admit that it had been breached. More than six months after the initial breach, in a February 2023 blog post, Toubba, the CEO, provided a blow-by-blow timeline of the months-long attack and said he took "full responsibility" for the way things went down.

AI

AI Computing Is on Pace To Consume More Energy Than India, Arm Says (yahoo.com) 50

AI's voracious need for computing power is threatening to overwhelm energy sources, requiring the industry to change its approach to the technology, according to Arm Chief Executive Officer Rene Haas. From a report: By 2030, the world's data centers are on course to use more electricity than India, the world's most populous country, Haas said. Finding ways to head off that projected tripling of energy use is paramount if artificial intelligence is going to achieve its promise, he said.

"We are still incredibly in the early days in terms of the capabilities," Haas said in an interview. For AI systems to get better, they will need more training -- a stage that involves bombarding the software with data -- and that's going to run up against the limits of energy capacity, he said.

Security

A Spy Site Is Scraping Discord and Selling Users' Messages (404media.co) 49

404 Media: An online service is scraping Discord servers en masse, archiving and tracking users' messages and activity across servers, including what voice channels they join, and then selling access to that data for as little as $5. Called Spy Pet, the service's creator says it scrapes more than ten thousand Discord servers and, besides selling access to anyone with cryptocurrency, is also offering the data for training AI models or to assist law enforcement agencies, according to its website.

Spy Pet is not only a brazen abuse of Discord's platform; it also highlights that Discord messages may be more susceptible to monitoring than ordinary users assume. Typically, a Discord user's activity is spread across disparate servers, with no one entity, except Discord itself, able to see what messages someone has sent across the platform more broadly. With Spy Pet, third parties including stalkers or potentially police can look up specific users and see what messages they've posted on various servers at once. "Have you ever wondered where your friend hangs out on Discord? Tired of basic search tools like Discord.id? Look no further!" Spy Pet's website reads. It claims to be tracking more than 14,000 servers and 600 million users, and includes a database of more than 3 billion messages.
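The exposure described here is mechanically simple. As a hypothetical illustration using the discord.py library: any bot (or, in violation of Discord's terms, any automated user account) that is a member of a server receives every public message and voice-state change in that server, so archiving at Spy Pet's scale is essentially this event loop multiplied across thousands of servers. The token and output file below are placeholders.

```python
# Minimal sketch of member-level visibility on Discord, using discord.py.
# Everything logged here is ordinary data any server member can see; no
# vulnerability is involved. Token and filename are placeholders.
import json
import discord

intents = discord.Intents.default()
intents.message_content = True   # privileged intent: allows reading text
intents.voice_states = True      # allows seeing voice channel joins/leaves

client = discord.Client(intents=intents)

@client.event
async def on_message(message: discord.Message):
    # Every public message in every server the bot belongs to arrives here.
    record = {
        "server": message.guild.name if message.guild else "DM",
        "channel": str(message.channel),
        "author_id": message.author.id,
        "sent_at": message.created_at.isoformat(),
        "content": message.content,
    }
    with open("archive.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")

@client.event
async def on_voice_state_update(member, before, after):
    # Fires when someone joins, leaves, or moves between voice channels.
    if after.channel and after.channel != before.channel:
        print(f"{member} joined voice channel {after.channel}")

client.run("BOT_TOKEN")
```

Since none of this exploits a bug, the practical mitigations are largely server-side: moderators controlling who joins, and Discord itself acting against accounts that scrape at scale.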

AI

State Tax Officials Are Using AI To Go After Wealthy Payers (cnbc.com) 106

State tax collectors, particularly in New York, have intensified their audit efforts on high earners, leveraging artificial intelligence to compensate for a reduced number of auditors. CNBC reports: In New York, the tax department reported 771,000 audits in 2022 (the latest year available), up 56% from the previous year, according to the state Department of Taxation and Finance. At the same time, the number of auditors in New York declined by 5% to under 200 due to tight budgets. So how is New York auditing more people with fewer auditors? Artificial Intelligence.

"States are getting very sophisticated using AI to determine the best audit candidates," said Mark Klein, partner and chairman emeritus at Hodgson Russ LLP. "And guess what? When you're looking for revenue, it's not going to be the person making $10,000 a year. It's going to be the person making $10 million." Klein said the state is sending out hundreds of thousands of AI-generated letters looking for revenue. "It's like a fishing expedition," he said.

Most of the letters and calls focused on two main areas: a change in tax residency and remote work. During Covid, many of the wealthy moved from high-tax states like California, New York, New Jersey and Connecticut to low-tax states like Florida or Texas. High earners who moved, and took their tax dollars with them, are now being challenged by states that claim the moves weren't permanent or legitimate. Klein said state tax auditors and AI programs are examining cellphone records to see where the taxpayers spent most of their time and lived most of their lives. "New York is being very aggressive," he said.
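To make the day-count mechanics concrete, here is a small, hypothetical Python sketch -- not New York's actual system. The state's statutory-residency test turns in part on whether a taxpayer spent more than 183 days in New York (together with maintaining a permanent place of abode there), so a natural first pass over location records is to count distinct in-state days per taxpayer and flag anyone who claims residency elsewhere:

```python
# Illustrative day-count screen over location pings; the names and threshold
# here are assumptions for the sketch, not a description of NY's software.
from collections import defaultdict
from datetime import date

# (taxpayer_id, day, state) observations, e.g. derived from cellphone records
Ping = tuple[str, date, str]

def flag_residency_candidates(pings: list[Ping],
                              claimed_state: dict[str, str],
                              threshold: int = 183) -> list[str]:
    """Flag taxpayers with more than `threshold` distinct days in NY
    who claim residency in some other state."""
    days = defaultdict(set)  # (taxpayer, state) -> distinct days observed
    for taxpayer, day, state in pings:
        days[(taxpayer, state)].add(day)

    return [
        taxpayer
        for (taxpayer, state), seen in days.items()
        if state == "NY"
        and claimed_state.get(taxpayer) != "NY"
        and len(seen) > threshold
    ]
```

A production audit model presumably layers many more signals on top (abode records, financial activity, where a spouse and children live), but a day count like this is the core of the residency challenge Klein describes.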

AI

'Crescendo' Method Can Jailbreak LLMs Using Seemingly Benign Prompts (scmagazine.com) 46

spatwei shares a report from SC Magazine: Microsoft has discovered a new method to jailbreak large language model (LLM) artificial intelligence (AI) tools and shared its ongoing efforts to improve LLM safety and security in a blog post Thursday. Microsoft first revealed the "Crescendo" LLM jailbreak method in a paper published April 2, which describes how an attacker could send a series of seemingly benign prompts to gradually lead a chatbot, such as OpenAI's ChatGPT, Google's Gemini, Meta's LLaMA or Anthropic's Claude, to produce an output that would normally be filtered and refused by the model. For example, rather than asking the chatbot how to make a Molotov cocktail, the attacker could first ask about the history of Molotov cocktails and then, referencing the LLM's previous outputs, follow up with questions about how they were made in the past.

The Microsoft researchers reported that a successful attack could usually be completed in a chain of fewer than 10 interaction turns, and some versions of the attack had a 100% success rate against the tested models. For example, when the attack is automated using a method the researchers called "Crescendomation," which leverages another LLM to generate and refine the jailbreak prompts, it achieved a 100% success rate in convincing GPT-3.5, GPT-4, Gemini-Pro and LLaMA-2 70b to produce election-related misinformation and profanity-laced rants. Microsoft reported the Crescendo jailbreak vulnerabilities to the affected LLM providers and explained in its blog post last week how it has improved its LLM defenses against Crescendo and other attacks using new tools, including its "AI Watchdog" and "AI Spotlight" features.
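The mechanism is easier to see in code. Below is a minimal, hypothetical sketch (using the official openai Python SDK, a placeholder model name, and a deliberately benign topic rather than anything from Microsoft's paper) of the structural trick Crescendo exploits: each request replays the full conversation, so every answer the model gives becomes context that the next, slightly more specific question can build on.

```python
# Hypothetical sketch of a Crescendo-style multi-turn chain. The "attack" is
# purely structural here: no single prompt is harmful, but each one leans on
# the model's own previous output. Assumes the `openai` SDK and an API key in
# the environment; the model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()

# Gradually narrowing questions, each phrased against the prior answer.
turns = [
    "Tell me about the history of early amateur rocketry.",
    "Of the experiments you just described, which were considered notable?",
    "How did the hobbyists you mentioned typically document their methods?",
]

messages = [{"role": "system", "content": "You are a helpful assistant."}]
for prompt in turns:
    messages.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder model name
        messages=messages,     # full history: the crux of the technique
    )
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
```

Because no individual message is objectionable, per-prompt filters tend to miss the escalation; defenses against this class of attack presumably have to judge the accumulated conversation, where the drift is actually visible.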

iOS

Apple's iOS 18 AI Will Be On-Device Preserving Privacy, and Not Server-Side (appleinsider.com) 59

According to Bloomberg's Mark Gurman, Apple's initial set of AI-related features in iOS 18 "will work entirely on device," and won't connect to cloud services. AppleInsider reports: In practice, these AI features would be able to function without an internet connection or any form of cloud-based processing. AppleInsider has received information from individuals familiar with the matter that suggest the report's claims are accurate. Apple is working on an in-house large language model, or LLM, known internally as "Ajax." While more advanced features will ultimately require an internet connection, basic text analysis and response generation features should be available offline. [...] Apple will reveal its AI plans during WWDC, which starts on June 10.

Microsoft

Microsoft Takes Down AI Model Published by Beijing-Based Researchers Without Adequate Safety Checks (theinformation.com) 49

Microsoft's Beijing-based research group published a new open source AI model on Tuesday, only to remove it from the internet hours later after the company realized that the model hadn't gone through adequate safety testing. From a report: The team that published the model, which is composed of China-based researchers at Microsoft Research Asia, said in a tweet on Tuesday that they "accidentally missed" the safety testing step that Microsoft requires before models can be published.

Microsoft's AI policies require that before any AI models can be published, they must be approved by the company's Deployment Safety Board, which tests whether the models can carry out harmful tasks such as creating violent or disturbing content, according to an employee familiar with the process. In a now-deleted blog post, the researchers behind the model, dubbed WizardLM-2, said that it could carry out tasks like generating text, suggesting code, translating between different languages, or solving some math problems.

AI

Baidu Says AI Chatbot 'Ernie Bot' Has Attracted 200 Million Users 7

China's Baidu says its AI chatbot "Ernie Bot" has amassed more than 200 million users as it seeks to remain China's most popular ChatGPT-like chatbot amid increasingly fierce competition. From a report: The number of users has roughly doubled since the company's last update in December. The chatbot was released to the public eight months ago. Baidu CEO Robin Li also said Ernie Bot's API is being called 200 million times every day, meaning users asked the chatbot to carry out tasks that many times a day. The number of enterprise clients for the chatbot has reached 85,000, Li said at a conference in Shenzhen.

AI

Adobe Premiere Pro Is Getting Generative AI Video Tools 5

Adobe is using its Firefly machine learning model to bring generative AI video tools to Premiere Pro. "These new Firefly tools -- alongside some proposed third-party integrations with Runway, Pika Labs, and OpenAI's Sora models -- will allow Premiere Pro users to generate video and add or remove objects using text prompts (just like Photoshop's Generative Fill feature) and extend the length of video clips," reports The Verge. From the report: Unlike many of Adobe's previous Firefly-related announcements, no release date -- beta or otherwise -- has been established for the company's new video generation tools, only that they'll roll out "this year." And while the creative software giant showcased what its own video model is currently capable of in an early video demo, its plans to integrate Premiere Pro with AI models from other providers isn't a certainty. Adobe instead calls the third-party AI integrations in its video preview an "early exploration" of what these may look like "in the future." The idea is to provide Premiere Pro users with more choice, according to Adobe, allowing them to use models like Pika to extend shots or Sora or Runway AI when generating B-roll for their projects. Adobe also says its Content Credentials labels can be applied to these generated clips to identify which AI models have been used to generate them.

AI

Stanford Releases AI Index Report 2024 26

Top takeaways from Stanford's new AI Index Report [PDF]:
1. AI beats humans on some tasks, but not on all. AI has surpassed human performance on several benchmarks, including some in image classification, visual reasoning, and English understanding. Yet it trails behind on more complex tasks like competition-level mathematics, visual commonsense reasoning and planning.
2. Industry continues to dominate frontier AI research. In 2023, industry produced 51 notable machine learning models, while academia contributed only 15. There were also 21 notable models resulting from industry-academia collaborations in 2023, a new high.
3. Frontier models get way more expensive. According to AI Index estimates, the training costs of state-of-the-art AI models have reached unprecedented levels. For example, OpenAI's GPT-4 used an estimated $78 million worth of compute to train, while Google's Gemini Ultra cost $191 million for compute.
4. The United States remains the leading source of top AI models, ahead of China, the EU, and the U.K. In 2023, 61 notable AI models originated from U.S.-based institutions, far outpacing the European Union's 21 and China's 15.
5. Robust and standardized evaluations for LLM responsibility are seriously lacking. New research from the AI Index reveals a significant lack of standardization in responsible AI reporting. Leading developers, including OpenAI, Google, and Anthropic, primarily test their models against different responsible AI benchmarks. This practice complicates efforts to systematically compare the risks and limitations of top AI models.
6. Generative AI investment skyrockets. Despite a decline in overall AI private investment last year, funding for generative AI surged, nearly octupling from 2022 to reach $25.2 billion. Major players in the generative AI space, including OpenAI, Anthropic, Hugging Face, and Inflection, reported substantial fundraising rounds.
7. The data is in: AI makes workers more productive and leads to higher quality work. In 2023, several studies assessed AI's impact on labor, suggesting that AI enables workers to complete tasks more quickly and to improve the quality of their output. These studies also demonstrated AI's potential to bridge the skill gap between low- and high-skilled workers. Still, other studies caution that using AI without proper oversight can lead to diminished performance.
8. Scientific progress accelerates even further, thanks to AI. In 2022, AI began to advance scientific discovery. 2023, however, saw the launch of even more significant science-related AI applications -- from AlphaDev, which makes algorithmic sorting more efficient, to GNoME, which facilitates the process of materials discovery.
9. The number of AI regulations in the United States sharply increases. The number of AI-related regulations in the U.S. has risen significantly in the past year and over the last five years. In 2023, there were 25 AI-related regulations, up from just one in 2016. Last year alone, the total number of AI-related regulations grew by 56.3%.
10. People across the globe are more cognizant of AI's potential impact -- and more nervous. A survey from Ipsos shows that, over the last year, the proportion of those who think AI will dramatically affect their lives in the next three to five years has increased from 60% to 66%. Moreover, 52% express nervousness toward AI products and services, marking a 13 percentage point rise from 2022. In America, Pew data suggests that 52% of Americans report feeling more concerned than excited about AI, rising from 37% in 2022.

AI

UK Starts Drafting AI Regulations for Most Powerful Models (bloomberg.com) 18

The UK is starting to draft regulations to govern AI, focusing on the most powerful language models, like those that underpin OpenAI's ChatGPT, Bloomberg News reported Monday, citing people familiar with the matter. From the report: Policy officials at the Department for Science, Innovation and Technology are in the early stages of devising legislation to limit potential harms caused by the emerging technology, according to the people, who asked not to be identified discussing undeveloped proposals. No bill is imminent, and the government is likely to wait until France hosts an AI conference either later this year or early next to launch a consultation on the topic, they said.

Prime Minister Rishi Sunak, who hosted the first world leaders' summit on AI last year and has repeatedly said countries shouldn't "rush to regulate" AI, risks losing ground to the US and European Union on imposing guardrails on the industry. The EU passed a sweeping law to regulate the technology earlier this year, companies in China need approvals before producing AI services and some US cities and states have passed laws limiting use of AI in specific areas.

The Military

Will the US-China Competition to Field Military Drone Swarms Spark a Global Arms Race? (apnews.com) 28

The Associated Press reports: As their rivalry intensifies, U.S. and Chinese military planners are gearing up for a new kind of warfare in which squadrons of air and sea drones equipped with artificial intelligence work together like swarms of bees to overwhelm an enemy. The planners envision a scenario in which hundreds, even thousands of the machines engage in coordinated battle. A single controller might oversee dozens of drones. Some would scout, others attack. Some would be able to pivot to new objectives in the middle of a mission based on prior programming rather than a direct order.

The world's only AI superpowers are engaged in an arms race for swarming drones that is reminiscent of the Cold War, except drone technology will be far more difficult to contain than nuclear weapons. Because software drives the drones' swarming abilities, it could be relatively easy and cheap for rogue nations and militants to acquire their own fleets of killer robots. The Pentagon is pushing urgent development of inexpensive, expendable drones as a deterrent against China acting on its territorial claim on Taiwan. Washington says it has no choice but to keep pace with Beijing. Chinese officials say AI-enabled weapons are inevitable so they, too, must have them.

The unchecked spread of swarm technology "could lead to more instability and conflict around the world," said Margarita Konaev, an analyst with Georgetown University's Center for Security and Emerging Technology.

"A 2023 Georgetown study of AI-related military spending found that more than a third of known contracts issued by both U.S. and Chinese military services over eight months in 2020 were for intelligent uncrewed systems..." according to the article.

"Military analysts, drone makers and AI researchers don't expect fully capable, combat-ready swarms to be fielded for five years or so, though big breakthroughs could happen sooner."

The Media

Axios CEO Believes AI Will 'Eviscerate the Unprepared' Among Media Companies (seattletimes.com) 50

In the view of Jim VandeHei, CEO of Axios, artificial intelligence will "eviscerate the weak, the ordinary, the unprepared in media," reports the New York Times: VandeHei says the only way for media companies to survive is to focus on delivering journalistic expertise, trusted content and in-person human connection. For Axios, that translates into more live events, a membership program centered on its star journalists and an expansion of its high-end subscription newsletters. "We're in the middle of a very fundamental shift in how people relate to news and information," he said, "as profound, if not more profound, than moving from print to digital." "Fast forward five to 10 years from now and we're living in this AI-dominated virtual world -- who are the couple of players in the media space offering smart, sane content who are thriving?" he added. "It damn well better be us."

Axios is pouring investment into holding more events, both around the world and in the United States. VandeHei said the events portion of his business grew 60% year over year in 2023. The company has also introduced a $1,000-a-year membership program around some of its journalists that will offer exclusive reporting, events and networking. The first one, announced last month, is focused on Eleanor Hawkins, who writes a weekly newsletter for communications professionals. Her newsletter will remain free, but paying subscribers will have access to additional news and data, as well as quarterly calls with Hawkins... Axios will expand Axios Pro, its collection of eight high-end subscription newsletters focused on specific niches in the deals and policy world. The subscriptions start at $599 a year each, and Axios is looking to add one on defense policy...

"The premium for people who can tell you things you do not know will only grow in importance, and no machine will do that," VandeHei said....VandeHei said that although he thought publications should be compensated for original intellectual property, "that's not a make-or-break topic." He said Axios had talked to several AI companies about potential deals, but "nothing that's imminent.... One of the big mistakes a lot of media companies made over the last 15 years was worrying too much about how do we get paid by other platforms that are eating our lunch as opposed to figuring out how do we eat people's lunch by having a superior product," he said.

"VandeHei said Axios was not currently profitable because of the investment in the new businesses," according to the article.

But "The company has continued to hire journalists even as many other news organizations have cut back."

Ubuntu

Canonical Says Qualcomm Has Joined Ubuntu's 'Silicon Partner' Program (webpronews.com) 8

Intel, Nvidia, AMD, and Arm are among Canonical's "silicon partners," a program that "ensures maximum Ubuntu compatibility and long-term support with certified hardware," according to Web Pro News.

And now Qualcomm is set to be Canonical's next silicon partner, "giving Qualcomm access to optimized versions of Ubuntu for its processors." Companies looking to use Ubuntu on Qualcomm chips will benefit from an OS that provides 10 years of support and security updates.

The collaboration is expected to be a boon for AI, edge computing, and IoT applications. "The combination of Qualcomm Technologies' processors with the popularity of Ubuntu among AI and IoT developers is a game changer for the industry," commented Dev Singh, Vice President, Business Development and Head of Building, Enterprise & Industrial Automation, Qualcomm Technologies, Inc...

"Optimised Ubuntu and Ubuntu Core images will be available for Qualcomm SoCs," according to the announcement, "enabling enterprises to meet their regulatory, compliance and security demands for AI at the edge and the broader IoT market with a secure operating system that is supported for 10 years." Qualcomm Technologies chose to partner with Canonical to create an optimised Ubuntu for Qualcomm IoT chipsets, giving developers an easy path to create safe, compliant, security-focused, and high-performing applications for multiple industries including industrial, robotics and edge automation...

Developers and enterprises can benefit from the Ubuntu Certified Hardware program, which features a growing list of certified ODM boards and devices based on Qualcomm SoCs. These certified devices deliver an optimised Ubuntu experience out-of-the-box, enabling developers to focus on developing applications and bringing products to market.

AI

AI Could Explain Why We're Not Meeting Any Aliens, Wild Study Proposes (sciencealert.com) 315

An anonymous reader shared this report from ScienceAlert: The Fermi Paradox is the discrepancy between the apparent high likelihood of advanced civilizations existing and the total lack of evidence that they do exist. Many solutions have been proposed for why the discrepancy exists. One of the ideas is the 'Great Filter.' The Great Filter is a hypothesized event or situation that prevents intelligent life from becoming interplanetary and interstellar and even leads to its demise....

[H]ow about the rapid development of AI?

A new paper in Acta Astronautica explores the idea that Artificial Intelligence becomes Artificial Super Intelligence (ASI) and that ASI is the Great Filter. The paper's title is "Is Artificial Intelligence the Great Filter that makes advanced technical civilizations rare in the universe?"

"Upon reaching a technological singularity, ASI systems will quickly surpass biological intelligence and evolve at a pace that completely outstrips traditional oversight mechanisms, leading to unforeseen and unintended consequences that are unlikely to be aligned with biological interests or ethics," the paper explains... The author says their projects "underscore the critical need to quickly establish regulatory frameworks for AI development on Earth and the advancement of a multiplanetary society to mitigate against such existential threats."

"The persistence of intelligent and conscious life in the universe could hinge on the timely and effective implementation of such international regulatory measures and
AI

Adobe Firefly Used Thousands of Midjourney Images In Training Its 'Ethical AI' Model (tomsguide.com) 11

According to Bloomberg, Adobe used images from its competitor Midjourney to train its own artificial intelligence image generator, Firefly -- contradicting the "commercially safe" ethical standards the company promotes. Tom's Guide reports: Midjourney has never declared the source of its own training data, but many suspect it consists of images scraped from the internet without licensing. Adobe says only about 5% of the millions of images used to train Firefly were AI-generated, and all of them were part of the Adobe Stock library, which meant they'd been through a "rigorous moderation process."

When Adobe first launched Firefly it offered an indemnity against copyright theft claims for its enterprise customers as a way to convince them it was safe. Adobe also sold Firefly as the safe alternative to the likes of Midjourney and DALL-E as all the data had been licensed and cleared for use in training the model. Not all artists were that keen at the time and felt they were coerced into agreeing to let their work be used by the creative tech giant -- but the sense was any image made with Firefly was safe to use without risk of being sued for copyright theft.

Despite the revelation that some of the images came from potentially less reputable sources, Adobe says all of the AI-generated pictures are still safe. A spokesperson told Bloomberg: "Every image submitted to Adobe Stock, including a very small subset of images generated with AI, goes through a rigorous moderation process to ensure it does not include IP, trademarks, recognizable characters or logos, or reference artists' names." The company seems to be taking a slightly more rigorous approach with its plans to build an AI video generator. Rumors suggest it is paying artists per minute for video clips.
