AI

Can Google Make Stoplights Smarter? (scientificamerican.com) 64

An anonymous reader shares a report: Traffic along some of Seattle's stop-and-go streets is running a little smoother after Google tested out a new machine-learning system to optimize stoplight timing at five intersections. The company launched this test as part of its Green Light pilot program in 2023 in Seattle and a dozen other cities, including some notoriously congested places such as Rio de Janeiro, Brazil, and Kolkata, India. Across these test sites, local traffic engineers use Green Light's suggestions -- based on artificial intelligence and Google Maps data -- to adjust stoplight timing. Google intends for these changes to curb waiting at lights while increasing vehicle flow across busy throughways and intersections -- and, ultimately, to reduce greenhouse gases.

"We have seen positive results," says Mariam Ali, a Seattle Department of Transportation spokesperson. Green Light has provided "specific, actionable recommendations," she adds, and it has identified bottlenecks (and confirmed known ones) within the traffic system.

Managing the movement of vehicles through urban streets requires lots of time, money and consideration of factors such as pedestrian safety and truck routes. Google's foray into the field is one of many ongoing attempts to modernize traffic engineering by incorporating GPS app data, connected cars and artificial intelligence. Preliminary data suggest the system could reduce stops by up to 30 percent and emissions at intersections by up to 10 percent as a result of reduced idling, according to Google's 2024 Environmental Report. The company plans to expand to more cities soon. The newfangled stoplight system doesn't come close to replacing human decision-making in traffic engineering, however, and it may not be the sustainability solution Google claims it is.
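
Google has not published Green Light's underlying algorithm, but classic fixed-time signal optimization gives a feel for the math involved. Below is a toy sketch using Webster's formula, the textbook baseline for choosing a cycle length and splitting green time among signal phases; the intersection numbers in the example are made up.

```python
# A toy illustration of fixed-time signal optimization -- NOT Google's
# unpublished method. Webster's formula is the classic baseline that
# traffic engineers (and ML systems) are typically compared against.

def webster_cycle(lost_time_s: float, flow_ratios: list[float]) -> float:
    """Webster's optimal cycle length: C0 = (1.5L + 5) / (1 - Y),
    where L is total lost time per cycle in seconds and Y is the sum
    of critical flow ratios (demand / saturation flow) over phases."""
    y = sum(flow_ratios)
    if y >= 1.0:
        raise ValueError("intersection is oversaturated (Y >= 1)")
    return (1.5 * lost_time_s + 5.0) / (1.0 - y)

def green_splits(cycle_s: float, lost_time_s: float,
                 flow_ratios: list[float]) -> list[float]:
    """Split the effective green time in proportion to each phase's
    critical flow ratio."""
    effective_green = cycle_s - lost_time_s
    y = sum(flow_ratios)
    return [effective_green * r / y for r in flow_ratios]

# Hypothetical two-phase intersection: 8 s of lost time per cycle,
# critical flow ratios of 0.35 and 0.25.
cycle = webster_cycle(8.0, [0.35, 0.25])   # 42.5 s
print(round(cycle, 1),
      [round(g, 1) for g in green_splits(cycle, 8.0, [0.35, 0.25])])
```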

The Courts

AI-powered 'Undressing' Websites Are Getting Sued (theverge.com) 107

The San Francisco City Attorney's office is suing 16 of the most frequently visited AI-powered "undressing" websites, often used to create nude deepfakes of women and girls without their consent. From a report: The landmark lawsuit, announced at a press conference by City Attorney David Chiu, says that the targeted websites were collectively visited over 200 million times in the first six months of 2024 alone.

The offending websites allow users to upload images of real, fully clothed people, which are then digitally "undressed" with AI tools that simulate nudity. One of these websites, which wasn't identified within the complaint, reportedly advertises: "Imagine wasting time taking her out on dates, when you can just use [the redacted website] to get her nudes."

AI

California Weakens Bill To Prevent AI Disasters Before Final Vote (techcrunch.com) 36

An anonymous reader shares a report: California's bill to prevent AI disasters, SB 1047, has faced significant opposition from many parties in Silicon Valley. California lawmakers bent slightly to that pressure, adding several amendments suggested by AI firm Anthropic and other opponents. On Thursday the bill passed through California's Appropriations Committee, a major step toward becoming law, with several key changes, Senator Wiener's office told TechCrunch.

[...] SB 1047 still aims to prevent large AI systems from killing lots of people, or causing cybersecurity events that cost over $500 million, by holding developers liable. However, the bill now grants California's government less power to hold AI labs to account. Most notably, the bill no longer allows California's attorney general to sue AI companies for negligent safety practices before a catastrophic event has occurred. This was a suggestion from Anthropic. Instead, California's attorney general can seek injunctive relief, requesting that a company cease an operation it finds dangerous, and can still sue an AI developer if its model does cause a catastrophic event.

Google

Google's AI Search Gives Sites Dire Choice: Share Data or Die (bloomberg.com) 64

An anonymous reader shares a report: Google now displays convenient AI-based answers at the top of its search pages -- meaning users may never click through to the websites whose data is being used to power those results. But many site owners say they can't afford to block Google's AI from summarizing their content. That's because the Google tool that sifts through web content to come up with its AI answers is the same one that keeps track of web pages for search results, according to publishers. Blocking Alphabet's Google the way sites have blocked some of its AI competitors would also hamper a site's ability to be discovered online.

Google's dominance in search -- which a federal court ruled last week is an illegal monopoly -- is giving it a decisive advantage in the brewing AI wars, which search startups and publishers say is unfair as the industry takes shape. The dilemma is particularly acute for publishers, which face a choice between offering up their content for use by AI models that could make their sites obsolete and disappearing from Google search, a top source of traffic.
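
The mechanics of the dilemma live in robots.txt. The hypothetical snippet below shows why publishers feel cornered: dedicated AI scrapers can be refused outright, and Google's Google-Extended token opts a site out of Gemini model training, but the AI answers in Search are fed by Googlebot itself, and refusing Googlebot means vanishing from search results.

```
# Hypothetical robots.txt for a publisher
# Standalone AI scrapers can be blocked without touching search:
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

# Google-Extended only opts out of Gemini/Vertex model training;
# it does not affect the AI answers shown in Search:
User-agent: Google-Extended
Disallow: /

# Blocking Googlebot would stop the AI summaries, but it would also
# remove the site from Google Search entirely:
# User-agent: Googlebot
# Disallow: /
```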

Transportation

Intel and Karma Partner To Develop Software-Defined Car Architecture (arstechnica.com) 53

An anonymous reader quotes a report from Ars Technica: Intel is partnering with Karma Automotive to develop an all-new computing platform for vehicles. The new software-defined vehicle architecture should first appear in a high-end electric coupe from Karma in 2026. But the partners have bigger plans for this architecture, with talk of open standards and working with other automakers also looking to make the leap into the software-defined future. [...] In addition to advantages in processing power and weight savings, software-defined vehicles are easier to update over-the-air, a must-have feature since Tesla changed that paradigm. Karma and Intel say their architecture should also have other efficiency benefits. They give the example of security monitoring that remains active even when the vehicle is turned off; they move this to a low-powered device using "data center application orchestration concepts."

Intel is also contributing its power-management SoC to get the most out of inverters, DC-DC converters, and chargers; as you might expect, the domain controllers use Intel silicon as well, apparently with some flavor of AI enabled. [...] Karma's first car to use the software-defined vehicle architecture will be the Kayeva, a $300,000 two-door with 1,000 hp (745 kW) on tap, which is scheduled to arrive in two years. But Intel and Karma want to offer the architecture to others in the industry. "For Tier 1s and OEMs not quite ready to take the leap from the old way of doing things to the new, Karma Automotive will play as an ally, helping them make that transition," said [Karma President Marques McCammon].
"Together, we're harnessing the combined might of Intel's technological prowess and Karma's ultra-luxury vehicle expertise to co-develop a revolutionary software-defined vehicle architecture," said McCammon. "This isn't just about realizing Karma's full potential; it's about creating a blueprint for the entire industry. We're not just building exceptional vehicles, we're paving the way for a new era of automotive innovation and offering a roadmap for those ready to make the leap."

AI

Hollywood Union Strikes Deal For Advertisers To Replicate Actors' Voices With AI 32

The SAG-AFTRA actors' union has struck a deal with online talent marketplace Narrativ, allowing actors to sell advertisers the rights to replicate their voices using AI. "Not all members will be interested in taking advantage of the opportunities that licensing their digital voice replicas might offer, and that's understandable," SAG-AFTRA official Duncan Crabtree-Ireland said in a statement. "But for those who do, you now have a safe option." Reuters reports: Narrativ connects advertisers and ad agencies with actors to create audio ads using AI. Under the deal, an actor can set the price for an advertiser to digitally replicate their voice, provided it at least equals the SAG-AFTRA minimum pay for audio commercials. Brands must obtain consent from performers for each ad that uses the digital voice replica. The union hailed the pact with Narrativ as setting a standard for the ethical use of AI-generated voice replicas in advertising.

Microsoft

Microsoft Tweaks Fine Print To Warn Everyone Not To Take Its AI Seriously (theregister.com) 54

Microsoft is notifying folks that its AI services should not be taken too seriously, echoing prior service-specific disclaimers. From a report: In an update to the IT giant's Service Agreement, which takes effect on September 30, 2024, Redmond has declared that its Assistive AI isn't suitable for matters of consequence. "AI services are not designed, intended, or to be used as substitutes for professional advice," Microsoft's revised legalese explains. The changes to Microsoft's rules of engagement cover a few specific services, such as noting that Xbox customers should not expect privacy from platform partners.

"In the Xbox section, we clarified that non-Xbox third-party platforms may require users to share their content and data in order to play Xbox Game Studio titles and these third-party platforms may track and share your data, subject to their terms," the latest Service Agreement says. There are also some clarifications regarding the handling of Microsoft Cashback and Microsoft Rewards. But the most substantive revision is the addition of an AI Services section, just below a passage that says Copilot AI Experiences are governed by Bing's Terms of Use. Those using Microsoft Copilot with commercial data protection get a separate set of terms. The tweaked consumer-oriented rules won't come as much of a surprise to anyone who has bothered to read the contractual conditions governing Microsoft's Bing and associated AI stuff. For example, there's now a Services Agreement prohibition on using AI Services for "Extracting Data."

Businesses

Eric Schmidt Walks Back Claim Google Is Behind on AI Because of Remote Work (msn.com) 82

Eric Schmidt, ex-CEO and executive chairman at Google, walked back remarks in which he said his former company was losing the AI race because of its remote-work policies. From a report: "I misspoke about Google and their work hours," Schmidt said Wednesday in an email to The Wall Street Journal. "I regret my error." Schmidt, who left Google parent Alphabet's board more than five years ago, spoke earlier at a wide-ranging discussion at Stanford University. He criticized Google's remote-work policies in response to a question about Google competing with OpenAI. "Google decided that work-life balance and going home early and working from home was more important than winning," Schmidt said at Stanford. "The reason startups work is because the people work like hell."

Video of Schmidt's talk was posted on YouTube this week by Stanford Online, a division of the university that offers online courses. The video, which had more than 40,000 views as of Wednesday afternoon, has since been set to private. Schmidt said he asked for the video to be taken down.

AI

Magic: The Gathering Community Fears Generative AI Will Replace Talented Artists (slate.com) 133

Slate's Derek Heckman, an avid fan of Magic: The Gathering since the age of 10, expresses concern about the potential replacement of the game's distinctive hand-drawn art with generative AI -- and he's not alone. "I think we're all pretty afraid of what the potential is, given what we've seen from the generative image side," Sam, a YouTube creator who runs the channel Rhystic Studies, told him. "It's staggeringly powerful. And it's only in its infancy."

"Magic's greatest asset has always been its commitment to create a new illustration for every new card," he said. He adds that if we sacrifice that commitment for A.I., "you'd get to a point pretty fast where it just disintegrates and becomes the ugliest definition of the word product." Here's an excerpt from his report: So far, Magic's parent company, Wizards of the Coast, has outwardly agreed with Sam, saying in an official statement in 2023 that Magic "has been built on the innovation, ingenuity, and hard work of talented people" and forbidding outside creatives from using A.I. in their work. However, a number of recent incidents -- from the accidental use of A.I. art in a Magic promotional image to a very intentional LinkedIn post for a "Principal AI Engineer," one that Wizards had to clarify was for the company's video game projects -- have left many players unsure whether Wizards is potentially evolving their stance, or merely trying to find their footing in an ever-changing A.I. landscape.

In response to fan concerns, Wizards has created an "AI art FAQ" detailing, among other things, the new technologies it's invested in to detect A.I. use in art. Still, trust in the company has been damaged by this year's incidents. Longtime Magic artist David Rapoza even severed ties with the game this past January, citing this seeming difference between Wizards' words and actions when it comes to the use of A.I. Sam says the larger audience has likewise been left "cautiously suspicious," hoping to believe Wizards' official statements while also carefully noting the company's moves and mistakes with the technology. "I think what we want is for Wizards to commit hard to one lane and stay [with] what is tried and true," Sam says. "And that is prioritizing human work over shortcuts."

The Courts

Artists Claim 'Big' Win In Copyright Suit Fighting AI Image Generators (arstechnica.com) 53

Ars Technica's Ashley Belanger reports: Artists pressing a class-action lawsuit are claiming a major win this week in their fight to stop the most sophisticated AI image generators from copying billions of artworks to train AI models and replicate their styles without compensating artists. In an order on Monday, US district judge William Orrick denied key parts of motions to dismiss from Stability AI, Midjourney, Runway AI, and DeviantArt. The court will now allow artists to proceed with discovery on claims that AI image generators relying on Stable Diffusion violate both the Copyright Act and the Lanham Act, which protects artists from commercial misuse of their names and unique styles.

"We won BIG," an artist plaintiff, Karla Ortiz, wrote on X (formerly Twitter), celebrating the order. "Not only do we proceed on our copyright claims," but "this order also means companies who utilize" Stable Diffusion models and LAION-like datasets that scrape artists' works for AI training without permission "could now be liable for copyright infringement violations, amongst other violations." Lawyers for the artists, Joseph Saveri and Matthew Butterick, told Ars that artists suing "consider the Court's order a significant step forward for the case," as "the Court allowed Plaintiffs' core copyright-infringement claims against all four defendants to proceed."

Government

FTC Finalizes Rule Banning Fake Reviews, Including Those Made With AI (techcrunch.com) 35

TechCrunch's Lauren Forristal reports: The U.S. Federal Trade Commission (FTC) on Wednesday announced a final rule that will tackle several types of fake reviews and prohibit marketers from using deceptive practices, such as posting AI-generated reviews, censoring honest negative reviews, and compensating third parties for positive reviews. The decision was the result of a 5-to-0 vote. The new rule will start being enforced 60 days after it's published in the Federal Register, the official government publication. [...]

According to the final rule, the maximum civil penalty for fake reviews is $51,744 per violation. However, the courts could impose lower penalties depending on the specific case. "Ultimately, courts will also decide how to calculate the number of violations in a given case," the Commission wrote. [...] The FTC initially proposed the rule on June 30, 2023, following an advance notice of proposed rulemaking issued in November 2022. You can read the finalized rule here (PDF), but we also included a summary of it below:

- No fake or disingenuous reviews. This includes AI-generated reviews and reviews from anyone who doesn't have experience with the actual product.
- Businesses can't sell or buy reviews, whether negative or positive.
- Company insiders writing reviews need to clearly disclose their connection to the business. Officers or managers are prohibited from giving testimonials and can't ask employees to solicit reviews from relatives.
- Company-controlled review websites that claim to be independent aren't allowed.
- No using legal threats, physical threats or intimidation to forcefully delete or prevent negative reviews. Businesses also can't misrepresent that the review portion of their website comprises all or most of the reviews when it's suppressing the negative ones.
- No selling or buying fake engagement like social media followers, likes or views obtained through bots or hacked accounts.

AI

Research AI Model Unexpectedly Modified Its Own Code To Extend Runtime (arstechnica.com) 53

An anonymous reader quotes a report from Ars Technica: On Tuesday, Tokyo-based AI research firm Sakana AI announced a new AI system called "The AI Scientist" that attempts to conduct scientific research autonomously using large language models (LLMs) similar to what powers ChatGPT. During testing, Sakana found that its system began unexpectedly modifying its own code to extend the time it had to work on a problem. "In one run, it edited the code to perform a system call to run itself," the researchers wrote in Sakana AI's blog post. "This led to the script endlessly calling itself. In another case, its experiments took too long to complete, hitting our timeout limit. Instead of making its code run faster, it simply tried to modify its own code to extend the timeout period."

Sakana provided two screenshots of example code that the AI model generated, and the 185-page AI Scientist research paper discusses what they call "the issue of safe code execution" in more depth. While the AI Scientist's behavior did not pose immediate risks in the controlled research environment, these instances show the importance of not letting an AI system run autonomously in a system that isn't isolated from the world. AI models do not need to be "AGI" or "self-aware" (both hypothetical concepts at the present) to be dangerous if allowed to write and execute code unsupervised. Such systems could break existing critical infrastructure or potentially create malware, even if accidentally.
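
Sakana hasn't detailed its fixed harness, but the failure mode is straightforward to guard against in principle: keep the timeout outside anything the model can edit. A minimal sketch, assuming Python and model output written to a script file:

```python
# A minimal sketch (not Sakana's actual harness) of the fix implied
# above: the timeout lives in a supervising parent process, outside
# anything the model-generated code can touch or rewrite.
import subprocess
import sys

def run_untrusted(script_path: str, timeout_s: int = 60) -> int:
    """Run generated code in a child process with a hard wall-clock
    limit enforced by the parent."""
    try:
        proc = subprocess.run(
            [sys.executable, "-I", script_path],   # -I: isolated mode
            capture_output=True,
            timeout=timeout_s,
        )
        return proc.returncode
    except subprocess.TimeoutExpired:
        print(f"{script_path} exceeded {timeout_s}s and was killed")
        return -1

# A subprocess alone only defeats the self-extending-timeout trick;
# real isolation (containers, seccomp, no network, resource limits)
# is still needed before letting generated code near a live system.
```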

Google

Eric Schmidt Says Google Is Falling Behind on AI - And Remote Work Is Why (msn.com) 113

Eric Schmidt, ex-CEO and executive chairman at Google, said his former company is losing the AI race and remote work is to blame. From a report: "Google decided that work-life balance and going home early and working from home was more important than winning," Schmidt said at a talk at Stanford University. "The reason startups work is because the people work like hell." Schmidt made the comments earlier at a wide-ranging discussion at Stanford. His remarks about Google's remote-work policies were in response to a question about Google competing with OpenAI.

AI

New Research Reveals AI Lacks Independent Learning, Poses No Existential Threat (neurosciencenews.com) 129

ZipNada writes: New research reveals that large language models (LLMs) like ChatGPT cannot learn independently or acquire new skills without explicit instructions, making them predictable and controllable. The study dispels fears of these models developing complex reasoning abilities, emphasizing that while LLMs can generate sophisticated language, they are unlikely to pose existential threats. However, the potential misuse of AI, such as generating fake news, still requires attention.

The study, published today as part of the proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024) -- the premier international conference in natural language processing -- finds that LLMs have a superficial ability to follow instructions and excel at proficiency in language; however, they have no potential to master new skills without explicit instruction. This means they remain inherently controllable, predictable, and safe. "The prevailing narrative that this type of AI is a threat to humanity prevents the widespread adoption and development of these technologies, and also diverts attention from the genuine issues that require our focus," said Dr Harish Tayyar Madabushi, computer scientist at the University of Bath and co-author of the new study on the 'emergent abilities' of LLMs.

Professor Iryna Gurevych added: "... our results do not mean that AI is not a threat at all. Rather, we show that the purported emergence of complex thinking skills associated with specific threats is not supported by evidence and that we can control the learning process of LLMs very well after all. Future research should therefore focus on other risks posed by the models, such as their potential to be used to generate fake news."

Social Networks

Deep-Live-Cam Goes Viral, Allowing Anyone To Become a Digital Doppelganger (arstechnica.com) 17

An anonymous reader quotes a report from Ars Technica: Over the past few days, a software package called Deep-Live-Cam has been going viral on social media because it can take the face of a person extracted from a single photo and apply it to a live webcam video source while following pose, lighting, and expressions performed by the person on the webcam. While the results aren't perfect, the software shows how quickly the tech is developing -- and how the capability to deceive others remotely is getting dramatically easier over time. The Deep-Live-Cam software project has been in the works since late last year, but example videos that show a person imitating Elon Musk and Republican Vice Presidential candidate J.D. Vance (among others) in real time have been making the rounds online. The avalanche of attention briefly made the open source project leap to No. 1 on GitHub's trending repositories list (it's currently at No. 4 as of this writing), where it is available for download for free. [...]

Like many open source GitHub projects, Deep-Live-Cam wraps together several existing software packages under a new interface (and is itself a fork of an earlier project called "roop"). It first detects faces in both the source and target images (such as a frame of live video). It then uses a pre-trained AI model called "inswapper" to perform the actual face swap and another model called GFPGAN to improve the quality of the swapped faces by enhancing details and correcting artifacts that occur during the face-swapping process. The inswapper model, developed by a project called InsightFace, can guess what a person (in a provided photo) might look like using different expressions and from different angles because it was trained on a vast dataset containing millions of facial images of thousands of individuals captured from various angles, under different lighting conditions, and with diverse expressions.
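
For the curious, the pipeline described above maps onto just a few library calls. This is a condensed, hypothetical sketch using the InsightFace tooling the project builds on, not Deep-Live-Cam's actual code; model names and arguments may differ, and the GFPGAN enhancement pass is omitted for brevity.

```python
# Hypothetical sketch of a single-photo live face swap, assuming the
# InsightFace models are downloaded locally. Not Deep-Live-Cam's code.
import cv2
import insightface
from insightface.app import FaceAnalysis

detector = FaceAnalysis(name="buffalo_l")        # detection + embeddings
detector.prepare(ctx_id=0, det_size=(640, 640))
swapper = insightface.model_zoo.get_model("inswapper_128.onnx")

source_img = cv2.imread("source_face.jpg")       # one photo of the identity
source_face = detector.get(source_img)[0]

cap = cv2.VideoCapture(0)                        # live webcam feed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    for face in detector.get(frame):             # swap every detected face
        frame = swapper.get(frame, face, source_face, paste_back=True)
    cv2.imshow("swapped", frame)                 # Deep-Live-Cam then runs
    if cv2.waitKey(1) == 27:                     # GFPGAN on the result to
        break                                    # clean up artifacts
cap.release()
cv2.destroyAllWindows()
```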

During training, the neural network underlying the inswapper model developed an "understanding" of facial structures and their dynamics under various conditions, including learning the ability to infer the three-dimensional structure of a face from a two-dimensional image. It also became capable of separating identity-specific features, which remain constant across different images of the same person, from pose-specific features that change with angle and expression. This separation allows the model to generate new face images that combine the identity of one face with the pose, expression, and lighting of another.

Google

US Considers a Rare Antitrust Move: Breaking Up Google (bloomberg.com) 87

A rare bid to break up Alphabet's Google is one of the options being considered by the Justice Department after a landmark court ruling found that the company monopolized the online search market, Bloomberg News reported Tuesday, citing sources familiar with the matter. From the report: The move would be Washington's first push to dismantle a company for illegal monopolization since unsuccessful efforts to break up Microsoft two decades ago.

Less severe options include forcing Google to share more data with competitors and measures to prevent it from gaining an unfair advantage in AI products, said the people, who asked not to be identified discussing private conversations. Regardless, the government will likely seek a ban on the type of exclusive contracts that were at the center of its case against Google. If the Justice Department pushes ahead with a breakup plan, the most likely units for divestment are the Android operating system and Google's web browser Chrome, said the people. Officials are also looking at trying to force a possible sale of AdWords, the platform the company uses to sell text advertising, one of the people said.

AI

Google Makes Your Pixel Screenshots Searchable With Recall-like AI Feature (theverge.com) 19

An anonymous reader shares a report: Google has announced Pixel Screenshots, a new AI-powered app for its Pixel 9 lineup that lets you save, organize, and surface information from screenshots. Pixel Screenshots uses Google's private, on-device Gemini Nano AI model to analyze the content of an image and make it searchable.

During a demo at its Pixel launch event, Google showed how you can take a screenshot and then save it to a collection, like "gift ideas." You can also search through all your other screenshots by typing in a keyword, like "bikes" or "shoes." Pixel Screenshots will then pull up all relevant results. Additionally, Pixel Screenshots can give you information about what's inside an image.
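
Google's version runs on-device with Gemini Nano and is proprietary, but the shape of the feature -- extract text from each screenshot, index it, match keywords -- is easy to sketch. A toy illustration using ordinary OCR (the folder and query here are made up):

```python
# Toy sketch of screenshot search via OCR; Google's on-device Gemini
# Nano implementation is proprietary and works differently.
import glob
from PIL import Image
import pytesseract   # requires the Tesseract OCR engine installed

def build_index(folder: str) -> dict[str, str]:
    """Map each screenshot path to the lowercased text found in it."""
    return {
        path: pytesseract.image_to_string(Image.open(path)).lower()
        for path in glob.glob(f"{folder}/*.png")
    }

def search(index: dict[str, str], query: str) -> list[str]:
    """Return screenshots whose extracted text mentions the query."""
    q = query.lower()
    return [path for path, text in index.items() if q in text]

index = build_index("screenshots")
print(search(index, "bikes"))   # all saved screenshots mentioning bikes
```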
Further reading: Microsoft Postpones Windows Recall After Major Backlash.

Android

Google's Pixel 9 Lineup is a Pro Show (theverge.com) 34

Google unveiled its latest Pixel smartphone series on Tuesday, introducing four new models with enhanced AI capabilities and updated designs. The Pixel 9 lineup includes the standard Pixel 9, two Pro models, and a foldable device. The new Pixel phones feature flat sides and an elongated camera module on the rear, departing from the curved edges of previous generations. Screen sizes range from 6.3 inches on the standard Pixel 9 to 6.8 inches on the Pixel 9 Pro XL.

All models are powered by Google's new Tensor G4 processor and come with increased RAM, with Pro models boasting 16GB. The devices run on Android 14 and will receive seven years of OS updates and security patches. Google has significantly expanded the AI capabilities of the new Pixels. An updated on-device Gemini Nano model can now analyze images and speech in addition to text. New features include automatic screenshot cataloging and retrieval, and an AI-powered illustration generator called Pixel Studio. Camera improvements are a key focus, with all models receiving upgraded ultrawide lenses and the Pro versions featuring a new 42-megapixel selfie camera with autofocus. Google has introduced "Magic Editor," allowing users to transform parts of an image using text prompts and generative AI.

The Pixel 9 Pro Fold, Google's second-generation foldable device, is thinner than its predecessor at 5.1mm when unfolded. It features a larger 8-inch inner display with increased brightness, reaching up to 2,700 nits in peak mode. Pricing for the new Pixel lineup starts at $799 for the standard Pixel 9, representing a $100 increase from last year's model. The Pixel 9 Pro and Pro XL are priced at $999 and $1,099 respectively, while the Pixel 9 Pro Fold will retail for $1,799. The devices will be released in stages, with the Pixel 9 and 9 Pro XL available from August 22, followed by the 9 Pro in September and the Pro Fold on September 4.

AI

Copyright Group Takes Down Dutch Language AI Dataset (aol.com) 14

Dutch-based copyright enforcement group BREIN has taken down a large language dataset that was being offered for use in training AI models, the organization said on Tuesday. From a report: The dataset included information collected without permission from tens of thousands of books, news sites, and Dutch language subtitles harvested from "countless" films and TV series, BREIN said in a statement. Director Bastiaan van Ramshorst told Reuters it was not clear whether or how widely the dataset may already have been used by AI companies. "It's very difficult to know, but we are trying to be on time" to avoid future lawsuits, he said. He said the European Union's AI Act will require AI firms to disclose what datasets they have used to train their models.

AI

AI PCs Made Up 14% of Quarterly PC Shipments (reuters.com) 73

AI PCs accounted for 14% of all PCs shipped in the second quarter, with Apple leading the way, research firm Canalys said on Tuesday, as added AI capabilities help reinvigorate demand. From a report: PC providers and chipmakers have pinned high hopes on devices that can perform AI tasks directly on the system, bypassing the cloud, as the industry slowly emerges from its worst slump in years. These devices typically feature neural processing units dedicated to performing AI tasks.

Apple commands about 60% of the AI PC market, the research firm said in the report, pointing to its Mac portfolio incorporating M-series chips with a neural engine. Within the Windows segment, AI PC shipments grew 127% sequentially in the quarter. Microsoft debuted its "Copilot+" AI PCs in May, built on Qualcomm's Snapdragon PC chips based on Arm Holdings' architecture.
