Microsoft

Microsoft, Apple Drop OpenAI Board Plans as Scrutiny Grows (bloomberg.com) 9

Microsoft and Apple dropped plans to take board roles at OpenAI in a surprise decision that underscores growing regulatory scrutiny of Big Tech's influence over artificial intelligence. From a report: Microsoft, which invested $13 billion in the ChatGPT creator, will withdraw from its observer role on the board, the company said in a letter to OpenAI on Tuesday, which was seen by Bloomberg News. Apple was due to take up a similar role, but an OpenAI spokesperson said the startup won't have board observers after Microsoft's departure. Regulators in the US and Europe had expressed concerns about Microsoft's sway over OpenAI, applying pressure on one of the world's most valuable companies to show that it's keeping the relationship at arm's length. Microsoft has integrated OpenAI's services into its Windows and Copilot AI platforms and, like other big US tech companies, is banking on the new technology to help drive growth.
Businesses

Intuit To Cut About 1,800 Jobs As It Looks To Increase AI Investments (reuters.com) 70

TurboTax-parent Intuit said on Wednesday it will let go of about 1,800 employees, or 10% of its workforce, as it looks to focus on its AI-powered tax preparation software and other financial products. From a report: The company, which has invested heavily in providing generative AI-powered accounting and tax preparation tools for small and medium businesses in the past few years, expects to close two of its sites, in Edmonton, Canada, and Boise, Idaho. Intuit will hire 1,800 new people, primarily in engineering, product, and customer-facing roles, CEO Sasan Goodarzi said in a note to employees.
United States

US Officials Uncover Alleged Russian 'Bot Farm' (bbc.com) 211

An anonymous reader quotes a report from the BBC: US officials say they have taken action against an AI-powered information operation run from Russia, including nearly 1,000 accounts pretending to be Americans. The accounts on X were designed to spread pro-Russia stories but were automated "bots" -- not real people. In court documents made public Tuesday, the US justice department said the operation was devised by a deputy editor at Kremlin-owned RT, formerly Russia Today. RT runs TV channels in English and several other languages, but appears much more popular on social media than on conventional airwaves.

The justice department seized two websites that were used to issue emails associated with the bot accounts, and ordered X to turn over information relating to 968 accounts that investigators say were bots. According to the court documents, artificial intelligence was used to create the accounts, which then spread pro-Russian story lines, particularly about the war in Ukraine. "Today's actions represent a first in disrupting a Russian-sponsored generative AI-enhanced social media bot farm," said FBI Director Christopher Wray. "Russia intended to use this bot farm to disseminate AI-generated foreign disinformation, scaling their work with the assistance of AI to undermine our partners in Ukraine and influence geopolitical narratives favorable to the Russian government," Mr Wray said in a statement. The accounts now appear to have been deleted by X, and screenshots shared by FBI investigators indicated that they had very few followers.

The Courts

Judge Dismisses Lawsuit Over GitHub Copilot AI Coding Assistant (infoworld.com) 83

A US District Court judge in San Francisco has largely dismissed a class-action lawsuit against GitHub, Microsoft, and OpenAI, which challenged the legality of using code samples to train GitHub Copilot. The judge ruled that the plaintiffs failed to establish a claim for restitution or unjust enrichment but allowed the claim for breach of open-source license violations to proceed. InfoWorld reports: The lawsuit, first filed in Nov. 2022, claimed that GitHub's training of the Copilot AI on public GitHub code repositories violated the rights of the "vast number of creators" who posted code under open-source licenses on GitHub. The complaint (PDF) alleged that "Copilot ignores, violates, and removes the Licenses offered by thousands -- possibly millions -- of software developers, thereby accomplishing software piracy on an unprecedented scale." [...]

In a decision first announced on June 24, but only unsealed and made public on July 5, California Northern District judge Jon S. Tigar wrote that "In sum, plaintiff's claims do not support the remedy they seek. Plaintiffs have failed to establish, as a matter of law, that restitution for any unjust enrichment is available as a measure of plaintiffs' damages for their breach of contract claims." Judge Tigar went on to state that "[the] court dismisses plaintiffs' section 1202(b) claim, this time with prejudice. The Court declines to dismiss plaintiffs' claim for breach of contract of open-source license violations against all defendants. Finally, the court dismisses plaintiffs' request for monetary relief in the form of unjust enrichment, as well as plaintiffs' request for punitive damages."

Businesses

Etsy Loses Its 'Handmade' and 'Vintage' Labels As It Takes On Temu and Amazon (theverge.com) 22

Instead of "handmade" and "vintage," Etsy created four new classifications for sellers on the site: "made by," "designed by," "handpicked by," and "sourced by." In order for products to be sold on Etsy, they'll now need to fall into one of these four categories. The Verge reports: Vintage items -- a backbone of Etsy's offerings -- will fall under "handpicked by," though these items will also have "vintage" labels on product listings. Craft supplies like beads or clay are considered "sourced by." A vase handmade by a ceramics artist would be in the "made by" category, whereas a digital illustration would be considered "designed by" the seller. These categories will be visible on Etsy product listings. The company says that this won't change anything in practice -- things that were previously prohibited, like the reselling of items made by someone else, still won't be allowed under the new policy.

"The consistent theme here is that items are infused with a human touch, because that's what makes Etsy, well, Etsy," CEO Josh Silverman said in a video message. The goal for the new categories, the company says, is to provide more details to shoppers about how an item is made and how a seller was involved in the process. Etsy has differentiated itself from other marketplaces like Amazon or Temu, emphasizing itself as a place to find unique items made by an artisan or selected by a curator. But over the years, the company has loosened its rules around what exactly counts as "handmade."

AI

OpenAI and Arianna Huffington Are Working Together On an 'AI Health Coach' 25

OpenAI CEO Sam Altman and businesswoman Arianna Huffington have announced they're working on an "AI health coach" via Thrive AI Health. According to a Time magazine op-ed, the two executives said that the bot will be trained on "the best peer-reviewed science" alongside "the personal biometric, lab, and other medical data you've chosen to share with it." The Verge reports: The company tapped DeCarlos Love, a former Google executive who previously worked on Fitbit and other wearables, to be CEO. Thrive AI Health also established research partnerships with several academic institutions and medical centers like Stanford Medicine, the Rockefeller Neuroscience Institute at West Virginia University, and the Alice L. Walton School of Medicine. (The Alice L. Walton Foundation is also a strategic investor in Thrive AI Health.) Thrive AI Health's goal is to provide powerful insights to those who otherwise wouldn't have access -- like a single mother looking for quick meal ideas for her gluten-free child or an immunocompromised person in need of instant advice in between doctor's appointments. [...]

The bot is still in its early stages, adopting an Atomic Habits approach. Its goal is to gently encourage small changes in five key areas of your life: sleep, nutrition, fitness, stress management, and social connection. By making minor adjustments, such as suggesting a 10-minute walk after picking up your child from school, Thrive AI Health aims to positively impact people with chronic conditions like heart disease. It doesn't claim to be ready to provide a real diagnosis as a doctor would, but instead aims to guide users toward a healthier lifestyle. "AI is already greatly accelerating the rate of scientific progress in medicine -- offering breakthroughs in drug development, diagnoses, and increasing the rate of scientific progress around diseases like cancer," the op-ed read.
AI

Spain Sentences 15 Schoolchildren Over AI-Generated Naked Images (theguardian.com) 119

An anonymous reader quotes a report from The Guardian: A court in south-west Spain has sentenced 15 schoolchildren to a year's probation for creating and spreading AI-generated images of their female peers in a case that prompted a debate on the harmful and abusive uses of deepfake technology. Police began investigating the matter last year after parents in the Extremaduran town of Almendralejo reported that faked naked pictures of their daughters were being circulated on WhatsApp groups. The mother of one of the victims said the dissemination of the pictures on WhatsApp had been going on since July.

"Many girls were completely terrified and had tremendous anxiety attacks because they were suffering this in silence," she told Reuters at the time. "They felt bad and were afraid to tell and be blamed for it." On Tuesday, a youth court in the city of Badajoz said it had convicted the minors of 20 counts of creating child abuse images and 20 counts of offenses against their victims' moral integrity. Each of the defendants was handed a year's probation and ordered to attend classes on gender and equality awareness, and on the "responsible use of technology." [...] Police identified several teenagers aged between 13 and 15 as being responsible for generating and sharing the images. Under Spanish law minors under 14 cannot be charged but their cases are sent to child protection services, which can force them to take part in rehabilitation courses.
Further reading: First-Known TikTok Mob Attack Led By Middle Schoolers Tormenting Teachers
United States

Chinese Self-Driving Cars Have Quietly Traveled 1.8 Million Miles On US Roads (fortune.com) 65

An anonymous reader quotes a report from Fortune: On February 1st last year, Montana residents gawked upwards at a large white object hovering in the sky that looked to be another moon. The airborne object was in fact a Chinese spy balloon loaded with cameras, sensors, and other high-tech surveillance equipment, and it set off a nationwide panic as it drifted across the midwestern and southern United States. How much information the balloon gathered -- if any -- remains unknown, but the threat was deemed serious enough that an F-22 U.S. Air Force jet fired a Sidewinder missile at the unmanned balloon on a February afternoon, blasting it to pieces a few miles off the coast of South Carolina. At the same time that the eyes of Americans were fixed on the Chinese intruder in the sky, around 30 cars owned by Chinese companies and equipped with cameras and geospatial mapping technology were navigating the streets of greater Los Angeles, San Francisco, and San Jose. They collected detailed videos, audio recordings, and location data on their surroundings to chart out California's roads and develop their autonomous driving algorithms.

Since 2017, self-driving cars owned by Chinese companies have traversed 1.8 million miles of California alone, according to a Fortune analysis of the state's Department of Motor Vehicles data. As part of their basic functionality, these cars capture video of their surroundings and map the state's roads to within two centimeters of precision. Companies transfer that information from the cars to data centers, where they use it to train their self-driving systems. The cars are part of a state program that allows companies developing self-driving technology -- including Google-spinoff Waymo and Amazon-owned Zoox -- to test autonomous vehicles on public roads. Among the 35 companies approved to test by the California DMV, seven are wholly or partly China-based. Five of them drove on California roads last year: WeRide, Apollo, AutoX, Pony.ai, and DiDi Research America. Some Chinese companies are approved to test in Arizona and Texas as well.

Fitted with cameras, microphones, and sophisticated sensors, self-driving cars have long raised flags among privacy advocates. Matthew Guariglia, a policy analyst at the digital rights nonprofit Electronic Frontier Foundation, called self-driving cars "rolling surveillance devices" that passively collect massive amounts of information on Americans in plain sight. In the context of national security however, the data-hungry Chinese cars have received surprisingly little scrutiny. Some experts have compared them to Chinese-owned social media site TikTok, which has been subjected to a forced divestiture or ban on U.S. soil due to fears around its data collection practices threatening national security. The years-long condemnation of TikTok at the highest levels of the U.S. government has heightened the sense of distrust between the U.S. and China.

Some Chinese self-driving car companies appear to store U.S. data in China, according to privacy policies reviewed by Fortune -- a situation that experts said effectively leaves the data accessible to the Chinese government. Depending on the type of information collected by the cars, the level of precision, and the frequency at which it's collected, the data could provide a foreign adversary with a treasure trove of intelligence that could be used for everything from mass surveillance to war planning, according to security experts who spoke with Fortune. And yet, despite the sensitivity of the data, officials at the state and federal agencies overseeing the self-driving car testing acknowledge that they do not currently monitor, or have any process for checking, exactly what data the Chinese vehicles are collecting and what happens to the data after it is collected. Nor do they have any additional rules or policies in place for oversight of Chinese self-driving cars versus the cars in the program operated by American or European companies. "It is literally the wild, Wild West here," said Craig Singleton, director of the China program at the Foundation for Defense of Democracies, a conservative-leaning national security think tank. "There's no one in charge."

AI

Goldman Research Head Skeptical on AI Returns Despite Massive Spend 51

Goldman Sachs' head of global equity research Jim Covello has expressed skepticism about the potential returns from AI technology, despite an estimated $1 trillion in planned industry investment over the coming years. In a recent report [PDF], Covello argued that AI applications must solve complex, high-value problems to justify their substantial costs, which he believes the technology is not currently designed to do.

"AI technology is exceptionally expensive, and to justify those costs, the technology must be able to solve complex problems, which it isn't designed to do," Covello said. Unlike previous technological revolutions like e-commerce, which provided low-cost solutions from the start, AI remains prohibitively expensive even for basic tasks, he said. Covello also questioned whether AI costs would decline sufficiently over time, citing potential lack of competition in critical components like GPU chips.

The Goldman executive also expressed doubt about AI's ability to boost company valuations, arguing that efficiency gains would likely be competed away and that the path to revenue growth remains unclear. Despite the skepticism, Covello acknowledged that substantial AI infrastructure spending will continue in the near term due to competitive pressures and investor expectations.
Transportation

Gig-Economy Drivers Are Turning to EVs to Save Money - and They Need More Public Chargers (hbs.edu) 206

Remember those researchers who spent years training AI tools to analyze the reviews drivers left on the smartphone apps where they pay for EV charging?

There was one more unexpected finding. "Rideshare drivers who work for companies such as Uber are increasingly turning to electric vehicles to reduce fuel costs." That trend is boosting demand for conveniently located, publicly accessible EV chargers... "They are mostly relying on public chargers for their daily Uber needs, usually every day or every couple of days, which dramatically increases electric vehicle miles traveled," [climate fellow Omar Asensio told the Institute's blog], explaining that many drivers live in apartments that lack garages or space for a residential EV charger. Uber CEO Dara Khosrowshahi considers the issue so pressing he urged U.S. policymakers to accelerate plans to improve the nation's EV charging infrastructure in a Fast Co. op-ed in January — during the World Economic Forum in Davos, when media messaging can influence policymakers.

Independent Uber drivers, Khosrowshahi said, are converting to electric vehicles seven times faster than the general public and they tend to be disproportionately from low- and middle-income households that need access to public charging stations. "Charging infrastructure must be more equitable," Khosrowshahi wrote. "Many drivers don't have driveways or garages, so access to nearby overnight charging is essential. Yet our data shows us that Uber drivers often live in neighborhoods lacking this infrastructure. These 'charging deserts' hold countless people back from making the switch."

AI

'Cyclists Can't Decide Whether To Fear Or Love Self-Driving Cars' (yahoo.com) 210

"Many bike riders are hopeful about a world of robot drivers that never experience road rage or get distracted by their phones," reports the Washington Post. "But some resent being guinea pigs for driverless vehicles that veer into bike lanes, suddenly stop short and confuse cyclists trying to navigate around them.

"In more than a dozen complaints submitted to the DMV, cyclists describe upsetting near misses and close calls..." Of the nearly 200 California DMV complaints analyzed by The Post, about 60 percent involved Cruise vehicles; the rest mostly involved Waymo. About a third describe erratic or reckless driving, while another third document near misses with pedestrians. The remainder involve reports of autonomous cars blocking traffic and disobeying road markings or traffic signals... Only 17 complaints involved bicyclists or bike lane disruptions. But interviews with cyclists suggest the DMV complaints represent a fraction of bikers' negative interactions with self-driving vehicles. And while most of the complaints describe relatively minor incidents, they raise questions about corporate boasts that the cars are safer than human drivers, said Christopher White, executive director of the San Francisco Bike Coalition... Robot cars could one day make roads safer, White said, "but we don't yet see the tech fully living up to the promise. ... The companies are talking about it as a much safer alternative to people driving. If that's the promise that they're making, then they have to live up to it...."

Many bicycle safety advocates support the mission of autonomous vehicles, optimistic the technology will cut injuries and deaths. They are quick to point out the carnage associated with human-driven cars: There were 2,520 collisions in San Francisco involving at least one cyclist from 2017 to 2022, according to state data analyzed by local law firm Walkup, Melodia, Kelly & Schoenberger. In those crashes, 10 cyclists died and another 243 riders were severely injured, the law firm found. Nationally, there were 1,105 cyclists killed by drivers in 2022, according to NHTSA, the highest on record...

Meanwhile, the fraction of complaints to the DMV related to bicycles demonstrates the shaky relationship between self-driving cars and cyclists. In April 2023, a Waymo edged into a crosswalk, confusing a cyclist and causing him to crash and fracture his elbow, according to the complaint filed by the cyclist. Then, in August — days after the state approved an expansion of these vehicles — a Cruise car allegedly made a right turn that cut off a cyclist. The rider attempted to stop but then flipped over their bike. "It clearly didn't react or see me!" the complaint said.

Even if self-driving cars are proven to be safer than human drivers, they should still receive extra scrutiny and aren't the only way to make roads safer, several cyclists said.

Thanks to Slashdot reader echo123 for sharing the article.
IT

Shipt's Pay Algorithm Squeezed Gig Workers. They Fought Back (ieee.org) 35

Workers at delivery company Shipt "found that their paychecks had become...unpredictable," according to an article in IEEE Spectrum. "They were doing the same work they'd always done, yet their paychecks were often less than they expected. And they didn't know why...."

The article notes that "Companies whose business models rely on gig workers have an interest in keeping their algorithms opaque." But "The workers showed that it's possible to fight back against the opaque authority of algorithms, creating transparency despite a corporation's wishes." On Facebook and Reddit, workers compared notes. Previously, they'd known what to expect from their pay because Shipt had a formula: It gave workers a base pay of $5 per delivery plus 7.5 percent of the total amount of the customer's order through the app. That formula allowed workers to look at order amounts and choose jobs that were worth their time. But Shipt had changed the payment rules without alerting workers. When the company finally issued a press release about the change, it revealed only that the new pay algorithm paid workers based on "effort," which included factors like the order amount, the estimated amount of time required for shopping, and the mileage driven. The company claimed this new approach was fairer to workers and that it better matched the pay to the labor required for an order. Many workers, however, just saw their paychecks dwindling. And since Shipt didn't release detailed information about the algorithm, it was essentially a black box that the workers couldn't see inside.

The workers could have quietly accepted their fate, or sought employment elsewhere. Instead, they banded together, gathering data and forming partnerships with researchers and organizations to help them make sense of their pay data. I'm a data scientist; I was drawn into the campaign in the summer of 2020, and I proceeded to build an SMS-based tool — the Shopper Transparency Calculator [written in Python, using optical character recognition and Twilio, and running on a home server] — to collect and analyze the data. With the help of that tool, the organized workers and their supporters essentially audited the algorithm and found that it had given 40 percent of workers substantial pay cuts...

This "information asymmetry" helps companies better control their workforces — they set the terms without divulging details, and workers' only choice is whether or not to accept those terms... There's no technical reason why these algorithms need to be black boxes; the real reason is to maintain the power structure... In a fairer world where workers have basic data rights and regulations require companies to disclose information about the AI systems they use in the workplace, this transparency would be available to workers by default.

The tool's creator was attracted to the idea of helping a community "control and leverage their own data," and ultimately received more than 5,600 screenshots from over 200 workers. 40% were earning at least 10% less — and about 33% were earning less than their state's minimum wage. Interestingly, "Sharing data about their work was technically against the company's terms of service; astoundingly, workers — including gig workers who are classified as 'independent contractors' — often don't have rights to their own data...
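The audit's headline numbers are straightforward aggregates over pay records like the ones workers submitted. A sketch of that computation with made-up records (expected pay under the old formula, observed pay, hours worked, state minimum hourly wage; all values hypothetical):

```python
# Hypothetical per-worker records:
# (expected pay under old formula, observed pay, hours worked, state minimum wage)
workers = [
    (500.0, 430.0, 40, 12.00),
    (320.0, 330.0, 25, 15.00),
    (410.0, 350.0, 38, 10.00),
]

# Workers whose observed pay is at least 10% below the old-formula expectation
pay_cut = sum(1 for exp, obs, _, _ in workers if obs <= 0.90 * exp)

# Workers whose effective hourly rate falls below their state minimum wage
below_min = sum(1 for _, obs, hrs, wage in workers if obs / hrs < wage)

print(f"{pay_cut / len(workers):.0%} saw a pay cut of 10% or more")
print(f"{below_min / len(workers):.0%} earned below their state minimum wage")
```

The real analysis additionally had to reconstruct expected pay from screenshot data via OCR, but the aggregation itself is this simple once the records are structured.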

"[O]ur experiment served as an example for other gig workers who want to use data to organize, and it raised awareness about the downsides of algorithmic management. What's needed is wholesale changes to platforms' business models... The battles that gig workers are fighting are the leading front in the larger war for workplace rights, which will affect all of us. The time to define the terms of our relationship with algorithms is right now."

Thanks to long-time Slashdot reader mspohr for sharing the article.
Transportation

New Research Finds America's EV Chargers Are Just 78% Reliable (and Underfunded) (hbs.edu) 220

Harvard Business School has an "Institute for Business in Global Society" that explores the societal impacts of business. And they've recently published some new AI-powered research about EV charging infrastructure, according to the Institute's blog, conducted by climate fellow Omar Asensio.

"Asensio and his team, supported by Microsoft and National Science Foundation awards, spent years building models and training AI tools to extract insights and make predictions," using the reviews drivers left (in more than 72 languages) on the smartphone apps drivers use to pay for charging. And ultimately this research identified "a significant obstacle to increasing electric vehicle (EV) sales and decreasing carbon emissions in the United States: owners' deep frustration with the state of charging infrastructure, including unreliability, erratic pricing, and lack of charging locations..." [C]harging stations in the U.S. have an average reliability score of only 78%, meaning that about one in five don't work. They are, on average, less reliable than regular gas stations, Asensio said. "Imagine if you go to a traditional gas station and two out of 10 times the pumps are out of order," he said. "Consumers would revolt...." EV drivers often find broken equipment, making charging unreliable at best and simply not as easy as the old way of topping off a tank of gas. The reason? "No one's maintaining these stations," Asensio said.
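A station-level reliability score like the one reported can be computed by aggregating per-review "did the charger work" labels by station. In the study those labels were predicted by AI models from review text; the aggregation step itself is simple. A sketch with hypothetical stations and labels:

```python
from collections import defaultdict

# Hypothetical review labels: 1 = charger worked, 0 = charger was down.
# In the study, labels like these came from models trained on app review text.
reviews = [
    ("station_a", 1), ("station_a", 1), ("station_a", 0),
    ("station_b", 1), ("station_b", 0),
    ("station_c", 1), ("station_c", 1), ("station_c", 1), ("station_c", 1),
]

by_station = defaultdict(list)
for station, worked in reviews:
    by_station[station].append(worked)

# Per-station reliability = fraction of reviews reporting a working charger
scores = {s: sum(v) / len(v) for s, v in by_station.items()}
avg_reliability = sum(scores.values()) / len(scores)
print(f"average station reliability: {avg_reliability:.0%}")
```

A 78% average, in these terms, means roughly one in five charging attempts at a typical station hits a non-working unit.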
One problem? Another blog post by the Institute notes that America's approach to public charging has differed sharply from those in other countries: In Europe and Asia, governments started making major investments in public charging infrastructure years ago. In America, the initial thinking was that private companies would fill the public's need by spending money to install charging stations at hotels, shopping malls and other public venues. But that decentralized approach failed to meet demand and the Biden administration is now investing heavily to grow the charging network and facilitate EV sales... "No single market actor has sufficient incentive to build out a national charging network at a pace that meets our climate goals," the report declared. Citing research and the experience of other countries, it noted that "policies that increase access to charging stations may be among the best policies to increase EV sales." But the U.S. is far behind other countries.
Thanks to Slashdot reader NoWayNoShapeNoForm for sharing the article.
AI

Microsoft's AI CEO: Web Content (Without a Robots.txt File) is 'Freeware' for AI Training (windowscentral.com) 136

Slashdot reader joshuark shared this report from Windows Central: Microsoft may have opened a can of worms with recent comments made by the tech giant's CEO of AI, Mustafa Suleyman. The CEO spoke with CNBC's Andrew Ross Sorkin at the Aspen Ideas Festival earlier this week. In his remarks, Suleyman claimed that all content shared on the web is available to be used for AI training unless a content producer specifically says otherwise.

The whole discussion was interesting — but this particular question was very direct. CNBC's interviewer specifically said, "There are a number of authors here... and a number of journalists as well. And it appears that a lot of the information that has been trained on over the years has come from the web — and some of it's the open web, and some of it's not, and we've heard stories about how OpenAI was turning YouTube videos into transcripts and then training on the transcripts."

The question becomes "Who is supposed to own the IP, who is supposed to get value from the IP, and whether, to put it in very blunt terms, whether the AI companies have effectively stolen the world's IP." Suleyman begins his answer — at the 14:40 mark — with "Yeah, I think — look, it's a very fair argument." SULEYMAN: "I think that with respect to content that is already on the open web, the social contract of that content since the 90s has been that it is fair use. Anyone can copy it, recreate with it, reproduce with it. That has been freeware, if you like. That's been the understanding.

"There's a separate category where a website or a publisher or a news organization had explicitly said, 'Do not scrape or crawl me for any other reason than indexing me so that other people can find that content.' That's a gray area and I think that's going to work its way through the courts."


Q: And what does that mean, when you say 'It's a gray area'?

SULEYMAN: "Well, if — so far, some people have taken that information... but that's going to get litigated, and I think that's rightly so...

"You know, look, the economics of information are about to radically change, because we're going to reduce the cost of production of knowledge to zero marginal cost. And this is just a very difficult thing for people to intuit — but in 15 or 20 years time, we will be producing new scientific cultural knowledge at almost zero marginal cost. It will be widely open sourced and available to everybody. And I think that is going to be, you know, a true inflection point in the history of our species. Because what are we, collectively, as an organism of humans, other than an intellectual production engine. We produce knowledge. Our science makes us better. And so what we really want in the world, in my opinion, are new engines that can turbocharge discovery and invention."
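The explicit opt-out Suleyman refers to is conventionally expressed through a site's robots.txt file. A sketch of a policy that permits ordinary search indexing while disallowing crawlers associated with AI training (user-agent tokens change over time; the ones below are examples and should be checked against each vendor's current crawler documentation):

```text
# Allow general crawlers (ordinary search indexing)
User-agent: *
Allow: /

# Disallow crawlers associated with AI training
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
```

Note that robots.txt is a voluntary convention, not an access control: whether ignoring such a directive has legal consequences is precisely the "gray area" Suleyman says will be litigated.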

AI

'How Good Is ChatGPT at Coding, Really?' (ieee.org) 135

IEEE Spectrum (the IEEE's official publication) asks the question. "How does an AI code generator compare to a human programmer?" A study published in the June issue of IEEE Transactions on Software Engineering evaluated the code produced by OpenAI's ChatGPT in terms of functionality, complexity and security. The results show that ChatGPT has an extremely broad range of success when it comes to producing functional code -- with a success rate ranging anywhere from as poor as 0.66 percent to as good as 89 percent -- depending on the difficulty of the task, the programming language, and a number of other factors. While in some cases the AI generator could produce better code than humans, the analysis also reveals some security concerns with AI-generated code.
The study tested GPT-3.5 on 728 coding problems from the LeetCode testing platform — and in five programming languages: C, C++, Java, JavaScript, and Python. The results? Overall, ChatGPT was fairly good at solving problems in the different coding languages — but especially when attempting to solve coding problems that existed on LeetCode before 2021. For instance, it was able to produce functional code for easy, medium, and hard problems with success rates of about 89, 71, and 40 percent, respectively. "However, when it comes to the algorithm problems after 2021, ChatGPT's ability to generate functionally correct code is affected. It sometimes fails to understand the meaning of questions, even for easy level problems," said Yutian Tang, a lecturer at the University of Glasgow. For example, ChatGPT's ability to produce functional code for "easy" coding problems dropped from 89 percent to 52 percent after 2021. And its ability to generate functional code for "hard" problems dropped from 40 percent to 0.66 percent after this time as well...

The researchers also explored the ability of ChatGPT to fix its own coding errors after receiving feedback from LeetCode. They randomly selected 50 coding scenarios where ChatGPT initially generated incorrect code, either because it didn't understand the question or the problem at hand. While ChatGPT was good at fixing compiling errors, it generally was not good at correcting its own mistakes... The researchers also found that ChatGPT-generated code did have a fair amount of vulnerabilities, such as a missing null test, but many of these were easily fixable.

"Interestingly, ChatGPT is able to generate code with smaller runtime and memory overheads than at least 50 percent of human solutions to the same LeetCode problems..."
Cellphones

'Windows Recall' Preview Remains Hackable As Google Develops Similar Feature 20

Windows Recall was "delayed" over concerns that storing unencrypted recordings of users' activity was a security risk.

But now Slashdot reader storagedude writes: The latest version of Microsoft's planned Windows Recall feature still contains data privacy and security vulnerabilities, according to a report by the Cyber Express.

Security researcher Kevin Beaumont — whose work started the backlash that resulted in Recall getting delayed last month — said the most recent preview version is still hackable by Alex Hagenah's "TotalRecall" method "with the smallest of tweaks."

The Windows screen recording feature may yet be refined to address the security concerns, but some users have recently spotted it in versions of the Windows 11 24H2 release preview that will officially ship in the fall.

Cyber Express (the blog of threat intelligence vendor Cyble Inc) got this official response: Asked for comment on Beaumont's findings, a Microsoft spokesperson said the company "has not officially released Recall," and referred to the updated blog post that announced the delay, which said: "Recall will now shift from a preview experience broadly available for Copilot+ PCs on June 18, 2024, to a preview available first in the Windows Insider Program (WIP) in the coming weeks."

"Beyond that, Microsoft has nothing more to share," the spokesperson added.

Also this week, the blog Android Authority wrote that Google is planning to introduce its own "Google AI" features to Pixel 9 smartphones. They include the ability to enhance screenshots, an "Add Me" tool for group photos — and also "a feature resembling Microsoft's controversial Recall" dubbed "Pixel Screenshots." Google's take on the feature is different and more privacy-focused: instead of automatically capturing everything you're doing, it will only work on screenshots you take yourself. When you do that, the app will add a bit of extra metadata to it, like app names, web links, etc. After that, it will be processed by a local AI, presumably the new multimodal version of Gemini Nano, which will let you search for specific screenshots just by their contents, as well as ask a bot questions about them.

My take on the feature is that it's definitely a better implementation of the idea than what Microsoft created... [B]oth of the apps ultimately serve a similar purpose and Google's implementation doesn't easily leak sensitive information...

It's worth mentioning Motorola is also working on its own version of Recall — not much is known at the moment, but it seems it will be similar to Google's implementation, with no automatic saving of everything on the screen.

The Verge describes the Pixel 9's Google AI as "like Microsoft Recall but a little less creepy."
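The Pixel Screenshots design described above — attach metadata (app name, links, extracted text) to user-taken screenshots, then search locally by contents — can be sketched in a few lines. Everything here is hypothetical: the field names and search behavior are illustrative, not Google's actual API:

```python
# Hypothetical sketch of the "Pixel Screenshots" idea: each user-taken
# screenshot gets extra metadata, and a local index lets you search
# screenshots by their contents. Not Google's real implementation.
from dataclasses import dataclass, field

@dataclass
class Screenshot:
    path: str
    app: str                          # app that was on screen
    links: list = field(default_factory=list)
    ocr_text: str = ""                # text extracted by the on-device model

def search(shots, query):
    """Return screenshots whose metadata mentions the query (case-insensitive)."""
    q = query.lower()
    return [s for s in shots
            if q in s.app.lower() or q in s.ocr_text.lower()
            or any(q in link.lower() for link in s.links)]

shots = [
    Screenshot("img1.png", "Chrome", ["https://example.com/recipe"], "pasta recipe"),
    Screenshot("img2.png", "Maps", [], "route to airport"),
]
print([s.path for s in search(shots, "recipe")])  # ['img1.png']
```

The key design point, per the article, is that indexing happens only on screenshots the user deliberately takes and is processed by a local model, rather than continuously capturing the screen.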
Businesses

Investors Pour $27.1 Billion into AI Startups, Defying a Downturn (msn.com) 17

"For two years, many unprofitable tech startups have cut costs, sold themselves or gone out of business," reports the New York Times.

"But the ones focused on artificial intelligence have been thriving." Now, the AI boom that started in late 2022, has become the strongest counterpoint to the broader startup downturn. Investors poured $27.1 billion into AI startups in the United States from April to June, accounting for nearly half of all U.S. startup funding in that period, according to PitchBook, which tracks startups. In total, U.S. startups raised $56 billion, up 57% from a year earlier and the highest three-month haul in two years. AI companies are attracting huge rounds of funding reminiscent of 2021, when low interest rates pushed investors away from taking risks on tech investments...

The startup downturn began in early 2022 as many money-losing companies struggled to grow as quickly as they did in the pandemic. Rising interest rates also pushed investors to chase less risky investments. To make up for dwindling funding, startups slashed staff and scaled back their ambitions. Then in late 2022, OpenAI, a San Francisco AI lab, kicked off a new boom with the release of its ChatGPT chatbot. Excitement around generative AI technology, which can produce text, images and videos, set off a frenzy of startup creation and funding. "Sam Altman canceled the recession," joked Siqi Chen, founder of the startup Runway Financial, referring to OpenAI's chief executive. Chen said his company, which makes finance software, was growing faster than it otherwise would have because "AI can do the job of 1.5 people...."

An analysis of 125 AI startups by Kruze Consulting, an accounting and tax advisory firm, showed that the companies spent an average of 22% of their expenses on computing costs in the first three months of the year — more than double the 10% spent by non-AI software companies in the same period. "No wonder VCs are throwing money into these companies," said Healy Jones, Kruze's vice president of financial strategy. While AI startups are growing faster than other startups, he said, "they clearly need the money."

Startups receiving funding include CoreWeave ($1.1 billion), ScaleAI ($1 billion), and the Elon Musk-founded xAI ($6 billion), according to the article.

"For investors who back fast-growing startups, there is little downside to being wrong about the next big thing, but there is enormous upside in being right. AI's potential has generated deafening hype, with prominent investors and executives predicting that the market for AI will be bigger than the markets for the smartphone, the personal computer, social media and the internet."
China

Nvidia Forecasted To Make $12 Billion Selling GPUs In China (theregister.com) 4

Nvidia is expected to earn $12 billion from GPU sales to China in 2024, despite U.S. trade restrictions. Research firm SemiAnalysis says the GPU maker will ship over 1 million units of its new H20 model to the Chinese market, "with each one said to cost between $12,000 and $13,000 apiece," reports The Register. From the report: This figure is said by SemiAnalysis to be nearly double what Huawei is likely to sell of its rival accelerator, the Ascend 910B, as reported by The Financial Times. If accurate, this would seem to contradict earlier reports that Nvidia had moved to cut the price of its products for the China market. This was because buyers were said to be opting instead for domestically made kit for accelerating AI workloads. The H20 GPU is understood to be the top performing model out of three Nvidia GPUs specially designed for the Chinese market to comply with rules introduced by the Biden administration last year that curb performance.

In contrast, Huawei's Ascend 910B is claimed to have performance on a par with that of Nvidia's A100 GPU. It is believed to be an in-house design manufactured by Chinese chipmaker SMIC using a 7nm process technology, unlike the older Ascend 910 product. If this forecast proves accurate, it will be a relief for Nvidia, which earlier disclosed that its sales in China delivered a "mid-single digit percentage" of revenue for its Q4 of FY2024, and was forecast to do the same in Q1 of FY 2025. In contrast, the Chinese market had made up between 20 and 25 percent of the company's revenue in recent years, until the export restrictions landed.
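The $12 billion figure is consistent with the unit and price estimates quoted above; a quick arithmetic sanity check:

```python
# Sanity check on the SemiAnalysis estimate: ~1 million H20 units
# at $12,000-$13,000 each implies roughly $12B-$13B in revenue.
units = 1_000_000
price_low, price_high = 12_000, 13_000
revenue_low = units * price_low
revenue_high = units * price_high
print(revenue_low, revenue_high)  # 12000000000 13000000000
```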

Youtube

YouTube's Updated Eraser Tool Removes Copyrighted Music Without Impacting Other Audio (techcrunch.com) 16

YouTube has released an AI-powered eraser tool to help creators easily remove copyrighted music from their videos without affecting other audio such as dialog or sound effects. TechCrunch's Ivan Mehta reports: On its support page, YouTube still warns that, at times, the algorithm might fail to remove just the song. "This edit might not work if the song is hard to remove. If this tool doesn't successfully remove the claim on a video, you can try other editing options, such as muting all sound in the claimed segments or trimming out the claimed segments," the company said.

Alternatively, creators can choose to select "Mute all sound in the claimed segments" to silence bits of video that possibly have copyrighted material. Once the creator successfully edits the video, YouTube removes the Content ID claim -- the company's system for identifying the use of copyrighted content in different clips.
YouTube shared a video describing the feature on its Creator Insider channel.
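The simpler fallback YouTube describes — muting audio only within the claimed time ranges while leaving the rest untouched — amounts to zeroing samples in those windows. A pure-Python illustration over a mono sample buffer; this is a sketch of the concept, not YouTube's implementation:

```python
# Sketch of "mute all sound in the claimed segments": zero out audio
# samples inside each claimed (start_sec, end_sec) range, leaving the
# rest of the track untouched. Illustrative only.

def mute_segments(samples, sample_rate, claimed_segments):
    """Return a copy of samples with each claimed time range silenced."""
    out = list(samples)
    for start_sec, end_sec in claimed_segments:
        start = int(start_sec * sample_rate)
        end = min(int(end_sec * sample_rate), len(out))
        for i in range(start, end):
            out[i] = 0
    return out

audio = [1] * 10  # ten samples at a 1 Hz rate, for clarity
muted = mute_segments(audio, sample_rate=1, claimed_segments=[(2, 5)])
print(muted)  # [1, 1, 0, 0, 0, 1, 1, 1, 1, 1]
```

The AI-powered eraser is the harder version of this problem: separating the song from dialog and sound effects within the same segment, rather than silencing the whole window.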
AI

Wimbledon Employs AI To Protect Players From Online Abuse 19

An anonymous reader writes: The All England Lawn Tennis Club is using AI for the first time to protect players at Wimbledon from online abuse. An AI-driven service monitors players' public-facing social media profiles and automatically flags death threats, racist and sexist comments in 35 different languages. High-profile players who have been targeted online such as the former US Open champion Emma Raducanu and the four-time grand slam winner Naomi Osaka have previously spoken out about having to delete Instagram and Twitter, now called X, from their phones. Harriet Dart, the British No 2, has said she only uses social media from time to time because of online "hate."

Speaking on Thursday after her triumph against Katie Boulter, the British No 1, Dart said: "I just think there's a lot of positives for it [social media] but also a lot of negatives. I'm sure today, if I open one of my apps, regardless if I won, I'd have a lot of hate as well." Jamie Baker, the tournament's director, said Wimbledon had introduced the social media monitoring service Threat Matrix. The system, developed by the AI company Signify Group, will also be rolled out at the US Open. [...] He said the AI-driven service was supported by people monitoring the accounts. Players can opt in for a fuller service that scans abuse or threats via private direct messaging. Baker, a former British No 2, said Wimbledon would consult the players about the abuse before reporting it to tech companies for removal or to the police if deemed necessary.
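The flag-then-review pipeline described here — automated per-language flagging with human moderators confirming before anything is reported — can be sketched roughly as below. The patterns and function names are invented for illustration (and kept deliberately mild); a real system like Threat Matrix would use trained multilingual classifiers, not keyword lists:

```python
# Rough sketch of an automated flag-then-human-review pipeline.
# Hypothetical per-language patterns; real systems use ML classifiers.
ABUSE_PATTERNS = {
    "en": ["threat", "go away"],
    "es": ["amenaza"],
}

def flag_for_review(posts):
    """Return posts matching any abuse pattern for their language,
    to be queued for human moderators rather than acted on directly."""
    flagged = []
    for post in posts:
        patterns = ABUSE_PATTERNS.get(post["lang"], [])
        if any(p in post["text"].lower() for p in patterns):
            flagged.append(post)
    return flagged

posts = [
    {"lang": "en", "text": "Great match today!"},
    {"lang": "en", "text": "This is a threat."},
]
print(len(flag_for_review(posts)))  # 1
```

The human-in-the-loop step matters: per the article, flagged content is reviewed and discussed with the player before being reported to platforms or police.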

Slashdot Top Deals