iOS

Apple To Allow Alternative App Stores For iOS Users In Brazil 5

Apple will allow alternative iOS app stores and external payment systems in Brazil after settling an antitrust case with the country's competition authority, following a lawsuit brought by MercadoLibre back in 2022. Thurrott reports: Yesterday, Brazil's Conselho Administrativo de Defesa Economica (CADE) explained in its press release that it has approved a Term of Commitment to Cease (TCC) submitted by Apple. To settle the lawsuit, the iPhone maker has agreed to allow third-party iOS app stores in Brazil and to let developers use external payment systems. The company will also use neutral wording in the warning messages about third-party app stores and external payment systems that iOS users in Brazil will see.

As part of the settlement, Apple has 105 days to implement these changes to avoid a fine of up to $27.1 million. A separate report from Brazilian blog Tecnoblog revealed that Apple will still take a 5% "Core Technology Commission" fee on transactions going through alternative app stores. Additionally, the company will take a 15% cut on in-app purchases for App Store apps when developers redirect users to their own payment systems.
AI

Italy Tells Meta To Suspend Its Policy That Bans Rival AI Chatbots From WhatsApp 4

Italy's antitrust regulator, the Italian Competition Authority (AGCM), ordered Meta to suspend a policy that blocks rival AI chatbots from using WhatsApp's business APIs, citing potential abuse of market dominance. "Meta's conduct appears to constitute an abuse, since it may limit production, market access, or technical developments in the AI Chatbot services market, to the detriment of consumers," the Authority wrote. "Moreover, while the investigation is ongoing, Meta's conduct may cause serious and irreparable harm to competition in the affected market, undermining contestability." TechCrunch reports: The AGCM in November had broadened the scope of an existing investigation into Meta, after the company changed its business API policy in October to ban general-purpose chatbots from being offered on the chat app via the API. Meta has argued that its API isn't designed to be a platform for the distribution of chatbots and that people have more avenues beyond WhatsApp to use AI bots from other companies. The policy change, which goes into effect in January, would affect the availability of AI chatbots from the likes of OpenAI, Perplexity, and Poke on the app.
AI

China Is Worried AI Threatens Party Rule 20

An anonymous reader quotes a report from the Wall Street Journal: Concerned that artificial intelligence could threaten Communist Party rule, Beijing is taking extraordinary steps to keep it under control. Although China's government sees AI as crucial to the country's economic and military future, regulations and recent purges of online content show it also fears AI could destabilize society. Chatbots pose a particular problem: Their ability to think for themselves could generate responses that spur people to question party rule.

In November, Beijing formalized rules it has been working on with AI companies to ensure their chatbots are trained on data filtered for politically sensitive content, and that they can pass an ideological test before going public. All AI-generated texts, videos and images must be explicitly labeled and traceable, making it easier to track and punish anyone spreading undesirable content. Authorities recently said they removed 960,000 pieces of what they regarded as illegal or harmful AI-generated content during a three-month enforcement campaign. Authorities have officially classified AI as a major potential threat, adding it alongside earthquakes and epidemics to the country's National Emergency Response Plan.

Chinese authorities don't want to regulate too much, people familiar with the government's thinking said. Doing so could extinguish innovation and condemn China to second-tier status in the global AI race behind the U.S., which is taking a more hands-off approach toward policing AI. But Beijing also can't afford to let AI run amok. Chinese leader Xi Jinping said earlier this year that AI brought "unprecedented risks," according to state media. A lieutenant likened AI without safety measures to driving on a highway without brakes. There are signs that China is, for now, finding a way to thread the needle.

Chinese models are scoring well in international rankings, both overall and in specific areas such as computer coding, even as they censor responses about the Tiananmen Square massacre, human-rights concerns and other sensitive topics. Major American AI models are for the most part unavailable in China. It could become harder for DeepSeek and other Chinese models to keep up with U.S. models as AI systems become more sophisticated. Researchers outside of China who have reviewed both Chinese and American models also say that China's regulatory approach has some benefits: Its chatbots are often safer by some metrics, with less violence and pornography, and are less likely to steer people toward self-harm.
"The Communist Party's top priority has always been regulating political content, but there are people in the system who deeply care about the other social impacts of AI, especially on children," said Matt Sheehan, who studies Chinese AI at the Carnegie Endowment for International Peace, a think tank. "That may lead models to produce less dangerous content on certain dimensions."
Censorship

US Bars Five Europeans It Says Pressured Tech Firms To Censor American Viewpoints Online (apnews.com) 168

An anonymous reader quotes a report from the Associated Press: The State Department announced Tuesday it was barring five Europeans it accused of leading efforts to pressure U.S. tech firms to censor or suppress American viewpoints. The Europeans, characterized by Secretary of State Marco Rubio as "radical" activists and "weaponized" nongovernmental organizations, fell afoul of a new visa policy announced in May to restrict the entry of foreigners deemed responsible for censorship of protected speech in the United States. "For far too long, ideologues in Europe have led organized efforts to coerce American platforms to punish American viewpoints they oppose," Rubio posted on X. "The Trump Administration will no longer tolerate these egregious acts of extraterritorial censorship."

The five Europeans were identified by Sarah Rogers, the under secretary of state for public diplomacy, in a series of posts on social media. [...] The five Europeans named by Rogers are: Imran Ahmed, chief executive of the Centre for Countering Digital Hate; Josephine Ballon and Anna-Lena von Hodenberg, leaders of HateAid, a German organization; Clare Melford, who runs the Global Disinformation Index; and former EU Commissioner Thierry Breton, who was responsible for digital affairs. Rogers in her post on X called Breton, a French business executive and former finance minister, the "mastermind" behind the EU's Digital Services Act, which imposes a set of strict requirements designed to keep internet users safe online. This includes flagging harmful or illegal content like hate speech. She referred to Breton warning Musk of a possible "amplification of harmful content" by broadcasting his livestream interview with Trump in August 2024 when he was running for president.

Privacy

Inside Uzbekistan's Nationwide License Plate Surveillance System (techcrunch.com) 26

An anonymous reader quotes a report from TechCrunch: Across Uzbekistan, a network of about a hundred banks of high-resolution roadside cameras continuously scans vehicles' license plates and their occupants, sometimes thousands a day, looking for potential traffic violations. Cars running red lights, drivers not wearing their seatbelts, and unlicensed vehicles driving at night, to name a few. The driver of one of the most surveilled vehicles in the system was tracked over six months as he traveled between the eastern city of Chirchiq, through the capital Tashkent, and in the nearby settlement of Eshonguzar, often multiple times a week. We know this because the country's sprawling license plate-tracking surveillance system has been left exposed to the internet.

Security researcher Anurag Sen, who discovered the security lapse, found the license plate surveillance system exposed online without a password, allowing anyone access to the data within. It's not clear how long the surveillance system has been public, but artifacts from the system show that its database was set up in September 2024, and traffic monitoring began in mid-2025. The exposure offers a rare glimpse into how such national license plate surveillance systems work, the data they collect, and how they can be used to track the whereabouts of any one of the millions of people across an entire country. The lapse also reveals the security and privacy risks associated with the mass monitoring of vehicles and their owners, at a time when the United States is building up its nationwide array of license plate readers, many of which are provided by surveillance giant Flock.

The Courts

John Carreyrou and Other Authors Bring New Lawsuit Against Six Major AI Companies 31

A group of authors led by John Carreyrou has filed a new lawsuit against Anthropic, Google, OpenAI, Meta, xAI, and Perplexity, accusing the AI firms of training models on pirated copies of their books. TechCrunch reports: If this sounds familiar, it's because another set of authors already filed a class action suit against Anthropic for these same acts of copyright infringement. In that case, the judge ruled that it was legal for Anthropic and similar AI companies to train on pirated copies of books, but that it was not legal to pirate the books in the first place.

While eligible writers can receive about $3,000 from the $1.5 billion Anthropic settlement, some authors were dissatisfied with that resolution -- it doesn't hold AI companies accountable for the actual act of using stolen books to train their models, which generate billions of dollars in revenue.
The plaintiffs in the new lawsuit say the proposed Anthropic settlement "seems to serve [the AI companies], not creators."

"LLM companies should not be able to so easily extinguish thousands upon thousands of high-value claims at bargain-basement rates, eliding what should be the true cost of their massive willful infringement."
The Courts

Judge Blocks Texas App Store Age Verification Law (theverge.com) 43

A federal judge blocked Texas' app store age-verification law, ruling it likely violates the First Amendment by forcing platforms to gate speech and collect data in an overly broad way. The law was set to go into effect on January 1, 2026. The Verge reports: In an order granting a preliminary injunction on the Texas App Store Accountability Act (SB 2420), Judge Robert Pitman wrote that the statute "is akin to a law that would require every bookstore to verify the age of every customer at the door and, for minors, require parental consent before the child or teen could enter and again when they try to purchase a book." Pitman has not yet ruled on the merits of the case, but his decision to grant the preliminary injunction means he believes its defenders are unlikely to prevail in court.

Pitman found that the highest level of scrutiny must be applied to evaluate the law under the First Amendment, which means the state must prove the law is "the least restrictive means of achieving a compelling state interest." The judge found this is not the case and that it wouldn't even survive intermediate scrutiny, because Texas has so far failed to prove that its goals are connected to its methods. Since Texas already has a law requiring age verification for porn sites, Pitman said that "only in the vast minority of applications would SB 2420 have a constitutional application to unprotected speech not addressed by other laws." Though Pitman acknowledged the importance of safeguarding kids online, he added, "the means to achieve that end must be consistent with the First Amendment. However compelling the policy concerns, and however widespread the agreement that the issue must be addressed, the Court remains bound by the rule of law."
"The Texas App Store Accountability Act is the first among a series of similar state laws to face a legal challenge, making the ruling especially significant, as Congress considers a version of the statute," notes The Verge. "The laws, versions of which also passed in Utah and Louisiana, aim to impose age verification standards at the app store level, making companies like Apple and Google responsible for transmitting signals about users' ages to app developers to block users from age-inappropriate experiences."

"The state can still appeal the ruling with the Fifth Circuit Court of Appeals, which has a history of reversing blocks on internet regulations."
Piracy

LimeWire Re-Emerges In Online Rush To Share Pulled '60 Minutes' Segment (arstechnica.com) 124

An anonymous reader quotes a report from Ars Technica: CBS cannot contain the online spread of a "60 Minutes" segment that its editor-in-chief, Bari Weiss, tried to block from airing. The episode, "Inside CECOT," featured testimonies from US deportees who were tortured or suffered physical or sexual abuse at a notorious Salvadoran prison, the Center for the Confinement of Terrorism. "Welcome to hell," one former inmate was told upon arriving, the segment reported, while also highlighting a clip of Donald Trump praising CECOT and its leadership for "great facilities, very strong facilities, and they don't play games."

Weiss controversially pulled the segment on Monday, claiming it could not air in the US because it lacked critical voices, as no Trump officials were interviewed. She claimed that the segment "did not advance the ball" and merely echoed others' reporting, NBC News reported. Her plan was to air the segment when it was "ready," insisting that holding stories "for whatever reason" happens "every day in every newsroom." But Weiss apparently did not realize that "Inside CECOT" would still stream in Canada, giving the public a chance to view the segment as reporters had intended.

Critics accusing CBS of censoring the story quickly shared the segment online Monday after discovering that it was available on the Global TV app. Using a VPN to connect to the app with a Canadian IP address was all it took to override Weiss' block in the US, as 404 Media reported the segment was uploaded "to a variety of file sharing sites and services, including iCloud, Mega, and as a torrent," including on the recently revived file-sharing service LimeWire. It's currently also available to stream on the Internet Archive, where one reviewer largely summed up the public's response so far, writing, "cannot believe this was pulled, not a dang thing wrong with this segment except it shows truth."
"Yo what," joked Reddit user Howzitgoin, highlighting only the word "LimeWire." Another user responded, "man, who knew my nostalgia prof pic would become relevant again, WTF."

"Bringing back LimeWire to illegally rip copies of reporting suppressed by the government is definitely some cyberpunk shit," a Bluesky user wrote.

"We need a champion against the darkness," a Reddit commenter echoed. "I side with LimeWire."
United States

FCC Bans Foreign-Made Drones Over National Security, Spying Concerns (politico.com) 66

The FCC has banned approval of new foreign-made drones and components, citing "an unacceptable risk" to national security. The move will most heavily impact DJI but it "does not affect drones or drone components that are currently sold in the United States." Reuters reports: The tech was placed on the commission's "Covered List," barring DJI and other foreign drone manufacturers from receiving the FCC's approval to sell new drone models for import or sale in the U.S. In Monday's announcement, the agency said that the move "will reduce the risk of direct [drone] attacks and disruptions, unauthorized surveillance, sensitive data exfiltration and other [drone] threats to the homeland."

FCC Chair Brendan Carr said in a statement that while drones offer the potential to boost public safety and the U.S.' posture on global innovation, "criminals, terrorists and hostile foreign actors have intensified their weaponization of these technologies, creating new and serious threats to our homeland."

The ruling comes as China hawks in Congress amplify warnings about the security risks of drones made by DJI, which accounts for more than 90% of the global market share. But efforts on Capitol Hill to crack down have been met with some pushback due to the potential impact that curbing drone usage would have on U.S. businesses and law enforcement. A wide variety of sectors, including construction, energy, agriculture and mining companies, as well as local police and fire departments across the country, deploy DJI-made drones.

United States

Welcome To America's New Surveillance High Schools (forbes.com) 96

Beverly Hills High School has deployed an AI-powered surveillance apparatus that includes facial recognition cameras, behavioral analysis software, smoke detector-shaped bathroom listening devices from Motorola, drones, and license plate readers from Flock Safety -- a setup the district spent $4.8 million on in the 2024-2025 fiscal year and considers necessary given the school's high-profile location in Los Angeles.

Similar systems are spreading to campuses nationwide as schools try to stop mass shootings that killed 49 people on school property this year, 59 in 2024, and 45 in 2023. A 2023 ACLU report found that eight of the ten largest school shootings since Columbine occurred at schools that already had surveillance systems, and 32% of students surveyed said they felt like they were always being watched. The technology has a spotty track record, however.

Gun detection vendor Evolv, used by more than 800 schools including Beverly Hills High, was reprimanded by the FTC in 2024 for claiming its AI could detect all weapons after it failed to flag a seven-inch knife used to stab a student in 2022. Evolv has also flagged laptops and water bottles as guns. Rival vendor Omnilert flagged a 16-year-old student at a Maryland high school reaching for an empty Doritos bag as a possible gun threat; police held the teenager at gunpoint.

Not every school is buying in. Highline Schools in Washington state cancelled its $33,000 annual ZeroEyes contract this year and spent the money on defibrillators and Ford SUVs for its safety team instead.
Music

Spotify Says 'Anti-Copyright Extremists' Scraped Its Library (musically.com) 59

A group of activists has scraped Spotify's entire library, accessing 256 million rows of track metadata and 86 million audio files totaling roughly 300TB of data. The metadata has been released via Anna's Archive, a search engine for "shadow libraries" that previously focused on books.

Spotify described the activists as "anti-copyright extremists who've previously pirated content from YouTube and other platforms" and confirmed it is actively investigating the incident. The activists claim this represents "the world's first 'preservation archive' for music which is fully open" and covers "around 99.6% of listens."

They appear to have used Spotify's public web API to scrape the metadata and circumvented DRM to access audio files. Spotify insists that this is not a security breach affecting user data. The more pressing concern for the music industry, though, may be AI training rather than pirate streaming services: similar YouTube datasets have reportedly been used by unlicensed generative AI music services.
Crime

In 2025 Scammers Have Stolen $835M from Americans Using Fake Customer Service Numbers (straitstimes.com) 26

They call it "the business-impersonator scam". And it's fooled 396,227 Americans in just the first nine months of 2025 — 18% more than the 335,785 in the same nine months of 2024. That's according to a Bloomberg reporter (who also fell for it in late November), citing the official statistics from America's Federal Trade Commission: Some pose as airline staff on social media and respond to consumer complaints. Others use texts or e-mails claiming to be an airline reporting a delayed or cancelled flight to phish for travellers' data. But the objective is always the same: to hit a stressed-out, overwhelmed traveller at their most vulnerable. In my case, the scammer exploited weaknesses in Google's automated ad-screening system, so that fraudulent sponsored results rose to the top [They'd typed "United airlines agent on demand" into Google, and the top search result on their phone said United.com, had a 1-888 number next to it and said it had had 1M+ visits in the past month. "It looked legit. I tapped the number..."]

After I reported the fake "United Airlines" ad to Google, via an online form for consumers, it was taken down. But a few days later, I entered the same search terms and the identical ad featuring the same 1-888 number was back at the top of my results. I reported it again, and it was quickly removed again... A [Google] spokesperson there said the company is constantly evolving its tactics "to stay ahead of bad actors." Of the 5.1 billion ads blocked by the company last year, she said, 415 million were taken down for "scam-related violations." Google updated its ads misrepresentation policy in 2024 to include "impersonating or falsely implying affiliation with a public figure, brand or organization to entice users to provide money or information." Still, many impostor ads slip through the cracks.

"Reported losses from business-impostor scams in the United States rose 30 per cent, to US$835 million, in the first three quarters of 2025," the article points out (citing more figures from America's Federal Trade Commission). An updated version of the article also includes a response from United Airlines. "We encourage customers to only use customer-service contact information that is listed on our website and app."

And what happened to the scammed reporter? "I called American Express and contested the charge before cancelling my credit card. I then contacted Experian, one of the three major credit bureaus, to put a fraud alert on my file. Next, I filed a complaint with the FTC and reported the fake ad to Google.

"American Express wound up resolving the dispute in my favour, but the memories of this chaotic Thanksgiving will stay with us forever."
United States

The U.S. Could Ban Chinese-Made Drones Used By Police Departments (msn.com) 76

Tuesday the White House faces a deadline to decide "whether Chinese drone maker DJI Technologies poses a national security threat," reports Bloomberg. But their article notes it's "a decision with the potential to ground thousands of machines deployed by police and fire departments across the US."

One person making the case against the drones is Mike Nathe, a North Dakota Republican state representative described by the Post as "at the forefront of a nationwide campaign sounding alarms about the Made-in-China aircraft." Nathe tells them that "People do not realize the security issue with these drones, the amount of information that's being funneled back to China on a daily basis." The president already signed an executive order in June targeting "foreign control or exploitation" of America's drone supply chain. That came after Congress mandated a review to determine whether DJI deserves inclusion in a federal register of companies believed to endanger national security. If DJI doesn't get a clean bill of health for Christmas, it could join Huawei Technologies Co. Ltd. and ZTE Corp. on that Federal Communications Commission list. The designation would give the Trump administration authority to prevent new domestic sales or even impose a flight ban, affecting public agencies from New York to North Dakota to Nevada...

The fleet used by public safety agencies nationwide numbers about 25,000 aircraft, said Chris Fink, founder of Unmanned Vehicle Technologies LLC, a Fayetteville, Arkansas-based firm that advises law-enforcement clients. The overwhelming majority of those drones — called uncrewed aerial vehicles, or UAVs, in industry parlance — comes from China, said Jon Beal, president of the Law Enforcement Drone Association, a training and advocacy group that counts DJI and some US competitors as corporate sponsors...

Currently, at least half a dozen states have targeted DJI and other Chinese-manufactured drones, including restrictions in Arkansas, Mississippi and Tennessee. A Nevada law prohibiting public agencies from using Chinese drones took effect in January... Legislators also took up the cause in Connecticut, which passed a law this year preventing public offices from using Chinese drones. Supporters said they're worried about these eyes in the skies being used for spying. "We're kind of sitting ducks," said Bob Duff, the Democratic majority leader in the state senate who promoted the legislation. "They are designed to infiltrate systems even when the users don't think that they will."

One North Dakota sheriff's department complains U.S.-made drones are "at least double and triple the price out of the gate," according to the article, which adds that public safety officials "say it's difficult to find domestic alternatives that match DJI in price and performance."

And DJI "wants an extension on the security review," according to the article, "saying Tuesday is too soon to make a conclusion."
United States

Trump Admin to Hire 1,000 for New 'Tech Force' to Build AI Infrastructure (cnbc.com) 56

An anonymous reader shared this report from CNBC: The Trump administration on Monday unveiled a new initiative dubbed the "U.S. Tech Force," comprising about 1,000 engineers and other specialists who will work on artificial intelligence infrastructure and other technology projects throughout the federal government.

Participants will commit to a two-year employment program working with teams that report directly to agency leaders in "collaboration with leading technology companies," according to an official government website. ["...and work closely with senior managers from companies partnering with the Tech Force."] Those "private sector partners" include Amazon Web Services, Apple, Google Public Sector, Dell Technologies, Microsoft, Nvidia, OpenAI, Oracle, Palantir, Salesforce and numerous others [including AMD, IBM, Coinbase, Robinhood, Uber, xAI, and Zoom], the website says.

The Tech Force shows the Trump administration increasing its focus on developing America's AI infrastructure as it competes with China for dominance in the rapidly growing industry... The engineering corps will be working on "high-impact technology initiatives including AI implementation, application development, data modernization, and digital service delivery across federal agencies," the site says.

"Answer the call," says the new web site at TechForce.gov.

"Upon completing the program, engineers can seek employment with the partnering private-sector companies for potential full-time roles — demonstrating the value of combining civil service with technical expertise." [And those private sector companies can also nominate employees to participate.] "Annual salaries are expected to be in the approximate range of $150,000 to $200,000."
Crime

Flock Executive Says Their Camera Helped Find Shooting Suspect, Addresses Privacy Concerns (cnn.com) 58

During a search for the Brown shooting suspect, a law enforcement press conference included a request for "Ring camera footage from residents and businesses near Brown University," according to local news reports.

But in the end it was Flock cameras, according to an article in Gizmodo, after a Reddit poster described seeing "odd" behavior from someone who turned out to be the suspect: The original Reddit poster, identified only as John in the affidavit, contacted police the next day and came in for an interview. He told them about his odd encounter with the suspect, noting that he was acting suspiciously by not having appropriate cold-weather clothes on when he saw him in a bathroom at Brown University. That was two hours before the shooting. After spotting him in the bathroom wearing a mask, John actually started following the suspect in what he called a "game of cat and mouse...." Police detectives showed John two images obtained through Flock, the company that's built extensive surveillance infrastructure across the U.S. that is used by investigators, and he recognized the suspect's vehicle, replying, "Holy shit. That might be it," according to the affidavit. Police were able to track down the license plate of the rental car, which gave them a name, and within 24 hours, they had found Claudio Manuel Neves Valente dead in a storage facility in Salem, New Hampshire, where he reportedly rented a unit.
"We intend to continue using technology to make sure our law enforcement are empowered to do their jobs," Flock Safety CEO Garrett Langley wrote on X.com, pinning the post to the top of his feed.

Ironically, just hours before Providence Police Chief Oscar Perez credited Flock for helping to find the suspect, CNN was interviewing Flock Safety's CEO to discuss "his response to recent privacy concerns surrounding Flock's technology." To Langley, the situation underscored the value and importance of Flock's technology, despite mounting privacy concerns that have prompted some jurisdictions to cancel contracts with the company... Langley told me on Thursday that he was motivated to start Flock to keep Americans safer. His goal is to deter crime by convincing would-be criminals they'll be caught... One of Flock's cameras had recently spotted [the suspect's] car, helping police pinpoint Valente's location. Flock turned on additional AI capabilities that were not part of Providence Police's contract with the company to assist in the hunt, a company spokesperson told CNN, including a feature that can identify the same vehicle based on its description even if its license plates have been changed.

The company has faced criticism from some privacy advocates and community groups who worry that its networks of cameras are collecting too much personal information from private citizens and could be misused. Both the Electronic Frontier Foundation and the American Civil Liberties Union have urged communities not to work with Flock. "State legislatures and local governments around the nation need to enact strong, meaningful protections of our privacy and way of life against this kind of AI surveillance machinery," ACLU Senior Policy Analyst Jay Stanley wrote in an August blog post. Flock also drew scrutiny in October when it announced a partnership with Amazon's Ring doorbell camera system... ["Local officers using Flock Safety's technology can now post a request directly in the Ring Neighbors app asking for help," explains Flock's blog post.]

Langley told me it was up to police to reassure communities that the cameras would be used responsibly... "If you don't trust law enforcement to do their job, that's actually what you're concerned about, and I'm not going to help people get over that." Langley added that Flock has built some guardrails into its technology, including audit trails that show when data was accessed. He pointed to a case in Georgia where that audit found a police chief using data from LPR cameras to stalk and harass people. The chief resigned and was arrested and charged in November...

More recently, the company rolled out a "drone as first responder" service — where law enforcement officers can dispatch a drone equipped with a camera, whose footage is similarly searchable via AI, to evaluate the scene of an emergency call before human officers arrive. Flock's drone systems completed 10,000 flights in the third quarter of 2025 alone, according to the company... I asked what he'd tell communities already worried about surveillance from LPRs who might be wary of camera-equipped drones also flying overhead. He said cities can set their own limitations on drone usage, such as only using drones to respond to 911 calls or positioning the drones' cameras on the horizon while flying until they reach the scene. He added that the drones fly at an elevation of 400 feet.

Slashdot Top Deals