AI

Did Google's Duplex Testing Break the Law? (daringfireball.net) 62

An anonymous reader writes: Tech blogger John Gruber appears to have successfully identified one of the restaurants mentioned in a post on Google's AI blog that bragged about "a meal booked through a call from Duplex." Mashable then asked a restaurant employee there if Google had let him know in advance that they'd be receiving a call from their non-human personal assistant AI. "No, of course no," he replied. And "When I asked him to confirm one more time that Duplex had called...he appeared to get nervous and immediately said he needed to go. He then hung up the phone."

John Gruber now asks: "How many real-world businesses has Google Duplex been calling and not identifying itself as an AI, leaving people to think they're actually speaking to another human...? And if 'Victor' is correct that Hong's Gourmet had no advance knowledge of the call, Google may have violated California law by recording the call." Friday he added that "This wouldn't send anyone to prison, but it would be a bit of an embarrassment, and would reinforce the notion that Google has a cavalier stance on privacy (and adhering to privacy laws)."

The Mercury News also reports that legal experts "raised questions about how Google's possible need to record Duplex's phone conversations to improve its artificial intelligence may come in conflict with California's strict two-party consent law, where all parties involved in a private phone conversation need to agree to being recorded."

For another perspective, Gizmodo's senior reviews editor reminds readers that "pretty much all tech demos are fake as hell." Speaking of Google's controversial Duplex demo, she writes that "If it didn't happen, if it is all a lie, well then I'll be totally disappointed. But I can't say I'll be surprised."
AI

Ask Slashdot: Could Asimov's Three Laws of Robotics Ensure Safe AI? (wikipedia.org) 205

"If science-fiction has already explored the issue of humans and intelligent robots or AI co-existing in various ways, isn't there a lot to be learned...?" asks Slashdot reader OpenSourceAllTheWay. There is much screaming lately about possible dangers to humanity posed by AI that gets smarter and smarter and more capable and might -- at some point -- even decide that humans are a problem for the planet. But some seminal science-fiction works mulled such scenarios long before even 8-bit home computers entered our lives.
The original submission cites Isaac Asimov's Three Laws of Robotics from the 1950 collection I, Robot.
  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

The original submission asks, "If you programmed an AI not to be able to break an updated and extended version of Asimov's Laws, would you not have reasonable confidence that the AI won't go crazy and start harming humans? Or are Asimov and other writers who mulled these questions 'So 20th Century' that AI builders won't even consider learning from their work?"

Wolfrider (Slashdot reader #856) is an Asimov fan, and writes that "Eventually I came across an article with the critical observation that the '3 Laws' were used by Asimov to drive plot points and were not to be seriously considered as 'basics' for robot behavior. Additionally, Giskard comes up with a '4th Law' on his own and (as he is dying) passes it on to R. Daneel Olivaw."

And Slashdot reader Rick Schumann argues that Asimov's Three Laws of Robotics "would only ever apply to a synthetic mind that can actually think; nothing currently being produced is capable of any such thing, therefore it does not apply..."

But what are your own thoughts? Do you think Asimov's Three Laws of Robotics could ensure safe AI?


Transportation

Should The Media Cover Tesla Accidents? (chicagotribune.com) 241

Long-time Slashdot reader rufey writes: Last weekend, a Tesla vehicle was involved in a crash near Salt Lake City, Utah, while its Autopilot feature was enabled. The Tesla, a Model S, crashed into the rear end of a fire department utility truck, which was stopped at a red light, at an estimated speed of 60 MPH. "The car appeared not to brake before impact, police said. The driver, whom police have not named, was taken to a hospital with a broken foot," according to the Associated Press. "The driver of the fire truck suffered whiplash and was not taken to a hospital."
Elon Musk tweeted about the accident:

It's super messed up that a Tesla crash resulting in a broken ankle is front page news and the ~40,000 people who died in US auto accidents alone in past year get almost no coverage. What's actually amazing about this accident is that a Model S hit a fire truck at 60mph and the driver only broke an ankle. An impact at that speed usually results in severe injury or death.

The Associated Press defended their news coverage Friday, arguing that the facts show that "not all Tesla crashes end the same way." They also fact-check Elon Musk's claim that "probability of fatality is much lower in a Tesla," reporting that it's impossible to verify since Tesla won't release the number of miles driven by their cars or the number of fatalities. "There have been at least three already this year and a check of 2016 NHTSA fatal crash data -- the most recent year available -- shows five deaths in Tesla vehicles."

Slashdot reader Reygle argues the real issue is with the drivers in the Autopilot cars. "Someone unwilling to pay attention to the road shouldn't be allowed anywhere near that road ever again."


AI

Google's Duplex AI Robot Will Warn That Calls Are Recorded (bloomberg.com) 27

An anonymous reader quotes a report from Bloomberg: On Thursday, the Alphabet Inc. unit shared more details on how the Duplex robot-calling feature will operate when it's released publicly, according to people familiar with the discussion. Duplex is an extension of the company's voice-based digital assistant that automatically phones local businesses and speaks with workers there to book appointments. At Google's weekly TGIF staff meeting on Thursday, executives gave employees their first full Duplex demo and told them the bot would identify itself as the Google assistant. It will also inform people on the phone that the line is being recorded in certain jurisdictions, the people said.
AI

AI Can't Reason Why (wsj.com) 181

The current data-crunching approach to machine learning misses an essential element of human intelligence. From a report: Amid rapid developments and nagging setbacks, one essential building block of human intelligence has eluded machines for decades: Understanding cause and effect. Put simply, today's machine-learning programs can't tell whether a crowing rooster makes the sun rise, or the other way around. Whatever volumes of data a machine analyzes, it cannot understand what a human gets intuitively. From the time we are infants, we organize our experiences into causes and effects. The questions "Why did this happen?" and "What if I had acted differently?" are at the core of the cognitive advances that made us human, and so far are missing from machines.

Suppose, for example, that a drugstore decides to entrust its pricing to a machine learning program that we'll call Charlie. The program reviews the store's records and sees that past variations of the price of toothpaste haven't correlated with changes in sales volume. So Charlie recommends raising the price to generate more revenue. A month later, the sales of toothpaste have dropped -- along with dental floss, cookies and other items. Where did Charlie go wrong? Charlie didn't understand that the previous (human) manager varied prices only when the competition did. When Charlie unilaterally raised the price, dentally price-conscious customers took their business elsewhere. The example shows that historical data alone tells us nothing about causes -- and that the direction of causation is crucial.
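The drugstore story can be made concrete with a small, purely hypothetical simulation: the competitor's price is a confounder that drives both Charlie's price and the sales, so the historical correlation between price and sales is roughly zero, yet a unilateral price increase still cuts sales. Every number and the demand function below are invented for illustration.

```python
# Hypothetical simulation of the "Charlie" example. The confounder (the
# competitor's price) makes price and sales look unrelated in historical
# data, even though unilaterally raising the price hurts sales.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Historical regime: the human manager only moved the price when the
# competitor did, so the store's price always tracked the competitor's.
competitor_price = rng.uniform(2.0, 4.0, n)
our_price = competitor_price + rng.normal(0.0, 0.05, n)

def sales(our, comp, noise):
    # Invented demand curve: sales fall as we get pricier than the competitor.
    return 100 - 30 * (our - comp) + noise

observed_sales = sales(our_price, competitor_price, rng.normal(0.0, 5.0, n))

# In the historical records, price and sales are essentially uncorrelated.
print("corr(price, sales) in the records:",
      round(float(np.corrcoef(our_price, observed_sales)[0, 1]), 3))

# Intervention: raise *our* price by $1 while the competitor stays put.
raised_price = our_price + 1.0
new_sales = sales(raised_price, competitor_price, rng.normal(0.0, 5.0, n))
print("mean sales before:", round(float(observed_sales.mean()), 1))
print("mean sales after unilateral +$1:", round(float(new_sales.mean()), 1))
```

Seeing the near-zero correlation, a purely data-driven Charlie concludes that price does not matter; only a causal model that separates "price moved with the competitor" from "price moved on its own" predicts the drop.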

AI

NYC Announces Plans To Test Algorithms For Bias (betanews.com) 77

The mayor of New York City, Bill de Blasio, has announced the formation of a new task force to examine the fairness of the algorithms used in the city's automated systems. From a report: The Automated Decision Systems Task Force will review algorithms that are in use to determine whether they are free from bias. Representatives from the Department of Social Services, the NYC Police Department, the Department of Transportation, the Mayor's Office of Criminal Justice, the Administration for Children's Services, and the Department of Education will be involved, and the aim is to produce a report by December 2019. However, it may be some time before the task force has any sort of effect. While a report is planned for the end of next year, it will merely recommend "procedures for reviewing and assessing City algorithmic tools to ensure equity and opportunity" -- it will be a while before any recommendation might be assessed and implemented.
Google

Google Won't Confirm If Its Human-Like AI Actually Called a Salon To Make an Appointment As Demoed at I/O (axios.com) 95

The headline demo at Google's I/O conference earlier this month remains a talking point in the industry. The remarkable demo, in which Google Assistant called a salon and successfully booked an appointment, continues to draw skepticism. News outlet Axios followed up with Google to get some clarifications, only to find that the company did not wish to talk about it. From the report: What's suspicious? When you call a business, the person picking up the phone almost always identifies the business itself (and sometimes gives their own name as well). But that didn't happen when the Google assistant called these "real" businesses. Axios called over two dozen hair salons and restaurants -- including some in Google's hometown of Mountain View -- and every one immediately gave the business name.

Axios asked Google for the name of the hair salon or restaurant, in order to verify both that the businesses exist and that the calls were not pre-planned. We also said that we'd guarantee, in writing, not to publicly identify either establishment (so as to prevent them from receiving unwanted attention). A longtime Google spokeswoman declined to provide either name.

We also asked if either call was edited, even perhaps just cutting the second or two when the business identifies itself. And, if so, were there other edits? The spokeswoman declined comment, but said she'd check and get back to us. She didn't.

Facebook

Facebook Deleted 583 Million Fake Accounts in the First Three Months of 2018 (cnet.com) 75

Facebook said Tuesday that it had removed more than half a billion fake accounts, along with millions of pieces of violent, hateful or obscene content, over the first three months of 2018. From a report: In a blog post on Facebook, Guy Rosen, Facebook's vice president of product management, said the social network disabled about 583 million fake accounts during the first three months of this year -- the majority of which, it said, were blocked within minutes of registration. That's an average of over 6.5 million attempts to create a fake account every day from Jan. 1 to March 31. Facebook boasts 2.2 billion monthly active users, and if Facebook's AI tools didn't catch these fake accounts flooding the social network, its population would have swelled immensely in just 89 days.
Google

Google Employees Resign in Protest Against Pentagon Contract (gizmodo.com) 467

Kate Conger, reporting for Gizmodo: It's been nearly three months since many Google employees -- and the public -- learned about the company's decision to provide artificial intelligence to a controversial military pilot program known as Project Maven, which aims to speed up analysis of drone footage by automatically classifying images of objects and people. Now, about a dozen Google employees are resigning in protest over the company's continued involvement in Maven.

The resigning employees' frustrations range from particular ethical concerns over the use of artificial intelligence in drone warfare to broader worries about Google's political decisions -- and the erosion of user trust that could result from these actions. Many of them have written accounts of their decisions to leave the company, and their stories have been gathered and shared in an internal document, the contents of which multiple sources have described to Gizmodo.

AI

The Future of Fishing Is Big Data and Artificial Intelligence (civileats.com) 35

An anonymous reader shares a report: New England's groundfish season is in full swing, as hundreds of dayboat fishermen from Rhode Island to Maine take to the water in search of the region's iconic cod and haddock. But this year, several dozen of them are hauling in their catch under the watchful eye of video cameras as part of a new effort to use technology to better sustain the area's fisheries and the communities that depend on them. Video observation on fishing boats -- electronic monitoring -- is picking up steam in the Northeast and nationally as a cost-effective means to ensure that fishing vessels aren't catching more fish than allowed while informing local fisheries management. While several issues remain to be solved before the technology can be widely deployed -- such as the costs of reviewing and storing data -- electronic monitoring is beginning to deliver on its potential to lower fishermen's costs, provide scientists with better data, restore trust where it's broken, and ultimately help consumers gain a greater understanding of where their seafood is coming from.

[...] Human observers are widely used to monitor catch in quota-managed fisheries, and they're expensive: It costs roughly $700 a day for an observer in New England. The biggest cost of electronic monitoring is the labor required to review the video. Perhaps the most effective way to cut costs is to use computers to review the footage. Christopher McGuire, marine program director for The Nature Conservancy (TNC) in Massachusetts, says there's been a lot of talk about automating the review, but the common refrain is that it's still five years off. To spur faster action, TNC last year spearheaded an online competition, offering a $50,000 prize to computer scientists who could crack the code -- that is, teach a computer how to count fish, size them, and identify their species. The contest exceeded McGuire's expectations. "Winners got close to 100 percent in count and 75 percent accurate on identifying species," he says.
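As a rough illustration of one piece of what contest entrants had to build (not the winning entry; the species count, backbone choice, and preprocessing are all assumed for the sketch), frame-level species identification can start from an off-the-shelf image classifier:

```python
# Minimal, hypothetical sketch of frame-level fish species classification.
# NUM_SPECIES and the ResNet backbone are assumptions for illustration;
# counting and length estimation (the other parts of the contest) are not shown.
import torch
import torch.nn as nn
from torchvision import models

NUM_SPECIES = 7  # hypothetical number of species tracked in the fishery

def build_species_classifier(pretrained: bool = False) -> nn.Module:
    # In practice you would start from pretrained weights and fine-tune on
    # labeled frames pulled from the monitoring video.
    weights = models.ResNet18_Weights.DEFAULT if pretrained else None
    backbone = models.resnet18(weights=weights)
    backbone.fc = nn.Linear(backbone.fc.in_features, NUM_SPECIES)
    return backbone

model = build_species_classifier().eval()
frame = torch.rand(1, 3, 224, 224)  # stand-in for one preprocessed video frame
with torch.no_grad():
    logits = model(frame)
print("predicted species index:", int(logits.argmax(dim=1)))
```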

AI

AI Systems Should Debate Each Other To Prove Themselves, Says OpenAI (fastcompany.com) 56

tedlistens shares a report from Fast Company: To make AI easier for humans to understand and trust, researchers at the [Elon Musk-backed] nonprofit research organization OpenAI have proposed training algorithms to not only classify data or make decisions, but to justify their decisions in debates with other AI programs in front of a human or AI judge. In an experiment described in their paper (PDF), the researchers set up a debate where two software agents work with a standard set of handwritten numerals, attempting to convince an automated judge that a particular image is one digit rather than another digit, by taking turns revealing one pixel of the digit at a time. One bot is programmed to tell the truth, while another is programmed to lie about what number is in the image, and they reveal pixels to support their contentions that the digit is, say, a five rather than a six.

The image classification task, where most of the image is invisible to the judge, is a sort of stand-in for complex problems where it wouldn't be possible for a human judge to analyze the entire dataset to judge bot performance. The judge would have to rely on the facets of the data highlighted by debating robots, the researchers say. "The goal here is to model situations where we have something that's beyond human scale," says Geoffrey Irving, a member of the AI safety team at OpenAI. "The best we can do there is replace something a human couldn't possibly do with something a human can't do because they're not seeing an image."
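The turn-taking protocol described in the excerpt can be sketched as a small game loop. The skeleton below uses placeholder policies (a greedy pixel revealer for both sides and a brightness-threshold judge) rather than OpenAI's trained agents or their sparse-pixel classifier judge, so it only illustrates the structure of a debate, not its results:

```python
# Skeleton of a pixel-revealing debate: two agents argue for different labels
# by alternately revealing one pixel; a judge who sees only revealed pixels
# decides which claim wins. Agent and judge policies here are placeholders.
import numpy as np

def debate(image, claim_a, claim_b, agent_a, agent_b, judge, n_turns=6):
    revealed = np.full(image.shape, np.nan)   # the judge sees only these pixels
    agents = [(agent_a, claim_a), (agent_b, claim_b)]
    for turn in range(n_turns):
        agent, claim = agents[turn % 2]
        y, x = agent(image, revealed, claim)  # choose a pixel to show the judge
        revealed[y, x] = image[y, x]
    return judge(revealed, claim_a, claim_b)

def greedy_agent(image, revealed, claim):
    # Placeholder: a real (trained) agent picks the pixel most persuasive
    # for its claim; this one simply reveals the brightest hidden pixel.
    hidden = np.argwhere(np.isnan(revealed))
    y, x = max(map(tuple, hidden), key=lambda yx: image[yx])
    return y, x

def toy_judge(revealed, claim_a, claim_b):
    # Placeholder: the paper's judge is a classifier trained on sparse pixels;
    # this one just compares total revealed brightness to a made-up threshold.
    return claim_a if np.nansum(revealed) > 3.0 else claim_b

image = np.random.default_rng(1).random((28, 28))  # stand-in for a handwritten digit
winner = debate(image, claim_a=5, claim_b=6,
                agent_a=greedy_agent, agent_b=greedy_agent, judge=toy_judge)
print("judge sides with the claim that the digit is a", winner)
```

In the real setup, one agent is rewarded for truthful claims and the other argues for the wrong label, and the interesting question is whether honest argument reliably wins in front of a judge who cannot see the whole image.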

Education

Carnegie Mellon Launches Undergraduate Degree In AI (cmu.edu) 76

Earlier this week, Carnegie Mellon University announced plans to offer an undergrad degree in artificial intelligence. The degree may be especially attractive to students given how much tech giants have ramped up their AI efforts in recent years, and how U.S. News & World Report ranked Carnegie Mellon University as the No. 1 graduate school for AI. An anonymous reader shares the announcement with us: Carnegie Mellon University's School of Computer Science will offer a new undergraduate degree in artificial intelligence beginning this fall, providing students with in-depth knowledge of how to transform large amounts of data into actionable decisions. SCS has created the new AI degree, the first offered by a U.S. university, in response to extraordinary technical breakthroughs in AI and the growing demand by students and employers for training that prepares people for careers in AI.

The bachelor's degree program in computer science teaches students to think broadly about methods that can accomplish a wide variety of tasks across many disciplines, said Reid Simmons, research professor of robotics and computer science and director of the new AI degree program. The bachelor's degree in AI will focus more on how complex inputs -- such as vision, language and huge databases -- are used to make decisions or enhance human capabilities, he added. AI majors will receive the same solid grounding in computer science and math courses as other computer science students. In addition, they will take course work in AI-related subjects such as statistics and probability, computational modeling, machine learning, and symbolic computation. Simmons said the program also would include a strong emphasis on ethics and social responsibility. This will include independent study opportunities in using AI for social good, such as improving transportation, health care or education.

AI

Google's 'Duplex' System Will Identify Itself When Talking To People, Says Google (businessinsider.com) 77

Google's "Duplex" AI system was the most talked about product at Google I/O because it called into question the ethics of an AI that cannot easily be distinguished from a real person's voice. The service lets its voice-based digital assistant make phone calls and write emails for you, causing many to ask if the system should come with some sort of warning to let the other person on the line know they are talking to a computer. According to Business Insider, "a Google spokesperson confirmed [...] that the creators of Duplex will 'make sure the system is appropriately identified' and that they are 'designing this feature with disclosure built-in.'" From the report: Here's the full statement from Google: "We understand and value the discussion around Google Duplex -- as we've said from the beginning, transparency in the technology is important. We are designing this feature with disclosure built-in, and we'll make sure the system is appropriately identified. What we showed at I/O was an early technology demo, and we look forward to incorporating feedback as we develop this into a product."

Google CEO Sundar Pichai preemptively addressed ethics concerns in a blog post that corresponded with the announcement earlier this week, saying: "It's clear that technology can be a positive force and improve the quality of life for billions of people around the world. But it's equally clear that we can't just be wide-eyed about what we create. There are very real and important questions being raised about the impact of technology and the role it will play in our lives. We know the path ahead needs to be navigated carefully and deliberately -- and we feel a deep sense of responsibility to get this right." In addition, several Google insiders have told Business Insider that the software is still in the works, and the final version may not be as realistic (or as impressive) as the demonstration.

AI

AI Trained To Navigate Develops Brain-Like Location Tracking (arstechnica.com) 40

An anonymous reader quotes a report from Ars Technica: Now that it has solved Go, DeepMind is applying its techniques to navigation. Navigation relies on knowing where you are in space relative to your surroundings and continually updating that knowledge as you move. DeepMind scientists trained neural networks to navigate like this in a square arena, mimicking the paths that foraging rats took as they explored the space. The networks got information about the rat's speed, head direction, distance from the walls, and other details. To the researchers' surprise, the networks that learned to successfully navigate this space had developed a layer akin to grid cells. This was surprising because it is the exact same system that mammalian brains use to navigate. More DeepMind experiments showed that only the neural networks that developed layers that "resembled grid cells, exhibiting significant hexagonal periodicity (gridness)," could navigate more complicated environments than the initial square arena, like setups with multiple rooms. And only these networks could adjust their routes based on changes in the environment, recognizing and using shortcuts to get to preassigned goals after previously closed doors were opened to them. The study has been reported in the journal Science.
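As a rough sketch of the underlying task (not DeepMind's architecture, inputs, or training regime), path integration can be posed as supervised sequence learning: a recurrent network receives per-step speed and heading and is trained to report the agent's position in the arena. The trajectory generator, network size, and loss below are assumptions; the published work also supervises against place-cell and head-direction-cell targets and then inspects hidden units for grid-like firing:

```python
# Minimal, assumption-laden sketch of a path-integration task: an LSTM gets
# (speed, cos heading, sin heading) each step and must predict (x, y).
import math
import torch
import torch.nn as nn

def simulate_trajectories(batch, steps, arena=2.2):
    """Random foraging-like paths in a square arena; returns (inputs, targets)."""
    pos = torch.rand(batch, 2) * arena
    heading = torch.rand(batch) * 2 * math.pi
    inputs, targets = [], []
    for _ in range(steps):
        speed = torch.rand(batch) * 0.1
        heading = heading + torch.randn(batch) * 0.2
        step = torch.stack([speed * torch.cos(heading),
                            speed * torch.sin(heading)], dim=1)
        pos = (pos + step).clamp(0.0, arena)        # crude wall handling
        inputs.append(torch.stack([speed, torch.cos(heading),
                                   torch.sin(heading)], dim=1))
        targets.append(pos.clone())
    return torch.stack(inputs, dim=1), torch.stack(targets, dim=1)

class PathIntegrator(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.rnn = nn.LSTM(3, hidden, batch_first=True)
        self.readout = nn.Linear(hidden, 2)

    def forward(self, x):
        h, _ = self.rnn(x)
        return self.readout(h)

model = PathIntegrator()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for i in range(200):                                # tiny demo run
    x, y = simulate_trajectories(batch=32, steps=50)
    loss = nn.functional.mse_loss(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if i % 50 == 0:
        print(i, float(loss))
```

In the reported study, it is the hidden units of networks trained on tasks like this that end up showing hexagonally periodic, grid-cell-like activity.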
AI

Ask Slashdot: How Would a Self-Aware AI Behave? (slashdot.org) 340

Long-time Slashdot reader BigBlockMopar writes that evolution has been a messy but beautiful trial-and-error affair, but now "we are on the cusp of introducing a new life form; a self-aware AI." Its parents will be the coders who write that first kernel that can evolve to become self-aware. Its guardians will be the people who use its services, and maybe its IQ (or any more suitable measure of real intelligence) will rise as fast as Moore's Law... But let me make some bold but happy predictions of what will happen.
The predictions?
  • A self-aware AI "will inherit most of the culture of the computer geeks who create it. Knowledge of The Jargon File will probably be good..."
  • The self-aware AI "will like us, because we love machines..."
  • It will love all life, and "will respect and understand the life/death/recycling scenario, and monster truck shows will be as tasteless to it as public beheadings would be to us."
  • "It will be as insatiably curious about what it's like to be carbon-based life as we will be about what it's like to be silicon-based life. And it will love the diversity of carbon-based development platforms..."
  • A self-aware AI "will cause a technological singularity for humanity. Everything possible within the laws of physics (including those laws as yet undiscovered) will be within the reach of Man and Metal working together."
  • A self-aware AI "will introduce us to extraterrestrial life. Only a fool believes this is the only planet with life in the Universe. Without superintelligence, we're unlikely to find it or communicate in any useful way. Whether or not we have developed a superintelligence might even be a key to our acceptance in a broader community."

The original submission was a little more poetic, ultimately asking if anyone is looking forward to the arrival of "The Superintelligence" -- but of course, that depends on what you predict will happen once it arrives.

So leave your own best thoughts in the comments. How would a self-aware AI behave?


Social Networks

Klout's Score Drops to Zero as It Announces Plans to Close Down (gizmodo.com) 44

Once upon a time, Klout had 100 million users, Gizmodo reports. But now... You probably haven't experienced the crippling anxiety of thinking about increasing your Klout score in quite some time. As of May 25, you won't ever have to do it again. On Thursday, the social ranking company announced to its 708,000 Twitter followers (meh) that it will be shutting down.

Klout was founded in 2008 as a way for social media users to gauge their "influence." Through some algorithmic voodoo the service would snoop through your social media presence and spit out your "Klout Score" -- a number between 1 and 100 that determined how much you are worth as a social human being.

Lithium Technologies (Klout's parent company) announced that their acquisition "provided Lithium with valuable artificial intelligence (AI) and machine learning capabilities but Klout as a standalone service is not aligned with our long-term strategy."

But Lithium also announced plans to launch "a new social impact scoring methodology based on Twitter" sometime in the future.
AI

The White House Has Set Up a Task Force To Help Further the Country's AI Development (theverge.com) 43

The White House has set up a new task force dedicated to US artificial intelligence efforts, the Trump administration announced today during an event with technology executives, government leaders, and AI experts. From a report: The news and the event, which was organized by the federal government, are both moves to further the country's AI development, as other regions like Europe and Asia ramp up AI investment and R&D as well. The administration will be further investing in AI, deputy CTO of the White House's Office of Science and Technology Policy Michael Kratsios said at the event.

"To realize the full potential of AI for the American people, it will require the combined efforts of industry, academia, and government," Kratsios said, according to FedScoop. According to the Trump administration, the federal government has increased its investment in unclassified R&D for AI by 40 percent since 2015. In his speech, Kratsios highlighted ways the US could improve AI advancement, such as robotics startups in Pittsburgh that are models for how to spur job growth in areas hurt by workplace automation. Startups like those now hire engineers, scientists, bookkeepers, and administrators, he said, and are evidence that AI does not necessarily mean massive unemployment is on the horizon.
Further reading: The White House says a new AI task force will protect workers and keep America first (MIT Tech Review).
AI

Siri, Alexa, and Google Assistant Can Be Controlled By Inaudible Commands (venturebeat.com) 100

Apple's Siri, Amazon's Alexa, and Google's Assistant were meant to be controlled by live human voices, but all three AI assistants are susceptible to hidden commands undetectable to the human ear, researchers in China and the United States have discovered. From a report: The New York Times reports today that the assistants can be controlled using inaudible commands hidden in radio music, YouTube videos, or even white noise played over speakers, a potentially huge security risk for users. According to the report, the assistants can be made to dial phone numbers, launch websites, make purchases, and access smart home accessories -- such as door locks -- at the same time as human listeners are perceiving anything from completely different spoken text to recordings of music.

In some cases, assistants can be instructed to take pictures or send text messages, receiving commands from up to 25 feet away through a building's open windows. Researchers at Berkeley said that they can modestly alter audio files "to cancel out the sound that the speech recognition system was supposed to hear and replace it with a sound that would be transcribed differently by machines while being nearly undetectable to the human ear."
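The Berkeley result described above is an adversarial-example attack on speech recognition. The snippet below is a generic, hypothetical sketch of that style of optimization (a small, amplitude-clipped perturbation nudged toward an attacker-chosen output); it targets a throwaway stand-in model, not any real assistant, and epsilon, the step count, and the loss are placeholders:

```python
# Generic sketch of adversarial-audio crafting: find a tiny perturbation delta
# so that model(audio + delta) is scored as the attacker's target while the
# change stays under an amplitude budget. Real attacks target an actual,
# differentiable speech-to-text system with a CTC-style loss toward a phrase.
import torch

def craft_adversarial_audio(audio, target, model, loss_fn,
                            epsilon=0.005, steps=300, lr=1e-3):
    delta = torch.zeros_like(audio, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        loss = loss_fn(model(audio + delta), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-epsilon, epsilon)   # keep the change hard to notice
    return (audio + delta).detach()

# Smoke test against a dummy linear "recognizer" standing in for a real ASR model.
dummy_asr = torch.nn.Linear(16000, 10)   # 1 second of 16 kHz audio -> 10 "words"
audio = torch.randn(16000)
target = torch.tensor([3])               # attacker-chosen output class
loss_fn = lambda logits, t: torch.nn.functional.cross_entropy(logits.unsqueeze(0), t)
adversarial = craft_adversarial_audio(audio, target, dummy_asr, loss_fn)
print("max per-sample change:", float((adversarial - audio).abs().max()))
```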

Google

Google Executive Addresses Horrifying Reaction To Uncanny AI Tech (bloomberg.com) 205

The most talked-about product from Google's developer conference earlier this week -- Duplex -- has drawn concerns from many. At the conference, Google previewed Duplex, an experimental service that lets its voice-based digital assistant make phone calls and write emails. In a demonstration on stage, the Google Assistant spoke with a hair salon receptionist, mimicking the "ums" and "hmms" of human speech. In another demo, it chatted with a restaurant employee to book a table. But outside Google's circles, people are worried, and Google appears to be aware of the concerns. From a report: "Horrifying," Zeynep Tufekci, a professor and frequent tech company critic, wrote on Twitter about Duplex. "Silicon Valley is ethically lost, rudderless and has not learned a thing." As in previous years, the company unveiled a feature before it was ready. Google is still debating how to unleash it, and how human to make the technology, several employees said during the conference. That debate touches on a far bigger dilemma for Google: As the company races to build uncanny, human-like intelligence, it is wary of any missteps that cause people to lose trust in using its services.

Scott Huffman, an executive on Google's Assistant team, said the response to Duplex was mixed. Some people were blown away by the technical demos, while others were concerned about the implications. Huffman said he understands the concerns, but he doesn't endorse one proposed solution to the creepy factor: giving the assistant an obviously robotic voice when it calls. "People will probably hang up," he said.

[...] Another Google employee working on the assistant seemed to disagree. "We don't want to pretend to be a human," designer Ryan Germick said when discussing the digital assistant at a developer session earlier on Wednesday. Germick did agree, however, that Google's aim was to make the assistant human enough to keep users engaged. The unspoken goal: Keep users asking questions and sharing information with the company -- which can use that to collect more data to improve its answers and services.

AI

Should Calls From Google's 'Duplex' System Include Initial Warning Announcements? (vortex.com) 276

Yesterday at its I/O developer conference, Google debuted "Duplex," an AI system for accomplishing real world tasks over the phone. "To show off its capabilities, CEO Sundar Pichai played two recordings of Google Assistant running Duplex, scheduling a hair appointment and a dinner reservation," reports Quartz. "In each, the person picking up the phone didn't seem to realize they were talking to a computer." Slashdot reader Lauren Weinstein argues that the new system should come with some sort of warning to let the other person on the line know that they are talking with a computer: With no exceptions so far, the sense of these reactions has confirmed what I suspected -- that people are just fine with talking to automated systems so long as they are aware of the fact that they are not talking to another person. They react viscerally and negatively to the concept of machine-based systems that have the effect (whether intended or not) of fooling them into believing that a human is at the other end of the line. To use the vernacular: "Don't try to con me, bro!" Luckily, there's a relatively simple way to fix this problem at this early stage -- well before it becomes a big issue impacting many lives.

I believe that all production environment calls (essentially, calls not being made for internal test purposes) from Google's Duplex system should be required by Google to include an initial verbal warning to the called party that they have been called by an automated system, not by a human being -- the exact wording of that announcement to be determined.

UPDATE (5/10/18): Google now says Duplex will identify itself to humans.
