AI Government Security Software United States

US Lawmakers Say AI Deepfakes 'Have the Potential To Disrupt Every Facet of Our Society' (theverge.com) 198

Yesterday, several lawmakers sent a letter to the Director of National Intelligence, Dan Coats, asking him to assess the threat posed to national security by deepfakes -- a new type of AI-assisted video editing that creates realistic results with minimal effort. The Verge reports: The letter says "hyper-realistic digital forgeries" showing "convincing depictions of individuals doing or saying things they never did" could be used for blackmail and misinformation. "As deep fake technology becomes more advanced and more accessible, it could pose a threat to United States public discourse and national security," say the letter's signatories, House representatives Adam Schiff (D-CA), Stephanie Murphy (D-FL), and Carlos Curbelo (R-FL). The trio want the intelligence community to produce a report that includes descriptions of when "confirmed or suspected" deepfakes have been produced by foreign individuals (there are no current examples of this), and to suggest potential countermeasures. In a press statement, Curbelo said: "Deep fakes have the potential to disrupt every facet of our society and trigger dangerous international and domestic consequences [...] As with any threat, our Intelligence Community must be prepared to combat deep fakes, be vigilant against them, and stand ready to protect our nation and the American people."
  • by Anonymous Coward on Friday September 14, 2018 @09:24PM (#57317636)

    They're obviously lying. Politicians are gonna love the proliferation of deepfakes. That way, the next time they say something stupid in an interview, they can say the clip is a deepfake. They're just starting to blame them as an upcoming problem now so they can start using it as an excuse ASAP.

  • Same as it ever was (Score:5, Informative)

    by Tablizer ( 95088 ) on Friday September 14, 2018 @09:28PM (#57317658) Journal

    Photos used to be considered "strong evidence"; then Photoshop etc. came along and made doctoring cheap and common, and people stopped trusting photos. The same will happen to audio and video once people see enough fudged examples.

    • Agreed. The problem is that with photos, video, and voice all untrustworthy, what is left? How can someone come to an independent conclusion about anything?

      Politicians will just deny saying things that were caught on video....oh wait....

      It really is a serious issue.

      • Agreed. The problem is that with photos, video, and voice all untrustworthy, what is left? How can someone come to an independent conclusion about anything?

        The same way we did before the proliferation of movie/video cameras and audio recorders. These are only very recent inventions.

        • Eye witness testimony? Great. It is a good thing we never had a problem with false accusers and poor perception.
          • by drnb ( 2434720 )

            Eye witness testimony? Great. It is a good thing we never had a problem with false accusers and poor perception.

            Right because ...
            (1) We had nothing other than eye witness testimony, never that stuff called evidence.
            (2) And video eliminated false accusers and poor perception, because actions are never subject to interpretation and the camera/mic catches everything; it's not as if what happened before the video started or after it ended, what is happening out of frame, or what lies beyond mic range fills in unknown gaps.

            Videos get faked all the time for political purposes and people are largely gullible and believe them. Few have t

          • It's far worse than that. How, exactly, do eye witnesses tend to share their experiences with people? Through videos. Which can be faked.

            So now not only do we have the problem of video fakes of politicians and politicians claiming real video is fake, we have the problem of eye witnesses being faked on video, and video of eye witnesses being called fakes.

            On the flip side, maybe print journalism will have a resurgence, as we're headed back to the reputation of an individual who talked to eye witnesses being t

        • by joe_frisch ( 1366229 ) on Friday September 14, 2018 @11:58PM (#57318064)

          We used to have sources that the majority of people believed in: professors, clergy, civic leaders. I'm not saying the trust was deserved, but at least people mostly agreed on facts (even if they were wrong).

          • by drnb ( 2434720 )

            We used to have sources that the majority of people believed in: professors, clergy, civic leaders. I'm not saying the trust was deserved, but at least people mostly agreed on facts (even if they were wrong).

            The world is full of evidence beyond video. There is no need to resort to leaders, clergy, and professors who are offering opinions rather than evidence-based conclusions.

      • In a way we're back to the problem that's plagued philosophers since time immemorial: can we trust what we perceive? Do we go back to Hume and trust nothing, or back off a little and say, well, we aren't entirely sure, but we do have this fantastic cognitive ability to sort through it and hopefully come to the right conclusions, while accepting we kinda see what we wanna see.

        No good answers here, other than this really isn't as new as we think. We've always had to contend with untruths.

        The difference is, for m

      • Agreed. The problem is that with photos, video, and voice all untrustworthy, what is left?

        For the most part, photos, videos, and voice are only untrustworthy to the common person. To an expert, identifying a doctored item is usually very straightforward.

        • Sure, but in a typical court case, the defense and the prosecution will both put forth 'experts', one assuring everyone a video is fake, the other assuring everyone that video is real, and the jury (who have absolutely no clue how to tell which expert is reliable) will have to go off various internal biases.

          Investigators, prosecutors, judges - none of these people have the knowledge to distinguish real videos from fake videos, and they also don't have the knowledge to distinguish real experts from bullsh-tt

          • Well that's fundamentally the way ANY court process works about anything. The goal of the jury is to decide what information to trust. There's nothing deepfake about this, it's the way it's been done since the 1800s.

            • The jury must decide what to trust, sure, but shouldn't be deciding whether a scientific process is valid, and allowing in junk science is a major problem. Bad science shouldn't be put in front of a jury of laypeople; look at all the false convictions bad science like bite marks led to. And it's *supposed* to be excluded, but unfortunately the Daubert standard for challenging expert testimony goes in front of judges who can't be bothered to do their job, and winds up being 'has any other judge allowed this?
              • The jury must decide what to trust, sure, but shouldn't be deciding whether a scientific process is valid, and allowing in junk science is a major problem.

                They aren't deciding whether a scientific process is valid; they are deciding on the explanation offered by experts, that is, trust in the people. I agree bad science shouldn't be put in front of the jury, but that's an idealistic view of a court system which has that amazing airtight defence that will force out the truth: putting your hand on a book and pinky swearing.

                But I think we're fundamentally in agreement anyway. This isn't about deep fakes doctoring videos. It's about a process that has flaws and has had

          • > And the jury (who have absolutely no clue how to tell which expert is reliable)

            The best experts will explain to the jury how they can look at the evidence and draw conclusions, rather than just proclaiming a conclusion.

            As one example, a blood spatter expert can demonstrate, either live or on video, that slinging a wet sponge at high speed leaves a pattern of very small droplets, while carrying it or slinging it slowly lets much larger drops form. The jury can then see for themselves whether the crime

            • by HiThere ( 15173 )

              For "best" I think you should substitute "most convincing". A lot of the traditionally accepted forensic tests have never been properly validated. Some which were recently tested failed attempts to validate. (IIRC one that failed was correlating scars on a bullet and the barrel it was fired through...and that one *ought* to have worked.) But this doesn't mean that the technique that has failed validation won't be used in some other case, or that cases where it was used as "convincing evidence" will now

        • Given the poor quality, and the lack of provenance or ability to verify a chain of custody, of most video and audio evidence, it's not certain at all that more sophisticated doctoring will be apparent. An analyst with good tools can often deduce a great deal about the recording tools and the details of the original recording. But it takes time and effort that is often not of interest to a prosecutor.

          • There's no need to verify chain of custody if doctoring becomes apparent through simple statistical techniques. I think you're greatly overestimating the time and effort required. Often a simple JPEG error level analysis can determine what was doctored in an image, to say nothing of the fact that most outright lies (meaning composite images where someone has been doctored out or doctored in) are easy to identify.

            For the common case we are well and truly in the realm of good enough. Now if you're prosecuting e

            • Running those "simple statistical techniques" is technologically feasible. Is it feasible fiscally? How much does it cost to run this analysis? Is this going to become more difficult and complex to detect as the doctoring tools improve?

              Prosecutors are _not_ happy to have to spend more time and effort validating evidence. Neither are defense attorneys. It costs time and effort.

              • Running those "simple statistical techniques" is technologically feasible. Is it feasible fiscally? How much does it cost to run this analysis?

                Well, for a basic first pass you can upload the picture to any online ELA website. For courts, though, they may want to do something more trustworthy, like downloading free code from GitHub, or maybe even using one of the several open source forensic tools which are available (a minimal sketch of the idea follows this sub-thread). The same technique used for images works for video too, though I'm not sure as many open source programs are around for it at this point.

                The point is, it's not as hard as you make out, and the level of effort / expense is proportional to the crim

                • by sjames ( 1099 )

                  Most of the cost isn't the code, it's the expert that runs the code, interprets the results for lay people, and testifies that the code actually performs a valid analysis.
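
            For the curious, here is a minimal error-level-analysis (ELA) sketch in Python using Pillow, illustrating the technique discussed in this sub-thread. It is a toy, not a court-grade tool, and the file names are made up: re-save the JPEG at a known quality, then amplify the difference; regions pasted in after the original save tend to recompress at a visibly different error level.

            # Toy ELA sketch (hypothetical file names; requires Pillow).
            from PIL import Image, ImageChops, ImageEnhance

            def ela(path, quality=90, scale=15):
                original = Image.open(path).convert("RGB")
                # Re-save at a known JPEG quality, then diff against the original.
                original.save("_resaved.jpg", "JPEG", quality=quality)
                resaved = Image.open("_resaved.jpg")
                diff = ImageChops.difference(original, resaved)
                # Amplify the residual so edited regions stand out visually.
                return ImageEnhance.Brightness(diff).enhance(scale)

            if __name__ == "__main__":
                ela("suspect.jpg").save("suspect_ela.png")

            Bright, blocky regions in the output are candidates for closer inspection; as noted above, interpreting the result for laypeople is where the real expense lies.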

      • by caseih ( 160668 )

        Reading the book The Truth Machine right now, which touches on this issue. The solution mentioned in the book is digital certification and notarization of images and video. Security camera footage, for example, is all digitally signed (by whom, I don't know). Any modification of the footage breaks the signature and is thus inadmissible in a court, or easily detectable. Trusted services that can notarize images and video make a lot of money in this universe. The mechanics of how this all would happen is no

      • Back to analog for forensic evidence? Those are harder to fake.

        If there is video footage of something, it may not stand as proof on its own. Other bits of evidence from multiple sources and in different forms will have to align with what the video shows to form a consistent view.

    • I think the difference with photos is that only limited information can be gleaned from a photo. The information which might be gleaned from a video is far, far more detailed. In particular, if they could effectively fake a voice, then the potential implications are pretty tremendous.

      Personally, I suspect we're a long way off from reasonably-convincing fakes. And even when we do have such fakes, it's going to be far more difficult to make a fake which withstands careful scrutiny.

      • by swb ( 14022 )

        What's funny is that most photos are more detailed and higher resolution than most videos. Video captures motion, but usually has lower (often much lower) pixel density, worse color quality, and poorer image stabilization.

          What's funny is that most photos are more detailed and higher resolution than most videos. Video captures motion, but usually has lower (often much lower) pixel density, worse color quality, and poorer image stabilization.

          In spite of that, you can often make out details on video that don't show up well in a photograph, like flat reflective surfaces which can be difficult to identify in stills. Ideally, you'd have both of them. Also, blur.

    • Photos used to be considered "strong evidence"; then Photoshop etc. came along and made doctoring cheap and common, and people stopped trusting photos. The same will happen to audio and video once people see enough fudged examples.

      Is that really true, though? Yes, Photoshop has made photo doctoring easy, but the digital age makes it easy enough to detect and debunk. AFAIK, it is extremely difficult to doctor a photo well enough that simple forensic analysis cannot determine that it was manipulated, and even w

      • Agreed. If nothing else, we'll train an AI to spot the fakes. I mean, apparently AI can do anything these days, right?

        • If nothing else, we'll train an AI to spot the fakes

          Yes, that is exactly how it works. Train one DNN to generate fakes, and train another to detect the fakes. Set them up as adversarial networks [wikipedia.org].

          The generating network and the detecting network continue to improve as each fights to defeat the other. Eventually the generator creates an image or video so good that it passes as "real". That is a "deep fake". (A toy sketch of this loop follows below.)

          • This still doesn't make sense. You cannot reliably determine the best 'faked images', and there's a reason for that - the only way we can know a well-faked image is fake is if there are things that *semantically* don't make sense in an image - e.g. say it's supposedly a photo of a ball, but the lighting doesn't make sense for a ball as there's a shading anomaly on the surface of the ball - the human can say, OK, that's supposed to be round but it's not, that must be fake - but if the AI 'fake detector' does
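
            To make the adversarial setup described above concrete, here is a toy generative-adversarial sketch in PyTorch. The "real" data is a 1-D Gaussian rather than video frames, and every layer size and learning rate is an illustrative assumption, but the generator-versus-detector loop is the one deepfake systems scale up.

            # Toy GAN: the generator learns to mimic samples from N(3.0, 0.5).
            import torch
            import torch.nn as nn

            G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # noise -> fake sample
            D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))  # sample -> real/fake logit
            opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
            opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
            bce = nn.BCEWithLogitsLoss()

            for step in range(2000):
                real = torch.randn(64, 1) * 0.5 + 3.0       # "authentic" samples
                fake = G(torch.randn(64, 8))                # the generator's forgeries

                # Train the detector: label real 1, fake 0.
                opt_d.zero_grad()
                d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
                d_loss.backward()
                opt_d.step()

                # Train the generator: try to get forgeries labeled 1.
                opt_g.zero_grad()
                g_loss = bce(D(fake), torch.ones(64, 1))
                g_loss.backward()
                opt_g.step()

            print(G(torch.randn(5, 8)).detach().flatten())  # outputs drift toward ~3.0

            Note the catch the parent comment describes: a detector trained this way is exactly the "AI to spot the fakes", and the generator is optimized precisely to defeat it.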

        • "Agreed. If nothing else, we'll train an AI to spot the fakes. I mean, apparently AI can do anything these days, right?"

          So you'll train the AI to spot fakes based on what, real-world training data for which you can't reliably determine they're fakes? Or you'll create your own fakes as training data? This doesn't make much sense; since it's impossible for a human to distinguish a very well-done fake from a legitimate image, you're basically just going to train your AI to at best determine badly-done fakes -

          • I think you missed the sarcasm implied in my post. But given recent AI hype, I could see how that might be taken seriously. That being said...

            So you'll train the AI to spot fakes based on what, real-world training data for which you can't reliably determine they're fakes?

            Yes. It's not like you can't find a bunch of training material which you know is fake to train your algorithm on. I can't tell most visual effects in movies are fake, but I know they are. Same principle.

            Also, note that it's *humans* that can't reliably tell if they're fakes. The whole point is that computers have much deeper powers of analysis than humans do -

      • "Yes, Photoshop has made photo doctoring easy, but the digital age makes it easy enough to detect and debunk"

        Considering how many people I've seen boldly declaring something 'obviously fake' on the basis of dumb crap like JPEG artifacts or H.264/H.265 encoding artifacts, I'm very wary of anyone who thinks it's "easy" to detect and debunk fakes. Actually, I've been doctoring photos in Photoshop for over 10 years, and the bottom line is that while badly-done fakes are easy to spot, most well-done f

    • I'm not so sure it's the same. Yes, photos can be edited with Photoshop, quite realistically if the operator is skilled. But, as I understand it, manipulation by Photoshop can be detected by various means. What we're talking about here is manipulation by artificial intelligence, not by a human working with keyboard and mouse. The implication is that as AI becomes increasingly sophisticated, AI-generated fakes will also become more sophisticated, perhaps to the point where humans can no longer tell th
  • This is a potential problem that directly affects congress critters and famous people and really nobody else, so clearly it has the potential to destroy civilization and we must prioritize dealing with it.
  • OH NOES (Score:4, Insightful)

    by Jarwulf ( 530523 ) on Friday September 14, 2018 @09:46PM (#57317704)
    We're headed back to that horrible time a few decades ago without pervasive reliable audio and visual surveillance. Quick! Let's pass more laws curtailing freedom of expression and individual liberty!
    • It's the ability to create a doctored video that makes it look like your opponent said something they did not. You could litter the net with these and destroy the political career of anyone. A person who would do such a thing would be inherently dishonest and evil. Left unchecked, this will drive out the few remaining honest people in politics. I'm sure the folks who are left in politics will support a surveillance state.
  • by steveha ( 103154 ) on Friday September 14, 2018 @10:02PM (#57317740) Homepage

    Robert A. Heinlein imagined this problem, and in his novel Stranger in a Strange Land [wikipedia.org] he described a new profession: "Fair Witness"

    A Fair Witness is a person who is trained to observe and remember without jumping to any conclusions. A Fair Witness should be able to describe in court what he/she saw, and only that. As an example, a Fair Witness would say something like "I observed a house, and the side I saw appeared white" rather than "I observed a white house." It's possible that other sides of the house, not seen by the Fair Witness, could be a different color; and it's possible the house was repainted after the Fair Witness saw it... the Fair Witness keeps such things in mind.

    Surprisingly, Wikipedia doesn't seem to think that the idea of "Fair Witness" is notable. I searched Google for a reference and found a page that claims to be quoting Wikipedia, but I can't find the passage on Wikipedia now.

    http://dlkphotography.com/fair-witness/stranger-in-a-strange-land [dlkphotography.com]

    I found the "Fair Witness" idea to be one of the most interesting things in the book, and I have long wondered if we would one day see that profession in real life.

    • That's something that was said over and over again in my private investigator and security officer training.

      You DON'T tell the client "yes, I caught your spouse cheating on you with his ex." You write notes as it's happening, if possible, saying "I observed a white sedan park near 124 Oak Street." It's possible that the car isn't the subject's car; it only looks similar. It's possible that the suspected companion doesn't currently live at that address. It's possible that the subject went there to meet with his

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      In my "Intro to Psychology" class we learned about experiments that were performed that involved staging an event in front of a room full of people, including people who were trained as observers. Then, quizzing them on the details.

      Everyone performed badly, including the trained observers. Especially prominent were racial biases, skewing their memory of who did what.

      The notion of "fair witness" is too high a bar for humans to hit. We cannot avoid jumping to conclusions, infusing our biases, etc. And our

      • ... And our memories are not nearly as accurate as they feel

        Reminds me of this witness memory demo on youtube [youtu.be]. IMHO, the problem of synthesized eyewitness videos will, to paraphrase JBS Haldane or Arthur Eddington, eventually lead to trouble that will not be just worse than we imagine, but worse than we can imagine.

      • by steveha ( 103154 )

        staging an event in front of a room full of people, including people who were trained as observers, then quizzing them on the details.

        Everyone performed badly, including the trained observers.

        I once participated in an exercise along those lines. Our group was shown a video in which a man with a revolver robbed a bank or something and fired the revolver. The group was asked how many shots the man fired.

        Some of the group thought he fired 7 times. From his revolver.[1]

        I actually got the count perfect, but

  • by Todd Knarr ( 15451 ) on Friday September 14, 2018 @10:05PM (#57317746) Homepage

    Simple solution: don't trust any one source. Even if they're supposedly impeccable. Look for corroboration from multiple independent sources (and make sure they're really independent and not all getting their information from the same source). For instance if you have a video of someone checking into a hotel with a compromising companion, look for corroboration from the hotel's records, hotel staff who should have interacted with them, and the person's credit-card records. This is what we used to do before people got lazy and started believing everything they were told without question.

    • You're solving the wrong problem. People in power want to throw poor people in jail with video evidence. They also want to be able to afford expensive lawyers to plant doubt that a video shows the objective truth.

      When politics gets involved with tech, it is all about self preservation and persecution.

    • Simple solution: don't trust any one source. Even if they're supposedly impeccable. Look for corroboration from multiple independent sources (and make sure they're really independent and not all getting their information from the same source). For instance if you have a video of someone checking into a hotel with a compromising companion, look for corroboration from the hotel's records, hotel staff who should have interacted with them, and the person's credit-card records. This is what we used to do before people got lazy and started believing everything they were told without question.

      Indeed.

      Deuteronomy 17:6 [biblegateway.com]

      On the evidence of two witnesses or of three witnesses the one who is to die shall be put to death; a person shall not be put to death on the evidence of one witness.

  • US Lawmakers (Score:5, Insightful)

    by nehumanuscrede ( 624750 ) on Friday September 14, 2018 @10:06PM (#57317748)

    are simply afraid of the competition.

    The US ( and everyone else ) has been altering both modern and historical facts to suit their own agendas since the very beginning.
    I'm curious why the sudden concern :|

    Pot. . . meet kettle.

  • Lawmakers don't have many facets, maybe as few as two: graft and porn. I can see how they might project that onto society in general.

  • by Dwedit ( 232252 ) on Friday September 14, 2018 @10:34PM (#57317828) Homepage

    What could be possible:

    * Cryptographic signatures on raw data leaving a camera, or cryptographic signatures applied by the default recording app as videos/photos are taken

    This probably won't help that much on its own, but it might help to identify unedited footage.

    * Give images/videos timestamps signed by a third party immediately as they are taken

    This can prove that a piece of information existed no earlier than that time.

    None of these can thwart recording a video screen playing back pre-edited video. (A minimal signing sketch follows below.)
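
    Here is a minimal sketch of the signing idea, assuming an Ed25519 key that lives inside the camera (keeping that key out of an attacker's hands is the hard part raised in the replies below). It uses the Python 'cryptography' package; the file name and the timestamp scheme are illustrative.

    # Sketch: sign captured footage so any later edit is detectable.
    import time
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    camera_key = Ed25519PrivateKey.generate()         # would live in camera hardware
    public_key = camera_key.public_key()              # published/registered by the vendor

    footage = open("clip.mp4", "rb").read()           # hypothetical capture
    stamped = footage + str(time.time()).encode()     # a third party could countersign this to attest the time
    signature = camera_key.sign(stamped)

    public_key.verify(signature, stamped)             # passes: footage untouched
    try:
        public_key.verify(signature, stamped + b"x")  # any edit breaks the signature
    except InvalidSignature:
        print("tampering detected")

    This proves at most that these bytes left this camera at roughly this time; it says nothing about a screen being re-filmed, and it is only as strong as the key storage.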

    • by gweihir ( 88907 )

      Trusted third parties are expensive and usually unworkable. Cameras signing things will not provide any security at all, as they are easily hacked.

      We will just have to learn to deal with this. There really is no other way.

      • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Saturday September 15, 2018 @09:16AM (#57318968) Homepage Journal

        Trusted third parties are expensive and usually unworkable. Cameras signing things will not provide any security at all, as they are easily hacked.

        So there will need to be some kind of hacking detection built into cameras. An append-only store which keeps track of changes to the internal flash, for example. In order to prove the validity of a video you'll have to produce both the video and the camera. This should be useful at least in keeping the police from tampering with body cam footage.

        • by gweihir ( 88907 )

          That has all been tried before. The only hardware in existence that offers reasonable protection and could provide the functionality you describe is a full HSM (Hardware Security Module), at something like $50,000 and up. All other approaches have been hacked successfully, and I have a nagging suspicion that HSMs have not been only because they are too expensive for hackers to get their hands on; you would probably have to burn 3-4 for a successful hack. They are also not sold to the general public.

      • Trusted third parties are expensive and usually unworkable. Cameras signing things will not provide any security at all, as they are easily hacked.

        I don't think you understand what he's talking about. We already have certificate authorities working just fine, some of them for free (e.g. Let's Encrypt). His notion isn't much different than what we already have with SSL/TLS, and hacking the camera wouldn't actually get you much of anything.

        Hacking the camera wouldn't allow you to change the file, since doing so would invalidate the cryptographic signature on the file. Hacking the camera wouldn't destroy copies of the file that had already been exported.

        • by gweihir ( 88907 )

          Ah, yes, all the naivety of those that have no clue what they are talking about and think security is easy. Have a look into the relevant literature and then come back. Here is a starting point: the public X.509 certificate system is badly broken, and you can even get EV certificates for a fake or wrong identity with minimal effort.

          • You have yet to actually suggest what a harmful attack would even accomplish, let alone the means to accomplish it. While I touched on some of the means by which we could do this stuff, the overarching point I was making was that the concern you raised doesn’t allow for a bad actor to actually do anything harmful enough to warrant your claims that it’s unworkable and gets us nothing in terms of security.

            Discussing flaws in a hypothetical device is pointless when the flaws don’t even expos

    • by Z80a ( 971949 )

      Compress your videos with MPEG or similar.
      If someone overlays something on your video, even if it's a "deepfake", you can still check for differences in the compression artifacts, as is done with pictures.

  • by grep -v '.*' * ( 780312 ) on Friday September 14, 2018 @11:00PM (#57317892)
    Way back when, when we first had cameras, that was the saying, because it was really hard to make a convincing "wrong" photo. (Early examples: One [qz.com], Two [izitru.com], Three [hoaxes.org], Four [mentalfloss.com].)

    And then came along tape, and audio editing, and auto-tune and computerized voice editing. And Hatsune Miku [wikipedia.org], who not only doesn't exist, her VOICE doesn't even exist: She was created by taking vocal samples [which] all contain a single Japanese phonic that, when strung together, creates full lyrics and phrases. Video [youtu.be]. The people with glowing green sticks, though, are real.

    And now movies have placed people's heads on others' bodies, never mind body doubles. The trick is that it's becoming better, cheaper, and more widespread to create. (And I *SWEAR* that people are more gullible nowadays than they used to be. Or maybe it's because things just move so much faster.) So we're back to a century or so ago: just because there's an audio/video of it doesn't mean it HAS to be true.

    No worries though, since you're innocent until proven guilty, which has worked so well with MeToo and everything else in the last few years. We'll all just have to have a 2-way shoulder-mounted camera that does a real-time blockchain video feed to verify where we are all of the time and that it really is US in the video.

    Now if only blockchain would run at Visa-level transaction speeds instead of at the speed of a slow 8mm movie camera. Oh, and that's 7 Bitcoin transactions system-wide, not per camera: the limit for Litecoin is 56 TPS and the limit for Bitcoin is 7, while Visa does 24,000 TPS (Link [howmuch.net]). And heaven forbid your mounted camera loses WiFi/cell connection or runs out of power.
    • Way back when, when we first had cameras, that was the saying, because it was really hard to make a convincing "wrong" photo.

      This is not true. In Germany, the cinema is very popular. I've seen all of Ned Nederlander's films... including when he was known as "Little Neddy Knickers." It was this man who inspired me to learn the art of the quick draw. I looked up to this man. I studied his every move. It was my dream to be as fast as Ned Nederlander. I practiced every day for hours and hours. He was a god to me. Then I found out about movie tricks. Trick photography. I was crushed.

  • by gweihir ( 88907 ) on Friday September 14, 2018 @11:43PM (#57318004)

    And what gives them power? Laws criminalizing things. There is no way in this universe they can stop this. But they can profit from it and they are certainly trying to.

  • by Kwirl ( 877607 ) <kwirlkarphys@gmail.com> on Saturday September 15, 2018 @12:21AM (#57318102)
    It's usually because something has come out that they haven't figured out how to manipulate or abuse. When this technology has fallen out of the news, you will know that at least some political bodies are abusing it for their benefit.
  • We will just have to start raising a generation of politicians who have reputations for telling the truth. Politicians who ordinary people will believe, who have integrity and the trust of the population. Leaders who don't commit criminal acts, shag anything that moves or accept illegal payments from all and sundry.

    Ones who don't try to weasel out of situations with denials, false claims (I was not told about .... ), redefinition of meaning or refusing to answer questions.

    And if they can be seen not to

  • I work with a lot of people from different backgrounds. They watch things like movies all the time, and they see science fiction as purely Hollywood rather than as simply foreseeing the ambitions of many creative people around them.

    I have a habit of writing documents describing technology that I would implement myself given the time, budget and drive. Examples of this would be RAID systems capable of online live healing and automated scaling with almost zero risk for loss due to transactional
  • Just make it illegal! That always fixes every problem!

  • Anything electronic can be faked, and so we need to start encouraging people to UNDERSTAND that "Seeing Isn't Believing". Just because you saw it on a screen doesn't mean it's true.

    It's all phony.

  • No they don't. (Score:5, Insightful)

    by thegarbz ( 1787294 ) on Saturday September 15, 2018 @03:06AM (#57318404)

    People will currently believe absolutely anything provided you get the narrative right and appeal to their emotions. There's no need to even doctor videos anymore. You just have to tell them.

  • by Artem S. Tashkinov ( 764309 ) on Saturday September 15, 2018 @05:37AM (#57318628) Homepage
    The cat is out of the bag, now learn to live this way.
    • by miekal ( 5487708 )
      If I were in a position to worry about deep fakes I would pre-emptively release a deluge of preposterous fakes to amuse and inform (and promote). Like you suggest - adapt
  • When the next video comes out, showing them saying Nazi/racist/whatever stuff, they can just claim:
    Deepfake news!

    • They can already get away with blatant lying and fool 51% of voters. We are seeing just how extreme one side can go with it already.

      Profound effects on everybody's lives are already happening. It's happening so fast that the majority are aware of the shift; but as it becomes the new normal it'll be hardly any different over time from the gradual slide we've had since WW2, when weaponized psychology became widespread.

      It'll probably take another horrific shameful war before "Lying Press" becomes a taboo phrase in this c

      • Comment removed based on user account deletion
        • Trump was NOT a middle finger. Aside from the immature, childish behavior of an FU vote (which is fitting, given Trump's a man-child), a middle finger is a huge understatement. Voting for Trump in protest is like a teenage girl running away out of spite with the first man who hits on her and ending up gang raped and thrown in some ditch. Yeah, that sure showed them...

          Or the cattle/sheep escaping from their farmer for using and looking down on them; escaping to the butcher who reminds them of a pig so he must

  • Maybe Giuliani is right when he says "Truth isn't truth".

    When anything and everything, such as photos, videos, audio, facts, etc., become as muddied and confused as a psychotic break, then does "Truth" still matter? When "Truth" becomes unknowable, then it becomes useless. You can only react to what you perceive. Like for a person having a psychotic break, that often means reacting in a chaotic or harmful way.

    I can't imagine what this will do to society. Perhaps society will react to things in way
  • The most memorable use in film for me is The Running Man, where an autocracy implementing barbaric (and corrupt) punishments attempts to create footage showing our hero meeting his end. For a slice of '80s schlock it's actually a pretty decent guess at the future!
  • Comment removed based on user account deletion
