1.16.2021

Deepfakes






    • You click on a news clip and see the President of the United States at a press conference with a foreign leader. The dialogue sounds real. The news conference looks real.

     


    • You share it with a friend. They share it with a friend. Soon, everyone has seen it. Only later do you learn that the President’s head was superimposed on someone else’s body.

None of it ever actually happened.

Sound farfetched? Not if you’ve seen a certain wild video from YouTube user Ctrl Shift Face (take a look at the clip above).
  • Since last August, it’s gotten almost 9 million views.

    The definitive guide to a new kind of AI that could seriously threaten our democracy.

    What Is a Deepfake?

 

  • The Hader video is an expertly crafted deepfake, built on a technology invented in 2014 by Ian Goodfellow, then a Ph.D. student who now works at Apple. Most deepfake technology is based on generative adversarial networks (GANs).


    GANs enable algorithms to move beyond classifying data to generating or creating images. This occurs when two neural networks, a generator and a discriminator, try to fool each other into thinking an image is “real.” Using as little as one image, a trained GAN can create a video clip of that person. Samsung’s AI Center recently released research sharing the science behind this approach.
    “Crucially, the system is able to initialize the parameters of both the generator and the discriminator in a person-specific way, so that training can be based on just a few images and done quickly, despite the need to tune tens of millions of parameters,” said the researchers behind the paper. “We show that such an approach is able to learn highly realistic and personalized talking head models of new people and even portrait paintings.”
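
    To make the adversarial idea concrete, here is a minimal, illustrative GAN training loop, assuming PyTorch. The toy two-dimensional “real” data and tiny networks are stand-ins for the face images and deep architectures actual deepfake systems use; this is a sketch of the generator-versus-discriminator game, not Samsung’s method.

    # Minimal GAN sketch: a generator learns to mimic a toy "real" data
    # distribution while a discriminator learns to tell real from fake.
    import torch
    import torch.nn as nn

    generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
    discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

    g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()

    for step in range(2000):
        # "Real" samples: a shifted 2-D Gaussian standing in for real images.
        real = torch.randn(64, 2) * 0.5 + torch.tensor([2.0, -1.0])
        fake = generator(torch.randn(64, 8))

        # Discriminator step: learn to label real as 1 and fake as 0.
        d_loss = (loss_fn(discriminator(real), torch.ones(64, 1)) +
                  loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
        d_opt.zero_grad(); d_loss.backward(); d_opt.step()

        # Generator step: fool the discriminator into calling fakes real.
        g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()

    Each network’s improvement forces the other to improve, which is why GAN output gets so convincing so quickly.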

    For now, this is only applied to talking-head videos. But when so many Americans watch their news through online video content, what happens when GANs can make people dance, clap their hands, or otherwise be manipulated?

    How "disinformation campaigns" could be targeting 2020 election

     

    "We're kind of seeing the same things that we saw in 2016, but at a much larger scale," Chet's an Patterson said."We're kind of seeing the same things that we saw in 2016, but at a much larger scale," Chet's an Patterson told CTN anchor Anne-Marie Green.

    Why Are Deepfakes Dangerous?

    • If we set aside the fact that there are over 30 nations actively engaged in cyberwarfare at any given time, then the biggest concern with deepfakes might be things like the ill-conceived website DeepNude, where celebrity faces and the faces of ordinary women could be superimposed on pornographic video content.

     

    • DeepNude’s founder eventually canceled the site’s launch, fearing that “the probability that people will misuse it is too high.” Well, what else would people do with fake pornographic content?

     

    • “At the most basic level, deepfakes are lies disguised to look like truth,” says Andrea Hickerson, director of the School of Journalism and Mass Communications at the University of South Carolina. “If we take them as truth or evidence, we can easily make false conclusions with potentially disastrous consequences.”

     

    • A lot of the fear about deepfakes rightfully concerns politics, Hickerson says. “What happens if a deepfake video portrays a political leader inciting violence or panic? Might other countries be forced to act if the threat was immediate?”

    • With the 2020 elections approaching and the continued threat of cyberattacks and cyberwarfare, we have to seriously consider a few scary scenarios:
    • → Weaponized deepfakes will be used in the 2020 election cycle to further ostracize, isolate, and divide the American electorate.
    • → Weaponized deepfakes will be used to change and impact not only the voting behavior but also the consumer preferences of hundreds of millions of Americans.

     


    • → Weaponized deepfakes will be used in spear phishing and other known cybersecurity attack strategies to more effectively target victims.

    This means that deepfakes put companies, individuals, and the government at increased risk.

     

    “The problem isn’t the GAN technology, necessarily,” says Ben Lamm, CEO of the AI company Hypergiant Industries.

     

    “The problem is that bad actors currently have an outsized advantage and there are not solutions in place to address the growing threat. However, there are a number of solutions and new ideas emerging in the AI community to combat this threat. Still, the solution must be humans first.”

    A New Peril: Deepfake Financial Scams

    Do you remember your first robocall?

    Perhaps not, considering automated phone calls were already pretty convincing a few years ago, back when most of us didn’t understand what they were just yet. Luckily, those scammy calls have been on the decline: the U.S. Federal Trade Commission reports that robocall complaints fell 68 percent in April and 60 percent in May, compared to the same periods in 2019.


    • However, audio deepfake technology could easily bolster the deceitful tactic. According to Nisos, an Alexandria, Virginia-based cybersecurity company, hackers are using machine learning to clone people’s voices. In one documented case, hackers used deepfake synthetic audio in an attempt to defraud a tech company.

    • Nisos shared that audio clip with Motherboard. Take a listen.


    • This came in the form of a voicemail message, which seemed to come from the tech company's CEO. In the message, he asks an employee to call back and "finalize an urgent business deal."

    • "The recipient immediately thought it suspicious and did not contact the number, instead referring it to their legal department, and as a result the attack was not successful," Nisos notes in a July 23 white paper.

    • ⚠️ What to do if you receive a suspicious voicemail ⚠️

    • → Alert your company's general counsel or another high-ranking executive. Often these social engineering schemes prey on lower-level employees.

    • → You can return the call directly to get the potential hacker on the line. Nisos says that deepfake technology is "not sophisticated enough" to mimic a full phone call.

    • → Get your company to exercise a series of “challenge questions” about information that is not publicly known. This should help vet the identity of the person on the other end of the call (a minimal sketch of the idea follows).
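
    As an illustration of that last step, here is what a challenge-question check might look like in Python. The questions, answers, and function names are hypothetical placeholders, not anything Nisos prescribes; real challenges should be non-public facts agreed on in advance.

    # Hypothetical challenge-question check for a suspicious caller.
    import secrets

    # Placeholder challenges; real ones must be private, pre-agreed facts.
    CHALLENGES = {
        "Which conference room did we meet in last Tuesday?": "maple",
        "What nickname does the team use for the Q3 project?": "bluebird",
    }

    def verify_caller(ask) -> bool:
        """Pose one randomly chosen challenge; accept only an exact match."""
        question = secrets.choice(list(CHALLENGES))
        return ask(question).strip().lower() == CHALLENGES[question]

    # Example: a human operator reads the question aloud and types the reply.
    if verify_caller(lambda q: input(q + " ")):
        print("Identity plausible; proceed with caution.")
    else:
        print("Failed challenge; treat as a potential deepfake.")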

    What’s Being Done to Fight Deepfakes?

    • Last summer, the U.S. House of Representatives’ Intelligence Committee sent a letter to Twitter, Facebook, and Google asking how the social media sites planned to combat deepfakes in the 2020 election. The inquiry came in large part after President Trump tweeted out a deepfake video of House Speaker Nancy Pelosi.


    Earlier this year, Facebook took a positive step toward banning deepfakes. In a January 6 blog post, Monika Bickert, vice president of global policy management for Facebook, wrote that the company is making new efforts to “remove misleading manipulated media.”
  • Facebook is taking a specific, two-pronged approach to flagging and removing deepfakes. For an image to be taken down, it must meet the following criteria, according to the blog post:


    It has been edited or synthesized–beyond adjustments for clarity or quality–in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say.

    It is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.
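
    Read schematically, this is a two-pronged test: a video comes down only if it is both deceptively edited and AI-synthesized. A small sketch in Python, with field names of my own invention (Facebook exposes no such API), might look like this:

    # Schematic encoding of the two-pronged takedown test described above.
    # Field names are illustrative inventions, not a real Facebook API.
    from dataclasses import dataclass

    @dataclass
    class Video:
        edited_beyond_clarity: bool    # edited/synthesized past clarity fixes
        misleads_about_speech: bool    # viewer would think words were said
        ai_generated: bool             # AI/ML merged or superimposed content

    def should_remove(v: Video) -> bool:
        # Both criteria must hold for removal under the stated policy.
        return (v.edited_beyond_clarity and v.misleads_about_speech
                and v.ai_generated)

    # A parody re-cut that merely reorders words fails the test and stays up.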

    • Satire and parody videos are still safe, though, as are videos that have been edited only to omit or change the order of words. That means manipulated media can still slip through the cracks. Notably, TikTok and Twitter have similar policies.

    • Meanwhile, government institutions like DARPA and researchers at colleges like Carnegie Mellon, the University of Washington, Stanford University, and the Max Planck Institute for Informatics are also experimenting with deepfake technology. So is Disney. These organizations are looking at both how to use GAN technology and how to combat it.

    • By feeding algorithms both deepfake and real video, they’re hoping to help computers identify when something is a deepfake (see the sketch after this list).

    • If this sounds like an arms race, it’s because it is. We’re using technology to fight technology in a race that won’t end.
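
    A minimal sketch of that detection approach, assuming PyTorch: a small convolutional network trained as a binary classifier on frames labeled real or fake. The architecture and the random stand-in data are illustrative, not any lab’s actual system.

    # Detection side of the arms race: train a binary classifier on
    # video frames labeled real (1) or deepfake (0).
    import torch
    import torch.nn as nn

    detector = nn.Sequential(
        nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
    )
    opt = torch.optim.Adam(detector.parameters(), lr=1e-4)
    loss_fn = nn.BCEWithLogitsLoss()

    def training_step(frames: torch.Tensor, labels: torch.Tensor) -> float:
        """frames: (N, 3, H, W) images; labels: (N, 1) with 1=real, 0=fake."""
        opt.zero_grad()
        loss = loss_fn(detector(frames), labels)
        loss.backward()
        opt.step()
        return loss.item()

    # Random tensors stand in for frames pulled from real and GAN-made clips.
    print(training_step(torch.randn(8, 3, 64, 64),
                        torch.randint(0, 2, (8, 1)).float()))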


    Maybe the solution isn’t tech. Additional recent research suggests that mice might just be the key. Researchers at the University of Oregon Institute of Neuroscience think that “a mouse model, given the powerful genetic and electrophysiological tools for probing neural circuits available for them, has the potential to powerfully augment a mechanistic understanding of phonetic perception.”

 

  • This means mice could inform next-generation algorithms that could detect fake video and audio. Nature could counteract technology, but it’s still an arms race.

    While advances in detection technology could help spot deepfakes, it may be too late. Once trust in a technology is corroded, it’s nearly impossible to bring back. If deepfakes corrupt our faith in video, how long until faith is lost in the news on television, in clips on the Internet, or in live-streamed historic events?

 

  • “Deepfake videos threaten our civic discourse and can cause serious reputational and psychic harm to individuals,” says Sharon Bradford Franklin, policy director for New America’s Open Technology Institute.

 

  • Just as important is ensuring the foundation of our democracy and our press.

    Americans have already lost their faith in the news. And as deepfake technology grows, the cries of fake news are only going to get louder.

    “The best way to protect yourself from a deepfake is to never take a video at face value,” says Hickerson. “We can’t assume seeing is believing. Audiences should independently seek out related contextual information and pay special attention to who is sharing a video and why. Generally speaking, people are sloppy about what they share on social media. Even if your best friend shares it, you should think about where she got it. Who or what is the original source?”

    For now, the response has to be driven by individuals, until governments, technologists, or companies can find a real solution. If there isn’t an immediate push for an answer, though, it could be too late.

    What we should all do is demand that the platforms propagating this information be held accountable, that the government back efforts to ensure technology has enough positive use cases to outweigh the negatives, and that education teach us enough about deepfakes that we have the sense not to share them.

    Otherwise, we may find ourselves in a cyberwar that a hacker started based on nothing but an augmented video. What then?