The textbook definition of public relations is: The management of communications between an organization and its publics.
Public relations professionals manage an organization’s reputation, community engagement and outreach, public persona, social media strategy, public-facing content, and media relations. We create messaging strategies and long- and short-form content such as white papers, thought-leadership articles, and blogs. Only rarely are we called upon to manage crisis communication, which can involve anything from a product recall to a workplace injury or accident to an accusation of criminal behavior. Thankfully, in my 40-year career I have engaged in crisis communication on behalf of a client just four times. Contrary to what many believe, the vast majority of public and private organizations are managed legally and ethically.
An increasing threat to authentic public relations is viral misinformation, which the media and the political sphere have dubbed “fake news.” Cordell Hull, Franklin Delano Roosevelt’s Secretary of State, once said, “A lie will gallop halfway round the world before the truth has time to pull its breeches on.” Social media has vastly accelerated that timeline.
Fortunately, fake stories that circulate as text on social media are easily debunked if the reader takes the time to do a little basic research. One can often turn to existing video footage to determine what a person actually said versus what others claim they said.
But what if an emerging technology made it possible to create video, voice included, that is completely undetectable as fake? What are the ramifications of a social media post suddenly appearing that shows your company’s CEO delivering a racist rant? That technology exists today.
On September 5, 2019, Will Knight reported in “The Algorithm,” the MIT Technology Review newsletter, on Facebook’s fear that AI-generated “deepfake” videos could be the next big source of viral misinformation, spreading among its users with potentially catastrophic consequences for private organizations, individuals, and elections.
Here’s what “The Algorithm” article discussed:
“Facebook’s solution? Making lots of deepfakes of its own, to help researchers build and refine detection tools. Facebook has directed its team of AI researchers to produce a number of highly realistic fake videos featuring actors doing and saying routine things. These clips will serve as a data set for testing and benchmarking deepfake detection tools. The Facebook deepfakes will be released at a major AI conference at the end of the year.
The rise of deepfakes has been driven by recent advances in machine learning. It has long been possible for movie studios to manipulate images and video with software and computers, and algorithms capable of capturing and re-creating a person’s likeness have already been used to make point-and-click tools for pasting a person’s face onto someone else. Methods for spotting forged media exist, but they often involve painstaking expert analysis. Tools for catching deepfakes automatically are only just emerging.
Facebook’s CTO, Mike Schroepfer, says deepfakes are advancing rapidly, so devising much better ways to flag or block potential fakes is vital.
‘We have not seen this as a huge problem on our platforms yet, but my assumption is if you increase access—make it cheaper, easier, faster to build these things—it clearly increases the risk that people will use this in some malicious fashion,’ Schroepfer, who is spearheading the initiative, said last night. ‘I don’t want to be in a situation where this is a massive problem and we haven’t been investing massive amounts in R&D.’
Comparing the effort to the fight against spam email, Schroepfer said Facebook may not be able to catch the most sophisticated fakes. ‘We’ll catch the obvious ones,’ he said. But he said Facebook isn’t employing any methods yet because the forgeries are improving so quickly.”
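The article mentions “tools for catching deepfakes automatically,” and it may help non-technical readers to see what the simplest version of such a tool looks like. Below is a rough, purely illustrative sketch in Python, not Facebook’s actual system: a small classifier that learns to label individual video frames as real or fake. The folder layout (“frames/real” and “frames/fake”), the choice of a pretrained ResNet-18 model, and the training settings are assumptions made for illustration only.

```python
# Illustrative sketch of a frame-level deepfake classifier (not Facebook's method).
# Assumes frames have already been extracted from labeled videos into two
# hypothetical folders: "frames/real" and "frames/fake".
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Standard preprocessing for ImageNet-pretrained models.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# ImageFolder assigns labels alphabetically: 0 = "fake", 1 = "real".
dataset = datasets.ImageFolder("frames", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Start from a general-purpose image model and retrain its final layer
# to produce a two-way real-vs-fake prediction.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for frames, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(frames), labels)
        loss.backward()
        optimizer.step()
```

Real detection systems are far more sophisticated, typically cropping faces, examining artifacts across frames, and combining many models, but the underlying pattern of training a classifier on labeled examples is the same. That is exactly why a large, realistic benchmark dataset of the kind Facebook describes matters to researchers.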
As if we didn’t have enough on our plate helping organizations manage their authentic communications, a new world is emerging, and public relations professionals need to stay abreast of the technologies that could be manipulated with malicious intent.
If you need help with traditional PR, or you need assistance with crisis communication, we can lend a hand.