Let's Talk!
Synthetic media reflects the eroding legitimacy of digital evidence in criminal cases: it is a helpful tool for creative endeavors, but also a potential hammer that can nail innocent individuals and ruin job opportunities and social lives. Seeing is no longer believing, a reality made only more apparent by the advent and implementation of AI. Artificial intelligence has come a long way since its barebones experimental phases in the 1950s and 60s. Now, with easy-to-use AI chatbots like ChatGPT, AI image generators that can win competitions against human artists, and APIs that let companies and individuals produce new, impressive, pseudo-authentic content, it was only a matter of time before this revolutionary technology was used for malicious purposes.
Deepfakes have been used in a variety of ways, primarily in entertainment: restoring and preserving old footage, reworking scenes for dubs, and even creating cost-effective, realistic special effects. Yet with the same magic built from Artificial Neural Networks (ANNs), computer vision techniques, and other subsets of AI, well-known Internet figures are being targeted with defamatory content such as non-consensual pornography. And everyday people are not safe from being targeted, either.
Interestingly, software developers can turn the same technology used to create such content against it, building tools that discern original material from AI-generated fakes. That would help truth regain its footing, and crimes, whether as socially and professionally damaging as revenge porn or as severe and traumatic as blackmail, extortion, and fraud, would become far easier to fight.
The landmark case of Noelle Martin and the effects of doctored pornography led to changes in Australian law regarding what is now classified as a crime. During the years it took to establish this monumental change, Martin found, and later expressed in a TED Talk, that she is one of thousands of ordinary women being taken advantage of daily. This case, while dealing mainly with images such as revenge porn, foreshadowed how difficult it would become to determine what's real and what's fake.
Deepfake technology depends on artificial intelligence (AI), more specifically machine learning (ML), to produce synthetic media that is as deceptive as it is realistic. The process begins with collecting and preprocessing images and videos of the targeted person. Next, a deep learning model, usually a Generative Adversarial Network (GAN), is trained on that data until it can generate almost completely realistic content. The model only improves with continued training on the features, patterns, and characteristics of the target. Given AI's staggering recent advancements, the results are becoming far more difficult to discern, and they have become a thorn in the side of law enforcement, as forensic analysts' current protocols and technologies are being left in the dust.
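The adversarial training loop described above can be sketched in miniature. Real GAN pipelines train deep convolutional networks on face data; this toy stand-in uses a one-dimensional Gaussian as the "real" distribution, an affine generator, and a logistic discriminator, all illustrative assumptions, to show how the two models push against each other:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Stand-in for real training data: samples from N(4, 1.25).
def sample_real(n):
    return rng.normal(4.0, 1.25, size=n)

# Generator: affine map from noise to "data" space (parameters to learn).
g_w, g_b = 0.1, 0.0
# Discriminator: logistic regression on a scalar sample.
d_w, d_b = 0.1, 0.0

lr, batch = 0.05, 64
for step in range(2000):
    # --- Discriminator step: minimize -[log D(real) + log(1 - D(fake))] ---
    z = rng.normal(size=batch)
    real, fake = sample_real(batch), g_w * z + g_b
    p_real = sigmoid(d_w * real + d_b)
    p_fake = sigmoid(d_w * fake + d_b)
    grad_w = np.mean((p_real - 1) * real) + np.mean(p_fake * fake)
    grad_b = np.mean(p_real - 1) + np.mean(p_fake)
    d_w -= lr * grad_w
    d_b -= lr * grad_b

    # --- Generator step: minimize -log D(fake) (non-saturating loss) ---
    z = rng.normal(size=batch)
    fake = g_w * z + g_b
    p_fake = sigmoid(d_w * fake + d_b)
    dfake = (p_fake - 1) * d_w      # dLoss/dfake, chained through the generator
    g_w -= lr * np.mean(dfake * z)
    g_b -= lr * np.mean(dfake)

# The generator's output mean is g_b and its spread is |g_w|; after training
# it should drift toward the "real" distribution's mean of 4.
print(f"generated mean ~ {g_b:.2f}, spread ~ {abs(g_w):.2f}")
```

The same dynamic, a generator fooling a discriminator that is simultaneously learning to catch it, is what drives photorealistic face synthesis at scale.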
Even so, artificial intelligence can be wielded as a double-edged sword against this defamatory and incriminating content. Experienced software developers can use machine learning models such as Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs) to analyze the visual and auditory aspects of a clip and detect abnormalities that indicate deepfake manipulation. Additionally, temporal analysis and variations in blinking patterns, lip movements, and facial expressions serve as indicators of potential deepfake content. Robust, in-depth solutions like these are currently in development by law enforcement officials and talented engineers, hopefully leading to a future where truth prevails.
Deepfakes can be used for incredible things. However, crimes such as fraud, extortion, blackmail, defamation, digital harassment, and revenge porn could continue to grow at an alarming rate. The last three annual reports from The National Registry of Exonerations show an upward trend of people being declared innocent after being "proven" guilty; with accurate, robust detection tools, law enforcement officials will be better equipped to combat false allegations and doctored evidence.
Wrongful convictions could also decrease significantly as a result of optimized evidence processing. As for the social and career fallout, the consequences of these cases, while still difficult, could be minimized and leave little to no lasting damage.
As cutting-edge as AI is, detecting this kind of content is a bleeding-edge endeavor. It rests on the shoulders of talented software engineers who specialize in AI and can build the tools needed to properly tell real from fake. In the hands of the proper sectors, those tools would keep livelihoods intact, preserve journalistic integrity, and help law enforcement make significant strides in protecting the truth.
Chetu does not affect the opinion of this article. Any mention of a specific software, company or individual does not constitute an endorsement from either party unless otherwise specified. This blog should not be construed as legal advice.
Founded in 2000, Chetu is a global provider of offshore software development services, solutions and support services. Chetu's specialized technology and industry experts serve startups, SMBs, and Fortune 500 companies with an unparalleled software delivery model suited to the needs of the client. Chetu's one-stop-shop model spans the entire software technology spectrum. Headquartered in Plantation, Florida, Chetu has fourteen locations throughout the U.S. and abroad.