RMIT experts call for unmasking deepfake challenges

Recently, there has been significant use of the term ‘deepfake’, particularly surrounding the publication of fake nude images of music superstar Taylor Swift on X. RMIT experts discussed the adverse effects of deepfakes and proposed solutions to tackle them.

Dr Jonathan Crellin (left) and Dr Nguyen Van Thang Long (right), RMIT Vietnam

Dr Jonathan Crellin, Program Manager in Cyber Security in the School of Science, Engineering & Technology, RMIT Vietnam: “Cyber crime is all about exploiting new technologies in unanticipated ways.” 

The increasing use of AI brings benefits, but also new and unexpected problems.

A deepfake is a machine-generated creation that may combine images or videos from different sources, producing a very realistic-looking image, video, or even audio clip. It is based on a technique in AI called machine learning, which can substitute and integrate elements, such as a person’s face, into another image or video.

A notable example of deepfake usage involves Taylor Swift, whose images were combined with pornographic content to create embarrassing fake pictures.

To do this, the software needs several images so it can learn how her face appears, then combine it with pornographic material to create an obscene image that casts the star in a damaging light. These images are rumoured to have been distributed by a Telegram group and created using Microsoft Designer, a tool which incorporates AI assistance.

This can be done to anyone; all it takes is images, video, or audio recordings that can be found online. Fakes of various sorts are likely to be used to create false news stories, and there will no doubt be a deluge of them during the US presidential election.

Currently, lawmakers around the world are struggling to work out how to legislate against this type of image. Several approaches are starting to be used in the US to construct workable legislation, perhaps based on civil lawsuits, or on laws making “the dissemination of AI-generated explicit images of a person without their consent” a criminal offence. China has also introduced new rules allowing prosecution over AI-generated images, and the UK has made sharing deepfake pornography illegal under its Online Safety Act.

How can you detect or prevent this sort of thing? One way is to reduce the number of images, videos, or audio recordings of yourself online. Make sure you share them only with people you know, not the wider internet. Once something has been uploaded to the internet, it is virtually impossible to remove.

A second measure is to share a secret word with your family that can be used to validate a call, making it less likely you will fall for a fake threat. Images, and particularly video, can contain odd faults (artefacts); if you notice these, it is possible the media is faked.
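One simple forensic technique for surfacing such artefacts, not specific to deepfakes, is error level analysis (ELA): re-save a JPEG at a known quality and inspect where it re-compresses unevenly. Below is a minimal sketch in Python using the Pillow library; the file names and the quality setting are illustrative assumptions.

# A minimal sketch of error level analysis (ELA) with Pillow.
# Assumption: the input is a JPEG; "suspect_photo.jpg" is an
# illustrative file name, not one from the article.
from PIL import Image, ImageChops

def error_level_analysis(path, quality=90):
    # Re-save the image at a known JPEG quality, then diff it
    # against the original; edited regions often re-compress
    # differently and stand out in the difference image.
    original = Image.open(path).convert("RGB")
    original.save("resaved.jpg", "JPEG", quality=quality)
    resaved = Image.open("resaved.jpg")
    diff = ImageChops.difference(original, resaved)
    # Amplify the residual so faint differences become visible.
    max_channel = max(high for _, high in diff.getextrema())
    scale = 255.0 / max(max_channel, 1)
    return diff.point(lambda px: min(255, int(px * scale)))

error_level_analysis("suspect_photo.jpg").save("ela_result.png")

Edited or machine-generated regions often show a different error level from their surroundings, though ELA is only a hint, not proof.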

Another technique is to use a ‘reverse image’ search on Google or another search engine, which may identify the original source of the image.
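As a rough illustration of how such a search can be scripted, the snippet below builds a Google Lens search-by-URL request and opens it in the default browser. The endpoint and the example image URL are assumptions; the image must already be hosted at a publicly reachable address, and Google may change the endpoint.

# A rough sketch: open a Google Lens reverse image search for an
# image that is already hosted online. The endpoint and example
# URL are assumptions, not taken from the article.
import webbrowser
from urllib.parse import quote

def reverse_image_search(image_url):
    webbrowser.open(
        "https://lens.google.com/uploadbyurl?url=" + quote(image_url, safe="")
    )

reverse_image_search("https://example.com/suspect-photo.jpg")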

But the final lesson is: don’t blindly believe what you see. The ‘camera’ (or the AI) really can lie!

The increasing use of AI brings benefits, but also new and unexpected problems (image: Freepik).

Dr Nguyen Van Thang Long, Senior Lecturer from the School of Communication & Design at RMIT Vietnam: “Deepfakes pose a significant danger to celebrities and politicians, as the continuous circulation of negative information can shape public perception unfavourably towards them.”

With the proliferation of fake news through deepfakes, the media teams of celebrities and politicians need to have resources in place to monitor and swiftly respond to fake news or continuously correct misinformation. 

If deepfakes are systematically integrated with organised smear campaigns rather than being disseminated spontaneously, this task becomes even more challenging due to the flood of contradictory information. In such cases, fake or negative news is likely to spread more rapidly than positive news.

Typically, when news is shared on social media, individuals often seek validation from mainstream media outlets. With deepfake-generated content circulating on social media, confirming the accuracy of journalistic information becomes increasingly arduous and time-consuming, requiring extensive research and verification techniques. 
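One verification aid that can be automated is checking whether a circulating image matches a trusted archive copy. The sketch below uses perceptual hashing via the third-party Python imagehash library; the file paths and the distance threshold are illustrative assumptions, and a small distance only suggests, rather than proves, that the images share the same content.

# A minimal sketch using the third-party "imagehash" library
# (pip install pillow imagehash). File paths and the threshold
# are illustrative assumptions.
from PIL import Image
import imagehash

def matches_trusted_original(suspect_path, trusted_path, threshold=8):
    # pHash captures coarse image structure; subtracting two hashes
    # yields the Hamming distance between the 64-bit fingerprints.
    suspect = imagehash.phash(Image.open(suspect_path))
    trusted = imagehash.phash(Image.open(trusted_path))
    distance = suspect - trusted
    return distance <= threshold, distance

ok, dist = matches_trusted_original("circulating.jpg", "archive_copy.jpg")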

Delayed verification of news and its sources creates additional openings for the proliferation of false, fabricated, or misleading information, amplified by swift sharing and commentary on social media. This exacerbates the underlying issue and could potentially lead to social unrest, particularly if the content relates to political statements, religion, gender, business strategies, or macroeconomic matters.

In the context of deepfakes, the most effective risk management strategies involve maintaining consistent communication channels (via popular social media platforms, websites, or direct interactions) among enterprises, celebrities, politicians, and key stakeholders such as fans, journalists, communities, and employees.

Upholding these communication channels makes it possible to receive deepfake-related information in a timely manner, enabling rumours to be corrected and misinformation debunked swiftly from the outset.

However, companies, celebrities, and politicians also need to develop crisis management plans specifically tailored to deepfake scenarios. These plans should outline protocols such as designating official spokespersons, selecting communication channels, specifying criteria for verifying information through credible sources and evidence, establishing timelines for addressing rumours, and outlining strategies for reputation recovery. 

With a well-prepared plan in place, managing deepfake crises becomes more feasible, mitigating the occurrence of detrimental outcomes.

Story: June Pham

27 February 2024
