Deepfakes are back in the news after it was reported that a group of researchers working at the Samsung AI Centre in Moscow has introduced an initiative called Mega Portraits (megapixel portraits).
In a research paper, they write that the portraits are based on a concept called “neural head avatars, which offer a new fascinating way of creating virtual head models. They bypass the complexity of realistic physics-based modeling of human avatars by learning the shape and appearance directly from the videos of talking people.”
The question on the minds of many is whether a technology such as this could eventually lead to a rampant increase in the number of deepfake cases.
Moscow is one of seven Samsung AI Centres globally, with the others located in Toronto, Montreal, New York, Cambridge (UK), Seoul, and Mountain View, Calif. According to the company, “leading projects” there at the moment include the generation of photorealistic human avatars, 3D modeling, and image manipulation techniques.
In late June, the FBI Internet Crime Complaint Center (IC3) warned of an increase in complaints around the use of deepfakes and stolen Personally Identifiable Information (PII) to apply for a variety of remote work and work-at-home positions.
Deepfakes, the bureau said in an advisory, contain a “video, an image or recording convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said.”
Of note are the types of jobs impersonators are applying for. They include “information technology, computer programming, database, and software-related job functions. Notably, some reported positions include access to customer PII, financial data, corporate IT databases, and/or proprietary information.
“Complaints report the use of voice spoofing, or potentially voice deepfakes, during online interviews of the potential applicants. In these interviews, the actions and lip movement of the person seen interviewed on-camera do not completely coordinate with the audio of the person speaking. At times, actions such as coughing, sneezing, or other auditory actions are not aligned with what is presented visually.”
Meanwhile, in a report released in February by the Canadian Global Affairs Institute, author Abby MacDonald warned that a “realistic deepfake carefully created and released at a certain time could sway the result of a democratic election, incite violence against certain groups or exacerbate political and social divides.
“The potentially dangerous effects of deepfakes have begun to emerge, with the technology having been used to commit fraud, fool people into connecting online and discredit individuals.
“Online disinformation has exacerbated security incidents in democratic countries, including violent protests, which have led to health and safety risks, property damage, injuries, and even death. The prospect of deepfakes as a cause of instability is not difficult to imagine, nor far off.”
The report stated that the “technology relies on two important breakthroughs in machine learning and artificial intelligence. The first is a neural network. The more information these algorithms are exposed to, the more accurately they can repeat it back.”
“The second is generative adversarial networks (GANs), which essentially combine two neural networks together and make them compete against one another to produce a better final product.”
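The competition the report describes can be illustrated with a toy sketch. Everything below is an assumption made for illustration, not code from the paper or the report: the “real” data are samples from a one-dimensional Gaussian, the generator is a simple linear map of random noise, and the discriminator is a logistic classifier. The gradients are written out by hand so the two-player back-and-forth is visible: the discriminator learns to tell real samples from generated ones, and the generator learns to fool it.

```python
import numpy as np

# Hypothetical 1-D GAN sketch (illustration only, not a production model).
# Real "data": samples from N(4, 0.5). Generator: g(z) = a*z + b.
# Discriminator: d(x) = sigmoid(w*x + c), trained to output 1 on real
# samples and 0 on generated ones.

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator parameters
lr, batch = 0.05, 64

def gen_mean(a, b, n=1000):
    """Average output of the generator, for tracking progress."""
    z = rng.standard_normal(n)
    return float(np.mean(a * z + b))

initial_mean = gen_mean(a, b)

for step in range(2000):
    # --- discriminator step: maximise log d(real) + log(1 - d(fake)) ---
    x_real = rng.normal(4.0, 0.5, batch)
    z = rng.standard_normal(batch)
    x_fake = a * z + b

    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)

    # gradients of the loss w.r.t. the discriminator pre-activation
    du_real = -(1.0 - d_real)   # from -log d(real)
    du_fake = d_fake            # from -log(1 - d(fake))
    w -= lr * np.mean(du_real * x_real + du_fake * x_fake)
    c -= lr * np.mean(du_real + du_fake)

    # --- generator step: minimise -log d(fake), i.e. try to fool d ---
    z = rng.standard_normal(batch)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    du = -(1.0 - d_fake)        # loss gradient at the pre-activation
    dx = du * w                 # chain rule back through the discriminator
    a -= lr * np.mean(dx * z)
    b -= lr * np.mean(dx)

final_mean = gen_mean(a, b)
print(f"generator mean: {initial_mean:.2f} -> {final_mean:.2f} (real mean 4.0)")
```

Each round, the discriminator sharpens its test for “realness” and the generator shifts its output toward whatever currently passes that test; over many rounds the generated distribution drifts toward the real one, which is the sense in which the two networks “produce a better final product” together.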