Researchers at Samsung's Artificial Intelligence (AI) Centre in Moscow have created an algorithm that can generate video from a single image.

The development has caused concern among technology experts and commentators, who see it as a step towards making fake content far easier to create.

In a paper published on the preprint server arXiv, and in an accompanying video demo, the researchers show the algorithm animating a single still image, such as the Mona Lisa painting or a photo of Salvador Dali.

A video can be generated from a single image, but the more images are used, the better the quality.

A sample of 32 images produces video of near-lifelike accuracy.

Current AI systems usually require the algorithm to scan large datasets of a subject's face and body before it can produce a moving picture based on them.

With this new technology, however, creating fake videos becomes far easier.

The Samsung algorithm was trained on the publicly available VoxCeleb database, which contains footage of more than 7,000 celebrities drawn from YouTube videos.

Because the algorithm recognises common characteristics of human faces and bodies, rather than traits specific to one subject, it can quickly extrapolate a moving image from very little input.

This also means the technology is applicable to non-celebrities and can be used on anyone, even people who died long ago and were never captured on video.

The AI is currently only able to produce "talking head" style videos from the shoulders up.

Sceptics of so-called deepfake technology worry it will be used to spread misinformation and fake news, or to steal people's identities.