WASHINGTON: Our photographs may soon resemble those from the world of Harry Potter, as scientists have developed a tool that can make a person in a 2D image run, walk or jump out of the frame.
Their algorithm, called Photo Wake-Up, also allows users to view the animation in three dimensions using augmented reality tools.
“This is a very hard fundamental problem in computer vision,” said Ira Kemelmacher-Shlizerman, an associate professor at the University of Washington in the US.
“The big challenge here is that the input is only from a single camera position, so part of the person is invisible. Our work combines technical advancement on an open problem in the field with artistic creative visualisation,” Kemelmacher-Shlizerman said.
Previously, researchers thought it would be impossible to animate a person running out of a single photo.
“There is some previous work that tries to create a 3D character using multiple viewpoints,” said Brian Curless, a professor at the University of Washington.
“But you still couldn’t bring someone to life and have them run out of a scene, and you couldn’t bring AR into it. It was really surprising that we could get some compelling results using just one photo,” said Curless.
The applications of Photo Wake-Up are numerous, the team said.
The researchers envision this could lead to a new way for gamers to create avatars that actually look like them, a method for visitors to interact with paintings in an art museum — say, sitting down to have tea with the Mona Lisa — or something that lets children bring their drawings to life.
Examples in the research paper include animating the Golden State Warriors’ Stephen Curry to run off the court, Paul McCartney to leap off the cover of the “Help!” album and Matisse’s “Icarus” to leave his frame.
To make the magic a reality, Photo Wake-Up starts by identifying a person in an image and making a mask of the body’s outline. From there, it matches a 3D template to the subject’s body position.
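The paper’s exact pipeline is not reproduced here, but the masking step can be pictured roughly as follows, with OpenCV’s GrabCut standing in for the system’s own person segmentation (the file name and bounding box are placeholders):

```python
import cv2
import numpy as np

def person_mask(image_bgr, person_rect):
    """Return a binary mask (1 = person, 0 = background) for the subject
    inside person_rect, given as (x, y, width, height)."""
    mask = np.zeros(image_bgr.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)   # internal GrabCut state
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(image_bgr, mask, person_rect, bgd_model, fgd_model,
                5, cv2.GC_INIT_WITH_RECT)
    # Keep pixels GrabCut labels as definite or probable foreground.
    return np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD),
                    1, 0).astype(np.uint8)

image = cv2.imread("photo.jpg")                       # placeholder input photo
body_mask = person_mask(image, (50, 30, 200, 400))    # placeholder bounding box
```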
Then the algorithm does something surprising: In order to warp the template so that it actually looks like the person in the photo, it projects the 3D person back into 2D.
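As a rough illustration of that projection step, assuming a simple pinhole camera with placeholder intrinsics rather than the paper’s actual camera model, the template’s 3D vertices can be dropped into pixel coordinates like this:

```python
import numpy as np

def project_to_2d(vertices_3d, focal=500.0, cx=160.0, cy=240.0):
    """Pinhole projection of template vertices (N x 3, in camera coordinates)
    into pixel coordinates (N x 2). The intrinsics here are placeholders."""
    x, y, z = vertices_3d[:, 0], vertices_3d[:, 1], vertices_3d[:, 2]
    u = focal * x / z + cx
    v = focal * y / z + cy
    return np.stack([u, v], axis=1)

# With the template flattened to 2D, its projected outline can be warped to
# match the person's mask, which is far easier than deforming the mesh in 3D.
```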
Photo Wake-Up stores 3D information for each pixel: its distance from the camera or artist, and how the person’s joints are connected.
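One way to picture that stored data, purely as an illustration and not the paper’s actual data structures, is a dense depth map plus a table of joint-to-parent links:

```python
import numpy as np

height, width = 480, 320                       # placeholder image size
depth_map = np.full((height, width), np.nan)   # per-pixel distance; NaN = no person

# A kinematic tree: each joint points to its parent, with the pelvis as root.
PARENT = {
    "spine": "pelvis", "neck": "spine", "head": "neck",
    "l_shoulder": "spine", "l_elbow": "l_shoulder", "l_wrist": "l_elbow",
    "r_shoulder": "spine", "r_elbow": "r_shoulder", "r_wrist": "r_elbow",
    "l_hip": "pelvis", "l_knee": "l_hip", "l_ankle": "l_knee",
    "r_hip": "pelvis", "r_knee": "r_hip", "r_ankle": "r_knee",
}
```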
Once the template has been warped to match the person’s shape, the algorithm pastes on the texture — the colours from the image.
It also generates the back of the person by using information from the image and the 3D template. Then the tool stitches the two sides together to make a 3D person who will be able to turn around.
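A crude sketch of the texturing idea, with mirrored front colours standing in for however the paper actually synthesises the unseen back, might look like this:

```python
import numpy as np

def sample_front_texture(image_rgb, vertices_2d):
    """Copy the photo's colour at each projected vertex onto the mesh's front.
    vertices_2d: (N, 2) pixel coordinates from the warped template."""
    h, w = image_rgb.shape[:2]
    u = np.clip(np.round(vertices_2d[:, 0]).astype(int), 0, w - 1)
    v = np.clip(np.round(vertices_2d[:, 1]).astype(int), 0, h - 1)
    return image_rgb[v, u]          # one RGB colour per front-facing vertex

# The unseen back can then be approximated, for example by reusing the front
# colours on the rear vertices, before the two halves are stitched into a
# closed figure that can turn around.
```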
Once the 3D character is ready to run, the algorithm needs to set up the background so that the character does not leave a blank space behind.
Photo Wake-Up fills in the hole behind the person by borrowing information from other parts of the image.
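OpenCV’s off-the-shelf inpainting does something similar and can serve as a stand-in for this hole-filling step (file names are placeholders):

```python
import cv2

image = cv2.imread("photo.jpg")                             # placeholder photo
hole = cv2.imread("person_mask.png", cv2.IMREAD_GRAYSCALE)  # 255 where the person was
# Borrow colours from surrounding pixels to fill the person-shaped hole.
background = cv2.inpaint(image, hole, 3, cv2.INPAINT_TELEA)
cv2.imwrite("background.jpg", background)
```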
Right now, Photo Wake-Up works best with images of people facing forward, and it can animate both artistic creations and photographs of real people.
The algorithm can also handle some photos where people’s arms are blocking part of their bodies, but it is not yet capable of animating people who have their legs crossed or who are blocking large parts of themselves. (agencies)