One of the more striking techniques to come out of neural networks is style transfer. A neural network learns the style and look of an input photo and then applies that style to another photo, generating a new image in the style of the input.
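To make the idea concrete, here is a minimal sketch of the style representation commonly used in neural style transfer (the Gram-matrix formulation of Gatys et al.). The feature maps below are random stand-ins; in a real system they would come from a convolutional network. Function names and shapes are illustrative, not from the paper discussed here.

```python
import numpy as np

def gram_matrix(features):
    """features: array of shape (channels, height, width).
    Returns the (channels x channels) Gram matrix of channel correlations,
    which captures texture/style while discarding spatial layout."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)   # flatten spatial dimensions
    return f @ f.T / (h * w)         # channel-by-channel correlations

def style_loss(gen_features, style_features):
    """Mean squared difference between Gram matrices: small when the
    generated image's feature correlations match the style image's."""
    g1 = gram_matrix(gen_features)
    g2 = gram_matrix(style_features)
    return np.mean((g1 - g2) ** 2)

# Toy example with random "feature maps"
rng = np.random.default_rng(0)
gen = rng.normal(size=(8, 16, 16))
sty = rng.normal(size=(8, 16, 16))
print(style_loss(gen, sty))   # positive for mismatched styles
print(style_loss(sty, sty))   # exactly zero when styles match
```

Minimizing a loss like this over the generated image is what transfers style; because spatial layout is discarded by the Gram matrix, fine detail is easily lost, which is the weakness the article goes on to discuss.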
The technique has found its way into photo filter apps such as Prisma. The problem with these apps is that the results do not look like actual photographs; they feel painted. The process loses the fine image details that give a picture its sense of reality.
The style of the picture changes, but not in a genuinely convincing way. A team of researchers from Cornell University and Adobe has improved the style transfer technique so that the results are more photorealistic. The work is described in a research paper titled “Deep Photo Style Transfer”.
Essentially, the researchers added a new layer to the neural network on top of the original ones so that the details of the original picture are preserved. The new layer works to identify and preserve “local affine patches” in an image. In practice, this means that details such as edges within an image are kept in place rather than moved and distorted, as they are by the original algorithm.
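The "locally affine" idea can be illustrated directly: within a small patch, each output color is an affine function of the input color (out = A·in + b). The paper enforces this as a penalty during optimization; the sketch below only demonstrates the transform itself and why it preserves edges. All names and values here are hypothetical, chosen for illustration.

```python
import numpy as np

def affine_color_transform(patch, A, b):
    """patch: (h, w, 3) RGB values; A: (3, 3) matrix; b: (3,) offset.
    Applies the same affine color map to every pixel in the patch."""
    h, w, _ = patch.shape
    flat = patch.reshape(-1, 3)
    out = flat @ A.T + b
    return out.reshape(h, w, 3)

patch = np.random.default_rng(1).random((4, 4, 3))

# The identity transform leaves the patch unchanged.
same = affine_color_transform(patch, np.eye(3), np.zeros(3))
assert np.allclose(same, patch)

# A non-trivial affine map recolors the patch but keeps local structure:
# the offset b cancels in pixel differences, so differences (i.e. edges)
# transform linearly instead of being smeared or distorted.
A = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.8, 0.2],
              [0.1, 0.0, 0.9]])
b = np.array([0.05, 0.0, -0.05])
warm = affine_color_transform(patch, A, b)
diff_in = patch[0, 0] - patch[0, 1]
diff_out = warm[0, 0] - warm[0, 1]
assert np.allclose(diff_out, A @ diff_in)
```

Because an affine map sends straight color relationships to straight color relationships, constraining each patch this way lets the overall color and tone change while the edges and textures inside the patch survive intact.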
The new algorithm works best with images of buildings and landscapes, where the edges and boundaries of objects are easy to identify. It still has trouble transferring style onto human faces. Another limitation is that the style image must be similar to the photo it is applied to.
If a scene with a building needs a style applied to it, then the style image must also show buildings. If the two images are drastically different, the network has trouble identifying matching features. Even with these limitations, it is a clear improvement over the original style transfer algorithm.