Researchers at the University of California, Berkeley and Google Research have published a new paper detailing an AI that can remove unwanted shadows from portrait images. The algorithm targets two different types of shadows: foreign shadows cast by external objects, which it removes, and shadows resulting naturally from facial features, which it softens in order to maintain a natural appearance.
Whereas professional portraits are often taken in a studio with controlled lighting, the average snapshot of a person is taken ‘in the wild,’ where lighting conditions may be harsh, casting dark shadows that obscure parts of the subject’s face while leaving other parts blown out by strong highlights.
The newly developed AI is designed to address this problem by targeting those unwanted shadows and highlights, removing or softening them to leave a clearer view of the subject. The researchers say their tool works in a ‘realistic and controllable way,’ and it could prove useful for more than just images captured in casual settings.
Professionals could, for example, use a tool like this to salvage images taken in environments where controlling the lighting was impossible, such as wedding photos shot outdoors under a bright noon sun. In their paper, the researchers explain:
In this work, we attempt to provide some of the control over lighting that professional photographers have in studio environments to casual photographers in unconstrained environments … Given just a single image of a human subject taken in an unknown and unconstrained environment, our complete system is able to remove unwanted foreign shadows, soften harsh facial shadows, and balance the image’s lighting ratio to produce a flattering and realistic portrait image.
This project targets three specific elements in these photographs: foreign shadows cast by external objects, facial shadows caused by the subject’s own features, and the lighting ratio between the brightest and darkest parts of the face. Two machine learning models handle these elements: one removes foreign shadows, while the other softens facial shadows and adjusts the lighting ratio.
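The paper does not present its system in this form, but the two-stage design can be pictured roughly as follows. This is a minimal sketch only: the module name `portrait_shadows`, the model classes `ForeignShadowRemovalNet` and `FacialShadowSofteningNet`, and the control parameters are all assumptions for illustration, not the researchers’ actual API.

```python
import numpy as np

# Hypothetical stand-ins for the two trained models described in the paper:
# stage one removes foreign shadows, stage two softens facial shadows and
# rebalances the lighting ratio. Names and signatures are assumptions.
from portrait_shadows import ForeignShadowRemovalNet, FacialShadowSofteningNet


def enhance_portrait(image: np.ndarray,
                     softening: float = 0.5,
                     target_lighting_ratio: float = 2.0) -> np.ndarray:
    """Run a two-stage cleanup on a single RGB portrait (H x W x 3, floats in [0, 1])."""
    # Stage 1: remove shadows cast by external objects (foreign shadows).
    foreign_net = ForeignShadowRemovalNet.load_pretrained()
    deshadowed = foreign_net.predict(image)

    # Stage 2: soften shadows produced by the subject's own facial features
    # and adjust the ratio between the brightest and darkest sides of the face.
    facial_net = FacialShadowSofteningNet.load_pretrained()
    result = facial_net.predict(deshadowed,
                                softening=softening,
                                lighting_ratio=target_lighting_ratio)
    return np.clip(result, 0.0, 1.0)
```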
The team evaluated their two machine learning models using both ‘in the wild’ and synthetic image datasets, comparing the results against existing state-of-the-art techniques that perform the same functions. ‘Our complete model clearly outperforms the others,’ the researchers note in the study, demonstrating the system’s performance on a selection of processed sample images.
In addition to using the technology to adjust images, the study explains that this method can be tapped as a way to ‘preprocess’ images for other image-modifying algorithms, such as portrait relighting tools. The researchers explain:
Though often effective, these portrait relighting techniques sometimes produce suboptimal renderings when presented with input images that contain foreign shadows or harsh facial shadows. Our technique can improve a portrait relighting solution: our model can be used to remove these unwanted shadowing effects, producing a rendering that can then be used as input to a portrait relighting solution, resulting in an improved final rendering.
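As a rough illustration of that preprocessing idea, shadow cleanup can simply be chained ahead of whatever relighting model is in use. The sketch below reuses the hypothetical `enhance_portrait` function from above; `relight` and `target_illumination` are placeholders for an arbitrary portrait relighting solution and its lighting specification, not part of the researchers’ system.

```python
def relight_portrait(image, relight, target_illumination):
    """Clean up unwanted shadows, then hand the result to a portrait relighting model.

    `relight` is a placeholder callable for any existing portrait relighting
    solution; `target_illumination` is whatever lighting specification that
    solution expects (for example, an environment map).
    """
    cleaned = enhance_portrait(image)             # remove foreign shadows, soften facial shadows
    return relight(cleaned, target_illumination)  # relight the cleaned-up portrait
```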
The system isn’t without limitations, however. Foreign shadows that contain ‘many finely-detailed structures’ may leave some residue behind even after the images are processed, and, due to the way the system works, some bilaterally symmetric shadows may not be removed from subjects.
Softening facial shadows with this technique can also, at times, produce a soft, diffused appearance by over-smoothing fine details that should remain, such as the subject’s hair, or a ‘flat’ look where facial shadows have been softened too heavily.
The researchers also note that their complete system looks for two types of shadows, facial and foreign, and that it may at times confuse the two. If facial shadows on the subject are ‘sufficiently harsh,’ the system may detect them as foreign shadows and remove (rather than soften) them.
Talking about this issue, the researchers explain:
This suggests that our model may benefit from a unified approach for both kinds of shadows, though this approach is somewhat at odds with the constraints provided by image formation and our datasets: a unified learning approach would require a unified source of training data, and it is not clear how existing light stage scans or in-the-wild photographs could be used to construct a large, diverse, and photorealistic dataset in which both foreign and facial shadows are present and available as ground-truth.
Regardless, the study highlights yet another potential use for artificial intelligence in the photography industry, paving the way for editing that is more capable and realistic while taking less time than manual work. A number of studies over the past few years have explored similar applications of AI, including transforming still images into moving animations and, in the most extreme cases, generating entire photorealistic images.
As for this latest project, the researchers have made their code, evaluation data, test data, supplemental materials and paper available to download through the UC Berkeley website.
Via: Reddit