Sony Electronics recently announced an update to Visual Story, a cloud-based mobile application that works with various models of the company’s cameras. Built with event photographers in mind, Visual Story version 1.1 uses artificial intelligence (AI) to instantly recognize scenes and objects, and it continuously selects what it deems the best images for real-time gallery creation.
What this means for these socially distanced times is that friends, family and colleagues can view the highlights of a wedding, conference or sporting match as it happens, from the comfort of their own homes or offices. The ‘Live Gallery’ feature also applies presets to images as the photos upload, to maximize their visual appeal.
A newly-added object detection filter allows the photographer and viewers to locate a specific photo containing, for example, a wedding cake, soccer ball or table. The audience can also ‘like’ specific photos. This can aid the photographer in curating images for a final gallery before it’s delivered to the client.
Photographers can also add their own logo plus links to their website and social media profiles to galleries for branding purposes, not to mention increased exposure to the audiences.
Visual Story allows you to access photos from any specific time during the event. Photos are stored to the cloud for backup as well.
Currently available for free on iOS, Visual Story is compatible with the following Sony cameras: a7C, a7R IV, a7S III, a9, a9 II, a1, a7 III (with updated firmware) and FX3.
Last year, Skylum Software added a new feature to Luminar: AI Sky Replacement. The fully automatic feature, powered by artificial intelligence, can almost instantly replace the sky in an image and relight the overall scene. In the upcoming Luminar AI, Skylum is taking the feature even further with its new Sky AI tool, which will add a much-requested capability: water reflections.
Skylum has published a new blog post and video, seen further below, showing off how water reflections will work when the feature is added to Luminar AI in 2021; Luminar AI itself is scheduled for release this year. As is the case with AI Sky Replacement, Sky AI and its water reflections feature will be fully automated.
Sky AI 1.0 (left) versus Sky AI 2.0 (right). Sky AI 2.0 includes the new water reflections functionality. Image credit: Elia Locardi
With Sky AI, when the software detects water in your scene and you replace the sky, Luminar AI will ensure that the new sky is accurately reflected in the water. As Skylum writes, ‘That means no more duplicating your scene, flipping it and applying a bunch of masking to make it look realistic.’ The reflected sky will also adapt on the fly to your selected relight settings and be ‘blurred into the scene without any manual work from you.’
Further, any details in the water in your scene, such as waterbirds, will stay in your scene and not be overwritten by the reflected sky. Sky AI recognizes the objects in your image and works to preserve fine details in a scene. You can check out the upcoming feature in the preview video below.
As you can see in the video above, Sky AI has a similar selection of sliders to what’s currently available in Luminar 4’s AI Sky Replacement tool. However, Skylum has added Water Reflection and Water Ripples sliders. You can control the intensity of the reflection and even add user-adjustable ripples to the reflection by using these new sliders.
Image credit: Daniel Kordan
In addition to the new water reflection capabilities of Sky AI, Skylum is also adding the ability to browse through your library of skies in a thumbnail viewer in 2021. The browser will show you a preview of each sky, whereas in Luminar 4, you have a list of the names of different skies, but no visual preview.
Sky AI is one of many exciting new features coming to Luminar AI. You can discover more about Skylum’s Luminar AI and view preorder options by clicking here.
Ahead of Luminar AI’s release this holiday season, Skylum Software has shared additional information about how Templates will work in the upcoming all-in-one photo editing software and how Templates can be used to save photographers a lot of time and energy when processing many images.
In many traditional photo editors, users must adjust numerous sliders to get the desired results when processing images. Presets are often available to speed up the editing process, but they have limitations, and photographers are often left feeling that their creativity has been removed from the process. Ideally, many of us want to save time without sacrificing creative control. Skylum believes that Luminar AI’s Templates will remove that frustration and overcome the limitations of presets in other photo editors.
Photo credit: Elia Locardi
The artificial intelligence in Luminar AI has been integrated throughout the entire editing process and has been trained with ‘expert input from artists, photographers, colorists and scientists.’ Alex Tsepko, Skylum’s CEO, says, ‘With Luminar AI, we wanted to ensure that AI not only was easy to use, but that it also provided creatives a way to express themselves. Through our unique 3D depth-mapping and segmentation technologies, we’re able to recognize the contents of a photo, recommended edits and then allow creatives to refine every aspect of that recommendation. Doing this lets creatives retain their unique style in their edits without tedious, manual work. Professional results, but in a fraction of the time.’
Artificial intelligence starts operating as soon as you open an image in Luminar AI. The software identifies the contents of an image, analyzes potential problem spots, and evaluates the depth of the image. Luminar AI then offers a list of carefully selected Templates for users to select from.
Users can test out different Templates and see how they impact their image; Skylum states that a preview is created in less than a second. When you evaluate a specific Template, you can even see which AI tools were utilized in it. For beginners, it should prove useful to see which tools are used to create different images and how each tool changes the look and feel of a photo.
Photo credit: Javier Pardina
Templates will offer novices a variety of ideas to help choose the direction they want to take an image. Advanced photographers with more editing experience can choose when and how they want more manual control over their edits; they can pick and choose which AI tools they want to utilize and then create their own Templates for future use on single images or when batch editing. Skylum states that utilizing Templates will allow photographers to save ‘up to 90% of their time spent editing.’
You can learn more about Luminar AI here. For a list of Frequently Asked Questions about Luminar AI, click here. Luminar AI is available for preorder at a special price, which you can learn more about here. As the release of Luminar AI approaches this holiday, stay tuned for more information, including a planned hands-on preview ahead of the full public release of Luminar AI.
French company Vaonis unveiled a fully automated camera telescope, Stellina. The product is designed to allow amateur astrophotographers to easily capture beautiful images of the night sky without the need for regular manual control or extensive astrophotography knowledge.
Vaonis is preparing to launch a new feature via software update in 2021 and it recently tested the feature by capturing a massive 546MP panorama of a group of nebulae. The new feature is ‘Automatic Mosaic-ing’ and it will take advantage of Stellina’s image stacking and image stitching technologies. This feature will let you create ultra-high-resolution panoramas of the night sky without you needing to do anything manually.
The image, shown via the screenshot below, was captured with the new feature by Vaonis Technical Director, Gilles Krebs. The image shows, from left to right, the Running Chicken, Statue of Liberty and Carina nebulae. These nebulae are about 7,000 light-years away. The image comprises 208,000 total photos, stacked into 168 images and then stitched together into a 546MP panorama. The panorama represents 336 hours of total exposure time. You can explore the full-size image by clicking this link.
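For a sense of scale, the published figures work out as follows (a back-of-the-envelope calculation that assumes the exposures were divided evenly across the stacks, which Vaonis hasn’t confirmed):

```python
# Rough arithmetic from the figures Vaonis published; assumes the
# exposures were spread evenly across the 168 stacked frames.
total_photos = 208_000
stacked_frames = 168
total_exposure_hours = 336

photos_per_stack = total_photos / stacked_frames
seconds_per_photo = total_exposure_hours * 3600 / total_photos

print(f"~{photos_per_stack:.0f} photos per stacked frame")   # ~1238
print(f"~{seconds_per_photo:.1f} s average exposure each")   # ~5.8 s
```

In other words, each of the 168 stacked frames distills over a thousand short exposures, which is how stacking suppresses noise on such faint targets.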
Image credit: Vaonis
The Vaonis Stellina is designed to be easy to use, and its smart design allows the user to easily set up and operate the device. DPReview contributor and astronomer Jose Francisco Salgado wrote an excellent review of Stellina. Salgado says, ‘The Stellina is a well-thought out smart telescope. It can easily be transported from one location to another and setting it up cannot be more simple.’ He continues, ‘…if you want a fun-to-use, click-and-shoot device that will work for you while you relax and enjoy the night sky then the Stellina is right for you!’ You can learn some of the basics of Stellina in Vaonis’s video below.
If you’d like to learn more about purchasing the Vaonis Stellina, you can head to Vaonis’s website. Vaonis has also begun teasing a brand new product that they ‘believe will change astronomy.’ The Stellina is already a compact device considering its capabilities, but the new teased product will be even smaller. It will be fully unveiled on Kickstarter on October 1. If you’d like to sign up for alerts or learn more about the new product, click here.
The post Become a Better Photo Editor with the New Lightroom Mobile ‘Discover’ Feature appeared first on Digital Photography School. It was authored by Simon Ringsmuth.
Every time you see a photo that strikes you as beautiful, brilliant, or breathtaking, you are only witnessing the tip of the iceberg. In nearly every case, the photo is the end result of dozens, even hundreds, of edits made by the photographer. From simple cropping and white balance to in-depth editing like curves and color mix, these edits are what turn an ordinary image into a work of art.
Unfortunately, those edits have traditionally been impossible to see. But thanks to the recent addition of a ‘Share Your Edit’ feature in Lightroom Mobile, you’re now able to view the behind-the-scenes edits made to images.
One of the best ways to grow as a photographer is to learn from others. Find out what works for photographers you admire and respect, and then adopt those techniques into your own workflow. This is the foundation for almost any trade, craft, or artistic pursuit. Yet, for photographers, this knowledge is often locked away behind a door. People can see the end result, but not the process.
The Discover feature in Lightroom Mobile solves this by giving you access to a worldwide community of artists who have willingly shared their editing process. There are hundreds, even thousands, of photo communities online that let you view pictures and share your own. However, none of these—not Instagram, Flickr, SmugMug, or anything else—let you see the editing process. You can only see the final image, which isn’t much use if you want to know how the photographer actually edited the photo.
This is Lightroom Mobile’s ace in the hole: Because the Discover feature is part of the same software used to edit the images being shared, it allows for a level of insight unmatched by any other photosharing site. Within minutes, you can be learning from experts and professionals all over the world, seeing how they have edited their pictures and adopting their techniques into your own workflow.
Discovering the Share Your Edit feature
Accessing the Discover option requires nothing more than a few taps on your mobile device. Open the Lightroom Mobile app and then tap on the icon that looks like a globe. If you hold your device vertically, the icon will appear at the bottom of your screen along with the Discover label.
What you see next might remind you of many other photosharing apps, but dig a little deeper and you’ll see so much more. Scroll up and down to see more photos, and tap the heart icon in the lower right corner of any picture to mark it as one that you like. In the lower left corner, you will see the profile photo of the photographer who shot the picture. At the top is a list of categories for you to explore: Featured, New, Abstract, Landscape, Nature, and more.
So far so good, right? If the point of the Lightroom Mobile Discover feature is to help you find photos (or photographers) that you like, then there’s not much to distinguish this from any other photosharing app. The real fun begins when you tap on a photo to see the edit history.
Learning from the edits
When you tap on a picture it’s almost like stepping through a time machine or, more accurately, into a classroom.
Lightroom Mobile now shows you the picture you tapped on, along with a blue bar at the bottom of your screen that fills from left to right. As the bar moves, the picture changes right before your very eyes, almost as though you’re watching it being edited in real time. And, in a way, that’s exactly what’s happening.
Tap the Edits button at the bottom of the screen or just press and scroll upwards on the photo to load the entire editing history of the image. This is where the Lightroom Mobile Discover feature rockets into the stratosphere and becomes an amazing tool for photographers who want to learn from others, not just be inspired by their photos.
After tapping the Edits button you are presented with a scrolling list of every single edit that the photographer applied to the photo. Scroll to the top to see the initial import, and then slowly scroll down to watch the image change as each individual edit is applied. Lightroom Mobile shows you each particular edit along with the specific value for each individual adjustment.
This linear edit history lets you look over the shoulder of the photographer, watching every edit they made and seeing how each decision changed the image. The Discover feature lets you stand in a room with thousands of photographers, learning from each of them as you see how they arrived at their final images.
One limitation you will quickly notice is that this feature only shows you the edits. You cannot change any of the editing values or alter the image in any way. However, you can save the edits as a preset to use in your own photography.
Click the three-dot icon in the top right corner and then tap Save as Preset to download the edits to your own Lightroom app. You can then apply these edits to any of your photos and adjust any of the parameters that you want.
The Lightroom Mobile Discover feature has a few more tricks up its sleeve to help you get inside the mind of photographers who have shared their images. Tap the Info button to see additional details that the photographer has shared about the image. This often includes a title, written description, keywords related to the subject, EXIF data, and camera information. All this is extraordinarily useful for anyone who wants to learn more about a particular photo beyond just how it was edited.
Share your own
After diving into the Discover feature and learning more about how other photos were edited, you might be inclined to share your own images and edits. You can do this easily from Lightroom Mobile with just a few taps.
To get started with sharing your images to the Discover community, just open Lightroom Mobile and tap on any of the images in your library. Then tap the Share icon in the top right corner.
Then tap the Share Edit option.
Note that as of this writing (July 2020) this process is still in Beta. Adobe will no doubt improve and refine it over time, and the exact steps might change.
The next screen prompts you to enter some information about the photo. This is similar to Instagram and other photosharing sites, but keep in mind that the point here is to help other photographers learn more about the photo. You aren’t competing for likes or upvotes; you’re sharing valuable information along with your edits to help a larger community of photographers learn more about their craft.
It helps to be as descriptive as possible in your title, description, and category sections. That way, you are not only helping other people learn more about your photo; you’re helping them to discover it, as well, by using categories that are similar to hashtags on other photosharing sites.
Finally, choose whether you want your edits to be saved as presets. I always recommend enabling this option because of the sharing mentality that makes the Lightroom Mobile Discover feature so valuable. If you have benefitted from viewing edits that other photographers have made, it’s nice to respond in kind by sharing your own edits, as well.
I don’t recommend including location information, which is turned off by default.
After you have all the basic information about your photo ready to go, tap the checkmark icon in the top right corner. This uploads your image, editing information, title, description, and categories to the Discover feature.
Tap the OK button and then head over to the Discover feature to see your image in the New section. Soon other photographers will start viewing it and learning from your edits! To see all the images you have shared with the Discover community, along with the number of likes each photo has gotten, tap your profile icon.
Keep in mind that the point of Discover is not to get likes but to learn and help others do the same. Thus, the number of likes on each of your images is almost entirely irrelevant, and I recommend not paying attention to it at all.
Conclusion
The Lightroom Mobile Discover feature is still in its infancy, and I’m excited to see where Adobe takes it in the coming years. Even though it’s still a bit rough around the edges in a few places, it’s an incredibly useful tool for learning more about the editing process. I hope you give it a chance and, if you learn anything from it, I’d love to have your thoughts in the comments below!
Adobe has announced an update to Photoshop for iPad that adds the popular Refine Edge Brush and Rotate Canvas feature.
The new Refine Edge Brush in Photoshop for iPad makes it easier to precisely select parts of an image, particularly those involving fine fabrics, hair or fur. Under the hood, the technology is the same as that used in the desktop version of Photoshop, but Adobe tweaked the interface a bit to make it more intuitive for the iPad’s touch-first design.
Below are a few of the examples Adobe has shared in its announcement blog post. Keep in mind these are hand-picked images, so your results may vary.
Adobe has an entire user guide on how to use the new Refine Edge Brush in Photoshop for iPad to help get you up and running if you aren’t familiar with the feature.
Another much-requested feature Adobe has added is the Rotate Canvas tool. Now, using a two-finger gesture, you can rotate the canvas you’re working on, making it easier to precisely edit and make changes to your work.
The feature works in conjunction with the zoom gesture, so you can quickly pinch in and out while also rotating the canvas. Rotation can snap at 0, 90, 180 and 270 degrees, and resetting the rotation and zoom is as simple as quickly pinching out on the canvas.
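Snapping behavior like this is typically implemented by checking whether the gesture’s angle falls within a small tolerance of each snap point. A minimal sketch of the idea (the 5° threshold is an assumption for illustration, not Adobe’s actual figure):

```python
def snap_angle(angle_deg, snap_points=(0, 90, 180, 270), threshold=5.0):
    """Return the nearest snap point if the angle is within the
    threshold, otherwise return the raw angle unchanged."""
    angle = angle_deg % 360
    for snap in snap_points:
        # measure the difference around the circle, so 358 degrees snaps to 0
        diff = abs(angle - snap)
        diff = min(diff, 360 - diff)
        if diff <= threshold:
            return snap
    return angle

print(snap_angle(87))    # 90  (close enough to snap)
print(snap_angle(45))    # 45  (left alone)
print(snap_angle(358))   # 0   (wraps around the circle)
```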
Adobe has created a user guide for the Rotate Canvas function as well.
The update should be live in the App Store to download today. If it isn’t, restart your iPad before revisiting the App Store.
The post Introduction and Creative Uses for the Snapseed Double Exposure Feature appeared first on Digital Photography School. It was authored by Ana Mireles.
Do you have a different app for doing collages, and compositing, and another for changing backgrounds? Then this article is for you. I’ll show you how to use the Snapseed double exposure feature so you can do all of this inside one free app. Let’s get started.
Double Exposure
The double-exposure technique comes from film photography, where it is created by exposing the same frame multiple times. It can be used to create compositions, collages, or ghostly superimpositions on a scene. Fortunately, the technique carried over into digital photography.
There are different ways to achieve double exposures: in-camera, by editing on your computer, or on your smartphone. The last is what I want to show you.
Snapseed editing app
There are tons of editing apps to choose from. I particularly like Snapseed because you can do most of your post-production in it, it’s free and available for iOS and Android.
In general, Snapseed is very intuitive, but if you want more control over your editing, it’s not always clear how to access the fine-tuning tools. This is the case with the Snapseed double exposure feature.
Basic Double Exposure
For the basic use of the Snapseed double exposure feature, I’m going to show you how to add a bokeh background to your subject.
When you launch the app, you’ll immediately be prompted to open an image by tapping anywhere on the screen. This opens a browser for accessing your gallery. Choose the photo with your subject and tap on it.
Next, open the Tools menu by tapping the pencil icon. Scroll down until you find the Double Exposure tool and tap on it.
Here you’ll find three tools. Choose the one with the plus sign (+) on it. This is the ‘add image’ button. It will give you access to your gallery again to add the photo you want to overlap. In this case, the bokeh image.
Blending modes
Now that both images are superimposed, you can modify the effect.
Start by tapping the middle icon – it represents the different layers. Here you can adjust the way they interact with each other. If you are familiar with Photoshop Blend Modes, it will be fairly easy. If not, just tap on each choice to see how they change the results.
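Snapseed doesn’t document its blend math, but the familiar Photoshop-style modes follow well-known formulas. Here’s a sketch with pixel values normalized to the 0–1 range (an illustration of the general idea, not Snapseed’s actual code):

```python
# Standard blend-mode formulas on pixel values normalized to [0, 1].
# These mirror the common Photoshop definitions; Snapseed's exact
# implementation is not documented, so treat this as illustrative.
def multiply(base, overlay):
    return base * overlay                       # always darkens

def screen(base, overlay):
    return 1 - (1 - base) * (1 - overlay)       # always lightens

def overlay_blend(base, overlay):
    # multiply in the shadows, screen in the highlights
    if base < 0.5:
        return 2 * base * overlay
    return 1 - 2 * (1 - base) * (1 - overlay)

print(multiply(0.5, 0.5))       # 0.25
print(screen(0.5, 0.5))         # 0.75
```

Darkening modes such as multiply tend to suit bokeh overlays on bright subjects, while lightening modes such as screen work well for adding light effects over dark backgrounds.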
When you’re happy, tap on the check icon to apply.
Opacity
Now, go to the third tool, the one that looks like a drop. With this one, you can open a slider that controls the transparency of the layer. Move it until you like the final result.
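Under the hood, an opacity slider like this is usually just a linear mix between the original image and the blended result (again a sketch of the general technique; Snapseed’s implementation isn’t documented):

```python
def apply_opacity(base, blended, opacity):
    """Linear interpolation: opacity=0 shows only the base image,
    opacity=1 shows the full blended effect."""
    return (1 - opacity) * base + opacity * blended

print(apply_opacity(0.2, 0.8, 0.0))   # 0.2 (effect fully hidden)
print(apply_opacity(0.2, 0.8, 1.0))   # 0.8 (effect fully applied)
print(apply_opacity(0.2, 0.8, 0.5))   # 0.5 (halfway mix)
```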
If it’s still not perfect, you can always mask away specific parts of your layer. I’ll show you how to do this in the next section by doing a simple composite.
Advanced editing
Pretty good, right? But not exactly a lot of control. That’s why the Snapseed double exposure feature also offers the ability to mask. However, these tools aren’t easy to find for a first-time user.
First, make your composite with the basic tools as explained in the previous section. Once you’ve decided on the blending mode and transparency, accept the edits by tapping on the check sign.
Next, tap on the back button that you can find on the top right. Usually, you wouldn’t do that unless you were unhappy with your results, which is why these advanced tools are not apparent at first glance. This opens a menu that gives you the choice to Undo, Revert, and View edits. The last is where you want to go.
This will open a list with every edit you’ve done.
In this case, there’s only the double exposure, but if you had also adjusted perspective, exposure, etc., those edits would show up here for further changes.
Click on the Double Exposure step to open its menu. The sliders icon on the right takes you back to basic tools if you want to make any changes. The icon in the middle takes you to the advanced edits.
Masking
Here, you can mask your images to reveal or hide different parts of it. Use your finger as a brush and just paint away. With the eye icon, you can make the mask visible.
Use the arrows to increase or decrease the opacity of the brush. If you made a mistake and painted over the wrong part, lower the opacity to 0 and paint again to reveal the underlying layer.
If you need to be precise, you can zoom in and out using two fingers. When you’re happy just tap on the check button and save your image.
Conclusion
The Snapseed double exposure feature gives you control over the effect you’re applying while still being easy to use without previous training.
And, by the way, it’s not just double but multiple exposures. You can add as many layers as you want. Just repeat the process to add more images.
Get creative and show us your results in the comments section!
Last week, Adobe released its latest updates for Lightroom, Photoshop and Camera Raw, bringing new and improved features to each of its photo-centric apps. While the list of updated features is extensive, Colin Smith of the YouTube channel PhotoshopCAFE has broken down every new feature so you don’t have to.
Smith’s feature overview is split between two videos: one that focuses on Photoshop and Camera Raw, and one that focuses on Lightroom Classic.
For the 12-minute Photoshop video, Smith covers the improved AI-powered Select Subject tool in Photoshop as well as the updated Lightroom-inspired interface for Adobe Camera Raw.
The Lightroom video comes in at 11 minutes and addresses the new Local Hue tool, the updated Tone Curve interface, ISO Adaptive Presets and the new performance improvements Adobe has made throughout the app.
You can keep up to date with Smith’s Photoshop and Lightroom tips and tutorials over on the PhotoshopCAFE YouTube channel.
Sony has shared the details of its new image sensor, which it claims is the world’s first to feature artificial intelligence (AI) processing directly on the chip.
Before diving into the details, it’s worth noting this sensor has been produced with industrial applications in mind, but as tends to be the case with much of Sony’s other sensor technologies, it’s not difficult to imagine seeing these AI-powered technologies in Sony’s smartphone and possibly even mirrorless camera sensors down the road.
With that out of the way, let’s dive into the details. The IMX500 is a 12.3MP (4056 × 3040 pixels) backside-illuminated 1/2.3” sensor capable of 4K/60p video capture that features a new stacked design that puts the pixel chip atop the logic chip. The stacked design means the data captured from the pixel chip can immediately be parsed via Sony’s AI processing directly on the logic chip.
Processing in the sensor not only removes the need for external hardware, it also means only the relevant data needs to be output, significantly reducing the amount of data that needs to be communicated to the cloud, reducing bandwidth and increasing speed. Sony says the new sensor can capture and process the image within 3.1 milliseconds when using the MobileNet V1 image analysis model.
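To get a feel for the bandwidth savings, consider the difference between shipping a full frame and shipping only the AI-derived metadata. The payload sizes below are assumptions for illustration; Sony has not published these figures:

```python
# Illustrative comparison of per-frame output sizes. The bytes-per-pixel
# and metadata payload sizes are assumptions, not Sony specifications.
pixels = 4056 * 3040                 # 12.3MP IMX500 resolution
bytes_per_pixel = 1.5                # assumed: 12-bit raw data, packed
frame_bytes = pixels * bytes_per_pixel

metadata_bytes = 512                 # assumed: labels + bounding boxes

print(f"full frame:    ~{frame_bytes / 1e6:.1f} MB")
print(f"metadata only: {metadata_bytes} B "
      f"(~{frame_bytes / metadata_bytes:,.0f}x smaller)")
```

Even with generous assumptions for the metadata payload, the per-frame output shrinks by several orders of magnitude, which is the point of doing inference on the sensor itself.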
IMX500 (left), IMX501 (right).
In addition to faster, real-time object tracking and processing, this setup also allows the system to output either the image data together with the AI-derived information, or the information alone. Outputting only the AI-derived information adds a layer of security for applications where privacy is critical.
Illustration from Sony showing how the data output format can be customized to meet various needs.
Again, Sony specifically mentions retail and industrial equipment use-cases for this new sensor technology, so it’s not likely we’ll be seeing this in consumer camera tech anytime soon.
The IMX500 (bare chip model) is expected to start shipping in April 2020 for 10,000 JPY (~$94), while the IMX501 (packaged product model) is expected to start shipping in June 2020 for 20,000 JPY (~$188).
Free, open-source software RawTherapee has been updated to version 5.8, the team behind the product has announced. This is a relatively small update, at least as far as general users are concerned. RawTherapee 5.8 brings a new tool called Capture Sharpening that automatically recovers the detail lost due to diffraction/lens blur.
The RawTherapee team explains that Capture Sharpening can be used with Post-Resize Sharpening in order to produce ‘detailed and crisp results.’ The tool is found within the ‘Raw’ tab.
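RawTherapee describes Capture Sharpening as deconvolution-based. The classic Richardson-Lucy scheme that this family of tools builds on looks roughly like the following NumPy sketch, assuming a Gaussian blur kernel (this illustrates the general algorithm, not RawTherapee’s actual code):

```python
import numpy as np

def gaussian_psf(sigma, size=9):
    """Normalized Gaussian point-spread function (the assumed blur kernel)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def convolve(img, kernel):
    """Simple 2D convolution with edge padding (kernel assumed symmetric)."""
    k = kernel.shape[0]
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for i in range(k):
        for j in range(k):
            out += kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def richardson_lucy(blurred, psf, iterations=20, eps=1e-7):
    """Iteratively estimate the sharp image that, once blurred by the
    PSF, best reproduces the observed (blurred) image."""
    estimate = np.full_like(blurred, 0.5)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        reblurred = convolve(estimate, psf)
        ratio = blurred / (reblurred + eps)
        estimate = estimate * convolve(ratio, psf_mirror)
    return estimate

# demo: blur a bright point, then restore it
img = np.zeros((32, 32))
img[16, 16] = 1.0
blurred = convolve(img, gaussian_psf(1.5))
restored = richardson_lucy(blurred, gaussian_psf(1.5), iterations=30)
print(restored.max() > blurred.max())  # the peak sharpens back up -> True
```

A production implementation adds refinements on top of this, such as estimating the blur radius automatically from the image, which is presumably part of what makes RawTherapee’s tool fully automatic.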
In addition, RawTherapee 5.8 adds support for Canon’s CR3 raw image format. The team says that at this point in time, RawTherapee can decode the image data so that users can process these image files; it cannot, however, retrieve the metadata. Though it’s not explicitly stated, it appears the team plans to add metadata support for these files in the future.
Those two features aside, the new update brings various improvements to camera models, optimizes tools, speeds up the application, improves its memory management and fixes a number of unspecified bugs. RawTherapee 5.8 can be downloaded for Windows, Mac and Linux from the software’s website.