A few days ago, DJI released a firmware update for the Mavic Air 2 that gives users new capabilities. Coming in at 178.3MB, V01.00.0340 is the first major update since the consumer-grade drone started shipping a little over three months ago. It offers up digital zoom, a new hyperlapse resolution, improved obstacle avoidance, and more.
‘We are so excited to bring a unique zooming function to the Mavic Air 2 along with 4K hyperlapse. The zooming feature will help creators add a dynamic perspective to the scene, establish shots and so much more while the 4K hyperlapse offers high-quality content in a manageable format. The Mavic Air 2 continues to be one of the most versatile and capable drones to ever take to the skies,’ says Patrick Santucci, DJI’s Senior Communications Manager.
DPReview recently covered issues encountered when testing out the Mavic Air 2’s 8K hyperlapse feature. So it’s exciting that the drone manufacturer has now made it possible to record 4K hyperlapse clips. Users now have the ability to pause a hyperlapse and then resume recording as well. The number of waypoints you can select when pre-planning a flight has increased to 45 and there is added support for Task Library.
Mavic Air 2 users can now digitally zoom in on a subject. 4K Zoom mode supports 2x digital zoom up to 4K/30p. 2x digital zoom is also possible up to 2.7K/60p, while full 4x digital zoom is possible up to 1080p/60p in video mode. You can also press the ‘Fn’ button on the remote while using the dial to gradually zoom in and out.
With Obstacle Avoidance, you can now choose to bypass obstacles, brake in front of them, or turn avoidance off entirely. There is also an option to disable sideways flight. Since the Mavic Air 2 has obstacle avoidance sensors only on the front, back, and bottom of the aircraft (not on either side), this option is useful for beginners or anyone wishing to fly in a straight line, forwards or backwards.
It’s easy to update the firmware within the DJI Fly app.
There are a few other improvements not mentioned above. To install the update, open the DJI Fly app (you can also use the DJI Assistant 2 desktop app on your computer). Tap the three-dot icon at the top right-hand corner, then select ‘About’ from the top navigation bar; the app will show your current firmware version. Select ‘Check for Updates.’ You’ll want a full battery, as installing the latest firmware can take 10 minutes. The full release notes are as follows:
• Zoom – 4K Zoom with 2x digital zoom at 30/25/24 fps; 2.7K Zoom with 2x at 60/50/48/30/25/24 fps; and 1080p with 4x at 60/50/48/30/25/24 fps in video mode.
• Added Safety Flight Mode. The aircraft avoids obstacles automatically and will not respond to commands to fly left or right.
• Optimized FPV mode for gimbal.
• Optimized hyperlapse: pause shooting during hyperlapse, increased the number of waypoints to 45, and added support for Task Library.
• Optimized Sports Mode.
• Optimized ActiveTrack when tracking vehicles at low altitude.
After much teasing, Sirui has finally launched its second anamorphic lens for interchangeable lens systems. The first was the 50mm F1.8 1.33x anamorphic that came out at the beginning of this year, and the company doesn’t seem to be wasting much time getting its second model out to the market.
One of the great attractions of the original lens was its price, and many will be pleased to hear that this new 35mm model follows suit. Anamorphic lenses are, in the main, quite expensive, so these sub-$800 Sirui examples open anamorphic shooting to a much wider audience. The lens is being launched via an Indiegogo campaign with early-bird prices from $599.
With its 1.33x anamorphic characteristic, the lens offers a 2.35:1 aspect ratio to those shooting in 16:9, while GH5 and GH5S users using Anamorphic mode will get a high-resolution 16:9 image with all the anamorphic trappings of flare, blue streaks and oval out-of-focus highlights. GH5S users shooting 4096 x 2160 C4K will be able to achieve a 2.5:1 aspect ratio.
MFT mount with adapters
The 35mm F1.8 comes only in a Micro Four Thirds mount, but Sirui offers adapters for Nikon Z, Sony E and Canon EF-M bodies. The 50mm was offered with fixed mounts for MFT, Sony E and Fujifilm’s X mount so there’s been a bit of a shift in favor of Nikon Z and away from Fuji X. Sirui says there is a Fujifilm X-mount lens on the way, but it hasn’t said what focal length it will be. With all the video improvements Fuji has introduced in recent times there should be a decent market for an anamorphic lens, but Sirui says it can’t make an adapter to fit MFT lenses on Fujifilm X-mount bodies.
Designed to cover APS-C, Super 35 and MFT sensors, the lens picks up some apparent magnification from these smaller imaging areas. The 35mm focal length on APS-C sensors with a 1.5x crop factor behaves as a 52.5mm would on a full-frame camera, but with the extra 1.33x width in the horizontal plane that 52.5mm stretches back to the appearance of roughly a 40mm.
On MFT bodies the 35mm behaves as a 70mm would, but the squeeze stretches it back to cover the horizontal angle we’d expect of a 52mm. The angles of view achieved with this lens are wider than those of the 50mm, but they leave a good deal of room for a wider lens in the future.
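Scripting the arithmetic makes the relationships above easy to check. A quick sketch in Python, using the usual crop factors of 1.5x for APS-C and 2x for MFT:

```python
def anamorphic_equivalents(focal_mm, crop_factor, squeeze=1.33):
    """Return (vertical, horizontal) full-frame equivalent focal lengths
    for an anamorphic lens on a cropped sensor."""
    vertical = focal_mm * crop_factor   # the squeeze doesn't affect the vertical axis
    horizontal = vertical / squeeze     # the 1.33x stretch widens the horizontal view
    return vertical, horizontal

# On APS-C (1.5x): behaves like a 52.5mm vertically, ~39.5mm horizontally
print(anamorphic_equivalents(35, 1.5))
# On MFT (2.0x): 70mm vertically, ~52.6mm horizontally
print(anamorphic_equivalents(35, 2.0))
```

The same function shows why the 50mm version of this lens frames noticeably tighter: on MFT it works out to a 100mm vertical, roughly 75mm horizontal, equivalent.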
Gear rings
Sirui has helpfully included a pair of gear rings with this lens to allow it to be used more easily with follow-focus systems. The rings slip over the mount-end of the lens and marry with the ribbing on the focusing and aperture rings. Each ring is labelled so you know which goes where, not that it seems to make much difference.
The rings are essential for follow-focus use, but they also make smooth manual focus transitions much easier. As the barrel of the 35mm is somewhat wider than that of the 50mm, the rings aren’t interchangeable between the two lenses.
Design
The lens has an all-metal ‘aircraft aluminum’ body that feels very solid in the hand and dense for its size – but without it being heavy. The smooth finish feels good to the touch and the focus and aperture rings turn nicely with just the right amount of resistance. The ribbing on both is perhaps a little fine for a sure grip in all conditions, but the addition of the gear rings soon solves that. I kept mine on all the time.
The aperture ring turns smoothly and without click stops, allowing the iris to be adjusted during filming without disturbance; focus, of course, is all manual.
Plain underbelly
The underside of the lens is completely plain apart from the close-focus and filter-size engravings. I rather like the look without any other text, but it does mean that when the camera is mounted above head-height you can’t see what aperture or focus distance is set. For those more used to lenses designed for stills this won’t seem unusual, but those coming from movie lenses might be surprised.
The distraction-free underside though shows clearly where the anamorphic element group is in the optical construction, as the forward end of the barrel expands to accommodate that wide anamorphic cylinder.
Looking through the lens
Further evidence of the position of the anamorphic group comes when we look through the lens. From the front the iris looks oval and from the rear it looks round, thus demonstrating that the iris is positioned behind the anamorphic group. Not all anamorphics have the anamorphic group at the front of the construction, as some use a design that places the group just in front of the mount, but those with a forward anamorphic group display more pronounced optical characteristics. Having the cylinder at the front helps it catch the light that creates flare and ensures we get those oval out-of-focus highlights.
Close focus
For a normal spherical lens a close focus distance of 0.85m / 33in would be considered a little long, but in anamorphic terms this is about standard. Distances are marked in feet and meters, and apertures in full stop measurements. All markings on the barrel are deeply engraved, with paint neatly dropped well below the surface.
It takes a 191° rotation of the focusing ring to shift from the closest point to infinity, which makes for swift focus pulls.
Construction
The Sirui 35mm F1.8 1.33x anamorphic is built with 13 elements in 9 groups, and uses a 10-bladed iris. The glass is made by Schott according to Sirui.
The lens is really very small for an anamorphic, which is partly down to its reduced covering circle but must also be the result of some internal miniaturization in the design, elements and glass used. It measures 117mm / 4.61in including the mount, is 70mm / 2.8in wide at the front and weighs 700g / 1.55lb.
Compared to the 50mm F1.8
The overall look and design of the 35mm (right) is very much in keeping with the existing 50mm, so the two lenses are easily identified as part of a set. There are some minor changes to the font used in places but you’d only notice if you had the time to look at such things.
The 35mm is longer, broader and heavier than the 50mm, and although the focus rings match in depth they have different diameters due to the difference in barrel size. The aperture rings are different in design too, with the 50mm featuring a much narrower ribbed area.
Red dots
The shift from dedicated mounts to an adapter system has meant Sirui has had to relocate the red index dot. Maybe not such a big deal, but I’m used to Micro Four Thirds lenses having their red dot on the side of the barrel, where it is easy to see, rather than on the mount itself, so this took some getting used to. The dot was moved to avoid confusion when an adapter is fitted: the Nikon Z mount, for example, places its red dot in a different position. With the dot on the mount, only one will ever be on display, as the MFT dot is covered by the adapter ring.
The mount adapters are fitted using a second set of screws in the base of the lens, with an index indentation to ensure correct positioning. To avoid mixing them up, one set of screws has a star head while the set we are supposed to use has a regular cross-head. The 50mm doesn’t accept these adapters, so those using Nikon Z cameras, for example, only have access to the 35mm at the moment. Only Sony E and MFT camera users can fit both the 50mm and 35mm lenses.
The post 5 Things You Can Do To Improve Your Photography appeared first on Digital Photography School. It was authored by Karthika Gupta.
Photography is an art form that just gets better and better as technology improves and people invest in themselves. Like any other craft out there, the more you commit to working on your skills, the better you will become. There are lots of simple and easy ways for you to improve your photography. Here are a few you can try today to help you become a better artist tomorrow!
You can never miss the opportunity to photograph a yellow house.
1. Start a daily practice and set up unique challenges
One of the best things I did for my photography and my mindset when I was just starting out was to set up a daily practice.
Oftentimes, we are our own biggest critics. We feel that the lighting has to be perfect, the subject has to be perfect, and the situation has to be perfect for us to create art. But that is far from the truth. To improve your photography, or anything for that matter, all you have to do is practice. Practice regularly and consistently.
If daily practice is not possible, that’s okay. Don’t let that stop you from creating consistently. Find a schedule that works for you and stick to it. Give yourself challenges like photographing food, photographing pets, macro photography, and more to get out and simply create. This will also help you train your eye to see images before you even take them.
An exercise in capturing my spring-blooming trees ended up as an exercise in still life photography.
2. Shoot in Manual mode
When I first started my business, I photographed in Auto mode for the first six to eight months. The whole process of interacting with clients, photographing, editing, delivering images, and marketing a business was intimidating enough; the last thing I needed was to figure out my gear on the fly. So I set my camera to Auto and happily clicked along.
But once I gave myself the permission to fail, learn, and try Manual mode, I never looked back. Manual mode is more than just a button on your camera. It is a chance for you to really understand how exposure works by controlling shutter speed, f-stop, and ISO.
The more you play around with these elements, the more you will learn about your own style of photography. I realized that I loved images that were clean and crisp. The images that were light and airy spoke to my style; they were the kind of images that I wanted to create. I realized that I needed to shoot wide open with a low ISO to get the look that I wanted. This meant I only had my shutter speed to play with.
I also learned the lowest shutter speed I could use while handholding my camera to get a crisp image in any situation. None of these would have been possible if I had let the camera dictate the settings for each scene (i.e., by shooting in Auto mode).
This image was actually taken from a train window. It would have been nearly impossible to photograph in Auto mode. The camera would have underexposed this image and the golden light would have been lost.
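The relationship between shutter speed, f-stop, and ISO can be captured in a single number, the exposure value. A rough sketch in Python (the function name and example settings are my own, purely for illustration):

```python
import math

def iso100_ev(aperture, shutter_s, iso=100):
    """ISO-100-equivalent exposure value: EV = log2(N^2 / t) - log2(ISO / 100).
    Two setting combinations with the same value capture the same exposure."""
    return math.log2(aperture ** 2 / shutter_s) - math.log2(iso / 100)

# Doubling the ISO while halving the shutter time keeps the exposure identical,
# which is exactly the trade-off you juggle in Manual mode:
print(iso100_ev(2.0, 1 / 250, iso=100))   # ~9.97
print(iso100_ev(2.0, 1 / 500, iso=200))   # ~9.97, same exposure
```

Any two combinations that produce the same value record the same image brightness, which is why trading a stop of shutter speed for a stop of ISO leaves the exposure unchanged.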
3. Experiment with different editing styles
Earlier I mentioned that I love light, bright and airy images. But that does not mean I don’t like moody images or those with a lot of contrast. I think there is a place for each type of image, and I encourage you to experiment and try out different editing styles.
While you might have a primary editing style, there is nothing stopping you from trying out other editing styles from time to time. This does not mean you are undecided; this just means that you like to get creative and experiment with your art. And that is a great way to learn editing software like Lightroom and Photoshop.
The image on the left is a lighter, brighter style, while the one on the right is the matte look many photographers enjoy.
4. Try creative shooting in your photography
There are many different ways to add a little creativity to your photography. Using double or triple exposures, shooting through elements, or even playing with shutter speed can be a way to deviate from the norm. All these techniques bring an element of uniqueness into your imagery and help you break up the monotony of your own work. These will help you improve your photography in the long run as you start thinking on your feet when you are out and about or even at a client shoot.
Lately, I have been loving the whole double exposure method for adding something extra to my images. This creative headshot I made for another photographer is one of my favorites.
5. Learn about light in different situations
As a photographer, you need not only to see light but also to learn the art of reading it: the type of light, the quality of light, and how the light will affect your final image.
For the first few years of my business, I had a very limited knowledge of light. I did not even own an external flash, and so I limited myself to photographing in bright, open, natural light conditions.
Living in Chicago, where summers are quite short and fall is usually a mix of rain, thunderstorms and more rain, I learned very quickly that I needed to get out of my comfort zone and figure out how to photograph in different lighting situations, confidently and creatively.
So the next time you are out and about, or even if you are in your home, pay attention to how the light changes as the day progresses. Photograph in each of these situations to understand how light affects the look and feel of your imagery.
Look at light as a subject in your images and you will find yourself starting to use light more creatively.
Conclusion
I hope these simple tips help you get confident in your photography. Perhaps you have limited access to gear, models or even places to photograph. Don’t let that stop you from doing these things to improve your photography on a day to day basis. All you need is the right mindset and the tenacity to see it through.
Researchers with the University of Chicago’s SAND Lab have detailed the development of a new tool called Fawkes that subtly alters images in a way that makes them unusable for facial recognition. The tool comes amid growing concerns about privacy and an editorial detailing the secret scraping of billions of online images to create facial recognition models.
Put simply, Fawkes is a cloaking tool that modifies images in ways imperceptible to the human eye. The idea is that anyone can download the tool, which has been made publicly available, to first cloak their images before posting them online. The name was inspired by Guy Fawkes, the mask of whom was popularized by the movie V for Vendetta.
The Fawkes algorithm doesn’t prevent a facial recognition algorithm from analyzing a face in a digital image — instead, it teaches the algorithm a ‘highly distorted version’ of what that person’s face looks like without triggering errors; it cannot, the researchers say, be ‘easily detected’ by the machines, either.
Feeding the algorithm these cloaked images subtly disrupts its attempt to learn that person’s face, making it less capable of identifying them when presented with uncloaked imagery. The researchers claim their cloaking algorithm is ‘100% effective’ against top-tier facial recognition models, including Amazon Rekognition and Microsoft Azure Face API.
As well, the team says their disruption algorithm has been ‘proven effective’ in many environments through extensive testing. The use of such technology would be far more subtle and difficult for authorities to prevent compared to more conventional concepts like face painting, IR-equipped glasses, distortion-causing patches or manual manipulation of one’s own images.
These conspicuous methods are known as ‘evasion attacks,’ whereas Fawkes and similar tools are referred to as ‘poison attacks.’ As the name implies, the method ‘poisons’ the data itself so that it ‘attacks’ deep learning models that attempt to utilize it, causing more widespread disruption to the overall model.
The researchers note that Fawkes is more sophisticated than a mere label attack, saying the goal of their utility is ‘to mislead rather than frustrate.’ Whereas a simple corruption of data in an image could make it possible for companies to detect and remove the images from their training model, the cloaked images imperceptibly ‘poison’ the model in a way that can’t be easily detected or removed.
As a result, the facial recognition model loses accuracy fairly quickly and its ability to detect that person in other images and real-time observation drops to a low level.
Yes, that’s McDreamy.
How does Fawkes achieve this? The researchers explain:
‘DNN models are trained to identify and extract (often hidden) features in input data and use them to perform classification. Yet their ability to identify features is easily disrupted by data poisoning attacks during model training, where small perturbations on training data with a particular label can shift the model’s view of what features uniquely identify …
But how do we determine what perturbations (we call them “cloaks”) to apply to [fictional example] Alice’s photos? An effective cloak would teach a face recognition model to associate Alice with erroneous features that are quite different from real features defining Alice. Intuitively, the more dissimilar or distinct these erroneous features are from the real Alice, the less likely the model will be able to recognize the real Alice.’
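The feature-space idea in the quote can be made concrete with a toy sketch. To be clear, this is not Fawkes’ actual algorithm (which optimizes perturbations against real deep feature extractors); it simply illustrates the principle using a fixed random linear map as a stand-in ‘feature extractor’:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in 'feature extractor': a fixed random linear map from a
# flattened 8x8 image (64 pixels) to a 16-dimensional feature vector.
W = rng.normal(size=(16, 64))

def features(img):
    return W @ img

alice = rng.uniform(size=64)    # Alice's real photo (flattened)
decoy = rng.uniform(size=64)    # a dissimilar 'decoy' identity

def cloak(image, target, budget=0.05):
    """Nudge `image` toward the decoy's features while keeping the
    per-pixel change within a small perceptibility budget."""
    # Minimum-norm pixel perturbation that closes the feature gap
    delta, *_ = np.linalg.lstsq(W, features(target) - features(image), rcond=None)
    delta *= min(1.0, budget / np.abs(delta).max())  # respect the pixel budget
    return image + delta

cloaked = cloak(alice, decoy)
pixel_change = np.abs(cloaked - alice).max()   # tiny, bounded change per pixel
before = np.linalg.norm(features(alice) - features(decoy))
after = np.linalg.norm(features(cloaked) - features(decoy))
print(after < before)   # True: the features moved toward the decoy identity
```

A model trained on many such cloaked photos learns an association between ‘Alice’ and the decoy-shifted features, so the real, uncloaked Alice no longer matches what it learned.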
The goal is to discourage companies from scraping digital images from the Internet without permission and using them to create facial recognition models for unaware people, a huge privacy issue that has resulted in calls for stronger regulations, among other things. The researchers point specifically to the aforementioned NYT article, which details the work of a company called Clearview.ai.
According to the report, Clearview has scraped more than three billion images from a variety of online sources, including everything from financial app Venmo to obvious platforms like Facebook and less obvious ones like YouTube. The images are used to create facial recognition models for millions of people who are unaware of their inclusion in the system. The system is then sold to government agencies who can use it to identify people in videos and images.
Many experts have criticized Clearview.ai for its impact on privacy and apparent facilitation of a future in which the average person can be readily identified by anyone with the means to pay for access. Quite obviously, such tools could be used by oppressive governments to identify and target specific individuals, as well as more insidious uses like the constant surveillance of a population.
By using a method like Fawkes, individuals who possess only basic tech skills are given the ability to ‘poison’ the unauthorized facial recognition models trained specifically to recognize them. The researchers note that there are limitations to such technologies, however, making it tricky to sufficiently poison these systems.
One of these images has been cloaked using the Fawkes tool.
For example, the person may be able to cloak images they share of themselves online, but they may find it difficult to control images of themselves posted by others. Images posted by known associates like friends may make it possible for these companies to train their models, though it’s unclear whether it’s possible to quickly locate people in third-party images (for training purposes) in an automated fashion and at mass scale.
Any entity able to gather enough images of a target could train a model well enough that a minority of cloaked images fed into it would be unable to substantially lower its accuracy. Individuals can attempt to mitigate this by sharing more cloaked images of themselves and by taking other steps to reduce their uncloaked presence online, such as removing name tags from images, using ‘right to be forgotten’ laws and simply asking friends and family to refrain from sharing images of them online.
Another limitation is that Fawkes (available as a free download for Linux, macOS and Windows) only works on still images. It offers no cloaking for videos, which can be downloaded and parsed into individual frames. Those frames could then be fed into a training model to help it learn to identify that person, something that becomes increasingly feasible as consumer-tier camera technology offers widespread access to high-resolution, high-quality video recording.
Despite this limitation, Fawkes remains an excellent tool for the public, enabling the average person with access to a computer and the ability to click a couple of buttons to take more control over their privacy.
A full PDF of the Fawkes image-cloaking study can be found on the SAND Lab website here.
We’ve started digging into the a7S III’s video capabilities. Initial results are positive: we measured excellent, sub-10ms rolling shutter rates, and we can confirm that the camera uses a dual gain sensor.
Leica specialist store, Meister Camera, has found a way to make non-working Leica cameras into expensive one-of-a-kind pieces of art by copper-plating the camera, lens and all.
Meister Camera currently has eight of these one-off pieces for sale on its website. According to the product descriptions, the shop partners with a third party to copper-plate the cameras using what it calls a ‘galvanic process.’ The precise details of how the entire camera, including the glass lens and non-metal components, is effectively embalmed in a coat of copper remain unknown, but the end result speaks for itself.
Most of the copper-plated cameras are various versions of the Leica I, II and III cameras, but Meister Camera also has a copper-plated M3 up for sale. Prices start at 995€ (~$1,170) for the Leica IIf and go up to 1,450€ (~$1,705) for the Leica M3. You can see more information for each of the cameras on Meister Camera’s online shop.
Researchers with Google Research and the Google Brain deep learning AI team have published a new study, Neural Radiance Fields for Unconstrained Photo Collections, detailing a system dubbed NeRF-W. The system works by taking ‘in the wild’ unconstrained images of a particular location — tourist images of a popular attraction, for example — and using an algorithm to turn them into a dynamic, complex, high-quality 3D model.
The researchers detail their project in a new paper, explaining that their work involves adding ‘extensions’ to neural radiance fields (NeRF) that enable the AI to accurately reconstruct complex structures from unstructured images, meaning ones taken from random angles with different lighting and backgrounds.
This contrasts with NeRF without the extensions, which is only able to accurately model structures from images taken in controlled settings. The obvious benefit is that 3D models can be created using the huge number of Internet photos that already exist of these structures, transforming those collections into useful datasets.
Different views of the same model constructed from unstructured images.
The Google researchers call their more sophisticated AI ‘NeRF-W,’ one used to create ‘photorealistic, spatially consistent scene representations’ of famous landmarks from images that contain various ‘confounding factors.’ This represents a huge improvement to the AI, making it far more useful compared to a version that requires carefully controlled image collections to work.
Talking about the underlying technology, the study explains how NeRF works, stating:
‘The Neural Radiance Fields (NeRF) approach implicitly models the radiance field and density of a scene within the weights of a neural network. Direct volume rendering is then used to synthesize new views, demonstrating a heretofore unprecedented level of fidelity on a range of challenging scenes.’
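The volume rendering step the authors describe follows NeRF’s standard compositing equation, which is straightforward to sketch numerically. The sample values below are invented for illustration; in the real system, the densities and colors at each point come from the trained network:

```python
import numpy as np

def render_ray(densities, colors, deltas):
    """NeRF-style volume rendering along one ray:
    alpha_i = 1 - exp(-sigma_i * delta_i), T_i = prod_{j<i}(1 - alpha_j),
    final color C = sum_i T_i * alpha_i * c_i."""
    alphas = 1.0 - np.exp(-densities * deltas)
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    weights = trans * alphas          # contribution of each sample to the pixel
    return weights @ colors, weights

# Three samples along a ray: empty space, thin haze, then a solid red surface
sigma = np.array([0.0, 0.1, 50.0])
rgb = np.array([[0.0, 0.0, 0.0],
                [0.5, 0.5, 0.5],
                [1.0, 0.0, 0.0]])
color, w = render_ray(sigma, rgb, deltas=np.ones(3))
print(np.round(color, 3))   # [0.952 0.048 0.048] - the opaque surface dominates
```

Because the weights fall off behind dense samples, the composited pixel is dominated by the first opaque surface the ray hits, which is what lets the network learn consistent geometry from photographs.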
There’s one big problem, though, which is that NeRF systems only work well if the scene is captured in controlled settings, as mentioned. Without a set of structured images, the AI’s ability to generate models ‘degrades significantly,’ limiting its usefulness compared to other modeling approaches.
The researchers explain how they build upon this AI and advance it with new capabilities, saying in their study:
The central limitation of NeRF that we address in this work is its assumption that the world is geometrically, materially, and photometrically static — that the density and radiance of the world is constant. NeRF therefore requires that any two photographs taken at the same position and orientation must have identical pixel intensities. This assumption is severely violated in many real-world datasets, such as large-scale internet photo collections of well-known tourist landmarks…
To handle these complex scenarios, we present NeRF-W, an extension of NeRF that relaxes the latter’s strict consistency assumptions.
The process involves multiple steps, including first having NeRF-W model the per-image appearance of different elements in the photos, such as the weather, lighting, exposure level and other variables. The AI ultimately learns ‘a shared appearance representation for the entire photo collection,’ paving the way for the second step.
In the second part, NeRF-W models the overall subject of the images…
‘…as the union of shared and image-dependent elements, thereby enabling the unsupervised decomposition of scene content into static and transient components. This decomposition enables the high-fidelity synthesis of novel views of landmarks without the artifacts otherwise induced by dynamic visual content present in the input imagery.
Our approach models transient elements as a secondary volumetric radiance field combined with a data-dependent uncertainty field, with the latter capturing variable observation noise and further reducing the effect of transient objects on the static scene representation.’
Upon testing their creation, the researchers found that NeRF-W was able to produce high-fidelity models of subjects with multiple detailed viewpoints using ‘in-the-wild’ unstructured images. Despite using more complicated images with many variables, the NeRF-W models surpassed the quality of models generated by the previous top-tier NeRF systems ‘by a large margin across all considered metrics,’ according to researchers.
The potential uses for this technology are numerous, including the ability to generate 3D models of popular destinations for VR and AR applications using existing tourist images. This eliminates the need to create carefully-controlled settings for capturing the images, which can be difficult at popular destinations where people and vehicles are often present.
A PDF containing the full study can be found here; some models can be found on the project’s GitHub, as well.
Version 3.2.1 (3.2.0 was skipped from public release due to last-minute bug fixes) of darktable, an open source raw photo developer available for many operating systems, is now available. This marks a major departure from the software’s typical annual release schedule. darktable’s team states, ‘The unfortunate state of global health has led to a marked increase in contributions and improvements. On top of that, version 3.4 is still scheduled for Christmas 2020. 2020 will therefore be the first year in which the darktable team will have the pleasure to offer you two major versions.’ darktable version 3.0 was released around Christmas 2019.
There are numerous new features and upgrades in darktable 3.2.1. As soon as you launch the software, you will be met with a refined user interface, including a major overhaul to the lighttable, which is the software’s library and photo browser. There are a variety of new overlay modes on thumbnails, including quick access to organizational tools such as ratings, labels and more.
Digital asset management has also received attention in the latest release. The metadata editor gains a pair of additional fields, notes and version name. Further, users can expect improved tag management, seven new collection filters and additional image information in the information module.
darktable version 3.2 includes the new negadoctor module, designed to allow photographers to capture digital images of their film negatives and process them with many useful controls and settings. Image credit: darktable
For photographers who want to work with scanned film negatives, the old film negative invert module had a problem: it only worked on non-demosaiced image data, meaning it did not work with negatives scanned using a digital camera. Version 3.2.1 of darktable includes a new module, negadoctor, based on the Kodak Cineon sensitometry system developed in the 1990s. There is a lot to discuss when it comes to negadoctor, so if you are interested in using your digital camera to scan negatives, I recommend heading to darktable’s article about version 3.2.1 to read more about how the new module operates and what settings you will have access to when working on scanned image files.
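The basic idea behind inverting a camera-scanned negative can be sketched as ‘divide out the orange film-base mask, then flip the image in density space.’ To be clear, this is not negadoctor’s actual math, which follows Cineon densitometry and adds per-channel exposure and print controls on top; the sample values below are invented:

```python
import numpy as np

def invert_negative(scan, film_base, gamma=1.0):
    """Toy inversion of a camera-scanned color negative (linear RGB, 0..1).
    Neutralize the orange mask, convert to density, then flip the density
    so the densest part of the negative becomes the brightest positive."""
    masked = np.clip(scan / film_base, 1e-6, 1.0)   # divide out the orange mask
    density = -np.log10(masked)                     # higher density = more light hit the film
    positive = 10.0 ** (-gamma * (density.max() - density))
    return np.clip(positive, 0.0, 1.0)

# Two hypothetical pixels (RGB) and a sampled film-base color
scan = np.array([[0.80, 0.50, 0.30],    # thin (dark-scene) area of the negative
                 [0.20, 0.12, 0.08]])   # dense (bright-scene) area
base = np.array([0.85, 0.55, 0.35])
print(np.round(invert_negative(scan, base), 2))
```

The dense region of the negative comes out bright in the positive and the thin region comes out dark, which is the inversion the old module could not perform on demosaiced camera scans.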
With darktable 2.6, the team introduced filmic to improve color in scenes with wide dynamic range. The filmic module saw major improvements in version 3.0 and has been further refined in darktable 3.2.1. New color science has been implemented for improved handling of highlights during editing.
The lighttable in darktable 3.2 includes improved performance, new visual options and a refined user interface. Image credit: darktable
In terms of image editing, the histogram in darktable gains a pair of major new features. First, you can now adjust the height of the histogram. Second, there’s a new RGB parade mode, which displays waveforms representing the levels of each of the red, green and blue channels. With this mode, you can better visualize the distribution of color components across your image. Though not visible to the user, the histogram code has also been rewritten for better performance.
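The idea behind an RGB parade is simple to sketch: for every image column, count how many pixels land in each intensity bin, separately per channel, so each channel gets its own waveform. This is an illustrative sketch of that concept in plain Python (the pixel layout and `levels` parameter are assumptions for the example), not darktable’s rendering code.

```python
def rgb_parade(pixels, width, height, levels=8):
    """Build a simple RGB parade from a flat, row-major list of
    (r, g, b) tuples with 8-bit values. Returns parade[channel][bin][column]
    pixel counts. Illustrative sketch only, not darktable's implementation."""
    parade = [[[0] * width for _ in range(levels)] for _ in range(3)]
    for y in range(height):
        for x in range(width):
            r, g, b = pixels[y * width + x]
            for channel, value in enumerate((r, g, b)):
                bin_index = (value * levels) // 256  # map 0..255 into 0..levels-1
                parade[channel][bin_index][x] += 1
    return parade
```

A real parade display then draws each channel’s 2D count array side by side, with brightness proportional to the count, which is why uniform areas show up as bright horizontal bands.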
With respect to performance, Rico Richardson on YouTube has published a new hands-on video detailing the improvements in darktable 3.2.1 and he remarks that the software is quicker and smoother overall. You can see that video below. If you are interested in using the free, open source darktable software for your photo editing, I highly recommend visiting his channel for many tutorial videos.
There are a lot of new features in darktable 3.2.1. If you’d like to download the latest version or even try darktable for the first time, visit the installation page. Additional information about the darktable 3.2.1 release can be found on Github. User manuals, downloadable styles, a book on using darktable to process your photos and many tutorials can be found here.
The post Become a Better Photo Editor with the New Lightroom Mobile ‘Discover’ Feature appeared first on Digital Photography School. It was authored by Simon Ringsmuth.
Every time you see a photo that strikes you as beautiful, brilliant, or breathtaking, you are only witnessing the tip of the iceberg. In nearly every case, the photo is the end result of dozens, even hundreds, of edits made by the photographer. From simple cropping and white balance to in-depth editing like curves and color mix, these edits are what turn an ordinary image into a work of art.
Unfortunately, such edits on a photo have been impossible to see. But, thanks to the recent addition of a ‘Share Your Edit’ feature in Lightroom Mobile, you’re now able to view the behind-the-scenes edits made to images.
Nikon D750 | AF-S NIKKOR 70-200mm f/2.8G ED VR II | 200mm | 1/4000s | f/22 | ISO 100
One of the best ways to grow as a photographer is to learn from others. Find out what works for photographers you admire and respect, and then adopt those techniques into your own workflow. This is the foundation for almost any trade, craft, or artistic pursuit. Yet, for photographers, this knowledge is often locked away behind a door. People can see the end result, but not the process.
The Discover feature in Lightroom Mobile solves this by giving you access to a worldwide community of artists who have willingly shared their editing process. There are hundreds, even thousands, of photo communities online that let you view pictures and share your own. However, none of these—not Instagram, Flickr, SmugMug, or anything else—let you see the editing process. You can only see the final image, which isn’t much use if you want to know how the photographer actually created the picture.
This is Lightroom Mobile’s ace in the hole: Because the Discover feature is part of the same software used to edit the images being shared, it allows for a level of freedom unmatched by any other photosharing site. In minutes, you can be learning from experts and professionals all over the world, seeing how they edited their pictures and adopting their techniques into your own workflow.
Discovering the Share Your Edit feature
Accessing the Discover option requires nothing more than a few taps on your mobile device. Open the Lightroom Mobile app and then tap on the icon that looks like a globe. If you hold your device vertically, the icon will appear at the bottom of your screen along with the Discover label.
Tap the globe icon at the top left to access the Discover feature.
What you see next might remind you of many other photosharing apps, but dig a little deeper and you’ll see so much more. Scroll up and down to see more photos, and tap the heart icon in the lower right corner of any picture to mark it as one that you like. In the lower left corner, you will see the profile photo of the photographer who shot the picture. At the top is a list of categories for you to explore: Featured, New, Abstract, Landscape, Nature, and more.
So far so good, right? If the point of the Lightroom Mobile Discover feature is to help you find photos (or photographers) that you like, then there’s not much to distinguish this from any other photosharing app. The real fun begins when you tap on a photo to see the edit history.
Learning from the edits
When you tap on a picture it’s almost like stepping through a time machine or, more accurately, into a classroom.
Nearly every photo in the Discover feature lets you look at the edits that were made to it.
Lightroom Mobile now shows you the picture you tapped on, along with a blue bar at the bottom of your screen that fills from left to right. As the bar moves, the picture changes right before your very eyes, almost as though you’re watching it being edited in real time. And, in a way, that’s exactly what’s happening.
Tap the Edits button at the bottom of the screen or just press and scroll upwards on the photo to load the entire editing history of the image. This is where the Lightroom Mobile Discover feature rockets into the stratosphere and becomes an amazing tool for photographers who want to learn from others, not just be inspired by their photos.
Scroll up and down through the list of edits to see them applied in real-time.
After tapping the Edits button, you are presented with a scrolling list of every single edit the photographer applied to the photo. Scroll to the top to see the initial import, then slowly scroll down to watch the image change as each individual edit is applied. Lightroom Mobile shows you each edit along with the specific value for each adjustment.
This linear edit history lets you look over the shoulder of the photographer, watching every edit they made and seeing how each decision changed the image. The Discover feature lets you stand in a room with thousands of photographers, learning from each of them as you see how they arrived at their final images.
Looking through the edits to a photo is like being in the same room as the photographer while the image is being refined.
One limitation you will quickly notice is that this feature only shows you the edits; you cannot change any of the editing values or alter the image in any way. However, you can save the edits as a preset to use in your own photography.
Click the three-dot icon in the top right corner and then tap Save as Preset to download the edits to your own Lightroom app. You can then apply these edits to any of your photos and adjust any of the parameters that you want.
Most edits can be shared as presets, unless the photographer sharing the edits has specifically forbidden it.
The Lightroom Mobile Discover feature has a few more tricks up its sleeve to help you get inside the mind of photographers who have shared their images. Tap the Info button to see additional details that the photographer has shared about the image. This often includes a title, written description, keywords related to the subject, EXIF data, and camera information. All this is extraordinarily useful for anyone who wants to learn more about a particular photo beyond just how it was edited.
Share your own
After diving into the Discover feature and learning more about how other photos were edited, you might be inclined to share your own images and edits. You can do this easily from Lightroom Mobile with just a few taps.
To get started with sharing your images to the Discover community, just open Lightroom Mobile and tap on any of the images in your library. Then tap the Share icon in the top right corner.
Tap the Share button on any of your images to upload the picture (and your editing history) to the Discover feature.
Then tap the Share Edit option.
Note that as of this writing (July 2020) this process is still in Beta. Adobe will no doubt improve and refine it over time, and the exact steps might change.
Share Edit is still in beta as of July 2020, but it works very well.
The next screen prompts you to enter some information about the photo. This is similar to Instagram and other photosharing sites, but keep in mind that the point here is to help other photographers learn more about the photo. You aren’t competing for likes or upvotes; you’re sharing valuable information along with your edits to help a larger community of photographers learn more about their craft.
The more you write in your title and description, the more helpful other photographers will find your image.
It helps to be as descriptive as possible in your title, description, and category sections. That way, you are not only helping other people learn more about your photo; you’re helping them to discover it, as well, by using categories that are similar to hashtags on other photosharing sites.
Finally, choose whether you want your edits to be saved as presets. I always recommend enabling this option because of the sharing mentality that makes the Lightroom Mobile Discover feature so valuable. If you have benefitted from viewing edits that other photographers have made, it’s nice to respond in kind by sharing your own edits, as well.
I don’t recommend including location information, which is turned off by default.
I recommend enabling the Save as Preset option to let others save your editing process to use on their own images.
After you have all the basic information about your photo ready to go, tap the checkmark icon in the top right corner. This uploads your image, editing information, title, description, and categories to the Discover feature.
I get a kick out of heading to the Discover feature right away to see my images show up in the stream of new photos.
Tap the OK button and then head over to the Discover feature to see your image in the New section. Soon other photographers will start viewing it and learning from your edits! To see all the images you have shared with the Discover community, along with the number of likes each photo has gotten, tap your profile icon.
Keep in mind that the point of Discover is not to get likes but to learn and help others do the same. Thus, the number of likes on each of your images is almost entirely irrelevant, and I recommend not paying attention to it at all.
Conclusion
The Lightroom Mobile Discover feature is still in its infancy, and I’m excited to see where Adobe takes it in the coming years. Even though it’s still a bit rough around the edges in a few places, it’s an incredibly useful tool for learning more about the editing process. I hope you give it a chance and, if you learn anything from it, I’d love to have your thoughts in the comments below!
Hasselblad has announced the general availability of its 907X 50C medium format camera kit, as well as the accessory grip and optical viewfinder. The camera was introduced over a year ago, and up until now a limited edition Moon Landing kit was the only way to get one – but those sold out pretty quickly.
The standard production 907X body with the CFV II 50C back will cost €6590 / £5990 / CNY¥48000 including tax / $6399 excluding sales tax and will ship at the end of August. The 907X Control Grip will be priced at €749 / £679 / CNY¥5990 / $729 (excluding US sales tax) and the optical finder at €499 / £459 / CNY¥3990 / $499. The limited edition Moon Landing kit, which came in a matte black finish with black trim, cost $7500 / €7475 / £6990.
The 907X camera body accepts lenses from the XCD range designed for the X1D series cameras, but can also use the HC/HCD, V system and XPan lenses via adapters. The CFV II 50C back features a 50MP sensor, dual SD card slots, USB-C and Wi-Fi as well as the ability to record 2.7K video. For more information see the Hasselblad website.
Press release:
Press information – For immediate release Gothenburg, Sweden 12 August 2020
HASSELBLAD 907X 50C NOW AVAILABLE
COMBINING OUR LEGACY WITH YOUR FUTURE – A UNIQUE PHOTOGRAPHIC EXPERIENCE WITH FULL SYSTEM MODULARITY
Introduced as a concept in June 2019 and now available to purchase, the 907X 50C mirrorless medium format digital camera comprises the modernised CFV II 50C digital back and the brand new 907X camera body, connecting Hasselblad’s photographic legacy and future into one system.
With an outstanding medium format 50-megapixel CMOS sensor (43.8 x 32.9 mm), the CFV II 50C digital back enables use with most V System cameras made from 1957 and onwards in addition to third party technical or view cameras. The CFV II 50C features a brilliant 3.2in 2.36K dot tilting rear display with full touch support and Hasselblad’s renowned user interface for settings, image review, and menu navigation. Users of previous CFV digital backs will appreciate the new fully integrated battery design, the same used on the X System, which reduces overall size and with the option to recharge in-camera via the USB-C port. Combining its iconic aesthetics with a chrome edge body finish in addition to modern technology, the CFV II 50C gives a nod to Hasselblad’s history combined with the brand’s world-renowned image quality.
Coupling the CFV II 50C with the 907X, Hasselblad’s smallest medium format camera body ever, creates a highly compact package. This combination offers a truly distinct photographic experience, including the classic waist-level shooting style of the V System enabled by the CFV II 50C’s tilt screen. With the 907X, the photographer gains access to all of the high-quality X System Lenses in addition to a vast range of Hasselblad optics via adapters, including the H System, V System, and XPan Lenses. In addition, the 907X enables compatibility with a wide range of third-party adapters and lenses. Accessories that beautifully complement the combination include the 907X Control Grip and 907X Optical Viewfinder.
Key features for the 907X 50C:
• Large medium format 50MP 43.8 x 32.9 mm CMOS sensor
• 14 stops of dynamic range
• Hasselblad HNCS
• Captures 16-bit RAW images and full resolution JPEGs
• High-resolution 3.2-inch 2.36K dot touch and tilt screen
• Smooth live view experience with a high frame rate of 60fps
• Video: 2.7K (2720 x 1530) and Full HD (1920 x 1080); video covers full sensor width in a 16:9 ratio
• Intuitive user interface with swipe and pinch touch controls
• Internal battery slot with the option to recharge in-camera via the USB-C port (same battery used on the X System)
• Compatibility with most V System cameras made from 1957 and onwards in addition to third-party technical or view cameras
• Full compatibility with all XCD Lenses
• Full compatibility with HC/HCD Lenses including AF with optional XH Lens Adapter (manual focus only with HC 120 Macro and HC 120 Macro II)
• Compatibility with V System Lenses, XPan Lenses, and third-party lenses using XV, XPan and third-party lens adapters, respectively
• Dual UHS-II SD card slots
• Audio in/out connectors
• Flash in/out connectors
• Integrated Wi-Fi connectivity and USB-C connection, enabling tethered shooting
• Portable workflow with Phocus Mobile 2 support*
• Optional accessories, including chrome finished 907X Control Grip for quick access to main image functions and 907X Optical Viewfinder for convenient eye-level shooting
• 907X ultra-thin and lightweight body converts the CFV II 50C digital back into a digital SWC
• 907X 50C weight: 740 g (CFV II 50C: 540 g / 907X Camera Body: 200 g)
The 907X 50C has an MSRP of €6590 / £5990 / CNY¥48000 including VAT and $6399 excluding sales tax.
The 907X Control Grip has an MSRP of €749 / £679 / CNY¥5990 including VAT and $729 excluding sales tax.
The 907X Optical Viewfinder has an MSRP of €499 / £459 / CNY¥3990 including VAT and $499 excluding sales tax.
All products are available to order today, and shipping will begin from the end of August. Visit www.hasselblad.com/cfv-ii-50c-907x/ to see more about the 907X 50C.
*Update for Phocus Mobile 2 for iPad and a brand new Phocus Mobile 2 for iPhone to be released soon. This update for Phocus Mobile 2 will enable Live View, allowing for Focus Peaking, control of Depth-of-Field, setting AF area positioning and the ability to simulate exposure, all from the Live View screen in Phocus Mobile 2.