
Aputure’s new LS 600d prototype LED light offers 600W output

18 Sep

Filmmaking gear company Aputure revealed a prototype of its upcoming LS 600d LED light at IBC 2019 over the weekend, introducing an exceptionally bright model with 600W of output. The light is described as the next step up from Aputure’s 300d II.

The LS 600d is shorter and wider than the LS C300d II, which Aputure says makes it better suited to shooting in tight areas, and despite the greater output, its low-RPM fan is quieter than that of the 300d II.

The model has a 720W draw, while its companion controller can be run off a 48V DC input or four 310Wh V-mount batteries. Running off batteries, Adorama reports the LS 600d can run non-stop for up to 1 hour and 45 minutes. The light also features a Bowens-style mount for use with light modifiers and other attachments.
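As a sanity check on those numbers – assuming the quoted batteries are, as is standard for V-mount packs, rated in watt-hours (310Wh) – the arithmetic lands very close to the reported runtime:

```python
# Back-of-the-envelope runtime check for the quoted figures.
# Assumption: the V-mount batteries are rated at 310Wh each,
# as V-mount packs normally are.

BATTERY_WH = 310      # capacity per battery, watt-hours (assumed)
NUM_BATTERIES = 4
DRAW_W = 720          # quoted maximum power draw, watts

runtime_hours = (BATTERY_WH * NUM_BATTERIES) / DRAW_W
hours = int(runtime_hours)
minutes = round((runtime_hours - hours) * 60)
print(f"{hours}h {minutes}min")  # ~1h 43min, close to the quoted 1h 45min
```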

Cinema5D reports that the Aputure LS 600d also features three built-in effects: paparazzi, strobe, and lightning. According to Cinema5D, the prototype’s battery and control box will be slimmed down for the final production version, with Aputure hoping to cut its size in half.

The Aputure LS 600d should be available to purchase in or around February 2020.

Articles: Digital Photography Review (dpreview.com)

 

 

CyberLink reveals its new PhotoDirector 11, PowerDirector 18 and other creative software

18 Sep

CyberLink launched PhotoDirector 11, PowerDirector 18, and other updates to its creative software today, adding major new features such as performance improvements for the latest R9 and i9 processors, AI-powered tools, 1:1 square video support for Instagram and Facebook, new transition effects, and more.

PhotoDirector 11 gains customizable Warped and Bevel & Emboss text effects in layer editing, which now includes access to all of the ‘key’ editing and adjustment features, according to CyberLink. The updated software also features AI Deblur and AI Styles, tools that use the software’s AI engine to remove blur and add brushstroke effects to images.

Joining the product is the updated PowerDirector 18, which has received a number of new features, including support for 1:1 square video and nested projects, new transition effects, motion graphics and animated tile templates, and Shape Designer, a tool for adding and editing vector shapes in videos.

AudioDirector 10 and ColorDirector 8 bring fewer, though no less significant, updates with the addition of AI Dewind for audio clips and Punch & Roll Recording for long audio tracks (AudioDirector), as well as Color Match for standardizing scene color and Color Replacement with Keyframe Control (ColorDirector).

CyberLink charges the following prices for the four products when purchased individually:

  • PhotoDirector 11 Ultra: $99.99 USD
  • ColorDirector 8: $129.99 USD
  • AudioDirector 10: $129.99 USD
  • PowerDirector 18 Ultra + PhotoDirector 11 Ultra: $139.99 USD

Customers also have various ‘365’ subscription options, including Director Suite 365 for $29.99/month or $129.99/year; this pack includes the four updated applications listed above, plus 100GB of CyberLink Cloud storage, the company’s exclusive AI Style Packs, and its premium effects, plug-ins, and other content packs.

Articles: Digital Photography Review (dpreview.com)

 

 

The iPhone 11 is more than just Apple catching up to Android

18 Sep

Apple announces iPhone 11 (Pro)

Major smartphone manufacturers introduce new models on a yearly cadence. Camera upgrades tend to be a major focus, often with little else to differentiate new models from old ones. Yet what seem like small, incremental upgrades can have a significant impact on photo and video quality. The iPhone XS, for example, dramatically improved image quality in high contrast scenes thanks to the sensor’s ability to capture ‘inter-frames’ – short exposures in between longer ones – to improve dynamic range and noise. Similarly, 4K video was improved with multi-frame HDR capture.

Last week, Apple announced numerous updates to the cameras in the iPhone 11, some of which will inevitably be seen as attempts to catch up to capabilities of contemporary Android offerings. But, taken together, we think they stack up to meaningful upgrades that potentially make an already very capable camera one of the most compelling ones on the market.

See beyond your frame

The iPhone 11 offers a whopping 13mm (35mm-equivalent) field-of-view with its ultra-wide ‘0.5x’ lens. The iPhone 11 is the first Apple phone to feature an ultra-wide angle lens, a feature already present on numerous Android phones. Wide angle lenses often add drama and a sense of depth to everything from landscapes and architecture to portraits and still life. They also allow for creative framing options, juxtaposing interesting foreground objects against distant ones, and they’re useful when you simply can’t step any further back from your subject.

The iPhone 11 models alert you to the potential presence of objects of interest beyond your current framing by showing you the wider field-of-view within the camera user interface. Simply tap the ‘1x’ focal length multiplier button to zoom out (or zoom in, on the Pro models).

Refined image processing pipeline

Newer, faster processors often mean increased photo and video capability, and the iPhone 11 is no exception. Its image processing pipeline, which handles everything from auto white balance to auto exposure, autofocus, and image ‘development’, gets some new features: a 10-bit rendering pipeline upgraded from the previous 8-bit one, and the generation of a segmentation mask that isolates human subjects and faces, allowing for ‘semantic rendering’.

10-bit rendering should help render high dynamic range images without banding, which could otherwise result from the extreme tone-mapping adjustments required. Semantic rendering allows faces to be processed differently from other portions of the scene, allowing for more intelligent tone mapping and local contrast operations in images with human subjects (for example, faces can look ‘crunchy’ in high contrast scenes if local contrast is uniformly preserved across the entire image). The end result? More pleasing photos of people.

Night mode

The general principle of night modes on smartphones is to use burst photography to capture multiple frames. Averaging pixels from multiple exposures reduces noise, allowing the camera software to brighten the image with less noise penalty.
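The averaging principle is easy to demonstrate. Below is a minimal sketch on synthetic data (our illustration, not any phone's actual pipeline) showing the roughly √N noise reduction you get from merging N perfectly aligned frames:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a static 64x64 scene captured as a burst of 8 noisy frames.
scene = rng.uniform(0.2, 0.8, size=(64, 64))
NUM_FRAMES = 8
NOISE_STD = 0.1
frames = [scene + rng.normal(0, NOISE_STD, scene.shape) for _ in range(NUM_FRAMES)]

# Averaging aligned frames reduces noise by roughly sqrt(N),
# letting software brighten the result with less noise penalty.
merged = np.mean(frames, axis=0)

single_frame_error = np.std(frames[0] - scene)
merged_error = np.std(merged - scene)
print(single_frame_error / merged_error)  # close to sqrt(8) ~ 2.8
```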

Google set the bar for low light photography with its Night Sight mode. Other Android phones soon added their own similar modes, making the iPhone’s lack of such a mode particularly conspicuous (third party solutions like Hydra haven’t offered quite the level of detail as the best Android implementations, and of course require you to launch a separate app).

Apple has developed its own Night mode on the iPhone 11 phones, which turns on automatically under dim conditions. Apple’s approach is slightly different from Google’s, using ‘adaptive bracketing’ to capture and fuse multiple exposures with potentially differing shutter speeds (the Pixel takes a burst of images at the same shutter speed).

Varying shutter speeds to capture both short and long exposures can help reduce blur with moving subjects. Information from shorter exposures is used for moving subjects, while longer exposures – which are inherently brighter and contain less noise – can be used for static scene elements. Each frame is broken up into many small blocks before alignment and merging. Blocks that have too much motion blur are discarded, with a noise penalty resulting from fewer averaged frames for that scene element.
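A toy version of that block-based rejection logic might look like the following. This is a simplified sketch, not the actual pipeline: the mean-absolute-difference test and its threshold are arbitrary stand-ins for a real motion/blur metric, and real systems align tiles before comparing them.

```python
import numpy as np

def merge_tiles(base, frames, tile=16, reject_thresh=0.5):
    """For each tile, average in only the frames whose tile is similar to the
    base frame; tiles that differ too much (motion or blur) are rejected,
    so fewer frames get averaged there -- and that region keeps more noise."""
    out = np.zeros_like(base, dtype=float)
    h, w = base.shape
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            ref = base[y:y + tile, x:x + tile]
            stack = [ref]
            for f in frames:
                cand = f[y:y + tile, x:x + tile]
                if np.mean(np.abs(cand - ref)) < reject_thresh:
                    stack.append(cand)
            out[y:y + tile, x:x + tile] = np.mean(stack, axis=0)
    return out

# A static scene, plus one frame where an object moved through the top-left tile.
base = np.zeros((32, 32))
steady = np.full((32, 32), 0.1)   # small exposure difference: accepted everywhere
moving = np.zeros((32, 32))
moving[:16, :16] = 5.0            # large difference: rejected in that tile only
merged = merge_tiles(base, [steady, moving])
```

In the merged output, the motion tile is averaged from two frames instead of three, mirroring the noise penalty the article describes for rejected regions.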

Deep Fusion

Google’s Night Sight mode isn’t just about better photos in low light. Night Sight uses burst photography and super-resolution techniques to generate images with more detail, less noise, and less moiré thanks to the lack of demosaicing (slight shifts from frame to frame allow the camera to sample red, green and blue information at each pixel location). ‘Deep Fusion’, arriving in a software update later this year, appears to be Apple’s response to Night Sight.

Deep Fusion captures up to nine frames and fuses them into a higher resolution 24MP image. Four short and four secondary frames are constantly buffered in memory, with older frames discarded to make room for newer ones. This buffering guarantees that the ‘base frame’ – the most important frame, to which all other frames are aligned – is taken as close to your shutter press as possible, ensuring very short or zero shutter lag so the camera captures your desired moment.
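The buffering scheme is essentially a ring buffer. Here's a minimal behavioral sketch (our illustration, not Apple's code) of how a zero-shutter-lag frame buffer works:

```python
from collections import deque

class FrameBuffer:
    """Toy zero-shutter-lag buffer: the sensor streams frames continuously;
    a shutter press grabs the newest frame as the 'base' plus the history
    already in memory, so no capture delay is added."""
    def __init__(self, maxlen=8):
        self.frames = deque(maxlen=maxlen)  # old frames fall off automatically

    def push(self, frame):
        self.frames.append(frame)

    def on_shutter_press(self):
        # Base frame is the most recent buffered frame; the older
        # buffered frames are what would get aligned and fused with it.
        return self.frames[-1], list(self.frames)[:-1]

buf = FrameBuffer(maxlen=8)
for i in range(20):            # simulate 20 sensor frames streaming in
    buf.push(f"frame-{i}")
base, history = buf.on_shutter_press()
print(base)                    # frame-19: the frame nearest the button press
```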

After you press the shutter, one long exposure is taken (ostensibly to reduce noise), and all nine frames are then combined – ‘fused’ – presumably using a super-resolution technique with tile-based alignment (described above) to produce a blur- and ghosting-free high resolution image. Apple’s SVP of Worldwide Marketing Phil Schiller also stated that it’s the ‘first time a neural engine is responsible for generating the output image’. We look forward to assessing the final results.

Portrait mode

Apple is famous for using its technologies to perfect ideas that other companies introduced. That could be said about iPhone’s Portrait mode. Fake blurred background modes have been around for years, even on some compact cameras, but they were never convincing. By applying depth mapping technology to the problem, Apple made a portrait mode that worked, and soon the feature was everywhere. And we’re slowly seeing a similar trend with portrait re-lighting: researchers and companies have been quick to develop relighting techniques, some of which do not even require a depth map.

The iPhone 11 updates Portrait mode in a few significant ways. First, it offers the mode even with the main 26mm-equivalent camera, allowing for shallow depth-of-field wide-angle portraiture. The main camera module typically has the best autofocus, and its sensor has been updated with ‘100% focus pixels’ (hinting at a dual-pixel design), so wide-angle Portrait mode will benefit from this as well.

Second, on the Pro models, the telephoto lens used for more traditional portraits has been updated: its F2.0 aperture lets in 44% more light than the F2.4 aperture on previous telephoto modules. That’s a little over a half-stop improvement in light-gathering ability, which should help both image quality and autofocus in low light. Telephoto modules on most smartphone cameras have struggled with autofocus in dim conditions, resorting to hunting and producing misfocused shots, so this is a welcome change.
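The arithmetic checks out: light gathered scales with aperture area, i.e. with the inverse square of the f-number.

```python
import math

def stops_gained(old_f, new_f):
    """Light gathering scales with aperture area, i.e. with 1/f-number^2."""
    ratio = (old_f / new_f) ** 2
    return ratio, math.log2(ratio)

ratio, stops = stops_gained(2.4, 2.0)
print(f"{(ratio - 1) * 100:.0f}% more light, {stops:.2f} stops")
# -> 44% more light, 0.53 stops
```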

And thirdly…

Portrait relighting

The iPhone 11 offers a new portrait relighting option: ‘High-Key Light Mono’. This mode uses the depth map generated from the stereo pair of lenses to separate the subject from the background, blow out the background to white, and ‘re-light’ the subject while making the entire photo B&W. Presumably, the depth map can aid in identifying the distance of various facial landmarks, so that the relighting effect can emulate the results of a real studio light source (nearer parts of the face receive more light than farther ones). The result is a portrait intended to look as if it were shot under studio lighting.

We’ve now talked a bit about the new features iPhone 11 brings to the table, but let’s turn our attention backwards and take a look at the ways in which iPhone cameras are already leading the industry, if not setting standards along the way.

Sublime rendering

Straight out of camera, iPhone photos are, simply put, sublime. Look at the iPhone XS shot above: with a single button press, I’ve captured the bright background and it looks as if my daughter has some fill light on her. If you want to get technical: white balance is well judged (not too cool), skin tones are great (not too green), and wide dynamic ranges are preserved without crunchy results, a problem that can arise from tone-mapping a large global contrast range while retaining local contrast.

Much of this is thanks to segmented processing techniques that treat human subjects differently from the rest of the scene when processing the image. Digging deeper and looking at images at the pixel level, Apple’s JPEG engine could do a better job of balancing noise reduction and sharpening: images often appear overly smoothed in some areas, with aggressive sharpening and overshoot in others. This may be partly because results have been optimized for display on high-DPI Retina devices, and a Raw option – one that still utilizes all the computational multi-frame ‘smarts’ – would go a long way toward remedying this for enthusiasts and pros.

But it’s hard to argue that the iPhone’s default color and rendition aren’t pleasing. In our opinion, Apple’s white balance, color rendering, and tone-mapping are second to none. The improvements to image detail, particularly from Apple’s as-yet-unreleased ‘Deep Fusion’ mode, should (we hope) address many of our remaining reservations about pixel-level image quality.

HDR Photos

No, not the HDR you’re thinking of – the kind that creates flat images from large dynamic range scenes. We’re talking about the HDR display of HDR captures: think HDR10 and Dolby Vision presentations of 4K UHD video. Traditionally, when capturing a high contrast scene, we had two processing options for print or for display on old, dim 100-nit monitors: (1) preserve global contrast, often at the cost of local contrast, leading to flat results; or (2) preserve local contrast, often requiring clipping of shadows and highlights to keep the image from looking too unnatural. The latter is what most traditional camera JPEG engines do in the absence of dynamic range compensation modes.

With the advent of new display technology like OLED – capable of 10x or higher brightness than old displays and print, as well as nearly infinite contrast – the above trade-off need no longer exist. The iPhone X was the first device to support the HDR display of HDR photos. Since then, iPhones have been able to capture a wide dynamic range and color gamut and then display it using the full range of their class-leading OLED displays. This means HDR photos need not look flat: they retain large global contrast, from deep shadows to bright highlights, while still looking contrasty, with pop – all without clipping tones and colors, getting closer to reproducing the range of tones and colors we see in the real world.

It’s hard to show the effect, and much easier to experience it in person, but in the photo above we’ve used a camera to shoot an iPhone XS displaying HDR (left) vs. non-HDR (right) versions of the same photo. Note how the HDR photo has brighter highlights, and darker midtones, creating the impression that the sky is much brighter than the subject (which it is!). The bright displays on modern phones mean that the subject doesn’t look too dark compared to the non-HDR version, she just looks more appropriately balanced against the sky, rather than appearing almost the same brightness as the sky.

Wide color (P3)

Apple is also leading the field in wide gamut photography. Ditching the age-old sRGB color space, iPhone images can now fully utilize the P3 color gamut, which means images can contain far more saturated colors. In particular, more vivid reds, oranges, yellows and greens. You won’t see them in the image above because of the way our content management system operates, but if you do have a P3 display and a color managed workflow, you can download and view the original image here. Or take a look at this P3 vs. sRGB rollover here on your iPhone or any recent Mac.

Apple is not only taking advantage of the extra colors of the P3 color space, it’s also encoding its images in the ‘High Efficiency Image Format’ (HEIF), which is an advanced format intended to replace JPEG that is more efficient and also allows for 10-bit color encoding (to avoid banding while allowing for more colors) and HDR encoding to allow the display of a larger range of tones on HDR displays.

Video

The video quality from the iPhone XS was already class-leading, thanks to the use of a high quality video codec and advanced compression techniques that suppressed common artifacts like macro-blocking and mosquito noise. 4K video up to 30 fps also had high dynamic range capture: fusing both bright and dark frames together to capture a wider range of contrast. Check out this frame grab of a very high contrast scene, with complex motion due to constant camera rotation. Note the lack of any obvious artifacts.

The iPhone 11 takes things further by offering extended dynamic range (EDR) for 4K 60p capture. Android devices are still limited to standard dynamic range capture. While a few Android devices offer a high dynamic range (HDR) video output format – such as HDR10 or HLG – to take advantage of the larger contrast of recent displays, without HDR capture techniques (fusion of multiple frames) this benefit is limited.

To sum up: we expect that the iPhone 11 will offer the highest quality video available in a smartphone, with natural looking footage in even high contrast scenes, now at the highest resolution and frame rates. We do hope Apple pairs its HDR video capture with an HDR video output format, like HDR10 or HLG. This would allow for a better viewing experience of the extended dynamic range (EDR) of video capture on the latest iPhones, with more pop and a wider color gamut.

Video experience

Apple is also looking to change the experience of shooting video in the iPhone 11 models. First, it’s easier to record video than it was in previous iterations: just hold down the shutter button in stills mode to start shooting a video. Instant video recording helps you capture the moment, rather than miss it as you switch camera modes.

Perhaps more exciting are the new video editing tools built right into the camera app. These allow for easy adjustment of image parameters like exposure, highlights, shadows, contrast and color. And the interface appears as intuitive as ever.

Multiple angles and ‘Pro’ capture

Advanced video shooters are familiar with the FiLMiC Pro app, which allows for creative and total control over movie shooting. The CEO of FiLMiC was invited on stage to talk about some of the new features, and one of the coolest was the ability to record multiple streams from multiple cameras. The app shows all four streams from all four cameras (on the Pro), allowing you to choose which ones to record from. You can even record both participants of a face-to-face conversation using the front and rear cameras. This opens up new possibilities for creative framing, in some cases obviating the need for A/B cameras.

Currently it’s unclear how many total streams can be recorded simultaneously, but even two simultaneous streams opens up creative possibilities. Some of this capability will come retroactively to 2018 flagship devices, as we describe here.

Conclusion

Much of the sentiment after the launch of the iPhone 11 has centered around how Apple is playing catch-up with Android devices. And this is somewhat true: ultra-wide angle lenses, night modes, and super-resolution burst photography features have all appeared on multiple Android devices, with Google and Huawei leading the pack. No one is standing still, and the next iterations from these companies – and others – will likely leapfrog one another’s capabilities even further.

Even if Apple is playing catch up in some regards though, it’s leading in others, and we suspect that when they ship, the combination of old features and new – like Deep Fusion and Night mode – will make the iPhone 11 models among the most compelling smartphone cameras on the market.

As the newest iPhone, the iPhone 11’s camera is inevitably the best Apple has made. But is the iPhone 11 Pro the best smartphone camera around currently? We’ll have to wait until we have one in our hands. And, of course, the Google Pixel 4 is a wildcard, and just around the corner…

Articles: Digital Photography Review (dpreview.com)

 

 

Canon patent application sheds more light on its upcoming IBIS technology

17 Sep

Rumors about Canon’s much-anticipated in-body stabilization (IBIS) are a dime a dozen, but a recent patent application from Canon dives into more detail than we’ve seen before, further lending credence to the rumors the technology could make it into Canon’s next R-series camera body.

First discovered by Canon News, Japanese patent application 2019-152785 details how in-body stabilization can be improved by more accurately moving and positioning the sensor along its axes. According to the application, Canon plans to do this through the use of a magnetic arrangement known as a Halbach array.

An illustration from the patent showing how in-lens stabilization would work alongside the in-body stabilization to achieve optimal results.

The Halbach array, whose one-sided flux principle was first described by John C. Mallinson in 1973, is an arrangement of magnets in which the magnetic field is reinforced on one side while being nearly canceled on the other. Halbach arrays have uses ranging from something as simple as a refrigerator magnet to something as intricate as a particle accelerator, where they’re used to focus beams.

Canon’s implementation, however, would use Halbach arrays to ensure that a correction applied to one axis won’t negatively affect another. Specifically, the patent application details how a Halbach array on the vertical (y-axis) stabilization unit would keep the horizontal (x-axis) correction from being skewed when y-axis corrections are applied.
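Mallinson's 'one-sided flux' effect can be illustrated numerically. The sketch below is our own illustration, unrelated to the patent's actual design: it approximates a sheet of continuously rotating magnetization with a row of discrete 2D (line) dipoles and compares the field magnitude at equal distances above and below the array.

```python
import numpy as np

def dipole_field_2d(pos, dip_pos, m):
    """Field of a 2D line dipole: B = (2(m.rhat)rhat - m) / r^2 (constants dropped)."""
    r = pos - dip_pos
    r2 = r @ r
    rhat = r / np.sqrt(r2)
    return (2 * (m @ rhat) * rhat - m) / r2

# 80 dipoles, 0.5 units apart, magnetization direction rotating
# with a spatial wavelength of 8 units (the Halbach pattern).
xs = np.arange(0, 40, 0.5)
k = 2 * np.pi / 8
dipoles = [(np.array([x, 0.0]),
            np.array([np.cos(k * x), np.sin(k * x)])) for x in xs]

def field_mag(point):
    return np.linalg.norm(sum(dipole_field_2d(point, p, m) for p, m in dipoles))

# Same distance from the array, opposite sides, at its midpoint.
above = field_mag(np.array([20.0,  2.0]))
below = field_mag(np.array([20.0, -2.0]))
print(above, below)  # one side is far stronger than the other
```

Which side ends up strong depends on the rotation sense of the magnetization; the asymmetry itself is the point.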

A pair of illustrations from the patent showing how the Halbach array would be positioned.

The patent application also explains how the IBIS would work hand-in-hand with in-lens stabilization units to create the most effective stabilization possible. Specifically, the patent says the in-lens stabilization would account for corrections on the XY planes (2-axis stabilization) while the in-body stabilization would be able to account for shake on XY-theta planes (3-axis stabilization with vertical, horizontal and roll compensation). Similarly, gyro units within both the lens and camera would work alongside one another to account for angular corrections so the image stabilization element in the lens could be adjusted in coordination with the image sensor to most accurately correct the optical axis.

Below is a brief illustration of XY-theta alignment at work:

It’s unknown, of course, if this particular patent application will be used down the road in a future IBIS arrangement, but it is one of the more detailed patents we’ve come across from Canon regarding the technology. Based on this particular patent application, it would be a 5-axis IBIS unit, similar to those found in Sony and Nikon mirrorless cameras.

Articles: Digital Photography Review (dpreview.com)

 

 

Laowa 100mm F2.8 2x Ultra Macro APO sample gallery

17 Sep


The Laowa 100mm F2.8 2X Ultra Macro APO is unusual among macro optics in offering a maximum reproduction ratio of 2:1, enabling extreme close-up photography. Despite its impressive specifications, it’s priced at a wallet-friendly $449. Our UK-based contributor has been shooting with the 100mm F2.8 for a while, and we’ve uploaded a small sample gallery.

View our gallery of images on the Laowa 100mm F2.8 2X Ultra Macro APO

Articles: Digital Photography Review (dpreview.com)

 

 

How to Use Color in Your Photography to Give Your Photos the Wow Factor

17 Sep

The post How to Use Color in Your Photography to Give Your Photos the Wow Factor appeared first on Digital Photography School. It was authored by Jeremy Flint.

Photography is a wonderful art form that is enjoyed by people all around the world. It is a popular medium of expression, a fine art, a way of documenting our journeys and memories, and even a way to change views. When photography began, it was all black and white, and historic photographers learned their craft that way. Whilst black and white can be a great way to bring out textures and shapes, the introduction of color is a great way to attract attention and add impact to your images. Here are five tips to show you how to use color in your photography to give your photos the wow factor:

1. Find a colorful scene


St Nectan’s Glen waterfall, Cornwall, England

The first thing to do is to find a colorful scene. The way the countryside unfolds in the summer, for example, can be a wonderful way to show color in a landscape. Purple hues of lavender or golden wheat fields are all examples of where color can be captured to help your images stand out.

You could focus on one dominant color in a scene such as green. Alternatively, seek out a variety of tones like a cool blue sky mixed with a red field which can give your photos some impact and evoke different emotions. Blue can provide feelings of cold, whilst red can give warmth, energy, and excitement.

You can also use complementary colors in captivating ways. Complementary colors sit on opposite sides of the color wheel – for example, red and green, or blue and orange. Other colors that work particularly well together include yellow, red and orange; and pink, purple and blue.
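Finding a complement is just a 180° rotation around the hue wheel. As a quick illustration using Python's standard colorsys module:

```python
import colorsys

def complement(r, g, b):
    """Return the complementary color by rotating hue 180 degrees
    around the color wheel. Channels are floats in [0, 1]."""
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    return colorsys.hls_to_rgb((h + 0.5) % 1.0, l, s)

print(complement(1.0, 0.0, 0.0))  # red -> approximately (0.0, 1.0, 1.0), i.e. cyan
```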

2. Look for details with color


Poppy field, England

Another way to use color in your photography is to look for details with color. You can photograph individual flowers in bloom such as bluebells or plant crops. As a part of a broader scene, you can photograph flowers like vibrant red poppies swaying in a field.

Think about the different colors and details you could photograph near you and experiment with a variety of color palettes to see what works well. Try and find details to photograph with single colors and combinations of colors to see which you prefer. Red is a strong color that attracts the viewer’s eye whilst yellow and orange can also command attention.

3. Change your white balance setting


Stonehenge, England

One great advantage of the white balance setting is that it lets you change the overall color of your images with a simple in-camera adjustment.

There are several white balance presets to suit different lighting scenarios – auto white balance is the default setting on most cameras. Shade, cloudy and daylight can be selected for warmer, brighter colors, whilst the tungsten setting renders cooler colors.

As the name suggests, a ‘shade’ white balance setting can work well in lighting conditions with shade and shadows, whilst ‘cloudy’ is a good choice when it is overcast. There is also an option to manually adjust your white balance to alter the color temperature to your taste.

4. Use a polariser to boost color and contrast


Oxford, England

One of the greatest accessories you can have in your photography kit is a polarizing filter. A polarizer attaches to the front of your lens and is a versatile piece of equipment, brilliant for darkening skies, minimizing reflections and managing glare from water such as lakes or the sea.

Polarizing filters are also a fantastic way to improve your images by instantly enhancing their color and contrast. They don’t take up much space in your kit bag and are useful for making your images more distinct. Polarizing filters work particularly well in landscape photography, bringing out colors and reducing haze.

5. Boost colors in post-production


Oxford, England

You can also boost color in post-production by increasing saturation; you will find a saturation slider in most editing tools. Both Photoshop and Lightroom offer an adjustment slider where color can be intensified by moving it to the right, and there is also the option to increase the saturation of individual colors to make certain parts of your images more vibrant.
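Under the hood, a saturation slider is essentially scaling the saturation channel of each pixel in a hue-based color space. A minimal per-pixel sketch using Python's colorsys (a simplification of what real editors do):

```python
import colorsys

def boost_saturation(rgb, factor):
    """Scale a pixel's saturation in HLS space, clamped to [0, 1].
    This is the per-pixel essence of an editor's saturation slider."""
    h, l, s = colorsys.rgb_to_hls(*rgb)
    return colorsys.hls_to_rgb(h, l, min(1.0, s * factor))

muted_red = (0.6, 0.4, 0.4)               # a desaturated red
print(boost_saturation(muted_red, 2.0))   # a punchier red, same hue and lightness
```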

Conclusion

Using color can be a great way to enhance your images. You can find colorful scenes and vibrant details, adjust your white balance, add a polariser, or boost saturation in post-processing to give your photos more impact.

Try these techniques and share your images with us below. Also, if you have any other tips, feel free to share those too!

 


 

 

Canon to Produce an 80-Megapixel Mirrorless Camera

17 Sep

The post Canon to Produce an 80-Megapixel Mirrorless Camera appeared first on Digital Photography School. It was authored by Jaymes Dempsey.

 

If you’ve been hoping Canon will produce a high-resolution professional mirrorless camera, then you’re in luck.

Rumors that Canon has been working on a 70- or 80-megapixel mirrorless camera have been swirling for months, and several new pieces of information make it seem more likely than ever.

First, Canon has filed a patent for an 83-megapixel sensor, which may be at the heart of a new mirrorless camera body.

Second, someone claiming to have a prototype of the new camera has just revealed specs, including:

  • An 80-megapixel full-frame sensor
  • A larger viewfinder than the Canon EOS R
  • Dual SD card slots
  • A new joystick
  • A larger size than the Canon EOS R

If these details are accurate, then the new camera (dubbed the Canon EOS RS by Canon Rumors) will likely be a mirrorless replacement for the Canon 5DS/5DS R duo. The two DSLRs debuted in February 2015, and Canon has not updated them in the years since. Most notable about the two cameras are their 50.6-megapixel sensors – the highest-resolution full-frame sensors in existence at the time.

The dual card slots will be a particularly welcome addition to the Canon EOS RS. Many professional photographers passed over the EOS R based on its single card slot, and it seems Canon got the message. So for photographers who require redundancy in their work, the Canon EOS RS will be a good choice.

And if the Canon EOS RS is truly 80+ megapixels, commercial photographers will appreciate the opportunity to push resolution to its limits.

Such a high-resolution sensor has its drawbacks, however. The larger the files, the faster you’ll fill up storage. Plus, an 80-megapixel full-frame sensor has a high pixel density, meaning small pixels, which can be a problem for noise: smaller pixels are more likely to produce visible noise at high ISOs.
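A quick back-of-the-envelope calculation shows just how small: at 80MP, full-frame pixels shrink to roughly 3.3µm, compared to about 4.1µm on the 50.6MP 5DS.

```python
import math

# Rough pixel-pitch estimate for an 80MP full-frame (36 x 24mm) sensor,
# ignoring gaps between photosites.
SENSOR_W_MM, SENSOR_H_MM = 36.0, 24.0
MEGAPIXELS = 80e6

pitch_um = math.sqrt((SENSOR_W_MM * SENSOR_H_MM) / MEGAPIXELS) * 1000
print(f"{pitch_um:.2f} um")  # roughly 3.29 um per pixel
```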

Let’s just hope that Canon puts out a high-quality sensor to complement the megapixel count! If you’re a Canon fan looking to make the change to mirrorless, then keep an eye out for news on the Canon EOS RS, which will likely be announced at the start of 2020.

Would you purchase the Canon EOS RS for its 80+ megapixels? Share your thoughts with us in the comments.

The post Canon to Produce an 80-Megapixel Mirrorless Camera appeared first on Digital Photography School. It was authored by Jaymes Dempsey.


Digital Photography School

 
Comments Off on Canon to Produce an 80-Megapixel Mirrorless Camera

Posted in Photography

 

SLR Magic announces 21mm T1.6, 50mm T1.4 ‘MicroPrime’ cine lenses for MFT camera systems

17 Sep

Budget lens manufacturer SLR Magic has added a pair of Micro Four Thirds (MFT) ‘MicroPrime’ lenses to its cinema lineup: the MicroPrime CINE 21mm T1.6 and the MicroPrime CINE 50mm T1.4.

The lenses, designed specifically with the Blackmagic Pocket Cinema Camera 4K and Z Cam E2 in mind, feature full manual control, an 82mm front filter thread, an 85mm front diameter for clamp-on matte boxes, and standard 0.8 mod gearing.

The MicroPrime CINE 21mm T1.6 is constructed of 13 elements in 11 groups and weighs 700g (1.57lbs), while the MicroPrime CINE 50mm T1.4 is constructed of 6 elements in 5 groups with a weight of 630g (1.39lbs). The lenses, which will retail for $599 apiece at authorized retailers, round out SLR Magic’s MFT lineup alongside the existing 12mm T2.8, 18mm T2.8 and 25mm T1.5.

SLR MAGIC expands their portfolio at IBC 2019 with additional focal lengths in MFT mount for MicroPrimes – 21mm T1.6 & 50mm T1.4

Hong Kong (13 September 2019) — Following the success of SLR Magic’s venture into affordable yet robust and high-quality cinema and photo lenses, we are pleased to announce new additions to our product range. Based on popular demand and responding to our users, we are now offering a selection of new lenses in MFT mount.

Given the extreme popularity of our cine lenses in E-mount, our engineers and design team have logically progressed to developing additional mounting options.

Building on our successful formula and our previous offerings, SLR Magic has now produced two additional focal lengths, given the popularity of Blackmagic’s Pocket Cinema Camera 4K and the Z Cam E2.

Both lenses feature full manual control, an 82mm front filter thread, an 85mm front diameter for clamp-on matte boxes and 0.8 mod gearing, and they are almost identical in form factor and weight.

A 21mm and a 50mm are now added to the MicroPrime range. With a new design, these two new focal lengths round out the existing range and provide more choice and versatility to all MicroPrime shooters. The range for MFT now includes the 12mm T2.8, 18mm T2.8, 21mm T1.6, 25mm T1.5 and 50mm T1.4.

SLR Magic has become synonymous with images that are sharp wide open, with smooth roll-off and beautiful bokeh edge to edge, and its lenses are fast becoming a first choice rather than an alternative for the cine and photo industries.

MicroPrime CINE 21mm T1.6
Mount: Micro Four Thirds
MSRP: US$599
Optical Structure: 13 elements in 11 groups
Image Circle: Ø32mm
Weight: 700g

MicroPrime CINE 50mm T1.4
Mount: Micro Four Thirds
MSRP: US$599
Optical Structure: 6 elements in 5 groups
Image Circle: Ø32mm
Weight: 630g

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on SLR Magic announces 21mm T1.6, 50mm T1.4 ‘MicroPrime’ cine lenses for MFT camera systems

Posted in Uncategorized

 

Video: Taking photos across California to recreate Apple’s macOS wallpapers

17 Sep

YouTuber Andrew Levitt teamed up with friends Jacob Phillips and Taylor Gray to recreate the stock wallpapers Apple includes with its Mac computers by traveling around and snapping them one photo at a time. The project required a trip into the blistering hot Death Valley National Park, as well as Sierra National Forest, Yosemite National Park, Mavericks and more.

The trio documented the endeavor in a newly published video detailing the project. Levitt notes that their final images aren’t exact matches for Apple’s due to different seasons, among other things, but that they’re representative of the experience and of getting to see each destination in person. The five resulting images have been made available to download as wallpapers here.

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on Video: Taking photos across California to recreate Apple’s macOS wallpapers

Posted in Uncategorized

 

Apple’s ‘MultiCam’ recording will be available on iPhone XS, XR and the latest iPads

17 Sep

During its iPhone 11 event last week, Apple showcased an as-yet-unreleased version of the Filmic Pro app running on the new iPhone 11 Pro. The app was capable of recording footage from multiple cameras simultaneously, for example, the front and rear cameras, or the wide-angle and main cameras. According to Filmic Pro, the updated version of its app will be available in the App Store sometime later this year.

The multi-cam recording function uses a new API that comes with iOS 13 and, according to Apple, required a redesign of the camera pipeline on its devices. The good news is that the changes have already been made on the iPhone XS and XR, as well as the latest iPad Pro models, as explained in this WWDC session.

A screenshot from the WWDC session that shows the supported MultiCam Formats on the iPhone XS.

However, having multiple cameras active and recording at the same time puts a lot of stress on the hardware, which is why multi-camera recording on the 2018 devices has some limitations. Only certain combinations of cameras can be activated simultaneously (see table below) and, depending on the hardware requirements of their apps, developers might have to fall back to lower-quality video streams to make the feature work.

A screenshot from the WWDC session showing what combination of camera modules can be used with the MultiCam API in iOS 13 on an iPhone XS.

It’s not yet clear which camera combinations will be usable on the iPhone 11 generation, but during the Filmic demo, footage appeared to be recorded from all four of the iPhone 11 Pro’s cameras simultaneously.

Nevertheless, this is good news for owners of 2018 Apple flagship devices, who will be able to use the new feature at least in part. The multi-cam API should launch with iOS 13 on September 19, and we’d expect a number of third-party apps to implement the functionality soon after.

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on Apple’s ‘MultiCam’ recording will be available on iPhone XS, XR and the latest iPads

Posted in Uncategorized