
Posts Tagged ‘possible’

Profoto’s new $299 OCF Adapter makes it possible to mount OCF light modifiers to A-series speedlights

11 Nov

Profoto has announced the release of the OCF Adapter, a new adapter that makes it possible to use all of Profoto’s OCF light shaping tools with any of its A-series flash units.

Image credit: Profoto

The OCF Adapter looks similar to many other speedlight to speedring adapters: it has a coldshoe mount for securely attaching a Profoto A-series flash and a mounting point for any of Profoto’s OCF light shaping tools, which the head of the flash fits into. Profoto has over a dozen OCF light shaping tools, including the OCF Magnum Reflector, a 24” OCF Beauty Dish, an array of OCF Grids and plenty of OCF Gel attachments.

The unit isn’t necessarily small (120mm (4.7”) wide, 280mm (11”) tall and 90mm (3.5”) deep), but it’s certainly a more compact solution than carrying around a larger monolight when an A-series flash with an OCF light shaping tool will get the job done. Below is a hands-on with the OCF Adapter by Adorama:

This being Profoto, it shouldn’t come as a surprise that the OCF Adapter isn’t cheap. The 750g (1.65lb) piece of plastic and metal will set you back $299 (Adorama, B&H). For a little context, Godox/Flashpoint’s Profoto A1 knock-off can be purchased, with accessories, for $229 (Godox at B&H, Flashpoint at Adorama).

Articles: Digital Photography Review (dpreview.com)

 

This 3D-printed accessory makes it possible to shoot split double exposures on Instax Mini 90 cameras

09 Aug

One of the accessories you can purchase for some of Lomography’s instant cameras is the Splitzer, an add-on component that makes it possible to shoot multiple exposures on the same frame. Unfortunately, the accessory isn’t available for the Fujifilm Instax Mini 90, but photographer Guillermo Hernandez has managed to create his own 3D-printed version for the popular camera.

Like the Splitzer, the 3D-printed component simply attaches to the front of the lens. To capture a double exposure, cover the half of the frame you don’t want exposed, take a shot, then rotate the Splitzer 180 degrees before taking another shot.

As you can see in the sample photos below, this allows you to create unique compositions wherein a single subject can appear in two places at once, or the same object can be framed side-by-side.


It’s undoubtedly a novelty accessory, but it’s a fun way to get a little more out of a Fujifilm Instax camera. Hernandez is selling his Instax Mini 90 Splitzer in multiple colors for $5 on eBay with $3 shipping, but if you have access to a 3D printer and know some basic CAD, you could probably whip one up yourself. Hernandez has other 3D-printed photo-related products on his eBay store, too.


 

Rumor: Canon’s next mirrorless camera could have 45MP sensor with IBIS and possible 8K/30p video

29 Jan

Yesterday, Canon Rumors posted an interesting list of rumored specifications for a Canon mirrorless camera it believes will be called the EOS R5. Today, additional details have emerged, painting a picture of what would be an impressive mirrorless camera if the rumored specifications hold true.

According to Canon Rumors’ report, which was compiled from information shared by multiple, unrelated anonymous sources, the camera could feature a 45-megapixel sensor with in-body image stabilization and burst shooting at up to 20 frames per second. Specifically, Canon Rumors claims the IBIS will offer five stops of image stabilization on its own and up to 7–8 stops when combined with in-lens stabilization. The exact frame rates are still up in the air, as the sources are apparently offering conflicting information, but it appears they could be 14 fps and 20 fps for the mechanical and electronic shutter, respectively.

Those specs are enticing, but it’s the rumored video features that are really interesting. According to Canon Rumors, the camera could offer 4K video at 120 fps and 8K Raw at up to 30 fps, although it’s noted that the 4K/120 fps option could be a crop mode to control heat, and the 8K Raw could refer to a special timelapse mode.

Other details rumored include the addition of a scroll wheel, the removal of the touchbar, a larger-capacity battery that looks similar to the LP-E6/N batteries currently used by Canon, and an announcement date ‘ahead of CP+ next month.’

Canon users have long been asking for an R-series camera body worthy of Canon’s growing lineup of RF-mount lenses, and if these rumors come to fruition, it’s safe to say there won’t be much room left to complain. However, these specifications are just that: rumors. There’s also the possibility they’re little more than hearsay, or misleading fragments cobbled together from multiple other rumors.

In light of these rumored specifications, let us take a second to ask you, the reader: how would you like to see Canon evolve its R-series lineup going forward?


 

Redditor uncovers possible price of Sigma’s fp camera in product page source code

08 Oct

Sigma hasn’t officially revealed the pricing of its fp full-frame mirrorless camera, but a clever Redditor discovered what is believed to be the price by looking through the source code of the fp product page on Sigma’s website.

Though Sigma has since corrected its mistake and removed the information from the source code of its website, Redditor u/jadware initially made the discovery, and Redditor u/ForwardTwo captured the above screenshot while the pricing information was still live. The information, which revealed the price to be $1,899, was visible under the meta property tag ‘og:price:amount’ when searching for ‘1899’ within the source code of the page.
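For the curious, the trick is easy to reproduce: Open Graph pricing data lives in ordinary meta tags, so any HTML parser can pull it out. Below is a minimal Python sketch using the standard library’s html.parser; the HTML snippet is a reconstruction for illustration, not Sigma’s actual page source.

```python
from html.parser import HTMLParser

# Reconstructed example of Open Graph pricing metadata -- not the
# actual markup from Sigma's product page.
SAMPLE_HTML = """
<html><head>
<meta property="og:title" content="SIGMA fp" />
<meta property="og:price:amount" content="1899" />
<meta property="og:price:currency" content="USD" />
</head><body></body></html>
"""

class OGPriceParser(HTMLParser):
    """Collects the content of any og:price:amount meta tag."""
    def __init__(self):
        super().__init__()
        self.price = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("property") == "og:price:amount":
            self.price = attrs.get("content")

parser = OGPriceParser()
parser.feed(SAMPLE_HTML)
print(parser.price)  # → 1899
```

In practice you would feed the parser the fetched page source rather than a hard-coded string, which is essentially what searching for ‘1899’ in the browser’s view-source did by hand.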

This information still isn’t definitive, but it seems like a reasonable price point for the camera, and the fact that all signs of the price have since been removed lends credence to the possibility of the Sigma fp costing $1,899. Nokishita has also published pricing information, which puts it around the $1,899 price point (h/t Mistral75). Previous rumors have suggested the Sigma fp will be released on October 20th, so it shouldn’t be much longer until we find out definitively.


Image credits: Screenshot by u/ForwardTwo, used with permission.


 

CP+ 2019 – Nikon interview: ‘The view through the viewfinder should be as natural as possible’

25 Mar
(L-R) Mr Naoki Kitaoka, Department Manager of the UX Planning Department in the Marketing Sector of Nikon’s Imaging Business Unit, pictured with Mr Takami Tsuchida, Sector Manager of the Marketing Sector inside Nikon’s Imaging Business Unit, at the CP+ 2019 show in Yokohama Japan.

We were in Japan earlier this month for the annual CP+ show in Yokohama, where we sat down with senior executives from several camera and lens manufacturers, among them Nikon.

We spoke with three Nikon executives from the Marketing Sector of Nikon’s Imaging Business Unit: Mr Naoki Kitaoka, Department Manager, of the UX Planning Department, Mr Takami Tsuchida, Sector Manager, and Mr Hiroyuki Ishigami, Section Manager of the Product Planning Section IL, UX Planning Department.

Please note that this interview was conducted with multiple interlocutors through an interpreter, and has been edited for clarity and flow. For the sake of readability, answers have been combined.


How do you think the market for full frame mirrorless will evolve?

In terms of hardware, it is likely that mirrorless will catch up with DSLR. But one thing that is a challenge is the time lag of electronic viewfinders. Even though we have a great mirrorless [solution], we cannot beat the optical viewfinder.

For really high-level professional photographers at sports events and so on, I believe that the DSLR will survive. I think there will be a synergy between DSLR and mirrorless, so we can expand the market moving forward.

I hesitate to talk about our competitors, but while Sony only offers mirrorless cameras, both Nikon and Canon offer DSLR and mirrorless, so there are more options for our customer bases. DSLR and mirrorless cameras have their own unique characteristics.

The Nikon Z6 and Z7 feature a high-resolution optical viewfinder which prioritizes clarity and sharpness over response speed. One of the secrets behind the large, sharp viewfinder image is the complex optical unit behind the display panel, which contains multiple lenses including an aspherical element.

The Z6 and Z7 offer very high resolution finders, at the expense of response speed, compared to some competitors. Why did you make this decision?

There are various factors, however we decided on three main pillars for the Z system. The first pillar is a new dimension of optical performance. The second is reliability, both in terms of the hardware and also the technology, and the third is future-proofing of that technology.

The view through the viewfinder should be as natural as possible

To touch on the first pillar, optical performance, we’re really trying to be the best and provide the ultimate performance of the viewfinder. The view through the viewfinder should be as natural as possible. To achieve that goal we did two things – we focused on the optics, and also on image processing.

With current technology there is always some time lag. If we try to shorten the response time by compromising on resolution, the [experience] deteriorates. Of course, we’ll continue to try to make the response time shorter.

Is it more important for the viewfinder response to be faster in a camera more geared towards speed?

That depends. In the Z7, our first priority was not speed. Therefore, if we were going to launch a camera focused on speed, we’d need to review [viewfinder responsiveness].

What kind of feedback have you received from your Z6 and Z7 customers?

Very similar to [DPReview’s] feedback. For people who don’t prioritize high-speed shooting, they’re happy with the performance and the portability of the system. In many cases they’ve totally switched away from DSLR.

The Nikon Z6 is a lower-cost companion camera to the flagship Z7, which has already out-sold the more expensive model. According to Nikon, the Z6 has proven especially popular with filmmakers.

Is the Z6 attracting a different kind of customer to the Z7?

When we launched them, we expected that sales would be about 50:50, however the Z6 already has a larger customer base. It’s more price competitive. Video shooters are telling us [the Z6] is very user-friendly, and in the US market, the Film Makers’ Kit has become popular.

We’re going to create easier to use and friendlier equipment for photographers that need to do both stills and video

In the future, would you like Nikon to appeal to serious professional videographers and filmmakers?

If you mean Hollywood or television broadcast videographers, we’re not trying to address that segment. However we are targeting freelancers, one-person team kind of videographers – that kind of shooter. That’s the kind of direction we’re going in.

We’re going to create easier to use and friendlier equipment for those photographers that need to do both stills and video. For example, photojournalists, or wedding photographers.

On the optics side, in the S-series lenses we took great care over the video functionality as well, so for example when you zoom the focus stays there, there’s no defocusing, and there’s no change in the image angle when you focus, either.

Do you think that strategy might change in the future?

We’ll keep an eye on the market, and look at the demands of our customers.

Despite the entry of the Z7 into the market, the D850 continues to be a major seller for Nikon, and in some ways remains a more capable camera for professionals.

Do you plan to increase your production capacity, to make F mount and Z mount products in parallel? Or will you scale down production of one line to make room for expansion of the other?

Even though we’ve now launched Z mount into the market, we still have a very robust [F mount] customer base, and a good reputation thanks to our DSLRs, especially products like the D750 and D850. And sales are still very robust.

I want to grow the Z series and D series at the same time – we’re not weighing one against the other. For example, developing Z lenses alongside F-mount lenses will put a lot of pressure on us, so efficiency of production will be very important from now on, because we really want to maintain production and development of both lines in future. When we can, we’ll commonize parts and platforms, and of course we’ll monitor trends in the market, and where the growth is.

Take a look inside Nikon’s Sendai factory [August 2018]

Can you give me an example of a new, efficient production process in contrast to an older, less efficient process?

We are really interested in automation, and we’d like to automate so we don’t have to depend [entirely] on human labor. For example, we’d like to have a 24/7 operation in our factories.

Since we launched the Z series, our users have been asking us to apply mirrorless technology to the DX format

Do you think the Z mount will eventually be an APS-C platform, as well as full-frame?

I cannot disclose our plans but for today I can say that since we launched the Z series, our DX format DSLR users have been asking us to apply mirrorless technology to the DX format as well. If we employ APS-C sensors [in mirrorless] maybe the system can be made even smaller. So as we go along, we’ll listen to the voices of our customers.

One of the advantages of the narrow dimensions of the 60 year-old F-mount is that the APS-C cameras that use it – like the D3500, shown here – can be made remarkably small. That will be a harder trick to pull off with the larger Z-mount.

We understand some of the benefits of a short flange back and wide diameter mount, are there any disadvantages?

In comparison to F mount, [when designing lenses for Z] we can really guide the light, even right to the edges of the frame. This gives uniformly high image quality across the whole image area. The camera can also be thinner.

There’s no particular challenge or shortcoming in this kind of design, except that the mount diameter determines the camera’s size. You can’t make the camera any smaller [than the height defined by the diameter of the mount].

Does a shorter flange back distance make the mount and lens alignment tolerances more critical? Is it harder to correct for reflections and ghosting?

Generally speaking, when it comes to alignment, no. But there is more risk of sensor damage in [such a design, with a rear lens group very close to the imaging plane ] if the camera is dropped. So we needed to create a system to [absorb shock] in this instance. When it comes to ghosting, it is more critical, so we have to really reduce reflections. Only by doing this were we able to [make the design of the Z mount practical].

Is there a software component to that, or are you achieving the reduced reflections entirely optically and via coatings?

No software is involved.


Editor’s note: Barnaby Britton

Last year was a crucial year for Nikon, and the Z system was a hugely significant move for the company – one on which the future of the manufacturer may depend. Nikon has been careful not to talk about the Z mount replacing the 60 year-old F-mount so much as complementing it, and in our meeting at CP+, Nikon’s executives were again keen to emphasize that they see DSLRs and mirrorless cameras co-existing – at least for now.

Clearly though, as they admit, ‘mirrorless will catch up with DSLR’ eventually. And already, for Nikon, mirrorless has opened the door to a new customer base: filmmakers. While Nikon isn’t targeting professional production companies or broadcast customers (not yet – although the forthcoming addition of Raw video is a strong indicator that they’d like to), I get the sense that the Z6 has been more of a hit with multimedia shooters than Nikon perhaps expected. It certainly seems as if sales figures for the 24MP model have come as a bit of a surprise. It’s unclear, though, whether the proportionally greater sales of the Z6 compared to the Z7 are a result of the cheaper model over-performing, or the flagship under-performing in the market.

A mirrorless D5 it ain’t, but the high-resolution Z7 is an excellent platform for Nikon’s new range of Z-series lenses

The Z7 was always going to be a relatively tough sell at its launch price, with the inevitable comparisons against the incredibly capable and still-popular D850, and the fact that the similarly-specced (and in some ways more versatile) Z6 was coming fast on its heels. Regardless, Nikon clearly sees the Z7 as living alongside its high-end DSLRs, rather than as a replacement model. As the executives said in our interview, ‘in the Z7, our first priority was not speed’. A mirrorless D5 it ain’t, but the high-resolution Z7 is an excellent platform for Nikon’s new range of Z-series lenses, which are at least a generation ahead of their F-mount forebears in terms of optical technology.

We’ve heard a lot about the benefits of wider, shallower mounts for optical design (and the benefits are real, by the way, especially when it comes to designing wide, fast lenses) but it was interesting to hear about some of the challenges that emerged. Principal among them are the need to reduce aberrant reflections, which can cause ghosting, and the requirement for a robust sensor assembly to avoid damage from impact.

Right now, the Z system is a full-frame system. But in this interview we got the clearest hint yet that this might not be a permanent condition

Judging by Roger Cicala’s tear-down of the Z7 last year, it’s obvious that Nikon really prioritized ruggedness and ‘accident-proofing’ in the Z6/7. It turns out that one of the reasons for this focus on build quality is the close proximity of the stabilized sensor not only to the outside world, but also to the rear elements of Z-series lenses.

Right now, the Z system is a full-frame system. But in this interview we got the clearest hint yet that this might not be a permanent condition. Reading between the lines, a statement like ‘since we launched the Z series, our users have been asking us to apply mirrorless technology to the DX format’ is as close to a confirmation that this is being actively worked on as we’d expect to get from a senior executive. As for how far away an APS-C Z-mount camera is, I wouldn’t want to guess.

There’s always a chance, of course, that Nikon could go the Canon route and use a totally separate mount for APS-C. I doubt it, but Mr Kitaoka did make the point that the width of the Z-mount defines the size of the camera. And the Z-mount, as we know well, is very wide indeed.


 

Canon patent details schematics for a possible RF 90mm F2.8L IS Macro lens

28 Dec

A recently published patent from Canon details a set of optical formulas for what appears to be an RF 90mm F2.8L IS Macro lens.

Japanese Patent Application Number 2018-205435, first detailed by Northlight Images, is fairly standard as far as patents go, but there is one interesting element worth noting.

The image stabilization elements inside the lens — labelled L12 — are located within the first optical group towards the front of the lens. More often the image stabilization component is towards the center of the lens, but that doesn’t appear to be the case here.

This could be for a number of reasons, but the patent text specifically mentions that in order to get the most accurate image stabilization, larger optical components and accompanying motors are needed. As such, the larger front area of the lens makes more sense than the middle of the lens where the optics are more confined, especially with the aperture mechanism located there (represented by ‘SP’ in the illustrations).

It’s also worth noting that the focusing component of the lens is towards the rear of the lens. The patent text says this too is due to the larger focusing motor(s) needed, but it could also have an added benefit of creating a more balanced lens with the heavy image stabilization component towards the front of the lens. As Canon showed the world with its RF 35mm F1.8 IS STM lens, the RF mount makes it possible to put larger elements and more electronics towards the rear of the lens due to the larger mount size.


 

‘Perfect’ sensors may be possible, but might not come to cameras

24 Feb
Fossum’s team has created a prototype chip with a variety of pixel designs and readout methods. This included combinations with sufficiently low read noise to allow individual photons to be counted.

The future could include sensors that perfectly describe the light in the scene, offer new computational possibilities and give film-like latitude in the highlights. And yet we may never see them in cameras, says the father of the CMOS sensor, Professor Eric Fossum.

We spoke to Fossum shortly after he received, alongside three other pioneers of digital photography, the Queen Elizabeth Prize for Engineering for his work on CMOS sensors. But the topic of our conversation is the future, rather than his past achievements. He now leads a group at the Thayer School of Engineering at Dartmouth, New Hampshire, working on what he calls Quanta Image Sensors (QIS). The team has recently published a paper announcing a breakthrough using the same fabrication process used to make CMOS image sensors.

The perfect sensor?

The principle is to use nanoscale, specialized pixels, called ‘Jots’ to capture light at the level of individual photons. They work in a binary fashion: they’ve either received a photon or they haven’t (as opposed to conventional sensors which accumulate the charge generated by lots of photons during exposure). These jots are read repeatedly to see whether another photon has arrived since they were last checked.

While Fossum is keen to stress that other teams are having some success in the same field (using a slightly different approach), his own team’s work is looking very promising. The paper in the journal Optica shows the team’s technology has been refined such that a 1MJot chip can be read 1000 times per second while still exhibiting sufficiently low read noise that it can distinguish between individual photons.

We can count every photon: you can’t do any better than that

“The Holy Grail is no read noise,” says Fossum: “so that the read signal is proportional to the signal as it arrived.” And the team’s latest paper says they’ve got very close to this, with noise levels so low that the sensor can distinguish between individual photons without getting confused by read noise. This opens up the possibility of cameras that could perfectly describe the light in the scene, even in near total darkness.

A mathematical model showing how noise levels (measured in the root mean square of the number of electrons), affect the ability to interpret small signals. The lower the read noise, the more accurately you can distinguish between individual values in the signal.
Diagram from the team’s paper in Optica

Eliminating read noise from the sensor wouldn’t mean totally noiseless photos, since the randomness of the light being captured is a key source of noise, but it’s the best any sensor can possibly achieve. “We can count every photon: you can’t do any better than that,” he says.
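To get a feel for why read noise matters so much here, consider a toy simulation (the noise figures below are illustrative assumptions, not numbers from the paper): integer photon counts are blurred by Gaussian read noise, and we check how often simply rounding the measurement recovers the true count.

```python
import random

random.seed(0)

def count_accuracy(read_noise_rms, trials=20000):
    """Fraction of reads where rounding the noisy signal recovers
    the true integer photon count. Toy model: true counts of 0-3
    electrons plus Gaussian read noise."""
    correct = 0
    for _ in range(trials):
        true_count = random.randint(0, 3)
        measured = true_count + random.gauss(0.0, read_noise_rms)
        if round(measured) == true_count:
            correct += 1
    return correct / trials

# Deep sub-electron read noise: counts are almost always recovered.
low = count_accuracy(0.2)
# ~1 e- rms read noise: adjacent photon counts blur together.
high = count_accuracy(1.0)
print(low, high)
```

The gap between the two accuracies illustrates the threshold effect: once read noise falls well below half an electron, discrete photon counts stop overlapping and can be resolved cleanly.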

The paper, perhaps conservatively, says the technology could be suited to scientific, space, security and low-light imaging applications, but Fossum has clearly also been thinking about conventional photography.

A classic response

“Because it’s binary in nature, its response is comparable to old photographic film,” he says. “In film, when the silver halide was hit by a photon, it’s reduced to a silver atom that isn’t washed away [during processing]. If it’s hit by two photons, it doesn’t make any additional difference.”

This ends up meaning that in bright regions of the image there are ever fewer unexposed silver ions as the exposure goes on. This, in turn, makes it less likely that the last few ions will be hit by a photon, so it becomes increasingly difficult to fully saturate the system. The same is true for the tiny, binary Jots: as more of them become saturated, it becomes increasingly difficult to saturate the last few.

“The response is linear at moderate exposure but it trails off to give significant overexposure latitude. It’s a pattern first observed by Hurter and Driffield in 1890,” says Fossum: “they showed the same curve that we measure, experimentally, in our QIS devices.”
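The roll-off Fossum describes falls out of a simple single-photon-threshold model (a simplification for illustration, not the team’s actual device physics): if each jot sees a Poisson-distributed number of photons with mean H, the chance it is triggered is 1 - exp(-H), which is nearly linear at low exposure and saturates gracefully.

```python
import math

def jot_response(mean_photons_per_jot):
    """Expected fraction of binary jots triggered when each jot sees
    a Poisson-distributed number of photons with the given mean.
    P(at least one photon) = 1 - exp(-H): a saturating curve."""
    return 1.0 - math.exp(-mean_photons_per_jot)

# Near-linear at low exposure...
print(round(jot_response(0.05), 4))   # ≈ 0.0488, close to 0.05
# ...but with significant latitude near saturation: doubling a
# bright exposure barely changes the response.
print(round(jot_response(4.0), 3))
print(round(jot_response(8.0), 3))
```

The shape is exactly the linear-then-shouldering characteristic Hurter and Driffield measured for film, which is why the two response curves line up.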

Diagram showing the Jots’ exposure response, in comparison to mathematical models of different read noise levels. Note the roll-off at high exposures, comparable to the Hurter Driffield response curves of photographic film.
Diagram from the team’s paper in Optica

“That has obvious interest both for still photographers who’re used to shooting film and for cinematographers who’re looking for that kind of response.”

The use of such tiny pixels has other benefits, too: “Jots are below diffraction limits in size. This means the resolution of the system is always higher than the resolution of the lens, which means we never have to worry about aliasing.” While the group’s prototype sensors feature one million Jots, Fossum says their target is one billion.

Beyond conventional photography

Fossum isn’t just thinking about photographic history, though. The tiny size and the approach of repeatedly reading out the sensor challenges the existing concept of single exposures. “At the moment we make motion pictures by shooting a series of snapshots. With QIS it’s more like the reverse process,” he says: constructing still images from precisely captured movement.

Professor Fossum has already been responsible for one revolution in photography: the invention of the CMOS sensor. In December 2017 he was awarded the Queen Elizabeth Prize for Engineering for his work.

Essentially, taking lots of short, sub-frames during an exposure gives you an extra dimension to your images: time. “If you take a single frame, you get a bunch of ones and zeros. If you take another, you quickly build up a cube of ones and zeros,” Fossum says: “For example, if you shoot 100 frames at 1000 frames per second, you get a cube that’s x pixels wide by y pixels tall, but also 100 frames deep.”

This presents some interesting questions, he says: “What do you do with that data? How do you create an image from that very faithful map of where photons arrived?”

“You could choose a number of pixels in x and y but also in the time axis. If you wanted a very sensitive pixel in low light you could combine 10 x 10 Jots in x and y and then maybe combine the data from 100 frames: it’s essentially like increasing the grain size in a more sensitive film.”

Of course you can achieve something comparable to this in conventional digital photography by downscaling an image, but Jots allow greater flexibility, Fossum says: “your pixel size could vary between different parts of the image, so in some places you’d have bigger but more sensitive grains.”
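The ‘cube of ones and zeros’ idea can be sketched in a few lines. The following toy example (random data and arbitrary bin sizes, chosen purely for illustration) sums blocks of jots across x, y and time to form pixels, mimicking the adjustable ‘grain size’ Fossum describes.

```python
import random

random.seed(1)

# Toy binary jot cube: frames × height × width of single-bit samples.
FRAMES, H, W = 8, 16, 16
cube = [[[random.randint(0, 1) for _ in range(W)]
         for _ in range(H)]
        for _ in range(FRAMES)]

def bin_cube(cube, bx, by, bt):
    """Sum bx*by jots over bt frames to form one output pixel,
    trading spatial and temporal resolution for sensitivity --
    the 'bigger grain' idea described above."""
    frames, h, w = len(cube), len(cube[0]), len(cube[0][0])
    out = []
    for y in range(0, h, by):
        row = []
        for x in range(0, w, bx):
            total = sum(cube[t][y + dy][x + dx]
                        for t in range(bt)
                        for dy in range(by)
                        for dx in range(bx))
            row.append(total)
        out.append(row)
    return out

image = bin_cube(cube, bx=4, by=4, bt=8)  # 4x4 jots over all 8 frames
print(len(image), len(image[0]))  # → 4 4
# Each pixel value is a photon count between 0 and 4*4*8 = 128.
```

Because the binning happens after capture, the same cube could be re-binned with different block sizes in different regions, which is the flexibility the interview highlights.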

What is the object of photography? Is it artistic or an attempt to perfectly recreate the scene as it was?

The time component also opens up additional possibilities, he says: “if an object moves during these hundred frames, instead of adding all the values from the same location, you could add them at an angle that corresponds to the movement,” so that all the pixels relating to the same object are combined. “We could take out motion blur or remove the scanning effect of a computer screen in video.”

The idea of combining multiple frames in interesting ways is, of course, already becoming a core part of mobile photography, and Fossum says finding all the things that are possible is a challenge he is leaving for others: “From my point of view, we’re building a platform for computational imaging, it’s for others to develop all the ways to use it. A camera would have to take account of the new sensor capabilities.”

But it’ll ask interesting questions, he believes: “What is the object of photography? Is it artistic or an attempt to perfectly recreate the scene as it was? Some of the things we associate with photography are artifacts of the way we capture them.”

Not the only future

With all this going for it, it might seem odd that Fossum isn’t promising to deliver a second revolution in digital imaging. But, having devoted a career to developing technologies and teaching about the challenges, he’s realistic both about the work left to do and the competition any product would face.

“What we’ve already achieved is wonderful. The next challenge is adding color [awareness], but I don’t think that’s going to be particularly problematic. Then there’s power: we’ve shown we can produce a large chip that doesn’t consume or disperse a prohibitively large amount of power. We’re currently at around 27mW but scale it up by 1000 [to get to one billion Jots] and that’s 27W, so we need to cut that by about a factor of ten.”

His concern is more about the current state of the rival technologies: “In order to bring a new technology to replace the existing one, it has to be compellingly better in a number of ways,” he says. “For a few niches, [our technology] is already compelling.” But for photography, the bar is already set very high.

I don’t want our startup to be another esoteric imaging product that fails to find a market

“CMOS technology is pretty awesome right now,” he says, before almost embarrassedly stressing that he’s not claiming the credit for this: “where it is today is the result of the input from thousands of engineers from different companies who’ve contributed towards where we are now.”

Professor Eric Fossum pictured with Dr Jiaju Ma, one of the co-authors of the Optica paper and a co-founder of the spin-off company, Gigajot Technology.

But, for all his cautious words, Fossum is convinced enough by the technology’s potential to have created a company, Gigajot Technology, with his co-researchers. “Finding a sweet spot in the market is a really important part of the challenge. It comes back to the things I teach: ‘who is your customer?’ ‘what is your market?’ ‘how are we going to get there?'”

“I don’t want our startup to be another esoteric imaging product that fails to find a market,” he says.

While it’s by no means certain that QIS sensors will make their way into mainstream cameras, it already looks like the technology has tremendous potential for niches such as scientific measurement. This alone shows just how far the technology has come from Fossum’s original idea. As he readily admits: “When we first started this project I wasn’t even sure it could be made to work.”

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on ‘Perfect’ sensors may be possible, but might not come to cameras

Posted in Uncategorized

 

How to Find the Best Possible Time to Shoot Cityscapes at Blue Hour

19 Jan

Blue hour, especially the evening one (yes, it happens before sunrise too!), is probably the most popular time of day for cityscape photography, with dazzling city lights illuminated. But exactly when is the prime time within blue hour that will get you the best possible shots?


Singapore skyline at blue hour.


Hong Kong skyline at blue hour.

Blue Hour Photography Requires a Tripod

One note before we get started. Although you could shoot handheld at blue hour by bumping the ISO up, it’s always advisable to use a tripod so you can shoot clean (noise-free) photos at low ISO (e.g. 100). It also comes with the added bonus of letting you do long exposure photography with smoothed-out water, etc.

For your information, sample photos shown in this post are all shot using my trusty Manfrotto MT190CXPRO3 carbon-fibre tripod.


Setting a tripod up and getting ready for blue hour.

Finding out Your Local Sunset and Dusk Time

Let’s get down to business. In terms of timeline, SUNSET comes first, followed by DUSK 20+ minutes later. The time between sunset and dusk is called TWILIGHT, and NIGHT falls once dusk is over.

To find out your local sunset and dusk times, simply go to timeanddate.com and search for your city (e.g. sunset and dusk in Singapore on January 26th, 2018 are at 19:18 and 19:40 respectively). Alternatively, search Google using the “dusk date city” format (e.g. dusk January 24th, 2018 Singapore) and Google will display the dusk time above the first result. Checking the dusk time has become second nature to me whenever I shoot at blue hour, both locally and when traveling abroad.

Note: Apps like PhotoPills are also really helpful for planning shooting times and figuring out the sunrise, sunset and dusk times daily in any location worldwide.


Sunset to dusk in timeline. Towards the end of dusk is the best time to shoot blue hour photos with beautiful bluish hue in the sky.

Aim for Shooting the Last 10 Minutes of Dusk

Of these 20 or so minutes between sunset and dusk, the first 10 are still not quite “ripe”: city buildings are not yet fully lit up, and the sky hasn’t yet taken on the beautiful bluish hue that appears towards the end of dusk. Use this time to decide on your composition, do some test shots, etc.


This Singapore skyline was shot 15 minutes before the end of dusk (six minutes after sunset) at f/13, 1.6 seconds, ISO 100. The stage isn’t quite set yet, as the sky is still bright and not many of the city lights are illuminated.

When there are about 10 minutes left before dusk, more city buildings will be lit, and a bluish hue starts to appear in the sky, getting deeper with every passing minute. These last 10 minutes of dusk are undoubtedly the prime time for blue hour photography.
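To plan your arrival precisely, this prime window can be computed from the dusk time alone. A minimal sketch in Python (standard library only; the helper name is my own invention, and the Singapore times are the ones mentioned above):

```python
from datetime import datetime, timedelta

def prime_blue_hour_window(dusk, minutes=10):
    """Return (start, end) of the prime blue hour window:
    the last `minutes` minutes before dusk."""
    return dusk - timedelta(minutes=minutes), dusk

# Singapore, January 26th, 2018 (times from timeanddate.com)
sunset = datetime(2018, 1, 26, 19, 18)
dusk = datetime(2018, 1, 26, 19, 40)

start, end = prime_blue_hour_window(dusk)
print(start.strftime("%H:%M"), "-", end.strftime("%H:%M"))  # 19:30 - 19:40
```

In other words, if dusk is at 19:40, be set up and shooting by 19:30 at the latest.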

In addition, the limited available light at blue hour naturally allows for longer shutter speeds, especially with a small aperture. Shoot in Aperture Priority mode and use a bigger f-stop number such as f/13, which helps create smoothed-out water and streaking cloud effects (provided you’re shooting with a tripod).


A neutral density (ND) filter is an item that will enrich your blue hour photography experience and images.

Add an ND Filter

To enhance such effects, try shooting with a neutral density (ND) filter attached. ND filters help reduce the light that is coming through the lens, allowing you to use much slower shutter speeds.

For example, with a 3-stop ND filter attached, a base shutter speed of 2 seconds is extended to 16 seconds (each stop doubles the exposure: 2s → 4s → 8s → 16s, which most cameras display as the nominal 15-second stop). For a greater effect, use a 6-stop ND filter to extend a base shutter speed of 2 seconds to 128 seconds (just over two minutes), which gives your photo the surreal and dreamy feel typically seen in long exposure photography, like the Marina Bay (Singapore) photo below.
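The stop arithmetic is simple doubling: each ND stop doubles the required exposure time. A quick sketch in Python (the function name is my own, for illustration):

```python
def nd_extended_shutter(base_seconds, stops):
    """Each stop of ND filtering doubles the required exposure time."""
    return base_seconds * 2 ** stops

print(nd_extended_shutter(2, 3))  # 16 (shown as the nominal 15s stop on most cameras)
print(nd_extended_shutter(2, 6))  # 128, i.e. just over two minutes
```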


This Marina Bay photo was shot three minutes before the end of dusk (f/13, 135 seconds, ISO 100). The blue hour sky looks just right – not too light, not too dark, not overly vibrant. Also, an exposure of 135 seconds (with a 6-stop ND filter attached) helped create a silky smooth water effect.

Blue Hour Suddenly Ends after Dusk

Blue hour photography is sometimes mixed up with night photography, which starts once dusk is over. You might be surprised to find out that night falls almost suddenly after dusk. It doesn’t even take 10 minutes for the blue hour sky at dusk to turn into pitch-black night.

Personally, I never shoot after dusk. Photos shot after dusk tend to come out very dark and colors look muddy as there is little bluish hue left in the sky. Your photos will look considerably different if you miss this prime time of blue hour even by a mere few minutes.


This Hong Kong skyline was shot 8 minutes after the end of dusk. The bluish hue in the sky had quickly disappeared, and the scene turned to dark night rather abruptly.

Conclusion

In reality, what we call the blue “hour” lasts only about 10 minutes towards the end of dusk (its exact length depends on how far you are from the equator).

Blue hour photography is a time-sensitive genre, as this prime window ends in the blink of an eye. So stay focused; otherwise it can pass you by under the fast-changing dusk sky. I really wish blue hour could literally last for an hour!

Editor’s note: it does in some parts of the world, at certain times of the year. If you want more blue hour time – travel farther away from the equator! Where I live in Canada blue hour is almost a full hour in the summer, versus 20 minutes where the author lives in Singapore.

The post How to Find the Best Possible Time to Shoot Cityscapes at Blue Hour by Joey J appeared first on Digital Photography School.


Digital Photography School

 
Comments Off on How to Find the Best Possible Time to Shoot Cityscapes at Blue Hour

Posted in Photography

 

Joe McNally asks, ‘What’s not possible?’

04 May

Joe’s latest blog post uses Nikon SB-5000 speedlights and a ton of talented folks to transform portrait subjects from the ordinary to the surreal.

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on Joe McNally asks, ‘What’s not possible?’

Posted in Uncategorized

 

Google software engineer shows what’s possible with smartphone cameras in low light

27 Apr
Image: Florian Kainz/Google

On a full moon night last year, Google software engineer Florian Kainz took a photo of the Golden Gate bridge and the City of San Francisco in the background with professional camera equipment: a Canon EOS-1D X and a Zeiss Otus 28mm F1.4 ZE lens. 

When he showed the results to his colleagues at Google Gcam, a team that focuses on computational photography, they challenged him to re-take the same shot with a smartphone camera. Google’s HDR+ camera mode on the Google Nexus and Pixel phones is one of Gcam’s most interesting products. It allows for decent image quality at low light levels by shooting a burst of up to ten short exposures and averaging them into a single image, reducing blur while capturing enough total light for a good exposure. 

However, Florian, being an engineer, wanted to find out what a smartphone camera can do when taken to the current limits of technology, so he wrote an Android camera app with manual control over exposure time, ISO and focus distance. When the shutter button is pressed, the app waits a few seconds and then records up to 64 frames with the selected settings. The app saves DNG raw files, which can then be downloaded for processing on a PC. 

He used the app to capture several night scenes, including an image of the night sky, with a Nexus 6P smartphone, which is capable of shutter speeds up to 2 seconds at high ISOs. On each occasion he shot an additional burst of black frames after covering the camera lens with opaque adhesive tape. Back at the office the frames were combined in Photoshop. Individual images were, as you would expect, very noisy, but computing the mean of all 32 frames cleaned up most of the grain, and subtracting the mean of the 32 black frames removed faint grid-like patterns caused by local variations in the sensor’s black level.
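The combination step Kainz performed in Photoshop can be sketched in a few lines. This is a hedged, pure-Python illustration of the general technique (averaging the light frames pixel-by-pixel, then subtracting the per-pixel mean of the lens-covered dark frames), not his actual pipeline; the frame data below is toy data:

```python
def stack_frames(light_frames, dark_frames):
    """Average a burst of noisy frames pixel-by-pixel, then subtract the
    per-pixel mean of the dark (lens-covered) frames to remove the
    sensor's fixed-pattern offset. Frames are flat lists of pixel values."""
    n = len(light_frames[0])
    light_mean = [sum(f[i] for f in light_frames) / len(light_frames) for i in range(n)]
    dark_mean = [sum(f[i] for f in dark_frames) / len(dark_frames) for i in range(n)]
    return [l - d for l, d in zip(light_mean, dark_mean)]

# Toy example: two noisy 3-pixel light frames, plus dark frames carrying
# a fixed per-pixel offset that the subtraction removes
lights = [[10, 22, 31], [12, 20, 29]]
darks = [[1, 2, 1], [1, 2, 1]]
print(stack_frames(lights, darks))  # [10.0, 19.0, 29.0]
```

Averaging N frames reduces random noise by roughly a factor of √N, which is why the 32-frame mean looks so much cleaner than any individual exposure.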

The results are very impressive indeed. At 9 to 10MP the images are smaller than the output of most current DSLRs, but the photos are sharp across the frame, there is little noise and dynamic range is surprisingly good. Getting to those results took a lot of post-processing work, but with smartphone processors becoming ever more powerful it should only be a matter of time before the sort of complex processing Florian did manually in Photoshop can be done on the device. You can see all the image results in full resolution and read Florian’s detailed description of his capture and editing workflow on the Google Research Blog.

 Image: Florian Kainz/Google

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on Google software engineer shows what’s possible with smartphone cameras in low light

Posted in Uncategorized