
Posts Tagged ‘Change’

NVIDIA Computational Zoom lets you change perspective and focal length in post

03 Aug

Researchers with the University of California, Santa Barbara (UCSB) and NVIDIA have developed a new technology called ‘computational zoom’ that can be used to adjust the focal length and perspective of an image after it has been taken. The technology is detailed in a recently published technical paper, as well as in an accompanying video that shows it in action. With it, photographers are able to tweak an image’s composition during post-processing.

According to UCSB, computational zoom technology can, at times, allow for the creation of ‘novel image compositions’ that can’t be captured using a physical camera. One example is the generation of multi-perspective images featuring elements from photos taken using a telephoto lens and a wide-angle lens.

To utilize the technology, photographers must take what the researchers call a ‘stack’ of images, where each image is taken slightly closer to the subject while the focal length remains unchanged. The combination of an algorithm and the computational zoom system then determines the camera’s orientation and position based on the image stack, followed by the creation of a 3D rendition of the scene with multiple views.
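The paper’s actual pipeline (camera pose estimation followed by multi-view 3D reconstruction) is considerably more involved, but the basic idea of a multi-perspective composite can be sketched with a toy pinhole model. Everything below is our own made-up illustration (the focal lengths, camera positions and depth threshold are invented numbers), not NVIDIA’s code:

```python
import numpy as np

def project(points, focal, cam_z):
    """Pinhole projection of (N, 3) world points for a camera at (0, 0, cam_z)
    looking down +Z; returns (N, 2) image-plane coordinates."""
    rel = points - np.array([0.0, 0.0, cam_z])
    return focal * rel[:, :2] / rel[:, 2:3]

def multi_perspective(points, depth_split=5.0):
    """Render near points as seen by a close wide-angle camera and far points
    as seen by a pulled-back telephoto camera: a computational-zoom-style composite."""
    wide = project(points, focal=24.0, cam_z=0.0)   # close camera, short focal length
    tele = project(points, focal=85.0, cam_z=-8.0)  # distant camera, long focal length
    near = points[:, 2] < depth_split               # depth decides which view each point uses
    return np.where(near[:, None], wide, tele)

pts = np.array([[1.0, 0.0, 2.0],    # near subject
                [1.0, 0.0, 50.0]])  # distant background
uv = multi_perspective(pts)
```

Because the background rows come from the telephoto projection, distant elements render relatively larger than a single wide-angle shot would show them, which is the ‘background pulled closer to the subject’ effect the video demonstrates.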

“Finally,” UCSB researchers explain, “all of this information is used to synthesize multi-perspective images which have novel compositions through a user interface.”

The end result is the ability to change an image’s composition in real time using the software, bringing a photo’s background seemingly closer to the subject or moving it further away, as well as tweaking the perspective at which it is viewed. Computational zoom technology may make its way into commercial image editing software, according to UCSB, which says the team hopes to make it available to photographers in the form of software plug-ins.

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on NVIDIA Computational Zoom lets you change perspective and focal length in post

Posted in Uncategorized

 

CP+ 2017 – Fujifilm Interview: ‘We hope that the GFX will change how people view medium format’

24 Feb
Toshihisa Iida, General Manager of Fujifilm’s Optical Device and Electronic Imaging Products Division, posing with the new medium-format GFX 50S.

We’re at the CP+ 2017 show in Yokohama, Japan, where Fujifilm is preparing to ship its long-awaited medium-format GFX 50S.

We sat down with three Fujifilm executives: Toshihisa Iida, General Manager of Fujifilm’s Optical Device and Electronic Imaging Products Division; Makoto Oishi, Manager of the division’s Sales and Marketing Group; and Shinichiro Udono, Senior Manager of the same group. We wanted to learn more about the GFX, some of the challenges of creating a medium-format system, and future plans for GFX and X Series development.

Now that the GFX is ready, and about to ship, this must be quite exciting for you.

Yes, absolutely. For the past four or five years we’ve been concentrating on the APS-C format, and a lot of people were asking us when we’d enter the larger format market. Once some time had passed, and we’d produced a good number of APS-C lenses, we started to look more seriously at large format to attract more customers. That was about two years ago.

The GFX 50S is a mirrorless medium-format camera built around a 43.8 × 32.9mm CMOS sensor. Although the camera borrows a lot of design cues from its smaller X Series cousins, the GFX offers a very different handling experience. Despite being based around such a large sensor, the combination of camera and 63mm prime lens is surprisingly lightweight and very well-balanced.

Since the development announcement at Photokina we’ve received a lot of positive feedback from photographers. We started a program called the ‘GFX Challenge’, where we loaned GFX cameras to photographers from various fields, in order to get feedback. Based on that feedback we refined the camera’s software. Now that we’re almost ready to ship, I can’t wait to get feedback from customers.

What kind of changes resulted from the Challenge feedback?

Most feedback was more or less as we’d expected. Photographers were surprised by how small and light the camera was. We made a few changes on the firmware side, mostly small refinements to things like how the dials work, to make accidental control inputs less likely.

What were the biggest technical challenges that you faced when moving from APS-C to medium format?

The sensor size is 4X as large, so speed and responsiveness were the major challenges: readout speed, processing and autofocus.

Makoto Oishi shows off the 50MP medium-format sensor used in the GFX 50S.

The GFX does not offer phase-detection – are the lenses designed to support this in the future?

Yes, definitely.

You’re joining Ricoh in the medium format market, and some long-established brands like Hasselblad and Phase One. Are you expecting other manufacturers to enter this market too?

We don’t know. Obviously, the other brands are focusing on full-frame at the moment. Obviously though we’d welcome any brand that joins this category, because it will increase awareness, and help the category as a whole.

When you were planning a product like the GFX, did you come up with any predictions about the growth of the medium-format market?

At the moment we’re just focusing on making the best product we can. We hope that the GFX will change how people view medium format, and this will help to grow the entire category.

What’s your medium-term strategy for growth in this product line? Will there be longer product cycles, for instance?

Obviously the sales volume will be lower, so the product life cycle will probably be longer. But whenever we have the right combination of the right hardware, the right sensor and the right processor, we’ll introduce a new camera.

When you were planning the GFX, what kind of photographers did you have in mind?

After our experience with the GFX challenge, we actually see a much wider potential audience than we’d originally thought. It will depend on what kinds of lenses we introduce. For example, we didn’t think that street photographers would use medium format much, but [based on feedback] we hope that we can reach a broader audience.

You have a six-lens roadmap for GFX right now – how will this lineup evolve?

After the announcement of the GFX we started to get a lot of requests from photographers about other lenses. For example a lot of photographers are asking us for telephoto lenses, in the 200-300mm range. Nature photographers for example. Also people are asking for a wide-angle, like a 15mm equivalent, and an equivalent to the 70-200mm on full-frame.

Fujifilm’s recently updated lens roadmap for the APS-C X Series, including new lenses coming next year. We’re told that ultra-wide and fast tele lenses have been requested for the GFX platform, too. 

If you do develop those kinds of longer lenses, aimed at wildlife photographers, presumably the autofocus system will need to be able to keep up?

The autofocus algorithm in the GFX is the same as in the X Series, but performance is different. The readout speed of the sensor is critical, and that’s not the same. Compared to the X Series, the speed is more limited.

Is this something you’ll be working on in the future?

Yes absolutely.

When you started coming up with the concept for a medium format camera, did you ever consider using a non-mirrorless design?

When we started studying the possible design, we were aware that some of our customers wanted a rangefinder-style camera. ‘It’s a Fujifilm medium-format, it has to be a rangefinder!’ However, at least in our first-generation camera, we wanted to reach a wider audience. We concluded that a mirrorless design would be much more versatile. Mirrorless gives us more freedom, and more flexibility.

The GFX’s 50MP sensor is 4X larger than the APS-C sensors in Fujifilm’s X Series cameras. This entails a lot of extra processing power, which is one of the reasons why the GFX sensor has a conventional Bayer color filter array.

Was it easier, ultimately, to design around a mirrorless concept?

There are fewer mechanical parts, which is simpler. No mirror or pentaprism also means smaller size and weight.

Did you design this camera with the intention that customers could use adapted lenses from other systems?

Yes of course. We made the flange-back distance short enough to accommodate mount adapters for legacy lenses. We are making two adapters, one for H (Hasselblad) mount, and one for view cameras.

When will we begin to see mirrorless cameras take over the professional market?

There are several things that mirrorless manufacturers need to focus on. Number one is speed, still, to attract sports photographers. Also viewfinder blackout, we need to innovate there. Maybe one more processor and sensor generation should be enough to make mirrorless beat DSLRs in every respect.

By the time of the Tokyo 2020 olympics, will there be mirrorless cameras on the sidelines?

I think so, yes.

From Fujifilm?

Hopefully!

Can you tell us about the new Fujinon cine lenses that you’ve released?

Yesterday we announced new Fujinon cine lenses, in what we’re calling the ‘MK’ series: fully manual zooms, with manual focus. Initially we’re introducing them in E-mount versions, but X mount will follow. They’re designed to cover Super 35. The flange-back distances of E and X mount are very similar, so we can use the same optics.

The new Fujinon MK18-55mm T2.9 and 50-135mm T2.9 cover the Super 35 imaging area (~APS-C) and are being released in Sony E and Fujifilm X mount.

We have an optical devices division, which markets broadcast and cinema lenses, and I really want to maximize synergies between the broadcast and photography divisions.

Fujinon is well-known in cinema lenses, but until now, the lenses have been very big and very expensive. But now we’re looking at a new kind of video customer, who’s getting into the market via mirrorless. Mostly they’re using SLR lenses, which aren’t perfect. So a lot of those customers are looking for more affordable cinema lenses.

Do you see most potential in the E-mount, for video?

Yes, we think so. But obviously we’re releasing these lenses in X-mount too, and increasing movie quality in the X Series is very important. Traditionally, Fujifilm has been more of a stills company, but when we introduced the X-T2, we had a lot of good feedback about the 4K video, especially about color. Of course we need to do more, and we need to develop more technology, but I think there’s a lot of potential.

For now, Fujifilm tells us that they see most potential in videographers using Sony’s E-mount mirrorless cameras, but the company has ambitious plans to expand the video functionality of its X Series range. 

Moving on to the X100F – what was the main feedback from X100T users, in terms of things that they wanted changed?

A lot of customers wanted improved one-handed operability. So we moved all the buttons to the right of the LCD, like the X-Pro 2. And the integrated ISO and shutter speed dial, for instance.

The lens remains unchanged – why is this?

We looked into whether we should change it, but it would have affected the size of the camera, and we concluded that the form-factor is one of the most important selling-points of the X100 series. Of course we evaluated the image quality, with the new 24MP sensor, but concluded that it was still good.

If it ain’t broke, don’t fix it. The X100F features the same 23mm F2 lens as its predecessors, but Fujifilm ran the numbers and saw no reason to update the lens for 24MP. We do wish there was a 28mm version, though.  

Do your customers ask you for an X100-series camera with a 28mm lens?

Yes, of course. That’s why we have the 28mm wide converter for the X100, and the X70. And there’s potential to expand the fixed-lens APS-C camera range more.

Will X-Trans continue in the next generation of APS-C sensors?

For APS-C, definitely. For the GFX format, we’ll probably continue with the conventional Bayer pattern. If you try to put X-Trans into medium format, the processing gets complicated, and the benefit isn’t very big.

How big is the extra processing requirement for X-Trans compared to Bayer?

X-Trans is a 6×6 filter arrangement rather than 4×4, so it’s something like a 20-30% increase in processing requirement.
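The difference in scale between the two repeating cells is easy to illustrate in code. This is our own sketch, not a Fujifilm figure: the Bayer cell repeats every 2×2 pixels (demosaicing windows are larger than the repeat cell, which is presumably what the 4×4 above refers to), while X-Trans repeats every 6×6. The exact X-Trans layout below follows one commonly published depiction, so treat the arrangement, rather than just the R/G/B counts, as an assumption:

```python
# Repeating color-filter-array cells. Bayer's unit cell is 2x2; X-Trans's is
# 6x6. The X-Trans rows follow one published depiction; rotations/reflections
# of this cell appear in different diagrams.
BAYER = [
    "RG",
    "GB",
]
XTRANS = [
    "GBGGRG",
    "RGRBGB",
    "GBGGRG",
    "GRGGBG",
    "BGBRGR",
    "GRGGBG",
]

def counts(pattern):
    """Count R/G/B sites in one repeating cell of a color filter array."""
    flat = "".join(pattern)
    return {c: flat.count(c) for c in "RGB"}

bayer_counts = counts(BAYER)    # half the sites are green
xtrans_counts = counts(XTRANS)  # 20 of 36 sites are green
```

Either way, a demosaicing algorithm that must consider a 6×6 neighborhood instead of a 2×2 one has far more cases to handle, which is consistent with Iida-san’s point about the extra processing cost.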


Editor’s note:

It’s exciting to pick up and use a production-quality GFX 50S after writing about it for so many months, and Fujifilm’s senior executives are understandably keen to get the camera into the hands of photographers. Due to ship in just a few days, the GFX looks like a hugely impressive product. We’ll have to wait for Raw support to take a really detailed look at what the camera can do, but our early shooting suggests that image quality really is superb.


It was interesting to learn a little about the feedback process, by which Fujifilm gathered notes, impressions, and suggestions from professional photographers after the launch of the GFX last year. The end result is a very nicely balanced camera, both literally (it’s surprisingly lightweight) and figuratively. Although obviously very different to the X series APS-C models, the GFX is simple to figure out, and easy to shoot with. When Mr Iida says that he hopes that ‘the GFX will change how people view medium format’, part of this comes down to handling. 

It was also interesting to hear that Fujifilm considered other types of design for the GFX. Are there concept renderings somewhere of an SLR design, or a rangefinder? Probably. Will we ever see a medium-format SLR or rangefinder from Fujifilm? Personally, I wouldn’t be surprised if the company releases a rangefinder-styled medium-format mirrorless. An X-Pro 2-style camera with a medium-format sensor and a hybrid viewfinder? Yes please.

For now though, the GFX is quite enough camera to be getting on with. Beyond medium-format, indeed beyond still imaging, Fujifilm is eyeing the video market. While Fujinon cine lenses have been popular in the film industry for decades, Mr Iida has his eye on a new generation of videographers, who are growing up using mirrorless cameras like Sony’s a7S and a7R-series. This makes sense, but it’s interesting that the new Fujinon zooms will also be manufactured in X mount versions. This level of confidence from Fujifilm in its X series’ video capabilities is good to see, and bodes well for future product development. 

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on CP+ 2017 – Fujifilm Interview: ‘We hope that the GFX will change how people view medium format’

Posted in Uncategorized

 

The Vertu Constellation is a luxury smartphone with high-end specs, for a change

26 Jan

Vertu is a company known for its “handmade in England” luxury smartphones and eye-watering price tags. However, while many Vertu devices easily set you back as much as a decent used car, their specifications tend to be in line with mid-range rather than high-end models.

However, the latest incarnation of the Vertu Constellation, announced today, changes things up. It comes with actual high-end specifications and dual-SIM capability as an added bonus. The 5.5″ QHD AMOLED display is protected by scratch-proof sapphire glass and the Android 6 operating system is powered by a Qualcomm Snapdragon 820 chipset and 4GB of RAM. Also on board are 128GB of expandable storage, NFC, a USB Type-C port, and a 3,220 mAh battery. Front-facing stereo speakers with Dolby Digital Plus virtual surround sound processing should provide a good sound experience, and all the components have been wrapped up in a hand-crafted shell made from anodized aluminum and finished with leather, ‘sourced from a specialist, family-run tannery in Italy.’

In the camera department the Constellation offers a 12MP rear camera with 1.55µm pixel size, which equates to a 1/2.3″-type sensor. Video recording at 4K is supported, but there is no mention of optical image stabilization. No pricing information for the device has been released yet, but it’s safe to assume it will be a four-figure US$ number. In case you’re still not sure whether this kind of price tag is justified, consider that Vertu’s 24-hour concierge service is included; on the Constellation it is accessed by pressing a ruby button on the side of the phone. You also get access to iPass, the world’s largest Wi-Fi network. If you want to shoot mobile images in style, you won’t have to wait too long: the Constellation will be available from February. You can register your interest now on the Vertu website.
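That pixel-size-to-sensor-size conversion is easy to check. Assuming a standard 4:3 layout of roughly 4000×3000 pixels for 12MP (our assumption; Vertu doesn’t publish the exact figures), a 1.55µm pitch works out to a diagonal of about 7.75mm, right in 1/2.3″-type territory (~7.7mm):

```python
import math

# Assumed 4:3 pixel layout for a 12MP sensor; the exact resolution is unpublished.
w_px, h_px, pitch_um = 4000, 3000, 1.55
w_mm = w_px * pitch_um / 1000     # sensor width  = 6.20 mm
h_mm = h_px * pitch_um / 1000     # sensor height = 4.65 mm
diag_mm = math.hypot(w_mm, h_mm)  # diagonal = 7.75 mm, consistent with 1/2.3"-type
```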

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on The Vertu Constellation is a luxury smartphone with high-end specs, for a change

Posted in Uncategorized

 

Sony a6300 versus a6500: what’s changed, and what still needs to change

18 Oct

Sony a6300 versus a6500

That was quick.

Just eight months after Sony introduced the a6300, a higher-end sister model to the a6000, we now have another higher-end sister model in the a6500. The sheer speed of Sony’s product releases lately is somewhat appropriate, given the outright shooting speed these cameras are capable of.

Both cameras feature the same 24MP APS-C CMOS sensor, the same 425-point on-sensor PDAF system, the same viewfinder, the same video specification, and the same 11 fps burst shooting rate (8 fps with Sony’s implementation of ‘live view’). Wait a second – what exactly is new to the a6500?

It turns out there’s a handful of changes that can have big implications for how photographers will interact with and use these cameras. But are they worth the $400 premium on the new model? Let’s take a look.

Continuous shooting

Patrick Murphy-Racey discusses using the a6500 for peak action (like drag races) due to its burst speed and autofocus system.

A deeper buffer combined with a newly developed front-end LSI (Large Scale Integration: basically an additional chip providing more processing power) promises more responsive performance when shooting bursts: 300 JPEGs or 107 Raws can be captured at 11 fps with full autofocus and autoexposure. Users can also instantly review or check focus on the last image that the camera has written to the card (though that might not necessarily be the last image in the burst), with the added plus of an indicator showing just how many images remain to be written to the card.

Comparatively, the a6300 can still shoot at 11 fps with full autofocus and autoexposure, but only for 44 JPEGs or 21 Raws. And while the camera is writing to the card, you can’t enter playback or magnify the displayed image (if you have image review on). We’re particularly happy to see that last limitation go, as it makes the camera considerably more usable.
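A quick back-of-envelope calculation shows what those buffer numbers mean in the field: at 11 fps, the a6500’s 107-Raw buffer sustains nearly ten seconds of continuous shooting, versus under two seconds for the a6300:

```python
def burst_seconds(buffer_frames, fps=11):
    """Seconds of continuous shooting before the buffer fills,
    assuming the camera sustains its rated frame rate throughout."""
    return buffer_frames / fps

a6500_raw = burst_seconds(107)   # ~9.7 s of Raw capture
a6300_raw = burst_seconds(21)    # ~1.9 s of Raw capture
a6500_jpeg = burst_seconds(300)  # ~27 s of JPEG capture
```

In practice card write speed extends the usable window slightly (frames drain to the card as you shoot), so treat these as conservative lower bounds.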

In-body 5-axis stabilization

Sony consolidated the shutter charge and shutter mechanisms to one side to make room for the IBIS unit in the a6500.

Without increasing the depth of the camera body, Sony has redesigned the a6500’s shutter mechanism to not only be more durable (tested – though not guaranteed – to 200,000 cycles), but also to incorporate 5-axis stabilization with non-stabilized lenses. What’s more, when you pair an optically stabilized lens with the a6500, the camera knows to pass of pitch and yaw correction to the lens’ stabilization system. This doesn’t increase the effectiveness more than the rated 5 stops, but is likely to help maintain effectiveness when shooting at longer focal lengths.

There’s also the intriguing possibility of shooting full 4K stabilized video with any lens – but we’re withholding our verdict on the resulting image quality until we can test it for ourselves. After all, the core video specification and performance haven’t changed from the a6300 to the a6500, and we’re curious to see if the stabilization system has any effect on the rather lackluster rolling shutter performance of the a6300.

And, of course, the a6300 offers no in-body stabilization.

Touch and see

The a6500’s screen is touch-enabled, whereas the a6300’s isn’t. They share the same resolution (and the touch panel doesn’t seem to have affected glare or fingerprint-resistance), but on the a6500 you can now use the screen to quickly place an AF point, use it as a ‘touchpad’ to move your AF point around with your eye to the finder, and double-tap to zoom and swipe around an image in playback.

So while AF performance will likely remain the same on the a6500, you may now find you’re more quickly able to adapt to a scene in front of you by using the touchscreen as opposed to the cumbersome sequence of button presses most Sony cameras require for focus point movement.

That said, in touchpad mode, the control of the AF point is unfortunately always relative, rather than (at least an option for) absolute, so you swipe to move the AF point from its current position, rather than touching exactly where you want it to be. This meant we often found ourselves swiping repeatedly to get the AF point from one side of the frame to the other. This could be obviated with absolute positioning combined with limiting the touchpad area to the upper right quadrant, something we suggested to Sony in person. Lastly, we found the touchpad performance to be decidedly laggy, especially when compared to competitors’ offerings.

When it comes to video, the a6500’s touchscreen is particularly useful for focus pulls, since you can just tap to change the focus point and initiate a rack focus (and as always, you can control how quickly the camera will rack focus). Less easy is getting the camera – in video – to continue to track your subject around the frame after you’ve tapped on it, since Lock-on AF is unavailable in video (something we continue to request Sony to address).

There appears to be a workaround, though: if you turn the old, vestigial ‘Center Lock-on AF’ on, then tapping appears to initiate subject tracking. Unfortunately, ‘Center Lock-on AF’ isn’t always the most reliable, and it’s still somewhat cumbersome to work this way, as you first have to turn the feature on, which requires either a (Fn/main) menu dive or a dedicated button assigned to it, plus a couple more button presses before you tap.

Controls and usability

Autofocus and video options are among the new ‘groupings’ within the updated Sony menu system.

Besides the touchscreen, the other major control change on the a6500 compared to the a6300 is the addition of C2 | C1 custom buttons on the top plate, a7-style. They’re nicer buttons than the soft-press C1 button of the a6300, providing more haptic feedback. The a6500’s grip has also been redesigned to be ‘chunkier’ and deeper than that on the a6300, again much like the a7 Mark II cameras, which should help when using heavier or longer lenses.

The a6500 also inherits the redesigned menu interface that debuted in the a99 II, which is, in our opinion, much more user-friendly than the interface on the a6300 (and the a6000, for that matter). The tabs are now color-coded, but more importantly, similar functions like autofocus, image parameters and movie settings are grouped together. This makes it much less likely that you’ll miss a moment while rocketing through the menu to find a setting you swear you saw somewhere in there last week. It’s one of our favorite additions to the a6500, and it’s about time.

Unfortunately, you still can’t make a custom ‘My Menu’-like page in this menu system. A shame, as it’s an easy way to group most-used menu items into one section for quicker access, particularly ones – like movie options – that can’t be assigned to the custom Fn menu.

What hasn’t changed (and should have been)

There’s no question that the a6500 is an incredibly well-specified camera in an impressively small package. In terms of usability, Sony has made great strides on this new model with the addition of a touchscreen and a revamped menu system. Unfortunately, there are still a few aspects of the a6500 that we can’t help but wish Sony had addressed.

Unfortunately, like the a6300, the a6500 lacks a control dial on the front of the camera, an omission that is all the more glaring given its higher position in the market. Unlike the a6300, the a6500 has direct peers that all offer twin control dials, and a front control dial would further aid usability in our opinion (though we’d also happily take the ‘Tri-Navi’ system of the old NEX-7 flagship as a compromise).

The core stills and video specification hasn’t changed at all since the a6300 (although the new front-end LSI is supposed to help with JPEG performance at higher ISO values). Now, the a6300 already produced great results under almost any circumstances with one big exception – rolling shutter in 4K. While the detail level is impressive and the capture aids are extensive (S-Log, zebra patterning, focus peaking, etc.), we would really have liked to see Sony address the rolling shutter issue in this new model. And a headphone monitoring port wouldn’t hurt, either.

Limited battery life is a problem endemic to mirrorless cameras as a whole, and the a6000-series is no exception. Indeed, probably thanks to the additional processing and the touchscreen, the a6500’s battery life rating has actually decreased by over 10% compared to the a6300. It goes without saying that’s a change in the opposite direction from the one we’d like.

Adding it all up

The a6500’s additions over the a6300 are small in number, but potentially huge for what they offer users. Sure, the new model comes at a $400 premium over its mid-range sibling, but the upgrades in the new flagship model have the potential to be significant.

Of course, whether they’re significant to you depends on whether they line up with what you like to shoot. If you don’t shoot long bursts, or don’t find yourself checking focus all that often, the additional buffer performance isn’t likely to matter. If you shoot a lot of video and want more flexibility with lens choice, the in-body stabilization is likely to be very helpful. One thing that we feel will positively impact all users – even those who primarily use the viewfinder – is the touchscreen. That said, its laggy behavior is disappointing considering just how much processing power this camera has.

We generally feel that, given the sheer capability of this camera, the price premium over the a6300 is warranted. The Sony a6500 represents a lot of camera in a very lightweight package, and it’s encouraging to see that Sony is continuing to refine its APS-C offerings.

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on Sony a6300 versus a6500: what’s changed, and what still needs to change

Posted in Uncategorized

 

The Power to Change: 12 Brilliantly Reclaimed Energy Stations

11 Oct

[ By SA Rogers in Architecture & Public & Institutional. ]


As cities grow and their power needs change, the historic and often surprisingly beautiful structures housing turbines, generators, coal and gas are decommissioned, becoming prime candidates for redevelopment. A wave of power stations built at the turn of the 20th century, packed full of period details, has recently been transformed into cultural centers, hotels, apartments and more, including London’s stunning Battersea Power Station.

Power Plant to Cultural Art Space by Renzo Piano, Moscow


A historic power plant on the banks of the Moskva River in Moscow will become a new cultural center, transformed as part of a larger contemporary art site by architecture firm Renzo Piano Building Workshop. The main building was built between 1904 and 1907 and will be extensively renovated to add lots of glass, while the original towers remain intact to provide natural ventilation.

Battersea Power Station to Residential Tower, London


A pair of twin coal-fired power station buildings set on the edge of the River Thames in London were decommissioned way back in 1983, but are considered such an important landmark that they’ve been preserved, awaiting the perfect redevelopment plan that takes advantage of their beautiful Art Deco interior fittings and decor. One of the largest brick buildings in the world, Battersea Power Station has been the subject of many proposals, including turning it into an eco-dome or an amusement park, all of which have ultimately fallen through. The latest proposal places the original building at the center of a mixed-use complex by architects Norman Foster and Frank Gehry, which includes both luxury residences and affordable homes, a hotel, a gym, and a series of shops, cafes and restaurants.

Brick Power Station to 5 Star Hotel, South Africa


Decommissioned since 2001, this old power station on Thesen Islands in South Africa once used waste timber to power huge turbines, which supplied electricity to nearby Knysna and Plettenberg Bay. Now, it’s part of the 5-star, 24-room boutique Turbine Hotel by CMAI Architects, redesigned to keep as much of the original structure and equipment intact as possible. Mechanical equipment, operating panels, piping and the original turbines are all incorporated into the new complex, and details like gauges and dials were worked into various parts of the hotel. The entire development scheme is considered a ‘living museum,’ where guests can clearly see what the building used to be while experiencing it in a new way.

Coal-Burning Power Plant to College Learning Center


A coal-burning power plant in a small Wisconsin town will become part of liberal arts institution Beloit College as a learning and wellness center. With Chicago-based Studio Gang Architects at the helm, the project will preserve the industrial feel of the site while offering a coffee shop, conference hall, lounges, lecture hall and theater, as well as a competition swimming pool, 3-lane track, 10,000-square-foot fitness center and 17,000-square-foot gymnasium. It’s set to be finished in 2018.








WebUrbanist

 
Comments Off on The Power to Change: 12 Brilliantly Reclaimed Energy Stations

Posted in Creativity

 

Fear, Laziness & Change

27 Jul

Fear, Laziness & Change – lazy, slow and unimaginative industries, companies and people


Fashion Photography Blog

 
Comments Off on Fear, Laziness & Change

Posted in Uncategorized

 

Lytro poised to forever change filmmaking: debuts Cinema prototype and short film at NAB

21 Apr
Lytro debuted its Cinema prototype to an eager crowd at NAB 2016 in Las Vegas, NV.

Lytro greeted a packed showroom at NAB 2016 in Las Vegas, Nevada to demo its prototype Lytro Cinema camera and platform, as well as to debut footage shot on the system. To say we’re impressed by what we saw would be an understatement: Lytro may be poised to change the face of cinema forever.

The short film ‘Life’, containing footage shot both on Lytro Cinema and on an Arri Alexa, demonstrated some of the exciting applications of light field capture in video. Directed by Academy Award winner Robert Stromberg and shot by VRC Chief Imaging Scientist David Stump, ‘Life’ showcased the ability of light field capture to obviate green screens, allowing backgrounds or other scene elements to be extracted based on depth information, and CGI elements to be integrated seamlessly into scenes. Lytro calls it ‘depth screening’, and the effect looked realistic to us.

‘Life’ showcased the ability of Lytro Cinema to essentially kill off the green screen

Just as exciting was the demonstration of a movable virtual camera in post: since the light field contains multiple perspectives, a movie-maker can add in camera movement at the editing stage, despite using a static camera to shoot. And we’re not talking about a simple pan left/right, up/down, or a simple Ken Burns effect… we’re talking about actual perspective shifts. Up, down, left, right, back and forth, even short dolly movements – all simulated by moving a virtual camera in post, not by actually having to move the camera on set. To see the effect, have a look at our interview with Ariel Braunstein of Lytro, where he presents a camera fly-through from a single Lytro Illum shot (3:39 – 4:05):

The Lytro Cinema is capable of capturing these multiple perspectives because of ‘sub-aperture imaging’. Head of Light Field Video Jon Karafin explains that the system is made of multiple lenses (we see what appears to be two separate openings in the photo below), and behind each lens, in front of the sensor, is a microlens array consisting of millions of small lenses similar to what traditional cameras have. The difference, though, is that there is a 6×6 pixel array underneath each microlens, meaning that any one XY position of those 36 pixels under one microlens, when combined with the same position pixel under all other microlenses, represents the scene as seen through one portion, or ‘sub-aperture’ of the lens. These 36 sub-aperture images essentially provide 36 different perspectives, which then allow for computational reconstruction of the image with all the benefits of light field.
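To make the idea concrete, here is a minimal Python sketch of how sub-aperture views could be pulled out of an idealized plenoptic raw image. It assumes a perfectly aligned n×n pixel block under each microlens (real hardware needs calibration, demosaicing and resampling), and the function name is ours, not Lytro's.

```python
import numpy as np

def extract_subaperture_images(raw, n=6):
    """Split an idealized plenoptic raw image into n*n sub-aperture views.

    raw: 2D array of shape (H*n, W*n), where each microlens covers an
    aligned n x n block of pixels (a simplification of real hardware).
    Returns an array of shape (n, n, H, W): one image per sub-aperture.
    """
    H, W = raw.shape[0] // n, raw.shape[1] // n
    # Reshape so the (u, v) position under each microlens becomes the
    # leading indices: pixel (u, v) taken from under every microlens,
    # assembled together, forms one perspective of the scene.
    views = raw.reshape(H, n, W, n).transpose(1, 3, 0, 2)
    return views

# Toy example: a 12x12 "sensor" with 2x2 pixel tiles under each microlens
raw = np.arange(12 * 12, dtype=float).reshape(12, 12)
views = extract_subaperture_images(raw, n=2)
print(views.shape)  # (2, 2, 6, 6) -> 4 perspectives of a 6x6 scene
```

In this model, gathering pixel (u, v) from under every microlens yields one of the n² perspectives – 36 of them for the 6×6 arrays described above.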

The 36 different perspectives afford some freedom to move a virtual camera in post, but that freedom is of course limited, affected by considerations like lens, focal length and subject distance. It’s not clear yet what that range of freedom is with the Cinema, but what we saw in the short film was impressive, something cinematographers will undoubtedly welcome in place of setting up motion rigs for small camera movements. Even from a consumer perspective, consider what auto-curation of user-generated content could do with tools like these. Think Animoto on steroids.

Front of the Lytro Cinema, on display at NAB 2016. We see two openings, though it’s not clear how many main imaging lenses exist in the prototype yet.

We’ve focused on depth screening and perspective shift, but let’s not forget all the other benefits light field brings. The multiple perspectives captured mean you can generate 3D images or video from every shot at any desired parallax disparity (3D filmmakers often have to choose their disparity on-set, only able to optimize for one set of viewing conditions). You can focus your image after the fact, which saves critical focus and focus approach (its cadence) for post.* Selective depth-of-field is also available in post: choose whether you want shallow, or extended, depth-of-field, or even transition from selective to extensive depth-of-field in your timeline. You can even isolate shallow or extended depth-of-field to different objects in the scene using focus spread: say F5.6 for a face to get it all in focus, but F0.3 for the rest of the scene.

Speaking of F0.3 (yes, you read that right), light field allows you to simulate, in post, faster (and smaller) apertures previously thought impossible, which in turn places fewer demands on lens design. That’s what allowed the Illum camera to house a 30-250mm equiv. F2.0 constant aperture lens in a relatively small and lightweight body. You could open that aperture up to F1.0 in post, and at the demo of Cinema at NAB, Lytro impressed its audience with – we kid you not – F0.3 depth-of-field footage.

The sensor housing appears to be over a foot wide. That huge light field sensor gets you unreal f-stops down to F0.3

But all this doesn’t come without a cost: the Lytro Cinema appears massive, and rightfully so. A 6×6 pixel array underneath each microlens means there are 36 pixels for every 1 pixel on a traditional camera; so to maintain spatial resolution, you need to grow your sensor and your total number of pixels. Which is exactly what Lytro did – the sensor housing appeared to our eyes to be over a foot in width, sporting a whopping 755 million total pixels. The optics aren’t small either. The total unit lives on rails on wheels, so forget hand-held footage – for now. Bear in mind, though, the original Technicolor cinema camera back in 1932 appeared similarly gargantuan, and Lytro specifically mentioned that different versions of Cinema are planned, some smaller in size.

The Lytro Cinema is massive. The sensor is housed in the black box behind the orange strut, which appears to be at least a foot wide. It comes with its own traveling server to deal with the 300GB/s data rates. Processing takes place in the cloud where Google spools up thousands of CPUs to compute each thing you do, while you work with real-time proxies.

So what does 755MP get you? A lot of data, for starters. We spoke to Lytro some time back about this, and were told that the massive sensor requires a bandwidth of around 300GB/s. That means Lytro Cinema comes with its own server on-set to capture that data. But processing that data isn’t easy either – in fact, no mortal laptop or desktop need apply. Lytro is partnering with Google to send footage to the cloud, where thousands of CPUs crunch the data and provide you real-time proxies for editing. One major concern with Lytro’s previous cameras was the resolution trade-off: recording angular information means that spatial resolution is sacrificed. The Illum had a roughly 40MP sensor yet yielded only about 5MP images – nearly a 10-fold resolution cost. With 755MP though, even a 10x resolution cost would yield 76MP – well above the requirements for 4K video.**
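As a rough back-of-envelope check on those numbers (the exact trade-off for the Cinema is not public), dividing the raw pixel count by the angular samples per microlens gives a ballpark effective resolution:

```python
def effective_resolution_mp(sensor_mp, cost_factor=36):
    # Spatial resolution after dividing the raw pixel count among the
    # angular samples per microlens (36 for a 6x6 array). The real
    # trade-off for Lytro hardware is not public, so treat this as a
    # rough estimate rather than a spec.
    return sensor_mp / cost_factor

print(effective_resolution_mp(755))      # ~21 MP with a full 6x6 cost
print(effective_resolution_mp(755, 10))  # 75.5 MP at the 10x cost cited above
```

Even the pessimistic 36× divisor lands above 4K's roughly 8.3MP per frame, which is why the 755MP figure matters.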

Thousands of CPUs on Google’s servers crunch the data and provide you real-time proxies for editing

Speculation aside, the 4K footage from the Lytro Cinema that was mixed with Arri Alexa footage to create the short ‘Life’, viewed from our seating position, appeared comparable to what one might expect from professional cinema capture. Importantly, the footage appeared virtually noise free – which one might expect of such a large sensor area. Since image data from many pixels are used for any final image pixel, a significant amount of noise averaging occurs – yielding a clean image, and a claimed 16 stops of dynamic range.

That’s incredibly impressive, given all the advantages light field brings. This may be the start of something incredibly transformative for the industry. After all, who wouldn’t want the option for F0.3 depth-of-field with perfect focus in post, adjustable shutter angle, compellingly real 3D imagery when paired with a light field display, and more? With increased capabilities for handling large data bandwidths, larger sensors, and more pixels, we think some form of light field will exist perhaps in most cameras of the future. Particularly when it comes to virtual reality capture, which Lytro also intends to disrupt with Immerge.

It’s impressive to witness how far Lytro has come in such a short while, and we can’t wait to see what’s next. For more information, visit Lytro Cinema.


* If it’s anything like the Illum, though, some level of focusing will still be required on set, as there are optimal planes of refocus-ability.

** We don’t know what the actual trade-off is for the current Lytro Cinema. It’s correlated to the number of pixels underneath each microlens, and effective resolution can change at different focal planes.

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on Lytro poised to forever change filmmaking: debuts Cinema prototype and short film at NAB

Posted in Uncategorized

 

Change of focus: 755 MP Lytro Cinema camera enables 300 fps light field video

12 Apr

Lytro is bringing its Light Field technology to the world of cinema and visual effects, shortly after its CEO announced in a blog post Lytro’s intention of abandoning the consumer stills camera space. Lytro Cinema turns every frame of live action into a 3D model, capturing intensity, color, and angular information of light rays. Coupling light field with a 755 MP sensor capable of capturing images at 300 fps, Lytro Cinema promises extensive post-production freedom, including adjustment of focus, depth-of-field, shutter speed and frame rate, as well as the elimination of green screens.

Although Lytro struggled to drive adoption of light field technology in stills, the technology had, and continues to have, immense potential for imaging. Saving creative decisions for post-processing allows for more creative freedom, and lets a photographer or DP concentrate on other elements during capture. Nowhere will this be more appreciated than in cinema, where the realities of production mean that any technology aimed at deferring certain creative decisions, like focus, until post-capture is welcome.

Focus and aperture sliders in post-production. In video. No joke. I wish my Raw converter had this (Lytro’s Raw converter already does). Photo credit: Lytro

And that’s exactly what Lytro Cinema aims to do. By capturing directional information about light rays and essentially sampling multiple perspectives behind the aperture, Lytro Cinema allows for adjustment of focus placement, depth-of-field (via aperture adjustment), perspective, and more in post-processing. And since a depth map is rendered for every frame of video, Lytro claims Cinema will make it easier to combine CGI with live footage, no longer requiring green screens to extract elements or subjects from a scene. You’ll be able to just extract a subject based on its depth, which Lytro shows in a convincing example below:

Since light field cameras effectively sample multiple perspectives from behind the lens, you can even simulate camera movement as if it were moved on-set. The degree of motion is of course limited, but the technique can be very effective, as demonstrated in this haunting music video shot entirely on the stills-focused Lytro Illum. As Lead Engineer for Light Field Video Brendan Bevensee explains: “You have a virtual camera that can be controlled in post-production.” That means there’s also nothing stopping one from simulating short dolly motion or perspective shifts in post, with nothing but a static camera at the time of capture. “You can shift the camera to the left… [or] to the right, as if you had made that exact decision on set. It can even move your camera in and out” says Head of Light Field Video, Jon Karafin.

Imagine small, smooth, meditative camera movements that don’t even require a complicated motion rig to set up.

Furthermore, by precisely recording X, Y, Z, pitch, roll, and yaw, Lytro Cinema even offers automated camera tracking, which makes it easier to composite and matte CGI elements. And just as the Illum paired with Lytro Desktop software allowed one to select various objects and depths to throw them in and out of focus for selective depth-of-field and background blur, one can do the same in video with the Cinema, choosing, for example, to marry live footage from minimum focus to, say, 10m with different footage, or CGI, for everything beyond those distances. In other words, control over not just single planes, but ranges of planes.

Beyond just light field benefits, Lytro is also addressing another common headache: the selection of shutter angle (or shutter speed). Often, this is a decision made at the time of capture, dictating the level of blur or stuttering (a la action scenes in ‘Saving Private Ryan’ or ‘Gladiator’) in your footage. At high capture frame rates, though, short exposures are unavoidable (e.g. 300 fps cannot be shot with shutter speeds longer than 1/300s), which inevitably freezes action and removes some of the flexibility over how much motion blur you can or can’t have. By decoupling the shutter angle of capture from the shutter angle required for artistic effect, a DP can creatively use motion blur, or lack thereof, to suit the story. The technology, which undoubtedly uses some form of interpolation and averaging in conjunction with the temporal oversampling, also means that you can extract stills with a desired level of motion blur.
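The arithmetic behind that decoupling is simple to sketch: exposure time per frame is the shutter angle's fraction of 360° divided by the frame rate, and with 300 fps capture you can average enough frames to span any longer target exposure. The sketch below is our illustration of the idea, not Lytro's documented pipeline:

```python
def exposure_time(fps, shutter_angle_deg):
    """Exposure time per frame for a given frame rate and shutter angle.
    A 180-degree shutter at 24 fps gives the classic 1/48 s exposure."""
    return (shutter_angle_deg / 360.0) / fps

def frames_to_average(capture_fps, target_fps, target_angle_deg):
    # Rough model of synthesizing a shutter angle in post via temporal
    # oversampling: average enough high-rate frames to span the target
    # exposure. Illustrative only; the real system is more sophisticated.
    return round(exposure_time(target_fps, target_angle_deg) * capture_fps)

print(exposure_time(24, 180))           # ~0.0208 s (1/48 s)
print(frames_to_average(300, 24, 180))  # ~6 captured frames per output frame
```

This is why 300 fps is a useful capture rate: it leaves enough temporal samples per 24 fps output frame to fake anything from a crisp narrow angle to heavy motion blur.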

Lytro claims that by capturing at 300 fps, they can computationally allow for any of a number of shutter angles in post-production, allowing a cinematographer to decouple shutter angle required for capture from that required for artistic intent. Photo credit: Lytro

With every development over at Lytro, we’ve been excited by the implications for both stills and video. The implications for the latter, in particular, have always been compelling. Along with the announcement of the Lytro Immerge 360º virtual reality light field rig, we’re extremely excited to see light field video becoming a reality, and look forward to what creatives can produce with what is poised to be an unimaginably powerful filmmaking platform. Filmmakers can sign up for a demonstration and a personalized production package on Lytro’s site. For now, Lytro Cinema will be available on a subscription basis, understandable given the complexities involved (the immense data capture rates require servers on-set).

Head over to the Lytro Cinema page for more in-depth information. Lytro will be demo-ing “Life”, a short film shot using Lytro Cinema at NAB 2016.

Lytro Brings Revolutionary Light Field Technology to Film and TV Production with Lytro Cinema

  • World’s First Light Field Solution for Cinema Allows Breakthrough Creative Capabilities and Unparalleled Flexibility on Set and in Post-Production
  • First Short Produced with Academy Award Winners Robert Stromberg, DGA and David Stump, ASC in Association with The Virtual Reality Company (VRC) Will Premiere at NAB on April 19

Lytro unlocks a new level of creative freedom and flexibility for filmmakers with the introduction of Lytro Cinema, the world’s first Light Field solution for film and television. The breakthrough capture system enables the complete virtualization of the live action camera — transforming creative camera controls from fixed on set decisions to computational post-production processes — and allows for historically impossible shots.

“We are in the early innings of a generational shift from a legacy 2D video world to a 3D volumetric Light Field world,” said Jason Rosenthal, CEO of Lytro. “Lytro Cinema represents an important step in that evolution. We are excited to help usher in a new era of cinema technology that allows for a broader creative palette than has ever existed before.”

Designed for cutting edge visual effects (VFX), Lytro Cinema represents a complete paradigm shift in the integration of live action footage and computer generated (CG) visual effects. The rich dataset captured by the system produces a Light Field master that can be rendered in any format in post-production and enables a whole range of creative possibilities that have never before existed.

“Lytro Cinema defies traditional physics of on-set capture allowing filmmakers to capture shots that have been impossible up until now,” said Jon Karafin, Head of Light Field Video at Lytro. “Because of the rich data set and depth information, we’re able to virtualize creative camera controls, meaning that decisions that have traditionally been made on set, like focus position and depth of field, can now be made computationally. We’re on the cutting edge of what’s possible in film production.”

With Lytro Cinema, every frame of a live action scene becomes a 3D model: every pixel has color and directional and depth properties bringing the control and creative flexibility of computer generated VFX to real world capture. The system opens up new creative avenues for the integration of live action footage and visual effects with capabilities like Light Field Camera Tracking and Lytro Depth Screen — the ability to accurately key green screens for every object and space in the scene without the need for a green screen.

“Lytro has always been a company thinking about what the future of imaging will be,” said Ted Schilowitz, Futurist at FOX Studios. “There are a lot of companies that have been applying new technologies and finding better ways to create cinematic content, and they are all looking for better ways and better tools to achieve live action highly immersive content. Lytro is focusing on getting a much bigger, better and more sophisticated cinematography-level dataset that can then flow through the VFX pipeline and modernize that world.”

Lytro Cinema represents a step function increase in terms of raw data capture and optical performance:

  • The highest resolution video sensor ever designed, 755 RAW megapixels at up to 300 FPS
  • Up to 16 stops of dynamic range and wide color gamut
  • Integrated high resolution active scanning

By capturing the entire high resolution Light Field, Lytro Cinema is the first system able to produce a Light Field Master. The richest dataset in the history of the medium, the Light Field Master enables creators to render content in multiple formats — including IMAX®, RealD® and traditional cinema and broadcast at variable frame rates and shutter angles.

Lytro Cinema comprises a camera, server array for storage and processing, which can also be done in the cloud, and software to edit Light Field data. The entire system integrates into existing production and post-production workflows, working in tandem with popular industry standard tools. Watch a video about Lytro Cinema at www.lytro.com/cinema#video.

“Life” the first short produced with Lytro Cinema in association with The Virtual Reality Company (VRC) will premiere at the National Association of Broadcasters (NAB) conference on Tuesday, April 19 at 4 p.m. PT at the Las Vegas Convention Center in Room S222. “Life” was directed by Academy Award winner Robert Stromberg, Chief Creative Officer at VRC and shot by David Stump, Chief Imaging Scientist at VRC.

Learn more about Lytro Cinema activities during the 2016 NAB Show and get a behind-the-scenes look on the set of “Life” at www.lytro.com/nab2016.

Lytro Cinema will be available for production in Q3 2016 to exclusive partners on a subscription basis. For more information on Lytro Cinema, visit www.lytro.com/cinema.

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on Change of focus: 755 MP Lytro Cinema camera enables 300 fps light field video

Posted in Uncategorized

 

How to Use Lighting Gels to Change your Background Color

10 Apr

If there’s one thing that bugs me about shooting in studio, more than anything, it’s that you need to have tonnes of backgrounds, taking up loads of space. I’ve even gone as far as having a painter come in and create an interesting wall for me because I get bored with what I had. I’ve got over 15 backgrounds between paper, canvas, cloth, and even some vinyl castle doors.

Part of what I love about location work is the variety of backgrounds. Often you’re restricted to working in the studio by the client, so that’s where this handy technique comes into its own.

Changing-your-background-with-gels-magmod

I’ve been using gels in studio to add color to my subjects for years. A gel is a colored, transparent, sheet of heat resistant plastic. They look akin to the colored wrappers you get on some candies. They’re generally used in theatre to create mood, or emulate natural looks like moonlight, fire, etc. Gels were a big thing in photography in the 80s, and they’re making a comeback now thanks to photographers like Jake Hicks and Glenn Norwood. This article isn’t about their techniques, but it is about something I’ve started to do because of seeing their work.

Shooting on location with speedlights means that I’m always on the lookout for great tools that make setup easy. Using gels meant that when I saw the MagMod kit, I knew I had to get a set. MagMod uses strong magnets and rubber mouldings to create a grip that stretches over your speedlight. It’s much neater than rubber bands or velcro. Accessories like the MagGrid or the MagGel holder are simply held in place with internal magnets, and are easy to swap on and off as you need.

Basic Kit 1 1861968723

The really great thing that applies here, is that the gels they use are rigid, not flimsy and awkward to get in and out of a holder. The basic kit ships with a MagGrip, a MagGrid and a MagGel set with color correcting gels. I also bought the Creative Gel set, and that’s what I’m using to get different background colors in my studio. They’ve also just announced the new Artist set as well.

Let’s start with the basic back wall in my studio. It’s dark grey, with a light grey mottle over it as brush strokes.

Using Gels background 05

You should set the flash to get the amount of light you want on the background. Here is mine set to 1/4 power (below).

Using Gels background 04

Below it is at 1/8 power, which I think is better for getting the color to work.

Using Gels background 06

From here you can add gels to change the color of the background. There are plenty of options for using gels: you can even just use gaffa tape with sheet gel – or even just a rubber band. Even using the MagGel set, it’s possible to cut out sections from gel sheets, and then trap the cutout between the empty MagGel holder and the MagGrip.

 

Let’s have a look at the different gels:

Using Gels background 07

Cyan

Using Gels background 08

Purple

Using Gels background 09

Red

Using Gels background 10

Blue

Using Gels background 12

Green

Using Gels background 13

Straw

Using Gels background 15

Yellow

As well as changing the color, you can also change the intensity of the light by varying your flash power.

Here’s how the cyan gel looks at varying power, in one-stop increments from 1/64 power up to full power.

Using Gels background 20

1/64 Power

Using Gels background 19

1/32 Power

Using Gels background 18

1/16 Power

Using Gels background 17

1/8 Power

Using Gels background 16

Quarter (1/4) Power

Using Gels background 21

Half (1/2) Power

Using Gels background 22

Full Power

As you can see, it’s possible to get a whole range of looks from just a few gels. By using the MagGrid, you can also create colored spots of light that fade out to the original background color. A neutral grey background is a great starting point because it takes the color well. White tends to be harder to add color to with gels (it just looks washed out). You can also mix gels together to get other colors – just know that this will also absorb more light.
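If you want to estimate that light cost, stacked gel transmissions multiply, and the loss in stops is the base-2 log of the combined factor. The transmission values in this short sketch are illustrative, not measured figures for any particular gel set:

```python
import math

def stops_lost(transmissions):
    # Stacked gels multiply their transmission factors; the light loss
    # in stops is -log2 of the combined factor. Transmission values
    # here are illustrative, not measured gel specs.
    total = 1.0
    for t in transmissions:
        total *= t
    return -math.log2(total)

print(stops_lost([0.5]))        # a gel passing 50% of the light costs 1 stop
print(stops_lost([0.5, 0.25]))  # stacking adds the losses: 1 + 2 = 3 stops
```

So if stacking two gels costs you three stops, you'd move the flash from, say, 1/64 power to 1/8 power to keep the background exposure the same.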

If you want to get started with just gel sheets, check out Lee Filters or Rosco on Amazon. They both have sample packs with strips that fit over the front of most current speedlights.

Have fun!


The post How to Use Lighting Gels to Change your Background Color by Sean McCormack appeared first on Digital Photography School.


Digital Photography School

 
Comments Off on How to Use Lighting Gels to Change your Background Color

Posted in Photography

 

Change of scenery: How a photographer’s trip to Idaho inspired a big move

27 Mar


What could convince a California native to leave the state’s famously beautiful coasts and sunshine behind? For photographer and Resource Travel editor Michael Bonocore, a visit to Idaho’s pristine wilderness and towering mountains was enough. He recently spent some time traveling and photographing the state, from bustling Boise to the untouched powder of the Selkirk Mountains.

The photographic opportunities were so rich and the possibilities for outdoor adventure so abundant, Bonocore decided to make a full-time move to the Gem State. See some of his photos here and read the full account of his trip on Resource Travel.

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on Change of scenery: How a photographer’s trip to Idaho inspired a big move

Posted in Uncategorized