Posts Tagged ‘Reality’

Canon Unveils a Dual Fisheye Virtual Reality Lens, the RF 5.2mm f/2.8L

08 Oct

The post Canon Unveils a Dual Fisheye Virtual Reality Lens, the RF 5.2mm f/2.8L appeared first on Digital Photography School. It was authored by Jaymes Dempsey.

Canon unveils a dual fisheye VR lens

Canon has announced a one-of-a-kind lens for EOS R cameras: the RF 5.2mm f/2.8L Dual Fisheye lens, which looks exactly as strange as it sounds:

RF 5.2mm Dual Fisheye lens side view

And check out the lens again, this time mounted to the Canon EOS R5:

virtual reality lens mounted to a Canon EOS R5

So what is this bizarre new lens? What's its purpose?

The RF 5.2mm f/2.8L is designed for virtual reality (VR) recording; it’s “the world’s first digital interchangeable dual fisheye lens capable of shooting stereoscopic 3D 180° VR imagery to a single image sensor.” In other words, the twin fisheye lenses offer two frames covering a huge field of view in total; when processed, this footage turns into a single, 180-degree image, and with the proper equipment (the press release mentions the Oculus Quest 2), viewers can feel truly present in the scene.

It seems that, when the RF 5.2mm f/2.8L debuts, it will be available solely for EOS R5 cameras, though this could change once the lens hits the market. Such a unique lens is bound to turn heads, and Canon has certainly been hard at work, offering a product with an outstanding form factor – for filmmakers who record on the go or who simply prefer to minimize kit size – along with weather resistance, a very nice f/2.8 maximum aperture, and most importantly, Canon’s in-built filter system. The latter allows you to use neutral density (ND) filters when recording, essential for serious videographers.

Unfortunately, processing dual fisheye images isn’t done with standard editing software. Instead, Canon is developing several (paid) programs capable of handling VR footage: a Premiere Pro plugin, and a “VR Utility.” The company explains, “With the EOS VR Plug-In for Adobe Premiere Pro, creators will be able to automatically convert footage to equirectangular, and cut, color, and add new dimension to stories with Adobe Creative Cloud apps, including Premiere Pro,” while “Canon’s EOS VR Utility will offer the ability to convert clips from dual fisheye image to equirectangular and make quick edits.”
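If you're curious what that conversion actually involves, here is a minimal sketch of the underlying geometry, assuming an idealized equidistant fisheye projection and a single eye's frame. It is only an illustration of the kind of mapping Canon's tools perform; the real lens projection, lens offsets, and calibration data will differ.

```python
import numpy as np

def equirect_to_fisheye_coords(out_w, out_h, fish_w, fish_h, fov_deg=190.0):
    """For each pixel of a 180-degree equirectangular output image, find the
    source coordinates in an equidistant fisheye frame (one eye only).
    The returned (src_x, src_y) arrays can be fed to any image remapper."""
    # Longitude spans -90..+90 degrees for 180-degree footage; latitude likewise.
    lon = (np.linspace(0.0, 1.0, out_w) - 0.5) * np.pi
    lat = (0.5 - np.linspace(0.0, 1.0, out_h)) * np.pi
    lon, lat = np.meshgrid(lon, lat)

    # Unit viewing ray for every output pixel (camera looks down +z).
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)

    # Equidistant fisheye model: image radius grows linearly with the
    # angle between the ray and the optical axis.
    theta = np.arccos(np.clip(z, -1.0, 1.0))
    max_radius = min(fish_w, fish_h) / 2.0
    r = theta / (np.radians(fov_deg) / 2.0) * max_radius

    phi = np.arctan2(y, x)                    # azimuth around the optical axis
    src_x = fish_w / 2.0 + r * np.cos(phi)
    src_y = fish_h / 2.0 - r * np.sin(phi)    # flip y for image coordinates
    return src_x, src_y
```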

So who should think about purchasing this new lens? It’s a good question, and one without an easy answer. Canon’s decision to bring out a dedicated VR lens suggests a growing interest in creating VR content. But the day when most video is viewed through VR technology seems a long way off, at least from where I’m sitting. 

That said, if VR recording sounds interesting, you should at least check out this nifty new lens. Canon suggests a December release date with a $1,999 USD price tag, and you can expect Canon's VR post-processing software around the same time.

Now over to you:

Are you interested in this new lens? Do you do (or hope to do) any VR recording? Share your thoughts in the comments below!



Digital Photography School

 

Posted in Photography

 

Computational photography part III: Computational lighting, 3D scene and augmented reality

09 Jun

Editor’s note: This is the third article in a three-part series by guest contributor Vasily Zubarev. The first two parts can be found here:

  • Part I: What is computational photography?
  • Part II: Computational sensors and optics

You can visit Vasily’s website where he also demystifies other complex subjects. If you find this article useful we encourage you to give him a small donation so that he can write about other interesting topics.

The article has been lightly edited for clarity and to reflect a handful of industry updates since it first appeared on the author’s own website.


Computational Lighting

Soon we'll go so goddamn crazy that we'll want to control the lighting after the photo has been taken, too: changing cloudy weather to sunny, or adjusting the light on a model's face after shooting. It seems a bit wild now, but let's talk again in ten years.

We've already invented a dumb device to control the light — a flash. Flashes have come a long way: from the large lamp boxes that worked around the technical limitations of early cameras, to the modern LED flashes that spoil our pictures so badly that we mostly just use them as flashlights.

Programmable Flash

It's been a long time since all smartphones switched to dual-LED flashes — a combination of orange and blue LEDs whose brightness is adjusted to match the color temperature of the shot. In the iPhone, for example, it's called True Tone and is controlled by a small ambient light sensor and a piece of code with a hacky formula.

  • Link: Demystifying iPhone’s Amber Flashlight
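Apple's actual formula isn't published, but the basic idea behind balancing two LEDs is easy to sketch. Below is a toy calculation (the LED color temperatures are made-up example values) that mixes a warm and a cool LED in mired space, where color-temperature blends behave roughly linearly:

```python
def dual_led_mix(target_kelvin, warm_kelvin=2700.0, cool_kelvin=5500.0):
    """Rough brightness split between a warm and a cool LED so that their
    blend approximates a target color temperature. Done in mired space
    (1e6 / K), where mixing is approximately linear."""
    to_mired = lambda kelvin: 1e6 / kelvin
    target = to_mired(target_kelvin)
    warm, cool = to_mired(warm_kelvin), to_mired(cool_kelvin)
    # Solve target = a * warm + (1 - a) * cool for the warm LED's share a.
    a = (target - cool) / (warm - cool)
    a = min(max(a, 0.0), 1.0)  # clamp: we can't drive an LED negatively
    return a, 1.0 - a          # (warm share, cool share)

# A tungsten-lit indoor scene measured at ~3200 K:
print(dual_led_mix(3200))  # roughly (0.69, 0.31) -> mostly the warm LED
```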

Then we started to think about the problem with all flashes — overexposed faces and foregrounds. Everyone solved it in their own way. The iPhone got Slow Sync flash, which has the camera use a slower shutter speed (a longer exposure) in the dark. Google Pixel and other Android smartphones started using their depth sensors to combine images taken in rapid succession with and without flash: the foreground was taken from the flash photo, while the background remained lit by ambient illumination.
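Here is a minimal sketch of that flash/no-flash trick, assuming you already have an aligned flash frame, ambient frame, and per-pixel depth map; real camera pipelines align the two exposures and blend far more carefully:

```python
import numpy as np

def composite_flash_ambient(flash_img, ambient_img, depth_m, fg_threshold_m=1.5):
    """Keep the flash-lit foreground and the ambient-lit background.
    flash_img / ambient_img: float arrays of shape (H, W, 3), already aligned.
    depth_m: per-pixel distance map of shape (H, W), in meters."""
    fg_mask = (depth_m < fg_threshold_m).astype(np.float32)   # 1 = foreground

    # Feather the mask with a 9x9 box blur so the seam between exposures is soft.
    pad = np.pad(fg_mask, 4, mode='edge')
    soft = np.zeros_like(fg_mask)
    for dy in range(9):
        for dx in range(9):
            soft += pad[dy:dy + fg_mask.shape[0], dx:dx + fg_mask.shape[1]]
    soft /= 81.0

    soft = soft[..., None]                                     # broadcast over RGB
    return soft * flash_img + (1.0 - soft) * ambient_img
```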

The future of the programmable multi-flash is vague. The most interesting application so far comes from computer vision, where multi-flash imaging was used in assembly instructions (think Ikea bookshelves) to detect the edges of objects more accurately. See the article below.

  • Link: Non-photorealistic Camera: Depth Edge Detection and Stylized Rendering using Multi-Flash Imaging

Lightstage

Light is fast, which has always made light coding easy: we can change the lighting a hundred times per shot and still not get anywhere near its speed. That's how Lightstage was created back in 2005.

  • Video link: Lightstage demo video

The essence of the method is to illuminate the object from every possible angle within each frame of a real 24 fps movie. To get this done, we use 150+ lamps and a high-speed camera that captures hundreds of shots with different lighting conditions for every frame.

A similar approach is now used when shooting mixed CGI graphics for movies. It allows you to fully control the lighting of the object in post-production, placing it in scenes with completely arbitrary lighting. We just grab the shots illuminated from the required angle, tint them a little, and we're done.
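Because light adds linearly, the relighting step itself is conceptually tiny: a weighted sum of the one-light-at-a-time frames. A minimal sketch, assuming you already have linear-light frames and per-lamp weights derived from the target environment:

```python
import numpy as np

def relight(olat_frames, light_weights):
    """Relight a Lightstage-style capture.

    olat_frames:   (n_lamps, H, W, 3) frames, each lit by a single lamp,
                   stored in linear light (not gamma-encoded).
    light_weights: (n_lamps, 3) per-lamp RGB intensity of the new environment.

    By superposition, the relit image is just the weighted sum of the frames.
    """
    relit = np.einsum('lc,lhwc->hwc', light_weights, olat_frames)
    return np.clip(relit, 0.0, 1.0)
```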

Unfortunately, it's hard to do on mobile devices, but someone will probably like the idea and run with it. I've seen an app from a team that captured a 3D face model by illuminating it with the phone's flashlight from different sides.

Lidar and Time-of-Flight Camera

Lidar is a device that measures the distance to objects. Thanks to the recent hype around self-driving cars, you can now find a cheap lidar in any dumpster. You've probably seen those rotating things on the roofs of some vehicles? Those are lidars.

We still can't fit a laser lidar into a smartphone, but we can go with its younger brother — the time-of-flight (ToF) camera. The idea is ridiculously simple: a special separate camera with an LED flash above it. The camera measures how long the light takes to bounce back from objects and uses that to build a depth map of the scene.
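The arithmetic behind the idea is as simple as the hardware sounds: divide the round trip of the light by two. (Real phone ToF sensors usually measure the phase shift of modulated light rather than timing a single pulse, but the principle is the same.)

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_distance(round_trip_seconds):
    """Distance from a time-of-flight measurement: the light travels to the
    object and back, so halve the round trip."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A round trip of about 6.7 nanoseconds corresponds to roughly one meter:
print(tof_distance(6.7e-9))  # ~1.0 m
```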

The accuracy of modern ToF cameras is about a centimeter. The latest Samsung and Huawei top models use them to create a bokeh map and for better autofocus in the dark. The latter, by the way, is quite good. I wish every device had one.

Knowing the exact depth of the scene will be useful in the coming era of augmented reality. It will be much more accurate and effortless to map surfaces in 3D with a lidar-style sensor than by analyzing camera images.

Projector Illumination

To finally get serious about computational lighting, we have to switch from regular LED flashes to projectors — devices that can project a 2D picture on a surface. Even a simple monochrome grid will be a good start for smartphones.

The first benefit of the projector is that it can illuminate only the part of the image that needs to be illuminated. No more burnt faces in the foreground. Objects can be recognized and ignored, just as the laser headlights on some modern cars avoid blinding oncoming drivers while still illuminating pedestrians. Even with a minimal projector resolution, such as 100×100 dots, the possibilities are exciting.

Today, even a kid isn't surprised by a car with controllable lights.

The second and more realistic use of the projector is to project an invisible grid onto a scene to build a depth map. With a grid like this, you can safely throw away all your neural networks and lidars. All the distances to the objects in the image can now be calculated with the simplest computer vision algorithms. This was done back in Microsoft Kinect times (rest in peace), and it was great.
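The "simplest computer vision algorithms" here boil down to triangulation: the further a projected dot shifts from where the projector expected it, the closer the surface. A toy version, with made-up numbers (a hypothetical focal length in pixels and projector-camera baseline):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Classic structured-light / stereo triangulation: a dot that shifts by
    `disparity_px` pixels sits at depth Z = f * B / d."""
    if disparity_px <= 0:
        return float('inf')   # no measurable shift -> effectively at infinity
    return focal_px * baseline_m / disparity_px

# e.g. 600 px focal length, 7.5 cm projector-camera baseline, 30 px shift:
print(depth_from_disparity(30, 600, 0.075))  # 1.5 meters
```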

Of course, it's worth mentioning the Dot Projector used for Face ID on the iPhone X and above. That's our first small step toward projector technology, but quite a noticeable one.

Dot Projector in iPhone X.

Vasily Zubarev is a Berlin-based Python developer and a hobbyist photographer and blogger. To see more of his work, visit his website or follow him on Instagram and Twitter.

Articles: Digital Photography Review (dpreview.com)

 

Posted in Uncategorized

 

Video: A BTS look at how Fujifilm’s GFX 100 was brought to life, from concept to reality

12 Jun

In September 2018, Fujifilm made the official announcement that it was working on a 100-megapixel medium format mirrorless camera—the Fujifilm GFX 100. Since then, we've had exclusive hands-on time with the behemoth, published our first-impression video review and shared pre-production photo samples.

We’re yet to get our hands on a fully-reviewable version of the GFX 100, but to tide you over in the meantime, we’re sharing a little documentary from Cinema5D that takes a behind-the-scenes look at the development process of Fujifilm’s latest medium-format mirrorless camera system.

A screenshot from the mini-doc showing how the IBIS is pieced together in the factory.

Titled ‘Birth of a Camera: Fujifilm GFX 100,’ this 17-minute video is part one of a two-part series that takes an exclusive inside look at the development process of the GFX 100. Throughout the video, Cinema5D co-founder Johnnie Behiri travels to various Fujifilm locations in Japan to talk with the executives, engineers and designers that had a part in bringing the GFX 100 to life.

The video addresses how the development process took place, from the initial conception to the final mock-up. Little by little, Behiri follows a rough chronological timeline of the creation process, from talking with the initial Fujifilm 'CLAY' designers who sketched up the original form of the camera to the engineers who created countless mock-ups to ensure the required components could fit inside the frame of the camera.

A screenshot from the mini-doc that shows how testing is done on the face-detection autofocus.

It’s a bit of a long watch, but well worth it if you have some free time over your lunch break or before bed.

Behiri notes in the accompanying blog post for this video that while Fujifilm does run a paid banner campaign on Cinema5D's website, the project was initiated by Cinema5D and its production costs were entirely self-funded.

Articles: Digital Photography Review (dpreview.com)

 

Posted in Uncategorized

 

The Myth and Reality of Shooting in Manual Mode

21 Dec

I’ve heard it. You’ve heard it. And it’s a great big steaming pile of…baloney.

Myth – Professionals Only Shoot in Manual Mode

I recently read an account of a new photographer who heard that "expert" photographers only shoot in manual mode, so he headed out to shoot. Camera firmly set to M, he shot away, happy as could be. However, the results from that first exploration were, needless to say, disappointing: overexposures, underexposures, and a lot of crappy, blurred photos.


I had about 10 seconds to make this image of a grove of Baobabs in Botswana. Had I been fiddling with finding the right manual settings, I likely would have missed the shot.

Here is the reality: Professionals and other experienced photographers use just about every shooting mode on their camera.

Those modes are there for a reason. Settings provide simplicity, speed, flexibility, or full control. Depending on the conditions in which you are shooting, any one of these may be appropriate. While other articles here at dPS discuss how to use each of the settings on your camera, I want to talk about the myth of Manual Mode, but also why it's still important to use it.


Moving subjects and quickly shifting scenes are not conducive to manual mode.

The Professional Reality

Try shooting on full manual control while making images of birds in flight. Go on, try it. I’ll wait.


On the off chance that you actually went out and tried that exercise, I suspect you ended up with a lot of really bad photos. As birds passed quickly in front of different backdrops, as the sun darted in and out from behind clouds, the lighting conditions were undoubtedly in constant change. To adapt to those changes on the fly would be a nearly impossible task.


Rather, any professional would use one of the other settings. I, for example, would probably choose Shutter Priority mode under those conditions. That would ensure I could maintain sharp (or artfully blurred) images as I shot, while leaving the decision on aperture up to the camera. If I wanted a brighter or darker exposure, I'd adjust the exposure compensation.

Now, if I were carefully shooting a landscape and had a particular vision for the final image, that's when I'd make the switch to Manual Mode. In manual, I can take full control of the scene. I can adjust the depth of field and the exposure, incorporate blur, or use selective focus. In Manual Mode, I own all aspects of the final image, for better or worse.


My point here is simply this – professionals use all the tools at their disposal. If it were true that pros only use Manual Mode, then pro-level cameras would only have one setting. Quite obviously, that is not the case.

You Still Need to Shoot in Manual

Shoot in Manual Mode, but not all the time. Understanding exposure, focus, shutter speed, and aperture, and their effect on the final image, is the heart of photography. To master the technical aspects of image creation, you need to be able to put all of these together without the help of your camera.


Manual Mode is perfect for landscape photography because you have the time to dedicate to creating the image you envision.

Manual means full control

I regularly practice the art of manual settings. When a scene is in front of me, I’ll imagine a particular way to portray it. I’ll envision how bright I want the image to appear. I select the focal point, whether motion blur is incorporated or eliminated, and how deep the depth of field should be.

Once I've got the image in my mind, I'll select the ISO, shutter speed, and aperture without using the camera's light meter to help me. Then I click the shutter and have a look.

This exercise reminds me how light, settings, and the camera itself work together, sure. But more than that, it turns every aspect of the image into a purposeful decision. There is no “spray and pray” photography when you are shooting in Manual Mode. Setting your camera to that scary “M” means you grant yourself full control of, and full responsibility for, whatever emerges.
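If you want to check your guesses afterward, the arithmetic is straightforward. Here's a small, illustrative calculation of exposure value (normalized to ISO 100) showing that two very different-looking combinations of settings put the same amount of light on the sensor:

```python
import math

def exposure_value(aperture, shutter_s, iso=100):
    """Exposure value normalized to ISO 100: EV = log2(N^2 / t) - log2(ISO / 100).
    Settings that produce the same number give the same overall exposure."""
    return math.log2(aperture ** 2 / shutter_s) - math.log2(iso / 100)

# The classic "Sunny 16" starting point, and an equivalent wide-aperture choice:
print(round(exposure_value(16, 1 / 125)))    # 15
print(round(exposure_value(2.8, 1 / 4000)))  # 15 -- same exposure, far shallower depth of field
```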


Aurora borealis and most other night photography require the use of Manual Mode.

There is no better way to learn about your camera, light, and about thoughtful photography than to set your camera to Manual Mode, turn off the autofocus, and go make images.

Summary

It's absolute nonsense that pros only shoot in manual. Utter garbage. Your camera has a bunch of settings for a reason. Shooting in just one would be like only eating one type of food. Each has a purpose, and each has its place in the art of photography.


Purposefully underexposed images are also well-suited to Manual Mode, particularly when you want to retain a shallow depth of field, as I did with this flower image.

However, and this is a big HOWEVER, shooting in Manual Mode may be the best tool at our disposal for turning our photography into a purposeful exercise. Using manual will force you to understand depth, light, exposure, blur, and focus.

So yes, you should shoot in manual mode. Just not all the time.

The post The Myth and Reality of Shooting in Manual Mode by David Shaw appeared first on Digital Photography School.


Digital Photography School

 

Posted in Photography

 

Snapchat ‘sky filters’ use augmented reality to replace the sky with stars, sunsets and more

27 Sep

File this one under minor smartphone photography news: it seems Snapchat is using its augmented reality powers to expose non-photographers to the magic of dropping a new sky into their photos. The newly released feature—dubbed ‘sky filters’—can take a regular boring old blue sky and replace it with a colorful sunset, starry night scene, and more.

Sky Filters are rolling out now to both iOS and Android users, and like Snapchat's other AR features, this one will rotate daily so you can experience a variety of world-bending effects.

Articles: Digital Photography Review (dpreview.com)

 

Posted in Uncategorized

 

Advertising vs reality: microSD memory card speed test

08 Aug

When you’re purchasing a new memory card, the card’s “read” and “write” speed is an important spec. If it’s too slow, you might pass on the card; if that number is big enough, you break out your wallet. But are those speeds accurate? In this video, Tom David Frey of Tom’s Tech Time tested 10 different microSD cards to see how the advertised speeds on the box compare to real-world performance. The results are mostly disappointing… but not surprising.

Frey tested the top-of-the-line microSD cards out there—all 4K-ready, speed class 10 and UHS class 3. And while we wish the test involved regular SD cards, CF cards or XQD cards, since those are more relevant to photographers, the difference between the cards’ read/write speed and real-world performance is still telling. Plus, drone photographers need some love too.

The cards tested, in either 16GB or 32GB sizes, were the SanDisk Extreme PRO, SanDisk Extreme, Transcend Ultimate 633x, Samsung PRO, Sony SR-32UZ, Kingston, Panasonic, Toshiba Exceria Pro, Verbatim, and Patriot EP Series, each entering the test with its own advertised read and write speeds.

Frey performed two tests. First, he used a USB 3.0 card reader and ran several programs to test the actual read/write speeds. Then, he took a 4.1GB video file from his hard drive and copied it to each of the cards in turn to gauge real-world write speed.
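If you'd like to run a rough version of that copy test on your own cards, a few lines of code will do it. This is only a ballpark benchmark, not the tool Frey used; the mount path below is hypothetical, and OS caching and your card reader will both color the result:

```python
import os
import time

def measure_write_speed(path, total_mb=1024, chunk_mb=8):
    """Stream `total_mb` of data to `path` and report sustained MB/s."""
    chunk = os.urandom(chunk_mb * 1024 * 1024)
    start = time.perf_counter()
    with open(path, 'wb') as f:
        for _ in range(total_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())   # make sure the data actually reaches the card
    elapsed = time.perf_counter() - start
    return total_mb / elapsed

# e.g. with the card mounted at a hypothetical path:
# print(f"{measure_write_speed('/Volumes/SDCARD/bench.bin'):.1f} MB/s")
```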

So… how did these cards perform in real life? All of them (except Panasonic, which doesn’t give read and write speeds…) advertise read speeds of 90MB/s and up, and the fastest of them claim a write speed of 90MB/s. But not a single card topped even 80MB/s read speed, and the fastest write speed reached was 78.81MB/s by the SanDisk Extreme PRO.

The good news is that some of the cards actually outperformed their advertised write speeds; none of them, however, were the cards claiming lightning-fast 90MB/s writes.

You can see all of the results for yourself at the 7:30 mark of the video.

Articles: Digital Photography Review (dpreview.com)

 

Posted in Uncategorized

 

Social Media Vs. Reality video calls out the most common Instagram lies

06 Aug

Anti-bullying organization Ditch the Label—the folks behind this 2017 survey that found Instagram is terrible for teens' mental health—created a funny-but-also-very-sad video to accompany its findings. The video is called “Are You Living an Insta Lie? Social Media Vs. Reality”.

The video covers “some of the funniest and most common Insta Lies posted on social media,” and it really does cover most of the bases. Some of the tropes covered include #wokeuplikethis photos, the start of a ‘healthy’ juice cleanse, not-so-blissful relationship bliss and lots more.

We’re not sure any professional photographers use Instagram like this, but chances are good we’ve all… bent the truth on Instagram a time or two. If you can think of any common photographer Insta Lies, share them in the comments.

Articles: Digital Photography Review (dpreview.com)

 

Posted in Uncategorized

 

Sci-Fi to Reality: Giant Manned Robot Method-2 Has Taken Its First Steps

03 Jan

[ By SA Rogers in Conceptual & Futuristic & Technology. ]


Looking and working remarkably like the robots in the 2009 movie Avatar, the 1.5-ton, 13-foot-tall ‘METHOD-2’ by South Korean firm Hankook Mirae Technology has officially taken its first steps. Engineers and reporters watched the robot navigate the company’s facility on its massive mechanical legs, traversing about ten feet before reversing. It appears to have been remote-controlled for this exercise, while previous videos have shown how it functions with a human ‘pilot’ sitting inside.

The idea is that METHOD-2 will be able to help people reach the kinds of hazardous destinations that are currently too unsafe to navigate, and it’s easy to imagine this thing walking down the street like a superhero after a disaster. It sounds like the company is currently working out the mechanics of the robot itself, and it’s unclear whether it’ll ultimately be able to climb over obstacles, negotiate uneven terrain or withstand harsh climates.


Its first planned expedition is into the space between North and South Korea known as the DMZ (demilitarized zone), the world’s most dangerous border, but it’s still got about a year of planning and tinkering to get it into shape. Right now, it requires a tether for power. Its arms weigh 300 pounds each and are controlled by the pilot’s own limb movements.


Unsurprisingly, the robot was designed by Vitaly Bulgarov, who previously worked on the Transformers films and helped design Boston Dynamics’ bipedal robots. Yang Jin-Ho, chairman of Hankook Mirae Technology, says the robot is still taking its ‘baby steps’ but ultimately aims to “bring to life what only seemed possible in movies and cartoons.”


METHOD-2 is already getting lots of interest from companies that want to purchase one, and the price tag is expected to run around $8.3 million. The final version is expected to be ready for potential buyers by the end of 2017.



WebUrbanist

 

Posted in Creativity

 

Fantasy is Now Reality: Twisting Tree-Covered Callebaut Tower Taking Shape

29 Nov

[ By SA Rogers in Architecture & Cities & Urbanism. ]


We’ve seen lots of dazzling concepts by Belgian architect Vincent Callebaut, most of which seem far too fanciful to ever actually materialize, but his twisting high-rise tower in Taipei is finally taking shape in three dimensions. ‘Tao Zhu Yin Yuan’ is about halfway complete, pivoting on a central axis for a layout that enables outdoor space brimming with greenery on every floor. Scheduled for completion in September 2017, the residential tower will support 23,000 trees absorbing up to 130 tonnes of carbon dioxide each year.


The tower is conceived as an ‘inhabited tree,’ set upon a circular footprint with towers extending from the core in a double helix shape. From the north or south, it looks like a pyramid, while east and west views give onlookers a fuller idea of the building’s scale. It will contain 40 luxury apartments and additional facilities, and is set to meet LEED Gold status as well as diamond-level Low Carbon Building Alliance certification.


Callebaut is known for proposals that emphasize sustainability, self-sufficiency, the inclusion of vegetation and eye-popping shapes. Examples include his dragonfly-wing-shaped urban farm, the Lilypad floating city concept, the ‘Asian Cairns’ residential towers and a series of futuristic ‘smart towers’ aiming to reduce pollution and create renewable energy while integrating into existing built environments.


Most of these concepts either look too wild and expensive for developers and investors to have confidence in their real-world success, or rely on theoretical technology that hasn’t been fully developed or proven. But nobody can accuse Callebaut of limiting his own creativity in the way he envisions the future of architecture, in a world where the choices we make for our cities directly impact our ability to withstand the consequences of climate change.


“In 2050, we will be 9 billion of human beings on our blue planet and 80% of the world population will live in megacities,” says Callebaut. “It’s time to invent new eco-responsible lifestyles and to repatriate the nature in our city in order to increase the quality of our life with respect of our environment.”



WebUrbanist

 

Posted in Creativity

 

Virtual Reality Nature: Helmet Lets Humans See the Forest Like Animals Do

05 Nov

[ By SA Rogers in Conceptual & Futuristic & Technology. ]


Dragonflies experience their brief lives on this planet 10 times faster than humans, and in 12 color wavelengths as compared to our three, a viewpoint that’s been impossible to comprehend prior to the arrival of virtual reality tech. Thanks to a project called ‘In the Eyes of the Animal’ by the creative studio Marshmallow Laser Feast, we can see the world the way super-sighted creatures do in a feat that’s being called ‘sense hacking.’


Aerial drone footage, CT scans and LiDAR remote sensing data captured in the UK’s Grizedale Forest give the team 800 million data points from which to render a hyper-rich environment in tandem with a real-time visual and audio engine. Visitors to the real, actual forest put on virtual reality headsets obscured with moss to take it all in.


“Visual engine generates and renders whole environment in realtime with certain generative elements which makes each experience unique,” explains Creative Applications Network. “Visual engine communicates with 3D Audio Engine via OSC [OpenSound Control] to provide positional data as well as head tracking data from the Inertial sensors of the VR headset. The sound uses Binaural audio, a technique mimicking the natural functioning of the ear by creating an illusion of 3D space and movement around the head of a listener as immersive as reality can be.”


The result is an immersive experience at the intersection of science and digital art, and the images of the helmets in use in Grizedale Forest are pretty incredible, like something from a film. If you didn’t get a chance to see it yourself during the installation’s tour of festivals, you can watch the video to see an approximation of what it looks like.



WebUrbanist

 

Posted in Creativity