
Earth from 100,000 feet: Sigma sent the fp mirrorless camera into near space

03 Oct

Sigma UK recently collaborated with the company Sent Into Space to send a pair of Sigma fp full-frame mirrorless cameras, each fitted with a Sigma 14mm F1.8 lens, into the upper atmosphere. It's a notable kit because it combines the world's smallest and lightest full-frame mirrorless camera with the brightest full-frame 14mm prime lens available.

The Sigma fp cameras and 14mm F1.8 lenses were attached to weather balloons and sent up to an altitude of roughly 19 miles (about 30.5 km). At altitude, the cameras captured high-resolution photos and 4K RAW video of Earth.

No good marketing operation is complete without stunning media to share with prospective customers. Sigma UK published a video to document the process of sending Sigma fp cameras into near space and show off the amazing results of the project.

The launches took place in Sheffield. The first Sigma fp to gain altitude was dedicated to recording 12-bit 4K UHD Raw video, while the second captured 24.6MP still images. Each camera was part of a kit that included onboard equipment to relay data and telemetry to the Sent Into Space team on the ground.

The balloons, filled with hydrogen, expand considerably during the ascent: as the atmosphere thins, the pressure outside the envelope drops and the gas inside expands. At a certain altitude the balloon bursts, and the equipment returns to the surface aided by onboard parachutes. As Chris Rose of Sent Into Space points out in the video, the payload can descend at up to 250 mph before the atmosphere becomes thick enough for the parachutes to take effect.
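The expansion described above can be sanity-checked with the ideal gas law. The sketch below is illustrative only: it assumes an isothermal ideal gas and uses rough, generic pressure figures (roughly 1,200 Pa near 30 km), none of which come from the article.

```python
# Illustrative only: isothermal ideal-gas estimate of how much a balloon
# expands between sea level and ~30 km. Real stratospheric temperatures
# are much colder, so this is only a rough upper-bound intuition.

P_SEA_LEVEL_PA = 101_325   # standard sea-level pressure
P_30KM_PA = 1_200          # approximate ambient pressure near 30 km

def volume_ratio(p_start: float, p_end: float) -> float:
    """Boyle's law: at constant temperature, V_end / V_start = p_start / p_end."""
    return p_start / p_end

print(f"Volume grows roughly {volume_ratio(P_SEA_LEVEL_PA, P_30KM_PA):.0f}x by ~30 km")
```

The two-order-of-magnitude volume increase is why near-space balloons are launched only partially inflated.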

Each camera was sent up with an attached 2TB SSD. Even with that much storage capacity, the fp couldn't record 4K UHD RAW video for the entire flight. The stills camera was set up with an interval timer to capture a still image every five seconds for the entire journey.
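For a sense of scale, the intervalometer math is easy to run. The flight duration and per-frame file size below are assumptions chosen for illustration, not figures from Sigma or Sent Into Space.

```python
# Back-of-the-envelope storage check for the stills camera's interval
# timer. Flight duration and raw file size are assumed values.

INTERVAL_S = 5                # one still every five seconds
FLIGHT_S = int(2.5 * 3600)    # assumed ~2.5 hour flight
RAW_FRAME_MB = 50             # assumed size of one 24.6MP raw file

frames = FLIGHT_S // INTERVAL_S
total_gb = frames * RAW_FRAME_MB / 1024

print(f"{frames} frames, roughly {total_gb:.0f} GB of stills")
# 1800 frames, roughly 88 GB of stills
```

Under these assumptions the stills easily fit on a 2TB drive; it is continuous 4K RAW video, at hundreds of megabytes per second, that exhausts the SSD first.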

To learn more about the Sigma fp, head to our First Impressions. For more on the Sigma 14mm F1.8 DG HSM Art lens and its applications for space photography, check out Jose Francisco Salgado’s ‘Astrophotography with the Sigma 14mm F1.8 Art lens’ article.

(DIY Photography)

Articles: Digital Photography Review (dpreview.com)

 

Posted in Uncategorized

 

Slideshow: Winners of the 2020 Drone Photo Awards from Siena Awards

30 Sep


Winners of the 6th edition of the Drone Photo Awards competition, affiliated with the Siena International Photo Awards competition (you can view winners of the Creative Photo Awards here), were recently announced. Entries were sent in by drone photographers from 126 countries. 'Love Heart of Nature' by Australian photographer Jim Picôt, which depicts a shark swimming inside a heart-shaped salmon school, was recognized as the Overall Winner.

The awards are divided into 9 categories: Abstract, Empty Cities: Life during COVID-19, Nature, People, Sports, Series, Urban Architecture, Wedding, and Animals. All 45 winning images will be displayed at the ‘Above Us Only Sky’ exhibition, scheduled from October 24th to November 29th at the ‘Accademia dei Fisiocritici’ museum in Siena, Italy.

Overall Winner: ‘Love Heart of Nature’ by Jim Picôt

Location: Avoca Beach, NSW, Australia

Description: In winter, a shark swam inside a salmon school; as it chased the baitfish, the school formed the shape of a heart.

Winner, Wedding: ‘Tropical Bride’ by Mohamed Azmeel

Location: (Not given)

Description: I used the flowers and leaves left over from the decoration of a wedding to make something creative.

Winner, Abstract: ‘Swirl’ by Boyan Orste

Location: Pink Lake, Australia

Description: An abstract shot of a chemical reaction in a pink lake in Western Australia.

Winner, Nature: ‘Coffee or Tea’ by Yi Sun

Location: Brazil

Description: (Not given)

Winner, Wildlife: ‘Outer Space Flamingos’ by Paul McKenzie

Location: Lake Natron, Tanzania

Description: (Not given)

Winner, Life Under COVID-19: ‘Black Flag’ by Tomer Appelbaum

Location: Israel

Description: Thousands of Israelis maintain social distancing due to Covid-19 restrictions while protesting against Israeli Prime Minister Benjamin Netanyahu in Rabin Square on 19 April 2020.

Winner, Sport: ‘On the Sea’ by Roberto Corinaldesi

Location: Cornwall, United Kingdom

Description: An aerial view of swimmers, where the sea becomes the place to take refuge, between the blue carpet and the white foam of the waves.

Winner, People: ‘Frozen Land’ by Alessandra Meniconzi

Location: Eurasian Steppe

Description: With temperatures of minus 30°C, winters in the Eurasian steppe can be brutal. But life doesn’t stop, and local people move from one village to another with a sledge, crossing icy rivers and lakes.

Winner, Urban: ‘Alien Structure on Earth’ by Tomasz Kowalski

Location: Kuala Lumpur, Malaysia

Description: Sometimes we need to change our perspective to feel the strength of a structure more than we ever thought possible. The Petronas Towers, also known as the Petronas Twin Towers, are twin skyscrapers in Kuala Lumpur.

Winner, Wedding: ‘The Wedding Crashers’ by David Gallardo

Location: Turks & Caicos Islands

Description: (Not given)

Winner, Life Under COVID-19: ‘Lonely Guardian’ by Mauro Pagliai

Location: Siena, Italy

Description: (Not given)

Winner, Sport: ‘Ball Up’ by Brad Walls

Location: Sydney, Australia

Description: The physical motion of the tennis player against the clean, abstract lines of the court created an effect harmonious to the eye.

Winner, Abstract: ‘Fishing At Jamuna River’ by MD Tanveer Hassan Rohan

Location: Bogra, Bangladesh

Description: (Not given)

Winner, Urban: ‘Sunrise on the Top’ by Rex Zou

Location: Shanghai, China

Description: At 4:30 in the morning, mysteriously shrouded in clouds, this is what the second tallest building in Shanghai looks like.

Winner, People: ‘Mountains of Salt’ by Igor Altuna

Location: Thi Xa Ninh Hoa, Vietnam

Description: An aerial picture taken on a saltern near a small town on central Vietnam’s coast.


 

 

How To Delete All Photos From iPhone Using iPhone, Mac or PC

25 Sep

You have filled your iPhone with thousands and thousands of photos over the last few years. It's a nice catalog of memories, but now it's time to move on. You are looking for a quick and safe way to delete all the photos on your iPhone but don't know how. Don't despair. In this article we will tell you the… Continue Reading

The post How To Delete All Photos From iPhone Using iPhone, Mac or PC appeared first on Photodoto.



 

Posted in Photography

 

Working From Home as a Photographer in the Current Climate: A No-Bull Guide

24 Sep

The current climate is not ideal for many businesses, particularly photographers. Without an outlet for work, it's easy to ruminate on how you're going to weather the storm. Going without work is not an option for many photographers, so what are the alternatives? Fortunately, there's plenty you can do to pivot your business and retain your income. For example, increasing… Continue Reading

The post Working From Home as a Photographer in the Current Climate: A No-Bull Guide appeared first on Photodoto.



 

Posted in Photography

 

Sony a7C sample gallery updated, with more shots from compact 28-60mm kit lens

22 Sep


Sony’s a7C is a really compact full-frame camera – especially when paired with the new FE 28-60mm F3.5-5.6 kit lens. We’ve been doing plenty of shooting with the combo and have updated our gallery to show you just what you can expect.

Check out our gallery of sample images


 

 

Gallery: 100 best lockdown portraits revealed from Duchess of Cambridge’s Hold Still project

18 Sep


Kate Middleton, the Duchess of Cambridge, and the UK’s National Portrait Gallery have put together a digital exhibition of the 100 best portraits taken during the lockdown and submitted to the Hold Still photographic contest. The images, centered around the themes of Helpers and Heroes, Your New Normal and Acts of Kindness, will become a print exhibition later in the year and will tour the UK.

Set up to document aspects of life in England, Scotland, Wales and Northern Ireland during the global coronavirus pandemic, the Hold Still project was launched by the Duchess, a keen photographer herself, in May and was open for entries for six weeks.

Organizers say they received 31,598 entries which were judged by the Duchess alongside the Director of the National Portrait Gallery, a poet, a photographer and the Chief Nursing Officer for England. The judges said they selected the winning images based on the ‘emotions and experiences they convey rather than on their photographic quality or technical expertise’.

We’ve rounded up ten of the 100 images in the following gallery, but for more information and to see all 100 portraits visit the National Portrait Gallery website.

THE DUCHESS OF CAMBRIDGE AND NATIONAL PORTRAIT GALLERY LAUNCH HOLD STILL DIGITAL EXHIBITION

Final 100 images unveiled in landmark community project to create a photographic portrait of the nation

The Duchess of Cambridge and the National Portrait Gallery have today unveiled the Hold Still digital exhibition, featuring one hundred portraits selected from 31,598 submissions during the project’s six-week entry period. Focussed on three core themes – Helpers and Heroes, Your New Normal and Acts of Kindness – the images present a unique record of our shared and individual experiences during this extraordinary period of history, conveying humour and grief, creativity and kindness, tragedy and hope.

Launched by The Duchess of Cambridge and the Gallery in May, Hold Still invited people of all ages, from across the UK to submit a photographic portrait which they had taken during lockdown. The project aimed to capture and document the spirit, the mood, the hopes, the fears and the feelings of the nation as we continued to deal with the coronavirus outbreak.

The Hold Still judging panel included: The Duchess of Cambridge; Nicholas Cullinan, Director of the National Portrait Gallery; Lemn Sissay MBE, writer and poet; Ruth May, Chief Nursing Officer for England; and Maryam Wahid, photographer.

The panel assessed the images on the emotions and experiences they convey rather than on their photographic quality or technical expertise. The final 100 present a unique and highly personal record of this extraordinary period in our history. From virtual birthday parties, handmade rainbows and community clapping to brave NHS staff, resilient keyworkers and people dealing with illness, isolation and loss. The images convey humour and grief, creativity and kindness, tragedy and hope – expressing and exploring both our shared and individual experiences.

A selection of the photographs featured in the digital exhibition will also be shown in towns and cities across the UK later in the year.

International law firm Taylor Wessing are supporting the Hold Still project in partnership with the National Portrait Gallery. They are long-term supporters of the Gallery and have sponsored the Taylor Wessing Photographic Portrait Prize for the past 12 years.


Making bread

Photographer: James Webb
Location: Colne, Cambridgeshire

This is me and my son Jake making bread together. Baking was something that I enjoyed but didn’t get to do very often. Lockdown gave me the opportunity to bake and enjoy this passion with my children. During this time we started off making flatbreads, cupcakes, muffins and the like, and then moved on to bread. Baking became a daily pleasure we were all able to enjoy together. We’ve continued to bake as a family and my children have enjoyed learning how to knead dough and the process of proving before baking. Making bread has become the new normal in our house and is a hobby now enjoyed by the whole family.

Glass kisses

Photographer: Steph James
Location: Cowfold, West Sussex

My 1-year-old little boy and his 88-year-old great grandma, who miss each other so much at the moment. I captured this beautiful moment between them whilst dropping off groceries. Kisses through glass.

This is what broken looks like

Photographer: Ceri Hayles
Location: Bridgend

This is what broken looks like. This is operating for 3 hours in full PPE. This is dehydration. This is masks that make your ears bleed because the straps have slipped and you daren’t touch them. This is fighting an invisible enemy that becomes more visible each day. This is a face I never thought I’d show the world, but one which I wear more and more. I took this photo to have as a reminder of how far I’d been capable of pushing myself when I needed to. I sent it to my family to tell them what a hard day it had been and they were all so shocked by it. The person they know as being so well put together, always wearing a smile, was not the person they saw that day. Looking back on it now, I feel immensely proud of the commitment shown by myself and my colleagues to provide safe care for patients, even in the depths of a pandemic. We still wear full PPE for all of our cases, and you never get used to it, but I know we’ll keep doing it for as long as it is needed.

Last precious moments

Photographer: Kris Tanyag and Sue Hicks
Location: Chichester, West Sussex

This portrait was taken by Kris, the clinical lead in the care home where Phil lived. Kris took the photograph for Phil’s daughter, Sue who submitted the work. Sue said: ‘As I approached the window my father’s smile lit up the world. Probably belying the fact that he couldn’t really comprehend why, after normally frequent visits and companionship in his twilight years, his daughter hadn’t been allowed to visit for the last three weeks. Easter Saturday 2020 and these precious, intensely emotional moments, will stay with me forever. One week later our wonderful dad, grandad and great grandad passed away peacefully. I can never fully express my gratitude to the carers who, sensing the situation and having looked after my father with love, care and compassion for seven years (as well as my mother for 3 of those years), made those moments possible.’

Kris explains: ‘We devised a plan for Phil to see his daughter Sue via a glass wall and communicate using mobile phones. Hearing our plan gave Phil a burst of energy to go in his wheelchair, hold a muffled conversation, reaching over to put his hand on the glass wall, convinced that he was touching Sue. Struggling to speak but hearing Sue made him so very happy. Their expression of emotion through tearful, smiling eyes and touching hands; the entire conversation was just one amazing moment!’

Funeral heartbreak

Photographer: Bonnie Sapsford and Fiona Grant-MacDonald
Location: Cockermouth, Cumbria

My brother, Barry, lives in the Lake District and could not travel to be with his family when our beloved Gran died of Covid-19 on 3 May 2020. Her cremation took place on 13 May in Edinburgh with only 8 people in attendance – and Barry had to watch it live online – but we were so proud he suitably dressed for the occasion. His wonderful partner, Bonnie took this powerful picture and sent it on to us. The family all missed him greatly and our hearts were shattered at the realisation that our grandmother’s first grandchild could not be with her on her final resting day.

At the end of a shift

Photographer: Neil Palmer
Location: Reading, Berkshire

This is a studio portrait of Tendai, a recovery and anaesthetics nurse, who was born in Zimbabwe and now lives in my local town – Reading, Berkshire. I wanted to portray her caring side as well as a look of concern and uncertainty that many of us have experienced during this pandemic. It's why I chose a lower-than-normal angle and asked her to look off camera, placing her halfway down in the frame.

Justin, from the outside in

Photographer: Sara Lincoln
Location: London

Justin didn’t know about my project when I turned up at his window with a camera. I just so happened to be across the road, capturing his daughter Safi and her family, who had volunteered to be a part of my ‘Outside In’ project, which documents my community living life in lockdown, through the window. Safi asked if I wouldn’t mind popping over to capture a frame or two of her father and I am very grateful that I did. It was wonderful meeting this brilliant man albeit through the window. We spoke about this project, his art collection and how he manages to keep his plants so well. We talked about how surreal everything is right now, how the weeks have been for him isolating alone and his plans to jet off to France as soon as this madness is over. He finished up by telling me he had a spot of hay fever… A session that wasn’t meant to happen, happens to be one of my favourites.

We’re really lucky to have a garden

Photographer: Robert Coyle
Location: Sale, Manchester

The weekend is here, lockdown continues and Bernadette and Francis enjoy the garden. One Friday, as I finished emailing at the kitchen table, my wife had taken a chair and a drink outside to enjoy the evening sun. We were doing our best, like the rest of the country, with work, childcare and news of daily death tolls. Our son had taken to relieving himself on the plants, much to our initial amusement and then slight frustration.

Everyday hero

Photographer: Arnhel de Serra
Location: London

When I drove past Richard I had to do a double-take, as I couldn’t believe he was out on his postman’s round in fancy dress. I asked if I could photograph him, and over a few days we got to know each other. Given the doomsday scenario that the media were portraying in the early days of the COVID-19 pandemic, I felt very strongly that here was a man who had something deeply personal and positive to offer his community. Is it an earth shattering news story? Probably not. As a human interest story however, I feel that his generosity of spirit should be celebrated, and I am delighted that he will be part of this very important project.

Never without her grandma

Photographer: Melanie Lowis
Location: Teddington, London

Millie (5 years old) made a cut out of her much loved grandma (73 years old). Millie sees Grandma almost daily and lockdown prevented the pair from seeing each other. As a retired teacher, Grandma would have made the perfect partner to help Millie with home schooling. The bond between this grandma and granddaughter is truly a special one and when lockdown ends, and the real grandma can return, it will be a very happy and emotional reunion.


 

 

New DAIN algorithm generates near-perfect slow-motion videos from ordinary footage

09 Sep

Researchers with Google, UC Merced and Shanghai Jiao Tong University have detailed the development of DAIN, a depth-aware video frame interpolation algorithm that can seamlessly generate slow-motion videos from existing content without introducing excessive noise and unwanted artifacts. The algorithm has been demonstrated in a number of videos, including historical footage boosted to 4K/60fps.

Rapidly advancing technologies have paved the way for high-resolution displays and videos; the result is a mass of lower-resolution content made for older display and video technologies that look increasingly poor on modern hardware. Remastering this content to a higher resolution and frame rate will improve the viewing experience, but would typically be a costly undertaking reserved only for the most popular media.

Artificial intelligence is a promising solution for updating older video content as evidenced by the growing number of fan-remastered movies and TV shows. Key to these efforts are algorithms trained to upscale and, when necessary, repair the individual frames of videos, which are recompiled into a higher-resolution ‘remaster.’

The newly detailed DAIN algorithm is different — rather than upscaling and repairing the individual frames in a video, this AI tool works by generating new frames and slotting them between the original frames, increasing the video’s FPS for smoother and, depending on how many frames are generated, slower-motion content.

This is a process called motion (video frame) interpolation, and it typically causes a drop in quality by adding unwanted noise and artifacts to the final videos. The DAIN algorithm presents a solution to this problem, offering motion interpolation to boost frames-per-second up to 480fps without introducing any readily noticeable artifacts.

The resulting content is high-quality and nearly visually identical to the source footage, but with the added smoothness that comes with increasing the frames-per-second to 60fps. In addition, DAIN has been demonstrated as capable of transforming ordinary 30/60fps footage into smooth slow-motion videos without choppiness or decreased quality.
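To see why motion interpolation usually degrades quality, it helps to look at the naive baseline. The sketch below implements simple linear cross-fading between frames; this is emphatically not DAIN's method, but it shows exactly the ghosting artifact that depth-aware interpolation is designed to avoid.

```python
import numpy as np

def blend_interpolate(frame_a, frame_b, n):
    """Insert n in-between frames by linear cross-fading.

    This is the crudest interpolation baseline, NOT what DAIN does: a
    moving object simply ghosts in both positions. Depth-aware methods
    such as DAIN instead estimate optical flow and per-pixel depth, then
    warp pixels to their true in-between positions.
    """
    steps = np.linspace(0.0, 1.0, n + 2)[1:-1]   # interior timestamps only
    return [(1.0 - t) * frame_a + t * frame_b for t in steps]

# Doubling 30fps to 60fps needs one generated frame per original pair.
a, b = np.zeros((4, 4)), np.ones((4, 4))
(mid,) = blend_interpolate(a, b, 1)
print(mid[0, 0])  # 0.5: a uniform half-blend of the two frames
```

Going from 30fps to 480fps means synthesizing fifteen frames per original pair, which is why artifacts from naive blending compound so quickly at high interpolation factors.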

According to the researchers, DAIN is ‘compact, efficient, and fully differentiable,’ offering results that perform ‘favorably against state-of-the-art frame interpolation methods on a wide variety of datasets.’ The technology has many potential uses, including recovering lost frames, improving content to be more visually appealing for viewers, generating slow-motion from regular footage and more.

Such technology is arguably necessary for preserving aging media in a useful way, making it possible for new generations to experience historical footage, old TV shows and movies, home videos and similar content on modern high-resolution displays. The technology could also be useful for content creators of all sorts, enabling them to salvage footage they already have and improve the quality of old clips for use in documentaries and other projects.

The researchers explain on their project website:

Starting from the birth of photographing in the 18-th centuries, videos became important media to keep vivid memories of their age being captured. And it’s shown in varying forms including movies, animations, and vlogs. However, due to the limit of video technologies including sensor density, storage and compression, quite a lot of video contents in the past centuries remain at low quality.

Among those important metrics for video quality, the most important one is the temporal resolution measured in frame-per-second or fps for short. Higher-frame-rate videos bring about more immersive visual experience to users so that the reality of the captured content is perceived. Therefore, the demand to improve the low-frame-rate videos, particularly the 12fps old films, 5~12fps animations, pixel-arts and stop motions, 25~30 fps movies, 30fps video games, becomes more and more urgent.

The public can view more examples of videos updated with the DAIN algorithm in the related collection playlist on YouTube, and the full study is available as a PDF on arXiv.


 

 

Google and UC Berkeley researchers create AI that can remove shadows from images

25 Aug

Researchers with the University of California Berkeley and Google Research have published a new paper detailing an AI that can remove unwanted shadows from images. The algorithm focuses on two different types of shadows — ones from external objects and ones naturally resulting from facial features — and works to either remove or soften them in order to maintain a natural appearance.

Whereas professional images are often taken in a studio with proper lighting, the average snapshot of a person is taken ‘in the wild’ where lighting conditions may be harsh, causing dark shadows that obscure parts of the subject’s face while other parts are covered with excessive highlights.

The newly developed AI is designed to address this problem by targeting those unwanted shadows and highlights, removing and softening them until a clearer subject remains. The researchers say their tool works in a ‘realistic and controllable way,’ and it could prove useful for more than just images captured in casual settings.

Professionals could, for example, use a tool like this to salvage images taken in outdoor environments where it was impossible to control the lighting, such as wedding images taken outdoors under a bright noon sun. In their paper, the researchers explain:

In this work, we attempt to provide some of the control over lighting that professional photographers have in studio environments to casual photographers in unconstrained environments … Given just a single image of a human subject taken in an unknown and unconstrained environment, our complete system is able to remove unwanted foreign shadows, soften harsh facial shadows, and balance the image’s lighting ratio to produce a flattering and realistic portrait image.

This project is designed to target three specific elements in these photographs: foreign shadows from external objects, facial shadows caused by one’s natural facial features and lighting ratios between the lightest and darkest parts of the subject’s face. Two different machine learning models are used to target these elements, one to remove foreign shadows and the other to soften facial shadows alongside lighting ratio adjustments.
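The two-model split described above can be sketched as a sequential pipeline. Everything below is a structural illustration: the function names are hypothetical, and the trained neural networks are stubbed out with identity functions; only the ordering of the stages reflects the paper.

```python
# Structural sketch of the paper's two-stage design. Names are
# hypothetical stand-ins, not the authors' API.

def remove_foreign_shadows(image):
    """Stage 1: erase shadows cast onto the face by external objects."""
    return image  # stand-in for the first trained model

def soften_facial_shadows(image, target_lighting_ratio=2.0):
    """Stage 2: soften shadows from the subject's own features and
    rebalance the ratio between the bright and dark sides of the face."""
    return image  # stand-in for the second trained model

def enhance_portrait(image):
    # Foreign shadows are removed first; the second model then softens
    # what remains and adjusts the lighting ratio.
    return soften_facial_shadows(remove_foreign_shadows(image))
```

The sequencing matters: the facial-shadow model assumes its input no longer contains foreign shadows, which is also why the paper's failure cases involve the two shadow types being confused.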

The team evaluated their two machine learning models using both ‘in the wild’ and synthetic image datasets. The results are compared to existing state-of-the-art technologies that perform the same functions. ‘Our complete model clearly outperforms the others,’ the researchers note in the study, highlighting their system’s ability in a selection of processed sample images.

In addition to using the technology to adjust images, the study explains that this method can be tapped as a way to ‘preprocess’ images for other image-modifying algorithms, such as portrait relighting tools. The researchers explain:

Though often effective, these portrait relighting techniques sometimes produce suboptimal renderings when presented with input images that contain foreign shadows or harsh facial shadows. Our technique can improve a portrait relighting solution: our model can be used to remove these unwanted shadowing effects, producing a rendering that can then be used as input to a portrait relighting solution, resulting in an improved final rendering.

The system isn’t without limitations, however, particularly if the foreign shadows contain ‘many finely-detailed structures,’ some residue of which may remain even after the images are processed. As well, due to the way the system works, some bilaterally symmetric shadows may not be removed from subjects.

In addition, softening the facial shadows using this technique may, at times, result in a soft, diffused appearance due to excessive smoothing of some fine details that should remain, such as in the subject’s hair, as well as causing a ‘flat’ appearance by softening some facial shadows.

As well, the researchers note that their complete system looks for two types of shadows — facial and foreign — and that it may confuse the two at times. If facial shadows on the subject are ‘sufficiently harsh,’ the system may detect them as foreign shadows and remove (rather than soften) them.

Talking about this issue, the researchers explain:

This suggests that our model may benefit from a unified approach for both kinds of shadows, though this approach is somewhat at odds with the constraints provided by image formation and our datasets: a unified learning approach would require a unified source of training data, and it is not clear how existing light stage scans or in-the-wild photographs could be used to construct a large, diverse, and photorealistic dataset in which both foreign and facial shadows are present and available as ground-truth.

Regardless, the study highlights yet another potential use for artificial intelligence technologies in the photography industry, paving the way for more capable and realistic editing that takes less time to perform than manual editing. A number of studies over the past few years have highlighted potential uses for AI, including transforming still images into moving animations and, in the most extreme cases, generating entire photo-realistic images.

As for this latest project, the researchers have made their code, evaluation data, test data, supplemental materials and paper available to download through the UC Berkeley website.

Via: Reddit


 

 

New York City map makes it easy to find historical images of NYC from 1939 to 1941

18 Aug

A newly launched online mapping system called 1940s NYC makes it simple for anyone to find historical images of the city captured from 1939 to 1941 by the New York City Tax Department in collaboration with the Works Progress Administration. The photography initiative involved capturing images of every home, shop and other building in all five boroughs, the result being an incredibly detailed time capsule of the city as it existed decades ago.

These historical photographs were already available to the public, but getting them — particularly ones of specific buildings — was time-consuming. Things got a bit simpler in 2018, when the New York City Municipal Archives finished digitizing the full collection, a process that also involved tagging each image so that it could be found online more easily.

Despite that improvement, browsing these images was still cumbersome: users had to go to the NYC.gov city map tool and enter the exact address of the building in the photo they wanted. This made casual browsing difficult, something the new 1940s NYC mapping tool solves.

The new and far more capable mapping tool comes from NYC-based software engineer Julian Boilen, who notes on the website that an automated process was used to place the images on the map and, therefore, there is the potential for some ‘imperfections.’

The mapping tool is exceptionally simple to use. Every black dot on the map represents a photo of that location; users can zoom in on individual streets and neighborhoods, which appear to be overlaid with historic city zoning maps. Users can also enter an address to go right to a particular building. This is quite a bit more robust than the mapping tool offered by the city itself.

Users located in New York City also have the option of clicking a location icon that will pull up their current location on the map, making it easy to see what their neighborhood looked like decades ago. As well, the map provides an ‘Outtakes’ section that is a large gallery of browseable photos. Many of these images feature black dots and NYC.gov watermarks.

In addition to serving as a portal to the 1930s – 1940s NYC images, the mapping tool also includes a link to a similar map that features the same variety of imagery, but one captured in the 1980s.

This dataset features 800,000 photos of buildings, according to the tool, as well as more than 100,000 ‘street segments.’ This mapping tool includes a ‘Stories’ feature that provides a series of images alongside the stories behind them.

These stories include things like pointers on spotting interesting elements in the photos, details about whether certain buildings still stand and whether they were remodeled, notable events that took place at these locations and similar information.

The website is not affiliated with the New York City Department of Records, which is the agency that owns the historical photos. Anyone can order high-resolution digitized copies or prints of images they like from the city’s Municipal Archives; otherwise, the public is limited to the watermarked, low-resolution versions made accessible by the NYC Department of Records and Information Services.

Including these images, the NYC Municipal Archives Digital Collections website offers the public access to more than 1.6 million digital items, including photos and videos.

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on New York City map makes it easy to find historical images of NYC from 1939 to 1941


 

Google researchers use AI to generate 3D models from random Internet images

13 Aug

Researchers with Google Research and the Google Brain deep learning AI team have published a new study detailing Neural Radiance Fields for Unconstrained Photo Collections (NeRF-W). The system works by taking ‘in the wild’ unconstrained images of a particular location — tourist images of a popular attraction, for example — and using an algorithm to turn them into a dynamic, complex, high-quality 3D model.

The researchers detail their project in a new paper, explaining that their work involves adding ‘extensions’ to neural radiance fields (NeRF) that enable the AI to accurately reconstruct complex structures from unstructured images, meaning ones taken from random angles with different lighting and backgrounds.

This contrasts with NeRF without the extensions, which can only accurately model structures from images taken in controlled settings. The obvious benefit is that 3D models can be created using the huge number of Internet photos that already exist of these structures, transforming those collections into useful datasets.

Different views of the same model constructed from unstructured images.

The Google researchers call their more sophisticated AI ‘NeRF-W,’ one used to create ‘photorealistic, spatially consistent scene representations’ of famous landmarks from images that contain various ‘confounding factors.’ This represents a huge improvement to the AI, making it far more useful than a version that requires carefully controlled image collections to work.

Talking about the underlying technology, the study explains how NeRF works, stating:

‘The Neural Radiance Fields (NeRF) approach implicitly models the radiance field and density of a scene within the weights of a neural network. Direct volume rendering is then used to synthesize new views, demonstrating a heretofore unprecedented level of fidelity on a range of challenging scenes.’
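The ‘direct volume rendering’ the paper mentions can be made concrete with a short numerical sketch. This is not the paper’s implementation — just a minimal NumPy illustration of the standard compositing step, assuming the network has already returned a density and an RGB color for each sample point along a camera ray:

```python
import numpy as np

def render_ray(sigmas, colors, deltas):
    """Composite sampled densities/colors along one ray (NeRF-style volume rendering).

    sigmas: (N,) non-negative densities at N samples along the ray
    colors: (N, 3) RGB radiance at each sample
    deltas: (N,) distances between consecutive samples
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)   # opacity contributed by each segment
    # Transmittance: how much light survives to reach sample i unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas                  # per-sample contribution to the pixel
    return (weights[:, None] * colors).sum(axis=0)

# A ray that passes through empty space, then hits a dense red surface:
sigmas = np.array([0.0, 0.0, 50.0, 50.0])
colors = np.array([[0., 0, 0], [0, 0, 0], [1, 0, 0], [1, 0, 0]])
deltas = np.full(4, 0.1)
print(render_ray(sigmas, colors, deltas))  # nearly pure red: weight lands on the dense samples
```

Synthesizing a ‘new view’ then just means casting one such ray per pixel from the desired camera pose.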

There’s one big problem, though, which is that NeRF systems only work well if the scene is captured in controlled settings, as mentioned. Without a set of structured images, the AI’s ability to generate models ‘degrades significantly,’ limiting its usefulness compared to other modeling approaches.

The researchers explain how they build upon this AI and advance it with new capabilities, saying in their study:

The central limitation of NeRF that we address in this work is its assumption that the world is geometrically, materially, and photometrically static — that the density and radiance of the world is constant. NeRF therefore requires that any two photographs taken at the same position and orientation must have identical pixel intensities. This assumption is severely violated in many real-world datasets, such as large-scale internet photo collections of well-known tourist landmarks…

To handle these complex scenarios, we present NeRF-W, an extension of NeRF that relaxes the latter’s strict consistency assumptions.

The process involves multiple steps, including first having NeRF-W model the per-image appearance of different elements in the photos, such as the weather, lighting, exposure level and other variables. The AI ultimately learns ‘a shared appearance representation for the entire photo collection,’ paving the way for the second step.
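The idea of a per-image appearance representation can be sketched as follows. Everything here is a hypothetical toy stand-in (the names, dimensions and single-layer ‘color head’ are invented for illustration, not NeRF-W’s architecture): each training image gets a learned latent vector that is fed into the color prediction, so the same 3D point can be rendered under that image’s particular lighting or exposure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 training images, each with a learnable appearance latent.
# In the real system these latents are optimized jointly with the network weights.
N_IMAGES, LATENT_DIM, FEAT_DIM = 3, 4, 8
appearance_latents = rng.normal(size=(N_IMAGES, LATENT_DIM))

W = rng.normal(size=(FEAT_DIM + LATENT_DIM, 3))  # toy color-head weights

def color_at(point_features, image_index):
    """Color depends on the scene features AND the chosen image's appearance latent."""
    x = np.concatenate([point_features, appearance_latents[image_index]])
    return 1.0 / (1.0 + np.exp(-x @ W))  # sigmoid keeps RGB in [0, 1]

feats = rng.normal(size=FEAT_DIM)  # features of one 3D point (geometry is shared)
print(color_at(feats, 0))          # the point as seen with image 0's appearance
print(color_at(feats, 1))          # ...and with image 1's: color shifts, geometry doesn't
```

Because geometry is shared across all images while appearance is conditioned per image, the model can explain away lighting and exposure differences without corrupting the 3D structure.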

In the second part, NeRF-W models the overall subject of the images…

‘…as the union of shared and image-dependent elements, thereby enabling the unsupervised decomposition of scene content into static and transient components. This decomposition enables the high-fidelity synthesis of novel views of landmarks without the artifacts otherwise induced by dynamic visual content present in the input imagery.

Our approach models transient elements as a secondary volumetric radiance field combined with a data-dependent uncertainty field, with the latter capturing variable observation noise and further reducing the effect of transient objects on the static scene representation.’
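One way the uncertainty field ‘reduces the effect of transient objects’ is through an uncertainty-weighted photometric loss: pixels flagged as unreliable (large predicted variance) contribute less to training. The sketch below is a loose paraphrase of that idea, assuming a per-pixel uncertainty `beta` and a small floor `beta_min`; it is not the paper’s exact formulation:

```python
import numpy as np

def robust_loss(pred, target, beta, beta_min=0.03):
    """Uncertainty-weighted squared error: large beta downweights a pixel's error,
    while the log(beta) term stops the model from claiming everything is uncertain."""
    beta = beta + beta_min                 # keep the variance strictly positive
    sq_err = (pred - target) ** 2
    return (sq_err / (2 * beta ** 2) + np.log(beta)).mean()

pred   = np.array([0.5, 0.5])
target = np.array([0.5, 0.9])             # second pixel occluded by, say, a pedestrian
confident = robust_loss(pred, target, beta=np.array([0.0, 0.0]))
hedged    = robust_loss(pred, target, beta=np.array([0.0, 1.0]))  # uncertain about pixel 2
print(confident > hedged)  # True: the occluded pixel's error is heavily discounted
```

The static field is therefore never forced to reproduce the pedestrian: the transient/uncertainty branch absorbs the discrepancy instead.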

Upon testing their creation, the researchers found that NeRF-W was able to produce high-fidelity models of subjects with multiple detailed viewpoints using ‘in-the-wild’ unstructured images. Despite using more complicated images with many variables, the NeRF-W models surpassed the quality of models generated by the previous top-tier NeRF systems ‘by a large margin across all considered metrics,’ according to researchers.

The potential uses for this technology are numerous, including the ability to generate 3D models of popular destinations for VR and AR applications using existing tourist images. This eliminates the need to create carefully controlled settings for capturing the images, which can be difficult at popular destinations where people and vehicles are often present.

A PDF containing the full study can be found here; some models can be found on the project’s GitHub, as well.


 