 

Posts Tagged ‘Researchers’

Researchers develop lithium-sulphur battery that can power a phone for five days

17 Jan
Image courtesy of Monash University

Researchers led by Monash University in Australia have developed what they say is the ‘most efficient’ version of a lithium-sulfur battery, one capable of powering a smartphone for five full days of continuous use. The team has filed a patent for the manufacturing process it developed and reports interest from ‘some of the world’s largest manufacturers.’

Prototype lithium-sulfur power cells were manufactured in Germany, according to an announcement from the university published last week. The technology holds promise for revolutionizing everything from consumer gadgets like cameras and phones to larger systems involving vehicles and solar power. The newly developed lithium-sulfur battery offers more than four times the performance of the market’s current most efficient batteries.

With this level of battery performance, photographers and filmmakers could spend weeks in remote locations with only power banks as their power source, eliminating the need to tote around and use solar chargers, which are dependent on direct sunlight and often take several hours or more to recharge a battery.

In addition to improved performance, the Li-S battery technology is said to have less environmental impact than the lithium-ion batteries currently in use. The new prototype uses the same materials found in ordinary lithium-ion batteries, and the manufacturing process is said to cost less.

According to the university, additional testing of the technology with solar grids and cars will take place in Australia early this year. Major lithium battery manufacturers in Europe and China are interested in scaling up production of these lithium-sulfur batteries.

Articles: Digital Photography Review (dpreview.com)

 

 

London researchers develop plant-powered camera system for conservation efforts

19 Oct

ZSL London Zoo has detailed the results of a new scientific trial that successfully powered a tiny camera using plants. At the core of the system are microbial fuel cells designed to harness the energy produced by bacteria in the soil, which work to break down biomatter produced by plants. The end result, according to ZSL, may one day be plant-powered cameras that can be used as part of conservation efforts.

The microbial fuel cells were installed in the London Zoo’s Rainforest Life exhibit for use with a maidenhair fern named Pete. Unlike batteries, which need to be regularly recharged using sunlight or an external power source, plant-based fuel cells can be used to power many low-energy sensors, cameras, and other devices in a variety of environments.

‘We’ve quite literally plugged into nature to help protect the world’s wildlife: Pete has surpassed our expectations and is currently taking a photo every 20 seconds,’ said ZSL Conservation Technology Specialist Al Davies. ‘He’s been working so well we’ve even accidentally photobombed him a few times!’ Below are a few photos captured with the system:

By utilizing this technology, conservationists may be able to monitor plant growth, temperature, and other data using remote hardware without relying on solar panels and batteries. Following additional refinement, the team plans to test the technology in the wild.


Image credits: Photos shared with kind permission from ZSL London Zoo.


 

Researchers have developed a reset-counting pixel that promises near-limitless highlight capture

12 Oct
Figure 4 (from the linked paper): Realized CMOS test chip: (a) photograph of the packaged chip, (b) screenshot of the layout.

German researchers have developed a pixel design with the potential for massively increased dynamic range. Their design, reported in the journal ‘Advances in Radio Science’, isn’t limited by the point at which it saturates, meaning it can continue to capture highlight data where other sensors would be overwhelmed.

Unlike conventional CMOS chips, their ‘self-resetting pixel’ doesn’t simply ‘clip’ when it becomes saturated; instead, it resets, and a circuit counts how many times it has had to reset during the exposure. The pixel also contains a conventional analog-to-digital conversion circuit, so it can measure the remaining charge at the end of the exposure.
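The arithmetic implied by this design is straightforward: the total signal is the number of resets times the pixel’s full-well capacity, plus the residual charge read out by the ADC. A minimal sketch in Python (the names and the full-well value are hypothetical, not taken from the paper):

```python
# Reconstructing total exposure from a self-resetting pixel (illustrative).
FULL_WELL = 10_000  # electrons collected before each reset (hypothetical value)

def total_signal(reset_count: int, residual: int) -> int:
    """Total collected charge: full reset cycles plus the leftover
    charge measured by the conventional ADC at the end of exposure."""
    return reset_count * FULL_WELL + residual

# A pixel that reset 37 times with 4,200 e- left over recorded far more
# light than a conventional pixel could hold before clipping.
print(total_signal(37, 4200))  # prints 374200
```

Because the reset counter extends the measurable range, highlights are encoded in the count while shadow detail still benefits from the fine-grained ADC reading.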

Figure 2 (from the linked paper): The working principle of the self-reset pixel.

This would mean that you don’t need to limit your exposure to protect highlight data and can instead set an optimal exposure for capturing your subject, safe in the knowledge that this won’t result in blown-out highlights. In their paper, the researchers from Institut für Mikroelektronik Stuttgart created a series of test pixels with different designs, and will now focus on the one that gave the most linear response to different light levels, both in terms of its reset characteristics and its conventional ADC mode.

Figure 1 (from the linked paper): Schematics of the analog and digital parts of one pixel cell and a global control for all pixel cells.

Before you get too excited, though, this work is still at a fairly early stage and is primarily focused on video for industrial applications, though lead researcher Stefan Hirsch tells us: ‘basically it should also be possible to use for still images.’

At present, the additional counting circuitry means the light-sensitive photodiode in each pixel is very small, making up just 13% of the surface area of huge 53µm pixels. A move to a stacked CMOS design, with the circuitry built into separate layers, would increase this fraction, with the potential for 20µm pixels with more of their area being light-sensitive. A three-layer design could allow still smaller pixels. For perspective, the pixels in the 12MP full-frame Sony a7S II are around 8.5µm, so a lot of work remains before a sensor like this would be useful in a consumer video or stills camera.


 

UC Berkeley researchers have created a drone that shrinks to squeeze through small spaces

09 Aug

Since drones entered mainstream consciousness, people have gotten creative with developing new ideas for how they can be used. Drones can deliver food and other small items. They can even bake cakes or play instruments when configured properly. Now, a team of researchers at UC Berkeley’s High Performance Robotics Laboratory (HiPeRLab) has created a ‘Passively Morphing Quadcopter’ that can temporarily shrink down to squeeze through small spaces.

While this isn’t the first drone that can compress its shape mid-flight, it is the only one that can do so without using any additional hardware components. This helps preserve battery life, enabling the aircraft to fly even longer. Hinges allow the arms to rotate freely, while constant-force springs supply the force to change shape. When no thrust is applied, the springs pull the arms into a folded configuration.

When the drone approaches an opening smaller than itself, it can plot a course that lets its arms retract as it flies through. The rotors shut off and, after the drone passes through, it loses a bit of altitude as it powers back up. While this setup offers a number of useful real-world applications, such as inspecting hard-to-reach areas, the HiPeRLab team still has work to do before the drone can handle scenarios where there isn’t a wide-open area on the other side of the gap. Nevertheless, when perfected, it could make for an innovative filmmaking tool.
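The traversal sequence described above (cut thrust at the gap, coast through folded, then power up and recover) can be thought of as a small state machine. A hypothetical sketch, not HiPeRLab’s actual controller:

```python
from enum import Enum, auto

class DroneState(Enum):
    CRUISE = auto()   # thrust on; rotor thrust overpowers the springs, arms extended
    FOLDED = auto()   # thrust off; springs pull the arms in, drone coasts and sinks
    RECOVER = auto()  # thrust back on; arms re-extend and altitude is regained

def next_state(state: DroneState, gap_ahead: bool, gap_cleared: bool) -> DroneState:
    """Advance the traversal sequence: fold at the gap, recover once through."""
    if state is DroneState.CRUISE and gap_ahead:
        return DroneState.FOLDED
    if state is DroneState.FOLDED and gap_cleared:
        return DroneState.RECOVER
    return state
```

The key design point this captures is passivity: folding is not commanded by an actuator but falls out of simply cutting thrust, which is why the mechanism costs no extra battery power.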


 

Researchers develop new anti-face-distortion method for wide-angle lenses

16 Jun

Ultra-wide-angle lenses are becoming increasingly popular on smartphones, in both rear and front cameras. The latter in particular are frequently used for portraiture, in the form of selfies of both individuals and groups.

Unfortunately, a problem becomes apparent when photographing people with a wide-angle lens: faces located close to the edges of the frame are distorted, showing unnatural stretching, squishing and/or skewing, an effect also known as anamorphosis.

A group of researchers at Google and MIT led by YiChang Shih has now found an efficient way of dealing with the issue. In their paper titled “Distortion-Free Wide-Angle Portraits on Camera Phones,” they describe an algorithm that is capable of correcting the effect, making for more natural selfies and wide-angle portraits.

Previous solutions could correct distortion on faces but in turn introduced other artifacts to the background and other elements of the image. The new method works around this by creating a content-aware warping mesh, applying corrections only to the parts of the frame where faces are detected while maintaining smooth transitions between the faces and the rest of the image.
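The general idea, though not the authors’ exact algorithm, can be sketched as a per-pixel blend between the unmodified perspective projection and a corrected mapping that compresses large radial distances, weighted by a smooth face mask (all values here are illustrative):

```python
import math

def corrected_position(x: float, y: float, face_weight: float,
                       strength: float = 0.3) -> tuple:
    """Blend a pixel between the perspective projection (face_weight=0)
    and a radial correction that undoes edge stretching (face_weight=1).
    'face_weight' would come from a smoothed face-detection mask."""
    r = math.hypot(x, y)  # radial distance from the image center
    if r == 0.0:
        return (x, y)
    # Toy correction: compress large radii toward the center.
    r_new = math.tan(math.atan(r) * (1.0 - strength)) / (1.0 - strength)
    scale = face_weight * (r_new / r) + (1.0 - face_weight)
    return (x * scale, y * scale)

# Background pixels (weight 0) are untouched, so no new artifacts appear;
# face pixels near the frame edge (weight 1) are pulled inward.
```

Because the weight falls smoothly to zero outside face regions, the background keeps its straight lines while faces lose their edge stretching, which is the trade-off the paper’s mesh optimization is designed to balance.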

The researchers say good results were achieved on photos with a wide field-of-view ranging from 70° to 120° and the algorithm is fast enough to work “at an interactive rate”. More information is available on the project website.


 

Samsung researchers create AI that transforms still images into talking portraits

24 May

Researchers with the Samsung AI Center in Moscow and the Skolkovo Institute of Science and Technology have published a new paper detailing the creation of software that generates 3D animated heads from a single still image. Unlike previously detailed AI systems capable of generating photo-realistic portraits, the new technology produces moving, talking heads that, though not perfect, are highly realistic.

‘Practical scenarios’ require a system that can be trained using only a few images of a person, or even a single one, rather than an extensive image dataset, the newly published study explains. To satisfy this requirement, the researchers created a system for which ‘training can be based on just a few images and done quickly, despite the need to tune tens of millions of parameters.’

Using generative adversarial networks, the researchers were able to animate painted portraits in addition to photographs, producing, among other things, a talking, moving version of the Mona Lisa. As demonstrated in a video detailing the study (below), the final results vary in quality and realism, with some arguably indistinguishable (at least at low resolutions) from real videos.

The researchers explain in their paper that using additional images to train the system produces more lifelike results:

Crucially, only a handful of photographs (as little as one) is needed to create a new model, whereas the model trained on 32 images achieves perfect realism and personalization score in our user study (for 224p static images).

Some other issues remain with this type of system, the researchers note, including a ‘noticeable personality mismatch’ between the person featured in the still image(s) and the talking individual used to animate the portrait. The researchers explain, ‘if one wants to create “fake” puppeteering videos without such mismatch, some landmark adaptation is needed.’

The technology remains viable for purposes that don’t necessarily require a personality match, but rather the simple animation of a character that exists only as a small series of still images. Thus far, the technology only works on faces and the upper parts of one’s torso. It’s unclear whether the researchers plan to expand the system to include other body parts.

Samsung’s study joins past AI-based portrait work from NVIDIA, as well as non-portrait AI image generation, including the system NVIDIA debuted earlier this year — one capable of rapidly converting simple sketches into complex landscape images.


 

Researchers launch Colourise.sg, a free web app that colorizes B&W images using AI

07 Mar

Engineers with GovTech Singapore’s Data Science and Artificial Intelligence Division have launched a website called Colourise.sg that uses deep learning AI to colorize black and white images. The website doesn’t require any technical skills from the user and is free to use. Colorized results are delivered in seconds and, more often than not, are very realistic.

The project was detailed by software engineer Preston Lim, who explained that Colourise.sg was trained specifically to colorize historical black-and-white Singaporean photos. This differs from some competing AI-based colorizers, such as Algorithmia, which are often trained using an image dataset called ImageNet.

“Singapore.” The New York Public Library Digital Collections. The Miriam and Ira D. Wallach Division of Art, Prints and Photographs: Picture Collection, The New York Public Library (left), colourised photo by Colourise.sg (right)

The tool remains an excellent option for colorizing images outside of a Singaporean context, however. There are limitations to the tool, primarily that users aren’t able to specify the original color of image elements, meaning the final colorized image may be realistic, but not reflect the scene’s true colors.

According to the team behind Colourise.sg, the colorizer works best with high-resolution images featuring humans and natural scenery. The system is capable of colorizing images that contain objects it can identify based on the dataset used to train it. In photos that contain objects the AI can’t recognize, the results may include unrealistic colors as the system must simply use its best guess.

Several excellent examples of Colourise.sg’s capabilities are provided on Medium.


 

Researchers recover photos from a USB drive that spent a year frozen in seal poop

09 Feb
Seals at Cape Cross, Namibia — Joachim Huber

A USB flash drive recovered from frozen seal scat has been reunited with its owner, according to New Zealand’s National Institute of Water and Atmospheric Research (NIWA). The organization revealed its findings in a post early this week, when it stated that a functional USB drive with recoverable photos and at least one video had been found in thawed seal poo.

According to the NIWA, seal scat is ‘as good as gold’ for researchers who study the creatures. Volunteers with LeopardSeals.org collect these samples and ship them to the researchers, who then freeze them until they’re ready to analyze the droppings.

In November 2017, the NIWA says marine biologist Dr. Krista Hupman received a sample collected by a local vet. The scat was placed in a freezer, only to be removed last month by volunteers with the organization. The sample was defrosted, rinsed, and then broken apart to study.

Amid the expected findings was one concerning discovery: a USB flash drive. After being left out to dry, the researchers connected the drive and were surprised to recover images of sea lions, as well as a video showing the tip of a blue kayak and a mother and baby sea lion in the water.

NIWA shared the video on its Twitter account on February 4 in an attempt to reunite the USB drive with its owner.

The amusing story went viral, and it only took a day for owner Amanda Nally to claim her property, according to The Project NZ. The hardy USB drive’s make and model remain unknown, but it’s safe to say that, regardless of official specs, the flash drive was indeed weather ‘sealed.’


 

NVIDIA researchers create AI that generates photo-realistic portraits

19 Dec

NVIDIA researchers have published a new paper detailing their latest artificial intelligence work, which involves generating photo-realistic portraits of humans that are indistinguishable from images of real people. The technology revolves around an alternative generator architecture for generative adversarial networks (GANs) that utilizes style transfer for producing the final result.

Though GANs have improved substantially in only a few years, the researchers say in their paper that the generators ‘continue to operate as black boxes, and despite recent efforts, the understanding of various aspects of the image synthesis process, e.g., the origin of stochastic features, is still lacking.’ That’s where the newly developed alternative architecture comes in.

The team’s style-based architecture enables GANs to generate new images based on photos of real subjects, but with a twist: their generator learns to distinguish between separate elements in the images on its own. In the video above, NVIDIA’s researchers demonstrate this technology by generating portraits based on separate elements from images of real people.

“Our generator thinks of an image as a collection of ‘styles,’ where each style controls the effects at a particular scale,” the team explains.

Image elements are split into three style categories: “Coarse,” “Middle,” and “Fine.” For portraits, these categories cover elements like the subject’s face shape, facial features, hair, eyes, colors and more. The system can also vary inconsequential features, such as texture and the curl and direction of hair.
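Conceptually (this is a simplification, not NVIDIA’s implementation), style mixing means feeding style vectors from different source images into different resolution layers of the generator:

```python
# Toy illustration of style mixing across generator scales. In a style-based
# generator, each resolution layer receives a style vector; here a 'style'
# is just a label recording which source image supplied it.
COARSE = ["4x4", "8x8"]        # pose, general face shape
MIDDLE = ["16x16", "32x32"]    # facial features, hair style
FINE = ["64x64", "128x128"]    # color scheme, fine texture

def mix_styles(source_a: str, source_b: str) -> dict:
    """Take coarse and middle styles from subject A, fine styles from B."""
    layers = {layer: source_a for layer in COARSE + MIDDLE}
    layers.update({layer: source_b for layer in FINE})
    return layers

mixed = mix_styles("person_A", "person_B")
print(mixed["8x8"], mixed["128x128"])  # prints: person_A person_B
```

Swapping which source feeds which scale is what lets the generator borrow one person’s face shape while taking another person’s coloring and texture.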

The video above demonstrates changes involving inconsequential variation on non-portrait images, which includes generating different patterns on a blanket, altering the hair on a cat, and subtly changing the background behind a car. The style-transfer GANs offer superior results to traditional GAN generator architecture, the researchers conclude, with the photo-realistic results underscoring their assessment.

The latest work further refines a technology that has been growing rapidly over only a few years. Though GANs have been used in the past to generate portraits, the results were far from photo-realistic. It’s possible that technology like this could one day be offered as a consumer or enterprise product for generating on-demand life-like images.


 

DJI challenges drone plane collision test, accuses researchers of ‘sowing fear’

23 Oct

DJI has challenged a recently published video that demonstrates a small drone smashing into an airplane wing. The test collision was conducted in a simulated environment by researchers with the University of Dayton Research Institute (UDRI) to assess the potential damage such an in-air crash could cause. DJI has characterized the test as “unbalanced, agenda-driven research.”

In a letter sent to UDRI’s group leader for impact physics Kevin Poorman, DJI alleges UDRI’s “Risk in the Sky?” video (below) and related materials present a “collision scenario between a drone and an airplane wing that is simply inconceivable in real life.”

The test collision involved a 952g / 2.1lbs DJI Phantom 2 quadcopter being launched at the wing of a Mooney M20 aircraft. In a blog post about the research, UDRI researchers said the test was intended to “mimic a midair collision of a drone and a commercial transport aircraft at 238 miles per hour…”

DJI has taken issue with that claim, saying the test assumes the Mooney M20 would be flying at its max 200mph / 321kph speed, and that the drone would “apparently” be exceeding its max 33.5mph / 53.9kph speed. “At the altitudes where that plane would conceivably encounter a Phantom drone,” DJI claims, “it would fly less than half as fast – generating less than one-fourth of the collision energy.”
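DJI’s ‘one-fourth of the collision energy’ figure follows directly from kinetic energy scaling with the square of speed. A quick check of the arithmetic, using the drone mass and test speed quoted above:

```python
def kinetic_energy(mass_kg: float, speed_mps: float) -> float:
    """Kinetic energy in joules: KE = 0.5 * m * v^2."""
    return 0.5 * mass_kg * speed_mps ** 2

MPH_TO_MPS = 0.44704
DRONE_MASS = 0.952  # kg, DJI Phantom 2

ke_test = kinetic_energy(DRONE_MASS, 238 * MPH_TO_MPS)  # UDRI's test speed
ke_half = kinetic_energy(DRONE_MASS, 119 * MPH_TO_MPS)  # half that speed

print(round(ke_test))     # roughly 5400 J at 238 mph
print(ke_test / ke_half)  # 4.0: halving the closing speed quarters the energy
```

Since energy grows with v², the dispute over which closing speed is realistic matters far more than it would for a linear quantity.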

DJI also states:

Your video was created contrary to established U.S. Federal Aviation Administration (FAA) crash test parameters, which assume a bird striking an airplane at its sea-level cruising speed — which is typically 161 mph to 184 mph for the Mooney M20. Your video deliberately created a more damaging scenario, and was widely cited as evidence for what could happen to a large commercial jet — even though the Mooney M20 is a small plane with four seats.

The Chinese drone company has likewise taken issue with the test as a whole, accusing it of not being “created as part of a legitimate scientific query, with little description of your testing methodology and no disclosure of data generated during the test.” The company accuses the researchers of a “bias toward sowing fear,” claiming they would otherwise also have shared a video of a simulated bird-plane strike that caused “more apparent damage.”

DJI’s letter demands UDRI “remove the alarmist video,” withdraw the research, and “issue a corrective statement” that proclaims the test to be “invalid.”
