Posts Tagged ‘pushes’

Adobe pushes critical security updates for Bridge, Photoshop and Prelude

24 Jul

Adobe has pushed live security updates for its Bridge, Photoshop and Prelude applications that patch a number of critical vulnerabilities, including a few that could enable attackers to execute arbitrary code on Windows computers.

While Adobe’s vague ‘Security Updates’ changelog only brushes over the patches, security site ThreatPost offers a more detailed look at what Adobe has done to address 12 CVEs (Common Vulnerabilities and Exposures) across Adobe Bridge, Adobe Photoshop and Adobe Prelude, all of which were first discovered by Mat Powell of Trend Micro’s Zero Day Initiative.

ThreatPost says each of the 12 critical flaws ‘stem[s] from out-of-bounds read and write vulnerabilities, which occur when the software reads data past the end of — or before the beginning of — the intended buffer, potentially resulting in corruption of sensitive information, a crash, or code execution among other things.’ Specifically, five flaws were addressed in Adobe Photoshop, three in Adobe Bridge and four in Adobe Prelude.

According to Adobe, no known exploits of these critical bugs have been reported in the wild, but you’re going to want to make sure all of your programs are up to date if you don’t have automatic updates enabled. You’ll want to make sure you’re running versions 20.0.10 and 21.2.1 for Photoshop CC 2019 and Photoshop 2020, respectively. Adobe Bridge and Adobe Prelude should be running versions 10.1.1 and 9.0.1, respectively.

All updates can be downloaded via the Creative Cloud desktop app for macOS and Windows computers.


Fujifilm pushes out firmware updates for its X-Pro3 and X-T3 mirrorless cameras

30 Jan

Fujifilm has released firmware updates for its X-T3 and X-Pro3 mirrorless camera systems. The X-Pro3 update is rather minor, while the X-T3 update brings a number of substantial improvements.

Firmware version 1.03 for the X-Pro3 fixes an issue wherein the camera could ‘in rare cases’ freeze without warning, and addresses a problem where ‘the color tone of recorded images is not recorded correctly in AF-C mode and continuous shooting.’ Aside from that, no other details are mentioned in the changelog. You can download firmware version 1.03 for the X-Pro3 from Fujifilm’s website.

Moving on to the X-T3, firmware version 3.20 improves the camera’s autofocus capabilities. Specifically, Fujifilm says it improves tracking performance with eye AF, improves face-detection performance when there are different-sized faces in the same frame, and improves autofocus on foreground subjects ‘even when there is a mixture of foreground and background subjects within a[n] AF frame.’

Other updates in firmware version 3.20 include the ability to save up to 9,999 images in each folder (a dramatic increase from the current 999-image limit) and fixes for issues with movie autofocus, including ‘focus hunting at the minimum aperture’ and an issue that sometimes caused a black line to appear at the bottom of the frame. A number of smaller bugs have been fixed as well.

You can find out more information about firmware version 3.20 for the X-T3 and download it on Fujifilm’s website.

Fujifilm has also updated its Camera Remote app for Android and iOS. The update adds support for Apple and Google’s latest operating systems, iOS 13 and Android 10, respectively. You can download Fujifilm Camera Remote in the Google Play Store and iOS App Store for free.


Autel pushes back EVO II launch to March after discovering pre-production bug

28 Jan

Autel’s three EVO II drones were set for a late-January release, but after its engineering team discovered a last-minute software bug during production, the company has confirmed to DPReview that the drones will likely slip to a March release.

The bug, which ‘could limit flight performance under normal operation,’ according to an email sent to us and a copy shared on Autel’s social media accounts, is being addressed on the production line rather than being left for consumers to fix via a day-one software update. Due to the combination of this delay and the Chinese New Year, Autel estimates that the first units should hit shelves in March, although it notes ‘this is not a set date or time frame,’ as ‘things can always change.’

The post, which is shared in its entirety below, thanks interested customers for their patience as the 18-member Autel team works to get units out as soon as possible.

You can read our original Autel EVO II series coverage for more information about the upcoming drones and keep up to date with the latest developments via Autel’s Facebook, Instagram, Twitter and YouTube channels.

Hello Everyone!

First, we want to thank all of our fans and followers. It doesn’t matter if you just follow one of our social channels, or if you fly our products every day. Your support and enthusiasm have always kept us going here at the Seattle office.

With the announcement of EVO II at CES, the response has been absolutely crazy. This community is exploding, and we thank you for your patience with us as we are still trying to catch up on responses. We also want to be as transparent as possible and give you all periodic updates on the status and availability of EVO II. That way you have the most up to date information straight from us and not just rumors.

Our goal at CES was to get the initial units of EVO II (8k) into the hands of dealers by the end of January. Unfortunately, during production, we found a bug in our software that could limit flight performance under normal operation. Instead of shipping the hardware with a known issue and forcing users to perform day 1 updates, we have decided to delay the rest of production and shipments until we have resolved the issue. Our projected timeline is to have EVO II available for purchase in March. This is not a set date or time frame, and things can always change. But with the information we have today, that is our goal.

The team in Seattle is very small and we are adding channel support as we can. We are looking to start up our website newsletter again in the next few weeks. So for any future updates, please check our website, the official social channels, and emails coming directly from us.

We thank you for your patience. If you have any questions please do not hesitate to get a hold of us at support@autelrobotics.com

Thank you all again and fly safe!


Sony pushes firmware updates for eight of its cameras to improve overall stability

18 May

Sony’s software engineers might have some sleep to catch up on, as eight Sony cameras have received incremental firmware updates to improve their overall stability.

Specifically, Sony has released firmware updates for its a9 (version 5.01), a7R III (version 3.01), a7 III (version 3.01), a7R II (version 4.01), a7S II (version 3.01), a7 II (version 4.01), a6500 (version 1.06) and a99 II cameras (version 1.01). Sony doesn’t elaborate on what exactly has been fixed, other than to say the updates ‘[improve] the overall stability of the camera[s].’

Before downloading and installing the latest firmware updates, be sure to read through the instructions provided by Sony on each of the firmware update pages linked above.


Five ways Google Pixel 3 pushes the boundaries of computational photography

11 Oct

With the launch of the Google Pixel 3, smartphone cameras have taken yet another leap in capability. I had the opportunity to sit down with Isaac Reynolds, Product Manager for Camera on Pixel, and Marc Levoy, Distinguished Engineer and Computational Photography Lead at Google, to learn more about the technology behind the new camera in the Pixel 3.

One of the first things you might notice about the Pixel 3 is the single rear camera. At a time when we’re seeing companies add dual, triple, even quad-camera setups, one main camera seems at first an odd choice.

But after speaking to Marc and Isaac I think that the Pixel camera team is taking the correct approach – at least for now. Any technology that makes a single camera better will make multiple cameras in future models that much better, and we’ve seen in the past that a single camera approach can outperform a dual camera approach in Portrait Mode, particularly when the telephoto camera module has a smaller sensor and slower lens, or lacks reliable autofocus.

Let’s take a closer look at some of the Pixel 3’s core technologies.

1. Super Res Zoom

Last year the Pixel 2 showed us what was possible with burst photography. HDR+ was its secret sauce, and it worked by constantly buffering nine frames in memory. When you press the shutter, the camera essentially goes back in time to those last nine frames¹, breaks each of them up into thousands of ‘tiles’, aligns them all, and then averages them.

Breaking each image into small tiles allows for advanced alignment even when the photographer or subject introduces movement. Blurred elements in some shots can be discarded, or subjects that have moved from frame to frame can be realigned. Averaging simulates the effects of shooting with a larger sensor by ‘evening out’ noise. And going back in time to the last 9 frames captured right before you hit the shutter button means there’s zero shutter lag.
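For the curious, here’s a heavily simplified sketch of the tile-based align-and-average idea. This is our own toy illustration in Python, not Google’s actual pipeline, which uses multi-scale alignment and a robust, Wiener-style merge; the tile size, search radius and rejection threshold below are all made up for illustration.

```python
import numpy as np

def align_and_merge(frames, tile=16, search=4):
    """Toy tile-based align-and-average, in the spirit of HDR+.

    frames: list of 2D grayscale float arrays; frames[0] is the reference.
    For each tile of the reference we find the best-matching tile in every
    other frame within +/-search pixels, reject bad matches, and average.
    """
    ref = frames[0].astype(np.float64)
    h, w = ref.shape
    out = np.zeros_like(ref)
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            ref_tile = ref[y:y + tile, x:x + tile]
            acc, n = ref_tile.copy(), 1
            for f in frames[1:]:
                best, best_err = None, np.inf
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        yy, xx = y + dy, x + dx
                        if 0 <= yy <= h - tile and 0 <= xx <= w - tile:
                            cand = f[yy:yy + tile, xx:xx + tile].astype(np.float64)
                            err = ((cand - ref_tile) ** 2).mean()
                            if err < best_err:
                                best, best_err = cand, err
                # a tile that aligns too poorly (movement, blur) is discarded
                if best is not None and best_err < 100.0:
                    acc += best
                    n += 1
            # averaging n aligned tiles cuts noise by roughly sqrt(n)
            out[y:y + tile, x:x + tile] = acc / n
    return out
```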

Like the Pixel 2, HDR+ allows the Pixel 3 to render sharp, low-noise images even in high-contrast situations. Photo: Google

This year, the Pixel 3 pushes all this further. It uses HDR+ burst photography to buffer up to 15 images², and then employs super-resolution techniques to increase the resolution of the image beyond what the sensor and lens combination would traditionally achieve³. Subtle shifts from handheld shake and optical image stabilization (OIS) allow scene detail to be localized with sub-pixel precision, since the shifts are unlikely to be exact multiples of a pixel.

In fact, I was told the shifts are carefully controlled by the optical image stabilization system. “We can demonstrate the way the optical image stabilization moves very slightly,” remarked Marc Levoy. Precise sub-pixel shifts are not necessary at the sensor level though; instead, OIS is used to uniformly distribute a bunch of scene samples across a pixel, and then the images are aligned to sub-pixel precision in software.

But Google – and Peyman Milanfar’s research team working on this particular feature – didn’t stop there. “We get a red, green, and blue filter behind every pixel just because of the way we shake the lens, so there’s no more need to demosaic” explains Marc. If you have enough samples, you can expect any scene element to have fallen on a red, green, and blue pixel. After alignment, then, you have R, G, and B information for any given scene element, which removes the need to demosaic. That itself leads to an increase in resolution (since you don’t have to interpolate spatial data from neighboring pixels), and a decrease in noise since the math required for demosaicing is itself a source of noise. The benefits are essentially similar to what you get when shooting pixel shift modes on dedicated cameras.
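As a rough sketch of how those sub-pixel shifts remove the need to demosaic, consider the toy merge below. This is our own illustration, not Google’s algorithm: the per-frame offsets are assumed known here (in reality they come from alignment), and an RGGB Bayer pattern is assumed. Each raw sample is splatted into the nearest cell of a finer output grid for its own color channel, so every output pixel eventually collects real R, G and B samples.

```python
import numpy as np

def superres_merge(bayer_frames, offsets, scale=2):
    """Toy demosaic-free super-resolution merge (hypothetical sketch).

    bayer_frames: list of 2D raw mosaics with an RGGB pattern.
    offsets: per-frame (dy, dx) sub-pixel shifts in input pixels.
    """
    h, w = bayer_frames[0].shape
    out = np.zeros((h * scale, w * scale, 3))
    cnt = np.zeros((h * scale, w * scale, 3))
    chan = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 2}  # RGGB -> R, G, G, B
    for frame, (dy, dx) in zip(bayer_frames, offsets):
        ys, xs = np.mgrid[0:h, 0:w]
        c = np.vectorize(lambda y, x: chan[(y % 2, x % 2)])(ys, xs)
        # shift, then snap each sample to the nearest fine-grid cell
        oy = np.clip(np.round((ys + dy) * scale).astype(int), 0, h * scale - 1)
        ox = np.clip(np.round((xs + dx) * scale).astype(int), 0, w * scale - 1)
        np.add.at(out, (oy, ox, c), frame)
        np.add.at(cnt, (oy, ox, c), 1)
    # average all samples that landed in each cell; no interpolation needed
    return out / np.maximum(cnt, 1)
```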

Comparison: normal wide-angle (28mm equiv.) vs. Super Res Zoom.

There’s a small catch to all this – at least for now. Super Res only activates at 1.2x zoom or more, not in the default ‘zoomed out’ 28mm-equivalent mode. As expected, the lower your level of zoom, the more impressed you’ll be with the resulting Super Res images, and naturally the resolving power of the lens will be a limitation. But the claim is that you can get “digital zoom roughly competitive with a 2x optical zoom” according to Isaac Reynolds, and it all happens right on the phone.

The results I was shown at Google appeared to be more impressive than the example we were provided above, no doubt at least in part due to the extreme zoom of our example here. We’ll reserve judgement until we’ve had a chance to test the feature for ourselves.

Would the Pixel 3 benefit from a second rear camera? For certain scenarios – still landscapes for example – probably. But having more cameras doesn’t always mean better capabilities. Quite often ‘second’ cameras have worse low light performance due to a smaller sensor and slower lens, as well as poor autofocus due to the lack of, or fewer, phase-detect pixels. One huge advantage of Pixel’s Portrait Mode is that its autofocus doesn’t differ from normal wide-angle shooting: dual pixel AF combined with HDR+ and pixel-binning yields incredible low light performance, even with fast moving erratic subjects.

2. Computational Raw

The Pixel 3 introduces ‘computational Raw’ capture in the default camera app. Isaac stressed that when Google decided to enable Raw in its Pixel cameras, they wanted to do it right, taking advantage of the phone’s computational power.

“There’s one key difference relative to the rest of the industry. Our DNG is the result of aligning and merging [up to 15] multiple frames… which makes it look more like the result of a DSLR” explains Marc. There’s no exaggeration here: we know very well that image quality tends to scale with sensor size thanks to a greater amount of total light collected per exposure, which reduces the impact of the most dominant source of noise in images: photon shot, or statistical, noise.

The Pixel cameras can effectively make up for their small sensor sizes by capturing more total light through multiple exposures, while aligning moving objects from frame to frame so they can still be averaged to decrease noise. That means better low light performance and higher dynamic range than what you’d expect from such a small sensor.

Shooting Raw allows you to take advantage of that extra range: by pulling back blown highlights and raising shadows otherwise clipped to black in the JPEG, and with full freedom over white balance in post thanks to the fact that there’s no scaling of the color channels before the Raw file is written.

Pixel 3 introduces in-camera computational Raw capture.

Such ‘merged’ Raw files represent a major threat to traditional cameras. The math alone suggests that, solely based on sensor size, 15 averaged frames from the Pixel 3 sensor should compete with APS-C sized sensors in terms of noise levels. There are more factors at play, including fill factor, quantum efficiency and microlens design, but needless to say we’re very excited to get the Pixel 3 into our studio scene and compare it with dedicated cameras in Raw mode, where the effects of the JPEG engine can be decoupled from raw performance.
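As a quick back-of-the-envelope check of that claim – our arithmetic, with approximate sensor dimensions assumed from public spec sheets, not figures Google provided:

```python
# Rough check of the '15 merged frames ~ APS-C' claim.
pixel3_area = 5.6 * 4.2      # Pixel 3 main sensor (1/2.55"), ~23.5 mm^2
apsc_area   = 23.5 * 15.6    # typical APS-C sensor, ~366.6 mm^2

total_light_ratio = 15 * pixel3_area / apsc_area
print(f"15 burst frames gather ~{total_light_ratio:.2f}x "
      f"the light of one APS-C exposure")
# -> ~0.96x: since SNR scales with the square root of total light
#    collected, photon shot noise should indeed be in the same ballpark.
```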

While solutions do exist for combining multiple Raws from traditional cameras with alignment into a single output DNG, having an integrated solution in a smartphone that takes advantage of Google’s frankly class-leading tile-based align and merge – with no ghosting artifacts even with moving objects in the frame – is incredibly exciting. This feature should prove highly beneficial to enthusiast photographers. And what’s more – Raws are automatically uploaded to Google Photos, so you don’t have to worry about transferring them as you do with traditional cameras.

3. Synthetic Fill Flash

‘Synthetic Fill Flash’ adds a glow to human subjects, as if a reflector were held out in front of them. Photo: Google

Often a photographer will use a reflector to light the faces of backlit subjects. Pixel 3 does this computationally. The same machine-learning based segmentation algorithm that the Pixel camera uses in Portrait Mode is used to identify human subjects and add a warm glow to them.

If you’ve used the front facing camera on the Pixel 2 for Portrait Mode selfies, you’ve probably noticed how well it detects and masks human subjects using only segmentation. By using that same segmentation method for synthetic fill flash, the Pixel 3 is able to relight human subjects very effectively, with believable results that don’t confuse and relight other objects in the frame.
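Conceptually, the relighting step is simple once you have a good mask – the hard part is the segmentation itself. Here’s a minimal sketch of the idea, our own illustration with made-up gain and tint values rather than anything Google has published:

```python
import numpy as np

def synthetic_fill_flash(img, person_mask, strength=0.35):
    """Toy mask-based relighting: brighten the subject with a warm gain.

    img: float RGB image in [0, 1].
    person_mask: [0, 1] soft mask from a segmentation network.
    """
    warm = np.array([1.10, 1.00, 0.85])  # mild warm tint, more R than B
    gain = 1.0 + strength * person_mask[..., None] * warm
    return np.clip(img * gain, 0.0, 1.0)
```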

Interestingly, the same segmentation methods used to identify human subjects are also used for front-facing video image stabilization, which is great news for vloggers. If you’re vlogging, you typically want yourself, not the background, to be stabilized. That’s impossible with typical gyro-based optical image stabilization. The Pixel 3 analyzes each frame of the video feed and uses digital stabilization to steady you in the frame. There’s a small crop penalty to enabling this mode, but it allows for very steady video of the person holding the camera.
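A toy version of subject-centered stabilization might look like the following. This is our sketch only; a real system would also smooth the crop trajectory over time instead of snapping to the mask centroid every frame:

```python
import numpy as np

def stabilize_on_subject(frames, masks, crop=0.9):
    """Toy subject-centered digital stabilization.

    frames: list of HxWx3 arrays; masks: matching HxW segmentation masks.
    A slightly smaller crop window is shifted each frame so the subject's
    mask centroid stays centered -- the 'small crop penalty' noted above.
    """
    h, w = masks[0].shape
    ch, cw = int(h * crop), int(w * crop)
    out = []
    for frame, mask in zip(frames, masks):
        ys, xs = np.nonzero(mask > 0.5)
        cy = int(ys.mean()) if len(ys) else h // 2
        cx = int(xs.mean()) if len(xs) else w // 2
        y0 = int(np.clip(cy - ch // 2, 0, h - ch))
        x0 = int(np.clip(cx - cw // 2, 0, w - cw))
        out.append(frame[y0:y0 + ch, x0:x0 + cw])
    return out
```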

4. Learning-based Portrait Mode

The Pixel 2 had one of the best Portrait Modes we’ve tested despite having only one lens. This was due to its clever use of split pixels to sample a stereo pair of images behind the lens, combined with machine-learning based segmentation to understand human vs. non-human objects in the scene (for an in-depth explanation, watch my video here). Furthermore, dual pixel AF meant robust performance even with moving subjects in low light – great for constantly moving toddlers. The Pixel 3 brings some significant improvements despite lacking a second lens.

According to computational lead Marc Levoy, “Where we used to compute stereo from the dual pixels, we now use a learning-based pipeline. It still utilizes the dual pixels, but it’s not a conventional algorithm, it’s learning based”. What this means is improved results: more uniformly defocused backgrounds and fewer depth map errors. Have a look at the improved results with complex objects, where many approaches are unable to reliably blur backgrounds ‘seen through’ holes in foreground objects:

Learned result: background objects, especially those seen through the toy, are consistently blurred. Objects around the peripheries of the image are also more consistently blurred.
Learned depth map: note how objects in the background (blue) aren’t confused as being closer to the foreground (yellow), as they are in the stereo-only depth map.
Stereo-only result: background objects, especially those seen through the toy, aren’t consistently blurred.
Stereo-only depth map from dual pixels: note how some elements in the background appear to be closer to the foreground than they really are.

Interestingly, this learning-based approach also yields better results with mid-distance shots where a person is further away. Typically, the further away your subject is, the less difference in stereo disparity between your subject and background, making accurate depth maps difficult to compute given the small 1mm baseline of the split pixels. Take a look at the Portrait Mode comparison below, with the new algorithm on the left vs. the old on the right.

Learned result: the background is uniformly defocused, and the ground shows a smooth, gradual blur.
Stereo-only result: note the sharp railing in the background, and the harsh transition from in-focus to out-of-focus in the ground.
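Simple pinhole geometry shows why mid-distance depth is so hard here: disparity falls off as focal length times baseline over distance. Plugging in approximate Pixel 3 numbers – our assumptions, not figures from Google:

```python
# Stereo disparity in pixels is roughly f_px * baseline / distance.
f_mm, pitch_um, baseline_mm = 4.4, 1.4, 1.0   # approximate Pixel 3 values
f_px = f_mm * 1000 / pitch_um                 # focal length in pixels, ~3143

for z_m in (0.5, 1, 2, 5):
    d = f_px * (baseline_mm / 1000) / z_m
    print(f"subject at {z_m} m -> ~{d:.2f} px of disparity")
# At 5 m the signal is well under a pixel, so a learned pipeline that
# leans on more cues than raw disparity has a real advantage.
```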

5. Night Sight

Rather than simply rely on long exposures for low light photography, ‘Night Sight’ utilizes HDR+ burst mode photography to take usable photos in very dark situations. Previously, the Pixel 2 would never drop below 1/15s shutter speed, simply because it needed faster shutter speeds to maintain that 9-frame buffer with zero shutter lag. That does mean that even the Pixel 2 could, in very low light, effectively sample 0.6 seconds (9 x 1/15s), but sometimes that’s not even enough to get a usable photo in extremely dark situations.

The Pixel 3 now has a ‘Night Sight’ mode which sacrifices the zero shutter lag and expects you to hold the camera steady after you’ve pressed the shutter button. When you do so, the camera will merge up to 15 frames, each with shutter speeds as low as, say, 1/3s, to get you an image equivalent to a 5-second exposure, but without the motion blur that would inevitably result from such a long exposure.

Put simply: even though there might be subject or handheld movement over the entire 5s span of the 15-frame burst, many of the 1/3s ‘snapshots’ in that burst are likely to still be sharp, albeit possibly displaced relative to one another. The tile-based alignment of Google’s ‘robust merge’ technology, however, can handle inter-frame movement by aligning objects that have moved and discarding tiles of any frame that show too much motion blur.
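A quick simulation makes the noise argument concrete – the numbers here are purely illustrative:

```python
import numpy as np

# Averaging 15 short frames approximates one long exposure because shot
# noise averages down by sqrt(N).
rng = np.random.default_rng(0)
signal = 10.0                                   # photons per 1/3s frame
frames = rng.poisson(signal, size=(15, 100_000)).astype(float)

single = frames[0]
merged = frames.mean(axis=0)
print(f"single-frame SNR: {signal / single.std():.1f}")
print(f"15-frame SNR:     {signal / merged.std():.1f}  "
      f"(~sqrt(15) = {15 ** 0.5:.1f}x better)")
# Unlike one real 5s exposure, each 1/3s frame can be aligned (or
# discarded) individually -- which is what kills the motion blur.
```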

Have a look at the results below, which also shows you the benefit of the wider-angle, second front-facing ‘groupie’ camera:

Comparison: normal front-camera ‘selfie’ vs. Night Sight ‘groupie’ with the wide-angle front-facing lens.

Furthermore, Night Sight mode takes a machine-learning based approach to auto white balance. It’s often very difficult to determine the dominant light source in such dark environments, so Google has opted to use learning-based AWB to yield natural looking images.

Final thoughts: simpler photography

The philosophy behind the Pixel camera – and for that matter the philosophy behind many smartphone cameras today – is one-button photography. A seamless experience without the need to activate various modes or features.

This is possible thanks to the computational approaches these devices embrace. The Pixel camera and software are designed to give you pleasing results without requiring you to think much about camera settings. Synthetic fill flash activates automatically with backlit human subjects, and Super Resolution automatically kicks in as you zoom.

Motion Photos turns on automatically when the camera detects interesting activity, and Top Shot now uses AI to automatically suggest the best photo of the bunch, even if it’s a moment that occurred before you pressed the shutter button. Autofocus typically locks onto human subjects very reliably, but when you need to specify your subject, just tap on it and ‘Motion Autofocus’ will continue to track and focus on it – perfect for your toddler or pet.

At their best, these technologies allow you to focus on the moment, perhaps even enjoy it, and sometimes even help you to capture memories you might have otherwise missed.

We’ll be putting the Pixel 3 through its paces soon, so stay tuned. In the meantime, let us know in the comments below what your favorite features are, and what you’d like to see tested.


¹ In good light, these last 9 frames typically span the last 150ms before you pressed the shutter button. In very low light, they can span up to the last 0.6s.

² We were only told ‘say, maybe 15 images’ in conversation about the number of images in the buffer for Super Res Zoom and Night Sight. It may be more, it could be less, but we were at least told that it is more than 9 frames. One thing to keep in mind is that even if you have a 15-frame buffer, not all frames are guaranteed to be usable. For example, if in Night Sight one or more of these frames has too much subject motion blur, it is discarded.

³ You can achieve a similar super-resolution effect manually with traditional cameras, and we describe the process here.


Russian drone pilot pushes his tiny drone to 33,000ft

24 Mar

YouTube user and drone pilot Denis Koryakin (Денис Корякин) recently published a video showing a small drone’s trip to an altitude of around 33,000ft.

Operating a drone at that altitude would be against regulations in many places, not to mention risky to commercial aircraft. That said, this particular ascent appears to have taken place in a remote region of Russia near the Siberian city of Strezhevoy, and there don’t seem to be any altitude restrictions on small drone flights in Russia, so he didn’t technically break any laws.

According to Koryakin’s video description, this “drone experiment” was intended to get the drone to an altitude of 10 kilometers, which is just under 33,000ft and about the cruising altitude of passenger jets. The on-screen display shows the drone’s speed hitting 13 meters per second at one point, and Koryakin explained that temperatures dropped to -50°C (-58°F) when the drone reached an altitude of around 8,000 meters (~26,000ft).

The video translation reads (H/T DPReview user ShaiKhulud):

March 9, 2018. Experiments with drone are still in progress. Goals for this flight are: reach a height of 10km and return to the launch site without accidents.

By popular demands, by my own desire and with a help of my friends we’ve added an air temperature gauge.

Because of the thermometer inertia, temperature is displayed with a slight delay.

The outside ground level temperature was around -10 C.

Max temperature during flight was around -50 C at 8000 m. altitude.

DVR footage and HD footage is slightly out of sync (by a few seconds) because of the frame skipping.

In the video description, Koryakin also lists the parts used to construct and control the drone, all of them readily accessible to anyone who wants to replicate it. Components include Cobra brushless motors, Gemfan carbon-nylon propellers and Sony li-ion batteries. The drone weighed around 1kg (2.2lbs).


Disclaimer: Always check applicable local laws before trying something that might be dangerous or potentially illegal. DPReview does not condone or encourage illegal activity.


Nimbus Data ExaDrive pushes SSD capacity record to 100TB

21 Mar

Only a few weeks ago, Samsung set a new record for SSD capacity with its latest 30TB model. The achievement didn’t stand for long; US company Nimbus Data has just shot past Samsung’s benchmark with the launch of a gargantuan 100TB drive.

The company says the “ExaDrive DC series raises the bar in SSD power efficiency, density, and write endurance”. With a claimed power consumption 85% lower than the competition’s (0.1 Watts/TB), the new drive is the world’s most efficient SSD, which, according to Nimbus, translates into a 42% reduction in total cost of ownership per terabyte.

With a mean time between failures (MTBF) of 2.5 million hours, or over 285 years, longevity should be ensured as well, but the ExaDrive’s selling point is, of course, capacity. According to the Nimbus press release, the drive has “capacity to store 20 million songs, 20,000 HD movies, or 2,000 iPhones worth of data in a device small enough to fit in your back pocket.” As a photographer you’re unlikely to ever run out of space, even when shooting high-resolution Raw files or recording 4K video footage.
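A couple of those headline numbers are easy to sanity-check – this is our own arithmetic, based only on the figures Nimbus quotes:

```python
# Sanity-checking Nimbus' quoted figures.
capacity_tb = 100
watts_per_tb = 0.1
print(f"per-drive power draw: {capacity_tb * watts_per_tb:.0f} W")  # ~10 W

mtbf_hours = 2_500_000
print(f"MTBF: {mtbf_hours / (24 * 365.25):.0f} years")              # ~285 years
```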

The ExaDrive DC100 comes with the same 3.5″ form factor, SATA interface and plug-and-play capability as most standard hard drives, allowing for easy installation. It will be available this summer. No pricing information has been provided yet, but given that it’s targeted at datacenter use, the new drive likely won’t be cheap. More information is available on the Nimbus website.

Press Release:

Nimbus Data launches the world’s largest solid state drive – 100 terabytes – to power data-driven innovation

ExaDrive DC series raises the bar in SSD power efficiency, density, and write endurance

Irvine, CA, March 19, 2018 – Nimbus Data, a pioneer in flash memory solutions, today announced the ExaDrive® DC100, the largest capacity (100 terabytes) solid state drive (SSD) ever produced. Featuring more than 3x the capacity of the closest competitor, the ExaDrive DC100 also draws 85% less power per terabyte (TB). These innovations reduce total cost of ownership per terabyte by 42% compared to competing enterprise SSDs, helping accelerate flash memory adoption in both cloud infrastructure and edge computing.

“As flash memory prices decline, capacity, energy efficiency, and density will become the critical drivers of cost reduction and competitive advantage,” stated Thomas Isakovich, CEO and founder of Nimbus Data. “The ExaDrive DC100 meets these challenges for both data center and edge applications, offering unmatched capacity in an ultra-low power design.”

Optimized to Maximize Flash Storage Capacity and Efficiency

While existing SSDs focus on speed, the DC100 is optimized for capacity and efficiency. With its patent-pending multiprocessor architecture, the DC100 supports much greater capacity than monolithic flash controllers. Using 3D NAND, the DC100 provides enough flash capacity to store 20 million songs, 20,000 HD movies, or 2,000 iPhones worth of data in a device small enough to fit in your back pocket. For data centers, a single rack of DC100 SSDs can achieve over 100 petabytes of raw capacity. Data centers can reduce power, cooling, and rack space costs by 85% per terabyte, enabling more workloads to move to flash at the lowest possible total cost of ownership.

Plug-and-Play and Balanced Performance for Diverse Workloads

Featuring the same 3.5” form factor and SATA interface used by hard drives, the ExaDrive DC100 is plug-and-play compatible with hundreds of storage and server platforms. The DC100’s low-power (0.1 watts/TB) and portability also make it well-suited for edge and IoT applications. The DC100 achieves up to 100,000 IOps (read or write) and up to 500 MBps throughput. This equally-balanced read/write performance is ideal for a wide range of workloads, from big data and machine learning to rich content and cloud infrastructure.

“The release of such a high capacity flash device that is fully compatible with HDD form factors opens up the opportunity to turbo charge big data platforms while at the same time improving reliability, significantly reducing device count, increasing data mobility, and lowering the TCO of multi-PB scale storage platforms,” said Eric Burgener, research vice president of Storage at IDC. “Devices of this class will allow flash to cost-effectively penetrate a broader set of use cases outside of tier 0 and tier 1 applications.”

Superior Reliability and Complete Data Protection

The ExaDrive DC100 is protected by an unlimited endurance guarantee for 5 years. By doing away with confusing drive-writes-per-day restrictions, the DC100 offers peace of mind, reduces hardware refresh cycles, and eliminates costly support renewals. Embedded capacitors ensure that buffered data is safely protected if there is a sudden power loss. Encryption, multiple ECC processors, and a secure-erase feature ensure data security. The DC100 offers a mean time between failures (MTBF) of 2.5 million hours.

Availability, Certifications, and Pricing

The ExaDrive DC series includes both 100 TB and 50 TB models. It is currently sampling to strategic customers and will be generally available in summer 2018. Nimbus Data has qualified the DC series in storage and server enclosures from major vendors. Pricing will be similar to existing enterprise SSDs on a per terabyte basis while offering 85% lower operating costs. Overall, the ExaDrive DC series will cost 42% less per terabyte over a 5-year period compared to existing enterprise SSDs. This TCO advantage factors in the superior endurance, balanced read/write performance, power savings, cooling savings, rack space savings, component reduction, and lower refresh costs.


Delayed: Nikon Japan pushes D500 to end of April

05 Feb

Nikon shooters have waited a long time for a D300S replacement, and it appears that they’re going to have to keep waiting a little longer. Nikon Japan has released a statement (in Japanese) pushing the D500’s initial March release back to late April 2016. Nikon cites high demand for the camera as the cause of the delay. It seems that the D500’s battery grip and WT-7A wireless transmitter are also delayed.

Come April, the D500 will be available for $1,999.95 body-only or with the 16-80mm F2.8-4E ED VR lens for $3,069.95.


Facebook further pushes photo prominence in the News Feed

08 Mar

Facebook has announced an upcoming update to the way photos are presented in users’ News Feeds. More space will be devoted to images, displaying them more prominently on the page. This is the second redesign since July last year to focus on bigger pictures. The result is not too dissimilar to the gallery view in Google+. In addition, the News Feed can be filtered to view only photo-based updates. The company says it is rolling the changes out to both the desktop and mobile versions in the coming weeks.


Facebook pushes photo prominence in timeline

01 Aug

Facebook has updated the way photos are presented in the timeline section of users’ profiles, devoting more page space to the images and making it easier to give some images prominence. The result is an awful lot like the Google+ gallery view and the Flickr interface for viewing contacts’ images, but it appears to crop all images to square format. The Facebook update adds the ability to ‘highlight’ specific images (making them four times larger), but it doesn’t just present your own images – photos you’re tagged in will be intermingled with your own shots, so it’s not an optimal way to showcase your photography unless you ruthlessly de-tag yourself from other people’s photos.
