
VMware Ramps Support For Remote Workers


As Covid-19 continues to change how daily work is conducted, more and more companies are adjusting to the pandemic as more employees become remote workers. VMware has announced some changes it is making to benefit its customers and employees in these trying times. At the same time, the company is making multiple contributions to those helping fight the virus.

VMware horizon

For customers, VMware has several solutions for remote work that keep up connectivity and productivity, and they work regardless of the endpoint. VMware is looking for ways to deliver secure digital workspaces on endpoints while maintaining or increasing productivity. There is also built-in elasticity to add more workers as they go home, or to scale down as they return to work.

The bulk of remote workers will benefit from VMware through the company's digital workspace. The company promises to deliver any of their apps through the endpoint of the user's choosing without compromising security. So there is no confusion: the VDI will work on all corporate and personal devices. For security, apps are accessed with a Zero Trust access control model regardless of where they reside.

Other benefits for remote workers include:

  • Extended free trials of Workspace ONE for 90 days and 100 devices through July 31, 2020 available here.
  • Special offers are available from users’ VMware account team
  • Extended free trials of Horizon 7 on-premises, Horizon 7 on VMware Cloud on AWS, and Horizon Cloud on Azure for 90 days and 100 named users through July 31, 2020.
  • A special webinar featuring EUC CTO Shawn Bass and Brian Madden: “How to Quickly Set Up a Remote Workforce for Success”
  • An ongoing blog series addressing pertinent topics for accelerating remote work initiatives at vmware.com/euc



StorageReview Podcast #39: Christina Garza, Western Digital


In this week's podcast, Christina Garza joins us from Western Digital. More specifically, Christina is a Sr. Manager of Product Marketing, leading global marketing strategy for WD's G-Technology brand. G-Technology has been around for a very long time, offering premium portable and desktop storage for creative pros. Brian and Christina discuss modern workflows for professionals and the need to have good storage to make it all go faster. Additionally, the podcast team breaks down the news of the week and discusses upcoming projects with iXsystems, Asigra, Seagate and much more, including Brian declaring the Lenovo SE350 the most innovative device he's seen in the last 6 months.


G-Technology SSDs

Lenovo ThinkSystem SE350 Inside

G-Technology


Zerto 8.0 Hits GA


Today Zerto announced the general availability of Zerto 8.0. This release takes the data protection, recovery and mobility Zerto is known for and expands it to hybrid clouds. Zerto 8.0 has a new integration with Google Cloud, deeper integrations with the Azure and AWS public cloud platforms, and new innovations with VMware.

Zerto 8.0

As stated in our Zerto 7.5 review, many disaster recovery applications are still snapshot-based, storage-based, and LUN-oriented, causing headaches for system administrators who want to simplify business continuity management. By replacing these legacy solutions with a single platform, Zerto is changing the way disaster recovery, data protection, and the cloud are managed. Zerto claims to be a revolutionary DR technology that, simply put, moves replication jobs from the storage level to the hypervisor. This hypervisor-based software offers continuous replication, allowing it to replicate only what is needed, when it is needed, without having to deal with LUNs.

Zerto 8.0 continues the above with new capabilities. The latest version also aims to take data protection into the future by replacing traditional snapshot-based backup with low RPO journal-based “operational recovery,” enabling enterprises to perform day-to-day granular recovery quickly.

New features and capabilities of Zerto 8.0 include:

  • New support for Google Cloud, bringing Zerto's leading Continuous Data Protection (CDP) technology to Google Cloud's VMware Engine. Zerto 8.0 will support Google Cloud's VMware-as-a-Service, enabling users to protect and migrate native VMware workloads in Google Cloud Platform (GCP) with Zerto's leading RTOs, RPOs and workload mobility.
  • New cost savings, operational efficiencies and visibility with support for VMware vSphere Virtual Volumes (vVols).
  • Expanded VMware Virtual Cloud Director (vCD) connection with Zerto’s self-service recovery portal for Managed Service Providers (MSP).
  • Deeper integration with Microsoft Azure for increased simplicity and scalability of large deployments, which includes support for Microsoft Azure Unified Extensible Firmware Interface (UEFI) to offer complete data protection and recovery of Microsoft Hyper-V Generation 2 VMs, unlocking the recovery and performance benefits of Microsoft Generation 2 VMs, including UEFI-based architecture.
  • AWS Storage Gateway to be used as a target site for inexpensive and efficient cloud archive and data protection.
  • New data protection capabilities, extending the value of Virtual Protection Groups (VPGs) to data protection, delivering application consistency from seconds to years.
  • A single pane of glass for data protection reporting with status performance and capacity reporting of protected workloads.
  • Zerto’s failback functionality as part of the cost-effective incremental snapshots of Azure managed disks is now available across ALL regions.
  • New unified alert management with prioritized views of critical alerts and customization for users to receive the alerts exactly when needed for critical operations.
  • New impact analysis capability to mitigate risk of an organization’s protected and unprotected environment for on-premises or cloud.
  • New resource planning view of unprotected VMs for better insight into an organization's unprotected VMs.
  • Additional features to automate and streamline the processes for failover and configuration in the public cloud with automated OS configuration, automatic failback configuration and more.

Zerto


Supermicro NGC-Ready Systems Announced

Supermicro Sys 5039

Today at the Supermicro GPU Live Forum, Super Micro Computer, Inc. announced Supermicro NGC-Ready Systems, its portfolio of validated systems optimized to accelerate AI and deep learning applications. According to the company, the lineup scales up to 8-GPU rackmount NGC-Ready systems, certified to fully support NVIDIA GPU Cloud (NGC) software. These announcements are being made in conjunction with NVIDIA's GTC Digital.

The biggest benefit of the new NGC-Ready Systems is the ability to train AI models using NVIDIA V100 GPUs and to perform inference using NVIDIA T4 GPUs. The NGC platform delivers ready-to-use Docker containers that run regardless of where the systems are located. So whether the Supermicro NGC-Ready systems are deployed in data centers, the cloud, edge micro-datacenters, or distributed remote locations as environment-resilient and secured EGX servers, they will be ready to run Docker containers.

Supermicro, leaning into one of its strengths, already offers various validated NGC-Ready servers and is adding more today: five servers validated as NGC-Ready for Edge (EGX) and optimized for edge inferencing applications. The company also offers multi-GPU-optimized thermal designs that provide the highest performance and reliability for AI, deep learning, and HPC applications.

Supermicro NGC-Ready systems


XenData Cloud Multi-Site Sync Service Announced


Today, XenData announced its upcoming Multi-Site Sync service for cloud object storage. The service creates a global file system accessible worldwide via XenData Cloud File Gateways. XenData, founded in 2001, focuses on public cloud and on-premises data storage systems.

XenData Multi-Site Sync

XenData's upcoming Multi-Site Sync service for cloud object storage is scheduled to launch in May of this year. The service provides a global file system managed by what XenData calls "gateways." Each gateway manages a local disk volume that caches frequently accessed files. XenData predicts the software will scale to 2 billion files with up to 256TB of local disk cache at each location. The gateways are optimized for video files. Video files can get large very quickly, but the company claims there is no upper limit on how much cloud storage its software can handle. Companies often claim to have optimized for a specific use case, but XenData appears to have actually done so. The gateways support partial file restore and, crucially, streaming, so you can start reviewing video footage right away without waiting for the entire file to be copied over.

At release, XenData plans to support Amazon Web Services S3, Hot and Cool tiers of Azure Blob Storage, and Wasabi S3. XenData says they will be able to support multiple accounts across multiple providers at release.

XenData plans to include the gateway software on two appliances at release. The first is the CX-10, a 1U rack-mount appliance with a 10TB disk. The second is less traditional: XenData's X1 appliance, which we've covered previously. The X1 is essentially a customized Intel NUC, which means it's just a little bigger than your hand. The gateway software will also be supported on other machines running Windows Server 2016, Windows Server 2019, or Windows 10.

 XenData Cloud Multi-Site Sync Availability

May 2020

XenData Main Site


Seagate IronWolf 510 SSD Review


Announced recently, the Seagate IronWolf 510 SSD is an M.2 PCIe NVMe drive designed specifically for NAS devices. More specifically, the IronWolf 510 will be leveraged for SSD caching in NAS devices, improving overall performance. The new SSD is designed for NAS use the same way the Seagate IronWolf 110 is: more endurance and the right performance for caching needs.

Seagate IronWolf 510 SSD

The vast majority of NAS devices, particularly the desktop/tower type, leverage HDDs for their capacity. Since most now support 16TB HDDs, that is a ton of capacity, even in the smaller form factors. Performance, however, is limited as a result. Several popular brands support SSD caching, which means you either give up two bays for 2.5" SSDs or, in newer devices, slot in a couple of M.2 SSDs for the same effect. This last use case is where the Seagate IronWolf 510 slides in.

We have a video overview here:

Seagate states that the IronWolf 510 is built for NAS for a few reasons. NAS devices typically run 24×7; with that in mind, the IronWolf 510 offers up to 1 DWPD and a 1.8-million-hour MTBF in the reliability department. Seagate also claims speeds as high as 3.15GB/s sequential read, which is good performance for caching needs.

The Seagate IronWolf 510 SSD comes in capacities ranging from 240GB to 1.92TB. The drive can be picked up for as little as $120 for the lower end capacity.

Seagate IronWolf 510 SSD Specifications

| Spec | 1.92TB | 960GB | 480GB | 240GB |
|---|---|---|---|---|
| Standard Model | ZP1920NM30001 | ZP960NM30001 | ZP480NM30001 | ZP240NM30001 |
| Features | | | | |
| Interface | PCIe G3 ×4, NVMe 1.3 | PCIe G3 ×4, NVMe 1.3 | PCIe G3 ×4, NVMe 1.3 | PCIe G3 ×4, NVMe 1.3 |
| NAND Flash Type | 3D TLC | 3D TLC | 3D TLC | 3D TLC |
| Form Factor | M.2 2280-D2 | M.2 2280-D2 | M.2 2280-S2 | M.2 2280-S2 |
| Performance | | | | |
| Sequential Read (MB/s), Sustained, 128KB QD32 | 3,150 | 3,150 | 2,650 | 2,450 |
| Sequential Write (MB/s), Sustained, 128KB QD32 | 850 | 1,000 | 600 | 290 |
| Random Read (IOPS), QD32 T4 | 270,000 | 345,000 | 193,000 | 100,000 |
| Random Write (IOPS), QD32 T4 | 25,000 | 28,000 | 20,000 | 12,000 |
| Random Read (IOPS), QD32 T8 | 290,000 | 380,000 | 199,000 | 100,000 |
| Random Write (IOPS), QD32 T8 | 27,000 | 29,000 | 21,000 | 13,000 |
| Endurance/Reliability | | | | |
| Total Bytes Written (TB) | 3,500 | 1,750 | 875 | 435 |
| Nonrecoverable Read Errors per Bits Read | 1 per 10E16 | 1 per 10E16 | 1 per 10E16 | 1 per 10E16 |
| Mean Time Between Failures (MTBF, hours) | 1,800,000 | 1,800,000 | 1,800,000 | 1,800,000 |
| Warranty, Limited (years) | 5 | 5 | 5 | 5 |
| Power Management | | | | |
| Power Supply | 3.3V | 3.3V | 3.3V | 3.3V |
| Active Max Average Power (W) | 6.0 | 6.0 | 6.0 | 5.3 |
| Average Idle Power (W) | 2.0 | 1.95 | 1.83 | 1.75 |
| Environmental | | | | |
| Temperature, Operating Internal (°C) | 0 to 70 | 0 to 70 | 0 to 70 | 0 to 70 |
| Temperature, Nonoperating (°C) | –40 to 85 | –40 to 85 | –40 to 85 | –40 to 85 |
| Shock, 0.5ms (Gs) | 1500 | 1500 | 1500 | 1500 |
| Physical | | | | |
| Height (max) | 0.140in/3.58mm | 0.140in/3.58mm | 0.087in/2.23mm | 0.087in/2.23mm |
| Width (max) | 0.872in/22.15mm | 0.872in/22.15mm | 0.872in/22.15mm | 0.872in/22.15mm |
| Depth (max) | 3.16in/80.15mm | 3.16in/80.15mm | 3.16in/80.15mm | 3.16in/80.15mm |
| Weight (lb/g) | 0.018lb/8.3g | 0.017lb/8.1g | 0.015lb/6.9g | 0.014lb/6.5g |
| Carton Unit Quantity | 10 | 10 | 10 | 10 |
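For context on the endurance and reliability figures, the rated TBW and five-year warranty translate directly into the roughly 1 DWPD Seagate quotes, and the MTBF figure implies an annualized failure rate of about half a percent. A quick back-of-the-envelope check, using the 1.92TB column above (our own arithmetic, not a vendor tool):

```python
# Back-of-the-envelope endurance/reliability math for the 1.92TB IronWolf 510.
# Inputs come from the spec table above; the formulas are generic.

CAPACITY_TB = 1.92          # drive capacity in TB
TBW = 3500                  # rated Total Bytes Written, in TB
WARRANTY_YEARS = 5
MTBF_HOURS = 1_800_000

# Drive writes per day sustained over the warranty period
dwpd = TBW / (CAPACITY_TB * WARRANTY_YEARS * 365)
print(f"DWPD ≈ {dwpd:.2f}")      # ≈ 1.0, matching Seagate's claim

# Approximate annualized failure rate implied by the MTBF figure
afr = 8760 / MTBF_HOURS          # hours per year / MTBF
print(f"AFR ≈ {afr:.2%}")        # ≈ 0.49%
```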

 

Seagate IronWolf 510 SSD Design & Build

The Seagate IronWolf 510 SSD looks more or less like the majority of M.2 SSDs on the market. One side has a sticker with branding and pertinent info. Beneath the sticker are the NAND packs.

Seagate IronWolf 510 SSD rear

The flip side has the rest of the NAND packs and the SK hynix controller.

Seagate IronWolf 510 SSD Performance

Testbed

Our boot-drive and enterprise SSD reviews leverage a Dell PowerEdge R740xd for synthetic benchmarks. Synthetic tests that don't require a lot of CPU resources use this more traditional dual-processor server. In both cases, the intent is to showcase local storage in the best light possible, in line with storage vendors' maximum drive specs.

Dell PowerEdge R740xd

  • 2 x Intel Gold 6130 CPU (2.1GHz x 16 Cores)
  • 4 x 16GB DDR4-2666MHz ECC DRAM
  • 1x PERC 730 2GB 12Gb/s RAID Card
  • Add-in NVMe Adapter
  • Ubuntu-16.04.3-desktop-amd64

Testing Background 

The StorageReview Enterprise Test Lab provides a flexible architecture for conducting benchmarks of enterprise storage devices in an environment comparable to what administrators encounter in real deployments. The Enterprise Test Lab incorporates a variety of servers, networking, power conditioning, and other network infrastructure that allows our staff to establish real-world conditions to accurately gauge performance during our reviews.

We incorporate these details about the lab environment and protocols into reviews so that IT professionals and those responsible for storage acquisition can understand the conditions under which we have achieved the following results. None of our reviews are paid for or overseen by the manufacturer of equipment we are testing.

For this review, we will be comparing the Seagate IronWolf 510 SSD to another 1.92TB M.2 SSD with a similar DWPD, the Samsung 983 DCT:

VDBench Workload Analysis

When it comes to benchmarking storage devices, application testing is best, and synthetic testing comes in second place. While not a perfect representation of actual workloads, synthetic tests do help to baseline storage devices with a repeatability factor that makes it easy to do apples-to-apples comparisons between competing solutions. These workloads offer a range of testing profiles, from "four corners" tests and common database transfer size tests to trace captures from different VDI environments. All of these tests leverage the common vdbench workload generator, with a scripting engine to automate and capture results over a large compute testing cluster. This allows us to repeat the same workloads across a wide range of storage devices, including flash arrays and individual drives. Our testing process for these benchmarks fills the entire drive surface with data, then partitions a drive section equal to 5% of the drive capacity to simulate how the drive might respond to application workloads. This is different from full-entropy tests, which use 100% of the drive and take it into steady state. As a result, these figures will reflect higher sustained write speeds. A minimal sketch of how one of these profiles could be expressed for vdbench follows the profile list below.

Profiles:

  • 4K Random Read: 100% Read, 128 threads, 0-120% iorate
  • 4K Random Write: 100% Write, 64 threads, 0-120% iorate
  • 64K Sequential Read: 100% Read, 16 threads, 0-120% iorate
  • 64K Sequential Write: 100% Write, 8 threads, 0-120% iorate
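To make the profiles above concrete, the following is a minimal sketch of how the 4K random read profile could be written out as a vdbench parameter file. The device path, output file name, and run length are placeholder assumptions for illustration, not StorageReview's actual scripts.

```python
# Hypothetical illustration: generate a vdbench parameter file for the
# 4K random read profile above (100% read, 128 threads).
# /dev/nvme0n1, the 300-second run length, and the file name are assumptions.
params = """
sd=sd1,lun=/dev/nvme0n1,openflags=o_direct,threads=128
wd=wd_4krr,sd=sd1,xfersize=4k,rdpct=100,seekpct=100
rd=rd_4krr,wd=wd_4krr,iorate=max,elapsed=300,interval=1
"""

with open("4k_rand_read.vdb", "w") as f:
    f.write(params.strip() + "\n")

# Launched with something like: ./vdbench -f 4k_rand_read.vdb
# The 0-120% iorate sweep in the review is a series of runs at fixed
# percentages of the measured maximum rather than a single iorate=max run.
```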

First up is our random 4K read. Here, the Seagate IronWolf 510 SSD peaked at 279,510 IOPS with a latency of 456µs. This is about half the performance and twice the latency of the Samsung.

Seagate IronWolf 510 4K read

Next up is random 4K write. While the Seagate started with very low latency, 25µs at 9,983 IOPS, it went on to peak at just over 99K IOPS with a latency of about 600µs. The Samsung, by comparison, peaked at 46,359 IOPS at a latency of 2.8ms.

Switching over to sequential workloads, the Seagate showed a much stronger performance altogether. The IronWolf 510 had sub-millisecond latency performance throughout with a peak of 28,709 IOPS or 1.8GB/s at a latency of 557µs. While the Samsung continued to perform better in regards to latency, the two results were much closer in bandwidth this time.

Seagate IronWolf 510 64K read

For 64K sequential write, the Seagate started well at 78.6µs latency and peaked at about 7,700 IOPS, or roughly 480MB/s, with a latency of about 500µs before dropping off. The Samsung spiked up to 2,871 IOPS or 179.5MB/s at a latency of 5.6ms.
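Since these are 64K transfers, the bandwidth figures follow directly from IOPS multiplied by the block size. A quick sanity check of the numbers above (our own arithmetic):

```python
# IOPS-to-bandwidth sanity check for the 64K sequential results quoted above.
def iops_to_mbps(iops: float, block_kib: int) -> float:
    """Convert IOPS at a given block size (KiB) into MB/s (1 MB = 10^6 bytes)."""
    return iops * block_kib * 1024 / 1e6

print(iops_to_mbps(28_709, 64))  # ~1881 MB/s, i.e. the ~1.8GB/s 64K read peak
print(iops_to_mbps(7_700, 64))   # ~505 MB/s, in line with the ~480MB/s write peak
```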

Next, we move on to our SQL workloads. The Seagate IronWolf 510 SSD stayed below 1ms throughout, with a peak of 89,092 IOPS at a latency of 359µs. For comparison, the Samsung had over twice the performance with less than half the latency.

For SQL 90-10 the Seagate peaked at 76,340 IOPS with a latency of 418µs. Again, the Samsung outperformed the other drive by far.

With SQL 80-20 we see the Seagate hit a peak of 62,379 IOPS with a latency of 512µs.

Moving on to Oracle workloads, the Seagate IronWolf 510 SSD continued on its sub-millisecond latency performance streak. Here, we saw a peak performance of 63,030 IOPS at 568µs. The Samsung saw about 20% higher performance at about 30% lower latency.

Oracle 90-10 saw the Seagate hit 67,293 IOPS at 326µs. Here, the Samsung had over twice the performance and half the latency.

For Oracle 80-20 the Seagate peaked at 55,009 IOPS at 398µs for latency. The Samsung once again clobbered the Seagate with twice the IOPS and half the latency.

Next, we switched over to our VDI clone test, Full and Linked. For VDI Full Clone Boot, the Seagate IronWolf 510 SSD peaked at 65,029 IOPS at a latency of 516µs before dropping off a hair. Once again, the Samsung had overall higher performance at much lower latency.

VDI FC Initial Login saw the Seagate outperform the Samsung. The IronWolf 510 peaked at 22,360 IOPS at 1.3ms versus the Samsung’s 13,887 IOPS at 2.2ms.

With the VDI FC Monday Login, once again the IronWolf 510 came out on top with 18,550 IOPS with 860µs.

Switching over to Linked Clone, the Seagate fell behind the Samsung again. Here, the IronWolf 510 peaked at 35,963 IOPS with a latency of 444µs.

Again, when we switched to Initial Login (VDI LC this time), we saw the Seagate take the lead in performance. The IronWolf 510 had peak scores of 10,964 IOPS with 724µs for latency.

Finally, with VDI LC Monday Login, the Seagate took the top spot with 13,310 IOPS and a latency of 1.2ms.

Conclusion

The Seagate IronWolf 510 SSD is an M.2 NVMe drive built for NAS caching. With an M.2 form factor, the drive slides into the M.2 slots on many popular NAS devices, saving valuable drive bays for higher capacity. With plenty of endurance and read speeds up to 3.1GB/s, the IronWolf 510 should be ideal for most cache use cases.

Looking at performance, we compared the Seagate IronWolf 510 SSD to the Samsung 983 DCT. While the Samsung 983 is a more upmarket enterprise model, it also comes in an M.2 form factor and has a very similar endurance profile (1 DWPD on the Seagate, 0.8 DWPD on the Samsung). The IronWolf 510 is a light enterprise SSD targeting NAS, so it makes sense to see how it performs against other drives with similar endurance, form factor, and capacity. Throughout our testing the Samsung mostly performed better, showing strong advantages in read-heavy workloads, as it is designed to be higher performing. In areas that were more write-focused, such as 4K random or 64K sequential write, the Seagate took the upper hand. By comparing the two, we are able to better highlight the areas where the Seagate would shine, since it is marketed mainly for caching.

For highlights, the IronWolf 510 was able to hit peak scores of 280K IOPS in 4K read, 99K IOPS in 4K write, 1.8GB/s in 64K read, and 480MB/s in 64K write, outshining the Samsung in writes. For SQL we saw 89K IOPS, for SQL 90-10 it hit 76K IOPS, and for SQL 80-20, 62K IOPS. Oracle had the IronWolf peaking at 63K IOPS, Oracle 90-10 saw 67K IOPS, and Oracle 80-20 saw 55K IOPS. An interesting note with the Seagate is that it outperformed the Samsung in both the Initial Login and Monday Login runs of both VDI clone tests, Linked and Full.

The Seagate IronWolf 510 SSD is a fine choice for those looking to add SSD caching to their NAS without giving up full drive bays. The SSD delivers the right type of performance for caching while coming in at a decent price.

Seagate IronWolf 510 SSD on Amazon


NVIDIA Increases Support For Remote Workers


With the Covid-19 pandemic still raging, many tech companies are finding ways to aid the shift to remote work. NVIDIA announced that it is doing its part by expanding free access to its GPU virtualization software: NVIDIA's free 90-day virtual GPU software offer went from 128 to 500 licenses.

NVIDIA T4 GPU

Companies that have NVIDIA's powerful GPUs on-prem can leverage the vGPU software licenses to accelerate virtual infrastructure. This allows remote workers to work and collaborate from their quarantined spots. GPUs that aren't currently being fully utilized can likewise be repurposed for the above.

All three tiers of the company’s specialized vGPU software are available through the expanded free licensing:

  • NVIDIA GRID software delivers responsive VDI by virtualizing systems and applications for knowledge workers.
  • NVIDIA Quadro Virtual Data Center Workstation software provides workstation-class performance for creators using high-end graphics applications.
  • NVIDIA Virtual Compute Server software accelerates server virtualization with GPUs to power the most compute-intensive workflows, such as AI, deep learning and data science on a virtual machine.

Leveraging vGPUs is a good way for remote workers to get the performance one expects from NVIDIA. Leveraging vGPUs also maintains high security: the data is still saved in data centers and not locally. The vGPU software supports a broad ecosystem of hypervisors, platforms, user applications, and management software, making it easier to scale out to remote workers.

NVIDIA vGPU software licenses work on all NVIDIA GPUs based on the Pascal, Volta and Turing architectures, including NVIDIA Quadro RTX 6000 and RTX 8000 GPUs, and NVIDIA M10 and M60 GPUs.

NVIDIA


Dell Technologies AI Solutions Announced


Today, Dell announced several Dell Technologies AI solutions. The solutions include the Dell EMC HPC Ready Architecture for AI and Data Analytics, as well as two new validated architectures specifically for data analytics. Dell Technologies has been the parent company of Dell and Dell EMC since Dell's acquisition of EMC closed in 2016. Dell was founded in 1984 and is one of the most well-known computer manufacturers.

Dell Technologies AI Solutions

The new Dell EMC HPC (High Performance Computing) Ready Architectures for AI and Data Analytics are built on Dell PowerEdge servers using NVIDIA GPUs. Isilon scale-out NAS (network-attached storage) provides most of the storage needed by the architectures. Isilon NAS comes in two flavors: the A200 is intended for active archive storage, while the A2000 is designed for deep archive storage that is accessed infrequently.

Dell also announced two new validated architectures today. These architectures are more focused on data analytics, and so don’t have to make the same tradeoffs that the hybrid AI/Data Analytics architectures do. The first is designed to run Apache Spark on Kubernetes workloads. Apache Spark is an open-source distributed general-purpose cluster-computing framework initially developed at Berkeley. The second new reference architecture is designed to run Splunk’s enterprise software for monitoring and analyzing very large datasets. Like most of Dell’s reference architectures, they’re based around Dell EMC’s PowerEdge servers.

Dell Technologies AI solutions Availability

Immediately

Dell Technologies Main Site



StorONE: Flexible Enterprise Storage

StorONE Flexible Enterprise Storage

Although StorONE is a relative newcomer to the storage market, since 2012 the company has been developing The Enterprise Storage Platform, the technology behind its software-defined storage (SDS) product, S1. An enterprise storage platform evolves software-defined storage so that it provides maximum performance, supports all use cases (block, file, object) and all protocols (Fibre Channel, iSCSI, NFS, SMB, S3), which together leads to dramatic reductions in cost and complexity.

StorONE has been granted over 50 patents. StorONE is the creation of storage maven Gal Naor, whose name is quite familiar in the storage world. He is known for having introduced the first real-time enterprise storage compression technology in 2004 when he was at StorWize, which was acquired by IBM in 2010. After leaving IBM, Gal set off to work on developing the next generation of storage products. He envisioned this next generation as being storage-protocol and platform agnostic, capable of taking advantage of the latest hardware, and free from the constraints of last-gen technology and thinking. The end result of his vision and work is what we know today as the StorONE S1 Enterprise Platform, S1 for short.

After looking over StorONE S1, the one word we would use to describe it is flexible. S1 supports a multitude of storage protocols, can be deployed on many different platforms, supports all commonly used storage devices, and has an underlying pool of storage that can be divvied up and used by any of the storage protocols. Capping this flexibility are its single-interface administration and single-license requirement.

This flexibility matters. The ugly reality of today's datacenter is that we are limited in our choices by decisions that we, or predecessors who have since retired or moved on to other roles, made years ago. This puts artificial constraints on our ability to modernize and take advantage of the latest innovations in IT. A prime example of this inflexibility with other storage products is the current trend in applications toward using object storage. If a datacenter decided on a different storage protocol years ago, it will now be required to adopt a new storage platform, or perhaps add a new (oftentimes licensed) feature to its existing storage environment. More often than not, a separate pool of storage resources must be dedicated to this new storage protocol. With StorONE, however, you would only need to create a new volume and specify that it be used as object storage; no additional software would need to be installed, and no additional licensing would be required. StorONE currently supports Fibre Channel, iSCSI, NFS, and SMB, and is working on Container Storage Interface (CSI) support for container storage.

This flexibility manifests itself not only in the choice of storage protocols, but also in the ways in which S1 can be deployed: on bare metal, as a virtual appliance, or in the cloud. The variety of deployment models gives IT professionals the flexibility to choose the method that best suits their needs. For example, those with strict corporate or governmental standards regarding the location and management of data may require an on-premises deployment of S1. For bare-metal deployments, StorONE has partnered with various hardware vendors to provide turnkey platforms for S1; those who wish to reuse existing hardware or architect their own storage platform can deploy S1 on that instead.

Regardless of the deployment method, S1 supports the use of Intel Optane, SSDs (SAS and NVMe), and HDDs for the underlying data storage. It also supports FC or Ethernet for data connectivity and transport, so customers can meet their performance and cost requirements. If a data center has a "VM first" mentality or needs additional flexibility, S1 can be deployed as a VM on a hypervisor. For those who want to use cloud-based S1 storage, there are many different public and private cloud providers to choose from.

Moreover, S1 is managed through the same interface and has the same workflows regardless of the platform it is running on. The interface runs on many different operating systems (e.g., Windows, Linux, macOS and Android devices), supports a full-featured command-line interface (CLI), and has a well-documented API. Both the CLI and API support all the features of the GUI. The API can be accessed via REST calls, the CLI supports scripting, and together they allow programmatic control over StorONE storage and integration with other key pieces of your datacenter.

StorONE Flexible Enterprise Storage

When we tried creating storage by using the GUI, we were presented with an intuitive interface that walked us through all the steps required to create storage and present it for consumption. Over a thousand volumes (which can each be around half a petabyte) can be created, thin provisioned, and expanded in an S1 environment, if needed.

You can protect data stored on S1 through a high-availability mode and erasure coding. Data protection is done on a per-volume basis. For erasure coding, S1 uses an N+K methodology, where K is the number of simultaneous drive failures that an S1 volume can withstand, and N is the number of data drives (up to five) that you would like to stripe over. When using erasure coding, each volume can have different K and N values.
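As a quick illustration of the N+K scheme (generic erasure-coding math, not StorONE code): with N data drives and K parity drives, a volume tolerates K simultaneous drive failures, and N/(N+K) of the raw capacity remains usable.

```python
# Generic N+K erasure-coding capacity math, illustrating the scheme described
# above. Not StorONE code; the drive count and size below are made up.

def usable_capacity(n_data, k_parity, drive_tb):
    """Return (usable TB, overhead fraction) for an N+K erasure-coded volume."""
    raw = (n_data + k_parity) * drive_tb
    usable = n_data * drive_tb
    return usable, 1 - usable / raw

# Example: stripe over N=5 data drives with K=2 parity drives of 8TB each.
usable, overhead = usable_capacity(5, 2, 8.0)
print(f"usable ≈ {usable:.0f}TB, overhead ≈ {overhead:.0%}, survives 2 drive failures")
# usable ≈ 40TB, overhead ≈ 29%
```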

StorONE Flexible Enterprise Storage 2

StorONE has done extensive work on their snapshot technology and has optimized it to the point that they say it will not impact the performance of their storage. Their technology allows for the creation of an unlimited number of nestable, writeable, and persistent snapshots. These snapshots can either be initiated manually, or be scheduled to be taken at specific points in time. Snapshots can be taken on individual volumes or on groups of volumes. Snapshot restoration is accomplished via the GUI, CLI, or with an API call.

StorONE Flexible Enterprise Storage 3

The GUI can also monitor the entire storage system or a particular volume.

The technology behind S1 is StorONE’s Total Resource Utilization (TRU), which was designed from the ground up to maximize modern hardware and exploit the capabilities of flash devices and the latest extensions in CPUs. Being optimized for today’s technology allows S1 to deliver better performance while using fewer resources.

To get a better feel for S1, we installed it on a Supermicro server with twelve 2.5″ drives and four NVMe drives. The process of installing, creating storage volumes, and managing the system can be seen in a video here:

All the features (e.g., storage protocol, number of snapshots, HA, etc.) on a StorONE system require only a single license. This greatly simplifies license management and allows departments and business units to concentrate on using the best storage technology for a given application. It also allows developers to experiment with different storage technologies without incurring additional expenses or setting up new storage hardware. StorONE prides itself on its value and has made its licensing model very simple: S1 is priced at $0.01 per GB.

As a final note, since S1 is a software product, the customer can use the server hardware of their choice. The enterprise storage platform is somewhat future-proof: it is not tied to a particular hardware or cloud vendor, and as new hardware or storage protocols emerge, they can be incorporated via new code. While many vendors claim to be future-proof, StorONE has recently proven it. The company completed testing with Intel Optane and was able to get maximum performance out of Optane without changes to its software, even though Optane became available well after the company had developed S1.

StorONE: Flexible Enterprise Storage Wrap Up

For SMBs and enterprises looking for a reasonably priced storage solution with enterprise features, StorONE should be on the short list of products to examine. Being platform agnostic, it can be used for on-premises or cloud-based storage.

StorONE is headquartered in New York, with offices in Texas, Tel Aviv, and Singapore. More information on StorONE can be found at: https://www.storone.com/


BOXX FLEXX Data Center Platform Announced


Today at GTC, BOXX Technologies announced its new BOXX FLEXX data center platform. Billed as NVIDIA-powered, the new platform houses several NVIDIA RTX GPUs. On top of that, the platform houses several CPUs and supports NVIDIA GPU virtualization software for remote workstations. BOXX is making a few other product announcements at GTC as well.

BOXX FLEXX Data Center Platform

While we've touched on BOXX in the past, we've never covered the company directly. BOXX is a bit more niche than most vendors we cover at StorageReview. Originally founded as Digital Emulsion, Inc. in Phoenix, AZ back in 1996, the company relocated to Austin, TX two years later and became BOXX. It focuses on high-performance computer workstations, rendering systems, and servers for engineering, product design, architecture, visual effects, animation, and deep learning.

Designed for very high performance, the BOXX FLEXX Data Center Platform supports compute nodes that deliver the highest application performance for engineers, architects, designers, artists, and other professional content creators working on site or remotely. The compute nodes also include NVIDIA Quadro Virtual Workstation nodes. The companies state that through this combination, remote users can see performance that was previously available only in deskside workstations. The platform is said to be able to provision Quadro Virtual Workstations in minutes.

The BOXX FLEXX Data Center Platform is available in various sizes, with nodes measured in VUs (vertical units). The FLEXX chassis supports up to ten 1VU nodes or five 2VU nodes. The nodes can be mixed based on need and swapped around without disrupting neighboring nodes. A FLEXX chassis is 5U and comes with redundant power supplies.

Aside from the BOXX FLEXX Data Center Platform, the company also announced the RAXX P6G Jupiter and the APEXX W4L ProViz workstation. The RAXX P6G Jupiter holds up to 16 NVIDIA Quadro RTX 8000 GPUs and is ideal for deep learning development, rendering, simulation, and other GPU-centric workflows. The APEXX W4L ProViz workstation is another NVIDIA-GPU-heavy device, housing four NVIDIA Quadro RTX GPUs. On top of the GPUs, the workstation has a 28-core (56-thread) Intel Xeon W-3275 processor and 128GB of memory. The superpowered workstation is ideal for computer graphics applications like 3ds Max, Maya, and Maxon Cinema 4D, as well as rendering engines like Arnold, V-Ray, and Redshift.

BOXX


Cisco Finishes Acquisition of Exablaze


At the end of last year, Cisco announced its intention to acquire Exablaze. Now that the acquisition has gone through, the two companies are beginning to integrate. The bulk of this integration centers around the Exablaze engineering team, which is being folded into Cisco's data center organization.

Cisco Exablaze

A bit ahead of schedule, the acquisition was completed in February 2020, though we originally reported that it was expected to close in the third quarter of this year. As we initially stated, by integrating Exablaze's products and technology into its own portfolio, Cisco gains the latest field-programmable gate array (FPGA) technology with the flexibility and programmability its customers need. Among the Exablaze products are advanced ultra-low-latency FPGA-based switches and network interface cards, as well as Terminal Access Point aggregation packet brokers.

With the engineering team integrated into a single company, it will continue to focus on high-performance, low-latency networking, ideal for high-frequency trading environments along with new AI/ML cloud services, 5G telecommunications infrastructure, and defense applications. Exablaze customers will now have access to Cisco's global reach, including sales, R&D, and customer support. For current customers that may be concerned, Cisco states that technical support for Exablaze will not only remain, but should see an improvement in service. Cisco will also push ahead with the roadmap Exablaze had already laid out for products such as the ExaNIC and FDK range.

The fact that Exablaze's products already complemented Cisco's own will make the integration a bit smoother. Cisco has been on a buying spree over the last few years, and it will be interesting to see where it goes with all of the companies it acquires. This integration, however, will help address the increasing need for performance-optimized bandwidth with next-generation switching platforms and network interface cards.

Cisco


Veeam v10 Enhanced Instant VM Recovery


In a previous article, we overviewed one of the most significant enhancements of the new Veeam Backup & Replication version 10 software, Enhanced NAS Backup. Working toward a comprehensive review of this new version's enhanced features, this time we take a close look at the Instant VM Recovery feature. In version 10, Veeam Instant VM Recovery now allows the system to recover from any Veeam backup to a VMware environment. Also new is that Veeam now allows recovering multiple virtual machines at the same time. These two capabilities address critical and demanding scenarios. Overall, Instant VM Recovery helps improve recovery time objectives (RTO), increase protection against ransomware attacks, and reduce downtime of production workloads.

Veeam v10 Instant Recovery

Instant VM Recovery is not a new feature in the Veeam backup domain; Veeam established this capability back in version 5. The solution was implemented from the Veeam backup repository: when admins took a backup, it was sent to the storage repository; later, the backup could be copied back to the production system, powered on, and accessed. This way, customers could get workloads back up and running within just a couple of minutes. But Veeam explored, moved forward, and improved the way the system operates. Now, in version 10, Veeam adds the ability to make immediate copies of the backup to offsite targets during the actual backup process, helping ensure that ransomware attacks won't have time to affect vital data.

Focused areas of Veeam v10 Instant VM Recovery

Recovering multiple VMs can be a huge pain in the neck for companies wanting to recover hundreds of VMs simultaneously. The best way to approach this kind of bulk processing is with high-speed storage, such as SSD and NVMe, and ideally the operation should be scripted. Veeam v10 Instant VM Recovery is enhanced precisely for this purpose: the backend engine has been improved to deliver the best possible outcomes without the need to add expensive hardware. Also, the Veeam Backup & Replication console now supports multiple instant VM recoveries simultaneously, making backup strategies more manageable.

Going through the updated Veeam documentation and material, we examined what the keys are to making Instant VM Recovery operations faster and more cost-effective. Veeam put the focus on three areas. The first is increasing the RAM cache size to 1GB. Moreover, this RAM cache was optimized to track all of the read blocks, flushing out the blocks that have been read least often, reducing latency and enabling faster recovery performance.

Another focus area is the intelligent prefetch of data blocks. Here, the Veeam engine looks ahead to which data blocks are going to be read next and intelligently reads those blocks so that they are ready even while the system is being recovered. The intelligent prefetch of data blocks results in faster delivery; Veeam mentioned that, depending on the system, recovery speed could see up to a 5x increase. Besides these two enhancements, Veeam also optimized the interaction with the backup storage system. The engine takes advantage of synchronous I/O capability when performing random reads and writes. For customers with high-performance storage systems, this means improved performance of the recovery operation.
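To illustrate the two ideas described above, flushing the least-often-read blocks out of a fixed-size RAM cache and prefetching the blocks likely to be read next, here is a purely conceptual sketch in Python. This is not Veeam's implementation; the block size, cache size, and the simple sequential-prefetch heuristic are our own assumptions.

```python
from collections import Counter

BLOCK_SIZE = 512 * 1024                       # assumed block size for the sketch
CACHE_BLOCKS = (1 * 1024**3) // BLOCK_SIZE    # ~1GB cache, as described above
PREFETCH_DEPTH = 4                            # assumed look-ahead, purely illustrative

class ReadCache:
    """Least-often-read eviction plus simple sequential prefetch (conceptual only)."""
    def __init__(self, backing_read):
        self.backing_read = backing_read      # callable: block index -> bytes
        self.blocks = {}                      # block index -> cached data
        self.read_counts = Counter()          # how often each cached block was read

    def _insert(self, idx, data):
        if len(self.blocks) >= CACHE_BLOCKS:
            # Flush the block that has been read least often
            victim, _ = min(self.read_counts.items(), key=lambda kv: kv[1])
            del self.blocks[victim]
            del self.read_counts[victim]
        self.blocks[idx] = data
        self.read_counts.setdefault(idx, 0)

    def read(self, idx):
        if idx not in self.blocks:
            self._insert(idx, self.backing_read(idx))
        self.read_counts[idx] += 1
        # Prefetch the next few blocks on the assumption they'll be read soon
        for ahead in range(1, PREFETCH_DEPTH + 1):
            nxt = idx + ahead
            if nxt not in self.blocks:
                self._insert(nxt, self.backing_read(nxt))
        return self.blocks[idx]

# Usage sketch: cache = ReadCache(lambda idx: read_block_from_backup(idx)),
# where read_block_from_backup is whatever reads a block out of the backup file.
```

A real implementation would of course operate on the backup file format directly and tune the eviction and prefetch policies; the point here is only to show how least-often-read eviction and look-ahead reads combine to cut recovery latency.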

In Veeam v10, there are more operations behind the new enhanced Instant VM Recovery feature; to expand on the concepts overviewed here and get into in-depth technical details, we recommend visiting Veeam's website.

Performing an Instant VM Recovery

Very few changes have been made to the Veeam Backup & Replication console with regard to the Instant VM Recovery feature. However, for a comprehensive overview, we will walk through the steps to perform an instant VM recovery.

The operation can be carried out from one of two places. One method is from the ribbon menu, under Restore, selecting the Instant VM Recovery option. A second method is browsing the backups we have on disk and searching for the failed VM that we want to recover.

v10 Instant Recovery 1

We are sticking with the second option since it is the most practical. From the Backups menu, we select the workload that we want to restore, right-click it, and select the first option, Instant VM recovery.

v10 Instant Recovery 2

This brings up the Instant VM Recovery wizard. From the Machines step, we select the workload that we want to recover, and we can add more workloads. Here we can also change the restore point, which by default is the latest one.

Veeam v10 Instant Recovery 3

The next step is Restore Mode. Here we need to specify how the VM will be restored. We selected the second option since it allows us to customize the restored VM's location and other settings.

In the next step, Destination, we specify the destination for the restored VM. This option changes depending on the number of VMs we want to restore. Here, we can rename the restored VM and change the host, VM folder, and resource pool.

At the Datastore step, we can select where to store the redo logs when a VM is running from a backup. We also have the option to redirect the write cache.

For Microsoft Windows workloads, we have the Secure Restore settings. These options allow scanning the machine for virus threats before performing the recovery.


Finally, under Summary, we can select additional settings such as turning the VM on, as well as connecting it back to the network, depending on whether it's a test scenario or a real workload failure.

After finishing the Instant VM Recovery wizard, the VM recovery is performed. Then we must finalize the process, deciding whether to migrate the recovered VMs to the production environment or stop publishing them.

Multiple VMs can be added to the recovery process from the very first step. But another excellent option is to select all the workloads that we want to restore at once.

Conclusion

One of the most critical areas of any successful data protection strategy is the ability to recover what you need, when you need it, quickly, to maintain business continuity. Today, applications and heavy workloads are not managed by just a single VM, but are distributed over multiple machines that we need to recover concurrently. Instant VM Recovery is a functionality within Veeam, enhanced in version 10, that enables administrators to run a failed workload directly from the backup file within minutes, helping ensure that ransomware attacks won't have time to hit the data.

Instant VM Recovery can also be executed for testing purposes, which, besides disaster recovery, plays a vital role in making sure VMs are protected and valid. We can run a VM directly from the backup file, turn it on, and make sure the guest OS and applications are functioning correctly, instead of extracting VM images to production storage to perform regular DR testing. In Veeam v10, this feature is not only simple to perform but can be leveraged from any backup that we have. Additionally, these new enhancements are integrated into the system by default.

Veeam


NetApp Kubernetes Service (NKS) Comes To An End


Through a customer communiqué last week, NetApp announced it would be ending its NetApp Kubernetes Service (NKS). On top of that, the company is ending its NKS on HCI & Cloud Volumes on HCI. The services will be discontinued on April 30, 2020.


NetApp NKS

Kubernetes is massively popular. Over the course of the last few years, everyone and their brother has jumped on the Kubernetes bandwagon, especially the larger vendors trying to get in on the action. NetApp rolled out NKS to help customers with Kubernetes portability. NKS allowed customers to build Kubernetes clusters on-premises (NetApp HCI and FlexPod) or in the public cloud (AWS, Azure, and GCP). Cloud Volumes (also being shuttered) made file services in clouds usable by containerized applications.

NetApp is imploring customers to migrate off of NKS and related services now. NetApp support is ready to help those that need to migrate their data. So, if you happened to be one of the customers leveraging this service, best to contact support and get migrated onto something else as soon as you can.

NetApp states that it is in no way leaving the Kubernetes market. In fact, the company claims it will be investing more than ever and is working on a way to enable a simplified and unified solution across its entire portfolio. NetApp will support third-party Kubernetes platforms with a distribution-agnostic hybrid infrastructure. The company states that its long-term goal is to have NetApp HCI focus on delivering a software-defined architecture and on developing automation and tools around the leading Kubernetes solutions that will be critical to its future strategy.

NKS


SK hynix – Get the Most from SATA and NVMe Enterprise SSDs

Hynix NVMe SATA SSDs

NVMe flash storage has taken the industry by storm, establishing itself as the de facto standard when high-performance, low-latency storage is the requirement. There are times, however, when NVMe may be overkill, or cases where a hybrid flash approach makes more sense. Many server-based software-defined solutions that take advantage of flash can do so in a multi-tier fashion. VMware vSAN and Microsoft Azure Stack HCI are perhaps the best known in this regard; both can leverage a small high-performance flash pool for tiering and less expensive SSDs for capacity. Pairing lower-cost SATA SSDs with a small count of NVMe drives provides an excellent blend of performance, capacity and cost.

Another factor when considering the deployment of flash is the server itself. While there are plenty of all-NVMe servers from vendors both large and small, it's often impractical or unnecessary to go this route. With NVMe drive costs higher than SATA, a majority of servers sold today offer a couple of NVMe bays, mixed in with SATA/SAS for the remainder. One such server sold in this way is the Dell EMC PowerEdge R640.

The Dell EMC PowerEdge R640 is a 1U, 2-socket server designed for tasks where compute density is paramount. In our lab, we have an R640 configured with 10 2.5” drive bays, including 4 NVMe/SAS/SATA combo bays and 6 SAS/SATA bays, though Dell offers a wide variety of configurations. This type of storage configuration lets us take advantage of up to four very fast NVMe SSDs, as well as cost-optimized SATA SSDs. The combination bays also allow customers to go heavier on NVMe SSDs as I/O needs grow, or to stick with more SATA or SAS depending on the specific requirements of the build.

SK hynix PE6011 NVMe SSD
SK hynix PE6011 SSD

To illustrate this concept further, we have worked with SK hynix to test a group of PE6011 NVMe SSDs and a group of SE4011 SATA SSDs. These tests are designed to show how each drive can complement the other, with NVMe offering greater bandwidth and I/O potential, and SATA offering the capacity requirements without a significant drop in latency or performance. The tests clearly articulate where the performance bands are, so the enterprise has a complete picture to aid in the decision-making process, especially when architecting software-defined solutions like an object store (SUSE Enterprise Storage) or a more traditional virtual storage appliance (StorONE).

SATA vs. NVMe SSDs – Dell EMC PowerEdge R640 Testbed

In our testing configuration, we leveraged a Dell PowerEdge R640 equipped with dual 2nd Generation Intel Xeon Scalable 8280 CPUs with a clock speed of 2.7GHz and 28 cores each. Paired with these CPUs were twelve 32GB 2933MHz DDR4 modules, giving the system a combined memory footprint of 384GB. For SATA connectivity, the R640 included a PERC H740P RAID card with drives configured in HBA pass-through mode. For NVMe connectivity, all four SSDs communicate with the second CPU over direct PCIe lanes, without the use of a PCIe switch inside the R640. This method bypassed the impact of controller cache and instead focused on the performance of the drives themselves, in aggregate or individually in VMware.

Dell EMC PowerEdge R640
Dell EMC PowerEdge R640

Our testing setup consisted of two storage configurations. The first was four PE6011 NVMe SSDs, fully outfitting the four NVMe bays inside the PowerEdge R640, leaving six remaining SATA/SAS bays open. The second was eight SE4011 SATA SSDs, fully utilizing all dedicated SATA/SAS bays, leaving two NVMe combo bays available. 

For bare-metal benchmarks we used CentOS 7.2 (1908) minimal, with the OpenJDK installed alongside vdbench. We measured each drive group in aggregate, showing peak performance of four PE6011 NVMe SSDs followed by eight SE4011 SATA SSDs. In our virtualized testing environment we installed VMware ESXi 6.7u3, formatted individual SSDs as datastores, and placed SQL Server or MySQL databases on them. For Sysbench tests, we leverage 8 VMs, with two placed on each SSD in the case of the NVMe tests and one per SSD in the case of the SATA tests. For SQL Server, where the test consists of only 4 VMs, we place each VM on its own SSD, giving us four NVMe SSDs or four SATA SSDs under test.

VDbench testing / thread count

All of these tests leverage the common vdbench workload generator, with a scripting engine to automate and capture results over a large compute testing cluster. This allows us to repeat the same workloads across a wide range of storage devices, including flash arrays and individual storage devices. A minimal sketch of how these profiles might be scripted follows the list below.

Profiles:

  • 4K Random Read: 100% Read, 128 threads, 0-120% iorate
  • 4K Random Write: 100% Write, 128 threads, 0-120% iorate
  • 64K Sequential Read: 100% Read, 32 threads, 0-120% iorate
  • 64K Sequential Write: 100% Write, 16 threads, 0-120% iorate
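
To make the four-corners setup more concrete, below is a minimal sketch of how profiles like these could be scripted. The parameter files, device paths, run lengths and iorate settings are illustrative assumptions rather than the exact files used in our lab; vdbench simply consumes a plain-text parameter file and is launched from the command line (it requires Java on the host).

```python
#!/usr/bin/env python3
"""Illustrative four-corners vdbench launcher (hypothetical paths and devices)."""
import subprocess
from pathlib import Path

# Hypothetical raw devices under test; substitute the NVMe or SATA group here.
DEVICES = ["/dev/nvme0n1", "/dev/nvme1n1", "/dev/nvme2n1", "/dev/nvme3n1"]

PROFILES = {
    # name: (transfer size, read %, random %, threads)
    "4k_rand_read":  ("4k",  100, 100, 128),
    "4k_rand_write": ("4k",    0, 100, 128),
    "64k_seq_read":  ("64k", 100,   0,  32),
    "64k_seq_write": ("64k",   0,   0,  16),
}

def write_parmfile(name, xfersize, rdpct, seekpct, threads) -> Path:
    """Emit a plain-text vdbench parameter file for one workload."""
    sds = "\n".join(
        f"sd=sd{i},lun={dev},openflags=o_direct"
        for i, dev in enumerate(DEVICES, start=1)
    )
    parm = (
        f"{sds}\n"
        f"wd=wd1,sd=sd*,xfersize={xfersize},rdpct={rdpct},seekpct={seekpct}\n"
        f"rd=run1,wd=wd1,iorate=max,elapsed=300,interval=5,threads={threads}\n"
    )
    path = Path(f"{name}.parm")
    path.write_text(parm)
    return path

if __name__ == "__main__":
    for name, spec in PROFILES.items():
        parmfile = write_parmfile(name, *spec)
        # Assumes the vdbench wrapper script is on PATH; output goes to a per-run folder.
        subprocess.run(["vdbench", "-f", str(parmfile), "-o", f"out_{name}"], check=True)
```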

SQL Server configuration (4VMs)

StorageReview’s Microsoft SQL Server OLTP testing protocol employs the current draft of the Transaction Processing Performance Council’s Benchmark C (TPC-C), an online transaction processing benchmark that simulates the activities found in complex application environments. The TPC-C benchmark comes closer than synthetic performance benchmarks to gauging the performance strengths and bottlenecks of storage infrastructure in database environments.

Each SQL Server VM is configured with two vDisks: 100GB volume for boot and a 500GB volume for the database and log files. From a system resource perspective, we configured each VM with 16 vCPUs, 64GB of DRAM and leveraged the LSI Logic SAS SCSI controller. While our Sysbench workloads tested previously saturated the platform in both storage I/O and capacity, the SQL test looks for latency performance.

This test uses SQL Server 2014 running on Windows Server 2012 R2 guest VMs, and is stressed by Dell’s Benchmark Factory for Databases. While our traditional usage of this benchmark has been to test large 3,000-scale databases on local or shared storage, in this iteration we focus on spreading out four 1,500-scale databases evenly across our servers.

SQL Server Testing Configuration (per VM)

  • Windows Server 2012 R2
  • Storage Footprint: 600GB allocated, 500GB used
  • SQL Server 2014
    • Database Size: 1,500 scale
    • Virtual Client Load: 15,000
    • RAM Buffer: 48GB
  • Test Length: 3 hours
    • 2.5 hours preconditioning
    • 30 minutes sample period

MySQL Sysbench configuration (8VMs)

Our Percona MySQL OLTP database measures transactional performance via SysBench. This test measures average TPS (Transactions Per Second), average latency, and average 99th percentile latency as well.

Each Sysbench VM is configured with three vDisks: one for boot (~92GB), one with the pre-built database (~447GB), and the third for the database under test (270GB). From a system resource perspective, we configured each VM with 16 vCPUs, 60GB of DRAM and leveraged the LSI Logic SAS SCSI controller.

Sysbench Testing Configuration (per VM)

  • CentOS 6.3 64-bit
  • Percona XtraDB 5.5.30-rel30.1
    • Database Tables: 100
    • Database Size: 10,000,000
    • Database Threads: 32
    • RAM Buffer: 24GB
  • Test Length: 3 hours
    • 2 hours preconditioning 32 threads
    • 1 hour 32 threads
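
For readers who want to reproduce a comparable load, the sketch below shows how one of the eight Sysbench VMs might be driven. It uses current sysbench syntax (oltp_read_write) rather than the era-appropriate legacy OLTP script used in the actual runs, and the host address and credentials are placeholders.

```python
#!/usr/bin/env python3
"""Illustrative Sysbench OLTP driver for a single VM (placeholder host and credentials)."""
import subprocess

MYSQL = {
    "host": "10.0.0.101",   # placeholder address of one database VM
    "user": "sbtest",
    "password": "sbtest",
    "db": "sbtest",
}

COMMON = [
    "sysbench", "oltp_read_write",            # modern equivalent of the legacy OLTP test
    f"--mysql-host={MYSQL['host']}",
    f"--mysql-user={MYSQL['user']}",
    f"--mysql-password={MYSQL['password']}",
    f"--mysql-db={MYSQL['db']}",
    "--tables=100",                           # Database Tables: 100
    "--table-size=10000000",                  # Database Size: 10,000,000 rows (assumed per table)
    "--threads=32",                           # Database Threads: 32
]

# Two hours of preconditioning followed by a one-hour measured run, per the plan above.
for phase, seconds in (("precondition", 2 * 3600), ("measure", 1 * 3600)):
    print(f"--- {phase} ---")
    subprocess.run(COMMON + [f"--time={seconds}", "--report-interval=60", "run"], check=True)
```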

SK hynix SATA and NVMe SSD Performance Results

To characterize the performance of both the SK hynix PE6011 NVMe SSD and SE4011 SATA SSD, we performed a “four-corners” synthetic workload on them. This compared the raw performance of four NVMe SSDs against eight SATA SSDs all addressed directly for a total I/O picture without RAID impacting performance.

Our first workload measured peak read bandwidth from each drive group with a 64K sequential workload. In this workload we measured a peak bandwidth of 3.97GB/s at 4ms latency from the eight drive SATA group. The four drive NVMe group measured a peak bandwidth of 10.76GB/s at 0.734ms. 

Next we looked at sequential write bandwidth with the same 64K sequential workload. In this setting the SATA SSD group measured 3.06GB/s at its peak, before tapering back to 2.8GB/s with 2.8ms latency at an over-saturation point. The NVMe SSD group, by contrast, scaled up to 3.6GB/s at 1.1ms latency.

NVMe vs SATA 4K read

Switching focus to our peak throughput tests measuring 4K random performance, we first look at our read workload. In this setting the group of eight SATA SSDs peaked at 542k IOPS at 1.9ms latency. By comparison the four NVMe SSDs were able to far outpace them with a peak throughput of 2.46M IOPS at 0.205ms latency.

The last component of our “four-corners” synthetic workload measured the random 4K write performance of each drive group. The eight SATA SSDs were able to offer 500k IOPS peak at 1.99ms latency, whereas the four NVMe SSDs offered 835k IOPS at 0.572ms latency.

In the last stage of our synthetic testing, we looked at two VDI use cases, the first being VDI Full Clone Boot. In this workload the four PE6011 NVMe SSDs offered a peak of 384k IOPS or 5.3GB/s at 0.33ms latency, while the eight SE4011 SATA SSDs peaked at 202k IOPS or 2.8GB/s at 1.2ms latency.

In the second VDI workload, the PE6011 NVMe group peaked at 186k IOPS or 3.5GB/s at 0.55ms latency. The eight-drive SATA group measured upwards of 109k IOPS or 2.1GB/s at 1.9ms latency.

From looking at how each drive group performed in our four-corner and VDI workloads, we see that a 2:1 ratio of SATA to NVMe offered a good balance of read and write performance. The PE6011 SSDs were able to offer very strong read throughput and bandwidth at low latency compared to their SATA counterparts. Looking at write throughput and bandwidth, the SE4011 SSDs were able to absorb workloads not far behind their NVMe counterparts, which is important when combining different classes of drives in a storage solution where data needs to move between tiers fast enough to avoid slowing down incoming workloads.

Our last two workloads look at Microsoft SQL Server TPC-C and MySQL Sysbench performance running across multiple VMs inside a VMware ESXi 6.7u3 virtualized environment. Both of these tests are designed to show real-world performance, with our SQL Server workload focusing on latency and our MySQL test focusing on peak transactional performance.

Our SQL Server workload for this project consisted of 4 VMs, each placed on its own VMFS 5 datastore. This workload leveraged four of the SK hynix PE6011 NVMe SSDs and four SE4011 SATA SSDs. Using Quest Benchmark Factory, each VM has a 15,000 virtual user load applied and the responsiveness of the database is measured.

Across the four SK hynix PE6011 NVMe SSDs, we measured an average latency of 2ms across the four VMs. Moving that same workload to the four SE4011 SATA SSDs latency picked up to an average of 16ms. 

In our final database workload, we looked at the performance of 8 VMs. With 8 VMs, we place two on each of the 4 NVMe SSDs and one on each of the 8 SATA SSDs. In this workload we measure the individual transactional performance of each VM and aggregate them together for a total score. 

Across the four SK hynix PE6011 NVMe SSDs we measured an aggregate 18,525 TPS at an average latency of 13.81ms. Moving that workload to the eight SK hynix SE4011 SATA SSDs, the aggregate measured 13,032 TPS at an average latency of 19.64ms.

SATA vs. NVMe SSDs – Final Thoughts

When contemplating any form of storage, it's critical to understand the performance, cost and capacity characteristics of the system under consideration. In this case we're looking at a diverse SSD portfolio from SK hynix, capable of meeting a nearly endless range of use cases. Because SK hynix offers both SATA and NVMe SSDs, the drives can be leveraged in a variety of ways. While NVMe SSDs are clearly fast, they carry a price premium over SATA. On the other hand, SATA SSDs give up the speed NVMe offers, but are more economical and still catch the tailwind of all the TCO benefits flash offers over hard drives. As such, a majority of businesses can benefit from a hybrid flash approach, combining the performance of NVMe with the favorable economics of SATA.

Hynix NVMe SATA SSDs

Nowhere is this opportunity more clear than in software-defined storage and the hyperconvergence market. Most SDS and HCI deployments are designed to take advantage of different classes of storage; StorONE, Microsoft Azure Stack HCI and VMware vSAN are all good examples of this. In some cases the NVMe SSDs can act as a cache or tier in front of the SATA drives, which serve as the capacity for the system. In other cases distinct pools can be created, in this case a performance pool of NVMe and a SATA pool for less critical application workloads.

To illustrate the benefits of both types of SSDs, we tested a group of PE6011 NVMe SSDs along with SE4011 SATA SSDs in a Dell EMC PowerEdge R640. Our key findings show that the PE6011 NVMe SSDs are able to provide strong, low-latency performance across our synthetic and application workloads, delivering in excess of 10.7GB/s in read bandwidth. In addition, our findings show that the SE4011 SATA SSDs complement the NVMe SSDs, offering a stable capacity tier in all of our workloads, which is an important consideration in tiering or caching scenarios where data may rest on either storage pool. Write performance from the SATA SE4011 group held up very well, measuring 2.8GB/s across eight drives versus 3.6GB/s from four PE6011 NVMe SSDs. As workloads de-stage, or need to perform well before moving into cache or a tier, strong write performance allows the drives to deliver a consistent user experience in a well-balanced storage solution.

SK hynix has redoubled their efforts in enterprise flash over the last year and a half, quickly coming to market with a diverse, vertically-integrated portfolio. This range of products gives customers choices, to ensure their deployments perform as expected. Whether the drives go into an SDS solution, HCI cluster or simply serve as server storage, SK hynix is ready to support their customers in this journey.

Sk hynix Enterprise SSDs

Discuss on Reddit

Engage with StorageReview

Newsletter | YouTube | Podcast iTunes/Spotify | Instagram | Twitter | Facebook | RSS Feed

This report is sponsored by SK hynix. All views and opinions expressed in this report are based on our unbiased view of the product(s) under consideration.

The post SK hynix – Get the Most from SATA and NVMe Enterprise SSDs appeared first on StorageReview.com.

StorageReview Podcast #40: Dakota Calvert, SanDisk

SanDisk Extreme Pro CFexpress Card Type B

In the podcast this week Brian interviews Dakota Calvert from SanDisk. The guys discuss what it means for SanDisk to be reliable for content creators, and emerging tech to support creators, like memory cards capable of 1700MB/s reads. Additionally the team covers off on all things Corona, including events being pushed back, Dell Tech World is now in October as a virtual event, VeeamON was also pushed back and virtual only.

We spend too much time on the podcast talking about video game consoles, cover the top news, and discuss free VSAs like Dell EMC UnityVSA (free for 4TB). Adam defends his movie corner as Tom tries to encroach as the media recommendation engine, with the call being It Comes at Night (Netflix). We also thank NetApp for sending jackets for the team!

SanDisk Extreme Pro CFexpress Card Type B

SanDisk Cards at Amazon

Discuss on Reddit

Subscribe to our podcast:

Engage with StorageReview

Newsletter | YouTube | Podcast iTunes/Spotify | Instagram | Twitter | Facebook | RSS Feed

The post StorageReview Podcast #40: Dakota Calvert, SanDisk appeared first on StorageReview.com.


Supermicro SAP HANA Solutions For VMware HCI Announced

Supermicro SAP HANA VMware

Super Micro Computer, Inc. announced new solutions that are SAP HANA-certified for VMware hyperconverged infrastructure (HCI). Being certified for both SAP HANA and VMware is one big benefit; in addition, the Supermicro servers are powered by second-generation Intel Xeon Scalable CPUs and support Intel Optane persistent memory. This added performance is ideal for those that wish to leverage HCI for their SAP workloads.

Supermicro SAP HANA VMware

These two new systems are the SYS-2029U-E1CRT and SYS-6029U-E1CRT4. Both are certified on VMware vSAN version 6.6, vSphere version 6.7, and SAP HANA 2.0, and the hardware listed above can bring real performance to SAP workloads. The new Intel Xeon Scalable CPUs also come with Intel DL Boost, a built-in AI accelerator that provides the agility to extract further insights from AI-infused workloads.

Supermicro and VMware stated that their collaboration is an ideal alternative to traditional Fibre Channel SAN-based virtualization infrastructure. Forgoing much of the complexity that the traditional infrastructure brings, VMware vSAN adds ease of use and scalability to HCI, and it can overcome other challenges brought on by SAN and NAS, including networking challenges.

Supermicro SAP HANA Solutions

Discuss on Reddit

Engage with StorageReview

Newsletter | YouTube | Podcast iTunes/Spotify | Instagram | Twitter | Facebook | RSS Feed

The post Supermicro SAP HANA Solutions For VMware HCI Announced appeared first on StorageReview.com.

News Bits: StorONE, Samsung, FalconStor, Quantum, Microsoft, AMD, AWS, & Dynatrace

StorageReview logo

This week’s News Bits we look at a number of small announcements, small in terms of the content, not the impact they have. StorONE partners with SQream. Samsung mass produces eUFS 3.1. FalconStor launches StorSafe. Quantum completes acquisition of ActiveScale. Microsoft Azure NVv4 Virtual Machines hit general availability. AWS adds PyTorch support. Dynatrace offers extended free trials.

StorONE Partners With SQream

StorONE Flexible Enterprise Storage

StorONE has partnered with SQream to provide users with high-performance, massive data analytics. According to the companies, with the StorONE Enterprise Storage Platform, S1, SQream DB can saturate a multi-node, parallel file system over 100GbE, ensuring the NVIDIA Tesla V100 GPUs the software uses are fully utilized.

StorONE

SQream

Samsung Mass Produces eUFS 3.1

 

Samsung announced that it has begun mass production of its 512-gigabyte (GB) eUFS (embedded Universal Flash Storage) 3.1 for use in flagship smartphones. The new storage is said to deliver nearly 3x the write speed of the previous version at about 1.2GB/s. It also offers 100K read IOPS and 70K write IOPS. This brings storage performance normally associated with notebooks to a mobile phone.

Samsung

FalconStor Launches StorSafe

FalconStor StorSafe 

FalconStor Software, Inc. launched StorSafe, what it calls the industry’s first enterprise-class persistent data storage container. Some of StorSafe’s benefits include:

  • Portability across any S3 cloud, object storage, and on-premises storage environments and future-proofing for technology advancements
  • Dramatically improved long-term archive accessibility and reinstatement for strategic archive leverage
  • Variable payload container storage, allowing hyper-efficient data deduplication over large storage containers to significantly reduce cost and storage capacity consumption, and smaller storage containers optimized for accelerated data reinstatement
  • Increased security and data integrity capabilities to maintain the confidentiality, integrity, availability, nonrepudiation, and accessibility of archive data, as well as journaled integrity checks at user-set intervals for data integrity validation, cyber intrusion detection, and verification of chain of custody
  • Critical retention data is no longer tied to or integrated with a specific hardware storage system or cloud platform, allowing independent data portability over extended retention periods without expensive cross-platform service migration costs
  • Redundant Array of Independent Clouds (RAIC) container distribution via multi-cloud erasure coding for data redundancy, redundancy overhead reduction, and accelerated disaster recovery, which delivers 99.99999+% availability even if an entire cloud provider goes dark

FalconStor StorSafe

Quantum Completes Acquisition Of ActiveScale 

Quantum has completed its acquisition of ActiveScale from Western Digital. The company states that this acquisition will expand its leadership role in storing and managing video and other unstructured data using a software-defined approach.

Quantum

Microsoft Azure NVv4 Virtual Machines Hit GA

Microsoft and AMD continued to expand their partnership with the announcement of Microsoft Azure NVv4 Virtual Machines, powered by 2nd Gen AMD EPYC processors and AMD Radeon Instinct GPUs. The AMD-powered Azure VM families include:

  • Dav4 series VM: The Dav4 and Dasv4 Azure VMs are made for a variety of general-purpose applications. Featuring the powerful AMD EPYC 7452 processor, the VMs offer up to 96 vCPUs, 384 GBs of RAM, and 2,400 GBs of SSD-based temporary storage and support for Azure Premium SSDs.
  • Eav4 series VM: The Eav4 and Easv4 Azure VMs are made for memory-intensive workloads. These new VMs were the first in the cloud to feature the AMD EPYC 7452 processor and offer up to 64 percent better SQL Server workload performance in Azure than the previous-generation E-series VMs.
  • HBv2 VM: Powered by the AMD EPYC 7742 CPU, these VMs are purpose-built for high-performance computing workloads like CFD, explicit finite element analysis, seismic processing, reservoir modeling, rendering and more. Recently, Azure announced that in a series of HPC benchmarks, the HBv2 VM eclipsed 80,000 cores for message passing interface scalability, providing on-premises supercomputing levels of performance in the cloud.
  • NVv4 VM: Powered by 2nd Gen AMD EPYC CPUs and AMD Radeon Instinct MI25 GPUs, NVv4 delivers a modern desktop and workstation experience in the cloud. Single Root I/O Virtualization (SR-IOV) based GPU partitioning offers four resource-balanced configuration options, from 1/8th to a full GPU, to deliver a flexible, GPU-enabled virtual desktop experience.
  • Lsv2: The Lsv2-series is well suited to big data applications, SQL and NoSQL databases, data warehousing, and large transactional databases. The Lsv2 VMs run on the AMD EPYC 7551 processor.

AMD & Microsoft Azure

AWS adds PyTorch Support for Elastic Inference

AWS announced that it has added support for PyTorch models with Amazon Elastic Inference (EI). AWS states that this will bring some great benefits to ML developers, including allowing them to provision only the compute power they actually need, reducing the cost of running deep learning inference by up to 75%. Elastic Inference supports TorchScript-compiled models on PyTorch in regions where EI is available.
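
Since EI works with TorchScript-compiled models, the sketch below shows the compilation half of that workflow: tracing a pretrained model and saving the TorchScript artifact. The model choice is arbitrary, and the step of attaching an Elastic Inference accelerator to the endpoint is configured on the AWS side and intentionally omitted here.

```python
import torch
import torchvision.models as models

# Elastic Inference for PyTorch consumes TorchScript models, so the first step
# is producing one. Here we trace a stock torchvision ResNet-50 as an example.
model = models.resnet50(pretrained=True).eval()
example = torch.rand(1, 3, 224, 224)

with torch.no_grad():
    traced = torch.jit.trace(model, example)

traced.save("resnet50_traced.pt")

# Quick local sanity check of the compiled graph before deploying it behind an
# EI-enabled endpoint (the accelerator attachment itself happens on the AWS side).
loaded = torch.jit.load("resnet50_traced.pt")
print(loaded(example).shape)  # torch.Size([1, 1000])
```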

AWS

Dynatrace Offers Extended Free Trials

 

With the Covid-19 pandemic ongoing, many companies are extending free offerings to support remote workers. Dynatrace is no different, offering free trial access to its Software Intelligence Platform through May 19, 2020. The company states that this will help with the following:

  • Ensuring applications and infrastructure run smoothly amid increased demand
  • Delivering instant answers on performance problems and business metrics for remote teams
  • Enabling remote teams to identify and focus on their highest priority work

Dynatrace

The post News Bits: StorONE, Samsung, FalconStor, Quantum, Microsoft, AMD, AWS, & Dynatrace appeared first on StorageReview.com.

VMware vSphere 7 Improved DRS

VMware DRS vSphere 7

This week, VMware went into more detail describing the new Distributed Resource Scheduling (DRS) algorithm, improved in the latest VMware vSphere, version 7. VMware now uses an advanced DRS logic and a new DRS UI in the vSphere Client. With these enhancements, VMware DRS focuses on a better solution to support modern workloads.

VMware DRS vSphere 7

VMware released Distributed Resource Scheduling (DRS) in 2006 with an algorithm that, historically, focused on the cluster state. The cluster checked ESXi host resources every 5 minutes and then rebalanced, if needed, based on the resources consumed by the ESXi hosts. VMware says that the new DRS logic takes a very different approach. It now computes a VM DRS score on each host and moves the VM to the host that provides the highest score. The VM DRS score is focused on the execution efficiency of a virtual machine, not on its health.

Alongside the VM DRS score, VMware also shows the Cluster DRS Score in the UI, which is calculated using an aggregation of all the VM scores in the cluster. The vSphere cluster summary overview provides insights on what is happening from a DRS perspective. More information about the VM DRS score is available in the new UI, which is also one of the critical improvements of this new version.
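
To make the scoring idea easier to picture, here is a toy illustration of the selection and aggregation described above. The actual VM DRS score formula is internal to vSphere; the numbers and the simple mean used for the cluster score below are assumptions purely for illustration.

```python
from statistics import mean

# Assumed per-(VM, host) execution-efficiency scores on a 0-100 scale; in the
# real product these are computed by DRS itself, not supplied by the admin.
vm_host_scores = {
    "vm-app01": {"esxi-01": 92, "esxi-02": 78, "esxi-03": 85},
    "vm-db01":  {"esxi-01": 61, "esxi-02": 88, "esxi-03": 74},
    "vm-web01": {"esxi-01": 83, "esxi-02": 80, "esxi-03": 95},
}

# DRS-style choice: run each VM on the host where its score is highest.
placement = {vm: max(scores, key=scores.get) for vm, scores in vm_host_scores.items()}

# The Cluster DRS Score shown in the UI aggregates the per-VM scores; a plain
# mean is assumed here only to show the shape of the calculation.
cluster_score = mean(vm_host_scores[vm][host] for vm, host in placement.items())

print(placement)                 # {'vm-app01': 'esxi-01', 'vm-db01': 'esxi-02', 'vm-web01': 'esxi-03'}
print(round(cluster_score, 1))   # 91.7
```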

VMware also added additional unique capabilities to the new DRS logic: Assignable Hardware and Scalable Shares.

  • Assignable Hardware: When a VM configured with hardware accelerators is first powered on, it needs to be placed on an appropriate host in the cluster. The Assignable Hardware framework integrates with DRS to do that. PCIe devices with the new Dynamic DirectPath I/O or NVIDIA vGPU profiles are supported in vSphere 7 as part of this initial placement.
  • Scalable Shares: Enabling Scalable Shares allows resource pools to have dynamic and relative entitlements. With traditional resource pools, one could regularly see situations where pools configured with higher share levels did not necessarily guarantee more resources for their workloads; Scalable Shares aims to solve that issue. An improvement like this is also important for vSphere with Kubernetes, as the vSphere Pod Service needs it to ensure performance.

VMware vSphere

Discuss on Reddit

Engage with StorageReview

Newsletter | YouTube | Podcast iTunes/Spotify | Instagram | Twitter | Facebook | RSS Feed

The post VMware vSphere 7 Improved DRS appeared first on StorageReview.com.

StorONE S1 Storage Platform V3 Rolls Out

StorONE Activity screen

Today, StorONE announced that the third major version of its Enterprise Storage Platform, StorONE S1 Storage Platform V3, has entered general availability. StorONE was founded in 2012, and its focus is on providing low-cost storage through the S1 storage platform, which is the subject of this article. In an amazingly generous and humanitarian gesture, StorONE is allowing all businesses impacted by the coronavirus to use S1 at no cost through June. Healthcare and medical research firms have an even better deal, getting access to the storage platform at no charge through October.

StorONE Flexible Enterprise Storage

StorONE S1 Storage Platform V3 adds a host of new features, even more than we’re used to seeing from a major update.

The first new feature is the ability to move data across multiple tiers efficiently. S1 can now transfer data all the way from deep storage to a high-performance (and high-cost) NVMe flash drive, or to any tier in between.

The second new feature is tiering for snapshots. Previously, only production data could be tiered. The ability to tier snapshots as well will significantly reduce customers’ costs, though the real impact likely won’t be felt for months since the company has responded so heroically to the current situation and eliminated costs for many companies.

The third new feature is support for object storage at the volume level via the S3 protocol. A single S1-powered storage server can now deliver high-performance (1 million+ IOPS) storage over Fibre Channel or iSCSI, as well as cost-effective, high-capacity NAS or object storage via NFS, SMB, or S3. More importantly, all of these protocols and use cases run under the same storage platform and enjoy the same features and benefits; a brief, illustrative S3 example follows below.

The fourth and final new feature is replication. S1 now provides asynchronous, semi-synchronous, and synchronous replication of data from one StorONE system to another. Source and target storage clusters can have different drive redundancy settings, snapshot data retention policies, and drive pool types.
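
Since V3 exposes volumes over the S3 protocol, the sketch below shows what talking to such a volume could look like from a standard S3 client. The endpoint URL, bucket name, and credentials are placeholders, not values from an actual StorONE deployment, and the S1-side volume configuration is assumed to already be in place.

```python
import boto3

# Minimal sketch of addressing an S1 volume exposed over the S3 protocol.
# Endpoint, credentials, and bucket below are placeholders for illustration.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s1.example.local:9000",
    aws_access_key_id="S1_ACCESS_KEY",
    aws_secret_access_key="S1_SECRET_KEY",
)

s3.create_bucket(Bucket="archive-volume")
s3.put_object(
    Bucket="archive-volume",
    Key="backups/db-2020-04-01.dump",
    Body=b"example payload",
)

# List what landed in the bucket.
for obj in s3.list_objects_v2(Bucket="archive-volume").get("Contents", []):
    print(obj["Key"], obj["Size"])
```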

In addition to the many new features, the new version of StorONE’s S1 storage software also boasts enhancements to existing capabilities. S1 has boosted its support for NVMe SSDs: administrators can now place NVMe SSDs in a separate drive pool from SAS-based SSDs, keeping the high-performance drives isolated. If NVMe SSDs are available, the S1 platform will automatically place the S1 metadata on them to further accelerate performance.

StorONE S1 Storage Platform V3 Availability

Immediately

StorONE Main Site

Discuss on Reddit

Engage with StorageReview

Newsletter | YouTube | Podcast iTunes/Spotify | Instagram | Twitter | Facebook | RSS Feed

The post StorONE S1 Storage Platform V3 Rolls Out appeared first on StorageReview.com.

DataStax Releases Apache Cassandra Kubernetes Operator

DataStax Apache Cassandra Kubernetes

Today, DataStax released the code for an Apache Cassandra Kubernetes operator. DataStax was founded in March 2010, and its eponymous flagship product is built on the open-source Apache Cassandra database.

DataStax Apache Cassandra Kubernetes

The DataStax Apache Cassandra Kubernetes operator is, like all open-source projects, freely available. Like most open-source projects these days, it lives on GitHub. The operator currently supports Kubernetes v1.15 and v1.13. The main features of the operator are that it reduces downtime and lock-in.
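
To show what driving the operator might look like in practice, below is a minimal sketch that creates a Cassandra datacenter custom resource with the Kubernetes Python client. The API group, version, and field names follow the operator’s published examples at the time, but treat them as assumptions and check the GitHub repository for the current schema; the namespace and sizing values are placeholders.

```python
from kubernetes import client, config

# Minimal sketch: create a CassandraDatacenter custom resource through the
# operator. Group/version and spec fields are assumptions based on the
# operator's public examples; verify against the repo before using.
config.load_kube_config()

datacenter = {
    "apiVersion": "cassandra.datastax.com/v1beta1",
    "kind": "CassandraDatacenter",
    "metadata": {"name": "dc1"},
    "spec": {
        "clusterName": "cluster1",
        "serverType": "cassandra",
        "serverVersion": "3.11.6",
        "size": 3,  # three Cassandra nodes
        "storageConfig": {
            "cassandraDataVolumeClaimSpec": {
                "storageClassName": "standard",
                "accessModes": ["ReadWriteOnce"],
                "resources": {"requests": {"storage": "100Gi"}},
            }
        },
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="cassandra.datastax.com",
    version="v1beta1",
    namespace="cass-operator",      # placeholder namespace
    plural="cassandradatacenters",
    body=datacenter,
)
```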

Alongside the operator, DataStax also announced significantly faster and more efficient streaming. By eliminating copy streaming, DataStax says it has reduced operations that used to take hours to mere minutes. The other major new feature is full support for Cassandra Graph. Graph queries can now utilize native Cassandra data models, allowing users to query with the Gremlin traversal language and build multi-model applications with joins, matching, and traversals over large, distributed Cassandra data sets.

DataStax Apache Cassandra Kubernetes Availability

Immediately

DataStax Main Site

Discuss on Reddit

Engage with StorageReview

Newsletter | YouTube | Podcast iTunes/Spotify | Instagram | Twitter | Facebook | RSS Feed

The post DataStax Releases Apache Cassandra Kubernetes Operator appeared first on StorageReview.com.
