GIGABYTE R281-NO0 NVMe Server Review

The GIGABYTE R281-NO0 is a 2U all-NVMe server built around Intel's second generation of Xeon Scalable processors with a focus on performance-intensive workloads. Support for 2nd gen Intel Xeon Scalable brings with it support for Intel Optane DC Persistent Memory modules. Optane PMEM enables a much larger memory footprint: while the modules aren't as fast as DRAM, they come in much higher capacities. Leveraging Optane can help unleash the full potential of the 2nd gen Intel Xeon Scalable processors in the GIGABYTE R281-NO0.

Other notable hardware features of the GIGABYTE R281-NO0 include 12 DIMM slots per socket, or 24 in total. The newer CPUs allow DRAM speeds up to 2933MHz, and users can outfit the server with up to 3TB of DRAM. The server can leverage several different riser cards, giving it up to six full-height, half-length slots for PCIe devices up to x16. The company touts a very dense add-on slot design with several configurations for different use cases. The server has a modularized backplane that supports exchangeable expanders offering SAS, NVMe U.2, or a combination of the two, depending on needs.

On the storage side, users can add a great deal of NVMe storage in the form of both U.2 drives and add-in cards. Across the front of the server are 24 2.5" drive bays that accept HDDs or SSDs, including NVMe U.2 SSDs. The rear of the server has two more 2.5" bays for SATA/SAS boot or logging drives, and the PCIe expansion slots can host additional storage devices. This density and performance profile suits AI and HPC systems optimized for GPU density, multi-node servers optimized for HCI, and storage servers optimized for HDD/SSD capacity.

For power management, the GIGABYTE R281-NO0 has two PSUs, which is not uncommon at all. However, it also has intelligent power management features that make the server more efficient in terms of power usage and help it ride out a failure. The server comes with a feature known as Cold Redundancy that switches the extra PSU to standby mode when the system load is under 40%, saving on power costs. The system also has SCMP (Smart Crisis Management/Protection): if one PSU has an issue, the system drops to a lower power mode while the PSU is repaired or replaced.
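
To get a feel for when Cold Redundancy would kick in on a given load, the PSU draw can be watched from the OS. The 40% threshold comes from GIGABYTE's description above; the ipmitool call and its output format are assumptions about a DCMI-capable BMC rather than anything specific to this server, so treat the snippet as a rough sketch:

    # Rough sketch: poll chassis power draw via ipmitool (assumes a DCMI-capable
    # BMC and ipmitool installed) and flag when load falls below the 40% mark at
    # which Cold Redundancy would idle the second PSU.
    import re
    import subprocess

    PSU_CAPACITY_W = 1600  # one 80 PLUS Platinum PSU in this system

    def current_power_w():
        out = subprocess.run(["ipmitool", "dcmi", "power", "reading"],
                             capture_output=True, text=True, check=True).stdout
        match = re.search(r"Instantaneous power reading:\s+(\d+)\s+Watts", out)
        return int(match.group(1)) if match else None

    watts = current_power_w()
    if watts is not None:
        load_pct = 100 * watts / PSU_CAPACITY_W
        state = "standby (Cold Redundancy)" if load_pct < 40 else "active"
        print(f"{watts} W ({load_pct:.0f}% of one PSU) -> second PSU likely {state}")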

GIGABYTE R281-NO0 Specifications

Form Factor 2U
Motherboard MR91-FS0
CPU 2nd Generation Intel Xeon Scalable and Intel Xeon Scalable Processors
Intel Xeon Platinum Processor, Intel Xeon Gold Processor, Intel Xeon Silver Processor and Intel Xeon Bronze Processor
CPU TDP up to 205W
Socket 2x LGA 3647, Socket P
Chipset Intel C621
Memory 24 x DIMM slots
RDIMM modules up to 64GB supported
LRDIMM modules up to 128GB supported
Supports Intel Optane DC Persistent Memory (DCPMM)
1.2V modules: 2933 (1DPC)/2666/2400/2133 MHz
Storage
Bays Front side: 24 x 2.5″ U.2 hot-swappable NVMe SSD bays
Rear side: 2 x 2.5″ SATA/SAS hot-swappable HDD/SSD bays
Drive Type SATA III 6Gb/s
SAS with an add-on SAS Card
RAID For SATA drives: Intel SATA RAID 0/1
For U.2 drives: Intel Virtual RAID On CPU (VROC) RAID 0, 1, 10, 5
LAN 2 x 1Gb/s LAN ports (Intel I350-AM2)
1 x 10/100/1000 management LAN
Expansion Slots
Riser Card CRS2131 1 x PCIe x16 slot (Gen3 x16 or x8), Full height half-length
1 x PCIe x8 slot (Gen3 x0 or x8), Full height half-length
1 x PCIe x8 slot (Gen3 x8), Full height half-length
Riser Card CRS2132 1 x PCIe x16 slot (Gen3 x16 or x8), Full height half-length, Occupied by CNV3124, 4 x U.2 ports
1 x PCIe x8 slot (Gen3 x0 or x8), Full height half-length
1 x PCIe x8 slot (Gen3 x8), Full height half-length
Riser Card CRS2124 1 x PCIe x8 slot (Gen3 x0), Low profile half-length
1 x PCIe x16 slot (Gen3 x16), Low profile half-length, Occupied by CNV3124, 4 x U.2 ports
2 x OCP mezzanine slots PCIe Gen3 x16
Type1, P1, P2, P3, P4, K2, K3
1 x OCP mezzanine slot is occupied by CNVO124, 4 x U.2 mezzanine card
I/O
Internal 2 x Power supply connectors
4 x SlimSAS connectors
2 x SATA 7-pin connectors
2 x CPU fan headers
1 x USB 3.0 header
1 x TPM header
1 x VROC connector
1 x Front panel header
1 x HDD back plane board header
1 x IPMB connector
1 x Clear CMOS jumper
1 x BIOS recovery jumper
Front 2 x USB 3.0
1 x Power button with LED
1 x ID button with LED
1 x Reset button
1 x NMI button
1 x System status LED
1 x HDD activity LED
2 x LAN activity LEDs
Rear 2 x USB 3.0
1 x VGA
1 x COM (RJ45 type)
2 x RJ45
1 x MLAN
1 x ID button with LED
Backplane Front side_CBP20O2: 24 x SATA/SAS ports
Front side_CEPM480: 8 x U.2 ports
Rear side_CBP2020: 2 x SATA/SAS ports
Bandwidth: SATA III 6Gb/s or SAS 12Gb/s per port
Power
Supply 2 x 1600W redundant PSUs
80 PLUS Platinum
AC Input 100-127V~/ 12A, 47-63Hz
200-240V~/ 9.48A, 47-63Hz
DC Output Max 1000W/ 100-127V
  • +12V/ 82A
  • +12Vsb/ 2.1A

Max 1600W/ 200-240V

  • +12V/ 132A
  • +12Vsb/ 2.1A
Environmental
Operating temperature 10°C to 35°C
Operating humidity 8-80% (non-condensing)
Non-operating temperature -40°C to 60°C
Non-operating humidity 20%-95% (non-condensing)
Physical
Dimensions (WxHxD) 438 x 87.5 x 730 mm
Weight  20kg

Design and Build

The GIGABYTE R281-NO0 is a 2U rackmount server. Across the front are 24 hot-swappable bays for NVMe U.2 SSDs. On the left side are LED indicator lights and buttons for reset, power, NMI, and ID. On the right are two USB 3.0 ports.

 

Flipping the device around to the rear, we see two 2.5″ SSD/HDD bays in the upper-left corner. Beneath the bays are the two PSUs. Running across the bottom are a VGA port, two USB 3.0 ports, two GbE LAN ports, a serial port, and a 10/100/1000 server management LAN port. Above the ports are six PCIe slots.

 

The top pops off fairly easily, giving users access to the two Intel CPUs (covered by heatsinks in the photo). Here one can see all the DIMM slots as well. This server is loaded down with NVMe, as seen by all the direct-attach cables running back to the daughterboards from the front backplane. The cables themselves are neatly laid out and don't appear to impact front-to-back airflow.

GIGABYTE R281-NO0 Configuration

CPU 2 x Intel Xeon Platinum 8280
RAM 384GB of 2933MHz
Storage 12 x Micron 9300 NVMe 3.84TB

Performance

SQL Server Performance

StorageReview’s Microsoft SQL Server OLTP testing protocol employs the current draft of the Transaction Processing Performance Council’s Benchmark C (TPC-C), an online transaction processing benchmark that simulates the activities found in complex application environments. The TPC-C benchmark comes closer than synthetic performance benchmarks to gauging the performance strengths and bottlenecks of storage infrastructure in database environments.

Each SQL Server VM is configured with two vDisks: 100GB volume for boot and a 500GB volume for the database and log files. From a system resource perspective, we configured each VM with 16 vCPUs, 64GB of DRAM and leveraged the LSI Logic SAS SCSI controller. While our Sysbench workloads tested previously saturated the platform in both storage I/O and capacity, the SQL test looks for latency performance.

This test uses SQL Server 2014 running on Windows Server 2012 R2 guest VMs, and is stressed by Dell’s Benchmark Factory for Databases. While our traditional usage of this benchmark has been to test large 3,000-scale databases on local or shared storage, in this iteration we focus on spreading out four 1,500-scale databases evenly across our servers.

SQL Server Testing Configuration (per VM)

  • Windows Server 2012 R2
  • Storage Footprint: 600GB allocated, 500GB used
  • SQL Server 2014
    • Database Size: 1,500 scale
    • Virtual Client Load: 15,000
    • RAM Buffer: 48GB
  • Test Length: 3 hours
    • 2.5 hours preconditioning
    • 30 minutes sample period

For our transactional SQL Server benchmark, the R281-NO0 posted an aggregate score of 12,645 TPS, with individual VMs ranging from 3,161.1 TPS to 3,161.5 TPS.
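
The aggregate figure is simply the four per-VM results summed; a quick back-of-the-envelope check using only the reported per-VM range:

    # The aggregate TPC-C score is the sum of the four per-VM results.
    low, high = 3161.1, 3161.5      # reported per-VM minimum and maximum
    print(4 * (low + high) / 2)     # ~12,645 TPS, consistent with the aggregate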

 

For SQL Server average latency, the server posted 1ms both in aggregate and for each individual VM.

 

Sysbench MySQL Performance

Our first local-storage application benchmark consists of a Percona MySQL OLTP database measured via SysBench. This test measures average TPS (Transactions Per Second), average latency, and average 99th percentile latency as well.

Each Sysbench VM is configured with three vDisks: one for boot (~92GB), one with the pre-built database (~447GB), and the third for the database under test (270GB). From a system resource perspective, we configured each VM with 16 vCPUs, 60GB of DRAM and leveraged the LSI Logic SAS SCSI controller.

Sysbench Testing Configuration (per VM)

  • CentOS 6.3 64-bit
  • Percona XtraDB 5.5.30-rel30.1
    • Database Tables: 100
    • Database Size: 10,000,000
    • Database Threads: 32
    • RAM Buffer: 24GB
  • Test Length: 3 hours
    • 2 hours preconditioning 32 threads
    • 1 hour 32 threads
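
For readers who want to reproduce something comparable, the configuration above maps roughly onto a standard sysbench invocation. This is a hedged sketch assuming sysbench 1.0.x and its bundled oltp_read_write script; the host, credentials, and exact workload script are placeholders, not StorageReview's actual harness:

    # Hedged sketch: an OLTP run roughly matching the configuration above
    # (100 tables x 10,000,000 rows, 32 threads). Host and credentials are placeholders.
    import subprocess

    common = [
        "sysbench", "oltp_read_write",
        "--db-driver=mysql", "--mysql-host=127.0.0.1",
        "--mysql-user=sbtest", "--mysql-password=sbtest",
        "--tables=100", "--table-size=10000000",
    ]

    subprocess.run(common + ["prepare"], check=True)                             # build the dataset
    subprocess.run(common + ["--threads=32", "--time=7200", "run"], check=True)  # 2 hr preconditioning
    subprocess.run(common + ["--threads=32", "--time=3600", "run"], check=True)  # 1 hr measured run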

With the Sysbench OLTP the GIGABYTE saw an aggregate score of 19,154.9 TPS.

With Sysbench latency, the server had an average of 13.37ms.

In our worst-case scenario (99th percentile) latency, the server saw 24.53ms for aggregate latency.

VDBench Workload Analysis

When it comes to benchmarking storage arrays, application testing is best, and synthetic testing comes in second place. While not a perfect representation of actual workloads, synthetic tests do help to baseline storage devices with a repeatability factor that makes it easy to do apples-to-apples comparison between competing solutions. These workloads offer a range of different testing profiles ranging from “four corners” tests, common database transfer size tests, as well as trace captures from different VDI environments. All of these tests leverage the common vdBench workload generator, with a scripting engine to automate and capture results over a large compute testing cluster. This allows us to repeat the same workloads across a wide range of storage devices, including flash arrays and individual storage devices.

Profiles:

  • 4K Random Read: 100% Read, 128 threads, 0-120% iorate
  • 4K Random Write: 100% Write, 64 threads, 0-120% iorate
  • 64K Sequential Read: 100% Read, 16 threads, 0-120% iorate
  • 64K Sequential Write: 100% Write, 8 threads, 0-120% iorate
  • Synthetic Database: SQL and Oracle
  • VDI Full Clone and Linked Clone Traces
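
As a concrete example of what one of these profiles looks like in practice, the sketch below writes a vdbench parameter file for the 4K random read corner (128 threads; iorate is simply set to max here rather than stepping 0-120%) and launches it. The device path and vdbench location are placeholders, and this is not StorageReview's actual automation:

    # Hedged sketch: generate and run a vdbench parameter file for the 4K random
    # read profile above. /dev/nvme0n1 and ./vdbench are placeholder assumptions.
    import subprocess

    params = "\n".join([
        "sd=sd1,lun=/dev/nvme0n1,openflags=o_direct",
        "wd=wd1,sd=sd1,xfersize=4k,rdpct=100,seekpct=100",
        "rd=rd1,wd=wd1,iorate=max,elapsed=300,interval=5,threads=128",
    ]) + "\n"

    with open("rand_read_4k.vdb", "w") as f:
        f.write(params)

    subprocess.run(["./vdbench", "-f", "rand_read_4k.vdb"], check=True)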

With random 4K read, the GIGABYTE R281-NO0 started at 539,443 IOPS at 114.8µs and went on to peak at 5,326,746 IOPS at a latency of 238µs.

 

4k random write showed sub 100µs performance until about 3.25 million IOPS and a peak score of 3,390,371 IOPS at a latency of 262.1µs.

 

For sequential workloads we looked at 64k. For 64K read we saw peak performance of about 640K IOPS or 4GB/s at about 550µs latency before dropping off some.

 

64K write saw a sub 100µs performance until about 175K IOPS or 1.15GB/s and went on to peak at 259,779 IOPS or 1.62GB/s at a latency of 581.9µs before dropping off some.

 

Our next set of tests are our SQL workloads: SQL, SQL 90-10, and SQL 80-20. Starting with SQL, the GIGABYTE had a peak performance of 2,345,547 IOPS at a latency of 159.4µs.

 

With SQL 90-10 we saw the server peak at 2,411,654 IOPS with a latency of 156.1µs.

 

Our SQL 80-20 test had the server peak at 2,249,683 IOPS with a latency of 166.1µs.

Next up are our Oracle workloads: Oracle, Oracle 90-10, and Oracle 80-20. Starting with Oracle, the GIGABYTE R281-NO0 peaked at 2,240,831 IOPS at 165.3µs for latency.

 

Oracle 90-10 saw a peak performance of 1,883,800 IOPS at a latency of 136.2µs.

In Oracle 80-20 the server peaked at 1,842,053 IOPS at 139.3µs for latency.

 

Next, we switched over to our VDI clone test, Full and Linked. For VDI Full Clone (FC) Boot, the GIGABYTE peaked at 1,853,086 IOPS and a latency of 198µs.

Looking at VDI FC Initial Login, the server started at 83,797 IOPS at 86.7µs and went on to peak at 808,427 IOPS with a latency of 305.9µs before dropping off some.

 

VDI FC Monday Login saw the server peak at 693,431 IOPS at a latency of 207.6µs.

 

For VDI Linked Clone (LC) Boot, the GIGABYTE server peaked at 802,660 IOPS at 194µs for latency.

Looking at VDI LC Initial Login, the server saw a peak of 409,901 IOPS with 195.2µs latency.

Finally, VDI LC Monday Login had the server with a peak performance of 488,516 IOPS with a latency of 273µs.

Conclusion

The 2U GIGABYTE R281-NO0 is an all-NVMe server built for performance. The server leverages two second-generation Intel Xeon Scalable CPUs and supports up to 12 DIMMs per socket. Depending on the CPU choice, it supports DRAM speeds up to 2933MHz as well as Intel Optane PMEM. Users can have up to 3TB of DRAM, or a larger memory footprint with Optane. The storage setup is highly configurable, with the build we reviewed supporting 24 2.5-inch NVMe SSDs. An interesting power feature is Cold Redundancy, which switches the extra PSU to standby mode when the system load is under 40%, saving power costs.

For performance testing we ran our Applications Analysis Workloads as well as our VDBench Workload Analysis. For Applications Analysis Workloads we started off with SQL Server. Here we saw an aggregate transactional score of 12,645 TPS with an average latency of 1ms. Moving on to Sysbench, the GIGABYTE server gave us an aggregate score of 19,154 TPS, an average latency of 13.37ms, and a worst-case scenario of only 24.53ms.

In our VDBench Workload Analysis the server came off with some strong, impressive numbers. Peak highlights include 5.3 million IOPS for 4K read, 3.4 million IOPS for 4K write, 4GB/s for 64K read, and for 64K write of 1.62GB/s. For our SQL workloads the server hit 2.3 Million IOPS, 2.4 million IOPS for 90-10, and 2.3 million IOPS for 80-20. With Oracle we saw 2.2 million IOPS, 1.9 million IOPS for Oracle 90-10, and 1.8 million IOPS for 80-20. For our VDI Clone tests we saw 1.9 million IOPS for Boot, 808K IOPS for Initial Login, and 693K IOPS for Monday Login for Full Clone. For Linked Clone we saw 803K IOPS for Boot, 410K IOPS for Initial Login, and 489K IOPS for Monday Login.

The GIGABYTE R281-NO0 is a powerhouse of a server, capable of supporting a wide range of flash technologies. Being built around Intel's 2nd Generation Xeon Scalable hardware, it also benefits from the newer CPUs' support for Optane PMEM. The server offers plenty of configurability on the storage end and some nifty power benefits. We're most enamored with the 24 NVMe SSD bays, of course; anyone with a high-performance storage need will be as well. This server from GIGABYTE is well designed to be a fantastic storage-heavy server for a variety of use cases.

GIGABYTE R281-NO0


Quantum To Acquire WD’s ActiveScale Portfolio

Quantum has announced an agreement with Western Digital Technologies to acquire its ActiveScale object storage product line. WD offers a range of products under the ActiveScale name, including the ActiveScale P100 appliance and the ActiveScale X100 hybrid cloud solution. The acquisition also includes WD's object storage software and erasure coding technology, allowing Quantum to expand in the object storage market. Both companies indicate that this will be an easy transition for customers and stakeholders, as Quantum will continue to support ActiveScale products and has promised to keep enhancing the product line.


Quantum doesn’t expect too much of an initial financial impact to its business operations, and will give additional guidance with the release of the fiscal Q4 2020 financial results.  The transaction is slated to close by March 31, 2020, though this is subject to the “satisfaction of customary closing conditions.” No financial terms of the deal were revealed.

Quantum indicates the following features that make object storage useful for a range of datasets:

  • Massive Scalability: Store, manage and analyze billions of objects and exabytes of capacity.
  • Highly Durable and Available: ActiveScale object storage offers up to 19 nines of data durability using patented erasure coding protection technologies.
  • Easy to Manage at Scale: Because object storage has a flat namespace (compared to a hierarchical file system structure), managing billions of objects and hundreds of petabytes of capacity is easier than using traditional network-attached storage. This reduces operational expenses.
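
The durability claim comes down to erasure coding. As a rough, hedged illustration of the idea (a generic k data + m parity layout, not ActiveScale's actual geometry or durability model), the failure tolerance and capacity overhead fall out of two numbers:

    # Generic k+m erasure coding arithmetic -- illustrative only, not the actual
    # ActiveScale shard layout or its durability calculation.
    k, m = 16, 4                # assumed example: 16 data shards + 4 parity shards
    overhead = (k + m) / k      # raw capacity consumed per unit of usable data
    print(f"tolerates the loss of any {m} of {k + m} shards at {overhead:.2f}x raw-to-usable overhead")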

Quantum Object Storage

WD ActiveScale Series

StorageReview Podcast #31

The team got together for the weekly podcast to discuss the news of the week highlighted by Dell EMC, VMware, QNAP, Samsung, NAKIVO and much more. We give many updates on the YouTube channel and Tom discusses almost dying twice on his journey to Israel. In Adam’s Movie Corner we discuss American Psycho; his recommendation this week is a more current offering, Apostle, which is streaming on Netflix.


MayaData Raises $26,000,000 from Investors

Today, MayaData announced it has received additional funding totaling $26 million from investors including AME Cloud Ventures, DataCore Software, and Insight Partners. MayaData was founded in 2011 under the name CloudByte and operated under that name until 2017, when it rebranded itself as MayaData. In both incarnations, the company has specialized in storage for enterprise container environments such as Kubernetes.


DataCore Software is investing more than just money in MayaData. It is also transferring personnel and contributing intellectual properties to the younger company. DataCore was founded in 1998 and has focused on data storage throughout its tumultuous history.

MayaData will be using the new round of investment funding and other assets to continue improving its OpenEBS Enterprise Platform. MayaData OpenEBS is open-source container attached storage (CAS) software intended to aid in the deployment and management of stateful applications in any Kubernetes environment.
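
In practice, applications consume OpenEBS the same way they consume any Kubernetes storage: through a StorageClass and a PersistentVolumeClaim. The sketch below uses the official Kubernetes Python client; the "openebs-hostpath" class name is an assumption about a default OpenEBS install rather than anything specific to the Enterprise Platform:

    # Hedged sketch: request an OpenEBS-backed volume through the standard
    # Kubernetes API. The StorageClass name is an assumed default.
    from kubernetes import client, config

    config.load_kube_config()

    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="demo-openebs-pvc"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            storage_class_name="openebs-hostpath",   # assumed OpenEBS StorageClass
            resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
        ),
    )

    client.CoreV1Api().create_namespaced_persistent_volume_claim(namespace="default", body=pvc)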

As part of the investment deal, MayaData has added two new members to the company's advisory board. The first is Dan Duet, former CIO of Goldman Sachs. The second is Jay Kidd, former CTO of NetApp and Brocade Communications.

MayaData


Samsung Launches 3rd Gen HBM2E

Samsung Electronics Co., Ltd. has released the third generation of its High Bandwidth Memory 2E (HBM2E), also known as "Flashbolt." The latest generation of HBM2E offers 16GB per stack, making it well suited for HPC systems, supercomputers, AI-driven data analytics, and state-of-the-art graphics systems. The previous-generation Aquabolt will continue to be produced while Flashbolt is in production.


The previous generation, Aquabolt, had a capacity of 8GB per stack. Flashbolt hits 16GB by vertically stacking eight layers of 10nm-class (1y) 16-gigabit (Gb) DRAM dies on top of a buffer chip. Each stack is interconnected through a precise arrangement of more than 40,000 through-silicon via (TSV) microbumps, with each 16Gb die containing over 5,600 of these microscopic holes. For performance, Samsung states that the latest generation can hit a data transfer speed of 3.2 gigabits per second per pin and memory bandwidth of 410GB/s per stack. This is quite an improvement over the last generation's 307GB/s.
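
Those two figures are consistent once the interface width is accounted for: HBM2/HBM2E uses a 1,024-bit interface per stack, so the per-stack bandwidth follows directly from the per-pin rate.

    # Per-stack bandwidth from the quoted per-pin rate and HBM2E's 1,024-bit interface.
    pin_rate_gbps = 3.2        # Gb/s per pin, as quoted
    interface_bits = 1024      # HBM2/HBM2E interface width per stack
    print(pin_rate_gbps * interface_bits / 8, "GB/s per stack")  # 409.6, i.e. ~410GB/s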

Availability 

Samsung expects to begin volume production during the first half of 2020.

Samsung


DataCore SANsymphony Integrates With Veeam

Today DataCore Software announced that it has integrated its SANsymphony software-defined storage platform with Veeam Software's Universal Storage API plug-in. According to the two companies, this integration will enable customers of Veeam Backup & Replication to take snapshots and backups of VMware datastores residing on SANsymphony virtual storage pools with minimal impact on production workloads. The combination of the two technologies is said to automate, simplify, and centralize data protection, while enabling greater choice of storage technologies.


Data centers aren't always homogenized, with the same vendor and brand-new equipment everywhere, even though the big players like to show that off along with perfectly dressed network cables. The reality many data centers see is a diversity of equipment that has to meet both primary and secondary storage needs. That varied hardware can run into compatibility issues that need to be addressed through software. This is where DataCore and Veeam come in.

The combined technology from DataCore and Veeam allows users to take low-impact snapshots and swift backups using the same integrated data protection services, regardless of the make or model of the underlying storage hardware. This holds even if the underlying hardware does not support the Universal Storage API or lacks snapshot functionality, as the DataCore software takes care of those actions. Furthermore, through this integration, SANsymphony nodes can act as a Veeam Ready Repository for backup storage. The backups can then be migrated onto lower-cost, elastic object storage through Veeam Cloud Tier as part of the Scale-out Backup Repository.

DataCore

Veeam


VMware Updates Per-CPU Pricing

This week, VMware announced updates to its per-CPU pricing model. VMware has gone from pricing per CPU to pricing per 32 cores, with an additional license required for CPUs that have more than 32 cores. This seems to put newer processors with up to 64 cores at a cost disadvantage, with the onus landing more on AMD and its 2nd gen EPYC CPUs than on Intel's 2nd gen Xeon Scalable.


According to VMware, the company is trying to bring its license pricing more in line with the software industry standard of pricing based on core count rather than CPU socket. Standardized pricing makes it easier for potential customers to do a price comparison, though per-socket was also a fairly easy comparison to make. The 32-core limit was derived from where core counts sit in the current CPU market. While it looks as though the change will have a negative impact on those using high-core-count CPUs, particularly those from AMD, VMware doesn't believe it will be too large of an impact given what is currently being leveraged in the market today.

VMware goes on to state that the vast majority of its existing customers won't be impacted by this change, as most run CPUs with 32 cores or fewer. Those that have purchased a software license for a CPU with more than 32 cores prior to April 30, 2020 will be eligible for additional free per-CPU licenses to cover the CPUs on that server. While the change seems abrupt to some, it is in line with other VMware products such as VMware Enterprise PKS and the VMware NSX Data Center subscription.
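
The counting rule itself is just a per-socket ceiling division, which makes the impact easy to estimate; a quick sketch of the model as described:

    # License counting under the announced model: one license covers up to
    # 32 cores on a single CPU; larger CPUs need additional licenses.
    from math import ceil

    def licenses_needed(cores_per_cpu: int, sockets: int) -> int:
        return ceil(cores_per_cpu / 32) * sockets

    print(licenses_needed(28, 2))  # dual 28-core Xeons: 2 licenses (unchanged)
    print(licenses_needed(64, 2))  # dual 64-core EPYCs: 4 licenses (doubled)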

The new pricing changes go into effect on April 2, 2020.

VMware


Dynatrace Announces Updates To Its Infrastructure Monitoring & Expands Kubernetes Support

Today at the Perform 2020 conference, Dynatrace announced the next generation of its Infrastructure Monitoring module in its all-in-one Software Intelligence Platform. The latest generation platform will include upgrades such as enhanced AI, expanded out-of-the-box observability and the ability to create custom metrics from log events. The company has also announced expanded support for Kubernetes with its explainable AI engine, Davis, now automatically ingesting additional Kubernetes events and metrics, enabling it to deliver precise answers in real time about performance issues and anomalies across the full stack of Kubernetes clusters, containers, and workloads.


Over the past few years, many companies have moved away from traditional data centers to some form of hybrid and/or multi-cloud approach, and a majority are also leveraging microservices. That being the case, it is becoming more and more difficult to rely on existing monitoring tools, which leads many companies to develop their own tools and, in turn, can introduce a slew of issues. Dynatrace has made enhancements to its Infrastructure Monitoring module to address this.

New enhancements to the Dynatrace Infrastructure Monitoring module include:

  • Extended out-of-the-box observability for cloud-native environments – Dynatrace now automatically ingests data from additional sources, including new AWS and Azure services, Kubernetes-native events, Prometheus OpenMetrics and Spring Micrometer metrics. This provides out-of-the-box, comprehensive observability at scale, plus more precise answers to enable faster problem resolution, improved productivity and rapid innovation in multi-cloud environments.
  • Custom metrics and events from log monitoring – The Dynatrace platform can now create custom metrics and events based on log data so organizations can extend infrastructure observability to any application, script or process that writes to a log file. This facilitates tool consolidation and reduces the cost and effort involved in manual administration.
  • Smarter infrastructure monitoring – The Dynatrace Davis AI engine now automatically provides thresholds and baselining algorithms for all infrastructure performance and reliability metrics, extending root-cause analysis and enabling organizations to easily scale infrastructure monitoring without manual configuration in dynamic cloud environments. As a result, organizations gain access to precise answers in real time, supporting faster innovation while ensuring infrastructure performance and availability.
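
As a concrete, hedged illustration of the Prometheus OpenMetrics ingestion mentioned above: any service that exposes metrics in that format, for example with the standard Python client below, becomes a source the platform can pick up. Nothing here uses a Dynatrace-specific API; how the metrics are scraped or ingested is handled on the Dynatrace side.

    # Generic Prometheus/OpenMetrics exporter using the official Python client.
    # This only shows the exposition format the announcement refers to.
    import random
    import time

    from prometheus_client import Counter, Gauge, start_http_server

    requests_total = Counter("app_requests_total", "Requests handled")
    queue_depth = Gauge("app_queue_depth", "Items waiting in the work queue")

    start_http_server(8000)  # metrics exposed at http://localhost:8000/metrics

    while True:
        requests_total.inc()
        queue_depth.set(random.randint(0, 50))
        time.sleep(5)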

Containers have really taken off in the last several years. Over two-thirds of organizations use some form of containers, and we can only expect that number to grow. Kubernetes is a container orchestration system that has outpaced all the others and is by far the favorite. As Kubernetes becomes more omnipresent, managing it, or simply pulling up a dashboard to see some metrics, can become difficult. Building on the above, Dynatrace has enhanced its platform to add further support for Kubernetes, making the full stack observable and adding AI to the platform.

Key enhancements to the Dynatrace platform’s support for Kubernetes environments include:

  • Precise, AI-powered answers – Davis has been enriched with the ability to ingest additional Kubernetes events and metrics, including state changes, workload changes and critical events across clusters, containers and runtimes. As a result, Dynatrace better understands all dependencies and relationships across the entire Kubernetes stack, from clusters to containers, and the workloads running inside. This further enables Dynatrace to provide full-stack observability at scale, and deliver more precise, AI-powered answers to dramatically simplify Kubernetes roll-out and management.
  • New cloud application and microservice analysis capabilities – With Dynatrace, organizations can now understand and optimize Kubernetes resource utilization, enabling administrators and application owners to identify and solve performance issues and improve business outcomes proactively.
  • Extended automatic container instrumentation – Dynatrace now automatically discovers, instruments and maps heterogeneous container technologies within Kubernetes environments, including implementations based on Docker, CRI-O and containerd. This makes it easy to deploy and manage even the largest containerized environments. New container resource usage analysis also provides broader coverage for the range of container runtimes used by organizations.

Dynatrace



QNAP ES2486dc 24-drive All-flash NAS Announced

QNAP Systems has announced the ES2486dc, a 24-bay all-flash storage solution and the newest addition to the company's enterprise ZFS NAS series. Designed for mission-critical file servers, virtualization servers, and commercial cloud applications, the new ES2486dc is QNAP's first high-availability all-flash NAS and is highlighted by dual controllers with Intel Xeon D-2142IT processors and 10GbE connectivity. The new storage solution also leverages the QES 2.1.1 operating system, which uses ZFS and supports block-based inline data deduplication and inline compression.


The ES2486dc uses a dual active-active controller architecture to deliver "near-zero downtime high availability." Each controller is equipped with four 10GbE SFP+ LAN ports and eight RDIMM slots, supporting up to a generous 512GB of memory. Battery-protected DRAM write caching protects in-flight data, reducing the risk of data loss during unexpected shutdowns and failures. The ES2486dc also features:

  • Two PCIe slots support 10GbE/25GbE/40GbE network cards to boost virtualization and other bandwidth-demanding applications.
  • An optional SAS expansion card for connecting multiple EJ1600 v2 expansion enclosures to expand potential storage capacity to over 1 PB.
  • An optional QDA-SA3 6Gbps SAS-to-SATA drive adapter, which allows SATA 6Gbps SSDs to be used in the 2.5-inch SAS drive bays
  • Support for VMware, Microsoft and Citrix virtualization, plus SnapSync and VMware Site Recovery Manager (SRM), to offer an enterprise-class remote backup and disaster recovery solution for virtual applications

The QES operating system is optimized for all-flash storage arrays and performs efficient data reduction with inline data deduplication and inline data compression. QNAP indicates that this will be beneficial for reducing I/O and SSD storage consumption, which can drastically extend the lifespan of an SSD.
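
QES is QNAP's own ZFS-based operating system, but the knobs described here correspond to standard ZFS dataset properties. As a hedged sketch on a generic ZFS system (pool and dataset names are placeholders, and QES's actual management interface may differ):

    # Illustrative only: enable inline compression and deduplication on a generic
    # ZFS dataset and check the resulting ratios. "tank/vms" is a placeholder.
    import subprocess

    def run(*cmd):
        subprocess.run(cmd, check=True)

    run("zfs", "set", "compression=lz4", "tank/vms")
    run("zfs", "set", "dedup=on", "tank/vms")
    run("zfs", "get", "compressratio", "tank/vms")   # savings from inline compression
    run("zpool", "get", "dedupratio", "tank")        # savings from deduplication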

QNAP ES2486dc Specifications

CPU Intel® Xeon® D-2142IT 8-core 1.90 GHz processor (burst up to 3.0 GHz) per controller
CPU Architecture 64-bit x86
Floating Point Unit  Yes
Encryption Engine  (AES-NI)
System Memory 64 GB RDIMM DDR4 ECC (4 x 16 GB) per controller
Maximum Memory 512 GB (8 x 64GB)
Memory Slot 8 x R-DIMM/LR-DIMM DDR4
IMPORTANT: You can only use one type of dual in-line memory module (DIMM) at a time. Do not use registered DIMM (RDIMM) with load-reduction DIMM (LRDIMM) memory.
Flash Memory 4GB (Dual boot OS protection)
Drive Bay 24 x 2.5-inch
SAS 12Gb/s, backward-compatible to SAS 6Gb/s
Drive Compatibility 2.5-inch bays:
2.5-inch SAS/SATA hard disk drives
2.5-inch SAS/SATA solid-state drives
1. When installing SATA HDD/SSD, QDA-SA3 adapter is required.
2. The QDA-SA3 adapter is designed for QNAP Enterprise ZFS NAS and allows the use of a SATA 6Gbps drive in a 2.5-inch SAS drive bay
Hot-swappable  Yes
SSD Cache Acceleration Support Read cache only
Cache for Copy To Flash (C2F) 1 per each controller (64GB)
Gigabit Ethernet Port (RJ45) 3 for each controller (one for remote management)
10 Gigabit Ethernet Port 4 x 10GbE SFP+ for each controller
Jumbo Frame  Yes
PCIe Slot 2
Slot 1: PCIe Gen 3 x8 (CPU)
Slot 2: PCIe Gen 3 x8 (CPU)
USB 3.0 port 2 per each controller
Form Factor 2U Rackmount
LED Indicators System Power LED (Green): on/off
System Status (Green/ Orange): in operation, system errors, low power, degraded RAID mode, memory failure, fan/power supply failure, system/disk temperature too high, storage pool reaching threshold value, system performing take-over, power supply unit unplugged
LCD Status Display (Two-digit number): Status of JBOD connection
Buttons Power, Reset
Dimensions (HxWxD) 3.48 × 19.02 × 21.46 inch
Weight (Net) 60.43 lbs
Weight (Gross) 72.38 lbs
Operating temperature 0 – 40 °C (32°F – 104°F)
Relative Humidity 5-95% RH non-condensing, wet bulb: 27˚C (80.6˚F)
Power Supply Unit Redundant/ Hot Swap Power Vin:90~264VAC;700W
Power Consumption: Operating Mode, Typical 579.59 W
Fan Hot-swappable fan module (60*60*38mm; 16000RPM/12v/2.8A x 3)
Sound Level 60.9 db(A)
System Warning Buzzer
Safety Standard FCC Class A (USA only)
CE Mark ( EN55022 Class A, EN55024)
EN60950
BSMI
VCCI
CB
LVD

QNAP ES2486dc


Nexsan Announces QLC-Powered E-Series 18F Storage Platform

Nexsan, part of the StorCentric family, released its new E-Series 18F (E18F) storage platform. The E18F leverages quad-level cell (QLC) NAND technology that should allow for better performance than HDDs while providing a cost-effective flash method for workloads. Nexsan also announced that it was adding RoCE and private blockchain technology to its Assureon solution.


We've said quite a bit about QLC since it began shipping from major vendors in 2018. QLC has downsides, but it can primarily be looked at as an HDD replacement. It delivers much better performance than HDDs, though not as much as traditional flash, and it comes in at a much lower cost than traditional flash. For those looking to move to all-flash arrays at a lower price point, QLC looks to be the answer at the moment.

Noting this, Nexsan’s E-Flex architecture is taking advantage of the technology. First, E-Flex enables customers to start with the footprint they need and grow over time up to several petabytes. The E18F leverages QLC for applications using real-time analytics and big data. The platform can also be used for AI and ML applications as well as content delivery.

E18F features include:

  • High speed storage connectivity over Fibre Channel, iSCSI or SAS connectivity and seamless interoperability
  • High availability, non-disruptive upgrades, snapshots and asynchronous replication across data centers using 10GbE
  • Key third-party integrations including Veeam, Commvault, VMware, Windows and Xen
  • Active Drawer Technology allows drives to remain active when the drawer is open for hot-swap drive management

In other Nexsan news, the company has upgraded its active data vault storage solution, Assureon, to version 8.3. Version 8.3 includes a private blockchain that is said to protect and secure digital assets by storing data in an immutable data structure, using cryptography to secure transactions, and relying on an automated integrity audit at the redundant sites to maintain data integrity and transparency. This blockchain is combined with Assureon's file fingerprinting and asset serialization process to provide maximum security for long-term data protection, retention, and compliance.
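
Nexsan hasn't published the internals, but the general idea of pairing file fingerprints with an immutable, hash-linked record structure can be sketched generically; the snippet below is an illustration of the concept, not Assureon's implementation:

    # Generic hash-chained fingerprint ledger -- each record commits to a file's
    # SHA-256 digest and to the previous record, so tampering anywhere breaks the chain.
    import hashlib
    import json

    def fingerprint(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def record_hash(record):
        body = {k: record[k] for k in ("file", "sha256", "prev")}
        return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

    chain = []

    def append_record(path):
        rec = {"file": path, "sha256": fingerprint(path),
               "prev": chain[-1]["hash"] if chain else "0" * 64}
        rec["hash"] = record_hash(rec)
        chain.append(rec)

    def audit():
        prev = "0" * 64
        for rec in chain:
            if rec["prev"] != prev or rec["hash"] != record_hash(rec):
                return False
            prev = rec["hash"]
        return True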

Nexsan has also added high performance and low latency RoCE (RDMA over Converged Ethernet) to Assureon. The company claims that customers will be better suited to hit regulatory compliance with this ultra-low latency connection.

Other benefits of Assureon 8.3 include:

  • Virtual shortcuts that require zero disk space and reside purely in memory as reference points to physical files in the Assureon archive.
  • A 40Gb/s ethernet connection that provides blazing fast data retrieval from the Assureon server.
    • Data is retrieved directly from user-space with minimal involvement of the operating system and CPU.
    • In addition, zero-copy applications can retrieve data from the Assureon archive without involving the network stack.
  • Security, traceability, immutability, and visibility of data with Assureon Private Blockchain technology.

Nexsan


StorageReview Podcast #33: David Lundell, Intel

In this week's podcast we debut our new interview segment, starting with David Lundell from Intel's Client SSD division. David updates us on the momentum of QLC SSDs in the end-user space and talks a little bit about what to expect from 144-layer QLC SSDs later this year. The team also breaks down the new licensing deal from VMware, which targets CPUs over 32 cores, and Brian provides an update on Eaton. We have significant updates surrounding porta potties, we see what havoc Kevin has wreaked in the lab, and Adam updates us on the news that is starting to pick back up. Tom is up to his usual shenanigans. Adam's Movie Corner is pushed back a week due to time constraints, giving everyone's stomachs a break.



HPE ProLiant MicroServer Gen10 Plus Announced

HPE has announced the ProLiant MicroServer Gen10 Plus, a powerful entry-level server in an ultra-micro tower form factor. Organizations can customize the Gen10 Plus for on-premises, hybrid cloud, or other workloads that need data center performance. The server features the latest Intel Xeon E and Pentium processors along with 4x 1GbE onboard NICs and USB 3.2 Gen2 Type-A connectivity. At only 4.6 in (11.7 cm) tall, the Gen10 Plus is half the height of the previous generation (MicroServer Gen10) and can be placed either horizontally or vertically to fit a range of different use cases.


The new Gen10 Plus server is powered by up to four cores from the Intel Xeon E processor line and up to 32GB of 2666 MT/s DDR4 ECC UDIMM, allowing it to deliver a nice combination of performance, built-in capabilities, and cost-effectiveness for small business applications. The Gen10 Plus also supports Intel Pentium processors.

HPE ProLiant MicroServer Gen10 Plus rear

The HPE ProLiant MicroServer Gen10 Plus server offers a 4-port NIC standard with the option to upgrade with a variety of networking options for a bit more power, including 1GbE, 10GbE BASE-T and SFP+ edge cards. It also features a VGA port and a DisplayPort 1.0, as well as six USB 3.0 ports and one internal USB 2.0 port.

Looking at storage, the Gen10 Plus offers four internal 3.5″ LFF bays with SATA connectivity. The system requires a little work to get to the drives, and they are not hot-swappable.

The ProLiant Gen10 Plus also comes with a range of server utilities, including:

  • Active Health System, which provides continuous, proactive health monitoring of HPE servers.
  • Active Health System Viewer, a web-based portal that makes it easy to read AHS logs and speeds problem resolution with HPE self-repair recommendations.
  • Smart Update Manager, which keeps your servers up to date by optimizing firmware and driver updates through the Service Pack for ProLiant (SPP).
  • iLO Amplifier Pack, a free, downloadable open virtual application (OVA) that delivers the power to discover, inventory and update Gen8, Gen9 and Gen10 HPE servers.
  • HPE iLO Mobile Application, which allows you to access, deploy, and manage your server anytime from anywhere from select smartphones and mobile devices.
  • RESTful Interface Tool, which is a scripting tool to provision using RESTful API for iLO 4 to discover and deploy servers at scale.
  • Scripting Tools, which allows you to provision one to many servers using your own scripts to discover and deploy with Scripting Tool (STK) for Windows and Linux or Scripting Tools for Windows PowerShell.
  • HPE Systems Insight Manager (HPE SIM), which allows you to monitor the health of your HPE ProLiant and HPE Integrity servers and provides basic support for non-HPE servers. HPE SIM also integrates with Smart Update Manager to provide quick and seamless firmware updates.
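
To give a feel for the RESTful Interface Tool and Redfish API items above, basic system information can also be pulled straight over HTTP from the iLO's standard Redfish endpoints. The address, credentials, and /Systems/1 path below are placeholder assumptions, and the exact resource layout can vary by iLO generation:

    # Hedged sketch: read basic system info over the standard Redfish API.
    # Address, credentials, and certificate handling are placeholder assumptions.
    import requests

    ILO = "https://ilo.example.local"
    AUTH = ("admin", "password")

    root = requests.get(f"{ILO}/redfish/v1/", auth=AUTH, verify=False).json()
    system = requests.get(f"{ILO}/redfish/v1/Systems/1", auth=AUTH, verify=False).json()

    print(root.get("RedfishVersion"))
    print(system.get("Model"), system.get("PowerState"),
          system.get("MemorySummary", {}).get("TotalSystemMemoryGiB"))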

HPE ProLiant MicroServer Gen10 Plus Specifications

Processors 

Intel Xeon E-2200 Series / 9th Gen Pentium G
Model CPU Frequency Cores L3 Cache Power DDR4 SGX
Xeon E-2224 3.4 GHz 4 8 MB 71W 2666 MT/s No
Pentium G5420 3.8 GHz 2 4 MB 54W 2400 MT/s No

System

 Memory
Type HPE Standard Memory

DDR4 Unbuffered (UDIMM)

DIMM Slots Available 2
Maximum Capacity 32GB (2 x 16GB UDIMM @2666 MT/s)

NOTE: The maximum memory speed depends on processor model.

Memory Protection

ECC

Interfaces
Video 1 Rear VGA port

1 Rear DisplayPort 1.0

USB 2.0 Type-A Ports 1 total (1 internal)
USB 3.2 Gen1 Type-A Ports 4 total (4 rear)
USB 3.2 Gen2 Type-A Ports 2 total (2 front)
Network RJ-45 (Ethernet) 4
 

Industry Standard Compliance

  • ACPI V6.1 Compliant
  •  PCIe 3.0 Compliant
  •  PXE Support
  •  WOL Support
  •  EMC Class B
  •  Microsoft® Logo certifications
  •  VGA Port
  •  DP Port
  •  SMBIOS 3.1
  •  UEFI 2.6
  •  Redfish API
  •  IPMI 2.0
  •  Advanced Encryption Standard (AES)
  •  Triple Data Encryption Standard (3DES)
  •  SNMP v3
  •  TLS 1.2
  •  DMTF Systems Management Architecture for Server Hardware Command Line Protocol (SMASH CLP)
  •  Active Directory v1.0
  •  ASHRAE A2
  •  UEFI (Unified Extensible Firmware Interface Forum)
  •  USB 2.0 Compliant
  •  USB 3.2 Compliant
  •  SATA 6Gb/s
Security
  • UEFI Secure Boot and Secure Start support
  • Immutable Silicon Root of Trust
  • FIPS 140-2 validation
  • Common Criteria certification
  • Configurable for PCI DSS compliance
  • Ability to rollback firmware
  • Secure erase of NAND/User data
  • TPM (Trusted Platform Module) 2.0 option
  • Front bezel lock feature, standard
  • Padlock slot, standard
  • Kensington Lock slot, standard
  • Power cord clip, standard
Others
Power Supply One (1) 180 Watts , non-redundant External Power Adapter
Server Power Cords All pre-configured models ship standard with one or more country-specific 6 ft/1.83m C5 power cords depending on models.
System Fans One (1) non-redundant system fan shipped standard

Physical and power

Dimensions (H x W x D) (with feet) 4.68 x 9.65 x 9.65 in (11.89 x 24.5 x 24.5 cm)
Weight (approximate) Maximum: 15.87 lb (7.2 kg) (four drives, two DIMMs, expansion board + iLO Enablement Kit)
Minimum: 9.33 lb (4.23 kg) (one DIMM installed; no drives, expansion board, or iLO Enablement Kit)
Input Requirements
(per power supply)
Rated Line Voltage  100 V AC to 240 V AC
Rated Input Current 2.5 A (at 90 V AC)
Rated Input Frequency 50 to 60 Hz
Rated Input Power 180W Power Supply

Availability

HPE is currently offering a couple of configurations in its online store, including two base configurations without drives. One is powered by an Intel Xeon E-2224 processor with 16GB of memory; the other comes with an Intel Pentium G5420 processor and 8GB of memory. All of the Gen10 Plus systems come with four LFF drive bays that are not hot-swappable. HPE is accepting quote inquiries now, with volume shipments expected soon.

HPE ProLiant MicroServer Gen10 Plus


New IBM FlashSystem Family Introduced

Today IBM introduced a new FlashSystem Family with the idea of keeping things simple while meeting the storage needs for entry-level to high-end systems. The new family leverages a common software platform that is said to work across all deployment types (bare metal, virtualized, container, and hybrid multicloud) and will support customers’ existing storage devices even if they are not IBM.


Making off-the-shelf storage doesn't make the same sense it made years ago; customer needs now vary wildly. Even within the same organization, applications may have several different requirements around entry point, performance, scalability, data services, functionality, and availability. Vendors need to respond to those varied needs, though at times this leads to added complexity or higher costs and causes a new set of issues. IBM is looking to address all of the above, as well as compatibility between differing existing gear.

The new IBM FlashSystem family looks to address all of the above while being extremely simple to use. The family leverages IBM FlashCore Modules (FCMs) that are said to deliver data compression and FIPS 140-2 data-at-rest encryption without penalizing performance. The new FCMs have usable capacities up to 38.4TB, or a maximum of 4PB in only 2U of space. The FCMs can be used in older FlashSystem models as well, including the FlashSystem 9100 and the Storwize V5100 and V7000 Gen3. Also on the storage side, IBM is leveraging storage-class memory drives from Intel and Samsung as a tier for the new FlashSystem family for an added level of performance. As with the FCMs, the storage-class memory can be used as an upgrade in the same systems listed above.

For management, IBM uses its Spectrum Virtualize software foundation, which works with the storage options above as well as over 500 other heterogeneous storage systems. IBM also leverages AI-driven software in IBM Easy Tier, which automatically places data on the appropriate tier when it needs to be there. As with most tiering, hot data is moved to faster media tiers while cooler and cold data is moved off to more cost-effective media. Other AI-based software includes IBM Storage Insights for monitoring, alerting, reporting, and support.
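
IBM doesn't spell out Easy Tier's heuristics here, but the placement policy described, hot extents promoted to SCM or flash and cold extents demoted to cheaper media, can be illustrated with a generic sketch (not IBM's actual algorithm):

    # Generic illustration of frequency-based tier placement -- not IBM Easy Tier's
    # actual logic, just the hot-up / cold-down policy the article describes.
    def place(extents, hot_iops=500, warm_iops=50):
        """extents: mapping of extent id -> recent IOPS. Returns extent -> tier."""
        placement = {}
        for extent, iops in extents.items():
            if iops >= hot_iops:
                placement[extent] = "tier0-scm"          # storage-class memory
            elif iops >= warm_iops:
                placement[extent] = "tier1-flashcore"    # FlashCore Modules
            else:
                placement[extent] = "tier2-capacity"     # cost-optimized media
        return placement

    print(place({"ext-001": 1200, "ext-002": 80, "ext-003": 3}))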

Benefits include:

  • A single approach to extraordinary data availability: “six nines” availability across all price points, 2- and 3-site replication, cross-site high availability configurations, and the option for guaranteed 100% data availability. Data-at-rest encryption coupled with tape and cloud “air gap” copies, malware detection, and application-aware snapshots help provide a dependable cyber resiliency solution.
  • A single software foundation: IBM Spectrum Virtualize that provides enterprise-class data services from entry enterprise to high-end enterprise deployments on-prem and with consistent capability for hybrid multicloud. IBM Spectrum Virtualize provides the foundation for simplifying storage by delivering these same services for existing storage together with the ability for nondisruptive data movement, and supports transformation by enabling cloud and container deployments with all supported storage.
  • Consistent APIs for automation delivered on-prem and in the cloud and supporting all deployment approaches: bare metal, virtualized, containerized, and for hybrid multicloud.
  • Cloud-based, AI-infused management, storage analytics, and integrated proactive support: IBM Storage Insights delivers heterogeneous storage management made simple. IBM and select non-IBM storage together with cloud storage managed by IBM Spectrum Virtualize for Public Cloud can all be managed through a single pane of glass. AI-based analytics offers insights into best practice recommendations. For IBM storage, IBM Storage Insights provides streamlined access to support to help resolve issues even more quickly.

From a new storage solution standpoint, the new IBM FlashSystem family adds four new enterprise-class systems that deliver the benefits of IBM FlashCore Modules, storage-class memory, and the enterprise-class data services of IBM Spectrum Virtualize. These systems include the entry-level enterprise FlashSystem 5010, 5030, and 5100. IBM also offers higher-end capacity in the form of the following:

  • IBM FlashSystem 7200: End-to-end NVMe and sophisticated enterprise-class hybrid multicloud functionality in a system designed for mid-range enterprise deployments. Supporting both scale-up with expansion enclosures and scale-out with up to 4-way clustering, FlashSystem 7200 delivers 24% higher performance than Storwize V7000 Gen3, with a maximum of 8M IOPS, and 55% better throughput, with a maximum of 128GB/s.
  • IBM FlashSystem 9200: End-to-end NVMe in a system designed for the most demanding enterprise requirements. FlashSystem 9200 delivers comprehensive storage functionality and IBM's highest levels of performance: 20% better performance than FlashSystem 9100, with a maximum of 18M IOPS and 180GB/s per 4-way cluster. Both FlashSystem 7200 and 9200 also support latency as low as 70µs.
  • IBM FlashSystem 9200R: Designed for clients needing an IBM-built, IBM-tested complete storage system delivered assembled, with installation and configuration completed by IBM. The system includes 2-4 IBM FlashSystem 9200 control enclosures, Brocade or Cisco switches for clustering interconnect, and optional expansion enclosures for additional capacity.

IBM FlashSystem


The post New IBM FlashSystem Family Introduced appeared first on StorageReview.com.

Oracle Makes Several Announcements At OOW London


At Oracle Open World London, the company made several announcements. The three key announcements include the availability of the Oracle Cloud Data Science Platform, a new database that supports all data, and an expanded interoperability partnership with Microsoft.

Oracle announced the availability of its new Oracle Cloud Data Science Platform. This platform centers around machine learning and helping data scientists collaborate on building, training, managing, and deploying ML models. Giving all teams easy access holds enormous transformational potential: faster development can lead to faster implementation, rather than projects dying before production because they take too long to finish.

Capabilities include:

  • AutoML automated algorithm selection and tuning automates the process of running tests against multiple algorithms and hyperparameter configurations. It checks results for accuracy and confirms that the optimal model and configuration are selected for use (see the sketch after this list for the general idea). This saves significant time for data scientists and, more importantly, is designed to allow every data scientist to achieve the same results as the most experienced practitioners.
  • Automated predictive feature selection simplifies feature engineering by automatically identifying key predictive features from larger datasets.
  • Model evaluation generates a comprehensive suite of evaluation metrics and suitable visualizations to measure model performance against new data and can rank models over time to enable optimal behavior in production. Model evaluation goes beyond raw performance to take into account expected baseline behavior and uses a cost model so that the different impacts of false positives and false negatives can be fully incorporated.
  • Model explanation: Oracle Cloud Infrastructure Data Science provides automated explanation of the relative weighting and importance of the factors that go into generating a prediction. Oracle Cloud Infrastructure Data Science offers the first commercial implementation of model-agnostic explanation. With a fraud detection model, for example, a data scientist can explain which factors are the biggest drivers of fraud so the business can modify processes or implement safeguards.
  • Shared projects help users organize, enable version control and reliably share a team’s work including data and notebook sessions.
  • Model catalogs enable team members to reliably share already-built models and the artifacts necessary to modify and deploy them.
  • Team-based security policies allow users to control access to models, code and data, which are fully integrated with Oracle Cloud Infrastructure Identity and Access Management.
  • Reproducibility and auditability functionalities enable the enterprise to keep track of all relevant assets, so that all models can be reproduced and audited, even if team members leave.
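
As a rough illustration of what automated algorithm selection and hyperparameter tuning involve, the sketch below uses scikit-learn's GridSearchCV to try a few algorithm/hyperparameter combinations and keep the best by cross-validated accuracy. This is a conceptual stand-in only, assuming a scikit-learn environment; it is not Oracle's Cloud Data Science API.

# Conceptual sketch of automated model/hyperparameter selection; not Oracle's AutoML API.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Candidate algorithms and hyperparameter grids to try, as an AutoML engine might.
candidates = [
    (LogisticRegression(max_iter=5000), {"C": [0.1, 1.0, 10.0]}),
    (RandomForestClassifier(random_state=0), {"n_estimators": [100, 300], "max_depth": [None, 10]}),
]

best_model, best_score = None, -1.0
for estimator, grid in candidates:
    search = GridSearchCV(estimator, grid, cv=5, scoring="accuracy")
    search.fit(X_train, y_train)
    if search.best_score_ > best_score:
        best_model, best_score = search.best_estimator_, search.best_score_

print(f"Selected: {best_model.__class__.__name__}, CV accuracy {best_score:.3f}, "
      f"test accuracy {best_model.score(X_test, y_test):.3f}")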

Oracle has taken databases a step further for a world that wants a one-size-fits-all approach, with a single converged database engine able to meet all the needs of a business. Not only is the database stated to meet all needs, but users can also leverage new technology trends, such as blockchain for fraud prevention, the flexibility of JSON documents, or training and evaluating machine learning algorithms inside the database.

The converged capabilities in Oracle Database include:

  • Oracle Machine Learning for Python (OML4Py): Oracle Machine Learning (OML) inside Oracle Database accelerates predictive insights by embedding advanced ML algorithms which can be applied directly to the data. Because the ML algorithms are already collocated with the data, there is no need to move the data out of the database. Data scientists can also use Python to extend the in-database ML algorithms.
  • OML4Py AutoML: With OML4Py AutoML, even non-experts can take advantage of machine learning. AutoML will recommend best-fit algorithms, automate feature selection, and tune hyperparameters to significantly improve model accuracy.
  • Native Persistent Memory Store: Database data and redo can now be stored in local Persistent Memory (PMEM). SQL can run directly on data stored in the mapped PMEM file system, eliminating the IO code path and reducing the need for large buffer caches. This allows enterprises to accelerate data access across workloads that demand lower latency, including high-frequency trading and mobile communication.
  • Automatic In-Memory Management: Oracle Database In-Memory optimizes both analytics and mixed workload online transaction processing, delivering optimized performance for transactions while simultaneously supporting real-time analytics, and reporting. Automatic In-Memory Management greatly simplifies the use of In-Memory by automatically evaluating data usage patterns, and determining, without any human intervention, which tables would most benefit from being placed in the In-Memory Column Store.
  • Native Blockchain Tables: Oracle makes it easy to use blockchain technology to help identify and prevent fraud. Oracle native blockchain tables look like standard tables. They allow SQL inserts, and inserted rows are cryptographically chained (the sketch after this list illustrates the idea). Optionally, row data can be signed to ensure identity fraud protection. Oracle blockchain tables are simple to integrate into apps. They are able to participate in transactions and queries with other tables. Additionally, they support very high insert rates compared to a decentralized blockchain because commits do not require consensus.
  • JSON Binary Data Type: Storing JSON documents in binary format in Oracle Database enables updates that are up to 4X faster and scans that are up to 10X faster.
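
To picture what cryptographically chained rows mean, here is a minimal Python sketch of hash-chained records. It is a conceptual illustration only, not Oracle's implementation or SQL syntax; the field names are made up.

# Conceptual sketch of hash-chained rows, not Oracle's blockchain table implementation.
import hashlib
import json

def append_row(chain, payload):
    """Append a row whose hash covers its payload plus the previous row's hash."""
    prev_hash = chain[-1]["row_hash"] if chain else "0" * 64
    body = json.dumps({"payload": payload, "prev_hash": prev_hash}, sort_keys=True)
    row = {"payload": payload, "prev_hash": prev_hash,
           "row_hash": hashlib.sha256(body.encode()).hexdigest()}
    chain.append(row)
    return row

def verify(chain):
    """Recompute every hash; any tampered row breaks the chain."""
    prev_hash = "0" * 64
    for row in chain:
        body = json.dumps({"payload": row["payload"], "prev_hash": prev_hash}, sort_keys=True)
        if row["prev_hash"] != prev_hash or hashlib.sha256(body.encode()).hexdigest() != row["row_hash"]:
            return False
        prev_hash = row["row_hash"]
    return True

ledger = []
append_row(ledger, {"account": "A-100", "amount": 250})
append_row(ledger, {"account": "A-100", "amount": -75})
print(verify(ledger))                     # True
ledger[0]["payload"]["amount"] = 999
print(verify(ledger))                     # False: tampering detected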

Finally, Oracle and Microsoft announced that they are extending their cloud partnership with a new cloud interconnect location in Amsterdam. This new location will allow organizations to share data across applications running in Microsoft Azure and Oracle Cloud. This further expands the Oracle-Microsoft cloud interoperability partnership that the companies announced last year.

Oracle


The post Oracle Makes Several Announcements At OOW London appeared first on StorageReview.com.

WD Red 14TB NAS HDD Review


Western Digital released a new line of Red NAS drives in November of 2019, and among the newly launched drives was a 14TB HDD, expanding the maximum capacity for the line's HDDs. Purpose-built for NAS system compatibility, WD Red (non-Pro) drives are ideal for home and small business NAS systems with up to 8 bays running in a 24/7 environment, while supporting up to a 180 TB/year workload rate.

While there are plenty of drives on the market, desktop drives aren't typically tested or designed for the rigors of a NAS system. Choosing a drive designed for NAS offers features tailored to help preserve your data and maintain optimal performance. WD Red drives come with NASware 3.0 technology to balance performance and reliability in NAS and RAID environments. With NAS systems being always on, a reliable drive is essential, and reliability is a foundation of WD Red NAS hard drives. The drives are designed and tested for 24/7 conditions.

We also have a video overview for those that are interested:

At the time of review, this drive could be picked up for $500 from Amazon.

WD Red 14TB NAS HDD Specifications:

Interface SATA 6Gb/s
Form Factor 3.5-inch
Capacity 14TB
Performance
Interface Transfer Rate 210 MB/s
Cache 512MB
Performance Class 5400 RPM
Reliability
Load/unload cycles 600,000
Non-recoverable read errors per bits read <1 in 10 billion
MTBF (hours) 1,000,000
Workload Rate (TB/year) 180
Warranty 3-year limited
Power Management
Average Power Requirements Read/Write – 4.1 W
Idle – 2.7 W
Standby and sleep – 0.4 W
Environmental Specifications
Operating Temperature 0°C to 65°C
Shock (non-operating) 250 Gs
Physical Dimensions
Dimensions (WxDxH) 4 x 5.787 x 1.028 in (101.6 x 147 x 26.1 mm)
Weight 1.4 pounds (0.64 kilograms)

WD Red 14TB NAS HDD Review Configuration

In this review, we look at eight WD Red 14TB HDDs configured in RAID6 inside our QNAP TS-1685 NAS and compare the results to Seagate IronWolf 14TB HDDs in the same configuration. We use our Dell PowerEdge R730 with a Windows Server 2012 R2 VM as an FIO load generator.

Performance

Enterprise Synthetic Workload Analysis

Our enterprise hard drive benchmark process preconditions each drive-set into steady-state with the same workload the device will be tested with under a heavy load of 16 threads, with an outstanding queue of 16 per thread. The device is then tested in set intervals in multiple thread/queue depth profiles to show performance under light and heavy usage. Since hard drives reach their rated performance level very quickly, we only graph out the main sections of each test.

Preconditioning and Primary Steady-State Tests:

  • Throughput (Read+Write IOPS Aggregate)
  • Average Latency (Read+Write Latency Averaged Together)
  • Max Latency (Peak Read or Write Latency)
  • Latency Standard Deviation (Read+Write Standard Deviation Averaged Together)

Our Enterprise Synthetic Workload Analysis includes four profiles based on real-world tasks. These profiles have been developed to make it easier to compare to our past benchmarks, as well as widely-published values such as max 4K read and write speed and 8K 70/30, which is commonly used for enterprise drives.

  • 4K
    • 100% Read or 100% Write
    • 100% 4K
  • 8K 70/30
    • 70% Read, 30% Write
    • 100% 8K
  • 128K (Sequential)
    • 100% Read or 100% Write
    • 100% 128K

Looking at our throughput test, which measures 4K random performance, the 14TB WD Red performed well in iSCSI with 938 IOPS write and 8,806 IOPS read. In CIFS, the 14TB WD Red posted 734 IOPS write and 5,369 IOPS read.

Next, we move on to 4K average latency. The 14TB WD Red drive hit 272.927ms write and 29.064ms read in the iSCSI configuration, while CIFS measured 348.7ms write and 47.669ms read. The Seagate outperformed the WD in both configurations.

With 4K max latency, the 14TB WD Red showed 5,891.4ms and 1,306.8ms in iSCSI reads and writes, respectively. In CIFS, the 14TB hit 7,836ms read and 1,380.4ms write. Overall, the WD Red and IronWolf showed comparable read max latencies, but the WD Red fell behind when it came to write max latencies.

In standard deviation, the 14TB WD Red showed reads and writes of 60.002ms and 489.01ms in iSCSI, respectively, and 77.598ms and 444.98ms in CIFS.

The next benchmark tests the drives under 100% read/write activity, but this time at 8K sequential throughput. In iSCSI, the 14TB WD Red hit 227,267 IOPS read and 104,679 IOPS write, while CIFS saw roughly a quarter of the read performance at 52,235 IOPS, coupled with 41,272 IOPS write.

Our next test shifts focus from a pure 8K sequential 100% read/write scenario to a mixed 8K 70/30 workload. This will demonstrate how performance scales from 2T/2Q up to 16T/16Q. In CIFS, the 14TB WD Red started at 2,497 IOPS and ended at 2,157 IOPS at the terminal queue depths. In iSCSI, we saw a range of 643 IOPS to 1,698 IOPS.

With average latency at 8K 70/30, the 14TB WD Red showed a range of 6.2ms through 172.48ms in iSCSI, while CIFS showed a range of 1.59ms through 118.54ms.

In max latency, the 14TB WD Red posted a range of 1,136.11ms to 3,279.77ms in CIFS, while iSCSI showed 268.38ms through 4,783.24ms in the terminal queue depths.

In the standard deviation latency results, the 14TB WD Red peaked at 114.29ms (CIFS) and 331.27ms (iSCSI) at the terminal queue depths.

Our last test is the 128K benchmark, which is a large-block sequential test that shows the highest sequential transfer speed. The 14TB WD Red showed 1.71GB/s read and 1.19GB/s write in CIFS, while iSCSI had 1.97GB/s read and 1.99GB/s write.

Conclusion

The latest capacity of the WD Red line is a solid addition. This NAS-specific drive gives users the line's highest capacity at a relatively inexpensive price, while results from our performance charts reaffirm that the line is a good choice for the SOHO market and creative professionals. The 14TB WD Red drive offers solid performance with a few extra terabytes, increasing a user's net storage capacity enough to make a difference. Features of the drive include support for NAS systems with up to 8 bays, support for up to a 180 TB/year workload rate, and optimum compatibility through NASware technology. The drive comes with a 3-year limited warranty.

As far as performance goes, the WD Red performed well for its given use cases. We tested the WD Red 14TB drives in both iSCSI and CIFS configurations in RAID6, putting them up against 14TB Seagate IronWolf drives with our synthetic workloads. Here, the WD Red shined in 8K sequential throughput and 128K sequential throughput, hitting 227,267 IOPS and 1.97GB/s, respectively, in iSCSI reads. In general, the WD Red fell behind when it came to latency: in the iSCSI configuration its 8K max latency results were inconsistent, and it trailed in both 4K and 8K average latency in both iSCSI and CIFS configurations.

Overall, the WD Red 14TB NAS HDD is a reliable NAS drive that features great performance in specific configurations, while its massive capacity gives users the (budget-friendly) flexibility they need to grow as their data requirements expand.

The post WD Red 14TB NAS HDD Review appeared first on StorageReview.com.


StorageReview Podcast #34: Rick Vanover, Veeam


This week, Rick Vanover stops by the Cincinnati lab to help the team upgrade to Veeam v10. v10 isn't publicly available until next week, so soak up the preview in this week's pod interview, as well as the YouTube video we have walking through the upgrade process. The team also breaks down Coronavirus cancelling MWC, feats of flight, HPE's new micro server, and the protocol of dealing with a second dog poop but only one bag. Lastly, Eaton came through this week with a new lab accessory: our first lab bar stool.


The post StorageReview Podcast #34: Rick Vanover, Veeam appeared first on StorageReview.com.

In the Lab: Plugable USB-C and USB-A Ethernet Adapters


At StorageReview, we get our hands on a lot of notebooks. Instead of connecting each and every new system to our wireless network, we opt to connect devices using Ethernet cords so we can just plug and play. Not only does connecting to our network via Ethernet save us time, but it's also important for us to connect this way to get the best results when copying over our large datasets for benchmarks. However, not all the products we test have an Ethernet port. This is where Plugable Ethernet Adapters come into play. By having both a Plugable USB Type-C and a Plugable USB 3.0 Type-A on hand, we can instantly connect to our network via a USB port. These adapters don't require external power or additional drivers, which is especially convenient.

While the specific features and system requirements of the adapters vary slightly, overall the benefits of these adapters include low cost, small form factor, and simple plug-and-play Gigabit Ethernet network connectivity.

USB Type-C Ethernet Adapter Specifications

Chipset ASIX AX88179
Features High performance packet transfer rate over USB bus using proprietary burst transfer mechanism (US Patent Approval).
USB Type-C male to RJ45 female adapter supporting Gigabit Ethernet at USB Type-C speeds.
Supports all USB power saving modes (U0, U1, U2, and U3).
Supports 10/100/1000 with auto-sensing (IEEE 802.3, 802.3u, and 802.3ab).
IPv4/IPv6 checksum offload engine, crossover detection and auto-correction, TCP large send offload and IEEE 802.3az Energy Efficient Ethernet.
Supports dynamic cable length detection and dynamic power adjustment Green Ethernet (Gigabit mode only).
System Compatibility Windows 10, 8.1/8, 7
macOS 10.6 – 10.14
Linux kernels prior to 3.9 require rebuild of kernel module from source.
Chrome OS (support with latest updates)
Nintendo Switch
Cost $19

USB Type-A Ethernet Adapter Specifications

Chipset ASIX AX88179
Features High performance packet transfer rate over USB bus using proprietary burst transfer mechanism (US Patent Approval).
USB 3.0 male A to RJ45 female adapter supporting gigabit Ethernet at USB 3.0 speeds.
Supports all USB 3.0 power saving modes (U0, U1, U2, and U3).
Supports 10/100/1000 with auto-sensing (IEEE 802.3, 802.3u, and 802.3ab).
IPv4/IPv6 checksum offload engine, crossover detection and auto-correction, TCP large send offload and IEEE 802.3az Energy Efficient Ethernet.
Supports dynamic cable length detection and dynamic power adjustment Green Ethernet (Gigabit mode only).
System Compatibility Windows 10, 8.1/8, 7, Vista, XP
macOS 10.6 – 10.14
Linux kernels prior to 3.9 require rebuild of kernel module from source.
Chrome OS (support with latest updates)
Nintendo Switch
Cost $14

Beyond adding an Ethernet port to a system lacking one, these adapters are handy for situations where you need more than one network connection. In many scenarios when deploying or diagnosing multi-controller platforms, having access to more than one controller at the same time is quite valuable. One example is when we worked through multiple reconfigurations of a Dell EMC Unity platform and had direct IP access to each CLI console as the controllers were wiped. One method to do this in the field is with a small 5-port switch (which requires power), or you could use Ethernet adapters like the Plugable models and carry fewer items around. Another situation is where you need to touch multiple VLANs or network fabrics, where just one port can't get the job done. In either situation, a simple dongle works very well.

Ethernet adapters are small but essential pieces of technology in the tech world. Wireless internet is convenient and gets the job done most of the time, but there are plenty of cases where wired Ethernet is still preferable or necessary. Ethernet adapters are useful for notebooks or tablets that don't have Ethernet ports built in, which is becoming more common as many vendors look to streamline the look of their products. If you are in the market for a USB-A or USB-C Ethernet adapter, Plugable has an excellent offering. These adapters offer convenience first and foremost, working universally with most operating systems. With an attractive price point, these adapters are a must-have for an IT triage bag or business traveler alike.

Plugable Ethernet Adapters at Amazon


The post In the Lab: Plugable USB-C and USB-A Ethernet Adapters appeared first on StorageReview.com.

Dell EMC iDRAC9 V4.0 Overview


Last December, Dell EMC announced the latest version of its out-of-band management solution, iDRAC9 4.00.00.00. The Integrated Dell Remote Access Controller (iDRAC) is designed to make server administrators more productive, allowing them to deploy, monitor, update, and manage PowerEdge servers, both locally and remotely. In the latest firmware release, Dell EMC introduced many new features as well as a new license tier, the Datacenter License. The Datacenter license offers several unique features, such as server telemetry streaming and advanced thermal controls, to help IT admins better understand and manage the datacenter.

Previously, we did an in-depth review of iDRAC9 based on firmware version 3.34.34.34; if you are unfamiliar with this out-of-band management solution, that review makes an excellent starting point before taking a look at this new version. Three more firmware versions have been released since then, but version 4.00.00.00 is the one that includes significant new innovations and enhancements.

Recapping our previous article, the iDRAC controller is a piece of hardware integrated on the motherboard of the server, which has its own processor, memory, network connection, and access to the system bus. It provides remote access to the system console (keyboard and screen), allowing the system BIOS to be accessed over the Internet when the server is rebooted. Critical features of iDRAC include power management, virtual media access, and remote console capabilities. These features give administrators the ability to remotely configure a machine as if they were sitting in front of the local console.

iDRAC9 License Levels

iDRAC licenses are designed to offer the right set of capabilities for the customer's needs. New with this release is the iDRAC Datacenter License.

  • iDRAC Basic – Basic instrumentation with iDRAC web GUI.
  • iDRAC Express – Expanded remote management and server lifecycle features.
  • iDRAC Enterprise – Remote presence features with advanced, enterprise-class management capabilities.
  • iDRAC Datacenter – Extended remote insight into server details, focused on high-end server options and granular power and thermal management.

The new Datacenter license targets Dell EMC customers with large datacenters who are focused on hardware performance analytics and granular power and thermal management.

New and Enhanced Features

As with older versions, the new enhancements and features brought with this release are offered on a per-license basis. And as expected, since the focus is on the new iDRAC license tier, most of the new features are included only in the Datacenter license.

Taken from the iDRAC9 version 4.00.00.00 release notes, below is the list of the features and enhancements supported by the iDRAC Datacenter and Enterprise licenses; the key features are covered in the following sections of this article. Needless to say, the features included with the Enterprise license are also included in the new Datacenter license, which covers all of the features, and iDRAC9 Enterprise is still offered as before with some enhancements.

Datacenter License

  • Telemetry Streaming – metric reports streamed to an analytics tool
  • Thermal Manage – advanced power and cooling features:
    • PCIe airflow customization (LFM)
    • Custom Exhaust Control
    • Custom Delta-T control
    • System Airflow Consumption
    • Custom PCIe inlet temperature
  • Auto Certificate Enrollment and renewal for SSL certificates
  • Virtual Clipboard – support for cut and paste of text strings into the remote virtual console desktop
  • SFP Transceiver inventory and monitoring
  • GPU inventory and monitoring
  • SMART logs for storage drives
  • System Serial Data Buffer Capture
  • Idle Server detection

Enterprise License

  • Multi-Factor Authentication through email.
  • Agent Free Crash Video Capture (Windows only).
  • Connection View for LLDP transmit
  • System Lockdown Mode – new icon in header available from any page
  • Group Manager – 250 node support
  • Enhanced support for Secure Enterprise Key Management (SEKM)
  • Enable PERC to switch to SEKM security mode.

Customers running on the iDRAC Basic or Express license can also benefit from some of the new version's features, which are included at all license levels.

GUI Enhancements

  • Task summary section on the dashboard
  • A search box in the header
  • SupportAssist Collection Viewer – displays the output in iDRAC GUI

The iDRAC web UI is one of the tools that stands out among the other management systems we have reviewed in the past. The GUI is a comprehensive admin tool loaded with many options, as one notices when navigating the menus and submenus. Besides the ones mentioned above, a few other changes were added to the GUI, such as a Job Queue overview and a collapsed accordion style for faster page loading.

API, CLI, and SCP

  • Operating system deployment by Server Configuration Profile (SCP)
  • Enable and disable boot order control to SCP and RACADM
  • New schemas up to 2018.3 are supported for Redfish APIs
  • Option to change boot source state in SCP.
  • Automation for command/attribute autocompletion in RACADM

Lifecycle Controller

  • IDSDM firmware update to version 1.9 or later using out-of-band methods
  • File browsing from USB storage device
  • Multilevel filtering for LC Log viewing.
  • Enhanced updates from downloads.dell.com.

Alerts and Monitoring

  • Custom Sender Email Address for email alerts in SMTP configuration
  • SMARTlogs in SupportAssist log collection for hard drives and PCIe SSD devices
  • Include Part Number of a failed component in alert messages

Security

  • Up to five IP filtering ranges (using RACADM commands only)
  • iDRAC user password maximum length extended to 40 characters.
  • SSH Public Keys through SCP
  • Customizable Security Banner to SSH login
  • Force Change Password (FCP) option for login

New advanced features and enhancements of the iDRAC Datacenter license target the datacenter management needs of IT and server admins. Starting with the key features, the new version of iDRAC introduces telemetry data for analytics to optimize and automate IT operations. This version also offers thermal and power customization for optimizing facilities operations, scalable and automated infrastructure security, and a better remote management experience for easier problem diagnosis and remediation.

Telemetry Streaming

Analytics has been on the rise in recent years. In modern datacenters it is no longer optional for admins and companies looking to simplify daily management tasks, monitor and predict system performance, and boost productivity. With telemetry, datacenters can collect measurements and data from remote systems for analysis and monitoring. Depending on how large a company is, the amount of data spread across its datacenter can be massive; this is where telemetry and analytics make even more sense, as they help optimize IT operations rapidly with actionable insight.

iDRAC telemetry streaming collects hardware metrics and status for analytics from servers, storage, networking, the OS, and workloads. There are over 20 new metric reports for streaming data via Rsyslog or Redfish SSE. The new telemetry data available includes the serial data log, GPU inventory and monitoring, advanced CPU metrics, optical network interface metrics, and more. Telemetry also decreases downtime with predictive analytics and enhances security and compliance.

Ultimately, the new iDRAC telemetry streaming feature provides high-performance streaming of server data. It achieves this by extracting high-value data that can be leveraged by customers' existing, popular analytics tools, as well as used to enhance Dell EMC customer support. Over 190 different server and peripheral metrics can be streamed or pulled from iDRAC, thanks to iDRAC's agent-free architecture.
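
As an illustration of what pulling these metrics over Redfish can look like, here is a minimal Python sketch that lists metric reports from a Redfish TelemetryService endpoint. The paths follow the standard DMTF Redfish telemetry schema; the iDRAC address and credentials are placeholders, and iDRAC-specific report names and details may differ.

# Minimal sketch of pulling telemetry metric reports over Redfish; host/credentials are placeholders.
import requests

IDRAC = "https://192.0.2.10"          # hypothetical iDRAC address
AUTH = ("root", "calvin")             # replace with real credentials
VERIFY_TLS = False                    # lab-only; use proper certificates in production

def get(path):
    resp = requests.get(f"{IDRAC}{path}", auth=AUTH, verify=VERIFY_TLS, timeout=30)
    resp.raise_for_status()
    return resp.json()

# Enumerate available metric reports, then fetch each report's current values.
reports = get("/redfish/v1/TelemetryService/MetricReports")
for member in reports.get("Members", []):
    report = get(member["@odata.id"])
    print(report.get("Id"))
    for value in report.get("MetricValues", [])[:5]:   # show the first few metrics per report
        print("  ", value.get("MetricId"), value.get("MetricValue"), value.get("Timestamp"))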

Other Key Features and Enhancements

Enhanced Security

Critical areas regarding security have been enhanced as well, starting with an easy Multi-Factor Authentication (MFA) option that uses one-time email passcodes when a new access method is detected (a new source IP, for example). This enhancement is targeted at SMB customers not wanting complex MFA solutions built around LDAP/AD or RADIUS. Password security has been improved as well: longer iDRAC credentials (up to 40 characters) are now supported to accommodate automated password generators. Also in the domain of security, Automatic Certificate Enrollment (ACE) has been improved for auto-renewal and deployment of iDRAC9 SSL certificates, taking advantage of the Simple Certificate Enrollment Protocol (SCEP) supported in many server OS solutions. In this way, servers can automatically keep iDRAC SSL certificates renewed with zero manual scripting or monitoring.

Thermal Manage

Another significant feature in this version of iDRAC is customizable thermal and airflow management in PowerEdge servers for optimizing datacenter power policies. Thermal Manage allows customers to customize the thermal operation of their PowerEdge servers, making it possible to optimize server-related power and cooling efficiencies across customer datacenters. This feature is integrated with OpenManage Enterprise Power Manager for an optimized management experience, and an advanced PCIe thermal management dashboard is also provided.

Thermal Manage key-related power features include PCIe airflow customization (LFM), Custom Exhaust Control, Custom Delta T control, System Airflow Consumption, and Custom PCIe inlet temperature.

Other Critical Enhancements

Some other crucial improvements in iDRAC9 version 4.00.00.00 include the new Zero Touch Provisioning via enhanced Server Configuration Profiles (SCPs), which automatically provisions customers' bare-metal servers, including OS, configuration, and firmware, over a secure, out-of-band network. Another enhancement concerns iDRAC email alerting: alerts are now more compatible with cloud-based messaging and more flexible for private email domains. Also, when using remote console operations, a virtual clipboard is now available to efficiently copy text and passwords from the local clipboard and paste them into the HTML5 remote console view.

Other advanced server monitoring enhancements, available only with the iDRAC Datacenter license, include idle server detection, which automatically detects and alerts admins to unused servers in their infrastructure, and agent-free OS crash and screen capture, which detects Windows OS crash events and automatically captures the desktop screenshot without installing iSM or OMSA.

Upgrading the Firmware

A walkthrough video on how to upgrade Dell EMC PowerEdge servers to version 4.00.00.00 is available on the StorageReview YouTube channel. In addition to the video, the step-by-step guide can be found in our previous iDRAC review (mentioned in the introduction), in the overview of the Lifecycle Controller section.

Conclusion

Dell EMC has released the new version of its iDRAC9 controller, version 4.00.00.00. iDRAC simplifies hardware management through ease of use and automation, and with this new version, Dell EMC is targeting evolving trends in modern datacenters, such as cyber resiliency, automation of IT processes, simplified services and support, and analytics for datacenter management. This last one is one of the critical features of this release, referring to the use of telemetry to provide precise, time-series data for monitoring power, temperatures, performance (CUPS), and statistics (NICs, GPUs, storage SMART attributes, and more). iDRAC telemetry provides granular insight and control by collecting, analyzing, and visualizing different datacenter metrics and data.

With this version, another vital area of enhancement is security, as it is a top concern for IT datacenters and secure infrastructure is becoming mandatory. Here, iDRAC focused on improvements in Multi-Factor Authentication (MFA), password security, and Automatic Certificate Enrollment (ACE). Thermal Manage features, email alerting, and advanced server monitoring are also enhanced, and the management web UI received a few relevant improvements as well. With all these new features, the iDRAC9 controller continues to let organizations build a secure, optimized, and intelligent IT infrastructure that simplifies deployment, configuration, and updates throughout a server's life.


The post Dell EMC iDRAC9 V4.0 Overview appeared first on StorageReview.com.

Dell Technologies Brings PowerEdge To The Edge


Today, Dell Technologies announced a slew of new solutions and innovations to help customers deal with data that is being created at the edge. The new solutions include new edge server designs, smaller modular data centers, enhanced telemetry management, and a streaming analytics engine. The new Dell solutions are aimed at helping customers better realize the value of the data coming from the edge as well as quickly gain insights regardless of where the data resides.

To get a deeper take, Ravi Pendekanti sat down with our own Brian Beeler to discuss these new servers in our podcast.

As with most IT buzzwords, the edge has taken on an amorphous meaning that tends to change depending on what vendors are selling. Dell Technologies wants to break the edge free from a specific meaning of place and define it as a set of characteristics and constraints, including bandwidth, IT skills, security, operating environment, space, and power. Data is increasingly created and processed outside of data centers, and soon outside of the cloud. More and more, vendors need to look for ways to process data where it is created, or as close to it as possible. If users can process their data close to where it is created, they can gain insights faster and get a leg up on the competition. This is where the new solutions from Dell Technologies slide in to help.

First up is a server for the edge: the Dell EMC PowerEdge XE2420. The XE2420 is compact with a shorter depth, ideal for edge locations where space is limited, while still delivering high performance. The new PowerEdge is a two-socket system with up to 92TB of storage. The XE2420 comes with front-accessible I/O and power to provide easy access for field serviceability. The system is able to survive and thrive in tougher conditions: the server has Network Equipment-Building System (NEBS) certification with extended operating temperature tolerance and an optional filtered bezel for dusty locations. A potential use case Dell calls out is telecommunications customers building out edge networks critical for the implementation of 5G.

When a server is not enough, Dell EMC decided to make an entire data center available in the form of the Dell EMC Modular Data Center (MDC) Micro 415. This modular data center is said to offer pre-integrated, enterprise-level data center IT, power, cooling and remote management in a size shorter and narrower than a parking spot. So, in hard locations where a data center would normally be miles away, customers can now deploy one with the MDC Micro. The MDC Micro can withstand extreme temperature changes and has physical security such as locks, smoke detectors, and fire suppression.

If businesses are moving more things away from the core, they need good software to manage it all. Dell EMC is introducing its iDRAC9 Datacenter software, which brings remote access that gives users a consistent and secure server management experience. iDRAC9 Datacenter is all about hitting the requirements for deploying, securing, and operating edge environments, saving up to 99.1% of administrator-attended time per server versus manual deployment. This addition of iDRAC9 brings streaming data analytics capabilities, critical for understanding edge operations, to all Dell EMC PowerEdge servers. With this capability, users can discover trends, fine-tune operations, and create predictive analytics to help ensure peak performance, reduce downtime, and prevent risk.

Speaking of streaming, the company has released its new Dell EMC Streaming Data Platform to store and analyze data at the edge. This platform simplifies edge infrastructure while allowing users to attain insights that will help them to run more efficiently. The platform provides auto-scaling ingestion, tiered storage with historical recall on-demand and unified analytics for both real-time and historical business insights.

Availability

  • The Dell EMC PowerEdge XE2420 will begin initial availability starting March 31, 2020 and rolling out globally through April 2020
  • The Dell EMC Modular Data Center Micro is expected to be available beginning in the second half of 2020
  • The Dell EMC iDRAC9 Datacenter and the Dell EMC Streaming Data Platform are available globally now

Dell Technologies


The post Dell Technologies Brings PowerEdge To The Edge appeared first on StorageReview.com.

Veeam v10 Enters General Availability


Today, Veeam released v10 of its popular Veeam Availability Suite. Veeam Availability Suite was first introduced in 2008 as Veeam Backup & Replication and has been the company's flagship backup solution ever since. Veeam was founded in 2006 and was recently acquired by Insight Partners for roughly five billion dollars. Veeam is well known for its data management and disaster recovery products.

Veeam v10 highlights

Veeam Availability Suite v10 is entering general availability with a host of new and improved features. First among the new features is the long-requested ability to back up NFS and SMB file shares directly from network-attached storage (NAS). V10 also comes with upgrades to the existing ability to back up both Windows- and Linux-based file servers. Veeam is further enhancing its backup abilities with enhanced Amazon Simple Storage Service (S3) object storage integration and immutable backups. V10 is also rolling out with improvements to recovery once your data has been backed up: it extends Veeam’s “instant VM recovery” into a multi-VM recovery service that supports mass “instant” restores.

We did sit down with Veeam to talk about v10 recently:

Veeam Availability Suite V10 also comes with enhanced support for third-party tools. V10 offers broader platform and ecosystem support, including new, advanced capabilities for Linux, Nutanix AHV, PostgreSQL, MySQL, and more. V10 also includes an extended API to simplify third-party data analysis software integration with the new Veeam Data Integration API.

Veeam Main Site


The post Veeam v10 Enters General Availability appeared first on StorageReview.com.
