
Supermicro SuperStorage 6019P-ACR12L+ Server Review


The SuperStorage 6019P-ACR12L+ is a 1U server designed for organizations that need a solution for high-density object storage, scale-out storage, Ceph/Hadoop, and Big Data Analytics. This server is highlighted by Supermicro’s X11-DDW-NT motherboard family, which features support for dual-socket 2nd generation Intel Xeon Scalable processors (Cascade Lake), up to 3TB of ECC DDR4-2933MHz RAM, and Intel Optane DCPMM. This means higher frequency, more cores at a given price, and more in-processor cache, all of which promote higher performance. As such, businesses will reap higher performance for the same price, better price for the same performance, or better performance at lower costs compared to previous CLX servers.

For storage, the 6019P-ACR12L+ can be outfitted with twelve 3.5″ HDD bays, four 7mm NVMe SSD bays and an M.2 NVMe SSD as a boot drive. Supermicro leverages the 10GbE on board and three expansion card slots for faster NICs (1x HHHL, 2x FHHL) so users can get the best possible performance out of their drives. Connectivity includes three RJ45 LAN ports (two of which are 10GBase-T and one a dedicated IPMI), four USB 3.0 and two USB 2.0 ports, one VGA port and a TPM header.

There are a few minor differences between the plus and non-plus models. For example, the 6019P-ACR12L+ is equipped with 3x PCI-E 3.0 x16 slots, while the non-plus version has two PCI-E 3.0 x16 slots and one PCI-E 3.0 x8 LP slot. Moreover, the plus model features 800W redundant PSUs versus the 600W units inside the non-plus. The plus model also has front LEDs for drive activity.

We did an unboxing and overview video of the SuperStorage 6019P-ACR12L+ here:

 

Our build comprises 12 x 16GB of DDR4 RAM (for a total of 192GB), Samsung PM983 NVMe SSDs (four 3.84TB, one 960GB) and 12 x 12TB Seagate Exos HDDs. The Samsung PM983 SSDs included as part of this evaluation carry a 1.3 DWPD endurance rating, which positions them more toward read-heavy than write-heavy workloads.
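
To put that 1.3 DWPD rating in perspective, here is a quick back-of-the-envelope endurance calculation in Python. The five-year term is an assumption for illustration only (check the drive's actual warranty), not a figure quoted by Samsung or Supermicro.

    # Rough endurance estimate from a DWPD (drive writes per day) rating.
    # The 5-year term below is an assumption for illustration, not a quoted spec.
    def total_writes_tb(capacity_tb, dwpd, years=5.0):
        """Approximate total terabytes writable over the assumed period."""
        return capacity_tb * dwpd * 365 * years

    total_tb = total_writes_tb(capacity_tb=3.84, dwpd=1.3)
    print(f"~{total_tb / 1000:.1f} PB of writes over 5 years")  # roughly 9.1 PB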

Supermicro SuperStorage 6019P-ACR12L+ Specifications

CPU
  • Dual Socket P (LGA 3647)
  • 2nd Gen. Intel® Xeon® Scalable Processors (Cascade Lake/Skylake),
    Dual UPI up to 10.4GT/s
  • Support CPU TDP up to 205W
Cores
  • Up to 28 Cores
System Memory
Memory Capacity
  • 12 DIMM slots
  • Up to 3TB 3DS ECC DDR4-2933MHz RDIMM/LRDIMM
  • Supports Intel® Optane DCPMM
Memory Type
  • 2933/2666/2400/2133MHz ECC DDR4 RDIMM/LRDIMM
Note: 2933MHz with two DIMMs per channel can be achieved by using memory purchased from Supermicro (Cascade Lake only). Contact your Supermicro sales rep for more info.
On-Board Devices
Chipset
  • Intel® C622 chipset
SATA
  • SATA3 (6Gbps); RAID 0, 1, 5, 10
Network Controllers
  • Dual LAN with 10GBase-T from Intel C622
IPMI
  • Support for Intelligent Platform Management Interface v.2.0
  • IPMI 2.0 with virtual media over LAN and KVM-over-LAN support
Video
  • ASPEED AST2500 BMC
Input / Output
LAN
  • 2 RJ45 10GBase-T LAN ports
  • 1 RJ45 Dedicated IPMI LAN port
USB
  • 4 USB 3.0 ports (rear)
  • 2 USB 2.0 ports (front)
Video
  • 1 VGA port
TPM
  • 1 TPM Header
System BIOS
BIOS Type
  • UEFI 256Mb
Management
Software
  • Intel® Node Manager
  • IPMI 2.0
  • KVM with dedicated LAN
  • SSM, SPM, SUM
  • SuperDoctor® 5
  • Watch Dog
Power Configurations
  • ACPI / APM Power Management
PC Health Monitoring
CPU
  • Monitors for CPU Cores, Chipset Voltages, Memory.
  • 4+1 Phase-switching voltage regulator
FAN
  • Fans with tachometer monitoring
  • Status monitor for speed control
  • Pulse Width Modulated (PWM) fan connectors
Temperature
  • Monitoring for CPU and chassis environment
  • Thermal Control for fan connectors
Chassis
Form Factor
  • 1U Rackmount
Model
  • CSE-802TS-R804WBP
Dimensions and Weight
Width
  • 17.6″ (447mm)
Height
  • 1.7″ (43mm)
Depth
  • 37.40″ (950mm)
Weight
  • Net Weight: 65 lbs (29.5 kg)
  • Gross Weight: 80 lbs (36.3 kg)
Available Colors
  • Black
Front Panel
Buttons
  • Power On/Off button
  • System Reset button
LEDs
  • Power status LED
  • HDD activity LED
  • Network activity LEDs
Expansion Slots
PCI-Express
  • 3 PCI-E 3.0 x16 slots
  • 1 PCI-E 3.0 x4 NVMe M.2 slot
  • AOM slot (AOM Broadcom 3216 SAS3 IT Mode)
Drive Bays
Hot-swap
  • 12 Hot-swap 3.5″ SAS3/SATA3 drive bays
  • 4 Hot-swap 2.5″ 7mm NVMe/SATA drive bays
System Cooling
Fans
  • 6x 40x40x56mm 20.5K-17.6K RPM Counter-rotating fans
Power Supply
800W Redundant Power Supplies with PMBus
Total Output Power
  • 800W
Dimension
(W x H x L)
  • 54.5 x 40.25 x 220 mm
Input
  • 750W: 100-127Vac / 10A
  • 800W: 200-240Vac / 5.5A
  • 800W: 230-240Vdc / 5.5A
+12V
  • Max: 62.5A /Min: 0.5A (100-127Vac)
  • Max: 66.6A /Min: 0.5A (200-240Vac, 230-240Vdc)
+5Vsb
  • Max: 4A /Min: 0A
Output Type
  • Gold Finger
Certification
  • Platinum Level
Operating Environment
RoHS
  • RoHS Compliant
Environmental Spec.
  • Operating Temperature:
    10°C ~ 35°C (50°F ~ 95°F)
  • Non-operating Temperature:
    -40°C to 60°C (-40°F to 140°F)
  • Operating Relative Humidity:
    8% to 90% (non-condensing)
  • Non-operating Relative Humidity:
    5% to 95% (non-condensing)

Design and build

The SC802TS-R804WBP is a 1U chassis under 2 inches tall, 18 inches wide and just over 37 inches deep. With its toolless rail system design, the chassis can mount into the server rack without the use of any tools. It leverages locking mechanisms on each end of the rails, which lock onto the square mounting holes located on the front and back of a server rack.

On the front of the server is a control panel, which features a power on/off and reset button, as well as five LEDs: Power, HDD, 2x NIC, and information status indicators. Connectivity on the front includes two USB 2.0 ports. Running along the bottom of the front panel are the four hot-swap 2.5-inch bays for NVMe/SATA drives.

Also on the front panel are two locking levers. Simply loosen the two thumbscrews, then rotate the levers counterclockwise to unlock and clockwise to lock the drawer. Pulling the two levers at the same time pops the internal drive drawer out.

In the middle of the back panel are three LAN ports (2 RJ45 10GBase-T LAN and 1 RJ45 dedicated IPMI LAN), one VGA port and one TPM header. On the left side are two 800W PSUs, while the right side houses three PCIe expansion slots.

Performance

VDBench Workload Analysis

When it comes to benchmarking storage arrays, application testing is best, and synthetic testing comes in second place. While not a perfect representation of actual workloads, synthetic tests do help to baseline storage devices with a repeatability factor that makes it easy to do apples-to-apples comparisons between competing solutions. These workloads offer a range of testing profiles, from “four corners” tests and common database transfer-size tests to trace captures from different VDI environments. All of these tests leverage the common vdBench workload generator, with a scripting engine to automate and capture results over a large compute testing cluster. This allows us to repeat the same workloads across a wide range of storage devices, including flash arrays and individual storage devices.

Profiles:

  • 4K Random Read: 100% Read, 128 threads, 0-120% iorate
  • 4K Random Write: 100% Write, 128 threads, 0-120% iorate
  • 64K Sequential Read: 100% Read, 32 threads, 0-120% iorate
  • 64K Sequential Write: 100% Write, 16 threads, 0-120% iorate
  • 2048K Sequential Read: 100% Read, 24 threads, 0-120% iorate
  • 2048K Sequential Write: 100% Write, 24 threads, 0-120% iorate
  • Synthetic Database: SQL and Oracle
  • VDI Full Clone and Linked Clone Traces
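
The profiles above map directly onto a small set of workload parameters (transfer size, read percentage, thread count, and an iorate sweep). The Python sketch below is only an illustration of that mapping; it is not the actual vdBench parameter files or scripting engine used in the lab, and the field names are assumptions.

    # Illustrative mapping of the listed profiles to workload parameters.
    # Not the lab's actual vdBench configuration; field names are assumptions.
    from dataclasses import dataclass

    @dataclass
    class Profile:
        name: str
        block_size: str                  # I/O transfer size
        read_pct: int                    # percentage of reads (rest is writes)
        threads: int                     # outstanding workers
        iorate_steps: tuple = tuple(range(10, 121, 10))  # 10%..120% of max rate

    PROFILES = [
        Profile("4K Random Read",         "4k",    100, 128),
        Profile("4K Random Write",        "4k",      0, 128),
        Profile("64K Sequential Read",    "64k",   100,  32),
        Profile("64K Sequential Write",   "64k",     0,  16),
        Profile("2048K Sequential Read",  "2048k", 100,  24),
        Profile("2048K Sequential Write", "2048k",   0,  24),
    ]

    for p in PROFILES:
        print(f"{p.name}: bs={p.block_size}, rdpct={p.read_pct}, threads={p.threads}")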

Looking at random 4K read, the SuperStorage 6019P-ACR12L+ recorded sub-millisecond latency throughout, starting at 233,047 IOPS at 80.5μs and peaking at 2,330,576 IOPS with 215.5μs latency.

For random 4K write, the server began at 161,323 IOPS with just 16.8μs latency and maintained this low latency until roughly 970,000 IOPS at the 100% load point, where latency spiked to 411.8ms and performance scaled back to 571K IOPS in the over-saturated 110% and 120% load steps. This behavior is mostly a function of the drives themselves; the PM983 is a read-oriented design rated at 1.3 DWPD of endurance.

Next, we move on to sequential work. In 64K sequential read, the 6019P-ACR12L+ started at 12,593 IOPS or 787MB/s with a latency of 303.1μs before going on to peak at 121,252 IOPS or 7.58GB/s with a latency of 931μs.
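
As a side note, the IOPS and MB/s figures quoted throughout this section are two views of the same measurement: throughput is simply IOPS multiplied by the transfer size. The quick check below assumes the review's MB/s figures are binary megabytes (they line up that way); it is a convenience calculation, not part of the test harness.

    # Throughput = IOPS x transfer size. The review's MB/s figures line up with
    # binary megabytes (MiB), so we divide by 1024 rather than 1000.
    def iops_to_mbps(iops, block_kib):
        return iops * block_kib / 1024.0

    print(iops_to_mbps(12_593, 64))    # ~787 MB/s, the 64K sequential read floor
    print(iops_to_mbps(121_252, 64))   # ~7,578 MB/s (~7.58GB/s), the 64K read peak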

For 64K sequential write, the SuperStorage server began at 12,202 IOPS or 763MB/s at 53.2μs latency. The SuperStorage server then peaked at roughly 119K IOPS or 7.45GB/s at 526μs latency.

Measuring the raw performance of the twelve 7.2K hard drives inside the server, we applied a 2048K sequential read workload to the 6019P-ACR12L+. Performance started at 121 IOPS or 241MB/s at 14,812μs latency. The SuperStorage server then peaked at roughly 1,202 IOPS or 2,405MB/s at 200,223μs latency.

For 2048K sequential write on the hard drives, the 6019P-ACR12L+ started at 129 IOPS or 259MB/s at 5,360μs latency. The SuperStorage server then peaked at roughly 1,303 IOPS or 2,607MB/s at 191,420μs latency.

Our next set of tests is our SQL workloads: SQL, SQL 90-10, and SQL 80-20. Starting with SQL, the 6019P-ACR12L+ peaked at 902,130 IOPS with a latency of only 140.7μs.

For SQL 90-10, the SuperStorage server started at 86,588 IOPS with a latency of 78.1μs and peaked at 855,814 IOPS with 146.3μs in latency.

SQL 80-20 saw the 6019P-ACR12L+ start at 58,234 IOPS with 72.2μs of latency and peak at 555,565 IOPS with 210μs of latency.

Next up are our Oracle workloads: Oracle, Oracle 90-10, and Oracle 80-20. Starting with Oracle, the 6019P-ACR12L+ started at 74.2μs latency while peaking at 406,222 IOPS with a latency of only 298μs.

Looking at Oracle 90-10, the SuperStorage server started at 72,398 IOPS with a latency of 77.2μs and peaked at 722,830 IOPS with 120μs in latency.

With Oracle 80-20, the 6019P-ACR12L+ began at 66,807 IOPS and a latency of 92μs, while peaking at 677,406 IOPS and a latency of 129μs.

Next, we switched over to our VDI clone test, Full and Linked. For VDI Full Clone (FC) Boot, the SuperStorage 6019P-ACR12L+ began at 54,918 IOPS and a latency of 88.9μs and peaked at 556,089 IOPS at a latency of 222.5μs.

Looking at VDI FC Initial Login, the SuperStorage server started at 18,409 IOPS and 69.3μs latency, hitting a peak at 66,668 IOPS at 1,636μs.

VDI FC Monday Login saw the server start at 7,499 IOPS and 100.7 μs latency with a peak of 73,301 IOPS at 763μs.

For VDI Linked Clone (LC) Boot, the 6019P-ACR12L+ began at 27,200 IOPS with 131.9μs latency and peaked at 266,391 IOPS at 221.5μs.

Looking at VDI LC Initial Login, the 6019P-ACR12L+ began at 13,298 IOPS with 107.8μs latency and peaked at 53,900 IOPS at 375.5μs.

Finally, VDI LC Monday Login had the 6019P-ACR12L+ start at 4,398 IOPS and 115.7μs latency then peaking at 42,381 IOPS at 1,340.3μs.

Conclusion

Supermicro’s SuperStorage 6019P-ACR12L+ is designed for high-density object storage, scale-out storage, Ceph/Hadoop, and Big Data Analytics. On the hardware side, the server supports dual-socket 2nd generation Intel Xeon Scalable processors (Cascade Lake) and up to 3TB of ECC DDR4-2933MHz RAM and/or Intel Optane DCPMM. On the storage side, it can house up to twelve 3.5″ HDDs, four 7mm NVMe SSDs and an M.2 NVMe SSD for boot. All of this hardware fits in the server’s 1U form factor. For networking, the SuperStorage 6019P-ACR12L+ has 10GbE on board and comes with several expansion slots for more cards.

For performance, we ran our VDBench Workload Analysis. Here the Supermicro SuperStorage 6019P-ACR12L+ put up some fairly good peak numbers. Peak highlights for the flash drives include 2.3 million IOPS for 4K read, 970K IOPS for 4K write, and 7.58GB/s read and 7.45GB/s write in 64K sequential transfers. For the spinning media we ran a 2048K sequential benchmark, hitting about 2.4GB/s read and 2.6GB/s write. With our SQL workloads the server saw peaks of 902K IOPS, 856K IOPS for 90-10, and 555K IOPS for 80-20. With Oracle we saw peaks of 406K IOPS, 723K IOPS for 90-10, and 677K IOPS for 80-20. The server continued to do well as we moved into our VDI clone tests. For Full Clone we saw peaks of 556K IOPS for boot, 67K IOPS for Initial Login, and 73K IOPS for Monday Login. For Linked Clone we saw 266K IOPS for boot, 54K IOPS for Initial Login, and 42K IOPS for Monday Login.

The Supermicro SuperStorage 6019P-ACR12L+ is a 1U server that packs in quite a bit of storage and connectivity while helping users as they tangle with Big Data issues. Webscale organizations will like the combination of flash and HDD capacity this unit offers, and software-defined storage deployments will be able to leverage automated tiering capabilities to deliver flash-based performance over 144TB (or more) of HDD capacity. In all, the design of the server is quite novel, entirely different from what most other vendors are doing in 1U. It may not be for everyone, but for those who can take advantage of the storage performance and flexibility this server offers, Supermicro has created a really compelling offering.

Supermicro SuperStorage 6019P-ACR12L+



StorageReview Podcast #35: Ravi Pendekanti, Dell Technologies


This week’s podcast features an interview with Ravi Pendekanti from Dell Technologies. Ravi highlights Dell’s new edge servers and an all new data center to go that is pretty sweet. If you prefer, we have a video version of the interview, complete with photos of the new gear embedded below. In other news the team discusses how people are terrible, Pliny the Younger, waitstaff that refuse to write down orders and much more. In technology news we discuss the HP Micro Server again, in addition to a lot of AMD news and systems that are in for review. PCIe Gen4 SSDs are getting closer to a real thing, as Kioxia released a pair of U.3 offerings this week. Lastly Adam recommends In the Tall Grass in his movie corner, which was roundly rejected as a bad idea.


Intel Releases New 2nd Gen Intel Xeon Scalable CPUs


Today, Intel announced a refresh of the second generation Intel Xeon Scalable CPUs, with aggressive pricing as a core change across the portfolio. The new processors are all about competitive performance, making them more attractive in a changing compute marketplace. The CPUs can be used for a wide variety of use cases and are also part of the larger 5G announcement Intel made today.

There are a few things in the air surrounding CPUs at the moment. Intel released its latest version of Intel Xeon Scalable in April of 2019. The second generation came with more performance as well as new capabilities such as the ability to leverage Optane DC PMEM. Shortly thereafter, AMD fired a shot with its release of the second generation of AMD EPYC CPUs with even better performance than the newly released Xeon Scalables, and benefits of their own such as the ability to leverage PCIe 4.0. It seemed that the two giants would duke it out to the benefit of everyone. Then VMware introduced a wrinkle by switching its per-socket pricing to per-32-core pricing. This in turn had AMD come up with a small release that can bypass the VMware “tax.” Now it is Intel’s turn for a smaller release.

Today’s announcement doesn’t seem to be a response to the VMware “tax,” though none of the new CPUs have over 32 cores. The new processors announced here are aimed more at performance, with Intel quoting an average of 1.36 times higher performance and 1.42 times better performance-per-dollar compared with the 1st Gen Intel Xeon Gold processors. Intel has added cores to some processors, a larger cache, and higher frequencies. The company has broken down the new CPUs with new designations of an “R,” “T” or “U” suffix, designed for dual- and single-socket mainstream and entry-level server systems.

Intel released two new Gold processors with what they are calling the industry’s highest server processor frequency, the Intel Xeon Gold 6256 and 6250. These 12- and 8-core CPUs, respectively, are said to deliver base and turbo frequencies of up to 3.9 GHz and 4.5 GHz. High clock speeds like these are good for use cases such as financial trading, simulation and modeling, high-performance computing, and databases.

Use cases for the new CPUs

  • Industry-leading frequencies for high-performance usages: New Intel Xeon Gold 6200 processors deliver up to 4.5 GHz processor frequency with Intel Turbo Boost Technology and up to 33% more processor cache, offering customers breakthrough performance for frequency-fueled workloads.
  • Enhanced performance for mainstream usages: New Intel Xeon Gold 6200R and 5200R processors deliver built-in value through a combination of higher base and Intel Turbo Boost Technology frequencies, in addition to increased processor cache.
  • Increased value and capability for entry-level, edge, networking and IoT usages: New Intel Xeon Gold 6200U, Silver 4200R, Silver 4210T and Bronze 3200R processors deliver increased value for single-socket entry-level servers, as well as edge, networking and internet of things (IoT) usages.

Something else to point out here is price. One large advantage AMD holds over Intel, aside from performance, is price; AMD parts tend to run much lower in general. With this update, Intel is lowering prices significantly for the same specs (the 6258R has the same specs as the 8280 but comes in under $4,000 versus $10,500). Of course, the R suffix limits the part to dual-socket systems, so they aren’t exactly identical, but it is definitely a step toward giving customers more options when choosing CPUs.
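
Taking the article's approximate figures at face value, the list-price gap is easy to quantify. The quick calculation below simply divides the quoted prices; it is not an official Intel comparison, and the prices are the rounded numbers cited above.

    # Back-of-the-envelope list-price comparison using the figures quoted above.
    # These are the article's rounded numbers, not official Intel list prices.
    xeon_platinum_8280 = 10_500   # approximate launch list price ($)
    xeon_gold_6258r = 4_000       # approximate refreshed list price ($)

    ratio = xeon_platinum_8280 / xeon_gold_6258r
    print(f"~{ratio:.1f}x lower list price for roughly the same core count and clocks")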

Intel CPUs


AIC Announces Multi-Node HCI Series


AIC Inc. announced its new multi-node product series. The new series is optimized for HCI (Hyper-Converged Infrastructure) and, as such, combines compute, storage and networking support into a compact system. The new systems come in 1U, 2U, and 4U form factors for different needs, with the quad-node systems especially popular for software-defined solutions.

For over 20 years, AIC has been a provider of both OEM/ODM and COTS server and storage solutions. Headquartered in Taiwan, and founded in 1996, AIC has decades of experience in creating products that are flexible and configurable. They are extending this expertise to new HCI with the new series announced here.

AIC multi-node systems include:

  • HP101-AG, 1U dual-node compute server supporting single-socket AMD EPYC boards, two PCIe Gen.4 x16 slots and one OCP mezzanine card per node, with hot-swappable functionality of nodes and power supply units.
  • HP201-AG, 2U 4-node hybrid storage server supporting single-socket AMD EPYC boards and 24 x 2.5” NVMe SSD drive bays with hot-swappable functionality of nodes, drive bays, and power supply units.
  • HP202-AG, 2U 4-node hybrid storage server supporting single-socket AMD EPYC boards and 12 x 3.5” SATA/SAS drive bays with hot-swappable functionality of nodes, drive bays, and power supply units.
  • HP201-VL, 2U 4-node hybrid storage server supporting dual-socket 2nd Gen. Intel Xeon Scalable Processors boards and 24 x 2.5” NVMe SSD drive bays with hot-swappable functionality of nodes, drive bays, and power supply units.
  • HP202-VL, 2U 4-node hybrid storage server supporting dual-socket 2nd Gen. Intel Xeon Scalable Processors boards and 12 x 3.5” SATA/SAS drive bays with hot-swappable functionality of nodes, drive bays, and power supply units
  • HP401-PV,  4U 4-node hybrid storage server supporting dual-socket 2nd Gen. Intel Xeon Scalable Processors boards and 24 x 3.5” SATA/SAS drive bays with hot-swappable functionality of nodes, drive bays, and power supply units.

AIC will be demonstrating their new products at the RSA Conference 2020, from Feb. 24 to Feb. 27, at Moscone Convention Center, San Francisco, CA at booth #1868.

AIC


GIGABYTE R281-NO0 NVMe Server Review


The GIGABYTE R281-NO0 is a 2U all-NVMe server that is built around Intel’s second generation of Xeon Scalable processors with a focus on performance-based workloads. With the support of 2nd gen Intel Xeon Scalable comes the support of Intel Optane DC Persistent Memory modules. Optane PMEM can bring a much larger memory footprint; while the modules aren’t as fast as DRAM, they come in much higher capacities. Leveraging Optane can help unleash the full potential of the 2nd gen Intel Xeon Scalable Processors in the GIGABYTE R281-NO0.


 

Other interesting hardware aspects of the GIGABYTE R281-NO0 include up to 12 DIMMs per socket, or 24 in total. The newer CPUs allow for DRAM speeds up to 2933MHz; in total, users can outfit the GIGABYTE R281-NO0 with up to 3TB of DRAM. The server can leverage several different riser cards, giving it up to six full-height, half-length slots for devices that use PCIe x16 slots or smaller. The company boasts of having a very dense add-on slot design with several configurations for different use cases. The server has a modularized backplane that supports exchangeable expanders offering SAS, NVMe U.2, or a combination of the two, depending on needs.

On the storage side, users can add a lot of NVMe capacity in the form of U.2 drives and add-in cards. Across the front of the server are 24 drive bays that support 2.5” HDDs or SSDs, including NVMe. The rear of the server has two more 2.5” drive bays for SATA/SAS boot/logging drives. And there are plenty of PCIe expansion slots for various PCIe devices, including more storage. This density and performance profile is ideal for AI and HPC builds optimized for GPU density, multi-node servers optimized for HCI, and storage servers optimized for HDD/SSD capacity.

For those interested, we have a video overview here:

For power management, the GIGABYTE R281-NO0 has two PSUs, which is not uncommon at all. However, it also has intelligent power management features that both make the server more efficient in terms of power usage and keep it powered in the case of a failure. The server comes with a feature known as Cold Redundancy that switches the extra PSU to standby mode when the system load is under 40%, saving power costs. The system also has SCMP (Smart Crisis Management/Protection): if one PSU fails or overheats, the system drops into a lower power mode while the PSU is repaired or replaced.
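
The power-management behavior described above amounts to a simple threshold rule. The sketch below is an illustrative model of that logic, not GIGABYTE firmware; only the 40% threshold comes from the text, and everything else (state names, the exact fault handling) is assumed for the example.

    # Illustrative model of the Cold Redundancy / SCMP behavior described above.
    # Not GIGABYTE firmware; only the 40% threshold comes from the text.
    def psu_states(load_pct, psu_fault=False):
        """Return a rough picture of PSU and system power state for a given load."""
        if psu_fault:
            # SCMP: a failed/overheated PSU forces a low-power mode so the
            # remaining supply can carry the system alone.
            return {"psu_active": 1, "psu_standby": 0, "power_mode": "low"}
        if load_pct < 40.0:
            # Cold Redundancy: light load is served by one PSU, the other idles.
            return {"psu_active": 1, "psu_standby": 1, "power_mode": "normal"}
        return {"psu_active": 2, "psu_standby": 0, "power_mode": "normal"}

    print(psu_states(load_pct=25))                  # one PSU held in standby
    print(psu_states(load_pct=70))                  # both PSUs sharing the load
    print(psu_states(load_pct=70, psu_fault=True))  # SCMP low-power mode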

GIGABYTE R281-NO0 Specifications

Form Factor 2U
Motherboard MR91-FS0
CPU 2nd Generation Intel Xeon Scalable and Intel Xeon Scalable Processors
Intel Xeon Platinum Processor, Intel Xeon Gold Processor, Intel Xeon Silver Processor and Intel Xeon Bronze Processor
CPU TDP up to 205W
Socket 2x LGA 3647, Socket P
Chipset Intel C621
Memory 24 x DIMM slots
RDIMM modules up to 64GB supported
LRDIMM modules up to 128GB supported
Supports Intel Optane DC Persistent Memory (DCPMM)
​1.2V modules: 2933 (1DPC)/2666/2400/2133 MHz
Storage
Bays Front side: 24 x 2.5″ U.2 hot-swappable NVMe SSD bays
​Rear side: 2 x 2.5″ SATA/SAS hot-swappable HDD/SSD bays
Drive Type SATA III 6Gb/s
​SAS with an add-on SAS Card
RAID For SATA drives: Intel SATA RAID 0/1
​For U.2 drives: Intel Virtual RAID On CPU (VROC) RAID 0, 1, 10, 5
LAN 2 x 1Gb/s LAN ports (Intel I350-AM2)
​1 x 10/100/1000 management LAN
Expansion Slots
Riser Card CRS2131 1 x PCIe x16 slot (Gen3 x16 or x8), Full height half-length
1 x PCIe x8 slots (Gen3 x0 or x8), Full height half-length
​1 x PCIe x8 slots (Gen3 x8), Full height half-length
Riser Card CRS2132 1 x PCIe x16 slot (Gen3 x16 or x8), Full height half-length, Occupied by CNV3124, 4 x U.2 ports
1 x PCIe x8 slots (Gen3 x0 or x8), Full height half-length
1 x PCIe x8 slots (Gen3 x8), Full height half-length
Riser Card CRS2124 1 x PCIe x8 slots (Gen3 x0), Low profile half-length
​1 x PCIe x16 slot (Gen3 x16), Low profile half-length, Occupied by CNV3124, 4 x U.2 ports
2 x OCP mezzanine slots PCIe Gen3 x16
Type1, P1, P2, P3, P4, K2, K3
​1 x OCP mezzanine slot is Occupied by CNVO124, 4 x U.2 mezzanine card
I/O
Internal 2 x Power supply connectors
4 x SlimSAS connectors
2 x SATA 7-pin connectors
2 x CPU fan headers
1 x USB 3.0 header
1 x TPM header
1 x VROC connector
1 x Front panel header
1 x HDD back plane board header
1 x IPMB connector
1 x Clear CMOS jumper
​1 x BIOS recovery jumper
Front 2 x USB 3.0
1 x Power button with LED
1 x ID button with LED
1 x Reset button
1 x NMI button
1 x System status LED
1 x HDD activity LED
​2 x LAN activity LEDs
Rear 2 x USB 3.0
1 x VGA
1 x COM (RJ45 type)
2 x RJ45
1 x MLAN
​1 x ID button with LED
Backplane Front side_CBP20O2: 24 x SATA/SAS ports
Front side_CEPM480: 8 x U.2 ports
Rear side_CBP2020: 2 x SATA/SAS ports
​Bandwidth: SATAIII 6Gb/s or SAS 12Gb/s per port
Power
Supply 2 x 1600W redundant PSUs
80 PLUS Platinum
AC Input 100-127V~/ 12A, 47-63Hz
​200-240V~/ 9.48A, 47-63Hz
DC Output Max 1000W/ 100-127V
  • +12V/ 82A
  • +12Vsb/ 2.1A

Max 1600W/ 200-240V

  • +12V/ 132A
  • ​+12Vsb/ 2.1A
Environmental
Operating temperature 10°C to 35°C
Operating humidity 8-80% (non-condensing)
Non-operating temperature -40°C to 60°C
Non-operating humidity 20%-95% (non-condensing)
Physical
Dimensions (WxHxD)  438 x 87.5 x 730 mm
Weight  20kg

Design and Build

The GIGABYTE R281-NO0 is a 2U rackmount server. Across the front are 24 hot-swappable bays for NVMe U.2 SSDs. On the left side are LED indicator lights and buttons for reset, power, NMI, and ID. On the right are two USB 3.0 ports.

 

Flipping the device around to the rear, we see two 2.5″ SSD/HDD bays in the upper left corner. Beneath the bays are two PSUs. Running across the bottom are a VGA port, two USB 3.0 ports, two GbE LAN ports, a serial port, and a 10/100/1000 server management LAN port. Above the ports are six PCIe slots.

 

The top pops off fairly easily, giving users access to the two Intel CPUs (covered by heatsinks in the photo). Here one can see all the DIMM slots as well. This server is loaded down with NVMe, as seen by all the direct-access cables running back to the daughterboards from the front backplane. The cables themselves are neatly laid out and don’t appear to impact airflow front to back.

GIGABYTE R281-NO0 Configuration

CPU 2 x Intel 8280
RAM 384GB of 2933MHz
Storage 12 x Micron 9300 NVMe 3.84TB

Performance

SQL Server Performance

StorageReview’s Microsoft SQL Server OLTP testing protocol employs the current draft of the Transaction Processing Performance Council’s Benchmark C (TPC-C), an online transaction processing benchmark that simulates the activities found in complex application environments. The TPC-C benchmark comes closer than synthetic performance benchmarks to gauging the performance strengths and bottlenecks of storage infrastructure in database environments.

Each SQL Server VM is configured with two vDisks: 100GB volume for boot and a 500GB volume for the database and log files. From a system resource perspective, we configured each VM with 16 vCPUs, 64GB of DRAM and leveraged the LSI Logic SAS SCSI controller. While our Sysbench workloads tested previously saturated the platform in both storage I/O and capacity, the SQL test looks for latency performance.

This test uses SQL Server 2014 running on Windows Server 2012 R2 guest VMs, and is stressed by Dell’s Benchmark Factory for Databases. While our traditional usage of this benchmark has been to test large 3,000-scale databases on local or shared storage, in this iteration we focus on spreading out four 1,500-scale databases evenly across our servers.

SQL Server Testing Configuration (per VM)

  • Windows Server 2012 R2
  • Storage Footprint: 600GB allocated, 500GB used
  • SQL Server 2014
    • Database Size: 1,500 scale
    • Virtual Client Load: 15,000
    • RAM Buffer: 48GB
  • Test Length: 3 hours
    • 2.5 hours preconditioning
    • 30 minutes sample period
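
For reference, the per-VM configuration listed above can be restated as data; the snippet below is just that restatement in Python, with the aggregate figures derived from the four-VM layout described earlier. It is not an actual Benchmark Factory script.

    # Restatement of the per-VM SQL Server test configuration listed above.
    SQL_VM_CONFIG = {
        "guest_os": "Windows Server 2012 R2",
        "sql_server": "2014",
        "vcpus": 16,
        "dram_gb": 64,
        "ram_buffer_gb": 48,
        "database_scale": 1_500,
        "virtual_clients": 15_000,
        "storage_gb": {"allocated": 600, "used": 500},
        "test_hours": 3.0,   # 2.5h preconditioning + 0.5h sample period
    }

    VM_COUNT = 4  # four 1,500-scale databases spread evenly across the server
    print("Aggregate scale:", VM_COUNT * SQL_VM_CONFIG["database_scale"])
    print("Aggregate clients:", VM_COUNT * SQL_VM_CONFIG["virtual_clients"])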

For our transactional SQL Server benchmark, the R281-NO0 posted an aggregate score of 12,645 TPS, with individual VMs ranging from 3,161.1 TPS to 3,161.5 TPS.

 

With SQL Server average latency, the server had an aggregate score, as well as individual VM scores, of 1ms.

 

Sysbench MySQL Performance

Our first local-storage application benchmark consists of a Percona MySQL OLTP database measured via SysBench. This test measures average TPS (Transactions Per Second), average latency, and average 99th percentile latency as well.

Each Sysbench VM is configured with three vDisks: one for boot (~92GB), one with the pre-built database (~447GB), and the third for the database under test (270GB). From a system resource perspective, we configured each VM with 16 vCPUs, 60GB of DRAM and leveraged the LSI Logic SAS SCSI controller.

Sysbench Testing Configuration (per VM)

  • CentOS 6.3 64-bit
  • Percona XtraDB 5.5.30-rel30.1
    • Database Tables: 100
    • Database Size: 10,000,000
    • Database Threads: 32
    • RAM Buffer: 24GB
  • Test Length: 3 hours
    • 2 hours preconditioning 32 threads
    • 1 hour 32 threads
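
For readers who want to approximate this setup, the sketch below drives a comparable run with current sysbench 1.0 syntax (oltp_read_write). The connection details are placeholders, and the lab's actual runs used the older Percona test scripts noted above, so treat this as an approximation rather than the exact harness.

    # Sketch of a comparable Sysbench OLTP run using sysbench 1.0 syntax.
    # Connection details are placeholders; the lab used older Percona scripts.
    import subprocess

    COMMON = [
        "sysbench", "oltp_read_write",
        "--tables=100", "--table-size=10000000",   # 100 tables, 10M rows each
        "--mysql-host=127.0.0.1", "--mysql-user=sbtest",
        "--mysql-password=sbtest", "--mysql-db=sbtest",
    ]

    subprocess.run(COMMON + ["prepare"], check=True)                              # build dataset
    subprocess.run(COMMON + ["--threads=32", "--time=7200", "run"], check=True)   # 2h precondition
    subprocess.run(COMMON + ["--threads=32", "--time=3600", "run"], check=True)   # 1h measured run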

With the Sysbench OLTP the GIGABYTE saw an aggregate score of 19,154.9 TPS.

With Sysbench latency, the server had an average of 13.37ms.

In our worst-case scenario (99th percentile) latency, the server saw 24.53ms for aggregate latency.

VDBench Workload Analysis

When it comes to benchmarking storage arrays, application testing is best, and synthetic testing comes in second place. While not a perfect representation of actual workloads, synthetic tests do help to baseline storage devices with a repeatability factor that makes it easy to do apples-to-apples comparisons between competing solutions. These workloads offer a range of testing profiles, from “four corners” tests and common database transfer-size tests to trace captures from different VDI environments. All of these tests leverage the common vdBench workload generator, with a scripting engine to automate and capture results over a large compute testing cluster. This allows us to repeat the same workloads across a wide range of storage devices, including flash arrays and individual storage devices.

Profiles:

  • 4K Random Read: 100% Read, 128 threads, 0-120% iorate
  • 4K Random Write: 100% Write, 64 threads, 0-120% iorate
  • 64K Sequential Read: 100% Read, 16 threads, 0-120% iorate
  • 64K Sequential Write: 100% Write, 8 threads, 0-120% iorate
  • Synthetic Database: SQL and Oracle
  • VDI Full Clone and Linked Clone Traces

With random 4K read, the GIGABYTE R281-NO0 started at 539,443 IOPS at 114.8µs and went on to peak at 5,326,746 IOPS at a latency of 238µs.

 

4K random write showed sub-100µs performance until about 3.25 million IOPS, and a peak of 3,390,371 IOPS at a latency of 262.1µs.

 

For sequential workloads we looked at 64k. For 64K read we saw peak performance of about 640K IOPS or 4GB/s at about 550µs latency before dropping off some.

 

64K write saw a sub 100µs performance until about 175K IOPS or 1.15GB/s and went on to peak at 259,779 IOPS or 1.62GB/s at a latency of 581.9µs before dropping off some.

 

Our next set of tests is our SQL workloads: SQL, SQL 90-10, and SQL 80-20. Starting with SQL, the GIGABYTE had a peak performance of 2,345,547 IOPS at a latency of 159.4µs.

 

With SQL 90-10 we saw the server peak at 2,411,654 IOPS with a latency of 156.1µs.

 

Our SQL 80-20 test had the server peak at 2,249,683 IOPS with a latency of 166.1µs.

Next up are our Oracle workloads: Oracle, Oracle 90-10, and Oracle 80-20. Starting with Oracle, the GIGABYTE R281-NO0 peaked at 2,240,831 IOPS at 165.3µs for latency.

 

Oracle 90-10 saw a peak performance of 1,883,800 IOPS at a latency of 136.2µs.

In Oracle 80-20 the server peaked at 1,842,053 IOPS at 139.3µs for latency.

 

Next, we switched over to our VDI clone test, Full and Linked. For VDI Full Clone (FC) Boot, the GIGABYTE peaked at 1,853,086 IOPS and a latency of 198µs.

Looking at VDI FC Initial Login, the server started at 83,797 IOPS at 86.7µs and went on to peak at 808,427 IOPS with a latency of 305.9µs before dropping off some.

 

VDI FC Monday Login saw the server peak at 693,431 IOPS at a latency of 207.6µs.

 

For VDI Linked Clone (LC) Boot, the GIGABYTE server peaked at 802,660 IOPS at 194µs for latency.

Looking at VDI LC Initial Login, the server saw a peak of 409,901 IOPS with 195.2µs latency.

Finally, VDI LC Monday Login had the server with a peak performance of 488,516 IOPS with a latency of 273µs.

Conclusion

The 2U GIGABYTE R281-NO0 is an all-NVMe server built for performance. The server leverages two 2nd generation Intel Xeon Scalable CPUs and supports up to 12 DIMMs per socket. Depending on the CPU choice, it supports DRAM speeds up to 2933MHz as well as Intel Optane PMEM; users can have up to 3TB of DRAM or a larger memory footprint with Optane. The storage setup is highly configurable, with the build we reviewed supporting 24 2.5″ NVMe SSDs. An interesting power feature is Cold Redundancy, which switches the extra PSU to standby mode when the system load is under 40%, saving power costs.

For performance testing we ran our Applications Analysis Workloads as well as our VDBench Workload Analysis. For Applications Analysis Workloads we started off with SQL Server. Here we saw an aggregate transactional score of 12,645 TPS with an average latency of 1ms. Moving on to Sysbench, the GIGABYTE server gave us an aggregate score of 19,154 TPS, an average latency of 13.37ms, and a worst-case scenario of only 24.53ms.

In our VDBench Workload Analysis the server put up some strong, impressive numbers. Peak highlights include 5.3 million IOPS for 4K read, 3.4 million IOPS for 4K write, 4GB/s for 64K read, and 1.62GB/s for 64K write. For our SQL workloads the server hit 2.3 million IOPS, 2.4 million IOPS for 90-10, and 2.3 million IOPS for 80-20. With Oracle we saw 2.2 million IOPS, 1.9 million IOPS for Oracle 90-10, and 1.8 million IOPS for 80-20. For our VDI clone tests, Full Clone saw 1.9 million IOPS for Boot, 808K IOPS for Initial Login, and 693K IOPS for Monday Login. For Linked Clone we saw 803K IOPS for Boot, 410K IOPS for Initial Login, and 489K IOPS for Monday Login.

The GIGABYTE R281-NO0 is a powerhouse of a server, capable of supporting a wide range of flash technologies. Being built around 2nd Generation Intel Xeon Scalable hardware, it also benefits from the newer CPUs’ support for Optane PMEM. The server offers plenty of configurability on the storage end and some nifty power benefits. We’re most enamored by the 24 NVMe SSD bays, of course; anyone with a high-performance storage need will be as well. This server from GIGABYTE is well designed to be a fantastic storage-heavy server for a variety of use cases.

GIGABYTE R281-NO0


StorageReview Podcast #36: Russell Resnick, Lenovo


This week’s podcast features Russell Resnick from Lenovo; Russell manages the single- and dual-CPU mainstream server platforms for Lenovo. Brian and Russell cover off on Lenovo’s AMD platforms specifically, paying special attention to the SR635 that StorageReview recently got into the test lab. The group also breaks down our terrible pain in migrating to the new website, a 94′ putt for a free car, HPE’s MicroServer, and many other topics loosely related to technology. Adam receives substantial heat for a terrible movie corner pick and hopes to redeem himself with this week’s pull from Hulu, The Art of Self Defense.


 

For those who prefer just the interview segment, we have a video of the discussion between Russell and Brian.


WD Red 14TB NAS HDD Review


Western Digital released a new line of Red NAS drives in November of 2019, and among the newly launched drives was a 14TB HDD, expanding the maximum capacity for the line’s HDDs. Purpose-built for NAS system compatibility, WD Red (non-Pro) drives are ideal for home and small business NAS systems with up to 8 bays running in a 24/7 environment, while supporting up to a 180 TB/year workload rate.


While there are plenty of drives on the market, desktop drives aren’t typically tested or designed for the rigors of a NAS system. Choosing a drive designed for NAS offers features tailored to help preserve your data and maintain optimal performance. WD Red drives come with NASware 3.0 technology to balance performance and reliability in NAS and RAID environments. With NAS systems being always on, a reliable drive is essential, and reliability is a foundation of WD Red NAS hard drives; they are designed and tested for 24/7 conditions.

We also have a video overview for those that are interested:

At the time of review, this drive could be picked up for $500 from Amazon.

WD Red 14TB NAS HDD Specifications:

Interface SATA 6Gb/s
Form Factor 3.5-inch
Capacity 14TB
Performance
Interface Transfer Rate 210 MB/s
Cache 512MB
Performance Class 5400 RPM
Reliability
Load/unload cycles 600,000
Non-recoverable read errors per bits read <1 in 10 billion
MTBF (hours) 1,000,000
Workload Rate (TB/year) 180
Warranty 3-year limited
Power Management
Average Power Requirements Read/Write – 4.1 W
Idle – 2.7 W
Standby and sleep 0.4 W
Environmental Specifications
Operating Temperature 0°C to 65°C
Shock (non-operating) 250 Gs
Physical Dimensions
Dimensions (WxDxH) 4 x 5.787 x 1.028 in (101.6 x 147 x 26.1 mm)
Weight 1.4 pounds ( 0.64 kilograms)

WD Red 14TB NAS HDD Review Configuration

In this review, we look at eight of the WD Red 14TB HDDs configured in RAID6 inside our QNAP TS-1685 NAS, comparing the data to the Seagate IronWolf 14TB HDDs in the same configuration. We use our Dell PowerEdge R730 with a Windows Server 2012 R2 VM as an FIO load generator.
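
RAID6 reserves two drives' worth of capacity for parity, so the raw usable space of an eight-drive set is easy to work out. The quick calculation below ignores filesystem overhead, hot spares and base-2 formatting, all of which reduce the figure you actually see on the NAS.

    # Raw usable capacity of an N-drive RAID6 set: two drives' worth goes to parity.
    def raid6_usable_tb(drives, drive_tb):
        assert drives >= 4, "RAID6 needs at least four drives"
        return (drives - 2) * drive_tb

    print(raid6_usable_tb(8, 14))   # 84 TB raw usable from eight 14TB WD Reds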

Performance

Enterprise Synthetic Workload Analysis

Our enterprise hard drive benchmark process preconditions each drive-set into steady-state with the same workload the device will be tested with under a heavy load of 16 threads, with an outstanding queue of 16 per thread. The device is then tested in set intervals in multiple thread/queue depth profiles to show performance under light and heavy usage. Since hard drives reach their rated performance level very quickly, we only graph out the main sections of each test.

Preconditioning and Primary Steady-State Tests:

  • Throughput (Read+Write IOPS Aggregate)
  • Average Latency (Read+Write Latency Averaged Together)
  • Max Latency (Peak Read or Write Latency)
  • Latency Standard Deviation (Read+Write Standard Deviation Averaged Together)

Our Enterprise Synthetic Workload Analysis includes four profiles based on real-world tasks. These profiles have been developed to make it easier to compare to our past benchmarks, as well as widely-published values such as max 4K read and write speed and 8K 70/30, which is commonly used for enterprise drives.

  • 4K
    • 100% Read or 100% Write
    • 100% 4K
  • 8K 70/30
    • 70% Read, 30% Write
    • 100% 8K
  • 128K (Sequential)
    • 100% Read or 100% Write
    • 100% 128K

Looking at our throughput test, which measures 4K random performance, the 14TB WD Red did well in iSCSI with 938 IOPS write and 8,806 IOPS read. In CIFS, the 14TB WD Red posted 734 IOPS write and 5,369 IOPS read.

Next, we move on to 4K average latency. The 14TB WD Red drive hit write latencies of 272.927ms and read latencies of 29.064ms in the iSCSI configuration, while CIFS measured 348.7ms write and 47.669ms read. The Seagate outperformed the WD in both configurations.

With 4K max latency, the 14TB WD Red showed 5,891.4ms and 1,306.8ms in iSCSI reads and writes, respectively. In CIFS, the 14TB hit 7,836ms read and 1,380.4ms write. Overall, the WD Red and IronWolf showed comparable read max latencies, but the WD Red fell behind when it came to write max latencies.

In standard deviation, the 14TB WD Red showed reads and writes of 60.002ms and 489.01ms in iSCSI, respectively, and 77.598ms and 444.98ms in CIFS.

The next benchmark tests the drives under 100% read/write activity, but this time at 8K sequential throughput. In iSCSI, the 14TB WD Red hit 227,267 IOPS read and 104,679 IOPS write, while CIFS saw a quarter the IOPS in read performance with 52,235 coupled with 41,272 IOPS write.

Our next test shifts focus from a pure 8K sequential 100% read/write scenario to a mixed 8K 70/30 workload. This will demonstrate how performance scales in a setting from 2T/2Q up to 16T/16Q. In CIFS, the 14TB WD Red started at 2,497 IOPS while ending at 2,157 IOPS in the terminal queue depths. In iSCSI, we saw a range of 643 IOPS to 1,698 IOPS.

With average latency at 8K 70/30, the 14TB WD Red showed a range of 6.2ms through 172.48ms in iSCSI, while CIFS showed a range of 1.59ms through 118.54ms.

In max latency, the 14TB WD Red posted a range of 1,136.11ms to 3,279.77ms in CIFS, while iSCSI showed 268.38ms through 4,783.24ms in the terminal queue depths.

In the standard deviation latency results, the 14TB WD Red peaked at 114.29ms (CIFS) and 331.27ms (iSCSI) in the terminal queue depths.

Our last test is the 128K benchmark, which is a large-block sequential test that shows the highest sequential transfer speed. The 14TB WD Red showed 1.71GB/s read and 1.19GB/s write in CIFS, while iSCSI had 1.97GB/s read and 1.99GB/s write.

Conclusion

The latest capacity of the WD Red line is a solid addition. This NAS-specific drive gives users the highest capacity possible at a relatively inexpensive price tag, while results from our performance charts reaffirm that the line is a good choice for the SOHO market and creative professionals. The 14TB WD Red drive offers solid performance with a few extra terabytes for increasing a user’s net storage capacity enough to make a difference. Some features of the drive include support of up to an 8-bay storage NAS system, support of up to 180 TB/yr workload rate, and optimum compatibility with NASware technology. The drive comes with a 3-year limited warranty.

As far as performance goes, the WD Red performed well for its given use cases. We tested the WD Red 14TB drives in both iSCSI and CIFS configurations in RAID6. In our synthetic workloads, the WD Red 14TB was up against a 14TB Seagate IronWolf. Here, the WD Red shined in 8K sequential throughput and 128K sequential throughput, hitting 227,267 IOPS and 1.97GB/s respectively in iSCSI read. In general, the WD Red fell behind when it came to latency; in the iSCSI configuration its 8K max latency was inconsistent, and it trailed behind in both 4K and 8K average latency in both iSCSI and CIFS configurations.

Overall, the WD Red 14TB NAS HDD is a reliable NAS drive that features great performance in specific configurations, while its massive capacity gives users the (budget-friendly) flexibility they need to grow as their data requirements expand.


More Advancements Added To Dell Technologies Cloud


Today, Dell Technologies announced new advancements to its Dell Technologies Cloud. A big advancement is the new subscription-based pricing model, which Dell believes will make the service easier to buy as well as lower the price barrier for entry. Dell claims that it provides the industry’s fastest hybrid cloud deployment, allowing a growing number of customers to accelerate hybrid cloud deployments and simplify IT operations.


Hybrid/multi-cloud approaches seem to be the way that most medium to large companies are going. There are now tons of options with the various combinations on the market today. Still, organizations are wary of going with certain solutions as they want freedom to be able to move workloads around in whichever cloud is best for the workload. At the same time, customers desire consistent cloud management that, according to ESG Research, is a long way off today with only 5% of respondents meeting the criteria for cloud management consistency.

Dell Technologies is looking to solve the above with its new subscription-based model of Cloud. The Dell Technologies Cloud is a combination of VMware and Dell EMC infrastructure. According to the company, users will be able to deploy a hybrid cloud in as little as two weeks and then scale the environment in just five days. The subscription includes hardware, software and services, which covers deployment, support and asset recovery. It is sold in one- or three-year agreements and can be priced as low as $70/node per day.
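
At the quoted entry price, the subscription math is straightforward; the figures below simply annualize the $70/node/day number and are not a Dell quote, and the four-node cluster size is a hypothetical example rather than a stated minimum.

    # Annualized cost at the quoted entry price of $70 per node per day.
    per_node_per_day = 70    # USD, from the announcement
    nodes = 4                # hypothetical starting cluster size, for illustration
    print(f"~${per_node_per_day * nodes * 365:,} per year for {nodes} nodes")  # $102,200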

Customer benefits include:

  • Avoid hidden costs or unpredictable forecasting risks
  • Lower the barrier to entry for hybrid cloud by starting with what is required today and scaling as needs change over time
  • Optimize investments from deployment through expansion and retirement with Dell Technologies Services experts

Availability

Dell Technologies Cloud, specifically VMware Cloud Foundation on Dell EMC VxRail, is available globally. The Dell Technologies Cloud subscription offering is available now in the United States.

Dell Technologies Cloud



Kingston DC1000M Enterprise NVMe SSD Released


Today Kingston Digital, Inc. has launched their latest SSD, the DC1000M. Designed to be an affordable drive with NVMe performance, the DC1000M targets enterprises looking to replace aging SATA or SAS-based systems. The drive comes in a U.2 form factor and capacities up to 7.68TB. At the high end, performance is specced at 3,100MB/s sequential read and 2,800MB/s sequential write in the 7.68TB drive. 4k read IOPS top out at 540,000 on the 1.92TB drive and write IOPS hit 210,000 on the 3.84TB and 7.68TB drives.


Kingston DC1000M

Kingston has been on quite a tear lately, launching several SSDs to target increasingly specific use cases in the enterprise. In their entry-enterprise SATA category we reviewed the DC500R, DC500M, and the DC450R. We also took a look at the DC1000B, an M.2 NVMe drive. By adding the DC1000M, Kingston creates further breadth in its NVMe portfolio, which will make up an increasing number of unit sales as SATA and SAS drives edge toward retirement.

Kingston DC1000M Specifications

    • Form Factor: U.2, 2.5″ x 15mm
    • Interface: NVMe PCIe Gen 3.0 x4
    • Capacities: 960GB, 1.92TB, 3.84TB, 7.68TB
    • NAND: 3D TLC
    • Sequential Read/Write:
      • 960GB – 3,100MBs/1,330MBs
      • 1.92TB – 3,100MBs/2,600MBs
      • 3.84TB – 3,100MBs/2,700MBs
      • 7.68TB – 3,100MBs/2,800MBs
    • Steady-State 4k Read/Write:
      • 960GB – 400,000/125,000 IOPS
      • 1.92TB – 540,000/205,000 IOPS
      • 3.84TB – 525,000/210,000 IOPS
      • 7.68TB – 485,000/210,000 IOPS
    • Latency: TYP Read/Write: <300 µs / <1 ms
    • Static and Dynamic Wear Leveling
    • Power Loss Protection (Power Caps)
    • Enterprise SMART tools: Reliability tracking, usage statistics, SSD life remaining, wear leveling, temperature
    • Endurance:
      • 960GB — (1 DWPD/5yrs)
      • 1.92TB — (1 DWPD/5yrs)
      • 3.84TB — (1 DWPD/5yrs)
      • 7.68TB — (1 DWPD/5yrs)
    • Power Consumption:
      • 960GB:   Idle: 5.14W   Average Read: 5.25W    Average Write: 9.10W    Max Read: 5.64W     Max Write: 9.80W
      • 1.92TB:  Idle: 5.22W   Average Read: 5.31W    Average Write: 13.1W    Max Read: 5.70W     Max Write: 13.92W
      • 3.84TB:   Idle: 5.54W  Average Read: 5.31W    Average Write: 14.69W   Max Read: 6.10W     Max Write: 15.5W
      • 7.68TB:   Idle: 5.74W  Average Read: 5.99W   Average Write: 17.06W   Max Read: 6.63W     Max Write: 17.88W
    • Storage temperature: -40°C ~ 85°C
    • Operating temperature: 0°C ~ 70°C
    • Dimensions: 100.09mm x 69.84mm x 14.75mm
    • Weight: 160(g)
    • Vibration operating: 2.17G Peak (7–800Hz)
    • Vibration non-operating: 20G Peak (10–2000Hz)
    • MTBF: 2 million hours
    • Warranty/support: Limited 5-year warranty with free technical support

Availability

The DC1000M comes in 960GB (SEDC1000M/960G), 1.92TB (SEDC1000M/1920G), 3.84TB (SEDC1000M/3840G) and 7.68TB (SEDC1000M/7680G) capacities with a five-year warranty and is available now.

Kingston Product Page


Supermicro Announces Pole Server


Today, Supermicro announced a pole-mounted server. The new IP65 enclosure-based servers are targeting the 5G and other edge-focused markets. Regular readers are probably already familiar with the company from our articles like our recent review of Supermicro’s 6019P server, but I’m still going to provide a little background information on them for new readers. The company (Super Micro Computer Incorporated, SMCI) was founded in 1993 and is one of the fastest-growing IT companies in the world. Supermicro provides a wide range of products and servers, most focused around cloud software and hardware.


 

Supermicro Pole Server

The new IP65 enclosure-based servers run on Intel Xeon D and 2nd Gen Intel Xeon processors. Expansion capability takes the form of three PCI-E slots and support for a range of storage formats and form factors, including SSD, M.2, and EDSFF drives.

While Supermicro is hoping customers will find the new server useful for AI inferencing and other edge-focused applications, most of the focus for these appliances seems to be on the 5G RAN market. Supermicro is a contributing member of the O-RAN (open radio access network) alliance. As such, it is positioning the new appliance as the go-to choice for 5G RAN edge servers.

Supermicro Servers


Retrospect Releases Backup 17 & Virtual 2020


Today, Retrospect released Retrospect Backup 17 and Retrospect Virtual 2020, as well as updates to its Retrospect Management Console. Retrospect is a surprisingly old company that’s had its share of name changes since it was founded as Dantz Development in 1984. Dantz Development was bought by EMC in 2004 only to be sold to Sonic Solutions 6 years later in 2010. 2010 was a turbulent year for the company, with Rovi acquiring Sonic Solutions, and thus Retrospect less than a year after being bought. A year later, in 2011, Rovi spun Retrospect back out as an independent company focused entirely on their eponymous backup software. Just last year, Retrospect was acquired again, this time by StorCentric, who owns them today. Retrospect’s products provide data and endpoint backups.


StorCentric Retrospect Management Console Backup

Retrospect Backup has been the core product of the company since the first version was released back in 1989, more than thirty years ago. As the name suggests, it has a laser-like focus on providing backup services across a wide range of environments. As part of that commitment, Backup includes a service Retrospect calls ProactiveAI, which attempts to optimize the backup window for your entire environment. With version 17, Retrospect says the backup process is ten times faster for customers with endpoints that are offline at various times. Handy, since most people turn their computers, and especially laptops, off on no particular schedule. Version 17 further demonstrates Retrospect's commitment to supporting a large number of environments by adding support and certification for Nexsan E-Series and Nexsan Unity storage devices. Backup 17 also adds support for exporting a preflight backup summary, including backup files and tape information.

Retrospect Virtual is targeted at VMware and Hyper-V virtual environments. Retrospect Virtual 2020 is, according to Retrospect, 50% faster at backup. Much like the non-virtual version, Virtual 2020 also adds support for new environments, updating Linux support with Ubuntu 19.04 and Red Hat Enterprise Linux 8. The newest version of Retrospect Virtual also adds support for Backblaze B2 and Wasabi Cloud as destinations.

Availability

Retrospect Backup 17 and Retrospect Virtual 2020 are both available immediately.

Retrospect Main Site


The post Retrospect Releases Backup 17 & Virtual 2020 appeared first on StorageReview.com.

SolidRun Announces Energy Efficient Edge Server

solidrun janux gs31

SolidRun announced what they're calling an "AI Inference Server," the Janux GS31. The Janux GS31 has a 1U form factor, supports up to 128 Gyrfalcon Technology Inc. SPR2803 AI acceleration chips, and targets the edge computing market. SolidRun was founded in 2010 and is known for their tiny ARM-architecture CuBox computers, which are cubes about two inches on a side. They also provide other edge and IoT products and services. Gyrfalcon Technology was founded more recently, early in 2017, and primarily develops low-power AI processors.


The Janux GS31 is, like many other SolidRun products, an ARM-based appliance. It can support decoding and video analytics for up to a staggering 128 channels of 1080p/60Hz video. That many channels is honestly overkill for all but the largest surveillance needs, but it comes in a package small enough to be used virtually anywhere. Gyrfalcon's Lightspeeur 2803S Neural Accelerator chips make it surprisingly energy efficient, delivering up to 24 TOPS per watt. The entire appliance has a maximum power consumption of 900W (single phase) and accepts 100V~240V input via an IEC 60320 connector.

SolidRun Janux GS31


The post SolidRun Announces Energy Efficient Edge Server appeared first on StorageReview.com.

Dell Wyse 5470 Mobile Thin Client Review

Dell Wyse 5470

The Wyse 5470 client is geared towards mobile virtual desktop infrastructure (VDI) users that need a laptop form factor VDI client. To give a brief overview of its specifications, the Wyse 5470 is a laptop format, VDI client that has a 14” screen. It is powered by an Intel Celeron N4000 (2 Cores/4MB) or an Intel Celeron N4100 (4 Cores/4MB), it runs Wyse ThinOS (with optional PCoIP), Wyse ThinLinux or Windows 10 IoT Enterprise. It supports all major VDI environments.


In this article, we will give an in-depth overview of the Wyse 5470 VDI client’s specifications, design and build quality, and a summary of the testing that we carried out on it over two weeks. We will then lay out the key findings from those tests and provide our thoughts about the device and briefly discuss who would benefit from this product.

The product is labeled as a Wyse 5470 but the outer cover and the screen bezel have the Dell logo. In this article we will refer to it as a Wyse.

Dell Wyse 5470

Wyse 5470 Specifications

Manufacturer WYSE
Model Wyse 5470
MSRP starting at $699 USD
Client type mobile thin client
Form factor 14” laptop
OS Wyse ThinOS (with optional PCoIP)
Wyse ThinLinux or Windows 10 IoT Enterprise
Supported remote display protocols Microsoft RDP
VMware Horizon RDP/PCoIP
Blast Extreme
Citrix ICA/HDX (not all OS support all protocols)
CPU Intel Celeron N4000 (2 Cores/4MB/2T/up to 2.6GHz/6W)
Intel Celeron N4100 (4 Cores/4MB/4T/up to 2.4GHz/6W)
Memory 4GB 1x4GB, 2400MHz DDR4
Storage 16GB eMMC included in chassis
Display 14.0″ HD (1366×768) Anti-Glare, Non-Touch, Camera
14.0″ FHD (1920 x 1080) Anti-Glare, Non-Touch, Camera
14.0″ FHD (1920 x 1080) Anti-Glare, Touch, Camera
Battery capacity 3 Cell 42Whr ExpressCharge Capable Battery
Power 19V, 65W, 3.33A external power adapter
Ports 1 x SD Memory Card Reader
1 x USB 2.0 with PowerShare
1 x VGA out
1 x Noble Wedge Lock Slot
1 x Power Connection
1 x USB Type C 3.1 Gen 1 with Power Delivery & DisplayPort 1.2
1 x HDMI 2.0a for 4K external display
1 x RJ-45
2 x USB 3.0
Audio Jack
Multimedia Integrated Stereo Speakers
3.5 mm audio out/in jack
Network connectivity RJ45 – 10/100/1000Mb
Intel Wireless-AC 9560, Dual-band 2×2 802.11ac Wi-Fi with MU-MIMO
Bluetooth 5 Combo
Keyboard Single Pointing Backlit Keyboard
Touch Pad Wyse Clickpad with multi-touch gestures enabled
Webcam HD camera
Physical size: 0.81″ (20.6mm) x 13″ (330.3mm) x 9.3″ (238mm)
Weight 3.96 lbs. (1.8 kg)
Warranty 3-year parts and labor

Design and Build

The cardboard packaging box that the device came in was heavy and well designed. The device itself was nestled between three black foam blocks and wrapped in an antistatic plastic bag, with a piece of black material between the screen and the keyboard. The box also contained the power supply and a warranty and setup guide.

The right side of the device has an SD memory card reader, a USB 2.0 port with PowerShare, VGA out, and a Noble Wedge lock slot. The left side of the device has the power connection port, a USB Type C 3.1 Gen 1 port with power delivery & DisplayPort 1.2, HDMI 2.0a for a 4K external display, RJ-45, a USB 3.0 port, and a 3.5mm audio jack.

Dell Wyse 5470 side

The entire case is made of black plastic with ventilation holes on the bottom. Opening the lid exposes a 14” LED screen with a built-in webcam at the top of it. The webcam has an LED to the right of it to indicate when it is on. The screen is framed with a black plastic border measuring 17mm on the top, 33mm on the bottom and 10mm on the sides. The device has a full-sized keyboard but no separate number pad. The touchpad beneath the keyboard is 104mm x 65mm. The power button is at the upper right corner of the keyboard. Overall, the case on this device is on par with what you would expect on a business laptop.

Dell Wyse 5470 Bottom

The case is held together with 9 Phillips-head screws on the bottom of the device. Even after removing these screws, we couldn't pry the bottom off the device to inspect the motherboard and assess its build quality.

Usability

The real test of a VDI client is its usability; to test the usability of the 5470, we used the client for two weeks in our Pacific Northwest lab with various configurations. Below are the key results we noted during our time using the client.

To test the 5470, we connected it via a Cat 6 cable from the device's RJ45 port to a 1Gb switch, which in turn was connected to either a server or a WAN router. The server was hosting our local VMware Horizon virtual desktop. In order to create a controlled environment, the network was monitored during testing to ensure that no other traffic was present.

Initial Configuration

We powered on the device, and it took 28 seconds to boot and obtain an IP address. We did not need to log in to the system; it logged us in as a preconfigured user. The screen looked like a regular desktop, and the start menu had icons for system setup and configuration. We needed to configure the system to use a static name server; to do this, we clicked on the desktop menu, clicked System Setup, then clicked Network Setup and entered our DNS domain and server.

Dell Wyse 5470 config net

Local Horizon Desktop

For two weeks we used the device with a local Horizon virtual desktop to do our daily tasks.

We brought up the Horizon client and configured it to connect to our local Horizon desktop.

Dell Wyse 5470 config

Once we were connected to our Horizon desktop, we noticed that the resolution of the Horizon desktop was set to the native 1920 x 1080 resolution of the 5470. The virtual desktop that we used ran Windows 10 (1607), had 2 vCPUs, 8 GB of memory, and 50 GB of NVMe-based storage.

The first test we conducted was to use VLC to play a video (1280 x 720 @ 712kbps) that was stored on the virtual desktop. First, we played the video using a quarter of the display, and then once again in full-screen mode. In quarter-scale the video played without dropping any frames, and in full-screen mode it played with just a slight amount of jitter. The audio played flawlessly through the device's built-in speakers in both quarter-scale and full-screen modes. The speakers were loud and clear.

Dell Wyse 5470 video capture

While playing the video we monitored the connection using ControlUp to track bandwidth consumption and user input delay. We hope to use these metrics going forward to quantify VDI client performance. We noticed a user input delay of 16ms when playing the video in full-screen mode versus 0ms in quarter-scale mode.

Dell Wyse 5470 cpu 1

Dell Wyse 5470 cpu 2

To further test the device, we connected a Jabra voice 150 headset to a USB connection; the Jabra headset was discovered by the virtual desktop and worked without any issues.

We used the client for daily activities for two weeks without any problems. This included using Microsoft Office applications, Chrome web browser, playing internet-streaming music, etc. During this time the device performed flawlessly.

Using Other Protocols

WYSE advertises the device as working with PCoIP, VMware Horizon Blast, and Citrix HDX. We tested it with Blast and PCoIP without any noticeable difference.

Other Configurations

As this is a basic review of the device, we only tested it using two VDI client protocols (PCoIP and Blast); we did not test the device under the following circumstances: adverse network conditions, using communication software such as Skype, or using any of the advanced features of the device such as accessing local storage on the device from the virtual desktop. However, we did test it with a secondary monitor and a wireless keyboard and mouse.

Device Management

The device can be used with Wyse Management Suite (WMS) for centralized, server-based administration of Wyse thin clients. WMS is beyond the scope of this review.

Conclusion

Wyse did a good job bringing a business-quality VDI laptop to the market. Aesthetically it compares favorably with a Dell Latitude, HP Pavilion, and other business laptops. As VDI becomes more and more mainstream, we will see more VDI devices that allow workers to take work with them. This would be a fine choice for VDI users that need a portable, rugged and perfectly capable mobile VDI client.

Wyse 5470


The post Dell Wyse 5470 Mobile Thin Client Review appeared first on StorageReview.com.

Western Digital Announces WD Gold NVMe SSDs

WD Gold NVMe

Today Western Digital announced new NVMe SSDs in their WD Gold family. The new SSDs are aimed at small and medium-sized enterprises (SMEs) to help them transition to NVMe storage and all the benefits that come with it such as much higher performance and ultra-low latency. This marks the first NVMe product in the WD Gold family and broadens the company’s overall data center offerings.


NVMe SSDs have really taken off as their price has come down. They offer up to five times the sequential performance of SATA SSDs and can help leverage the advancements in multi-core, multi-threaded CPUs. According to IDC, NVMe unit shipments are expected to exceed 79% of all SSD shipments by 2023. With the introduction of the WD Gold NVMe SSDs, Western Digital is giving customers more storage options going forward.

The new drives come in four capacities: 960GB, 1.92TB, 3.84TB, and 7.68TB. The SSDs are based on Western Digital's silicon-to-system expertise, from its 3D TLC NAND media to its purpose-built firmware and its own integrated controller. The WD Gold NVMe SSDs are designed to be primary storage in servers, delivering superior response times, higher throughput and greater scale than existing SATA devices for enterprise applications. The drives come with a five-year limited warranty as well as secure boot and secure erase.

WD Gold NVMe SSDs Key Specifications

Interface U.2 7mm PCIe Gen3.1 x4
Formatted Capacity 960GB 1.92TB 3.84TB 7.68TB
Performance
Read Throughput (max MiB/s, Seq 128KiB) 3K 3.1K
Write Throughput (max MiB/s, Seq 128KiB) 1.1K 2K 1.8K
Read IOPS (max, Rnd 4KiB) 413K 472K 469K 467K
Write IOPS (max, Rnd 4KiB) 44K 63K 65K
Mixed IOPS (max, 70/30 R/W, 4KiB) 111K 194K 174K 187K
Latency (μs, 4KiB Random Read QD1, 99%) 210 208 221 225
Maximum Petabytes Written 1.4 2.8 5.61 11.21
Endurance (DW/D) 0.8
Power
Requirement (DC, +/- 10%) +12V
Operating Modes (W, Average) 10, 11, 12
Idle (W, Average) 4.6 4.62 4.94 4.95
Reliability
MTBF 2
Uncorrectable Bit Error Rate (UBER) 1 in 10^17
Limited Warranty 5-year
Physical Size
z-height (mm) 7.00 +0.2/-0.5 (including labels)
Dimensions (width x length, mm)

Weight (g. max)

69.85 (+/- 0.25) x 100.45

95

Environmental
Operating Temperature 0°C to 70°C
Non-operating Temperature -40°C to 85°C
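
As a point of reference for the QD1 latency row in the table above, figures like this are typically gathered with a synthetic I/O tool such as fio on Linux. The following is a minimal sketch under stated assumptions, not Western Digital's test methodology: the device path is hypothetical, and the JSON percentile key layout can vary slightly between fio versions.

```python
# Minimal sketch: measure 4KiB random-read latency at queue depth 1 with fio.
# Assumes fio is installed, the script runs with sufficient privileges, and
# /dev/nvme0n1 is a hypothetical drive you are allowed to read from.
import json
import subprocess

DEVICE = "/dev/nvme0n1"  # hypothetical device path

cmd = [
    "fio",
    "--name=qd1-randread",
    f"--filename={DEVICE}",
    "--ioengine=libaio",
    "--direct=1",        # bypass the page cache so the drive itself is measured
    "--rw=randread",
    "--bs=4k",           # 4KiB blocks, matching the spec table
    "--iodepth=1",       # queue depth 1, matching the QD1 latency row
    "--runtime=60",
    "--time_based",
    "--output-format=json",
]

result = subprocess.run(cmd, capture_output=True, text=True, check=True)
job = json.loads(result.stdout)["jobs"][0]

# fio reports completion latency in nanoseconds; treat the percentile key as illustrative.
p99_ns = job["read"]["clat_ns"]["percentile"]["99.000000"]
print(f"99th percentile 4KiB QD1 read latency: {p99_ns / 1000:.0f} us")
```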

 

Availability

The new WD Gold NVMe SSDs are expected to ship in the second calendar quarter of 2020. Prices start as low as $242 for the 960GB model.

WD Gold NVMe SSDs


The post Western Digital Announces WD Gold NVMe SSDs appeared first on StorageReview.com.

FreeNAS & TrueNAS Get Name Changes

iXsystems TrueNAS 12.0

iXsystems announced today that it was making some changes around its popular open source storage software. iXsystems has two major products, the ever popular FreeNAS and the enterprise TrueNAS. With the next update the two will become one, with a few caveats.


First, to assuage the fears of FreeNAS users, FreeNAS will remain free. For those that leverage the enterprise version, TrueNAS, the company is only looking to improve and move forward. Last month, the two versions gained parity in version 11.3. Version 12.0, coming out later this year, unifies both products into a single software image and name. As of version 11.3, both systems already shared 95% of the same source code. Now they will have a common open source codebase, unified documentation, and a shared product name.

Benefits of the combination include:

  • Rapid Development: Unified images accelerate software development and releases (for example, 12.0 is a major release that would normally have taken 9-12 months to release, and with these new efficiencies, iXsystems can bring that closer to six months)
  • Improved Quality: Reduced development redundancy and unified QA increases software quality and allows the company to streamline testing
  • Earlier Hardware Enablement: Staying in-sync with upstream OS versions will be easier, allowing earlier access to newer hardware drivers. For instance, 12.0 brings improved support for AMD EPYC / Ryzen platforms and enhanced NUMA support for more efficient CPU core handling.
  • Simplified Documentation: Unified documentation eliminates redundancy such as separate user guides
  • Reduced Redundancy: Unified web content and videos refer to one software family without the need for duplication.
  • Flexibility: Unified images enable simpler transitions or upgrades between editions
  • Resource efficiency: frees up developers to work on new features and related products
  • OpenZFS 2.0: The planning for the “unified” 12.0 release began over a year ago and included a major investment in the development and integration of what will soon be released as “OpenZFS 2.0”. This effort is fast-forwarding delivery of advances like dataset encryption, major performance improvements, and compatibility with Linux ZFS pools (a brief sketch of what native dataset encryption looks like follows this list)
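
As a quick illustration of the dataset encryption item above, here is a minimal sketch of what OpenZFS native encryption looks like from the command line, wrapped in Python to keep the examples in this piece consistent. The pool and dataset names are hypothetical, and this is not iXsystems documentation; TrueNAS CORE exposes the same capability through its web UI.

```python
# Minimal sketch: create a natively encrypted OpenZFS dataset.
# Assumes a pool named "tank" already exists (hypothetical) and the zfs CLI is installed.
import subprocess

subprocess.run(
    [
        "zfs", "create",
        "-o", "encryption=aes-256-gcm",  # enable native OpenZFS encryption
        "-o", "keyformat=passphrase",    # key supplied as a passphrase
        "-o", "keylocation=prompt",      # zfs prompts for the passphrase interactively
        "tank/secure",                   # hypothetical pool/dataset name
    ],
    check=True,
)
```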

As stated, FreeNAS is very popular; iXsystems states it has been the #1 open source storage software since 2012. The name is well loved as well. It tells users right up front that it is free. Free does, however, carry a stigma of perhaps not being worth using in larger environments, which has cost it some traction. In that vein, iXsystems has renamed FreeNAS to TrueNAS CORE. The name has changed, but everything else about the product remains the same. TrueNAS CORE is the core of the enterprise edition as well as a nifty acronym: Community supported, Open source, Rapid development, Early availability. Like FreeNAS, TrueNAS CORE will be free to use without restriction.

TrueNAS will have a less drastic name change to TrueNAS Enterprise. Keeping all the benefits it is known for, TrueNAS Enterprise now has a name that can’t be mistaken for its intended use case. TrueNAS Enterprise will have all of the same enclosure management, high availability, and support that TrueNAS 11.3 benefits from along with all the TrueNAS CORE features.

Availability

iXsystems will offer a TrueNAS 12.0 preview to let users check it out and offer feedback. 12.0 will go through the usual ALPHA, BETA, RC1, and RELEASE stages that FreeNAS releases have gone through. The company plans to release TrueNAS CORE 12.0 in the third quarter of this year.

iXsystems


The post FreeNAS & TrueNAS Get Name Changes appeared first on StorageReview.com.


News Bits: Nexsan, KIOXIA, Veeam, Datrium, StorONE, & NVIDIA


In this week’s News Bits we look at a number of small announcements, small in terms of the content, not the impact they have. Nexsan releases new BEAST Elite models. KIOXIA adds a provisioner service to KumoScale. Insight Partners completes its acquisition of Veeam. Datrium adds five new patents. StorONE buys Storage Switzerland. NVIDIA acquires SwiftStack.


Nexsan Releases New BEAST Elite Models

Nexsan Beast

StorCentric company Nexsan announced that it is expanding its high-density BEAST storage platform with two new Elite models. The first addition is the BEAST Elite, for demanding storage environments such as media and entertainment, surveillance, government, health care, financial services and backup. The BEAST platform supports HDDs up to 16TB, meaning one can hit a density of 2.88PB in a 12U system. Nexsan also released the BEAST Elite F, which supports QLC NAND technology. The BEAST Elite F is for storage environments that need more performance while keeping SSD costs in line.

Nexsan BEAST

KIOXIA Adds Provisioner Service To KumoScale

KIOXIA KumoScale

KIOXIA announced that it has added a provisioner service to its NVMe-oF storage software, KumoScale. According to the company, the KumoScale Provisioner Service works by tracking the fleet of SSDs and KumoScale storage nodes and managing the dynamic mapping of user volumes to nodes and physical drives. Features include:

  • Performs intelligent mapping of user-specified storage volumes to KumoScale nodes and physical drives
  • Processes provisioning requests: Selects the best KumoScale node, chooses the SSD and how to map it, and creates the volume via REST API

KIOXIA KumoScale

Insight Partners Completes Acquisition Of Veeam

veeam logo

Insight Partners completed its acquisition of Veeam Software for roughly $5 billion. The following appointments have been made:

  • William H. Largent has been promoted to Chief Executive Officer (CEO). He previously held the role of Executive Vice President (EVP), Operations.
  • Danny Allan has been promoted to Chief Technology Officer (CTO).
  • Gil Vega, previously Managing Director and CISO at CME Group, Inc. and the Associate Chief Information Officer & CISO for the U.S. Department of Energy and U.S. Immigration & Customs Enforcement in Washington, DC, has been appointed Chief Information Security Officer (CISO).
  • Nick Ayers, of Ayers Neugebauer & Company, a member of the World Economic Forum’s Young Global Leaders and former Chief of Staff to the Vice President of the United States, joins Insight Partners Managing Directors Mike Triplett, Ryan Hinkle, and Ross Devor on the Veeam Board of Directors.

Veeam

Datrium Adds 5 New Patents

Datrium DVX

Datrium announced that it has been awarded five new U.S. patents. The patents are as follows:

  • Blanket Encryption. Datrium US patent #10,540,504 is a method for preserving deduplication while providing Blanket Encryption—in use, in flight and at rest—in distributed storage systems. This advancement enables the economics of state-of-the-art cloud backup storage, while using the best encryption possible to combat emerging threats in today’s era of advanced cybercrime.
  • Split Provisioning suitable for public cloud deployment. Datrium US Patent #10,180,948 complements US patent #10,140,136 and #10,359,945 (below) and extends Datrium’s Split Provisioning to include host caching and non-volatile storage as a separated part of a scaleout storage pool. This Split Provisioning architecture enables Datrium to store data economically in blob storage such as AWS S3 and restart workloads with high performance in on-demand provisioned compute resources to respond to a disaster.
  • Managing non-volatile storage as a shared resource in a distributed system. US patent #10,359,945 is a lightweight method for efficiently managing a shared pool of high-speed, non-volatile (NV) storage in a distributed system. It also enables low-latency writes in the cloud even when the bulk of the data is stored in high-latency, but cost-effective blob storage.
  • Resilient writes in a degraded distributed erasure-coded storage system with key-based addressing. US patent #10,514,982 is a core element of Datrium Automatrix technology. It shows how to store data with full redundancy and durability even in the face of transient node or drive failures in distributed erasure-coded systems. Most modern systems have mechanisms to eventually recover from a node or drive failure, but there is typically a window after a drive fails and before recovery when new data is stored in degraded mode with reduced durability. With this technology, individual nodes can fail and Datrium will maintain the same level of durability, eliminating this window of data vulnerability. When any of the storage devices in the system become inaccessible, the chunks nominally designated to be written to the device are instead written to alternate devices.
  • Data path monitoring in a distributed storage network. Datrium US patent #10,554,520 provides an improved method for distributed storage system network resilience that will work in any cloud and does not rely on the Link Aggregation Control Protocol (LACP), which is fragile and not available in all clouds. With Datrium’s advancements, host software and persistent storage pool software communicate with each other to assess link status and direct data flow to the best paths. Given that networking software in the cloud can fail unpredictably, this method offers enterprises a strategic improvement to storage resilience.

Datrium

StorONE Buys Storage Switzerland

StorONE UI

StorONE has acquired Storage Switzerland, a leading analyst firm covering the storage, backup, and cloud markets. Through this acquisition, the company has made George Crump its Chief Marketing Officer. That's one way to pick up a CMO. StorONE, as well as Crump, is primarily focused on the StorONE S1 Enterprise Storage Platform launched this week.

StorONE

NVIDIA Acquires SwiftStack

SwiftStack 5

NVIDIA has signed a definitive agreement to acquire SwiftStack. SwiftStack wants to assure existing customers that it will continue to maintain, enhance, and support 1space, ProxyFS, Swift, and the Controller, while working with NVIDIA to continue to solve challenges around AI at scale. The transaction is expected to close in the next few weeks, subject to customary conditions.

NVIDIA

SwiftStack

The post News Bits: Nexsan, KIOXIA, Veeam, Datrium, StorONE, & NVIDIA appeared first on StorageReview.com.

StorageReview Podcast #37: Matt Hallberg, Kioxia America

Podcast 37: Kioxia

 

This week's podcast features an interview with Matt Hallberg from Kioxia America, formerly Toshiba. Matt talks about the benefits of the U.3 form factor for modern SSDs, as well as the performance benefits of moving to PCIe Gen4. We also invite in a homelabber who was wandering around looking for power cords to sit in with the gang. The crew covers off on the news of the week, including several new SSDs, along with a healthy discussion about thin clients, Intel NUCs and very much more. This week Adam's Movie Corner feature film is 2012's Lawless.


The post StorageReview Podcast #37: Matt Hallberg, Kioxia America appeared first on StorageReview.com.

Evolving Storage with SFF-TA-1001 (U.3) Universal Drive Bays


IT departments are challenged with having to choose and configure data storage to meet present-day and future data center, system and end-user requirements for their organizations. They have to predict application use, workload sizes, performance needs and capacity expectations for years to come. Determining these requirements, and then implementing a storage strategy that meets these needs for today and tomorrow, is a tremendous undertaking for any IT department.


As technology evolves, upgrades to the storage system present another challenge to IT and are typically limited by the original hardware purchase. For example, if a SATA-based storage infrastructure had been deployed, all hardware upgrades including the server backplane, storage controller and replacement drives would need to be SATA- or possibly SAS-based. In order for storage to evolve to the next level, compute systems must be built to support required applications using current and future resources. If these objectives are achieved, the end result for IT can be significant reductions in storage cost and system complexity.

With the advent of the SFF-TA-1001 specification [1] (also known as U.3), the storage industry is moving closer to configuring storage for present-day and future application requirements. U.3 is a term that refers to compliance with the SFF-TA-1001 specification, which also requires compliance with the SFF-8639 Module specification [2]. Solutions based on U.3 can be achieved with a tri-mode configuration that utilizes a single backplane and controller, supporting all three drive interfaces (SAS, SATA and PCIe®) from one server slot. Regardless of the interface, SAS and SATA SSDs and hard drives, as well as NVMe™ SSDs, are interchangeable within U.3-based servers and can be used in the same physical slot. U.3 addresses a number of industry needs, all while protecting the initial storage investment.

Industry Challenge

Today’s server storage architectures are challenged in the way they accommodate mixed or tiered environments. Within any particular server, storage may require combinations of hard drives and SSDs configured with varied interfaces depending on the needs of the workload. For example, an engineering team may require fast NVMe drives to test code in their development environments. Another workgroup may require SAS drives to achieve high availability and fault tolerance for their revenue-generating database. And, another group may rely on capacity-optimized SATA drives or value SAS drives for analyzing cold data in real-time. Whatever the application is, portions of the server can be segmented to address the varied use cases.

Without U.3 from a server design perspective, OEMs need to develop multiple backplanes, mid-planes and controllers to accommodate all of the available drive interfaces, which creates a challenging abundance of SKUs and purchasing options for customers to choose from.

Drive consolidation took an initial step forward when the SAS interface enabled enterprise SATA SSDs and HDDs to connect to SAS backplanes, HBAs or RAID controllers. This capability continues today as most servers ship with SAS HBAs or RAID cards that enable SAS and SATA SSDs/HDDs to be used in the same drive bay. Though SATA drives can be easily swapped with SAS drives, there was no support for NVMe SSDs as they still required a separate configuration that utilizes an NVMe-enabled backplane (Figure 1).

Separate storage configurations for SAS/SATA & PCIe

Figure 1 depicts separate backplanes required for SAS, SATA and PCIe interfaces

Support for NVMe SSDs as part of the drive consolidation strategy is extremely important as these deployments are on the rise due to the significant performance improvements they deliver over SAS and SATA SSDs. Unit consumption of NVMe SSDs in the enterprise (including both data center and enterprise versions) is expected to represent over 42.5% of all SSDs by the end of 2019 [3]. Unit consumption in the enterprise will increase to over 75% by the end of 2021, and over 91% by the end of 2023 [3]. At present, NVMe-based server, infrastructure and RAID controller options are in their early stages, requiring many data centers to continue using SAS-based RAID hardware to provide a mature, robust level of fault tolerance and performance. Migrating directly to NVMe storage will typically require the purchase of new NVMe-enabled servers that use an NVMe-specific backplane and controller.

The next step in supporting all three SSD protocols with one common infrastructure occurred with the availability of the SFF-8639 connector, in conjunction with the development of the SFF-8639 Module specification. This connector was designed to support up to four lanes of PCIe for NVMe SSDs, and up to two lanes for SAS/SATA HDDs or SSDs. Compliance with the SFF-8639 Module specification has been designated as U.2. The receptacle version of the SFF-8639 connector mounts on the server backplane, and although it supports all three drive interfaces, NVMe and SAS/SATA drives are not interchangeable unless the bay was provisioned for both. A separate NVMe-enabled backplane was still required to support NVMe SSDs.

Drive consolidation has now evolved to U.3 where SAS, SATA and NVMe drives are all supported through one SFF-8639 connector when used with a tri-mode backplane and controller (Figure 2), and are also compliant with the SFF-8639 Module specification (U.2). For this approach, the same 8639 connector is used except the high-speed lanes are remapped to support all three protocols. The U.3 specification includes the pinouts and usage for a multi-protocol accepting device connector, and was developed by the Storage Networking Industry Association (SNIA) SSD Form Factor (SFF) Technical Affiliate (TA). The specification was ratified in October 2017.

Tri-mode/Universal Backplane

Figure 2 depicts the U.3 tri-mode universal storage configuration for SAS, SATA and PCIe interfaces

Key U.3 Components

The U.3 tri-mode platform can accommodate NVMe, SAS and SATA drives from the same server slot through a single backplane design and SFF-8639 connector with revised wiring as defined by the SFF-TA-1001 specification. The platform is comprised of a: (1) Tri-mode Controller; (2) SFF-8639 Connector (one for the drive and one for the backplane); and (3) Universal Backplane Management Framework.

Tri-Mode Controller

The tri-mode controller establishes connectivity between the host server and the drive backplane, supporting SAS, SATA and NVMe storage protocols. It features a storage processor, cache memory and an interface connection to the storage devices. The storage adapter supports all three interfaces, driving the electrical signals for the three protocols through a single physical connection. An ‘auto-sense’ capability within the controller determines which of the three interface protocols is currently being serviced by the controller.

From a design perspective, the tri-mode controller eliminates the need for OEMs to use one controller that is dedicated to SAS and SATA protocols, and a different controller for NVMe. It delivers simplified control that enables common bay support for SAS, SATA and NVMe drive protocols. With this flexibility, multiple drive types can be mixed and matched with SAS and SATA SSDs/HDDs, as well as NVMe SSDs.
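
As a practical illustration of what a mixed tri-mode bay looks like from the host side, the short sketch below simply asks Linux which transport each disk enumerated with. It is not vendor or controller tooling, just parsing of lsblk output, and it assumes a reasonably recent util-linux with JSON support.

```python
# Minimal sketch: list each disk's transport (nvme/sas/sata) and media type on Linux.
# Not tri-mode controller tooling; it only reads what the OS already reports via lsblk.
import json
import subprocess

out = subprocess.run(
    ["lsblk", "--nodeps", "--json", "--output", "NAME,TRAN,ROTA,MODEL"],
    capture_output=True, text=True, check=True,
).stdout

for dev in json.loads(out)["blockdevices"]:
    transport = dev.get("tran") or "unknown"
    # ROTA is a boolean in newer util-linux and "0"/"1" in older releases
    media = "HDD" if dev.get("rota") in (True, "1", 1) else "SSD"
    print(f"/dev/{dev['name']}: transport={transport}, media={media}, model={dev.get('model')}")
```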

SFF-8639 Connector

The SFF-8639 connector enables a given drive slot on the backplane to be wired to a single cable so it can provide access to a SAS, SATA or NVMe device, and determine the proper communications protocol driven by the tri-mode host. The SFF-TA-1001 (U.3) specification ties the components together by defining pin usage and slot detection, as well as addressing host and backplane wiring issues that occur when designing for a backplane receptacle that accepts both NVMe and SAS/SATA storage devices (Figure 3).

Evolution to U.3 Tri-mode Connector

Figure 3 showcases the evolution to a U.3 tri-mode connector

The SFF-TA-1001 specification supports the three interface types on the SFF-8639 connector with signals for the host to identify its type, and with signals for the device to identify its configuration (e.g., dual-port PCIe).

U.3 eliminates the need for separate NVMe and SAS/SATA adapters, enabling OEMs to simplify their backplane designs with fewer traces, cables and connectors. This results in a cost benefit associated with building backplanes with fewer components, as well as an overall simplification of OEM server and component SKUs. Devices that are U.3-based are required to be backwards-compatible with U.2 hosts.

Universal Backplane Management Framework

The universal backplane management (UBM) framework defines and provides a common method for managing and controlling SAS, SATA and NVMe backplanes (Figure 4). It, too, was developed by the SSD Form Factor Working Group, under the ratified specification SFF-TA-1005 [4], and provides an identical management framework across all server storage regardless of the interface protocol (SAS, SATA or NVMe) or the storage media (HDDs or SSDs).

SFF-TA-1005: Universal Bay Management

Figure 4 showcases only one domain required for U.3 backplane and bay management

Source: Broadcom® Inc. [5]

The management framework allows users to manage SAS, SATA and NVMe devices without any required changes to drivers or software stacks, and addresses a number of system-level tasks that are important to the NVMe protocol, and specifically to U.3 operation. This management includes the ability to:

  • Provide exact chassis slot locations. For this capability, the UBM framework enables users to easily identify where storage drives that need to be replaced are located, or as it relates to troubleshooting, identifies possible issues that may be associated with drive slots, cables, power or the drives themselves.
  • Enable cable installation order independence. To address this capability prior to the tri-mode configuration, users were required to lay specific cables to specific drive slots as overall cable length was extremely important in these configurations. In the tri-mode configuration, a multi-use cable is connected to all drive slots eliminating this issue.
  • Manage LED patterns on the backplane. The UBM framework enables users to utilize LED encoding on each drive that delivers a visible signal of drive status, including drive activity, drive failures, power, etc. (a short host-side sketch of slot identification follows this list).
  • Enable power and environmental management. The UBM framework manages power to a slot and storage device with its main function to power cycle an unresponsive device.
  • Enable PCIe resets. At the bus level, PCIe resets every device attached to a PCIe bridge regardless of whether the storage drives are functioning normally or not. The UBM framework enables users to activate PCIe resets on specific drive slots, resetting only the drives that need it.
  • Enable clocking modes. With higher data rates delivered by PCIe 3.0 and PCIe 4.0, clocking becomes more difficult to support at these higher speeds. The UBM framework can configure storage devices to use either a traditional PCIe clock network or embed clock signals directly into the high-speed signals. Embedded clock signals can have a significant effect in reducing the electromagnetic interference associated with high-speed signaling, resulting in very flexible clocking.
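
As referenced in the LED bullet above, slot identification is ultimately consumed through ordinary host tools. A hedged sketch: on Linux, the ledmon package's ledctl utility can drive a bay's locate LED for a given block device, provided the backplane exposes bay management. The device path is hypothetical and behavior depends on the enclosure; treat this as illustrative only.

```python
# Hedged sketch: blink a drive bay's locate LED from the host using ledctl (ledmon package).
# Assumes root privileges, a backplane that exposes bay management, and a hypothetical device.
import subprocess
import time

DEVICE = "/dev/nvme0n1"  # hypothetical drive to locate

subprocess.run(["ledctl", f"locate={DEVICE}"], check=True)      # start blinking the bay LED
time.sleep(30)                                                  # time for a tech to find the bay
subprocess.run(["ledctl", f"locate_off={DEVICE}"], check=True)  # return the LED to normal
```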

The UBM framework enables a controller to dynamically divide the PCIe lanes by describing the backplane, so U.3 x1, x2 and x4 wiring are all possible. It also provides a way to split the single PERST signal (PCIe reset), along with other sideband signals (such as CLKREQ and WAKE), into multiple independent occurrences for 2×2 and 4×1 wiring. UBM also provides reference clock (REFCLK) control for 2×2 and 4×1 wiring. Though UBM is designed as a framework that can operate on its own, implementing it unlocks the full power of U.3. The end result is a universal backplane management system that allows for greater configurability and true system flexibility.

U.3 Platform and SSD Availability

With the ratification of the SFF-TA-1001 specification, a U.3 ecosystem has evolved with leading server, controller and SSD vendors developing solutions to move this technology platform forward. For example, servers with tri-mode controllers, and associated backplanes, are being implemented by some tier 1 server OEMs. Initial system availability is expected to be through tier 1 and tier 2 server OEMs, followed by broad channel offerings.

From a controller perspective, most RAID/HBA vendors are developing controllers with tri-mode capabilities and support for U.3 operation.

From an SSD perspective, four drive vendors, KIOXIA (formerly Toshiba Memory), Samsung, Seagate and SK Hynix successfully participated in the first U.3 Plugfest in July 2019 held at the University of New Hampshire’s Interoperability Lab. Of these SSD vendors, KIOXIA was the first to demonstrate SFF-TA-1001 (U.3) SSDs at Flash Memory Summit 2019.

Summary

With big data getting bigger and fast data getting faster, coupled with computational-intensive applications, like artificial intelligence, machine learning and even cold data analysis, the need for higher performance in data storage is growing by leaps and bounds. Having to predict today’s application use, workload sizes, performance needs and capacity expectations is quite the challenge, but forecasting use for years to come takes the challenge to a new level.

The U.3 tri-mode approach builds on the U.2 specification using the same SFF-8639 connector. This approach combines SAS, SATA and NVMe support into a single controller inside of a server, managed by a UBM system that allows SAS SSDs/HDDs, SATA SSDs/HDDs and NVMe SSDs to be mixed and matched. U.3 provides a tremendous array of benefits that include:

  • Single backplane, connector and controller for storage
    • Eliminates separate components for each supported protocol
    • Enables hot-swapping between devices (if the device supports it)
    • Provides SAS/SATA/NVMe support from one drive slot
    • Lowers overall storage costs by using less cabling, fewer traces and fewer components
    • Delivers greater storage configurability and true system flexibility
  • High Performance
    • Delivers 64% greater drive bay bandwidth and IOPS performance when a SATA SSD is replaced by an NVMe/PCIe Gen3 x1 SSD in a U.3 drive bay [6]
    • Delivers a 13x bay bandwidth improvement when a SATA SSD is replaced by an NVMe/PCIe Gen4 x4 SSD in a U.3 drive bay, given throughput of SATA = 0.6GB/s, x1 PCIe Gen3 NVMe = 0.98GB/s, and x4 PCIe Gen4 NVMe = 7.76GB/s [6] (the arithmetic is worked through in the short sketch after this list)
  • Management
    • Provides the same management tools across all server storage protocols via UBM
  • Universal Connectivity
    • Extends the connectivity benefits of SAS and SATA to NVMe
    • Eliminates the need for protocol-specific adapters
    • Enables U.2- (SFF-8639 Module) or U.3- (SFF-TA-1001) compliant drives to be used in the same storage architecture
    • Lowers system cost through a universal backplane and shared cabling infrastructure
    • Lowers system purchase complexity (removes the possibility of selecting the ‘wrong’ backplane and storage adapters)
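
As referenced in the performance bullets above, the quoted gains fall straight out of the interface throughput figures given in the text; the short sketch below just works through that arithmetic.

```python
# Working through the bandwidth ratios quoted above, using the figures from the text (GB/s).
SATA_GBPS = 0.6
PCIE_GEN3_X1_GBPS = 0.98
PCIE_GEN4_X4_GBPS = 7.76

gen3_gain = PCIE_GEN3_X1_GBPS / SATA_GBPS  # ~1.63x, i.e. roughly 63-64% more bandwidth
gen4_gain = PCIE_GEN4_X4_GBPS / SATA_GBPS  # ~12.9x, i.e. the ~13x figure

print(f"Gen3 x1 vs SATA: {gen3_gain:.2f}x ({(gen3_gain - 1) * 100:.0f}% greater)")
print(f"Gen4 x4 vs SATA: {gen4_gain:.1f}x")
```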

The U.3 platform addresses a number of industry needs: reducing TCO expenditures, reducing the complexities of storage deployments, providing a viable replacement path between SATA, SAS and NVMe, maintaining backwards compatibility with current U.2 NVMe-based platforms, all while protecting the customer’s initial storage investment.

About the Authors:

John Geldman is the Director of SSD Industry Standards at KIOXIA America, Inc. (formerly Toshiba Memory America, Inc.) and leads the storage standards activities. He is currently involved in standards activities involving JEDEC, NVM Express, PCI-SIG, SATA, SFF, SNIA, T10, T13 and TCG. He has been contributing to standards activities for over three decades covering NAND flash memory, hard drive storage, Linux, networking, security, and IC development. John has been on the board, officered, chaired or edited specifications for CompactFlash, the SD Card Association, USB, UFSA, IEEE 1667, JEDEC, T10, and T13, and currently serves as a member of the Board of Directors for NVM Express, Inc.

 


John Geldman, KIOXIA

Rick Kutcipal is a Marketing Manager in the Data Center Storage Group at Broadcom Inc., and is a 25-year computer and data storage business veteran. He coordinates the majority of global storage standards activities for Broadcom. Prior to Broadcom, Rick spent nearly 15 years at LSI Logic as a product manager and was instrumental in launching the first 12Gb/s SAS expander in the industry. Earlier in his career, Rick designed advanced chips and board level systems for Evans & Sutherland. Today, Rick serves on the Board of Directors of the SCSI Trade Association (STA), playing an influential role in defining and promoting SAS technology.

Rick Kutcipal, Broadcom

Cameron Brett is the Director of Enterprise Marketing at KIOXIA America, Inc. (formerly Toshiba Memory America, Inc.) and is responsible for the outbound marketing and messaging of enterprise SSD, software and memory products. He represents KIOXIA as co-chair of the NVM Express marketing workgroup, also as a Board of Directors member and president of the SCSI Trade Association (STA), and also as co-chair of the Storage Networking Industry Association (SNIA) SSD SIG. Cam is a 20-year veteran of the storage industry and has held previous product marketing and management positions with Toshiba Memory, PMC-Sierra, QLogic, Broadcom and Adaptec.

Cameron Brett, KIOXIA

Trademarks:

Broadcom is a registered trademark of Broadcom Inc. Linux is a trademark of Linus Torvalds. NVMe and NVM Express are trademarks of NVM Express, Inc. PCIe is a registered trademark of PCI-SIG. SCSI is a trademark of SCSI, LLC. All other trademarks or registered trademarks are the property of their respective owners.

Notes:

[1] The SFF-TA-1001 Universal x4 Link Definition specification for SFF-8639 is available at: http://www.snia.org/sff/specifications.

[2] The SFF-8639 Module specification is available at: http://www.pcisig.com/specifications.

[3] Source: IDC – “Worldwide Solid State Drive Forecast Update, 2019-2023,” Market Forecast Table 12, Jeff Janukowicz, December 2019, IDC #44492119.

[4] The SFF-TA-1005 Universal Backplane Management (UBM) specification is available at: http://www.snia.org/sff/specifications.

[5] Source: Broadcom Inc. – “Common Method for Management of SAS, SATA and NVMe Drive Bays – SFF-TA-1005 a.k.a. UBM: Universal Bay Management.”

[6] The performance numbers represent the physical capabilities of the interface running across the connector and do not represent the capabilities of the host bus adapter or the storage device.

Product Image Credits:

Figure 1: Separate Storage Configurations for SAS/SATA and PCIe:

  1. SAS Expander: Source = Avago Technologies – Avago Technologies 12Gb/s SAS expander, SAS35x48
  2. SAS HBA: Source = Broadcom Inc. – Broadcom 9400-8i SAS 12Gb/s host bus adapter
  3. PCIe Switch: Source = Broadcom Inc. – Broadcom PEX88096 PCIe storage switch
  4. SSDs: Source = KIOXIA America, Inc. – PM5 12Gbps enterprise SAS SSD, RM5 12Gbps value SAS SSD, HK6 enterprise SATA SSD, CM6 PCIe 4.0 enterprise NVMe SSD and CD6 PCIe 4.0 data center NVMe SSD

Figure 2: Tri-mode / Universal Backplane:

  1. Tri-mode Controller: Source = Broadcom Inc. — Broadcom 9400-16i tri-mode storage adapter
  2. SSDs: Source = KIOXIA America, Inc. – PM5 12Gbps enterprise SAS SSD, RM5 12Gbps value SAS SSD, HK6 enterprise SATA SSD, CM6 PCIe 4.0 enterprise NVMe SSD and CD6 PCIe 4.0 data center NVMe SSD

The post Evolving Storage with SFF-TA-1001 (U.3) Universal Drive Bays appeared first on StorageReview.com.

NetApp Acquires Talon Storage


NetApp has announced the acquisition of Talon Storage in an effort to enhance their cloud data services portfolio. Talon Storage is a software-defined storage solutions company that focuses on helping enterprises centralize and consolidate their IT storage infrastructure to the public clouds. NetApp indicates that the addition of Talon’s FAST software, combined with their own Cloud Volumes technology, will help customers centralize data in the cloud while still maintaining the benefits of the branch office environment.


Talon FAST

Talon FAST is a software-based cloud data solution that provides a “Global File Cache” service for ROBO (remote office/branch office) workloads. This allows for file server consolidation into a secure, globally accessible file system on the company’s public cloud platform. Talon indicates that it will eliminate the need for branch office backups while increasing productivity and enabling global collaboration via distributed file-locking. Moreover, FAST takes advantage of “compression, streaming, and delta differencing” to move data incrementally between branch office and datacenter. This is all designed to give a high-performance experience for end users.

NetApp says that Talon’s software will integrate with NetApp Cloud Volumes ONTAP, Cloud Volumes Service and Azure NetApp Files solutions, which will help their customers move to the public cloud much faster and at a better total cost of ownership.

NetApp

Talon Storage


The post NetApp Acquires Talon Storage appeared first on StorageReview.com.

Mellanox Starts Shipping 12.8 Tbps Switches


Today, Mellanox began shipping their newest Ethernet switches to customers. The new switches belong to the SN4000 family and are powered by the company’s own scalable 12.8 Tbps Ethernet switch ASIC, Spectrum 3. Mellanox was founded in 1999 and provides data center network solutions. NVIDIA is in the process of buying Mellanox for $6.9 billion. The acquisition was expected to be completed by the end of 2019 but has been held up by Chinese authorities. NVIDIA and Mellanox refiled paperwork with China’s State Administration for Market Regulation earlier this month.

Mellanox SN4000 family

SN4000 platforms come in flexible form-factors supporting a combination of up to 32 ports of 400GbE, 64 ports of 200GbE, and 128 ports of 100/50/25/10GbE. Even if you’re not using all the ports on the SN4000, the switch has a fully shared packet buffer to maximize burst absorption. Likewise, customers who are intending to use all 128 ports will be happy to know that the same shared packet buffer delivers fair bandwidth sharing and up to 200,000 NAT entries, 1 Million on-chip routes, and 4 million total routes. Like previous Mellanox switches, the SN4000s will continue to provide support for IPv4 and IPv6 addresses.

Speaking of features carried forward from their older switches, the new SN4000s will continue to come with an option for full-stack (ASIC-to-protocol) SONiC support pre-configured. SONiC (Software for Open Networking in the Cloud) is an open-source development managed by Microsoft within the Open Compute Project that provides cloud operators a vendor-neutral networking platform to take advantage of hardware innovation.

Mellanox Switches


The post Mellanox Starts Shipping 12.8 Tbps Switches appeared first on StorageReview.com.
