Enterprise Archives - StorageReview.com

Supermicro Unveils Appliances For Hyperscale Centers


Today, Supermicro launched five new X11 appliances that it says were designed exclusively for hyperscale datacenters. This launch comes hot on the heels of the company's announcement of a pole-mounted server earlier this month. Supermicro (Super Micro Computer Incorporated, SMCI) was founded in 1993 and is one of the fastest-growing IT companies in the world, providing a wide range of products and servers, most focused on cloud hardware and software.

Supermicro megadc

The five hyperscale-focused appliances are split between two form factors: two 1U appliances and three larger 2U appliances. All of these new rack units have sockets for two second-generation Intel Xeon Scalable processors and an impressive sixteen memory slots.

Networking comes in the form of dual 25GbE ports. Going with a high-bandwidth Ethernet option like 25GbE underlines that these devices are intended for hyperscale datacenters. The units also boast an AIOM (advanced I/O module) slot and OpenBMC support. The AIOM slot accepts OCP V3.0 SFF cards.

Supermicro Main Site



Red Hat Releases Ceph Storage 4


Today, Red Hat released Ceph Storage 4. Ceph Storage is an open, massively scalable storage solution. Red Hat was founded in 1993 as an open-source software provider and advocate. Today it is still one of the most well-known open-source developers, providing a wide range of home and enterprise software products and services, including a Linux operating system and 24/7 support subscriptions.

Red Hat Ceph

Ceph Storage 4 is based on the Nautilus version of the Ceph open-source project, and its general availability brings several exciting new features. The management dashboard was updated to let users optionally hide or display components, so customers can customize the interface and focus on what really matters; the dashboard also gains customizable alerts. It boasts better integration with Ansible playbooks as well, mostly in the form of better installation support. Speaking of installation, Red Hat Ceph Storage 4 adds support for a web-based installation interface.

Red Hat Ceph Storage 4's support for third-party platforms is a bit of a mixed bag. Ubuntu users will be sad to learn that installing on Ubuntu is no longer supported. On the other hand, BlueStore is now fully supported as a new OSD back end, allowing objects to be stored directly on block devices. Kubernetes users will be happy to hear that S3 bucket notifications are now supported, although only as a technology preview, as support still has a few rough edges.
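Since the Ceph Object Gateway speaks an S3-compatible API, the notification preview can be exercised with an ordinary S3 client. Below is a minimal sketch using Python's boto3; the endpoint, credentials, bucket name, and topic ARN are all placeholders, and the topic itself would need to be created beforehand through RGW's SNS-compatible interface.

```python
import boto3

# Placeholders: point these at your own RGW endpoint and credentials.
s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:8000",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.create_bucket(Bucket="demo-bucket")

# Attach a notification that fires for every object created in the bucket.
# The topic ARN refers to a topic created beforehand via RGW's SNS-style API.
s3.put_bucket_notification_configuration(
    Bucket="demo-bucket",
    NotificationConfiguration={
        "TopicConfigurations": [
            {
                "Id": "notif-1",
                "TopicArn": "arn:aws:sns:default::demo-topic",  # placeholder
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    },
)
```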

Availability

Immediately

Red Hat Storage


VMware Expands Tanzu


Today VMware hit us with a deluge of announcements about its Kubernetes platform, VMware Tanzu. There have been a lot of name changes and even a few new features since we covered the announcement of Tanzu and the incorporation of NSX Service Mesh last year. VMware is the biggest name in virtualization and cloud computing, so most readers are probably aware that it was founded in 1998.

VMware tanzu

Since August, the company has been releasing pieces of Tanzu. Currently, the three most significant pieces are Kubernetes Grid, Mission Control, and Application Catalog, all of which are already available. VMware Tanzu Kubernetes Grid is yet another of the rapidly proliferating Kubernetes runtimes; it is built on open-source technologies and can be downloaded directly from VMware, with 24×7 support offered through VMware Global Support Services (GSS). The second piece, VMware Tanzu Mission Control, was previewed back in August 2019; it is a centralized management platform for operating Kubernetes infrastructure across multiple clouds. The third and final piece, VMware Tanzu Application Catalog, has been renamed from Project Galleon. Tanzu Application Catalog is a frontend for selecting open-source software from the Bitnami catalog; VMware recently purchased Bitnami, and this is how the company has chosen to integrate its newly acquired technology.

Speaking of renaming recent acquisitions, Pivotal's Application Service (PAS) has been renamed as well; now known as Tanzu Application Service, it has been rolled into the Tanzu juggernaut too. Confusingly, not even VMware's own products have escaped the tidal wave of name changes. Wavefront by VMware has become Tanzu Observability by Wavefront. Even the well-known NSX Service Mesh could not escape Tanzufication; VMware now refers to it as Tanzu Service Mesh, at least when discussing other Tanzu components.

It seems that nothing is beyond the reach of Tanzu's tentacles; even the company's flagship Cloud Foundation software has been affected. VMware Cloud Foundation 4 with Tanzu is slated for release by May 1, 2020. Cloud Foundation 4 will feature not one but two Kubernetes runtimes. The first is Tanzu Kubernetes Grid, which we've already touched on in this article. The second is a hybrid infrastructure service provided by VMware vSphere 7. Existing customers will no doubt be pleased that they can continue using the Kubernetes runtime and APIs they are already familiar with. Still, the decision to include two runtimes for the same service in the same product is a curious one.

Availability

  • VMware Tanzu – Immediately
  • VMware Cloud Foundation 4 (with Tanzu) – May 1, 2020
  • VMware vSphere 7 – May 1, 2020

VMware Tanzu


VMware Releases vSAN 7


Today VMware Inc. released the latest version of its very popular hyperconverged infrastructure (HCI) software, VMware vSAN 7. The latest version comes with several enhancements, one of the major focuses being simplified infrastructure management through a reduction in the tools required. vSAN 7 unifies block and file storage, reducing the need for third-party solutions, and vSAN now supports containers through VMware Cloud Foundation.

VMware vSAN 7 LCM

HCI has seen increasing adoption over the last few years across businesses of various sizes and multiple industries. While it is widely popular, VMware sees several areas where HCI can be modernized to deliver results faster. The company looked at the three areas outlined above and rolled the improvements into VMware vSAN 7 to modernize its (and the industry's) leading HCI solution.

VMware On Simplifying Infrastructure Management

Admins may need several tools and specialized skills to maintain infrastructure and lifecycle management. Before vSAN 7, users needed vSphere Update Manager (VUM) for software and drivers, plus server vendor-provided utilities for firmware updates. Now, users can leverage vSphere Lifecycle Manager (vLCM), a unified way to manage software and firmware updates that is native to vSphere. According to the company, vLCM is built on a desired-state model that provides lifecycle management for the hypervisor and the full stack of drivers and firmware for the servers powering the data center. To reduce the effort of monitoring compliance, vLCM can be used to apply an image, monitor compliance, and remediate the cluster if there is drift. vLCM should introduce a new level of simplicity to lifecycle management at scale.

Native File Service Improvements

In the same vein of simplification, vSAN 7 introduces new native file services that again reduce the need for third-party solutions. The latest version now supports NFS v3 and v4.1, and therefore the use cases leveraging them. The enhanced file services can be provisioned and managed through the vCenter UI.

Native Support For Kubernetes

vSAN builds on the cloud-native storage capabilities first introduced in vSAN 6.7 Update 3. It now supports native file services as persistent volumes for Kubernetes clusters, and these persistent volumes are stated to support encryption and snapshots. Containerized workloads can now be deployed on vSAN datastores through the vSphere Add-on for Kubernetes (formerly Project Pacific).
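To make the container angle concrete: once a cluster exposes a vSAN-backed storage class, a persistent volume can be requested through the standard Kubernetes API. Here is a minimal sketch using the official Python Kubernetes client; the storage class name below is a placeholder, since the actual name depends on how vSAN storage policies are surfaced to the cluster.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (e.g., a vSphere-backed cluster).
config.load_kube_config()

# Request a persistent volume backed by a vSAN storage policy. The storage
# class name is a placeholder; the real name depends on the cluster's mapping
# of vSAN storage policies to Kubernetes storage classes.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="demo-claim"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="vsan-default-storage-policy",  # placeholder
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```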

Other enhancements in VMware vSAN 7 include:

  • Integrated DRS awareness of Stretched Cluster configurations – vSAN 7 has tighter integration between data placement and DRS. After recovering from a failure condition, DRS will keep a VM running at the same site until data is fully resynchronized between the two sites. Once resynchronization is complete, DRS will move the VM to the appropriate site in accordance with DRS rules. This improvement reduces unnecessary read operations occurring across the ISL, so ISL resources are prioritized to complete resynchronizations after site recovery.
  • Immediate repair operation after a vSAN Witness Host is replaced – vSAN 7 enhances the replacement and resynchronizing logic of a vSAN Witness Host for Stretched Cluster and 2-node topologies. When a vSAN Witness Host appliance is impacted or needs to be replaced, it can be easily done using a “Replace Witness” button in vCenter. After the replacement, vSAN invokes an immediate repair operation, quickly reinstating the vSAN Witness Host to a consistent state. This enhancement helps mitigate a transient vulnerability to site-level protection by expediting vSAN Witness Host restoration.
  • Stretched Cluster I/O redirect based on an imbalance of capacity across sites – A vSAN Stretched Cluster topology provides resilience for VMs and data in the event of a site outage. The agility of vSAN enables administrators to fine-tune configuration parameters for individual VMs with different protection levels or affinities. As a result, there could be an imbalance of available capacity at one site versus the other. vSAN 7 introduces new intelligence to minimize the impact of capacity-strained conditions: if there is an imbalance, vSAN checks multiple parameters, limits I/O to the capacity-constrained site, and redirects active I/O to the healthy site. These mitigation steps occur without disrupting the operation of the VM. This optimization is an excellent example of introducing more intelligence to vSAN to ensure predictable operation under a wide variety of conditions.
  • Accurate VM level space reporting across vCenter UI for vSAN powered VMs – vSAN 7 introduces a new level of consistency in VM-level capacity reporting in vCenter for vSAN-powered VMs. The initial design of vCenter accommodated VM-level capacity reporting similar to how traditional storage operates. These improvements will help reconcile the reporting differences that may have been found between vSAN-centric areas of vCenter and traditional VM reporting areas, such as the cluster and host views.
  • Improved Memory reporting for ongoing optimization – A new time-based memory consumption metric is exposed in the UI and through API to provide deeper insight into resource consumption. With the robust architecture of vSAN, as the environment evolves (through scale-up or scale-out), time-based metrics help correlate the change in memory consumption with hardware and software configuration changes made in the cluster. This helps systematically assess the impact of configuration changes and continually optimize the design.
  • Visibility of vSphere Replication objects in vSAN capacity views – VMware vSphere Replication is a hypervisor-based, asynchronous replication solution for vSphere VMs. It provides a simple and effective mechanism to protect and recover VMs. vSphere replication is included with vSphere Essentials Plus Kit and higher license editions. vSAN 7 introduces a significant improvement for environments using vSphere Replication. Administrators will now be able to easily identify vSphere Replication related object data at the VM object level, as well in the cluster-level capacity views. This awareness for vSphere Replication data goes a long way toward helping an administrator determine resources used for asynchronous replication needs.
  • Support for larger capacity devices – vSAN demonstrates great agility to meet the evolving storage needs. vSAN 7 supports newer and larger density storage devices. vSAN’s support of higher density storage devices can bring inherent improvements to customer environments, such as improved deduplication and compression ratios and a lower cost per terabyte. The support for higher density drives presents a benefit unique to vSAN’s architecture: Incrementally adding or replacing existing disk groups with new disk groups consisting of much higher density drives without any additional licensing cost.
  • Native support for planned and unplanned maintenance with NVMe hotplug – vSphere 7 introduces one feature that meets or exceeds the capability associated with older SAS and SATA devices: hotplug support for NVMe devices in vSphere and vSAN. This introduces a new level of flexibility and serviceability to hosts populated with NVMe devices, improving uptime by simplifying maintenance tasks around adding, removing, and relocating storage devices in hosts. Modern hosts can potentially have dozens of NVMe devices, and the benefits of hotplug help environments large and small.
  • Removal of Eager Zero Thick (EZT) requirement for shared disk in vSAN – This release also introduces improved flexibility for VM applications using shared virtual disks, such as Oracle RAC. vSAN 7 eliminates the prerequisite that shared virtual disks with multi-writer flags must use the eager zero thick format. This streamlined set of requirements improves simplicity and efficiency.

VMware vSAN


VMware vSphere 7 Released


Today VMware has made several major announcements, from expanding Tanzu to releasing vSAN 7. In another major announcement, the company is releasing the latest version of vSphere, VMware vSphere 7. While there are several new features and capabilities, the key one is native support for Kubernetes, allowing containers and VMs to run on the same platform.

VMware vSphere 7

VMware vSphere is the company's virtualization platform. The latest version sees a rearchitecting of the platform to add new innovations. Aside from native support for Kubernetes, the new version focuses on improving developer and operator productivity. vSphere 7 also powers VMware Cloud Foundation in a secure, high-performance, and resilient way.

vSphere 7 Enhancements

As with vSAN 7, VMware vSphere 7 has simplified lifecycle management. Cloud consumption models are very popular, and as more are leveraged, things can get complicated. Noting this, VMware looked to either fully automate or simplify lifecycle management of the infrastructure software and hardware firmware. The new update allows users to seamlessly manage the infrastructure's lifecycle using a desired-state paradigm. VMware also added vCenter Server profiles to provide desired-state configuration management for vCenter Server instances.

VMware vSphere 7 has increased security, which is more important than ever in both the data center and the cloud. It achieves this through the introduction of remote attestation for sensitive workloads using the new vSphere Trust Authority. vSphere also provides secure vCenter Server authentication using external Identity Federation, and the latest version supports Intel Software Guard Extensions (SGX) for user applications.

Near and dear to our hearts at StorageReview is, as always, performance. vSphere looks to innovate here by letting users host large workloads with an improved Distributed Resource Scheduler (DRS). The new approach uses the VM DRS score for hosts as the metric to decide placements. vSphere 7 also supports persistent memory on certain hardware, which can deliver enhanced application performance. VMware has updated vMotion to improve performance, enabling live vMotion for databases and mission-critical workloads, and vSphere supports NVIDIA GPUs that can benefit AI/ML workloads as well.

Kubernetes & vSphere

One of the big updates is a common platform for running both containers (and Kubernetes) and VMs. Siloing different technologies into separate stacks can prevent adoption or force users to prioritize one over the other. By natively supporting Kubernetes in vSphere, companies can consolidate resources and environments while taking advantage of both technologies. According to VMware, vSphere 7 enables the DevOps model with infrastructure access for developers through Kubernetes APIs, including the Tanzu Kubernetes Grid Service. Users can also choose the vSphere Pod Service if they don't need full Kubernetes compliance.
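Because that access is delivered through standard Kubernetes APIs, ordinary tooling should work unmodified. As a trivial sketch (assuming a kubeconfig already pointing at a vSphere-backed cluster), the official Python client can list namespaces just as it would against any other conformant endpoint:

```python
from kubernetes import client, config

# Standard kubeconfig-based auth works against a vSphere-backed cluster
# the same way it would against any conformant Kubernetes endpoint.
config.load_kube_config()

v1 = client.CoreV1Api()
for ns in v1.list_namespace().items:
    print(ns.metadata.name)
```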

From a management side, vSphere delivers application-focused management for containerized applications. VMware states that this enables admins to organize multiple objects into a logical group and then apply policies to the entire group, which can increase productivity and reduce errors.

vSphere with Kubernetes is available through VMware Cloud Foundation 4 with Tanzu.

VMware vSphere


HPE ProLiant MicroServer Gen10 Plus Review


Last month, HPE quietly snuck out its new HPE ProLiant MicroServer Gen10 Plus. This nifty little device is very compact and affordable while remaining powerful and highly customizable. The MicroServer is ideal for small businesses and can be used for a variety of use cases, including hybrid cloud needs or workloads that need enterprise server reliability and management without the rack and server room. The HPE line of MicroServers is also extremely popular with the homelab and mod communities, largely because of the combination of quality, out-of-band management, and price in a diminutive enclosure.

HPE ProLiant MicroServer Gen10 Plus

HPE has made many changes in the generational progression to the Gen10 Plus. Immediately obvious is the reduction in size; the Plus is roughly half the size of its predecessor. Much of this comes from moving the power supply (180W) outside the enclosure, which has a secondary benefit besides size: the reduction in heat within the server means HPE could drop from two fans in the prior chassis to one. That change has a cascading effect of its own, as with one fewer fan the Gen10 Plus makes less overall noise, which matters because the myriad use cases for this server will likely have it operating in populated areas rather than an isolated server room. Last but clearly not least, the Gen10 Plus gets an option to add iLO, HPE's out-of-band server management software. This is a big deal for managing multiple units in geographically dispersed areas, a clear target HPE had in mind. When this option is enabled, HPE includes a dedicated card for Ethernet access and an iLO Essentials license, which may be upgraded to iLO Advanced. The server also supports HPE InfoSight for Servers.

Taking a deeper look at the server design, let's start with the storage options. There is a single drive backplane option: a 4x large form factor (LFF) SATA backplane that is not hot-swappable. In many ways this aligns with the SMB focus, though enthusiasts would certainly have liked to see an SFF backplane option. HPE supports a software RAID option (HPE Smart Array S100i SR Gen10), which is a nice alternative to hardware-based options; that said, HPE has a hardware RAID option (HPE Smart Array E208i-p SR Gen10 Controller) available as well. The tradeoff is that there's only one PCIe 3.0 x16 expansion slot, so choosing hardware RAID limits expansion options. For VMware environments like ours, we're content to give up hardware RAID to be able to add a higher-speed NIC. HPE includes a quad-gigabit interface onboard, but it also supports a 10GbE card option (using the single PCIe slot), which comes in handy should the Gen10 Plus be outfitted with flash.

HPE supports the Pentium G5420 with a 3.8GHz frequency, 2 cores, 4MB of L3 cache, and support for 2400MT/s RAM. There's also a more powerful option in the Xeon E-2224, with a 3.4GHz frequency, 4 cores, 8MB of L3 cache, and support for 2666MT/s RAM. For RAM, there are two DDR4 UDIMM slots with official support for up to 32GB total.

Looking at software support, HPE covers most of the popular options. Microsoft Windows Server 2016 and 2019 are on the list, along with Red Hat Enterprise Linux (RHEL) 7.6, 7.7, 8.0, and 8.1, and ClearOS. On the virtualization front, VMware ESXi 6.5 U3 and 6.7 U3 are the supported options, but they require the Xeon E CPU.

We recently made a video that gives a good overview of the design and hardware of the server.

Our review unit is the "Performance 1" config, with the Xeon CPU and 16GB of RAM that was later upgraded to 32GB. We have the software RAID option and used the PCIe slot for a faster NIC, along with the iLO 5 option and an iLO Essentials license. The starting price for these MicroServers is around $500.

HPE ProLiant MicroServer Gen10 Plus Specifications

Processors

Intel Xeon E-2200 Series / 9th Gen Pentium G

Model          CPU Frequency  Cores  L3 Cache  Power  DDR4       SGX
Xeon E-2224    3.4 GHz        4      8 MB      71W    2666 MT/s  No
Pentium G5420  3.8 GHz        2      4 MB      54W    2400 MT/s  No

System

Memory
Type: HPE Standard Memory, DDR4 Unbuffered (UDIMM)
DIMM Slots Available: 2
Maximum Capacity: 32GB (2 x 16GB UDIMM @ 2666 MT/s)

NOTE: The maximum memory speed depends on the processor model.

Memory Protection: ECC

Interfaces
Video: 1 rear VGA port, 1 rear DisplayPort 1.0
USB 2.0 Type-A Ports: 1 total (1 internal)
USB 3.2 Gen1 Type-A Ports: 4 total (4 rear)
USB 3.2 Gen2 Type-A Ports: 2 total (2 front)
Network RJ-45 (Ethernet): 4

Industry Standard Compliance

  • ACPI V6.1 Compliant
  • PCIe 3.0 Compliant
  • PXE Support
  • WOL Support
  • EMC Class B
  • Microsoft® Logo certifications
  • VGA Port
  • DP Port
  • SMBIOS 3.1
  • UEFI 2.6
  • Redfish API
  • IPMI 2.0
  • Advanced Encryption Standard (AES)
  • Triple Data Encryption Standard (3DES)
  • SNMP v3
  • TLS 1.2
  • DMTF Systems Management Architecture for Server Hardware Command Line Protocol (SMASH CLP)
  • Active Directory v1.0
  • ASHRAE A2
  • UEFI (Unified Extensible Firmware Interface Forum)
  • USB 2.0 Compliant
  • USB 3.2 Compliant
  • SATA 6Gb/s
Security
  • UEFI Secure Boot and Secure Start support
  • Immutable Silicon Root of Trust
  • FIPS 140-2 validation
  • Common Criteria certification
  • Configurable for PCI DSS compliance
  • Ability to rollback firmware
  • Secure erase of NAND/User data
  • TPM (Trusted Platform Module) 2.0 option
  • Front bezel lock feature, standard
  • Padlock slot, standard
  • Kensington Lock slot, standard
  • Power cord clip, standard
Others
System Fan: One (1) non-redundant system fan shipped standard

Physical and Power

Power Supply: One (1) 180 Watt, non-redundant external power adapter
Server Power Cords: All pre-configured models ship standard with one or more country-specific 6 ft/1.83m C5 power cords, depending on model.
Dimensions (H x W x D, with feet): 4.68 x 9.65 x 9.65 in (11.89 x 24.5 x 24.5 cm)
Weight (approximate):
  • Maximum (four drives, two DIMMs, expansion board + iLO Enablement Kit): 15.87 lb (7.2 kg)
  • Minimum (one DIMM installed; no drive, expansion board, or iLO Enablement Kit): 9.33 lb (4.23 kg)
Input Requirements (per power supply):
  • Rated Line Voltage: 100 V AC to 240 V AC
  • Rated Input Current: 2.5 A (at 90 V AC)
  • Rated Input Frequency: 50 to 60 Hz
  • Rated Input Power: 180W

Design and Build

As stated, the HPE ProLiant MicroServer Gen10 Plus is compact: only about five inches tall and ten inches in width and depth. The shorter stature mostly comes from the removal of the internal power supply, although that is not entirely a free lunch; users will have to contend with where the power brick gets placed and plugged in.

The server has a black metal casing with HPE branding in the center of the front. Along the bottom of the front, going from left to right, are two USB 3.2 Gen2 Type-A ports, three LED indicator lights (drive activity, NIC status, health), and the power/standby button.

To access the drive bays, one needs to remove the top cover by undoing two thumbscrews in the back and then remove the bezel by unlocking it on the sides. Once those are off, users can insert drives straight into the server. The server comes with drive screws that attach to the sides of LFF HDDs and act as rails, allowing the drives to be slid into place.

Flipping around to the rear, we can see that the fan takes up roughly a third of the back. The upper left has the security options, like a padlock eye and a Kensington security slot. On the bottom left there are four USB 3.2 Gen1 Type-A ports, a DisplayPort 1.0, and a VGA port. Near the bottom center are the four NIC ports. The power input is also along the bottom, a single input only; we have seen past MicroServers offer two DC inputs for redundant power, although that is not an option on this ProLiant. Above the power input is a PCIe 3.0 x16 expansion slot, and above the expansion slot is the iLO Enablement Kit slot.

HPE ProLiant MicroServer Gen10 Plus Rear

The motherboard tray can be removed by undoing two screws, giving access to the inside, including the CPU, DRAM, PCIe card slot, and iLO card. It's a neat bit of engineering from HPE that most other brands would skip.

HPE ProLiant MicroServer Gen10 Plus Open

With the single fan, questions came up about how well the system maintains airflow and cooling under load. During our Sysbench test, with the CPU nearly maxed and a heavy storage I/O load, we captured a screenshot through iLO showing the system's thermal layout.

At the time the thermal profile was captured, the system fan was dynamically set to just 18%. With our system running flash and no hard drives, we really only heard a mild whirring from the server. Noise might rank slightly above a traditional desktop, but it was softer than, say, a notebook running under full load with a small fan cranking up in speed.
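The fan reading above came from the iLO UI, but the same telemetry is available programmatically: iLO 5 implements the Redfish API, which appears in the spec list above. Below is a hedged sketch of polling fan speed and temperatures over Redfish; the hostname, credentials, and chassis ID are placeholders, exact field names can vary by firmware revision, and certificate verification is disabled purely for brevity.

```python
import requests

# Placeholders: point these at your own iLO address and credentials.
ILO = "https://ilo.example.com"
AUTH = ("admin", "password")

# Standard Redfish thermal endpoint; the chassis ID ("1") is an assumption.
resp = requests.get(f"{ILO}/redfish/v1/Chassis/1/Thermal", auth=AUTH, verify=False)
resp.raise_for_status()
thermal = resp.json()

# Print each fan's reading and each temperature sensor's value.
for fan in thermal.get("Fans", []):
    print(fan.get("Name"), fan.get("Reading"), fan.get("ReadingUnits"))
for temp in thermal.get("Temperatures", []):
    print(temp.get("Name"), temp.get("ReadingCelsius"), "C")
```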

Performance

For performance testing, we opted to configure our HPE ProLiant MicroServer Gen10 Plus with four Hynix SE4011 SATA SSDs. This flash configuration allowed us to better stress the platform with our application workloads, as well as show peak storage performance through the storage controller using our vdbench workloads.

Here’s a video of us installing the drives and Mellanox card, along with setting up the server in ESXi.

We also have a detailed view of the configuration within VMware.

CPU 1 x Xeon E-2224
RAM 2 x 16GB of 2666MT/s
Storage
  • 4 x Hynix SE4011 SATA 1.92TB
    • Baremetal vdbench Tests: 4 SSDs
    • SQL and Sysbench Application Tests: 1 SSD
Operating System
  • VMware ESXi 6.7u3: SQL Server and MySQL Tests
  • CentOS-7-x86_64-Minimal-1908: vdbench Tests

SQL Server Performance

StorageReview’s Microsoft SQL Server OLTP testing protocol employs the current draft of the Transaction Processing Performance Council’s Benchmark C (TPC-C), an online transaction processing benchmark that simulates the activities found in complex application environments. The TPC-C benchmark comes closer than synthetic performance benchmarks to gauging the performance strengths and bottlenecks of storage infrastructure in database environments.

Each SQL Server VM is configured with two vDisks: a 100GB volume for boot and a 500GB volume for the database and log files. From a system resource perspective, we configured each VM with 16 vCPUs and 64GB of DRAM, and leveraged the LSI Logic SAS SCSI controller. While our Sysbench workloads saturated the platform in both storage I/O and capacity, the SQL test looks at latency performance.

This test uses SQL Server 2014 running on Windows Server 2012 R2 guest VMs, and is stressed by Dell’s Benchmark Factory for Databases. While our traditional usage of this benchmark has been to test large 3,000-scale databases on local or shared storage, in this iteration we focus on spreading out one 1,500-scale database evenly on our server.

SQL Server Testing Configuration (per VM)

  • Windows Server 2012 R2
  • Storage Footprint: 600GB allocated, 500GB used
  • SQL Server 2014
    • Database Size: 1,500 scale
    • Virtual Client Load: 15,000
    • RAM Buffer: 48GB
  • Test Length: 3 hours
    • 2.5 hours preconditioning
    • 30 minutes sample period

For our transactional SQL Server benchmark, the HPE ProLiant MicroServer Gen10 Plus had a score of 3,146.43 TPS with 1VM.

For SQL Server average latency the MicroServer saw 24ms.

Sysbench MySQL Performance

Our next local-storage application benchmark consists of a Percona MySQL OLTP database measured via SysBench. This test measures average TPS (Transactions Per Second), average latency, and average 99th percentile latency as well.

Each Sysbench VM is configured with three vDisks: one for boot (~92GB), one with the pre-built database (~447GB), and the third for the database under test (270GB). From a system resource perspective, we configured each VM with 16 vCPUs, 60GB of DRAM and leveraged the LSI Logic SAS SCSI controller.

Sysbench Testing Configuration (per VM)

  • CentOS 6.3 64-bit
  • Percona XtraDB 5.5.30-rel30.1
    • Database Tables: 100
    • Database Size: 10,000,000
    • Database Threads: 32
    • RAM Buffer: 24GB
  • Test Length: 3 hours
    • 2 hours preconditioning 32 threads
    • 1 hour 32 threads
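For readers who want to approximate this setup, the parameters above map fairly directly onto sysbench's command line. The sketch below uses modern sysbench 1.x syntax driven from Python (the review itself used an older Percona build, so flags may differ); the host and credentials are placeholders.

```python
import subprocess

# Approximation of the review's OLTP parameters using sysbench 1.x syntax.
# Host and credentials are placeholders.
common = [
    "sysbench", "oltp_read_write",
    "--mysql-host=db.example.com", "--mysql-user=sbtest",
    "--mysql-password=secret", "--mysql-db=sbtest",
    "--tables=100", "--table-size=10000000", "--threads=32",
]

subprocess.run(common + ["prepare"], check=True)             # build the tables
subprocess.run(common + ["--time=3600", "run"], check=True)  # 1-hour measured run
```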

With the Sysbench OLTP the HPE ProLiant MicroServer Gen10 Plus hit 1,105.57 TPS with 1VM.

For Sysbench latency, the MicroServer had an average of 28.94ms.

In our worst-case scenario (99th percentile) latency, the MicroServer hit 90.08ms.

VDBench Workload Analysis

When it comes to benchmarking storage arrays, application testing is best, and synthetic testing comes in second place. While not a perfect representation of actual workloads, synthetic tests do help to baseline storage devices with a repeatability factor that makes it easy to do apples-to-apples comparison between competing solutions. These workloads offer a range of different testing profiles ranging from “four corners” tests, common database transfer size tests, as well as trace captures from different VDI environments. All of these tests leverage the common vdBench workload generator, with a scripting engine to automate and capture results over a large compute testing cluster. This allows us to repeat the same workloads across a wide range of storage devices, including flash arrays and individual storage devices.

Profiles:

  • 4K Random Read: 100% Read, 128 threads, 0-120% iorate
  • 4K Random Write: 100% Write, 64 threads, 0-120% iorate
  • 64K Sequential Read: 100% Read, 16 threads, 0-120% iorate
  • 64K Sequential Write: 100% Write, 8 threads, 0-120% iorate
  • Synthetic Database: SQL and Oracle
  • VDI Full Clone and Linked Clone Traces
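As a concrete example of the profiles above, here is a rough sketch that generates and runs a vdbench parameter file for the 4K random read corner. The device path is a placeholder, exact keywords can vary by vdbench version, and this is our approximation rather than the exact StorageReview script.

```python
import subprocess
import tempfile

# Approximate vdbench parameter file for the 4K random read profile:
# 100% reads, fully random, 128 threads, run at max I/O rate.
PARAMS = """
sd=sd1,lun=/dev/sdb,openflags=o_direct,threads=128
wd=wd1,sd=sd1,xfersize=4k,rdpct=100,seekpct=100
rd=run1,wd=wd1,iorate=max,elapsed=300,interval=5
"""

with tempfile.NamedTemporaryFile("w", suffix=".vdb", delete=False) as f:
    f.write(PARAMS)
    path = f.name

# Assumes the vdbench launcher script is on PATH.
subprocess.run(["vdbench", "-f", path], check=True)
```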

With random 4K read, the HPE ProLiant MicroServer Gen10 Plus started at 20,706 IOPS at only 143.3µs latency. The MicroServer stayed under 1ms until about 160K IOPS and peaked at 193,648 IOPS at a latency of 2.63ms.

HPE ProLiant MicroServer Gen10 Plus 4k read

For random 4K write, the MicroServer stayed under 1ms until about 150K IOPS, which was roughly its peak, at about 250µs latency, before performance fell off and latency spiked sharply.

HPE ProLiant MicroServer Gen10 Plus 4k write

Switching over to sequential performance and starting with 64K reads, the MicroServer again showed sub-millisecond latency through the majority of the run, breaking 1ms at about 27K IOPS (1.7GB/s) and going on to peak at about 31K IOPS (1.9GB/s) at 4ms before dropping off some.

HPE ProLiant MicroServer Gen10 Plus 64K read

For 64K writes, the MicroServer again ran up to about 27K IOPS (roughly 1.7GB/s) before going over 1ms. It peaked there and dropped off rather dramatically afterward.

HPE ProLiant MicroServer Gen10 Plus 64K write

Our next set of tests is our SQL workloads: SQL, SQL 90-10, and SQL 80-20. Starting with SQL, the MicroServer performed at sub-millisecond latency throughout, peaking at 196,799 IOPS at a latency of 639µs.

HPE ProLiant MicroServer Gen10 Plus SQL

SQL 90-10 was another run that never broke 1ms, with a peak of 177,945 IOPS at 679µs latency before dropping off some.

HPE ProLiant MicroServer Gen10 Plus SQL 90-10

The MicroServer finished our SQL tests with sub-millisecond latency as well, peaking at 149,358 IOPS at a latency of 642.7µs in SQL 80-20 before falling off a bit.

HPE ProLiant MicroServer Gen10 Plus SQL 80-20

Next up are our Oracle workloads: Oracle, Oracle 90-10, and Oracle 80-20. Starting with Oracle, the HPE MicroServer performed well, peaking at about 134K IOPS at roughly 650µs latency before a drop in performance.

HPE ProLiant MicroServer Gen10 Plus Oracle

For Oracle 90-10 the MicroServer peaked at 171,924 IOPS at 501µs latency.

HPE ProLiant MicroServer Gen10 Plus Oracle 90-10

With Oracle 80-20 the MicroServer hit a peak of 152,129 IOPS with a latency of 539µs before a slight drop.

HPE ProLiant MicroServer Gen10 Plus Oracle 80-20

Next, we switched over to our VDI clone test, Full and Linked. For VDI Full Clone (FC) Boot, the HPE MicroServer stayed under 1ms until about 105K IOPS and peaked at 108,590 IOPS at a latency of 1.18ms.

HPE ProLiant MicroServer Gen10 Plus VDI FC Boot

VDI FC Initial Login saw the MicroServer deliver sub-millisecond latency until about 41K IOPS, with a peak of about 45K IOPS at 1.25ms before dropping off.

HPE ProLiant MicroServer Gen10 Plus VDI FC Initial Login

For VDI FC Monday Login, the MicroServer broke 1ms just north of 35K IOPS and peaked at 40,594 IOPS with a latency of 1.35ms before dipping some.

HPE ProLiant MicroServer Gen10 Plus VDI FC Monday Login

For VDI Linked Clone (LC) Boot, the MicroServer had sub-millisecond latency throughout, with a peak of 60,364 IOPS at a latency of 977.3µs.

HPE ProLiant MicroServer Gen10 Plus VDI LC Boot

VDI LC Initial Login saw the MicroServer go over 1ms at about 20K IOPS and peak at 22,548 IOPS with a latency of 1.23ms.

HPE ProLiant MicroServer Gen10 Plus VDI LC Initial Login

Finally, in our VDI LC Monday Login, the MicroServer broke 1ms at about 19K IOPS and peaked at 26,118 IOPS at a latency of 1.69ms before dropping off some.

HPE ProLiant MicroServer Gen10 Plus VDI LC Monday Login

Conclusion

The HPE ProLiant MicroServer Gen10 Plus is a powerful, compact, and cost-effective server. At only about 5 inches tall and 10 x 10 inches in width and depth, the diminutive server still has plenty of room to add the capacity and networking its intended roles demand. Those roles are SMBs that need server performance and function but don't have the traditional space for a rack. On top of its given use case, the MicroServer is also popular in the homelab community for its quality, performance capabilities, and, of course, its price. The server's design is its most interesting feature: with the power supply moved outside the enclosure, a single fan handles cooling. There are four LFF drive bays in the front (not hot-swappable) that fit SATA 3.5" HDDs or SATA 2.5" SSDs. The MicroServer supports the Pentium G5420 or Xeon E-2224 CPU and up to 32GB of RAM.

From a performance perspective, we ran our application analysis workloads as well as our VDBench workload analysis. For application workloads, we started off with SQL Server, where we saw 3,146.43 TPS with an average latency of 24ms with 1VM. Moving to Sysbench, again with 1VM, the MicroServer was able to hit 1,105.57 TPS with an average latency of 28.94ms and a worst-case (99th percentile) latency of 90.08ms. Considering most use cases for this server are test/dev, homelab, or SMB, being able to run the workloads at all is almost as important as the performance measured.

In our VDBench workload analysis, the HPE MicroServer put up some impressive numbers considering just how small it is. Peak highlights include 194K IOPS for 4K read, 150K IOPS for 4K write, 1.9GB/s for 64K read, and 1.7GB/s for 64K write. The MicroServer stayed under 1ms in both our SQL and Oracle tests, with highlights of 197K IOPS SQL, 178K IOPS SQL 90-10, 149K IOPS SQL 80-20, 134K IOPS Oracle, 172K IOPS Oracle 90-10, and 152K IOPS Oracle 80-20. The MicroServer also stayed sub-millisecond in VDI LC Boot with a peak of 60K IOPS. Overall, looking at how much storage I/O one can drive through the onboard SATA controller, it should be able to keep up with whichever four SATA devices you mount inside, peaking at just under 2GB/s sequential read.

We may have gone a tad overboard in configuring this server for review; most will be content with HDDs in this particular box. While reasonability is a decent guide, we prefer to push servers to the edge to see what they are capable of. On that front, the MicroServer Gen10 Plus does a good job, holding up well in our testing. On the other side of the coin, we would have liked to see a few changes that would take this product from really good to exceptional. We'd start with an onboard M.2 slot for boot duty; there's a USB 2.0 port there now, but that's not enough. We'd also like to see a second PCIe slot so a RAID card and a higher-speed NIC can be added at the same time, although onboard 10GbE would address this as well. Lastly, HDDs are inexpensive, we get it, but flash is where it's at; even for SMBs there are more reasons to have flash than not, so an SFF chassis option would be appreciated. Overall, though, this tiny server will do very well for HPE and its customers thanks to the affordable overall package and the inclusion of iLO.

HPE ProLiant MicroServer Gen10 Plus


Alluxio Releases Structured Data Service


Today, Alluxio announced the addition of Alluxio Structured Data Service (SDS) to Alluxio 2.2. SDS features data catalog and transformation services. The company has developed open-source data orchestration software for the cloud since 2014; version 2 of its eponymous Alluxio software was released just last year.

Alluxio 2.2

Alluxio Structured Data Service introduces just-in-time transformation of data so that it is compute-optimized, independent of the storage format. It currently supports Presto, Apache Spark, and Apache Hive. SDS can also fuse multiple small files into one to allow for more efficient analytic operations. Recognizing that some queries need to be run repeatedly, Alluxio SDS can also sort existing data to make frequent queries more efficient.

At the same time, Alluxio is also adding a new Presto connector. Presto is a distributed SQL query engine. The new connector will make it easier to use Alluxio’s new and existing features with Presto.

Availability

Alluxio 2.2 Community now includes Structured Data Service

Alluxio 2.2 Enterprise Edition now includes Structured Data Service

Alluxio


Veeam v10 Enhanced NAS Backup Review


One of the most exciting features of Veeam suite version 10, which was released in February, is undoubtedly the Veeam v10 enhanced NAS backup, or what Veeam prefers to call "seriously powerful NAS backup." NAS backup is something Veeam announced back in 2017 and, with this latest version, it has finally arrived.

NAS strategies are more popular than ever and are used extensively, not only in SOHO and SMB settings but in enterprise businesses as well. NAS systems have led the storage industry to deal with workloads such as enterprise applications, virtualization (and similar technologies), and large unstructured data sets. These requirements, at the same time, are a hurdle for backup and data availability strategies: it is a challenge to process petabytes of unstructured data, scale the components, do fast incremental file-level backups, provide data consistency, and, finally, keep backup inexpensive. Veeam recognized these critical challenges in NAS backup, and so came out with the "seriously powerful NAS backup" feature.

Veeam v10 Features

Before this new release, Veeam's NAS backup options were limited, focused on file-to-tape only via the Network Data Management Protocol (NDMP), and did not fit typical NAS production workloads, which are presented through the NFS and SMB protocols. Since there is a variety of NAS systems, protocols, and versions in use in modern businesses, flexibility was necessary. Consequently, a critical area of enhancement for Veeam was to include the SMB and NFS protocols, as well as Windows and Linux file servers. On top of this capability, Veeam v10 also uses Changed File Tracking (CFT) technology and a snapshot-friendly approach, which together aim to make NAS backup faster, more reliable, and smarter.

Key Areas of Differentiation with Veeam v10 Enhanced NAS Backup

The areas mentioned above (flexibility, CFT, and snapshot friendliness) are the key differentiators that Veeam has grouped with other technologies as the components that make an enriched NAS backup possible. Let's go over these components to better understand the secret sauce behind the enhanced NAS backup.

Starting from the flexibility focus of this feature, Veeam tags the backup sources as an essential component, as all of them share the benefits of the technologies explained below. These sources include the option to protect multiple file shares, such as NFS, SMB, Windows, and Linux. And the flexibility doesn't end there: protecting all these file systems means it is also possible to restore to another file system. This versatility gives backup strategies the option to protect data from anywhere to anywhere.

But probably the most significant differentiator in the NAS backup operations is the CFT functionality, which enables admins to perform fast incremental backups of NAS environments and allows businesses to achieve backup goals efficiently. Incremental backups copy only what has been modified since the last backup, and Veeam takes advantage of this technique, since a shorter backup window makes storage space more productive. With CFT, Veeam can determine very quickly which files have changed within a file system, so when an incremental backup of a NAS system is performed, the operation doesn't need to walk the whole file system to discover what has changed.

CFT uses a concept very similar to VMware's Changed Block Tracking (CBT), which, rather than backing up every block of every VM in the infrastructure, backs up only the blocks that have changed. CFT is a fast and efficient way to track the files that have changed, which is why the technology is called Changed File Tracking. For a more straightforward explanation, consider a structured tree of folders and subfolders in the file system that continuously expands with new data entries. When there is a change in a subfolder, CFT, instead of backing up the entire folder tree, acknowledges only the folder containing the altered objects and the chain of folders above it, up to the parent folder. This technique allows the backup objectives to be achieved quickly.
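To illustrate the general idea (and to be clear, this is our toy sketch, not Veeam's implementation, which works against change journals and snapshots at far larger scale), a changed-file tracker boils down to comparing each file's current signature against a cached index and emitting only the files whose signature moved, along with their parent folder chain:

```python
import json
import os

INDEX = "cft_index.json"  # hypothetical cache of last-seen file signatures

def scan(root):
    """Walk the tree and record a cheap signature (mtime, size) per file."""
    state = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            state[path] = [st.st_mtime, st.st_size]
    return state

def changed_files(root):
    """Return files whose signature differs from the cached index."""
    try:
        with open(INDEX) as f:
            old = json.load(f)
    except FileNotFoundError:
        old = {}  # first run: everything counts as changed
    new = scan(root)
    changed = [p for p, sig in new.items() if old.get(p) != sig]
    with open(INDEX, "w") as f:
        json.dump(new, f)
    return changed

def parent_chain(path, root):
    """The folders from a changed file up to the backup root."""
    chain = []
    cur = os.path.dirname(path)
    while cur.startswith(root):
        chain.append(cur)
        if cur == root:
            break
        cur = os.path.dirname(cur)
    return chain

# "/mnt/share" is a placeholder path for the protected file share.
for f in changed_files("/mnt/share"):
    print("backup:", f, "touched folders:", parent_chain(f, "/mnt/share"))
```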

Alongside the file system sources, Veeam CFT is supported by other essential NAS backup components, such as file proxies and cache repositories. The latter store checksums in RAM during backups and coordinate the file proxies, but don't store or process the real data. Cache repositories keep track of all objects that have changed between each backup, resulting in really fast backup processing, according to Veeam. The cache repository is also what instructs the file proxies to move specific data from the source to the target. Based on file version control, files can go to short- or long-term retention backup. For now, these proxies require a Windows operating system, but they are scalable and software-defined, meaning there is no requirement for new hardware or dedicated appliances to scale out, unlike some other offerings in the industry.

The last area of NAS backup is snapshot friendliness. This gives the ability to perform flexible backups directly from storage snapshots created by enterprise-grade NAS devices, on either primary or secondary storage. Snapshot friendliness provides yet more ways to enhance the performance and speed of NAS backup.

There are more technologies behind the new enhanced NAS backup feature in Veeam v10; to expand on the concepts overviewed here and get into in-depth technical details, we recommend visiting Veeam's website.

Veeam Backup and Replication Console

From the Veeam Backup and Replication console, we can perform the NAS backup operations. In this new version, agents for file backup act comparably to the agents installed on virtual machines running Linux and Windows, allowing us to back up individual disks or partitions of the server. Here are some of the new and most significant additions regarding NAS backup.

Under Inventory, we can reach File Shares. From here, we must add to the backup infrastructure the file shares that we plan to use as a source for backup.

Veeam v10 NAS 1

We can add file shares of the following types: Windows-managed or Linux-managed server, NFS file share, or SMB file share.

Veeam v10 NAS 2

Under Processing, in the wizard for a new SMB (or NFS) file share, we can select the desired file proxies, or all proxies, to improve backup scalability and speed. Also from here, we can select the cache repositories.

Veeam v10 NAS 3

Conclusion

One of the most attractive features of Veeam suite version 10 is the enhanced NAS backup, built around Changed File Tracking. The feature is also very flexible, allowing customers to back up SMB and NFS shares as well as Windows and Linux file servers, with many restore options. Based on these new capabilities, the new NAS data protection and recovery focuses on protecting unstructured file data and file servers at scale. All these components of the enhanced NAS backup feature help Veeam be fast, more scalable, and appliance-agnostic; importantly, they intend to reduce storage costs while improving recovery times.

Veeam



VMware vSAN 7 Capacity Monitoring Enhancements


This week, with the newest release of VMware vSAN version 7, VMware brings two essential enhancements to vSAN capacity monitoring: improvements to virtual machine consumption metrics and vSphere Replication object reporting. VMware says these new capacity reporting features are intended to assist administrators in deciding when to add capacity to existing hosts in a vSAN cluster (scale-up). Moreover, admins might have to add hosts to a vSAN cluster (scale-out) and revisit the storage policies assigned to VM objects; changes to storage policies can also impact capacity usage. VMware adds that in all of these cases, accurate reporting is essential to ensure an organization continues to run without disruption.

vSAN 7 monitoring enhancements

Storage admins know that running out of free space is a big concern for any storage or HCI platform. VMware vSAN requires free space to handle momentary operations such as snapshots, policy changes, and host maintenance, and the vSAN 7 enhancements are intended to help avoid these concerns. With the first enhancement, virtual machine capacity consumption reporting, vSAN 7 considers the raw space consumed by the VM home namespace folder. This folder includes configuration files, log files, and more, and coexists with other objects that belong to a VM, such as virtual disks (.vmdk), snapshots, and swap files (.vswp). Swap files in particular, when many virtual machines are powered on, can consume and report a significant amount of storage, even though they are thin-provisioned. Now, in the VM capacity and usage stats, vSAN 7 provides more accurate reporting of the actual capacity consumed by these thin-provisioned objects.

VMware highlights that vSphere and vSAN admins should always remain aware of capacity management and reporting. With the second enhancement, vSphere Replication object reporting, the vSAN 7 capacity view now shows how much space these objects consume. Space reporting for them is shown in a dedicated section under "User Objects." This new option provides a more granular report of the space consumed by replicated data on a vSAN datastore, VMware states.

VMware


StorageReview Podcast #38: Krish Prasad, VMware

StorageReview New Mini Lab

This week's podcast features an interview with Krish Prasad from VMware. VMware had a big week with many launches, including vSphere 7, vSAN 7, and updates to Tanzu. Krish walks us through the key vSphere updates, with a deep look at what VMware is doing with its many investments in Kubernetes and next-generation applications.

The podcast team also breaks down Maggie's Snow Nose, the cancellation of more events, the benefits of Post-It notes, Kevin's new lab (which happens to run largely on VMware), and much more. We didn't make it to AMC this week, so 2012's Lawless remains the movie of the week.

StorageReview Maggie


ASUSTOR LOCKERSTOR 10 (AS6510T) NAS Review


The ASUSTOR LOCKERSTOR 10 (AS6510T) NAS is a budget-friendly, enterprise NAS designed for organizations of who have high-capacity needs. Its 10-bay configuration makes the AS6510T a scalable storage solution; organizations with reduced budgets or storage requirements can purchase a smaller number of hard disks at first and then add more as their needs grow. The NAS also supports online capacity expansion and features ASUSTOR’s MyArchive, a plug-and-play storage technology that allows hard disks to be used as removable storage archives. Simply add an “archive” and swap it out for a different one whenever you need, making it a hugely flexible device.

The ASUSTOR LOCKERSTOR 10 (AS6510T) NAS is a budget-friendly enterprise NAS designed for organizations with high-capacity needs. Its 10-bay configuration makes the AS6510T a scalable storage solution; organizations with smaller budgets or storage requirements can purchase a few hard disks at first and then add more as their needs grow. The NAS also supports online capacity expansion and features ASUSTOR’s MyArchive, a plug-and-play storage technology that allows hard disks to be used as removable storage archives. Simply add an “archive” and swap it out for a different one whenever you need, making it a hugely flexible device.

Asustor AS6510T NAS

The Lockerstor AS6510T is powered by an Intel Denverton-based Atom C3538 quad-core CPU. This is part of Intel’s third generation of SoC-based CPUs manufactured on 14nm process technology, which makes it well-suited to networking, storage, and IoT use cases thanks to its performance per watt, low thermal design power, and configurable high-speed I/O. The AS6510T also features 8GB of DDR4-2133 SO-DIMM memory, expandable to a decent 32GB, and can be outfitted with 10 3.5-inch SATA hard drives; two M.2 NVMe SSD slots are available for fast caching. With these 10 storage bays, maximum internal raw capacity can reach a massive 160TB (using 10x 16TB HDDs), and this can further scale to 288TB using two of the company’s 4-bay expansion units. For network connectivity, the Lockerstor NAS is highlighted by its dual 10-Gigabit Ethernet and dual 2.5-Gigabit Ethernet ports, which help eliminate performance bottlenecks for the NVMe SSDs.

The AS6510T also features “Wake on WAN”, a technology that allows users to remotely wake up (and turn off) the NAS through a variety of mobile apps such as AiMaster, AiMusic, AiVideos and AiData. As such, users will be able to power down their NAS whenever they want without having to worry about starting it back up again. This helps keep data safe when the NAS isn’t in use and protects data integrity by remaining powered off during power failures.

In addition, the new AS6510T has upped its game when it comes to keeping the system cool and running smoothly in a 24/7, always-on environment. For example, it uses PWM fans with smart speed controls and heatsinks with push-pin mounting, allowing the NAS to optimize airflow and heat dissipation, which promotes reliability and helps maintain performance under high loads. Moreover, the Lockerstor NAS uses heatsinks that are taller than those of the last-gen models.

Backed by a 3-year warranty, our build is equipped with 10 x 8TB Toshiba drives and 8GB of DDR4 SO-DIMM RAM.

ASUSTOR LOCKERSTOR 10 (AS6510T) Specifications

Hardware Specifications

  • CPU:
    • Model: Intel Atom C3538
    • Architecture: x64 64-bit
    • Frequency: Quad-Core 2.1GHz
  • Memory:
    • Pre-installed: 8GB SO-DIMM DDR4 (1 x 8GB)
    • Total memory slots: 2
    • Expandable up to: 32GB (2 x 16GB), mixed capacities supported
    • Flash memory: 4GB eMMC
  • Storage:
    • HDD: 10 x SATA3 6Gb/s; 3.5″/2.5″ HDD/SSD
    • M.2 drive slots: 2 x M.2 PCIe (NVMe) or SATA SSD for SSD caching (supports M.2 2280, 2260 and 2242)
    • Maximum internal raw capacity: 160TB (10 x 16TB HDD; capacity may vary by RAID type)
    • Maximum drive bays with expansion units: 18
    • Maximum raw capacity with expansion units: 288TB (18 x 16TB HDD; capacity may vary by RAID type)
  • External ports: 2 x USB 3.2 Gen 1
  • Network: 2 x 10-Gigabit Ethernet; 2 x 2.5-Gigabit Ethernet
  • HDMI output: N/A
  • System fans: 2 x 120mm
  • LCD panel
  • Power supply unit/adapter: 1 x 250W
  • Input power voltage: 100V to 240V AC
  • Certification: FCC, CE, VCCI, BSMI, C-TICK
  • Power consumption: 76.8W (operation); 41.1W (disk hibernation)
  • Noise level: 22dB (HDD idle)
  • Operating temperature: 0°C~40°C (32°F~104°F)
  • Humidity: 5% to 95% RH
  • Size: 215.5(H) x 293(W) x 230(D) mm
  • Weight: 6.4kg / 14.1lb

ASUSTOR LOCKERSTOR 10 Design and build

Not only is the ASUSTOR LOCKERSTOR 10 (AS6510T) a sturdy NAS, it also sports a very slick build and some design features that we really like. Sporting an all-black/charcoal chassis, the new Lockerstor weighs in at 14.1 pounds (roughly three times the weight of an average laptop) and measures roughly 22cm (height) x 29cm (width) x 23cm (depth). All drives are easily accessed from the front, where they are horizontally stacked in five rows of two.

Though not a tool-less process, mounting new hard drives was fairly easy. Simply press the button on the specific hard disk tray to unlock and release the latch, then pull it out of the disk bay. 3.5-inch hard disks need to be secured to the tray with four screws, while 2.5-inch hard disks and SSDs must be placed in a specific area on the tray before being secured via the mounting screws.

The left side of the front panel is home to all the LEDs, including the Power, System Status, Network (1x 2.5 gigabit and 1x 10 gigabit) and USB indicators. At the bottom left is the USB 3.2 Gen 1 port.

Asustor AS6510T NAS Front

At the top of the Asustor LOCKERSTOR 10 is the LCD panel, which is used to configure system settings and check system information. Next to the LCD are the four buttons used to navigate through the menus: up, down, back and confirm. This is a very nice display. It’s bright, easy to navigate/read and the buttons are very responsive; simply press the up or down buttons to navigate through the menu items, press the confirm button to access the specific submenu, and use the back button to return to the previous area. There are a range of useful menus, including Touch Backup, Network, Storage, Temperature, Operation and Configuration.

On the back side of the AS6510T are the dual fan exhausts, which take up most of the real estate. On the right side, the USB 3.2 Gen 1 port, dual Intel 10-Gigabit Ethernet ports (which support up to 20Gbps under link aggregation), and dual Realtek 2.5-Gigabit Ethernet ports (which support up to 5Gbps under link aggregation) are stacked on top of each other. The power port is located at the top left.

Asustor AS6510T NAS Rear

ASUSTOR LOCKERSTOR 10 NAS Performance

Enterprise Synthetic Workload Analysis

Our enterprise shared storage and hard drive benchmark process preconditions each drive into steady-state with the same workload the device will be tested with under a heavy load of 16 threads with an outstanding queue of 16 per thread, and then tested in set intervals in multiple thread/queue depth profiles to show performance under light and heavy usage. Since NAS solutions reach their rated performance level very quickly, we only graph out the main sections of each test.

Preconditioning and Primary Steady-State Tests:

  • Throughput (Read+Write IOPS Aggregate)
  • Average Latency (Read+Write Latency Averaged Together)
  • Max Latency (Peak Read or Write Latency)
  • Latency Standard Deviation (Read+Write Standard Deviation Averaged Together)

Our Enterprise Synthetic Workload Analysis includes four profiles based on real-world tasks. These profiles have been developed to make it easier to compare to our past benchmarks as well as widely-published values such as max 4k read and write speed and 8k 70/30, which is commonly used for enterprise drives.

  • 4K
    • 100% Read or 100% Write
    • 100% 4K
  • 8K 70/30
    • 70% Read, 30% Write
    • 100% 8K
  • 8K (Sequential)
    • 100% Read or 100% Write
    • 100% 8K
  • 128K (Sequential)
    • 100% Read or 100% Write
    • 100% 128K

We tested both CIFS and iSCSI performance using a RAID6 configuration of Toshiba N300 8TB HDDs.

In the first of our enterprise workloads, we measured a long sample of random 4K performance with 100% write and 100% read activity. Looking at IOPS, the ASUSTOR LOCKERSTOR 10 (AS6510T) NAS showed its best performance in iSCSI, hitting 602 IOPS read and 2,523 IOPS write. In CIFS, it posted slightly slower writes at 2,463 IOPS, though reads were much slower at 322 IOPS.

Asustor AS6510T NAS 4K read

With 4K average latency, the AS6510T showed iSCSI performance of 425.07ms read and 101.442ms write vs. CIFS’s 792.398ms read and 103.907ms write.
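
These throughput and average latency figures are two views of the same fixed-queue workload: with 16 threads at a queue depth of 16 there are 256 I/Os outstanding, and Little’s Law ties IOPS and latency together. A minimal sketch of the arithmetic (our own illustration, not part of the benchmark tooling):

  # Little's Law: outstanding I/Os = IOPS x average latency (in seconds)
  OUTSTANDING = 16 * 16  # 16 threads, each with a queue depth of 16

  def avg_latency_ms(iops: float) -> float:
      """Average latency (ms) implied by a fixed number of outstanding I/Os."""
      return OUTSTANDING / iops * 1000

  print(avg_latency_ms(602))   # iSCSI 4K read: ~425ms, in line with the 425.07ms above
  print(avg_latency_ms(2523))  # iSCSI 4K write: ~101ms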

Asustor AS6510T NAS 4K write

In 4K max latency, the AS6510T had 2,217.4ms read and 1,483.8ms write in iSCSI, while CIFS posted 3,012.7ms read and an improved write figure of 1,404.5ms.

For 4K standard deviation, the AS6510T showed its best read and write performance in iSCSI with 347.951ms and 126.003ms, respectively, while the CIFS configuration had 722.201ms and 135.975ms, respectively.

Our next benchmark measures 100% 8K sequential throughput with a 16T/16Q load in 100% read and 100% write operations. Here, the AS6510T results were pretty one-sided, as iSCSI posted 92,137 IOPS read and 65,187 IOPS write, while CIFS showed less than a third of that performance with 23,039 IOPS read and 17,575 IOPS write.

Compared to the fixed 16 thread, 16 queue max workload we performed in the 100% 4K write test, our mixed workload profiles scale performance across a wide range of thread/queue combinations. For these tests, we span workload intensity from 2 threads and 2 queue up to 16 threads and 16 queue. In throughput, the ASUSTOR LOCKERSTOR 10 AS6510T posted 422 IOPS through 566 IOPS in iSCSI, while CIFS ranged from 201 IOPS to 198 IOPS at the terminal queue depths (though it hit 202 IOPS at 8 threads/4 queue).

In average latency for the AS6510T, iSCSI showed the best overall performance again (specifically the last half of the queue depths) with a range of 9.46ms through 451.5ms. CIFS posted a range of 19.81ms to 1,281.24ms.

For the AS6510T’s maximum latency, though iSCSI started off with somewhat similar performance (1,222.25ms), it was soon outperformed by the CIFS configuration, which posted a range of 1,100.41ms through 4,910.5ms.

Next, we move on to standard deviation, where results were much closer between iSCSI and CIFS, with the AS6510T posting 25.06ms through 617.39ms and 26.95ms through 584.14ms, respectively.

The last Enterprise Synthetic Workload benchmark is our 128K test: a large-block sequential test that shows the highest sequential transfer speed for a device. In this workload scenario, the AS6510T in the CIFS configuration had the best read performance at 1.81GB/s (with 714.2MB/s write), while iSCSI had the best write performance at 910.1MB/s (with 1.7GB/s read).

Conclusion

The ASUSTOR LOCKERSTOR 10 (AS6510T) NAS is a viable choice for those in need of a budget-friendly storage solution with high storage capacity. ASUSTOR’s new 10-bay NAS features dual USB 3.2 Gen 1 ports and four network ports (two 2.5-Gigabit and two 10-Gigabit). It is powered by an Intel Denverton-based Atom C3538 quad-core processor and supports up to a generous 160TB of raw capacity, which can further scale to 288TB via the company’s 4-bay expansion units. One of the things we liked most about the NAS is its design, highlighted by the fantastic LCD on the front panel, which offers users a bright, easy-to-navigate menu for configuring system settings and checking system information.

The AS6510T also offers comprehensive backup solutions, including Amazon S3, Dropbox, Google Drive, and OneDrive, as well as ASUSTOR Backup Plan for Windows, Time Machine for macOS, and MyArchive removable hard drives for long-term storage. As with the Nimbustor 2 and 4, ASUSTOR focuses on protecting data from damaged files and attacks through its Linux-based ADM operating system, whose built-in firewall, ClamAV antivirus, MyArchive, and backup tools help protect against ransomware. Moreover, ASUSTOR’s “Wake on WAN” feature is very useful technology, as it helps prevent attacks, protects data integrity, and saves electricity by powering off (and staying off) when not in use. Users simply need to access a mobile app to turn it back on when they need it.

To test its performance, we looked at a RAID6 HDD configuration using both iSCSI and CIFS connectivity, leveraging our Toshiba N300 8TB NAS HDDs. The AS6510T showed stronger overall performance in iSCSI. During our 4K tests, it hit 602 IOPS read and 2,523 IOPS write, with average latencies of 425.07ms read and 101.442ms write. In our 8K 100% tests, results were pretty one-sided, with iSCSI posting 92,137 IOPS read and 65,187 IOPS write compared to 23,039 IOPS read and 17,575 IOPS write for CIFS. In our mixed workload profiles, the ASUSTOR NAS posted 422 IOPS through 566 IOPS in iSCSI, while CIFS ranged from 201 IOPS to 198 IOPS at the terminal queue depths; average latency ranged from 9.46ms to 451.5ms for iSCSI and 19.81ms to 1,281.24ms for CIFS. During our large-block sequential 128K test, however, CIFS recorded noticeably better read performance at 1.81GB/s (with 714.2MB/s write), though iSCSI had the best write performance at 910.1MB/s (with 1.7GB/s read).

Overall, the AS6510T is a great-looking 10-bay NAS from ASUSTOR, packing two more bays than most competing platforms. It offers a ton of scalability, functionality, and features at a very reasonable price point.

Asustor LOCKERSTOR 10 (AS6510T) NAS

Discuss on Reddit

Engage with StorageReview

Newsletter | YouTube | Podcast iTunes/Spotify | Instagram | Twitter | Facebook | RSS Feed

The post ASUSTOR LOCKERSTOR 10 (AS6510T) NAS Review appeared first on StorageReview.com.

Memblaze PBlaze5 920 Series Announced

Memblaze has announced the PBlaze5 920 Series enterprise-class NVMe SSD, which consists of four different SKU groups: PBlaze5 D920, PBlaze5 C920, PBlaze5 D926 and PBlaze5 C926. The new PBlaze5 NVMe SSD leverages 96-layer 3D eTLC NAND, features capacities of up to 7.68TB, and comes in form factors of 2.5-inch U.2 and HHHL add-in card. The PBlaze5 920 Series NVMe SSD helps its customers build effective, flexible and high-performing storage solutions for their mission-critical applications.

Memblaze PBlaze 920

The Memblaze PBlaze5 920 Series NVMe SSD is quoted to deliver some impressive reads of up to 5.9GB/s and 970,000 IOPS, while latency is expected to hit just 90μs for reads and 12μs for writes. Memblaze indicates that, with this level of performance coupled with its data protection, compatibility, and enterprise-class features, the new PBlaze5 drive will significantly improve the overall user experience. This release is a step up from the previous Memblaze PBlaze5 916.

To address the growing number of multi-application deployment scenarios on a single drive, Memblaze has added “Quota by Namespace” support to the PBlaze5 920 series. This brand-new feature applies quotas to the SSD’s NVMe namespaces so that appropriate namespaces can be selected by application priority, optimizing and expanding the application scenarios the drive can serve. Memblaze indicates that, by creating different namespaces on the new SSD and placing quota limitations on the namespaces loaded with lower-priority tasks, namespaces with high-priority tasks will have access to more I/O resources.
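
Memblaze hasn’t published the exact commands behind Quota by Namespace, but generic NVMe multi-namespace management gives a feel for the workflow. The hedged sketch below drives the open-source nvme-cli tool from Python; the device path, namespace size, and controller ID are illustrative assumptions, and the quota step itself would rely on Memblaze’s own tooling:

  import subprocess

  def nvme(*args: str) -> None:
      """Run an nvme-cli command (requires root and the nvme-cli package)."""
      subprocess.run(["nvme", *args], check=True)

  # Carve a hypothetical 1TB namespace out of the drive. Sizes are given in
  # 512-byte blocks; --flbas 0 selects the drive's first LBA format.
  blocks = str(1_000_000_000_000 // 512)
  nvme("create-ns", "/dev/nvme0", "--nsze", blocks, "--ncap", blocks, "--flbas", "0")

  # Attach namespace 1 to controller 0 so the host sees it as /dev/nvme0n1.
  nvme("attach-ns", "/dev/nvme0", "--namespace-id", "1", "--controllers", "0")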

The Memblaze PBlaze5 920 Series NVMe SSD also offers the handy feature of firmware upgrades without reboots, which can certainly help with continuous storage availability (even if there are I/O operations running in the business system). Memblaze adds that this will also help simplify the operation and maintenance process while reducing IT system operation and maintenance costs.

Memblaze PBlaze5 920 Series NVMe SSD Specifications

PBlaze5 920 NVMe SSD D920 C920 D926 C926
User Capacity (TB) 3.84 7.68 3.84 7.68 3.2 6.4 3.2 6.4
Interface PCIe 3.0 x 4 PCIe 3.0 x 8 PCIe 3.0 x 4 PCIe 3.0 x 8
Form Factor 2.5-inch U.2 HHHL AIC 2.5-inch U.2 HHHL AIC
128KB Sequential Read (GB/s) 3.5 3.5 5.6 5.9 3.5 3.5 5.6 5.9
128KB Sequential Write (GB/s) 3.3 3.5 3.3 3.7 3.3 3.5 3.3 3.7
Sustained Random Read (4KB) IOPS 825K 840K 835K 970K 825K 835K 835K 970K
Sustained Random Write (4KB) IOPS (Steady State) 140K 150K 140K 150K 280K 300K 280K 300K
Latency Read/Write (μs) 90/12 90/12
Lifetime Endurance 1 DWPD 3 DWPD
Uncorrectable Bit Error Rate < 10^-17
Mean Time Between Failures 2 million hours
Protocol NVMe 1.2a
NAND Flash Memory 3D eTLC NAND
Operating Systems RHEL, SLES, CentOS, Ubuntu, Windows Server, VMware ESXi
Power Consumption 7~25W
Basic Feature Support Power Failure Protection, Hot Pluggable, Full Data Path Protection, S.M.A.R.T., Flexible Power Management
Advanced Feature Support TRIM, Multi-namespace, AES 256 Data Encryption & Crypto Erase, Dual Port & Reservation (U.2 only), EUI64/NGUID, Variable Sector Size Management & T10 PI (DIF/DIX), Firmware Upgrade without Reset, Quota by Namespace
Software Support Open-source management tool, CLI debug tool, OS in-box driver (easy system integration)

Memblaze PBlaze 920 Series NVMe SSD product page

Discuss on Reddit

Engage with StorageReview

Newsletter | YouTube | Podcast iTunes/Spotify | Instagram | Twitter | Facebook | RSS Feed

The post Memblaze PBlaze5 920 Series Announced appeared first on StorageReview.com.

Parts of Nutanix Objects 2.0 Release Early

Nutanix Objects 2.0

Today, Nutanix announced they are extending their platform with some of the new features slated to be part of their next major version of their object storage platform, Nutanix Objects 2.0. Nutanix has been mentioned in passing in a number of our articles over the years, but I think this may be the first time I’ve written an article about them specifically. It’s nice to finally give them the attention they deserve. Nutanix was founded in 2009 and primarily provides cloud services and software with an emphasis on storage like their Nutanix Objects platform.

Nutanix Objects 2.0

The most significant new feature is probably the addition of multi-cluster support. Breaking down cluster boundaries enables teams to leverage a single namespace across multiple Nutanix clusters. This potentially represents a big savings since customers can now take advantage of unused storage capacity anywhere in their Nutanix environment to improve storage economics. On a similarly large-scale note, Nutanix also increased the size of individual nodes. Each node can manage up to 240TB of storage. The third and final new feature being rolled out today is support for Write Once Read Many (WORM) locking. Customers can now lock their content to prevent modifications without necessarily enabling versioning.
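
Nutanix hasn’t spelled out the WORM API here, but Objects presents an S3-compatible interface, so the behavior can be pictured with a standard S3 object-lock call. A hedged boto3 sketch follows; the endpoint, bucket, credentials, and retention window are placeholders, and whether Objects 2.0 uses this exact mechanism is our assumption:

  import boto3

  # Point the standard S3 client at a hypothetical Nutanix Objects endpoint.
  s3 = boto3.client(
      "s3",
      endpoint_url="https://objects.example.internal",  # placeholder endpoint
      aws_access_key_id="ACCESS_KEY",
      aws_secret_access_key="SECRET_KEY",
  )

  # WORM-style default: objects can be read but not modified or deleted until
  # their 30-day retention expires (the bucket must be created lock-enabled).
  s3.put_object_lock_configuration(
      Bucket="audit-logs",
      ObjectLockConfiguration={
          "ObjectLockEnabled": "Enabled",
          "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
      },
  )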

Nutanix Objects is also now certified by Splunk as SmartStore compliant. Going forward, customers will be able to manage Splunk data growth with Nutanix Objects more easily. Joint customers can now run Splunk workloads on Nutanix software, and leverage Nutanix Objects for built-in object storage to support their Splunk environment.
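
On the Splunk side, SmartStore is configured in indexes.conf by declaring a remote volume and pointing indexes at it. As a hedged illustration, the sketch below generates such a stanza with Python’s configparser; the bucket name and endpoint are placeholders, and the setting names reflect our reading of Splunk’s SmartStore documentation, so verify them against your Splunk version:

  import configparser

  conf = configparser.ConfigParser()
  conf.optionxform = str  # preserve the case of Splunk setting names

  # A remote volume backed by an S3-compatible object store such as Objects.
  conf["volume:remote_store"] = {
      "storageType": "remote",
      "path": "s3://splunk-smartstore",  # placeholder bucket
      "remote.s3.endpoint": "https://objects.example.internal",
  }

  # Send every index's warm/cold buckets to the remote volume.
  conf["default"] = {"remotePath": "volume:remote_store/$_index_name"}

  with open("indexes.conf", "w") as f:
      conf.write(f)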

Availability

Parts of Nutanix Objects 2.0 are available immediately.

Nutanix Objects

Discuss on Reddit

Engage with StorageReview

Newsletter | YouTube | Podcast iTunes/Spotify | Instagram | Twitter | Facebook | RSS Feed

The post Parts of Nutanix Objects 2.0 Release Early appeared first on StorageReview.com.

Dell Releases ThinOS 9

Dell ThinOS 9

Today, Dell Technologies announced the release of version 9 of their proprietary thin client operating system, ThinOS 9. Dell Technologies is the parent company of Dell and Dell EMC since Dell acquired EMC in 2015. Dell was founded in 1984 and is one of the most well-known computer manufacturers.

Dell ThinOS 9

The latest version of Dell’s proprietary thin client operating system arrives twenty years after the initial release of ThinOS. ThinOS 9.0 supports all of Dell’s current crop of thin client appliances: the Wyse 3040, Wyse 5070, Wyse 5470, and Wyse 5470 mobile. Curiously, Dell has decided to cut back on authentication and security support, adding fewer features than it is removing. ThinOS 9 adds NetScaler authentication support but removes Touch ID, Pass-Through Authentication, RSA token, and a few other authentication options.

In an even more curious decision, Dell seems to have chosen to splinter support for server environments across several versions of its operating system. The good news here is that 9.0 improves Citrix support, adding Citrix Workspace and SaaS/web apps with SSO to the existing support for Citrix Virtual Apps and Desktops. Unfortunately, Dell is delaying support for its own VMware Horizon Client until a future release. Likewise, anyone using Wyse ThinOS 8.6 with PCoIP to access Amazon WorkSpaces will be unable to upgrade to the new version. In happier news, keyboard and Linux enthusiasts will be pleased to learn that Dell is stepping up its keyboard layout game by adding support for Unicode keyboard layout mapping and dynamic keyboard layout synchronization with Windows VDA.

ThinOS 9 Availability

Immediately

Dell Thin Clients

Discuss on Reddit

Engage with StorageReview

Newsletter | YouTube | Podcast iTunes/Spotify | Instagram | Twitter | Facebook | RSS Feed

The post Dell Releases ThinOS 9 appeared first on StorageReview.com.

SoftIron HyperSwitch Switches Introduced

SoftIron HyperSwitch

Today, SoftIron Ltd. announced their next-generation top-of-rack switch, SoftIron HyperSwitch. This new series of switches is built around maximizing the performance and flexibility of SONiC (Software for Open Networking in the Cloud). SONiC is the open source network operating system built by Microsoft for scale-out performance networking. The HyperSwitch family will have three models all focused on hyperscale data center performance.

SoftIron HyperSwitch

Founded in 2012, SoftIron is a London-based company whose main focus is creating purpose-built, performance-optimized storage solutions from the data center to the edge. Though based in London, the company does all of its design, firmware development, and manufacturing in-house in Newark, California. SoftIron also leverages Ceph, with the claim of simplifying it.

SoftIron’s new HyperSwitch is said to deliver simplicity and scalability with a specific leaning toward performance, with the company claiming speeds of up to 1.8 terabits per second. SoftIron’s argument here is that customers can build fast, scale-out data centers without the compatibility issues and vendor lock-in of proprietary solutions. Part of how it drives this performance is by leveraging AMD EPYC Embedded 3000 processors.

As stated above, the new SoftIron HyperSwitch is built around SONiC, an open-source network operating system pioneered by Microsoft for its Azure cloud platform. As per the company, built using the Switch Abstraction Interface (SAI), SONiC has innovated the networking space by breaking monolithic switching software operations into multiple containerized microservices. SONiC simplifies switch programming and offers operators independent control to build flexible, application-specific hardware platforms that meet their specific and/or evolving IT needs. As an extensible platform built for containerization, SONiC can easily be augmented with third-party components and software, delivering virtually endless capabilities that serve a range of needs from SMBs to hyperscale data centers. SoftIron HyperSwitch is hardware designed to maximize the potential of SONiC.

The SoftIron HyperSwitch family includes the following:

  • The HyperSwitch HS43200 – Includes 32 x 100GbE ports
  • The HyperSwitch HS34008 – Includes 40 x 25GbE and 8 x 100GbE ports
  • The HyperSwitch HS24008 – Includes 40 x 10GbE and 8 x 100GbE ports

SoftIron HyperSwitch

Discuss on Reddit

Engage with StorageReview

Newsletter | YouTube | Podcast iTunes/Spotify | Instagram | Twitter | Facebook | RSS Feed

The post SoftIron HyperSwitch Switches Introduced appeared first on StorageReview.com.


Supermicro Outdoor Edge Computing Leads the Way for 5G

Supermicro Outdoor Edge 5G

The promise of fifth generation (5G) wireless networks has been dangled in front of businesses for years. Large IT vendors and mobile carriers have been promising next-generation solutions that are poised to enable dozens of new use cases that will transform the enterprise. Finally, at the end of 2019, some of the power of 5G began to unlock as carriers began offering service in select markets.

From a practical perspective, the benefits of 5G can be boiled down to a simple statement: 5G offers increased bandwidth to devices. In best-case scenarios this can mean gigabit download speeds, but more typically the hope is for a few hundred megabits per second. Whatever the case, businesses will soon be able to accomplish mission-critical tasks more quickly, and an entire world of new opportunities awaits organizations that are forward-thinking enough to take advantage of them. One of these opportunities is not just edge computing, but specifically, outdoor edge computing.

Supermicro 4g vs 5g

While we tend to think about 5G as a carrier problem (carriers have to modernize their entire delivery networks), it’s important to understand that some organizations can run their own private networks for secure communications. In use cases like military and oil and gas exploration, there may either be no public network to connect to or a clear need for increased security. Under these circumstances, many more organizations can benefit from the capabilities of 5G. In either case, the rules surrounding how 5G networks are put together are a little different from how existing infrastructure is supported. For this reason, new hardware solutions have to be developed to address these changing needs. Supermicro, in conjunction with partners like Intel®, has developed an entire suite of servers and infrastructure dedicated to these needs.

One of the key reasons 5G is faster than current networks is due to reduced latency in the round trip path data takes from the service provider, to the device, and back to the service provider again. To enable this lower latency, existing tower infrastructure needs to be modernized. Because 5G often uses higher frequency bands, there is a shorter transmission distance capability, which translates into a need for more 5G antenna sites. At the same time 5G is hitting the market, carriers are also going through a major overhaul in their infrastructures. This change is highlighted by the broad adoption of virtualization and containers to allow for a more distributed and adaptable network.

The Need for Outdoor Edge Computing

Building traditional towers, and securing the necessary land to do so, isn’t scalable when it comes to 5G. Supermicro has recently released a unique solution to this problem: an all-new Pole Server. The first-of-its-kind edge server uses an environmentally-hardened IP65 enclosure with servers that run on Intel® Xeon® D or 2nd Gen Intel® Xeon® Scalable processors. Expansion capability takes the form of three PCIe slots and support for a range of storage formats and form factors, including SSDs in both M.2 and EDSFF form factors. The servers are configurable and ready to stand up to the harsh environments that 5G antennas experience, as they rarely have any shelter from the elements. While the overall Pole Server solution is new, the units are based on existing Supermicro edge server building blocks, like the popular E403.

Supermicro Pole Server

To enable a wide variety of use cases that 5G connectivity enables, the new Pole Servers support GPUs and FPGAs in the PCIe expansion slots. This support gives this family of edge servers the ability to excel in servicing emerging use cases like real-time edge AI inferencing. Of course the hardware is also well-suited for more traditional tasks like supporting content delivery networks or acting as a repository for surveillance video. Further, because the Pole Servers are based on commonly understood Intel® x86 architecture, all of these emerging use cases are handled by equipment that’s easy to support and service, reducing management overhead.

Overview of the SuperServer E403-9D-16C-IPD2 Key Specifications

  • Intel® Xeon® D or 2nd Gen Intel® Xeon® Scalable processors
  • Three PCI-E expansion slots for GPU or FPGA accelerator cards
  • 4 DIMM Slots, up to 512GB DDR4
  • 4 2.5” SATA drive bays
  • M.2 boot drive
  • 4 10G SFP+ LAN ports
  • 9 RJ45 Gigabit Ethernet LAN ports
  • 1 RJ45 Dedicated IPMI LAN port
  • Supports virtualization and containers
  • Dimensions: 319 x 821 x 258mm (12.56 x 32.31 x 10.16″)
  • Weight: 46 kg/101.5 lbs
  • Operating temp: -40°C to +50°C (depending on configuration)
  • Redundant power supply, fans and sensors
  • Ruggedized for outdoor telecom use: IP65, GR-487-CORE, and GR-3108-CORE compliant
  • 300W Heater and High-Efficiency Heat Exchanger
  • Lockable buckles and intrusion detection

Of course, these towers need to be able to connect back to the core data centers as well, where the 5G opportunity is entirely different. Within the datacenter, carriers need solutions that address flexibility in storage and networking. Much of this is happening at the same time as virtualization penetrates carriers’ core data centers at the expense of proprietary solutions. In these cases, Supermicro is well positioned with existing enterprise solutions like the SuperBlade and BigTwin platforms. The SuperBlade has a long history of adoption in the enterprise datacenter, offering an extremely dense compute platform. In our lab, we’ve seen the BigTwin a number of times: it’s a configurable multi-node server that’s perfect for software-defined storage or hyperconverged infrastructure, delivering on the promise of flexibility in handling modern workloads.

In addition to hardware, Supermicro is part of the industry’s move to non-proprietary hardware platforms like x86 servers and the growing adoption of standardized system interfaces. Supermicro is part of the O-RAN Alliance, which promotes a cloud-native, open 5G RAN architecture for the evolution of 4G to 5G networks. Further Supermicro offers fully validated Intel® Select Solutions to help speed adoption of these technologies.

Concluding Thoughts

As businesses start to take advantage of the bandwidth 5G can provide, it’s going to be critical that carriers are using the most advanced technologies to deliver these business critical services. With their new 5G solutions and advanced data center portfolio, Supermicro has not only created a complete portfolio of offerings, but delivered specialty solutions as well that other large IT vendors simply don’t have.

This 5G portfolio is clearly punctuated by the brand new Pole Server. With the Pole Server Supermicro is flexing their engineering muscles by delivering a clear vision for what outdoor edge computing can be. While edge computing needs for 5G are widely discussed by the industry, Supermicro has put a stake in the ground with this effort, or perhaps more aptly, a server on an antenna in the ground. Either way, Supermicro is taking a commanding leadership position in the outdoor edge use case with this solution. Further, the solution is based on x86 standard hardware and developed in conjunction with Intel®, to ensure reliability and performance at the edge. This is vitally important as carriers migrate away from proprietary solutions, to standards-based solutions.

The needs for 5G infrastructure are diverse. We’ve focused largely on the need at the outdoor edge, but Supermicro has been providing data center solutions for over 25 years. Their range of rackmount servers, blade servers, multi-node servers for SDS/HCI, and traditional edge servers means that Supermicro has a solution for all components of the 5G opportunity. As carriers and those in need of a private 5G network look for a partner for this journey, Supermicro stands out by delivering innovative solutions for these emerging needs.

Intel® Select Solutions with Supermicro for 5G

Supermicro Outdoor Edge

Contact Supermicro 5G

Supermicro Think 5G

This report is sponsored by Supermicro. All views and opinions expressed in this report are based on our unbiased view of the product(s) under consideration.

The post Supermicro Outdoor Edge Computing Leads the Way for 5G appeared first on StorageReview.com.

Achieving Fast, Accurate NAS Migrations in 5 Key Steps

Datadobi nas migrations

Gartner estimates that 83 percent of data migration projects either fail or exceed their budgets and schedules. This is likely due to the fact that many companies still operate without the aid of dedicated migration software. Trying to navigate a Network-Attached Storage (NAS) migration with outdated, inadequate tools means:

  • High cost of both internal and external personnel
  • Increased risk across all aspects of the project from data integrity to reputation of the migration team
  • Increased number of switchover events with extended outage durations
  • Disruption to the business
  • Lack of proper reporting and governance
  • Skilled staff distracted by migrations instead of working on strategic initiatives

All these challenges can be avoided or significantly mitigated by following a solid migration process.

Datadobi nas migration

After a decade of dealing with complex NAS migrations, it has become clear that there are five steps to achieving a fast and accurate NAS migration.

Step 1: Planning and Discovery

Have you ever arrived at page 5 of an IKEA instruction manual to find you’ve put everything together backwards? Or better yet, that you’re missing a screw? In hindsight, the first step to putting together that brand-new office desk should have been to count the pieces and read through the instructions before screwing anything into place. Knowing the tools you’re working with and what you want the final product to look like is key; otherwise, it’s a guaranteed headache. The same goes for starting a NAS migration. Knowing the type of content on the source system and the makeup of the data ahead of time is vital to preventing migration failure.

Before beginning a NAS migration, ask upfront questions to know what you’re working with:

  • What data actually needs to be migrated – everything, or can you retire some of that older data no one has touched in over seven years?
  • Will you be migrating over a LAN or WAN link?  And how fast are the links?
  • How much data is there and what is the makeup of that data?
  • Size (this is important as it can drastically affect migration performance)?
  • NFS and/or SMB or Mixed Protocol?
  • What applications and application groups will be impacted?
  • Does data need to be migrated in a specific order or in parallel?
  • How long will the overall migration take?
  • How long will the final switchover to the new system take?
  • Will all the capacity be switched over at once, or in stages?
  • What is your outage tolerance for the final cutover?
  • What is the business impact of delaying migration efforts vs the cost impact of leaving old gear on the floor?
  • Who will perform the migration and what method will they use?

In order to execute a seamless NAS migration and enable IT admins to plan well, you’ll need this in-depth view of current data and users. Your NAS migration tool of choice should provide insights into the source storage system by extracting metadata, which will explain when a file was stored, by whom, and when last accessed. Also, it should take into consideration capacity by files, directories, per user, per creation time, per modification time, or historical usage. You’ll need to analyze all of this data to better inform your approach to NAS migration.
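
To make this concrete, here’s a minimal sketch of the kind of discovery pass such a tool automates: it walks a mounted share and buckets capacity by owner and last-modified year. The mount point is a placeholder, and a serial os.walk is far slower than the parallel scanners in dedicated migration software:

  import os
  from collections import defaultdict
  from datetime import datetime

  SOURCE = "/mnt/source_share"  # placeholder mount of the source NAS

  bytes_by_owner = defaultdict(int)  # capacity per file owner (uid)
  bytes_by_year = defaultdict(int)   # capacity per last-modified year

  for root, _dirs, files in os.walk(SOURCE):
      for name in files:
          try:
              st = os.stat(os.path.join(root, name))
          except OSError:
              continue  # unreadable files are worth flagging before cutover
          bytes_by_owner[st.st_uid] += st.st_size
          bytes_by_year[datetime.fromtimestamp(st.st_mtime).year] += st.st_size

  for year, size in sorted(bytes_by_year.items()):
      print(f"{year}: {size / 1e12:.2f} TB")  # spot data old enough to retire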

Step 2: First Scan and Copy

This is where pen meets paper for the first time. You’ll need enterprise-class software that allows you to migrate file or object data between storage platforms. To start, the initial set of data is moved from the source (keeping it synchronized) to the target NAS. The First Scan discovers the data that needs to be migrated, and the First Copy copies the migration paths to the target.

Some companies are still using operating-system schedulers on each individual copy host, resulting in an unbalanced workload across servers. To minimize disruption, implement modern migration software that graphically defines migration policies, letting you dictate what content gets copied between the source and target and the frequency at which that content is resynchronized, all with a single pane of glass for easy monitoring and management.

Step 3: Steady State

Now that the first files have been scanned and copied over from the source NAS to the target NAS, it’s time to enter the next step of the migration process known as Steady State. Steady State provides a continuous mirroring of the data on the source and target systems. Here, your team will work out business details and timing decisions regarding redirecting applications and end users to the new system. It’s also important to address any errors that are taking place as data is being copied during Steady State. Errors are not uncommon, and they will help you identify data that won’t copy properly so that corrective action can be taken if needed. For example, you might look for character set conflicts and an inability to access a file on the source system. Working this out before the final switchover is, of course, imperative.

During this step, you should also consider a dry run of the final switchover step to get a good idea of whether or not you should cut everything over at one time or plan to operate in stages. Forecasting all of this will allow you to dictate how long the final switchover will take, which we’ll discuss in the next step.

Step 4: Final Cut Over

Now it’s time to cut your data over to the new system. The intention of the final cutover is to capture all files. This is done by first scanning the source data and target data to determine what needs to be moved in the final stage, including any data that might have been excluded in previous attempts. Before you begin this process, make sure the settings for your system of choice limit end-user access to the source data, so no source changes happen that don’t also happen to the target data. In some systems, shares and exports on the source can be set to “Read Only” directly within the migration tool; for other systems, you will want to have planned time to do this via a script or manually. Once the final copy is done, you will want to create the SMB shares and/or NFS exports on the target and make them read and/or write depending on what existed on the source system.
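
For the scripted route on a Linux-based NFS source, a hedged sketch might flip an export to read-only and re-export it. The export path here is a placeholder, commercial NAS platforms have their own commands for this, and you’d want a backup of /etc/exports first:

  import re
  import subprocess

  EXPORTS = "/etc/exports"
  SHARE = "/export/projects"  # placeholder export to freeze for the cutover

  with open(EXPORTS) as f:
      lines = f.readlines()

  # Flip rw to ro on the matching export line, leaving other options intact.
  with open(EXPORTS, "w") as f:
      for line in lines:
          if line.startswith(SHARE):
              line = re.sub(r"\brw\b", "ro", line)
          f.write(line)

  subprocess.run(["exportfs", "-ra"], check=True)  # re-export with new options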

Step 5: Post Migration

Now that you’ve finally turned over the IKEA desk you worked carefully to assemble, it’s time to add some weight to make sure it’s sturdy and stable. First, I recommend doing some quick validation testing that looks at the files, file permissions, and share/export access rights on the target system. Having application owners and even a few key end users run their own application tests – to check things such as home shares – is always a good idea post-migration. Are you satisfied with the condition of your data? If the answer is yes, the final step is to redirect users and applications to your new source. This can be done via DFS or DNS changes depending on your environment, or you can forward new links to your users if you’d like.
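
A hedged sketch of that quick validation pass, comparing relative paths and file sizes between the two mounts (the paths are placeholders; a real check should also cover permissions, ACLs, and content checksums):

  import os

  SOURCE = "/mnt/source_share"  # placeholder mounts for the old and new NAS
  TARGET = "/mnt/target_share"

  def inventory(base: str) -> dict:
      """Map each file's path relative to base to its size in bytes."""
      out = {}
      for root, _dirs, files in os.walk(base):
          for name in files:
              path = os.path.join(root, name)
              out[os.path.relpath(path, base)] = os.path.getsize(path)
      return out

  src, dst = inventory(SOURCE), inventory(TARGET)
  missing = src.keys() - dst.keys()
  mismatched = {p for p in src.keys() & dst.keys() if src[p] != dst[p]}
  print(f"missing on target: {len(missing)}, size mismatches: {len(mismatched)}")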

One of the reasons so many NAS migrations fail is lack of preparation and visibility into existing data. It’s impossible to successfully execute a migration without knowing what you’re migrating in the first place. Knowing exactly what you’ll be working with ahead of time and migrating with a NAS-specific migration tool will lead to a better experience from your IT teams and your end users.

-Authored by Michael Jack, Vice President of Global Sales and Co-Founder, Datadobi

Discuss on Reddit

Engage with StorageReview

Newsletter | YouTube | Podcast iTunes/Spotify | Instagram | Twitter | Facebook | RSS Feed

The post Achieving Fast, Accurate NAS Migrations in 5 Key Steps appeared first on StorageReview.com.

QNAP SS-EC2479U-SAS-RP 24-Bay NAS Review

The QNAP SS-EC2479U-SAS-RP is a 24-bay, 2U rackmount network-attached storage solution that is all about reliability and scalability. With REXP-1600U-RP expansion enclosures, the SS-EC2479U-SAS-RP can scale up to 792TB of raw storage across 152 drives. The NAS supports 2.5” SAS HDDs as well as SSDs (it can support SATA drives too), comes with a quad-core Intel Xeon E3-1245 v2 processor and 8GB of RAM (expandable up to 32GB), and has an SSD cache to accelerate performance. The QNAP SS-EC2479U-SAS-RP supports cross-platform file sharing, comprehensive backup solutions, and iSCSI and virtualization applications.

 

The QNAP SS-EC2479U-SAS-RP is highly flexible, giving it a wide range of uses. Aside from fulfilling the usual SMB use cases such as data backup, file sync, and remote access, the NAS can work as a storage solution for video editing thanks to its expandability to a 10GbE interface. The NAS is compatible with over 2,700 different IP cameras; combine that with QNAP’s Surveillance Station and Vmobile apps for mobile devices, and users have a professional surveillance solution. The NAS runs the easy-to-use QTS 4.1, but through virtualization users can run multiple OSes at the same time (the QNAP SS-EC2479U-SAS-RP is VMware Ready and compatible with Microsoft Hyper-V certification and Windows Server 2012).

Another set of notable features on the SS-EC2479U-SAS-RP concerns security. The device is validated with military-level FIPS 140-2 AES 256-bit encryption, and the NAS meets the HIPAA requirements for storing PHI data. The QNAP SS-EC2479U-SAS-RP also offers disaster recovery options with real-time remote replication, syncing files in real time to a remote server; backups can also be done on a scheduled basis.

The QNAP SS-EC2479U-SAS-RP comes with a 3-year limited warranty, ships without drives, and has a street price of $9,955.

QNAP SS-EC2479U-SAS-RP specifications:

  • Form factor: 2U
  • CPU: Quad Core Intel Xeon E3-1245 v2 Processor 3.4 GHz
  • Memory:
    • System memory: 8GB DDR3 ECC RAM (4GB x 2)
    • Total memory slots: 4
    • Maximum memory: 32GB (8GB x 4)
    • Flash memory: 512MB DOM
  • Drive bays: 24
    • Maximum drive bays with expansion unit: 152
    • Compatible drive types:
    • 2.5″ SATA(III) / SATA(II) SAS-1/SAS-2 HDD
    • 2.5″ SATA(III) / SATA(II) SAS-1/SAS-2 SSD
    • Hot-swappable drive trays
    • Maximum internal capacity: 28.8TB (1.2TB x 24)
  • External ports:
    • USB 2.0 x 4
    • USB 3.0 x 2
  • File systems:
  • Internal drives:
    • EXT3
    • EXT4
  • External drives:
    • EXT3
    • EXT4
    • NTFS
    • FAT32
    • HFS+
  • Expansion ports x 1
  • Expansion slots: 2 (one PCIe Gen2 x4, one PCIe Gen2 x8)
  • LAN ports: Gigabit RJ45 Ethernet x 4
  • Power:
    • Consumption (fully populated):
    • HDD Standby: 138.7 W
    • Operating: 270 W
    • Power supply:
    • Input: 100-240V AC,50-60Hz
    • Output: 600W
  • Temperature: 0-40°C
  • Relative humidity: 5~95% RH non-condensing, wet bulb: 27˚C
  • Sound Level (dB): Sound pressure (LpAm) (bystander positions): 46.3 dB
  • Dimensions (HxWxD): 3.46 x 17.28 x 20.47 in (88 x 439 x 520 mm)

Design and Build

The QNAP SS-EC2479U-SAS-RP is a 2U rack-mounted NAS aimed at medium to large businesses. The device has 24 hot-swappable drive trays running vertically across the front. The device is black, as is most of QNAP’s line once it crosses over to pro and larger business models. On the front right-hand side are the power button and LED indicators for 10GbE, Status, LAN, and Storage Expansion Port status.

 

The device comes with an optional rail kit; however, the kit only works with square-hole server racks. The drives are hot-swappable and can easily be switched by {…}.

 

Swinging around to the rear of the device and moving from right to left, we have both power supplies, two expansion slots near the top of the device, and an expansion port beneath the expansion slots; to the left of the expansion port is a reset button, followed by two Gigabit Ethernet ports and then two USB 3.0 ports, and all the way on the left-hand side are two more Ethernet ports above four USB 2.0 ports.

Management

 

Conclusion

QNAP’s SS-EC2479U-SAS-RP is a 2U rackmount NAS with 24 drive bays. It comes with a quad-core Intel Xeon E3-1245 v2 processor at 3.4GHz and 8GB of DDR3 ECC RAM (expandable up to 32GB), and with expansion enclosures users can have a maximum of 152 bays for a total of 792TB of raw capacity. The NAS is flexible, working as a backup, file-sync, and cross-platform file-sharing solution for SMBs; it supports up to 2,700 IP cameras and can be used as a surveillance solution; and with its upgradable 10GbE connection, it has the capacity and speed for video editing. The SS-EC2479U-SAS-RP has a variety of security offerings, including FIPS 140-2 AES 256-bit encryption, HIPAA-qualified PHI data storage, and real-time remote replication for disaster recovery.

Pros

Cons

The Bottom Line

QNAP SS-EC2479U-SAS-RP at Amazon

The post QNAP SS-EC2479U-SAS-RP 24-Bay NAS Review appeared first on StorageReview.com.

Cisco Ups Support For Remote Workers

Cisco HyperFlex

Amid the Covid-19 pandemic, more and more of the workforce is staying home to work remotely. While this is great for social distancing to flatten the curve, it may take a toll on IT resources. Cisco aims to help customers with a set of data center offerings that will expand their virtual desktop infrastructure (VDI) including a new, pre-configured HyperFlex bundle with prioritized shipping, and free Intersight management for 180 days.

Cisco HyperFlex

Many remote workers are already leveraging Webex as the face of work transforms. Webex will continue on as usual, alongside Cisco’s security offers and the data center offerings above. With this new support, Cisco customers can better aid the growing remote workforce. Part of that support involves releasing special HyperFlex (HX) pre-configured VDI bundles with prioritized two-week lead times.

This Cisco architecture for a quick start 500-seat VDI configuration is now available and includes:

  • Three Cisco HyperFlex HXAF220-M5SX All-Flash nodes
  • One Cisco HyperFlex HX-C220-M5SX Compute-only node
  • A pair of Cisco’s 4th-generation fabric interconnects

Cisco also aims to help by augmenting the VDI bundle with a 180-day free trial of its Cisco Intersight Essentials edition. Cisco Intersight is a cloud-based SaaS tool that allows users to manage HyperFlex and UCS devices through a single pane of glass. Along with this, the company stated that its VDI ecosystem partners, Citrix and VMware, have their own quick-start user license bundles that can be used together with the HyperFlex infrastructure.

Public sector and healthcare customers will be prioritized for shipments of HyperFlex nodes and UCS servers. Preconfigured HXAF220-M5SX and HX-C220-M5SX expansion nodes are also available for existing HX environments. These offers will be available globally from now until July 31, 2020.

Cisco Covid-19

Discuss on Reddit

Engage with StorageReview

Newsletter | YouTube | Podcast iTunes/Spotify | Instagram | Twitter | Facebook | RSS Feed

The post Cisco Ups Support For Remote Workers appeared first on StorageReview.com.

Intel NUC 10 NUC10i7FNH Review

In Q4 of 2019, Intel launched its latest NUC system, the Intel NUC 10 NUC10i7FNH, which comes equipped with a 10th-generation Intel Core i7 processor. The computer community tends to call NUCs by their code names, and I have seen this NUC referred to not only by its system code name, Frost Canyon, but also as Comet Lake, which is the code name of the CPU it uses. As the product name (Frost Canyon) also covers systems powered by i3 and i5 Intel Core processors, and the Comet Lake family of Intel processors is used in a wide variety of systems, in this article I will call it by its proper name, NUC10i7FNH.

Intel NUC10i7FHN

Intel has another Comet Lake NUC called the NUC10i7FNK. Whereas the NUC10i7FNH has a 2” tall case and a 2.5” SATA drive slot, the NUC10i7FNK case is slightly shorter at 1.5” and does not have a SATA drive slot. The mainboard and the CPU are the same in both of these systems.

Intel NUC systems are used by enthusiasts who want to build a home lab that takes up a minimal amount of space, custom builders that configure them for a specific purpose such as edge computing or powerful entertainment systems, and others that are simply looking for a quiet system with a minimal footprint.

Last year we had the chance to review two other Intel NUCs. The NUC7CJYS, a low-priced system powered by an Intel Celeron J4005 processor, ended up running IGEL and Wyse VDI client software, and we used it to connect to our virtual desktops. The other was a more powerful NUC, the NUC8i7BEH, on which we installed ESXi after our review; we have used it as an ESXi host for another project we are working on. Both systems have been working extremely well in their assigned roles, which shows the flexibility and range of uses NUCs can serve. You can read our reviews of these systems here, here and here.

The Intel NUC10i7FNH that we will be reviewing is the NUC10i7FNHAA Performance Mini PC, which comes with 16GB of RAM, a 256GB M.2 NVMe SSD, a 7mm 1TB SATA3 HDD, a Core i7-10710U CPU with an integrated Intel UHD Graphics 620 GPU, and Windows 10 Home. Connectivity is very good, as it has six USB ports, WiFi, and a 1Gb RJ45 port.

Intel NUC NUC10i7FNH Specifications

Below are the Intel NUC10i7FNH specifications:

  • Manufacturer: Intel
  • Model: NUC10i7FNHAA
  • NUC10i7FNHAA- MSRP: $1093 USD (street price $940 USD) – with RAM and storage
  • NUC10i7FN – MSRP: $688 USD (street price $598 USD) – without RAM and storage
  • Form factor: Mini
  • OS: Windows 10 Home
  • CPU: Intel Core i7-10710U (hexa-core with HT, 25W TDP, 12M Cache, up to 4.70 GHz)
  • GPU: Intel UHD Graphics 620 GPU (up to 1.15 GHz, 24 EUs)
  • Memory: 2 x DDR4-2666 SO-DIMM RAM – Max Memory 64 GB
    • Populated with 2 x 8GB Kingston ValueRAM KVR26S19S8/8 DDR4 SODIMM
  • Internal storage options:
    • One SATA ATA-600 port for connection to a 2.5″ HDD or SSD
      • Populated with Seagate ST1000VT001 1TB 5400RPM 2.5″ Video HDD
    • One M.2 slot with PCIe X4 lanes (2242 or 2280), Optane support
      • Populated with Kingston U-SNS8154P3/256GJ PCIe 3.0 x2 NVMe SSD
    • Full sized SDXC slot
  • Display:
    • HDMI 2.0a port with 4K @ 24 Hz
    • USB Type-C port with DisplayPort* 1.2 (supports two 4K monitors @ 60 Hz)
  • Power consumption: 19V, 6.32A AC-DC power brick adapter
  • Ports:
    • 3 x USB 3.1 Gen 2
    • 1 x Thunderbolt 3
    • 2 x USB 2.0 (internal header)
  • Multimedia:
    • Quad array microphone, IR receiver
  • Network connectivity:
    • Wired 1Gb via Intel Ethernet Connection I219-V
    • Intel Wireless-AX (Intel Wi-Fi 6 AX201)
      • Supports 802.11a/b/g/n/ac/ax
      • Maximum transfer speed up to 2.4 Gbps
    • Bluetooth v5
  • Physical size: height 2” x width 4.5” x depth 4.5”
  • Physical weight: N/A
  • Color: blue-gray
  • Compliant standards: the product meets numerous safety, regulations, EMC/RF, and environmental standards
  • Package contents: NUC, power adapter, VESA mount and mounting screws
  • Support for Instantly Available PC technology, LAN Wake, Wake from USB, Wake from CIR, Microsoft Modern Standby, Intel Platform Trust Technology
  • 3-year warranty

NUC10i7FNH Design and Build

The NUC comes packaged in a heavy cardboard box, with the device itself being nested between two black foam blocks and is wrapped in an electrostatic plastic bag. The box also contains a power brick adapter, an offset screw, and quick start, regulatory and safety guides.

On the rear of the device is a HDMI port, two 3.1 USB ports, a Thunderbolt type-C port, RJ45 ethernet port, Anti-theft lock hole, and electric power inlet.

Intel NUC 10 NUC10i7FNH ports

The front of the device has the power button/LED, USB 3.1 port, USB-C port, four DMICS digital microphones, a 3.5mm speaker/headset jack, HDD LED, consumer infrared (CIR) sensor. The left side of the device has an SD card slot.

Intel NUC 10 NUC10i7FNH front

The sides and top of the device’s case are made of strong plastic with inlet metal panels with ventilation holes on the two sides, while the upper third of the back has 14 slots molded into the back for airflow. The bottom of the device is made of metal and has two threaded holes where a VESA mount can be attached. The top of the device has slight indentation and can easily pried off. Overall, the case on this device is very durable and should hold up well in a home, office, or datacenter.

The bottom of the case is attached by four captive Phillips-head screws; removing them exposes a well-constructed motherboard. The bottom of the device has a ventilated metal slot that can house a single 2.5” HDD or SSD. The motherboard also has a connector for a 2242 or 2280 M.2 device. The CPU/GPU of the device is not visible, as it is mounted on the underside of the motherboard. The motherboard has connectors for two DDR4 SO-DIMMs, and all the visible components and ports are surface-mounted to it. The build quality of the device is above average. The RAM that the device came with was Kingston KVR26S19S8/8, the M.2 2280 device was a Kingston E8FK11.C, and the HDD was a Seagate Video ST1000VT001-1RE172 1TB drive.

A block diagram of the device shows 6.0 Gbps SATA connectivity to both the 2.5″ drive bay and the M.2 slot, with the M.2 slot also wired for PCIe x4.

Intel NUC 10 NUC10i7FNH diagram
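
For context on those link speeds (a back-of-the-envelope sketch of our own, not figures from Intel's documentation), usable bandwidth can be estimated from each interface's line rate and encoding overhead; SATA III uses 8b/10b encoding, while the PCIe 3.0 x2 link behind the bundled NVMe SSD runs at 8 GT/s per lane with 128b/130b encoding:

# Rough usable bandwidth of the NUC's two storage interfaces.
# Assumptions (standard interface parameters, not review measurements):
# SATA III = 6.0 Gbps line rate with 8b/10b encoding;
# PCIe 3.0  = 8 GT/s per lane with 128b/130b encoding.

def sata3_mb_per_s(line_rate_gbps=6.0):
    # 10 bits on the wire per 8 bits of payload
    return line_rate_gbps * 1e9 * (8 / 10) / 8 / 1e6   # ~600 MB/s

def pcie3_mb_per_s(lanes=2):
    # 8 GT/s per lane, 128 payload bits per 130 transferred
    return lanes * 8e9 * (128 / 130) / 8 / 1e6          # ~1,969 MB/s

print(f"SATA III:    ~{sata3_mb_per_s():.0f} MB/s")
print(f"PCIe 3.0 x2: ~{pcie3_mb_per_s():.0f} MB/s")

In other words, the M.2 slot's PCIe path offers roughly three times the bandwidth of the SATA port, which is why the NVMe SSD is the better boot-drive choice.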

The unit has a single 10th Generation Intel Core i7-10710U processor. This CPU is a 64-bit, six-core mobile x86 microprocessor with 12 MB of Intel Smart Cache. It supports Hyper-Threading, so a total of 12 threads can run at a given time. The processor operates at a base clock of 1.1 GHz with a Turbo Boost of up to 4.7 GHz, and it has a TDP (Thermal Design Power) rating of 15 W. It supports up to 64 GB of dual-channel DDR4-2666 memory.
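
For anyone repeating our setup, a minimal Python sketch (assuming the third-party psutil package is installed) confirms the core and thread counts and the clock range the OS reports:

# Report physical cores, logical processors, and clock speeds.
# Requires: pip install psutil
import psutil

physical = psutil.cpu_count(logical=False)   # expect 6 on the i7-10710U
logical = psutil.cpu_count(logical=True)     # expect 12 with Hyper-Threading
freq = psutil.cpu_freq()                     # MHz; max reflects the rated boost

print(f"{physical} cores / {logical} threads")
print(f"current {freq.current:.0f} MHz, max {freq.max:.0f} MHz")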

The CPU has an integrated Intel UHD Graphics 620 GPU operating at 300 MHz with a burst frequency of 1.15 GHz. The GPU has 24 Execution Units (EUs) and can drive up to three 4K displays: one through the HDMI port and two through the Thunderbolt 3 connector. The TDP and RAM are shared between the CPU and GPU.

The WLAN module in the NUC10i7FNH is the Intel Wi-Fi 6 AX201, which is capable of transfer rates of up to 2.4 Gbps.
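
The 2.4 Gbps figure is the 802.11ax peak PHY rate for a 2x2 module on a 160 MHz channel. The arithmetic below, using standard 802.11ax parameters, is our own illustration of where the number comes from, not an Intel-published derivation:

# Peak 802.11ax PHY rate for the AX201 (2x2, 160 MHz, 1024-QAM).
data_subcarriers = 1960   # 160 MHz channel (2 x 996-tone RU)
bits_per_symbol = 10      # 1024-QAM
coding_rate = 5 / 6       # highest-MCS coding rate
spatial_streams = 2       # the AX201 is a 2x2 module
symbol_time_s = 13.6e-6   # 12.8 us symbol + 0.8 us guard interval

rate_bps = (data_subcarriers * bits_per_symbol * coding_rate
            * spatial_streams) / symbol_time_s
print(f"Peak PHY rate: {rate_bps / 1e9:.2f} Gbps")   # ~2.40 Gbps

Real-world throughput will land well below this figure, since it is a link rate before protocol overhead.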

The USB-C port supports Thunderbolt 3, which allows it to drive two external 4K displays at 60 Hz or other Thunderbolt 3-compatible peripherals.

Initial Boot

For the initial boot and testing, we connected the device to a Dell UltraSharp 32″ 4K monitor (U3219Q) via HDMI. The monitor has a keyboard, video, and mouse (KVM) switch built in, which proved extremely useful during our testing as it allowed us to switch between the NUC10i7FNH and our laptop at the push of a button. We plugged the dongle for a Dell wireless keyboard and mouse (part number KM636) into the monitor's upstream USB port.

We booted up the system and were presented with the Windows 10 Home (1903, build 18362.295) installation wizard. It took less than five minutes to install Windows and have the system running.

Configuration for Testing

Ideally, we would have liked to install Windows Server 2016 on this system; unfortunately, Intel only provides drivers for Windows 10, and Windows Server is not a supported OS on this NUC. When we tried to install Windows Server 2016, we were able to boot the install media from a USB drive but got a “Loading Files” prompt twice and then a blank screen. Others have been able to install various versions of Windows Server on NUCs, but they had to build their own driver packs.

Rather than struggle with an unsupported OS, we installed Windows 10 Enterprise on the system, updated Windows and the BIOS to the latest versions, and installed all the suggested drivers.

NUC10i7FNH Performance

To evaluate the performance of the device, we ran the SPECworkstation 3 benchmark on it and compared the results to a NUC8i7BEH that we recently tested. The full review of the NUC8i7BEH system can be found here.

SPECworkstation 3

SPECworkstation 3 is a specialized test designed to benchmark the key aspects of workstation performance; it uses over 30 workloads to test CPU, graphics, I/O, and memory bandwidth. The workloads fall into seven broad categories: Media and Entertainment, Product Development, Life Sciences, Energy, Financial Services, General Operations, and GPU Compute. We list the broad-category results rather than the individual workloads; each category result is the average of all the individual workloads in that category.

The results (Table 1) show that the NUC10i7FNH, with its hexa-core Intel i7-10710U processor, outperformed the NUC8i7BEH, which has a quad-core i7-8559U CPU. The SPECworkstation 3 GPU Compute category timed out on the NUC8i7BEH system due to an issue with the Caffe application. Overall, the results are in line with what we would expect from the i7-10710U, which has two more cores (50% more cores and threads) than the i7-8559U processor in the NUC8i7BEH.

SPECworkstation 3

Category             NUC8i7BEH   NUC10i7FNH
M&E                  0.93        1.34
ProdDev              1.09        1.42
LifeSci              0.78        1.40
Energy               0.70        0.70
Financial Services   1.04        1.40
General Operations   1.38        1.38
GPU Compute          Timed out   0.46

Table 1 - SPECworkstation 3
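
As a quick sanity check on the core-count argument (our own post-processing of the Table 1 numbers, not part of SPECworkstation), a few lines of Python turn the scores into per-category speedups for comparison against the 1.5x core ratio:

# NUC10i7FNH vs. NUC8i7BEH speedup per SPECworkstation 3 category (Table 1).
table1 = {
    "M&E": (0.93, 1.34),
    "ProdDev": (1.09, 1.42),
    "LifeSci": (0.78, 1.40),
    "Energy": (0.70, 0.70),
    "Financial Services": (1.04, 1.40),
    "General Operations": (1.38, 1.38),
}

for category, (nuc8, nuc10) in table1.items():
    print(f"{category:20s} {nuc10 / nuc8:.2f}x")   # vs. the 1.50x core ratio

Most categories land in the 1.3x to 1.8x range, while Energy and General Operations, which evidently do not scale with core count here, stay flat.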

SPECviewperf 12.1

The SPECviewperf 12 benchmark, the worldwide standard for measuring graphics performance based on professional applications, was also run on the NUC10i7FNH. SPECviewperf runs nine benchmarks called “viewsets,” which represent graphics content and behavior from actual applications; the categories are 3ds Max, CATIA, Creo, Energy, Maya, Medical, Showcase, Siemens NX, and SolidWorks.

In SPECviewperf, the NUC10i7FNH was less performant than the NUC8i7BEH, which has a more powerful GPU. When watching the SPECviewperf run on the NUC10i7FNH, the output looked fine, and our ad-hoc testing showed that the NUC10i7FNH is very usable for normal day-to-day work; however, for heavy graphics such as video editing or CAD, the NUC8i7BEH would be a better choice.

SPECviewperf 12

Viewset       NUC8i7BEH   NUC10i7FNH
3dsmax-06     20.74       11.84
catia-05      21.33       14.12
creo-02       17.57       13.01
energy-02     0.39        0.28
maya-05       24.55       13.05
medical-02    5.32        2.79
showcase-02   12.68       7.16
snx-03        2.96        2.75
sw-04         35.13       22.44

Table 2 - SPECviewperf 12 compared
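
To put a single number on the graphics gap (our own summary statistic; SPEC does not publish a composite score), we can take the geometric mean of the per-viewset ratios from Table 2:

# Geometric mean of NUC10i7FNH / NUC8i7BEH SPECviewperf 12 ratios (Table 2).
from math import prod

nuc8  = [20.74, 21.33, 17.57, 0.39, 24.55, 5.32, 12.68, 2.96, 35.13]
nuc10 = [11.84, 14.12, 13.01, 0.28, 13.05, 2.79, 7.16, 2.75, 22.44]

ratios = [new / old for new, old in zip(nuc10, nuc8)]
geo_mean = prod(ratios) ** (1 / len(ratios))
print(f"NUC10i7FNH scores ~{geo_mean:.0%} of the NUC8i7BEH")   # ~64%

That works out to roughly two-thirds of the older unit's graphics performance, consistent with the UHD 620's 24 execution units versus the Iris Plus 655's 48.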

PCMark 10

Earlier this year we reviewed the Lenovo ThinkCentre M90n Nano and the ThinkCentre M90n-1 Nano IoT (reviews located here and here). Below is a comparison of the Nano systems to the NUC10i7FNH using PCMark 10. We tested the NUC10i7FNH using both the HDD and the SSD.

PCMark 10

                           ThinkCentre M90n-1 Nano IoT   ThinkCentre M90n Nano    NUC10i7FNH (SSD / HDD)
Total Score                3,033                         3,825                    4,268 / 4,093
Essentials                 7,140                         8,684                    8,472 / 7,405
Productivity               5,756                         6,217                    6,837 / 6,657
Digital Content Creation   1,843                         2,813                    3,643 / 3,775
Processor                  8th Gen i3-8145U              8th Gen i7-8665U         10th Gen i7-10710U
                           2 cores / 4 threads           4 cores / 8 threads      6 cores / 12 threads
                           Clock speed 2.1/3.9 GHz       Clock speed 1.9/4.8 GHz  Clock speed 1.1/4.7 GHz
                           Intel UHD Graphics 620        Intel UHD Graphics 620   Intel UHD Graphics 620

This was an interesting comparison, as each system has a different processor with a different number of cores, but all three share the same integrated GPU. As expected, the Intel NUC 10 NUC10i7FNH with its six cores had the highest total score, followed by the four-core M90n Nano and then the two-core M90n-1 Nano IoT. What was unexpected was that the Digital Content Creation score was slightly higher when we used the HDD on the NUC10i7FNH than when we used the SSD.
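
Looking at score-per-core (our own post-processing of the totals above, using the SSD run for the NUC) shows why the scaling is sub-linear; PCMark 10 blends single- and multi-threaded workloads, so tripling the core count does not triple the score:

# PCMark 10 total score per core across the three systems (data from above).
systems = {
    "M90n-1 Nano IoT (2C i3-8145U)": (2, 3033),
    "M90n Nano (4C i7-8665U)":       (4, 3825),
    "NUC10i7FNH (6C i7-10710U)":     (6, 4268),
}

for name, (cores, score) in systems.items():
    print(f"{name:31s} {score:5d} total, {score / cores:6.0f} per core")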

Ad-Hoc Testing

Benchmarks are useful for quantifying the performance of a device, but to get a better feel for how it would perform for a typical home user, we conducted other, less quantifiable tests. The first of these ad-hoc tests used the MS Office suite, the second a web browser, and the third video playback.

In our MS Office testing, we edited a 23-page document with embedded graphics, an eight-sheet Excel spreadsheet, and a 50-slide PowerPoint deck. The performance of the MS Office applications was very good; we did not notice any delay when jumping from the start to the end of the PowerPoint deck, or any slowdown with multiple documents open.

To test how well a web browser performed on the device, we opened 10 tabs in the Chrome browser to various sites and switched between them without any lag or issues.

To test streaming video performance, we played a 1080p YouTube video in quarter-scale mode and then in full-screen mode. We didn't notice any dropped frames in either mode, and the audio played flawlessly through a headset plugged into the device.

To test local video and audio playback, we used the libde265 player to play a 640 x 360, 30 fps video stored locally on the system, again in quarter-scale and then full-screen mode. We didn't notice any frame drops in either mode. We also played a 4K (4096 x 1720 @ 24 fps) video and found that it played with only a barely noticeable jitter. The audio played flawlessly through a headset plugged into the device during all the tests.
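
The barely noticeable jitter on the 4K clip is unsurprising once the two decode workloads are compared; the sketch below (our own arithmetic, not a libde265 feature) shows the per-frame time budget and the pixel throughput each clip demands:

# Per-frame decode budget and pixel rate for the two local test clips.
def frame_budget(width, height, fps):
    return 1000 / fps, width * height * fps / 1e6   # ms per frame, MP/s

for label, dims in [("640 x 360 @ 30 fps", (640, 360, 30)),
                    ("4096 x 1720 @ 24 fps", (4096, 1720, 24))]:
    ms, mps = frame_budget(*dims)
    print(f"{label}: {ms:.1f} ms/frame, {mps:.1f} MP/s")

The 4K clip requires roughly 24 times the pixel throughput of the smaller clip, so even small scheduling hiccups in a software decoder become visible.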

Intel NUC 10 NUC10i7FNH 4K

NUC10i7FNH vs. NUC8i7BEH

Invariably, Intel's new NUC10i7FNH will be compared to the 8th-generation NUC8i7BEH. Below is a comparison of the key hardware components. As this chart shows, and as our testing bears out, the NUC10i7FNH, with its six cores, higher max clock speed, and support for 64 GB of DDR4-2666 RAM, handles simultaneous processes and heavily multi-threaded applications better than the NUC8i7BEH. It also has a more advanced Wi-Fi chip. Its GPU is less powerful than the one in the NUC8i7BEH; however, if you are not running GPU-intensive applications such as gaming, CAD, or video editing, the GPU in the NUC10i7FNH should be fine.

Feature     NUC8i7BEH                  NUC10i7FNH
Processor   i7-8559U                   i7-10710U
            4 cores / 8 threads        6 cores / 12 threads
            Clock speed 2.7/4.5 GHz    Clock speed 1.1/4.7 GHz
            Max TDP 28 W               Max TDP 15 W
GPU         Iris Plus Graphics 655     Intel UHD Graphics 620
            48 Execution Units         24 Execution Units
RAM         2 x DDR4-2400              2 x DDR4-2666
            Support for 32 GB          Support for 64 GB
Wi-Fi       Intel Wireless-AC 9560     Intel Wi-Fi 6 AX201
            Max speed 1.73 Gbps        Max speed 2.4 Gbps

One oddity we found on the NUC10i7FNH is that its power LED is much dimmer than the one on the NUC8i7BEH. It is a minor thing, but in bright light it did make it difficult to tell whether the unit was on.

Conclusion

We continue to be impressed by the NUC build quality and by how much power Intel can pack into these small-form-factor systems. The Intel NUC 10 NUC10i7FNH can be purchased as a kit that the buyer equips as they see fit, or fully configured with RAM and storage. Those willing to sacrifice the 2.5″ drive bay can get the slightly shorter NUC10i7FNK. The NUC 10 family can also be purchased with an i3 or i5 processor.

It has very good connectivity with its three external USB 3.1 ports and single USB Type-C Thunderbolt 3 connector, and it supports a single SATA drive and a single M.2 drive. This is the first NUC system that supports 64 GB of RAM.

Although we did not test it extensively, we found that the built-in Wi-Fi worked well; the WLAN module is capable of up to 2.4 Gbps. The system has a three-year international limited warranty.

Based on the benchmarks and the ad-hoc testing we performed, we feel the system would make a very good power-user machine for demanding office users who run typical office applications, do heavy web browsing, and play streaming video. With its six cores and support for 64 GB of RAM, it can crunch through complex calculations from multi-threaded programs.

The more adventurous have found that they can install a hypervisor such as ESXi on it; in fact, NUC systems have found quite a following in home labs.

Intel NUC 10 NUC10i7FNH

Discuss on Reddit

Engage with StorageReview

Newsletter | YouTube | Podcast iTunes/Spotify | Instagram | Twitter | Facebook | RSS Feed

The post Intel NUC 10 NUC10i7FNH Review appeared first on StorageReview.com.
