
Scale Computing Offers Acronis Cloud Storage

Acronis Cloud Storage

Today, Scale Computing extended its partnership with Acronis to offer Acronis Cloud Storage. We last covered Scale Computing’s partnership with Acronis in September 2019, when Scale Computing added Acronis Backup to its HC3 platform. According to Scale Computing’s website, the company was incorporated in 2008 and primarily provides edge-focused solutions. Its flagship product is HC3, an edge-to-cloud infrastructure platform.

Acronis Cloud Storage

Scale Computing is offering Acronis Cloud Storage in increments ranging from 250GB to 5TB. Bucking pay-as-you-go trends that have been popular for the last few years, contracts for the service last either one or three years. For customers willing to use Acronis Cloud, this adds an easy way to back up data. Scale Computing can now offer end-to-end storage and cloud backups that can protect entire virtual machines (VMs). Scale Computing HC3 now provides bare-metal restore capabilities, and can restore to dissimilar hardware or platforms if required.

Acronis Cloud Storage is so integrated with Acronis Backup that it was formerly branded as Acronis Backup to Cloud. Their software supports backing up and restoring everything from individual files to entire servers. In terms of security, Acronis offers the option of encrypting your data with AES-256 before sending it, so it is encrypted while in transit, not just while at rest in their cloud.
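
On that point, here is a minimal sketch of what client-side AES-256 encryption before upload looks like, using Python’s cryptography library; the upload step is a hypothetical placeholder rather than an Acronis API.

```python
# Minimal sketch of encrypting a backup with AES-256 before upload, so the
# payload is opaque both in transit and at rest. Illustrative only; the
# upload_to_cloud() call is a hypothetical placeholder, not an Acronis API.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_backup(data: bytes, key: bytes) -> bytes:
    # AES-256-GCM: 32-byte key, 12-byte random nonce prepended to the output
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, data, None)

key = AESGCM.generate_key(bit_length=256)   # keep this secret and offline
ciphertext = encrypt_backup(b"backup archive bytes", key)
# upload_to_cloud(ciphertext)  # only ciphertext ever leaves the machine
```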

Acronis Cloud Storage Availability

Immediately

Scale Computing and Acronis



Lenovo ThinkAgile MX1021 & Lenovo ThinkSystem DM7100 Unveiled

Lenovo ThinkAgile MX1021

Today, Lenovo unveiled the Lenovo ThinkAgile MX1021 and the ThinkSystem DM7100 (pictured below). The ThinkAgile MX1021 is a Microsoft-based edge appliance. The ThinkSystem DM7100 appliances are storage solutions. Lenovo, founded in 1984, is well known for its endpoint solutions as well as infrastructure solutions like ThinkSystem servers and the ThinkAgile software-defined portfolio.

Lenovo ThinkAgile MX1021

The Lenovo ThinkAgile MX1021 is a Microsoft Azure Stack hyper-converged appliance aimed at edge-computing use cases. It joins Lenovo’s existing lineup of ThinkAgile MX appliances, all of which have previously used 1 or 2 Intel Xeon processors ranging in speed from 2 to 3.5GHz. The MX1021 is available today through Lenovo’s pay-for-what-you-use data center service, Lenovo TruScale.

ThinkAgile MX1021 Specifications

Form Factor: 1U height, half-width edge server
Processors: Intel Xeon D-2100 series processor, up to 8 cores, up to 100W
Memory: 4 DIMM slots; up to 256GB maximum with 4x 64GB LRDIMMs
Expansion: One PCIe 3.0 x16 slot
Drive Bays: Up to 3x M.2 adapters (1x boot adapter, 2x data adapters) for a total of 10x M.2 drives:
  • 1x single M.2 adapter (1 drive) or 1x dual M.2 adapter (2 drives) installed in a dedicated slot, for boot
  • 1x 4-bay PCIe x16 adapter in a dedicated bay, for 4x M.2 drives, NVMe or SATA, for data
  • 1x 4-bay PCIe x16 adapter in the PCIe riser slot, for 4x M.2 drives, NVMe only, for data
Internal Storage:
  • 2x M.2 2280 SATA boot drives + 8x M.2 22110 NVMe data storage drives
  • 2x M.2 2280 SATA boot drives + 4x M.2 22110 NVMe/SATA data storage drives
  • SED, high-temperature, high-capacity, and high-endurance drive options
  • Optional encryption key deletion on tamper or theft detection
Network Interface: Wired network module (10G SFP+ LOM package): 2x 10GbE SFP+, 2x 1GbE RJ45 (supporting 10/100 Mbps), 2x dedicated ports for remote management
Power:
  • Dual-redundant external power supplies, 100-240V AC
  • Single DC supply: -48VDC (-40VDC to -72VDC) @ 8.4A
High Availability: Direct-connect networking: for a 2- or 3-node HCI cluster, the network adapters can be connected directly to each other without a network switch between the nodes
RAID Support:
  • Software RAID available
  • Hardware RAID 0/1 for M.2 SATA boot SSDs
  • Hardware RAID 0/1 for M.2 SATA storage SSDs
Management: Optional hardware management via Lenovo XClarity and resource management through Microsoft Windows Admin Center, with a mobile option
OS Support: Windows Server 2019 Datacenter
Limited Warranty: Three-year or one-year customer-replaceable unit and onsite limited warranty, 9x5 next business day; optional service upgrades available

The ThinkSystem DM7100 series includes both all-flash and hybrid arrays. Like the MX1021 discussed above, this series of appliances is compatible with Microsoft’s Azure Stack. The DM7100 models come with integrated Azure cloud tiering and data reduction technologies that Lenovo claims provide an average of 3:1 space savings. The DM7100F all-flash model’s 4U form factor gives it plenty of space, for up to 88PB of storage.

 

Lenovo ThinkAgile MX1021 & Lenovo ThinkSystem DM7100 Availability

Immediately through Lenovo TruScale

Lenovo TruScale


Rancher 2.4 Released

Rancher 2.4

Today, Rancher Labs released the latest version of its eponymous software, Rancher 2.4. Rancher is a Kubernetes cluster management platform. Darren Shepherd, Shannon Williams, Sheng Liang, and Will Chan founded Rancher Labs in 2014. The company provides software to assist with Kubernetes and Docker workflows.

Rancher 2.4

Rancher Labs focused on scalability for this release. Rancher 2.4 increases the number of supported clusters to 2,000 and the number of supported nodes to 100,000. There’s also a “preview” version that supports 1,000,000 clusters. I wouldn’t be surprised if they’re seeing some stability issues at that scale, since the supported limit of 2,000 clusters is just 0.2% of the preview maximum.

Aiming to make managing their steadily increasing number of clusters easier, Rancher also included improvements for maintenance workflows. When Rancher 2.4 kicks off an upgrade remotely, the process is managed on local K3s clusters. This allows upgrades and patches to use local resources to update without needing a stable connection to the management server. Once the changes are complete, the cluster syncs back up with the management server. Rancher 2.4 also allows customers to upgrade Kubernetes clusters and nodes without application interruption. Customers can also select and configure their upgrade strategy for add-ons so that DNS and Ingress do not experience service disruption.
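
As a rough sketch of that disconnected-upgrade idea (this is illustrative, not Rancher’s actual mechanism), the snippet below applies an upgrade plan using only local resources and buffers status reports until the management server is reachable again:

```python
# Toy model of a disconnected upgrade: run the plan locally, buffer status,
# and sync with the management server only when it becomes reachable.
# Illustrative only; Rancher's real flow runs on local K3s clusters.
import queue

status_outbox = queue.Queue()

def apply_upgrade_locally(plan):
    for step in plan:
        # ... execute the step with local resources only ...
        status_outbox.put(f"done: {step}")

def sync_with_management_server(reachable: bool):
    if not reachable:
        return []                       # keep status buffered locally
    drained = []
    while not status_outbox.empty():
        drained.append(status_outbox.get())
    return drained                      # report back once reconnected

apply_upgrade_locally(["cordon node", "upgrade kubelet", "uncordon node"])
assert sync_with_management_server(False) == []      # offline: nothing lost
assert len(sync_with_management_server(True)) == 3   # reconnected: all synced
```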

Recognizing that many customers may not want to deal with setting up as many on-premises clusters as Rancher now supports, Rancher Labs offers Hosted Rancher deployments. Each customer gets a dedicated AWS instance of a Rancher Server with a 99.9 percent SLA.

Availability

Immediately

Rancher Software


Datadobi DobiMigrate 5.8 Unveiled

Datadobi DobiMigrate 5.8

Today Datadobi unveiled the latest version of its migration software, DobiMigrate 5.8. DobiMigrate 5.8 is said to enhance integrity reporting of migrations. It does this through an enhanced chain of custody that, by default, proves every document, file, or electronic object is an exact copy of its counterpart on the source system at the time of cutover.

Datadobi DobiMigrate 5.8 CoC

 

Founded in 2010 by Ian Leysen, Kim Marivoet, Ives Aerts, and Michael Jack (founders who were all instrumental in designing and building EMC Centera, the first-ever commercial object storage platform), Datadobi is headquartered in Leuven, Flemish Brabant, Belgium. The company’s main focus is data migration software, though it dabbles in data management and storage software solutions as well.

Over the past few years, several data protection regulations aimed at maintaining the integrity and security of customers’ data have cropped up around the world, the most well-known being GDPR, which imposes strict rules over the confidentiality, integrity, and security of electronic information. These rules also apply while data is being migrated. Datadobi claims that DobiMigrate 5.8 provides this data protection by both default and design, giving IT professionals, legal teams, compliance officers, and the C-suite confidence that the integrity of their filesystem data will be fully preserved throughout a migration.
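
As a generic illustration of what a chain of custody means in practice (this is not Datadobi’s implementation), the sketch below hashes each file on the source and target at cutover and records a manifest proving every object is an exact copy:

```python
# Generic chain-of-custody sketch: record matching source/target hashes for
# every file at cutover. Illustrative only; not Datadobi's implementation.
import hashlib
import time
from pathlib import Path

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def custody_manifest(source: Path, target: Path) -> list:
    records = []
    for src in source.rglob("*"):
        if not src.is_file():
            continue
        dst = target / src.relative_to(source)
        records.append({
            "file": str(src.relative_to(source)),
            "source_sha256": sha256(src),
            "target_sha256": sha256(dst),   # must match source_sha256
            "verified_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        })
    return records
```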

Datadobi Migrate


HYCU Extends Protection To Azure

HYCU for Azure

Today HYCU announced that it has extended its data backup, recovery, and monitoring to Azure. This protection comes as a result of the company expanding its HYCU Protégé platform. Adding Azure to its umbrella of protection makes it even easier for companies leveraging multi-cloud environments.

HYCU for Azure

As we previously stated, HYCU Protégé’s data migration service is surprisingly flexible. Protégé supports both on-demand migrations and staged migrations. In either case, Protégé maintains a backup copy. The service also allows application owners to create new instances of near-production copies in the cloud for testing, development, reporting, or new production instances. All of these services are designed with transfers across the multiple clouds HYCU Protégé supports in mind.

Azure is a major cloud provider, with roughly 60% of all businesses leveraging Azure in some form, so expanding to Azure makes perfect sense for HYCU. The company is making HYCU for Azure data protection functionality available free of charge for the next three months, its part in helping users deal with the Covid-19 pandemic.

Highlights of HYCU for Azure include:

  • True as a Service Offering: As opposed to legacy backup solutions that require a separate infrastructure to run in the cloud, HYCU for Azure runs as a native service, available via subscription directly from Azure Marketplace and billed as part of the Azure bill.
  • Extends and Leverages the Power of the Azure Platform: Using native snapshots and integration with Microsoft’s Active Directory, HYCU for Azure is built on native APIs, supports all BLOB storage classes, and auto-selects the right class of Azure BLOB storage for the policy the customer chooses.
  • Simplified, Impact-free, Enterprise-Class Data Protection of VMs/Apps on Azure: Agentless, HYCU for Azure provides flexible backup policies to meet different SLAs as well as a 1-click view of protected and unprotected apps/VMs. It makes granular recovery easy and leverages its own Change Block Tracking IP to provide incremental-forever backups for extremely efficient storage consumption (see the sketch after this list).
  • 1-click, App Consistent Migration from On-Premises on to Azure and Back: As both application aware and consistent, HYCU provides 1-click migration with no worries about network constraints. Delivered as a true self-service, even for migration and Test/Dev, HYCU for Azure requires zero compute resources on the cloud during migration.
  • 1-click App Consistent DR for Customers Who Want to Use Azure as a DR target: With simplified failover and failback, no limitations with network constraints and no additional compute required on the cloud, HYCU for Azure is an extremely efficient and cost-efficient DR service for users that need that capability.
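
As a toy illustration of the change-block-tracking idea behind incremental-forever backups (real CBT tracks writes at the hypervisor or driver level rather than rescanning, and this is not HYCU’s code), the sketch below copies only blocks whose hashes changed since the previous backup:

```python
# Toy changed-block sketch: hash fixed-size blocks and copy only the blocks
# that differ from the previous backup. Illustrative only, not HYCU's code.
import hashlib

BLOCK = 4096

def block_hashes(image: bytes) -> list:
    return [hashlib.sha256(image[i:i + BLOCK]).hexdigest()
            for i in range(0, len(image), BLOCK)]

def incremental_backup(image: bytes, prev_hashes: list) -> dict:
    """Return only the blocks that changed since the last backup."""
    changed = {}
    for idx, digest in enumerate(block_hashes(image)):
        if idx >= len(prev_hashes) or prev_hashes[idx] != digest:
            changed[idx] = image[idx * BLOCK:(idx + 1) * BLOCK]
    return changed

base = b"A" * BLOCK * 4
prev = block_hashes(base)
modified = base[:BLOCK] + b"B" * BLOCK + base[2 * BLOCK:]
delta = incremental_backup(modified, prev)
assert list(delta) == [1]   # only the one changed block is copied
```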

Availability

HYCU for Azure is available now with data migration and disaster recovery functionality generally available in 30 days.

HYCU


Canonical Managed Apps Announced

Canonical Managed Apps

Today Canonical, the Ubuntu people, made a fairly large announcement with its Managed Apps. Managed Apps allow enterprises to have their apps deployed and operated by Canonical as a fully managed service. The company stated that it will cover ten widely used cloud-native database and LMA (logging, monitoring, and alerting) apps at launch, on multi-cloud Kubernetes as well as on virtual machines across bare-metal, public, and private clouds.

Canonical Managed Apps

The main focus of Managed Apps is to free up DevOps teams to focus on business-critical needs while Canonical worries about covering workloads running on Ubuntu across Kubernetes, OpenStack, VMware, and the major public clouds. According to Canonical, this can all be done at a predictable cost. Managed Apps may solve another issue as well: the skill gaps that have been emerging in IT.

As part of the service, Canonical will also manage databases including MySQL, InfluxDB, PostgreSQL, MongoDB, and Elasticsearch, the NFV management and orchestration application Open Source MANO, and the event streaming platform Kafka. Canonical aims to provide app reliability (even at scale) while providing SLAs for uptime and 24/7 break/fix response, as well as monitoring through an integrated LMA stack and dashboard. Canonical Managed Apps also offers full lifecycle management, high availability by default, and high security through Canonical’s managed services, which have MSPAlliance CloudVerify certification (equivalent to SOC 2 Type 2, ISO 27001/ISO 27002, and GDPR compliance).

Canonical


Microsoft Azure Sphere Security Overview

Microsoft Azure Sphere

Today’s world is ruled by digital technology, and the Internet of Things (IoT) plays a prominent role in our everyday lives and in enterprise business. IoT is a technology that, simply put, transforms any tech device into a more intelligent one. These are always-connected devices that take advantage of cloud computing, allowing data to be shared and analyzed to produce the required output. Accordingly, IoT manufacturers and application developers are realizing new benefits by doing more compute and analytics on the devices themselves.

The Internet of Things is transforming everyday “things” into an ecosystem that enhances our lives and makes them more convenient. From a business perspective, a critical benefit of IoT is that it can be integrated into almost all industries because of its wide range of applications; healthcare, retail, home automation, industrial, and transportation are some of the key IoT application areas. Whatever the case, businesses are on the cusp of being able not only to connect devices to the internet but to use the potential of their data to provide priceless insights, improve operational performance, and boost productivity. However, always-connected devices create a two-way street, putting critical products and equipment at risk and leaving them even more prone to cybersecurity threats.

Microsoft Azure Sphere

While IoT has enhanced human interaction in new ways, the ecosystem must still allow us to build and connect devices securely. And security is precisely the matter still concerning users and organizations. Since IoT connects all of these devices to the internet, they become vulnerable to multiple security threats, such as a lack of physical hardening, software vulnerabilities, data integrity risks, malware and ransomware attacks, poor network visibility, and more. To make sure IT operations remain protected, IoT developers need to keep all of these security issues in mind when deploying modern devices. Given these concerns, large companies and cybersecurity researchers are doing their best to make things better for end consumers. Microsoft, using its decades of experience in hardware, software, and cloud, aims to provide security solutions for IoT devices with Azure Sphere.

Another critical reason to be concerned about IoT data is its integration and management across numerous devices and a distributed architecture. IoT integrates multiple sensors, microcontrollers, communications modules, actuators, and cloud platforms into physical devices. These are continually communicating with each other and with additional computing devices, including servers, workstations, laptops, smartphones, and the cloud itself. In this interconnected ecosystem, remote actors could alter or monitor not just the digital environment but also the actual physical environment.

Microsoft Azure Sphere, the security solution for IoT devices 

Delivering security properties for the future of connected devices is an integral part of IoT. While organizations may recognize the problem, it can quickly become complicated, since the industry is still maturing. The microcontrollers used in most connected devices predate IoT and can no longer guarantee the security demanded by connected systems. Microsoft recently released a new solution to face this problem: Azure Sphere. The solution reached GA a couple of weeks ago, which means the platform is now ready to match the scale of production deployments. Azure Sphere is a secured, high-level application platform with built-in communication and security features for cross-industry IoT devices.

The Azure Sphere platform consists of three key technical components working as one: a brand-new secured silicon chip, the Azure Sphere OS, and the Azure Sphere Security Service. These components unite to create an end-to-end solution for IoT-focused organizations that want to make internet-connected devices secure.

Azure Sphere components

Certified Azure Sphere chips are built by Microsoft’s silicon partners, so they possess the needed hardware root of trust. Microsoft says that, starting in the silicon itself, these chips provide a foundation of security while delivering connectivity and compute power for the devices. Then there is the Azure Sphere operating system (OS), Microsoft’s custom, Linux-based microcontroller operating system that runs on the certified chips and connects to the third component, the Azure Sphere Security Service (AS3). AS3 connects every Azure Sphere chip with every Azure Sphere operating system, and works with both to keep the device secured throughout its lifetime. Together, these three components create a secure software environment for IoT application development.

In addition to the hardware, Microsoft adds a fourth component: its staff and all their security expertise. With this human component, the company provides ongoing security monitoring, upgrades, and improvements for Azure Sphere devices and the entire ecosystem.

Furthermore, another significant aspect of the Azure Sphere solution is its ability to add protection to older IoT devices via the Guardian Module. Guardian modules provide a way to implement secure connectivity in existing devices without exposing those devices to the internet. These modules are built around Azure Sphere chips and connect to AS3 for security checks and automated patching.

Seven properties for highly secured devices

Putting the focus on cybersecurity, Azure Sphere was designed around Microsoft Research’s position on the seven properties required of highly secured devices. The company says that these properties can be easily built into your IoT ecosystem with Azure Sphere.

  • Hardware-based root of trust: This guarantees that a device is running only genuine, up-to-date software before it can connect to the rest of the internet.
  • Defense in depth: More layers of defense make it harder for an attacker to gain access to a device’s most sensitive secrets. More sensitive areas are put behind greater layers of defense.
  • Small trusted computing base: A trusted computing base should be kept as small as possible to minimize the surface that’s exposed to attackers and to reduce the probability that a bug or feature can be used to compromise it.
  • Dynamic compartmentalization: Boundaries between software components can prevent a breach in one component from propagating to others. Dynamic boundaries can be moved and redrawn safely.
  • Certificate-based authentication: Passwords can be the weakest link in many security systems. Certificate-based authentication eliminates the need for passwords to manage a device (see the sketch after this list).
  • Error reporting: Early detection, analysis, and response to errors is critical to stopping threats before they cause significant damage.
  • Renewable security: The ability to deploy ongoing software updates is essential to tightening a device’s defenses and shutting down vulnerabilities.
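
As a minimal sketch of what certificate-based authentication looks like in practice (a generic mutual-TLS client, not Azure Sphere’s actual implementation; the host name and file paths are placeholders):

```python
# Minimal mutual-TLS sketch: the device authenticates with a certificate and
# private key instead of a password. Host name and file paths are placeholders.
import socket
import ssl

context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH,
                                     cafile="service-ca.pem")
# The device certificate and key replace any password-based credential.
context.load_cert_chain(certfile="device-cert.pem", keyfile="device-key.pem")

with socket.create_connection(("iot.example.com", 8883)) as sock:
    with context.wrap_socket(sock, server_hostname="iot.example.com") as tls:
        # Both ends are now authenticated; no shared secret was ever sent.
        print(tls.version(), tls.cipher())
```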

Early Integration

Last year, Microsoft, in collaboration with Innodisk, introduced one of the first solid-state drives (SSDs) built with Azure Sphere, the InnoAGE SSD. Innodisk is a developer of industrial embedded technology based in Taiwan. Technically, the InnoAGE SSD’s custom firmware receives commands from Azure Sphere via a secure connection to the Azure cloud. The device can gather data and provide administration over the cloud. Through the Azure cloud, this end-to-end solution allows Azure Sphere to provide software updates, remote monitoring, data security, analytics, and control. Supposedly, this is the world’s first SSD integrated with Azure Sphere.

Conclusion

As IoT keeps growing in importance for industrial use, businesses are starting to take advantage of its benefits. IoT empowers organizations to automate processes and save money on operations. However, as we connect more devices in enterprise ecosystems to the internet, cybersecurity threats become a real concern. Seeking to address IoT’s security challenges, Microsoft has been investing heavily in Azure Sphere, bringing a high level of security to industrial and home devices.

Drawing on vast experience in internet security, Microsoft focuses on three key components and seven security properties to create the foundation for Azure Sphere. This comprehensive IoT security foundation supports industrial IoT operations with a chip offering robust hardware security, a secure OS, and a cloud security service that monitors devices and responds to emerging threats. Whether in the cloud or on the device itself, the Azure Sphere security standards provide a level of defense against attacks that is currently nearly unmatched among IoT devices.

Azure Sphere


Komprise Elastic Data Migration Hits GA

Komprise Elastic Data Migration

Komprise announced the availability of Komprise Elastic Data Migration. As the name implies, the new solution aims to resolve bottlenecks as companies adopt multi-cloud strategies. The new solution addresses the main migration issues: speed, reliability, accuracy, and cost to help users migrate data across heterogeneous storage and cloud environments. Komprise goes on to claim this can be done up to six times faster and at less than half the cost, compared to other solutions.

Komprise Elastic Data Migration

Data migration is not something anyone really looks forward to. It takes up time, errors can easily pop up, and it can be labor-intensive. More recently, there has been a need to migrate unstructured data from NAS devices to the cloud over the WAN, which adds a whole new layer of issues. Komprise Elastic Data Migration aims to come to the rescue. The company claims that the solution allows users to run, monitor, and manage hundreds of migrations simultaneously while minimizing the time spent on them, freeing up resources elsewhere. The solution is also said to address the latency issues that companies need to overcome as they switch to a multi-cloud approach.

The latest enhancements include:

  • Parallelism at every level: Maximizes the use of available resources by applying parallelism at multiple levels (shares and volumes, directories, files, and threads). Komprise Elastic Data Migration automatically adapts and manages its parallelism to fit the available resources.
  • Protocol optimized to minimize overhead: Minimizes round-trip time over the protocol during a migration to eliminate unnecessary chatter. Rather than relying on generic protocol clients, Komprise is fine-tuned to minimize overhead for each protocol, which is especially beneficial when moving data over slower networks such as WANs.
  • High fidelity with an MD5 check of each file/object: Ensures your data is migrated with all its permissions, metadata, and ACLs intact across storage environments that may support permissions and metadata differently. Komprise performs and reports on the data integrity of each file or object through MD5 checksums (see the sketch after this list).
  • Intuitive graphical user interface and API-driven: Businesses often run multiple migrations in parallel. Komprise provides an intuitive UI that enables you to run, monitor, and manage hundreds of data migrations simultaneously, and also provides API access to schedule and manage data migrations programmatically.
  • Reliable, worry-free migrations: Automatically retries after network failures and eliminates the guesswork and intensive manual effort of traditional solutions
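
As a generic sketch of two of the ideas above, file-level parallelism and per-file integrity checking (this is not Komprise’s code), the snippet below copies a file tree with a thread pool and verifies each copy with an MD5 comparison:

```python
# Generic migration sketch: copy files in parallel and verify each copy with
# an MD5 checksum. Illustrative only; not Komprise's implementation.
import hashlib
import shutil
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def md5(path: Path) -> str:
    h = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def migrate_file(src: Path, root: Path, dest: Path):
    target = dest / src.relative_to(root)
    target.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, target)                    # copy2 preserves metadata
    return str(src), md5(src) == md5(target)     # (file, integrity verified?)

def migrate(root: Path, dest: Path, workers: int = 16):
    files = [p for p in root.rglob("*") if p.is_file()]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda p: migrate_file(p, root, dest), files))
```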

Availability

The new Komprise Elastic Data Migration is available through the company’s channel partners.

Komprise Elastic Data Migration



Diamanti Spektra 2.4 Goes Live

Diamanti Spektra

Diamanti announced the latest version of its hybrid cloud Kubernetes control plane, Diamanti Spektra 2.4. The company states that the latest version is geared toward the most demanding mission-critical applications. This is addressed through the introduction of the Diamanti D20X, a new addition to its family of hyperconverged infrastructure.

Diamanti Spektra

Kubernetes still reigns supreme for running containerized environments as well as many cloud-native applications. To further support this, Diamanti has updated Spektra to 2.4, introduced the D20X HCI, and made enhancements to security and availability. Version 2.4 combines commercially supported Kubernetes distributions and the Docker runtime with enterprise-ready access controls and management. The enhancements should bring new levels of performance and availability to enterprise Kubernetes.

New features in Diamanti Spektra 2.4 include:

  • Volume encryption and self-encrypting drives (SED) – This feature enhances the security of modern applications with integrated encryption for data in motion and at rest, all without impacting application performance and without increasing the overall data center footprint.
  • Multi-cluster asynchronous replication (for offsite DR) – This feature allows enterprises to perform offsite disaster recovery while keeping their data encrypted for distributed applications. This is in addition to the data protection and high availability features already in the platform including snapshots, backup and recovery, and synchronous mirroring.
  • New D20X infrastructure – Adding to the D20 family of modern hyperconverged infrastructure, which currently supports Intel Skylake CPUs and NVIDIA GPUs for artificial intelligence (AI) and machine learning (ML) use cases, the new D20X supports the latest 2nd Generation Intel Xeon Scalable processors (codenamed Cascade Lake). The recently released processors deliver an average of 36% greater computing power, with increased core counts, larger caches, and higher clock frequencies.

Availability

Diamanti Spektra 2.4 is available now with Diamanti Ultima I/O acceleration cards on an extended choice of modern, hyperconverged hardware options.

Diamanti


HPE Supports The Growing Remote Workforce


Today Hewlett Packard Enterprise (HPE) announced several initiatives around helping those that have been impacted by the Covid-19 pandemic. These initiatives cover stronger VDI, financial service tweaks, and preconfigured VDI solutions for SMB and enterprise. The company states it is devoting time and resources to help combat the challenges of Covid-19.

HPE remote workforce

First up, there is much stronger demand from the remote workforce with stay-at-home and social distancing measures in place. To this end, HPE released a more powerful virtual desktop infrastructure (VDI) solution. The company stated that HPE Moonshot now ships with the new HPE ProLiant m750 server blade. This hardware upgrade can deliver performance jumps as high as 70% while using 25% less power, which translates to powering 33% more remote workers on a quarter less power.

On the financial side of things, businesses of all sizes are feeling the economic pinch of the pandemic. To help here, HPE is offering new financial and asset lifecycle options, including short-term rentals and 90-day payment deferrals on VDI solutions. The company is also offering VDI solutions as-a-Service through HPE GreenLake.

HPE also has new, preconfigured VDI solutions for SMBs up to enterprises. The new solutions are built on either HPE ProLiant or HPE Synergy servers. Companies looking to leverage these VDI solutions can start as small as 80 remote workers and scale up to over 2,000. The solutions are designed with Citrix and VMware environments in mind.

HPE Covid-19


VMware Site Recovery Manager 8.3 Hits GA

VMware SRM vVols

Sliding out today, VMware announced that VMware Site Recovery Manager (SRM) 8.3 and vSphere Replication 8.3 are hitting general availability. SRM and vSphere Replication 8.3 were announced earlier in the month, along with announcements around vSphere, vSAN, and Tanzu. The company states that the automation software integrates with an underlying replication technology to minimize downtime in case of disasters via automated orchestration of recovery plans.

VMware SRM vVols

Looking at VMware Site Recovery Manager 8.3, the most requested feature for SRM has been support for VMware vSphere Virtual Volumes (vVols). Ask and ye shall receive. 8.3 now supports vVols. This integration helps improve manageability as well as simplicity. The update also supports VMs on vVols replicated with array-based replication.

For data protection, SRM 8.3 comes with automatic detection and protection of VMs created in vVols replication groups. VMware states that if a VM is created on a datastore that is replicated and protected in SRM, the VM is automatically added to an existing protection group. The automatic protection is applied to VMs whose Storage Policy Based Management (SPBM) policy is changed to a vVols policy for replication, and to a replication group protected with SRM.

Other benefits to SRM 8.3 include:

  • Seamless disk resizing
  • Optimized vSphere replication performance
  • Security Enhancements
  • New capabilities for the vROps Management packs

Availability

Users can upgrade to VMware Site Recovery Manager 8.3 and vSphere Replication 8.3 today. Though not mentioned above, vSphere 7 is now generally available as well. We went over its new features fairly thoroughly here.

VMware


IGEL OS 11.03.5000 Released

IGEL OS 11.03.5000

Today IGEL announced the release of the latest version of its operating system, IGEL OS 11.03.5000. This new version is aimed at home workers who leverage IGEL OS as a secure workspace, providing access to Office 365, Virtual Desktop Infrastructure (VDI), Desktop-as-a-Service (DaaS), and collaboration tools like Microsoft Teams and Zoom. Another big part of this release is the addition of custom partition capabilities and the new IGEL Starter License.

IGEL OS 11.03.5000

The custom partition capability is a fairly big deal in IGEL OS 11.03.5000. This feature allows organizations to use IGEL OS to support non-standard applications and protocols. Basically, it provides the ability to support a “legacy” application or interface that is still needed but isn’t yet supported by IGEL OS. This can save companies money by removing the need to purchase IGEL’s Enterprise Management Pack subscription to support custom partitions.

The new IGEL Starter License in IGEL OS 11.03.5000 allows for a quicker conversion of hardware endpoints to IGEL OS-supported devices. This is particularly important at the moment, with so many working from home. The license gives organizations 30 days to use IGEL Workspace Edition to accelerate and simplify their IGEL roll-out, and version 11.03.5000 doesn’t require activation or registration of a new installation of IGEL Workspace Edition. The company states that IT organizations can now automatically convert their IGEL Universal Desktop (UD) endpoints or third-party endpoint devices to IGEL OS using the IGEL OS Creator tool included in IGEL Workspace Edition.

IGEL OS 11.03.5000 supports Citrix Workspace and VMware Horizon 7, as well as the Linux client support for Microsoft Windows Virtual Desktop (WVD) announced earlier this year.

Availability

IGEL OS 11.03.5000 is available now.

IGEL OS


Cisco Kubeflow Starter Pack Announced

Cisco Kubeflow Starter Pack

Earlier this month, the Kubeflow community introduced Kubeflow 1.0, up from the 0.7 beta release. As one of the top contributors to Kubeflow, Cisco announced the release of the Cisco Kubeflow Starter Pack. The company claims this pack will make operationalizing machine learning for large-scale deployments easier.

Cisco Kubeflow Starter Pack

Kubeflow is an open-source project that offers an end-to-end machine learning (ML) stack orchestration toolkit built on Kubernetes. Kubeflow is designed to make ML workflows easier to deploy, maintain, and coordinate on Kubernetes clusters. Its better-known features include Jupyter Notebooks as the primary user interface for data scientists and machine learning engineers. TensorFlow was initially Kubeflow’s deep learning framework; version 1.0 supports other frameworks, including PyTorch. Kubeflow also has model serving: according to the Kubeflow community, built-in TFServing capabilities enable models to be used without worrying about the detailed logistics of a custom application, along with many other features that are beyond the scope of this article.
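
To give a sense of what a Kubeflow pipeline looks like in code, here is a minimal sketch using the kfp Python SDK’s v1-era interface (names and the output file are placeholders, and details vary between Kubeflow versions):

```python
# Minimal Kubeflow Pipelines sketch using the kfp SDK's v1-era interface.
# Illustrative only; names are placeholders and APIs vary across versions.
import kfp
from kfp import dsl
from kfp.components import create_component_from_func

def train(epochs: int) -> str:
    # Placeholder step; real steps run as containers on the cluster.
    return f"trained for {epochs} epochs"

train_op = create_component_from_func(train, base_image="python:3.9")

@dsl.pipeline(name="demo-pipeline", description="Toy one-step pipeline")
def demo_pipeline(epochs: int = 3):
    train_op(epochs)   # each op becomes a pod on Kubernetes

# Compile to a spec the Kubeflow UI or API can run.
kfp.compiler.Compiler().compile(demo_pipeline, "demo_pipeline.yaml")
```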

Cisco, seeing an opportunity to help IT teams work closely with their data scientist counterparts, released the Cisco Kubeflow Starter Pack. This pack is said to provide IT teams with a baseline set of tools to get started with Kubeflow.

The Cisco Kubeflow Starter Pack includes:

  • Kubeflow Installer: Deploys Kubeflow on Cisco UCS and HyperFlex
  • Kubeflow Ready Checker: Checks the system requirements for Kubeflow deployment. It also checks whether the particular prescribed Kubernetes distribution is able to support Kubeflow.
  • Sample Kubeflow Data Pipelines: Cisco will be releasing multiple Kubeflow pipelines to provide data science teams with working Kubeflow use cases to experiment with and enhance.
  • Cisco Kubeflow Community Support: Cisco will be providing free community support for Cisco customers who would like to check out Kubeflow.

Cisco Kubeflow Starter Pack


FTSI Uses Azure Stack HCI to Reduce Risk & Cost

DataON FTSI Truck

FTS International (FTSI), one of the largest well completion companies in North America, recently migrated its data center Active Directory infrastructure from an on-premises Dell setup to a two-node DataON cluster solution for Azure Stack HCI. We have been working with DataON for the last couple of months to get a better understanding of its Microsoft Azure Stack HCI solutions. Those articles focused on cloud interactions, the DataON HCI-224 2-node HCI solution, a close look at some essential hardware, and Azure Stack HCI background, respectively. FTSI’s recent adoption of Azure Stack HCI gives us an opportunity to investigate how all these elements work together in the real world.

DataON FTSI Truck

FTSI is a very mobile and distributed company. It operates 25-35 mobile “data vans” at a time. These data vans are essentially mobile servers installed in a van so that they can easily be co-located with the current location of the company’s pumps. The data vans send sensor feedback and other metrics from the nearby FTSI custom-made pumps back to the primary data center. FTSI uses data vans to meet its edge computing needs, instead of more traditional and less “cool” solutions, because its pumps are typically on-site for only a few weeks before moving on to the next location. Data vans transmit information back to the primary data center through a satellite link. Like many small- and medium-sized businesses, FTSI uses a colocation facility to host its data center. At the start of our story, FTSI was a Dell shop and used blade servers for all the computing needs at its main facility.

FTSI was founded in 2002 and has a veteran IT team. When we talked with its IT Infrastructure Lead, Eric Morrison, he struck us as being both experienced and knowledgeable. Bucking the stereotype of the anti-social computer nerd, he was also pleasant and polite. Although FTSI headquarters is in Fort Worth, their IT team has been managing a frequently changing network that covers roughly a quarter of the continental United States with data vans ranging from Texas to Pennsylvania to Florida.

DataON FTSI Locations

FTSI began its journey of adopting Azure Stack HCI with a focus on high availability and security. The decision to implement Red Forest security is what kicked off the search that ultimately led them to adopt Azure Stack HCI. Red Forest is an Active Directory (AD) user identification model that splits users into three horizontal tiers. Grouping services into distinct tiers limits the ability of attackers to compromise other elements of the network even after they successfully gain access to a user account through phishing or other techniques. According to Morrison, “The first step towards that was we needed a whole new separate environment to run that on. And that’s where we got the 2-node HCI.”
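
As a toy model of the tiering idea (illustrative only, not Microsoft’s ESAE reference design), the check below confines an account to resources in its own tier, which is what limits lateral movement after a credential is phished:

```python
# Toy model of tiered admin access ("Red Forest" idea, simplified): an
# account may only administer resources in its own tier. Illustrative only.
from enum import IntEnum

class Tier(IntEnum):
    T0 = 0   # identity infrastructure (domain controllers)
    T1 = 1   # servers and applications
    T2 = 2   # workstations and user devices

def may_log_on(account_tier: Tier, resource_tier: Tier) -> bool:
    # A phished Tier 2 account cannot touch domain controllers, and Tier 0
    # credentials are never exposed on lower-tier machines.
    return account_tier == resource_tier

assert may_log_on(Tier.T2, Tier.T2)
assert not may_log_on(Tier.T2, Tier.T0)   # compromised workstation account
assert not may_log_on(Tier.T0, Tier.T2)   # keep T0 creds off workstations
```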

In addition to the need for multiple separate environments, Morrison “didn’t want to rely on anything in our data center to maintain that environment.” FTSI saw a lot of value in offloading disaster recovery and backup to the cloud. HCI is often used to mean hyper-converged infrastructure, but it can also be read, as in this case, as hybrid cloud infrastructure. One of the key strengths of a hybrid cloud approach is that it allows companies to get the safety and security of offsite backup at a fraction of the capital cost that setting up a second, backup-only facility would incur; the latter was a commonly recommended practice even just a decade ago.

FTSI considered several solutions before settling on DataON’s solution for Azure Stack HCI. Since FTSI was primarily a Dell shop at the time, picking one of Dell’s solutions would have been natural, especially since Eric Morrison had several good things to say about Dell and says FTSI still has a good relationship with the company. However, DataON was able to offer a good price point while still providing all the features that FTSI wanted, primarily thanks to the improvements Microsoft has made to Azure Stack HCI in recent years. In terms of price, DataON has a significant edge over Dell and other established providers in this space because it offers a leaner platform with Azure Stack HCI. Most established providers bundle additional software with their HCI solutions. In FTSI’s case, the extra functionality that something like Nutanix or VMware would provide wasn’t needed (FTSI considers itself a Hyper-V shop). Because DataON’s Azure Stack HCI solution didn’t require FTSI to purchase licenses for an unnecessary bundle of third-party applications, it was able to offer a better price point.

Traditionally, setting up a server rack might take a week or more of unboxing, racking, cabling, and configuring before provisioning the first system. DataON servers come pre-installed with Windows Server and pre-configured to the customer’s requested specs. In addition, DataON also sent out an engineer to help FTSI set up the new system. As a result, a process that usually takes at least a week was completed in less than a day, and the FTSI team was able to start provisioning and migrating systems to its new DataON solution for Azure Stack HCI during the first day. As of this writing, FTSI was still happily running ASR, Azure Monitor with Log Analytics, and all of its critical Tier 0 applications, like its domain controllers, on its Azure Stack HCI.

The package DataON included with FTSI’s Azure Stack HCI purchase was impressively comprehensive. The meat of the package was two 1U DataON S2D-5108i server nodes with four Intel Xeon Silver eight-core CPUs. Each node had four 2TB NVMe drives for storage. Also included were multiple setup scripts to facilitate fast setup and installation of the new system. In a similarly customer-friendly vein, the package also included detailed, easy-to-understand setup instructions for the hardware. Most impressively, it came with twenty pages of customized installation instructions that were nearly idiot-proof. These were not your standard-form instructions that leave the hapless user to guess which version is closest to what they need; they included information relevant not just to the hardware FTSI had purchased but specific to its planned use, revealing real attention to detail. With instructions like these, it’s no surprise the install process was quicker and smoother than the industry standard. As a cherry on top, the package also included testing and benchmark support to help verify that everything was assembled correctly and nothing was damaged in transit.

DataON FTSI Server

The other reason the installation process was so smooth was that DataON provides a truly turnkey solution. Not only do they offer onsite deployment support and onsite integration setup assistance, but their team also assisted FTSI in selecting the right system for their needs. Before ever arriving onsite, DataON engineers were involved in infrastructure design. Even after the setup was complete, they also assisted with testing validation.

In terms of data throughput, the FTSI datacenter smoothly processes all the information 25-35 data vans can send it. This works out to about 30-50GB of data a week just from the data vans. All of this data is retained, although most of it is currently being archived onto slower legacy systems.

Looking to the future, FTSI has been so pleased with the performance and low maintenance cost of its DataON solution for Azure Stack HCI that it is considering expanding where it’s used. The two apparent upgrades would be to replace the rest of its data center hardware with additional HCI appliances or to replace the edge nodes in the data vans; FTSI is currently considering whether there are any other areas of its operations where HCI clusters would be an improvement. Each data van is custom-built, and upgrading them to HCI would be costly in the short term, so that switch is likely to be the last one made. However, if completed, it would likely save money through reduced outages and operational savings like lower power demands.

DataON FTSI Inside Truck

This report was sponsored by DataON. All views and opinions expressed in this article are based on our unbiased view of the product(s) under consideration.


StorageReview Podcast #41: Simon Taylor, HYCU

HYCU Supported Systems

On this week’s podcast, Brian talks with Simon Taylor, CEO of HYCU. HYCU is a backup and recovery provider that started out not long ago as a dedicated backup application for Nutanix HCI systems. Simon talks through the origins of HYCU and what the company is doing today to enable newly supported Azure cloud backup. For those who want to try out HYCU for Azure, the company is offering a free three-month trial.

On the rest of the podcast, the team quickly devolves into Tiger King talk, but at least a good conversation around data backup takes place. Kevin updates us on lab activities, including the Asigra backup plugin for FreeNAS. Tom rambles on about trying to cheat in his VMware Horizon install showdown. And Adam gives us a new movie for this week’s AMC: he recommends Upgrade, streaming on HBO and Hulu.

HYCU Supported Systems



News Bits: Samsung, AWS, Backblaze, Quantum, Kasten, Synology, FalconStor, Wasabi, IBM, & AMD

StorageReview logo

In this week’s News Bits, we look at a number of small announcements, small in terms of content, not the impact they have. Samsung announces EUV DRAM. AWS makes several announcements. Backblaze hits an exabyte. Quantum expands its video surveillance portfolio. Kasten introduces K10 2.5. Synology VPN Plus licenses are free until 09/20/2020. Aerospike announces Aerospike Cloud. FalconStor and Wasabi partner on hybrid cloud. IBM Cloud Bare Metal Servers are powered by AMD EPYC.

Samsung Announces EUV DRAM

Samsung EUV 

Samsung announced that it has shipped one million of its new 10nm-class (D1x) DDR4 (Double Data Rate 4) DRAM modules based on extreme ultraviolet (EUV) technology. According to the company (the first to adopt EUV for DRAM), EUV overcomes challenges in DRAM scaling. EUV technology reduces repetitive steps in multi-patterning and improves patterning accuracy, enabling enhanced performance and greater yields as well as shortened development time.

Samsung 

AWS Makes Several Announcements

AWS logo

Though no AWS-specific show that I know of was missed, AWS made a few announcements over the past week. AWS Outposts are now supported in AWS GovCloud (US) Regions. AWS added more cost-effective HDD storage to its FSx for Windows File Server. And AWS stated that Elastic File System read performance has increased 400%, now supporting up to 35,000 read operations per second.

AWS 

Backblaze Hits An Exabyte

Backblaze announced that it is now storing and managing over 1 exabyte of data, quite a feat for a company that started in 2007 with just five employees. While this is cause for celebration, the company already has its eyes on its first zettabyte on the horizon.

Backblaze

Quantum Expands Video Surveillance Portfolio

Quantum expanded its video surveillance portfolio with new video recording servers, new systems with GPU-based video analytics, and new capabilities for the VS-HCI Series. The enhanced portfolio is based on the company’s StorNext file system and leverages the recently acquired ActiveScale. Quantum states that this makes for one of the broadest security infrastructure solution portfolios available from any single vendor.

Quantum Security solutions

Kasten Introduces K10 2.5

 

Kasten released version 2.5 of K10, its platform for backup and restore, disaster recovery, and mobility of Kubernetes applications. The latest version features the company’s Cloud-Native Transformation Framework to enhance the automation and reliability of application and data migration in Kubernetes environments.

Kasten K10 

Synology VPN Plus Licenses Free Until 09/20/2020

In response to the COVID-19 pandemic, Synology announced that its VPN Plus licenses will be free from April 6 until September 20, 2020, making it much easier for users to work from home. Both existing and new owners of Synology’s RT1900ac, RT2600ac, and MR2200ac wireless routers will be able to obtain VPN Plus Client VPN Access and Site-to-Site VPN licenses for free.

Synology VPN Plus

Aerospike Announces Aerospike Cloud

Aerospike announced that it has launched Aerospike Cloud, which enables customers to build, manage, and automate their own Aerospike database-as-a-service (DBaaS). Built on CNCF standards and optimized for GKE on GCP, its features include the following (a brief deployment sketch follows the list):

  • Kubernetes Operator: Custom Aerospike-specific extensions to the Kubernetes API that encapsulate operations domain knowledge, such as scale-up, scale-down, cluster configuration management and upgrades.
  • Helm Charts: Deploy Aerospike clusters in a Kubernetes environment using the Helm package manager, a CNCF incubating project.
  • Prometheus: Integration with the CNCF-graduated monitoring and alerting solution by way of a custom exporter for Aerospike Enterprise Edition and Alertmanager configs.
  • Grafana: Integration with CNCF member Grafana Labs’ open source visualization platform through custom dashboards for the Aerospike EE Prometheus exporter.
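To make the Helm item above concrete, here is a minimal sketch of what a chart-based deployment typically looks like. The chart repository URL, chart name, and the ‘replicas’ value below are illustrative assumptions rather than Aerospike’s published names; check the Aerospike Cloud documentation for the real ones.

    # Hedged sketch: deploying a database cluster with Helm on GKE.
    # Repo URL, chart name, and the 'replicas' key are placeholder assumptions.
    helm repo add aerospike https://example.github.io/aerospike-charts
    helm repo update
    helm install my-aerospike aerospike/aerospike \
      --namespace aerospike --create-namespace \
      --set replicas=3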

Aerospike 

FalconStor & Wasabi Partner On Hybrid Cloud

FalconStor announced that it has partnered with Wasabi to bring together cloud data migration, long-term archival and information preservation. The collaboration/integration combines FalconStor’s deduplication and Wasabi’s public cloud object storage service. This results in lower storage consumption, lower costs, and can eliminate the need for secondary data centers.

FalconStor

Wasabi

IBM Cloud Bare Metal Servers Powered By AMD EPYC

IBM Cloud announced that its Bare Metal Servers will be powered by second-generation AMD EPYC CPUs. IBM Cloud Bare Metal Servers with the dual-socket EPYC 7642 platform feature:

  • 96 CPU cores per platform
  • Base clock frequency of 2.3GHz with a Max Boost up to 3.3GHz
  • 8 memory channels per socket for increased memory bandwidth
  • Up to 2TB memory configuration support
  • Up to 24 local storage drives
  • OS choices of RHEL, CentOS, Ubuntu, and Microsoft Windows Server

IBM Bare Metal Servers

The post News Bits: Samsung, AWS, Backblaze, Quantum, Kasten, Synology, FalconStor, Wasabi, IBM, & AMD appeared first on StorageReview.com.

HiveIO Responds To COVID-19 By Launching DaaS

Today, HiveIO announced that it is now offering a Desktop as a Service (DaaS) solution. A DaaS solution provides virtual desktops through the cloud in much the same way a Virtual Desktop Infrastructure (VDI) deployment or thin client does using on-premises servers. HiveIO was founded in 2015 and prior to this had focused on VDI solutions.

HiveIO DaaS

HiveIO built its new Desktop as a Service (DaaS) solution out of its existing VDI software. Aurora Cloud Technologies is providing cloud infrastructure for the service. Aurora Cloud Technologies shares its name with several other companies, but the one HiveIO partnered with provides private and hybrid cloud services. Aurora Cloud Technologies is providing customers of HiveIO’s new DaaS with an SSAE 16 Type 2 certified data center along with 24×7 hardware support and desktop management. Customers will be able to administer their solutions themselves, as well.

HiveIO’s DaaS is well suited for any company that needs to enable its employees to work from home but doesn’t have the IT staff or existing servers to support such a transition. With so many companies suddenly needing to send their employees home to protect them from the COVID-19 virus sweeping across the world, this service would have been in high demand last month. Unfortunately for HiveIO, most companies have already settled on a solution to allow employees to work from home. While shipping a release like this in only a month would normally be amazingly fast, in this situation it may not have been fast enough.

Availability

Immediately

HiveIO Main Site

Discuss on Reddit

Engage with StorageReview

Newsletter | YouTube | Podcast iTunes/Spotify | Instagram | Twitter | Facebook | RSS Feed

The post HiveIO Responds To COVID-19 By Launching DaaS appeared first on StorageReview.com.

How To: Recover Deleted Files With PhotoRec

If you find yourself here reading this article, it probably means something has gone terribly wrong. Take a deep breath, we’re going to get through this. Buried in the depths of the Google search results for “deleted file recovery,” past the very aggressive SEO results of various companies trying to get you to buy their software, lies a result for one of my favorite pieces of free open-source software, PhotoRec. It is a companion program to TestDisk, another piece of wonderful open-source software, created by CGSecurity under the GNU General Public License. In this guide, we will go through the relatively painless process of recovering deleted files with PhotoRec. These tools are especially useful for recovering files from portable flash media used with digital cameras.

A few considerations before we get started, to save you time: if the memory card was formatted in a professional-level camera, such as a Sony FS7 or Arri Alexa, the chances of recovery are very low, if not nonexistent. Unfortunately in this scenario, when you format cards in these cameras, the cards are zeroed out for security (so others can’t recover data from a sensitive or private shoot) and to maintain the performance of the media. File recovery here is best left to professionals or the camera manufacturer, and even then the chances are unfortunately slim. Additionally, if the card has been used since formatting, it is very likely the media you are trying to recover has been overwritten.

If the above scenarios don’t apply to you, and you simply deleted a file or formatted a drive in a computer (note: Quick Format only; a full format in Windows also zeroes out the media), there’s a very good chance your files are still there waiting to see the light of day again. PhotoRec is available on basically every operating system, but for this guide we will be walking through the process in Windows 10 Pro. The steps still apply for Mac OS X and Linux to recover deleted files with PhotoRec.

Get Started: Recovering Deleted Files with PhotoRec

For this example, we will be using a card from an Atomos Shogun external recorder.

Free File Recovery PhotoRec Drive

We will format it to exFAT and use Quick Format (note, using a full format will cause the files to be unrecoverable). As we can see, there are seven QuickTime .mov files currently on the drive (two obscured by the formatting window). They are all 4k ProRes 422 files recorded on the device.

Free File Recovery PhotoRec Step 1

The first step is to download the TestDisk and PhotoRec software suite for your operating system from https://www.cgsecurity.org/wiki/TestDisk_Download and extract the ZIP to wherever you’d like.

By this point, the media that needs to be recovered should be plugged in. Navigate to where your extracted files are and launch the PhotoRec executable. Note that the ‘qphotorec’ file is the same application, but with a GUI. You can use either one and get the same results, but this guide will walk you through the command line-based interface, since it will be most similar between platforms.
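As an aside, PhotoRec can also be run non-interactively, which is handy over SSH or for scripted recoveries. Below is a minimal sketch assuming a Linux or Mac OS X shell and that /dev/sdb is your card reader; the flags follow the PhotoRec manual, but verify them against the documentation for your version.

    # Log the session, recover into /mnt/recovery (PhotoRec creates its
    # numbered recup_dir folders inside), and search the device unattended.
    sudo photorec /log /d /mnt/recovery/ /cmd /dev/sdb search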

Free File Recovery PhotoRec Step 2

Let’s select the drive that we want to recover. In this example, it shows up as “JMicron Generic” as the Atomos drive is actually a 480GB SanDisk Ultra II SATA SSD.

Free File Recovery PhotoRec Step 3

On the next screen, we’re going to select the [File Opt] option with the arrow keys. This is the most critical step as it will let PhotoRec know what type of files we’re trying to recover.

By default, all the file extensions should be selected (don’t worry if they’re not). We’re going to follow the prompts on screen and press ‘s’ to disable all of the extensions, scroll down with the arrow keys, and use the spacebar to select the file extension(s) we want (if you wish to save metadata files, you can also select .xml, .csv, or whichever format your camera saves metadata in). In this example we’re going to select ‘.mov’ since we’re recovering QuickTime files. Press ‘b’ to save these settings and select ‘Quit’ twice to return to the partition selection page.

It’s worth noting that if your files come out unplayable, you can try to come back to this screen and select the ‘mov/mdat’ option, which will allow PhotoRec to recognize the fragmented files and keep them together. Please refer to the additional notes at the bottom of this article on how to merge these files if this applies to you.

Back at the partition selection screen, we’re going to select Partition 1, since in this case it is our empty, freshly formatted partition. If you don’t find all of your files, you can come back to this screen and try again and select ‘Whole Disk.’ This is also useful if the filesystem of the drive is corrupt.

The next step is straightforward: we’ll select ‘Other’ as our file system type, since the drive is exFAT.

In the next step, we’re going to select ‘Whole’ so PhotoRec searches the whole partition for deleted files.

In the next step, we’re going to select a destination for PhotoRec to recover the files to. Of course it goes without saying, but I’ll say it anyway: the recovery destination should absolutely not be the media you are trying to recover. Use the arrow keys to navigate the directories (the left arrow will bring you to the parent directory, up to drive selection). I recommend making a new folder on a separate drive from your operating system to recover the files to. Use ‘enter’ to enter the folder and press ‘c’ to select it. Once selected, the recovery process will start automatically. PhotoRec will create a subfolder named “recup_dir.x” where ‘x’ is the number of recoveries in this folder (e.g., recup_dir.1, recup_dir.2, etc.).

Sit back and grab a stiff drink while your files (hopefully) come back from the abyss. The process is fairly quick, but will of course vary wildly depending on what type and what size the media you’re recovering is. As a frame of reference, this example took about 25 minutes for a complete scan on a 480GB SSD over a USB 3.0 reader. The recovered files were backed up to an external Thunderbolt 3 dual-bay storage array in RAID0. The seven files were found and recovered within the first 5 minutes of the scan. You can view the files as they are being recovered.

By now, your files should be back! Breathe that sigh of relief and keep this article bookmarked to share with anyone you know who may find themselves unfortunate enough to be in the same situation you were just in. If no files were recovered, skip to the next section; all hope is not lost (yet). A few of the caveats with this recovery process (and most other recovery software) include a loss of any directory structure, and a loss of the file names. This is a small price to pay for file salvation. It’s worth noting that if necessary, you can get the file names back; see the notes below on how to do this.

Just as a sanity check, I opened up the original file and the recovered file in an app called Beyond Compare and compared the binary data between the two files. They were a perfect match!
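If you don’t have a GUI comparison tool handy, the same sanity check can be done from the command line; here’s a quick sketch (the file names are placeholders for your own original and recovered files).

    # Mac OS X/Linux: cmp is silent and exits 0 when files are byte-identical.
    cmp original.mov recup_dir.1/f0123456.mov && echo "perfect match"
    # Windows Command Prompt equivalent:
    #   fc /b original.mov f0123456.mov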

File Recovery Didn’t Work?

If the process didn’t work, there’s a few more things you can try. To reiterate the beginning of this article, if the media was formatted in a professional camera or overwritten, the chances of recovery are very slim.

  1. If you’re only getting partial or unplayable .mov files, certain cameras, such as the Canon 5D Mark III, write data to the card in fragments, which PhotoRec does not expect and does not recover. Files from GoPros will be more problematic as they create several fragmented files. You may be able to merge these in your video editing software of choice afterwards.

You can return to the partition selection screen (the screen after you select which drive to recover), select [File Opt] and in addition to selecting .mov, select ‘mov/mdat.’ This will create two files with similar names, one with _ftyp.mov and one with _mdat.mov.

This is a little advanced, so I’ll be making a few assumptions here about your skill level with the command prompt and terminal. In Windows, open up a new Command Prompt as an Administrator and go to the directory where the files are with the ‘cd’ command. We’re going to merge the files using the ‘type’ command. Usage goes ‘type file1_ftyp.mov file1_mdat.mov > file1.mov’, merging each ftyp/mdat pair that shares a base name. This will have to be repeated for every set of files PhotoRec recovers. Under Mac OS X and Linux, the same usage applies; however, we will use ‘cat’ instead. If you get permission errors, make sure you use ‘chown -R’ to take ownership of the recovery directory.
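Since the merge has to be repeated for every recovered pair, a small loop saves time. Here is a minimal sketch for Mac OS X/Linux, assuming the paired _ftyp.mov/_mdat.mov naming described above (the _merged.mov output name is just a placeholder).

    # Merge every PhotoRec ftyp/mdat pair in the current directory,
    # ftyp first, matching the 'type'/'cat' usage above.
    for ftyp in *_ftyp.mov; do
      mdat="${ftyp%_ftyp.mov}_mdat.mov"
      [ -f "$mdat" ] || continue   # skip incomplete pairs
      cat "$ftyp" "$mdat" > "${ftyp%_ftyp.mov}_merged.mov"
    done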

  2. If you’re trying to recover JPEG images from a card and only got a few images, you can return to the partition selection screen (the screen after you select which drive to recover), select ‘Options’ and go up to the ‘Paranoid’ option and hit enter until you select ‘Paranoid : Yes (Brute force enabled).’ This will tell PhotoRec to save more fragmented JPEGs that can possibly be salvaged using other software, such as Photoshop. Note that this process happens after the regular scan, will take a bit of time, and you may notice your computer get sluggish, as this is a very CPU-intensive task.
  3. If the application outright crashes, you can return to the partition selection screen (the screen after you select which drive to recover), select ‘Options’ and go up to the ‘Low memory’ option, and select ‘yes.’ If you have at least 16GB of RAM in your system, this shouldn’t be an issue, but if you have a lower-spec machine, this should help. Also make sure (in Windows) that you are running the application as an Administrator, even though it should launch that way by default.
  4. If you are only able to get a few files or none at all, you can return to the partition selection screen (the screen after you select which drive to recover), select ‘Options’ and go up to ‘Keep corrupted files’ and select ‘yes.’ This will tell PhotoRec not to discard corrupted files and may let you recover parts of the file in a separate video or photo editing program.

Renaming the File Back to Original

If your file recovery was successful, but you’d like to rename the recovered files to their original file names, you can use another piece of software called ExifTool. This tool is fairly straightforward, but requires prerequisite knowledge of command-line use. It reads the embedded metadata in the files to restore the filenames back to the originals. Please refer to the ExifTool documentation and Chapter 14 (page 43) of the testdisk.pdf document for instructions on how to do this. It’s a little too niche and involved to explain here.
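That said, to give you a flavor of ExifTool’s rename syntax, here is one commonly documented pattern. Note that this sketch renames by the embedded creation timestamp rather than the original filename; which tag (if any) holds the original name varies by camera, so treat it purely as an illustration.

    # Rename recovered clips to their embedded CreateDate metadata,
    # e.g. 20200406_142233.mov; %%-c appends a counter on name collisions.
    exiftool '-FileName<CreateDate' -d %Y%m%d_%H%M%S%%-c.%%e recup_dir.1/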

Conclusion

At this point, you’re either elated that your files are back or in desperate need of inebriation. In either scenario, let’s take a look at how we got here in the first place and how to prevent it from happening again in the future.

I would start with using applications such as Pomfort SilverStack or ShotPut Pro to download media off the cards. These applications offer checksum verification to ensure that the data has been transferred without error. They can also create reports that have the checksums, thumbnails, and all the file information in them to make sure everything is where it’s supposed to be. Simply copying and pasting files from the media to a hard drive is borderline negligent and should never happen if you care at all about what you’re copying. These programs are not free, but worth their weight in gold for peace of mind. SilverStack is Mac OS X only, and ShotPut Pro is available for Windows and Mac.
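If you are ever stuck copying by hand in a pinch, you can at least approximate what these tools automate with a checksum manifest. A minimal sketch for Mac OS X/Linux follows (use sha256sum if your Linux distribution lacks shasum); the volume paths are placeholders for your own card and backup drive.

    # Hash everything on the card, then verify the copy against the manifest.
    cd /Volumes/CARD && find . -type f -exec shasum -a 256 {} + > ~/shoot01.sha256
    cd /Volumes/BACKUP/shoot01 && shasum -c ~/shoot01.sha256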

Another fantastic application is PARASHOOT from OTTOMATIC. This app is only available on Mac OS X, but it is free and provides an indispensable level of “idiot checking” to make sure cards are not overwritten. As their website states, Parashoot “checks if files on an inserted memory card are already backed-up somewhere[…] It can also fake format a memory card so once inserted again into a camera it prompts to reformat the card.” This lets whoever puts that card back in the camera know that something is amiss if the camera doesn’t prompt for a format, at which point they can check whether the card has been properly backed up. The process is also reversible: all it does is flip every bit of the first 2MB of the card, destroying the file system information, but in a controlled way that can be undone.

Finally, I would adhere to the industry-standard 3-2-1 backup rule. This is an easy way to remember to keep three copies of your data, on two different types of media, with one copy offsite. At the very least, you should have your data in two locations while on set, before a third backup is sent offsite. And remember, RAID is not a backup! And neither is “the cloud” for that matter.

I hope this article has helped you out. Nobody wants to be in a situation where files need to be recovered and I hope the tips for best practices will prevent this from happening to you in the future. I’ve been in this situation before and I know how stressful it can be. Best of luck on all your future creative endeavors and I wish you nothing but the most reliable data storage.

Discuss on Reddit

Engage with StorageReview

Newsletter | YouTube | Podcast iTunes/Spotify | Instagram | Twitter | Facebook | RSS Feed

The post How To: Recover Deleted Files With PhotoRec appeared first on StorageReview.com.

HP t740 Thin Client Review

HP states that their HP t740 Thin Client, which they’ve geared towards remote users who run multiple high-resolution displays, is the world’s most powerful desktop thin client. This claim is based on the t740 having the processor with the highest CPU mark among in-market desktop thin clients as of June 2019. In this review, we will give an in-depth overview of the HP t740’s specifications, design, and build quality, and a summary of the testing that we carried out on it. We will then lay out our key findings from those tests, provide our thoughts about the device, and briefly discuss who would benefit from using this thin client.

Although the market for virtual desktop infrastructure (VDI) clients powerful enough to run multiple 4K monitors is small, it has become increasingly important as some companies have adopted work-at-home policies or need to house the servers that process data in a separate location, as those servers can be loud and produce a fair amount of heat. A few of the verticals that need to display data on multiple 4K monitors include media and entertainment (M&E), financial services, and product design engineering.

HP t740

HP t740 Thin Client Specifications

To give a brief overview of its specifications, the t740 is a desktop thin client with multiple USB and video ports, powered by an AMD Ryzen CPU with an integrated Radeon GPU and the option for a second GPU card. The device runs the HP ThinPro or Windows 10 IoT Enterprise operating system, and supports all the major VDI environments as well as some niche ones such as HP RGS.

The HP t740 Thin Client line consists of different configurations. This review is based on a top-of-the-line unit with an optional GPU. Below are the detailed specifications for the HP t740 Thin Client we used for testing purposes in this review:

  • Manufacturer: HP
  • Model: HP t740
  • Part number: 5PD46AV
  • MSRP: $956 USD (base price $651)
  • Client type: desktop thin client
  • Form factor: small desktop
  • OS: HP ThinPro or Windows 10 IoT LTSC 2019
  • Supported remote display protocols: Microsoft RDP; HP RGS; VMware Horizon RDP/PCoIP and Blast Extreme; Citrix ICA/HDX (not all OSes support every protocol); TTWin and TTerm; and others
  • CPU: AMD Ryzen V1756B with Radeon Vega 8 Graphics
  • GPU: Discrete – AMD Radeon E9173
  • Memory: 8 GB DDR4L-2400 SDRAM (2 x 4 GB)
  • Storage: 128 GB Flash memory
  • Speaker: internal amplified speaker system for basic audio playback
  • Display support: up to six displays at UHD/4K (3840 x 2160 @ 60 Hz) resolution
  • Power: 19.5V, 4.62A external power adapter
  • Ports:
    • 1 x USB Type-C 3.1 Gen 2
    • 3 x USB-A 3.1 Gen 1
    • 1 x USB-A 3.1 Gen 2
    • 2 x USB-A 2.0
    • 1 x RJ45
    • 6 x full size DisplayPort 1.2
    • 1 x 3.5 mm headphone/microphone combo
    • 1 x AC power
  • Network connectivity:
    • RJ45 – Realtek RTL8111EPH-CG Gigabit Ethernet (GbE) Controller with support for DASH out-of-band remote management
    • Intel Wireless-AC 9260 Wi-Fi/Bluetooth combo; 2×2 802.11ac Wi-Fi and Bluetooth
    • Bluetooth 5
  • Physical size: 50 x 210 x 210 mm
  • Physical weight: 1.32Kg
  • Color: black
  • Keyboard: HP USB Slim Business Keyboard with built-in smart card reader (TPC-S001K)
  • Mouse: HP USB Optical Mouse (MOFYUO)
  • Compliant standards: UL, CSA, FCC, Energy Star, and EPEAT 2019, and many others
  • Package contents: t740, power adapter, HP mouse and keyboard, base stand, warranty and setup guide
  • 1 & 3-year parts and labor warranty available

HP t740 Thin Client Design and Build

The cardboard box that the device comes packaged in is heavy and well-designed; the device itself comes in a plastic bag and is nestled between two black foam blocks. The keyboard is packaged in its own cardboard box, and the mouse comes in a plastic bag. The box also contains the power supply, warranty, setup guide, and base stand.

The front of the device has the power button with indicator lights, a 3.5 mm headphone/microphone combo jack, and three USB ports: a USB-A 3.1 Gen 1, a USB-A 3.1 Gen 2, and a USB Type-C 3.1 Gen 2.

The back of the device has two USB 2.0 ports, two USB 3.1 Gen 1 ports, six DisplayPort 1.2 ports (two of them supplied by the optional GPU card), and two access covers. One of the covers provides access to the low-profile PCIe expansion slot, which is populated with an AMD Radeon E9173 GPU. The other cover, if used, provides dual coaxial cable connectors for an external antenna or a serial port.

HP t740 ports

The back of the device can be removed by pressing a latch and then removing the back cover.

HP t740 open

The left side of the device has an embedded cover that can be removed to display the certificates, regulatory labels, and serial number for reference. You will need the serial number if you need to contact HP customer service for assistance. Also contained within this embedded cover is a standoff to be used with a VESA 100 mounting bracket.

The stand (included) can be attached to the device by positioning the stand over the bottom of the thin client, lining up the captive screws in the stand with the screw holes in the thin client, and then tightening the screws.

The right side of the device can be removed to expose the device’s motherboard and GPU. The motherboard is high quality; the CPU has an 80mm brushless DC fan on top of it, and the board also contains the slots for the RAM, M.2 SATA Flash storage, and M.2 eMMC or NVMe storage.

The entire case is made of black plastic with ventilation slots on the top and bottom. It is very well-made and designed with serviceability in mind.

This is the first VDI client we have had that uses the AMD Ryzen V1756B with Radeon Vega 8 graphics. This CPU is a 4-core, 8-thread chip with 8 GPU cores, a base frequency of 3.2GHz, and a boost frequency of 3.6GHz. It has comparable performance to an Intel Core i5-5675.

We were pleasantly surprised with the quality of the keyboard and mouse that come with the device. Usually if a keyboard and mouse are included with a VDI client, they will be of acceptable but unremarkable quality. This included keyboard, however, is of better-than-average quality and has a built-in card reader.

HP t740 Thin Client Documentation

The device’s startup guide has a URL to the VDI client’s documentation, which contains a Maintenance and Service Guide, User Guide, and Regulatory, Safety and Environmental Notices User Guide. The Hardware Reference Guide is written in English, 51 pages in length, and explains how to set up the device; however, it does not explain how to configure it to work with the major VDI environments. To learn how to configure it to use the major VDI protocols, you will need to consult the documentation for the operating system that you have running on the system (HP ThinPro or Windows 10 IoT Enterprise). Our particular t740 had Windows 10 IoT Enterprise pre-installed on it.

Microsoft Windows 10 IoT

We are starting to see more and more VDI clients that use the Microsoft Windows 10 IoT Enterprise operating system (formerly known as Microsoft Embedded). Microsoft sees IoT as a huge opportunity and offers the OS in both single use and embedded systems; Windows 10 IoT Core only allows a single application to run, and Windows 10 IoT Enterprise is a full version of Windows 10 that has been designed to be locked down to a specific set of applications and peripherals.

HP t740 Usability and Thin Client Setup

The real test of a virtual desktop client is its usability; to put the t740 to the test, we used it for three weeks in our Pacific Northwest lab with various configurations. Below are the key results we noted during our time using the client. Note that the HP t740 Thin Client can be used with HP Device Manager (HPDM) for centralized, server-based administration of HP thin clients.

For system configuration and our initial testing, we connected a Dell U3219Q monitor to the DisplayPort on the device marked DP1. We like using the Dell U3219Q because its built-in KVM switch is extremely useful for testing purposes as it allows us to switch between the VDI client and our laptop with the push of a button.

We powered on the device by pressing the power button on the front of the device, and upon doing so we saw a splash screen and then a black screen. After we changed the monitor connection to the lower connection on the GPU card, we were presented with a Windows screen. We were automatically logged in as User (default password User), which then brought up the Windows IoT screen.

This screen looked like any other Windows 10 system start menu; it had icons for the Citrix Receiver and VMware Horizon Client, as well as other common Windows tools. Many of the common tools, such as the command prompt, were locked down and could only be run by the Admin account.

The t740 is designed with security in mind. As such, HP Write Manager protects the contents of the flash drive of a thin client, as well as decreases its wear, by redirecting and caching writes to a virtual storage space in RAM. When a system restart occurs, the cache is cleared, and any changes made since the last system startup are lost permanently. This protects the device from malicious code and insecure configurations. As this was a test system, we disabled HP Write Manager during our testing by going into Control Panel and using the HP Write Manager Configuration tool. To be able to make these changes, we had to be logged in under the user Admin (default password Admin). The system also required a reboot after the changes were made.

The configuration of the device was the same as any other Windows 10 system, and only the Admin user is allowed to make permanent changes to the device (e.g., user accounts, networking, application installation, etc.). The User account only has access to a limited number of tools, and common tools such as File Explorer are not available.

The device has two local drives, C and Z. The C drive (which is protected by the HP write filter) is a Flash drive on which the OS and apps are installed. The Z drive is a virtual RAM drive. This drive behaves like a physical drive, but it is created at system startup and destroyed at system shutdown.

We installed Speccy on the device and used it to verify the device’s hardware and the monitor’s 4k resolution.

HP t740 Local Horizon Desktop

To get an idea of how well the t740 would work in real-world scenarios, we used the device with a local Horizon virtual desktop to perform our daily tasks for two weeks.

We tested the device with a local virtual desktop by connecting it to our 1Gb network via a Cat 6 cable through a switch that was connected to either a server or a WAN router. The server was hosting our local VMware Horizon virtual desktop, while the WAN router was used to connect to cloud-based virtual desktops. In order to create a controlled environment, we monitored the network during testing to ensure that no other traffic was present on the network.

The virtual desktop that we used ran Windows 10 (1607), had 2 vCPUs, 8 GB of memory, and 128 GB of NVMe-based storage.

We brought up the Horizon client and configured it to connect to a local Horizon desktop. We were connected to a virtual desktop at the monitor’s native 4K resolution.

The first test we conducted was to use VLC to play a 640×360 30fps video that was stored on the virtual desktop. First, we played the video in its native resolution, and then once again in full screen mode. In both native and full screen modes the video played without any frames dropping. The audio played flawlessly through the device’s built-in speaker when the video was displayed in both quarter-scale and full screen modes. The device’s built-in speaker was loud enough for us to hear it in our testing environment, but in an office environment you would likely want to use a headset or external speakers.

We connected a Jabra Voice 150 headset to a USB connection, which was discovered by the virtual desktop. The headset worked without any issues and sounded good.

We used the client for our daily activities for two weeks without any problems. This included using Microsoft Office applications and Chrome web browser, playing internet-streaming music, etc. During this timeframe, the device performed flawlessly.

Using Multiple Monitors with the HP t740

For multi-monitor usage, we connected the t740 to a Dell 43 Ultra HD 4K multi-client monitor (P4317Q). We like using this monitor as it can simultaneously display content from up to four different inputs in FHD (1920×1080), or from a single input at a resolution of 4K (3840×2160) to do the work of four independent monitors. The monitor has two HDMI/MHL inputs, a Mini DisplayPort input, a full-size DisplayPort input, a VGA input, and a pair of 8-watt speakers.

We used the two ports on the t740 expansion card and the Dell P4317Q monitor’s picture-by-picture (PBP) feature to display 2 x FHD displays on it. We connected to a virtual desktop and displayed it on both monitors. After the device attached to the virtual desktop without any issues, we played a 640×360 30fps video on both of the screens and worked with LibreOffice documents to stress the device. We played the videos in native resolution and in full screen without any jitter, and were able to work with our documents at the same time without any issues. During our testing, Task Manager showed a CPU usage of 5%, a GPU usage of 51%, and a max bandwidth of 4.7Mbps.

We then connected the ports marked DP1 and DP2 on the t740 to the Dell P4317Q monitor, and configured the picture-by-picture (PBP) feature to display 4 x FHD displays on it. We connected to a virtual desktop and displayed it on all four of the monitors without any issues. We then played 640×360 30fps videos on all four screens and worked with LibreOffice documents to stress the device. The videos played in native resolution and in full screen mode without any jitter, and we were able to work with our documents without any issues. Task Manager showed a CPU usage of 11%, a GPU usage of 76%, and a max bandwidth of 11.3Mbps.

HP t740 Thin Client Multi Monitors

Leostream HP RGS Protocol

Leostream offered to set us up with a desktop that we could connect to by using the remote graphics software (RGS). Leostream is interesting because, out of all of the VDI companies, we find them to be the most agnostic with regards to the protocol and source of the virtual desktop. Leostream only provides a Connection Broker and gateway for VDI users and their desktops, and they can provision and broker virtual desktops from VMware vSphere, Amazon Web Services (AWS), Microsoft Azure, and OpenStack; they also recently announced a partnership with Scale Computing. Leostream is equally as agnostic towards the protocol that you use to connect to your desktops, and they support various niche protocols including HP RGS.

HP RGS is a client-server remote desktop protocol developed by HP in 2003 that uses a proprietary algorithm for the compression and transmission of data. RGS supports screen sharing between multiple users, remote USB connectivity, and audio output. RGS is used within graphics-intensive industries such as computer-aided design (CAD), oil and gas exploration, animation, architecture engineering and construction, and product design.

The desktop and connection broker that Leostream provided to us was hosted in an AWS datacenter. To connect to the desktop, we first downloaded and installed the RGS receiver software from HP on the t740. The RGS receiver is free, but downloading it did require us to create an HP account. We then used the web browser on the HP t740 Thin Client to connect to the Leostream Connection Broker.

After entering our user name and password in the Leostream Connection Broker sign-in screen, we were presented with a dialog that allowed us to select which virtual desktop we would like to connect to.

We selected the RGS desktop. This then launched the RGS receiver, which has a very comprehensive settings dialog, but we used its default settings. After accepting these settings, we connected to the cloud-hosted virtual desktop.

From the Leostream virtual desktop, we used Chrome to browse the internet and LibreOffice to edit documents with virtually the same experience as using a local desktop.

We could play a YouTube video in quarter-scale and full screen modes without any video frames dropping, and the picture looked very vibrant. The audio was clear and stable throughout the video playback.

HP t740 Thin Client Video Playback

After disconnecting from the virtual desktop, we pinged the IP address of the connection broker and found that the round-trip time (RTT) was 98ms. Given the fact that the Leostream connection broker and virtual desktop were in an AWS datacenter located on the East Coast, and the client was running in our Pacific Northwest lab, we were pleasantly surprised to find that a virtual desktop with this much latency performed just as well as a virtual desktop that was hosted on-premise.

Conclusion

As mentioned in the introduction, HP claims that, based on having the processor with the highest CPU mark among in-market desktop thin clients as of June 2019, the HP t740 is the world’s most powerful desktop thin client. As this is a basic review, we only tested it with one 4K monitor, then two, and then four FHD monitors. Based on our testing, we can confirm that the t740 handles all of these configurations without any problem; moreover, judging by the GPU and CPU usage we observed, we believe it could also handle a six-monitor configuration.

Obviously, the ability to display content on multiple monitors is the most important feature of the t740, but there were many other things that we liked about the thin client. For one, we appreciated the fact that it comes with a quality keyboard with a built-in card reader, and the case is well made and designed for easy access for maintainability. We also liked its support for enterprise features such as out-of-band management via DASH. When VDI clients first came out with Windows 10 IoT as the operating system we were a little leery; however, after using a few different VDI clients we can see no immediate issues with using it, and it should make administration easier as it uses the same configuration workflows and management tools as a standard Windows 10 system.

The HP t740 Thin Client is a very powerful VDI client that supports all of the major VDI protocols as well as some niche ones. During our heaviest testing, the AMD Ryzen V1756B CPU only hit 11% usage and the AMD Radeon E9173 GPU hit 76%. As more and more power users work remotely, they will need devices that can handle graphics-intensive workloads, and it is good to see that the t740 can handle these requirements.

HP Thin Clients

Discuss on Reddit

Engage with StorageReview

Newsletter | YouTube | Podcast iTunes/Spotify | Instagram | Twitter | Facebook | RSS Feed

The post HP t740 Thin Client Review appeared first on StorageReview.com.

iXsystems FreeNAS 11.3-U2 Comes Out

Back in January, iXsystems had a major release in the form of FreeNAS 11.3. This is not to be confused with last month’s major announcement of FreeNAS and TrueNAS gaining name parity along with other shared features. This week saw the release of iXsystems FreeNAS 11.3-U2, which mainly focuses on bug fixes and other housekeeping issues.

iXsystems FreeNAS 11.3

The 11.3 update laid the earlier groundwork for the parity announced later, including TrueNAS gaining several of the features that were already running in FreeNAS, fully vetted and ready for the enterprise. These features include the modernized web UI as well as the ability to use and manage jails, plugins, and VMs. The new features are available in TrueNAS X-Series and M-Series platforms that scale from 10TB to over 10PB with hybrid or all-flash models. There were also several updates to TrueCommand, the company’s unified management system that monitors and controls TrueNAS and FreeNAS systems from a single pane of glass.

The U2 release contains 150 bug fixes, updates, and improvements. Some highlights of this version include:

  • An update to Samba, version 4.10.13 (NAS-105349)
  • Bug fix when importing a pool (NAS-105297)
  • Fix for a middleware memory leak (NAS-104437)
  • Mitigation for specific LSI 9X00 cards (NAS-105568)

Availability

iXsystems FreeNAS 11.3-U2 is available now.

iXsystems

Discuss on Reddit

Engage with StorageReview

Newsletter | YouTube | Podcast iTunes/Spotify | Instagram | Twitter | Facebook | RSS Feed

The post iXsystems FreeNAS 11.3-U2 Comes Out appeared first on StorageReview.com.
