Article Portfolio

In addition to providing strategic and product/technical marketing consulting services, Margalla Communications analyzes the technologies, products, vendors, applications, and market trends in the cloud and enterprise networking markets through published articles, white papers, and industry reports, as well as seminars and webinars.


The NVMe-oF standard was designed from the outset to support a range of network transports. As a result, enterprise and cloud data center architects face the challenge of choosing which fabric or network to use for NVMe-oF in order to optimize performance, cost, and reliability.

Storage vendors and customers face interesting tradeoffs and options when evaluating how to achieve the highest block storage performance on Ethernet networks, while preserving the major software and hardware investment in iSCSI.

Used primarily for high-performance computing (HPC) for more than a decade, RDMA networking is now on the cusp of becoming a mainstream means of providing a scalable and high-performance cloud infrastructure. A key factor fueling its growing use is that Windows Server 2012 R2 offers several RDMA networking options. This article reviews and compares those options within Windows Server 2012 R2 environments.

This article reviews the operational cost savings of using the Amazon Glacier cloud storage service over traditional on-premises tape backup methods for a typical enterprise data backup and archiving scenario.
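The kind of cost comparison described above can be sketched with simple arithmetic. The model below is illustrative only: the per-GB rate, cartridge capacity, cartridge price, and fixed operational overhead are all assumed placeholder numbers, not actual Amazon Glacier or tape-library pricing.

```python
import math

# Illustrative cloud-vs-tape archive cost model; all rates are
# hypothetical assumptions, not real Amazon Glacier or tape pricing.

def annual_cloud_cost(tb_stored, price_per_gb_month):
    """Annual storage cost for a pay-per-GB cloud archive tier."""
    return tb_stored * 1024 * price_per_gb_month * 12

def annual_tape_cost(tb_stored, tb_per_cartridge, cartridge_price, fixed_opex):
    """Annual tape cost: media for the stored volume plus fixed
    operational overhead (library maintenance, handling, offsite vaulting)."""
    cartridges = math.ceil(tb_stored / tb_per_cartridge)
    return cartridges * cartridge_price + fixed_opex

# 100 TB archived, at an assumed $0.004/GB-month cloud rate.
cloud = annual_cloud_cost(100, price_per_gb_month=0.004)
tape = annual_tape_cost(100, tb_per_cartridge=2.5,
                        cartridge_price=30, fixed_opex=8000)
print(f"cloud: ${cloud:,.0f}/yr  tape: ${tape:,.0f}/yr")
```

Under these assumptions the fixed operational overhead dominates the tape total, which is the effect the article's scenario hinges on; with different volumes or rates the comparison can flip.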

A deep dive into the requirements for aligning storage more closely with virtualization.

10GBASE-T is the standard technology that enables 10 Gigabit Ethernet operations over balanced twisted-pair copper, including Category 6A unshielded and shielded cabling. 10GBASE-T provides great flexibility in network design due to its 100-meter reach capability, and also provides the requisite backward compatibility that allows most end users to transparently upgrade from existing 100/1000-Mbps networks.

Ten Gigabit-per-second Ethernet (10GbE) represents the next level of Ethernet network bandwidth, with networking vendors promoting it as the next great capability. But vendors of network storage infrastructure must strike a balance between constant I/O performance improvement and delivering cost-effective solutions geared for widespread adoption. So where will this technology truly matter for storage environments?

As the component technologies of cluster systems have improved and buyers have become more confident running cluster systems, they have inevitably redirected capital once earmarked for large custom systems to larger cluster systems. These much larger clusters, often with thousands of processors, present opportunities for huge performance gains through improved parallel performance, resulting in an order-of-magnitude improvement in overall return on investment (ROI). While algorithm and application tuning is often required to obtain these benefits, the cost, bandwidth, message rate, and latency of the cluster interconnect are often just as critical.
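Why interconnect performance matters at thousand-processor scale can be illustrated with Amdahl's law. In the sketch below, the serial fraction stands in for the share of runtime lost to non-parallelizable work plus communication overhead (latency, limited bandwidth, and message rate); the fractions chosen are illustrative assumptions, not measurements.

```python
# Amdahl's-law sketch: even a small serial/communication fraction
# caps the speedup achievable on a large cluster.

def amdahl_speedup(n_procs, serial_fraction):
    """Ideal speedup on n_procs processors when serial_fraction of the
    work cannot be parallelized (or is spent on communication)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

for f in (0.01, 0.05, 0.10):   # assumed serial/communication fractions
    s = amdahl_speedup(1024, f)
    print(f"serial fraction {f:.0%}: speedup on 1024 procs = {s:.1f}x")
```

The point of the sketch: shaving the communication fraction from 10% to 1% raises the attainable speedup on 1,024 processors by roughly an order of magnitude, which is why interconnect latency and message rate can matter as much as application tuning.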

Technological maturity is making workable cloud solutions both possible and affordable. Most large companies are already exploring ways to make corporate data centers more “cloud-like” to boost efficiency, cut capital costs, and provide the elastic scaling needed to adapt to rapidly changing business requirements. However, the best ways to accomplish this, especially where storage is concerned, may still be unclear.

Great strides have been made over the past few years in bringing 10 Gigabit Ethernet (10GbE) products to market, and the technology has progressed from lab demonstrations to commercial deployment. To date, however, the 10GbE market ramp has been slower than forecast, with cost cited as a major reason for the delay. Within this context, a new optical transceiver form factor has emerged as the catalyst to spark widespread 10GbE adoption. The original 10G optical module form factors of XENPAK, X2, and XFP are now being replaced by the SFP+ module, which is enabling the 10GbE transition in enterprise networks by meeting a variety of customer needs better than previous modules.

The latest CPUs from AMD and Intel are more than up to the task of running 10 to 20 or more virtualized applications at a time. However, most servers run out of I/O bandwidth well before processing power, since only so many Ethernet NICs and Fibre Channel HBAs can be added to a physical server.
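The I/O-exhaustion argument above reduces to simple capacity arithmetic. In the sketch below, the per-VM bandwidth demand and the 80% usable-capacity headroom are illustrative assumptions, not measured workload figures.

```python
import math

# Back-of-the-envelope NIC sizing for a virtualized server; the
# per-VM traffic and headroom factor are assumed, illustrative values.

def nics_required(vm_count, mbps_per_vm, nic_gbps, headroom=0.8):
    """NIC ports needed so aggregate VM traffic fits within the
    usable (headroom-derated) capacity of each port."""
    demand_gbps = vm_count * mbps_per_vm / 1000.0
    usable_per_nic = nic_gbps * headroom
    return math.ceil(demand_gbps / usable_per_nic)

# 20 VMs at an assumed 500 Mbps each: 10 Gbps of aggregate demand.
print(nics_required(20, 500, nic_gbps=1))    # GbE ports needed
print(nics_required(20, 500, nic_gbps=10))   # 10GbE ports needed
```

Under these assumptions the same workload needs more GbE ports than a typical server has PCI slots, but only a couple of 10GbE ports, which is the consolidation case the article makes.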

This article examines the impact of 10GbE on HPC infrastructure and provides guidance for the most effective transformation of your network. The initial focus is on the top adoption drivers, followed by a review of the leading technology trends impacting 10GbE NIC designs.

The continued evolution toward 10 Gigabit Ethernet (10GbE) server networking, with the promise of rapidly declining prices and a stream of innovations such as power- and space-efficient network interface controllers (NICs) for dense server form factors and support for RDMA, TCP offload engine (TOE), iSCSI, and I/O virtualization capabilities, makes it the obvious upgrade path for high-performance data centers.

This article focuses on a strategic approach to data-center server networking based on 10 Gigabit Ethernet (10GbE) that promises investment protection, increased efficiencies, and enhanced business agility. Enterprises that invest in such a networking foundation today will be prepared to meet future customer needs while taking advantage of ongoing industry breakthroughs.

Shared file systems also provide a single, centralized point of control for managing DI files and databases, which can help lower total costs by simplifying administration. Shared file systems typically allow administrators to manage volumes, content replication, and point-in-time copies from the network. This capability provides a single point of control and management across multiple storage subsystems.

This article covers a number of approaches to Exchange storage consolidation, with a focus on solutions that are feasible for SMBs.

This article covers iSCSI targets, availability, and security requirements. A previous article covered initiator, application, and performance requirements (see “iSCSI planning: Apps, performance, and initiators,” InfoStor, October 2003, p. 51).

This article covers applications, initiators, and performance requirements. The next article in the series will review iSCSI security, availability, network infrastructure, and storage system component issues.

TCP offload engines (TOEs) can speed up applications such as client/server backup and file serving.

For some applications, software initiators will suffice, but more-demanding applications will require iSCSI and TCP/IP hardware accelerators on host bus adapters.

Margalla Communications has a growing list of clients in the networking domain. Let us help you achieve market success!

Set up a free 30-minute consultancy call today.