Name: X722DA2 10 Gigabit
The Intel® Ethernet Network Adapter X722 features iWARP RDMA for
high-throughput, low-latency workloads with low CPU utilization. The
X722 is ideal for Software Defined Storage solutions, NVMe over Fabrics
solutions, and accelerating Virtual Machine migrations. RDMA is a
host-offload, host-bypass technology that enables low-latency,
high-throughput, direct memory-to-memory data communication between
applications over a network. iWARP extensions to TCP/IP, standardized by
the Internet Engineering Task Force (IETF), eliminate three major
sources of networking overhead: TCP/IP stack processing, memory copies,
and application context switches. Because it is based on TCP/IP, iWARP
is highly scalable and well suited to hyper-converged storage solutions.
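As a rough illustration of the verbs programming model that iWARP
exposes to applications, the sketch below opens the first available RDMA
device and registers a memory region using the standard libibverbs API
(rdma-core). The device index, buffer size, and access flags are
arbitrary choices for illustration; a complete application would also
create queue pairs and, for iWARP, typically manage connections through
librdmacm.

    /*
     * Minimal libibverbs sketch: open an RDMA device and register a
     * memory region the NIC can access directly, bypassing the host
     * CPU on the data path. Error handling is abbreviated.
     * Build with: gcc rdma_sketch.c -libverbs
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num = 0;
        struct ibv_device **devs = ibv_get_device_list(&num);
        if (!devs || num == 0) {
            fprintf(stderr, "no RDMA devices found\n");
            return 1;
        }

        /* An iWARP adapter such as the X722 shows up here like any
         * other verbs device. */
        struct ibv_context *ctx = ibv_open_device(devs[0]);
        struct ibv_pd *pd = ibv_alloc_pd(ctx);

        /* Register a buffer for zero-copy local and remote access. */
        char *buf = calloc(1, 4096);
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, 4096,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_WRITE);
        printf("registered MR: lkey=0x%x rkey=0x%x\n",
               mr->lkey, mr->rkey);

        ibv_dereg_mr(mr);
        free(buf);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        return 0;
    }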
The X722 is one of the Intel® Ethernet 700 Series Network Adapters.
These adapters are the foundation for server connectivity, providing
broad interoperability, critical performance optimizations, and
increased agility for Telecommunications, Cloud, and Enterprise IT
network solutions.
Interoperability - Multiple media types for broad compatibility, backed
by extensive testing and validation.
Optimization - Intelligent offloads and accelerators to unlock network
performance in servers with Intel® Xeon® processors.
Agility - Both kernel and Data Plane Development Kit (DPDK) drivers for
scalable packet processing (see the sketch following this list).
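As a minimal sketch of what a DPDK application looks like, the following
C program initializes the Environment Abstraction Layer (EAL) and counts
the Ethernet ports bound to DPDK-capable drivers. It assumes a DPDK
development environment (libdpdk discovered via pkg-config) and stops
short of an actual packet-processing pipeline.

    /*
     * Minimal DPDK sketch: initialize the EAL and report how many
     * ports are available to DPDK. EAL consumes its own command-line
     * options (core list, memory channels, PCI allow-list, ...).
     */
    #include <stdio.h>
    #include <stdint.h>
    #include <rte_eal.h>
    #include <rte_ethdev.h>

    int main(int argc, char **argv)
    {
        if (rte_eal_init(argc, argv) < 0) {
            fprintf(stderr, "EAL init failed\n");
            return 1;
        }

        uint16_t ports = rte_eth_dev_count_avail();
        printf("%u DPDK-capable port(s) available\n", ports);

        rte_eal_cleanup();
        return 0;
    }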
Built on more than 35 years of continuous Ethernet innovation, the
Intel® Ethernet 700 Series delivers networking performance across a wide
range of network port speeds through intelligent offloads, sophisticated
packet processing, and quality open source drivers. All Intel® Ethernet
700 Series Network Adapters include these feature-rich technologies:
Flexible and Scalable I/O for Virtualized Infrastructures:
Intel® Virtualization Technology (Intel® VT) delivers outstanding I/O
performance in virtualized server environments. I/O bottlenecks are
reduced through intelligent offloads such as Virtual Machine Device
Queues (VMDq) and Flexible Port Partitioning, which uses SR-IOV with a
common Virtual Function driver to partition networking traffic per
Virtual Machine (VM), enabling near-native performance and VM
scalability.
Host-based features supported include:
VMDq for Emulated Path:
VMDq enables a hypervisor to represent a single network port as
multiple network ports that can be assigned to individual VMs.
Traffic handling is offloaded to the network controller, delivering the
benefits of port partitioning with little to no administrative overhead
for the IT staff.
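VMDq itself is configured by the hypervisor and driver rather than by
application code, but the hardware queues it draws on are visible from
Linux. As a loose illustration, the following sketch reads a port's
queue (channel) counts through the standard ethtool ioctl; the interface
name eth0 is a placeholder.

    /*
     * Sketch: query the hardware queue (channel) counts behind a
     * single network port via the standard ethtool ioctl. These are
     * the per-port queue resources that features such as VMDq
     * partition among VMs.
     */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <linux/ethtool.h>
    #include <linux/sockios.h>

    int main(void)
    {
        struct ethtool_channels ch = { .cmd = ETHTOOL_GCHANNELS };
        struct ifreq ifr = { 0 };

        strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1); /* placeholder */
        ifr.ifr_data = (void *)&ch;

        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0 || ioctl(fd, SIOCETHTOOL, &ifr) < 0) {
            perror("ETHTOOL_GCHANNELS");
            return 1;
        }
        printf("combined queues: %u in use, %u max\n",
               ch.combined_count, ch.max_combined);
        close(fd);
        return 0;
    }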
SR-IOV for Direct Assignment:
Adapter-based isolation and switching for various virtual station
instances enable optimal CPU usage in virtualized environments. With up
to 128 VFs, each VF can support a unique and separate data path for
I/O-related functions within the PCI Express hierarchy. Use of SR-IOV
with a networking device, for example, allows the bandwidth of a single
port (function) to be partitioned into smaller slices that can be
allocated to specific VMs or guests via a standard interface.
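On Linux, VFs are typically brought up through the standard sysfs
interface, as in the minimal sketch below. The interface name and VF
count are placeholders, and root privileges plus an SR-IOV-capable
driver are assumed; each VF then appears as its own PCIe function that
can be passed through to a VM.

    /*
     * Sketch: enable SR-IOV Virtual Functions via the standard Linux
     * sysfs attribute. Path and VF count are placeholders; requires
     * root and a driver with SR-IOV support.
     */
    #include <stdio.h>

    int main(void)
    {
        const char *path = "/sys/class/net/eth0/device/sriov_numvfs";
        FILE *f = fopen(path, "w");
        if (!f) {
            perror(path);
            return 1;
        }
        /* Ask the driver to spawn 4 VFs. */
        fprintf(f, "%d\n", 4);
        fclose(f);
        return 0;
    }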