High-performance, low-latency, energy-efficient Ethernet adapters

ATTO Technology has released the ATTO FastFrame™ 3 25/40/50/100GbE network interface controllers.

ATTO FastFrame 3 NICs provide unmatched performance, the industry’s lowest latency, and the versatility needed to support the most demanding and complex ecosystems.
With speeds up to 100GbE and latency as low as 1µs, FastFrame 3 NICs are ideal for IT applications such as data analytics, high-performance computing (HPC) clusters, hyperconverged servers and large-scale database analysis.
FastFrame 3 NICs have built-in hardware offload engines, including CPU transport-layer offloading and NVMe over Fabrics target offloading, to accelerate data movement and reduce server overhead. Installations relying on SSDs will see improved storage performance thanks to native NVMe support.
“IT professionals are looking for higher bandwidth to drive data center aggregation level traffic,” said Tim Klein, CEO of ATTO Technology. “With the industry’s lowest latency, ATTO FastFrame 3 25/40/50/100GbE NICs enable higher performance and permit significantly faster transport of large amounts of data than our competitors. Enhancements including RoCE support and Energy Efficient Ethernet also allow FastFrame NICs to offer higher ROI than the competition by maximizing resources and minimizing OPEX.”