11 February 2026
ADLINK has expanded its edge computing line-up with a new server board and three Intel Xeon 6-based GPU servers, as demand grows for more compute at industrial and on-premises sites.
The launch includes the ISB-W890 server board and the AXE-7440GW, AXE-7420GWA and AXE-7220GW systems. ADLINK is targeting organisations running advanced vision processing, analytics and generative AI inference outside central data centres. Edge deployments are moving from pilots to production, particularly in environments where connectivity limits, response-time needs and local data processing requirements shape system design. That shift is increasing demand for server-class systems suited to factory racks, labs and remote sites.
ADLINK positions the new platforms for high-end, real-time vision processing and on-site inference for large models. It also points to mixed workloads that combine visual, language and action-oriented tasks, including vision-language-action pipelines. Target sectors include medical imaging, robotics and automated optical inspection in smart factories, where designers balance throughput against space, power limits and serviceability. Buyers in these environments also want room to expand GPUs, storage and I/O.
Eric Kao, General Manager of ADLINK's Net&Com and Automotive Business Unit, said the expanded line-up addresses growing complexity at the edge. "As AI workloads at the edge grow in complexity, from high-end, real-time vision processing to large-scale generative and multimodal AI, this expanded line-up empowers customers to precisely match compute power and form factor to their specific operational needs, ensuring seamless scalability at every milestone of their AI evolution," said Kao.
The ISB-W890 is a CEB form factor server board aimed at system integrators building workstation and server configurations for industrial and embedded deployments. ADLINK describes it as a foundation for designs that need broad I/O options and long-term platform support. The board includes seven PCIe slots, three of them PCIe Gen5 x16. That configuration supports multi-GPU systems for inference and vision workloads, where model size and frame-rate requirements can increase over time. Storage options include multiple M.2 NVMe and SATA III connections. The design targets systems that ingest and process data locally, such as high-resolution image streams or sensor logs, where sustained read/write performance can be a bottleneck.
Alongside the board, ADLINK introduced three systems in its AXE GPU server range. Each is designed for edge installations, with variants that prioritise GPU density, rack efficiency or a compact footprint. The AXE-7440GW is a 4U GPU server supporting up to four full-height, full-length, double-width PCIe Gen5 x16 GPU cards. ADLINK markets it for on-site large language model deployments and multimodal AI inference.
The AXE-7420GWA uses a 550mm short-depth 4U chassis and supports industrial ATX power. The form factor suits factory racks and industrial backends where standard server depth and power delivery can be constraints. ADLINK positions it for consolidated automation workloads and AI integration within existing industrial infrastructure. The AXE-7220GW targets locations with limited rack depth and floor space. It uses a 450mm short-depth 2U chassis and supports two full-height, full-length, double-width PCIe Gen5 x16 GPU cards, making it a candidate for image-heavy inspection tasks or inference nodes in space-constrained cabinets.
Teams building edge AI systems face different trade-offs from data centre operators. Edge sites may require short-depth chassis, varied power standards and limited servicing windows. At the same time, vision and generative AI workloads can demand high GPU bandwidth and fast local storage, pushing designs toward server-class configurations. The announcement also reflects a broader edge-market shift. Vendors increasingly present edge AI as a continuum, from compact inference boxes to GPU servers that resemble small data centre nodes but run closer to equipment, cameras and sensors.
Configuration details and availability will vary by system and deployment needs. ADLINK expects the new board and servers to be adopted in projects that require more compute headroom for real-time visual intelligence and on-site generative AI inference.