NATvision
Accelerated Vision

Accelerated Vision is our synonym for high-performance machine vision and deep learning solutions based on optimized FPGA algorithms for embedded applications. Our solutions are built on a vendor-independent modular open system architecture (MOSA) called MicroTCA. The 19-inch rack-mount systems provide infrastructure for clocking, triggering and synchronization of cameras without the need for external circuits. The redundant design of the hot-swap-capable infrastructure (switching, power supply and cooling) guarantees high availability and maintainability. Compared to traditional PC/server-based computer vision systems, our platform offers more flexibility, lower development costs and scalability to any number of cameras.
Depending on the end application, we solve image processing tasks with Artificial Intelligence (AI), OpenCV, Visual Applets® or combinations thereof. The algorithms are hardware-accelerated on FPGAs, which enables time-critical execution on the edge device. The resulting latency advantage makes the platform particularly well suited for real-time applications.
NATvision is a complete environment for the development and deployment of sophisticated image and video processing applications. The environment consists of a broad range of hardware products combined with selected software solutions and engineering support from N.A.T.
Designed as a machine vision processing platform, NATvision offers higher performance and lower development costs than comparable solutions, and scales to any number of cameras. The NATvision technology uses today’s state-of-the-art FPGA resource boards, combining the performance and real-time advantages of FPGA-based algorithms with software-based image processing.
NATvision systems are based on the flexible and scalable 19” industry standard MicroTCA (Micro Telecommunication Computing Architecture). This modular open system architecture (MOSA) is a perfect fit for all kinds of vision applications thanks to its high-performance board-to-board communication capabilities. It also includes circuits for synchronizing and triggering external cameras. As a modular system architecture, MicroTCA can easily be adapted to your specific application needs.
An integral component of any vision application is a powerful image processing board (frame grabber). Within NATvision, this functionality is delivered by advanced FPGA devices from Xilinx or Intel. Depending on the complexity of your application, you can choose from a range of frame grabber boards of varying size and capability.
The NATvision software architecture provides a universal development platform for your solution. Depending on your development preference, algorithms can be built entirely graphically with Visual Applets or programmed on the FPGA at RTL level. The hybrid structure of processor and FPGA allows an efficient combination of C++ code and hardware acceleration; for example, the HLS toolbox can be used to accelerate OpenCV algorithms on the FPGA, as sketched below. For the implementation of deep learning convolutional neural networks (CNNs, DNNs) for object detection and classification, we support the conversion of TensorFlow- and Caffe-based networks to our platform.
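To illustrate the hybrid processor/FPGA approach, the following minimal sketch shows a streaming binary-threshold kernel written in HLS C++. It assumes the Xilinx Vivado/Vitis HLS headers (hls_stream.h, ap_int.h); the function and port names are purely illustrative and are not part of the NATvision toolchain, and the equivalent processor-side step would simply be an OpenCV call such as cv::threshold.

// Minimal HLS C++ sketch of a streaming binary threshold.
// Assumes the Xilinx Vivado/Vitis HLS headers; names are illustrative only.
#include <hls_stream.h>
#include <ap_int.h>

typedef ap_uint<8> pixel_t;   // 8-bit grayscale pixel

// Reads one pixel per clock cycle from the input stream and writes the
// thresholded result to the output stream (the hardware counterpart of a
// simple cv::threshold running on the processor).
void threshold_stream(hls::stream<pixel_t> &in,
                      hls::stream<pixel_t> &out,
                      int width, int height, pixel_t thresh) {
#pragma HLS INTERFACE axis port=in
#pragma HLS INTERFACE axis port=out
    for (int i = 0; i < width * height; ++i) {
#pragma HLS PIPELINE II=1
        pixel_t p = in.read();
        out.write(p > thresh ? pixel_t(255) : pixel_t(0));
    }
}

In a hybrid design of this kind, a stage like the one above can first be prototyped in plain OpenCV C++ on the processor and moved into the FPGA only where the latency budget requires it.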
With NATvision hardware, machine learning applications can be implemented accurately and cost-efficiently. For the implementation, we rely on hybrid CPU/FPGA technology, which offers significant advantages over conventional systems. Our systems calculate results in real time on the latest FPGA technology and guarantee a high level of accuracy and availability.
The services of N.A.T. range from providing bare vision hardware components to delivering complete, individually tailored turnkey solutions. If you have a machine vision application in mind, talk to us.
N.A.T. Networking, Automation and Technology
The N.A.T. headquarters is located in the city of Bonn, Germany, right on the banks of the river Rhine. N.A.T. is represented by certified distributors and sales agents in many countries.