Overview
AI Inference Server standardizes AI model execution on Siemens Industrial Edge. It facilitates data collection and acquisition, orchestrates data traffic, and is compatible with all major AI frameworks.
More information is available at this link.
Ordering option
The app can be ordered from the Industrial Edge Marketplace at this link.
Application
AI Inference Server is a Siemens Industrial Edge application that can run on Siemens Industrial Edge devices.
AI Inference Server enables AI models to be executed for inference purposes using the built-in Python interpreter.
The application guides the user through setting up execution of the AI model on the Siemens Industrial Edge platform using the ready-to-use data connectors.
AI Inference Server standardizes logging, monitoring, and debugging of AI models.
AI Inference Server is designed to integrate MLOps with the AI Model Monitor.
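As an illustration of the built-in Python execution described above, a pipeline step typically exposes a Python function that receives connector data and returns the inference result. The sketch below is a minimal assumption-based illustration; the names `load_model` and `process_input` and the payload shape are hypothetical, not the AI Inference Server API.

```python
# Hypothetical sketch of a Python inference entrypoint of the kind a
# pipeline step might expose; names, signatures, and payload layout are
# illustrative assumptions, not the actual AI Inference Server API.
from statistics import mean

def load_model():
    """Stand-in for loading a trained model (in practice e.g. a pickled
    or ONNX model shipped with the pipeline)."""
    # A trivial "model": predicts the mean of the input features.
    return lambda features: mean(features)

MODEL = load_model()

def process_input(payload: dict) -> dict:
    """Map connector input to the model and return the inference result."""
    features = payload["features"]          # values delivered by a data connector
    return {"prediction": MODEL(features)}  # output routed back to the platform

result = process_input({"features": [1.0, 2.0, 3.0]})
print(result)  # {'prediction': 2.0}
```

In this sketch, the platform would call `process_input` for each incoming data sample and forward the returned dictionary to the configured output connector.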
AI Inference Server with GPU acceleration:
The variant of AI Inference Server with GPU acceleration standardizes the execution of AI models on GPU-accelerated hardware, enabling AI inference acceleration in the Edge ecosystem.
Functions
AI Inference Server
Supports the most popular AI frameworks that are compatible with Python
Orchestrates and controls AI model execution
Can run AI pipelines with both an older and a newer version of Python
Enables horizontal scaling of the AI pipelines for optimum performance
Simplifies tasks such as input mapping (thanks to integration with Databus and other Siemens Industrial Edge connectors), data collection/acquisition, and pipeline visualization
Permits monitoring and debugging of AI models based on inference statistics
Features logging and image visualization
Includes pipeline version management
Permits the import of models via the user interface or via a remote connection
Supports persistent data storage on the local device for each pipeline
AI Inference Server variant for 3 pipelines
Supports the simultaneous execution of up to 3 AI pipelines