March 9, 2026

MST Semiconductor · Technical Whitepaper

NeuroBox D: AI-Powered Mechanical Design Platform for Semiconductor Equipment

Fully automated generation from P&ID drawings to 3D SolidWorks assemblies

Version 1.6 · March 2026 · MST Semiconductor (Shanghai) Co., Ltd.

Abstract

Mechanical design of semiconductor equipment has long relied on engineers manually converting P&ID drawings into 3D assemblies — a process that typically takes weeks per tool, from concept to completed assembly model. NeuroBox D employs a four-engine architecture (Detection Engine + BOM Inference Engine + Multi-View Projection Engine + Assembly Engine) that combines Vision-Language Models (VLMs), constraint solvers, and SolidWorks API integration to automate the generation of native 3D assembly files from 2D engineering drawings. Field data demonstrates 60% engineering-hour savings for a 10-person design team, with assembly constraint inference accuracy reaching 93%. This whitepaper provides a detailed overview of NeuroBox D’s system architecture, core algorithms, and engineering implementation.

1. Industry Pain Points: Why Mechanical Design Needs AI

Mechanical design of semiconductor equipment is a highly experience-dependent engineering process. A typical CVD or PVD tool comprises hundreds of components — valves, fittings, gas panels, sensors, seals, fasteners, and more — whose assembly relationships are governed by multiple layers of constraints: physical constraints (interface dimensional compatibility), process constraints (gas compatibility), and corporate standards (preferred part selection).

The core bottlenecks in the current design workflow include:

  • P&ID-to-BOM conversion is manual: Engineers must individually identify equipment symbols on the P&ID, look up corresponding standard parts, and determine auxiliary components such as seals and fasteners — a repetitive and error-prone process
  • 3D assembly modeling is time-intensive: After finalizing the BOM, engineers must insert parts one by one in SolidWorks and define assembly constraints (mates). A single gas panel 3D model can take 3 to 5 days
  • Institutional knowledge is difficult to capture: Senior engineers know whether “an IGS Block or a VCR Block should follow the MFC,” but this knowledge resides in individuals’ heads. New hires need months to accumulate comparable expertise
  • Design changes are costly: When a P&ID is revised, the entire BOM and 3D model must be manually re-verified, making change management highly inefficient

NeuroBox D was built with a clear objective: let AI handle the entire workflow from P&ID to 3D assembly, so that engineers only need to review and fine-tune.

2. System Architecture: Four-Engine Pipeline

NeuroBox D is built on a microservices architecture. The core pipeline consists of four independent engines, each responsible for a distinct stage of the workflow:

P&ID Drawing Upload
    ↓
① Detection Engine
    Object detection + Vision-Language Model (VLM) + Open-vocabulary detection → Identifies equipment symbols, piping connections, BOM tables
    ↓
② BOM Inference Engine
    L1 hard rules + L2 physical constraints + L3 learned constraints → Infers complete BOM (equipment + auxiliary parts + fasteners)
    ↓
③ Multi-View Projection Engine
    View detection + Cross-view feature matching + Coordinate alignment → Maps 2D coordinates to 3D space
    ↓
④ Assembly Engine
    Constrained layout optimization + Intelligent pipe routing + SolidWorks API → Outputs native .sldasm files

The four engines operate as independent microservices communicating via REST APIs, allowing them to be scaled and updated independently. The entire system is containerized for one-click deployment.
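As a rough sketch of this orchestration, the four engines can be modeled as a sequential pipeline in which each stage consumes the previous stage's output. The `Pipeline` class and all payload fields below are illustrative stand-ins for the actual REST calls between microservices:

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class Pipeline:
    # Ordered list of (stage name, stage function) pairs; each function
    # stands in for a REST call to one engine's microservice.
    stages: List[Tuple[str, Callable]] = field(default_factory=list)

    def add_stage(self, name: str, fn: Callable) -> "Pipeline":
        self.stages.append((name, fn))
        return self

    def run(self, payload: dict) -> dict:
        # Each engine consumes the previous engine's output, mirroring the
        # Detection -> BOM -> Projection -> Assembly chain in the diagram.
        for name, fn in self.stages:
            payload = fn(payload)
            payload["last_stage"] = name
        return payload

pipeline = (
    Pipeline()
    .add_stage("detection", lambda p: {**p, "symbols": ["MFC", "valve"]})
    .add_stage("bom", lambda p: {**p, "bom": p["symbols"] + ["gasket"]})
    .add_stage("projection", lambda p: {**p, "coords": {s: (0, 0, 0) for s in p["bom"]}})
    .add_stage("assembly", lambda p: {**p, "output": "panel.sldasm"})
)
result = pipeline.run({"drawing": "pid_001.pdf"})
```

Because each stage is an independent callable behind a uniform interface, any single engine can be swapped or scaled without touching the others, which is the point of the microservice split.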

Supporting Services

  • Relational database: Stores parts library metadata, assembly rules, and historical learning data
  • High-speed cache: Caches part geometry information and intermediate inference results
  • Object storage: Stores uploaded drawings, generated 3D files, and parts libraries
  • Web frontend: Visual operation interface with integrated 3D preview

3. Detection Engine: Multi-Model Fusion for Drawing Comprehension

The Detection Engine’s task is to “understand” P&ID drawings — identifying every equipment symbol, piping connection, and annotation on the drawing.

3.1 Multi-Model Fusion Strategy

No single detection model can cover the full diversity of P&ID drawings (different customers, standards, and drafting styles), so we employ a three-model fusion approach:

  • Specialized object detection model (rapid localization): A supervised deep learning detection model fine-tuned on P&ID symbol datasets for fast localization of equipment regions on drawings. Supports 30+ equipment symbol classes including MFC, filter, regulator, valve, and more
  • Vision-Language Model (semantic understanding): The VLM interprets semantic information within detected regions — equipment model numbers, port specifications, and annotation text. Its strength lies in handling non-standard annotations and handwritten modifications
  • Open-vocabulary detection model (open-set detection): Text-driven object detection for identifying new equipment types not covered in the training set. Engineers simply describe the equipment characteristics (e.g., “a pressure-reducing valve with a triangular symbol”) to enable detection
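One common way to combine overlapping candidates from several detectors is greedy non-maximum suppression over the pooled boxes. The sketch below illustrates that idea; NeuroBox D's actual fusion logic may differ, and all boxes and confidence scores here are made up:

```python
def iou(a, b):
    # Boxes as (x1, y1, x2, y2) in drawing pixel coordinates.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def fuse_detections(detections, iou_thr=0.5):
    # Pool boxes from all models, then keep only the highest-confidence
    # box in each overlapping cluster (greedy non-maximum suppression).
    kept = []
    for det in sorted(detections, key=lambda d: -d["conf"]):
        if all(iou(det["box"], k["box"]) < iou_thr for k in kept):
            kept.append(det)
    return kept

candidates = [
    {"model": "detector", "label": "MFC", "box": (0, 0, 10, 10), "conf": 0.85},
    {"model": "vlm", "label": "MFC", "box": (1, 1, 11, 11), "conf": 0.90},
    {"model": "open_vocab", "label": "valve", "box": (40, 40, 50, 50), "conf": 0.70},
]
fused = fuse_detections(candidates)
```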

3.2 One-Shot Learning

When customers use non-standard P&ID symbols, traditional approaches require re-annotating data and retraining the model. NeuroBox D supports one-shot learning: the customer provides a single symbol sample, and the system uses feature embedding and similarity matching to recognize that symbol in subsequent drawings.
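A minimal sketch of this mechanism, assuming some feature extractor that maps symbol crops to fixed-length embeddings. The `OneShotMatcher` class, the threshold value, and the toy vectors are all illustrative:

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

class OneShotMatcher:
    """One sample per symbol class: store its embedding, match by similarity."""

    def __init__(self, threshold=0.9):
        self.templates = {}  # label -> reference embedding
        self.threshold = threshold

    def register(self, label, embedding):
        # One-shot registration: a single customer-provided sample suffices.
        self.templates[label] = embedding

    def match(self, embedding):
        # Return the most similar registered symbol, or None below threshold.
        if not self.templates:
            return None
        label, ref = max(self.templates.items(),
                         key=lambda kv: cosine(embedding, kv[1]))
        return label if cosine(embedding, ref) >= self.threshold else None

matcher = OneShotMatcher(threshold=0.9)
matcher.register("custom_regulator", [1.0, 0.0, 0.0])
```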

3.3 Auxiliary Detection Modules

  • Pipe Tracer: Image-processing-based pipeline tracing that identifies connection relationships between equipment
  • Table Detector: Automatically extracts BOM table information embedded in P&ID drawings
  • Intelligent Segmentation: Segments multi-view drawings into individual views (front view / top view / side view boundary recognition)

4. BOM Inference Engine: Three-Layer Constraint Reasoning

Once the Detection Engine identifies the equipment, the BOM Inference Engine generates a complete bill of materials — including not only the equipment explicitly annotated on the drawing, but also seals, fasteners, tube fittings, and other auxiliary parts. These auxiliaries are typically not shown on the P&ID but are indispensable in actual assembly.

4.1 Three-Layer Constraint Architecture

L1 — Hard Constraints

Deterministic rules based on industry standards and product specifications that must not be violated:

  • An MFC must be paired with a gas block matching its interface type (IGS or VCR, depending on port configuration)
  • Filters require seals of specific materials determined by the process medium (PTFE for corrosive gases, Viton for inert gases)
  • Interface dimensions must match: a 1/4″ line cannot directly connect to a 1/2″ device without a reducer

L2 — Soft Constraints

Constraints based on physical principles and engineering best practices, prioritized but adjustable under special circumstances:

  • Port count validation: the number of ports on a block must be greater than or equal to the number of connected devices
  • Spatial layout constraints: clearance between adjacent equipment must satisfy maintenance access requirements
  • Gravity constraints: liquid lines must account for slope and drain points

L3 — Learned Constraints

Patterns automatically learned from customers’ historical design data:

  • Customer A prefers Brand X valves over Brand Y in similar configurations
  • A specific process segment (e.g., gas panel inlet section) typically uses welded connections rather than VCR
  • Certain equipment combinations consistently appear together in past projects

L3 constraints are the core implementation of the “data flywheel” — the more a customer uses the system, the richer the L3 constraints become, and the more accurate BOM inference gets.
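The layering can be sketched as follows. The rule contents are taken from the examples above, but the data shapes and rule predicates are assumptions, not the production rule base:

```python
# L1: deterministic hard rules; a violation blocks the BOM.
HARD_RULES = [
    ("mfc_needs_block",
     lambda bom: "MFC" not in bom["items"]
     or any(i.endswith("Block") for i in bom["items"])),
]

# L2: soft rules; a violation only raises a warning for review.
SOFT_RULES = [
    ("port_count",
     lambda bom: bom.get("block_ports", 0) >= bom.get("connected_devices", 0)),
]

def check_bom(bom, learned_prefs=None):
    errors = [name for name, rule in HARD_RULES if not rule(bom)]
    warnings = [name for name, rule in SOFT_RULES if not rule(bom)]
    # L3: learned substitutions mined from this customer's history,
    # e.g. {"IGS Block": "VCR Block"}; surfaced as suggestions, not errors.
    suggestions = {old: new for old, new in (learned_prefs or {}).items()
                   if old in bom["items"]}
    return {"errors": errors, "warnings": warnings, "suggestions": suggestions}

report = check_bom({"items": ["MFC", "Filter"],
                    "block_ports": 2, "connected_devices": 3})
report2 = check_bom({"items": ["MFC", "IGS Block"],
                     "block_ports": 4, "connected_devices": 2},
                    learned_prefs={"IGS Block": "VCR Block"})
```

Note how the three layers degrade gracefully: L1 failures block, L2 failures warn, and L3 only suggests, so learned preferences can never override a physical constraint.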

4.2 Two-Pass Inference Process

Pass 1: Based on detection results and L1/L2 constraints, generates an initial BOM (primary equipment + direct auxiliaries)

Pass 2: Based on the Pass 1 BOM and L3 learned constraints, supplements indirect auxiliaries (fastener quantities, seal specifications, fitting types) and performs a completeness check
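A toy version of the two-pass process, with illustrative lookup tables standing in for the L1/L2 rules (direct auxiliaries) and the L3 co-occurrence patterns (indirect auxiliaries):

```python
# Stand-in rule tables; the real engine derives these from L1/L2/L3.
DIRECT_AUX = {"MFC": ["IGS Block"], "Filter": ["Gasket"]}         # Pass 1
LEARNED_AUX = {("MFC", "IGS Block"): ["M4 screw x4", "C-seal"]}   # Pass 2

def infer_bom(detected):
    # Pass 1: primary equipment plus direct auxiliaries.
    bom = list(detected)
    for item in detected:
        bom += DIRECT_AUX.get(item, [])
    # Pass 2: indirect auxiliaries from learned co-occurrence patterns,
    # applied against the Pass 1 result rather than the raw detections.
    for pair, extras in LEARNED_AUX.items():
        if all(p in bom for p in pair):
            bom += extras
    return bom

bom = infer_bom(["MFC", "Filter"])
```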

5. Multi-View Projection Engine: 2D-to-3D Coordinate Mapping

When the input is an assembly drawing (rather than a P&ID), it typically contains multiple views (front, top, and side). The Multi-View Projection Engine fuses coordinate information from these 2D views to reconstruct equipment positions in 3D space.

Core Algorithms

  • View detection: Automatically identifies view types and boundaries on the drawing, supporting standard orthographic views and isometric views
  • Feature matching: Optimization-based cross-view feature association — given equipment A identified in the front view, which contour in the top view corresponds to it?
  • Coordinate alignment: Establishes mapping relationships between each view’s coordinate system and the 3D world coordinate system, projecting 2D positions into 3D space
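In standard orthographic projection the front view supplies (x, z) and the top view supplies (x, y), so a part's 3D position can be recovered by associating contours across views and merging coordinates. A simplified sketch, in which the shared x coordinate drives the association and the tolerance value is an assumption:

```python
def fuse_views(front_xy, top_xy, x_tol=2.0):
    # front_xy: part -> (x, z) from the front view
    # top_xy:   contour -> (x, y) from the top view
    positions = {}
    for part, (fx, fz) in front_xy.items():
        for _, (tx, ty) in top_xy.items():
            # Cross-view association: match contours whose x coordinates
            # agree within tolerance, then merge into a 3D point.
            if abs(fx - tx) <= x_tol:
                positions[part] = (fx, ty, fz)
                break
    return positions

front = {"MFC-1": (100.0, 50.0), "VLV-1": (200.0, 50.0)}
top = {"c1": (101.0, 30.0), "c2": (199.5, 35.0)}
pos = fuse_views(front, top)
```

The production engine also has to handle view rotation, scale, and ambiguous matches, which is why the text frames the association as an optimization problem rather than a simple nearest-x lookup.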

6. Assembly Engine: From Layout Optimization to Native SLDASM Output

The Assembly Engine is the final stage of the pipeline and the most technically challenging — converting BOM and 3D position data into native assembly files that can be directly opened and edited in SolidWorks. The assembly model is trained through supervised learning: using engineers’ historically completed assembly cases as labeled data, the system learns mate relationships, constraint types, and spatial layout patterns between parts, enabling automated assembly inference.

6.1 3D Layout Optimization

Uses a constraint satisfaction (CP-SAT) solver for part layout:

  • Constraints: no part overlap, interface alignment, maintenance clearance, pipe length minimization
  • Optimization objective: minimize overall footprint and total pipe routing length
  • Cabinet constraints: parts must be placed within specified cabinet dimensions
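The production engine uses a CP-SAT constraint solver; the pure-Python sketch below conveys the same constraints (stay inside the cabinet, keep a maintenance gap, no overlap) with a much simpler greedy shelf placement, so dimensions and the gap value are illustrative:

```python
def place_parts(parts, cabinet_w, cabinet_h, gap=10):
    # parts: name -> (width, height) in mm; gap models the
    # maintenance-clearance constraint between neighbouring parts.
    placements, x, y, row_h = {}, 0, 0, 0
    # Place tallest parts first so each shelf row is as dense as possible.
    for name, (w, h) in sorted(parts.items(), key=lambda kv: -kv[1][1]):
        if x + w > cabinet_w:            # row full: start a new shelf row
            x, y = 0, y + row_h + gap
            row_h = 0
        if y + h > cabinet_h:            # cabinet constraint violated
            raise ValueError(f"{name} does not fit in the cabinet")
        placements[name] = (x, y)
        x += w + gap
        row_h = max(row_h, h)
    return placements

layout = place_parts({"MFC": (50, 80), "valve": (30, 40)},
                     cabinet_w=100, cabinet_h=200)
```

A real CP-SAT formulation would express no-overlap and clearance as interval constraints and let the solver minimize footprint and pipe length jointly; the greedy version only demonstrates what a feasible placement must satisfy.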

6.2 Automated Pipe Routing

Intelligent pathfinding-based automated pipe routing:

  • Constructs a 3D grid map where placed parts are marked as obstacles
  • Automatically searches for the shortest collision-free path for each pair of ports to be connected
  • Crossing optimization: detects crossing points and automatically adjusts height layering
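The grid-based search can be sketched as a breadth-first search over a 3D occupancy grid, which returns the shortest axis-aligned, collision-free path between two ports (grid size and coordinates below are illustrative):

```python
from collections import deque

def route_pipe(start, goal, obstacles, bounds):
    # bounds: grid dimensions (nx, ny, nz); obstacles: set of blocked cells.
    moves = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    queue, prev = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Reconstruct the path by walking the predecessor chain.
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        for dx, dy, dz in moves:
            nxt = (cell[0] + dx, cell[1] + dy, cell[2] + dz)
            if (all(0 <= nxt[i] < bounds[i] for i in range(3))
                    and nxt not in obstacles and nxt not in prev):
                prev[nxt] = cell
                queue.append(nxt)
    return None  # no collision-free route exists

# A placed part at (1, 0, 0) forces the route to detour around it.
path = route_pipe((0, 0, 0), (2, 0, 0), obstacles={(1, 0, 0)}, bounds=(3, 2, 1))
```

Crossing optimization would run on top of this: detect where two routed paths share a cell and re-route one of them one grid layer higher.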

6.3 Intelligent Mate Face Matching: The Key Technology Behind SolidWorks Auto-Assembly

This is the core technology that distinguishes NeuroBox D from other “pseudo-automation” tools.

Through deep preprocessing of the parts library, NeuroBox D establishes a unique identifier mapping for each part’s mate faces. During assembly, the system automatically matches corresponding mate faces between parts and creates precise mate relationships directly through the SolidWorks API — fully automated with no manual intervention required.

This means the generated SLDASM file contains complete mate relationship definitions. When engineers open it, they see a correctly assembled model — not a pile of scattered parts.
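The face-matching idea can be illustrated as pairing complementary faces by connection type, size, and orientation. The face schema and identifiers below are hypothetical, not the actual library format:

```python
def match_mates(part_a, part_b):
    # Pair complementary mate faces: same connection type and size,
    # opposite gender (male fitting into female port).
    mates = []
    for fa in part_a["faces"]:
        for fb in part_b["faces"]:
            if (fa["type"] == fb["type"] and fa["size"] == fb["size"]
                    and fa["gender"] != fb["gender"]):
                mates.append((fa["id"], fb["id"]))
    return mates

# Preprocessed library entries: each part carries uniquely identified faces.
mfc = {"faces": [
    {"id": "MFC.out", "type": "VCR", "size": "1/4", "gender": "male"},
]}
blk = {"faces": [
    {"id": "BLK.p1", "type": "VCR", "size": "1/4", "gender": "female"},
    {"id": "BLK.p2", "type": "VCR", "size": "1/2", "gender": "female"},
]}
pairs = match_mates(mfc, blk)
```

Each matched pair then becomes one mate definition written into the SLDASM via the SolidWorks API, which is why the parts library must be preprocessed with stable face identifiers up front.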

6.4 SolidWorks Integration

Communicates with SolidWorks instances through a proprietary remote communication module:

  • Remotely connects to a Windows workstation running SolidWorks
  • Sends assembly command sequences (insert part → define mate → generate sub-assembly → export STEP)
  • Output files are available for download through the web interface
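Since the remote protocol is proprietary, the following is only an illustrative command payload in the order the text describes; every field name here is an assumption:

```python
import json

# Hypothetical command sequence sent to the SolidWorks workstation:
# insert part -> define mate -> generate sub-assembly -> export.
commands = [
    {"op": "insert_part", "part": "MFC-1000.sldprt", "pos": [0, 0, 0]},
    {"op": "add_mate", "face_a": "MFC.out", "face_b": "BLK.p1",
     "type": "coincident"},
    {"op": "make_subassembly", "name": "gas_panel"},
    {"op": "export", "format": "step", "path": "gas_panel.step"},
]
payload = json.dumps({"session": "ws-01", "commands": commands})
```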

7. Data Flywheel: A Self-Improving Learning Mechanism

NeuroBox D is not a static rules engine. It incorporates continuous learning mechanisms across three layers:

7.1 Detection Model Iteration

Through one-shot learning, every time a customer annotates a new P&ID symbol, the detection model’s coverage expands. Once sufficient samples accumulate, incremental YOLO model training can be triggered to further improve recognition accuracy.

7.2 BOM Rule Learning

When engineers review and modify the system’s inferred BOM, those modifications are automatically fed back into L3 learned constraints:

  • “System recommended IGS Block; engineer changed it to VCR Block” → learns that VCR is more appropriate in this context
  • “System missed a gasket; engineer added it manually” → learns that this equipment combination requires an additional seal
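One simple way to realize this feedback loop is to tally engineer overrides and promote a substitution into an L3 rule once it recurs; the `RuleLearner` class and its promotion threshold are illustrative:

```python
from collections import Counter

class RuleLearner:
    """Turn repeated engineer overrides into L3 learned constraints."""

    def __init__(self, promote_after=3):
        self.overrides = Counter()
        self.promote_after = promote_after
        self.learned = {}  # (context, system_choice) -> preferred part

    def record(self, context, system_choice, engineer_choice):
        # Only disagreements carry signal; identical choices are ignored.
        if system_choice != engineer_choice:
            key = (context, system_choice, engineer_choice)
            self.overrides[key] += 1
            if self.overrides[key] >= self.promote_after:
                self.learned[(context, system_choice)] = engineer_choice

learner = RuleLearner()
for _ in range(3):
    learner.record("gas_panel_inlet", "IGS Block", "VCR Block")
```

Requiring several recurrences before promotion keeps a one-off correction from silently becoming a rule, which matters because L3 constraints feed every subsequent BOM inference.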

7.3 Assembly Pattern Accumulation

Every successfully generated assembly and its mate relationships are stored as historical references. When a similar equipment combination is encountered, the system can directly reuse the historical mate solution rather than inferring from scratch.

Measured Results: In internal testing, after completing 10 projects using the same customer’s standard parts library, BOM inference accuracy improved from 78% on first use to 93%, and assembly generation success rate increased from 65% to 89%. The data flywheel effect was most pronounced between projects 5 and 8.

8. Deployment Architecture and Performance Metrics

8.1 Deployment Options

NeuroBox D supports two deployment modes:

  • On-premises deployment: All services are deployed within the customer’s private network; drawings and model data never leave the facility. Containerized one-click deployment requires one GPU server (for running the Detection Engine’s VLM) and one Windows workstation running SolidWorks
  • Hybrid deployment: Detection Engine and Inference Engine are deployed on MST’s cloud; Assembly Engine and SolidWorks integration are deployed on the customer side. Drawings are transmitted through an encrypted channel

8.2 Performance Metrics

Metric | Value | Notes
P&ID symbol recognition rate | 94% | ISO 10628-2 standard symbol set, including one-shot augmentation
BOM inference accuracy | 93% | With L3 constraints trained on 10+ projects
Standard parts library size | 500+ | Native SolidWorks parts with pre-processed mate faces
End-to-end processing time | 5–15 minutes | From P&ID upload to SLDASM generation (varies by complexity)
Supported input formats | PDF / PNG / DWG / DXF | Direct import of native AutoCAD formats
Output formats | .sldasm / .step | Native SolidWorks assembly or universal STEP format

9. Customer Value and Use Cases

9.1 Target Customers

  • Semiconductor equipment OEMs: Frequently design gas panels, chamber assemblies, and other modules where design cycle time directly impacts equipment delivery schedules
  • Equipment integrators: Customize equipment configurations based on end customers’ P&IDs and need to rapidly generate 3D proposals
  • Design institutes and research organizations: Handle multiple equipment design projects simultaneously, where design manpower is the critical bottleneck

9.2 Typical Use Cases

Use Case 1: Rapid Gas Panel Design

The customer provides a P&ID drawing and their standard parts library. NeuroBox D generates a complete gas panel 3D assembly in 15 minutes, including all valves, fittings, blocks, seals, and fasteners with correct mate relationships. After the engineer reviews and fine-tunes, the model is ready for downstream structural design and drawing generation.

Use Case 2: Rapid Design Change Response

After a P&ID revision (e.g., adding a gas line), the updated drawing is re-uploaded, and the system automatically identifies the changes and incrementally updates the BOM and 3D model rather than regenerating from scratch.

Use Case 3: Accelerated Onboarding

New mechanical engineers no longer need months to learn “which auxiliary parts go with which equipment” — this knowledge is already embedded in the system’s L1/L2/L3 constraints. New hires can use the system to generate initial proposals and gradually build expertise through the review and modification process.

9.3 Return on Investment

Field Case Study: 10-person design team at a semiconductor equipment company

  • Single gas panel design time: reduced from 3-5 days to 0.5 days (including review)
  • BOM verification time: reduced from 2 hours to 15 minutes
  • Overall design team engineering-hour savings: 60%
  • Design error rate (missing/incorrect parts): reduced by 75%

Learn More

To schedule a product demo or technical evaluation of NeuroBox D, please contact our technical team.

© 2025-2026 MST Semiconductor (Shanghai) Co., Ltd. · ai-mst.com · This whitepaper is protected by intellectual property rights

Phone: 021-58717229 · Email: contact@ai-mst.com