YOLOv8 OpenVINO Inference in C++ 🦾

Welcome to the YOLOv8 OpenVINO Inference example in C++! This guide will help you get started with running powerful YOLOv8 models through the OpenVINO and OpenCV APIs in your C++ projects. Whether you're looking to enhance performance or add flexibility to your applications, this example has you covered.

🌟 Features

  • 🚀 Model Format Support: Compatible with both ONNX and OpenVINO IR formats.
  • ⚙️ Precision Options: Run models in FP32, FP16, and INT8 precision.
  • 🔄 Dynamic Shape Loading: Easily handle models with dynamic input shapes (see the sketch after this list).
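
For orientation, here is a minimal sketch of how these options map onto the OpenVINO C++ API: ov::Core::read_model accepts both ONNX and IR files, and a dynamic input shape can be fixed with reshape before compilation. The model file name and target device below are placeholders, not taken from this example's sources.

    #include <openvino/openvino.hpp>

    int main() {
        ov::Core core;
        // read_model() accepts both ONNX (.onnx) and OpenVINO IR (.xml) files.
        std::shared_ptr<ov::Model> model = core.read_model("yolov8s.xml");

        // For a model exported with a dynamic input shape, the shape can be
        // fixed (or left dynamic) before compilation:
        // model->reshape({1, 3, 640, 640});

        ov::CompiledModel compiled = core.compile_model(model, "CPU");
        ov::InferRequest request = compiled.create_infer_request();
        return 0;
    }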

📋 Dependencies

To ensure smooth execution, please make sure you have the following dependencies installed:

Dependency   Version
OpenVINO     >=2023.3
OpenCV       >=4.5.0
C++          >=14
CMake        >=3.12.0
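
To confirm which versions your toolchain actually picks up, a small standalone check like the one below (not part of the example itself) can be built against the same OpenVINO and OpenCV installations.

    #include <iostream>
    #include <openvino/openvino.hpp>
    #include <opencv2/core/version.hpp>

    int main() {
        // Print the OpenVINO runtime version and the OpenCV version the
        // binary was compiled against.
        std::cout << "OpenVINO: " << ov::get_openvino_version() << std::endl;
        std::cout << "OpenCV:   " << CV_VERSION << std::endl;
        return 0;
    }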

⚙️ Build Instructions

Follow these steps to build the project:

  1. Clone the repository:

    git clone https://github.com/ultralytics/ultralytics.git
    cd ultralytics/examples/YOLOv8-OpenVINO-CPP-Inference
    
  2. Create a build directory and compile the project:

    mkdir build
    cd build
    cmake ..
    make
    

🛠️ Usage

Once built, you can run inference on an image using the following command:

./detect <model_path.{onnx, xml}> <image_path.jpg>
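
For orientation, the sketch below illustrates the command-line flow behind that usage line. The detector call is left as a comment because the actual class and method names live in inference.h/inference.cc and are not reproduced here.

    #include <iostream>
    #include <string>
    #include <opencv2/opencv.hpp>

    int main(int argc, char** argv) {
        if (argc != 3) {
            std::cerr << "Usage: ./detect <model_path.{onnx, xml}> <image_path.jpg>\n";
            return 1;
        }
        const std::string model_path = argv[1];

        cv::Mat image = cv::imread(argv[2]);
        if (image.empty()) {
            std::cerr << "Failed to read image: " << argv[2] << "\n";
            return 1;
        }

        // In the real example, a detector built from model_path (see
        // inference.h/inference.cc) would run on `image` here and the
        // resulting boxes would be drawn before display.
        cv::imshow("Result", image);
        cv::waitKey(0);
        return 0;
    }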

🔄 Exporting YOLOv8 Models

To use your YOLOv8 model with OpenVINO, you need to export it first. Use the command below to export the model:

yolo export model=yolov8s.pt imgsz=640 format=openvino

📸 Screenshots

  • Running using the OpenVINO model (screenshot)
  • Running using the ONNX model (screenshot)

❤️ Contributions

We hope this example helps you integrate YOLOv8 with OpenVINO and OpenCV into your C++ projects. Contributions and feedback are always welcome. Happy coding! 🚀