A Tale of Two Open-Source Ecosystems: Scaling Autonomy with AutoDRIVE & Autoware
https://autoware.org/scaling-autonomy-with-autodrive-autoware/ (Tue, 02 Sep 2025)

Developing and testing autonomous vehicle technologies often involves working across a wide range of platform sizes — from miniature testbeds to full-scale vehicles — each chosen based on space, safety, and budget considerations. However, this diversity introduces significant challenges when it comes to deploying and validating autonomy algorithms. Differences in vehicle dynamics, sensor configurations, computing resources, and environmental conditions, along with regulatory and scalability concerns, make the process complex and fragmented. To address these issues, we introduce the AutoDRIVE Ecosystem — a unified framework designed to model and simulate digital twins of autonomous vehicles across different scales and operational design domains (ODDs). In this blog, we explore how the AutoDRIVE Ecosystem leverages autonomy-oriented digital twins to deploy the Autoware software stack on various vehicle platforms to achieve ODD-specific tasks. We also highlight its flexibility in supporting virtual, hybrid, and real-world testing paradigms — enabling a seamless simulation-to-reality (sim2real) transition of autonomous driving software.

The Vision

As autonomous vehicle systems grow in complexity, simulation has become essential for bridging the gap between conceptual design and real-world deployment. Yet, creating simulations that accurately reflect realistic vehicle dynamics, sensor characteristics, and environmental conditions — while also enabling real-time interactivity — remains a major challenge. Traditional simulations often fall short in supporting these “autonomy-oriented” demands, where back-end physics and front-end graphics must be balanced with equal fidelity.

To truly enable simulation-driven design, testing, and validation of autonomous systems, we envision a shift from static, fixed-parameter virtual models to dynamic and adaptive digital twins. These autonomy-oriented digital twins capture the full system-of-systems-level interactions — including vehicles, sensors, actuators, infrastructure and environment — while offering seamless integration with autonomy software stacks.

This blog presents our approach to building such digital twins across different vehicle scales, using a unified real2sim2real workflow to support robust development and deployment of the Autoware stack. Our goal is to close the loop between simulation and reality, enabling smarter, faster, and more scalable autonomy developments.

Digital Twins

To demonstrate our framework across different operational scales, we worked with a diverse fleet of autonomous vehicles — from small-scale experimental platforms to full-sized commercial vehicles. These included Nigel (1:14 scale), RoboRacer (1:10 scale), Hunter SE (1:5 scale), and OpenCAV (1:1 scale).

Each platform was equipped with sensors tailored to its size and function. Smaller vehicles like Nigel and RoboRacer featured hobby-grade sensors such as encoders, IMUs, RGB/D cameras, 2D LiDARs, and indoor positioning systems (IPS). Larger platforms, such as Hunter SE and OpenCAV, were retrofitted with different variants of 3D LiDARs and other industry-grade sensors. Actuation setups also varied by scale. While the smaller platforms relied on basic throttle and steering actuators, OpenCAV included a full powertrain model with detailed control over throttle, brakes, steering, and handbrakes — mirroring real-world vehicle commands.

For digital twinning, we adopted the AutoDRIVE Simulator, a high-fidelity platform built for autonomy-centric applications. Each digital twin was calibrated to match its physical counterpart in terms of its perception characteristics as well as system dynamics, ensuring a reliable real2sim transfer.

Autoware API

The core API development and integration with the Autoware stack for all the virtual/real vehicles was accomplished using the AutoDRIVE Devkit. Specifically, AutoDRIVE’s Autoware API builds on top of its ROS 2 API, which is streamlined to work with the Autoware Core/Universe stack. It is fully compatible with both the AutoDRIVE Simulator and the AutoDRIVE Testbed, ensuring a seamless sim2real transfer without changing any perception, planning, or control algorithms or parameters.
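To make the shape of such an API layer concrete, the following minimal ROS 2 (rclpy) node sketches how a bridge of this kind can relay simulator feedback and control commands. The topic names and message choices here are illustrative assumptions, not the actual AutoDRIVE API surface.

```python
# Minimal sketch of a simulator-to-Autoware bridge node.
# Topic names and exact message types are illustrative assumptions.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import TwistStamped
from autoware_auto_control_msgs.msg import AckermannControlCommand

class AutoDriveBridge(Node):
    def __init__(self):
        super().__init__('autodrive_autoware_bridge')
        # Hypothetical simulator feedback topic -> Autoware-side velocity report.
        self.create_subscription(
            TwistStamped, '/sim/vehicle_twist', self.on_twist, 10)
        # Autoware control command -> simulator actuation (mapping stubbed out).
        self.create_subscription(
            AckermannControlCommand, '/control/command/control_cmd',
            self.on_control, 10)
        self.twist_pub = self.create_publisher(
            TwistStamped, '/vehicle/status/twist', 10)

    def on_twist(self, msg: TwistStamped):
        # Forward simulator odometry to the Autoware side unchanged.
        self.twist_pub.publish(msg)

    def on_control(self, msg: AckermannControlCommand):
        # A real bridge would map this onto simulator throttle/steering here.
        self.get_logger().debug(
            f'steer={msg.lateral.steering_tire_angle:.3f} rad')

def main():
    rclpy.init()
    node = AutoDriveBridge()
    rclpy.spin(node)
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```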

The exact inputs, outputs, and configurations of the perception, planning, and control modules vary with the underlying vehicle platform. To keep the overall project clean and well-organized, a set of custom meta-packages was therefore developed within the Autoware stack, handling the different perception, planning, and control algorithms, with their different inputs and outputs, as independent packages. Additionally, a separate meta-package was created to handle the different vehicles, namely Nigel, RoboRacer, Hunter SE, and OpenCAV. Each vehicle-specific package hosts the parameter configurations for perception, planning, and control algorithms, environment maps, RViz configurations, API scripts, teleoperation programs, and convenient launch files for getting started quickly and easily.
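As an illustration of that convenience layer, a vehicle-specific launch file in such a meta-package might look roughly like the sketch below; the package, executable, and file names are hypothetical placeholders, not the actual AutoDRIVE package layout.

```python
# Hypothetical launch file: bring up a vehicle bridge with its own parameters.
import os
from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    # 'autodrive_nigel' is a made-up vehicle-specific package name.
    share = get_package_share_directory('autodrive_nigel')
    params = os.path.join(share, 'config', 'control_params.yaml')
    return LaunchDescription([
        Node(
            package='autodrive_nigel',
            executable='autoware_api_bridge',  # hypothetical executable
            parameters=[params],
            output='screen',
        ),
    ])
```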

Applications and Use Cases

The following is a brief summary of potential applications and use cases, which align well with the different ODDs proposed by the Autoware Foundation:

  • Autonomous Valet Parking (AVP): Mapping of a parking lot, localization within the created map and autonomous driving within the parking lot.
  • Cargo Delivery: Autonomous mobile robots for the transport of goods between multiple points or last-mile delivery.
  • Racing: Autonomous racing using small-scale (e.g. RoboRacer) and full-scale (e.g. Indy Autonomous Challenge) vehicles running the Autoware stack.
  • Robo-Bus/Shuttle: Fully autonomous (Level 4) buses and shuttles operating on public roads with predefined routes and stops.
  • Robo-Taxi: Fully autonomous (Level 4) taxis operating in dense urban environments, picking up and dropping off passengers from point A to point B.
  • Off-Road Exploration: The Autoware Foundation has recently introduced an off-road ODD. Such off-road deployments could be applied for agricultural, military or extra-terrestrial applications.

Getting Started

You can get started with AutoDRIVE and Autoware today! Here are a few useful resources to take that first step towards immersing yourself within the Autoware Universe:

  • GitHub Repository: This repository is a fork of the upstream Autoware Universe repository, which contains the AutoDRIVE-Autoware integration APIs and demos.
  • Documentation: This documentation provides detailed steps for installation as well as setting up the turn-key demos.
  • YouTube Playlist: This playlist contains videos right from the installation tutorial all the way up to various turn-key demos.
  • Research Paper: This paper can help provide a scientific viewpoint on why and how the AutoDRIVE-Autoware integration is useful.

What’s Next?

PIXKIT 2.0 Digital Twin in AutoDRIVE

We are working on digitally twinning more Autoware-supported platforms (e.g., PIXKIT) using the AutoDRIVE Ecosystem, thereby expanding its serviceability. We hope this will lower the barrier to entry for students and researchers getting started with the Autoware stack itself or with the various Autoware-enabled autonomous vehicles.

Autoware Centers of Excellence Steering Committee August 2025 Update
https://autoware.org/autoware-centers-of-excellence-steering-committee-august-2025-update/ (Tue, 02 Sep 2025)

The CoE meeting returned after a short summer pause, bringing together our university partners to share research progress and collaboration updates.

Key Highlights

Research Showcase: The work of many CoE representatives was highlighted on the Autoware Foundation social media channels to bring it visibility and recognition.

Roadmap Taskforce: Continued the conversation on the work done by the Roadmap Taskforce to keep CoE members informed.

Off-Road Autonomy: Progress on datasets, training, and simulation for terrestrial and space exploration use cases.

Collaboration Opportunities: Upcoming EU projects, NASA/Space ROS alignment, and discussions on dataset standards and affordable platforms.


Autoware CoE Research Showcase


This month’s meeting highlighted a range of recent academic contributions from CoE members:

  • Personalized Autonomy: Dr. Ziran Wang presented ongoing work using LLMs and VLMs to make behavioral planners more transparent and understandable to drivers.
  • Vehicle Dynamics: Associate Prof. Hormoz Marzbani shared progress on scalable tire dynamics modeling and learning for control, supporting safer and more adaptive driving behavior.
  • Automated Bug Detection: The UC Irvine team (Josh Garcia and Alfred Chen) demonstrated tools for automated bug finding and fixing, applied not only to Autoware but also to Apollo and OpenPilot. Their efforts led to NSF CAREER Awards for both professors.
  • Clemson AutoDrive Platform: Venkat and his students showcased results from their simulation racing leagues with 60–70 teams, which ran without software crashes. Their digital twin environments are now being extended beyond racing to support off-road autonomy research.

Autoware Roadmap Taskforce Overview

The CoE meeting also reviewed progress from the Autoware Roadmap Taskforce, which launched earlier this year to coordinate execution across working groups. The taskforce is structured into four phases (P1–P4, three months each) and nine working groups, spanning core AV development, enabling technologies, and production/validation.

Three main objectives guide the effort:

  • Incorporate cutting-edge AI-first technologies as Autoware transitions from v1.0 to v2.0.
  • Deliver a production-ready stack that members can commercialize.
  • Expand deployments of Autoware to more vehicles and increase autonomous mileage.

This structured approach ensures that research advances from CoE members can connect directly to Autoware’s roadmap, supporting the transition from lab prototypes to production deployments.


Off-Road Working Group Focus

The Off-Road WG, led by Po-Jen Wang, is taking an end-to-end approach to autonomy in extreme environments, spanning both terrestrial and space applications.

  • Terrestrial vehicles: Off-road racing and mining vehicles, where LiDAR-based perception can be applied.
  • Space exploration: Mars rovers and Lunar Terrain Vehicles, where harsh conditions make LiDAR impractical, requiring vision-based perception pipelines.

A modified version of the AutoSeg foundation model is being applied, trained on off-road datasets to deliver three key perception capabilities:

  • Free space detection (drivable vs. non-drivable areas)
  • Object detection (structures, vehicles, vulnerable living beings, etc.)
  • Terrain classification (paved, dirt, vegetation, snow, etc.)

Planning and control modules are being adapted for rough terrain, with tighter coupling between perception and control to ensure stability in challenging conditions.


Dataset Integration and Training Progress

To support robust off-road perception, CoE members have combined six open datasets — Rellis3D, Goose, ORFD, Yamaha-CMU, CaSSeD, and OFFSED — under a unified labeling scheme. A custom parsing and configuration framework has been developed to standardize classes and formats across sources.
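In practice, such a unified labeling scheme boils down to per-dataset lookup tables from native class IDs into one shared taxonomy. The sketch below illustrates the idea with made-up class tables; the actual framework's configuration format is not shown in this post.

```python
# Sketch: remap per-dataset semantic labels into one shared off-road taxonomy.
import numpy as np

# Shared taxonomy (illustrative only).
UNIFIED = {0: 'void', 1: 'drivable', 2: 'obstacle', 3: 'vegetation', 4: 'sky'}

# Per-dataset native-ID -> unified-ID tables (values are made up).
REMAP = {
    'rellis3d': {0: 0, 1: 1, 3: 3, 4: 2, 5: 4},
    'goose':    {0: 0, 7: 1, 12: 3, 23: 2},
}

def remap_mask(mask: np.ndarray, dataset: str) -> np.ndarray:
    """Apply a lookup table so all datasets share one label space."""
    lut = np.zeros(int(mask.max()) + 1, dtype=np.uint8)  # unmapped IDs -> void
    for native, unified in REMAP[dataset].items():
        if native < lut.size:
            lut[native] = unified
    return lut[mask]

mask = np.random.randint(0, 6, size=(4, 4))
print(remap_mask(mask, 'rellis3d'))
```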

Early training results are promising:

  • Free space segmentation models have reached ~80% IoU, showing reliable detection of drivable areas.
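For reference, the quoted IoU is the standard intersection-over-union score; for the binary free-space case it can be computed as in this small sketch:

```python
# Binary IoU for the drivable class: |pred AND gt| / |pred OR gt|.
import numpy as np

def free_space_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter) / float(union) if union else 1.0

pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
gt   = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
print(f'IoU = {free_space_iou(pred, gt):.2f}')  # 0.50
```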

Work continues on object detection and terrain classification, with initial tests already demonstrating generalization on unseen data.

While the dataset size (~22k images) remains smaller than typical on-road datasets, discussions are underway with potential industry contributors to expand the pool of off-road training data.


Simulation Environments

The Off-Road WG is extending Autoware’s simulation capabilities to cover both terrestrial and space-focused use cases:

  • Mining environments: New vehicle and terrain models are being developed in CARLA 0.10, enabling realistic testing of heavy-duty off-road applications.
  • Off-road racing: A scalable Isaac Sim RoboRacer model is being upgraded from 1/10th to 1/5th scale. A configurable racetrack generator has been built, allowing randomized parameters such as track width, banking, and corner angles (a toy parameter-sampling sketch follows this list).
  • Mars environment: A digital twin has been created based on NASA Perseverance rover locations, with terrain models verified against rover images.
  • Lunar environment: An environment based on a candidate Artemis landing site at the lunar South Pole has been developed, complete with detailed terrain and rock assets.

These environments provide realistic testbeds for validating perception, planning, and control pipelines in conditions ranging from industrial sites to planetary surfaces.


Collaboration Opportunities, Next Steps, and Suggestions

The August meeting also highlighted upcoming opportunities for joint work and cross-university collaboration:

  • Poznan University: Preparing a real off-road project with potential EU funding, including a 700 m² covered test facility with basalt sand for lunar rover analog studies.
  • Virginia Polytechnic Institute and State University: Collecting small-scale airport data and opening access to off-road test sections. Alignment with Space ROS standards is in progress through NASA’s Picnic team.
  • Standardized Platforms: Suggestions were raised for creating a common, affordable platform, similar to F1TENTH, to accelerate off-road autonomy research.
  • Simulation & Physics: Interest in converging simulators and advancing physics models for granular media was shared by several members.
  • Dataset Guidelines: Discussions began around tiered dataset requirements to ensure consistency across different projects.

The August CoE meeting marked a strong restart after the summer break, with new research results, simulation progress, and cross-university collaborations pointing toward impactful outcomes. From personalized autonomy and scalable vehicle dynamics to lunar and Martian testbeds, the CoE community continues to expand the boundaries of what Autoware can achieve in research and education.

We look forward to the September meeting and the next wave of updates from our university partners.

Advancing Software Testing for Autonomous Driving Systems: A Year of Collaboration and Contribution at UCI
https://autoware.org/advancing-software-testing-for-autonomous-driving-systems/ (Wed, 27 Aug 2025)

Over the past year, researchers from the University of California, Irvine (UCI) — including Professors Joshua Garcia and Qi Alfred Chen, along with graduate students Yuqi Huai, Yuntianyi Chen, Chi Zhang, and Xiang Liao — have made significant contributions to the advancement and evaluation of autonomous driving systems through a collaborative effort between the Software Aurora Lab (SORA) and the AS²Guard Research Group. With a particular focus on scenario generation and software testing, their work spans academic research, tool development, and active participation in open-source communities such as the Autoware Foundation. Their efforts reflect a broader goal: improving the safety, reliability, and transparency of autonomous driving systems through rigorous engineering practices and collaborative engagement.

One of the key events this year was a local workshop hosted at UCI in March 2025. The workshop brought together researchers from multiple institutions with the goal of eliciting requirements for a shared, cloud-based research infrastructure to support the development and testing of autonomous driving systems. Rather than focusing on a particular software stack, the workshop centered on identifying technical, logistical, and collaborative needs that such an infrastructure must address. Participants shared perspectives on scenario generation, simulation at scale, data management, and tool interoperability—laying the groundwork for a future platform that could support reproducible, cross-institutional research in autonomous driving systems.

Complementing this effort was UCI’s organization of the SE4ADS (Software Engineering for Autonomous Driving Systems) workshop at ICSE 2025. SE4ADS serves as a growing forum for advancing software engineering research tailored to the needs of autonomous driving systems. The 2025 edition featured work on simulation-based testing, requirements integration, and safety certification. Discussions also addressed broader concerns around responsible software practices and long-term maintainability, particularly in the context of open-source autonomous systems such as Autoware. The workshop underscored a shared commitment to developing engineering foundations that can support the unique complexity and risk profile of autonomous software.

With these community needs in mind, Yuqi is now leading the design and development of a Cloud-based Autonomous Driving Systems Research Infrastructure (CADRI). Building on insights from both the UCI-hosted workshop and the SE4ADS forum, this effort aims to create a scalable, interoperable platform that supports reproducible experimentation. A key advantage of the cloud-based approach is its ability to significantly reduce upfront costs, allowing researchers to perform large-scale testing and development without substantial investment in specialized hardware. This initiative builds on Yuqi’s earlier work in scenario-based testing, including DoppelTest [1] and scenoRITA [2], two frameworks for generating scenario-based tests. He also maintains a key dependency for the SVL simulator [3], helping ensure that the simulation tool remains accessible to the research community. In parallel, Xiang has been working on migrating tools originally developed for other ADS platforms onto Autoware, thereby broadening tool compatibility and reinforcing Autoware’s role as an open-source foundation for reproducible research.

In Yuntianyi’s latest research, he has been emphasizing Autoware as he led the development of ConfVE [4], a tool designed to identify failures in autonomous driving systems that arise from alternative configurations. Misconfiguration is a known risk factor in real-world deployments, and ConfVE aims to prevent such issues by identifying inconsistent or unsafe parameter combinations early in the development cycle. As part of this approach, Yuntianyi also developed a Scenario Record Analyzer—an automated tool capable of detecting nine distinct types of violations in Autoware driving scenario records, providing a robust mechanism for validating ADS behavior against safety and performance requirements. This work leveraged HD map and scenario data from Autoware Evaluator, obtained through a collaboration with the Autoware Operational Design Domain (ODD) Working Group. The partnership provided access to realistic, systematically generated test scenarios that reflect the ODD characteristics of Autoware’s target deployment environments, enabling ConfVE and the Scenario Record Analyzer to be validated under conditions closely resembling real-world usage.

More recently, Yuntianyi presented A Comprehensive Study of Bug-Fix Patterns in Autonomous Driving Systems [5] at FSE 2025. This large-scale analysis examined over 1,300 real-world bug-fix instances from two leading open-source platforms (i.e., Autoware and Apollo) and introduced a taxonomy encompassing 15 syntactic and 27 semantic bug-fix patterns, capturing both code-level changes (e.g., conditional modifications, data structure corrections) and domain-specific modifications (e.g., path planning optimization, module integration and interaction).

Yuntianyi’s work on ConfVE and the bug-fix pattern benchmark also contributes to the CADRI project, where he serves as a project leader. His contributions provide foundational components for the Toolkit Service, enrich the ADS analytics oracles, and supply a curated dataset repository, thereby strengthening CADRI’s capability to support comprehensive analysis, testing, and improvement of autonomous driving systems.

Besides the research, Yuntianyi and Yuqi contributed to the DevOps Dojo project within the Autoware OpenADKit Working Group. As part of this effort, they refactored approximately 15% of the total Autoware ROS nodes, enhancing maintainability and consistency in the codebase. Yuntianyi also developed an automated configuration refactoring tool for Autoware ROS nodes, enabling developers to standardize and update configurations more efficiently. This tool has accelerated the development workflow, reduced manual intervention, and improved configuration reliability across the Autoware ecosystem.

While Josh approaches autonomous driving from a software engineering perspective, focusing on faults that affect system reliability and correctness, Alfred brings a security lens to the field, concentrating on vulnerabilities in autonomous vehicles. Specifically, Alfred’s team focused on evaluating the robustness of autonomous driving systems by leveraging component-level vulnerabilities using methods such as adversarial scenarios, patches, and objects. Their efforts have contributed to a Platform for Auto-driving Safety and Security (PASS) [6], a modular and extensible simulation-based evaluation framework specifically for evaluating system-level effectiveness of existing attacks or defenses across different autonomous driving models. Building on top of PASS, Chi has been spending his recent efforts on designing and developing an adversarial scenario generation framework for Autoware using the CARLA simulation environment.

With the growing complexity and expanding deployment ambitions, the need for rigorous, collaborative, and scalable engineering practices in autonomous driving systems has never been more urgent. Josh and Alfred’s teams are helping to meet this need by integrating empirical insights with tool development, infrastructure planning, and community engagement. Their work, ranging from scenario-based test generation to large-scale bug fix analyses, demonstrates how software engineering research can directly contribute to the development of safer and more reliable autonomous systems. Through close collaboration with the Autoware Foundation and a commitment to open, reproducible experimentation via efforts like CADRI, they are contributing essential building blocks for a more robust, transparent, and evidence-driven research ecosystem in autonomous driving systems.

References

[1] Yuqi Huai, Yuntianyi Chen, Sumaya Almanee, Tuan Ngo, Xiang Liao, Ziwen Wan, Qi Alfred Chen, and Joshua Garcia. 2023. Doppelgänger Test Generation for Revealing Bugs in Autonomous Driving Software. In Proceedings of the 45th International Conference on Software Engineering (ICSE ’23). IEEE Press, 2591–2603. https://doi.org/10.1109/ICSE48619.2023.00216

[2] Yuqi Huai, Sumaya Almanee, Yuntianyi Chen, Xiafa Wu, Qi Alfred Chen, and Joshua Garcia, “scenoRITA: Generating Diverse, Fully Mutable, Test Scenarios for Autonomous Vehicle Planning,” in IEEE Transactions on Software Engineering, vol. 49, no. 10, pp. 4656-4676, 1 Oct. 2023, doi: 10.1109/TSE.2023.3309610.

[3] Yuqi Huai, 2023, SORA SVL Server. Available at https://github.com/YuqiHuai/SORA-SVL (Accessed: 30 June 2025).

[4] Yuntianyi Chen, Yuqi Huai, Shilong Li, Changnam Hong, and Joshua Garcia. 2024. Misconfiguration Software Testing for Failure Emergence in Autonomous Driving Systems. Proc. ACM Softw. Eng. 1, FSE, Article 85 (July 2024), 24 pages. https://doi.org/10.1145/3660792

[5] Yuntianyi Chen, Yuqi Huai, Yirui He, Shilong Li, Changnam Hong, Qi Alfred Chen, and Joshua Garcia. 2025. A Comprehensive Study of Bug-Fix Patterns in Autonomous Driving Systems. Proc. ACM Softw. Eng. 2, FSE, Article FSE018 (July 2025), 23 pages. https://doi.org/10.1145/3715733

[6] Zhisheng Hu, Junjie Shen, Shengjian Guo, Xinyang Zhang, Zhenyu Zhong, Qi Alfred Chen, and Kang Li. 2022. PASS: A System-Driven Evaluation Platform for Autonomous Driving Safety and Security. In NDSS Workshop on Automotive and Autonomous Vehicle Security (AutoSec). Available at https://par.nsf.gov/biblio/10359464.

Scalable Tire Dynamics Modelling and Learning-Based Control for High-Speed Autonomy
https://autoware.org/scalable-tire-dynamics-modelling-and-learning-based-control-for-high-speed-autonomy/ (Wed, 20 Aug 2025)

As part of the Autoware Foundation Centre of Excellence, our team at RMIT University is developing a new control capability for high-speed autonomous vehicles by integrating real-time tire modelling with advanced learning-based trajectory optimization. The core innovations—a cornering stiffness estimation module and a Safe Information-Theoretic MPC (SIT-LMPC) controller—will be tested on a 1:7 scale autonomous car with realistic tires, capable of performing aggressive manoeuvres.

This work expands the boundaries of what Autoware can do, enabling deployment in extreme scenarios—from autonomous racing to off-road or even extraterrestrial exploration.


Why Cornering Stiffness Matters

Tire cornering stiffness (𝐶α) is a critical parameter in lateral vehicle dynamics. It relates lateral force (𝐹y) to the slip angle (α) and underpins every aspect of handling and stability control.

We define the lateral forces at the front and rear axles as:
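$$
F_{yf} = C_{\alpha f}\,\alpha_f , \qquad F_{yr} = C_{\alpha r}\,\alpha_r
$$

(the standard linear tire relation, written here with per-axle stiffnesses $C_{\alpha f}$ and $C_{\alpha r}$).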

Assuming identical tires and steady-state conditions:
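With $C_{\alpha f} = C_{\alpha r} = C_\alpha$ and the static axle loads $F_{zf} = m g\, a_2 / l$, $F_{zr} = m g\, a_1 / l$, one standard form distributes the lateral force in proportion to those loads:

$$
F_{yf} = \frac{m\,a_2}{l}\,a_y , \qquad F_{yr} = \frac{m\,a_1}{l}\,a_y
$$

where $a_y$ denotes the lateral acceleration (a symbol introduced here for readability).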

Where:

  • m is the vehicle mass
  • l = a1 + a2 is the wheelbase
  • a1 and a2 are distances from the CoG to front/rear axles
  • αf and αr are slip angles for front/rear tires
  • g is gravitational acceleration

From these, the total lateral force and yaw moment are:
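Under a small-steering-angle assumption,

$$
F_y = F_{yf} + F_{yr} , \qquad M_z = a_1\,F_{yf} - a_2\,F_{yr}
$$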

These feed into the planar dynamics:
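In the usual single-track form,

$$
m\,(\dot{v}_y + v_x\,r) = F_y , \qquad I_z\,\dot{r} = M_z
$$

with $v_x$, $v_y$ the body-frame velocities, $r$ the yaw rate, and $I_z$ the yaw moment of inertia (symbols introduced here).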

This model reveals how changes in 𝐶α—due to tire wear, surface change, or temperature—directly impact stability and control. Estimating it online gives us a safer, smarter vehicle.
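One common way to estimate 𝐶α online, consistent with the linear relation above, is recursive least squares on measured slip-angle/lateral-force pairs. The following is a minimal sketch of that idea, not the project's actual estimator:

```python
# Minimal recursive-least-squares estimate of cornering stiffness C_alpha,
# fitting F_y = C_alpha * alpha from streaming (slip angle, force) samples.
class CorneringStiffnessRLS:
    def __init__(self, c0=5e4, p0=1e6, lam=0.99):
        self.c = c0      # initial stiffness guess [N/rad]
        self.p = p0      # estimate covariance
        self.lam = lam   # forgetting factor, tracks slow drift (wear, surface)

    def update(self, alpha, fy):
        k = self.p * alpha / (self.lam + alpha * self.p * alpha)  # RLS gain
        self.c += k * (fy - self.c * alpha)                       # innovation
        self.p = (self.p - k * alpha * self.p) / self.lam
        return self.c

est = CorneringStiffnessRLS()
for a, f in [(0.01, 520.0), (0.02, 1015.0), (0.015, 760.0)]:
    print(f'C_alpha estimate: {est.update(a, f):.0f} N/rad')
```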


Scalable Implementation on a 1:7 Autonomous Vehicle

To validate this architecture in a controlled, cost-effective way, we are developing a 1:7 scale autonomous race vehicle with:

  • Real pneumatic tires
  • High-fidelity sensors and onboard computer (Jetson Orin)
  • Autoware software stack with our added modules

This mini-vehicle will operate at speeds and accelerations sufficient to invoke measurable slip, making it ideal for identifying tire parameters and validating SIT-LMPC in realistic conditions.


SIT-LMPC: Learning to Control Aggressively but Safely

Autonomous vehicles often operate in uncertain environments. We address this with Safe Information-Theoretic Learning MPC (SIT-LMPC), which blends:

  • Sampling-based MPC (MPPI) for stochastic optimization
  • Normalizing flows to learn the cost-to-go function across iterations
  • Constraint-safe learning using adaptive penalty methods

Mathematically, we cast the infinite-horizon stochastic control problem as:
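In a generic form of such a problem,

$$
\min_{\pi}\; \mathbb{E}\!\left[\sum_{t=0}^{\infty} c(x_t, u_t)\right]
\quad \text{s.t.}\quad x_{t+1} = f(x_t, u_t, w_t),\;\; x_t \in \mathcal{X},\;\; u_t \in \mathcal{U}
$$

with state $x_t$, control $u_t$, stochastic disturbance $w_t$, stage cost $c$, and constraint sets $\mathcal{X}$, $\mathcal{U}$.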

The SIT-LMPC approach iteratively builds safe sets Sℓ and learns V(x), the expected cost-to-go, using previously feasible trajectories. These are incorporated into a constrained MPC formulation solved by optimizing:
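One generic finite-horizon instance of such a formulation, over horizon $H$, is

$$
\min_{u_{0:H-1}}\; \mathbb{E}\!\left[\sum_{t=0}^{H-1} \Big( c(x_t, u_t) + \rho_X\, d_X(x_t) \Big) + V(x_H) + \rho_S\, d_S(x_H)\right]
$$

with adaptive penalty weights $\rho_X$, $\rho_S$.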

where $d_X(\cdot)$ and $d_S(\cdot)$ measure constraint violations and are penalized adaptively. All computations are GPU-accelerated, supporting real-time control even with high-dimensional models.
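For intuition, the sampling-based core of such a controller (MPPI) fits in a few lines. This is a generic MPPI step on a toy system, not the full SIT-LMPC with safe sets and a learned cost-to-go:

```python
# Generic MPPI step: sample control perturbations, roll out, softmax-weight.
import numpy as np

def mppi_step(u_nom, dynamics, cost, x0, n_samples=256, sigma=0.5, lam=1.0):
    """u_nom: (H, m) nominal controls; returns an updated control sequence."""
    H, m = u_nom.shape
    noise = np.random.randn(n_samples, H, m) * sigma
    costs = np.zeros(n_samples)
    for k in range(n_samples):
        x = x0
        for t in range(H):
            u = u_nom[t] + noise[k, t]
            costs[k] += cost(x, u)
            x = dynamics(x, u)
    w = np.exp(-(costs - costs.min()) / lam)   # information-theoretic weights
    w /= w.sum()
    return u_nom + np.tensordot(w, noise, axes=1)  # weighted perturbation

# Toy double-integrator example: drive position x[0] to zero.
dyn = lambda x, u: x + 0.1 * np.array([x[1], u[0]])
cst = lambda x, u: x[0] ** 2 + 0.1 * float(u @ u)
u = np.zeros((20, 1))
for _ in range(5):
    u = mppi_step(u, dyn, cst, x0=np.array([1.0, 0.0]))
print(u[:3].ravel())
```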


Contributions to Autoware

Our project contributes to the Autoware ecosystem by:

  • Adding a modular cornering stiffness estimator node, usable across scales and vehicle types
  • Demonstrating SIT-LMPC integration with Autoware for real-time stochastic control
  • Deploying a scalable testbed, allowing other researchers to test slip-aware control in affordable settings

These additions make Autoware suitable for racing, off-road, and extreme-terrain applications.


What’s Next

  • Q3 2025: Validation on variable surfaces, terrain changes
  • Q4 2025: Public release of Autoware-compatible modules and documentation
  • 2026: Potential scale-up to full-size off-road vehicle deployments

We welcome collaboration—especially around data sharing, experimental validation, or deploying on similar testbeds.

Driving by Conversation: Personalized Autonomous Driving with LLMs and VLMs
https://autoware.org/driving-by-conversation-personalized-autonomous-driving-with-llms-and-vlms/ (Thu, 14 Aug 2025)

LLMs and VLMs: Enabling personalization in AVs through natural language

The evolution of autonomous vehicles (AVs) has largely focused on safety, efficiency, and technical robustness. While these remain essential, the next frontier is clear—personalization.

Today’s AV stacks typically offer static driving modes—“sport,” “comfort,” “eco”—or manual parameter adjustments. These settings are rigid, fail to capture nuanced user preferences, and cannot interpret indirect or contextual instructions. In practice, they cannot adapt when a passenger says, “I’m tired, please drive more gently,” or “I’m late, could we speed up?”

Recent advances in Large Language Models (LLMs) and Vision-Language Models (VLMs) open the door to natural, human-like interaction with AVs. These models can understand plain-language commands in any language or dialect, interpret subtle modifiers (“slightly faster,” “much gentler”), and integrate contextual cues from live perception data.

By combining these capabilities with the AV’s driving stack, it becomes possible to:

  • Enable natural and nuanced conversation by understanding plain-language commands (in any language or dialect) and subtle modifiers (“slightly faster,” “much gentler”), replacing complex menu settings.
  • Make context-aware decisions by fusing live visual cues (traffic, weather, signage) with spoken intent so the vehicle adapts safely yet personally in real time.
  • Deliver personalization that improves over time via memory-augmented models that recall past rides to refine each passenger’s comfort and style preferences without retraining the core stack.

The research presented here demonstrates the first end-to-end, real-world deployments of LLM- and VLM-based frameworks, Talk2Drive and an onboard VLM motion control system, integrated with a fully functional autonomous driving stack.


System Architecture: Integrating LLM/VLM with the autonomous driving stack

The proposed architecture embeds LLM or VLM capabilities into the Strategic Driving Intelligence Layer of the AV stack (Figure 1). It processes multimodal inputs, generates context-aware driving plans, and executes low-level controls through the existing autonomy layer.

Input Information:

  • Human instruction (speech-to-text conversion).
  • Perception results (objects, weather, traffic conditions).
  • Vehicle state (pose, speed).
  • Available safe behaviors (slow down, lane change, stop).

Prompt Generation Interface:
Bundles raw inputs with system context (safety rules, operational role) and historical ride data, producing a structured prompt for the LLM/VLM.

VLM/LLM Agent:
Generates high-level policy parameters, target speed, decision priorities, and control adjustments aligned with passenger preferences.

Action Interface:
Translates high-level LLM/VLM output into low-level commands executed by the autonomous driving layer.
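Concretely, the prompt-generation and action interfaces amount to serializing these inputs into a structured prompt and validating the model's constrained response before it reaches the vehicle. Below is a rough sketch with made-up field names; the deployed systems' actual prompt templates differ.

```python
# Sketch: bundle multimodal inputs into a prompt and parse a policy response.
import json

def build_prompt(instruction, perception, vehicle_state, safe_behaviors, history):
    return (
        "You are the strategic driving layer of an autonomous vehicle.\n"
        f"Passenger said: {instruction}\n"
        f"Perception: {json.dumps(perception)}\n"
        f"Vehicle state: {json.dumps(vehicle_state)}\n"
        f"Allowed behaviors: {safe_behaviors}\n"
        f"Past preferences: {history}\n"
        'Reply as JSON: {"behavior": ..., "target_speed_mps": ...}'
    )

def parse_policy(reply, safe_behaviors, v_max=30.0):
    """Validate the model output against the allowed action set before use."""
    policy = json.loads(reply)
    assert policy["behavior"] in safe_behaviors        # reject unsafe actions
    policy["target_speed_mps"] = min(float(policy["target_speed_mps"]), v_max)
    return policy

prompt = build_prompt("I'm late, could we speed up?",
                      {"traffic": "light", "weather": "clear"},
                      {"speed_mps": 12.0},
                      ["keep_lane", "slow_down", "speed_up"],
                      ["prefers gentle braking"])
print(parse_policy('{"behavior": "speed_up", "target_speed_mps": 18}',
                   ["keep_lane", "slow_down", "speed_up"]))
```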


Real-World Testing Environment

To evaluate these systems, field tests were conducted at three distinct tracks:

  1. Highway Track – Testing lane changes, maintaining speed, responding to sudden slowdowns, and merging from on-ramps.
  2. Intersection Track – Handling yielding, protected and unprotected turns, and cross-traffic negotiation.
  3. Parking Lot Track – Navigating narrow lanes, avoiding static/dynamic obstacles, parallel parking, and reverse parking maneuvers.

These scenarios allow assessment of personalization performance across diverse traffic, speed, and maneuvering conditions.


Autonomous Vehicle Hardware Setup

Experiments were conducted using a Lexus RX450h equipped with:

  • Sensors: LiDAR (VLP-32C), radar (Aptiv ESR 2.5), GNSS (NovAtel Level 2.5 kit), multiple cameras (front, rear, in-cabin).
  • Computing Platform: Intel i9-9900 CPU, NVIDIA RTX A4000 GPU, 512 GB NVMe SSD.
  • Connectivity: Cradlepoint IBR900 Series Router with 4G-LTE.

This configuration supported both cloud-based LLM inference and fully onboard VLM inference for low-latency control.


Case Study 1: Talk2Drive: LLM-Based Personalized Driving

The Talk2Drive framework integrates GPT-4-based LLMs into a real-world AV, allowing natural verbal commands to directly influence driving behavior.

Core Capabilities:

  • Understanding multiple levels of human intention – from explicit (“drive faster”) to indirect (“I’m in a hurry”) commands.
  • Memory module for personalization – storing historical interaction data to refine driving style preferences over time.

Experiment Design:

  • Scenarios: Highway, intersection, and parking lot.
  • Evaluation metric: Takeover rate, i.e., the frequency with which the human driver needed to intervene.
  • Comparison: With and without the memory module.

Key Findings:

  • Talk2Drive reduced takeover rates by 75.9% compared to baseline non-personalized systems.
  • Adding the memory module further reduced takeover rates by up to 65.2%, demonstrating the benefit of long-term personalization.
  • System successfully interpreted context and emotional tone, enabling safer and more responsive driving adaptations.

Case Study 2: Onboard VLM for Motion Control

While LLM-based systems can operate via cloud processing, they often face latency (3–4 seconds) and connectivity constraints. The second study addressed these limitations by developing a lightweight onboard VLM framework capable of real-time inference.

Key Features:

  • Onboard deployment – No dependency on internet connectivity.
  • Multimodal reasoning – Processing visual scene inputs and natural language instructions in real time.
  • RAG-based memory module – Retrieval-Augmented Generation allows iterative refinement of control strategies through user feedback.
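The retrieval step behind such a memory can be pictured as nearest-neighbor search over embedded past interactions. Here is a minimal sketch with a stand-in embedding function; a deployed system would presumably use a learned text encoder.

```python
# Sketch: retrieve the most similar past interaction to condition the VLM.
import numpy as np

def embed(text, dim=64):
    """Stand-in embedding: hashed bag of words (real systems use an encoder)."""
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[hash(tok) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

memory = [
    "passenger asked to brake more gently near crosswalks",
    "passenger prefers faster merges on highways",
]
mem_vecs = np.stack([embed(m) for m in memory])

query = "merge faster on the highway"
scores = mem_vecs @ embed(query)   # cosine similarity (unit vectors)
print("retrieved:", memory[int(np.argmax(scores))])
```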

Experiment Design:

  • Same multi-scenario real-world setup as Talk2Drive.
  • Evaluated explicit and implicit commands, varying environmental conditions.

Key Findings:

  • Comparable reasoning capability to cloud-based LLM solutions, with significantly lower latency.
  • Takeover rate reduced by up to 76.9%.
  • Maintained safety and comfort standards while adapting to individual driving styles.

Comparative Insights

| Feature | Talk2Drive (LLM) | Onboard VLM Motion Control |
| --- | --- | --- |
| Deployment | Cloud-based (requires connectivity) | Fully onboard |
| Input Modalities | Speech/text commands | Speech/text + visual scene |
| Memory Module | Historical personalization memory | RAG-based feedback memory |
| Latency | Higher (network dependent) | Low (< real-time threshold) |
| Takeover Rate Reduction | Up to 75.9% | Up to 76.9% |
| Personalization Over Time | Yes | Yes, with continuous feedback |

Both approaches demonstrate that integrating advanced language and vision-language models with the AV stack can significantly improve personalization, trust, and user satisfaction. The choice between them depends on deployment constraints, desired input modalities, and connectivity availability.


Implications for Future Autonomous Driving

These studies represent the first real-world, end-to-end deployments of LLM and VLM personalization frameworks for autonomous vehicles. They address long-standing gaps in AV user interaction:

  1. Natural Command Interpretation – Understanding instructions without requiring structured input.
  2. Context Integration – Combining user intent with live environmental data for adaptive decision-making.
  3. Personalization Memory – Continuously refining the driving profile over multiple rides.
  4. Real-World Validation – Demonstrating effectiveness across diverse scenarios outside simulation environments.

Looking ahead, the combination of multimodal AI, onboard efficiency, and long-term personalization offers a promising path to AVs that not only drive safely but drive the way each passenger prefers.

For Further Reading:

Call For Papers – SE4ADS: The First International Workshop on Software Engineering for Autonomous Driving Systems at ICSE 2025
https://autoware.org/call-for-papers-se4ads-the-first-international-workshop-on-software-engineering-for-autonomous-driving-systems-at-icse-2025/ (Tue, 08 Oct 2024)

Software Engineering research in Autonomous Driving Systems (ADSes) faces some research community-oriented challenges because of the need to use specific hardware to install a complex system (i.e., the ADS) and run necessary tools (e.g., a simulator). Due to such challenges, reproducing research results becomes especially difficult, and newer approaches often either only compare against a random baseline as opposed to the original implementations of state-of-the-art approaches, or must compare against a re-implementation that introduces threats to validity regarding the fidelity of the implementation with respect to the original approach’s design. Moreover, re-implementing prior research work wastes time and resources because it requires researchers to repeat each other’s efforts and, in turn, hinders new opportunities for potential breakthroughs.

The main theme of this workshop, which we refer to as SE4ADS, is the exchange of ideas regarding the establishment of a community-wide infrastructure for facilitating research in the area of software engineering for autonomous driving systems. In support of that theme, SE4ADS provides a forum for practitioners and researchers to (1) share ideas and potential solutions regarding tools, libraries, benchmarks, and datasets that should belong to the infrastructure; (2) explore issues and challenges related to such research; (3) discuss mechanisms for enabling and facilitating tool availability, reusability, and interoperability in that research area; and (4) determine solutions for replicating or reproducing experiments and analyses in the workshop’s target research area. SE4ADS will help forge a new research community and create new collaborations.

The goals of the workshop are as follows:

  1. Converge on requirements and challenges for a self-sustainable, community-based research infrastructure to support software engineering for ADS. An increasingly growing number of tools and methodologies have been produced for designing, implementing, testing, analyzing, and maintaining ADSes. These tools and methodologies may have mismatched assumptions; be inaccessible, unsupported, or unmaintained while still implementing ideas useful for researchers or practitioners; or may even support different perspectives on how to construct and maintain ADSes. One of the major goals of SE4ADS is to gather researchers and practitioners together to obtain different views as to how to manage and unify these tools—and determine the best means for converging the most useful tools and datasets among them to produce a community-wide infrastructure to facilitate reusable, replicable, and reproducible software engineering research for ADS.
  2. Unite the software engineering for ADS community. The workshop will provide the opportunity for software engineering researchers who study ADSes, as well as educators and industrial practitioners, to build a community that leverages both novel and previously existing software engineering tools, techniques, and datasets in order to tackle problems that are slowing or blocking progress for the software engineering for ADS community.
  3. Construct a repository of ADS-specific baselines, benchmarks, and datasets. As part of a shared community-wide infrastructure, workshop participants will discuss issues regarding the need to construct and maintain baselines, benchmarks, and datasets containing bug-fix pairs, reusable test cases, and reusable driving scenarios. Through the workshop, participants will aid in the generation and subsequent launch of a community-wide standard of ADS artifacts that enable meaningful, objective comparison of research techniques.
  4. Determine mechanisms needed to support ADS construction and maintenance for industrial researchers and practitioners. The immediate needs of academic researchers and educators do not necessarily coincide with or support the needs of industry practitioners and researchers. To address this issue, a major goal of SE4ADS is to solicit feedback from industrial practitioners and researchers to determine the mechanisms, features, and desirable properties of a community-wide infrastructure for ADS-oriented software engineering.

Organization Committee

Joshua Garcia, University of California, Irvine

Qi Alfred Chen, University of California, Irvine

Web and Publicity

Yuqi Huai, University of California, Irvine

Yuntianyi Chen, University of California, Irvine

Important Dates

Paper Submission Deadline: Monday, November 11, 2024
Paper Acceptance Notification: Sunday, December 8, 2024

Take a look at the workshop page here 👉 https://conf.researchr.org/home/icse-2025/se4ads-2025

An Anatomy of Autonomous Racing: Autonomous Go-Karts
https://autoware.org/an-anatomy-of-autonomous-racing-autonomous-go-karts/ (Thu, 26 Sep 2024)

Racing has always been a passion of ours at Autoware Foundation. It’s not just about speed or the thrill of competition; it’s about pushing boundaries, fostering innovation, and building a strong community. Through racing, we’ve formed invaluable bonds with academics, students, and the broader racing world. These partnerships have allowed us to push the envelope of autonomous driving technology while inspiring the next generation of engineers.

Autonomous racing, as it turns out, is also an incredible educational tool. It provides students with hands-on experience in robotics, AI, and engineering in ways that traditional classroom settings cannot. From the precision required to navigate a track to the split-second decision-making needed to compete at high speeds, racing sharpens skills that are essential in the world of autonomy.

Many form factors are involved in autonomous racing, each providing unique learning opportunities. It can start with 1:10th-scale robots [1], advance to versions equipped with upgraded sensors like 3D LiDARs, and move up to autonomous go-karts. The pinnacle of this progression, as it stands today, is full-scale Indy cars [2] racing autonomously at speeds of over 270 km/h, showcasing just how far the technology can be pushed in high-stakes environments.

Autonomous Go-Karts: Revolutionizing Racing and Education

One of the standout formats in the world of autonomous racing is the Autonomous Karting Series (AKS), where autonomous go-karts take center stage. This competition, which began in 2023, is designed to push the limits of self-driving technology while providing an accessible platform for students and universities to compete and innovate. The AKS holds its annual National Grand Prix at Purdue University, where some of the brightest minds in autonomous technology face off.

In the 2024 Grand Prix, six teams—hailing from the University of Pennsylvania, UC Berkeley, UC San Diego, Purdue University, University of Michigan-Dearborn, and Kennesaw State University—fought fiercely for the top spot. The teams raced in three distinct categories:

  • Time Trial: Teams aimed to clock the fastest five laps on the track.
  • Open Category: Teams were allowed to pre-map the track and use the data to guide their kart. The challenge here was speed and control without the presence of cones.
  • Reactive Category: This was the most demanding, as teams were prohibited from pre-mapping the track. With only cones to guide their karts, teams had to rely on real-time data and rapid decision-making.

For two years in a row, the Autoware Foundation team from the University of Pennsylvania dominated the event, clinching first place in both the Open and Reactive categories in 2024, showcasing their exceptional engineering prowess and cutting-edge autonomous technology.

Adding a Niche Expertise into the Mix

In the latest AKS race, the Autoware Team at the University of Pennsylvania took their autonomous go-kart platform to new heights by incorporating Fixposition‘s Vision-RTK2 sensor. Known for its robust fusion of GNSS, inertial measurement units (IMU), and visual odometry, the Vision-RTK2 provided the team with unmatched accuracy and reliability in vehicle positioning.

The sensor was mounted near the top of the go-kart’s rear shelf and integrated with a 24 V power source. In this position, parts of the steering wheel were visible to the sensor’s camera, but this interference was filtered out using Fixposition’s WebUI, ensuring clean sensor data.

The team further optimized their setup by utilizing Point One Navigation’s Polaris RTK subscription for Network Transport of RTCM via Internet Protocol (NTRIP), which enhanced the accuracy of the real-time positioning data. The odometry data from the Vision-RTK2 was used to derive the vehicle’s local position and orientation. This information was critical for the mapping and localization stack, and after applying trajectory optimization, the kart was controlled via a pure pursuit control algorithm.
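Pure pursuit itself is compact: pick a lookahead point on the planned trajectory and steer along the circular arc that reaches it. Here is a minimal sketch of the geometry, not the team's exact implementation:

```python
# Minimal pure pursuit steering law for a kinematic bicycle model.
import math

def pure_pursuit_steer(pose, path, lookahead=2.0, wheelbase=1.1):
    """pose = (x, y, yaw); path = list of (x, y). Returns steering angle [rad]."""
    x, y, yaw = pose
    # Pick the first path point at least `lookahead` metres away.
    goal = next(((px, py) for px, py in path
                 if math.hypot(px - x, py - y) >= lookahead), path[-1])
    # Transform the goal point into the vehicle frame.
    dx, dy = goal[0] - x, goal[1] - y
    lx = math.cos(yaw) * dx + math.sin(yaw) * dy
    ly = -math.sin(yaw) * dx + math.cos(yaw) * dy
    # Pure pursuit curvature: kappa = 2 * lateral offset / lookahead distance^2.
    ld2 = lx * lx + ly * ly
    kappa = 2.0 * ly / ld2 if ld2 > 1e-6 else 0.0
    return math.atan(wheelbase * kappa)

path = [(i * 0.5, 0.1 * i) for i in range(40)]
print(f'steer = {pure_pursuit_steer((0.0, 0.0, 0.0), path):.3f} rad')
```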

Despite occasional GNSS drifts, the visual odometry from the Vision-RTK2 ensured that the position estimates were accurate throughout the race. The team achieved a remarkable sub-5-centimeter position accuracy across the track. Impressively, their go-kart completed five laps in just 5 minutes and 48 seconds, setting a new record for the fastest time trial in AKS race history.

By leveraging the cutting-edge technology provided by Fixposition, the Autoware Team demonstrated how niche sensors can significantly elevate the performance of autonomous racing platforms.

Looking Ahead: A Growing Future for Autonomous Racing

The Autonomous Karting Series is just getting started. With its popularity and technological significance growing, the competition is poised to expand beyond Purdue University in the coming years. New race tracks and larger events will likely emerge, offering more teams the chance to test their innovations and push the limits of autonomous racing technology.

The Autoware Foundation is proud to be a supporter of AKS and is committed to working with the competition as it expands. We see tremendous educational and technological potential in the series, and with the success of the Autoware Team at UPenn, the benefits of the competition are clear. In fact, the UPenn team has open-sourced their entire vehicle design under the AV4EV platform, making it freely accessible to universities and enthusiasts worldwide. This includes detailed component specifications, drive-by-wire system design, and software architecture, creating a valuable resource for anyone looking to develop their autonomous go-kart platforms.

The open-sourced AV4EV design has already inspired many universities that are part of the Autoware Centers of Excellence initiative. By adopting this platform, students and faculty can avoid having to build everything from scratch, accelerating their research and participation in autonomous racing. We expect to see more AV4EV go-kart platforms racing on tracks worldwide, with teams not just competing but also learning from and sharing their experiences with one another.

As the competition grows and more teams get involved, the spirit of collaboration and innovation will continue to drive the AKS forward. The future of autonomous racing is bright, and we at the Autoware Foundation are excited to be part of this transformative journey.

[1] Visit the F1Tenth Foundation website.
[2] Visit the Indy Autonomous Challenge website.

Autoware Tutorial at the IEEE IV 2024
https://autoware.org/autoware-tutorial-at-the-ieee-iv-2024/ (Mon, 06 May 2024)

Overview

The Autoware Centers of Excellence is happy to announce an upcoming tutorial hosted at the IEEE Intelligent Vehicles Symposium at Jeju Shinhwa World, Jeju Island, Korea, June 2-5, 2024. This tutorial aims to introduce the open-source autonomous driving software Autoware Universe and to detail how it can be used in autonomous driving research. Autoware is an open-source software platform and supporting ecosystem for autonomous driving research and development. Speakers are invited from universities worldwide to give talks on various autonomous-vehicle-related topics and the application of Autoware in these fields. The tutorial will serve as a general introduction to Autoware and the challenges in deploying autonomous driving systems in the real world. It will help the audience get familiar with installing Autoware, deploying Autoware on vehicle platforms, and using Autoware in research projects. The tutorial will be split into seven 50-minute sessions, each covering a topic of interest. A detailed description of each session is provided below.

Audience

Professors, students, independent researchers, and industrial partners are welcome to attend. To get the most out of the tutorial, attendees should have a background in engineering, computer science, mathematics, robotics, or a related field. An understanding of autonomous vehicle hardware or software is helpful but not required.

Schedule

June 2, 2024, Full Day, 1:00pm – 5:50pm Korean Time (all session times TBC)

1:00pm – 1:10pm Introduction: Autoware a platform for Autonomous Driving development

1:10pm – 1:55pm Session I: Autoware on Scaled Platforms

1:55pm – 2:40pm Session II: Autoware on F1Tenth Premium

2:40pm – 3:25pm Session III: Quality Assurance for Autonomous Driving Systems: A Software Engineering Perspective

3:25pm – 3:35pm Break

3:35pm – 4:20pm Session IV: Software-Defined Vehicle Implementation based on Autoware’s Open AD Kit on an Autonomous Developer Chassis

4:20pm – 5:05pm Session V: Nebula: The Open Source Universal Sensor Driver for Autoware

5:05pm – 5:50pm Session VI: Customized AI model development environment for Autoware platform

Sessions

Introduction

Ryohsuke Mitsudome
Tier IV Inc
To be updated

Session I: Autoware on Scaled Platforms

Rahul Mangharam, Po-Jen Wang
University of Pennsylvania, and Autoware Foundation

This tutorial is about running the Autoware software stack on scaled vehicle platforms, including the 1/10-scale racing car and the 1/2-scale autonomous go-kart.

F1Tenth: The speaker will give a general introduction to the hardware platform, then show how to install and run Autoware on the car. The tutorial will cover different Autoware modules such as control, planning, and perception, as a mix of videos and live demonstrations.

Go-kart: The speaker will first give an introduction to the go-kart hardware, including the autonomous driving mechanical parts, the different sensors involved (LiDAR, camera, GNSS, IMU), and the driving modes supported (manual, autonomous, remote control). Then, the speaker will demonstrate how to install and run Autoware on the vehicle platform (slides and videos only).

https://github.com/autowarefoundation/autoware.universe/tree/f1tenth_galactic/f1tenth

Session II: Autoware on F1Tenth Premium

Kanghee Kim, Jinseop Jeong, Min Kim, and Seryun Kang
School of Artificial Intelligence Convergence, Soongsil University, South Korea.

Autoware is an all-in-one open-source autonomous driving stack for full-scale vehicles. As of November 2023, it comprises more than 300 packages and more than 630,000 lines of code. Many researchers and developers are interested in learning and researching Autoware, but full-scale vehicles are not always a good candidate platform for those purposes because of their high complexity and cost. Digital-twin simulators are usually considered a good alternative, but they may produce unrealistic sensory data that would never be observed in the real world. As a handy vehicle platform for outdoor driving, we have developed a one-tenth-scale vehicle, called F1Tenth Premium, that can exploit the potential of Autoware to the fullest extent. F1Tenth Premium builds upon the plain F1Tenth, which is capable of driving at 70 km/h, and is equipped with high-resolution sensors used in full-scale vehicles, such as 3D LiDARs and automotive cameras. As its computing platform, it uses an NVIDIA Jetson AGX Orin board, powerful enough to run Autoware. In this tutorial, we will present a hands-on lab on how to build, install, and run Autoware on our F1Tenth Premium. We will also explain how to perform rosbag replay simulation using real-world PCD and OSM maps. Finally, we will discuss performance in terms of speed and battery run time, and lessons learned from outdoor driving experiences.
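
For readers who want to prepare for the rosbag replay portion of the lab, the following sketch shows one way to inspect a recorded bag with the standard rosbag2_py API before replaying it against Autoware. The bag path and the LiDAR topic name are placeholders, not the actual F1Tenth Premium recordings.

    # Minimal sketch: count messages per topic in a ROS 2 bag and
    # deserialize the LiDAR clouds. Paths and topics are placeholders.
    import rosbag2_py
    from rclpy.serialization import deserialize_message
    from rosidl_runtime_py.utilities import get_message

    reader = rosbag2_py.SequentialReader()
    reader.open(
        rosbag2_py.StorageOptions(uri='outdoor_run_01', storage_id='sqlite3'),
        rosbag2_py.ConverterOptions('', ''))

    # Map each topic to its message type so payloads can be deserialized.
    type_map = {t.name: t.type for t in reader.get_all_topics_and_types()}

    counts = {}
    while reader.has_next():
        topic, raw, stamp_ns = reader.read_next()
        counts[topic] = counts.get(topic, 0) + 1
        if topic == '/sensing/lidar/top/pointcloud_raw':  # assumed topic name
            cloud = deserialize_message(raw, get_message(type_map[topic]))

    for topic, n in sorted(counts.items()):
        print(f'{topic}: {n} messages')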

Session III: Quality Assurance for Autonomous Driving Systems: A Software Engineering Perspective

Lei Ma, Zhijie Wang, Jiayang Song, and Yuheng Huang
University of Tokyo

Quality assurance for Autonomous Driving Systems (ADS) has long been recognized as a notoriously challenging but crucial task. It requires substantial domain-specific knowledge and engineering effort to bridge the last gap in deploying state-of-the-art ADS methodologies to practical applications where safety, reliability, and security are at stake. In this tutorial, we provide a high-level overview of our work in advancing the quality assurance of ADS. The tutorial introduces solutions and frameworks that tackle the quality challenges of ADS from two aspects: 1) a complete quality analysis pipeline for AI components in ADS, from the unit level to the system level, and 2) a series of quality assurance frameworks for AI-enabled Cyber-Physical Systems (CPS) specialized for ADS. The first part presents our work on quality analysis of ADS, including robustness benchmarking of AI-enabled sensor fusion systems, testing of simulation-based ADS, risk assessment based on data distribution and uncertainty, and repair methods for AI components. The second part summarizes our work on trustworthy ADS from the CPS perspective, including a CPS benchmark, an ensemble method for AI-controller fusion, AI-aware testing methods, and LLM-enabled approaches for the planning and design of AI components. The third part introduces recent advances in applying LLMs to autonomous driving, including LLM-centric decision-making that uses language as an interface, and opportunities in applying LLMs to cross-modal test generation.
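
As a concrete flavor of the robustness benchmarking described in the first part, a minimal metamorphic-style check might perturb a sensor input and measure how stable the model's detections remain. The sketch below is a generic illustration of that idea; the model callable, noise level, and thresholds are stand-ins, not the presenters' actual tooling.

    # Toy metamorphic robustness check: a detector's output should stay
    # stable under small input perturbations. The `model` callable and the
    # thresholds are stand-ins, not the presenters' actual tooling.
    import numpy as np

    def iou(a, b):
        """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union > 0 else 0.0

    def robustness_score(model, image, sigma=2.0, trials=10, iou_floor=0.7):
        """Fraction of noisy trials whose detections match the clean run."""
        baseline = model(image)  # e.g. a list of (x1, y1, x2, y2) boxes
        stable = 0
        for _ in range(trials):
            noisy = np.clip(image + np.random.normal(0.0, sigma, image.shape),
                            0, 255).astype(image.dtype)
            detections = model(noisy)
            # Consistent if every clean-run box has a high-IoU match.
            if all(any(iou(b, d) >= iou_floor for d in detections)
                   for b in baseline):
                stable += 1
        return stable / trials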

Session IV: Software-Defined Vehicle Implementation based on Autoware’s Open AD Kit on an Autonomous Developer Chassis

Alexander Carballo
Gifu University

Software-Defined Vehicles (SDVs) have been identified as the next step in the automotive industry's evolution, from electromechanical systems towards intelligent, connected, expandable, software-centric mobile systems that can be continuously updated and upgraded. Vehicles are already a collection of software-embedded subsystems: driver-assistance functions such as adaptive cruise control, anti-lock braking, airbags, and collision detection; user experience (UX) subsystems such as infotainment; and hardware subsystems such as electronic control units (ECUs). Compared to these subsystems, ADAS and AD systems exhibit limited modularity and functional decomposition, so their adoption in SDVs as embedded subsystems, each running a real-time operating system, has been limited. Building on Arm's Scalable Open Architecture For Embedded Edge (SOAFEE) vision for enabling SDVs, our ongoing efforts aim to transform Autoware into smaller functional units packaged as containers for SDVs. The Open AD Kit lowers the threshold for developing and deploying the Autoware software stack by providing containerized workloads that run on heterogeneous hardware architectures with good hardware abstraction, which effectively means the Autoware software can run in the cloud, at the edge, or on virtual hardware. In this tutorial, we will present our efforts regarding the Autoware Foundation's Open AD Kit, based on SOAFEE, and discuss our recent demonstration of the Open AD Kit on an actual autonomous vehicle developer platform from PIX Moving.
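
To make the containerized-workload idea concrete, here is a minimal sketch that launches one Autoware functional unit as a container using the Docker SDK for Python. The image name, launch command, mounts, and environment below are placeholders for illustration, not official Open AD Kit artifacts.

    # Minimal sketch: run one Autoware functional unit as a container, in
    # the spirit of the Open AD Kit. The image, command, mounts, and
    # settings below are placeholders, not official Open AD Kit artifacts.
    import docker

    client = docker.from_env()
    container = client.containers.run(
        image='ghcr.io/example/autoware-planning:latest',  # placeholder
        command='ros2 launch planning_launch planning.launch.xml',
        detach=True,
        network_mode='host',  # share DDS discovery with the other units
        volumes={'/opt/maps': {'bind': '/maps', 'mode': 'ro'}},
        environment={'ROS_DOMAIN_ID': '42'},
    )
    print(container.short_id)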

Session V: Nebula: The Open Source Universal Sensor Driver for Autoware

David Robert Wong and Maximilian Schmeller
TIER IV, Inc.

Autoware is a software platform that has lowered the barrier to entry for autonomous driving development. However, deployment on real vehicles always starts with sensing, and the wide variety of available sensors, each with proprietary or independent drivers, can make integration into Autoware difficult. In this tutorial, we will present our sensor driver solution, Nebula. Various LiDAR, radar, and camera solutions are already supported, and the architecture is designed to allow easy integration of new sensors. We will cover the following topics. Nebula introduction: the background and motivation for Nebula, as well as the goals and non-goals of the project. How to use it in Autoware: examples of how to integrate Nebula-supported sensors into your Autoware design. Adding new sensor support: a more technical deep dive into the Nebula software design, and a description of the workflow users can follow to integrate a new sensor type or model.
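
One consequence of Nebula's design is worth illustrating: because supported sensors end up publishing standard ROS 2 types such as sensor_msgs/PointCloud2, downstream code stays sensor-agnostic. The sketch below is a minimal consumer under that assumption; the topic name is ours, not Nebula's documented default.

    # Minimal sketch: consume point clouds from a Nebula-driven LiDAR.
    # Nebula publishes standard sensor_msgs/PointCloud2, so this consumer
    # is sensor-agnostic. The topic name below is an assumption.
    import rclpy
    from rclpy.node import Node
    from sensor_msgs.msg import PointCloud2

    class CloudMonitor(Node):
        def __init__(self):
            super().__init__('cloud_monitor')
            self.create_subscription(
                PointCloud2, '/sensing/lidar/top/pointcloud_raw',
                self.on_cloud, 10)

        def on_cloud(self, msg):
            # width * height gives the number of points in the cloud.
            self.get_logger().info(
                f'{msg.width * msg.height} points in {msg.header.frame_id}')

    def main():
        rclpy.init()
        rclpy.spin(CloudMonitor())

    if __name__ == '__main__':
        main()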

Session VI: Customized AI model development environment for Autoware platform

Kwon Soon
FutureDrive Inc. and Daegu Gyeongbuk Institute of Science and Technology (DGIST)

Autoware is a well-structured, open-source, full-stack software platform for autonomous driving mobility. Using the open-source Autoware platform, we can analyze and simulate each functional block, such as perception, decision, planning, and control, in detail; through parameter-level optimization, we can also apply it to demonstrate autonomous driving on an actual vehicle without difficulty. In particular, Autoware provides a detailed perception pipeline for 3D object detection, tracking, and motion prediction. Based on our experience with the Autoware platform, we believe Autoware users are very interested in customizing the perception blocks that directly affect autonomous driving performance. In this tutorial, we will introduce perception-specific AI datasets, evaluation benchmarks, and the AI model integration process for the Autoware platform. Our goal is to provide users with train/validation datasets for different types of sensor systems and to let them objectively evaluate the performance of AI models they design themselves through the benchmark. Additionally, users can easily integrate customized AI models into the Autoware platform and test them on a vehicle using our Autoware AI configuration launcher (a cloud solution provided by FutureDrive).
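
To give a sense of what such an evaluation benchmark measures, the sketch below scores a set of detections against ground truth by matching on bird's-eye-view center distance (as in nuScenes-style evaluation) and reporting precision and recall. The data format is invented for illustration and is not FutureDrive's actual benchmark interface.

    # Toy sketch of a detection benchmark metric: match predictions to
    # ground truth by BEV center distance and report precision/recall.
    # The data format is invented; it is not FutureDrive's interface.
    import math

    def evaluate(predictions, ground_truth, max_dist=2.0):
        """Each item is a dict with 'x', 'y', 'label' (and 'score' on preds)."""
        matched, tp = set(), 0
        for pred in sorted(predictions, key=lambda p: -p.get('score', 0.0)):
            best, best_dist = None, max_dist
            for i, gt in enumerate(ground_truth):
                if i in matched or gt['label'] != pred['label']:
                    continue
                dist = math.hypot(pred['x'] - gt['x'], pred['y'] - gt['y'])
                if dist < best_dist:
                    best, best_dist = i, dist
            if best is not None:
                matched.add(best)
                tp += 1
        precision = tp / len(predictions) if predictions else 0.0
        recall = tp / len(ground_truth) if ground_truth else 0.0
        return precision, recall

    preds = [{'x': 1.0, 'y': 2.1, 'label': 'car', 'score': 0.9}]
    truth = [{'x': 1.2, 'y': 2.0, 'label': 'car'}]
    print(evaluate(preds, truth))  # -> (1.0, 1.0)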

F1Tenth Korea Delegation Visited UPenn CoE https://autoware.org/f1tenth-korea-delegation-at-upenn/ Thu, 15 Feb 2024 14:12:05 +0000

The F1Tenth Global Camp was supported by LINC 3.0, a South Korean government project. LINC 3.0 is a university support project involving 76 universities in South Korea. In 2023, eight of these universities participated in the F1Tenth autonomous driving education and competition program, and the Global Camp at the University of Pennsylvania was attended by 17 undergraduate and graduate students from four universities. 

For the 2024 F1Tenth Global Camp at Penn, four professors from four Korean universities attended with their students: Prof. Changjun Seo from Inje University, Prof. Yeongkwon Choe from Kangwon National University, Prof. Duckki Lee from Yonam Institute of Technology, and Prof. Jin Hyun Kim from Gyeongsang National University (GNU).

Dr. Changjun Seo is a professor in the Department of Electronic, Telecommunications, Mechanical, and Automotive Engineering at Inje University. He said: “This camp allowed students to effectively learn advanced technologies that can be used in the F1Tenth autonomous driving race through theory and practical training. As a participating professor, I was able to experience a good example of how F1Tenth autonomous driving training is conducted, and our Inje University has paved the way for the launch of a global educational cooperation program with UPenn, a world-class university. I would like to thank the professor for providing a well-structured training program with substantial and informative content, and the TAs for their sincere and kind guidance.”

Dr. Yeongkwon Choe, a Ph.D. in Aerospace Engineering, is currently an assistant professor in the Department of Mechatronics Engineering at Kangwon National University. His research interests include the database-referenced navigation system, SLAM, and nonlinear Bayesian filtering algorithms.  He said, “This program covers the overall topics of F1TENTH with the support of professional TAs and a solid curriculum. Students from Kangwon National University (KNU) who participated in this program were able to gain practical knowledge and broader insights into various topics. For the participating faculty, the excellent education program from UPenn served as a valuable reference in establishing KNU’s own program.”

Dr. Duckki Lee is the Dean of the Department of Software Engineering and the Chair of the Smart Software Department at Yonam Institute of Technology. He had this to say about the Global Camp: “Through our participation in this prestigious global camp, students from our institute had the unique chance to immerse themselves in both theoretical and practical aspects of F1Tenth-scale autonomous driving, a field in which UPenn is a globally recognized leader. My involvement as a professor in this program was extraordinarily enriching, providing me with invaluable insights into teaching methodologies for F1Tenth-scale autonomous driving technology and fostering innovation among students. I extend my heartfelt thanks to Professor Rahul for his visionary leadership in creating such a remarkable global camp and to the teaching assistants whose dedication and enthusiasm were pivotal in its successful execution.”

Dr. Jin Hyun Kim is a former postdoctoral fellow at the University of Pennsylvania and is currently an associate professor at Gyeongsang National University (GNU). He is in charge of the F1Tenth Korea program and this global camp, and he is currently working with Prof. Rahul to establish the F1Tenth Korea Foundation. Commenting on the global camp, Dr. Kim said, “The Korea F1Tenth education/competition program consists of a boot camp for beginners, technology exchange workshops, the F1Tenth Korea Championship, and a global camp, designed to help students acquire basic to advanced skills in autonomous driving based on the F1Tenth platform over the course of a year. The Global Camp, held at the University of Pennsylvania, was the final stage of the education and competition program, allowing students to gain a deeper understanding, experience, and insight into autonomous driving. We look forward to working closely with the University of Pennsylvania and xLab to continue this program in the future.”

Autoware Centers of Excellence Updates November 2023 https://autoware.org/autoware-centers-of-excellence-updates-november-2023/ Wed, 06 Dec 2023 02:43:18 +0000

Autoware Centers of Excellence Updates November 2023

Welcome back to the Autoware Centers of Excellence Updates. This November 2023 issue is the second edition of the CoE newsletter.

The Autoware Centers of Excellence are bustling with activity. Here are some of the highlights from last month.


The University of Macau joins the Autoware Centers of Excellence network!

We welcomed Prof. Cheng-Zhong Xu from the University of Macau as a new addition to the Autoware CoE directors. Dr. Xu’s research interests lie in parallel and distributed computing and cloud computing. He is a Chief Scientist of the Key Project on Smart City of MOST, China, and a Principal Investigator of the Key Project on Autonomous Driving of FDCT, Macau SAR.

Visit Dr. Xu’s webpage for more information.

Additionally, Dr. Xu introduced his work to the CoE members on the monthly CoE call, and it has already sparked interest in collaboration with other academic members of the Autoware CoEs.


Arizona State University joins the Autoware Centers of Excellence network!

We welcomed Prof. Junfeng Zhao from Arizona State University as a new addition to the Autoware CoE directors. Dr. Zhao’s research interests include connected and automated vehicles (CAV), CAV simulation and system integration, motion planning and controls, electrified propulsion system controls, and intelligent transportation systems.

Visit Dr. Zhao’s webpage for more information.

Dr. Zhao also introduced the BELIV (Battery Electric & Intelligent Vehicle) lab he founded at Arizona State University. His team operates an autonomous research vehicle and has already begun adapting its drive-by-wire systems for use with Autoware.


Introduction to AutoDRIVE, a simulator for autonomy research and education

Chinmay Samak and Tanmay Samak from Clemson University presented their work on AutoDRIVE, a simulator for autonomy research and education. The simulator has been used for developing autonomous parking, behavioral cloning, and multi-agent planning.

We have mentioned their work before on the Autoware Foundation LinkedIn feed, but the tinkering twins continue to push the boundaries of simulation. Their work is backed by many academic publications and competition results, and they continue to improve their simulation platform.

Here are some of the must-reads regarding the AutoDRIVE platform:

1. Announcement of the AutoDRIVE platform

2. AutoDRIVE for the F1Tenth simulator

And don’t forget to watch the AutoDRIVE pitch video!


Autoware has been invited to organize a track at the IEEE Serious Open Source Summit

An exciting event is coming up!

To be held on February 20-21, 2024, at the Computer History Museum in Mountain View, California, the IEEE Serious Open Source Summit brings together industry leaders, practitioners, and visionaries who are leveraging open-source technology to drive digital transformation in ClimateTech, Telecommunications, Manufacturing, Healthcare and Life Sciences, Finance, CivicTech, the Metaverse and other verticals.

Hosted by the IEEE Standards Association (IEEE SA), the event will showcase open source that forms the foundation of business operations, data science, AI, IT, sustainability, and IoT efforts. It will also provide an opportunity to learn about the IEEE SA Open collaborative ecosystem, which is designed to bridge the gap between open-source communities and industry applications through a growing set of projects, standards, and initiatives.

The Autoware Foundation was invited to organize a track at this wonderful event, and accordingly, we are working on representing Autoware’s work under the motto “running a commercially successful business using open source.” The track will feature invited speakers, presentations, and a panel discussion.

For more information about the event, visit the event website.


Kicking off the Autonomous Logistics business case exploration work by The Wharton School and the Autoware Foundation!

As mentioned earlier on our social media, the Autoware Foundation has been working with The Wharton School on several special projects (we will soon publish a recap of those activities). One of them is an exploration of autonomous logistics business cases, in which the team will focus on autonomous material transportation for indoor/outdoor intralogistics in large production and manufacturing facilities.

Some of the deliverables will be:

  1. Customer discovery in the US
  2. Market development and market-entry strategy
  3. Pilot customer deployments in large industrial sites

Additionally, this activity builds on the good work done during the Cargo Delivery ODD project, and it clearly shows both the interest in, and a path towards, commercializing solutions built on open-source technologies.


Before closing…

The Autoware Centers of Excellence network is bustling with activity: research, development, and deployment alike!

You don’t want to miss the third edition of the newsletter. Don’t forget to subscribe, like and share!

See you next time!
