Technical Blogs Archives - Autoware
https://autoware.org/category/technical-blogs/

A Tale of Two Open-Source Ecosystems: Scaling Autonomy with AutoDRIVE & Autoware
https://autoware.org/scaling-autonomy-with-autodrive-autoware/ (2 September 2025)

Developing and testing autonomous vehicle technologies often involves working across a wide range of platform sizes — from miniature testbeds to full-scale vehicles — each chosen based on space, safety, and budget considerations. However, this diversity introduces significant challenges when it comes to deploying and validating autonomy algorithms. Differences in vehicle dynamics, sensor configurations, computing resources, and environmental conditions, along with regulatory and scalability concerns, make the process complex and fragmented. To address these issues, we introduce the AutoDRIVE Ecosystem — a unified framework designed to model and simulate digital twins of autonomous vehicles across different scales and operational design domains (ODDs). In this blog, we explore how the AutoDRIVE Ecosystem leverages autonomy-oriented digital twins to deploy the Autoware software stack on various vehicle platforms to achieve ODD-specific tasks. We also highlight its flexibility in supporting virtual, hybrid, and real-world testing paradigms — enabling a seamless simulation-to-reality (sim2real) transition of autonomous driving software.

The Vision

As autonomous vehicle systems grow in complexity, simulation has become essential for bridging the gap between conceptual design and real-world deployment. Yet, creating simulations that accurately reflect realistic vehicle dynamics, sensor characteristics, and environmental conditions — while also enabling real-time interactivity — remains a major challenge. Traditional simulations often fall short in supporting these “autonomy-oriented” demands, where back-end physics and front-end graphics must be balanced with equal fidelity.

To truly enable simulation-driven design, testing, and validation of autonomous systems, we envision a shift from static, fixed-parameter virtual models to dynamic and adaptive digital twins. These autonomy-oriented digital twins capture the full system-of-systems-level interactions — including vehicles, sensors, actuators, infrastructure and environment — while offering seamless integration with autonomy software stacks.

This blog presents our approach to building such digital twins across different vehicle scales, using a unified real2sim2real workflow to support robust development and deployment of the Autoware stack. Our goal is to close the loop between simulation and reality, enabling smarter, faster, and more scalable autonomy developments.

Digital Twins

To demonstrate our framework across different operational scales, we worked with a diverse fleet of autonomous vehicles — from small-scale experimental platforms to full-sized commercial vehicles. These included Nigel (1:14 scale), RoboRacer (1:10 scale), Hunter SE (1:5 scale), and OpenCAV (1:1 scale).

Each platform was equipped with sensors tailored to its size and function. Smaller vehicles like Nigel and RoboRacer featured hobby-grade sensors such as encoders, IMUs, RGB/D cameras, 2D LiDARs, and indoor positioning systems (IPS). Larger platforms, such as Hunter SE and OpenCAV, were retrofitted with different variants of 3D LiDARs and other industry-grade sensors. Actuation setups also varied by scale. While the smaller platforms relied on basic throttle and steering actuators, OpenCAV included a full powertrain model with detailed control over throttle, brakes, steering, and handbrakes — mirroring real-world vehicle commands.

For digital twinning, we adopted the AutoDRIVE Simulator, a high-fidelity platform built for autonomy-centric applications. Each digital twin was calibrated to match its physical counterpart in terms of its perception characteristics as well as system dynamics, ensuring a reliable real2sim transfer.

Autoware API

The core API development and integration with the Autoware stack for all virtual and real vehicles was accomplished using the AutoDRIVE Devkit. Specifically, AutoDRIVE's Autoware API builds on top of its ROS 2 API, which is streamlined to work with the Autoware Core/Universe stack. It is fully compatible with both the AutoDRIVE Simulator and the AutoDRIVE Testbed, ensuring a seamless sim2real transfer without changing any perception, planning, or control algorithms or parameters.

The exact inputs, outputs, and configurations of the perception, planning, and control modules vary with the underlying vehicle platform. To keep the overall project clean and well-organized, a set of custom meta-packages was therefore developed within the Autoware stack, handling the different perception, planning, and control algorithms, with their different inputs and outputs, as independent individual packages. Additionally, a separate meta-package was created to handle the different vehicles, namely Nigel, RoboRacer, Hunter SE, and OpenCAV. The package for each vehicle hosts vehicle-specific parameter configurations for the perception, planning, and control algorithms, environment maps, RViz configurations, API scripts, teleoperation programs, and convenient launch files for getting started quickly and easily.
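To give a flavor of what such a vehicle-specific API node might look like, here is a minimal ROS 2 sketch that subscribes to a simulated sensor stream and publishes actuation commands. The topic names and message types are illustrative assumptions, not the actual AutoDRIVE-Autoware interface.

import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Imu
from geometry_msgs.msg import Twist

class VehicleAPINode(Node):
    """Minimal bridge between a simulated vehicle and the autonomy stack.
    Topic names below are placeholders, not the real AutoDRIVE interface."""
    def __init__(self):
        super().__init__('vehicle_api')
        self.create_subscription(Imu, '/autodrive/nigel_1/imu', self.on_imu, 10)
        self.cmd_pub = self.create_publisher(Twist, '/autodrive/nigel_1/cmd_vel', 10)

    def on_imu(self, msg: Imu):
        # A real API node would forward sensor data into Autoware's
        # localization/perception inputs; here we just echo a command.
        cmd = Twist()
        cmd.linear.x = 1.0  # constant creep speed, purely for illustration
        self.cmd_pub.publish(cmd)

def main():
    rclpy.init()
    rclpy.spin(VehicleAPINode())
    rclpy.shutdown()

if __name__ == '__main__':
    main()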

Applications and Use Cases

The following is a brief summary of the potential applications and use cases, which align well with the different ODDs proposed by the Autoware Foundation:

  • Autonomous Valet Parking (AVP): Mapping a parking lot, localizing within the created map, and driving autonomously within the parking lot.
  • Cargo Delivery: Autonomous mobile robots for the transport of goods between multiple points or last-mile delivery.
  • Racing: Autonomous racing using small-scale (e.g. RoboRacer) and full-scale (e.g. Indy Autonomous Challenge) vehicles running the Autoware stack.
  • Robo-Bus/Shuttle: Fully autonomous (Level 4) buses and shuttles operating on public roads with predefined routes and stops.
  • Robo-Taxi: Fully autonomous (Level 4) taxis operating in dense urban environments, picking up and dropping off passengers from point A to point B.
  • Off-Road Exploration: The Autoware Foundation has recently introduced an off-road ODD. Such off-road deployments could be applied for agricultural, military or extra-terrestrial applications.

Getting Started

You can get started with AutoDRIVE and Autoware today! Here are a few useful resources to take that first step towards immersing yourself within the Autoware Universe:

  • GitHub Repository: This repository is a fork of the upstream Autoware Universe repository, which contains the AutoDRIVE-Autoware integration APIs and demos.
  • Documentation: This documentation provides detailed steps for installation as well as setting up the turn-key demos.
  • YouTube Playlist: This playlist contains videos right from the installation tutorial all the way up to various turn-key demos.
  • Research Paper: This paper can help provide a scientific viewpoint on why and how the AutoDRIVE-Autoware integration is useful.

What’s Next?

PIXKIT 2.0 Digital Twin in AutoDRIVE

We are working on digitally twinning more Autoware-supported platforms (e.g., PIXKIT) using the AutoDRIVE Ecosystem, thereby expanding its coverage. We hope that this will lower the barrier to entry for students and researchers who are getting started with the Autoware stack itself, or with the different Autoware-enabled autonomous vehicles.

Advancing Software Testing for Autonomous Driving Systems: A Year of Collaboration and Contribution at UCI
https://autoware.org/advancing-software-testing-for-autonomous-driving-systems/ (27 August 2025)

Over the past year, researchers from the University of California, Irvine (UCI) — including Professors Joshua Garcia and Qi Alfred Chen, along with graduate students Yuqi Huai, Yuntianyi Chen, Chi Zhang, and Xiang Liao — have made significant contributions to the advancement and evaluation of autonomous driving systems through a collaborative effort between the Software Aurora Lab (SORA) and the AS²Guard Research Group. With a particular focus on scenario generation and software testing, their work spans academic research, tool development, and active participation in open-source communities such as the Autoware Foundation. Their efforts reflect a broader goal: improving the safety, reliability, and transparency of autonomous driving systems through rigorous engineering practices and collaborative engagement.

One of the key events this year was a local workshop hosted at UCI in March 2025. The workshop brought together researchers from multiple institutions with the goal of eliciting requirements for a shared, cloud-based research infrastructure to support the development and testing of autonomous driving systems. Rather than focusing on a particular software stack, the workshop centered on identifying technical, logistical, and collaborative needs that such an infrastructure must address. Participants shared perspectives on scenario generation, simulation at scale, data management, and tool interoperability—laying the groundwork for a future platform that could support reproducible, cross-institutional research in autonomous driving systems.

Complementing this effort was UCI’s organization of the SE4ADS (Software Engineering for Autonomous Driving Systems) workshop at ICSE 2025. SE4ADS serves as a growing forum for advancing software engineering research tailored to the needs of autonomous driving systems. The 2025 edition featured work on simulation-based testing, requirements integration, and safety certification. Discussions also addressed broader concerns around responsible software practices and long-term maintainability, particularly in the context of open-source autonomous systems such as Autoware. The workshop underscored a shared commitment to developing engineering foundations that can support the unique complexity and risk profile of autonomous software.

With these community needs in mind, Yuqi is now leading the design and development of a Cloud-based Autonomous Driving Systems Research Infrastructure (CADRI). Building on insights from both the UCI-hosted workshop and the SE4ADS forum, this effort aims to create a scalable, interoperable platform that supports reproducible experimentation. A key advantage of the cloud-based approach is its ability to significantly reduce upfront costs, allowing researchers to perform large-scale testing and development without substantial investment in specialized hardware. This initiative builds on Yuqi's earlier work in scenario-based testing, including DoppelTest [1] and scenoRITA [2], two frameworks for generating scenario-based tests. He also maintains a key dependency for the SVL simulator [3], helping ensure that the simulation tool remains accessible to the research community. In parallel, Xiang has been working on migrating tools originally developed for other ADS platforms onto Autoware, thereby broadening tool compatibility and reinforcing Autoware's role as an open-source foundation for reproducible research.

Yuntianyi's latest research centers on Autoware: he led the development of ConfVE [4], a tool designed to identify failures in autonomous driving systems that arise from alternative configurations. Misconfiguration is a known risk factor in real-world deployments, and ConfVE aims to prevent such issues by identifying inconsistent or unsafe parameter combinations early in the development cycle. As part of this approach, Yuntianyi also developed a Scenario Record Analyzer, an automated tool capable of detecting nine distinct types of violations in Autoware driving scenario records, providing a robust mechanism for validating ADS behavior against safety and performance requirements. This work leveraged HD map and scenario data from Autoware Evaluator, obtained through a collaboration with the Autoware Operational Design Domain (ODD) Working Group. The partnership provided access to realistic, systematically generated test scenarios that reflect the ODD characteristics of Autoware's target deployment environments, enabling ConfVE and the Scenario Record Analyzer to be validated under conditions closely resembling real-world usage. More recently, Yuntianyi presented A Comprehensive Study of Bug-Fix Patterns in Autonomous Driving Systems [5] at FSE 2025. This large-scale analysis examined over 1,300 real-world bug-fix instances from two leading open-source platforms (i.e., Autoware and Apollo) and introduced a taxonomy encompassing 15 syntactic and 27 semantic bug-fix patterns, capturing both code-level changes (e.g., conditional modifications, data structure corrections) and domain-specific modifications (e.g., path planning optimization, module integration and interaction). Yuntianyi's work on ConfVE and the bug-fix pattern benchmark also contributes to the CADRI project, where he serves as a project leader. His contributions provide foundational components for the Toolkit Service, enrich the ADS analytics oracles, and supply a curated dataset repository, thereby strengthening CADRI's capability to support comprehensive analysis, testing, and improvement of autonomous driving systems.
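To give a rough flavor of what configuration-level checking involves (this is a toy illustration, not ConfVE's actual implementation; the parameter names and bounds are invented), consider a validator that flags out-of-range and mutually inconsistent parameter values:

# Toy configuration validator; parameter names and bounds are invented
# for illustration and do not reflect ConfVE's actual rules.
SAFE_RANGES = {
    'max_velocity': (0.0, 16.7),       # m/s
    'min_stop_distance': (2.0, 30.0),  # m
}

def check_config(params: dict) -> list:
    """Return a list of human-readable violations found in `params`."""
    violations = []
    for name, (lo, hi) in SAFE_RANGES.items():
        if name in params and not lo <= params[name] <= hi:
            violations.append(f'{name}={params[name]} outside [{lo}, {hi}]')
    # Cross-parameter rule: high speed limits demand longer stop margins.
    if params.get('max_velocity', 0.0) > 10.0 and \
       params.get('min_stop_distance', float('inf')) < 10.0:
        violations.append('high max_velocity combined with short min_stop_distance')
    return violations

print(check_config({'max_velocity': 20.0, 'min_stop_distance': 5.0}))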

Besides the research, Yuntianyi and Yuqi contributed to the DevOps Dojo project within the Autoware OpenADKit Working Group. As part of this effort, they refactored approximately 15% of the total Autoware ROS nodes, enhancing maintainability and consistency in the codebase. Yuntianyi also developed an automated configuration refactoring tool for Autoware ROS nodes, enabling developers to standardize and update configurations more efficiently. This tool has accelerated the development workflow, reduced manual intervention, and improved configuration reliability across the Autoware ecosystem.

While Josh approaches autonomous driving from a software engineering perspective, focusing on faults that affect system reliability and correctness, Alfred brings a security lens to the field, concentrating on vulnerabilities in autonomous vehicles. Specifically, Alfred's team focused on evaluating the robustness of autonomous driving systems by leveraging component-level vulnerabilities through methods such as adversarial scenarios, patches, and objects. Their efforts have contributed to the Platform for Auto-driving Safety and Security (PASS) [6], a modular and extensible simulation-based framework for evaluating the system-level effectiveness of existing attacks and defenses across different autonomous driving models. Building on top of PASS, Chi has recently been designing and developing an adversarial scenario generation framework for Autoware using the CARLA simulation environment.

With the growing complexity and expanding deployment ambitions, the need for rigorous, collaborative, and scalable engineering practices in autonomous driving systems has never been more urgent. Josh and Alfred’s teams are helping to meet this need by integrating empirical insights with tool development, infrastructure planning, and community engagement. Their work, ranging from scenario-based test generation to large-scale bug fix analyses, demonstrates how software engineering research can directly contribute to the development of safer and more reliable autonomous systems. Through close collaboration with the Autoware Foundation and a commitment to open, reproducible experimentation via efforts like CADRI, they are contributing essential building blocks for a more robust, transparent, and evidence-driven research ecosystem in autonomous driving systems.

References

[1] Yuqi Huai, Yuntianyi Chen, Sumaya Almanee, Tuan Ngo, Xiang Liao, Ziwen Wan, Qi Alfred Chen, and Joshua Garcia. 2023. Doppelgänger Test Generation for Revealing Bugs in Autonomous Driving Software. In Proceedings of the 45th International Conference on Software Engineering (ICSE ’23). IEEE Press, 2591–2603. https://doi.org/10.1109/ICSE48619.2023.00216

[2] Yuqi Huai, Sumaya Almanee, Yuntianyi Chen, Xiafa Wu, Qi Alfred Chen, and Joshua Garcia, “scenoRITA: Generating Diverse, Fully Mutable, Test Scenarios for Autonomous Vehicle Planning,” in IEEE Transactions on Software Engineering, vol. 49, no. 10, pp. 4656-4676, 1 Oct. 2023, doi: 10.1109/TSE.2023.3309610.

[3] Yuqi Huai. 2023. SORA SVL Server. Available at: https://github.com/YuqiHuai/SORA-SVL (accessed 30 June 2025).

[4] Yuntianyi Chen, Yuqi Huai, Shilong Li, Changnam Hong, and Joshua Garcia. 2024. Misconfiguration Software Testing for Failure Emergence in Autonomous Driving Systems. Proc. ACM Softw. Eng. 1, FSE, Article 85 (July 2024), 24 pages. https://doi.org/10.1145/3660792

[5] Yuntianyi Chen, Yuqi Huai, Yirui He, Shilong Li, Changnam Hong, Qi Alfred Chen, and Joshua Garcia. 2025. A Comprehensive Study of Bug-Fix Patterns in Autonomous Driving Systems. Proc. ACM Softw. Eng. 2, FSE, Article FSE018 (July 2025), 23 pages. https://doi.org/10.1145/3715733

[6] Zhisheng Hu, Junjie Shen, Shengjian Guo, Xinyang Zhang, Zhenyu Zhong, Qi Alfred Chen, and Kang Li. PASS: A System-Driven Evaluation Platform for Autonomous Driving Safety and Security. In NDSS Workshop on Automotive and Autonomous Vehicle Security (AutoSec). Available at: https://par.nsf.gov/biblio/10359464.

Scalable Tire Dynamics Modelling and Learning-Based Control for High-Speed Autonomy
https://autoware.org/scalable-tire-dynamics-modelling-and-learning-based-control-for-high-speed-autonomy/ (20 August 2025)

As part of the Autoware Foundation Centre of Excellence, our team at RMIT University is developing a new control capability for high-speed autonomous vehicles by integrating real-time tire modelling with advanced learning-based trajectory optimization. The core innovations—a cornering stiffness estimation module and a Safe Information-Theoretic MPC (SIT-LMPC) controller—will be tested on a 1:7 scale autonomous car with realistic tires, capable of performing aggressive manoeuvres.

This work expands the boundaries of what Autoware can do, enabling deployment in extreme scenarios—from autonomous racing to off-road or even extraterrestrial exploration.


Why Cornering Stiffness Matters

Tire cornering stiffness (𝐶α) is a critical parameter in lateral vehicle dynamics. It relates lateral force (𝐹y) to the slip angle (α) and underpins every aspect of handling and stability control.

We define the lateral forces at the front and rear axles as:
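A standard linear-tire form, reconstructed here from the definitions listed below rather than taken verbatim from the original, is:

F_{y,f} = C_{\alpha f} \, \alpha_f, \qquad F_{y,r} = C_{\alpha r} \, \alpha_r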

Assuming identical tires and steady-state conditions:
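Under these assumptions the axle stiffnesses collapse to a single value, and the lateral force splits across the axles in proportion to the static loads (a standard reconstruction; a_y, the lateral acceleration, is a symbol introduced here):

C_{\alpha f} = C_{\alpha r} = C_\alpha, \qquad F_{y,f} = \frac{m \, a_2}{l} \, a_y, \qquad F_{y,r} = \frac{m \, a_1}{l} \, a_y

with static vertical loads F_{z,f} = m g a_2 / l and F_{z,r} = m g a_1 / l.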

Where:

  • m is the vehicle mass
  • l = a1 + a2 is the wheelbase
  • a1 and a2 are distances from the CoG to front/rear axles
  • αf and αr are slip angles for front/rear tires
  • g is gravitational acceleration

From these, the total lateral force and yaw moment are:
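In the standard small-steering-angle form (reconstructed under that assumption):

F_y = F_{y,f} + F_{y,r}, \qquad M_z = a_1 \, F_{y,f} - a_2 \, F_{y,r}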

These feed into the planar dynamics:
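In the standard planar bicycle-model form (v_x and v_y are body-frame velocities, r the yaw rate, and I_z the yaw moment of inertia; these symbols are introduced here):

m \, (\dot{v}_y + v_x \, r) = F_y, \qquad I_z \, \dot{r} = M_z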

This model reveals how changes in 𝐶α—due to tire wear, surface change, or temperature—directly impact stability and control. Estimating it online gives us a safer, smarter vehicle.
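As a concrete illustration of what online estimation can look like (a sketch, not the project's actual estimator; all values are arbitrary), a scalar recursive least-squares filter can track C_alpha from measured lateral force and slip angle:

class CorneringStiffnessRLS:
    """Scalar recursive least squares for F_y = C_alpha * alpha.
    Illustrative sketch only; initial values are arbitrary."""
    def __init__(self, c0=50_000.0, p0=1e6, forgetting=0.99):
        self.c = c0            # stiffness estimate [N/rad]
        self.p = p0            # estimate covariance
        self.lmb = forgetting  # forgetting factor tracks wear/temperature drift

    def update(self, alpha, f_y):
        if abs(alpha) < 1e-3:  # near-zero slip carries no information
            return self.c
        k = self.p * alpha / (self.lmb + alpha * self.p * alpha)
        self.c += k * (f_y - self.c * alpha)
        self.p = (self.p - k * alpha * self.p) / self.lmb
        return self.c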


Scalable Implementation on a 1:7 Autonomous Vehicle

To validate this architecture in a controlled, cost-effective way, we are developing a 1:7 scale autonomous race vehicle with:

  • Real pneumatic tires
  • High-fidelity sensors and onboard computer (Jetson Orin)
  • Autoware software stack with our added modules

This mini-vehicle will operate at speeds and accelerations sufficient to invoke measurable slip, making it ideal for identifying tire parameters and validating SIT-LMPC in realistic conditions.


SIT-LMPC: Learning to Control Aggressively but Safely

Autonomous vehicles often operate in uncertain environments. We address this with Safe Information-Theoretic Learning MPC (SIT-LMPC), which blends:

  • Sampling-based MPC (MPPI) for stochastic optimization
  • Normalizing flows to learn the cost-to-go function across iterations
  • Constraint-safe learning using adaptive penalty methods

Mathematically, we cast the infinite-horizon stochastic control problem as:
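A generic form consistent with this description (c is the stage cost, f the stochastic dynamics with disturbance w_t, and X and U the state and input constraint sets; this notation is introduced here) is:

\min_{\pi} \; \mathbb{E}\left[\sum_{t=0}^{\infty} c(x_t, u_t)\right] \quad \text{s.t.} \quad x_{t+1} = f(x_t, u_t, w_t), \quad x_t \in \mathcal{X}, \quad u_t \in \mathcal{U}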

The SIT-LMPC approach iteratively builds safe sets Sℓ and learns V(x), the expected cost-to-go, using previously feasible trajectories. These are incorporated into a constrained MPC formulation solved by optimizing:
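A reconstructed sketch of the penalized finite-horizon objective (rho_X and rho_S denote the adaptive penalty weights; this notation is introduced here) is:

\min_{u_{0:N-1}} \; \mathbb{E}\left[\sum_{k=0}^{N-1} c(x_k, u_k) + V(x_N)\right] + \rho_{\mathcal{X}} \sum_{k=0}^{N} d_{\mathcal{X}}(x_k) + \rho_{\mathcal{S}} \, d_{\mathcal{S}_\ell}(x_N)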

Where dX(.) and dS(.) measure constraint violations and are penalized adaptively. All computations are GPU-accelerated, supporting real-time control even with high-dimensional models.
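For readers unfamiliar with sampling-based MPC, the sketch below shows the core of a single MPPI update on the CPU. The real controller is GPU-accelerated and additionally incorporates the learned cost-to-go and safety penalties; all names here are illustrative.

import numpy as np

def mppi_step(x0, u_nom, dynamics, stage_cost, n_samples=512, sigma=0.5, lam=1.0):
    """One MPPI update: sample perturbed control sequences, roll them out,
    and re-weight them by exponentiated cost (information-theoretic weights)."""
    horizon, u_dim = u_nom.shape
    noise = np.random.randn(n_samples, horizon, u_dim) * sigma
    costs = np.zeros(n_samples)
    for i in range(n_samples):
        x = x0.copy()
        for t in range(horizon):
            u = u_nom[t] + noise[i, t]
            x = dynamics(x, u)           # forward-simulate the vehicle model
            costs[i] += stage_cost(x, u)
    beta = costs.min()                    # baseline for numerical stability
    w = np.exp(-(costs - beta) / lam)
    w /= w.sum()
    # The weighted average of sampled perturbations refines the nominal plan.
    return u_nom + np.einsum('i,ith->th', w, noise)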


Contributions to Autoware

Our project contributes to the Autoware ecosystem by:

  • Adding a modular cornering stiffness estimator node, usable across scales and vehicle types
  • Demonstrating SIT-LMPC integration with Autoware for real-time stochastic control
  • Deploying a scalable testbed, allowing other researchers to test slip-aware control in affordable settings

These additions make Autoware suitable for racing, off-road, and extreme-terrain applications.


What’s Next

  • Q3 2025: Validation on variable surfaces, terrain changes
  • Q4 2025: Public release of Autoware-compatible modules and documentation
  • 2026: Potential scale-up to full-size off-road vehicle deployments

We welcome collaboration—especially around data sharing, experimental validation, or deploying on similar testbeds.

Driving by Conversation: Personalized Autonomous Driving with LLMs and VLMs
https://autoware.org/driving-by-conversation-personalized-autonomous-driving-with-llms-and-vlms/ (14 August 2025)

LLMs and VLMs: Enabling personalization in AVs through natural language

The evolution of autonomous vehicles (AVs) has largely focused on safety, efficiency, and technical robustness. While these remain essential, the next frontier is clear—personalization.

Today’s AV stacks typically offer static driving modes—“sport,” “comfort,” “eco”—or manual parameter adjustments. These settings are rigid, fail to capture nuanced user preferences, and cannot interpret indirect or contextual instructions. In practice, they cannot adapt when a passenger says, “I’m tired, please drive more gently,” or “I’m late, could we speed up?”

Recent advances in Large Language Models (LLMs) and Vision-Language Models (VLMs) open the door to natural, human-like interaction with AVs. These models can understand plain-language commands in any language or dialect, interpret subtle modifiers (“slightly faster,” “much gentler”), and integrate contextual cues from live perception data.

By combining these capabilities with the AV’s driving stack, it becomes possible to:

  • Enable natural and nuanced conversation by understanding plain-language commands (in any language or dialect) and subtle modifiers (“slightly faster,” “much gentler”), replacing complex menu settings.
  • Make context-aware decisions by fusing live visual cues (traffic, weather, signage) with spoken intent so the vehicle adapts safely yet personally in real time.
  • Deliver personalization that improves over time via memory-augmented models that recall past rides to refine each passenger's comfort and style preferences without retraining the core stack.

The research presented here demonstrates the first end-to-end, real-world deployments of LLM- and VLM-based frameworks, Talk2Drive and an onboard VLM motion control system, integrated with a fully functional autonomous driving stack.


System Architecture: Integrating LLM/VLM with the autonomous driving stack

The proposed architecture embeds LLM or VLM capabilities into the Strategic Driving Intelligence Layer of the AV stack (Figure 1). It processes multimodal inputs, generates context-aware driving plans, and executes low-level controls through the existing autonomy layer.

Input Information:

  • Human instruction (speech-to-text conversion).
  • Perception results (objects, weather, traffic conditions).
  • Vehicle state (pose, speed).
  • Available safe behaviors (slow down, lane change, stop).

Prompt Generation Interface:
Bundles raw inputs with system context (safety rules, operational role) and historical ride data, producing a structured prompt for the LLM/VLM.

VLM/LLM Agent:
Generates high-level policy parameters, target speed, decision priorities, and control adjustments aligned with passenger preferences.

Action Interface:
Translates high-level LLM/VLM output into low-level commands executed by the autonomous driving layer.
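A minimal sketch of what the Prompt Generation Interface might look like follows; the field names and prompt wording are assumptions for illustration, not the deployed system.

from dataclasses import dataclass, field

@dataclass
class DrivingContext:
    instruction: str   # speech-to-text output
    perception: dict   # e.g. {'weather': 'rain', 'traffic': 'dense'}
    speed_mps: float
    safe_behaviors: list = field(default_factory=lambda: ['slow down', 'lane change', 'stop'])

def build_prompt(ctx: DrivingContext, ride_history: list) -> str:
    """Bundle raw inputs with system context and historical ride data into a
    structured prompt for the LLM/VLM agent."""
    return (
        "You are the strategic driving layer of an autonomous vehicle.\n"
        "Choose only among the listed safe behaviors; never exceed legal limits.\n"
        f"Current speed: {ctx.speed_mps:.1f} m/s\n"
        f"Perception: {ctx.perception}\n"
        f"Allowed behaviors: {', '.join(ctx.safe_behaviors)}\n"
        f"Past passenger preferences: {'; '.join(ride_history) or 'none'}\n"
        f'Passenger says: "{ctx.instruction}"\n'
        "Respond with a target speed in m/s and exactly one behavior."
    )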


Real-World Testing Environment

To evaluate these systems, field tests were conducted at three distinct tracks:

  1. Highway Track – Testing lane changes, maintaining speed, responding to sudden slowdowns, and merging from on-ramps.
  2. Intersection Track – Handling yielding, protected and unprotected turns, and cross-traffic negotiation.
  3. Parking Lot Track – Navigating narrow lanes, avoiding static/dynamic obstacles, parallel parking, and reverse parking maneuvers.

These scenarios allow assessment of personalization performance across diverse traffic, speed, and maneuvering conditions.


Autonomous Vehicle Hardware Setup

Experiments were conducted using a Lexus RX450h equipped with:

  • Sensors: LiDAR (VLP-32C), radar (Aptiv ESR 2.5), GNSS (NovAtel Level 2.5 kit), multiple cameras (front, rear, in-cabin).
  • Computing Platform: Intel i9-9900 CPU, NVIDIA Quadro RTX A4000 GPU, 512GB NVMe SSD.
  • Connectivity: Cradlepoint IBR900 Series Router with 4G-LTE.

This configuration supported both cloud-based LLM inference and fully onboard VLM inference for low-latency control.


Case Study 1: Talk2Drive (LLM-Based Personalized Driving)

The Talk2Drive framework integrates GPT-4-based LLMs into a real-world AV, allowing natural verbal commands to directly influence driving behavior.

Core Capabilities:

  • Understanding multiple levels of human intention – from explicit (“drive faster”) to indirect (“I’m in a hurry”) commands.
  • Memory module for personalization – storing historical interaction data to refine driving style preferences over time.

Experiment Design:

  • Scenarios: Highway, intersection, and parking lot.
  • Evaluation metric: takeover rate, i.e., the frequency with which the human driver needed to intervene.
  • Comparison: With and without the memory module.

Key Findings:

  • Talk2Drive reduced takeover rates by 75.9% compared to baseline non-personalized systems.
  • Adding the memory module further reduced takeover rates by up to 65.2%, demonstrating the benefit of long-term personalization.
  • System successfully interpreted context and emotional tone, enabling safer and more responsive driving adaptations.

Case Study 2: Onboard VLM for Motion Control

While LLM-based systems can operate via cloud processing, they often face latency (3–4 seconds) and connectivity constraints. The second study addressed these limitations by developing a lightweight onboard VLM framework capable of real-time inference.

Key Features:

  • Onboard deployment – No dependency on internet connectivity.
  • Multimodal reasoning – Processing visual scene inputs and natural language instructions in real time.
  • RAG-based memory module – Retrieval-Augmented Generation allows iterative refinement of control strategies through user feedback (see the sketch after this list).
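A toy version of such a retrieval-augmented memory is sketched below, with a user-supplied embedding function; everything here is an illustrative assumption, not the paper's implementation.

import numpy as np

class RideMemory:
    """Toy retrieval-augmented memory: stores (situation, feedback) pairs and
    retrieves the most similar past feedback for the next prompt."""
    def __init__(self, embed):
        self.embed = embed           # callable mapping text -> np.ndarray
        self.keys, self.feedback = [], []

    def add(self, situation: str, feedback: str):
        self.keys.append(self.embed(situation))
        self.feedback.append(feedback)

    def retrieve(self, situation: str, k: int = 3) -> list:
        if not self.keys:
            return []
        q = self.embed(situation)
        # Cosine similarity against every stored situation embedding
        sims = [float(q @ key) / (np.linalg.norm(q) * np.linalg.norm(key) + 1e-9)
                for key in self.keys]
        top = np.argsort(sims)[::-1][:k]
        return [self.feedback[i] for i in top]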

Experiment Design:

  • Same multi-scenario real-world setup as Talk2Drive.
  • Evaluated explicit and implicit commands, varying environmental conditions.

Key Findings:

  • Comparable reasoning capability to cloud-based LLM solutions, with significantly lower latency.
  • Takeover rate reduced by up to 76.9%.
  • Maintained safety and comfort standards while adapting to individual driving styles.

Comparative Insights

| Feature | Talk2Drive (LLM) | Onboard VLM Motion Control |
|---|---|---|
| Deployment | Cloud-based (requires connectivity) | Fully onboard |
| Input Modalities | Speech/text commands | Speech/text + visual scene |
| Memory Module | Historical personalization memory | RAG-based feedback memory |
| Latency | Higher (network dependent) | Low (below the real-time threshold) |
| Takeover Rate Reduction | Up to 75.9% | Up to 76.9% |
| Personalization Over Time | Yes | Yes, with continuous feedback |

Both approaches demonstrate that integrating advanced language and vision-language models with the AV stack can significantly improve personalization, trust, and user satisfaction. The choice between them depends on deployment constraints, desired input modalities, and connectivity availability.


Implications for Future Autonomous Driving

These studies represent the first real-world, end-to-end deployments of LLM and VLM personalization frameworks for autonomous vehicles. They address long-standing gaps in AV user interaction:

  1. Natural Command Interpretation – Understanding instructions without requiring structured input.
  2. Context Integration – Combining user intent with live environmental data for adaptive decision-making.
  3. Personalization Memory – Continuously refining the driving profile over multiple rides.
  4. Real-World Validation – Demonstrating effectiveness across diverse scenarios outside simulation environments.

Looking ahead, the combination of multimodal AI, onboard efficiency, and long-term personalization offers a promising path to AVs that not only drive safely but drive the way each passenger prefers.

For Further Reading:

Racing Into the Future: How Sim Racing Is Expanding Access to Autonomy
https://autoware.org/how-sim-racing-is-expanding-access-to-autonomy/ (21 July 2025)


In recent years, autonomous racing has grown from a niche academic interest to a global engineering challenge, drawing participation from hundreds of students, researchers, and developers. One of the most exciting developments in this space has been the launch of the RoboRacer Sim Racing League — a virtual autonomous racing competition powered by the AutoDRIVE Ecosystem.

How it Started?

RoboRacer (formerly known as F1Tenth) is an international community of researchers, engineers, and autonomous systems enthusiasts. It was originally founded at the University of Pennsylvania in 2016 but has since spread to many other institutions worldwide. Their mission is to foster interest, excitement, and critical thinking about the increasingly ubiquitous field of autonomous systems. They have been semi-regularly hosting international competitions focused on building high-performance autonomy stacks for 1:10-scale racecars. The objective is deceptively simple: drive fast, don’t crash!

AutoDRIVE is envisioned to be an open, comprehensive, flexible and integrated cyber-physical ecosystem for enhancing autonomous driving research and education. It bridges the gap between software simulation and hardware deployment by providing the AutoDRIVE Simulator and AutoDRIVE Testbed, a well-suited duo for real2sim and sim2real transfer targeting vehicles and environments of varying scales and operational design domains. It also offers AutoDRIVE Devkit, a developer’s kit for rapid and flexible development of autonomy algorithms using a variety of programming languages and software frameworks, including Autoware.

RoboRacer and AutoDRIVE recently joined hands to introduce the Sim Racing Leagues, which take place entirely in a virtual environment, making them globally accessible and fully reproducible.

How it Works?


The competitions took place in two rounds:

  • Qualification Round: Teams demonstrated their ability to complete multiple laps around the practice track without colliding with the track bounds at run time.
  • Competition Round: Teams competed against the clock (a.k.a. time-attack race), on a previously unseen racetrack, to secure a position on the leaderboard.

The competitions adopted a containerization workflow using Docker to run and evaluate the submissions in a reproducible manner. Containerization provided a lightweight and portable environment, allowing applications to be easily packaged along with their dependencies, configurations, and libraries.

Each team was provided with a standardized simulation setup comprising the digital twin of RoboRacer racecars and racetracks within the high-fidelity AutoDRIVE Simulator. Additionally, teams were also provided with a working implementation of the AutoDRIVE Devkit to get started with developing their autonomy algorithms using ROS 2 or Autoware. Teams had to develop perception, planning, and control algorithms to parse the real-time sensor data streamed from the simulator and generate control commands to be fed back to the simulated vehicle.

The participants had the option to run the simulations in headless mode, or across varying grades of graphics fidelity and interface them with their autonomous racing software stacks locally or in a distributed computing setting. This levelled the playing field by relaxing the computational requirements without compromising the performance.

Staying true to its mission of open accessibility and transparency, the Sim Racing League provided each team with private access to their race logs and video recordings. Additionally, teams were also encouraged to leverage openly released tailor-made Foxglove layouts and scripts for data visualization and analysis. This allowed teams to post-process their performance, identify bottlenecks, and iterate on their software between rounds.

How it’s Going?

IROS 2024 | CDC 2024 | ICRA 2025

The RoboRacer Sim Racing League has been successfully deployed 3 times so far. The very first deployment at IROS 2024 in Abu Dhabi, UAE witnessed 58 teams (160+ participants) and the second one at CDC 2024 in Milano, Italy witnessed 51 teams (170+ participants). The most recent deployment at ICRA 2025 in Atlanta, USA registered 58 teams (150+ participants) from all over the world (32 organizations, 25 countries).

Teams participated with an exciting mix of reactive algorithms, map-based algorithms, and learning-based algorithms to push the virtual RoboRacer vehicles to their limits and autonomously race at over 24 mph!

| Rank | Team Name | Race Time | Collision Count | Adjusted Race Time | Best Lap Time | Video |
|---|---|---|---|---|---|---|
| 01 | 🥇 VAUL | 111.46 s | 0 | 111.46 s | 11.28 s | YouTube |
| 02 | 🥈 Autoware Aces | 122.16 s | 0 | 122.16 s | 12.08 s | YouTube |
| 03 | 🥉 Kanka | 129.28 s | 0 | 129.28 s | 12.76 s | YouTube |

The 3rd RoboRacer Sim Racing League @ ICRA 2025 concluded on May 14, 2025, with the following teams taking the top positions on the leaderboard:

🥇 VAUL (Université Laval) took the gold with an impressive race time of 111.46 seconds, maintaining a flawless run with zero collisions and setting the fastest lap of the competition at 11.28 seconds. Their top speeds reached above 10.6 m/s!

🥈 Autoware Aces (Autoware Foundation) secured the silver, completing the race in 122.16 seconds. Their consistent performance also saw zero collisions, with a best lap time of 12.08 seconds and top speed of over 9.3 m/s.

🥉 Kanka (University of Minnesota) claimed the bronze, finishing the race in 129.28 seconds. They also performed consistently without any collisions and put up a best lap time of 12.76 seconds with a top speed of over 9.6 m/s. They improved their autonomous racing stack so much that their final race time beat their own qualification time, even though the qualification run took place on a smaller, simpler track.

🏁 Check out the full leaderboard here: https://autodrive-ecosystem.github.io/competitions/roboracer-sim-racing-icra-2025/#results 

Finally, it was interesting to see how some of the teams were able to perform a sim2real transfer of their autonomous racing algorithms during the physical races at ICRA 2025!

What’s Next?

With an overwhelmingly positive reception from the global community, the RoboRacer Sim Racing Leagues will continue to be organized across premier robotics, autonomous driving, and controls conferences. Expect new tracks, new race formats, enhanced simulation features, multi-friction surfaces, uneven terrain, off-roading, and much more!

Whether you’re a student, researcher, or robotics enthusiast, the RoboRacer Sim Racing League is a perfect place to test your limits and push the boundaries of autonomous systems. Are you ready to race?

Managing Multiple Autoware Vehicles with Zenoh
https://autoware.org/managing-multiple-autoware-vehicles-with-zenoh/ (19 February 2025)

Written by ChenYing Kuo (ADLINK and ZettaScale)

As autonomous vehicles become increasingly popular, it is important to monitor and manage them from the cloud. The management system needs to collect data from autonomous driving systems and send control commands when necessary. This feature will be critical as Autoware gains wider adoption within the open-source community. In this scenario, we believe that Zenoh can play a significant role in enhancing communication efficiency.

We built a prototype project, zenoh_autoware_fms, to show how Zenoh improves vehicle management. Autoware is based on ROS 2 and uses CycloneDDS as its communication layer. DDS functions well within vehicles, but it is not well suited to communication over the Internet and cannot interface with the cloud. We could certainly choose other popular protocols for external communication, for instance MQTT or Kafka; however, we suggest using Zenoh for the following reasons:

  1. Zenoh is more efficient than other protocols, according to our benchmark results.
  2. Zenoh can easily work with other protocols with the help of plugins. For example, zenoh-bridge-ros2dds can easily transform ROS 2 messages into Zenoh ones and provides other useful features, like access control and downsampling.

Architecture

In our architecture, we use the zenoh-bridge-ros2dds as a gateway to communicate with the management system server. Every vehicle has its own zenoh-bridge-ros2dds with a unique namespace. The server can talk to different vehicles by adding the corresponding namespace.

On the server side, we implement some basic functions: monitoring vehicles, remote driving, and goal setting. The server is web-based, and the backend uses Zenoh for communication.

On the client side, we leverage the AD API defined by Autoware to get data and send commands. zenoh-bridge-ros2dds can filter out irrelevant ROS 2 messages, so only AD API traffic is allowed to enter the Zenoh network.
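On the server side, a fleet listener can be as simple as the sketch below, using the Zenoh Python API. The wildcard key expression is an assumption based on the per-vehicle namespace layout described above, not the project's exact key layout.

import time
import zenoh

def on_sample(sample):
    # The leading key segment is the vehicle namespace (v1, v2, ...),
    # so one subscriber can monitor the whole fleet.
    print(f"received sample on {sample.key_expr}")

session = zenoh.open(zenoh.Config())
# The key expression below is an assumption about the bridged topic layout.
sub = session.declare_subscriber("*/api/vehicle/status/**", on_sample)
time.sleep(60)  # keep the session alive while samples arrive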

Tutorial

In the following section, we’ll show you how to run the system step by step. You can also refer to the tutorial here.

Firstly, let’s download and install the management system.

# Download the code
git clone https://github.com/evshary/zenoh_autoware_fms.git
# Install prerequisites
cd zenoh_autoware_fms
./prerequisite.sh

Then, we need to leverage our previous work to run multiple vehicles in the Carla simulator. You can also check the online documentation for the detailed commands.

  • 1st terminal: Run Carla simulator
./CarlaUE4.sh -quality-level=Epic -world-port=2000 -RenderOffScreen
  • 2nd terminal: Run the bridges
# Go inside "Carla bridge container"
./container/run-bridge-docker.sh
# Run zenoh_carla_bridge and Python Agent
cd autoware_carla_launch
source env.sh
./script/run-bridge-two-vehicles.sh
  • 3rd terminal: Run the first vehicle
# Go inside "Autoware container"
./container/run-autoware-docker.sh
# Run zenoh-bridge-ros2dds and Autoware
cd autoware_carla_launch
source env.sh
# You can ignore the Bridge IP and management system IP if they are on the same host.
./script/run-autoware.sh v1 <Bridge IP> <FMS IP>
  • 4th terminal: Run the second vehicle
# Go inside "Autoware container"
./container/run-autoware-docker.sh
# Run zenoh-bridge-ros2dds and Autoware
cd autoware_carla_launch
source env.sh
# You can ignore the Bridge IP and management system IP if they are on the same host.
./script/run-autoware.sh v2 <Bridge IP> <FMS IP>

Finally, let’s run the management system.

cd zenoh_autoware_fms
source env.sh
./run_server.sh

The following video shows how we run the management system on the ADLINK ADM-AL30 platform, which is designed for autonomous driving applications. There are three main functions inside the system:

  • Monitoring vehicles (0:29): lists all the available vehicles on the Zenoh network.
  • Setting goals (0:47): assigns a route to each vehicle.
  • Remote driving (2:04): controls the vehicle from the web console. It is not easy to drive with a mouse, but it is sufficient to demonstrate Zenoh's suitability for remote driving.

Conclusion

This blog post and the project give an overview of how to use Zenoh in an Autoware management system. As we've already mentioned, the project is only a prototype to demonstrate the power of Zenoh; a complete management system would need more features. If you are interested in applying Zenoh in your communication system, you are welcome to contribute to the project or contact us.

Special Thanks

ADLINK interns also contributed to the project:

  • Denny Kuo: implemented the basic features of the project.
  • Leann Hsu: upgraded both Zenoh and Autoware to the latest versions and enhanced performance.

An Anatomy of Autonomous Racing: Autonomous Go-Karts
https://autoware.org/an-anatomy-of-autonomous-racing-autonomous-go-karts/ (26 September 2024)

Racing has always been a passion of ours at Autoware Foundation. It’s not just about speed or the thrill of competition; it’s about pushing boundaries, fostering innovation, and building a strong community. Through racing, we’ve formed invaluable bonds with academics, students, and the broader racing world. These partnerships have allowed us to push the envelope of autonomous driving technology while inspiring the next generation of engineers.

Autonomous racing, as it turns out, is also an incredible educational tool. It provides students with hands-on experience in robotics, AI, and engineering in ways that traditional classroom settings cannot. From the precision required to navigate a track to the split-second decision-making needed to compete at high speeds, racing sharpens skills that are essential in the world of autonomy.

Many form factors are involved in autonomous racing, each providing unique learning opportunities. It can start with 1:10th-scale robots [1], advance to versions equipped with upgraded sensors like 3D LiDARs, and move up to autonomous go-karts. The pinnacle of this progression today is the full-scale Indy car [2], racing autonomously at speeds of over 270 km/h and showcasing just how far the technology can be pushed in high-stakes environments.

Autonomous Go-Karts: Revolutionizing Racing and Education

One of the standout formats in the world of autonomous racing is the Autonomous Karting Series (AKS), where autonomous go-karts take center stage. This competition, which began in 2023, is designed to push the limits of self-driving technology while providing an accessible platform for students and universities to compete and innovate. The AKS holds its annual National Grand Prix at Purdue University, where some of the brightest minds in autonomous technology face off.

In the 2024 Grand Prix, six teams—hailing from the University of Pennsylvania, UC Berkeley, UC San Diego, Purdue University, University of Michigan-Dearborn, and Kennesaw State University—fought fiercely for the top spot. The teams raced in three distinct categories:

Time Trial: Teams aimed to clock the fastest five laps on the track.

Open Category: Teams were allowed to pre-map the track and use the data to guide their kart. The challenge here was speed and control without the presence of cones.

Reactive Category: This was the most demanding, as teams were prohibited from pre-mapping the track. With only cones to guide their karts, teams had to rely on real-time data and rapid decision-making.

For two years in a row, the Autoware Foundation team from the University of Pennsylvania dominated the event, clinching first place in both the Open and Reactive categories in 2024, showcasing their exceptional engineering prowess and cutting-edge autonomous technology.

Adding a Niche Expertise into the Mix

In the latest AKS race, the Autoware Team at the University of Pennsylvania took their autonomous go-kart platform to new heights by incorporating Fixposition‘s Vision-RTK2 sensor. Known for its robust fusion of GNSS, inertial measurement units (IMU), and visual odometry, the Vision-RTK2 provided the team with unmatched accuracy and reliability in vehicle positioning.

The sensor was mounted near the top of the rear shelf of the go-kart, integrated with a 24V power source. It had to be positioned in a way that parts of the steering wheel were visible to the sensor’s camera. However, this interference was filtered out using Fixposition’s WebUI, ensuring clean sensor data.

The team further optimized their setup by utilizing Point One Navigation’s Polaris RTK subscription for Network Transport of RTCM via Internet Protocol (NTRIP), which enhanced the accuracy of the real-time positioning data. The odometry data from the Vision-RTK2 was used to derive the vehicle’s local position and orientation. This information was critical for the mapping and localization stack, and after applying trajectory optimization, the kart was controlled via a pure pursuit control algorithm.
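For context, pure pursuit steers toward a point a fixed lookahead distance down the path. A minimal sketch follows; the parameter values are illustrative, not the team's actual tuning.

import math

def pure_pursuit_steering(pose, path, lookahead=2.0, wheelbase=1.05):
    """Classic pure pursuit: pick the first path point at least `lookahead`
    away and return the bicycle-model steering angle that arcs onto it."""
    x, y, yaw = pose
    target = next(
        (p for p in path if math.hypot(p[0] - x, p[1] - y) >= lookahead),
        path[-1],  # fall back to the final point near the path's end
    )
    dx, dy = target[0] - x, target[1] - y
    # Lateral offset of the target point, expressed in the vehicle frame
    local_y = -math.sin(yaw) * dx + math.cos(yaw) * dy
    ld = math.hypot(dx, dy)
    curvature = 2.0 * local_y / (ld * ld)
    return math.atan(wheelbase * curvature)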

Despite occasional GNSS drifts, the visual odometry from the Vision-RTK2 ensured that the position estimates were accurate throughout the race. The team achieved a remarkable sub-5-centimeter position accuracy across the track. Impressively, their go-kart completed five laps in just 5 minutes and 48 seconds, setting a new record for the fastest time trial in AKS race history.

By leveraging the cutting-edge technology provided by Fixposition, the Autoware Team demonstrated how niche sensors can significantly elevate the performance of autonomous racing platforms.

Looking Ahead: A Growing Future for Autonomous Racing

The Autonomous Karting Series is just getting started. With its popularity and technological significance growing, the competition is poised to expand beyond Purdue University in the coming years. New race tracks and larger events will likely emerge, offering more teams the chance to test their innovations and push the limits of autonomous racing technology.

The Autoware Foundation is proud to be a supporter of AKS and is committed to working with the competition as it expands. We see tremendous educational and technological potential in the series, and with the success of the Autoware Team at UPenn, the benefits of the competition are clear. In fact, the UPenn team has open-sourced their entire vehicle design under the AV4EV platform, making it freely accessible to universities and enthusiasts worldwide. This includes detailed component specifications, drive-by-wire system design, and software architecture, creating a valuable resource for anyone looking to develop their autonomous go-kart platforms.

The open-sourced AV4EV design has already inspired many universities that are part of the Autoware Centers of Excellence initiative. By adopting this platform, students and faculty can avoid having to build everything from scratch, accelerating their research and participation in autonomous racing. We expect to see more AV4EV go-kart platforms racing on tracks worldwide, with teams not just competing but also learning from and sharing their experiences with one another.

As the competition grows and more teams get involved, the spirit of collaboration and innovation will continue to drive the AKS forward. The future of autonomous racing is bright, and we at the Autoware Foundation are excited to be part of this transformative journey.

  [1] Visit the F1Tenth Foundation website.
  [2] Visit the Indy Autonomous Challenge website.

CES Special – Restarting the IAC SIM races using AWSIM
https://autoware.org/ces-special-restarting-the-iac-sim-races-using-awsim/ (6 December 2023)

Welcome to another blog post as part of our CES 2024 work, to be showcased in Las Vegas between 9 and 12 January 2024.

Autoware Foundation partnered with the Indy Autonomous Challenge (IAC) and Autonoma to restart the IAC Sim Races using Autoware’s open-source simulator platform AWSIM.

Since the IAC Sim Races will be based on AWSIM, it’s a good idea to have a quick read about AWSIM before jumping into the IAC Sim Races.

Here’s the link from Autoware Blog: https://autoware.org/awsim-end-to-end-digital-twin-simulation-platform/

We also recorded a podcast with Will Bryan, CEO of Autonoma, to talk about why SIM races are important, what type of integration was undertaken for autonomous racing and some progress on the practice round that has been going on for a while.

Watch the full podcast episode on Autoware’s YouTube channel.

Let’s do a quick overview of the Indy Autonomous Challenge

If you are into autonomous racing, you must have heard of the Indy Autonomous Challenge: Dallara-built AV-21 vehicles, retrofitted with hardware, sensors, and controls to enable automation (an effort led by Clemson University, which packaged the sensors and compute into a single unit), racing on world-famous race tracks, setting new speed records, and improving performance virtually every other race.

Seeing the IAC racecars competing on flying laps at the famous Monza track (the Temple of Speed) was a thrilling experience for us (Christian John and me) when we were invited to the IAC event in Monza. That was around June, when Autonoma was working on integrating the racing elements into Autoware's open-source simulation platform AWSIM.

Here’s a throwback to that day:

 

Sim Races: Why is it important?

In the 2020-2021 season, the IAC organized a series of autonomous SIM races, which allowed 31 teams, drawn from 40 universities across four continents, to test their AI drivers and determine their readiness to compete in on-track competitions.

Although it is a thrilling experience to witness racecars at on-track competitions, the number of teams that can compete in the IAC is limited by the number of available racecars. Not all universities can raise the funding necessary to acquire a Dallara AV-21 vehicle; even if they could, only 10 of them exist, and they are pretty much all accounted for.

SIM races are a way to break down the barriers of funding and physical car availability: virtually any team can develop AI drivers to compete with others around the globe. Additionally, sim races prepare teams to progress to real-world, on-track competitions.

The IAC is relaunching the autonomous SIM races in partnership with the Autoware Foundation to develop the official IAC racing simulation platform, integrated into the Autoware Foundation's open-source simulation platform, AWSIM. Autonoma, a startup founded by former IAC contestants, provides the simulation modeling that replicates the real-world IAC racecars and race tracks within AWSIM.

How are we restarting the Sim Races through Autonoma’s contributions?

Autonoma is a startup founded by former IAC university team (Auburn University) members, and they bring years of experience working with the IAC racecars. Autonoma’s core business is simulation, and that’s why they were a natural fit for the Autoware Foundation to partner with to enhance AWSIM with racing capabilities.

“The basics of the simulation are based on AWSIM, but we have been working with IAC teams since our founding in 2022 to create a really accurate digital twin of the vehicles: the dynamics, the sensors, the environments and the interfaces, so by partnering with the Autoware Foundation we’ve taken what we developed and we integrated that into AWSIM platform. We are providing that to the teams as a resource for their testing and preparations for the actual events and the SIM races,” said Will Bryan, CEO of Autonoma, in our podcast episode. 

Autonoma created a competition version of the simulator, an executable that Autonoma provides to every registered team.

Visit Autonoma’s GitHub for the SIM races repository.

Autonoma created a highly accurate vehicle model of the IAC racecar, including suspension, tire, and aerodynamics models, to deliver the realism necessary for high-speed racing (around 180 mph, or roughly 290 km/h). They also replicated the sensor models of the onboard IAC racecar hardware, including three GNSS/IMUs, six cameras, three LiDARs, and three radars. Beyond these external perception and localization sensors, they replicated hundreds of sensors on the vehicle itself that the competing teams interface with over the CAN bus.
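
To give a feel for that interface, here is a minimal sketch of how a team-side tool might read and decode one such telemetry frame using the python-can library. The arbitration ID, byte layout, and scaling below are hypothetical placeholders for illustration, not values from the actual IAC CAN database.

    import struct
    import can  # python-can

    # Hypothetical frame: front-left/front-right wheel speeds as two
    # unsigned 16-bit values in units of 0.01 km/h (placeholder layout).
    WHEEL_SPEED_ID = 0x123  # placeholder ID, not the real IAC definition

    with can.Bus(interface="socketcan", channel="can0") as bus:
        msg = bus.recv(timeout=1.0)  # blocking read with a 1 s timeout
        if msg is not None and msg.arbitration_id == WHEEL_SPEED_ID and len(msg.data) >= 4:
            fl_raw, fr_raw = struct.unpack_from("<HH", msg.data, 0)
            print(f"front wheels: {fl_raw * 0.01:.2f} / {fr_raw * 0.01:.2f} km/h")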

Autonoma also created the assets for the Monza track, down to the tire marks on the curbs, to give the competing teams confidence in the simulation: as they transition from simulation to on-track competition, they should see the same behaviour they observed during development.


Interested in learning more? Tune in to our podcast with Will, where we go into quite a bit of detail about the preparation work done by Autonoma, as well as the progress of the practice rounds.

See you in our next blog!

The post CES Special – Restarting the IAC SIM races using AWSIM appeared first on Autoware.

CES Special – The Open AD Kit Blueprint https://autoware.org/ces-special-the-open-ad-kit-blueprint/ Fri, 01 Dec 2023 10:12:46 +0000 https://autoware.org/?p=2411
We are starting a series of blog posts and podcasts as a part of our CES 2024 work to be showcased in Las Vegas between 9 and 12 January 2024.

The Autoware Foundation will be exhibiting Open AD Kit demos in a number of booths at the CES 2024 show floor and celebrate the inauguration of our partnership with the Indy Autonomous Challenge for Sim Racing, based on Autoware’s open-source simulation platform AWSIM.

If you are interested, we have blog posts about both the Open AD Kit and AWSIM, previously published on the Autoware Foundation blog on our website. Here are some links to those materials:

In this blog post, we want to lay the groundwork for the upcoming posts, where we will delve deep into the Software-Defined Vehicle (SDV) by dissecting and analyzing the concepts and components that form the puzzle pieces of the entire paradigm.

The buzz around software-defined vehicles

The SDV didn’t take long to become a buzzword across the automotive sphere because of its potential to transform conventional automotive software development. Shortening time-to-market, collecting near-real-time feedback from consumers to improve the in-vehicle user experience, enabling software testing at scale while reducing test and validation costs, and deploying and delivering software ubiquitously are big promises, and there is no reason why automotive stakeholders should not aspire to make software-defined vehicles a reality.

But (there’s always a but), this requires a drastic change in today’s automotive software development practices, and that change is not straightforward. Thankfully, we are seeing deep commitment from stakeholders across the automotive industry to work together (one of the key enablers of the concept), and, again thankfully, there are already other software-defined industries from which we can take inspiration.

Software-defined vehicle requirements: good-to-haves and must-haves

In a nutshell, the SDV paradigm suggests a few key requirements. Let’s take a quick look at them to start looking at the big picture.

  1. Defining open standards and specifications for developing automotive software and, instead of competing on standards, focusing on value-added differentiation that makes consumers happy.
  2. Making the vehicle more capable over time (software development doesn’t end after the vehicle leaves the factory) by regularly upgrading the in-vehicle software.
  3. Defining vehicle functionality in software rather than hardware, and achieving this by avoiding an “ECU for every function” mentality, thus reducing complexity as well.
  4. Shifting left to enable testing at scale, improve code quality, reduce losses due to recalls, and shorten the automotive software development cycle using agile methodologies.

At the Autoware Foundation’s Open AD Kit, we’re ticking all the boxes for the above-mentioned software-defined vehicle requirements. We took a thorough look at the Open AD Kit initiative before, so we won’t revisit all the details, but here is how we address each requirement, in a concise way:

  1. First of all, the Autoware project is open-source. Not only that, but we also interact with the entire automotive ecosystem through the Open AD Kit framework (let’s not forget that the Open AD Kit is the first blueprint of SOAFEE) to create synergies around open standards that let software move seamlessly across different environments, while focusing on delivering differentiated, value-added applications.
  2. Upgrading automotive software via OTA is an important thrust of the Open AD Kit framework. The Autoware Foundation has a roadmap to gradually achieve higher levels of automation by continuously adding new use cases and scenarios.
  3. The Autoware project promotes zonal, consolidated, mixed-criticality hardware platforms. Application components can run in isolated processes within a given CPU/GPU architecture via virtualization, drastically improving resource use.
  4. The Open AD Kit framework dictates the use of cloud-native paradigms. Instead of a hardware-first approach, we take a software-first approach via simulation and validation in the cloud for scenario testing (let’s not forget that hundreds of scenarios are tested on Autoware’s CI/CD pipeline every week; a minimal sketch of such a scenario sweep follows this list).
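
As a loose illustration of what testing scenarios at scale in the cloud can look like, here is a minimal sketch that sweeps a set of scenario files through a containerized simulation image. The image name, scenario layout, and pass/fail convention are hypothetical placeholders, not the actual Autoware CI/CD setup.

    import subprocess
    from pathlib import Path

    SIM_IMAGE = "example/autoware-scenario-sim:latest"  # hypothetical image

    def run_scenario(scenario: Path) -> bool:
        """Run one scenario in a throwaway container; exit code 0 means pass."""
        result = subprocess.run(
            ["docker", "run", "--rm",
             "-v", f"{scenario.parent.resolve()}:/scenarios:ro",
             SIM_IMAGE, "--scenario", f"/scenarios/{scenario.name}"],
            capture_output=True, text=True,
        )
        return result.returncode == 0

    scenarios = sorted(Path("scenarios").glob("*.yaml"))
    passed = sum(run_scenario(s) for s in scenarios)
    print(f"{passed}/{len(scenarios)} scenarios passed")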

Now, let’s go a little bit more technical to understand the underlying principles of the Open AD Kit framework.

Taking a quick look at the SOAFEE architecture

As we mentioned several times, the Open AD Kit is the first blueprint of SOAFEE. To gain a better understanding, let’s take a look at SOAFEE’s architecture.

SOAFEE architecture is consciously designed to be simple. The main idea of the SOAFEE architecture is to create a direct path to run and deploy various domain-specific applications in different environments.

In this architecture, a domain-specific application is any containerized workload running on top of the SOAFEE reference implementation, which provides the necessary hardware and software abstraction via container runtime, orchestration, and virtualization technologies.

SOAFEE reference implementations sit on top of standards-based firmware, which aligns the hardware platforms on the market so that a SOAFEE reference implementation can be moved not only from one platform to another but also across different types of hardware, such as edge, cloud, and virtual instances. Once the software is abstracted from the underlying hardware, development, testing, validation, and deployment can each run where they work best (e.g., edge, cloud, or virtual) with optimal use of resources.

Now that we understand the core principles of SOAFEE architecture, we can dive into the Open AD Kit blueprint.

Taking a quick look at the Open AD Kit Blueprint

Open AD Kit shares the same core principles with SOAFEE architecture: containerization of workloads and agreeing on open standards.

The Open AD Kit framework is developed at Autoware’s Open AD Kit working group. The main goal of the working group is to transform the monolithic Autoware architecture into containerized workloads.

“Autoware software stack already existed when we started the Open AD Kit, so we had a lot of legacy code, a lot of well-written code and great functionality, but it was also complicated to deploy them in a flexible way. We’ve decided to take on this huge effort by dividing it into smaller threads and processes,”

said Kasper Mecklenburg from Arm and SOAFEE in our podcast session, explaining how the Open AD Kit working group is tackling the containerization of the Autoware project gracefully, after diligent discussions to create agreement and alignment.

We call this effort the DevOps Dojos, a permanent segment of every Open AD Kit working group meeting where we update and track progress toward a software-defined Autoware. Gradually, we’re tackling the ROS 2 nodes that constitute the Autoware software and making the changes needed to modernize them.
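
To make that concrete, here is a minimal sketch of the kind of self-contained ROS 2 node (using rclpy) that lends itself well to containerization: every tunable comes in through declared parameters rather than hard-coded values, so the same container image can be redeployed across environments. The node, topic, and parameter names are illustrative, not actual Autoware components.

    import rclpy
    from rclpy.node import Node
    from std_msgs.msg import String

    class HeartbeatNode(Node):
        """Illustrative node: all tunables are parameters, no baked-in config."""

        def __init__(self) -> None:
            super().__init__("heartbeat_node")
            self.declare_parameter("rate_hz", 1.0)   # overridable per deployment
            self.declare_parameter("label", "edge")  # e.g. "edge" vs "cloud"
            rate = self.get_parameter("rate_hz").value
            self._pub = self.create_publisher(String, "heartbeat", 10)
            self.create_timer(1.0 / rate, self._tick)

        def _tick(self) -> None:
            msg = String()
            msg.data = f"alive ({self.get_parameter('label').value})"
            self._pub.publish(msg)

    def main() -> None:
        rclpy.init()
        rclpy.spin(HeartbeatNode())

    if __name__ == "__main__":
        main()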

What do we do at the Open AD Kit working group?

There’s actually an open call for volunteers to help us with this task. Anyone familiar with Autoware and ROS, with C/C++, YAML, and JSON schema skills, can come in and contribute to our efforts. Here’s the call for participation announcement.

We convene at the Open AD Kit working group every Thursday at 3 pm CET (see the Autoware calendar for more information) to update and discuss the items on the task board. It’s a great opportunity to learn about this exciting initiative; please join us!

The Open AD Kit working group has ambitious goals, and its progress has been demonstrated time after time at global events. We’re working towards very exciting, tangible, and comprehensive demos for the upcoming CES show. As a part of that effort, we have already achieved good preliminary results in splitting the Autoware software into granular containers that can be moved from cloud to edge.

Future work and forward-looking statements

Unsurprisingly, containerizing the Autoware software was the initial goal of the Open AD Kit working group; however, the work won’t end there. As mentioned earlier, once workloads are containerized, we can move those containers freely. By leveraging open standards, we will target strong software portability across different types of hardware. The good news is that processing hardware vendors strive for the same objective: many PoCs are already in the works to deploy Autoware software on various hardware platforms compatible with the consortia-led open standards.

On the other hand, shift-left is an important aspect of the software-defined vehicle paradigm, so using the cloud and collaborating on the cloud is another objective the Open AD Kit working group will focus on soon. Our CES demo will provide a glimpse of that idea, and we will continue prototyping demos and solutions to attract attention from parties across the board.

In the next blog post of this series, we will dial down the technical side a bit, take a broader look at the Software-Defined Vehicle realm, and talk about business models that make sense, how the SDV will benefit OEMs and consumers, the activities of the active stakeholders, and the roadmap to collaboration across the entire automotive ecosystem.

See you in our next blog!

The post CES Special – The Open AD Kit Blueprint appeared first on Autoware.

Introducing Autonomous Vehicles for Electric Vehicles (AV4EV) https://autoware.org/introducing-autonomous-vehicles-for-electric-vehicles-av4ev/ Wed, 22 Nov 2023 14:39:31 +0000 https://autoware.org/?p=2395
While much research has been performed on autonomous vehicle modules such as perception, localization, planning, control, and prediction, human-in-the-loop “end-to-end” approaches such as Deep Learning (DL) and Imitation Learning (IL) still have a way to go when it comes to safety-critical operation.

The advantage of the “end-to-end” approach is that it considers the autonomous system as a whole and maps directly from raw sensory input to the control outputs: throttle, steering, and brakes. This approach is helpful in the racing field and can be extended to meaningful real-road applications.
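
As a rough illustration of that direct mapping, here is a minimal sketch of an end-to-end policy network in PyTorch that takes a camera image and outputs throttle, steering, and brake commands. The architecture and dimensions are arbitrary placeholders, not a model from any specific IL system.

    import torch
    from torch import nn

    class EndToEndPolicy(nn.Module):
        """Toy image-to-control network: 3x128x128 RGB in, 3 commands out."""

        def __init__(self) -> None:
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.head = nn.Linear(32, 3)  # throttle, steering, brake

        def forward(self, image: torch.Tensor) -> torch.Tensor:
            # Squash to [-1, 1]; rescale throttle/brake to [0, 1] downstream.
            return torch.tanh(self.head(self.backbone(image)))

    policy = EndToEndPolicy()
    controls = policy(torch.rand(1, 3, 128, 128))  # one fake camera frame
    print(controls.shape)  # torch.Size([1, 3])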

Imitation Learning (IL) has proven to be an effective end-to-end method that can be trained using expert demonstrations, such as existing models or human participation, but its safety aspects still require extensive validation. An open-source driving simulator, for example, can support the development and validation of algorithms, but it lacks a connection to the real world and needs further testing on physical platforms.

Other examples, such as reduced-scale vehicles for testing algorithms at one-twentieth and one-tenth scale, achieve promising results, but they have very limited computing power and sensing capability and differ significantly from full-scale vehicles. Finally, implementing learning algorithms on a real car generally requires a large financial commitment, cumbersome reverse engineering, and significant safety risks.

The Testing Solution: AV4EV Go-Kart Platform

Led by Professor Rahul Mangharam of the Department of Electrical and Systems Engineering at the University of Pennsylvania, a graduate student team at the UPenn CoE has developed a solution: an open-design, one-half-scale autonomous go-kart built on an existing chassis provided by Topkart USA, which can serve as a platform for developing and testing AV algorithms.

The AV4EV go-kart work focuses on supporting repeatable development, testing, and deployment workflows for the go-kart platform software, including the base system, the middleware and packages, the application layers, and a simulation environment.

This software-defined vehicle (SDV) approach means that an application deployed on the go-kart can be reused on an entirely different physical platform (e.g., a scooter, forklift, or automobile) without building new applications from the ground up.

The go-kart’s software stack is based on ROS 2 and supports high-speed operation. The platform carries LiDAR, camera, GNSS, and IMU sensors, and the stack implements localization, trajectory following, and obstacle avoidance. It’s also capable of carrying a human driver and is technically and financially accessible to universities and research institutions. This solution fills the gap between reduced-scale cars and full-scale vehicular platforms while extending the research scope from modular pipeline development in racing competitions to end-to-end sensing and control.
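
For a flavor of the trajectory-following layer, here is a minimal pure-pursuit steering sketch of the kind such a ROS 2 stack might build on. It is a standalone geometric illustration with made-up parameters (wheelbase, lookahead), not code from the AV4EV software itself.

    import math

    WHEELBASE_M = 1.1  # made-up half-scale go-kart wheelbase
    LOOKAHEAD_M = 2.0  # made-up lookahead distance

    def pure_pursuit_steering(x, y, yaw, path):
        """Steering angle (rad) that arcs the rear axle at (x, y) toward the
        first path point at least LOOKAHEAD_M away."""
        target = next(
            ((px, py) for px, py in path if math.hypot(px - x, py - y) >= LOOKAHEAD_M),
            path[-1],
        )
        # Heading error to the target point, in the vehicle frame.
        alpha = math.atan2(target[1] - y, target[0] - x) - yaw
        # Classic pure-pursuit law: delta = atan(2 L sin(alpha) / lookahead).
        return math.atan2(2.0 * WHEELBASE_M * math.sin(alpha), LOOKAHEAD_M)

    # Example: straight path along +x, kart offset 0.5 m to the left.
    path = [(float(i), 0.0) for i in range(20)]
    print(pure_pursuit_steering(0.0, 0.5, 0.0, path))  # negative: steer right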

AV4EV Go-Kart Systems Explained

The go-kart’s mechatronics are designed as a modular system consisting of several subsystems, each responsible for different tasks. There are seven major subsystems: Power Distribution (PD), Main Control (MC), User Interface (UI), Throttle-by-Wire (TBW), Brake-by-Wire (BBW), Steer-by-Wire (SBW), and Rear-Shelf Design (RSD).

The “x-by-wire” design approach, which replaces conventional mechanical and hydraulic control systems with electronic signals, has been gaining popularity in the automotive industry. Eliminating traditional mechanical components can improve control stability, design flexibility, and efficiency while reducing cost. In our go-kart’s drive-by-wire design, all subsystems except the PD and the RSD use an STM32 Nucleo development board on a standalone PCB as the electronic control unit (ECU).

As in modern vehicle design, communication is handled over a controller area network (CAN) bus, allowing efficient information exchange between nodes. These modular control systems integrate with the original go-kart chassis in a non-intrusive manner and are easy to understand, build, and modify.
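
As a loose sketch of how a compute node might command one of these by-wire ECUs over CAN, here is a python-can example that publishes a steering setpoint. The arbitration ID, encoding, and units are invented placeholders, not the go-kart’s actual message definitions.

    import struct
    import can  # python-can

    STEER_CMD_ID = 0x2A0  # placeholder ID for the Steer-by-Wire (SBW) ECU

    def send_steering_setpoint(bus: can.BusABC, angle_rad: float) -> None:
        """Encode a steering setpoint as a signed 16-bit value in millirad."""
        payload = struct.pack("<h", int(angle_rad * 1000))
        bus.send(can.Message(arbitration_id=STEER_CMD_ID, data=payload,
                             is_extended_id=False))

    with can.Bus(interface="socketcan", channel="can0") as bus:
        send_steering_setpoint(bus, 0.12)  # request roughly a 6.9 degree angle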

In short, this go-kart is a first attempt at establishing a standardized open design for modular electric vehicle platforms. It provides a complete sensing solution and open-source software for autonomous-vehicle perception, localization, planning, and control tasks, along with an open-standard hardware solution that adapts a one-half-scale go-kart chassis to fill the gap between autonomous RC cars and full-size vehicles.

We see a very bright future for AV4EV, and opportunities for academic research and development will continue to grow. As AV4EV knowledge and testing continue to make significant progress, the platform can be deployed in closed vehicle systems such as hospitals, airports, and campuses, helping reduce carbon emissions, improve vehicle safety, enhance mobility, and increase business efficiency.


Connect with Prof. Rahul Mangharam at rahulm@seas.upenn.edu and UPenn’s Department of Electrical and Systems Engineering to benefit from a wide range of collaborations, knowledge sharing, and partnerships across industries. As a research institute, we actively engage our network through meetings, conferences, and workshops designed by senior faculty to provide cutting-edge insights into the newest developments.

The post Introducing Autonomous Vehicles for Electric Vehicles (AV4EV) appeared first on Autoware.
