Cloud2Edge Elastic AI Framework for Prototyping and Deployment of AI Inference Engines in Autonomous Vehicles

23rd September 2020

Authors: Sorin Grigorescu, Tiberiu Cocias, Bogdan Trasnea, Andrea Margheri, Federico Lombardi and Leonardo Aniello


Abstract: Self-driving cars and autonomous vehicles are revolutionizing the automotive sector, shaping the future of mobility altogether. Although the integration of novel technologies such as Artificial Intelligence (AI) and Cloud/Edge computing provides golden opportunities to improve autonomous driving applications, the whole prototyping and deployment cycle of AI components needs to be modernized accordingly. This paper proposes a novel framework for developing so-called AI Inference Engines for autonomous driving applications based on deep learning modules, where training tasks are deployed elastically over both Cloud and Edge resources, with the purpose of reducing the required network bandwidth as well as mitigating privacy issues. Based on our proposed data-driven V-Model, we introduce a simple yet elegant solution for the development cycle of AI components, where prototyping takes place in the Cloud according to the Software-in-the-Loop (SiL) paradigm, while deployment and evaluation on the target ECUs (Electronic Control Units) are performed as Hardware-in-the-Loop (HiL) testing. The effectiveness of the proposed framework is demonstrated using two real-world use cases of AI inference engines for autonomous vehicles, namely environment perception and most probable path prediction.

The EB-AI Framework

The diagram in the figure below illustrates the development workflow of an AI Inference Engine based on the EB-AI framework, according to our own data-driven V-Model. Once the specific problem to solve has been defined and enough data has been collected, that data is ingested at the prototyping level by a DNN, which sits at the core of the inference engine. During this stage, the DNN is trained, evaluated and refined within the Cloud according to the Software-in-the-Loop (SiL) principles. Finally, the inference engine is deployed on a target Edge device inside a vehicle and evaluated again in real-world scenarios as Hardware-in-the-Loop (HiL). This process allows us to refine the engine even further, following a continuous feedback loop aimed at improving applications over their entire life span. The novelty of the AI Inference Engine concept lies in the effective integration of the DNN's training, evaluation as SiL, and testing using the HiL paradigm.
[Figure: Cloud2Edge diagram of the EB-AI development workflow]
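To make the workflow concrete, the following Python sketch mimics the control flow of this feedback loop. All functions here are hypothetical stand-ins rather than part of the EB-AI framework; they only simulate training and evaluation outcomes so that the loop itself can be executed.

    import random

    # Hypothetical stand-ins for the V-Model stages; they only mimic outcomes.
    def train_in_cloud(dataset):
        """SiL prototyping stage: train the DNN on Cloud resources."""
        return {"trained_on": len(dataset)}  # dummy 'model'

    def evaluate_sil(model):
        """Cloud-side evaluation against simulated scenarios."""
        return random.uniform(0.7, 1.0)  # dummy accuracy score

    def deploy_and_evaluate_hil(model):
        """Deploy on the target ECU and evaluate in real-world scenarios."""
        score = random.uniform(0.6, 1.0)  # the real world is harder
        new_sensor_data = ["frame_%d" % i for i in range(100)]
        return score, new_sensor_data

    def develop_inference_engine(dataset, target=0.8, max_iters=20):
        """Continuous feedback loop of the data-driven V-Model."""
        for _ in range(max_iters):
            model = train_in_cloud(dataset)    # prototype in the Cloud
            if evaluate_sil(model) < target:
                continue                       # refine before touching hardware
            score, new_data = deploy_and_evaluate_hil(model)
            if score >= target:
                return model                   # engine ready for release
            dataset += new_data                # feed real-world data back in
        raise RuntimeError("accuracy target not reached within max_iters")

    engine = develop_inference_engine(["sim_frame_%d" % i for i in range(1000)])

Note how real-world data gathered during HiL evaluation is appended to the training set, which is what keeps the application improving over its life span.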
A key aspect to consider in the EB-AI workflow is the huge amount of data required to train DNNs. This data can be either synthetically generated (first stage) or collected via real sensors mounted on test vehicles (last stage). In both cases, the data has to be made available at the training stage. Although the training of DNNs can be highly parallelized, with unprecedented levels of parallelization reached by leveraging the recent wide availability of GPUs, it is still prohibitive for most organizations to have enough computational power in-house to process the amount of data required by autonomous driving applications. As a consequence, all major Cloud computing providers offer commoditized AI solutions (e.g., IBM's Watson AI, Amazon AI, Microsoft's Azure AI and Google's Cloud AI) with scalable training facilities. However, the following limitations hold at the AI Inference Engine prototyping stage, when large quantities of data have to be uploaded to the Cloud:

  • data upload impracticality due to bandwidth bottlenecks and latency;
  • difficult enforcement of privacy requirements (e.g., GDPR), since part of the collected raw data can be sensitive and thus cannot be shared with Cloud providers.
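A natural way to ease both limitations, in the spirit of the elastic Cloud/Edge training outlined in the abstract, is to keep raw sensor data on the Edge and exchange only model parameters with the Cloud. The sketch below illustrates this idea in the style of federated averaging on a toy linear model; it is an illustrative assumption on our part, not the exact algorithm proposed in the paper.

    import numpy as np

    # Illustrative federated-averaging-style scheme: raw data never leaves
    # the Edge nodes, only model weights cross the network, easing both
    # bandwidth and privacy (GDPR) concerns.

    def local_gradient(w, X, y):
        """Mean-squared-error gradient for a linear model (DNN stand-in)."""
        return 2.0 * X.T @ (X @ w - y) / len(y)

    def edge_training_step(w, X, y, lr=0.05, local_steps=5):
        """Runs entirely on an Edge node; only updated weights are returned."""
        w = w.copy()
        for _ in range(local_steps):
            w -= lr * local_gradient(w, X, y)
        return w

    def cloud_aggregate(edge_weights):
        """Cloud-side step: average the per-vehicle model updates."""
        return np.mean(edge_weights, axis=0)

    rng = np.random.default_rng(0)
    true_w = np.array([1.5, -2.0, 0.5])
    weights = np.zeros(3)

    # Simulate three vehicles, each holding private sensor data on the Edge.
    edges = []
    for _ in range(3):
        X = rng.normal(size=(50, 3))
        edges.append((X, X @ true_w + rng.normal(scale=0.1, size=50)))

    for _ in range(20):  # communication rounds
        updates = [edge_training_step(weights, X, y) for X, y in edges]
        weights = cloud_aggregate(updates)  # only weights cross the network

    print("learned:", weights.round(2), "target:", true_w)

In such a scheme, the network traffic per round is proportional to the model size rather than to the volume of collected sensor data, and the raw recordings stay under the data owner's control.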
You may find the article on the journal's website, as well as on arXiv.