Computer Vision Research Center, National Yang-Ming Chiao-Tung University
Summary: Development of AI Platform for Smart Drone - Intelligent Flight
Thanks to its high mobility and ability to fly, the drone has inspired more and more innovative applications and services in recent years. The goal of this project is to solve the problem of blindly flying an unmanned aerial vehicle (UAV, a drone in our case) when it is out of human sight or beyond the range of wireless communication. Three major research and development directions are considered in this project, applying three artificial intelligence (AI) technologies: smart sensing, smart control, and smart simulation.
Smart sensing - a flight system is developed that can avoid obstacles, complete a flight mission, and land safely.
Smart control - an intelligent flight control system and a lightweight somatosensory vest are developed.
Smart simulation - a cost-effective training system and a 3D model simplification method are designed.

Smart UAV technologies list:
Based on these AI technologies, the project contributes at least the following major innovations and benefits.
In smart sensing, we develop:
- A tiny-object detection system for vehicles and humans.
- A parking lot detection system using the UAV camera.
- A building detection and recognition system.
- A real-time stereo distance estimation technology on an embedded system.
- A single-camera distance estimation technique.
- An object tracking system.
- A precision landing technology that enables the UAV to land on an A4-sized area.
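The single-camera and stereo distance estimation items above both rest on the pinhole camera model. As a minimal illustrative sketch (not the project's actual implementation), monocular distance can be estimated from a known real-world object height, and stereo depth from the disparity between a rectified image pair; all function names and numeric values here are assumptions for illustration:

```python
# Pinhole-model distance estimation (illustrative sketch, not the project's code).

def estimate_distance(focal_px: float, real_height_m: float,
                      pixel_height: float) -> float:
    """Monocular: distance (m) = focal length (px) * real height (m) / image height (px).
    Assumes the object's true height is known (e.g., an average pedestrian)."""
    if pixel_height <= 0:
        raise ValueError("pixel_height must be positive")
    return focal_px * real_height_m / pixel_height

def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Stereo: depth (m) = f * B / d for a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A 1.7 m person appearing 100 px tall with an 800 px focal length:
print(estimate_distance(800.0, 1.7, 100.0))   # 13.6 (m)
# Stereo rig: f = 800 px, baseline 0.1 m, disparity 8 px:
print(stereo_depth(800.0, 0.1, 8.0))          # 10.0 (m)
```

The focal length in pixels comes from camera calibration; on an embedded system the stereo formula is typically applied per pixel to a disparity map produced by a matching algorithm.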
In smart control, we develop:
- An obstacle avoidance system.
- A lightweight, wireless somatosensory vest.
- An autonomous flight control system.
- A UAV object delivery system.
In smart simulation, we develop:
- A VR environment control simulator.
- A third-person and first-person view simulator.
- A 3D model simplification method.
For more details, please visit our YouTube channel:

Depth Estimation via Spatiotemporal Correspondence:
Stereo matching and flow estimation are two essential tasks for scene understanding: spatially in 3D and temporally in motion. Existing approaches have focused on the unsupervised setting due to the limited resources for obtaining large-scale ground-truth data. To construct a self-learnable objective, correlated tasks are often linked together to form a joint framework. However, prior work usually utilizes independent networks for each task, preventing shared feature representations from being learned across models. In this paper, we propose a single and principled network to jointly learn spatiotemporal correspondence for stereo matching and flow estimation, with a newly designed geometric connection as the unsupervised signal for temporally adjacent stereo pairs. We show that our method performs favorably against several state-of-the-art baselines for both unsupervised depth and flow estimation on the KITTI benchmark dataset.
This technique was published at IEEE CVPR 2019.
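The self-supervised signal in this line of work is typically a photometric reconstruction error: one image of a stereo pair is warped toward the other using the predicted disparity, and the pixel-wise difference replaces ground-truth depth as the training loss. The sketch below illustrates that idea with NumPy on a grayscale pair; it is a simplified stand-in under assumed conventions (left_x = right_x + disparity), not the paper's actual network or loss:

```python
import numpy as np

def warp_horizontal(img: np.ndarray, disparity: np.ndarray) -> np.ndarray:
    """Warp the right image toward the left view using per-pixel horizontal disparity.
    img: (H, W) grayscale right image; disparity: (H, W) in pixels."""
    h, w = img.shape
    xs = np.arange(w)[None, :] - disparity      # sample positions in the right image
    xs = np.clip(xs, 0, w - 1)                  # clamp at image borders
    x0 = np.floor(xs).astype(int)
    x1 = np.clip(x0 + 1, 0, w - 1)
    frac = xs - x0
    rows = np.arange(h)[:, None]
    # Linear interpolation between the two neighboring columns.
    return (1 - frac) * img[rows, x0] + frac * img[rows, x1]

def photometric_loss(left: np.ndarray, right: np.ndarray,
                     disparity: np.ndarray) -> float:
    """Mean absolute error between the left image and the warped right image.
    A correct disparity map drives this reconstruction error toward zero."""
    return float(np.mean(np.abs(left - warp_horizontal(right, disparity))))
```

With a synthetic pair where the right image is the left shifted by a constant disparity, the loss vanishes everywhere except at the clamped border columns; in a full system the same principle extends temporally, warping across adjacent frames with predicted optical flow.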

Company Description:
Intelligent video surveillance is a cutting-edge technology that has attracted much attention in recent years. To raise Taiwan's standing in the visual surveillance industry, the CVRC of NYCU, consisting of top-notch professors from domestic universities and research institutes, has been developing key technologies ready to be transferred to manufacturers since 2004. The CVRC has achieved fruitful results over the years, developing nearly 200 core technologies and nearly 100 transferable technologies. It has become Taiwan's largest and a world-leading "visual surveillance technology collection" and "computer vision talent pool".
Technical Film
Keywords: Development of AI Platform for Smart Drone - Intelligent Flight; Smart UAV technologies list; Depth Estimation via Spatiotemporal Correspondence
Provides the latest information on AI research centers and applied industries.