• Medicine & Healthcare

    Integration of an ICU Visualization Dashboard (i-Dashboard) as a Platform to Facilitate Multidisciplinary Rounds

    Multidisciplinary rounds (MDRs) are scheduled, patient-focused communication sessions among multidisciplinary providers in the intensive care unit (ICU). The surgical ICU team of National Cheng Kung University Hospital has developed and integrated i-Dashboard as a platform to facilitate MDRs. i-Dashboard is a custom-developed visualization dashboard that supports (1) key information retrieval and reorganization, (2) time-series data display, and (3) presentation on large touchscreens during MDRs. i-Dashboard increases the efficiency of data gathering and enhances communication accuracy and information exchange in MDRs.
    intensive care unit, multidisciplinary round, visualization dashboard, large touchscreen, information management strategy
  • Semiconductor & Manufacturing

    Super-fast Convergence for Radiance Fields Reconstruction

    This NeRF-based technique achieves super-fast convergence when reconstructing a per-scene radiance field from a set of images that capture the scene with known camera poses.
    NeRF, View Synthesis, Scene Reconstruction, Computer Vision, Deep Learning
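Whatever the acceleration strategy, radiance-field methods share one core operation: alpha-compositing sampled colors along each camera ray. A minimal sketch of that accumulation (the function names are illustrative, not from the project's code):

```python
import math

def alpha_from_density(sigma, delta):
    """Convert a volume density and step length into an opacity in [0, 1]."""
    return 1.0 - math.exp(-sigma * delta)

def render_ray(alphas, colors):
    """Alpha-composite per-sample opacities and RGB colors along one ray.

    alphas[i]: opacity of sample i (front to back)
    colors[i]: RGB tuple for sample i
    Returns the accumulated ray color.
    """
    transmittance = 1.0            # fraction of light surviving so far
    out = [0.0, 0.0, 0.0]
    for a, c in zip(alphas, colors):
        weight = transmittance * a  # contribution of this sample
        for k in range(3):
            out[k] += weight * c[k]
        transmittance *= (1.0 - a)  # light remaining after this sample
    return out
```

A fully opaque first sample returns its own color; a fully transparent ray accumulates nothing.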
  • Medicine & Healthcare

    Machine Learning to Predict In-Hospital Cardiac Arrest in Patients Admitted from the Emergency Department with COVID-19 and Suspected Pneumonia

    Using machine learning algorithms, this study developed a risk stratification model for predicting the occurrence of in-hospital cardiac arrest (IHCA) events in patients admitted from the emergency department with COVID-19 and suspected pneumonia. The results showed that the model outperforms the National Early Warning Score (NEWS).
    COVID-19, machine learning, cardiac arrest
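The NEWS baseline the model is compared against is a simple rule-based aggregate over vital signs. A sketch of the standard NEWS chart (thresholds from the published Royal College of Physicians chart; the study's exact implementation may differ):

```python
def news_score(resp_rate, spo2, on_oxygen, temp_c, systolic_bp, heart_rate, alert):
    """Compute the National Early Warning Score (NEWS) from vital signs."""
    score = 0
    # Respiratory rate (breaths/min)
    if resp_rate <= 8: score += 3
    elif resp_rate <= 11: score += 1
    elif resp_rate <= 20: score += 0
    elif resp_rate <= 24: score += 2
    else: score += 3
    # Oxygen saturation (%)
    if spo2 <= 91: score += 3
    elif spo2 <= 93: score += 2
    elif spo2 <= 95: score += 1
    # Supplemental oxygen in use
    if on_oxygen: score += 2
    # Temperature (deg C)
    if temp_c <= 35.0: score += 3
    elif temp_c <= 36.0: score += 1
    elif temp_c <= 38.0: score += 0
    elif temp_c <= 39.0: score += 1
    else: score += 2
    # Systolic blood pressure (mmHg)
    if systolic_bp <= 90: score += 3
    elif systolic_bp <= 100: score += 2
    elif systolic_bp <= 110: score += 1
    elif systolic_bp <= 219: score += 0
    else: score += 3
    # Heart rate (beats/min)
    if heart_rate <= 40: score += 3
    elif heart_rate <= 50: score += 1
    elif heart_rate <= 90: score += 0
    elif heart_rate <= 110: score += 1
    elif heart_rate <= 130: score += 2
    else: score += 3
    # Consciousness (AVPU): anything other than Alert scores 3
    if not alert: score += 3
    return score
```

A learned model can outperform this because it weighs interactions among features instead of summing fixed per-parameter bands.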
  • Medicine & Healthcare

    HeaortaNet: An Automatic Pericardium/Aorta Segmentation AI Model

    HeaortaNet, a pericardium/aorta segmentation and cardiovascular risk prediction AI model, is a deep learning model based on UNet with attention gates, trained on more than 70,000 axial images with verified annotations of the pericardium and aorta. It shortens data processing from the 60 minutes required for manual segmentation of both pericardium and aorta to 0.4 seconds. The segmentation accuracy is 94.8% for the pericardium and 91.6% for the aorta. The applicability of HeaortaNet has been demonstrated by analyzing more than 5,000 non-contrast chest CT scans deposited in the mega-image bank of the National Health Insurance Databank.
    coronary artery disease, artificial intelligence, computed tomography, pericardium, aorta
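Segmentation accuracy of this kind is typically reported as an overlap score between the predicted and annotated masks (the exact metric used here is an assumption). A minimal sketch of the Dice similarity coefficient on binary masks:

```python
def dice_coefficient(pred, truth):
    """Dice similarity between two binary masks given as flat lists of 0/1.

    Dice = 2 * |pred AND truth| / (|pred| + |truth|)
    """
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * intersection / total
```

A score of 1.0 means the predicted pericardium (or aorta) voxels exactly match the annotation; 0.948 means near-complete overlap.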
  • Medicine & Healthcare

    Cardiovascular Health Guardian – Novel Pulse Wave Velocity and Personal Blood Pressure Estimation System for Smart Watch

    Our team developed an accurate pulse wave velocity (PWV) estimation algorithm that uses wrist PPG and ECG signals from wearable devices. A missing-feature imputation and ambiguous-feature resolution technique raises the availability of wrist PPG morphological features from 60% to 99.1%. A weighted pulse decomposition approach extracts five component waves, allowing more detailed waveform properties to be examined. The PWV is then estimated by an XGBoost algorithm with a hierarchical regression model.
    photoplethysmography (PPG), electrocardiography (ECG), wearable device, smart watch, pulse wave velocity
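The physical quantity behind PPG/ECG-based PWV estimation is the pulse transit time (PTT): the delay between the ECG R-peak and the arrival of the pulse (the "foot" of the PPG waveform) at the wrist, with PWV = arterial path length / PTT. A simplified sketch (the team's XGBoost pipeline replaces this direct formula with learned regression; function names here are illustrative):

```python
def ppg_foot_index(ppg):
    """Locate the pulse foot: the minimum preceding the systolic (global) peak."""
    peak = max(range(len(ppg)), key=lambda i: ppg[i])
    return min(range(peak + 1), key=lambda i: ppg[i])

def pwv_estimate(r_peak_time_s, ppg, fs_hz, path_length_m):
    """Pulse wave velocity = arterial path length / pulse transit time.

    r_peak_time_s: time of the ECG R-peak (s)
    ppg:           PPG samples starting at time 0
    fs_hz:         PPG sampling rate (Hz)
    path_length_m: estimated heart-to-wrist arterial path length (m)
    """
    foot_time = ppg_foot_index(ppg) / fs_hz
    ptt = foot_time - r_peak_time_s
    return path_length_m / ptt
```

Morphological features such as the five decomposed component waves refine this basic timing estimate.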
  • Service

    YOLOv7: Trainable Bag-of-Freebies Sets New State-of-the-Art for Real-Time Object Detectors

    YOLOv7 is a new generation of real-time object detector, providing a state-of-the-art real-time object detection architecture for platforms ranging from edge devices to cloud computing.
    object detection, edge computing, cloud computing
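Like most real-time detectors, YOLO-family models rely on intersection-over-union (IoU) and non-maximum suppression (NMS) to turn raw box predictions into final detections. A minimal sketch of that post-processing step:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-maximum suppression; returns indices of kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        # Keep a box only if it does not heavily overlap an already-kept one.
        if all(iou(boxes[i], boxes[j]) < iou_threshold for j in keep):
            keep.append(i)
    return keep
```

Production detectors run a batched, GPU-side variant of the same greedy loop.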
  • Service

    Embedding multimodal machine intelligence in the digital life of AI technology

    This project collaborates with an international team to collect a very large-scale Chinese emotional corpus. On the technical side, it also investigates the fairness of speech emotion recognition, to address social issues that may arise around the usability of emotion recognition. In particular, the team found that existing database annotations are labeled from a gender-biased perspective, which leads to bias in the trained models. To solve this problem, the team has achieved preliminary results in developing fairness-aware techniques, which will be submitted for publication in the near future.
    Emotional Corpus, Fairness Algorithm, Speech Emotion Recognition
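One standard way to quantify the gender bias described above is the demographic-parity gap: the difference in how often each group receives a given predicted label. A sketch of that measurement (the project's actual fairness criterion may differ):

```python
def rate_per_group(predictions, groups, target_label):
    """Fraction of each group's samples that received target_label."""
    counts, hits = {}, {}
    for pred, g in zip(predictions, groups):
        counts[g] = counts.get(g, 0) + 1
        hits[g] = hits.get(g, 0) + (pred == target_label)
    return {g: hits[g] / counts[g] for g in counts}

def demographic_parity_gap(predictions, groups, target_label):
    """Max difference in the target_label rate across groups (0 = parity)."""
    rates = rate_per_group(predictions, groups, target_label)
    return max(rates.values()) - min(rates.values())
```

A large gap for a label such as "angry" between male and female speakers signals exactly the annotation bias the project reports.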
  • Service

    Deep Reinforcement Learning in Autonomous Miniature Car Racing

    This project develops a high-performance end-to-end reinforcement learning training platform for autonomous miniature car racing. With this platform, our team won the championship of Amazon DeepRacer, a worldwide autonomous racing competition. In addition, by combining various reinforcement learning algorithms and frameworks, our self-developed autonomous racing platform can operate at much higher speeds, surpassing the performance of Amazon DeepRacer.
    reinforcement learning, artificial intelligence, autonomous driving, auto racing, AWS
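In DeepRacer-style training, most of the engineering lives in the reward function that shapes the agent's driving. An illustrative shaping function (the weights and terms are hypothetical, not the team's actual reward):

```python
def racing_reward(progress_delta, dist_from_center, track_half_width, off_track):
    """Shaped reward for one step of a racing agent (illustrative only).

    progress_delta:   track progress gained this step (fraction of a lap)
    dist_from_center: distance from the track centerline (m)
    track_half_width: half of the track width (m)
    off_track:        True if the car left the track this step
    """
    if off_track:
        return -1.0  # strong penalty; episodes typically end here
    # Reward forward progress, with a small bonus for staying centered.
    centering = 1.0 - dist_from_center / track_half_width
    return 100.0 * progress_delta + 0.1 * max(0.0, centering)
```

Tuning the balance between the progress term and the centering term is what trades lap speed against stability.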
  • Environment

    Development and application of marine exploration and ecological survey technologies under climate change

    This project establishes an AUV system capable of performing long-term underwater exploration and ecological surveys in various shallow-water areas. The system automates the collection, analysis, and recording of imaging, acoustic, and hydrological data from coral reef ecosystems in designated areas.
    Artificial Intelligence, Underwater Creatures, Coral, Underwater Ecological Survey, Marine Conservation
  • Environment

    Agricultural Literature Reading Comprehension Based on Question Generation

    With the maturity of deep learning technology, reading comprehension models (given an article and a question, the AI model automatically finds the answer to the question in the article) have become a key element of natural language applications: tasks such as knowledge extraction and knowledge graph construction can be solved with a reading comprehension model. In this project, we investigate employing a reading comprehension model to build an agricultural knowledge graph from Taiwanese agricultural literature. One challenge, however, is that existing reading comprehension models are not tailored to agricultural literature and therefore cannot be used directly. We therefore leverage question generation as a mechanism for agricultural data augmentation, and then train a literature reading comprehension model for the agricultural domain.
    Reading Comprehension Model, Question Generation, Agricultural Knowledge Graph
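The augmentation loop described above can be sketched as: pick answer spans from unlabeled agricultural text, generate a question for each span, and emit SQuAD-style (context, question, answer) training records. The question generator below is a stub standing in for the project's trained model:

```python
def generate_question(context, answer):
    """Stub standing in for a trained question-generation model."""
    return f"Which term does the passage describe as '{answer}'?"

def augment(contexts_with_answers):
    """Build SQuAD-style training records from (context, answer_span) pairs."""
    records = []
    for context, answer in contexts_with_answers:
        start = context.find(answer)
        if start < 0:
            continue  # skip spans not literally present in the context
        records.append({
            "context": context,
            "question": generate_question(context, answer),
            "answer": {"text": answer, "answer_start": start},
        })
    return records
```

The resulting records have exactly the shape a standard extractive reading comprehension model trains on, so the domain-specific model can be fine-tuned without manual annotation.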
  • Core-tech

    Data Representation and Learning for Dialogue System

    Voice assistants are becoming increasingly popular; however, because current AI-based techniques are data-inefficient, most products are still built with rule-based methods. In this project, we propose solutions for the different components of the dialogue system to improve each component's data efficiency and working efficiency.
    Dialogue system, Automatic speech recognition, Natural language understanding, Natural language generation, Text to speech
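The components named in the keywords form a pipeline: speech is recognized, understood as an intent with slots, mapped to a system action, and rendered back as text (then speech). A stub sketch of that flow, with every stage a hypothetical stand-in for a trained component:

```python
def asr(audio):
    """Automatic speech recognition stub: audio -> text."""
    return audio["transcript"]  # stand-in for a real recognizer

def nlu(text):
    """Natural language understanding stub: text -> intent + slots."""
    if "weather" in text:
        return {"intent": "get_weather", "slots": {"city": "Taipei"}}
    return {"intent": "unknown", "slots": {}}

def policy(frame):
    """Dialogue policy: choose a system action from the semantic frame."""
    if frame["intent"] == "get_weather":
        return {"act": "inform_weather", "city": frame["slots"]["city"]}
    return {"act": "clarify"}

def nlg(action):
    """Natural language generation stub: action -> response text."""
    if action["act"] == "inform_weather":
        return f"Here is the weather for {action['city']}."
    return "Sorry, could you rephrase that?"

def respond(audio):
    """One end-to-end dialogue turn: ASR -> NLU -> policy -> NLG (TTS omitted)."""
    return nlg(policy(nlu(asr(audio))))
```

Improving data efficiency per component means each of these stages needs fewer labeled examples to replace its rule-based equivalent.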
  • Core-tech

    Snippet Policy Network: Knee-Guided Neuroevolution for Multi-Lead ECG Early Classification

    In this project we proposed the first time series classification technique that considers accuracy, earliness, and varied series lengths simultaneously, combining a novel deep reinforcement learning framework with a new multi-objective neural network optimization algorithm. The technique fits the problem of early classification of cardiovascular diseases from ECG signals and is shown to deliver the best performance reported in this area.
    Early Classification, Multi-objective Optimization, Reinforcement Learning, ECG, Artificial Intelligence
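The core loop of early classification can be sketched simply: consume the ECG snippet by snippet and halt as soon as a classifier is confident enough, recording what fraction of the signal was needed. The snippet policy network learns this halting decision; here a fixed confidence threshold stands in for it (an illustrative simplification):

```python
def classify_early(snippets, classifier, confidence_threshold=0.9):
    """Process a time series snippet by snippet and halt once confident.

    classifier(prefix) -> (label, confidence) on the signal seen so far.
    Returns (label, earliness), where earliness is the fraction of the
    series consumed before halting (lower is earlier).
    """
    prefix = []
    label, confidence = None, 0.0
    for i, snippet in enumerate(snippets, start=1):
        prefix.extend(snippet)
        label, confidence = classifier(prefix)
        if confidence >= confidence_threshold:
            return label, i / len(snippets)
    return label, 1.0  # consumed the whole series without an early exit
```

Accuracy and earliness pull in opposite directions, which is why the project frames the halting policy as a multi-objective optimization problem.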
  • Service

    A comprehensive evaluation of self-supervised speech models - SUPERB

    Machines need annotations to learn, but human babies learn languages with almost no annotation. Can machines do the same? To allow machines to learn human languages from raw observations alone, as babies do, a research team in Taiwan has partnered with speech research groups at Meta, CMU, MIT, and JHU to develop a brand-new self-supervised speech processing evaluation framework: the Speech processing Universal PERformance Benchmark (SUPERB).
    Self-supervised Learning
  • Service

    Advanced Technologies for Designing Trustable AI Services

    This integrated research project follows Taiwan's 2030 Science & Technology Vision, taking the LOHAS community and inclusive technology as its major research directions. We aim to develop trustable AI technologies and introduce them into future smart services, realizing human-centric smart technology and strengthening the governance and application of emerging technologies. The project consists of 7 sub-projects led by PIs from National Taiwan University, National Tsing Hua University, and Academia Sinica, and is composed of top AI research teams. The sub-projects are divided into 3 clusters: machine learning (sub-projects 1 and 2), computer vision (sub-projects 3 and 4), and human-centric computing (sub-projects 5, 6, and 7). We address issues of bias, fairness, transparency, explainability, traceability, and so on, from the perspectives of data collection, technology, and application deployment. Each sub-project will implement specific smart services to demonstrate the benefits and practical applications of the developed technologies. The NTU Joint Research Center for AI Technology and All Vista Healthcare, an AI Innovation Research Center supported by MOST, is responsible for the management, planning, and execution of the integrated research project. We will propose a plan that can be generalized and applied to the intelligent service industry.
    Computer Vision, Human-Centric Computing, Machine Learning, Natural Language Processing, Trustable AI
  • Smart City

    Development of Theory and Systems for Robot Learning from Human Demonstration (LfD)

    This project proposes a learning from demonstration (LfD) system that allows robots not only to be taught by humans via demonstration but also to adjust their behavior by themselves.
    Learning from Demonstration (LfD), Deep Learning, Robot Calligraphy System, Six-axis Manipulator, Two-wheeled Autonomous Balancing Robot
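In its simplest form, learning from demonstration records (state, action) pairs from a human teacher and replays the action whose demonstrated state best matches the current one. A minimal nearest-neighbor sketch of that idea (the project's actual system is far richer, with deep models and self-adjustment):

```python
def record_demonstration(states, actions):
    """Store a human demonstration as a list of (state, action) pairs."""
    return list(zip(states, actions))

def cloned_policy(demonstration, state):
    """Return the action whose demonstrated state is nearest the query state.

    States are tuples of floats; distance is squared Euclidean.
    """
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    _, action = min(demonstration, key=lambda pair: sq_dist(pair[0], state))
    return action
```

Self-adjustment then means refining this cloned behavior beyond the raw demonstrations, e.g. by optimizing a task objective after imitation.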