A UAV Crowd-Collaboration Platform for AI Agricultural Surveys
Summary

Technical overview: The back end of the UAV crowd-collaboration platform for AI agricultural surveys (AIAS) supports automatic image mosaicking and modelling, and offers four breakthrough technologies: (1) tiling of very large images; (2) parallel computation; (3) standardized task specifications; (4) UAV task matchmaking. The goal is an "aerial Uber" collaboration service that can in future be applied to crop-distribution surveys, wide-area disaster surveys, agricultural insurance, surveys of illegal farmland use, and fallow-subsidy surveys.

Scientific breakthroughs: Under climate change and globalization, more efficient agricultural surveys help balance market supply and demand efficiently. Cloud and AI technologies make precision agriculture achievable through low-input, high-efficiency, sustainable management. The high mobility of unmanned aerial vehicles (UAVs) makes them an advantageous new tool for precision-agriculture surveys.

The AIAS platform's back end supports automatic mosaicking and modelling and provides four breakthrough technologies: (1) tiling of very large images; (2) parallel computation; (3) standardized task specifications; (4) UAV task matchmaking. Combined with a cloud API pipeline, (1) and (2) meet the demands of large-area flight missions (over a thousand hectares) and accelerate processing more than eightfold. The platform spares pilots the cost of outsourcing, post-processing, and software purchases, and lets clients track progress and results faster. Technology (3) standardizes the parameters a flight task requires, lowering the communication barrier between client and pilot and raising dispatch efficiency. Technology (4) matches and dispatches tasks in the cloud to satisfy both sides efficiently, lowering the entry barrier for UAV tasks and cutting execution time by 75%.

Wierzbicki (2015) showed that weather conditions during UAV flights strongly affect image quality; under adverse weather, image quality drops by 25% on average. The principal investigator (Yang, 2018) proposed quantitative image-quality indicators covering blur, brightness, and contrast: blur is assessed by how well the gray levels across object edges fit a line spread function, while the mean and standard deviation of image gray levels represent brightness and contrast. The results have been published in a leading international journal. This mechanism is integrated into the UAV matchmaking platform to screen, in a standardized way, the quality of images uploaded by professional pilots.

Integrated technical services for agriculture are urgently needed. The AIAS platform has already integrated a model that predicts the optimal rice harvest date and a technique for efficient wide-area disaster surveys. Using convolutional neural networks and deep learning on a long-term collection of UAV crop imagery, the team built a crop-damage recognition model that improves survey efficiency by 75% over the existing UAV workflow; the work has appeared in two leading international journals (2018, 2020). The technology reduces the heavy manpower that crop surveys require and advances the goal of deployable UAV agricultural applications across regions and crop types.

The AIAS platform integrates the requirements of clients and pilots and clearly defines the items an aerial-photography task needs, improving the efficiency of the UAV industry. As agricultural AI analysis develops further, the platform can extend to: (1) crop-distribution surveys; (2) wide-area crop-disaster surveys; (3) agricultural-insurance surveys; (4) surveys of illegal farmland use; and (5) fallow-subsidy surveys, forming key infrastructure for deploying smart agriculture.

Industrial applications: Agricultural survey reporting covers planted crops, harvested area, yield, disaster losses, and the area of sensitive crops. About 800,000 hectares are surveyed each year, relying on experts to inspect fields one by one on the ground, which is slow, labor-intensive, and prone to human bias. UAVs are developing rapidly in agricultural management and analysis; their mobility and their fast, precise collection of wide-area imagery make them an efficient survey tool. Taiwan already has more than two thousand licensed UAV pilots, yet most lack missions to fly; a platform that matches the two sides is urgently needed.

The AIAS platform runs large-image tiling and parallel computation to deliver UAV image-processing services and task matchmaking efficiently, with the following advantages: (1) matching tasks to qualified local pilots reduces long-distance travel by out-of-area pilots, is expected to triple job opportunities, and cuts time and travel costs so that a task needs only 25% of the current time; (2) standardized image-quality checks screen UAV imagery for clients automatically; (3) as soon as a pilot completes a task, the system checks whether a re-flight is needed and reports progress to the client, which is expected to halve the time lost to errors and repeated confirmation; (4) orthomosaic production and visualization features such as vegetation indices save each pilot tens of thousands of NT dollars in outsourced image processing or commercial software.

Agricultural disasters and compensation bear directly on farmers' livelihoods. The government has enacted the Agricultural Natural Disaster Relief Regulations and the Agricultural Insurance Act, under which 21 categories of agricultural insurance are now offered. Current damage assessment is manual and subjective, lacks a consistent evaluation procedure, and scales poorly to wide-area disasters; insurers therefore struggle to design suitable products, and farmers' confidence in, and willingness to buy, agricultural insurance remain low.

The AIAS platform integrates agricultural AI analysis into value-added agricultural information services, such as harvest-date prediction and high-efficiency damage analysis, bringing digital analysis into industrial practice. Harvest recommendations can raise the selling price of paddy rice by more than NT$2,000 per hectare. In 2020 the team used this AI damage-analysis technology to survey disaster losses in Houbi District for the Agriculture Bureau of the Tainan City Government, at only 75% of the original cost and with a large reduction in manpower. The UAV damage-analysis model will help accelerate the rollout of agricultural insurance, letting insurers survey damage and settle claims efficiently, with imagery retained on the platform for easy browsing and review.
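The brightness and contrast measures above are fully specified (gray-level mean and standard deviation). A minimal sketch of such a screening check might look like the following; the thresholds are illustrative assumptions, and a simple gradient statistic stands in for the published line-spread-function blur fit:

```python
import numpy as np

def brightness(gray):
    """Mean gray level -- the brightness measure described above."""
    return float(gray.mean())

def contrast(gray):
    """Standard deviation of gray levels -- the contrast measure."""
    return float(gray.std())

def sharpness(gray):
    """Proxy for blur: mean gradient magnitude over the frame.
    (The published method fits edge gray levels to a line spread
    function; this simpler statistic moves in the same direction.)"""
    gy, gx = np.gradient(gray.astype(float))
    return float(np.hypot(gx, gy).mean())

def passes_check(gray, min_bright=40, max_bright=220,
                 min_contrast=20, min_sharp=2.0):
    """Accept or reject an uploaded frame against fixed thresholds
    (threshold values here are illustrative, not the platform's)."""
    return (min_bright <= brightness(gray) <= max_bright
            and contrast(gray) >= min_contrast
            and sharpness(gray) >= min_sharp)
```

A well-exposed, sharp frame passes all three gates, while a washed-out or blurred frame trips at least one and is flagged for a re-flight.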
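The tiling and parallel-computation steps can likewise be sketched. The platform's actual tile size, per-tile operations, and cloud fan-out are not public, so the window size, the per-tile job, and the use of a local thread pool below are all illustrative assumptions standing in for the cloud workers:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor
from itertools import product

TILE = 256  # tile edge in pixels (illustrative choice)

def tiles(h, w, size=TILE):
    """Yield (row, col, height, width) windows that partition an h x w raster."""
    for r, c in product(range(0, h, size), range(0, w, size)):
        yield r, c, min(size, h - r), min(size, w - c)

def process_tile(args):
    """Stand-in per-tile job (here: the tile's mean pixel value); in a real
    pipeline this slot would hold correction, matching, or blending work."""
    arr, (r, c, th, tw) = args
    return r, c, float(arr[r:r + th, c:c + tw].mean())

def map_tiles(arr, workers=4):
    """Fan tiles out to a pool of workers and gather the results."""
    jobs = [(arr, win) for win in tiles(*arr.shape)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process_tile, jobs))
```

Because tiles are independent, the same fan-out works across processes or cloud nodes, which is where a many-fold speedup on thousand-hectare mosaics would come from.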
Market-research firm Tractica estimates that the UAV market will reach US$46.7 billion by 2025, with smart agriculture as a key segment. Japan's 日亞集團 and the US firm DroneDeploy offer cloud-based UAV image-processing solutions covering flight planning, map production, and mosaicking, but neither provides flight-task matchmaking. Several professional pilots have already been invited to register on the AIAS platform to test and refine its workflow. At least a hundred licensed pilots are expected to register, building an "aerial Uber" collaboration service that extends to: (1) crop-distribution surveys; (2) wide-area crop-disaster surveys; (3) agricultural-insurance surveys; (4) surveys of illegal farmland use; and (5) fallow-subsidy surveys, with substantial cross-border commercial potential.
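A minimal sketch of one plausible ingredient of the localized matchmaking described above is distance-based ranking of nearby pilots. The field names, coordinates, and 50 km radius below are illustrative assumptions, not the platform's actual matching rules:

```python
from math import radians, sin, cos, asin, sqrt

def km(a, b):
    """Great-circle (haversine) distance in km between (lat, lon) pairs."""
    la1, lo1, la2, lo2 = map(radians, (*a, *b))
    h = sin((la2 - la1) / 2) ** 2 + cos(la1) * cos(la2) * sin((lo2 - lo1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def match(task, pilots, max_km=50):
    """Return the ids of pilots based within max_km of the task site,
    nearest first (radius and record layout are illustrative)."""
    nearby = [(km(task["site"], p["base"]), p["id"]) for p in pilots]
    return [pid for d, pid in sorted(nearby) if d <= max_km]
```

Ranking by distance keeps dispatch local, which is what cuts the travel cost and idle time the platform targets; a production matcher would also weigh qualifications, equipment, and availability.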
Keyword | Future Tech Expo (未來科技館), Ministry of Science and Technology (科技部)
Research Project | |||
Research Team |