Visually Impaired Navigation Dialogue System with Multiple AI Models
Summary The dialogue system is the core subsystem of the navigation system for the visually impaired: through multi-turn dialogue it provides destinations to the navigation system, using a knowledge graph as the basis for reasoning. For close-range navigation, deep learning is used to develop a depth-estimation algorithm for RGB cameras and an indoor semantic segmentation algorithm, and the two are integrated for indoor obstacle avoidance. The whole system uses the CellS software design framework to integrate the distributed AIoT subsystems.
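The obstacle-avoidance step described above fuses two per-pixel outputs: a depth map estimated from the RGB camera and a semantic segmentation mask. A minimal sketch of that fusion is shown below; the class IDs, distance threshold, and toy arrays are illustrative assumptions, not values from the project.

```python
import numpy as np

# Hypothetical class IDs for the indoor segmentation model (assumed).
FLOOR, WALL, FURNITURE, DOOR = 0, 1, 2, 3
OBSTACLE_CLASSES = (WALL, FURNITURE)
NEAR_THRESHOLD_M = 1.0  # warn when an obstacle is closer than 1 metre (assumed)

def nearby_obstacles(depth_m: np.ndarray, seg: np.ndarray) -> np.ndarray:
    """Boolean mask of pixels that are both an obstacle class and near."""
    is_obstacle = np.isin(seg, OBSTACLE_CLASSES)
    return is_obstacle & (depth_m < NEAR_THRESHOLD_M)

# Toy 2x3 scene: a piece of furniture 0.8 m away on the right side.
depth = np.array([[2.0, 2.0, 0.8],
                  [2.0, 2.0, 0.8]])
seg = np.array([[FLOOR, WALL, FURNITURE],
                [FLOOR, FLOOR, FURNITURE]])
mask = nearby_obstacles(depth, seg)
print(int(mask.sum()))  # -> 2 near-obstacle pixels (the furniture column)
```

In a real pipeline the two arrays would come from the depth-estimation and segmentation networks run on the same frame; the fusion itself stays this simple elementwise check.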

This technology integrates semantic recognition into the dialogue system and implements a dialogue system that uses a knowledge graph as the basis for reasoning. An unsupervised neural network is used to cluster the knowledge graph, and a deep neural network then performs semantic recognition. Under an MQTT architecture, CellS is used to implement the dialogue engine and to control multiple AI models. Performance experiments show that our application can be accelerated by up to 1.94 times.
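The two-stage idea above (unsupervised clustering over the knowledge graph, then mapping an input to a cluster for semantic recognition) can be sketched with plain k-means on toy node embeddings. The embeddings, cluster count, and initialisation below are assumptions for illustration; the project uses neural models rather than this simplified version.

```python
import numpy as np

def kmeans(X: np.ndarray, init: np.ndarray, iters: int = 20) -> np.ndarray:
    """Minimal k-means: returns centroids, one per semantic cluster."""
    centroids = init.astype(float).copy()
    for _ in range(iters):
        # Assign each node embedding to its nearest centroid.
        labels = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        for j in range(len(centroids)):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids

# Toy 2-D embeddings of knowledge-graph nodes: two separated groups,
# e.g. "locations" vs "actions" (purely hypothetical semantics).
nodes = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
                  [3.0, 3.0], [3.1, 3.0], [3.0, 3.1]])
centroids = kmeans(nodes, init=nodes[[0, 3]])

def classify(query: np.ndarray) -> int:
    """Coarse semantic recognition: nearest-cluster lookup for an utterance embedding."""
    return int(np.argmin(((centroids - query) ** 2).sum(-1)))

print(classify(np.array([0.05, 0.0])))  # -> 0 (first cluster)
print(classify(np.array([3.0, 3.05])))  # -> 1 (second cluster)
```

In the described system the second stage is a deep neural network rather than a nearest-centroid lookup, but the control flow — cluster offline, classify online — is the same.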

This technology is applicable to any human-computer interaction system that requires voice-triggered activation, such as popular voice assistant products like Google Assistant, Apple's Siri, Microsoft's Cortana, and Amazon's Alexa. In addition to improving system security, the low computational complexity and memory requirements make it well suited to low-power, low-cost microcontroller-based product designs.
Keywords: Voice, Digital content and digital learning, Intelligent information systems, Software-defined networking, Interdisciplinary integration