• Topics

    OECD: Leveraging AI, big data analytics and people to fight untruths online

    People originally held great expectations for the advent of the Internet, presuming that a brave new world with a free flow of information would no longer be a utopian daydream. The truth, however, is quite the opposite: because of echo-chamber effects, that ideal cannot be realised easily. Scholars have therefore begun to typologise fake news and to devise viable policies to turn around the chaotic state of our cyber world.

    Lies, misleading information, conspiracy theories, and propaganda have existed since the serpent tempted Adam and Eve to take that fateful bite of the apple. What has changed the dynamic is the Internet, which makes producing and disseminating the collection of untruths that exist today much easier and faster.

    Stopping the creators and spreaders of untruths online will play an important role in reducing political polarisation, building back trust in democratic institutions, promoting public health, and improving the well-being of people and society more generally. To do so, we must leverage the power of AI, big data analytics, and people in smart and new ways.

    The Internet’s rise as a key conduit for the spread of untruths

    The global free flow of information was one of the early Internet’s main drivers. The pioneers of the Internet’s architecture viewed an open, interconnected, and decentralised Internet as a vehicle to bridge knowledge gaps worldwide and level the information playing field. But despite these idealistic beginnings, societies across the world are now confronted with a dystopian reality: the Internet has reshaped and amplified the ability to create and disseminate false and misleading information in ways that we are only just beginning to understand.

    Inaccurate and misleading information online became a problem at the same time that the Internet became a major news source. Data show that the share of people in the European Union who read online newspapers and news magazines nearly doubled from 2010 to 2020, with the percentage of individuals aged 16 to 75 consuming news online rising to 65%. Likewise, research suggests that 86% of adult Americans access news on a digital device, and digital is the preferred medium for half of all Americans.

    Researchers from MIT Sloan show that untrue tweets were 70% more likely to be retweeted than those stating facts, and that false and misleading content reaches its first 1,500 viewers faster than true content.

    Echo chambers and filter bubbles contribute to the wide-scale circulation of untruths online. While these phenomena exist in the analogue world – for example, newspapers with a particular political inclination – it is easier and faster to spread information online. User-specific cookies that log individuals’ preferences, memberships in social networks and other links to people or groups help reinforce the type of content that people see. When users consistently interact with or share the specific types of content that reinforce their existing biases, the echo chambers that confirm those biases emerge and grow.
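    The feedback loop described above can be sketched as a toy simulation (hypothetical topic names and parameters, not any platform's actual recommender): a system that mostly serves a user's most-engaged topic, with occasional random exploration, quickly concentrates the feed on one topic.

```python
import random

def recommend(preferences, topics, epsilon=0.1):
    # With probability epsilon, explore a random topic; otherwise exploit
    # the topic the user has engaged with the most.
    if random.random() < epsilon:
        return random.choice(topics)
    return max(topics, key=lambda t: preferences[t])

def simulate(steps=1000):
    topics = ["politics_a", "politics_b", "sports"]
    prefs = {t: 1.0 for t in topics}  # start with no strong preference
    for _ in range(steps):
        shown = recommend(prefs, topics)
        prefs[shown] += 1.0  # each impression reinforces the shown topic
    return prefs
```

    Running the simulation, one topic rapidly accumulates nearly all impressions – a crude analogue of the echo chambers described above.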

    Untruths come in many different shapes and sizes

    Often, terms like “misinformation” (false or misleading content shared without the intent to harm) and “disinformation” (deliberately fabricated untrue content designed to deceive) are used loosely in policy circles and the mainstream media. This creates confusion as to what untruths really are and what needs to be done to stop them. It all depends on the context, source, intent and purpose.

    Establishing clear and shared definitions for different types of untruths can help policy makers design well-targeted policies and facilitate efforts to measure untruths and improve the evidence base. Our new paper provides definitions for the range of untruths that circulate online:
    1. Disinformation,
    2. Misinformation,
    3. Contextual deception,
    4. Propaganda,
    5. Satire.

    These definitions support a typology of false and misleading content that can be differentiated along two axes:
    1. the information spreader’s intent to cause harm
    2. the degree of fabrication by the information creator – altering photos, writing untrue texts and articles, creating synthetic videos, etc. (Figure 1).
    These distinctions are important because some types of false and misleading content are not deliberately created to deceive (e.g. satire) or they are not spread to intentionally inflict harm (e.g. misinformation).
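    As a sketch, the two-axis typology can be encoded as a small lookup table. The placement of each category in the grid below is our illustrative reading of the two axes, not a reproduction of Figure 1:

```python
from enum import Enum

class Intent(Enum):
    HARMFUL = "spread with intent to harm"
    BENIGN = "spread without intent to harm"

class Fabrication(Enum):
    HIGH = "deliberately fabricated content"
    LOW = "little or no fabrication"

# Hypothetical mapping of the paper's five categories onto the two axes.
TYPOLOGY = {
    (Intent.HARMFUL, Fabrication.HIGH): "disinformation",
    (Intent.HARMFUL, Fabrication.LOW): "contextual deception / propaganda",
    (Intent.BENIGN, Fabrication.HIGH): "satire",
    (Intent.BENIGN, Fabrication.LOW): "misinformation",
}

def classify(intent: Intent, fabrication: Fabrication) -> str:
    return TYPOLOGY[(intent, fabrication)]
```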

    Figure 1. Typology of false and misleading content.

    From a policy perspective, it is important to differentiate the creators from the spreaders. Some policies may be better suited to addressing false content creators, particularly considering that some spreaders disseminate falsehoods unknowingly (misinformation) or as part of societal or political commentary (satire). Before devising policy options, however, policy makers need to consider where and how false information spreads online.

    AI, big data analytics and people are part of the problem and the solution

    A particular characteristic of the digital age is that false information spreads through digital technologies that were created for entirely different purposes. For example, AI-based tools built to increase user engagement, monitor user interaction and deliver curated content now help false and inaccurate information to circulate widely. The use of algorithms and AI-based approaches to curate content makes it difficult to track the sources of untruths online, and even more complicated to monitor their flow or stop their spread. For example, sophisticated disinformation attacks use bots, trolls and cyborgs specifically aimed at the rapid dissemination of untruths. Efforts to stop these harmful practices require transparency about how these technologies work.

    Existing approaches to reducing untruths online often depend on manual fact-checking, content moderation and takedown, and quick responses to attacks. All of this involves human intervention and allows for a finer-grained assessment of the accuracy of content. Collaborations between independent, domestic fact-checking entities and platforms can also help identify untruths (e.g. DIGI in Australia and FactCheck Initiative Japan) and can be useful for ensuring that cultural and linguistic considerations are taken into account.

    While human understanding is essential to interpreting specific content in the context of cultural sensitivities and belief or value systems, monitoring online content in real-time is a mammoth task that may not be entirely feasible without AI and big data analytics. The automation of certain content moderation functions and the development of technologies that embed them “by design” could considerably enhance the efficacy of techniques used to prevent the spread of untruths online. However, such approaches often provide less nuance on the degree of accuracy of the content (i.e. content is usually identified as either “true” or “false”).

    Such approaches would also benefit from partnerships between local fact-checking entities and online platforms to ensure cultural and linguistic biases are addressed. Advanced technologies such as automated fact-checking, natural language processing and data mining can be leveraged to detect creators of inaccurate information and prevent sophisticated disinformation attacks. However, the spreaders of untruths have found ways to circumvent such approaches (e.g. by using images rather than words).

    Innovative tools that use AI and big data analytics

    In this regard, the transparent use of digital technologies by online platforms to identify and remove untrue content can improve the dissemination of accurate information. For example, Chequeabot is a bot that incorporates natural language processing and machine learning to identify claims made in the media and match them with existing fact checks. Chequeabot is notable in that it works in Spanish and was developed by professional fact-checkers.
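    A greatly simplified version of this claim-matching idea can be sketched with bag-of-words cosine similarity (illustrative only; Chequeabot's actual NLP pipeline is not reproduced here):

```python
import math
from collections import Counter

def vectorize(text):
    # Bag-of-words term counts; real systems use trained language models.
    return Counter(text.lower().split())

def cosine(a, b):
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def match_claim(claim, fact_checks, threshold=0.5):
    """Return the closest previously fact-checked claim, or None if nothing is similar."""
    cv = vectorize(claim)
    best, best_score = None, 0.0
    for fc in fact_checks:
        score = cosine(cv, vectorize(fc))
        if score > best_score:
            best, best_score = fc, score
    return best if best_score >= threshold else None
```

    A new claim is matched against the archive only when the similarity clears a threshold, so unrelated statements are left for human fact-checkers rather than force-matched.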

    Another example is Google’s collaboration with Full Fact to build AI-based tools that help fact-checkers verify claims made by key politicians. Full Fact groups the content by topic and matches it with similar claims from across the press, social networks and even radio, with the help of speech-to-text technology. These tools help Full Fact process 1,000 times more content, detecting and clustering over 100,000 claims per day. Importantly, they give Full Fact’s fact-checkers more time to verify facts rather than identify which facts to check. Using a machine-learning BERT-based model, the technology now works in four languages (English, French, Portuguese and Spanish).
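    The detect-and-cluster step can be illustrated with a minimal greedy grouping over bag-of-words vectors – a simple stand-in for the BERT-based embeddings Full Fact actually uses:

```python
import math
from collections import Counter

def bow(text):
    return Counter(text.lower().split())

def cosine(a, b):
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values()) * sum(v * v for v in b.values()))
    return num / den if den else 0.0

def cluster_claims(claims, threshold=0.6):
    """Greedy single-pass clustering: each claim joins the first cluster
    whose representative it resembles, otherwise it starts a new cluster."""
    clusters = []  # list of (representative_vector, member_claims)
    for claim in claims:
        v = bow(claim)
        for rep, members in clusters:
            if cosine(v, rep) >= threshold:
                members.append(claim)
                break
        else:
            clusters.append((v, [claim]))
    return [members for _, members in clusters]
```

    Grouping near-duplicate claims this way means each cluster needs only one fact check, which is how such tools multiply a fact-checking team's throughput.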

    With better understanding, AI and big data analytics can more effectively combat untruths

    AI solutions can accurately detect many types of false information and recognise disinformation tactics deployed through bots and deepfakes. They are also more cost-effective because they reduce the time and human resources required for detecting and removing false content. However, the effective use of AI for countering untruths online depends on large volumes of data as well as supervised learning. Without these, AI tools run the risk of identifying false positives and reinforcing or even creating biases.
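    To make the supervised-learning point concrete, here is a minimal Naive Bayes text classifier with hypothetical labels and training data. With this little data it will readily produce the false positives the paragraph warns about:

```python
import math
from collections import Counter, defaultdict

class NaiveBayes:
    """Minimal multinomial Naive Bayes over whitespace tokens."""
    def __init__(self):
        self.word_counts = defaultdict(Counter)  # label -> word frequencies
        self.label_counts = Counter()            # label -> document count

    def train(self, texts, labels):
        for text, label in zip(texts, labels):
            self.label_counts[label] += 1
            self.word_counts[label].update(text.lower().split())

    def predict(self, text):
        words = text.lower().split()
        vocab = len({w for c in self.word_counts.values() for w in c})
        best, best_lp = None, float("-inf")
        for label, n in self.label_counts.items():
            total = sum(self.word_counts[label].values())
            lp = math.log(n / sum(self.label_counts.values()))  # prior
            for w in words:  # Laplace-smoothed likelihoods
                lp += math.log((self.word_counts[label][w] + 1) / (total + vocab))
            if lp > best_lp:
                best, best_lp = label, lp
        return best
```

    Production systems rest on the same principle but need far larger labelled corpora; this sketch exists to show why small or skewed training data leads to biased, overconfident labels.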

    At the same time, the technical limitations of AI and other technologies point toward the need to adopt a hybrid approach that integrates both human intervention and technological tools in fighting untruths online. Approaches that marry technology-oriented solutions and human judgement may be best suited to ensure both efficient identification of problematic content and potential takedown after careful human deliberation, taking into account all relevant principles, rights and laws such as those regarding fundamental human rights.

    A better understanding of the complex and intertwined issues surrounding untruths online is urgently needed to develop best-practice policies for addressing this important problem. In the meantime, concrete steps can be taken to begin the fight.

    Tackling untrue content online requires a multistakeholder approach where people, firms, and governments all play active roles to identify and remove inaccurate content online. Everyone should be encouraged to exercise good judgment before sharing information online. While technology can take us a long way, people are still an important part of the solution.


    EU introduces digital twin technology to support its digital and green twin transition

    The EU has launched the Destination Earth project, which attempts to model, monitor and simulate natural phenomena and human activities on Earth through digital twin technology. The model will focus on climate change, the water and marine environment, polar regions, the cryosphere, biodiversity and the impact of extreme weather events, and will then explore possible adaptation and mitigation strategies to achieve European sustainable development goals.

    After taking office in 2019, the current European Commission set the promotion of the green and digital “twin transition” as a main goal of EU policy, emphasising that the two transitions are not independent but complementary. To implement the twin transition and achieve the 2050 climate-neutrality goal, the Commission updated its coordinated plan on AI in 2021 and proposed four policy pillars. One of them, “build strategic leadership in high-impact areas”, lists seven key areas, one of which is “let AI contribute to the climate and the environment”. It aims to optimise applications in fields such as energy, transportation and production, and to reduce energy consumption and greenhouse gas emissions by introducing innovative AI and smart big-data solutions.

    Among the many innovative AI application projects, the largest is Destination Earth. Launched in early 2022, this project aims to develop a high-precision digital model of the Earth and to use digital twin technology to model, monitor and simulate natural phenomena and human activities over the next 10 years. The model focuses on the impacts of climate change, the water and marine environment, polar regions, the cryosphere, biodiversity and extreme weather events, as well as possible adaptation and mitigation strategies to achieve the EU’s sustainable development goals.

    The Commission will collaborate with the European Space Agency (ESA), the European Centre for Medium-Range Weather Forecasts (ECMWF) and the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) to carry out this project. Scientists will be able to study the effectiveness and impact of various climate policies and other global engineering efforts through this tool, thereby supporting decision-making and planning by countries around the world as they formulate climate-neutrality plans. Through 2024, the project is set to receive initial funding of around EUR 150 million from the Digital Europe Programme.


    Multi-value-added use and cross-domain combination of meteorological data

    Taiwan is one of the areas most vulnerable to natural disasters; typhoons, floods, landslides, droughts and earthquakes are the most frequent. Despite this increasing risk, which is due to a number of climatic and non-climatic factors, improvements in decision-support systems for disaster risk management and early warning are making it possible to limit losses from natural disasters. This would not be possible without the informed use of constantly improving meteorological, hydrological and related data and information. In addition, meteorological data can be integrated and analysed with related data collected through AIoT (Artificial Intelligence of Things), and then applied in different fields to generate data-driven innovation and economic value. According to the Global Framework for Climate Services (GFCS) of the World Meteorological Organization (WMO), the informed use of meteorological and related data and information can deliver enormous benefits to society, including agriculture, water resources, the natural environment, human health, tourism, energy, transport and communications, urban settlement and sustainable development.




    Trade war and Covid-19 accelerate the realization of smart manufacturing

    In 2018, the trade war and tech war between the United States and China had a huge impact on the global economy and promoted the reorganization of the global supply chain. In 2020, the Covid-19 pandemic hit supply and production chains even harder. Manufacturing, which relies on the operation of physical equipment, faced problems such as the inability to import materials, production-line shutdowns and the inability to export products. These uncertainties led many countries and industries to recognize the benefits of digital transformation, accelerating the intelligence and digitalization of manufacturing and helping companies develop more competitive operations and business models. IT investment in Taiwan's manufacturing industry has also risen year by year over the past three years, especially in technologies such as AI, IoT and machine learning, which are used to develop solutions for process optimization and predictive maintenance and to realize intelligent operation, production and management. Most importantly, this will strengthen the ability to cope with future challenges and create a sustainable and resilient manufacturing industry.
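    As one concrete example of predictive maintenance, a first-cut anomaly detector can flag machine-sensor readings that deviate strongly from a recent rolling window (the window size and z-score threshold below are illustrative, not tuned values):

```python
import statistics

def detect_anomalies(readings, window=20, z_threshold=3.0):
    """Flag indices whose reading deviates from the rolling window by more
    than z_threshold standard deviations - a common first step in
    predictive maintenance on vibration or temperature sensors."""
    anomalies = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mean = statistics.fmean(recent)
        stdev = statistics.stdev(recent)
        if stdev > 0 and abs(readings[i] - mean) / stdev > z_threshold:
            anomalies.append(i)
    return anomalies
```

    Flagged readings would then trigger an inspection or maintenance ticket before the equipment actually fails.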


    Reference: MarketsandMarkets, iThome


    The post-epidemic new normal accelerates the deployment of new network and information security technologies

    Under the impact of Covid-19, lockdown measures were adopted worldwide to slow the spread of the virus, and companies had to move quickly to remote work from home in order to keep operating. However, remote access has created more blind spots in cybersecurity, including the unpredictable behaviour of employees at home, the inconsistent security of personal devices and the use of home internet connections. All of these factors reduce security visibility and give cyberattacks an opening. According to the recent VMware Global Security Insights report, nearly 80% of companies have suffered cyberattacks due to working from home, and most of the cases caused serious damage. This shows that under the post-epidemic new normal, traditional cybersecurity systems are no longer adequate. The trend has also become significant as the epidemic in Taiwan heats up: according to Check Point's research, cyberattacks in the Asia-Pacific region increased by 53% during April and May 2021, and Taiwan's growth rate ranked fifth, reaching 17%. As remote work from home becomes the new normal after the epidemic and the proportion of enterprises using cloud services gradually increases, new cybersecurity technologies and solutions, such as the secure access service edge (SASE) model and identity-centric zero trust network access (ZTNA), will help enterprises redefine network security and perimeter defence.
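    The identity-centric idea behind ZTNA can be sketched as a per-request check that never trusts network location, only verified identity and device posture (the policy table, resource names and fields below are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    device_compliant: bool  # e.g. disk encryption on, OS patched
    mfa_verified: bool
    resource: str

# Hypothetical per-resource access policy: which users may reach which apps.
POLICY = {
    "payroll-app": {"alice"},
    "wiki": {"alice", "bob"},
}

def authorize(req: Request) -> bool:
    """Zero-trust style check: every single request is evaluated on identity
    and device posture, with no implicit trust for being 'inside' a network."""
    return (
        req.mfa_verified
        and req.device_compliant
        and req.user in POLICY.get(req.resource, set())
    )
```

    Unlike perimeter defence, a request from the corporate LAN and one from a home network are evaluated by exactly the same rules.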




    An aging population has accelerated the adoption of telecare in Taiwan

    According to the Population Projections for the R.O.C. (Taiwan) by the National Development Council, Taiwan became an aged society in 2018 and is projected to become a super-aged society in 2025. The aging population and its long-term care have become important issues: the elderly lack care, care resources are insufficient in rural areas, traveling long distances for care services is difficult, and the risk of spreading infectious diseases is high. Telecare systems can help improve the quality of care through continuous, automatic and remote monitoring of real-time emergencies and of lifestyle changes over time, in order to manage the risks associated with the elderly living independently. Telecare is believed to be one of the solutions for keeping older people independent and reducing medical costs in an aging society. In addition, AI technologies are transforming the health care industry and increasing the telecare market value in Taiwan. For reference, under the Smart Health-Assist (SHA) programme in Singapore, the Infocomm Development Authority (IDA) and the Ministry of Health (MOH) rolled out a pilot project in the Jurong Lake District. It is designed to record data from user-friendly sensors in the homes of the elderly and of patients suffering from chronic diseases, and to send the data securely online to healthcare providers, allowing them to monitor individuals, receive alerts and respond to emergencies.
The SHA programme's recommended solution includes:
1. Development of next-generation sensors: network-ready heart rate and blood pressure monitors that come in the form of stick-on patches, digital wrist watches, or sensors embedded in household items like pillows and rugs.
2. Networking the homes and care centres: the captured data must make its way securely to healthcare providers.
3. Decision support systems: these systems can help healthcare professionals recommend the right treatment and care plans for patients.
4. Big data analytics with national health databases: data collected by the sensors can be collated with data in national health databases, leading to more consistent delivery of evidence-based care and monitoring of key clinical and service outcomes.
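    A minimal sketch of the alerting step in such a telecare pipeline: sensor readings outside configured ranges trigger alerts to healthcare providers. The sensor names and thresholds below are illustrative placeholders, not clinical guidance:

```python
from dataclasses import dataclass

@dataclass
class Reading:
    sensor: str   # e.g. "heart_rate", "blood_pressure_systolic"
    value: float

# Illustrative alert ranges (min, max); a real system would use
# clinician-set thresholds per patient.
THRESHOLDS = {
    "heart_rate": (50.0, 110.0),
    "blood_pressure_systolic": (90.0, 160.0),
}

def check_readings(readings):
    """Return alert messages for out-of-range sensor readings."""
    alerts = []
    for r in readings:
        lo, hi = THRESHOLDS.get(r.sensor, (float("-inf"), float("inf")))
        if not lo <= r.value <= hi:
            alerts.append(f"ALERT: {r.sensor}={r.value} outside [{lo}, {hi}]")
    return alerts
```

    In a deployed system these alerts would be routed to a care centre alongside the raw data stream, giving providers the real-time visibility described above.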



    AIST launched the Global Zero Emission Research Center (GZR)

    To help solve the urgent global climate change problem, Japan hopes to create discontinuous innovations in environmental and energy technologies. The Japanese government therefore announced that it would gather global expertise and establish the Global Zero Emission Research Center (GZR) under the National Institute of Advanced Industrial Science and Technology (AIST). Established in January 2021, GZR is an international joint research base for zero emissions that takes the G20 framework as its core and cooperates with the world's top public research institutions on innovative technology research (renewable energy, storage batteries, hydrogen, CO2 separation and use, artificial photosynthesis, etc.), aiming to promote environmental technology innovation and achieve the vision of a zero-emission society. Dr. Akira Yoshino, winner of the 2019 Nobel Prize in Chemistry, was appointed director of the research center. The center's headquarters are located in the AIST Tokyo Waterfront main building, with a new research facility in the Tsukuba area. The Fukushima Renewable Energy Research Institute (FREA) is responsible for demonstrating the innovative technologies that have been developed, and the Kansai Regional Center of AIST is engaged in storage battery development.

