• Tech Trends

    Since it was proposed on July 7, 2022, YOLOv7 has become one of the trends in AI research.

    YOLOv7 has attracted wide attention since its introduction in July 2022. It received the most stars of any AI research repository on GitHub that month, and it also took fourth place on the month's Twitter AI research list. Research based on YOLOv7 has sprung up like mushrooms after the rain, with the paper cited more than 10 times in just two months. We believe that, from now on, various state-of-the-art computer vision systems will use YOLOv7 as the first step of their core pipelines, including popular technologies such as autonomous driving perception and multi-object tracking.


  • Topics

    OECD: Leveraging AI, big data analytics and people to fight untruths online

    Humanity held great expectations at the advent of the Internet, presuming that a brave new world with a free flow of information would no longer be a daydream; the reality, however, has been quite the opposite. Because of echo-chamber effects, that ideal cannot easily be realised. Scholars have therefore begun to typologise fake news and to craft viable policies to turn around the chaotic state of our cyber world.

    Lies, misleading information, conspiracy theories, and propaganda have existed since the serpent tempted Adam and Eve to take that fateful bite of the apple. What has changed the dynamic is the Internet, which makes producing and disseminating the collection of untruths that exist today much easier and faster.

    Stopping the creators and spreaders of untruths online will play an important role in reducing political polarisation, building back trust in democratic institutions, promoting public health, and improving the well-being of people and society more generally. To do so, we must leverage the power of AI, big data analytics, and people in smart and new ways.

    The Internet’s rise as a key conduit for the spread of untruths

    The global free flow of information was one of the early Internet’s main drivers. The pioneers of the Internet’s architecture viewed an open, interconnected, and decentralised Internet as a vehicle to bridge knowledge gaps worldwide and level the information playing field. But despite these idealistic beginnings, societies across the world are now confronted with a dystopian reality: the Internet has reshaped and amplified the ability to create and disseminate false and misleading information in ways that we are only just beginning to understand.

    Inaccurate and misleading information online became a problem at the same time that the Internet became a major news source. Data show that the share of people in the European Union who read online newspapers and news magazines nearly doubled from 2010 to 2020, with the percentage of individuals aged 16 to 75 consuming news online rising to 65%. Likewise, research suggests that 86% of adult Americans access news on a digital device, and digital is the preferred medium for half of all Americans.

    Researchers from MIT Sloan show that untrue tweets were 70% more likely to be retweeted than those stating facts, and that false and misleading content reaches its first 1,500 viewers faster than true content.

    Echo chambers and filter bubbles contribute to the wide-scale circulation of untruths online. While these phenomena exist in the analogue world – for example, newspapers with a particular political inclination – it is easier and faster to spread information online. User-specific cookies that log individuals’ preferences, memberships in social networks and other links to people or groups help reinforce the type of content that people see. When users consistently interact with or share the specific types of content that reinforce existing biases, the echo chambers that confirm them emerge and grow.
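The reinforcement loop described above can be sketched in a few lines: a feed ranks items by logged user preferences, and every click feeds back into those preferences, narrowing what the user sees. All names and data here (topics, items, the click model) are invented for illustration; real recommender systems are far more sophisticated.

```python
# Toy sketch of an echo-chamber feedback loop: ranking follows logged
# preferences, and clicks reinforce those same preferences.
from collections import Counter

def recommend(preferences: Counter, items: dict[str, str], k: int = 2):
    """Rank item ids by the user's preference for their topic."""
    return sorted(items, key=lambda i: preferences[items[i]], reverse=True)[:k]

prefs = Counter({"politics_left": 3, "sports": 1})
items = {"a": "politics_left", "b": "politics_right",
         "c": "sports", "d": "politics_left"}

for _ in range(3):                  # each round, the user clicks the top item
    shown = recommend(prefs, items)
    clicked = shown[0]
    prefs[items[clicked]] += 1      # the click reinforces the existing bias

print(prefs)  # the dominant topic pulls further ahead each round
```

Nothing ever surfaces "politics_right" content, so the gap only widens, which is the essence of the filter-bubble dynamic the paragraph describes.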

    Untruths come in many different shapes and sizes

    Often, terms like “misinformation” (false or misleading but without the intent to harm) and “disinformation” (deliberately fabricated untrue content designed to deceive) are used loosely in policy circles and the mainstream media. This creates confusion as to what untruths really are and what needs to be done to stop them. It all depends on the context, source, intent and purpose.

    Establishing clear and shared definitions for different types of untruths can help policy makers design well-targeted policies and facilitate efforts to measure untruths and improve the evidence base. Our new paper provides definitions for the range of untruths that circulate online:
    1. Disinformation,
    2. Misinformation,
    3. Contextual deception,
    4. Propaganda,
    5. Satire.

    These definitions support a typology of false and misleading content that can be differentiated along two axes:
    1. the information spreader’s intent to cause harm
    2. the degree of fabrication by the information creator – altering photos, writing untrue texts and articles, creating synthetic videos, etc. (Figure 1).
    These distinctions are important because some types of false and misleading content are not deliberately created to deceive (e.g. satire) or they are not spread to intentionally inflict harm (e.g. misinformation).
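The two axes above can be sketched as a small classification function. The category names come from the article, but encoding each axis as a boolean, the specific mapping chosen here, and the omission of propaganda (which does not reduce cleanly to two booleans) are all illustrative assumptions, not the OECD's formal definitions.

```python
# Illustrative mapping of the two-axis typology: intent to harm
# (spreader's axis) x degree of fabrication (creator's axis).
# The boolean encoding is an assumption made for this sketch.

def classify_untruth(intent_to_harm: bool, fabricated: bool) -> str:
    """Map the two axes to one of the article's content categories."""
    if fabricated and intent_to_harm:
        return "disinformation"        # fabricated, spread to deceive
    if fabricated and not intent_to_harm:
        return "satire"                # fabricated, but as commentary
    if intent_to_harm:
        return "contextual deception"  # genuine content, misleading framing
    return "misinformation"            # false belief, no intent to harm

print(classify_untruth(intent_to_harm=True, fabricated=True))   # disinformation
print(classify_untruth(intent_to_harm=False, fabricated=True))  # satire
```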

    [Figure 1: typology of false and misleading content]

    From a policy perspective, it is important to differentiate the creators from the spreaders. There may be policies better suited to addressing false content creators, particularly considering that some spreaders disseminate falsehoods unknowingly (misinformation) or as part of societal or political commentary (satire). Before devising policy options, however, policy makers need to consider where and how false information spreads online.

    AI, big data analytics and people are part of the problem and the solution

    A particular characteristic of the digital age is that false information spreads through digital technologies that were created for entirely different purposes. For example, AI-based tools built to increase user engagement, monitor user interaction and deliver curated content now help false and inaccurate information circulate widely. The use of algorithms and AI-based approaches to curate content makes it difficult to track the sources of untruths online, and even more complicated to monitor their flow or stop their spread. For example, sophisticated disinformation attacks use bots, trolls and cyborgs specifically aimed at the rapid dissemination of untruths. Efforts to stop these harmful practices require transparency about how these technologies work.

    Existing approaches to reducing untruths online often depend on manual fact-checking, content moderation and takedown, and quick responses to attacks. All of this involves human intervention and allows for a finer-grained assessment of the accuracy of content. Collaborations between independent, domestic fact-checking entities and platforms can also help identify untruths (e.g. DIGI in Australia and FactCheck Initiative Japan) and can be useful to ensure that cultural and linguistic considerations are taken into account.

    While human understanding is essential to interpreting specific content in the context of cultural sensitivities and belief or value systems, monitoring online content in real-time is a mammoth task that may not be entirely feasible without AI and big data analytics. The automation of certain content moderation functions and the development of technologies that embed them “by design” could considerably enhance the efficacy of techniques used to prevent the spread of untruths online. However, such approaches often provide less nuance on the degree of accuracy of the content (i.e. content is usually identified as either “true” or “false”).

    Such approaches would also benefit from partnerships between local fact-checking entities and online platforms to ensure cultural and linguistic biases are addressed. Advanced technologies such as automated fact-checking or natural language processing and data mining can be leveraged to detect creators of inaccurate information and prevent sophisticated disinformation attacks. However, the spreaders of untruths have found ways to circumvent such approaches (e.g. by using images rather than words).

    Innovative tools that use AI and big data analytics

    In this regard, the transparent use of digital technologies by online platforms to identify and remove untrue content can improve the dissemination of accurate information. For example, Chequeabot is a bot that uses natural language processing and machine learning to identify claims made in the media and match them with existing fact checks. Chequeabot is notable in that it works in Spanish and was developed by professional fact-checkers.
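The core technique behind such tools is claim matching: comparing a newly detected claim against a database of already fact-checked claims. Production systems use trained NLP models; the sketch below substitutes a simple bag-of-words cosine similarity, and the claims, threshold value and function names are invented for illustration.

```python
# Minimal claim-matching sketch: find the fact-checked claim most
# similar to a new claim, or report no match below a threshold.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def match_claim(new_claim: str, fact_checked: list[str], threshold: float = 0.5):
    """Return the best-matching fact-checked claim, or None if too dissimilar."""
    query = Counter(new_claim.lower().split())
    scored = [(cosine(query, Counter(c.lower().split())), c) for c in fact_checked]
    best_score, best = max(scored)
    return best if best_score >= threshold else None

checked = [
    "vaccines cause autism in children",
    "the earth is flat",
]
print(match_claim("do vaccines cause autism", checked))
# → "vaccines cause autism in children"
```

A real deployment would replace the word-count vectors with sentence embeddings, which is what lets such systems match paraphrases rather than only shared words.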

    Another example is Google’s collaboration with Full Fact to build AI-based tools to help fact-checkers verify claims made by key politicians. Full Fact groups the content by topic and matches it with similar claims from across the press, social networks and even radio with the help of speech-to-text technology. These tools help Full Fact process 1,000 times more content, detecting and clustering over 100,000 claims per day. Importantly, these tools give Full Fact’s fact-checkers more time to verify facts rather than identify which facts to check. Using a machine learning BERT-based model, the technology now works in four languages (English, French, Portuguese and Spanish).

    With better understanding, AI and big data analytics can more effectively combat untruths

    AI solutions can accurately detect many types of false information and recognise disinformation tactics deployed through bots and deepfakes. They are also more cost-effective because they reduce the time and human resources required for detecting and removing false content. However, the effective use of AI for countering untruths online depends on large volumes of data as well as supervised learning. Without these, AI tools run the risk of flagging false positives and enforcing, or even creating, biases.
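The false-positive risk mentioned above is exactly what a labelled evaluation set measures: comparing a detector's output against human-verified labels yields the count of true content wrongly flagged. The labels and predictions below are invented for illustration.

```python
# Sketch of evaluating an "is this an untruth?" detector against
# human labels; false positives are true content flagged as false.

def confusion(labels, predictions):
    """Count TP, FP, FN, TN for a binary detector."""
    tp = sum(1 for y, p in zip(labels, predictions) if y and p)
    fp = sum(1 for y, p in zip(labels, predictions) if not y and p)
    fn = sum(1 for y, p in zip(labels, predictions) if y and not p)
    tn = sum(1 for y, p in zip(labels, predictions) if not y and not p)
    return tp, fp, fn, tn

labels      = [1, 1, 0, 0, 1, 0, 0, 1]  # 1 = human-verified untruth
predictions = [1, 0, 1, 0, 1, 0, 1, 1]  # hypothetical detector output

tp, fp, fn, tn = confusion(labels, predictions)
precision = tp / (tp + fp)   # how many flags were correct
recall    = tp / (tp + fn)   # how many untruths were caught
print(f"precision={precision:.2f} recall={recall:.2f} false_positives={fp}")
# → precision=0.60 recall=0.75 false_positives=2
```

With little labelled data, both numbers are unreliable, which is why the article stresses large volumes of data and supervised learning before automated takedown.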

    At the same time, the technical limitations of AI and other technologies point toward the need to adopt a hybrid approach that integrates both human intervention and technological tools in fighting untruths online. Approaches that marry technology-oriented solutions and human judgement may be best suited to ensure both efficient identification of problematic content and potential takedown after careful human deliberation, taking into account all relevant principles, rights and laws such as those regarding fundamental human rights.

    A better understanding of a range of complex and intertwined issues about untruths online is urgent to develop best practice policies for addressing this important problem. In the meantime, concrete steps can be taken to begin the fight.

    Tackling untrue content online requires a multistakeholder approach where people, firms, and governments all play active roles to identify and remove inaccurate content online. Everyone should be encouraged to exercise good judgment before sharing information online. While technology can take us a long way, people are still an important part of the solution.

  • Policies

    Government of Canada launches second phase of the Pan-Canadian Artificial Intelligence Strategy.

    The Minister of Innovation, Science and Industry announced the launch of the second phase of the Pan-Canadian Artificial Intelligence Strategy. Backed by an investment of more than $443 million committed in Budget 2021, the second phase of the strategy will seek to bridge world‑class talent and cutting-edge research capacity with commercialization and adoption to ensure that Canadian ideas and knowledge are mobilized here at home.

    The second phase of the Pan-Canadian Artificial Intelligence Strategy includes:
    1. $60 million for Canada's national AI institutes (Amii in Edmonton, Mila in Montréal and the Vector Institute in Toronto) to help translate AI research into commercial applications and grow the capacity of businesses to adopt these new technologies.
    2. $125 million for Canada's Global Innovation Clusters to accelerate AI commercialization by supporting Canadian small and medium-sized enterprises, attracting investment from other public and private sources, and developing made-in-Canada AI solutions.
    3. $8.6 million for the Standards Council of Canada to advance the development and adoption of standards and a conformity assessment program related to AI.
    4. $160 million for CIFAR to continue programs to attract, retain and develop academic research talent and maintain centres of research, innovation and training at Canada's national AI institutes.
    5. $48 million for CIFAR to renew and enhance its advanced research, training and knowledge mobilization programs.
    6. $40 million to provide dedicated computing capacity for AI researchers across Canada.
    This investment reflects the government's commitment to accelerating the responsible adoption and commercialization of AI across the Canadian economy by advancing programs to enhance Canada's research base and talent pool. This will enable established industries to be more productive and competitive, while helping all Canadians benefit from growth in the digital economy.
