As part of the fight against deepfakes – video and audio fabricated with artificial intelligence – researchers at the University of Oregon are testing one of the most unusual ideas yet. A group of scientists is trying to teach mice to recognize differences in speech that are inaudible to the human ear, and then to train machines on the same recognition mechanism. The researchers have trained mice to distinguish a small set of phonemes – the sounds that differentiate one word from another. The mice received a reward every time they correctly identified a sound, which they did in about 80% of cases.

"We taught the mice to distinguish between sounds surrounded by different vowels in different contexts. We believe that it is possible to train mice to recognize false and real speech," Jonathan Saunders, one of the project's researchers, told the BBC.

The idea of the project is to understand how mice identify sounds and then teach machines to do the same. The auditory system of mice is similar to the human one, except that mice do not understand the meaning of the words they hear. That lack of understanding becomes an advantage in detecting artificially created speech. A deepfake audio file may contain a small error – for example, the sound "b" instead of "g". People may not notice the inaccuracy, because we extract meaning from whole words and sentences. A mouse, which does not understand the meaning of the word, will not miss the error.

"We believe that mice are a promising model for studying sound processing," the researchers said in a white paper presented at the Black Hat conference in Las Vegas. "Studying the mechanisms by which the mammalian auditory system detects fake audio signals can provide the basis for fake-detection algorithms."

The U.S. Defense Advanced Research Projects Agency (DARPA) previously announced that it will hold a special event on August 28 to present its Semantic Forensics (SemaFor) program. The program will develop ways to exploit some of the weaknesses of modern deepfake tools.
"However, existing algorithms for the automatic generation and manipulation of multimedia are largely susceptible to semantic errors," DARPA noted. By "semantic errors" the agency means cases where artificially generated images contain telltale mistakes – for example, when a person with an artificially generated face wears "inappropriate earrings" or other incorrect details.
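The detection idea the article describes – catching phoneme-level substitutions that a meaning-driven human listener glosses over – can be sketched as a toy comparison. This is a minimal illustration, not the researchers' actual pipeline: it assumes we already have aligned phoneme sequences for the expected and observed speech, and the function names and threshold are purely illustrative.

```python
# Hypothetical sketch: if a phoneme classifier transcribes a suspect audio
# clip, a generator slip like "b" in place of "g" shows up as a mismatch
# against the expected phoneme sequence.

def phoneme_mismatches(expected, observed):
    """Return (position, expected, observed) tuples where phonemes disagree.

    Assumes both sequences are already aligned; real audio would need
    forced alignment before this comparison makes sense.
    """
    return [
        (i, e, o)
        for i, (e, o) in enumerate(zip(expected, observed))
        if e != o
    ]

def looks_synthetic(expected, observed, threshold=0.1):
    """Flag a clip whose phoneme error rate exceeds a tolerance threshold."""
    if not expected:
        return False
    errors = phoneme_mismatches(expected, observed)
    return len(errors) / len(expected) > threshold

# Example: the deepfake substitutes a "b" where the real word has "g".
real = ["g", "ow", "t"]   # phonemes of "goat"
fake = ["b", "ow", "t"]   # generator slipped in a "b"
print(phoneme_mismatches(real, fake))  # [(0, 'g', 'b')]
print(looks_synthetic(real, fake))     # True
```

The threshold exists because even genuine speech will produce occasional classifier errors; the interesting signal is a substitution rate above that noise floor, which is what the mice's context-independent hearing is hoped to supply.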