AI face recognition used to count chimpanzees

Researchers from Oxford have published a study in the journal Science Advances showing how artificial intelligence can help scientists better understand animal behavior, in this case that of chimpanzees. Using approximately 50 hours of video footage collected over 14 years, the researchers extracted 10 million face images of twenty-three chimpanzees. These images were then processed by a neural network.

“The algorithm was able to identify animals with an accuracy of 93% and correctly determine their sex in 96% of cases,” the authors of the study write.

The researchers used the algorithm to analyze the social interactions of a chimpanzee population. The analysis showed that mothers and their offspring spend most of their time together, meaning that the algorithm's conclusions are consistent with known patterns of behavior in the group.

“Why is this important?” writes MIT Technology Review. “Animal researchers often rely on video to track the behavior of populations they study over long periods of time. But sorting through huge amounts of data is tedious and time-consuming, and manual analysis can be inaccurate. Artificial intelligence offers a promising new way to accelerate animal behavior research.”

According to MIT Technology Review, scientists could also use artificial intelligence to improve efforts to track endangered species and animals that fall victim to poachers and trafficking.

The Oxford experiment is not the first time face recognition has been used to track animals. A similar tool called ChimpFace is already being used to combat the illegal trade in chimpanzees. According to MIT Technology Review, other projects aim to track lemurs, baboons, and other endangered primate species. The Oxford researchers argue that their algorithm improves on its predecessors.
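The article does not describe the algorithm's internals, but face identification systems of this kind typically map each face crop to an embedding vector and assign it to the nearest known individual. The sketch below illustrates that idea and how an identification accuracy figure like the reported 93% can be computed; the embeddings, individuals, and noise level are all invented for illustration (a real system would obtain embeddings from a trained CNN, not random vectors).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each of 3 chimpanzees has a characteristic
# 8-dimensional face embedding. In a real pipeline these would come
# from a neural network applied to face crops extracted from video.
centroids = rng.normal(size=(3, 8))

def identify(embedding, gallery):
    """Return the index of the gallery individual whose centroid
    embedding is closest (Euclidean distance) to the query."""
    dists = np.linalg.norm(gallery - embedding, axis=1)
    return int(np.argmin(dists))

# Simulate noisy face crops: true identity plus a small perturbation.
true_ids = rng.integers(0, 3, size=200)
queries = centroids[true_ids] + 0.3 * rng.normal(size=(200, 8))

pred_ids = np.array([identify(q, centroids) for q in queries])
accuracy = float(np.mean(pred_ids == true_ids))
print(f"identification accuracy: {accuracy:.2%}")
```

With well-separated embeddings the nearest-centroid rule identifies almost every crop correctly; real footage is much harder, which is why the study's 93% over 10 million images is notable.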
According to them, it minimizes the amount of pre-processing the footage requires, whereas previous algorithms struggled to process images under changing lighting conditions or of poor quality.

“Our algorithm works better in such conditions, since it was developed using a more diverse data set,” the researchers said.
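One common way to make a model robust to the changing lighting the researchers mention is to diversify the training data with random brightness and contrast perturbations. The snippet below is a minimal sketch of such an augmentation; the parameter ranges and the 64x64 face crop are assumptions for illustration, not details from the study.

```python
import numpy as np

rng = np.random.default_rng(42)

def augment_lighting(image, rng):
    """Randomly vary brightness and contrast of a float image in [0, 1],
    simulating variable forest-light conditions in field footage."""
    brightness = rng.uniform(-0.2, 0.2)   # additive shift
    contrast = rng.uniform(0.7, 1.3)      # multiplicative scale about 0.5
    out = (image - 0.5) * contrast + 0.5 + brightness
    return np.clip(out, 0.0, 1.0)

face = rng.uniform(size=(64, 64))         # stand-in for one face crop
augmented = [augment_lighting(face, rng) for _ in range(5)]
```

Training on many such perturbed copies exposes the network to a wider range of lighting than the raw footage alone, which is one plausible reading of the researchers' "more diverse data set".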