From 4,675 fully labeled bear faces in DSLR photographs, taken at research and bear-viewing sites at Brooks River, Alaska, and Knight Inlet, British Columbia, they randomly split the images into training and testing data sets. Once trained on 3,740 bear faces, the deep learning system went to work “unsupervised,” Dr. Clapham said, to see how well it could spot differences between known bears in the remaining 935 photographs.
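The split described above can be sketched in a few lines. This is an illustrative reconstruction, not the researchers’ actual code; the file names and the 80/20 fraction are assumptions chosen so the counts land on the figures in the article:

```python
import random

def split_dataset(image_paths, train_fraction=0.8, seed=42):
    """Randomly split labeled images into training and testing sets.
    Illustrative only; the study's exact procedure may differ."""
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)  # fixed seed for reproducibility
    cut = int(len(paths) * train_fraction)
    return paths[:cut], paths[cut:]

# 4,675 images at an 80/20 split give exactly 3,740 training
# and 935 testing images, matching the numbers in the study.
train, test = split_dataset([f"bear_{i:04d}.jpg" for i in range(4675)])
```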
First, the deep learning algorithm finds the bear face using distinctive landmarks such as the eyes, the tip of the nose, the ears and the top of the forehead. Then the app rotates the face to extract, encode and classify facial features.
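The detect-align-classify steps above can be sketched without any deep learning machinery. In this simplified version (the landmark names, two-dimensional embeddings and nearest-neighbor matching are illustrative assumptions, not the study’s actual method), alignment means rotating the face so the eyes sit on a horizontal line, and classification means matching an encoded face to its nearest known bear:

```python
import math

def align_face(landmarks):
    """Step 2: compute the rotation that levels the eyes.
    landmarks: dict with 'left_eye' and 'right_eye' as (x, y) points.
    Returns the angle in degrees; the image would be rotated by
    -angle before features are extracted."""
    lx, ly = landmarks["left_eye"]
    rx, ry = landmarks["right_eye"]
    return math.degrees(math.atan2(ry - ly, rx - lx))

def classify(embedding, known_bears):
    """Step 3: nearest-neighbor match of an encoded face against
    known bears (embeddings here are toy 2-D vectors)."""
    return min(known_bears, key=lambda name: math.dist(embedding, known_bears[name]))

# Toy embeddings for two bears named in the article.
known = {"Lucky": [0.1, 0.9], "Toffee": [0.8, 0.2]}
print(classify([0.15, 0.85], known))  # → Lucky
```

In the real system a neural network produces the face embeddings; the nearest-neighbor matching shown here stands in for that final classification step.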
The system identified bears at an accuracy rate of 84 percent, correctly distinguishing between known bears such as Lucky, Toffee, Flora and Steve.
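For context, 84 percent accuracy corresponds to roughly 785 of the 935 test photographs being matched to the right bear. A minimal sketch of the computation (the prediction labels below are invented for illustration):

```python
def accuracy(predictions, truths):
    """Fraction of test photographs matched to the correct bear."""
    correct = sum(p == t for p, t in zip(predictions, truths))
    return correct / len(truths)

# Toy example using bears named in the study; labels are made up.
preds = ["Lucky", "Toffee", "Flora", "Steve"]
truth = ["Lucky", "Toffee", "Flora", "Flora"]
print(accuracy(preds, truth))  # → 0.75
```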
But how does it actually tell those bears apart? Before the era of deep learning, “we tried to imagine how humans perceive faces and how we distinguish individuals,” said Alexander Loos, a research engineer at the Fraunhofer Institute for Digital Media Technology, in Germany, who was not involved in the study but has collaborated with Dr. Clapham in the past. Programmers would manually input face descriptors into a computer.
But with deep learning, programmers input the images into a neural network that figures out how best to identify individuals. “The network itself extracts the features,” Dr. Loos said, which is a huge advantage.
He also cautioned that the network is “basically a black box. You don’t know what it’s doing,” and that if the data set being examined is unintentionally biased, certain errors can emerge.