Facebook has hired machine learning expert Yann LeCun, a renowned artificial intelligence (AI) researcher best known for his work on “deep learning,” a technique that roughly simulates the hierarchical learning process of the brain. In addition to continuing part time at New York University (NYU), LeCun will serve as director of Facebook’s new AI lab, whose main focus will be analyzing the millions of photos added to the social network each day.
“Machine learning is already used in hundreds of places throughout Facebook, ranging from photo tagging to ranking articles to your newsfeed. Better machine learning will be able to help improve all of these features, as well as help Facebook create new applications that none of us have dreamed of yet,” said Andrew Ng, who directs Stanford’s AI Lab and previously applied similar techniques at Google to analyze YouTube videos. For Facebook, the technique promises striking results because of the unique data the site holds about what is happening around the world.
As of now, Facebook does not have full knowledge of its users; the purpose of applying machine learning is largely to discover which features extend use of the website. As Aaron Hertzmann, a research scientist at Adobe, put it: “If you post a picture of yourself skiing, Facebook doesn’t know what’s going on unless you tag it.” Cutting-edge deep learning algorithms could therefore help extract information from Facebook’s massive store of photos.
The deep learning technique will be applied to the problem of identifying items in a photo; the approach mirrors the visual cortex of the brain, which receives data from the retina of the eye.
The process works step by step in layers, each applying different filters to the output of the layer before. Early layers operate over small pixel ranges and detect simple elements; later layers combine those responses, assembling the simple elements into progressively more complete objects.
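The layered filtering described above can be sketched in miniature. The toy example below is illustrative, not Facebook’s actual pipeline: it applies one hand-coded vertical-edge filter and a pooling step to a tiny synthetic image, whereas real deep learning systems learn many such filters from data across many layers.

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a small filter over the image, producing a feature map (valid mode)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def pool2x2(fmap):
    """Downsample by keeping the max of each 2x2 block: a coarser summary layer."""
    h, w = fmap.shape
    trimmed = fmap[:h - h % 2, :w - w % 2]
    return trimmed.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

# Layer 1 filter: responds strongly where a bright column meets a dark one.
edge_kernel = np.array([[1., -1.],
                        [1., -1.]])

# Toy 6x6 "image": bright left half, dark right half (one vertical edge).
image = np.zeros((6, 6))
image[:, :3] = 1.0

features = convolve2d(image, edge_kernel)  # layer 1: simple-element detection
summary = pool2x2(np.abs(features))        # layer 2: pooled, coarser feature map
```

Stacking such filter-and-pool stages is what lets a deep network build from pixel-level edges up toward whole objects, as the article describes.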
With 25 years of hands-on experience, LeCun is already accustomed to massive computational demands. For example, he is involved in “Endor.tech,” an ongoing $7.5 million project funded by the Office of Naval Research to craft a small self-flying drone capable of traveling through an unfamiliar forest at 35 mph. The drone can analyze video images at 30 frames per second; a similar algorithm will analyze the videos and pictures uploaded to Facebook.