Once it’s flawless, MIT’s Pic2Recipe could change the way we eat
Apparently a picture of food is all you need to decipher what your meal is made of. MIT researchers have created an algorithm called “Pic2Recipe” that generates a list of ingredients based on a picture of food, and even suggests recipes. The program uses machine learning to evaluate food photos and retrieves matching recipes from a huge database. In testing, it produced the correct recipe more than half the time.
Nicholas Hynes, a graduate student at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), compiled data from AllRecipes, Food.com, and other food sites to create Recipe1M, a database that stores more than a million recipes. “That the program recognizes food is really just a side effect of how we used the available data to learn deep representations of recipes and images,” he told Gizmodo. “What we’re really exploring are the latent concepts captured by the model. For instance, has the model discovered the meaning of ‘fried’ and how it relates to ‘steamed’? We believe that it has and we’re now trying to extract the knowledge from the model to enable downstream applications that include improving people’s health.”
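The core idea Hynes describes, learning a shared representation for photos and recipes, amounts to cross-modal retrieval: encode both into the same vector space, then look up the recipe whose embedding sits closest to the photo's. Here is a minimal sketch of that lookup step, assuming the encoders have already produced vectors; the recipe names, vectors, and the `pic2recipe` function are hypothetical illustrations, not MIT's actual code.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy stand-in for a Recipe1M-style index: recipe name -> embedding
# (these vectors are made up for illustration).
recipe_embeddings = {
    "chocolate chip cookies": [0.9, 0.1, 0.2],
    "vegetable stir fry":     [0.1, 0.8, 0.3],
    "berry smoothie":         [0.2, 0.3, 0.9],
}

def pic2recipe(image_embedding, db):
    """Return the recipe whose embedding lies closest to the photo's."""
    return max(db, key=lambda name: cosine(image_embedding, db[name]))

# A photo embedded near the cookie vector retrieves the cookie recipe.
photo = [0.85, 0.15, 0.25]
print(pic2recipe(photo, recipe_embeddings))  # -> chocolate chip cookies
```

In the real system the encoders are deep networks trained so that a photo and its paired recipe land near each other, which is why the model can pick up latent concepts like “fried” versus “steamed” without being told about them explicitly.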
The algorithm has the most success with baked goods like pastries, but struggles with more complex dishes like smoothies or sushi. Still, it suggests that one day there could be an app that helps you better understand your meal at a restaurant so you can recreate it at home.
The researchers are working to improve the program to better recognize different food preparation methods such as slicing, boiling, and frying. They hope it can serve as a “dinner aide,” so that people can customize it based on what ingredients they have at home as well as their dietary restrictions.