
MIT's newest computer vision algorithm identifies images down to the pixel

The STEGO system's higher fidelity could give AI a more accurate view of the world.


For humans, identifying items in a scene — whether that’s an avocado or an Aventador, a pile of mashed potatoes or an alien mothership — is as simple as looking at them. But for artificial intelligence and computer vision systems, developing a high-fidelity understanding of their surroundings takes a bit more effort. Well, a lot more effort. Around 800 hours’ worth of hand-labeling training images, if we’re being specific. To help machines see more the way people do, a team of researchers at MIT CSAIL, in collaboration with Cornell University and Microsoft, has developed STEGO, an algorithm that can identify the contents of an image down to the individual pixel.

[Image: “imagine looking around, but as a computer” (MIT CSAIL)]

Normally, creating CV training data involves a human drawing boxes around specific objects within an image — say, a box around the dog sitting in a field of grass — and labeling those boxes with what’s inside (“dog”), so that the AI trained on it will be able to tell the dog from the grass. STEGO (Self-supervised Transformer with Energy-based Graph Optimization), conversely, uses a technique known as semantic segmentation, which applies a class label to each pixel in the image to give the AI a more accurate view of the world around it.
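
To make the difference concrete, here’s a minimal sketch of the two annotation formats in Python with NumPy; the coordinates, class IDs and image size are illustrative stand-ins, not values from the paper:

```python
import numpy as np

# Bounding-box annotation: one label for a whole rectangular region.
# Everything inside the box -- the dog AND the surrounding grass --
# shares the single "dog" tag.
box_annotation = {"label": "dog", "x": 40, "y": 60, "w": 80, "h": 50}

# Semantic-segmentation annotation: one class ID per pixel.
# 0 = grass/background, 1 = dog (IDs and sizes are illustrative).
H, W = 256, 256
label_map = np.zeros((H, W), dtype=np.uint8)
label_map[70:100, 60:110] = 1  # stand-in for the dog's exact silhouette
```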

Whereas a labeled box captures the object plus whatever else falls inside its boundary, semantic segmentation labels only the pixels that make up the object itself — you get just dog pixels, not dog pixels plus some grass. It’s the machine learning equivalent of using Photoshop’s Magnetic Lasso versus its Rectangular Marquee tool.
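
In code terms, that’s the difference between indexing with a per-pixel mask and slicing a rectangle. Continuing the toy dog-in-grass setup (all values made up):

```python
import numpy as np

H, W = 256, 256
image = np.random.rand(H, W, 3)            # stand-in RGB image
label_map = np.zeros((H, W), dtype=np.uint8)
label_map[70:100, 60:110] = 1              # 1 = dog (toy silhouette)

# Per-pixel labels select exactly the object's pixels, nothing else.
dog_pixels = image[label_map == 1]         # shape: (num_dog_pixels, 3)

# A bounding box, by contrast, drags the background along with it.
x, y, w, h = 40, 60, 80, 50
box_pixels = image[y:y + h, x:x + w]       # dog plus some grass
```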

The problem with this technique is one of scope. Conventional supervised systems often demand thousands, if not hundreds of thousands, of labeled images to train on. Multiply that by the 65,536 individual pixels in even a single 256x256 image, each of which now needs its own label, and the required workload quickly spirals into impossibility.
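
A quick back-of-the-envelope calculation makes the point; the dataset size here is an assumed figure, chosen only to scale the arithmetic:

```python
images = 100_000              # assumed dataset size for illustration
pixels_per_image = 256 * 256  # 65,536 pixels per 256x256 image

total_labels = images * pixels_per_image
print(f"{total_labels:,}")    # 6,553,600,000 -- about 6.5 billion labels
```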

Instead, “STEGO looks for similar objects that appear throughout a dataset,” the CSAIL team wrote in a press release Thursday. “It then associates these similar objects together to construct a consistent view of the world across all of the images it learns from.”
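
The release doesn’t spell out the mechanics, but the general idea (pool per-pixel features from every image in a dataset, then group the ones that look alike) can be sketched in a few lines. The toy version below uses random features and off-the-shelf k-means purely as stand-ins; STEGO itself refines features from a self-supervised backbone rather than clustering raw features directly:

```python
import numpy as np
from sklearn.cluster import KMeans

# Stand-in per-pixel features; a real pipeline would pull these from a
# self-supervised backbone rather than a random-number generator.
num_images, H, W, dim = 8, 32, 32, 64
features = np.random.rand(num_images, H, W, dim)

# Pool every pixel from every image into one big set...
all_pixels = features.reshape(-1, dim)     # (8 * 32 * 32, 64)

# ...and cluster: pixels that look alike across the whole dataset share
# a cluster ID, which acts as an unsupervised "class" label.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(all_pixels)
label_maps = kmeans.labels_.reshape(num_images, H, W)
```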

“If you're looking at oncological scans, the surface of planets, or high-resolution biological images, it’s hard to know what objects to look for without expert knowledge. In emerging domains, sometimes even human experts don't know what the right objects should be,” said Mark Hamilton, an MIT CSAIL PhD student, Microsoft software engineer and the paper’s lead author. “In these types of situations where you want to design a method to operate at the boundaries of science, you can't rely on humans to figure it out before machines do.”

Trained on a wide variety of image domains — from home interiors to high-altitude aerial shots — STEGO doubled the performance of previous semantic segmentation schemes, closely aligning with the assessments of the study’s human annotators. What’s more, “when applied to driverless car datasets, STEGO successfully segmented out roads, people, and street signs with much higher resolution and granularity than previous systems. On images from space, the system broke down every single square foot of the surface of the Earth into roads, vegetation, and buildings,” the MIT CSAIL team wrote.
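
Claims like “doubled the performance” are typically scored by comparing predicted label maps against human annotations pixel by pixel, with mean intersection-over-union (mIoU) as the standard metric. Here’s a minimal sketch of that measurement, using toy 4x4 maps and made-up classes:

```python
import numpy as np

def mean_iou(pred, truth, num_classes):
    """Mean intersection-over-union between two per-pixel label maps."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, truth == c).sum()
        union = np.logical_or(pred == c, truth == c).sum()
        if union:                      # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 4x4 maps; classes 0 = road, 1 = person, 2 = sign (illustrative).
pred  = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [2, 2, 0, 0],
                  [2, 2, 0, 0]])
truth = np.array([[0, 0, 1, 1],
                  [0, 1, 1, 1],
                  [2, 2, 0, 0],
                  [2, 0, 0, 0]])
print(round(mean_iou(pred, truth, 3), 3))  # ~0.776 for these toy maps
```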


“In making a general tool for understanding potentially complicated data sets, we hope that this type of an algorithm can automate the scientific process of object discovery from images,” Hamilton said. “There's a lot of different domains where human labeling would be prohibitively expensive, or humans simply don’t even know the specific structure, like in certain biological and astrophysical domains. We hope that future work enables application to a very broad scope of data sets. Since you don't need any human labels, we can now start to apply ML tools more broadly.”

Despite outperforming the systems that came before it, STEGO has its limitations. For example, it can identify both pasta and grits as “food-stuffs” but doesn’t differentiate between them very well. It also gets confused by nonsensical images, such as a banana sitting on a phone receiver. Is this a food-stuff? Is this a pigeon? STEGO can’t tell. The team hopes to build a bit more flexibility into future iterations, allowing the system to assign objects to multiple classes at once.
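
One plausible route (an illustration of the idea, not anything the team has announced) is to score each class independently with a sigmoid rather than forcing a single winner with an argmax, letting an ambiguous pixel hold more than one label:

```python
import numpy as np

classes = ["food-stuff", "pigeon", "phone"]   # illustrative class names
logits = np.array([1.2, 0.8, -2.0])           # made-up scores for one pixel

# Single-label reading: argmax forces exactly one class per pixel.
single = classes[int(np.argmax(logits))]      # "food-stuff"

# Multi-label reading: each class is an independent yes/no decision,
# so a banana on a phone receiver can be banana-ish AND bird-ish at once.
probs = 1 / (1 + np.exp(-logits))             # per-class sigmoid
multi = [c for c, p in zip(classes, probs) if p > 0.5]
print(single, multi)                          # food-stuff ['food-stuff', 'pigeon']
```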