Fashion++ turns your fashion Don’t into a Do with minimal tweaks

Given an outfit pieced together from a limitless wardrobe, what nips and tucks might improve its overall stylishness? That’s the question researchers at Cornell, Georgia Tech, and Facebook AI Research recently investigated in a paper published on the preprint server arXiv.org. In it, they describe an approach that aims to identify small adjustments to outfits that might have an outsized impact on fashionability.

It brings to mind Amazon’s Echo Look, a connected camera that combines human and machine intelligence to recommend styles, filter clothes by color, compare two outfits, and keep track of what’s in a personal wardrobe. But the researchers assert their techniques are more sophisticated than most.

“The elegant Coco Chanel’s famous words advocate for making small changes with large impact on fashionability. Whether removing an accessory, selecting a blouse with a higher neckline, tucking in a shirt, or swapping to pants a shade darker, often small adjustments can make an existing outfit noticeably more stylish,” wrote the coauthors. “Motivated by these observations, we introduce a new computer vision challenge: minimal edits for outfit improvement.”

The researchers point out that the goal presents several technical challenges, the first of which concerns AI model training. Pairs of images showing superior and inferior versions of each outfit could teach a system the difference, but this sort of data isn’t readily available and is likely to become outdated as trends evolve. And even if those image pairs could be procured, the model would have to distinguish subtle differences between positives and negatives and reason about the garments within the original outfit and how their synergy changes with each tweak.

Fashion++

The researchers’ solution is an approach they call Fashion++, which operates on encodings synthesized by an image generation system trained on more than 15,000 fashion images. Given an original outfit, it maps the constituent pieces (e.g., bag, blouse, boots) to their respective latent codes, then uses a discriminative fashionability model — trained on more than 12,000 photos of outfits and negative alterations of them — as an editing module that updates the encodings to maximize the outfit’s fashionability score, thereby improving its style.
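
To make the edit-in-latent-space idea concrete, here is a minimal sketch in PyTorch of gradient-based editing against a learned scorer. Everything here is illustrative: the 64-dimensional codes, the tiny randomly initialized scorer network, and all names are stand-ins, not the paper’s actual architecture or training setup.

```python
import torch
import torch.nn as nn

CODE_DIM = 64    # assumed size of one garment's latent code
N_GARMENTS = 3   # e.g., top, bottom, shoes

# Stand-in for the discriminative fashionability model: concatenated garment
# codes in, scalar score out. In Fashion++ this would be trained on photos of
# outfits and their negative alterations; here it is randomly initialized.
scorer = nn.Sequential(
    nn.Linear(CODE_DIM * N_GARMENTS, 128),
    nn.ReLU(),
    nn.Linear(128, 1),
)

def minimal_edit(codes: torch.Tensor, steps: int = 10, lr: float = 0.1) -> torch.Tensor:
    """Nudge latent garment codes toward a higher fashionability score.

    A small step count keeps the edit "minimal": the updated codes stay
    close to the original outfit rather than jumping to an arbitrary
    stylish one.
    """
    codes = codes.clone().requires_grad_(True)
    optimizer = torch.optim.SGD([codes], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = -scorer(codes.flatten()).squeeze()  # ascend the score
        loss.backward()
        optimizer.step()
    return codes.detach()

original = torch.randn(N_GARMENTS, CODE_DIM)  # codes from the image encoder
edited = minimal_edit(original)
```

In the real system, the edited codes would then be decoded by the image generator (or matched against an inventory) to realize the suggested changes; this sketch only covers the update step.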

After optimizing an edit, Fashion++ provides its output in two formats: apparel retrieved from an inventory that would best achieve its recommendations, and a rendering of the same person in the newly adjusted look, generated from the edited outfit’s encodings. To account for patterns and colors as well as shapes and fits, the researchers factorized each garment’s encoding into separate texture and shape components, allowing the editing module to control where and what to change (e.g., tweaking a shirt’s color while keeping its cut, versus changing the neckline or tucking it in; rolling up sleeves as opposed to making pants baggier). Additionally, they ensured the update trajectory offered a spectrum of edits, starting from the least changed outfit and moving toward the most fashionable one.
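
Continuing the sketch above, factorized codes make it possible to edit one factor while freezing the other, and recording every optimization step yields exactly such a spectrum of progressively larger edits. The even texture/shape split below, and the reuse of the hypothetical `scorer` and `CODE_DIM` from the previous snippet, are assumptions for illustration only.

```python
def edit_texture_trajectory(codes: torch.Tensor, steps: int = 10, lr: float = 0.1):
    """Edit only the texture factor of each code, keeping shape fixed.

    Returns one intermediate outfit per step, from least changed to
    most fashionable — the spectrum of edits described above.
    """
    half = CODE_DIM // 2
    texture_code = codes[..., :half].clone().requires_grad_(True)
    shape_code = codes[..., half:]  # frozen: cut, neckline, fit stay as-is
    optimizer = torch.optim.SGD([texture_code], lr=lr)
    trajectory = []
    for _ in range(steps):
        optimizer.zero_grad()
        full = torch.cat([texture_code, shape_code], dim=-1)
        loss = -scorer(full.flatten()).squeeze()  # gradient reaches texture only
        loss.backward()
        optimizer.step()
        trajectory.append(torch.cat([texture_code.detach().clone(), shape_code], dim=-1))
    return trajectory

spectrum = edit_texture_trajectory(original)  # e.g., a shirt's color shifting per step
```

Swapping which half is optimized would instead change shape while preserving texture, which is the "where and what to change" control the factorization buys.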

The team reports that in a human perceptual study involving over 100 test outfits and nearly 300 people recruited through Amazon’s Mechanical Turk, 92% of respondents judged outfits as more fashionable after Fashion++ made changes to them. Furthermore, 84% said they thought already-fashionable outfits modified by Fashion++ were similarly or more fashionable.

“Our results are quite promising,” wrote the paper’s coauthors. “In future work, we plan to broaden the composition of the training source — [for instance] using wider social media platforms like Instagram, bias an edit towards an available inventory, or generate improvements conditioned on an individual’s preferred style or occasion.”
