A recent digital project mapping the perceived attractiveness of restaurant patrons in major U.S. cities has sparked controversy, raising questions about societal values and the ethical implications of AI-driven analysis.
Created by 22-year-old San Francisco-based software developer Riley Walz, the interactive platform, titled LooksMapping, uses artificial intelligence to analyze public data from Google Maps reviews and assign attractiveness scores to diners frequenting thousands of restaurants across New York City, Los Angeles, and San Francisco. The resulting heat map ranks these venues on a scale from 1 to 10 based on how “hot” their customers are perceived to be.
At the top of the list for New York City is Urbani Midtown, a Georgian restaurant located in Midtown East, which received a perfect score of 10. Other top-rated spots include Shinn WEST in Hell’s Kitchen, KYU NYC in NoHo, Aroy Dee Thai Kitchen in the Financial District, and Thai 55 Carmine in the West Village. These establishments are marked in fiery red on the map, symbolizing high levels of perceived attractiveness among their clientele.
On the opposite end of the spectrum, several restaurants were labeled as attracting the “least hot” diners. Among them is Jimbo’s Hamburger Palace in Harlem, which scored a lowly 1 out of 10. Others include Hop Won Express, Cocotazo, Malone’s Irish Bar & Restaurant, and Michael’s New York, all located in various parts of Midtown East. These spots appear in cool blue hues, signifying lower attractiveness scores according to the model.
How Does the Model Work?
Walz’s methodology involved scraping 2.8 million Google Maps reviews from approximately 1.5 million unique user accounts, then narrowing the dataset to the 587,000 profile images that showed a visible face. Using AI tools trained on phrases like “attractive and beautiful,” “unattractive and ugly,” and age descriptors such as “young person” or “old person,” the system assigned relative attractiveness ratings to each reviewer.
However, Walz admits that the algorithm isn’t without flaws. In an interview with The New York Times, he noted that the AI often latched onto superficial visual cues — such as whether a photo was blurry, taken at an angle, or featured someone wearing formal attire like a wedding dress. This led to potentially skewed judgments where style or context influenced the scoring more than actual facial features.
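The description above suggests a prompt-based, contrastive image-text approach. As a rough illustration only, here is a minimal sketch of how such scoring is often implemented with OpenAI’s CLIP model via the Hugging Face Transformers library; the model choice, checkpoint, file name, prompt wording, and 1-to-10 mapping are assumptions for the sake of example, not details confirmed by the project.

```python
# A minimal, hypothetical sketch of prompt-based attractiveness scoring with CLIP.
# The article does not name the model Walz used; the checkpoint, file name,
# prompts, and 1-10 mapping below are illustrative assumptions only.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Text prompts echoing the phrases described in the article.
attractiveness_prompts = [
    "a photo of an attractive and beautiful person",
    "a photo of an unattractive and ugly person",
]
age_prompts = [
    "a photo of a young person",
    "a photo of an old person",
]

def score_image(path: str) -> dict:
    """Score one reviewer profile photo against the two prompt pairs."""
    image = Image.open(path).convert("RGB")
    prompts = attractiveness_prompts + age_prompts
    inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        # Similarity of the image to each text prompt.
        logits = model(**inputs).logits_per_image[0]

    # Softmax within each prompt pair, then map the "positive" probability to 1-10.
    hot_prob = torch.softmax(logits[:2], dim=0)[0].item()
    young_prob = torch.softmax(logits[2:], dim=0)[0].item()
    return {
        "attractiveness_1_to_10": round(1 + 9 * hot_prob, 1),
        "looks_young": young_prob > 0.5,
    }

# Example: a restaurant's rating could then aggregate scores over its reviewers' photos.
print(score_image("reviewer_profile.jpg"))
```

In a setup like this, a venue’s score would simply average such per-reviewer estimates, which is also where photo quality, camera angle, and attire can skew results in exactly the way Walz describes.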
Ethical Concerns and Criticism
Critics have raised concerns about potential racial and cultural biases embedded in the model. Berkeley-based food critic Soleil Ho pointed out disparities in how San Francisco restaurants were ranked, noting a disproportionate favoring of Asian-owned venues and a tendency to undervalue Black-owned businesses or those located in predominantly Black neighborhoods.
Social media reactions also highlighted geographical patterns: areas with higher concentrations of red pins (indicating “hot” diners) tended to coincide with wealthier, majority-white neighborhoods, while blue pins clustered in areas like the Bronx.
A Mirror to Society?
Despite its controversial nature, Walz insists that the project isn’t just a superficial ranking tool but rather a commentary on how humans instinctively judge places based on the people who frequent them.
“The model is certainly biased. It’s certainly flawed,” Walz wrote on the website. “But we judge places by the people who go there. We always have. And are we not flawed?”
He describes the project as “a mirror held up to our collective vanity,” emphasizing that it merely quantifies the subjective, often shallow assumptions people make daily. By assigning numerical values to these perceptions, Walz aims to provoke discussion about how society values appearances and how easily algorithms can reinforce existing prejudices.
Each pin on the map offers additional insights beyond attractiveness scores, including a demographic breakdown of patrons by gender and age range. Users can click on any location to view detailed metrics alongside the restaurant’s overall rating.
From Satire to Social Commentary
This isn’t Walz’s first foray into tech-fueled social experiments. He previously gained attention for co-creating Mehran’s Steak House, a fictional restaurant that briefly opened in 2023 after accumulating glowing fake reviews online. That stunt served as a critique of how easily digital platforms can be manipulated.
With LooksMapping, Walz continues to explore the intersection of technology, perception, and bias — this time using a seemingly trivial lens to expose deeper truths about human behavior and the unintended consequences of AI.
While some may dismiss the project as frivolous, others see it as a cautionary tale about the dangers of reducing complex human traits to simplistic metrics, especially when those metrics influence how we perceive spaces, communities, and even ourselves.