Concerns Over Privacy Arise from ChatGPT’s Reverse Location Search Trend

ChatGPT as a Location Guessing Tool
Many users have recently discovered that ChatGPT, the well-known AI chatbot, can act as a reverse-location search tool. Essentially, if you present it with a picture, ChatGPT can often accurately indicate where the image was taken. The trend appears to have been inspired by the online game GeoGuessr, in which players try to identify a location from a Google Street View image.
Testing ChatGPT’s Location Guessing Abilities
To explore this feature, Mashable tech reporters ran a series of tests, uploading various photos to ChatGPT to see how accurately it could guess their locations. Even when the AI couldn't pinpoint the exact place, it often came reasonably close. For instance, it suggested a rooftop hotel in Buffalo when the image was actually taken in Rochester, a near miss that still demonstrated the capability.
Newer Reasoning Models
Recently, OpenAI has rolled out new reasoning models for ChatGPT known as o3 and o4-mini, which feature improved visual reasoning abilities. Additionally, OpenAI has made its image generation tool accessible to free users. This has sparked various viral trends on social media, including using the AI to transform pets into humanoid characters or portray individuals as action figures. However, the reverse location search trend is more intricate and raises important privacy concerns.
This trend gained traction when users realized that ChatGPT could analyze photos and estimate their locations with surprising accuracy. Ethan Mollick, an AI researcher, illustrated this on social media by sharing an example in which ChatGPT correctly guessed his driving route from a stripped-down photo containing no location data. Photos typically carry EXIF metadata that can embed precise GPS coordinates, but Mollick's image had none, so the model was working from visual cues alone.
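For ordinary snapshots, that embedded GPS data is the more direct giveaway. As a rough illustration, unrelated to OpenAI's own tooling, the short Python sketch below uses the Pillow library to check whether a photo still carries GPS metadata; the filename is a placeholder.

```python
# Minimal sketch: checking a photo for embedded GPS metadata with Pillow.
# "photo.jpg" is a placeholder filename; any phone-camera JPEG works.
from PIL import Image
from PIL.ExifTags import GPSTAGS

GPS_IFD_TAG = 0x8825  # standard EXIF tag that points at the GPS sub-directory

img = Image.open("photo.jpg")
gps_ifd = img.getexif().get_ifd(GPS_IFD_TAG)

if gps_ifd:
    # Map numeric GPS tag IDs (e.g. 2 = GPSLatitude) to readable names
    gps_data = {GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()}
    print("Embedded GPS metadata found:", gps_data)
else:
    print("No GPS metadata in this file.")
```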
Improved Visual Reasoning in Action
When Mashable reporters tested ChatGPT’s new visual reasoning capabilities, the results varied. For example, they uploaded a photo of a flower shop taken in Greenpoint, Brooklyn. ChatGPT managed to identify the city but mistakenly pinpointed a specific flower shop about seven miles away.
In another exercise, they used a photo taken during a trip to Japan. When asked to identify the location, ChatGPT’s o3 model accurately determined, "Final answer: 📍 Arashiyama, Kyoto, Japan, near the Togetsukyo Bridge, looking across the Katsura River."
In contrast, an older reasoning model gave a more general response, speculating that the scenery could be from Japan and mentioning locations around Kyoto or Nara.
Privacy Concerns with Reverse Location Identifications
The testing continued with screenshots from an Instagram model's profile. Given how much privacy matters to people with a significant online presence, the results were unsettling: ChatGPT correctly suggested the general locale and went on to name specific high-rise apartment buildings, along with one particular home address.
Although that address is already well known for its popularity among influencers and television productions, the precision of the identification was alarming. It underscores how carefully individuals need to manage their digital footprints: sharing images online can inadvertently reveal where they live or spend time, leaving them vulnerable.
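For readers who want to reduce that exposure, one common precaution is to strip metadata before posting. The sketch below, again using Pillow with placeholder filenames, simply re-saves the pixel data into a fresh file so no EXIF fields are copied over.

```python
# Minimal sketch: stripping metadata by re-saving only the pixel data.
# Filenames are placeholders; adjust format and quality as needed.
from PIL import Image

original = Image.open("photo.jpg")

# A brand-new image populated with the same pixels carries no EXIF (including GPS).
clean = Image.new(original.mode, original.size)
clean.putdata(list(original.getdata()))
clean.save("photo_no_metadata.jpg", quality=95)
```

Stripping metadata does not, of course, remove the visual clues such as storefronts, skylines, and street signs that the new models reason over, which is exactly what makes this trend notable.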
OpenAI has acknowledged that while ChatGPT's reverse location capability may offer useful applications, it also raises valid privacy concerns. An OpenAI spokesperson noted, “o3 and o4-mini bring visual reasoning to ChatGPT, making it more helpful in areas like accessibility, research, or identifying locations in emergency responses.” They emphasized that the models have been trained to decline requests for private or sensitive information and include safeguards intended to prevent the identification of individuals in images. The company also says it actively monitors how the technology is used and takes action against abuses of its privacy policies.