What may appear to be an image of Tacoma, Wash., is, in fact, a simulated one, created by transferring visual patterns of Beijing onto a map of a real Tacoma neighborhood. (Image via Zhao et al., 2021, Cartography and Geographic Information Science)

“Seeing is believing.” It’s an aphorism that used to be a lot more true than it is today, now that computers can easily produce all manner of fake images and altered recordings. Many of us have seen the photos of celebrities who don’t exist and videos of lip-synching politicians. These “deepfakes” have raised real concerns about what is and isn’t true in our newsfeeds and other media.

Bo Zhao. (UW Photo)

This problem even extends to the maps and satellite images that represent our world. Techniques such as “location spoofing” and deepfake geography present significant risks for our increasingly connected society.

Because of this, a team of researchers at the University of Washington is working to identify ways to detect these fakes and is proposing the creation of a geographic fact-checking system.

Led by Bo Zhao, an assistant professor of geography at UW, the study focused on how deepfake satellite images and maps might be detected. Though such deepfakes may sound futuristic, in fact they already exist and are a growing concern for national security officials.

“The techniques are already there,” Zhao said. “We’re just trying to expose the possibility of using the same techniques, and of the need to develop a coping strategy for it.”

However, due to the often-sensitive nature of deepfake satellite imagery, the researchers couldn’t get access to suitable existing images for their study. So, it was necessary to start off by creating their own.

To do this, the researchers used a generative adversarial network, or GAN, a form of AI frequently used for creating deepfakes. A GAN pits two neural networks against each other. One, the discriminator, attempts to detect which images are fake. The other, the generator, uses feedback from those detections to produce ever more convincing fakes. The two networks improve incrementally until the results are so realistic that they’re often undetectable to the untrained eye.
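
The study doesn’t publish its training code, but the back-and-forth Zhao describes can be sketched in a few lines of PyTorch. The toy generator, discriminator, and random “tiles” below are placeholders for illustration, not the networks or data used in the research.

```python
# A minimal sketch of an adversarial (GAN) training loop, using PyTorch.
# Architectures and data are toy stand-ins, not the models from the study.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 3 * 32 * 32  # toy sizes for illustration

# Generator: maps random noise to a flattened "image".
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)

# Discriminator: scores how likely an image is to be real.
discriminator = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_images = torch.rand(16, img_dim) * 2 - 1  # stand-in for real satellite tiles

for step in range(100):
    # 1) Train the discriminator to separate real tiles from generated ones.
    fake_images = generator(torch.randn(16, latent_dim)).detach()
    d_loss = bce(discriminator(real_images), torch.ones(16, 1)) + \
             bce(discriminator(fake_images), torch.zeros(16, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    fake_images = generator(torch.randn(16, latent_dim))
    g_loss = bce(discriminator(fake_images), torch.ones(16, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```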

For this study, the GAN worked with basemaps and satellite images of Seattle, Tacoma, Wash., and Beijing. Ultimately, the team constructed a deepfake detection dataset containing 8,064 satellite images. Half of these were authentic images of the three cities. The remainder were deepfakes of Tacoma, with half in the visual pattern of Seattle and half in the visual pattern of Beijing. The process was similar to how certain software can be used to map features from the face of one person onto another.

(Image via Zhao et al., 2021, Cartography and Geographic Information Science)

Though the problem of fake and altered maps has existed for centuries, the challenges have risen sharply with the development of GANs. Many computer-generated satellite images can fool even expert eyes, raising concerns about their use for propaganda and disinformation. This is a considerable concern for governments and the military, where these techniques are seen as a potential threat to national security. An FBI warning in March highlighted this: “Malicious actors almost certainly will leverage synthetic content [including deepfakes] for cyber and foreign influence operations in the next 12-18 months.”

But the motivations for creating fake maps and satellite images aren’t limited to espionage or propaganda. As mobile devices have become increasingly capable of detecting and reporting where we are, “location spoofing,” or faking our whereabouts, has become increasingly common. Several mobile apps already exist for just this purpose.

“Motives can be fairly diverse when it comes to location spoofing,” Zhao said. “People change their location as a way to show off their fake vacations. Or in Pokemon Go, people will sometimes change their location to get gaming awards.”

While some progress has been made in detecting other kinds of fraudulent images and recordings, according to the study, deepfake satellite image detection hasn’t been previously explored. With their deepfake dataset established, the researchers tested different methods for automating detection, using AI tools such as convolutional neural networks.
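
As a rough illustration of what a CNN-based detector looks like, here is a minimal PyTorch sketch of a two-class (authentic vs. deepfake) image classifier. The layer sizes and toy data are assumptions made for illustration and don’t reflect the specific models or dataset from the study.

```python
# A hedged sketch of a small convolutional real-vs-fake classifier.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),  # two classes: authentic vs. deepfake
)

# One toy batch of 64x64 RGB tiles with real/fake labels.
tiles = torch.rand(8, 3, 64, 64)
labels = torch.randint(0, 2, (8,))

loss = nn.CrossEntropyLoss()(detector(tiles), labels)
loss.backward()  # gradients would drive an optimizer step during training
```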

“We used both traditional methods and some of the latest GAN-detection algorithms to try to find some clues in terms of how we can detect the deepfakes,” Zhao said.

The researchers looked at 26 features in the spatial, histogram and frequency domains to develop their detection strategies. Individually, the features yielded differing levels of accuracy, but when the detection models were combined, performance rose to its highest levels.
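
To make those three domains concrete, the NumPy sketch below computes one generic example feature from each: an edge-strength statistic (spatial), histogram entropy (histogram), and a high-frequency energy ratio from a 2-D FFT (frequency). These are illustrative stand-ins, not the 26 specific features used in the paper.

```python
# Example features from the spatial, histogram, and frequency domains.
import numpy as np

def toy_features(tile: np.ndarray) -> dict:
    """tile: H x W x 3 array with values in [0, 1]."""
    gray = tile.mean(axis=2)

    # Spatial domain: average local gradient magnitude (edge "sharpness").
    gy, gx = np.gradient(gray)
    edge_strength = float(np.hypot(gx, gy).mean())

    # Histogram domain: entropy of the grayscale intensity histogram.
    counts, _ = np.histogram(gray, bins=32, range=(0.0, 1.0))
    p = counts / counts.sum()
    entropy = float(-(p[p > 0] * np.log2(p[p > 0])).sum())

    # Frequency domain: share of energy in high spatial frequencies (2-D FFT).
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray)))
    h, w = spectrum.shape
    center = spectrum[h // 4: 3 * h // 4, w // 4: 3 * w // 4]
    high_freq_ratio = float(1 - center.sum() / spectrum.sum())

    return {"edge_strength": edge_strength,
            "histogram_entropy": entropy,
            "high_freq_ratio": high_freq_ratio}

print(toy_features(np.random.rand(64, 64, 3)))
```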

The researchers plan to release tools that the public and professionals can use to identify probable fakes. “If someone finds a suspicious image, they might upload it to a website similar to Factcheck.org to confirm if the image has inconsistencies,” Zhao said.

Zhao pointed out that they’re cautious about having detection tools report an image as being definitively fake or genuine. “From a social perspective, we found if something is described as definitely fake, people interpret this very negatively,” Zhao said. “So, we’d prefer to say to the users that we found possible inconsistencies, then let the user come to their own conclusion about what that means in context.”

However, he added that in cases where the determination is statistically conclusive and the image has significant social consequences, then the status of its authenticity needs to be clearly stated.

While deepfake-generating GANs have gotten a lot of attention in recent years, Zhao added that GAN-based algorithms aren’t necessarily a bad thing and that such methods have many beneficial uses. For instance, they can be used to fill in missing data in a record set or correct motion blur in photos.

Recognizing what is and isn’t true in our world continues to be a growing challenge. Research like this may help map our path to building a more authentic future.

_______________________________

Co-authors on the study were Yifan Sun, a graduate student in the UW Department of Geography; Shaozeng Zhang and Chunxue Xu of Oregon State University; and Chengbin Deng of Binghamton University. “Deep fake geography? When geospatial data encounter Artificial Intelligence” was published on April 21, 2021 in Cartography and Geographic Information Science.
