So this is a fun project I built, but it does a great job of illustrating a very real use case. Stop me if you’ve heard this one before: “I would like to train a model that can take in a geo-bounding box, get satellite imagery, and then scan those images for something specific.” I’ve seen this type of use case in almost every industry. Disaster response, finance, construction, etc. — it’s all commonplace.
So I built the following project to showcase what it’s like to use Microsoft Planetary Computer, and how you can pair it with other Microsoft tools like Custom Vision to build something real on top of this data. And I thought of a great use case: helping Voltron fight Godzilla!
What is Planetary Computer?
So Microsoft created the public version of the Planetary Computer a few years ago, and it’s a really cool project. Microsoft gathered petabytes of geospatial data and made it available for a wide variety of use cases, particularly education and climate change research. The scientific implications of making this data available to anyone are HUGE. Climate change is a major challenge facing our world on a global scale, and this kind of data is critical to the myriad of efforts going on around the world.
And if that wasn’t enough, the next major announcement was Planetary Computer Pro, which is even more exciting. Many organizations are sitting on massive caches of their own geospatial imagery, but organizing, cataloging, and searching those assets is a constant challenge. Planetary Computer Pro lets these organizations leverage the STAC specification to catalog and index their assets, providing the same capabilities as the public Planetary Computer, but for private data.
STAC (SpatioTemporal Asset Catalog) is an open standard for managing these assets. It describes them with a JSON-based template, making it easy to interact with from any programming language.
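To make that concrete, here’s a minimal STAC Item in Python. The field names follow the STAC spec, but the ID, bounding box, and asset URL are made-up values for illustration:

```python
import json

# A minimal STAC Item, trimmed to the fields most relevant here.
# (Illustrative values; a real item from Planetary Computer carries
# many more properties and assets.)
stac_item_json = """
{
  "type": "Feature",
  "stac_version": "1.0.0",
  "id": "example-scene-001",
  "bbox": [-122.28, 47.55, -122.25, 47.58],
  "geometry": {
    "type": "Polygon",
    "coordinates": [[[-122.28, 47.55], [-122.25, 47.55],
                     [-122.25, 47.58], [-122.28, 47.58],
                     [-122.28, 47.55]]]
  },
  "properties": {"datetime": "2023-06-01T00:00:00Z"},
  "assets": {
    "image": {
      "href": "https://example.com/scene-001.tif",
      "type": "image/tiff; application=geotiff"
    }
  }
}
"""

# Because STAC is plain JSON, any language's JSON parser can read it.
item = json.loads(stac_item_json)
print(item["id"], item["bbox"])
```

That flat, self-describing structure is what makes cataloging and searching large imagery collections tractable.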
So how does this help with Godzilla?
Like I said, I wanted to build something fun, and I’ve run this demo at many different schools. The idea is a simple one: I wanted to solve two use cases.
Use Case 1 – Train a Custom Vision model:
- Pull down imagery for a specific bounding box from Planetary Computer.
- Convert the images from GeoTIFF to PNG.
- Chip that imagery into smaller pieces.
- Run a process to inject Godzilla, Mothra, and Voltron into the images.
- Upload those images to Azure Custom Vision, and train a model.
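The first step — pulling imagery for a bounding box — can be sketched roughly like this, assuming the `pystac-client` and `planetary-computer` packages are installed. The collection name (`naip`), coordinates, and the `make_bbox`/`search_imagery` helpers are placeholders for illustration, not necessarily what the repo uses:

```python
# Sketch of the "pull down imagery" step against the public
# Planetary Computer STAC API. Collection and coordinates are
# illustrative assumptions.

def make_bbox(lon: float, lat: float, delta: float = 0.01) -> list:
    """Build a [west, south, east, north] bounding box around a point."""
    return [lon - delta, lat - delta, lon + delta, lat + delta]

def search_imagery(bbox):
    # Imported here so the bbox helper works without these packages.
    from pystac_client import Client
    import planetary_computer

    catalog = Client.open(
        "https://planetarycomputer.microsoft.com/api/stac/v1",
        modifier=planetary_computer.sign_inplace,  # signs asset URLs for download
    )
    search = catalog.search(collections=["naip"], bbox=bbox)
    return list(search.items())

if __name__ == "__main__":
    items = search_imagery(make_bbox(-122.33, 47.61))
    print(f"Found {len(items)} items")
```

Each returned item carries signed asset URLs, which is what a downstream step would fetch and convert.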
Use Case 2 – Use the model to locate Godzilla, Mothra, and Voltron:
- Pull down a specific location (bounding box) from Planetary Computer.
- Convert the images from GeoTIFF to PNG.
- Inject Godzilla, Mothra, and Voltron.
- Run the images through Custom Vision.
- Save the output as a JSON file.
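The last step of that pipeline might look something like the sketch below. The prediction dicts mirror the general shape of a Custom Vision object-detection result (tag, probability, normalized bounding box), but the values and the `save_detections` helper are hypothetical:

```python
import json
import tempfile
from pathlib import Path

# Made-up detections shaped like Custom Vision object-detection output.
predictions = [
    {"tag": "godzilla", "probability": 0.97,
     "box": {"left": 0.12, "top": 0.40, "width": 0.10, "height": 0.15}},
    {"tag": "voltron", "probability": 0.88,
     "box": {"left": 0.55, "top": 0.22, "width": 0.08, "height": 0.12}},
]

def save_detections(image_name: str, preds: list, outbox: Path) -> Path:
    """Write one JSON file per image, keeping only confident hits."""
    confident = [p for p in preds if p["probability"] >= 0.5]
    out_path = outbox / f"{image_name}.json"
    out_path.write_text(json.dumps(
        {"image": image_name, "detections": confident}, indent=2))
    return out_path

out = save_detections("tile_0_0", predictions, Path(tempfile.mkdtemp()))
```

Writing one JSON file per tile keeps the output easy to join back to the original geography later.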
So to that end I built the kaiju-detector repo, which is designed to provide everything you need to follow along with this deployment.
Specifically, it provides the following services, each of which can be run from a bash script or as a Docker container:
- service-check-image: Runs the Custom Vision check against the images.
- service-chip-images: Takes in PNGs and chips them into smaller units.
- service-convert-images: Converts images from GeoTIFFs to PNGs.
- service-get-bounding-box: Takes in an address, then finds the bounding boxes within a configured area around it.
- service-get-satellite-imagery: Uses the configured bounding box to pull data from Planetary Computer.
- service-inject-kaiju: Takes in images and injects our kaiju and robot into the imagery.
- service-resize-image: Shrinks an image to meet parameter requirements.
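The chipping step boils down to computing crop boxes over the source image. Here’s a rough sketch — the 256-pixel tile size and the `chip_boxes` helper are assumptions for illustration, not necessarily what service-chip-images does internally:

```python
# Sketch of chipping logic: split an image of a given size into
# fixed-size tiles. Each yielded box could be passed straight to a
# crop call (e.g., Pillow's Image.crop).

def chip_boxes(width: int, height: int, tile: int = 256):
    """Yield (left, upper, right, lower) crop boxes covering the image.

    Edge tiles are clamped, so partial tiles at the right/bottom
    borders are still emitted rather than dropped.
    """
    for top in range(0, height, tile):
        for left in range(0, width, tile):
            yield (left, top, min(left + tile, width), min(top + tile, height))

boxes = list(chip_boxes(600, 500, tile=256))
print(len(boxes))  # 3 columns x 2 rows = 6 tiles
```

Chipping matters because Custom Vision works best on images where the target occupies a meaningful fraction of the frame — a full satellite scene is far too large.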
Now, all of these Docker containers follow the same basic approach. This is how a container retrieves the local imagery:
```mermaid
sequenceDiagram
    participant InboxLocal
    participant Inbox
    participant Python
    Python->>Inbox: Reads in images
    Inbox->>InboxLocal: Reads from directory mounted locally.
    InboxLocal->>Python: Reads in and processes images.
```
And this is how the container saves the output:
```mermaid
sequenceDiagram
    participant OutboxLocal
    participant Outbox
    participant Python
    Python->>Outbox: Saves output messages.
    Outbox->>OutboxLocal: Saves the files to the local disk.
```
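That inbox/outbox pattern can be sketched in a few lines of Python. The `run_service` function and the uppercasing “service” below are hypothetical stand-ins for the real per-service logic:

```python
import tempfile
from pathlib import Path

# Minimal sketch of the shared inbox/outbox pattern: read every file
# from a mounted inbox directory, process it, and write the result to
# the outbox under the same name.

def run_service(inbox: Path, outbox: Path, process) -> int:
    """Process each inbox file through `process`; return a count."""
    outbox.mkdir(parents=True, exist_ok=True)
    handled = 0
    for path in sorted(inbox.iterdir()):
        if path.is_file():
            result = process(path.read_bytes())
            (outbox / path.name).write_bytes(result)
            handled += 1
    return handled

# Demonstrate with a throwaway directory pair and a trivial "service".
base = Path(tempfile.mkdtemp())
inbox, outbox = base / "inbox", base / "outbox"
inbox.mkdir()
(inbox / "a.txt").write_bytes(b"godzilla")
count = run_service(inbox, outbox, lambda data: data.upper())
```

Because every service shares this file-in/file-out contract, the containers can be chained in any order just by pointing one service’s outbox at the next one’s inbox mount.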
The README documents how to work with the solution, and I encourage you to experiment with it to see how to use Planetary Computer.