AWS SageMaker Canvas Model Usage on an Edge Device in Python

When working with AWS SageMaker Canvas, it is common to encounter scenarios where you need to run a trained model on an edge device rather than behind a hosted endpoint. In this article, we will explore different ways to achieve this in Python.

Option 1: Using AWS SDK

The first option is to utilize the AWS SDK for Python (Boto3) to interact with SageMaker and deploy the model on the edge device. Here’s how you can do it:

import boto3

# Create a SageMaker client
sagemaker_client = boto3.client('sagemaker')

# Package the model for the edge device
sagemaker_client.create_edge_packaging_job(
    EdgePackagingJobName='your-packaging-job',
    CompilationJobName='your-compilation-job',
    ModelName='your-model',
    ModelVersion='1.0',
    RoleArn='arn:aws:iam::111122223333:role/your-sagemaker-role',
    OutputConfig={
        'S3OutputLocation': 's3://your_bucket/your_output_location'
    }
)

# Check the status of the edge packaging job
response = sagemaker_client.describe_edge_packaging_job(
    EdgePackagingJobName='your-packaging-job'
)
status = response['EdgePackagingJobStatus']
print(f"Edge Packaging Job Status: {status}")

This option keeps you within the AWS toolchain and lets you package the model for the edge device with a few API calls. However, it requires setting up the necessary AWS credentials, an IAM role that SageMaker can assume, and a configured Boto3 client.
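Note that edge packaging jobs run asynchronously, so in practice you poll describe_edge_packaging_job until the job reaches a terminal state. A minimal polling sketch (the helper name and its defaults are illustrative, not part of the SageMaker API):

```python
import time

def wait_for_packaging_job(fetch_status, poll_seconds=15, max_attempts=60):
    """Poll until the edge packaging job reaches a terminal state.

    fetch_status is any zero-argument callable that returns the current
    status string, e.g.:
        lambda: sagemaker_client.describe_edge_packaging_job(
            EdgePackagingJobName='your-packaging-job'
        )['EdgePackagingJobStatus']
    """
    for _ in range(max_attempts):
        status = fetch_status()
        if status in ('COMPLETED', 'FAILED', 'STOPPED'):
            return status
        time.sleep(poll_seconds)
    raise TimeoutError('edge packaging job did not finish in time')
```

Passing the status lookup as a callable keeps the helper easy to test without real AWS credentials.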

Option 2: Using Docker

If you prefer a more containerized approach, you can use Docker to package the model and deploy it on the edge device. Here’s how:

import docker

# Create a Docker client
docker_client = docker.from_env()

# Build the Docker image from a Dockerfile in the current directory
image, build_logs = docker_client.images.build(path='.', tag='your-model-image')

# Run the Docker container on the edge device
container = docker_client.containers.run('your-model-image', detach=True)

# Check the status of the container
status = container.status
print(f"Container Status: {status}")

This option provides more flexibility and control over the deployment process. You can customize the Docker image and container settings according to your requirements. However, it requires Docker to be installed on the edge device.
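To illustrate that customization, you can collect the run settings for the container in one place. A small sketch, where the tag, port mapping, and memory limit are illustrative assumptions rather than required values:

```python
def edge_run_config(image_tag, host_port=8080, container_port=8080, memory='512m'):
    """Return keyword arguments for docker_client.containers.run.

    The defaults here (port 8080, 512m memory cap) are illustrative
    choices for a constrained edge device, not prescribed values.
    """
    return {
        'image': image_tag,
        'detach': True,                                  # run in the background
        'ports': {f'{container_port}/tcp': host_port},   # expose the inference port
        'mem_limit': memory,                             # cap memory on the device
        'restart_policy': {'Name': 'always'},            # survive device reboots
    }
```

You would then start the container with docker_client.containers.run(**edge_run_config('your-model-image')).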

Option 3: Using TensorFlow Lite

If your model is based on TensorFlow, you can convert it to TensorFlow Lite format and deploy it on the edge device. Here’s how:

import tensorflow as tf

# Load the TensorFlow model
model = tf.keras.models.load_model('path_to_your_model')

# Convert the model to TensorFlow Lite format
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Save the TensorFlow Lite model
with open('your_model.tflite', 'wb') as f:
    f.write(tflite_model)
# Deploy the TensorFlow Lite model on the edge device
# (Deployment process depends on the specific edge device)

# Check the status of the deployment
status = 'Deployed'  # placeholder: the actual status comes from your deployment tooling
print(f"Deployment Status: {status}")

This option is specifically tailored for TensorFlow models and allows you to take advantage of the lightweight and optimized TensorFlow Lite runtime on the edge device. However, it requires additional steps to convert the model to TensorFlow Lite format.
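Once the .tflite file is on the device, the tf.lite.Interpreter can run inference without a SageMaker endpoint (on the device itself you would typically install the smaller tflite-runtime package and use its Interpreter instead). A self-contained sketch, using a toy one-output model as a stand-in for your actual trained model:

```python
import numpy as np
import tensorflow as tf

# Toy model standing in for your trained model (an assumption for this sketch)
model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(1),
])

# Convert to TensorFlow Lite, as shown above
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# On the edge device: load the model with the lightweight interpreter
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Run inference on a single sample
sample = np.array([[0.5, -1.0]], dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], sample)
interpreter.invoke()
prediction = interpreter.get_tensor(output_details[0]['index'])
print(prediction.shape)  # one prediction for one sample
```

In production you would pass model_path='your_model.tflite' instead of model_content, so the interpreter reads the converted file saved earlier.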

After exploring these three options, it is evident that the best choice depends on your specific requirements and constraints. If you are already working with AWS SageMaker and have the necessary credentials and infrastructure in place, Option 1 using the AWS SDK may be the most convenient. On the other hand, if you prefer a more containerized approach and have Docker available on the edge device, Option 2 using Docker provides more flexibility. Lastly, if you are specifically working with TensorFlow models and want to leverage the optimized TensorFlow Lite runtime, Option 3 is the way to go.
