Quick Start¶
Get up and running with easy_sm in 5 minutes. This guide walks through initializing a project, adding code, testing locally, and deploying to SageMaker.
Prerequisites¶
Before starting, ensure you have:
- easy_sm installed (pip install easy-sm)
- Docker running locally
- AWS CLI configured with credentials
- SageMaker IAM role ARN
First Time Setup
If you haven't configured AWS or created a SageMaker role, see the AWS Setup guide first.
Step 1: Initialize Project¶
Create a new easy_sm project:
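For example, assuming the CLI exposes an init subcommand (the name is an assumption; run easy_sm --help to confirm the exact command for your version):
easy_sm init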
Follow the interactive prompts:
App name: my-ml-app
AWS profile: dev
AWS region: eu-west-1
Python version: 3.13
Requirements file: requirements.txt
This creates:
my-ml-app/
├── my-ml-app.json # Configuration
├── requirements.txt # Dependencies
└── my-ml-app/
└── easy_sm_base/
├── Dockerfile
├── training/
│ ├── train # Entry point
│ └── training.py # Your training code goes here
├── prediction/
│ └── serve # Your serving code goes here
├── processing/ # Processing scripts
└── local_test/
└── test_dir/ # Test data
Step 2: Add Your Code¶
Training Code¶
Edit my-ml-app/easy_sm_base/training/training.py:
import pandas as pd
import joblib
import os
def train(input_data_path, model_save_path, hyperparams_path=None):
"""Train your model."""
# Load training data
data = pd.read_csv(os.path.join(input_data_path, 'data.csv'))
# Train model (replace with your logic)
from sklearn.linear_model import LinearRegression
X = data[['feature1', 'feature2']]
y = data['target']
model = LinearRegression()
model.fit(X, y)
# Save model
joblib.dump(model, os.path.join(model_save_path, 'model.mdl'))
print("Model training complete!")
Serving Code¶
Edit my-ml-app/easy_sm_base/prediction/serve:
import joblib
import os
import numpy as np
def model_fn(model_dir):
"""Load the model."""
return joblib.load(os.path.join(model_dir, 'model.mdl'))
def predict_fn(input_data, model):
"""Make predictions."""
return model.predict(input_data)
Test Data¶
Add sample training data to: my-ml-app/easy_sm_base/local_test/test_dir/input/data/training/data.csv
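For example, a minimal CSV matching the feature1, feature2, and target columns used in training.py above (values are illustrative only):
feature1,feature2,target
1.0,2.0,5.0
2.0,1.0,4.0
3.0,3.0,9.0
4.0,2.0,8.0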
Step 3: Build and Test Locally¶
Build Docker Image¶
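For example, assuming a build subcommand (the name is an assumption; run easy_sm --help to confirm):
easy_sm build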
This builds a Docker image with your code and dependencies.
Train Locally¶
Test training in a Docker container:
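For example, assuming a local-mode training subcommand (the name is an assumption; check easy_sm --help):
easy_sm local train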
This runs your training code using the test data. Check output for any errors.
Deploy Locally¶
Start a local inference endpoint:
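For example, assuming a local-mode deploy subcommand (the name is an assumption; check easy_sm --help):
easy_sm local deploy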
This starts a Flask server on http://localhost:8080.
Test Local Endpoint¶
In another terminal:
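For example, assuming the container exposes the standard SageMaker /invocations route:
curl -X POST http://localhost:8080/invocations \
  -H "Content-Type: text/csv" \
  -d "1.0,2.0"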
Stop Local Endpoint¶
Step 4: Deploy to SageMaker¶
Once local testing works, deploy to AWS SageMaker.
Set IAM Role¶
Set your SageMaker execution role:
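For example (the variable name and ARN below are placeholders; check the easy_sm documentation for the exact variable it reads):
export SAGEMAKER_ROLE=arn:aws:iam::123456789012:role/MySageMakerRole  # placeholder name and ARN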
Persist Environment Variable
Add this to ~/.bashrc or ~/.zshrc to persist across sessions:
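For example, using the same placeholder variable name as above:
echo 'export SAGEMAKER_ROLE=arn:aws:iam::123456789012:role/MySageMakerRole' >> ~/.zshrc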
Push Docker Image to ECR¶
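For example, assuming a push subcommand (the name is an assumption; check easy_sm --help):
easy_sm push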
This pushes your Docker image to Amazon Elastic Container Registry (ECR).
Train on SageMaker¶
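For example, assuming a cloud training subcommand (the name, and any instance-type or data flags it takes, are assumptions; check easy_sm --help):
easy_sm train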
This starts a SageMaker training job. Monitor progress in the AWS Console.
Deploy to Endpoint¶
Once training completes, deploy the model:
# Get model artifact from training job
MODEL=$(easy_sm get-model-artifacts -j my-training-job)
# Deploy to endpoint
easy_sm deploy -n my-endpoint -e ml.m5.large -m $MODEL
Or use a one-liner:
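For example, combining the two commands above with command substitution:
easy_sm deploy -n my-endpoint -e ml.m5.large -m $(easy_sm get-model-artifacts -j my-training-job)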
Test Cloud Endpoint¶
Test your deployed endpoint using the AWS SDK:
import boto3
import json
runtime = boto3.client('sagemaker-runtime')
response = runtime.invoke_endpoint(
EndpointName='my-endpoint',
ContentType='text/csv',
Body='1.0,2.0'
)
print(json.loads(response['Body'].read()))
Next Steps¶
Now that you have a working project:
- Learn about local development workflows
- Explore cloud deployment options
- Master piped workflows for efficiency
- Read the command reference for all options
Common Issues¶
Docker Build Fails¶
Error: Cannot connect to Docker daemon
Solution: Ensure Docker is running:
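For example:
docker info                      # fails if the daemon is not running
sudo systemctl start docker      # Linux; on macOS/Windows, start Docker Desktop instead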
Training Fails Locally¶
Error: ModuleNotFoundError
Solution: Add missing dependencies to requirements.txt, then rebuild:
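For example, if scikit-learn is missing, append it to requirements.txt and re-run the build command from Step 3:
echo "scikit-learn" >> requirements.txt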
SageMaker Training Fails¶
Error: Could not assume role
Solution: Verify your IAM role exists and has correct permissions:
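For example, using the AWS CLI with the dev profile from Step 1:
aws iam get-role --role-name <your-sagemaker-role-name> --profile dev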
See AWS Setup for IAM configuration.
Push to ECR Fails¶
Error: no basic auth credentials
Solution: Ensure AWS credentials are configured:
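For example:
aws sts get-caller-identity --profile dev   # verifies that credentials resolve
aws configure --profile dev                 # (re)enter access keys if needed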
For more troubleshooting, see individual command documentation.