# Pattern: S3 Upload Trigger

React to S3 object creation events in a typed Lambda handler — fan out notifications, update records, or kick off downstream processing when a file lands in a bucket.

## The Pattern
```ts
// src/lambdas/s3/S3ObjectCreated/index.ts
import { defineS3Handler } from '@mantleframework/core'
import { sendMessage } from '@mantleframework/aws'
import { getRequiredEnv } from '@mantleframework/env'
// Also in scope (imports omitted in this excerpt): the project-local helpers
// getFileByFilename, getUsersOfFile, createDownloadReadyNotification, the File
// type, and the logInfo/logError/metrics/MetricUnit observability utilities.

const s3 = defineS3Handler({
  operationName: 'S3ObjectCreated',
  trigger: 'direct',
  bucket: 'files',
})

export const handler = s3(async (record) => {
  const fileName = record.key // e.g. 'dQw4w9WgXcQ.mp4'
  const file = await getFileByFilename(fileName)
  const userIds = await getUsersOfFile(file)

  if (userIds.length === 0) {
    logInfo('No users to notify', { fileId: file.fileId, fileName })
    return
  }

  // Fan-out: notify all waiting users in parallel, tolerating partial failures
  const results = await Promise.allSettled(
    userIds.map((userId) => dispatchFileNotificationToUser(file, userId))
  )

  const succeeded = results.filter((r) => r.status === 'fulfilled')
  const failed = results.filter((r): r is PromiseRejectedResult => r.status === 'rejected')

  metrics.addMetric('NotificationsSent', MetricUnit.Count, succeeded.length)
  if (failed.length > 0) {
    metrics.addMetric('NotificationsFailed', MetricUnit.Count, failed.length)
    failed.forEach((failure) => {
      // results is index-aligned with userIds, so the failed result's position
      // identifies which user's notification was lost
      const userId = userIds[results.indexOf(failure)]
      logError('Failed to dispatch notification', { fileId: file.fileId, userId })
    })
  }
})

// Sends a notification message to the push queue for a single user
function dispatchFileNotificationToUser(file: File, userId: string) {
  const { messageBody, messageAttributes } = createDownloadReadyNotification(file, userId)
  return sendMessage({
    MessageBody: messageBody,
    MessageAttributes: messageAttributes,
    QueueUrl: getRequiredEnv('SNS_QUEUE_URL'),
  })
}
```

## How It Works
defineS3Handler registers a Lambda that fires on s3:ObjectCreated:* events from the named bucket. Each S3 notification record is passed individually to the handler with a normalized record.key field (URL-decoded object key) — no raw S3 event envelope to unwrap.
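S3 delivers object keys URL-encoded, with spaces arriving as `+`, which is why a pre-normalized `record.key` is convenient. Below is a minimal sketch of the decoding a wrapper like `defineS3Handler` would need to perform; `normalizeRecord` and the record shapes here are illustrative, not Mantle's actual internals.

```typescript
// Standard S3 event record shape (trimmed to the fields used here)
interface S3EventRecord {
  s3: { bucket: { name: string }; object: { key: string } }
}

interface NormalizedRecord {
  bucket: string
  key: string
}

function normalizeRecord(record: S3EventRecord): NormalizedRecord {
  return {
    bucket: record.s3.bucket.name,
    // In S3 event payloads, '+' encodes a space; decode the rest as a URI component
    key: decodeURIComponent(record.s3.object.key.replace(/\+/g, ' ')),
  }
}

const normalized = normalizeRecord({
  s3: { bucket: { name: 'files' }, object: { key: 'My+Video+%281080p%29.mp4' } },
})
// normalized.key === 'My Video (1080p).mp4'
```

Skipping this decode is a classic bug: a key like `My Video.mp4` would fail lookups because the handler received `My+Video.mp4`.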
trigger: 'direct' means S3 invokes the Lambda directly (as opposed to routing through EventBridge). Mantle wires up the S3 bucket notification and the Lambda resource policy automatically from the bucket name, which must match a storage entry in mantle.config.ts.
The handler uses Promise.allSettled rather than Promise.all so that one failed SQS send does not abort notifications for other users. Failures are logged individually and tracked in CloudWatch metrics — the S3 event is still acknowledged as processed.
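A minimal, framework-free sketch of that trade-off: with `Promise.all`, the single rejection below would reject the whole batch and discard the two sends that succeeded, while `Promise.allSettled` reports every outcome. The `demo` function and its fake sends are illustrative only.

```typescript
// Simulates a fan-out where one of three queue sends fails
async function demo() {
  const sends = [
    Promise.resolve('user-1'),
    Promise.reject(new Error('SQS send failed')),
    Promise.resolve('user-3'),
  ]
  // allSettled waits for every promise; a rejection does not abort the batch
  const results = await Promise.allSettled(sends)
  const succeeded = results.filter((r) => r.status === 'fulfilled').length
  const failed = results.filter((r) => r.status === 'rejected').length
  return { succeeded, failed } // { succeeded: 2, failed: 1 }
}
```

With `Promise.all` the same input would throw on the first rejection, and the handler would retry (or dead-letter) work that had already partially completed.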
## Real-World Usage

Source: `aws-cloudformation-media-downloader/src/lambdas/s3/S3ObjectCreated/index.ts`
In the media-downloader, S3ObjectCreated fires when a video file lands in the files bucket (uploaded by StartFileUpload). It looks up which users requested that video and fans out a DownloadReady notification to the SQS push-notification queue for each of them, which triggers SendPushNotification to deliver an APNS push to their devices.
## Configuration

```ts
// mantle.config.ts
storage: [
  {
    name: 'files',
    bucketNameOverride: '${module.core.name_prefix}-lifegamesportal-videos',
    cloudfront: true,
    intelligentTiering: true,
    assets: ['videos/default-file.mp4'],
  },
],
```

The `name` field in the storage entry must match the `bucket` option in `defineS3Handler`. Mantle generates the S3 notification configuration and grants the Lambda `s3:GetObject` on that bucket automatically.
The generated Terraform (`lambda_s3object_created.tf`) wires the trigger via:

```hcl
module "lambda_s3object_created" {
  source = "../../mantle/modules/lambda"
  # ...
  s3_trigger_bucket_arn = module.storage_files.bucket_arn
}
```

## Variations
- **Multiple buckets:** Deploy separate handler Lambdas, each with its own `defineS3Handler` pointing to a different bucket name in config.
- **Object key filtering:** To react only to specific prefixes or suffixes (e.g. `uploads/*.mp4`), add an S3 filter rule in the generated Terraform after ejecting the file (remove the generated header comment).
- **EventBridge bridge:** For complex fan-out across multiple services, configure S3 to send events to EventBridge instead (`trigger: 'eventbridge'`) and route from there — trading directness for flexibility.
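For the key-filtering variation, a sketch of what the ejected Terraform might look like. This assumes the ejected file exposes the `aws_s3_bucket_notification` resource directly; the resource and module names here are illustrative, not what Mantle actually generates.

```hcl
# Sketch only: restrict the trigger to .mp4 objects under uploads/.
# Note S3 allows at most one prefix and one suffix filter per notification rule.
resource "aws_s3_bucket_notification" "files" {
  bucket = module.storage_files.bucket_id

  lambda_function {
    lambda_function_arn = module.lambda_s3object_created.function_arn
    events              = ["s3:ObjectCreated:*"]
    filter_prefix       = "uploads/"
    filter_suffix       = ".mp4"
  }
}
```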