When uploading files to AWS S3, larger files are typically sent as multipart uploads. The default chunk size used by the high-level boto3 S3 transfer manager, configured through TransferConfig, is 8 MB.
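If you want to verify this in your own environment, you can inspect a fresh TransferConfig instance directly (a quick sketch; the printed values simply reflect the library's defaults, nothing needs to be set):

from boto3.s3.transfer import TransferConfig

defaults = TransferConfig()
print(defaults.multipart_chunksize)   # 8388608 bytes == 8 MB per part
print(defaults.multipart_threshold)   # files at or above this size are uploaded in parts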
To confirm that an upload completes successfully, you can combine error handling and logging in your code. Here is a refined version of the previous Python snippet:
import boto3
import logging
from boto3.s3.transfer import TransferConfig

# Set up logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

# Initialize an Amazon S3 client
s3 = boto3.client('s3')

# Set up the transfer configuration with a custom multipart_chunksize
config = TransferConfig(multipart_chunksize=15 * 1024 * 1024)  # 15 MB

# File name (also used as the object key) and bucket name
file_name = 'path_to_your_file'
bucket_name = 'your_bucket_name'

def upload_file():
    try:
        # Upload the file; boto3 splits it into parts behind the scenes
        s3.upload_file(file_name, bucket_name, file_name, Config=config)
        logger.info(f"{file_name} uploaded successfully!")
    except Exception as e:
        logger.error(f"Error uploading {file_name}: {e}")

# Run the upload
upload_file()
This example uses Python’s built-in logging module to record information about the upload. Any errors are caught and logged, so you can review the logs to confirm that the upload completed or to see what went wrong. Note that the high-level upload_file call manages the individual parts for you, so the messages above report success or failure for the file as a whole rather than per chunk.
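If you want per-chunk visibility rather than a single success message, one option is to pass a Callback to upload_file; boto3 invokes it with the number of bytes transferred as each part completes, so you can log running progress. The sketch below reuses the s3 client, logger, and variable names from the snippet above, and the UploadProgressLogger class is just an illustrative name:

import os
import threading

class UploadProgressLogger:
    """Logs cumulative bytes transferred as boto3 uploads each part."""
    def __init__(self, filename):
        self._filename = filename
        self._size = os.path.getsize(filename)
        self._seen = 0
        self._lock = threading.Lock()

    def __call__(self, bytes_amount):
        # boto3 calls this from its transfer threads after each chunk is sent
        with self._lock:
            self._seen += bytes_amount
            percent = (self._seen / self._size) * 100 if self._size else 100.0
            logger.info(f"{self._filename}: {self._seen} of {self._size} bytes ({percent:.1f}%)")

s3.upload_file(file_name, bucket_name, file_name,
               Config=config,
               Callback=UploadProgressLogger(file_name))

Alternatively, raising the log level of the underlying transfer library (for example, logging.getLogger('s3transfer').setLevel(logging.DEBUG)) typically surfaces boto3's own per-part debug messages.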
Remember that AWS S3 assembles the parts on the server side, and if any part of the multipart upload ultimately fails, the call raises an error so you can handle it properly in your application.
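In practice, a failed high-level upload typically surfaces as boto3.exceptions.S3UploadFailedError once boto3's internal per-part retries are exhausted, so you can catch that specifically and decide whether to retry the whole upload. This is a minimal sketch reusing the names from the snippet above; the upload_with_retry helper and attempt count are illustrative:

from boto3.exceptions import S3UploadFailedError

def upload_with_retry(attempts=3):
    for attempt in range(1, attempts + 1):
        try:
            s3.upload_file(file_name, bucket_name, file_name, Config=config)
            logger.info(f"{file_name} uploaded on attempt {attempt}")
            return True
        except S3UploadFailedError as e:
            logger.warning(f"Attempt {attempt} failed for {file_name}: {e}")
    logger.error(f"Giving up on {file_name} after {attempts} attempts")
    return False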
Additionally, you can use S3 Event Notifications (for example, the s3:ObjectCreated:CompleteMultipartUpload event) or enable S3 Server Access Logging to record requests made to a specific bucket, giving you more visibility into the uploads.
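As a sketch of the second option, server access logging can be enabled with a single put_bucket_logging call. The target bucket name and prefix below are placeholders you would replace, and the target bucket must already allow the S3 log delivery service to write to it:

# Placeholder name: replace with the bucket that should receive the access logs
log_target_bucket = 'your_log_bucket_name'

s3.put_bucket_logging(
    Bucket=bucket_name,
    BucketLoggingStatus={
        'LoggingEnabled': {
            'TargetBucket': log_target_bucket,
            'TargetPrefix': 'uploads/'
        }
    }
)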