Batch upload files to S3
Simple Storage Service (S3) offers object-level storage, with objects stored in buckets.
Buckets are region-specific repositories, and data stored in buckets is redundantly stored across multiple Availability Zones (data centers) within the region. The workflow below is adapted from the AWS tutorial on accessing S3 using the AWS CLI.
- Log into AWS account
- Ensure correct region is selected (e.g., us-east-2)
- Create IAM user
- Search and select IAM in Management Console
- Add user - e.g., AWS_admin
- Choose the access type - i.e., Management Console and/or programmatic access (via CLI)
- Permissions: Attach existing policies directly -> AdministratorAccess
- After creating the user, download the credentials (.csv file containing the access key ID and secret access key); a CLI equivalent is sketched after this list
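For repeatable setups, the same user can also be created programmatically. A minimal sketch, assuming an already-configured CLI that has IAM permissions (the console steps above are the bootstrap path); the AWS_admin name matches the example above:

```bash
# Create the IAM user (AWS_admin is the example name from above)
aws iam create-user --user-name AWS_admin

# Attach the managed AdministratorAccess policy directly to the user
aws iam attach-user-policy \
  --user-name AWS_admin \
  --policy-arn arn:aws:iam::aws:policy/AdministratorAccess

# Generate an access key / secret key pair for programmatic access;
# save the output, as the secret access key is only shown once
aws iam create-access-key --user-name AWS_admin
```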
- Download and install the AWS CLI
- Open a command prompt and enter: aws configure
- Provide the access key ID, secret access key, default region (e.g., us-east-2), and output format (json); a scripted equivalent is sketched below
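The configure step can also be scripted, which is handy when provisioning machines. A minimal sketch; the key values are obvious placeholders, and aws sts get-caller-identity is a quick sanity check that the credentials work:

```bash
# Non-interactive equivalent of `aws configure` (placeholder key values)
aws configure set aws_access_key_id AKIAXXXXXXXXXXXXXXXX
aws configure set aws_secret_access_key xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
aws configure set region us-east-2
aws configure set output json

# Verify the credentials by asking AWS who we are
aws sts get-caller-identity
```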
- Create S3 bucket
- Management Console
- Search and select S3 in Management Console
- Create the bucket (choose a globally unique name)
- Upload data
- AWS CLI
- Create bucket (name must be globally unique): aws s3 mb s3://{name-of-bucket}
- Upload data to bucket: aws s3 cp {path-to-local-directory} s3://{name-of-bucket}/ --recursive
- Download data from bucket: aws s3 cp s3://{name-of-bucket}/{name-of-file} ./
- Delete file in bucket: aws s3 rm s3://{name-of-bucket}/{name-of-file}
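Putting the CLI commands together, a batch upload/download round trip might look like the sketch below; my-example-bucket-12345, ./data, and example.csv are placeholder names:

```bash
# Make a bucket (name must be globally unique across all of S3)
aws s3 mb s3://my-example-bucket-12345

# Batch-upload a local directory; --recursive copies its contents
aws s3 cp ./data s3://my-example-bucket-12345/data --recursive

# List what landed in the bucket
aws s3 ls s3://my-example-bucket-12345/data/

# Download a single file back to the current directory
aws s3 cp s3://my-example-bucket-12345/data/example.csv ./

# Delete a file from the bucket
aws s3 rm s3://my-example-bucket-12345/data/example.csv
```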
- Access S3 storage in EC2 instance via CLI
- SSH to the EC2 instance (with an attached IAM role granting read access to the S3 bucket)
- Sync a bucket prefix to a local directory: aws s3 sync s3://{name-of-bucket}/{prefix} {path-to-local-directory}
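On the instance, sync mirrors a bucket prefix into a local directory, transferring only new or changed files. A short sketch with the same placeholder names as above; --dryrun previews the transfer without copying anything:

```bash
# Confirm the instance role is picked up (no access keys stored on the instance)
aws sts get-caller-identity

# Preview the sync without transferring anything
aws s3 sync s3://my-example-bucket-12345/data ./data --dryrun

# Mirror the bucket prefix into ./data; only new/changed files are copied
aws s3 sync s3://my-example-bucket-12345/data ./data
```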