Batch upload files via S3

Simple Storage Service (S3) offers object-level storage, with objects stored in buckets.
Buckets are region-specific repositories, and data stored in buckets is redundantly stored across multiple Availability Zones (data centers) within that region. This workflow is adapted from the AWS tutorial on accessing S3 using the AWS CLI.

  1. Log into AWS account
  2. Ensure correct region is selected (e.g., us-east-2)
  3. Create IAM user
    1. Search and select IAM in Management Console
    2. Add user - e.g., AWS_admin
    3. Choose the access type(s) - i.e., Management Console and/or programmatic access (via CLI)
    4. Permissions: Attach existing policies directly -> AdministratorAccess
    5. After creating the user, download the credentials (access key ID and secret access key)
    6. Download and install the AWS CLI
    7. Open a command prompt and enter: aws configure
    8. Provide the access key, secret key, default region (e.g., us-east-2), and output format (json); see the example below
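
A minimal sketch of the configure step. The key values shown are placeholders; the real values come from the credentials file downloaded in step 3.5. Running aws sts get-caller-identity afterward is a quick way to confirm the CLI is talking to the right account:

    aws configure
    AWS Access Key ID [None]: AKIAEXAMPLEKEYID
    AWS Secret Access Key [None]: exampleSecretAccessKey
    Default region name [None]: us-east-2
    Default output format [None]: json

    # Confirm the credentials work:
    aws sts get-caller-identity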
  4. Create S3 bucket
    1. Management Console
      1. Search and select S3 in Management Console
      2. Create a bucket (choose a globally unique name)
      3. Upload data
    2. AWS CLI (see the worked example after this list)
      1. Create bucket (name must be globally unique): aws s3 mb s3://{name-of-bucket}
      2. Upload a local directory to bucket: aws s3 cp {path-to-local-directory} s3://{name-of-bucket}/ --recursive
      3. Download data from bucket: aws s3 cp s3://{name-of-bucket}/{name-of-file} ./
      4. Delete file in bucket: aws s3 rm s3://{name-of-bucket}/{name-of-file}
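
A worked end-to-end example of the CLI commands above, using a hypothetical bucket name (my-example-bucket-12345), local directory (./data), and file (report.csv); bucket names must be globally unique, so substitute your own:

    aws s3 mb s3://my-example-bucket-12345                      # create the bucket
    aws s3 cp ./data s3://my-example-bucket-12345/ --recursive  # upload a local directory
    aws s3 ls s3://my-example-bucket-12345/                     # list the bucket contents
    aws s3 cp s3://my-example-bucket-12345/report.csv ./        # download a single file
    aws s3 rm s3://my-example-bucket-12345/report.csv           # delete a file in the bucket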
  5. Access S3 storage from an EC2 instance via the CLI
    1. SSH into the EC2 instance (which must have an IAM role attached that grants read access to the S3 bucket)
    2. Sync a bucket (or prefix) with a local directory: aws s3 sync s3://{name-of-bucket}/{prefix} {path-to-local-directory} (see the sketch below)
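
A sketch of the EC2 step, reusing the hypothetical bucket from the earlier example and assuming a hypothetical key pair (my-key.pem) and instance DNS name; ec2-user is the default login for Amazon Linux AMIs. Because the attached IAM role supplies temporary credentials automatically, no aws configure is needed on the instance:

    ssh -i my-key.pem ec2-user@{ec2-public-dns}
    # On the instance: pull the bucket contents into ./data
    aws s3 sync s3://my-example-bucket-12345 ./data
    # Push local changes back to the bucket
    aws s3 sync ./data s3://my-example-bucket-12345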