Version 1.4
Quick Reference
If you believe a disaster has occurred that will affect our business, contact the Managing Director or the Principal Engineer.
Key Contacts
CloudCard Support
Phone: (434) 253-5657
Email: support@onlinephotosubmission.com
Luke Rettstatt / Managing Director
Phone: (434) 253-5657
Anthony Erskine / Principal Engineer
Phone: (434) 248-0444
Key Locations
Primary Office:
1103 Wise Street, Lynchburg, VA 24504
Online Meeting Room:
https://onlinephotosubmission.com/meeting
Purpose
This document defines how CloudCard will respond to a disaster affecting our ability to serve our customers. The goal of the Disaster Recovery Plan is to restore services to the widest extent possible in the shortest possible time, while ensuring security and compliance are maintained.
Scope
A disaster for the purposes of this plan is defined as any event that causes prolonged unavailability of one or two AWS Availability Zones in CloudCard's primary operating region.
The following events are excluded from the scope for this plan:
Loss of availability of the entire AWS region (large-scale events of this sort will be responded to on a case-by-case basis).
Loss of availability of CloudCard’s offices (see Business Continuity Plan)
Loss of availability of a production application or service necessary to CloudCard's operations that either (a) does not affect all of CloudCard's core services, or (b) is short-lived (an outage lasting less than 4 hours) (see Incident Response Plan)
Security breaches (see Incident Response Plan)
Policy
In the event of a disaster causing a major disruption to CloudCard’s production services, the person discovering the disaster must notify the Managing Director. The Managing Director will review the situation in consultation with the Principal Engineer (see Appendix: Diagnostics Steps) and determine the plan of action. If the disaster falls within the scope above, the Managing Director should activate this Disaster Recovery Plan and follow the checklist appropriate for the given scenario (see Appendix: Scenarios).
Hard copies of this plan should be kept in each CloudCard office, as well as the home office of all relevant employees.
Review
This plan must be reviewed and tested annually and updated to address any issues identified. The plan must also be reviewed and updated after any activation of the plan to determine improvements for future disaster scenarios.
Activation
This Disaster Recovery Plan is to be activated when the following criterion is met:
An Amazon data center in which CloudCard stores its data is unavailable or is in imminent danger of becoming unavailable for an extended period of time.
The person discovering the potential disaster must notify the Managing Director (contact details listed above). If the Managing Director is unavailable, the Principal Engineer must be notified instead.
Communications Processes
Once notified of a potential disaster, if the Managing Director activates this Plan, the Managing Director will direct the Principal Engineer and all relevant employees to convene in the CloudCard Meeting Room (https://onlinephotosubmission.com/meeting). This online meeting room will be used as the primary mechanism to coordinate action and internally communicate status updates.
If the CloudCard Meeting Room is unavailable, the Managing Director will arrange an alternate digital or physical meeting room and communicate the location to the Principal Engineer and all relevant employees.
Roles and Responsibilities
| Person | Roles | Responsibilities |
|---|---|---|
| Managing Director | Coordination and Communication | Determine activation of plan; coordinate employee response; communicate status internally and externally; review and test plan annually; ensure pizza is provided |
| Principal Engineer | Technical Execution; Alternate for Managing Director | Ensure all failovers complete smoothly; deploy new infrastructure to replace failed infrastructure where necessary; review and test plan annually; designate and brief an alternate person in case of unavailability |
| Customer Support Team | Communication | Communicate status to customers; handle questions from customers |
| Engineering Team | Technical Execution | Support Principal Engineer as needed to recover services |
Revision History
| Version | Date | Changes |
|---|---|---|
| 1.1 | October 2018 | Initial Plan |
| 1.1 | November 2019 | Clarity and Accuracy Updates |
| 1.2 | March 2021 | Updates to Contact Details |
| 1.2 | February 2023 | Accuracy Update |
| 1.3 | March 2023 | Updated to reflect Active-Active AWS strategy |
| 1.4 | March 2023 | Improved based on results of testing of plan |
Appendices
Appendix: Disaster Recovery Strategies
AWS Multi-site Active-Active Strategy
Application load is distributed across multiple resources located in two or more physical locations (AWS Availability Zones). If one Availability Zone becomes unavailable, resources are automatically or manually provisioned in the healthy Availability Zone to handle the load from the failed zone.
Specific resources following this strategy (a brief verification sketch follows this list):
RDS - core application database is provisioned on two servers: a primary server and a read replica, located in separate Availability Zones. In the event of a failure of the primary server, the database fails over to the read replica, which is promoted to become the primary server. The former primary server can be rebooted and recovered to become the read replica, or if it is completely out of commission, a new read replica can be instantiated.
In addition to active data, Aurora backs up the database automatically and continuously over a rolling 7-day window. Aurora also takes a daily snapshot for further redundancy beyond the continuous backups.
Elastic Beanstalk - the application servers run in an auto scaling group distributed across more than one Availability Zone. If an entire Availability Zone were to become unavailable, the auto scaling logic would provision more servers in the other Availability Zone until the user load is met.
S3 (Simple Storage Service) - redundantly stores objects on multiple devices across a minimum of three Availability Zones in an AWS Region, and is designed to sustain data in the event of the loss of an entire Amazon S3 Availability Zone (from https://docs.aws.amazon.com/AmazonS3/latest/userguide/DataDurability.html).
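The multi-AZ posture described above can be spot-checked programmatically. The following is a minimal sketch using boto3, assuming suitably configured AWS credentials; the identifiers "cloudcard-db" and "cloudcard-prod" are hypothetical placeholders for the real cluster and auto scaling group names.

```python
import boto3

rds = boto3.client("rds")
autoscaling = boto3.client("autoscaling")

# Confirm the database cluster spans more than one Availability Zone
# and that the 7-day continuous backup window described above is in place.
cluster = rds.describe_db_clusters(DBClusterIdentifier="cloudcard-db")["DBClusters"][0]
print("DB AZs:", cluster["AvailabilityZones"])
print("Backup retention (days):", cluster["BackupRetentionPeriod"])  # expect 7

# Confirm the application auto scaling group spans more than one AZ.
group = autoscaling.describe_auto_scaling_groups(
    AutoScalingGroupNames=["cloudcard-prod"]
)["AutoScalingGroups"][0]
print("App AZs:", group["AvailabilityZones"])
```

Running this as part of the annual review provides quick evidence that the Active-Active configuration has not drifted.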
Appendix: Diagnostics Steps
Check the CloudCard Internal Status Dashboard (an internal site, known to relevant employees, that shows CloudCard system health and relevant AWS status feeds)
Determine if CloudCard systems are experiencing downtime
Determine if AWS has published any notices.
Attempt to log into CloudCard
Attempt to log into the AWS console
Observe the state of the database and application environments:
Are the major components (auto scaling functionality, RDS cluster) still operational?
Are auto scaling and failover functioning normally and recovering the services?
Is service recovery trending toward normal within 15 minutes?
Based on the evidence gained from the above diagnostic steps, the Managing Director will decide, in consultation with the Principal Engineer, if a disaster has occurred. If the disaster corresponds to one of the scenarios in Appendix: Scenarios below, the Managing Director will direct the execution of the given checklist. If the disaster does not correspond to a prepared scenario, the Managing Director will consult with the Principal Engineer to determine the appropriate plan of action.
If AWS has not acknowledged the disaster on their public site, consider submitting an AWS support ticket to notify AWS of the issue.
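To speed up these checks, parts of the diagnostic sequence can be scripted. The sketch below covers the reachability check and a scan of recent RDS events; it assumes boto3 credentials and uses the public application URL as a hypothetical stand-in for the Internal Status Dashboard, so it is illustrative rather than a replacement for the dashboard.

```python
import urllib.request

import boto3


def cloudcard_reachable(url="https://onlinephotosubmission.com", timeout=10):
    """Return True if the application answers with an HTTP success code."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return 200 <= response.status < 300
    except OSError:  # covers URLError, HTTPError, and timeouts
        return False


def recent_rds_events(minutes=60):
    """Pull recent RDS events, which often surface AZ-level trouble."""
    rds = boto3.client("rds")
    return rds.describe_events(Duration=minutes)["Events"]


if __name__ == "__main__":
    print("CloudCard reachable:", cloudcard_reachable())
    for event in recent_rds_events():
        print(event["Date"], event.get("SourceIdentifier", ""), event["Message"])
```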
Appendix: Scenarios
Single AZ failure
Plan of Action
Assemble team in the appropriate meeting room (Managing Director)
Pray
Monitor service failover (Principal Engineer)
Ensure database failover occurs; add an additional read replica if needed (see the sketch after this checklist).
Ensure auto scaling replaces lost services with new nodes.
Determine if any data loss occurred, or if data needs to be corrected (e.g. to prevent stuck jobs). If so, restore or recover the data from backups.
Determine if any secondary services are down, and recover them.
Determine service and data recovery timeframes (Principal Engineer)
If service is likely to be degraded for more than 15 minutes, direct the Customer Support Team to contact Customers and Resellers to make them aware of the situation (Managing Director)
Update the service updates page and direct customers to review it for updates: https://onlinephotosubmission.com/service-updates
Improve upon the process in case of a future disaster (Managing Director and Principal Engineer)
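As a companion to the checklist above, the following sketch shows what the "add additional read replica" step might look like with boto3, assuming an Aurora cluster; the identifiers, instance class, and Availability Zone are all hypothetical placeholders.

```python
import boto3

rds = boto3.client("rds")

# In Aurora, adding a cluster member in a healthy AZ gives a new read replica.
rds.create_db_instance(
    DBInstanceIdentifier="cloudcard-db-replica-2",  # hypothetical name
    DBClusterIdentifier="cloudcard-db",
    DBInstanceClass="db.r5.large",
    Engine="aurora-mysql",
    AvailabilityZone="us-east-1b",  # a healthy AZ, chosen at runtime
)

# Block until the replica is available before declaring the step complete.
rds.get_waiter("db_instance_available").wait(
    DBInstanceIdentifier="cloudcard-db-replica-2"
)
```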
Two AZ failure
Plan of Action
Assemble team in the appropriate meeting room (Managing Director)
Pray
Deploy new infrastructure and restore data from backup (Principal Engineer)
If database failover was successful, add additional read replica if needed.
If the database completely failed, create a new cluster and restore from backup (see the sketch after this checklist).
If auto scaling infrastructure is still in place, ensure auto scaling replaces lost services with new nodes.
If the application scaling infrastructure is disabled, create a new application environment from backed up code artifacts.
Determine if any data loss occurred, or if data needs to be corrected (e.g. to prevent stuck jobs). If so, restore or recover the data from backups.
Determine if any secondary services are down, and recover them.
Determine service and data recovery timeframes (Principal Engineer)
Direct the Customer Support Team to contact Customers and Resellers to make them aware of the situation (Managing Director)
Update the service updates page and direct customers to review it for updates: https://onlinephotosubmission.com/service-updates
Improve upon the process in case of a future disaster (Managing Director and Principal Engineer)
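For the full-rebuild path, the sketch below illustrates restoring a new cluster from the most recent automated snapshot, again assuming Aurora and hypothetical identifiers; a point-in-time restore could be substituted to tighten the recovery point within the 7-day backup window.

```python
import boto3

rds = boto3.client("rds")

# Find the most recent automated snapshot of the failed cluster.
snapshots = rds.describe_db_cluster_snapshots(
    DBClusterIdentifier="cloudcard-db", SnapshotType="automated"
)["DBClusterSnapshots"]
latest = max(snapshots, key=lambda s: s["SnapshotCreateTime"])

# Restore it into a brand-new cluster.
rds.restore_db_cluster_from_snapshot(
    DBClusterIdentifier="cloudcard-db-restored",  # hypothetical name
    SnapshotIdentifier=latest["DBClusterSnapshotIdentifier"],
    Engine="aurora-mysql",
)

# A restored Aurora cluster has no instances; add one to serve traffic.
rds.create_db_instance(
    DBInstanceIdentifier="cloudcard-db-restored-1",
    DBClusterIdentifier="cloudcard-db-restored",
    DBInstanceClass="db.r5.large",
    Engine="aurora-mysql",
)
```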
Out of Scope Scenario Examples
Hurricane causes power outage in Lynchburg VA - Business Continuity
Snow makes it impossible for employees to commute - Business Continuity
Outage to our email and business application service (Google Apps) - Business Continuity
Entire Region down - Catastrophic event, handled on an as-needed basis
Employee deletes a large number of resources in AWS - Catastrophic event, handled on an as-needed basis
Employee deletes a single database server or other core resource - Incident
Appendix: Asset RTO and RPO
| Priority | Asset | Scenario | Recovery Time Objective (RTO) | Recovery Point Objective (RPO) |
|---|---|---|---|---|
| 1 | AWS data and services | Amazon data center failure or destruction | < 1 hour | < 1 hour |
Appendix: Test Plan
The Managing Director and Principal Engineer will meet with all other relevant employees for the following:
Read through the plan and address any questions.
For each of the scenarios defined in Appendix: Scenarios, craft an example of that scenario and walk through how the plan would be implemented. Document the estimated time taken for each action, including failures to follow the plan that are discovered later in the conversation. For actions that can be simulated, note those actions for later simulation and continue the walkthrough.
Simulate the actions noted in step 2, and add the actual RPO and RTO achieved during these simulations to the walk-through notes. These actions should include (but are not limited to):
Test failing over the database to another Availability Zone and adding a new read replica to the cluster (a timing sketch follows this list).
Test scaling up the application cluster to introduce new servers in a different availability zone to replace others lost in the outage. Ensure that all availability zones in the region can be used by the cluster.
Test deploying a completely new database cluster from a database backup.
Test deploying a completely new application cluster.
Perform an after-action review - collect all suggestions from all those included in the test for review.
Document the test results and after-action review notes.
Update this Plan based on the results and suggestions.
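For the failover simulation noted in step 3, a small script can record the elapsed time as a rough RTO measurement. The sketch below assumes a hypothetical staging Aurora cluster named "cloudcard-db-staging" and simply polls cluster status; in a real test, recovery should also be measured against application-level health checks.

```python
import time

import boto3

rds = boto3.client("rds")
start = time.monotonic()

# Force a failover to a replica in another Availability Zone.
rds.failover_db_cluster(DBClusterIdentifier="cloudcard-db-staging")

# Poll until the cluster reports "available" again.
while True:
    status = rds.describe_db_clusters(
        DBClusterIdentifier="cloudcard-db-staging"
    )["DBClusters"][0]["Status"]
    if status == "available":
        break
    time.sleep(10)

print(f"Failover completed in {time.monotonic() - start:.0f}s")
```

Record the measured time in the walk-through notes alongside the RTO and RPO targets from Appendix: Asset RTO and RPO.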
Appendix: Planned Improvements
...
This document has been moved; see here for the latest version: Business Continuity and Disaster Recovery Plan