Category: Cloud

[GUIDE]: Creating an Oracle RDS Read Replica

AWS recently announced support for Read Replicas on Oracle RDS, starting with version 12.1.0.2.v10 and higher 12.1 versions, for all 12.2 versions, and for all 18c versions.

AWS RDS uses Oracle Data Guard to replicate database changes from the source to the replica. Updates made to the source DB instance are asynchronously copied to the Read Replica. A Read Replica is typically used to offload heavy read traffic from an application: routing read queries from the applications to the Read Replica reduces the load on the source database.

Creating a Read Replica doesn’t require an outage on the source RDS instance. Amazon RDS sets the necessary parameters and permissions for the source DB instance and the Read Replica without any service interruption. A snapshot is taken of the source DB instance, and this snapshot becomes the Read Replica. Deleting a Read Replica likewise causes no outage.

Configuring Oracle Read Replica

Prerequisites for configuring the replica in Oracle

  • Automatic backups must be enabled on the source database – set the backup retention period to a value other than zero (a boto3 sketch follows this list)
  • Enable force logging mode on the database
    exec rdsadmin.rdsadmin_util.force_logging(p_enable => true); 
  • Any changes to the source database’s redo logs, such as resizing the log files, should be made before the Read Replica is created. Modifications after the replica is created can cause the online redo logging configuration to get out of sync with the standby logging configuration.
  • If the source DB instance and the Read Replica use different parameter groups, make sure the max_string_size parameter is set to the same value on both
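
For reference, the backup retention prerequisite can also be set through the RDS API. Below is a minimal boto3 sketch, assuming a hypothetical source instance identifier (oracle-source-db) and a 7-day retention:

import boto3

rds = boto3.client('rds')

# Enable automatic backups on the source instance by setting a
# non-zero backup retention period (hypothetical identifier and value).
rds.modify_db_instance(
    DBInstanceIdentifier='oracle-source-db',
    BackupRetentionPeriod=7,
    ApplyImmediately=True
)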
Creating Oracle Read Replica
  • Login to AWS RDS console and select the source database
  • Click Actions → Create read replica


  • Choose the instance specifications for the Read Replica. It is recommended to use the same instance class and storage type as the source
  • For Multi-AZ deployment, choose Yes to create a standby of your replica in another Availability Zone for failover support for the replica.
  • Choose the DB instance Identifier for the read replica
  • Choose Create read replica
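
If the creation needs to be automated rather than done in the console, a minimal boto3 sketch might look like the following (instance identifiers and instance class are hypothetical placeholders):

import boto3

rds = boto3.client('rds')

# Create the Read Replica from the source instance, using the same
# instance class as the source and Multi-AZ, as recommended above.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier='oracle-read-replica',
    SourceDBInstanceIdentifier='oracle-source-db',
    DBInstanceClass='db.r5.large',
    MultiAZ=True
)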

Amazon RDS for Oracle keeps a minimum of two hours of transaction logs on the source DB instance. Logs are purged from the source after two hours or after the archivelog retention hours setting has passed, whichever is longer.

By default, a Read Replica is created with the same storage type as the source DB instance. However, you can create a Read Replica with a different storage type from the source DB.

We can create up to five Read Replicas from one source DB instance. The Oracle DB engine version of the source DB instance and all of its Read Replicas must be the same.

When the source RDS instance is upgraded to a higher version, Amazon RDS upgrades the Read Replicas immediately after upgrading the source DB instance, regardless of a Read Replica’s maintenance window.

An Oracle Read Replica can neither be stopped nor used as the source for another Read Replica (cascading Read Replicas are not supported for Oracle). Also, the source RDS instance cannot be stopped while a Read Replica is running.

As with other engines, a Read Replica can be promoted to a standalone DB instance. When you promote a Read Replica, the DB instance is rebooted before it becomes available.


Monitoring Replication

Replication can be monitored from the AWS RDS console and at the database level.


The ReplicaLag metric is the sum of the Apply Lag value and the difference between the current time and the apply lag’s DATUM_TIME value. The DATUM_TIME value is the last time the Read Replica received data from its source DB instance.
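
Since ReplicaLag is published to CloudWatch, it can also be read programmatically. A small boto3 sketch, assuming a hypothetical replica identifier:

import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client('cloudwatch')

# Fetch the average ReplicaLag (in seconds) for the last 30 minutes.
response = cloudwatch.get_metric_statistics(
    Namespace='AWS/RDS',
    MetricName='ReplicaLag',
    Dimensions=[{'Name': 'DBInstanceIdentifier', 'Value': 'oracle-read-replica'}],  # hypothetical identifier
    StartTime=datetime.utcnow() - timedelta(minutes=30),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=['Average']
)

for point in sorted(response['Datapoints'], key=lambda p: p['Timestamp']):
    print('%s %s' % (point['Timestamp'], point['Average']))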

Data dictionary views used for checking the replication lag:

  • V$ARCHIVED_LOG – Shows which commits have been applied to the Read Replica.
  • V$DATAGUARD_STATS – Shows a detailed breakdown of the components that make up the ReplicaLag metric.
  • V$DATAGUARD_STATUS – Shows the log output from Oracle’s internal replication processes.
Current Limitations of Oracle Read Replica
  • Must have an Active Data Guard license
  • Oracle Read Replicas are only available on the Oracle Enterprise Edition (EE) engine
  • Oracle Read Replicas are available for Oracle version 12.1.0.2.v10 and higher 12.1 versions, for all 12.2 versions, and for all 18 versions
  • Oracle Read Replicas are only available for DB instances on the EC2-VPC platform
  • Oracle Read Replicas are only available for DB instances running on DB instance classes with two or more vCPUs
  • Amazon RDS for Oracle does not intervene to mitigate high replica lag between a source DB instance and its Read Replicas. Ensure that the source DB instance and its Read Replicas are sized properly, in terms of compute and storage, to suit their operational load
  • Amazon RDS for Oracle Read Replicas must belong to the same option group as the source database. Modifications to the source option group propagate to Read Replicas
  • Cross-region Read Replicas are currently not supported
  • The replication process in Oracle cannot be stopped; the only option is to delete the Read Replica
  • Amazon RDS doesn’t support circular replication
  • Currently, manual snapshots of Amazon RDS for Oracle Read Replicas cannot be created, nor can automatic backups be enabled for them

[GUIDE]: Setting up an AWS VPC Client VPN

AWS Client VPN is a managed, client-based VPN service that enables us to securely access our resources in AWS and in our on-premises network. With Client VPN, we can access our resources from any location using an OpenVPN-based VPN client.

Below are the steps to implement an AWS VPC Client VPN.

Server and Client Certificates and Keys:

Generate the server and client certificates and keys on any Linux system using the steps below:

 

  • git clone https://github.com/OpenVPN/easy-rsa.git
  • cd easy-rsa/easyrsa3
  • ./easyrsa init-pki
  • ./easyrsa build-ca nopass
  • ./easyrsa build-server-full server nopass (This step will generate server certificate and key)
  • ./easyrsa build-client-full client1.domain.tld nopass (This step will generate client certificate and the client private key)
  • Store/copy the server and client certificates and keys to a dedicated location, as they are needed in later steps
  • mkdir /custom_folder/
  • cp pki/ca.crt /custom_folder/
  • cp pki/issued/server.crt /custom_folder/
  • cp pki/private/server.key /custom_folder/
  • cp pki/issued/client1.domain.tld.crt /custom_folder
  • cp pki/private/client1.domain.tld.key /custom_folder/

Upload the Certificate to AWS ACM:

Once the certificate creation is complete, log in to the AWS console and import the certificates through ACM.


Note: The certificate body is the content of server.crt, and the certificate private key is the content of server.key.
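
The import can also be scripted instead of using the console. A minimal boto3 sketch, assuming the files are still in /custom_folder/ from the previous step (the client certificate could be imported the same way if a separate ARN is needed):

import boto3

acm = boto3.client('acm')

# Read the server certificate, its private key and the CA chain.
with open('/custom_folder/server.crt', 'rb') as f:
    certificate = f.read()
with open('/custom_folder/server.key', 'rb') as f:
    private_key = f.read()
with open('/custom_folder/ca.crt', 'rb') as f:
    chain = f.read()

# Import the certificate into ACM and print the resulting ARN.
response = acm.import_certificate(
    Certificate=certificate,
    PrivateKey=private_key,
    CertificateChain=chain
)
print(response['CertificateArn'])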

Create Client VPN EndPoint:

Open the Amazon VPC console. In the navigation pane, choose Client VPN Endpoints, and then choose Create Client VPN Endpoint. Use the certificates uploaded in the previous step while configuring the endpoint.


  • For Client IPv4 CIDR, specify an IP address range, in CIDR notation, from which to assign client IP addresses
  • For Server certificate ARN, specify the ARN for the TLS certificate to be used by the server. Clients use the server certificate to authenticate the Client VPN endpoint to which they are connecting
  • Specify the authentication method to be used to authenticate clients when they establish a VPN connection. To use mutual certificate authentication, select Use mutual authentication, and then for Client certificate ARN, specify the ARN of the client certificate imported earlier
  • Click Create Client VPN endpoint. Then select Associations, associate the VPC and subnet, and wait until the Client VPN endpoint becomes available
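
For automation, roughly the same endpoint configuration can be sketched with boto3; the client CIDR and certificate ARNs below are placeholders:

import boto3

ec2 = boto3.client('ec2')

# Create the Client VPN endpoint with mutual certificate authentication.
response = ec2.create_client_vpn_endpoint(
    ClientCidrBlock='10.100.0.0/16',                  # placeholder client IPv4 CIDR
    ServerCertificateArn='<server certificate ARN>',  # from ACM
    AuthenticationOptions=[{
        'Type': 'certificate-authentication',
        'MutualAuthentication': {
            'ClientRootCertificateChainArn': '<client certificate ARN>'
        }
    }],
    ConnectionLogOptions={'Enabled': False},
    Description='Client VPN endpoint'
)
print(response['ClientVpnEndpointId'])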


VPC Subnet Association:

To enable clients to establish a VPN session, you must associate a target network with the Client VPN endpoint. A target network is a subnet in a VPC.

Select the Associations tab, specify the VPC and subnet to associate, and then click Associate.
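
The same association, sketched with boto3 (placeholder IDs):

import boto3

ec2 = boto3.client('ec2')

# Associate a target subnet with the Client VPN endpoint.
ec2.associate_client_vpn_target_network(
    ClientVpnEndpointId='<client VPN endpoint ID>',
    SubnetId='<subnet ID>'
)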


Authorize Clients to Access a Network:

To authorize clients to access the VPC in which the associated subnet is located, you must create an authorization rule. The authorization rule specifies which clients have access to the VPC. In this document, we grant access to all users by clicking Authorize Ingress and specifying 0.0.0.0/0 as the destination CIDR.
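
The equivalent authorization rule, sketched with boto3 (placeholder endpoint ID; 0.0.0.0/0 and all groups, as used in this document):

import boto3

ec2 = boto3.client('ec2')

# Authorize all clients to reach any destination (0.0.0.0/0).
ec2.authorize_client_vpn_ingress(
    ClientVpnEndpointId='<client VPN endpoint ID>',
    TargetNetworkCidr='0.0.0.0/0',
    AuthorizeAllGroups=True
)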


You can enable access to additional networks connected to the VPC, such as AWS services, peered VPCs, and on-premises networks. For each additional network, you must add a route to the network and configure an authorization rule to give clients access. This step is optional and can be done by selecting the Create Route option under the Route Table tab.
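
A route to such an additional network can be added the same way, for example (placeholder values):

import boto3

ec2 = boto3.client('ec2')

# Route traffic for an additional network through an associated subnet.
ec2.create_client_vpn_route(
    ClientVpnEndpointId='<client VPN endpoint ID>',
    DestinationCidrBlock='<on-premises or peered CIDR>',
    TargetVpcSubnetId='<associated subnet ID>'
)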


Once all the steps are completed in AWS, download the client configuration file.


Once the client configuration file is downloaded, append the client certificate and key generated in step #1 (client1.domain.tld.crt and client1.domain.tld.key) to the end of the file, using the syntax below:

<cert>
Enter Certificate here
</cert>
<key>
Enter key here
</key>
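
The download and append steps can also be scripted. A sketch using boto3 and plain file handling, assuming the client certificate and key are still in /custom_folder/:

import boto3

ec2 = boto3.client('ec2')

# Export the client configuration for the endpoint.
config = ec2.export_client_vpn_client_configuration(
    ClientVpnEndpointId='<client VPN endpoint ID>'
)['ClientConfiguration']

# Append the client certificate and key generated in step #1.
with open('/custom_folder/client1.domain.tld.crt') as f:
    cert = f.read()
with open('/custom_folder/client1.domain.tld.key') as f:
    key = f.read()

config += '\n<cert>\n%s\n</cert>\n<key>\n%s\n</key>\n' % (cert, key)

with open('client-config.ovpn', 'w') as f:
    f.write(config)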

Configuring OpenVPN Client:

Download the OpenVPN client software on your local machine and import the configuration file.


  • Connect to Client VPN using the configuration file


  • Test the connection by connecting to an instance in the same VPC using its private IP

With this, we have successfully established an AWS VPC Client VPN.


[GUIDE]: Encrypting existing MySQL RDS with reduced downtime

AWS RDS instances and snapshots at rest can be encrypted by enabling the encryption option in AWS. Data that is encrypted at rest includes the underlying storage for DB instances, its automated backups, Read Replicas, and snapshots.

Amazon RDS encrypted DB instances use the industry-standard AES-256 encryption algorithm to encrypt the data on the server that hosts the Amazon RDS DB instances. After the data is encrypted, Amazon RDS handles authentication of access and decryption of the data transparently, with minimal impact on performance.

Encryption can be enabled for a new RDS instance at launch by choosing the Enable encryption option. However, an existing RDS instance cannot be encrypted on the fly. The way to migrate an existing unencrypted RDS instance to an encrypted one is to:

  • Create a snapshot of the DB instance
  • Create an encrypted copy of that snapshot
  • Restore a DB instance from the encrypted snapshot

The process described above takes considerable time and causes downtime that is not acceptable for production databases. To reduce the downtime when migrating an unencrypted MySQL RDS instance to an encrypted one, MySQL master-slave replication can be used along with the Read Replica feature of AWS RDS.

Master-Slave Replication Configuration
  • Create a read replica for the unencrypted MySQL RDS instance and ensure it is in sync with the master. This replica will be used to capture the master binlog details, which are later used in the master-slave configuration
  • Increase the binlog retention period to a higher value on the master
    mysql> call mysql.rds_set_configuration('binlog retention hours', <value>);
  • Create a replication user on the master
    mysql> CREATE USER 'repl'@'%' IDENTIFIED BY '<password>';
    mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';
  • Stop the replication in Read Replica
    mysql> CALL mysql.rds_stop_replication;
  • On the read replica, note the binlog status (Master_Log_File and Read_Master_Log_Pos)
    mysql> show slave status\G
  • Back up the read replica using the RDS snapshot method
  • Encrypt the snapshot using the AWS Copy Snapshot method – the default key or a KMS key can be chosen for encrypting the snapshot based on the requirements (see the sketch after this list)
  • Delete the read replica RDS instance
  • Restore the DB instance using the encrypted snapshot

    While restoring the DB instance, ensure you set Multi-AZ and the backup retention period as per the requirements. Also, create a new parameter group for the restored instance and set READ_ONLY to ON.
  • Using the earlier captured values of Master_Log_File and Read_Master_Log_Pos, set the encrypted RDS instance as a slave of the master RDS instance
    mysql> call mysql.rds_set_external_master('<master_rds_endpoint>', <port>, 'repl', '<repl_user_password>', '<master_log_file>', <master_log_pos>, 0);
  • Start the replication on the encrypted RDS instance and make sure it is in sync with the master, i.e., the unencrypted RDS instance
    mysql> call mysql.rds_start_replication;
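
The snapshot, encrypted copy, and restore steps from the list above can be sketched with boto3 as follows; the identifiers, KMS key, and parameter group are placeholders, and each step should complete before the next one starts:

import boto3

rds = boto3.client('rds')

# 1. Snapshot the read replica (hypothetical identifiers).
rds.create_db_snapshot(
    DBInstanceIdentifier='mysql-read-replica',
    DBSnapshotIdentifier='mysql-replica-snap'
)
rds.get_waiter('db_snapshot_available').wait(DBSnapshotIdentifier='mysql-replica-snap')

# 2. Copy the snapshot with encryption enabled.
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier='mysql-replica-snap',
    TargetDBSnapshotIdentifier='mysql-replica-snap-encrypted',
    KmsKeyId='<KMS key ARN or alias>'
)
rds.get_waiter('db_snapshot_available').wait(DBSnapshotIdentifier='mysql-replica-snap-encrypted')

# 3. Restore the encrypted instance from the encrypted snapshot,
#    attaching the read-only parameter group created for it.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier='mysql-encrypted',
    DBSnapshotIdentifier='mysql-replica-snap-encrypted',
    MultiAZ=True,
    DBParameterGroupName='<read-only parameter group>'
)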

On the encrypted RDS DB instance, run the show slave status\G  command to determine when the replica is up-to-date with the replication master. The results of the SHOW SLAVE STATUS command include the Seconds_Behind_Master field. When the Seconds_Behind_Master field returns 0, then the replica is up-to-date with the master.
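
If a scripted check is preferred, the same status can be polled from Python. A sketch using the third-party pymysql library (an assumption, not part of RDS), with placeholder connection details:

import time
import pymysql

# Hypothetical connection details for the encrypted RDS instance.
conn = pymysql.connect(
    host='<encrypted RDS endpoint>',
    user='<admin user>',
    password='<password>',
    cursorclass=pymysql.cursors.DictCursor
)

# Poll until the replica reports zero lag behind the master.
# Note: Seconds_Behind_Master is None if replication is not running.
while True:
    with conn.cursor() as cur:
        cur.execute('SHOW SLAVE STATUS')
        status = cur.fetchone()
    lag = status['Seconds_Behind_Master']
    print('Seconds_Behind_Master: %s' % lag)
    if lag == 0:
        break
    time.sleep(10)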

Redirect Live Application to the encrypted RDS Instance

After the encrypted RDS instance is up-to-date with the replication master, the live application can be updated to use the encrypted RDS instance.

  • Verify that the Seconds_Behind_Master field in the output of the command below is 0, which indicates that the replica is up-to-date with the replication master.
    mysql> show slave status\G
  • Close all connections to the master (unencrypted RDS) when their transactions complete.
  • Stop the replication on the encrypted RDS instance after ensuring that there is no lag.
    mysql> show slave status\G
    mysql> CALL mysql.rds_stop_replication;
  • Update the application to use the encrypted RDS DB instance. This update typically involves changing the connection settings to identify the host name and port of the encrypted RDS DB instance, the user account and password to connect with, and the database to use.

Note: If the same endpoint as the unencrypted RDS instance is to be retained, to avoid any changes to the application configuration, rename the unencrypted RDS instance to a temporary DB identifier and then rename the encrypted RDS instance to the original identifier.

  • Set the encrypted RDS instance to read-write mode by setting READ_ONLY to OFF in the parameter group
  • Reset the replication configuration in encrypted RDS so that this instance is no longer identified as a replica.
    mysql> CALL mysql.rds_reset_external_master;
Summary
  • Encryption can be enabled for an Amazon RDS DB instance only when it is created, not after the DB instance is created. For an existing RDS instance, it can be enabled by restoring from an encrypted copy of a snapshot of the unencrypted instance.
  • DB instances that are encrypted can’t be modified to disable encryption.

The encryption migration for MySQL RDS can be performed with reduced downtime using master-slave replication, as described in the steps in this post.


S3 Data Copy – Lambda Function

Abstract

This whitepaper is intended for solutions architects and developers who are building solutions that will be deployed on Amazon Web Services (AWS). It provides architectural patterns for building a stateless automation to copy S3 objects between AWS accounts and for designing systems that are secure, reliable, high performing, and cost efficient.

Introduction

Amazon Simple Storage Service (Amazon S3) is object storage with a simple web service interface to store and retrieve any amount of data from anywhere on the web. It is designed to deliver 99.999999999% durability, and scale past trillions of objects worldwide. It is simple to move large volumes of data into or out of Amazon S3 with Amazon’s cloud data migration options. Once data is stored in S3, it can be automatically tiered into lower cost, longer-term cloud storage classes like S3 Standard – Infrequent Access and Amazon Glacier for archiving.

We will further explain how to copy objects (folders/files) uploaded to an S3 bucket from one AWS account to another.

Scenario: We have multiple AWS accounts with consolidated billing. The linked (source) accounts run EC2 instances in different time zones and upload application logs to S3 for backup. The applications run daily log rotation and upload the data to S3. The payer master (destination) account has a log analysis application which needs the application data from all the linked (source) accounts in a single S3 bucket.

Problem: As the log rotation depends on the EC2 instance’s time zone, we cannot schedule a script to sync/copy the data between S3 buckets at a specific time.

Solution Walkthrough
  1. When an object is uploaded to the source S3 bucket, the S3 event notification associated with the bucket publishes to the SNS topic in the source account.
  2. The SNS topic which has a lambda function subscribed to it will run the Lambda function.
  3. The Lambda function will assume the Destination Account IAM Role and copy the object from Source Bucket to Destination bucket.

Note: The S3 bucket event will have the source S3 bucket name and its object.

Solution flow diagram

AWS Resource in Source Account:

  • IAM Role
  • S3 Bucket
  • Lambda function
  • SNS Notification

AWS Resource in Destination Account:

  • IAM Role
  • S3 Bucket
Configuration in Source AWS Account
  1. Create an IAM role; this will be used for creating the CloudWatch log group and running the Lambda function. The role should also be able to assume the destination IAM role. Attach the AWS policy below and a trust relationship for the Lambda service.
  a. Attach a CloudWatch Logs policy with CreateLogGroup, CreateLogStream and PutLogEvents. This policy will be used by Lambda to upload the Lambda output to CloudWatch Logs.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        }
    ]
}

b. Create an inline policy to assume the destination IAM role.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1489133353000",
            "Effect": "Allow",
            "Action": [
                "sts:AssumeRole"
            ],
            "Resource": [
                "arn:aws:iam::<Destination AWS Account Number>:role/<Destination Role>"
            ]
        }
    ]
}

c. Add a trust relationship for the Lambda service in the IAM role.

2. Create an S3 bucket in the source account, to which the logs will be uploaded.

Add the bucket access policy below to the source bucket, granting access to the IAM role created in the destination account.

{
    "Version": "2008-10-17",
    "Id": "Policy1398367354624",
    "Statement": [
        {
            "Sid": "CrossAccount",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<Destination AWS Account Number>:role/<Destination Role>"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::<Source Bucket>/*"
        }
    ]
}

3. Create the Lambda Function

The Lambda function will assume the destination IAM role and copy the S3 object from the source bucket to the destination bucket.

  1. In the Lambda console, choose Create a Lambda function.
  2. Directly move to configure function.
  3. For Name, enter a function name. The function name should match the name of the S3 Destination Bucket.
  4. Enter a description that notes the source bucket and destination bucket used.
  5. For Runtime, choose Python 2.7.
  6. For Code entry type, choose Edit code inline.
  7. Paste the following into the code editor:

import urllib
import boto3
import ast
import json

print('Loading function')

def lambda_handler(event, context):
    # The S3 event is delivered inside the SNS message body
    sns_message = ast.literal_eval(event['Records'][0]['Sns']['Message'])

    # By convention, the Lambda function name matches the destination bucket name
    target_bucket = context.function_name
    source_bucket = str(sns_message['Records'][0]['s3']['bucket']['name'])
    key = str(urllib.unquote_plus(sns_message['Records'][0]['s3']['object']['key']).decode('utf8'))
    copy_source = {'Bucket': source_bucket, 'Key': key}

    print "Copying %s from bucket %s to bucket %s ..." % (key, source_bucket, target_bucket)

    # Assume the cross-account role in the destination account
    sts_client = boto3.client('sts')
    assumedRoleObject = sts_client.assume_role(
        RoleArn="arn:aws:iam::<Destination Account ID>:role/<Destination Role>",
        RoleSessionName="AssumeRoleSession1"
    )
    credentials = assumedRoleObject['Credentials']

    # Create an S3 client with the temporary credentials of the assumed role
    s3 = boto3.client(
        's3',
        aws_access_key_id=credentials['AccessKeyId'],
        aws_secret_access_key=credentials['SecretAccessKey'],
        aws_session_token=credentials['SessionToken'],
    )

    # Copy the object into the destination bucket
    s3.copy_object(Bucket=target_bucket, Key=key, CopySource=copy_source)

8. Select the existing role option and choose the IAM role created in step 1 above.

4. Create SNS Topic.

The SNS topic will be used by the S3 bucket. When an object is uploaded to the S3 bucket, it publishes to the SNS topic. The Lambda function created in the previous step is subscribed to the topic and is triggered whenever the topic is invoked.

  a. Create the SNS topic in the source account.
  b. In the SNS topic options, select Edit topic policy.

In the popup window, select the Advanced view tab and update it with the policy provided below.

{
    "Version": "2008-10-17",
    "Id": "<default_policy_ID>",
    "Statement": [
        {
            "Sid": "<default_statement_ID>",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "SNS:Publish",
            "Resource": "arn:aws:sns:us-east-1:<Source Account ID>:<Source SNS Topic Name>",
            "Condition": {
                "ArnLike": {
                    "AWS:SourceArn": "arn:aws:s3:::<Source S3 Bucket>"
                }
            }
        }
    ]
}

c. Create a subscription for the SNS topic with Lambda as the protocol and the Lambda function created in the previous step as the endpoint. This will run the Lambda function whenever a message is published to the SNS topic.

5. Create the Notification Event for Source S3 Bucket.

  1. In the S3 bucket properties, select the Events option
  2. Add Notification to Event
  3. Provide the name for the Notification Event
  4. Select ObjectCreate (All) and specify a Prefix for the objects we want to copy to the destination bucket. With this, any object uploaded to <Bucket Name>/<Prefix> will be copied to the destination
  5. Provide the Suffix for your log files
  6. For Send to, select SNS and choose the SNS topic name
  7. Save the Event Notification

Note: By providing a Prefix and Suffix, we can clearly define the objects we want to copy from the source to the destination S3 bucket.

Configuration in Destination AWS Account:

  1. Create an IAM role, which will be used by Lambda to copy the objects.
  a. Attach an S3 bucket access policy to the role:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::<Source Bucket>",
                "arn:aws:s3:::<Source Bucket>/*"
            ]
        },
        {
            "Sid": "Stmt1489073558000",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::<Destination Bucket>",
                "arn:aws:s3:::<Destination Bucket>/*"
            ]
        }
    ]
}

b. Add a trust relationship for the source account IAM role and the Lambda service:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<Source Account>:role/<Source Role>",
                "Service": "lambda.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}

Configuration Validation
  1. Upload an object to the source bucket.
  2. Verify that the object was copied successfully to the destination bucket.
  3. Optional: view the CloudWatch Logs entry for the Lambda function execution to confirm a successful copy.
Known Issue and Limitation
  1. If the file size is greater than 5 GB, replace the S3 copy command in the Lambda function (a fuller sketch follows this list).

From: s3.copy_object(Bucket=target_bucket, Key=key, CopySource=copy_source)

To: s3.copy(copy_source, context.function_name, key)

2. As of now, the Lambda function has a maximum timeout of 5 minutes. If the file size is very large, the Lambda function used in this document will not be able to copy the data within that limit.
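
As a sketch of that replacement in context, boto3's managed copy API performs a multipart copy automatically for large objects; the bucket names, key, and threshold below are illustrative:

import boto3
from boto3.s3.transfer import TransferConfig

# In the Lambda function, this would be the client created with the
# assumed-role credentials; shown here standalone for clarity.
s3 = boto3.client('s3')

copy_source = {'Bucket': '<source bucket>', 'Key': '<object key>'}

# Managed copy: boto3 switches to a multipart copy when the object
# exceeds the configured threshold, so objects larger than 5 GB work.
s3.copy(copy_source, '<destination bucket>', '<object key>',
        Config=TransferConfig(multipart_threshold=1024 ** 3))  # illustrative 1 GiB threshold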

Conclusion

The solution is simple and can be used for multiple use cases such as cross-account/cross-region replication, centralized log auditing, and centralized backup/archive locations.
