Category: DevOps

[GUIDE]: Creating an Oracle RDS Read Replica

AWS recently announced support for Read Replicas on Oracle RDS, available for supported 12.1 versions (12.1.0.2.v10 and higher), for all 12.2 versions, and for all 18 versions.

AWS RDS uses Oracle Data Guard to replicate database changes from the source to the replica. Updates made to the source DB instance are asynchronously copied to the Read Replica. A Read Replica is normally used to offload heavy read traffic for an application: the load on the source database can be reduced by routing read queries from the applications to the Read Replica.

Creating a read replica doesn’t require an outage on the source RDS instance. Amazon RDS sets the necessary parameters and permissions for the source DB instance and the Read Replica without any service interruption. A snapshot is taken of the source DB instance, and this snapshot becomes the Read Replica. No outage occurs when a read replica instance is deleted.

Configuring Oracle Read Replica

Prerequisites for configuring the replica in Oracle

  • Enable automatic backups on the source database – set the backup retention period to a value other than zero.
  • Enable force logging mode on the database
    exec rdsadmin.rdsadmin_util.force_logging(p_enable => true); 
  • If any changes are to be made to the source database’s redo logs, such as changing the log size, make them before the read replica is created. Modifications after replica creation can cause the online redo logging configuration to get out of sync with the standby logging configuration.
  • Make sure that the max_string_size parameter setting is the same on the source DB instance and the Read Replica if they use different parameter groups
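The backup prerequisite can also be applied programmatically. The sketch below builds the parameters for a boto3 `modify_db_instance` call that sets a non-zero backup retention period; the instance identifier and retention value are illustrative assumptions.

```python
def enable_backup_params(db_instance_id, retention_days=7):
    # A replica requires automatic backups: retention must be non-zero.
    # ApplyImmediately avoids waiting for the next maintenance window.
    return {
        "DBInstanceIdentifier": db_instance_id,
        "BackupRetentionPeriod": retention_days,
        "ApplyImmediately": True,
    }

# With boto3 (requires AWS credentials, so not executed here):
# import boto3
# boto3.client("rds").modify_db_instance(**enable_backup_params("prod-oracle"))
```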
Creating Oracle Read Replica
  • Log in to the AWS RDS console and select the source database
  • Click Actions → Create read replica


  • Choose the instance specifications for the read replica. It is recommended to use the same instance class and storage type as the source
  • For Multi-AZ deployment, choose Yes to create a standby of your replica in another Availability Zone for failover support for the replica.
  • Choose the DB instance identifier for the read replica
  • Choose Create read replica
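The console steps above can also be sketched with the AWS SDK. The helper below assembles the request for boto3’s `create_db_instance_read_replica`; the instance identifiers are hypothetical.

```python
def read_replica_request(source_id, replica_id, multi_az=False):
    # Mirrors the console choices: replica identifier, source instance,
    # and optional Multi-AZ standby for the replica itself.
    return {
        "DBInstanceIdentifier": replica_id,
        "SourceDBInstanceIdentifier": source_id,
        "MultiAZ": multi_az,
    }

# With boto3 (requires AWS credentials, so not executed here):
# import boto3
# boto3.client("rds").create_db_instance_read_replica(
#     **read_replica_request("prod-oracle", "prod-oracle-replica", multi_az=True))
```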

Amazon RDS for Oracle keeps a minimum of two hours of transaction logs on the source DB instance. Logs are purged from the source after two hours or after the archivelog retention hours setting has passed, whichever is longer.

A Read Replica is created with the same storage type as the source DB instance. However, you can create a Read Replica that has a different storage type from the source DB.

We can create up to five Read Replicas from one source DB instance. The Oracle DB engine version of the source DB instance and all of its Read Replicas must be the same.

When the source RDS instance is upgraded to a higher version, Amazon RDS upgrades the Read Replicas immediately after upgrading the source DB instance, regardless of a Read Replica’s maintenance window.

An Oracle read replica can neither be stopped, nor can another read replica be created from an existing replica (cascading read replicas are not supported for Oracle). Also, the source RDS instance cannot be stopped while a read replica is running.

Similar to other engines, a read replica can be promoted to a standalone DB instance. When you promote a Read Replica, the DB instance is rebooted before it becomes available.


Monitoring Replication

Replication monitoring can be done from the AWS RDS console and at the database level


The ReplicaLag metric is the sum of the Apply Lag value and the difference between the current time and the apply lag’s DATUM_TIME value. The DATUM_TIME value is the last time the Read Replica received data from its source DB instance.
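As a rough sketch of that arithmetic (the timestamps here are invented for illustration):

```python
from datetime import datetime, timedelta, timezone

def replica_lag(apply_lag: timedelta, datum_time: datetime,
                now: datetime) -> timedelta:
    # ReplicaLag = apply lag + (current time - DATUM_TIME)
    return apply_lag + (now - datum_time)

now = datetime(2019, 4, 29, 12, 0, 30, tzinfo=timezone.utc)
datum_time = datetime(2019, 4, 29, 12, 0, 0, tzinfo=timezone.utc)  # last data received
lag = replica_lag(timedelta(seconds=5), datum_time, now)
print(lag.total_seconds())  # 35.0
```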

Data dictionary views used for checking the replication lag:

  • V$ARCHIVED_LOG – Shows which commits have been applied to the Read Replica.
  • V$DATAGUARD_STATS – Shows a detailed breakdown of the components that make up the ReplicaLag metric
  • V$DATAGUARD_STATUS – Shows the log output from Oracle’s internal replication processes.
Current Limitations of Oracle Read Replica
  • Must have an Active Data Guard license
  • Oracle Read Replicas are only available on the Oracle Enterprise Edition (EE) engine
  • Oracle Read Replicas are available for supported 12.1 versions (12.1.0.2.v10 and higher), for all 12.2 versions, and for all 18 versions
  • Oracle Read Replicas are only available for DB instances on the EC2-VPC platform
  • Oracle Read Replicas are only available for DB instances running on DB instance classes with two or more vCPUs
  • Amazon RDS for Oracle does not intervene to mitigate high replica lag between a source DB instance and its Read Replicas. Ensure that the source DB instance and its Read Replicas are sized properly, in terms of compute and storage, to suit their operational load
  • Amazon RDS for Oracle Read Replicas must belong to the same option group as the source database. Modifications to the source option group propagate to Read Replicas
  • Cross-region Read Replicas currently not supported
  • The replication process in Oracle cannot be stopped; the only option is to delete the read replica
  • Amazon RDS doesn’t support circular replication
  • Currently, manual snapshots of Amazon RDS for Oracle Read Replicas cannot be created, nor can automatic backups be enabled for them


[GUIDE]: Setting up an AWS VPC Client VPN

AWS Client VPN is an AWS client-based VPN service that enables you to securely access your resources in AWS and your on-premises network. With Client VPN, you can access your resources from any location using an OpenVPN-based VPN client.

Below are the steps to implement AWS VPC Client VPN.

Server and Client Certificate and keys:

Generate the server and client certificates and keys using the steps below on any Linux system


  • git clone https://github.com/OpenVPN/easy-rsa.git
  • cd easy-rsa/easyrsa3
  • ./easyrsa init-pki
  • ./easyrsa build-ca nopass
  • ./easyrsa build-server-full server nopass (This step generates the server certificate and key)
  • ./easyrsa build-client-full client1.domain.tld nopass (This step generates the client certificate and the client private key)
  • Store/copy the server and client certificates and keys in a designated location, as these are important
  • mkdir /custom_folder/
  • cp pki/ca.crt /custom_folder/
  • cp pki/issued/server.crt /custom_folder/
  • cp pki/private/server.key /custom_folder/
  • cp pki/issued/client1.domain.tld.crt /custom_folder/
  • cp pki/private/client1.domain.tld.key /custom_folder/

Upload the Certificate to AWS ACM:

Once certificate creation is complete, log in to the AWS console and import the certificates through ACM.


Note: Certificate body content will be server.crt | Certificate key content will be server.key
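The import can be scripted as well. A minimal sketch, assuming the files were copied to /custom_folder as in the earlier steps, that reads them for boto3’s `import_certificate` call:

```python
def acm_import_params(folder):
    # Certificate body = server.crt, key = server.key, chain = ca.crt
    def read(name):
        with open(f"{folder}/{name}", "rb") as fh:
            return fh.read()
    return {
        "Certificate": read("server.crt"),
        "PrivateKey": read("server.key"),
        "CertificateChain": read("ca.crt"),
    }

# With boto3 (requires AWS credentials, so not executed here):
# import boto3
# boto3.client("acm").import_certificate(**acm_import_params("/custom_folder"))
```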

Create Client VPN EndPoint:

Open the Amazon VPC console. In the navigation pane, choose Client VPN Endpoints, then choose Create Client VPN Endpoint. Use the certificates uploaded in the previous step while configuring the endpoint.


  • For Client IPv4 CIDR, specify an IP address range, in CIDR notation, from which to assign client IP addresses
  • For Server certificate ARN, specify the ARN for the TLS certificate to be used by the server. Clients use the server certificate to authenticate the Client VPN endpoint to which they are connecting
  • Specify the authentication method used to authenticate clients when they establish a VPN connection. To use mutual certificate authentication, select Use mutual authentication, and then specify the Client certificate ARN
  • Click Create Client VPN endpoint. Then select Associations to associate a VPC and subnet, and wait until the Client VPN endpoint becomes available


VPC Subnet Association:

To enable clients to establish a VPN session, you must associate a target network with the Client VPN endpoint. A target network is a subnet in a VPC.

Select the Associations tab, specify the VPC and subnet to associate, and then click Associate


Authorize Clients to Access a Network:

To authorize clients to access the VPC in which the associated subnet is located, you must create an authorization rule. The authorization rule specifies which clients have access to the VPC. In this document, we grant access to all users by clicking Authorize Ingress and specifying the destination CIDR.


You can enable access to additional networks connected to the VPC, such as AWS services, peered VPCs, and on-premises networks. For each additional network, you must add a route to the network and configure an authorization rule to give clients access. This is optional and can be achieved by selecting the Create Route option under the Route table.


Once all the steps in AWS are completed, download the client configuration.


Once the client configuration file is downloaded, append the client certificate and key generated in step #1 (client1.domain.tld.crt and client1.domain.tld.key) at the end of the file, using the syntax below:

    <cert>
    Enter certificate (contents of client1.domain.tld.crt) here
    </cert>

    <key>
    Enter key (contents of client1.domain.tld.key) here
    </key>

Configuring OpenVPN Client:

Download the OpenVPN client software on your local machine and import the configuration file


  • Connect to Client VPN using the configuration file


  • Try connecting to an instance in the same VPC using its private IP

With this we have successfully established an AWS VPC Client VPN.



[GUIDE]: Encrypting existing MySQL RDS with reduced downtime

AWS RDS instances and snapshots at rest can be encrypted by enabling the encryption option in AWS. Data that is encrypted at rest includes the underlying storage for DB instances, its automated backups, Read Replicas, and snapshots.

Amazon RDS encrypted DB instances use the industry standard AES-256 encryption algorithm to encrypt the  data on the server that hosts Amazon RDS DB instances. After the data is encrypted, Amazon RDS handles authentication of access and decryption of the data transparently with a minimal impact on performance.

Encryption can be enabled for newly created RDS instances at launch by choosing the Enable encryption option. However, an existing RDS instance cannot be encrypted on the fly. The way to migrate an existing unencrypted RDS instance to an encrypted one is to:

  • Create a snapshot of DB instance
  • Create an encrypted copy of that snapshot.
  • Restore a DB instance from the encrypted snapshot
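The three snapshot-based steps map to three RDS API calls. The sketch below returns them in order as (operation, parameters) pairs for boto3; the identifiers and the default KMS alias are illustrative assumptions.

```python
def encrypt_via_snapshot_steps(db_id, kms_key_id="alias/aws/rds"):
    snap = f"{db_id}-snap"
    enc_snap = f"{db_id}-snap-encrypted"
    return [
        # 1. Snapshot the unencrypted instance
        ("create_db_snapshot",
         {"DBInstanceIdentifier": db_id, "DBSnapshotIdentifier": snap}),
        # 2. Copy it; supplying a KMS key makes the copy encrypted
        ("copy_db_snapshot",
         {"SourceDBSnapshotIdentifier": snap,
          "TargetDBSnapshotIdentifier": enc_snap,
          "KmsKeyId": kms_key_id}),
        # 3. Restore a new instance from the encrypted snapshot
        ("restore_db_instance_from_db_snapshot",
         {"DBInstanceIdentifier": f"{db_id}-encrypted",
          "DBSnapshotIdentifier": enc_snap}),
    ]

# Each pair can be replayed against boto3 (not executed here):
# import boto3
# rds = boto3.client("rds")
# for op, params in encrypt_via_snapshot_steps("mydb"):
#     getattr(rds, op)(**params)
```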

The process mentioned above takes more time and yields more downtime, which is not acceptable for production databases. To reduce the downtime when migrating an unencrypted MySQL RDS instance to an encrypted one, master-slave replication can be used for MySQL RDS along with the read replica feature of AWS RDS.

Master-Slave Replication Configuration
  • Create a read replica for the unencrypted MySQL RDS instance and ensure it is in sync with the master. This replica will be used to capture the master binlog details, which are later used in the master-slave configuration
  • Increase the binlog retention period to a higher value in master
    mysql> call mysql.rds_set_configuration('binlog retention hours', <value>);
  • Create replication user in master
    mysql> CREATE USER 'repl'@'%' IDENTIFIED BY '<password>';
    mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';
  • Stop the replication in Read Replica
    mysql> CALL mysql.rds_stop_replication;
  • On the read replica, note the binlog status (Master_Log_File and Read_Master_Log_Pos)
    mysql> show slave status\G
  • Backup the read replica using RDS snapshot method
  • Encrypt the snapshot using AWS Copy Snapshot method – Default key or KMS option can be chosen for encrypting the snapshot based on the requirements
  • Delete the read replica RDS instance
  • Restore the DB instance using the encrypted snapshot

    While restoring the DB instance, ensure you set Multi-AZ and the backup retention period per your requirements. Also, create a new parameter group for the restored instance and set READ_ONLY to ON
  • Using the earlier captured values of Master_Log_File and Read_Master_Log_Pos, set the encrypted RDS instance as a slave of the master RDS instance
    mysql> call mysql.rds_set_external_master('<master_rds_endpoint>', <port>, 'repl', '<repl_user_password>', '<master_log_file>', <master_log_pos>, 0);
  • Start replication on the encrypted RDS instance and make sure it is in sync with the master, i.e., the unencrypted RDS instance
    mysql> call mysql.rds_start_replication;

On the encrypted RDS DB instance, run the show slave status\G  command to determine when the replica is up-to-date with the replication master. The results of the SHOW SLAVE STATUS command include the Seconds_Behind_Master field. When the Seconds_Behind_Master field returns 0, then the replica is up-to-date with the master.
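The cut-over condition can be expressed as a small check. A minimal sketch, assuming the SHOW SLAVE STATUS row has already been fetched into a dict:

```python
def replica_caught_up(slave_status: dict) -> bool:
    # Safe to cut over only when the replica reports zero lag.
    # Seconds_Behind_Master of None (replication stopped or broken)
    # does not count as caught up.
    return slave_status.get("Seconds_Behind_Master") == 0

print(replica_caught_up({"Seconds_Behind_Master": 42}))  # False
print(replica_caught_up({"Seconds_Behind_Master": 0}))   # True
```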

Redirect Live Application to the encrypted RDS Instance

After the encrypted RDS instance is up-to-date with the replication master, live application can be updated to use the encrypted RDS instance.

  • Verify that the Seconds_Behind_Master field in the below command results is 0, which indicates that the replica is up-to-date with the replication master.
    mysql> show slave status\G
  • Close all connections to the master(unencrypted RDS) when their transactions complete.
  • Stop the replication on the encrypted RDS instance after ensuring that there is no lag.
    mysql> show slave status\G
    mysql> CALL mysql.rds_stop_replication;
  • Update the application to use the encrypted RDS DB instance. This update typically involves changing the connection settings to identify the host name and port of the encrypted RDS DB instance, the user account and password to connect with, and the database to use.

Note: If the same endpoint as the unencrypted RDS instance is to be retained to avoid changes in application configurations, rename the unencrypted RDS instance to a temporary DB identifier and then rename the encrypted RDS instance to the original identifier.

  • Set the encrypted RDS instance to read-write mode by setting READ_ONLY to OFF in the parameter group
  • Reset the replication configuration in encrypted RDS so that this instance is no longer identified as a replica.
    mysql> CALL mysql.rds_reset_external_master;
  • Encryption can be enabled for an Amazon RDS DB instance only when it is created, not afterwards. For an existing instance, it can be enabled by restoring from an encrypted copy of a snapshot of the unencrypted instance.
  • DB instances that are encrypted can’t be modified to disable encryption.

The encryption migration for MySQL RDS can be performed with reduced downtime using master-slave replication, as described in the steps in this post.

Smitha Hiriyanna · January 10, 2019


Automation Path to S4 & Beyond

Along with our friends in the SAP industry, we have spent too much of our time in the past year in the debate around the merits of a ‘Greenfield’ approach versus a ‘Brownfield’ approach for customers moving to SAP’s newest ERP platform. ‘Greenfield’ is based on the premise that it is better for customers to start with a blank slate and re-implement, while ‘Brownfield’ is based on the premise that customers should migrate their current system functionality to S/4HANA. In our opinion this ongoing ‘philosophical debate’ is taking attention and energy away from the primary customer concerns which involve the time, money and risk involved in getting the maximum benefits from SAP S/4HANA as their Digital Core. The effective debate should be based on asking and answering the question “How fast can we go?” This whitepaper lays out smartShift’s vision and approach for solving that problem.

Our customers’ business vision, strategy and plans are heavily geared towards Digitalization.  This means that in every industry we work in, the effects and influences of Silicon Valley are being felt.  Tesla, GE Digital and Amazon are changing the game in their sectors and competitors must respond immediately.  The Digital game is all about speed.  With this in mind, when we at smartShift look at the best way to manage ERP technology, we start with the question we call ‘WWSVD’ – What Would Silicon Valley Do?  Would 3-5 years to production be acceptable?  Would implementation budgets in the 8 to 9 figures be acceptable?  How about project teams with hundreds of staff?  Would these timeframes and prices be acceptable while critical market and business priorities are on hold?  We think not.  So how do world-class Digital companies approach these kinds of problems, and how can we apply that thinking to ERP?

We think there are 3 key concepts in the ‘Digital Playbook’ that are directly relevant and have massive benefits for your Digital Transformation strategy.  We will discuss each of them and then tie them all together in a case study.

  1. Minimum Viable Product
  2. Agile/DevOps Approach
  3. Automation and Tooling

Minimum Viable Product (MVP) in the technology business means the minimum product that early adopter or beta customers can receive utility from.  In an environment where speed matters, defining the MVP is a critical step in finding the fastest path to market.  The MVP does not represent the end of the journey – it is just the baseline to get into the market and begin to receive real-world feedback from customers so the product can be further developed and optimized.

When customers initially think about MVP for a production ERP system, it doesn’t seem to translate well.  Factory operations, supply chains and global finance teams that have been optimized over decades cannot revert to a simple, minimalist approach easily.  Hence, the ‘Greenfield’ approach inherently must begin with a massive redesign effort to document a very complex MVP that the business would accept.  MVP in the ‘Brownfield’ camp is similarly daunting as MVP represents rebuilding all of the functionality you have today working exactly the same way, but on a completely different technology.  So ‘Brownfield’ customers now face a massive technology design effort in lieu of a functional design.

We think it is helpful to pause here and think about the move to SAP S/4HANA as two logically distinct efforts.  One is a change in the technology stack – the other is a change in the functional paradigm.  Combining and conflating these two distinct changes is what leads to complexity in defining the S/4HANA MVP.   It is worth noting that while digital companies frequently undertake both of these they rarely combine them into a single effort, resorting to that approach only if required.

Based on this concept and the work with our customers we have defined the SAP S/4HANA MVP as a solution that:

  1. Does not disrupt any current key business process unless necessary
  2. Takes advantage of any ‘free’ or obvious process or functional improvements available in the upgrade
  3. Achieves the technology platform upgrade as quickly as possible
  4. Lays the foundation for iterative/agile improvements going forward

The most difficult part of identifying the S/4HANA MVP is that the knowledge required to understand the true costs behind these decisions is dispersed across many parties in your organization, which brings us to the next principle.

Agile/DevOps Approach

When packaged ERP went mainstream in the 1990s, the implementation approach was universally the Waterfall methodology.  Today Digital Companies approach technology projects in a different manner based on Agile Development.  Agile was created as a response to the shortcomings of Waterfall development, specifically:

  • Waterfall is not responsive or adaptable to changing requirements or conditions
  • It requires significant oversight and micro-management, increasing at scale
  • It depends on highly specialized and compartmentalized resources

The Agile approach, in contrast, leverages self-organizing, cross-functional, lean teams with short-cycle sprints delivering incremental releases using evolutionary design thinking.  When one thinks about business cycles today, it is hard to rationalize multi-year waterfall projects for anything if Agile is a viable alternative.  The challenge is that virtually all of the skilled and knowledgeable resources in the ERP market have been trained in Waterfall, thus the dominant planning paradigm is Waterfall. The hallmarks of a Waterfall-based approach are:

  • The first phase of any project is a lengthy planning and design exercise
  • There are clearly delineated business, functional, application, infrastructure and testing teams and resources, usually with conflicting views and objectives
  • The primary issues raised are about ‘what could go wrong’ and ‘how might I be blamed’

In contrast, when adopting an Agile-based approach we see:

  • The first steps are action oriented, for example a pilot based on a defined MVP
  • There is a single team with representatives from all disciplines but one common goal
  • The primary issues raised are around ‘how can we go faster and do better with less’

The reason we absolutely must change the way we approach an S/4HANA project has to do with optimizing the decision making.  The fundamental trade-offs between ‘Greenfield’ and ‘Brownfield’ are about the overall costs and benefits of migration versus reimplementation.  These decisions are both complex as well as granular.  That is, for any given piece of system functionality there are different costs and benefits that require input from many angles.  The only way to address this is with a cross-functional team making these cost/benefit decisions together.  The problem with ‘Greenfield’ versus ‘Brownfield’ is that you have effectively made a unilateral and overarching decision on the approach for all functionality, as well as effectively enabled the traditional Waterfall-based approach to persist.

At the executive level, the appeal of Agile-based approaches is the size and cost of the teams.   One of the hallmarks of Digital businesses is the leverage gained from relatively small groups of resources that do not adhere to the traditional ‘silos’ or hierarchies that we are all so conditioned to.  The practice of having all specialties represented in a single team is referred to as DevOps where a single team designs, develops, tests and deploys a product.  As this approach is rapidly taking over the technology marketplace, a key differentiator is emerging which can be a game-changer for ERP.  A core discipline in DevOps is maximizing the use of Automation and Tooling throughout the entire process, allowing people to focus on the high-value and complex decisions and work, while delegating the lower-value work to machines.  And this brings us to the final game-changer…

Automation and Tooling

When we think about the cost of migration or re-implementation of an ERP system there is a factor of scale that we just cannot avoid. These systems have massive scope in terms of functionality, data and interfaces.  The most obvious example every customer can relate to is the cost and complexity of a single full regression test on a major ERP release, which is usually the limiting factor in the frequency of releases of SAP (it is mathematically impossible to do monthly releases when you have a three month testing window). Another example, and one we address daily at smartShift, is the complexity of modifying and managing the massive customizations customers have made to their system to ensure they continue to work and perform as the system is modified. The massive scale of these problems ensures that if addressed with expensive and unpredictable human resources they will take a long time, cost a lot of money and have unforeseen issues arise along the way.

However, if these problems are addressed by designing processes where machines do the bulk of the work, we can plan once and execute over and over.  So while standing up an automated testing regime for a single release is not cost-effective, in a world of monthly releases investment in testing automation has tremendous payoffs.  Similarly, implementing Automation to manage your customizations to make 1000 code-changes once may not be cheaper than shipping the code offshore for a given project, but over several technical and business releases Automation installed once will create significant economies.  Silicon Valley companies clearly understand this approach as they use it to its maximum extent.  There is no other way that they could develop and manage billion-dollar generating products with daily releases managed by teams that can be fed with two pizzas.

With respect to the S/4HANA move, Automation plays an absolutely key role in changing the game by fundamentally changing the cost/benefit decision in defining the MVP.  In the classic ‘Greenfield’ versus ‘Brownfield’ perspective a large amount (if not all of) of your system has issues that require manual labor through either technical re-design and implementation or through process reengineering and reimplementation.   So no matter what you do, cost is high in this paradigm.  While Automation cannot redesign business processes, it can dramatically lower the cost of bringing forward existing functionality without manual effort.  For the tech geeks, think about what Docker does for legacy apps moving to Cloud infrastructure.  Working with smartShift, our customers use Automation to identify and categorize functionality into four major S/4HANA ‘buckets’.

  1. Is not actually being used enough to justify keeping – decommission
  2. Will migrate with no intervention – no need to change
  3. Can be migrated by using Automated Transformation – no need to change
  4. Requires significant rework – should be evaluated for re-implementation or replaced with standard S/4 functionality by redesigning the business processes.

As opposed to a pure ‘Greenfield’ approach we are limiting the scope of the redesign significantly, and when compared to a ‘Brownfield’ approach we are significantly reducing the costs of many of the technical changes with Automation while singling out the especially problematic ones to be re-implemented. Thus we are using the optimal framework for best cost/benefit decisions leading to the optimal MVP.  Of course, this approach only works when we have a cross-functional team that is fully empowered, informed and bought into this approach.

For customers, the idea of taking on a new approach that is neither pure ‘Greenfield’ nor ‘Brownfield’ to get to S/4HANA much faster, cheaper and with less risk has to be appealing.  The real question is: does it work in the real world?  Let’s look at a case study.

The Döhler Example

Döhler is a global producer, marketer and provider of natural ingredients, ingredient systems and integrated solutions for the food and beverage industry. They have run their operations on SAP ERP since 1993 and have a highly customized and specialized system with over 33,000 custom objects. With S/4HANA, Döhler wants to provide a digital core system that enables them to run new business processes and even new business models.

To meet the need to get to S/4HANA Döhler assembled a core team representing all disciplines within SAP reflecting the agile principles. The team used smartShift and SAP Solution Manager’s Automation capabilities to perform a thorough analysis of all functionality in their system. Using this information, they were able to identify two sequential MVP releases that could be accomplished within time and budget windows acceptable to management.

The first release was a ‘lift and shift’ to Suite on HANA (SoH), swapping out the database layer without making functional changes to the system. This project was completed end-to-end in 3 months total time. Technical scope was optimized by performing an assessment of risk and maximizing the amount of predictable automated transformation that could be performed, resulting in about 210,000 technical changes either required for HANA compatibility or ‘low hanging fruit’ to enhance performance or simplify the configuration. Because the core team clearly understood the dependencies and risks, quality was extremely high with 15 total errors in testing – a clear example of the power of applying the DevOps approach.

Immediately following the SoH release, Döhler launched the sprint to S/4HANA. This release contains a more complex combination of functional and technical changes. Döhler informed this decision by running the SAP Simplification Database, which identified over 5500 functional gaps to be addressed in the move to S/4HANA. They then used the 4 ‘buckets’ approach discussed above. Working with this approach they reduced the functional changes requiring process engineering to the top 100, with suitable migrations for the remaining 98%+. With a four month window to make the MVP S/4HANA release, this allowed Döhler one business day per process area. Döhler’s first S/4HANA release preserves all critical functionality and limits S/4HANA optimization to the most critical areas. Döhler is now moving to an iterative approach to perform incremental S/4HANA optimization as users become more familiar with the Fiori interface, as SAP release cycles for S/4HANA advance, or as natural business changes create opportunity.

For customers that believe that their system is more complex than Döhler, we would counter that Döhler’s level of customization puts them in the top 5% of enterprise SAP systems and that the biggest global systems never have more than 4x Döhler’s level of customization. This remarkable case is a function of taking a fundamentally different approach resulting in a markedly different and faster outcome.

Please contact us if you are considering the next step to S/4HANA, would like valuable insights from our experienced team, or want to learn more about smartShift’s S/4 automation platform!

Reach out to our SAP specialists using the below contact information.
