Version: 4.8

Common Application Stack Documentation

This page contains documentation common to all SCALE application stacks. Individual application stack pages link here for shared concepts and configuration details.

Getting Started

Refer to the following link to learn about getting started in the Azure DevOps environment: Getting Started in Azure DevOps Environment

Repository Structure

The source code repository for an application stack contains a source directory tree with the application source code, plus one or two directory trees with deployment details, as follows:

  • source - this directory contains the application source code
  • appCode-serviceName - (named as the AppCode followed by the application or service name) contains the Helm chart deployment materials
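
As an illustrative sketch (the appCode-serviceName directory name is a placeholder, as above), the top level of such a repository might look like:

```text
.
├── azure-pipelines.yml       # CI/CD pipeline configuration
├── source/                   # application source code
└── appCode-serviceName/      # Helm chart deployment materials
```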

Also at the top of the repository is a file called azure-pipelines.yml. This file contains a reference to the appropriate version of the CI/CD pipeline logic, some variables unique to the application (e.g. container version), and YAML data structures providing key information about the environments into which to deploy the application and the sequence of events to complete the deployment (e.g. dependencies, additional steps to retrieve secrets to pass to the deployed container, etc.).

Additional items in this repository generally should not be modified; changing them risks breaking the pipeline workflows.

Application Configuration

After navigating to the appropriate repository, clone it to a local development environment (i.e. a local workstation) for the actual application development and configuration work. Configuring the pipeline includes setting the appropriate container version (reflected in the appVersion variable under the parameters: section) and defining the target workspace and hostspaces, plus any dependencies between hostspaces or prerequisite deployment objects required. Once the application is ready to deploy into the SCALE workspace and the new application version has been set, create a Git pull request for the updated code. Creating this pull request triggers the Continuous Integration pipeline.
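
As a minimal sketch of the container version setting described above (the version value shown is illustrative, not a real release):

```yaml
parameters:
  # Container version to build and deploy (value shown is illustrative)
  appVersion: 1.2.3
```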

CI/CD Pipeline Overview

Please note that the document "SCALE Concepts and Procedures" contains more details about the specific flow of the application through the CI/CD pipelines.

Once a pull request has been created against the Git repository, it triggers the CI/CD pipeline to build, scan, and prepare the application for deployment into the Workspace. Although the CI/CD pipeline is triggered automatically, there are times when it needs to be run (or re-run) manually, and the pipeline should be examined for the results of its preparation of the application and deployments. Details for examining the CI/CD pipeline and analyzing its results can be found here: Continuous Integration and Continuous Delivery Pipelines

Upgrading to Pipeline v4.7

SCALE DevOps Pipeline v4.7 introduces CyberArk integration with Rancher. Conjur performs certificate-based authentication with each Rancher cluster and securely supplies secrets to each namespace/application within the cluster. Secrets are securely stored in the CyberArk vault. For more information, refer to the Conjur Integration Document.

  1. Setup CyberArk integration (link)

  2. Remove ambassador.yaml file from templates folder.

  3. Add the release 4.7 reference in azure-pipelines.yml.

resources:
  repositories:
    - { repository: templates, type: git, name: devexp-engg/automation, ref: release/v4.7 }
    - { repository: spaces, type: git, name: spaces, ref: devops/v1.0 }

Note: The security scan stage was released as a minor update in v4.5. To read more about how this feature works, please refer to New Security Stage of Pipeline: CrowdStrike Image Scanning and Gitleaks Secret Scanning.

To transition to v4.7, we recommend using the dxpctl Pipeline Upgrade Tool.

Pipeline Definition

The logic for the CI/CD pipeline is a common and shared code base for all applications; however the configuration of the pipeline that applies the common logic to the specific application is defined in the top level directory of the source code repository for the application in a file named azure-pipelines.yml.

The structure of the azure-pipelines.yml file is a YAML file using standard YAML syntax. For general information about YAML structures, there are many available resources, including the tutorial at this link: YAML Tutorial.
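
For orientation, azure-pipelines.yml relies on three common YAML constructs: mappings (key: value pairs), lists (items introduced by a dash), and block scalars (|) for multi-line values. The values shown here are illustrative:

```yaml
# Mapping: nested key/value pairs, indented with spaces (never tabs)
parameters:
  appVersion: 1.2.3          # illustrative value

# List: each item introduced by a dash
repositories:
  - templates
  - spaces

# Block scalar: "|" preserves the following indented lines as literal text
overrideFiles: |
  appCode-serviceName/values.workspace.yaml
```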

There are certain required data elements that must be defined within the azure-pipelines.yml file as a prerequisite to the CI/CD pipeline running while other elements are optional and used to modify the standard behavior of the CI/CD pipeline.

More details about the azure-pipelines.yml file can be found here: General Structure of the azure-pipelines.yml File

The following resource repositories configuration is required to run v4.7 pipelines:

resources:
  repositories:
    - { repository: templates, type: git, name: devexp-engg/automation, ref: release/v4.7 }
    - { repository: spaces, type: git, name: spaces, ref: devops/v1.0 }

Defining Hostspaces

Pipeline v4.7 simplifies the structure of azure-pipelines.yml by defining all spaces (workspace, devint, db/hostspaces) as a single list. The older format is still supported, but to continue using it, ref and k8s must be removed from spaces. Following the newer format is strongly recommended. acceptanceSpace must be specified in the file. For example:

parameters:
  acceptanceSpace: devexp-ae103-stg
  spaces:
    workspace:
      helm:
        overrideFiles: |
          appCode-serviceName/values.workspace.yaml
    devexp-ae101-dnt:
      helm:
        overrideFiles: |
          appCode-serviceName/values.hostspace.yaml
    devexp-ae103-stg:
      helm:
        overrideFiles: |
          appCode-serviceName/values.hostspace.yaml
    devexp-npc02-prd:
      helm:
        overrideFiles: |
          appCode-serviceName/values.hostspace.yaml

Skipping Deployments

Deployment skipping is a handy feature introduced in the v4.x pipeline. During initial builds or testing, deployments can be excluded for all spaces (workspace, devint, db/hostspaces) even if they are configured in azure-pipelines.yml. The pipeline will then only perform the application build and scans.

extends:
  template: devops/<appstack>.yml@spaces
  parameters:
    system:
      skipDeploy: true

Replace <appstack> with the appropriate application stack name (e.g., springboot, nodejs, flask, etc.).

Horizontal Pod Autoscaler

Pipeline v4.7 includes configuration of Horizontal Pod Autoscaling (HPA) for any deployment. Please refer to the documentation for more details. To configure HPA, use the template below in values.yaml for the required host space. The commented-out example below starts the deployment with a single pod, which can be scaled up to 3 pods when average CPU usage crosses 70%.

# If you are using the Horizontal Pod Autoscaler, you must disable replicaCount
replicaCount: 2

# hpa:
#   min: 1
#   max: 3
#   cpu: 70

PVC for Application Stacks

Persistent Volume Claims (PVCs) provide persistent storage for application stacks deployed in Kubernetes clusters. This feature allows developers to define and configure the persistent storage requirements for their applications using PVCs. For example:

persistence:
  - enabled: true
    name: data
    # storageClass: ""
    accessMode: ReadWriteMany
    # subPath: appdata
    size: 25Gi
    mountPath: /data

Pod Anti-Affinity

Pipeline v4.7 enhances pod scheduling on the cluster with the addition of pod anti-affinity. For multi-pod applications, this ensures pods are spread across the nodes within a cluster rather than co-located on a single node.
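
The pipeline applies this automatically, but for reference, a standard Kubernetes pod anti-affinity stanza of the kind presumably generated looks like the following (the app label is illustrative, not taken from this pipeline):

```yaml
affinity:
  podAntiAffinity:
    # Prefer (rather than require) scheduling replicas on different nodes
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: appCode-serviceName   # illustrative label
          topologyKey: kubernetes.io/hostname
```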

Detailed Pipeline Configuration

The remainder of the configuration work in the azure-pipelines.yml file focuses primarily on defining the target workspace and hostspaces for the application and providing details for these spaces, any additional tasks needed to prepare the deployments, and dependencies that will dictate the sequence of deployments in these spaces beyond the established SCALE environment deployment approval processes. Details for configuring these elements of the pipeline can be found here: Pipeline Configuration Details

Kubernetes Deployment Objects

In order to deploy an application, a number of Kubernetes objects must be defined and deployed. The definition of these objects is controlled through a set of files in one of two forms: Either as Helm charts (the default method) or as Kustomize files. Information about the contents and customization of these Kubernetes deployment objects can be found here: Kubernetes Deployment Objects
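
For orientation, a Helm chart directory (the default method) conventionally follows the standard layout sketched below; the exact contents of an application's chart may differ:

```text
appCode-serviceName/
├── Chart.yaml                # chart name and version metadata
├── values.yaml               # default configuration values
├── values.workspace.yaml     # per-space override values
├── values.hostspace.yaml
└── templates/                # Kubernetes object templates rendered by Helm
```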

Troubleshooting

If something fails to deploy, the information about why the deployment failed (or was not even initiated) will be found in the logs of the CI/CD pipeline and can be tracked down using the methods described earlier in the "Continuous Integration and Continuous Delivery Pipelines" section.

However, additional information may be required either to better troubleshoot a failed deployment or to investigate the runtime behavior of an application that has been successfully deployed. In those cases, much of the information can be found in the Rancher web console. Information about navigating and analyzing information from the Rancher web console can be found here: Navigating the Rancher Web Console