27 docs tagged with "Application"

Alternative Release and Git Flows

The main flow of an application through SCALE is flexible but is generally modelled after a code-branching methodology called Release Flow, a simplified version of a methodology called Git Flow. The CI/CD pipelines in SCALE can implement both types of software life-cycle flow, and many of the design concepts within SCALE are, by default, consistent with Release Flow.
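As a rough sketch of the Release Flow branching pattern described above (branch names are illustrative; it assumes git 2.28+ for `init -b`):

```shell
# Release Flow in miniature: short-lived topic branches merge back to main,
# and releases are cut onto release/* branches for deployment.
git init -q -b main demo
git -C demo config user.email dev@example.com
git -C demo config user.name Dev
git -C demo commit -q --allow-empty -m "initial commit on main"

git -C demo checkout -q -b topic/my-feature        # topic branch off main
git -C demo commit -q --allow-empty -m "feature work"

git -C demo checkout -q main
git -C demo merge -q --no-ff -m "merge topic/my-feature" topic/my-feature

git -C demo checkout -q -b release/1.0             # release branch for deployment
echo "current branch: $(git -C demo branch --show-current)"
```

Unlike full Git Flow, there is no long-lived `develop` branch here: work flows from topic branches straight into `main`, and release branches only stabilize a snapshot.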

Azure Pipelines

The contents of the azure-pipelines.yml file drive the selection of CI/CD pipeline logic to be executed and the configuration and customization of the pipeline and its actions.
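A minimal azure-pipelines.yml of this shape might look like the following sketch; the template repository, template path, and parameter names are hypothetical, not SCALE's actual contract:

```yaml
trigger:
  - master

resources:
  repositories:
    - repository: templates
      type: git
      name: SCALE/pipeline-templates    # hypothetical shared-template repo

extends:
  template: stacks/nginx.yml@templates  # selects the pipeline logic to run
  parameters:
    appCode: MYAPP                      # illustrative parameter name
```

The `extends` keyword is standard Azure Pipelines YAML: the whole pipeline body comes from the referenced template, and the consuming file only supplies parameters.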

CaaS

Container-as-a-Service (CaaS) Application Stack

Canary Deployments

The version 3.5 pipeline introduces support for the deployment of applications using one of two available strategies for each target Kubernetes environment:

CI/CD Pipelines

V3 CI/CD pipelines are fully coded in YAML source files within Azure DevOps. Navigate to the appropriate pipeline, within a given project in Azure DevOps, as follows:

Defining Routes

An application in Kubernetes can be exposed and made accessible using an "Ingress" routing layer. The primary way this is provided in SCALE is Ambassador, an edge inbound proxy service. Up until version 3.7 of the platform, another method was OpenShift Routes, a feature uniquely tied to the OpenShift platform of Kubernetes. As of version 3.7, however, OpenShift Routes are no longer supported: they are unique to the OpenShift platform, while Ambassador mappings work across multiple Kubernetes platforms. Ambassador mappings also offer additional advanced routing configuration options, which are described in this document.
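An Ambassador mapping is expressed as a `Mapping` custom resource; a minimal sketch (service and prefix names are illustrative) looks like:

```yaml
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: myapp-mapping          # illustrative names throughout
spec:
  prefix: /myapp/              # external URL path to match
  service: myapp-service:8080  # in-cluster Service receiving the traffic
```

Because `Mapping` is just a Kubernetes custom resource, the same definition applies on any cluster where Ambassador runs, which is what makes it portable across Kubernetes platforms.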

Flask

The Flask application stack is used to build and deploy Python-based Flask applications and produce deployment-ready containers for the SCALE environment.

Kubernetes Objects

Specific details about the deployment of the application as a set of Kubernetes objects in the OpenShift cluster are controlled through a collection of files that modify the deployment details based on the environment to which the application is deployed. There is currently only one available method to modify the deployed Kubernetes objects per environment: Helm v2 charts.
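With Helm, the per-environment variation typically lives in values override files; a hypothetical sketch (file and key names are illustrative, not SCALE's actual chart layout):

```yaml
# values-dev.yaml — overrides applied only for the dev environment
replicaCount: 1
image:
  tag: "dev-latest"
resources:
  limits:
    memory: 256Mi
```

The chart's templates reference these keys (e.g. `{{ .Values.replicaCount }}`), and a deployment for a given environment applies the matching override file, along the lines of `helm upgrade --install myapp ./chart -f values-dev.yaml`.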

Nginx

Nginx Application Stack

OpenShift Console

To troubleshoot a deployment, it is sometimes necessary to navigate to the OpenShift Web Console to gather more information. Using a Workspace as an example,

Pipeline Configuration

Most of the configuration detail in the azure-pipelines.yml file will be found in the extends YAML object which
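The overall shape of that `extends` object might resemble the following sketch; every parameter name here is illustrative rather than the actual SCALE template contract:

```yaml
extends:
  template: pipeline.yml@templates  # hypothetical shared-template reference
  parameters:
    appCode: MYAPP                  # illustrative parameter names only
    stack: nginx
    environments:
      - name: dev
        cluster: dev-cluster
      - name: prod
        cluster: prod-cluster
```

In Azure Pipelines, everything under `extends.parameters` is passed to the shared template, so the consuming application configures its build and deployment entirely through this block.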

Rancher Console

To troubleshoot a deployment, it is sometimes necessary to navigate to the Rancher Web Console to gather more information. The Rancher web console URL for your cluster can be found in the ServiceNow CMDB and is also referenced within Azure DevOps for the given application environment.

Running Prerequisite Pipelines

In some cases, the build of an application might require that another component first be built in a separate pipeline. Version 3.6 introduces this capability in the form of a required: section (under extends.parameters) of the YAML in the azure-pipelines.yml file.
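Such a `required:` section might be sketched as follows; only its placement under `extends.parameters` comes from the source, and the field names below are hypothetical:

```yaml
extends:
  parameters:
    required:                      # prerequisite pipelines to run first
      - project: SharedComponents  # hypothetical prerequisite pipeline
        pipeline: library-build
        branch: master
```

The intent is that the listed pipeline(s) complete successfully before this application's build proceeds.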

Rust

Rust Application Stack

Splunk Integration

Splunk is a log collection and management system that receives log streams from SCALE and stores logs for searching and analysis, while also parsing certain logs for key metrics that are then tracked in pre-constructed dashboards. The dashboards and the searchable logs are grouped by application codes (appCode) consistent with application stacks as provisioned and managed in SCALE. Anyone with access to the development spaces or automation environments for a given application (appCode) can access the Splunk dashboards and logs for that application; depending on the specific Active Directory groups a user is assigned to, a user can also create their own custom dashboards.
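Since logs are grouped by `appCode`, a Splunk search for one application's errors might be sketched like this; the index and sourcetype names are hypothetical, and only the `appCode` field comes from the source:

```
index=scale_apps appCode=MYAPP sourcetype="kube:container:app" "error"
| stats count by source
```

Searches of this shape are also the building blocks for the custom dashboards that suitably entitled users can create.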