
Continuous integration / continuous delivery (CI/CD)

What is CI/CD?

Continuous integration / continuous delivery (CI/CD), where the “CD” sometimes also stands for continuous deployment, is a process that companies use to make testing and deployment easier. This process focuses on making incremental changes and testing them frequently so that any issues are found early and fixed before they become a problem down the road. Finsemble itself is developed using this process, and we recommend that you manage your own customizations and modifications the same way.

The CI/CD process uses automation and monitoring at various stages of app development. It makes deployment less risky because the changes are small and incremental. There are two parts to this process. The first is integration: you regularly build, test, and merge code into a shared repository. The second is automating the later stages of the pipeline, including unit and end-to-end (E2E) testing.
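
For example, a minimal GitHub Actions workflow for the integration stage might look like the following sketch. The file name and the build and test script names are assumptions; substitute your own.

# .github/workflows/ci.yml (illustrative)
name: CI
on: [push, pull_request] # run for every commit pushed and every pull request

jobs:
  build_and_unit_test:
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: 18.x
      - run: yarn install
      - run: yarn build # assumes package.json defines a "build" script
      - run: yarn test  # assumes package.json defines a "test" script that runs the unit tests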

Teaching you CI/CD in depth is out of scope for this topic. To learn more, check out What is CI/CD? or another available online resource.

Testing

In this topic we focus on how we apply the CI/CD process to Finsemble. Feel free to use our process and to modify it to fit your needs.

Types of tests

Like any CI/CD process, ours uses different types of tests at different stages. The type and frequency of testing depend on what is being tested.

A unit test focuses on the smallest testable part of an app. Such a test doesn’t look at the entirety of the product. Instead, it looks at the local logic only and tests one specific behavior. We perform unit tests for every commit. Also, a dev can run such a test at any time during development without having to wait to commit.

In contrast, an end-to-end (E2E) test is comprehensive, looking at the overall interactions of various project parts with the goal of examining data flow for correctness. We perform E2E testing daily, and we recommend that you do too. One good practice is to run such tests at night after all the work is completed for the day and all the commits are in. This way, we test the latest code without interfering with anyone’s work.
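
In GitHub Actions, for example, a nightly run can be scheduled with a cron trigger. The workflow name and the time below are only placeholders.

# Illustrative trigger for a nightly E2E workflow
name: Nightly E2E
on:
  schedule:
    - cron: "0 2 * * *" # every night at 02:00 UTC, after the day's commits are in
  workflow_dispatch: # also allow manual runs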

The third type of CI/CD test we do is format testing, to verify syntax and formatting. This type of test can be performed at any time. Ideally, the dev catches and fixes these issues before committing, but that is not always the case. For this reason, we use tslint, a static code analysis tool. We use it in CI and in a pre-commit hook, and we highly recommend you do too. You can also use any of the readily available linting tools, and you can extend them to meet your needs.
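
For illustration, a lint job in a GitHub Actions workflow could look like the sketch below. The "lint" script name is an assumption; point it at tslint or whichever linter you use.

# Illustrative lint job; assumes package.json defines a "lint" script
lint:
  runs-on: windows-latest
  steps:
    - uses: actions/checkout@v3
    - uses: actions/setup-node@v3
      with:
        node-version: 18.x
    - run: yarn install
    - run: yarn lint # for example, "lint": "tslint -p tsconfig.json" in package.json

Running the same script from a pre-commit hook (for example with a tool such as husky) gives the dev the same feedback before the code ever reaches CI.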

Testing can save time and money

As you can see, we perform a lot of testing before we are confident that our code works as expected. You might think that testing this much is too costly. It’s true that there is a cost associated with testing, but we strongly believe it’s worth it. Just think how many problems down the road you can avoid by catching issues early. In fact, the sooner you catch an issue, the less expensive it is to fix. These cost savings easily outweigh the cost of testing.

There are many tools available for CI/CD. We recommend GitHub Actions, which is what we use ourselves. You can also adapt our GitHub Actions workflows to another CI/CD platform if you prefer.

Testing environments

Sometimes you develop on one platform but plan to run the product on another. Our own build system at Cosaic runs on Linux and builds for Windows in cases where code signing isn’t necessary (that is, for the development environment), so it is easy to mix and match environments. Even so, you should always run your tests on the supported production environment; for this reason, we run our tests on Windows. If you build for a specific operating system, we strongly recommend that you test on every major version of that operating system in use at your company.
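
One way to cover several Windows versions in GitHub Actions is a matrix over runner images. The image labels below are examples of GitHub-hosted runners; substitute the versions your organization actually runs.

# Illustrative matrix over Windows runner images
strategy:
  fail-fast: false
  matrix:
    os: [windows-2019, windows-2022]
runs-on: ${{ matrix.os }}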

It is critical that you test on the environment that your users actually have. At the very least, you should test on the most common configurations in your organization, using the major versions of releases and the most common additional software. You need to mimic your real desktops as closely as possible; otherwise, your tests are likely to miss something important.

For example, you might discover that certain software doesn’t play nicely together. In fact, we discovered that one common communication platform wouldn’t work with Finsemble. We fixed it, of course, but we wouldn’t have known without end-to-end testing in an environment where that platform was present.

Sharing test results

Sharing test results is crucial, and yet some organizations miss this step. Test results are useless unless they are available to all the engineers involved in your project. After all, if you find a problem but nobody else knows about it, they can’t fix it; they simply don’t know. You have probably heard the dreaded phrase “it works on my machine” before. This is a symptom of a culture in which comprehensive testing is not the norm. A dev who is accustomed to thorough testing knows that just because something works locally, it won’t necessarily work everywhere. For this reason, we recommend that you make the results available to everyone, always through the same access method, and that you make sure everyone on the team knows where to find them.

Examples

All tests are specific to the situation, so it is difficult to give you an example that will work for you as is. We can only show you an example of what we do. First, we look at an example of a unit test, and then at the sequence of commands we use for an E2E test.

Example of a unit test

Here is an example of a unit test we use. This example tests the DragHandle component’s drag behaviors. You need a similar test for every behavior you support.

import * as React from "react";
import { mount } from "enzyme";
import { describe, it } from "mocha";
import { expect } from "chai";
import sinon from "sinon";
import { Basic } from "./DragHandle.stories";

// Necessary to allow sinon to work with mocked actions (which are shown in the Actions panel in Storybook)
import addons, { mockChannel } from "@storybook/addons";
addons.setChannel(mockChannel());

const coreElement = ".cq-drag";

describe("<DragHandle/>", () => {
  afterEach(() => {
    sinon.restore();
  });

  // The handle should render its icon
  it("should display icon", () => {
    const wrapper = mount(<Basic {...Basic.args} />);
    expect(wrapper.find("svg").exists()).to.be.true;
  });

  // Pressing the mouse button on the handle should trigger the start-moving action
  it("should call start moving action on mousedown", () => {
    const buttonSpy = sinon.spy(Basic.args as any, "actionStart");
    const wrapper = mount(<Basic {...Basic.args} />);
    wrapper.find(coreElement).simulate("mousedown");
    expect(buttonSpy.calledOnce).to.be.true;
  });

  // Releasing the mouse button should trigger the stop-moving action
  it("should call stop moving action on mouseup", () => {
    const buttonSpy = sinon.spy(Basic.args as any, "actionEnd");
    const wrapper = mount(<Basic {...Basic.args} />);
    wrapper.find(coreElement).simulate("mouseup");
    expect(buttonSpy.calledOnce).to.be.true;
  });
});

Example of an E2E test

After a unit test and a lint test both pass, you are ready for an E2E test. For Finsemble, each of these tests involves a well-defined workflow, which is a sequence of commands.

Here is a sequence of commands we use:

  1. yarn install – installs the node modules. For details, see Installing.
  2. yarn build – builds the seed project with any modifications that were included.
  3. yarn makeInstaller – makes the installer for the OS platform you are currently running on. See https://documentation.finsemble.com/tutorial-deployingYourSmartDesktop.html
  4. E2E testing – see https://github.com/ChartIQ/finsemble-selenium-example

Here is an example of a build and test workflow using GitHub Actions. You are welcome to modify it to fit your needs.

seed_test:
  runs-on: windows-latest
  strategy:
    fail-fast: false
    matrix:
      env_node_version: [14.x, 18.x]
  steps:
    - uses: actions/checkout@v3
      with:
        repository: chartiq/finsemble-selenium-example
        path: finsemble-selenium-example
    - uses: actions/checkout@v3
      with:
        repository: chartiq/finsemble-seed
        ref: ${{ github.event.inputs.seed_branch }}
        path: finsemble-seed
    - name: Use Node.js ${{ matrix.env_node_version }}
      uses: actions/setup-node@v3
      with:
        node-version: ${{ matrix.env_node_version }}
    - name: yarn install
      run: yarn install
      working-directory: finsemble-seed
    - name: yarn build
      run: yarn run build:seed
      working-directory: finsemble-seed

    # Module install for pip and pipenv
    - name: Bootstrap the Python e2e environment
      run: pip install pipenv && pipenv install # installs pipenv, then the project's dependencies
      timeout-minutes: 10
      working-directory: finsemble-selenium-example

    # BVT E2E
    - name: Set resolution to 1080p
      run: Set-DisplayResolution -Width 1920 -Height 1080 -Force

    # Sample command, modify to fit
    - name: Run the e2e BVT
      run: >-
        pipenv run behave -D echo_output_to_console=true -D chromedriver_for_electron=chromedriver_98
        -D finsemble_launch_configuration=src -D finsemble_launch_path=../../../finsemble-seed/
        -D finsemble_server_path=../../../finsemble-seed/ --format json --outfile testresults.json
      timeout-minutes: 5
      working-directory: finsemble-selenium-example
      id: bvt

    # BVT results upload
    - name: Add testresults.json to artifacts
      uses: actions/upload-artifact@v3
      with:
        name: testresults.json
        path: finsemble-selenium-example/testresults.json
      if: ${{ always() }} # Upload even if the test failed
      continue-on-error: true

See also

What is CI/CD?

tslint

Installing

Deploying your smart desktop

The selenium example