How Addteq Automated UI Tests Using Selenium

Testing is a crucial yet tedious task for any release. If that testing can be automated, it can not only catch bugs that might slip through manual testing, but also save valuable developer time. Addteq decided to give automated testing a try for their Confluence spreadsheet add-on, Excellentable. By using Selenium, Addteq was able to achieve automated testing without having to dedicate a developer to walk through the testing process.

Why UI tests?

Since Excellentable is a front-end-heavy tool, where many operations occur only in the UI (the front-end has its own mini-backend), there are many test scenarios that we can only cover using UI tests. These UI tests must re-create the user interaction in order to test correctly. Operations such as cell re-drawing, interaction with the menu bar, and cell selection for formulas need to be evaluated this way to ensure proper functionality.

Since Excellentable is a Confluence add-on, it could also produce errors when running in different browsers, Confluence versions, databases, and so on. Our UI tests serve as a full validation across multiple environments and configurations and can detect these errors. They are the final automated check in our deployment pipeline.

About Selenium

Selenium is one of the most widely used tools for end-to-end testing of web applications. It allows you to define tests in a variety of languages and can run on any operating system with a JVM (cross-platform). It can be set up as a cluster: a hub coordinating multiple Selenium nodes, each running a test, allowing for a scalable landscape where more nodes can join the hub. Selenium emulates the user interactions that occur on a website and lets you define UI tests as code. We can even run our tests in any browser of our liking, which is exactly what we need for Excellentable.
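
For a flavor of what driving a browser through a Selenium hub looks like, here is a minimal sketch using the selenium-webdriver NodeJS package. The hub URL and browser choice are assumptions for illustration; our actual setup is described next.

const { Builder, By, until } = require('selenium-webdriver')

async function smokeTest() {
  // Connect to a Selenium hub; the hub dispatches the session to a node
  const driver = await new Builder()
    .usingServer('http://localhost:4444/wd/hub') // hypothetical hub address
    .forBrowser('firefox')                       // any browser a node provides
    .build()
  try {
    await driver.get('http://www.sebuilder.com/')
    // Wait until the page body is located, then read its text
    const body = await driver.wait(until.elementLocated(By.css('body')), 10000)
    console.log(await body.getText())
  } finally {
    await driver.quit()
  }
}

smokeTest().catch(console.error)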

Selenium provides the runtime environment and the API to define the tests, but we still need a runner that communicates with the Selenium server. For our setup we use SE-Interpreter, made by Zarkonen from SauceLabs. It allows us to run multiple tests in parallel, and because it uses Selenium 2's JSON format, it is simpler to debug the tests using the SE Builder plugin. For command-line execution, we added some customization: a better output report for failed tests and their reasons for failure, plus a re-run feature for unstable Confluence environments.

We have published our fork of SE-Interpreter on our public GitHub; you can check it out below.

Writing Simple Tests for SE-Interpreter

As mentioned, SE-Interpreter only supports Selenium 2’s JSON format.

{
  "seleniumVersion": "2",
  "formatVersion": 1,
  "steps": [
    {
      "type": "get",
      "url": "http://www.sebuilder.com/"
    },
    {
      "type": "verifyTextPresent",
      "text": "Open Source Software"
    }
  ]
}

It simply opens the site “www.sebuilder.com” and checks that the text “Open Source Software” exists.

As in a standard Selenium test written with Selenium WebDriver in Java, Python, or NodeJS, the steps are defined individually in sequence, each one specifying its action, its target element, and (if applicable) the input values to use. The examples in the SE-Interpreter GitHub project contain all of the steps that are currently supported.
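
For instance, a step that types into a field and one that clicks a button each spell out the action type, a locator for the target element, and any input value. This is a sketch in the same JSON format; the step types appear in the SE-Interpreter examples, while the locator values are hypothetical.

{
  "type": "setElementText",
  "locator": { "type": "id", "value": "username" },
  "text": "admin"
},
{
  "type": "clickElement",
  "locator": { "type": "css selector", "value": "#login-button" }
}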

We can include assertions as part of the steps and allow the test execution to continue once an assertion succeeds. This is useful for checking outputs and properties at different points in a test, making sure the process flows as expected rather than relying on a single final output check. A downside to SE-Interpreter is that once an assertion fails, it kills the entire test.
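
An assertion is simply another step in the sequence; if it fails mid-run, SE-Interpreter aborts the test at that point. A hedged sketch (the CSS selector is hypothetical):

{
  "type": "assertElementPresent",
  "locator": { "type": "css selector", "value": ".excellentable-toolbar" }
}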

Separating Excellentable-Specific Test Logic from the SE-Interpreter-Specific Format

Now that we have defined the raw format in which test code is loaded into the test runner, and we have a list of Excellentable functions we want to test, we need to define the flow that each test file will follow. To make this more illustrative, let's look at the Bold font functionality. The high-level steps for checking that text can be bolded are:

  1. Log in to Confluence
  2. Create a new page
  3. Add a new Excellentable instance (using the Confluence macro)
  4. Set some text in a cell
  5. Click the button “Bold”
  6. Check that the text is now bold.

If we attempted to define all these steps in pure Selenium JSON format, we would be overloaded with all the smaller steps involved in the process and would have to manage enormous JSON files full of very low-level operations. For example, here are the small steps of the login phase alone, when converted to Selenium JSON format:

  1. Open the Confluence URL
  2. Wait for the Confluence home page to load
  3. Wait for the login button to load
  4. Click the login button
  5. Wait for the username and password input fields to load
  6. Enter the username
  7. Enter the password
  8. Find the submit button
  9. Click submit
  10. Wait for the logged-in Confluence page to load
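
As a hedged sketch, steps 3, 4, and 6 alone already expand to something like this in raw JSON (the element IDs are illustrative):

{
  "type": "waitForElementPresent",
  "locator": { "type": "id", "value": "login-link" }
},
{
  "type": "clickElement",
  "locator": { "type": "id", "value": "login-link" }
},
{
  "type": "setElementText",
  "locator": { "type": "id", "value": "os_username" },
  "text": "admin"
}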

These 10 steps could easily require between 100 and 200 lines in the file, yet we would still be far from touching the Bold functionality we want to test. Each of these JSON files could run to thousands of lines, making it much harder to track what a test is doing. Ideally, our test would have only six lines and look something like this:

loginIntoConfluence()
createANewPage()
createANewExcellentable()
setTextInCellAt(row, column, sampleText)
clickBoldButton()
assertThatTextInCellIsBold(row, column)

Many of these functions will be repeated in other tests. Say we want to write the test for Italics: all of the steps would be the same, except that we would click the "Italics" button instead. This is in fact true for any Excellentable function; all of the steps up to adding a new Excellentable macro are identical. If we have a way of re-using those steps and always run each test in a fresh, empty Excellentable, our test file can be simplified further to look like this:

setTextInCellAt(row, column, sampleText)
clickBoldButton()
assertThatTextInCellIsBold(row, column)

Not all tests will be this simple. Take testing a column filter, where we deal with multiple cells within a row, multiple values, and the expected results when the filter changes. Keeping the tests in JSON format does not allow us to put any comments inside them, whereas JavaScript files would work just the same while also supporting comments.
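
As a sketch of what JavaScript buys us, a column-filter test can carry comments alongside its steps. The applyColumnFilter and assertVisibleRowCount names here are hypothetical, used only for illustration:

// columnFilter.test.js -- a hypothetical sketch of a commented test file

// Seed a small column with values to filter on
setTextInCellAt(0, 0, "apple")
setTextInCellAt(1, 0, "banana")
setTextInCellAt(2, 0, "apple")

// Filter the column and check that only the matching rows remain visible
applyColumnFilter(0, "apple")
assertVisibleRowCount(2)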

By writing them as JavaScript code, we can define each test as an individual module and create another JavaScript converter that takes care of adding all the repetitive steps and converting our high-level, Excellentable-specific files into raw Selenium JSON format. The design looks similar to this:

Writing a Testing Framework That Fits Our Needs

Formalizing the previous diagram in our code, the test file converter became our "Tests Compiler". This compiler provides a module called "Excellentable-Testing-Framework" (ETF) to invoke the operations that will occur in the UI. In this fashion, our test file for Bold font now looks like this:

const ETF = require('Excellentable-Testing-Framework')

// Remember we assume the execution starts in a new, empty Excellentable

ETF.writeTextInCell(A0, "sample")

ETF.openTab("format")

ETF.clickButton("Bold")

ETF.assertTextStyleInCell(A0, "bold")

Notice that the functions used in the test are Excellentable-specific. We also have some Confluence-specific operations occurring in the Tests Compiler. For example:

const ETF = require('Excellentable-Testing-Framework')

for (const testfile of allTests) {
    ETF.openConfluence()
    ETF.login(username, password)
    ETF.createNewPage()
    // etc...

    runTest(testfile)
}

// ...

Note that all executions of the test happen through the ETF module, which acts as the connector between our test logic and the Selenium details. Given that we have separated our Excellentable-specific and Confluence-specific functions, and still aim to remain decoupled from the Selenium JSON format (and SE-Interpreter) so that it stays pluggable, our final design looks more like this:

This is a more accurate overview of what is happening internally. The SE-Interpreter module has been isolated completely and is passed as a plugin into the other parts of the framework. This in turn isolates the mid-level step details of Excellentable from the assertion logic in the tests. Our compiler also groups all the repetitive steps across the test files, making it simpler to manage initialization and cleanup steps.
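
To give a flavor of that layering, here is a hedged sketch of how one high-level ETF operation might expand into raw Selenium JSON steps inside the compiler. The step shapes and the toolbar selector are illustrative, not our actual implementation:

// Hypothetical sketch: a high-level operation expanding into the low-level
// Selenium 2 JSON steps that the SE-Interpreter plugin eventually consumes
function clickButton(label) {
    // The toolbar selector below is illustrative
    const selector = '.excellentable-toolbar button[title="' + label + '"]'
    return [
        { type: "waitForElementPresent",
          locator: { type: "css selector", value: selector } },
        { type: "clickElement",
          locator: { type: "css selector", value: selector } }
    ]
}

// The compiler concatenates the steps produced by every call in a test file
const steps = [].concat(clickButton("Bold"))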

Benefits of Using the Framework

By using this framework, we see benefits similar to those of Mehdi Kalili's approach, which you can take a better look at by reviewing his article below.

High Decoupling – Pluggable design

Each module is separated by concern and connected to the others through its own exposed interface. This allows us to replace any module easily and to make changes without affecting other parts of the code. The design also allows for unit testing at multiple points, such as the compilation from high-level functions to low-level JSON steps, which further helps validate the correctness of the framework.
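
For example, the compiler layer itself can be unit tested by asserting on the JSON it emits. This is a sketch; ETF.compile and its input shape are hypothetical:

// Hypothetical unit test for the compiler: high-level call in, JSON steps out
const assert = require('assert')
const ETF = require('Excellentable-Testing-Framework')

const steps = ETF.compile([['clickButton', 'Bold']])
assert.strictEqual(steps[steps.length - 1].type, 'clickElement')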

Easier use of Multiple Environments

Since we test in multiple Confluence environments, browsers, and configurations, a variety of environment-specific steps must be substituted for the standard steps in a test. Our test compiler lets us specify the environment we want to run the tests in, applying the small environment-specific variants during compilation. This gives us dynamic JSON files that adapt easily to multiple environments, without major code modifications in either framework.
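
A sketch of what that selection could look like at compile time (the option names and URLs are hypothetical):

// Hypothetical invocations: the same test files compiled for two targets
compileTests(allTests, { baseUrl: 'http://localhost:8090', browser: 'chrome' })
compileTests(allTests, { baseUrl: 'http://localhost:8091', browser: 'firefox' })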

Faster Development and Execution

Keeping the test logic at this higher level makes it easier to create tests for new features, detect bugs or unwanted behavior, and track what each test is doing without a lot of effort. Having isolated the test logic also lets us combine tests flexibly at execution time. For instance, we can create each test on a separate Confluence page, put them all within a single page for a faster (but slightly dirtier) run, or parallelize them as needed.
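
A sketch of how that choice might be expressed (the runner options are hypothetical):

// Hypothetical runner options; names are illustrative
runTests(allTests, { pagePerTest: true })               // isolated: one Confluence page per test
runTests(allTests, { pagePerTest: false })              // faster, shares a single page
runTests(allTests, { pagePerTest: true, parallel: 4 })  // spread across SE-Interpreter workers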

Conclusion

Although Selenium and SE-Interpreter are already very helpful tools for automating end-to-end UI tests, they only provide the low-level details of the test runs and should not be tightly coupled to the test logic we intend to perform. Establishing a good design methodology has many benefits. Here at Addteq we put significant effort into our software design, keeping our code flexible, reliable, concise, and simple. This helps Addteq maintain consistently fast development speeds without sacrificing quality.

