
Selenium is a popular tool for end-to-end testing of web apps. Its grid capability allows fast test execution across multiple browsers in parallel. However, setting up such an environment is always an issue to deal with. That is why, for our Selenium Grid setup, we decided to use Docker containers to standardize the creation of the hub and nodes. This gave us nodes that can easily be rebuilt, are source-controlled, reusable, and isolated from each other. We also ran into some particular issues when working with Safari and IE.

This article is intended as a general reference for setting up a new Selenium Grid with Docker, condensing everything we learned during the process. We also give some insight into SE-Interpreter, the tool we chose to communicate with the Selenium Grid: why we chose it and what benefits it has.

Selenium-grid Organization Diagram

Our Selenium Grid currently supports all 4 major browsers through Docker containers and VMs. Our Chrome and Firefox instances are the most heavily used, so their containers are ready to be instantiated at any moment. Having 8 of them ensures quick execution, given that deployments do not happen too often (unlike regular builds on commits/pushes).

The Safari and IE nodes run a bit less frequently than Chrome and Firefox, only for compatibility validation, so one VM for each suffices.



Selenium nodes as Docker containers: Adding nodes on-demand

Setting up and maintaining a VM for each Selenium instance assigned to a specific browser becomes too time-consuming, so instead we decided to run them as Docker containers. These can be maintained through source-controlled code and instantiated as needed. This also helps with yet another of the many problems of UI tests: they are often slow to run. Temporary Docker containers can be added to reduce execution time and destroyed when no longer in full use. The containers can also be used locally during the development of new tests, so the Selenium Grid is always free to run product deployments.

The grid setup consists of a Docker Compose file, which creates the central grid hub and the Chrome and Firefox nodes. We use the official Chrome and Firefox debug-node images provided by the Selenium project. Unlike the regular nodes, the debug nodes allow remote access through VNC, which is useful for watching tests during development and execution. Adding more Chrome or Firefox nodes is a simple matter of cloning a node's container definition in the Docker Compose file and adding a number to its name (firefox2 or chrome3).

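# Central hub: single entry point for the grid, exposed on host port 4444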
hub:
    image: 'selenium/hub:2.53.0'
    restart: always
    ports:
        - '4444:4444'

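# Chrome debug nodes: the link lets them register with the hub, VNC on container
# port 5900 is mapped to a unique host port, and /dev/shm is mounted so Chrome
# has enough shared memory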
chrome:
    image: 'selenium/node-chrome-debug:2.53.0'
    restart: always
    hostname: chrome
    ports:
        - '5909:5900'
    links:
        - hub
    volumes:
        - /dev/shm:/dev/shm
 
chrome2:
    image: 'selenium/node-chrome-debug:2.53.0'
    restart: always
    hostname: chrome2
    ports:
        - '5910:5900'
    links:
        - hub
    volumes:
        - /dev/shm:/dev/shm
 
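# Firefox debug nodes: same pattern, with VNC mapped to host ports 5901 and 5902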
firefox:
    image: 'selenium/node-firefox-debug:2.53.0'
    restart: always
    hostname: firefox
    ports:
        - '5901:5900'
    links:
        - hub
 
firefox2:
    image: 'selenium/node-firefox-debug:2.53.0'
    restart: always
    hostname: firefox2
    ports:
        - '5902:5900'
    links:
        - hub

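With the stack up, each debug node can be watched over VNC at its mapped host port (the default password baked into the Selenium debug images is secret), while tests reach the grid through the hub's WebDriver endpoint on port 4444. As a minimal illustration with the Python bindings (SE-Interpreter handles this through its own configuration; the localhost URL and example page are assumptions for a local setup):

# Sketch: request a Chrome session from the grid hub and load a page.
from selenium import webdriver

driver = webdriver.Remote(
    command_executor='http://localhost:4444/wd/hub',
    desired_capabilities={'browserName': 'chrome'},
)
driver.get('https://www.example.com')
print(driver.title)
driver.quit()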
Additionally, it's good practice to restart Selenium often to keep down the overhead that continuous test runs can build up. Addteq's containers are removed and re-created every day. We use Atlassian Bamboo to automatically rebuild the containers with these commands:

docker-compose -p selenium down -v
docker-compose -p selenium up -d
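When extra capacity is needed only temporarily, a node can also be attached to the running hub without editing the Compose file. A sketch, assuming the project name selenium used above (Compose then names the hub container selenium_hub_1; check docker ps if unsure):

docker run -d --name chrome-temp --link selenium_hub_1:hub selenium/node-chrome-debug:2.53.0
docker rm -f chrome-temp

The link alias hub is what the node image uses to find and register with the hub, and removing the container afterwards frees the capacity again.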

Dealing with Safari and IE

For Safari, starting up the server was relatively easy, but we ran into execution issues with tests that already worked when checked in Chrome and Firefox. For instance, some of the icons/buttons used in Confluence were not being found by Selenium while the tests were going through Confluence to create or open a page. Tests would randomly time out and fail, yet when run in isolation the same tests succeeded without issues.

The Safari WebDriver is no longer supported by the official Selenium project, so as a way to mitigate the issue we added re-run capability for any test that fails on the first try. The second run has proven to resolve the random failures almost every time (rarely does a test fail twice in a row).
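The re-run logic itself lives inside our modified SE-Interpreter, but the idea is simple enough to sketch on its own (run_test here is a hypothetical callable that returns True when a test passes):

# Sketch: run each test, giving failures one more attempt before reporting them.
def run_with_retry(tests, run_test, attempts=2):
    still_failing = []
    for test in tests:
        # any() stops at the first passing attempt, so a green first run is not repeated
        if not any(run_test(test) for _ in range(attempts)):
            still_failing.append(test)
    return still_failing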

For IE, the main issue came from the way the IEDriver behaves compared to the other browsers. IE tries to re-create truthful interactions with the website by using native Windows click and keyboard events instead of a JavaScript sandbox to control the website. However, it also turned out to be noticeably slower than the other browsers on many simple steps. (For instance, clicking a button takes a couple of seconds longer than on the other browsers, and the IEDriver can be seen hovering over the element a few times, as if validating its location.) We found that some buttons were not being clicked correctly, so in certain tests we had to include fixes that perform the click through JavaScript instead of the standard Selenium way.
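As an illustration of the workaround (shown here with the Python bindings rather than our actual SE-Interpreter step; the Confluence URL and selector are hypothetical), the click is performed through JavaScript rather than the native event:

# Sketch: click an element via JavaScript, bypassing the native IEDriver click
# that occasionally misses certain buttons.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Remote(
    command_executor='http://localhost:4444/wd/hub',
    desired_capabilities={'browserName': 'internet explorer'},
)
driver.get('https://confluence.example.com')  # hypothetical instance
button = driver.find_element(By.CSS_SELECTOR, '#create-content-button')  # hypothetical selector
driver.execute_script('arguments[0].click();', button)
driver.quit()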

A little about SE-Interpreter

The tool we used to communicate with the Selenium Grid is SE-Interpreter. Originally published on GitHub by Zarkonnen (who still maintains it) as a derivative of SE-Builder, it uses JSON files as the medium for test definitions. This is convenient because the same JSON tests can be opened in SE-Builder and stepped through there, as if in a debugger, whenever we need to track down a misbehavior or failure in execution. It also includes out-of-the-box support for parallel execution and an easy interface for connecting to a remote grid and different browser environments.

Our modified version also reduces the amount of unnecessary logging printed by the tool, adds a final summary of the failed tests, adds re-run capability for failed tests, and provides a way to locate which container a failed test was running in (for monitoring purposes).

Conclusion

Having a dedicated, high-performance grid for our UI tests helps a lot in keeping our product coverage high and checking every scenario we can. Since many of the grid components follow a pattern/template, Docker containers are a great way to manage and run them. Even Safari and IE, with their quirks, only needed to be set up once. On top of that, using SE-Interpreter allowed us to quickly move into writing the tests we wanted and get fast feedback from them.

 
