Dynatrace AI in Action: Rogue Python Script Impacting Atlassian DevOps Tools

Addteq – Your DevOps Experts and a partner of both Atlassian and Dynatrace – is building tighter DevOps use-case integrations between Dynatrace and the Atlassian tool suite. Addteq also educates our joint customers and helps them understand how our tools can optimize their end-to-end delivery processes.

Not only does Addteq build these integrations and educate customers on DevOps best practices, they also use Dynatrace to monitor their own internal DevOps toolchain: JIRA, Confluence, Bitbucket, Bamboo, and more.

On a recent status call with Himanshu Chhetri (CTO) and Sukhbir Dhillon (CEO), they mentioned how well the Dynatrace AI worked for them after deploying the Dynatrace OneAgent on their internal servers. Dynatrace automatically identified the root cause of a slowdown of their JIRA and Confluence instances even before their developers were heavily impacted by the problem. This speaks to the proactive nature of Dynatrace.


#1 – The Dynatrace Problem Ticket

Every time Dynatrace detects an anomaly in your environment, it creates a problem ticket. Thanks to the JIRA integration that Dynatrace provides and that Addteq is currently extending, this problem ticket automatically created a JIRA ticket, which triggered their internal problem-resolution workflow.
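For teams who want to wire up a similar integration themselves, here is a minimal sketch of a webhook that turns a Dynatrace problem notification into a JIRA issue via the JIRA REST API. The notification field names (ProblemID, ProblemTitle, ImpactedEntity), the endpoint path /dynatrace-problem, and the project key OPS are illustrative assumptions; configure the custom-integration payload in Dynatrace to match whatever fields you actually send.

```python
# Sketch: receive a Dynatrace problem notification and open a JIRA issue.
# Field names in the incoming payload and the "OPS" project key are assumptions.
import os
import requests
from flask import Flask, request, jsonify

app = Flask(__name__)

JIRA_URL = os.environ["JIRA_URL"]                      # e.g. https://jira.example.com
JIRA_AUTH = (os.environ["JIRA_USER"], os.environ["JIRA_PASSWORD"])

@app.route("/dynatrace-problem", methods=["POST"])
def create_jira_ticket():
    problem = request.get_json(force=True)
    issue = {
        "fields": {
            "project": {"key": "OPS"},                 # assumed project key
            "summary": f"Dynatrace problem {problem.get('ProblemID')}: "
                       f"{problem.get('ProblemTitle')}",
            "description": problem.get("ImpactedEntity", ""),
            "issuetype": {"name": "Bug"},
        }
    }
    resp = requests.post(f"{JIRA_URL}/rest/api/2/issue",
                         json=issue, auth=JIRA_AUTH, timeout=10)
    resp.raise_for_status()
    return jsonify(created=resp.json().get("key")), 201
```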

The Dynatrace problem ticket indicated that six services were impacted, including JIRA and Confluence, as well as shared services such as the TokenService, which experienced a very high failure rate:


Dynatrace automatically detected an unusually high failure rate on the TokenResource service.
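If you prefer to pull this data rather than have it pushed, the same problem information can be fetched programmatically. The following sketch lists currently open problems and their impacted entities, assuming the Dynatrace Problems API v2 (/api/v2/problems) and an API token with permission to read problems; exact response field names may differ between Dynatrace versions.

```python
# Sketch: list open Dynatrace problems and the entities they impact.
# Assumes the v2 Problems API and an API token; field names may vary.
import os
import requests

DT_ENV = os.environ["DT_ENVIRONMENT_URL"]   # e.g. https://abc12345.live.dynatrace.com
DT_TOKEN = os.environ["DT_API_TOKEN"]

resp = requests.get(
    f"{DT_ENV}/api/v2/problems",
    headers={"Authorization": f"Api-Token {DT_TOKEN}"},
    params={"problemSelector": 'status("OPEN")'},
    timeout=10,
)
resp.raise_for_status()

for problem in resp.json().get("problems", []):
    impacted = [e.get("name") for e in problem.get("impactedEntities", [])]
    print(problem.get("displayId"), problem.get("title"), "->", ", ".join(impacted))
```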


#2 – Problem Evolution

One of the features that excites people who see the Dynatrace AI in action is that Dynatrace correlates all events that happen on all dependent components into the opened problem ticket. Instead of having to look at events from your log monitoring, infrastructure monitoring, application performance monitoring, and end-user monitoring tools, you get all of this information in Dynatrace.

The one view in the Dynatrace web UI that shows all these events along a timeline is the Problem evolution view, which gives us a time-lapse option to “replay” the chain of events. Here is the problem evolution for their problem:

Dynatrace Problem Evolution showing that the problem went through two problem phases.



#3 – Automatic Deployment Detection

If you look at the distribution of events (top right), you can see that this problem went through two phases. Each phase shows a spike of events coming through: one shortly after midnight, the second one shortly before 2 AM.

The first batch of events is all related to a restart and redeploy of the jira-install service. It turns out that this is “normal.” Well – kind of normal. Digging through the automatically detected deployment events shows us that every time they restart that service, we see high CPU and error log messages, resulting in some of the failures we can observe in the other dependent services:


Dynatrace shows us that every time jira-install gets restarted, we see some impact on the other dependent services.
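Dynatrace picked up these deployments automatically through the OneAgent. If you additionally want your CI server to attach build metadata to such restarts, a sketch like the one below could push a CUSTOM_DEPLOYMENT event via the v1 Events API; the jira-install tag rule and the BUILD_NUMBER variable are assumptions you would adapt to your own tagging and build environment.

```python
# Sketch: push a CUSTOM_DEPLOYMENT event so jira-install restarts/redeploys
# carry version and CI metadata in Dynatrace. The tag rule and BUILD_NUMBER
# are assumptions for illustration.
import os
import requests

DT_ENV = os.environ["DT_ENVIRONMENT_URL"]
DT_TOKEN = os.environ["DT_API_TOKEN"]

event = {
    "eventType": "CUSTOM_DEPLOYMENT",
    "deploymentName": "jira-install restart",
    "deploymentVersion": os.environ.get("BUILD_NUMBER", "manual"),
    "source": "Bamboo",
    "attachRules": {
        "tagRule": [{
            "meTypes": ["SERVICE"],
            "tags": [{"context": "CONTEXTLESS", "key": "jira-install"}],
        }]
    },
}

resp = requests.post(
    f"{DT_ENV}/api/v1/events",
    headers={"Authorization": f"Api-Token {DT_TOKEN}"},
    json=event,
    timeout=10,
)
resp.raise_for_status()
print("Deployment event accepted:", resp.json())
```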



#4 – Python process gone ROGUE!

The second batch of events is related to the real problem. It turns out that the server addteq-crowd, a Linux machine hosting Atlassian Crowd, ran out of CPU. Crowd is the single sign-on service used by all other Atlassian tools, such as JIRA and Confluence. If this service is impacted, it impacts everyone else.


Dynatrace detected the CPU saturation issue on that Linux machine and also created an event that was correlated to our problem ticket.

A closer look at this Linux machine shows us that it is not Crowd itself, which runs in the Tomcat container, that uses all the CPU. It turns out to be the Python-based tool duplicity, which is used for file and directory backups.


Dynatrace OneAgent automatically monitors all processes – including the ones consuming all the CPU!
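OneAgent delivers this per-process breakdown out of the box. Purely for comparison, here is a minimal manual equivalent you could run on a single Linux host with the psutil library to see which processes (duplicity, in this case) are eating the CPU; it is an illustration, not part of Dynatrace.

```python
# Sketch: sample per-process CPU usage for one second and print the top consumers.
# A manual, single-host counterpart to what OneAgent reports automatically.
import time
import psutil

procs = list(psutil.process_iter(attrs=["pid", "name"]))
for p in procs:
    try:
        p.cpu_percent(None)          # prime the per-process CPU counter
    except psutil.NoSuchProcess:
        pass

time.sleep(1.0)                      # sampling interval

usage = []
for p in procs:
    try:
        usage.append((p.cpu_percent(None), p.info["pid"], p.info["name"]))
    except psutil.NoSuchProcess:
        pass

for cpu, pid, name in sorted(usage, reverse=True)[:5]:
    print(f"{cpu:6.1f}%  pid={pid:<6}  {name}")
```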

Duplicity runs on every one of Addteq’s hosts but only runs into high CPU on addteq-crowd. This can easily be seen in the Dynatrace Process Group overview for Duplicity, which shows us the resource consumption of all instances of Duplicity across all hosts where it runs:

Dynatrace automatically monitors every process instance across all hosts, with easily accessible charts in the Process Group Details view.


Tip: Process Group Detection is a key capability of Dynatrace. The automatic detection works extremely well but can always be customized to your specific needs. To learn more, check out Mike Kopp’s blog on Enhanced Process Group Detection.


Actions based on Problem Detection

Our friends from Addteq weren’t aware that Duplicity had an issue on a single machine, and they didn’t know the actual impact it had on Atlassian Crowd, which in turn impacted all the other services. Because Dynatrace automatically analyzes all this data and is aware of all these dependencies, it is possible to identify problems that we would normally not even have thought of.

It’s great to see a partner like Addteq not only building integrations and thereby extending the Dynatrace ecosystem. It’s also great that Addteq “walks the talk” and actively uses Dynatrace to ensure their systems run optimally and their employees aren’t impacted by any Python scripts gone rogue.

If you want to try Dynatrace yourself, simply sign up for our Dynatrace SaaS Trial.

