Importance of logging in test automation development

Danny Briskin, Quality Engineering Practice Manager

The issue

In the realm of software applications, failures are inevitable. Their causes vary, ranging from issues in the code and logic to problems with the data or the environment itself. Whatever the cause, it is essential to identify the root of each failure: while not every failure necessitates an immediate change, distinguishing between the different types of failures is crucial.

Typically, developers employ concise error messages that suggest potential reasons for a failure. Even a Blue Screen of Death (BSoD) carries valuable information. However, such messages, at best, describe the failure as it currently stands; they cannot transport us back in time to reveal the state of the system prior to the crash.

Logging? In test automation? Really, why?

Logs have existed since the advent of modern computers, so it is peculiar that software developers, particularly those specializing in test automation (SDETs), frequently neglect proper logging practices in their own applications. In my experience, the products SDETs develop, test automation frameworks among them, fail more often than “regular” applications. When confronted with a failure, the common approach is to repeat the test scenario in the hope of identifying the underlying cause. This may work for small test cases that require minimal preparatory work, but it becomes considerably harder for comprehensive End-to-End tests. Consider a test suite that is executed overnight: how can one accurately reproduce the state of the system during that specific nightly run?

Log everything!

Implement comprehensive logging in your application from the early stages of development. Ensure that log messages are included at significant points in the code, particularly where failures are anticipated. Additionally, incorporate logging at key junctures in the logic branching. If your testing involves a visual component, such as desktop, mobile, or web tests, consider adding relevant screenshots to the logs. Although organizing this sort of logging can be challenging, you can automate many aspects of the process. For further insights on AOP logging and how it can help streamline logging efforts, refer to this informative article.
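
As an illustration, here is a minimal sketch in Java. It assumes SLF4J for logging and Selenium WebDriver for the visual part; the LoginPage class, its login method, and the messages themselves are all hypothetical:

import java.io.File;

import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical page object showing logging at entry points,
// logic branches, and anticipated failure points.
public class LoginPage {
    private static final Logger log = LoggerFactory.getLogger(LoginPage.class);
    private final WebDriver driver;

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    public void login(String userName) {
        log.debug("login({}) called", userName);      // entry point
        if (userName.isEmpty()) {
            log.warn("Empty user name supplied");     // logic branch
        }
        try {
            // ... interact with the page here ...
        } catch (RuntimeException e) {
            // anticipated failure point: capture the screen before rethrowing
            File shot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
            log.error("login failed; screenshot saved to {}", shot.getAbsolutePath(), e);
            throw e;
        }
        log.debug("login() finished");                // exit point
    }
}

The point is that entry points, branches, and failure points each leave a trace, so a failed overnight run can be reconstructed from the log alone.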

… but be flexible …

Contemporary logging frameworks offer an abundance of configuration options, allowing precise control over what is logged, when, how, and where. It is essential to ensure that the logs encompass all the information required for investigating potential failures. You can also define your own mnemonic codes to represent specific events, facilitating easier interpretation. For instance:

[oo] Starting scenario [Scenario Name]
[>>] ClassName::methodName1(parameterValue1, (ParameterClass2))
[<<] ClassName::methodName1()
[>>] ClassName::methodName2(parameterValue3)
[* ] custom log message
[o<] [returnedValue] <== ClassName::methodName2()
[><] Finishing scenario [Scenario Name]

In the above example, “[oo]” indicates the starting point, “[><]” the exit point, “[>>]” and “[<<]” method entries and exits respectively, while “[o<]” marks a returned value.
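
To keep such mnemonic codes consistent across a framework, it helps to route them through a single helper. Here is a minimal sketch in Java on top of SLF4J; the class and its methods are illustrative, not part of any particular framework:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Central place for the mnemonic codes described above,
// so every log line uses the same markers.
public final class MnemonicLog {
    private static final Logger log = LoggerFactory.getLogger(MnemonicLog.class);

    public static void scenarioStart(String name) { log.info("[oo] Starting scenario [{}]", name); }
    public static void scenarioEnd(String name)   { log.info("[><] Finishing scenario [{}]", name); }
    public static void enter(String signature)    { log.debug("[>>] {}", signature); }
    public static void exit(String signature)     { log.debug("[<<] {}", signature); }
    public static void returned(Object value, String signature) {
        log.debug("[o<] [{}] <== {}", value, signature);
    }
    public static void note(String message)       { log.info("[* ] {}", message); }
}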

Additionally, it is imperative to include the precise date and time of each log event. The log level can also be of great assistance. For instance:

D 20230221050902892    

In this example, “D” denotes the log level “DEBUG”, and the timestamp is formatted as “YYYYMMDDHH24MISSZZZ”: year, month, day, hours on a 24-hour clock, minutes, seconds, and milliseconds. This format proves highly useful because timestamps sort chronologically even under a plain alphabetical sort.
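
In Java, for instance, such a timestamp can be produced with java.time; this is a quick sketch, and the pattern string is simply my reading of the format above:

import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class TimestampDemo {
    public static void main(String[] args) {
        // yyyyMMddHHmmssSSS: year, month, day, 24-hour time, milliseconds
        DateTimeFormatter stamp = DateTimeFormatter.ofPattern("yyyyMMddHHmmssSSS");
        System.out.println(LocalDateTime.now().format(stamp)); // e.g. 20230221050902892
    }
}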

… do level it up …

Furthermore, log levels deserve careful consideration, as they play a crucial role in logging. Implementing a dynamic logging level system allows the code to generate more or less output depending on the specified level. For instance:

TRACE 20230221050902892 [>>] ClassName::methodName1(parameterValue1, (ParameterClass2))
DEBUG 20230221050902892 [o<] [returnedValue] <== ClassName::methodName2() 
WARN  20230221050902892 [* ] [returnedValue] is negative!
ERROR 20230221050902892 [* ] [returnedValue] does not meet given criteria (-3<=value<=0)!
CRIT  20230221050902892 [* ] browser quits unexpectedly

If the current log level is set to WARN, only messages at WARN level or higher (here, WARN, ERROR, and CRIT) will be displayed.
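
With Logback, for instance, the threshold can be set declaratively in logback.xml. This is a minimal sketch whose appender name and pattern are illustrative; note that Logback's standard levels are TRACE, DEBUG, INFO, WARN, and ERROR, so a CRIT level as shown above is specific to whatever framework defines it:

<configuration>
  <!-- With the root level at WARN, TRACE/DEBUG/INFO messages are
       dropped before they ever reach the appender. -->
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%-5level %d{yyyyMMddHHmmssSSS} %msg%n</pattern>
    </encoder>
  </appender>
  <root level="WARN">
    <appender-ref ref="STDOUT"/>
  </root>
</configuration>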

It is also considered good practice to pass the log level into the application from outside, for example as a command-line argument or an environment variable.
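
One way to do this, assuming Logback as the SLF4J backend, is to read a JVM system property at start-up; the property name log.level is an arbitrary choice:

import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.Logger;
import org.slf4j.LoggerFactory;

public final class LogLevelConfig {
    // Call once at start-up; run the suite with e.g. -Dlog.level=DEBUG
    public static void applyExternalLevel() {
        String requested = System.getProperty("log.level", "INFO");
        Logger root = (Logger) LoggerFactory.getLogger(org.slf4j.Logger.ROOT_LOGGER_NAME);
        root.setLevel(Level.toLevel(requested, Level.INFO)); // falls back to INFO
    }
}

The suite can then be launched with java -Dlog.level=DEBUG ... to get verbose output for a single run without touching the code.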

… because it is expensive sometimes.

Once logging is integrated into the application, the process does not end there. It is crucial to actively monitor the logs over time: how many lines are generated per day? Is that information still needed after a certain period, say a week or a month? It is equally important to evaluate the performance impact of logging; introducing logging at a specific point may, for example, double execution time and generate an additional 100 MB of logs per day.

Most logging frameworks offer features to automatically archive or manage old logs, and it is important not to overlook these capabilities.
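
In Logback, for example, a time-based rolling policy can compress old files and cap total disk usage automatically; in this sketch the file names, retention period, and size cap are placeholders to adapt:

<appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
  <file>logs/test-run.log</file>
  <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
    <!-- roll over daily; the .gz suffix makes Logback compress old files -->
    <fileNamePattern>logs/test-run.%d{yyyy-MM-dd}.log.gz</fileNamePattern>
    <maxHistory>30</maxHistory>          <!-- keep 30 days of logs -->
    <totalSizeCap>3GB</totalSizeCap>     <!-- hard cap on disk usage -->
  </rollingPolicy>
  <encoder>
    <pattern>%-5level %d{yyyyMMddHHmmssSSS} %msg%n</pattern>
  </encoder>
</appender>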

Summary

Incorporating logging into your initial architecture saves substantial time and effort when investigating the underlying cause of test framework failures. An efficient log setup proves invaluable, ensuring comprehensive visibility into the test execution process. By making logs an integral component of your framework design, you enhance your troubleshooting capabilities and reap the benefits of this proactive approach.