What is the best way to handle minor intermittent automation failures during cross-browser testing?


6

I'm running WebDriver (the .NET version) with SpecFlow as my test driver for some fairly generic cross-browser scenarios, so I frequently re-run tests while adding new tests or cleaning up older automation. Occasionally a previously passing test suddenly times out on some element on the page, or fails with a similar timeout error, even though the same test ran fine five minutes before or after. Given how often I run the tests, my first instinct is usually to re-run the test immediately rather than dig into the problem. So I tend to take the position that I can ignore the error because it was caused by some condition outside my control.

How do others handle this? Should I take the time to track down intermittent problems that might cause a test to fail, if they affect less than 10% of overall runs? I don't see much value in re-running a test to chase a one-off timeout just to watch it pass. Or is this something a lot of people see, and am I just looking for a problem that isn't really there?
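The "re-run before investigating" habit described above can be sketched as a small retry wrapper. This is illustrative Python, not the question's actual SpecFlow/.NET setup, and the function names are my own:

```python
# Minimal sketch of "re-run before investigating": retry a test function
# a fixed number of times and only surface the failure if every attempt
# fails. Names and structure are illustrative, not a SpecFlow API.

def run_with_retries(test_fn, attempts=2):
    """Run test_fn up to `attempts` times.

    Returns (True, tries_used) on the first pass; re-raises the last
    error if every attempt fails (i.e. the failure looks genuine).
    """
    last_error = None
    for tries in range(1, attempts + 1):
        try:
            test_fn()
            return True, tries
        except AssertionError as err:  # a WebDriver timeout would surface similarly
            last_error = err
    raise last_error
```

The trade-off the question is really asking about is how many automatic retries (if any) are acceptable before a failure deserves a human's attention.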

8

As a general rule of thumb, every time a test fails or has an indeterminate outcome, it requires investigation. If you ignore a failing or indeterminate test, you increase the risk of missing or overlooking a bug.

The value of automation is in executing tests that we deem important enough to run repeatedly, or tests that run more effectively or efficiently as automated code than as manual processes, in order to free up our time for other testing tasks.

If we are spending time baby-sitting tests that throw false positives, that is time being taken away from other testing tasks.

We recently put a lot of effort into improving automated test case reliability. In our critical suites, an automated test must pass 100 consecutive runs without failure before it is checked in to the lab. This gives us 99% confidence that any failure in these suites indicates a regression of some sort and is not likely to be a false positive.
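The ~99% figure can be sanity-checked with a quick back-of-the-envelope calculation (my arithmetic, not from the answer): a test with a true intermittent failure rate of about 4.5% has only a ~1% chance of surviving 100 consecutive runs, so 100 clean passes make a meaningfully flaky test very unlikely:

```python
# Probability that a test with intermittent failure rate p passes
# n consecutive independent runs: (1 - p) ** n.
def prob_all_pass(p, n=100):
    return (1 - p) ** n

# A test that flakes ~4.5% of the time almost never survives 100 runs.
print(round(prob_all_pass(0.045), 3))
```

So the 100x gate effectively screens out any test whose flake rate is more than a few percent.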

This was a big cost investment and required in-depth root cause analysis of the different failures we were seeing. 10% may not seem like a lot, but as the number of tests grows over time, the costs add up.

Basically, it boils down to your tipping point: the point at which the time you spend (waste) investigating false positives outweighs the time your automated tests are supposed to free up for other testing tasks.


1

We have the same issue in our production monitoring automation suite. To mitigate it, whenever a test fails it is marked as INTERMITTENT and the same test is repeated up to a maximum of 3 times, or until it passes. If it passes on any of the repetitions, the issue is left as intermittent and no alert is raised. If it fails all 3 iterations at the same point/step, it is marked as failed and an alert is raised. If it fails at different steps in different iterations, it is most likely a script issue, so it is marked as a false positive and no alert is raised. Hope this helps.
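The retry-and-classify scheme above can be sketched as follows. This is illustrative Python, and it assumes (my assumption, not stated in the answer) that `run_test` returns `None` on a pass and an identifier for the failing step otherwise:

```python
# Sketch of the retry-and-classify scheme: repeat a failing test up to
# 3 times, then decide between intermittent, real failure, and script
# issue based on where each attempt failed.
def classify(run_test, max_attempts=3):
    failed_steps = []  # step id recorded for each failing attempt
    for _ in range(max_attempts):
        step = run_test()  # None => passed; otherwise the failing step
        if step is None:
            # Passed outright, or passed on a retry after a failure.
            return "PASS" if not failed_steps else "INTERMITTENT"
        failed_steps.append(step)
    if len(set(failed_steps)) == 1:
        return "FAILED"          # same step every time: raise an alert
    return "FALSE_POSITIVE"      # different steps: likely a script issue
```

An alerting hook would then fire only on `FAILED`, matching the answer's policy of not alerting on intermittent or false-positive outcomes.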


0

I also asked a question seeking suggestions on how to write reliable test automation; you may find it useful: Your suggestions for writing reliable Web UI automation