What testing should a developer do before handing a build over to the QA team?


73

As a developer, my knowledge of QA best practices is pretty much limited to what I've picked up from things like writing unit tests.

From a tester's perspective, what testing should a developer ideally have done before signing a project off to the QA team for testing?

Obviously this will depend on the development methodology being used, so different answers based on different methodologies would be great. I don't want to step on anyone's toes, but at the same time I don't want to hand over completely untested code.

Given my first sentence, should I be improving my knowledge in this area?

5

In my organization, we try to have a developer and a tester sit down before writing any code to have exactly that discussion - because the answer will probably be different for each feature you implement.

Unit testing is a given in my team - but more often than not, the tester and developer will also automate GUI tests up front. That way the core business logic is already tested when the feature is passed to QA - leaving testers free to conduct exploratory testing.
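
For illustration, here is a minimal sketch of what such an up-front GUI automation check might look like, assuming a Python/Selenium stack; the URL, element IDs and expected text are hypothetical placeholders rather than anything from a real project.

```python
# A minimal GUI automation sketch with Selenium WebDriver (Python).
# URL, element IDs and expected banner text are placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By


def test_login_shows_welcome_banner():
    # Assumes a local Chrome setup; Selenium 4 can locate the driver itself.
    driver = webdriver.Chrome()
    try:
        driver.get("https://staging.example.com/login")                   # placeholder URL
        driver.find_element(By.ID, "username").send_keys("qa_user")       # placeholder IDs
        driver.find_element(By.ID, "password").send_keys("not-a-real-password")
        driver.find_element(By.ID, "submit").click()
        # Core business flow: a successful login should show a welcome banner.
        assert "Welcome" in driver.find_element(By.ID, "banner").text
    finally:
        driver.quit()
```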


3

We have developers test their code by doing a build, deploying it to the development environment, and verifying what they can in the new environment. If there are other changes in the build, they need to coordinate with others to make sure testing is done. In other environments, developers would write out simple test plans and swap them with each other, so each developer is testing code they did not write.

I'd check with your QA team to see what they need. At a minimum, you should verify that the build will not crash when started in the QA environment, so they don't waste time installing a build that will not work.
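
As a rough sketch of that minimum bar, assuming the build exposes an HTTP health endpoint (the URL below is a hypothetical placeholder), a developer could run something like this before handing the build over:

```python
# Pre-handoff sanity check: confirm the deployed build starts and responds
# before QA installs it. The endpoint URL is a placeholder.
import sys

import requests

QA_HEALTH_URL = "http://qa.example.internal/health"  # placeholder


def main() -> int:
    try:
        response = requests.get(QA_HEALTH_URL, timeout=10)
    except requests.RequestException as exc:
        print(f"Build did not respond: {exc}")
        return 1
    if response.status_code != 200:
        print(f"Unexpected status: {response.status_code}")
        return 1
    print("Build is up; safe to hand to QA.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```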


31

Although unit tests are normally good enough, it's nice to see a developer ensure that they've run a process from start to finish, not just the unit tests. Because the types of tests that should be run vary greatly with the type of application, I would discuss it with a testing point of contact for the product.

With developers who are new to me, I usually ask if they'd be comfortable installing the software in production without it being tested by QA, and taking responsibility for any defects (I'm never really serious about it). This tends to get them thinking of types of tests that I hadn't even thought of!


27

The following steps have worked well for me:

  1. The developer makes their best effort at writing good code.
  2. The developer gets a peer code review from another developer.
  3. The developer runs their unit tests, and checks that they all pass.
  4. The developer and the tester sit down for a review:
    • The developer walks through with the tester to let them know what is being delivered, any related changes and any areas that aren't finished yet so that bugs aren't logged in those areas.
    • The tester runs through the tests that they have written for the feature. This allows the developer to review the tests and ensure that they make sense.
    • The tester takes a few minutes to execute some high-level tests and walk through the feature with the developer. Any obvious bugs are fixed on the spot, without logging them in the system.
    • With the review complete, the developer checks in their code.
  5. The tester then formally tests the feature in the next day's daily build.

Whilst on paper this may seem onerous, we found it really effective at flushing out and fixing a lot of obvious issues quickly and cheaply, because at that point nothing has been logged in the defect tracking system yet, and that saves everyone time in the long run.


3

All of these suggestions are great. I have only one thing to add: since code is sometimes shared between platforms, when a bug is fixed on one platform, please test the other platform(s) too to see if the bug exists there as well. This is especially important for customer-reported bugs.


7

I'm also a firm believer in the idea of developer peer review. Since adopting this methodology at my place of work, the quality of work has significantly increased. The new set of eyes provided by peer review frequently finds more efficient ways to write code or picks up errors that would otherwise have been missed.

That said, there is always the matter of available resources and the size of the project; a small team might not be able to effectively peer review all new features while simultaneously sustaining its regular workflow.

Additionally, as obvious as this sounds, I would encourage you to clearly inform your testers of all changes that you make to your code (no matter how small). There have been countless times when new features were implemented, but because the relevant work items were never updated with specific enough details, I never knew the new feature existed - until I stumbled across it by accident during exploratory testing, that is.

This is even more important if your testers have a large amount of automated tests (as the smallest change can cause these tests to begin to fail).


9

There is no hard and fast rule for what kind of testing a developer should do before handing off to QA. It depends on the developer, the QA team, the organization, and the product. Testing is a means to an end, not an end in itself.

The important thing is to pay attention to the quality of what QA receives and the quality of the released product. If the quality at either of those points in time isn't what you want it to be, your organization needs to look at where the problems are. The problem might be with a particular developer who writes a lot of bugs, but it could just as well be, for example, a fragile third-party package or a vague or unstable set of requirements.

As a rule of thumb, here is a prioritized list of what the developer should test before handing off to QA:

  • The product still builds.
  • The product still installs.
  • The smoke test passes, i.e. the parts of the product that are essential to the most fundamental kinds of testing still work.
  • The most obvious positive test cases (by whatever definition is available) all work. In other words, if you are just trying to use the product, it should work.
  • The negative test cases all work. In other words, if someone tries to break the product, they can't. (A minimal sketch of these two kinds of cases follows this list.)
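
Here is that sketch, using a hypothetical validate_username() function as the unit under test; the validation rule and the names are invented purely for illustration.

```python
# Illustration of the positive/negative split, with an invented rule:
# usernames are 3-20 alphanumeric characters.
import re
import unittest


def validate_username(name: str) -> bool:
    """Accept 3-20 alphanumeric characters (hypothetical rule)."""
    return bool(re.fullmatch(r"[A-Za-z0-9]{3,20}", name))


class UsernameTests(unittest.TestCase):
    def test_positive_typical_input_is_accepted(self):
        # "Just trying to use the product" - the obvious happy path.
        self.assertTrue(validate_username("alice42"))

    def test_negative_malicious_input_is_rejected(self):
        # "Trying to break the product" - it should refuse, not crash.
        self.assertFalse(validate_username("'; DROP TABLE users; --"))
        self.assertFalse(validate_username(""))


if __name__ == "__main__":
    unittest.main()
```

The specific assertions matter less than the habit: one test that exercises the obvious happy path, and one that deliberately tries to break it.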

3

At an absolute minimum, a developer should ensure that the software:

  • compiles (if applicable)
  • installs (if applicable)
  • runs (you can open it and/or log in)
  • passes its unit tests

If any of these fail, then there is no reason to go on to further testing.

Also, you may want to have an automated "smoke test" that runs through the basic functionality and verifies that there are no major errors when taking all the common paths through the system. If a smoke test of this kind exists, it would also make sense for it to be run before handing the build off to the QA team.
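
As a sketch of what such an automated smoke test could look like, assuming an HTTP-based product and a pytest/requests toolchain (the base URL and paths below are placeholders, not part of the original answer):

```python
# Smoke test sketch: hit the common paths and check for major errors only.
import pytest
import requests

BASE_URL = "http://qa.example.internal"  # placeholder

# "Common paths through the system" - adjust to whatever your product exposes.
COMMON_PATHS = ["/", "/login", "/search?q=test", "/reports"]


@pytest.mark.parametrize("path", COMMON_PATHS)
def test_common_path_responds_without_server_error(path):
    response = requests.get(BASE_URL + path, timeout=10)
    # A smoke test only checks for major errors, not detailed behaviour.
    assert response.status_code < 500
```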


4

The development process should include, as part of the deliverable, well-written unit tests that accompany the software into the QA cycle. Preferably, a Continuous Integration server would be configured to run the unit tests after each QA test build completes.
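
One minimal way to wire that up, assuming a Python project whose unit tests run under pytest, is to give the CI server a single test step that fails the build whenever any test fails; this is only a sketch, not a prescription for any particular CI product.

```python
# run_tests.py - a test step a CI server could invoke after each build.
# It runs the unit test suite and propagates the exit code, so the build
# is marked as failed whenever any test fails. Assumes pytest is installed.
import subprocess
import sys

result = subprocess.run([sys.executable, "-m", "pytest", "tests/", "-q"])
sys.exit(result.returncode)
```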


6

This really depends on the complexity of the project, the size of the team and the difficulty of integration.

  • Unit Tests. Everybody knows this, but it doesn't hurt to repeat. Run your unit tests to make sure your code still does what it should. This is particularly important for developers with code that doesn't need compilation (I'm looking at you, applicative DBAs).
  • Integration Tests. Whenever a project has more than two developers, it needs integration. You would not believe how many times I've had broken builds because one dev broke an interface without telling another dev. These are also usually quite easy to automate, as they just involve a quick regression run of the unit tests.
  • Smoke Test/Preliminary Acceptance. These are quick (15 minute) tests run by the developers after a build to ensure that the build is good enough for QA. The best strategy I know for these is to get QA to write them and dev to execute them. That way nobody can complain about the others, and there is less feeling of time wasted. These are usually done per feature, and you only need to test what you've changed.
  • Automated Regression Test. Another idea is to have an automatic test run right after the nightly build. The test compares the results of the new build to a known good build and reports any differences (a rough sketch of such a comparison follows this list). We run the build at 0300 and the test at 0430. That way you have results waiting when you come in to the office.
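
Here is that rough sketch; the file paths are hypothetical, and in practice the "results" might be reports, exports or captured API responses from each build.

```python
# Nightly regression comparison sketch: diff the new build's output
# against a stored known-good baseline and report any differences.
import difflib
import pathlib
import sys

BASELINE = pathlib.Path("baseline/report_output.txt")   # known good build (placeholder)
CANDIDATE = pathlib.Path("nightly/report_output.txt")   # tonight's build (placeholder)


def main() -> int:
    diff = list(difflib.unified_diff(
        BASELINE.read_text().splitlines(keepends=True),
        CANDIDATE.read_text().splitlines(keepends=True),
        fromfile=str(BASELINE),
        tofile=str(CANDIDATE),
    ))
    if diff:
        sys.stdout.writelines(diff)
        print("\nRegression check FAILED: output differs from the baseline.")
        return 1
    print("Regression check passed: output matches the known good build.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```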

Those should normally deliver good builds to QA, which tends to improve the working relationship, among its other advantages.


4

It depends. If your process is lightweight (e.g. Scrum), then continuous build, automated tests, etc. may be sufficient.

If your process is heavyweight (e.g. waterfall), with segregation or slowness between the development and testing organizations, then you probably want to do your own formalized manual testing so as to avoid becoming the bottleneck. In addition to automated tests (unit, integration, UI automation if possible), you should deploy the software in a production-like environment using your production deployment process.

After the deployment, run a few smoke tests against important and/or high risk components. Then, have each developer verify their bug fix(es) and feature add(s). Better yet, have developer B test developer A's work and vice versa.


3

Test more than functionality, especially if you are building a web application (a couple of these probes are sketched after the list below):

  • SQL Injection
  • XSS
  • Security modules (HTTP vs. HTTPS, header injection, cookie hijacking), etc.
  • Also ensure the functionality of your client-side code in multiple browsers, if your environment allows multiple browsers
  • Test with JavaScript on and off
  • Test Section 508 compliance if needed
  • Test for buffer overflows
  • Test what happens when the application crashes and determine whether it leaves things in an insecure state
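
A very small sketch of what a couple of those checks might look like, using Python's requests library against a hypothetical search endpoint; the URL, parameter name and payloads are placeholders, and passing these probes is obviously no substitute for a proper security review.

```python
# Basic injection/XSS probes against a hypothetical endpoint.
import requests

TARGET = "https://staging.example.com/search"   # placeholder endpoint
PARAM = "q"                                     # placeholder parameter name

SQLI_PROBE = "' OR '1'='1"                      # classic SQL injection string
XSS_PROBE = "<script>alert('xss')</script>"     # classic reflected-XSS string

for probe in (SQLI_PROBE, XSS_PROBE):
    response = requests.get(TARGET, params={PARAM: probe}, timeout=10)
    # A malformed-input probe should never take the server down.
    assert response.status_code < 500, f"Server error on probe {probe!r}"

# A reflected-XSS probe should not come back unescaped in the response body.
reflected = requests.get(TARGET, params={PARAM: XSS_PROBE}, timeout=10)
assert XSS_PROBE not in reflected.text, "XSS probe was reflected unescaped"

print("Basic probes passed - still not a substitute for a real security review.")
```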

There are plenty of things a developer should be concerned with other than functionality (but make sure you do test functionality).

The functional tester probably isn't going to find the more critical errors if he or she is only interested in whether it works or not.


2

Each and every one of us is a QA, whether we are a developer, business analyst, product owner or end user. You have to be both the developer and the QA while you are developing. Quality control is required at every level of product development, from every single person involved. Learn what you can from the QA team, and educate yourself about it from online articles and courses. (Unit testing, by the way, comes nowhere close to what end users actually do in the real world.)

You have to treat the final build you send to the QA team as though it were the final build for production, and with full responsibility for it.


2

In addition to your unit tests, please check that your feature meets any written requirements. If the written requirements are out of date, make sure your product manager and your QA are aware of it.

There is nothing more annoying than checking software against the agreed requirements, only to find out that those requirements were not correct or straight-up ignored and abandoned. If the developer is just going to build what she wants and the product manager is just going to go along with it, why bother having QA manually test it? I would prefer to spend my time automating regression tests.

If there are no written requirements, For the Love of Linux, write some down somewhere. You could express them verbally to a QA engineer, but you might end up repeating yourself. Just jot down the basics in an email or a wiki page, or create a document with your QA engineer as you present your feature.

Another thing a developer can do is find ways to make sure that the features are testable. For instance, if the feature should act a certain way on the off chance that, say, a mail server is down, find an accessible way to simulate that scenario. Build it in as part of your programming effort.
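
A small sketch of building that kind of testability in, using dependency injection so a "mail server is down" scenario can be simulated without any real infrastructure; every class, function and requirement here is hypothetical.

```python
# Testability sketch: the mailer is injected, so a tester (or a unit test)
# can swap in a fake that simulates the mail server being down.
class MailServerDown(Exception):
    pass


class FakeDownMailer:
    """Stand-in mailer that behaves as if the mail server is unreachable."""
    def send(self, to: str, subject: str, body: str) -> None:
        raise MailServerDown("simulated outage")


def notify_user(mailer, address: str) -> str:
    """Feature under test: must degrade gracefully if mail cannot be sent."""
    try:
        mailer.send(address, "Order confirmed", "Thanks for your order!")
        return "sent"
    except MailServerDown:
        # Hypothetical requirement: queue the notification instead of failing.
        return "queued for retry"


def test_notification_degrades_gracefully_when_mail_is_down():
    assert notify_user(FakeDownMailer(), "user@example.com") == "queued for retry"
```

Because the mailer is passed in rather than hard-coded, QA (or a unit test) can exercise the failure path on demand.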


1

Have a 3 Amigos meeting.

The 3 Amigos (in case you don't know) is a meeting where the business analyst presents a business requirement coupled with test scenarios (collectively called a “feature”) for review by a member of the development team and a member of the quality assurance team. This helps to build a shared understanding among all three of them.

Make sure you develop the feature according to this shared understanding and cover these test scenarios in your unit tests to the extent possible.

I found it very helpful to have an open conversation between developer and tester to divide which parts to cover at the unit level (developer) and which at the integration/end-to-end level (tester).