In my organization, we try to have a developer and a tester sit down before writing any code to have exactly that discussion - because the answer will probably be different for each feature you implement.
Unit testing is a given in my team - but more often than not, the tester and developer will also automate GUI tests up front too. That way the core business logic is tested when the feature is passed to QA - leaving testers to conduct exploratory testing.
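To make the "core business logic is tested" idea concrete, here is a minimal sketch of the kind of unit test a developer might write before handing a feature to QA. The `calculate_discount` rule and its threshold are hypothetical, invented purely for illustration:

```python
# Hypothetical business rule: orders of 100 units or more get a 10% discount.
# Both the function and the rule are illustrative, not from any real product.

def calculate_discount(quantity: int, unit_price: float) -> float:
    """Return the total price, applying a 10% bulk discount at 100+ units."""
    total = quantity * unit_price
    if quantity >= 100:
        total *= 0.9
    return total

def test_no_discount_below_threshold():
    assert calculate_discount(99, 1.0) == 99.0

def test_discount_at_threshold():
    assert calculate_discount(100, 1.0) == 90.0

if __name__ == "__main__":
    test_no_discount_below_threshold()
    test_discount_at_threshold()
    print("all unit tests passed")
```

Tests like these pin down the business rule itself, which frees the tester to spend QA time on exploratory testing rather than re-verifying arithmetic.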
We have Developers test their code by doing a build, deploying it to the Development environment, and verifying what they can in the new environment. If there are other changes in the build, they need to coordinate with the other Developers to make sure testing is done. In other environments, Developers would write out simple test plans and swap them with each other, so each Developer is testing code they did not write.
I'd check with your QA team and see what they would need. At a minimum, you should have done enough to verify that the build will not crash when started in the QA environment, so they don't waste time installing a build that will not work.
Although unit tests are normally good enough, it's nice to see a developer ensure that they've run a process from start to finish, not just the unit tests. Because the types of tests that should be run vary greatly with the type of application, I would discuss it with a testing point of contact for the product.
With developers new to me, I usually ask whether they'd be comfortable installing the software in production without it being tested by QA, and taking responsibility for any defects (I'm never really serious about it). This tends to get them thinking of types of tests that I hadn't even thought of!
What has worked well for me are the following steps.
Whilst on paper this may seem really onerous, we found it to be really effective at flushing out and fixing a lot of obvious issues quickly and cheaply, since at that point we haven't touched the defect tracking system yet, and that saves everyone time in the long run.
All of these suggestions are great. I have only one thing to add. When a bug is fixed on one platform, since code is sometimes shared with other platforms, please test the other platform(s) too to see whether the bug exists there. This is especially important with customer-reported bugs.
I'm also a firm believer in developer peer review. Since adopting this methodology at my place of work, the quality of work has increased significantly. The fresh set of eyes provided by peer review frequently finds more efficient ways to write code or picks up errors that would otherwise have been missed.
That said, there is always the matter of available resources and the size of the project; a small team might not be able to effectively peer review all new features while simultaneously sustaining their regular workflow.
Additionally, as obvious as this sounds, I would encourage you to clearly inform your testers of all changes you make to your code, no matter how small. There have been countless times when new features were implemented, but because the relevant work items were never updated with specific enough details, I never knew the new feature existed until I stumbled across it by accident during exploratory testing.
This is even more important if your testers have a large number of automated tests, as even the smallest change can cause those tests to start failing.
There is no hard and fast rule for what kind of testing a developer should do before handing off to QA. It depends on the developer, the QA team, the organization, and the product. Testing is a means to an end, not an end in itself.
The important thing is to pay attention to the quality of what QA receives and the quality of the released product. If the quality at either of those points in time isn't what you want it to be, your organization needs to look at where the problems are. The problem might be with a particular developer who writes a lot of bugs, but it could just as well be, for example, a fragile third-party package or a vague or unstable set of requirements.
As a rule of thumb, here is a prioritized list of what the developer should test before handing off to QA:
At an absolute minimum, a developer should ensure that the software:
If any of these fail, then there is no reason to go on to further testing.
Also, you may want to have an automated "smoke test" that runs through the basic functionality and verifies that there are no major errors when taking the common paths through the system. If a smoke test of this kind exists, it also makes sense to run it before handing the build off to the QA team.
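A smoke test like the one described above can be as simple as a list of named, fast checks run in order, failing fast on the first problem. This is a sketch of such a harness; the three checks shown are illustrative placeholders (a real suite would actually launch the application, hit the main page, log in with a test user, and so on):

```python
# Tiny smoke-test harness: run quick named checks and stop at the first
# failure. The checks below are placeholders standing in for real probes.

from typing import Callable, List, Tuple

def run_smoke_tests(checks: List[Tuple[str, Callable[[], bool]]]) -> bool:
    """Run each named check; report and stop at the first failure."""
    for name, check in checks:
        try:
            ok = check()
        except Exception as exc:
            print(f"SMOKE FAIL: {name}: {exc}")
            return False
        if not ok:
            print(f"SMOKE FAIL: {name}")
            return False
        print(f"smoke ok: {name}")
    return True

if __name__ == "__main__":
    checks = [
        ("application starts", lambda: True),   # e.g. process launches cleanly
        ("main page responds", lambda: True),   # e.g. HTTP 200 from the root URL
        ("test user can log in", lambda: True), # e.g. a session is created
    ]
    build_is_sane = run_smoke_tests(checks)
    print("hand off to QA" if build_is_sane else "fix before handing off")
```

The point is not the harness itself but the discipline: if any of these fast checks fails, the build never reaches QA, so testers never waste time on a build that cannot even start.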
The development process should include, as part of the deliverable, well-written unit tests that accompany the software into the QA cycle. Preferably, a continuous integration server would be configured to run those unit tests each time a QA test build completes.
This really depends on the complexity of the project, the size of the team and the difficulty of integration.
Those should normally deliver good builds to QA, which tends to improve the working relationship, among its other advantages.
It depends. If your process is lightweight (e.g. Scrum), then continuous build, automated tests, etc. may be sufficient.
If your process is heavyweight (e.g. waterfall) with segregation or slowness between the development and testing organizations, then you probably want to do your own formalized manual testing so as to avoid being the bottleneck. In addition to automated tests (unit, integration, and UI automation if possible), you should deploy the software in a production-like environment using your production deployment process.
After the deployment, run a few smoke tests against important and/or high risk components. Then, have each developer verify their bug fix(es) and feature add(s). Better yet, have developer B test developer A's work and vice versa.
Test more than functionality, especially for a web application.
There are plenty of things a developer should be concerned with other than functionality (but make sure you do test functionality).
The functional tester probably isn't going to find the more critical errors if he or she is only checking whether the feature works.
Each and every one of us is a QA, whether we be a developer, business analyst, product owner or end user. You have to be both the developer AND the QA while you are developing it. Quality control is required at every level of product development, from every single person involved. Learn what you can from the QA team; educate yourself about it from online articles and courses (Unit testing, by the way, comes nowhere close to what end users actually do in the real world).
You have to treat the final build you send to the QA team as though it were the final build for production, and with full responsibility for it.
In addition to your unit tests, please check that your feature meets any written requirements. If the written requirements are out of date, make sure your product manager and your QA are aware of it.
There is nothing more annoying than checking software against the agreed requirements, only to find out that those requirements were incorrect, or were simply ignored and abandoned. If the developer is just going to build what she wants and the product manager is just going to go along with it, why bother having QA manually test it? I would rather spend that time automating regression tests.
If there are no written requirements, For the Love of Linux, write some down somewhere. You could express them verbally to a QA engineer, but you might end up repeating yourself. Just jot down the basics in an email or a wiki page, or create a document with your QA engineer as you present your feature.
Another thing a developer can do is find ways to make sure that the features are testable. For instance, if the feature should act a certain way in the off-chance that, say, a mail server is down, find an accessible way to simulate that scenario. Build it in as part of your programming effort.
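One common way to build in the kind of testability hook described above is dependency injection: pass the mail-sending component into the feature, so a test double can simulate the outage. This is a minimal sketch; `MailServerDown`, `FailingMailSender`, and `notify_user` are hypothetical names invented for illustration:

```python
# Sketch: make a "mail server is down" scenario testable by injecting the
# sender. All class and function names here are illustrative.

class MailServerDown(Exception):
    pass

class RealMailSender:
    def send(self, to: str, body: str) -> None:
        raise NotImplementedError("would talk to the real SMTP server")

class FailingMailSender:
    """Test double that simulates the mail server being unreachable."""
    def send(self, to: str, body: str) -> None:
        raise MailServerDown("simulated outage")

def notify_user(sender, to: str, body: str) -> str:
    """The feature under test: it must degrade gracefully, not crash."""
    try:
        sender.send(to, body)
        return "sent"
    except MailServerDown:
        return "queued for retry"  # graceful fallback the tester can verify

if __name__ == "__main__":
    # A tester (or developer) can now exercise the outage path on demand,
    # without taking down a real mail server.
    print(notify_user(FailingMailSender(), "user@example.com", "hello"))
```

With a hook like this built in during development, the tester can exercise the failure scenario deliberately instead of waiting for a real outage to happen.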
The 3 Amigos (in case you don't know) is a meeting where the Business Analyst presents a business requirement coupled with test scenarios (collectively called a "feature") for review by a member of the development team and a member of the quality assurance team. This helps to build a shared understanding between all three of them.
Make sure you develop the feature according to this shared understanding, and cover these test scenarios in your unit tests to the extent possible.
I have found it very helpful to have an open conversation between developer and tester to divide which parts to cover at the unit level (developer) and which at the integration/end-to-end level (tester).