What is the scope of API testing?



I am testing several API endpoints of our product and have a lot of observations, for example:

  1. If the request's content type is JSON instead of form data, the server responds with a 500 error (a minimal repro sketch follows this list).
  2. Different endpoints give different error responses for the same invalid body. For example, if no file is provided, one endpoint responds with [Error: no file provided] and another with {file: "This is a required field"}.
  3. Endpoints pointing to different versions of the product differ in character casing, e.g. in v1 it is {File: "something"}, in v2 {file: "something"}, and in v3 {FILE: "something"}.
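Here is a minimal repro sketch in Python for the first observation; the URL and field name are placeholders, not our real API:

```python
import requests

URL = "https://example.internal/api/v1/upload"  # placeholder, not the real endpoint

# Documented usage: the endpoint expects multipart form data.
ok = requests.post(URL, files={"file": ("a.txt", b"hello")})
print(ok.status_code)  # 200/201 as per the contract

# Same payload sent as JSON: the server crashes instead of rejecting it.
bad = requests.post(URL, json={"file": "hello"})
print(bad.status_code)  # observed: 500, expected: some 4xx client error
```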

I raised these issues as bugs, but the developers said no one is going to work on them, because everything is well documented in the API contract.

Do these count as valid observations? And if not, why would we test negative test cases at all, since the proper use case is already mentioned in the contract and we can blindly believe that users will follow it?


Going through your particular points, I would say that:

  1. This is a valid observation, since an improperly formed request (i.e. a client error) has to be answered with a 4xx status code. In your case the server obviously lacks request validation, so it tries to execute business logic against incorrect data, which can lead to anything from performance degradation to the entire service halting (a minimal server-side sketch follows this list).
  2. This is an arguable point, since this is a human-readable message (not meant for automated processing), and in general both messages describe the issue in an understandable manner.
  3. I would say this is also a valid observation, since when you deserialize the object you might run into trouble when the field casing of the client classes doesn't match the casing received from the server.
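To make point 1 concrete, here is a minimal sketch of the kind of validation that appears to be missing, using Flask purely for illustration; the route and field names are assumptions, not the actual API:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.post("/upload")
def upload():
    # Reject unsupported content types up front instead of letting the
    # business logic crash on an unexpected body and surface as a 500.
    content_type = request.content_type or ""
    if not content_type.startswith("multipart/form-data"):
        return jsonify({"error": "use multipart/form-data"}), 415

    # Reject a well-formed but invalid body with a 400 client error.
    if "file" not in request.files:
        return jsonify({"file": "This is a required field"}), 400

    # Business logic now runs only against validated input.
    return jsonify({"status": "ok"}), 201
```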


Do these count as valid observations?

Without going into details: yes, those are valid observations and concerns that should be raised, and probably tracked as issues.

But that doesn't mean those concerns should be fixed now, or ever. Your testing does not live in a vacuum: it should take into account user expectations and their legacy code, the project's deadlines, the effort needed to fix the problems and, most important, the harm done if a user stumbles upon one of those issues.

I use the term issue because those are not necessarily bugs in the context of your project. You want to document your findings for future reference or improvement work, but you could use tasks, feature requests or something similar that will not inflate the bug count.

If you are worried, raise the question in a wider forum or with other managers, but be prepared to accept the same answer.

Why would we test negative test cases, as the proper use case is already mentioned in the contract and we can blindly believe that users will follow it?

There is legacy behind this code; there is probably an obscure reason why it was built like that. You should continue testing, trying to find real and important problems, while raising concerns along the way. Those concerns might add up, building a "weight of evidence" that eventually leads to some action.

Project managers are fine with a casual low-severity problem here and there, but if they find out over time that enough problems have accumulated, they might take action to fix them.



Well, I think you're focusing too much on the bugs themselves, but they do not exist in isolation. What is your situation? What is your context? For example, case sensitivity might be important if the error goes all the way to a frontend, or when the response is read by other software that works case-sensitively. You need to know your context so you know which bugs are the important ones to focus on, and what priority/severity to assign to them.
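As a sketch of how that can bite, assuming a case-sensitive Python consumer and the field names from the question:

```python
import json

def read_file_field(payload: str) -> str:
    """A case-sensitive client that expects exactly the v2 field name 'file'."""
    return json.loads(payload)["file"]

print(read_file_field('{"file": "something"}'))      # v2: works
for legacy in ('{"File": "something"}', '{"FILE": "something"}'):
    try:
        read_file_field(legacy)                      # v1 and v3: break
    except KeyError as err:
        print("client broke on field", err)
```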

One exception among your examples might be the first case, with response code 500. In my opinion, an API should never return this; it should return a more suitable error code, in this case perhaps 415 or 400.

I'd suggest talking to other members involved in developing that software; they might provide you with more context, and then you can decide whether or not to insist on these issues.



I do think your findings are worth discussing with the development team; they could be defects. The pushback from the developers is understandable: in this case they do not see the risk, certainly not one that is worth rework.

This is a common pattern I see in teams where testers do not collaborate with developers before they start coding. Breaking the system after it was built gives insights, but I would rather focus on building the best system together from the start. Be careful that you do not swamp a development team with trivial defects, defects that do not really improve the value of the product.

The JSON example is a risk; this behaviour could probably have been discussed earlier in the development process. During a Three Amigos session, try to challenge with: "But what if we pass the API invalid form data, for example JSON?" The developer might suggest a 500 error. Now you can discuss this by protesting and saying that your API guidelines say you should return an object with a clear error message.
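For instance, such a guideline could prescribe one error shape for every endpoint; a hypothetical sketch:

```python
# One hypothetical error shape that every endpoint would share, so a missing
# file is reported identically everywhere instead of the two formats observed.
def error_body(field: str, message: str) -> dict:
    return {"error": {"field": field, "message": message}}

# Both upload endpoints would then answer a missing file with HTTP 400 and:
print(error_body("file", "This is a required field"))
```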

I like API consistency, but if your consumers are only your own developers, they might not think the refactor is worth their time. Maybe you lack API guidelines; how can you improve that over time?

How can you start to have the right discussions at the right moment? A conversation about defects/risks with just testers and developers is a risk in itself. What is the business value? Involve business-minded people. How much time should we spend on technical debt? Will we go faster or slower?