As a general rule of thumb, every time a test fails or has an indeterminate outcome, it requires investigation. If you ignore a failing or indeterminate test, you increase the risk of missing or overlooking a bug.
The value of automation is in executing tests that we deem important enough to run repeatedly, or tests that are executed more effectively or efficiently via automated code than via manual processes, in order to free up our time for other testing tasks.
If we are spending time babysitting tests that throw false positives, that is time taken away from other testing tasks.
We recently put a lot of effort into improving automated test case reliability. In our critical suites, each automated test must pass 100 consecutive runs without failure before it is checked in to the lab. This gives us 99% confidence that any failure in these suites indicates a regression of some sort and is not likely to be a false positive.
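As a rough illustration of where a figure like that comes from (a sketch, not the team's actual statistical method): if a test has some hidden per-run failure probability p, the chance it passes 100 runs in a row is (1 - p)^100. Solving for the largest p that could still slip through at a given confidence level bounds how flaky a checked-in test can be:

```python
def max_failure_rate(passes: int, confidence: float) -> float:
    """Largest per-run failure probability p consistent with observing
    `passes` consecutive passing runs, at the given confidence level.
    Solves (1 - p)**passes = 1 - confidence for p."""
    return 1 - (1 - confidence) ** (1 / passes)

# 100 consecutive passes at 99% confidence bounds flakiness
# at roughly a 4.5% per-run failure rate.
bound = max_failure_rate(100, 0.99)
print(f"max plausible per-run failure rate: {bound:.3f}")
```

In other words, a test that survives the 100x gate is, with 99% confidence, flaky less than about 4.5% of the time per run, so a failure in the lab is far more likely to be a real regression.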
This was a big cost investment and required in-depth root cause analysis of the different failures we were seeing. 10% may not seem like a lot of effort, but as the number of tests increases over time, the costs add up.
Basically, it boils down to your tipping point: when the time you spend (waste) investigating false positives outweighs the time your automated tests should be saving you for other testing tasks.