Overall a great explanation, but this section isn’t quite right. The value of a test comes from the value of the information it provides; if we have a test that we know has a 10% false-negative rate and a 0.001% false-positive rate, a positive result tells us a tremendous amount, while a negative one can still be read as weak evidence of absence.
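For concreteness, here is a minimal sketch of the Bayes update implied above. The 1% base rate is my assumed illustration value, not from the original; the error rates are the ones quoted:

```python
# Minimal sketch of the Bayes update. The 1% base rate is an assumed
# illustration value; the error rates are the ones quoted in the text.

def posterior(prior, p_result_given_sick, p_result_given_healthy):
    """P(sick | test result) via Bayes' rule."""
    evidence_for = prior * p_result_given_sick
    evidence_against = (1 - prior) * p_result_given_healthy
    return evidence_for / (evidence_for + evidence_against)

prior = 0.01          # assumed base rate of the condition
sensitivity = 0.90    # 10% false negatives
fp_rate = 0.00001     # 0.001% false positives

# Positive result: P(pos | sick) = 0.90 vs. P(pos | healthy) = 0.00001
print(posterior(prior, sensitivity, fp_rate))          # ~0.9989
# Negative result: P(neg | sick) = 0.10 vs. P(neg | healthy) = 0.99999
print(posterior(prior, 1 - sensitivity, 1 - fp_rate))  # ~0.0010
```

Under these assumed numbers, a positive swings the probability from 1% to roughly 99.9%, while a negative only drops it from 1% to about 0.1%: decisive evidence in one direction, a single order of magnitude in the other.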
Formally, the [risk-neutral] value of information is E(utility of best decision with the test) − E(utility of best decision without the test). That means, as one example, that if our default decision without the test is to do nothing, since most people do not have cancer, then even an inexpensive test with a high false-negative rate is a significant marginal improvement.
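A minimal sketch of that comparison, with all utilities, rates, and the base rate below being hypothetical numbers I chose for illustration (they are not from the original):

```python
# Sketch of E(decision with test) - E(decision without test).
# All values below are assumed illustration numbers.

prior = 0.01         # assumed base rate of cancer
sensitivity = 0.50   # deliberately high false-negative rate (50%)
specificity = 0.99   # assumed

# Assumed utilities (arbitrary units): treating a sick patient beats
# missing them; treating a healthy patient carries a small cost.
U = {
    ("treat", "sick"): -10,    # caught and treated
    ("treat", "healthy"): -2,  # unnecessary treatment
    ("ignore", "sick"): -100,  # missed cancer
    ("ignore", "healthy"): 0,  # correct inaction
}

def eu(action, p_sick):
    """Expected utility of an action given P(sick)."""
    return p_sick * U[(action, "sick")] + (1 - p_sick) * U[(action, "healthy")]

def best(p_sick):
    """The utility-maximizing action at a given probability of sickness."""
    return max(("treat", "ignore"), key=lambda a: eu(a, p_sick))

# Without the test: act on the prior alone. Since the prior is low,
# the optimal blanket policy is "ignore".
eu_without = eu(best(prior), prior)

# With the test: update on each possible result, pick the best action
# for each, and weight by how often each result occurs.
p_pos = prior * sensitivity + (1 - prior) * (1 - specificity)
p_sick_pos = prior * sensitivity / p_pos
p_sick_neg = prior * (1 - sensitivity) / (1 - p_pos)
eu_with = (p_pos * eu(best(p_sick_pos), p_sick_pos)
           + (1 - p_pos) * eu(best(p_sick_neg), p_sick_neg))

print(f"Value of information = {eu_with - eu_without:.3f}")  # ~0.43 > 0
```

Even with a 50% false-negative rate, the value of information comes out positive here, because the rare positives redirect treatment to people who would otherwise have been ignored.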
The conceptual mistake here is simple: there is a difference between an imperfect test that is treated as infallible and one that is used judiciously. In ML, this distinction is typically ignored, since the current state of the art is to use the measure the test generates directly as a decision metric. But that practice is problematic in many more ways than this one, as I discussed in great detail as a resident blogger on Ribbonfarm, here and then here.