Now, this isn't usually the case -- there are bit patterns that represent "not a valid number", and on a computer that does math according to the IEEE standard for floating-point arithmetic, the result of multiplying "not a valid number" by 0.0 is supposed to be "not a valid number". However, the single-precision floating-point arithmetic on the Cell's coprocessors is not designed to comply with all the niceties of the IEEE standard here; instead, like the Cray processors of old, it's designed to go as fast as possible. So it was a reasonable conjecture that they would, in fact, simply return 0.0 for everything.
The question, then, was how to quickly get an answer that I'd trust. Google was one option, but trusting random stuff on the internet is unwise, especially when the risk is introducing a subtle and hard-to-track-down bug.
The naive answer, of course, is to test every possible input. This is the sort of thing that every computer-science freshman knows is absurd and impossible -- the number of possible inputs grows exponentially, and you can get numbers like "every possible position of every electron in the universe since the big bang" almost without trying.
Even in this exceedingly simple case, there are 2³² possible 32-bit patterns that could happen. That's a bit over four billion of them. Four billion grains of sand will overfill a 55-gallon drum. This is not really even a comprehensible number.
Except, wait. This is a processor with a clock speed of 3.2 billion cycles per second, and it takes probably a few dozen cycles to test each number. That's ... entirely plausible.
So I tried it.
It took about two minutes. They are, indeed, all zero.