Even a gently thrown rock can bring down a glass house. That image came to mind last summer as I read a research note by two finance Ph.D.s, Jason Hsu and Vitali Kalesnik. Both work at Research Affiliates, of which Hsu is a cofounder.

Ever since then, I’ve been waiting for more stones to fly or, at least, a few shoes to drop.

If these names ring some bells, the other cofounder of Research Affiliates is Rob Arnott. Hsu isn’t a household name, but Arnott comes close. They are the fellows who brought the idea of fundamental indexing into the world. The idea is a challenge to basic Bogle indexation, and it has gathered billions in exchange-traded fund and institutional assets. Both Arnott and Hsu are scholarly; Hsu is the quiet one.

Over time, I’ve learned that when Hsu speaks, it’s good to listen.

In “Finding Smart Beta in the Factor Zoo,” Hsu and Kalesnik fired one, but just one, of the early rounds in what historians will likely call ‘the new skepticism.’ They pointed to something we’ve all joked about: that if you torture data long enough, it will confess to anything. Worse, if you leave enough graduate students alone with computers and demand a thesis topic, they are likely to discover something no one has ever seen before, even if it isn’t there.

The case Hsu and Kalesnik cite is the proliferation of “factors” that are supposed to add performance to your investment. From very basic factors—such as small cap versus large cap or traditional value investing— researchers have now ferreted out multitudes of factors that are supposed to give portfolios new zip.

How many factors are there in a multitude? It depends on who is doing the counting. Hsu cites some investors who build portfolios using 81 different factors. Other researchers have found and cataloged 315 factors. And the record goes to two more researchers, who found 600 factors named in the research literature.

Who knew?

The only problem is that most of these factors, the vast majority, are confessions from data that has been tortured too long. When closely examined, the higher return so proudly announced can’t be found, let alone duplicated. They are the result of what Uri Simonsohn, a professor at the University of Pennsylvania, has called “p-hacking”: the conscious or unconscious manipulation of a data sample so that it crosses the magical line, conventionally a p-value below 0.05, that makes something statistically significant.
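The mechanism Simonsohn describes is easy to see in miniature. Here is a minimal sketch, assuming nothing beyond NumPy and SciPy, with entirely made-up numbers: it fabricates 600 pure-noise “factor” return series, so that by construction none has any real effect, then counts how many clear the conventional 5% significance bar by luck alone.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_factors = 600   # candidate "factors" -- all pure noise by construction
n_months = 240    # 20 years of hypothetical monthly returns
alpha = 0.05      # the conventional significance threshold

# Each row is a fake factor's monthly return series with a true mean of zero.
returns = rng.normal(loc=0.0, scale=0.04, size=(n_factors, n_months))

# Test each factor against the null hypothesis of zero mean return.
pvalues = np.array([stats.ttest_1samp(r, 0.0).pvalue for r in returns])

false_discoveries = int((pvalues < alpha).sum())
print(f"{false_discoveries} of {n_factors} pure-noise factors look 'significant'")
```

With 600 tests at a 5% threshold, roughly 30 false discoveries are expected on average. That is the factor zoo in a nutshell: test enough candidates and some will always look significant.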

And that’s important. If something becomes statistically significant, it can be discovered. It can be written about. It can be announced. It can justify further research and greater funding. Come up just a smidge short of statistical significance, however, and you disappear as though your work never existed. Lots of pressure there.

Disappointed that there aren’t 600 investment factors out there, just waiting to make us rich? Don’t be. This topic is far deeper and far more important than arguments among the mandarins of finance.

It’s about what we know, how we know it and, well, whether we know it.

The new skepticism has been brewing for a long time. Its arrival in financial research is relatively late, given that complaints about data torture, poor research design, and sample manipulation can be traced back at least 21 years. That’s when the British Medical Journal (now known as the BMJ) published statistician Doug Altman’s editorial on the poor quality of medical research. In it, he cited all the kinds of flaws now being found in the research that created the investment factor zoo.

Nor is Altman a lone voice. Researcher John P.A. Ioannidis has devoted his career to documenting the over-claiming, and sometimes the outright wrong claims, in much medical research. Remember how important mammograms were? Or PSA tests? How about colonoscopies? All less useful than originally claimed.

What is important here isn’t that you and I should follow all of the arguments and examinations. That would be an impossible task. What’s important is that the questions are being asked. What we can expect, over time, is more modest claims about almost everything.

For the moment, just two things are clear. First, simple is better most of the time. Second, the odds are against complexity.

That’s good to know. And wise to remember.