The problem with most data people these days is that they're poorly educated: too much statistical training is either mathematical proofs or hands-on instruction in how to use stat packages.
I focused on econometrics in graduate school not to be an econometrician (I wasn't that good), but to understand the intuition behind statistical models.
My favorite statistician was George Box, and I love his famous quote: "All models are wrong, but some are useful." The problem is that most people who use data don't understand the intuition behind the models, apply them mechanically, and don't know how to properly interpret the results. Gee, it passed the significance tests.
I realized early that all significance tests are seriously biased with real-world data (experimental data is another world altogether: you have more control, but other ways to bias your results), because left-out-variable error (omitted-variable bias) is ubiquitous. It's an issue that's ignored because it's too ugly to solve.
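As a rough illustration of the point (a hypothetical simulation, not anything from an actual analysis), here's a short Python sketch: the true effect of x1 on y is tiny, but because x1 is correlated with an omitted variable x2, a naive regression of y on x1 alone reports a large coefficient with a tiny p-value.

```python
# Hypothetical simulation of left-out-variable error (omitted-variable bias).
# The true effect of x1 on y is only 0.1, but x1 is correlated with x2,
# which we "fail to measure," so the naive regression badly overstates it.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
n = 1_000

x2 = rng.normal(size=n)                        # the variable we leave out
x1 = 0.8 * x2 + rng.normal(size=n)             # observed variable, correlated with x2
y = 0.1 * x1 + 1.0 * x2 + rng.normal(size=n)   # true effect of x1 is only 0.1

result = linregress(x1, y)                     # regress y on x1 alone
print(f"estimated slope on x1: {result.slope:.2f} (true value 0.10)")
print(f"p-value: {result.pvalue:.2e}")         # tiny p-value despite the biased estimate
```

The regression "passes" the significance test with flying colors, yet the coefficient it certifies is several times the true effect, which is the sense in which the test is biased by what's missing from the data rather than by what's in it.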
There's nothing wrong with analytics, as long as the analyst has the proper humility to appreciate the limits of the data, and is vigilant in searching for other data, including qualitative data (e.g., scouting input). My belief is you need three things: a model, data, and a story. The story doesn't accept causality as presented by the analytics; rather, it attempts to identify the mechanism of causality, and by doing so, hopefully exposes the limits of the analysis and avenues for further research to verify or reject the story. That is, it's not enough to say the data suggests causality runs from Variable A to Variable B if you can't explain HOW A causes B.