Jumping in: one thing that keeps coming up in practice is that while frameworks like FAIR and Monte Carlo simulation get a lot of attention, their outputs are only as good as the data you feed them. I’ve seen organizations get bogged down trying to assign precise probabilities or loss magnitudes when they simply don’t have historical data specific to their environment. That often leads to a false sense of precision or, worse, model fatigue.
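One habit that helps is refusing to enter point estimates at all: capture a calibrated range and let the model carry the uncertainty. Here's a minimal Python sketch of that idea; the dollar bounds are made-up placeholders, not real loss data, and this is just one way to map a 90% interval onto a lognormal.

```python
import numpy as np

# Hypothetical calibrated estimate: "we're 90% confident a single incident
# costs between $50k and $2M". These bounds are illustrative assumptions.
low, high = 50_000, 2_000_000

# Back out lognormal parameters from the 5th/95th percentiles.
# ln(X) is normal, and a normal's 5th/95th percentiles sit at mu -/+ 1.645*sigma.
z = 1.6449  # ~95th percentile of the standard normal
mu = (np.log(low) + np.log(high)) / 2
sigma = (np.log(high) - np.log(low)) / (2 * z)

# Draw single-event losses that respect the stated uncertainty.
rng = np.random.default_rng(42)
losses = rng.lognormal(mean=mu, sigma=sigma, size=100_000)

print(f"median loss ~ {np.median(losses):,.0f}")
print(f"90% interval ~ {np.percentile(losses, 5):,.0f}"
      f" to {np.percentile(losses, 95):,.0f}")
```

The point isn't the specific distribution; it's that the range your SMEs actually believe goes into the simulation, instead of a single number nobody stands behind.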
Another practical pitfall: once you’ve set the model up, actually updating and validating it is usually treated as optional rather than essential. Adversaries shift tactics quickly, so unless you’ve got a process to constantly re-calibrate (preferably with real incident data), the model gets stale fast.
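Re-calibration doesn't have to be a heavy lift, either. One cheap approach is a conjugate Bayesian update on event frequency: keep a Gamma prior on the annual incident rate and fold in each period's observed incidents as they land. A sketch below, with the prior and incident counts being assumptions purely for illustration:

```python
import numpy as np

# Prior belief about annual incident frequency as a Gamma(shape, rate):
# roughly "we expect ~2 incidents/year, but aren't very sure". Assumed numbers.
alpha_prior, beta_prior = 2.0, 1.0

# Observed since the last calibration (assumed): 5 relevant incidents in 3 years.
incidents, years = 5, 3

# Gamma-Poisson conjugate update: posterior is Gamma(alpha + k, beta + t).
alpha_post = alpha_prior + incidents
beta_post = beta_prior + years

rng = np.random.default_rng(0)
rate_samples = rng.gamma(shape=alpha_post, scale=1 / beta_post, size=100_000)

print(f"posterior mean rate: {rate_samples.mean():.2f} incidents/year")
print(f"90% credible interval: {np.percentile(rate_samples, 5):.2f}"
      f" to {np.percentile(rate_samples, 95):.2f}")
```

Even a crude update like this beats letting the original likelihood estimates sit untouched for two years.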
On tools: everyone gravitates toward RiskLens because it’s the big FAIR platform, but honestly, it’s as complex as it is powerful. Sometimes a well-designed R script, or even a good spreadsheet, can go just as far and be easier to adapt, provided you trust your data and know your assumptions.
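For what it's worth, the core of that kind of script really is only a handful of lines. Here's a rough Python equivalent of what I mean (every parameter here is a placeholder assumption, not a benchmark): sample an annual event rate, draw event counts, draw per-event losses, and read off a loss exceedance view.

```python
import numpy as np

rng = np.random.default_rng(7)
n_trials = 100_000  # simulated years

# Assumed inputs of the kind you'd get from the calibration steps above:
# annual rate samples (Gamma), events per year (Poisson), per-event loss (lognormal).
annual_rate = rng.gamma(shape=7.0, scale=1 / 4.0, size=n_trials)  # illustrative rate uncertainty
event_counts = rng.poisson(lam=annual_rate)                        # events in each simulated year
mu, sigma = 11.8, 1.1                                              # placeholder lognormal loss params

# Total annual loss per simulated year (zero events -> zero loss).
annual_loss = np.array([
    rng.lognormal(mean=mu, sigma=sigma, size=k).sum() for k in event_counts
])

# Loss exceedance view: probability a year costs more than each threshold.
for threshold in (1e6, 5e6, 10e6):
    p = (annual_loss > threshold).mean()
    print(f"P(annual loss > ${threshold:,.0f}) ~ {p:.1%}")
```

The advantage isn't sophistication, it's transparency: every assumption is a named variable someone can argue with, which is half the battle with these models.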
Curious to hear whether anyone has cracked the patchy-data problem for likelihood and scenario calibration in a way that doesn’t eat up all your time. That’s still the biggest bottleneck I’ve seen.