ASO leader advocates ditching statistical perfectionism for faster growth

Posted: July 22, 2025

App marketing teams are embracing a more pragmatic approach to data analysis, prioritizing business outcomes over statistical significance as industry leaders advocate for “data-powered” rather than “data-driven” decision making.

Speaking at App Promotion Summit London 2025, Simon Thillay, Head of ASO Strategy & Market Insights at AppTweak, outlined how marketing teams can make confident decisions even when statistical certainty remains elusive—a challenge that has long plagued App Store Optimization efforts.

The A/B testing disconnect

Thillay’s presentation addressed a persistent industry problem: why A/B test results often fail to match real-world performance after implementation. Using interactive polling with the audience, Thillay demonstrated how widening confidence intervals make decisions progressively harder, even when the underlying performance differences are minimal.

“Most of the time, we keep having results that don’t appear to match what we observe after we apply the winning result,” Thillay explained, drawing from eight years of experience in App Store Optimization and growth marketing.
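The mechanism is easy to reproduce. A minimal simulation (not from the talk, and the sample sizes are illustrative): two creatives with an identical true conversion rate are tested on small samples, and simply shipping the observed “winner” manufactures a lift that evaporates after rollout.

```python
import random

# Minimal simulation (not from the talk): two creatives share the SAME true
# 20% conversion rate. Testing both on small samples and shipping the observed
# "winner" manufactures a lift that disappears after rollout.
random.seed(42)

TRUE_RATE = 0.20        # both variants convert identically
SAMPLE_PER_ARM = 200    # impressions per variant during the test
TRIALS = 10_000

lifts = []
for _ in range(TRIALS):
    a = sum(random.random() < TRUE_RATE for _ in range(SAMPLE_PER_ARM))
    b = sum(random.random() < TRUE_RATE for _ in range(SAMPLE_PER_ARM))
    winner_rate = max(a, b) / SAMPLE_PER_ARM
    lifts.append(winner_rate - TRUE_RATE)

print(f"average 'lift' of the test winner: {sum(lifts) / TRIALS:+.1%}")
# ~ +1.6%: pure selection noise that vanishes once the winner ships,
# matching the post-implementation mismatch described above.
```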

The sensitivity analysis foundation

Before diving into complex statistical measures, Thillay advocated for sensitivity analysis as a first-line assessment tool. The method involves calculating how a single additional data point — one more impression or install — would change key metrics like conversion rates.

He showed how one install from four impressions yields a 25% conversion rate, but adding just one more impression could shift that rate to either 40% (if it converts) or 20% (if it doesn’t). This dramatic variation immediately signals when sample sizes are too small to support confident decision-making.
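The check is simple enough to express in a few lines. A quick sketch, with a 250-in-1,000 case added for contrast (the function name is illustrative):

```python
# Sketch of the sensitivity check: how far does the conversion rate move
# if the very next impression converts, or doesn't?
def conversion_sensitivity(installs: int, impressions: int) -> None:
    current = installs / impressions
    if_next_converts = (installs + 1) / (impressions + 1)
    if_next_does_not = installs / (impressions + 1)
    print(f"current {current:.1%} -> "
          f"{if_next_converts:.1%} or {if_next_does_not:.1%}")

conversion_sensitivity(1, 4)        # current 25.0% -> 40.0% or 20.0%: too shaky
conversion_sensitivity(250, 1000)   # current 25.0% -> 25.1% or 25.0%: stable
```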

“That’s a great way to visualize how shaky the ratio you’re looking at is at the moment,” Thillay noted, while acknowledging the method’s limitations for high-traffic applications where thousands of daily impressions make rapid fluctuations more complex to assess.

Moving beyond P-values

While P-values remain useful for binary decisions about statistical significance, Thillay argued that confidence intervals provide superior practical value for marketing teams. Rather than a simple yes-or-no answer about significance, confidence intervals offer visual representation of uncertainty and enable planning around best- and worst-case scenarios.

The distinction proved crucial in a real-world Apple Ads example Thillay shared. With a current $1 cost-per-tap bid and a 20% conversion rate, cost-per-install comes to $5. A CMO requiring positive return on ad spend after one month, with $4 of revenue per user, would therefore seem to call for lowering the bid to $0.80, the level at which a 20% conversion rate yields exactly a $4 cost-per-install.

However, when factoring in the confidence interval showing conversion rates between 19-21%, the worst-case scenario could still result in $4.21 cost-per-install at the $0.80 bid level. By incorporating this uncertainty, the team could adjust to a $0.76 bid, ensuring profitability across the entire confidence range.
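A short reconstruction of that arithmetic, assuming the bid is derived as target cost-per-install times conversion rate:

```python
# Reconstruction of the bid arithmetic above, assuming bid = target CPI x CVR.
target_cpi = 4.00   # $4 revenue per user, so CPI must stay at or below $4
cvr_low = 0.19      # lower bound of the 19-21% confidence interval

naive_bid = target_cpi * 0.20         # $0.80, based on the point estimate alone
worst_case_cpi = naive_bid / cvr_low  # $4.21 if the true rate is only 19%
safe_bid = target_cpi * cvr_low       # $0.76 keeps CPI <= $4 across the interval

print(f"naive ${naive_bid:.2f} bid -> worst-case CPI ${worst_case_cpi:.2f}")
print(f"safe  ${safe_bid:.2f} bid -> worst-case CPI ${safe_bid / cvr_low:.2f}")
```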

The incrementality challenge

Beyond comparing different time periods, Thillay addressed the critical question of separating genuine marketing impact from seasonal effects — what he terms the “incrementality” challenge. This involves creating baseline predictions that account for yearly seasonality, weekly patterns, ongoing growth trends, and specific events.

“When we talk about incrementality, we want to compare not the improvement we’ve seen, but rather say, what is the improvement we’ve seen that is an improvement we wouldn’t have seen if we hadn’t done anything,” he explained.

The methodology proved its value in analyzing Amazon Prime Video’s performance after adding Taylor Swift’s Eras Tour to their platform. Using AppTweak’s statistical calculator, the analysis showed a 22% increase in revenue estimates the day after the concert was added — an impact that could be clearly attributed to the content addition rather than seasonal variations.
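AppTweak’s calculator isn’t public, but the underlying idea can be sketched with a deliberately simple baseline: predict what the metric would have been from the same weekday in prior weeks, and attribute only the excess to the event. All numbers below are hypothetical; a production baseline would also model trend and yearly seasonality, as Thillay notes.

```python
# Simplified sketch of the incrementality idea: predict what the metric
# "would have been" and attribute only the excess to the event.
from statistics import mean

def incremental_lift(history: list[float], observed: float, weekday: int) -> float:
    """Baseline = average of the same weekday over prior weeks.

    history: daily values ordered oldest-first, covering full weeks.
    weekday: 0-6 offset of the observed day within the weekly cycle.
    """
    same_weekday = history[weekday::7]   # every 7th day, same weekday
    baseline = mean(same_weekday)
    return (observed - baseline) / baseline

# Hypothetical numbers: four prior Mondays averaged 100; the event day hit 122.
history = [100, 90, 95, 92, 98, 120, 115] * 4   # four weeks of daily revenue
print(f"lift: {incremental_lift(history, observed=122, weekday=0):+.0%}")  # +22%
```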

Practical applications across marketing functions

Thillay detailed how these statistical approaches apply across various marketing scenarios, from evaluating in-app events and major updates to assessing Custom Product Page performance in Apple Ads campaigns. Importantly, he emphasized that finding non-significant results can be equally valuable for strategic decision-making.

In one case, a client experienced a significant drop in app average rating that correlated with decreased downloads. While the natural assumption pointed to causation, statistical analysis revealed the download decrease fell within expected seasonal variation ranges. This insight helped redirect resources toward both rating improvement and alternative growth strategies rather than focusing solely on review management.
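That kind of check is straightforward to express: compare the observed figure to the range implied by comparable historical periods. A sketch with illustrative numbers:

```python
# Sketch of the check described above: did downloads fall outside the range
# that seasonality alone would predict? (All figures are illustrative.)
from statistics import mean, stdev

# Downloads for the same calendar week in comparable prior periods.
seasonal_history = [48_000, 51_500, 46_200, 49_800, 47_400]
observed = 45_000  # the "alarming" week after the rating dropped

mu, sigma = mean(seasonal_history), stdev(seasonal_history)
z = (observed - mu) / sigma

print(f"expected {mu:,.0f} +/- {1.96 * sigma:,.0f}, observed {observed:,}, z = {z:.2f}")
if abs(z) < 1.96:
    print("within normal seasonal variation -> don't assume the rating caused it")
```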

The speed vs. precision balance

Perhaps most importantly, Thillay challenged the industry’s obsession with statistical significance, arguing that business context should determine acceptable uncertainty levels. Using another bidding example, he demonstrated how a seemingly large confidence interval (conversion rates between 50.4% and 69.6%) translates to much smaller practical variations at scale.

With 10,000 daily taps, the bid variation drops to just 1.5 cents, often negligible compared to the cost of delayed decision-making. “You don’t need to have statistical significance all the time,” Thillay emphasized. “You need to consider what’s the outcome, at what level you’re actually able to start saying: okay, it’s not perfect, but it’s good enough for us to make a decision.”
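The talk’s exact inputs aren’t spelled out, but the numbers are easy to reconstruct with a normal-approximation confidence interval; the $0.80 multiplier below reuses the bid from the earlier example and is an assumption:

```python
# Why the wide interval stops mattering at scale (normal-approximation CI).
# The $0.80 multiplier reuses the earlier bid example and is an assumption.
from math import sqrt

def ci_width(p: float, n: int, z: float = 1.96) -> float:
    """Full width of the ~95% normal-approximation CI for a proportion."""
    return 2 * z * sqrt(p * (1 - p) / n)

p = 0.60
for n in (100, 10_000):
    width = ci_width(p, n)
    print(f"n={n:>6}: CVR {p - width / 2:.1%}..{p + width / 2:.1%}, "
          f"bid spread ~${0.80 * width:.3f}")
# n=   100: CVR 50.4%..69.6%, bid spread ~$0.154
# n= 10000: CVR 59.0%..61.0%, bid spread ~$0.015 (about 1.5 cents)
```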

Tools and implementation

Recognizing that statistical expertise isn’t universal among marketing teams, Thillay advocated for leveraging online calculators and AI tools like ChatGPT for analysis support.

“You don’t need to know the math to actually use the statistics,” he noted.

The data-powered philosophy

Thillay’s presentation concluded with a crucial distinction between being “data-driven” and being “data-powered.” Rather than letting data make decisions automatically, he advocated using statistical insights to inform decision-making while keeping the focus on practical business outcomes.

“Focus on outcomes, not statistics,” he advised. “Statistics should inform your reporting and decision making, but the practical outcomes are what should determine how you make your decisions.”

This pragmatic approach acknowledges that even with sophisticated statistical analysis, App Store Optimization often reduces decisions to simple binary choices — keep current creative assets or change them — regardless of whether improvements are modest or substantial. The key lies in understanding what level of uncertainty teams can accept while maintaining competitive agility.

As mobile marketing becomes increasingly complex, Thillay’s framework offers a balanced approach: embrace statistical rigor where it adds value, but don’t let the pursuit of perfect data prevent timely, informed decision-making. The goal isn’t statistical perfection but better business outcomes through smarter use of the data at hand.

Catch Thillay’s full session here.