Apple Search Ads (ASA) isn’t just another user acquisition channel for app marketers; it’s one that tends to bring very good results, reaching highly valuable users at the exact moment they’re searching for a solution and are ready to convert and install your app.
Building an ASA strategy brings with it the potential benefits of:
- Protecting your brand: if you don’t buy your brand keywords, your competitors will. And they will have the opportunity to show an alternative product to your users. The other way around is also true: you will get the chance to show users your app while they are looking for your competitors.
- Getting visibility: this is especially important for apps that don’t yet have a known brand or organic visibility in the app store. With ASA you have the chance to show your product to users who would otherwise never see your app because you rank lower in the results.
- Growing your organic rankings: buying keywords (and the resulting overall increase in install volume) can push your rankings up. I observed this happen not only for the keywords I purchased but for our category ranking as well.
But despite my resounding positivity towards ASA campaigns, I have to admit they can actually be a tricky channel to get right, specifically because the channel can be hard to track.
This is because Apple’s attribution techniques often cause large discrepancies between the number of installs a marketer’s Mobile Measurement Partner (MMP) reports and the number of installs reported by Apple. If you weren’t aware of the discrepancies you can face while tracking your ROAS with ASA, there’s a great explanation in this Appsflyer MAMA Board session.
When I started ASA campaigns, I also had to learn how to correctly interpret the mix of attribution data I was receiving. It’s a critical step for establishing specific goals for the channel and explaining results to management.
Because ASA is a complex topic, sharing valuable insight means getting into the weeds a little. But for any app marketers interested in improving their ASA strategy and the quality of insight gathered from the channel, I hope sharing my lessons learned will be useful.
Discovering the effect of LAT and re-downloads on your data
If you have been working with ASA, you should already be aware of two of the main causes (though not the only ones) of the attribution discrepancy between Apple and your MMP: Apple counts re-downloads, and it allows users to activate Limit Ad Tracking (LAT-On), which essentially blocks the demographic and search-history info of the device downloading the app. Because your MMP does NOT receive the data from re-downloads or from LAT-On users, the two platforms can end up reporting very different install figures for the same campaign (I found discrepancies of anywhere from 30% to 70%).
One way I found to isolate and quantify how much LAT-On and re-downloads affect your data is to create a brand campaign with a structure that compares what “is tracked” against what “is not tracked” by using two different ad groups.
Ad group 1: In the first ad group, set no targets (all users, all genders, all ages).
Ad group 2: In the second ad group, target only new users (in order to spot the re-download %) and all genders, but with an age range of 18-65+ (in order to spot the % of users who have activated LAT-On and are thus not disclosing their age).
After running the two ad groups for a few weeks (depending on how much traffic you have), the gap between them should reveal the difference in ROAS (or CPA) that you are facing.
With this data, you can then create models to extrapolate the ROAS for all your campaigns on ASA.
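As a rough illustration of how such a model could work (the function names and all numbers below are hypothetical, not from my actual campaigns): if your MMP only sees the installs left after re-downloads and LAT-On users are stripped out, you can scale the tracked revenue back up by the measured discrepancy rate, under the assumption that untracked installs monetize like tracked ones.

```python
# Hypothetical sketch of a discrepancy model for ASA campaigns.
# Assumption: untracked installs (re-downloads + LAT-On) behave like tracked ones.

def discrepancy_rate(installs_apple: int, installs_mmp: int) -> float:
    """Share of Apple-reported installs that the MMP never sees."""
    return 1 - installs_mmp / installs_apple

def extrapolated_revenue(mmp_revenue: float, rate: float) -> float:
    """Scale MMP-attributed revenue up to cover the untracked share."""
    return mmp_revenue / (1 - rate)

# Illustrative numbers: Apple reports 1,000 installs, the MMP attributes 600.
rate = discrepancy_rate(1000, 600)               # 0.4 → 40% of installs untracked
spend = 2000.0
tracked_revenue = 1500.0
roas_tracked = tracked_revenue / spend           # 0.75, understates reality
roas_model = extrapolated_revenue(tracked_revenue, rate) / spend  # 1.25
```

The same rate can then be applied to campaigns where you cannot run the tracked/untracked comparison directly.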
How I create a diverse and scalable campaign structure
After experimenting I have found an account structure that allows me to maximize results on currently best-performing keywords while also exploring new topics and tailoring the message to the audience.
All campaigns are separated by country for our main countries of focus, and aggregated into small groups of countries or regions for locations we don’t focus on as much. We often aggregate these based on language and time zone.
1. Brand campaigns (Exact Match, EM)
Organization: 3 ad groups per campaign:
- Targeted New users, Female, age 18-65
- Targeted New users, Male, age 18-65
- Non-targeted: all users, all genders, all ages
This structure allows you to test gender-specific creatives and gives you a specific CPA for each ad group. If age matters more than gender for your app, you can build the ad groups around age ranges instead.
Note: Using many ad groups that target users at a granular level might not be a good idea if you are just starting out or don’t have big volumes: the campaigns will be harder to manage, and the data at the keyword level will be diluted, making it hard to optimize.
2. Generic “stable” campaigns (Exact Match)
Organization: One campaign per topic (for each country or multi-country group).
This structure makes it easier to focus on topics that bring better results while still testing new topics.
For topics that are working well, you can be more aggressive on CPA targets, bids and increase daily budgets. If you have enough daily volume per campaign, you can even create ad groups with different targeted audiences to improve the results even more (ROAS, CPA…).
The topics/keywords with medium performance can still be kept active for the sake of volume, or to explore other optimization strategies, like specific creative sets that better inform users of what your app offers and better fit what they are looking for.
3. Generic “testing” campaigns (Exact Match)
Organization: I use only one ad group, targeting ONLY new users, ages 18-65, to minimize the attribution data loss resulting from the re-download and LAT-On issues mentioned above.
When I want to test new keywords that might not be directly connected to any of my “stable” campaigns, I separate them into one or more new campaigns (depending on how I can best aggregate them into topics).
That way the keyword-level data is aggregated with more volume, making it easier to optimize on your in-app KPIs.
In my testing campaigns, I add new keywords I find in different tools, like AppTweak, AppFollow, MobileAction, or even using web keyword suggestion tools like Ubersuggest.
This is a good strategy for:
- Testing keywords related to newly launched features, for example, when you don’t yet know how they will perform.
- Testing new keywords for indirect competitors.
4. Discovery campaigns (Broad Match, BM)
I add to my BM campaigns only Exact Match keywords that have mid-to-high search volume. Keywords with already-low search volume usually just return search terms aggregated under “low volume search terms”, so they don’t work as a strategy for mining new keywords.
I also add competitors’ keywords to these campaigns (for the main countries, I keep them separate to better control the daily budget). If the app is doing well in terms of ASO, you can find very good new keywords, including new topics to explore. It can also be a hacky way to figure out whether something has changed in competitors’ apps (if they’ve added a new feature, for example).
Note: For the BM campaigns, it is important to add all of your EM keywords as negatives, so you keep the structure clean and organized.
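That bookkeeping is simple but easy to let slip. A minimal sketch (the keyword lists are made up for illustration): treating each keyword list as a set makes it easy to spot which EM keywords are still missing from a BM campaign’s negatives.

```python
# Hypothetical example: find which Exact Match keywords still need to be
# added as negatives to a Broad Match campaign. Keywords are illustrative.

exact_match_keywords = {"meditation app", "sleep sounds", "calm music"}
bm_existing_negatives = {"sleep sounds"}

# EM keywords not yet negated in the BM campaign:
missing_negatives = exact_match_keywords - bm_existing_negatives
```

Exporting both lists from the ASA console and diffing them this way keeps BM traffic from cannibalizing your EM campaigns.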
My thoughts on Search Match (SM) campaigns:
Together with Brand, these were the first campaigns I started in ASA. They didn’t work at all, either in terms of performance (more or less expected) or in terms of mining keywords, for any country. One explanation is that our app didn’t have optimized store metadata in some countries/languages, so non-relevant keywords were showing up in the SM campaigns, which gave me the extra work of adding negatives.
I suggest looking first at your ASO and at the keywords your app currently ranks for. If you don’t see good results there, don’t expect much from SM campaigns. You can still test them, but only with low bids, low CPA targets, and daily caps to limit risk, at least in the beginning. Nowadays, I use other strategies for mining keywords (like the tools I mentioned above), and I keep SM active only for the countries where we have optimized metadata.
Of course, there is a lot more that could be said about Apple Search Ads, but the important takeaway is that the channel requires strategic planning. With the right targeting, ad groups, and campaign structure, it’s possible to reduce attribution discrepancies and keep your results organized enough to capture useful insight, improve your strategies, and drive high-value users to your app.
Read Thais Brizolera’s interview with the Mobile Masterminds community and find out more about the author.