Understanding Ad Platforms: A Practical Guide
Article 1: The Basic Mechanics - Campaign Structure and Launch
The Three-Level Hierarchy
Every major advertising platform, whether it's Google, Meta, or LinkedIn, organises campaigns using the same fundamental structure. At the top level you have the Campaign itself, which contains Ad Sets (or Ad Groups in Google's terminology), and each Ad Set contains multiple Individual Ads. This isn't an arbitrary arrangement dreamed up by product managers. Each level serves a distinct purpose in how the advertising system functions.
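The three levels map naturally onto a nested data structure. This is a hypothetical sketch for illustration only, not any platform's actual API; all field names here are invented:

```python
from dataclasses import dataclass, field

@dataclass
class Ad:
    headline: str
    creative_url: str      # image or video asset
    landing_page: str      # where people land after clicking

@dataclass
class AdSet:
    name: str
    targeting: dict        # demographics, interests, behaviours
    placements: list       # e.g. feed, stories, search results
    bid_strategy: str
    ads: list = field(default_factory=list)   # typically 3-5 variations

@dataclass
class Campaign:
    objective: str         # e.g. "conversions", "traffic"
    daily_budget: float
    ad_sets: list = field(default_factory=list)

# One campaign, one objective, multiple ad sets testing different audiences
campaign = Campaign(objective="conversions", daily_budget=100.0)
campaign.ad_sets.append(AdSet(name="marathon-runners",
                              targeting={"interests": ["marathons"]},
                              placements=["feed", "stories"],
                              bid_strategy="lowest_cost"))
campaign.ad_sets[0].ads.append(Ad(headline="Run farther",
                                  creative_url="https://example.com/shoe.jpg",
                                  landing_page="https://example.com/shoes"))
```

The objective and budget live on the campaign; targeting, placements, and bidding live on the ad set; the creative lives on the individual ad, mirroring the hierarchy described above.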
The Campaign level is where you establish your objective. This is what you're fundamentally trying to achieve with your advertising spend. You might want to drive traffic to your website, generate conversions like purchases or signups, build brand awareness through impressions and video views, or collect lead information directly. You also set your overall budget at this level, choosing between a daily budget where you spend up to a certain amount per day, or a lifetime budget where you allocate a total amount to be spent over the campaign's duration.
Why does the objective matter so much? Because it fundamentally changes how the platform's optimisation algorithm behaves. When you tell the platform you want conversions, it will actively seek out people who are likely to complete purchases or signups, even if those clicks cost more. If you say you want traffic, the platform will focus on getting you the cheapest clicks possible, regardless of whether those people are likely to convert. The platform takes your stated objective quite literally and will bend its entire optimisation process toward achieving exactly what you asked for.
The Ad Set level is where you define the specifics of who sees your ads and how the platform should compete for those impressions. Here you specify your targeting parameters, deciding who your audience is based on demographics, interests, and behaviours. You also choose where your ads will appear, whether that's Facebook's news feed, Instagram stories, Google search results, or any of dozens of other placement options. The schedule for when your ads run gets set here, along with your bid strategy, which determines how much you're willing to pay and how aggressively you want to compete in the auction system.
One campaign can contain multiple ad sets, and this is where strategic testing happens. Imagine you're selling running shoes. You might create one ad set targeting people interested in marathons, another targeting general fitness enthusiasts, and a third targeting people who previously visited your website but didn't make a purchase. All three ad sets share the same campaign objective (conversions) and the same ads, but you're testing which audience responds best. This separation allows you to compare performance across different audience segments without mixing up your data.
The Individual Ad level is where your creative lives. This is the actual image or video, the headline and descriptive text, the call-to-action button, and the URL where people land after clicking. Typically you'll run three to five variations of ads per ad set to test which creative approach performs best. Maybe one uses a product photo while another uses a lifestyle image. Perhaps one emphasises price while another emphasises quality. The platform will show all variations initially, then automatically allocate more budget to the ones that perform better.
The reason for this hierarchical structure becomes clear when you understand how optimisation works. The platform learns at the ad set level, gathering signals about which types of people respond to which types of ads. If you crammed all your targeting variations into a single ad set, you couldn't compare their performance. If you split every variation into separate campaigns, you'd fragment your data so much that learning would take forever. The three-level structure provides the right balance between testing flexibility and data concentration.
What You Control When Setting Up
When you sit down to create a campaign, you're essentially configuring a set of instructions and constraints for the platform's optimisation system. The first major decision is your budget and how you want it paced. With a daily budget, you might set $100 per day, and the platform will pace spending throughout each day. The platform doesn't necessarily spend exactly $100 every single day. It might spend $80 one day and $120 the next, using an average approach that gives it flexibility to spend more when opportunities are good and less when they're not.
A lifetime budget works differently. You might allocate $3,000 to be spent over 30 days, and the platform decides when to spend it. This approach allows the algorithm to spend more heavily on days when performance is strong and pull back on days when it's not. Once you know a campaign works, lifetime budgets generally produce better results because they give the platform more flexibility. For testing though, daily budgets are safer since you won't accidentally blow through money if something goes wrong.
There's an important nuance here about budget psychology. Platforms are built to spend your budget. If you set $100 per day but your targeting is extremely narrow, the platform will struggle to find enough qualified people to show ads to. Rather than underspend and leave money on the table, it might gradually lower its quality thresholds to hit your budget number. This means overly conservative budgets relative to your audience size can actually hurt performance quality.
The objective you choose shapes everything that follows. When you optimise for traffic, the platform's algorithm searches for people who have a history of clicking on things. These are the clickers of the internet, people with a propensity to engage with ads. You'll get cheap clicks, but many of these people may not have any real intent to purchase. They just like clicking. When you optimise for conversions, the platform shifts its focus entirely. Now it's looking for people who not only click, but who complete actions on websites. These people cost more per click, but they're far more valuable because they actually do what you want them to do.
This is where many advertisers trip up. They set up a traffic campaign because clicks are cheap, then complain that nobody's buying. But that's exactly what they asked for: cheap clicks from people who click things, not people who buy things. The platform delivered precisely what was requested. If you want sales, you must optimise for conversions, even though the initial cost per click will be higher. Over time, as the platform learns who your actual customers are, efficiency improves and costs come down.
Targeting is where you define your audience. At the most basic level, you can filter by demographics like age, gender, location, and language. These are the foundational filters that almost every advertiser uses. Beyond demographics, you can target based on interests. The platforms infer interests from the content people engage with, pages they like, and topics they follow. If someone regularly interacts with content about running, likes Nike's page, and reads articles about marathons, the platform categorises them as interested in running and marathon-related topics.
Behavioural targeting goes deeper, focusing on what people actually do rather than what they're interested in. This includes purchase behaviour, device usage patterns, and travel habits. You might target people who are frequent online shoppers, or people who primarily use mobile devices, or people who travel internationally for business. These behavioural signals often predict purchasing likelihood better than stated interests do.
The most powerful targeting option is custom audiences, where you upload your own data. This might be a list of email addresses or phone numbers of existing customers. You can also target people who visited your website, based on pixel tracking data. These warm audiences already know who you are, which makes them far more likely to convert than completely cold traffic.
Lookalike audiences take your custom audiences and scale them. The platform analyses the characteristics of your existing customers, identifying patterns in their demographics, interests, behaviours, and ad engagement history. Then it searches its entire user base for people who match those patterns, ranking them by similarity. The top 1% most similar people become your lookalike audience. This allows you to find new potential customers who resemble your best existing customers.
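Conceptually, lookalike construction ranks every user by similarity to a seed audience and keeps the top slice. Here's a toy sketch using cosine similarity over made-up feature vectors; real platforms use far richer models and signals:

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def lookalike(seed_profiles, user_base, top_fraction=0.01):
    """Rank every user by similarity to the seed audience's average
    profile and keep the most similar slice (top 1% by default)."""
    n = len(seed_profiles)
    # Average the seed customers into a single centroid profile
    centroid = [sum(col) / n for col in zip(*seed_profiles)]
    ranked = sorted(user_base.items(),
                    key=lambda kv: cosine(kv[1], centroid),
                    reverse=True)
    k = max(1, int(len(ranked) * top_fraction))
    return [user_id for user_id, _ in ranked[:k]]
```

The key idea is that you supply only the seed (your customer list); the platform does the ranking across its whole user base.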
There's an interesting paradox in targeting. You might think narrower targeting is always better because it's more relevant. But narrower targeting means smaller audiences, which means slower learning for the optimisation algorithm. The platform needs volume to learn effectively. Modern best practice has actually shifted toward broader targeting. Rather than trying to manually specify exactly who your customer is, you give the platform a wide starting point and let its algorithm figure out through actual conversion data who responds best. The machine is often better at finding your customer than your intuition is.
Your creative is what users actually see. The visual element might be a static image, a carousel of multiple images, or a video. While production quality matters, authenticity often beats polish in actual performance. User-generated content, like a customer's photo using your product, frequently outperforms professionally shot studio photos. There's a realness to it that makes people stop scrolling.
The copy consists of a headline, usually five to seven words that communicate your core value proposition, and body text of one to two sentences that expand on the benefit. The key insight here is that people scan, they don't read. Your ad is competing with hundreds of other pieces of content in someone's feed. You have maybe half a second to catch attention and communicate value. Simplicity wins.
The call-to-action button might seem like a minor detail, but it's surprisingly important. "Shop Now" versus "Learn More" can produce meaningfully different conversion rates. "Shop Now" signals transactional intent, filtering for people ready to buy. "Learn More" is softer, attracting people earlier in their decision process. Which one works better depends on your offer and where customers are in their journey.
Your landing page, where people arrive after clicking, must match the promise your ad made. If your ad shows a specific product on sale, but clicking leads to your generic homepage, most people will leave immediately. Load speed is critical. A page that takes four seconds to load on mobile will lose more than half of potential customers who bounce before it even renders. Mobile optimisation isn't optional anymore; it's the default experience for most traffic.
Bid strategy determines how much you're willing to pay and how the platform should compete in auctions on your behalf. The "Lowest Cost" option, which used to be called automatic bidding, is where the platform manages everything. You set your budget and objective, and the algorithm figures out the optimal bidding to spend your budget getting maximum results. For most advertisers, especially those just starting out, this is the right choice.
Cost Cap bidding adds a constraint where you tell the platform to get you results but try to keep the cost per result below a certain threshold. You might say "get me conversions, but don't pay more than $50 each." This is useful if you know your unit economics and have a hard ceiling on what you can afford. The risk is setting the cap too low, which might cause the platform to under-deliver because it can't find enough inventory at your price point.
Bid Cap is the most advanced option, where you set the maximum bid for individual auctions. This gives you precise control but is easy to mess up. Set your cap too low and you'll get little to no delivery. Most advertisers should avoid this until they have deep experience with how the auction dynamics work.
Target Cost tries to maintain a stable average cost per result around a target you specify. The platform will accept some higher-cost conversions and some lower-cost ones, balancing them to hit your target average. This produces more predictable spending than Lowest Cost but may sacrifice some efficiency.
What Happens After Launch
When you finally hit the launch button, you're not immediately in full optimisation mode. The platform enters a learning phase where it needs to gather data before it can optimise effectively. During the first few days, it's in exploration mode. Your ad gets shown to a broad sample of people within your targeting parameters. The platform is testing different audience segments within that broader group, experimenting with different times of day, trying different placements, essentially running a series of micro-experiments to see what works.
By days four through seven, initial optimisation begins. The platform starts to favour segments that showed better performance in the exploration phase. It's still collecting data and the results will be inconsistent, but patterns are beginning to emerge. You might see great performance one day and weak performance the next. This is normal. The algorithm is still calibrating.
After two weeks or so, you enter optimisation mode. The platform has found your best audiences, understands which creative performs well with which segments, and delivery becomes more efficient. Costs should stabilise and ideally improve. This entire learning process requires volume. The general rule is that you need about 50 conversions per week at the ad set level to exit the learning phase and reach stable optimisation. If you're only getting five conversions per week, learning will take much longer and performance will remain inconsistent.
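The 50-conversions-per-week rule of thumb translates into a rough estimate of how much longer the learning phase will drag on at lower volumes. This is a crude heuristic, not an actual platform formula:

```python
def weeks_to_learn(weekly_conversions, weekly_threshold=50):
    """Rough multiple of the recommended signal volume: at the
    50-per-week threshold, learning stabilises in about a week;
    below it, expect the learning phase to stretch proportionally."""
    if weekly_conversions <= 0:
        return float("inf")   # no signal, no learning
    return weekly_threshold / weekly_conversions

print(weeks_to_learn(50))   # 1.0
print(weeks_to_learn(5))    # 10.0
```

At five conversions per week, the estimate is ten weeks of inconsistent performance before the algorithm has seen the same volume of signal it would get in one week at the recommended rate.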
This is why the advice to not judge campaign performance on day one is so critical. The platform doesn't know anything yet. Making major changes in the first week restarts the learning process. You essentially throw away the data you just paid to collect and start over. Patience in the learning phase is genuinely difficult but necessary for the system to work.
Every time someone could potentially see an ad, an auction happens. This occurs billions of times per day across these platforms. When someone opens their Facebook app or performs a Google search or browses a website, an auction fires. It happens in milliseconds. The platform evaluates which ads are eligible based on targeting, which advertisers want to show an ad at this moment (all of them), what the expected value is to each party based on their bid and quality score, and then selects a winner. You're not manually bidding on each of these individual impression opportunities. The platform auto-bids on your behalf, trying to achieve your objective within your budget constraints.
The platform doesn't spend your budget evenly across time. If you set a daily budget of $100, it doesn't spend $4.17 per hour. Instead, it uses smart pacing. It might spend more during hours when conversion rates are historically high for your ads. It slows down if it's burning through budget too fast relative to performance. It accelerates if it's underspending and there are good opportunities. You might see $110 spent today and $90 tomorrow, which averages out over time. This is why if your ads suddenly stop showing, it might be budget exhaustion rather than poor performance. The platform spent your daily allocation.
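Smart pacing can be sketched as weighting hourly spend by expected quality rather than splitting the budget flat. This is a toy model of the idea, not the platform's actual pacing logic; the quality scores are invented:

```python
def hourly_allocation(daily_budget, hourly_quality):
    """Split a daily budget across hours in proportion to a quality
    score per hour (e.g. historical conversion rate), instead of
    spending a flat daily_budget / 24 every hour."""
    total = sum(hourly_quality)
    return [daily_budget * q / total for q in hourly_quality]

# Flat pacing on $100/day would be roughly $4.17 every hour; weighted
# pacing skews spend toward the hours that historically convert.
quality = [1] * 8 + [3] * 8 + [2] * 8   # quiet night, strong afternoon, decent evening
plan = hourly_allocation(100.0, quality)
```

The total still sums to the daily budget; only the distribution across hours changes, which is why a campaign can look "paused" late in the day when the allocation is simply exhausted.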
When deciding whether to show your ad to a particular user, the platform weighs multiple factors. Your bid matters, but it's not the only thing. Your ad quality and relevance to that specific user matter significantly. The user's likelihood to engage based on their history matters. Frequency caps matter: the platform won't show someone the same ad ten times in a day. Ad fatigue matters: if someone has seen your ad repeatedly and never engaged, the platform will stop showing it to them. You have limited control over these moment-to-moment decisions. This is the platform's black box, and it's largely opaque to advertisers.
When someone clicks your ad, performance data immediately flows back to the platform. The click itself is recorded. A pixel on your website fires, which we'll cover in detail in the next article. If the person converts by making a purchase or signing up, that signal goes back to the platform. The platform updates its model with this new information. It learns that ads like yours work for people like that. This creates a feedback loop where more data leads to better targeting, which leads to better performance, which generates more conversions, which provides more data. This is why established campaigns beat new ones. They've accumulated learning that new campaigns lack.
Understanding the Metrics
After launch, you're looking at a dashboard full of numbers. Impressions tell you how many times your ad was shown. This is the gross reach number. If impressions are going up, you're getting exposure. If impressions are very high but clicks are very low, you might be experiencing ad fatigue where people have seen your ad too many times and are ignoring it.
Reach measures how many unique people saw your ad, which is different from impressions. One person can see your ad multiple times. If you have a reach of 10,000 people but 50,000 impressions, that's an average frequency of five impressions per person. Frequency is important to track. Too low (under two impressions per person) and people probably don't remember seeing your ad; there isn't enough repetition for the message to stick. Too high (over five impressions per person) and you're likely annoying people, with ad fatigue setting in. The sweet spot for most campaigns is a frequency of two to four.
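The relationship between impressions, reach, and frequency is a one-line calculation, and the thresholds below are the article's rules of thumb:

```python
def frequency(impressions, reach):
    """Average number of times each unique person saw the ad."""
    return impressions / reach

def frequency_check(impressions, reach, low=2.0, high=5.0):
    """Flag frequencies outside the rough 2-4 sweet spot."""
    f = frequency(impressions, reach)
    if f < low:
        return f, "too low: not enough repetition for the message to stick"
    if f > high:
        return f, "too high: ad fatigue is likely setting in"
    return f, "within the sweet spot"

# The article's example: 50,000 impressions across 10,000 unique people
print(frequency(50_000, 10_000))   # 5.0
```

Watching frequency trend upward over a campaign's life is one of the simplest early warnings that your audience is saturating and fresh creative is due.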
Clicks measure how many people clicked your ad. This is your basic engagement signal. Not all clicks are created equal though. Accidental clicks happen, especially on mobile. People trying to scroll past your ad might accidentally tap it. The platform counts these as clicks and you pay for them, even though they had no real intent.
Click-through rate, calculated as clicks divided by impressions, shows how relevant your ad is to the audience seeing it. An average CTR across most industries is around 1%. If you're seeing less than 0.5%, you likely have weak creative or you're showing ads to the wrong audience. If you're above 3%, you have very strong resonance between your message and your audience. CTR is one of the earliest signals of whether your campaign has product-market fit in its messaging.
Cost per click tells you what you paid per click on average. This varies wildly by industry, from $0.50 in some categories to $5 or more in competitive spaces like insurance or finance. What matters more than the absolute number is the trend over time. If your CPC is declining, that's the optimisation working. The platform is getting better at finding people likely to click at lower costs.
Conversions are people who completed your defined objective. Purchases, signups, downloads, whatever action you set as your conversion event. This is the only metric that truly matters for ROI. You can have amazing impressions and clicks, but if nobody converts, the campaign isn't working.
Conversion rate, calculated as conversions divided by clicks, shows how effective your landing page and offer are. A typical e-commerce conversion rate is somewhere between 2% and 5%. If you're seeing less than 1%, you probably have a landing page problem rather than an ad problem. The ad got people interested enough to click, but something on your site failed to close the deal.
Cost per acquisition tells you what you paid per conversion on average. This must be less than your profit per customer for the advertising to be worthwhile. If it costs you $50 to acquire a customer who generates $40 in profit, you're losing money. CPA is your ultimate success metric on the cost side. You want to see this declining over time as optimisation improves.
Return on ad spend is calculated as revenue divided by ad spend. If you spent $1,000 on ads and generated $3,000 in revenue, your ROAS is 3x. But you need to factor in your product costs. If your products cost $1,500 to produce and deliver, your actual profit is only $500, not $3,000. A common mistake is celebrating a 3x ROAS when your margins mean you need 4x to break even. Most businesses need ROAS of 2-3x just to break even and 4x or higher to be genuinely profitable.
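The funnel metrics above all derive from a handful of raw numbers, and working through the article's ROAS example shows why margin matters. The impression, click, and conversion counts here are hypothetical filler values:

```python
def campaign_metrics(spend, impressions, clicks, conversions, revenue, cogs):
    """Derive the standard funnel metrics from raw campaign numbers."""
    margin = (revenue - cogs) / revenue        # gross margin on what you sold
    return {
        "ctr": clicks / impressions,           # click-through rate
        "cpc": spend / clicks,                 # cost per click
        "cvr": conversions / clicks,           # conversion rate
        "cpa": spend / conversions,            # cost per acquisition
        "roas": revenue / spend,               # return on ad spend
        "profit": revenue - cogs - spend,      # what you actually keep
        "breakeven_roas": 1 / margin,          # ROAS needed just to break even
    }

# The article's example: $1,000 spend, $3,000 revenue, $1,500 product costs
m = campaign_metrics(spend=1_000, impressions=100_000, clicks=1_000,
                     conversions=30, revenue=3_000, cogs=1_500)
# ROAS is 3.0x, but with a 50% margin the break-even ROAS is 2.0x,
# so the real profit is $500, not $3,000.
```

The break-even ROAS is simply the reciprocal of gross margin: at a 50% margin you need 2x, at a 25% margin you need 4x, which is why celebrating a 3x ROAS can still mean losing money.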
Looking at these metrics together tells you where problems are. High impressions but low clicks means wrong audience or weak creative. High clicks but low conversions means a landing page problem. High CPA means you need either better targeting or better creative to improve efficiency. Declining CTR over time is a classic sign of ad fatigue, signalling you need new creative.
How Platform Optimisation Works
You don't manually adjust bids or targeting in real-time. The algorithm handles this automatically. Here's what's happening behind the scenes. Imagine you launched a campaign targeting men aged 25 to 45 interested in fitness. In week one, the platform shows your ads to a broad sample across that entire range. As data accumulates, it learns that men aged 25 to 30 convert at a much higher rate than men aged 40 to 45. By week two, the platform automatically shifts delivery. It spends more budget showing ads to the 25 to 30 segment and reduces spend on the 40 to 45 segment. It doesn't turn off the older segment completely, it keeps testing it in case patterns change, but budget allocation shifts meaningfully.
By week four, refinement goes deeper. Within that 25 to 30 age range, the platform notices that afternoon traffic converts better than morning traffic. Certain geographic areas perform better than others. Mobile users convert at a higher rate than desktop users. The platform adjusts bidding and delivery to favour these high-performing micro-segments. Your targeting settings never changed. You still have it set to men 25 to 45 interested in fitness. But the platform found your best audience within those parameters and automatically optimised toward it.
The same thing happens with creative. You launched five ad variations. During days one and two, the platform shows all five roughly equally to measure response. By days three through seven, it's allocating more budget to the top two performers while still showing the others at reduced levels. By week two, your top-performing ad might be getting 60% of the budget while the other four get 10% each. Eventually one ad dominates, receiving 90% of delivery while the others remain active as backups in case the winner starts to fatigue.
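The shift from equal delivery to a dominant winner resembles a simple explore/exploit split. This is an illustrative sketch, not how any platform actually allocates, and the 10% exploration share is an assumption:

```python
def allocate_budget(budget, conversion_rates, explore=0.10):
    """Give each ad an equal slice of a small exploration budget,
    then put the rest behind the best observed performer so losing
    variations stay live as backups."""
    n = len(conversion_rates)
    explore_share = budget * explore / n
    shares = [explore_share] * n
    best = conversion_rates.index(max(conversion_rates))
    shares[best] += budget * (1 - explore)
    return shares

# Five ads: the winner ends up with 92% of a $100 daily budget while
# the other four keep small exploratory spend.
shares = allocate_budget(100.0, [0.010, 0.012, 0.031, 0.008, 0.015])
```

Keeping a small exploration slice is what lets the platform notice when the current winner starts to fatigue and a backup begins outperforming it.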
This is exactly why you need multiple ad variations. You let the platform find the winner through actual performance data. Your intuition about which creative will work best is usually wrong. The audience tells you what works through their behaviour, and the platform translates that behaviour into budget allocation.
With bid optimisation, assuming you chose "Lowest Cost" bidding, the platform's goal is to spend your entire budget while maximising conversions. It adjusts bids dynamically based on what it's learning. It bids higher during proven high-conversion time periods. It bids lower during hours when traffic quality is poor. It increases bids if you're underspending relative to your budget. It decreases bids if conversions are too expensive relative to what it thinks is achievable. You never see individual bid amounts. You just see your average cost per acquisition declining over time as the algorithm learns.
Placement optimisation works similarly. You might allow your ads to run on Facebook feed, Instagram feed, Instagram stories, and the audience network. After a few weeks of data collection, the platform discovers that Instagram stories drive conversions at three times the rate of Facebook feed. Facebook feed gets cheaper clicks but lower conversion rates. The audience network performs terribly with very few conversions. By week three, about 80% of your budget is going to Instagram stories. You could manually turn off the audience network, but the platform has already de-prioritised it so heavily that it's barely spending there anyway.
The Learning Feedback Loop
This is the fundamental mechanism that makes digital advertising work. The platform shows your ad to someone. That person either clicks or doesn't. If they click, they either convert or don't. That signal, click or no click, conversion or no conversion, flows back to the platform's optimisation system. The platform learns from this signal and adjusts its next round of delivery decisions. Better targeting leads to more conversions, which generates more learning data, which enables even better targeting. It's a compounding loop.
This takes time to build momentum. On day one, the platform has no data and is essentially guessing. By day seven, some basic patterns are emerging from the data. By day 30, the platform has rich data showing clear patterns and can optimise with confidence. By day 90, the platform knows your customer profile intimately, understanding subtle patterns about who converts and who doesn't. This accumulated learning is why a 90-day-old campaign will almost always outperform a brand new campaign with identical settings.
The trap that kills many campaigns is making changes too early. You spend $500 over three days, see poor performance, panic, and change your targeting. What you just did is throw away the $500 worth of learning data. The platform was building its model of who your customers are, and you reset it back to zero. Now it has to start learning all over again.
When to Intervene vs Let It Learn
This is genuinely one of the hardest judgment calls in running campaigns. You should let the platform learn if your campaign is less than seven days old, you're seeing gradual improvement in metrics even if absolute performance isn't great yet, your daily spend is relatively consistent with your budget, and you're getting at least some conversions even if the cost is high.
You should consider making changes if you have zero conversions after spending twice your target cost per acquisition, your click-through rate is below 0.3% after three days indicating a creative problem, your cost per click is three times or more above your industry average suggesting a targeting problem, or your daily spend is way under budget meaning your targeting is too narrow.
When you do need to make changes, be surgical about what you change. If you have poor CTR, the problem is creative. You need new images, videos, or copy. If you have good CTR but poor conversions, the problem is likely your landing page, not the ad platform at all. The ad successfully got people interested, but your website failed to close the deal. If you're underspending, you need broader targeting or a higher budget to give the platform more room to find your audience. If you have high cost per acquisition, you might need tighter targeting or a different campaign objective that optimises for higher-quality traffic.
The golden rule is to change one thing at a time. If you simultaneously change your creative and your targeting, you won't know which change fixed or broke performance. Isolate variables. Make one change, let it run for at least five to seven days to gather statistically significant data, evaluate results, then make another change if needed.
The Cold Start Challenge
Your first campaign is the hardest. The platform knows nothing about who your customers are. This manifests in several painful ways. Week one will be expensive as the platform learns on your dime. You need sufficient budget to generate meaningful data, typically at least $500 to $1,000 to get through the learning phase. You need enough conversion volume: if you're only getting two conversions per week, it will take months to learn effectively. Most importantly, you need patience. Don't expect profitability immediately. The first few weeks are an investment in building the optimisation model.
There are strategies to reduce cold start pain. You can start with retargeting campaigns aimed at people who already visited your website. These are warmer leads who know who you are, so conversion rates will be higher. If you have existing customer data, upload email addresses or phone numbers to create a custom audience, then build a lookalike audience from that. The platform starts with a hypothesis about who your customers are rather than starting from zero. Some advertisers run traffic campaigns first just to build pixel data and website custom audiences before shifting to conversion campaigns. This provides the platform with behavioural data it can use.
Another approach is to syndicate your conversion tracking across multiple platforms even if you're only advertising on one initially. Set up the Meta pixel, Google tracking, and LinkedIn insight tag on your website. When conversions happen, all three platforms receive the signal and begin building their models of who your customers are. Later, when you decide to advertise on a second platform, it already has months of conversion data to work with rather than starting cold.