So… We’re Talking About Birth Control Now. Thanks, YouTube.
An advertiser survey that left me puzzled. And uncomfortable.
Note: This newsletter discusses a survey question related to birth control options. While the content isn’t explicit, it touches on personal health choices.
Have you ever been in the middle of a YouTube video when a survey pops up? Honestly? I’m immediately intrigued. Because, y’all—someone is paying good money to get that info from you.
Most of the time, these surveys are laughably generic and completely forgettable. But not this week. This week’s question left me almost too stunned to speak.
Here’s the question that appeared:
Which of these products would you consider choosing?
Prescription Birth Control
Opill
Condoms
Oral Contraception
None of the above
My jaw about hit the floor. An advertiser wants to ask me about my sex life?
I quickly took a screengrab, thinking, I’ll never have the gall to do a survey makeover for something this controversial (y’all, I was raised in a pretty religious household). But here we are, because I cannot stop thinking about how truly WILD this question was.
So… What Is This Question EVEN Trying to Do?
Setting aside the initial shock factor, I’m incredibly curious how this data is being used. Because from a strictly logistical perspective, this question is a nightmare.
Let’s walk it through in slo-mo.
The Problems With This Question
Overlapping Answer Choices
“Prescription birth control” vs. “oral contraception” → but oral contraception is a form of prescription birth control (Opill being the one over-the-counter exception), so these two choices overlap.
Opill (which I had to Google) is an oral contraceptive that’s now available over the counter, so technically it checks two of these boxes at once. And it used to be prescription-only, which could make this even more confusing.
Because the response categories overlap, this data could overcount or undercount how many people actually consider certain methods. And if you can’t trust the numbers, why run the survey in the first place?
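To make that concrete, here’s a minimal sketch in Python with entirely made-up numbers (we obviously don’t have the advertiser’s data). Suppose 60% of respondents would consider “the pill” in some form; with overlapping answer choices, that one preference gets split across three buckets:

```python
import random
from collections import Counter

random.seed(0)  # reproducible fake data

# Made-up assumption: 60% of respondents would consider "the pill."
# With overlapping answer choices, each of them lands in one of THREE
# buckets more or less at random.
pill_buckets = ["Prescription Birth Control", "Opill", "Oral Contraception"]

answers = []
for _ in range(1000):
    if random.random() < 0.60:   # a pill considerer
        answers.append(random.choice(pill_buckets))
    else:                        # everyone else
        answers.append(random.choice(["Condoms", "None of the above"]))

for choice, n in Counter(answers).most_common():
    print(f"{choice}: {n / 10:.1f}%")

# Each pill bucket lands near 20%, so "the pill" looks like just
# another option even though 60% of this fake sample would consider it.
```

Run it and each pill bucket hovers around 20%, so the pill appears to merely tie with condoms instead of dominating. Same respondents, same preferences, broken buckets.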
Oddly Selective Choices
Condoms are fundamentally different from the others (a barrier method rather than a hormonal one), which makes me wonder: are they asking about preference, accessibility, familiarity, or something else?
Where are IUDs? The patch? The shot? There are plenty of other common birth control methods that aren’t listed. The choices seem weirdly specific, which makes me think: Why these, specifically?
This isn’t just about missing options—it’s about how that shapes the data. If a respondent’s actual preferred method isn’t listed, what do they do? Do they select something they might consider but don’t actually use? Do they pick “None of the above” even though they do use birth control? The lack of clarity here means the results won’t reflect real preferences accurately. And if the intent was to measure market share, the missing choices make it impossible to get a complete picture.
“None of the Above” Is a Black Hole
This answer choice could mean:
I’d use birth control, but not these.
I wouldn’t use birth control.
Stop asking me about my sex life, YouTube.
I’m not sexually active, so this question doesn’t apply to me.
That’s a huge problem: you’re lumping together people with completely different reasons for choosing this option, which makes the data impossible to interpret. If you later see that 35% of people chose “None of the above,” what does that tell you? Nothing concrete. Are they rejecting the listed options? Do they just not want to answer? Are they not in the market for birth control at all? Without a way to differentiate those responses, this answer choice becomes meaningless.
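To put numbers on that (invented numbers, since the real breakdown is exactly what this question fails to capture), here are two very different fake populations that produce the identical topline:

```python
# Two invented populations of 1,000 respondents each. In BOTH cases,
# exactly 350 people (35%) chose "None of the above."
scenario_a = {"prefers an unlisted method": 300,
              "doesn't use birth control": 30,
              "didn't want to answer": 20}
scenario_b = {"prefers an unlisted method": 30,
              "doesn't use birth control": 300,
              "didn't want to answer": 20}

for name, mix in [("A", scenario_a), ("B", scenario_b)]:
    share = sum(mix.values()) / 1000
    print(f"Scenario {name}: {share:.0%} chose 'None of the above' -> {mix}")

# Scenario A is a market full of people an advertiser could win over
# with a different product; Scenario B is a market that mostly isn't
# buying at all. The survey reports the same 35% either way.
```

An advertiser should react completely differently to those two worlds, and this question can’t tell them which one they’re in.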
Who’s Behind This? And Why?
The more I thought about the question, the more questions I had:
Who’s paying to ask this?
What in the heck are they trying to learn?
Since Opill is the only brand name listed, it could be their ad. But there’s no way to know for sure. It could also be:
A competitor wanting to see how Opill stacks up against other methods, though the question is so vague that it wouldn’t tell them anything useful.
A research institution or health organization studying birth control trends, though this question lacks the nuance needed for meaningful insights. If they’re trying to understand which options people consider, why limit the choices? If they’re studying awareness, why not ask directly? The way the question is structured makes it difficult to extract meaningful data, even for a well-intentioned study.
A political group trying to influence the narrative around birth control, either by inflating Opill’s popularity or by suggesting people don’t use it.
That last one is an important possibility. People often assume surveys are neutral, but they’re not. They can certainly reflect the intent of whoever is funding them. And when the question itself is unclear, the data can be spun in different ways.
Here’s the thing—no matter who is asking or why, they’re not getting good data.
If This Is Market Research… It’s Bad Market Research
Let’s say Opill is running this survey to gauge awareness of their product. This is not the question to ask.
I have to chuckle when I imagine the moment this question got approved in a meeting. Like, someone—maybe multiple people—looked at this and said, Yes. Let’s spend money on this exact wording.
They might be hoping to say something like, “XX% of respondents prefer Opill over prescription birth control options.” But because Opill is an oral contraceptive, and because people may not realize it’s now available over the counter, the responses are going to be muddied. You have overlapping categories, a lack of clarity about whether people know Opill is different from prescription options, and no way to untangle those who recognize the brand from those who don’t. If the goal is to gauge brand awareness or product preference, this question doesn’t actually deliver useful data.
The Screener Issue
I assume I got this question because I’m a woman over 18, but I don’t actually know what targeting criteria they used. Even if advertisers use demographic targeting, that doesn’t mean the right people are seeing the question. Screening questions help ensure the data collected is actually relevant and insightful.
This question still assumes:
The respondent is sexually active.
The respondent has considered birth control.
The listed methods are the only ones worth asking about.
If that context is important to the survey, it should be screened for explicitly.
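For the curious, here’s roughly what that looks like as logic. This is a hypothetical sketch in Python (real survey platforms express this as branching rules, and the question wording here is mine, not theirs):

```python
def run_survey(ask):
    """Ask questions via `ask(question) -> answer`; return product data or None."""
    # Hypothetical screener: only people for whom the topic is relevant
    # ever see the product question.
    screened_in = ask(
        "In the past 12 months, have you used or considered using birth control? (yes/no)"
    ) == "yes"
    if not screened_in:
        return None  # screened out: no forced "None of the above" noise

    return ask(
        "Which of these methods would you consider? "
        "(Pill / Condoms / IUD / Patch / Shot / Other / Prefer not to say)"
    )

# A respondent who answers "no" contributes no product data at all,
# instead of being shoehorned into an ambiguous answer choice.
print(run_survey(lambda question: "no"))  # -> None
```

Notice the “Prefer not to say” option, too: that separates discomfort from genuine non-use, which matters for a question this personal.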
The Sensitivity Factor: Is YouTube Even the Right Place to Ask This?
This isn’t just a confusing question—it’s a deeply personal one.
In today’s political climate, where birth control access and women’s rights are constantly under attack, people might hesitate to answer honestly—or at all. Anonymity matters here. When questions touch on personal health choices, people need to know if their responses are anonymous and how their data will be used. Without that clarity, they may avoid answering—or worse, give misleading responses. That’s basic ethics, especially when discussing reproductive choices.
Candidly, YouTube feels like a weird place for this kind of survey. Unlike a private medical setting, there’s no trust, no context, and no assurance that the data is being handled responsibly. A random ad survey about birth control? Might not be the best move. Wouldn’t this data be more reliable if collected through Planned Parenthood, university health systems, or medical researchers?
The only way to opt out is to skip the survey entirely—but there’s no option to say, “I don’t feel comfortable answering.” And that response alone could be meaningful data. If a significant number of people feel uneasy answering, that’s a sign the question itself may not be appropriate for this setting.
How to Actually Get Useful Data
To me, this isn’t just about one bad survey question.
This is a perfect example of why survey goals matter. When designing any survey (and, let’s be real, especially if you’re investing in a paid survey opportunity), you need to start with the end in mind.
What do you want to be able to say when this survey is over?
What decision do you need to make?
What story are you trying to tell?
Your goal needs to translate into clear, focused questions that actually give you the data to support your message or decision.
Every question you ask should lead to data that helps you confidently say, “Here’s what we learned.”
Pro tip: Before launching your survey, write out the exact fill-in-the-blank sentence you want to be able to say with your data.
“__% of people surveyed said they strongly agree that ___.”
72% of respondents said they strongly agree that working from home has improved their quality of life.
“When considering _____, the top three most popular ____ were ____, ____, and ___.”
When considering streaming services, the top three most popular were Netflix, Hulu, and Disney+.
“On average, people reported doing ___, ___ times per [week/month/year].”
On average, people reported going to a coffee shop 3 times per week.
“Among those who use ___, the most common reason cited was ___.”
Among those who use a fitness tracker, the most common reason cited was tracking their steps.
“People who ____ were __% more likely to ______.”
People who exercise daily were 40% more likely to report high energy levels.
“__% of people said yes when asked if they ___.”
82% of people said yes when asked if they would be interested in working with us again.
If your question doesn’t give you data that lets you fill in that sentence in a meaningful way, it needs refining.
If, at the end of your survey, you’re left staring at results that don’t actually tell you anything useful, you’ve wasted time, money, and effort.
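And once the audience is screened and the choices are mutually exclusive, the fill-in-the-blank sentence falls straight out of the data. A toy sketch in Python with fabricated answers:

```python
from collections import Counter

# Fabricated single-select answers from screened-in respondents only.
answers = ["The pill", "Condoms", "IUD", "The pill", "Other",
           "The pill", "Prefer not to say", "Condoms", "IUD", "The pill"]

top_choice, n = Counter(answers).most_common(1)[0]
print(f"{n / len(answers):.0%} of respondents who use or would consider "
      f"birth control said they would consider {top_choice.lower()}.")
# -> 40% of respondents who use or would consider birth control said
#    they would consider the pill.
```

No overlapping buckets, no black-hole answer choice, and the sentence writes itself.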
(And hey, I’m always open to the wonderful surprise of someone from this survey reaching out and saying, Actually, we were trying to learn this incredibly specific thing that required us to ask about contraception in exactly this way. If that’s the case, please tell me. I’d love to know what I’m missing.)
How This Survey Could Have Been Better
If the goal was brand awareness, they could have asked:
“Have you heard of Opill?”
“Did you know Opill is now available over the counter?”
“Which of the following birth control methods have you heard of?”
If the goal was understanding preferences, they could have asked:
“Which birth control methods have you used in the past 12 months?”
“Which birth control methods have you seriously considered using in the past 12 months?”
“What birth control method do you trust the most?”
“Over the past month, which birth control method did you use the most?”
If the goal was evaluating decision factors, they could have asked:
“What factors are most important to you when choosing a birth control method?”
“What, if anything, is the biggest challenge to accessing your preferred birth control method?”
See how these actually get at something?