Last week, one of my videos about the ChatGPT 5 update went unexpectedly viral on Instagram. It racked up 25,000 views in a matter of days. I had to turn off comments because so many people wanted to tell me the same thing: AI is the future, you’re an idiot, and you — a human with a Ph.D. — are about to be replaced.
(And, y’all…that’s a polite version of the comments).
If you’ve spent any time online lately, you’ve probably seen the claim that ChatGPT is now like having a Ph.D. in your pocket, or that the next era of AI is models that don’t just respond to prompts but do things for you. The vision is this: you set a goal, and the AI takes it from there. You don’t have to do much, and I get why that’s exciting—people want to take things off their plates, they want to move faster.
I can’t help but imagine the survey-design version of this: You log into a platform, type in a general idea of what you hope to learn from your survey, click a button, and AI instantly spits out a polished survey. SurveyMonkey is already doing this! You could take it a step further and imagine a platform that then goes and collects synthetic responses, analyzes them, and delivers a tidy set of findings back to you in seconds. (We’re probably not at that stage yet, but I don’t think it’s unrealistic to say that’s where we’re headed.)
The Value Is in the Process
What the people who jumped in my comments don’t know is that I’ve spent the better part of this year building a platform that could, in theory, replace much of what I do. I’m not afraid of being replaced, and I think AI has the potential to make research and survey design way better. There are lots of things in these fields that can be automated or streamlined, and probably should be.
At the same time, I think there’s a big difference between using AI to support your thinking and using it to replace your thinking altogether. There is value in the process. In fact, I’d argue that what makes for good research and good surveys is the process.
After over a decade of designing research and evaluations with organizations, what I’ve learned is this: The more people are engaged in setting the vision and shaping the questions, the better the data is and the more likely they are to actually use it.
When people have a hand in deciding what they want to learn and thinking through the best ways to collect information to help them learn it, and when the data is treated like something valuable—something that takes time and effort, not just an obligation to check off—the results are better. The process matters:
Narrowing your goals or research questions forces clarity about what’s actually important.
Choosing who to ask forces you to think about whose voices matter and whose stories you’re going to elevate.
Deciding how to ask your questions pushes you to align your methods with your goals, all while putting yourself in the shoes of people answering your questions.
These aren’t steps to just get through or automate. They are where the learning happens. When we bypass these steps entirely, when we move to data on demand, we cheapen the work.
Research isn’t a vending machine. “Insert the prompt, click a few buttons, and receive the answer.” And yet, there seems to be a growing number of people thinking it either is—or should be—exactly that.
The Myth of the “Ph.D. in Your Pocket”
I think about the deep research feature that OpenAI rolled out many months ago, and that I previously critiqued. Not only does it get things wrong, but it deprives you of learning. You assume that it knows best, that it knows where to look for sources, and that it knows better than you what is credible and correct.
What people are forgetting is an important point: large language models are exactly that—large language models. They’re generalists by design, built to predict the next likely word across any topic. Without fine-tuning or specialized retrieval, they’re not reliable deep experts — they mimic expertise by predicting what an expert might say. That’s why the “Ph.D. researcher in your pocket” line floating around feels like marketing spin.
The “Ph.D. in your pocket” idea works because LLMs can sound like experts, especially in areas with clear rules or abundant public data. But sounding like an expert isn’t the same as being one — and in research, it’s the judgment, the nuance, the “what’s worth asking” part that a generalist can’t reliably replicate.
A Ph.D. is a specialist who has spent years developing deep expertise in one specific area, and that expertise is exactly what you don’t get from a general-purpose model. Some fields seem to “luck out,” in the sense that the training data appears to be more or less accurate. But that’s not the case in many fields, and it’s certainly not the case in research, because the truth is rarely simple; even our best practices are hotly debated.
For example, if you’ve been reading me for a while, you know I think neutral response categories on surveys are a bad idea about 95% of the time. Some researchers love a midpoint. What should AI recommend? In studies across fields, you’ll often find conflicting results—do you report the one that’s most positive, the one that’s most negative, or the one that’s most recent? Who decides what’s true?
I worry that in this scenario, we’re saying let the algorithm do it.
The truth is complicated, and it takes expertise, context, and judgment to navigate. This is one of the reasons platforms struggle to reliably flag misinformation—some cases are obvious falsehoods, others involve conflicting but credible evidence, and in still others, the truth is simply drowned out by a deluge of loud, not-very-credible evidence.
What should we outsource to AI?
When I think about the future of survey design, I don’t picture a world where you barely touch the process. I picture a world where AI handles the repetitive tasks so that you can focus on the parts that make the work meaningful.
I think there’s good agentic AI and bad agentic AI. The question isn’t whether to use AI in research; it’s where and how.
Good agentic AI supports your thinking without replacing it. It parses a survey you’ve already written and loads it into the platform for you. It reviews your questions against best practices and flags where you might introduce confusion. It formats your survey for accessibility, helps you figure out how to set up your survey logic, cleans up typos, and tests different survey paths — tedious but necessary steps.
Bad agentic AI takes over the most human parts of research. It generates your survey questions from scratch, decides what you should ask based on generic assumptions, collects synthetic responses and calls it data, and delivers insights without an opportunity for you to really dig in and wrestle with the data.
One makes you sharper; the other makes you passive.
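To make the “good agentic” side a little more concrete, here’s a rough sketch of what “reviews your questions against best practices” could look like as plain rule-based checks, no generative model required. This is a hypothetical illustration; the function and rules are mine, not Bisque’s or any platform’s actual code:

```python
# A hypothetical, minimal sketch of rule-based survey-question review.
# Names and rules are invented for illustration only.

import re

def review_question(question: str, options: list[str]) -> list[str]:
    """Return human-readable flags for one survey question."""
    flags = []

    # Double-barreled questions ask about more than one thing at once.
    if re.search(r"\b(and|or)\b", question, re.IGNORECASE):
        flags.append("Possibly double-barreled: asks about more than one thing.")

    # Leading language nudges respondents toward an answer.
    for phrase in ("don't you agree", "obviously", "clearly"):
        if phrase in question.lower():
            flags.append(f"Leading language: '{phrase}'.")

    # Neutral midpoints are debatable; flag them so a human decides.
    midpoints = {"neutral", "neither agree nor disagree", "no opinion"}
    if any(opt.strip().lower() in midpoints for opt in options):
        flags.append("Includes a neutral midpoint; keep it only if you have a reason to.")

    return flags


if __name__ == "__main__":
    q = "Don't you agree our fast and friendly support team is excellent?"
    opts = ["Strongly agree", "Agree", "Neutral", "Disagree", "Strongly disagree"]
    for flag in review_question(q, opts):
        print("-", flag)
```

Checks like these surface double-barreled wording, leading language, and midpoints, and then hand the judgment call back to you, which is the whole point.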
That’s why I’ve been so intentional about building Bisque with clearly defined, limited use of AI. The moment we stop engaging with our own questions, we stop learning, and the moment we stop learning, the work isn’t worth doing.
I think about it this way: AI could log into Amazon and buy me a shirt or a coffee maker, and for some people, that’s fine — they want to outsource the choice, and they trust that AI will make the best decision about their shirt or their coffee maker (especially if they give it a good prompt [insert groan here]).
You outsource the choice, the box shows up at your door, and you get on with your life. If you don’t love the shirt or the coffee maker, the only person it really impacts is you (and you can return it).
Research is not like picking a shirt or a coffee maker. Outsourcing your research and survey decisions doesn’t just affect you; it affects your clients, your customers, your employees, your community.
The stakes are bigger because the ripple effects are bigger. It’s not like using a fun little bot to shop for you. Research is an inherently creative process that requires expertise, context, and judgment.
And here’s the kicker: If you don’t know what good research or good survey design looks like, you won’t even know if you’re being guided down the wrong path.
If AI gave me the wrong steps to fix my kitchen faucet, I’d figure it out because the faucet still wouldn’t work — that’s annoying but contained. If AI gives you the wrong steps to design a research study, you might never realize it. You would still get data and a report, but the study could be flawed in ways that lead to bad decisions, wasted resources, or even harm.
And those effects don’t stop with you; they spread, they shape how decisions are made in your organization, they reinforce misinformation in society. That’s the difference in stakes, and why research — real research — shouldn’t be just “push a button and trust the output.”
The future we should choose — and help create
The future doesn’t have to be thoughtless in order to be fast. We can have speed and convenience without losing depth. We can use AI without losing our curiosity or creativity. We just have to choose to do it.
My resistance to the idea that AI can replace subject-matter experts, or that we should develop agents to do our work for us, has nothing to do with protecting my job or with hubris about my credentials. I am an AI fangirl. I use it, I build with it, I fine-tune it. And yes, I critique it.
I think it’s critical that we protect the integrity of research, knowledge, and learning. And, I think AI offers a badass way to make those skills more accessible to more people. Which is amazing (if we do it the right way).
When you let AI decide what to ask, you’re not just outsourcing your labor — you are outsourcing your curiosity. You are letting a model decide what is worth knowing.
That’s not innovation — that’s abdication.
And abdication has consequences. It changes what gets measured, whose voices get heard, and which truths survive. It narrows the questions before they’re even asked. It trains us to accept easy answers over real ones. Real human responses carry nuanced lived experience and perspectives that no probability model can replicate.
This is why if you’re an expert in something, I want you in this conversation about AI. I want you building your own AI, training AI, and making models that reflect your deep knowledge so more people can access it. We don’t need AI to replace expertise — we need it to amplify it.
Research (including surveys!) is how we systematically learn about the world around us — its patterns, its truths, its contradictions. If you outsource that process, you’re not just delegating a task. You’re delegating the shaping of knowledge itself.
The moment we give up control over understanding the world around us, we stop shaping our own future and let someone — or something — else decide it for us.