Qvery Podcast Episode #2: AI Inside Marketing Teams
By Vlad Shvets
Hey, Vlad here. My guest this episode is Polina Medvedeva, head of growth & marketing at nuvacom — an AI platform that helps teams consolidate all their AI collaboration under one roof.
Polina has spent close to a decade in B2B SaaS and FinTech growth, knows what it actually looks like when organizations try to implement AI internally, and happens to have some strong opinions about Claude (spoiler: mostly positive).
We last met in Munich in May 2025 and had a great coffee conversation about how AI was changing marketing. This episode picks up where that left off — specifically on the question Polina brought to the table: how do marketing teams adopt AI internally? Not the theory. The reality, which, as you will hear, is considerably messier.
The Model Wars Are Being Fought by Individuals, Not Teams
Here is a problem every company is quietly sitting with: nobody made a conscious organizational decision about which AI tools to use. People started using whatever they personally liked. Preferences formed. Habits formed. And now companies have inherited a collection of individual choices with zero shared context between any of them.
Polina described it plainly: "LLM models are personified at this point. When they all started spreading out, people got so excited about it and everybody chose what they liked."
Polina walked through the landscape as she sees it: most people she knows use ChatGPT. There is a massive Gemini community on Android phones that exists in an entirely separate bubble; she has a Samsung and has never once opened Gemini. She personally onboarded onto Claude in August 2024 and has been a paying Pro subscriber ever since. Three platforms. Three communities. Almost no crossover.
I have lived this from the other side. I migrated my consultancy from ChatGPT Enterprise to Claude. Then Piotr and I ended up on Claude Code at Qvery. It feels as foundational as picking your email platform — the institutional knowledge, the workflows, the mental models all attach themselves to whichever tool you are using. Switching is not just a software decision. It is an organizational one.
And most companies are making this decision by accident.
The Gap Between Personal AI and Team AI
Polina has a Claude Pro project she pays for personally. She has fed it three years of newsletters from the best people in the industry: GTM strategy content, Lenny's Newsletter, everything relevant to her growth work. When she needs to think through a HubSpot integration, figure out how to use a new outreach tool, or work through a campaign approach, that project is her knowledge base. Accumulated, personal, and deeply useful.
She also uses nuvacom for team work. The dividing line is specific.
"Everything that goes over a three-week timeframe will almost always be in nuvacom. Anything I need to solve immediately — some quick brief, something that does not require deep company knowledge — for that, I will use Claude."
The reason is not model quality. Both platforms are running comparable models in the background. The reason is context retention: what happens when you need to maintain a coherent thread across multiple people, multiple weeks, and external data sources like Slack, Google Drive, and SharePoint.
She framed the broader organizational problem in a way I keep coming back to.
"There is a rapid gap between how people are using LLMs in their personal lives and how that part transitions in their professional lives. And this is where I think the system breaks a little bit."
Personal AI is focused, individual, and shaped by personal habits. Team AI needs to be shared, persistent, and connected to organizational data. These are fundamentally different requirements, and most organizations are trying to solve the second problem with tools built for the first.
Context Is the Feature That Actually Matters
When I asked what specifically breaks down in individual chat tools for team use, Polina was precise. After a certain length and topic overlap in a Claude project, the contextual thread weakens. The model starts losing its connection to earlier context. You end up spending mental energy managing the quality of outputs instead of focusing on the actual task.
nuvacom is built around exactly this problem: deep memory across all project chats, direct connections to Google Drive, SharePoint, and Slack. And it does not degrade when a project extends over weeks or when more people join the workspace. Context compounds instead of eroding.
"If you use AI every single day," Polina said, "things do get very complicated."
I have been solving the same problem differently: Claude Code with MCP tools for Google Drive, where every meeting gets transcribed automatically and Claude can access all those transcripts on demand. Different architecture, same root problem: making accumulated context accessible rather than ephemeral.
We both ended up building systems around context retention. Neither of us solved it by just buying an LLM subscription.
The practical question for any marketing team evaluating AI tools: it is not "which model is smarter?" It is "which platform actually remembers what we are doing three months from now, and keeps remembering it as the team grows?"
The Aha Moment Is No Longer Optional
One of the sharper observations Polina made was about what happens when organizational AI adoption is driven by pressure rather than genuine need.
Right now, there is enormous pressure on every company to have an AI strategy. Leaders feel it. Teams feel it. The "everybody is AI-ing" dynamic is real, and it leads to a specific failure mode: adopting tools to have something to point to, rather than because those tools are solving a real problem. Use cases for the sake of use cases.
The result is performative adoption. Teams adopt tools, check the box, underuse them, and quietly abandon them when the pressure moves on. For anyone building AI products, Polina's diagnosis was direct.
"If your AI product does not really have a proper aha moment — the one that really instills the idea in the person's mind that I need that, I want that, I felt great using that — this is also kind of a death sentence."
In traditional SaaS, aha moments were good product design practice, something to optimize in onboarding. In AI products, they are the difference between genuine adoption and performative adoption that churns the moment organizational pressure shifts.
We shipped Qvery's self-serve the day before this conversation. Time-to-aha is the first metric we are watching.

Free Users Are Now a Liability
This is the SaaS rule that AI quietly invalidated, and I do not think enough people have fully internalized it yet.
In traditional SaaS, free users were an asset. Your word-of-mouth engine, your trial conversion funnel, the foundation of any PLG strategy. Getting as many people as possible into a generous free tier made sense because the marginal cost of a free user was close to zero. High gross margins meant you could afford generosity.
AI-native products cannot run this math. Every AI interaction burns compute. Every API call costs real money. The margins that made generous free tiers viable simply do not exist when your core product is token-driven.
Polina was direct: "Having free users is, frankly speaking, financially a bad thing. You do not want to have free users at all. You want to get them in, get them to really say this is great, subscribe as a paid customer — because that money is no longer your huge-margin situation. It is your ability to continue building."
I raised the extreme example: Anthropic's Max plan at $199/month reportedly costs Anthropic around $5,000 per month to serve a single heavy user. I am almost certainly one of those users, running Claude Opus 4.6 on Claude Code at full capacity. I have made peace with it. The math will have to change at some point, and when it does, it changes for everyone building on top of these models.
The implication: design trials for speed to paid, not breadth of free adoption. Get someone to a genuine "I cannot imagine going back" moment as fast as possible, then convert. A free user who never reaches that moment is a compute expense with no upside.
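The shift in unit economics behind this is easy to sketch with back-of-the-envelope arithmetic. All figures below are hypothetical assumptions chosen for illustration, not numbers from the conversation:

```python
# Illustrative unit economics for a token-driven product.
# Every number here is a hypothetical assumption for the sketch.

def monthly_margin(paid_users, free_users, price=30.0,
                   paid_compute_cost=12.0, free_compute_cost=4.0):
    """Monthly revenue minus compute cost, in dollars."""
    revenue = paid_users * price
    cost = paid_users * paid_compute_cost + free_users * free_compute_cost
    return revenue - cost

# Traditional SaaS intuition: free users cost almost nothing, so a
# big free tier is pure funnel. With per-token compute, every free
# user subtracts from margin until they convert.
with_big_free_tier = monthly_margin(paid_users=100, free_users=2000)
no_free_tier = monthly_margin(paid_users=100, free_users=0)

print(with_big_free_tier)  # 100*30 - (100*12 + 2000*4) = -6200.0
print(no_free_tier)        # 100*30 - 100*12 = 1800.0
```

Under these made-up numbers, a generous free tier flips the whole product from profitable to loss-making, which is the point: the free tier is no longer a cheap funnel but a line item that only pays off if conversion is fast.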
Hiring: The Framework That Survived
I asked Polina what she looks for when hiring marketing talent now, half-expecting a manifesto about AI fluency requirements.
Polina's answer: stack and attitude. Same as it has always been.
"Do you know the stack? Do you have the hard skills that are simply necessary to work in marketing in a tech environment? And then, are you comfortable with unpredictable stuff? With being challenged constantly, with smaller things not working and knowing how to fix them?"
"That did not change much in the past 10 years," Polina added.
The underlying qualities that make someone worth hiring have not moved. What AI has done is accelerate the gap between people who have them and people who do not. The tools amplify capacity. They do not create it.
On whether agents could replace roles outright, she was direct.
"Bots or agents can replace a function really well. A very good super agent or workflow can replace a function really well. It will never be able to replace a person."
The right question is: does this function repeat often enough, and is it structured enough, that you should automate it instead of hire for it? That leads to better decisions than "can I replace this headcount with an agent?"
The Fundamentals Do Not Have an AI Exception
The most grounding part of our conversation came near the end, when Polina pushed back on the tailwind narrative.
Yes, the wave is real. Both of us are building in markets that barely existed two years ago. The timing is genuinely good. But good timing does not exempt you from the fundamentals.
"Nobody has canceled these rules. Your product still needs to come in at the right moment at the right pain point. And it needs to solve a specific problem — not just everybody needs AI, because if everybody needs it, you also kind of do not know what nobody needs."
You still need to test your ICP. You still need to listen to what your salespeople hear at the front line. You still need customer success feedback, data, and experiments. You still need the humility to accept that your first theory about who your customer is will be wrong. "Just build and they will come" was proven wrong in SaaS. It will be proven wrong in AI too.
Polina's parting advice, delivered with complete sincerity: stay strong, and use pen and paper for note taking.
Thanks for reading! The full conversation is on Spotify above 🎙️
