S4E4: AI, Product Failure, and the Skills PMs Need for What’s Next with Dina Atia
AI is transforming the way we work, but building great AI products takes more than hype. In this episode of Productly Speaking, Karl Abbott talks with Dina Atia, an AI Product Manager at Microsoft, about how to navigate the noise and focus on what really matters: solving real user problems. Dina shares practical insights on managing expectations, balancing bold visions with incremental progress, and the skills PMs need to thrive in this fast-moving space. From rapid prototyping to aligning metrics with user value, this conversation is your guide to building smarter AI solutions.
What You’ll Learn in This Episode:
- How to identify real user needs in an era of AI hype
- Why managing expectations is a PM superpower
- The role of rapid prototyping in shaping better products
- How to align evaluation metrics with what truly matters to users
- What skills will set PMs apart in the next wave of AI
Key Quotes:
- “Not understanding the problem is a huge one… especially in the AI space.”
- “AI isn’t thinking. It’s predicting the next word.”
- “Bring back engineers being lazy! What is the minimum we can do to solve this problem?”
- “Nobody achieves anything significant alone.”
Resources & References:
- Gartner Hype Cycle: https://www.gartner.com/en/research/methodologies/gartner-hype-cycle
- Freytag’s Pyramid: https://writers.com/freytags-pyramid
- Tools mentioned:
  - GitHub Copilot: https://github.com/features/copilot
  - Lovable: https://lovable.dev/
  - V0: https://v0.app/
  - Cursor: https://cursor.com/
Connect with Dina Atia:
https://www.linkedin.com/in/dinaatia/
Welcome to Productly Speaking, the product management podcast where building products
is never a straight line, though the roadmap says otherwise. I'm your host, Karl Abbott.
In each episode, I talk with some of the most innovative minds in the industry. Together,
we share real stories, the breakthroughs, the missteps, and the lessons that usually
arrive, well, after the deadline. So whether you're a seasoned PM or just starting your
journey and wondering what really happens behind the backlog, let's give it a go.
On today's episode, we'll be talking about product failure and how hype can play into that.
To have this conversation today, we have Dina Atia, who is a product manager working on
AI products at Microsoft and a two-time graduate from MIT. Dina, welcome to Productly Speaking.
Hey, Karl. Thank you for having me.
Dina, from your perspective, what are some of the most common causes of product failures?
This is such a PM answer, but I think not understanding the problem is a huge one. I think
especially in the AI space, there is really cool emerging technology and there is a desire
to use that technology. And sometimes the sort of natural way to use a technology or the natural
capability that it provides is not the most pressing need that the people you're trying to
support actually have. I would say one of the biggest things is getting caught up in a cool
idea that you had and not focusing on what people actually want and what pain points they actually
have.
Yeah. And that's kind of one of those things, you know, we talked about hype, and hype can really
play into that. And right now there's a lot of hype around AI. What makes AI products
uniquely vulnerable to that hype?
I think it's that we don't really understand what AI is. A number of years ago, it was this
kind of like scary robots taking over the world story that we used to tell people. And then now
it's this like, it's going to be your personal assistant. It's going to transcribe all of your
meetings, write all of your documents for you, and let you focus on the important stuff.
But I don't think there is enough of an understanding of what AI broadly is and what we usually refer
to when we say AI, which is large language models, what those are in particular. So AI, artificial
intelligence, is a very, very broad category that, depending on how you define it, can encompass
your dryer being able to tell when the clothes are dry and stopping by itself, which does not require
any sort of machine learning models or anything like that. And so, you know, on the one spectrum,
it can be as simple as that. And on the other end of the spectrum, it's ChatGPT. It's the robots
that are eventually going to take over the world. It can be all of that too. So when you have such a
broad category, there are so many promises being made, so many fears being spread about what it can do.
I think it's very hard to stop and be like, wait, how does any of this actually work?
And at the core, we have a lot of data that we're now able to collect. And we have different sorts of
designs or architectures for how to combine that data with one another and how to learn things from
that. You have like, you know, very simple classification models where a model has maybe
seen a lot of pictures of cats and dogs. It can now tell, you know, given a picture of either a cat
or dog, which one of those two it is. And on the other hand, you have large language models,
you know, which are extremely, extremely large models. But what they really do is just predict
what the next word that they output should be. And so there isn't even the sort of deep
thinking that we imagine. Like we have these models, like deep research and specialized thinking
models that we imagine have a brain in like a sense that we can understand. There's a very
philosophical conversation around, you know, aren't our brains just a bunch of electrons too? Like,
aren't we computers too? But I think there's a deep sense in which we are still much more intelligent
than just being able to predict what the next right word to say should be. And so I think not
understanding what's actually going on, what are the capabilities and current limitations of the
technology that we put under this broad umbrella of AI, I think that makes us really susceptible to
think that like, it's this massive umbrella, it could be anything, it could do anything for me,
in both a positive and a negative sense. So I think there's a lot of hype and there's a lot of fear.
And both probably go too far in some cases.
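To make the "predicting the next word" point concrete, here is a toy sketch of next-word prediction as a simple bigram counter. It is purely illustrative: real large language models use transformer networks over tokens and vast training data, but the output step is the same idea of scoring candidate next words and picking a likely one. The training sentence and function names here are made up for the example.

```python
# Toy bigram "predict the next word" model, for illustration only.
# It counts which word tends to follow which in a tiny made-up corpus,
# then outputs the most frequent continuation. Real LLMs do this with
# transformer networks over tokens, but the output step is the same idea.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often after `word` in training."""
    candidates = follows.get(word)
    if not candidates:
        return "<unknown>"
    return candidates.most_common(1)[0][0]

print(predict_next("the"))  # 'cat' -- the most common word after 'the' in the corpus
print(predict_next("cat"))  # 'sat' -- first of the equally frequent continuations
```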
Yeah, that's a fascinating perspective, because yeah, you're absolutely dead on that AI really
isn't doing a lot in terms of thinking. It is literally running an algorithm where we take
words, and we assign them numerical values, and then we crunch some mathematics to figure out what
is the most likely thing that's going to come next. And then that's what we give you. However,
it is really good. And it has that perception when you interact with it, that it actually did
some thinking because, well, it took a whole bunch of disparate pieces of information from
the internet, since most of those large language models have been trained on the
internet. So that's where you're getting a lot of that data. And it brings it together for you in
an almost human-like response. And I think that's kind of what generates that. It's
fascinating, because you've got that type of perception that people are now like, wow,
this thing is more than it really is. And then you've got people that are out there like, well,
this doesn't help me at all. And so I think you're right to point out that it's so broad,
and we haven't really defined it, which makes it difficult from a product perspective to
say, okay, this is the problem we're solving for you now, because it is so broad.
And maybe that's like part of the magic. I think kind of a silly thing is when I was young,
I used to think like people who could solve a Rubik's cube were geniuses. Then in like eighth
grade, I got interested in, you know, becoming a genius. So I looked up how to solve a Rubik's cube,
and it turns out it's a very simple algorithm, you sort of solve one side. And then based on that,
the cube ends up in a certain state, based on the state you observed the Rubik's cube to be in,
there's literally like sequences you can memorize and execute very quickly to solve it.
And like, when I learned this, it became kind of a cool party trick for a while. But I ultimately
became very, very bored with it. Because I was like, I didn't have to be smart at all to do this.
I just had to remember some stuff. I can understand why maybe everybody shouldn't have an education
around AI where we're like, oh, it's not that cool, it's just matrix multiplication,
because that might make people swing too much in the other direction where they don't see the value
in what it can do for you. So I want to be really careful not to be like, oh, you know,
it's not that special. It is very special. It has created a lot of value. It's like what I do
for a living. So yeah, I think just having a balanced perspective is very important. A lot of
people when they see how you solve the Rubik's cube, when they see how machine learning models work,
they might lose faith or excitement about them completely. I don't think that is the right way
to look at this. I also think, of course, having it be this amazing black box that can do
anything for you if we just, you know, give it enough data, give it enough time, give it enough money,
I don't think that's the right way to look at it either. Having a measured response where, like, this is
something that can do really amazing things for me, and it can save me a lot of time. But it also
definitely has limitations. I think that is really important. In general, as a product manager, I think
managing expectations is very important. So you know, like I said, a very PM answer, but that's what
I think. Yeah, so I mean, doing product management for AI products at Microsoft, how did you handle some
of this ambiguity that's in the space and sell that to people? Because, you know, to some degree, there is the whole
adage of sometimes people don't know what they want, or they don't know how to describe what they want. So like in the case
of cars, you know, Henry Ford is famous for having said that if he had asked people what they wanted, they'd want a
faster horse, not a car, and then he invents the car. And then that creates a whole new space. Same story
with the iPhone, right? Nobody really knew what a smartphone should be at that point. Apple puts the
iPhone out there and everybody's like, yeah, that's exactly what we all wanted. And then the whole industry
has to retool to basically match the iPhone. It's kind of similar in that right now where you're in kind of
an industry breaking space. So how do you kind of work with those early customers to make sure you're solving
real problems? To kind of go to those examples that you provided of the car and of the iPhone,
maybe people specifically didn't know that they wanted a car or know that they wanted an iPhone,
but they knew that they wanted the underlying benefit. People wanted to be able to get places
faster. People wanted to be able to talk to those that they loved. And so I think when you look at AI
products, it's really about identifying the need and the user problem more than identifying the specific
solution that solves that problem. So what is the modern person's version of that?
They wanted to get places faster, they want to be able to talk to their family and friends and be more
connected. It's not just one thing. I think the biggest thing that has actually been realized from
AI is people want to save time for sure. People are able to get very tedious things done quickly right
now with AI tools. If you look at information retrieval, document generation, even image generation,
coding, like these are all things that you know what you want. And then there's this big tedious
task between you and the finish line. And you feel like you already kind of solved the problem by
figuring out what you need to do, but you still have to just sit down and do it. That's a very frustrating
problem. So I think the biggest thing is time savings. But there are a lot of opportunities here,
like people can also do things in a deeper way than they otherwise would if they leverage AI
tools. Even the number of sources that you can consult has gone crazy. If you think back to
like, if I wanted to research a topic, I had to go to the library and find the books or articles there
that were relevant and spend a lot of time finding within that the specific information that was
relevant to me. We've come a long way since then. Once the internet became commonplace, I could find
many relevant results without ever having to leave my house. But I still had to look through all of them
individually. But having the ability to look at hundreds of sources and look at the exact piece
of information from each source that's relevant to the question I asked that quickly, it doesn't just
make me faster. It also increases the depth of information and the number of sources I'm able
to consult. So I think the underlying needs are people want to save time. People want to be able to do
things with better quality, increased depth than is just sort of possible, even if you give a human more
time. And I think people want to focus on what feels like a quote unquote real problem, what feels
like a human problem, matters of like making decisions, things like that. Part of AI right now
is just a really bold vision. There's big visions for AI products and big visions in the AI space. And
depending on who you are, you frame that differently. But how do you balance that bold vision with
the reality of incremental progress? Say more about that. Yeah. So you've got this bold vision
and everybody kind of buys into it. And especially in the case of the economics around it, the market
and all of the people around the market, they buy into it. And then they're like, hey, you haven't given
us enough return yet. You know, this is going too slow. It's taking too long. It's incremental progress
as opposed to this, like, we want to be there today. I would say that there are a couple of
things at play here. One is the actual technical progress: as an industry, we are definitely
kind of over-promising and under-delivering. We could have a whole separate conversation about AGI,
artificial general intelligence, and when that's supposed to get here and what that's supposed to
do for us. But a lot of people think it's not coming and they're probably right. So there is that
aspect. Definitely, we're making very big promises that we can't keep as an industry.
But on the other hand, I do think a lot of the lack of realized time savings and improved quality,
et cetera, that people are not seeing is also an adoption problem. There is a lot that we can do
today that people are choosing not to engage with. Part of this is for bad reasons. Like, you know,
people are just stuck in their ways. But also part of this is for good reasons. I think we're kind of
in a lawless gray area kind of space right now as an industry. And I think people who are concerned
about their privacy, people who are concerned about their job security, whether or not them using AI
for their work is going to, you know, teach AI how to do it better than them. People who are concerned
about the environmental costs. There are a lot of good reasons to not be, you know, an early adopter or
like a power user of AI products. And again, like there are some bad ones. I think if you don't use it
because it's just not what you're used to, it's probably not going to serve you well in the long
term. I think if we wanted to be able to realize the gains that are currently available, we would
be investing in solving these problems that prevent people, for good reasons and for bad reasons, from
using it. So showing people how AI tools can save them time, so that we immediately eliminate the bad
reasons. But then also much deeper issues around, like, how can we reduce the environmental cost? How can we
create protections for people so that they feel comfortable here and feel like we're not mining
them for their data? And how can we usher in the next generation of what it looks like to work with
these tools and not what it looks like to have these tools make you redundant at work? Like there's a lot
of fear there and a lot of it is well founded. So I think there are some big problems here at, you know,
the business level, the policy level, and the tech level that need to be addressed before we even get
to, like, oh, why don't we have AGI yet? I don't think that's going to solve the problem. I
think getting people to use what exists today is much bigger. Yeah. And I think that when you talk
about people's fears of being made redundant, that's a very valid concern. I mean, you hear
that in the industry, companies are using that: well, we've replaced X number of people with AI.
We have seen throughout history some of these changes where, all of a sudden, you know, if you had these
job skills, you were out of a job because the world changed and a new set of skills was
needed to do the work. But on almost every one of these passes, we've never had less work for people
to do. We've always come up with new ways to work. And so, with a new set of skills, people have come back
into employment. Do you kind of foresee the same thing happening with AI here, even as
AI takes some of those jobs? Yeah. I mean, I definitely think there will be a divide between people who learn
how to use AI tools to make them more productive at work. And, you know, there's this whole concept of
like the 10x engineer, there will definitely be a divide between those people and people who work
in a more traditional sense. Because for those people, most of the time, they are doing those
tedious tasks that we can already do for them. If they're not learning how to transform the way that
they work, and I'm speaking primarily about like knowledge workers in the types of spaces where AI is
currently a threat, if they can't use it, then I do think that is a threat. And I think one of the
big challenges is like, again, I think if somebody doesn't use these tools, because they just don't
want to learn anything new, they're in the sort of like, you can't teach an old dog new tricks era of
their career, then like, that's a bad sign. You probably don't want to be working with somebody
who is inflexible and who is not open to learning, whether that's learning about AI tools or learning
about anything else, like most jobs require continued learning. So from that perspective,
I don't see an issue, like people should adopt these technologies, where I do see that being
a threat is like, if people are not adopting these technologies and not adapting the way they work
accordingly, because of these other concerns around like privacy, the environment, etc, then that
could be a real challenge. People largely do need to be able to trust that they're, you know, safe in
their place of work, and that the way they do their job is not like a threat to their personal
safety, or that they're not actively harming the world that they live in. So I kind of went a couple
different directions there. But, but to answer your core question, I do think the way that we work is
changing. And I do think we will start to see new jobs emerge. There are already, you know, people whose
job is to like, help your company adopt AI more. And so like, there are some of those types of new ways
of working that are emerging. It's hard to tell yet. And I don't know if anyone can make a good
prediction yet of whether or not there will be enough of those to counteract the number of people
who are being made redundant by AI. But I also think that a lot of those people who we currently
think are being made redundant are going to be asked to come back, because there is always a need
for a human touch. It might take fewer humans now. But I think the idea that like, you could
completely replace a marketing department with like an image generation model is just not going to
happen. There is always an aspect of like, somebody needs to have a cohesive vision for
this. We will always need like planners, tastemakers, and people who are able to articulate
what the problem is, and what the optimal outcome looks like. I don't think as many people are going
to be made redundant as we think, you know, just as during COVID, there was a lot of overhiring. I
think right now there's kind of a lot of overfiring, and eventually things will level out. I do think
new jobs are going to be created to support the changes in the ways that we work. And I do think the
right thing to do for a person's career from strictly like a learning perspective is to learn
more about these tools and to start adopting them in their work. However, I do see a completely valid
concern that some people have around privacy and environmental issues that might prevent them
from doing so. And I think that we really need to be doing the most that we can as an industry to
alleviate those concerns, because having principled people not want to work for you anymore is
probably not an outcome that we want. Yeah, not where anybody wants to go. Well, and then also just kind of
interesting is that the models grow stale if they don't get new training data. If you're not adding
to that level of training for a lot of this stuff, there's trends in the market, there's changes to how
people see things. And there's just kind of that creative nature that people bring to the table.
And if you don't have people creating stuff still, then AI is just going to get stale in terms of what
it responds with. So I think that there's to some level, at least for me, I see protection just in
that, that there's got to be that creative spark. There's got to be that creative output for AI to
continue to work in all of these cases. I agree. I think the question just becomes, like, how many
people do we need to sustain that? And is it as many people as we currently have? Only time will tell.
Earlier, before we started the show, you and I were talking about the Gartner hype cycle, which has a
couple of different stages of hype for a product. And in this case, we're talking about an entire new
technology, AI, though arguably AI/ML is not really new technology. It has been around for as long as
computers have. But in terms of it being a topic in society, it's definitely, you know, with generative
AI became a much bigger thing as opposed to just this distant threat. Now it's here. Now people are
talking about it. We're definitely kind of in those early stages of that Gartner hype cycle. But you were
telling me that the Gartner hype cycle actually mirrors Freytag's pyramid, which tells us about the
different stages of a story, because there's pretty much one way to tell a story in which you've got a
hero and a problem, and then they overcome the problem and you reach a resolution. So tell me a
little bit about how those things mirror each other. So first I'll provide some context. The Gartner hype
cycle starts at the bottom with an innovation trigger, which is like, you know, when there's an
exciting new emerging technology, and then it goes up to a peak of inflated expectations. So, you know,
when we all think like AI is going to do every job, and then it goes way down to a trough of
disillusionment, and then it starts to slowly come up, although not nearly as much as at the peak of
inflated expectations. It goes up through a slope of enlightenment to a plateau of productivity. That's the
Gartner hype cycle. And Freytag's pyramid is a similar-looking picture that
explains how we tell a story. So it starts with exposition, then there's a rising action that leads
to a climax, and then there's a falling action and a resolution. So the sort of mapping I see here is
that the exposition and the rising action mirror the innovation trigger in the Gartner hype cycle,
and then through the rising action, we go up to the climax of the story or the peak of inflated
expectations. As we approach the end of the story, there's the falling action and the resolution.
And in this case, this is where they start to differ a little bit. We come down to the trough
of disillusionment, but then we do slowly come back up the slope of enlightenment and plateau of
productivity. This is where I think we get to the part that's not in a story, which is why it's not
part of Freytag's pyramid. This is the sort of like, and then they lived happily ever after, or, you know,
and then everything crashed and burned, and life just sort of goes on in that new state. The reason I think
it's interesting that these two pictures look so similar is because I think we are kind of telling
ourselves this story where, you know, we have these large language models as the hero, and they come
up and we're like, oh, they're going to change everything. The world is never going to be the
same. We get into a lot of hype around what they can do. And then it reaches this climax. And then
I think now we are kind of on the way down to the trough of disillusionment where we're like,
wait a minute, are we in a bubble? Why is social media full of AI slop? Like, this isn't impressive.
I didn't expect these like massive data centers to be powering cat videos on TikTok. Like,
what are we doing? We don't want to have to think about like how much electricity did we use to build
that cat video? Yeah. And so I think we're definitely in a weird space right now where
like if you say AI, people start to roll their eyes. Whereas a few months ago, that definitely
wasn't the case. People were interested. They wanted to learn. They wanted to adopt, but we've gone
one too many cat videos too far. And so people are just kind of sick of it. And I think economically,
we were very, very sick of it as well. Where I think this is going is we're going to like
mellow out in our disillusionment. And the big story part of it is almost over, but pretty soon
over the next few months, up to a few years, we're going to get into that phase where we're just kind
of living life. Our baselines for how productive people should be will adjust. We will start to
develop a new normal. It's not going to be as much of like the center of conversation of what
everyone's talking about as it has been for really the past couple of years now, which is good. I
think I'm, I'm tired of talking about it in work and outside of work.
Yeah. It's just the way you do things now, as opposed to this whole brand new, we need to be
educating everybody. Yeah. Although I do think like in the way that everyone kind of has a basic
education about the internet, I think people will have a basic education on like how to use AI tools
as well. But I think it will be a lot more just sort of integrated with how we live.
So what is that going to do for AI product development over the next three to five years?
A lot of parallels have been drawn between the AI boom and the dot com boom and then bubble. I think
in the way where every company is now an AI company, and back then every company was an internet company,
I think we're going to reach a stage where like a company is just a company again, and your company
happens to use AI or make an AI tool. Like this is just going to be how we live. And one thing we're
already seeing is that we get these, you know, hyper specialized products and these very specialized
companies, which is a good thing. We want a person to identify a very specific problem.
If the current solution that is, you know, not an AI solution to this problem can be improved by them
developing an AI solution, that's great. But I also think we'll get to the stage where people start
to think like, actually, this problem doesn't get better if I throw AI at it. And that's okay. Like,
you know, maybe this startup idea isn't good. Or, like, maybe this product idea
within my company isn't good.
We have AI everything right now.
Yeah. And a lot of times, you know, going back to like why products fail, I think a lot of AI
projects also fail because the need for this to be done with AI was never really there. And if it
was, certainly not with, you know, GPT-5. If you have a problem like, there are these different
issues that we see in some space and I want to categorize them, I would say first, like, have you
tried a classification model? Those are very cheap. Maybe you don't want to pay OpenAI all of
that money so that it can do something that we've been able to do for many, many, many years locally
on your computer. That's one thing is like, once we all have a better understanding, and we start to
like chill out a little bit, I really hope to see improved discernment about what problems really need
you to throw a whole LLM at them versus which ones can be solved with a technology that's existed for
a long time. A lot more like the small language models, things you can run locally, so not
necessarily. Yeah, not even language models at all. Like there's nothing wrong with a simple like
rules-based execution or traditional machine learning models like classification models, or
then a small language model, then a large language model. But I think before you jump to an LLM,
you should at least have ideated what a solution in each of these ways would look like, and asked whether
it's worth the difference in cost and in development time for this to be a large language model-based solution.
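As a concrete illustration of "try the simplest thing first," here is a minimal sketch of a classic text classifier that runs locally, the kind of cheap baseline Dina suggests considering before paying for an LLM. The category labels, example texts, and scikit-learn setup are all assumptions made up for the sketch, not a recommendation of a specific stack.

```python
# Minimal sketch: categorize short feedback texts with a classic, locally-run
# classifier before reaching for a hosted LLM. Labels and examples are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "app crashes when I open settings",
    "charged twice for my subscription",
    "please add a dark mode option",
    "login button does nothing on mobile",
    "refund has not arrived yet",
    "would love keyboard shortcuts",
]
train_labels = ["bug", "billing", "feature", "bug", "billing", "feature"]

# TF-IDF features plus logistic regression: cheap, fast, and runs on a laptop.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(train_texts, train_labels)

print(model.predict(["the app freezes on startup"]))  # likely ['bug']
```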
Bring back engineers being lazy. Like what is the minimum we can do to solve this problem? We do not
need to be doing the most. We need to be doing the least. That's just the most efficient way to get
from point A to point B. So given that, what skills do you see product leaders needing to thrive
through this next wave of AI, as we kind of move further into the story that's been told, through the
trough of disillusionment and back out into that plateau of productivity?
I think the ability for a PM to be able to very quickly spin up an interactive demo or prototype of
what they want to build, I think that's a real value. Because first of all, I've always hated Figma.
I think it's very tedious and stuff. But also, it just doesn't feel real. And as a PM with some
software engineering background, it used to cross my mind like, oh, you know, what if I just kind of
made a very simple version of what I want to do? But it doesn't feel worth the investment if it's
just to show somebody an idea that you have. Being able to very quickly do that now with prompting
and create a demo that is interactive, that has like pretty detailed functionality that I want us to
build in our actual product. But I can show it on a very simple front end with, you know, some fake
data in the background, powering how it works. I think that really improves the quality of questions
people ask about my ideas, it improves the depth at which I actually develop those ideas. You know,
if I'm writing something down, I don't think through like exactly after you do this, what should
happen? Is that the easiest way we can do this? Could we make this even simpler for the user? What
choices might a person want to have? But when I'm looking at this interface, I do see like, oh,
we should have a button for that. We should have, you know, it's not intuitive to me that when I
click this, that's what happens. I think these types of issues bubble up much, much more quickly
and being able to build very basic demos just using GitHub Copilot, or if you're not in Microsoft using
like Lovable or like V0 or Cursor or whatever, like any of those tools, if you can build something
very simple, very quickly to show your team what you're thinking, that's really, really valuable.
It will start some really, really interesting conversations. I think it's much better than Figma
mocks, which I never liked. So that's one. And then also I do think a sort of trend that's been
emerging in the PM role is PMs are increasingly more involved in how their systems should be
evaluated. We're very involved at the sort of very early stages of the product lifecycle where it's
like, you know, this is what people want. This is what we have to build. This is what the
specifications should be. And we do have some of like, you know, this is how we should measure if
it's working well. But I think we can do a lot better now than just coming up with a couple of
metrics. And most of the time, the main one is like user satisfaction collected via thumbs up,
thumbs down. We are able to generate synthetic data and help with running evaluations. We're able to
use LLMs as judges to conduct evaluations. We have access to a much broader set of metrics that
we can use and ways that we can measure them. And just generally, I think we're able to bring a lot
more to the table there. And I think that as much as we can align what we measure with what actually
matters to users, our products will continue to get better and better for that. I have a background
in teaching. And one of the big problems with teaching is that what we measure is not what
matters. We have these like standardized tests that we give students. And the very natural behavior that
you get is that people will start to think what you measure is what matters. So teachers will start to
teach to the test. Students will start to like hyper optimize for the test. Like when I was studying
for the SAT, I learned a trick: if they ask you on the math section to find some expression
that contains 3x + y, you don't have to solve all the way down for x and then rebuild that
expression. They only ask you that question if there is a way you can simplify things
so that 3x + y is already in there. I learned that trick and then I started to do these
problems way faster. But I don't think that made me like better at algebra in some meaningful sense.
I just knew that they would only ask this question if I could do that. And I wouldn't try that if I
didn't know it was possible, because it would waste time on the test. Whether it's in education or in
product, there is a really, really deep importance of aligning what you are optimizing for with what
truly matters. So the more we can be involved in the evaluation process, which is increasingly
becoming more possible as evaluation is done using LLMs, the more alignment we'll create. And I think
we can really give users what they want, because we can specifically, you know, define a metric to
be the thing that the user told us matters. And then we can have an LLM judge how much that's the case.
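Here is a minimal sketch of the LLM-as-judge idea Dina describes: turn the thing the user said matters into a rubric, then have a judge model score each response against it. It assumes the OpenAI Python SDK with an API key already configured; the model name, rubric wording, and 1-5 scale are placeholder choices for illustration, not a prescribed evaluation setup.

```python
# Minimal LLM-as-judge sketch. Assumes the OpenAI Python SDK (openai>=1.0)
# and an OPENAI_API_KEY in the environment. Model name, rubric, and the 1-5
# scale are placeholder choices for illustration.
from openai import OpenAI

client = OpenAI()

RUBRIC = (
    "Score the assistant response from 1 to 5 on how directly it answers the "
    "user's question without unnecessary filler. Reply with only the number."
)

def judge(question: str, response: str) -> int:
    """Ask a judge model to score one response against the rubric."""
    result = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder judge model
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": f"Question: {question}\n\nResponse: {response}"},
        ],
    )
    # Assumes the judge follows the rubric and returns a bare number.
    return int(result.choices[0].message.content.strip())

print(judge("How do I reset my password?", "Go to Settings > Account > Reset password."))
```

In practice you might run a judge like this over a whole set of logged question/response pairs and track the average score alongside the thumbs-up/thumbs-down signal the episode mentions.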
Well, that's pretty exciting stuff. So if you had one piece of advice to give to PMs entering the AI
space, what would that be? Communication is very important. It's increasingly important the more
noise there is. Being someone who is really clear about what you're working on, why it's important,
what impact it has. I think that's going to help a lot with the process of landing that career. And
then once you're in that career, managing people's expectations, helping other people on your team
communicate as well. I'm always kind of annoying about asking for an ETA for something,
not so that I can go and be like, where is this? You're late. But so that I can remind myself when
to check in with someone and identify early if somebody needs support. Being a very open communicator and
doing things as transparently as you can is always helpful because that allows people to help you.
It allows them to point you to tools, maybe some of the AI tools that can help you. But at the end
of the day, like nobody achieves anything significant alone. And so the way that you prevent your work
from being stuck in a vacuum and being limited by your individual capabilities and individual
knowledge is to not do it alone. Thank you. And one final question, just to help our audience get to
know you a little bit better. If you had to give a TED Talk tomorrow on something totally non-work
related, what would it be? This is like a terrible question for me because I'm like a serial like
hobby switcher. Maybe an overarching theme is like hobbies and like enjoying yourself. I've been trying to
not be online as much and enjoying like more, you know, quote unquote, real life hobbies. So I'm very
into baking. I'm very into gardening. I'm very into like fitness and exercise. And like I hesitate to
say anything specific because like, you know, it was fencing and then it was like taekwondo and then
it was like lifting. And so, you know, there's a lot of different things in that category. But yeah,
I think like I think something I care a lot about is having that childlike passion and joy and excitement
for something. And it's very hard to find that when you're consuming. And so the overarching theme is
like always be learning something, always be creating something. Maybe it would be about learning and about
like how to make learning fun. And then I would like go over all of the things that I'm, you know, trying
to learn and then putting down and then switching.
Very cool. I would go to that TED Talk. Well, Dina, thank you so much for coming on Productly Speaking. It's been a
pleasure.
Thank you so much for having me. It's been a pleasure for me too.
Today's episode of Productly Speaking was brought to you by the letters P and M. If you enjoyed this
conversation, subscribe so you don't miss the next one. Share it with friends, coworkers, or anyone
who loves a good product story. Got feedback? We'd love to hear it. Visit www.productlyspeaking.com,
connect with us on LinkedIn, or email us at hello at productlyspeaking.com. Thanks for listening.
And remember, keep calm and build on, especially when the meeting invites keep multiplying.