Is your AI Product on the right track?
Essential signs to see if your AI product is on the right or wrong track.
Hi there, it’s Matt! This is part of a larger series on AI products. Also check out our AI strategy thinking series.
Time to Read: 5 mins
When I started as a data scientist, AI products weren't on my radar. My focus was on building models, managing data governance, and handling the technical details. The product side seemed secondary. But as a lead data scientist and consultant, everything shifted.
Suddenly, I had to make sure the product succeeded. Working closely with the product manager, I learned how to spot when an AI product would thrive or need more work.
If you're building out an AI product, recognizing key warning signs and positive signals early helps you stay on track and avoid costly mistakes.
In this article I discuss:
Warning signs for AI products. These are pretty critical to nip in the bud early.
Success signs for AI products. These are marks of real progress - but they often look like red flags.
Let’s dig into these.
What are AI product warning signs?
Let’s dive into the red flags. They aren't fun, but catching them early can make or break your AI product. Spotting danger early lets you do quick pivots and stay on the path to success.
I've noticed three that often stand out:
1. Confusion on Why
Understanding the why is crucial for high-quality products. To improve, you must know why your business is using AI.
It’s a big red flag if team members and other teams don’t have clarity on:
The reason why the company is adopting AI
Value and product market fit for users and stakeholders
The AI product’s high-level goals for profitability and the product line
A product strategy gives clear direction and structure. It helps everyone understand the importance of having an AI product.
For engineers and data science teams it:
Draws a roadmap, providing clear technical requirements.
Gives them a defined space to build solutions in.
Defines the standard of work critical to the product’s success.
For the business team it:
Ensures the AI product is aligned with market needs.
Provides clear objectives that align with company goals.
Unites everyone in pursuing the same direction.
At the end of the day, you should be able to explain not just the reasons for using AI, but also the context - both business and technical.
Defining the why, and being able to explain the technical and business context, gets you a lot of traction and far more influence with both sides.
2. Lack of Communication
Not meeting regularly is a big red flag. Developing an AI product means getting people excited, or at least interested. That takes meetings and interaction.
Don’t underestimate excitement. Excited people provide valuable feedback and tweaks, and they generate critical traction for your project and executive engagement.
A lack of executive engagement is a critical red flag. A lack of an executive sponsor is an even bigger one.
Having sponsors and advocates is important at every stage of your AI product journey. Adoption of an AI product needs them.
I have a friend who is incredible at working with sponsors and advocates. She finds key sponsors and advocates before a project starts, then sets up meetings to understand their expectations for her AI project. The best part is she’s able to get feedback early, which helps her team build faster. Her sponsors know she’s reliable and proactive - which builds excitement and advocacy for her work.
Data teams and stakeholders should know why we use AI. They should be able to grasp the larger product vision and how it enables the business model. Make sure they also understand which use cases you’re trying to solve with AI.
3. Lack of Resources
You've started building your AI product. But you don’t see progress or early positive signals from the model training you’ve begun. You might not have the resources needed to take the product to market.
This is often because of:
a lack of talent or knowledge
an immature tech or data stack
a serving platform that can’t handle your AI product’s workload
an unclear pricing strategy for the AI product
Understanding how much of each you need can be hard at the outset. Many organizations struggle to establish the right combination of factors as they get started with AI.
These difficulties are very common. Root-causing them is hard, since two or more factors may be working together.
If you're struggling for talent, consider working with AI consultancies that give you access to a diverse talent pool with exposure to use cases similar to yours.
What are signs of AI product success?
Yes, red flags are very common. But you shouldn’t assume failure is destiny. What you think are red flags might actually be marks of AI product success. Check before panicking.
When you spot these signs, you're doing great. Even if it doesn't seem that way at first.
1. First iteration fails.
Expect it to. The first draft of anything is crap.
Your first AI product PoC will be messy and ugly. Mine still are. There’s no shame in that.
Temper your expectations. You may struggle with an unexpectedly poor result, or you may notice others' disappointed reactions. Don’t stop. Assess your roadmap and check next steps. You might be okay. Be a steady rock for yourself and others around you.
I'll never forget a time when I felt unsatisfied with the first iteration of an AI project. I doubted the model's ability to perform well in production. But we impressed stakeholders when we presented it. In production, the project exceeded expectations. The model was better than I had imagined - early doubt doesn't always mean failure.
If your first PoC fails, find a way to build it into a first iteration. If it’s unsalvageable, postmortem it. You learn a lot more by understanding mistakes and then testing alternative paths.
AI is experimental. You’ll have to experiment, and A/B test a bit. This is normal. Companies with mature AI organizations have experimentation teams and platforms just for this. Be patient, determined, and systematic in your experiments.
Yes, it’s hard to see progress. The beginning is hard. It’s even harder if you’re coming from a traditional software product view that isn’t experimental. Give it time and don’t give up.
2. Overwhelming Feedback.
You present the product features, pricing, and market. Suddenly everyone wants to give their opinion. On the surface, it looks like you just got more work to do.
It seems like a bad sign. But you accomplished one big thing: you generated excitement and interest. It’s a mark that you’ve succeeded in advocating for your AI product vision, goals, and roadmap.
More feedback can be exhausting. You may be afraid of letting everyone down. But it means people are engaged - especially if they ask about the specifics of AI technologies and the project's goals.
This gives massive momentum for improvement.
Momentum matters. It’s infectious for users, stakeholders, and data science teams. Teams build better and have a sense of accomplishment. Stakeholders and users are more willing to volunteer critical feedback.
It’s hard to manage expectations when there’s a lot of enthusiasm. But it’s far harder when there’s no excitement about the AI product at all.
For the data science team, you’ve gained the traction to keep iterating. For the product, you’ve earned the credibility to propel the roadmap forward. It creates momentum.
3. Increasing Tweaks
You might see the constant need to tweak your project plan or pivot your strategy as a sign of poor planning. But this is actually a positive indicator.
It shows that your team is flexible. They respond quickly to new insights and changes. They also remain committed to refining the approach for the best outcomes.
Flexibility is vital for innovation. It can drive better results as the project advances. Adapting to new data or tech fast shows that your team is proactive. They can handle challenges and jump on opportunities.
This adaptability ensures that your AI project stays relevant. It will align with strategic goals, even if the goals shift during the project.
4. Change in Data Culture
You see teams changing their views on data. They begin focusing on data engineering, platform, and new data sources.
Suddenly, they’re asking you a high volume of new questions. Slack messages fill up. You start thinking there’s a problem: they’re making more changes after you set the product roadmap.
If your internal teams are changing how they think about, use, and ingest data, the data culture is changing. They’re making an effort to support your AI product. You're seeing teams’ data maturity go up.
Data is the heart of an AI product. What if your organization realizes it has to overhaul how it labels, stores, collects, and uses data?
That means AI transformation is happening - internal teams and culture are shifting toward transformation. An AI-focused data culture is developing. Think about ways to sustain it.
Takeaways
That’s it on which warning and success signs to look out for.
If you take away one thing, let it be this: assess constantly. Put yourself on alert and think about what you’re seeing. Be objective as you assess warning and success signs.
Your product might be doing better than you think.
In summary:
Check alignment on why you’re building an AI product
Make sure communication and interest are sustainable
Look for overwhelming feedback
Don’t sweat bad first iterations
Watch for changes in data culture.
Up Next
I’ve been traveling a bit while writing this, so thanks for bearing with me.
The response to my LinkedIn Learning class has been amazing. Thank you to the over 30k people who have watched it! That’s 1k a day. You can check it out here:
How to be a Lead Data Scientist?
On top of that, I’m hard at work on:
Rules vs. Algo for AI Products
How the 5 V’s of data affect your AI strategy
Breaking down barriers in AI strategy: How to overcome friction.
I’ll be updating this as time allows.
See you next time! 🍻