Many AI products are less useful than they claim to be because they suffer from hubris: they try to do too much.
Let me give you an example. One of the most crowded spaces in AI has been meeting notetakers, and a rising star is Granola.
Pretty much every other AI notetaker has the same features: they transcribe the meetings, summarize the main takeaways with AI, and allow you to chat with the notes through Q&A. AI handles all the work, while humans can sit back and relax.
But Granola is different. Granola says: you humans need to do some work too. You actually need to type stuff during the meeting, but only stuff that you think is important, or thoughts that pop up in your head that only you can verbalize. Meanwhile, AI takes care of the dirty work of transcribing what everyone said. After the meeting, Granola enriches your rough notes with the transcript, giving you final, polished notes.
People stick with Granola because the quality of the notes is just much higher. Why? Because its AI is humble. It doesn’t try to decide what’s important in the meeting; it lets the user decide.
Something seemingly simple like meeting note-taking is actually extremely complex: what one person finds important may not matter to another. It involves a ton of subjective judgment and taste. The AI should not presumptuously decide what the notes should be; the humans need to do their part and offer their input too.
So many AI products these days proclaim that they can produce wonderful outputs with minimal input from the human: Generate a full application with just a prompt! Create a beautiful slide deck with just one sentence!
But in reality, their output is rarely useful; they’re fancy demos at best. Professionals never start from scratch. They bring their own existing work and ideas, and need AI to help polish them.
A professional developer already has existing codebases that they’re working on; they don’t need to generate an app from scratch. A professional marketer already has assets and brand guidelines that they need to use; they don’t need to generate a pitch deck from scratch. A professional designer already has Figma files they’ve been working on; they don’t need to generate a prototype from scratch.
So many AIs can make stuff from scratch, but where’s the AI that can take my existing work to the next level?
We don’t need an AI to bring something from 0 to 100. Most of the time, we just need it to turn 60 into 80.
AI products should recognize their own limitations and thoughtfully invite humans to contribute at the right moments, with minimal friction. They need to know not just what they can do, but also what they can’t, and how to close that gap with human input.
Humility is a trait for successful humans; it will be a trait for successful AI too.