Aha moments: MHV’s Peng on AI
Jul 18, 2023
Community notes by: Sina, CTO/CEO of Learning Loop
Disclaimer: This post is not written by AI. A Learning Loop member wrote these notes while listening to the talk, so they aren't direct quotes. The speaker isn't responsible for how these notes or their tone are perceived/interpreted. If you have questions about specific parts of the post, email email@example.com for clarification.
Transactions vs Relationships
Current businesses optimise for transactions; businesses of the future will optimise for relationships.
Relationships that maximise exposure to clients' needs, and use that exposure to maximise the value delivered to clients, capturing some of that value in $.
Transactions don’t scale, relationships do
Why now? This is the first time we get to interact with AI the way we’ve interacted with each other.
Using AI to create human relationships extends LTV [potentially] infinitely
AI putting the “lifetime” in Lifetime Value (LTV)
He's seen businesses that said their LTV is 6 months. He's like, "but why?"
Historically we've measured LTV by looking at a series of transactions in a period of time: first to last transaction.
INSTEAD: Proprietary information that grows in value and volume enables new businesses to add value to clients proportionally. The more you know, the more useful you can be to people, and the more transactions that enables you to have. What's more important? Transactions? Or the relationship itself?
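The two framings of LTV above can be sketched numerically. A minimal, hypothetical Python illustration: transactional LTV only counts revenue from the first to the last transaction, while a relationship framing projects value forward with no fixed end date. All numbers, and the `expected_months` projection, are invented for the example:

```python
from datetime import date

# Hypothetical client transaction history: (date, revenue).
transactions = [
    (date(2022, 1, 10), 40.0),
    (date(2022, 4, 2), 55.0),
    (date(2022, 9, 18), 70.0),
]

def ltv_window_days(txns):
    """The historical measurement window: first to last transaction."""
    return (txns[-1][0] - txns[0][0]).days

def transactional_ltv(txns):
    """LTV as historically measured: sum of revenue inside that window."""
    return sum(amount for _, amount in txns)

def relationship_ltv(txns, monthly_value, expected_months):
    """A relationship framing: past revenue plus projected future value."""
    return transactional_ltv(txns) + monthly_value * expected_months

print(ltv_window_days(transactions))             # measurement window in days
print(transactional_ltv(transactions))           # 165.0
print(relationship_ltv(transactions, 25.0, 24))  # 765.0
```

The point of the contrast: the transactional number is capped by the observed window, while the relationship number grows with how long (and how well) you expect to keep serving the client.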
Clients and not users. User is a transactional term. Client is relationship focused
Proprietary information = startups' only sustainable differentiation and leverage in relation to big tech
If you accept that, then there’s no way for 3 big tech companies to run the world.
They can’t control all of your proprietary information. You interact with many players
As a founder, you better have a clear understanding of what proprietary information you’re collecting
And the value and volume better grow exponentially over time
e.g. in the case of Caesar (the moderator): he collects proprietary information on people's financial risk appetite, so he can make investment recommendations (and make money) in ways that the big tech companies can't, since they don't have that information about people
How does the recent AI advancement impact the investment scene?
We're seeing a lot of emergent behaviours in AI tech now. It's hard to predict.
Current state of AI tech can generate OR destroy a huge amount of value.
The accuracy/truth of LLM results is an NP-hard problem at the moment.
What are some tools people already use transactionally that could employ a virtual relationship manager?
Not just to increase efficiency, but to build predictably and increasingly good relationships
Question from a founder: How to optimise for long-term stuff if startup life is struggle in the short-term? :D
Peng elaborated how he thinks about short-term vs long-term risk, survival vs ambition:
Where Peng takes risk, where Peng avoids risk
70-80% of startups die because of PMF risk. If you’re a leader and I tell you that your company is 80% likely to die, what would you do? Well, figure out how not to die first and foremost.
Eliminate the PMF risk. Example of a company he invested in:
The company took 4-5 years of exploring different ambitious models to help people maximise their career potential.
Then eventually when they pitched to him they removed PMF risk by becoming a straightforward headhunting company. 0 PMF risk since that business model worked and they were making decent revenue.
Would he invest if they were gonna remain just a headhunting company forever though? It appears not. They had a clear vision: start low risk, and eventually become a tech company that infinitely increases LTV.
Eliminate PMF risk, then go wild. He’s not interested in death, he’s not interested in not going wild.
He usually invests in Series A companies that have derisked PMF. Sometimes the team and product are still chaotic, but if the founder is great, that's fine. MHV looks for strong founders.
He wants companies that have derisked PMF in 5 areas where you can take massive long-term risk on AI, where the founders will take that risk and have wanted it badly since day 1.
The 5 areas
People have needed these forever and will continue to. If you're building in any of these, optimise for long-term relationships
Minimum viable competence of a team building AI products
He thinks for a few years it was okay if you didn't have a technical founder, because you could glue different APIs and tools together to build an app
Back in the day you needed actual tech people to build your core tech components
Now we're back to that era: again, you need tech people and can't fake your way through things by gluing different APIs together to build differentiation
What do you need to do culturally within your company?
How do you get the deep tech people to interact with people in the other business functions in the company? Deep tech people are…different.
How do you make sure the best ideas get built, given the above?
How do you do all of these while minimising risk of PMF?
“ELIMINATE PMF RISK”
Expert systems will rise again
There's more to AI than ML.
He believes expert systems will rise again, using LLM as the entry point
Especially in healthtech
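A minimal sketch of the pattern described above: an LLM as the entry point that turns free text into structured facts, which then feed a classic rule-based expert system. Here `extract_symptoms` is a keyword stand-in for a real LLM call, and the rules are toy examples for illustration, not medical advice:

```python
# Stand-in for an LLM: maps free text to a structured set of facts.
# In a real system this would be a model call returning structured output.
def extract_symptoms(free_text):
    known = {"fever", "cough", "rash"}
    return {word for word in free_text.lower().split() if word in known}

# A tiny rule base: (required facts, conclusion). Purely illustrative.
RULES = [
    ({"fever", "cough"}, "possible respiratory infection: suggest GP visit"),
    ({"rash"}, "possible dermatological issue: suggest dermatologist"),
]

def expert_system(facts):
    """Classic forward chaining: fire every rule whose conditions hold."""
    return [conclusion for conditions, conclusion in RULES
            if conditions <= facts]

facts = extract_symptoms("I have a fever and a dry cough")
print(expert_system(facts))
```

The design point: the LLM handles the messy natural-language front door, while the deterministic rule base keeps the reasoning auditable, which matters in regulated domains like healthtech.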
Mother of all R+K
It's much easier to drive the viral coefficient when you're "friends" with a client.
Thoughts on AI and Future of education
1/3 of the world can’t read/write.
Learning how to understand, how to lead, how to learn are not accessible to everyone yet.
Things kids should learn: how to lead other people (philosopher/warrior perspective). hard sciences. martial arts.
He thinks most problems in education outside of the above are first-world problems.
The problems he prioritises in education are the globally existential ones that, if solved, will radically solve bigger global issues.
Comment from the moderator: Salman Khan talking about AI and education https://www.youtube.com/watch?v=hJP5GqnTrNo
Caesar’s biggest lesson as a founder
Coming from big tech, he learned that as a startup founder, you always have a 10x vision, but you have to learn to execute incrementally towards the 10x vision.
Risk of AI-generated psychopathic relationships
In normal human relationships, there are consequences. In psychopathic relationships, there are no consequences. Is it possible that this will become the default outcome of relationships scaled up with AI?
Response: AI will most likely be on the consumer side, as it’ll reinforce competition. If 1 platform is bad to its users, it’ll be challenged by other platforms that are friendly. Users sharing screenshots of how they were treated by various platforms will be an example of check and balance. The bad actor will have to correct its behaviour for survival.
What about traditional industries that run on excel?
Influence the boss to shift, then the rest will follow
Or, take a chunk of manual labor and do it at a lower cost (most likely manually but more cheaply, to match the current output with less input for the client), then gradually automate and reduce your own input too.
Gotta deal with risk management, and social change management at once
Lots of "GPT-something" companies raising money and fighting for market share.
It’s noise, don’t follow it.
He calls this negative blitzscaling. Bad unit economics, everyone’s doing it, everyone’s becoming bigger while generating and capturing less and less value
Why? companies follow each other.
He thinks the thing that’ll stop this is: many companies will die because they have no differentiation
The surest way to get in trouble as a founder is to lack clarity on why you're doing what you're doing.
He just wrote an article recently “What do I look for in companies?”
Add “0 PMF risk” to it
What sort of behavioural engineering do you need to do for a customer to put their trust into an AI relationship manager?
Love this question.
Example: give money to AI to invest for you. How do people trust this?
Educate the member on how the system works. It can't appear as a black box.
Show people the data “These are the experiments we’ve run over the past 15 years”. “Keep your risk at an S&P 500 company level”
It’s hard to explain the depth of the stats and analytics, but you have to do it anyway.
People don’t trust systems, they trust people
Start with a community of friends in your target audience, then get to friends of those friends, then friends of friends of friends, then everyone else. Scale the relationships.
Do you expect to see a significant change in the composition/persona of founding teams given AI?
He’s been talking to some tech CEOs about this. The ones in bigger companies have been allocating millions to build deep tech teams.
In the startup world, momentum matters a lot. Many companies will stitch random cheap ChatGPT stuff together to move fast, and they'll probably collapse
The tech companies building services around other tech may or may not get in trouble depending on how that service changes
You're still gonna need a team that'll glue APIs and stuff together, but you'll also need the team that'll build the AI customer relationship manager. That's deep tech
You can't just use ChatGPT and similar tools to glue things together
Final word of wisdom
Understand who you spend time with and why you spend time with them
- End of note -
If you're VC-backed and you read this whole note, I'd love to invite you to Learning Loop. Use this link to get $200 off your first year's membership fee.
Learning Loop is a personalised learning and coaching platform. It facilitates honest self-reflections in small private groups, in text and video format, to help members sense-check hard decisions at work and in life weekly. The more you connect and learn with other members on Learning Loop, the more personalised recommendations you receive from the platform, to maximise your growth towards your life goals.