The AI hype will accelerate three important conversations

Most innovation follows a hype cycle of some sort, and AI is no different. Today, AI is a discussion topic almost everywhere. Many governments, companies, organizations, and individuals are investing a lot of energy in AI. In the short term, many of them will be disappointed by the lack of immediate, measurable impact and significant progress, and by things turning out to be rather more complex than originally anticipated. That said, AI will help accelerate several important conversations. Three I find especially interesting are:

- Society: What do we want?
- Organizations: How solid is our innovation process?
- Individuals: What is my unique value-add?

On society: AI can help us do things, so the question is what we want AI to do for us. I'm not talking about specific tasks here, but about what we want AI to optimize society for in the long term. This is often referred to as the alignment problem: the challenge of aligning AI's values with our own. So what are humanity's values? This is a conversation not currently being had broadly or deeply enough in society. Looking at the conflicts in the world, not just geopolitical ones but also those over opinions and beliefs, is it even reasonable to talk about a concept such as "human values"? Or should we simply accept that our values are very diverse? Either way, we should find ways to have this conversation so that everyone is involved and so that it becomes sufficiently deep.

There are many challenges here, but the biggest is probably the gap between the values people express and how they act in reality. Few people would express a desire to uphold slavery-like conditions, and yet we buy amazingly cheap products. Or to destroy the environment, and yet we do. Or to claim that different people have different worth, and yet we often act as if they do. Or to be condescending, and yet on social media we often are exactly that. Or to take advantage of less privileged people, and yet we consume loads of pornography. Imagine if AI looked only at today's human actions and assumed that we want more of what we are currently doing. Would that be good or bad for humanity? We need to find a way to articulate a diverse set of values that are a net positive for society and specific enough that AI can learn from them. How can we build an ecosystem of forums where everyone, not just a few, gets to take part in agreeing on what to optimize society for?

On organizations: Many organizations are stressed about AI. I would argue that AI is not really what they are stressed about, though. With the rapid acceleration of hype around generative AI in particular, many organizations are more fundamentally stressed about whether their innovation processes are strong and fast enough. Could you articulate, in a simple way, the innovation process for your organization? Could everyone in your organization do so? If not, that is probably the bigger cause of stress. The good news is that AI will help accelerate more deliberate innovation processes in organizations. In a simple form, an innovation process can consist of three steps:

1) Explore - How do we get a continuous and diverse stream of ideas from around the world about what is possible?
2) Prototype - How do we rapidly build great prototypes so that we learn faster than the world around us?
3) Scale - How do we rapidly scale successful prototypes so that their benefits reach the whole organization, not just parts of it?

To enable this innovation process, organizations should be very deliberate about continuous reskilling and lifelong learning for their people. How can every organization get very clear on its innovation process and start practicing it?

On individuals: AI won't be able to automate everything, or even close to it, for a very long time. Poor data quality, lack of integrations between systems, inability to do most physical tasks, and lack of human connection are all major blockers. That said, with the prospect of broader automation of tasks comes a question for every individual: what should I be investing my time in? Take a B2B salesperson, for example: which elements of the job are best handled by AI in the future, and which are best handled by the human being? As AI capabilities continue to grow, this will be a moving target. You are not competing with AI; you are competing with people in a similar role who use AI. This conversation is important to have for every role and profession, so that we can already start pivoting education in this direction. Today, many schools forbid students from using AI. What is their long-term plan with this position? Just as with the calculator, there is value in understanding the underlying dynamics of a task. At the same time, in daily life we use tools such as calculators all the time, so it is important that we learn how to use them early on. How can every individual kickstart a lifelong learning journey of leveraging AI, whatever their job is today?

So, irrespective of how and how fast the hype curve of AI evolves, we should accelerate these three conversations so that we are always ready, come what may.
