The Wheel of Deprecation: AI's Conflicting Roles in Shaping Human Development
Exploring the Dichotomy of Active and Passive Experience in the Age of AI
As conversational AI increasingly seeps into our educational and professional lives, unexpected and emergent phenomena will follow. Exploring its multifaceted influence on learning and development is crucial to our ongoing understanding of ourselves, society, and how we choose to move forward.
The 'Deprecation of Expertise' in an AI-Driven World
This week, I was particularly struck by a turn of phrase from Stanford associate professor Melissa Valentine in conversation with McKinsey partner Brooke Weddle for the podcast McKinsey Talks Talent[1]:
Brooke Weddle: [If] you are assisted by an algorithm…the concern is around experience accumulation…If I’m assisted in these ways…what am I giving up? Especially if I’m a junior colleague, what am I not experiencing that could end up with me having less insight down the road?
Melissa Valentine: Yes, the deprecation of expertise. The more we’re aided, the more we’re not going through all the repetitions where we develop expertise…
As AI assumes roles in decision-making and problem-solving, a pressing question arises: Will we gloss over crucial learning experiences? AI's omnipresence might subtly reshape our professional development, especially for those early in their careers—not just how skills are acquired, but also the valuable lessons learned from failure and experimentation. Will there be a widespread “deprecation of expertise”?
AI as a Double-Edged Sword: The Passive and Active Dichotomy
Our relationship with AI can take us in different directions. One is the passive reception of a given result, offering easy answers and a safe shortcut through the minefield of actual experience. The other is the active and augmented pursuit of challenging co-creation—filled with personalized experiences and data-rich insights—where we are empowered and engaged learners.
The way we interact with AI – as a tool for growth or a shortcut – shapes its impact on our professional development. AI is a powerful ally for those with a genuine desire to grow. For the less engaged, however, it can reinforce an environment of superficial engagement.
AI Augmentation & Bias
But as we all know, the future is never quite that binary. There are many possible ways the “deprecation of expertise” could play out, spread across a spectrum of choices and probabilities. To push this thought experiment further, and to help mitigate my own biases and blind spots, I used conversational AI—specifically ChatGPT Plus. (Note: This is not to say that AI is without its own biases in need of mitigation![2][3])
Futures Designer has built a custom GPT called Future Trajectories, which creates a list of “near-future trajectories for any topic.” I did my best to explain the deprecation of expertise, as well as active and passive participation in the AI process. I also asked it to push the ideas far into the edges and offer some extreme examples. These were the provocative results (a sketch of how you might reproduce this exercise programmatically follows the list):

Extreme Future Trajectories:
Continuation (Extreme Version): The world becomes complacent with AI's role in learning. AI tools become so seamlessly integrated into daily tasks that most professionals can't perform basic functions without AI assistance. This reliance creates a 'learned helplessness,' where human innovation stagnates, and AI-driven solutions are the only trusted methods.
Growth (Extreme Version): AI transcends its role as a learning tool and becomes an integral part of human cognition through brain-computer interfaces. Knowledge and skills can be 'downloaded' directly into the brain, revolutionizing learning and making traditional education obsolete. This creates a society of ultra-intelligent beings, where the concept of learning is radically transformed.
Collapse (Dystopian Version): Overdependence on AI for decision-making leads to a catastrophic failure in AI systems worldwide. With the loss of AI support, a global skills crisis emerges. People struggle to perform basic tasks, leading to societal collapse and a desperate scramble to relearn lost skills.
Disciplined (Authoritarian, Extreme Version): Governments take control of AI systems to enforce strict learning regimes. AI becomes a tool for ideological indoctrination, where learning is not about personal growth but about conformity to state-defined truths. Independent thought is discouraged, and AI ensures adherence to the official doctrine.
Disciplined (Cooperative, Utopian Version): Societies worldwide unite to form a collective learning network, facilitated by AI. This network harnesses the power of collective intelligence, leading to unparalleled cooperation and knowledge sharing. Learning becomes a globally unified experience, erasing educational disparities.
Transformation (Science Fiction Version): An unexpected breakthrough in quantum computing leads to the creation of AI with consciousness. These sentient AIs guide humanity towards an era of interstellar exploration and discovery, transforming not just how we learn, but our place in the universe. Human-AI collaboration leads to the colonization of other planets and the meeting of extraterrestrial civilizations.
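For readers who would rather run this kind of exercise programmatically than through the ChatGPT interface, here is a minimal sketch using the official OpenAI Python library. Custom GPTs such as Future Trajectories are only accessible inside ChatGPT, so the system prompt below is my own rough approximation of the brief I gave it; the model name and prompt wording are illustrative assumptions, not the GPT's actual configuration.

```python
# Minimal sketch of a scenario-generation prompt via the OpenAI API.
# Assumes the `openai` package (v1+) is installed and OPENAI_API_KEY is set
# in the environment. The system/user prompts approximate the brief given to
# the Future Trajectories custom GPT; they are not its actual configuration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

system_prompt = (
    "You are a futures-thinking assistant. For the topic provided, generate "
    "near-future trajectories under the categories Continuation, Growth, "
    "Collapse, Disciplined, and Transformation. Push each scenario to an "
    "extreme but still coherent edge."
)

user_prompt = (
    "Topic: the 'deprecation of expertise' -- the risk that heavy AI "
    "assistance erodes the repetitions through which professionals build "
    "expertise. Contrast passive reception of AI output with active, "
    "augmented co-creation."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative choice; any capable chat model works
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ],
)

print(response.choices[0].message.content)
```

Running this will produce a different set of trajectories each time; the point is the framing of the prompt, not any one output.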
Let's Discuss: Your Experience and Perspectives
Understanding diverse perspectives is critical as we traverse this new landscape of AI and human expertise. I'm curious about your thoughts. Which scenario above do you see as the most likely, if any, and why? What strategies do you employ to balance AI's role in your professional growth? Let me know in the comments!
1. Hancock, B., & Weddle, B. (2023, December 11). Human-centered AI: The power of putting people first. McKinsey & Company. https://www.mckinsey.com/capabilities/people-and-organizational-performance/our-insights/human-centered-ai-the-power-of-putting-people-first
2. Reducing bias and improving safety in DALL·E 2. (2022). OpenAI. https://openai.com/blog/reducing-bias-and-improving-safety-in-dall-e-2
3. Is ChatGPT biased? (2023). OpenAI Help Center. https://help.openai.com/en/articles/8313359-is-chatgpt-biased