The Creative Conundrum: Embracing AI with Caution
How AI is Redefining Creativity and Professionalism in this new age
Should we trust AI?
David Hogue (Design Lead at Google) hosted an engaging panel session at the Spotlight on AI conference with Frincy Clement (Canadian Ambassador for Women in Artificial Intelligence), Katherine Valenzuela (Designer and Founder of The Good Citizen Studio), and me, Jason Theodor, on this very topic. Coincidentally, the panel fell on November 30th, the anniversary of the launch of ChatGPT—the fastest-adopted technology in human history, reaching over a million users in its first five days.
So, should we trust AI? I have three current thoughts on this matter…
But first, I want to quickly re-introduce my prompting course, 7 Habits for Highly Effective Prompting, which is back by popular demand! These are all the best practices and research I’ve gathered from the last year of interacting with large language models. Learn how to augment your existing abilities and creativity through more effective prompting. Give yourself superpowers for the new year ahead!
Should we trust Electricity?
Think of something so ubiquitous and pervasive that you use it all day every day, without a second thought. Now, imagine that this something is incredibly dangerous. It can cause fires or ruin equipment. It can kill or scorch you in a flash. It can travel through solid objects and stop your heart, or fry your brain.
Obviously, I’m talking about electricity (I kind of gave it away in the section title). But it also doesn’t take much of a cognitive leap to put natural gas, combustion engines, or air travel into this category of ‘mundane mortal danger’. Why do we feel safe using these things? Because of universal standards and regulations. We have building codes and safety protocols. We have consumer protection laws. Regulations allow us to use the toaster, turn up the fireplace, or fly home for the holidays without the constant worry of sudden death or catastrophic failure.
AI is similar to these everyday services and utilities. It’s like fire. It’s like nuclear energy. It must have limits and controls to keep us safe(r) and allow us to reap the benefits of this miraculous technology. The urgency for comprehensive regulation becomes increasingly apparent as AI's capabilities expand, from its influence on employment opportunities to ethical concerns like algorithmic bias. Hopefully, we’ll have enough forethought (and time) to get this right.
Should we trust Assistants?
You wouldn’t let an intern work on an essential company document without review. Anything representing your voice, brand, team, or company requires experience and scrutiny. Is it accurate? Does it have the right tone? Is it hitting all the right notes? Will it meet expectations? Then you might offer some notes for improvement, send it back for refinement, and examine it a second or third time before its completion.
This ‘review and revise’ cycle serves a purpose for both parties: you refine how you communicate your intent, and the intern refines their understanding and capabilities. Everyone improves. Once the intern has enough experience, they move on—they graduate to greater spheres of responsibility.
I often liken conversational AI and other gen(erative) AI tools to an executive assistant, an assistant coach, or an intern. It is extremely capable and tries its best to help you with your needs. But the AI isn’t human. It never gets upset, never complains, and you never need to feel awkward telling it to start over or completely change direction. It is fast, polite, consistent, and available at all hours. But it still makes mistakes.
You should always check (and double-check) AI output. LLMs (Large Language Models) are notorious for ‘hallucinating’—confidently fabricating answers when they don’t know something. AI-generated misinformation and biases in automated decision-making underscore the necessity of this vigilance. And when we create, it is imperative to stay active in our creations, lest we give up our own agency. “Create or be created” is one of my internal mantras.
Should we trust Success?
Where does this AI era leave our budding professionals—tomorrow's designers, writers, analysts, etc.? The adage, “The opposite of success is learning,” suggests that failure and mistakes are vital for growth. However, as AI tools like ChatGPT become more powerful and ubiquitous, I wonder about the impact they might have on traditional learning and development paths.
These AI systems are tirelessly efficient, need no rest or personal growth, and may usurp the roles traditionally filled by junior professionals. Where once a junior might draft a document, create a design, or analyze data (tasks that offered invaluable learning opportunities), we now increasingly delegate that work to AI. This shift poses a dual challenge: the risk of eroding certain skills, particularly those related to creative discernment and judgment of quality, and the necessity for junior professionals to carve new niches for themselves (Stanford professor Melissa Valentine calls this the “deprecation of expertise”).1
This evolving landscape might alter the skill sets currently deemed valuable and fundamentally transform our culture and relationship with creativity. As experienced professionals increasingly rely on AI for tasks once used for training and skill development, there's a real danger that the nuanced understanding of what makes something 'well-made' or 'authentic' might diminish over time. This shift in skill importance could lead to a cultural redefinition of creativity, where effective collaboration with AI will be prioritized over more traditional creative skills.
In this context, mentorship and apprenticeship become more crucial yet more complicated. The question arises: Is learning to command AI as valuable as learning the skill itself? This dilemma presents an urgent need to rethink our educational and professional development models. We must find a balance that allows AI to augment, not replace, the human creative process and ensures that emerging professionals are equipped not just to use AI, but to innovate and lead in a world where AI is a tool, not a crutch.
In tech we trust?
Trusting AI is not about blind faith but informed caution, much like our approach to other powerful tools like electricity. It requires a balance between reaping benefits and being wary of potential dangers. We need robust regulations, a discerning eye for AI-assisted outputs, and a conscious effort to maintain personal growth alongside technological advancements. As we navigate this complex landscape of AI adoption, our trust in technology must be as dynamic and evolving as the technology itself. We are in for some interesting times ahead.
If you are interested in how AI might augment your work, team, or company, check out my creative consultancy, More Better Different.
1. Hancock, B., & Weddle, B. (2023, December 11). Human-centered AI: The power of putting people first. McKinsey & Company. https://www.mckinsey.com/capabilities/people-and-organizational-performance/our-insights/human-centered-ai-the-power-of-putting-people-first