AI continues to play an increasingly prominent role in our lives, transforming entire industries and shaping the future of work. It can even help you identify the perfect avocado at the grocery store. But despite its power and potential, AI is still a tool whose value depends on the judgment, clarity, and intent of the people who use it.
As AI—particularly generative AI—takes on more responsibilities and fosters connections that were once unimaginable, we need to think more holistically about the world around us. In short, we need to become better systems thinkers.
Professor Ed Crawley describes systems thinking as “simply thinking about something as a system: the existence of entities—the parts, the chunks, the pieces—and the relationships between them.” It’s a straightforward definition, but it captures the essence of a mindset whose relevance keeps growing.
Keep reading to learn about several ways that AI and systems thinking intersect and why developing a systems thinking mindset is imperative for operating in an AI-forward world.
As Professor Crawley puts it, “We use systems thinking because the complexity of the systems that we deal with is growing. AI is one such complex system that we’re all spending significant time thinking about.” The characteristics of AI that contribute to its complexity include:
Probabilistic modeling: Ask ChatGPT the same question multiple times, and the outputs will vary. The core information will likely be similar, but you'll notice differences in word choice, sentence structure, and the specific details provided. Change just one word in your prompt, and the model may produce a noticeably different result. This variation occurs because AI models are probabilistic rather than deterministic: they generate outputs by weighing probabilities rather than following fixed rules.
Model switching: Many AI tools decide behind the scenes which model will handle a request. Chat-based applications often rely on several models with different strengths. The system picks one depending on how difficult it thinks your question is or where you are in the conversation. Because those models behave differently, the same prompt can produce different answers depending on which one is being used.
Personalization and memory: Users' stored preferences—writing guidelines, conversation style, punctuation to avoid—influence an LLM’s behavior. Two people can ask the same question and get different answers simply because their systems have learned different habits from past interactions.
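The first two characteristics above can be made concrete with a toy sketch. The model names, token distributions, and length-based routing heuristic below are all invented for illustration; real AI systems are vastly more sophisticated, but the two mechanisms—routing a request to one of several models, then sampling an answer from a probability distribution—work on the same principle.

```python
import random

# Hypothetical models, each with a made-up next-token distribution for the
# question "Is this avocado ripe?" (illustrative numbers, not a real model).
MODELS = {
    "fast-model": {"ripe": 0.6, "green": 0.4},
    "careful-model": {"ripe": 0.4, "green": 0.3, "nearly ripe": 0.3},
}

def route(prompt):
    """Model switching: pick a model with a crude difficulty heuristic
    (here, simply the prompt's word count)."""
    return "careful-model" if len(prompt.split()) > 8 else "fast-model"

def sample(model_name, seed=None):
    """Probabilistic modeling: draw one token at random, weighted by its
    probability -- so the same model can answer differently on each call."""
    probs = MODELS[model_name]
    rng = random.Random(seed)
    return rng.choices(list(probs), weights=list(probs.values()), k=1)[0]

prompt = "Is this avocado ripe?"
model = route(prompt)        # short prompt -> routed to "fast-model"
print(model, sample(model))  # the sampled answer varies from run to run
```

Running this repeatedly produces different answers from an identical prompt and an identical distribution—the everyday experience of asking a chatbot the same question twice.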
Professor John D. Sterman shared these words of wisdom on how to approach imperfect models within the context of systems thinking: “We must make the best decisions we can despite the inevitable limitations of our knowledge and models, then take personal responsibility for them.” His advice seems equally apt for generative AI tools today. Strengthening our systems thinking knowledge is one way to make better decisions when providing inputs to AI—and assessing its outputs.
In the workplace setting, AI can efficiently execute routine or narrowly defined tasks, freeing humans for higher-level, strategic work. As that happens, many roles are evolving to include ownership of larger workflows—workflows that require people to think in terms of systems rather than isolated steps.
That need will only deepen as organizations move beyond treating AI as a tool for single tasks and integrate it more fully into their operations. To maximize productivity gains from AI, many traditional business models and processes will need to be reimagined and redesigned. Those efforts rely on systems thinking because they involve understanding how work moves through a complex organization and how all the pieces influence one another.
The AI-enabled models and processes that emerge will connect systems, data, and teams in ways previously impossible, thereby increasing interdependence. The more points of interdependence there are, the more critical it becomes to understand how even small changes or disruptions can affect the system as a whole.
Though AI introduces complexity into organizations, it also offers a powerful way to strengthen systems thinking itself. At its best, AI can assist people in thinking more broadly and more clearly about the systems they develop and maintain.
For instance, LLMs can act as a thought partner by helping to map out the elements of a system, identify the relationships among them, and explore how changes in one area could influence everything else. The opportunity to minimize cognitive load with AI (while, of course, being mindful of cognitive debt) may prove especially valuable as companies take on the monumental task of designing new models and processes around AI.
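The kind of system map described above can be captured in a few lines of code. The business elements and "influences" edges below are hypothetical—the sort of rough draft an LLM thought partner could help produce—and a simple breadth-first walk then shows how a change in one area ripples through everything downstream.

```python
from collections import deque

# A hypothetical map of a small business system: each edge means
# "a change here can affect ...". (Illustrative, not from any real model.)
influences = {
    "pricing": ["sales", "support_load"],
    "sales": ["revenue", "support_load"],
    "support_load": ["customer_satisfaction"],
    "customer_satisfaction": ["sales"],
    "revenue": [],
}

def downstream_effects(system, start):
    """Breadth-first walk: every element a change at `start` could reach."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for nxt in system.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# A pricing change touches far more than revenue:
print(downstream_effects(influences, "pricing"))
```

Even this toy map surfaces a classic systems insight: a pricing change reaches customer satisfaction through support load, a second-order effect that is easy to miss when thinking in isolated steps.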
One thing is certain: the world is not getting any simpler. “Life is only getting more complex,” says Professor Crawley. “One of the characteristics of the 21st century is that we’re investing more in complexity, and things are just getting damn complicated.”
As AI and other emerging technologies push the boundaries of what’s possible, staying current requires more than learning the technology itself; it demands understanding how it functions as a system and how it fits into the systems around it.
That’s a tall order. Professor Sterman strikes at the heart of it: “Learning about complex systems when you also live in them is difficult. We are all passengers on an aircraft we must not only fly but redesign in flight.”
MIT xPRO’s online learning courses are built for this challenge. With AI courses that unlock practical skills and systems thinking courses that help you make sense of the world around you, there is something for any professional looking to upskill in an increasingly complex world.