The Greatest Threat to Humans: It Is Not AI

I have been through many technology evolutions and revolutions, from mainframes to cloud services. Now we have a new one: Artificial Intelligence.

Has anyone noticed that we are frightening each other with our very choice of words? Machine Learning. Artificial Intelligence.

I had a great conversation with a person who has been on the front lines of these new technologies and has a unique perspective on why they may fail or, like many tools humans have invented in the past, may be used in dangerous or ineffective ways.

We spoke with Brian Evergreen, founder of The Profitable Good Company. In 2023 he published a book, Autonomous Transformation: Creating a More Human Future in the Era of Artificial Intelligence. He has been an executive at Microsoft, Amazon Web Services, and Accenture, developing unconventional and innovative methods and frameworks that have helped launch a number of digital transformation initiatives.

I attempted to get to the root of the word "human." How would we have described ourselves 1,000 years ago? How would we describe ourselves today? What will we look like in the future?

Brian stepped us back to what hasn't changed. There is a thread that has been constant and, he believes, will continue into the future: We are a species that searches for meaning, attempts to create meaning, and attempts to explain meaning to others and ourselves. We are doing this now with the current emerging technologies. We invent tools to give us more time for exploration and to build expressions of meaning in our lives. We love to offload burdensome tasks.

I challenged Brian with the statement that we may be creating a new iteration of humanity, a synthesis of man and machine. An evolutionary moment. But he had a great response. We don't even understand our own "sentience," our underlying cognition or sense of self. We have mathematical expressions being calculated by enormous computing power. But they are still tools.

In his book, Brian dives into how we evolved our current language, leadership, and systems around the industrial age. We took the scientific method and applied something called scientific management: if we can just measure the velocity and quality with which all these human machines produce things, we can make small iterations over time that yield the profits we need.

When he was working with executives at the world's largest tech companies, he saw leaders getting in their own way by treating problems as single chess pieces on a board: moving one piece at a time without ever looking at the whole game board. You will lose that game every time. If we are going to harness AI and the other technologies that are evolving, we have to stop playing small ball with our people, processes, and tools, and turn to the whole board, where value creation takes place. He spends some time sharing his "Future Solving" approach as the antidote to incremental thinking. This guided us into a new definition of leadership and change that many of us feel is needed.

I asked Brian what he will be telling his children about this future we are entering. He didn't hesitate: study literature and history. Learn from the past to inform your future. And let them know there is no one way to see the world or navigate through it. It wasn't long ago that life expectancy was 48 years. The idea of being able to get in a car or on a plane and visit a different environment or a different culture has only been possible in the last 100 years. There were diseases we could not cure.

And for every positive advance, we humans find a way, intentionally or not, to harm someone or something. But this has been going on since the beginning of time. We knew certain animals or certain humans were dangerous, but we created social systems and tools to mitigate the risk. We will do the same with these evolving technologies.

Great promise is not without risk.

This was a great conversation with a man who believes that he, and all of us, have agency in this world to pursue meaning on our individual and collective paths to value.