Andrew Maynard and AI and the Art of Being Human

The book “AI and the Art of Being Human” became available for purchase on October 14, 2025. Co-author Andrew Maynard is (in his own words) “a scientist, author, Professor of Advanced Technology Transitions, and founder of Arizona State University’s Future of Being Human community. He studies the future and how our actions influence it.”

The last mention of Andrew here is in an October 11, 2024 posting, “Modem Futura: a podcast about where technology, society and humanity converge,” and his new book appears to be directly related to at least one of the topics covered in his podcast series. (The podcast is part of his and co-founder Dr. Sean Leahy’s “Future of Being Human Initiative” at Arizona State University [ASU].)

Modem Futura’s October 7, 2025 episode (no. 52, 1 hr. 12 mins. on Apple) highlights the upcoming publication date for “AI and the Art of Being Human,”

This week, Sean Leahy and Andrew Maynard welcome venture capitalist and AI Salon founder Jeffrey Abbott to launch the new book AI and the Art of Being Human—a practical, hands‑on guide to thriving with AI while rediscovering what matters most. Together, they unpack where the idea came from, why they fast‑tracked the project, and how they co‑created with AI (moving from ChatGPT to Anthropic’s Claude) using a “shared compass,” voice training, and a living “lore book” to keep characters and story arcs consistent. Instead of dry case studies, the book uses vivid, cinematic global vignettes and 21 simple tools (from reflection prompts to the “conductor triangle” of data–context–intuition) to help readers shift away from competing with AI and toward value rooted in relationships, meaning, and personal dharma. The team also explores the four‑posture compass—Curiosity, Clarity, Intentionality, and Care—and how compassion and responsible innovation thread through every chapter (right down to a physical pocket card). Beyond writing, the episode pulls back the curtain on indie publishing (Waymark Works), the realities of e‑book production, and why the book is available via Amazon and mainstream book channels—alongside a call to grow intentional communities through AI Salon’s 70+ chapters worldwide. It’s an honest, practical, and hopeful conversation about building protopian futures with AI—without losing yourself.

Jeffrey Abbott (the book’s co-author) has a profile on the Blitzscaling.com website’s About page,

We partner with tech founders to apply the principles of blitzscaling that we developed at Stanford University with Chris Yeh and Reid Hoffman.

It all began with a simple question: What is the secret of Silicon Valley? How have startups there grown so quickly to global scale? And what are the most effective ways other startups do the same? …

Jeff Abbott

Jeff is a Venture Capitalist at Blitzscaling Ventures, focusing on early-stage AI companies at the nexus of AI and Blitzscaling, as well as growth-stage blitzscalers in winner-takes-most markets. He spearheads the fund’s international deal-sourcing efforts and leads the Blitzscaling Ventures Fellows program. As the Managing Partner at Blitzscaling Academy, he supports founders and innovators in amplifying their scale through the Blitzscaling framework, offering tools like the Blitzscaling Toolkit and the BlitzChat bot. Jeff also founded and leads the AiSalon global community, a network for GenAI builders, investors, and partners. Known for his multilingual abilities, he is an ecosystem builder, facilitating the scaling of companies worldwide.

So, ‘blitzscaling’ describes a methodology for replicating the Silicon Valley tech experience? Hmm …

The “AI and the Art of Being Human” website is here. You can also download a preview of the first 50 pages. (The preview includes one of my favourite things, a Table of Contents.)

Was this book written with AI?

I found the answer in the preview, Note: Footnote not included,

… given our focus on the intersection between AI and who we are, we weren’t content to create and tell our own stories: we wanted to live what we were writing by intentionally co-creating them with AI.

The result is a book that represents a deep and—we believe—unique collaboration with artificial intelligence. Not something churned out by ChatGPT over a weekend, but months of methodical work exploring how we could write with AI, and in the process reveal more about the art of being human than we could otherwise achieve.

As a consequence, the stories you read here, the insights we tease out of them, the tools we construct through them—are all the result of a systematic process of working with artificial intelligence. It was a process that was at times deeply frustrating. But it also led to profound moments of revelation—not because the AI was somehow better than us, or more knowledgeable, but because it was able to mirror our ideas, insights and aspirations back to us in ways that were deeply—and sometimes startlingly—generative. [pp. 8 – 9 print version; pp. 21 – 22 PDF version]

AI, humanity, and creativity

The book is topical; for example, Clea Simon wrote an October 9, 2025 article (What will AI mean for humanity?) for The Harvard Gazette. Given that Maynard and Abbott were ‘co-writing’ with AI, these excerpts are from the parts of the article that concern writing and AI, Note: Links have been removed,

What does the rise of artificial intelligence mean for humanity? That was the question at the core of “How is digital technology shaping the human soul?,” a panel discussion that drew experts from computer science to comparative literature last week.

The Oct. 1 [2025] event was the first from the Public Culture Project, a new initiative based in the office of the dean of arts and humanities. Program Director Ian Marcus Corbin, a philosopher on the neurology faculty of Harvard Medical School, said the project’s goal was putting “humanist and humanist thinking at the center of the big conversations of our age.”

“Are we becoming tech people?” Corbin asked. The answers were varied.

“We as humanity are excellent at creating different tools that support our lives,” said Nataliya Kos’myna, a research scientist with the MIT Media Lab. These tools are good at making “our lives longer, but not always making our lives the happiest, the most fulfilling,” she continued, listing examples from the typewriter to the internet.

Generative AI, specifically ChatGPT, is the latest example of a tool that essentially backfires in promoting human happiness, she suggested.

She shared details of a study of 54 students from across Greater Boston whose brain activity was monitored by electroencephalography after being asked to write an essay.

One group of students was allowed to use ChatGPT, another permitted access to the internet and Google, while a third group was restricted to their own intelligence and imagination. The topics — such as “Is there true happiness?” — did not require any previous or specialized knowledge.

The results were striking: The ChatGPT group demonstrated “much less brain activity.” In addition, their essays were very similar, focusing primarily on career choices as the determinants of happiness.

The internet group tended to write about giving, while the third group focused more on the question of true happiness.

Questions illuminated the gap. All the participants were asked whether they could quote a line from their own essays, one minute after turning them in.

“Eighty-three percent of the ChatGPT group couldn’t quote anything,” compared to 11 percent from the second and third groups. ChatGPT users “didn’t feel much ownership,” of their work. They “didn’t remember, didn’t feel it was theirs.”

“Your brain needs struggle,” Kos’myna said. “It doesn’t bloom” when a task is too easy. In order to learn and engage, a task “needs to be just hard enough for you to work for this knowledge.”

Moira Weigel, an assistant professor in comparative literature at Harvard, took the conversation back before going forward, pointing out that many of the questions discussed have captivated humans since the 19th century.

Weigel, who is also a faculty associate at the Berkman Klein Center for Internet and Society, centered her comments around five questions, which are also at the core of her introductory class, “Literature and/as AI: Humanity, Technology, and Creativity.”

“What is the purpose of work?” she asked, amending her query to add whether a “good” society should try to automate all work. “What does it mean to have, or find, your voice? Do our technologies extend our agency — or do they escape our control and control us? Can we have relationships with things that we or other human beings have created? What does it mean to say that some activity is merely technical, a craft or a skill, and when is it poesis, or art?”

Looking at the influence of large language models in education, she said, “I think and hope LLMs are creating an interesting occasion to rethink what is instrumental. They scramble our perception of what education is essential,” she said. LLMs “allow us to ask how different we are from machines — and to claim the space to ask those questions.”

Also, there’s the upcoming “Who’s afraid of AI? Arts, Sciences, and the Futures of Intelligence,” a conference and arts festival at the University of Toronto (mentioned in my October 13, 2025 posting; scroll down to the “Who’s Afraid of AI …” subhead), which has some content relevant to the points brought up at Harvard and in “AI and the Art of Being Human.”
