The Metaverse (and the Future of Human Intelligence)

The Metaverse, by Matthew Ball, is required reading for anyone who fears a dystopic digital future.

If you want to help create a brighter future, your first act is to lean into understanding what is known.

Matt has scoured the digital world, interviewed all of the major players, and shares the prevalent thinking “ripped-from-the-headlines” alongside history “dug-from-the-trenches,” making this book a time-saving primer for you.

Matt walks us through a great working definition for what many are now calling the “Next Internet.” Then, he outlines the building blocks of the metaverse and their technical and social arcs. Next, we peek into how the metaverse will affect economics and key players. Last, Matt closes with what governments should consider regulating.

You’ll finish with a solid view of the metaverse landscape …

… and come up to speed on past, present, and future theories of what it will bring.

Having this framework beneath you matters to me, because it means we can play together in envisioning the “Next-Next” world.

Below, I build on Matt’s solid foundation and share my thoughts with you on the dangers and requirements for the metaverse to enable human transformation.

What is the “Next-Next” World?

The “Next-Next” world is a scenario where the metaverse fulfills its potential as a tool of transformation and becomes, among other things, an amplifier of human intelligence (HI).

Today, when we think about human intelligence (HI) in relation to artificial intelligence (AI), much of the response lands in one of three camps: fear and apathy, no worries, or a deeply technical path that includes adding chips and RAM inside our brains.

I believe that there is another path to increase human intelligence that includes leveraging technology like the metaverse to understand and amplify embodied and extended cognition.

AI is certain to continue its trajectory, but that does not mean we should cede the development and expansion of human intelligence, too.

I believe that a metaverse that can help develop healthy, skilled, unhobbled humans is a good goal, and I outline in part how to think about this in The Metaverse – a Growth Triangle for Billions.

The metaverse only develops this capacity to serve us if we endeavor to make it so.

We need to think about these things together to help shape what is next.

And reading Matt’s book is a great place to start.

I spend my time with an unlikely mix of scientists, coaches, technologists, philosophers, investors, and engineers.

Also, I see things a bit differently.

With this perspective, I’ve outlined the six core dangers in how people are thinking, or not thinking, about the metaverse today.

Here’s my short-list for how to “Get the Metaverse Wrong.”

Danger #1: Practice brains-in-jars thinking
Danger #2: Ignore evolution
Danger #3: Don’t fund science and cede the ground of understanding human psychology to ad-sellers
Danger #4: Get neurodata and biodata rights wrong
Danger #5: Build isolated layers vs. an ecosystem
Danger #6: Think that the metaverse is a game that won’t affect you

Let’s go through them one by one.

Danger #1: Practice brains-in-jars thinking

One of the biggest dangers in the metaverse is an extension of the concept that we humans are brains in jars.

We are this and we are not only this.

Our brains may be in our skulls, but our “minds” are distributed throughout our bodies, our spaces and places (digital or physical), and our interconnections between one another.

This body of scientific work is known as embodied cognition.

Much of what you see in the digital world has a disembodied perception of intelligence and humanness baked into it at a foundational level.

My concern is that if we do not bring a broader definition of human intelligence, senses, and cognition with us into the metaverse, we risk being deeply hobbled by our own technology.

We don’t flourish in that scenario.

Danger #2: Ignore evolution

The next biggest danger is to ignore our biology and assume the past we ignored equals the future we’ll create.

One of the biggest fears I hear from people is that the digital layer will be “so much better” that we abandon the real layer.

This ignores the headstart that evolution has had on human biology, interaction, our driving need to be together, and the known and unknown ways that togetherness nourishes us in-person.

Stand back for a moment and ponder this.

As a species of curious creators, we have yet to turn our attention to magnifying our altruism, our deep connectedness, and the other aspects of ourselves that care for one another.

No one got off the Serengeti alone, and we all still have those genes, because evolution is a slow and ponderous beast.

The danger here is that we mistake our current skill gap around togetherness for a foundational flaw in who we are, rather than building the solutions to strengthen this core trait.

If we do NOT bridge this gap, then yes, we will choose a mostly digital existence.

But only because we never learned how to truly inhabit the real.

This metaverse danger is less an outcome of technology than it is a failure of imagination. (See Brains-in-Jars)

The true crime of Ready Player One was not that the digital world was so extraordinary it seduced us from the physical; it’s that they built an extraordinary world because they screwed up this one and had no choice.

We do have a choice.

Danger #3: Don’t fund science and cede the ground of understanding human psychology to ad-sellers

The next danger is ceding the use of tech for understanding human psychology to ad-sellers.

The next biggest fear I hear is some version of “but technology only exacerbates our terrible natures and polarizes us.”

And that is true for some of the technology humans have built so far, but not all.

This fear is absolutely true of attention-economy tech (which is what most people refer to when they say “tech” in this context).

And because of the abuse we’ve seen so far, we are at best cautious and at worst terrified.

But understanding ourselves is essential to increasing human intelligence (HI) – we cannot leave it to the ad-sellers.

I believe we should use technology to understand, amplify, personalize and optimize our biology to improve human intelligence, wellbeing, and interpersonal relations.

I believe we should leverage technology to serve science and scientists to develop the understanding that will allow us to “catch up” our minds and embedded pro-social behaviors to counteract, and then solve, our current existential crises.

For example, nature has little excess, yet the volume of biosignal broadcasting and receiving that humans do in every live interaction suggests that something meaningful is happening.

The key is that all this data is very personal and not universal (i.e., emotions are constructed; see How Emotions Are Made).

We are finally getting to the place, technically, where we can use sensors, machine learning, wearables, and much more to begin to witness biosignals and match them to context in the deeply personal way required to begin to make any sense of ourselves.

We need to fund this science. We need to understand so we can build bridges to our future.

Danger #4: Get neurodata and biodata rights wrong

Governments must regulate and empower humans to own their data.

Matt talks a bit about sensors that can capture neuro or biodata via VR.

While the tools for this are in their infancy in terms of the metaverse, the sheer volume of neurodata and biodata already being captured daily across all apps, tools, and recording mechanisms is not.

The data simply hasn’t been connected to the digital sphere in a way that scares you as much as it should.

I spend a significant amount of my time in discussion and communication with organizations like BrainMind, and with people like Dr. Rafael Yuste and the Neurorights Initiative, who are leaning into this.

The level of self-understanding we must attain to become the humans who can build the future will also convey a level of power that cannot be left to industry to self-regulate.

It’s too big a discussion for this post.

Danger #5: Build isolated layers vs. an ecosystem

The next biggest danger is to see the digital layer as an isolated instance versus one part of an interoperable human ecosystem.

The definition that Matt shares in the book includes “interoperability,” which means that humans can move from experience to experience on the digital layer.

I believe that we should build interoperability not just across digital experiences but between layers.

One key to not abandoning the physical layer is to have a design space that considers the entire stack – real, augmented, and digital – as a single ecosystem where each layer serves according to its best use.

Seen as an essential part of an ecosystem, the metaverse can be used not only to help us grow and evolve but also to solve the problems referred to in the dangers above.

For example, many imagine the metaverse changing education.

But how might we change education if we adopted a multi-level ecosystem view, one that leverages the physical layer, our biology, and an extended definition of human cognition and intelligence? (See UCSD’s Embodied Cognition Centers)

Or, what if one use of the metaverse is to teach us how to have better conflicts?

I’m inspired by the work of Jonathan Stray, a Senior Scientist at the Center for Human-Compatible AI, who looks at how humans can learn to have “better” conflict.

Given my background in games, and my proximity to healers of all kinds, people often suggest that the darker elements of being human should be removed entirely from interactive media.

I don’t agree with that because as humans we have a full range of emotions.

I think the better path is to learn how to have conflicts, to understand how to have them productively.

I do not believe that we can leave our emotions behind, but we can use digital spaces to teach us the consequences of conflict in its main forms, and then also teach us better conflict without dangers to our physical bodies. (See Better Conflict)

Or, what if, unburdened by evolutionary fitness-driven views of reality, we were able to use the digital layer to expand who we are in the real world? (See Donald Hoffman: The Case Against Reality)

Danger #6: Think that the metaverse is a game that won’t affect you

You really don’t get to reject this one if you’re paying attention.

You cannot help shape what you don’t understand.

Everyone born today is a gamer.

And no matter your age, you are your high level character.

Let me know your ideas on how we can solve these dangers.

It’s going to take all of us to get it right.

Love,
Nichol

P.S. Read Matthew’s book so we can play together in envisioning the “Next-Next” world.

P.P.S. Here are the other articles in this series, in order.

5/9/2022 – Overview: Deep Human – Warm Up
5/15/2022 – Build while Being
5/25/2022 – My Core Premise
6/1/2022 – Growth Triangle
6/8/2022 – The Choice
6/15/2022 – The Growth 🔼 – The Metaverse — a Growth Triangle for Billions?
6/22/2022 – Design Principles – The Metaverse as Human Potential Tech
7/7/2022 – The Stack
7/14/2022 – My Origin Story — And Why We Need Human Embodiment AND Technology

Comments (2)

  1. Terence Daniels

    What is your opinion around the idea that the metaverse was suddenly invented in order to steer the economy, excite the markets, and hopefully widen the playing field past what are already the major players in the gaming industry?
    Please explain how the transmutation of exact copies of humans into a world other than our own will enhance relations between people who are already immersed in the gaming experience.

  2. Mary Alber

    Nichol – Your latest post on sensory tech – and my response about leveraging the metaverse to build human connection – inspired me to read this one from July on the metaverse book by Matthew Ball.

    Resonating with your profound closing thoughts:

    “You cannot help shape what you don’t understand. Everyone born today is a gamer.
    And no matter your age, you are your high level character.”

    As someone born far from the age of games and having raised digital native indigo kids who face all the negative social-emotional impacts of social media misuse, I am deeply concerned and committed to finding how to leverage technology for GOOD – for positive human evolution – starting with a rebuilding of SAFETY and BELONGING in the digital age – the crucial bottom foundation of Maslow’s pyramid that we’ve let ROT in our society!

    I’m conducting a prototype program for middle school kids right now (through my non-profit Education Innovation Collaborative) to help kids build their capacity for self-understanding, confidence, and belonging through designing and building entrepreneurial projects together. One of the project teams is VR game design where they are trying to create an experience that is healthy and wholesome for kids like them – not addictive and violent.

    While I haven’t read Ball’s book yet (just ordered), I wonder what you and he would say to our inviting more of our YOUTH to design the experiences in XR – Extended Reality (the umbrella term for VR/AR/MR, per the MIT course I’m taking on VR design)? Coaching them on how to design stories and game play that solve meaningful problems in society – while enhancing their own health, well-being, and personal goals?
