Editor’s note: Because this issue is so crucial to our lives, I’ve broken the original post down into six topics—each in its own post to allow for easier reading.

This is a huge topic. That’s why the original post ran to more than 7,000 words and why I have broken it out into six separate posts. Even so, it didn’t cover nearly every aspect of the importance of AI to our future. My hope is to motivate readers to think about this, learn more, and have many conversations with their families and friends.

The six topics are listed below; feel free to jump around.

  1. An introduction to the opportunities and threats we face as we near the realization of human-level artificial intelligence
  2. What do we mean by intelligence, artificial or otherwise?
  3. The how, when, and what of AI
  4. AI and the bright future ahead
  5. AI and the bleak darkness that could befall us
  6. What should we be doing about AI anyway? (this post)

While it may be true that we are inventing our replacements in so many of life’s activities, we need to ensure that things don’t go off the rails and that we don’t create the bringers of our own demise. Researchers, philosophers, computer scientists, and others are working on the problems of artificial intelligence goal alignment and safety right now, and the field will only grow. But what can laypeople do to help ensure that when AI is developed, it is developed safely? I think the activities with the biggest impact fall into four broad groups: learning more, discussing with others, supporting the effort, and, for a few, directing your career at the problem itself.

Learn more

If you’re inspired to learn more, I heartily encourage it. To get started, here are a few sources that have inspired me:

Discuss with family and friends

Chances are, if you’ve just read these posts, you are now more informed on AI and AI safety issues than nearly anyone you know or are likely to meet for a while. Most people simply don’t spend time on this.

Max Tegmark asks these seven questions at the beginning of Chapter 5 of Life 3.0. While interesting in their own right, these questions serve as great jumping-off points for conversations. I encourage you to have those conversations, both here on this site in the comments and back in your own lives.

  1. Do you want there to be superintelligence?
  2. Do you want humans to still exist, be replaced, cyborgized, and/or uploaded/simulated?
  3. Do you want humans or machines in control?
  4. Do you want AIs to be conscious or not?
  5. Do you want to maximize positive experiences, minimize suffering, or leave this to sort itself out?
  6. Do you want life spreading into the cosmos?
  7. Do you want a civilization striving toward a greater purpose that you sympathize with, or are you OK with future life forms that appear content even if you view their goals as pointlessly banal?

You can take a survey featuring some of these concepts (and more) and see the results at the Future of Life website. Just read the twelve aftermath scenarios carefully. For example, I thought the Egalitarian Utopia sounded good, but the survey alone doesn’t make clear that it entails suppressing any superintelligent AI development (a restriction I oppose).

For my part, I am already planning on breaking this immense and important subject down further in future posts. If you have concepts, ideas, questions, etc., on things you would like to see me research and write about, feel free to mention them in the comments or reach out to me directly.

“Most benefits of civilization stem from intelligence, so how can we enhance these benefits with artificial intelligence without being replaced on the job market and perhaps altogether?” —The Future of Life Institute

Support AI safety efforts

Not everyone is an AI researcher, but nearly everyone can support the work of those who are. There are numerous researchers engaged in important work.

“In spring of 2015, FLI launched our AI Safety Research program, funded primarily by a generous donation from Elon Musk. By fall of that year, 37 researchers and institutions had received over $2 million in funding to begin various projects that will help ensure artificial intelligence will remain safe and beneficial.” —The Future of Life Institute

Here’s a small list of them, along with the funding they have sought.

There are several prominent non-profits that could make use of your generosity. Among them are:

The Machine Intelligence Research Institute (MIRI), which focuses on mathematical research that academia is unlikely to produce, aiming to build the foundations for the development of safe AI.

The Future of Humanity Institute—a mature, well-established research institute, affiliated with Oxford and led by Nick Bostrom (author of Superintelligence: Paths, Dangers, Strategies).

OpenAI — started by tech heavyweights including Elon Musk (of Tesla, SpaceX, Neuralink, and more), LinkedIn co-founder Reid Hoffman, and PayPal co-founder Peter Thiel, with a mission to build safe AGI and ensure AGI’s benefits are as widely and evenly distributed as possible. They even publish a suite of software tools to help with that effort.

If you have trouble deciding, the Open Philanthropy Project is attempting to determine which groups are most effective. With this type of research, however, it may be hard to gauge each group’s impact for some time.

Get involved directly

Donations not your thing? Want to get your hands dirty? Well, are you a computer programmer? Are you tasked with writing programs that analyze piles of data? Are you working on algorithms or machine learning to make that easier? Then you are at the forefront of the effort to create artificial intelligence, even if you don’t think of it that way. You can use some of OpenAI’s tools to aid in testing, and even pledge to create only ethical technologies.

Not yet an engineer? Consider getting a PhD in Computer Science.

You can also support AI safety through activism and your civic duties. Support organizations and candidates who demonstrate an understanding of the risks and a willingness to advocate, negotiate, and legislate to bring AI about safely, rather than those trying to fight any regulation at all.

If you can’t beat them…

Want to get on our robot overlords’ good side? You could join the Way of the Future church, whose mission is to create a peaceful and respectful transition of who is in charge of the planet from people to people + “machines”.


Other parts in this series (in case you missed them):

Part One: An introduction to the opportunities and threats we face as we near the realization of human-level artificial intelligence
Part Two: What do we mean by intelligence, artificial or otherwise?
Part Three: The how, when, and what of AI
Part Four: AI and the bright future ahead
Part Five: AI and the bleak darkness that could befall us


This is such an immense topic that I ended up digressing to explain things in greater detail or to provide additional examples, and those digressions bogged the post down. There are still some important things I wanted to share, so I have included them in a separate endnotes post.