A question on safeguarding humanity

A number of non-profit organizations seek, in one way or another, to safeguard humanity. Each has its own view of the risks humanity needs safeguarding from, whether or not that view is supported by science.

Two organizations very much on the science side of any debate on humanity’s future are The Long Now Foundation and The Lifeboat Foundation. The Long Now Foundation takes a 10,000-year view, whereas the Lifeboat Foundation appears to focus on this century. Both organizations are populated by some of the brightest scientific and intellectual minds on the planet.

Yesterday, 01.06.02015, an advisory board member of the Lifeboat Foundation posted a challenge to 800 or more colleagues in a members’ private forum. The challenge: to come up with new ideas for a “Plan of Action to Prevent Human Extinction Risks”, a plan laid out in an easy-to-follow chart format.

Quoting the author, Alexei Turchin: ‘…anyone who can suggest a new way of X-risk prevention that is not already mentioned in this roadmap…’.

A case was made to, and accepted by, the author to let this roadmap travel outside the halls of intellectuals and think tanks and into the worlds of engineers, oil & gas professionals, international viewpoints, grassroots artists, and white- and blue-collar workers alike, in search of new ideas.

So here lie an opportunity and a question:

  • opportunity: to send one’s thoughts on safeguarding humanity to The Lifeboat Foundation;
  • question: what new ideas for safeguarding humanity are out there?

The Turchin Plan of Action to Prevent Human Extinction Risks is pasted below and is freely downloadable and shareable. Comments and ideas arising will be directed to Alexei Turchin.

[It is interesting that there does not seem to be much emphasis on near-term risk, particularly over the next decade, as many exponential technologies move out of their deceptive phase and into a disruptive, very disruptive, phase. Consider, for example, the risks to humanity’s future within a decade as the world changes from fewer than 3 billion occasionally connected souls to 8 billion hyper-connected ones.]

What do you think?

[Chart: Plan of Action to Prevent Human Extinction Risks (see the first link under Resources)]
Resources:

https://xa.yimg.com/kq/groups/18795012/or/1229544550/name/globriskeng.jpg?download=1

http://lifeboat.com/blog/author/alexeiturchin

http://longnow.org/

https://lifeboat.com/ex/about

A short thought on AI

The debate in the crowd’s public square appears to be less about the inevitability of powerful AI and more about who will be at the control levers of the future.

For those aware of AI and looking up and forward at it and the future, there appear to be two ways this can go:

  • the power of AI will rest with the few, or with AI itself, and we are screwed again; or
  • the power of AI will be in the hands of the ~8-billion-strong crowd by 2025, and things might be alright after all.

Those who are able to look down and back at things largely concur: unless the crowd is involved and on board, things could get a bit ugly, and time is short.

More disruptive change is coming in the next decade than in the last four decades combined.

Only a small fraction of the crowd has heard of AI. How do you give people AI when they do not understand it, are not comfortable with it, and are more concerned with putting food in their bellies?

The challenge, then, is how to start a conversation and harness the power of AI to care and provide for the basic needs and dignities of the crowd, thereby winning the debate.

Is that not what this conversation is about?

Anyhow, just a short thought from a seasoned old roughneck.