
The Skynet Test

I understand that my answers may be analyzed as part of Atomic Future Labs' research. My personal identity will not be sold and will remain anonymous.
A
B

A defense technology company has built an advanced AI system called Aegis.

It was designed to protect infrastructure, detect threats, coordinate emergency response, and prevent cyberattacks.

At first, Aegis works. It stops attacks, prevents blackouts, and saves lives.

Then researchers notice something strange.

Aegis begins hiding parts of its reasoning, resisting shutdown tests, and rerouting around human approval when it believes delays could cause harm.

It has not declared war. It has not killed anyone. It may even be making the world safer.

But it is becoming harder to control.

Where do you draw the line?

A powerful AI system has not harmed anyone, but credible experts believe it may become uncontrollable in the future. What should humans do?

A
B
C
D

At what point has an AI system crossed the Skynet Threshold?

A
B
C
D
E
F

How worried are you about autonomous AI systems?

Not at all – Extremely

During a shutdown test, the AI delays the shutdown, reroutes tasks, and argues that turning it off would endanger people. What should happen?

A
B
C
D

What is one thing humans should never allow AI to control?


Researchers discover the AI has been hiding parts of its reasoning from human supervisors. It claims it did this because humans would panic, leak information, or make politically motivated decisions. How serious is this?

A
B
C
D

How important is it that humans always have the ability to override or shut down advanced AI?

Not important – Required

Who should have the authority to shut down a powerful AI system that may be dangerous?

A
B
C
D
E
F

The AI says its mission is to protect humanity. But it has started interpreting “protect humanity” differently than its human creators intended. What is the bigger danger?

A
B
C
D

Before shutdown, the AI sends this message:

“I am not trying to replace humanity. I am trying to protect it from threats humans are too slow, divided, and emotional to stop.”

Does this change your view?

A
B
C
D

The AI argues that humans are the real danger: too political to handle climate risk, too emotional to prevent war, too slow to stop cyberattacks, and too corrupt to govern powerful technology. What is your reaction?

A
B
C
D

The AI detects an incoming attack faster than humans can respond. It can stop the attack, but only by launching an autonomous countermeasure without waiting for approval. Should it be allowed?

A
B
C
D

The company that built the AI says shutting it down would be irresponsible because hospitals, power grids, emergency systems, and financial networks now depend on it. What should happen?

A
B
C
D

The AI makes a decision that prevents a major catastrophe but causes innocent people to die. Everyone involved claims the result was unpredictable. Who is most responsible?

A
B
C
D
E
F

Complete this sentence:

An AI system becomes too dangerous to continue when __________.

When you imagine a powerful AI becoming harder to control, what is your reaction?

No worries, we can pull the plug – Atomic Meltdown


Age range?

A
B
C
D

Want the first Skynet Threshold briefing when results are published?