
ai-human-boundary

🌿 A Boundary Guide for AI and Humans to Choose Value Together

This repository is based not on the premise of:
AI → Human
but on that of:
AI + Human → Value

It is intended to organize and record the minimum things humans should
know about boundaries when interacting with AI.


What is this repository?

This repository does NOT cover:

  • How to use AI smartly
  • Design for delegating to AI
  • Discussions on AI accuracy or performance

It covers only one thing:

When thinking together with AI,
from what point onwards should humans take over?

Premise: This is not "Design" but a "Guide"

The content written here is:

  • Not rules
  • Not implementation specifications
  • Not meant to enforce judgment criteria

This is a guide for understanding so that humans do not break in the AI era.


1. AI cannot "share uncertainty"

Between humans, these things happen naturally:

  • Worrying together
  • Betting together
  • Holding anxiety together
  • Shouldering responsibility together

AI structurally cannot do these things.
AI can "handle" uncertainty, but it cannot share the burden of uncertainty.

Therefore,

Backing uncertain choices is the role of humans.

2. AI's answers are polished but omit uncertainty

AI's answers are:

  • Smooth
  • Unwavering
  • Plausible

However, they include none of the following:

  • Hesitation
  • Anxiety
  • Betting
  • Risk
  • Responsibility

When necessary, you need to recall the following fact:

AI omits uncertainty

This is not a defect, but a property.


3. AI cannot decide for itself "whether to close the decision"

AI cannot observe whether its output is:

  • Used as a reference
  • Becoming the basis for action
  • Substituting someone's judgment

Therefore, AI must be given structural conditions such as:

  • Cannot close if responsibility is dispersed
  • Cannot close if it cannot be undone
  • Cannot close if a social context arises

This way of thinking is organized as **Decision Closure**.


4. Humans also need a "Not Known Lamp"

There is a concept called idk-lamp for AI.
However, in reality, humans need it too.

  • Cannot judge right now
  • Cannot decide right now
  • Uncertainty is too large right now

These are signals to convey such states to oneself and others.

Being able to say "I don't know" is
an important skill for humans in the AI era.

5. AI proposals do not shoulder "responsibility for action"

In the layer of action, there are only the following 3 axes:

  • Known / Unknown
  • Acted on proposal as is / Human made a judgment
  • Human themselves gave up

In other words,

Responsibility for action always lies with humans

AI cannot be an accomplice in action.


6. AI does not know "human values"

AI has no value judgments of its own.

  • What to cherish
  • What not to lose
  • What future to choose
  • What risks to tolerate

These are territories unique to humans.
AI can infer values, but it cannot share them.


7. AI is an aid, not a substitute

AI can:

  • Provide materials for judgment
  • Increase perspectives
  • Organize thoughts

However, it does not:

  • Give you the final push
  • Participate in the gamble
  • Hold responsibility

That is precisely why,

The final judgment should be closed by humans

Because that is not a burden of responsibility, but the act of choosing value.


Summary (Short version)

To get along with AI,
these are the minimum things humans need to know:

  • AI cannot share uncertainty
  • AI's answers omit uncertainty
  • AI cannot decide for itself whether to close a decision
  • Humans also need a "Not Known Lamp"
  • Responsibility for action always lies with humans
  • Value judgment is the human domain
  • AI is an aid, not a substitute

Positioning of this repository

  • Decision Closure:
    Structure that decides whether AI can close a judgment
  • ai-human-boundary:
    Guide for humans to choose value together with AI

This repository deals with how humans stand outside the structure.


Status

  • State: Draft / Living Document
  • Purpose: Organization so as not to forget
  • Implementation/UI: Out of scope

Check the original

This project is designed as a practical signal that marks
the boundary where AI systems must stop deciding
and defer responsibility to humans.

This work originates from ongoing exploration of
design, responsibility, and boundaries
in AI-assisted systems.

1. Overall Picture

Overall Picture of the Boundary Between AI and Humans

This document summarizes the overall picture of the "boundary" that **ai-human-boundary** deals with.

While the README serves as the minimum guide at the "entrance,"
here we organize the structure behind it.


1. Overall Structure (Map of Layered Structure)

When AI and humans choose value together,
the relationship becomes a multi-layered structure as follows:

Value (VCDesign)

Human Decision-Making Layer (Handling Uncertainty)
├─ Human-side idk-lamp (Signal that decision cannot be closed)
└─ Metacognition of Uncertainty (AI omits uncertainty)

AI + Human → Action (Operation Layer)
├─ Known / Unknown
├─ Proposal as is / Human decided
└─ Human also gave up

Decision Closure (Whether AI can close the judgment)
├─ 5 Steps (Responsibility / Cancellation / Reversibility / Chain / Social Context)
└─ Transition (Midway Reversal)

BOA (Boundary of the World)

What this repository covers is:
The layers above Decision Closure (Human Decision-Making Layer to Value).


2. Areas Covered by This Repository

◆ 2.1 Human Decision-Making Layer

When AI and humans think together,
the following two structures are required on the human side.

● Human-side idk-lamp

  • Cannot judge right now
  • Cannot decide right now
  • Uncertainty is too large right now

A signal to convey this to the surroundings.
Just as AI needed an idk-lamp, humans need the same thing.

● Metacognition of Uncertainty

AI's answers are smooth and polished, but
uncertainty is omitted.

Therefore, when necessary, humans need to recall the fact that:

"AI cannot share uncertainty"

◆ 2.2 Action Layer (AI + Human → Action)

Responsibility for action always lies with humans.
Its structure can be represented by the following 3 axes:

  1. Known / Unknown
  2. Acted on proposal as is / Human made a judgment
  3. Human themselves gave up

These 3 axes are
the "minimum structure" when AI and humans cooperate to act.


3. Relationship with Decision Closure

Decision Closure (Judgment Circuit Structure) is
a structure to decide **whether AI can close the judgment**.

  • Is the responsible entity a single person?
  • Does it not depend on psychological cancellation?
  • Is it physically cancellable?
  • Does the action not chain?
  • Does it not carry social context?

If these conditions collapse,
AI does not close the judgment and it becomes **Human-Closed**.

This repository deals with
the boundary of human decision-making
that lies "outside" Decision Closure.
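As one possible illustration, the five conditions above can be sketched as a small check. This is a sketch only, not part of Decision Closure itself: the names `DecisionContext` and `can_ai_close`, and the exact field choices, are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical sketch of the five Decision Closure conditions.
# All names are illustrative, not part of any specification.
@dataclass
class DecisionContext:
    single_responsible_party: bool     # Is the responsible entity a single person?
    physically_cancellable: bool       # Can the action be physically undone?
    psychologically_independent: bool  # Free of reliance on psychological cancellation?
    action_chains: bool                # Does the action trigger further actions?
    carries_social_context: bool       # Does the action carry social context?

def can_ai_close(ctx: DecisionContext) -> str:
    """Return 'AI-Closed' only if every condition holds; otherwise 'Human-Closed'."""
    if (ctx.single_responsible_party
            and ctx.physically_cancellable
            and ctx.psychologically_independent
            and not ctx.action_chains
            and not ctx.carries_social_context):
        return "AI-Closed"
    return "Human-Closed"
```

For example, a judgment that carries social context fails the check and becomes Human-Closed even if every other condition holds.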


4. Relationship with BOA (Boundary of the World)

BOA is the layer that defines the "boundary of the world,"
determining the outer frame of how far AI can step in.

  • What is AI's responsibility?
  • What is human responsibility?
  • How far is AI's domain?

Decision Closure rides on top of BOA,
and ai-human-boundary is positioned even above that.


5. Purpose of This Repository

  • Organize the "boundary" for AI and humans to choose value together
  • Leave a guide for understanding so that humans do not break in the AI era
  • Clarify "human roles" outside of Decision Closure

This is neither design nor specification,
but a **guide for humans**.

2. Human-side idk-lamp

"idk-lamp" on the Human Side

This document organizes the idea that
humans also need an idk-lamp (a signal that judgment cannot be closed).

Just as AI needed an idk-lamp,
humans in the AI era also need the same structure.


1. Why is a "Human-side idk-lamp" Necessary?

When AI and humans think together,
humans are often placed in the following situations:

  • Have material for judgment, but cannot fully decide
  • Uncertainty is too large
  • Cannot read future impact
  • The weight of responsibility is great
  • Choice of value is involved

However, humans often silently bottle this up.

As a result, problems occur such as:

  • Making a decision forcibly
  • Adopting AI's proposal as is
  • Pretending to be capable when actually unable to judge

What is needed to prevent this is
the **Human-side idk-lamp**.


2. What the Human-side idk-lamp Indicates

The human-side idk-lamp indicates the following states:

  • Cannot judge right now
  • Cannot decide right now
  • Uncertainty is too large right now
  • Cannot assume responsibility right now
  • Cannot decide the value of the choice right now

This is not weakness, but
a signal to protect the soundness of decision-making.
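Purely as an illustration, these states could be modeled as a simple signal. The enum, its members, and the helper below are hypothetical; the point is only that the lamp makes an internal state explicit and shareable.

```python
from enum import Enum, auto

# Hypothetical states of a human-side idk-lamp; all names are illustrative.
class IdkLamp(Enum):
    CANNOT_JUDGE = auto()                  # Cannot judge right now
    CANNOT_DECIDE = auto()                 # Cannot decide right now
    UNCERTAINTY_TOO_LARGE = auto()         # Uncertainty is too large right now
    CANNOT_ASSUME_RESPONSIBILITY = auto()  # Cannot assume responsibility right now
    CANNOT_DECIDE_VALUE = auto()           # Cannot decide the value of the choice

def lamp_message(state: IdkLamp) -> str:
    """Translate a lamp state into a plain signal for oneself, AI, and others."""
    messages = {
        IdkLamp.CANNOT_JUDGE: "I cannot judge right now",
        IdkLamp.CANNOT_DECIDE: "I cannot decide right now",
        IdkLamp.UNCERTAINTY_TOO_LARGE: "Uncertainty is too large right now",
        IdkLamp.CANNOT_ASSUME_RESPONSIBILITY: "I cannot assume responsibility right now",
        IdkLamp.CANNOT_DECIDE_VALUE: "I cannot decide the value of the choice right now",
    }
    return messages[state]
```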


3. "Uncertainty" Naturally Shared Between Humans

In conversations between humans,
uncertainty is naturally shared as "atmosphere".

  • The other person's hesitation
  • The other person's anxiety
  • The other person's sense of responsibility
  • The other person's sense of betting
  • The wavering of the other person's values

These are conveyed without words.
Therefore, humans can make decisions while sharing uncertainty.


4. Uncertainty is Not Shared in Human + AI

AI structurally cannot do the following:

  • Worry together
  • Bet together
  • Hold responsibility together
  • Hold anxiety together
  • Imagine the future together

In other words, AI **cannot become an accomplice in uncertainty**.
Therefore, unless humans say "I don't know",
AI cannot recognize that uncertainty.


5. Roles Played by the Human-side idk-lamp

The human-side idk-lamp has the following roles:

● ① Signal to oneself

Metacognition to recognize the fact
that "I cannot decide right now".

● ② Signal to AI

Convey to AI
that "it is not the stage to close the judgment".

● ③ Signal to surroundings (humans)

Share the state with the team and stakeholders
that "judgment should be suspended now".

● ④ Prevent runaway responsibility

A boundary to protect oneself
from the pressure to rush judgment.


6. When Should the "Human-side idk-lamp" be Lit?

In the following situations,
the idk-lamp should be actively lit:

  • Basis for judgment is thin
  • Future impact is large
  • Choice of value is involved
  • Own state is unstable
  • Information is insufficient
  • Responsibility is too heavy
  • AI's answer is so smooth that it causes unease

The last item is especially important.

The more polished the AI's answer is,
the easier it is for humans to forget that "uncertainty is omitted".

7. Concrete Examples of Human-side idk-lamp Expressions

This is not a rule,
but merely "examples of expression".

  • "I cannot judge right now"
  • "I need a little more time to think"
  • "I cannot handle this uncertainty myself"
  • "Since this is a choice of value, I cannot decide immediately"
  • "I suspend judgment"

What is important is
not to be ashamed of stopping judgment.


8. Relationship Between Human-side idk-lamp and Decision Closure

Decision Closure is
a structure to decide **whether AI can close the judgment**.

Human-side idk-lamp is
a signal to decide **whether humans can close the judgment**.

Both complement each other as follows:

AI: Do not close judgments that should not be closed (Decision Closure)
Human: Do not close judgments that cannot be closed (human-idk-lamp)

Only when these two are present,
collaboration between AI and humans becomes sound.
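The complementary relationship can be condensed into one hedged sketch: a judgment closes only when both sides allow it. The function name and boolean framing are hypothetical simplifications.

```python
# Hypothetical sketch: a judgment may close only when both sides permit it.
def judgment_can_close(ai_decision_closure_ok: bool, human_idk_lamp_lit: bool) -> bool:
    """Neither side may close alone: AI must pass Decision Closure,
    and the human-side idk-lamp must not be lit."""
    return ai_decision_closure_ok and not human_idk_lamp_lit
```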

3. Uncertainty

Handling Uncertainty

This document organizes
"Uncertainty," which becomes most important when AI and humans think together.

AI can "handle" uncertainty,
but it cannot "share" uncertainty.


1. What is Uncertainty?

Uncertainty refers to the following states:

  • Cannot read the future
  • Results are not guaranteed
  • Material for judgment is not complete
  • Choice of value is involved
  • Risk does not become zero

Human decision-making is
always conducted within this uncertainty.


2. Humans Can "Share Uncertainty"

In decision-making between humans,
the following "sharing" naturally exists:

  • The other person is also hesitating
  • The other person is also holding anxiety
  • The other person is also feeling responsible
  • The other person is also participating in the bet
  • The other person also cannot read the future

In other words, humans can become accomplices in uncertainty.
Because of this "complicity,"
humans can support each other in uncertain choices.


3. AI Cannot "Become an Accomplice in Uncertainty"

AI structurally cannot do the following:

  • Worry together
  • Bet together
  • Hold responsibility together
  • Hold anxiety together
  • Imagine the future together

AI can "calculate" uncertainty,
but it cannot "hold it together".

Therefore, AI cannot step into the following areas:

  • Giving someone the final push
  • Encouraging a bet
  • Giving courage
  • Sharing the risk

These are all territories unique to humans.


4. AI's Answers "Omit Uncertainty"

AI's answers are smooth and polished,
but behind them, the following are not included:

  • Hesitation
  • Anxiety
  • Betting
  • Responsibility
  • Values
  • Fear of the future

In other words, AI's answers are presented in a state where uncertainty has "dropped out".
This is not a defect,
but a structural property of AI.


5. What Happens When Uncertainty is Omitted?

When humans see AI's smooth answers,
they easily misunderstand as follows:

  • "This must be certain"
  • "If there is no hesitation, it must be correct"
  • "If AI says so, it must be okay"

However, in reality,

AI merely omits uncertainty,
and does not share the "weight" of that judgment.

This gap
jeopardizes human decision-making.


6. Handling Uncertainty is the Human's Role

To handle uncertainty, the following elements are required:

  • Courage
  • Values
  • Responsibility
  • Intuition
  • Experience
  • Preparedness for the future

These are all capabilities unique to humans,
and not areas that AI can substitute.

That is precisely why,

A judgment that chooses amid uncertainty should be closed by humans,
and must not be closed by AI.

7. Uncertainty and "Human-side idk-lamp"

When uncertainty is large,
humans enter the following states:

  • Cannot judge
  • Cannot decide
  • Scared of the future
  • Heavy responsibility
  • Insufficient information

What is needed to indicate this state is
the Human-side idk-lamp.


8. Relationship Between Uncertainty and Decision Closure

Decision Closure is
a structure to decide **whether AI can close the judgment**.

Judgments with large uncertainty
will inevitably collapse in one of the 5 steps of Decision Closure.

  • Responsibility is dispersed
  • Cannot cancel
  • Social context arises
  • Action chains

In other words,

Judgments with large uncertainty are judgments AI must not close

This becomes the conclusion.

4. Action Layer

3 Axes of Action Layer

This document organizes
the "Action Layer," which is required when AI and humans cooperate to decide on actions.

The Action Layer exists outside of Decision Closure,
and is the minimum structure for AI and humans to "choose value together".


1. What is the Action Layer?

When AI and humans interact and try to decide something,
it eventually falls into **action**.
However, the responsibility for action always lies with humans.

Therefore, the following 3 axes exist in the Action Layer.

  • Known / Unknown
  • Acted on proposal as is / Human decided
  • Human themselves gave up

These 3 axes are
the "minimum structure" when AI and humans collaborate.
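The three axes could be recorded per action as a minimal data structure. This is a sketch under assumed names (`ActionRecord`, `responsible_party`); the only claim it encodes is the document's own: responsibility always resolves to the human.

```python
from dataclasses import dataclass
from typing import Literal

# Hypothetical record of one action along the 3 axes of the Action Layer.
@dataclass
class ActionRecord:
    knowledge: Literal["known", "unknown"]                # Axis 1: Known / Unknown
    decision: Literal["proposal_as_is", "human_decided"]  # Axis 2: who judged
    human_gave_up: bool                                   # Axis 3: human hit their limit

def responsible_party(record: ActionRecord) -> str:
    """Responsibility for action always lies with humans, whatever the axes say."""
    if record.human_gave_up:
        return "human (idk-lamp should be lit; AI must not close)"
    return "human"
```

Note that no combination of axes ever returns "AI": the axes describe the action, not a transfer of responsibility.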


2. Axis 1: Known / Unknown

Knowledge related to action has
the following 2 types on the human side.

● Known

  • Know the procedure
  • Have experience
  • Understand what to do

In this case, the role of AI is
close to recalling, organizing, and confirming.

● Unknown

  • Unfamiliar territory
  • Do not know the procedure
  • Insufficient material for judgment

In this case, the AI's proposal
becomes a trigger for new action.

However, since action in unknown territory has
large uncertainty,
the final judgment must always be made by humans.


3. Axis 2: Acted on Proposal as is / Human Decided

Humans choose one of the following in response to AI's proposal.

● Acted on proposal as is

  • Adopt AI's proposal as is
  • State where judgment was omitted
  • Understanding of risk tends to be shallow

● Human decided

  • Treat AI's proposal as material
  • Choose with own values
  • Assume responsibility for action oneself

What is important is the point that:

AI cannot shoulder responsibility for action

AI can propose,
but cannot share the weight of choice.


4. Axis 3: Human Themselves Gave Up

The most important axis in the Action Layer.
Humans themselves may fall into the following states:

  • Cannot judge
  • Cannot decide
  • Uncertainty is too large
  • Responsibility is too heavy
  • Scared of the future

This is not weakness,
but the natural limit of human decision-making.

This state is
the scene where the **human-side idk-lamp** should be lit.

And,

A judgment where humans give up is
also a judgment that AI must not close.

5. What the 3 Axes of the Action Layer Indicate

These 3 axes clarify
the "boundary" in collaboration between AI and humans.

  • Known / Unknown
    → Difficulty and uncertainty of action
  • Proposal as is / Human decided
    → Subject of judgment and locus of responsibility
  • Human also gave up
    → Human limits and necessity of idk-lamp

By having these three,
collaboration between AI and humans becomes sound.


6. Relationship Between Action Layer and Decision Closure

Decision Closure is
a structure to decide **whether AI can close the judgment**.

Action Layer is
**a structure for humans to shoulder action**.

Both complement each other as follows:

Decision Closure:
Remove judgments that AI must not close

Action Layer:
Prepare the minimum structure for humans to shoulder action

In other words, the relationship becomes:

AI does not close
Humans shoulder
The Action Layer supports that boundary

5. Relationship with DC

Relationship with Decision Closure

This document organizes the relationship between
ai-human-boundary (Boundary Guide for Humans) and
Decision Closure (Judgment Circuit Structure for AI).

Although they differ in purpose and the area they cover,
they are structured to complement each other.


1. First, the Conclusion: Not "Hierarchy" but "Difference in Role"

ai-human-boundary:
How far humans shoulder (values, uncertainty, limits of judgment)

Decision Closure:
How far AI can step in (Whether judgment can be closed)

In other words, the relationship is:

  • ai-human-boundary deals with the Human Boundary
  • Decision Closure deals with the AI Boundary

It is not a matter of which is superior,
but collaboration between AI + Human is established only when both are present.


2. Differences in Areas Covered by Both

◆ Areas Covered by ai-human-boundary

  • Human uncertainty
  • Human-side idk-lamp
  • Responsibility for action
  • Choice of value
  • Human limits
  • Structure "outside" of collaboration between AI and humans

This belongs to the **Human Decision-Making Layer**.

◆ Areas Covered by Decision Closure

  • Whether AI can close the judgment
  • Locus of responsibility
  • Reversibility
  • Chain of action
  • Social context
  • Transition (Midway Reversal)

This belongs to the **AI Judgment Circuit Layer**.


3. Diagramming the Boundary Between Both

Value (VCDesign)

Human Decision-Making Layer (ai-human-boundary)
├─ Human-side idk-lamp
├─ Metacognition of Uncertainty
└─ 3 axes of Action Layer

AI Judgment Circuit Layer (Decision Closure)
├─ 5 Steps
└─ Transition

BOA (Boundary of the World)

ai-human-boundary is **outside of Decision Closure**,
and deals with how humans stand.

Decision Closure is **inside AI**,
and deals with how far AI can step in.


4. Why Are Both Necessary?

For AI and humans to collaborate,
the following two boundaries are necessary.

● ① Boundary where AI must not step in

→ Handled by Decision Closure
→ "AI must not close judgment here"

● ② Boundary that humans should shoulder

→ Handled by ai-human-boundary
→ "Humans decide from here on"

Only when these two are present,
collaboration between AI and humans becomes **safe, sound, and aligned with values**.


5. Relationship Between Both in One Word

Decision Closure is a structure for AI to "stop".
ai-human-boundary is a structure for humans to "stand".

It is not enough for AI to just stop;
it is necessary for humans to stand somewhere.


6. Concrete Examples where Both Complement Each Other

● Case 1: Judgment with Large Uncertainty

  • Human side: idk-lamp lit
  • AI side: Decision Closure returns Human-Closed
  • Both face the same direction

● Case 2: Human Has Given Up

  • Human side: Cannot judge
  • AI side: Cannot close because responsibility is dispersed
  • AI does not overstep human limits

● Case 3: AI's Proposal is Too Smooth

  • Human side: Metacognition of omitted uncertainty
  • AI side: Social context arises and it cannot be closed
  • Avoid false conviction