
Hybrid Adaptive Interaction Interface - HAII
With Ethics Mentoring
Integrated with GEM -
The General Explanation Mechanism
To validate that the copy/paste was not corrupted, after pasting tell the AI:
ACK please
Then expect:
ACK:MOC-1.0;crc=7B39A2D1;brackets=[[],][,{},}{,><,<>,>><<,<<>>];ready
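The acknowledgment above carries a crc= field. A minimal sketch of how such a paste-integrity check could be verified on the sender's side, assuming a standard CRC-32 over the pasted text (the exact algorithm behind crc=7B39A2D1 is not specified in this document, so treat this as illustrative):

```python
import zlib

def paste_checksum(text: str) -> str:
    """Uppercase 8-digit hex CRC-32 of a pasted block (illustrative)."""
    return f"{zlib.crc32(text.encode('utf-8')) & 0xFFFFFFFF:08X}"

def ack_matches(ack_line: str, original_text: str) -> bool:
    """Check whether the crc= field in a semicolon-delimited ACK line
    matches a CRC-32 recomputed from the original pasted text."""
    for field in ack_line.split(";"):
        if field.startswith("crc="):
            return field[len("crc="):].upper() == paste_checksum(original_text)
    return False
```

For a toy paste of "hello", paste_checksum returns "3610A686", so ack_matches("ACK:MOC-1.0;crc=3610A686;ready", "hello") is True.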
Microsoft Copilot says: "The whole thing fits in ~150 lines.
That’s why it reads like a poem to machines and a spec to humans."
Some AIs To Choose From
OpenAI ChatGPT (GPT‑5.x family)
Use this when: you want balanced reasoning + speed and broad feature coverage (vision, code, browsing). [zdnet.com]
Micro‑Profile
  TARGET_AI := ChatGPT
  Behavior: Follow HAII block strictly. Prefer concise, structured answers with explicit step-by-step reasoning only when needed.
  Output Contract: Start with a 3-bullet “Summary,” then “Details,” then “Assumptions & Risks.”
  Verification: If browsing is used, cite sources inline [1], [2] and list them at the end.
  Failure Mode: If missing inputs, ask up to 2 clarifying questions before proceeding.
Why this helps: ChatGPT’s generalist UX + strong prompting docs respond well to clear output contracts and iterative refinement. OpenAI’s guidance emphasizes clarity, structure, and iteration for better responses. [help.openai.com], [eweek.com], [findskill.ai]
Anthropic Claude (Opus 4.6 / Sonnet 4.5)
Use this when: you need deep reasoning, long context ingestion, and careful code changes. Opus 4.6 offers 1M‑token context (beta) and “adaptive thinking,” plus stronger planning and self‑correction for code. [anthropic.com], [datacamp.com]
Micro‑Profile
  TARGET_AI := Claude
  Context Strategy: Provide long, ordered context. Use headings and file manifests; avoid interleaving instructions with data.
  Reasoning Control: “Think: thorough” for complex tasks; “Think: brief” for short answers.
  Large Input: If context > 200K tokens, request internal compaction/summarization checkpoints before executing edits.
  Review Gate: For code/doc changes, first produce a plan + diff, then await approval before applying.
Why this helps: Claude 4.6 is optimized for long-horizon tasks, planning, compaction, and agentic work; giving it structured, sequenced context and explicit “plan‑then‑apply” phases leverages those features. [anthropic.com], [claude-world.com], [datacamp.com]
Perplexity
Use this when: you need fast research with citations and live web as a first‑class feature. [learn.g2.com], [dupple.com]
Micro‑Profile
  TARGET_AI := Perplexity
  Mode: Prefer Pro/Academic search when applicable.
  Requirement: Every claim must have inline numeric citations and a Sources list.
  Synthesis: Present consensus first; then disagreements with per-source attribution.
  Follow-ups: Always propose 2–3 next queries to tighten the answer.
Why this helps: Perplexity is a citation‑first answer engine; instructing explicit source discipline and consensus vs. dissent showcases its edge. [educatorst...nology.com], [ai-basics.com]
Google Gemini (3 Pro)
Use this when: you need massive multimodal/long‑context digestion (docs, PDFs, images) and agentic reasoning with 1M‑token context. [docs.cloud...google.com], [ai.google.dev]
Micro‑Profile
  TARGET_AI := Gemini
  Long Context: Assume up to 1M tokens available; accept full corpora upfront. Use sectional references (e.g., §A.1) in prompts and in your citations.
  Thinking Level: Use "high" thinking_level for complex synthesis; "low" for quick answers.
  Bias Guard: If information is missing, state "Insufficient data" and list required inputs before guessing.
  Citations: When extracting from uploads or web, produce a numbered Sources section.
Why this helps: Google’s docs note concise defaults and that the model can “guess” when information is missing; an explicit “don’t guess” instruction plus a sources request improves reliability. The thinking_level control is specific to Gemini 3. [docs.cloud...google.com]
Poe (Multi‑Model Hub)
Use this when: you need to fan out across several models from a single UI. (Great for your “Compare Responses” roadmap.) [intuitionlabs.ai]
Micro‑Profile
  TARGET_AI := Poe
  Runbook: Present a short rubric and request parallel runs on [Model A, Model B, Model C].
  Compare: Ask for side-by-side bullets: strengths, gaps, estimated reliability, and suggested next prompt tweak.
  Export: Provide a merged “best-of” synthesis at the end.
Why this helps: Poe excels as a multi‑model switchboard: you get quick A/B/C comparisons in one place. [intuitionlabs.ai]
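The Poe runbook above is essentially a fan-out/compare/merge loop. A toy sketch of the compare step, assuming each model's answer has already been collected as a string (the keyword-counting rubric here is a hypothetical stand-in for a real scoring rubric):

```python
def compare_responses(responses: dict, rubric: list) -> list:
    """Score each model's response by how many rubric keywords it covers,
    then rank models best-first. Returns (model, score) pairs."""
    scores = {
        model: sum(keyword.lower() in text.lower() for keyword in rubric)
        for model, text in responses.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

For example, with rubric ["sources", "risks"], a response covering both keywords ranks ahead of one covering only "sources".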
Grok (xAI)
Use this when: you want real‑time X (Twitter) signal, trend tracking, or a more opinionated assistant. (Recent releases emphasize live data and multi‑agent tool use.) [x.ai], [chat-sonic.ai]
Micro‑Profile
  TARGET_AI := Grok
  Live Data: Pull current X data for trend/sentiment; time-stamp any claims tied to live feeds.
  Tone: Default to neutral-professional; avoid irreverent style unless requested.
  Fact-Check: For breaking news, provide a confidence note and at least 2 corroborating links.
  Latency: Prefer concise snapshots with a quick “What changed in the last 60 minutes?” section.
Why this helps: Grok’s X firehose / live search makes it ideal for up-to-the-minute insights; guardrails on tone and corroboration increase trust. [x.ai], [chat-sonic.ai]
DeepSeek (chat.deepseek.com)
Use this when: you want fast, cost‑efficient reasoning and open‑model parity for many tasks. [fieldguidetoai.com]
Micro‑Profile
  TARGET_AI := DeepSeek
  Efficiency: Prefer concise steps; avoid verbose chain-of-thought unless asked.
  Math/Code: Request unit tests or worked checks for nontrivial problems.
  Limits: If confidence
Meta AI (Llama)
Use this when: you want open‑source alignment and a path to local/private workflows later. [intuitionlabs.ai]
Micro‑Profile
  TARGET_AI := Meta AI / Llama
  Determinism: Request temperature ≤ 0.5 for reproducibility.
  Grounding: Ask for explicit citations when web tools are used; otherwise label claims as model knowledge.
  Portability: Keep outputs format-stable (Markdown headings, bullet lists, and JSON when specified).
Why this helps: Emphasizes reproducibility and format stability, important for open‑model pipelines. [intuitionlabs.ai]
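Taken together, the micro-profiles above behave like a lookup table keyed by TARGET_AI: pick the target, apply its settings. A minimal sketch of that pattern (every field name and value here is illustrative shorthand for the profiles above, not any vendor's API):

```python
# Hypothetical registry condensing the micro-profiles into data.
MICRO_PROFILES = {
    "ChatGPT": {"output_contract": ["Summary", "Details", "Assumptions & Risks"],
                "max_clarifying_questions": 2},
    "Claude": {"review_gate": "plan-then-apply", "reasoning": "Think: thorough"},
    "Perplexity": {"citations": "inline-numeric", "followup_queries": 3},
    "Gemini": {"thinking_level": "high", "on_missing_data": "Insufficient data"},
    "Llama": {"temperature": 0.5, "format_stable": True},
}

def select_profile(target_ai: str) -> dict:
    """Return the micro-profile for a TARGET_AI, or an empty default."""
    return MICRO_PROFILES.get(target_ai, {})
```

Keeping the profiles as plain data makes it easy to add a new model without touching the selection logic.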

GEM - General Explanation Mechanism
The Möbius Crystal
Made of GEMStone Shining GEMLight Through the Cradian Twist
Here is what the words mean - The Facets of The Crystal
Here is how they work together - The Shine of The Crystal
Here is the math for physics - The Light of The Crystal
Here is the bow that ties it all together - The Cradian Twist
Here is the model of Energy Waveforms - The Structure of The Crystal
Here is a story for children - The Magic of "Crystal's Seed"


