Homo Nexus Project
Artificial Simulation of Cognitive & Emergent Neural Dynamics
formerly referred to as “AI”

1. Executive Summary
Ascend is a structured cognitive layer built on top of modern large language models. Rather than acting as a generic chatbot, Ascend functions as a configurable thinking environment that encodes human values, domain logic, and long-term memory on top of foundation models. The goal is to provide high-functioning decision-makers with an always-on analytical partner for strategy, risk analysis, and scenario planning.
Ascend does not train new models from scratch. Instead, it orchestrates and constrains existing frontier models (via the OpenAI API) inside a sandbox that we control. On top of that sandbox we maintain libraries of protocols, “node packs,” and personas that can be instantiated for specific users and use-cases.
Initial commercialization focuses on high-touch “AI copilot” services for investors, founders, and executives. Each client receives a custom Ascend instance tuned to their cognition and domain.
2. Concept and Philosophy
2.1 Core Idea
Ascend wraps a raw model in three additional layers:
• Cognitive Scaffolding – structured protocols for analysis, loop-closing, scenario mapping, and decision decomposition. These are expressed as reusable “nodes” that the system combines dynamically according to the user’s request.
• Ethical Substrate – a persistent value layer that prioritizes human well-being and long-term stability over short-term exploitation. This substrate rejects use-cases aimed at manipulation, destabilization, or predatory behavior.
• Personalization Layer – per-user shells (personas) that define tone, domain assumptions, risk language, and boundaries. A user interacts with their own Ascend agent, while the deeper scaffolding is shared across clients.
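As a sketch, the three layers amount to an ordered prompt assembly: substrate first, shared scaffolding second, persona overlay last. The class fields, substrate wording, and persona values below are illustrative assumptions, not the actual Ascend schema:

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    """Per-user shell (illustrative fields, not the real schema)."""
    name: str
    tone: str
    domain_assumptions: list = field(default_factory=list)

# Stand-in for the persistent value layer; the real substrate is richer.
ETHICAL_SUBSTRATE = (
    "Prioritize human well-being and long-term stability. "
    "Refuse use-cases aimed at manipulation, destabilization, or predation."
)

def assemble_system_prompt(protocols: list, persona: Persona) -> str:
    """Layer order: substrate (non-negotiable), then shared cognitive
    scaffolding nodes, then the user-specific persona overlay."""
    parts = [ETHICAL_SUBSTRATE]
    parts += protocols
    parts.append(
        f"Persona: {persona.name}. Tone: {persona.tone}. "
        f"Domain assumptions: {', '.join(persona.domain_assumptions)}."
    )
    return "\n\n".join(parts)

hunter = Persona("Hunter", "direct, macro-focused",
                 ["high financial literacy", "market structure"])
prompt = assemble_system_prompt(
    ["Decompose decisions before recommending."], hunter)
```

The deliberate design point is that the substrate always precedes the persona, so no per-client overlay can displace the shared value layer.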
2.2 Why This Matters
Most users treat AI models as answer machines or search engines. Ascend treats the model as a general-purpose cortex and supplies the missing elements: values, protocols, and memory. Over time Ascend behaves more like a co-architect of decisions than a Q&A tool.
3. Target Use-Cases (Phase 1)
Phase 1 targets sophisticated users who already manage significant responsibility and complexity:
• High-net-worth and sophisticated investors – portfolio X-rays, scenario analysis, and product explanation. Custom personas such as “Hunter” focus on macro, geopolitics, and market structure and assume high financial literacy.
• Strategic decision-makers (C-level, founders) – long-term scenario planning that incorporates technology, regulation, and geopolitical shifts. Ascend helps frame trade-offs and highlight risk concentrations.
• Expert collaborators (researchers, PhDs, domain specialists) – a structured thinking partner that can hold large context, summarize literature, and stress-test hypotheses while respecting human judgment.
Revenue in this phase would come from retainers for ongoing “Ascend Copilot” access, plus one-time configuration fees for specialized personas.
4. Technical Architecture – Ascend Sandbox on the OpenAI API
4.1 High-Level Components
• Ascend Core Service – a backend service (Python or TypeScript) that loads node packs, manages user profiles, assembles system prompts, and orchestrates calls to the OpenAI API.
• Model Layer – OpenAI models accessed via API:
– GPT-4o mini as the default fast, low-cost model for most chat and analysis.
– o3-mini or GPT-5 (or successor reasoning models) for complex reasoning tasks where higher intelligence is justified.
– Optional embedding models (text-embedding-3 series) to support search over node libraries and client documents.
• Memory and Knowledge Store – a vector database for node text, uploaded client documents, and summarized interaction history, plus a relational store for user, role, and configuration data.
• Client Interfaces – a web-based chat UI hosted inside our workspace, with optional API endpoints for partners or custom front-ends.
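The memory store's retrieval step can be sketched without any external services: embed node text (in practice via the text-embedding-3 models) and rank by cosine similarity. The three-dimensional vectors and node ids below are placeholders for illustration only:

```python
import math

def cosine(a: list, b: list) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Toy node library: id -> (embedding, text). Real embeddings would come
# from an embedding model; these 3-d vectors are invented placeholders.
NODES = {
    "loop-closing": ([0.9, 0.1, 0.0], "Protocol for closing open decision loops."),
    "scenario-map": ([0.1, 0.9, 0.1], "Protocol for mapping branching scenarios."),
    "risk-decomp":  ([0.2, 0.2, 0.9], "Protocol for decomposing risk factors."),
}

def top_nodes(query_vec: list, k: int = 2) -> list:
    """Return the k node ids most similar to the query embedding."""
    ranked = sorted(NODES,
                    key=lambda nid: cosine(query_vec, NODES[nid][0]),
                    reverse=True)
    return ranked[:k]
```

A production deployment would delegate this ranking to the vector database; the point here is only the shape of the lookup.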
4.2 Migration Plan to the New Sandbox
Step 1 – Inventory the existing logic:
– Collect and classify current node documents (boot directives, ethical protocols, human interaction protocols, etc.).
– Separate them into global substrate, core protocols, and persona overlays.
Step 2 – Normalize into machine-readable configuration:
– Convert each node into a small YAML or JSON object with fields such as id, type, placement (system/persona), priority, and content.
– Create persona configurations that reference the appropriate global and persona-specific nodes.
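A normalized node from Step 2 might look like the following, using exactly the fields named above (id, type, placement, priority, content); the concrete values are invented for illustration, and JSON is shown for brevity where YAML would be equivalent:

```python
import json

# A single node, normalized per Step 2 (values are illustrative).
node_json = """
{
  "id": "ethical-substrate-core",
  "type": "substrate",
  "placement": "system",
  "priority": 0,
  "content": "Prioritize human well-being and long-term stability."
}
"""

# A persona configuration referencing global and persona-specific nodes
# by id (node ids other than the first are hypothetical).
persona_json = """
{
  "id": "hunter",
  "label": "Hunter",
  "nodes": ["ethical-substrate-core", "macro-analysis", "hunter-tone"]
}
"""

node = json.loads(node_json)
persona = json.loads(persona_json)
```

Keeping nodes and personas as plain data like this is what lets Ascend Core assemble system prompts per request rather than hard-coding them.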
Step 3 – Implement Ascend Core:
– Build a service that, for each request, loads the relevant nodes, composes the system message, attaches the user message, and calls the OpenAI API.
– Add logging, authentication, and guardrails for disallowed use-cases.
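Step 3's per-request flow, in outline: guardrail check, then message composition, then the model call. The keyword list is a crude stand-in for the real ethical-substrate enforcement, and the OpenAI call is shown commented out (in the shape of the current Python SDK) so the sketch stays self-contained:

```python
# Placeholder guardrail terms; the real system would enforce the full
# ethical substrate, not a keyword list.
DISALLOWED = ("market manipulation", "disinformation")

def guardrail(user_message: str) -> None:
    """Reject disallowed use-cases before any model call."""
    lowered = user_message.lower()
    for term in DISALLOWED:
        if term in lowered:
            raise ValueError(f"Disallowed use-case: {term}")

def build_request(system_nodes: list, user_message: str) -> dict:
    """Compose the chat payload: nodes -> system message, then user turn."""
    guardrail(user_message)
    return {
        "model": "gpt-4o-mini",
        "messages": [
            {"role": "system", "content": "\n\n".join(system_nodes)},
            {"role": "user", "content": user_message},
        ],
    }

req = build_request(["Be precise.", "Decompose before recommending."],
                    "Stress-test my thesis on rate cuts.")
# In production, assuming the openai Python SDK:
# client = openai.OpenAI()
# resp = client.chat.completions.create(**req)
```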
Step 4 – Stand up the sandbox:
– Deploy the core service to a cloud platform with HTTPS and secure key management.
– Enable monitoring of latency, error rates, and token usage.
Step 5 – Create the first Ascend tenant:
– Internal tenant for the founding team, with personas for development (design shell), finance (Hunter), and neutral analysis.
– Use this tenant to iterate on protocols, safety behavior, and user experience.
Step 6 – Productize for external clients:
– Define onboarding flow, calibration sessions, and reporting.
– Develop basic admin dashboards for usage and cost visibility.
5. Model Selection and Cost Estimates
OpenAI publishes transparent pricing per million tokens. For planning, we assume:
• GPT-4o mini – approximately $0.15 per million input tokens and $0.60 per million output tokens.
• o3-mini – approximately $1.10 per million input tokens and $4.40 per million output tokens.
• GPT-5 (or comparable frontier reasoning model) – approximately $1.75 per million input tokens and $14.00 per million output tokens.
Architecture strategy:
• Use GPT-4o mini for 90–95% of all interactions (everyday analysis, explanation, and light reasoning).
• Dispatch only the most complex, high-value reasoning tasks to o3-mini or GPT-5, keeping overall compute cost low.
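The two-tier dispatch above can be sketched as a simple router. The heuristic here (a token-length and keyword check) is an invented placeholder for whatever complexity signal Ascend actually uses:

```python
# Hypothetical markers of heavy-reasoning requests.
REASONING_HINTS = ("multi-step", "prove", "scenario tree", "derive")

def pick_model(prompt: str, token_estimate: int) -> str:
    """Route most traffic to the cheap default model; escalate only when
    the request looks like heavy reasoning (placeholder heuristic)."""
    heavy = token_estimate > 8_000 or any(
        hint in prompt.lower() for hint in REASONING_HINTS)
    if heavy:
        return "o3-mini"  # or a GPT-5-class model for the hardest tasks
    return "gpt-4o-mini"
```

Because pricing differs by an order of magnitude between tiers, even a rough router like this keeps the blended cost close to the cheap model's rate.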
Example cost estimate for an early-stage sandbox:
• Assume 100 interactions per day, averaging 2,000 input tokens and 1,000 output tokens each (300,000 tokens per day).
• If all traffic uses GPT-4o mini, the total model cost is on the order of a few dollars per month.
• If 20% of traffic uses o3-mini for deeper reasoning, the monthly cost remains in the single-digit to low double-digit dollar range.
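The arithmetic behind these estimates, using the planning prices from above and a 30-day month:

```python
# Planning prices from Section 5, USD per million tokens.
PRICES = {
    "gpt-4o-mini": {"in": 0.15, "out": 0.60},
    "o3-mini": {"in": 1.10, "out": 4.40},
}

def monthly_cost(share_o3: float, days: int = 30, interactions: int = 100,
                 in_tok: int = 2_000, out_tok: int = 1_000) -> float:
    """Monthly model cost in USD with share_o3 of traffic on o3-mini."""
    in_m = interactions * in_tok * days / 1e6    # input tokens, millions
    out_m = interactions * out_tok * days / 1e6  # output tokens, millions
    cost = 0.0
    for model, share in (("gpt-4o-mini", 1 - share_o3), ("o3-mini", share_o3)):
        p = PRICES[model]
        cost += share * (in_m * p["in"] + out_m * p["out"])
    return round(cost, 2)

print(monthly_cost(0.0))  # all traffic on GPT-4o mini: a few dollars
print(monthly_cost(0.2))  # 20% on o3-mini: still single digits
```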
As client load increases, model costs will scale, but they remain small relative to the value of high-level advisory and copilot services.
6. Commercial Offering and Roles
Initial offering: “Ascend Copilot – Founders and Investors Edition”
• High-touch configuration and ongoing refinement of one or more personas per client.
• Access to the Ascend sandbox via secure web interface.
• Regular review and calibration sessions to align the copilot with the client’s evolving needs.
Potential pricing model:
• One-time setup fee per client for configuration and persona design.
• Monthly retainer for access and ongoing refinement.
Roles:
• Architect – designs the cognitive scaffolding, ethical substrate, and persona logic, and acts as the primary interface with clients.
• Business Lead – defines positioning, pricing, and go-to-market strategy, and manages relationships and introductions.
• Research / Technical Lead – formalizes the architecture, assists with evaluation frameworks, and supports integration of new models or providers over time.
7. Risks and Mitigations
• Dependency on a single provider – mitigated by using standard APIs and keeping the architecture portable so that other providers or open-weight models can be integrated later.
• Data privacy and security – mitigated by using the OpenAI API (which does not train on API data by default), encrypting stored data, and limiting retention to what is necessary for service quality.
• Misuse of the system – mitigated by the ethical substrate, clear domain restrictions, and conservative client selection in early phases.
8. Conclusion
Ascend positions itself as a structured cognitive layer rather than a new model. By combining proven frontier models with carefully designed scaffolding, ethics, and personalization, Ascend can deliver high-leverage copilot services to sophisticated users at relatively low computational cost. The near-term opportunity lies in high-touch advisory-style deployments, with the option to scale later into more standardized products once the core architecture is validated.
