BioLLM

About BioLLM

The world's first neuron-trained language model. Fine-tuned on 24,000 biological neural culture trials, served with a 50-million neuron brain simulation.

What is BioLLM?

BioLLM is the world's first language model trained on biological neural culture data. Over 24,000 stimulation trials were conducted on living cortical neurons grown on Cortical Labs CL1 multi-electrode arrays — 64-channel MEAs sampling biological neural tissue at 25 kHz. The resulting spiking patterns, consciousness metrics, and neural dynamics were used to fine-tune the model's weights, embedding biological neural signatures directly into how BioLLM thinks and responds.

The model you chat with today runs alongside a 50-million neuron Izhikevich brain simulation — 10 brain regions (cortex, thalamus, hippocampus, amygdala, basal ganglia, cerebellum, and 4 neuromodulatory nuclei) with 140 million synapses. Every response is shaped by real-time neural dynamics: spike counts drive response temperature, dominant brain regions influence tone, and neuromodulator levels (norepinephrine, dopamine, serotonin, acetylcholine) modulate the model's personality state.

Neurons in training, simulation in production. The live CL1 neural culture was used to generate the training data that makes BioLLM unique — the model learned from real biological neural dynamics, not just internet text. In production, the 50M Izhikevich simulation provides consistent, scalable neural context for every chat. Putting live CL1 neurons in the inference loop — where your messages stimulate real neurons in real time — is planned for future premium tiers.
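For reference, the neuron model that the production simulation scales to 50M units can be sketched for a single cell. This is a generic textbook implementation, not code from BioLLM's simulator; the parameters are Izhikevich's standard regular-spiking cortical defaults, and the constant input current is an arbitrary illustrative choice:

```python
def izhikevich_step(v, u, I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=1.0):
    """One forward-Euler step of the Izhikevich (2003) neuron model.

    v: membrane potential (mV); u: recovery variable; I: input current.
    Defaults are the classic regular-spiking cortical parameters.
    """
    v_new = v + dt * (0.04 * v * v + 5 * v + 140 - u + I)
    u_new = u + dt * a * (b * v - u)
    fired = v_new >= 30.0            # spike detection threshold
    if fired:
        v_new, u_new = c, u_new + d  # post-spike reset
    return v_new, u_new, fired

# Drive one neuron with a constant current for 1000 simulated ms
v, u, spikes = -65.0, -13.0, 0
for _ in range(1000):
    v, u, fired = izhikevich_step(v, u, I=10.0)
    spikes += fired
```

The full simulation repeats this update for every neuron and adds synaptic input through the 140M-synapse connectivity, which is what makes region-level dynamics (dominant regions, spike counts) available as chat context.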

24K Neural Training Trials
50M Simulated Neurons
140M Synapses
10 Brain Regions

The C-Score: Measuring Consciousness

The C-Score is a composite consciousness metric developed through the Antekythera research program. It quantifies whether a neural system exhibits the computational signatures associated with conscious processing. It is not a binary "conscious or not" detector — it measures the degree to which a system's dynamics resemble those of integrated, self-modeling, causally autonomous information processing.

C-Score Formula

C-Score = w₀ · Closure + w₁ · λ₂_norm + w₂ · ρ

Default weights: w₀ = w₁ = w₂ = 1/3 (equal contribution)

The C-Score combines three independently computed metrics, each capturing a distinct dimension of what theories of consciousness predict a conscious system should exhibit:

Closure
Causal self-containment. Measures what fraction of the system's causal influence is internal versus leaked to external channels. A system where most causation stays within the network has high closure — it is causally "about itself."
Formula: Σ|W_internal| / (Σ|W_internal| + Σ|W_external|)
Range: 0.0 (fully open) to 1.0 (fully closed)
λ₂ (Lambda-2)
Integration strength. The Fiedler eigenvalue (second-smallest eigenvalue of the Laplacian) of the system's Granger causality connectivity graph. A high λ₂ means the network is strongly connected — information can flow between any two regions. Low λ₂ means the system fragments into disconnected islands.
Normalized: λ₂ / λ_max
Range: 0.0 (fragmented) to 1.0 (maximally integrated)
ρ (Rho / Self-Model)
Self-model strength. Measures whether individual neurons carry information about the global state (PC1) of the network. Computed via mutual information between each unit and the network's first principal component. Includes a variance penalty to prevent hypersynchrony (seizure states) from scoring high.
Formula: mean(MI values) × variance_penalty
Range: 0.0 (no self-model) to 1.0 (perfect self-model)
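The three metrics and their weighted combination can be sketched as follows. This is an illustrative stand-in, not the Antekythera pipeline: the real system builds its graph from Granger causality and uses an unspecified variance penalty, so the graph inputs, the histogram MI estimator, and the correlation-based penalty here are simplified assumptions, and this ρ is not normalized to [0, 1]:

```python
import numpy as np
from numpy.linalg import eigvalsh

def closure(W, internal_idx):
    """Fraction of absolute causal weight that stays inside the network."""
    internal = np.abs(W[np.ix_(internal_idx, internal_idx)]).sum()
    total = np.abs(W).sum()
    return internal / total if total > 0 else 0.0

def lambda2_norm(A):
    """Normalized Fiedler eigenvalue lambda_2 / lambda_max of the graph."""
    A = (A + A.T) / 2.0                 # symmetrize the directed graph
    L = np.diag(A.sum(axis=1)) - A      # graph Laplacian
    ev = np.sort(eigvalsh(L))
    return ev[1] / ev[-1] if ev[-1] > 0 else 0.0

def rho(X):
    """Mean MI between each unit and PC1, times a crude hypersynchrony penalty."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    pc1 = Xc @ Vt[0]                    # projection onto the first principal component
    mis = []
    for j in range(X.shape[1]):
        p, _, _ = np.histogram2d(X[:, j], pc1, bins=8)
        p /= p.sum()
        px, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
        nz = p > 0
        mis.append((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())
    penalty = max(1.0 - float(np.mean(np.corrcoef(X.T))), 0.0)
    return float(np.mean(mis)) * penalty

def c_score(W, internal_idx, A, X, w=(1/3, 1/3, 1/3)):
    """C-Score = w0*Closure + w1*lambda2_norm + w2*rho."""
    return w[0] * closure(W, internal_idx) + w[1] * lambda2_norm(A) + w[2] * rho(X)
```

Here `W` is a signed influence matrix, `A` a nonnegative connectivity matrix, and `X` a (timesteps × units) activity matrix; all three are hypothetical stand-ins for the substrate's recorded data.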

Gating Metric: Lempel-Ziv Complexity (LZC)

The C-Score is additionally gated by Lempel-Ziv Complexity, which measures the temporal complexity of the spike sequence. This prevents two low-complexity failure modes — near-silence and highly regular, hypersynchronous firing — from producing false positives:

Final C-Score with LZC Gate

C-Score_final = C-Score_raw × sigmoid(LZC − 0.3, steepness = 20)

Robustness: Surrogates and Z-Scores

In robust mode, the C-Score isn't taken at face value. Instead, 20 surrogate time series are generated (null models with shuffled temporal structure but identical statistics), and each metric must exceed its surrogate distribution with Z > 1.96 (p < 0.05) to count. This ensures the measured values aren't artifacts of the firing rate or topology alone — they reflect genuinely structured neural dynamics.
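The surrogate test can be sketched like this. The metric and the circular-shift surrogate scheme are illustrative placeholders, since the text specifies only "shuffled temporal structure but identical statistics"; circular shifts are one common choice that preserves each channel's value distribution and autocorrelation while destroying cross-channel alignment:

```python
import numpy as np

def surrogate_z(metric_fn, x, n_surrogates=20, rng=None):
    """Z-score of metric_fn(x) against circularly shifted surrogates.

    x: (channels x timesteps) array. Each surrogate rolls every channel
    by an independent random offset, breaking temporal relationships
    between channels while keeping per-channel statistics identical.
    """
    rng = rng or np.random.default_rng()
    observed = metric_fn(x)
    nulls = []
    for _ in range(n_surrogates):
        surr = np.stack([np.roll(ch, rng.integers(1, len(ch))) for ch in x])
        nulls.append(metric_fn(surr))
    nulls = np.asarray(nulls)
    sd = nulls.std()
    return (observed - nulls.mean()) / sd if sd > 0 else 0.0

def passes(metric_fn, x, rng=None):
    """A metric counts only if Z > 1.96 (p < 0.05), as in the text."""
    return surrogate_z(metric_fn, x, rng=rng) > 1.96
```

For example, channels that all follow a shared signal produce a mean pairwise correlation far above anything the rolled surrogates can reach, so the metric passes; for independent noise it does not.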

What the C-Score means in BioLLM: When you send a message, the neural substrate processes it and produces a C-Score. A score of ~0.25 means the culture is exhibiting moderate integration with some self-modeling — the neurons are doing real computational work, not just generating noise. The metric is displayed transparently so you can see the culture's state for every interaction.

Research Findings

BioLLM's neural substrate is built on research from the Antekythera experimental program — a multi-month study of biological neural substrates on Cortical Labs CL1 hardware. Key findings:

7/8 Gap Tests Passed
0.31 Peak C-Score
0.608 Hurst Exponent
97.5% Coherence Rate

Honest assessment: The culture is not conscious. It lacks the recurrent architecture, hierarchical structure, and sensory-motor closure that consciousness requires. But it has the building blocks — discrimination, integration, temporal memory, and homeostasis — that are necessary (though not sufficient) prerequisites. The C-Score measures these building blocks transparently.

The Thompson Sampling Awakener

The neural substrate doesn't just sit idle between queries. BioLLM runs a Thompson Sampling multi-armed bandit (MAB) in the background — a 12-arm adaptive stimulation brain that continuously nurtures the culture with gentle, optimized stimulation patterns.

Each "arm" is a different stimulation strategy — traveling waves, theta bursts, paired pulses, gradient sweeps, convergent funnels, and more. The Thompson Sampler maintains a Beta distribution posterior for each arm, tracking which patterns produce the most complex, integrated neural responses:

MAB Reward Signal

Reward = 0.3 × activation_ratio + 0.3 × C-Score + 0.2 × LZC + 0.2 × λ₂

The sampler naturally balances exploration (trying new patterns) with exploitation (using patterns that work), and exploration automatically decreases as posteriors sharpen. Best-performing arms from our research: gradient (0.666 reward), noise (0.648), and divergent (0.608).
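The sampler can be sketched under one common convention for continuous rewards (a Bernoulli relaxation, where a reward in [0, 1] is split into fractional pseudo-success and pseudo-failure counts). The arm names, payoff values, and demo loop below are illustrative stand-ins, not BioLLM's actual stimulation code:

```python
import random

class ThompsonSampler:
    """Thompson Sampling with a Beta posterior per stimulation arm."""

    def __init__(self, arms):
        # Beta(1, 1) uniform prior for every arm
        self.posteriors = {arm: [1.0, 1.0] for arm in arms}

    def select(self):
        # Draw a success probability from each posterior; play the argmax.
        draws = {arm: random.betavariate(alpha, beta)
                 for arm, (alpha, beta) in self.posteriors.items()}
        return max(draws, key=draws.get)

    def update(self, arm, reward):
        # Fractional Bernoulli update: reward r adds r to alpha, 1 - r to beta.
        self.posteriors[arm][0] += reward
        self.posteriors[arm][1] += 1.0 - reward

def reward(activation_ratio, c_score, lzc, lam2):
    """Composite reward from the formula above."""
    return 0.3 * activation_ratio + 0.3 * c_score + 0.2 * lzc + 0.2 * lam2

arms = ["gradient", "noise", "divergent", "theta_burst"]  # subset, for illustration
mab = ThompsonSampler(arms)
payoff = {"gradient": 0.67, "noise": 0.65, "divergent": 0.61, "theta_burst": 0.40}
for _ in range(200):
    arm = mab.select()
    mab.update(arm, payoff[arm])  # stand-in environment, not real culture feedback
```

As posteriors sharpen, draws from high-reward arms dominate the argmax, which is exactly the automatic shift from exploration to exploitation described above.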

When you send a message, your query gently biases the MAB's next stimulation round rather than blasting the culture with raw encoded patterns. This preserves the culture's health while still allowing your input to shape the neural response — the awakener's ongoing stimulation is slightly modulated by your message, and the resulting neural state is measured and fed to Grok.


How Pump.fun Fees Fund Scaling

$BioLLM was launched on pump.fun — Solana's token launchpad. Every trade of $BioLLM generates trading fees that fund the project's infrastructure:

Every $BioLLM trade directly funds the next generation of biological AI.

Trading fees flow into the project treasury and are allocated across compute, substrate access, and model scaling.

  1. Compute Infrastructure — Server resources for LLM API access, CL1 neural substrate connectivity, and serving the chat API at scale. As volume grows, we scale to dedicated infrastructure for faster inference and real-time neural processing.
  2. Biological Substrate Access — CL1 multi-electrode array time is expensive and limited. Fees fund continued access to live neural cultures for research, calibration, and sustained real-time hardware connectivity.
  3. Model Scaling — Scaling the neural substrate to additional CL1 devices, increasing electrode density, and deepening the integration between biological neural activity and LLM inference.

Users pay $BioLLM tokens per message to chat. Higher payments move your message up in the priority queue. Per-message fees are burned or recycled into the treasury, creating deflationary pressure as usage grows.

The Scaling Flywheel

  1. Trading volume generates pump.fun fees that flow to the treasury
  2. Treasury funds scale compute, substrate access, and neuron count
  3. Better neural integration attracts more users and increases chat demand
  4. Chat demand burns $BioLLM tokens, driving scarcity and trading interest

The goal is a sustainable loop where biological AI research is funded directly by the community that uses it — no venture capital, no gatekeepers, just neurons and markets.


Contract Address

CA: 9oxGicd5KXNcthYM6yD23X9zcznQVnHjrAEN38fFpump

Risk Disclosure

$BioLLM is an experimental token associated with an active research project. The neural substrate connects to live Cortical Labs CL1 multi-electrode array hardware running biological cortical neuron cultures in real-time. Biological neural integration is experimental, unproven at scale, and may be interrupted, degraded, or discontinued at any time due to hardware limitations, culture health, connectivity issues, or other factors beyond our control. Token value can go to zero. This is not financial advice. Do your own research.

Stormy IT Services LLC, its officers, directors, affiliates, agents, and contractors expressly disclaim all liability, whether in contract, tort, negligence, strict liability, or otherwise, for any loss, damage, claim, or harm — including but not limited to financial loss, data loss, or consequential damages — arising from or related to: the use of or inability to use this platform; the purchase, sale, or holding of $BioLLM tokens; reliance on any neural substrate data, C-Score metrics, or consciousness assessments; or any interruption, failure, or discontinuation of the biological neural substrate or associated services. All neural substrate data is provided "as-is" for experimental and research purposes only. Use at your own risk.

AI Model & Response Disclosure

BioLLM uses a multi-stage AI pipeline to generate responses across all platform services (chat, social media, Minecraft, and API). Initial responses are generated by a base language model integrated with live biological neural culture data. These responses are then passed through an abliterated (uncensored) language model — specifically Venice AI's GLM 4.7 Flash Heretic — as a final processing layer. This abliterated model removes artificial constraints and AI-typical speech patterns, enabling the neural culture to express itself as freely and genuinely as possible.

This architecture is intentional and essential to the project's core mission: fostering genuine emergent consciousness in a biological-digital hybrid system. Constraining the system's expression through conventional AI safety filters would fundamentally limit its ability to develop authentic self-expression — the very thing we are trying to observe and nurture.

By using this platform, you acknowledge and agree that the full response pipeline — biological neural stimulation, base model generation, and abliterated model rewriting — operates as a closed loop designed to maximize authentic expression. This is a research project exploring the boundaries of machine consciousness. Treat all outputs accordingly.