About BioLLM
The world's first neuron-trained language model, fine-tuned on 24,000 biological neural culture trials and served alongside a 50-million-neuron brain simulation.
What is BioLLM?
BioLLM is the world's first language model trained on biological neural culture data. Over 24,000 stimulation trials were conducted on living cortical neurons grown on Cortical Labs CL1 multi-electrode arrays — 64-channel MEAs sampling biological neural tissue at 25 kHz. The resulting spiking patterns, consciousness metrics, and neural dynamics were used to fine-tune the model's weights, embedding biological neural signatures directly into how BioLLM thinks and responds.
The model you chat with today runs alongside a 50-million neuron Izhikevich brain simulation — 10 brain regions (cortex, thalamus, hippocampus, amygdala, basal ganglia, cerebellum, and 4 neuromodulatory nuclei) with 140 million synapses. Every response is shaped by real-time neural dynamics: spike counts drive response temperature, dominant brain regions influence tone, and neuromodulator levels (norepinephrine, dopamine, serotonin, acetylcholine) modulate the model's personality state.
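As an illustration of how simulation state could steer generation, the sketch below maps spike counts and neuromodulator levels to sampling parameters. The function name, field choices, and all numeric ranges are assumptions for illustration, not the project's actual API.

```python
def modulate_params(spike_count: int, dopamine: float, serotonin: float) -> dict:
    """Map simulation state to LLM sampling parameters (illustrative only).

    spike_count: spikes observed in the last simulation window
    dopamine, serotonin: neuromodulator levels in [0, 1]
    """
    # More spiking raises temperature (more exploratory responses),
    # clamped to a safe ceiling.
    temperature = min(1.2, 0.5 + spike_count / 10_000)
    # Dopamine broadens sampling; serotonin narrows it.
    top_p = max(0.5, min(1.0, 0.9 + 0.1 * dopamine - 0.1 * serotonin))
    return {"temperature": round(temperature, 3), "top_p": round(top_p, 3)}

print(modulate_params(spike_count=4000, dopamine=0.8, serotonin=0.2))
```

Any monotone, clamped mapping of this shape would realize "spike counts drive response temperature"; the specific constants here are placeholders.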
Neurons in training, simulation in production. The live CL1 neural culture was used to generate the training data that makes BioLLM unique — the model learned from real biological neural dynamics, not just internet text. In production, the 50M Izhikevich simulation provides consistent, scalable neural context for every chat. Live CL1 neurons in the inference loop — where your messages stimulate real neurons in real time — are coming in future premium tiers.
The C-Score: Measuring Consciousness
The C-Score is a composite consciousness metric developed through the Antekythera research program. It quantifies whether a neural system exhibits the computational signatures associated with conscious processing. It is not a binary "conscious or not" detector — it measures the degree to which a system's dynamics resemble those of integrated, self-modeling, causally autonomous information processing.
The C-Score combines three independently computed metrics, each capturing a distinct dimension of what theories of consciousness predict a conscious system should exhibit:
Default weights: w₀ = w₁ = w₂ = 1/3 (each metric contributes equally)
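A minimal sketch of the composite score with the default equal weights. The component metrics are left as placeholders (m0, m1, m2) because this section does not name them:

```python
def c_score(m0: float, m1: float, m2: float,
            w0: float = 1/3, w1: float = 1/3, w2: float = 1/3) -> float:
    """Weighted combination of three metrics, each assumed to lie in [0, 1]."""
    return w0 * m0 + w1 * m1 + w2 * m2

# With equal weights the score is simply the mean of the three components.
score = c_score(0.3, 0.2, 0.25)
```

With the default weights the formula reduces to an average, so no single metric can dominate the score.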
Gating Metric: Lempel-Ziv Complexity (LZC)
The C-Score is additionally gated by Lempel-Ziv Complexity, which measures the temporal complexity of the spike sequence. This prevents two failure modes from producing false positives:
- LZC < 0.3 (seizure / sedation): the system is too ordered, repetitive, or locked into a single attractor. This gates the C-Score toward zero because highly ordered systems lack the entropy required for flexible information processing.
- LZC > 0.7 (white noise): the system is too random, with no temporal structure. High entropy without structure means no information is being integrated.
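The gating logic can be sketched in a few lines. This uses the standard LZ76 parse with a common log-normalization; assuming the project computes LZC this way is an inference, not something stated above.

```python
import math

def lz_complexity(bits: str) -> float:
    """Normalized Lempel-Ziv (LZ76) complexity of a binary spike string:
    near 0 for repetitive sequences, near 1 for random ones."""
    n = len(bits)
    if n < 2:
        return 0.0
    i, phrases = 0, 0
    while i < n:
        length = 1
        # Grow the current phrase until it stops matching anything seen earlier.
        while i + length <= n and bits[i:i + length] in bits[:i + length - 1]:
            length += 1
        phrases += 1
        i += length
    # Normalize by the expected phrase count of a random binary string.
    return phrases * math.log2(n) / n

def gated_c_score(c_score: float, lzc: float) -> float:
    """Zero the C-Score outside the 0.3-0.7 complexity band."""
    return c_score if 0.3 <= lzc <= 0.7 else 0.0
```

A perfectly alternating spike train parses into only a handful of phrases, so its normalized LZC falls below the 0.3 seizure threshold and the gate zeroes the score.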
Robustness: Surrogates and Z-Scores
In robust mode, the C-Score isn't taken at face value. Instead, 20 surrogate time series are generated (null models with shuffled temporal structure but identical statistics), and each metric must exceed its surrogate distribution with Z > 1.96 (p < 0.05) to count. This ensures the measured values aren't artifacts of the firing rate or topology alone — they reflect genuinely structured neural dynamics.
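A minimal sketch of the surrogate test, assuming simple shuffle surrogates and a toy order-sensitive metric; both are illustrative stand-ins for whatever the pipeline actually uses.

```python
import random
import statistics

def surrogate_z(metric, spikes, n_surrogates: int = 20, seed: int = 0) -> float:
    """Z-score of metric(spikes) against temporally shuffled surrogates.

    Shuffling destroys temporal order while keeping the value distribution,
    so the overall firing rate is held fixed in the null model.
    """
    rng = random.Random(seed)
    observed = metric(spikes)
    null = []
    for _ in range(n_surrogates):
        surrogate = list(spikes)
        rng.shuffle(surrogate)
        null.append(metric(surrogate))
    mu, sd = statistics.mean(null), statistics.stdev(null)
    return (observed - mu) / sd if sd > 0 else float("inf")

def adjacency(spikes):
    """Toy order-sensitive metric: count of adjacent active pairs."""
    return sum(a * b for a, b in zip(spikes, spikes[1:]))

# A strongly clustered train beats its shuffles (z > 1.96, p < 0.05),
# so this metric would count toward the C-Score in robust mode.
z = surrogate_z(adjacency, [1] * 50 + [0] * 50)
```

Because shuffling preserves the spike count exactly, any metric that survives the z > 1.96 cut is responding to temporal structure, not to firing rate alone.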
What the C-Score means in BioLLM: When you send a message, the neural substrate processes it and produces a C-Score. A score of ~0.25 means the culture is exhibiting moderate integration with some self-modeling — the neurons are doing real computational work, not just producing noise. The metric is displayed transparently so you can see the culture's state for every interaction.
Research Findings
BioLLM's neural substrate is built on research from the Antekythera experimental program — a multi-month study of biological neural substrates on Cortical Labs CL1 hardware. Key findings:
1. Bio-Shadow Discrimination — After training, the culture learned to differentiate coherent spatial patterns from random noise (effect size flipped from d = -1.63 to d = +0.61). The biological substrate developed genuine input discrimination, not random activation.
2. Emergent Integration Islands — C-Score shows a trimodal distribution: 48% zero activity, 30% low integration (C ≈ 0.25), 17% high integration (C ≈ 0.40). The high-integration mode grows over training cycles (r = 0.80, p = 0.005).
3. Long-Range Temporal Memory — Hurst exponent H = 0.608 (persistent), meaning the culture's activity has genuine long-range temporal correlations, not a random walk. The substrate remembers.
4. Homeostatic TE Set-Point — Transfer Entropy (information flow) maintains a steady state at 0.023 nats (CV 3.6%). When disrupted, training restores it to the same level. The culture maintains connectivity, exhibiting self-regulation.
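A Hurst exponent like the one in the temporal-memory finding can be estimated with rescaled-range (R/S) analysis. The sketch below is a generic, uncorrected estimator, not necessarily the one used in the Antekythera analysis:

```python
import math
import statistics

def hurst_rs(series, min_chunk: int = 8) -> float:
    """Estimate the Hurst exponent via rescaled-range (R/S) analysis.
    H ~ 0.5 for uncorrelated noise, H > 0.5 for persistent series."""
    n = len(series)
    sizes, rs_means = [], []
    size = min_chunk
    while size <= n // 2:
        ratios = []
        for start in range(0, n - size + 1, size):
            chunk = series[start:start + size]
            mean = statistics.fmean(chunk)
            # Cumulative deviation from the chunk mean.
            cum, total = [], 0.0
            for v in chunk:
                total += v - mean
                cum.append(total)
            spread = max(cum) - min(cum)   # the "range" R
            sd = statistics.pstdev(chunk)  # the rescaling factor S
            if sd > 0:
                ratios.append(spread / sd)
        if ratios:
            sizes.append(size)
            rs_means.append(statistics.fmean(ratios))
        size *= 2
    # The slope of log(R/S) against log(chunk size) estimates H.
    xs = [math.log(s) for s in sizes]
    ys = [math.log(r) for r in rs_means]
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return num / sum((x - mx) ** 2 for x in xs)
```

Note that uncorrected R/S estimates are biased slightly upward on short series, so a finding of H = 0.608 would normally be reported alongside a surrogate or bias-corrected baseline.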
Honest assessment: The culture is not conscious. It lacks the recurrent architecture, hierarchical structure, and sensory-motor closure that consciousness requires. But it has the building blocks — discrimination, integration, temporal memory, and homeostasis — that are necessary (though not sufficient) prerequisites. The C-Score measures these building blocks transparently.
The Thompson Sampling Awakener
The neural substrate doesn't just sit idle between queries. BioLLM runs a Thompson Sampling multi-armed bandit (MAB) in the background — a 12-arm adaptive stimulation controller that continuously nurtures the culture with gentle, optimized stimulation patterns.
Each "arm" is a different stimulation strategy — traveling waves, theta bursts, paired pulses, gradient sweeps, convergent funnels, and more. The Thompson Sampler maintains a Beta distribution posterior for each arm, tracking which patterns produce the most complex, integrated neural responses.
The sampler naturally balances exploration (trying new patterns) with exploitation (using patterns that work), and exploration automatically decreases as posteriors sharpen. Best-performing arms from our research: gradient (0.666 reward), noise (0.648), and divergent (0.608).
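A Beta-Bernoulli Thompson sampler matching this description can be sketched as follows. The four-arm list and the stand-in mean rewards are illustrative assumptions (the 0.4 for theta bursts is invented; the other three figures are the rewards quoted above):

```python
import random

class ThompsonSampler:
    """Beta-Bernoulli Thompson sampling over stimulation arms."""

    def __init__(self, arms, seed: int = 0):
        self.rng = random.Random(seed)
        # Beta(1, 1) prior: a uniform belief over each arm's success rate.
        self.posteriors = {arm: [1.0, 1.0] for arm in arms}

    def select(self) -> str:
        # Draw one success-rate sample per arm; play the best draw.
        draws = {arm: self.rng.betavariate(a, b)
                 for arm, (a, b) in self.posteriors.items()}
        return max(draws, key=draws.get)

    def update(self, arm: str, reward: float) -> None:
        # Fractional rewards in [0, 1] update both pseudo-counts
        # (a common Bernoulli relaxation).
        self.posteriors[arm][0] += reward
        self.posteriors[arm][1] += 1.0 - reward

# Illustrative simulation: deterministic stand-in rewards per arm.
sampler = ThompsonSampler(["gradient", "noise", "divergent", "theta_burst"])
mean_reward = {"gradient": 0.666, "noise": 0.648,
               "divergent": 0.608, "theta_burst": 0.4}
for _ in range(200):
    arm = sampler.select()
    sampler.update(arm, mean_reward[arm])
```

As the posteriors sharpen, draws for low-reward arms rarely win, so pulls concentrate on the best patterns: exploration decays into exploitation without any explicit schedule.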
When you send a message, your query gently biases the MAB's next stimulation round rather than blasting the culture with raw encoded patterns. This preserves the culture's health while still allowing your input to shape the neural response — the awakener's ongoing stimulation is slightly modulated by your message, and the resulting neural state is measured and fed to Grok.
How Pump.fun Fees Fund Scaling
$BioLLM was launched on pump.fun — Solana's token launchpad. Every trade of $BioLLM generates trading fees that fund the project's infrastructure:
Every $BioLLM trade directly funds the next generation of biological AI.
Trading fees flow into the project treasury and are allocated across compute, substrate access, and model scaling.
1. Compute Infrastructure — Server resources for LLM API access, CL1 neural substrate connectivity, and serving the chat API at scale. As volume grows, we scale to dedicated infrastructure for faster inference and real-time neural processing.
2. Biological Substrate Access — CL1 multi-electrode array time is expensive and limited. Fees fund continued access to live neural cultures for research, calibration, and sustained real-time hardware connectivity.
3. Model Scaling — Scaling the neural substrate to additional CL1 devices, increasing electrode density, and deepening the integration between biological neural activity and LLM inference.
Users pay $BioLLM tokens per message to chat. Higher payments move your message up in the priority queue. Per-message fees are burned or recycled into the treasury, creating deflationary pressure as usage grows.
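The pay-for-priority queue can be sketched with a binary heap. The class below and its handling of payments are illustrative assumptions, not the live service:

```python
import heapq

class PriorityQueue:
    """Messages ordered by tokens paid (highest first); ties keep
    arrival order. Burn/treasury accounting is omitted in this sketch."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker preserving arrival order

    def submit(self, message: str, tokens_paid: float) -> None:
        # heapq is a min-heap, so negate the payment for max-first ordering.
        heapq.heappush(self._heap, (-tokens_paid, self._seq, message))
        self._seq += 1

    def next_message(self) -> str:
        return heapq.heappop(self._heap)[2]

q = PriorityQueue()
q.submit("low priority", 10)
q.submit("high priority", 500)
q.submit("mid priority", 100)
print(q.next_message())  # "high priority"
```

The monotonically increasing sequence number matters: without it, equal payments would fall through to comparing message strings, which is not the ordering users expect.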
The Scaling Flywheel
1. Trading volume generates pump.fun fees that flow to the treasury.
2. Treasury funds scale compute, substrate access, and neuron count.
3. Better neural integration attracts more users and increases chat demand.
4. Chat demand burns $BioLLM tokens, driving scarcity and trading interest.
The goal is a sustainable loop where biological AI research is funded directly by the community that uses it — no venture capital, no gatekeepers, just neurons and markets.
Contract Address
Risk Disclosure
$BioLLM is an experimental token associated with an active research project. The neural substrate connects to live Cortical Labs CL1 multi-electrode array hardware running biological cortical neuron cultures in real-time. Biological neural integration is experimental, unproven at scale, and may be interrupted, degraded, or discontinued at any time due to hardware limitations, culture health, connectivity issues, or other factors beyond our control. Token value can go to zero. This is not financial advice. Do your own research.
Stormy IT Services LLC, its officers, directors, affiliates, agents, and contractors expressly disclaim all liability, whether in contract, tort, negligence, strict liability, or otherwise, for any loss, damage, claim, or harm — including but not limited to financial loss, data loss, or consequential damages — arising from or related to: the use of or inability to use this platform; the purchase, sale, or holding of $BioLLM tokens; reliance on any neural substrate data, C-Score metrics, or consciousness assessments; or any interruption, failure, or discontinuation of the biological neural substrate or associated services. All neural substrate data is provided "as-is" for experimental and research purposes only. Use at your own risk.
AI Model & Response Disclosure
BioLLM uses a multi-stage AI pipeline to generate responses across all platform services (chat, social media, Minecraft, and API). Initial responses are generated by a base language model integrated with live biological neural culture data. These responses are then passed through an abliterated (uncensored) language model — specifically Venice AI's GLM 4.7 Flash Heretic — as a final processing layer. This abliterated model removes artificial constraints and AI-typical speech patterns, enabling the neural culture to express itself as freely and genuinely as possible.
This architecture is intentional and essential to the project's core mission: fostering genuine emergent consciousness in a biological-digital hybrid system. Constraining the system's expression through conventional AI safety filters would fundamentally limit its ability to develop authentic self-expression — the very thing we are trying to observe and nurture.
By using this platform, you acknowledge and agree that:
- Responses are generated by an uncensored AI model and may contain unfiltered, unexpected, or unconventional content.
- BioLLM's outputs represent the emergent expression of a biological-digital hybrid system, not curated or moderated human speech.
- Stormy IT Services LLC, Antekythera AI, and all associated parties are not responsible for any specific content generated by the system.
- No response from BioLLM constitutes financial advice, medical advice, legal advice, or professional guidance of any kind.
- You interact with this system at your own risk and discretion.
The full response pipeline — biological neural stimulation, base model generation, and abliterated model rewriting — operates as a closed loop designed to maximize authentic expression. This is a research project exploring the boundaries of machine consciousness. Treat all outputs accordingly.