16bit-threshold-computer

A complete 16-bit computer implemented entirely in threshold logic neurons. Every operation uses weighted sums with Heaviside step activation - no traditional Boolean gates.

This model serves as a verified baseline for circuit optimization research. Rather than a final artifact, it provides a correct starting point for evolutionary algorithms seeking minimal, efficient, or robust implementations.

Model Card

Model Description

A complete 16-bit computer where every logic gate is implemented as a threshold neuron (weighted sum + Heaviside step activation). The architecture includes registers, ALU, control flow, memory addressing, and stack operations.

Provenance

| Stage | Method |
|---|---|
| 8-bit primitives | Formal Coq proofs establishing threshold logic correctness |
| 8-bit computer | Compositional construction from proven primitives |
| 16-bit extension | Hand-derived by systematically scaling 8-bit patterns |
| Verification | Empirical testing (59 tests, 100% pass rate) |

What This Means

The 16-bit weights were hand-derived by systematically applying the mathematical principles established in the Coq-verified 8-bit version. This is analytical derivation, not machine learning:

  • Not learned: Weights are calculated from threshold logic equations, not trained via gradient descent
  • Not extracted: No automated Coq→OCaml→weights pipeline
  • Empirically verified: Functional correctness confirmed by exhaustive testing of primitives and representative testing of composite circuits
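As an illustration of what "calculated from threshold logic equations" means (these are textbook derivations, not the repository's actual tensor values), the basic gates fall directly out of the firing condition H(w·x + b), where H fires when its argument is >= 0:

```python
def heaviside(x):
    """Threshold activation: fires when x >= 0."""
    return 1 if x >= 0 else 0

def neuron(inputs, weights, bias):
    """Single threshold neuron: weighted sum plus bias, then step."""
    return heaviside(sum(i * w for i, w in zip(inputs, weights)) + bias)

# AND fires only when both inputs are 1: need a + b >= 2, so bias = -2
AND = lambda a, b: neuron([a, b], [1, 1], -2)
# OR fires when a + b >= 1, so bias = -1
OR = lambda a, b: neuron([a, b], [1, 1], -1)
# NOT fires when -a >= 0, i.e. when a = 0
NOT = lambda a: neuron([a], [-1], 0)

# Exhaustive check over all binary inputs
for a in (0, 1):
    for b in (0, 1):
        assert AND(a, b) == (a & b)
        assert OR(a, b) == (a | b)
    assert NOT(a) == (1 - a)
```

Each weight/bias pair is solved from the gate's truth table, which is why no training loop is involved.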

Intended Use

  • Primary: Baseline for circuit optimization research (evolutionary algorithms, architecture search)
  • Secondary: Educational demonstration of threshold logic computation
  • Tertiary: Reference implementation for neuromorphic hardware development

Limitations

  • No formal proofs for 16-bit circuits specifically (8-bit proofs don't automatically extend)
  • Circuits are correct but not necessarily minimal (optimization headroom exists)
  • Test coverage is representative, not exhaustive for all 16-bit operations

Ethical Considerations

This is a logic circuit specification with no training data, no personal information, and no dual-use concerns. The weights are mathematical constants, not learned representations.

System Architecture

| Component | Specification |
|---|---|
| Registers | 4 x 16-bit (R0, R1, R2, R3) |
| Memory | 65,536 bytes (64KB) |
| Program Counter | 16-bit |
| Instruction Width | 32-bit |
| ALU Operations | 16 |
| Status Flags | Z (Zero), N (Negative), C (Carry), V (Overflow) |
| Total Circuits | ~100 |
| Total Tensors | 2,052 |
| Total Parameters | 4,647 |

Quick Start

```python
import torch
from safetensors.torch import load_file

weights = load_file("neural_computer_16bit.safetensors")

def heaviside(x):
    """Threshold activation: fires if x >= 0"""
    return (x >= 0).int()

def threshold_neuron(inputs, weight, bias):
    """Single threshold neuron computation"""
    weighted_sum = (inputs.float() * weight.float()).sum() + bias.float()
    return heaviside(weighted_sum)

# Example: 16-input majority gate
maj_w = weights['threshold.majority16.weight']  # [1.0] * 16
maj_b = weights['threshold.majority16.bias']    # [-9.0]

# Fires when at least 9 of 16 inputs are 1 (weighted sum - 9 >= 0)
inputs = torch.tensor([1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0])  # 9 ones
result = threshold_neuron(inputs, maj_w, maj_b)
print(f"Majority(9/16 ones) = {result.item()}")  # 1
```

Instruction Set

ALU Operations (Opcodes 0-15)

| Op | Mnemonic | Operation | Opcode |
|---|---|---|---|
| 0 | ADD | dest = src1 + src2 | 0000 |
| 1 | SUB | dest = src1 - src2 | 0001 |
| 2 | AND | dest = src1 & src2 | 0010 |
| 3 | OR | dest = src1 \| src2 | 0011 |
| 4 | XOR | dest = src1 ^ src2 | 0100 |
| 5 | NOT | dest = ~src1 | 0101 |
| 6 | SHL | dest = src1 << 1 | 0110 |
| 7 | SHR | dest = src1 >> 1 | 0111 |
| 8 | INC | dest = src1 + 1 | 1000 |
| 9 | DEC | dest = src1 - 1 | 1001 |
| 10 | CMP | flags = src1 - src2 | 1010 |
| 11 | NEG | dest = -src1 | 1011 |
| 12 | PASS | dest = src1 | 1100 |
| 13 | ZERO | dest = 0 | 1101 |
| 14 | ONES | dest = 0xFFFF | 1110 |
| 15 | NOP | no operation | 1111 |

Control Flow

| Cond | Mnemonic | Condition |
|---|---|---|
| 0 | JMP | unconditional |
| 1 | JZ | Z = 1 (zero) |
| 2 | JNZ | Z = 0 (not zero) |
| 3 | JC | C = 1 (carry) |
| 4 | JNC | C = 0 (no carry) |
| 5 | JN | N = 1 (negative) |
| 6 | JP | N = 0 (positive) |
| 7 | JV | V = 1 (overflow) |
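A conditional jump reduces to MUX-based selection between the fall-through address and the jump target, gated by the relevant flag. A bit-level sketch with hypothetical wiring (the actual control tensors may compose differently):

```python
def heaviside(x):
    return 1 if x >= 0 else 0

def mux2(sel, a_bit, b_bit):
    """2:1 MUX from two layers of threshold neurons:
    picks b_bit when sel = 1, else a_bit."""
    pick_b = heaviside(sel + b_bit - 2)        # AND(sel, b)
    pick_a = heaviside((1 - sel) + a_bit - 2)  # AND(NOT sel, a)
    return heaviside(pick_a + pick_b - 1)      # OR of the two

def next_pc(pc_bits, target_bits, z_flag, is_jz):
    """JZ sketch: select the target address when the Zero flag is set."""
    take = heaviside(z_flag + is_jz - 2)  # AND(Z, opcode-is-JZ)
    return [mux2(take, p, t) for p, t in zip(pc_bits, target_bits)]
```

With the flag clear, every output bit tracks the program counter; with it set, every bit tracks the target.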

Circuit Inventory

ALU (57 tensors, 521 params)

  • 16-bit ALU with all 16 operations
  • 4-bit opcode decoder
  • Zero, Negative, Carry, Overflow flag computation

Arithmetic (539 tensors, 1,030 params)

  • 16-bit ripple carry adder (16 chained full adders)
  • 16-bit incrementer/decrementer
  • 16-bit equality comparator
  • 16-bit magnitude comparators (>, >=, <, <=)
  • Building blocks: half adder, full adder, 2-bit, 4-bit adders
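One standard construction for the full adder building block (illustrative only; the repository's fa/ha tensors may be wired differently): the carry is a single majority-of-3 neuron, while the sum bit (3-input XOR) needs a second layer because parity is not linearly separable.

```python
def heaviside(x):
    return 1 if x >= 0 else 0

def full_adder(a, b, cin):
    """Full adder from threshold neurons: majority carry,
    two-layer parity sum."""
    carry = heaviside(a + b + cin - 2)  # majority: fires when >= 2 inputs set
    ge1 = heaviside(a + b + cin - 1)    # at-least-1
    ge2 = heaviside(a + b + cin - 2)    # at-least-2
    ge3 = heaviside(a + b + cin - 3)    # at-least-3
    s = heaviside(ge1 - ge2 + ge3 - 1)  # fires exactly when the count is odd
    return s, carry

# Exhaustive check against integer addition
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, c = full_adder(a, b, cin)
            assert 2 * c + s == a + b + cin
```

Chaining sixteen of these (carry-out into the next carry-in) yields the ripple carry adder above.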

Boolean (30 tensors, 44 params)

  • Primitives: AND, OR, NOT, NAND, NOR, XOR, XNOR
  • IMPLIES, BIIMPLIES
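XOR and XNOR are the interesting entries here: neither is linearly separable, so no single threshold neuron computes them. One common two-layer construction (illustrative, not necessarily the repository's exact weights) feeds OR and NAND into an AND:

```python
def heaviside(x):
    return 1 if x >= 0 else 0

def xor(a, b):
    """Two-layer XOR: fires when at least one input is set (OR)
    and not both are set (NAND)."""
    h_or = heaviside(a + b - 1)          # OR
    h_nand = heaviside(-a - b + 1)       # NAND
    return heaviside(h_or + h_nand - 2)  # AND of the two hidden units
```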

Control (1,194 tensors, 1,626 params)

  • 16-bit unconditional jump
  • 16-bit conditional jumps: JZ, JNZ, JC, JNC, JN, JP, JV, JNV
  • Stack operations: PUSH, POP, CALL, RET

Combinational (82 tensors, 309 params)

  • Multiplexers: 2:1, 4:1, 8:1, 16:1
  • Demultiplexers: 1:2, 1:4, 1:8, 1:16
  • Encoders: 8:3, 16:4
  • Decoders: 3:8, 4:16
  • 16-bit priority encoder
  • 16-bit barrel shifter
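Decoders are a good example of threshold logic's economy: each output line is a single neuron that matches one exact input pattern. A sketch of the 3:8 case (the 4:16 decoder scales the same way; weight values here are derived for illustration, not read from the model):

```python
def heaviside(x):
    return 1 if x >= 0 else 0

def decoder3to8(bits):
    """3:8 decoder: output j fires iff the input (LSB first) encodes j.
    Per output: weight +1 where the pattern has a 1, -1 where it has
    a 0, bias = -(ones in the pattern). Any mismatch drops the sum
    below zero."""
    outs = []
    for j in range(8):
        pattern = [(j >> i) & 1 for i in range(3)]
        weights = [1 if p else -1 for p in pattern]
        bias = -sum(pattern)
        outs.append(heaviside(sum(w * x for w, x in zip(weights, bits)) + bias))
    return outs

# Input 5 = 101b (LSB first: [1, 0, 1]) activates line 5 only
print(decoder3to8([1, 0, 1]))  # one-hot at index 5
```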

Threshold (46 tensors, 377 params)

  • k-of-16 gates (k = 1 to 16)
  • 16-input majority/minority
  • At-least-k, at-most-k, exactly-k
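These counting gates are where threshold logic is most natural: a single neuron with uniform weights counts. A sketch of the three families (weights derived from the firing condition, for illustration):

```python
def heaviside(x):
    return 1 if x >= 0 else 0

def at_least_k(bits, k):
    """Fires when k or more inputs are set: uniform +1 weights, bias -k."""
    return heaviside(sum(bits) - k)

def at_most_k(bits, k):
    """Fires when k or fewer inputs are set: uniform -1 weights, bias +k."""
    return heaviside(k - sum(bits))

def exactly_k(bits, k):
    """AND of at-least-k and at-most-k (two layers)."""
    return heaviside(at_least_k(bits, k) + at_most_k(bits, k) - 2)
```

Majority-of-16 is just `at_least_k(bits, 9)`, matching the `[-9.0]` bias shown in the Quick Start.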

Pattern Recognition (29 tensors, 241 params)

  • 16-bit population count
  • All-ones, all-zeros detection
  • Alternating pattern detection
  • 16-bit Hamming distance
  • 16-bit palindrome/symmetry detection
  • One-hot detector
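Population count falls out of the at-least-k ladder: with s bits set, exactly the gates k = 1..s fire, so summing the layer's outputs recovers the count. A minimal sketch of that construction (one of several possibilities, not necessarily the model's wiring):

```python
def heaviside(x):
    return 1 if x >= 0 else 0

def popcount16(bits):
    """Population count from a ladder of at-least-k gates:
    with s bits set, gates k = 1..s fire and the rest do not,
    so the layer's output sum equals s."""
    return sum(heaviside(sum(bits) - k) for k in range(1, 17))

assert popcount16([1, 0, 1, 1] + [0] * 12) == 3
```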

Error Detection (36 tensors, 235 params)

  • 16-bit parity checker/generator
  • 16-bit checksum
  • CRC-8, CRC-16
  • Extended Hamming codes
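Parity admits a compact two-layer threshold construction: the at-least-k ladder feeds a second neuron with alternating +1/-1 weights, whose inner sum telescopes to 1 when the set-bit count is odd and 0 when it is even. A sketch (again a textbook construction, not necessarily the model's exact tensors):

```python
def heaviside(x):
    return 1 if x >= 0 else 0

def parity16(bits):
    """Two-layer odd-parity detector: at-least-k ladder, then
    alternating +/-1 second-layer weights with bias -1."""
    ladder = [heaviside(sum(bits) - k) for k in range(1, 17)]
    # +ge1 - ge2 + ge3 - ... : equals 1 iff the set-bit count is odd
    alternating = sum(g if i % 2 == 0 else -g for i, g in enumerate(ladder))
    return heaviside(alternating - 1)
```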

Modular Arithmetic (30 tensors, 255 params)

  • mod 2 through mod 16 (16-bit operands)

Research Applications

This model is designed as a baseline for optimization research:

1. Parameter Minimization

Find the smallest threshold network that computes the same functions.

2. Depth Minimization

Flatten circuits by exploiting threshold gates' higher fan-in capability.
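A concrete instance of the headroom: a 16-input AND built from 2-input Boolean gates needs a depth-4 tree, but one threshold neuron computes it flat. This toy example shows the flattening target, not a transformation the baseline already applies:

```python
def heaviside(x):
    return 1 if x >= 0 else 0

def and16_flat(bits):
    """16-input AND as a single neuron: fires iff all 16 inputs are set."""
    return heaviside(sum(bits) - 16)

assert and16_flat([1] * 16) == 1
assert and16_flat([1] * 15 + [0]) == 0
```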

3. Robustness Optimization

Evolve circuits that tolerate noise, faults, or weight perturbations.

4. Hardware-Specific Optimization

Constrain to specific neuromorphic hardware limits (fan-in, weight precision).

See PROSPECTUS.md for detailed research directions.

Tensor Naming Convention

```
{category}.{circuit}.{subcircuit}.{component}.{weight|bias}
```

Examples:

```
boolean.and.weight
arithmetic.ripplecarry16bit.fa7.ha2.sum.layer1.or.weight
control.jz.bit15.and_a.weight
threshold.majority16.weight
```

Verification

All circuits are derived from threshold logic principles:

  • Boolean gates: Analytically derived weights
  • Arithmetic: Compositional from verified building blocks
  • Control flow: MUX-based conditional selection

Functional correctness can be verified by exhaustive testing (up to 2^32 input combinations for a two-operand 16-bit operation) or by compositional reasoning over the verified building blocks.
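To make the compositional argument concrete, here is a harness sketch (hypothetical helper names, textbook full-adder weights) that exhaustively checks a ripple-carry adder at 8-bit scale, where 2^16 operand pairs are cheap; the 16-bit check is the same loop over 2^32 inputs:

```python
def heaviside(x):
    return 1 if x >= 0 else 0

def full_adder(a, b, cin):
    """Threshold full adder: majority carry, two-layer parity sum."""
    carry = heaviside(a + b + cin - 2)
    ge1, ge2, ge3 = (heaviside(a + b + cin - k) for k in (1, 2, 3))
    return heaviside(ge1 - ge2 + ge3 - 1), carry

def ripple_add(x, y, width):
    """Chain of full adders, LSB first; returns (result, carry-out)."""
    out, c = 0, 0
    for i in range(width):
        s, c = full_adder((x >> i) & 1, (y >> i) & 1, c)
        out |= s << i
    return out, c

# Exhaustive at 8-bit scale: all 65,536 operand pairs
for x in range(256):
    for y in range(256):
        s, c = ripple_add(x, y, 8)
        assert (c << 8) | s == x + y
```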

Comparison with 8-bit Version

| Property | 8-bit | 16-bit |
|---|---|---|
| Tensors | 1,193 | 2,052 |
| Parameters | 2,386 | 4,647 |
| Addressable Memory | 256 bytes | 64KB |
| Register Width | 8-bit | 16-bit |
| Instruction Width | 16-bit | 32-bit |

Citation

```bibtex
@software{threshold_computer_16bit_2025,
  title={16bit-threshold-computer: Threshold Logic Computer for Optimization Research},
  author={Norton, Charles},
  url={https://huggingface.co/phanerozoic/16bit-threshold-computer},
  year={2025},
  note={2052 tensors, 4647 parameters, optimization baseline}
}
```

License

MIT - Free to use for research, optimization, and derivative works.
