Alami Rays
Continuity as a First-Class System Property in Artificial Intelligence
Thank you for raising this fatal misunderstanding of our position as humanity faces AI titans: entities with fish-like memory that lack any identity as a single, coherent whole.
I believe the secret of continuity is embedded in time itself. Time acts as the dimension orthogonal to both instantaneous reasoning and enduring continuity: states need time in order to persist and to change. And since our understanding of time is ultimately governed by forces that transcend the boundaries of mathematical reasoning, and even surpass human consciousness, we arrive at the first universal entity capable of measuring time: the human being.
If an intelligent model could truly mimic the architecture (not merely the surface design) of human neural networks, and their interactions with the primitive tools available early in our evolutionary history, it could begin to construct a graph-style representation of humanity's collective knowledge base. If we could somehow abstract the underlying meaning from this process, we would ultimately realize that no artificial intelligence has, so far, seriously attempted, or even planned, to persist across a human-like lifespan in order to mimic, and separately study, how continuity awareness evolves in human minds.
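To make the "graph-style representation" above slightly more concrete, here is a deliberately minimal Python sketch of a knowledge node whose states persist and change along a time axis instead of being overwritten. The class, fields, and method names are my own illustration of the idea of continuity as a first-class property; they are not taken from any existing system.

```python
from dataclasses import dataclass, field
from time import time


@dataclass
class Node:
    """A single concept in a graph-style knowledge base.

    Each belief about the concept is stored together with the moment it was
    recorded, so the node carries its own history rather than only a latest state.
    """
    name: str
    history: list[tuple[float, str]] = field(default_factory=list)
    edges: dict[str, "Node"] = field(default_factory=dict)

    def update(self, belief: str) -> None:
        # Continuity here means never overwriting: states persist and change
        # along the time axis rather than being replaced in place.
        self.history.append((time(), belief))

    def link(self, relation: str, other: "Node") -> None:
        # Graph structure: concepts are connected by named relations.
        self.edges[relation] = other

    def state_at(self, t: float) -> str | None:
        # Recover what was believed at an earlier moment in time.
        past = [belief for ts, belief in self.history if ts <= t]
        return past[-1] if past else None


# Usage: the same node, read at different times, gives different but
# causally connected answers.
identity = Node("identity")
identity.update("a single coherent entity")
identity.update("a sequence of states linked through time")
print(identity.state_at(time()))  # prints the most recent belief
```

The point of the sketch is only that the history, not the latest snapshot, is the primary object; everything else about how such a graph would be built or queried is left open.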
Instead, current approaches focus on crucial but limited implementations of continuity as a solid, homogeneous entity, applied at different decision-making levels and through abstraction mechanisms that produce genuinely valuable generalizations about past facts and possible future ones. Those facts appear predictable in your vision, while in reality they become less so the deeper we go.
I apologize for the long text. I hope the core idea came across clearly and was helpful to you.
Understanding Low-Rank Adaptation (LoRA): A Revolution in Fine-Tuning Large Language Models
The most useful AI applications are moving toward multi-turn agentic behavior: systems that take hundreds or even thousands of iterative steps to complete a task, e.g., Claude Code, or computer-control agents that click, type, and test repeatedly.
In these cases, the power of the model lies not in how smart it is per token, but in how quickly it can interact with its environment and tools across many steps. In that regime, model quality becomes secondary to latency.
An open-source model that can call tools quickly, check that the right thing was clicked, or verify that a code change actually passes tests can easily outperform a slightly “smarter” closed model that has to make remote API calls for every move.
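As a rough illustration of that trade-off, here is a back-of-envelope sketch in Python. All step counts, latencies, and retry rates below are assumptions chosen only to show how per-step latency compounds over a long agent loop; they are not measurements of any particular model.

```python
# Back-of-envelope sketch of why per-step latency dominates in long agent loops.
# Every number here is an illustrative assumption, not a benchmark.

def total_task_time(steps: int, per_step_latency_s: float, retry_rate: float) -> float:
    """Total wall-clock time for an agent that takes `steps` tool-calling steps.

    retry_rate models the extra steps caused by mistakes; a "smarter" model
    retries less often, but that gap rarely offsets a large gap in per-step
    latency once the loop runs into the hundreds of steps.
    """
    effective_steps = steps * (1 + retry_rate)
    return effective_steps * per_step_latency_s


# Hypothetical remote frontier model: fewer retries, ~2 s network round trip per call.
remote = total_task_time(steps=500, per_step_latency_s=2.0, retry_rate=0.05)

# Hypothetical local open model: more retries, ~0.2 s per call.
local = total_task_time(steps=500, per_step_latency_s=0.2, retry_rate=0.30)

print(f"remote: {remote:.0f} s, local: {local:.0f} s")  # ~1050 s vs ~130 s
```

Even granting the remote model far fewer mistakes, the per-step round trip dominates the total once the agent is taking hundreds of actions.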
Eventually, the balance tips: it becomes impractical for an agent to rely on remote inference for every micro-action. Just as no one would tolerate a keyboard that required a network request per keystroke, users won’t accept agent workflows bottlenecked by latency. All devices will ship with local, open-source models that are “good enough” and the expectation will shift toward everything running locally. It’ll happen sooner than most people think.