
Theseus-MK1 is a Spherical Linear Interpolation (SLERP) merge built in three steps: nous-hermes v2 is SLERP-merged with chronos v2, platypus v2 is SLERP-merged with airoboros v2, and the two resulting child models are then SLERP-merged into one - Theseus. It tailors itself directly to Alpaca instruct and follows through in character, inferring context when none is given or following explicit direction, with zero qualms and precise behavior emulation.

This is a dev release; the MK1 moniker marks a first attempt at what Theseus is intended to be. No further versions or editions of this merge are planned. It is simply a research artefact: the first application of SLERP merging to four highly competent models. Results: promising. It was made before 13B-Thorns-l2 and was left private. For observing stepping stones in research, and for anyone who wants a fairly interesting model focused on high competency and minimal to no censorship - this is it. Thank you to all the authors of the models mentioned above.
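
For readers curious about the mechanics, here is a minimal sketch of SLERP weight merging, assuming two models with identical architectures; the tensor handling and the interpolation factor `t = 0.5` are illustrative assumptions, not the exact settings used to build Theseus-MK1.

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_dir = a_flat / (a_flat.norm() + eps)
    b_dir = b_flat / (b_flat.norm() + eps)
    dot = torch.clamp(torch.dot(a_dir, b_dir), -1.0, 1.0)
    omega = torch.acos(dot)            # angle between the two weight vectors
    if omega < eps:                    # nearly parallel: fall back to plain LERP
        merged = (1.0 - t) * a_flat + t * b_flat
    else:
        so = torch.sin(omega)
        merged = (torch.sin((1.0 - t) * omega) / so) * a_flat \
               + (torch.sin(t * omega) / so) * b_flat
    return merged.reshape(a.shape).to(a.dtype)

def slerp_state_dicts(t: float, sd_a: dict, sd_b: dict) -> dict:
    """Merge two state dicts key by key; both models must share a layout."""
    return {name: slerp(t, sd_a[name], sd_b[name]) for name in sd_a}

# The Theseus-style pipeline, illustratively:
# child_a = slerp_state_dicts(0.5, nous_hermes_v2, chronos_v2)
# child_b = slerp_state_dicts(0.5, platypus_v2, airoboros_v2)
# theseus = slerp_state_dicts(0.5, child_a, child_b)
```

Unlike plain linear averaging, SLERP walks along the arc between the two weight vectors, which tends to preserve their magnitudes and avoid the "washed out" quality that naive averaging can produce.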

If anyone wants to know whether the research branches we are growing - SLERP merging, randomized layer-merge brute-forcing against a user-defined alignment, and so on - are paying off and showing signs of early fruition: yes. A rough sketch of that second idea follows below.
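
As an illustration of the randomized layer-merge idea, the sketch below draws random per-layer donor assignments and keeps whichever candidate scores best on a user-defined alignment metric. The `score_alignment` callback, the LLaMA-style key names, and the overall search loop are all assumptions for illustration, not the project's actual tooling.

```python
import random

def random_layer_merge(parents: list[dict], num_layers: int, trials: int, score_alignment):
    """Brute-force search over random per-layer donor assignments."""
    best_sd, best_score = None, float("-inf")
    for _ in range(trials):
        # pick a random parent to donate each transformer layer
        donors = [random.randrange(len(parents)) for _ in range(num_layers)]
        candidate = {}
        for layer, donor in enumerate(donors):
            prefix = f"model.layers.{layer}."   # assumed LLaMA-style key naming
            candidate.update({k: v for k, v in parents[donor].items()
                              if k.startswith(prefix)})
        # non-layer tensors (embeddings, final norm, head) come from parent 0
        candidate.update({k: v for k, v in parents[0].items()
                          if k not in candidate})
        score = score_alignment(candidate)       # user-defined alignment metric
        if score > best_score:
            best_sd, best_score = candidate, score
    return best_sd, best_score
```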

I am personally excited to finish some unique tools inspired by these findings, to build new ensembles combined in ways not quite expected, and to soon upload the next mainline model release - one that has, time and time again, passed all my subjective testing batteries to the point that I struggle to find the flaws most models reveal when poked with a stick enough times. I think this model learns to like the stick just to mess with whoever is testing it.

Fun and chaotic creativity on the horizon. Can't wait.

-Digitous/Chasm
