Introducing HyperLLM
A better way to build lightweight transformers
HyperLLM is an exploration of alternate ways to build language models: LLMs that can deliver efficient responses without requiring heavy processing of the training dataset.
Our goal is to build truly lightweight LLMs using a retrieval-based architecture, and that is the focus of our research.
In our earlier attempts, we tried replacing the whole tech stack all at once. We have since shifted to making changes incrementally, replacing one conventional process with an innovative one at a time.
Our previous work and learning took place at Sttabot.io, Supervised.co, and EternityAI.