Our founding team met as PhD students at the Stanford AI Lab, where we invented State Space Models (SSMs), a fundamentally new primitive for training large-scale foundation models with higher quality and greater efficiency.
Over the past four years, we’ve developed the theory behind SSMs and scaled them up to achieve state-of-the-art results across modalities as diverse as text, audio, video, images, and time-series data.
Our team builds at the cutting edge of the field, pioneering new architectures for the next generation of AI.