Reka AI, which emerged from stealth just three months ago, has announced its first multimodal AI assistant, Yasa-1. The model combines language capabilities with visual and audio understanding, along with a distinctive code execution feature.
Reka developed Yasa-1 from the ground up, building its own base models and carefully aligning them, and heavily optimised the training and serving infrastructure for performance. The assistant is also connected to the internet for live information.
Yasa-1’s features include robust long-context document processing, fast natively optimised retrieval-augmented generation, multilingual support across 20 languages, a search engine interface, and a built-in code interpreter.
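Reka has not published how its retrieval-augmented generation works internally, but the general technique can be illustrated with a minimal sketch: retrieve the documents most similar to a query, then prepend them as context before calling the model. The bag-of-words "embedding" below is a toy stand-in for a real embedding model.

```python
from collections import Counter
import math

def embed(text):
    """Toy bag-of-words 'embedding' (a stand-in for a real embedding model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, docs):
    """Prepend retrieved context to the user query before calling the model."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Yasa-1 supports a context window of 24,000 tokens.",
    "The Eiffel Tower is in Paris.",
]
print(build_prompt("What is the context window of Yasa-1?", docs))
```

In a production system the retrieval step would use a learned embedding model and a vector index rather than word overlap, but the flow — retrieve, assemble context, generate — is the same.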
Yasa-1 is currently available in private preview. Interested parties can access it through Reka’s APIs or as Docker containers for on-premise or virtual-private-cloud deployment. Reka emphasises its commitment to deploying Yasa-1 responsibly and will soon expand access to more enterprise and organisational partners.
The multimodal capabilities of Reka’s model stand out: Yasa-1 accepts images, audio, and short videos as inputs, extending it beyond traditional text-based AI assistants. It also offers a search engine feature that taps various commercial search engines for up-to-date information retrieval.
Additionally, it excels at understanding private datasets, with an API and deployment setup that facilitates integration.
Yasa-1’s long-context model supports up to 24,000 tokens by default, with the potential to handle documents as long as 100,000 tokens. Speed and accuracy were evaluated on a high-quality benchmark, showing notable performance gains. With a simple activation flag, Yasa-1 identifies and executes code blocks within its responses and appends the results.
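Reka has not documented how the activation flag is exposed, but the behaviour described — find code blocks in a response, run them, and append the output — can be sketched generically. Everything below (the `execute` flag, the `run_code_blocks` helper) is a hypothetical illustration, not Reka’s API.

```python
import io
import re
from contextlib import redirect_stdout

# Matches fenced Python blocks in a model response.
CODE_BLOCK = re.compile(r"```python\n(.*?)```", re.DOTALL)

def run_code_blocks(response, execute=True):
    """If the (hypothetical) flag is set, execute each fenced Python block
    in a model response and append its captured stdout, mimicking the
    behaviour the article describes."""
    if not execute:
        return response

    def run(match):
        buf = io.StringIO()
        with redirect_stdout(buf):
            exec(match.group(1), {})  # no sandboxing: illustration only
        return match.group(0) + f"\nResult:\n{buf.getvalue()}"

    return CODE_BLOCK.sub(run, response)

reply = "The sum is computed below:\n```python\nprint(2 + 3)\n```\n"
print(run_code_blocks(reply))
```

A real code interpreter would run blocks in an isolated sandbox with resource limits rather than calling `exec` in-process; this sketch only shows the detect-execute-append loop.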
Reka has developed a comprehensive evaluation framework to assess Yasa-1’s performance across various dimensions, including correctness, safety, and helpfulness. Human and automatic evaluations contribute to these assessments. While Yasa-1 offers remarkable capabilities, it may produce inaccurate outputs in certain scenarios.
Reka plans to enhance Yasa-1’s capabilities significantly in the coming months, promising continued innovation in the field of multimodal AI assistance.
Reka AI was founded by former Google, DeepMind, Baidu, Meta, and Microsoft researchers. The company recently emerged from stealth mode, unveiling $58 million in Series A funding led by DST Global Partners and Radical Ventures, along with Snowflake Ventures.
The research-focused founders are motivated to work on what they call ‘universal intelligence’: general-purpose multimodal and multilingual agents that are self-improving, designed specifically for enterprise software. Reka is also hiring for both technical and non-technical roles.
The post Reka AI Launches Yasa-1, Multimodal AI Assistant appeared first on Analytics India Magazine.