Stability AI & CarperAI announce FreeWilly 1 & 2


July 25th, 2023, New Jersey – Stability AI and CarperAI Lab have announced FreeWilly1 and FreeWilly2, two powerful Large Language Models (LLMs) poised to advance the field of AI research. These open-access models demonstrate exceptional reasoning ability across diverse benchmarks and promise to elevate the landscape of natural language understanding.

The development of FreeWilly1 and FreeWilly2 was achieved through meticulous research and innovation. FreeWilly1 leverages the original LLaMA 65B foundation model and was carefully fine-tuned on a new synthetically generated dataset using Supervised Fine-Tuning (SFT) in standard Alpaca format. FreeWilly2 builds upon the LLaMA 2 70B foundation model and achieves remarkable performance that rivals GPT-3.5 on certain tasks.
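For readers unfamiliar with the "standard Alpaca format" mentioned above, the sketch below shows what such an SFT prompt looks like. The template strings follow the widely used Stanford Alpaca convention; the helper function is an illustrative name of ours, not part of any FreeWilly release.

```python
# Illustrative sketch of the standard Alpaca prompt format.
# Two variants exist: one for instructions that come with extra input
# context, and one for instructions that stand alone.

ALPACA_WITH_INPUT = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n"
)

ALPACA_NO_INPUT = (
    "Below is an instruction that describes a task. Write a response that "
    "appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str, input_text: str = "") -> str:
    """Render one SFT training prompt in Alpaca format (illustrative helper)."""
    if input_text:
        return ALPACA_WITH_INPUT.format(instruction=instruction, input=input_text)
    return ALPACA_NO_INPUT.format(instruction=instruction)
```

During SFT, the model's target completion (the gold response) is appended after the `### Response:` marker, and the loss is typically computed only on that completion.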

Adhering to a non-commercial license, both models are offered as open resources, fostering collaborative research within the community. Stability AI and CarperAI Lab’s commitment to transparency is evident through their internal red-teaming to ensure the models’ ethical deployment. The organizations eagerly welcome community feedback and support to further enhance the safety and reliability of the models.

The data generation process for FreeWilly1 and FreeWilly2 was inspired by the methodology Microsoft outlined in its "Orca: Progressive Learning from Complex Explanation Traces of GPT-4" paper, although the two projects diverge in their data sources. By prompting language models with high-quality instructions from datasets curated by Enrico Shippole, including COT Submix Original, NIV2 Submix Original, FLAN 2021 Submix Original, and T0 Submix Original, the team generated 600,000 data points for training. Significantly, this is only one-tenth the sample size used in the original Orca paper, substantially reducing training cost and carbon footprint while maintaining exceptional performance.
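The Orca-style process described above amounts to pairing curated instructions with a teacher model's explanation traces. The sketch below is a hypothetical minimal version of that loop; the function names, the `teacher` callable, and the stubbed response are placeholders of ours, not the team's actual pipeline.

```python
# Hypothetical sketch of Orca-style synthetic data generation:
# each curated instruction is sent to a teacher model together with a
# system prompt that elicits step-by-step explanations, and the
# resulting trace is kept as the training target.
from typing import Callable, Iterable

def generate_dataset(instructions: Iterable[str],
                     teacher: Callable[[str], str],
                     system_prompt: str) -> list[dict]:
    """Pair each instruction with a teacher-model explanation trace."""
    examples = []
    for instruction in instructions:
        prompt = f"{system_prompt}\n\n{instruction}"
        response = teacher(prompt)  # in practice, a call to a large LLM
        examples.append({"instruction": instruction, "output": response})
    return examples

# Toy usage with a stubbed teacher model:
stub_teacher = lambda p: "Step 1: 6 x 7 = 42. Therefore, the answer is 42."
data = generate_dataset(
    ["What is 6 x 7? Explain step by step."],
    stub_teacher,
    "You are a helpful assistant. Explain your reasoning.",
)
```

The resulting records can then be rendered into SFT prompts and fed to the fine-tuning stage.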

Comprehensive evaluations conducted by Stability AI researchers, and independently reproduced by Hugging Face on July 21st, 2023, showcase the strengths of FreeWilly1 and FreeWilly2 across numerous domains. The models excel at intricate reasoning, understanding linguistic subtleties, and answering complex questions in specialized fields such as law and mathematical problem-solving.

The models’ prowess is evidenced by their standing on the prestigious Open LLM Leaderboard, where they surpassed many other LLMs. Both FreeWilly models also delivered remarkable 0-shot performance on GPT4ALL benchmarks and AGI Eval, further solidifying their status as cutting-edge advancements in the realm of AI.

FreeWilly1 and FreeWilly2 have already set a new standard in the domain of open access Large Language Models. Their impact on research is vast, significantly advancing natural language understanding and unlocking possibilities for complex tasks. Stability AI and CarperAI Lab expressed their gratitude to their dedicated team of researchers, engineers, and collaborators, whose unwavering efforts have culminated in this extraordinary milestone.

With these remarkable achievements, the AI community eagerly awaits the boundless opportunities that FreeWilly1 and FreeWilly2 will unlock, inspiring new applications and pushing the boundaries of AI research into uncharted territories.
