Meta introduces Llama 2 AI models for chatbots


Meta, formerly known as Facebook, has announced the launch of Llama 2, a new family of AI models designed to power modern chatbots along the lines of OpenAI’s ChatGPT and Microsoft’s Bing Chat. Building on the success of its predecessor, Llama, Meta claims that Llama 2 delivers significantly improved performance thanks to training on a mix of publicly available data.

Accessibility and Compatibility of Llama 2

Unlike its predecessor, which was accessible only by request to prevent potential misuse, Llama 2 is freely available for research and commercial use, and it can be fine-tuned on popular AI model hosting platforms such as AWS, Azure, and Hugging Face. Thanks to an expanded partnership with Microsoft, Meta has optimized Llama 2 for Windows, as well as for smartphones and PCs equipped with Qualcomm’s Snapdragon system-on-chip; Qualcomm aims to bring Llama 2 to Snapdragon devices in 2024.

Key Differences and Variants of Llama 2

Llama 2 comes in two versions: Llama 2 and Llama 2-Chat, the latter fine-tuned specifically for two-way conversations. Each version is offered at several levels of sophistication, distinguished by parameter count; parameters are the learned weights that determine how skillfully a model generates text. The available variants have 7 billion, 13 billion, and 70 billion parameters.
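As a rough illustration of why parameter count matters practically (this estimate is not from the article), the number of parameters translates directly into memory requirements. A back-of-the-envelope sketch, assuming 2 bytes per parameter (16-bit weights) and ignoring activations and runtime overhead:

```python
# Back-of-the-envelope memory estimate for the three Llama 2 variants.
# Assumes 2 bytes per parameter (16-bit floating-point weights only);
# real deployments also need memory for activations and the KV cache.

BYTES_PER_PARAM = 2  # fp16 / bf16


def weight_memory_gib(num_params: int) -> float:
    """Approximate GiB needed just to hold the model weights."""
    return num_params * BYTES_PER_PARAM / 1024**3


for name, params in [("7B", 7_000_000_000),
                     ("13B", 13_000_000_000),
                     ("70B", 70_000_000_000)]:
    print(f"Llama 2 {name}: ~{weight_memory_gib(params):.0f} GiB of weights")
```

Under these assumptions the 7B variant fits on a single consumer GPU, while the 70B variant requires multiple high-memory accelerators.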

Enhanced Training Data for Improved Performance

Llama 2 was trained on a dataset of two trillion tokens, where tokens are the basic units of raw text, such as whole words or subword pieces like “fan,” “tas,” and “tic” for “fantastic.” That is nearly double the 1.4 trillion tokens used to train the original Llama. In generative AI, training on more tokens generally leads to better performance. For reference, Google trained its leading large language model, PaLM 2, on 3.6 trillion tokens, and speculation suggests that GPT-4 was also trained on trillions of tokens.
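The subword splitting described above can be sketched with a toy greedy longest-match tokenizer. This is a simplification with an invented vocabulary; real tokenizers such as Llama 2’s SentencePiece/BPE tokenizer learn their vocabularies from data:

```python
# Toy greedy longest-match subword tokenizer. The vocabulary below is
# invented for illustration; real LLM tokenizers learn tens of thousands
# of subword pieces from their training corpus.

VOCAB = {"fan", "tas", "tic", "word", "s"}


def tokenize(text: str, vocab: set[str] = VOCAB) -> list[str]:
    """Split text into subword tokens, always taking the longest match."""
    tokens = []
    i = 0
    while i < len(text):
        # Try the longest candidate piece first, shrinking until a match.
        for j in range(len(text), i, -1):
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            # No vocabulary piece matches: emit the character on its own.
            tokens.append(text[i])
            i += 1
    return tokens


print(tokenize("fantastic"))  # ['fan', 'tas', 'tic']
```

A token count like “two trillion” refers to how many such pieces the model saw during training, not how many distinct pieces exist in the vocabulary.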

Benchmarks and Evaluations

While Meta declines to disclose the specific sources of the training data, its whitepaper confirms that the data comes primarily from the web, is mostly in English, and does not come from Meta’s own products or services. Across a range of benchmarks, Llama 2 models perform slightly below the highest-profile closed-source rivals, GPT-4 and PaLM 2, particularly on computer programming tasks. However, human evaluators found Llama 2 roughly as “helpful” as ChatGPT: it provided satisfactory answers across a set of approximately 4,000 prompts assessing “helpfulness” and “safety.”

Disclaimer

We strive to uphold the highest ethical standards in all of our reporting and coverage. We at StartupNews.fyi want to be transparent with our readers about any potential conflicts of interest that may arise in our work. It’s possible that some of the investors we feature may have connections to other businesses, including competitors or companies we write about. However, we want to assure our readers that this will not have any impact on the integrity or impartiality of our reporting. We are committed to delivering accurate, unbiased news and information to our audience, and we will continue to uphold our ethics and principles in all of our work. Thank you for your trust and support.
