A series of large language models developed by Baichuan Intelligent Technology. Baichuan 2 🤗 Hugging Face • 🤖 ModelScope • 💬 WeChat 🚀 The Baichuan 53B large-model online chat platform is now officially open to the public 🎉 中文 | English Contents 📖 Model Introduction 📊 Benchmark Results 🥇🥇🔥🔥 ⚙️ Inference and Deployment 🛠️ Model Fine-Tuning 💾 Intermediate Checkpoints 🔥🔥 👥 Community and Ecosystem 📜 Statements and License Model Introduction Baichuan 2 is the new generation of open-source large language models released by Baichuan Intelligence, trained on a high-quality corpus of 2.6 trillion tokens. Baichuan 2 achieves the best results among models of the same size on multiple authoritative Chinese, English, and multilingual benchmarks, both general-purpose and domain-specific. This release includes 7B and 13B Base and Chat versions, along with 4-bit quantization of the Chat versions. All versions are fully open for academic research. In addition, developers can apply by email and…
aws-cli | Universal Command Line Interface for Amazon Web Services. This package provides a unified command line interface to Amazon Web Services. Jump to: Getting Started | Getting Help | More Resources. Getting Started This README is for the AWS CLI version 1. If you are looking for information about the AWS CLI version 2, please visit the v2 branch. Requirements The aws-cli package works on Python versions: 3.7.x and greater, 3.8.x and greater, 3.9.x and greater, 3.10.x and greater, 3.11.x and gre…
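As a quick getting-started sketch (assuming `pip` is on your PATH and you already have AWS credentials; this installs the version-1 CLI described above):

```shell
# Install the AWS CLI version 1 from PyPI
pip install awscli

# Confirm the install and check which version was picked up
aws --version

# Configure credentials interactively (access key, secret key, default region)
aws configure

# A first call: list your S3 buckets
aws s3 ls
```

`aws configure` writes to `~/.aws/credentials` and `~/.aws/config`, so subsequent commands pick up the same profile automatically.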
The Dom amongst the Flipper Zero firmwares. Give your Flipper the power and freedom it is really craving. Let it show you its true form. Don't delay, switch to the one and only true Master today! XFW - Xtreme Firmware for the Flipper Zero Website | Intro | Install | Changelog | Wiki | Discord | Donate This firmware is a complete overhaul of the Official Firmware, and it also features lots of awesome code-bits from Unleashed. What makes it special? We have spent many hours perfecting this c…
This is a Next.js project bootstrapped with create-next-app. Getting Started First, run the development server with `npm run dev` (or `yarn dev`, or `pnpm dev`). Open http://localhost:3000 with your browser to see the result. You can start editing the page by modifying app/page.js; the page auto-updates as you edit the file. This project uses next/font to automatically optimize and load Inter, a custom Google Font. Learn More To learn more about Next.js, take a look at the following resources: Next.js D…
🦜️🔗 LangChain ⚡ Building applications with LLMs through composability ⚡ Looking for the JS/TS version? Check out LangChain.js. Production Support: As you move your LangChains into production, we'd love to offer more hands-on support. Fill out this form to share more about what you're building, and our team will get in touch. 🚨 Breaking Changes for select chains (SQLDatabase) on 7/28/23 In an effort to make langchain lea…
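The composability idea can be sketched without LangChain itself: small steps (a prompt template, a model call, an output parser) are chained so each step's output feeds the next. Below is a minimal stdlib-only sketch of that pattern; `fake_llm` is a stand-in for a real model call, and none of these names are LangChain's actual API:

```python
from functools import reduce

def pipeline(*steps):
    """Compose steps left to right: the output of one feeds the next."""
    return lambda x: reduce(lambda acc, step: step(acc), steps, x)

# Step 1: a prompt "template" that fills in the user's input
def prompt(topic):
    return f"Tell me a one-word color associated with {topic}."

# Step 2: a stubbed model call (a real chain would invoke an LLM here)
def fake_llm(text):
    return "Answer: red" if "fire" in text else "Answer: unknown"

# Step 3: an output parser that strips the answer prefix
def parse(text):
    return text.removeprefix("Answer: ").strip()

chain = pipeline(prompt, fake_llm, parse)
print(chain("fire"))  # -> red
```

The point of structuring an app this way is that each step can be swapped independently: a different prompt, a different model, or a stricter parser, without touching the rest of the chain.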
A collection of New Grad full-time roles in SWE, Quant, and PM. 2024 New Grad Positions by Coder Quad and Simplify Use this repo to share and keep track of entry-level software, tech, CS, PM, and quant jobs for new graduates. ⚠️ Please note that this repository is exclusively for roles in the United States, Canada, or Remote positions 🌎 🙏 Contribute by submitting an issue! See the contribution guidelines here! 🙏 Update (Sep 1, 2023) 🥳 You might have noticed that the repo looks a little different 👀.…
📷 EasyPhoto | Your Smart AI Photo Generator. Introduction English | 简体中文 EasyPhoto is a WebUI plugin for generating AI portraits; it can be used to train a digital doppelganger of you. Training is recommended with 5 to 20 portrait images, preferably half-body photos in which you are not wearing glasses (it doesn't matter if the subject wears glasses in a few of the pictures). After training is done, we can generate portraits in the Inference sec…
Build high-quality LLM apps - from prototyping and testing to production deployment and monitoring. Prompt flow You are welcome to join us in making Prompt flow better by participating in discussions, opening issues, and submitting PRs. Prompt flow is a suite of development tools designed to streamline the end-to-end development cycle of LLM-based AI applications, from ideation, prototyping, testing, and evaluation to production deployment and monitoring. It makes prompt engineering much easier and enables you to…
The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. TinyLlama-1.1B English | 中文 With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. Training started on 2023-09-01. We adopted exactly the same architecture and tokenizer as Llama 2, which means TinyLlama can be used as a plug-and-play replacement in many open-sourc…