AI Group Tabs: a Chrome extension that helps you group your tabs with AI. The extension is still under development; feel free to open issues and pull requests, and any suggestions are welcome. Demo video: (see repo). Roadmap: group tabs with AI by default categories; fill in an OpenAI API key in the popup and save it in Chrome storage; customize categories in the popup; group new tabs automatically; publish on the Chrome Web Store; better prompt engineering; logo and name; CI / CD…
Namada Trusted Setup Claimer: a CLI utility to sign arbitrary messages with keys obtained from Namada's trusted setup ceremony. To run it, just type `cargo run`. You will be asked to provide the seed used during the trusted setup ceremony. Once you have done that, you can choose to show the public key or sign a message.
Purple Llama: a set of tools to assess and improve LLM security. 🤗 Models on Hugging Face | Blog | Website | CyberSec Eval Paper | Llama Guard Paper. Purple Llama is an umbrella project that will, over time, bring together tools and evals to help the community build responsibly with open generative AI models. The initial release includes tools and evals for cybersecurity and input/output safeguards, but we plan to contribute more in the near future. Why purple? Borrowing…
EfficientSAM: Leveraged Masked Image Pretraining for Efficient Segment Anything. News: [Dec. 6, 2023] The EfficientSAM demo is available on Hugging Face Spaces (huge thanks to the HF team for their support). [Dec. 5, 2023] We released the TorchScript version of EfficientSAM and shared a Colab. Online Demo & Examples: the online demo and examples can be found on the project page. EfficientSAM Instance Segmentat…
Optimum-NVIDIA: optimized inference with NVIDIA and Hugging Face. Optimum-NVIDIA delivers the best inference performance on the NVIDIA platform through Hugging Face. Run LLaMA 2 at 1,200 tokens/second (up to 28x faster than the framework) by changing just a single line in your existing Transformers code. Installation: you can use a Docker container to try Optimum-NVIDIA today; images are available on the Hugging Face Docker Hub (`docker pull huggingface/optimum-nvidia`). An Optimum-NVIDIA…
CopilotKit: build in-app AI chatbots 🤖 and AI-powered textareas ✨ into React web apps. The open-source Copilot platform: in-app chatbots and an AI-enabled textarea. Explore the docs » · Join our Discord · Website · Report Bug · Request Feature. Questions? Book a call with us » 🌟 `<CopilotPortal />`: build in-app AI chatbots that can "see" the current app state and take act…
MLX Examples: this repo contains a variety of standalone examples using the MLX framework. The MNIST example is a good starting point for learning how to use MLX. Some more useful examples include: Transformer language model training; large-scale text generation with LLaMA or Mistral; parameter-efficient fine-tuning with LoRA; generating images with Stable Diffusion; speech recognition with OpenAI's Whisper. Contributing: check out the contribution guidelines for mo…
MLX: an array framework for Apple silicon. Quickstart | Installation | Documentation | Examples. MLX is an array framework for machine learning on Apple silicon, brought to you by Apple machine learning research. Some key features of MLX include: Familiar APIs: MLX has a Python API that closely follows NumPy, and a fully featured C++ API that closely mirrors the Python API. MLX also has higher-level packages like mlx.nn and mlx.optimizers, with APIs that closely follow PyTorch, to simpl…