Since 2018, the map on 2gis.ru has been rendered with WebGL, an API for drawing 3D graphics. At first, our web maps team used this technology simply as a very fast renderer for two-dimensional data, with a few exceptions such as 3D buildings and models. The arrival of immersive features in the map began to change that: there are more models, they have become prettier and more detailed, and you want to look at them more. Our map engines, built to work at city scale and…
Hi, Habr! My name is Vladimir Zakharov (@vzkhrv), and I'm going to talk about memory leaks in SSR. In fact, memory leaks can happen anywhere in JavaScript, both on the server side and on the client, so this will be useful even to those who don't have SSR yet. Let me introduce myself a bit more. I'm a lead frontend developer with about 8 years in the industry. I no longer work at Zarplata.ru, but most of the experience I want to share comes from there. I love flaky bugs, conversations about tech debt, and jokes about…
A modern, portable, easy to use crypto library. Sodium is a new, easy-to-use software library for encryption, decryption, signatures, password hashing and more. It is a portable, cross-compilable, installable, packageable fork of NaCl, with a compatible API, and an extended API to improve usability even further. Its goal is to provide all of the core operations needed to build higher-level cryptographic tools. Sodium supports a variety of compilers and operating systems, including Windows …
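As a concrete illustration of those core operations, here is a minimal sketch of authenticated secret-key encryption with libsodium's crypto_secretbox API; the message contents and buffer names are only illustrative:

```c
#include <sodium.h>
#include <stdio.h>

int main(void) {
    /* sodium_init() must succeed before any other libsodium call. */
    if (sodium_init() < 0) {
        return 1;
    }

    const unsigned char message[] = "hello, sodium";
    const size_t message_len = sizeof message - 1;

    unsigned char key[crypto_secretbox_KEYBYTES];
    unsigned char nonce[crypto_secretbox_NONCEBYTES];
    unsigned char ciphertext[crypto_secretbox_MACBYTES + sizeof message - 1];

    /* Random key and nonce; a nonce must never be reused with the same key. */
    crypto_secretbox_keygen(key);
    randombytes_buf(nonce, sizeof nonce);

    /* Authenticated encryption. */
    crypto_secretbox_easy(ciphertext, message, message_len, nonce, key);

    /* Decrypt and verify the authentication tag in one call. */
    unsigned char decrypted[sizeof message];
    if (crypto_secretbox_open_easy(decrypted, ciphertext, sizeof ciphertext, nonce, key) != 0) {
        fprintf(stderr, "forged or corrupted ciphertext\n");
        return 1;
    }
    decrypted[message_len] = '\0';
    printf("%s\n", decrypted);
    return 0;
}
```

The easy interface prepends a crypto_secretbox_MACBYTES-sized authentication tag to the ciphertext, so decryption fails cleanly if the data was tampered with.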
Starfield Script Extender
Building:
  git clone https://github.com/ianpatt/sfse
  cmake -B sfse/build -S sfse
  cmake --build sfse/build --config Release
Runtime Support: SFSE supports the latest version of Starfield on Steam. The MS Store/Gamepass version is not supported. No, making it so you can see the files doesn't solve the problem.
🤖 AgentVerse 🪐: A Framework for Multi-LLM Environment Simulation. AgentVerse provides a flexible framework that simplifies the process of building custom multi-agent environments for large language models (LLMs). Designed to facilitate swift development and customization with minimal effort, our fra…
OpenPipe: turn expensive prompts into cheap fine-tuned models. Use powerful but expensive LLMs to fine-tune smaller and cheaper models suited to your exact needs. Evaluate model and prompt combinations in the playground. Query your past requests and export optimized training data. Try it out at https://app.openpipe.ai or run it locally. Features: Experiment …
llama2.🔥: Inference Llama 2 in one file of pure 🔥. Why this port? This repository provides a Mojo-based port of llama2.c. With the release of Mojo, I was inspired to take my Python port, llama2.py, and transition it to Mojo. The result? A version that leverages Mojo's SIMD & vectorization primitives, boosting the Python performance by nearly 250x. Impressively, the Mojo version now outperforms the original llama2.c compiled in runfast mode out of the b…
Quiet: a private, p2p alternative to Slack and Discord built on Tor & IPFS. Encrypted p2p team chat with no servers, just Tor. Quiet is an alternative to team chat apps like Slack, Discord, and Element that does not require trusting a central server or running one's own. In Quiet, all data syncs directly between a team's device…
ExLlamaV2: a fast inference library for running LLMs locally on modern consumer-class GPUs. This is a very initial release of ExLlamaV2, an inference library for running local LLMs on modern consumer GPUs. It still needs a lot of testing and tuning, and a few key features are not yet implemented. Don't be surprised if things are a bit broken to start with, as almost all of this code is completely new and only tested on a few setups so far. Overview of differences compared to V1: Faster, better…