ggerganov / llama.cpp
LLM inference in C/C++
See what the GitHub community is most excited about today.
A C++ header-only HTTP/HTTPS server and client library
Distribute and run LLMs with a single file.
Open-source simulator for autonomous driving research.
Super Repository for Coding Interview Preparation
Cross-platform ground control station for drones (Android, iOS, macOS, Linux, Windows)
Fast C++ logging library.
ArduPlane, ArduCopter, ArduRover, ArduSub source
FlatBuffers: Memory Efficient Serialization Library
ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual-Inertial and Multi-Map SLAM
DuckDB-powered Postgres for high performance apps & analytics.
An Open Source Machine Learning Framework for Everyone
ClickHouse® is a real-time analytics DBMS
Enabling PyTorch on XLA Devices (e.g. Google TPU)
C++ implementation of the Google logging module
NoSQL data store using the seastar framework, compatible with Apache Cassandra
ncnn is a high-performance neural network inference framework optimized for the mobile platform
GoogleTest - Google Testing and Mocking Framework
Ceph is a distributed object, block, and file storage platform
CUDA Templates for Linear Algebra Subroutines
The new Windows Terminal and the original Windows console host, all in the same place!
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.