Why I Built a Native ML Inference Engine in Rust
Adding ML to an app shouldn't require Python, ONNX, CUDA, or 6.8GB of dependencies. Kjarni is a single native library that runs transformer models anywhere — Rust, C#, Go, or the command line.
Technical articles on AI inference, search, embeddings, and building developer tools.
Run sentiment analysis locally in C# using transformer models. Positive, negative, neutral, emotions, toxicity. No Python, no API calls, no containers. Three lines of code.
Add semantic search to your C# app in five lines. Match text by meaning, not keywords. No Python, no external services, no vector database. Runs locally on CPU.
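The mechanism behind "match by meaning" is standard: each text is embedded into a vector, and documents are ranked by cosine similarity to the query vector. The sketch below illustrates that comparison generically with toy vectors; it is not Kjarni's API, and a real model produces hundreds of dimensions rather than three.

```rust
// Generic illustration of embedding comparison, not Kjarni's API.
// Cosine similarity measures the angle between two embedding vectors;
// closer meaning => smaller angle => similarity nearer 1.0.
fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (norm_a * norm_b)
}

fn main() {
    // Toy 3-dimensional "embeddings" standing in for real model output.
    let query = [0.9_f32, 0.1, 0.0];
    let doc_similar = [0.8_f32, 0.2, 0.1];
    let doc_unrelated = [0.0_f32, 0.1, 0.9];
    // The semantically closer document scores higher against the query.
    assert!(cosine(&query, &doc_similar) > cosine(&query, &doc_unrelated));
    println!("similar: {:.3}", cosine(&query, &doc_similar));
}
```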
Build a full-text search engine in C# with keyword search, semantic search, hybrid ranking, and reranking. Index files and query them in ten lines of code. No Elasticsearch, no external services.
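Hybrid ranking is commonly implemented with reciprocal rank fusion (RRF), which merges a keyword result list and a semantic result list by scoring each document 1/(k + rank) in every list it appears in. The sketch below is a generic illustration of that technique under the conventional k = 60, not Kjarni's implementation; the function name and document identifiers are made up.

```rust
use std::collections::HashMap;

// Reciprocal rank fusion: combine several ranked lists (e.g. keyword and
// semantic results) into a single score per document. Documents that rank
// well in more lists accumulate more score. k = 60 is the usual constant.
fn rrf_fuse(lists: &[Vec<&str>], k: f64) -> Vec<(String, f64)> {
    let mut scores: HashMap<String, f64> = HashMap::new();
    for list in lists {
        for (rank, doc) in list.iter().enumerate() {
            // rank is 0-based, so add 1 to make the top hit rank 1.
            *scores.entry((*doc).to_string()).or_insert(0.0) += 1.0 / (k + rank as f64 + 1.0);
        }
    }
    let mut fused: Vec<(String, f64)> = scores.into_iter().collect();
    // Sort descending by fused score.
    fused.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    fused
}

fn main() {
    let keyword = vec!["report.txt", "notes.md", "spec.pdf"];
    let semantic = vec!["report.txt", "spec.pdf", "readme.md"];
    let fused = rrf_fuse(&[keyword, semantic], 60.0);
    // report.txt ranks first: it tops both lists.
    println!("{:?}", fused);
}
```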
Run sentiment analysis, generate embeddings, detect toxicity, and search documents from your terminal. Pipes into any script or CI pipeline. No Python, no API keys.