
DharmaOCR: Open-Source Specialized SLM (3B) + Cost–Performance Benchmark against LLMs and Other Open-Source Models [R]

Hey everyone, we just open-sourced DharmaOCR on Hugging Face. The models and datasets are all public and free to use and experiment with.

We also published the paper documenting all the experimentation behind it, for those who want to dig into the methodology.

We fine-tuned open-source SLMs (3B and 7B parameters) using SFT + DPO and ran them against GPT-5.4, Gemini 3.1 Pro, Claude Opus 4.6, Google Document AI, and open-source alternatives like OlmOCR, Deepseek-OCR, GLMOCR, and Qwen3.

- The specialized models came out on top of the benchmark, scoring 0.925 (7B) and 0.911 (3B).

- DPO using the model's own degenerate outputs as rejected examples cut the failure rate by 87.6%.

- AWQ quantization drops per-page inference cost by ~22%, with a negligible effect on accuracy.
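
The second bullet describes a DPO setup where the model's own degenerate outputs (e.g., repetition loops) serve as rejected examples. Here is a minimal sketch of how such preference pairs might be assembled, assuming a simple repeated-n-gram heuristic for "degenerate" and the `prompt`/`chosen`/`rejected` record format commonly used by DPO trainers; the function names and threshold are illustrative, not taken from the paper:

```python
# Sketch: harvest degenerate OCR outputs as DPO "rejected" examples.
# The repetition heuristic and threshold are assumptions for illustration,
# not the paper's actual criteria.

def repetition_ratio(text: str, n: int = 4) -> float:
    """Fraction of word n-gram slots occupied by repeated n-grams."""
    words = text.split()
    if len(words) < n:
        return 0.0
    ngrams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
    return 1.0 - len(set(ngrams)) / len(ngrams)

def is_degenerate(text: str, threshold: float = 0.3) -> bool:
    """Flag outputs dominated by repeated n-grams (a common OCR failure mode)."""
    return repetition_ratio(text) > threshold

def build_dpo_pairs(samples):
    """samples: iterable of (prompt, reference_text, model_output).
    Emits a prompt/chosen/rejected record only when the model's output is
    degenerate, pairing it against the reference transcription."""
    pairs = []
    for prompt, reference, output in samples:
        if is_degenerate(output):
            pairs.append({"prompt": prompt,
                          "chosen": reference,
                          "rejected": output})
    return pairs

good = "Invoice 42: total due 18.50 EUR by March 3."
bad = "total due total due total due total due total due total due"
pairs = build_dpo_pairs([
    ("<page 1>", good, good),   # clean output: no pair emitted
    ("<page 2>", good, bad),    # degenerate output: becomes a rejected example
])
print(len(pairs))  # 1
```

The resulting list of records matches the column layout DPO training libraries typically expect, so it could be wrapped into a dataset and fed to a preference-optimization trainer directly.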

Models & datasets: https://huggingface.co/Dharma-AI

Full paper: https://arxiv.org/abs/2604.14314

Paper summary: https://gist.science/paper/2604.14314

submitted by /u/augusto_camargo3


Tagged with

#DharmaOCR
#open-source
#SLMs
#cost-performance
#benchmark
#fine-tuned
#SFT
#DPO
#GPT-5.4
#Gemini 3.1 Pro
#Claude Opus 4.6
#Google Document AI