2 min read · from Machine Learning

"I don't know!": Teaching neural networks to abstain with the HALO-Loss. [R]

Current neural networks have a fundamental geometry problem: feed them garbage data, and they won't admit that they have no clue. They will confidently hallucinate.
This happens because the standard Cross-Entropy loss requires models to push their features "infinitely" far from the origin to reach a loss of 0.0, which leaves the model with a jagged latent space. It literally leaves the model with no mathematically sound place to throw its trash.

I've been working on a "fix" for this, and as a result I just open-sourced the HALO-Loss.

It's a drop-in replacement for Cross-Entropy, but by using shift-invariant distance math, HALO bounds maximum confidence to a finite distance. This lets it bolt a zero-parameter "Abstain Class" directly to the origin of the latent space. Basically, it gives the network a mathematically rigorous "I don't know" button for free.
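The post doesn't spell out the mechanics, but the general idea of a zero-parameter abstain class pinned to the origin can be sketched roughly like this. This is a hypothetical minimal version for intuition, not the actual HALO implementation (class names and details are my own; see the repo for the real loss):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DistanceAbstainLoss(nn.Module):
    """Toy sketch: class scores are negative Euclidean distances to
    learned class prototypes, and an extra 'abstain' class uses the
    origin as its fixed prototype, so it adds zero parameters."""

    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        # Distance from each feature vector to each class prototype ...
        d_class = torch.cdist(features, self.prototypes)        # (B, C)
        # ... and to the origin, which acts as the abstain "prototype".
        d_abstain = features.norm(dim=1, keepdim=True)          # (B, 1)
        # Closer distance = higher logit; abstain is the last class.
        logits = -torch.cat([d_class, d_abstain], dim=1)        # (B, C+1)
        return F.cross_entropy(logits, targets)
```

The intuition this is meant to capture: since distances can't go below zero, logits are bounded above, so confidence saturates at a finite distance instead of rewarding features that run off to infinity, and anything the model can't place near a class prototype naturally falls toward the origin's abstain class.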

Usually in AI safety, building better Out-of-Distribution (OOD) detection means sacrificing your base accuracy. With HALO, that safety tax basically vanishes.

Testing on CIFAR-10/100 against standard CCE:

  • Base Accuracy: Zero drop (actually +0.23% on CIFAR-10, -0.14% on CIFAR-100).
  • Calibration (ECE): Dropped from ~8% down to a crisp 1.5%.
  • Far OOD (SVHN) False Positives (FPR@95): Slashed by more than half (e.g., 22.08% down to 10.27%).
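For anyone unfamiliar with the FPR@95 metric above: it's the fraction of OOD samples that slip past the confidence threshold which still accepts 95% of in-distribution samples. A generic sketch (not tied to HALO's specific scoring function):

```python
import numpy as np

def fpr_at_95_tpr(id_scores: np.ndarray, ood_scores: np.ndarray) -> float:
    """FPR@95: false-positive rate on OOD data at the threshold that
    keeps 95% of in-distribution samples (higher score = more ID-like)."""
    # Threshold at the 5th percentile of ID scores, so 95% of
    # in-distribution samples score at or above it.
    threshold = np.percentile(id_scores, 5)
    # False positives: OOD samples mistaken for in-distribution.
    return float(np.mean(ood_scores >= threshold))
```

Lower is better, so cutting 22.08% to 10.27% means roughly half as many SVHN images get waved through as in-distribution.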

Comparing against results on OpenOOD, getting this kind of native outlier detection without heavy ensembles, post-hoc scoring tweaks, or exposing the model to outlier data during training is incredibly rare.

At the same time, HALO is super useful if you're working on safety-critical classification, or if you're training multi-modal models like CLIP and need a mathematically sound rejection threshold for unaligned text-image pairs.

I wrote a detailed breakdown of the math, the code, and the tricks for avoiding a fight with high-dimensional Gaussian soap bubbles.
Blog-post: https://pisoni.ai/posts/halo/

Also, feel free to give HALO a spin on your own data, see if it reduces your network's overconfidence and hallucinations, and let me know what you find.
Code: https://github.com/4rtemi5/halo

submitted by /u/4rtemi5
