
Why do we still flatten embedding spaces?

7 points by Intrinisical-AI 1 day ago | 10 comments

Most dense retrieval systems rely on cosine similarity or dot-product, both of which implicitly assume a flat embedding space. But embedding spaces often live on curved manifolds with non-uniform structure: dense regions, semantic gaps, asymmetric paths.
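
To make the flatness point concrete, here's a minimal sketch with Euclidean distance standing in for the flat metric and sklearn's swiss roll standing in for a curved embedding manifold (all parameter choices are illustrative):

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path

X, _ = make_swiss_roll(n_samples=2000, noise=0.05, random_state=0)
query = 0

# "Flat" view: plain ambient-space distance to the query point.
flat = np.linalg.norm(X - X[query], axis=1)

# Manifold view: geodesic distance = shortest path along a k-NN graph.
knn = kneighbors_graph(X, n_neighbors=10, mode="distance")
geo = shortest_path(knn, method="D", directed=False, indices=[query])[0]

# The two rankings disagree: points nearby "through" the roll can be
# far away along its surface.
flat_top = set(np.argsort(flat)[1:21])
geo_top = set(np.argsort(geo)[1:21])
print(f"top-20 neighbor overlap: {len(flat_top & geo_top)}/20")
```

When the graph is connected, the overlap is typically partial; that gap is what the manifold-aware signals below try to exploit.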

I’ve been exploring the use of (rough sketches of the first two, then the third, after the list):

- Ricci curvature as a reranking signal

- Soft-graphs to preserve local density

- Geodesic-aware losses during training
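
Here's what I mean by the first two bullets, as a cheap sketch: build a soft (Gaussian-weighted) k-NN graph over candidate embeddings, compute a per-edge Forman-Ricci curvature (a fast proxy; Ollivier-Ricci would be the heavier, more faithful option), and nudge each candidate's score by the mean curvature around it. The blend weight `alpha` and all names here are illustrative, not a fixed recipe:

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def soft_knn_graph(X, k=10, sigma=1.0):
    """Symmetric k-NN graph with Gaussian edge weights in (0, 1]."""
    d = kneighbors_graph(X, n_neighbors=k, mode="distance").toarray()
    w = np.where(d > 0, np.exp(-d**2 / (2 * sigma**2)), 0.0)
    return np.maximum(w, w.T)

def forman_curvature(W):
    """Weighted Forman-Ricci curvature (unit node weights, no triangle
    term): F(u,v) = 2 - sqrt(w_uv) * (S_u + S_v), where S_x sums
    1/sqrt(w_e) over the edges at x other than (u,v) itself.
    Reduces to 4 - deg(u) - deg(v) on unweighted graphs."""
    safe = np.where(W > 0, W, 1.0)
    inv_sqrt = np.where(W > 0, 1.0 / np.sqrt(safe), 0.0)
    sums = inv_sqrt.sum(axis=1)
    F = np.zeros_like(W)
    u, v = np.nonzero(W)
    w = W[u, v]
    F[u, v] = 2.0 - np.sqrt(w) * ((sums[u] - 1.0 / np.sqrt(w))
                                  + (sums[v] - 1.0 / np.sqrt(w)))
    return F

def curvature_rerank(scores, X, alpha=0.1, k=10, sigma=1.0):
    """Blend base retrieval scores with each node's mean edge curvature
    (positive curvature ~ dense, well-connected neighborhoods)."""
    W = soft_knn_graph(X, k=k, sigma=sigma)
    F = forman_curvature(W)
    deg = np.maximum((W > 0).sum(axis=1), 1)
    node_curv = F.sum(axis=1) / deg
    z = (node_curv - node_curv.mean()) / (node_curv.std() + 1e-9)
    return scores + alpha * z
```

Forman is O(edges) and vectorizes cleanly; swapping in Ollivier-Ricci (optimal transport between neighbor measures) is where the cost comes in.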
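
And for the third bullet, one way to read "geodesic-aware loss" (this is my simplification; the real losses are messier) is a stress/MDS-style term that pulls learned pairwise distances toward geodesic distances precomputed on a k-NN graph:

```python
import torch

def geodesic_stress_loss(emb, geo, eps=1e-8):
    """emb: (B, D) batch embeddings; geo: (B, B) precomputed geodesic
    distances for the same batch (inf = disconnected pair, ignored)."""
    d = torch.cdist(emb, emb)  # learned pairwise distances
    off_diag = ~torch.eye(geo.size(0), dtype=torch.bool,
                          device=geo.device)
    mask = torch.isfinite(geo) & off_diag
    # Sammon-style weighting: close pairs (small geodesics) count more.
    stress = (d[mask] - geo[mask]) ** 2 / (geo[mask] + eps)
    return stress.mean()

# Added to a standard retrieval objective with a small weight, e.g.
# loss = task_loss + 0.1 * geodesic_stress_loss(emb, geo_batch)
```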

Curious whether others have tried anything similar, especially in information retrieval, QA, or explainability. Happy to share some experiments (FiQA/BEIR) if there's interest.
