From 300KB to 69KB per Token: How LLM Architectures Solve the KV Cache Problem
38 points by future-shock-ai | 2 days ago | 5 comments