TLDR
- The Internet’s best-effort IP model and ML’s probabilistic softmax outputs both succeed by not requiring guaranteed correctness.
Key Takeaways
- IP makes no delivery promises; even complete failure fulfills the protocol, enabling a simpler and more powerful layered design.
- TCP handles IP’s failures by retransmitting lost packets, but can still surface failure upward when retries are exhausted.
- Neural nets use softmax, which never fully rules out any output: every possibility is assigned a nonzero probability.
- When a problem is too complex, models produce near-uniform distributions rather than being forced into a single wrong answer.
- Allowing probabilistic mistakes gives ML models the flexibility to solve hard problems more often than rigid output constraints would.
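The softmax behavior described above is easy to see directly. Below is a minimal sketch (the `softmax` helper and the example logits are illustrative, not from the original post): because softmax exponentiates and normalizes, no class ever gets exactly zero probability, and near-equal logits yield a near-uniform distribution instead of one forced answer.

```python
import math

def softmax(logits):
    # Subtract the max for numerical stability; the result is unchanged.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Confident model: one logit dominates, yet no class ever reaches 0.
confident = softmax([8.0, 1.0, 0.5])
assert all(p > 0 for p in confident)

# Uncertain model: near-equal logits spread probability almost uniformly,
# rather than committing to a single (likely wrong) output.
uncertain = softmax([1.0, 1.01, 0.99])
print(confident)
print(uncertain)
```

Running this shows `confident` concentrating nearly all mass on the first class while keeping the others strictly positive, and `uncertain` hovering near 1/3 per class.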
Hacker News Comment Review
- No substantive HN discussion yet.
Original | Discuss on HN