xAI releases Grok 4.3 with a 1M token context window, function calling, structured outputs, reasoning, and aggressive pricing at $1.25/$2.50 per 1M input/output tokens.
Rate limits: 1,800 requests/minute and 10M tokens/minute via us-east-1 and eu-west-1 clusters.
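With two ceilings, it is worth knowing which one binds first. A minimal sketch using only the quoted figures (1,800 requests/min, 10M tokens/min):

```python
# Which rate limit binds first, given the quoted per-minute ceilings.
REQ_PER_MIN = 1_800
TOK_PER_MIN = 10_000_000

# Average total tokens per request above which the token limit,
# not the request limit, becomes the binding constraint.
breakeven = TOK_PER_MIN / REQ_PER_MIN
print(f"token limit binds above ~{breakeven:,.0f} tokens/request")
# → token limit binds above ~5,556 tokens/request
```

Workloads averaging under ~5.5k total tokens per request hit the request ceiling first; long-context workloads hit the token ceiling instead.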
Beyond function calling and structured outputs, the model performs built-in reasoning, running a chain-of-thought pass before emitting its final response.
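As a sketch of what a structured-output request might look like, assuming an OpenAI-compatible Chat Completions schema (the `grok-4.3` model name and the `response_format`/`json_schema` fields are assumptions here, not confirmed against xAI's API reference):

```python
import json

# Hypothetical structured-output request payload for an
# OpenAI-compatible Chat Completions endpoint.
payload = {
    "model": "grok-4.3",  # assumed model identifier
    "messages": [
        {"role": "user",
         "content": "Extract the city and country from: 'Berlin, Germany'."}
    ],
    # Constrain the response to a fixed JSON schema.
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "location",
            "schema": {
                "type": "object",
                "properties": {
                    "city": {"type": "string"},
                    "country": {"type": "string"},
                },
                "required": ["city", "country"],
            },
        },
    },
}

print(json.dumps(payload, indent=2))
```

The schema constraint is what distinguishes structured outputs from plain prompting: the server is asked to guarantee the reply parses against `location`, rather than hoping the model formats JSON correctly.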
Cached token pricing at $0.20/1M makes high-volume repeated-context workloads significantly cheaper.
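The savings are easy to quantify from the quoted prices alone. A sketch comparing a repeated-context workload with and without prompt caching (the request shape, 50k-token shared system prompt, is an illustrative assumption):

```python
# Quoted Grok 4.3 prices, dollars per 1M tokens.
INPUT_PER_M = 1.25
OUTPUT_PER_M = 2.50
CACHED_PER_M = 0.20

def request_cost(input_tokens, output_tokens, cached_tokens=0):
    """Dollar cost of one request; cached_tokens is the portion of
    input_tokens served from the prompt cache."""
    fresh = input_tokens - cached_tokens
    return (fresh * INPUT_PER_M
            + cached_tokens * CACHED_PER_M
            + output_tokens * OUTPUT_PER_M) / 1_000_000

# 10,000 requests sharing a 50k-token system prompt, plus 1k fresh
# input and 2k output each: no caching vs. caching the shared prefix.
uncached = 10_000 * request_cost(51_000, 2_000)
cached = 10_000 * request_cost(51_000, 2_000, cached_tokens=50_000)
print(f"uncached: ${uncached:,.2f}  cached: ${cached:,.2f}")
# → uncached: $687.50  cached: $162.50
```

For this workload the cache cuts the bill by roughly 76%, which is where the "significantly cheaper" claim for high-volume repeated-context use comes from.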
Hacker News Comment Review
Commenters highlight the 202.7 tok/s inference speed and the $1.25/1M input price as standout value against frontier peers, though some find output quality inconsistent, with certain system prompts producing erratic responses.
Grok's Twitter-derived training data is credited for an unusually natural tone and for matching formality registers in non-English languages, but commenters warn that the growing share of AI-generated content on Twitter may erode this advantage over time.
A practical grey-area use case emerged: one commenter reports Grok completing sensitive classification tasks (trafficking-related content moderation) that other frontier models refused, citing lighter guardrails as the differentiator.
Notable Comments
@OtherShrezzing: High tok/s suggests xAI over-provisioned compute relative to actual demand, calling it a potentially expensive miscalculation.