AVW · Cape Town, South Africa

© 2026 Ajad Van Wyk. All rights reserved.

X Algorithm

Phoenix

15 Apr 2026
3 min read
recommendation-system · x-twitter · out-of-network-retrieval · transformer · machine-learning · grok
In this note
  1. Two functions
  2. Part 1: Retrieval — two-tower model
  3. Part 2: Ranking — Grok transformer
  4. Implementation notes
  5. Related notes


"ML component with two functions: (1) two-tower retrieval for out-of-network post discovery, and (2) Grok-based transformer ranking that predicts engagement probabilities."
01

Two functions

Phoenix
├── Retrieval   →  find relevant posts from global corpus
└── Ranking     →  score each candidate post

02

Part 1: Retrieval — two-tower model

Architecture

User engagement history          All posts in corpus
         │                               │
         ▼                               ▼
    User tower                      Post tower
  (hash embeddings)              (hash embeddings)
         │                               │
         ▼                               ▼
   User embedding              Post embeddings (all)
         │                               │
         └──────────┬────────────────────┘
                    ▼
           Dot product similarity
           user · post = cos(θ)   [after L2 normalisation]
                    │
                    ▼
              Top-K candidates  →  out-of-network pool

How dot product similarity works

Both vectors are L2-normalised, so the dot product equals the cosine of the angle between them:

$$\text{similarity} = \mathbf{u} \cdot \mathbf{p} = \cos(\theta)$$

  - High score (→ 1.0): vectors point in the same direction → post is relevant
  - Low score (→ 0.0): vectors are orthogonal → no relationship
  - Negative score: vectors point away from each other → likely not relevant

Posts closest to the user vector in embedding space become out-of-network candidates.
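The retrieval step above can be sketched in a few lines of NumPy. This is a toy illustration, not the production system: random vectors stand in for the tower outputs, and the dimension and `K` are arbitrary.

```python
import numpy as np

def l2_normalise(v):
    """Scale vectors to unit length so a dot product equals cos(theta)."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Toy stand-ins for the tower outputs: 1 user vector, 5 post vectors.
rng = np.random.default_rng(0)
user = l2_normalise(rng.normal(size=4))
posts = l2_normalise(rng.normal(size=(5, 4)))

# Dot product against every post in the corpus: one cosine score each.
scores = posts @ user

# The K highest-scoring posts become the out-of-network candidate pool.
K = 2
top_k = np.argsort(-scores)[:K]
```

In production the post embeddings are precomputed and the top-K search runs over an approximate nearest-neighbour index rather than a brute-force matrix product, but the geometry is the same.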

Hash-based embeddings

Rather than a fixed vocabulary lookup table, both towers use multiple hash functions per feature. This handles an unbounded feature space (billions of post IDs, author IDs, etc.) without a rigid vocab.
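A minimal sketch of that hashing-trick idea, assuming the common formulation where each feature ID is hashed by several independent functions and the resulting bucket embeddings are summed. The bucket count, hash count, and dimension here are illustrative, not X's actual values.

```python
import numpy as np

NUM_BUCKETS = 1_000   # fixed table size, regardless of how many IDs exist
NUM_HASHES = 3        # several hash functions per feature
DIM = 8

rng = np.random.default_rng(42)
# One embedding table per hash function.
table = rng.normal(scale=0.1, size=(NUM_HASHES, NUM_BUCKETS, DIM))

def hash_embed(feature_id: int) -> np.ndarray:
    """Sum one bucket embedding per hash function. A collision under one
    hash is almost never a collision under all of them, so distinct IDs
    still get distinct combined vectors."""
    out = np.zeros(DIM)
    for h in range(NUM_HASHES):
        bucket = hash((h, feature_id)) % NUM_BUCKETS
        out += table[h, bucket]
    return out

# Any post or author ID maps to a vector; no vocabulary lookup needed.
v = hash_embed(987_654_321_012)
```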


03

Part 2: Ranking — Grok transformer

Input

  - User context: engagement history sequence (likes, replies, reposts, clicks, etc., in order)
  - Candidate post: text, author, media features — all encoded via hash embeddings

Candidate isolation

"Posts cannot attend to each other during inference. Each post only attends to the user context."

This is a deliberate design decision:

  - Scores are batch-independent — the same post gets the same score regardless of what else is in the batch
  - Scores are therefore cacheable
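The isolation rule can be expressed as an attention mask. This toy sketch (context and candidate counts are arbitrary) marks which query–key pairs are allowed: context tokens attend among themselves, and each candidate sees the context plus itself only.

```python
import numpy as np

CTX = 4    # user-context tokens (engagement history)
CAND = 3   # candidate posts scored in one batch

n = CTX + CAND
mask = np.zeros((n, n), dtype=bool)   # True = attention allowed

mask[:CTX, :CTX] = True               # context attends to context
mask[CTX:, :CTX] = True               # each candidate attends to the context
for i in range(CAND):
    mask[CTX + i, CTX + i] = True     # ...and to itself, never to other candidates
```

Because no candidate row can read another candidate's column, each post's score depends only on the user context, which is what makes the scores batch-independent and cacheable.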

Output — action probabilities

The model outputs ~15 probabilities per post:

| Category | Actions |
| --- | --- |
| Positive | like, reply, repost, quote, click, share, follow author, video view |
| Neutral | dwell, photo expand, profile click |
| Negative | block author, mute author, report, not interested |

Scoring formula

$$\text{Score} = \sum_{i} w_i \cdot P(\text{action}_i)$$

Positive actions get positive weights; negative actions get negative weights, actively suppressing unwanted content.
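A sketch of that weighted sum; the weights and probabilities below are invented for illustration, not the production values.

```python
# Hypothetical weights: positive actions up-weighted, negatives penalised.
WEIGHTS = {
    "like": 1.0, "reply": 2.0, "repost": 1.5,
    "dwell": 0.1,
    "block_author": -5.0, "report": -8.0,
}

def score(probs: dict) -> float:
    """Score = sum over actions of w_i * P(action_i)."""
    return sum(WEIGHTS[a] * p for a, p in probs.items())

engaging = score({"like": 0.3, "reply": 0.1, "repost": 0.05,
                  "dwell": 0.5, "block_author": 0.001, "report": 0.0})
risky    = score({"like": 0.3, "reply": 0.1, "repost": 0.05,
                  "dwell": 0.5, "block_author": 0.2, "report": 0.1})
# Identical positive engagement, but the high block/report probabilities
# drive the second post's score down, actively suppressing it.
```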


04

Implementation notes

  - Transformer architecture ported from the Grok-1 open-source release by xAI
  - Adapted for recommendation use cases (not generative text)
  - No hand-engineered features — the transformer learns all relevance from raw engagement sequences
05

Related notes

  - X Algorithm — For You Feed — where Phoenix fits in the pipeline
  - Thunder — in-network retrieval counterpart
  - X Algorithm — Scoring Pipeline — weighted scorer and diversity scorer details
  - X Algorithm — Key Concepts — user embedding, dot product similarity