
Sailthru Predictive Recommendations: Turning Customer Data Into Revenue

By Excelohunt Team

Product recommendation blocks in email are not new. Every major ESP has some version of them. What makes Sailthru’s recommendation engine distinctive is the depth of individual-level data it uses to compute those recommendations — and the continuous feedback loop that makes the model more accurate over time.

This guide covers how the recommendation algorithms work, how to implement recommendation blocks effectively in email campaigns, how to run A/B tests that prove their value, and how to feed Sailthru the data it needs to make recommendations accurate.

How Sailthru’s Recommendation Algorithms Work

Sailthru offers several recommendation algorithms, each suited to different use cases. Understanding the differences helps you choose the right one for each email context.

Personalised Recommendations (Horizon-Based)

The default and most sophisticated algorithm. For each recipient, Sailthru’s Horizon engine ranks items from the product catalogue by the match between the item’s attributes (category, brand, price range, tags) and the individual’s interest profile.

This algorithm produces the most relevant recommendations for users with sufficient profile data — typically users who have engaged with the brand for at least 2–4 weeks and have generated enough browsing, clicking, and (ideally) purchasing signals.

For new users or low-engagement users with sparse profiles, the personalised recommendation algorithm falls back to popularity-based recommendations filtered by the user’s best-match category.
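In pseudocode terms, that fallback is a simple branch on profile richness. A minimal Python sketch, assuming illustrative field names (`signal_count`, `interests`, and `top_category` are not Sailthru's actual schema):

```python
def personalised_recs(profile, catalogue, n):
    # Rank items by overlap between item tags and the user's interest tags.
    score = lambda item: len(set(item["tags"]) & set(profile["interests"]))
    return sorted(catalogue, key=score, reverse=True)[:n]

def popular_in_category(catalogue, category, n):
    # Popularity fallback: most-purchased items, optionally category-filtered.
    pool = [i for i in catalogue if category is None or i["category"] == category]
    return sorted(pool, key=lambda i: i["purchases"], reverse=True)[:n]

def recommend(profile, catalogue, n=3, min_signals=10):
    # Sparse profiles fall back to popularity in the user's best-match category.
    if profile.get("signal_count", 0) >= min_signals:
        return personalised_recs(profile, catalogue, n)
    return popular_in_category(catalogue, profile.get("top_category"), n)
```

The threshold and scoring here are stand-ins; the real engine weighs far richer signals, but the branch structure is the same.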

Item-Based Collaborative Filtering (Similar Items)

Recommends items that are frequently purchased or engaged with by other users who also engaged with a specific item. This is the “customers who bought this also bought” model.

Best used in: post-purchase emails (after a specific product is purchased), cart abandonment emails (alongside the abandoned item), and product page browse abandonment emails.

Popularity-Based

Recommends the most popular items across the catalogue, optionally filtered by category or tag. There is no individual personalisation: every recipient served by this algorithm sees the same set of items.

Best used in: re-engagement campaigns (where individual profile data may be stale), new subscriber emails (where profile data is sparse), and as a fallback when personalised algorithms have insufficient data.

Editorial or Curated

Merchandising teams can manually curate a set of recommendations, overriding algorithmic selection. Useful for promotional periods (featuring specific sale items), product launches (guaranteeing new arrivals appear), and brand partnership promotions.

In practice, most sophisticated Sailthru implementations use a hybrid approach: algorithmic recommendations with editorial overrides and exclusion rules that ensure certain items always or never appear.
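One way to picture that hybrid logic is a merge that pins editorial picks first, applies exclusions, and backfills remaining slots from the algorithm. A sketch of the idea, not Sailthru's internal implementation:

```python
def hybrid_block(algorithmic, editorial_pins, excluded_ids, n=4):
    # Editorial pins always lead; excluded items never appear; the
    # algorithm backfills the remaining slots without duplicates.
    picks = list(editorial_pins)[:n]
    seen = {p["id"] for p in picks} | set(excluded_ids)
    for item in algorithmic:
        if len(picks) == n:
            break
        if item["id"] not in seen:
            picks.append(item)
            seen.add(item["id"])
    return picks
```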

Implementing Personalised Recommendation Blocks in Email

Recommendation blocks in Sailthru email are implemented using Sailthru’s templating language (Zephyr) or via content personalisation slots in the email editor.

Basic Implementation

In Sailthru’s email editor, recommendation blocks are added as personalised content areas. Configuration options include:

  • Algorithm selection (as above)
  • Number of items to show (typically 3–6 for email)
  • Image layout (product image size and placement)
  • Display fields (price, name, category, button)
  • Filtering rules (in-stock only, minimum price, category constraints)
  • Exclusion rules (exclude recently purchased items, exclude items already seen in previous emails)

Zephyr-Based Implementation

For more control over recommendation logic, Sailthru’s Zephyr templating language allows you to build recommendation blocks programmatically. This is useful when:

  • You need to implement complex fallback logic (try algorithm A, fall back to algorithm B if fewer than N results are returned)
  • You want to merge editorial picks with algorithmic recommendations in the same block
  • You need to apply custom formatting or conditional content around recommendation results

A basic Zephyr recommendation loop retrieves items from the recommendation API and renders each in a repeating HTML block, inserting product name, image URL, link, and price from the catalogue data.
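Zephyr syntax is proprietary to Sailthru, so as a neutral illustration, here is the shape of that loop in Python: iterate the recommended items and render each into a repeating HTML cell (the field names are assumptions, not Sailthru's catalogue schema):

```python
def render_block(items):
    # Render each recommended item into a repeating HTML table cell,
    # pulling name, image URL, link, and price from catalogue data.
    cell = ('<td><a href="{url}"><img src="{image}" alt="{name}">'
            "<p>{name}</p><p>{price}</p></a></td>")
    return "<table><tr>" + "".join(cell.format(**item) for item in items) + "</tr></table>"
```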

Open-Time Rendering

As covered in the broader Sailthru personalisation guide, open-time rendering is particularly valuable for recommendation blocks. When configured for open-time rendering, the recommendation block is computed fresh each time the email is opened, ensuring:

  • Product availability is current (items that went out of stock between send and open are excluded)
  • Pricing is accurate (sale prices that started after the email was sent are reflected)
  • Recent engagement is factored in (if the user purchased something between send and open, that purchase is excluded from recommendations and informs the ranking of remaining items)
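Conceptually, open-time rendering is a re-filter of the picks against current catalogue state. A simplified Python model of those three guarantees, assuming illustrative `in_stock` and purchase-history inputs:

```python
def open_time_filter(recs, catalogue_now, purchased_ids, n=3):
    # Recompute at open: drop items now out of stock or already purchased,
    # and take prices from the current catalogue rather than a send-time copy.
    current = {c["id"]: c for c in catalogue_now if c["in_stock"]}
    fresh = [current[r["id"]] for r in recs
             if r["id"] in current and r["id"] not in purchased_ids]
    return fresh[:n]
```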

A/B Testing Recommendations vs. Curated Content

Running controlled tests between recommendation blocks and editorial/curated content is one of the most valuable experiments a retail email programme can run. The results often surprise teams who assume algorithmic personalisation always wins.

Setting Up the Test

Split your send list into two equal groups:

  • Group A: Receives the email with algorithmic recommendation blocks
  • Group B: Receives the same email with editorially curated product blocks (the same products for all recipients)

Measure over a sufficient sample size and a representative time period (ideally 2–4 weeks to avoid week-specific anomalies).
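For the split itself, assigning each recipient deterministically (rather than randomly per send) keeps every user in the same arm for the full test window. A common hashing pattern, sketched in Python:

```python
import hashlib

def assign_group(email):
    # Hash the normalised address so a recipient lands in the same arm on
    # every send of the test; a per-send random split would not guarantee this.
    digest = hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"
```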

Metrics to Track

  • Click-through rate on the recommendation block: Primary engagement signal
  • Revenue per email sent: The most direct measure of commercial impact
  • Conversion rate from recommendation click to purchase: Are clicks from recommendations converting at the same rate as editorial clicks?
  • Average order value: Does personalisation change the average transaction value?
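All four metrics reduce to simple ratios over the same campaign counts; a small helper makes the definitions explicit:

```python
def campaign_metrics(sends, clicks, orders, revenue):
    # The four test metrics as ratios, with guards against division
    # by zero for arms that record no clicks or no orders.
    return {
        "ctr": clicks / sends,
        "revenue_per_email": revenue / sends,
        "conversion_rate": orders / clicks if clicks else 0.0,
        "aov": revenue / orders if orders else 0.0,
    }
```

Compare each ratio between Group A and Group B; revenue per email sent is usually the deciding number.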

What the Results Typically Show

Algorithmic recommendations tend to outperform editorial curation on click-through rate and revenue per email for large, engaged list segments — because relevance naturally drives higher engagement.

Editorial curation can outperform algorithms during specific promotions (when you want specific items featured), during product launches (when algorithmic models haven’t yet built up enough data on new items), and for small, niche segments where the algorithm’s training data is thin.

The practical conclusion: use algorithms as the default, apply editorial overrides strategically, and test both regularly to keep the decision data-informed.

Content Recommendations for Media and Publishing

Sailthru’s recommendation capabilities extend to content just as readily as to products. For media companies, the content recommendation block is the primary personalisation tool.

Content recommendations pull from Sailthru’s content library — a database of articles and content pieces ingested via RSS feed or API. Each content piece is tagged with metadata: topic, author, publication date, content type, section.

The recommendation algorithm ranks content items by the match between the content’s metadata and the user’s interest profile (built from their history of content engagement — reads, clicks, time on page).

For a media publisher sending a daily or weekly email digest, every subscriber’s email contains a different mix of articles — ordered and selected by their individual interest profile. A subscriber who primarily reads tech and science content sees a digest dominated by tech and science. A subscriber who reads politics and culture sees an entirely different selection.

Implementation considerations specific to media recommendations:

  • Content freshness rules: Configure recommendations to exclude articles older than a defined window (7 days for daily news, 30 days for longer-form content). Otherwise, high-engagement evergreen articles from months ago dominate every send.
  • Reading history exclusion: Exclude articles the user has already read (tracked via web page view events). Recommending an article someone has already read is a trust-eroding experience.
  • Section diversity: Some media implementations add a diversity rule to ensure recommendations aren’t entirely from one topic area, even if that’s the user’s dominant interest.
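The three rules compose naturally as filters applied to the interest-ranked article list. An illustrative sketch (the field names are assumptions, not Sailthru's content schema):

```python
from datetime import datetime, timedelta

def digest_picks(ranked, read_urls, n=5, max_age_days=7, per_section_cap=2):
    # ranked: articles already sorted by interest-profile match.
    # Apply freshness, reading-history exclusion, and section diversity.
    cutoff = datetime.now() - timedelta(days=max_age_days)
    picks, per_section = [], {}
    for article in ranked:
        if article["published"] < cutoff or article["url"] in read_urls:
            continue
        if per_section.get(article["section"], 0) >= per_section_cap:
            continue
        picks.append(article)
        per_section[article["section"]] = per_section.get(article["section"], 0) + 1
        if len(picks) == n:
            break
    return picks
```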

Feeding Sailthru Enough Data to Make Recommendations Accurate

Recommendation quality is a function of data quality. These are the most important data feeds to ensure are working correctly:

Product or content catalogue feed: Sailthru’s recommendation engine needs up-to-date catalogue data — product names, categories, tags, prices, availability, image URLs, and page URLs. This is typically ingested via a daily or real-time API feed. Stale or incomplete catalogue data directly degrades recommendation quality.

Web tracking events: Browse, product view, cart add, and purchase events from the website feed the Horizon profile. Missing web tracking events means the profile is being built from email engagement alone, which provides a much narrower view of user interest.

Purchase events: The highest-signal input to the recommendation model. Ensure every purchase is passed to Sailthru as a purchase event with full product metadata.
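The authoritative field names live in Sailthru's purchase API documentation; as a rough sketch, a payload of the general shape (prices in minor units, full metadata per line item) might be assembled like this:

```python
def purchase_payload(email, cart):
    # Illustrative shape only: check field names and price units against
    # Sailthru's purchase API docs. Prices here are in minor units (pence
    # or cents), and each line item carries the product metadata that
    # feeds the recipient's interest profile.
    return {
        "email": email,
        "items": [
            {
                "id": line["sku"],
                "title": line["name"],
                "price": int(round(line["unit_price"] * 100)),
                "qty": line["qty"],
                "url": line["url"],
                "tags": line.get("tags", []),
            }
            for line in cart
        ],
    }
```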

Review or rating data: If available, integrating product review and rating data into Sailthru’s catalogue allows recommendations to be weighted by product quality — prioritising highly-rated items when engagement signals are otherwise equal.

Benchmarks for Recommendation-Driven Revenue

Benchmarks vary significantly by industry, list quality, and implementation maturity, but directional reference points from brands with mature Sailthru implementations include:

  • Personalised recommendation blocks in post-purchase emails contributing 15–25% of total email revenue in some retail programmes
  • Click-through rates on personalised recommendation blocks 2–4x higher than on non-personalised product features in the same email
  • Revenue per email sent lifting 20–40% after implementing personalised recommendations compared to editorial curation (in programmes with sufficient engagement data to power the algorithm)

Sailthru’s predictive recommendation engine is a genuinely differentiated capability for retail and media brands with the data infrastructure to support it. The algorithms are sophisticated, the open-time rendering is powerful, and the feedback loop that continuously improves recommendations over time creates compounding returns.

At Excelohunt, we implement and optimise Sailthru recommendation programmes for retail and media brands — from data feed architecture to recommendation block design to performance testing. If you want to understand what personalised recommendations could add to your email revenue, we can show you.



Tags: sailthru, personalization, product-recommendations, email-marketing

Want Us to Implement This for Your Brand?

Get a free email audit and see exactly where you're losing revenue.

Get Your Free Audit