
A/B Tests Overview

A/B testing lets you try multiple notification variants on a portion of your audience to determine which performs best before rolling the winner out to everyone.

Types of A/B Tests

OCM Pulse supports two ways to create A/B tests:

1. Manual A/B Tests

Create tests directly with your own variants, giving you full control over the content.

Create Manual A/B Test →

2. RSS-Triggered A/B Tests

Automatically generated from RSS feed content using AI. Great for content automation.

Learn about RSS A/B Testing →

Accessing A/B Tests

  1. Navigate to your app
  2. Click A/B Tests in the app menu

A/B Test List

View all tests with:

  • Test name
  • Status badge
  • Creation date

Click any test to open its details.

RSS Queue Button

If you have RSS A/B testing enabled, a button shows the count of pending tests awaiting review.

Creating an A/B Test

1. Test Settings

Setting             Description                  Range
Test Name           Descriptive name             Required
Target Segment      Who to test with             Optional
Test Percentage     Subscribers in test phase    10-100%
Auto-Select Winner  Automatically pick best      On/Off
Auto-Select Delay   Hours before selection       1-168
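
The documented ranges can be enforced with a small validation helper. This is a hypothetical sketch for illustration, not part of the OCM Pulse API; the setting names are taken from the table above.

```python
def validate_test_settings(name, test_percentage, auto_select_delay_hours):
    """Check A/B test settings against the documented ranges."""
    errors = []
    if not name or not name.strip():
        errors.append("Test Name is required")
    if not 10 <= test_percentage <= 100:
        errors.append("Test Percentage must be 10-100%")
    if not 1 <= auto_select_delay_hours <= 168:
        errors.append("Auto-Select Delay must be 1-168 hours")
    return errors
```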

2. Create Variants

Add 2-10 notification variants. For each:

Field   Limit        Required
Label   50 chars     Auto-generated
Title   255 chars    Yes
Body    255 chars    Yes
URL     Valid URL    Yes
Image   File or URL  No

Labels are auto-generated as A, B, C, etc.
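
The auto-generated labels and field limits above can be sketched as follows. The helper names are hypothetical, not part of the product API:

```python
def variant_label(index):
    # 0 -> "A", 1 -> "B", 2 -> "C", ... matching the auto-generated labels
    return chr(ord("A") + index)

def validate_variant(title, body, url):
    """Check a variant against the documented field limits."""
    errors = []
    if not title or len(title) > 255:
        errors.append("Title is required (max 255 chars)")
    if not body or len(body) > 255:
        errors.append("Body is required (max 255 chars)")
    if not url:
        errors.append("A valid URL is required")
    return errors
```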

3. Save as Draft

Click Create Test to save. Test starts in Draft status.

Test Lifecycle

Status Flow

PENDING_REVIEW → DRAFT → TESTING → SELECTING_WINNER → COMPLETED
      ↓            ↓         ↓
  CANCELLED    CANCELLED  CANCELLED

Status Descriptions

Status            Description
Pending Review    AI variants generated from RSS, awaiting user approval
Draft             Created or approved but not yet started
Testing           Variants being sent, collecting data
Selecting Winner  Test phase complete, awaiting selection
Completed         Winner rolled out to all
Cancelled         Test stopped, no further action
Info: The Pending Review status applies only to RSS-triggered A/B tests. Manual A/B tests start directly in Draft status.
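
One way to read the lifecycle is as a transition map. This is an illustrative sketch based on the diagram above, not the product's internal model:

```python
# Allowed status transitions, as read from the lifecycle diagram.
TRANSITIONS = {
    "PENDING_REVIEW": {"DRAFT", "CANCELLED"},
    "DRAFT": {"TESTING", "CANCELLED"},
    "TESTING": {"SELECTING_WINNER", "CANCELLED"},
    "SELECTING_WINNER": {"COMPLETED"},
    "COMPLETED": set(),
    "CANCELLED": set(),
}

def can_transition(current, target):
    return target in TRANSITIONS.get(current, set())
```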

Starting a Test

  1. Open a Draft test
  2. Review settings and variants
  3. Click Start Test
  4. Confirm the action

What happens:

  1. Subscribers are split based on test percentage
  2. Each group receives a different variant
  3. Clicks are tracked per variant
  4. Status changes to Testing
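
The split in step 1 can be sketched as simple integer arithmetic. The rounding behavior here is an assumption; the product may divide the audience differently:

```python
def split_subscribers(total, test_percentage, n_variants):
    """Divide the audience into equal per-variant test groups plus
    the remaining subscribers who later receive the winner."""
    test_group = total * test_percentage // 100
    per_variant = test_group // n_variants
    remaining = total - per_variant * n_variants
    return per_variant, remaining
```

For example, 10,000 subscribers at a 20% test percentage with two variants yields 1,000 subscribers per variant and 8,000 remaining.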

Viewing Results

Test Details Page

Shows:

  • Test status and settings
  • For RSS tests: source article
  • Subscriber counts (test vs. remaining)
  • Variant performance cards

Variant Performance

For each variant:

Metric        Description
Subscribers   Number who received the variant
Clicks        Total clicks recorded
Click Rate    Percentage of recipients who clicked
Progress Bar  Visual comparison against other variants

Progress bars help visualize relative performance.

Selecting a Winner

Automatic Selection

If Auto-Select Winner is enabled:

  1. System waits for configured delay
  2. Compares click rates
  3. Selects highest performer
  4. Rolls out to remaining subscribers
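
The comparison in steps 2-3 amounts to picking the variant with the highest click rate. A minimal sketch, with hypothetical names:

```python
def pick_winner(variants):
    """Select the label with the highest click rate.
    `variants` maps label -> (clicks, delivered)."""
    def rate(stats):
        clicks, delivered = stats
        return clicks / delivered if delivered else 0.0
    return max(variants, key=lambda label: rate(variants[label]))
```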

Manual Selection

To select manually:

  1. Open the test details
  2. Review variant performance
  3. Click Select Winner on your choice
  4. Confirm the selection

What happens:

  1. Test status becomes Completed
  2. Winning variant is sent to remaining subscribers
  3. Full performance data is recorded

Cancelling a Test

You can cancel tests in:

  • Draft status (before starting)
  • Testing status (during test)

To cancel:

  1. Open the test
  2. Click Cancel Test
  3. Confirm

Cancelled tests:

  • Stop any further notifications
  • Don't roll out to remaining subscribers
  • Preserve collected data

Deleting a Test

Only Draft tests can be deleted:

  1. Open the draft test
  2. Click Delete
  3. Confirm deletion

Understanding Results

Click Rate

The primary success metric:

Click Rate = (Clicks / Delivered) × 100%

Higher click rate = better performing variant.
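
In code, the formula above is:

```python
def click_rate(clicks, delivered):
    # Click Rate = (Clicks / Delivered) × 100%
    return clicks / delivered * 100 if delivered else 0.0
```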

Statistical Significance

For reliable results:

  • Have enough subscribers in the test
  • Wait for sufficient time
  • Compare meaningful differences

Rules of thumb:

  • 100+ subscribers per variant minimum
  • 1-2% difference may not be significant
  • Wait at least a few hours
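
A quick way to sanity-check whether a difference is meaningful is a pooled two-proportion z-test. This is standard statistics, not an OCM Pulse feature; |z| above roughly 1.96 suggests the difference is real at about 95% confidence:

```python
import math

def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    """Z-statistic for the difference between two click rates
    (pooled two-proportion z-test)."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    pooled = (clicks_a + clicks_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se
```

For example, 80 vs. 50 clicks on 1,000 subscribers each gives z ≈ 2.7 (likely significant), while 51 vs. 50 gives z ≈ 0.1 (indistinguishable from noise).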

What to Test

Title variations:

  • Different wording
  • Questions vs. statements
  • Urgency vs. curiosity

Body variations:

  • Short vs. detailed
  • Benefits vs. features
  • CTA phrasing

URL variations:

  • Different landing pages
  • With/without UTM parameters

Best Practices

Test Design

  • Change one element at a time
  • Make differences meaningful
  • Have a hypothesis

Test Percentage

Audience Size    Recommended %
< 1,000          30-50%
1,000 - 10,000   20-30%
> 10,000         10-20%
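
The table maps directly to a lookup. The handling of the boundaries at exactly 1,000 and 10,000 is an assumption for illustration:

```python
def recommended_test_percentage(audience_size):
    """Return the recommended (low, high) test-percentage band
    from the audience-size table."""
    if audience_size < 1_000:
        return (30, 50)
    if audience_size <= 10_000:
        return (20, 30)
    return (10, 20)
```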

Timing

  • Allow enough time for results
  • Don't select too early
  • Consider time-of-day effects

Learning

  • Document what you learn
  • Apply insights to future notifications
  • Build a testing cadence

A/B Test vs. Segment

Use A/B Test When...          Use Segment When...
Testing content variants      Targeting specific audiences
Optimizing performance        Sending different content
Unsure what works best        You know what the audience needs
Want data-driven decisions    You have clear audience criteria

Example Test Scenarios

Testing Title Approaches

Variant A: "New Feature: Dark Mode Now Available"
Variant B: "Your Eyes Will Thank You - Try Dark Mode"
Variant C: "Did You Know? We Now Have Dark Mode"

Goal: Find which approach (direct, benefit-focused, or curiosity) performs best.

Testing Urgency

Variant A: "Sale Ends Tonight"
Variant B: "Limited Time: 50% Off"

Goal: Compare time urgency vs. offer urgency.

Testing Length

Variant A:

  • Title: "New Post"
  • Body: "Check out our latest article on productivity tips"

Variant B:

  • Title: "5 Productivity Tips You Haven't Tried"
  • Body: "Number 3 will surprise you"

Goal: Compare informative vs. teaser style.

Next Steps