A/B Tests Overview
A/B Testing allows you to test multiple notification variants to determine which performs best before rolling out to your entire audience.
Types of A/B Tests
OCM Pulse supports two ways to create A/B tests:
1. Manual A/B Tests
Create tests directly with your own variants. Full control over content.
2. RSS-Triggered A/B Tests
Automatically generated from RSS feed content using AI. Great for content automation.
Accessing A/B Tests
- Navigate to your app
- Click A/B Tests in the app menu
A/B Test List
View all tests with:
- Test name
- Status badge
- Creation date
Click any test to open its details.
RSS Queue Button
If you have RSS A/B testing enabled, a button shows the count of pending tests awaiting review.
Creating an A/B Test
1. Test Settings
| Setting | Description | Options |
|---|---|---|
| Test Name | Descriptive name for the test | Required |
| Target Segment | Which subscribers to test with | Optional |
| Test Percentage | Share of subscribers included in the test phase | 10-100% |
| Auto-Select Winner | Automatically pick the best-performing variant | On/Off |
| Auto-Select Delay | Hours to wait before the winner is selected | 1-168 |
2. Create Variants
Add 2-10 notification variants. For each:
| Field | Limit | Required |
|---|---|---|
| Label | 50 chars | No (auto-generated) |
| Title | 255 chars | Yes |
| Body | 255 chars | Yes |
| URL | Valid URL | Yes |
| Image | File or URL | No |
Labels are auto-generated as A, B, C, etc.
3. Save as Draft
Click Create Test to save. The test starts in Draft status.
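Putting the settings and variants together, a complete test definition might be modeled roughly like the sketch below. The field names are illustrative assumptions, not the actual OCM Pulse schema; the limits mirror the tables above.

```typescript
// Illustrative sketch of a test definition; field names are assumptions,
// not the actual OCM Pulse schema.

interface Variant {
  label: string;     // up to 50 chars; auto-generated as A, B, C, ...
  title: string;     // up to 255 chars; required
  body: string;      // up to 255 chars; required
  url: string;       // valid URL; required
  imageUrl?: string; // optional image (file upload or URL)
}

interface AbTest {
  testName: string;              // required
  targetSegment?: string;        // optional
  testPercentage: number;        // 10-100
  autoSelectWinner: boolean;     // on/off
  autoSelectDelayHours?: number; // 1-168, used when auto-select is on
  variants: Variant[];           // 2-10 variants
}

const darkModeTest: AbTest = {
  testName: "Dark Mode Announcement",
  testPercentage: 20,
  autoSelectWinner: true,
  autoSelectDelayHours: 24,
  variants: [
    {
      label: "A",
      title: "New Feature: Dark Mode Now Available",
      body: "Switch it on in your settings today.",
      url: "https://example.com/dark-mode",
    },
    {
      label: "B",
      title: "Your Eyes Will Thank You - Try Dark Mode",
      body: "Now available for everyone.",
      url: "https://example.com/dark-mode",
    },
  ],
};
```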
Test Lifecycle
Status Flow
PENDING_REVIEW → DRAFT → TESTING → SELECTING_WINNER → COMPLETED
A test can move to CANCELLED from the Pending Review, Draft, or Testing status.
Status Descriptions
| Status | Description |
|---|---|
| Pending Review | AI variants generated from RSS, awaiting user approval |
| Draft | Created or approved but not yet started |
| Testing | Variants being sent, collecting data |
| Selecting Winner | Test phase complete, awaiting selection |
| Completed | Winner rolled out to the remaining subscribers |
| Cancelled | Test stopped, no further action |
The Pending Review status applies to RSS-triggered A/B tests. Manual A/B tests start directly in Draft status.
Starting a Test
- Open a Draft test
- Review settings and variants
- Click Start Test
- Confirm the action
What happens:
- Subscribers are split based on the test percentage (see the sketch after this list)
- Each group receives a different variant
- Clicks are tracked per variant
- Status changes to Testing
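As a rough sketch of the split, the test group size can be derived from the test percentage like this. An even division across variants is assumed; this is an illustration, not the actual OCM Pulse logic.

```typescript
// Rough sketch: derive test group sizes from the test percentage.
// Assumes an even split across variants; illustration only.
function splitForTest(totalSubscribers: number, testPercentage: number, variantCount: number) {
  const testGroupSize = Math.floor(totalSubscribers * (testPercentage / 100));
  const perVariant = Math.floor(testGroupSize / variantCount);
  const remaining = totalSubscribers - perVariant * variantCount;
  return { perVariant, remaining };
}

// Example: 10,000 subscribers, 20% test, 2 variants
// -> 1,000 subscribers per variant, 8,000 held back for the winner rollout.
console.log(splitForTest(10_000, 20, 2));
```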
Viewing Results
Test Details Page
Shows:
- Test status and settings
- For RSS tests: source article
- Subscriber counts (test vs. remaining)
- Variant performance cards
Variant Performance
For each variant:
| Metric | Description |
|---|---|
| Subscribers | Number who received it |
| Clicks | Total clicks |
| Click Rate | Percentage clicked |
| Progress Bar | Visual comparison |
Progress bars help visualize relative performance.
Selecting a Winner
Automatic Selection
If Auto-Select Winner is enabled:
- System waits for configured delay
- Compares click rates
- Selects the highest performer (see the sketch after this list)
- Rolls out to remaining subscribers
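A minimal sketch of such a selection, assuming the winner is simply the variant with the highest click rate. This is an illustration, not the actual OCM Pulse implementation.

```typescript
// Minimal sketch: pick the variant with the highest click rate.
interface VariantStats {
  label: string;
  delivered: number;
  clicks: number;
}

function pickWinner(variants: VariantStats[]): VariantStats {
  return variants.reduce((best, current) =>
    current.clicks / current.delivered > best.clicks / best.delivered ? current : best
  );
}

const winner = pickWinner([
  { label: "A", delivered: 1000, clicks: 42 },
  { label: "B", delivered: 1000, clicks: 58 },
]);
// winner.label === "B" (5.8% vs. 4.2% click rate)
```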
Manual Selection
To select manually:
- Open the test details
- Review variant performance
- Click Select Winner on your choice
- Confirm the selection
What happens:
- Test status becomes Completed
- Winning variant is sent to remaining subscribers
- Full performance data is recorded
Cancelling a Test
You can cancel tests in:
- Draft status (before starting)
- Testing status (during test)
To cancel:
- Open the test
- Click Cancel Test
- Confirm
Cancelled tests:
- Stop any further notifications
- Don't roll out to remaining subscribers
- Preserve collected data
Deleting a Test
Only Draft tests can be deleted:
- Open the draft test
- Click Delete
- Confirm deletion
Understanding Results
Click Rate
The primary success metric:
Click Rate = (Clicks / Delivered) × 100%
Higher click rate = better performing variant.
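For example, if a variant was delivered to 1,000 subscribers and 80 of them clicked, its click rate is (80 / 1,000) × 100% = 8%.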
Statistical Significance
For reliable results:
- Have enough subscribers in the test
- Wait for sufficient time
- Compare meaningful differences
Rules of thumb (a more formal check is sketched after this list):
- 100+ subscribers per variant minimum
- A difference of 1-2 percentage points may not be significant
- Wait at least a few hours
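If you want a more formal check than these rules of thumb, a two-proportion z-test is one common approach. The sketch below is a simplified illustration you can run yourself on the numbers from the variant performance cards.

```typescript
// Simplified two-proportion z-test comparing two variants' click rates.
// Illustration only; plug in the delivered/click counts from the test results.
function zScore(clicksA: number, deliveredA: number, clicksB: number, deliveredB: number): number {
  const rateA = clicksA / deliveredA;
  const rateB = clicksB / deliveredB;
  const pooled = (clicksA + clicksB) / (deliveredA + deliveredB);
  const standardError = Math.sqrt(pooled * (1 - pooled) * (1 / deliveredA + 1 / deliveredB));
  return (rateA - rateB) / standardError;
}

// |z| > 1.96 roughly corresponds to 95% confidence that the difference is real.
const z = zScore(80, 1000, 55, 1000); // 8.0% vs. 5.5% click rate
console.log(Math.abs(z) > 1.96 ? "Likely significant" : "Not significant yet");
```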
What to Test
Title variations:
- Different wording
- Questions vs. statements
- Urgency vs. curiosity
Body variations:
- Short vs. detailed
- Benefits vs. features
- CTA phrasing
URL variations:
- Different landing pages
- With/without UTM parameters (example below)
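For example, the same landing page with illustrative UTM parameters added (the parameter values here are placeholders):

`https://example.com/sale?utm_source=push&utm_medium=notification&utm_campaign=spring_sale&utm_content=variant_a`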
Best Practices
Test Design
- Change one element at a time
- Make differences meaningful
- Have a hypothesis
Test Percentage
| Audience Size | Recommended % |
|---|---|
| < 1,000 | 30-50% |
| 1,000 - 10,000 | 20-30% |
| > 10,000 | 10-20% |
Timing
- Allow enough time for results
- Don't select too early
- Consider time-of-day effects
Learning
- Document what you learn
- Apply insights to future notifications
- Build a testing cadence
A/B Test vs. Segment
| Use A/B Test When... | Use Segment When... |
|---|---|
| Testing content variants | Targeting specific audiences |
| Optimizing performance | Sending different content |
| Unsure what works best | Know what audience needs |
| Want data-driven decisions | Have clear audience criteria |
Example Test Scenarios
Testing Title Approaches
Variant A: "New Feature: Dark Mode Now Available"
Variant B: "Your Eyes Will Thank You - Try Dark Mode"
Variant C: "Did You Know? We Now Have Dark Mode"
Goal: Find which approach (direct, benefit-focused, or curiosity) performs best.
Testing Urgency
Variant A: "Sale Ends Tonight"
Variant B: "Limited Time: 50% Off"
Goal: Compare time urgency vs. offer urgency.
Testing Length
Variant A:
- Title: "New Post"
- Body: "Check out our latest article on productivity tips"
Variant B:
- Title: "5 Productivity Tips You Haven't Tried"
- Body: "Number 3 will surprise you"
Goal: Compare informative vs. teaser style.
Next Steps
- Create Your First A/B Test
- Set Up RSS A/B Testing
- Create Segments for targeted tests