
Testing Guide

Automated E2E testing infrastructure using Maestro for the Sanctiv React Native app.

Overview

Maestro enables AI agents and developers to verify React Native code works correctly on iOS/Android simulators before committing, reducing blind fixes and improving code quality.
Why Maestro:
  • YAML tests (agent-friendly syntax)
  • Screenshot capture on every step
  • Works in CI/CD (GitHub Actions)
  • Free local testing, pay only for cloud
  • Used by: Microsoft, Meta, Uber, Disney, Stripe, Bluesky

Quick Start

Prerequisites

  • macOS with Xcode 15+ (for iOS testing)
  • iOS Simulator - verify it opens with open -a Simulator
  • Bun installed - Run bun --version to verify
  • EAS CLI - Run eas --version to verify

Installation

# Install Maestro CLI
curl -Ls "https://get.maestro.mobile.dev" | bash

# Verify installation
maestro --version

# Add to PATH (add to ~/.zshrc or ~/.bashrc for persistence)
export PATH="$HOME/.maestro/bin:$PATH"

Running Tests

# Start Expo dev server in one terminal
bun start

# In another terminal, run smoke test
bun run test:e2e:smoke

# Run all tests
bun run test:e2e

# iOS only
bun run test:e2e:ios

# View test results and screenshots
ls ~/.maestro/tests/

Test Structure

.maestro/
├── config.yaml          # Maestro configuration
└── flows/               # Test flows directory
    ├── smoke.yaml       # Basic app launch test
    ├── onboarding.yaml  # Welcome screen + auto-navigation
    └── journal.yaml     # Journal functionality + tab navigation
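
The config.yaml above is Maestro's workspace configuration. A minimal sketch, assuming the standard workspace keys (flows, includeTags, excludeTags); check the actual file for the project's real settings, and note the wip tag is only an example:

# .maestro/config.yaml (illustrative sketch)
flows:
  - "flows/*"        # which flow files belong to this workspace
includeTags:
  - critical         # run only flows tagged critical by default
excludeTags:
  - wip              # skip flows tagged as work-in-progress (example tag)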

⚠️ Pre-Commit Validation (REQUIRED)

ALWAYS run these commands before committing any dependency changes, fixes, or features:
# 1. Validate Expo SDK compatibility (CRITICAL)
npx expo-doctor

# 2. Type checking
bun run typecheck

# 3. Linting
bun run lint

# 4. Smoke test (if app is testable)
bun run test:e2e:smoke
Why expo doctor is critical:
  • Detects version mismatches between Expo SDK and dependencies
  • Prevents CI build failures from incompatible package versions
  • Catches issues like @expo/metro-runtime version mismatches
  • Must show 0 critical errors before pushing
Example output (good):
✓ 15/17 checks passed
⚠ 2 checks failed (warnings only - acceptable)
Example output (bad - DO NOT COMMIT):
✖ Check that packages match versions required by installed Expo SDK
  @expo/metro-runtime@<installed version> - expected version: ~5.0.5

Test Flows

smoke.yaml - Verifies app launches successfully
  • Tags: smoke, critical
  • Validates: App launches, welcome screen appears
onboarding.yaml - Verifies welcome screen and auto-navigation
  • Tags: onboarding, critical
  • Validates: Welcome features, loading state, auto-navigation to main app
journal.yaml - Tests journal functionality and navigation
  • Tags: journal, critical
  • Validates: Journal tab, mock data loading, tab navigation (Insights, Goals, Progress)
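
For reference, a minimal smoke flow along these lines might look like the sketch below (the actual smoke.yaml may assert different text):

# .maestro/flows/smoke.yaml (illustrative sketch)
appId: com.sanctiv.app
tags:
  - smoke
  - critical
---
- launchApp
- assertVisible: "Welcome*"          # wildcard match on the welcome screen heading
- takeScreenshot: smoke-01-welcome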

Writing Tests

Basic Commands

# Launch app
- launchApp

# Tap element with text
- tapOn: "Button Text"

# Assert element is visible
- assertVisible: "Expected Text"

# Input text in focused field
- inputText: "text to enter"

# Capture screenshot
- takeScreenshot: descriptive-name

# Wait for animations
- waitForAnimationToEnd:
    timeout: 5000

# Scroll view
- scroll

# Swipe gesture
- swipe
Full API: Maestro Commands Reference
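
Putting a few of these commands together, a typical step sequence looks like the sketch below (the "Get Started" label is hypothetical):

- launchApp
- waitForAnimationToEnd:
    timeout: 5000
- assertVisible: "Welcome*"          # wildcard match survives small copy changes
- tapOn: "Get Started"               # hypothetical button label
- takeScreenshot: onboarding-01-start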

Creating New Tests

  1. Create new YAML file in .maestro/flows/
  2. Follow existing test structure:
# Test description
appId: com.sanctiv.app
tags:
  - feature-name
  - critical
---
- launchApp
- assertVisible: "Expected Element"
- takeScreenshot: test-step-name
  3. Test locally before committing:
bun run test:e2e
  4. Add screenshots at key steps for debugging
  5. Use descriptive names and tags

Test Best Practices

Element Matching:
  • Use wildcards for partial matches: "Welcome*"
  • Check for typos in text assertions
  • Use maestro studio to inspect app elements
Waits:
  • Add explicit waits before assertions
  • Increase timeout in config.yaml for slow operations
  • Account for animations that delay rendering
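For slow screens, Maestro's extendedWaitUntil gives an explicit, bounded wait before the assertion, for example:

# Wait up to 10s for the Journal tab to render before asserting on it
- extendedWaitUntil:
    visible: "Journal"
    timeout: 10000
- assertVisible: "Journal"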
Screenshots:
  • Capture at every major step
  • Use descriptive names: journal-01-home, journal-02-create
  • Screenshots saved to ~/.maestro/tests/
  • Note: Add screenshots to PR comments for documentation, not to version control

Testing Requirements for AI Agents

Before Committing Code

AI agents MUST run relevant E2E tests before committing fixes:
  1. Identify affected flows (onboarding, journal, etc.)
  2. Run tests: bun run test:e2e:<flow-name>
  3. If tests fail:
    • Check screenshots in ~/.maestro/tests/
    • Analyze failure reasons
    • Fix issues
    • Re-run tests
  4. Only commit when tests pass
  5. Include test results in commit message

Writing New Tests for Features

When adding features, create corresponding E2E tests:
  1. Create YAML file in .maestro/flows/
  2. Test critical user paths
  3. Add screenshots at key steps
  4. Tag appropriately (smoke, critical, etc.)
  5. Verify tests pass locally

Test-Driven Development for Bug Fixes

  1. Write failing test that reproduces bug
  2. Fix the bug
  3. Verify test now passes
  4. Commit both fix and test
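
A regression flow for step 1 can stay small. A hypothetical sketch (file name, tag, and asserted text are examples, not real project flows):

# .maestro/flows/regression-journal-empty-state.yaml (hypothetical)
appId: com.sanctiv.app
tags:
  - journal
  - regression
---
- launchApp
- tapOn: "Journal"
- assertVisible: "No entries yet*"   # fails while the bug exists, passes after the fix
- takeScreenshot: journal-empty-state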

CI/CD Integration

Overview: Two-Tier E2E Testing

We use a two-tier E2E testing strategy optimized for cost and confidence:
┌─────────────────────────────────────────────────────────────────────────────┐
│                         E2E TESTING PYRAMID                                  │
├─────────────────────────────────────────────────────────────────────────────┤
│                                                                              │
│                              ▲                                               │
│                             /│\      TIER 2: Real Device E2E                │
│                            / │ \     • Maestro Cloud (GitHub Actions)       │
│                           /  │  \    • Label: run-e2e-real                  │
│                          /   │   \   • Cost: $$ (EAS Build + Cloud)         │
│                         /────┼────\  • Time: 15-20 min                      │
│                        /     │     \                                         │
│                       /      │      \   TIER 1: Simulator E2E               │
│                      /       │       \  • EAS Workflows                     │
│                     /        │        \ • Auto on "Ready for Review"        │
│                    /─────────┼─────────\• Cost: $ (uses cached builds)      │
│                   /          │          \• Time: 10-15 min                  │
│                                                                              │
│   Use Tier 1 for rapid feedback. Use Tier 2 for high-confidence releases.   │
│                                                                              │
└─────────────────────────────────────────────────────────────────────────────┘
| Tier | Environment | Workflow Location | Label | Auto-Trigger |
|---|---|---|---|---|
| Tier 1 | iOS Simulator | .eas/workflows/2-pr-e2e.yml | run-e2e | ✅ On "Ready for Review" |
| Tier 2 | Real iPhones | .github/workflows/maestro-cloud.yml | run-e2e-real | ❌ Manual only |

When to Use Each Tier

| Scenario | Tier 1 (Simulator) | Tier 2 (Real Device) |
|---|---|---|
| Normal PR development | ✅ Auto | ❌ Skip |
| Complex UI changes | ✅ Auto | ⚠️ Consider |
| Device-specific features | ✅ Auto | ✅ Required |
| Pre-production release | ✅ Auto | ✅ Required |
| Hotfix | ✅ Auto | ⚠️ Optional |

Trigger Strategy

Tier 1: Simulator E2E (EAS Workflows) - Uses A+B+D pattern:
| Event | E2E Runs? | Why |
|---|---|---|
| Create draft PR | ❌ | Save resources during development |
| Push to draft | ❌ | Still drafting |
| Mark ready for review | ✅ | Ready for validation |
| Push (no label) | ❌ | Single validation is enough |
| Add run-e2e label | ✅ | Enable continuous mode |
| Push (with label) | ✅ | Continuous testing active |
Tier 2: Real Device E2E (Maestro Cloud) - Explicit opt-in only:
| Event | E2E Runs? | Why |
|---|---|---|
| Create/push/ready | ❌ | Expensive, not for every PR |
| Add run-e2e-real label | ✅ | Explicit opt-in for real devices |
| Push (with label) | ✅ | Continuous real device testing |

Workflow 1: EAS Workflows (Simulator)

File: .eas/workflows/e2e-ios.yml
Best for: Default E2E testing, fast feedback
# Uses Expo's native Maestro integration
# Builds with EAS (e2e-test profile)
# Tests on iOS Simulator
# Cost: $0.05/job + CI/CD minutes
# Time: ~10-15 minutes
Pros: Simple config, Expo-native, fast
Cons: Simulator-only
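
As a rough orientation, an EAS Workflows E2E definition typically pairs a simulator build job with a maestro job. The sketch below is an assumption based on the EAS Workflows documentation; verify job types and parameter names against the real .eas/workflows/e2e-ios.yml:

# Illustrative sketch - verify job types/params against the EAS Workflows docs
name: e2e-ios
jobs:
  build_simulator:
    type: build
    params:
      platform: ios
      profile: e2e-test
  maestro_e2e:
    needs: [build_simulator]
    type: maestro
    params:
      build_id: ${{ needs.build_simulator.outputs.build_id }}
      flow_path: [".maestro/flows"]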

Workflow 2: Maestro Cloud (Real Devices)

File: .github/workflows/maestro-cloud.yml
Best for: Production confidence, real device validation
# Builds with EAS (preview profile)
# Tests on real iPhones via Maestro Cloud
# Cost: EAS Build + Maestro Cloud
# Time: ~15-20 minutes
Pros: Real devices, video recordings, catches device-specific bugs
Cons: Requires Maestro Cloud account
Required secrets:
  • EXPO_TOKEN - EAS Build authentication
  • MAESTRO_CLOUD_API_KEY - Maestro Cloud API key
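
On the GitHub Actions side, the job is gated on the run-e2e-real label and uploads the EAS build to Maestro Cloud. A trimmed sketch, assuming the mobile-dev-inc/action-maestro-cloud action and a hypothetical artifact path (the EAS build/download steps are elided):

name: maestro-cloud
on:
  pull_request:
    types: [labeled, synchronize]
jobs:
  e2e-real-device:
    # Only run when the PR carries the run-e2e-real label
    if: contains(github.event.pull_request.labels.*.name, 'run-e2e-real')
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # ...build with EAS (preview profile) using EXPO_TOKEN and download the .ipa here...
      - name: Run flows on Maestro Cloud
        uses: mobile-dev-inc/action-maestro-cloud@v1
        with:
          api-key: ${{ secrets.MAESTRO_CLOUD_API_KEY }}
          app-file: build/sanctiv.ipa   # hypothetical path to the downloaded build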

Workflow 3: OTA Preview Updates

File: .eas/workflows/preview-update.yml
Purpose: Instant JS updates for PR testing
# Publishes OTA update to preview channel
# Anyone with Preview build sees changes instantly
# Cost: Free (included in plan)
# Time: ~2-3 minutes

Agent PR Workflow

For AI agents (Cursor, Claude Code, etc.):
# 1. Create draft PR
gh pr create --draft --title "feat: ..." --body "..."

# 2. Work and push commits (no E2E runs - safe to iterate)
git push

# 3. When ready, mark for review (Tier 1 simulator E2E runs automatically)
gh pr ready

# 4. Check results in Expo Dashboard or GitHub

# 5. If fixes needed + want continuous simulator E2E:
gh pr edit --add-label "run-e2e"

# 6. Before production release, add real device testing:
gh pr edit --add-label "run-e2e-real"

# 7. When done with continuous mode:
gh pr edit --remove-label "run-e2e"
gh pr edit --remove-label "run-e2e-real"
Label Reference:
| Label | Tier | What It Does |
|---|---|---|
| run-e2e | 1 | Re-run simulator tests (cheap, fast) |
| run-e2e-real | 2 | Run real device tests (expensive, thorough) |
| build-dev | – | Create development build for hot reload |

EAS Build Profiles

The workflows use these eas.json profiles:
| Profile | Purpose | Output |
|---|---|---|
| e2e-test | EAS Workflows Maestro | .app (simulator) |
| preview | Maestro Cloud + OTA | .ipa (signed) |
| development | Hot reload on device | .ipa (dev client) |
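
In eas.json these profiles roughly correspond to the sketch below; the real file likely defines more fields, and "simulator": true is what yields a .app instead of a signed .ipa:

{
  "build": {
    "e2e-test": {
      "ios": { "simulator": true }
    },
    "preview": {
      "distribution": "internal"
    },
    "development": {
      "developmentClient": true,
      "distribution": "internal"
    }
  }
}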

Authentication in Tests

For tests requiring authenticated users (journal, profile, etc.), use dedicated test accounts.
Option 1: Test Account Credentials (Recommended)
# .maestro/flows/journal.yaml
appId: com.sanctiv.app
env:
  TEST_EMAIL: ${TEST_USER_EMAIL}
  TEST_PASSWORD: ${TEST_USER_PASSWORD}
---
- launchApp
- tapOn: "Sign In"
- tapOn: "Email"
- inputText: ${TEST_EMAIL}
- tapOn: "Password"
- inputText: ${TEST_PASSWORD}
- tapOn: "Log In"
- assertVisible: "Journal"
Set environment variables locally:
export TEST_USER_EMAIL="[email protected]"
export TEST_USER_PASSWORD="test-password-123"
Option 2: Deep Links for Auth Bypass
- launchApp:
    url: "sanctiv://auth?token=${TEST_AUTH_TOKEN}"
- assertVisible: "Journal"
Security Note: Never commit credentials to git. Use environment variables or CI/CD secrets.

Multi-Tenant Testing

Test org_id isolation (critical for Sanctiv’s multi-tenant architecture):
# Test Church 1 isolation
- launchApp
- tapOn: "Sign In"
- tapOn: "Email"
- inputText: "${CHURCH1_EMAIL}" # test-account email from an env var, like CHURCH1_PASSWORD below
- tapOn: "Password"
- inputText: "${CHURCH1_PASSWORD}"
- tapOn: "Sign In"
- assertVisible: "Church 1*"
- assertNotVisible: "Church 2" # Verify isolation
- takeScreenshot: church1-isolated
Related: SUPABASE.md - Multi-tenant patterns

Android Testing

While iOS-only testing is sufficient to start, here's how to add Android support.
Prerequisites:
  • Android Studio installed
  • Android Emulator configured
  • ANDROID_HOME environment variable set
Platform-Specific Tests:
# iOS-specific
appId: com.sanctiv.app
platform: ios
---
- launchApp
- assertVisible: "Back" # iOS back button

# Android-specific
appId: com.sanctiv.app
platform: android
---
- launchApp
- pressKey: back # Android back button
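
Instead of separate files, a single flow can also branch on platform with a runFlow condition, for example:

# Only press the hardware back button when running on Android
- runFlow:
    when:
      platform: Android
    commands:
      - pressKey: back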
Run Android tests:
# Start Android emulator first
emulator -avd Pixel_6_API_34

# Run tests
bun run test:e2e:android

Troubleshooting

Issue: “Maestro command not found”

Solution:
export PATH="$HOME/.maestro/bin:$PATH"
# Add to ~/.zshrc or ~/.bashrc for persistence

Issue: “App not found”

Solution:
  • Update appId in test files to match actual bundle ID (com.sanctiv.app)
  • Check app.json for correct ios.bundleIdentifier

Issue: “Element not found”

Solution:
  • Use maestro studio to inspect app and find correct element identifiers
  • Use wildcards: "Welcome*" instead of exact text
  • Add wait commands before assertions

Issue: Tests pass locally but fail in CI

Solution:
  • Check simulator/emulator versions match
  • Increase timeouts in CI environment
  • Verify all environment variables are set
  • Review GitHub Actions logs for specific errors

Issue: Build fails in CI

Symptom: App build fails during Maestro workflow
Solution:
  • Check Xcode setup: Ensure correct Xcode version is selected
  • Verify dependencies: Run bun install locally to ensure it works
  • Check build logs: Look for specific compilation errors
  • For EAS builds: Ensure EXPO_TOKEN has proper permissions (only needed for EAS builds, not Maestro testing)

Issue: Flaky Tests

Solution:
  • Add explicit waits before assertions
  • Increase timeout in config.yaml
  • Check for animations that might delay rendering
  • Verify mock data loads consistently

Maestro Studio (Local Development)

Setup Maestro Studio

1. Install Maestro CLI (if not already installed):
curl -Ls "https://get.maestro.mobile.dev" | bash
export PATH="$HOME/.maestro/bin:$PATH"
2. Install Maestro Studio Desktop (macOS):
brew tap mobile-dev-inc/tap
brew install maestro-studio
3. Launch Maestro Studio:
maestro studio
This opens Maestro Studio in your browser at http://localhost:8080

Running Tests Locally with Maestro Studio

1. Start your app in development mode:
# Terminal 1: Start Expo dev server
bun start

# Terminal 2: Run iOS simulator
bun run ios
2. In Maestro Studio:
  • Click “Create a new test” or open existing flow files
  • Use the inspector to find element selectors
  • Click “Run Locally” to execute tests on your simulator
  • View screenshots and test results in the Studio interface
3. Run tests from command line:
# Run all tests
maestro test .maestro/flows/

# Run specific flow
maestro test .maestro/flows/smoke.yaml

# Run with tags
maestro test --includeTags smoke .maestro/flows/
4. View test results:
# Screenshots and results are saved to:
ls ~/.maestro/tests/

# Open in Finder (macOS)
open ~/.maestro/tests/

Using Maestro Studio Inspector

The inspector helps you find correct element selectors:
  1. Launch Studio: maestro studio
  2. Connect device: Select your iOS Simulator from the device list
  3. Inspect elements: Click on UI elements to see their selectors
  4. Copy selectors: Use the copied selectors in your test flows

Troubleshooting Local Setup

Issue: Maestro Studio won't connect to simulator
Solution:
# Ensure simulator is booted
xcrun simctl list devices | grep Booted

# If not booted, start it:
open -a Simulator
Issue: Tests can't find app
Solution:
  • Verify appId in flow files matches your bundle ID (com.sanctiv.app)
  • Ensure app is installed: xcrun simctl listapps booted | grep sanctiv
Issue: Maestro Studio shows "No devices"
Solution:
  • Ensure iOS Simulator is running
  • Check Maestro can see devices: maestro devices
  • Restart Maestro Studio: maestro studio

Maestro Studio (Legacy)

Interactive tool for inspecting app elements and building tests visually. Launch:
maestro studio
Features:
  • Inspect element hierarchy
  • Test commands interactively
  • Record interactions
  • Generate test YAML

Cost Breakdown

Free Tier (Current Setup)

  • Local testing: FREE
  • Maestro CLI: FREE
  • Maestro Studio: FREE
  • GitHub Actions (public repos): FREE
Total: $0/month

Paid Tier (Maestro Cloud)

  • Maestro Cloud (iOS): $250/device/month
  • Maestro Cloud (Android): $250/device/month
Total: $250-500/month (only needed for cloud testing)
When to upgrade:
  • Need hosted iOS simulators
  • Want parallel test execution
  • Require device farm access

Additional Resources

Official Documentation

Case Studies

Community Resources


Last Updated: 2025-11-07 Related Issue: #51