Conversation

@Iansabia

Summary

  • Adds 10 Maestro E2E test flows covering all core app functionality: onboarding/sign-in, conversations (list, detail, CRUD), memories, chat, apps/plugins, settings, device connection, and recording
  • Flows are tagged core (run on the simulator) vs device_required (need physical Omi hardware); see the CLI sketch after this list
  • Includes runner scripts (run_all.sh, run_device.sh) with pass/fail reporting and optional HTML output
  • Integrates into existing test.sh via --e2e flag
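
The tag split maps directly onto the Maestro CLI. A minimal sketch, assuming the flows live under app/.maestro and declare the core / device_required tags in their headers:

```bash
# Core flows only -- safe to run on a simulator.
maestro test --include-tags=core app/.maestro

# Hardware flows only -- expects a paired physical Omi device.
maestro test --include-tags=device_required app/.maestro
```

The runner scripts wrap the same selection with per-flow pass/fail reporting.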

How It Works

  1. Install Maestro: brew install maestro
  2. Build and install the app on a simulator or device
  3. Run core tests: bash app/.maestro/scripts/run_all.sh
  4. Run device tests (with Omi connected): bash app/.maestro/scripts/run_device.sh
  5. Or use bash app/test.sh --e2e to run unit + widget + E2E tests together

After ~1 hour you get a full report covering sign-in, conversation recording/transcription, CRUD operations, chat, and app management.
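
For orientation, here is a hypothetical sketch of what a sequential core-flow runner with a pass/fail summary could look like; the actual run_all.sh in app/.maestro/scripts may be structured differently (and additionally supports HTML output):

```bash
#!/usr/bin/env bash
# Hypothetical sketch only -- not the shipped run_all.sh.
set -uo pipefail

FLOW_DIR="$(cd "$(dirname "$0")/.." && pwd)"   # assumes flows sit one level above scripts/
pass=0; fail=0; failed=()

for flow in "$FLOW_DIR"/0{1..8}_*.yaml; do
  name="$(basename "$flow")"
  echo "=== $name ==="
  if maestro test "$flow"; then
    pass=$((pass + 1))
  else
    fail=$((fail + 1)); failed+=("$name")
  fi
done

echo "Passed: $pass  Failed: $fail"
((fail == 0)) || { printf 'Failed: %s\n' "${failed[@]}"; exit 1; }
```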

Test Flows

| Flow | What It Tests | Device Required |
| --- | --- | --- |
| 01_onboarding | Sign-in, name entry, language, permissions | No |
| 02_conversations_list | List rendering, scrolling | No |
| 03_conversation_detail | Opening conversation, transcript view | No |
| 04_conversation_crud | Create, update, delete conversations | No |
| 05_memories | Memory list, creation, interaction | No |
| 06_chat | Chat input, AI responses | No |
| 07_apps | App store, plugin install/manage | No |
| 08_settings | Settings navigation, preferences | No |
| 09_device_connection | BLE scan, pair, connect status | Yes |
| 10_recording | Record, transcribe, verify conversation | Yes |

Test plan

  • Install Maestro CLI
  • Build app with flutter build ios --flavor dev --simulator
  • Run bash app/.maestro/scripts/run_all.sh on the simulator (the combined sequence is sketched after this list)
  • Verify all 8 core flows pass
  • Run bash app/.maestro/scripts/run_device.sh with Omi connected
  • Verify recording + device flows pass
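
Roughly, the simulator half of this plan chains together as follows; the simulator name and built .app path are assumptions, so adjust them to what the flavored build actually produces locally:

```bash
brew install maestro

cd app
flutter build ios --flavor dev --simulator

# Assumed simulator name and output path -- verify locally.
xcrun simctl boot "iPhone 15" || true
xcrun simctl install booted build/ios/iphonesimulator/Runner.app

bash .maestro/scripts/run_all.sh
```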

Closes #3857

Commits

  • Configures Maestro for automated functional testing with core flows and device-required flow separation via tags.
  • Tests app launch, sign-in, name entry, language selection, permissions, speech profile skip, and home screen landing.
  • Tests conversation list rendering, scrolling, and list item visibility.
  • Tests opening a conversation, viewing transcript, and detail screen elements.
  • Tests creating, updating, and deleting conversations through the UI.
  • Tests memory list display, creation, and interaction.
  • Tests chat input, message sending, and AI response display.
  • Tests app store browsing, plugin installation, and management.
  • Tests settings screen navigation, preference toggles, and profile access.
  • Tests BLE device scanning, pairing, and connection status. Requires physical Omi device (tagged device_required).
  • Tests recording start, transcription indicator, and conversation creation from captured audio. Requires physical Omi device.
  • Runs all core E2E flows sequentially with pass/fail summary and optional HTML report generation.
  • Runs Maestro flows that require a physical Omi device connected.
  • Adds --e2e flag to run Maestro functional tests alongside existing unit/widget tests (sketched below).
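
As a rough illustration of the last commit above, the --e2e hook in test.sh amounts to something like this (hypothetical sketch; the real flag handling and test commands may differ):

```bash
RUN_E2E=false
for arg in "$@"; do
  [[ "$arg" == "--e2e" ]] && RUN_E2E=true
done

flutter test                                    # existing unit/widget tests

if [[ "$RUN_E2E" == true ]]; then
  bash "$(dirname "$0")/.maestro/scripts/run_all.sh"
fi
```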

@gemini-code-assist (bot) left a comment

Code Review

This pull request introduces a comprehensive suite of Maestro E2E tests, which is a great addition for ensuring app quality. The tests cover core functionality and are well-structured with tags for simulator vs. device-specific flows. My review focuses on improving the maintainability and robustness of the new test scripts. I've identified a few areas with code duplication in both the YAML flow definitions and the shell runner scripts. Addressing these will make the test suite more resilient and easier to manage as the app evolves.
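
On the duplication point for the shell runners, one option (a sketch, not a prescription) is to collapse run_all.sh and run_device.sh into thin wrappers around a single tag-parameterized runner:

```bash
# run_tagged.sh -- hypothetical shared runner; per-flow pass/fail accounting
# from the existing scripts would live here instead of being duplicated.
#   run_all.sh:    exec "$(dirname "$0")/run_tagged.sh" core
#   run_device.sh: exec "$(dirname "$0")/run_tagged.sh" device_required
TAG="${1:?usage: run_tagged.sh <tag>}"
maestro test --include-tags="$TAG" "$(dirname "$0")/.."
```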

