Product Design | Engineering

Orcasound

Design and develop an application for listening to and exploring bioacoustic audio data captured from Salish Sea hydrophones.
Timeline
9 months (in progress)
Team
Scott, Product Owner / Marine Bioacoustician
Dave, Marine Ecologist
Paul, Lead Developer
Skander, Senior Developer

Context

Orcas in the Salish Sea are in dangerous decline, with scientists currently tracking fewer than 100 individuals. Human-generated marine noise is a major factor.

One of the best ways to monitor for the presence of orcas is through hydrophones, which can pick up their distinctive calls, clicks, and whistles from up to 5 miles away. Orcasound operates the only hydrophone network in the region that streams audio live to the public. Community scientists, aided by a Microsoft AI model, listen for whales and issue alerts.

Objectives

Research, strategize, and design an interactive web app experience to increase the number of users (awareness), improve the quality and accuracy of detections (science), and drive support toward conservation initiatives (action).

Research & Insights

I began by helping the team analyze and interpret actual user behavior in the app, which we compared against user interviews and other research methods. The quantitative results surprised many team members and shifted our collective understanding of the project.

User Type 1: Listeners

Through analytics and interviews, we determined as a group that listeners to the audio streams fall into three useful categories: active listeners (smallest group), affinity-group listeners (medium), and notification subscribers (largest).

User Type 2: Moderators

Behind the scenes, project scientists play an important role in verifying reports, identifying whale species and call types, and making go/no-go decisions about sending out alerts and notifications. None of this work was supported directly by the app.

Research Takeaways

From this process, we drew key conclusions that shaped how we approached the project:

  • Community members provide critical detection coverage
  • The audience for listening to orca vocalizations is large and enthusiastic
  • Listeners are hungry for more data, audio content, and ways to contribute

Project Definition

With user insights in place, I led the team in secondary research workshops to form our product strategy.

  • Affinity mapping workshop to define organizational goals
  • Persona workshops to prioritize user pain points, wants and needs
  • Shared vision across disciplines and stakeholders

We mapped the moderator user journey across many steps, from real-time detection and verification to longer-term data analysis, annotation, and communication, all of which the app can support.

With listeners, we characterized a group of people who are seeking to enrich their lives through interactions with orcas, and whose passion for these animals and their conservation is stoked through interacting with more audio, data, and scientific commentary.

Designs

With a firm direction and objectives, I began the design process by generating wireframes to capture necessary tasks and visualize opportunities presented by the available data.

Wireframing the moderator interface

  • Moderators are responsible 24/7 for sending real-time alerts, so mobile access is important.
  • A web-based spectrogram editor helps moderators find and annotate whale sounds, whose calls have distinctive visual shapes.
  • Moderators' main goals are to decide whether to send out a real-time alert, determine the precise start and end of the orca event, and provide keyword tags, commentary, and verification to the community that will consume the dataset.
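The moderator tasks above suggest a simple data shape for each annotation. The sketch below is purely illustrative, assuming hypothetical field names rather than Orcasound's actual schema:

```typescript
// Hypothetical shape for a moderator annotation on a hydrophone recording.
// Field and type names are illustrative, not Orcasound's actual API.
interface Annotation {
  recordingId: string;
  startSec: number;   // precise start of the orca event in the recording
  endSec: number;     // precise end of the orca event
  tags: string[];     // e.g. ["SRKW", "S1 call"]
  comment: string;    // moderator commentary for the community
  verified: boolean;  // moderator verification of the report
  sendAlert: boolean; // go/no-go decision on a real-time alert
}

// Only verified annotations marked for alerting (with a valid time
// range) should trigger a notification to subscribers.
function shouldNotify(a: Annotation): boolean {
  return a.verified && a.sendAlert && a.endSec > a.startSec;
}
```

Keeping the alert decision as an explicit flag, separate from verification, mirrors the go/no-go step in the moderator workflow.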

Wireframing the listener interface

  • 66% of listeners are on mobile, frequently arriving via social media channels.
  • Fully experiencing the available data requires a unique combination of interaction models, including maps, data insights, audio streaming, and user-generated content.
  • A flexible, scalable UI with modular components lets us build iteratively and accommodate a growing feature set we may not have imagined yet.

Testing

In testing the interactive wireframes, we heard positive feedback from participants, but we realized we needed real data to properly understand how well the concepts were landing. While creating true-data prototypes, we found ways to restructure the API that would make certain features easier to build, benefiting both listeners and moderators.

Navigation and layout

  • Desktop provides more screen real estate, while mobile relies on specific interaction patterns like bottom-drawer navigation
  • Solving for the ability to view both visual and audio data sources, real-time and historical

Data-driven content experience

  • As listeners learn to interpret the soundscape, they become more accurate detectors and can flag egregiously noisy vessels
  • Report verification, keyword tags, and moderator comments add a rich layer of metadata, making the historical record searchable and enabling playlists of specific species and pods
  • Recordings remain open to annotation, and users can pull out short clips to share
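The tag-based search described above can be sketched as a small filter over detection records. This is a hypothetical model, assuming an illustrative `Detection` shape rather than the app's real data structures:

```typescript
// Hypothetical record for a tagged detection in the historical archive.
// Names are illustrative only.
interface Detection {
  id: string;
  timestamp: string; // ISO 8601, so lexicographic order is chronological
  tags: string[];    // e.g. ["SRKW", "J pod", "S1 call"]
}

// Compose a playlist: detections matching every requested tag,
// newest first.
function composePlaylist(detections: Detection[], wanted: string[]): Detection[] {
  return detections
    .filter((d) => wanted.every((t) => d.tags.includes(t)))
    .sort((a, b) => b.timestamp.localeCompare(a.timestamp));
}
```

Requiring every tag (rather than any) is a design choice that lets users narrow a playlist down to a specific pod or call type.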

Outcomes

Designs are currently in development (April 2025).

  • Working alongside the dev team as front-end lead, building React components
  • Connecting with multiple APIs to enrich the data
  • Taking an agile approach to feature development