After announcing ResearchKit 2.0 last week, Apple has shared more specifics on what the latest version will offer in iOS 12. Features include new speech recognition and speech in noise tasks, a new PDF viewer, a refreshed UI, and more.
As detailed in a blog post, the first change many users will notice is an overhauled user interface.
ResearchKit 2.0 has a whole new look and feel! The UI has been updated across the entire framework to closely reflect the latest iOS style guidelines.
The ResearchKit team notes that all of the updates are geared toward improving the user experience and making it more seamless.
Footers now stick to the bottom of all views with filled button styles, and the ‘cancel’ and ‘skip’ buttons have been relocated beneath the continue button for easier navigation. Additionally, a new card view enhances the look of forms and surveys.
In addition to the UI update, there are a good number of new features and tasks; here’s the full changelog:
- PDF Viewer: A step that enables users to quickly navigate, annotate, search and share PDF documents.
- Speech Recognition: A task that asks participants to describe an image or repeat a block of text, then transcribes their speech into text and allows editing if necessary.
- Speech in Noise: A task spanning speech and hearing health that lets developers and researchers assess participants’ speech reception thresholds: participants listen to a recording of a phrase mixed with ambient background noise, then repeat the phrase back.
- dBHL Tone Audiometry: A task that uses the Hughson-Westlake method to determine a user’s hearing threshold level on the dB HL scale. To facilitate this task, calibration data for AirPods has also been open-sourced.
- Environmental SPL Meter: A task that lets developers record users’ current background noise levels during active tasks and set thresholds to ensure users are in a suitable environment before completing other tasks.
- Amsler Grid: A task that instructs participants to hold the phone at a set distance from their face and to close one eye at a time. As participants progress through the instructions, a grid is displayed, and they mark any areas of the grid where they see distortion.