Healthcare AI has a transparency problem
This problem has prevented contemporary prediction products from being truly actionable, because they increasingly rely on black-box techniques. Two regulatory changes in the healthcare industry create a pressing need for algorithmic innovation: first, the transition from fee-for-service to value-based care, and second, the data-interoperability requirements specified in the 21st Century Cures Act. Providers and payers need trustworthy, flexible analytical tools to navigate value-based care, where quality monitoring, decision support, and patient engagement will grow in importance. Healthcare decision-making is high-stakes, and it demands interpretable algorithms and models. Moreover, the coming breakdown of the silos between disparate data sources motivates our new, multi-modal paradigm for healthcare analytics.
The Value Proposition
We are focused on addressing a glaring problem in big-data applications within healthcare. Methodology that has succeeded in other domains has transferred poorly, yielding algorithms that offer at most minor performance advantages over classical methods. Worse, these newer methods produce black boxes: algorithms with no intrinsic interpretability. Deep learning and ensemble tree methods are prominent examples. Some vendors couple these black-box models with misleading “explanations” that are in fact post-hoc approximations, not true to the inner workings of the models running in production. The machine learning research community recognizes that such methods can guide unjustified, imprecise, or harmful interventions, and because of their opacity one can never rule out hidden algorithmic biases. We intend to build a healthcare insight pipeline that addresses these issues. In addition, our methodology places patient privacy and safety at the forefront.
We are pioneering and productizing a new class of statistical models that capture some of the capabilities of contemporary black-box methods within well-structured Bayesian modeling frameworks. In doing so, we retain the interpretability of classical frameworks while gaining the expressiveness of black-box methods. Beyond interpretability, our platform offers: 1) economical training, 2) uncertainty estimation, 3) explicit data-privacy guarantees beyond differential privacy, 4) multi-modal data use-cases, including the ability to handle missing and incomplete data, 5) continuous model expansion for learning from new data as it becomes available, and 6) transferability of information between models trained on different datasets. The key feature of our framework is its hierarchical structure and nested granularity, which adapts statistically to the level of data coverage. Together, these attributes allow the same analytics to empower diverse, inter-related use-cases for different types of end users, each with different types of data available. We will use this capability to improve healthcare quality by actively pre-empting medical error and by helping to guide providers and payers through the migration to value-based care.
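The way a hierarchical model "adapts statistically to the level of data coverage" can be sketched with a toy example of partial pooling. The function and hospital names below are illustrative assumptions rather than our production API, and the sketch uses simple precision-weighted shrinkage: a group's estimate is pulled toward the global mean in proportion to how little data that group contributes.

```python
import numpy as np

def partial_pool(group_values, prior_var=1.0):
    """Shrink each group's mean toward the global mean.

    Sparsely observed groups are pulled strongly toward the
    global estimate; well-covered groups largely keep their
    own mean. Names and signature are illustrative only.
    """
    all_vals = np.concatenate([np.asarray(v, dtype=float)
                               for v in group_values.values()])
    global_mean = all_vals.mean()
    noise_var = all_vals.var() if all_vals.size > 1 else 1.0
    pooled = {}
    for group, vals in group_values.items():
        n = len(vals)
        # Precision weighting: the weight on the group's own
        # mean grows with its sample size n.
        w = n * prior_var / (n * prior_var + noise_var)
        pooled[group] = w * np.mean(vals) + (1.0 - w) * global_mean
    return pooled

# Example: a well-covered hospital vs. a sparsely observed one.
data = {
    "hospital_A": [0.9, 1.1, 1.0, 0.95, 1.05] * 20,  # 100 observations
    "hospital_B": [3.0],                              # 1 observation
}
estimates = partial_pool(data)
```

With this data, `hospital_A` keeps an estimate near its own mean of 1.0, while `hospital_B`'s single outlying observation is shrunk partway toward the global mean, which is the qualitative behavior we rely on at every level of the hierarchy.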