More data almost always yields better machine-learning results, and the healthcare sector is sitting on a data goldmine. McKinsey estimates that big data and machine learning in pharma and medicine could generate up to $100B in value annually, through better decision-making, optimized innovation, more efficient research and clinical trials, and new tools for physicians, consumers, insurers, and regulators. But what obstacles must be addressed when applying ML technologies to pharma and medicine? Let's find out.
One of the most pressing issues at present is data governance. Medical data is personal and not easy to access, so it is reasonable to assume that much of the public is wary of releasing it because of privacy concerns.
Meeting the stringent regulations on drug development calls for more transparent algorithms. People need to be able to see inside the black box and understand the causal reasoning behind a machine's conclusions.
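To make the idea of a transparent algorithm concrete, here is a minimal sketch of a risk scorer whose per-feature contributions are fully inspectable, so a reviewer can trace exactly why it produced a given output. The feature names and weights are hypothetical, chosen purely for illustration.

```python
# A "glass box" linear scorer: every term in the decision is visible.
# Feature names and weights below are illustrative assumptions only.
WEIGHTS = {"age": 0.03, "systolic_bp": 0.02, "smoker": 0.8}

def explain_score(patient: dict) -> tuple[float, dict]:
    """Return the total risk score and each feature's contribution."""
    contributions = {
        feature: weight * patient.get(feature, 0.0)
        for feature, weight in WEIGHTS.items()
    }
    return sum(contributions.values()), contributions

score, parts = explain_score({"age": 60, "systolic_bp": 140, "smoker": 1})
for feature, value in parts.items():
    print(f"{feature}: {value:+.2f}")  # each term of the conclusion, itemized
print(f"total: {score:.2f}")
```

Unlike a deep network, this kind of model can be audited line by line, which is the property regulators are asking for; the trade-off, of course, is expressive power.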
In the pharmaceutical industry, the two major necessities are building a robust skills pipeline and recruiting data science talent.
Breaking down "data silos" and encouraging a "data-centric view", in which companies see the value of sharing and integrating data across sectors, is of paramount importance in shifting the industry's mindset toward embracing incremental changes over the long term. Historically, pharmaceutical companies have been hesitant to make changes or support research initiatives unless there is immediate and significant monetary value.
Electronic records, which at present are messy and fragmented across databases, need to be streamlined; this will be an essential first step in ramping up personalized treatment solutions.
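As a rough illustration of what "streamlining fragmented records" means in practice, the sketch below consolidates record fragments from multiple systems into one record per patient. The field names and the merge rule (later sources fill gaps but never overwrite earlier values) are illustrative assumptions, not a real EHR schema.

```python
# Consolidate patient record fragments scattered across databases.
# Schema and merge policy are hypothetical, for illustration only.
def merge_records(*sources: list) -> dict:
    """Merge fragments keyed by patient_id; first non-empty value wins."""
    merged: dict = {}
    for source in sources:
        for fragment in source:
            pid = fragment["patient_id"]
            record = merged.setdefault(pid, {})
            for field, value in fragment.items():
                # Fill gaps only; never overwrite an earlier source's value.
                if value not in (None, "") and field not in record:
                    record[field] = value
    return merged

lab_db = [{"patient_id": "p1", "hba1c": 6.9}]
clinic_db = [{"patient_id": "p1", "dob": "1980-04-02", "hba1c": None}]
unified = merge_records(lab_db, clinic_db)
# unified["p1"] now combines both fragments into a single record
```

Real-world record linkage is far harder (conflicting values, fuzzy identity matching, consent constraints), but even this toy version shows why a unified view is the prerequisite for personalized treatment models.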